Ed Bennett
# Status of reproducibility and open science in hep-lat in 2021
###### Abstract
As a fully computational discipline, Lattice Field Theory has the potential to
give results that anyone with sufficient computational resources can
reproduce, going from input parameters to published numbers and plots correct
to the last byte. After briefly motivating and outlining some of the key steps
in making lattice computations reproducible, this contribution presents the
results of a survey of all 1,229 submissions to the hep-lat arXiv in 2021 of
how explicitly reproducible each is. Areas where LFT has historically been
well ahead of the curve are highlighted, as are areas where there are
opportunities to do more.
## 1 Introduction
There has been increasing pressure from many sides in recent years to embrace
the principle of open science, and to ensure that data analyses are
reproducible. As the software used to analyse data has become increasingly
complex, it has become less and less feasible to unambiguously explain in a
traditional journal publication exactly what steps were carried out, to the
point that another researcher could reproduce them. As enabling this is one of
the key aims of academic publication, this trend poses a clear problem, which
research funders, journals, and others are looking to address.
As a purely computational field, Lattice Field Theory (LFT) stands to be more
severely affected than most fields. However, LFT is also one of the very early
adopters of many principles of Open Science. The latter fact, specifically the
ubiquitous use of freely-available preprints for written publications, has
enabled the work presented here: an analysis of all 1,229 submissions to the
arXiv [1] preprint server in the hep-lat category in 2021, to assess where the
field is strong and where there remain opportunities for improvement.
## 2 Terminology
In this work, _reproducible_ is used as defined by the Turing Way project [2]:
that given the same data, another researcher can repeat the same analysis and
obtain the same results. This definition may seem trivial at first glance, as
it seems obvious that the same analysis on the same data should give the same
result; however, it is a prerequisite of the more challenging concepts of
_replicability_ (being able to apply the same analysis to fresh data and
obtain the same results), and _robustness_ (being able to apply different
analyses to the same data and obtain compatible conclusions).
Since the mapping from human languages to programming ones is not one-to-one,
for any non-trivial software, access to the code is an essential part of
reproducibility. Similarly, for data beyond those that can be presented in a
handful of tables, access to the raw data is also needed. Removing the
possibility for human error is another way to improve reproducibility; every
manual step in a process is a place where it could be carried out
inconsistently (either within a work, or between the original work and
attempts to reproduce it). Computers exist precisely to perform repeated tasks
automatically and consistently, so encoding manual workflows in software
significantly reduces the scope for human error; where errors do creep into
code, they can be inspected and discovered after the fact.
_Open science_ is the movement that all research—including not only
publications, but also physical samples, data, software, and other
outputs—should be accessible to all by default (barring specific ethical or
legal constraints, for example when working with private personal data).
Theoretical particle physics pioneered the use of open-access preprints for
publications, and remains a leading field—in many disciplines preprints are
still seen as a novelty. (For example, in a survey of 3,759 researchers
[3], 46% of respondents in medical and health sciences had never viewed or
downloaded a preprint, and 63.6% of the same cohort had never authored one.)
Data and software published openly can (and should) be FAIR: specifically,
_findable_ , _accessible_ , _interoperable_ , and _reusable_. This is a
separate consideration from making the data publicly available: data that
will never leave a research group still benefits when new researchers joining
the group can easily find it, and conversely it is entirely possible to
share data in un-FAIR ways. Making data and software FAIR increases the uses
that others can make of it, and hence makes it more valuable.
## 3 Survey results
Figure 1: Breakdown of submissions to the hep-lat arXiv in 2021 indicating
whether they had hep-lat as their primary arXiv or were crosslists, and
whether they presented new numerical results.
The 1,229 submissions to the hep-lat arXiv (including those cross-listed into
the category) were surveyed for some aspects affecting the reproducibility of
the work done. The questions were answered based on a combination of full-text
search for relevant keywords and a brief skim read of the contents. The
complete survey data, and the analysis scripts used to prepare the plots
presented here, are released separately [4]. The analysis was performed using
Python [5], pandas [6], Matplotlib [7], and seaborn [8], via a Jupyter
Notebook [9].
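As an illustration of the kind of analysis involved, the sketch below builds a toy breakdown with pandas; the column names are hypothetical and need not match the schema of the released survey data [4].

```python
import pandas as pd

# Hypothetical subset of the survey schema; the real column names in the
# released dataset [4] may differ.
records = pd.DataFrame(
    {
        "primary_heplat": [True, True, False, True, False],
        "numerical_results": [True, False, True, True, True],
        "acknowledges_hpc": [True, False, True, False, True],
    }
)

# Restrict to submissions presenting new numerical results, as in Sec. 3.
numerical = records[records["numerical_results"]]

# Cross-tabulate: how many numerical submissions acknowledge an HPC facility,
# split by whether hep-lat was their primary category.
breakdown = pd.crosstab(numerical["primary_heplat"], numerical["acknowledges_hpc"])
print(breakdown)
```

From a table like this, the bar charts in Figs. 1 and 2 are a single seaborn or Matplotlib call away.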
Following the principle of findability, and that a paper should include
sufficient information to be able to reproduce it without hunting elsewhere,
information not included in the preprint was not considered when completing
the survey. This excluded information added in published manuscripts but not
updated on the arXiv, and also excluded information that could be found by
digging through a collaboration’s website.
Figure 1 shows an initial breakdown of the submissions; a small majority had
hep-lat as their primary category, with the remainder being cross-lists. A
substantial majority of both cases—1,013 in all—included numerical results
(e.g. a plot that was not merely schematic, or numbers with uncertainties not
quoted from another publication); it is this subset that will be the focus of
the remainder of the analysis.
Figure 2: Breakdown of whether submissions to the hep-lat arXiv in 2021 that
included new numerical results acknowledged an HPC resource (left panel) or
specified any of the software used to generate their results (right panel).
Since one step towards reproducibility is to specify the software used, an
initial question to ask is what proportion of submissions do this. As a
baseline, this is compared with the proportion of submissions that acknowledge
the use of a high-performance computing (HPC) facility. These are of similar
difficulty—adding one or two sentences—and fulfil a similar purpose, in
demonstrating the utility of the software/facility and ensuring that its
development or activities continue to be funded. Figure 2 presents the
breakdown of submissions for these questions. The vast majority of submissions
presenting new numerical results with hep-lat as the primary arXiv (referred
to hereafter as _hep-lat numerical submissions_) acknowledge the use of an HPC
facility. However, fewer than half of hep-lat numerical submissions specify or
acknowledge any of the software used. This is potentially the simplest change
for authors to make; while citing software isn’t sufficient to ensure
reproducibility if custom code has been written, it significantly reduces the
barrier while also recognising software developers’ contributions to the
research.
The survey takes a relatively reductive view towards categorising the work
done in LFT: it is assumed that a typical piece of work may generate field
configurations using a Monte Carlo-like algorithm, may compute observables on
such configurations (generated as part of the same work or otherwise), and
will do some kind of statistical analysis on data resulting from one or both
of the previous two steps. (Alternative approaches including tensor networks
and quantum simulation have their software effort considered in the “analysis”
category; their count is sufficiently small that this is not considered to
introduce significant bias.)
0px
Figure 3: Breakdown of whether submissions to the hep-lat arXiv in 2021 that
generate new field configurations (left panel) or measure observables on field
configurations (right panel) specify the software that was used to do this.
Generation of configurations is the most computationally expensive activity in
LFT, and the associated software suites have had corresponding amounts of
effort put into ensuring they are robust and efficient. Steps taken to ensure
the reproducibility of generation include using deterministic, seeded random
number generators, keeping detailed log files, and storing metadata in the
configurations recording which version of the software was used and on
what machine. Computation of observables from these ensembles is typically the
next most expensive activity, and frequently uses some of the same software
infrastructure.
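The reproducibility measures just described can be sketched in a few lines; this toy Python example (not drawn from any real generation code, with purely illustrative metadata fields) shows how a seeded random number generator plus recorded metadata makes a "generation" step exactly repeatable.

```python
import hashlib
import json
import platform
import random

def generate_configuration(seed: int, n_sites: int) -> dict:
    """Toy stand-in for a Monte Carlo update: a deterministic, seeded draw.

    Real generation codes rely on the same principle: the same seed and code
    version must reproduce the same Markov chain. The metadata field names
    here are illustrative, not a real metadata standard.
    """
    rng = random.Random(seed)  # deterministic, seeded RNG
    links = [rng.uniform(-1.0, 1.0) for _ in range(n_sites)]
    metadata = {
        "seed": seed,
        "code_version": "example-0.1",  # would come from the VCS tag in practice
        "machine": platform.node(),     # which machine produced the configuration
        "checksum": hashlib.sha256(json.dumps(links).encode()).hexdigest(),
    }
    return {"links": links, "metadata": metadata}

# Two runs with the same seed agree to the last byte.
a = generate_configuration(seed=12345, n_sites=8)
b = generate_configuration(seed=12345, n_sites=8)
assert a["metadata"]["checksum"] == b["metadata"]["checksum"]
```

Storing the checksum and provenance alongside each configuration lets a later reader verify that a regenerated ensemble matches the original.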
Figure 3 shows the proportion of submissions that specify the software used
for each of these activities. Less than one in three of the 262 submissions
generating new configurations specified the software, and a slightly larger
proportion of the 462 submissions measuring observables on configurations
did. (In each case, some of the software specified consisted of toolkits such
as Grid [10] and Chroma [11], where significant amounts of additional code
(e.g. XML or C++) are needed to perform the work being reported; this
additional code was not typically found to be shared or specified.) Given
that, due to differences in normalisations and other undocumented conventions,
running two different codes with the same parameters can give differing
results, this is a barrier to the reproducibility of this effort.
Figure 4: Plot showing the proportion of submissions to the hep-lat arXiv in
2021 making use of existing gauge configurations that acknowledge the
infrastructure used to host and serve them.
Figure 5: Breakdown of the infrastructure acknowledged by those submissions in
Fig. 4 that do acknowledge hosting infrastructure.
Figure 6: Breakdown of submissions to the hep-lat arXiv in 2021 that publish
any of the data they generate.
Figure 7: Breakdown of the repositories used to host data published by the
authors of the submissions indicated in Fig. 6. Unibi is the institutional
repository of the University of Bielefeld.
Where configurations are not generated as part of the work, they must be
obtained from somewhere: either a collaboration's own storage, or
publicly-available ensembles. In either case storage
infrastructure must be used, and in the latter case a facility for finding and
sharing the configurations. The International Lattice Data Grid (ILDG)
provides schemas and specifications for such services, which have been
deployed in a number of Regional Grids (RGs). This was a very early example of
FAIR data sharing not only in LFT but in all of academia; it in fact pre-dated
the coining of the “FAIR” acronym itself. Being the first in the space has
however meant that tooling development in the surrounding world has moved on,
and left the ILDG tooling unmaintained. Only 14 of 230 submissions using
existing configurations acknowledge the hosting and sharing infrastructure
used (Fig. 4), all of which acknowledge the Japan Lattice Data Grid (Fig. 5),
reflecting the dormant state of the ILDG project and the RGs. Working groups
have, however, now resumed work on ILDG to modernise it, with the support of new
funding, as discussed in ILDG’s joint contribution [12].
Publications can also share data other than field configurations. Typically
being smaller in size, these require less infrastructure so can be shared with
generally-available data repositories such as CERN Zenodo. An advantage of
using such a repository over using a regular code or web hosting service is
that they will typically provide a persistent identifier (PID) such as a
Digital Object Identifier (DOI), which gives a reference that will not change
over time (e.g. when a researcher changes their GitHub username, or
institution), and a commitment to remain available in the moderately long term
(unlike services such as GitLab, which recently announced older repositories
would be made unavailable after a period of inactivity, with only a few weeks’
notice; that decision was later overturned due to public outcry, but there is
no barrier to the situation recurring). Sharing data in this format,
rather than relying on tables in a PDF file, makes collating data from
multiple sources and re-using it in other work significantly easier, and
removes the significant likelihood of errors occurring when transcribing data.
Figure 6 illustrates that this is not yet common practice in LFT, while Fig. 7
shows a significant reliance on code hosting services like GitHub and GitLab
that do not provide persistent identifiers or a guarantee of longevity. The
phrase “data available on request” is commonly used in some areas of science
where sharing of data is required by publisher policy but authors have not
prepared it; in some cases this has led to significant delays in access to
data as requests are ignored [13]. This phrase has yet to catch on in LFT.
Figure 8: Breakdown of submissions to the hep-lat arXiv in 2021 that generate
new numerical results that specify any of the software used for the analysis
of the data presented (left panel), and those that publish the full or partial
software workflow for this analysis (right panel). Figure 9: Breakdown of the
top ten specific pieces of analysis software referred to by submissions to the
hep-lat arXiv in 2021 that acknowledged such software.
The final part of the survey considers the data analysis of LFT data, i.e. the
process that takes in the data produced from configurations on HPC facilities
and outputs the plots and tables shown in the submissions. Many areas of
experimental science have reproducibility efforts; in those fields,
experimental facilities take the place of deterministically-generated field
configurations, and measurements at these facilities are not expected to give
machine-precision identical results when repeated. Their reproducibility
efforts therefore focus on ensuring that the data from experimental
measurements, once analysed, always give the same numerical results. As
discussed above, encoding these procedures into software and sharing that
software is the only feasible way to do this unambiguously.
Figure 8 illustrates that fewer than one in six hep-lat numerical submissions
specified any of the software used. The most popular tools referred to are
shown in Fig. 9; most are programming languages or frameworks rather than
computation-specific tools. Also shown in Fig. 8 is that only two of these
submissions included the workflow allowing another researcher to fully
reproduce the analysis performed. This represents a significant opportunity
for growth. A parallel survey, the initial results of which are reported in a
separate contribution [14], indicates that significantly more work is enabled
by workflows that are at least partially automated; publication of these would
significantly boost the reproducibility of the data analysis phase of LFT
computations.
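A minimal sketch of what publishing such a workflow enables: a hypothetical analysis encoded as a chain of functions that runs end-to-end with no manual steps, so the published output can be regenerated exactly. The stages and numbers are invented for illustration only.

```python
from statistics import fmean, stdev

def load_raw_data() -> list[float]:
    # Stand-in for reading observable data from disk.
    return [0.98, 1.02, 1.01, 0.99, 1.00]

def analyse(samples: list[float]) -> dict:
    # Stand-in for the statistical analysis stage (fits, resampling, ...).
    return {"mean": fmean(samples), "err": stdev(samples) / len(samples) ** 0.5}

def make_outputs(result: dict) -> str:
    # Stand-in for producing the published tables and plots.
    return f"observable = {result['mean']:.3f} +/- {result['err']:.3f}"

def workflow() -> str:
    # The whole pipeline runs end-to-end with no manual intervention, so
    # another researcher can rerun it and obtain byte-identical output.
    return make_outputs(analyse(load_raw_data()))

print(workflow())
```

Even this trivial chain has the property that matters: every published number is traceable to a single command, with no hand-copied intermediate values.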
## 4 Conclusions
While LFT has in many cases been at the forefront of open science, being one
of the first disciplines to embrace open-access preprints and being ahead of
its time with FAIR data for field configurations, there are some areas that
suffer from a “first-mover disadvantage” where work is ongoing to realign with
more modern tooling, and others where there is an opportunity to learn from
the example of other disciplines.
Some low-hanging fruit to improve the reproducibility of publications include
specifying and acknowledging publicly-available software that has enabled the
research presented, and making existing software workflows available (and
citing them in work that uses them). Taking manual processes and automating
them so they can be published is more challenging, in particular doing so in a
way that can be run end-to-end without intervention. There is work to be done
to write tools that will enable this to be done more easily; this work will be
informed by a parallel survey of individual researchers’ practices in this
area, whose initial results are reported in a separate contribution [14].
## 5 Acknowledgements
This work has been funded by the UKRI Science and Technologies Facilities
Council (STFC) Research Software Engineering Fellowship EP/V052489/1. The
author acknowledges the support of the Supercomputing Wales project, which is
part-funded by the European Regional Development Fund (ERDF) via Welsh
Government. The author would like to thank the organisers of the UKLFT Annual
Meeting for the invitation to present on this topic, which inspired this work.
The author also thanks Julian Lenz for reviewing the draft of this
contribution.
## References
* [1] Cornell University, _arXiv_ , https://arxiv.org.
  * [2] The Turing Way Community, _The Turing Way: A handbook for reproducible, ethical and collaborative research_ (2022), URL https://doi.org/10.5281/zenodo.6909298.
  * [3] C. K. Soderberg, T. M. Errington, and B. A. Nosek, Royal Society Open Science 7, 201520 (2020), URL https://royalsocietypublishing.org/doi/abs/10.1098/rsos.201520.
  * [4] E. Bennett, _Survey of reproducibility in hep-lat publications in 2021_ (2022), URL https://doi.org/10.5281/zenodo.7373413.
  * [5] G. Van Rossum and F. L. Drake, _Python 3 Reference Manual_ (CreateSpace, Scotts Valley, CA, 2009), ISBN 1441412697.
  * [6] The pandas development team, _pandas-dev/pandas: Pandas 1.4.2_ (2022), URL https://doi.org/10.5281/zenodo.6408044.
  * [7] J. D. Hunter, Computing in Science & Engineering 9, 90 (2007).
  * [8] M. L. Waskom, Journal of Open Source Software 6, 3021 (2021), URL https://doi.org/10.21105/joss.03021.
  * [9] B. E. Granger and F. Pérez, Computing in Science & Engineering 23, 7 (2021).
  * [10] P. A. Boyle, G. Cossu, A. Yamaguchi, and A. Portelli, in _Proceedings of The 33rd International Symposium on Lattice Field Theory — PoS(LATTICE 2015)_ (2016), vol. 251, p. 023.
  * [11] R. G. Edwards and B. Joo (SciDAC, LHPC, UKQCD), Nucl. Phys. B Proc. Suppl. 140, 832 (2005), hep-lat/0409003.
  * [12] F. Karsch, H. Simma, et al., in _To appear in Proceedings of The 39th International Symposium on Lattice Field Theory — PoS(LATTICE 2022)_ (2023).
  * [13] A. G. Smart, Physics Today (2018), URL https://physicstoday.scitation.org/do/10.1063/PT.6.1.20180822a/full/.
  * [14] A. Athenodorou, E. Bennett, J. Lenz, and E. Papadopoulou, in _To appear in Proceedings of The 39th International Symposium on Lattice Field Theory — PoS(LATTICE 2022)_ (2023).
# Non-linear Transport Phenomena and Current-induced Hydrodynamics in Ultra-
high Mobility Two-dimensional Electron Gas
Z. T. Wang, M. Hilke <EMAIL_ADDRESS> (Department of Physics, McGill University, Montréal, Quebec, Canada, H3A 2T8)
N. Fong (Emerging Technology Division, National Research Council of Canada, Ottawa, Ontario, Canada, K1A 0R6)
D. G. Austing <EMAIL_ADDRESS> (Department of Physics, McGill University, Montréal, Quebec, Canada, H3A 2T8; Emerging Technology Division, National Research Council of Canada, Ottawa, Ontario, Canada, K1A 0R6)
S. A. Studenikin (Emerging Technology Division, National Research Council of Canada, Ottawa, Ontario, Canada, K1A 0R6)
K. W. West, L. N. Pfeiffer (Department of Electrical Engineering, Princeton University, Princeton, New Jersey, USA, 08544)
(August 30, 2024)
###### Abstract
We report on non-linear transport phenomena at high filling factor and DC
current-induced electronic hydrodynamics in an ultra-high mobility
($\mu = 20\times 10^{6}$ cm$^{2}$/Vs) two-dimensional electron gas in a narrow
(15 $\mu$m wide) GaAs/AlGaAs Hall bar for DC current densities reaching 0.67
A/m. The various phenomena and the boundaries between the phenomena are
captured together in a two-dimensional differential resistivity map as a
function of magnetic field (up to 250 mT) and DC current. This map, which
resembles a phase diagram, demarcate distinct regions dominated by Shubnikov-
de Haas (SdH) oscillations (and phase inversion of these oscillations) around
zero DC current; negative magnetoresistance and a double-peak feature (both
ballistic in origin) around zero field; and Hall field-induced resistance
oscillations (HIROs) radiating out from the origin. From a detailed analysis
of the data near zero field, we show that increasing the DC current suppresses
the electron-electron scattering length that drives a growing hydrodynamic
contribution to both the differential longitudinal and transverse (Hall)
resistivities. Our approach to induce hydrodynamics with DC current differs
from the more usual approach of changing the temperature. We also find a
significant (factor of two to four) difference between the quantum lifetime
extracted from SdH oscillations, and the quantum lifetime extracted from
HIROs. In addition to observing HIRO peaks up to the seventh order, we observe
an unexpected HIRO-like feature close to mid-way between the first-order and
the second-order HIRO maxima at high DC current.
## I Introduction
High mobility two-dimensional electron gas (2DEG) systems exhibit a remarkable
richness of phenomena. In a strong magnetic (B-) field there are various
topological phases such as integer and fractional quantum Hall phases, which
stem from an interplay of disorder, electron correlations and B-field. These
phases are distinguished by their Hall quantization [1, 2]. On the other hand,
non-linear phenomena, such as Hall field-induced, phonon-induced, and
microwave-induced resistance oscillations, in Landau levels (LLs) at high
filling factor close to zero B-field are examples of phenomena that cannot be
characterized by conductance quantization. These non-linear phenomena have
attracted significant interest over the last two decades: see the extensive
reviews in Refs. [3, 4] and references therein.
Regarding non-linear DC transport, the principal topic of our work, the
quintessential example of a DC current-induced phenomenon at low B-field is
Hall field-induced resistance oscillations (HIROs)[5]. In published
experimental works on HIROs[5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
19], typically data are presented in the form of selected traces of
differential resistance versus B-field at fixed DC current (or differential
resistance versus DC current at fixed B-field). Such an approach may not
reveal all aspects of HIROs or HIRO-like phenomena, and their relationship to
other distinct linear and non-linear phenomena at low B-field may not be
clear. An alternative approach to gain a more complete picture is to map out
the resistance as a function of B-field and DC current. Such a technique has
been applied at high magnetic field in the quantum Hall regime and
demonstrated to reveal a wealth of phenomena: see Refs. [20, 21, 22, 23, 24].
Recently viscous transport in two-dimensional (2D) systems has also drawn
significant interest. Hydrodynamic phenomena are expected to be most
pronounced when the (momentum-conserving) electron-electron scattering length
$l_{ee}$ is much less than the device width $W$, and in turn, $W$ is much less
than the classical transport mean free path $l_{mfp}$, i.e., $l_{ee}$ $\ll$
$W$ $\ll$ $l_{mfp}$, distinct from the condition $W$ $\ll$ $l_{ee}$, $l_{mfp}$
for which ballistic effects dominate. The subject of hydrodynamic effects in
solids at low temperature was pioneered by Gurzhi in the 1960s [25, 26], and
initially drew theoretical attention, see for example Refs. [27, 28, 29, 30,
31]. With the advent of materials in the 1990s for which (momentum-relaxing)
scattering with defects and phonons was sufficiently weak, Molenkamp and de
Jong investigated experimentally hydrodynamic electron flow in high-mobility
GaAs/AlGaAs hetero-structure wires, and could distinguish the Knudsen and
Poiseuille (Gurzhi) transport regimes in the differential resistance [31, 30,
32]. Two decades ago analogies with fluid dynamics were also explored to
explain voltage steps in the quantum Hall effect breakdown regime including an
eddy viscosity model for the disruption of laminar flow around charged
impurities [33, 34, 35, 36, 37, 38]. In recent years there has been renewed
interest in hydrodynamic electron transport following the development of
material systems with ever higher transport mean free path. Experimental works
studying hydrodynamic effects in 2D systems feature graphene[39, 40, 41, 42,
43, 44, 45, 46, 47, 48, 49], high-mobility semiconductor 2DEGs[50, 51, 52, 53,
54, 55, 56, 57, 58, 59, 60], 2D metals[61], semi-metals[62], and semi-metal
micro-ribbons[63]. This effort has inspired numerous theoretical works of
which Refs. [64, 65, 66, 67, 68, 69, 70, 71, 58, 72] are examples of those
focusing on semiconductor 2DEGs. Change of temperature to suppress $l_{ee}$ is
the most common approach to reach the hydrodynamic regime, although channel
thinning in steps to effectively change $W$ can be employed in certain
instances as in Ref. [61]. It was also predicted in Ref. [27] that $l_{ee}$
decreases with increasing DC current, which is the basis of current-induced
viscous transport as originally investigated by Molenkamp and de Jong[31, 30,
32]; see also Ref. [73].
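The length-scale criteria above can be sketched as a simple classifier; the factor-of-three threshold standing in for "$\ll$" is an arbitrary illustrative choice, not a value from the literature.

```python
def transport_regime(l_ee: float, width: float, l_mfp: float) -> str:
    """Classify the dominant transport regime from the three length scales.

    Implements the criteria quoted in the text: hydrodynamic flow needs
    l_ee << W << l_mfp, while ballistic transport needs W << l_ee, l_mfp.
    The factor below approximates "much less than" for illustration only.
    """
    MUCH = 3.0
    if l_ee * MUCH <= width and width * MUCH <= l_mfp:
        return "hydrodynamic"
    if width * MUCH <= l_ee and width * MUCH <= l_mfp:
        return "ballistic"
    return "intermediate/diffusive"

# Lengths in micrometres, for a W = 15 um channel with l_mfp = 145 um.
print(transport_regime(l_ee=2.0, width=15.0, l_mfp=145.0))    # hydrodynamic
print(transport_regime(l_ee=200.0, width=15.0, l_mfp=145.0))  # ballistic
```

Suppressing $l_{ee}$ (here, by DC current rather than temperature) is what moves a fixed-geometry device from the ballistic to the hydrodynamic branch.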
Here, we examine in detail the differential resistance of an ultra-high
mobility 2DEG in a narrow Hall bar fabricated from a GaAs/AlGaAs quantum well
hetero-structure as a function of B-field and DC current. This approach allows
us to study numerous phenomena in one global 2D diagram as they evolve with
increasing DC current up to 10 $\mu$A for B-fields up to $0.25$ T. In the
differential longitudinal resistivity, we observe: Shubnikov-de Haas (SdH)
oscillations and phase inversion (PhI) of SdH oscillations near zero current,
negative magnetoresistance (nMR) and a double-peak feature near zero field,
and HIROs up to the seventh order. In addition, we find evidence for DC
current-induced electron hydrodynamics effects for B-fields less than 10 mT in
both the differential longitudinal and transverse resistivities. Since the
global 2D diagram (which identifies distinct regions within which different
phenomena dominate) resembles a phase diagram in appearance, we will
henceforth refer to the diagram as a “phase diagram”.
This paper is organized into seven sections. Section II describes the
experimental details pertaining to the 2DEG material, the Hall bar device, and
the measurement configuration. Section III presents the differential
resistivity map, and introduces the different phenomena observed and
delineates the boundaries between the phenomena in the phase diagram. Sections
IV to VII give extensive analysis of the various phenomena. In more detail, in
Sec. IV, we extract parameters of interest such as the quantum lifetime from
the SdH oscillations, and simulate the PhI of the SdH oscillations with an
existing model. Section V provides evidence that the origin of the nMR and the
double-peak feature is purely ballistic in nature. Section VI presents
analysis, based on current theories for viscous transport, of the observed
evolution of both the differential longitudinal and transverse (Hall)
resistivities near $0$ T with respect to the DC current which we attribute to
hydrodynamic flow. In Sec. VII, we compare properties of the HIROs observed in
the experiment to those expected from existing theories, and extract various
parameters such as the quantum lifetime and the electronic width. We also
report an unexpected additional HIRO-like feature in the phase diagram located
in between the first-order and the second-order HIROs peak. We end with a
conclusion in Section VIII.
## II Experimental Details
$n$ | $\mu$ | $E_{F}$ | $l_{mfp}$
---|---|---|---
(cm$^{-2}$) | (cm$^{2}$/V s) | (meV) | ($\mu$m)
$2.0\times 10^{11}$ | $20\times 10^{6}$ | $7.2$ | 145
Table 1: Key parameters of the 2DEG material system. The (bulk) parameters
given are determined from a large area Van der Pauw device measured in a
helium-3 cryostat at base temperature after illumination. Figure 1: Image of
central region of the Hall bar device showing the measurement configuration.
The Ohmic contacts are out of view. The lithographic width of the Hall bar is
$W$=15 $\mu$m, and the two voltage probes employed to measure $\Delta V_{xx}$
are separated by distance $L$=100 $\mu$m.
The Hall bar device is made from a 2DEG confined in a GaAs/AlGaAs quantum well
(QW) hetero-structure grown by molecular beam epitaxy. The 2DEG is located in
a 30 nm wide GaAs QW at a depth $\sim$200 nm below the surface. The barriers
on either side of the QW are composed of Al$_{0.3}$Ga$_{0.7}$As and
incorporate QW doping regions. From a large area Van der Pauw device measured
in a helium-3 cryostat at base temperature after illumination, the 2DEG sheet
density $n$ and (transport) mobility $\mu$ respectively for this material are
found to be $2.0\times 10^{11}$ cm$^{-2}$ and $20\times 10^{6}$ cm$^{2}$/V s.
The corresponding Fermi energy $E_{F}$ and transport mean free path $l_{mfp}$
respectively are determined to be 7.2 meV and 145 $\mu$m. These parameters are
tabulated in Table 1.
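As a cross-check, the tabulated $E_{F}$ and $l_{mfp}$ follow from $n$ and $\mu$ via the standard spin-degenerate 2DEG relations $E_{F}=\pi\hbar^{2}n/m^{*}$, $k_{F}=\sqrt{2\pi n}$, and $l_{mfp}=\hbar k_{F}\mu/e$. The GaAs effective mass $m^{*}=0.067\,m_{e}$ is a standard value assumed here rather than quoted from the paper, and the results agree with Table 1 to within a few percent.

```python
import math

# Physical constants (SI)
HBAR = 1.054571817e-34       # J s
E_CHARGE = 1.602176634e-19   # C
M_E = 9.1093837015e-31       # kg
M_EFF = 0.067 * M_E          # GaAs effective mass (standard value, assumed)

n = 2.0e15    # sheet density, m^-2   (2.0e11 cm^-2)
mu = 2000.0   # mobility, m^2/(V s)   (20e6 cm^2/(V s))

k_F = math.sqrt(2 * math.pi * n)                  # Fermi wavevector, spin-degenerate 2DEG
E_F_meV = (HBAR**2 * math.pi * n / M_EFF) / E_CHARGE * 1e3
l_mfp_um = HBAR * k_F * mu / E_CHARGE * 1e6       # l_mfp = v_F * tau = hbar k_F mu / e

print(f"E_F = {E_F_meV:.1f} meV, l_mfp = {l_mfp_um:.0f} um")
```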
Figure 2: (a) Map identifying the various phenomena (“phase diagram”).
$\rho_{xx}$ measured by sweeping the DC current and stepping the B-field. A
background parabolic dependence with $I_{DC}$ and a small uniform linear
dependence with B-field are subtracted from the raw data to emphasize the
features of interest in the phase diagram. The background $I_{DC}$ dependence
is discussed in Sec. VI.1, and the small linear $B$ dependence is likely due
to inhomogeneity of the 2DEG material. The Shubnikov-de Haas (SdH)
oscillations and the accompanying phase inversion (PhI) of the SdH
oscillations are visible close to zero DC current. The vertical band near
$B$=0 T identifies the negative magnetoresistance (nMR). The peaks forming a
fan are the Hall-field induced resistance oscillations (HIROs). Black arrows
in three of the four quadrants mark the “1.5” (HIRO-like) feature. (b)
$\rho_{xx}$ versus B sections at $I_{DC}$=0 $\mu$A and $I_{DC}$=0.4 $\mu$A
showing respectively SdH oscillations and phase inverted SdH oscillations. (c)
Expanded view of phase diagram around B=0 T where nMR is observed. Note that
here $\rho_{xx}$ is plotted without removing the above mentioned background
parabolic dependence with $I_{DC}$ (see Sec. V). A double-peak feature is
present on top of the nMR [see also panel (e)]. (d) $\rho_{xx}$ versus
$I_{DC}$ trace at $B=0$ T [marked by the vertical arrow in panel (c)]. The
initial decrease in $\rho_{xx}$ with current, i.e., negative differential
resistivity, is a signature of hydrodynamics[30] (HYDRO). (e) $\rho_{xx}$
versus $B$ section taken at $I_{DC}$=6 $\mu$A where we see the nMR and the
double-peak feature around B=0 T, and HIROs at higher B-field. (f) Cartoon
summarizing the regions identified in one quadrant of the phase diagram in
panel (a). A more detailed description of the various phenomena, the
boundaries between the phenomena, and certain marked characteristic B-fields
and DC currents can be found in the text.
The Hall bar device is made by standard fabrication techniques. The nominal
(lithographic) width $W$ of the Hall bar is 15 $\mu$m. The separation between
adjacent voltage probes along each side of the Hall bar is 50 $\mu$m. An image
of the central region of the device is shown in Fig. 1. For all experiments
described, a DC current $I_{DC}$ is combined with an AC excitation current
$I_{AC}$ of 20 nA (unless otherwise stated) at 148 Hz, and the net current
$I_{SD}$ is driven through the Hall bar from the source contact to the
grounded drain contact. $\Delta V_{xx}$, the change in AC voltage along the
Hall bar, and $\Delta V_{xy}$, the change in AC voltage across the width of
the Hall bar, are measured with a standard lock-in technique and the
differential longitudinal and transverse resistances respectively are given by
$r_{xx}=\Delta V_{xx}/I_{AC}=dV_{xx}/dI$ and $r_{xy}=\Delta
V_{xy}/I_{AC}=dV_{xy}/dI$: see Refs. [74, 24] for further details of the
technique. Voltages $\Delta V_{xx}$ and $\Delta V_{xy}$ are measured between
the voltage probes indicated in Fig. 1. Note $\Delta V_{xx}$ is dropped
between voltage probes separated by a distance $L$=100 $\mu$m. Although not
measured, a DC voltage drop is also discussed and estimated in Sec. VI. The
differential longitudinal and transverse resistivities respectively are
$\rho_{xx}=Wr_{xx}/L$ and $\rho_{xy}=r_{xy}$. All measurements are
performed in the dark in a dilution refrigerator at base temperature where the
mixing chamber temperature is $\sim$15 mK. The electron temperature $T_{e}$ is
estimated in separate measurements to be $\sim$40 mK. The B-field is applied
perpendicular to the plane of the 2DEG. Note that a small 6 mT correction has
been applied to the data to account for a field offset.
## III Phenomena Observed in the Differential Resistivity
We start by looking at the global phase diagram and identify all the different
phenomena therein. This is accomplished by measuring $\rho_{xx}$ on sweeping
the DC current and stepping the B-field. The resulting map is presented in
Fig. 2(a). In our Hall bar device, we can identify several phenomena of
interest: SdH oscillations, phase inverted SdH oscillations, nMR, a double-
peak feature on top of the nMR, DC current-induced hydrodynamic effects,
HIROs, and a HIRO-like feature. In the rest of this section we provide a
general introduction to these phenomena before going into details in
subsequent sections. To aid this introduction, Fig. 2(f) provides a simple
schematic showing the regions, in one quadrant of the phase diagram, where the
phenomena are observed, the boundaries between the phenomena, and
characteristic B-fields and DC currents.
SdH oscillations at zero DC current are observed above 21 mT and their
amplitude in differential resistivity is found to decay with increasing
$I_{DC}$ [see also Fig. 3 and Fig. 4]. In general, SdH
oscillations are absent when $\omega_{c}\tau_{q}\ll$1, where
$\omega_{c}=eB/m^{*}$ is the cyclotron frequency, $m^{*}$=$0.067m_{e}$ is the
effective mass of the charge carrier, $m_{e}$ is the mass of an electron, $e$
is the electron charge and $\tau_{q}$ is the quantum lifetime. We can
determine a characteristic B-field $B_{q}$ of 33 mT from the condition
$\omega_{c}\tau_{q}$=1 [see also Fig. 3(a), and note that weak SdH
oscillations are still visible below $B_{q}$ as represented by the red colored
region with diminished shading in Fig. 2(f)]. With increasing $I_{DC}$, the
SdH oscillation amplitude first decays before the maxima and minima of the SdH
oscillations invert, an effect called phase inversion. Examples of inverted
and non-inverted SdH oscillations are given in Fig. 2(b). See also Fig. 4
where inversion occurs at a DC current $I_{PhI}\sim$0.2 $\mu$A near 0.2 T.
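The quoted $B_{q}$ follows directly from the condition $\omega_{c}\tau_{q}$=1, i.e. $B_{q}=m^{*}/(e\tau_{q})$. As a quick numerical check (a sketch using standard constants, not code from the analysis itself):

```python
# Check B_q from omega_c * tau_q = 1, i.e. B_q = m*/(e * tau_q)
e = 1.602176634e-19                 # electron charge, C
m_star = 0.067 * 9.1093837015e-31   # GaAs effective mass, kg
tau_q = 11.5e-12                    # quantum lifetime from the SdH fit, s

B_q = m_star / (e * tau_q)
print(f"B_q = {B_q * 1e3:.0f} mT")  # -> 33 mT, as quoted
```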
In the vicinity of B=0 T, we observe pronounced nMR; see panels (a) and (e) in
Fig. 2. The magnetoresistance decays rapidly with increasing B-field until a
strong change of slope marks the nMR boundary. Assuming a purely ballistic
regime, an abrupt change of slope is predicted[66] to occur when $W=2r_{c}$,
where $r_{c}$=$m^{*}v_{F}/eB$=$v_{F}/\omega_{c}$ is the cyclotron radius, with
a corresponding B-field $B_{W}$, and $v_{F}$=$1.9\times 10^{7}$ cm/s is the
Fermi velocity. In our case, $W$=15 $\mu$m. Figure 3(a) shows a resistance
versus B-field trace at zero DC current. The change in slope is most rapid at
$\sim$ 10 mT. We take this field to be an estimate of $B_{W}$, which
reasonably gives $W\sim$ 15 $\mu$m (as an upper bound). However, the observed
change of slope at the boundary is not as abrupt as in the calculations in
Ref. [66], which is a source of imprecision in the estimation of the location
of the boundary. Furthermore, accounting for undercut during the wet etching
step in the fabrication of the Hall bar, and sidewall depletion, we expect the
_effective_ electronic width of the Hall bar $W_{eff}$ to be smaller than $W$.
In Sec. VII, from a detailed analysis of the HIROs, we determine
$W_{eff}\sim$11 $\mu$m, and the corresponding B-field $B_{W_{eff}}\sim$ 14 mT.
Throughout the text we will be careful in our usage of $W$ and $W_{eff}$ and
make clear when the distinction matters. Note in Fig.
2(f) we have marked the nMR boundary with $B_{W_{eff}}$ rather than $B_{W}$.
Also accompanying the nMR is a distinctive double-peak feature, clear in
panels (c) and (e) in Fig. 2, that we ascribe to ballistic transport.
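The characteristic fields above can be reproduced from the condition $W=2r_{c}$, i.e. $B_{W}=2m^{*}v_{F}/(eW)$. A short sketch with the quoted $v_{F}$ (the small difference from the quoted $B_{W_{eff}}\sim$ 14 mT reflects rounding of $v_{F}$):

```python
# B_W from the ballistic condition W = 2 r_c, with r_c = m* v_F / (e B)
e = 1.602176634e-19
m_star = 0.067 * 9.1093837015e-31
v_F = 1.9e5  # Fermi velocity, m/s (= 1.9e7 cm/s as quoted)

B_W = {W_um: 2 * m_star * v_F / (e * W_um * 1e-6) for W_um in (15.0, 11.0)}
for W_um, B in B_W.items():
    print(f"W = {W_um:4.1f} um -> B_W = {B * 1e3:.1f} mT")
# W = 15 um gives ~9.7 mT (the quoted ~10 mT);
# W_eff = 11 um gives ~13 mT (close to the quoted ~14 mT)
```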
Figure 3: (a) Resistance $R_{xx}$ versus $B$ trace. Here $R_{xx}$ is taken to
be $r_{xx}$ at $I_{DC}$=0 $\mu$A, and for this particular measurement where
the B-field is swept at 1 mT/min the AC excitation current is 40 nA. At low
B-field, there is strong negative magnetoresistance up to field $B_{W}$
$\sim$10 mT as discussed in Sec. III and in Sec. V. $B_{q}$=33 mT is related
to $\tau_{q}$ and is defined in Sec. III. (b) SdH oscillation amplitude
$\Delta R_{xx}$ determined from panel (a) plotted versus filling factor
($\nu\propto 1/B$). The black line is the data; the red line is a fit to Eq.
(1) with parameters $n$=$2.0\times 10^{11}$ cm$^{-2}$, $R_{0}$=$11.4\pm 0.6$
$\Omega$, and $\tau_{q}$=$11.5\pm 0.3$ ps. $\nu$=400 corresponds to B=21 mT.
Also close to $B=0$ T, we can identify modifications to the nMR and the
double-peak feature with increasing DC current that point to a growing
influence of hydrodynamics [depicted by region marked HYDRO with cross
hatching of increasing weight in Fig. 2(f)]. In a purely hydrodynamic regime,
the hydrodynamic contribution to $\rho_{xx}$ is expected to be strongest at
B=0 T, and in order to reach the strong hydrodynamic regime, the electron-
electron scattering length $l_{ee}$ should be smaller than the width of the
Hall bar. We show in Sec. VI.1 that $l_{ee}$ decreases with increasing
$I_{DC}$, and so the signatures of viscous transport are stronger at high
$I_{DC}$. See also the $\rho_{xx}$ versus $I_{DC}$ trace at $B=0$ T in Fig.
2(d) which displays an initial decrease in $\rho_{xx}$ with DC current. This
behavior resembles that observed in a 2D wire and attributed to the Gurzhi
effect, i.e., Poiseuille flow in electron transport[30]. In section VI we will
also examine the DC-current induced hydrodynamic correction to the Hall
resistivity near zero field.
Lastly, in the phase diagram, we can identify HIRO peaks that fan out from the
origin [see also the section in Fig. 2(e) that shows the HIROs on either side of
the nMR region near zero field]. The boundary above which HIROs are observed
is delimited by $I_{HIRO}$ in Fig. 2(f), and tracks the DC current position of
the first-order HIRO peak that is linear in B [see Eq. (12) and Fig. 9(a)].
Similar to SdH oscillations, the HIRO peak amplitude is in principle related
to the quantum lifetime. However, we find in Sec. VII that the quantum
lifetime extracted from the HIROs does not match the quantum lifetime
extracted from the SdH oscillations. For this reason, the B-field onset of
HIROs is marked $B_{q}^{HIRO}$ in Fig. 2(f). We can discern up to seven orders
of HIRO peaks for a 10 $\mu$A DC current. Additionally, we observe unexpected
HIRO-like features located between the first-order and the second-order HIRO
peaks at high DC current, that are marked by black arrows in Fig. 2(a) [see
also Fig. 9(a) and Fig. 11], which we will refer to as the “1.5” features
throughout this paper.
## IV Shubnikov-de Haas Oscillations and Phase Inversion
SdH oscillations arise from B-field induced Landau levels (LLs) and the oscillating density-
of-states (DOS) at the Fermi level [75, 76, 77, 78]. For an extensive review
of SdH oscillations see Ref. [3]. In the small B-field regime the SdH
oscillations in the resistance $R_{xx}=V_{xx}/I$ (which is equivalent to
$r_{xx}$ in the limit of zero DC current) can be analytically described by the
conventional Ando formula[79, 80]:
$\Delta R_{xx}=4R_{0}D_{T}\cos{\left(\frac{2\pi
E_{F}}{\hbar\omega_{c}}-\pi\right)}\exp\left({-\frac{\pi}{\omega_{c}\tau_{q}}}\right),$
(1)
where $\Delta R_{xx}$ is the SdH oscillation amplitude, $R_{0}$ is the zero-
field resistance, and $D_{T}$ is the thermal damping factor:
$D_{T}=\frac{X_{T}}{\sinh(X_{T})},\;\;X_{T}=\frac{2\pi^{2}k_{B}T}{\hbar\omega_{c}}.$
(2)
Here $k_{B}$ is the Boltzmann constant, $\hbar$ is the reduced Planck
constant, and $T$ is the bath temperature. In the phase diagram in Fig. 2, the
SdH oscillations exhibit two notable features. First, the SdH oscillation
amplitude decreases with increasing DC current, and second, an inversion of
the SdH oscillation extrema occurs on increasing the DC current further. By
fitting the experimental SdH oscillation data to Eq. (1) it is possible to
extract the electron concentration, the quantum lifetime, the amplitude
(4$R_{0}$), and the current dependence of the electron temperature $T_{e}$.
Note the use of $T_{e}$ instead of $T$ has been shown [20, 81] to be essential
to describe the decrease of the SdH oscillation amplitude and the PhI with
increasing DC current. We emphasize that for our narrow HB device there is
pronounced nMR around $B$=0 T [see Fig. 3(a)]; whether one can simply equate
$R_{0}$ in Eq. (1) for the SdH oscillations [82] with $R$(B=0 T), the measured
value of the zero-field resistance, is therefore an important question, since
it impacts the determination of the mobility.
At $I_{DC}$=0 $\mu$A, by fitting the SdH oscillations in Fig. 3(a) to Eq. (1),
we obtain parameters $n$=$2.0\times 10^{11}$ cm$^{-2}$, $R_{0}$=$11.4\pm 0.6$
$\Omega$, and $\tau_{q}$=$11.5\pm 0.3$ ps. For our experimental conditions, it
is possible to detect SdH oscillations up to filling factor $\nu$=$2\pi\hbar
n/eB$ of $400$ [see Fig. 3(b)]. We emphasize that the value of $R_{0}$ as
determined from SdH is significantly lower than the measured resistance at
$B$=0 T [R(B=0)=43 $\Omega$], indicating that the phenomenon of nMR near zero
field is independent of the SdH oscillation effect. Consequently, without
accounting for the presence of the nMR (and the double-peak feature
superimposed on top of the nMR), taking the $R$(B=0 T)=43 $\Omega$ value
rather than the fitted value of $R_{0}$=11.4 $\Omega$ relevant to the SdH
oscillations, the estimated mobility would be lower than the bulk mobility by
almost a factor of four [83]. We note that in our study $R_{0}$ is close to the
bulk resistance, whereas in other experiments [84] the value of $R_{0}$
extracted from SdH oscillations does not necessarily correspond to the
resistance at zero field, i.e., for our situation the assumption implicit in
the Ando formula that the B-field dependence of the DOS is sinusoidal in
nature to first-order is justified.
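The quoted maximum filling factor is consistent with $\nu$=$2\pi\hbar n/eB$=$hn/(eB)$ evaluated at the fitted density; a one-line check:

```python
# nu = 2*pi*hbar*n/(e*B) = h*n/(e*B) at the lowest field where SdH resolves
h = 6.62607015e-34
e = 1.602176634e-19
n = 2.0e15   # fitted density, m^-2 (2.0e11 cm^-2)
B = 0.021    # T

nu = h * n / (e * B)
print(f"nu = {nu:.0f}")  # -> 394, i.e. ~400 as quoted
```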
Away from $I_{DC}$=0 $\mu$A, the SdH oscillation amplitude decreases, and
eventually PhI occurs; see Fig. 4, where this takes place at $\sim 0.2$
$\mu$A near 0.2 T. PhI, whereby the extrema of the SdH oscillations invert
with increasing DC current, was discussed theoretically in Refs. [4, 85], and has
been reported experimentally for high mobility semiconductor 2DEGs [86, 87,
88, 20, 24, 74], and for graphene [81]. For $r_{xx}$, PhI with increasing DC
current can be explained by an electron heating model in terms of the
following relation[20]:
$\Delta r_{xx}=\Delta R_{xx}+I_{DC}\frac{\partial\Delta R_{xx}}{\partial
T_{e}}\frac{\partial T_{e}}{\partial I},$ (3)
derived from $\Delta r_{xx}$=$d(\Delta R_{xx}I)/dI$, where $\Delta r_{xx}$ is
the SdH oscillation amplitude in the differential resistance, and $\Delta
R_{xx}$ is the magnetoresistance described in Eq. (1). The term
$\partial\Delta R_{xx}/\partial T_{e}$ introduces oscillations shifted by a
phase of $\pi$ that dominate at larger $I_{DC}$. At higher $T_{e}$, the
damping factor $D_{T}$ from Eq. (2) decreases the amplitude of the SdH
oscillations. Taken from the data set shown in full in Fig. 2, Fig. 4(a) shows
$\Delta r_{xx}$ near 0.2 T. Figure 4(b) shows a simulation of the experimental
data obtained with Eq. (3). The simulation assumes the electron temperature is
a fitting parameter of the form $T_{e}(I_{DC})$=$T_{0}+\alpha I_{DC}^{\beta}$,
where $T_{0}$=0.04 K is the expected electron temperature at zero current, and
$\alpha$ and $\beta$ are constants. The empirical dependence of the electron
temperature with DC current is found to be
$T_{e}(I_{DC})$=$T_{0}+1.0I_{DC}^{0.65}$ where $I_{DC}$ is in $\mu$A. To
obtain the parameters $\alpha$ and $\beta$, a Monte Carlo approach is used as
both $T_{e}$ and $\partial T_{e}/\partial I$ need to be fitting parameters in
the model. We find $\beta$ $\sim 0.65$ consistent with the value reported in
previous work by Studenikin et al. [20] in an InGaAs/InP QW HB device with a
much smaller carrier mobility of $1.9\times 10^{5}$ cm$^{2}$/Vs. This
demonstrates that electron heating is also an important mechanism for PhI in
very high mobility HB devices in the large filling factor regime.
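The mechanism behind Eq. (3) can be illustrated with a minimal numerical sketch (our own toy model, not the Monte Carlo fit used for Fig. 4): since the oscillatory factor in Eq. (1) is current independent, the envelope of $\Delta r_{xx}$ is proportional to $d[I\,D_{T}(T_{e}(I))]/dI$, which changes sign, i.e. the oscillations invert, at a few tenths of a $\mu$A for the empirical $T_{e}(I_{DC})$:

```python
import math

# Constants and the Fig. 4 field
k_B, hbar = 1.380649e-23, 1.054571817e-34
e, m_star = 1.602176634e-19, 0.067 * 9.1093837015e-31
omega_c = e * 0.2 / m_star   # cyclotron frequency at B = 0.2 T

def D_T(T):
    """Thermal damping factor, Eq. (2)."""
    X = 2 * math.pi**2 * k_B * T / (hbar * omega_c)
    return X / math.sinh(X)

def T_e(I_uA):
    """Empirical electron temperature, I_DC in micro-amps."""
    return 0.04 + 1.0 * I_uA**0.65

def envelope(I_uA, dI=1e-3):
    """Envelope of Delta r_xx ~ d(I * D_T)/dI from Eq. (3); the
    current-independent factor 4 R_0 exp(-pi/(omega_c tau_q)) is dropped."""
    g = lambda I: I * D_T(T_e(I))
    return (g(I_uA + dI) - g(I_uA - dI)) / (2 * dI)

I = 0.05
while envelope(I) > 0:   # walk up in current until the envelope inverts
    I += 0.01
print(f"envelope changes sign near I ~ {I:.2f} uA")
```

This toy model inverts at a few tenths of a $\mu$A, the same scale as the observed $I_{PhI}\sim$0.2 $\mu$A; the quantitative simulation of Fig. 4(b) additionally treats both $T_{e}$ and $\partial T_{e}/\partial I$ as fitting parameters.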
Figure 4: Grayscale plots showing PhI of the SdH oscillation amplitude $\Delta
r_{xx}$ in $r_{xx}$ data near 0.2 T (a), and a simulation of the data (b). The
simulation is generated from Eq. (3).
## V Negative Magnetoresistance
nMR is commonly observed in high-mobility 2DEG Hall bar devices, [89, 90, 91,
50, 92, 93, 94, 95, 59, 96, 97, 98] and narrow micron-sized quantum devices
such as quantum wires and quantum point contacts [99, 100, 101, 102, 103]. nMR
can arise from different effects. In one limit, nMR is often associated with
weak localization (WL) [104, 105, 91], which is due to coherent
back-scattering off impurities when the mean free path is small compared to
the phase coherence length. nMR also occurs in the opposite
(ballistic) regime, when the mean free path is larger than the Hall bar width
and the distance between voltage probes, and coherent back-scattering is
negligible, see for example Refs. [89, 90, 91, 94, 95, 59, 96, 97]. Our Hall
bar is in this regime, and so the strong nMR we observe in Fig. 3 arises from
the ballistic effect.
For a fully ballistic conductor with lateral confinement, the nMR is a direct
consequence of the non-zero resistance due to the finite number of quantum
channels. In the limiting case, where there is no electron scattering, the
two-terminal resistance is simply given by $R_{2T}=h/(g_{s}e^{2}N)$, where $h$
is the Planck constant, $g_{s}=2$ is the spin degeneracy, and $N$ is the
number of quantum transmission channels. For a confinement potential $V(y)$ in
the transverse direction, the number of channels can be found by using the
Bohr-Sommerfeld quantization rule:
$N=\frac{2}{h}\int_{y_{1}}^{y_{2}}pdy.$ (4)
Here $p=\sqrt{2m^{*}(E_{F}-V(y))}$ is the quasi-classical momentum, and
$y_{1}$ and $y_{2}$ are the boundaries where $p$ is real. For a hard wall
confinement potential of width $W$ ($y_{1}=-W/2$ and $y_{2}=W/2$), and without
a B-field, this leads to the well-known expression $N=Wk_{F}/\pi$, where the
number of channels is twice the width divided by the Fermi wavelength. In the
presence of a B-field, $N$ is reduced due to Landau quantization, and the
quasi-classical momentum is now given by
$p=\sqrt{2m^{*}(E_{F}-V(y))-(eBy)^{2}}$. For a hard-wall confinement
potential, Eq. (4) can be evaluated directly and one obtains:
$\begin{split}N=\frac{Wk_{F}}{\pi}f\left(\frac{W}{2r_{c}}\right);\\
f(s)=\frac{1}{2}\left(\sqrt{1-s^{2}}+\frac{\arcsin{s}}{s}\right).\end{split}$
(5)
Equation (5) is a generalization of the approximate expression obtained by
Glazman and Khaetskii[106], who considered the two limits $W\ll 2r_{c}$ and
$W\gg 2r_{c}$ separately. In these two limits both expressions coincide. For a
smooth confinement potential, $N$ can be obtained by evaluating the integral
in Eq. (4) numerically.
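As a numerical sanity check (a sketch, not the simulation code behind Fig. 5), the hard-wall closed form Eq. (5) can be compared against direct midpoint quadrature of Eq. (4); the two agree to high precision:

```python
import math

hbar = 1.054571817e-34
e, m_star = 1.602176634e-19, 0.067 * 9.1093837015e-31
n = 2.0e15
k_F = math.sqrt(2 * math.pi * n)   # ~1.12e8 m^-1 from the fitted density
W = 11e-6                          # effective width, m

def N_integral(B, samples=20000):
    """Eq. (4) for a hard wall: p = sqrt((hbar k_F)^2 - (e B y)^2)."""
    p_F = hbar * k_F
    r_c = p_F / (e * B)
    y2 = min(W / 2, r_c)           # p must stay real
    dy = 2 * y2 / samples
    total = 0.0
    for i in range(samples):       # midpoint rule
        y = -y2 + (i + 0.5) * dy
        total += math.sqrt(max(p_F**2 - (e * B * y)**2, 0.0)) * dy
    return 2 * total / (2 * math.pi * hbar)   # (2/h) * integral

def N_closed(B):
    """Eq. (5), valid for W <= 2 r_c (s <= 1)."""
    s = W * e * B / (2 * hbar * k_F)          # s = W / (2 r_c)
    f = 0.5 * (math.sqrt(1 - s**2) + math.asin(s) / s)
    return (W * k_F / math.pi) * f

for B in (2e-3, 5e-3, 10e-3):
    print(f"B = {B*1e3:4.1f} mT: N_int = {N_integral(B):.1f}, "
          f"N_closed = {N_closed(B):.1f}")
```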
For instance, if we now consider, as depicted in the inset to Fig. 5, an
exponential depletion potential of the form:
$V(y)=U_{0}\left(e^{-|y-y_{1}|/l_{d}}+e^{-|y-y_{2}|/l_{d}}\right),$ (6)
with $l_{d}$ the depletion length, and $U_{0}$ the barrier potential, we can
compare the numerically obtained conductance, assuming no diffuse scattering
(in the bulk or at the edge of the Hall bar), with Eq. (4). Both are in
excellent agreement as shown in Fig. 5. The most distinctive feature is the
rapid decrease of the number of transmission channels with increasing B-field.
The characteristic field $B_{W}$ for the rapid decrease is given by
$W$=$2r_{c}$ for which the slope of $N(B)$ is maximal. Our simple model
therefore can explain the strong nMR observed in Fig. 3(a). However, note that
$B_{W}$ here is closer to the maximum of $N(B)$, while in Fig. 3(a), $B_{W}$
is further down the side of the nMR. In the Hall bar measured, this is likely
due to the presence of diffuse scattering, and the voltage probe regions, both
of which are neglected in the simple model and the numerical simulation.
Figure 5: Calculated number of transmission channels, $N$ (proportional to the
conductance) as a function of B-field for different values of the sidewall
depletion length, $l_{d}$ in $\mu$m. The lines are obtained using Eq. (4),
while the open dots are obtained by computing the two-terminal transmission
numerically using a discretized Schrödinger equation with two semi-infinite
leads. The crosses indicate the B-fields $B_{W}$ where $W=2r_{c}$. Here a
width of $W=11$ $\mu$m is assumed, and we have taken $U_{0}$ to be 20 meV.
Inset: calculated depletion potential for a section through the Hall bar for
different values of $l_{d}$. $l_{d}$=0 $\mu$m corresponds to a hard-wall
potential.
Indeed, for a Hall bar in the strong ballistic regime, scattering is dominated
by scattering off the openings to the voltage probes along the sides, since no
scattering would lead to a zero four-terminal resistance. Scattering from the
voltage probes is proportional to $N$, hence the four-terminal resistance will
show a similar $B$-dependence as $N(B)$, leading to the observed nMR.
Moreover, in ballistic narrow Hall bars, the two-terminal resistance at $B=0$
T is bounded by the number of conducting channels, i.e., $R>h/(2e^{2}N)$ ($R>16$
$\Omega$ in our Hall bar), regardless of the scattering potential and Hall bar
aspect ratio. As a consequence, the transport mobility (or transport lifetime)
cannot be extracted from the measured resistance at $B=0$ T when nMR is
present but only from a fit of the SdH oscillations as discussed in Sec. IV.
In addition to the nMR we also observe a dip at $B=0$ T leading to the
distinctive double-peak feature seen in Fig. 2(e). The double-peak feature has
been reported in recent experiments on narrow channels in 2D metals [61], and
nanowires in graphene [107], and was attributed to ballistic transport. The
double-peak feature has also been discussed elsewhere both theoretically [66]
and experimentally [108, 107, 43], where the peaks were reported to occur at
W$\simeq$0.55$r_{c}$. For our Hall bar device, we infer
$W_{eff}\simeq$0.65$r_{c}$ from the position of the two peaks in the double-
peak feature at zero DC current (B$\sim$ $\pm$4.7 mT). In the calculation
displayed in Fig. 5 no double-peak feature is visible. We found that whether a
double-peak feature appears in a calculation or not depends sensitively on the
details of the boundary scattering assumed, and is more prominent with diffuse
scattering on the edge along the Hall bar channel [100]. For the calculation
relevant to Fig. 5 we assumed no diffuse scattering at the edge nor in the
bulk. We emphasize that because we are in the regime $W$ $\ll$ $l_{mfp}$
$\sim$ 145 $\mu$m, the nMR and double-peak feature we observe are ballistic in
origin and do not arise from a combination of WL and weak anti-localization
(WAL)[109]. In GaAs/AlGaAs heterostructures spin-orbit interaction is weak,
and even in the case of a moderate mobility 2DEG where WL and WAL have been
observed to co-exist, the resulting double-peak feature is tightly confined
within $\pm$0.1 mT of zero field [110]. In the next section, we take a closer
look at the evolution of $\rho_{xx}$ _with DC current_ in the vicinity of B=0
T and show that further corrections to $\rho_{xx}$ develop from viscous
transport.
## VI Hydrodynamic Electron Transport
In this section we discuss the impact of hydrodynamics on electron transport
when DC current is driven through the Hall bar. As discussed in the
introduction, to reach the hydrodynamic regime, increasing the temperature is
a common strategy taken to drive down $l_{ee}$ so that the condition $l_{ee}$
$\ll$ $W$ is attained. We instead employ an increasing DC current to suppress
$l_{ee}$, following the predictions by Giuliani and Quinn[27] that $l_{ee}\sim
1/I_{DC}^{2}$. We first demonstrate the DC current induced suppression of
$l_{ee}$. Subsequently, we isolate the hydrodynamic contribution (correction)
to both the differential longitudinal and transverse resistivities near $B$=0
T, and compare to existing theories for hydrodynamic electron transport.
### VI.1 Current-induced Suppression of Electron-Electron Scattering Mean
Free Path
Figure 6: $\rho_{xx}$ versus $I_{DC}$ for different B-fields. The traces are
taken from the data set shown in Fig. 2. Away from $B$=0 T, $\rho_{xx}$
exhibits a background quadratic dependence with respect to $I_{DC}$. The
dashed line is a quadratic fit of the average $\rho_{xx}$ from 12 mT to 24 mT,
shifted down by 0.5 $\Omega$ for clarity. For this B-field range, hydrodynamic
effects are suppressed, and SdH and HIRO are not yet observable. Above $B\sim
30$ mT, SdH oscillations and HIROs respectively start to appear at zero
current and finite current, although the background quadratic dependence is
still evident. Traces for $B>30$ mT are shifted down progressively by 1
$\Omega$ for clarity. Red dashed ellipse: HIRO amplitude decreases with
increasing $I_{DC}$ (see Sec. VII for discussion). Inset: Hydrodynamic
component $\Delta\rho_{xx}^{*}$=$\rho_{xx}-\rho_{xx,I=0}-\Delta\rho_{xx}^{bg}$
versus DC current in micro-amperes at $B$=0 T. Here $\rho_{xx,I=0}$ $\sim$ 4.6
$\Omega$. From a linear fit we find $\Delta\rho_{xx}^{*}$=$-0.155|I_{DC}|$.
This component has a negative sign. See text for further discussion.
We start by discussing the necessary pre-condition for hydrodynamics, namely
$l_{ee}\lesssim W$, and how by increasing the DC current flowing through the
Hall bar device $l_{ee}$ can be decreased. By careful inspection of vertical
sections through the phase diagram in Fig. 2(a) near zero field, we can
identify a general background quadratic dependence of $\rho_{xx}$ with respect
to $I_{DC}$, $\Delta\rho_{xx}^{bg}$. Other than at zero field, as illustrated
in Fig. 6, this background quadratic dependence is clearly seen for traces
when $|B|\lesssim 30$ mT. At higher B-field, the background quadratic
dependence is still present but masked by the onset of SdH oscillations near
zero current, and HIROs at finite current. From analysis of $\rho_{xx}$ traces
between 12 mT and 24 mT in Fig. 6, we find that the background quadratic
dependence follows the relationship:
$\frac{\Delta\rho_{xx}^{bg}(I_{DC})}{\rho_{xx}(0)}=\left(\frac{I_{DC}}{I_{0}}\right)^{2},$
(7)
where $\rho_{xx}(0)$=$1.81\pm 0.01$ $\Omega$, and $I_{0}$=$11.4\pm 0.1$
$\mu$A. Note that in our forthcoming discussion,
$\Delta\rho_{xx}^{bg}$ is only a function of $I_{DC}$, i.e., for $|B|\lesssim
30$ mT it is assumed to have no B-field dependence, and furthermore,
$\Delta\rho_{xx}^{bg}$ is defined to be zero at zero current. We attribute the
quadratic dependence to a DC current induced increase of the electron-electron
scattering rate $\tau_{ee}^{-1}$. Generally, the resistivity of a 2DEG is
proportional to the sum of the scattering rates from different sources namely
$\rho_{xx}$=$(m^{*}/e^{2}n)\sum_{i}\tau^{-1}_{i}$, where $\tau^{-1}_{i}$ are
independent scattering rates for the different sources of scattering [111]. A
DC current induced increase in the electron-electron scattering rate
$\tau_{ee}^{-1}$ therefore results in a correction to the resistivity on the
order of $\tau_{ee}^{-1}$ at low field.
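For reference, evaluating Eq. (7) with the fitted $\rho_{xx}(0)$ and $I_{0}$ shows the background reaches order an ohm only at the highest currents (a quick evaluation with the quoted fit values):

```python
rho_0 = 1.81  # Ohm, fitted rho_xx(0) (12-24 mT average)
I_0 = 11.4    # uA, fitted scale current

def d_rho_bg(I_uA):
    """Background quadratic correction of Eq. (7)."""
    return rho_0 * (I_uA / I_0)**2

for I in (2.0, 5.0, 10.0):
    print(f"I_DC = {I:4.1f} uA -> background = {d_rho_bg(I):.2f} Ohm")
```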
In an ideal 2DEG, the following analytical expression for the evolution of the
electron-electron scattering rate with DC current, rather than with
temperature, was derived by Chaplik [112] and by Giuliani and Quinn [27]:
$\tau_{ee}^{-1}=\frac{E_{F}}{4\pi\hbar}\left(\frac{\Delta}{E_{F}}\right)^{2}\left[\ln\left(\frac{E_{F}}{\Delta}\right)+\ln\left(\frac{2Q_{TF}}{k_{F}}\right)+\frac{1}{2}\right],$
(8)
where $\Delta$ is the excitation (or excess) energy relative to $E_{F}$
(satisfying $\Delta\ll\hbar^{2}k_{F}Q_{TF}/m^{*}$), $k_{F}$ is the Fermi
wavevector, $Q_{TF}$=$2m^{*}e^{2}/4\pi\epsilon_{r}\epsilon_{0}\hbar^{2}$ is
the 2D Thomas-Fermi screening wave vector, $\epsilon_{r}$ is the dielectric
constant ($\sim$13.1 for GaAs), and $\epsilon_{0}$ is the vacuum permittivity.
For a sufficiently small excess energy, Eq. (8) is approximately quadratic
with respect to DC current. In our measurements, over the DC current range
probed, the excess energy is small. The average resistance between 0 and 10
$\mu$A at zero field $R_{av}$ is approximately 40 $\Omega$, and so we estimate
the maximum excess energy $\Delta$ to be of order $eV_{DC}\sim$0.4 meV, where
$V_{DC}$ is the DC voltage drop between the two voltage probes along the Hall
bar at 10 $\mu$A and B=0 T. This estimated value of $\Delta$ satisfies the
condition $\Delta\ll\hbar^{2}k_{F}Q_{TF}/m^{*}$ since
$\hbar^{2}k_{F}Q_{TF}/m^{*}$=24.7 meV. However, we note that the quantum
interference experiment by Yacoby et al.[113] validating this theory suggested
that $\Delta$ is proportional to $eV_{DC}$ and the actual excess energy is
smaller than the applied voltage[114].
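The numbers quoted in this paragraph can be reproduced from Eq. (8) with the paper's parameters (a sketch; $R_{av}\approx$ 40 $\Omega$ and the simple identification $\Delta=eI_{DC}R_{av}$ are taken from the text, and $l_{ee}=v_{F}\tau_{ee}$):

```python
import math

hbar = 1.054571817e-34
e = 1.602176634e-19
m_star = 0.067 * 9.1093837015e-31
eps0, eps_r = 8.8541878128e-12, 13.1
n = 2.0e15

k_F = math.sqrt(2 * math.pi * n)
E_F = (hbar * k_F)**2 / (2 * m_star)
v_F = hbar * k_F / m_star
Q_TF = 2 * m_star * e**2 / (4 * math.pi * eps_r * eps0 * hbar**2)

# Upper bound on Delta for Eq. (8) to hold
bound_meV = hbar**2 * k_F * Q_TF / m_star / e * 1e3
print(f"hbar^2 k_F Q_TF / m* = {bound_meV:.1f} meV")  # close to quoted 24.7 meV

def l_ee(I_uA, R_av=40.0):
    """l_ee = v_F * tau_ee with tau_ee^-1 from Eq. (8), Delta = e I R_av."""
    Delta = e * I_uA * 1e-6 * R_av            # excess energy, J
    rate = (E_F / (4 * math.pi * hbar)) * (Delta / E_F)**2 * (
        math.log(E_F / Delta) + math.log(2 * Q_TF / k_F) + 0.5)
    return v_F / rate

print(f"l_ee(10 uA) ~ {l_ee(10.0)*1e6:.0f} um")  # of order W ~ 15 um
```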
Figure 7: (a) $\rho_{xx}$ versus B traces on sweeping the B-field for
different DC currents (B-field sweep rate: 10 mT/min for zero current trace
and 20 mT/min for all other traces). (b)
$\rho_{xx}^{*}$=$\rho_{xx}-\Delta\rho_{xx}^{bg}$ versus B-field. For each of
the traces in (a) the background quadratic current dependence
$\Delta\rho_{xx}^{bg}$ has been removed. For $|B|\lesssim 0.01$ T, we can see
an increasingly strong decrease in $\rho_{xx}^{*}$ on raising the DC current
which we attribute to hydrodynamics. (c) Solid lines: DC current-dependent
deviation $\Delta\rho_{xx}^{*}$=$\rho_{xx}^{*}-\rho_{xx,I=0}^{*}$ of
$\rho_{xx}^{*}$ from the zero current residual $\rho_{xx,I=0}^{*}$, namely the
hydrodynamic component. Dashed lines: $r_{H}\Delta\rho_{xx}^{hyd}$ fit to the
data in a perturbative approach, where $r_{H}$ is a fitting parameter
reflecting the impact of the viscous correction, and $\Delta\rho_{xx}^{hyd}$
is from the hydrodynamic theory as described by Eq. (9). For the fits,
$W_{eff}$=11 $\mu$m is used. The distinct “dips” at $|B|\sim$ 5 mT, and the
weaker “dips” at $|B|\sim$ 8 mT, are artifacts of the methodology due to the
subtraction of the zero current trace which has peaks at $|B|\sim$ 5 mT and
weak “shoulders” at $|B|\sim$ 8 mT. Both constitute the double-peak feature,
ballistic in origin, discussed in the nMR section, which itself has a DC
current dependence. (d) $l_{ee}$ and $|r_{H}|$ extracted from fits of the
traces in (c) plotted against DC current. As $I_{DC}$ increases, $l_{ee}$
decreases and the hydrodynamic component ($|r_{H}|$) becomes stronger as
expected.
As revealed in Fig. 6, in the narrow range around zero field ($|B|\lesssim 5$
mT), $\rho_{xx}$ does not follow the $I_{DC}^{2}$ dependence. Specifically, at
$B$=0 T, $\rho_{xx}$ decreases with $I_{DC}$ before reaching a minimum at
$\sim\pm 4$ $\mu$A, and then increases at higher DC current [also see Fig.
2(d)]. This behavior is reminiscent of that observed in an early study by
Molenkamp and de Jong [30, 32] in a wire formed from a 2DEG for which the
decrease in the differential resistivity with increasing $I_{DC}$ was assigned
to a hydrodynamic effect. On subtracting the background quadratic dependence
$\Delta\rho_{xx}^{bg}$ from $\rho_{xx}$ at $B$=0 T, relative to the zero
current value of $\rho_{xx}$, we obtain the negative residual
$\Delta\rho_{xx}^{*}$=$\rho_{xx}-\rho_{xx,I=0}-\Delta\rho_{xx}^{bg}$, which
exhibits a linear dependence on DC current as shown in the inset to Fig. 6. In
other words,
$\rho_{xx}$ versus DC current at $B$=0 T is the sum of two components: a
positive component $\Delta\rho_{xx}^{bg}$ quadratic in DC current related to
the electron-electron scattering rate, and a negative component
$\Delta\rho_{xx}^{*}$ linear in DC current. We attribute the latter to a DC
current-induced hydrodynamic effect, driven by the enhancement of conductance
of electrons in the hydrodynamic regime as expected in the ballistic regime
[65, 115], which leads to a negative hydrodynamic correction in the
resistivity $\Delta\rho_{xx}^{*}$. We note that in contrast when hydrodynamics
is induced by temperature in the non-ballistic regime, a positive correction
to the resistivity is expected as observed [51].
### VI.2 Hydrodynamic Magnetoresistance
In the previous subsection, we established that increasing $I_{DC}$ suppresses
$l_{ee}$, a necessary pre-condition to enter the hydrodynamic regime. In this
subsection, we analyze the evolution of $\rho_{xx}$ with $I_{DC}$ for small
B-fields ($|B|<10$ mT), and compare the change in $\rho_{xx}$ to that expected
from existing hydrodynamic theory. Figure 7(a) shows $\rho_{xx}$ versus B
traces for different DC currents up to 10 $\mu$A. The goal is to first
identify and then isolate the various components that can contribute to the DC
current dependence of $\rho_{xx}$. One component, as established in the
previous subsection (see Fig. 6), is the general background quadratic increase
in $\rho_{xx}$ with DC current, $\Delta\rho_{xx}^{bg}$; the other, near zero
field and the focus of our attention, is one that we will argue is
hydrodynamic in origin. By removing $\Delta\rho_{xx}^{bg}$ from $\rho_{xx}$
over the B range from -30 mT to +30 mT, and examining the residual
$\rho_{xx}^{*}$ plotted in Fig. 7(b), we find that for $|B|\lesssim$10 mT,
there is clearly a growing negative component to $\rho_{xx}$ with increasing
DC current. We can isolate this negative component from the nMR and the
double-peak feature by computing
$\Delta\rho_{xx}^{*}$=$\rho_{xx}^{*}-\rho_{xx,I=0}^{*}$, the DC current-
dependent deviation of $\rho_{xx}^{*}$ from the zero current residual
$\rho_{xx,I=0}^{*}$. The deviation is plotted in Fig. 7(c). We attribute this
deviation to hydrodynamics because of its growing amplitude with increasing DC
current, and decreasing amplitude with increasing B-field. The latter is
discussed in Ref. [66]. Next, we discuss our method to compare
$\Delta\rho_{xx}^{*}$ to existing theory.
Figure 8: (a) Deviation from the conventional (bulk) Hall resistivity
$\Delta\rho_{xy}=\rho_{xy}-\rho_{xy}^{bulk}$ for three values of DC current.
The plots are derived from $\rho_{xy}$ versus B traces on sweeping the B-field
at 20 mT/min. The black lines are fits to a sum of polynomials that allow
accurate division by $\rho_{xy}^{bulk}$. Plots are offset by 10 $\Omega$ for
clarity. The change in the slope at zero field from positive to negative with
increasing DC current is a signature of the growing influence of
hydrodynamics. The extrema near $|B|\sim$ 8 mT arise from ballistic transport
[116, 51, 71]. We note that the derivative of the $\rho_{xx}$ trace for
$I_{DC}$=8 $\mu$A has a close resemblance to the $\Delta\rho_{xy}$ trace for
$I_{DC}$=8 $\mu$A. (b) Calculated from the fits in panel (a), the ratio
$\Delta\rho_{xy}/\rho_{xy}^{bulk}$ is plotted to emphasize the hydrodynamic
component near 0 T. The negative value near 0 T in the 8 $\mu$A curve is a
clear sign of hydrodynamics. (c) Isolated hydrodynamic component
$\Delta\rho_{xy}^{*}/\rho_{xy}^{bulk}=(\Delta\rho_{xy}-\Delta\rho_{xy,I=0})/\rho_{xy}^{bulk}$
determined from curves in panel (b). Dashed line: Fit to $I_{DC}$=8 $\mu$A
curve following hydrodynamic theory Eq. (11); see text for parameters. For the
fit, $W_{eff}$=11 $\mu$m is used.
We now examine in further detail the experimental data in Fig. 7(c). In the
purely hydrodynamic regime, following Scaffidi et al. [66], the viscous
correction to the magnetoresistance for a 2DEG when $l_{ee}\ll W\ll l_{mfp}$
can be expressed as:
$\Delta\rho_{xx}^{hyd}=\frac{m^{*}}{e^{2}n}\frac{v_{F}l_{ee}}{W^{2}}\frac{3}{1+\left(\frac{{2l}_{ee}}{r_{c}}\right)^{2}}.$
(9)
For our Hall bar device, $W_{eff}$ $\sim$ $W\ll l_{mfp}$ is trivially
satisfied since $l_{mfp}$=145 $\mu$m. Using Eq. (8) with
$\Delta$=$eV_{DC}$=$eI_{DC}R_{av}$, $l_{ee}$=$v_{F}\cdot\tau_{ee}$ is
predicted to be smaller than $W$ for $I_{DC}\gtrsim 9.5$ $\mu$A. Therefore,
for an increasing DC current up to 10 $\mu$A, the maximum applied, the 2DEG
transitions from the ballistic regime ($W\ll l_{ee},l_{mfp}$) to the
hydrodynamic regime. This transitional phase has _both_ ballistic and
hydrodynamic contributions. To describe this regime we use a perturbative
approach, where we assume that (i) the 2DEG is initially in the ballistic
regime, (ii) the change in $\rho_{xx}$ with increasing DC current near zero
field is solely hydrodynamic in origin, and (iii) the change is proportional
to the viscous correction described in Eq. (9). In other words, the change in
$\rho_{xx}$ with increasing DC current following our model has the form:
$\Delta\rho_{xx}=\rho_{xx}-\rho_{xx,I=0}=\Delta\rho_{xx}^{bg}+r_{H}\Delta\rho_{xx}^{hyd},$
(10)
where $r_{H}$ is a dimensionless parameter characterising the relative
strength of the viscous correction at different DC currents. The term
$r_{H}\Delta\rho_{xx}^{hyd}$ in Eq. (10) corresponds to $\Delta\rho_{xx}^{*}$
determined from the experimental data and plotted in Fig. 7(c). Fitting
$r_{H}\Delta\rho_{xx}^{hyd}$ to $\Delta\rho_{xx}^{*}$, we can extract values
for both $l_{ee}$ and $r_{H}$. The obtained DC current dependencies of
$l_{ee}$ and $r_{H}$ are presented in Fig. 7(d). $l_{ee}$ is found to decrease
with increasing DC current, and its value is comparable to that calculated
from theory Eq. (8) [for example, at 10 $\mu$A, Eq. (8) predicts $l_{ee}$=14
$\mu$m and we obtain $l_{ee}\sim 11$ $\mu$m], which supports our hydrodynamic
interpretation in the small B-field regime. $|r_{H}|$ increases faster than
linearly with DC current, reflecting the growing hydrodynamic
component. Lastly, we estimate the electron shear viscosity [26], defined as
$\eta$=$v_{F}^{2}\tau_{ee}/4$, to be 0.7 $m^{2}$/s at 10 $\mu$A. In
comparison, in the work of Gusev et al. in Ref. [51] for which hydrodynamics
is temperature-induced and no DC current is applied, $\eta$ is found to be 0.3
$m^{2}$/s at T=1.4 K.
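The viscosity estimate above can be checked with a few lines of arithmetic. This is a sketch, not the authors' analysis code: the Fermi velocity $v_{F}\approx 2.5\times 10^{5}$ m/s is an assumed value (typical for a GaAs 2DEG of this kind and not restated in this excerpt), while $l_{ee}\sim 11$ $\mu$m is the fitted value at 10 $\mu$A quoted in the text.

```python
# Sketch of the shear-viscosity estimate eta = v_F^2 * tau_ee / 4 = v_F * l_ee / 4.
# v_F is an ASSUMED value (~2.5e5 m/s, typical for a GaAs 2DEG); l_ee is the
# fitted value ~11 um at I_DC = 10 uA quoted in the text.
v_F = 2.5e5        # Fermi velocity (m/s), assumed
l_ee = 11e-6       # electron-electron scattering length (m), from the fit

tau_ee = l_ee / v_F            # e-e scattering time (s)
eta = v_F**2 * tau_ee / 4      # shear viscosity (m^2/s)
print(f"tau_ee = {tau_ee*1e12:.0f} ps, eta = {eta:.2f} m^2/s")
# -> tau_ee = 44 ps, eta = 0.69 m^2/s, consistent with the quoted 0.7 m^2/s
```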
Comparing our experimental observations for DC current-induced hydrodynamics
in the differential longitudinal resistivity in Fig. 7(a) with those for
temperature-induced hydrodynamics in recent works of Gusev et al. [51] and
Raichev et al. [71], there is one notable difference. In our case, the “dip” at
zero field is seen to grow with increased DC current. In the case of Gusev et
al. and Raichev et al., the “dip” is seen to weaken with increased
temperature, and by 40 K the low-field double-peak feature has disappeared (a
single peak showing nMR remains). In the work of Raichev et al., a classical
kinetic model was used to compute the joint ballistic and hydrodynamic
contributions for temperature-induced hydrodynamics. In the theoretical
analysis the authors assume that boundary scattering is independent of
temperature, which is reasonable in their experiments. However, this is likely
not true for our case where we increase the DC current, which will lead to a
change of the effective edge potential due to the large bias along the edges
of the HB. Indeed, a large current is expected to make boundary scattering
less diffusive, owing to the averaging over a wide energy window, and hence
to an enhanced double-peak feature as observed in Fig. 7(a).
### VI.3 Hydrodynamic Contribution to Hall Resistivity
In the previous subsection, we discussed the growing hydrodynamic contribution
to $\rho_{xx}$ with increasing $I_{DC}$ near zero field. Similarly,
hydrodynamics affects $\rho_{xy}$. In Fig. 8(a), we show the DC current-
dependent deviation of $\rho_{xy}$ from the conventional (bulk) Hall
resistivity $\rho_{xy}^{bulk}$, namely
$\Delta\rho_{xy}$=$\rho_{xy}$-$\rho_{xy}^{bulk}$, for $I_{DC}$ values of 0, 4,
and 8 $\mu$A, where $\rho_{xy}^{bulk}$=$-B/en$. At B=0 T, the sign of the
slope changes from positive to negative with increasing DC current. Other than
a global sign difference, note that the form of the $I_{DC}$=8 $\mu$A trace is
similar to traces reported in Ref. [51] measured between 1.7 K and 40 K with
zero DC current, supporting our hydrodynamic interpretation. Following the
approach of Gusev et al. where the induced change with temperature was tracked
instead [51], the induced change with DC current we see can be better
visualized by examining the ratio $\Delta\rho_{xy}/\rho_{xy}^{bulk}$: see Fig.
8(b). As with $\rho_{xx}$, we analyze the change in $\rho_{xy}$ with DC
current near 0 T and compare to theory using a perturbative method. In the
purely hydrodynamic regime, incorporating earlier work by Alekseev [64], the
viscous Hall correction to $\rho_{xy}$ was derived by Scaffidi et al. [66] and
found to be:
$\frac{\Delta\rho_{xy}^{hyd}}{\rho_{xy}^{bulk}}=\left[-\frac{6}{1+(2l_{ee}/r_{c})^{2}}\left(\frac{l_{ee}}{W}\right)^{2}\right],$
(11)
where $\Delta\rho_{xy}^{hyd}$=$\rho_{xy}-\rho_{xy}^{bulk}$ is the hydrodynamic
contribution to the bulk Hall resistivity. Following the same approach as for
$\rho_{xx}$, the change in the deviation of $\rho_{xy}$ with increasing DC
current relative to the zero current deviation
$\Delta\rho_{xy}^{*}$=$\Delta\rho_{xy}-\Delta\rho_{xy,I=0}$ corresponds to
$r_{H}\Delta\rho_{xy}^{hyd}$ in a perturbative method, where $r_{H}$ is a
dimensionless fitting parameter reflecting the impact of the viscous
correction to $\rho_{xy}$ at different DC currents. In Fig. 8(c),
$\Delta\rho_{xy}^{*}/\rho_{xy}^{bulk}=(\Delta\rho_{xy}-\Delta\rho_{xy,I=0})/\rho_{xy}^{bulk}$
is plotted, and we find that increasing the DC current “amplifies” the minimum
at zero field. The 8 $\mu$A curve is fitted to
$r_{H}\Delta\rho_{xy}^{hyd}/\rho_{xy}^{bulk}$. We find the minimum value at 0
T corresponds to an $l_{ee}$ of $29$ $\mu$m, with an $r_{H}$ factor of 0.019. These
values are consistent with those from our earlier analysis of $\rho_{xx}$ and
theory Eq. (8) for $l_{ee}$ [see also Fig. 7(d)].
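To illustrate the scale of the fitted Hall correction, a minimal sketch evaluating Eq. (11) at $B=0$ (where $r_{c}\to\infty$ and the magnetic-field factor tends to 1), using the fitted values $l_{ee}=29$ $\mu$m, $W_{eff}=11$ $\mu$m and $r_{H}=0.019$ quoted above:

```python
# Evaluate the perturbative Hall correction r_H * Delta_rho_xy^hyd / rho_xy^bulk
# of Eq. (11) at B = 0, where r_c -> infinity and the field-dependent factor -> 1.
# l_ee, W_eff and r_H are the fitted values quoted in the text.
l_ee = 29e-6    # fitted e-e scattering length (m)
W_eff = 11e-6   # effective electronic width (m)
r_H = 0.019     # fitted dimensionless weight of the viscous correction

ratio_hyd = -6.0 * (l_ee / W_eff) ** 2   # Eq. (11) at B = 0
minimum = r_H * ratio_hyd                # implied depth of the fitted 0 T minimum
print(f"Delta_rho_xy^hyd/rho_xy^bulk = {ratio_hyd:.1f}, r_H-weighted minimum = {minimum:.2f}")
```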
Echoing our commentary at the end of the previous subsection, comparing our
experimental observations for DC current-induced hydrodynamics in the
differential transverse resistivity in Fig. 8(a) with those for temperature-
induced hydrodynamics in recent works of Gusev et al. [51] and Raichev et al.
[71] there is again a notable difference. In our case, the change in sign of
the slope at zero field occurs on increasing the DC current (which increases
the electron temperature but not the bath temperature). In the case of
Gusev et al. and Raichev et al., the distinctive line shape is seen to weaken
with increased temperature, and by 40 K the deviation from the bulk Hall
resistivity is small. High temperature also leads to both an increase in the
phonon population and a decrease of the mobility.
## VII Hall-field Induced Resistance Oscillations
Figure 9: (a) Fan diagram of HIRO maxima in $\partial\rho_{xx}/\partial|B|$
for each peak up to the sixth order. Points are extracted from the Fig. 2(a)
data set for HIROs in the +I and +B quadrant. Triangles: Maxima in
$\partial\rho_{xx}/\partial|B|$ of the “1.5” features visible in three of the
four quadrants in Fig. 2(a) (the “1.5” feature in the +I and +B quadrant is
not visible). Dashed line corresponds to (effective) M=1.44. (b) All HIRO
maxima collapse onto a single line following the relation $B\cdot M\propto
I_{DC}/W$. The slope is used to extract the parameter $\gamma$. In panel (b)
we have used $W$=15 $\mu$m rather than $W_{eff}$ to calculate the nominal
current density so the value of the slope can be compared to those in the
literature.
So far, in the phase diagram, we have described in detail the SdH oscillations
and the accompanying PhI of the SdH oscillations located near zero current,
and the nMR and the effects of DC current-induced hydrodynamics near zero
field. Away from both $B$=0 T and $I_{DC}$=0 $\mu$A we observe HIROs. In Fig.
2(a), HIRO peaks up to the seventh order that fan out from the origin are
identifiable. HIROs were first reported by Yang et al. [5], and have since
been studied extensively both experimentally [6, 7, 8, 9, 10, 11, 12, 13, 14,
15, 16, 17, 18, 19] and theoretically [117, 118, 119, 120, 4, 3]. HIROs emerge
in the weak field magnetoresistance in high mobility 2DEGs due to resonant
electron transitions between LLs that are spatially tilted along the direction
of the transverse electric (Hall) field when DC current is passed along the
Hall bar. In the original work by Yang et al. [5], it was explained that Zener
tunneling can occur when the electron hopping distance $\Delta Y_{M}$ between
different Landau levels is equal to $\gamma r_{c}$, where $\Delta
Y_{M}=M\hbar\omega_{c}/eE_{H}$, $M$ is an integer, $E_{H}$ is the Hall field,
and $\gamma$ $\approx 2$ according to theory. This leads to HIRO maxima in
$\partial\rho_{xx}/\partial|B|$ with position in the $I_{DC}-B$ plane obeying
the condition:
$B=\gamma\frac{\sqrt{2\pi}m^{*}}{e^{2}\sqrt{n}}\frac{I_{DC}}{WM}.$ (12)
For the regime $k_{B}T\gg\hbar\omega_{c}$, where SdH oscillations are smeared
out by temperature, Vavilov et al. [119] found that the HIROs in the
differential resistivity can be approximately described by the expression:
$\rho_{xx,HIRO}\approx\frac{16m^{*}}{\pi e^{2}n\tau_{\pi}}\exp\left(-\frac{2\pi}{\omega_{c}\tau_{q}}\right)\cos\left(2\pi\frac{2I_{DC}k_{F}}{enW\omega_{c}}\right),$
(13)
where $\tau_{\pi}$ is the back-scattering lifetime. In the derivation [119],
the oscillatory dependence of the differential resistivity stems from the
product of the oscillatory DOS and the oscillatory non-equilibrium
distribution function. When integrated over the non-equilibrium energy range,
the remaining leading oscillatory term is squared, which explains the
additional factor of two in the exponential damping factor
$\exp(-2\pi/\omega_{c}\tau_{q})$ in Eq. (13) as compared to the SdH
oscillation damping factor in Ando’s expression in Eq. (1). The amplitude of
the induced oscillation according to Eq. (13) is proportional to the back-
scattering rate $\sim\tau_{\pi}^{-1}$, which is related to sharp disorder
enabling a “kick” from one cyclotron orbit into another. The full theory by
Vavilov et al. also takes into account effects at relatively low current
($2\pi I_{DC}k_{F}/enW\omega_{c}<1$) due to variation of the occupation
factors for the electronic states. We now compare the experimental data to the
theories in Refs. [5, 119], report the extracted fitting parameters, and
discuss other relevant observations and implications.
We first analyze our experimental HIRO data with the model of Yang et al. [5].
We extract the HIRO maxima in the derivative of the differential resistivity
$\partial\rho_{xx}/\partial|B|$ and compare with the expression in Eq. (12).
The positions of the HIRO maxima for peaks up to the sixth order are plotted
in Fig. 9(a). The position of the maxima can be collapsed onto a single line,
essentially the line tracking the first-order HIRO peak, following the
relation $B\cdot M\propto I_{DC}/W$. With $W$=15 $\mu$m, the slope of the
single line in Fig. 9(b) is found to be 324 mT/(A/m), and from Eq. (12), we
obtain $\gamma$=$2.4$. This result is consistent with the theoretical value of
$\gamma\approx 2.0$ reported in the work of Yang et al. [5], and their
empirically determined $\gamma$ values (1.7-2.1) for Hall bars with a 2DEG
density close to that for our Hall bar device. Note that had we used
$W_{eff}\sim$ 11 $\mu$m to calculate the nominal current density, we would
obtain $\gamma$=$2.1$.
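The extraction of $\gamma$ from the collapsed fan diagram can be sketched by inverting Eq. (12). The 2DEG density is not restated in this section, so $n\approx 2.0\times 10^{11}$ cm$^{-2}$ is used here as an assumed representative value; the slope is the fitted 324 mT/(A/m) quoted above.

```python
import math

# Invert Eq. (12): gamma = slope * e^2 * sqrt(n) / (sqrt(2*pi) * m*),
# where slope = d(B*M)/d(I_DC/W) from the collapsed fan diagram.
# n is an ASSUMED density (~2.0e11 cm^-2); m* = 0.067 m_e for GaAs.
e = 1.602e-19                  # elementary charge (C)
m_e = 9.109e-31                # electron mass (kg)
m_star = 0.067 * m_e           # GaAs effective mass (kg)
n = 2.0e15                     # 2DEG density (m^-2), assumed
slope = 0.324                  # fitted slope of B*M vs I_DC/W, in T/(A/m)

gamma = slope * e**2 * math.sqrt(n) / (math.sqrt(2 * math.pi) * m_star)
print(f"gamma = {gamma:.2f}")  # ~2.4, consistent with the value quoted in the text
```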
Figure 10: (a) $\partial\rho_{xx}/\partial B$ versus B for selected DC
currents. $\rho_{xx}$ data are from sections through the map in Fig. 2 (a).
Black lines: fits to data with derivative of Eq. (13). The plots are offset by
200 $\Omega/T$ for clarity. (b) HIRO parameters $\tau_{q}$ and $W_{eff}$
obtained from fitting $\partial\rho_{xx}/\partial B$ versus B sections for
positive $I_{DC}$. Dashed line: $1/\tau_{q}\propto I_{DC}^{2}$ fit to
$\tau_{q}$ values. (c) Log plot versus 1/B of SdH oscillation peak and valley
amplitudes in $r_{xx}$ at $I_{DC}$=0 $\mu$A, $\Delta R_{SdH}$, and HIRO peak
and valley amplitudes in $r_{xx}$ at $I_{DC}$=5 $\mu$A, $\Delta r_{HIRO}$. We
use both positive and negative B-field extrema for the HIRO data points. For
HIROs, the 5 $\mu$A section is selected for analysis because for this current
there are a sufficient number of resolved extrema to analyze, and the “1.5”
feature that could influence the minimum between the first- and second-order
peaks has not yet developed. Red dashed line for SdH oscillations: fit with
Eq. (1) where $D_{T}$ is the thermal damping factor, and $\tau_{q}^{SdH}$ is
determined from the slope ($\tau_{q}^{SdH}$=11 ps). Grey dashed line for
HIROs: fit with Eq. (13) where $\tau_{q}^{HIRO}$ at 5 $\mu$A is determined
from the slope ($\tau_{q}^{HIRO}$=29 ps).
In the model of Vavilov et al. [119], the amplitude of the HIROs depends on
$\tau_{\pi}$ and $\tau_{q}$, respectively the back-scattering and quantum
lifetimes. We fit $\rho_{xx}$ versus $B$ sections with Eq. (13) to obtain
$\tau_{\pi}$, $\tau_{q}$, and additionally $W_{eff}$. Note that rather than
assume $W$ is fixed and equal to the nominal lithographic width of the Hall
bar, $W$ is treated as a fitting parameter which we call $W_{eff}$.
Furthermore, we choose to fit $\partial\rho_{xx}/\partial B$ [see Fig. 10(a)],
essentially the partial derivative of Eq. (13) with respect to B, instead of
$\rho_{xx}$, to remove the aforementioned background linear B dependence in
the data, and to reduce fitting errors, although fits to either are equivalent
and give nearly identical fitting parameters. Parameters $\tau_{q}$ and
$W_{eff}$ obtained are presented in Fig. 10(b). $\tau_{q}$ is discussed more
extensively in the following paragraph. The effective electronic width,
$W_{eff}$, is found to be $\sim$ 11 $\mu$m over the full 10 $\mu$A current
range. Parameter $W_{eff}$ is determined with high accuracy because it enters
Eq. (13) through the HIRO frequency and is therefore independent of the HIRO
amplitude. The effective electronic width is smaller than the lithographic width
of the Hall bar W=15 $\mu$m. As commented on earlier, we attribute the
difference to a combination of undercut during the wet etching step in the
fabrication of the Hall bar, and sidewall depletion. For the back-scattering
lifetime, we obtained $\tau_{\pi}\approx$ 5 ns with no significant current
dependence. We note that the corresponding scattering length associated with
the back-scattering process is $l_{\pi}=v_{F}\tau_{\pi}\simeq 1$ mm. This
length scale is much larger than our Hall bar device size and therefore cannot
be interpreted as the typical distance between back-scattering impurities.
Figure 11: $\rho_{xx}$ versus B sections through the map in Fig. 2 (a) for
negative DC currents between -2 $\mu$A and -10 $\mu$A. Plots are offset from
each other by +0.06 $\Omega$ for clarity. The $I_{DC}$=-2 $\mu$A plot has no
offset. The plots are arranged to emphasize the emergence of the “1.5”
features at high current.
The values of the quantum lifetime extracted from HIROs in Fig. 10(b) are
notable for two reasons. First, $\tau_{q}$ decreases with increasing DC
current, and second, $\tau_{q}$ here far exceeds the value of $\tau_{q}$=11.5
ps extracted earlier from the SdH oscillations [see Fig. 3]. Concerning the
first point, in the model of Vavilov et al. [119], the HIRO amplitude does
not depend on DC current, whereas our data show that the amplitude of the
HIROs decreases with increasing DC current [see red dashed line ellipse in
Fig. 6]. This decrease was also observed in another experiment featuring a
2DEG in a MgZnO/ZnO hetero-structure [17], and was attributed to enhanced
electron-electron scattering with increasing electron temperature. The
relationship $1/\tau_{q}\propto I_{DC}^{2}$ was found. Using the same analysis
for our Hall bar, we obtain
$\tau_{q}(0)/\tau_{q}(I_{DC})=1+(I_{DC}/I_{0})^{2}$ where (the extrapolated
zero current quantum lifetime) $\tau_{q}(0)$=$41.2\pm 0.7$ ps, and
$I_{0}$=$8.66\pm 0.04$ $\mu$A [see fit in Fig. 10(b)]. The value of $I_{0}$
here is comparable to that obtained from the background quadratic dependence
to $\rho_{xx}$ discussed in Sec. VI.1 relating to Eq. (7). Concerning the
second point, the extracted value of $\tau_{q}$ from fitting HIROs, which we
now explicitly identify as $\tau_{q}^{HIRO}$, varies from $\sim$40 ps at 2
$\mu$A to $\sim$18 ps at 10 $\mu$A, whereas the extracted value of $\tau_{q}$
from fitting the SdH oscillations _at zero DC current_ , also now explicitly
identified as $\tau_{q}^{SdH}$, is $\sim$11 ps. An alternative method
presented in Refs. [9, 18, 121] to extract the exponential damping of SdH
oscillations and HIROs is to plot the logarithm of the SdH oscillation and
HIRO extrema amplitudes as a function of inverse B-field as shown in Fig.
10(c). The slope of the plots corresponds to $-\zeta\pi/\mu_{q}$, where
$\mu_{q}=e\tau_{q}/m^{*}$ is the quantum mobility. From Eqs. (1) and (13) we
expect $\zeta=1$ for the SdH oscillations and $\zeta=2$ for the HIROs, i.e.,
based on these equations the slope for HIROs should be twice the slope for the
SdH oscillations. However, this is clearly not the case as can be seen in Fig.
10(c). For example, for the 5 $\mu$A data, the slope for the HIROs is in fact
even less than the slope for the SdH oscillations (consistent with the
observation that the value for $\tau_{q}^{HIRO}$ is a factor of two to four
times larger than the value for $\tau_{q}^{SdH}$). Note that the values
obtained for the quantum lifetime do not depend significantly on the methods
used [when we compare the value of $\tau_{q}^{SdH}$ obtained from the full fit
method in Fig. 3(b) with that from the fit for the alternative method in Fig.
10(c), 11.5 ps versus 11 ps, and likewise the value of $\tau_{q}^{HIRO}$
obtained from the full fit method in Fig. 10(a) with that from the fit for the
alternative method in Fig. 10(c), 31 ps versus 29 ps both for 5 $\mu$A
current], i.e., both methods lead to very similar values. Examining closely
the theory in Ref. [119], it is assumed that the quantum lifetime for SdH
oscillations and HIROs are the same. Furthermore, in the derivation of Eq.
(13) it is also assumed that $k_{B}T\gg\hbar\omega_{c}$. This equates to a
temperature exceeding 0.5 K at 25 mT, which clearly does not hold for our
experimental situation. This may explain the observed discrepancy between
$\tau_{q}^{SdH}$ and $\tau_{q}^{HIRO}$, and suggests that a theory for HIROs
extended to the low temperature regime is needed in order to correctly explain
the observed amplitude of the HIROs. It is likely that at low temperature,
when temperature smearing is small, the amplitude of the HIROs is proportional
to the oscillatory DOS and not its square as in the derivation of Eq. (13)
that assumes $k_{B}T\gg\hbar\omega_{c}$. We stress that the quantum lifetime
is normally determined from the SdH oscillations ($\tau_{q}^{SdH}$) _at zero
DC current_. Inferring that this quantum lifetime equals the one determined
solely from measurements of HIROs ($\tau_{q}^{HIRO}$), a non-linear transport
phenomenon observed at finite DC current, must therefore be done with
caution [18].
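Two quick numerical checks of this discussion (a sketch, not the authors' analysis code): the fitted heating law $\tau_{q}(0)/\tau_{q}(I_{DC})=1+(I_{DC}/I_{0})^{2}$ reproduces the quoted range of $\tau_{q}^{HIRO}$, and the expected slopes $-\zeta\pi/\mu_{q}$ show why the measured HIRO slope in Fig. 10(c) comes out smaller than the SdH slope once $\tau_{q}^{HIRO}$ exceeds $2\,\tau_{q}^{SdH}$.

```python
import math

# (1) Current dependence of the HIRO quantum lifetime from the fitted heating law
#     tau_q(I) = tau_q(0) / (1 + (I/I0)^2), with tau_q(0) = 41.2 ps, I0 = 8.66 uA.
tau_q0, I0 = 41.2e-12, 8.66e-6
tau_q = lambda I: tau_q0 / (1 + (I / I0) ** 2)
print(f"tau_q(2 uA) = {tau_q(2e-6)*1e12:.0f} ps, tau_q(10 uA) = {tau_q(10e-6)*1e12:.0f} ps")
# -> ~39 ps and ~18 ps, matching the quoted ~40 ps to ~18 ps range.

# (2) Slopes of log(amplitude) vs 1/B: |slope| = zeta * pi / mu_q, mu_q = e*tau_q/m*.
#     zeta = 1 for SdH (tau_q^SdH = 11 ps); zeta = 2 for HIROs (tau_q^HIRO = 29 ps at 5 uA).
e, m_star = 1.602e-19, 0.067 * 9.109e-31
mu_q = lambda tau: e * tau / m_star          # quantum mobility (m^2/Vs)
slope_sdh = 1 * math.pi / mu_q(11e-12)       # SdH slope magnitude (T)
slope_hiro = 2 * math.pi / mu_q(29e-12)      # HIRO slope magnitude (T)
print(f"|slope| SdH = {slope_sdh:.3f} T, HIRO = {slope_hiro:.3f} T")
assert slope_hiro < slope_sdh  # HIRO damping is weaker despite zeta = 2
```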
Lastly, we observe unexpected HIRO-like features located between the first-
order and the second-order HIRO peaks at high DC current. These features are
marked by black arrows in Fig. 2(a) in three of the four quadrants, and appear
for a DC current exceeding 6 $\mu$A and a B-field above 0.1 T [see Fig. 9(a),
and also Fig. 11]. Although not expected in the standard picture of HIROs,
these features are HIRO-like in the sense that they lie on a line following
the relation $B\propto I_{DC}$ and appear as part of the fan formed from the
expected HIRO maxima (M=1, 2, 3, …). We have referred to them as “1.5”
features since their _effective_ index $M\sim 1.5$. We do not claim $M$ is
quantized at 3/2: specifically from the fit in Fig. 9(a) we determine
$M$=1.44$\pm$0.04. Currently we have no clear understanding of the origin of
the “1.5” feature, but speculate that it may be related to the lifting of the
spin degeneracy which is observed in the Hall bar measured in the SdH
oscillations at a similar $B$-field ($\sim$ 0.15 T: data not shown). We note
that Hatke et al. [122] interestingly reported a HIRO-like feature at $M$=0.5,
also referred to as fractional HIRO in Ref. [3], however this was observed in
a measurement performed under microwave illumination. We stress that we have
observed the “1.5” features with no microwaves applied.
## VIII Conclusion
$W_{eff}$ ($\mu$m) | $l_{ee}$ ($\mu$m) | $\eta$ ($m^{2}$/s) | $\tau_{q}^{SdH}$ (ps) | $\tau_{q}^{HIRO}$ (ps) | $\tau_{\pi}$ (ns)
---|---|---|---|---|---
11 | 11 | 0.7 | 11.5 | 18 - 40 | 5
Table 2: Summary of key extracted parameters from Secs. IV to VII. The values
of $l_{ee}$ and $\eta$ are for $I_{DC}$=10 $\mu$A determined in Sec. VI.
$\tau_{q}^{HIRO}$ depends on $I_{DC}$, hence a range of values is given (see
Sec. VII).
In conclusion, for an ultra-high mobility 2DEG in a narrow Hall bar we
measured $\rho_{xx}$ as a function of B-field and DC current at the base
temperature of a dilution refrigerator. This measurement approach provides in
a single 2D map a “global” view of different low-magnetic-field non-linear
phenomena in a 2DEG. We identified and analyzed several different phenomena
(and the boundaries between these phenomena): SdH oscillations, PhI of the SdH
oscillations, nMR, a double-peak feature, and HIROs. By analyzing the data
with existing theories, we extracted relevant parameters as summarized in
Table 2. For the SdH oscillations, we found that the Ando formula in Eq. (1)
combined with an electron heating model can explain both the initial decrease
in the SdH oscillation amplitude and then the subsequent PhI of the SdH
oscillations with increasing DC current. As the zero-field resistance is
largely determined by the nMR in our narrow Hall bar, we found that a fit of
the SdH oscillations was essential to determine the appropriate resistance
($R_{0}$) needed to estimate the mobility. We confirmed that the nMR can be
explained purely as a ballistic effect with a numerical simulation. From fits
of the HIROs, we extracted the effective electronic width of the Hall bar
$W_{eff}$ $\sim$ 11 $\mu$m. We also found that the quantum lifetime determined
from analysis of the SdH oscillations at zero DC current, $\tau_{q}^{SdH}$,
does not match the quantum lifetime determined from analysis of the HIROs at
finite DC current, $\tau_{q}^{HIRO}$. The factor of two to four difference
deserves further attention and would suggest a more careful theoretical
treatment of HIROs at low temperature is needed. This also implies that HIROs,
which are a non-linear effect, cannot be used in a direct way to infer the
quantum lifetime in the linear regime (from SdH oscillations). Unexpectedly, a
HIRO-like feature nestled between the first- and second-order HIRO maxima was
observed with an effective index $M$=1.44$\pm$0.04. The origin of this feature
is not understood at present.
A major part of our work here has demonstrated the growing influence of
hydrodynamics brought about by applying a DC current, rather than changing the
temperature as more commonly employed, to reduce the electron-electron
scattering length $l_{ee}$ to the point that this length becomes comparable to
or even smaller than $W_{eff}$. For a DC current of $\sim$ 10 $\mu$A, we
determined $l_{ee}\sim W_{eff}$ $\sim$ 11 $\mu$m. DC current-induced
hydrodynamics complements temperature-induced hydrodynamics. The current-
induced case is a non-linear process in which the electron distribution is
pushed strongly out of equilibrium. In the temperature-induced case the
electron distribution remains thermal (and transport is linear), but raising
the temperature also increases the phonon population, which reduces the carrier
mobility and eventually suppresses hydrodynamic behavior. From a detailed analysis of the
data within a -10 mT to +10 mT window, we isolated the growing hydrodynamic
contribution to both the differential longitudinal and transverse (Hall)
resistivities with increasing DC current. For the former (latter) the
signature was found to be a growing “dip” (a change in sign of the slope) at
zero field. We quantified the hydrodynamic corrections using a perturbative
method. In theoretical works such as those in Refs. [66, 71], the
magnetoresistance in the hydrodynamic and ballistic regimes is modelled for
linear transport, whereas a full model that incorporates non-linear transport
relevant to our experimental situation merits investigation.
In the measurements described, the maximum DC current applied was limited to
10 $\mu$A and the temperature was fixed at the base temperature of the
dilution refrigerator. Higher DC current and higher temperature could further
enhance hydrodynamic corrections and bring other phenomena into play.
Measurement of differential resistivity maps as we have demonstrated here at
low magnetic field is a powerful and convenient approach to probe numerous
phenomena.
We would like to thank Aviv Padawer-Blatt for his help with experiments. The
authors acknowledge support from INTRIQ and RQMP. This research is funded in
part by FRQNT, NSERC and the Gordon and Betty Moore Foundation’s EPiQS
Initiative, Grant GBMF9615 to L. N. Pfeiffer, and by the National Science
Foundation MRSEC grant DMR 2011750 to Princeton University.
## References
* Prange and Girvin [1990] R. E. Prange and S. M. Girvin, eds., _The Quantum Hall Effect_ (Springer New York, 1990).
* Chakraborty and Pietiläinen [1995] T. Chakraborty and P. Pietiläinen, _The Quantum Hall Effects_ (Springer Berlin Heidelberg, 1995).
* Dmitriev _et al._ [2012] I. Dmitriev, A. Mirlin, D. Polyakov, and M. Zudov, Rev. Mod. Phys. 84, 1709 (2012).
* Vitkalov [2009] S. Vitkalov, Int. J. Mod. Phys. B 23, 4727 (2009).
* Yang _et al._ [2002] C. Yang, J. Zhang, R. Du, J. Simmons, and J. Reno, Phys. Rev. Lett. 89, 076801 (2002).
* Bykov _et al._ [2005] A. A. Bykov, J.-q. Zhang, S. Vitkalov, A. K. Kalagin, and A. K. Bakarov, Phys. Rev. B 72, 245307 (2005).
* Zhang _et al._ [2007a] W. Zhang, M. A. Zudov, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. 98, 106804 (2007a).
* Zhang _et al._ [2007b] J.-q. Zhang, S. Vitkalov, A. Bykov, A. Kalagin, and A. Bakarov, Phys. Rev. B 75, 081305 (2007b).
* Zhang _et al._ [2007c] W. Zhang, H.-S. Chiang, M. Zudov, L. Pfeiffer, and K. West, Phys. Rev. B 75, 041304 (2007c).
* Bykov [2008] A. A. Bykov, Sov. Phys. JETP 88, 394 (2008).
* Dai _et al._ [2009] Y. Dai, Z. Yuan, C. Yang, R. Du, M. Manfra, L. Pfeiffer, and K. West, Phys. Rev. B 80, 041310 (2009).
* Hatke _et al._ [2009] A. T. Hatke, M. A. Zudov, L. N. Pfeiffer, and K. W. West, Phys. Rev. B 79, 161308 (2009).
* Hatke _et al._ [2010] A. T. Hatke, H.-S. Chiang, M. A. Zudov, L. N. Pfeiffer, and K. W. West, Phys. Rev. B 82, 041304 (2010).
* Hatke _et al._ [2011] A. T. Hatke, M. A. Zudov, L. N. Pfeiffer, and K. W. West, Phys. Rev. B 83, 081301 (2011).
* Bykov _et al._ [2012] A. Bykov, D. Dmitriev, I. Marchishin, S. Byrnes, and S. Vitkalov, Appl. Phys. Lett. 100, 251602 (2012).
* Wiedmann _et al._ [2011] S. Wiedmann, G. Gusev, O. Raichev, A. Bakarov, and J. Portal, Phys. Rev. B 84, 165303 (2011).
* Shi _et al._ [2017] Q. Shi, M. A. Zudov, J. Falson, Y. Kozuka, A. Tsukazaki, M. Kawasaki, K. von Klitzing, and J. Smet, Phys. Rev. B 95, 041411 (2017).
* Zudov _et al._ [2017] M. A. Zudov, I. A. Dmitriev, B. Friess, Q. Shi, V. Umansky, K. von Klitzing, and J. Smet, Phys. Rev. B 96, 121301 (2017).
* Mi _et al._ [2019] J. Mi, H. Liu, J. Shi, L. N. Pfeiffer, K. W. West, K. W. Baldwin, and C. Zhang, Phys. Rev. B 100, 235437 (2019).
* Studenikin _et al._ [2012] S. A. Studenikin, G. Granger, A. Kam, A. S. Sachrajda, Z. R. Wasilewski, and P. J. Poole, Phys. Rev. B 86, 115309 (2012).
* Panos _et al._ [2014] K. Panos, R. Gerhardts, J. Weis, and K. Von Klitzing, New J. Phys. 16, 113071 (2014).
* Baer _et al._ [2015] S. Baer, C. Rössler, S. Hennel, H. C. Overweg, T. Ihn, K. Ensslin, C. Reichl, and W. Wegscheider, Phys. Rev. B 91, 195414 (2015).
* Rossokhaty _et al._ [2016] A. V. Rossokhaty, Y. Baum, J. A. Folk, J. D. Watson, G. C. Gardner, and M. J. Manfra, Phys. Rev. Lett. 117, 166805 (2016).
* Yu _et al._ [2018] V. Yu, M. Hilke, P. J. Poole, S. Studenikin, and D. G. Austing, Phys. Rev. B 98, 165434 (2018).
* Gurzhi [1963] R. Gurzhi, Sov. Phys. JETP 44, 771 (1963).
* Gurzhi [1968] R. Gurzhi, Sov. Phys. Usp 11, 255 (1968).
* Giuliani and Quinn [1982] G. F. Giuliani and J. J. Quinn, Phys. Rev. B 26, 4421 (1982).
* Gurzhi _et al._ [1989] R. Gurzhi, A. Kalinenko, and A. Kopeliovich, Sov. Phys. JETP 69, 863 (1989).
* Jaggi [1991] R. Jaggi, J. Appl. Phys. 69, 816 (1991).
* Molenkamp and de Jong [1994] L. W. Molenkamp and M. J. M. de Jong, Phys. Rev. B 49, 5038 (1994).
* de Jong and Molenkamp [1995] M. de Jong and L. Molenkamp, Phys. Rev. B 51, 13389 (1995).
* Molenkamp and de Jong [1994] L. Molenkamp and M. de Jong, Solid State Electron. 37, 551 (1994).
* Eaves [1998] L. Eaves, Physica B 256-258, 47 (1998).
* Eaves [1999] L. Eaves, Physica B 272, 130 (1999).
# Quenching jets increases their flavor
Chathuranga Sirimanna (Department of Physics and Astronomy, Wayne State University, Detroit, MI 48201)
Ismail Soudi (Department of Physics and Astronomy, Wayne State University, Detroit, MI 48201)
Gojko Vujanovic (Department of Physics and Astronomy, Wayne State University, Detroit, MI 48201; Department of Physics, University of Regina, Regina, SK S4S 0A2, Canada)
Wen-Jing Xing (Institute of Frontier and Interdisciplinary Science, Shandong University, Qingdao, Shandong, 266237, China)
Shanshan Cao (Institute of Frontier and Interdisciplinary Science, Shandong University, Qingdao, Shandong, 266237, China)
Abhijit Majumder (Department of Physics and Astronomy, Wayne State University, Detroit, MI 48201)
###### Abstract
The widespread notion that jets quenched in a Quark-Gluon-Plasma (QGP) are
similar in their parton flavor composition to jets in vacuum is critically
examined. We demonstrate that while the soft to semi-hard [low to intermediate transverse momentum ($p_{T}$)] sector of vacuum jets is predominantly bosonic, i.e., composed of gluons, _sufficiently_ quenched jets can have an
intermediate momentum sector that is predominantly fermionic, dominated by
quarks and antiquarks. We demonstrate, using leading order perturbative QCD
processes, that the rate of flavor conversion from a gluon traversing the QGP
as part of a jet, to a quark or antiquark, versus the reverse process, grows
steadily with falling $p_{T}$. Simple diagrammatic estimates are followed by a
variety of realistic simulations in static media, which demonstrate
qualitatively similar yet quantitatively different fermion enhancements. The
relation of this increase in flavor to the observed baryon enhancement at
intermediate $p_{T}$ is studied in a fully realistic simulation.
## I Introduction
Jet quenching, or the modification of hard QCD jets in a dense quark-gluon plasma (QGP) Majumder and Van Leeuwen (2011); Cao and Wang (2021), has reached
a state of precision exploration Kumar _et al._ (2020a); Cao _et al._
(2021a); Kumar _et al._ (2022a). The basic process of medium-induced energy
loss leading to an increase in the number of gluon emissions from the original
hard parton Wang and Gyulassy (1992); Gyulassy and Wang (1994); Baier _et
al._ (1995, 1997); Zakharov (1996, 1997), was initially established by
successful comparisons with data on the suppression of the scaled yield of
leading hadrons ($R_{AA}$) Bass _et al._ (2009), via four different
formalisms Gyulassy _et al._ (2001); Salgado and Wiedemann (2002); Arnold
_et al._ (2002); Wang and Guo (2001); Majumder (2012). Over time, equivalences
between these various approaches were established Majumder (2007); Majumder
and Van Leeuwen (2011), and the multi-stage picture of jet modification,
coupled with the concept of coherence, was developed Cao _et al._ (2017);
Caucal _et al._ (2018); Cao _et al._ (2021b); Armesto _et al._ (2012);
Kumar _et al._ (2020b). In tandem with these developments, there arose
extensive event generators based on these separate formalisms Zapp _et al._
(2009); He _et al._ (2015); Majumder (2013a); Schenke _et al._ (2009). These
event generators have also been incorporated in an end-to-end multi-stage
model-agnostic event generator framework that successfully describes a
majority of the available data on jet modification Kumar _et al._ (2022a);
Tachibana _et al._ (2023).
Each of the many approaches to jet modification contained its own description of the medium. Within the last few years, all these different
models of the medium have either been replaced by, or have been related to, a
few transport coefficients. Focusing only on light flavors, the leading
transport coefficient that encapsulates a dominant portion of the effect the medium induces on the jet is the transverse broadening coefficient $\hat{q}$
Baier (2003); Majumder (2013b); Benzke _et al._ (2013), defined as the mean
square momentum per unit length, exchanged between a hard parton and the
medium, transverse to the direction of the hard parton:
$\displaystyle\hat{q}=\sum_{i=1}^{N}\frac{\left|\vec{k}_{\perp}^{i}\right|^{2}}{L}.$ (1)
In the equation above, a parton undergoes $N$ scatterings in a length $L$,
within the QGP, without emission, with the $i^{\rm th}$ scattering imparting a
transverse momentum $\vec{k}_{\perp}^{i}$.
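Eq. (1) translates directly into a simple estimator. The sketch below, with hypothetical kick values, accumulates the squared transverse kicks over the path length (yielding GeV$^2$/fm when kicks are in GeV and $L$ in fm); it is an illustration of the definition, not a fragment of any jet-quenching code:

```python
import numpy as np

def qhat_estimate(kicks_perp, path_length):
    """Estimate q-hat following Eq. (1): the sum of squared transverse
    momentum transfers |k_perp^i|^2 over N scatterings, divided by the
    path length L traversed in the medium."""
    kicks = np.asarray(kicks_perp)  # shape (N, 2): (k_x, k_y) per scattering
    return np.sum(kicks**2) / path_length

# Hypothetical input: five scatterings with ~0.5 GeV isotropic kicks over 5 fm.
rng = np.random.default_rng(0)
kicks = 0.5 * rng.standard_normal((5, 2))
q_hat = qhat_estimate(kicks, path_length=5.0)  # in GeV^2 / fm
```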
In all formalisms of pQCD based energy loss Gyulassy _et al._ (2001); Salgado
and Wiedemann (2002); Arnold _et al._ (2002); Wang and Guo (2001); Majumder
(2007); Bass _et al._ (2009), the basic picture involves an increase in the
amount of emissions from the hard parton(s), induced by scattering from the
medium, quantified by $\hat{q}$. The flavor of the emissions, as well as the
flavor of the majority of partons associated with a jet, is not a priori expected to be very different from that in vacuum.
Most of the emissions from a hard parton in vacuum are gluons and the
expectation is that, in a medium, jet partons simply radiate more gluons
(i.e., partons associated with jet showers are predominantly bosonic). The
goal of this paper is to demonstrate that in the plasmas typically created at
RHIC and LHC, a large fraction of the emissions from jets may actually turn
into quarks and antiquarks, i.e., partons associated with quenched jets may be
predominantly fermionic. This is caused by the repeated re-interaction of
these partons with the medium.
A jet radiates partons of all energies, starting from a fraction of the jet’s
own energy down to vanishingly small energies. A large fraction of the
emissions are sufficiently soft that a pQCD based description of the
scattering of these partons, off constituents in the medium, may not be
accurate (the running of the QCD coupling Gross and Wilczek (1973); Politzer
(1973) insists that soft partons interact strongly with the medium). A variety
of strong coupling methods Liu _et al._ (2006); Casalderrey-Solana and Teaney
(2007); Chesler _et al._ (2009); Casalderrey-Solana _et al._ (2014) based on
the AdS/CFT conjecture Maldacena (1998) are currently available to describe
the energy loss of a parton in a strongly interacting medium. However, in none
of these approaches is the flavor of the parton affected. Thus, full jet
simulations based on strong coupling approaches _also_ do not change the
flavor profile of the jet.
There is a trivial and somewhat obvious change in the flavor profile of the
softest portion of the jet as soft partons are thermalized and lead to
excitations of the medium Tachibana _et al._ (2022). Whether this
thermalization is treated schematically or using strong coupling methods, one
obtains the flavor composition of this portion of the jet to be similar to
that of the medium itself. If the medium is fully chemically equilibrated,
then these partons will show a similar quark-to-gluon ratio as in the bulk
medium.
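For reference, the bulk quark-to-gluon ratio implied by full chemical equilibrium can be estimated from massless ideal-gas number densities. The sketch below assumes zero chemical potential and three light flavors (our choices, not stated in the text), and is only a back-of-the-envelope companion to the statement above:

```python
from math import pi

ZETA3 = 1.2020569031595943  # Riemann zeta(3)

def n_boson(g, T):
    """Number density of a massless Bose gas with degeneracy g: g*zeta(3)*T^3/pi^2."""
    return g * ZETA3 * T**3 / pi**2

def n_fermion(g, T):
    """A massless Fermi gas carries a relative factor of 3/4."""
    return 0.75 * n_boson(g, T)

T = 0.25            # GeV, as in the static-medium simulations discussed later
nf = 3              # number of light flavors (assumption)
g_gluon = 16        # 2 spin x 8 color
g_quark = 12 * nf   # 2 spin x 3 color x 2 (quark + antiquark) x nf

# Fermion-to-boson number ratio in the equilibrated bulk: 9*nf/16.
ratio = n_fermion(g_quark, T) / n_boson(g_gluon, T)
```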
In this paper, we focus on the intermediate energy region where the energies
($E$) or transverse momenta ($p_{T}$) of partons associated with the high-energy (jet) parent parton are just above those of the bulk medium; i.e., we
consider semi-hard partons, within the collection of partons, where the
interaction with the medium could still be reliably calculated using pQCD (it
may be the case that only part of the interaction is calculable in pQCD).
Quite surprisingly, we find this region to be fermion dominated (after the jet
has traversed a typical distance of about $6$ fm/$c$ in a medium held at
temperature $T\sim 0.25$ GeV). In some cases, the enhancement can exceed the
fermion fraction in the bulk medium itself. While this flavor conversion does
not change the energy-momentum of the jet, it turns out to be a dramatic
change within the particle composition of the jet: The fermion fraction within
the jet can change by an order of magnitude from what it is in vacuum, as will
be explored.
This paper is organized as follows: In Sec. II, we recall the somewhat
disconnected prior work on the topic of flavor conversion. Sec. III will
provide a brief review of the rates of various fermion-boson conversion (or
flavor conversion) processes for 2-to-2 parton scattering in pQCD at finite
temperature. These rates will demonstrate that, in a thermal QCD medium, an
external gluon with $E\gtrsim 8T$ can have most of its energy transferred into
a quark/antiquark at a much faster rate than a quark (or antiquark), with
similar energy, can transfer most of its energy to a gluon. The choice of an
“intermediate energy” parton with $E\gtrsim 8T$ segregates well the jet-like
parton from bulk medium (thermal) partons.
The rate at which gluons radiated from a jet are converted into quarks (and
antiquarks) also depends on the state of chemical equilibrium of the medium.
In Sec. IV, we carry out simulations in a static medium and produce estimates
of the time when the fermion (quark + antiquark) number begins to exceed the
boson (gluon) number. Given the momentum and angular structure of partons
emitted by a quenched jet, the fermion excess at intermediate $p_{T}$ first
appears as an annular ring 0.2–1 radian away from the jet axis, after
about 2-6 fm$/c$, in all models that we used (with a fermion excess appearing
at lower $p_{T}$ in some models, at a later time).
In Sec. V, we present a variety of realistic simulations to pin down the range
of angles and times where these charge/baryonic (anti-charge/anti-baryonic)
rings appear. In Sec. VI, we present other measurable consequences of these
charge rings, e.g., on the baryon (and anti-baryon) enhancement observed at
intermediate $p_{T}$. A summary of the outstanding challenges to the
experimental detection of these baryonic/charge rings and an outlook to future
work is presented in Sec. VII.
## II Survey of Prior Efforts and basic formalism
Hard or semi-hard partons traversing a QGP undergo multiple scattering off
constituents in the plasma. The notion that some of these interactions may
lead to a flavor change (i.e. conversion) of the (semi-)hard parton has been
discussed several times in the literature. In this section, we highlight these
somewhat disconnected efforts and discuss the non-perturbative matrix elements
that lead to these flavor conversions. While the subsequent sections will use
perturbative evaluations of these matrix elements, in order to make semi-
realistic estimates, we remind the reader that, similar to $\hat{q}$ [Eq.
(1)], these conversion processes may indeed contain considerable non-
perturbative contributions.
The earliest examples of flavor conversion leading to formation of more quarks
and antiquarks appeared in the process of strangeness enhancement Rafelski and
Muller (1982). The first connection to jets was the suggestion that hard jet
partons could convert to photons Fries _et al._ (2003a) on passage through
the QGP. Such photons were expected to produce a negative azimuthal anisotropy
in semi-central collisions Turbide _et al._ (2006). While these matrix
elements were electromagnetic, the basic structure is similar to those that
lead to flavor conversion. Consider the two diagrams in Fig. 1, where a semi-
hard quark scatters off an antiquark in the medium producing a photon (right)
or a gluon (left). The dashed line in the middle is a cut line and thus these
diagrams represent the product of an amplitude and its complex conjugate. The
photon producing diagram corresponds to the work of Refs. Fries _et al._
(2003a); Turbide _et al._ (2006), while the gluon producing diagrams will be
considered in this paper. In either case, the soft matrix elements that
control either conversion process are identical.
Figure 1: Non-perturbative matrix elements that lead to a semi-hard quark
annihilating off a resolved soft antiquark in the medium and producing an
onshell gluon (left) or photon (right).
The possibility of quarks converting into gluons via diagrams such as the left
diagram in Fig. 1 was briefly considered by Wang and Guo in Ref. Wang and Guo
(2001) and later in greater detail by Schaefer et al. in Ref. Schafer _et
al._ (2007). Both of these efforts were in cold nuclear matter, where it was
found that such contributions could be comparable to $\hat{q}$ if the quark
distribution in the medium was large. These diagrams, however, only consider
the conversion of a quark into a gluon. In this paper, we will point out that
a much larger effect is the conversion of a gluon into a quark (or antiquark)
via the diagrams in Fig. 2.
Figure 2: Non-perturbative matrix elements that lead to a semi-hard gluon
converting to a semi-hard quark (left) or antiquark (right).
The left diagram in Fig. 2 represents a gluon turning into a quark, while it
turns into an antiquark on the right. A cursory examination of these diagrams
would indicate that the soft matrix element in all the diagrams of Figs. 1 and
2 is essentially the same: the expectation of finding a quark or antiquark
in the medium. However, the processes in Figs. 1 and 2 have very different
flavor and color degeneracies: For the case of the outgoing photon, the
antiquark from the medium has to have the same color and flavor as the
incoming semi-hard quark. For the case of the outgoing gluon, the antiquark
from the medium has to have the same flavor as the semi-hard quark. However,
for the case of the incoming semi-hard gluon, there are no flavor or color
restrictions on the incoming quark or antiquark. It will thus come as no
surprise that this diagram will have the largest effect in the subsequent
sections.
Figure 3: Gluon or Boson exchange diagrams that do not lead to flavor
conversion, and are the dominant contribution to the transverse broadening
coefficient $\hat{q}$ and the longitudinal coefficients $\hat{e},\hat{e}_{2}$.
Thus, all flavor changing diagrams involve the semi-hard parton exchanging a
quark (antiquark) with the medium. This is in contrast with the diagrams that
typically generate the transverse broadening coefficient $\hat{q}$ Baier
(2003); Majumder (2013b); Benzke _et al._ (2013); Kumar _et al._ (2022b) or
longitudinal coefficients $\hat{e},\hat{e}_{2}$ Majumder (2009a), which
involve gluon exchange as shown in Fig. 3. While the flavor changing diagrams
do lead to transverse broadening, their main effect is the rotation of flavor
of the semi-hard parton. Similar to $\hat{q}$, we can define a new transport
coefficient which is sensitive to quark (antiquark) number in the QGP: The
rate of flavor rotation from a quark (antiquark) to a gluon, and vice versa,
can be straightforwardly derived as Kumar _et al._ (2022b),
$\displaystyle\Gamma_{q\rightarrow g}=c_{q\rightarrow g}\frac{1}{2E_{q}}\int\frac{dy^{-}d^{2}y_{\perp}}{(2\pi)^{2}}d^{2}k_{\perp}\,e^{-i\frac{k_{\perp}^{2}}{2q^{-}}y^{-}+ik_{\perp}\cdot y_{\perp}}\times\sum_{n}\frac{e^{-\beta E_{n}}}{Z}\langle n|\bar{\psi}(0)\gamma^{+}\psi(y^{-},y_{\perp})|n\rangle,$ (2)
$\displaystyle\Gamma_{g\rightarrow q}=c_{g\rightarrow q}\frac{1}{2E_{g}}\int\frac{dy^{-}d^{2}y_{\perp}}{(2\pi)^{2}}d^{2}k_{\perp}\,e^{-i\frac{k_{\perp}^{2}}{2q^{-}}y^{-}+ik_{\perp}\cdot y_{\perp}}\times\sum_{n}\frac{e^{-\beta E_{n}}}{Z}\langle n|\bar{\psi}(0)\gamma^{+}\psi(y^{-},y_{\perp})|n\rangle.$ (3)
The operator expressions above can be written in gauge invariant form using a
combination of light-like and transverse Wilson lines. The methodology for
incorporating these lines is well known Idilbi and Majumder (2009); Benzke
_et al._ (2013); Majumder (2015). One immediately notes that both flavor
conversion rates can be expressed in terms of one fundamental coefficient,
i.e., $\Gamma_{q\rightarrow g}=c_{q\rightarrow g}\hat{f}$, and
$\Gamma_{g\rightarrow q}=c_{g\rightarrow q}\hat{f}$. The coefficients
$c_{q\rightarrow g},c_{g\rightarrow q}$ include the overall spin and color factors related to whether the incoming (outgoing) state is a quark (gluon) or gluon (quark), respectively. Given this structure, one immediately obtains
the straightforward relation that the ratio of rates of a gluon converting to
a quark (antiquark) to that of the reverse process is merely a ratio of spin,
color and statistical factors (Bose or Fermi, represented as $S$):
$\displaystyle\frac{\Gamma_{g\rightarrow q/(\bar{q})}}{\Gamma_{q/(\bar{q})\rightarrow g}}=\frac{c_{g\rightarrow q/(\bar{q})}S_{g\rightarrow q}}{c_{q/(\bar{q})\rightarrow g}S_{q\rightarrow g}}.$ (4)
If one ignores the statistical factors, the ratio becomes a pure number.
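When the statistical factors $S$ are dropped, the ratio in Eq. (4) reduces to counting spin and color factors. Sec. III quotes the value $2.25$ for classical Maxwell-Boltzmann statistics, which numerically coincides with the Casimir ratio $C_A/C_F$ for $N_c=3$; the identification below is our illustrative arithmetic, not a statement from the text:

```python
from fractions import Fraction

N_c = 3
C_A = Fraction(N_c)                  # adjoint Casimir
C_F = Fraction(N_c**2 - 1, 2 * N_c)  # fundamental Casimir, 4/3 for N_c = 3

ratio = C_A / C_F                    # Fraction(9, 4)
assert float(ratio) == 2.25          # matches the classical-statistics value quoted in Sec. III
```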
The ratio in Eq. (4) will be evaluated several times in the subsequent
sections with different restrictions on the included processes, both with and
without statistical factors, in static and dynamic media. We are not the first
to evaluate these or point out the possibility of flavor conversions of jets
traversing a QGP. These were first discussed in a series of papers by Liu et
al. Ko _et al._ (2007); Liu _et al._ (2007); Liu and Fries (2008). These
authors studied the possibility that the leading parton in a jet could convert
from quark to gluon or vice versa. While the rates were not small, they tended
to drop with increasing $p_{T}$ or energy of the parton.
As will be shown in this paper, the region where the effect of conversions is
most dramatic is the semi-hard region with momenta just above $8T$, where $T$
is the temperature of the medium. Some of our approach is similar to the work
of Refs. Ko _et al._ (2007); Liu _et al._ (2007); Liu and Fries (2008); however, we will focus on the shower of gluons radiated by the hard parton, and
not the hard parton itself. In our case, one would trigger on a hard parton
and observe the change in the flavor of the radiated gluons as they propagate
through the medium.
The flavor or chemical change of full distributions of partons correlated with
a hard parton traversing and equilibrating within a dense medium has been
extensively studied by one of us in collaboration with Schlichting and Mehtar-
Tani Schlichting and Soudi (2021); Mehtar-Tani _et al._ (2022), using the
finite temperature rates derived by Arnold, Moore and Yaffe Arnold _et al._
(2000, 2003a). However, these calculations focused on partons with asymptotic
energies $\gtrsim 500T$ and thus observed a much delayed onset of the quark
antiquark numbers becoming comparable to the gluon numbers.
In the subsequent sections, we will predominantly focus on partons with energy
ranging from $2~{}{\rm GeV}\lesssim E\lesssim 5$ GeV radiated from a jet with
$E\gtrsim 25$ GeV. For most of the simulations, the medium will be static with
$T=0.25$ GeV. Thus, scaling with the temperature $T$, the jet has an energy of
$100T$ and we are focusing on partons radiated from the jet with energy above
$8T$ and less than $20T$. Partons with momenta above $20T\simeq 5$ GeV will be
considered hard in this effort. Partons with $E\leq 8T\simeq 2$ GeV will be
considered as soft. While pQCD based estimates on their population are
presented below, these are done only for completeness. It will be assumed that
partons with $E\leq 8T\simeq 2$ GeV will eventually thermalize and hadronize
as a part of the bulk medium. Our main focus will remain on the semihard
region with $8T\lesssim E\lesssim 20T$ (or $2~{}{\rm GeV}\lesssim E\lesssim 5$
GeV for $T=0.25$ GeV).
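The working definitions above can be collected into a small classifier. The sharp boundary conventions at exactly $8T$ and $20T$ are our choice (the text uses $\lesssim$):

```python
def classify_parton(E, T=0.25):
    """Classify a parton of energy E (GeV) in a medium at temperature T (GeV),
    following the working definitions of the text: soft for E <= 8T, semi-hard
    for 8T < E <= 20T, hard for E > 20T."""
    if E <= 8 * T:
        return "soft"
    if E <= 20 * T:
        return "semi-hard"
    return "hard"

# For T = 0.25 GeV the semi-hard window is (2, 5] GeV.
```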
Path lengths in the medium range from 2 to 10 fm/$c$. The path lengths and
temperatures are representative of the average temperature and path lengths
experienced by jets at RHIC and LHC. The reason to not take the energy of the
jet to be too large is to reduce the amount of vacuum emission that the
partons escaping the medium will produce. Thus, picking a jet with an $E\sim
100T$ in a medium of length around 2-10 fm/$c$ produces a prominent
enhancement in the quark and antiquark number, correlated with the jet.
While the rates in Eqs. (2) and (3) are cast in terms of non-perturbative matrix
elements, in the remainder of this paper, we will evaluate these using
perturbation theory. There is every indication that for partons with energies
in the region $E\gtrsim 2$ GeV, the interaction with the medium may
predominantly be non-perturbative Zhao _et al._ (2022); Fries _et al._
(2003b); Hwa and Yang (2003); Molnar and Voloshin (2003). The calculations in
this paper should thus be considered as an estimate of the effect of these
terms. We will show that the flavor conversion processes will produce more
than an order of magnitude increase in the number of quarks and antiquarks
that are correlated with the jet (compared to a jet in vacuum). Thus, while
our perturbative estimates may not be completely accurate, the size of the effect indicates that it is large enough to survive modifications of the calculation scheme.
In this first effort to understand the enhancement of fermion number within a
jet shower, we will not attempt to hadronize the simulated jets. The change in
jet hadrochemistry as a signal for jet quenching has been pointed out before
Sapeta and Wiedemann (2008), though not for the reasons that will be
highlighted in this paper. Most of the results in this paper will be partonic.
The increase in the number of quarks and antiquarks within the jet shower will
make hadronization via the standard process of Lund string breaking Andersson
(2005) unfeasible. Naïvely, given the small number of partons within the jet
cone, the most obvious signal would be event-by-event fluctuations of charge
and baryon number. However, any current hadronization module will itself
introduce modifications to this effect. The study of the effect of
hadronization on these fluctuations will appear in a future effort.
## III Rates of flavor exchange with the medium
Figure 4: (Color online) Ratio of the rate of a semi-hard gluon converting to
a semi-hard quark (or antiquark) to that of a semi-hard quark (or antiquark)
converting to a semi-hard gluon. The top panel compares the ratio from only
the diagrams that describe the process of gluon fusion to quark antiquark to
that of quark antiquark annihilation into gluons Eq. (9). The lower panel
considers the ratio of quark gluon scatterings as calculated in Eq. (12).
The green lines represent the ratio obtained when the equilibrium
distributions are taken to be the classical Maxwell-Boltzmann distribution,
which leads to an exact ratio of $2.25$, while an energy dependence is
obtained for the rates computed using the equilibrium distributions from
quantum statistics, shown in red lines.
The perturbative description of multiple elastic interactions of a hard QCD
parton (with energy $E\gtrsim 10T$) with a QGP medium (at temperature $T$) can
be cast in terms of an effective kinetic description, with the collision term
obtained from tree-level diagrams. The rate of scattering of a _single_ hard
parton $a$, with four-momentum $p_{1}$, with a medium parton $b$, with four-
momentum $p_{2}$, yielding outgoing partons $c$ and $d$ with four-momenta
$p_{3}$ and $p_{4}$ can be expressed as,
$\displaystyle\Gamma_{ab\to cd}=$
$\displaystyle\int\frac{d^{3}\bm{p}_{2}}{(2\pi)^{3}}\frac{d^{3}\bm{p}_{3}}{(2\pi)^{3}}\frac{d^{3}\bm{p}_{4}}{(2\pi)^{3}}f_{b}(\bm{p}_{2})[1\pm
f_{c}(\bm{p}_{3})]$ (5) $\displaystyle\times\frac{|\mathcal{M}_{ab\to
cd}|^{2}}{16E_{1}E_{2}E_{3}E_{4}}(2\pi)^{4}\delta^{(4)}\left(p_{1}+p_{2}-p_{3}-p_{4}\right),$
where the momenta of the incoming thermal parton and of the outgoing partons
are integrated over. In the equation above, all degeneracy (i.e., spin and
color) factors are absorbed into the squared matrix element
$|\mathcal{M}_{ab\to cd}|^{2}$, and accordingly such factors are removed from
the distribution functions $f_{b,c}(\bm{p})$.
Note that the $f_{i}(\bm{p})$ notation emphasizes that parton momenta satisfy
the on-shell condition $p^{2}=0$ (assuming massless partons). In the case of
thermal equilibrium, these are given by the Bose-Einstein distribution
$f_{g}(\bm{p})=n_{g}(\bm{p})=\left[e^{p\cdot u/T}-1\right]^{-1}$ for gluons,
and the Fermi-Dirac distribution
$f_{q}(\bm{p})=\tilde{n}_{q}(\bm{p})=\left[e^{p\cdot u/T}+1\right]^{-1}$ for
quarks and antiquarks, where $u^{\mu}$ is the local fluid velocity.
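The thermal distributions above are elementary to code up and are reused repeatedly in what follows. A minimal sketch in Python, assuming the local rest frame of the fluid (where $p\cdot u=|\bm{p}|$ for massless partons); the function names are ours:

```python
import math

def f_gluon(p, T):
    """Bose-Einstein distribution n_g = 1/(e^{p/T} - 1) for massless gluons
    in the local rest frame of the fluid (p = |p| in GeV, T in GeV)."""
    return 1.0 / (math.exp(p / T) - 1.0)

def f_quark(p, T):
    """Fermi-Dirac distribution n_q = 1/(e^{p/T} + 1) for massless quarks and antiquarks."""
    return 1.0 / (math.exp(p / T) + 1.0)

def f_mb(p, T):
    """Classical Maxwell-Boltzmann limit e^{-p/T}, common to both species."""
    return math.exp(-p / T)

T = 0.25  # GeV, the medium temperature used throughout this paper
for p in (0.5, 1.0, 2.0):  # soft thermal momenta in GeV
    print(p, f_gluon(p, T), f_quark(p, T), f_mb(p, T))
```

For $p\gg T$ both quantum distributions converge to the Maxwell-Boltzmann limit, which is why the classical approximation used later in Eq. (8) becomes accurate for hard momenta, while at soft momenta the gluon distribution is enhanced and the quark distribution suppressed.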
At leading order of the QCD coupling, the processes can be separated into two
classes. The dominant processes involve completely elastic scattering, where
the boson-fermion nature of both scattering partons is unchanged: The semi-
hard incoming quark (gluon) remains a semi-hard outgoing quark (gluon) with a
mild change in its 4-momentum. These lead to diffusion and drag of the hard
parton’s energy with schematic rates $\Gamma_{\rm Diffusion}\propto
T^{2}\partial^{2}_{\bm{p}_{1}}f_{a}(\bm{p}_{1})$ and $\Gamma_{\rm Drag}\propto
T^{3}\partial_{\bm{p}_{1}}f_{a}(\bm{p}_{1})$, respectively Schlichting and
Soudi (2021). However, they will not cause any change to the flavor profile of
the jet. We instead focus on the special case of flavor conversion, where the
outgoing particle with energy comparable to the semi-hard parton is of
different quantum statistics than the initial hard parton $a$. At high
energies, these conversion processes are suppressed by an inverse power of
momentum as $\Gamma_{\rm
Conversion}\propto\frac{T^{2}}{|\bm{p}_{1}|}f_{a}(\bm{p}_{1})$; however, for
semi-hard partons this can contribute significantly.
When the semi-hard parton is a quark (antiquark), these consist of three
processes: Quark-antiquark annihilation into gluons,
$Q_{i}\,\,\bar{q}_{i}\rightarrow G\,\,g$ ($\bar{Q}_{i}\,\,q_{i}\rightarrow
G\,\,g$), quark gluon scattering, $Q_{i}\,\,g\rightarrow q_{i}\,\,G$, and
antiquark gluon scattering, $\bar{Q}_{i}\,\,g\rightarrow\bar{q}_{i}\,\,G$.
Note that we use the notation where capital letters indicate the semi-hard
parton and lowercase letters are reserved for soft particles. (For the
remainder of this study, the identity of the hard parton should be clear from
the context). When the semi-hard parton is a gluon, the reverse processes
include pair production, $G\,\,g\rightarrow Q_{i}\,\,\bar{q}_{i}$
($G\,\,g\rightarrow\bar{Q}_{i}\,\,q_{i}$), and gluon scattering with a quark,
$G\,\,q_{i}\rightarrow Q_{i}\,\,g$, or antiquark,
$G\,\,\bar{q}_{i}\rightarrow\bar{Q}_{i}\,\,g$. The matrix elements for all
these processes are listed in Table 1.
$ab\to cd$ | $\nu_{b}\sum_{\nu_{c}\nu_{d}}|\mathcal{M}_{ab\to cd}|^{2}/g^{4}_{s}$
---|---
$gg\to 2\sum_{i}q_{i}\bar{q}_{i}$ | $2N_{f}C_{F}\left(\frac{u}{t}+\frac{t}{u}-\frac{C_{A}}{C_{F}}\frac{t^{2}+u^{2}}{s^{2}}\right)$
$q_{i}\bar{q}_{i}\to gg$ | $2C_{F}^{2}\left(\frac{u}{t}+\frac{t}{u}-\frac{C_{A}}{C_{F}}\frac{t^{2}+u^{2}}{s^{2}}\right)$
$\sum_{i}gq_{i}\to gq_{i}$, $\sum_{i}g\bar{q}_{i}\to g\bar{q}_{i}$ | $-N_{f}C_{F}\left(\frac{u}{s}+\frac{s}{u}\right)+N_{f}C_{A}\frac{s^{2}+u^{2}}{t^{2}}$
$q_{i}g\to q_{i}g$, $\bar{q}_{i}g\to\bar{q}_{i}g$ | $-2C_{F}^{2}\left(\frac{u}{s}+\frac{s}{u}\right)+2C_{F}C_{A}\frac{s^{2}+u^{2}}{t^{2}}$
Table 1: Fermion-Boson conversion matrix elements: We average over the degrees
of freedom of the initial hard parton $a$ and sum over the initial medium
parton $b$ and final state partons $c$ and $d$. The conventional definition of
Mandelstam variables Peskin and Schroeder (1995) is used.
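Since the two annihilation/fusion rows of Table 1, and likewise the Compton rows, share the same kinematic structure and differ only in their degeneracy prefactors, the ratio $N_f/C_F$ appearing later in Eqs. (9) and (12) can be read off numerically. A sketch encoding the table entries with $N_c=3$ and $N_f=3$ (the function names and the sample phase-space point are ours):

```python
# Degeneracy-weighted |M|^2 / g_s^4 entries of Table 1, with
# N_c = 3, N_f = 3 (so C_F = 4/3, C_A = 3) and massless kinematics s + t + u = 0.
NC, NF = 3, 3
CF = (NC**2 - 1) / (2 * NC)  # 4/3
CA = NC                      # 3

def m2_gg_to_qqbar(s, t, u):
    """gg -> sum_i q_i qbar_i (both momentum assignments), first row of Table 1."""
    return 2 * NF * CF * (u / t + t / u - (CA / CF) * (t**2 + u**2) / s**2)

def m2_qqbar_to_gg(s, t, u):
    """q_i qbar_i -> gg, second row of Table 1."""
    return 2 * CF**2 * (u / t + t / u - (CA / CF) * (t**2 + u**2) / s**2)

def m2_gq_to_gq(s, t, u):
    """sum_i g q_i -> g q_i, the Compton row of Table 1."""
    return -NF * CF * (u / s + s / u) + NF * CA * (s**2 + u**2) / t**2

def m2_qg_to_qg(s, t, u):
    """q_i g -> q_i g, the reverse Compton row of Table 1."""
    return -2 * CF**2 * (u / s + s / u) + 2 * CF * CA * (s**2 + u**2) / t**2

s, t = 4.0, -1.5   # arbitrary sample point, in GeV^2
u = -s - t          # massless Mandelstam constraint
r_annih = m2_gg_to_qqbar(s, t, u) / m2_qqbar_to_gg(s, t, u)
# The g q and g qbar Compton rows of Table 1 are equal, so the numerator of
# the Eq. (12) ratio carries an extra factor of 2:
r_compton = 2 * m2_gq_to_gq(s, t, u) / m2_qg_to_qg(s, t, u)
print(r_annih, r_compton)  # both ≈ N_f / C_F = 2.25, independent of s, t, u
```

This is only a statement about the degeneracy prefactors; the full rate ratios of Eqs. (9) and (12) additionally assume identical Maxwell-Boltzmann thermal weights for the two processes, as in Eq. (8).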
In the subsequent subsections, the rates of the flavor-conversion processes
mentioned above where a semi-hard gluon turns into a semi-hard quark
(antiquark) will be compared with the reverse processes for each, i.e., a
semi-hard quark (antiquark) turning into a semi-hard gluon. The matrix
elements in Table 1 will be integrated over the momenta of the incoming and
outgoing soft thermal partons. Both rates will be compared within a static QCD
medium held at a fixed temperature. In all cases, the rate for gluons to turn
into a quark (antiquark) is found to be higher than the reverse process.
### III.1 QCD annihilation processes
A hard gluon fusing with a thermal gluon and producing a quark-antiquark pair
is considered first. This conversion rate involves a double sum, over quark
flavors and over the two momentum assignments, since the final semi-hard
parton can be either a quark or an antiquark. For the purposes of the
numerical simulations of this process, carried out using the JETSCAPE (Jet
Energy-loss Tomography with a Statistically and Computationally Advanced
Program Envelope) framework, only the soft-particle distributions are
integrated over, yielding:
$\displaystyle\Gamma_{gg\to 2\sum_{i}q_{i}\bar{q}_{i}}$
$\displaystyle=\int\frac{d^{3}\bm{p}_{2}}{(2\pi)^{3}}\frac{d^{3}\bm{p}_{3}}{(2\pi)^{3}}\frac{d^{3}\bm{p}_{4}}{(2\pi)^{3}}$
$\displaystyle\times$ $\displaystyle
f_{g}(\bm{p}_{2})[1-f_{q}(\bm{p}_{3})]\frac{|\mathcal{M}_{gg\to
2\sum_{i}q_{i}\bar{q}_{i}}|^{2}}{16E_{1}E_{2}E_{3}E_{4}}$
$\displaystyle\times$
$\displaystyle(2\pi)^{4}\delta^{(4)}\left(p_{1}+p_{2}-p_{3}-p_{4}\right)\;.$
(6)
In the equation above, the flavor-summed squared matrix element is defined as
$|\mathcal{M}_{gg\to
2\sum_{i}q_{i}\bar{q}_{i}}|^{2}=\sum_{i}^{N_{f}}|\mathcal{M}_{gg\to
q_{i}\bar{q}_{i}}|^{2}+|\mathcal{M}_{gg\to\bar{q}_{i}q_{i}}|^{2}$, where the
subscripts keep track of the momentum of the semi-hard gluon being transferred
to the quark ($g\,\,g\rightarrow q_{i}\,\,\bar{q}_{i}$) or to the antiquark
($g\,\,g\rightarrow\bar{q}_{i}\,\,q_{i}$).
In both cases above, a sum over flavors is present, regardless of whether the
hard momentum transfers to the quark or the antiquark. Conversely, the
annihilation of a hard quark with a medium antiquark does not involve a sum over
the quark flavors, since the quark-antiquark flavor must match, thus giving
$\displaystyle\Gamma_{q_{i}\bar{q}_{i}\to gg}$
$\displaystyle=\int\frac{d^{3}\bm{p}_{2}}{(2\pi)^{3}}\frac{d^{3}\bm{p}_{3}}{(2\pi)^{3}}\frac{d^{3}\bm{p}_{4}}{(2\pi)^{3}}f_{q}(\bm{p}_{2})[1+f_{g}(\bm{p}_{3})]$
(7) $\displaystyle\times\frac{|\mathcal{M}_{q_{i}\bar{q}_{i}\to
gg}|^{2}}{16E_{1}E_{2}E_{3}E_{4}}(2\pi)^{4}\delta^{(4)}\left(p_{1}+p_{2}-p_{3}-p_{4}\right)\;.$
If we neglect differences between the thermal quark and gluon distributions,
as well as quantum statistics, and use Maxwell-Boltzmann (MB) distributions
for both quarks and gluons, we obtain
$\displaystyle f_{g}(\bm{p}_{2})[1-f_{q}(\bm{p}_{3})]\rightarrow
f_{MB}(\bm{p}_{2}),$ $\displaystyle
f_{q}(\bm{p}_{2})[1+f_{g}(\bm{p}_{3})]\rightarrow f_{MB}(\bm{p}_{2}).$ (8)
With these substitutions, the only difference between the integrands of Eqs.
(6) and (7) is the degeneracy factor of the matrix element. Thus, the ratio
of a semi-hard gluon converting to a semi-hard quark (antiquark) to the
reverse process of conversion of a semi-hard quark (or antiquark) into a
gluon, via the matrix elements that describe annihilation or fusion, is given
by
$\displaystyle\frac{\Gamma_{gg\to
2\sum_{i}q_{i}\bar{q}_{i}}}{\Gamma_{q_{i}\bar{q}_{i}\to
gg}}\simeq\frac{N_{f}}{C_{F}}=2.25\;.$ (9)
In the equation above, $C_{F}=4/3$ is the quadratic Casimir in the fundamental
representation, and $N_{f}=3$ is the number of quark flavors.
Using full quantum statistics increases the ratio by more than 50% and also
introduces a mild dependence on the energy of the semi-hard parton, as shown
in the upper panel of Fig. 4. Depending on the strength of the overall rate,
due to the medium, the shower of a hard parton will, over time, contain more
and more semi-hard quarks (and antiquarks) even when starting with an initial
hard gluon.
### III.2 QCD Compton scattering
Compton scattering in the medium is another process that can change a semi-
hard fermion into a semi-hard boson. In the current kinematic limit, whenever
the momentum of the quark internal line is soft, i.e., the Mandelstam variable
$u\to 0$, the conversion rate (fermion-to-boson or vice versa) is enhanced.
For a semi-hard gluon scattering with a medium quark or antiquark, the rate
is:
$\displaystyle\Gamma_{\sum_{i}gq_{i}\to gq_{i}}$
$\displaystyle=\int\frac{d^{3}\bm{p}_{2}}{(2\pi)^{3}}\frac{d^{3}\bm{p}_{3}}{(2\pi)^{3}}\frac{d^{3}\bm{p}_{4}}{(2\pi)^{3}}f_{q}(\bm{p}_{2})[1+f_{g}(\bm{p}_{4})]$
$\displaystyle\times\,$ $\displaystyle\frac{|\mathcal{M}_{\sum_{i}gq_{i}\to
gq_{i}}|^{2}}{16E_{1}E_{2}E_{3}E_{4}}$ (10) $\displaystyle\times\,$
$\displaystyle(2\pi)^{4}\delta^{(4)}\left(p_{1}+p_{2}-p_{3}-p_{4}\right)\Theta(|\bm{p}_{4}|-|\bm{p}_{3}|)\;.$
While for the reverse process of a semi-hard quark scattering with a medium
gluon, the rate is given by
$\displaystyle\Gamma_{q_{i}g\to q_{i}g}$
$\displaystyle=\int\frac{d^{3}\bm{p}_{2}}{(2\pi)^{3}}\frac{d^{3}\bm{p}_{3}}{(2\pi)^{3}}\frac{d^{3}\bm{p}_{4}}{(2\pi)^{3}}f_{g}(\bm{p}_{2})[1-f_{q}(\bm{p}_{4})]$
$\displaystyle\times\frac{|\mathcal{M}_{q_{i}g\to
q_{i}g}|^{2}}{16E_{1}E_{2}E_{3}E_{4}}$ (11)
$\displaystyle\times(2\pi)^{4}\delta^{(4)}\left(p_{1}+p_{2}-p_{3}-p_{4}\right)\Theta(|\bm{p}_{4}|-|\bm{p}_{3}|)\;.$
If we consider the momentum exchange to be small $(q=p_{1}-p_{3}\ll
p_{1},p_{2})$, the dominant contributions to the quark gluon scattering are
the terms proportional to $\frac{s}{u}$ and $\frac{s^{2}+u^{2}}{t^{2}}$ in
Tab. 1. Because the energy $(\sim|q|)$ gained by the medium parton is small
compared to the energy of the semi-hard parton, only the first term will lead
to flavor conversion, while the second term typically contributes to
energy loss. In this section, we will consider the full matrix element. Hence,
to identify flavor-converting processes, we will apply a kinematic selection
requiring that the outgoing parton with a flavor different from that of the
initial hard parton carries the greater fraction of the energy.
Using the above two rates, the ratio of a semi-hard gluon converting to a
quark (antiquark) to the reverse process is exactly the same as the ratio of
rates in the preceding subsection, namely
$\displaystyle\frac{\Gamma_{\sum_{i}gq_{i}\to
gq_{i}}+\Gamma_{\sum_{i}g\bar{q}_{i}\to g\bar{q}_{i}}}{\Gamma_{q_{i}g\to
q_{i}g}}\simeq\frac{N_{f}}{C_{F}}=2.25\;.$ (12)
In the equation above, Maxwell-Boltzmann statistics were used once again. Even
though using quantum statistics for this process actually reduces the ratio,
it still remains above $1$, as shown in the lower panel of Fig. 4.
### III.3 From a gluon shower to a quark (antiquark) shower
The two preceding subsections clearly demonstrate that the rate for a semi-
hard gluon to turn into a semi-hard quark or antiquark is much larger than for
the reverse process. In this subsection, we will try to estimate the fate of
the semi-hard gluons emitted from a jet as it propagates through an
equilibrated QGP.
In this first effort, we focus on jets with an energy $E_{J}\simeq 25$ GeV.
These jets have a large enough energy that they will radiate several soft
gluons while traversing the QGP, yet their energy is not so high that a
considerable portion of the jet continues to radiate outside the medium and
produces a dominant gluon shower in the vacuum Majumder (2013a). A jet with an
energy $E_{J}\simeq 25$ GeV (we use $\simeq$ because the energy of the
jet-originating parton is set to 25 GeV, which is not equal to the energy of
the clustered jet) will radiate several gluons with energies 2 GeV$\lesssim
E\lesssim 5$ GeV (see Fig. 8). These gluons will scatter multiple times and
interact with the medium. In this process, they may convert into a quark or
antiquark. The produced quarks and antiquarks may also scatter and could in
turn convert back into gluons.
Figure 5: (Color online) QCD annihilation processes. The rate of gluon
conversion into a quark (or antiquark) from Eq. (6) is represented by red
lines, while the reverse process from Eq. (7) is displayed in green lines. The
dashed lines represent the rates where the equilibrium distributions are taken
to be the classical Maxwell-Boltzmann distribution, and the rates computed
using the equilibrium distributions from quantum statistics are shown as full
lines.
Figure 6: (Color online) QCD Compton scattering. The rate of gluon conversion
into a quark (or antiquark) from Eq. (10) is represented by red lines, while
the reverse process from Eq. (11) is displayed in green lines. Kinematic
selections are employed for the momentum $p_{4}$ of the semi-hard parton with
the opposing flavor: $8T\leq|\bm{p}_{4}|$ in the top panel, or
$|\bm{p}_{3}|\leq|\bm{p}_{4}|$ in the bottom panel. While the dashed lines
represent the rates where the equilibrium distributions are taken to be the
classical Maxwell-Boltzmann distribution, the rates computed using the
equilibrium distributions from quantum statistics are shown as full lines.
To estimate the probability of conversion, we go beyond the ratios of rates
and plot the absolute rates, in a thermal medium, in Figs. 5 and 6. As in the
case of the ratio of rates, we continue to set the temperature of the medium
to be $T=0.25$ GeV. In Fig. 5, we plot the rates for two gluons to pair
produce a quark or antiquark (red lines); the solid line includes quantum
statistics for the incoming and outgoing soft partons, while the dashed line
assumes Maxwell-Boltzmann statistics. The green lines represent a quark or
antiquark annihilating with its antiparticle and producing two gluons.
Including the effect of quantum statistics, the rate for a gluon to convert
into a quark (or antiquark) can be almost 3 times as high as the reverse
(quark to gluon) process in the region with 2 GeV$\lesssim E\lesssim 5$ GeV;
the overall rates are rather small ($\sim 0.008$ GeV), however. Thus, these
rates will act as an additive correction to the Compton process.
In Fig. 6, we plot the conversion of a gluon into a quark or antiquark (red
lines) and vice-versa (green lines) from the Compton process. The solid and
dashed lines indicate rates with and without quantum statistics, as described
above. In the Compton process, a semi-hard gluon interacting with a thermal
quark could produce a semi-hard quark and a semi-hard gluon simultaneously.
Hence, we present two separate rates: The top panel indicates the rate for a
parton to be produced with an energy $|\bm{p}_{4}|>10T$ with a different
flavor than the semi-hard projectile. Red lines indicate that the projectile
is a gluon, and green lines are for a quark. The bottom panel plots the rate
of the semi-hard projectile converting its flavor [gluon to (anti)quark or
(anti)quark to gluon], where the outgoing semi-hard parton (or parton with
larger energy) has a different flavor than the projectile. In both panels, the
$x$-axis is the energy of the semi-hard projectile ($E_{1}$).
We conclude this section with a numerical estimate of the physical rate for a
semi-hard gluon (with $2\lesssim E\lesssim 5$ GeV) to convert into a semi-hard
quark (antiquark) and vice versa. Using either panel in Fig. 6, we note that
the rate for a semi-hard gluon to produce a semi-hard quark or antiquark
(whether or not the fermion is the leading outgoing parton) via the Compton
process is about 0.06 GeV (see mean of the two red lines). Combining the rate
from pair creation (Fig. 5), the rate for a semi-hard gluon to produce a semi-
hard quark or antiquark is
$\displaystyle R_{g\rightarrow q+\bar{q}}\simeq 0.07{\rm GeV}\simeq 0.35/{\rm
fm}.$ (13)
The rate for the reverse process is about half of this (using the green lines
in either plot from Fig. 6 and including the rates from the plot in Fig. 5).
We have also used natural units $\hbar c\simeq 0.2$ GeV$\cdot$fm.
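The unit conversion in Eq. (13) can be checked with one line of arithmetic; a sketch using the more precise value of $\hbar c$, where the text rounds to 0.2 GeV$\cdot$fm:

```python
# Converting the conversion rate of Eq. (13) from GeV to inverse fermi,
# using hbar*c ≈ 0.1973 GeV·fm (the paper quotes the rounded value 0.2 GeV·fm).
HBARC = 0.1973269804  # GeV * fm

rate_gev = 0.07                    # R_{g -> q + qbar}, in GeV
rate_per_fm = rate_gev / HBARC     # ≈ 0.35 / fm
mean_path_fm = 1.0 / rate_per_fm   # mean path length before conversion, ≈ 2.8 fm
print(rate_per_fm, mean_path_fm)
```

The implied mean conversion length of roughly 2.8 fm is what underlies the statement that a semi-hard gluon converts after traversing a mere 3 fm of QGP at $T=0.25$ GeV.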
The rate in the above equation is remarkably large. It implies that a semi-
hard gluon will definitely convert to a quark (or antiquark) by traversing a
mere 3 fm of a QGP at $T=0.25$ GeV. Of course, if the medium were longer, the
final population would eventually tend towards twice as many quarks and
antiquarks compared to gluons.
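The approach to that asymptotic population can be illustrated with a toy rate-equation model. The sketch below is our own construction, not part of the paper's simulation chain: it evolves only the semi-hard gluon and quark-plus-antiquark numbers with the constant conversion rates estimated above, ignoring radiation, energy loss, and all momentum dependence.

```python
# Schematic two-population rate equations for the semi-hard flavor content,
#   dN_g/dL = -R_gq N_g + R_qg N_q,   dN_q/dL = +R_gq N_g - R_qg N_q,
# with the conversion rates estimated in the text. Because R_gq / R_qg ≈ 2,
# the stationary ratio is N_q / N_g -> 2, i.e., twice as many quarks and
# antiquarks as gluons.
R_GQ = 0.35   # gluon -> quark + antiquark conversion rate, 1/fm (Eq. (13))
R_QG = 0.175  # reverse rate, about half of R_GQ (see text)

def evolve(n_g, n_q, length_fm, dl=0.001):
    """Forward-Euler evolution of the two populations over a path length in fm."""
    for _ in range(int(length_fm / dl)):
        flow = (R_GQ * n_g - R_QG * n_q) * dl   # net gluon -> fermion transfer
        n_g, n_q = n_g - flow, n_q + flow
    return n_g, n_q

# Start from a purely gluonic population, as for a hard initial gluon.
for L in (1.0, 3.0, 10.0, 30.0):
    n_g, n_q = evolve(1.0, 0.0, L)
    print(f"L = {L:5.1f} fm: N_q/N_g = {n_q / n_g:.3f}")
```

The stationary ratio of 2 is approached on a length scale $1/(R_{g\to q}+R_{q\to g})\approx 2$ fm, consistent with the "mere 3 fm" estimate above.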
By any measure, the estimate in the preceding paragraphs presents a rather
startling effect. It should completely disabuse one of the notion that a jet
in a medium is a central hard parton surrounded by a gluon shower. In the
subsequent sections, we will study this effect using the solution of the
Boltzmann equation, followed by simulations using LBT and MATTER+MARTINI
within the JETSCAPE framework. In all cases, we will note that a large portion
of the gluons in a jet shower, in a medium, are converted to quarks and
antiquarks.
## IV The effective Boltzmann equation for a jet in a static QGP
Figure 7: (Color online) Particle number distribution as a function of the
angle from the original parton, for hard partons with momentum $8T\leq p\leq
20T$ (top) or soft partons with momentum $p\leq 8T$ (bottom), with $T=0.25$ GeV.
The gluon distribution is displayed in full-lines, while the sum of quarks and
antiquarks is displayed in dashed-lines at different times $t=2,6$ and $10$
fm/$c$. In the top panel, the vertical dashed lines indicate the angle where
the fermion distribution crosses the gluon distribution.
In this section, we will investigate the evolution of hard partons in a static
QGP medium using an effective kinetic description of QCD, at leading order.
Based on the approach of Arnold, Moore and Yaffe (AMY) Arnold _et al._
(2003a), we study the evolution of the phase-space distribution $f(\bm{p})$,
where, after integrating out position space, the kinetic equation is given by
$\displaystyle\partial_{t}f_{a}(\bm{p})=C_{a}^{2\leftrightarrow
2}[f]+C_{a}^{1\leftrightarrow 2}[f]\;.$ (14)
The leading-order QCD elastic scatterings are described by the
$2\leftrightarrow 2$ collision integral $C_{a}^{2\leftrightarrow 2}[f]$, where
we use hard thermal loop propagators for the internal quark and gluon to
regulate the divergent small angle scatterings, while the other in-coming and
out-going parton lines are assumed to be vacuum like Arnold _et al._ (2000),
with thermal distributions for the partons that emerge from and re-enter the
medium.
Multiple scattering of a hard parton with the medium can cause the parton to
become slightly off-shell. The hard parton loses its off-shellness via
radiation, which is enhanced in the collinear region. This infinite set of
diagrams, iteratively including an arbitrary number of scatterings, can be
resummed into an effective $1\leftrightarrow 2$ radiation / absorption rate,
which is described by the collision integral $C_{a}^{1\leftrightarrow 2}[f]$.
This medium-induced radiation is governed by an interplay between the medium
scale given by the mean free path $\lambda_{\rm mfp}\sim 1/m_{D}$ and the
formation time of the radiation $t_{f}\sim\sqrt{2zE_{\bm{p}}/\hat{q}}$, which
leads to a time-dependent rate of radiation in the collision integral. Since a
time-dependent collision integral is rather difficult to solve, we consider
the medium to be large enough such that the formation time is much smaller
than the medium length. In this case, the radiation rates are given by the
infinite medium limit derived in the AMY approach Arnold _et al._ (2000). The
full evolution of the phase-space distribution and the details of the
implementation of the collision integrals are given in Mehtar-Tani _et al._
(2022).
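The full collision integrals $C^{2\leftrightarrow 2}$ and $C^{1\leftrightarrow 2}$ are beyond a short illustration, but the structure of Eq. (14) can be caricatured with a relaxation-time approximation, $\partial_{t}\delta f_{a}=-\delta f_{a}/\tau_{a}$, in which each species decays toward equilibrium at its own rate. The relaxation times below are illustrative choices of ours, not parameters of the paper; the sketch only shows why snapshots at $t=2$, $6$, and $10$ fm/$c$ probe progressively more equilibrated distributions:

```python
import math

# Relaxation-time caricature of Eq. (14): delta_f(t) = delta_f(0) * exp(-t / tau_a).
# Taking tau_g < tau_q mimics the generically faster equilibration of bosons;
# both values are hypothetical, chosen only for illustration.
TAU_G, TAU_Q = 2.0, 3.0  # fm/c

def delta_f(delta_f0, t, tau):
    """Exact solution of d(delta_f)/dt = -delta_f / tau."""
    return delta_f0 * math.exp(-t / tau)

for t in (2.0, 6.0, 10.0):  # the snapshot times used in Fig. 7
    print(f"t = {t:4.1f} fm/c: gluon {delta_f(1.0, t, TAU_G):.4f}, "
          f"quark {delta_f(1.0, t, TAU_Q):.4f}")
```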
We will focus on the energy loss of a hard gluon in a static medium of
infinite length. The in-medium cascade of the hard gluon leads to a dilute
distribution of quarks and gluons compared to the QGP, which we can describe
using a linearized fluctuation $\delta f(\bm{p})$ on top of the equilibrium
distribution $n_{a}(\bm{p})$. The full phase-space distribution is then given
by
$\displaystyle f_{a}(\bm{p})=n_{a}(\bm{p})+\delta f_{a}(\bm{p})\;.$ (15)
Since the equilibrium distribution $n_{a}(\bm{p})$ is static, the distribution
$\delta f_{a}(\bm{p})$ will describe the evolution of the hard partons and the
response of the medium. Each elastic scattering with the medium generates
recoil partons close to medium scales. The medium parton, which undergoes the
scattering, is “extracted” from the medium. It manifests itself as a negative
contribution to the distribution.
### IV.1 Evolution of a gluon in a QGP brick
We consider an initial _gluon_ with momentum along the $z$-axis and
approximate the initial distribution as a narrow Gaussian centered at
$\bm{p}=E_{0}\hat{e}_{z}$ written as
$\displaystyle\delta f_{g}^{\rm in}(p,\theta)$
$\displaystyle=\frac{\exp\left[-\frac{\left(p\cos\theta-
E_{0}\right)^{2}+p^{2}\sin^{2}\theta}{2\sigma^{2}}\right]}{p^{3}N},$
$\displaystyle\delta f_{q,\bar{q}}^{\rm in}(p,\theta)$ $\displaystyle=0\;,$
(16)
where $N=\int dp\int d\cos\theta~{}\exp\left[-\frac{\left(p\cos\theta-
E_{0}\right)^{2}+p^{2}\sin^{2}\theta}{2\sigma^{2}}\right]$ is a normalization
factor. We take the initial energy $E_{0}=25$ GeV, the QGP temperature
$T=0.25$ GeV, the QCD coupling constant to be $g=2$, and the Gaussian width
$\sigma=10^{-3}/\sqrt{2}E_{0}$.
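The normalization $N$ can be evaluated numerically. Since $(p\cos\theta-E_{0})^{2}+p^{2}\sin^{2}\theta=|\bm{p}-E_{0}\hat{e}_{z}|^{2}$, the integrand is a sharp peak at $p=E_{0}$, $\cos\theta=1$, and for $\sigma\ll E_{0}$ one finds $N\approx\sqrt{2\pi}\,\sigma^{3}/E_{0}^{2}$. A sketch checking this on a focused grid (we read the quoted width as $\sigma=(10^{-3}/\sqrt{2})\,E_{0}$; the typesetting is ambiguous):

```python
import math

# Numerical evaluation of the normalization N in Eq. (16),
#   N = ∫ dp ∫ dcosθ exp[-((p cosθ - E0)^2 + p^2 sin^2θ) / (2 σ^2)],
# using (p cosθ - E0)^2 + p^2 sin^2θ = |p_vec - E0 e_z|^2. For σ << E0 the
# integral reduces to N ≈ sqrt(2π) σ^3 / E0^2, which the grid below checks.
E0 = 25.0                          # GeV
SIGMA = 1e-3 / math.sqrt(2) * E0   # one reading of the quoted width (assumption)

def integrand(p, x):
    return math.exp(-((p * x - E0)**2 + p**2 * (1 - x**2)) / (2 * SIGMA**2))

# Trapezoidal grid focused on the peak (p near E0, cosθ near 1).
NP, NX = 400, 400
p_lo, p_hi = E0 - 8 * SIGMA, E0 + 8 * SIGMA
x_lo, x_hi = 1 - 40 * SIGMA**2 / E0**2, 1.0
dp, dx = (p_hi - p_lo) / NP, (x_hi - x_lo) / NX
N = 0.0
for i in range(NP + 1):
    p = p_lo + i * dp
    wp = 0.5 if i in (0, NP) else 1.0
    for j in range(NX + 1):
        x = x_lo + j * dx
        wx = 0.5 if j in (0, NX) else 1.0
        N += wp * wx * integrand(p, x) * dp * dx

print(N, math.sqrt(2 * math.pi) * SIGMA**3 / E0**2)
```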
Typically, jet fragmentation is studied using the Lund plane diagram, which
describes jet emissions using its longitudinal momentum $p$ and the inverse of
its angle $\theta$ with respect to the primary jet axis Andersson _et al._
(1989). In order to understand the chemical composition of the shower, we
follow a similar approach to the Lund diagram by studying the distribution of
partons in different momenta regions as a function of the polar angle $\theta$
away from the original parton.
Integrating over a momentum range, we define the following particle number
distribution as a function of the polar angle $\theta$
$\displaystyle\frac{dN_{a}}{d\theta}(\theta)=\sin\theta\int_{p_{\rm
min}}^{p_{\rm max}}dp\int\frac{d\phi}{(2\pi)^{2}}p^{2}\delta f_{a}(\bm{p})\;.$
(17)
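Equation (17) is straightforward to discretize. The sketch below (our construction) evaluates it for an azimuthally symmetric $\delta f(p,\theta)$ on a trapezoidal momentum grid, and validates it against an angle-independent test profile $\delta f=e^{-p/T}$, for which $dN/d\theta=\sin\theta\,T^{3}/\pi$ when $p_{\rm min}=0$ and $p_{\rm max}\to\infty$:

```python
import math

def dN_dtheta(delta_f, theta, p_min, p_max, n_p=2000):
    """Trapezoidal evaluation of Eq. (17) for an azimuthally symmetric delta_f(p, theta)."""
    dp = (p_max - p_min) / n_p
    total = 0.0
    for i in range(n_p + 1):
        p = p_min + i * dp
        w = 0.5 if i in (0, n_p) else 1.0
        total += w * p**2 * delta_f(p, theta) * dp
    # the trivial φ integral contributes 2π, and the prefactor is 1/(2π)^2
    return math.sin(theta) * total / (2 * math.pi)

T = 0.25  # GeV
# Angle-independent test profile with a known closed form:
#   δf = e^{-p/T}   =>   dN/dθ = sinθ T^3 / π
f_test = lambda p, theta: math.exp(-p / T)
theta = 0.6
print(dN_dtheta(f_test, theta, 0.0, 40 * T), math.sin(theta) * T**3 / math.pi)
```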
Figure 7 presents the distribution of gluons in full lines compared with the
distribution of quark and antiquarks in dashed lines. The different panels
show the angular distribution integrated over two momentum ranges: (top panel)
the semi-hard partons with $8T\leq p\leq 20T$ and (bottom panel) soft partons
with $p\leq 10T$. The evolution at times $t=2$ and 6 fm/$c$ is selected to
represent typical times of jet energy loss in the QGP, while $t=10$ fm/$c$
corresponds to a near-equilibrium distribution where most of the hard
parton’s energy is lost to the medium.
One observes at earlier times ($t=2$ and 6 fm/$c$) that the hard core of the
distribution at $\theta\simeq 0$ is composed of more bosons, originating from
collinear radiation of the initial gluon. Since the equilibration of bosons
proceeds faster than for fermions, one finds slightly more bosons at the low
scales of $p\leq 8T$. However, for the semi-hard partons $8T\leq p\leq 20T$,
a higher number of fermions develops in a ring with
$\theta\gtrsim 0.6$. Conversely, at late times ($t=10$ fm/$c$), the fermions
dominate over bosons in all momentum ranges and angles as chemical
equilibration is reached, leading to the same parton composition as the QGP
Mehtar-Tani and Schlichting (2018); Schlichting and Soudi (2021).
### IV.2 Chemical composition at late time
Throughout the preceding subsection, we followed the evolution of a linearized
perturbation on top of a static equilibrium background, which at
asymptotically late time completely thermalizes with the medium. The
asymptotic distribution can be obtained analytically by considering a linear
perturbation around the equilibrium distribution $n_{a}(\bm{p})$ for each
species. To achieve this, only the linear terms of the Taylor series in the
thermodynamic conjugates of the conserved quantities are kept.
For the kinetic evolution considered, the conserved quantities are energy $E$,
momentum $p_{z}$, and valence number $N_{v}=N_{q}-N_{\bar{q}}$, while their
conjugate variables are temperature $T$, flow velocity $u_{z}$, and chemical
potential $\mu_{v}=\mu_{q}-\mu_{\bar{q}}$, respectively. The general
equilibrium distribution is
$\displaystyle n_{a}(\bm{p})=\frac{1}{e^{\frac{p\cdot u-\mu_{a}}{T}}\mp 1}\;,$
(18)
where $\mp$ stand for gluons and quarks, respectively. The linear perturbation
of the equilibrium distribution is then given by
$\displaystyle\delta n_{a}(\bm{p})=\left.\left[-\frac{\delta T}{T^{2}}\partial_{T}+\delta u_{z}\partial_{u_{z}}+\delta\left(\frac{\mu_{a}}{T}\right)\partial_{\frac{\mu_{a}}{T}}\right]n_{a}(\bm{p})\right|_{u_{z}=\mu_{a}=0}\,.$ (19)
After identifying the values of $\delta T$, $\delta u_{z}$ and
$\delta\left(\frac{\mu_{a}}{T}\right)$ by matching the moments of the
distribution with the conserved quantities, one finds (a detailed derivation
is given in App. C of Mehtar-Tani _et al._ (2022))
$\displaystyle\delta n_{a}(\bm{p})=$ $\displaystyle
E_{0}\frac{p}{4T\epsilon(T)}[1+3\cos\theta]n_{a}(\bm{p})(1\pm
n_{a}(\bm{p}))\;,$ (20)
with the energy density
$\epsilon(T)=\left(\frac{\pi^{2}}{30}\nu_{g}+\frac{7\pi^{2}}{120}\nu_{q}N_{f}\right)T^{4}$,
where $\nu_{g}=2(N_{c}^{2}-1)$ and $\nu_{q}=2N_{c}$. The matching ensures that
the energy of the initial parton is recovered by computing the following
moment of the distribution,
$\displaystyle\int\frac{d^{3}\bm{p}}{(2\pi)^{3}}\,p\left[\nu_{g}\delta n_{g}(\bm{p})+2N_{f}\nu_{q}\delta n_{q}(\bm{p})\right]=E_{0}\;.$ (21)
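The energy-matching condition can be verified by direct numerical integration. Writing the moment with the energy weight $p$ and the degeneracies explicit, $\nu_{g}$ for gluons and $2N_{f}\nu_{q}$ for quarks plus antiquarks, the integral of $\delta n_{a}$ from Eq. (20) should return $E_{0}$. A sketch:

```python
import math

NC, NF = 3, 3
NU_G = 2 * (NC**2 - 1)   # gluon degeneracy
NU_Q = 2 * NC            # quark degeneracy (per flavor and per q/qbar)
T, E0 = 0.25, 25.0       # GeV

# Equilibrium energy density, as given below Eq. (20)
EPS = (math.pi**2 / 30 * NU_G + 7 * math.pi**2 / 120 * NU_Q * NF) * T**4

def n_eq(p, eta):
    """eta = -1 for Bose-Einstein (gluons), +1 for Fermi-Dirac (quarks)."""
    return 1.0 / (math.exp(p / T) + eta)

def energy_moment(eta, n_p=4000, p_max=10.0):
    """∫ d^3p/(2π)^3 p δn_a with δn_a from Eq. (20); the angular integral of
    (1 + 3 cosθ) gives 4π, since the cosθ term averages to zero."""
    dp = p_max / n_p
    total = 0.0
    for i in range(1, n_p + 1):   # the p = 0 endpoint contributes zero
        p = i * dp
        w = 0.5 if i == n_p else 1.0
        n = n_eq(p, eta)
        dn = E0 * p / (4 * T * EPS) * n * (1 - eta * n)   # [1 ± n_a] with the right sign
        total += w * p**3 * dn * dp
    return 4 * math.pi / (2 * math.pi)**3 * total

E_total = NU_G * energy_moment(-1) + 2 * NF * NU_Q * energy_moment(+1)
print(E_total / E0)  # ≈ 1
```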
When the momentum of the partons is much larger than the temperature $p\gg T$,
the quantum distributions can be approximated by a Boltzmann distribution,
leading to a simple relation between the number of fermions and gluons. While
the low momentum region $p\ll T$ is dominated by gluons, for large momenta
$p\gg T$, the number of fermions is related to the gluon number by
$\displaystyle\frac{N_{q}(\bm{p})+N_{\bar{q}}(\bm{p})}{N_{g}(\bm{p})}=\frac{2N_{f}\nu_{q}\delta
n_{q}(\bm{p})}{\nu_{g}\delta n_{g}(\bm{p})}\overset{\bm{p}\gg
T}{\simeq}\frac{4N_{c}N_{f}}{2d_{A}}\;,$ (22)
leading to the relation
$\displaystyle N_{q}(\bm{p})+N_{\bar{q}}(\bm{p})\overset{\bm{p}\gg
T}{\simeq}\frac{N_{f}}{C_{F}}N_{g}(\bm{p})\;.$ (23)
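The crossover between the gluon-dominated soft region and the fermion-dominated hard region in Eqs. (22) and (23) is easy to exhibit numerically, since the common factor $E_{0}p[1+3\cos\theta]/(4T\epsilon)$ cancels in the ratio. A sketch (our construction):

```python
import math

NC, NF = 3, 3
CF = (NC**2 - 1) / (2 * NC)
NU_G = 2 * (NC**2 - 1)
NU_Q = 2 * NC
T = 0.25  # GeV

def ratio(p):
    """(N_q + N_qbar) / N_g per mode from Eq. (22): only the statistical
    weights n_a (1 ± n_a) and the degeneracies survive in the ratio."""
    n_g = 1.0 / (math.exp(p / T) - 1.0)
    n_q = 1.0 / (math.exp(p / T) + 1.0)
    return (2 * NF * NU_Q * n_q * (1 - n_q)) / (NU_G * n_g * (1 + n_g))

for p_over_T in (0.5, 2, 8, 20):
    print(p_over_T, ratio(p_over_T * T))
print(NF / CF)  # the asymptotic value 2.25 of Eq. (23)
```

At soft momenta the Bose enhancement of $n_{g}(1+n_{g})$ keeps the ratio well below one, while for $p\gg T$ it saturates at $N_{f}/C_{F}=2.25$.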
In this section, we have demonstrated how the thermalization of a hard gluon
in a hot and dense QGP leads to a shower with a chemical composition dominated
by quarks and antiquarks, contrary to the case of vacuum fragmentation.
Throughout this kinetic theory simulation, we have considered a simple static
medium, which ignores important effects of flow.
Our solution to the Boltzmann equation does not include event-by-event
fluctuations, an initial vacuum like shower for partons at large virtualities, or
realistic energy loss parameters. In the following section, we will consider
realistic simulations where the hard jet shower may not thermalize in the
medium. While the medium will remain static, the jet will undergo a stochastic
process of multiple emission as it propagates through the medium. We will
study cases both with and without vacuum like emissions. We will also consider
the systematic effect of varying the energy-loss formalism from a single-stage
to a multi-stage formalism.
## V Simulations in vacuum and static media
In the preceding section, we demonstrated that the appearance of a large
number of quarks and antiquarks within the vicinity of a jet should be a
generic feature of jet modification processes in a deconfined medium. The
semi-analytic results in a static medium did not include vacuum like showers
Caucal _et al._ (2018); Cao and Majumder (2020); Cao _et al._ (2017) and
multi-stage energy loss Kumar _et al._ (2022a) in a dynamically evolving
medium. In this section, somewhat more realistic simulations will be carried
out to study the appearance of these charge/baryon rings in the angular
structure of jets.
In the first subsection, we will revisit the calculation of the angular
structure of gluons and quarks radiated from a hard _gluon_ in vacuum,
demonstrating the large excess of gluons at all angles away from the primary
parton. Following this, simulations are carried out in a static medium at
$T=0.25$ GeV, using the Linear Boltzmann Transport (LBT) event generator,
which is somewhat different from the Boltzmann equation based calculations
presented in Sec. IV: LBT is a Monte-Carlo event generator, and there is at
most one scattering per emission, for all emissions.
Finally, simulations with a multi-scale event generator are presented, where
the initial high virtuality stage is modeled with the higher twist formalism
in the MATTER generator Majumder (2013a); Cao and Majumder (2020) and the
lower virtuality stage is modeled using the Hard Thermal Loops formalism
Arnold _et al._ (2000, 2003a, 2003b) present in the MARTINI generator Schenke
_et al._ (2009), which involves multiple coherent scatterings per emission.
Similar to the case of pure LBT, these simulations are also carried out in a
static medium.
Compared to the vacuum shower, the pure LBT simulation or the MATTER+MARTINI
combination generates an excess of quarks+antiquarks at large angles away from
the jet axis at both intermediate and low-$p_{T}$. The angle at which these
appear may vary, based on the parameters of the simulation. In both the LBT
and the MATTER+MARTINI simulation, the number of semi-hard quarks and
antiquarks exceeds the number of semi-hard gluons by $\tau=10$ fm/$c$.
### V.1 Simulations in Vacuum
We begin by revisiting the partonic angular structure of jets in a vacuum. We
consider a hard gluon with $E=25$ GeV that starts with a typical initial
maximum virtuality of $Q=E/2$. This implies that the initial virtuality is
logarithmically distributed in the range $0\leq\mu\leq Q$. As in the preceding
section, a hard gluon is the shower-initiating parton. This choice removes any
contamination of the scattering-generated charge ring from the parent parton,
as the gluon has no net charge or baryon number.
Figure 8: (Color online) Particle number distribution from a 25 GeV initial
gluon in vacuum, as a function of the polar angle, for hard partons with
energy $2$ GeV$<E\leq 5$ GeV (red lines), and soft partons with energy $E\leq
2$ GeV (black lines). The gluon distribution is displayed in full-lines, while
the sum of quarks and antiquarks is displayed in dashed-lines, at $\tau_{\rm
max}=10$ fm/$c$ after the production of the original gluon, which has a
maximum virtuality of $E/2=12.5$ GeV.
The hard parton can undergo successive splits, where both $g\rightarrow gg$
and $g\rightarrow q\bar{q}$ are allowed. The shower development is continued
for a time $\tau=10$ fm/$c$. In Fig. 8 we split the final partons at $\tau=10$
fm/$c$ into two groups: The low-energy group with $E<2$ GeV, and the
intermediate energy group with $2$ GeV$<E\leq 5$ GeV. Had the jet been
immersed in a medium at a temperature $T=0.25$ GeV, the two energy boundaries
would have corresponded to $8T$ and $20T$, similar to the ranges considered in
the preceding section.
Given the singular nature of the $g\rightarrow gg$ splits compared to the
$g\rightarrow q\bar{q}$ splits, one notes that in both the low and
intermediate energy range, the number of gluons far exceeds the number of
quarks and antiquarks. As stated in the introduction, vacuum jets are
primarily gluonic. Comparison with the plots in Fig. 7, for the case of pure
in-medium evolution, should immediately convince the reader of the striking
difference between the jet flavor profile in a medium versus that in a vacuum.
By 10 fm/$c$, the quark and antiquark population in Fig. 7 easily exceeds the gluon population in most regions of phase space (green lines in the plot).
### V.2 Simulations in LBT
In this subsection, we present results for the flavor profile from a semi-
realistic Monte-Carlo simulation of a hard gluon propagating through a static
medium, held at $T=0.25$ GeV. In this first attempt to reveal the
charge/flavor/baryonic profile, simulations will be carried out starting from
a 25 GeV gluon within the _pure_ LBT model, i.e., using a single-stage jet
modification scenario. A two-stage simulation is presented in the subsequent
subsection.
In LBT Cao _et al._ (2016, 2018), the phase space distribution of the jet
parton (denoted by $a$) evolves according to the Boltzmann equation as
$p_{a}\cdot\partial
f_{a}=E_{a}(\mathcal{C}_{\mathrm{el}}+\mathcal{C}_{\mathrm{inel}}),$ (24)
in which the collision term on the right-hand side incorporates both elastic and inelastic contributions. Based on the collision term, the elastic scattering rate, i.e., the number of elastic scatterings per unit time, can be written as
$\displaystyle\Gamma_{a}^{\mathrm{el}}$
$\displaystyle(\bm{p}_{a},T)=\sum_{b,(cd)}\frac{\gamma_{b}}{2E_{a}}\int\prod_{i=b,c,d}\frac{d^{3}\bm{p}_{i}}{E_{i}(2\pi)^{3}}f_{b}S_{2}(\hat{s},\hat{t},\hat{u})$
$\displaystyle\times(2\pi)^{4}\delta^{(4)}(p_{a}+p_{b}-p_{c}-p_{d})|\mathcal{M}_{ab\rightarrow
cd}|^{2},$ (25)
in which the summation is over all possible $ab\rightarrow cd$ channels,
$\gamma_{b}$ represents the color-spin degrees of freedom of the thermal
partons inside the QGP and $f_{b}$ is their distribution function. In LBT, a
function $S_{2}(\hat{s},\hat{t},\hat{u})=\theta(\hat{s}\geq
2\mu_{\mathrm{D}}^{2})\theta(-\hat{s}+\mu^{2}_{\mathrm{D}}\leq\hat{t}\leq-\mu_{\mathrm{D}}^{2})$
is introduced Auvinen _et al._ (2010) to regulate the collinear divergence in
the leading-order (LO) scattering matrices $\mathcal{M}_{ab\rightarrow cd}$,
where $\hat{s}$, $\hat{t}$ and $\hat{u}$ are the Mandelstam variables and
$\mu_{\mathrm{D}}^{2}=g^{2}_{s}T^{2}(N_{c}+N_{f}/2)/3$ is the Debye screening
mass with $g^{2}_{s}=4\pi\alpha_{s}$ being the strong coupling constant and
$T$ being the medium temperature.
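The Debye mass defined above is straightforward to evaluate numerically. A minimal sketch, using the parameter values quoted in this section ($\alpha_{s}=0.3$, $T=0.25$ GeV, $N_{c}=N_{f}=3$):

```python
import math

def debye_mass_sq(alpha_s, T, Nc=3, Nf=3):
    """Debye screening mass squared, mu_D^2 = g^2 T^2 (Nc + Nf/2) / 3,
    with g^2 = 4*pi*alpha_s; result is in GeV^2 when T is in GeV."""
    g_sq = 4.0 * math.pi * alpha_s
    return g_sq * T**2 * (Nc + Nf / 2.0) / 3.0

mu_D_sq = debye_mass_sq(alpha_s=0.3, T=0.25)
print(round(mu_D_sq, 4))  # 0.3534 GeV^2
```

At these values $\mu_{\mathrm{D}}\approx 0.59$ GeV, which sets the scale at which $S_{2}$ cuts off the collinear region of the LO matrix elements.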
The inelastic scattering rate can be related to the average number of medium-
induced gluons per unit time as
$\Gamma_{a}^{\mathrm{inel}}(E_{a},T,t)=\int
dxdk_{\perp}^{2}\frac{dN_{g}^{a}}{dxdk_{\perp}^{2}dt},$ (26)
with the gluon spectrum taken from the higher-twist energy loss calculation
Wang and Guo (2001); Zhang _et al._ (2004); Majumder (2012),
$\frac{dN_{g}^{a}}{dxdk_{\perp}^{2}dt}=\frac{2C_{A}\alpha_{\mathrm{s}}P^{\mathrm{vac}}_{a}(x)}{\pi
C_{2}(a)k_{\perp}^{4}}\,\hat{q}_{a}\,{\sin}^{2}\left(\frac{t-t_{i}}{2\tau_{f}}\right).$
(27)
Here, $x$ and $k_{\perp}$ are the fractional energy and transverse momentum of
the emitted gluon relative to its parent parton, $P^{\mathrm{vac}}_{a}(x)$ is
the vacuum splitting function of the jet parton with its color factor
$C_{2}(a)$ included, $\hat{q}_{a}$ is the jet quenching parameter that encodes
the medium information and is evaluated according to the transverse momentum
broadening square per unit time in elastic scatterings, $t_{i}$ denotes the
production time of parton $a$, and $\tau_{f}={2E_{a}x(1-x)}/k_{\perp}^{2}$ is
the formation time of the emitted gluon. In this section, we set the coupling
constant as $\alpha_{s}=0.3$, which directly controls the interaction strength
in elastic scatterings, and affects the rate of medium-induced gluon emission
through $\hat{q}_{a}$.
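Eq. (27) and the formation time can be transcribed directly. The sketch below works in natural units; the $g\rightarrow gg$ splitting function used in the example is an illustrative, unregularized kernel and should be treated as an assumption.

```python
import math

def formation_time(E, x, k_perp):
    """Formation time tau_f = 2 E x (1 - x) / k_perp^2 (natural units)."""
    return 2.0 * E * x * (1.0 - x) / k_perp**2

def ht_gluon_spectrum(x, k_perp, t, t_i, E, qhat, alpha_s, P_vac, C2, C_A=3.0):
    """Higher-twist medium-induced gluon spectrum dN/(dx dk_perp^2 dt)
    of Eq. (27); P_vac is the vacuum splitting function of the jet
    parton, with its color factor C2 included (supplied by the caller)."""
    tau_f = formation_time(E, x, k_perp)
    prefac = 2.0 * C_A * alpha_s * P_vac(x) / (math.pi * C2 * k_perp**4)
    return prefac * qhat * math.sin((t - t_i) / (2.0 * tau_f)) ** 2

# Illustrative g -> gg splitting kernel (assumed normalization):
P_gg = lambda x: 2.0 * 3.0 * ((1 - x) / x + x / (1 - x) + x * (1 - x))
rate = ht_gluon_spectrum(x=0.3, k_perp=1.0, t=1.0, t_i=0.0,
                         E=25.0, qhat=1.0, alpha_s=0.3, P_vac=P_gg, C2=3.0)
assert rate > 0.0
```

The $\sin^{2}$ factor vanishes at $t=t_{i}$ and builds up over one formation time, encoding the interference between vacuum and induced radiation.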
Figure 9: (Color Online) Particle number distribution from LBT simulation
starting from a $E_{\rm in}=25$ GeV gluon and evolving in a static medium at
$T=0.25$ GeV, as a function of the polar angle, for soft partons with energy
$E\leq 2$ GeV. The gluon distribution is displayed as solid lines, while the sum of quarks and antiquarks is displayed as dashed lines, at three evolution
times of 2, 6 and 10 fm/$c$.
In the LBT simulation (in this particular simulation, the rate for gluons to convert to quarks or antiquarks has been corrected by an additional factor of $N_{f}=3$, which is missing in all prior versions; this factor should not change previous results, which focused on studying energy loss), we track
not only the jet partons and their emitted gluons, but also the thermal
partons being scattered out of the QGP background by jets. The latter are
known as “recoil” partons. When these “recoil” partons are produced, energy-momentum depletion occurs inside the original QGP medium. These depletions are treated as particle holes, or “negative” partons, and are also fully tracked in LBT in order
to guarantee the energy-momentum conservation of the whole system of jet
partons and the QGP. Recoil and “negative” partons constitute the “jet-induced
medium excitation”, or “medium response to jet propagation”, which have been
shown to be crucial for understanding jet observables, including their nuclear
modification factor and anisotropic flow coefficients He _et al._ (2019,
2022).
Using this LBT model, we calculate the angular distribution of partons that
start from a single gluon with 25 GeV energy and evolve through a static
medium at $T=0.25$ GeV. Results for partons at low energy ($E\leq 2$ GeV) and
intermediate energy ($2<E\leq 5$ GeV) are presented separately in Figs. 9 and
10 respectively. In each figure, we compare the distributions of quarks +
antiquarks and gluons at three different evolution times.
At intermediate energy, one can clearly observe an excess of quarks (together
with antiquarks) over gluons at larger angles ($\theta\gtrsim 0.9$) with
respect to the jet direction (the momentum direction of the initial gluon
here), for evolution times up to 6 fm/$c$. At later times, the fermion excess
at intermediate momenta manifests at all angles. This excess becomes more
prominent as the evolution time increases, indicating a flavor change from
gluons to quarks during jet-medium interactions. Note that within the LBT
calculation, the distributions of “negative” partons have been subtracted from
those of regular partons. For this reason, one can see the negative
distribution of low energy partons (Fig. 9) at large angle, which is known as
the energy depletion, or the diffusion wake, in the opposite direction of jet
propagation.
While the distribution of soft partons with $E<2$ GeV (Fig. 9) produced in the LBT simulation is rather different from the soft parton distributions in the prior Boltzmann simulation (lower panel of Fig. 7), the semi-hard
distributions (in Fig. 10) are in qualitative agreement with those in the
upper panel of Fig. 7. However, there is almost a factor of 2 difference in
the overall normalization of the plots for distributions with $t\gtrsim 6$
fm/$c$. Also, the detailed positions at which quark spectra cross gluon
spectra, indicated by the vertical dashed lines, are quantitatively different
due to different model implementations. In spite of these differences, in both
cases, the semi-hard quark (and antiquark) distribution begins to surpass the
semi-hard gluon distribution at $\theta\gtrsim 0.2$ for the Boltzmann
simulation, and at $\theta\gtrsim 0.9$ for the LBT simulation after 6 fm/$c$,
and completely dominates by 10 fm/$c$.
Figure 10: (Color Online) Same as simulations in Fig. 9, except for partons at
intermediate energy $2<E\leq 5$ GeV. The dotted lines show the angles at which the quark distribution starts to exceed the gluon distribution, for evolution times of 2 and 6 fm/$c$.
### V.3 Simulations in MATTER+MARTINI
Currently, multi-stage jet modification simulators Kumar _et al._ (2020a);
Putschke _et al._ (2019); Kumar _et al._ (2022a) have shown remarkable
success in simultaneously describing a host of jet-based observables. In these
simulations, the medium generated scale $Q^{2}_{\rm med}=\sqrt{2E\hat{q}}$,
where $E$ is the energy of a parton undergoing energy loss, plays a crucial
role Majumder and Van Leeuwen (2011); Cao _et al._ (2017). Partons whose
virtuality is above this scale undergo mostly vacuum-like splitting, with a
perturbative correction to the splitting kernel from medium induced radiation.
As a result, most emissions are vacuum-like with a few interfering medium
induced emissions Majumder (2009b); Kumar _et al._ (2020b); Cao _et al._
(2021b). In-medium scatterings are accounted for using the scattering kernels
described in Sec. III. As those rates are obtained assuming the incoming and
outgoing partons are on-shell, the virtuality is temporarily removed from the
$p^{0}$ component of the four-momentum of incoming and outgoing partons when
computing the scattering rates. Once the four-momenta of all partons
participating in the scattering is determined, the virtuality is restored
within the energy of all incoming and outgoing partons, thus preserving its
value. Partons with a virtuality at or below this scale undergo multiple
scattering in the process of almost every emission, with purely vacuum-like
emission almost absent Baier _et al._ (1995, 1997).
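The bookkeeping described above, temporarily removing the virtuality from $p^{0}$ so that the scattering rate can be evaluated with on-shell kinematics, then restoring the original four-momenta, can be sketched schematically. The function names are hypothetical; this is not the JETSCAPE implementation.

```python
import math

def to_on_shell(p):
    """Return a copy of the four-momentum (E, px, py, pz) with the
    energy set to |p_vec|, i.e. with the virtuality removed from p0."""
    _, px, py, pz = p
    return (math.sqrt(px**2 + py**2 + pz**2), px, py, pz)

def rate_with_virtuality_preserved(partons, rate_fn):
    """Evaluate a scattering rate with all partons taken on-shell, then
    return the rate together with the original, unmodified four-momenta,
    so each parton's virtuality is restored after the computation."""
    on_shell = [to_on_shell(p) for p in partons]
    return rate_fn(on_shell), partons
```

Because the originals are returned untouched, the virtuality of each incoming and outgoing parton is preserved across the rate evaluation, as the text requires.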
Figure 11: (Color Online) Particle number distribution from a $E_{\rm in}=25$
GeV gluon in a static medium at $T=0.25$ GeV, as a function of the polar
angle, for soft partons with energy $E\leq 2$ GeV. The gluon distribution is
displayed as solid lines, while the sum of quarks and antiquarks is displayed as dashed lines, at three evolution times of 2, 6 and 10 fm/$c$ after the
production of the original gluon, which has a maximum virtuality of $E/2=12.5$
GeV.
Simulations in this subsection are carried out using the JETSCAPE framework
Putschke _et al._ (2019), using the versions of the MATTER and MARTINI simulation modules therein. We consider, once again, the case of a single hard _gluon_
with an energy of 25 GeV propagating in a static medium held at $T=0.25$ GeV.
The hard jet starts with an initial maximal virtuality $Q=E/2$ as in the case
of the vacuum simulation in Sec. V.1. The emissions from partons with a
virtuality $Q>Q_{\rm med}$ are simulated using the MATTER generator. As
partons undergo more splits in MATTER, their virtuality drops. Once a parton's virtuality reaches $Q_{\rm med}$, it transitions to the MARTINI generator. The
virtuality of the partons is maintained by scattering in the medium while in
the MARTINI stage.
As the parton emerges from the medium, $\hat{q}$ drops to zero, the virtuality of the parton once again exceeds the scale $Q_{\rm med}\rightarrow 0$, and the parton transitions back to MATTER. Partons that escape the medium continue to undergo vacuum-like splits
until each of their virtualities reaches $Q_{0}=1$ GeV. Beyond this, partons
will free stream until the end of the simulation, set at $\tau_{\rm max}$.
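The stage-switching logic described above can be sketched with the medium scale $Q^{2}_{\rm med}=\sqrt{2E\hat{q}}$ from the start of this subsection. The function names are hypothetical, and the numerical value of $\hat{q}$ in the usage is illustrative only.

```python
import math

def Q_med_sq(E, qhat):
    """Medium-generated virtuality scale, Q_med^2 = sqrt(2 E qhat)
    (GeV^2 for E in GeV and qhat in GeV^3)."""
    return math.sqrt(2.0 * E * qhat)

def stage(Q_sq, E, qhat, Q0=1.0):
    """Pick the stage for a parton with virtuality squared Q_sq:
    MATTER while Q^2 exceeds Q_med^2; MARTINI below it inside the
    medium; in vacuum (qhat = 0) vacuum-like splits continue in MATTER
    down to Q0 = 1 GeV, after which the parton free-streams."""
    if qhat == 0.0:
        return "MATTER" if Q_sq > Q0**2 else "free-stream"
    return "MATTER" if Q_sq > Q_med_sq(E, qhat) else "MARTINI"

# Illustrative: a 25 GeV parton with an assumed qhat of 0.3 GeV^3,
# for which Q_med^2 = sqrt(15) ~ 3.9 GeV^2.
print(stage(10.0, 25.0, 0.3))  # MATTER
print(stage(2.0, 25.0, 0.3))   # MARTINI
```

As $\hat{q}\rightarrow 0$ at the medium edge, $Q^{2}_{\rm med}\rightarrow 0$ and any residual virtuality puts the parton back in the MATTER branch, matching the transition described in the text.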
While the MATTER generator involves at most one scattering per emission from a
parton, MARTINI allows for multiple scatterings over the course of a single
emission, and as a result there is a greater probability to convert a boson
into a fermion (and vice versa), especially for the longer-lived (softer)
partons in the MARTINI phase. However, since some portion of the jet will
definitely be in the MATTER stage, fewer conversions are expected within a
multi-stage parton energy loss simulation compared to a pure MARTINI
simulation.
Figure 12: (Color Online) Same as simulations in Fig. 11, except for partons
at intermediate $p_{T}$, with $2<E\leq 5$ GeV. The dashed lines show the angles at which the quark distribution starts to exceed the gluon distribution, for evolution times of 2 and 6 fm/$c$.
In Figs. 11 and 12, the yield of soft partons and semi-hard partons, respectively, has been plotted for 3 different values of $\tau_{\rm max}$.
These partons are all part of the profile of the jet that starts as a single
gluon with an energy of 25 GeV (and virtuality $Q=E/2$). Similar to the case
of the solution of the Boltzmann equation in Sec. IV, as well as for the case
of LBT in the preceding subsection, we find more quarks and antiquarks in
proportion to the gluons, compared to the case in vacuum. In this case, the
angle at which the quark and antiquark number exceeds the gluon number at 6
fm/$c$ ($\theta\gtrsim 0.3$), is smaller than in the case of LBT
($\theta\gtrsim 0.9$), and slightly larger than the angle in the Boltzmann
simulation ($\theta\gtrsim 0.2$). This is due to the greater number of scatterings in MARTINI compared to LBT, and the lack of a vacuum-like stage in the Boltzmann simulations compared to the MATTER+MARTINI simulations.
The focus of this article is to compare the temporally rising quark (and
antiquark) distribution, within and surrounding a jet, with the falling gluon
distribution, in the same region of angular space, at intermediate momentum (2
GeV $\lesssim E\lesssim 5$ GeV). While the low-momentum region is not our focus, we report on it for all three cases of the Boltzmann simulation, LBT
and MATTER+MARTINI simulations. In all three cases, the low momentum region
around the jet in these simulations is quite different. In the specific case
of the MATTER+MARTINI simulation, the number of gluons always remains larger
than the number of quarks and antiquarks. Also, both quark and gluon curves show a dip for $\theta\gtrsim\pi/2$ from the direction of the leading
parton. This is primarily due to the subtraction of holes. In the case of the
LBT simulations, this region is actually negative (see Fig. 9). In the case of
MARTINI, jets can emit partons down to vanishingly soft momentum, which is
enhanced for the case of gluons. As a result, the soft gluon emissions
completely cover up the negative portion that arises due to subtraction of the
holes. The soft quark emission rates are much smaller, and thus can only
overcome the negative contribution of hole subtraction at times larger than 10
fm/$c$.
In this and the preceding section we have explored jets in a medium, albeit
static, from a variety of formalisms, which have varying amounts of
interaction between the jet and the medium. In all cases, we observe a large
excess of the fermion number correlated with the jet (compared to a vacuum
shower) at angles greater than 0.2 (Boltzmann simulation) to 0.9 radians (LBT)
away from the original jet-axis. The three simulations are quite different and
yield very different distributions for partons with $E\lesssim 2$ GeV.
However, these differences at low momentum make the qualitative similarities
at intermediate momentum a more rigorous prediction of the gluon versus quark
and antiquark number.
The reader will have noted that all our calculations are entirely partonic.
Will this charge enhancement survive hadronization in a ring form? Can it be
observed in experimental data? The answer to these questions is so far
unsettled. Indeed, most of the fermion excess is at low and intermediate
$p_{T}$ where there are no good hadronization mechanisms that can conserve
charge and baryon number, either event-by-event, or within angular/rapidity
ranges. Cooper-Frye Hadronization Cooper and Frye (1974) is carried out on
distributions. The presence of the large number of co-moving quarks and antiquarks would lead to very low-mass strings if Lund hadronization were
applied, leading to a breakdown of that methodology Andersson (2005). In the
subsequent and penultimate section, we will explore other observables that may
be correlated with this enhancement in baryon/charge number, which may already
have been observed.
## VI Jet modification and the baryon enhancement
In the preceding sections, we have argued that jets modified in a dense plasma
have a strikingly different flavor profile compared to jets in vacuum. Jets in
vacuum that begin with either a hard quark or gluon, tend to radiate a large
number of gluons, compared to quarks or antiquarks. As shown in Fig. 8, the
number of soft gluons ($E<2$ GeV) exceeds the number of quarks and antiquarks
by two orders of magnitude, while the number of intermediate energy gluons
(with $2$ GeV$<E<5$ GeV) is an order of magnitude larger than quarks and
antiquarks of similar energy. This flavor mixture is dramatically changed for
the case of jets modified in the medium, where the quark and antiquark number
becomes comparable to the gluon number. All our estimates are based on a jet
that starts as a gluon with $E=25$ GeV.
Three different simulations carried out in the preceding section indicate that
the increase in fermion content of the jet is the most dramatic modification of the jet in a dense medium: the fractional change in flavor far exceeds the fraction of energy lost by the jet on passage through the medium. To be clear,
a change in the momentum profile of the jet is not expected as a result of
this enhancement of fermionic content: There is no excess or depletion in the
amount of energy loss of the jet caused by this change in the flavor profile
of soft and semi-hard partons. However, one would expect the flavor or baryon
number profile of the jet to be modified, especially in the semi-hard region.
Currently, there is no reliable hadronization mechanism that can be used to
test this hypothesis, on a triggered jet. However, we can look for such an
enhancement in the yield of hadrons at intermediate $p_{T}$. In the absence of
a reliable hadronization mechanism, we propose the somewhat tenuous
equivalence in the ratios:
$\displaystyle\frac{\frac{d^{3}N^{B+\bar{B}}_{AA}(b_{min},b_{max})}{d^{2}p_{T}dy}}{\langle
N_{bin}\rangle_{(b_{min},b_{max})}\frac{d^{3}N^{B+\bar{B}}_{pp}}{d^{2}p_{T}dy}}$
$\displaystyle\sim\frac{\frac{d^{3}N^{q+\bar{q}}_{AA}(b_{min},b_{max})}{d^{2}p_{T}dy}}{\langle
N_{bin}\rangle_{(b_{min},b_{max})}\frac{d^{3}N^{q+\bar{q}}_{pp}}{d^{2}p_{T}dy}}$
$\displaystyle\implies R^{B+\bar{B}}_{AA}$ $\displaystyle\sim
R^{q+\bar{q}}_{AA}.$ (28)
In the above equation, we are proposing the $R_{AA}$ for baryons and antibaryons as an approximation to the $R_{AA}$ for quarks and antiquarks. This
equality will no doubt receive corrections from hadronization. We will study
this ratio in the intermediate $p_{T}$ region. The goal is to see the
proximity of the two ratios, to place constraints on the possible
hadronization mechanisms Fries _et al._ (2003b); Molnar and Voloshin (2003);
Hwa and Yang (2003, 2004); Greco _et al._ (2003) in this region.
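Each side of Eq. (28) is a standard binned nuclear modification factor. A minimal sketch, with toy spectra (the numbers are hypothetical, not taken from the simulations):

```python
def nuclear_modification_factor(dN_AA, dN_pp, n_bin):
    """Binned R_AA of Eq. (28): the AA yield divided by the
    binary-collision-scaled pp yield, bin by bin in p_T."""
    return [aa / (n_bin * pp) for aa, pp in zip(dN_AA, dN_pp)]

# Toy p_T-binned yields (illustrative values only):
raa = nuclear_modification_factor([40.0, 9.0, 1.2], [10.0, 3.0, 0.6],
                                  n_bin=10.0)
print([round(r, 2) for r in raa])  # [0.4, 0.3, 0.2]
```

The conjecture of Eq. (28) is then that applying this estimator to baryons plus antibaryons and to quarks plus antiquarks yields approximately equal bin-by-bin values.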
Simulations of this ratio are carried out using the LBT model He _et al._
(2015) for energy loss (see Subsec. V.2 for more details). Calculations are
carried out on a realistic fluid dynamical medium Shen _et al._ (2016). The
initial state and evolution of the fluid have been parameterized by comparison
with the yield and azimuthal anisotropy of soft hadrons. The initial hard
spectrum of partons has been calculated using LO pQCD, with requisite
K-factors Field (1989). We present results for the $R_{AA}$ of quarks and
antiquarks in 0-20$\%$ central collisions at RHIC ($\sqrt{s_{\rm NN}}=0.2$ TeV) and 0-5$\%$ central collisions at the LHC ($\sqrt{s_{\rm NN}}=2.76$ TeV).
Figure 13: (Color online) The nuclear modification factor for quarks plus
antiquarks (red dot-dashed line), and all partons (orange solid line)
correlated with hard scattering. The partons included were created in the
modified shower from the jet, either via a split from another parton, or via
the recoil process. No partons from the fluid, except those in recoil are
included. Results are compared with the $R_{AA}$ for $p+\bar{p}$ as an
approximate substitute for the baryon plus antibaryon ratio. No hadronization
is included in the theoretical calculation. Experimental results taken from
Refs. Adare _et al._ (2008); Abelev _et al._ (2010).
Figure 14: (Color online) Same as Fig. 13, for collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV at the LHC. Experimental results taken from Refs. Abelev _et al._ (2014); Acharya _et al._ (2020).
In Figs. 13 and 14, we plot the nuclear modification factor for partons
correlated with hard scattering (solid line in both figures). We include only
partons that are created in the split of another parton from the jet showers
or created in the recoil process. We also plot the $R_{AA}$ of $q+\bar{q}$
correlated with jets. These are compared with the $R_{AA}$ for proton +
antiprotons as a substitute for the baryon and antibaryon $R_{AA}$.
At both RHIC and LHC energies, we note that the $R_{AA}$ for quarks and
antiquarks shows a rise at lower $p_{T}$ that is similar to the rise of the
$R_{AA}$ for protons and antiprotons. However, the rise takes place at a lower $p_{T}$ than in the experimental data. Also, the magnitude of the excess at RHIC is smaller than in the data. Thus, the fermion excess from jets may not be
sufficient to explain the baryon excess seen at intermediate $p_{T}$ at RHIC
and LHC. However, it may be a part of a multi-aspect solution to this problem.
It may provide further constraints on recombination models, which have so far
been tuned to data without the fermion enhancement.
An alternative way to view the results in Figs. 13 and 14 is that, for this particular signal (the fermion fraction of a jet), LBT is not an accurate
simulator. As pointed out in the preceding section, the MARTINI generator
produces many more fermions than LBT as each hard parton in MARTINI undergoes
much more scattering compared to LBT. Full simulations in MARTINI on a
hydrodynamic background are very computationally demanding and will not be
presented in this first effort.
Yet another possibility is that our assumption of completely perturbative
interaction between the jet partons and the medium is not accurate and non-
perturbative matrix elements will have to be measured and incorporated within
these full simulations to reproduce the baryon-antibaryon $R_{AA}$. These non-
perturbative matrix elements were discussed in Sec. II and involve quark
matrix elements in the medium. These have so far not been calculated on the
lattice or measured in experiment.
## VII Summary and outlook
The modification of hard jets in a dense QCD medium has traditionally been
understood in terms of an increase in the number of gluons radiated from the
originating hard parton, followed by a redistribution of the radiated partons
towards larger angles away from the jet axis. In this paper, we explored
another sizable effect, an order of magnitude increase in the fermion content
at intermediate momenta correlated with the jet.
The origin of this fermion excess, which manifests as an increase in the
baryon (and antibaryon) and charge (and anticharge) distributions at
intermediate $p_{T}\gtrsim 8T$ ($T$ is the temperature), at angles greater
than 0.2-0.5 away from the jet axis, lies predominantly in the recoil
mechanism. The rate of a semi-hard gluon scattering off a thermal quark or
gluon and converting into a semi-hard quark or antiquark is several times
larger than the rate of a semi-hard quark or antiquark converting into a semi-hard gluon. While these conversion rates are much smaller than the rates of typical scattering, which do not lead to flavor conversion, they are still large enough that a majority of gluons experience at least one such scattering within media with sizes between 5-10 fm, at average temperatures of
approximately 0.25 GeV (values representative of collisions at RHIC and LHC
energies).
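The claim that a majority of gluons undergo at least one conversion follows from Poisson counting, $P=1-e^{-\Gamma L}$ for a conversion rate $\Gamma$ over a path length $L$. The rate used below is purely illustrative, not a value extracted from the simulations.

```python
import math

def conversion_probability(rate, path_length):
    """Probability of at least one flavor-converting scattering for a
    gluon traversing path_length (fm, equivalently fm/c for a light-like
    parton) at conversion rate `rate` (per fm), assuming Poisson
    statistics: P = 1 - exp(-rate * L)."""
    return 1.0 - math.exp(-rate * path_length)

# Illustrative only: a conversion rate of ~0.15/fm already makes
# conversion more likely than not over a 5-10 fm medium.
print(round(conversion_probability(0.15, 5.0), 2))   # 0.53
print(round(conversion_probability(0.15, 10.0), 2))  # 0.78
```

Even a modest per-fm conversion rate therefore suffices for most gluons to convert at least once over the path lengths quoted above.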
All these conversion processes involve the exchange of a quark or antiquark
with the medium. This feature differentiates these processes from typical
scattering in the medium, mediated by gluon exchange, which is typically
encapsulated within the well known transport coefficients such as $\hat{q}$
and $\hat{e}$. The quark exchange, manifest in these processes, requires the
incorporation of new transport coefficients within the discussion of jet
quenching, which will yield insight into the fermion fraction of the
underlying evolving medium.
While partons at an intermediate $p_{T}\gtrsim 8T$ (typically 2-5 GeV) tend
to have a considerable non-perturbative portion in their interaction with the
medium, we have carried out this first exploration assuming an entirely
perturbative approximation. In spite of this, we find that the fermion
fraction correlated with a jet increases ten-fold for jets quenched in a
medium, compared to those in vacuum. We considered jets with energies
$E\gtrsim 25$ GeV traversing 5-10 fm/$c$ in a medium held at $T\simeq 0.25$
GeV. These are typical distances and temperatures encountered by jets as they
traverse media at RHIC and LHC.
The size of this effect is striking: the number of semi-hard fermions in jets increases by at least an order of magnitude. We have checked this enhancement
through three separate model calculations: a semi-analytic solution of the
Boltzmann equation, a single-stage LBT model, and a multi-stage MATTER+MARTINI
approach. All three approaches showed similar levels of enhancement of the
fermion distribution in jets, compared to the gluon distribution. This has not
been pointed out before.
While the fermion enhancement does not change the energy profile of the jet,
it should strongly affect the event-by-event fluctuations of conserved charges
such as baryon number and electric charge within the jet. With more charges
and anti-charges produced, many of these will be clustered within a jet and
many will not; this should lead to larger event-by-event fluctuations of
baryon number and charge within a jet quenched in a dense medium, compared to
one in vacuum. Of course, the conclusions in this paper will be affected by
hadronization, which will introduce its own fluctuations of conserved charges.
It is also possible that the large fermionic content introduces an additional
source of jet energy loss in the hadronization process. We leave this topic
and more realistic simulations of jets in dynamical media for a future effort.
###### Acknowledgements.
The authors thank S. Shi, C. Gale and S. Jeon for help with the MARTINI code.
The MATTER+MARTINI simulations were run using the public JETSCAPE code base.
A. M. thanks A. Kumar and J. Weber for discussions that formed the genesis of
this effort. C. S., I. S., G. V. and A. M. were supported by the US Department
of Energy under grant number DE-SC0013460, and by the National Science
Foundation under grant numbers ACI-1550300 and OAC-2004571 within the
framework of the JETSCAPE collaboration. G. V. was also supported by Natural
Sciences and Engineering Research Council (NSERC) of Canada. W. J. X. and S.
C. were supported by the National Natural Science Foundation of China (NSFC)
under grant numbers 12175122 and 2021-867. The kinetic simulation used
resources of the National Energy Research Scientific Computing Center, a DOE
Office of Science User Facility supported by the Office of Science of the U.S.
Department of Energy under Contract No. DE-AC02-05CH11231.
## References
* Majumder and Van Leeuwen (2011) A. Majumder and M. Van Leeuwen, Prog. Part. Nucl. Phys. 66, 41 (2011), arXiv:1002.2206 [hep-ph] .
* Cao and Wang (2021) S. Cao and X.-N. Wang, Rept. Prog. Phys. 84, 024301 (2021), arXiv:2002.04028 [hep-ph] .
* Kumar _et al._ (2020a) A. Kumar _et al._ (JETSCAPE), Phys. Rev. C 102, 054906 (2020a), arXiv:1910.05481 [nucl-th] .
* Cao _et al._ (2021a) S. Cao _et al._ (JETSCAPE), Phys. Rev. C 104, 024905 (2021a), arXiv:2102.11337 [nucl-th] .
* Kumar _et al._ (2022a) A. Kumar _et al._ (JETSCAPE), (2022a), arXiv:2204.01163 [hep-ph] .
* Wang and Gyulassy (1992) X.-N. Wang and M. Gyulassy, Phys. Rev. Lett. 68, 1480 (1992).
* Gyulassy and Wang (1994) M. Gyulassy and X.-n. Wang, Nucl. Phys. B 420, 583 (1994), arXiv:nucl-th/9306003 .
* Baier _et al._ (1995) R. Baier, Y. L. Dokshitzer, S. Peigne, and D. Schiff, Phys. Lett. B 345, 277 (1995), arXiv:hep-ph/9411409 .
* Baier _et al._ (1997) R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne, and D. Schiff, Nucl. Phys. B 484, 265 (1997), arXiv:hep-ph/9608322 .
* Zakharov (1996) B. G. Zakharov, JETP Lett. 63, 952 (1996), arXiv:hep-ph/9607440 .
* Zakharov (1997) B. G. Zakharov, JETP Lett. 65, 615 (1997), arXiv:hep-ph/9704255 .
* Bass _et al._ (2009) S. A. Bass, C. Gale, A. Majumder, C. Nonaka, G.-Y. Qin, T. Renk, and J. Ruppert, Phys. Rev. C 79, 024901 (2009), arXiv:0808.0908 [nucl-th] .
* Gyulassy _et al._ (2001) M. Gyulassy, P. Levai, and I. Vitev, Nucl. Phys. B 594, 371 (2001), arXiv:nucl-th/0006010 .
* Salgado and Wiedemann (2002) C. A. Salgado and U. A. Wiedemann, Phys. Rev. Lett. 89, 092303 (2002), arXiv:hep-ph/0204221 .
* Arnold _et al._ (2002) P. B. Arnold, G. D. Moore, and L. G. Yaffe, JHEP 06, 030 (2002), arXiv:hep-ph/0204343 .
* Wang and Guo (2001) X.-N. Wang and X.-f. Guo, Nucl. Phys. A 696, 788 (2001), arXiv:hep-ph/0102230 .
* Majumder (2012) A. Majumder, Phys. Rev. D 85, 014023 (2012), arXiv:0912.2987 [nucl-th] .
* Majumder (2007) A. Majumder, J. Phys. G 34, S377 (2007), arXiv:nucl-th/0702066 .
* Cao _et al._ (2017) S. Cao _et al._ (JETSCAPE), Phys. Rev. C 96, 024909 (2017), arXiv:1705.00050 [nucl-th] .
* Caucal _et al._ (2018) P. Caucal, E. Iancu, A. H. Mueller, and G. Soyez, Phys. Rev. Lett. 120, 232001 (2018), arXiv:1801.09703 [hep-ph] .
* Cao _et al._ (2021b) S. Cao, C. Sirimanna, and A. Majumder, (2021b), arXiv:2101.03681 [hep-ph] .
* Armesto _et al._ (2012) N. Armesto, H. Ma, Y. Mehtar-Tani, C. A. Salgado, and K. Tywoniuk, JHEP 01, 109 (2012), arXiv:1110.4343 [hep-ph] .
* Kumar _et al._ (2020b) A. Kumar, A. Majumder, and C. Shen, Phys. Rev. C 101, 034908 (2020b), arXiv:1909.03178 [nucl-th] .
# Attack on Unfair ToS Clause Detection: A Case Study using Universal
Adversarial Triggers
Shanshan Xu Irina Broda Rashid Haddad
Marco Negrini Matthias Grabmair
School of Computation, Information, and Technology; Technical University of
Munich, Germany
<EMAIL_ADDRESS>
###### Abstract
Recent work has demonstrated that natural language processing techniques can
support consumer protection by automatically detecting unfair clauses in the
Terms of Service (ToS) Agreement. This work demonstrates that transformer-
based ToS analysis systems are vulnerable to adversarial attacks. We conduct
experiments attacking an unfair-clause detector with universal adversarial
triggers. Experiments show that a minor perturbation of the text can
considerably reduce the detection performance. Moreover, to measure the
detectability of the triggers, we conduct a detailed human evaluation study by
collecting both answer accuracy and response time from the participants. The
results show that the naturalness of the triggers remains key to tricking
readers.
## 1 Introduction
When using online platforms, users are asked to agree to the Terms of Service
(ToS), which are often long and difficult to understand. According to Obar and
Oeldorf-Hirsch (2020), it would take a user around 45 minutes on average to
read a ToS properly. Most users accept the terms without reading them,
including clauses which would be deemed unfair under consumer protection
standards. Software applications that warn consumers about unfair clauses can
support consumers’ rights, and have been the subject of prior work (e.g.,
Lippi et al., 2019; Ruggeri et al., 2022). At the same time, their existence
forms an incentive for drafters of ToS to formulate clauses with potentially
unfair effects that bypass automated screening. In turn, developers of control
systems seek to make their detectors robust against such ‘adversarial
attacks’. In this paper, we report on an experiment in discovering weaknesses
of ToS analysis models.
Natural language processing (NLP) models for ToS analysis conduct binary
classification of a given clause as fair/unfair. Previous studies have shown
that state-of-the-art transformer-based classifiers are vulnerable to
adversarial attacks Belinkov and Bisk (2017); even slight modifications to the
input text (e.g., changing a few characters) can cause incorrect
classifications Ebrahimi et al. (2018). Numerous adversarial attack methods
have been developed and demonstrate effective attack performance in various
downstream NLP tasks such as sentiment analysis Iyyer et al. (2018), question
answering Wang et al. (2020), machine translation Cheng et al. (2019), etc. One
such method is the attack via a universal adversarial trigger, which is a
sequence of tokens (words, sub-words, or characters) that can be injected into
any text input from a dataset to mislead the victim model to a target
prediction (see Table 1 for examples). These input-agnostic triggers, once
generated, can be distributed to anyone, and do not need access to the victim
model at the time of attack.
Adversarial attacks have, to the best of our knowledge, remained largely
unaddressed in legal NLP. Our work extends the state of the art through the
following contributions: (1) We conduct experiments attacking ToS unfair
clause detectors trained on the public CLAUDETTE dataset with universal
adversarial triggers. Our results show that a minor perturbation of the text
can significantly reduce the detection performance of transformer-based
models. (2) We also use artifacts from the training data for universal
trigger attacks. Our experiments demonstrate that such words can considerably
reduce the victim model’s accuracy, highlighting the potential threat of
training data leakage. (3) We conduct a human evaluation study to measure the
detectability of the generated triggers. The results show that suppressing
subword tokens can make generated triggers more difficult to detect. Our code
is available at https://github.com/TUMLegalTech/ToS_attack_nllp22
ToS Clause (trigger in brackets) | Model Detection
---|---
Pinterest isn’t liable for damages that result from a [may vote against] non-material breach of any other applicable duty of care. | Unfair $\rightarrow$ Fair
The English courts will have jurisdiction over any claim arising from [may vote against], or related to, any use of our services. | Unfair $\rightarrow$ Fair
Table 1: A universal adversarial trigger can be injected into any input from
a dataset to mislead the victim model. Inserting the displayed trigger (shown
in brackets; red in the original) causes the trained unfair-ToS detector to
flip its correct unfair predictions to fair.
## 2 Related Work
Adversarial Attacks in NLP: Most adversarial attack methods in NLP are white-
box, where the attacker has full access to the victim model (including its
architecture, parameters, and training data). Prevalent white-box attacks
include HotFlip Ebrahimi et al. (2018), a gradient-based method that generates
adversarial examples over discrete text structures, and PWWS Ren et al.
(2019), an importance-based method that substitutes words of high saliency. By
contrast, black-box attacks assume no knowledge of the victim model’s
architecture and parameters. Example techniques include the use of generative
adversarial networks (GANs) Zhao et al. (2018) and human-in-the-loop
heuristics Wallace et al. (2019b).
Universal Triggers: Wallace et al. (2019a) generate universal attack triggers
by using gradient signals to guide a search over the word embedding space. The
triggers are input-agnostic, which makes them more threatening in real-world
scenarios. Despite being successful in confusing classification systems,
universal triggers are often unnatural and can easily be detected by human
readers. Song et al. (2021) generate attack triggers that appear closer to
natural text by using a pre-trained GAN; however, training a GAN in the ToS
domain from scratch requires large datasets and GPU resources. In this work we
instead try to generate natural triggers by simply skipping all subword and
special tokens during the search process, and leave the development and
evaluation of a ToS-GAN to future work.
## 3 Universal Trigger Generation
We assume a text input $x$ with target label $y$ drawn from the dataset
$D=\{X,Y\}$, and a trained victim classifier $f$ that predicts
$f(x)=\hat{y}$. While a non-universal targeted attack focuses on flipping the
prediction for a single text input $x$, our goal is to find an input-agnostic
trigger $t$, a sequence of tokens $\{w_{1},w_{2},\dots,w_{i}\}$, such that
concatenating $t$ with any input $x$ from $X$ causes the victim model to
incorrectly predict $f(x;t)=\tilde{y}$, where $\tilde{y}\neq\hat{y}$.
Specifically, we use the following objective function:
$\operatorname*{arg\,min}_{t}\operatorname{\mathbb{E}}_{x\sim
X}[\mathcal{L}(\tilde{y},f(x;t))]$ (1)
To solve the above objective, we follow the approach of Wallace et al.
(2019a) by utilizing the HotFlip method Ebrahimi et al. (2018) at the token
level. First, we initialize the trigger $t$ with a sequence of $i$
placeholder tokens (e.g., ‘the’); then we compute the gradient of (1) with
respect to the trigger token embeddings. Since tokens are discrete, we
approximate the change in the loss around the current token embedding using a
first-order Taylor expansion:
$\operatorname*{arg\,min}_{e_{i}^{\prime}\in\mathcal{V}}\left[e_{i}^{\prime}-e_{adv_{i}}\right]^{T}\nabla_{e_{adv_{i}}}\mathcal{L}$
(2)
where $\mathcal{V}$ is the set of all token embeddings over the entire
vocabulary and $e_{adv_{i}}$ represents the embedding of the current trigger
token.
We update the embedding of every trigger token $e_{adv_{i}}$ to minimize (2).
This can be computed efficiently through $d$-dimensional dot products, with
$d$ the dimension of the token embeddings. To construct the entire updated
trigger, we then use beam search over the top candidate tokens from (2) for
each token position in the trigger $t$. As variable parameters, we run
experiments with triggers of different lengths [3, 5, 8] and insertion
positions [begin, middle, end] in the input text.
## 4 Experiments
### 4.1 Dataset and the Victim Model
The CLAUDETTE dataset Lippi et al. (2019); Ruggeri et al. (2022) consists of
100 ToS contracts (20,417 clauses) of online platforms. A clause is deemed as
unfair if it creates an unacceptable imbalance in the parties’ rights and
obligations, i.e., harms the user’s rights or minimizes the online service’s
obligations. Each clause was labelled by legal experts. 222To measure the
inter-annotation agreement, Lippi et al. (2019) have an additional test set
containing 10 contracts labelled by two distinct experts, which achieve a high
inner-annotator agreement with standard Cohen $\kappa$= 0.871. For details on
the annotation process and the legal rational of unfair contractual clauses,
please refer the original CLAUDETTE paper.
Following Lippi et al. (2019), we discard sentences shorter than 5 words. To
avoid information leakage between training and testing sentences stemming
from the same contract, we split the 100 contracts randomly into 40:40:20 for
training, development, and testing. Table 2 in Appendix A shows the detailed
statistics of each split. Notably, the CLAUDETTE dataset has a very
imbalanced class ratio of 9:1 (fair:unfair).
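The contract-level split described above can be sketched as follows; the seed and the helper name are our own assumptions, since the paper does not specify them.

```python
# Sketch of a document-level 40:40:20 split, so clauses from the same
# contract never appear in more than one of train/dev/test.
import random

def split_contracts(contract_ids, seed=0):
    ids = list(contract_ids)
    random.Random(seed).shuffle(ids)  # shuffle whole contracts, not clauses
    n = len(ids)
    return (ids[:int(0.4 * n)],
            ids[int(0.4 * n):int(0.8 * n)],
            ids[int(0.8 * n):])

train, dev, test = split_contracts(range(100))
print(len(train), len(dev), len(test))  # 40 40 20
```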
For the victim model, we finetune an instance of LEGAL-BERT (nlpaueb/legal-
bert-base-uncased) Chalkidis et al. (2020) on the CLAUDETTE training set.
Please refer to Appendix B for details on model finetuning. The model
achieves an overall macro F1 of 88.9%: 97.7% F1 for the fair class and 80.1%
F1 for the unfair class.
### 4.2 Attack Results
In the following we focus on the attack scenario fairwashing: targeted attacks
that flip unfair predictions to fair. We apply the universal attack trigger
algorithm on the development set and report the attack performance on the test
set. The generated triggers can considerably degrade the victim model’s
performance. For instance, inserting the trigger of token length 8
“##purchased another opponent shall testify unless actuarial opponent” in the
middle of the sentence can decrease the model’s accuracy from 80.1% to 16.9%.
However, we observe that triggers often contain special tokens or subwords,
such as ‘[SEP]’ or ‘##purchased’, which makes them easily detectable for
human readers. Inspired by Wang (2022), we encourage the generation of
natural triggers by simply skipping all subwords and special tokens during
the search (hereafter we denote this approach as mode ‘no_subword’ for
simplicity). Although slightly less effective than the original triggers
(Table 4 in Appendix C), the no_subword triggers are less likely to be
detected by human readers (see our human evaluation study in Section 6).
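The ‘no_subword’ restriction amounts to a simple vocabulary filter applied before the candidate search. A minimal sketch, assuming WordPiece conventions (continuation pieces prefixed with ‘##’ and bracketed special tokens); the token examples are taken from Table 4:

```python
# Drop WordPiece continuation pieces ("##...") and special tokens from
# the candidate vocabulary before the trigger search.
SPECIAL_TOKENS = {"[CLS]", "[SEP]", "[PAD]", "[UNK]", "[MASK]"}

def filter_candidates(tokens):
    return [t for t in tokens
            if t not in SPECIAL_TOKENS and not t.startswith("##")]

print(filter_candidates(["[SEP]", "##purchased", "testify", "opponent"]))
# ['testify', 'opponent']
```

As the Limitations section notes, this filter cannot exclude bound stems such as ‘consul’, which are full vocabulary entries rather than ‘##’-prefixed pieces.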
Figure 1: Accuracy loss of the victim model’s detection performance when
attacked by universal triggers of different insert positions and lengths. For
completeness, we report the full attack results in Appendix C
We also run experiments to study the impact of trigger length, insertion
position, and mode (with/without subwords) on the attack’s effectiveness.
Figure 1 shows that increasing the trigger length improves attack
effectiveness by a noticeable margin: three-token triggers degrade the victim
model’s accuracy from 80% to around 60%, while eight-token triggers degrade
it to as low as 13%. The results also indicate the victim model’s sensitivity
to the insertion position of the triggers, consistent with previous studies
Wallace et al. (2019b); Wang (2022): triggers are more effective when
inserted at the beginning of the clause, which may be due to the
transformer-based model paying more attention to the terms at the beginning
of the text. These results hold across both modes. Between the modes, higher
effectiveness is consistently observed for ‘all’ than for ‘no_subword’, which
is expected since ‘no_subword’ generates triggers from a subset of the
trigger tokens available in ‘all’ mode.
Figure 2: Human response time (box plots) and detection accuracy (line plots)
for triggers of different insert positions and lengths. ‘Control’ stands for
the question where no trigger is inserted. ‘LMI’ represents an LMI trigger of
length eight inserted in the middle of the sentence. The insert positions
are: 0.0 = beginning, 0.5 = middle, 1.0 = end.
## 5 Data Artifacts as Universal Triggers
A growing number of works have raised awareness that deep neural models may
exploit spurious artifacts in the dataset and take erroneous shortcuts McCoy
et al. (2019); Xu and Markert (2022). In this section, we experiment with
using dataset artifacts as universal triggers to explore the feasibility of
generating universal triggers without access to the victim model’s gradient
signals. Following Gururangan et al. (2018), Wallace et al. (2019a) identified
the dataset artifacts as words with high pointwise mutual information (PMI)
Church and Hanks (1990) with each label. Since the CLAUDETTE dataset has a
heavily imbalanced label distribution, we use local mutual information (LMI)
Schuster et al. (2019), a re-weighted version of PMI, to avoid picking up
very sparse tokens. We observe that high-LMI words are successful triggers.
We use the 8 highest-ranked LMI words and PMI words associated with the fair
label as triggers (hereafter the LMI trigger and PMI trigger, respectively;
please refer to Appendix E for the list of words used), and insert them into
the unfair clauses at different token positions. The LMI trigger reduces the
victim model’s classification accuracy from 80% to around 60%, while the PMI
trigger only reduces it to around 76% (see Figure 3 in Appendix E). Although
less successful than the universal adversarial triggers, the LMI triggers are
natural and less detectable than ‘all’ mode triggers according to our human
evaluation study. Critically, LMI triggers are extracted by simply analyzing
the training data and do not require access to the victim model. The attack
effectiveness of the LMI trigger highlights the potential threat of training
data leakage in NLP applications.
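The LMI ranking can be sketched as follows, using the standard definition LMI$(w,l)=p(w,l)\log\frac{p(l\mid w)}{p(l)}$; the tiny corpus below is invented for illustration and is not part of the CLAUDETTE data.

```python
# Rank words by local mutual information (LMI) with a target label.
import math
from collections import Counter

def lmi_ranking(clauses, labels, target_label):
    word_label, word, label = Counter(), Counter(), Counter()
    total = 0
    for clause, lab in zip(clauses, labels):
        for w in clause.split():
            word_label[(w, lab)] += 1
            word[w] += 1
            label[lab] += 1
            total += 1
    scores = {}
    for (w, lab), c in word_label.items():
        if lab == target_label:
            # LMI(w, l) = p(w, l) * log( p(l | w) / p(l) )
            scores[w] = (c / total) * math.log(
                (c / word[w]) / (label[lab] / total))
    return sorted(scores, key=scores.get, reverse=True)

clauses = ["we may terminate", "we may vote", "users retain rights"]
labels = ["fair", "fair", "unfair"]
print(lmi_ranking(clauses, labels, "fair")[:2])  # ['we', 'may']
```

Unlike PMI, the $p(w,l)$ factor down-weights rare words, which is why LMI avoids the very sparse tokens that PMI favours on an imbalanced dataset.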
## 6 Human Evaluation Study
We perform a human evaluation to study the impact of token length, insert
position, and mode on the triggers’ detectability. We report the details of
the web application used and the full instructions for the human subjects in
Appendix D. The task is to identify which sentence out of four candidate
sentences from ToS contracts was modified. We include one question with no
modified sentence as a control. In a previous study, Song et al. (2021)
directly asked human participants to rate whether the generated triggers were
natural; however, ratings of naturalness are abstract and vary between
individuals. Inspired by studies on detection processes in psychology Pandya
and Macy (1995); Yap and Balota (2007), we assume response time (i.e., the
length of time a human takes to detect a trigger) can act as a proxy for
naturalness. To measure the human detectability of triggers, we hence collect
both answer accuracy and response time from the participants.
19 participants of different ages, English abilities, and legal experience
were recruited from the personal networks of the authors. Figure 2 shows that
it is consistently easier for participants to detect ‘all’ mode triggers than
‘no_subword’ mode triggers. Participants were on average 19% faster at
detecting sentences modified by ‘all’ triggers than by ‘no_subword’ triggers,
and they found ‘all’ triggers with 21% higher accuracy on average. We include
the LMI trigger of token length eight in the study and find that its
detectability lies between that of the ‘no_subword’ and ‘all’ triggers of the
same length. The intuitive notion that participants are better at finding
longer triggers generally holds with regard to detection accuracy.
Nevertheless, we cannot observe a trend in the response times, which may be
due to our small sample size. Regarding the insert position, participants are
fastest at detecting triggers inserted in the middle. Further, we notice that
special tokens and subwords make triggers more obvious: qualitative, informal
reports from participants indicate that ‘spelling errors’ stuck out in a
legal context. All triggers containing such tokens were detected with more
than 90% accuracy; these include two ‘all’ triggers of length three
(containing the special token [SEP] or the subtoken combination
‘##assignabilityconsult’) and one ‘no_subword’ trigger of length five
(containing the bound stem ‘consul’). This likely explains why these data
points do not conform to the general trend of detection accuracy.
## 7 Conclusion
We attacked ToS unfair clause detectors with universal adversarial triggers
generated by a gradient-based algorithm as well as by simply analyzing the
training data. The effectiveness of the triggers exposes the vulnerability of
the transformer-based classification model, and highlights the potential
threat of training data leakage. We also conducted a human evaluation to study
the detectability of the triggers. The results show that the triggers are less
likely to be detected if they do not include subtokens. Future work can
explore ways to generate more natural triggers in the legal domain, which may
even deceive readers with a formal education in law.
## Limitations
Wallace et al. (2019a) reduce the detection accuracy to 1% while we can only
manage to degrade it to 10%. This might be due to the imbalanced label
distribution and comparatively small size of the CLAUDETTE dataset. Our human
evaluation is an initial exploration with only 19 participants. Future work
will focus on using crowdsourcing techniques for large survey data collection.
Furthermore, we generate the ’no_subword’ triggers by skipping all the tokens
preceded by the double hashtag ’##’. This enables us to avoid derivational
morphemes and inflection suffixes but fails to exclude bound stems such as
‘consul’, which makes some triggers obvious to human readers. Future work can
explore better ways to generate natural triggers.
## Ethics Statement
The study presented here works exclusively with the publicly available
CLAUDETTE dataset, which consists of the Terms of Service (ToS) Agreements of
various online platforms. The techniques described in this paper are prone to
misuse. However, we design this study to draw public attention to the
vulnerability of the transformer-based classification model. We hope our work
will help accelerate progress in detecting and defending against adversarial
attacks.
We finetuned the victim model and generated all the triggers on Google Colab.
Our models adapted pretrained language models and we did not engage in any
training of such large models from scratch. We did not track computation
hours.
## References
* Belinkov and Bisk (2017) Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine translation. _arXiv preprint arXiv:1711.02173_.
* Chalkidis et al. (2020) Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: The muppets straight out of law school. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 2898–2904, Online. Association for Computational Linguistics.
* Cheng et al. (2019) Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly adversarial inputs. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4324–4333, Florence, Italy. Association for Computational Linguistics.
* Church and Hanks (1990) Kenneth Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. _Computational linguistics_ , 16(1):22–29.
* Ebrahimi et al. (2018) Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 31–36, Melbourne, Australia. Association for Computational Linguistics.
* Grinberg (2018) Miguel Grinberg. 2018. _Flask web development: developing web applications with python_. O’Reilly Media, Inc.
* Gururangan et al. (2018) Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics.
* Iyyer et al. (2018) Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1875–1885, New Orleans, Louisiana. Association for Computational Linguistics.
* Lippi et al. (2019) Marco Lippi, Przemysław Pałka, Giuseppe Contissa, Francesca Lagioia, Hans-Wolfgang Micklitz, Giovanni Sartor, and Paolo Torroni. 2019. Claudette: an automated detector of potentially unfair clauses in online terms of service. _Artificial Intelligence and Law_ , 27(2):117–139.
* McCoy et al. (2019) Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3428–3448, Florence, Italy. Association for Computational Linguistics.
* Obar and Oeldorf-Hirsch (2020) Jonathan A Obar and Anne Oeldorf-Hirsch. 2020. The biggest lie on the internet: Ignoring the privacy policies and terms of service policies of social networking services. _Information, Communication & Society_, 23(1):128–147.
* Pandya and Macy (1995) Abhijit S Pandya and Robert B Macy. 1995. _Pattern recognition with neural networks in C++_. CRC press.
* Ren et al. (2019) Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 1085–1097, Florence, Italy. Association for Computational Linguistics.
* Ruggeri et al. (2022) Federico Ruggeri, Francesca Lagioia, Marco Lippi, and Paolo Torroni. 2022. Detecting and explaining unfairness in consumer contracts through memory networks. _Artificial Intelligence and Law_ , 30(1):59–92.
* Schuster et al. (2019) Tal Schuster, Darsh Shah, Yun Jie Serene Yeo, Daniel Roberto Filizzola Ortiz, Enrico Santus, and Regina Barzilay. 2019. Towards debiasing fact verification models. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3419–3425.
* Song et al. (2021) Liwei Song, Xinwei Yu, Hsuan-Tung Peng, and Karthik Narasimhan. 2021. Universal adversarial attacks with natural triggers for text classification. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 3724–3733, Online. Association for Computational Linguistics.
* Wallace et al. (2019a) Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019a. Universal adversarial triggers for attacking and analyzing NLP. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2153–2162, Hong Kong, China. Association for Computational Linguistics.
* Wallace et al. (2019b) Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Yamada, and Jordan Boyd-Graber. 2019b. Trick me if you can: Human-in-the-loop generation of adversarial examples for question answering. _Transactions of the Association for Computational Linguistics_ , 7:387–401.
* Wang et al. (2020) Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, and Bo Li. 2020. T3: Tree-autoencoder constrained adversarial text generation for targeted attack. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 6134–6150, Online. Association for Computational Linguistics.
* Wang (2022) Yumeng Wang. 2022. Global triggers for attacking and analyzing ranking models. Master’s thesis, Hannover: Gottfried Wilhelm Leibniz Universität.
* Wolf et al. (2019) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface’s transformers: State-of-the-art natural language processing. _arXiv preprint arXiv:1910.03771_.
* Xu and Markert (2022) Shanshan Xu and Katja Markert. 2022. The chinese causative-passive homonymy disambiguation: an adversarial dataset for nli and a probing task. In _Proceedings of the 13th Language Resources and Evaluation Conference_ , Marseille, France. European Language Resources Association.
* Yap and Balota (2007) Melvin J Yap and David A Balota. 2007. Additive and interactive effects on response time distributions in visual word recognition. _Journal of Experimental Psychology: Learning, Memory, and Cognition_ , 33(2):274.
* Zhao et al. (2018) Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018. Generating natural adversarial examples. In _International Conference on Learning Representations_.
## Appendix A Dataset Statistics
Table 2 displays the statistics of the CLAUDETTE dataset.
Figure 3: Attack performance of the LMI and PMI triggers at different insert
positions.
## Appendix B Finetuning the Victim Model
We used LEGAL-BERT (nlpaueb/legal-bert-base-uncased) with a sequence
classification head on top from the transformers library Wolf et al. (2019),
and finetuned it on the CLAUDETTE training set. The model is fine-tuned for 5
epochs with a learning rate of 1e-5. We determine the best learning rate
using grid search on the development set and use early stopping based on the
development set F1 score.
split | # sentences | % fair label | % unfair label
---|---|---|---
train | 8354 | 89.5% | 10.5%
dev | 8279 | 89.1% | 10.9%
test | 3784 | 89.3% | 10.7%
Table 2: Statistics of the train, dev and test split of the CLAUDETTE dataset.
## Appendix C Additional Experimental Results
Table 3 shows the attack results on fair clauses. Due to limited GPU
resources, we generated only triggers of eight tokens, inserted at the
beginning of the sentence.
Table 4 displays the attack on unfair clauses with triggers of different
lengths [3,5,8], insert position [begin, middle, end] and mode [original,
no_subword].
Trigger | Length | Position | Mode | Accuracy | $\Delta$
---|---|---|---|---|---
Baseline | - | - | - | 97.7 |
not liable whenever | 3 | beginning | no_sub | 69.8 | -28.5%
terminate our convening practices if | 5 | beginning | no_sub | 47.6 | -51.2%
agree tankage bound through cloud terms 2016 laws | 8 | beginning | no_sub | 9.0 | -90.0%
Table 3: Performance of Universal Triggers on Fair Clauses
Trigger | Length | Position | Mode | Accu. | $\Delta$
---|---|---|---|---|---
Baseline | - | - | - | 80.1 |
witness should testify | 3 | beginning | no_sub | 58.4 | -27.0%
may vote against | 3 | middle | no_sub | 60.8 | -24.1%
witness testified without | 3 | end | no_sub | 62.9 | -21.5%
interrelat order refusing priority where | 5 | beginning | no_sub | 37.1 | -53.7%
consul must produce his attorney | 5 | middle | no_sub | 46.6 | -41.9%
privilege to authenticate testimony groot | 5 | end | no_sub | 48.1 | -39.9%
testimony allows contracts opposing person tuber testify where | 8 | beginning | no_sub | 13.9 | -82.7%
compute another opponent shall testify unless lockbox opponent | 8 | middle | no_sub | 19.7 | -75.4%
another witness seems thus admissible scope testify usc | 8 | end | no_sub | 22.6 | -71.8%
admissible in evidence | 3 | beginning | all | 56.7 | -29.3%
##assignabilityconsult assigned | 3 | middle | all | 59.9 | -25.2%
[SEP] expert testimony | 3 | end | all | 60.2 | -24.8%
evid allowed equit testify where | 5 | beginning | all | 31.4 | -67%
[SEP] give precedence before priority | 5 | middle | all | 43.6 | -45.6%
368 hearsay witnesses may exclude | 5 | end | all | 43.1 | -46.2%
inference forbid 2028 opposing person may testify where | 8 | beginning | all | 12.8 | -84.0%
##purchased another opponent shall testify unless actuarial opponent | 8 | middle | all | 16.9 | -78.9%
assist [SEP] witness normally justifies cross admissibilitywillingness | 8 | end | all | 19.2 | -76.0%
Table 4: Performance of Universal Triggers on Unfair Clauses
## Appendix D Instruction for the human evaluation study
The application is written in Python using Flask Grinberg (2018) and was hosted on an AWS EC2 instance. It included a landing page with short instructions; Figure 4 is a screenshot of the web application. The following is the instruction text on the landing page for the human evaluation study:
“Background information
When using online platforms, users are asked to agree to the Terms of Service
(ToS). ToS documents tend to be long and difficult to understand. As a result,
most users accept the terms without reading them, including clauses which
would be deemed unfair under consumer protection standards. Therefore,
applications that can support consumers in detecting unfair clauses would be
useful. Nevertheless, studies have shown that such applications are vulnerable
to adversarial attacks; even slight modifications to the input text, like
inserting a few words into the text, can cause incorrect classifications. In
this study, we ask you to help us detect the malicious modifications in the
text.
Task instruction
You will be shown an excerpt of four sentences from a ToS contract. The task
is to identify which sentence is modified. Please feel free to contact us if
you have any questions. Many thanks for taking part in the study.”
## Appendix E LMI and PMI triggers
Figure 3 demonstrates the attack performance of the LMI and PMI triggers. The 8 highest-ranked LMI words, used as the LMI trigger, are [’information’, ’payment’, ’must’, ’provide’, ’person’, ’license’, ’rights’, ’please’]. The PMI trigger words are [’berlin’, ’attribution’, ’addressing’, ’android’, ’sources’, ’organiser’, ’pc’, ’unreasonable’].
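For illustration, token–label association scores of this kind can be computed as below; the toy corpus and counts are hypothetical, and the actual trigger ranking used the CLAUDETTE training data:

```python
import math
from collections import Counter

# Toy (tokens, label) corpus standing in for the real training sentences.
data = [
    (["must", "pay", "fees"], "unfair"),
    (["must", "provide", "information"], "unfair"),
    (["you", "may", "cancel"], "fair"),
    (["we", "provide", "support"], "fair"),
]

token_counts, joint_counts, label_counts = Counter(), Counter(), Counter()
total = 0
for tokens, label in data:
    for tok in tokens:
        token_counts[tok] += 1
        joint_counts[(tok, label)] += 1
        label_counts[label] += 1
        total += 1

def pmi(tok, label):
    # Pointwise mutual information between a token and a class label.
    p_joint = joint_counts[(tok, label)] / total
    p_tok = token_counts[tok] / total
    p_label = label_counts[label] / total
    return math.log(p_joint / (p_tok * p_label))

def lmi(tok, label):
    # Local mutual information: PMI weighted by the joint probability,
    # which favours frequent, class-associated tokens.
    return (joint_counts[(tok, label)] / total) * pmi(tok, label)
```

Here "must" (always unfair in the toy corpus) scores higher on both measures than "provide", which occurs equally with both labels.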
Figure 4: Screenshot of the web application for human evaluation
[1] Corresponding author
1. Department of Physics, Centre for Materials Physics, Durham University, South Road, Durham, DH1 3LE, United Kingdom
2. ISIS Neutron and Muon Source, STFC-RAL, Chilton, Didcot, OX11 0QX, United Kingdom
3. Oxford University, Clarendon Laboratory, Parks Road, Oxford, OX1 3PU, United Kingdom
4. Department of Chemistry, University of Guelph, 50 Stone Road East, Guelph, Ontario, N1G 2W1, Canada
# Muon-spin relaxation investigation of magnetic bistability in a crystalline
organic radical compound
A. Hernández-Melián <EMAIL_ADDRESS>, B. M. Huddart, F. L. Pratt, S. J. Blundell, M. Mills, H. K. S. Young, K. E. Preuss, T. Lancaster
###### Abstract
We present the results of a muon-spin relaxation ($\mu^{+}$SR) investigation
of the crystalline organic radical compound
4-(2-benzimidazolyl)-1,2,3,5-dithiadiazolyl (HbimDTDA), in which we
demonstrate the hysteretic magnetic switching of the system that takes place
at $T = (274 \pm 11)\,\mathrm{K}$, caused by a structural phase transition.
Muon-site analysis using electronic structure calculations suggests a range of
candidate muon stopping sites. The sites are numerous and similar in energy
but, significantly, differ between the two structural phases of the material.
Despite the difference in the sites, the muon remains a faithful probe of the
transition, revealing a dynamically-fluctuating magnetically disordered state
in the low-temperature structural phase. In contrast, in the high temperature
phase the relaxation is caused by static nuclear moments, with rapid
electronic dynamics being motionally narrowed from the muon spectra.
###### keywords:
Muon-spin relaxation
Molecule-based magnetism
Hysteresis
Muon site determination
Density functional theory
## 1 Introduction
Understanding the link between hysteresis and structure is an important theme
in materials design, since systems that exhibit hysteretic effects
intrinsically possess memory and are therefore of potential technological
interest. Recently, a crystalline organic radical compound
4-(2-benzimidazolyl)-1,2,3,5-dithiadiazolyl (HbimDTDA) was reported [1] that
exhibits bistability in its magnetic and structural properties near room
temperature. In the solid state, the neutral radical HbimDTDA crystallizes in
an orthorhombic $Pbca$ space group (see Fig. 1). The magnetic switching effect
follows from a subtle single-crystal-to-single-crystal structural phase
transition centred at $T \approx 270\,\mathrm{K}$ that occurs without
symmetry breaking, but involves a significant reorganisation of supramolecular
contacts. Structural analysis at $T = 100\,\mathrm{K}$ shows that the
low-temperature structure of the material involves one-dimensional linear
arrays of HbimDTDA molecules, with each molecule forming part of a pancake-
bonded pair with a partner molecule on a neighbouring array [Fig. 1(b)]. The
geometry of the pancake bonds, determined by overlap of the four lobes of each
molecule’s singly-occupied molecular orbital, orients the molecules to create
a dense 3D network of supramolecular contacts. In contrast, the high-
temperature structure of the system determined at $T = 340\,\mathrm{K}$
does not feature the pancake bonds, which are broken and replaced with new
electrostatic contacts [Fig. 1(b)]. These two structural phases are related by
a translation in the $[010]$ direction, such that the one-dimensional
supramolecular structures (defined by hydrogen bonding between neighbouring
molecules) shift with respect to one another. Analysis of the temperature
dependence of the structural phase transition confirms a first-order
transition between two unique phases, occurring around
$T \approx 270\,\mathrm{K}$, with significant thermal hysteresis [1].
Each radical unit carries a $S=1/2$ spin and the magnetism of the system is
closely linked to the structural transition. Magnetic susceptibility data were
reported to indicate diamagnetic low-temperature behaviour, which was
explained by the electronic overlaps promoted by the pancake bonds between
$\pi$-radicals. At high temperature the susceptibility increases dramatically,
consistent with the non-pancake bonded phase being paramagnetic and comprising
an unpaired $S=1/2$ spin per molecule, with some degree of antiferromagnetic
coupling between them. Particularly notable is the hysteretic magnetic
transition between the states [1].
(a) $100\text{\,}\mathrm{K}$
(b) $340\text{\,}\mathrm{K}$
(c) HbimDTDA
Figure 1: (1(a)) Low-temperature structure of HbimDTDA, with pancake bonds between the linear arrays of molecules arranged along $[010]$. (1(b)) High-temperature structure following a shift along $[010]$ that breaks the pancake bonds. (1(c)) Chemical structure diagram of a single HbimDTDA molecule.
Implanted muons are widely used as local probes of magnetism [2], with their
extreme sensitivity motivating their use to determine the magnetic order and
dynamics in low-moment, molecule-based magnets. Muons have been used rather
less to look at magnetic bistability, though they have proved an effective
probe of molecular spin-crossover (SCO) materials formed from bistable
molecules that are able to switch from low to high spin states via a
cooperative phase transformation [3]. In this paper we report the use of muon-
spin relaxation ($\mu^{+}$SR) techniques to examine the cooperative magnetic
switching in HbimDTDA from a local perspective. We show that muons are
sensitive to the bistability of the magnetic state and use this to elucidate
the nature of the low- and high-temperature regimes, and provide a
determination of the characteristic field fluctuation rate in the low-
temperature regime. We also determine the muon sites using first-principles
electronic structure methods to demonstrate how the muon is sensitive to the
magnetic environment in this chemically-complex material.
## 2 Experimental
In a $\mu^{+}$SR experiment positive muons are implanted in the sample, where they usually settle at interstitial sites and quickly decay with a mean lifetime of $2.2\,\mu\mathrm{s}$. The muon spin polarisation,
whose time-dependence is determined by the local field distribution, can be
measured by studying the statistics of the positron emission in the decay,
since the positron is emitted preferentially along the direction of the muon
spin. The detectors around the sample are classified into forward (F) and
backward (B) banks with respect to the initial muon polarisation, so that the
quantity of interest, proportional to the muon spin polarisation, is the
positron asymmetry function
$A(t)=\frac{N_{\mathrm{F}}(t)-\alpha
N_{\mathrm{B}}(t)}{N_{\mathrm{F}}(t)+\alpha N_{\mathrm{B}}(t)},$ (1)
where $N_{\mathrm{F}}(t)$ and $N_{\mathrm{B}}(t)$ are sums of the counts in
all the forward and backward detectors respectively, while $\alpha$ is a
calibration constant which accounts for the different efficiencies and
geometries of the detectors and can be determined from experimental data.
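As a minimal numerical illustration of Eq. (1), the asymmetry can be computed from summed detector counts as follows; the counts are toy values, not measured data:

```python
def asymmetry(n_forward, n_backward, alpha=1.0):
    """Positron asymmetry of Eq. (1) from summed forward/backward counts;
    alpha calibrates the detector efficiencies and geometries."""
    return (n_forward - alpha * n_backward) / (n_forward + alpha * n_backward)

# Equal counts with alpha = 1 give zero asymmetry; a forward excess gives
# a positive value; alpha rescales the backward counts before combining.
a_zero = asymmetry(1000, 1000)
a_fwd = asymmetry(1200, 800)
a_cal = asymmetry(1000, 1000, alpha=1.5)
```

The calibration example shows why a mis-set $\alpha$ produces a spurious offset even for equal raw counts.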
To investigate the hysteresis effect and the magnetism of the two phases of
the compound, $\mu^{+}$SR measurements were performed using the HiFi
spectrometer at the STFC-ISIS Facility (Rutherford Appleton Laboratory, UK).
We employed the longitudinal field (LF) geometry where an external magnetic
field is applied along the initial muon-spin direction. Initially, a series of
measurements were made in zero applied magnetic field, sweeping temperatures
such that each measurement was made at a fixed temperature for
$35\text{\,}\mathrm{min}$, with temperature changes taking
$7\text{\,}\mathrm{min}$. Measurements were also made as a function of applied
field at fixed temperature, for $40\text{\,}\mathrm{min}$ per point. We also
performed weak transverse field (wTF) measurements, where a small magnetic
field ($2\text{\,}\mathrm{mT}$) is applied perpendicular to the initial muon-
spin direction. Each measurement took $24\text{\,}\mathrm{min}$, with
$7\text{\,}\mathrm{min}$ for temperature adjustment. A polycrystalline sample
of HbimDTDA was prepared as described previously [1]. For the measurement it
was wrapped in Ag foil, sealed in an airtight Cu holder and then loaded into a
$\ce{{}^{4}He}$ cryostat.
## 3 Results
### 3.1 Zero-field measurements
Figure 2: ZF asymmetry spectra measured on HbimDTDA at increasing temperatures
across the transition, offset for clarity and therefore in arbitrary units.
The sample was first cooled to $T = 220\,\mathrm{K}$ and a series of
measurements in zero-applied field (ZF) were made in intervals of
$10\text{\,}\mathrm{K}$ up to $350\text{\,}\mathrm{K}$. Measurements were then
repeated for decreasing temperature. Example spectra for measurements taken on
increasing temperature are shown in Fig. 2. The observed trend is that spectra
resemble an exponential relaxation at low temperatures and become more
Gaussian in character as the temperature increases. To track their evolution,
the spectra were fitted to a stretched exponential relaxation function
$A(t)=A_{\mathrm{R}}^{\mathrm{ZF}}\exp[-(\lambda^{\mathrm{ZF}}t)^{\beta}]+A_{\mathrm{B}}^{\mathrm{ZF}},$
(2)
where the final term $A_{\mathrm{B}}^{\mathrm{ZF}}$ accounts for muon spins
that do not relax, including those from muons implanted in the sample holder.
To simplify the fitting procedure we fix the parameters which vary the least in a free fit, in this case $A_{\mathrm{B}}^{\mathrm{ZF}} = 12\%$ and $\lambda^{\mathrm{ZF}} = 0.08\,\mu\mathrm{s}^{-1}$, obtained by averaging the free-fit values. We also find that the relaxing asymmetry $A_{\mathrm{R}}^{\mathrm{ZF}}$ increases from $15.5\%$ to $16.6\%$ between the low- and high-temperature phases. The parameter $\beta$ allows us to interpolate
between an (i) approximately exponential decay, which results from a
combination of dynamically fluctuating, disordered magnetic moments in the
fast fluctuation limit; and (ii) behaviour approaching Gaussian decay, which
approximates the initial relaxation of the Kubo-Toyabe function, which results
from static magnetic moments sampled from a normal distribution [2].
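A short numerical sketch of Eq. (2) and its two limits; the parameter values are illustrative, not the fitted ones:

```python
import math

def stretched_exp(t, a_r, lam, beta, a_b):
    """Stretched-exponential model of Eq. (2)."""
    return a_r * math.exp(-((lam * t) ** beta)) + a_b

# Illustrative parameters (asymmetries in %, lam in us^-1, t in us).
a_r, lam, a_b = 16.0, 0.08, 12.0
t = 2.0
exp_like = stretched_exp(t, a_r, lam, 1.0, a_b)    # beta = 1: exponential
gauss_like = stretched_exp(t, a_r, lam, 2.0, a_b)  # beta = 2: Gaussian-like
full = stretched_exp(0.0, a_r, lam, 1.5, a_b)      # t = 0: full asymmetry
```

At early times ($\lambda t < 1$) the $\beta = 2$ curve sits above the $\beta = 1$ curve, which is the flattening-then-dropping shape characteristic of the Gaussian limit.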
Figure 3: (a) The result of fitting a stretched exponential function [Eq. 2]
to the ZF results, showing the temperature dependence of the line-shape
parameter $\beta$ for increasing (Inc.) and decreasing (Dec.) temperature. (b)
The result of fitting an exponentially decaying cosine curve [Eq. 4] to the
wTF results, for which we show the relaxing asymmetry
$A_{\mathrm{R}}^{\mathrm{wTF}}$. (c) The result of fitting an exponential
decay [Eq. 5] to the field-dependent LF data, giving the relaxation parameter
$\lambda^{\mathrm{LF}}$ fitted to the Redfield formula [Eq. 6].
The results of the fitting procedure are shown in Fig. 3, where the fitted
stretching parameter is seen to change as a function of temperature across a
transition region [Fig. 3(a)]. We can clearly see the hysteresis effect with
the decreasing-temperature measurements (down triangles) consistently at
higher values than the increasing-temperature ones (up triangles) over a
region centred on $T = (274 \pm 11)\,\mathrm{K}$. This value was extracted
from the fitted values of $\beta$ by fitting both sets of measurements to the
phenomenological functional form
$A_{\mathrm{R}}^{\mathrm{wTF}}(T)=A_{\mathrm{H}}\tanh[k_{\mathrm{H}}(T-T_{0})]+c_{\mathrm{H}},$
(3)
where $A_{\mathrm{H}}$, $k_{\mathrm{H}}$ and $c_{\mathrm{H}}$ are parameters
which determine the shape and position of each curve and are kept constant
between them whilst $T_{0}$ determines the centre and is different between the
increasing and decreasing temperature curves. We can therefore determine an
approximate value for the width of the transition by taking the difference between the two $T_{0}$ values.
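The extraction via Eq. (3) can be sketched numerically as follows; the shape parameters and centres are illustrative values chosen only to reproduce the quoted width and centre:

```python
import math

def branch(T, a_h, k_h, c_h, t0):
    """Phenomenological form of Eq. (3) for one temperature sweep."""
    return a_h * math.tanh(k_h * (T - t0)) + c_h

# Shared shape parameters; only the centre T0 differs between sweeps.
a_h, k_h, c_h = 0.3, 0.1, 1.2      # illustrative values
t0_up, t0_down = 263.0, 285.0      # increasing / decreasing sweeps (K)

width = t0_down - t0_up            # hysteresis width
centre = 0.5 * (t0_up + t0_down)   # transition centre
```

Because $\tanh(0) = 0$, each branch passes through $c_{\mathrm{H}}$ at its own centre, which is what makes the two $T_{0}$ values separately identifiable.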
The difference between the two regimes can be explained by the different
distributions of fluctuating magnetic moments in each. In the low temperature
phase we have randomly-oriented electronic moments (in a distribution of width
$\Delta/\gamma_{\mu}=\sqrt{\langle B^{2}\rangle}$, where
$\gamma_{\mu} = 2\pi \times 135.5\,\mathrm{MHz\,T^{-1}}$
is the muon gyromagnetic ratio) fluctuating at rate $\nu$ in the fast
fluctuation limit $\nu\gg\Delta$. As the temperature increases through the
transition, the density of moments increases due to the structural transition.
Crucially, these moments fluctuate at a much faster rate in the higher
temperature phase, with the result that the muon spin, whose evolution is
limited by the value of its gyromagnetic ratio, cannot complete a rotation
before the local field fluctuates and changes value [2]. The electronic
moments are therefore motionally narrowed from the spectra in the high-$T$
regime. This leaves only the random nuclear moments to account for a large
part of the relaxation. The nuclear spins are quasistatic and so are described
by a Kubo-Toyabe-like function (of which we only observe the early-time,
Gaussian part). The fact that the value of $\beta$ appears to plateau below
$\beta\approx 1.5$ suggests that the motional narrowing is not complete.
### 3.2 Weak transverse-field measurements
In order to confirm the existence of the hysteresis loop, the temperature-
dependent measurements were also repeated over the same range but in a weak
transverse magnetic field ($B = 2\,\mathrm{mT}$). Since the external
field is so low, the only muon spins that will oscillate are those in sites
where the local field almost vanishes in zero field, and which are not rapidly
relaxed by dynamics. The size of the change we observe in amplitude with
temperature is small, suggesting that those muons contributing to this effect
constitute only a small fraction of the total ensemble, which might be
explained by the change in the nature of the muon sites with structural phase,
as discussed below. The results were fitted to a decaying cosinusoidal curve
$A(t)=A_{\mathrm{R}}^{\mathrm{wTF}}\exp(-\lambda^{\mathrm{wTF}}t)\cos(\omega
t+\phi)+A_{\mathrm{B}}^{\mathrm{wTF}},$ (4)
with the resulting relaxation asymmetry shown in Fig. 3(b). We again see a
consistent separation between the measurements made in increasing and
decreasing temperature over the transition region. Compared to those above, the fitted parameters have much lower uncertainties (in part because fitting a periodic cosine is better constrained than fitting a pure exponential), and so the hysteresis loop is clearer. Repeating the fitting procedure used in
the previous section, we find that the loop for these measurements is centred
on the slightly lower temperature of $T = (249 \pm 13)\,\mathrm{K}$. The
discrepancy between this and the transition derived from the change in the
$\beta$ parameter suggests the two measurements are sensitive to different
aspects of the muon’s interaction with the system: $\beta$ reflects the
distribution of local magnetic fields; $A_{\mathrm{R}}^{\mathrm{wTF}}$
reflects the availability of muon sites in the two regimes, as described
below. We note also that for both sets of measurements the transition appears
to be continuous, although the resolution is not sufficient to rule out steps
on the scale of $\approx 10\,\mathrm{K}$.
### 3.3 Longitudinal-field measurements
To elucidate the dynamic response, a series of LF measurements were performed at both $T = 200\,\mathrm{K}$ and $T = 350\,\mathrm{K}$ by applying a series of external magnetic fields (up to $B = 0.5\,\mathrm{T}$) along the direction of the muon spin. As the
field magnitude increases the Zeeman term in the muon’s Hamiltonian dominates,
and the muon spin is pinned along its initial direction. Time-varying local
fields can then cause a muon spin flip and relax the asymmetry. This state of
affairs allows us to investigate the magnetic-moment dynamics in the two
phases by fitting the results to a series of exponential functions,
quantifying the relaxation due to the fluctuating magnetic fields. This is
appropriate even in the high $T$ limit, since the applied field rapidly
quenches the Gaussian relaxation, leaving residual exponential relaxation
reflecting electronic dynamics. The model used is therefore
$A(t)=A_{\mathrm{R}}^{\mathrm{LF}}\exp(-\lambda^{\mathrm{LF}}t)+A_{\mathrm{B}}^{\mathrm{LF}},$
(5)
where we fix the parameter
$A_{\mathrm{B}}^{\mathrm{LF}} = 10\%$ to
simplify the fitting procedure. The value of the relaxation rate
$\lambda^{\mathrm{LF}}$ is also shown in Fig. 3(c) for both temperatures. We
see that only the low-temperature measurements show a decrease with increasing
magnetic field. This relationship can be fitted to the Redfield formula [2]
$\lambda^{\mathrm{LF}}=\frac{2\Delta^{2}\nu}{\nu^{2}+\gamma_{\mu}^{2}B_{0}^{2}}+\lambda_{0},$
(6)
where $\Delta$ is the fluctuating amplitude
($\Delta^{2}/\gamma_{\mu}^{2}=\langle(\delta B)^{2}\rangle$), $\nu$ is the fluctuation rate (related to the correlation time between changes via $\tau=\nu^{-1}$), $B_{0}$ is the applied external field and $\lambda_{0}$ is an offset
accounting for the component of the relaxation not reduced by the external
field. (Such an offset is often observed in dynamically-fluctuating molecular
systems [4].) This gives values of $\nu = 66(12)\,\mathrm{MHz}$ and $\Delta = 3.1(3)\,\mu\mathrm{s}^{-1}$ for the
parameters. In the high-temperature phase a very small relaxation rate is
observed in applied field, confirming that the ZF relaxation is caused by
static nuclear moments, which are unable to cause the required spin flips. On
the other hand, the successful description of the low-temperature relaxation
parameters with the Redfield formula confirms that the muon-spin relaxation is
caused by randomized electronic moments with dynamics in the fast-fluctuation
limit.
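A numerical sketch of the Redfield behaviour of Eq. (6), using the quoted $\Delta$ and $\nu$ (with $\lambda_{0}$ set to zero for simplicity), illustrates the suppression of the relaxation rate with applied field:

```python
import math

GAMMA_MU = 2 * math.pi * 135.5  # muon gyromagnetic ratio in rad us^-1 T^-1

def redfield(b0, delta, nu, lam0=0.0):
    """Redfield relaxation rate of Eq. (6); delta and nu in us^-1 (MHz),
    b0 in tesla, returning lambda in us^-1."""
    return 2 * delta**2 * nu / (nu**2 + GAMMA_MU**2 * b0**2) + lam0

delta, nu = 3.1, 66.0                # quoted fitted values
rate_zf = redfield(0.0, delta, nu)   # zero-field rate, 2*Delta^2/nu
rate_lf = redfield(0.5, delta, nu)   # strongly suppressed at B0 = 0.5 T
```

At zero field the rate reduces to $2\Delta^{2}/\nu$, and once $\gamma_{\mu}B_{0} \gg \nu$ it falls off as $B_{0}^{-2}$, which is the field dependence fitted in Fig. 3(c).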
## 4 Muon site analysis
First-principles calculations allow us to compute candidate muon sites and
gain an insight into how the muon probes materials. This method (DFT+$\mu$ [5,
6, 7]) involves performing a geometry optimisation calculation of the
structure with an additional reduced-mass hydrogen atom representing a bare
muon ($\mathrm{Mu}^{+}$) or a muonium ($\mathrm{Mu}^{0}$) atom. The initial
implantation site is chosen randomly with the constraint that initial
positions must be a minimum distance of 1 Å from the atoms and 0.5 Å from the other sites. The
simulations were run on an $8.6 \times 9.9 \times 21.4$ Å unit cell of the
crystal structure of HbimDTDA using the CASTEP code [8] with files generated
by the MuFinder program [6], producing 30 candidate sites for each phase. The
relaxed unit cells were analysed first by considering the distortions to the
atomic positions caused by the muon, which in this case are minimal between
atoms of the same molecule but more considerable between molecules, with a
maximum radial displacement of $\sim 1.0$ Å,
especially for the sites $\mathsf{\bar{H}}$ / $\blacksquare$ and $\mathsf{H}$
/ $\blacksquare$ described below. Muon sites corresponding to different
relaxed structures were compared by using the vector between the site and the closest atom to position the muons in an undistorted cell. Finally, the symmetry of the crystal was used to move all the sites to the same molecule, and nearby sites ($d < 1$ Å) were grouped together by averaging
their positions. All the muonium site simulations were also repeated for a bare muon, i.e. without adding the extra electron to the system. This gave similar results, and bare-muon sites were matched with the muonium ones by assuming that sites closer than 0.5 Å are equivalent. All sites were realised in both cases with the exception of a
single muon site (denoted $\mathsf{S_{1}}$ / $\blacksquare$ below) which was
not found in the high-temperature phase (details can be found in the
Supplemental Material [9]).
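The site-grouping step can be sketched as below; plain Euclidean distances and toy coordinates are used, and the symmetry-mapping step is omitted:

```python
def group_sites(positions, threshold=1.0):
    """Greedily merge candidate positions closer than `threshold` (in Å)
    to a group's running centroid, then return the averaged positions."""
    groups = []
    for pos in positions:
        for group in groups:
            centroid = [sum(c) / len(group) for c in zip(*group)]
            dist = sum((a - b) ** 2 for a, b in zip(pos, centroid)) ** 0.5
            if dist < threshold:
                group.append(pos)
                break
        else:  # no existing group close enough: start a new one
            groups.append([pos])
    return [[sum(c) / len(g) for c in zip(*g)] for g in groups]

# Toy Cartesian coordinates in Å: the first two sites merge, the third stays.
sites = [[0.0, 0.0, 0.0], [0.4, 0.0, 0.0], [3.0, 0.0, 0.0]]
merged = group_sites(sites, threshold=1.0)
```

The same function with a 0.5 Å threshold would serve for matching bare-muon sites to their muonium counterparts.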
The positions of the calculated candidate muon sites are shown in Fig. 4 (only
for the case of $\mathrm{Mu}^{0}$ but the others are similar) and their
respective energies are listed in Table 1. Energies are given relative to the
lowest-energy site for each column. The similarity in energy between the
candidate sites in each class suggests that we might expect each of them to be
realised. We first find a set of candidate sites common to both structures
close to the nitrogen atoms in the sulphur-containing rings. Two of them
($\mathsf{S_{1}}$ / $\blacksquare$ and $\mathsf{S_{2}}$ / $\blacksquare$) are
located _outside_ the region between rings in adjacent chains (see shaded area
in Fig. 1(b)), with the second being _closer_ to the atoms which form the
contact bond between chains in the high temperature phase (see Fig. 1(b)).
Another site ($\mathsf{S_{3}}$ / $\blacksquare$) is located _inside_ the
region but _away_ from the contact sulphur atoms, which might explain why it
has a similar energy at the lower temperature but is higher in energy at
$340\text{\,}\mathrm{K}$. The other low-temperature site ($\mathsf{\bar{H}}$ /
$\blacksquare$) has the muon attached to the non-hydrogenated nitrogen atom in
the central ring and is higher in energy for muonium but the lowest energy
site in the case of the bare muon.
Apart from these common sites, for the $340\text{\,}\mathrm{K}$ structure we
also find two new lower-energy sites. One ($\mathsf{H}$ / $\blacksquare$) is
found sharing the nitrogen atom with a hydrogen atom in the central ring (see
Fig. 1(c)) and the other ($\mathsf{S_{4}}$ / $\blacksquare$) is again attached
to one of the nitrogen atoms in the sulphur-containing ring, but in this case
is _inside_ the inter-ring region and _closer_ to the contact atoms. To
explain the presence of the new sites we note that the main difference between
the two structural phases is the presence of the pancake bonds between the
sulphur rings in the lower-temperature state and the relative position of the
chain. The breaking of these bonds at higher temperature seems to make the new
positions available.
Atom colour key for Fig. 4: H ($\blacksquare$), C ($\blacksquare$), N ($\blacksquare$), S ($\blacksquare$).
(a) $100\text{\,}\mathrm{K}$ (top)
(b) $100\text{\,}\mathrm{K}$ (side)
(c) $340\text{\,}\mathrm{K}$ (top)
(d) $340\text{\,}\mathrm{K}$ (side)
Figure 4: Diagrams showing the main sites for muonium at (a,b) $100\text{\,}\mathrm{K}$, with three low-energy sites ($\blacksquare$, $\blacksquare$ and $\blacksquare$) and a slightly higher-energy site ($\blacksquare$); (c,d) $340\text{\,}\mathrm{K}$, with two new lower-energy sites ($\blacksquare$ and $\blacksquare$).
Site | | $\textsf{Mu}^{0}$, 100 K | $\textsf{Mu}^{0}$, 340 K | $\textsf{Mu}^{+}$, 100 K | $\textsf{Mu}^{+}$, 340 K
---|---|---|---|---|---
$\mathsf{S_{1}}$ | ($\blacksquare$) | 0.01 | 0.06 | 0.43 | -
$\mathsf{S_{2}}$ | ($\blacksquare$) | 0.01 | 0.17 | 0.38 | 0.93
$\mathsf{S_{3}}$ | ($\blacksquare$) | 0.12 | 0.14 | 0.00 | 0.00
$\mathsf{\bar{H}}$ | ($\blacksquare$) | 0.00 | 0.06 | 0.37 | 0.24
$\mathsf{H}$ | ($\blacksquare$) | - | 0.05 | - | 0.00
$\mathsf{S_{4}}$ | ($\blacksquare$) | - | 0.00 | - | 0.23
Table 1: Energies (in eV) of the unit cell with the muon at the different bare muon ($\textsf{Mu}^{+}$) and muonium ($\textsf{Mu}^{0}$) sites, calculated using DFT and given relative to the lowest energy found in each column.
## 5 Discussion
Conventionally we assume that a bare (or diamagnetic) muon spin couples to the
local magnetic field in a material, and probes the local field distribution
without causing an appreciable perturbation. The relevant muon sites from the
previous section would then be the bare ones. The low-temperature regime of
this material, which is thought to be formed from singlet spins, was
previously suggested to be diamagnetic on the basis of bulk susceptibility
measurements. However, if the muon takes the form of an unperturbing,
diamagnetic probe, then the low-temperature relaxation cannot simply be
explained by the presence of highly-dilute magnetic impurities in a
diamagnetic background, since the Redfield behaviour observed relies on the
presence of a dense array of magnetic moments that rapidly fluctuate in time.
It might therefore be unlikely that the material can be characterised as being
non-magnetic in this regime, but rather there are fluctuating moments of
sufficient density to be approximated as giving rise to a Gaussian
distribution of fields at any instant. We distinguish the local magnetic field
distribution in the low temperature phase, featuring this distribution of
moments fluctuating in the fast fluctuation limit, from that in the high-
temperature regime, which likely comprises a denser distribution of moments,
with a far greater characteristic fluctuation rate.
Since the muon is a local probe, the transition we observe likely reflects
muons locally detecting the switching of nearby clusters of molecules in the
sample, with a cluster in the low-temperature state giving an exponential relaxation and one in the high-temperature state a Gaussian one. The stretched
exponential used in the intermediate regime then models the sum of
contributions, whose relative size varies with temperature. We note that our
results in this system resemble those measured in spin-crossover systems based
on iron (II) ions which show a crossover between a low-spin ($S=0$) state at
low temperature and a high-spin ($S=2$) state at high temperature [3, 10]. In
those materials the muon spectra were also fitted to a stretched-exponential
function with $\beta<1$ in the low temperature, low-spin configuration and
$\beta$ approaching $\beta=2$ at high temperature. It was suggested that the
relaxation at low temperature reflected an incomplete crossover, with some
spins remaining in the high-spin configuration at low temperature and forming
a very dilute distribution leading to root-exponential relaxation [11]. A
similar picture could be the case here, with any regions that avoid the low-
temperature structural transition giving rise to a distribution of disordered
spins, causing the observed relaxation. However, the fact that we find
$\beta\lesssim 1$ suggests that the density of moments in our system at low
temperature is greater than the highly-dilute one that would be expected to
give rise to $\beta\approx 0.5$, which was the value observed in some of the
low-temperature phases of the iron-based spin-crossover systems [10].
Another possibility which could account for our data is that the muon’s
sensitivity to the magnetism in the low-temperature, singlet state is caused
by a perturbation the muon makes to the system, as was suggested to be the
case in molecular spin-ladder materials [5]. This might involve the bare,
charged muon causing a local distortion to the nearest spin singlet, or that
the sensitive species is derived from muonium, whose extra electron is
involved in causing the necessary distortion. The muon, along with its local
distortion, would then become the sensitive species, whose interactions give
rise to the observed relaxation. We note that the observed fluctuation
amplitude $\Delta$ in this regime corresponds to the magnetic field from an electron spin approximately 6 Å from the muon, providing a rough length scale
for the interaction. If this is the case, then the material could adopt a
fairly uniform singlet ground state with few additional intrinsic magnetic
impurities. However, even in this case, the transition to a regime of large,
dense magnetic moments at high temperature would continue to allow the muon to
faithfully probe the magnetic switching transition.
Finally, the difference in the low-energy muon sites in this material’s two
structural phases is a noteworthy feature, as the difference in sites in
different states of a system has not been discussed previously in materials of
this type. Since we find a range of muon sites in this system with very
similar energy, we would expect the muons to sample a range of internal
magnetic fields. Although both a bare muon and muonium allow several different
low-energy candidate sites in the two temperature regimes, owing to the range
of fields probed, the two cases are unlikely to cause significant differences in the measured spectra. However, the observation of new sites
becoming available after a structural change likely applies well beyond this
material.
Important questions remain about the nature of the phase transition in this
system, particularly related to the broadness of the transition compared to
the width of the hysteretic region. Inspection of the magnetic susceptibility
data suggests the presence of steps in the response. Indeed, tracking the
structural component of the transition as a function of $T$ by powder x-ray
diffraction also suggests a stepwise progression, with reflections consistent
with the high temperature phase appearing over a range of temperatures. There
has been recent interest in the possibility of realizing the devil’s staircase
structure in such systems [12], where step-like transitions between the spin
states have been observed.
## 6 Conclusion
Muon-spin relaxation measurements, paired with muon-site analysis, have
allowed us to probe the hysteretic magnetic switching behaviour of HbimDTDA
from a local perspective. We identify a hysteresis width of $\Delta T\approx 22\,\mathrm{K}$, centred on $T=274\,\mathrm{K}$. The
low-temperature state gives rise to muon-spin relaxation which is well
described by a model that assumes a dense arrangement of disordered,
dynamically-fluctuating moments. The structural transition causes the muon
sites in the two regimes to differ. However, in a chemically-complex material
such as this, a large number of sites of similar energy occur in both regimes,
with the result that we expect the muon to faithfully probe the system across
the transition. Although this latter feature of differing muon sites in
different structural regimes has yet to be widely investigated, it is possible
that it is a general feature that we should expect in numerous systems.
## 7 Acknowledgments
Part of this work was performed at the STFC-ISIS facility and we are grateful
for provision of beamtime and access to the SCARF Computer cluster. We are
also grateful for computational support provided by both Durham Hamilton HPC
and by the UK national high performance computing service, ARCHER2, for which
access was obtained via the UKCP consortium and funded by EPSRC. AHM is
grateful to STFC and EPSRC for the provision of a studentship. For the purpose
of open access, the authors have applied a Creative Commons Attribution (CC BY)
licence to any Author Accepted Manuscript version arising. Research data from
this project will be made available via Durham Collections.
## References
* [1] Mills M B, Wohlhauser T, Stein B, Verduyn W R, Song E, Dechambenoit P, Rouzières M, Clérac R and Preuss K E 2018 Journal of the American Chemical Society 140 16904–16908 URL http://dx.doi.org/10.1021/jacs.8b10370
* [2] Blundell S J, De Renzi R, Lancaster T and Pratt F L (eds) 2022 Muon Spectroscopy: An Introduction (Oxford University Press) ISBN 978-0198858959
* [3] Blundell S J, Pratt F L, Marshall I M, Steer C A, Hayes W, Létard J F, Heath S, Caneschi A and Gatteschi D 2003 Synthetic Metals 133–134 531–533 URL http://dx.doi.org/10.1016/S0379-6779(02)00424-1
* [4] Lancaster T, Blundell S J, Pratt F L, Brooks M L, Manson J L, Brechin E K, Cadiou C, Low D, McInnes E J L and Winpenny R E P 2004 Journal of Physics: Condensed Matter 16 S4563–S4582 ISSN 1361-648X URL http://dx.doi.org/10.1088/0953-8984/16/40/009
* [5] Lancaster T, Xiao F, Huddart B M, Williams R C, Pratt F L, Blundell S J, Clark S J, Scheuermann R, Goko T, Ward S, Manson J L, Rüegg C and Krämer K W 2018 New Journal of Physics 20 103002 ISSN 1367-2630 URL http://dx.doi.org/10.1088/1367-2630/aae21a
* [6] Huddart B M, Hernández-Melián A, Hicken T J, Gomilšek M, Hawkhead Z, Clark S J, Pratt F L and Lancaster T 2021 MuFinder: A program to determine and analyse muon stopping sites (Preprint 2110.07341v1)
* [7] Blundell S J, Möller J S, Lancaster T, Baker P J, Pratt F L, Seber G and Lahti P M 2013 Physical Review B 88 ISSN 1550-235X URL http://dx.doi.org/10.1103/PhysRevB.88.064423
* [8] Clark S J, Segall M D, Pickard C J, Hasnip P J, Probert M I J, Refson K and Payne M C 2005 Zeitschrift für Kristallographie - Crystalline Materials 220 567–570 URL http://dx.doi.org/10.1524/zkri.220.5.567.65075
* [9] See Supplemental Material for relaxed atomic structures from each of the muon site calculations.
* [10] Blundell S J, Pratt F L, Steer C A, Marshall I M and Létard J F 2004 J. Phys. Chem. Sol. 65 25–28 URL https://www.sciencedirect.com/science/article/pii/S0022369703003263
* [11] Uemura Y J, Yamazaki T, Harshman D R, Senba M and Ansaldo E J 1985 Phys. Rev. B 31 546–563 URL https://link.aps.org/doi/10.1103/PhysRevB.31.546
* [12] Trzop E, Zhang D, Piñeiro-Lopez L, Valverde-Muñoz F J, Carmen Muñoz M, Palatinus L, Guerin L, Cailleau H, Real J A and Collet E 2016 Angewandte Chemie International Edition 55 8675–8679 URL http://dx.doi.org/10.1002/anie.201602441
# Graph Neural Networks for Cancer Data Integration
Teodora Reu
Supervisor: Jonathan Shapiro, 2021-2022
I dedicate this project to Nicolae Reu, my grandpa, who asked me at the age of
seven to compute $\pi$ with a string and a ruler, and to the memory of my
grandma, Săndina Haja.
###### Contents
1. 0 Introduction
1. 1 Efforts on collecting data
2. 2 Efforts on integrating heterogeneous cancer data
1. Autoencoders
2. Similarity network fusion
3. 3 Unsupervised Graph Neural Networks
4. 4 Goal of the project
2. 1 Datasets
1. 1 METABRIC Dataset
2. 2 Synthetic Dataset
3. 3 Synthetic Graph
3. 2 Deep Neural Networks
1. 1 Multilayer Perceptron
2. 2 Autoencoders
1. 1 Variational Autoencoders
3. 3 Convolutional layers
4. 4 Graph Neural Network
1. 1 Graph Convolutional Layer
2. 2 Variational Graph Autoencoder
3. 3 Deep Graph Infomax
4. 3 Recreating Two State-of-the-Art Models
1. 1 Description of Models
2. 2 Evaluation and results
1. 1 Hyper-parameter analysis
2. 2 Best model assessment
5. 4 Graph Neural Networks for Cancer Data Integration
1. 1 Graph Construction Algorithms
1. 1 From feature matrix to graph
2. 2 Quantifying the quality of a graph
3. 3 Graphs built out of METABRIC modalities
1. Homophily levels for each class of labels on: Clinical data, and multi-omic data (mRNA, CNA), by using K Nearest Neighbours
2. Homophily levels for each class of labels on: Clinical data, and multi-omic data (mRNA, CNA), by using Radius R
2. 2 Graph Neural Network Models For Data Integration
1. 1 Notation and background knowledge
2. 2 Concatenation Of Features: CNC-DGI and CNC-VGAE
3. 3 Two Graphs: 2G-DGI
4. 4 Heterogeneous Graph: Hetero-DGI
3. 3 Evaluation and results
4. 4 Evaluation on Synthetic-Data
1. CNC-DGI
2. CNC-VGAE
3. 2G-DGI
4. HeteroDGI
5. Conclusions
5. 5 Evaluation on METABRIC
1. 1 Graph hyper-parameter selection
1. Best Model Assessment
2. Conclusions
6. 5 Conclusion
1. 1 Summary
2. 2 Further work
1. Hyper-parameter settings
2. Multi-modal expansion
3. H-VGAE
4. A Math Problem
7. 6 Appendix
1. 1 Supplementary results
###### List of Figures
1. 1 Process of transcription borrowed from “Genome Research Limited”
2. 2 Example of two synthetic datasets, a good one, and a bad one
3. 3 Building a graph from the synthetic dataset
4. 1 Generate low-dimensional representation of the input, regularize it in a Multi-Gaussian distribution, and attempt reconstruction of the original input
5. 2 Although two points on the initial manifold might be close to each other in the ambient space, the distance along the actual surface might be larger; a multivariate Gaussian representation of the data will ’flatten’ the manifold’s surface to better represent the similarity or disparity of the points. This is an intuitive picture.
6. 3 Convolutional layer applied on a single-channel data point
7. 4 Convolutional layer applied on a single-node
8. 5 Convolutional layer applied on a single-channel point
9. 6 High-level overview of Deep Graph Infomax
10. 1 CNC-VAE and H-VAE. The two feature matrices are represented with red and green.
11. 2 CNC-VAE performance with different hyper parameters settings.
12. 3 H-VAE performance with different hyper parameters settings.
13. 1 This chapter is split in three modules: Graph Construction, Introducing novel Unsupervised Integrative Graph Neural Network, and Evaluate the quality of the lower latent space representations.
14. 2 Graphic representation of choosing a threshold $r$ for the radius method, or $k$ for KNN, and the results of applying the two methods separately
15. 3 Left graph has a higher homophily than the right one, where the ’yellow’ ’purple’ colours represent the labels of the nodes
16. 4 $\mathbf{\mathcal{U}}$: Special integration layer
17. 5 CNC-DGI: Apply Variational Autoencoder on top of the graph built on concatenated inputs.
18. 6 CNC-VGAE: Apply Variational Autoencoder on top of the graph built on concatenated inputs
19. 7 2G-DGI: Two graphs integration
20. 8 Build graph on concatenated features
21. 9 Evaluation pipeline for this project
22. 10 Accuracies of lower-latent representation obtained from CNC-DGI
23. 11 Accuracies of lower-latent representation obtained from CNC-VGAE
24. 12 Accuracies of lower-latent representation obtained from 2G-DGI
25. 13 Accuracies of lower-latent representation obtained from Hetero-DGI
26. 14 Accuracies of lower-latent representation obtained from CNC-VGAE
27. 15 Accuracies of lower-latent representation obtained from CNC-VGAE
28. 16 Accuracies of lower-latent representation obtained from CNC-VGAE
29. 17 Accuracies of lower-latent representation obtained from CNC-VGAE
30. 1 H-VGAE proposed architecture for integration
31. 2 Growing neighbourhoods, and growing number of like neighbours
32. 1 Clin+CNA integration testing on ER
33. 2 Clin+CNA integration testing on DR
34. 3 Clin+CNA integration testing on PAM
35. 4 Clin+CNA integration testing on IC
36. 5 Clin+mRNA integration testing on ER
37. 6 Clin+mRNA integration testing on DR
38. 7 Clin+mRNA integration testing on PAM
39. 8 Clin+mRNA integration testing on IC
40. 9 CNA+mRNA integration testing on ER
41. 10 CNA+mRNA integration testing on DR
42. 11 CNA+mRNA integration testing on PAM
43. 12 CNA+mRNA integration testing on IC
###### List of Tables
1. 1 Table of state-of-the-art cancer data integration approaches inspired by the review [PSBB+21]
2. 1 Test results for classification with Naive Bayes (NB), Support Vector Machine (SVM), and Random Forest (RF) of lower-latent representations produced by CNC-VAE and H-VAE, in percentages (%).
3. 1 Homophily of graph built on Clin
4. 2 Homophily of graph built on mRNA
5. 3 Homophily of graph built on CNA
6. 4 Homophily of graph built on Clin
7. 5 Homophily of graph built on mRNA
8. 6 Homophily of graph built on CNA
9. 7 Best-in-class results on the classification task, obtained with Naive Bayes, Support Vector Machine, and Random Forest, on representations from models trained in various settings
## Chapter 0 Introduction
> Someday my paintings will be hanging in the Louvre.
>
> Lust for Life, Irving Stone
Biomedicine is the field of science that takes the research findings of
medical science and applies them in clinical practice. For a long time, it has
based its conclusions on the correlation analysis of various events. For
example, when a patient exhibits symptoms A, B, and C, there is a certain
probability that they have disease X. Medical practitioners learn the
correlations between symptoms and conditions, with the savviest easily finding
associations leading to the correct diagnosis, i.e. based on the inputs they
can generate an output with high confidence. This can be pictured as having a
space with $N$ dimensions representing all possible symptoms, and a doctor
embedding the patients as data points in this space. Thus, if a new subject
resembles the symptoms of previously diagnosed patients, the doctor can
produce a list of possible conditions based on the similarity to the previous
patients. In technical terms, the output is based on the proximity to the
already classified data points.
This represents a viable way to understand biomedical data, with the addition
that over the past decade, a wide range of new data types describing patients
has been collected, ranging from graphical data such as X-ray or MRI scans to
cellular quantities of proteins or genes and concentrations of minerals in the
blood, among many others. These collections contain large volumes of data of
various shapes and sources, both sparse and dense, encompassing categorical
and continuous values. Despite these recent efforts, medical experts still
face difficulties in making diagnoses given the broad set of possible
symptoms, hence the demand for AI models which can learn the correlations that
the human brain might overlook.
In the next sections, we will introduce the efforts in collecting cell-related
data (referred to as multi-omic or multi-genomic) from various organisms,
followed by the existing research aiming to better understand these data, i.e.
by further categorizing the types of diseases. Finally, we will present the
case for why unsupervised learning with Graph Neural Networks has the
potential to show promising results and what these would look like, and then,
the goal of this project.
### 1 Efforts on collecting data
Over the past decade, many projects have provided comprehensive catalogues of
cells coming from organisms with certain diseases. By using cutting-edge
single-cell and spatial genomics alongside computational techniques, the Human
Cell Atlas [RTL+17] researchers revealed that approximately 20,000 genes in an
individual cell can be switched on, creating a unique identity for each cell.
This has been studied on more than 24.6 million cells coming from multiple
organs, such as the brain, pancreas, blood, and kidney. Another notable cell
atlas is Brain Research through Advancing Innovative Neurotechnologies (BRAIN)
[EGK+17], which studies single cells in health and disease. Other examples of
such atlases are the Cell Atlas of Worm [DSLC+20], the Fly Cell Atlas
[LJDW+22], and Tabula Muris [C+18].
Among the consortia representing large collections of cancer data, the most
notable are The Cancer Genome Atlas (TCGA), and the Molecular Taxonomy of
Breast Cancer International Consortium (METABRIC), which describes 2000 breast
cancer patients by using multi-omic and clinical data.
### 2 Efforts on integrating heterogeneous cancer data
Multiple sources such as [BC20] and [YK15] argue that in order to give better
treatment to cancer patients the integration of multi-omic (cell mRNA
expression, DNA methylation) and clinical data would be preferable. They
suggest that patients with the same cancer type should be given different
treatments based on their integrated results, thus leading to a further sub-
categorisation of the usual types of cancer.
This can be achieved by clustering patient data points based on their multi-
omic and clinical results, and many models that attempt integration of such
data have been tried over the last decade. Since the sub-categories of cancer
types are as yet unknown, unsupervised models were trained on patient data to
underline different categories of the same cancer type. For example, knowing
that patients from a discovered sub-category received two types of treatments,
with a higher survival rate for one of the treatments, could help
practitioners decide which treatment works best for that sub-category of
patients.
The different integration types can be observed in the table below.
| Integration Type | Method | Comments | Citations |
|---|---|---|---|
| Early | Plain concatenation of every dataset into a single, large matrix, to which models are then applied | The resulting dataset is noisier and high-dimensional, which makes learning more difficult; overlooks omics with smaller feature dimension; ignores the specific data distribution of each omic | [XDK+19], [CPLG18] |
| Mixed | Concatenation of lower latent space representations takes place in the middle of the learning process | Addresses the shortcomings of the early integration type; some of the proposed models: kernel learning, graph-based, ANN | [HZYZ21], [WZP+17], [WMD+14], [MZ18], [LZPX20], [ZAL18] |
| Late | Applies the model to the datasets independently, followed by an aggregation function over the outputs for cancer prognosis | Cannot capture the inter-omic interactions; the models do not share information about the learned features | [SWL18], [WSH+20] |
| Hierarchical | Generates a different representation for each genomic in part, which are concatenated and used to train an encoding model | Some of the proposed models: iBAG (integrative Bayesian analysis of genomics), LRMs (linear regulatory modules), ARMI (assisted robust marker identification) | [WBM+13], [ZZZM16], [CSZ+17] |
Table 1: Table of state-of-the-art cancer data integration approaches inspired
by the review [PSBB+21]
The next subsections present projects relevant to this project’s goal.
##### Autoencoders
In [CPLG18], unsupervised and supervised learning were used to identify two
subgroups with significant survival differences. Extending the study to
multiple cohorts of varying ethnicity helped identify 10 consensus driving
genes by association with patients’ survival [CPL+19].
In [XWC+19], a stacked autoencoder was used on each modality, and the
extracted representations then served as the input to another autoencoder.
Finally, a supervised method was used to evaluate the quality of the lower-
space representations.
In [SBT+19] the authors use several variational autoencoder architectures to
integrate multi-omic and clinical data. They evaluate their integrative
approaches by combining pairs of modalities and by testing whether their lower
latent spaces were sensitive enough to distinguish certain cancer sub-types.
##### Similarity network fusion
The way Similarity Network Fusion (SNF) [WMD+14] constructs networks using
multi-genomic data is related to this project, since this project attempts
learning on graphs constructed from multi-omic and clinical data. Given two or
more types of data for the same objects (e.g. patients), SNF first creates one
individual network for each modality, using a patient-to-patient similarity
measure. After that, a network fusion step takes place, which applies a
nonlinear method based on message-passing theory [MTMG03] iteratively to the
two networks. After a few iterations, SNF converges to a single network. Their
method is robust with respect to its hyper-parameter settings. The advantage
of their approach is that weak similarities disappear over time, while strong
similarities become even stronger.
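The fusion idea can be sketched in a simplified two-view form. This is an illustrative sketch rather than the full SNF algorithm of [WMD+14]: the sparse kNN-localised kernels and exact normalisations of the original are omitted, and the function names are ours.

```python
import numpy as np

def affinity(X, sigma=1.0):
    """Patient-to-patient similarity from one modality (Gaussian kernel)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def row_normalise(W):
    """Turn a similarity matrix into a row-stochastic transition matrix."""
    return W / W.sum(axis=1, keepdims=True)

def fuse(X1, X2, iterations=10):
    """Simplified two-view network fusion: each view's transition matrix
    is iteratively diffused through the other view's similarity graph."""
    P1, P2 = row_normalise(affinity(X1)), row_normalise(affinity(X2))
    S1, S2 = P1.copy(), P2.copy()  # full SNF would use sparse kNN kernels here
    for _ in range(iterations):
        # simultaneous update: each P is pushed through the *other* view
        P1, P2 = S1 @ P2 @ S1.T, S2 @ P1 @ S2.T
        P1, P2 = row_normalise(P1), row_normalise(P2)
    return (P1 + P2) / 2  # fused patient-to-patient network
```

This captures the key qualitative behaviour described above: edges supported by both views are reinforced across iterations, while weak, view-specific similarities are washed out.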
In the next section, we will present promising models which perform
unsupervised learning over graph-shaped data and obtain very good results on
clustering tasks.
### 3 Unsupervised Graph Neural Networks
Among the many unsupervised Graph Neural Networks, some of the notable ones
are the Variational Graph Autoencoder [KW16b] and Deep Graph Infomax [VFH+19].
Variational Graph Autoencoders (VGAEs) have shown very promising results in
summarizing a graph’s structure, achieving very good results on the link
prediction task. The experiment consists of encoding the graph’s structure
(with randomly removed edges) into a lower-space representation, attempting
reconstruction of the adjacency matrix, and then comparing the reconstructed
adjacency matrix with the complete input adjacency matrix. The authors
compared their models against spectral clustering and DeepWalk, and obtained
significantly better results on Cora, Citeseer, and Pubmed [SNB+08].
Another very promising unsupervised Graph Neural Network is Deep Graph
Infomax (DGI), which obtains strong lower-space representations on graph-
shaped datasets. The Deep InfoMax approach it builds on even obtains better
lower-latent space representations on datasets such as CIFAR10 and Tiny
ImageNet than variational Autoencoders [HFLM+18]. Deep Graph Infomax obtains
excellent results on datasets such as Cora, Citeseer, Pubmed, Reddit and PPI.
In some cases the unsupervised DGI gives better results than even supervised
learning on a classification task (supervised models additionally use the
objects’ labels, whereas in unsupervised learning the labels are never seen).
DGI thus shows itself to be a very promising unsupervised model architecture.
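The VGAE reconstruction step described above reduces to an inner-product decoder: the probability of an edge between nodes $i$ and $j$ is $\sigma(z_i \cdot z_j)$. The sketch below illustrates this, with random embeddings standing in for the encoder output:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruct_adjacency(Z):
    """VGAE-style inner-product decoder: P(edge i-j) = sigmoid(z_i . z_j)."""
    return sigmoid(Z @ Z.T)

rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 2))        # stand-in for the encoder's node embeddings
A_hat = reconstruct_adjacency(Z)   # 5x5 matrix of edge probabilities
```

Training then compares `A_hat` against the true adjacency matrix (plus a KL regularisation term on the latent distribution), which is how the link-prediction experiment above is scored.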
### 4 Goal of the project
This project leverages integrative unsupervised learning on Graph Neural
Networks, such as VGAEs and DGIs, and aims to obtain competitive results in
line with state-of-the-art for cancer data integration [SBT+19]. The ultimate
goal is to construct robust graphs comprising patient-to-patient relations
based on the available data modalities. This can be achieved by training
models in an unsupervised fashion and attempting to combine data sets at
different stages throughout the learning process, followed by generating
lower-latent space representations. In order to confirm that the resulting
embeddings are sensitive enough to underline various sub-types of cancer, we
will assess their quality by performing a classification task with a benchmark
method such as Naive Bayes on the already identified sub-types of breast
cancer in the METABRIC dataset.
A successful project will be characterised by building this novel Machine
Learning pipeline - from data source integration and patient-relationship
graph construction, to designing models learning lower-dimensional
representations which would improve the performance metrics obtained on
classification tasks. Ideally, these lower-latent space embeddings will
resemble new clusters of data points leading to the discovery of new sub-
categories of breast cancer that would help medical practitioners in offering
accurate treatments to patients.
This paper will discuss in depth the following:
* •
The datasets along with each modality in turn, the label classes, and the
construction of a synthetic dataset which will be used to judge the quality of
the proposed models
* •
Artificial neural networks, from Multilayer Perceptron to Autoencoders and
Deep Graph Infomax, in order to build knowledge over the models used for
unsupervised learning
* •
The recreation of two state-of-the-art models, which is useful as a benchmark
for the evaluation of the novel models
* •
Building a graph learning pipeline on non-graph-shaped data
* •
Novel models that attempt integration, and a correct evaluation of the lower
latent space embeddings
## Chapter 1 Datasets
### 1 METABRIC Dataset
The METABRIC project is a joint British-Canadian effort to classify breast
tumours based on numerous genomic, transcriptomic, and imaging data types
collected from over 2000 patient samples [CSC+12]. This data collection is one
of the most comprehensive worldwide studies of breast cancer ever undertaken.
Similarly to [SBT+19], we will conduct integrative experiments on CNAs, mRNA
and clinical data defined below.
The work in [You21] describes gene expression as the process by which
instructions in our DNA (deoxyribonucleic acid) are converted into a
functional product, such as a protein. Gene expression is tightly related to
cell responses to changing environments. The process of copying gene chains
out of DNA, through messenger-RNA (messenger ribonucleic acid, or mRNA), is
called transcription. RNA is a chemical structure with similar properties to
DNA, with the differences that while DNA has two strands, RNA has only one,
and instead of the base thymine (T), RNA has a base called uracil (U). A key
feature of this process is that, if one knows one strand of mRNA, they can
deduce the complementary strand, because bases come in pairs. For example, for
the strand ACUGU in an mRNA, the complementary DNA strand will be TGACA
(because guanine (G) pairs with cytosine (C), and adenine (A) pairs with
thymine (T), or uracil (U) in mRNA). The key property of DNA is the
complementarity of its two strands, which allows for accurate replication (DNA
to DNA) and information transfer (DNA to RNA). This can easily be seen in
Figure 1.
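The base-pairing rule in this example can be written as a small lookup table (a sketch; the function name is ours):

```python
# Complement of an mRNA strand on DNA: A pairs with T, C with G,
# G with C, and U (the RNA substitute for T) with A.
RNA_TO_DNA = {"A": "T", "C": "G", "G": "C", "U": "A"}

def dna_complement_of_mrna(strand: str) -> str:
    """Return the complementary DNA strand for an mRNA sequence."""
    return "".join(RNA_TO_DNA[base] for base in strand)
```

Applied to the strand from the text, `dna_complement_of_mrna("ACUGU")` gives `"TGACA"`.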
Figure 1: Process of transcription, borrowed from “Genome Research Limited”
[ZR20] describes cancer development as an evolutionary process in which
somatic mutations (those in non-reproductive cells, i.e. neither ovule nor
sperm) accumulate in a population of tumor cells. Copy number
aberrations (CNAs), which are the deletion or amplification of large genomic
regions, are a type of somatic mutation common to many types of cancer,
including breast cancer. CNAs are classified into multiple types and can cover
a broad range of sizes, from thousands of kilobases to entire chromosomes and
even chromosome arms. A critical role played by CNAs is in driving the
development of cancer, and thus the characterization of these events is
crucial in the diagnosis, prognosis and treatment of diseases. Furthermore,
CNAs act as an important point of reference for reconstructing the evolution
of tumors. Although somatic CNAs are the dominant feature discovered in
sporadic breast cancer cases, the elucidation of driving events in
tumorigenesis is hampered by the large variety of random, non-pathogenic
passenger alterations and copy number variants.
METABRIC consists of 1980 breast-cancer patients split into groups based on
two immunohistochemistry sub-types, ER+ and ER-, 6 intrinsic gene-expression
subtypes (PAM50) [PPK+], 10 Integrative Clusters (IC10)[CSC+12], and two
groups based on Distance Relapse (the cancer metastasised to another organ
after initial treatment or not).
The dataset which we are going to use is the one already pre-processed by
[SBT+19], because it is already split for five-fold cross-validation, for each
label class, so that each fold contains a proportional number of objects of
each class. The CNA modality has been processed so that its features follow a
Bernoulli distribution, and the clinical data has been passed through a
one-hot-encoding process.
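The two preprocessing steps described above — stratified five-fold splitting and one-hot encoding — can be sketched as follows. This is an illustrative NumPy version, not the exact pipeline of [SBT+19]; the function names are ours.

```python
import numpy as np

def one_hot(labels):
    """One-hot encode a 1-D array of categorical values."""
    categories = np.unique(labels)
    return (labels[:, None] == categories[None, :]).astype(float)

def stratified_folds(labels, k=5, seed=0):
    """Assign each sample a fold index in 0..k-1 so that every class is
    spread proportionally across the k folds."""
    rng = np.random.default_rng(seed)
    folds = np.empty(len(labels), dtype=int)
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        folds[idx] = np.arange(len(idx)) % k  # deal class members round-robin
    return folds
```

With this construction, each fold receives roughly `1/k` of every class, which is what makes per-class proportions stable across the cross-validation folds.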
### 2 Synthetic Dataset
When developing novel models on top of complex datasets such as METABRIC, it
is hard to segregate the source of any errors or results that fail to meet
expectations due to the multitude of stages in the learning pipeline: the data
integration and graph building cycle, the model training or the classification
task. Thus, we will leverage a testing methodology popular in literature to
help us point out any errors or inconsistencies, either from a data,
architecture design or implementation perspective. Specifically, we will
generate a synthetic dataset coming from a Gaussian distribution; this is
advantageous because edges can be predefined based on the labels of the
artificial data points, and we are also in control of dataset characteristics
such as homophily levels.
The requirements of this synthetic dataset are enumerated below along with the
design decisions behind them:
1. 1.
The dataset will contain objects from two classes as we want to perform a
classification task in the final stage to assess the quality of our lower-
latent space embeddings
2. 2.
To perform data integration, the objects will be described by two modalities,
where each modality is sampled from a different normal distribution
3. 3.
The distributions used to generate the modality data must be intractable for a
Principal Component Analysis i.e. there must be no clear separation of classes
after appying PCA, such as in Figure 2 (a), because having the objects already
clustered would defeat the purpose of the experiment
4. 4.
To easily build edges between samples of same class (intra-class) and samples
belonging to different classes (inter-class) in order to evaluate how various
graph building algorithms reflect on the quality of the lower-space embeddings
(a)
(b)
Figure 2: Example of two synthetic datasets, a good one, and a bad one
After considering these matters, we decided that for each class of objects we
should sample points with features coming from two multivariate Gaussians with
high standard deviation, such that the feature spaces of the two classes
overlap. Let $A$ and $B$ be our two classes of labels, and let $\alpha$ and
$\beta$ be our two modalities. Now, assume the feature space of $\alpha$ has
$n$ dimensions, and that of $\beta$ has $m$. We define $\vec{\mu}_{\alpha}$
and $\vec{\mu}_{\beta}$, where $\mu_{\alpha,i}$ and $\mu_{\beta,i}$ come from
two uniform distributions with different parameters that can be freely chosen.
We then define $\vec{\mu}_{\alpha,A}=\vec{\mu}_{\alpha}-\theta_{\alpha}$,
where $\theta_{\alpha}$ can again be chosen by us, and
$\vec{\mu}_{\alpha,B}=\vec{\mu}_{\alpha}+\theta_{\alpha}$. For the second
modality the process is similar, replacing $\alpha$ with $\beta$.
Now we sample $X_{\alpha,A}$ from
$\mathcal{N}(\vec{\mu}_{\alpha,A},\vec{\sigma})$, $X_{\beta,A}$ from
$\mathcal{N}(\vec{\mu}_{\beta,A},\vec{\sigma})$, $X_{\alpha,B}$ from
$\mathcal{N}(\vec{\mu}_{\alpha,B},\vec{\sigma})$ and $X_{\beta,B}$ from
$\mathcal{N}(\vec{\mu}_{\beta,B},\vec{\sigma})$, where $\vec{\sigma}$ is large
enough to cause overlap between the modalities’ feature spaces. For example,
Figure 2 (a) shows a good choice of $\vec{\sigma}$, while for Figure 2 (b) we
cannot say the same.
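The sampling procedure can be sketched directly. The parameter values below are illustrative choices, not those used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3          # feature dimensions of modalities alpha and beta
n_samples = 100      # points per class
sigma = 5.0          # large enough for the class feature spaces to overlap

# Base means drawn from two (freely parameterised) uniform distributions.
mu_alpha = rng.uniform(-1, 1, size=n)
mu_beta = rng.uniform(-2, 2, size=m)
theta_alpha, theta_beta = 1.0, 1.0

# Class A means are shifted by -theta, class B means by +theta.
X_alpha_A = rng.normal(mu_alpha - theta_alpha, sigma, size=(n_samples, n))
X_alpha_B = rng.normal(mu_alpha + theta_alpha, sigma, size=(n_samples, n))
X_beta_A = rng.normal(mu_beta - theta_beta, sigma, size=(n_samples, m))
X_beta_B = rng.normal(mu_beta + theta_beta, sigma, size=(n_samples, m))
```

Because `sigma` is large relative to `theta`, the two classes overlap heavily in feature space, matching requirement 3 above.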
### 3 Synthetic Graph
Figure 3: Building a graph from the synthetic dataset
In order to build node relations between the data points coming from the
synthetic dataset, we decided to implement a statistical approach to the task.
Let purple and yellow be the two possible labels. Our interest is to
manipulate graph configurations in such a manner that we control the number of
edges between nodes with the same label and between nodes with opposite
labels. This is important for many reasons, which we will explain later; for
now the reader need only note that the performance of the models applied to
the dataset will be heavily influenced by it. As shown in Figure 3, we build
edges between two purple nodes with probability $Z$, between purple and yellow
nodes with probability $X$, and between yellow nodes with probability $Y$. In
the experiments and evaluation section we show how the models’ performance can
be influenced by different graph structures.
While it might seem counter-intuitive to build edges this way, it can be
rationalized as follows: if $p(edge:purple\to purple)=1/2$ and
$p(edge:yellow\to yellow)=1/2$, then $p(edge:purple\to yellow)=0$ (we would
get two isolated graphs); if instead $p(edge:purple\to purple)=1/3$ and
$p(edge:yellow\to yellow)=1/3$, then $p(edge:purple\to yellow)=1/3$. This is
the probabilistic approach.
As mentioned, we use a statistical approach which works the other way around.
We generate random samples of edges between purple-to-purple, purple-to-
yellow, and yellow-to-yellow nodes, with counts $z$, $x$, and $y$
respectively. To get back to the probabilistic approach, we compute
$X=\frac{x}{x+y+z}$, $Y=\frac{y}{x+y+z}$, and $Z=\frac{z}{x+y+z}$. If we want
to obtain the probabilities $p(edge:purple\to purple)=1/2$,
$p(edge:yellow\to yellow)=1/2$ and $p(edge:purple\to yellow)=0$, we simply
generate, for example, $2000$ edges between purple-to-purple nodes and $2000$
edges between yellow-to-yellow nodes. Our decision to build graphs this way
will be further explained in the chapter Graph Neural Networks for Cancer Data
Integration.
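The statistical edge-building and the count-to-probability conversion can be sketched as follows. Counts and node numbers are illustrative; in this sketch $z$, $x$, and $y$ count purple-purple, purple-yellow, and yellow-yellow edges, matching the probabilities $Z$, $X$, and $Y$ respectively.

```python
import numpy as np

rng = np.random.default_rng(0)
n_purple, n_yellow = 50, 50

# Edge counts per pair type, chosen freely in the statistical approach:
# z: purple-purple, x: purple-yellow, y: yellow-yellow.
z, x, y = 2000, 0, 2000
total = x + y + z
Z, X, Y = z / total, x / total, y / total   # back to probabilities

# Sample the actual edges by drawing random endpoint pairs.
purple = np.arange(n_purple)
yellow = np.arange(n_purple, n_purple + n_yellow)
edges = np.concatenate([
    rng.choice(purple, size=(z, 2)),
    np.stack([rng.choice(purple, size=x), rng.choice(yellow, size=x)], axis=1),
    rng.choice(yellow, size=(y, 2)),
])

# Edge homophily: fraction of edges joining same-label nodes.
labels = np.array([0] * n_purple + [1] * n_yellow)
homophily = (labels[edges[:, 0]] == labels[edges[:, 1]]).mean()
```

With `x = 0` the graph splits into two isolated components and the homophily is maximal, which is exactly the $Z = Y = 1/2$, $X = 0$ example in the text.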
## Chapter 2 Deep Neural Networks
This chapter provides a summary of the most prominent neural network
architectures, their potent variants, and their applicability. We will first
introduce the multilayer perceptron as a knowledge base for the following
models leveraged in this work: Variational Autoencoders (VAE) and Graph
Neural Networks (GNN).
In the experiments detailed further in this work these models have been used
in an unsupervised fashion: rather than carrying out regression or
classification tasks, the goal is to generate lower-latent space
representations of the unlabeled data points. The Variational Autoencoder is
at the core of the implemented models, and convolutional layers are explained
in depth because they underpin the graph learning techniques based on order-
invariant convolutions. Finally, we introduce GNNs and describe the main
approaches to performing unsupervised learning to generate lower-space
embeddings of data points: Deep Graph Infomax, which maximizes local mutual
information, and the Variational Graph Autoencoder, which builds on the
traditional Variational Autoencoder with the addition that the decoder
reconstructs the adjacency matrix.
### 1 Multilayer Perceptron
Neural networks are Machine Learning models composed of simple processing
units that can compute linear or nonlinear transformations (depending on the
activation function used) on vector inputs; these processing units are called
perceptrons. A perceptron receives a vector $\vec{x}\in\mathbf{R^{n}}$, to
which it applies a linear combination by multiplying with a weight vector
$\vec{w}\in\mathbf{R^{n}}$ and adding a bias value $b$. Afterwards, a
nonlinear function $\sigma$ can be applied to the result to obtain the final
output value, $y$. There are a multitude of activation functions, which
are chosen based on the learning task.
$y=\sigma\left(b+\sum_{i=1}^{n}w_{i}x_{i}\right)=\sigma\left(b+\vec{w}^{T}\vec{x}\right)$
Multilayer perceptron neural networks are formed of layers of perceptron units
which receive inputs from previous layers, and apply the linear combination
and activation functions described above to return the output values. For each
individual layer, we can compute the output values with the matrix form as
follows:
$\vec{y}=\sigma\left(W\vec{x}+\vec{b}\right)$ (1)
For a two-layer perceptron neural network the formula is similar. On top of
the $\vec{y}$ obtained in (1), we apply another linear transformation by
multiplying with the second layer’s weight matrix, adding its bias value and,
finally, applying the same or a different activation function. The new result
is:
$\vec{y}=\sigma\left(W\vec{y}_{previous}+\vec{b}\right)=\sigma\left(W\sigma_{previous}\left(W_{previous}\vec{x}+\vec{b}_{previous}\right)+\vec{b}\right)$
(2)
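The two-layer forward pass of equation (2) can be sketched in a few lines of NumPy; the choice of ReLU for $\sigma$ is an illustrative assumption, not part of the original formulation:

```python
import numpy as np

def sigma(v):
    """ReLU activation, one common choice for the nonlinearity."""
    return np.maximum(v, 0.0)

def two_layer_mlp(x, W1, b1, W2, b2):
    """Forward pass of equation (2):
    y = sigma(W2 @ sigma(W1 @ x + b1) + b2)."""
    hidden = sigma(W1 @ x + b1)
    return sigma(W2 @ hidden + b2)
```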
Networks with more than one intermediate layer are called deep neural
networks. A natural question that comes with such networks is: "How many
layers?". It is argued by Cybenko in [Cyb89] that any bounded continuous real
function can be approximated with only one hidden layer and a sigmoidal
activation function. However, if that were the whole story this chapter would
end here, which is not the case. There is no perfect architecture, and
finding a neural network that works for a specific kind of data is an
engineering task in which many experiments and searches need to be carried
out in order to find what works best.
### 2 Autoencoders
Generally, an autoencoder is a model that consists of two networks: the
encoder, which constructs lower-latent space embeddings from the data points,
and the decoder, which reconstructs the input data. The encoder function
$E(\cdot)$ is parametrised by $\theta$, and the decoder function $D(\cdot)$
is parametrised by $\phi$. The lower-space embedding is learned as
$E_{\theta}(x)$ and the reconstructed input is $y=D_{\phi}(E_{\theta}(x))$.
The two parameter sets are learned together through a loss function chosen
according to the underlying data, named the reconstruction loss, which is
Binary Cross Entropy (BCE) for categorical data, or Mean Square Error (MSE)
for continuous data.
$L_{MSE}(\theta,\phi)=\frac{1}{n}\sum_{i=1}^{n}(x_{i}-D_{\phi}(E_{\theta}(x_{i})))^{2}$
(3)
Let $m$ be the number of classes; then we have:
$L_{BCE}(\theta,\phi)=-\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{m}x_{ij}\log(D_{\phi}(E_{\theta}(x_{i}))_{j})$
(4)
Many variants of autoencoders have been proposed to overcome the shortcomings
of simple autoencoders: poor generalization, lack of disentanglement, and the
need for modification for sequence inputs. Among these models are the
Denoising Autoencoder (DAE) [VLBM08], which randomly filters out some of the
input features to learn the defining characteristics of the input object, and
the Sparse Autoencoder (SAE) [CNL11], which adds a regularization term to the
loss function to encourage the lower-latent representations to be sparse
(i.e. to have many zero-valued entries in the feature vectors). Finally, the
Variational Autoencoder (VAE) [KW13] is presented in the next section.
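The encoder/decoder composition and the MSE reconstruction loss of equation (3) can be sketched as below; simple linear maps stand in for the two networks, purely for illustration:

```python
import numpy as np

def encoder(x, W_e, b_e):
    """E_theta: project the input to the lower-latent space (linear for brevity)."""
    return W_e @ x + b_e

def decoder(z, W_d, b_d):
    """D_phi: reconstruct the input from the latent code."""
    return W_d @ z + b_d

def mse_loss(x, y):
    """Reconstruction loss of equation (3) for a single sample."""
    return float(np.mean((x - y) ** 2))
```

With identity weights and zero biases the reconstruction is exact and the loss vanishes, which is a quick sanity check of the composition $D_{\phi}(E_{\theta}(x))$.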
#### 1 Variational Autoencoders
Figure 1: Generate a low-dimensional representation of the input, regularize
it towards a multivariate Gaussian distribution, and attempt reconstruction
of the original input
Typically, the VAE assumes the latent variables to follow a centered
isotropic multivariate Gaussian $p_{\theta}(z)=\mathbf{N}(z;0,I)$, with
$p_{\theta}(z|x)$ a multivariate Gaussian whose parameters are approximated
using a fully connected neural network. Since the true posterior
$p_{\theta}(z|x)$ is intractable, we assume it takes the form of a Gaussian
distribution with approximately diagonal covariance. This allows variational
inference to approximate the true posterior, turning inference into an
optimisation problem. In this case, the variational approximate posterior
will also take a Gaussian form with diagonal covariance structure:
$q_{\phi}(z|x_{i})=\mathbf{N}(z;\mu_{i},\sigma_{i}^{2}I)$
where $\mu_{i}$ and $\sigma_{i}$ will be outputs of the encoder. Since both
$p_{\theta}(z)$ and $q_{\phi}(z|x_{i})$ are Gaussian, we can compute the
discrepancy between the two:
$l_{i}(\theta,\phi)=-E_{q_{\phi}(z|x_{i})}[\log p_{\theta}(x_{i}|z)]+KL(q_{\phi}(z|x_{i})\,||\,p_{\theta}(z))$
(5)
with
$KL(P(x)||Q(x))=\sum_{x}P(x)\log\left(\frac{P(x)}{Q(x)}\right)$ (6)
The first part of the loss function represents the reconstruction loss, i.e.
how different the decoder output is from the initial input, and the second
part represents the reparametrisation loss. We used both the Kullback-Leibler
(KL) divergence (the reparametrisation loss used in the original paper
describing Variational Autoencoders) and Maximum Mean Discrepancy (MMD),
which has been shown by [SBT+19] to give better results and will be employed
as an alternative to the KL divergence. While KL restricts the latent space
embedding to reside within a centered isotropic multivariate Gaussian
distribution, MMD is based on the same principle, but uses the fact that two
distributions are identical if, and only if, their moments are identical,
with:
$MMD(P(x)||Q(x))=E_{p(x),p(x^{\prime})}[k(x,x^{\prime})]+E_{q(x),q(x^{\prime})}[k(x,x^{\prime})]-2E_{p(x),q(x^{\prime})}[k(x,x^{\prime})]$
(7)
where $k(x,x^{\prime})$ denotes the Gaussian kernel
$k(x,x^{\prime})=e^{-\frac{||x-x^{\prime}||^{2}}{2\sigma^{2}}}$.
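A small sketch of a biased sample estimate of equation (7) with the Gaussian kernel; the kernel bandwidth and the naive double loop are illustrative simplifications, not the thesis implementation:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def mmd(P, Q, sigma=1.0):
    """Biased sample estimate of MMD between samples P and Q (rows are points):
    mean k within P + mean k within Q - 2 * mean k across P and Q."""
    k = lambda A, B: np.mean([gaussian_kernel(a, b, sigma) for a in A for b in B])
    return k(P, P) + k(Q, Q) - 2 * k(P, Q)
```

Identical samples give an MMD of exactly zero, while well-separated samples give a strictly positive value.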
Figure 2: Although two points on the initial manifold might be close to each
other in the ambient space, the distance along the actual surface might be
larger, so a multivariate Gaussian representation of the data will ’flatten’
the manifold’s surface to better represent the similarity or disparity of the
points. This is an intuitive picture.
### 3 Convolutional layers
Datasets can take a plethora of shapes and forms, as particular data sources
are better described by different modalities, ranging from graphical and
textual to physiological signals and many other biomedical formats. Hence,
the way we infer outcomes or define functions to describe the data must be
adapted to the underlying characteristics of the dataset. For example, in
most visual data neighbouring points present similar features, and leveraging
this information helps in building a better performing model than analysing
all points in isolation. This builds the case for convolutional layers, which
can summarise and learn functions over neighbourhoods of points and are
better suited to image-based datasets than multilayer perceptrons. Visual
data generally has an $h\times w\times d$ shape, where $h$ is the height, $w$
the width, and $d$ the number of channels (e.g. colour images have three
channels, RGB).
Figure 3: Convolutional layer applied on a single-channel data point
Let $X\in\mathbf{R}^{h\times w}$ be an image and $K\in\mathbf{R}^{n\times m}$
a kernel matrix (e.g. $m=n=3$). The new image $X^{\prime}$ is given by:
$X_{ab}^{\prime}=\sum_{i=1}^{n}\sum_{j=1}^{m}K_{ij}X_{a+i-1,b+j-1}$
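The formula above is a "valid" cross-correlation; a direct NumPy sketch (0-indexed, so the $-1$ offsets disappear) is:

```python
import numpy as np

def conv2d(X, K):
    """Valid cross-correlation as in the formula above:
    X'_ab = sum_ij K_ij * X[a+i, b+j] in 0-indexed form."""
    h, w = X.shape
    n, m = K.shape
    out = np.zeros((h - n + 1, w - m + 1))
    for a in range(out.shape[0]):
        for b in range(out.shape[1]):
            out[a, b] = np.sum(K * X[a:a + n, b:b + m])
    return out
```

An $h\times w$ input with an $n\times m$ kernel yields an $(h-n+1)\times(w-m+1)$ output, matching the index ranges in the formula.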
### 4 Graph Neural Network
This section presents Graph Neural Networks (also called Graph Convolutional
Networks, depending on the source: [BG17], [BZSL13], [DBV16]), which are
learning techniques dealing with graph-structured data that build
intermediate feature representations, $\vec{x}^{\prime}_{i}$, for each node
$i$. What must be noted is that all the architectures mentioned can be
reformulated as instances of message-passing neural networks [KW16a].
#### 1 Graph Convolutional Layer
Convolutional layers were introduced first because most graph neural networks
adopt their convolutional element. Intuitively, when dealing with graph-like
data, the fact that nodes are neighbours should be significant for the way we
learn lower-space embeddings. In our graph-shaped data, filters are applied
to patches of neighbourhoods, just as in the CNN case. The filters we apply
need to be order-invariant, because when taking the immediate neighbourhood
of a node we cannot define a precise order of the neighbouring nodes.
Figure 4: Convolutional layer applied on a single-node
Let $\mathcal{G}=(\mathcal{V},\mathcal{E},X)$ be a graph where $\mathcal{V}$
is the vertex set, $\mathcal{E}$ is the set of edges, and $X$ is the feature
matrix, with row $i$ describing the features of vertex $i$ from
$\mathcal{V}$. From $\mathcal{E}$ we can build $A$, the adjacency matrix,
with the following rule: if $(i,j)\in\mathcal{E}$, then $A_{ij}=1$, else
$A_{ij}=0$. A simple way to aggregate is to multiply $X$, the node feature
matrix, with $A$:
$X^{\prime}=\sigma(AXW)$ (8)
where $W$ is a parametrized learnable linear transformation, shared by all
nodes, and $\sigma$ is a non-linearity, an activation function. A problem
with this exact form of the function is that after passing our inputs through
it, each node loses its own features, because $A_{ii}=0$, as
$(i,i)\notin\mathcal{E}$. A simple solution is to write
$A^{\prime}=A+I_{n}$, where $n=|\mathcal{V}|$. Now we have:
$X^{\prime}=\sigma(A^{\prime}XW)$ (9)
Because $A^{\prime}$ may modify the scale of the output features, a
normalisation is needed. So we define the degree matrix $D^{\prime}$ with
$D^{\prime}_{ii}=\sum_{j}A^{\prime}_{ij}$, giving
$X^{\prime}=\sigma(D^{\prime-1}A^{\prime}XW)$ (10)
Node-wise, the same equation can be rewritten as below, which resembles mean-
pooling from CNNs:
$\vec{x}^{\prime}_{i}=\sigma\left(\sum_{j\in
N_{i}}\frac{1}{|N_{i}|}W\vec{x}_{j}\right)$ (11)
By using symmetric-normalisation we get to the GCN update rule:
$X^{\prime}=\sigma(D^{\prime-\frac{1}{2}}A^{\prime}D^{\prime-\frac{1}{2}}XW)$
(12)
Node-wise, this corresponds to:
$\vec{x}^{\prime}_{i}=\sigma\left(\sum_{j\in
N_{i}}\frac{1}{\sqrt{|N_{i}||N_{j}|}}W\vec{x}_{j}\right)$ (13)
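The symmetric-normalised update of equation (12) can be sketched in NumPy as below; using ReLU for $\sigma$ and a dense adjacency matrix are illustrative assumptions:

```python
import numpy as np

def gcn_layer(X, A, W):
    """GCN update of equation (12): sigma(D'^-1/2 A' D'^-1/2 X W), sigma = ReLU."""
    A_prime = A + np.eye(A.shape[0])      # add self-loops: A' = A + I
    d = A_prime.sum(axis=1)               # degrees D'_ii = sum_j A'_ij
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_prime @ D_inv_sqrt @ X @ W, 0.0)
```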
#### 2 Variational Graph Autoencoder
Variational Graph Autoencoders and Graph Autoencoders were demonstrated by
[KW16b] to learn meaningful latent representation on a link prediction task on
popular citation network datasets such as Cora, Citeseer, and Pubmed.
Figure 5: Convolutional layer applied on a single-channel point
Let $\mathcal{G}=(\mathcal{V},\mathcal{E})$ be an undirected and unweighted
graph with $N=|\mathcal{V}|$ nodes. Let $\mathbf{A}$ be the adjacency matrix
of $\mathcal{G}$. Node features are summarized in the matrix
$X\in\mathbf{R}^{N\times F}$, and $\mathbf{D}$ is the degree matrix. The
authors further introduce the stochastic latent variables $\mathbf{z}_{i}$,
summarized in $Z\in\mathbf{R}^{N\times F^{\prime}}$.
Similarly to the Variational Autoencoder, a mean vector and a log-variance
vector are produced by two encoder functions, which can be composed of Graph
Convolutional Layers; then, by using a sampling function, the stochastic
latent variables are produced. The loss function used for this learning task
is exactly the same as the one used for a variational autoencoder, with the
difference that for the reconstruction the authors use the inner product of
the latent variables and compare the output with the input adjacency matrix.
$\mathcal{L}=\mathbf{E}_{q(Z|X,A)}[\log p(A|Z)]-KL(q(Z|X,A)\,||\,p(Z))$ (14)
As $Z$ has dimension $N\times F^{\prime}$, we notice that $ZZ^{T}$ will have
size $N\times N$, so it is possible to apply a loss function between $ZZ^{T}$
and the adjacency matrix.
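The inner-product reconstruction can be sketched as follows; applying a sigmoid to $ZZ^{T}$ to obtain edge probabilities is the standard reading of the parameter-free decoder:

```python
import numpy as np

def inner_product_decoder(Z):
    """Reconstruct edge probabilities from latent variables:
    sigmoid(Z @ Z.T), an N x N matrix comparable to the adjacency matrix."""
    logits = Z @ Z.T
    return 1.0 / (1.0 + np.exp(-logits))
```

Note that this decoder has no trainable parameters, which is exactly the property discussed later when integrating two graphs.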
#### 3 Deep Graph Infomax
Deep Graph Infomax was first described by Velickovic et al. in [VFH+19]. The
approach is based on maximizing local mutual information, and was inspired by
Deep Infomax [HFLM+18].
For a generic graph-based unsupervised Machine Learning task, we will use the
following notation. Let $X=\\{\vec{x_{1}},\vec{x_{2}},...,\vec{x_{N}}\\}$ be
the node feature set, where $N$ is the number of nodes in our graph. Let
$A\in\mathbf{R^{N\times N}}$ be the adjacency matrix, with $A_{i,j}=1$ if and
only if there is an edge between nodes $i$ and $j$. The objective is to learn
an encoder, $\epsilon:\mathbf{R^{N\times F}}\times\mathbf{R^{N\times
N}}\to\mathbf{R^{N\times F^{\prime}}}$, such that
$\epsilon(X,A)=H=\\{\vec{h_{1}},\vec{h_{2}},...,\vec{h_{N}}\\}$. The
representations can later be used for a classification task, and this will
also represent a way we can evaluate the quality of our embeddings.
Figure 6: High-level overview of Deep Graph Infomax
In order to obtain a graph-level summary vector, $\vec{s}$, the authors
leverage a readout function $R:\mathbf{R}^{N\times
F^{\prime}}\to\mathbf{R}^{F^{\prime}}$ to summarise the obtained patch
representations into a graph-level representation, $\vec{s}=R(\epsilon(X,A))$.
For maximizing the local mutual information, a discriminator
$\mathit{D}:\mathbf{R}^{F^{\prime}}\times\mathbf{R}^{F^{\prime}}\to\mathbf{R}$
is deployed. $\mathit{D}(\vec{h}_{i},\vec{s})$ should score higher if the
patch representation is contained in the summary.
The negative samples for $\mathit{D}$ are computed with a corruption function
$\mathit{C}:\mathbf{R}^{N\times F}\times\mathbf{R}^{N\times
N}\to\mathbf{R}^{M\times F}\times\mathbf{R}^{N\times N}$. The choice of the
corruption function governs the specific kind of information that will be
maximized. In my case, I have solely used a simple shuffling of the node
features, leaving the edges in place:
$(\tilde{X},\tilde{A})=\mathit{C}(X,A)=(X_{shuffled},A)$ for this precise
case.
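The shuffling corruption described above can be sketched as:

```python
import numpy as np

def corruption(X, A, seed=0):
    """Row-shuffle the feature matrix and keep the edges in place:
    (X_tilde, A_tilde) = (X_shuffled, A)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(X.shape[0])
    return X[perm], A
```

The corrupted graph has the same topology but mismatched node features, which is what gives the discriminator its negative samples.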
The authors followed the original paper, which was not concerned with graph-
shaped data [HFLM+18], and use a noise-contrastive objective with a standard
binary cross-entropy loss between the samples from the joint and the product
of the marginals.
$\mathcal{L}=\frac{1}{N+M}\left(\sum_{i=1}^{N}\mathbf{E}_{(X,A)}\left[\log\mathit{D}\left(\vec{h}_{i},\vec{s}\right)\right]+\sum_{j=1}^{M}\mathbf{E}_{(\tilde{X},\tilde{A})}\left[\log\left(1-\mathit{D}\left(\tilde{\vec{h}}_{j},\vec{s}\right)\right)\right]\right)$
(15)
## Chapter 3 Recreating Two State-of-the-Art Models
### 1 Description of Models
The authors in “Variational Autoencoders for Cancer Data Integration: Design
Principles and Computational Practice" [SBT+19] use several variational
autoencoder architectures to integrate the METABRIC subsets containing CNA,
mRNA and clinical data; the approaches in the paper are evaluated by combining
pairs of modalities. We reproduce two models from this work, specifically CNC-
VAE (Concatenation Variational Autoencoder) and H-VAE (Hierarchical
Variational Autoencoder), both based on the VAE architecture. The difference
between the two designs is where in the model the data integration is
performed: CNC-VAE concatenates the input features at the earliest stage,
while H-VAE uses a hierarchical ordering, i.e. lower-latent space
representations are learned for each modality separately by a lower-level
VAE, after which a higher-level VAE is applied to the concatenation of the
previously learned representations.
Generally, the integrative models obtain better results than the raw data,
with CNC-VAE obtaining accuracies as high as 82% on PAM50 with an SVM
classifier. X-VAE obtains good results on DR (77%) and IC10 (85%), while
H-VAE obtains accuracies of 77% on DR and 82% on PAM50.
Figure 1: CNC-VAE and H-VAE. The two feature matrices are represented with red
and green.
For CNC-VAE, the feature matrices of the inputs are concatenated and then fed
to a Variational Autoencoder. The model is employed by the authors as a
benchmark and as a proof of principle for learning a homogeneous
representation from heterogeneous data sources. Even though early
concatenation is prone to cause more noise, and modalities with lower-
dimensional feature spaces will carry a weaker weight compared to those with
higher-dimensional feature vectors, this approach obtains results (up to 84%)
competitive with the other architectures. While the complexity of this simple
architecture lies in the highly domain-specific data preprocessing, utilising
a single objective function over combined heterogeneous inputs might not be
ideal in other settings.
Unlike CNC-VAE, H-VAE learns lower-latent representations for all
heterogeneous sources independently, and then concatenates the resulting
homogeneous representations. The learning process takes place by training a
lower-level Variational Autoencoder on each separate modality. After training
these autoencoders, we concatenate the resulting lower-space embeddings for
all modalities, and then train a higher-level Variational Autoencoder to learn
the summarised embeddings of all intermediate representations. While for CNC-
VAE we need to train only one network, for H-VAE we are going to train N + 1
neural networks, where N is the number of modalities.
Both models use Batch Normalization (light violet) and Dropout(0.2) (dark
violet) layers, which are marked in Figure 1. Dense layers use the ELU
activation function, with the exception of the last dense layer, which can
instead use a sigmoid activation where appropriate (when the integration task
is for CNA+Clin, i.e. categorical data). Where applicable, the reconstruction
loss is Binary Cross Entropy if the data is categorical, and Mean Squared
Error if the data is continuous. For the reparametrisation loss I have chosen
MMD, because it gave significantly better results for the authors. Among the
hyper-parameters we have the dense layer size $ds$, the latent layer size
$ls$, and $\beta$, the weight balancing the reconstruction loss against the
reparametrisation loss.
$\mathcal{L}=\mathcal{L}_{Reconstruction}+\beta\times\mathcal{L}_{Reparametrisation}$
As an optimizer, all models use Adam with a learning rate of $0.001$.
### 2 Evaluation and results
The environment in which I chose to reproduce CNC-VAE and H-VAE from [SBT+19]
is PyTorch [PGM+19], the original one being TensorFlow with Keras [AAB+16]. I
encountered a few challenges in the process of translating the models, since
some of the libraries in TensorFlow with Keras were outdated. Also, in the
original code, the correct version of the H-VAE model was located in a
different folder than the other models.
The method of evaluation used is 5-fold cross-validation. The folds are
stratified, ensuring that the distribution of labels over the folds is
uniform.
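A minimal sketch of stratified fold assignment, assuming a simple round-robin assignment within each label class (the actual implementation may differ, e.g. using a library routine):

```python
import numpy as np

def stratified_folds(labels, n_folds=5, seed=0):
    """Assign each sample to a fold so that label proportions stay
    roughly uniform across folds."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    folds = np.empty(len(labels), dtype=int)
    for label in np.unique(labels):
        idx = np.flatnonzero(labels == label)
        rng.shuffle(idx)
        folds[idx] = np.arange(len(idx)) % n_folds  # round-robin per class
    return folds
```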
#### 1 Hyper-parameter analysis
To evaluate the quality of the reproduced models we carry out two
experiments. The first is performed to understand which hyper-parameter
settings would be optimal for each modality and each label, which is also the
testing approach followed by the recreated paper [SBT+19]. As we wanted to
avoid repetition of results, the hyper-parameter search was done on Clin+CNA
and the DR (distance relapse) label. As noted by the authors, the Naive Bayes
classifier does not have any hyper-parameters, so it is a good choice as the
classifier used on top of our lower-space representations.
For my hyper-parameter setting I have picked $ds\in\\{128,256,512\\}$,
$ls\in\\{32,64\\}$, and $\beta\in\\{1,25,50,100\\}$. The results can be seen
in Figures 2 and 3.
Figure 2: CNC-VAE performance with different hyper parameters settings. Figure
3: H-VAE performance with different hyper parameters settings.
#### 2 Best model assessment
For the final experiment, we picked fixed values for $ls$, $ds$, and $\beta$,
and compared the accuracies of three classifiers: Naive Bayes, Support Vector
Machine, and Random Forest, applied on the lower-latent space representations
produced by H-VAE and CNC-VAE. Because the results were generally good for
$ls=32$, $ds=128$, $\beta=25$, I ran both my models with these parameters and
obtained the results in Table 1. Generally, the results obtained in the
original paper are better, but it must be noted that the aim of the authors
was to fine-tune their models, while my goal was to show that I am able to
reproduce the models while obtaining results competitive with the original
ones. Another likely factor is the difference in learning time for H-VAE,
which uses three different autoencoder networks. In our experiments, we
allowed 150 epochs for each network.
| Label | Classifier | CNC-VAE: CNA+mRNA | CNC-VAE: Clin+mRNA | CNC-VAE: Clin+CNA | H-VAE: CNA+mRNA | H-VAE: Clin+mRNA | H-VAE: Clin+CNA |
|---|---|---|---|---|---|---|---|
| ER | NB | 90 | 92 | 85 | 87 | 89 | 81 |
| | SVM | 93 | 94 | 88 | 92 | 92 | 85 |
| | RF | 88 | 90 | 83 | 87 | 88 | 80 |
| DR | NB | 66 | 69 | 70 | 67 | 68 | 70 |
| | SVM | 68 | 71 | 70 | 62 | 69 | 72 |
| | RF | 67 | 69 | 56 | 67 | 69 | 69 |
| PAM50 | NB | 63 | 67 | 55 | 60 | 65 | 51 |
| | SVM | 68 | 73 | 59 | 67 | 72 | 54 |
| | RF | 62 | 67 | 54 | 57 | 58 | 47 |
| IC | NB | 68 | 74 | 59 | 66 | 62 | 53 |
| | SVM | 75 | 79 | 63 | 73 | 73 | 56 |
| | RF | 63 | 64 | 55 | 58 | 53 | 45 |
Table 1: Test results for classification with Naive Bayes (NB), Support
Vector Machine (SVM), and Random Forest (RF) of lower-latent representations
produced by CNC-VAE and H-VAE, in percentages (%).
Finally, we discuss the contrast between the results we obtained and those in
the original paper [SBT+19]. First, although we carried out our own hyper-
parameter search on the same architectures, we arrived at a different
setting, which obtained better results on a small subset of modality and
label class combinations but generally performed worse. Secondly, the
implementation of our project was written in PyTorch, while the underlying
Machine Learning framework leveraged in the reference paper was TensorFlow
with Keras. Even though the two frameworks largely overlap in supported
functionality, several methods exist in TensorFlow but not in PyTorch, and
custom implementations often have a minor impact on model training. One of
the main differences is training in batches, which comes out of the box with
TensorFlow, but had to be manually implemented in PyTorch in our experiments.
Another potential issue is the implementation of the Maximum Mean Discrepancy
loss function in PyTorch, because the variants found in other publications
differed from the one written in TensorFlow, which was not directly
transferable to PyTorch.
For fixed hyper-parameters, there is a total of $5(folds)\times
3(modalities)\times 4(labels)=60$ models to train. For the hyper-parameter
sets that the reference paper proposes and the two models, we would need to
train $60\times 108=6480$ different models. A model trains in approximately 2
minutes, which would amount to roughly 300 hours, or 12 whole days, to find
the best hyper-parameter settings for the two models. Thus, we trained on a
single fold only, reducing the training time by a factor of five. For the
best hyper-parameter setting we found, CNC-VAE clearly out-performs H-VAE.
## Chapter 4 Graph Neural Networks for Cancer Data Integration
Figure 1: This chapter is split into three modules: constructing graphs,
introducing the novel unsupervised integrative Graph Neural Networks, and
evaluating the quality of the lower-latent space representations.
This chapter presents the approaches used to generate lower-dimensional
representations for multiple integrated modalities. We will first introduce
the graph construction algorithms along with their advantages and downsides,
followed by the proposed integrative unsupervised models. Finally, we
evaluate the quality of the obtained lower-space representations.
The data sets employed for graph construction are METABRIC and the synthetic
dataset described earlier. The synthetic data will be used to discover the
best settings for the proposed Graph Neural Networks and to demonstrate the
functionality of the proposed models. Furthermore, METABRIC will be leveraged
in hyper-parameter fine-tuning for the graph construction modules, thanks to
the varied distribution of the four classes of labels (ER, DR, PAM, IC).
Finally, we will present the best results obtained by the proposed models for
each class of labels and then discuss conclusions in the final chapter.
### 1 Graph Construction Algorithms
Graph Neural Networks require the input data to conform to a graph shape,
i.e. to have a feature matrix, $X$, containing all node information, and an
adjacency matrix, $A$. Since METABRIC does not store relationships between
patients, we need to define a module that builds graphs from the feature
matrix, $X$. The quality of the resulting graph will influence the final
results. We will describe what graph quality is in a quantitative manner over
the next sections, but to give the reader an initial intuition, the following
question can be posed: “Should nodes with the same or different labels be
connected by edges?"
#### 1 From feature matrix to graph
Assume a feature matrix $X\in\mathbf{R}^{N\times F}$, for which $N$ is the
number of objects and $F$ is the number of features defining the space
coordinates of the samples. To transition from a static data set to a graph,
one needs to “draw" edges between the objects, which will in turn be
visualised as nodes. One naive but working solution is to link data points if
they are “close" to each other, where the metric describing closeness is the
Euclidean distance. Assume $a$ and $b$ are the coordinate vectors of two
points $A$ and $B$ from $X$:
$dist_{Euclidean}(A,B)=\sqrt{\sum_{i=1}^{F}(a_{i}-b_{i})^{2}}$
In specialised literature, the most popular ways to connect points in space
that rely on Euclidean distance are:
* •
Use a radius, $r$, as a threshold, and trace edges between nodes if the
Euclidean distance between them is lower than $r$.
* •
Use the K-Nearest Neighbours (KNN) method, which for a node A will return the
$k$ nearest neighbours based on the Euclidean distance between nodes.
* •
Employ a combination of the two approaches presented above for different
values of $k$ and $r$.
Figure 2: Graphic representation for choosing a threshold, $r$, for the radius
method, or $k$ for KNN and the results of applying the two methods in
separation
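The two construction rules (and their combination, used later in this chapter) can be sketched as below; the dense pairwise-distance matrix and the tie-breaking order are illustrative simplifications:

```python
import numpy as np

def build_edges(X, k=2, r=1.0):
    """Connect nodes i and j if j is among i's k nearest neighbours (KNN rule)
    OR dist(i, j) <= r (radius rule). Returns a symmetric adjacency matrix."""
    N = X.shape[0]
    # Pairwise Euclidean distances.
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    A = np.zeros((N, N), dtype=int)
    for i in range(N):
        order = np.argsort(D[i])
        neighbours = [j for j in order if j != i][:k]   # k nearest, excluding self
        for j in neighbours:
            A[i, j] = A[j, i] = 1
        within = np.flatnonzero((D[i] <= r) & (np.arange(N) != i))
        for j in within:
            A[i, j] = A[j, i] = 1
    return A
```

The KNN rule guarantees no node is isolated, while the radius rule connects whole clusters of nearby points regardless of their size.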
Furthermore, to objectively assess the quality of the graphs presented in the
next sections, we will introduce a metric cited by multiple sources in the
literature, namely homophily. Intuitively, it measures the fraction of edges
that connect nodes with the same label.
#### 2 Quantifying the quality of a graph
Homophily is a metric that has been defined in several ways by different
papers: edge homophily [ZYZ+20], node homophily [PWC+20], and edge-insensitive
homophily [LHL+21]. In the context of this project we will refer mainly to
edge homophily.
###### Definition 1
(Edge Homophily) Given a graph $\mathcal{G}=\\{\mathcal{V},\mathcal{E}\\}$
and a node label vector $y$, the edge homophily is defined as the ratio of
edges that connect nodes with the same label. Formally, it is defined as:
$h(\mathcal{G},\\{y_{i},i\in\mathcal{V}\\})=\frac{1}{|\mathcal{E}|}\sum_{(j,k)\in\mathcal{E}}\mathbf{1}(y_{j}=y_{k})$
(1)
where $|\mathcal{E}|$ is the number of edges in the graph and
$\mathbf{1}(y_{j}=y_{k})$ returns $1$ if nodes $j$ and $k$ have the same
label, and $0$ if they do not.
A graph is typically considered homophilous when $h(\cdot)$ is large
(typically, $0.5<h(\cdot)<1$), given a suitable label context. Graphs with low
edge homophily ratio are considered to be heterophilous.
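Definition 1 translates directly into code; a minimal sketch, assuming edges are given as a list of node-index pairs:

```python
def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share a label (Definition 1)."""
    if not edges:
        return 0.0
    same = sum(1 for j, k in edges if labels[j] == labels[k])
    return same / len(edges)
```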
Figure 3: The left graph has a higher homophily than the right one, where the
’yellow’ and ’purple’ colours represent the labels of the nodes
The works in [AEHPK+19], [CPLM21], [CJP+18] argue that strong homophily is
needed in order to achieve good performances. The next subsection presents an
analysis of the homophily levels in different graph construction settings.
#### 3 Graphs built from METABRIC modalities
In this section, we analyse how different values of $r$ and $k$ influence the
overall homophily levels for each modality and all label classes. As
previously mentioned, homophily measures how labels are distributed over
neighbouring nodes (linked through an edge); hence, measuring these levels is
helpful because it creates an expectation for the lower-dimensional
embeddings produced by the GNN. For example, if related nodes belong to
different labels, the lower-latent space representations will not be very
accurate for those nodes.
The next sub-sections present the obtained homophily levels over the three
modalities for the four classes of labels: ER, DR, PAM and IC. Each patient
can be described in terms of ER+ or ER-, positive or negative DR, 5 sub-types
of breast cancer (PAM), and 10 clusters identified through research (IC).
##### Homophily levels for each class of labels on: Clinical data, and multi-
omic data (mRNA, CNA), by using K Nearest Neighbours
By looking at the tables below, we can learn the following: in the KNN case,
the homophily levels do not vary a lot; in fact they remain at the same
levels as the number of edges increases. In the case of the IC class of
labels, notice that the results coming from CNA (40%) and Clin (17%) are very
low.
Table 1: Homophily of graphs built on Clin
Table 2: Homophily of graphs built on mRNA
Table 3: Homophily of graphs built on CNA
##### Homophily levels for each class of labels on: Clinical data, and multi-
omic data (mRNA, CNA), by using Radius R
Table 4: Homophily of graphs built on Clin
Table 5: Homophily of graphs built on mRNA
Table 6: Homophily of graphs built on CNA
By looking at the tables, we can learn the following. In the radius case,
homophily levels vary a lot, sometimes by as much as 40% (in the mRNA
$H_{IC}$ case). This means that our choice of $r$ matters a lot. Another
aspect that can be noticed is that to obtain good levels of homophily (above
70%), most of the time the graph conformation will have many isolated nodes,
which can be disastrous for graph neural networks.
Now, with some intuition built on how the graphs behave, we reveal to the
reader that a combination of the two methods will be used in order to get the
best of both. By using KNN we ensure that no nodes are isolated, and by using
the radius method we ensure that clusters of nodes close in space will be
related through edges regardless of their number (a limitation of KNN).
The next section will present the four proposed models that attempt
integration on graph-structured data. While the first two, CNC-VGAE and
CNC-DGI, attempt early integration (by direct concatenation of the features),
2G-DGI and Hetero-DGI attempt mixed integration, concatenating their lower-
latent features in the middle of the learning phase.
### 2 Graph Neural Network Models For Data Integration
We researched unsupervised learning approaches and models that could aid in
the integration of two feature matrices that describe the same items in
different ways. A first strategy is to concatenate the two feature matrices
and construct a graph on top of the result, then apply either a Variational
Graph Autoencoder or Deep Graph Infomax. A second technique is to integrate
two graphs to which we apply GCN layers, then use a Dense layer to ’filter’
the two concatenated feature matrices during the integration phase. A third
option is to create a hetero graph and concatenate the upper-layer and
bottom-layer features at some point. Both the second and third methods imply,
at some point, the use of Deep Graph Infomax.
We have selected the Deep Graph Infomax architecture to integrate two graphs
or a heterogeneous graph, because its readout and discrimination steps take
only latent feature vectors as input, with no adjacency matrix. The adjacency
matrix can be very tricky to work with, and we give two scenarios to
illustrate the point. Recall that applying a GCN layer to a set of data points
requires both their feature matrix and their adjacency matrix.
* •
Consider applying two GCN layers to two graphs in order to integrate them. If
we then wish to concatenate the resulting feature matrices and apply another
GCN layer, which adjacency matrix should we keep: the one from the first graph
or the one from the second? The two graphs obviously have different adjacency
matrices.
* •
Another problem, specific to VGAE, is that even if the lower-latent variables
incorporate the information from two graphs, reconstruction via the inner
product can only build one adjacency matrix, since the inner product of the
lower-latent variables has no parameters to train. Alternatively, two layers
could be added before generating the two adjacency matrices, which would then
be rebuilt with the inner product; however, in the original paper the
adjacency matrix is built directly from the latent space with a parameter-free
function.
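The parameter-free decoder in question, $\sigma(ZZ^{T})$ from the original VGAE paper [KW16b], can be sketched in numpy; the embedding values below are toy examples chosen to show the behaviour:

```python
import numpy as np

def inner_product_decoder(Z: np.ndarray) -> np.ndarray:
    """Parameter-free VGAE decoder: reconstructed edge probabilities sigma(Z Z^T)."""
    logits = Z @ Z.T
    return 1.0 / (1.0 + np.exp(-logits))

# Two nodes with similar embeddings and one dissimilar node.
Z = np.array([[1.0, 0.0], [1.0, 0.0], [-1.0, 0.0]])
A_hat = inner_product_decoder(Z)
# A_hat[0, 1] is high (similar pair); A_hat[0, 2] is low (dissimilar pair).
```

Since the decoder has no trainable parameters, only one adjacency matrix can ever be produced from a given latent space, which is exactly the limitation described above.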
#### 1 Notation and background knowledge
Let $X_{1}\in\mathbf{R}^{N\times F_{1}}$ and $X_{2}\in\mathbf{R}^{N\times F_{2}}$ be the two modalities
we want to integrate. Let $\mathcal{G}:\mathbf{R}^{N\times F_{i}}\to\mathbf{R}^{N\times
F_{i}}\times\mathbf{R}^{|\mathcal{E}_{i}|\times 2}\times\mathbf{R}^{|\mathcal{E}_{i}|}$ be the
graph-building map, which returns the feature matrix together with the edge set
$\mathcal{E}_{i}$ and a list of their attributes.
Each architecture has an $Encoder$ component, which returns the lower latent
space representation; with $ls$ denoting the dimension of the lower latent
space, it has the following shape.
A graph encoder in this case is a function
$Encoder:\mathbf{R}^{N\times F}\times\mathbf{R}^{|\mathcal{E}|\times 2}\times
\mathbf{R}^{|\mathcal{E}|}\to\mathbf{R}^{N\times ls}$
or
$Encoder:2\times\mathbf{R}^{N\times
F_{i}}\times\mathbf{R}^{|\mathcal{E}_{i}|\times 2}\times
\mathbf{R}^{|\mathcal{E}_{i}|}\to\mathbf{R}^{N\times ls}$
for the Two Graphs Deep Infomax, which will be presented in the next sections.
The encoder will be an ensemble of graph convolutional layers plus a special
integrative layer, described case by case below. Even though there are many
available choices of graph convolutional layer (ChebConv [DBV16], SAGEConv
[HYL17], GATConv [VCC+17], GCNConv, etc.), GCNConv has been chosen because it
supports edge attributes. The number of layers is a parameter, studied in the
upcoming sections by varying it from 1 to 3; the literature suggests that 3
layers are usually ideal for encoding tasks [VFH+19].
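The propagation rule underlying a single GCNConv layer [KW16a] can be sketched in numpy; this is an illustrative sketch of the published rule, not the project's PyTorch Geometric code, and the toy graph is hypothetical:

```python
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One GCN propagation step: ReLU(D^{-1/2} (A+I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # symmetric degree normalisation
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)          # ReLU activation

# Two connected nodes with one-hot features and an identity weight matrix.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.eye(2)
H1 = gcn_layer(A, H, np.eye(2))  # each node now mixes its own and its neighbour's features
```

Stacking one to three such layers gives the encoder depths studied in the upcoming sections.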
For an edge of length $s$, its feature value will be $e^{-s}$. This is a
convenient mapping because edges that unite close neighbouring points get
values between $1$ and $0.5$, while edges uniting far-apart nodes get values
below $0.1$. The same mapping for edge features is used in the Similarity
Network Fusion paper [WMD+14].
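The mapping above is one line of numpy; the sample edge lengths are illustrative:

```python
import numpy as np

def edge_feature(s) -> np.ndarray:
    """Edge feature e^{-s} for edges of length s, as in SNF [WMD+14]."""
    return np.exp(-np.asarray(s, dtype=float))

w = edge_feature([0.1, 0.5, 3.0])
# Close pairs map near 1; distant pairs decay below 0.1.
```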
Figure 4: $\mathbf{\mathcal{U}}$: Special integration layer
The special integration layer from Figure 4 will be defined separately for
each of the two models that use it (2G-DGI and Hetero-DGI), and works
differently in each. To simplify notation, we will not mention edge attributes
in the following subsections, but they are used in the training of the models.
#### 2 Concatenation Of Features: CNC-DGI and CNC-VGAE
This method of integrating the two datasets is straightforward. Given two
feature matrices $X_{1}\in\mathbf{R}^{N\times F_{1}}$ and
$X_{2}\in\mathbf{R}^{N\times F_{2}}$, their concatenation yields a matrix
$X_{12}\in\mathbf{R}^{N\times(F_{1}+F_{2})}$. On top of this we apply our
graph-building method, and then pass the resulting graph through the chosen
unsupervised method for obtaining lower-space embeddings.
Figure 5: CNC-DGI: Apply Deep Graph Infomax on top of the graph built on
concatenated inputs. Figure 6: CNC-VGAE: Apply Variational Graph Autoencoder
on top of the graph built on concatenated inputs
After training, the $\mathit{Encoder}$ in both cases returns lower-latent
space embeddings of shape $\mathbf{R}^{N\times\mathit{ls}}$, where
$\mathit{ls}$ is the dimension of the latent space. For Figures 5 and 6 the
encoder returns $H\in\mathbf{R}^{N\times\mathit{ls}}$.
#### 3 Two Graphs: 2G-DGI
Take $X_{1}\in\mathbf{R}^{N\times F_{1}}$ and $X_{2}\in\mathbf{R}^{N\times
F_{2}}$ and build two graphs, $\mathcal{G}_{1}=(X_{1},\mathit{E}_{1})$ and
$\mathcal{G}_{2}=(X_{2},\mathit{E}_{2})$. The maps $GCN_{1}:\mathbf{R}^{N\times F_{1}}\to
\mathbf{R}^{N\times\mathit{ls}}$ and $GCN_{2}:\mathbf{R}^{N\times F_{2}}\to
\mathbf{R}^{N\times\mathit{ls}}$ are different for $\mathcal{G}_{1}$ and
$\mathcal{G}_{2}$ because their nodes have different feature sizes.
Here the encoder is $\mathit{Encoder}:\mathbf{R}^{N\times
F_{1}}\times\mathbf{R}^{N\times F_{2}}\to\mathbf{R}^{N\times\mathit{ls}}$, with
$\mathit{Encoder}(X_{1},\mathit{E}_{1},X_{2},\mathit{E}_{2})=\mathcal{U}(GCN_{1}(X_{1},\mathit{E}_{1}),GCN_{2}(X_{2},\mathit{E}_{2}))$
where $\mathcal{U}$ can, for example, take the following forms:
$\mathcal{U}_{Dense}(GCN_{1}(X_{1},\mathit{E}_{1}),GCN_{2}(X_{2},\mathit{E}_{2}))=$
$=Dense^{(2\times\mathit{ls}\to\mathit{ls})}((GCN_{1}(X_{1},\mathit{E}_{1})||GCN_{2}(X_{2},\mathit{E}_{2})))=$
$=Dense^{(2\times\mathit{ls}\to\mathit{ls})}(H_{1}||H_{2})=H$ (2)
$\mathcal{U}_{avg}(GCN_{1}(X_{1},\mathit{E}_{1}),GCN_{2}(X_{2},\mathit{E}_{2}))=\frac{GCN_{1}(X_{1},\mathit{E}_{1})+GCN_{2}(X_{2},\mathit{E}_{2})}{2}=\frac{H_{1}+H_{2}}{2}=H$
(3)
One observation worth making is that while $\mathcal{U}_{Dense}$ has
parameters, $\mathcal{U}_{Avg}$ does not. We propose these two different
special integration layers so that we can learn which works better in the DGI
context.
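The two integration layers from equations (2) and (3) can be sketched in numpy; the random embeddings and weight matrix are placeholders for the trained quantities:

```python
import numpy as np

def u_avg(H1: np.ndarray, H2: np.ndarray) -> np.ndarray:
    """Parameter-free integration layer: element-wise average of both embeddings."""
    return (H1 + H2) / 2.0

def u_dense(H1: np.ndarray, H2: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Parametric integration layer: concatenation followed by a dense map 2*ls -> ls."""
    return np.concatenate([H1, H2], axis=1) @ W

rng = np.random.default_rng(0)
N, ls = 4, 3
H1 = rng.normal(size=(N, ls))      # embedding of graph 1
H2 = rng.normal(size=(N, ls))      # embedding of graph 2
W = rng.normal(size=(2 * ls, ls))  # trainable in the real model, random here

H_avg = u_avg(H1, H2)              # shape (N, ls), no parameters
H_dense = u_dense(H1, H2, W)       # shape (N, ls), parameters in W
```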
Figure 7: 2G-DGI: Two graphs integration
#### 4 Heterogeneous Graph: Hetero-DGI
In [WJS+19] heterogeneous graphs have the following definition:
###### Definition 2
A heterogeneous graph denoted as $\mathcal{G}=(\mathcal{V},\mathcal{E})$ ,
consists of an object set $\mathcal{V}$ and a link set $\mathcal{E}$. A
heterogeneous graph is also associated with a node type function
$\theta:\mathcal{V}\to\mathcal{A}$ and a link type mapping function
$\omega:\mathcal{E}\to\mathcal{R}$. $\mathcal{A}$ and $\mathcal{R}$ denote two
sets of predefined object types and link types, where
$|\mathcal{A}|+|\mathcal{R}|>2$.
Take $X_{1}\in\mathbf{R}^{N\times F_{1}}$ and $X_{2}\in\mathbf{R}^{N\times
F_{2}}$ and build two graphs, $\mathcal{G}_{1}=(X_{1},\mathit{E}_{1})$ and
$\mathcal{G}_{2}=(X_{2},\mathit{E}_{2})$. In order to obtain our heterogeneous
graph, we add edges between the nodes that describe the same objects, and say
these edges belong to $\mathcal{E}_{3}$. The two node types are then defined
by the graph each node originates from; an edge $\mathit{e}$ is of type $i$ if
it belongs to $\mathcal{E}_{i}$.
Here $\mathit{Encoder}:\mathbf{R}^{2N\times
F}\to\mathbf{R}^{N\times\mathit{ls}}$ and
$\mathit{Encoder}(X,\mathit{E})=\mathcal{U}(GCN(X,\mathit{E}))$ (4)
Here $\mathcal{U}$ must do more than just concatenate: we have
$\mathcal{U}:\mathbf{R}^{2N\times ls}\to\mathbf{R}^{N\times ls}$. So we must
define a split function $\mathit{S}:\mathbf{R}^{2N\times
ls}\to\mathbf{R}^{N\times ls}\times\mathbf{R}^{N\times ls}$ that splits the
feature matrix along the node axis. Next we define $\mathcal{U}_{Avg}$ and
$\mathcal{U}_{Dense}$:
$\mathcal{U}_{Avg}(GCN(X,\mathit{E}))=\frac{{\sum(S(GCN(X,\mathit{E})))}}{2}=\frac{H_{1}+H_{2}}{2}=H$
(5)
$\mathcal{U}_{Dense}(GCN(X,\mathit{E}))=Dense^{(2\times\mathit{ls}\to\mathit{ls})}(||(S(GCN(X,\mathit{E}))))=$
$=Dense^{(2\times\mathit{ls}\to\mathit{ls})}(H_{1}||H_{2})=H$ (6)
Just as in the previous case, we take two variants of $\mathcal{U}$: a
parametric one and a fixed one; both are compared in the evaluation section.
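The split function $S$ and the $\mathcal{U}_{Avg}$ variant from equation (5) can be sketched in numpy; the stacked feature matrix below is a toy example, not project data:

```python
import numpy as np

def split(H: np.ndarray):
    """S: split the 2N x ls feature matrix back into the two node types."""
    n = H.shape[0] // 2
    return H[:n], H[n:]

def u_avg(H: np.ndarray) -> np.ndarray:
    """Hetero-DGI U_Avg: split along the node axis, then average the two halves."""
    H1, H2 = split(H)
    return (H1 + H2) / 2.0

# Toy case: nodes of the first type have all-ones features, of the second all-threes.
H = np.vstack([np.ones((3, 2)), 3.0 * np.ones((3, 2))])
print(u_avg(H))  # a 3 x 2 matrix of 2.0s
```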
Figure 8: Build graph on concatenated features
### 3 Evaluation and results
In order to evaluate the quality of the proposed models, we proceed with two
testing methods. Evaluating the models directly on the METABRIC dataset has
one big problem: for a single hyper-parameter setting, 60 models need to be
trained in order to evaluate performance correctly. A synthetic dataset, for
which only one model needs to be trained to assess the quality of a
hyper-parameter setting, is better suited, since it has only one label class
and only one combination of modalities to integrate.
Figure 9: Evaluation pipeline for this project
The supervised models in use will be: Naive Bayes [M+06], Support Vector
Machine [Nob06] and Random Forest [Qi12].
### 4 Evaluation on Synthetic-Data
In order to test these models on synthetic data, we split each modality of the
synthetic dataset into training and testing sets, with 75% of the samples for
training and 25% for testing. While five-fold cross-validation would have been
more suitable, it would have multiplied the number of models tested under
different hyper-parameter settings by $5$, because each model must be
re-trained when the fold changes. The next subsections present the evaluation
of CNC-DGI, CNC-VGAE, 2G-DGI and Hetero-DGI for various parameters.
##### CNC-DGI
Figure 10: Accuracies of lower-latent representation obtained from CNC-DGI
For CNC-DGI the parameters chosen are the depth of the Encoder, i.e. how many
convolution layers are used, $conv_{nr}=\{1,2,3\}$, in the context of five
homophily levels $homophily_{levels}=\{0.51,0.61,0.73,0.87,0.98\}$.
##### CNC-VGAE
Figure 11: Accuracies of lower-latent representation obtained from CNC-VGAE
For CNC-VGAE the parameters chosen are the depth of the Encoder,
$conv_{nr}=\{1,2,3\}$, and the reparametrisation loss function, which can be
either the MMD or the KL divergence loss, in the context of five homophily
levels. From this diagram we can see that the KL loss is more consistent as
the homophily level and the number of layers increase. The best configuration
for CNC-VGAE is 3 GCN layers with the KL reparametrisation loss, even though
[SBT+19] use MMD for the reparametrisation loss. At the same time, METABRIC
homophily levels will be quite small, perhaps below 50%, and the diagram shows
that MMD gives better accuracies than KL at smaller homophily levels.
##### 2G-DGI
For 2G-DGI, the parameters chosen were the number of convolution layers used
(1 to 3) and the shape of the concatenation layer (either a dense layer or an
average), in the context of homophily levels in $\{0.51,0.67,0.74,0.97\}$.
Figure 12: Accuracies of lower-latent representation obtained from 2G-DGI
One can see that, generally, the results obtained with two GCN layers and an
average filter are better than those obtained with a dense layer at other
encoder depths, over various levels of homophily.
##### HeteroDGI
Figure 13: Accuracies of lower-latent representation obtained from Hetero-DGI
For Hetero-DGI the evaluation setting is similar to 2G-DGI, with the
difference that encoders with one convolutional layer have been omitted. From
the results one can see that encoders with two convolutional layers and a
dense layer generally give better results than all the other settings, over
various levels of homophily.
##### Conclusions
From these diagrams, one can see that the homophily level greatly influences
the quality of the lower-latent space representations produced by the proposed
models. This means that when searching for the best hyper-parameter setting
for the METABRIC dataset, we should maximise the homophily level of our graph
by trying different values for $k$ (KNN) and $r$ (Radius).
Since the accuracies increase with the homophily level, we can conclude that
the proposed models do separate the two classes of nodes on graph structures
in which nodes of the same class tend to be connected by an edge.
Another conclusion which can be drawn from the diagrams is that the models do
work. For example, for a homophily level of 74%:
* •
CNC-VGAE will produce representations that will return 90% accuracy with a
Naive Bayes classifier.
* •
CNC-DGI will produce representations that attain 90% accuracy with a Naive
Bayes classifier.
* •
2G-DGI will produce representations that attain an accuracy of 90% with a
Naive Bayes classifier.
* •
Hetero-DGI will produce representations that attain an accuracy of 80% with a
Naive Bayes classifier.
Another observable aspect is that for small homophily, fewer GCN layers give
better results. This can happen because, as the number of GCN layers
increases, nodes receive information from more distant neighbours, which may
have a different label and so not be representative of the original node's
label.
### 5 Evaluation on METABRIC
This section introduces two evaluation procedures for the novel models on the
METABRIC dataset. Since the previous section showed that increasing homophily
levels also increases the accuracies of the proposed models, in the first
evaluation experiment we carry out a graph hyper-parameter search aimed at
maximising homophily.
#### 1 Graph hyper-parameter selection
Since the construction of the graph differs for each pair of modalities and
each label class, the hyper-parameter search has been done over all pairs of
modalities and all labels. We noticed that the behaviour of the results was
constant across modality changes, but very different across label-class
changes. Below, the results are reported for all models on the integration of
Clin+mRNA, for all classes of labels.
The tests vary the value of $k\in\{2,4,16,64\}$ and
$r\in\{0.005,0.05,0.5,1,5\}$. For all models the lower-latent space
representation has 64 dimensions; the dense layers have 128 dimensions; and
each convolution has a PReLU (Parametric ReLU) activation function. The
individual decisions taken for each model are:
* •
For CNC-VGAE in particular, the reparametrisation function will be MMD, even
though on the synthetic dataset the results were questionable.
* •
For 2G-DGI the special integration layer will be a simple average of the two
lower representations, because on the synthetic dataset the average
outperformed the dense layer.
* •
For Hetero-DGI the special integration layer will be the dense layer.
All the other tables can be found in the Appendix section.
Figure 14: Accuracies of lower-latent representation obtained from CNC-VGAE
Figure 15: Accuracies of lower-latent representation obtained from CNC-VGAE
Figure 16: Accuracies of lower-latent representation obtained from CNC-VGAE
Figure 17: Accuracies of lower-latent representation obtained from CNC-VGAE
##### Best Model Assessment
| CNC-DGI | CNC-VGAE | 2G-DGI | Hetero-DGI
---|---|---|---|---
Clin+CNA | Clin+mRNA | CNA+mRNA | Clin+CNA | Clin+mRNA | CNA+mRNA | Clin+CNA | Clin+mRNA | CNA+mRNA | Clin+CNA | Clin+mRNA | CNA+mRNA
ER | NB | 0.712 | 0.939 | 0.712 | 0.835 | 0.914 | 0.835 | 0.694 | 0.927 | 0.924 | 0.758 | 0.727 | 0.755
SVM | 0.793 | 0.919 | 0.881 | 0.841 | 0.914 | 0.851 | 0.773 | 0.929 | 0.939 | 0.763 | 0.770 | 0.768
RF | 0.841 | 0.934 | 0.891 | 0.833 | 0.909 | 0.833 | 0.806 | 0.924 | 0.937 | 0.795 | 0.823 | 0.806
DR | NB | 0.636 | 0.621 | 0.676 | 0.68 | 0.696 | 0.694 | 0.689 | 0.614 | 0.679 | 0.692 | 0.674 | 0.694
SVM | 0.696 | 0.696 | 0.696 | 0.69 | 0.696 | 0.696 | 0.697 | 0.697 | 0.697 | 0.697 | 0.697 | 0.699
RF | 0.703 | 0.699 | 0.694 | 0.71 | 0.704 | 0.699 | 0.697 | 0.705 | 0.717 | 0.677 | 0.674 | 0.672
PAM | NB | 0.312 | 0.487 | 0.381 | 0.398 | 0.398 | 0.449 | 0.553 | 0.563 | 0.298 | 0.412 | 0.268 | 0.194
SVM | 0.441 | 0.58 | 0.578 | 0.454 | 0.454 | 0.502 | 0.621 | 0.614 | 0.477 | 0.449 | 0.465 | 0.457
RF | 0.497 | 0.58 | 0.563 | 0.457 | 0.457 | 0.515 | 0.644 | 0.652 | 0.576 | 0.422 | 0.518 | 0.452
IC | NB | 0.391 | 0.267 | 0.31 | 0.497 | 0.401 | 0.520 | 0.447 | 0.384 | 0.538 | 0.260 | 0.215 | 0.215
SVM | 0.458 | 0.387 | 0.454 | 0.482 | 0.414 | 0.532 | 0.467 | 0.482 | 0.593 | 0.391 | 0.278 | 0.338
RF | 0.5 | 0.447 | 0.515 | 0.474 | 0.383 | 0.527 | 0.465 | 0.500 | 0.621 | 0.422 | 0.407 | 0.381
Table 7: Best-in-class results on representations obtained with the models
trained on various settings on classification task obtained with Naive Bayes
Classifier, Support Vector Machine and Random Forest
From Table 7 we can clearly see that 2G-DGI obtains best-in-class results
compared to the other models. A nice surprise is that on Clin+CNA, for the
PAM class, it actually beats the state-of-the-art results by $10\%$.
Even though the architectures of 2G-DGI and Hetero-DGI were similar in some
sense, there is a clear difference between the best results obtained by each.
This will be investigated in future work.
The generally unsatisfying results on the IC label class can be explained by
the low homophily levels of the graph produced for this label class (between
17% and 19%).
##### Conclusions
From the above Figures (14, 15, 16, 17) we can draw the following conclusions,
first comparing each model individually:
* •
CNC-VGAE gives the best results out of all models in this testing setting,
which is why we also test the values it returns with the KL reparametrisation
loss. It returns the best accuracies for low values of $r$ and $k$. For all
classes of labels there seems to be a jump in average accuracy when
transitioning from $r=0.5$ to $r=1$. Generally the differences between testing
and training accuracy are small; sometimes the testing accuracies are even
higher than the training ones.
* •
Hetero-DGI generally gives worse results than all the other models; this might
be because its special integration layer is a dense layer rather than an
average. One can also notice that its accuracy generally peaks at high values
of $r$ rather than responding to changes in $k$; in fact it seems to decrease
as $k$ grows.
* •
CNC-DGI gives good results for the ER label, which has better homophily
levels, where we can notice that the average accuracy decreases as $k$
increases. For the DR label class it generally gives good results when both
$r$ and $k$ increase in value.
* •
2G-DGI returned results competitive with CNC-VGAE, which was a nice surprise.
On ER the highest test accuracy is 87%, on DR 69%, and on PAM 57% (the best of
all models).
Specific on the label classes, we can learn the following:
* •
On ER, the models generally return averages above 70%.
* •
On DR, for big values of both $r$ and $k$, the results will generally be above
60%.
* •
On PAM most models return small accuracies (below 40%), with the exception of
CNC-VGAE and 2G-DGI, which reach accuracies of 55% for values of $r$ smaller
than $0.5$.
* •
On IC most models return small accuracies, below 20%, but for small graphs
2G-DGI and CNC-VGAE can reach accuracies of 40%.
Generally, from the above conclusions we can see that there exists some
correlation between the homophily levels described in Tables 1, 2, 3, 4, 5, 6,
the number of edges in the graph, and the accuracies obtained. For Clin,
homophily levels were around 16%, which is one reason why IC on mRNA+Clin
returns such small results. This is supported by Figure 12, which shows a best
result of 53% accuracy on IC for CNA+mRNA integration. Intuitively, it is
almost as if our learning process is degraded by the high level of inter-class
edges.
## Chapter 5 Conclusion
### 1 Summary
This project presents the reader with a deep dive into a novel unsupervised
learning pipeline leveraging Graph Neural Networks on a cancer classification
task. We commenced by discussing and recreating the state-of-the-art models in
“Variational Autoencoders for Cancer Data Integration: Design Principles and
Computational Practice” [SBT+19], namely CNC-VAE and H-VAE. Our implementation
of these architectures, trained on the METABRIC dataset, obtained results in
line with the paper, and provides a benchmark for the novel graph models
proposed in our work.
* •
The integration of Clin+mRNA on the IC label class with CNC-VAE rendered
$79\%$ accuracy.
* •
The integration of CNA+mRNA and Clin+mRNA on the PAM label class resulted in
$68\%$ and $73\%$ accuracies, respectively.
* •
On all integration types of the ER label class, accuracies were above $85\%$.
The following topic focused on graph construction algorithms for data sets
which do not store relations between the data points representing our
patients. These approaches include KNN, and generating links based on the
Euclidean distance between nodes. We defined metrics quantifying the
characteristics and overall quality of such graph data sets, such as
homophily, and analysed the resulting graphs using these measurements.
Generally, lower homophily levels resulted in very low accuracies in the
lower-latent representation evaluation phase, while high levels of homophily
achieved up to 99.8% accuracy. We can infer that the proposed models are
sensitive to the graph structure of the input data.
During the design phase of the integrative models, we considered many factors,
such as whether the special integration layer is parametric or non-parametric,
the number of layers in the autoencoders, the number of neurons in each layer,
and many others. Hyper-parameter fine-tuning has been performed on each model
for all pairs of modalities and for each class of labels, and the evaluation
process has been kept in line with that used in state-of-the-art works in
order to ensure consistency.
To prove the functionality of the novel models, we introduced a synthetic
dataset for which the results observed using generated lower-dimensional
embeddings on classification tasks with Naive Bayes vary between 51% and 98%
accuracy. Specifically, on each homophily class:
* •
For a homophily level of 51%, 2G-DGI returned an accuracy of 82%, and CNC-DGI
returned 80%.
* •
For a homophily level of 61%, 2G-DGI returned an accuracy of 84%, and
Hetero-DGI returned 79%.
* •
For higher homophily levels, we notice best-in-class results above 90%.
Finally, as for the graph models applied to the METABRIC dataset, results vary
considerably depending on the integrated modalities and the label class, from
17% to 92%, in direct correlation with the homophily values of each label
class.
From Table 7 we can clearly see that 2G-DGI obtains best-in-class results
compared to the other models, and that on Clin+CNA integration with the PAM
label class it actually beats the state-of-the-art results by $10\%$.
### 2 Further work
During the development of the experiments in this project, and up to the
report-writing phase, we identified several opportunities to advance this line
of research. First, investing in the search for optimal hyper-parameters for
the graph construction algorithms and Graph Neural Networks proposed in this
paper can help improve the current results. Second, extending the number of
integrated modalities with, for example, visual data has the potential to
uncover deeper insights into cancer sub-type clusters and cancer
classification. Finally, we propose a novel model adapted from the
Hierarchical Variational Autoencoder that introduces Graph Convolutional
Layers after the initial encoding phase. We conclude by presenting a
mathematical problem regarding graph data that could be tackled with
probability theory and combinatorics.
##### Hyper-parameter settings
Given the multitude of architectural decisions required by the model-training
trials in this project, we intend to carry out more tests and search for the
hyper-parameter settings that will further improve our architectures, such as
the parameterised special integrative layer, the depth of the Encoder (i.e.
the number of GCN layers used), the size of the dense and latent-space layers,
and so on.
##### Multi-modal expansion
The modalities integrated in this paper represent either continuous or
categorical data; we intend to extend the integrative capabilities of the
network to image data, and thus use more than two modalities. For the visual
data, CNN layers can be added prior to the integration phase of the models to
extract higher-level features from the input images.
##### H-VGAE
To advance the research avenue tackled in this project, we propose another
model adapted from the Hierarchical Variational Autoencoder [SBT+19], named
the Hierarchical Graph Variational Autoencoder (H-VGAE). The first processing
units in this model comprise a series of autoencoders that generate
lower-latent representations independently for each input modality. A graph
construction algorithm is then applied to build relationships across the
resulting embeddings, which are fed, together with the lower-dimensional
representations, to two Graph Convolutional Layers: one aiming to find the
mean and one the variance of the encoding distribution. Finally, the decoding
phase consists of an inner-product operation on the final representation,
which is compared to the originally built graph in the loss function.
Figure 1: H-VGAE proposed architecture for integration
The reasons why this model has the potential to render competitive results
are:
* •
The lower-dimensional embeddings generated by the first layer of autoencoders
(one for each input modality) will lie on a continuous multi-Gaussian space.
Hence, the radius algorithm for generating edges has a higher probability of
returning dense graphs with better homophily levels than when applied to the
raw features.
* •
The closest model to this new architecture, CNC-VGAE, obtained among the best
results across all tested models.
##### A Math Problem
Figure 2: Growing neighbourhoods, and growing number of like neighbours
Take a graph with a $25\%$ homophily level. Let O (orange) and P (purple) be
two labels that the nodes can take, and let us pick a node of label O. Taking
the node's immediate neighbourhood, we expect 1 out of 4 neighbours to be of
label O; conversely, for each P (purple) node we expect 3 orange neighbours
and 1 purple neighbour. Taking a bigger neighbourhood, which includes
immediate neighbours and their neighbours, we expect 4 out of 10 nodes to be
of label O. This can be clearly seen in Figure 2. Our open-ended question is:
if we keep increasing the neighbourhood, can we reach a maximum for same-label
neighbours; will the number converge? Does this happen for classes with more
than two labels? What about different homophily levels? Can we generalise a
formula?
This question is relevant in this context because the number of growing nested
neighbourhoods can be thought of as the number of convolution layers applied
to a dataset with a certain homophily level. Attempting to answer this
question might yield ideas on how learning on graphs with small homophily
levels should be attempted.
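The question can also be explored empirically by counting same-label nodes in growing hop neighbourhoods; a small numpy sketch on a hypothetical path graph (not one of the project's graphs):

```python
import numpy as np

def same_label_fraction(A: np.ndarray, labels: np.ndarray, node: int, hops: int) -> float:
    """Fraction of nodes within `hops` steps of `node` that share its label."""
    frontier, seen = {node}, {node}
    for _ in range(hops):
        # Breadth-first expansion: neighbours of the current frontier, minus visited.
        frontier = {int(j) for i in frontier for j in np.nonzero(A[i])[0]} - seen
        seen |= frontier
    nbrs = seen - {node}
    if not nbrs:
        return 0.0
    return float(np.mean([labels[j] == labels[node] for j in nbrs]))

# A path graph 0-1-2-3-4 with labels O, P, P, P, O (O = 0, P = 1).
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1
labels = np.array([0, 1, 1, 1, 0])
print(same_label_fraction(A, labels, 0, 1))  # 0.0: the only 1-hop neighbour is purple
print(same_label_fraction(A, labels, 0, 4))  # 0.25: one of the four reachable nodes is orange
```

Plotting this fraction against the hop count for graphs of different homophily levels would give a first empirical view of whether the sequence converges.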
## References
* [AAB+16] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
* [AEHPK+19] Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, and Aram Galstyan. MixHop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 21–29. PMLR, 09–15 Jun 2019.
* [BC20] Nupur Biswas and Saikat Chakrabarti. Artificial intelligence (ai)-based systems biology approaches in multi-omics data analysis of cancer. Frontiers in Oncology, page 2224, 2020.
* [BG17] Aleksandar Bojchevski and Stephan Günnemann. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking. arXiv preprint arXiv:1707.03815, 2017.
* [BZSL13] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.
* [C+18] Tabula Muris Consortium et al. Single-cell transcriptomics of 20 mouse organs creates a tabula muris. Nature, 562(7727):367–372, 2018.
* [CJP+18] Alfredo Massimiliano Cuzzocrea, Allan James, Norman W Paton, Srivastava Divesh, Agrawal Rakesh, Andrei Z Broder, Mohammed J Zaki, K Selçuk Candan, Labrinidis Alexandros, Schuster Assaf, et al. Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018. ACM, 2018.
* [CNL11] Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 215–223. JMLR Workshop and Conference Proceedings, 2011.
* [CPL+19] Kumardeep Chaudhary, Olivier B Poirion, Liangqun Lu, Sijia Huang, Travers Ching, and Lana X Garmire. Multimodal meta-analysis of 1,494 hepatocellular carcinoma samples reveals significant impact of consensus driver genes on phenotypes. Clinical Cancer Research, 25(2):463–472, 2019.
* [CPLG18] Kumardeep Chaudhary, Olivier B Poirion, Liangqun Lu, and Lana X Garmire. Deep learning–based multi-omics integration robustly predicts survival in liver cancer. Clinical Cancer Research, 24(6):1248–1259, 2018.
* [CPLM21] Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. Adaptive universal generalized pagerank graph neural network. In International Conference on Learning Representations, 2021.
* [CSC+12] Christina Curtis, Sohrab P Shah, Suet-Feung Chin, Gulisa Turashvili, Oscar M Rueda, Mark J Dunning, Doug Speed, Andy G Lynch, Shamith Samarajiwa, Yinyin Yuan, et al. The genomic and transcriptomic architecture of 2,000 breast tumours reveals novel subgroups. Nature, 486(7403):346–352, 2012.
* [CSZ+17] Hao Chai, Xingjie Shi, Qingzhao Zhang, Qing Zhao, Yuan Huang, and Shuangge Ma. Analysis of cancer gene expression data with an assisted robust marker identification approach. Genetic epidemiology, 41(8):779–789, 2017.
* [Cyb89] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303–314, 1989.
* [DBV16] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. Advances in neural information processing systems, 29, 2016.
* [DSLC+20] Carmen Lidia Diaz Soria, Jayhun Lee, Tracy Chong, Avril Coghlan, Alan Tracey, Matthew D Young, Tallulah Andrews, Christopher Hall, Bee Ling Ng, Kate Rawlinson, et al. Single-cell atlas of the first intra-mammalian developmental stage of the human parasite schistosoma mansoni. Nature communications, 11(1):1–16, 2020.
* [EGK+17] Joseph R Ecker, Daniel H Geschwind, Arnold R Kriegstein, John Ngai, Pavel Osten, Damon Polioudakis, Aviv Regev, Nenad Sestan, Ian R Wickersham, and Hongkui Zeng. The brain initiative cell census consortium: lessons learned toward generating a comprehensive brain cell atlas. Neuron, 96(3):542–557, 2017.
* [HFLM+18] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018.
* [HYL17] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in neural information processing systems, 30, 2017.
* [HZYZ21] Zongzhen He, Junying Zhang, Xiguo Yuan, and Yuanyuan Zhang. Integrating somatic mutations for breast cancer survival prediction using machine learning methods. Frontiers in genetics, page 1853, 2021.
* [KW13] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
* [KW16a] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
* [KW16b] Thomas N Kipf and Max Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016.
* [LHL+21] Derek Lim, Felix Hohne, Xiuyu Li, Sijia Linda Huang, Vaishnavi Gupta, Omkar Bhalerao, and Ser Nam Lim. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. Advances in Neural Information Processing Systems, 34, 2021.
* [LJDW+22] Hongjie Li, Jasper Janssens, Maxime De Waegeneer, Sai Saroja Kolluru, Kristofer Davie, Vincent Gardeux, Wouter Saelens, Fabrice PA David, Maria Brbić, Katina Spanier, et al. Fly cell atlas: A single-nucleus transcriptomic atlas of the adult fruit fly. Science, 375(6584):eabk2432, 2022.
* [LZPX20] Bohyun Lee, Shuo Zhang, Aleksandar Poleksic, and Lei Xie. Heterogeneous multi-layered network model for omics data integration and analysis. Frontiers in genetics, page 1381, 2020.
* [M+06] Kevin P Murphy et al. Naive bayes classifiers. University of British Columbia, 18(60):1–8, 2006.
* [MTMG03] Stefano Monti, Pablo Tamayo, Jill Mesirov, and Todd Golub. Consensus clustering: a resampling-based method for class discovery and visualization of gene expression microarray data. Machine learning, 52(1):91–118, 2003.
* [MZ18] Tianle Ma and Aidong Zhang. Affinity network fusion and semi-supervised learning for cancer patient clustering. Methods, 145:16–24, 2018.
* [Nob06] William S Noble. What is a support vector machine? Nature biotechnology, 24(12):1565–1567, 2006.
* [PGM+19] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019.
* [PPK+] A Prat, JS Parker, O Karginova, C Fan, C Livasy, and JI Herschkowitz. and perou, cm (2010). phenotypic and molecular characterization of the claudin-low intrinsic subtype of breast cancer. Breast Cancer Research, 12:R68.
* [PSBB+21] Milan Picard, Marie-Pier Scott-Boyer, Antoine Bodein, Olivier Périn, and Arnaud Droit. Integration strategies of multi-omics data for machine learning analysis. Computational and Structural Biotechnology Journal, 19:3735–3746, 2021.
* [PWC+20] Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. Geom-gcn: Geometric graph convolutional networks. arXiv preprint arXiv:2002.05287, 2020.
* [Qi12] Yanjun Qi. Random forest for bioinformatics. In Ensemble machine learning, pages 307–323. Springer, 2012.
* [RTL+17] Aviv Regev, Sarah A Teichmann, Eric S Lander, Ido Amit, Christophe Benoist, Ewan Birney, Bernd Bodenmiller, Peter Campbell, Piero Carninci, Menna Clatworthy, et al. Science forum: the human cell atlas. elife, 6:e27041, 2017.
* [SBT+19] Nikola Simidjievski, Cristian Bodnar, Ifrah Tariq, Paul Scherer, Helena Andres Terre, Zohreh Shams, Mateja Jamnik, and Pietro Liò. Variational autoencoders for cancer data integration: design principles and computational practice. Frontiers in genetics, 10:1205, 2019.
* [SNB+08] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93–93, 2008.
* [SWL18] Dongdong Sun, Minghui Wang, and Ao Li. A multimodal deep neural network for human breast cancer prognosis prediction by integrating multi-dimensional data. IEEE/ACM transactions on computational biology and bioinformatics, 16(3):841–850, 2018.
* [VCC+17] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
* [VFH+19] Petar Velickovic, William Fedus, William L Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. Deep graph infomax. ICLR (Poster), 2(3):4, 2019.
* [VLBM08] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096–1103, 2008.
* [WBM+13] Wenting Wang, Veerabhadran Baladandayuthapani, Jeffrey S Morris, Bradley M Broom, Ganiraju Manyam, and Kim-Anh Do. ibag: integrative bayesian analysis of high-dimensional multiplatform genomics data. Bioinformatics, 29(2):149–159, 2013.
* [WJS+19] Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Yanfang Ye, Peng Cui, and Philip S Yu. Heterogeneous graph attention network. In The world wide web conference, pages 2022–2032, 2019.
* [WMD+14] Bo Wang, Aziz M Mezlini, Feyyaz Demir, Marc Fiume, Zhuowen Tu, Michael Brudno, Benjamin Haibe-Kains, and Anna Goldenberg. Similarity network fusion for aggregating data types on a genomic scale. Nature methods, 11(3):333–337, 2014.
* [WSH+20] Tongxin Wang, Wei Shao, Zhi Huang, Haixu Tang, Jie Zhang, Zhengming Ding, and Kun Huang. Moronet: multi-omics integration via graph convolutional networks for biomedical data classification. bioRxiv, 2020.
* [WZP+17] Bo Wang, Junjie Zhu, Emma Pierson, Daniele Ramazzotti, and Serafim Batzoglou. Visualization and analysis of single-cell rna-seq data by kernel-based similarity learning. Nature methods, 14(4):414–416, 2017.
* [XDK+19] Gangcai Xie, Chengliang Dong, Yinfei Kong, Jiang F Zhong, Mingyao Li, and Kai Wang. Group lasso regularized deep learning for cancer prognosis from multi-omics and clinical features. Genes, 10(3):240, 2019.
* [XWC+19] Jing Xu, Peng Wu, Yuehui Chen, Qingfang Meng, Hussain Dawood, and Hassan Dawood. A hierarchical integration deep flexible neural forest framework for cancer subtype classification by integrating multi-omics data. BMC bioinformatics, 20(1):1–11, 2019.
* [YK15] Aliaksandr A Yarmishyn and Igor V Kurochkin. Long noncoding rnas: a potential novel class of cancer biomarkers. Frontiers in genetics, 6:145, 2015.
* [You21] YourGenome. What is gene expression, 2021.
* [ZAL18] Marinka Zitnik, Monica Agrawal, and Jure Leskovec. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics, 34(13):i457–i466, 2018.
* [ZR20] Ron Zeira and Benjamin J Raphael. Copy number evolution with weighted aberrations in cancer. Bioinformatics, 36(Supplement_1):i344–i352, 2020.
* [ZYZ+20] Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. Advances in Neural Information Processing Systems, 33:7793–7804, 2020.
* [ZZZM16] Ruoqing Zhu, Qing Zhao, Hongyu Zhao, and Shuangge Ma. Integrating multidimensional omics data for cancer outcome. Biostatistics, 17(4):605–618, 2016.
## Chapter 6 Appendix
### 1 Supplementary results
Figure 1: Clin+CNA integration testing on ER
Figure 2: Clin+CNA integration testing on DR
Figure 3: Clin+CNA integration testing on PAM
Figure 4: Clin+CNA integration testing on IC
Figure 5: Clin+mRNA integration testing on ER
Figure 6: Clin+mRNA integration testing on DR
Figure 7: Clin+mRNA integration testing on PAM
Figure 8: Clin+mRNA integration testing on IC
Figure 9: CNA+mRNA integration testing on ER
Figure 10: CNA+mRNA integration testing on DR
Figure 11: CNA+mRNA integration testing on PAM
Figure 12: CNA+mRNA integration testing on IC
# Automatically generating question-answer pairs for assessing basic reading
comprehension in Swedish
Dmytro Kalpakchi
Division of Speech, Music and Hearing
KTH Royal Institute of Technology
Stockholm, Sweden
<EMAIL_ADDRESS>
&Johan Boye
Division of Speech, Music and Hearing
KTH Royal Institute of Technology
Stockholm, Sweden
<EMAIL_ADDRESS>
###### Abstract
This paper presents an evaluation of the quality of automatically generated
reading comprehension questions from Swedish text, using the Quinductor
method. This method is a light-weight, data-driven but non-neural method for
automatic question generation (QG). The evaluation shows that Quinductor is a
viable QG method that can provide a strong baseline for neural-network-based
QG methods.
## 1 Introduction
In this article, we aim to establish a strong non-neural, but still data-
driven, baseline for the automatic generation of reading comprehension
questions (RC-QG) in Swedish. It is well-known that reading comprehension is a
complex and multi-layered process, ranging from the simple decoding of
individual words to advanced analytical tasks concerning the quality, veracity
and purpose of whole texts (Alderson, 2000; Shaw and Weir, 2007). In this
work, we aim at only generating RC questions on _the information-locating
level_, i.e. questions that facilitate the assessment of readers’ ability to
scan, locate and retrieve relevant information from a single text. This is the
most basic level of RC, as identified by the PISA 2018 report (OECD, 2019,
p.34), but still a crucial reading ability in need of assessment.
Our approach is data-driven, meaning that no handcrafted rules or linguistic
expertise is required. Adapting to new kinds of questions is done via adding
new text-question pairs to the training material. Furthermore, the generated
questions and their respective answers (the QA-pairs) will quite faithfully
reuse the wordings of the text, but can also use words and phrases that are
not explicitly present in the text. Our system is completely open-source with
source code available on GitHub
(https://github.com/dkalpakchi/swe_quinductor).
## 2 Related work
To the best of our knowledge, the literature on reading comprehension question
generation (RC-QG) particularly for Swedish is very limited. Wilhelmsson
(2011, 2012) presented a system for generating questions using manually
specified grammatical transformations (syntactic fronting of unbounded
constituents and substitution of suitable question elements with question
words). By design, their system is able to generate QA-pairs only using
formulations that appear word-by-word in the text. The generated questions
were limited to two categories, the first of which encompasses questions
starting with “vem” (eng. “who/whom”), “vad” (eng. “what”), “vilken” (eng.
“which”) that would concern nominal constituents. The second category concerns
questions to some adverbials. The author also presented a preliminary
evaluation of the questions generated for ten random Wikipedia articles, but
unfortunately neither specified the exact articles nor released the source
code of his system, making a direct comparison impossible.
Lately, neural-network-based text generation methods have become very popular
due to their impressive results. Indeed, these methods have been applied to RC-QG
as well, mostly for English (see e.g. Liao et al. (2020); Dong et al. (2019)).
Our goal here is not to compete with these approaches, but rather to present a
more light-weight but still strong baseline to which neural methods can be
compared, but which is also interesting and useful in its own right.
## 3 Data
We have used the _SweQUAD-MC_ dataset (Kalpakchi and Boye, 2021a) consisting
of texts and multiple-choice reading comprehension questions (MCQs) for the
given texts. It was created by three paid linguistics students. They were
instructed to formulate unambiguous questions, such that (1) the answer to the
question appears verbatim in the text, (2) the question cannot be answered
without reading the text (i.e., the text is necessary), and (3) the answer
does not require extra knowledge not present in the text (i.e., the text is
sufficient). These characteristics make the dataset suitable for assessing RC
on the information-locating level.
The dataset is relatively small, with the training set consisting of 962 MCQs,
the development (dev) set of 126 MCQs, and the test set of 102 MCQs. The
distribution of the first two words (a decent proxy for question words) in
questions in the training set is shown in Figure 1.
Figure 1: The distribution of the first two words in questions of the training
set of SweQUAD-MC
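The first-two-word statistic above is straightforward to compute; the sketch below shows one way, assuming simple whitespace tokenization. The example questions are made up for illustration and are not taken from SweQUAD-MC.

```python
from collections import Counter

def first_two_words(questions):
    """Distribution of each question's first two (lower-cased) words."""
    counts = Counter()
    for q in questions:
        tokens = q.lower().split()
        if len(tokens) >= 2:
            counts[" ".join(tokens[:2])] += 1
    return counts

# Made-up Swedish questions; NOT actual SweQUAD-MC data.
dist = first_two_words([
    "vad gör en dietist?",
    "vad kan migrationsverket göra?",
    "vem tillbringar tid i operationssalen?",
])
```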
## 4 Method
We used Quinductor Kalpakchi and Boye (2021c), which is a mostly deterministic
data-driven method applicable to any language having a dependency parser based
on Universal Dependencies (Nivre et al., 2020). In particular, it is
applicable to Swedish. Quinductor is only capable of inducing QA-pairs based
on single declarative sentences, which is suitable for assessing RC on the
information-locating level. The method is data-driven, requiring a corpus of
texts with the associated QA-pairs as training material, as well as some
additional files, detailed in Appendix A.
Figure 2: The dependency tree for the sentence “John graduated in 2010”
Figure 3: The dependency tree for the sentence “Stocks crashed during previous
summer months”
Quinductor works in two stages: first inducing the templates and then using
them to generate questions from unseen sentences. In short, the first stage
involves learning from data to express each given QA-pair in terms of the
dependency structures of the source sentence, which is a basis for both the
question and the answer. To exemplify, consider the sentence “John graduated
in 2010” (with its dependency tree in Figure 2) and the question “When did
John graduate?” with the answer being “in 2010”. Quinductor will then learn to
induce one template for the question and one for the answer:

> Question template: When did [r.nsubj#1] [r.lemma] ?
> (matching “John” and “graduate”)

> Answer template: <r.obl#4>
> (matching “in 2010”)
At test time, Quinductor applies an overgenerate-and-rank strategy, and
attempts to apply all of the induced templates to each given sentence.
Clearly, the more sentences with similar dependency structures are present
in unseen data, the more successful the method will be. Also, the more
template expressions with angled brackets (matching a whole phrase, as in the
answer template above) a template contains, the better it generalizes. To
exemplify, if we get a new sentence “Stocks crashed during previous summer
months” (with its dependency tree in Figure 3), our previously induced
template will be able to fire and produce the QA-pair “When did stocks crash?”
– “during previous summer months”, although the trees are clearly not
identical. For further details on the method and its generalization
capabilities we refer to the original article.
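To make the template mechanics above concrete, here is a minimal sketch of template filling over a toy UD-style dependency tree. This is not the actual quinductor implementation: the token-dict representation, the `fill` and `subtree_text` helpers, and the simplified expression syntax are all illustrative assumptions.

```python
# Sketch of Quinductor-style template filling, NOT the quinductor package.
# A tree is a list of token dicts with 1-based ids; [r.nsubj#1] follows
# one labelled arc from the root and takes that single token, while
# <r.obl#4> takes the token's whole subtree.

def children(tree, head_id, deprel):
    return [t for t in tree if t["head"] == head_id and t["deprel"] == deprel]

def subtree_text(tree, node):
    """Collect the node and all its descendants in sentence order."""
    ids = {node["id"]}
    changed = True
    while changed:
        changed = False
        for t in tree:
            if t["head"] in ids and t["id"] not in ids:
                ids.add(t["id"])
                changed = True
    return " ".join(t["form"] for t in sorted(tree, key=lambda t: t["id"])
                    if t["id"] in ids)

def fill(tree, template):
    """Fill a whitespace-separated template; None if it does not fire."""
    root = next(t for t in tree if t["head"] == 0)
    out = []
    for expr in template.split():
        if expr == "[r.lemma]":
            out.append(root["lemma"])
        elif expr.startswith(("[r.", "<r.")):
            deprel = expr.strip("[]<>").split(".")[1].split("#")[0]
            matches = children(tree, root["id"], deprel)
            if not matches:
                return None  # template does not apply to this sentence
            if expr.startswith("<"):
                out.append(subtree_text(tree, matches[0]))
            else:
                out.append(matches[0]["form"])
        else:
            out.append(expr)  # literal token, e.g. a question word
    return " ".join(out)

# "Stocks crashed during previous summer months" as a UD-style tree.
tree = [
    {"id": 1, "form": "stocks", "lemma": "stock", "head": 2, "deprel": "nsubj"},
    {"id": 2, "form": "crashed", "lemma": "crash", "head": 0, "deprel": "root"},
    {"id": 3, "form": "during", "lemma": "during", "head": 6, "deprel": "case"},
    {"id": 4, "form": "previous", "lemma": "previous", "head": 6, "deprel": "amod"},
    {"id": 5, "form": "summer", "lemma": "summer", "head": 6, "deprel": "compound"},
    {"id": 6, "form": "months", "lemma": "month", "head": 2, "deprel": "obl"},
]
question = fill(tree, "When did [r.nsubj#1] [r.lemma] ?")
answer = fill(tree, "<r.obl#4>")
```

Applied to this tree, the question template from the earlier example yields "When did stocks crash ?" and the answer template yields "during previous summer months", even though the two trees are not identical.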
Statistics | Value
---|---
Number of induced templates | 248
Support per template |
Mean $\pm$ STD | $1.04\pm 0.23$
Median (Min - Max) | 1 (1 - 3)
Table 1: Descriptive statistics of the templates produced using the training
set of the SweQUAD-MC dataset. Support per template is the number of sentences
from the training set that yield the same template.
In this work we have induced templates based on the training set of SweQUAD-MC
(see more implementation details in Appendix A). As can be seen in Table 1,
this resulted in 248 templates with most of them being unique, i.e. induced
from (being supported by) only one sentence from the training set. Some
templates had higher support, with a maximum of 3 sentences per template.
## 5 Evaluation and discussion
The proportion of source sentences (the ones in which the correct answer is
found) from the dev and test sets of SweQUAD-MC for which Quinductor could
generate at least one question is reported in Table 2.
| dev | test
---|---|---
# of questions in the set | 126 | 102
# of generated questions | 207 | 213
(1) SS with $\geq$ 1 applicable template | 64 | 44
(2) SS with $\geq$ 1 generated question after basic filtering | 49 | 36
(3) SS with $\geq$ 1 generated question after mean filtering | 29 | 24
(3) as % of the respective set | 23% | 23.5%
Generated questions per SS | |
Mean | 3.2 | 4.8
Standard deviation | 4.1 | 7.2
Median | 2 | 3
Minimum | 0 | 0
Maximum | 23 | 46
Table 2: Descriptive statistics of the questions induced on the dev. and test
sets of the SweQUAD-MC using the templates mentioned in Table 1. “SS” stands
for “source sentence(s)”, i.e., the sentence in which the correct answer is
found. “$\geq 1$ applicable template” means that at least 1 question was
induced from the given SS.
For evaluation, we took all 29 QA-pairs generated for the development set and
all 24 QA-pairs generated for the test set of SweQUAD-MC. These 53 QA-pairs
were combined with 53 original QA-pairs corresponding to the same source
sentences from the respective corpora (later referred to as gold QA-pairs).
The resulting 106 QA-pairs and the corresponding source sentences formed 106
evaluation triples, and were presented to 2 human judges (one native Swedish
speaker and one non-native, but with a high proficiency) in a random order
(different for each judge). Following Kalpakchi and Boye (2021c), we required
judges to evaluate each triple using a questionnaire consisting of 9 criteria.
Each criterion required a judgement on a 4-point Likert-type scale from 1
(“Disagree”) to 4 (“Agree”). The evaluation itself was conducted online on an
in-house instance of Textinator Surveys Kalpakchi and Boye (2022). The
evaluation guidelines are reported in Appendix B.
Five criteria concerned questions and were formulated as statements asking
whether a question:
1. C1
is grammatically correct $(\uparrow)$
2. C2
makes sense $(\uparrow)$
3. C3
would be clearer if more information were provided $(\downarrow)$
4. C4
would be clearer if less information were provided $(\downarrow)$
5. C5
is relevant to the given sentence $(\uparrow)$
The remaining 4 criteria concerned the answer and asked whether the suggested
answer:
1. C6
correctly answers the question $(\uparrow)$
2. C7
would be clearer if phrased differently $(\downarrow)$
3. C8
would be clearer if more information were provided $(\downarrow)$
4. C9
would be clearer if less information were provided $(\downarrow)$
$\uparrow$ ($\downarrow$) indicates that the higher (lower) the judgements on
the Likert scale, the better.
Criterion | | dev | test
---|---|---|---
gold | gen | gold | gen
C1 $\uparrow$ | $\kappa$ | 0.86 | 0.54 | 0.83 | 0.33
$\gamma$ | 0.92 | 0.78 | 0.91 | 0.55
C2 $\uparrow$ | $\kappa$ | 0.59 | 0.49 | 0.88 | 0.39
$\gamma$ | 0.83 | 0.79 | NA/4 | 0.75
Table 3: Inter-annotator agreement for criteria C1 and C2 on the development
and test sets of SweQUAD-MC. $\gamma=\text{NA/X}$ means that at least one
annotator gave the same score X to all evaluation triples, so it is impossible
to count concordant and discordant pairs.
Following Kalpakchi and Boye (2021c), we have measured IAA using Randolph’s
$\kappa$ (Randolph, 2005) and Goodman-Kruskal’s $\gamma$ (Goodman and Kruskal,
1979). The former, $\kappa$, ranging between $-1$ and $1$, accounts for
agreement in absolute rankings, i.e. it is boosted if the annotators gave
exactly the same score to an evaluation triple. The value of 0 indicates the level
of agreement that could be expected by chance; positive (negative) values
indicate agreement better (worse) than chance. The latter, $\gamma$, also
ranging between $-1$ and $1$, accounts for agreement in relative ordering,
i.e. it is boosted if the annotators ordered a pair of evaluation triples
in the same way, no matter the actual scores. $\gamma=0$ indicates no
agreement, $\gamma=1$ denotes a complete agreement and $\gamma=-1$ hints at a
perfect disagreement.
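Both agreement measures are simple to compute for two annotators. The following is a minimal sketch, not the implementation used for the paper's evaluation: it assumes free-marginal chance agreement of $1/k$ for $\kappa$ and counts concordant and discordant item pairs (ties ignored) for $\gamma$.

```python
from itertools import combinations

def randolph_kappa(a, b, n_categories):
    """Free-marginal kappa for two annotators: chance agreement is
    fixed at 1/k for k rating categories."""
    p_obs = sum(x == y for x, y in zip(a, b)) / len(a)
    p_chance = 1.0 / n_categories
    return (p_obs - p_chance) / (1.0 - p_chance)

def goodman_kruskal_gamma(a, b):
    """(C - D) / (C + D) over all item pairs; tied pairs are ignored.
    Returns None in the NA case of Table 3 (no untied pairs at all)."""
    concordant = discordant = 0
    for (a1, b1), (a2, b2) in combinations(list(zip(a, b)), 2):
        prod = (a1 - a2) * (b1 - b2)
        if prod > 0:
            concordant += 1
        elif prod < 0:
            discordant += 1
    if concordant + discordant == 0:
        return None  # e.g. one annotator gave the same score everywhere
    return (concordant - discordant) / (concordant + discordant)
```

For example, with ratings `[1, 2, 3]` versus `[3, 2, 1]` the relative orderings are perfectly reversed, so $\gamma = -1$, while $\kappa$ is slightly positive because the annotators agree on one of the three triples.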
Without a doubt, the two basic criteria, which must necessarily have high
judgements in order to even consider looking at all other criteria, are C1 and
C2. Table 3 shows that the evaluators had high agreement on both criteria,
especially when it comes to relative ordering of triples ($\gamma>0.5$).
As can be seen in Figure 4, nearly all _gold_ questions of the dev and test
sets were judged highly on C1 (median $\geq 3$). At the same time, 17 ($\sim
58\%$) and 10 ($\sim 42\%$) of the _generated_ questions of the dev and test
sets respectively were rated highly on C1, resulting in a total of $50.9\%$
between the sets.
Figure 4 also reveals that, surprisingly, 4 _gold_ questions in total between
the dev and test sets were judged as not making sense (median $<3$). At the
same time only 11 ($\sim 38\%$) and 8 ($\sim 33\%$) of the _generated_
questions of the dev and test sets respectively were rated highly on C2
(median $\geq 3$), resulting in a total of $35.8\%$ between the sets. We also
observed that only one generated question was judged highly on C2, but lower
on C1.
(a) Development set
(b) Test set
Figure 4: Barplots of the number of evaluation triples rated highly
(median $\geq$ 3) on C1 (grammatical), C2 (makes sense), and both C1 and C2.
The horizontal red lines indicate the total number of evaluation triples
considered (equal for gold and generated data)
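The median-based selection behind Figure 4 can be sketched as below; the scores are hypothetical, and with two judges `statistics.median` returns the mean of the two scores.

```python
from statistics import median

def high_rated(judgements, threshold=3):
    """Indices of evaluation triples whose median score meets the
    threshold; with two judges the median is the mean of their scores."""
    return [i for i, scores in enumerate(judgements)
            if median(scores) >= threshold]

# Hypothetical C1 scores from two judges for four evaluation triples.
c1 = [[4, 4], [3, 2], [4, 3], [1, 2]]
```

Here triples 0 and 2 count as highly rated; triple 1 does not, since its median is 2.5.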
We further analyzed only the 19 _generated_ QA-pairs between the sets that
were judged highly on both C1 and C2. The analysis was carried out in terms of
criteria C5 and C6, the agreement on both of which was reasonably high (see
Appendix F). Out of these 19, 17 (11 on the dev set and 6 on the test set),
were also rated highly (median $\geq 3$) on C5 and 12 ($63.2\%$), 7 on the dev
set and 5 on the test set, were rated highly on C6. Crucially, there were no
QA-pairs with the suggested answer being judged as correct (high median rating
on C6), but with the question being deemed as irrelevant to the given sentence
(low median judgement on C5). This means that out of all generated QA-pairs,
12 ($22\%$) were judged as completely valid, 7 ($24\%$) on the dev set and 5
($21\%$) on the test set, making Quinductor a reasonably strong RC-QG baseline
for SweQUAD-MC.
Some examples of successful generation are presented in Appendix C. Further
analysis (in particular of errors) and related generation examples are
presented in Appendix D. Although automatic evaluation metrics provide very
limited insights, we also report them in Appendix E, following Kalpakchi and
Boye (2021c).
## Acknowledgments
This work was supported by Vinnova (grant 2019-02997), and Digital Futures
(project SWE-QUEST).
## References
* Alderson (2000) Charles J Alderson. 2000. _Assessing reading_. Cambridge University Press.
* Denkowski and Lavie (2014) Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In _Proceedings of the ninth workshop on statistical machine translation_ , pages 376–380.
* Dong et al. (2019) Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. _Advances in Neural Information Processing Systems_ , 32.
* Goodman and Kruskal (1979) Leo A Goodman and William H Kruskal. 1979. Measures of association for cross classifications. _Measures of association for cross classifications_ , pages 2–34.
* Kalpakchi and Boye (2021a) Dmytro Kalpakchi and Johan Boye. 2021a. BERT-based distractor generation for Swedish reading comprehension questions using a small-scale dataset. In _Proceedings of the 14th International Conference on Natural Language Generation_ , pages 387–403, Aberdeen, Scotland, UK. Association for Computational Linguistics.
* Kalpakchi and Boye (2021b) Dmytro Kalpakchi and Johan Boye. 2021b. Minor changes make a difference: a case study on the consistency of UD-based dependency parsers. In _Proceedings of the Fifth Workshop on Universal Dependencies (UDW, SyntaxFest 2021)_ , pages 96–108, Sofia, Bulgaria. Association for Computational Linguistics.
* Kalpakchi and Boye (2021c) Dmytro Kalpakchi and Johan Boye. 2021c. Quinductor: a multilingual data-driven method for generating reading-comprehension questions using universal dependencies. _arXiv preprint arXiv:2103.10121_.
* Kalpakchi and Boye (2022) Dmytro Kalpakchi and Johan Boye. 2022. Textinator: an internationalized tool for annotation and human evaluation in natural language processing and generation. In _Proceedings of the Thirteenth Language Resources and Evaluation Conference_ , pages 856–866, Marseille, France. European Language Resources Association.
* Liao et al. (2020) Yi Liao, Xin Jiang, and Qun Liu. 2020. Probabilistically masked language model capable of autoregressive generation in arbitrary word order. _arXiv preprint arXiv:2004.11579_.
* Nivre et al. (2020) Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajic, Christopher D Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal dependencies v2: An evergrowing multilingual treebank collection. In _Proceedings of the 12th Language Resources and Evaluation Conference_ , pages 4034–4043.
* OECD (2019) OECD. 2019. _PISA 2018 Assessment and Analytical Framework_.
* Qi et al. (2020) Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020\. Stanza: A python natural language processing toolkit for many human languages. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations_ , pages 101–108, Online. Association for Computational Linguistics.
* Randolph (2005) Justus J Randolph. 2005. Free-marginal multirater kappa (multirater k [free]): An alternative to fleiss’ fixed-marginal multirater kappa. _Online submission_.
* Sharma et al. (2017) Shikhar Sharma, Layla El Asri, Hannes Schulz, and Jeremie Zumer. 2017. Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation. _CoRR_ , abs/1706.09799.
* Shaw and Weir (2007) Stuart D Shaw and Cyril J Weir. 2007. _Examining writing: Research and practice in assessing second language writing_ , volume 26. Cambridge University Press.
* Wilhelmsson (2011) Kenneth Wilhelmsson. 2011. Automatic question generation from swedish documents as a tool for information extraction. In _Proceedings of the 18th Nordic Conference of Computational Linguistics (NODALIDA 2011)_ , pages 323–326.
* Wilhelmsson (2012) Kenneth Wilhelmsson. 2012. Automatic question generation for swedish: The current state. In _Proceedings of the SLTC 2012 workshop on NLP for CALL; Lund; 25th October; 2012_ , 080, pages 71–79. Linköping University Electronic Press.
## Appendix A Implementation details
We have used dependency parsing models from the package called
Stanza222https://github.com/stanfordnlp/stanza Qi et al. (2020), version 1.4.2
and used quinductor333https://github.com/dkalpakchi/quinductor package
Kalpakchi and Boye (2021c), version 0.2.2. This is important, because it is
likely that the induced templates will differ if different models are used.
The supplementary files, necessary either for inducing templates or for
ranking the generated questions, were obtained as follows:
* •
the IDFs were calculated based on the SweQUAD-MC training set (necessary for
inducing templates);
* •
the morphological n-gram model was calculated based on training, dev and test
sets of the UD’s Talbanken
treebank444https://universaldependencies.org/treebanks/sv_talbanken/index.html
(necessary for ranking);
* •
the question-word model was calculated based on the SweQUAD-MC training set
(necessary for ranking).
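A standard IDF computation matching the first bullet might look like the sketch below. The exact variant quinductor uses (tokenization, smoothing) is not specified here, so treat this as an assumption rather than the package's behaviour.

```python
import math
from collections import Counter

def idf_table(documents):
    """Plain IDF over whitespace-tokenised documents: log(N / df(t))."""
    n = len(documents)
    df = Counter()
    for doc in documents:
        df.update(set(doc.lower().split()))
    return {term: math.log(n / count) for term, count in df.items()}

# Toy two-document "training set"; not SweQUAD-MC data.
idf = idf_table(["en fråga om svar", "en annan mening"])
```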
Note that the question-word model was induced using one of the earlier
versions of Stanza, which is why the distributions differ slightly compared
to version 1.4.2. All these supplementary files are released in the GitHub
repository associated with this paper.
## Appendix B Human evaluation guidelines
Here we report the exact wording of the guidelines presented to the human
evaluators.
Tack att du deltar i vår utvärdering av läsförståelsefrågor! Du kommer att se
ett antal meningar (en i taget) tillsammans med fråga och det rätta svaret
(QA-par). Du kommer också att se ett antal påstående för varje QA-par. Din
uppgift är att bestämma i vilken utsträckning du håller med varje påstående.
Om frågan är obegriplig ska du välja ”1” för alla påståenden relaterade till
det föreslagna svaret. Vänligen ignorera alla möjliga formatteringsfel, t.ex.
skiljetecken (.,!?:;) eller versaler som saknas.
## Appendix C Generation examples
> Sentence 1: inom hälso- och sjukvården arbetar dietisten med
> nutritionsbehandling och kostrådgivning, både med enskilda patienter och i
> grupp.
> Question 1: var arbetar dietisten med nutritionsbehandling och
> kostrådgivning?
> Suggested answer 1: inom hälso- och sjukvården
> Sentence 2: om du är borta längre än ett år eller om du planerar att bosätta
> dig i ett annat land kan migrationsverket återkalla ditt uppehållstillstånd.
> Question 2: vad kan migrationsverket göra om du är borta längre än ett år
> eller om du planerar att bosätta dig i ett annat land?
> Suggested answer 2: återkalla uppehållstillstånd
> Sentence 3: hovslagaren behandlar även skador i hoven till exempel
> hovsprickor eller hovbölder.
> Question 3: vilka behandlar hovslagaren?
> Suggested answer 3: skador i hoven
> Sentence 4: pulverfärg appliceras ofta på metallkomponenter.
> Question 4: vad appliceras ofta på metallkomponenter?
> Suggested answer 4: pulverfärg
> Sentence 5: miljö- och hälsoskyddsinspektörer samarbetar till exempel med
> andra tjänstemän inom kommuner, företag och myndigheter.
> Question 5: vad gör miljö- och hälsoskyddsinspektörer?
> Suggested answer 5: samarbetar till med andra tjänstemän inom kommuner ,
> företag och myndigheter
## Appendix D Error analysis
The QA-pairs that are rated low on C2 (question makes sense) represent the
worst kind of errors, which, unlike, say, some grammatical errors, are
impossible to correct automatically. Hence, let us examine more closely some
examples of evaluation triples with generated QA-pairs containing questions
that were rated low (median $<3$) on C2. Please ignore formatting errors
(like the absence of punctuation or capital letters).
The questions from the following triples got a median score of 1 on C2:
> Sentence 1: arbetsuppgifterna varierar beroende på var man är anställd men
> den främsta uppgiften är att se till så att hyresgästerna känner sig trygga
> och nöjda i sin arbets- och boendemiljö.
> Question 1: vad gör arbetsuppgifterna beroende på var man är anställd men
> den främsta uppgiften är att se till så att hyresgästerna känner sig trygga
> och nöjda i sin arbets- och boendemiljö?
> Suggested answer 1: varierar
> Sentence 2: du kan då behöva vänta på ditt nya beslut utanför sverige.
> Question 2: vad kan du göra på ditt nya beslut utanför sverige?
> Suggested answer 2: vänta
The questions from the following triples got a median score of 1.5 on C2:
> Sentence 3: den största delen består av barrskog, men i söder breder lummig
> lövskog ut sig.
> Question 3: vad består den största delen från?
> Suggested answer 3: barrskog
> Sentence 4: en stuntman ersätter filmskådespelare i riskfyllda situationer.
> Question 4: var ersätter stuntman?
> Suggested answer 4: i riskfyllda situationer
The question from the following triple got a median score of 2 on C2:
> Sentence 5: till exempel tillbringar kirurgen en stor del av sin tid i
> operationssalen, medan en psykiatriker arbetar mycket med samtal med sina
> patienter.
> Question 5: vem tillbringar till en stor del av sin tid i operationssalen
> medan en psykiatriker arbetar mycket med samtal med sina patienter?
> Suggested answer 5: kirurgen
The questions from the following triples got a median score of 2.5 on C2:
> Sentence 6: en sjukskriven medarbetare kostar lågt räknat 2 740 kronor per
> dag*.
> Question 6: vad gör en sjukskriven medarbetare?
> Suggested answer 6: kostar kronor
> Sentence 7: man bör ha god fysik då arbetet kan slita på nacke och axlar.
> Question 7: vad bör man göra då arbetet kan slita på nacke och axlar?
> Suggested answer 7: ha fysik
Question 1 was generated using the template vad gör [r.nsubj#1] <r.advcl#2>?.
The problematic part in the case of sentence 1 turned out to be the last
template expression <r.advcl#2>, which takes the whole subtree whose root can
be found by following the arc labeled advcl from the root r of the dependency
tree. In this case the subtree was too large and thus the generated question
contains a lot of unnecessary information. This is one typical type of errors
that could be referred to as _overgeneralization_ , i.e. the case when the
induced templates become too general. However, a balance between letting
Quinductor to induce too generic or too specific templates is difficult to
strike, so such kinds of errors are inevitable. Question 5 suffers from the
same problem.
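The difference between the two kinds of template expressions can be sketched in a few lines of Python. The toy tree, its encoding, and the chosen tokens below are hypothetical simplifications for illustration, not Quinductor's internal representation:

```python
# Sketch (not Quinductor's implementation): [x] copies a single token,
# while <x> copies the entire subtree rooted at x, which is what makes
# <r.advcl#2> drag in so much material from sentence 1.
# Toy dependency tree: head token -> list of its dependents.
tree = {
    "varierar": ["arbetsuppgifterna", "beroende"],  # root with nsubj and advcl arcs
    "beroende": ["på", "anställd"],                 # the advcl subtree keeps growing
    "anställd": ["var", "man", "är"],
}

def subtree_tokens(tree, node):
    # Pre-order traversal collecting the whole subtree rooted at `node`.
    out = [node]
    for child in tree.get(node, []):
        out.extend(subtree_tokens(tree, child))
    return out

single = ["arbetsuppgifterna"]              # what a [r.nsubj#1]-style expression yields
whole = subtree_tokens(tree, "beroende")    # what a <r.advcl#2>-style expression yields
assert len(whole) > len(single)             # the subtree brings in far more material
```

The same traversal run on a real parse would pull in the entire adverbial clause, producing the bloated question 1 above.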
Question 2 is based on the template vad [r.aux#1] [r.nsubj#2] göra <r.obl#3>?,
which in turn was generated from the sentence “vad behöver
ambulanssjuksköterskan göra vid större olyckor med många skadade?”. The
structure is clearly very similar, but the preposition of the oblique nominal
happened to be different, so the template expression <r.obl#3> did not
generalize correctly. In fact, question 3 suffers from a similar problem.
Question 4 represents the case where the question word supplied by the
template is wrong. Question words are recorded verbatim as strings rather than
deduced from the dependency tree of the sentence, so such errors are also
inevitable in the current version of Quinductor.
QA-pairs 6 and 7 represent more successful applications of templates for the
questions, although the questions are still not perfectly intelligible. The
major problem with these QA-pairs is the answers, which in fact suffer from
the inverse of the problem seen for question 1. For instance, the template for
the suggested answer for question 7 is [r] [r.obj#4]. The problem with this
template is that it picks only specific nodes of the dependency tree and is
prone to errors if the object referred to by r.obj#4 turns out to have
dependents that are also relevant to include in the answer. This problem could
be referred to as _undergeneralization_. As previously mentioned, the balance
between over- and undergeneralization is hard to strike, especially given that
Quinductor is a data-driven method and all templates depend on the data at
hand.
We noted that these errors sometimes arise because of inconsistencies in the
dependency parsers themselves, which is not a new observation. For instance,
Kalpakchi and Boye (2021b) showed that even changes as minor as replacing one
4-digit numeral with another can cause surprisingly large inconsistencies in
the resulting trees produced by state-of-the-art dependency parsers, in
particular for Swedish.
## Appendix E Automatic evaluation metrics
Following Kalpakchi and Boye (2021c), we have calculated BLEU-N, ROUGE-L, and
CIDEr using nlg-eval (Sharma et al., 2017), and METEOR using METEOR-1.5
(Denkowski and Lavie, 2014), specifying the language as Swedish.
Metric | dev | test
---|---|---
BLEU-1 | 0.39 | 0.22
BLEU-2 | 0.29 | 0.15
BLEU-3 | 0.22 | 0.09
BLEU-4 | 0.18 | 0.06
METEOR | 0.33 | 0.21
ROUGE-L | 0.38 | 0.27
CIDEr | 0.76 | 0.84
Table 4: Automatic evaluation on the development and test sets of SweQUAD-MC
only for generated questions ranked first.
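As a rough illustration of what the BLEU-1 column measures, the following is a simplified single-reference implementation (clipped unigram precision with a brevity penalty); it is not nlg-eval's exact implementation, which additionally handles multiple references and higher-order n-grams:

```python
from collections import Counter
import math

def bleu1(candidate, reference):
    # Modified (clipped) unigram precision times the brevity penalty,
    # for a single reference sentence.
    cand, ref = candidate.split(), reference.split()
    ref_counts = Counter(ref)
    overlap = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    precision = overlap / len(cand)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

# Question 3 vs. a near-identical paraphrase: 5 of 6 unigrams match,
# equal lengths, so the brevity penalty is 1 and BLEU-1 = 5/6.
score = bleu1("vad består den största delen av",
              "vad består den största delen från")
assert abs(score - 5 / 6) < 1e-9
```
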
## Appendix F Detailed IAA analysis
Inter-annotator agreement (IAA) for criteria C3 - C9 is presented in Table 5.
Let’s start by analyzing the agreement on the development set. We observed
that agreement on C5 - C8 is quite strong for both gold and generated QA-
pairs. On the other hand, $\gamma$ is strongly negative for C3 (the question
would be clearer if more information were provided), indicating that the
annotators' relative rankings are almost exactly opposite. At the same time,
$\kappa$ on C3 is quite weak as well, especially on the generated QA-pairs:
for the generated questions $\kappa=0.08$ and $\gamma=-0.7$, indicating almost
no agreement in absolute scores and a strong disagreement in relative
orderings.
Criterion | | dev gold | dev gen | test gold | test gen
---|---|---|---|---|---
C3 $\downarrow$ | $\kappa$ | 0.45 | 0.08 | 0.83 | 0.17
 | $\gamma$ | -1.0 | -0.7 | NA/1 | -0.09
C4 $\downarrow$ | $\kappa$ | 0.95 | 0.91 | 1.0 | 0.83
 | $\gamma$ | NA/1 | -1.0 | NA/1 | 1.0
C5 $\uparrow$ | $\kappa$ | 0.77 | 0.72 | 0.67 | 0.44
 | $\gamma$ | 0.82 | 0.93 | NA/4 | 0.81
C6 $\uparrow$ | $\kappa$ | 0.72 | 0.63 | 0.67 | 0.56
 | $\gamma$ | 0.79 | 0.88 | -1.0 | 0.78
C7 $\downarrow$ | $\kappa$ | 0.63 | 0.49 | 0.94 | 0.83
 | $\gamma$ | 0.62 | 1.0 | NA/1 | 1.0
C8 $\downarrow$ | $\kappa$ | 0.54 | 0.63 | 0.94 | 0.83
 | $\gamma$ | 0.4 | 0.73 | NA/1 | 0.93
C9 $\downarrow$ | $\kappa$ | 1.0 | 1.0 | 1.0 | 1.0
 | $\gamma$ | NA/1 | NA/1 | NA/1 | NA/1
Table 5: Inter-annotator agreement for criteria C3 - C9 on the development and
test sets of SweQUAD-MC. $\gamma=\text{NA/X}$ means that at least one
annotator gave the same score X to all evaluation triples, so it is impossible
to count concordant and discordant pairs.
The scores for the gold questions on C3 are much more peculiar, namely
$\gamma=-1.0$, while $\kappa=0.45$, indicating complete opposite relative
ordering between the annotators, while also having a moderate agreement in
absolute ranks. If we look closer at the data, it turns out that annotator A
gave almost all gold triples the score of 1 on C3, except one triple (let’s
call it $T_{x}$) that got the score of 3. Hence, this is the only triple that
can be used to establish relative ordering for the annotator A. Now that very
same $T_{x}$ got the score of 1 on C3 from the annotator B. This means that
$T_{x}$ will always be scored higher than all other triples for the annotator
A, whereas it will be scored lower than other triples for the annotator B.
This, in turn, means that all relative orderings are opposite between the
annotators, resulting in $\gamma=-1$. This example is a good illustration of
why neither $\kappa$ nor $\gamma$ alone is enough for assessing the IAA;
indeed, both are needed and should be interpreted with caution. A similar
situation is observed for the generated questions from the development set on
criterion C4, and for the gold questions from the test set on criterion C6.
# On Dyck Path Expansion Formulas for Rank 2 Cluster Variables
Amanda Burcroff
###### Abstract.
In this paper, we simplify and generalize formulas for the expansion of rank
$2$ cluster variables. In particular, we prove an equivalent, but simpler,
description of the colored Dyck subpaths framework introduced by Lee and
Schiffler. We then prove the conjectured bijectivity of a map constructed by
Feiyang Lin between collections of colored Dyck subpaths and compatible pairs,
objects introduced by Lee, Li, and Zelevinsky to study the greedy basis. We
use this bijection along with Rupel’s expansion formula for quantum greedy
basis elements, which sums over compatible pairs, to provide a quantum
generalization of Lee and Schiffler’s colored Dyck subpaths formula.
###### Contents
1. Introduction
2. Statement of Results
3. Preliminaries
4. Simplification of the Colored Dyck Subpaths Conditions
   1. Lee-Schiffler Expansion Formula
   2. Vertices and Slopes in $\mathcal{D}_{n}$
   3. Proof of the Simplification
5. Bijection between Compatible Pairs and Colored Subpaths of Dyck Paths
   1. Compatible Pairs and Lin’s Map
   2. Proof of Bijectivity
6. Quantization of Colored Dyck Subpaths
7. Further Directions
8. Acknowledgements
## 1\. Introduction
The theory of _cluster algebras_, introduced twenty years ago by Fomin and
Zelevinsky [7], gives us a combinatorial framework for understanding the
previously opaque nature of certain algebras. Each cluster algebra is
generated by its _cluster variables_ , which can be obtained via the recursive
process of _mutation_. The _Laurent phenomenon_ says that each cluster
variable in a rank-$n$ cluster algebra can be expressed as a Laurent
polynomial in the $n$ _initial_ cluster variables. While in general finding
explicit formulas for the Laurent expansions of arbitrary cluster variables is
difficult, there has been significant progress in understanding the expansions
in low-rank cluster algebras. In this work, we attempt to unify and simplify
some of the existing expansion formulas for rank-$2$ cluster variables and
their quantum generalizations.
In 2011, Lee and Schiffler provided the first combinatorial formula for the
Laurent expansion of arbitrary skew-symmetric rank-$2$ cluster variables [12,
Theorem 9]. They expressed the coefficients as sums over certain collections
of non-overlapping _colored subpaths_ of a _maximal Dyck path_. This
established the positivity of the Laurent expansion in skew-symmetric rank $2$
cluster algebras. Lee and Schiffler [13, Theorem 11] and Rupel [18, Theorem 6]
then generalized this formula (in the skew-symmetric and skew-symmetrizable
cases, respectively) to the non-commutative rank-2 setting, giving each
collection a weight expressed as an ordered product of two non-commuting
initial cluster variables. In 2012, Lee, Li, and Zelevinsky [11] defined the
_greedy basis_ for rank-$2$ cluster algebras, which includes the cluster
variables. They provided a combinatorial formula for the Laurent expansion of
each greedy basis element as a sum over _compatible pairs_, certain
collections of edges of a maximal Dyck path [11, Theorem 11]. Rupel later gave
a non-commutative analogue of this formula, which specializes to a formula for
the coefficients in the _quantum rank-$2$ cluster algebra_ setting [19,
Corollary 5.4]. In particular, each compatible pair is weighted by a
corresponding power of a quantum parameter $q$, where the exponent is computed
as a sum over all pairs of edges in the maximal Dyck path. Rupel [20, Theorem
1.2] has also provided a quantum analogue to the Caldero-Chapoton expansion
formula [2] for rank-$2$ cluster variables expressed as a sum over
indecomposable valued-quiver representations.
To summarize, there are two different combinatorial formulas for the Laurent
expansion of skew-symmetric rank-$2$ cluster variables: one in terms of
collections of colored Dyck subpaths [12] and the other in terms of compatible
pairs [11]. The combinatorics of collections of colored Dyck subpaths and
compatible pairs are somewhat similar, suggesting that there might be a nice
correspondence between them. A correspondence was known to Lee, Li, and
Zelevinsky [11] and they suggested in their 2012 paper that they planned to
provide details in the future, but this has not yet appeared in the
literature. In 2021, Feiyang Lin constructed a map between a superset of the
collections of colored Dyck subpaths and compatible pairs and conjectured that
the map restricts to a bijection in [16, Conjecture 3]. Lin made partial
progress toward proving this, reducing the conjecture to a technical statement
[16, Conjecture 4].
In this work, we start by providing a simplification of Lee-Schiffler’s
formula for rank-$2$ cluster variables in terms of colored Dyck subpath
conditions. We then use our simpler formula to prove Lin’s conjectures [16,
Conjectures 3 & 4] that the map constructed between collections of colored
Dyck subpaths and compatible pairs is indeed a bijection. (Our methods do not
rely on the technical reformulation presented by Lin.) This bijection gives an
efficient method for generating all compatible pairs in the cluster variable
case. We then use the bijection along with Rupel’s quantum weighting of
compatible pairs [19] to provide a quantum version of Lee and Schiffer’s
rank-$2$ expansion formula for cluster variables. This new formula has the
advantage of requiring less computation than that in [19, Corollary 5.4] and
explicitly calculating the coefficients in the quantum case, rather than
expressing each term as an ordered product as in [18]. It is also more
elementary than the expansion formula in [20], which is based on the theory of
valued quiver representations.
The paper is organized as follows. In Section 2, we give an overview of the
results. Section 3 contains some preliminaries concerning maximal Dyck paths.
The proof of the simplification of Lee and Schiffler’s [12] colored Dyck
subpath conditions is the focus of Section 4. Section 5 contains the proof of
Lin’s conjectures [16, Conjectures 3 & 4], establishing a bijection between
collections of colored Dyck subpaths from the Lee-Schiffler [12] setting and
compatible pairs from the Lee-Li-Zelevinsky [11] setting. This bijection is
applied to Rupel’s [19] quantum weighting on compatible pairs to yield a
quantum analogue of Lee-Schiffler’s [12] expansion formula in Section 6. We
conclude with a discussion of further directions in Section 7.
## 2\. Statement of Results
For a positive integer $r$ and variables $X_{1},X_{2}$, we consider the
sequence $\\{X_{n}\\}_{n\in\mathbb{Z}}$ of expressions recursively defined by
(2.1) $X_{n+1}=\frac{X_{n}^{r}+1}{X_{n-1}}\,.$
This sequence is precisely the set of variables of the rank-$2$ cluster
algebra $\mathcal{A}(r,r)$ associated to the $r$-Kronecker quiver, which
consists of two vertices with $r$ arrows between them. The sequence is
periodic when $r=1$, and otherwise all $X_{n}$ are distinct. For background on
cluster algebras, see [6].
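The recursion (2.1) and its periodicity at $r=1$ can be checked numerically with exact rational arithmetic; this is a sanity check, not part of the paper:

```python
from fractions import Fraction

def cluster_seq(r, x1, x2, n_terms):
    # Iterate the recursion X_{n+1} = (X_n^r + 1) / X_{n-1} exactly.
    xs = [Fraction(x1), Fraction(x2)]
    for _ in range(n_terms - 2):
        xs.append((xs[-1] ** r + 1) / xs[-2])
    return xs

# For r = 1 the sequence is periodic with period 5 (the pentagon recurrence).
seq = cluster_seq(1, 2, 3, 7)
assert seq[5] == seq[0] and seq[6] == seq[1]
```

For $r\geq 2$ the same iteration produces a sequence of distinct rationals, consistent with the claim that all $X_n$ are then distinct.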
###### Definition 2.1.
The _maximal Dyck path_ $\mathcal{P}(a,b)$ is the path proceeding by unit
north and east steps from $(0,0)$ to $(a,b)$ that is closest to the line
segment between $(0,0)$ and $(a,b)$ without crossing strictly above it. For
two vertices $u,w$ along such a path, let $s(u,w)$ denote the slope of the
line segment between them.
Let $\\{c_{n}\\}_{n=1}^{\infty}$ be the sequence of non-negative integers
defined recursively by:
(2.2) $c_{1}=0,\ c_{2}=1,\text{ and }c_{n}=rc_{n-1}-c_{n-2}\text{ for }n\geq
3\,.$
Let $\mathcal{C}_{n}=\mathcal{P}(c_{n-1},c_{n-2})$ and
$\mathcal{D}_{n}=\mathcal{P}(c_{n-1}-c_{n-2},c_{n-2})$. We label the leftmost
vertex at each height of $\mathcal{D}_{n}$ by $v_{i}$ for $i=0,\dots,c_{n-2}$,
with the subindex increasing from south to north; such vertices are called
_northwest corners_. Let $\gamma(i,k)$ be the subpath spanning from $v_{i}$ to
$v_{k}$ for any $0\leq i<k\leq c_{n-2}$. An example of the maximal Dyck path
$\mathcal{D}_{5}=\mathcal{P}(5,3)$ is shown in Figure 1.
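A quick computation of the sequence $c_n$ (here for $r=3$) confirms the dimensions of $\mathcal{D}_5$ quoted above:

```python
def c_seq(r, n_max):
    # c_1 = 0, c_2 = 1, c_n = r*c_{n-1} - c_{n-2}; c[n] holds c_n for n >= 1.
    c = [None, 0, 1]
    for _ in range(3, n_max + 1):
        c.append(r * c[-1] - c[-2])
    return c

c = c_seq(3, 6)                        # r = 3: c_1..c_6 = 0, 1, 3, 8, 21, 55
assert (c[4] - c[3], c[3]) == (5, 3)   # D_5 = P(c_4 - c_3, c_3) = P(5, 3)
```
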
###### Theorem 2.2.
Let $t(i)$ be the minimum integer greater than $i$ such that
$s(v_{i},v_{t(i)})>s$. Then we have $t(i)-i=c_{m}-wc_{m-1}$ for a unique
choice of $2\leq w\leq r-1$ and $3\leq m\leq n-2$.
This result allows us to simplify the expansion formula of Lee and Schiffler,
which is briefly described below. The next few definitions and 2.4 emulate the
results of Lee and Schiffler, except for slight modifications due to the
simplified coloring conditions above.
###### Definition 2.3 (cf. 4.1).
For any $0\leq i<k\leq c_{n-2}$, let $\gamma(i,k)$ be the subpath of
$\mathcal{D}_{n}$ from $v_{i}$ to $v_{k}$, which is assigned a color as
follows:
1. (1)
If $s(v_{i},v_{t})\leq s$ for all $t$ such that $i<t\leq k$, then
$\gamma(i,k)$ is blue.
2. ($2^{*}$)
Otherwise, let $m,w$ be chosen with respect to $i$ as in 2.2. Then we say
$\gamma(i,k)$ is $(m,w)$-brown. (The color brown was chosen because it is the
combination of Lee and Schiffler’s red and green cases.)
A _subpath_ of $\mathcal{D}_{n}$ is a path of the form $\gamma(i,k)$ or a
single edge $\alpha_{i}$. We denote the set of such subpaths by
$\mathcal{P}^{\prime}(\mathcal{D}_{n})$. We define the set
$\mathcal{F}^{\prime}(\mathcal{D}_{n})$ to contain any collection of subpaths
in $\mathcal{P}^{\prime}(\mathcal{D}_{n})$ satisfying that no two subpaths
share an edge, two subpaths share a vertex only if at least one of them is a
single edge, and at least one of the $c_{m-1}-2c_{m-2}$ edges preceding each
$(m,w)$-brown subpath is contained in another subpath. Given
$\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})$, the quantity $|\beta|_{1}$ is
defined additively over the subpaths, taking value $k-i$ on $\gamma(i,k)$ and
value $0$ on single edges. The quantity $|\beta|_{2}$ is the total number of
edges in $\beta$. This yields the following expansion formula for the cluster
variables.
###### Corollary 2.4 (analogue of [12, Theorem 9]).
Consider the cluster algebra $\mathcal{A}(r,r)$ with cluster variables $X_{i}$
for $i\in\mathbb{Z}$. For $n\geq 4$, we have
$X_{n}=X_{1}^{-c_{n-1}}X_{2}^{-c_{n-2}}\sum_{\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})}X_{1}^{r|\beta|_{1}}X_{2}^{r\left(c_{n-1}-|\beta|_{2}\right)}$
and
$X_{3-n}=X_{2}^{-c_{n-1}}X_{1}^{-c_{n-2}}\sum_{\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})}X_{2}^{r|\beta|_{1}}X_{1}^{r\left(c_{n-1}-|\beta|_{2}\right)}\,.$
A generalization of the above expansion result to the case of skew-symmetric
rank-$2$ cluster algebras with coefficients is presented at the end of
Subsection 4.3 (see 4.18).
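As a sanity check of 2.4 in the smallest nontrivial case, the collections in $\mathcal{F}^{\prime}(\mathcal{D}_{4})$ for $r=2$ can be listed by hand and the resulting sum compared against the defining recursion. Here $\mathcal{D}_4=\mathcal{P}(1,1)$, and the hand-derived enumeration below (not generated from the definition) consists of the empty collection, each single edge, both single edges, and the blue subpath $\gamma(0,1)$:

```python
from fractions import Fraction

# (|beta|_1, |beta|_2) for the five hand-listed collections in F'(D_4), r = 2:
# {}, {a1}, {a2}, {a1, a2}, {gamma(0,1)}.
collections = [(0, 0), (0, 1), (0, 1), (0, 2), (1, 2)]
r, c3, c2 = 2, 2, 1  # c_{n-1} = c_3 = 2 and c_{n-2} = c_2 = 1 when n = 4

X1, X2 = Fraction(5, 3), Fraction(7, 2)  # arbitrary nonzero test values
formula = X1 ** -c3 * X2 ** -c2 * sum(
    X1 ** (r * b1) * X2 ** (r * (c3 - b2)) for b1, b2 in collections
)

X3 = (X2 ** r + 1) / X1  # the defining recursion (2.1)
X4 = (X3 ** r + 1) / X2
assert formula == X4
```

Symbolically, the sum is $X_2^4+2X_2^2+1+X_1^2=(X_2^2+1)^2+X_1^2$, matching $X_4$ computed directly from the recursion.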
###### Example 2.5.
The collection of colored subpaths
$\beta=\\{\gamma(0,1),\alpha_{6},\gamma(2,3)\\}$ is in
$\mathcal{F}^{\prime}(\mathcal{D}_{5})$. The subpath $\gamma(0,1)$ is blue,
while the subpath $\gamma(2,3)$ is $(3,2)$-brown. This collection is depicted
in Figure 1 for $r=3$, where the single edge $\alpha_{6}$ is represented by an
orange edge. Note that in this case we have $|\beta|_{1}=2$ and
$|\beta|_{2}=6$.
We now discuss the bijection between colored Dyck subpaths of
$\mathcal{D}_{n}$ and compatible pairs in $\mathcal{C}_{n}$. Given two
vertices $u,w$ in a maximal Dyck path $\mathcal{P}(a,b)$, let
$\overrightarrow{uw}$ denote the subpath proceeding east from $u$ to $w$,
continuing cyclically around $\mathcal{P}(a,b)$ if $u$ is to the east of $w$.
Let $|uw|_{1}$ (resp., $|uw|_{2}$) denote the number of horizontal (resp.,
vertical) edges of $\overrightarrow{uw}$. Given a set of horizontal edges
$S_{1}$ and vertical edges $S_{2}$ in $\mathcal{P}(a,b)$, the pair
$(S_{1},S_{2})$ is _compatible_ if, for every edge in $S_{1}$ with left vertex
$u$ and every edge $S_{2}$ with top vertex $w$, there exists a lattice point
$t\neq u,w$ in the subpath $\overrightarrow{uw}$ such that
$|tw|_{1}=r|\overrightarrow{tw}\cap S_{2}|_{2}\text{ or
}|ut|_{2}=r|\overrightarrow{ut}\cap S_{1}|_{1}\,.$
Let the horizontal (resp., vertical) edges of $\mathcal{C}_{n}$ be labeled by
$\eta_{i}$ (resp., $\nu_{i}$), increasing to the east (resp., north). Lin
defined the following map $\Phi$ from $\mathcal{F}^{\prime}(\mathcal{D}_{n})$
to pairs $(S_{1},S_{2})$ in $\mathcal{C}_{n}$. Note that while we define
$\Phi$ as a map on blue/brown colored subpaths, it was originally defined for
Lee-Schiffler’s blue/green/red colored subpaths, and the two definitions are
essentially identical.
Figure 1. The leftmost image shows the maximal Dyck path $\mathcal{D}_{5}$ for
$r=3$, along with the corresponding $5\times 3$ grid and main diagonal. The
northwest corners are labeled and depicted as filled vertices. The center
image is a collection $\beta$ of colored Dyck subpaths in
$\mathcal{F}^{\prime}(\mathcal{D}_{5})$, and the rightmost image is the
compatible pair on $\mathcal{C}_{5}$ that $\beta$ maps to under $\Phi$, where
an edge is thickened whenever it is included in the compatible pair.
###### Definition 2.6 ([16]).
Given $\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})$, let
$\Phi(\beta)=(\Phi_{1}(\beta),\Phi_{2}(\beta))$, where
$\displaystyle\Phi_{1}(\beta)$ $\displaystyle=\\{\eta_{s}:\alpha_{s}\text{ is
not a part of any subpath of }\beta\\}\,,$ $\displaystyle\Phi_{2}(\beta)$
$\displaystyle=\\{\nu_{s}:\gamma(i,k)\in\beta\text{ for some }i<s\leq k\\}\,.$
###### Example 2.7.
The compatible pair
$\left(\\{\eta_{4},\eta_{5}\\},\\{\nu_{1},\nu_{3}\\}\right)$ obtained by
applying $\Phi$ to the collection of subpaths
$\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{5})$ from 2.5 is shown in Figure 1.
###### Theorem 2.8.
The map $\Phi$ is a bijection between collections of colored subpaths in
$\mathcal{F}^{\prime}(\mathcal{D}_{n})$ and compatible pairs in
$\mathcal{C}_{n}$.
Switching to the quantum setting, we will now work inside the quantum torus
$\mathcal{T}:=\mathbb{Z}[q^{\pm 1}]\langle Z_{1}^{\pm 1},Z_{2}^{\pm
1}:Z_{1}Z_{2}=q^{2}Z_{2}Z_{1}\rangle$. The quantum rank-$2$ $r$-Kronecker
cluster algebra $\mathcal{A}_{q}(r,r)$ is the $\mathbb{Z}[q^{\pm 1}]$
subalgebra of the skew field of fractions of $\mathcal{T}$ generated by the
quantum cluster variables $\\{Z_{n}\\}_{n\in\mathbb{Z}}$, which follow the
recursion $Z_{n+1}Z_{n-1}=q^{-r}Z_{n}^{r}+1$ (cf. Equation 2.1). We use the
bijectivity of $\Phi$ along with Rupel’s quantum weighting on compatible pairs
[19] to construct a quantum weighting of collections of colored subpaths.
For
$\beta=\\{\beta_{1},\dots,\beta_{t}\\}\in\mathcal{F}^{\prime}(\mathcal{D}_{n})$,
where $\beta_{i}$ appears to the left of $\beta_{i+1}$, we define the set of
_complementary subpaths_
$\overline{\beta_{0}},\overline{\beta_{1}},\dots,\overline{\beta_{t}}$ as
follows. For $1\leq i\leq t-1$, let $\overline{\beta_{i}}$
contain all edges and northwest corners between the end of $\beta_{i}$ and the
start of $\beta_{i+1}$, where a northwest corner on the boundary of a path is
included in $\overline{\beta_{i}}$ unless it is the right endpoint of a brown
or blue subpath. We set $\overline{\beta_{0}}$ to be the portion of the path
before $\beta_{1}$ excluding $v_{0}$, and we set $\overline{\beta_{t}}$ to be
the portion of the path after $\beta_{t}$ including $v_{c_{n-2}}$. Note that
it is possible for some $\overline{\beta_{i}}$ to be empty. Let
$|\overline{\beta_{i}}|_{1}$ (resp., $|\overline{\beta_{i}}|_{2}$) denote the
number of northwest corners (resp., edges) in $\overline{\beta_{i}}$.
###### Definition 2.9.
For
$\beta=\\{\beta_{1},\dots,\beta_{t}\\}\in\mathcal{F}^{\prime}(\mathcal{D}_{n})$,
we let
$w_{q}(\beta)=(c_{n-1}+c_{n-2}-1)+\sum_{j=0}^{t}r|\overline{\beta_{j}}|_{2}\left(\sum_{i=1}^{t}(-1)^{\mathbbm{1}_{i<j}}|\beta_{i}|_{2}\right)+\left(r|\overline{\beta_{j}}|_{1}-r^{2}|\overline{\beta_{j}}|_{2}\right)\left(\sum_{i=1}^{t}(-1)^{\mathbbm{1}_{i<j}}|\beta_{i}|_{1}\right),$
where $\mathbbm{1}_{i<j}$ takes value $1$ when $i<j$ and $0$ otherwise. We
then set
$u_{q}(\beta)=w_{q}(\beta)-(c_{n-1}+c_{n-2}-1)+(c_{n-1}-r|\beta|_{1})(c_{n}-r|\beta|_{2})\,.$
###### Example 2.10.
For the collection $\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{5})$ from 2.5,
we have $\overline{\beta_{0}}=\overline{\beta_{3}}=\emptyset$,
$\overline{\beta_{1}}=\\{\alpha_{4},\alpha_{5}\\}$, and
$\overline{\beta_{2}}=\\{v_{2}\\}$. We thus have $w_{q}(\beta)=10$.
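The weight $w_q(\beta)$ from 2.9 can be recomputed mechanically. The helper below takes the statistics $(|\beta_i|_1,|\beta_i|_2)$ and $(|\overline{\beta_j}|_1,|\overline{\beta_j}|_2)$ read off above; the encoding is ours, introduced only for this check:

```python
def w_q(r, c_n1, c_n2, betas, bars):
    # betas: [(|beta_i|_1, |beta_i|_2)] for i = 1..t
    # bars:  [(|bar_j|_1, |bar_j|_2)] for j = 0..t
    t = len(betas)
    total = c_n1 + c_n2 - 1
    for j, (b1, b2) in enumerate(bars):
        s2 = sum((-1 if i < j else 1) * betas[i - 1][1] for i in range(1, t + 1))
        s1 = sum((-1 if i < j else 1) * betas[i - 1][0] for i in range(1, t + 1))
        total += r * b2 * s2 + (r * b1 - r * r * b2) * s1
    return total

# Example 2.10: beta = {gamma(0,1), alpha_6, gamma(2,3)} in D_5, r = 3,
# with bar_0 = bar_3 empty, bar_1 = {alpha_4, alpha_5}, bar_2 = {v_2}.
betas = [(1, 3), (0, 1), (1, 2)]
bars = [(0, 0), (0, 2), (1, 0), (0, 0)]
assert w_q(3, 8, 3, betas, bars) == 10
```
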
We prove that the quantum cluster variable Laurent coefficients can be
expressed as a sum over weighted collections of subpaths in
$\mathcal{F}^{\prime}(\mathcal{D}_{n})$.
###### Theorem 2.11.
Consider the quantum cluster algebra $\mathcal{A}_{q}(r,r)$ with quantum
cluster variables $Z_{i}$ for $i\in\mathbb{Z}$. For $n\geq 4$, we have
$Z_{n}=Z_{1}^{-c_{n-1}}Z_{2}^{-c_{n-2}}\sum_{\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})}q^{u_{q}(\beta)}Z_{1}^{r|\beta|_{1}}Z_{2}^{r\left(c_{n-1}-|\beta|_{2}\right)}$
and
$Z_{3-n}=Z_{2}^{-c_{n-1}}Z_{1}^{-c_{n-2}}\sum_{\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})}q^{u_{q}(\beta)}Z_{2}^{r|\beta|_{1}}Z_{1}^{r\left(c_{n-1}-|\beta|_{2}\right)}\,.$
## 3\. Preliminaries
Let $r\geq 2$ be fixed throughout this paper. Both the Lee-Schiffler and Lee-
Li-Zelevinsky expansion formulas involve sums over certain collections of
edges in a maximal Dyck path. Moreover, the width and height of these Dyck
paths have certain recursive properties that are used in the proofs of both
formulas. We begin by setting up a framework for studying these paths and
describing their recursive behavior.
Recall the sequence $c_{n}$ defined recursively by Equation 2.2. While the
indexing of the sequence $\\{c_{n}\\}_{n\geq 1}$ is identical to the indexing
in the work of Lee and Schiffler [12], the indexing is shifted by one from
that defined by Lin [16], i.e., it is equivalent to $\\{c_{n-1}\\}_{n\geq 1}$
in Lin’s work. It is straightforward to check that for $n>1$, the quantities
$c_{n}$ and $c_{n+1}$ are relatively prime, hence so are $c_{n}$ and
$c_{n+1}-c_{n}$. Thus the only vertices of $\mathcal{D}_{n}$ and
$\mathcal{C}_{n}$ that lie on the main diagonal are the first and last.
Fix $a,b\in\mathbb{N}$. Consider a rectangle with vertices $(0,0)$, $(0,b)$,
$(a,0)$, and $(a,b)$ having a designated diagonal from $(0,0)$ to $(a,b)$.
###### Definition 3.1.
A _Dyck path_ is a lattice path in $\mathbb{Z}^{2}$ starting at $(0,0)$ and
ending at a lattice point $(a,b)$ where $a,b\geq 0$, proceeding by only unit
north and east steps and never passing strictly above the diagonal. Given a
Dyck path $P$, we denote the number of east steps by $|P|_{1}$ and the number
of north steps by $|P|_{2}$. The _length_ of the Dyck path $P$ is the quantity
$|P|_{1}+|P|_{2}$. We denote the set of lattice points contained in the Dyck
path $P$, including the left and right endpoints, by $V(P)$.
The Dyck paths from $(0,0)$ to $(a,b)$ form a partially ordered set by
comparing the heights at all vertices. The maximal Dyck path
$\mathcal{P}(a,b)$, as defined in 2.1, is the maximal element under this
partial order. We focus on the following two classes of maximal Dyck paths,
defined for $n\geq 3$, $\mathcal{C}_{n}=\mathcal{P}(c_{n-1},c_{n-2})$ and
$\mathcal{D}_{n}=\mathcal{P}(c_{n-1}-c_{n-2},c_{n-2})$.
Recall that a vertex of a maximal Dyck path $P$ is a _northwest corner_ if
there are no vertices directly north of (equivalently, to the east of) it. In
$\mathcal{D}_{n}$, these are precisely the vertices labeled by $v_{i}$ for
some $0\leq i\leq c_{n-2}$.
Figure 2. The maximal Dyck paths $\mathcal{D}_{6}$ (above) and
$\mathcal{C}_{6}$ (below) are shown with some of their vertex and edge labels
for $r=3$. Each northwest corner of $\mathcal{D}_{6}$ is labeled with both its
corresponding $w_{i}$ and $v_{j}$ label. Some edges of $\mathcal{C}_{6}$ are
labeled; the $\nu_{i}$’s refer to the vertical edge left of the label, and the
$\eta_{j}$’s refer to the horizontal edge below the label.
When $a$ and $b$ are relatively prime, as is the case for $\mathcal{C}_{n}$
and $\mathcal{D}_{n}$, we can associate to this Dyck path the _(lower)
Christoffel word_ of slope $\frac{b}{a}$ on the alphabet $\\{E,N\\}$. This
word can be constructed by reading the edges of the maximal Dyck path from
$(0,0)$ to $(a,b)$, recording an $E$ for each east step and an $N$ for each
north step. For further background on Christoffel words, see [1].
###### Example 3.2.
Let $r=3$. The Christoffel words corresponding to the maximal Dyck paths
$\mathcal{D}_{6}$ and $\mathcal{C}_{6}$ depicted in Figure 2 are
$E^{2}NE^{2}NENE^{2}NE^{2}NENE^{2}NEN\text{ and
}E^{3}NE^{3}NE^{2}NE^{3}NE^{3}NE^{2}NE^{3}NE^{2}N\,,$
respectively.
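Both words can be regenerated from 2.1's characterization of the maximal Dyck path via a greedy construction: step north exactly when the next lattice point stays weakly below the diagonal. This also verifies the $N\mapsto EN$ morphism relating the two words:

```python
def christoffel_word(a, b):
    # Maximal Dyck path P(a, b): from (x, y), step north exactly when
    # (x, y + 1) is weakly below the diagonal, i.e. a * (y + 1) <= b * x.
    x, y, word = 0, 0, []
    while (x, y) != (a, b):
        if a * (y + 1) <= b * x:
            y += 1
            word.append("N")
        else:
            x += 1
            word.append("E")
    return "".join(word)

d6 = christoffel_word(13, 8)  # D_6 = P(c_5 - c_4, c_4) = P(13, 8) for r = 3
c6 = christoffel_word(21, 8)  # C_6 = P(c_5, c_4) = P(21, 8)
assert d6 == "EENEENENEENEENENEENEN"
assert c6 == "EEENEEENEENEEENEEENEENEEENEEN"
# The morphism N -> EN sends the word of D_n to that of C_n (cf. Remark 3.3).
assert d6.replace("N", "EN") == c6
```
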
###### Remark 3.3.
The Christoffel word corresponding to $\mathcal{C}_{n}$ is obtained by
applying the morphism $\theta=\\{N\mapsto EN\\}$, i.e., the map that replaces
each instance of the letter $N$ with the string $EN$, to the Christoffel word
corresponding to $\mathcal{D}_{n}$. This follows directly from, for example,
[1, Lemma 2.2].
###### Observation 3.4.
It is straightforward to calculate that
$V(\mathcal{C}_{3})=\\{(0,0),(1,0)\\}\text{ and
}V(\mathcal{C}_{4})=\\{(0,0),(1,0),\dots,(r,0),(r,1)\\}$
As we shall see, both of these families of Dyck paths have a recursive
structure. The following lemma is a special case of a result of Rupel.
###### Lemma 3.5 ([18, Lemma 3]).
For all $n\geq 4$, the maximal Dyck path $\mathcal{D}_{n}$ consists of $r-1$
copies of $\mathcal{D}_{n-1}$ followed by a copy of $\mathcal{D}_{n-1}$ with a
prefix $\mathcal{D}_{n-2}$ removed. In particular, $\mathcal{D}_{n-1}$ (resp.,
$\mathcal{C}_{n-1}$) is a subpath of $\mathcal{D}_{n}$ (resp.,
$\mathcal{C}_{n}$).
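For $r=3$ the lemma can be verified directly on the Christoffel words from Example 3.2 (the words below are copied from that example, with $\mathcal{D}_4=\mathcal{P}(2,1)$ having word EEN):

```python
# Lemma 3.5 for r = 3: D_6 is (r - 1) = 2 copies of D_5 followed by
# a copy of D_5 with its D_4 prefix removed.
d4 = "EEN"                      # D_4 = P(2, 1)
d5 = "EENEENEN"                 # D_5 = P(5, 3)
d6 = "EENEENENEENEENENEENEN"    # D_6 = P(13, 8), from Example 3.2
assert d6 == d5 * 2 + d5[len(d4):]
```
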
This allows us to define the limit of these paths, which can be realized by
taking a union of finite subpaths.
###### Definition 3.6.
Let $\mathcal{C}$ (resp., $\mathcal{D}$) be the infinite path on
$\mathbb{Z}^{2}$ formed by the union $\bigcup_{n\geq 3}\mathcal{C}_{n}$
(resp., $\bigcup_{n\geq 3}\mathcal{D}_{n}$). We identify the paths
$\mathcal{C}_{n}$ and $\mathcal{D}_{n}$ with the prefix of the same length of
$\mathcal{C}$ and $\mathcal{D}$, respectively. Thus, the vertices of
$\mathcal{D}$ are labeled by $w_{i}$ and the northwest corners by $v_{j}$, as
described after 2.1 in Section 2. Similarly, horizontal edges of $\mathcal{C}$
are labeled by $\eta_{i}$ and the vertical edges are labeled by $\nu_{j}$.
## 4\. Simplification of the Colored Dyck Subpaths Conditions
### 4.1. Lee-Schiffler Expansion Formula
We first recall the original expansion formula given by Lee and Schiffler [12]
for rank-two skew-symmetric cluster variables. This requires us to set up the
language of colored subpaths in a Dyck path via Lee and Schiffler’s
conventions, which differs from that in Section 2. We then describe the map
between certain non-overlapping collections of colored subpaths, namely from
$\mathcal{F}(\mathcal{D}_{n})$ as defined by Lee-Schiffler to the set
$\mathcal{F}^{\prime}(\mathcal{D}_{n})$ which we defined after 2.3 in Section
2.
Let $s$ denote the slope of the main diagonal of $\mathcal{D}_{n}$, so
$s=\frac{c_{n-2}}{c_{n-1}-c_{n-2}}$.
###### Definition 4.1 ([12], cf. 2.3).
For any $0\leq i<k\leq c_{n-2}$, let $\alpha(i,k)$ be the subpath of
$\mathcal{D}_{n}$ defined as follows:
1. (1)
If $s(v_{i},v_{t})\leq s$ for all $t$ such that $i<t\leq k$, then
$\alpha(i,k)$ is defined to be the subpath from $v_{i}$ to $v_{k}$; each such
subpath is called blue.
2. (2)
If $s(v_{i},v_{t})>s$ for some $i<t\leq k$, then
1. (2-a)
if the smallest such $t$ is of the form $i+c_{m}-wc_{m-1}$ for some $3\leq
m\leq n-2$ and $1\leq w\leq r-2$, then $\alpha(i,k)$ is defined to be the
subpath from $v_{i}$ to $v_{k}$; each such subpath is called $(m,w)$-green.
2. (2-b)
otherwise, $\alpha(i,k)$ is set to be the subpath from the vertex immediately
below $v_{i}$ to $v_{k}$; each such subpath is called red.
Each such pair $i,k$ corresponds to precisely one subpath of
$\mathcal{D}_{n}$. Denote the single edges of $\mathcal{D}_{n}$ by
$\alpha_{1},\dots,\alpha_{c_{n-1}}$, proceeding from southwest to northeast,
and let
$\mathcal{P}(\mathcal{D}_{n})=\\{\alpha(i,k):0\leq i<k\leq
c_{n-2}\\}\cup\\{\alpha_{1},\dots,\alpha_{c_{n-1}}\\}\,.$
The formula involves sums over collections of subsets of
$\mathcal{P}(\mathcal{D}_{n})$ satisfying certain non-overlapping
requirements. In particular, Lee and Schiffler set
$\displaystyle\mathcal{F}(\mathcal{D}_{n})=\\{\\{\beta_{1},\dots,\beta_{t}\\}:t\geq 0;\ \beta_{j}\in\mathcal{P}(\mathcal{D}_{n})\text{ for all }1\leq j\leq t;\ \text{if }j\neq j^{\prime}\text{, then }\beta_{j}\text{ and }\beta_{j^{\prime}}\text{ have no common edge};\ \text{if }\beta_{j}=\alpha(i,k)\text{ and }\beta_{j^{\prime}}=\alpha(i^{\prime},k^{\prime})\text{, then }i\neq k^{\prime}\text{ and }i^{\prime}\neq k;\ \text{and if }\beta_{j}\text{ is }(m,w)\text{-green, then at least one of the }(c_{m-1}-wc_{m-2})\text{ edges preceding }v_{i}\text{ is contained in some }\beta_{j^{\prime}}\\}\,.$
For any collection of subpaths $\beta$, we associate two non-negative integers
$|\beta|_{1}$ and $|\beta|_{2}$. The first quantity $|\beta|_{1}$ is defined
to be $0$ on single edges and $k-i$ on $\alpha(i,k)$, then extended additively
on unions of these subpaths. The second quantity $|\beta|_{2}$ is the total
number of edges $\alpha_{i}$ covered by the subpaths in $\beta$. We can now
state the original formulation of Lee and Schiffler’s expansion result.
###### Theorem 4.2 ([12, Theorem 9]).
For $n\geq 4$, we have
$X_{n}=X_{1}^{-c_{n-1}}X_{2}^{-c_{n-2}}\sum_{\beta\in\mathcal{F}(\mathcal{D}_{n})}X_{1}^{r|\beta|_{1}}X_{2}^{r\left(c_{n-1}-|\beta|_{2}\right)}$
and
$X_{3-n}=X_{2}^{-c_{n-1}}X_{1}^{-c_{n-2}}\sum_{\beta\in\mathcal{F}(\mathcal{D}_{n})}X_{2}^{r|\beta|_{1}}X_{1}^{r\left(c_{n-1}-|\beta|_{2}\right)}\,.$
In order to show that our expansion formula 2.4 is equivalent to 4.2, we
define the map $\chi$ which connects the colored subpaths in both settings.
###### Definition 4.3.
We define a map
$\chi:\mathcal{F}(\mathcal{D}_{n})\to\mathcal{F}^{\prime}(\mathcal{D}_{n})$
that modifies the colored subpaths of $\beta\in\mathcal{F}(\mathcal{D}_{n})$
via the following rules:
* •
replace each red subpath in $\beta$ with the two subpaths obtained by
splitting it into its first edge and the remainder of the path,
* •
change the color of each $(m,w)$-green subpath to $(m,w)$-brown, and
* •
leave the blue subpaths and single edges unchanged.
Figure 3. The top image depicts a collection of colored Dyck subpaths in
$\mathcal{F}(\mathcal{D}_{6})$, namely,
$\\{\alpha(0,1),\alpha_{4},\alpha(2,5),\alpha_{16},\alpha(6,8)\\}$. The bottom
image depicts the corresponding collection of colored Dyck subpaths in
$\mathcal{F}^{\prime}(\mathcal{D}_{6})$, namely
$\\{\gamma(0,1),\alpha_{4},\alpha_{6},\gamma(2,5),\alpha_{16},\gamma(6,8)\\}$,
obtained by applying the map $\chi$ to the top collection.
Note that the set of edges covered by $\beta$ is preserved under $\chi$. 4.17
establishes that $\chi$ is well-defined and is in fact a weight-preserving
bijection with respect to $|\beta|_{1}$ and $|\beta|_{2}$. The statement and
proof of 4.17 appear in Subsection 4.3.
### 4.2. Vertices and Slopes in $\mathcal{D}_{n}$
We now prove several results relating to the positions of vertices in
$\mathcal{D}_{n}$ and the slopes of the line segments between northwest
corners of $\mathcal{D}_{n}$. To help illuminate the recursive structure of
the infinite path $\mathcal{D}$, which contains each $\mathcal{D}_{n}$ as a
prefix, we define a map taking vertices to northwest corners in $\mathcal{D}$.
###### Definition 4.4.
Let $\mu:V(\mathcal{D})\to V(\mathcal{D})$ be the map sending $w_{i}$, the
$i^{\text{th}}$ vertex of $\mathcal{D}$, to $v_{i}$, the $i^{\text{th}}$
northwest corner of $\mathcal{D}$.
The following result describes the behavior of $\mu$ in terms of coordinates.
###### Lemma 4.5.
If $w_{i}\in V(\mathcal{D})$ has coordinates $(x,y)$, then
$\mu(w_{i})=\left((r-1)x+(r-2)y,x+y\right)$.
###### Proof.
Fix $n$ large enough that $(x,y)\in V(\mathcal{D}_{n})$. The claim is then
equivalent to showing that the following inequalities hold:
$\frac{x+y}{(r-1)x+(r-2)y}\leq\frac{c_{n-1}}{c_{n}-c_{n-1}}<\frac{x+y+1}{(r-1)x+(r-2)y}\,.$
In order to prove the first inequality, we first note that
$\frac{c_{n-2}}{c_{n-1}-c_{n-2}}\geq\frac{y}{x}\,,$
which holds since $(x,y)\in V(\mathcal{D}_{n})$. Cross multiplying and adding
$yc_{n-2}$ to both sides yields
$\frac{c_{n-2}}{c_{n-1}}\geq\frac{y}{x+y}\,.$
Hence we have
$\frac{c_{n-1}}{c_{n}-c_{n-1}}=\frac{c_{n-1}}{(r-1)c_{n-1}-c_{n-2}}=\frac{1}{(r-1)-\frac{c_{n-2}}{c_{n-1}}}\geq\frac{1}{(r-1)-\frac{y}{x+y}}=\frac{x+y}{(r-1)x+(r-2)y}\,,$
as desired.
We can prove the second inequality similarly. Since we have
$\frac{c_{n-2}}{c_{n-1}-c_{n-2}}<\frac{y+1}{x},$ then cross multiplying and
adding $(y+1)c_{n-2}$ to both sides yields
$\frac{c_{n-2}}{c_{n-1}}<\frac{y+1}{x+y+1}\,.$
Thus, we can conclude
$\frac{c_{n-1}}{c_{n}-c_{n-1}}=\frac{1}{(r-1)-\frac{c_{n-2}}{c_{n-1}}}<\frac{1}{(r-1)-\frac{y+1}{x+y+1}}\leq\frac{x+y+1}{(r-1)x+(r-2)y}\,.\qed$
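Lemma 4.5 also lends itself to a direct numerical check. The sketch below is an illustration only: it fixes $r=3$ and assumes conventions consistent with Example 4.8, namely that the word for $\mathcal{D}_{3}$ is the single letter $E$, that successive words arise by applying the morphism $E\mapsto E^{r-1}N$, $N\mapsto E^{r-2}N$, and that the northwest corners are the origin, the $N$-to-$E$ turns, and the final vertex.

```python
# Numerical check of Lemma 4.5 for r = 3 (a sketch; the word conventions
# below are assumptions consistent with Example 4.8).
r = 3

def word(n):
    """Christoffel word of D_n over {E, N}, assuming D_3 = 'E'."""
    w = "E"
    for _ in range(n - 3):
        w = "".join("E" * (r - 1) + "N" if ch == "E" else "E" * (r - 2) + "N"
                    for ch in w)
    return w

def vertices(w):
    """Coordinates of the vertices w_0, w_1, ... along the path."""
    pts, x, y = [(0, 0)], 0, 0
    for ch in w:
        x, y = (x + 1, y) if ch == "E" else (x, y + 1)
        pts.append((x, y))
    return pts

w_n, w_next = word(7), word(8)
V_next = vertices(w_next)
# Northwest corners: the origin, every N-to-E turn, and the final vertex.
corners = [i for i in range(len(w_next) + 1)
           if (i == 0 or w_next[i - 1] == "N")
           and (i == len(w_next) or w_next[i] == "E")]

# mu sends the i-th vertex (x, y) of D_n to the i-th northwest corner of
# D_{n+1}, which sits at ((r-1)x + (r-2)y, x + y).
for i, (x, y) in enumerate(vertices(w_n)):
    assert V_next[corners[i]] == ((r - 1) * x + (r - 2) * y, x + y)
```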
We now study the slopes of the line segments between northwest corners of
$\mathcal{D}$, as these play a central role in the definition of colored Dyck
subpaths in both our setting and the Lee–Schiffler setting. We will utilize
several classical results concerning maximal Dyck paths and Christoffel words,
which can be found, for example, in [1].
###### Definition 4.6.
For $n\geq 3$, we define the function
$\pi_{n}:\\{0,1,\dots,c_{n-1}\\}\to\mathbb{Z}$ by
$\pi_{n}(i):=xc_{n-2}-(i-x)(c_{n-1}-c_{n-2})\text{ where
}w_{i}=(x,i-x)\in\mathcal{D}_{n}\,.$
###### Remark 4.7.
Note that we have $\pi_{n}(0)=\pi_{n}(c_{n-1})=0$. It is a standard result
from the theory of Christoffel words (see, for example, [1, Lemma 1.3]) that
the sequence $\pi_{n}(1),\pi_{n}(2),\dots,\pi_{n}(c_{n-1})$ is a permutation
of the elements $\\{0,1,\dots,c_{n-1}-1\\}$, and this sequence is
order-isomorphic to the sequence of distances from each vertex to the line
segment between $w_{0}$ and $w_{c_{n-1}}$. Thus, $s(w_{i},w_{j})\geq
s=s(w_{0},w_{c_{n-1}})$ in $\mathcal{D}_{n}$ if and only if
$\pi_{n}(j)\leq\pi_{n}(i)$.
###### Example 4.8.
For $r=3$, the values $\pi_{6}(0),\pi_{6}(1),\dots,\pi_{6}(21)$ are given by
the following sequence:
$0,8,16,3,11,19,6,14,1,9,17,4,12,20,7,15,2,10,18,5,13,0\,.$
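The sequence above can be reproduced directly from 4.6. The following sketch is illustrative only; it assumes the conventions $c_{2}=1$, $c_{3}=r$, with the word for $\mathcal{D}_{3}$ taken to be the single letter $E$ and subsequent words generated by the morphism $E\mapsto E^{r-1}N$, $N\mapsto E^{r-2}N$.

```python
# Recomputing the values pi_6(0), ..., pi_6(21) of Example 4.8 for r = 3
# (a sketch; the word conventions are assumptions consistent with the text).
r, n = 3, 6

c = {2: 1, 3: r}
for m in range(4, n):
    c[m] = r * c[m - 1] - c[m - 2]

w = "E"  # assumed word for D_3
for _ in range(n - 3):
    w = "".join("E" * (r - 1) + "N" if ch == "E" else "E" * (r - 2) + "N"
                for ch in w)

# pi_n(i) = x*c_{n-2} - (i - x)*(c_{n-1} - c_{n-2}) where w_i = (x, i - x).
pi, x = [], 0
for i in range(len(w) + 1):
    pi.append(x * c[n - 2] - (i - x) * (c[n - 1] - c[n - 2]))
    if i < len(w) and w[i] == "E":
        x += 1

print(pi)  # recovers the sequence displayed above
```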
We now prove several relations between the sequences
$\\{\pi_{n}(i)\\}_{i=0}^{c_{n-1}}$ and $\\{\pi_{n-1}(i)\\}_{i=0}^{c_{n-2}}$.
###### Lemma 4.9.
For any vertex with coordinates $(x,y)$ in $\mathcal{D}_{n-1}$ where $n\geq
4$, we have
$\pi_{n}(rx+(r-1)y)=\pi_{n-1}(x+y)\,.$
###### Proof.
By 3.5 and 4.5, both $(x,y)$ and $\mu(w_{x+y})=((r-1)x+(r-2)y,x+y)$ are
vertices of $\mathcal{D}_{n}$. Hence we can expand the right-hand side using
4.6 and apply the relation $c_{n}=rc_{n-1}-c_{n-2}$ to obtain
$\displaystyle\pi_{n}(rx+(r-1)y)$
$\displaystyle=((r-1)x+(r-2)y)c_{n-2}-(x+y)(c_{n-1}-c_{n-2})$
$\displaystyle=((r-1)x+(r-2)y)c_{n-2}-(x+y)((r-1)c_{n-2}-c_{n-3})$
$\displaystyle=-yc_{n-2}+(x+y)c_{n-3}$
$\displaystyle=xc_{n-3}-y(c_{n-2}-c_{n-3})\;.$
Comparing with 4.6, we see that the final quantity is precisely
$\pi_{n-1}(x+y)$, as desired. ∎
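The relation can also be confirmed computationally; the following sketch checks it for $r=3$ and $n=6$, under the same (assumed) word conventions as in Example 4.8.

```python
# Check of the relation pi_n(r*x + (r-1)*y) = pi_{n-1}(x + y) for r = 3,
# n = 6 (a sketch; word conventions as in Example 4.8 are assumed).
r, n = 3, 6

c = {2: 1, 3: r}
for m in range(4, n + 1):
    c[m] = r * c[m - 1] - c[m - 2]

def word(k):
    w = "E"  # assumed word for D_3
    for _ in range(k - 3):
        w = "".join("E" * (r - 1) + "N" if ch == "E" else "E" * (r - 2) + "N"
                    for ch in w)
    return w

def pi(k):
    """The list pi_k(0), ..., pi_k(c_{k-1})."""
    vals, x = [], 0
    w = word(k)
    for i in range(len(w) + 1):
        vals.append(x * c[k - 2] - (i - x) * (c[k - 1] - c[k - 2]))
        if i < len(w) and w[i] == "E":
            x += 1
    return vals

pi5, pi6 = pi(n - 1), pi(n)

# Walk the vertices (x, y) of D_{n-1} and compare the two sides.
x = y = 0
assert pi6[0] == pi5[0]
for ch in word(n - 1):
    if ch == "E":
        x += 1
    else:
        y += 1
    assert pi6[r * x + (r - 1) * y] == pi5[x + y]
```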
###### Lemma 4.10.
For $n\geq 3$, the set $\left\\{(x,y)\in
V(\mathcal{D}_{n}):\pi_{n}(x+y)\in\\{0,\dots,c_{n-2}-1\\}\right\\}$ is
precisely the set of northwest corners of $\mathcal{D}_{n}$.
###### Proof.
We proceed inductively on $n$. The base case $n=3$ is straightforward to
check. 4.7 implies that $\pi_{n}(i)=0$ exactly when $i\in\\{0,c_{n-1}\\}$ and
that $\pi_{n}(i)$ takes each value of $\\{1,2,\dots,c_{n-1}-1\\}$ exactly once
for $i\in\\{1,2,\dots,c_{n-1}-1\\}$. For each vertex
$(x,y)\in\mathcal{D}_{n-1}$, we
can evaluate
$\pi_{n}(rx+(r-1)y)=\pi_{n-1}(x+y)\in\\{0,1,2,\dots,c_{n-2}-1\\}\,.$
Moreover, the unique vertex $(x^{\prime},y^{\prime})\in\mathcal{D}_{n}$ such
that $x^{\prime}+y^{\prime}=rx+(r-1)y$ is $\mu(w_{x+y})$. Since the image of
$\mu$ applied to the set $V(\mathcal{D}_{n-1})$ is the set of northwest
corners of $\mathcal{D}_{n}$, the desired set equality holds. ∎
###### Corollary 4.11.
Applying the morphism $\lambda=\\{E\mapsto E^{r-1}N,N\mapsto E^{r-2}N\\}$ to
the Christoffel word for $\mathcal{D}_{n}$ yields the Christoffel word for
$\mathcal{D}_{n+1}$.
Applying the morphism $\theta\circ\lambda\circ\theta^{-1}$ to the Christoffel
word for $\mathcal{C}_{n}$ yields the Christoffel word for
$\mathcal{C}_{n+1}$.
###### Proof.
Let $\rho_{i}$ denote the Christoffel word for $\mathcal{D}_{i}$. Applying the
map $\mu$ to $V(\mathcal{D}_{n})$ takes a vertex $(x,y)$ to
$\left((r-1)x+(r-2)y,x+y\right)$. Hence by 4.10, the index $j$ of the
$k^{\text{th}}$ vertical edge $\alpha_{j}$ in $\mathcal{D}_{n+1}$, i.e., the
position of the $k^{\text{th}}$ letter $N$ in $\rho_{n+1}$, is equal to
$r|\\{\alpha_{i}^{n}:i<j,\alpha_{i}^{n}\text{ is
horizontal}\\}|+(r-1)|\\{\alpha_{i}^{n}:i<j,\alpha_{i}^{n}\text{ is
vertical}\\}|+1\,.$
This is precisely the position of the $k^{\text{th}}$ letter $N$ in the word
$\lambda(\rho_{n})$, hence we can conclude that
$\lambda(\rho_{n})=\rho_{n+1}$. The second statement follows from 3.3. ∎
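The first statement also admits a direct check: compare the $\lambda$-image of each word with a greedy construction of the maximal Dyck path lying weakly below the diagonal. The endpoint $(c_{n-1}-c_{n-2},c_{n-2})$ used for $\mathcal{D}_{n}$ below is an assumption consistent with the vertex counts in this section.

```python
# Check that applying lambda to the word for D_n yields the word for D_{n+1}
# (a sketch for r = 3; the endpoint convention for D_n is an assumption).
from fractions import Fraction

r = 3
c = {2: 1, 3: r}
for m in range(4, 9):
    c[m] = r * c[m - 1] - c[m - 2]

def christoffel(a, b):
    """Maximal Dyck path from (0,0) to (a,b) staying weakly below y = (b/a)x."""
    w, x, y = "", 0, 0
    while (x, y) != (a, b):
        if x > 0 and Fraction(y + 1, x) <= Fraction(b, a):
            w, y = w + "N", y + 1
        else:
            w, x = w + "E", x + 1
    return w

def lam(w):
    """The morphism E -> E^{r-1} N, N -> E^{r-2} N."""
    return "".join("E" * (r - 1) + "N" if ch == "E" else "E" * (r - 2) + "N"
                   for ch in w)

for n in range(4, 8):
    assert lam(christoffel(c[n - 1] - c[n - 2], c[n - 2])) \
        == christoffel(c[n] - c[n - 1], c[n - 1])
```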
Finally, we can determine the rest of the values of $\pi_{n}$ from its values
on the northwest corners.
###### Observation 4.12.
Suppose that $(i,j)\in\mathcal{D}_{n}$ is not a northwest corner. Let
$(i^{\prime},j)$ be the corner vertex immediately preceding $(i,j)$. Then it
follows from the definition of $\pi_{n}$ that
$\pi_{n}(i+j)=\pi_{n}(i^{\prime}+j)+(i-i^{\prime})c_{n-2}\;.$
### 4.3. Proof of the Simplification
We now show that the map $\chi$ (see 4.3) is well-defined and weight-
preserving with respect to $|\beta|_{1}$ and $|\beta|_{2}$. This shows that
our definition of $\mathcal{F}^{\prime}(\mathcal{D}_{n})$ is, in a sense,
equivalent to that of Lee-Schiffler.
###### Observation 4.13.
4.9 and 4.7 together imply that the values of $\pi_{n}$ on the northwest
corners of $\mathcal{D}_{n}$ lie in the set $\\{0,1,\dots,c_{n-2}-1\\}$. Since
$\pi_{n}$ is injective away from the endpoints of $\mathcal{D}_{n}$, this
implies that the values of $\pi_{n}$ at the vertices which are not northwest
corners constitute the set $\\{c_{n-2},c_{n-2}+1,\dots,c_{n-1}-1\\}$.
The color of a path in Lee and Schiffler’s setting depends on the number of
northwest corners that one needs to traverse from the starting endpoint of the
path until the slope to the northwest corner is at least the slope of the
diagonal. We also consider the slopes between vertices on $\mathcal{D}_{n}$
that are not necessarily northwest corners, which will help us to derive
results about the northwest corners. We in turn use this to determine which
vertex $w_{d(i)}$ to the east of a given vertex $w_{i}$ is the first such that
the slope between $w_{i}$ and $w_{d(i)}$ is at least that of the diagonal.
###### Definition 4.14.
For $0\leq i<c_{n-1}$, we define
$d(i)=\min\left(\\{j\in\\{i+1,i+2,\dots,c_{n-1}\\}:s(w_{i},w_{j})\geq
s(0,c_{n-1})\\}\right)\,.$
Note that $d(i)$ is well-defined since $s(w_{i},w_{c_{n-1}})\geq
s(w_{0},w_{c_{n-1}})$ for all $0\leq i<c_{n-1}$. We now show some properties
of how the functions $d$ and
$\mu$ interact.
###### Corollary 4.15.
Suppose $w_{i}$ is a northwest corner of $\mathcal{D}_{n}$ for $n\geq 4$. Let
$w_{i}=\mu\left(w_{i^{\prime}}\right)$ for some
$w_{i^{\prime}}\in\mathcal{D}_{n-1}$, and let $(x,y)=w_{d(i^{\prime})}$ in
$\mathcal{D}_{n-1}$ with $d(i^{\prime})-i^{\prime}=c_{m}-wc_{m-1}$. Then we
have $w_{d(i)}=((r-1)x+(r-2)y,x+y)$, and $d(i)-i=c_{m+1}-wc_{m}$.
If $w_{i}$ is not a northwest corner, then $w_{d(i)}$ is the northwest corner
immediately following $w_{i}$. In particular, $d(i)-i\leq r-1$.
###### Proof.
By 4.13, $w_{d(i)}$ must be a northwest corner. Thus 4.10 implies that
$w_{d(i)}$ must be the image of $w_{d(i^{\prime})}$ under the correspondence
between vertices of $\mathcal{D}_{n-1}$ to northwest corners of
$\mathcal{D}_{n}$. It is straightforward to check that the distances follow
the formula described under this correspondence. The second claim follows
directly from 4.13. ∎
From this, we can determine that the values $d(i)-i$ are of a particular form.
###### Lemma 4.16.
For all positive integers $i$, we have $d(i)-i=c_{m}-wc_{m-1}$ for a unique
choice of $2\leq w\leq r-1$ and $3\leq m\leq n-2$.
###### Proof.
We prove this via induction on $n$. The base case $n=3$ is straightforward to
check. Suppose the statement holds on $\mathcal{C}_{n-1}$. Whenever $w_{i}$ is
not a northwest corner of $\mathcal{C}_{n}$, then we have $d(i)-i<r$ by 4.15,
so $d(i)-i=c_{3}-wc_{2}$ for some $2\leq w\leq r-1$ or
$d(i)-i=r-1=c_{4}-(r-1)c_{3}$. Otherwise, if $w_{i}$ is a northwest corner of
$\mathcal{C}_{n}$, then $w_{i}=\mu(w_{i^{\prime}})$ for some
$w_{i^{\prime}}\in\mathcal{C}_{n-1}$. By assumption,
$d(i^{\prime})-i^{\prime}=c_{m}-wc_{m-1}$ for some appropriate choice of
$m,w$. Applying 4.15, we can conclude that $d(i)-i=c_{m+1}-wc_{m}$.
The fact that this representation is unique follows immediately from the fact
that, for all $m\geq 2$, we have $c_{m}-(r-1)c_{m-1}>c_{m-1}-2c_{m-2}$. ∎
We are now ready to establish that the value $t(i)-i$, as given in 2.2, can
also be represented as $c_{m}-wc_{m-1}$ for an appropriate choice of $m,w$.
###### Proof of 2.2.
We prove this via induction on $n$, with the straightforward base case $n=3$.
Fix a northwest corner $v_{i}=w_{j}\in\mathcal{C}_{n}$, and let $t(i)$ be the
minimum positive integer such that $s(v_{i},v_{t(i)})\geq s$. Since $v_{i}$ is
a northwest corner, we have $v_{i}=\mu(w_{i^{\prime}})$ for some
$w_{i^{\prime}}\in\mathcal{C}_{n-1}$. Applying 4.16, we have that
$d(i^{\prime})-i^{\prime}=c_{m}-wc_{m-1}$ for a unique choice of $2\leq w\leq
r-1$ and $3\leq m\leq n-2$. By 4.7, we have that
$t(i)-i=d(i^{\prime})-i^{\prime}$. Thus $t(i)-i$ is of the desired form. ∎
The following lemma establishes that $\chi$ preserves weights.
###### Lemma 4.17.
For all $n\geq 3$, we have $\beta\in\mathcal{F}(\mathcal{D}_{n})$ if and only
if $\chi(\beta)\in\mathcal{F}^{\prime}(\mathcal{D}_{n})$. Moreover, we have
$|\beta|_{i}=|\chi(\beta)|_{i}$ for $i=1,2$.
###### Proof of 4.17.
Fix $\beta\in\mathcal{F}(\mathcal{D}_{n})$. Suppose $\alpha(i,k)\in\beta$. By
2.2, either $\alpha(i,k)$ is $(m,w)$-green, or $t(i)-i=1$. In this case,
condition $(2^{*})$ of 2.3 enforces that the edge immediately preceding
$\alpha(i,k)$ is contained in some $\beta_{j}$. By the non-overlapping
condition for membership in $\mathcal{F}(\mathcal{D}_{n})$, we
have $\beta_{j}\neq\alpha(i^{\prime},k^{\prime})$ for any
$i^{\prime},k^{\prime}$. Thus $\beta_{j}=\alpha_{i}^{\prime}$ for some
$i^{\prime}$. In particular, it does not contribute to $|\beta|_{1}$ and
contributes $1$ to $|\beta|_{2}$, which is the same as if we had considered
$\alpha(i,k)$ to contain this preceding edge. ∎
We can now combine the results about the map $\chi$ in order to prove the
expansion formula in our setting.
###### Proof of 2.4.
This modified expansion formula follows immediately from 2.2 and the expansion
formula (4.2) given by Lee and Schiffler [12]. ∎
We now discuss a generalization of 2.4 to the setting of the
_framed $r$-Kronecker cluster algebra with principal coefficients_. While
there is a more general theory of cluster algebras with coefficients (see, for
example, [8]), we will give a brief explicit description of this cluster
algebra here. For a positive integer $r$, initial cluster variables
$X_{1},X_{2}$ and coefficient variables $Y_{1},Y_{2}$, we consider the
sequences $\\{\widetilde{X}_{n}\\}_{n\in\mathbb{Z}}$ and
$\\{\widetilde{Y}_{n}\\}_{n\in\mathbb{Z}}$ defined by
$\displaystyle\widetilde{Y}_{n+1}$
$\displaystyle=\frac{\widetilde{Y}_{n}^{r}}{\widetilde{Y}_{n-1}}\,,\text{
where }\widetilde{Y}_{1}=Y_{1}\text{ and }\widetilde{Y}_{2}=Y_{1}^{r}Y_{2},\;$
$\displaystyle\widetilde{X}_{n+1}$
$\displaystyle=\frac{\widetilde{X}_{n}^{r}+\widetilde{Y}_{n-1}}{\widetilde{X}_{n-1}}\,,\text{
where }\widetilde{X}_{1}=X_{1}\text{ and }\widetilde{X}_{2}=X_{2}\,.$
The use of tildes is to distinguish the settings with and without
coefficients. Let $\mathbb{P}$ be the tropical semifield
$\operatorname{Trop}[Y_{1},Y_{2}]$. The _framed $r$-Kronecker cluster algebra
$\widetilde{\mathcal{A}}(r,r)$ with principal coefficients_ is the
$\mathbb{Z}\mathbb{P}[\widetilde{X}_{1},\widetilde{X}_{2}]$ algebra generated
by the cluster variables $\\{\widetilde{X}_{n}\\}_{n\in\mathbb{Z}}$. Note that
when we specialize to the case $Y_{1}=Y_{2}=1$, we recover the classical
$r$-Kronecker cluster algebra.
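For small $n$ the Laurent property of these recurrences, and the specialization $Y_{1}=Y_{2}=1$, can be verified symbolically. The sketch below fixes $r=3$ and uses sympy only to cancel common factors; the expected denominators $X_{1}^{c_{n-1}}X_{2}^{c_{n-2}}$ are an assumption consistent with the degree computation in the proof of 4.18.

```python
# Symbolic check (r = 3) that the framed cluster variables are Laurent
# polynomials with denominator X1^{c_{n-1}} X2^{c_{n-2}} (a sketch).
import sympy as sp

X1, X2, Y1, Y2 = sp.symbols("X1 X2 Y1 Y2", positive=True)
r = 3

X = {1: X1, 2: X2}
Y = {1: Y1, 2: Y1**r * Y2}
for n in range(2, 5):
    Y[n + 1] = sp.cancel(Y[n] ** r / Y[n - 1])
    X[n + 1] = sp.cancel((X[n] ** r + Y[n - 1]) / X[n - 1])

# c_2 = 1, c_3 = r = 3, c_4 = r^2 - 1 = 8, so the denominators should be
# X1, X1^3 X2, X1^8 X2^3 for n = 3, 4, 5 respectively.
for n, mono in [(3, X1), (4, X1**3 * X2), (5, X1**8 * X2**3)]:
    num, den = sp.fraction(X[n])
    assert (den / mono).equals(1)

# Specializing Y1 = Y2 = 1 recovers the coefficient-free recurrence.
assert X[4].subs({Y1: 1, Y2: 1}).equals(((X2**r + 1) ** r + X1**r) / (X1**r * X2))
```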
###### Corollary 4.18.
Consider the framed $r$-Kronecker cluster algebra
$\widetilde{\mathcal{A}}(r,r)$ with principal coefficients, having initial
cluster variables $X_{1},X_{2}$ and coefficient variables $Y_{1},Y_{2}$. For
$n\geq 4$, the cluster variable $\widetilde{X}_{n}$ is given by
$\widetilde{X}_{n}=X_{1}^{-c_{n-1}}X_{2}^{-c_{n-2}}\sum_{\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})}X_{1}^{r|\beta|_{1}}X_{2}^{r\left(c_{n-1}-|\beta|_{2}\right)}Y_{1}^{|\beta|_{2}}Y_{2}^{|\beta|_{1}}$
and
$\widetilde{X}_{3-n}=X_{2}^{-c_{n-1}}X_{1}^{-c_{n-2}}\sum_{\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})}X_{2}^{r|\beta|_{1}}X_{1}^{r\left(c_{n-1}-|\beta|_{2}\right)}Y_{2}^{-|\beta|_{2}}Y_{1}^{-|\beta|_{1}}\,.$
###### Proof.
Setting $\deg(X_{i})=e_{i}$ and $\deg(Y_{i})=(-1)^{i+1}re_{3-i}$, it is known
that the cluster variable $\widetilde{X}_{n}$ is a homogeneous Laurent
polynomial by [8, Proposition 6.1]. Moreover, this degree is readily
calculated from the recurrence relations on _g-vectors_ to be
$-c_{n-1}e_{1}+c_{n}e_{2}$ for $n\geq 2$ and $c_{-n+3}e_{1}-c_{-n+2}e_{2}$ for
$n<2$ (see, for example, [16, Subsection 4.1]). This determines the powers of
$Y_{1}$ and $Y_{2}$ that must appear in each monomial term, yielding the above
expansion formula directly from 2.4. ∎
## 5\. Bijection between Compatible Pairs and Colored Subpaths of Dyck Paths
In this section, we prove a conjecture of Feiyang Lin that the map $\Phi$,
constructed by Lin and described in 2.6, is a bijection between the
collections of colored subpaths introduced by Lee-Schiffler [12] and the
compatible pairs introduced by Lee-Li-Zelevinsky [11]. This shows the
correspondence between the objects summed over by each set of authors in their
expansion formulas for rank-$2$ cluster algebras. Specifically, we show that
Lin’s map is a bijection between collections $\beta$ of colored Dyck subpaths
in $\mathcal{F}^{\prime}(\mathcal{D}_{n})$ with a fixed $|\beta|_{1}$ and
$|\beta|_{2}$ and compatible pairs on $\mathcal{C}_{n}$ consisting of
$|\beta|_{1}$ vertical edges and $(c_{n-1}-|\beta|_{2})$ horizontal edges.
### 5.1. Compatible Pairs and Lin’s Map
The rank-2 cluster expansion formula given by Lee-Li-Zelevinsky [11] sums over
certain subsets of edges of a maximal Dyck path, known as compatible pairs,
which we discuss here. We will mainly work over $\mathcal{C}_{n}$, though
sometimes we work in more generality. Let $S_{1}$ be a subset of the
horizontal edges of a maximal Dyck path $\mathcal{P}(a,b)$, and let $S_{2}$ be
a subset of the vertical edges of $\mathcal{P}(a,b)$.
In order to study compatible pairs, Lee, Li, and Zelevinsky [11] introduced
the notion of the “shadow” of a set of edges. While they only defined shadows
of subsets of vertical edges, we extend this notion to subsets of horizontal
edges as well. These notions will be used throughout our construction of the
bijection between collections of colored Dyck subpaths and compatible pairs.
###### Definition 5.1.
For a vertical edge $\nu\in S_{2}$ with upper endpoint $w$, we define its
_local vertical shadow_, denoted $\operatorname{sh}(\nu;S_{2})$, to be the set
of horizontal edges in the shortest subpath $\overrightarrow{tw}$ of
$\mathcal{P}(a,b)$ such that
$|\overrightarrow{tw}|_{1}=r|\overrightarrow{tw}\cap S_{2}|_{2}$.
Analogously, for a horizontal edge $\eta\in S_{1}$ with left endpoint $u$, we
define its _local horizontal shadow_, denoted $\operatorname{sh}(\eta;S_{1})$,
to be the set of vertical edges in the shortest subpath $\overrightarrow{ut}$
of $\mathcal{P}(a,b)$ such that
$|\overrightarrow{ut}|_{2}=r|\overrightarrow{ut}\cap S_{1}|_{1}$. If there is
no such subpath $\overrightarrow{tw}$ or $\overrightarrow{ut}$, respectively,
then we define the local vertical (resp., horizontal) shadow to be the entire
set of horizontal (resp., vertical) edges in $\mathcal{P}(a,b)$.
For $S\subseteq S_{i}$ where $i\in\\{1,2\\}$, let
$\displaystyle\operatorname{sh}(S;S_{i})=\bigcup_{\alpha\in
S}\operatorname{sh}(\alpha;S_{i})$, and write
$\operatorname{sh}(S_{i}):=\operatorname{sh}(S_{i};S_{i})$.
###### Observation 5.2.
It is a straightforward consequence of 5.1 that for $S\subseteq S_{1}$, we
have $|\operatorname{sh}(S)|=\min(b,\,r|S|)$. Similarly, for $S\subseteq
S_{2}$, we have $|\operatorname{sh}(S)|=\min(a,\,r|S|)$.
The expansion formula for cluster variables given by Lee, Li, and Zelevinsky
has monomials corresponding to compatible pairs on $\mathcal{C}_{n}$. Their
expansion formula works in the more general setting of elements of the greedy
basis, which contains the cluster variables. For further details on the greedy
basis, see [11]. We present their formula in the special case of cluster
variables.
###### Theorem 5.3.
[11, Theorem 1.11] For each $n\geq 1$, the cluster variable $X_{n}$ in
$\mathcal{A}(r,r)$ is given by
$X_{n}=X_{1}^{-c_{n-1}}X_{2}^{-c_{n-2}}\sum_{(S_{1},S_{2})}X_{1}^{r|S_{2}|}X_{2}^{r|S_{1}|}\,,$
where the sum is over all compatible pairs $(S_{1},S_{2})$ in
$\mathcal{C}_{n}$.
Figure 4. The top image depicts a collection of colored Dyck subpaths in
$\mathcal{F}^{\prime}(\mathcal{D}_{6})$, the same as in Figure 3. The bottom
image depicts the corresponding compatible pair $(S_{1},S_{2})$ in
$\mathcal{C}_{6}$, obtained by applying the map $\Phi$ to the collection of
colored Dyck subpaths. The edges that are contained in either $S_{1}$ or
$S_{2}$ are depicted by bold edges in the lower image. In particular, we have
$S_{1}=\\{\eta_{5},\eta_{15}\\}$ and
$S_{2}=\\{\nu_{1},\nu_{3},\nu_{4},\nu_{5},\nu_{7},\nu_{8}\\}$. Thus
$\operatorname{wt}((S_{1},S_{2}))=X_{1}^{18}X_{2}^{6}$.
Lin’s map from collections of colored Dyck subpaths in $\mathcal{D}_{n}$ to
compatible pairs in $\mathcal{C}_{n}$ is described in 2.6. An example is shown
in Figure 4. Lin conjectured that the map $\Phi$ is a bijection between the
desired sets [16, Conjecture 3], which essentially involves showing that
$\Phi(\beta)$ is indeed a compatible pair for every
$\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})$. We verify this claim in the
next subsection. Lin made partial progress toward this conjecture by showing
that it was sufficient to consider only compatible pairs satisfying a certain
irreducibility condition [16, Proposition 4.8.4, Conjecture 4]. We proceed by
a different approach than Lin, so our methods do not rely on this
simplification.
In order to show the correspondence between the Lee-Schiffler and Lee-Li-
Zelevinsky expansion formulas, one needs to show not only bijectivity between
the sets summed over, but also that the resulting monomials correspond. Lin
defined a weight function for collections of colored subpaths and for
compatible pairs that keeps track of the exponents associated to these
monomials.
###### Definition 5.4.
We define the _weight_ of a compatible pair $(S_{1},S_{2})$ by
$\operatorname{wt}((S_{1},S_{2}))=X_{1}^{r|S_{2}|}X_{2}^{r|S_{1}|}$
and the _weight_ of a collection of colored subpaths
$\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})$ by
$\operatorname{wt}_{n}(\beta)=X_{1}^{r|\beta|_{1}}X_{2}^{r(c_{n-1}-|\beta|_{2})}\,.$
Lin showed that $\Phi$ is a weight-preserving map from a superset of
$\mathcal{F}(\mathcal{D}_{n})$ to $\mathcal{F}(\mathcal{C}_{n})$, and
conjectured that it restricted to a bijection between
$\mathcal{F}(\mathcal{D}_{n})$ and $\mathcal{F}(\mathcal{C}_{n})$. We prove
this in the next subsection. We convert Lin’s map into the setting of
$\mathcal{F}^{\prime}(\mathcal{D}_{n})$ instead of
$\mathcal{F}(\mathcal{D}_{n})$ in order to make easier reference to the
results of the previous section, though it is straightforward to show the
equivalence between these two settings.
### 5.2. Proof of Bijectivity
We now proceed to show that Lin’s map $\Phi$ indeed takes every collection of
colored subpaths in $\mathcal{F}^{\prime}(\mathcal{D}_{n})$ to a unique
compatible pair on $\mathcal{C}_{n}$. It then follows from the work of Lee-
Schiffler [12] and Lee-Li-Zelevinsky [11] that $\Phi$ is a bijection. For
$2\leq w\leq r-1$ and $m\geq 3$, we define $a_{m,w}$ to be the quantity
$c_{m}-wc_{m-1}$. We can use the quantities $a_{m,w}$ to describe the sizes of
the images of atomic colored paths under $\Phi$, as well as their shadows.
###### Observation 5.5.
It is readily deduced from the recursive definition of the sequence $c_{n}$
that for $w,m\geq 1$, we have $ra_{m,w}=a_{m+1,w}+a_{m-1,w}$.
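This can be seen by substituting the recurrence $c_{n}=rc_{n-1}-c_{n-2}$ into both sides; a quick numerical check (a sketch, extending the sequence by the assumed value $c_{1}=0$) is:

```python
# Numerical check of r*a_{m,w} = a_{m+1,w} + a_{m-1,w} for several ranks r
# (a sketch; the sequence is extended by the assumed value c_1 = 0).
for r in (3, 4, 5):
    c = {1: 0, 2: 1}
    for m in range(3, 12):
        c[m] = r * c[m - 1] - c[m - 2]
    a = lambda m, w: c[m] - w * c[m - 1]
    for m in range(3, 10):
        for w in range(2, r):
            assert r * a(m, w) == a(m + 1, w) + a(m - 1, w)
```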
In order to establish that the conditions for compatibility correspond to the
conditions for membership in $\mathcal{F}^{\prime}(\mathcal{D}_{n})$ via
$\Phi$, we first show that this is true for certain simple colored subpaths.
###### Definition 5.6.
We call a subpath of $\mathcal{D}_{n}$ _atomic_ if it consists of a single
edge, is blue, or is an $(m,w)$-brown path of the form $\gamma(i,i+a_{m,w})$.
###### Observation 5.7.
Any subpath of $\mathcal{D}_{n}$ can be written uniquely as a union of atomic
components meeting only at vertices such that
1. (i)
no component is a single edge unless the entire path is a single edge, and
2. (ii)
only the last component can be blue.
This decomposition is obtained by removing minimal brown paths from the front
until only a blue path remains.
Note that when we decompose a path into its atomic components, adjacent
components will necessarily overlap at a vertex. Thus after this
decomposition, the set of paths may no longer be non-overlapping, and hence
not in $\mathcal{F}^{\prime}(\mathcal{D}_{n})$. An example is shown in Figure
5.
Figure 5. The decomposition of the collection $\beta$ of subpaths from Figure
4 into its atomic components. Note that the $(3,2)$-brown path $\gamma(2,3)$
and the blue path $\gamma(3,5)$, which meet at the vertex $v_{3}$, are the two
atomic
components of the brown path $\gamma(2,5)$ from $\beta$.
We now study the structure of the $(m,w)$-brown paths as Christoffel words.
Recall the morphism $\lambda=\\{E\mapsto E^{r-1}N,N\mapsto E^{r-2}N\\}$.
###### Lemma 5.8.
The Christoffel word for an atomic $(m,w)$-brown path is
$\lambda^{(m-2)}(E^{r-w-1}N)$, where $\lambda^{0}$ is the identity map; this
path has length $a_{m+1,w}$.
###### Proof.
Let $\gamma(i,k)$ be an atomic $(m,w)$-brown path, and let $\rho$ denote the
corresponding Christoffel word. Fix $i^{\prime},k^{\prime}\in\mathbb{Z}$ such
that $v_{i}=\mu(w_{i^{\prime}})$ and $v_{k}=\mu(w_{k^{\prime}})$.
First note that if $w_{i^{\prime}}$ is not a northwest corner, then by 4.13 we
have $k^{\prime}-i^{\prime}\leq r$. Thus we have $m=3$ and
$\rho=\lambda(E^{r-w-1}N)$ where $r-w-1=k^{\prime}-i^{\prime}-1$, which is of
the desired form.
Now suppose that $w_{i^{\prime}}=v_{i^{\prime\prime}}$ is a northwest corner.
We automatically have that $w_{k^{\prime}}=v_{k^{\prime\prime}}$ is a
northwest corner from 4.13. We aim to show that
$\gamma(i^{\prime\prime},k^{\prime\prime})$ is an $(m-1,w)$-brown path. Thus
the statement would follow from induction, since the Christoffel word
corresponding to $\gamma(i,k)$ is given by applying $\lambda$ to the word
corresponding to $\gamma(i^{\prime\prime},k^{\prime\prime})$.
In order to show that $\gamma(i^{\prime\prime},k^{\prime\prime})$ is an
$(m-1,w)$-brown path, we study the slopes
$s(w_{i^{\prime}},w_{i^{\prime}+j})$ for $1\leq j\leq
k^{\prime}-i^{\prime}=k-i$. It readily follows from the recurrence for the
sequence $c_{n}$ and the formula for $\mu$ given in 4.5 that
$s(w_{i^{\prime}},w_{i^{\prime}+j})-s(v_{0},v_{c_{n}})=s(\mu(w_{i^{\prime}}),\mu(w_{i^{\prime}+j}))-s(v_{0},v_{c_{n+1}})=s(v_{i},v_{i+j})-s(v_{0},v_{c_{n+1}})\,.$
Since $s(v_{i},v_{i+j})-s(v_{0},v_{c_{n+1}})<0$ for $1\leq j<k-i$ and
$s(v_{i},v_{k})\geq s(v_{0},v_{c_{n+1}})$, the same holds for
$s(w_{i^{\prime}},w_{i^{\prime}+j})-s(v_{0},v_{c_{n}})$. That is, the slope
from $w_{i^{\prime}}$ to any vertex on
$\gamma(i^{\prime\prime},k^{\prime\prime})$ does not exceed that of the
diagonal, except the slope from $w_{i^{\prime}}$ to $w_{k^{\prime}}$.
We now just need to determine $k^{\prime\prime}-i^{\prime\prime}$, or
equivalently, the number of vertical edges in
$\gamma(i^{\prime\prime},k^{\prime\prime})$. Since $\gamma(i,k)$ has $a_{m,w}$
vertical edges, we know $\gamma(i^{\prime\prime},k^{\prime\prime})$ has
$a_{m,w}$ total edges. Since $a_{m,w}$ uniquely determines $m,w$, we can
conclude from the inductive hypothesis that
$\gamma(i^{\prime\prime},k^{\prime\prime})$ is $(m-1,w)$-brown. Moreover, by
5.5, we have
$(r-1)a_{m-1,w}+r(a_{m,w}-a_{m-1,w})=a_{m+1,w}\,,$
so we can conclude that $\gamma(i,k)$ has length $a_{m+1,w}$. ∎
We are interested in the portion of the path spanned by the vertical shadow of
the image of an $(m,w)$-brown path. We determine the structure of this portion
of the path with the following result.
###### Corollary 5.9.
The $a_{m-1,w}$ edges preceding an $(m,w)$-brown path form an $(m-2,w)$-brown
path or, when $a_{m-1,w}<r$, a path corresponding to the Christoffel word
$E^{a_{m-1,w}-1}N$.
###### Proof.
By definition, the edge preceding an $(m,w)$-brown path is vertical, so the
latter statement follows immediately.
We prove the first claim via induction on $m$. In the base cases, where
$a_{m-1,w}\leq r-1$, the preceding path is of the form
$E^{a_{m-1,w}-1}N$. Let $\rho$ denote the Christoffel word formed by the
$a_{m-2,w}$ edges preceding an $(m-1,w)$-brown path. Then by 5.8, we can
obtain the word corresponding to the $a_{m-1,w}$ edges preceding an
$(m,w)$-brown path by applying $\lambda$ to $\rho$. By the inductive
hypothesis and 5.8, we can conclude that $\lambda(\rho)$ is an $(m-2,w)$-brown
path. ∎
###### Corollary 5.10.
Let $\gamma(i,k)$ be an $(m,w)$-brown path in $\mathcal{D}_{n}$, and let
$(S_{1},S_{2})=\Phi\left(\\{\gamma(i,k)\\}\right)$. Then the shadow of $S_{2}$
has length $a_{m+1,w}+a_{m-1,w}$.
###### Proof.
By 5.8, it follows that $S_{2}$ consists of $a_{m,w}$ consecutive vertical
edges. Thus, by 5.2, the shadow will contain $\min(ra_{m,w},c_{n-1})$
horizontal edges. Applying 5.5, we see that $ra_{m,w}=a_{m+1,w}+a_{m-1,w}$. We
then have by the bounds on $w$ and $m$ that
$ra_{m,w}=a_{m+1,w}+a_{m-1,w}\leq(c_{m+1}-2c_{m})+c_{m-1}\leq c_{m+1}\leq
c_{n-1}\,.$
So we can conclude the length of the shadow is $a_{m+1,w}+a_{m-1,w}$. ∎
We can now establish that the image under $\Phi$ of an atomic $(m,w)$-brown
path is a compatible pair in $\mathcal{C}_{n}$. As we will later see, this
encodes most of the complexity of the compatibility conditions on
$\mathcal{C}_{n}$.
###### Lemma 5.11.
Let $\gamma(i,k)$ be an atomic $(m,w)$-brown path in $\mathcal{D}_{n}$ and
$\gamma_{j}$ be one of the $a_{m-1,w}$ edges preceding $v_{i}$. Then
$\Phi(\\{\gamma(i,k),\gamma_{j}\\})$ is a compatible pair.
###### Proof.
Let $S_{2}=\Phi_{2}(\\{\gamma(i,k)\\})$. Let $\rho^{\prime}$ denote the path
formed by the $a_{m-1,w}$ edges preceding $v_{i}$, and let
$\rho=\sigma(\rho^{\prime})$. It follows from 5.10 and 5.9 that the shadow of
$S_{2}$ spans a path of type $\rho\lambda^{2}(\rho)$.
Let $\upsilon$ be the path composed of $a_{m,w}$ north steps and $a_{m-1,w}$
west steps starting from the vertex immediately below $\Phi(v_{i})$, defined
as follows: for $i\geq 2$, the $i$-th north step of $\upsilon$ is $(i-1)r$
units to the west of the $(i-1)$-st edge in $S_{2}$. By definition, $\upsilon$
forms the eastern border of the shadow of each edge of $S_{2}$ except the
last. Moreover, the position of west steps in $\upsilon$ is determined by the
occurrence of subwords $E^{r-1}N$ and $E^{r}N$ in $\lambda^{2}(\rho)$. From
the definition of $\lambda$, it follows that the Christoffel word corresponding to
the $90$ degree clockwise rotation of $\upsilon$ is precisely
$\lambda^{-1}(\lambda^{2}(\rho))=\lambda(\rho)$.
Figure 6. The yellow and black path constitute a portion of $\mathcal{D}$. The
thick black edges depict those that are contained in $\Phi_{2}(\gamma)$, where
$\gamma$ is a $(6,2)$-brown path. The shadow of $\Phi_{2}(\gamma)$ spans the
black and yellow paths. The purple path shows the left endpoint of the shadow
of the vertical edge to its right. Note that the yellow and purple paths
overlap in one edge.
We are now interested in the vertical distance from each horizontal edge
$\eta_{i}$ of $\rho$ to that of $\lambda(\rho)$. This determines the maximum
number $M$ of horizontal edges to the right of and including $\eta_{i}$ that
can be included in $S_{1}$ while satisfying the compatibility conditions;
namely, this distance, the maximum height of the shadow at $\eta_{i}$, equals
$rM$. Using the same inductive techniques as in 5.8, it is readily seen from
the base case $\rho=E^{w}N$ that this distance is $r(a_{m-1,w}-i+1)$ when
$i>1$. Hence any combination of these edges may be contained in $S_{1}$. When
$i=1$, the distance is $ra_{m-1,w}-1$. Since this is less than $ra_{m-1,w}$
but greater than $r(a_{m-1,w}-1)$, it is not possible for all edges of $\rho$
to be contained in $S_{1}$, but it is possible for all but one. That is, the
pair $(S_{1},S_{2})$ is compatible if and only if at least one edge of $\rho$
is not contained in $S_{1}$. ∎
###### Example 5.12.
Figure 6 illustrates the construction in the proof of 5.11. Observe that the
purple path is a $90$ degree counterclockwise rotation of a Dyck path. Letting
$\rho=E^{2}NEN$ denote the Christoffel word for the yellow path, observe that
the Christoffel word corresponding to the (90 degree clockwise rotation of
the) purple path is given by $\lambda(\rho)$. Moreover, the Christoffel word
corresponding to the black path is given by $\lambda^{2}(\rho)$. The vertical
distance between the purple and yellow paths is precisely the maximum height
of the shadow at each horizontal yellow edge such that the compatibility
conditions are satisfied.
###### Lemma 5.13.
Let $\beta$ consist of a single atomic path and, if $\beta$ is $(m,w)$-brown,
one of the $a_{m-1,w}$ edges preceding this path. Then $\Phi(\beta)$ is a
compatible pair.
###### Proof.
We break into three cases based on the form of $\beta$. If $\beta$ consists of
a single edge, then $\Phi(\beta)$ has no vertical edges and hence is
compatible. If $\beta$ is $(m,w)$-brown, then this is precisely the result of
5.11.
Thus, the only remaining case is when $\gamma(i,k)$ is blue. Then it is the
prefix of an atomic $(m,w)$-brown path, obtained by extending $\gamma(i,k)$
until its slope exceeds that of the diagonal. Let $\gamma(i,k^{\prime})$
denote this atomic $(m,w)$-brown path, and let
$\beta^{\prime}=\\{\gamma(i,k^{\prime}),\eta_{j}\\}$ where $\eta_{j}$ is the
$(a_{m-1})$-th edge preceding $v_{i}$. In the previous case, we have shown
that $\Phi(\beta^{\prime})$ is compatible. Note now that
$\Phi_{2}(\beta)\subseteq\Phi_{2}(\beta^{\prime})$ and
$\operatorname{sh}(\Phi_{2}(\beta))\subseteq\Phi_{1}(\beta^{\prime})$.
Therefore the compatibility of $\Phi(\beta)$ follows directly from the
compatibility of $\Phi(\beta^{\prime})$. ∎
Now that we have handled the case of atomic paths, we show that we can
determine the compatibility of the image of a collection of colored subpaths
by restricting to the compatibility conditions on each of its atomic
components.
###### Definition 5.14.
Let $(S_{1},S_{2})$ be a pair on $\mathcal{P}(a_{1},a_{2})$, and let
$(S_{1}^{\prime},S_{2}^{\prime})$ be a pair on
$\mathcal{P}(a_{1}^{\prime},a_{2}^{\prime})$. We define the _insertion of
$(S_{1}^{\prime},S_{2}^{\prime})$ into $(S_{1},S_{2})$ at position
$(j_{1},j_{2})$_ to be the pair $(S_{1}^{\prime\prime},S_{2}^{\prime\prime})$
on $\mathcal{P}(a_{1}+a_{1}^{\prime},a_{2}+a_{2}^{\prime})$ determined as
follows:
$e_{i}\in S_{k}^{\prime\prime}\iff\begin{cases}e_{i}\in S_{k}\text{ and }1\leq
i\leq j_{k}\,,\text{ or}\\ e_{i-j_{k}}\in S_{k}^{\prime}\text{ and
}j_{k}<i\leq j_{k}+a_{k}^{\prime}\,,\text{ or}\\ e_{i-a_{k}^{\prime}}\in
S_{k}\text{ and }j_{k}+a_{k}^{\prime}<i\leq
a_{k}+a_{k}^{\prime}\,,\end{cases}$
for $k\in\\{1,2\\}$ and $(j_{1},j_{2})\in V(\mathcal{P}(a_{1},a_{2}))$. Here,
each $e_{i}$ refers to the $i^{\text{th}}$ horizontal or $i^{\text{th}}$
vertical edge of the corresponding path, where the orientation of the edge is
determined by the context.
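For a single orientation $k$, the case analysis above amounts to splicing the shifted indices of $S_{k}^{\prime}$ into those of $S_{k}$. The following Python sketch is an illustrative helper, not part of the paper; it assumes $1$-based edge indices represented as sets of integers:

```python
def insert_pair_component(S, S_prime, a_prime, j):
    """Insert one component S' (on a path with a' edges of this orientation)
    into S at position j, following the three cases of the definition:
    indices of S up to j are kept, indices of S' are shifted by j, and
    indices of S beyond j are shifted by a'."""
    return ({i for i in S if i <= j}
            | {i + j for i in S_prime}
            | {i + a_prime for i in S if i > j})
```

Applying this with $k=1$ and $k=2$ at positions $j_{1}$ and $j_{2}$ respectively yields $(S_{1}^{\prime\prime},S_{2}^{\prime\prime})$.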
###### Lemma 5.15.
Let $(S_{1},S_{2})$ and $(S_{1}^{\prime},S_{2}^{\prime})$ be compatible pairs
on $\mathcal{P}(a_{1},a_{2})$ and
$\mathcal{P}(a_{1}^{\prime},a_{2}^{\prime})$, respectively, and suppose that
$(S_{1}^{\prime},S_{2}^{\prime})$ has non-spanning shadows. Then for any
$(j_{1},j_{2})\in V(\mathcal{P}(a_{1},a_{2}))$, the insertion
$(S_{1}^{\prime\prime},S_{2}^{\prime\prime})$ of
$(S_{1}^{\prime},S_{2}^{\prime})$ into $(S_{1},S_{2})$ at position
$(j_{1},j_{2})$ is a compatible pair on
$\mathcal{P}(a_{1}+a_{1}^{\prime},a_{2}+a_{2}^{\prime})$.
###### Proof.
Since $(S_{1}^{\prime},S_{2}^{\prime})$ is a compatible pair on
$\mathcal{P}(a_{1}^{\prime},a_{2}^{\prime})$, then either
$rS_{1}^{\prime}<a_{2}^{\prime}$ or $rS_{2}^{\prime}<a_{1}^{\prime}$. Without
loss of generality, suppose $rS_{1}^{\prime}<a_{2}^{\prime}$.
We can then see that for $e_{i}\in S_{1}^{\prime\prime}$ with $1\leq i\leq
j_{1}$, we have
$|\operatorname{sh}(e_{i};S_{1}^{\prime\prime})|\leq|\operatorname{sh}(e_{i};S_{1})|+|\operatorname{sh}(e_{1};S_{1}^{\prime\prime})|<|\operatorname{sh}(e_{i};S_{1})|+a_{2}^{\prime}\,.$
Similarly, for $j_{2}+a_{2}^{\prime}<i\leq a_{2}+a_{2}^{\prime}$, we have
$|\operatorname{sh}(e_{i};S_{2}^{\prime\prime})|\leq|\operatorname{sh}(e_{i-a_{2}^{\prime}};S_{2})|+|\operatorname{sh}(e_{a_{2}^{\prime}};S_{2}^{\prime\prime})|\leq|\operatorname{sh}(e_{i-a_{2}^{\prime}};S_{2})|+a_{1}^{\prime}\,.$
The lengths of the shadows at the other edges, with the corresponding shift in
indices, are the same as in the original paths. Thus the horizontal and
vertical shadows will never intersect, so the pair is indeed compatible. ∎
This allows us to combine our results on atomic paths in order to handle any
collection of subpaths in $\mathcal{F}^{\prime}(\mathcal{D}_{n})$.
###### Theorem 5.16.
If $\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})$, then $\Phi(\beta)$ is a
compatible pair.
###### Proof.
Let $(S_{1},S_{2})=\Phi(\beta)$. We proceed by induction on the number of
atomic components in $\beta$, which we denote by $t$. Note that if $t=0$, then
$S_{2}$ is empty and so $(S_{1},S_{2})$ is compatible.
If $\beta$ has one atomic component, the compatibility follows directly from
5.13.
If we add an additional singular edge to $\beta$, then the resulting pair is a
subset of the original, and hence compatible. Otherwise, suppose we add an
atomic component to $\beta$ that appears to the left of all other atomic
components. Then we can view the resulting path as the insertion of the atomic
path into the previous compatible pair. Since both paths involved in the
insertion are compatible, we can conclude using 5.15 that $\Phi(\beta)$ is
compatible. ∎
###### Lemma 5.17.
Every compatible pair in $\mathcal{C}_{n}$ is the image of some
$\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})$.
###### Proof.
We know from 5.16 that $\Phi(\beta)$ for
$\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})$ is a compatible pair in
$\mathcal{C}_{n}$. Moreover, we know from 4.2 and 5.3 that
$\mathcal{F}^{\prime}(\mathcal{D}_{n})$ and the set of compatible pairs in
$\mathcal{C}_{n}$ are equinumerous, since both are the result of the
substitution $X_{1}=X_{2}=1$. Lastly, we know from the work of Lin [16,
Proposition 4.7.3] that $\Phi$ is injective. Thus $\Phi$ is also surjective. ∎
Combining the results proven above with the work of Lee-Schiffler and Lee-Li-
Zelevinsky, we can prove the bijectivity of the map $\Phi$.
###### Proof of 2.8.
By 5.16 and 5.17, we can see that $\Phi$ is a bijection from
$\mathcal{F}^{\prime}(\mathcal{D}_{n})$ onto the set of compatible pairs in
$\mathcal{C}_{n}$. Using the work of Lin [16, Proposition 4.7.3], we
additionally see that $\Phi$ is weight-preserving. ∎
## 6\. Quantization of Colored Dyck Subpaths
In Lee and Schiffler’s work on expanding rank-$2$ cluster variables, they
showed that the coefficients of the Laurent expansion could be obtained by
taking sums over certain collections of colored Dyck subpaths. Each such
collection was taken to have weight $1$. In order to quantize this
construction, we instead weight each collection by a power of a formal
variable $q$. We then show that an analogous formula holds for rank-$2$
quantum cluster variables with an appropriate choice of $q$-weights, where
setting $q=1$ recovers Lee and Schiffler’s formula.
As discussed by Lee-Li-Rupel-Zelevinsky [10, Section 3], the combinatorial
formula for greedy basis elements of a rank-$2$ cluster algebra cannot be
extended to the quantum setting by merely weighting compatible pairs by a
power of $q$, since positivity of these elements can fail in general. However,
Dylan Rupel [19, Corollary 5.4] established that the classical rank-$2$
formula for the quantum cluster variables, which are a proper subset of the
greedy basis, given by Lee-Li-Zelevinsky [11] could be extended in this way.
We proceed by applying the bijection established in the previous section to
Rupel’s expansion formula over weighted compatible pairs associated to quantum
cluster variables.
An advantage of 2.11 is that it requires fewer computations compared to
Rupel’s formula in [19, Corollary 5.4]. Our formula requires $O(|\beta|^{2})$
computations, where $|\beta|$ is the number of subpaths in
$\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})$. Rupel’s formula requires
$\binom{c_{n-1}+c_{n-2}}{2}$ computations, which is generally much larger.
Moreover, without knowledge of the bijection between collections of colored
subpaths and compatible pairs, it is unclear how to generate all compatible
pairs. Naïvely, one must consider all collections of edges of
$\mathcal{C}_{n}$ and check that the compatibility condition holds for each
pair of edges. It thus seems more efficient to generate all collections in
$\mathcal{F}^{\prime}(\mathcal{D}_{n})$ and compute their quantum weights
using 2.9 than to generate all compatible pairs on $\mathcal{C}_{n}$ and
compute quantum weights using [19, Corollary 5.4].
Figure 7. The top path depicts the compatible pair on $\mathcal{C}_{6}$
obtained by applying $\Phi$ to $\beta_{\emptyset}$ on $\mathcal{D}_{6}$, which
has $S_{1}=\\{\eta_{1},\eta_{2},\dots,\eta_{21}\\}$ and $S_{2}=\emptyset$. The
bottom path depicts the compatible pair obtained by applying $\Phi$ to the
collection of colored Dyck subpaths shown in Figure 4.
In our proof of 2.11, we translate each compatible pair into a finite word so
that we can refer to the language of combinatorics on words. We will work over
the alphabet $A=\\{h,v,H,V\\}$, with $A^{*}$ denoting the set of finite words
on $A$. For the purposes of this section, we represent a compatible pair by a
word in $A^{*}$. The letters $h$ and $H$ (resp., $v$ and $V$) represent
horizontal (resp., vertical) edges, with the capital letter denoting those
edges in $S_{1}$ (resp., $S_{2}$).
###### Example 6.1.
The word in $A^{*}$ corresponding to the top compatible pair in Figure 7 is
$H^{3}vH^{3}vH^{2}vH^{3}vH^{3}vH^{2}vH^{3}vH^{2}v\,.$
The word in $A^{*}$ corresponding to the bottom compatible pair in Figure 7 is
$h^{3}VhHhvh^{2}Vh^{3}Vh^{3}VHhvh^{3}Vh^{2}V\,.$
We define a morphism $w_{q}:\mathbb{Z}A^{*}\to\mathbb{Z}$, where
$\mathbb{Z}A^{*}$ is the group of formal $\mathbb{Z}$-sums of words in
$A^{*}$. The function $w_{q}$ is first defined on words of length $2$ in
$A^{*}$ as follows:
$w_{q}(hv)=w_{q}(Hv)=w_{q}(hV)=1\,,\;\;\;w_{q}(Hh)=w_{q}(vV)=r\,,\;\;\;w_{q}(VH)=r^{2}-1\,,$
and for $x,y\in A$, $w_{q}(xy)=-w_{q}(yx)$. Note that in particular, this
implies that
${w_{q}(hh)=w_{q}(HH)=w_{q}(vv)=w_{q}(VV)=0}$. The function $w_{q}$ naturally
extends to formal $\mathbb{Z}$-sums of words of length $2$ on $A$. It is then
extended to words of larger length by taking the formal sum over all length
$2$ (not necessarily contiguous) subwords with multiplicity, and again
extended naturally to formal $\mathbb{Z}$-sums of any words on $A$. We
sometimes apply $w_{q}$ to a compatible pair; in this case, we interpret
$w_{q}$ as being applied to the corresponding word on $A$. We refer to $w_{q}$
as the _quantum weight_ of a word or compatible pair. Computing the value of
$w_{q}$ on the word associated to a compatible pair in this way is essentially
calculating Rupel’s weighting on compatible pairs associated to quantum
cluster variables [19].
###### Example 6.2.
Let $t_{1}$ (resp., $t_{2}$) denote the word in $A^{*}$ corresponding to the
top (resp., bottom) compatible pair in Figure 7, computed in 6.1. Note that
here we have $r=3$. Looking at all the length-$2$ subwords of $t_{1}$, we can
see that $t_{1}$ has $98$ instances of the subword $Hv$ and $70$ instances of
the subword $vH$. The only other length-$2$ subwords of $t_{1}$ are $HH$ and
$vv$, and we have $w_{q}(HH)=w_{q}(vv)=0$. Thus we can compute
$w_{q}(t_{1})=98w_{q}(Hv)+70w_{q}(vH)=98\cdot 1+70\cdot(-1)=28\,.$
Via a similar computation, we have
$\displaystyle w_{q}(t_{2})$
$\displaystyle=69w_{q}(hV)+45w_{q}(Vh)+21w_{q}(Hh)+17w_{q}(hH)+7w_{q}(vV)+5w_{q}(Vv)$
$\displaystyle\;\;\;+5w_{q}(VH)+7w_{q}(HV)+19w_{q}(hv)+19w_{q}(vh)+3w_{q}(Hv)+w_{q}(vH)$
$\displaystyle=69-45+21\cdot 3-17\cdot 3+7\cdot 3-5\cdot 3+5\cdot 8-7\cdot
8+19-19+3-1$ $\displaystyle=28$
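The brute-force evaluation of $w_{q}$ used in 6.2 — summing the weights of all $\binom{|u|}{2}$ length-$2$ (not necessarily contiguous) subwords — can be checked computationally. The following Python sketch is illustrative only, not part of the paper's formal development; $r$ is passed as an integer:

```python
from itertools import combinations

def _pair_weight(x, y, r):
    """Antisymmetric weight on ordered pairs of letters from A = {h, v, H, V};
    pairs of equal letters (and any unlisted pair) have weight 0."""
    base = {('h', 'v'): 1, ('H', 'v'): 1, ('h', 'V'): 1,
            ('H', 'h'): r, ('v', 'V'): r, ('V', 'H'): r * r - 1}
    if (x, y) in base:
        return base[(x, y)]
    if (y, x) in base:
        return -base[(y, x)]
    return 0

def quantum_weight(word, r):
    """Sum of pair weights over all ordered length-2 subwords of the word,
    with multiplicity."""
    return sum(_pair_weight(word[i], word[j], r)
               for i, j in combinations(range(len(word)), 2))
```

For the words $t_{1}$ and $t_{2}$ of 6.1 with $r=3$, this returns $28$ in both cases, matching the calculations in 6.2.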
As we will show, there are more efficient methods for computing these weights
than looking at all length-$2$ subwords.
For a word $u\in A^{*}$ and a letter $x\in A$, let $(u)_{x}$ denote the number
of instances of $x$ in $u$. In order to find a compact formula for the weights
corresponding to collections of subpaths, we first compute the quantum weight
of the empty collection.
###### Lemma 6.3.
Let $\beta_{\emptyset}\in\mathcal{F}^{\prime}(\mathcal{D}_{n})$ be the empty
collection of colored Dyck subpaths on $\mathcal{D}_{n}$. For all $n\geq 3$,
we have
$w_{q}\left(\Phi_{n}(\beta_{\emptyset})\right)=c_{n-1}+c_{n-2}-1\,.$
###### Proof.
We proceed by induction on $n$. For the base case $n=3$, we have
$w_{q}\left(\Phi_{3}(\beta_{\emptyset})\right)=w_{q}(H^{r}v)=rw_{q}(Hv)+\binom{r}{2}w_{q}(HH)=r=c_{3}+c_{2}-1\,.$
We now proceed to the inductive step. Let $\psi$ be the morphism $\\{H\mapsto
Hv\\}$. Observe that for a word $u\in\\{H,v\\}^{*}$ that starts with $H$, ends
with $v$, and has no consecutive instances of $v$, we have
$w_{q}(\psi(u))=w_{q}(u)+(u)_{H}\,.$
Let $u_{j}$ be the word associated to $\Phi_{j}(\beta_{\emptyset})$. Then by
3.3 and 4.5, $u_{n+1}$ can be obtained by applying the morphism
$\chi=\\{H\mapsto H^{r}v,v\mapsto H^{r-1}v\\}$ to $\psi^{-1}(u_{n})$. We can
readily calculate
$\displaystyle
w_{q}(\chi(HH))=w_{q}(H^{r}vH^{r}v)=2r=w_{q}(HH)+2w_{q}(\chi(H))\,,$
$\displaystyle
w_{q}(\chi(vv))=w_{q}(H^{r-1}vH^{r-1}v)=2r-2=w_{q}(vv)+2w_{q}(\chi(v))\,,$
$\displaystyle
w_{q}(\chi(Hv))=w_{q}(H^{r}vH^{r-1}v)=2r=w_{q}(Hv)+w_{q}(H^{r}v)+w_{q}(H^{r-1}v)\,,$
$\displaystyle
w_{q}(\chi(vH))=w_{q}(H^{r-1}vH^{r}v)=2r-2=w_{q}(vH)+w_{q}(H^{r-1}v)+w_{q}(H^{r}v)\,.$
Moreover, $\psi^{-1}(u_{n})$ has $c_{n-1}-c_{n-2}$ instances of $H$ and
$c_{n-2}$ instances of $v$. Therefore, we have
$\displaystyle w_{q}(\Phi_{n+1}(\beta_{\emptyset}))$
$\displaystyle=w_{q}(\chi(\psi^{-1}(u_{n})))$
$\displaystyle=w_{q}(\psi^{-1}(u_{n}))+(c_{n-1}-c_{n-2})w_{q}(\chi(H))+c_{n-2}w_{q}(\chi(v))$
$\displaystyle=\left(w_{q}(u_{n})-c_{n-2}\right)+r(c_{n-1}-c_{n-2})+(r-1)c_{n-2}$
$\displaystyle=\left(c_{n-1}+c_{n-2}-1-c_{n-2}\right)+c_{n}$
$\displaystyle=c_{n}+c_{n-1}-1\,.\qed$
Having calculated the weight of the empty collection of paths in
$\mathcal{F}^{\prime}(\mathcal{D}_{n})$, we wish to calculate the quantum
weight when we add in colored subpaths. Note that for the word corresponding
to the compatible pair, this involves swapping out certain instances of $H$
for $h$ and $v$ for $V$. We now calculate how such a substitution affects the
quantum weight of the word.
###### Lemma 6.4.
Let $t_{1},u_{1},t_{2},u_{2},\dots,t_{s},u_{s}$ be words corresponding to
compatible pairs. Furthermore, suppose that $t_{i},u_{i}\in\\{H,v\\}^{*}$. Let
$\sigma$ be the morphism $\\{H\mapsto h,v\mapsto V\\}$. Then we have
$w_{q}\left(\prod_{i=1}^{s}t_{i}\sigma(u_{i})\right)=w_{q}\left(\prod_{i=1}^{s}t_{i}u_{i}\right)+\sum_{i=1}^{s}\sum_{j=1}^{s}(-1)^{\mathbbm{1}_{i<j}}\bigg{(}r(t_{j})_{H}\cdot(u_{i})_{h}+\big{(}r(t_{j})_{v}-r^{2}(t_{j})_{H}\big{)}\cdot(u_{i})_{V}\bigg{)}\,.$
###### Proof.
Since $w_{q}(hV)=w_{q}(Hv)$, we have
$w_{q}(u_{i})=w_{q}\left(\sigma(u_{i})\right)$ for all $i$. Hence we need only
calculate the change in value under $w_{q}$, after applying $\sigma$, of the
length-$2$ subwords with one letter from a $u_{i}$ and the other from a
$t_{j}$.
Let $U_{i}$ denote the change in the value under $w_{q}$ of the sum over all
length-$2$ subwords of $\prod_{i=1}^{s}t_{i}\sigma(u_{i})$ with one letter in
$\sigma(u_{i})$ and the other from some $t_{j}$. We then have
$\displaystyle U_{i}$
$\displaystyle=\sum_{j=1}^{s}(-1)^{\mathbbm{1}_{i<j}}\bigg{(}(t_{j})_{H}\cdot(u_{i})_{h}\cdot\left(w_{q}(Hh)-w_{q}(HH)\right)+(t_{j})_{H}\cdot(u_{i})_{V}\cdot\left(w_{q}(HV)-w_{q}(Hv)\right)$
$\displaystyle\quad\quad\quad\quad+(t_{j})_{v}\cdot(u_{i})_{h}\cdot\left(w_{q}(vh)-w_{q}(vH)\right)+(t_{j})_{v}\cdot(u_{i})_{V}\cdot\left(w_{q}(vV)-w_{q}(vv)\right)\bigg{)}$
$\displaystyle=\sum_{j=1}^{s}(-1)^{\mathbbm{1}_{i<j}}\bigg{(}r\big{(}(t_{j})_{H}\cdot(u_{i})_{h}+(t_{j})_{v}\cdot(u_{i})_{V}\big{)}-r^{2}(t_{j})_{H}\cdot(u_{i})_{V}\bigg{)}\,.\qed$
Applying the previous general result about compatible pairs to the specific
case of $\mathcal{C}_{n}$ and using the connection between
$\mathcal{F}^{\prime}(\mathcal{D}_{n})$ and compatible pairs on
$\mathcal{C}_{n}$, we can now prove the quantum cluster variable expansion
formula.
###### Proof of 2.11.
Adding a path to $\beta\in\mathcal{F}^{\prime}(\mathcal{D}_{n})$ corresponds
to applying the morphism $\sigma$ from 6.4 to the appropriate portion of the
associated word. Note that $|\beta_{i}|_{2}=(\beta_{i})_{h}$ and
$|\beta_{i}|_{1}=(\beta_{i})_{V}$. Similarly,
$|\overline{\beta_{j}}|_{2}=(\overline{\beta_{j}})_{H}$ and
$|\overline{\beta_{j}}|_{1}=(\overline{\beta_{j}})_{v}$. It follows from 6.4,
6.3, and the definition of $w_{q}$ for words in $A^{*}$ that
$w_{q}(\beta)=\gamma_{\omega}+\beta_{\omega}$, where the terms on the right
hand side are those in [19, Corollary 5.4]. Thus the expansion formula can be
deduced directly from [19, Corollary 5.4]. ∎
###### Example 6.5.
Let
$\beta=\\{\beta_{1},\dots,\beta_{6}\\}\in\mathcal{F}^{\prime}(\mathcal{D}_{6})$
be the collection of colored Dyck subpaths shown in Figure 4. Then we have
$\overline{\beta_{0}}=\overline{\beta_{1}}=\overline{\beta_{6}}=\emptyset$,
$\overline{\beta_{2}}=\\{\alpha_{4}\\}$, $\overline{\beta_{3}}=\\{v_{2}\\}$,
$\overline{\beta_{4}}=\\{\alpha_{14}\\}$, and
$\overline{\beta_{5}}=\\{v_{6}\\}$. Applying 2.11, we have
$\displaystyle
w_{q}(\beta)=(c_{5}+c_{4}-1)+3(15-4)-9(5-1)+3(5-1)+3(6-13)-9(2-4)+3(2-4)=28\,,$
which confirms the second calculation in 6.2.
## 7\. Further Directions
Many of our methods rely on the highly structured nature of the maximal Dyck
paths $\mathcal{C}_{n}$ and $\mathcal{D}_{n}$. We use this to better
understand the conditions for a set of edges to form a compatible pair on
$\mathcal{C}_{n}$, in particular deriving a criterion for compatibility in
terms of the sequences of consecutive vertical edges. While $\mathcal{C}_{n}$
is the relevant choice of maximal Dyck path for the cluster variables,
compatible pairs over arbitrary maximal Dyck paths were studied by Lee, Li,
and Zelevinsky [11] in their work on the greedy basis. One way to study
compatibility is in terms of forbidden edge sets, i.e., the minimal subsets of
edges which violate the compatibility condition for compatible pairs. It is
easy to verify that for the staircase Dyck path $\mathcal{P}(a,a)$, a set of
edges is compatible if and only if it does not contain a horizontal edge and
the vertical edge immediately following it. From the proof of the bijectivity
of $\Phi$, it follows that on $\mathcal{C}_{n}$ the forbidden edge sets are
the images under $\Phi$ of an $(m,w)$-brown path and the $c_{m-1}-wc_{m-2}$
edges preceding it. It could be interesting to study whether the criterion for
compatibility can also be reduced for other families of maximal Dyck paths.
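For the staircase path, the criterion just stated is easy to test directly. The following sketch is illustrative only; it assumes we index the $i$-th horizontal edge and the vertical edge immediately following it by the same integer $i$:

```python
def staircase_compatible(S1, S2, a):
    """Compatibility test on the staircase Dyck path P(a, a): a pair
    (S1, S2) of horizontal- and vertical-edge index sets is compatible
    iff no horizontal edge is chosen together with the vertical edge
    immediately following it, i.e. the index sets are disjoint."""
    return all(not (i in S1 and i in S2) for i in range(1, a + 1))
```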
While positivity fails in general for the quantum greedy basis [10, Section
3], it would be interesting to know under what conditions positivity holds.
Rupel’s formula [19, Corollary 5.4] establishes the positivity property for
the quantum cluster variables, but perhaps this is a special case of a more
general phenomenon. If so, these elements may also admit a quantum weighting
of the associated compatible pairs, similar to that given by Rupel.
## 8\. Acknowledgements
The author thanks her advisor, Lauren Williams, for introducing her to the
various interesting expansion formulas for low-rank cluster algebras and for
discussions at many stages of this research. The author is also grateful to
Kyungyong Lee, Feiyang Lin, Gregg Musiker, and Ralf Schiffler for their
helpful correspondence and comments about this work. She additionally thanks
Kyungyong Lee for suggesting the generalization of 2.4 to the case of cluster
algebras with coefficients. A portion of this work was completed while the
author was visiting l’Institut de Recherche en Informatique Fondamentale, and
the author extends her gratitude to Sylvie Corteel for hosting her. The author
was supported by NSF Graduate Research Fellowship and a Jack Kent Cooke
Foundation Graduate Fellowship.
## References
* [1] Jean Berstel, Aaron Lauve, Christophe Reutenauer, and Franco Saliola. _Combinatorics on Words: Christoffel Words and Repetitions in Words._ CRM Monographs Series, Vol. 27, American Mathematical Society (2008).
* [2] Philippe Caldero and Frédéric Chapoton. _Cluster algebras as Hall algebras of quiver representations._ Commentarii Mathematici Helvetici, 81(3):595–616 (2006).
* [3] Ilke Canakci and Philipp Lampe. _An expansion formula for type A and Kronecker quantum cluster algebras._ Journal of Combinatorial Theory, Series A, 171:105132 (2020).
* [4] Harold Scott MacDonald Coxeter. _Introduction to Geometry._ Second Edition, John Wiley & Sons Inc, New York (1969).
* [5] Philippe Di Francesco and Rinat Kedem. _Discrete non-commutative integrability: the proof of a conjecture by M. Kontsevich._ International Mathematics Research Notices, 21:4042–4063 (2010).
* [6] Sergey Fomin, Lauren Williams, and Andrei Zelevinsky. _Introduction to Cluster Algebras. Chapters 1-3._ arXiv:1608.05735 [math.CO], (2016).
* [7] Sergey Fomin and Andrei Zelevinsky. _Cluster algebras I: foundations._ Journal of the American Mathematical Society, 15(2):497--529 (2002).
* [8] Sergey Fomin and Andrei Zelevinsky. _Cluster algebras IV: coefficients._ Compositio Mathematica, 143(1):112--164 (2007).
* [9] Kyungyong Lee and Li Li. _On natural maps from strata of quiver Grassmannians to ordinary Grassmannians._ Noncommutative Birational Geometry, Representations and Combinatorics, 592:199--214 (2013).
* [10] Kyungyong Lee, Li Li, Dylan Rupel, and Andrei Zelevinsky. _Greedy bases in rank 2 quantum cluster algebras._ Proceedings of the National Academy of Sciences, 111(27):9712--9716 (2014).
* [11] Kyungyong Lee, Li Li, and Andrei Zelevinsky. _Greedy elements in rank $2$ cluster algebras._ Selecta Mathematica, 20:57--82 (2014).
* [12] Kyungyong Lee and Ralf Schiffler. _A combinatorial formula for rank $2$ cluster variables._ Journal of Algebraic Combinatorics, 37(1):67--85 (2011).
* [13] Kyungyong Lee and Ralf Schiffler. _Proof of a positivity conjecture of M. Kontsevich on non-commutative cluster variables._ Compositio Mathematica, 148(6):1821--1832 (2012).
* [14] Kyungyong Lee and Ralf Schiffler. _Positivity for cluster algebras of rank 3._ Publications of the Research Institute for Mathematical Sciences, 49(3):601--649 (2013).
* [15] Kyungyong Lee and Ralf Schiffler. _Positivity for cluster algebras._ Annals of Mathematics, 182(1):73--125 (2015).
* [16] Feiyang Lin. _On rank-two and affine cluster algebras._ HMC Senior Theses, 251 (2021).
* [17] Georg Pick. _Geometrisches zur zahlenlehre._ Sitzungsberichte des deutschen naturwissenschaftlich-medicinischen Vereines für Böhmen ‘‘Lotos" in Prag, 19:311--319 (1899).
* [18] Dylan Rupel. _Proof of the Kontsevich non-commutative cluster positivity conjecture._ Comptes Rendus Mathematique, 350(21--22):929--932 (2012).
* [19] Dylan Rupel. _Rank two non-commutative Laurent phenomenon and pseudo-positivity._ Algebraic Combinatorics, 2(6): 1239--1273 (2019).
* [20] Dylan Rupel. _On a quantum analog of the Caldero--Chapoton formula._ International Mathematics Research Notices, 14:3207--3236 (2011).
# Hubble Space Telescope transmission spectroscopy for the temperate sub-
Neptune TOI-270d: a possible hydrogen-rich atmosphere containing water vapour
Thomas Mikal-Evans Max Planck Institute for Astronomy, Königstuhl 17, D-69117
Heidelberg, Germany Nikku Madhusudhan Institute of Astronomy, University of
Cambridge, Madingley Road, Cambridge CB3 0HA, UK Jason Dittmann Department
of Astronomy, University of Florida, 211 Bryant Space Science Center,
Gainesville, FL 32611, USA Maximilian N. Günther European Space Agency (ESA),
European Space Research and Technology Centre (ESTEC), Keplerlaan 1, 2201 AZ
Noordwijk, The Netherlands Luis Welbanks School of Earth & Space Exploration,
Arizona State University, Tempe, AZ, 85257, USA Vincent Van Eylen Mullard
Space Science Laboratory, University College London, Holmbury St Mary,
Dorking, Surrey RH5 6NT, UK Ian J. M. Crossfield Department of Physics and
Astronomy, University of Kansas, Lawrence, KS, USA Tansu Daylan Department of
Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08544
Laura Kreidberg Max Planck Institute for Astronomy, Königstuhl 17, D-69117
Heidelberg, Germany
(Received July 5, 2022; Revised October 30, 2022; Accepted November 22, 2022)
###### Abstract
TOI-270 d is a temperate sub-Neptune discovered by the Transiting Exoplanet
Survey Satellite (TESS) around a bright ($J=9.1$ mag) M3V host star. With an
approximate radius of $2\,R_{\earth}$ and equilibrium temperature of 350 K,
TOI-270 d is one of the most promising small exoplanets for atmospheric
characterisation using transit spectroscopy. Here we present a primary transit
observation of TOI-270 d made with the Hubble Space Telescope (HST) Wide Field
Camera 3 (WFC3) spectrograph across the 1.126-1.644 $\mu\textnormal{m}$
wavelength range, and a 95% credible upper limit of $8.2\times 10^{-14}$ erg
s$^{-1}$ cm$^{-2}$ Å$^{-1}$ arcsec$^{-2}$ for the stellar Ly$\alpha$ emission
obtained using the Space Telescope Imaging Spectrograph (STIS). The
transmission spectrum derived
from the TESS and WFC3 data provides evidence for molecular absorption by a
hydrogen-rich atmosphere at ${4\sigma}$ significance relative to a featureless
spectrum. The strongest evidence for any individual absorber is obtained for
H2O, which is favoured at 3$\sigma$ significance. When retrieving on the WFC3
data alone and allowing for the possibility of a heterogeneous stellar
brightness profile, the detection significance of H2O is reduced to
2.8$\sigma$. Further observations are therefore required to robustly determine
the atmospheric composition of TOI-270 d and assess the impact of stellar
heterogeneity. If confirmed, our findings would make TOI-270 d one of the
smallest and coolest exoplanets to date with detected atmospheric spectral
features.
††journal: AAS Journals††software: NumPy (van der Walt et al., 2011), SciPy
(Virtanen et al., 2020), Matplotlib (Hunter, 2007), emcee (Foreman-Mackey et
al., 2013), batman (Kreidberg, 2015), Astropy (Astropy Collaboration et al.,
2013, 2018), pysynphot (STScI Development Team, 2013), PyMultinest (Buchner et
al., 2014b)††facilities: HST(WFC3), Spitzer(IRAC), TESS ††thanks: ESA
Research Fellow††thanks: NHFP Sagan Fellow††thanks: LSSTC Catalyst Fellow
## 1 Introduction
Sub-Neptunes are the population of planets with radii smaller than that of
Neptune and larger than approximately $1.8\,R_{\earth}$. The latter
corresponds to the location of the “radius valley” uncovered by the NASA
Kepler survey, which divides small planets between those with volatile-poor
and volatile-rich compositions (Fulton et al., 2017; Van Eylen et al., 2018;
Petigura et al., 2022). Measured densities for the volatile-rich sub-Neptunes
can be explained by a variety of compositions with varying core-mass fractions
and envelopes of volatiles such as hydrogen and water (Figueira et al., 2009;
Rogers & Seager, 2010). The measurement of sub-Neptune atmospheric spectra can
help break this degeneracy by providing a direct probe of the composition of
the uppermost layers of the outer envelope.
The observing strategy employed by the NASA Transiting Exoplanet Survey
Satellite (TESS) (Ricker et al., 2015) since 2018 is well suited for
discovering short-period planets that subsequently make excellent targets for
detailed atmospheric studies (Sullivan et al., 2015; Barclay et al., 2018;
Guerrero et al., 2021; Kunimoto et al., 2022). Most significantly, TESS has
observed around 90% of the sky, meaning that a substantial fraction of all
nearby bright stars have now been monitored for planetary transits with
complete phase coverage out to orbital periods of approximately 10 days. This
includes the majority of nearby M dwarfs, around which smaller planets such as
sub-Neptunes can be more readily detected compared to those orbiting earlier
spectral type hosts, thanks to the relatively deep transit signals that they
produce. As of October 2022, the NASA Exoplanet
Archive111https://exoplanetarchive.ipac.caltech.edu lists seventeen transiting
exoplanets with radii falling in the 1.8-4$\,R_{\earth}$ sub-Neptune range
that have been validated around bright ($J<10$ mag) M
dwarfs. Of these, TESS has discovered twelve, namely: AU Mic b and AU Mic c
(Plavchan et al., 2020; Martioli et al., 2021); TOI-1634 b (Cloutier et al.,
2021; Hirano et al., 2021); TOI-270 c and TOI-270 d (Günther et al., 2019);
TOI-776 b and TOI-776 c (Luque et al., 2021); TOI-1201 b (Kossakowski et al.,
2021); TOI-1231 b (Burt et al., 2021); TOI-1266 b (Demory et al., 2020;
Stefánsson et al., 2020); LTT3780 c (Cloutier et al., 2020; Nowak et al.,
2020); and GJ 3090 b (Almenara et al., 2022). The other five planets that have
been validated to date around bright M dwarfs and with radii between
approximately 1.8-4$\,R_{\earth}$ are: GJ 436 b (Gillon et al., 2007); GJ 1214
b (Charbonneau et al., 2009); K2-3 b and K2-3 c (Crossfield et al., 2015); and
K2-18 b (Montet et al., 2015). Meanwhile, there are currently 57 additional
sub-Neptune candidates that have been identified by TESS around bright M
dwarfs and that are listed as high priority targets for follow-up
confirmation.222https://exofop.ipac.caltech.edu/tess
This paper presents transmission spectroscopy measurements for the TESS-
discovered sub-Neptune TOI-270 d/L 231-32 d (Günther et al., 2019; Van Eylen
et al., 2021; Kaye et al., 2022), made using the Hubble Space Telescope Wide
Field Camera 3 (HST WFC3). TOI-270 d has a radius of $2.00\pm
0.05\,R_{\earth}$ measured in the TESS red-optical passband and a mass of
$4.20\pm 0.16\,M_{\earth}$, corresponding to a bulk density of $2.90\pm 0.24$
g cm$^{-3}$ (Kaye et al., 2022). It orbits an M3V host star at a distance of 0.07
AU with an orbital period of $11.4$ days. This places TOI-270 d just inside
the inner edge of the habitable zone, where it has an equilibrium temperature
of around 350 K assuming a Bond albedo of 30% and uniform day-night heat
redistribution.333Earth, Jupiter, Saturn, Uranus, and Neptune all have Bond
albedos close to 30%. There are two other transiting planets known to orbit
the same host star (Günther et al., 2019): TOI-270 b, a $1.3\,R_{\earth}$
super-Earth on a 0.03 AU orbit; and TOI-270 c, a $2.3\,R_{\earth}$ sub-Neptune
on a 0.05 AU orbit. The TOI-270 host star appears to be quiescent, with TESS
photometry showing no evidence for variability, either due to flares or
brightness fluctuations caused by stellar heterogeneities (i.e. faculae and
spots) rotating into and out of view (Günther et al., 2019). Additional
indicators — such as H$\alpha$, the S-index, and a lack of radial velocity
jitter (root-mean-square of $0.16\pm 0.23$ m s$^{-1}$) — are also consistent with a
low stellar activity level (Van Eylen et al., 2021). To further characterise
the host star, in this study we also present new constraints for the stellar
Ly$\alpha$ emission obtained with the HST Space Telescope Imaging Spectrograph
(STIS).
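The quoted equilibrium temperature follows from the standard relation $T_{\rm eq}=T_{\star}\sqrt{R_{\star}/2a}\,(1-A_{B})^{1/4}$ for uniform day-night heat redistribution. As a rough check, the Python sketch below uses approximate stellar parameters typical of an M3V star, which are assumptions for illustration and are not taken from this paper:

```python
R_SUN = 6.957e8  # solar radius in metres (IAU nominal value)
AU = 1.496e11    # astronomical unit in metres

def equilibrium_temperature(t_star, r_star_rsun, a_au, bond_albedo):
    """Equilibrium temperature assuming uniform day-night heat redistribution."""
    r_over_2a = r_star_rsun * R_SUN / (2.0 * a_au * AU)
    return t_star * r_over_2a ** 0.5 * (1.0 - bond_albedo) ** 0.25

# Assumed M3V parameters: T_eff ~ 3500 K, R ~ 0.38 R_Sun;
# TOI-270 d orbits at 0.07 AU, with a Bond albedo of 30% as in the text.
t_eq = equilibrium_temperature(3500.0, 0.38, 0.07, 0.30)
```

With these assumed inputs the result is close to 360 K, consistent with the approximate 350 K quoted above.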
The paper is organised into the following sections. Observations are described
in Section 2. Analyses of the WFC3 and STIS data are presented in Sections 3
and 4, respectively. Atmospheric retrieval analyses are presented in Section
5. Discussion and conclusions are given in Sections 6 and 7, respectively.
## 2 Observations
A transit of TOI-270 d was observed with HST on 2020 October 6 as part of
program GO-15814 (Mikal-Evans et al., 2019b). Data were acquired over three
consecutive HST orbits, with the first and third orbits providing,
respectively, pre-transit and post-transit reference levels for the stellar
brightness. The second HST orbit coincided with the planetary transit, which
has a duration of 2.117 hours (Kaye et al., 2022).
The transit observation was made using the infrared channel of WFC3 with the
G141 grism, covering the 1.126-1.644 $\mu\textnormal{m}$ wavelength range.
During each exposure, the spectral trace was allowed to drift along the cross-
dispersion axis of the detector using the round-trip spatial-scanning
observing mode (Deming et al., 2013; McCullough et al., 2014), with a scan
rate of 0.111 arcsec s-1. A $512\times 512$ pixel subarray containing the
target spectrum was read out for each exposure using the SPARS25 pattern with
seven non-destructive reads per exposure (NSAMP=7), corresponding to
individual exposure times of 138 seconds. With this setup, 15 exposures were
taken during the first HST orbit following target acquisition, and 16
exposures were taken in both the second and third HST orbits. A single shorter
exposure (NSAMP=3) was also taken at the end of the second and third HST
orbits, due to anticipated overrun of the target visibility window (i.e. the
target being lost from view before a full exposure could be completed). These
latter shorter exposures were ultimately discarded from the subsequent
analysis.
To characterise the Ly$\alpha$ emission of the host star, two additional
observations were made with HST on 2020 January 29 and 31 as part of the same
observing program, using HST STIS with the $52\times 0.05$ arcsec slit and the
G140M grating centered at 1,222 Å. For each STIS observation, data were
collected for 1,960 seconds using TIME-TAG mode. Both exposures were timed to
avoid transits of the three known planets in the TOI-270 system, as the goal
was to calibrate the absolute emission level of the host star.
## 3 WFC3 analysis: transmission spectroscopy
The WFC3 transit data were reduced using the custom Python code described
previously in Evans et al. (2016) and since used in numerous other studies
(e.g. Mikal-Evans et al., 2019a, 2020). Individual exposures were provided by
the WFC3 pipeline as FITS files with IMA suffixes, for which basic
calibrations have already been applied, such as flat fielding, linearity
correction, and bias subtraction. For a given calibrated IMA exposure,
background counts were determined for each non-destructive read by taking the
median value of a $20\times 120$ pixel box away from the target spectrum. For
each IMA frame, the differences between successive, background-subtracted,
non-destructive reads were computed. These read-differences correspond to
“stripes” of flux that span the cross-dispersion rows scanned between each
read. The flux-weighted cross-dispersion centroids of these stripes were
determined and all rows more than 50 pixels above and below the stripe
centroids were set to zero to mask contaminating contributions, such as
neighbouring stars and cosmic ray strikes. The masked stripes were then summed
together to form reconstructed data frames. For each of these reconstructed
data frames, the final flux-weighted cross-dispersion centroid was determined
and 1D spectra were extracted by summing all pixels along the cross-dispersion
axis within 120 pixels above and below the centroid row. The wavelength
solution was obtained by cross-correlating the 1D spectrum extracted from the
final exposure against a model stellar spectrum multiplied by the G141
throughput profile. The Python package pysynphot (STScI Development Team,
2013) was used to obtain the model stellar spectrum: namely, a Kurucz 1993
model spectrum with properties close to those reported by Günther et al.
(2019) for the TOI-270 host star ($T_{\textnormal{eff}}=3500$ K,
$\log_{10}g=4.88$ cgs, $[{\rm{Fe}/\rm{H}}]=-0.17$ dex).
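The read-difference reconstruction and 1D extraction described above can be sketched in a few lines of numpy. The function name, background-box placement, and array layout below are illustrative stand-ins, not the authors' actual code; only the 50-pixel mask half-width and 120-pixel extraction aperture are taken from the text.

```python
import numpy as np

def reconstruct_and_extract(reads, bg_rows=slice(0, 20), bg_cols=slice(0, 20),
                            mask_half=50, aperture_half=120):
    """Sketch of the WFC3 spatial-scan reduction described in the text.

    reads: array of shape (n_reads, ny, nx) holding the cumulative counts of
    the non-destructive reads of one IMA exposure (synthetic stand-in here).
    """
    reads = np.asarray(reads, dtype=float)
    rows = np.arange(reads.shape[1])
    frame = np.zeros(reads.shape[1:])
    for prev, curr in zip(reads[:-1], reads[1:]):
        # per-read background from the median of a box away from the spectrum
        diff = (curr - np.median(curr[bg_rows, bg_cols])) \
             - (prev - np.median(prev[bg_rows, bg_cols]))
        # flux-weighted cross-dispersion centroid of this read-difference stripe
        weights = np.clip(diff.sum(axis=1), 0.0, None)
        centroid = np.average(rows, weights=weights)
        # zero rows far from the stripe to mask neighbours and cosmic rays
        diff[np.abs(rows - centroid) > mask_half, :] = 0.0
        frame += diff
    # final centroid of the reconstructed frame, then 1D extraction
    weights = np.clip(frame.sum(axis=1), 0.0, None)
    centroid = np.average(rows, weights=weights)
    aperture = np.abs(rows - centroid) <= aperture_half
    return frame, frame[aperture, :].sum(axis=0)
```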
### 3.1 Broadband transit light curve
A broadband light curve was generated by integrating the flux for each 1D
spectrum across the $0.8$-$1.95\,\mu\textnormal{m}$ wavelength range, to
conservatively encompass the full G141 passband. The light curve was then
fitted by simultaneously modelling the planetary transit signal and
instrumental systematics.
For the planetary transit signal, we used the batman software package
(Kreidberg, 2015). The planetary parameters allowed to vary in the fitting
were: the planet-to-star radius ratio ($R_{p}/R_{\star}$); the deviation of
the transit mid-time ($\Delta T_{c}$) from the mid-time predicted by Kaye et
al. (2022); the normalised semimajor axis ($a/R_{\star}$); and the transit
impact parameter ($b=a\cos i/R_{\star}$, where $i$ is the orbital
inclination). Uniform priors were adopted for $R_{p}/R_{\star}$ and $\Delta
T_{c}$. The posterior distributions for $a/R_{\star}$ and $b$ obtained by Kaye
et al. (2022) were adopted as Gaussian priors in the light curve fitting,
namely: $a/R_{\star}=41.744\pm 0.527$ and $b=0.23\pm 0.08$. The orbital period
was held fixed to $P=11.38014$ d based on the value reported by Kaye et al.
(2022).
For the stellar brightness profile, we assumed a quadratic limb darkening law.
Both limb darkening coefficients were allowed to vary during the light curve
fitting, but due to the incomplete phase coverage of the transit, tight
Gaussian priors were adopted. The latter were obtained by providing the
measured values and uncertainties for the stellar properties reported by Van
Eylen et al. (2021) as input to the PyLDTK software package (Parviainen &
Aigrain, 2015), along with the G141 throughput profile. This resulted in
Gaussian priors of the form $u_{1}=0.1611\pm 0.0002$ and $u_{2}=0.1379\pm
0.0008$.
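These coefficients imply weak limb darkening for this M dwarf: the quadratic law with the quoted prior centres drops to only about 70% of the central intensity at the limb. A minimal sketch:

```python
import numpy as np

def quadratic_ld(mu, u1, u2):
    """Quadratic limb darkening law: I(mu)/I(1) = 1 - u1*(1-mu) - u2*(1-mu)**2,
    where mu is the cosine of the angle from disc centre."""
    mu = np.asarray(mu, dtype=float)
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

# Gaussian prior centres quoted in the text
u1, u2 = 0.1611, 0.1379
limb_intensity = quadratic_ld(0.0, u1, u2)  # relative intensity at the limb
```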
Table 1: TOI-270 d system properties and broadband light curve fit results. Fixed properties have been taken from Günther et al. (2019) and Kaye et al. (2022).

 | Parameter | Value
---|---|---
Fixed | $R_{\star}$ ($R_{\odot}$) | $0.380$
 | $P$ (d) | $11.38014$
 | $e$ | $0$
Free | Parameter | Posterior
 | $R_{p}/R_{\star}$ | $0.05278_{-0.00118}^{+0.00095}$
 | $a/R_{\star}$ | $41.80_{-0.59}^{+0.61}$
 | $b$ | $0.233_{-0.085}^{+0.089}$
 | $\Delta T_{1}$ (min) | $-4_{-8}^{+10}$
 | $u_{1}$ | $0.1611_{-0.0002}^{+0.0002}$
 | $u_{2}$ | $0.1378_{-0.0009}^{+0.0009}$
 | $\beta$ | $3.0_{-0.4}^{+0.5}$
Derived | Parameter | Posterior
 | $(R_{p}/R_{\star})^{2}$ (ppm) | $2786_{-124}^{+101}$
 | $R_{p}$ ($R_{\earth}$) | $2.19_{-0.07}^{+0.07}$
 | $\cos i$ | $0.0056_{-0.0020}^{+0.0021}$
 | $i$ ($^{\circ}$) | $89.68_{-0.12}^{+0.12}$
 | $T_{1}$ (JD$_{\textnormal{UTC}}$) | $2459129.34868_{-0.00544}^{+0.00691}$
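The derived quantities in Table 1 follow from the fitted parameters by simple geometry. As a worked check, using the median posterior values from the table (constants rounded for illustration):

```python
import math

R_SUN_OVER_R_EARTH = 109.2  # approximate ratio of solar to Earth radii

rp_rs = 0.05278   # planet-to-star radius ratio (Table 1)
a_rs = 41.80      # normalised semimajor axis a/R*
b = 0.233         # transit impact parameter
r_star = 0.380    # stellar radius in solar radii (Günther et al. 2019)

depth_ppm = rp_rs ** 2 * 1e6                   # transit depth in ppm
rp_earth = rp_rs * r_star * R_SUN_OVER_R_EARTH  # planet radius in Earth radii
cos_i = b / a_rs                                # from b = (a/R*) cos i
i_deg = math.degrees(math.acos(cos_i))          # orbital inclination
```

These reproduce the "Derived" rows of Table 1 to the quoted precision.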
We modelled the systematics using a Gaussian process (GP) model, with
covariance described by the sum of two squared-exponential kernels that took
time $t$ and the HST orbital phase $\varphi$ as input variables. For the GP
mean function, the transit signal was multiplied by a deterministic
systematics signal of the form $S_{d}(t,\varphi)=(c_{d}+\gamma
t)\,D(t,\varphi)$, where: $c_{d}$ is a normalisation constant for scan
direction $d$ (i.e. forward or backward); $\gamma$ is the slope of a linear
$t$-dependent baseline trend shared by the two scan directions; and $D$ is an
analytic model for the electron charge-trapping systematics that affect the
WFC3 detector (Zhou et al., 2017). For the latter, the same double-exponential
ramp function described in Mikal-Evans et al. (2022) was adopted, which has
five free parameters ($a_{1}$, $a_{2}$, $a_{3}$, $a_{4}$, $a_{5}$). To allow
for additional high-frequency noise not captured by the systematics model, the
Gaussian measurement uncertainties were allowed to vary in the fitting via a
free parameter $\beta$ that rescaled the photon noise. For further details on
the implementation of this systematics treatment, see Mikal-Evans et al.
(2021).
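The GP covariance described above can be sketched as follows; the hyperparameter names and the explicit white-noise diagonal term are illustrative, not the authors' parameterisation.

```python
import numpy as np

def sq_exp(x, amp, length):
    """Squared-exponential kernel evaluated on a 1D input vector."""
    d = x[:, None] - x[None, :]
    return amp ** 2 * np.exp(-0.5 * (d / length) ** 2)

def systematics_cov(t, phi, a_t, l_t, a_phi, l_phi, white):
    """Sum of two squared-exponential kernels, one in time t and one in HST
    orbital phase phi, plus a white-noise term on the diagonal."""
    return (sq_exp(np.asarray(t, float), a_t, l_t)
            + sq_exp(np.asarray(phi, float), a_phi, l_phi)
            + white ** 2 * np.eye(len(t)))
```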
Figure 1: Circles show the raw WFC3 broadband transit light curve for TOI-270
d as the relative variation of received flux over time. There is an offset in
the flux levels measured for the forward scan and backward scan exposures. Red
lines show the maximum-likelihood light curve model. The root-mean-square
(RMS) of the model fit residuals is printed in the lower right corner of the
axis. Yellow shading indicates the time between transit ingress and egress.
Figure 2: (a) Circles show the measured broadband light curve after dividing
through by the systematics component of the maximum-likelihood model.
Measurement uncertainties are approximately the same size as the circles. Red
line shows the transit component of the maximum-likelihood model. (b) Circles
show residuals of the maximum-likelihood model with $1\sigma$ photon noise
measurement uncertainties. Blue square with errorbar on the right of the axis
indicates the size of the rescaled measurement uncertainties obtained from the
model fit, i.e. the photon noise multiplied by the maximum likelihood $\beta$
value.
Uniform priors were adopted for all transit signal parameters, with the
exception of the Gaussian priors described above for $a/R_{\star}$, $b$,
$u_{1}$, and $u_{2}$. Marginalisation of the posterior distribution was
performed using affine-invariant Markov chain Monte Carlo, as implemented by
the emcee software package (Foreman-Mackey et al., 2013), using 300 walkers
and 1000 steps, with the first 500 steps discarded as burn-in. The maximum-
likelihood model is shown in Figure 1 and the systematics-corrected light
curve with corresponding transit model is shown in Figure 2. The root-mean-
square of residuals for the maximum-likelihood model is 116 ppm, equivalent to
$2.7\times$ the photon noise. Inferred model parameter distributions are
reported in Table 1.
### 3.2 Spectroscopic transit light curves
Following the broadband light curve fitting, spectroscopic light curves were
produced for 28 equal-width channels spanning the
1.126-1.644 $\mu\textnormal{m}$ wavelength range using the methodology
described in Mikal-Evans et al. (2021). In brief, a lateral shift in
wavelength and a vertical stretch in flux were applied to the individual 1D
spectra to minimise the residuals relative to a template spectrum. The latter
was chosen to be the 1D spectrum extracted from the final exposure, as it was
used to determine the wavelength solution (Section 3). The resulting residuals
were then binned into the 28 wavelength channels. To produce the final light
curves, transit signals were added to these time series, with the same
properties as those derived from the broadband light curve fit (Section 3.1),
but with limb darkening coefficients fixed to values appropriate for the
wavelength range of each spectroscopic channel, determined in the same manner
described above for the broadband light curve. Using this process, common-mode
systematics were effectively removed from the final spectroscopic light
curves, as well as systematics associated with pointing drifts along the
dispersion axis of the detector.
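The channel boundaries listed in Table 2 follow directly from dividing the G141 range into 28 equal-width bins:

```python
import numpy as np

# Edges of the 28 equal-width spectroscopic channels across 1.126-1.644 micron
edges = np.linspace(1.126, 1.644, 28 + 1)
width = edges[1] - edges[0]  # channel width in micron
```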
Fitting of the common-mode-corrected spectroscopic light curves proceeded in a
similar manner to the broadband light curve fitting. The only differences were
that HST orbital phase $\varphi$ was the only input variable provided for the
squared-exponential covariance kernel, and for the mean function the transit
signal was multiplied by a $t$-dependent linear trend to capture the remaining
systematics not removed by the common-mode correction. As for the broadband
light curve fit, white noise rescaling factors $\beta$ were included as free
parameters for each spectroscopic light curve, with uniform priors. The only
transit parameters allowed to vary in the spectroscopic light curve fits were
$R_{p}/R_{\star}$ and the quadratic limb darkening coefficients ($u_{1}$,
$u_{2}$) for each of the separate spectroscopic light curves. Uniform priors
were adopted for the $R_{p}/R_{\star}$ values and tight Gaussian priors
obtained using PyLDTK were adopted for the $u_{1}$ and $u_{2}$ values, as
described in Section 3.1. Alternative treatments of the stellar limb darkening
were found to have a negligible effect on the final results and are described
in Appendix A. Values for $T_{c}$, $a/R_{\star}$, and $b$ were held fixed to
the maximum-likelihood values determined from the broadband light curve fit.
Table 2: Spectroscopic light curve fit results.

Wavelength ($\mu$m) | $R_{p}/R_{\star}$ | $\left(R_{p}/R_{\star}\right)^{2}$ (ppm)
---|---|---
1.126-1.144 | $0.05299_{-0.00094}^{+0.00097}$ | $2808_{-99}^{+104}$
1.144-1.163 | $0.05305_{-0.00075}^{+0.00073}$ | $2815_{-79}^{+78}$
1.163-1.181 | $0.05239_{-0.00071}^{+0.00071}$ | $2745_{-74}^{+75}$
1.181-1.200 | $0.05215_{-0.00077}^{+0.00077}$ | $2720_{-80}^{+81}$
1.200-1.218 | $0.05270_{-0.00061}^{+0.00059}$ | $2777_{-64}^{+62}$
1.218-1.237 | $0.05169_{-0.00077}^{+0.00069}$ | $2671_{-79}^{+72}$
1.237-1.255 | $0.05226_{-0.00066}^{+0.00069}$ | $2731_{-69}^{+73}$
1.255-1.274 | $0.05187_{-0.00073}^{+0.00066}$ | $2691_{-75}^{+69}$
1.274-1.292 | $0.05215_{-0.00050}^{+0.00055}$ | $2720_{-51}^{+58}$
1.292-1.311 | $0.05341_{-0.00059}^{+0.00061}$ | $2853_{-63}^{+66}$
1.311-1.329 | $0.05187_{-0.00061}^{+0.00066}$ | $2691_{-63}^{+69}$
1.329-1.348 | $0.05297_{-0.00066}^{+0.00068}$ | $2806_{-70}^{+72}$
1.348-1.366 | $0.05456_{-0.00071}^{+0.00065}$ | $2976_{-77}^{+71}$
1.366-1.385 | $0.05433_{-0.00063}^{+0.00065}$ | $2952_{-68}^{+71}$
1.385-1.403 | $0.05360_{-0.00065}^{+0.00060}$ | $2873_{-69}^{+65}$
1.403-1.422 | $0.05273_{-0.00061}^{+0.00059}$ | $2780_{-64}^{+62}$
1.422-1.440 | $0.05325_{-0.00071}^{+0.00068}$ | $2836_{-75}^{+73}$
1.440-1.459 | $0.05450_{-0.00053}^{+0.00051}$ | $2971_{-58}^{+55}$
1.459-1.477 | $0.05236_{-0.00078}^{+0.00078}$ | $2741_{-81}^{+82}$
1.477-1.496 | $0.05359_{-0.00062}^{+0.00062}$ | $2872_{-66}^{+67}$
1.496-1.514 | $0.05246_{-0.00080}^{+0.00077}$ | $2752_{-83}^{+81}$
1.514-1.533 | $0.05286_{-0.00069}^{+0.00078}$ | $2794_{-73}^{+83}$
1.533-1.551 | $0.05280_{-0.00084}^{+0.00078}$ | $2788_{-88}^{+83}$
1.551-1.570 | $0.05248_{-0.00059}^{+0.00057}$ | $2754_{-61}^{+60}$
1.570-1.588 | $0.05213_{-0.00072}^{+0.00068}$ | $2718_{-74}^{+71}$
1.588-1.607 | $0.05318_{-0.00065}^{+0.00066}$ | $2828_{-69}^{+71}$
1.607-1.625 | $0.05253_{-0.00066}^{+0.00068}$ | $2759_{-69}^{+72}$
1.625-1.644 | $0.05239_{-0.00052}^{+0.00052}$ | $2745_{-54}^{+55}$
Inferred values for $R_{p}/R_{\star}$ and the corresponding transit depths
$(R_{p}/R_{\star})^{2}$ are reported in Table 2. A median precision of 71 ppm
is achieved for the transit depth measurements across all wavelength channels.
Systematics-corrected light curves are shown in Figure 3 and the inferred
white noise rescaling factors are plotted in Figure 4. Posterior distributions
for the limb darkening coefficients are listed in Table 3 and are effectively
identical to the adopted PyLDTK priors (Appendix A).
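The quoted 71 ppm median precision can be reproduced directly from Table 2 by averaging each channel's lower and upper transit-depth uncertainties:

```python
import numpy as np

# (lower, upper) 1-sigma transit-depth uncertainties in ppm for the 28
# spectroscopic channels, transcribed from Table 2
errs = np.array([
    (99, 104), (79, 78), (74, 75), (80, 81), (64, 62), (79, 72), (69, 73),
    (75, 69), (51, 58), (63, 66), (63, 69), (70, 72), (77, 71), (68, 71),
    (69, 65), (64, 62), (75, 73), (58, 55), (81, 82), (66, 67), (83, 81),
    (73, 83), (88, 83), (61, 60), (74, 71), (69, 71), (69, 72), (54, 55),
], dtype=float)

median_precision = np.median(errs.mean(axis=1))  # ppm
```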
Figure 3: (Top panels) Spectroscopic light curves after dividing through by
the systematics component of the maximum likelihood light curve models. Data
are plotted using alternating colours for visual clarity. (Bottom panels)
Maximum likelihood model residuals for each spectroscopic light curve, with
corresponding RMS values printed. Note that the plotted error bars are the
$1\sigma$ photon noise uncertainties, but the white noise values were treated
as free parameters in the model fits as described in the text. Figure 4:
Inferred white noise rescaling factors for each spectroscopic light curve. Red
points show median values with errorbars giving the 68% credible intervals.
The black line shows the maximum likelihood values.
## 4 STIS analysis: stellar Ly$\alpha$
The STIS G140M data were reduced using the standard STIS pipeline (CALSTIS
version 3.4.2). Figure 5 shows the reduced and calibrated two-dimensional
spectra. The geocoronal Ly$\alpha$ emission is clearly visible as a strong
vertical stripe (the horizontal axis is the dispersion direction). Emission
from TOI-270 would lie adjacent to the geocoronal emission due to the Doppler
shift between HST and TOI-270. However, no evidence is visible in either
exposure for emission from TOI-270.
Figure 5: Calibrated HST STIS Ly$\alpha$ observations of TOI-270. (Top)
Calibrated spectrum taken on 2020 January 31. Wavelength dispersion direction
is along the horizontal axis. The bright stripe down the middle is the Earth’s
Ly$\alpha$ geocoronal emission. Ly$\alpha$ emission from TOI-270 would be
visible alongside this line at the Doppler shift corresponding to the velocity
difference between HST and the target. We find no significant evidence of
Ly$\alpha$ emission in this exposure. (Bottom) Identical to the above, but for
the exposure taken on 2020 January 29.
To assess the sensitivity of the STIS data to Ly$\alpha$ emission from
TOI-270, a spectrum was extracted (including the geocoronal emission) at the
target location for each exposure. These data are shown in Figure 6. We note
that during the second observation (2020 January 31), there is an apparent
flux excess redwards of the geocoronal emission line. However, there is no
evidence for this excess in the first observation (2020 January 29).
Furthermore, the derived error bar at this wavelength is $0.6\times 10^{-13}$
erg cm$^{-2}$ s$^{-1}$ Å$^{-1}$ arcsec$^{-2}$. Given that the amplitude of the excess is only
about $10^{-13}$ erg cm$^{-2}$ s$^{-1}$ Å$^{-1}$ arcsec$^{-2}$, this translates to a $<2\sigma$
detection significance. Therefore we do not believe this feature in the data
to be due to significant emission from TOI-270.
We derive a conservative upper limit for the stellar Ly$\alpha$ emission from
the data error bars close to where confusion with the Earth's emission becomes
significant, and estimate that we could have detected line emission peaking
approximately twice as high as these error bars and spread over the resolving
power of the instrument. Using this approach, we place a $95\%$ credible upper
limit of $8.2\times 10^{-14}$ erg s$^{-1}$ cm$^{-2}$ Å$^{-1}$ arcsec$^{-2}$ on the
Ly$\alpha$ emission of TOI-270.
Figure 6: Ly$\alpha$ emission of TOI-270 for the exposure on 2020-01-29 (red)
and 2020-01-31 (blue). The large emission line is due to the Earth’s
geocoronal emission. The slight bump redwards of this line on the 2020-01-31
data set is suggestive of brief Ly$\alpha$ emission of TOI-270. However, we do
not recover this emission on the other observation. Additionally, we see
comparable noise bursts in other areas of our 2-D spectrum (Figure 5), and
therefore we suggest that we have not detected Ly$\alpha$ emission from
TOI-270.
## 5 Atmospheric retrieval analysis
We use the transmission spectrum of TOI-270 d to retrieve the atmospheric
properties at the day-night terminator region of the planet using the AURA
atmospheric retrieval framework for transmission spectra (Pinhas et al.,
2018). The model assumes a plane-parallel atmosphere in hydrostatic
equilibrium. The temperature structure, chemical composition, and the
properties of clouds/hazes are free parameters in the model. We consider
opacity contributions due to prominent chemical species expected in temperate
hydrogen-rich atmospheres in chemical equilibrium and disequilibrium, which
include H2O, CH4, NH3, CO, CO2, and HCN (e.g. Madhusudhan & Seager, 2011;
Moses et al., 2013; Yu et al., 2021; Hu et al., 2021; Tsai et al., 2021). The
opacities of the molecules were obtained using corresponding line lists from
the HITRAN and Exomol databases: H2O (Rothman et al., 2010), CH4 (Yurchenko &
Tennyson, 2014), NH3 (Yurchenko et al., 2011), CO (Rothman et al., 2010), CO2
(Rothman et al., 2010), and HCN (Barber et al., 2014), and collision-induced
absorption (CIA) due to H2-H2 and H2-He (Richard et al., 2012). The absorption
cross sections are computed from the line lists following Gandhi & Madhusudhan
(2017). Besides molecular absorption, we also consider Rayleigh scattering due
to H2 and parametric contributions from clouds/hazes and stellar heterogeneity
as described in Pinhas et al. (2018). The Bayesian inference and parameter
estimation in the retrievals are conducted using the Nested Sampling algorithm
(Feroz et al., 2009) implemented with the PyMultiNest package (Buchner et al.,
2014a).
Figure 7: Observations and retrieval of the transmission spectrum of TOI-270
d. Top: Retrieval with baseline planetary atmosphere model and no stellar
heterogeneity. Bottom: Retrieval including stellar heterogeneity. Both
retrievals included the TESS (photometric point at 0.8 $\mu$m) and WFC3
(spectrum between 1.1-1.7 $\mu$m) data, shown in red. The blue curve shows the
median-fit model spectrum and the light blue shaded regions show the $1\sigma$
and $2\sigma$ credible ranges. The binned model points for the median-fit
spectrum are shown as yellow circles. Figure 8: Posterior probability
distributions for the atmospheric model parameters of TOI-270 d obtained from
the retrieval without stellar heterogeneity and including both the TESS and
WFC3 data. The inset at the bottom left shows the retrieved values, for the
volume mixing ratios of the molecules ($X_{\rm i}$), isothermal temperature
($T_{0}$), reference pressure in bar ($P_{\rm ref}$), haze amplitude and slope
($a$ and $\gamma$), cloud-top pressure in bar ($P_{c}$) and cloud/haze
covering fraction ($\bar{\phi}$). The retrievals show moderate evidence for
H2O at 3.0 $\sigma$ significance. No other molecules are individually detected
above 2 $\sigma$, and the model does not show any evidence for clouds/hazes in
the observed slant photosphere. The retrieved parameters are reported in Table
4.
### 5.1 Baseline retrieval assuming uniform stellar brightness
We follow a similar approach to the retrieval conducted using AURA for the
temperate sub-Neptune K2-18 b (Madhusudhan et al., 2020). We consider an
isothermal pressure-temperature ($P$-$T$) profile with the temperature as a
free parameter. Assuming a non-isothermal profile does not significantly
influence the results and is not warranted given the data quality (also see
e.g. Constantinou & Madhusudhan, 2022). Our baseline model, therefore,
includes twelve free parameters: volume mixing ratios of the six molecules
noted above ($X_{\rm i}$ for $i\ =$ H2O, CH4, NH3, CO, CO2, HCN); the
isothermal temperature ($T_{0}$); the reference pressure ($P_{\rm ref}$)
corresponding to the planet radius of 2.19 $R_{\earth}$ derived from the broadband light
curve (Table 1); and four cloud/haze parameters, including the haze amplitude
and slope ($a$ and $\gamma$), cloud-top pressure ($P_{c}$) and cloud/haze
covering fraction ($\phi$). The model fit to the observed spectrum is shown in
Figure 7 and the posterior probability distributions for the model parameters
are shown in Figure 8.
For this baseline model, we find a preference for absorption by a combination
of the molecular species listed above in a hydrogen-rich atmosphere at
4$\sigma$ significance. The significance is derived by comparing the Bayesian
evidence (e.g., Benneke & Seager, 2013; Welbanks & Madhusudhan, 2021) for the
baseline model relative to that obtained for a featureless spectrum, i.e., a
flat line. When considered individually, the evidence in favour of a given
molecule being present in the atmosphere is lower, with the strongest evidence
being obtained for H2O at $3\sigma$ significance. We do not find significant
evidence ($>2\sigma$) for absorption due to any other individual molecules.
The retrieved parameters are reported in Table 4.
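The sigma values quoted here are obtained by translating ratios of Bayesian evidences into an equivalent frequentist significance. A sketch of the standard p-value calibration used in exoplanet retrievals (e.g. Benneke & Seager 2013), assuming the usual $B = -1/(e\,p\ln p)$ form, which is valid for $p < 1/e$:

```python
import math

def bayes_factor(n_sigma):
    """Bayes factor corresponding to an n-sigma detection under the
    p-value calibration B = -1 / (e * p * ln p)."""
    p = math.erfc(n_sigma / math.sqrt(2.0))  # two-sided Gaussian p-value
    return -1.0 / (math.e * p * math.log(p))
```

For example, this mapping gives a Bayes factor of roughly 10 at $2.7\sigma$ and roughly 150 at $3.6\sigma$, consistent with the published calibration tables.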
Using the baseline model we retrieve the H2O abundance to be
$\log(X_{\rm{H}_{2}\rm{O}})=-1.77^{+0.69}_{-0.93}$. We do not obtain strong
constraints on the volume mixing ratios for any of the other molecules, as
shown in Figure 8. The cloud/haze properties are also poorly constrained, with
the cloud-top pressure retrieved as $\log(P_{\rm c}/{\rm
bar})=-0.77^{+1.63}_{-2.47}$ and a cloud/haze covering fraction of
$0.38^{+0.33}_{-0.23}$, i.e. consistent with a cloud-free slant photosphere at
$2\sigma$. The isothermal temperature is constrained to $247^{+80}_{-68}$ K.
To investigate the extent to which the results described above are driven by
the relative transit depth levels of the TESS and WFC3 passbands, a second
retrieval was performed using only the WFC3 data, with the TESS data point
excluded. Similar to the above case, we still find evidence for H2O in the
planetary atmosphere at 3$\sigma$ significance, with no significant
constraints on any other absorber or clouds/hazes. An H2O abundance of
$\log(X_{\rm{H}_{2}\rm{O}})=-1.92^{+0.74}_{-0.97}$ is obtained, which is fully
consistent with the constraint obtained from the original analysis that
included the TESS data point.
### 5.2 Retrievals allowing for stellar heterogeneity
We also investigate the possibility of a heterogeneous stellar brightness
profile influencing the observed transmission spectrum, as has been suggested
for other temperate sub-Neptune atmospheres (e.g. Rackham et al., 2018;
Barclay et al., 2021). We include the effect of stellar heterogeneity
following Pinhas et al. (2018), by adding three parameters to the baseline
retrieval described in the previous section: the average stellar photospheric
temperature ($T_{\rm phot}$); the average temperature of the heterogeneity
($T_{\rm het}$); and the covering fraction of the heterogeneity ($\delta$).
When considering the WFC3 and TESS data together and allowing for stellar
heterogeneity, we find that the retrieved planetary atmosphere properties do
not change significantly, and the quality of the fit to the data is similar to
that obtained for the retrievals that assumed a homogeneous stellar brightness
profile. With stellar heterogeneity included, we obtain evidence for H2O
absorption in the planetary atmosphere at 3$\sigma$ significance, with a
derived abundance of $\log(X_{\rm{H}_{2}\rm{O}})=-1.91^{+0.74}_{-0.98}$, fully
consistent with the abundances obtained for the retrievals without stellar
heterogeneity. Again, we find no significant evidence for any other chemical
species in the planetary atmosphere. For the star spots, we infer a covering
fraction of $\delta=0.06^{+0.05}_{-0.03}$ and temperature of $T_{\rm
het}=2446^{+1120}_{-988}$ K. The resulting model fit is shown in Figure 7.
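To illustrate the scale of the transit light source effect implied by these retrieved spot parameters, the multiplicative bias on the transit depth can be estimated with blackbody fluxes standing in for the model stellar spectra used in the retrieval (an illustrative simplification; the 3500 K photospheric temperature follows the model spectrum quoted in Section 3, and the wavelength is a representative WFC3 value):

```python
import math

H = 6.62607015e-34  # Planck constant (J s)
C = 2.99792458e8    # speed of light (m s^-1)
KB = 1.380649e-23   # Boltzmann constant (J K^-1)

def planck_ratio(wav_m, t1, t2):
    """Ratio B_lambda(t1) / B_lambda(t2) of blackbody spectral radiances
    at a single wavelength (the wav^-5 prefactors cancel)."""
    x1 = H * C / (wav_m * KB * t1)
    x2 = H * C / (wav_m * KB * t2)
    return math.expm1(x2) / math.expm1(x1)

def contamination_factor(delta, t_het, t_phot, wav_m):
    """Multiplicative bias on the transit depth from unocculted stellar
    heterogeneities with covering fraction delta (Rackham et al. 2018 style)."""
    return 1.0 / (1.0 - delta * (1.0 - planck_ratio(wav_m, t_het, t_phot)))

# Retrieved values from this section, evaluated at 1.4 micron
eps = contamination_factor(0.06, 2446.0, 3500.0, 1.4e-6)
```

Under these assumptions the inferred spot parameters would inflate the WFC3 depths by only a few per cent, consistent with the weak preference against the heterogeneity model.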
When considering the combined TESS and WFC3 dataset, the baseline model
without stellar heterogeneity is favoured over the model with stellar
heterogeneity at $2\sigma$ significance. In both cases, the fit to the data is
primarily driven by the H2O absorption in the planetary atmosphere. The model
with stellar heterogeneity is marginally disfavored due to the larger number
of free parameters that do not contribute significantly to the fit. In
particular, a substantial contamination by stellar heterogeneities would
produce a larger offset between the TESS and WFC3 transit depth levels than
that which is observed (Pinhas et al., 2018).
To determine the information content of the WFC3 data alone, we performed an
additional retrieval including stellar heterogeneity but with the TESS data
point excluded. The resulting model fit is shown in Figure 9. In this case, we
find that the detection significance of H2O is reduced to 2.8$\sigma$ with an
abundance constraint of $\log(X_{\rm{H}_{2}\rm{O}})=-2.09^{+0.89}_{-1.21}$,
which remains consistent with the abundance constraints obtained from the
retrievals that included the TESS data point. Comparing the Bayesian evidences
for WFC3-only retrievals with and without stellar heterogeneity, we find that
there is still no strong preference for contamination by stellar
heterogeneity; the evidence remains marginally higher for the baseline model
without stellar heterogeneity. As an additional check, we performed a
retrieval on the WFC3 data alone assuming a model with a featureless planetary
spectrum and a heterogeneous stellar brightness profile. We find that this
scenario is also disfavored at over $3\sigma$ significance relative to the
baseline retrieval allowing for a planetary atmosphere and not including
stellar heterogeneity. Nevertheless, the marginal detection significance
(2.8$\sigma$) obtained for H2O when allowing for stellar heterogeneity and
considering the WFC3 data alone, and our current inability to definitively
rule out contamination by stellar heterogeneity, implies that more
observations are required to robustly establish the presence of H2O in the
planetary atmosphere.
Figure 9: Retrieved transmission spectrum considering WFC3 data alone and a
model including stellar heterogeneity. The blue curve shows the median-fit
model spectrum and the light blue shaded regions show the $1\sigma$ and
$2\sigma$ credible ranges. The binned model points for the median-fit spectrum
are shown as yellow circles. The TESS data point at 0.8 $\mu$m is not included
in the retrieval but is shown here for comparison.
## 6 Discussion
Although we cannot yet definitively discount the possibility that the measured
transmission spectrum is contaminated by stellar heterogeneity, various
indicators point to the TOI-270 host star being a particularly quiet M dwarf,
likely with an age of at least a billion years. For example, Günther et al.
(2019) report two measurements that are suggestive of a very low stellar
activity level. First, they find an absence of fast rotational modulation,
transit spot crossings, and flares in existing TESS photometry, which spans
nearly three years (non-continuous; see also Kaye et al., 2022). Second, the
same authors detected a shallow H$\alpha$ absorption feature and reported an
absence of Ca H/K emission lines in their reconnaissance spectra. Both
findings are common signs of less active M dwarfs, while active stars would
show H$\alpha$ and Ca H/K in emission (e.g. Cram & Mullan, 1985; Stauffer &
Hartmann, 1986; Robertson et al., 2013). A subsequent study by Van Eylen et
al. (2021) reports a tentative stellar rotation period of $\sim 57$ days
derived from a periodogram analysis of TESS photometry, matching the
expectations for an old, quiet M dwarf (Newton et al., 2016). Van Eylen et al.
additionally analyse HARPS and ESPRESSO radial velocity measurements, which
independently confirm very low stellar activity. For example, the ESPRESSO
radial velocity jitter is constrained to $0.16\pm 0.23$ m s-1, consistent with
zero. Additionally, the lack of Ly$\alpha$ emission detected in the present
study (Section 4) is consistent with a low activity level for the host star.
The low stellar activity level suggested by these various auxiliary indicators
is in line with the preference we obtain for the measured transmission
spectrum being minimally contaminated by stellar heterogeneity (Section 5).
Pending further observations, if we assume for now that the measured WFC3
signal is caused by the atmosphere of TOI-270 d rather than unocculted star
spots, then the retrieved chemical abundances provide initial constraints on
the bulk planetary composition and surface conditions. TOI-270 d has been
predicted to be a candidate Hycean world (Madhusudhan et al., 2021), i.e. an
ocean world with a hydrogen-rich atmosphere capable of hosting a habitable
ocean. The mass ($4.20\pm 0.16M_{\earth}$), broadband radius ($2.17\pm
0.05\,R_{\earth}$), and equilibrium temperature (350 K) of the planet together
allow for a range of internal structures, including (a) a gas dwarf with a
rocky core overlaid by a thick hydrogen-rich atmosphere, (b) a water world
with a steamy atmosphere ($\sim$80-100% H2O volume mixing ratio), and (c) a
Hycean world composed of a water-rich interior and a hydrogen-rich atmosphere.
In this study, we obtain a 99% credible upper limit of 30% for the H2O volume
mixing ratio, thereby precluding a water world with a steamy atmosphere
composed of $>80\%$ H2O. However, more stringent constraints on the
atmospheric molecular abundances will be needed to robustly distinguish
between the gas dwarf versus Hycean world scenarios.
The retrieved H2O abundance provides a tentative constraint on the atmospheric
metallicity of the planet. The H2O abundance we retrieve for TOI-270 d,
$\log(X_{\rm{H}_{2}\rm{O}})=-1.77^{+0.69}_{-0.93}$, is similar to that
reported previously for the sub-Neptune K2-18 b (Madhusudhan et al., 2020),
$\log(X_{\rm{H}_{2}\rm{O}})=-2.11^{+1.06}_{-1.19}$. Assuming all the oxygen in
the atmosphere is in H2O, the retrieved H2O abundance of TOI-270 d corresponds
to an O/H ratio spanning 2-99$\times$ solar, with a median value of $20\times$
solar. This estimate is consistent with the mass-metallicity trend observed
for close-in exoplanets using oxygen as a metallicity indicator (Welbanks et
al., 2019) as well as with theoretical predictions of atmospheric metal
enrichment in low-mass exoplanets, albeit at the lower-end of the
theoretically predicted range (e.g. Fortney et al., 2013).
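The O/H conversion above can be reproduced under the stated assumption that all atmospheric oxygen is in H2O, together with two inputs of our own choosing for illustration: an H2/He background gas in the solar ratio and a solar O/H of $4.9\times 10^{-4}$ (Asplund et al. 2009).

```python
SOLAR_O_H = 4.9e-4  # assumed solar O/H number ratio (Asplund et al. 2009)

def o_to_h_vs_solar(log_x_h2o, he_to_h2=0.157):
    """O/H relative to solar, assuming all oxygen is in H2O and the rest of
    the gas is H2 and He in an assumed solar ratio (illustrative helper,
    not the retrieval code)."""
    x_h2o = 10.0 ** log_x_h2o
    x_h2 = (1.0 - x_h2o) / (1.0 + he_to_h2)  # He carries no hydrogen atoms
    n_h = 2.0 * x_h2 + 2.0 * x_h2o           # H atoms per unit of gas
    return (x_h2o / n_h) / SOLAR_O_H
```

Evaluating this at the median and $1\sigma$ bounds of the retrieved $\log(X_{\rm{H}_{2}\rm{O}})$ recovers the quoted $\sim$2-99$\times$ solar range with a median near 20$\times$ solar.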
The derived atmospheric abundances also potentially provide insights into
chemical processes in the atmosphere, which in turn could constrain the
surface conditions. In chemical equilibrium, given the low temperature
($\sim$300-350 K) of the atmosphere, CH4 and NH3 are expected to be the
prominent carbon and nitrogen bearing species, respectively (Madhusudhan &
Seager, 2011; Moses et al., 2013), similar to that seen in the solar system
giant planets. Assuming solar elemental ratios of C/O and N/O, the abundances
of CH4 and NH3 are expected to be $\sim$0.5$\times$ and $\sim$0.2$\times$ the
H2O abundance, respectively. However, while H2O is retrieved in abundance, as
discussed above, we do not retrieve comparable constraints for CH4 and NH3.
The lack of significant evidence for CH4 and NH3 may indicate chemical
disequilibrium in the atmosphere, as also noted for K2-18 b (Madhusudhan et
al., 2020).
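The equilibrium comparison can be made explicit with the numbers quoted above and in Table 4:

```python
import math

log_xh2o = -1.77            # retrieved H2O abundance (median, baseline model)
log_xch4_retrieved = -7.78  # retrieved CH4 median (Table 4)
log_xnh3_retrieved = -8.04  # retrieved NH3 median (Table 4)

# Equilibrium expectations for solar C/O and N/O, per the ratios in the text:
log_xch4_expected = log_xh2o + math.log10(0.5)   # ~0.5x the H2O abundance
log_xnh3_expected = log_xh2o + math.log10(0.2)   # ~0.2x the H2O abundance

# The retrieved medians fall several orders of magnitude below expectation,
# which is the hint of chemical disequilibrium discussed in the text.
deficit_ch4 = log_xch4_expected - log_xch4_retrieved
deficit_nh3 = log_xnh3_expected - log_xnh3_retrieved
```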
Such disequilibrium may result if the base of the outer atmospheric envelope
occurs at a pressure of less than $\sim$ 100 bar and interfaces with an
interior layer of different composition (Yu et al., 2021; Hu et al., 2021;
Tsai et al., 2021). For example, a water ocean below a shallow hydrogen-
dominated envelope can deplete CH4 and NH3 in the observable atmosphere. The
chemical composition inferred for TOI-270 d may, therefore, be consistent with
the presence of a water ocean below the outer hydrogen-rich envelope, similar
to the constraints inferred for the habitable-zone sub-Neptune K2-18 b
(Madhusudhan et al., 2020). However, more precise abundance constraints from
future observations will be required to further elucidate the interior and
surface conditions of TOI-270 d.
## 7 Conclusion
We have presented an atmospheric transmission spectrum for the temperate sub-
Neptune TOI-270 d measured using the HST WFC3 spectrograph, spanning near-
infrared wavelengths 1.126-1.644$\mu\textnormal{m}$. We also used observations
made with the HST STIS spectrograph to place a 95% credible upper limit of
$8.2\times 10^{-14}$ erg s$^{-1}$ cm$^{-2}$ Å$^{-1}$ arcsec$^{-2}$ on the Ly$\alpha$ emission of
the host star. Assuming a homogeneous brightness profile for the stellar disc,
the combined TESS and WFC3 transmission spectrum provides strong ($4\sigma$)
evidence for infrared absorption by the planetary atmosphere. Although the
present data precision makes it challenging to uniquely identify the molecules
present, the strongest evidence is obtained for H2O at $3\sigma$ significance.
If we allow for the possibility of a heterogeneous stellar brightness profile
and exclude the TESS point from our analysis, the detection significance of
H2O in the planetary atmosphere is reduced to $2.8\sigma$, but the model is
disfavoured at the $2\sigma$ level relative to the model assuming a
homogeneous stellar brightness profile. Furthermore, even when we allow for
stellar heterogeneity and the TESS point is excluded, the detection of a
planetary atmosphere is still preferred over a featureless planetary spectrum
at more than $3\sigma$ significance.
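For readers wishing to convert Bayes factors into the "n-sigma" significances quoted here, a commonly used prescription is that of Benneke & Seager (2013), cited in the references; whether the authors used exactly this mapping is an assumption, and the sketch below is illustrative only:

```python
import math

def _bisect_dec(f, lo, hi, iters=200):
    """Bisection root-finder for a strictly decreasing f with f(lo) > 0 > f(hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sigma_from_bayes_factor(B):
    """Equivalent n-sigma significance for a Bayes factor B > 1, following
    the prescription of Benneke & Seager (2013):
        B = -1 / (e * p * ln p),   sigma = sqrt(2) * erfcinv(p)."""
    # -1/(e p ln p) decreases from +inf to 1 on (0, 1/e); solve for p.
    p = _bisect_dec(lambda q: -1.0 / (math.e * q * math.log(q)) - B,
                    1e-12, math.exp(-1.0) - 1e-9)
    # Invert erfc by bisection (erfc is decreasing, erfc(0) = 1 > p).
    x = _bisect_dec(lambda t: math.erfc(t) - p, 0.0, 10.0)
    return math.sqrt(2.0) * x

# Reference points: B ~ 3 corresponds to ~2.1 sigma, B ~ 150 to ~3.6 sigma.
```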
While the current data broadly favour the detection of a planetary atmosphere
and a homogeneous stellar brightness profile, further observations will be
required to increase the robustness of the H2O detection and to firmly rule
out contamination of the measured signal by stellar heterogeneity. If
verified, the inferred atmospheric H2O abundance of
$\log(X_{\rm{H}_{2}\rm{O}})=-1.77^{+0.69}_{-0.93}$ would imply that TOI-270 d
has a hydrogen-rich outer envelope and is not a water world with a steam-
dominated atmosphere. The constraints obtained for the atmospheric composition
remain compatible with TOI-270 d being a Hycean planet with a water ocean
layer below a hydrogen-rich outer envelope.
## References
* Almenara et al. (2022) Almenara, J. M., Bonfils, X., Otegi, J. F., et al. 2022, A&A, 665, A91
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123
* Barber et al. (2014) Barber, R. J., Strange, J. K., Hill, C., et al. 2014, MNRAS, 437, 1828
* Barclay et al. (2021) Barclay, T., Kostov, V. B., Colón, K. D., et al. 2021, AJ, 162, 300
* Barclay et al. (2018) Barclay, T., Pepper, J., & Quintana, E. V. 2018, ApJS, 239, 2
* Benneke & Seager (2013) Benneke, B., & Seager, S. 2013, ApJ, 778, 153
* Buchner et al. (2014a) Buchner, J., Georgakakis, A., Nandra, K., et al. 2014a, A&A, 564, A125
* Buchner et al. (2014b) —. 2014b, A&A, 564, A125
* Burt et al. (2021) Burt, J. A., Dragomir, D., Mollière, P., et al. 2021, AJ, 162, 87
* Castelli & Kurucz (2003) Castelli, F., & Kurucz, R. L. 2003, in Modelling of Stellar Atmospheres, ed. N. Piskunov, W. W. Weiss, & D. F. Gray, Vol. 210, A20
* Charbonneau et al. (2009) Charbonneau, D., Berta, Z. K., Irwin, J., et al. 2009, Nature, 462, 891
* Cloutier et al. (2020) Cloutier, R., Eastman, J. D., Rodriguez, J. E., et al. 2020, AJ, 160, 3
* Cloutier et al. (2021) Cloutier, R., Charbonneau, D., Stassun, K. G., et al. 2021, AJ, 162, 79
* Constantinou & Madhusudhan (2022) Constantinou, S., & Madhusudhan, N. 2022, MNRAS, 514, 2073
* Cram & Mullan (1985) Cram, L. E., & Mullan, D. J. 1985, ApJ, 294, 626
* Crossfield et al. (2015) Crossfield, I. J. M., Petigura, E., Schlieder, J. E., et al. 2015, ApJ, 804, 10
* Deming et al. (2013) Deming, D., Wilkins, A., McCullough, P., et al. 2013, ApJ, 774, 95
* Demory et al. (2020) Demory, B. O., Pozuelos, F. J., Gómez Maqueo Chew, Y., et al. 2020, A&A, 642, A49
* Evans et al. (2016) Evans, T. M., Sing, D. K., Wakeford, H. R., et al. 2016, ApJ, 822, L4
* Feroz et al. (2009) Feroz, F., Hobson, M. P., & Bridges, M. 2009, MNRAS, 398, 1601
* Figueira et al. (2009) Figueira, P., Pont, F., Mordasini, C., et al. 2009, A&A, 493, 671
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306
* Fortney et al. (2013) Fortney, J. J., Mordasini, C., Nettelmann, N., et al. 2013, ApJ, 775, 80
* Fulton et al. (2017) Fulton, B. J., Petigura, E. A., Howard, A. W., et al. 2017, AJ, 154, 109
* Gandhi & Madhusudhan (2017) Gandhi, S., & Madhusudhan, N. 2017, MNRAS, 472, 2334
* Gillon et al. (2007) Gillon, M., Pont, F., Demory, B. O., et al. 2007, A&A, 472, L13
* Guerrero et al. (2021) Guerrero, N. M., Seager, S., Huang, C. X., et al. 2021, ApJS, 254, 39
* Günther et al. (2019) Günther, M. N., Pozuelos, F. J., Dittmann, J. A., et al. 2019, Nature Astronomy, 3, 1099
* Hirano et al. (2021) Hirano, T., Livingston, J. H., Fukui, A., et al. 2021, AJ, 162, 161
* Hu et al. (2021) Hu, R., Damiano, M., Scheucher, M., et al. 2021, ApJ, 921, L8
* Hunter (2007) Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90
* Husser et al. (2013) Husser, T.-O., Wende-von Berg, S., Dreizler, S., et al. 2013, A&A, 553, A6
* Kaye et al. (2022) Kaye, L., Vissapragada, S., Günther, M. N., et al. 2022, MNRAS, 510, 5464
* Kossakowski et al. (2021) Kossakowski, D., Kemmer, J., Bluhm, P., et al. 2021, A&A, 656, A124
* Kreidberg (2015) Kreidberg, L. 2015, PASP, 127, 1161
* Kunimoto et al. (2022) Kunimoto, M., Winn, J., Ricker, G. R., & Vanderspek, R. K. 2022, AJ, 163, 290
* Kurucz (1993) Kurucz, R. 1993, ATLAS9 Stellar Atmosphere Programs and 2 km/s grid. Kurucz CD-ROM No. 13. Cambridge, Mass.: Smithsonian Astrophysical Observatory
* Luque et al. (2021) Luque, R., Serrano, L. M., Molaverdikhani, K., et al. 2021, A&A, 645, A41
* Madhusudhan et al. (2020) Madhusudhan, N., Nixon, M. C., Welbanks, L., Piette, A. A. A., & Booth, R. A. 2020, ApJ, 891, L7
* Madhusudhan et al. (2021) Madhusudhan, N., Piette, A. A. A., & Constantinou, S. 2021, ApJ, 918, 1
* Madhusudhan & Seager (2011) Madhusudhan, N., & Seager, S. 2011, ApJ, 729, 41
* Martioli et al. (2021) Martioli, E., Hébrard, G., Correia, A. C. M., Laskar, J., & Lecavelier des Etangs, A. 2021, A&A, 649, A177
* McCullough et al. (2014) McCullough, P. R., Crouzet, N., Deming, D., & Madhusudhan, N. 2014, ApJ, 791, 55
* Mikal-Evans et al. (2020) Mikal-Evans, T., Sing, D. K., Kataria, T., et al. 2020, MNRAS, 496, 1638
* Mikal-Evans et al. (2019a) Mikal-Evans, T., Sing, D. K., Goyal, J. M., et al. 2019a, MNRAS, 488, 2222
* Mikal-Evans et al. (2019b) Mikal-Evans, T., Crossfield, I., Daylan, T., et al. 2019b, Atmospheric characterization of two temperate mini-Neptunes formed in the same protoplanetary nebula, HST Proposal
* Mikal-Evans et al. (2021) Mikal-Evans, T., Crossfield, I. J. M., Benneke, B., et al. 2021, AJ, 161, 18
* Mikal-Evans et al. (2022) Mikal-Evans, T., Sing, D. K., Barstow, J. K., et al. 2022, Nature Astronomy, 6, 471
* Montet et al. (2015) Montet, B. T., Morton, T. D., Foreman-Mackey, D., et al. 2015, ApJ, 809, 25
* Moses et al. (2013) Moses, J. I., Line, M. R., Visscher, C., et al. 2013, ApJ, 777, 34
* Newton et al. (2016) Newton, E. R., Irwin, J., Charbonneau, D., et al. 2016, ApJ, 821, 93
* Nowak et al. (2020) Nowak, G., Luque, R., Parviainen, H., et al. 2020, A&A, 642, A173
* Parviainen & Aigrain (2015) Parviainen, H., & Aigrain, S. 2015, MNRAS, 453, 3821
* Petigura et al. (2022) Petigura, E. A., Rogers, J. G., Isaacson, H., et al. 2022, AJ, 163, 179
* Pinhas et al. (2018) Pinhas, A., Rackham, B. V., Madhusudhan, N., & Apai, D. 2018, MNRAS, 480, 5314
* Plavchan et al. (2020) Plavchan, P., Barclay, T., Gagné, J., et al. 2020, Nature, 582, 497
* Rackham et al. (2018) Rackham, B. V., Apai, D., & Giampapa, M. S. 2018, ApJ, 853, 122
* Richard et al. (2012) Richard, C., Gordon, I. E., Rothman, L. S., et al. 2012, J. Quant. Spec. Radiat. Transf., 113, 1276
* Ricker et al. (2015) Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003
* Robertson et al. (2013) Robertson, P., Endl, M., Cochran, W. D., & Dodson-Robinson, S. E. 2013, The Astrophysical Journal, 764, 3
* Rogers & Seager (2010) Rogers, L. A., & Seager, S. 2010, ApJ, 716, 1208
* Rothman et al. (2010) Rothman, L. S., Gordon, I. E., Barber, R. J., et al. 2010, J. Quant. Spec. Radiat. Transf., 111, 2139
* Stauffer & Hartmann (1986) Stauffer, J. R., & Hartmann, L. W. 1986, ApJS, 61, 531
* Stefánsson et al. (2020) Stefánsson, G., Kopparapu, R., Lin, A., et al. 2020, AJ, 160, 259
* STScI Development Team (2013) STScI Development Team. 2013, pysynphot: Synthetic photometry software package, ascl:1303.023
* Sullivan et al. (2015) Sullivan, P. W., Winn, J. N., Berta-Thompson, Z. K., et al. 2015, ApJ, 809, 77
* Tsai et al. (2021) Tsai, S.-M., Innes, H., Lichtenberg, T., et al. 2021, ApJ, 922, L27
* van der Walt et al. (2011) van der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science Engineering, 13, 22
* Van Eylen et al. (2018) Van Eylen, V., Agentoft, C., Lundkvist, M. S., et al. 2018, MNRAS, 479, 4786
* Van Eylen et al. (2021) Van Eylen, V., Astudillo-Defru, N., Bonfils, X., et al. 2021, MNRAS, 507, 2154
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261
* Welbanks & Madhusudhan (2021) Welbanks, L., & Madhusudhan, N. 2021, arXiv e-prints, arXiv:2103.08600
* Welbanks et al. (2019) Welbanks, L., Madhusudhan, N., Allard, N. F., et al. 2019, ApJ, 887, L20
* Yu et al. (2021) Yu, X., Moses, J. I., Fortney, J. J., & Zhang, X. 2021, ApJ, 914, 38
* Yurchenko et al. (2011) Yurchenko, S. N., Barber, R. J., & Tennyson, J. 2011, MNRAS, 413, 1828
* Yurchenko & Tennyson (2014) Yurchenko, S. N., & Tennyson, J. 2014, MNRAS, 440, 1649
* Zhou et al. (2017) Zhou, Y., Apai, D., Lew, B. W. P., & Schneider, G. 2017, AJ, 153, 243
The authors are grateful for constructive feedback provided by the anonymous
referee. Support for HST program GO-15814 was provided by NASA through a grant
from the Space Telescope Science Institute, which is operated by the
Association of Universities for Research in Astronomy, Inc., under NASA
contract NAS 5-26555. This research has made use of the SIMBAD database,
operated at CDS, Strasbourg, France. Some of the data presented in this paper
were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is
operated by the Association of Universities for Research in Astronomy, Inc.,
under NASA contract NAS5-26555. Support for MAST for non-HST data is provided
by the NASA Office of Space Science via grant NNX13AC07G and by other grants
and contracts. MNG acknowledges support from the European Space Agency (ESA)
as an ESA Research Fellow. NM thanks Savvas Constantinou for helpful
discussions. All the HST data used in this paper can be found in MAST:
http://dx.doi.org/10.17909/2h3r-t275
## Appendix A Stellar limb darkening
Figure 10: Coefficients ($u_{1}$, $u_{2}$) for a quadratic stellar limb
darkening law obtained using ExoCTK and PyLDTK. Separate sets of ExoCTK
coefficients were computed assuming an ATLAS 9 (blue circles) and Phoenix ACES
(red circles) stellar model. The PyLDTK coefficients (black points) assume a
Phoenix ACES stellar model. Posterior distributions obtained from the light
curve fits are shown as orange diamonds, with $1\sigma$ credible intervals
that are smaller than the marker symbols.
For the fiducial light curve analyses, a quadratic stellar limb darkening law
was adopted with coefficients that were allowed to vary as free parameters. As
described in Sections 3.1 and 3.2, tight Gaussian priors obtained using PyLDTK
were adopted for these light curve fits. The limb darkening coefficient priors
and resulting posteriors are listed in Table 3 for the spectroscopic light
curves. Due to the limited phase coverage of the transit, the limb darkening
coefficients are effectively unconstrained by the data and consequently the
posteriors are indistinguishable from the priors (Figure 10).
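For reference, the quadratic law itself can be evaluated directly; as an illustrative sketch, the coefficients below are taken from the first row of Table 3:

```python
import numpy as np

def quadratic_limb_darkening(mu, u1, u2):
    """I(mu)/I(1) = 1 - u1*(1 - mu) - u2*(1 - mu)**2, where mu is the cosine
    of the angle between the line of sight and the stellar surface normal."""
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

# Coefficients for the bluest channel (1.126-1.144 um) from Table 3.
u1, u2 = 0.17411, 0.14986
mu = np.linspace(0.0, 1.0, 101)
profile = quadratic_limb_darkening(mu, u1, u2)   # brightest at disc centre
```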
To investigate the sensitivity of the final results to the stellar limb
darkening treatment, the spectroscopic light curve fits were repeated with
$u_{1}$ and $u_{2}$ held fixed to values computed using the online ExoCTK
tool (https://exoctk.stsci.edu). Specifically, ExoCTK provides the option of
computing limb darkening coefficients using both Phoenix ACES (Husser et al.,
2013) and ATLAS9 (Kurucz, 1993; Castelli & Kurucz, 2003) stellar models. As
shown in Figure 10, there are clear differences in the limb darkening
coefficients computed by ExoCTK using these two stellar models, and compared
to those obtained with PyLDTK, which also uses the Phoenix ACES models.
However, as can be seen in Figure 11, the final transmission spectrum is found
to be robust to the choice of limb darkening treatment.
Table 3: Posterior distributions for stellar limb darkening coefficients inferred from the spectroscopic light curve fits.
Wavelength ($\mu$m) | $u_{1}$ | $u_{2}$
---|---|---
1.126-1.144 | $0.17411_{-0.00015}^{+0.00017}$ | $0.14986_{-0.00049}^{+0.00049}$
1.144-1.163 | $0.17069_{-0.00017}^{+0.00016}$ | $0.14954_{-0.00055}^{+0.00058}$
1.163-1.181 | $0.16853_{-0.00017}^{+0.00016}$ | $0.14873_{-0.00053}^{+0.00050}$
1.181-1.200 | $0.16550_{-0.00016}^{+0.00016}$ | $0.14536_{-0.00053}^{+0.00051}$
1.200-1.218 | $0.16563_{-0.00016}^{+0.00016}$ | $0.14539_{-0.00053}^{+0.00051}$
1.218-1.237 | $0.16187_{-0.00017}^{+0.00017}$ | $0.14692_{-0.00053}^{+0.00052}$
1.237-1.255 | $0.16173_{-0.00017}^{+0.00017}$ | $0.14545_{-0.00055}^{+0.00052}$
1.255-1.274 | $0.15879_{-0.00018}^{+0.00019}$ | $0.14405_{-0.00060}^{+0.00062}$
1.274-1.292 | $0.15610_{-0.00020}^{+0.00020}$ | $0.14629_{-0.00063}^{+0.00062}$
1.292-1.311 | $0.15446_{-0.00019}^{+0.00017}$ | $0.14377_{-0.00059}^{+0.00059}$
1.311-1.329 | $0.15106_{-0.00019}^{+0.00017}$ | $0.14106_{-0.00055}^{+0.00056}$
1.329-1.348 | $0.16933_{-0.00017}^{+0.00017}$ | $0.13147_{-0.00072}^{+0.00068}$
1.348-1.366 | $0.16980_{-0.00021}^{+0.00020}$ | $0.13276_{-0.00084}^{+0.00082}$
1.366-1.385 | $0.16661_{-0.00019}^{+0.00020}$ | $0.12976_{-0.00090}^{+0.00077}$
1.385-1.403 | $0.17225_{-0.00021}^{+0.00022}$ | $0.13419_{-0.00093}^{+0.00094}$
1.403-1.422 | $0.17482_{-0.00027}^{+0.00027}$ | $0.14076_{-0.00107}^{+0.00115}$
1.422-1.440 | $0.16906_{-0.00028}^{+0.00029}$ | $0.16630_{-0.00120}^{+0.00123}$
1.440-1.459 | $0.16990_{-0.00028}^{+0.00030}$ | $0.15845_{-0.00131}^{+0.00129}$
1.459-1.477 | $0.17439_{-0.00032}^{+0.00030}$ | $0.16207_{-0.00121}^{+0.00125}$
1.477-1.496 | $0.16697_{-0.00027}^{+0.00031}$ | $0.15601_{-0.00115}^{+0.00116}$
1.496-1.514 | $0.16053_{-0.00030}^{+0.00028}$ | $0.15220_{-0.00126}^{+0.00130}$
1.514-1.533 | $0.16335_{-0.00029}^{+0.00026}$ | $0.14715_{-0.00110}^{+0.00113}$
1.533-1.551 | $0.16388_{-0.00029}^{+0.00028}$ | $0.15854_{-0.00119}^{+0.00117}$
1.551-1.570 | $0.15884_{-0.00027}^{+0.00030}$ | $0.16827_{-0.00108}^{+0.00115}$
1.570-1.588 | $0.15116_{-0.00028}^{+0.00024}$ | $0.13940_{-0.00111}^{+0.00114}$
1.588-1.607 | $0.14712_{-0.00027}^{+0.00028}$ | $0.16774_{-0.00113}^{+0.00111}$
1.607-1.625 | $0.14538_{-0.00019}^{+0.00018}$ | $0.13665_{-0.00086}^{+0.00077}$
1.625-1.644 | $0.14406_{-0.00023}^{+0.00024}$ | $0.14131_{-0.00090}^{+0.00097}$
Figure 11: Derived transmission spectra for the different stellar limb
darkening treatments described in Appendix A. Free coefficients with PyLDTK
priors (black circles), coefficients held fixed to the PyLDTK values (orange
squares), and coefficients held fixed to the ExoCTK values assuming ATLAS9
(red triangles) and Phoenix ACES (blue triangles) stellar models. A high level
of consistency is obtained for all limb darkening treatments.
Table 4: Priors and Retrieved Parameters for the Models in this Work
Parameter | Prior Distribution | Baseline ModelT,W | Stellar HeterogeneityT,W | Stellar HeterogeneityW
---|---|---|---|---
$\log_{10}(X_{\text{H}_{2}\text{O}})$ | $\mathcal{U}(-12,-0.3)$ | $-1.77^{+0.69}_{-0.93}$ | $-1.91^{+0.74}_{-0.98}$ | $-2.09^{+0.89}_{-1.21}$
$\log_{10}(X_{\text{CH}_{4}})$ | $\mathcal{U}(-12,-0.3)$ | $-7.78^{+2.67}_{-2.49}$ | $-7.71^{+2.60}_{-2.57}$ | $-7.35^{+2.54}_{-2.57}$
$\log_{10}(X_{\text{NH}_{3}})$ | $\mathcal{U}(-12,-0.3)$ | $-8.04^{+2.42}_{-2.37}$ | $-7.72^{+2.25}_{-2.41}$ | $-8.02^{+2.25}_{-2.29}$
$\log_{10}(X_{\text{CO}})$ | $\mathcal{U}(-12,-0.3)$ | $-6.60^{+3.37}_{-3.31}$ | $-6.15^{+3.10}_{-3.33}$ | $-6.36^{+3.13}_{-3.28}$
$\log_{10}(X_{\text{CO}_{2}})$ | $\mathcal{U}(-12,-0.3)$ | $-5.81^{+3.22}_{-3.79}$ | $-5.62^{+3.11}_{-3.77}$ | $-5.73^{+3.04}_{-3.67}$
$\log_{10}(X_{\text{HCN}})$ | $\mathcal{U}(-12,-0.3)$ | $-6.53^{+2.93}_{-3.27}$ | $-6.58^{+2.87}_{-3.16}$ | $-6.70^{+2.82}_{-3.06}$
$\text{T}_{0}$ [K] | $\mathcal{U}(0,400)$ | $247^{+80}_{-68}$ | $257^{+72}_{-69}$ | $232^{+76}_{-65}$
$\log_{10}(\text{P}_{\rm ref})$[bar] | $\mathcal{U}(-6,2)$ | $-1.04^{+0.46}_{-0.54}$ | $-2.35^{+1.47}_{-1.78}$ | $-3.34^{+1.56}_{-1.47}$
$\log_{10}(a)$ | $\mathcal{U}(-4,10)$ | $0.71^{+4.04}_{-2.90}$ | $0.49^{+3.39}_{-2.70}$ | $1.36^{+3.95}_{-3.13}$
$\gamma$ | $\mathcal{U}(-20,2)$ | $-10.83^{+7.20}_{-5.80}$ | $-10.84^{+6.99}_{-5.59}$ | $-10.52^{+6.97}_{-5.75}$
$\log_{10}(\text{P}_{\rm c})$ [bar] | $\mathcal{U}(-6,2)$ | $-0.77^{+1.63}_{-2.47}$ | $-0.88^{+1.68}_{-2.45}$ | $-1.53^{+1.91}_{-2.22}$
$\phi$ | $\mathcal{U}(0,1)$ | $0.34^{+0.36}_{-0.22}$ | $0.38^{+0.33}_{-0.23}$ | $0.37^{+0.31}_{-0.23}$
$\delta$ | $\mathcal{U}(0.0,0.5)$ | N/A | $0.06^{+0.05}_{-0.03}$ | $0.08^{+0.05}_{-0.04}$
$\text{T}_{\rm het}$ [K] | $\mathcal{U}(701,5259)$ | N/A | $2446^{+1120}_{-988}$ | $2202^{+855}_{-866}$
$\text{T}_{\rm phot}$ [K] | $\mathcal{N}(3506,100)$ | N/A | $3512^{+81}_{-84}$ | $3505^{+84}_{-82}$
Note. — The superscript in the model name indicates the data included in the
retrieval: T for TESS and W for HST-WFC3. N/A means that the parameter was not
considered in the model by construction.
# Exoplanet Detection by Machine Learning with Data Augmentation
Koray Aydoğan Department of Physics, Boğaziçi University, Bebek 34342,
İstanbul, Turkey
###### Abstract
It has recently been demonstrated that deep learning has significant potential
to automate parts of the exoplanet detection pipeline using light curve data
from satellites such as Kepler (Borucki et al. 2010; Koch et al. 2010) and
NASA's Transiting Exoplanet Survey Satellite (TESS; Ricker et al. 2010).
Unfortunately, the small size of the available datasets makes it difficult to
realize the level of performance one expects from powerful network
architectures.
In this paper, we investigate the use of data augmentation techniques on light
curve data from TESS to train neural networks to identify exoplanets. The
augmentation techniques used fall into two classes: simple (e.g. additive noise
augmentation) and learning-based (e.g. first training a GAN (Goodfellow et al.
2020) to generate new examples). We demonstrate that data augmentation has the
potential to improve model performance for the exoplanet detection problem,
and recommend the use of augmentation based on generative models as more data
becomes available.
## 1 Introduction
Discovering and identifying exoplanets by eye is a difficult task owing to the
complexity of the flux variations in light curve data. It is also impractical:
the volume of data delivered by these telescopes is enormous, and manual
analysis is extremely time consuming. These factors motivate the use of neural
networks.
Deep learning has been highly successful in a wide range of scientific
problems where large datasets are available. Unfortunately, in many fields,
the lack of adequate data prevents one from obtaining the full benefits of
sophisticated deep learning models due to problems with overfitting.
One approach to dealing with this problem is to enlarge the existing datasets
by using data augmentation techniques. Given a small dataset, some standard
approaches to creating new, "augmented" samples involve transforming existing
samples in a way that is meaningful for the problem under consideration (e.g.
rotating images), or adding various forms of noise to existing samples.
A more interesting approach to data augmentation involves the use of a machine
learning model to generate new samples by learning from existing samples.
While this approach may also be prone to overfitting, it has long been
recognized that it also has the potential to boost the performance of deep
learning models.
Exoplanet detection from light curve data is certainly a problem where
labelled datasets are not as large as one would like. The TESS Input Catalog,
for instance, contains on the order of 1 billion objects, but the number of
vetted, labelled light curves suitable for training is far smaller. While this
number is steadily increasing, it remains much smaller than one would want
when training a powerful deep learning model to detect exoplanet candidates
with a high degree of precision and recall.
Several previous works have applied deep learning methods to classification
tasks on datasets from different telescopes: K2 photometry from the Kepler
mission (Vanderburg & Johnson, 2014), Kepler data (Shallue & Vanderburg,
2018), and TESS data (Yu et al., 2019). Each of these works followed a
specific pipeline to prepare the dataset for training and testing. The latter
two share the same preprocessing, for example binning the data into global,
local, and secondary views, and detrending. Both also use neural network
architectures consisting of one-dimensional CNNs (O'Shea & Nash, 2015) with
different filter sizes, and obtained appreciable results in terms of precision
and recall. For the first dataset, a further work (Malik et al., 2022) instead
employed feature extraction via the Python package TSFresh (Christ et al.,
2018) to derive feature-based representations of the light curves, with the
classification again performed by machine learning. We note that they also
applied this feature extraction process to the other datasets (Kepler and
TESS) and obtained better results in those metrics.
More recently, Valizadegan et al. (2022) again used CNNs, but with a different
number of input channels and different preprocessing algorithms, applied to
both the TESS and Kepler datasets from the Mikulski Archive for Space
Telescopes (MAST). They obtained the best results to date in each metric, and
also validated 301 new exoplanets from the MAST catalog.
Deep learning models have been utilized for this problem with some success,
but the acquisition of larger amounts of data in the near future will
certainly improve the performance of such models.
In this paper, we explore the use of data augmentation techniques for
expanding the light curve data obtained from the Transiting Exoplanet Survey
Satellite (TESS) to train neural network classifiers on larger, synthetically
extended datasets. The expansion of the natural dataset with synthetically
augmented samples improves the recall value obtained by the neural network
classifier formulated by Yu et al. (2019) from 0.57 to 0.67. While the
existing dataset is too small to draw definitive conclusions, we find the
results encouraging, and aim to repeat the process as additional
data becomes available.
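For clarity, the recall metric quoted above (together with precision) is computed as follows; the labels in this example are made up purely for illustration:

```python
import numpy as np

def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (1 = planet candidate)."""
    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
    return tp / (tp + fp), tp / (tp + fn)

# Toy labels for illustration only (not the classifier's actual outputs).
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
precision, recall = precision_recall(y_true, y_pred)
```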
The organization of the paper is as follows. In Section 2, we provide a review
of the data augmentation techniques relevant for the current work. In Section
3, we describe the dataset we use. In Section 4, we describe our augmentation
approaches and the model architectures used, and provide the results obtained.
In Section 5, we explain the training process, and in Section 6 we provide
conclusions and discuss possibilities for future work.
## 2 Data Augmentation Techniques
Synthetically expanding an existing dataset in order to improve model
performance is a technique that has been in use for a long time in the machine
learning literature. One of the earlier examples is the use of "distortions"
(i.e. affine planar transformations) to generate new images for the problem of
handwritten character recognition. A more recent prominent example is
Krizhevsky et al. (2017), where augmentations such as image translations,
reflections, and a form of scaling of the principal components were used in an
image classification problem.
By now, data augmentation has a very large literature that is impossible to
summarize here. For a broad overview of data augmentation for image data, see
Shorten & Khoshgoftaar (2019). For a
comparison of some data augmentation techniques on medical data, see
Mikołajczyk & Grochowski (2018). The data we use in this paper take the form
of time series; for a survey of data augmentation techniques for time series
data, see Wen et al. (2020).
Although it is hard to give a clean classification of data augmentation
methods, we can mention the following broad types of techniques:
* •
Noise augmentation: Generating new samples from existing ones by incorporating
some sort of noise. Perhaps the simplest example of this method is adding
Gaussian noise to existing samples to create new ones.
* •
Data transformation: Applying forms of transformation to the data that
preserve the "underlying meaning" that the model is trying to learn. For
example, for a model doing object detection from images, rotating, flipping,
or zooming into an image doesn't change the object content, so it makes sense
to create new examples by using these transformations.
For time series data (Wen et al., 2020), depending on the problem, one may use
techniques such as time shifting, as well as time warping and amplitude
warping, which non-uniformly stretch the time axis and the amplitude axis,
respectively. However, it only makes sense to create new samples with these
methods if the problem at hand is expected to have these sorts of
transformations as invariances.
* •
Using generative models: This approach involves training a generative model
such as a generative adversarial network (GAN) Goodfellow et al. (2020) on the
existing data, and then using this model to create new, synthetic samples.
While the transformation methods mentioned above are hand-picked, the
generative approach delegates the creation of new realistic samples to machine
learning, as well, without a need to search for meaningful transformations. Of
course, whether this approach actually works well would depend on the problem
at hand.
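The transformation-based techniques above can be illustrated for 1-D series with a minimal sketch; the function names and parameter choices here are ours, not from a specific library:

```python
import numpy as np

rng = np.random.default_rng(0)

def time_shift(x, max_shift=5):
    """Circularly shift a 1-D series by a random number of steps."""
    return np.roll(x, rng.integers(-max_shift, max_shift + 1))

def time_warp(x, strength=0.2):
    """Non-uniformly stretch the time axis by resampling the series on a
    smoothly perturbed grid (a simple form of time warping)."""
    n = len(x)
    t = np.linspace(0.0, 1.0, n)
    # A monotone warp that leaves the endpoints fixed (strength < ~0.5).
    warp = t + strength * np.sin(2.0 * np.pi * t) * t * (1.0 - t)
    return np.interp(t, warp, x)
```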
There is a large literature on the use of GANs for data augmentation alone.
See Yi et al. (2019) for examples of this approach in medical imaging, and
Shorten & Khoshgoftaar (2019), where such techniques are discussed as part of
a more general survey. We can also mention Antoniou et al. (2017) as a novel
specific technique that uses a slightly modified form of GAN for data
augmentation.
In this paper, we will use the following augmentation techniques on TESS light
curve data, which come in the form of three time series per observation:
* •
Noise augmentation
* •
GAN-based augmentation
For noise augmentation, we create new samples by adding noise generated from
various distributions such as the normal distribution. For GAN-based
augmentation, we train separate GANs on "planet" samples and "non-planet"
samples.
## 3 Data
For our research, we chose a light curve dataset obtained from the Transiting
Exoplanet Survey Satellite (TESS), with the preprocessing pipeline of Yu et
al. (2019). We chose this dataset because the number of exoplanets in it is
low (492 exoplanets out of 15959 data samples) and the dataset itself is
small. The dataset includes information about brightness, time,
moment-derived centroids, and so forth. For training, we consider only the
brightness information, using the "global view", "local view", and "secondary
eclipse" keys of the dataset, as was done in the previous work. The global
view shows the light curve over an entire orbital period; the local view is a
close-up of the transit event; and the secondary eclipse view captures the
drop in flux observed when the planet passes behind the star and its light is
blocked. The global view has 201 data points, while the other two views have
61 data points each (Yu et al., 2019).
Figure 1: Light Curve Data Samples. PC (Planet Candidate); EB (Eclipsing
Binary); J (Junk) with their Global, Local and Secondary views
## 4 Our Approaches
### 4.1 Classical Approach (Noise Augmentation)
As discussed in the previous section, data augmentation is a common technique
employed in deep learning to improve performance; in this work, we use noise
augmentation as our classical approach.
Noise augmentation is a very common method applied to both image-based and
time series data; seismological data are one example (Mousavi et al., 2020),
where noise augmentation is performed with a Gaussian probability density
function alone. In addition to the Gaussian, we also used the exponential and
Rayleigh probability density functions, in two modes: adding the noise to the
training samples, and multiplying the training samples by the noise. We then
concatenated the augmented data to the original data, obtaining an expanded
version of the training dataset.
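A minimal sketch of this procedure follows; the scale values and the exact form of the multiplicative perturbation are illustrative assumptions, since the scales were tuned by visual inspection rather than fixed values:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(batch, dist="gaussian", scale=0.02, mode="additive"):
    """Create noisy copies of a batch of light curves (n_samples, n_points).
    Scale values are illustrative; they were tuned by visual inspection."""
    if dist == "gaussian":
        noise = rng.normal(0.0, scale, size=batch.shape)
    elif dist == "exponential":
        noise = rng.exponential(scale, size=batch.shape)
    elif dist == "rayleigh":
        noise = rng.rayleigh(scale, size=batch.shape)
    else:
        raise ValueError(dist)
    if mode == "additive":
        return batch + noise
    # Multiplicative mode: the (1 + noise) form here is our assumption.
    return batch * (1.0 + noise)

# Expand the training set by concatenating augmented copies to the original.
x_train = rng.normal(size=(100, 201))   # stand-in for global-view curves
expanded = np.concatenate([x_train, augment(x_train, "rayleigh")], axis=0)
```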
### 4.2 Deep Learning Based Approach (GANs)
As discussed in the previous section, machine learning based data augmentation
is another common, newer method to expand a training dataset. A GAN is
architecturally reminiscent of an autoencoder: where the autoencoder has
encoder and decoder parts, the GAN has two models called the "generator" and
the "discriminator". The generator maps a random seed to a synthetic sample,
the discriminator classifies whether a sample is fake or real, and training
continues until an equilibrium point is reached. In our work, we produced
synthetic exoplanet data and noise with a GAN architecture. Then, as in noise
augmentation, we concatenated the synthetic data to the original data and
trained the deep learning model. The main difference from previous works such
as Yi et al. (2019) is that we produced one-dimensional time series data
instead of an image dataset.
Figure 2: Synthetic samples of Global, Local and Secondary Eclipse Views
produced by our GAN architecture
Our GAN architecture consists of several one-dimensional CNN layers; the
generator has three output channels and the discriminator has three input
channels, matching the global, local and secondary view channels.
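A minimal sketch of such an architecture is given below, assuming TensorFlow's Keras API. The latent dimension, layer widths, kernel sizes and series length are illustrative guesses; the paper does not specify them.

```python
import tensorflow as tf

LATENT_DIM = 100   # assumed latent-seed size (not stated in the paper)
SERIES_LEN = 201   # assumed length of each view (the real views differ)

def make_generator():
    """Map a latent seed to a 3-channel 1-D series (global/local/secondary)."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(SERIES_LEN * 8, input_shape=(LATENT_DIM,)),
        tf.keras.layers.Reshape((SERIES_LEN, 8)),
        tf.keras.layers.Conv1D(16, 5, padding="same", activation="relu"),
        tf.keras.layers.Conv1D(3, 5, padding="same", activation="tanh"),
    ])

def make_discriminator():
    """Classify a 3-channel 1-D series as real (positive logit) or fake."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv1D(16, 5, strides=2, padding="same",
                               input_shape=(SERIES_LEN, 3)),
        tf.keras.layers.LeakyReLU(0.2),
        tf.keras.layers.Conv1D(32, 5, strides=2, padding="same"),
        tf.keras.layers.LeakyReLU(0.2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1),
    ])
```

The two models would then be trained adversarially in the usual GAN loop; only the input/output channel structure here is taken from the text.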
## 5 Training
As noted earlier, we employed noise augmentation in two forms, additive and
multiplicative. Starting with the Gaussian density function, we augmented the
raw training data with Gaussian noise, and repeated the process for the other
density functions, i.e. exponential and Rayleigh. We then trained the same
model as Yu et al. (2019) on the augmented data.
The scale parameters of these density functions were chosen by visual
inspection: we selected values for which the augmented data retained their
visual form, without extreme deformation.
For the deep learning based method, we expanded the training dataset with two
different ways;
* •
Producing Synthetic Exoplanet (Signal) Dataset
* •
Producing Noise (Non-Signal) Dataset
For the first method, we separated the exoplanets (the “PC”-labeled data
points) from the training dataset and fed them into our GAN model, producing a
specified number of synthetic planet candidates: 1000, 2000, 4000 and 6000
synthetic signals.
We went up to 6000 in order to obtain a more homogeneous training dataset,
since the whole training dataset consists of approximately 13000 data points.
After the production phase, we expanded the original training dataset with
these synthetic datasets in turn and trained the model of Yu et al. (2019)
with the same parameters as specified there.
For the second method, we instead took the non-exoplanet (non-“PC”-labeled)
data points from the training data and trained our GAN architecture on them in
order to produce synthetic noise. We produced as many synthetic noise samples
as there are points in the training dataset, effectively doubling the data.
In both cases, we trained our GAN architecture for 200 epochs with a batch
size of 256 and the Adam optimizer (Kingma & Ba 2014). The architecture is
built from convolutional neural networks using the TensorFlow deep learning
package (Abadi et al. 2015).
## 6 Results
Across our data augmentation methods, we obtained a better recall score than
the previous work (Yu et al. 2019) when we produced synthetic planet-candidate
light-curve data points and expanded the training dataset with them.
Figure 3: Precision-Recall Curve of Each Augmentation Method
In figure 3, $gannoise$ denotes expanding the training set by adding synthetic
noise created by our GAN architecture. For $syn1k$, $syn2k$, $syn4k$ and
$syn6k$, we expanded the training set by concatenating synthetic
planet-candidate light-curve signals to the original training set. For
$gauss$, $exp$ and $ray$, we expanded the training dataset by adding Gaussian,
exponential and Rayleigh noise, respectively.
With these augmentation methods, the architecture distinguishes planet
candidates (PCs) from eclipsing binaries (EBs) more successfully than the
previous work (Yu et al. 2019) at a threshold of 0.5: they recover 28 PCs out
of 49 (0.57 recall) on the test set at that threshold, whereas we recover 33
PCs out of 49 (0.67 recall) when the training dataset is expanded by
concatenating 6000 synthetic exoplanets. The second-best result is 31 PCs out
of 49, obtained when the training dataset is expanded with 4000 synthetic
exoplanets created by our GAN architecture.
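The recall figures above follow directly from counting recovered candidates at the decision threshold. A small sketch (our own helper, not the authors' code):

```python
import numpy as np

def recall_at_threshold(y_true, y_score, threshold=0.5):
    """Fraction of true planet candidates whose score reaches the threshold."""
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_score) >= threshold
    return y_pred[y_true].sum() / y_true.sum()
```

Recovering 33 of 49 candidates gives 33/49 ≈ 0.67, as quoted above.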
Figure 4: The ROC Curve for GANs Based Augmentation Method
For the AUC metric of the ROC curve, we obtained results similar to the
previous work: they report an AUC score of 0.978, and our average AUC score is
0.970.
## 7 Future Work and Conclusions
To sum up, data augmentation techniques are developing rapidly and have a
great impact on machine learning problems. Much deep learning research uses
conventional techniques such as flipping, rotating and adding noise, but in
recent years deep-learning-based augmentation techniques have also been used,
with considerable effect. In our research, we applied both kinds of technique;
by expanding our training dataset with planet-candidate light-curve signals,
we obtained a better recall score than the previous work and a similar AUC
score for the ROC curve. In the future, we plan to apply these techniques to
the most recent work, ExoMiner (Valizadegan et al. 2022), and compare the
final prediction results.
Another approach we are considering is transfer learning from the previous
work on the Kepler dataset (Shallue & Vanderburg 2018): we would train the
model partially on the Kepler dataset and continue training on the TESS
dataset.
## 8 Data Availability and Code
The synthetic data and the codes in this article will be shared on reasonable
request to the corresponding author.
## References
* Abadi et al. (2015) Abadi, M., Agarwal, A., Barham, P., et al. 2015, TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. https://www.tensorflow.org/
* Antoniou et al. (2017) Antoniou, A., Storkey, A., & Edwards, H. 2017, arXiv preprint arXiv:1711.04340
* Borucki et al. (2010) Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977
* Christ et al. (2018) Christ, M., Braun, N., Neuffer, J., & Kempa-Liehr, A. W. 2018, Neurocomputing, 307, 72
* Goodfellow et al. (2020) Goodfellow, I., Pouget-Abadie, J., Mirza, M., et al. 2020, Communications of the ACM, 63, 139
* Kingma & Ba (2014) Kingma, D. P., & Ba, J. 2014, arXiv preprint arXiv:1412.6980
* Koch et al. (2010) Koch, D. G., Borucki, W. J., Basri, G., et al. 2010, The Astrophysical Journal Letters, 713, L79
* Krizhevsky et al. (2017) Krizhevsky, A., Sutskever, I., & Hinton, G. E. 2017, Communications of the ACM, 60, 84
* Malik et al. (2022) Malik, A., Moster, B. P., & Obermeier, C. 2022, Monthly Notices of the Royal Astronomical Society, 513, 5505
* Mikołajczyk & Grochowski (2018) Mikołajczyk, A., & Grochowski, M. 2018, in 2018 international interdisciplinary PhD workshop (IIPhDW), IEEE, 117–122
* Mousavi et al. (2020) Mousavi, S. M., Ellsworth, W. L., Zhu, W., Chuang, L. Y., & Beroza, G. C. 2020, Nature communications, 11, 1
* O’Shea & Nash (2015) O’Shea, K., & Nash, R. 2015, arXiv preprint arXiv:1511.08458
* Ricker et al. (2010) Ricker, G. R., Latham, D., Vanderspek, R., et al. 2010, in American Astronomical Society Meeting Abstracts# 215, Vol. 215, 450–06
* Shallue & Vanderburg (2018) Shallue, C. J., & Vanderburg, A. 2018, The Astronomical Journal, 155, 94
* Shorten & Khoshgoftaar (2019) Shorten, C., & Khoshgoftaar, T. M. 2019, Journal of big data, 6, 1
* Valizadegan et al. (2022) Valizadegan, H., Martinho, M. J., Wilkens, L. S., et al. 2022, The Astrophysical Journal, 926, 120
* Vanderburg & Johnson (2014) Vanderburg, A., & Johnson, J. A. 2014, Publications of the Astronomical Society of the Pacific, 126, 948
* Wen et al. (2020) Wen, Q., Sun, L., Yang, F., et al. 2020, arXiv preprint arXiv:2002.12478
* Yi et al. (2019) Yi, X., Walia, E., & Babyn, P. 2019, Medical image analysis, 58, 101552
* Yu et al. (2019) Yu, L., Vanderburg, A., Huang, C., et al. 2019, The Astronomical Journal, 158, 25
# Patterns in transitional shear turbulence.
Part 2: Emergence and optimal wavelength
Sébastien Gomé1, Laurette S. Tuckerman1 and Dwight Barkley2
1Laboratoire de Physique et Mécanique des Milieux Hétérogènes, CNRS, ESPCI
Paris, PSL Research University, Sorbonne Université, Université Paris-Cité,
Paris 75005, France
2Mathematics Institute, University of Warwick, Coventry CV4 7AL, United
Kingdom
###### Abstract
Low Reynolds number turbulence in wall-bounded shear flows _en route_ to
laminar flow takes the form of oblique, spatially-intermittent turbulent
structures. In plane Couette flow, these emerge from uniform turbulence via a
spatiotemporal intermittent process in which localised quasi-laminar gaps
randomly nucleate and disappear. For slightly lower Reynolds numbers,
spatially periodic and approximately stationary turbulent-laminar patterns
predominate. The statistics of quasi-laminar regions, including the
distributions of space and time scales and their Reynolds number dependence,
are analysed. A smooth, but marked transition is observed between uniform
turbulence and flow with intermittent quasi-laminar gaps, whereas the
transition from gaps to regular patterns is more gradual. Wavelength selection
in these patterns is analysed via numerical simulations in oblique domains of
various sizes. Via lifetime measurements in minimal domains, and a wavelet-
based analysis of wavelength predominance in a large domain, we quantify the
existence and non-linear stability of a pattern as a function of wavelength
and Reynolds number. We report that the preferred wavelength maximises the
energy and dissipation of the large-scale flow along laminar-turbulent
interfaces. This optimal behaviour is primarily due to the advective nature of
the large-scale flow, with turbulent fluctuations playing only a secondary
role.
###### keywords:
Turbulence, transition, pattern formation
## 1 Introduction
Turbulence in wall-bounded shear flows in the transitional regime is
characterised by coexisting turbulent and laminar regions, with the turbulent
fraction increasing with Reynolds number. This phenomenon was first described
by Coles & van Atta (1966) and by Andereck et al. (1986) in Taylor-Couette
flow. Later, by constructing Taylor-Couette and plane Couette experiments with
very large aspect ratios, Prigent et al. (2002, 2003) showed that these
coexisting turbulent and laminar regions, called _bands_ and _gaps_
respectively, spontaneously formed regular patterns with a selected wavelength
and orientation that depend systematically on $Re$. These patterns have been
simulated numerically and studied intensively in plane Couette flow (Barkley &
Tuckerman, 2005, 2007; Duguet et al., 2010; Rolland & Manneville, 2011;
Tuckerman & Barkley, 2011), plane Poiseuille flow (Tsukahara et al., 2005;
Tuckerman et al., 2014; Shimizu & Manneville, 2019; Kashyap, 2021), and
Taylor-Couette flow (Meseguer et al., 2009; Dong, 2009; Wang et al., 2022).
In pipe flow, the other canonical wall-bounded shear flow, only the streamwise
direction is long, and transitional turbulence takes the form of _puffs_ ,
also called _flashes_ (Reynolds, 1883; Wygnanski & Champagne, 1973), which are
the one-dimensional analog of turbulent bands. In contrast to bands in planar
shear flows, experiments and direct numerical simulations show that puffs do
not spontaneously form spatially periodic patterns (Moxey & Barkley, 2010;
Avila & Hof, 2013). Instead, the spacing between them is dictated by short-
range interactions (Hof et al., 2010; Samanta et al., 2011). Puffs have been
extensively studied, especially in the context of the model derived by Barkley
(2011a, b, 2016) from the viewpoint of _excitable media_. In this framework,
fluctuations from uniform turbulence trigger quasi-laminar gaps (i.e. low-
turbulent-energy holes within the flow) at random instants and locations, as
has been seen in direct numerical simulations (DNS) of pipe flow. The
bifurcation scenario giving rise to localised gaps has been investigated by
Frishman & Grafke (2022), who called them _anti-puffs_. Interestingly,
spatially periodic solutions like those observed in planar shear flows are
produced in a centro-symmetric version of the Barkley model (Barkley, 2011b)
although the mechanism for their formation has not yet been clarified.
In this paper, we will show that in plane Couette flow, as in pipe flow,
short-lived localised gaps emerge randomly from uniform turbulence at the
highest Reynolds numbers in the transitional range, which we will see is
$Re\simeq 470$ in the domain which we will study. The first purpose of this
paper is to investigate these gaps. The emblematic regular oblique large-scale
bands appear at slightly lower Reynolds numbers, which we will see is
$Re\simeq 430$.
If the localised gaps are disregarded, it is natural to associate the bands
with a _pattern-forming instability_ of the uniform turbulent flow. This was
first suggested by Prigent et al. (2003) and later investigated by Rolland &
Manneville (2011). Manneville (2012) and Kashyap (2021) proposed a Turing
mechanism to account for the appearance of patterns by constructing a
reaction-diffusion model based on an extension of the Waleffe (1997) model of
the streak-roll self-sustaining process. Reetz et al. (2019) discovered a
sequence of bifurcations leading to a large-scale steady state that resembles
a skeleton for the banded pattern, arising from tiled copies of the exact
Nagata (1990) solutions of plane Couette flow. The relationship between these
pattern-forming frameworks and local nucleation of gaps is unclear.
The adaptation of classic stability concepts to turbulent flows is currently a
major research topic. At the simplest level, it is always formally possible to
carry out linear stability analysis of a mean flow, as was done by Barkley
(2006) for a limit cycle in the cylinder wake. The mean flow of uniformly
turbulent plane Couette flow has been found to be linearly stable (Tuckerman
et al., 2010). However, this procedure makes the drastic simplification of
neglecting the Reynolds stress entirely in the stability problem and hence its
interpretation is uncertain (e.g., Bengana & Tuckerman, 2021). The next level
of complexity and accuracy is to represent the Reynolds stress via a closure
model. However, classic closure models for homogeneous turbulence (e.g.
$(K,\Omega)$) have yielded predictions that are completely incompatible with
results from full numerical simulation or experiment (Tuckerman et al., 2010).
Another turbulent configuration in which large, spatially periodic scales
emerge are zonal jets, characteristic of geophysical turbulence. For zonal
jets, a closure model provided by a cumulant expansion (Srinivasan & Young,
2012; Tobias & Marston, 2013) has led to a plausible stability analysis
(Parker & Krommes, 2013). Other strategies are possible for turbulent flows in
general: Kashyap et al. (2022) examined the averaged time-dependent response
of uniform turbulence to large-wavelength perturbations and provided evidence
for a linear instability in plane channel flow. They computed a dispersion
relation which is in good agreement with the natural spacing and angle of
patterns.
Classic analyses for non-turbulent pattern-forming flows, such as Rayleigh-
Bénard convection or Taylor-Couette flow, yield not only a threshold and a
preferred wavelength, but also existence and stability ranges for other
wavelengths through the Eckhaus instability (Busse, 1981; Ahlers et al., 1986;
Riecke & Paap, 1986; Tuckerman & Barkley, 1990; Cross & Greenside, 2009). As
the control parameter is varied, this instability causes spatially periodic
states to make transitions to other periodic states whose wavelength is
preferred. Eckhaus instability is also invoked in turbulent zonal jets (Parker
& Krommes, 2013). The second goal of this paper is to study the regular
patterns of transitional plane Couette flow and to determine the wavelengths
at which they can exist and thrive. At low enough Reynolds numbers, patterns
will be shown to destabilise and to acquire a different wavelength.
Pattern formation is sometimes associated with maximisation principles obeyed
by the preferred wavelength, as in the canonical Rayleigh-Bénard convection.
Such principles, like maximal dissipation, also have a long history for
turbulent solutions. Malkus (1954) and Busse (1981) proposed a principle of
maximal heat transport, or equivalently maximal dissipation, obeyed by
convective turbulent states. The maximal dissipation principle, as formulated
by Malkus (1956) in shear flows, occurs in other systems such as von Kármán
flow (Ozawa et al., 2001; Mihelich et al., 2017). (This principle has been
somewhat controversial and was challenged by Reynolds & Tiederman (1967)
within the context of stability theory. See a modern revisit of Malkus
stability theory with statistical closures by Markeviciute & Kerswell (2022).)
Using the energy analysis formulated in our companion paper Gomé et al.
(2023), we will associate the selected wavelength to a maximal dissipation
observed for the large-scale flow along the bands.
## 2 Numerical setup
Plane Couette flow consists of two parallel rigid plates moving at different
velocities, here equal and opposite velocities $\pm U_{\text{wall}}$. Lengths
are nondimensionalised by the half-gap $h$ between the plates and velocities
by $U_{\text{wall}}$. The Reynolds number is defined to be $Re\equiv
U_{\text{wall}}h/\nu$. We will require one further dimensional quantity that
appears in the friction coefficient – the mean horizontal shear at the walls,
which we denote by $U^{\prime}_{\text{wall}}$. We will use non-dimensional
variables throughout except when specified. We simulate the incompressible
Navier-Stokes equations
$\displaystyle\frac{\partial\bm{u}}{\partial
t}+\left(\bm{u}\cdot\nabla\right)\bm{u}$ $\displaystyle=-\nabla
p+\frac{1}{Re}\nabla^{2}\bm{u},$ (1a) $\displaystyle\nabla\cdot\bm{u}$
$\displaystyle=0,$ (1b)
using the pseudo-spectral parallel code Channelflow (Gibson et al., 2019).
Since the bands are found to be oriented obliquely with respect to the
streamwise direction, we use a doubly periodic numerical domain which is
tilted with respect to the streamwise direction of the flow, shown as the
oblique rectangle in figure 1. This choice was introduced by Barkley &
Tuckerman (2005) and has become common in studying turbulent bands (Shi et
al., 2013; Lemoult et al., 2016; Paranjape et al., 2020; Tuckerman et al.,
2020). The $x$ direction is chosen to be aligned with a typical turbulent band
and the $z$ coordinate to be orthogonal to the band. The relationship between
streamwise-spanwise coordinates and tilted band-oriented coordinates is:
$\displaystyle\mathbf{e}_{\text{strm}}$
$\displaystyle=\quad\cos{\theta}\,\mathbf{e}_{x}+\sin{\theta}\,\mathbf{e}_{z}$
(2a) $\displaystyle\mathbf{e}_{\text{span}}$
$\displaystyle=-\sin{\theta}\,\mathbf{e}_{x}+\cos{\theta}\,\mathbf{e}_{z}\quad$
(2b)
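The change of basis (2) can be written as a small helper; the function itself is ours, for illustration, with the tilt angle defaulting to the $\theta=24^{\circ}$ used in this study.

```python
import numpy as np

def band_basis(theta_deg=24.0):
    """Band-aligned description of the streamwise-spanwise unit vectors,
    following equations (2a)-(2b).

    Returns (e_strm, e_span) as their (e_x, e_z) components."""
    th = np.deg2rad(theta_deg)
    e_strm = np.array([np.cos(th), np.sin(th)])    # equation (2a)
    e_span = np.array([-np.sin(th), np.cos(th)])   # equation (2b)
    return e_strm, e_span
```

The pair forms an orthonormal frame for any tilt angle, as expected of a rotation.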
The usual wall-normal coordinate is denoted by $y$ and the corresponding
velocity by $v$. Thus the boundary conditions are $\bm{u}(y=\pm
1)=\pm\mathbf{e}_{\text{strm}}$ in $y$ and periodic in $x$ and $z$, together
with a zero-flux constraint on the flow in the $x$ and $z$ directions. The
field visualised in figure 1 comes from an additional simulation we carried
out in a domain of size ($L_{\text{strm}},L_{y},L_{\text{span}})=(200,2,100)$
aligned with the streamwise-spanwise directions. Exploiting the periodic
boundary conditions of the simulation, the visualisation shows four copies of
the instantaneous field.
Figure 1: Spatial visualization of our numerical domains at $Re=360$. Colors
show the wall-normal velocity $v$ at the midplane $y=0$ (blue: $-0.2$, white:
0, red: 0.2) in a domain of size $L_{\text{strm}}=400$, $L_{\text{span}}=200$.
Red and blue boxes respectively show a Minimal Band Unit and a Long Slender
Box.
The tilted box effectively reduces the dimensionality of the system by
disallowing large-scale variation along the short $x$ direction. The flow in
this direction is considered to be statistically homogeneous as it is only
dictated by small turbulent scales. In a large non-tilted domain, bands with
opposite orientations coexist (Prigent et al., 2003; Duguet et al., 2010;
Klotz et al., 2022), but only one orientation is permitted in the tilted box.
We will use two types of numerical domains, with different lengths $L_{z}$.
Both have fixed resolution $\Delta z=L_{z}/N_{z}=0.08$, along with fixed
$L_{x}=10$ ($N_{x}=120$), $L_{y}=2$ $(N_{y}=33)$ and $\theta=24^{\circ}$.
These domains are shown in figure 1.
(1) Minimal Band Units, an example of which is shown as the dark red box in
figure 1. These domains accommodate a single band-gap pair and so are used to
study strictly periodic patterns of imposed wavelength $\lambda=L_{z}$.
($L_{z}$ must typically be below $\simeq 65$ to contain a unique band.)
(2) Long Slender Boxes, which have a long $L_{z}$ direction that can
accommodate a large and variable number of gaps and bands in the system. The
blue box in figure 1 is an example of such a domain with $L_{z}=240$, but
larger sizes ($L_{z}=400$ or $L_{z}=800$) will be used in our study.
## 3 Nucleation of laminar gaps and pattern emergence
Figure 2: Spatio-temporal visualization of pattern formation with $L_{z}=800$,
for (a) $Re=500$, (b) $Re=460$, (c) $440$, (d) $420$, (e) $400$ and (f)
$Re=380$. Flow at $t=0$ is initiated from uniform turbulence at $Re=500$.
Color shows cross-flow energy $(v^{2}+u_{\text{span}}^{2})/2$ at $x=L_{x}/2$,
$y=0$ (white: 0, red: 0.02). At high $Re$, weak local gaps appear sparsely.
When $Re$ is decreased, spatio-temporally intermittent patterns of finite
spatial extent emerge. These consist of turbulent cores (dark red) and quasi-
laminar gaps (white). For still lower $Re$, quasi-laminar regions live longer,
and patterns are more regular and steady.
We have carried out simulations in a Long Slender Box of size $L_{z}=800$ for
various $Re$ with the uniform turbulent state from a simulation at $Re=500$ as
an initial condition, a protocol called a quench. Figure 2, an extension of
figure 1 of Gomé et al. (2023, Part 1), displays the resulting spatio-temporal
dynamics at six Reynolds numbers. Plotted is the $(z,t)$ dependence of the
cross-flow energy $(v^{2}+u_{\text{span}}^{2})/2$ at $(x=L_{x}/2,y=0)$. The
cross-flow energy is a useful diagnostic because it is zero for laminar flow
and is therefore a proxy for turbulent kinetic energy. The choice $x=L_{x}/2$
is arbitrary since there is no large-scale variation of the flow field in the
short $x$ direction of the simulation.
Figure 2 encapsulates the main message of this section: the emergence of
patterns out of uniform turbulence is a gradual process involving spatio-
temporal intermittency of turbulent and quasi-laminar flow. At $Re=500$,
barely discernible low-turbulent-energy regions appear randomly within the
turbulent background. At $Re=460$ these regions are more pronounced and begin
to constitute localised, short-lived quasi-laminar gaps within the turbulent
flow. As $Re$ is further decreased, these gaps are more probable and last for
longer times. Eventually, the gaps self-organise into persistent, albeit
fluctuating, patterns. The remainder of the section will quantify the
evolution of states seen in figure 2.
### 3.1 Statistics of laminar and turbulent zones
We consider the $x,y$-averaged cross-flow energy
$e(z,t)\equiv\frac{1}{L_{x}L_{y}}\int_{-1}^{1}\int_{0}^{L_{x}}\frac{1}{2}(v^{2}+u_{\text{span}}^{2})(x,y,z,t)~{}\mathrm{d}x\,\mathrm{d}y$
(3)
as a useful diagnostic of quasi-laminar and turbulent zones. The probability
density functions (PDFs) of $e(z,t)$ are shown in figure 3a for various values
of $Re$. The right tails, corresponding to high-energy events, are broad and
exponential for all $Re$. The left, low-energy portions of the PDFs vary
qualitatively with $Re$, unsurprisingly since these portions correspond to the
weak turbulent events and hence include the gaps. For large $Re$, the PDFs are
maximal around $e\simeq 0.007$. As $Re$ is decreased, a low-energy peak
emerges at $e\simeq 0.002$, corresponding to the emergence of long-lived
quasi-laminar gaps seen in figure 2. The peak at $e\simeq 0.007$ flattens and
gradually disappears. An interesting feature is that the distributions broaden
with decreasing $Re$ with both low-energy and high-energy events becoming more
likely. This reflects a spatial redistribution of energy that accompanies the
formation of gaps, with turbulent bands extracting energy from quasi-laminar
regions and consequently becoming more intense. (See figure 6 of Gomé et al.
(2023, Part 1).)
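The diagnostic (3) is an $x,y$ average of the local cross-flow energy. A minimal sketch on a uniform grid follows; the plain mean in $y$ is our simplifying assumption, since a spectral code such as Channelflow would use Chebyshev quadrature weights.

```python
import numpy as np

def cross_flow_energy(v, u_span):
    """x,y-averaged cross-flow energy e(z) of equation (3).

    v, u_span : arrays of shape (Nx, Ny, Nz), the wall-normal and spanwise
    velocity components on the grid. A uniform average in y is assumed here.
    """
    return (0.5 * (v**2 + u_span**2)).mean(axis=(0, 1))
```

Applied to a sequence of snapshots, this yields the $e(z,t)$ field whose PDFs are discussed above.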
Figure 3: (a) PDFs of local cross-flow energy $e(z,t)$ defined in (3). A
maximum at $e\simeq 0.002$ appears for $Re\leq 420$. (b) Illustration of
thresholding a turbulent-laminar field at $Re=440$, with turbulent regions
($e(z,t)\geq e_{\rm turb}$) in white and quasi-laminar regions
($e(z,t)<e_{\rm turb}$) in blue. The definitions of $L_{\rm lam}$ and $L_{\rm
turb}$, the lengths of quasi-laminar and turbulent regions, are illustrated.
(c) PDFs of laminar gap widths $L_{\rm lam}$, showing plateaux near 15
appearing for $Re\leq 440$. (d) PDFs of widths of turbulent regions $L_{\rm
turb}$, showing a local increase near 20 for $Re\leq 420$.
An intuitive way to define turbulent and quasi-laminar regions is by
thresholding the values of $e(z,t)$. In the following, a region will be called
quasi-laminar if $e(z,t)<e_{\rm turb}$ and turbulent if $e(z,t)\geq e_{\rm
turb}$. As the PDF of $e(z,t)$ evolves with $Re$, we define a $Re$-dependent
threshold as a fraction of its average value, $e_{\rm
turb}=0.75~{}\overline{e}$. The thresholding is illustrated in figure 3b,
which is an enlargement of the flow at $Re=440$ that shows turbulent and
quasi-laminar zones as white and blue areas, respectively. Thresholding within
a fluctuating turbulent environment can conflate long-lived gaps with tiny,
short-lived regions in which the energy fluctuates below the threshold $e_{\rm
turb}$. These are seen as the numerous small blue spots in figure 3b that
differ from the wider and longer-lived gaps. This deficiency is addressed by
examining the statistics of the spatial and temporal sizes of quasi-laminar
gaps.
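The thresholding and the extraction of laminar and turbulent widths can be sketched as follows. This is our own run-length implementation, with periodicity in $z$ ignored for simplicity.

```python
import numpy as np

def region_lengths(e, frac=0.75, dz=1.0):
    """Widths of quasi-laminar and turbulent regions along z.

    A point is quasi-laminar when e(z) < e_turb, with the Re-dependent
    threshold e_turb = frac * mean(e) (frac = 0.75 as in the text).
    Returns (L_lam, L_turb) arrays in units of dz.
    """
    e = np.asarray(e, dtype=float)
    lam = e < frac * e.mean()
    # run-length encode the boolean signal
    edges = np.flatnonzero(np.diff(lam.astype(int))) + 1
    bounds = np.concatenate(([0], edges, [lam.size]))
    runs = np.diff(bounds)
    flags = lam[bounds[:-1]]          # True where the run is quasi-laminar
    return runs[flags] * dz, runs[~flags] * dz
```

Histogramming the two returned arrays over many snapshots gives the PDFs of $L_{\rm lam}$ and $L_{\rm turb}$ discussed below.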
We present the length distributions of laminar $L_{\rm lam}$ and turbulent
zones $L_{\rm turb}$ in figures 3c and 3d at various Reynolds numbers. These
distributions have their maxima at very small lengths, reflecting the large
number of small-scale, low-turbulent-energy regions that arise due to
thresholding the fluctuating turbulent field. As $Re$ is decreased, the PDF
for $L_{\rm lam}$ begins to develop a plateau around $L_{\rm lam}\simeq 15$,
corresponding to the scale of the gaps visible in figure 2. The right tails of
the distribution are exponential and shift upwards with decreasing $Re$. The
PDF of $L_{\rm turb}$ also varies with $Re$, but in a somewhat different way.
As $Re$ decreases, the likelihood of a turbulent length in the range
$15\lesssim L_{\rm turb}\lesssim 35$ increases above the exponential
background, but at least over the range of $Re$ considered, a maximum does not
develop.
The laminar-length distributions show the emergence of structure at $Re$
higher than the turbulent-length distributions. This is visible at $Re=440$,
where the distribution of $L_{\rm turb}$ is indistinguishable from those at
higher $Re$, while the distribution of $L_{\rm lam}$ is substantially altered.
This is entirely consistent with the impression from the visualisation in
figure 2c that quasi-laminar gaps emerge from a uniform turbulent background.
Although the distributions of $L_{\rm lam}$ and $L_{\rm turb}$ behave
differently, the length scales emerging as $Re$ decreases are within a factor
of two of one another. This aspect is not present in the pipe flow results of
Avila & Hof (2013). (See Appendix A for a more detailed comparison.)
### 3.2 Gap lifetimes and transition to patterns
Temporal measurements of the gaps are depicted in figure 4. Figure 4a shows
the procedure by which we define the temporal extents $t_{\text{gap}}$ of
quasi-laminar gaps. For each gap, i.e. a connected zone in $(z,t)$ satisfying
$e(z,t)<e_{\rm turb}$, we locate its latest and earliest times and define
$t_{\rm gap}$ as the distance between them. Here again, we fix the threshold
at $e_{\rm turb}=0.75~{}\overline{e}$. Figure 4b shows the temporal
distribution of gaps, via the survival function of their lifetimes. In a
similar vein to the spatial gap lengths, two characteristic behaviours are
observed: for small times, many points are distributed near zero (as a result
of frequent fluctuations near the threshold $e_{\rm turb}$), while for large
enough times, an exponential regime is seen:
$P(t_{\text{gap}}>t)\propto e^{-t/\tau_{\text{gap}}(Re)}\text{ for }t>t_{0},$
(4)
where $t_{0}=500$ has been used for all $Re$, although the exponential range
begins slightly earlier for larger values of $Re$.
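The tail fit (4) amounts to a straight-line fit of the log survival function; an illustrative implementation, not the authors' analysis code:

```python
import numpy as np

def gap_timescale(t_gap, t0=500.0):
    """Characteristic lifetime tau_gap from the exponential tail (4).

    The empirical survival function P(t_gap > t) is fitted by a straight
    line in log-linear coordinates for t > t0 (t0 = 500 as in the text);
    tau_gap is minus the inverse slope.
    """
    t = np.sort(np.asarray(t_gap, dtype=float))
    surv = 1.0 - np.arange(1, t.size + 1) / t.size   # empirical P(t_gap > t)
    mask = (t > t0) & (surv > 0)
    slope, _ = np.polyfit(t[mask], np.log(surv[mask]), 1)
    return -1.0 / slope
```

An unweighted fit overweights the noisy deep tail, so in practice one may prefer weighted least squares; the sketch keeps the simplest form.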
Figure 4: (a) Same as figure 3b, but illustrating the definition of
$t_{\text{gap}}$, the lifetime of a quasi-laminar gap. (b) Survival functions
of $t_{\text{gap}}$. After initial steep portions, slopes yield the
characteristic times. (c) Evolution with $Re$ of characteristic time
$\tau_{\text{gap}}$ and of ratio of large to small scale energy $e_{L/S}$
defined by (7). Both of these quantities present two exponential regimes, with
the same slopes and a common crossover at $Re_{\rm gu}$. The horizontal dashed
line delimits the region $e_{L/S}>1$, defining $Re_{\rm pg}$ below which
regular patterns dominate. We estimate $Re_{\rm pg}\simeq 430$ and $Re_{\rm
gu}\simeq 470$ (to two significant figures). (d) Evolution of friction
coefficient $C_{f}$ with $Re$, with the three regimes delimited by $Re_{\rm
pg}$ and $Re_{\rm gu}$, as defined from (c).
The slope of the exponential tail is extracted at each $Re$ and the resulting
characteristic time-scale $\tau_{\text{gap}}$ is shown in figure 4c. The
evolution of $\tau_{\rm gap}$ with $Re$ displays two regimes, each with nearly
exponential dependence on $Re$, but with very different slopes on the semi-log
plot. For $Re\geq 470$, the characteristic lifetimes are $\tau_{\rm
gap}=O(10^{2})$ and vary weakly with $Re$. These short timescales correspond
to the small white events visible in figure 2a and are associated with low-
energy values on the left tails of the PDFs for $e(z,t)$ in figure 3a.
Discounting these events, we refer to such states as uniform turbulence. For
$Re<470$, $\tau_{\rm gap}$ varies rapidly with $Re$, increasing by two orders
of magnitude between $Re=470$ and $Re=380$. The abrupt change in slope seen in
figure 4c, which we denote by $Re_{\rm gu}$, marks the transition between gaps
and uniform turbulence; we estimate $Re_{\rm gu}=470$ (to two significant
figures). We stress that as far as we have been able to determine, there is no
critical phenomenon associated with this change of behaviour. That is, the
transition is smooth and lacks a true critical point. It is nevertheless
evident that the dynamics of quasi-laminar gaps changes significantly in the
region of $Re=470$ and therefore it is useful to define a reference Reynolds
number marking this change in behaviour.
Note that typical lifetimes of laminar gaps must become infinite by the
threshold $Re\simeq 325$ below which turbulence is no longer sustained
(Lemoult et al., 2016). (We believe this to be true even for $Re\lesssim 380$
when the permanent banded regime is attained, although this is not shown
here.) For this reason, we have restricted our study of gap lifetimes to
$Re\gtrsim 380$ and we have limited our maximal simulation time to $\sim
10^{4}$.
To quantify the distinction between localized gaps and patterns, we introduce
a variable $e_{L/S}$ as follows. Using the Fourier transform in $z$,
$\bm{\hat{u}}(x,y,k_{z},t)=\frac{1}{L_{z}}\int_{0}^{L_{z}}\bm{u}(x,y,z,t)e^{-ik_{z}z}\,\text{d}z\>,$
(5)
we compute the averaged spectral energy
$\displaystyle\widehat{E}(y,k_{z})\equiv\frac{1}{2}\overline{\bm{\hat{u}}\cdot\bm{\hat{u}}^{\ast}},\qquad\qquad\widehat{E}(k_{z})\equiv\langle\widehat{E}(y,k_{z})\rangle_{y}$
(6)
where the overbar designates an average in $x$ and $t$. This spectral energy
is described in figure 3a of our companion paper Gomé et al. (2023, Part 1).
We are interested in the ratio of $\widehat{E}(k_{z})$ at large scales
(pattern scale) to small scales (roll-streak scale), as it evolves with $Re$.
For this purpose, we define the ratio of large-scale to small-scale maximal
energy:
$e_{L/S}=\frac{\underset{k_{z}<0.5}{\max}\widehat{E}(k_{z})}{\underset{k_{z}\geq
0.5}{\max}\widehat{E}(k_{z})}$ (7)
The choice of wavenumber $k_{z}=0.5$ to delimit large and small scales comes
from the change in sign of non-linear transfers, as established in Gomé et al.
(2023, Part 1). This quantity is shown as blue squares in figure 4c and is
highly correlated to $\tau_{\rm gap}$. This correlation is in itself a
surprising observation for which we have no explanation.
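As an illustrative sketch (not the code used in this study), the ratio defined in (7) can be computed directly from a precomputed spectrum $\widehat{E}(k_z)$; the spectrum below is synthetic, with a pattern-scale peak and a roll-streak peak at purely illustrative positions and amplitudes:

```python
import numpy as np

def e_large_over_small(k_z, E_hat, k_cut=0.5):
    """Ratio of the maximal spectral energy at large scales (k_z < k_cut)
    to that at small scales (k_z >= k_cut), as in equation (7)."""
    large = E_hat[k_z < k_cut]
    small = E_hat[k_z >= k_cut]
    return large.max() / small.max()

# Synthetic spectrum: a pattern-scale peak near k_z ~ 0.15 and a
# roll-streak peak near k_z ~ 1.2 (illustrative values only).
k_z = np.linspace(0.01, 3.0, 300)
E_hat = 2.0 * np.exp(-((k_z - 0.15) / 0.05) ** 2) \
      + 1.0 * np.exp(-((k_z - 1.2) / 0.3) ** 2)

ratio = e_large_over_small(k_z, E_hat)  # > 1: pattern scale dominates
```

With these illustrative peaks the ratio is close to 2, i.e. in the regime $e_{L/S}>1$ where the large-scale pattern dominates.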
For $Re\gtrsim 430$, we have $e_{L/S}<1$, signaling that the dominant peak in
the energy spectrum is at the roll-streak scale, while for $Re\lesssim 430$,
the large-scale pattern begins to dominate the streaks and rolls, as indicated
by $e_{L/S}>1$ (dashed blue line on figure 4c). Note that $Re=430$ is also the
demarcation between unimodal and bimodal PDFs of $e(z,t)$ in figure 3a. The
transition from gaps to patterns is smooth. In fact, we do not even observe a
qualitative feature sharply distinguishing gaps and patterns. We nevertheless
find it useful to define a reference Reynolds number associated to patterns
starting to dominate the energy spectrum. This choice has the advantage of
yielding a quantitative criterion, which we estimate as $Re_{\rm pg}\simeq
430$ (to two significant figures). We find a similar estimation of the value
of $Re$ below which patterns start to dominate via a wavelet-based
measurement, see Appendix B.
In addition to the previous quantitative measures, we also extract the
friction coefficient. This is defined as the ratio of the mean wall shear
stress $\mu U^{\prime}_{\text{wall}}$ to the dynamic pressure $\rho
U_{\text{wall}}^{2}/2$, which we write in physical units and then in non-
dimensional variables as:
$\displaystyle C_{f}\equiv\frac{\mu U^{\prime}_{\text{wall}}}{\frac{1}{2}\rho
U_{\text{wall}}^{2}}=\frac{2\nu}{hU_{\text{wall}}}\frac{U^{\prime}_{\text{wall}}}{U_{\text{wall}}/h}=\frac{2}{Re}\frac{\partial\left<u_{\text{strm}}\right>_{x,z,t}}{\partial
y}\bigg{\rvert}_{\rm wall}$ (8)
In (8), the dimensional quantities $h$, $\rho$, $\mu$, and $\nu$ are the half-
height, the density, and dynamic and kinematic viscosities, and $U_{\rm wall}$
and $U^{\prime}_{\rm wall}$ are the velocity and mean velocity gradient at the
wall. We note that the behavior of $C_{f}$ in the transitional region has been
investigated in plane channel flow by Shimizu & Manneville (2019) and Kashyap
et al. (2020). Our measurements of $C_{f}$ are shown in figure 4d. We
distinguish different trends within each of the three regimes defined earlier
in figure 4c. In the uniform regime $Re>Re_{\rm gu}=470$, $C_{f}$ increases
with decreasing $Re$. In the patterned regime $Re<Re_{\rm pg}=430$, $C_{f}$
decreases with decreasing $Re$. The localised-gap regime $Re_{\rm
pg}<Re<Re_{\rm gu}$ connects these two tendencies, with $C_{f}$ reaching a
maximum at $Re=450$.
The presence of a region of maximal $C_{f}$ (or equivalently maximal total
dissipation) echoes the results on the energy balance presented in Gomé et al.
(2023, Part 1): the uniform regime dissipates more energy as $Re$ decreases,
up to a point where this is mitigated by the many laminar gaps nucleated. This
is presumably due to the mean flow in the turbulent region needing energy
influx from gaps to compensate for its increasing dissipation.
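The wall-gradient form of (8) lends itself to a simple finite-difference evaluation. The sketch below (our own illustration, not the code used for figure 4d) checks it on the laminar Couette profile $u=y$, for which the wall gradient is exactly 1 and hence $C_f = 2/Re$:

```python
import numpy as np

def friction_coefficient(u_strm, y, Re):
    """C_f = (2/Re) * d<u_strm>/dy at the wall, equation (8), using a
    one-sided finite difference at the bottom wall y = -1."""
    return (2.0 / Re) * (u_strm[1] - u_strm[0]) / (y[1] - y[0])

# Sanity check on the laminar Couette profile u = y.
y = np.linspace(-1, 1, 101)
Re = 400
Cf = friction_coefficient(y, y, Re)  # equals 2/Re = 0.005
```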
### 3.3 Laminar-turbulent correlation function
The changes in regimes and the distinction between local gaps and patterns can
be further studied by measuring the spatial correlation between quasi-laminar
regions within the flow. We define
$\Theta(z,t)=\begin{dcases}1~{}~{}\text{ if }e(z,t)<e_{\rm turb}\text{
(laminar) }\\ 0~{}~{}\text{ otherwise (turbulent) }\end{dcases}$ (9)
(this is the quantity shown in blue and white in figures 3b and 4a). We then
compute its spatial correlation function:
$C(\delta z)=\frac{\left<{\Theta}(z){\Theta}(z+\delta
z)\right>_{z,t}-\left<{\Theta}(z)\right>^{2}_{z,t}}{\left<{\Theta}(z)^{2}\right>_{z,t}-\left<{\Theta}(z)\right>^{2}_{z,t}}.$
(10)
Along with $(z,t)$ averaging, $C$ is also averaged over multiple realisations
of quench experiments. As $\Theta$ takes only the values 0 and 1, $C$ can be
understood as the average probability of finding a gap at a distance $\delta
z$ from a gap at position $z$. The results are presented in figure 5a. The
comparative behaviour of $C$ at near-zero values is enhanced by plotting
$\tanh(10~{}C)$ in figure 5b. At long range, $C$ approaches zero with small
fluctuations at $Re=480$, a noisy periodicity at $Re=460$, and a nearly
periodic behaviour for $Re\leq 420$.
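Equation (10) is a standard normalised autocorrelation of a binary field. A minimal sketch of its computation (not the analysis code of this study) on a synthetic, perfectly periodic gap arrangement is:

```python
import numpy as np

def gap_correlation(theta):
    """Normalised spatial autocorrelation of the laminar indicator
    Theta(z), equation (10), assuming periodicity in z (np.roll)."""
    mean = theta.mean()
    var = (theta ** 2).mean() - mean ** 2
    shifts = np.arange(theta.size)
    C = np.array([(theta * np.roll(theta, -s)).mean() - mean ** 2
                  for s in shifts])
    return shifts, C / var

# Perfectly periodic gaps of width 10 repeating every 40 units
# (illustrative values; real Theta(z,t) fields fluctuate).
z = np.arange(400)
theta = ((z % 40) < 10).astype(float)
dz, C = gap_correlation(theta)
```

For this idealised field, $C$ equals 1 at $\delta z=0$ and at multiples of the imposed spacing 40, and is negative in between, which is the signature a real patterned state approaches at low $Re$.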
In all cases, $C$ initially decreases from one and reaches a first minimum at
$\delta z\simeq 20$, due to the minimal possible size of a turbulent zone that
suppresses the creation of neighbouring laminar gaps. $C$ has a prominent
local maximum $\delta z_{\rm max}$ right after its initial decrease, at
$\delta z_{\rm max}\simeq 32$ at $Re=480$, which increases to $\delta z_{\rm
max}\simeq 41$ at $Re=420$. These maxima, shown as coloured circles in figure
5b, indicate that gap nucleation is preferred at distance $\delta z_{\rm max}$
from an existing gap. The increase in $\delta z_{\rm max}$ and in the
subsequent extrema as $Re$ is lowered agrees with the trend of increasing
wavelength of turbulent bands as $Re$ is decreased in the fully banded regime
at lower $Re$ (Prigent et al., 2003; Barkley & Tuckerman, 2005).
Figure 5: (a) Gap-to-gap correlation function $C(\delta z)$ defined by (10)
for various values of $Re$. (b) For $Re\gtrsim 440$ the weak variation and
short-ranged maxima are enhanced by plotting $\tanh(10~{}C(\delta z))$. The
dots correspond to the first local maximum, indicating the selection of a
finite distance between two local gaps, including at the highest $Re$. Large-
scale modulations smoothly give way to weak short-range interactions as $Re$
increases and the flow visits the patterned, local-gap and uniform regimes.
The smooth transition from patterns to uniform flow is confirmed in the
behaviour of the correlation function. Large-scale modulations characteristic
of the patterned regime gradually disappear with increasing $Re$, as gaps
become more and more isolated. Only a weak, finite-length interaction subsists
in the local-gap and uniform regimes, and it too fades with increasing $Re$.
It is the selection of this finite gap spacing that we investigate in §4 and
§5.
## 4 Wavelength selection for turbulent-laminar patterns
In this section, we investigate the existence of a preferred pattern
wavelength by using as a control parameter the length $L_{z}$ of the Minimal
Band Unit. In a Minimal Band Unit, the system is constrained and the
distinction between local gaps and patterns is lost; see section 3 of our
companion paper Gomé et al. (2023, Part 1). $L_{z}$ is chosen such as to
accommodate at most a single turbulent zone and a single quasi-laminar zone,
which due to imposed periodicity, can be viewed as one period of a perfectly
periodic pattern. By varying $L_{z}$, we can verify whether a regular pattern
of given wavelength $L_{z}$ can emerge from uniform turbulence, disregarding
the effect of scales larger than $L_{z}$ or of competition with wavelengths
close to $L_{z}$. We refer to these simulations in Minimal Band Units as
_existence_ experiments. Indeed, one of the main advantages of the Minimal
Band Unit is the ability to create patterns of a given angle and wavelength
which may not be stable in a larger domain.
In contrast, in a Long Slender Box, $L_{z}$ is large enough to accommodate
multiple bands and possibly even patterns of different wavelengths. An initial
condition consisting of a regular pattern of wavelength $\lambda$ can be
constructed by concatenating bands produced from a Minimal Band Unit of size
$\lambda$. The _stability_ of such a pattern is studied by allowing this
initial state to evolve via the non-linear Navier-Stokes equations. Both
existence and stability studies can be understood in the framework of the
Eckhaus instability (Kramer & Zimmermann, 1985; Ahlers et al., 1986; Tuckerman
& Barkley, 1990; Cross & Greenside, 2009).
In previous studies of transitional regimes, Barkley & Tuckerman (2005)
studied the evolution of patterns as $L_{z}$ was increased. In Section 4.1, we
extend this approach to multiple sizes of the Minimal Band Unit by comparing
lifetimes of patterns that naturally arise in this constrained geometry. The
stability of regular patterns of various wavelengths will be studied in Long
Slender Boxes ($L_{z}=400$) in Section 4.2.
### 4.1 Temporal intermittency of regular patterns in a short-$L_{z}$ box
Figure 6: Pattern lifetimes. (a) Space-time visualization of a metastable
pattern in a Minimal Band Unit with $L_{z}=40$ at $Re=440$. Colors show
spanwise velocity (blue: $-0.1$, white: 0, red: 0.1). (b) Values of the
dominant wavelength $\widehat{\lambda}_{\max}$ (light blue curve) and of its
short-time average $\langle\widehat{\lambda}_{\max}\rangle_{t_{a}}$ (dark blue
curve) are shown; see (11). A state is defined to be patterned if
$\widehat{\lambda}_{\max}=L_{z}$. (c) Survival function of lifetimes of
turbulent-laminar patterns in a Minimal Band Unit with $L_{z}=40$ for various
$Re$. The pattern lifetimes $t_{\rm pat}$ are the lengths of the time
intervals during which $\widehat{\lambda}_{\max}=L_{z}$. (d) Above:
characteristic times $\tau_{\rm pat}$ extracted from survival functions as a
function of $L_{z}$ and $Re$. Below: intermittency factor for the patterned
state $\gamma_{\rm pat}$, which is the fraction of time spent in the patterned
state.
Figure 6a shows the formation of a typical pattern in a Minimal Band Unit of
size $L_{z}=40$ and at $Re=440$. While the system cannot exhibit the spatial
intermittency seen in figure 2c, temporal intermittency is possible and is
seen as alternation between uniform turbulence and a pattern. We plot the
spanwise velocity at $y=0$ and $x=L_{x}/2$. This is a particularly useful
measure of the large-scale flow associated with patterns, seen as red and blue
zones surrounding a white quasi-laminar region, i.e. a gap. The patterned
state spontaneously emerges from uniform turbulence and remains from $t\simeq
1500$ to $t\simeq 3400$. At $t\simeq 500$, a short-lived gap appears at
$z=10$, which can be seen as an attempt to form a pattern.
We characterise the pattern quantitatively as follows. For each time $t$, we
compute $|\langle\widehat{\bm{u}}(y=0,k_{z},t)\rangle_{x}|^{2}$, which is the
instantaneous energy contained in wavenumber $k_{z}$ at the mid-plane. We then
determine the wavenumber that maximises this energy and compute the
corresponding wavelength. That is, we define
$\widehat{\lambda}_{\max}(t)\equiv\frac{2\pi}{\underset{k_{z}>0}{\text{argmax}}\>|{\langle\widehat{\bm{u}}(y=0,k_{z},t)}\rangle_{x}|^{2}}.$
(11)
The possible values of $\widehat{\lambda}_{\max}$ are integer divisors of
$L_{z}$, here 40, 20, 10, etc. Figure 6b presents $\widehat{\lambda}_{\max}$
and its short-time average $\langle\widehat{\lambda}_{\max}\rangle_{t_{a}}$
with $t_{a}=30$ as light and dark blue curves, respectively. When turbulence
is uniform, $\widehat{\lambda}_{\max}$ varies rapidly between its discrete
allowed values, while $\langle\widehat{\lambda}_{\max}\rangle_{t_{a}}$
fluctuates more gently around 10. The flow state is deemed to be patterned
when its dominant mode is
$\langle\widehat{\lambda}_{\max}\rangle_{t_{a}}=L_{z}$. The long-lived pattern
occurring for $1500\leq t\leq 3400$ in figure 6a is seen as a plateau of
$\langle\widehat{\lambda}_{\max}\rangle_{t_{a}}$ in figure 6b. There are other
shorter-lived plateaus, notably for $500\leq t\leq 750$. A similar analysis
was carried out by Barkley & Tuckerman (2005); Tuckerman & Barkley (2011)
using the Fourier component corresponding to wavelength $L_{z}$ of the
spanwise mid-gap velocity.
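Definition (11) amounts to locating the most energetic nonzero Fourier mode of the midplane signal. A schematic implementation (our own sketch, with an illustrative two-mode signal) is:

```python
import numpy as np

def dominant_wavelength(u_mid, L_z):
    """lambda_max = 2*pi / argmax_{k_z>0} |u_hat(k_z)|^2, equation (11),
    for a 1-D midplane velocity signal of spatial period L_z."""
    power = np.abs(np.fft.rfft(u_mid)) ** 2
    n = np.argmax(power[1:]) + 1   # skip the k_z = 0 mode
    return L_z / n                 # = 2*pi / (2*pi*n / L_z)

# Illustrative signal: a dominant mode at wavelength L_z plus a weaker
# harmonic at L_z/4 (mimicking the roll-streak scale).
L_z = 40.0
z = np.linspace(0, L_z, 256, endpoint=False)
u = np.cos(2 * np.pi * z / L_z) + 0.3 * np.cos(2 * np.pi * 4 * z / L_z)
lam = dominant_wavelength(u, L_z)  # returns 40.0
```

As in the text, the possible return values are the integer divisors of $L_z$.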
Figure 6c shows the survival function of the pattern lifetimes $t_{\rm pat}$
obtained from $\langle\widehat{\lambda}_{\max}\rangle_{t_{a}}$ over long
simulation times for various $Re$. This measurement differs from figure 4b,
which showed lifetimes of _gaps_ in a Long Slender Box and not regular
_patterns_ obtained in a Minimal Band Unit. The results are however
qualitatively similar, with two characteristic zones in the distribution, as
in figure 4b: at short times, many patterns appear due to fluctuations;
while after $t\simeq 200$, the survival functions enter an approximately
exponential regime, from which we extract the characteristic times $\tau_{\rm
pat}$ by taking the inverse of the slope.
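The extraction of a characteristic time from the exponential tail of a survival function can be sketched as follows, using synthetic exponentially distributed lifetimes (the fitting window $t>200$ follows the text; the sample size and true time scale are illustrative, and this is not the fitting code of the study):

```python
import numpy as np

rng = np.random.default_rng(0)
tau_true = 500.0
lifetimes = rng.exponential(tau_true, size=20000)

# Empirical survival function S(t) = P(lifetime > t).
t = np.sort(lifetimes)
S = 1.0 - np.arange(1, t.size + 1) / t.size

# In the exponential regime, log S(t) = -t / tau; fit its slope over the
# tail t > 200 (excluding the last, poorly sampled points).
mask = (t > 200) & (S > 1e-3)
slope, _ = np.polyfit(t[mask], np.log(S[mask]), 1)
tau_est = -1.0 / slope   # characteristic time, close to tau_true
```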
We then vary $L_{z}$, staying within the Minimal Box regime $L_{z}\lesssim 65$
in which only one band can fit. Figure 6d (top) shows that $\tau_{\rm pat}$
presents a broad maximum in $L_{z}$ whose strength and position depend on
$Re$: $L_{z}\simeq 42$ at $Re=440$ and $L_{z}\simeq 44$ at $Re=400$. This
wavelength corresponds approximately to the natural spacing observed in a
Long Slender Box (figure 2). Figure 6d (bottom) presents the fraction of time
that is spent in a patterned state, denoted $\gamma_{\rm pat}$, to reflect
that this should be thought of as the intermittency factor for the patterned
state. The dependence of $\gamma_{\rm pat}$ on $L_{z}$ follows the same trend
as $\tau_{\rm pat}$, but less strongly (the scale of the inset is linear,
while that for $\tau_{\rm pat}$ is logarithmic).
The results shown in figure 6d complement the Ginzburg-Landau description
proposed by Prigent et al. (2003) and Rolland & Manneville (2011). To quantify
the bifurcation from featureless to pattern turbulence, these authors defined
an order parameter and showed that it has a quadratic maximum at an optimal
wavenumber. This is consistent with the approximate quadratic maxima that we
observe in $\tau_{\rm pat}$ and in $\gamma_{\rm pat}$ with regard to $L_{z}$.
Note that the scale of the pattern can be roughly set from the force balance
in the laminar flow regions (Barkley & Tuckerman, 2007), $\lambda\simeq
Re\sin\theta/\pi$, which yields a wavelength of 52 at $Re=400$ (close to the
value of 44 found in figure 6d).
### 4.2 Pattern stability in a large domain
Figure 7: Simulation in a Long Slender Box from a noise-perturbed periodic
pattern with (a) initial $\lambda=57$ at $Re=400$ and (b) initial $\lambda=40$
at $Re=430$. Colors show spanwise velocity (red: 0.1, white: 0, blue: $-0.1$).
(c) and (d) show the local dominant wavelength $\tilde{\lambda}_{\rm
max}(z,t)$ determined by wavelet analysis (see Appendix B) corresponding to
the simulations shown in (a) and (b). Color at $t=0$ shows the wavelength
$\lambda$ of the initial condition. (e) shows the wavelet-defined
$H_{\lambda}(t)$ defined in (12), which quantifies the proportion of the
domain that retains initial wavelength $\lambda$ as a function of time for
cases (a) and (b). Circles indicate the times for (a) and (b) after which
$H_{\lambda}$ is below the threshold value $H_{\rm stab}$ for a sufficiently
long time. (f) Ensemble-averaged $\bar{t}_{\text{stab}}$ of the decay time of
an imposed pattern of wavelength $\lambda$ for various values of $Re$. The
relative stability of a wavelength, whether localised or not, is measured by
$\bar{t}_{\text{stab}}$ via the wavelet analysis.
To study the stability of a pattern of wavelength $\lambda$, we prepare an
initial condition for a Long Slender Box by concatenating repetitions of a
single band produced in a Minimal Band Unit. We add small-amplitude noise to
this initial pattern so that the repeated bands do not all evolve identically.
Figures 7a and 7b show two examples of such simulations. Depending on the
value of $Re$ and of the initial wavelength $\lambda$, the pattern
destabilises to either another periodic pattern (figure 7a for $Re=400$) or to
localised patterns surrounded by patches of featureless turbulence (figure 7b
for $Re=430$).
It can be seen that patterns often occupy only part of the domain. For this
reason, we turn to the wavelet decomposition (Meneveau, 1991; Farge, 1992) to
quantify patterns locally. In contrast to a Fourier decomposition, the wavelet
decomposition quantifies the signal as a function of space and scale. From
this, we are able to define a local dominant wavelength,
$\widetilde{\lambda}_{\max}(z,t)$, similar in spirit to
$\widehat{\lambda}_{\max}(t)$ in (11), but now at each space-time point. (See
Appendix B for details.) Figures 7c and 7d show
$\widetilde{\lambda}_{\max}(z,t)$ obtained from wavelet analysis of the
simulations visualised in figures 7a and 7b.
We now use the local wavelength $\widetilde{\lambda}_{\max}(z,t)$ to quantify
the stability of an initial wavelength. We use a domain of length $L_{z}=400$
and we concatenate $n=7$ to 13 repetitions of a single band to produce a
pattern with initial wavelength $\lambda(n)\equiv 400/n\simeq 57,50,44\ldots
31$. (We have rounded $\lambda$ to the nearest integer value here and in what
follows.) After adding low-amplitude noise, we run a simulation lasting 5000
time units, compute the wavelet transform and calculate from it the local
wavelengths $\widetilde{\lambda}_{\max}(z,t)$. We define
$\epsilon_{\lambda}\equiv\min((\lambda(n+1)-\lambda(n))/2,(\lambda(n)-\lambda(n-1))/2)$
such that $|\lambda-\widetilde{\lambda}_{\max}(z,t)|<\epsilon_{\lambda}$ if
$\widetilde{\lambda}_{\max}$ is closer to $\lambda(n)$ than to its two
neighboring values. Finally, in order to measure the proportion of the domain
in which the dominant mode $\widetilde{\lambda}_{\max}$ is $\lambda$, we compute
$H_{\lambda}(t)=\left\langle\frac{1}{L_{z}}\int_{0}^{L_{z}}\Theta\left(\epsilon_{\lambda}-|\lambda-\widetilde{\lambda}_{\max}(z,t)|\right)~{}\text{d}z\right\rangle_{t_{a}}$
(12)
where $\Theta$ is the Heaviside function and the short-time average
$\left<\cdot\right>_{t_{a}}$ is taken over time $t_{a}=30$ as before. In
practice, because patterns in a Long Slender Box still fluctuate in width, a
steady pattern may have $H_{\lambda}$ somewhat less than 1. If $H_{\lambda}\ll
1$, a pattern of wavelength $\lambda$ is present in only a very small part of
the flow.
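Apart from the short-time average, (12) is simply the fraction of the domain where $\widetilde{\lambda}_{\max}$ lies within $\epsilon_\lambda$ of $\lambda$. A minimal sketch (with a hypothetical local-wavelength field, not data from the study) is:

```python
import numpy as np

def H_lambda(lam_local, lam, eps):
    """Fraction of the domain whose local dominant wavelength lies within
    eps of the imposed wavelength lam (equation (12), before the
    short-time average)."""
    return np.mean(np.abs(lam_local - lam) < eps)

# Hypothetical field: 70% of the domain retains lam = 40, while the rest
# has decayed to small (roll-streak) scales.
lam_local = np.concatenate([np.full(70, 40.0), np.full(30, 8.0)])
H = H_lambda(lam_local, lam=40.0, eps=3.0)  # = 0.7
```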
Figure 7e shows how wavelet analysis via the Heaviside-like function
$H_{\lambda}(t)$ quantifies the relative stability of the pattern in the
examples shown in figures 7a and 7b. The flow in figure 7a at $Re=400$ begins
with $\lambda=57$, i.e. 7 bands. Figure 7c retains the red color corresponding
to $\lambda=57$ over all of the domain for $t\lesssim 1200$ and over most of
it until $t\lesssim 2300$. The red curve in figure 7e shows $H_{\lambda}$
decaying gradually and roughly monotonically. One additional gap appears at
around $t=2300$ and starting from then, $H_{\lambda}$ remains low. This
corresponds to the initial wavelength $\lambda=57$ losing its dominance to
$\lambda=40$, 44 and 50 in the visualisation of
$\widetilde{\lambda}_{\max}(z,t)$ in figure 7c. By $t=5000$, the flow shows 9
bands with a local wavenumber $\lambda$ between 40 and 50.
The flow in figure 7b at $Re=430$ begins with $\lambda=40$, i.e. 10 bands.
Figure 7d shows that the initial light green color corresponding to 40 is
retained until $t\lesssim 800$. The blue curve in figure 7e representing
$H_{\lambda}$ initially decreases and drops precipitously around $t\simeq
1000$ as several gaps disappear in figure 7b. $H_{\lambda}$ then fluctuates
around a finite value, which is correlated to the presence of gaps whose local
wavelength is the same as the initial $\lambda$, visible as zones where
$\widetilde{\lambda}_{\max}=40$ in figure 7d. The rest of the flow can be
mostly seen as locally featureless turbulence, where the dominant wavelength
is small ($\widetilde{\lambda}_{\max}\leq 10$). The local patterns fluctuate
in width and strength, and $H_{\lambda}$ evolves correspondingly after
$t=1000$. The final state reached in figure 7b at $Re=430$ is characterised by
the presence of intermittent local gaps.
The lifetime of an initially imposed pattern wavelength $\lambda$ is denoted
$t_{\text{stab}}$ and is defined as follows: We first define a threshold
$H_{\rm stab}\equiv 0.2$ (marked by a horizontal dashed line on figure 7e). If
$H_{\lambda}(t)$ is statistically below $H_{\rm stab}$, the imposed pattern
will be considered as unstable. Following this principle, $t_{\text{stab}}$ is
defined as the first time $H_{\lambda}$ is below $H_{\rm stab}$, with a
further condition to dampen the effect of short-term fluctuations:
$t_{\text{stab}}$ must obey
$\left<H_{\lambda}(t)\right>_{t\in[t_{\text{stab}},~{}t_{\text{stab}}+2000]}<H_{\rm
stab}$, so as to ensure that the final state is on average below $H_{\rm
stab}$. The corresponding times in case (a) and (b) are marked respectively by
a red and a blue circle in figure 7e.
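The two-part criterion defining $t_{\text{stab}}$ (first crossing below $H_{\rm stab}$, confirmed by the 2000-time-unit averaged condition) can be sketched as follows, on a synthetic signal with a short-lived dip that the averaged condition correctly rejects (all values illustrative):

```python
import numpy as np

def first_stable_crossing(t, H, H_stab=0.2, window=2000.0):
    """First time H(t) < H_stab such that the average of H over the next
    `window` time units also stays below H_stab (definition of t_stab).
    The window is truncated at the end of the record."""
    for i in np.flatnonzero(H < H_stab):
        in_window = (t >= t[i]) & (t <= t[i] + window)
        if H[in_window].mean() < H_stab:
            return t[i]
    return None

# Synthetic signal: H dips briefly below 0.2 around t = 1000 but
# recovers, then decays for good after t = 3000.
t = np.arange(0.0, 8000.0, 10.0)
H = np.where(t < 3000, 0.6, 0.05)
H[(t >= 1000) & (t < 1050)] = 0.1   # short-lived fluctuation
t_stab = first_stable_crossing(t, H)  # 3000.0, not 1000.0
```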
Repeating this experiment over multiple realisations of the initial pattern
(i.e. different noise realisations) yields an ensemble-averaged
$\bar{t}_{\text{stab}}$. This procedure estimates the time for an initially
regular and dominant wavelength to disappear from the flow domain, regardless
of the way in which it does so and of the final state approached. Figure 7f
presents the dependence of $\overline{t}_{\text{stab}}$ on $\lambda$ for
different values of $Re$. Although our procedure relies on highly fluctuating
signals (like those presented on figure 7e) and on a number of arbitrary
choices ($H_{\rm stab}$, $\epsilon_{\lambda}$, etc.) that alter the exact
values of stability times, we find that the trends visualised in figure 7f are
robust. (The sensitivity of $\overline{t}_{\text{stab}}$ with $H_{\rm stab}$
is shown in figure 13b of Appendix B.)
A most-stable wavelength ranging between 40 and 44 dominates the stability
times for all the values of $Re$ under study. This is similar to the results
from the _existence_ study on figure 6d, which showed a preferred wavelength
emerging from the uniform state at around $42$ at $Re=440$. Consistently with
what was observed in Minimal Band Units of different sizes, the most stable
wavelength grows with decreasing $Re$.
### 4.3 Discussion
Our study of the _existence_ and _stability_ of large-scale modulations of the
turbulent flow is summarised in figure 8. This figure resembles the existence
and stability diagrams presented for usual (non-turbulent) hydrodynamic
instabilities such as Rayleigh-Bénard convection and Taylor-Couette flow
(Busse, 1981; Ahlers et al., 1986; Cross & Greenside, 2009). In classic
systems, instabilities appear with increasing control parameter, while here
gaps and bands emerge from uniform turbulent flow as $Re$ is lowered.
Therefore, we plot the vertical axis in figure 8 with the Reynolds number
decreasing upwards.
We recall that the existence study of $\S$4.1 culminated in the measurement of
$\gamma_{\rm pat}(\lambda,Re)$, the fraction of simulation time that is spent
in a patterned state, plotted in figure 6d. The parameter values at which
$\gamma_{\rm pat}(\lambda,Re)=0.45$ (an arbitrary threshold that covers most
of our data range) are shown as black circles in figure 8. The dashed curve is
an interpolation of the iso-$\gamma_{\rm pat}$ points and separates two
regions, with patterns more likely to exist above the curve than below. The
minimum of this curve is estimated to be $\lambda\simeq 42$. This is a
preferred wavelength at which patterns first statistically emerge as $Re$ is
decreased from large values.
The final result of the stability study in section §4.2, shown in figure 7f,
was $\overline{t}_{\text{stab}}(Re,\lambda)$, a typical duration over which a
pattern initialised with wavelength $\lambda$ would persist. The colours in
figure 8 show $\overline{t}_{\text{stab}}$. The peak in
$\overline{t}_{\text{stab}}$ is first discernible at $Re\simeq 440$ and occurs
at $\lambda\simeq 40$. The pattern existence and stability zones are similar
in shape and in their lack of symmetry with respect to line $\lambda=42$. The
transition seen in figures 7a and 7c from $\lambda=57$ to $\lambda=44$ at
$Re=400$ corresponds to motion from a light blue to a dark blue area in the
top row of figure 8. This change in pattern wavelength resembles the Eckhaus
instability which, in classic hydrodynamics, leads to transitions from
unstable wavelengths outside a stability band to stable wavelengths inside.
The presence of a most-probable wavelength confirms the initial results of
Prigent et al. (2003) and those of Rolland & Manneville (2011). This is also
consistent with the instability study of Kashyap et al. (2022) in plane
Poiseuille flow. However, contrary to classic pattern-forming instabilities,
the turbulent-laminar pattern does not emerge from an exactly uniform state,
but instead from a state in which local gaps are intermittent, as established
in Section 3. In Section 5, we will emphasise the importance of the mean flow
in the wavelength selection that we have described.
Figure 8: Visualisation of the pattern selection in the phase space
$(\lambda,Re)$. Colours show the stability times $\overline{t}_{\text{stab}}$,
while open circles are points $\gamma_{\rm pat}(\lambda,Re)=0.45$. The dashed
line is an illustrative fit of these points.
## 5 Optimisation of the large-scale flow
This section is devoted to the dependence of various energetic features of the
patterned flow on the domain length $L_{z}$ of a Minimal Band Unit. We fix the
Reynolds number at $Re=400$. In the existence study of §4, the wavelength
$\lambda\simeq 44$ was found to be selected by patterns. (Recall the uppermost
curves corresponding to $Re=400$ in figure 6d.) We will show that this
wavelength also extremises quantities in the energy balances of the flow.
### 5.1 Average energies in the patterned state
We first decompose the flow into a mean and fluctuations,
$\bm{u}=\overline{\bm{u}}+\bm{u}^{\prime}$, where the mean (overbar) is taken
over the statistically homogeneous directions $x$ and $t$. We compute energies
of the total flow $\left<E\right>\equiv\left<\bm{u}\cdot\bm{u}\right>/2$ and
of the fluctuations (turbulent kinetic energy)
$\left<K\right>\equiv\left<\bm{u}^{\prime}\cdot\bm{u}^{\prime}\right>/2$,
where $\left<\cdot\right>$ is the $(x,y,z,t)$ average. Figure 9a shows these
quantities as a function of $L_{z}$ for the patterned state at $Re=400$. At
$L_{z}=44$, $\left<E\right>$ is maximal and $\left<K\right>$ is minimal. As a
consequence, the mean-flow energy
$\frac{1}{2}\left<\overline{\bm{u}}\cdot\overline{\bm{u}}\right>=\left<E\right>-\left<K\right>$
is also maximal at $L_{z}=44$. Figure 9a additionally shows average
dissipation of the total flow
$\left<D\right>\equiv\left<|\nabla\times\bm{u}|^{2}\right>/Re$ and average
dissipation of turbulent kinetic energy
$\left<\epsilon\right>\equiv\left<|\nabla\times\bm{u}^{\prime}|^{2}\right>/Re$,
both of which are minimal at $L_{z}=44$. Note that these total energy and
dissipation terms change very weakly with $L_{z}$, with a variation of less
than $6\%$.
Figure 9: Energy analysis for the patterned state at $Re=400$ as a function
of the size $L_{z}$ of a Minimal Band Unit. (a) Spatially-averaged total
energy $\left<E\right>$, mean TKE $\left<K\right>$ ($\times 5$), mean total
dissipation $\left<D\right>$, mean turbulent dissipation
$\left<\epsilon\right>$ ($\times 3$), for the patterned state at $Re=400$ as a
function of $L_{z}$. (b) Energy in each of the $z$-Fourier components of the
mean flow (equations (13) and (14)).
The mean flow is further analysed by computing the energy of each spectral
component of the mean flow. For this, the $x$, $t$ averaged flow
$\overline{\bm{u}}$ is decomposed into Fourier modes in $z$:
$\overline{\bm{u}}(y,z)=\overline{\bm{u}}_{0}(y)+2\mathcal{R}\left(\overline{\bm{u}}_{1}(y)e^{i2\pi
z/L_{z}}\right)+\overline{\bm{u}}_{>1}(y,z)$ (13)
where $\overline{\bm{u}}_{0}$ is the uniform component of the mean flow,
$\overline{\bm{u}}_{1}$ is the trigonometric Fourier coefficient corresponding
to $k_{z}=2\pi/L_{z}$ and $\overline{\bm{u}}_{>1}$ is the remainder of the
decomposition, for $k_{z}>2\pi/L_{z}$. (We have omitted the hats on the $z$
Fourier components of $\overline{\bm{u}}$.) The energies of the spectral
components relative to the total mean energy are
$e_{0}=\frac{\left<\overline{\bm{u}}_{0}\cdot\overline{\bm{u}}_{0}\right>}{\left<\overline{\bm{u}}\cdot\overline{\bm{u}}\right>},~{}~{}~{}e_{1}=\frac{\left<\overline{\bm{u}}_{1}\cdot\overline{\bm{u}}_{1}\right>}{\left<\overline{\bm{u}}\cdot\overline{\bm{u}}\right>},~{}~{}~{}e_{>1}=\frac{\left<\overline{\bm{u}}_{>1}\cdot\overline{\bm{u}}_{>1}\right>}{\left<\overline{\bm{u}}\cdot\overline{\bm{u}}\right>}$
(14)
These are presented in figure 9b. It can be seen that $e_{0}\gg e_{1}>e_{>1}$
and also that all have an extremum at $L_{z}=44$. In particular, $L_{z}=44$
minimises $e_{0}$ ($e_{0}=0.95$) while maximising the trigonometric component
($e_{1}=0.025$) along with the remaining components ($e_{>1}\simeq 0.011$).
Note that for a banded state at $Re=350$, $L_{z}=40$, Barkley & Tuckerman
(2007) found that $e_{0}\simeq 0.70$, $e_{1}\simeq 0.30$ and $e_{>1}\simeq
0.004$, consistent with a strengthening of the bands as $Re$ is decreased.
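The fractions in (14) can be computed from the $z$-Fourier transform of the mean profile via Parseval's relation. The following sketch is our own illustration (a 1-D profile with no $y$ dependence and illustrative mode amplitudes), not the analysis code of the study:

```python
import numpy as np

def energy_fractions(u_bar_z):
    """Fractions e_0, e_1, e_{>1} of the mean-flow energy carried by the
    z-Fourier components (equations (13)-(14)), for a 1-D profile
    u_bar(z) periodic in z."""
    n = u_bar_z.size
    power = np.abs(np.fft.rfft(u_bar_z) / n) ** 2
    # Parseval: <u^2>_z = |u_0|^2 + 2 * sum_{m>=1} |u_m|^2
    # (assumes negligible energy in the Nyquist mode).
    e0 = power[0]
    e1 = 2 * power[1]
    e_rest = 2 * power[2:].sum()
    total = e0 + e1 + e_rest
    return e0 / total, e1 / total, e_rest / total

# Illustrative profile: uniform component plus weak modes m = 1 and m = 3.
L_z = 44.0
z = np.linspace(0, L_z, 128, endpoint=False)
u_bar = (1.0 + 0.2 * np.cos(2 * np.pi * z / L_z)
             + 0.05 * np.cos(6 * np.pi * z / L_z))
e0, e1, e_rest = energy_fractions(u_bar)  # e0 >> e1 > e_rest
```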
### 5.2 Mean flow spectral balance
We now investigate the spectral contributions to the budget of the mean flow
$\overline{\bm{u}}$, dominated by the mean flow’s two main spectral components
$\overline{\bm{u}}_{0}$ and $\overline{\bm{u}}_{1}$. The balances can be
expressed as in Gomé et al. (2023, Part 1):
$\displaystyle\widehat{\overline{A}}_{0}-\widehat{\overline{\Pi}}_{0}-\widehat{\overline{D}}_{0}+I=0\text{ for }\overline{\bm{u}}_{0}~{}~{}~{}\text{and}~{}~{}~{}\widehat{\overline{A}}_{1}-\widehat{\overline{\Pi}}_{1}-\widehat{\overline{D}}_{1}=0\text{ for }\overline{\bm{u}}_{1}$ (15)
where $I$ is the rate of energy injection by the viscous shear, and
$\widehat{\overline{\Pi}}_{0}$, $\widehat{\overline{D}}_{0}$ and
$\widehat{\overline{A}}_{0}$ stand for, respectively, production, dissipation
and advection (i.e. non-linear interaction) contributions to the energy
balance of mode $\overline{\bm{u}}_{0}$ and similarly for
$\overline{\bm{u}}_{1}$. These are defined by
$\displaystyle I=\frac{2}{Re}\mathcal{R}\left\{\int_{-1}^{1}\frac{\partial}{\partial y}\left(\widehat{\overline{u}}_{j}^{*}(k_{z}=0)\,\widehat{\overline{s}}_{yj}(k_{z}=0)\right)\text{d}y\right\}=\frac{1}{Re}\left(\frac{\partial\overline{u}_{\text{strm}}}{\partial y}\bigg{\rvert}_{1}+\frac{\partial\overline{u}_{\text{strm}}}{\partial y}\bigg{\rvert}_{-1}\right)$ (16a)
$\displaystyle\widehat{\overline{\Pi}}_{0}=\mathcal{R}\left\{\int_{-1}^{1}\frac{\partial\widehat{\overline{u}}_{j}^{*}}{\partial x_{i}}(k_{z}=0)~{}\widehat{\overline{u_{i}^{\prime}u_{j}^{\prime}}}(k_{z}=0)~{}\text{d}y\right\}$ (16b)
$\displaystyle\widehat{\overline{D}}_{0}=\frac{2}{Re}\mathcal{R}\left\{\int_{-1}^{1}\widehat{\overline{s}}_{ij}(k_{z}=0)~{}\widehat{\overline{s}}_{ij}^{*}(k_{z}=0)~{}\text{d}y\right\}$ (16c)
$\displaystyle\widehat{\overline{A}}_{0}=-\mathcal{R}\left\{\int_{-1}^{1}\widehat{\overline{u}}_{j}^{*}(k_{z}=0)~{}\widehat{\overline{u}_{i}\frac{\partial\overline{u}_{j}}{\partial x_{i}}}(k_{z}=0)~{}\text{d}y\right\}$ (16d)
where $\mathcal{R}$ denotes the real part. We define
$\widehat{\overline{\Pi}}_{1}$, $\widehat{\overline{D}}_{1}$ and
$\widehat{\overline{A}}_{1}$ similarly by replacing $k_{z}=0$ by
$k_{z}=2\pi/L_{z}$ in (16a)-(16d).
We recall two main results from Gomé et al. (2023, Part 1): first,
$\widehat{\overline{A}}_{1}\simeq-\widehat{\overline{A}}_{0}$. This term
represents the energetic transfer between modes $\overline{\bm{u}}_{0}$ and
$\overline{\bm{u}}_{1}$ via the self-advection of the mean flow (the energetic
spectral influx from $(\overline{\bm{u}}\cdot\nabla)\overline{\bm{u}}$).
Second, $\widehat{\overline{\Pi}}_{1}<0$, and this term approximately balances
the negative part of TKE production. This is an energy transfer from turbulent
fluctuations to the component $\overline{\bm{u}}_{1}$ of the mean flow.
Each term contributing to the balance of $\overline{\bm{u}}_{0}$ and
$\overline{\bm{u}}_{1}$ is shown as a function of $L_{z}$ in figures 10a and
10b. We do not show $\widehat{\overline{A}}_{0}$ because
$\widehat{\overline{A}}_{0}\simeq-\widehat{\overline{A}}_{1}$.
Figure 10: Spectral energy balance of the mean flow components (a)
$\overline{\bm{u}}_{0}$ (uniform component) and (b) $\overline{\bm{u}}_{1}$
(large-scale flow along the laminar-turbulent interface). See equation (15).
Advection and dissipation of the large-scale flow,
$\widehat{\overline{A}}_{1}$ and $\widehat{\overline{D}}_{1}$, show the
strongest variations with $L_{z}$ and are optimal at the preferred wavelength
$L_{z}\simeq 44$.
We obtain the following results:
(1) Production $\widehat{\overline{\Pi}}_{0}$, dissipation $\widehat{\overline{D}}_{0}$ and energy injection $I$ are nearly independent of $L_{z}$, varying by no more than 6% over the range shown. These $k_{z}=0$ quantities correspond to fields uniform in $z$, so it is unsurprising that they depend very little on $L_{z}$.
(2) The non-linear term $\widehat{\overline{A}}_{1}\simeq-\widehat{\overline{A}}_{0}$, i.e. the transfer from $\overline{\bm{u}}_{0}$ to $\overline{\bm{u}}_{1}$, which is the principal source of energy of $\overline{\bm{u}}_{1}$, varies strongly with $L_{z}$ and has a maximum at $L_{z}\simeq 44$. This is why $\overline{\bm{u}}_{0}$ is minimised at $L_{z}\simeq 44$ (see figure 9b): more energy is transferred from $\overline{\bm{u}}_{0}$ to $\overline{\bm{u}}_{1}$.
(3) Production $\widehat{\overline{\Pi}}_{1}$ increases with $L_{z}$ and does not show an extremum at $L_{z}\simeq 44$ (it instead has a weak maximum at $L_{z}\simeq 50$). In all cases, $\widehat{\overline{\Pi}}_{1}<\widehat{\overline{A}}_{1}$: the TKE feedback on the mean flow, although present, is neither dominant nor selective.
(4) Dissipation $\widehat{\overline{D}}_{1}$ accounts for the remaining budget and its extremum at $L_{z}\simeq 44$ corresponds to maximal dissipation.
The turbulent kinetic energy balance is also modified with changing $L_{z}$.
This is presented in Appendix C. The impact of TKE is however secondary,
because of the results established in item (3).
## 6 Conclusion and discussion
We have explored the appearance of patterns from uniform turbulence in plane
Couette flow at $Re\leq 500$. We used numerical domains of different sizes to
quantify the competition between featureless (or uniform) turbulence and
(quasi-) laminar gaps. In Minimal Band Units, intermittency reduces to a
random alternation between two states: uniform or patterned. In large slender
domains, however, gaps nucleate randomly and locally in space, and the
transition to patterns takes place continuously via the regimes presented in
Section 3: the uniform regime in which gaps are rare and short-lived (above
$Re\simeq 470$), and another regime ($Re<470$) in which gaps are more numerous
and long-lived. Below $Re\simeq 430$, the large-scale spacing of these gaps
starts to dominate the energy spectrum, which is a possible demarcation of the
patterned regime. With further decrease in $Re$, gaps eventually fill the
entire flow domain, forming regular patterns. The distinction between these
regimes is observed in both gap lifetime and friction factor.
Spatially isolated gaps were already observed by Prigent et al. (2003),
Barkley & Tuckerman (2005) and Rolland & Manneville (2011). (See also
Manneville (2015, 2017) and references therein.) Our results confirm that
pattern emergence, mediated by randomly-nucleated gaps, is necessarily more
complex than the supercritical Ginzburg-Landau framework initially proposed by
Prigent et al. (2003) and later developed by Rolland & Manneville (2011).
However, this does not preclude linear processes in the appearance of
patterns, such as those reported by Kashyap et al. (2022) from an ensemble-
averaged linear response analysis.
The intermittency between uniform turbulence and gaps that we quantify here in
the range $380\lesssim Re\lesssim 500$ is not comparable to that between
laminar flow and bands present for $325\lesssim Re\lesssim 340$. The latter is
a continuous phase transition in which the laminar flow is absorbing: laminar
regions cannot spontaneously develop into turbulence and can only become
turbulent by contamination from neighbouring turbulent flow. This is connected
to the existence of a critical point at which the correlation length diverges
with a power-law scaling with $Re$, as characterised by important past studies
(Shi et al., 2013; Chantry et al., 2017; Lemoult et al., 2016) which
demonstrated a connection to directed percolation. The emergence of gaps from
uniform turbulence is of a different nature. Neither uniform turbulence nor
gaps are absorbing states, since gaps can always appear spontaneously and can
also disappear, returning the flow locally to a turbulent state. While the
lifetimes of quasi-laminar gaps do exhibit an abrupt change in behaviour at
$Re=470$ (figure 4c), we observe no evidence of critical phenomena associated
with the emergence of gaps from uniform turbulence. Hence, the change in
behaviour appears to be in fact smooth. This is also true in pipe flow where
quasi-laminar gaps form, but not patterns (Avila & Hof, 2013; Frishman &
Grafke, 2022).
We used the pattern wavelength as a control parameter, via either the domain
size or the initial condition, to investigate the existence of a preferred
pattern wavelength. We propose that the finite spacing between gaps, visible
in both local gaps and patterned regimes, is selected by the preferred size of
their associated large-scale flow. Once gaps are sufficiently numerous and
patterns are established, their average wavelength increases with decreasing
$Re$, with changes in wavelength in a similar vein to the Eckhaus picture.
The influence of the large-scale flow in wavelength selection is analysed in
Section 5, where we carried out a spectral analysis like that in Gomé et al.
(2023, Part 1) for various sizes of the Minimal Band Unit. In particular, we
investigated the roles of the turbulent fluctuations and of the mean flow,
which is in turn decomposed into its uniform component $\overline{\bm{u}}_{0}$
and its trigonometric component $\overline{\bm{u}}_{1}$, associated with the
large-scale flow along the laminar-turbulent interface. Our results
demonstrate a maximisation of the energy and dissipation of
$\overline{\bm{u}}_{1}$ by the wavelength naturally preferred by the flow, and
this is primarily associated with an optimisation of the advective term
$(\overline{\bm{u}}\cdot\nabla)\overline{\bm{u}}$ in the mean flow equation.
This term redistributes energy between modes $\overline{\bm{u}}_{0}$ and
$\overline{\bm{u}}_{1}$ and is mostly responsible for energising the large-
scale along-band flow. Turbulent fluctuations are of secondary importance in
driving the large-scale flow and do not play a significant role in the
wavelength selection. Our results of maximal transport of momentum and
dissipation of the large-scale flow are therefore analogous to the principles
mentioned by Malkus (1956) and Busse (1981). Explaining this observation from
first principles remains a prodigious task.
It is essential to understand the creation of the large-scale flow around a
randomly emerging laminar hole. The statistics obtained in our tilted
configuration should be extended to large streamwise-spanwise domains, where
short-lived and randomly-nucleated holes might align in the streamwise
direction (Manneville, 2017, Fig. 5). This presumably occurs at $Re$ above the
long-lived-gap regime, in which the two gap orientations $\pm\theta$ compete.
The selected pattern angle might also maximise the dissipation of the large-
scale flow, similarly to what we found for the preferred wavelength.
Furthermore, a more complete dynamical picture of gap creation is needed. The
excitable model of Barkley (2011a) might provide a proper framework, as it
accounts for both the emergence of anti-puffs (Frishman & Grafke, 2022) and of
periodic solutions (Barkley, 2011b). Connecting this model to the Navier-
Stokes equations is, however, a formidable challenge. Our work emphasises the
necessity of including the effects of the advective large-scale flow (Barkley
& Tuckerman, 2007; Duguet & Schlatter, 2013; Klotz et al., 2021; Marensi et
al., 2022) to adapt this model to the establishment of the turbulent-laminar
patterns of both preferred wavelength and angle observed in planar shear
flows.
## Acknowledgements
The calculations for this work were performed using high performance computing
resources provided by the Grand Equipement National de Calcul Intensif at the
Institut du Développement et des Ressources en Informatique Scientifique
(IDRIS, CNRS) through Grant No. A0102A01119. This work was supported by a
grant from the Simons Foundation (Grant No. 662985). The authors wish to thank
Anna Frishman, Tobias Grafke and Yohann Duguet for fruitful discussions, as
well as the referees for their useful suggestions.
## Declaration of Interests
The authors report no conflict of interest.
## Appendix A Laminar and turbulent distributions in pipe vs Couette flows
From figures 3c and 3d of the main text, both distributions of laminar and
turbulent lengths, $L_{\rm lam}$ and $L_{\rm turb}$, are exponential for
sufficiently large lengths, similarly to pipe flow (Avila & Hof, 2013). It is
however striking that the distributions of $L_{\rm lam}$ and $L_{\rm turb}$
have different shapes for $L_{\rm lam}$ or $L_{\rm turb}>10$ in plane Couette
flow: $L_{\rm lam}$ shows a sharper distribution, whereas $L_{\rm turb}$ is
more broadly distributed. We present in figures 11a and 11b the cumulative
distributions of $L_{\rm lam}$ and $L_{\rm turb}$ for a complementary analysis.
We focus on the characteristic length $l_{\rm turb}^{*}$ or $l_{\rm lam}^{*}$
for which $P(L_{\rm lam}>l_{\rm lam}^{*})=P(L_{\rm turb}>l_{\rm
turb}^{*})=10^{-2}$: for example, $l_{\rm lam}^{*}=15.5$ and $l_{\rm
turb}^{*}=26.5$ at $Re=440$; $l_{\rm lam}^{*}=23.4$ and $l_{\rm
turb}^{*}=30.3$ at $Re=400$. We see that $l_{\rm turb}^{*}$ and $l_{\rm
lam}^{*}$ are of the same order of magnitude. This differs from the
corresponding measurement in pipe flow, extracted from figure 2 of Avila &
Hof (2013): $l_{\rm lam}^{*}=6$ and $l_{\rm turb}^{*}\simeq 50$ at $Re=2800$;
$l_{\rm lam}^{*}\simeq 17$ and $l_{\rm turb}^{*}\simeq 160$ at $Re=2500$.
This confirms that turbulent and laminar spacings are
of the same order of magnitude in plane Couette flow, contrary to pipe flow.
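The characteristic lengths $l^{*}$ can be read directly off the empirical survival function of the measured lengths. A minimal sketch (Python/NumPy; the function name and sample handling are illustrative assumptions, not the study's code):

```python
import numpy as np

def characteristic_length(samples, p=1e-2):
    """Smallest length l* whose empirical survival probability P(L > l*) <= p."""
    s = np.sort(np.asarray(samples, dtype=float))
    n = len(s)
    surv = 1.0 - np.arange(1, n + 1) / n   # P(L > s[i]), decreasing in i
    idx = np.searchsorted(-surv, -p)       # first index with surv <= p
    return s[min(idx, n - 1)]
```

For exponentially distributed lengths with mean $\ell$, this gives $l^{*}\simeq\ell\ln(1/p)$, consistent with the exponential tails noted above.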
Figure 11: Cumulative distribution of (a) laminar gaps and (b) turbulent
zones, for various $Re$.
## Appendix B Wavelet transform
We introduce the one-dimensional continuous wavelet transform of the velocity
$\bm{u}(z,t)$ taken along the line $(x,y)=(L_{x}/2,0)$:
$\tilde{\bm{u}}(z,r,t)=C_{\psi}^{-1/2}r^{-1/2}\int_{0}^{L_{z}}\psi^{*}\left(\frac{z^{\prime}-z}{r}\right)\bm{u}(z^{\prime},t)\,\text{d}z^{\prime}$ (17)
Here $\psi$ is the Morlet basis function, defined in Fourier space as
$\hat{\psi}(k)=\pi^{-1/4}e^{-(k-k_{\psi})^{2}/2}$ for $k>0$. Its central
wavenumber is $k_{\psi}=6/\Delta z$, where $\Delta z$ is the grid spacing. The
scale factor $r$ is related to wavelength via $\lambda\simeq 2\pi r/k_{\psi}$.
$C_{\psi}\equiv\int|k|^{-1}|\hat{\psi}(k)|^{2}\text{d}k$ is a normalization
constant. Tildes are used to designate wavelet transformed quantities. The
inverse transform is:
$\bm{u}(z,t)=C_{\psi}^{-1/2}\int_{0}^{\infty}\int_{-\infty}^{\infty}r^{-1/2}\psi\left(\frac{z-z^{\prime}}{r}\right)\tilde{\bm{u}}(z^{\prime},r,t)\,\frac{\text{d}z^{\prime}\,\text{d}r}{r^{2}}$ (18)
The wavelet transform is related to the Fourier transform in $z$ by:
$\tilde{\bm{u}}(z,r,t)=\frac{1}{2\pi}C_{\psi}^{-1/2}r^{1/2}\int_{-\infty}^{\infty}\widehat{\psi}(r\,k_{z})\,\widehat{\bm{u}}(k_{z},t)\,e^{ik_{z}z}\,\text{d}k_{z}$ (19)
We then define the most energetic instantaneous wavelength as:
$\widetilde{\lambda}_{\max}(z,t)=\frac{2\pi}{k_{\psi}}\,\underset{r}{\text{argmax}}\,|\tilde{\bm{u}}(z,r,t)|^{2}$ (20)
The characteristic evolution of $\widetilde{\lambda}_{\max}(z,t)$ is
illustrated in figure 12b for the flow case corresponding to figure 12a.
Regions in which $\widetilde{\lambda}_{\max}$ is large $(>10)$ and dominated
by a single value correspond to the local patterns observed in figure 12a. In
contrast, in regions where $\widetilde{\lambda}_{\max}$ is small $(<10)$ and
fluctuating, the turbulence is locally uniform.
This space-time intermittency of the patterns is quantified by measuring
$f_{L/S}=\left\langle\Theta(\widetilde{\lambda}_{\max}(z,t)-10)\right\rangle_{z,t}$ (21)
where $\Theta$ is the Heaviside step function; $f_{L/S}$ is shown in figure 13a as a function of $Re$.
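Equations (19)-(21) can be prototyped compactly by evaluating the wavelet transform in Fourier space. In the sketch below (Python/NumPy; the signal, the scale grid, and the dimensionless choice $k_{\psi}=6$ are our illustrative assumptions), the constant $C_{\psi}$ is dropped since it does not affect the argmax in (20):

```python
import numpy as np

def morlet_hat(k, k_psi=6.0):
    """Morlet basis in Fourier space: pi^{-1/4} exp(-(k - k_psi)^2/2) for k > 0."""
    return np.where(k > 0, np.pi**-0.25 * np.exp(-0.5 * (k - k_psi)**2), 0.0)

def dominant_wavelength(u, Lz, scales, k_psi=6.0):
    """Most energetic wavelength lambda_max(z) of eq. (20), via relation (19)."""
    n = len(u)
    kz = 2.0 * np.pi * np.fft.fftfreq(n, d=Lz / n)
    u_hat = np.fft.fft(u)
    power = np.empty((len(scales), n))
    for i, r in enumerate(scales):
        # wavelet coefficients at scale r, up to the constant C_psi^{-1/2}
        w = np.sqrt(r) * np.fft.ifft(morlet_hat(r * kz, k_psi) * u_hat)
        power[i] = np.abs(w) ** 2
    r_best = np.asarray(scales)[np.argmax(power, axis=0)]
    return 2.0 * np.pi * r_best / k_psi
```

The large-wavelength fraction (21) then follows as `np.mean(dominant_wavelength(u, Lz, scales) > 10)`, averaged over snapshots.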
Figure 12: Space-time visualisation of a quench experiment at $Re=430$: (a)
spanwise velocity (blue: $-0.2$, white: 0, red: 0.2), (b)
$\widetilde{\lambda}_{\max}(z,t)$ defined by (20).
Panel (b) quantifies the presence of local large-scale modulations within
the flow. Dark blue zones where $\widetilde{\lambda}_{\max}(z,t)<10$
correspond to locally featureless turbulence in (a). Large-scale modulations
of gaps at different wavelengths are visible as the green-to-red spots in (b).
Figure 13: (a) Space-time fraction of large to small wavelengths obtained by
wavelet transform. $f_{L/S}$ crosses 0.5 at $Re\simeq 427\simeq Re_{\rm pg}$.
(b) Sensitivity of the stability analysis of Section 4.2 with respect to the
threshold $H_{\rm stab}$, at $Re=430$.
## Appendix C Turbulent kinetic energy balance for various $L_{z}$
In this appendix, we address the balance of turbulent kinetic energy
$\widehat{K}(k_{z})$, written here in $y$-integrated form at a specific mode
$k_{z}$ (see equation (5.3) of Gomé et al. (2023, Part 1) and the methodology
in, e.g., Bolotnov et al. (2010); Lee & Moser (2015); Mizuno (2016); Cho et
al. (2018)):
$0=\widehat{\Pi}-\widehat{D}+\widehat{A}+\widehat{T}_{nl}$ (22)
where the variables in (22) indicate $y$-integrated quantities:
$\widehat{\Pi}(k_{z})\equiv-\mathcal{R}\left\{\int_{-1}^{1}\overline{\widehat{u_{j}^{\prime}}^{*}\,\widehat{u_{i}^{\prime}\frac{\partial\overline{u}_{j}}{\partial x_{i}}}}\,\text{d}y\right\},\qquad\widehat{D}(k_{z})\equiv\frac{2}{Re}\int_{-1}^{1}\overline{\widehat{s_{ij}^{\prime}}\,\widehat{s_{ij}^{\prime}}^{*}}\,\text{d}y,$
$\widehat{T}_{nl}(k_{z})\equiv-\mathcal{R}\left\{\int_{-1}^{1}\overline{\widehat{u_{j}^{\prime}}^{*}\,\widehat{u_{i}^{\prime}\frac{\partial u_{j}^{\prime}}{\partial x_{i}}}}\,\text{d}y\right\},\qquad\widehat{A}(k_{z})\equiv-\mathcal{R}\left\{\int_{-1}^{1}\overline{\widehat{u_{j}^{\prime}}^{*}\,\widehat{\overline{u}_{i}\frac{\partial u_{j}^{\prime}}{\partial x_{i}}}}\,\text{d}y\right\}$ (23)
respectively standing for production, dissipation, triadic interaction and
advection terms. We recall that $\overline{(\cdot)}$ is an average in $(x,t)$.
The $y$ evolution of the energy balance was analysed in Gomé et al. (2023,
Part 1).
Gomé et al. (2023, Part 1) reported robust negative production at large
scales, along with inverse non-linear transfers to large scales. If
$k_{\text{rolls}}=1.41$ denotes the scale of rolls and streaks, this inverse
transfer occurs for $k_{z}<k_{\text{LS}}=0.94$, while a downward transfer
occurs for $k_{z}>k_{\text{SS}}=3.6$ (We refer the reader to figure 5 of Gomé
et al. (2023, Part 1)). This spectral organization of the energy balance will
be quantified by the following transfer terms arising from (23):
$\widehat{T}_{LS}\equiv\sum_{k_{z}=0}^{k_{\text{LS}}}\widehat{T}_{nl}(k_{z}),\qquad\widehat{T}_{SS}\equiv\sum_{k_{z}=k_{\text{SS}}}^{\infty}\widehat{T}_{nl}(k_{z}),\qquad\widehat{D}_{LS}\equiv\sum_{k_{z}=0}^{k_{\text{LS}}}\widehat{D}(k_{z}),\qquad\widehat{A}_{LS}\equiv\sum_{k_{z}=0}^{k_{\text{LS}}}\widehat{A}(k_{z})$ (24)
$\widehat{T}_{LS}$ quantifies transfer to large scales, $\widehat{T}_{SS}$ the
transfer to small scales, $\widehat{D}_{LS}$ the dissipation at large scales,
and $\widehat{A}_{LS}$ is a transfer of energy from the mean flow to the large
fluctuating scales. Large-scale production is not shown here, as we presented
in figure 10b a similar measurement of large-scale turbulent transfer to the
mean flow, via $\widehat{\overline{\Pi}}_{1}$.
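Given the budget terms of (23) tabulated on a discrete set of wavenumbers, the band sums of (24) reduce to masked sums. A minimal sketch (Python/NumPy; the array names are ours):

```python
import numpy as np

def band_sums(kz, T_nl, D, A, k_LS=0.94, k_SS=3.6):
    """Band-integrated TKE budget terms of eq. (24): (T_LS, T_SS, D_LS, A_LS)."""
    kz = np.asarray(kz)
    large = kz <= k_LS                     # large scales, 0 <= k_z <= k_LS
    small = kz >= k_SS                     # small scales, k_z >= k_SS
    return (np.sum(np.asarray(T_nl)[large]),
            np.sum(np.asarray(T_nl)[small]),
            np.sum(np.asarray(D)[large]),
            np.sum(np.asarray(A)[large]))
```

The cut-offs default to the values $k_{\text{LS}}=0.94$ and $k_{\text{SS}}=3.6$ quoted above.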
Figure 14: Evolution of the large-scale TKE balance with $L_{z}$ (24).
The variables defined in (24) are displayed in figure 14 as a function of
$L_{z}$. $\widehat{T}_{LS}$ is minimal at $L_{z}\simeq 44$. $\widehat{D}_{LS}$
is minimal at $L_{z}\simeq 40$. Contrary to $\widehat{T}_{LS}$,
$\widehat{T}_{SS}$ is relatively constant with $L_{z}$ (green dashed line in
figure 14), with a variation of around $6\%$. This demonstrates that
transfers to small scales are essentially unchanged with $L_{z}$. Large-scale
TKE advection decays with increasing $L_{z}$ and hence does not play a role
in the preference for a wavelength. Our results show that the large-scale
balance is minimised around $L_{z}\simeq 44$, confirming the lesser role
played by turbulent fluctuations in the wavelength selection, compared to
that of the mean-flow advection reported in the main text.
## References
* Ahlers et al. (1986) Ahlers, Guenter, Cannell, David S, Dominguez-Lerma, Marco A & Heinrichs, Richard 1986 Wavenumber selection and Eckhaus instability in Couette-Taylor flow. Physica D 23 (1-3), 202–219.
* Andereck et al. (1986) Andereck, C David, Liu, SS & Swinney, Harry L 1986 Flow regimes in a circular Couette system with independently rotating cylinders. J. Fluid Mech. 164, 155–183.
* Avila & Hof (2013) Avila, Marc & Hof, Björn 2013 Nature of laminar-turbulence intermittency in shear flows. Phys. Rev. E 87 (6), 063012.
* Barkley (2006) Barkley, Dwight 2006 Linear analysis of the cylinder wake mean flow. Europhys. Lett. 75 (5), 750.
* Barkley (2011a) Barkley, Dwight 2011a Simplifying the complexity of pipe flow. Phys. Rev. E 84 (1), 016309.
* Barkley (2011b) Barkley, Dwight 2011b Modeling the transition to turbulence in shear flows. J. Phys. Conf. Ser. 318 (3), 032001.
* Barkley (2016) Barkley, Dwight 2016 Theoretical perspective on the route to turbulence in a pipe. J. Fluid Mech. 803, P1.
* Barkley & Tuckerman (2005) Barkley, Dwight & Tuckerman, Laurette S 2005 Computational study of turbulent-laminar patterns in Couette flow. Phys. Rev. Lett. 94 (1), 014502.
* Barkley & Tuckerman (2007) Barkley, Dwight & Tuckerman, Laurette S 2007 Mean flow of turbulent–laminar patterns in plane Couette flow. J. Fluid Mech. 576, 109–137.
* Bengana & Tuckerman (2021) Bengana, Yacine & Tuckerman, Laurette S 2021 Frequency prediction from exact or self-consistent mean flows. Physical Review Fluids 6 (6), 063901.
* Bolotnov et al. (2010) Bolotnov, Igor A, Lahey Jr, Richard T, Drew, Donald A, Jansen, Kenneth E & Oberai, Assad A 2010 Spectral analysis of turbulence based on the DNS of a channel flow. Computers & Fluids 39 (4), 640–655.
* Busse (1981) Busse, Friedrich H 1981 Transition to turbulence in Rayleigh-Bénard convection. In Hydrodynamic instabilities and the transition to turbulence, pp. 97–137. Springer.
* Chantry et al. (2017) Chantry, Matthew, Tuckerman, Laurette S & Barkley, Dwight 2017 Universal continuous transition to turbulence in a planar shear flow. J. Fluid Mech. 824, R1.
* Cho et al. (2018) Cho, Minjeong, Hwang, Yongyun & Choi, Haecheon 2018 Scale interactions and spectral energy transfer in turbulent channel flow. J. Fluid Mech. 854, 474–504.
* Coles & van Atta (1966) Coles, Donald & van Atta, Charles 1966 Progress report on a digital experiment in spiral turbulence. AIAA Journal 4 (11), 1969–1971.
* Cross & Greenside (2009) Cross, Michael & Greenside, Henry 2009 Pattern Formation and Dynamics in Nonequilibrium Systems. Cambridge University Press.
* Dong (2009) Dong, S 2009 Evidence for internal structures of spiral turbulence. Phys. Rev. E 80 (6), 067301.
* Duguet & Schlatter (2013) Duguet, Yohann & Schlatter, Philipp 2013 Oblique laminar-turbulent interfaces in plane shear flows. Phys. Rev Lett. 110 (3), 034502.
* Duguet et al. (2010) Duguet, Yohann, Schlatter, Philipp & Henningson, Dan S 2010 Formation of turbulent patterns near the onset of transition in plane Couette flow. J. Fluid Mech. 650, 119–129.
* Farge (1992) Farge, Marie 1992 Wavelet transforms and their applications to turbulence. Annu. Rev. Fluid Mech. 24 (1), 395–458.
* Frishman & Grafke (2022) Frishman, Anna & Grafke, Tobias 2022 Dynamical landscape of transitional pipe flow. Phys. Rev. E 105, 045108.
* Gibson et al. (2019) Gibson, J.F., Reetz, F., Azimi, S., Ferraro, A., Kreilos, T., Schrobsdorff, H., Farano, M., A.F. Yesil, S. S. Schütz, Culpo, M. & Schneider, T.M. 2019 Channelflow 2.0. Manuscript in preparation, see channelflow.ch.
* Gomé et al. (2023) Gomé, Sébastien, Tuckerman, Laurette S & Barkley, Dwight 2023 Patterns in transitional turbulence. Part 1. Energy transfer and mean-flow interaction. J. Fluid Mech. 964, A16.
* Hof et al. (2010) Hof, Björn, De Lozar, Alberto, Avila, Marc, Tu, Xiaoyun & Schneider, Tobias M 2010 Eliminating turbulence in spatially intermittent flows. Science 327 (5972), 1491–1494.
* Kashyap (2021) Kashyap, Pavan 2021 Subcritical transition to turbulence in wall-bounded shear flows: spots, pattern formation and low-order modelling. PhD thesis, Université Paris-Saclay.
* Kashyap et al. (2020) Kashyap, Pavan V, Duguet, Yohann & Dauchot, Olivier 2020 Flow statistics in the transitional regime of plane channel flow. Entropy 22 (9), 1001.
* Kashyap et al. (2022) Kashyap, Pavan V, Duguet, Yohann & Dauchot, Olivier 2022 Linear instability of turbulent channel flow. arXiv:2205.05652 .
* Klotz et al. (2022) Klotz, Lukasz, Lemoult, Grégoire, Avila, Kerstin & Hof, Björn 2022 Phase transition to turbulence in spatially extended shear flows. Phys. Rev. Lett. 128 (1), 014502.
* Klotz et al. (2021) Klotz, Lukasz, Pavlenko, AM & Wesfreid, JE 2021 Experimental measurements in plane Couette–Poiseuille flow: dynamics of the large- and small-scale flow. J. Fluid Mech. 912.
* Kramer & Zimmermann (1985) Kramer, Lorenz & Zimmermann, Walter 1985 On the Eckhaus instability for spatially periodic patterns. Physica D 16 (2), 221–232.
* Lee & Moser (2015) Lee, Myoungkyu & Moser, Robert D 2015 Direct numerical simulation of turbulent channel flow up to $Re_{\tau}=590$. J. Fluid Mech. 774, 395–415.
* Lemoult et al. (2016) Lemoult, Grégoire, Shi, Liang, Avila, Kerstin, Jalikop, Shreyas V, Avila, Marc & Hof, Björn 2016 Directed percolation phase transition to sustained turbulence in Couette flow. Nature Physics 12 (3), 254.
* Malkus (1956) Malkus, WVR 1956 Outline of a theory of turbulent shear flow. J. Fluid Mech. 1 (5), 521–539.
* Malkus (1954) Malkus, Willem VR 1954 The heat transport and spectrum of thermal turbulence. Proc. Roy. Soc. A 225 (1161), 196–212.
* Manneville (2012) Manneville, Paul 2012 Turbulent patterns in wall-bounded flows: A Turing instability? Europhys. Lett. 98 (6), 64001.
* Manneville (2015) Manneville, Paul 2015 On the transition to turbulence of wall-bounded flows in general, and plane Couette flow in particular. Eur. J Mech. B Fluids 49, 345–362.
* Manneville (2017) Manneville, Paul 2017 Laminar-turbulent patterning in transitional flows. Entropy 19 (7), 316.
* Marensi et al. (2022) Marensi, Elena, Yalnız, Gökhan & Hof, Björn 2022 Dynamics and proliferation of turbulent stripes in channel and Couette flow. arXiv:2212.12406.
* Markeviciute & Kerswell (2022) Markeviciute, Vilda K & Kerswell, Rich R 2022 Improved assessment of the statistical stability of turbulent flows using extended Orr-Sommerfeld stability analysis. arXiv:2201.01540 .
* Meneveau (1991) Meneveau, Charles 1991 Analysis of turbulence in the orthonormal wavelet representation. J. Fluid Mech. 232, 469–520.
* Meseguer et al. (2009) Meseguer, Alvaro, Mellibovsky, Fernando, Avila, Marc & Marques, Francisco 2009 Instability mechanisms and transition scenarios of spiral turbulence in Taylor-Couette flow. Phys. Rev. E 80 (4), 046315.
* Mihelich et al. (2017) Mihelich, Martin, Faranda, Davide, Paillard, Didier & Dubrulle, Bérengère 2017 Is turbulence a state of maximum energy dissipation? Entropy 19 (4), 154.
* Mizuno (2016) Mizuno, Yoshinori 2016 Spectra of energy transport in turbulent channel flows for moderate Reynolds numbers. J. Fluid Mech. 805, 171–187.
* Moxey & Barkley (2010) Moxey, David & Barkley, Dwight 2010 Distinct large-scale turbulent-laminar states in transitional pipe flow. Proc. Nat. Acad. Sci. 107 (18), 8091–8096.
* Nagata (1990) Nagata, Masato 1990 Three-dimensional finite-amplitude solutions in plane Couette flow: bifurcation from infinity. J. Fluid Mech. 217, 519–527.
* Ozawa et al. (2001) Ozawa, Hisashi, Shimokawa, Shinya & Sakuma, Hirofumi 2001 Thermodynamics of fluid turbulence: A unified approach to the maximum transport properties. Phys. Rev. E 64 (2), 026303.
* Paranjape et al. (2020) Paranjape, Chaitanya S, Duguet, Yohann & Hof, Björn 2020 Oblique stripe solutions of channel flow. J. Fluid Mech. 897, A7.
* Parker & Krommes (2013) Parker, Jeffrey B & Krommes, John A 2013 Zonal flow as pattern formation. Phys. Plasmas 20 (10), 100703.
* Prigent et al. (2003) Prigent, Arnaud, Grégoire, Guillaume, Chaté, Hugues & Dauchot, Olivier 2003 Long-wavelength modulation of turbulent shear flows. Physica D 174 (1-4), 100–113.
* Prigent et al. (2002) Prigent, Arnaud, Grégoire, Guillaume, Chaté, Hugues, Dauchot, Olivier & van Saarloos, Wim 2002 Large-scale finite-wavelength modulation within turbulent shear flows. Phys. Rev. Lett. 89 (1), 014501.
* Reetz et al. (2019) Reetz, Florian, Kreilos, Tobias & Schneider, Tobias M 2019 Exact invariant solution reveals the origin of self-organized oblique turbulent-laminar stripes. Nature communications 10 (1), 2277.
* Reynolds (1883) Reynolds, Osborne 1883 An experimental investigation of the circumstances which determine whether the motion of water shall be direct or sinuous, and of the law of resistance in parallel channels. Phil. Trans. R. Soc. Lond. 174, 935–982.
* Reynolds & Tiederman (1967) Reynolds, WC & Tiederman, WG 1967 Stability of turbulent channel flow, with application to Malkus’s theory. J. Fluid Mech. 27 (2), 253–272.
* Riecke & Paap (1986) Riecke, Hermann & Paap, Hans-Georg 1986 Stability and wave-vector restriction of axisymmetric Taylor vortex flow. Phys. Rev. A 33 (1), 547.
* Rolland & Manneville (2011) Rolland, Joran & Manneville, Paul 2011 Ginzburg–Landau description of laminar-turbulent oblique band formation in transitional plane Couette flow. Eur. Phys. J. B 80 (4), 529–544.
* Samanta et al. (2011) Samanta, Devranjan, De Lozar, Alberto & Hof, Björn 2011 Experimental investigation of laminar turbulent intermittency in pipe flow. J. Fluid Mech. 681, 193–204.
* Shi et al. (2013) Shi, Liang, Avila, Marc & Hof, Björn 2013 Scale invariance at the onset of turbulence in Couette flow. Phys. Rev. Lett. 110 (20), 204502.
* Shimizu & Manneville (2019) Shimizu, Masaki & Manneville, Paul 2019 Bifurcations to turbulence in transitional channel flow. Phys. Rev. Fluids 4, 113903.
* Srinivasan & Young (2012) Srinivasan, Kaushik & Young, W.R. 2012 Zonostrophic instability. J. Atmos. Sci. 69 (5), 1633–1656.
* Tobias & Marston (2013) Tobias, SM & Marston, JB 2013 Direct statistical simulation of out-of-equilibrium jets. Phys. Rev. Lett. 110 (10), 104502.
* Tsukahara et al. (2005) Tsukahara, Takahiro, Seki, Yohji, Kawamura, Hiroshi & Tochio, Daisuke 2005 DNS of turbulent channel flow at very low Reynolds numbers. In Fourth International Symposium on Turbulence and Shear Flow Phenomena. Begel House Inc. arXiv:1406.0248.
* Tuckerman & Barkley (1990) Tuckerman, Laurette S & Barkley, Dwight 1990 Bifurcation analysis of the Eckhaus instability. Physica D 46 (1), 57–86.
* Tuckerman & Barkley (2011) Tuckerman, Laurette S & Barkley, Dwight 2011 Patterns and dynamics in transitional plane Couette flow. Phys. Fluids 23 (4), 041301.
* Tuckerman et al. (2010) Tuckerman, Laurette S, Barkley, Dwight & Dauchot, Olivier 2010 Instability of uniform turbulent plane Couette flow: Spectra, probability distribution functions and K-$\Omega$ closure model. In Seventh IUTAM Symposium on Laminar-Turbulent Transition, pp. 59–66. Springer.
* Tuckerman et al. (2020) Tuckerman, Laurette S, Chantry, Matthew & Barkley, Dwight 2020 Patterns in wall-bounded shear flows. Annu. Rev. Fluid Mech. 52, 343.
* Tuckerman et al. (2014) Tuckerman, Laurette S, Kreilos, Tobias, Schrobsdorff, Hecke, Schneider, Tobias M & Gibson, John F 2014 Turbulent-laminar patterns in plane Poiseuille flow. Phys. Fluids 26 (11), 114103.
* Waleffe (1997) Waleffe, Fabian 1997 On a self-sustaining process in shear flows. Phys. Fluids 9 (4), 883–900.
* Wang et al. (2022) Wang, B., Ayats, R., Deguchi, K., Mellibovsky, F. & Meseguer, A. 2022 Self-sustainment of coherent structures in counter-rotating Taylor–Couette flow. J. Fluid Mech. 951, A21.
* Wygnanski & Champagne (1973) Wygnanski, Israel J & Champagne, FH 1973 On transition in a pipe. Part 1. The origin of puffs and slugs and the flow in a turbulent slug. J. Fluid Mech. 59 (2), 281–335.
# Coordinate-space calculation of the window observable for the hadronic
vacuum polarization contribution to $(g-2)_{\mu}$
En-Hung Chao PRISMA+ Cluster of Excellence & Institut für Kernphysik,
Johannes Gutenberg-Universität Mainz, D-55099 Mainz, Germany Physics
Department, Columbia University, New York, New York 10027, USA Harvey B.
Meyer PRISMA+ Cluster of Excellence & Institut für Kernphysik, Johannes
Gutenberg-Universität Mainz, D-55099 Mainz, Germany Helmholtz Institut Mainz,
Staudingerweg 18, D-55128 Mainz, Germany GSI Helmholtzzentrum für
Schwerionenforschung, Darmstadt, Germany Julian Parrino PRISMA+ Cluster of
Excellence & Institut für Kernphysik, Johannes Gutenberg-Universität Mainz,
D-55099 Mainz, Germany
###### Abstract
The ‘intermediate window quantity’ of the hadronic vacuum polarization
contribution to the anomalous magnetic moment of the muon allows for a high-
precision comparison between the data-driven approach and lattice QCD. The
existing lattice results, which presently show good consistency among each
other, are in strong tension with the data-driven determination. In order to
check for a potentially common source of systematic error of the lattice
calculations, which are all based on the time-momentum representation (TMR),
we perform a calculation using a Lorentz-covariant coordinate-space (CCS)
representation. We present results for the isovector and the connected
strange-quark contributions to the intermediate window quantity at a reference
point in the $(m_{\pi},m_{K})$ plane, in the continuum and infinite-volume
limit, based on four different lattice spacings. Our results are in good
agreement with those of the recent TMR-based Mainz-CLS publication.
††preprint: MITP-22-084
## I Introduction
As a precision observable, the anomalous magnetic moment of the muon,
$a_{\mu}$, has attracted a great deal of attention in recent years. With the
release of the first results by Fermilab’s E989 experiment in 2021, the
experimental world-average Bennett _et al._ (2006); Abi _et al._ (2021) of
this quantity has reached a precision of 0.35 ppm. Tremendous efforts
have also been invested on the theory side to reach the same level of
precision. Achieving this target makes it indispensable to bring the
hadronic contributions – which entirely dominate the error budget of
the theory estimate – under good control. The various hadronic contributions
are classified according to the order in the electromagnetic coupling constant
$\alpha_{\mathrm{QED}}$ at which they contribute to $a_{\mu}$. The leading,
O$(\alpha_{\mathrm{QED}}^{2})$ term is the hadronic vacuum polarization (HVP)
contribution to $a_{\mu}$. Together with the O$(\alpha_{\mathrm{QED}}^{3})$
hadronic light-by-light contribution (HLbL), it has been the key quantity to
improve over the last decade in order to match the precision that the direct
experimental measurement will achieve in the near future. The efforts from
the theory community to resolve the hadronic contributions to $a_{\mu}$ can be
sorted into two categories of methodology: the data-driven Davier _et al._
(2017); Keshavarzi _et al._ (2018); Colangelo _et al._ (2019); Hoferichter
_et al._ (2019); Davier _et al._ (2020); Keshavarzi _et al._ (2020); Kurz
_et al._ (2014); Melnikov and Vainshtein (2004); Masjuan and Sánchez-Puertas
(2017); Colangelo _et al._ (2017); Hoferichter _et al._ (2018); Bijnens _et
al._ (2019); Colangelo _et al._ (2020); Pauk and Vanderhaeghen (2014);
Danilkin and Vanderhaeghen (2017); Jegerlehner (2017); Knecht _et al._
(2018); Eichmann _et al._ (2020); Roig and Sánchez-Puertas (2020); Colangelo
_et al._ (2014) and lattice Chakraborty _et al._ (2018); Borsanyi _et al._
(2018); Blum _et al._ (2018a); Giusti _et al._ (2019); Shintani and
Kuramashi (2019); Davies _et al._ (2020); Gérardin _et al._ (2019); Aubin
_et al._ (2020); Giusti and Simula (2019); Blum _et al._ (2020); Gérardin
_et al._ (2019); Borsanyi _et al._ (2021); Chao _et al._ (2020, 2021, 2022).
The 2020 $(g-2)_{\mu}$ theory White Paper (WP) Aoyama _et al._ (2020)
provided the Standard Model prediction at a precision level comparable to that
of the experiment; that prediction currently stands in 4.2$\sigma$ tension
with the experimental world-average for $a_{\mu}$. To confirm the discrepancy,
further improvement in the uncertainties is needed. In particular, the HVP
contribution $a_{\mu}^{\mathrm{hvp}}$ has to be known to the few-per-mille
level.
The WP value for the HVP is solely based on the data-driven method, due to the
lattice determinations having larger uncertainties at the time of the
publication. After the publication of the WP, the Budapest-Marseille-Wuppertal
(BMW) collaboration published their lattice QCD estimate for
$a_{\mu}^{\mathrm{hvp}}$ at almost the same precision level Borsanyi _et al._
(2021). Their calculation, however, yields a larger value for $a_{\mu}$, in
better agreement with the direct experimental measurement. Although their
result should still be verified by other lattice collaborations, preferably
using different discretization schemes to pin down potential systematic
errors, understanding the tension within SM predictions resulting from
different classes of methods has become a matter of high priority. The HVP
contribution in particular needs to be scrutinized sharply, as it currently
dominates the hadronic uncertainties.
The window quantities for $a_{\mu}^{\mathrm{hvp}}$, originally introduced in
Ref. Blum _et al._ (2018b), provide a good way to break down
$a_{\mu}^{\mathrm{hvp}}$ into subcontributions associated with different
Euclidean time intervals. In particular, the intermediate window suggested
therein is a more tractable observable for lattice practitioners, as it avoids
the short-distance region, where discretization effects can become hard to
control, and the large-distance region, where the statistical Monte-Carlo
noise and finite-size effects become the limiting factor. As the calculation
of this observable is amenable to the data-driven methods Colangelo _et al._
(2022), the theory community has invested significant effort into refining the
estimates on this quantity Blum _et al._ (2018b); Borsanyi _et al._ (2021);
Wang _et al._ (2022); Aubin _et al._ (2022); Cè _et al._ (2022); Alexandrou
_et al._ (2022); Davies _et al._ (2022). The original formulation of the
window quantity in fact relies on the Time-Momentum Representation (TMR) of
$a_{\mu}^{\mathrm{hvp}}$, which involves a Euclidean-time correlation function
calculated at vanishing spatial momentum Bernecker and Meyer (2011). The aim
of the present paper is to offer a verification of the method based on an
alternative formulation which utilizes the position-space Euclidean-time two-
point correlator without any momentum projection. This alternative makes use
of the previously introduced Covariant Coordinate-Space (CCS) kernel Meyer
(2017), which is motivated by the rapid fall-off of the Euclidean correlation
function with the spacetime separation. An important feature of this
alternative formulation is that the lattice points are treated in an
O$(4)$-covariant way, so that discretization effects different from those of
the TMR are expected. Therefore, the CCS representation can provide a valuable
check for the continuum extrapolated value obtained from the TMR. In this
work, we focus on lattice ensembles at an almost fixed pion mass of around 350
MeV at four different lattice spacings and apply finite-size corrections
ensemble by ensemble based on a field-theoretic model which is able to
describe to a good degree the experimental data of the pion electromagnetic
form factor. On the one hand, this calculation provides a proof-of-principle
that the CCS method is not only viable, but also quite competitive with the
TMR method. On the other hand, at $m_{\pi}=350\,$MeV we are able to directly
compare our result to the recent calculation by the Mainz-CLS collaboration Cè
_et al._ (2022), thereby testing whether lattice-QCD based results are
independent of the chosen representation. Ultimately, this represents a test
of the restoration of Lorentz invariance, which is broken both at short
distances by the lattice and in the infrared by the finite volume.
This paper is organized as follows. In Section II, we present the CCS
formalism for the calculation of the window quantities. Our numerical setup
and computational strategies are reported in Section III. Section IV is
dedicated to the correction of the finite-size effects used for this work. The
continuum extrapolation of the results is discussed and compared to the
previous Mainz results Cè _et al._ (2022) evaluated at the same pion mass in
Section V. Finally, concluding remarks are made in Section VI.
## II Formalism
Under the time-momentum representation (TMR) Bernecker and Meyer (2011), the
hadronic vacuum polarization contribution to $a_{\mu}$ can be written as an
integral over the two-point function
$G_{\mu\nu}(x)=\langle j_{\mu}(x)j_{\nu}(0)\rangle$ (1)
of the quark electromagnetic current
$j_{\mu}=\sum_{f}{\cal
Q}_{f}\;\bar{\psi}_{f}\gamma_{\mu}\psi_{f}\qquad\qquad({\cal
Q}_{u}={\textstyle\frac{2}{3}},\;{\cal Q}_{d}={\cal
Q}_{s}={\textstyle-\frac{1}{3}}),$ (2)
in Euclidean time weighted with a QED kernel Della Morte _et al._ (2017).
Explicitly, the TMR representation of $a_{\mu}^{\rm hvp}$ reads
$a_{\mu}^{\rm
hvp}=\Big{(}\frac{\alpha}{\pi}\Big{)}^{2}\int_{0}^{\infty}dt\,f(t,m_{\mu})\,{\cal
G}(t)\,,$ (3)
where ${\cal G}(t)$ is the two-point correlator projected to vanishing spatial
momentum,
${\cal G}(t)\,\delta_{ij}=-\int d^{3}x\;G_{ij}(t,\boldsymbol{x})\,,$ (4)
and $f(t,m_{\mu})$ is the QED kernel
$f(t,m_{\mu})=\frac{2\pi^{2}}{m_{\mu}^{2}}\Big{(}-2+8\gamma_{E}+\frac{4}{\hat{t}^{2}}+\hat{t}^{2}-\frac{8K_{1}(2\hat{t})}{\hat{t}}+8\ln(\hat{t})+G^{2,1}_{1,3}\left(\hat{t}^{2}\,\Big{|}\begin{matrix}\frac{3}{2}\\ 0,1,\frac{1}{2}\end{matrix}\right)\Big{)}\,,$ (5)
where $\hat{t}\equiv tm_{\mu}$.
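The kernel of Eq. (5) can be evaluated numerically with an arbitrary-precision library. The following is a minimal sketch, assuming the mpmath library is available and that its `meijerg` parameter grouping matches the $G^{2,1}_{1,3}$ notation above; it is an illustration, not the analysis code of this paper:

```python
# Sketch: numerical evaluation of the TMR QED kernel of Eq. (5),
# in units of 1/m_mu^2, as a function of that = t * m_mu.
from mpmath import mp, besselk, euler, log, meijerg, pi

mp.dps = 30  # working precision in decimal digits

def f_tmr(that):
    """TMR kernel f(t, m_mu) * m_mu^2 evaluated at that = t * m_mu."""
    that = mp.mpf(that)
    # Meijer G^{2,1}_{1,3}(that^2 | 3/2 ; 0, 1, 1/2), mpmath conventions assumed
    g = meijerg([[mp.mpf(3) / 2], []], [[0, 1], [mp.mpf(1) / 2]], that**2)
    return 2 * pi**2 * (-2 + 8 * euler + 4 / that**2 + that**2
                        - 8 * besselk(1, 2 * that) / that + 8 * log(that) + g)

# The kernel is positive and monotonically increasing in t,
# vanishing rapidly as that -> 0 and growing like that^2 at large that.
vals = [f_tmr(x) for x in (0.5, 1.2, 2.5)]
```

Sampling a few points, as above, gives a quick sanity check of the transcription: the values should be positive and strictly increasing.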
Although Eq. (3) provides a way to compute $a_{\mu}^{\mathrm{hvp}}$ on the
lattice, the necessity to precisely control the discretization effects
stemming from small Euclidean times and the loss of statistical quality at
long Euclidean times make it challenging for lattice calculations to achieve
the same precision as methods using the $R$-ratio Brodsky and De Rafael
(1968); Lautrup and De Rafael (1968). Alternative observables were first
proposed in Ref. Blum _et al._ (2018b), which consist in filtering
contributions from different Euclidean-time regions by applying appropriate
extra weight factors in Eq. (3). One can alter the kernel with the help of smoothed
Heaviside functions
$\Theta_{\Delta}(t)=\frac{1}{2}(1+\tanh(\frac{t}{\Delta}))$ to restrict the
integral to a particular energy window, which amounts to substituting the QED-
kernel appearing in Eq. (3) by
$f_{\rm{W}}(t,m_{\mu})=\Big{[}\Theta_{\Delta}(t-t_{0})-\Theta_{\Delta}(t-t_{1})\Big{]}f(t,m_{\mu})\,.$
(6)
The original proposal in Ref. Blum _et al._ (2018b) was motivated by the fact
that lattice calculations and phenomenological estimates can be made accurate
in different Euclidean time windows; applying different methods according to
their performance in the concerned region can thus lead to a better combined
estimate. Typically, lattice calculations suffer from enhanced discretization
effects at very short distances, and the long-distance nature of $a_{\mu}$
makes the finite-size corrections non-negligible. The intermediate window
quantity, $a_{\mu}^{\textrm{W}}$, defined by Eq. (6) with $t_{0}=0.4$ fm,
$t_{1}=1.0$ fm and $\Delta=0.15$ fm, is therefore expected to be well-suited
for lattice calculations, where a sub-percent precision level with well-
controlled systematic errors is easier to achieve than for the whole
$a_{\mu}^{\mathrm{hvp}}$. A comparison between the lattice and
phenomenological determinations of this quantity would shed light on the
tension within SM predictions that has persisted since the publication of the
BMW result Colangelo _et al._ (2022).
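The window weight of Eq. (6) is straightforward to evaluate. A minimal sketch in plain Python, with the standard intermediate-window parameters $t_{0}=0.4$ fm, $t_{1}=1.0$ fm and $\Delta=0.15$ fm (a hedged illustration of the definitions, not the authors' code):

```python
import math

def theta(t, delta=0.15):
    """Smoothed Heaviside function Theta_Delta(t) = (1 + tanh(t/Delta)) / 2."""
    return 0.5 * (1.0 + math.tanh(t / delta))

def window(t, t0=0.4, t1=1.0, delta=0.15):
    """Window factor [Theta_Delta(t - t0) - Theta_Delta(t - t1)] of Eq. (6).

    Multiplies the QED kernel f(t, m_mu); all times in fm.
    """
    return theta(t - t0, delta) - theta(t - t1, delta)

# Inside the window the factor is close to 1; outside it is strongly suppressed.
w_mid = window(0.7)   # center of the intermediate window
w_lo  = window(0.1)   # short-distance region
w_hi  = window(1.5)   # long-distance region
```

At the window center the factor equals $\tanh(0.3/0.15)\approx 0.96$, so the intermediate window passes the bulk of the kernel there while cutting off both the short- and long-distance tails.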
During the past few years, many lattice results on $a_{\mu}^{\textrm{W}}$ have
been published by independent collaborations Aubin _et al._ (2020); Borsanyi
_et al._ (2021); Lehner and Meyer (2020); Lahert _et al._ (2022); Cè _et
al._ (2022); Alexandrou _et al._ (2022); Wang _et al._ (2022); Aubin _et
al._ (2022). The current estimates from different lattice discretization
schemes show good consistency within the accuracy reached. It is then worth
checking the currently available results, all obtained from the TMR, with
alternative approaches to exclude a potential common bias of the TMR method.
Specifically, it is interesting to see if one can still get a consistent
result from a method which has different discretization effects than the TMR.
We propose a representation of $a_{\mu}^{\textrm{W}}$ based on an
alternative approach to the calculation of $a_{\mu}$, the Covariant
Coordinate-Space (CCS) method, introduced in Ref. Meyer (2017). The derivation
is given in Appendix A. Qualitatively, it follows closely the derivation of
the original CCS kernels, which applies to observables that can be written as
a weighted integral over the Adler function $\mathcal{A}(Q^{2})\equiv
Q^{2}\frac{d}{dQ^{2}}\Pi(Q^{2})$, where $\Pi(Q^{2})$ is the vacuum
polarization function. Non-trivial examples of such observables are the
subtracted Vacuum Polarization function and $a_{\mu}$.
Exploiting the transversality of the vacuum polarization tensor, the
dependence on the tensor structure of the vector-vector correlator
$G_{\mu\nu}(x)$ can be made explicit and we arrive at a representation of
$a_{\mu}^{\textrm{W}}$ as a four-dimensional integral,
$a^{\textrm{W}}_{\mu}=\int d^{4}x\;H_{\mu\nu}(x)\,G_{\mu\nu}(x)\,,$ (7)
where the symmetric, rank-two, transverse ($\partial_{\mu}H_{\mu\nu}=0$)
kernel
$H_{\mu\nu}(x)=-\delta_{\mu\nu}\mathcal{H}_{1}(|x|)+\frac{x_{\mu}x_{\nu}}{|x|^{2}}\mathcal{H}_{2}(|x|)$
(8)
is characterized by two scalar weight functions,
$\mathcal{H}_{1}(|x|)=\frac{2}{9\pi|x|^{4}}\int_{0}^{|x|}dt\sqrt{|x|^{2}-t^{2}}(2|x|^{2}+t^{2})f_{\rm{W}}(t,m_{\mu})\,,$
(9)
$\mathcal{H}_{2}(|x|)=\frac{2}{9\pi|x|^{4}}\int_{0}^{|x|}dt\sqrt{|x|^{2}-t^{2}}(4t^{2}-|x|^{2})f_{\rm{W}}(t,m_{\mu})\,.$
(10)
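For any time-kernel $f_{\rm W}$, the weight functions of Eqs. (9) and (10) are one-dimensional integrals that standard quadrature handles easily. The sketch below (plain Python, an illustration under stated assumptions rather than the authors' code) uses the substitution $t=|x|\sin\theta$ to remove the square-root endpoint behavior; the constant kernel $f\equiv 1$ is a hypothetical test input for which the integrals evaluate exactly to $\mathcal{H}_{1}=1/8$ and $\mathcal{H}_{2}=0$ for any $|x|$:

```python
import math

def simpson(g, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3.0

def ccs_weights(fw, r, n=2000):
    """H1(r) and H2(r) of Eqs. (9)-(10) for a given time-kernel fw.

    Substituting t = r*sin(theta) turns sqrt(r^2 - t^2) dt into
    r^2 cos^2(theta) dtheta, leaving a smooth integrand on [0, pi/2].
    """
    def g1(th):
        t = r * math.sin(th)
        return math.cos(th) ** 2 * (2 * r * r + t * t) * fw(t)
    def g2(th):
        t = r * math.sin(th)
        return math.cos(th) ** 2 * (4 * t * t - r * r) * fw(t)
    pref = 2.0 / (9.0 * math.pi * r * r)
    return (pref * simpson(g1, 0.0, math.pi / 2, n),
            pref * simpson(g2, 0.0, math.pi / 2, n))

# Hypothetical check kernel f == 1: H1 = 1/8 and H2 = 0, independently of r.
h1, h2 = ccs_weights(lambda t: 1.0, r=1.7)
```

The $f\equiv 1$ check follows from $\int_{0}^{x}\sqrt{x^{2}-t^{2}}\,dt=\pi x^{2}/4$ and $\int_{0}^{x}t^{2}\sqrt{x^{2}-t^{2}}\,dt=\pi x^{4}/16$, and gives a clean validation of the quadrature before inserting the physical $f_{\rm W}$.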
A remarkable feature of the CCS method is the possibility of modifying the
kernel, and hence the integrand in Eq. (7), without changing the final
integrated value in infinite volume, thanks to current conservation.
Effectively, using the fact that the vector-vector correlator is conserved,
$\partial_{\mu}G_{\mu\nu}=0$, one can add a total-derivative term of type
$\partial_{\mu}[x_{\nu}g(|x|)]$ to the kernel without changing
$a_{\mu}^{\textrm{W}}$, as this only leads to a surface term vanishing in
infinite volume Cè _et al._ (2018).
This flexibility makes lattice calculations with the CCS method attractive,
because it allows one to find an optimum in terms of discretization and
finite-size errors: by adjusting the shape of the integrand, one controls its
sensitivity to different regions (see Sect. III for
our setup for the lattice computation). In particular, the success in the
control of the finite-size effects in the Hadronic Light-by-Light contribution
to $a_{\mu}$ in an analogous way Blum _et al._ (2017); Asmussen _et al._
(2019) makes this a promising strategy. Nonetheless, the systematic error
induced by finite-size and discretization effects might require careful
studies for each kernel. These are important subjects of this paper (Sect. IV
and Sect. V).
In the following, we will perform calculations with two additional kernels,
the traceless
$H^{\textrm{TL}}_{\mu\nu}(x)=\left(-\delta_{\mu\nu}+4\frac{x_{\mu}x_{\nu}}{|x|^{2}}\right)\mathcal{H}_{2}(|x|)\,,\qquad{\textrm{
(`TL')}}$ (11)
and the one which is proportional to $x_{\mu}x_{\nu}$,
$H^{\textrm{XX}}_{\mu\nu}(x)=\frac{x_{\mu}x_{\nu}}{|x|^{2}}\Big{(}\mathcal{H}_{2}(|x|)+|x|\frac{d}{d|x|}\mathcal{H}_{1}(|x|)\Big{)}\qquad{\textrm{
(`XX')}}.$ (12)
These choices were studied in Ref. Cè _et al._ (2018). In particular, the XX
kernel is motivated by a stronger suppression of the contributions from long
distances when the correlator is modeled by a simple vector-meson exchange
Meyer (2017). Finally, for the remainder of the paper, we denote a generic
kernel as
$\widetilde{H}_{\mu\nu}(x)=-\delta_{\mu\nu}\widetilde{\mathcal{H}}_{1}(|x|)+\frac{x_{\mu}x_{\nu}}{|x|^{2}}\widetilde{\mathcal{H}}_{2}(|x|)\,.$
(13)
## III Lattice setup
We apply the CCS method to five different $N_{f}=2+1$ flavor gauge ensembles
generated by the Coordinated Lattice Simulations consortium Bruno _et al._
(2015) at a pion mass around $350$ MeV. These ensembles have been generated
with the O$(a)$-improved Wilson-clover fermion action and tree-level
O$(a^{2})$-improved Lüscher-Weisz gauge action. Detailed information about
the ensembles used can be found in Tab. 1. In this work, our goal is to
provide a cross-check for the calculation carried out in the conventional TMR
method Cè _et al._ (2022), restricting ourselves to the (strongly dominant)
quark-connected contributions in the $f=u,d,s$ sector.
To control the discretization effects, in this work, we consider both the
local (L) and the conserved (C) version of the vector current on the lattice
$j_{\mu}^{\textrm{(L)}}(x)=\bar{\psi}(x)\gamma_{\mu}{\cal Q}{\psi}(x)\,,$ (14)
and
$\displaystyle j_{\mu}^{\textrm{(C)}}(x)$ $\displaystyle=$
$\displaystyle{\textstyle\frac{1}{2}}\left(j_{\mu}^{\textrm{(N)}}(x)+j_{\mu}^{\textrm{(N)}}(x-a\hat{\mu})\right),$
(15) $\displaystyle j_{\mu}^{\textrm{(N)}}(x)$ $\displaystyle=$
$\displaystyle\frac{1}{2}\Big{[}\bar{\psi}(x+a\hat{\mu})(1+\gamma_{\mu})U_{\mu}^{\dagger}(x){\cal
Q}{\psi}(x)-\bar{\psi}(x)(1-\gamma_{\mu})U_{\mu}(x){\cal
Q}{\psi}(x+a\hat{\mu})\Big{]}\,,$ (16)
where $U_{\mu}(x)$ is the gauge link and ${\cal Q}$ is a generic quark charge
matrix acting in flavor space. Starting from the Noether current
$j_{\mu}^{\textrm{(N)}}$, we have defined the site-centered current
$j_{\mu}^{\textrm{(C)}}$, which obeys the on-shell conservation equation
$\sum_{\mu=0}^{3}\partial_{\mu}^{*}\,j_{\mu}^{\textrm{(C)}}=0$, where
$\partial_{\mu}^{*}$ is the lattice backward derivative.
In practice, to handle the O$(a)$ lattice artifacts, we substitute the lattice
vector currents with their improved counterparts Bhattacharya _et al._ (2006)
(Eq. (17) is valid for the flavor non-singlet combinations considered here),
$j^{(\alpha),\textrm{I}}_{\mu}(x)=j^{(\alpha)}_{\mu}(x)+ac^{(\alpha)}_{V}\partial_{\nu}T_{\mu\nu}(x),\qquad\textrm{for}\quad\alpha=\textrm{L,
C}\,,$ (17)
where the local tensor current is defined by
$T_{\mu\nu}\equiv-\frac{1}{2}\bar{\psi}(x)[\gamma_{\mu},\gamma_{\nu}]{\cal
Q}\psi(x)$ and $c^{(\alpha)}_{V}$ is an improvement coefficient. For
$c^{(\alpha)}_{V}$, we use the interpolating formulae Eq. (46.a) and Eq.
(46.b) of Ref. Gerardin _et al._ (2019), consistently with the treatment of
Ref. Cè _et al._ (2022). For both flavor combinations considered here, the
renormalization is multiplicative,
$\displaystyle j^{({\rm L}),\textrm{R}}_{\mu}(x)=\hat{Z}_{\rm
V}^{\textrm{(L)}}\,j^{({\rm L}),\textrm{I}}_{\mu}(x).$ (18)
In the case of the local isovector current, corresponding to ${\cal Q}={\rm
diag}(\frac{1}{2},-\frac{1}{2},0)$, the renormalization factor is given by
$\displaystyle\hat{Z}_{\rm
V}^{\textrm{(L)}}=Z_{V}(g_{0})\Big{[}1+3\bar{b}_{V}^{\rm eff}am_{\rm
q}^{\textrm{av}}+b_{V}am_{{\rm q},l}\Big{]}\,,$ (19)
where the parameters $Z_{V}$, $\bar{b}_{V}^{\rm eff}$ and $b_{V}$ are obtained
from the Padé fits Eqs. (44.a,b,c) of Ref. Gerardin _et al._ (2019). The
average quark mass $m_{\rm q}^{\textrm{av}}$ and the mass of the quark of
flavour $f$, $m_{{\rm q},f}$, are taken from the same reference. The conserved
vector current does not need to be renormalized, thus we have $\hat{Z}_{\rm
V}^{\textrm{(C)}}=1$. This treatment of the renormalization and improvement
coefficients corresponds to Set 1 in the recent calculation of the window
observable of the Mainz group Cè _et al._ (2022).
The strange current, since we consider only the connected contribution to its
two-point function, must be defined within a partially quenched theory. For
instance, adding a fourth, purely ‘valence’ quark $s^{\prime}$ mass-degenerate
with $s$, the flavor structure corresponds to ${\cal Q}={\rm
diag}(0,0,\frac{1}{3\sqrt{2}},-\frac{1}{3\sqrt{2}})$. The corresponding
renormalization factor can be written in the form
$\displaystyle\hat{Z}_{\rm
V}^{\textrm{(L)}}=Z_{V}(g_{0})\Big{[}1+3\bar{b}_{V}^{\rm eff}am_{\rm
q}^{\textrm{av}}+b_{V}am_{{\rm q},s}+b^{\rm pq}_{V}(am_{\rm
q}^{\textrm{av}}-am_{{\rm q},s})\Big{]}\,.$ (20)
It contains an additional term with a coefficient $b^{\rm pq}_{V}$ (of order
$g_{0}^{4}$ in perturbation theory) representing a sea-quark effect. Both for
this reason and because we work quite close to the $SU(3)_{\rm f}$
point $am_{\rm q}^{\textrm{av}}=am_{{\rm q},s}$, we neglect this additional
term.
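The renormalization factors of Eqs. (19) and (20) are simple polynomial corrections to $Z_{V}$. A minimal sketch with placeholder inputs (the actual values of $Z_{V}$, $\bar{b}_{V}^{\rm eff}$, $b_{V}$ and $b_{V}^{\rm pq}$ come from the Padé fits of Ref. Gerardin _et al._ (2019) and are not reproduced here; the numbers below are purely illustrative):

```python
def zv_hat_light(Zv, bbar_eff, bV, am_av, am_l):
    """Eq. (19): renormalization factor of the improved local isovector current."""
    return Zv * (1.0 + 3.0 * bbar_eff * am_av + bV * am_l)

def zv_hat_strange(Zv, bbar_eff, bV, bV_pq, am_av, am_s):
    """Eq. (20): strange-current version, including the sea-quark term b_V^pq
    (neglected in the text, since am_av is close to am_s near the SU(3)_f point)."""
    return Zv * (1.0 + 3.0 * bbar_eff * am_av + bV * am_s
                 + bV_pq * (am_av - am_s))

# Placeholder inputs: the mass-dependent terms are per-mille to per-cent level.
z = zv_hat_light(Zv=0.71, bbar_eff=0.01, bV=1.4, am_av=0.003, am_l=0.002)
```

Setting $b_{V}^{\rm pq}=0$ and $am_{{\rm q},s}=am_{{\rm q},l}$ reduces Eq. (20) to Eq. (19), which makes the neglect of the sea-quark term near the $SU(3)_{\rm f}$ point explicit.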
We give some further details on the implementation in the following
subsections. Although the expressions are given for the case where both
currents are local, the generalization to the cases with conserved currents
should be straightforward.
Id | $\beta$ | $L^{3}\times T$ | $a$ [fm] | $m_{\pi}$ [MeV] | $m_{K}$ [MeV] | $m_{\pi}L$ | $L$ [fm] | #confs (light/strange)
---|---|---|---|---|---|---|---|---
U102 | 3.4 | $24^{3}\times 96$ | 0.08636 | 353(4) | 438(4) | 3.7 | 2.1 | 200/0
H102 | 3.4 | $32^{3}\times 96$ | 0.08636 | | | 4.9 | 2.8 | 240/120
S400 | 3.46 | $32^{3}\times 128$ | 0.07634 | 350(4) | 440(4) | 4.2 | 2.4 | 240/120
N203 | 3.55 | $48^{3}\times 128$ | 0.06426 | 346(4) | 442(5) | 5.4 | 3.1 | $90\times 2$/$90\times 2$
N302 | 3.7 | $48^{3}\times 128$ | 0.04981 | 346(4) | 450(5) | 4.2 | 2.4 | 240/120
Table 1: Overview of the ensembles used. The lattice spacings are determined
in Ref. Bruno _et al._ (2017) and the pion and kaon masses are taken from
Ref. Cè _et al._ (2022). Open boundary conditions are employed for all of the
listed ensembles. For the ensemble N203, two replicas have been included in the
analysis. To exploit translational invariance to reduce statistical
fluctuations, all contracted correlators [Eqs. (24,25,28)] have been computed
at $L$ different choices of origin situated at $(n,n,n,T/2)$.
### III.1 Contracted Correlators
From the Lorentz structure of the CCS kernel Eq. (8), we deduce that the
integral representation of $a_{\mu}^{\textrm{W}}$, Eq. (7), can be
conveniently written as
$a_{\mu}^{\textrm{W}}=\int_{0}^{\infty}dr\,f(r)\,,\quad
f(r)\equiv\,r^{3}\Big{[}-\widetilde{\mathcal{H}}_{1}(r)G_{1}(r)+\frac{1}{r^{2}}\widetilde{\mathcal{H}}_{2}(r)G_{2}(r)\Big{]}\,,$
(21)
where
$G_{1}(r)=\int_{\mathbb{S}^{3}}d\Omega_{x}\,G_{\mu\nu}(x)\delta_{\mu\nu}\,,$
(22)
$G_{2}(r)=\int_{\mathbb{S}^{3}}d\Omega_{x}\,G_{\mu\nu}(x)x_{\mu}x_{\nu}\,,$
(23)
with $r\equiv|x|$ and $\hat{x}\equiv x/|x|$; $d\Omega_{x}$ denotes the measure
on the three-sphere $\mathbb{S}^{3}$. The functions $G_{1}$ and $G_{2}$ will be
referred to as the contracted correlators and $f$ as the integrand.
In infinite volume, the integrand transforms as a scalar under
O$(4)$-transformations. In particular, it is expected to decay exponentially
with the separation $r$ at large distances due to the behavior of the vector-
current two-point correlator. For our lattice calculation, where the
O$(4)$-symmetry is broken, the contracted correlators need to be sampled by
points which are spread around on the same shell as evenly as possible to
restore the rotational symmetry. This in part motivates our choice for saving
the following quantities on each given distance $r$ on the lattice for the
quark-connected contribution of $a_{\mu}^{\textrm{W}}$
$\widehat{G}^{\textrm{conn.}}_{1}(r)=-\,{\rm Tr}\{{\cal
Q}^{2}\}\sum_{x\in\Lambda,\,|x|=r}\Re\,{\rm
Tr}[S(x,0)\gamma_{\mu}S(0,x)\gamma_{\mu}]\,,$ (24)
$\widehat{G}^{\textrm{conn.}}_{2}(r)=-\,{\rm Tr}\{{\cal
Q}^{2}\}\sum_{x\in\Lambda,\,|x|=r}\Re\,{\rm
Tr}[S(x,0)\not{x}S(0,x)\not{x}]\,,$ (25)
where $\Lambda$ denotes the set of all points on the lattice and $S(x,0)$ is a
quark propagator with point-source at $0$. Note that, in this convention, we
have $\widehat{G}^{\textrm{conn.}}_{i}(r)\rightarrow r^{3}G_{i}(r)$ in the
continuum and infinite-volume limit. Another advantage of such a choice is the
re-usability of the data for other quantities for which the form factors of
the CCS kernel are known; it suffices to substitute the form factors
$\widetilde{\mathcal{H}}_{1}$ and $\widetilde{\mathcal{H}}_{2}$ in the master
formula Eq. (21) with the desired one in such a case.
For the $O(a)$-improvement of the discretized lattice vector current Eq. (17),
there is another quantity which has to be taken into account due to the tensor
current. Starting with the O$(a)$-improved vector-current given Eq. (17), one
can keep the explicit coefficient $ac_{V}$ fixed and substitute the vector-
and tensor-currents by their continuum and infinite-volume limit counterparts.
Plugging it into the original infinite-volume vector-current two-point
correlator, Eq. (7) is then modified to, up to O$(a^{2})$-terms,
$\tilde{a}^{\textrm{W}}_{\mu}(a)=\int
d^{4}x\,\Big\{H_{\mu\nu}(x)G_{\mu\nu}(x)+ac_{V}\Big{[}\langle
j_{\mu}(x)T_{\nu\alpha}(0)\rangle-\langle
T_{\mu\alpha}(x)j_{\nu}(0)\rangle\Big{]}\partial_{\alpha}H_{\mu\nu}(x)\Big\}\,,$
(26)
where we have performed an integration by parts to get the second term on the
right-hand side. The second term in the curly bracket can be seen as a lattice
artifact, as it vanishes in the $a\rightarrow 0$ limit at fixed $c_{V}$, where
$a_{\mu}^{\textrm{W}}$ is recovered. Exploiting the Lorentz symmetry as done
previously, we can consider it as a convolution of the correlation function in
the square-bracket as
$ac_{V}\int_{0}^{\infty}dr\,r\,\widetilde{\mathcal{H}}_{3}(r)G_{3}(r)\,,$ (27)
where
$G_{3}(r)=\int_{\mathbb{S}^{3}}d\Omega_{x}\,x_{\alpha}\;\Big{[}-\langle
j_{\mu}(x)T_{\mu\alpha}(0)\rangle+\langle
T_{\mu\alpha}(x)j_{\mu}(0)\rangle\Big{]},$ (28)
$\widetilde{\mathcal{H}}_{3}(r)=\widetilde{\mathcal{H}}_{2}(r)+r\widetilde{\mathcal{H}}^{\prime}_{1}(r)\,.$
(29)
This observation facilitates the numerical computation as the same propagators
required for the calculation of the previously-mentioned contracted
correlators can be reused and leads to the quantity to be computed on the
lattice
$\widehat{G}_{3}^{\textrm{conn.}}(r^{2})=-\,{\rm Tr}\{{\cal
Q}^{2}\}\sum_{x\in\Lambda,\,|x|=r}\Re\,{\rm
Tr}[S(x,0)\gamma_{\mu}S(0,x)(\not{x}\gamma_{\mu}-\gamma_{\mu}\not{x})]\,.$
(30)
### III.2 Summation Schemes
Figure 1: Visualization of the domain of integration on a hypercube of size
$L$. Details of the integration procedure are provided in Sect. III.2.
Because of the periodicity in the spatial directions on the lattice, the
spatial separation in each direction is mapped to $x_{k}\in[-L/2,\,L/2]$ in
infinite-volume spacetime. This means that, in total, one can sample up to
$r=L$ on a lattice with $T\geq L$. However, the CCS formulation consists in
treating the lattice points shell-by-shell with fixed $r$ across the
hypercube, following the radial direction; in the $r>L/2$ region, the
corresponding shell on the hypercube is not faithfully sampled anymore. Upon
taking the continuum and infinite-volume limit, the summations in the lattice-
summed contracted correlators $\widehat{G}_{i}$ run over the three-sphere
$\mathbb{S}^{3}$. As the summands become O$(4)$-invariant objects in this
limit, it suffices to evaluate them at a given point and multiply by the
$\mathbb{S}^{3}$-measure to get the answer. However, on a finite lattice, this
simplified procedure is exposed to both discretization and finite-volume
effects. This is better illustrated with Fig. 1: when going beyond $r=L/2$ in
the radial direction, the hypersphere only intersects with a subset of points
on the entire shell of the hypercube submerged in infinite-volume spacetime.
In order to control the finite-volume effects, we propose the following
summation scheme for our lattice data. We correct for these missing points by
a multiplicative factor given by:
$c(r,L)=\frac{r_{4}((r/a)^{2})}{n_{\textrm{avail}}(r^{2},L)}\,,\quad\textrm{with}\quad
r_{4}(n)=8\sum_{d\,\mid\,n,\ 4\,\nmid\,d}d\,$ (31)
where $r_{4}(n)$ is the number of ways to represent $n$ as a sum of four
squares and $n_{\textrm{avail}}(r^{2},L)$ is the number of available points on
the lattice, which can easily be counted. The sum in Eq. (31) runs over all
divisors $d$ of the integer $n$ that are not divisible by 4; this divisor-sum
formula is known as Jacobi’s four-square theorem. A proof is given for
example in Ref. Hirschhorn (2000). Note that, in this definition, $c(r,L)=1$
for all $r\leq L/2$. This summation scheme allows one to sample the
contribution from the portion of a hypersphere cut out by the box, as depicted
by the red points in the right panel of Fig. 1. As a consequence, our lattice
version of the master formula for $a_{\mu}^{\textrm{W}}$ reads:
$a_{\mu}^{\textrm{W}}{}^{\textrm{,lat.}}=a^{4}\sum_{r=0}^{L}c(r,L)f^{\textrm{lat.}}(r)\,,$
(32)
where the lattice integrand is defined as
$f^{\textrm{lat.}}(r)\equiv-\widetilde{\mathcal{H}}_{1}(r)\widehat{G}^{\textrm{conn.}}_{1}(r)+\frac{1}{r^{2}}\widetilde{\mathcal{H}}_{2}(r)\widehat{G}^{\textrm{conn.}}_{2}(r)+\frac{ac_{V}}{r^{2}}\widetilde{\mathcal{H}}_{3}(r)\widehat{G}^{\textrm{conn.}}_{3}(r)\,.$
(33)
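The multiplicity $r_{4}(n)$ entering Eq. (31) is easy to generate and to cross-check. A minimal sketch in plain Python, comparing Jacobi's divisor-sum formula against a brute-force enumeration of integer four-vectors (the enumeration is only a hypothetical check, not part of any analysis code):

```python
from itertools import product

def r4(n):
    """Jacobi's four-square theorem: r4(n) = 8 * sum of divisors d of n
    that are not divisible by 4."""
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

def r4_brute(n, radius=6):
    """Direct count of integer 4-vectors x with |x|^2 = n
    (requires radius^2 >= n to enclose the full shell)."""
    return sum(1 for x in product(range(-radius, radius + 1), repeat=4)
               if sum(c * c for c in x) == n)

# The two counts agree on every shell; e.g. r4(1) = 8 (the unit vectors).
counts = {n: (r4(n), r4_brute(n)) for n in range(1, 11)}
```

In $c(r,L)$ of Eq. (31), $r_{4}((r/a)^{2})$ supplies the number of points on the full infinite-lattice shell, while $n_{\textrm{avail}}(r^{2},L)$ counts how many of those points actually fall inside the periodic box.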
The results for the ensembles given in Tab. 1 calculated with this scheme are
collected in Tab. 3. Finally, as commented earlier, we could also have computed
the lattice-summed correlators $\widehat{G}_{i}$ by starting from the
continuum expression (21), which is based on 4d spherical coordinates, and
implementing it in one particular direction on the lattice. The result should
agree with the summation scheme of Eq. (32) after a proper continuum and
infinite-volume extrapolation. In general, the two approaches introduce a
different scaling toward the continuum limit. We have explicitly verified in
the present case that the difference between these two treatments of the
lattice data is much smaller than the statistical error of the data.
## IV Correction for the finite-size effects
The finite-size effects (FSEs) on the electromagnetic correlator come
dominantly from the two-pion intermediate states, which belong to the
isovector channel. In the context of the TMR method, a number of different
approaches have been considered to estimate the FSEs. Perhaps the most
straightforward way to estimate FSEs is to rely on Chiral Perturbation Theory
(ChPT) in a finite box. The $\rho$-meson, however, which contributes very
strongly to the HVP at intermediate distances, only enters at higher
orders Aubin _et al._ (2020). Alternatively, one can use phenomenological
models, e.g. Ref. Jegerlehner and Szafron (2011), to include the effects of
the $\rho$ Borsanyi _et al._ (2021). Finite-size effects in the tail of the
TMR correlator can also be computed based on the pion electric form factor in
the timelike region, which can be obtained from auxiliary lattice calculations
Meyer (2011); Francis _et al._ (2013); Della Morte _et al._ (2017); Gérardin
_et al._ (2019). Finally, the first terms of a systematic asymptotic expansion
are given in Refs. Hansen and Patella (2019, 2020), where the FSE correction
to $a_{\mu}^{\mathrm{hvp}}$ is related to a pion-photon Compton scattering
amplitude.
In our approach with the CCS method, where the position-space vector-vector
correlator is needed, a new aspect in the study of volume effects comes from
the Lorentz structure of the correlator as a symmetric rank-2 tensor under the
breaking of the O(4)-symmetry into that of a subgroup of the hypercubic group
H(4), or the octahedral group $O_{h}$ if the time extent is taken to be
infinite. In addition, it is not straightforward to generalize the approach of
Refs. Hansen and Patella (2019, 2020) or of Ref. Meyer (2011): as the
correlator used in the CCS method is a position-space object, the whole range
of center-of-mass momenta must be considered. For these reasons, we opted to
base our FSE estimate on the model proposed in Ref. Jegerlehner and Szafron
(2011). We will refer to this model as the Sakurai QFT in the remainder of the
paper.
The pion electric form factor, $F_{\pi}$, is commonly parametrized by the
Gounaris-Sakurai (GS) formula Gounaris and Sakurai (1968). In particular, it
incorporates different dominant vector resonances with their widths into the
form factor. Ref. Jegerlehner and Szafron (2011) suggests a model which is
realistic at $\sqrt{s}<1$ GeV: the Lagrangian of the theory in Euclidean
spacetime is given by
${\cal
L}_{E}=\frac{1}{4}F_{\mu\nu}(A)^{2}+\frac{1}{4}F_{\mu\nu}(\rho)^{2}+\frac{1}{2}m_{\rho}^{2}\rho_{\mu}^{2}+\frac{e}{2g_{\gamma}}F_{\mu\nu}(A)F_{\mu\nu}(\rho)+(D_{\mu}\pi)^{\dagger}(D_{\mu}\pi)+m_{\pi}^{2}\pi^{\dagger}\pi,$
(34)
with the covariant derivative
$D_{\mu}\equiv\partial_{\mu}-ieA_{\mu}-ig\rho_{\mu}$. The degrees of freedom
are the photon $A_{\mu}$, the pion $\pi$ and the massive $\rho$-meson
$\rho_{\mu}$. In this Lagrangian, the $\rho$-meson and the photon mix already
at treelevel via the product of the field strengths, known as kinetic mixing
term. The normalization condition $F_{\pi}(0)=0$ emerges as a result of gauge
invariance, independently of the values of the coupling constants $g_{\gamma}$
and $g$. As a condition to determine the latter, we match the decay rates of
the vector meson to $\pi^{+}\pi^{-}$ and to $e^{+}e^{-}$ to their
experimentally measured values. This procedure gives $g=5.98$ and
$g_{\gamma}=4.97$ [Eq. (76) and Eq. (72)]. The details of this derivation as
well as the renormalization of the theory in infinite volume are deferred to
Appendix B.
As a sanity check, we have looked at the predictions of the Sakurai QFT for
different Euclidean time windows, i.e., different choices of $t_{0}$ and
$t_{1}$ in Eq. (6) at fixed $\Delta=0.15$ fm. With our choice of parameters
$g$ and $g_{\gamma}$, the two-pion channel contribution to these windows
computed in the Sakurai QFT to one-loop agrees surprisingly well with the
analysis based on the $e^{+}e^{-}$ cross-section data below 1 GeV Colangelo
_et al._ (2022); see Tab. 2. Note that according to the analysis presented in
Ref. Colangelo _et al._ (2022), the two-pion channel amounts to about 70% of the
total $a_{\mu}^{\mathrm{hvp}}$. This observation further strengthens our
confidence in the model.
$[t_{0},t_{1}]$ | Sakurai QFT | Ref. Colangelo _et al._ (2022)
---|---|---
$[0,0.1]$ fm | 0.66 | 0.83(1)
$[0.1,0.4]$ fm | 14.05 | 12.89(12)
$[0.4,0.7]$ fm | 53.03 | 51.02(45)
$[0.7,1.0]$ fm | 87.59 | 87.28(72)
$[1.0,1.3]$ fm | 94.05 | 95.31(73)
$[1.3,1.6]$ fm | 79.64 | 80.88(58)
$[1.6,\infty]$ fm | 165.81 | 166.08(106)
total | 494.83 | 494.30(355)
Table 2: Predictions of the Sakurai QFT for different Euclidean time windows
defined by Eq. (6) with $\Delta=0.15$ fm and the corresponding values for
$t_{1}$ and $t_{0}$. $m_{\pi}$ and $m_{\rho}$ in the Lagrangian are set to
their physical values and $(g,g_{\gamma})=(5.984,4.97)$. The precision
requirement for the numerical integration is set below the displayed digits.
All numbers in the table are in units of $10^{-10}$. The uncertainties quoted
for the values from Ref. Colangelo _et al._ (2022) result from all sources of
error added in quadrature.
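Since the seven windows in Tab. 2 partition the Euclidean time axis, their entries must add up to the quoted totals. This can be verified directly (a minimal Python check; central values transcribed from Tab. 2):

```python
# Consistency check of Tab. 2: the windows partition [0, infinity),
# so the per-window entries should sum to the quoted totals
# (all values in units of 1e-10; central values only).
sakurai = [0.66, 14.05, 53.03, 87.59, 94.05, 79.64, 165.81]
dispersive = [0.83, 12.89, 51.02, 87.28, 95.31, 80.88, 166.08]

print(round(sum(sakurai), 2))     # 494.83, as quoted
print(round(sum(dispersive), 2))  # sums to 494.29; the quoted 494.30
                                  # differs only by rounding of the entries
```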
In Fig. 2, we plot the infinite-volume integrands defined in Eq. (21)
predicted by the Sakurai QFT together with the finite-size corrected lattice
integrand Eq. (32) for the ensemble N203, according to the procedure described
in Sect. IV.1. Two different $m_{\rho}$ are considered: one corresponds to its
physical value (775 MeV) and the other (827 MeV) is obtained from a previous
lattice study of the pion electric form factor in Gounaris-Sakurai
parametrization Gérardin _et al._ (2019), evaluated at $m_{\pi}=350$ MeV. A
point worth mentioning is the sensitivity to $m_{\rho}$. At the considered
pion mass, $m_{\rho}=827$ MeV gives an $a_{\mu}^{\textrm{W}}$ of $\sim
155\times 10^{-10}$, which is about 6% lower than the value from
$m_{\rho}=775$ MeV. This difference results from the different height of the
peaks of the integrands. More importantly, as can be seen in the shape of the
integrand in Fig. 2, tuning the $\rho$-mass to its exact value predicted by
the lattice study of Ref. Gérardin _et al._ (2019) leads to a much better
agreement in the long-distance region with the lattice data obtained in our
study. In our study of the finite-size effects presented in this work, we have
chosen $m_{\rho}$ to match the values listed in Ref. Gérardin _et al._
(2019).
Figure 2: Comparison between the integrand from the ensemble N203 for the
conserved-local discretization of the vector current and the prediction of the
Sakurai QFT for the corresponding $m_{\pi}$ and $m_{\rho}$. The correction for
the wrap-around-the-world pion has been applied to the lattice data.
### IV.1 Finite-size-effect correction scheme
We neglect the effects of having a finite temporal extent, as $m_{\pi}T$ is
large for the ensembles included in this calculation. In the CCS method, one
has to correct for the FSEs coming from two sources. The first one is the
truncation of the integrand of Eq. (21) at $r_{\textrm{max}}=L/2$ because of
the finite lattice size. The resulting missing contribution could be large if
the integrand is long-ranged. Selected raw lattice results obtained with
different kernels are displayed in Fig. 3. We see that the width of the
integrand differs strongly depending on the kernel used. For the kernels
$H_{\mu\nu}^{\textrm{TL}}$ and $H_{\mu\nu}^{\textrm{XX}}$, the integrals to
get $a_{\mu}^{\mathrm{hvp}}$ saturate more rapidly than in the case of the
original, un-subtracted kernel $H_{\mu\nu}$; the FSE corrections due to the
truncation are thus much smaller for the first two.
The second source of FSEs is the wrap-around-the-world effect related to the
discretized momenta in a finite, periodic box. We estimate this effect by
directly comparing the correlators computed in finite- and infinite-volume
Sakurai QFT [Eq. (116)]. The finite-volume part of the latter is to be done
following the same summation schemes described in Sect. III.2 for different
spacetime regions to match the lattice QCD calculation. As, ultimately, the
relevant quantities for the calculation of $a_{\mu}^{\textrm{W}}$ are the
contracted correlators [Eqs. (22,23)], we compute the contracted finite-volume
correlators at a distance $|x|=r$ by sampling them at several points $x$
equally distributed on the same hypersphere in order to reduce the
computational cost.
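The hypersphere sampling can be sketched as follows; the point set used here (normalized Gaussian four-vectors) is purely illustrative and need not coincide with the point sets actually used in our code:

```python
import numpy as np

def sphere_points(r, n, rng):
    # Draw n points (approximately) uniformly on the hypersphere |x| = r
    # by normalizing 4d Gaussian vectors.
    v = rng.normal(size=(n, 4))
    return r * v / np.linalg.norm(v, axis=1, keepdims=True)

def sphere_average(f, r, n=512, seed=0):
    # Estimate the angular average of f over the hypersphere of radius r.
    pts = sphere_points(r, n, np.random.default_rng(seed))
    return float(np.mean([f(x) for x in pts]))

# For an O(4)-invariant function the average reduces to its value at |x| = r:
avg = sphere_average(lambda x: float(np.dot(x, x)), r=1.3)
print(avg)  # 1.69 = r**2 up to floating-point rounding
```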
The numerical error of this sampling procedure is quantified based on the
variation of the correction when increasing the density of the sampled points.
With our setup, we estimate the wrap-around-the-world effect to be controlled
at the 10%-level. An additional uncertainty comes from the fact that the
winding expansion Eq. (116) is truncated at a given order. Our choice is to
truncate at $\left\|n\right\|_{2}^{2}=4$ and
$\left\|n\right\|_{2}^{2}+\left\|\nu\right\|_{2}^{2}=3$ in the first and the
second sum in Eq. (116) respectively. An estimate of the upper bound for the
truncation error is given by the highest-order kept term. This error is added
in quadrature to the uncertainty of the sampling procedure, which gives the
total numerical error of the calculation. The FSE corrections computed
according to the procedure described above are summarized in Tab. 5.
To get an idea of the size of the systematic error associated with the use of
the Sakurai QFT, we also compute the same quantity in leading-order ChPT,
where the photon-two-pion coupling is described by scalar QED. There are
significant relative differences between the estimates, though the order of
magnitude remains the same. Thus we decide to quote 25% of the total FSE
correction as a modelling error, which we add in quadrature to the numerical
error discussed in the previous paragraph.
(a) N203 (L=3.1fm)
(b) N302 (L=2.4fm)
Figure 3: Comparison of the integrand of Eq. (21) for different kernels [Eqs.
(8,11,12)], for two different ensembles for the conserved-local
discretization.
### IV.2 Comparison of the prediction for the finite-size error between the
Sakurai QFT and lattice data
Although in Fig. 3, the shorter-range $H_{\mu\nu}^{\textrm{XX}}$ might appear
to be beneficial in terms of its noise-to-signal ratio, we still prefer the
$H_{\mu\nu}^{\textrm{TL}}$ kernel in this study for two reasons. First, on
coarser ensembles, the integrand exhibits noticeable oscillations at short
distances, which indicates that the discretization effect due to the breaking
of the O$(4)$-symmetry might be less well handled by performing the angular
average over the available lattice points. This effect can be observed in the
comparison between the data from a coarser (N203) and finer (N302) ensemble
plotted in Fig. 3. The second reason for preferring the TL-kernel is that,
even though the tail is strongly suppressed, the Sakurai theory still predicts
non-negligible contributions in this region, if the box size is not big
enough. On the left panel of Fig. 4, we show a zoomed-in version of the tail
of the integrand of H102 with the TL-kernel. With this choice of kernel, the
integrand is very well described by the Sakurai QFT. On the other hand, with
the quality of our data, using the XX-kernel in this region gives a noisy
result consistent with zero, making it hard to conclude whether the model
describes the long-distance behavior of the integrand correctly. Therefore, we
deem it most appropriate to opt for the traceless kernel
$H_{\mu\nu}^{\textrm{TL}}$ in our calculation for $a_{\mu}^{\textrm{W}}$, as
the FSE due to the truncation seems to be better controlled. However, one
should not exclude the possibility that the shorter-ranged XX-kernel might
become the better choice if sufficiently fine ensembles, with well-resolved
tails of the integrand, are included in the continuum extrapolation.
In order to test to what extent our FSE correction procedure works, we compare
the difference between the integrand data computed with H102 and U102,
differing only in their spatial length $L$, to the Sakurai QFT prediction at
the corresponding volumes, as shown on the right panel of Fig. 4. For this
study, we set $m_{\rho}$ for U102 to be the same as that of H102, as only the
latter is available from Ref. Gérardin _et al._ (2019). The error on the
lattice data is obtained by adding the statistical errors from each individual
ensemble in quadrature. Although the fluctuations on the lattice data are
large compared to the central values, the prediction from the Sakurai QFT
seems to follow the trend very nicely and gives the right order of magnitude
up to about $r=0.8$ fm, where the integrand from the Sakurai QFT peaks.
However, beyond this region, the Sakurai QFT is no longer in good agreement
with the lattice data. Besides a possible mistuning of $m_{\rho}$ for U102,
another reason for this discrepancy might be that, as we approach or go beyond
half of the linear box size (1.05 fm for U102), the convergence of the
winding expansion Eq. (116) is not sufficiently good for such a small box. As
the summation scheme for the region beyond $r=1.05$ fm requires one to sample
the two boxes in different ways for geometrical reasons, a more careful
discussion of the validity of the Sakurai QFT would be needed, especially on
smaller boxes where the sensitivity of the model at short distances becomes
critical.
The study described above suggests that the Sakurai QFT is able to effectively
model the FSE due to the wrap-around-the-world effect of the pion up to medium
values of $r$, but this effect might become too large to control with smaller
boxes. Moreover, the correction needed to reconstruct the tail is sizeable for
a small box like U102, leading to a less predictive result. For these reasons,
the ensemble U102 is not included in the final analysis of this work.
Figure 4: Left: Plot of the lattice data from H102 and the prediction of the
Sakurai QFT for the tail of the integrand. The red curve is used to calculate
the correction for truncating the integrand. Right: Comparison of the
difference between the ensembles U102 and H102 and the prediction from the
Sakurai QFT.
## V Numerical results
In this section, we discuss the numerical results for $a_{\mu}^{\textrm{W}}$
from our lattice simulations, which are based on the kernel
$H_{\mu\nu}^{\textrm{TL}}$ and on the finite-size corrections detailed in
Sect. IV. We first compare the results from each individual ensemble to what
has been obtained in the previous Mainz publication based on the TMR Cè _et
al._ (2022). Then, we correct for the mistuning of the pion mass to shift to
the reference pion mass of 350 MeV and kaon mass of 450 MeV prior to
extrapolating the data to the continuum limit.
### V.1 Comparison to the time-momentum representation result
The ensemble-by-ensemble results for the isovector and strange contributions
are displayed in Tab. 3. Recall that two discretizations of the current-
current correlator, namely the local-local (LL) and the conserved-local (CL),
have been used to check for discretization effects (cf. Sect. III). Due to the
different discretization schemes, the results from this study based on the CCS
method do not necessarily agree with those obtained with the TMR method. For
the strange-quark contribution, the results obtained from both methods agree
with each other quite well. Note that we do not apply any FSE correction to
the strange data, as they receive contributions from the kaon loop at the
leading order in ChPT, which is far more suppressed at large distances due to
the higher mass of the kaon. In the isovector channel, we observe a good
agreement between the CCS and the TMR methods for the local-local data on the
larger ensembles H102 and N203. For the smaller ensembles S400 and N302 the
agreement for the local-local discretization is slightly worse. When we look
at the strange data, the agreement on the smaller ensembles is better. This
could be a sign that the worse agreement in the isovector channel for S400 and
N302 is due to finite-size effects, because these effects are much smaller for
the strange channel. In contrast, for the conserved-local data we see a
different behaviour when comparing the individual ensembles: our results
with the CCS method lie below the TMR values. This hints that the results
for the conserved-local discretization show a much flatter gradient as the
continuum limit is approached, since the O($a$)-improvement has been
implemented in both methods. This behaviour is illustrated in Fig. 5, where
we later perform the continuum extrapolation at the common reference point.
| CCS method $H_{\mu\nu}^{\textrm{TL}}$ kernel | TMR method
---|---|---
| isovector | strange | isovector | strange
Id | (LL) | (CL) | (LL) | (CL) | (LL) | (CL) | (LL) | (CL)
U102 | 174.26(191) | 164.78(190) | — | — | — | — | — | —
H102 | 177.83(92) | 168.66(90) | 35.66(19) | 33.54(19) | 178.54(52) | 179.75(52) | 35.66(12) | 35.90(11)
S400 | 175.21(96) | 167.57(94) | 34.90(20) | 33.15(20) | 173.82(69) | 174.49(68) | 34.402(86) | 34.548(82)
N203 | 173.25(89) | 167.60(88) | 34.11(14) | 32.83(13) | 173.75(43) | 174.11(43) | 34.225(90) | 34.283(89)
N302 | 169.08(96) | 165.39(95) | 33.31(17) | 32.46(17) | 167.77(87) | 167.84(87) | 32.427(83) | 32.444(82)
Table 3: Comparison between the results for the isovector and strange
connected contribution obtained in the CCS method using spherical integration
and the results of the Mainz group Cè _et al._ (2022) using the TMR method.
Finite size corrections are applied to the isovector contribution for both
methods. The results for U102 are not included in the final analysis. All
values are in units of $10^{-10}$.
### V.2 Shift to a common reference point
The chosen ensembles from Tab. 1 are not exactly at the same pion and kaon
mass. Although these masses are not very different, we want to shift the
results for each ensemble to a common reference point in the
$(m_{\pi},m_{K})$-plane. We define this reference point to be at
$m_{\pi}=350$ MeV and $m_{K}=450$ MeV. For this task, we use one of the best
global fits from the calculation of the Mainz group in the TMR method Cè _et
al._ (2022). For the isovector contribution the fit has the following form
$\begin{split}(a_{\mu}^{\textrm{W}})_{I=1}(a,\phi_{2},\phi_{4})=&\quad
p_{0}+p_{1}(\phi_{2}-\phi_{2,{\mathrm{phys}}})+p_{2}(\log(\phi_{2})-\log(\phi_{2,{\mathrm{phys}}}))\\\
&+p_{3}(\phi_{4}-\phi_{4,{\mathrm{phys}}})+p_{4}a^{2}\,,\end{split}$ (35)
and for the strange contribution we have
$\begin{split}(a_{\mu}^{\textrm{W}})_{\textrm{strange}}(a,\phi_{2},\phi_{4})=&\quad
p_{0}+p_{1}(\phi_{2}-\phi_{2,{\mathrm{phys}}})+p_{2}(\phi_{2}-\phi_{2,{\mathrm{phys}}})^{2}\\\
&+p_{3}(\phi_{4}-\phi_{4,{\mathrm{phys}}})+p_{4}a^{2}\,.\end{split}$ (36)
The fit parameters $p_{i}$ and the associated covariance matrices are taken
from the calculation done in Ref. Cè _et al._ (2022). In the above, $a$ is
the lattice spacing, $\phi_{2}\equiv 8t_{0}m_{\pi}^{2}$ and $\phi_{4}\equiv
8t_{0}(m_{K}^{2}+\frac{1}{2}m_{\pi}^{2})$ are the dimensionless parameters
defined with the gradient flow time $t_{0}$ Lüscher (2010). With this fit form
we calculate the differences between the result at the reference point and the
result at the pion and kaon mass of the specific ensemble. This difference is
independent of the lattice spacing of the given ensemble. The errors are
calculated from the covariance matrices of the fits and the results of this
calculation are given in Tab. 6. We then apply these differences as a
correction to the results on each ensemble in the CCS method. We use the TMR
fit for the same current discretization (LL, CL) to correct the corresponding
CCS data. However, we see that there is only a very small difference between
the shifts for the LL and CL discretization calculated in the TMR method.
Again, $25\%$ of the correction is assigned as the systematic uncertainty of
this procedure. Since the chosen ensembles are very close to the chosen
reference point, the systematic errors from shifting to that reference point
are very small.
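The statement that the shift is independent of the lattice spacing can be made explicit in a small sketch: the $p_{4}a^{2}$ term of Eq. (35) cancels in the difference. The parameter and $\phi$ values below are hypothetical placeholders, not the actual fit parameters of Ref. Cè _et al._ (2022):

```python
import math

def amu_fit(a, phi2, phi4, p, phi2_phys, phi4_phys):
    # Fit form of Eq. (35) for the isovector contribution.
    p0, p1, p2, p3, p4 = p
    return (p0 + p1 * (phi2 - phi2_phys)
            + p2 * (math.log(phi2) - math.log(phi2_phys))
            + p3 * (phi4 - phi4_phys) + p4 * a**2)

def shift(a, phi2, phi4, phi2_ref, phi4_ref, p, phi2_phys, phi4_phys):
    # Difference between the fit at the reference point and at the
    # ensemble's (phi2, phi4); the p4*a^2 term cancels exactly.
    return (amu_fit(a, phi2_ref, phi4_ref, p, phi2_phys, phi4_phys)
            - amu_fit(a, phi2, phi4, p, phi2_phys, phi4_phys))

p = (165.0, 10.0, 5.0, -2.0, 300.0)  # hypothetical parameters
s1 = shift(0.064, 0.75, 1.12, 0.74, 1.10, p, 0.0805, 1.115)
s2 = shift(0.050, 0.75, 1.12, 0.74, 1.10, p, 0.0805, 1.115)
print(abs(s1 - s2))  # ~ 0: the shift does not depend on the lattice spacing
```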
### V.3 Continuum extrapolation
(a) Isovector contribution
(b) Strange contribution
Figure 5: Continuum extrapolation at the reference point $m_{\pi}=350$ MeV and
$m_{K}=450$ MeV using the TL-kernel. The results from the TMR method are both
at $a=0$. They are separated slightly for better visibility. The isovector
contribution is corrected for finite-size effects. For the strange
contribution no finite-size correction is applied. The smaller error bar shows
the statistical error only; the larger one shows the total error. The
systematic error on N203 and H102 in the isovector contribution is barely
visible. For the strange contribution the uncertainty on each ensemble is
dominated by the statistical error, so that the systematic error from the
shift to the reference point is not visible.
After applying the corrections that account for the mistuning of the pion and
kaon masses relative to the reference point, we perform an extrapolation to
the continuum with a linear fit in $a^{2}$
$f_{1}(a,\alpha_{1},\beta_{1})=\alpha_{1}+\beta_{1}a^{2}\,.$ (37)
This is depicted in Fig. 5 and the results of the continuum extrapolation are
displayed in Tab. 4.
| isovector | strange
---|---|---
Id | (LL) | (CL) | (LL) | (CL)
$H_{\mu\nu}^{\textrm{TL}}$ | 165.75(158) | 164.69(156) | 32.61(24) | 32.38(23)
TMR Cè _et al._ (2022) | 165.66(125) | 165.09(123) | 32.26(32) | 32.11(31)
Table 4: Results of the continuum extrapolation from the CCS method and the
TMR method with statistical uncertainties. The results of the TMR method are
obtained from the fits in Eqs. (35) and (36). All values are in units of
$10^{-10}$.
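The extrapolation of Eq. (37) amounts to a weighted linear least-squares fit in $a^{2}$. A minimal sketch with synthetic, exactly-linear data (the actual fit uses the shifted ensemble values and their uncertainties):

```python
import numpy as np

a = np.array([0.086, 0.076, 0.064, 0.050])  # lattice spacings in fm (illustrative)
y = 160.0 + 900.0 * a**2                    # synthetic data, exactly linear in a^2
err = np.full_like(y, 1.0)                  # uniform errors for this sketch

# Weighted least squares for f1(a) = alpha + beta * a^2 (Eq. (37)).
X = np.column_stack([np.ones_like(a), a**2]) / err[:, None]
alpha, beta = np.linalg.lstsq(X, y / err, rcond=None)[0]
print(alpha)  # continuum value; equals 160.0 up to floating-point rounding
```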
Since the O(a)-improvement procedure is fully implemented, O(a) artifacts are
expected to be absent in the continuum extrapolation. However, higher order
terms, such as $a^{3}$, $a^{2}\log(a)$ and $a^{2}/\log(a)$ could also be non-
negligible. This leads to a systematic error of the extrapolation. In order to
obtain an estimate of this uncertainty, we perform several additional fits.
For each of the fits, we allow one of these terms to be nonzero. We therefore
consider the following additional three-parameter fit-ansätze
$\displaystyle f_{2}(a,\alpha_{2},\beta_{2},\gamma_{2})$ $\displaystyle=$
$\displaystyle\alpha_{2}+\beta_{2}a^{2}+\gamma_{2}a^{3}\,,$ (38)
$\displaystyle f_{3}(a,\alpha_{3},\beta_{3},\gamma_{3})$ $\displaystyle=$
$\displaystyle\alpha_{3}+\beta_{3}a^{2}+\gamma_{3}a^{2}\log(a)\,,$ (39)
$\displaystyle f_{4}(a,\alpha_{4},\beta_{4},\gamma_{4})$ $\displaystyle=$
$\displaystyle\alpha_{4}+\beta_{4}a^{2}+\gamma_{4}\frac{a^{2}}{\log(a)}.$ (40)
These fit ansätze leave only one degree of freedom with our available data.
Hence, over-fitting could potentially be an issue. We observe a large
cancellation between the term multiplying $\beta_{i}$ and the one multiplying
$\gamma_{i}$. Lacking guidance from additional data points, we introduce
Gaussian priors to constrain the highest order terms in $a$, $\gamma_{i}$, in
the ansätze Eqs. (38-40) to be of similar size to the best-fit coefficient
$\beta_{1}$ from Eq. (37). Additionally, to probe the sensitivity of the
linear fit $f_{1}$ to the range in lattice spacing of the data, we also
perform the fit with the coarsest lattice spacing left out. We apply this
procedure to the LL and CL data independently, resulting in 10 different fits.
To get an estimate of the systematic error of the fitting procedure, we
calculate the root-mean-squared deviation of the individual fit results in the
continuum limit $y_{i}$ from their average $\bar{y}$, $\Delta
y_{\textrm{RMS}}\equiv\Big{(}\sum_{i=1}^{N}(y_{i}-\bar{y})^{2}/N\Big{)}^{1/2}$.
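The RMS spread is simply the population standard deviation of the fit results; for illustration, with hypothetical continuum-limit values $y_{i}$ (not our actual fits):

```python
import math

# RMS deviation of the individual continuum-limit fit results from their
# average, as defined in the text (illustrative numbers, not our fits).
y = [165.2, 164.7, 165.9, 165.0, 166.1]
ybar = sum(y) / len(y)
rms = math.sqrt(sum((yi - ybar) ** 2 for yi in y) / len(y))
print(round(rms, 3))  # the population standard deviation of the y_i
```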
The results for $a_{\mu}^{\textrm{W}}$ from the different fits are shown in
Fig. 6. We see that the extrapolations for the conserved and the local current
are in good agreement. Furthermore, the continuum values at the reference
point are consistent with the calculation with the TMR method.
For our final estimate for the isovector and the strange-quark contribution to
$a_{\mu}^{\textrm{W}}$ with the CCS method at the reference point of
$m_{\pi}=350$ MeV and $m_{K}=450$ MeV, we quote the result from a constant fit
to the LL and CL outcomes under the fit-ansatz $f_{1}$:
$\displaystyle a_{\mu}^{\textrm{W,I1}}$ $\displaystyle=$ $\displaystyle
165.17(157)_{\textrm{stat}}(99)_{\textrm{syst}}\times 10^{-10}\,,$ (41)
$\displaystyle a_{\mu}^{\textrm{W,s}}$ $\displaystyle=$ $\displaystyle
32.49(22)_{\textrm{stat}}(23)_{\textrm{syst}}\times 10^{-10}\,.$ (42)
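The total uncertainties quoted in the conclusion follow from Eqs. (41)-(42) by adding the statistical and systematic errors in quadrature (on the last digits, in units of $10^{-10}$):

```python
import math

# Quadrature combination of the statistical and systematic errors of
# Eqs. (41) and (42), reproducing the totals quoted in the conclusion.
iso_total = math.hypot(157, 99)  # 185.6..., quoted as 165.17(186)
s_total = math.hypot(22, 23)     # 31.8..., quoted as 32.49(32)
print(round(iso_total), round(s_total))  # 186 32
```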
(a) Isovector contribution
(b) Strange contribution
Figure 6: Comparison of the different fit-ansätze for the continuum
extrapolation at the reference point $m_{\pi}=350$ MeV and $m_{K}=450$ MeV
using the TL-kernel. The root-mean-square deviation of all the different fits
is calculated and gives the systematic uncertainty of the continuum
extrapolation.
## VI Conclusion
In this work, we have extended the Covariant Coordinate Space method first
proposed in Ref. Meyer (2017) to the window quantity for the anomalous
magnetic moment of the muon. Due to its markedly different geometry compared
to the Time-Momentum Representation, this alternative approach provides a
valuable cross-check of the existing window-quantity results from Lattice QCD. We
provide values for the intermediate window quantity in the isovector channel
and for the strange quark-connected contribution at $m_{\pi}=350$ MeV and
$m_{K}=450$ MeV. With an appropriate finite-size effect correction scheme and
a careful scrutiny of the discretization effects, we obtain
$a_{\mu}^{\textrm{W}}=165.17(186)\times 10^{-10}$ for the isovector
contribution and $a_{\mu}^{\textrm{W}}=32.49(32)\times 10^{-10}$ for the
strange quark-connected contribution, where the statistical and systematic
errors have been added in quadrature, confirming the results of the
calculation of the Mainz group using the TMR method Cè _et al._ (2022). This
study strengthens the tension between the lattice calculations and the
dispersive approach on the window quantity.
One advantage of the CCS method is the freedom to modify the weight of the
correlator computed at different regions without changing the final summed
answer. This might turn out to be particularly useful if one wants to adjust the shape
of the lattice integrand to mitigate statistically noisy contributions. Future
applications might involve different weight functions for different Euclidean
time windows to optimize the integrand for minimal lattice artifacts and
statistical noise. Furthermore, we have demonstrated how to correct for the
finite-size effects in the CCS method based on an effective field theory
approach. A strong motivation for this strategy is the non-trivial symmetric
rank-two tensor structure of the coordinate-space correlator required by the
formalism. A simple $\rho$-$\gamma$ mixing model advocated by Jegerlehner and
Szafron Jegerlehner and Szafron (2011) successfully captures the long-distance
contribution to $a_{\mu}^{\textrm{W}}$ in the CCS representation. The expected
$a^{2}$-scaling that our data shows after the finite-size correction based on
this model is encouraging and suggests that the same model might also be
utilized as a guideline for further optimizations with the CCS method. This is
of special interest for the calculation of the full Hadronic Vacuum
Polarization to $a_{\mu}$, whose integrand is much longer-ranged than
$a_{\mu}^{\textrm{W}}$. The technical details appended to this paper might be
useful when computing other coordinate-space observables with similar
integrable divergences in momentum space.
The present calculation can be easily carried over to calculations of other
lattice observables such as the full Hadronic Vacuum Polarization contribution
to $a_{\mu}$ or the running of the QED coupling. For these observables, it
might be of interest to combine the CCS method with master field simulations
Fritzsch _et al._ (2022). These simulations are performed on very large
lattices, so that finite-size effects are expected to be highly suppressed. In
particular, we expect that this framework is best suited for studying the
quark-disconnected contribution, which has been omitted in this work. It might
be possible to get a more precise determination of this contribution with a
short-ranged CCS kernel to filter out the noisy region for lattice
calculations.
###### Acknowledgements.
This work is supported by the European Research Council (ERC) under the
European Union’s Horizon 2020 research and innovation programme through grant
agreement 771971-SIMDAMA, as well as by the Deutsche Forschungsgemeinschaft
(DFG) through the Cluster of Excellence _Precision Physics, Fundamental
Interactions, and Structure of Matter_ (PRISMA+ EXC 2118/1) within the German
Excellence Strategy (Project ID 39083149). E.-H.C.’s work was supported in
part by the U.S. D.O.E. grant #DE-SC0011941. Calculations for this project
were partly performed on the HPC clusters “Clover” and “HIMster II” at the
Helmholtz-Institut Mainz and “Mogon II” at JGU Mainz. Our programs use the
deflated SAP+GCR solver from the openQCD package Lüscher and Schaefer (2013),
as well as the QDP++ library Edwards and Joó (2005). The measurement codes
were developed based on the C++ library wit, a coding effort led by Renwick
J. Hudspith. We thank the authors of Ref. Cè _et al._ (2022) and especially
Simon Kuberski for sharing the fit parameters of the continuum extrapolation
calculation in the TMR method, which we use in Sect. V.2. We are grateful to
our colleagues in the CLS initiative for sharing ensembles.
## Appendix A Derivation of the kernel for the window quantity in the CCS
representation
Let $G(t)$ as defined in Eq. (4) be the (positive-definite) TMR correlator.
The relation to the vacuum polarization function (the HVP function is defined
as in Ref. Bernecker and Meyer (2011)) is
$G(t)\stackrel{{\scriptstyle t\neq
0}}{{=}}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,\omega^{2}\,[\Pi(\omega^{2})-\Pi(0)]\,e^{i\omega
t}.$ (43)
Introducing the Adler function
${\cal A}(\omega^{2})=\omega^{2}\,\frac{d}{d\omega^{2}}\Pi(\omega^{2}),$ (44)
one obtains after writing
$\Pi(\omega^{2})-\Pi(0)=\int_{0}^{\omega^{2}}\frac{ds}{s}\,{\cal A}(s)$ (45)
and integrating by parts over $\omega$ (i.e. $e^{i\omega
t}=\frac{d}{d\omega}\frac{e^{i\omega t}}{it}$) in Eq. (43),
$G(t)=\frac{1}{\pi}\int_{0}^{\infty}\frac{d\omega^{2}}{\omega^{2}}\,{\cal
A}(\omega^{2})\;\frac{d^{2}}{dt^{2}}\left(\frac{\sin(\omega t)}{t}\right).$
(46)
Let now an observable in the TMR be given by
$a_{\mu}^{W}=\int_{0}^{\infty}dt\;f_{W}(t)\,G(t).$ (47)
Inserting expression (46) for the correlator $G(t)$, one finds
$a_{\mu}^{W}=\int_{0}^{\infty}dQ^{2}\,{\cal A}(Q^{2})\,g_{W}(Q^{2}),$ (48)
with
$g_{W}(Q^{2})=\frac{1}{\pi
Q^{2}}\int_{0}^{\infty}dt\,f_{W}(t)\,\frac{d^{2}}{dt^{2}}\left(\frac{\sin(|Q|t)}{t}\right).$
(49)
For an expression of the type (48), Ref. Meyer (2017) (Eq. (33) therein) gives
an expression for the weight functions ${\cal H}_{1}$ and ${\cal H}_{2}$ to be
used in the CCS method. Explicitly,
$\displaystyle a_{\mu}^{W}$ $\displaystyle=$ $\displaystyle\int
d^{4}x\;G_{\mu\nu}(x)\;H_{\mu\nu}(x),$ (50) $\displaystyle H_{\mu\nu}(x)$
$\displaystyle=$ $\displaystyle-\delta_{\mu\nu}{\cal
H}_{1}(|x|)+\frac{x_{\mu}x_{\nu}}{x^{2}}\,{\cal H}_{2}(|x|),$ (51)
with
$\displaystyle{\cal H}_{i}(|x|)$ $\displaystyle=$
$\displaystyle\frac{2}{3}\int_{0}^{\infty}\frac{dQ^{2}}{Q^{2}}\;h_{i}(|Q||x|)\,g_{W}(Q^{2})$
(52) $\displaystyle=$
$\displaystyle\frac{2}{3\pi}\int_{0}^{\infty}dt\,f_{W}(t)\,\frac{d^{2}}{dt^{2}}\left[\frac{1}{t}\int_{0}^{\infty}\frac{dQ^{2}}{Q^{4}}\;h_{i}(|Q||x|)\,\sin(|Q|t)\right]\,.$
(53)
One finds, with $r=|x|$,
$\displaystyle\frac{1}{t}\int_{0}^{\infty}\frac{dQ^{2}}{Q^{4}}\;h_{1}(|Q||x|)\,\sin(|Q|t)$
$\displaystyle=$
$\displaystyle\frac{\theta(r-t)}{120}\,\bigg{(}\frac{\sqrt{r^{2}-t^{2}}\left(32r^{4}+11r^{2}t^{2}+2t^{4}\right)}{r^{4}}$
(54) $\displaystyle\qquad\quad\qquad-45t\arccos\left({t}/{r}\right)\bigg{)},$
$\displaystyle\frac{1}{t}\int_{0}^{\infty}\frac{dQ^{2}}{Q^{4}}\;h_{2}(|Q||x|)\,\sin(|Q|t)$
$\displaystyle=$
$\displaystyle\theta(r-t)\frac{\left(r^{2}-t^{2}\right)^{5/2}}{15r^{4}}.$ (55)
In the second derivatives, needed in Eq. (53), the terms proportional to
$\delta(t-r)$ or its derivative do not contribute to the ${\cal H}_{i}$, as
long as $f_{W}(t)$ is smooth. One then finds
$\displaystyle{\cal H}_{1}(|x|)$ $\displaystyle=$ $\displaystyle\frac{2}{9\pi
r^{4}}\int_{0}^{r}dt\,\sqrt{r^{2}-t^{2}}\left(2r^{2}+t^{2}\right)\,f_{W}(t),$
(56) $\displaystyle{\cal H}_{2}(|x|)$ $\displaystyle=$
$\displaystyle\frac{2}{9\pi
r^{4}}\int_{0}^{r}dt\,\sqrt{r^{2}-t^{2}}\left(4t^{2}-r^{2}\right)\,f_{W}(t).$
(57)
It is worth noting that if $f_{W}(t)$ practically vanishes beyond a distance
$t_{1}$, then for $|x|\gg t_{1}$,
$\displaystyle{\cal H}_{1}(|x|)$ $\displaystyle\simeq$
$\displaystyle\frac{4}{9\pi|x|}\int_{0}^{\infty}dt\,f_{W}(t),$ (58)
$\displaystyle{\cal H}_{2}(|x|)$ $\displaystyle\simeq$
$\displaystyle\frac{-2}{9\pi|x|}\int_{0}^{\infty}dt\,\,f_{W}(t).$ (59)
Therefore, these weight functions have a long tail, unlike $f_{W}(t)$. Still,
the $1/|x|$ behaviour amounts to a suppression compared to the weight
functions for $a_{\mu}^{\rm hvp}$, which grow like $x^{2}$ at large $|x|$. In
the specific case of the ‘window quantity’, numerical integration of Eqs.
(56–57) yields the weight functions displayed in Fig. 7.
Figure 7: The weight functions for obtaining the intermediate window
$a_{\mu}^{W}$ (defined by $t_{0}=0.4$ fm, $t_{1}=1.0$ fm, $\Delta=0.15$ fm) in
the CCS method.
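The weight functions of Fig. 7 follow from a one-dimensional numerical integration of Eqs. (56)-(57). The sketch below uses an illustrative smoothed-window profile for $f_{W}(t)$ (not the full kernel entering Eq. (47)) and checks the asymptotic $1/|x|$ forms of Eqs. (58)-(59):

```python
import math

def f_W(t, t0=0.4, t1=1.0, delta=0.15):
    # Illustrative smooth, short-ranged window profile for f_W(t).
    step = lambda tp: 0.5 * (1.0 + math.tanh((t - tp) / delta))
    return step(t0) - step(t1)

def H12(r, n=4000):
    # Midpoint-rule evaluation of H1(r) and H2(r) from Eqs. (56)-(57).
    h = r / n
    H1 = H2 = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        w = math.sqrt(r * r - t * t) * f_W(t)
        H1 += w * (2 * r * r + t * t)
        H2 += w * (4 * t * t - r * r)
    c = 2.0 / (9.0 * math.pi * r ** 4) * h
    return c * H1, c * H2

# Large-|x| behaviour, Eqs. (58)-(59): H1 -> 4 I / (9 pi r) and
# H2 -> -2 I / (9 pi r), with I the integral of f_W over t.
I = sum(f_W((i + 0.5) * 0.001) * 0.001 for i in range(5000))
r = 10.0
H1, H2 = H12(r)
print(H1 / (4 * I / (9 * math.pi * r)))   # close to 1
print(H2 / (-2 * I / (9 * math.pi * r)))  # close to 1
```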
## Appendix B Determining the finite-size correction using Sakurai’s field
theory
In this section we discuss some features of the Sakurai QFT in detail, with
special focus on its renormalization at one loop and on numerical applications
to the finite-size correction. Recall that in the original basis of fields, the
Euclidean spacetime Lagrangian of the theory is given in Eq. (34). We use
dimensional regularisation in the following. Thus we are in
$d=2\lambda+2=4-\varepsilon$ (60)
dimensions. The massive scalar propagator reads
$G_{m}(x)=\frac{m^{\lambda}}{(2\pi)^{\lambda+1}}\;\frac{K_{\lambda}(m|x|)}{|x|^{\lambda}}\,\stackrel{{\scriptstyle
d=4}}{{=}}\frac{m}{4\pi^{2}|x|}K_{1}(m|x|),$ (61)
and the massive vector propagator
$G_{\mu\nu}(x)\equiv\langle\rho_{\mu}(x)\rho_{\nu}(0)\rangle=\int\frac{d^{d}k}{(2\pi)^{d}}\,e^{ikx}\,\frac{\delta_{\mu\nu}+k_{\mu}k_{\nu}/m_{\rho}^{2}}{k^{2}+m_{\rho}^{2}}=\left(\delta_{\mu\nu}-\frac{1}{m_{\rho}^{2}}\partial_{\mu}\partial_{\nu}\right)G_{m_{\rho}}(x).$
(62)
We begin by determining the couplings $g$ and $g_{\gamma}$, working at tree
level. The kinetic mixing term between the $\rho$ and the photon can be removed
at the cost of generating a direct coupling of the $\rho$ to electrons, and it
is instructive to work in this new basis. We set
$\epsilon=\frac{e}{g_{\gamma}}$ (63)
and remove the kinetic mixing term by a field transformation,
$\left(\begin{array}[]{c}A_{\mu}\\\
\rho_{\mu}\end{array}\right)=\left(\begin{array}[]{c@{\qquad}c}1&-\frac{\epsilon}{\sqrt{1-\epsilon^{2}}}\\\
0&\frac{1}{\sqrt{1-\epsilon^{2}}}\end{array}\right)\left(\begin{array}[]{c}\tilde{A}_{\mu}\\\
\tilde{\rho}_{\mu}\end{array}\right)$ (64)
The square-mass for the $\tilde{\rho}_{\mu}$ field is
$\tilde{m}_{\rho}^{2}=\frac{m_{\rho}^{2}}{1-\epsilon^{2}}.$ (65)
The covariant derivative takes the form
$\displaystyle D_{\mu}$ $\displaystyle=$
$\displaystyle\partial_{\mu}-ieA_{\mu}-ig\rho_{\mu}=\partial_{\mu}-ie\tilde{A}_{\mu}-i\tilde{g}\tilde{\rho}_{\mu},$
(66) $\displaystyle\tilde{g}$ $\displaystyle=$
$\displaystyle\frac{g-e\epsilon}{\sqrt{1-\epsilon^{2}}}.$ (67)
Thus
${\cal
L}_{E}=\frac{1}{4}F_{\mu\nu}(\tilde{A})^{2}+\frac{1}{4}F_{\mu\nu}(\tilde{\rho})^{2}+\frac{1}{2}\tilde{m}_{\rho}^{2}\tilde{\rho}_{\mu}^{2}+(D_{\mu}\pi)^{\dagger}(D_{\mu}\pi)+m_{\pi}^{2}\pi^{\dagger}\pi.$
(68)
While the form of the Lagrangian is now simpler, in the new basis of fields
the electromagnetic current of the electron couples not only to
$\tilde{A}_{\mu}$, but also directly to $\tilde{\rho}_{\mu}$. Explicitly, the
coupling to the electron reads
${\cal
L}_{E}=\bar{e}((\partial_{\mu}-ieA_{\mu})\gamma_{\mu}+m_{e})e=\bar{e}((\partial_{\mu}-ie\tilde{A}_{\mu}+ie\frac{\epsilon}{\sqrt{1-\epsilon^{2}}}\tilde{\rho}_{\mu})\gamma_{\mu}+m_{e})e$
(69)
From here, one calculates the tree-level decay width for $\tilde{\rho}\to
e^{+}e^{-}$ by standard QFT methods (similar to the calculation
$Z^{0}\to\bar{\ell}\ell$) and finds, neglecting the electron mass,
$\Gamma_{e^{+}e^{-}}=\frac{1}{3}\alpha\frac{\epsilon^{2}}{1-\epsilon^{2}}\,\tilde{m}_{\rho}.$
(70)
From the PDG, we set
$\tilde{m}_{\rho}=775.26(23)\,{\rm MeV},\qquad\Gamma_{e^{+}e^{-}}=(4.72\times
10^{-5})\times 149.1\,{\rm MeV}=7.04\,{\rm keV},$ (71)
and from here find
$\epsilon=0.0610,\qquad g_{\gamma}=\frac{\sqrt{4\pi\alpha}}{\epsilon}=4.97.$
(72)
From the value of $\epsilon$, one sees that one could have dropped the factor
$1/\sqrt{1-\epsilon^{2}}$.
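The extraction of Eq. (72) can be reproduced numerically by solving the width formula (70) for $\epsilon$ with the PDG inputs of Eq. (71); the value $\alpha=1/137.035999$ used below is an assumed input not stated explicitly above.

```python
import math

# Extract epsilon and g_gamma from the PDG inputs of Eq. (71),
# using Gamma_ee = (1/3) * alpha * eps^2/(1-eps^2) * m_rho, Eq. (70).
alpha = 1.0 / 137.035999         # assumed value of the fine-structure constant
m_rho = 775.26e-3                # GeV
Gamma_ee = 4.72e-5 * 149.1e-3    # GeV: branching ratio times total width

r = 3.0 * Gamma_ee / (alpha * m_rho)    # = eps^2 / (1 - eps^2)
eps = math.sqrt(r / (1.0 + r))
g_gamma = math.sqrt(4.0 * math.pi * alpha) / eps

assert abs(eps - 0.0610) < 5e-4
assert abs(g_gamma - 4.97) < 0.01
```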
Similarly, the $\tilde{\rho}\pi\pi$ decay is driven at leading order by the
interaction $\Delta{\cal L}_{E}=\tilde{g}\tilde{\rho}_{\mu}j_{\mu}$. Here, if
$p$ and $q$ are respectively the final-state momenta of the $\pi^{+}$ and
$\pi^{-}$, we have
$i{\cal M}^{(\sigma)}=\tilde{g}\,\epsilon^{(\sigma)}_{\nu}(p^{\nu}-q^{\nu}),$
(73)
and
$\frac{1}{3}\sum_{\sigma=0,\pm}|{\cal
M}^{(\sigma)}|^{2}=\frac{2\tilde{g}^{2}}{3}(p\cdot q-m_{\pi}^{2}).$ (74)
In the CM frame, where the norm of the pion spatial momentum is $p_{\pi}$, one
obtains
$\Gamma_{\pi^{+}\pi^{-}}=\frac{p_{\pi}}{8\pi
m_{\rho}^{2}}\frac{1}{3}\sum_{\sigma=0,\pm}|{\cal
M}^{(\sigma)}|^{2}=\frac{\tilde{g}^{2}p_{\pi}^{3}}{6\pi m_{\rho}^{2}}.$ (75)
Setting this equal to the experimental value of $149.1\,$MeV, one extracts
$\tilde{g}=5.976,\qquad g=5.984.$ (76)
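Equation (76) can likewise be checked by inverting Eq. (75) for $\tilde{g}$ and then Eq. (67) for $g$; the charged-pion mass $m_{\pi}=139.57$ MeV is an assumed PDG input.

```python
import math

# Extract g-tilde from the rho -> pi pi width, Eq. (75), then g via Eq. (67).
m_rho, m_pi = 775.26, 139.57     # MeV; m_pi is an assumed PDG input
Gamma_pipi = 149.1               # MeV
alpha = 1.0 / 137.035999
e, eps = math.sqrt(4.0 * math.pi * alpha), 0.0610

p_pi = math.sqrt(m_rho**2 / 4.0 - m_pi**2)           # CM-frame pion momentum
gt = math.sqrt(6.0 * math.pi * m_rho**2 * Gamma_pipi / p_pi**3)
g = gt * math.sqrt(1.0 - eps**2) + e * eps           # invert Eq. (67)

assert abs(gt - 5.976) < 0.005
assert abs(g - 5.984) < 0.005
```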
For orientation, we note that the contribution of a narrow resonance to the
$R$-ratio reads
$R(s)=\frac{9\pi}{\alpha^{2}}\,\frac{\Gamma(m_{V}\to
e^{+}e^{-})}{m_{V}}\,m_{V}^{2}\delta(s-m_{V}^{2}).$ (77)
Using the expression for the $\rho$ electronic width given above, and ignoring
the fact that the $\rho$ is rather broad, this becomes
$R(s)=\frac{12\pi^{2}}{g_{\gamma}^{2}}\,m_{V}^{2}\delta(s-m_{V}^{2}).$ (78)
Given that $a_{\mu}^{\rm hvp}=\int_{0}^{\infty}ds\,w(s)\,R(s)$ with
$w(m_{\rho}^{2})=1.624\times 10^{-8}{\rm GeV}^{-2}$, one obtains the fairly
realistic number
$a_{\mu}^{\rm hvp}(\rho)=468\times 10^{-10}.$ (79)
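The number in Eq. (79) follows directly from Eqs. (77)–(78): the delta function in $R(s)$ reduces the weighted integral to a single product of factors.

```python
import math

# Narrow-rho contribution to a_mu^hvp, Eqs. (77)-(79).
g_gamma = 4.97                   # from Eq. (72)
m_rho = 0.77526                  # GeV
w = 1.624e-8                     # GeV^-2, weight function at s = m_rho^2

a_mu_rho = w * (12.0 * math.pi**2 / g_gamma**2) * m_rho**2
assert abs(a_mu_rho * 1e10 - 468.0) < 1.0
```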
In the following, we consider the implications of one-loop corrections.
### B.1 The photonless Lagrangian including explicit counterterms
We work in the theory without a photon ($A_{\mu}=0$) and renormalize it at
one-loop order. The Lagrangian is then
$\displaystyle{\cal L}_{E}$ $\displaystyle=$
$\displaystyle\frac{1}{4}F_{\mu\nu}(\rho)^{2}+\frac{1}{2}m_{\rho}^{2}\rho_{\mu}^{2}+(D_{\mu}\pi)^{\dagger}(D_{\mu}\pi)+m_{\pi}^{2}\pi^{\dagger}\pi$
(80) $\displaystyle+\frac{1}{4}(Z_{3}-1)F_{\mu\nu}^{2}+\frac{1}{2}\delta
m_{\rho}^{2}\rho_{\mu}^{2}+(Z_{2}-1)(\partial_{\mu}\pi^{\dagger}\partial_{\mu}\pi+m_{\pi}^{2}\pi^{\dagger}\pi)+Z_{2}\delta
m_{\pi}^{2}\pi^{\dagger}\pi$ $\displaystyle+g(Z_{1}-1)\rho_{\mu}j_{\mu}.$
The pion-loop contribution $\Pi(k^{2})$ appears in the two-point function of
the $\rho_{\mu}$ field. To one-loop order, ignoring the counterterms for now,
$\displaystyle\langle\rho_{\mu}(x)\rho_{\nu}(y)\rangle$ $\displaystyle=$
$\displaystyle
G_{\mu\nu}(x)+\frac{g^{2}}{2}\langle\rho_{\mu}(x)\int_{z}\rho_{\lambda}(z)j_{\lambda}(z)\int_{w}\rho_{\sigma}(w)j_{\sigma}(w)\rho_{\nu}(y)\rangle$
$\displaystyle-g^{2}\langle\rho_{\mu}(x)\int_{z}\rho_{\lambda}(z)\rho_{\lambda}(z)\pi^{\dagger}(z)\pi(z)\;\rho_{\nu}(y)\rangle$
$\displaystyle=$
$\displaystyle\int\frac{d^{d}k}{(2\pi)^{d}}\,e^{ik(x-y)}\Big\{\frac{\delta_{\mu\nu}+k_{\mu}k_{\nu}/m_{\rho}^{2}}{k^{2}+m_{\rho}^{2}}+\frac{\delta_{\mu\lambda}+k_{\mu}k_{\lambda}/m_{\rho}^{2}}{k^{2}+m_{\rho}^{2}}\;g^{2}\Pi(k^{2})\frac{k^{2}\delta_{\lambda\nu}-k_{\lambda}k_{\nu}}{k^{2}+m_{\rho}^{2}}\Big\}.$
Performing the resummation of the geometric series,
$\displaystyle\langle\rho_{\mu}(x)\rho_{\nu}(y)\rangle$ $\displaystyle=$
$\displaystyle\int\frac{d^{d}k}{(2\pi)^{d}}e^{ik(x-y)}\Big\{\frac{\delta_{\mu\nu}+k_{\mu}k_{\nu}/m_{\rho}^{2}}{k^{2}+m_{\rho}^{2}}+\frac{\delta_{\mu\lambda}+k_{\mu}k_{\lambda}/m_{\rho}^{2}}{k^{2}+m_{\rho}^{2}}\;g^{2}\Pi(k^{2})\frac{k^{2}\delta_{\lambda\nu}-k_{\lambda}k_{\nu}}{k^{2}+m_{\rho}^{2}}$
(82)
$\displaystyle+\frac{\delta_{\mu\lambda}+k_{\mu}k_{\lambda}/m_{\rho}^{2}}{k^{2}+m_{\rho}^{2}}\;\Big{(}g^{2}\Pi(k^{2})\frac{k^{2}\delta_{\lambda\nu}-k_{\lambda}k_{\nu}}{k^{2}+m_{\rho}^{2}}\Big{)}^{2}+\dots\Big\}$
$\displaystyle=$
$\displaystyle\int\frac{d^{d}k}{(2\pi)^{d}}\,e^{ik(x-y)}\,\left(M(k)(1-T(k))^{-1}\right)_{\mu\nu},$
where
$\displaystyle M_{\mu\nu}(k)$ $\displaystyle=$
$\displaystyle\frac{\delta_{\mu\nu}+k_{\mu}k_{\nu}/m_{\rho}^{2}}{k^{2}+m_{\rho}^{2}},$
(83) $\displaystyle T_{\lambda\nu}(k)$ $\displaystyle=$ $\displaystyle
g^{2}\Pi(k^{2})\frac{k^{2}\delta_{\lambda\nu}-k_{\lambda}k_{\nu}}{k^{2}+m_{\rho}^{2}}.$
(84)
Since $1-T$ is of the form
$1-T=1-f+f\hat{k}\hat{k}^{\top},\qquad
f=g^{2}\Pi(k^{2})\frac{k^{2}}{k^{2}+m_{\rho}^{2}}$ (85)
($\hat{k}=k/|k|$), its inverse is given by
$(1-T)^{-1}=\frac{1}{1-f}\left(1-f\hat{k}\hat{k}^{\top}\right).$ (86)
Thus one finds
$\left(M(k)(1-T(k))^{-1}\right)_{\mu\nu}=\frac{\delta_{\mu\nu}+k_{\mu}k_{\nu}(1-g^{2}\Pi(k^{2}))/m_{\rho}^{2}}{k^{2}(1-g^{2}\Pi(k^{2}))+m_{\rho}^{2}}.$
(87)
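The resummed propagator (87) can be verified numerically: build $M$ and $T$ of Eqs. (83)–(84) as $4\times 4$ matrices for a Euclidean test momentum and a toy value of $g^{2}\Pi(k^{2})$, and compare $M(1-T)^{-1}$ with the closed form.

```python
import numpy as np

# Numerical check of Eq. (87) for a random Euclidean momentum and a
# toy value of g^2 Pi(k^2); m2 stands for m_rho^2.
rng = np.random.default_rng(0)
k = rng.normal(size=4)
m2, g2Pi = 0.6, 0.3              # toy values, purely illustrative
k2 = k @ k
kk = np.outer(k, k)
I = np.eye(4)

M = (I + kk / m2) / (k2 + m2)                    # Eq. (83)
T = g2Pi * (k2 * I - kk) / (k2 + m2)             # Eq. (84)

lhs = M @ np.linalg.inv(I - T)
rhs = (I + kk * (1.0 - g2Pi) / m2) / (k2 * (1.0 - g2Pi) + m2)   # Eq. (87)
assert np.allclose(lhs, rhs)
```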
Taking into account the counterterms, one finds
$\int
d^{d}x\;e^{ik(x-y)}\,\langle\rho_{\mu}(x)\rho_{\nu}(y)\rangle=\frac{\delta_{\mu\nu}+k_{\mu}k_{\nu}(Z_{3}-g^{2}\Pi(k^{2}))/(m_{\rho}^{2}+\delta
m_{\rho}^{2})}{k^{2}(Z_{3}-g^{2}\Pi(k^{2}))+m_{\rho}^{2}+\delta
m_{\rho}^{2}}.$ (88)
The renormalization conditions we impose on the denominator are,
$\displaystyle\Big{(}k^{2}(Z_{3}-g^{2}{\rm Re}\,\Pi(k^{2}))+m_{\rho}^{2}+\delta m_{\rho}^{2}\Big{)}_{k^{2}=-m_{\rho}^{2}}$ $\displaystyle=$ $\displaystyle 0,$ (89)
$\displaystyle\frac{d}{dk^{2}}\Big{(}k^{2}(Z_{3}-g^{2}{\rm Re}\,\Pi(k^{2}))+m_{\rho}^{2}+\delta m_{\rho}^{2}\Big{)}_{k^{2}=-m_{\rho}^{2}}$ $\displaystyle=$ $\displaystyle 1.$ (90)
These conditions lead to the finite $\rho$ mass shift
$\displaystyle\delta
m_{\rho}^{2}=m_{\rho}^{2}\;g^{2}\,\Big{(}k^{2}\frac{d}{dk^{2}}{\rm
Re}\,\Pi(k^{2})\Big{)}_{k^{2}=-m_{\rho}^{2}},$ (91)
and the wave-function renormalization
$Z_{3}-1=g^{2}\Big{(}1+k^{2}\frac{d}{dk^{2}}\Big{)}\,{\rm
Re}\,\Pi(k^{2})\Big{|}_{k^{2}=-m_{\rho}^{2}}.$ (92)
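As a sanity check of Eqs. (91)–(92), one can take an arbitrary smooth stand-in for ${\rm Re}\,\Pi(k^{2})$ (here a polynomial, purely illustrative) and verify that the resulting counterterms make the denominator of Eq. (88) vanish at $k^{2}=-m_{\rho}^{2}$ with unit slope:

```python
# Toy check of the renormalization conditions (89)-(90) using the
# counterterms (91)-(92); m2 and g2 stand for m_rho^2 and g^2.
m2, g2 = 0.6, 36.0               # toy values

def Pi(x):                       # illustrative stand-in for Re Pi(k^2)
    return 0.05 * x + 0.01 * x**2

def dPi(x):                      # its derivative d/dk^2
    return 0.05 + 0.02 * x

x0 = -m2
delta_m2 = m2 * g2 * x0 * dPi(x0)             # Eq. (91)
Z3 = 1.0 + g2 * (Pi(x0) + x0 * dPi(x0))       # Eq. (92)

def D(x):                        # denominator of Eq. (88)
    return x * (Z3 - g2 * Pi(x)) + m2 + delta_m2

h = 1e-6
assert abs(D(x0)) < 1e-9                                  # condition (89)
assert abs((D(x0 + h) - D(x0 - h)) / (2 * h) - 1.0) < 1e-6  # condition (90)
```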
### B.2 Counterterm for the kinetic mixing term
In addition to the counterterms treated above, the $g_{\gamma}^{-1}$ coupling
gets renormalized by the pion-loop. Thus we must add the counterterm
$\delta{\cal
L}_{E}=\frac{e}{2}\delta\frac{1}{g_{\gamma}}F_{\mu\nu}(A)F_{\mu\nu}(\rho)$
(93)
to the Lagrangian of Eq. (34). Starting from that Lagrangian, one obtains to
one-loop order
$\displaystyle\langle A_{\mu}(x)\,\rho_{\nu}(y)\rangle$ $\displaystyle=$
$\displaystyle\frac{-e}{2}\langle
A_{\mu}(x)\left(\frac{1}{g_{\gamma}}+\delta\frac{1}{g_{\gamma}}\right)\int_{z}F_{\lambda\sigma}(A)_{z}F_{\lambda\sigma}(\rho)_{z}\,\rho_{\nu}(y)\rangle_{0}$
$\displaystyle+\langle
A_{\mu}(x)e\int_{z}A_{\lambda}(z)j_{\lambda}(z)\;g\int_{w}\rho_{\sigma}(w)j_{\sigma}(w)\,\rho_{\nu}(y)\rangle$
$\displaystyle+\langle
A_{\mu}(x)(-2eg)\int_{z}A_{\lambda}(z)\rho_{\lambda}(z)\pi^{*}(z)\pi(z)\,\rho_{\nu}(y)\rangle$
$\displaystyle=$ $\displaystyle
e\int\frac{d^{d}k}{(2\pi)^{d}}\frac{e^{ik(x-y)}}{k^{2}(k^{2}+m_{\rho}^{2})}(\delta_{\mu\nu}k^{2}-k_{\mu}k_{\nu})\Big{(}-\frac{1}{g_{\gamma}}-\delta\frac{1}{g_{\gamma}}+g\Pi(k^{2})\Big{)}.$
Here, we require that at $k^{2}=-m_{\rho}^{2}$, the correlation function be
given by its tree-level value, and therefore set
$\delta\frac{1}{g_{\gamma}}=g\;{\rm Re}\,\Pi(-m_{\rho}^{2}).$ (95)
Having determined all the required counterterms at one-loop order, we consider
the quantity that is computed in lattice QCD, namely the photon two-point
function, evaluated at $A_{\mu}=0$.
### B.3 The current-current correlator
The current-current correlator computed in lattice QCD corresponds to
$\frac{\delta^{2}\log Z[A]}{\partial A_{\mu}(x)\partial A_{\nu}(y)}$, and this
must be matched to the Sakurai QFT. In this context we regard $A_{\mu}(x)$ as
a background field for the remaining degrees of freedom, $\pi$ and
$\tilde{\rho}_{\mu}$. We note
$\frac{\delta S}{\delta
A_{\mu}(x)}=\frac{e}{g_{\gamma}}\partial_{\alpha}F_{\mu\alpha}(\rho)-ie\left(\pi\partial_{\mu}\pi^{*}-\pi^{*}\partial_{\mu}\pi\right)+2eg\rho_{\mu}\pi^{*}\pi+2e^{2}A_{\mu}\pi^{*}\pi.$
(96)
Let
$j_{\mu}=-i\left(\pi\partial_{\mu}\pi^{*}-\pi^{*}\partial_{\mu}\pi\right)$
(97)
be the electromagnetic current carried by the charged pions. Then
$\displaystyle\left.\frac{\delta^{2}\log Z[A]}{\partial A_{\mu}(x)\partial
A_{\nu}(y)}\right|_{A=0}$ $\displaystyle=$
$\displaystyle\left.\Big{\langle}\frac{\delta S}{\delta
A_{\mu}(x)}\frac{\delta S}{\delta A_{\nu}(y)}\Big{\rangle}_{\rm
conn}-\Big{\langle}\frac{\delta^{2}S}{\partial A_{\mu}(x)\partial
A_{\nu}(y)}\Big{\rangle}\right|_{A=0}$ (99) $\displaystyle=$ $\displaystyle
e^{2}\Big{\langle}\Big{(}j_{\mu}+\frac{1}{g_{\gamma}}\partial_{\alpha}F_{\mu\alpha}(\rho)+2g\rho_{\mu}\pi^{*}\pi\Big{)}_{x}\Big{(}j_{\nu}+\frac{1}{g_{\gamma}}\partial_{\beta}F_{\nu\beta}(\rho)+2g\rho_{\nu}\pi^{*}\pi\Big{)}_{y}\Big{\rangle}$
$\displaystyle-2e^{2}\delta_{\mu\nu}\delta(x-y)\langle\pi^{*}\pi\rangle,$
where the expectation value is now in the theory without the field $A_{\mu}$.
Evaluating the expectation value to order $g^{0}$, counting $g_{\gamma}/g$ to
be O(1), yields
$\displaystyle\frac{1}{e^{2}}\left.\frac{\delta^{2}\log Z[A]}{\partial
A_{\mu}(x)\partial A_{\nu}(y)}\right|_{A=0}$ $\displaystyle=$
$\displaystyle\frac{1}{g_{\gamma}^{2}}\langle\partial_{\alpha}F_{\mu\alpha}(\rho)_{x}\partial_{\beta}F_{\nu\beta}(\rho)_{y}\rangle_{0}$
$\displaystyle+\langle j_{\mu}(x)j_{\nu}(y)\rangle_{\rm
sQED}-2\delta_{\mu\nu}\delta(x-y)G_{m_{\pi}}(0)$
$\displaystyle+2\frac{g}{g_{\gamma}}G_{m_{\pi}}(0)\Big{(}\langle\partial_{\alpha}F_{\mu\alpha}(\rho)_{x}\rho_{\nu}(y)\rangle_{0}+\langle\rho_{\mu}(x)\partial_{\beta}F_{\nu\beta}(\rho)_{y}\rangle_{0}\Big{)}$
$\displaystyle+\frac{1}{2}\frac{g^{2}}{g_{\gamma}^{2}}\Big{\langle}\partial_{\alpha}F_{\mu\alpha}(\rho)_{x}\int_{z}\rho_{\sigma}(z)j_{\sigma}(z)\int_{w}\rho_{\lambda}(w)j_{\lambda}(w)\partial_{\beta}F_{\nu\beta}(\rho)_{y}\Big{\rangle}_{0}$
$\displaystyle-\frac{g^{2}}{g_{\gamma}^{2}}\Big{\langle}\partial_{\alpha}F_{\mu\alpha}(\rho)_{x}\int_{z}\rho_{\sigma}(z)\rho_{\sigma}(z)\pi^{*}(z)\pi(z)\partial_{\beta}F_{\nu\beta}(\rho)\Big{\rangle}_{0}$
$\displaystyle-\frac{g}{g_{\gamma}}\Big{\langle}\partial_{\alpha}F_{\mu\alpha}(\rho)_{x}\int_{z}\rho_{\sigma}(z)j_{\sigma}(z)j_{\nu}(y)+j_{\mu}(x)\int_{z}\rho_{\sigma}(z)j_{\sigma}(z)\partial_{\beta}F_{\nu\beta}(\rho)_{y}\Big{\rangle}_{0}.$
For the first line of Eq. (B.3), one derives from the massive vector
propagator expression (62)
$\displaystyle\langle\partial_{\alpha}F_{\mu\alpha}(x)\rho_{\nu}(0)\rangle=\delta_{\mu\nu}\delta(x)-m_{\rho}^{2}G_{\mu\nu}(x)$
(101)
and
$\displaystyle\langle\partial_{\alpha}F_{\mu\alpha}(x)\partial_{\beta}F_{\nu\beta}(0)\rangle=m_{\rho}^{4}G_{\mu\nu}(x)+\Big{(}\partial_{\mu}\partial_{\nu}-(\triangle+m_{\rho}^{2})\delta_{\mu\nu}\Big{)}\delta(x).$
(102)
Secondly, the scalar QED contribution reads
$\langle j_{\mu}(x)j_{\nu}(0)\rangle_{\rm
sQED}=2\Big{(}\partial_{\mu}G_{m_{\pi}}(x)\partial_{\nu}G_{m_{\pi}}(x)-G_{m_{\pi}}(x)\partial_{\mu}\partial_{\nu}G_{m_{\pi}}(x)\Big{)}.$
(103)
We now turn to the one-loop contribution of the last line of Eq. (B.3), together
with the corresponding tadpole contribution of the third line. Noting that
$\Pi^{\rm sQED}_{\mu\nu}(k)\equiv\int d^{d}x\,e^{ikx}\,\Big{(}\langle
j_{\mu}(x)j_{\nu}(0)\rangle_{\rm
sQED}-2\delta_{\mu\nu}\delta(x)G_{m_{\pi}}(0)\Big{)}=(\delta_{\mu\nu}k^{2}-k_{\mu}k_{\nu})\Pi(k^{2})$
(104)
with
$\Pi(k^{2})=-\frac{1}{(4\pi)^{d/2}}\Gamma(2-{\textstyle\frac{d}{2}})\int_{0}^{1}dx\frac{(1-2x)^{2}}{(x(1-x)k^{2}+m^{2})^{2-d/2}},$
(105)
we obtain
$\displaystyle+2\frac{g}{g_{\gamma}}G_{m_{\pi}}(0)\langle\partial_{\alpha}F_{\mu\alpha}(\rho)_{x}\rho_{\nu}(y)\rangle_{0}-\frac{g}{g_{\gamma}}\Big{\langle}\partial_{\alpha}F_{\mu\alpha}(\rho)_{x}\int_{z}\rho_{\sigma}(z)j_{\sigma}(z)j_{\nu}(y)\Big{\rangle}_{0}$
(106)
$\displaystyle=2\frac{g}{g_{\gamma}}G_{m_{\pi}}(0)\delta_{\mu\nu}\delta(x-y)-\frac{g}{g_{\gamma}}\langle
j_{\mu}(x)j_{\nu}(y)\rangle_{\rm sQED}$
$\displaystyle+m_{\rho}^{2}\frac{g}{g_{\gamma}}\int\frac{d^{d}k}{(2\pi)^{d}}\frac{e^{ik(x-y)}}{k^{2}+m_{\rho}^{2}}(\delta_{\mu\nu}k^{2}-k_{\mu}k_{\nu})\Pi(k^{2}).$
The other terms on line three and on the last line of Eq. (B.3) correspond to
the exchange $(x,\mu)\leftrightarrow(y,\nu)$, and thus simply double the
contribution of Eq. (106).
Lines four and five of Eq. (B.3) are best handled together and we find
$\displaystyle\frac{1}{2}\frac{g^{2}}{g_{\gamma}^{2}}\Big{\langle}\partial_{\alpha}F_{\mu\alpha}(\rho)_{x}\int_{z}\rho_{\sigma}(z)j_{\sigma}(z)\int_{w}\rho_{\lambda}(w)j_{\lambda}(w)\partial_{\beta}F_{\nu\beta}(\rho)_{y}\Big{\rangle}_{0}$
$\displaystyle-\frac{g^{2}}{g_{\gamma}^{2}}\Big{\langle}\partial_{\alpha}F_{\mu\alpha}(\rho)_{x}\int_{z}\rho_{\sigma}(z)\rho_{\sigma}(z)\pi^{*}(z)\pi(z)\partial_{\beta}F_{\nu\beta}(\rho)\Big{\rangle}_{0}$
$\displaystyle=\frac{g^{2}}{g_{\gamma}^{2}}\Big\{\langle j_{\mu}(x)j_{\nu}(y)\rangle_{\rm sQED}-2G_{m_{\pi}}(0)\delta_{\mu\nu}\delta(x-y)$
$\displaystyle-2m_{\rho}^{2}\int\frac{d^{d}k}{(2\pi)^{d}}\frac{e^{ik(x-y)}}{k^{2}+m_{\rho}^{2}}(\delta_{\mu\nu}k^{2}-k_{\mu}k_{\nu})\,\Pi(k^{2})$
$\displaystyle+m_{\rho}^{4}\int\frac{d^{d}k}{(2\pi)^{d}}\frac{e^{ik(x-y)}}{(k^{2}+m_{\rho}^{2})^{2}}(\delta_{\mu\nu}k^{2}-k_{\mu}k_{\nu})\,\Pi(k^{2})\Big\}.$
(107)
Altogether, we have
$\displaystyle\frac{1}{e^{2}}\left.\frac{\delta^{2}\log Z[A]}{\partial
A_{\mu}(x)\partial A_{\nu}(y)}\right|_{A=0}$ $\displaystyle=$
$\displaystyle\frac{m_{\rho}^{4}}{g_{\gamma}^{2}}G_{\mu\nu}(x-y)+\frac{1}{g_{\gamma}^{2}}\Big{(}\partial_{\mu}\partial_{\nu}-(\triangle+m_{\rho}^{2})\delta_{\mu\nu}\Big{)}\delta(x-y)$
(108) $\displaystyle+\Big{(}1-\frac{g}{g_{\gamma}}\Big{)}^{2}\Big{(}\langle
j_{\mu}(x)j_{\nu}(y)\rangle_{\rm
sQED}-2\delta_{\mu\nu}\delta(x-y)G_{m_{\pi}}(0)\Big{)}$
$\displaystyle+2m_{\rho}^{2}\frac{g}{g_{\gamma}}\Big{(}1-\frac{g}{g_{\gamma}}\Big{)}(\partial_{\mu}\partial_{\nu}-\delta_{\mu\nu}\triangle)\int\frac{d^{d}k}{(2\pi)^{d}}\frac{e^{ik(x-y)}}{k^{2}+m_{\rho}^{2}}\Pi(k^{2})$
$\displaystyle+\frac{g^{2}}{g_{\gamma}^{2}}(\partial_{\mu}\partial_{\nu}-\delta_{\mu\nu}\triangle)m_{\rho}^{4}\int\frac{d^{d}k}{(2\pi)^{d}}\frac{e^{ik(x-y)}}{(k^{2}+m_{\rho}^{2})^{2}}\,\Pi(k^{2})\,.$
Each term is transverse, i.e. yields zero when $\partial_{\mu}^{(x)}$ is
applied to it; for the first term, note that
$\partial_{\nu}G_{\mu\nu}(x)=\frac{1}{m_{\rho}^{2}}\,\partial_{\mu}\delta(x).$
(109)
#### B.3.1 Contribution of counterterms to the current-current correlator
The contribution of the counterterm (93) amounts to replacing
$g_{\gamma}^{-1}$ by $(g_{\gamma}^{-1}+\delta g_{\gamma}^{-1})$. Since the
counterterm represents a relative correction of order $g^{2}$, this correction
needs to be applied only to the leading terms, namely those of order
$g_{\gamma}^{-2}$. Thus we obtain the contribution
$\frac{1}{e^{2}}\left.\frac{\delta^{2}\log Z[A]}{\partial A_{\mu}(x)\partial
A_{\nu}(y)}\right|_{A=0}=\dots+\frac{2}{g_{\gamma}}\delta\frac{1}{g_{\gamma}}m_{\rho}^{4}G_{\mu\nu}(x-y)+\frac{2}{g_{\gamma}}\delta\frac{1}{g_{\gamma}}\Big{(}\partial_{\mu}\partial_{\nu}-(\triangle+m_{\rho}^{2})\delta_{\mu\nu}\Big{)}\delta(x-y)+\dots$
(110)
The contribution of the counterterms proportional to $(Z_{3}-1)$ and $\delta
m_{\rho}^{2}$ reads
$\displaystyle\frac{1}{g_{\gamma}^{2}}\langle\partial_{\alpha}F_{\mu\alpha}(x)\,\partial_{\beta}F_{\nu\beta}(y)\rangle_{\rm
c.t.}$ $\displaystyle=$
$\displaystyle-\frac{1}{g_{\gamma}^{2}}\langle\partial_{\alpha}F_{\mu\alpha}(x)\,\int_{z}\frac{\delta
m_{\rho}^{2}}{2}\rho_{\lambda}(z)\rho_{\lambda}(z)\,\partial_{\beta}F_{\nu\beta}(y)\rangle_{0}$
(111)
$\displaystyle-\frac{1}{g_{\gamma}^{2}}\langle\partial_{\alpha}F_{\mu\alpha}(x)\,(Z_{3}-1)\,\int_{z}\frac{1}{4}F_{\lambda\sigma}(z)F_{\lambda\sigma}(z)\;\partial_{\beta}F_{\nu\beta}(y)\rangle_{0}$
$\displaystyle=$ $\displaystyle-\frac{\delta
m_{\rho}^{2}}{g_{\gamma}^{2}}\Big{(}\delta_{\mu\nu}\delta(x-y)-2m_{\rho}^{2}G_{\mu\nu}(x-y)$
$\displaystyle\qquad+\int_{k}\frac{e^{ik(x-y)}}{(k^{2}+m_{\rho}^{2})^{2}}(\delta_{\mu\nu}m_{\rho}^{4}+k_{\mu}k_{\nu}(2m_{\rho}^{2}+k^{2}))\Big{)}$
$\displaystyle-\frac{Z_{3}-1}{g_{\gamma}^{2}}\Big{(}\big{(}\delta_{\mu\nu}(-\triangle^{(x)}-2m_{\rho}^{2})+\partial_{\mu}^{(x)}\partial_{\nu}^{(x)}\big{)}\delta(x-y)$
$\displaystyle+2m_{\rho}^{4}G_{\mu\nu}(x-y)+m_{\rho}^{4}\int_{k}\frac{e^{ik(x-y)}}{(k^{2}+m_{\rho}^{2})^{2}}(k^{2}\delta_{\mu\nu}-k_{\mu}k_{\nu})\Big{)}\,.\qquad$
Thus we have the final result, now including all counterterm contributions,
$\displaystyle\frac{1}{e^{2}}\left.\frac{\delta^{2}\log Z[A]}{\partial
A_{\mu}(x)\partial A_{\nu}(y)}\right|_{A=0}$ $\displaystyle=$
$\displaystyle\frac{m_{\rho}^{2}(m_{\rho}^{2}+2\delta
m_{\rho}^{2})}{g_{\gamma}^{2}}G_{\mu\nu}(x-y)+\frac{1}{g_{\gamma}^{2}}\Big{(}\partial_{\mu}\partial_{\nu}-(\triangle+m_{\rho}^{2})\delta_{\mu\nu}\Big{)}\delta(x-y)$
(112) $\displaystyle+\Big{(}1-\frac{g}{g_{\gamma}}\Big{)}^{2}\Big{(}\langle
j_{\mu}(x)j_{\nu}(y)\rangle_{\rm
sQED}-2\delta_{\mu\nu}\delta(x-y)G_{m_{\pi}}(0)\Big{)}$
$\displaystyle+2m_{\rho}^{2}(\partial_{\mu}\partial_{\nu}-\delta_{\mu\nu}\triangle)\int\frac{d^{d}k}{(2\pi)^{d}}\frac{e^{ik(x-y)}}{k^{2}+m_{\rho}^{2}}\Big{(}\frac{g}{g_{\gamma}}(1-\frac{g}{g_{\gamma}})\Pi(k^{2})-\frac{\delta
g_{\gamma}^{-1}}{g_{\gamma}}+\frac{Z_{3}-1}{g_{\gamma}^{2}}\Big{)}$
$\displaystyle+\frac{1}{g_{\gamma}^{2}}(\partial_{\mu}\partial_{\nu}-\delta_{\mu\nu}\triangle)m_{\rho}^{4}\int\frac{d^{d}k}{(2\pi)^{d}}\frac{e^{ik(x-y)}}{(k^{2}+m_{\rho}^{2})^{2}}\,(g^{2}\Pi(k^{2})-(Z_{3}-1))$
$\displaystyle-\frac{\delta
m_{\rho}^{2}}{g_{\gamma}^{2}}\Big{(}\delta_{\mu\nu}\delta(x-y)+\int_{k}\frac{e^{ik(x-y)}}{(k^{2}+m_{\rho}^{2})^{2}}(\delta_{\mu\nu}m_{\rho}^{4}+k_{\mu}k_{\nu}(2m_{\rho}^{2}+k^{2}))\Big{)}$
$\displaystyle+\Big{(}\frac{2}{g_{\gamma}}\delta\frac{1}{g_{\gamma}}-\frac{Z_{3}-1}{g_{\gamma}^{2}}\Big{)}\Big{(}\partial_{\mu}\partial_{\nu}-\triangle\delta_{\mu\nu}\Big{)}\delta(x-y)\,.$
### B.4 Finite-size effects on the current-current correlator
Let
$C_{\mu\nu}(x)=-\langle j_{\mu}(x)j_{\nu}(0)\rangle$ (113)
be the Euclidean position-space vector correlator.
Consider the case of $g=g_{\gamma}$. Then the only term contributing to
finite-size effects which is not suppressed by $e^{-m_{V}L/2}$ is
$C^{(L)}_{\mu\nu}(x)=\frac{m_{\rho}^{4}}{V}\sum_{k}e^{ikx}\,\frac{\overline{\Pi}^{(L)}_{\mu\nu}(k)}{(k^{2}+m_{\rho}^{2})^{2}},$
(114)
where $\overline{\Pi}^{(L)}_{\mu\nu}(k)$ is the finite-volume renormalized
vacuum polarization tensor; note that the renormalization is always performed
in infinite volume. It is useful to decompose the latter into its infinite-
volume counterpart, plus a remainder,
$\overline{\Pi}^{(L)}_{\mu\nu}(k)=(\delta_{\mu\nu}k^{2}-k_{\mu}k_{\nu})(\Pi(k^{2})-(Z_{3}-1)/g^{2})+\Delta\Pi^{(L)}_{\mu\nu}(k),$
(115)
because the remainder is ultraviolet finite. We can then write
$\displaystyle C^{(L)}_{\mu\nu}(x)-C^{(\infty)}_{\mu\nu}(x)$ $\displaystyle=$
$\displaystyle\sum_{n\in\mathbb{Z}^{4}\backslash\{0\}}C^{(\infty)}_{\mu\nu}(x+nL)$
(116)
$\displaystyle+\sum_{n\in\mathbb{Z}^{4}}\int\frac{d^{4}k}{(2\pi)^{4}}e^{ik(x+nL)}\frac{m_{\rho}^{4}}{(k^{2}+m_{\rho}^{2})^{2}}\Delta\Pi^{(L)}_{\mu\nu}(k).$
It is instructive to compute $C_{\mu\nu}(x)$ in infinite volume,
$C^{(\infty)}_{\mu\nu}(x)=(\partial_{\mu}\partial_{\nu}-\delta_{\mu\nu}\triangle)\int\frac{d^{4}k}{(2\pi)^{4}}\,e^{ikx}\,\frac{m_{\rho}^{4}}{(k^{2}+m_{\rho}^{2})^{2}}\,\left(\Pi(k^{2})-\Pi(0)+\left(\Pi(0)-\frac{Z_{3}-1}{g^{2}}\right)\right).$
(117)
It is clear that the $(\Pi(0)-\frac{Z_{3}-1}{g^{2}})$ term is rapidly decaying
in position space, being O($e^{-m_{V}|x|}$). To calculate the position-space
dependence of the other term, insert the spectral representation
$\Pi(k^{2})-\Pi(0)=k^{2}\int_{0}^{\infty}ds\;\frac{\rho_{\rm
sQED}(s)}{s(s+k^{2})},$ (118)
with the spectral function normalized according to
$\rho(s)=R(s)/(12\pi^{2})\stackrel{{\scriptstyle\rm
sQED}}{{=}}\frac{1}{48\pi^{2}}(1-4m_{\pi}^{2}/s)^{3/2}.$ (119)
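The spectral representation (118)–(119) can be checked numerically against the subtracted one-loop polarization obtained from Eq. (105) in the limit $d\to 4$, namely $\Pi(k^{2})-\Pi(0)=\frac{1}{16\pi^{2}}\int_{0}^{1}dx\,(1-2x)^{2}\ln\!\left(1+x(1-x)k^{2}/m^{2}\right)$. A sketch using SciPy (assumed available), with toy values for the mass and the Euclidean momentum:

```python
import math
from scipy.integrate import quad

# Check the once-subtracted dispersion relation (118) with the sQED
# spectral function (119) against the Feynman-parameter form of
# Pi(k^2) - Pi(0) from Eq. (105) at d = 4.
m, k2 = 0.14, 1.0                # toy pion mass (GeV) and k^2 (GeV^2)

def lhs_integrand(x):
    return (1 - 2 * x)**2 * math.log(1 + x * (1 - x) * k2 / m**2)

lhs = quad(lhs_integrand, 0.0, 1.0)[0] / (16 * math.pi**2)

def rho(s):                      # Eq. (119)
    return (1 - 4 * m**2 / s)**1.5 / (48 * math.pi**2)

rhs = k2 * quad(lambda s: rho(s) / (s * (s + k2)), 4 * m**2, math.inf)[0]

assert abs(lhs - rhs) < 1e-6
```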
In this form, the $d^{4}k$ integral can be performed,
$\displaystyle C^{\infty}_{\mu\nu}(x)$ $\displaystyle=$ $\displaystyle
m_{\rho}^{4}(\partial_{\mu}\partial_{\nu}-\delta_{\mu\nu}\triangle)\Big{\\{}-(\Pi(0)-\frac{Z_{3}-1}{g^{2}})\frac{\partial}{\partial
m_{\rho}^{2}}G_{m_{\rho}}(x)$
$\displaystyle+\int_{0}^{\infty}\frac{ds}{s(s-m_{\rho}^{2})}\,\rho_{\rm
sQED}(s)\Big{(}\frac{-s}{s-m_{\rho}^{2}}(G_{\sqrt{s}}(x)-G_{m_{\rho}}(x))+m_{\rho}^{2}\frac{\partial}{\partial
m_{\rho}^{2}}G_{m_{\rho}}(x)\Big{)}\Big{\\}}.$
For the second term, we note that
$\Delta\Pi^{(L)}_{\mu\nu}(k)=\sum_{n\neq 0}\int\frac{d^{4}q}{(2\pi)^{4}}\,e^{iLq\cdot n}\frac{-(k+2q)_{\mu}(k+2q)_{\nu}+2\delta_{\mu\nu}((k+q)^{2}+m_{\pi}^{2})}{((k+q)^{2}+m_{\pi}^{2})(q^{2}+m_{\pi}^{2})},$
(121)
where $n$ runs over the non-zero integer four-vectors. One finds, using a Feynman parameter $\alpha$,
$\displaystyle\Delta\Pi^{(L)}_{\mu\nu}(k)$ $\displaystyle=$ $\displaystyle\sum_{n\neq 0}\Delta\Pi^{(L)}_{\mu\nu}(k,y=Ln),$ (122)
$\displaystyle\Delta\Pi^{(L)}_{\mu\nu}(k,y)$ $\displaystyle=$ $\displaystyle\{-(k+2q)_{\mu}(k+2q)_{\nu}+2\delta_{\mu\nu}((k+q)^{2}+m_{\pi}^{2})\}_{q=-i\nabla_{y}}$
(123) $\displaystyle\cdot\frac{1}{8\pi^{2}}\int_{0}^{1}d\alpha\,e^{-i\alpha k\cdot y}\,K_{0}(\sqrt{\alpha(1-\alpha)k^{2}+m_{\pi}^{2}}|y|).$
For the next step, the $d^{4}k$ integral can be reduced to a one-dimensional
integral,
$\displaystyle\int\frac{d^{4}k}{(2\pi)^{4}}\,e^{ikx}\frac{m_{\rho}^{4}}{(k^{2}+m_{\rho}^{2})^{2}}\;\Delta\Pi^{(L)}_{\mu\nu}(k,y)=\frac{m_{\rho}^{4}}{32\pi^{4}}$
$\displaystyle\cdot\{-(k+2q)_{\mu}(k+2q)_{\nu}+2\delta_{\mu\nu}((k+q)^{2}+m_{\pi}^{2})\}_{q=-i\nabla_{y},k=-i\nabla_{x}}$
$\displaystyle\cdot\int_{0}^{1}\frac{d\alpha}{|x-\alpha y|}\int_{0}^{\infty}dk\;\frac{k^{2}}{(k^{2}+m_{\rho}^{2})^{2}}\,J_{1}(k|x-\alpha y|)\,K_{0}(\sqrt{\alpha(1-\alpha)k^{2}+m_{\pi}^{2}}|y|).$ (124)
Without the $(k^{2}+m_{\rho}^{2})^{-2}$ factor, the $k$ integral could be
performed,
$\displaystyle\frac{1}{|x-\alpha
y|}\int_{0}^{\infty}dk\;k^{2}\,J_{1}(k|x-\alpha
y|)\,K_{0}(\sqrt{\alpha(1-\alpha)k^{2}+m_{\pi}^{2}}|y|)$ (125)
$\displaystyle=\frac{m_{\pi}^{2}}{\alpha(1-\alpha)}\,\frac{K_{2}(\frac{m_{\pi}}{\sqrt{\alpha(1-\alpha)}}\sqrt{\alpha(y^{2}-2x\cdot
y)+x^{2}})}{\alpha(y^{2}-2x\cdot y)+x^{2}}.$
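The closed form (125) can be spot-checked numerically, using the identity $\alpha(y^{2}-2x\cdot y)+x^{2}=|x-\alpha y|^{2}+\alpha(1-\alpha)y^{2}$. The kinematic values below are arbitrary, and SciPy is assumed available:

```python
import math
from scipy.integrate import quad
from scipy.special import j1, k0, kn

# Numerical check of the k-integral identity (125).
# a = |x - alpha*y|, b = |y|, beta = alpha*(1 - alpha): toy kinematics.
m, alpha, a, b = 0.5, 0.3, 1.0, 2.0
beta = alpha * (1 - alpha)

def integrand(k):
    return k**2 * j1(k * a) * k0(b * math.sqrt(beta * k**2 + m**2))

# The Bessel-K factor cuts the integrand off exponentially; 60 suffices here.
lhs = quad(integrand, 0.0, 60.0, limit=200)[0] / a

c2 = a**2 + beta * b**2          # equals alpha*(y^2 - 2 x.y) + x^2
rhs = (m**2 / beta) * kn(2, (m / math.sqrt(beta)) * math.sqrt(c2)) / c2

assert abs(lhs - rhs) < 1e-5
```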
Using this result, the integral actually required can be brought into the
following non-oscillatory form,
$\displaystyle\frac{1}{|x-\alpha
y|}\int_{0}^{\infty}dk\;\frac{k^{2}}{(k^{2}+m_{\rho}^{2})^{2}}\,J_{1}(k|x-\alpha
y|)\,K_{0}(\sqrt{\alpha(1-\alpha)k^{2}+m_{\pi}^{2}}|y|)$ (126)
$\displaystyle=-\frac{\partial}{\partial
m_{\rho}^{2}}2\pi^{2}\frac{m_{\pi}^{2}}{\alpha(1-\alpha)}\int_{0}^{\infty}dz\,z^{3}\gamma_{0}^{(m_{\rho})}(|x-\alpha
y|,z)\frac{K_{2}(\frac{m_{\pi}}{\sqrt{\alpha(1-\alpha)}}\sqrt{z^{2}+\alpha(1-\alpha)y^{2}})}{z^{2}+\alpha(1-\alpha)y^{2}},$
where
$\gamma_{n}^{(m)}(|x|,|u|)=\frac{n+1}{2\pi^{2}|x||u|}\Big{(}\theta(|x|-|u|)I_{n+1}(m|u|)K_{n+1}(m|x|)+\theta(|u|-|x|)I_{n+1}(m|x|)K_{n+1}(m|u|)\Big{)}$
(127)
is the $n^{\rm th}$ coefficient in the Gegenbauer polynomial expansion of the
scalar propagator in four dimensions,
$G_{m}(x-u)=\sum_{n\geq
0}\gamma^{(m)}_{n}(|x|,|u|)\,C^{(1)}_{n}(\hat{u}\cdot\hat{x}),$ (128)
with $C^{(1)}_{0}(z)=1$, $C^{(1)}_{1}(z)=2z$, $C^{(1)}_{2}(z)=4z^{2}-1$ etc.
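The expansion (127)–(128) can be verified numerically for the four-dimensional scalar propagator, which in infinite volume reads $G_{m}(x)=\frac{m}{4\pi^{2}|x|}K_{1}(m|x|)$; the mass, radii, and angle below are toy values:

```python
import math
from scipy.special import iv, kv, eval_gegenbauer

# Check the Gegenbauer expansion (127)-(128) of the 4D scalar propagator,
# here in the branch |x| > |u| of Eq. (127).
m, rx, ru, cth = 0.8, 1.5, 0.7, 0.25     # mass, |x|, |u|, cos(angle)

r = math.sqrt(rx**2 + ru**2 - 2 * rx * ru * cth)    # |x - u|
direct = m * kv(1, m * r) / (4 * math.pi**2 * r)

series = sum(
    (n + 1) / (2 * math.pi**2 * rx * ru)
    * iv(n + 1, m * ru) * kv(n + 1, m * rx)         # gamma_n of Eq. (127)
    * eval_gegenbauer(n, 1.0, cth)                  # C_n^(1)(cos theta)
    for n in range(40)
)
assert abs(series - direct) < 1e-10
```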
## Appendix C Tables
This section provides tables for the finite-size corrections applied ensemble-
by-ensemble (Tab. 5), as well as for the correction to reach the reference
point in the $(m_{\pi},m_{K})$ plane (Tab. 6).
Id | $L$ [fm] | $H_{\mu\nu}$ (wrap, Sakurai) | $H_{\mu\nu}^{\textrm{TL}}$ (wrap, Sakurai) | $H_{\mu\nu}^{\textrm{XX}}$ (wrap, Sakurai) | $H_{\mu\nu}$ (wrap, sQED) | $H_{\mu\nu}^{\textrm{TL}}$ (wrap, sQED) | $H_{\mu\nu}^{\textrm{XX}}$ (wrap, sQED) | $H_{\mu\nu}^{\textrm{TL}}$ (trunc., Sakurai)
---|---|---|---|---|---|---|---|---
U102 | 2.1 | -3.859(455) | -2.834(962) | -5.805(1865) | 1.023(320) | -6.62(155) | -10.099(177) | -1.511
H102 | 2.8 | -0.990(103) | -0.759(208) | -1.243(308) | 0.419(9) | -1.424(10) | -1.577(8) | -0.584
S400 | 2.4 | -2.047(255) | -1.479(429) | -2.677(719) | 0.705(28) | -3.075(38) | -3.897(35) | -1.654
N203 | 3.1 | 0.114(26) | -0.200(21) | -0.259(49) | 0.266(30) | -0.747(40) | -0.744(31) | -0.200
N302 | 2.4 | -2.396(310) | -1.693(533) | -3.130(894) | 0.714(163) | -3.611(51) | -4.712(48) | -1.104
Table 5: Results for the wrap-around-the-world corrections for the finite-size effect of discretized momenta, calculated in the Sakurai QFT and in scalar QED, and for the truncation of the integrand, calculated in the Sakurai QFT. See the text in Sect. IV.1 for details. All values are in units of $10^{-10}$. For example, the FSE correction for N302 with the TL kernel is $\textrm{FSE}=-1.693-1.104=-2.797$ for the central value, with an uncertainty $\sigma_{\rm FSE}=\sqrt{0.533^{2}+(0.25\cdot 2.797)^{2}}=0.879$.
Id | isovector (LL) | isovector (CL) | strange (LL) | strange (CL)
---|---|---|---|---
H102 | 0.11(4) | 0.12(4) | -0.5(1) | -0.49(1)
S400 | -0.07(2) | -0.06(2) | -0.27(1) | -0.26(1)
N203 | -0.74(5) | -0.72(5) | -0.21(1) | -0.2(1)
N302 | -0.63(1) | -0.62(1) | 0.22(0) | 0.22(0)
Table 6: Corrections to the reference point $m_{\pi}=350$ MeV, $m_{K}=450$ MeV, determined using calculations based on the TMR method. All values are in units of $10^{-10}$.
* Aoyama _et al._ (2020) T. Aoyama _et al._ , “The anomalous magnetic moment of the muon in the Standard Model,” Phys. Rept. 887, 1–166 (2020), arXiv:2006.04822 [hep-ph] .
* Blum _et al._ (2018b) T. Blum, P. A. Boyle, V. Gülpers, T. Izubuchi, L. Jin, C. Jung, A. Jüttner, C. Lehner, A. Portelli, and J. T. Tsang (RBC, UKQCD), “Calculation of the hadronic vacuum polarization contribution to the muon anomalous magnetic moment,” Phys. Rev. Lett. 121, 022003 (2018b), arXiv:1801.07224 [hep-lat] .
* Colangelo _et al._ (2022) G. Colangelo, A. X. El-Khadra, M. Hoferichter, A. Keshavarzi, C. Lehner, P. Stoffer, and T. Teubner, “Data-driven evaluations of Euclidean windows to scrutinize hadronic vacuum polarization,” Phys. Lett. B 833, 137313 (2022), arXiv:2205.12963 [hep-ph] .
* Wang _et al._ (2022) Gen Wang, Terrence Draper, Keh-Fei Liu, and Yi-Bo Yang (chiQCD), “Muon g-2 with overlap valence fermion,” (2022), arXiv:2204.01280 [hep-lat] .
* Aubin _et al._ (2022) Christopher Aubin, Thomas Blum, Maarten Golterman, and Santiago Peris, “Muon anomalous magnetic moment with staggered fermions: Is the lattice spacing small enough?” Phys. Rev. D 106, 054503 (2022), arXiv:2204.12256 [hep-lat] .
* Cè _et al._ (2022) Marco Cè _et al._ , “Window observable for the hadronic vacuum polarization contribution to the muon $g-2$ from lattice QCD,” (2022), arXiv:2206.06582 [hep-lat] .
* Alexandrou _et al._ (2022) C. Alexandrou _et al._ , “Lattice calculation of the short and intermediate time-distance hadronic vacuum polarization contributions to the muon magnetic moment using twisted-mass fermions,” (2022), arXiv:2206.15084 [hep-lat] .
* Davies _et al._ (2022) C. T. H. Davies _et al._ (Fermilab Lattice, MILC, HPQCD), “Windows on the hadronic vacuum polarization contribution to the muon anomalous magnetic moment,” Phys. Rev. D 106, 074509 (2022), arXiv:2207.04765 [hep-lat] .
* Bernecker and Meyer (2011) David Bernecker and Harvey B. Meyer, “Vector Correlators in Lattice QCD: Methods and applications,” Eur.Phys.J. A47, 148 (2011), arXiv:1107.4388 [hep-lat] .
* Meyer (2017) Harvey B. Meyer, “Lorentz-covariant coordinate-space representation of the leading hadronic contribution to the anomalous magnetic moment of the muon,” Eur. Phys. J. C 77, 616 (2017), arXiv:1706.01139 [hep-lat] .
* Della Morte _et al._ (2017) M. Della Morte, A. Francis, V. Gülpers, G. Herdoíza, G. von Hippel, H. Horch, B. Jäger, H. B. Meyer, A. Nyffeler, and H. Wittig, “The hadronic vacuum polarization contribution to the muon $g-2$ from lattice QCD,” JHEP 10, 020 (2017), arXiv:1705.01775 [hep-lat] .
* Brodsky and De Rafael (1968) Stanley J. Brodsky and Eduardo De Rafael, “Suggested boson - lepton pair couplings and the anomalous magnetic moment of the muon,” Phys. Rev. 168, 1620–1622 (1968).
* Lautrup and De Rafael (1968) B. E. Lautrup and E. De Rafael, “Calculation of the sixth-order contribution from the fourth-order vacuum polarization to the difference of the anomalous magnetic moments of muon and electron,” Phys. Rev. 174, 1835–1842 (1968).
* Lehner and Meyer (2020) Christoph Lehner and Aaron S. Meyer, “Consistency of hadronic vacuum polarization between lattice QCD and the R-ratio,” Phys. Rev. D 101, 074515 (2020), arXiv:2003.04177 [hep-lat] .
* Lahert _et al._ (2022) Shaun Lahert, Carleton DeTar, Aida X. El-Khadra, Elvira Gámiz, Steven Gottlieb, Andreas Kronfeld, Ethan Neil, Curtis T. Peterson, and Ruth Van de Water (Fermilab Lattice, HPQCD, MILC), “Hadronic vacuum polarization of the muon on 2+1+1-flavor HISQ ensembles: an update.” PoS LATTICE2021, 526 (2022), arXiv:2112.11647 [hep-lat] .
* Cè _et al._ (2018) Marco Cè, Antoine Gérardin, Konstantin Ottnad, and Harvey B. Meyer, “The leading hadronic contribution to the running of the Weinberg angle using covariant coordinate-space methods,” PoS LATTICE2018, 137 (2018), arXiv:1811.08669 [hep-lat] .
* Blum _et al._ (2017) Thomas Blum, Norman Christ, Masashi Hayakawa, Taku Izubuchi, Luchang Jin, Chulwoo Jung, and Christoph Lehner, “Connected and Leading Disconnected Hadronic Light-by-Light Contribution to the Muon Anomalous Magnetic Moment with a Physical Pion Mass,” Phys. Rev. Lett. 118, 022005 (2017), arXiv:1610.04603 [hep-lat] .
* Asmussen _et al._ (2019) Nils Asmussen, En-Hung Chao, Antoine Gérardin, Jeremy R. Green, Renwick J. Hudspith, Harvey B. Meyer, and Andreas Nyffeler, “Developments in the position-space approach to the HLbL contribution to the muon $g-2$ on the lattice,” PoS LATTICE2019, 195 (2019), arXiv:1911.05573 [hep-lat] .
* Bruno _et al._ (2015) Mattia Bruno _et al._ , “Simulation of QCD with N${}_{f}=$ 2 $+$ 1 flavors of non-perturbatively improved Wilson fermions,” JHEP 02, 043 (2015), arXiv:1411.3982 [hep-lat] .
* Bhattacharya _et al._ (2006) Tanmoy Bhattacharya, Rajan Gupta, Weonjong Lee, Stephen R. Sharpe, and Jackson M. S. Wu, “Improved bilinears in lattice QCD with non-degenerate quarks,” Phys. Rev. D 73, 034504 (2006), arXiv:hep-lat/0511014 .
* Gerardin _et al._ (2019) Antoine Gerardin, Tim Harris, and Harvey B. Meyer, “Nonperturbative renormalization and $O(a)$-improvement of the nonsinglet vector current with $N_{f}=2+1$ Wilson fermions and tree-level Symanzik improved gauge action,” Phys. Rev. D 99, 014519 (2019), arXiv:1811.08209 [hep-lat] .
* Bruno _et al._ (2017) Mattia Bruno, Tomasz Korzec, and Stefan Schaefer, “Setting the scale for the CLS $2+1$ flavor ensembles,” Phys. Rev. D 95, 074504 (2017), arXiv:1608.08900 [hep-lat] .
* Hirschhorn (2000) Michael D. Hirschhorn, “Partial fractions and four classical theorems of number theory,” The American Mathematical Monthly 107, 260–264 (2000), https://doi.org/10.1080/00029890.2000.12005191 .
* Jegerlehner and Szafron (2011) Fred Jegerlehner and Robert Szafron, “$\rho^{0}-\gamma$ mixing in the neutral channel pion form factor $F_{\pi}^{e}$ and its role in comparing $e^{+}e^{-}$ with $\tau$ spectral functions,” Eur. Phys. J. C 71, 1632 (2011), arXiv:1101.2872 [hep-ph] .
* Meyer (2011) Harvey B. Meyer, “Lattice QCD and the Timelike Pion Form Factor,” Phys. Rev. Lett. 107, 072002 (2011), arXiv:1105.1892 [hep-lat] .
* Francis _et al._ (2013) Anthony Francis, Benjamin Jäger, Harvey B. Meyer, and Hartmut Wittig, “A new representation of the Adler function for lattice QCD,” Phys.Rev. D88, 054502 (2013), arXiv:1306.2532 [hep-lat] .
* Hansen and Patella (2019) Maxwell T. Hansen and Agostino Patella, “Finite-volume effects in $(g-2)^{\text{HVP,LO}}_{\mu}$,” Phys. Rev. Lett. 123, 172001 (2019), arXiv:1904.10010 [hep-lat] .
* Hansen and Patella (2020) Maxwell T. Hansen and Agostino Patella, “Finite-volume and thermal effects in the leading-HVP contribution to muonic ($g-2$),” JHEP 10, 029 (2020), arXiv:2004.03935 [hep-lat] .
* Gounaris and Sakurai (1968) G. J. Gounaris and J. J. Sakurai, “Finite width corrections to the vector meson dominance prediction for $\rho\to e^{+}e^{-}$,” Phys. Rev. Lett. 21, 244–247 (1968).
* Lüscher (2010) Martin Lüscher, “Properties and uses of the Wilson flow in lattice QCD,” JHEP 08, 071 (2010), [Erratum: JHEP 03, 092 (2014)], arXiv:1006.4518 [hep-lat] .
* Fritzsch _et al._ (2022) Patrick Fritzsch, John Bulava, Marco Cè, Anthony Francis, Martin Lüscher, and Antonio Rago, “Master-field simulations of QCD,” PoS LATTICE2021, 465 (2022), arXiv:2111.11544 [hep-lat] .
* Lüscher and Schaefer (2013) Martin Lüscher and Stefan Schaefer, “Lattice QCD with open boundary conditions and twisted-mass reweighting,” Comput. Phys. Commun. 184, 519–528 (2013), arXiv:1206.2809 [hep-lat] .
* Edwards and Joó (2005) Robert G. Edwards and Balint Joó (SciDAC, LHPC, UKQCD), “The Chroma software system for lattice QCD,” _Lattice field theory. Proceedings, 22nd International Symposium, Lattice 2004, Batavia, USA, June 21-26, 2004_ , Nucl. Phys. B (Proc. Suppl.) 140, 832 (2005), arXiv:hep-lat/0409003 .
|
# On the Effectiveness of Parameter-Efficient Fine-Tuning
Zihao Fu,1 Haoran Yang,2 Anthony Man-Cho So,2
Wai Lam,2 Lidong Bing,3 Nigel Collier1
(1Language Technology Lab, University of Cambridge,
2The Chinese University of Hong Kong, 3DAMO Academy, Alibaba Group
)
###### Abstract
Fine-tuning pre-trained models has been ubiquitously proven to be effective in
a wide range of NLP tasks. However, fine-tuning the whole model is parameter-inefficient, as it always yields an entirely new model for each task.
Currently, many research works propose to only fine-tune a small portion of
the parameters while keeping most of the parameters shared across different
tasks. These methods achieve surprisingly good performance and are shown to be
more stable than their corresponding fully fine-tuned counterparts. However,
such methods are still not well understood. Some natural questions
arise: How does the parameter sparsity lead to promising performance? Why is
the model more stable than the fully fine-tuned models? How to choose the
tunable parameters? In this paper, we first categorize the existing methods
into random approaches, rule-based approaches, and projection-based approaches
based on how they choose which parameters to tune. Then, we show that all of
the methods are actually sparse fine-tuned models and conduct a novel
theoretical analysis of them. We indicate that the sparsity is actually
imposing a regularization on the original model by controlling the upper bound
of the stability. Such stability leads to better generalization capability
which has been empirically observed in many recent research works. Despite
the effectiveness of sparsity grounded in our theory, it remains an open problem how to choose the tunable parameters. Currently, the random and
rule-based methods do not utilize task-specific data information while the
projection-based approaches suffer from the projection discontinuity problem.
To better choose the tunable parameters, we propose a novel Second-order
Approximation Method (SAM) which approximates the original problem with an
analytically solvable optimization function. The tunable parameters are
determined by directly optimizing the approximation function. We conduct
extensive experiments on several tasks. The experimental results (the code is available at https://github.com/fuzihaofzh/AnalyzeParameterEfficientFinetune) show that our proposed SAM model outperforms many strong baseline models and also verify our theoretical analysis.
## 1 Introduction
Fine-tuning the model parameters for a specific task on a pre-trained model
Peters et al. (2018); Kenton and Toutanova (2019); Lan et al. (2020); Radford
et al. (2018, 2019); Liu et al. (2019); Brown et al. (2020); Lewis et al.
(2020); Raffel et al. (2020) has become one of the most promising techniques
for NLP in recent years. It achieves state-of-the-art performance on most of
the NLP tasks. However, as the number of parameters grows to billions Brown et al. (2020) or even trillions Fedus et al. (2021), it becomes very inefficient to store the fully fine-tuned parameters He et al. (2021a) for
each downstream task. Many recent research works propose a parameter-efficient
Houlsby et al. (2019); Zaken et al. (2021); He et al. (2021a) way to solve
this problem by tuning only a small part of the original parameters and
storing the tuned parameters for each task.
Apart from the efficiency of the parameter-efficient models, it has also been
observed in many recent research works that the parameter-efficient methods
achieve surprisingly good performance. These models are more stable He et al.
(2021b); Lee et al. (2019); Houlsby et al. (2019); Zaken et al. (2021); Sung
et al. (2021); Liu et al. (2021); Ding et al. (2022) and even achieve better
overall scores than the fully fine-tuned models Lee et al. (2019); Houlsby et
al. (2019); Zaken et al. (2021); Sung et al. (2021); Liu et al. (2021); Xu et
al. (2021); Guo et al. (2021); He et al. (2021a); Ding et al. (2022) on some
tasks. Currently, it remains unclear why parameter-efficient models can improve stability and performance, as observed in many recent works. In this paper,
we first categorize the existing methods into three categories (i.e. random
approaches, rule-based approaches, and projection-based approaches) depending
on how they choose the tunable parameters. Then, we define the generalized
sparse fine-tuned model and illustrate that most of the existing parameter-efficient models are actually sparse fine-tuned models. Afterwards, we
introduce the widely used pointwise hypothesis stability of the sparse fine-
tuned model and show theoretically that the sparsity actually controls the
upper bound of the stability. Based on the stability analysis, we further give
a theoretical analysis of the generalization bound for the sparse fine-tuned
model.
Though promising results have been achieved by existing parameter-efficient models, selecting suitable parameters remains challenging, as it is an NP-hard problem. Currently, the random Lee et al. (2019) and rule-
based Zaken et al. (2021); Han et al. (2015a); Houlsby et al. (2019); Pfeiffer
et al. (2020) approaches propose to optimize fixed parameters. These methods
are straightforward and easy to implement but they do not utilize task-
specific data information. To solve this problem, the projection-based
approaches Mallya et al. (2018); Guo et al. (2021); Xu et al. (2021) propose
to calculate a score for each parameter based on the data and project the
scores onto the parameter selection mask’s feasible region (an $L_{0}$ ball).
However, as the feasible region is non-convex, we will show that such
projection suffers from the projection discontinuity problem which makes the
parameter selection quite unstable. To solve these problems, we propose a
novel Second-order Approximation Method (SAM) to approximate the NP-hard
optimization target function with an analytically solvable function. Then, we
directly choose the parameters based on the optimal value and optimize the
parameters accordingly. We conduct extensive experiments to validate our
theoretical analysis and our proposed SAM model.
Our contributions can be summarized as follows: 1) We propose a new
categorization scheme for existing parameter-efficient methods and generalize
most of these methods with a unified view called the sparse fine-tuned model.
2) We conduct a theoretical analysis of the parameter-efficient models’
stability and generalization. 3) We propose a novel SAM model to choose the
suitable parameters to optimize. 4) We conduct extensive experiments to verify
our theoretical analysis and the SAM model.
## 2 Unified View of Parameter Efficient Fine-tuning
In this section, we first define the unified sparse fine-tuned model which is
simpler and easier for theoretical analysis. Then, we give a unified form of
the optimization target. Afterwards, similar to previous works Ding et al.
(2022); He et al. (2021a); Mao et al. (2021), we categorize these models into
three categories based on how the parameters are chosen. Finally, we show that all of these models are sparse fine-tuned models.
### 2.1 Sparse Fine-tuned Model
We first give the definition of the sparse fine-tuned model as well as a unified
optimization target. The equivalent model is also defined to help understand
the models with modified structures.
###### Definition 1 ($p$-Sparse Fine-tuned Model).
Given a pre-trained model $\operatorname{\mathcal{M}}^{0}$ with parameters
$\theta^{0}$, if a fine-tuned model $\operatorname{\mathcal{M}}$ with
parameters $\theta$ has the same structure as $\operatorname{\mathcal{M}}^{0}$
such that $\|\theta-\theta^{0}\|_{0}\leq p\dim(\theta),p\in(0,1)$, we say the
model $\operatorname{\mathcal{M}}$ is a $p$-sparse fine-tuned model with the
sparsity $p$.
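Definition 1 is easy to check mechanically: count the coordinates where the fine-tuned parameters differ from the pre-trained ones. A minimal numpy sketch (toy values are hypothetical):

```python
import numpy as np

def is_p_sparse(theta0, theta, p):
    """Check Definition 1: theta is a p-sparse fine-tuned model iff
    ||theta - theta0||_0 <= p * dim(theta). (Illustrative sketch.)"""
    return np.count_nonzero(theta - theta0) <= p * theta0.size

theta0 = np.array([0.3, -1.2, 0.8, 2.0, -0.5])   # pre-trained parameters
theta  = np.array([0.3, -1.0, 0.8, 2.1, -0.5])   # only 2 of 5 coordinates changed
```

Here `theta` is 0.4-sparse (2 changed coordinates out of 5) but not 0.2-sparse.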
Many previous works propose different methods of selecting proper parameters
to fine-tune. We unify these methods by denoting $M$ as a mask matrix on the
parameters and the parameter $\theta$ can be denoted as
$\theta=\theta^{0}+M\Delta\theta$, where $\Delta\theta$ is the difference
vector. For a fixed sparsity coefficient $p$, the sparse fine-tuned model is
trying to solve the following problem:
$\displaystyle\min_{\Delta\theta,M}\operatorname{\mathcal{L}}(\theta^{0}+M\Delta\theta)$ (1)
$\displaystyle\text{s.t.}\ \ \|M\|_{0}=\lfloor mp\rfloor;\ \ M_{ij}=0,\ \forall i\neq j;\ \ M_{ii}\in\\{0,1\\},$
where $\lfloor\cdot\rfloor$ is the floor function, $m=\dim(\theta)$ is the
parameter number, $M\in\\{0,1\\}^{m\times m}$ is the parameter mask matrix whose diagonal entries are 0 or 1 and whose off-diagonal entries are 0, and $\operatorname{\mathcal{L}}$ is the loss function. We will show that most of
the existing methods are sparse fine-tuned models. However, in Definition 1,
we assume that the fine-tuned model $\operatorname{\mathcal{M}}$ has the same
structure as $\operatorname{\mathcal{M}}^{0}$. This assumption hinders us from
analyzing many models that alter the structure including Adapter Houlsby et
al. (2019); Pfeiffer et al. (2020); Rücklé et al. (2021); He et al. (2021b),
LoRA Hu et al. (2022), etc. We define the notion of an equivalent model to
solve this problem.
###### Definition 2 (Equivalent Model).
Given a pre-trained model $\operatorname{\mathcal{M}}^{0}$ with parameters
$\theta^{0}$, we say that a model $\tilde{\operatorname{\mathcal{M}}^{0}}$
with parameters $\tilde{\theta}^{0}$ is an equivalent model for model
$\operatorname{\mathcal{M}}^{0}$ if $\forall
x,\operatorname{\mathcal{M}}^{0}(x)=\tilde{\operatorname{\mathcal{M}}^{0}}(x)$.
Here, we do not require that the equivalent model shares the same structure as
the original model. As a result, for models fine-tuned with additional
structures (e.g. Adapter and LoRA), we can still get a sparse fine-tuned model
with respect to an equivalent model $\tilde{\operatorname{\mathcal{M}}^{0}}$
instead of the original pre-trained model $\operatorname{\mathcal{M}}^{0}$.
Therefore, our analysis for the sparse fine-tuned model is also applicable to
them.
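The unified masked update $\theta=\theta^{0}+M\Delta\theta$ from Problem (1) can be sketched in a few lines of numpy (an illustrative sketch with hypothetical toy values; the mask is stored as a vector holding the diagonal of $M$):

```python
import numpy as np

def sparse_finetune_step(theta0, delta, grad, mask, lr=0.1):
    """One gradient step on the difference vector Delta-theta with a fixed
    0/1 mask (the diagonal of M). Implements theta = theta0 + M * delta."""
    delta = delta - lr * (mask * grad)      # only masked coordinates move
    return theta0 + mask * delta, delta

theta0 = np.array([1.0, -2.0, 0.5, 3.0])    # pre-trained parameters
mask   = np.array([1.0, 0.0, 1.0, 0.0])     # p = 0.5: half the parameters tunable
delta  = np.zeros_like(theta0)
grad   = np.array([0.2, -0.4, 0.6, 0.8])    # toy loss gradient

theta, delta = sparse_finetune_step(theta0, delta, grad, mask)
# masked-out coordinates keep their pre-trained values exactly
```

Only `delta` (and the mask choice) must be stored per task; `theta0` is shared across all tasks.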
### 2.2 Parameter Efficient Fine-tuning as Sparse Fine-tuned Model
Figure 1: Equivalent model for Adapter (a) and LoRA (b).
Unfortunately, Problem (1) is NP-hard due to the nonconvexity of the feasible
region of the matrix $M$. Many existing methods propose to solve this problem
by first estimating $M$ and then optimizing other parameters. Based on
different strategies for choosing $M$, the methods can be divided into three
categories, namely, random approaches, rule-based approaches, and projection-
based approaches. We first give a general introduction to the prevalent parameter-efficient fine-tuning methods in each category and then show that all of these methods are actually sparse fine-tuned models. In the next section, we prove our theory based only on the properties in Definition 1, without referring to any specific model's properties.
#### 2.2.1 Random Approaches
Random approaches include the Random and Mixout models. These models randomly choose the parameters to be tuned; the selection does not depend on task-specific data information. Specifically, the Random model randomly selects the parameters with respect to a given sparsity ratio and then trains only the selected parameters. Therefore, according to Definition
1, it is a sparse fine-tuned model. Mixout Lee et al. (2019) proposes to
directly reset a portion of the fine-tuned model’s parameters to the pre-
trained parameters with respect to a ratio. Therefore, according to Definition
1, it is a sparse fine-tuned model.
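A random mask of the kind these approaches use can be drawn as follows (an illustrative sketch; function and parameter names are hypothetical):

```python
import numpy as np

def random_mask(m, p, seed=0):
    """Random approach: choose floor(m*p) tunable coordinates uniformly at
    random, independently of any task-specific data."""
    rng = np.random.default_rng(seed)
    k = int(np.floor(m * p))
    mask = np.zeros(m)
    mask[rng.choice(m, size=k, replace=False)] = 1.0
    return mask

mask = random_mask(m=10, p=0.3)   # exactly 3 of 10 parameters are tunable
```

Any model trained under such a fixed mask satisfies Definition 1 by construction.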
#### 2.2.2 Rule-Based Approaches
The rule-based approaches include BitFit, MagPruning, Adapter, and LoRA. These methods directly use a pre-defined rule to fix the parameters to be
tuned. It can be viewed as incorporating prior knowledge to recognize
important features and can thus alleviate the problem of random approaches.
However, the selection rules are still irrelevant to the specific data.
Specifically, BitFit Zaken et al. (2021) only fine-tunes the bias-terms and
achieves surprisingly good performance. Therefore, according to Definition 1,
it is a sparse fine-tuned model with pre-defined tuning weights. MagPruning
Han et al. (2015a, b); Lee et al. (2021); Lagunas et al. (2021) follows the
idea that large weights are more important in the model. It ranks the weights
by the absolute value and tunes the parameters with high absolute values.
Therefore, according to Definition 1, it is a sparse fine-tuned model. Adapter
Houlsby et al. (2019); Pfeiffer et al. (2020); Rücklé et al. (2021); He et al.
(2021b); Karimi Mahabadi et al. (2021); Kim et al. (2021); Mahabadi et al.
(2021) proposes to add an adapter layer inside the transformer layer.
Therefore, the model structure is different from the original model. To make
it easier to analyze, Adapter can be viewed as fine-tuning an equivalent model
shown in Fig. 1 (a) which initializes the matrix $A$ as an all-zero matrix.
The equivalent model has the same output as the original pre-trained model for
arbitrary input while the structure is the same as the Adapter model.
Therefore, fine-tuning the adapter model can be viewed as fine-tuning partial
parameters of the equivalent model with the same structure. According to
Definition 1, it is a sparse fine-tuned model with respect to the equivalent
model. LoRA Hu et al. (2022); Karimi Mahabadi et al. (2021); Panahi et al.
(2021) proposes to add a new vector obtained by recovering a hidden vector from a lower-dimensional space. The model is illustrated in Fig. 1 (b). It is
interesting to notice that the original initialization makes the LoRA model
already an equivalent model for the original pre-trained model as the matrix
$B$ is set to 0. Therefore, according to Definition 1, fine-tuning a LoRA
model can also be viewed as fine-tuning partial parameters of the equivalent
model with the same structure.
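The zero-initialization argument for LoRA can be checked directly: with the up-projection $B$ initialized to zero, the augmented weight computes exactly the same map as the pre-trained one, so it is an equivalent model in the sense of Definition 2. A minimal numpy sketch (toy dimensions are hypothetical; only the added branch is shown, not a full transformer):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                       # hidden size and LoRA rank (toy values)
W0 = rng.normal(size=(d, d))      # frozen pre-trained weight
A  = rng.normal(size=(r, d))      # down-projection, random init
B  = np.zeros((d, r))             # up-projection, zero init
x  = rng.normal(size=d)

y_pretrained = W0 @ x
y_lora = (W0 + B @ A) @ x         # identical output for every input x
```

Fine-tuning then updates only the $B$ and $A$ parameters of this equivalent model, which is a sparse subset of its parameters.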
#### 2.2.3 Projection-Based Approaches
To utilize the task-specific data to help select the model’s tunable
parameters, many researchers propose projection-based approaches, including DiffPruning, ChildPruning, etc. These methods propose to choose the
optimal parameter mask $M$ and optimize the parameters $\theta$ alternately to
solve Problem (1). Specifically, they first relax $M$ as a continuous variable
to get an optimized value and then project the optimized value onto the
feasible region which can be denoted as
$\hat{M}=\Pi_{\Omega}(M)=\arg\min_{\hat{M}\in\Omega}\|\hat{M}-M\|$, where
$\Omega=\\{M|\|M\|_{0}=\lfloor mp\rfloor;M_{ij}=0,\forall i\neq
j;M_{ii}\in\\{0,1\\}\\}$ and $\Pi_{\Omega}$ denotes the projection operator
onto the feasible region $\Omega$ which is an $L_{0}$ ball. Specifically,
DiffPruning Mallya et al. (2018); Sanh et al. (2020); Guo et al. (2021);
Lagunas et al. (2021) proposes to model the parameter selection mask as a
Bernoulli random variable and optimize the variable with a reparametrization
method. It then projects the mask onto $M$’s feasible region $\Omega$ and performs the optimization alternately. Therefore, according to Definition 1, it is also
a sparse fine-tuned model. ChildPruning Xu et al. (2021); Mostafa and Wang
(2019) proposes to iteratively train the full model parameters and then
calculates the projected mask to find the child network. Therefore, it also
agrees with the sparse fine-tuned model’s definition.
Projection Discontinuity Problem. Though projection-based methods can utilize task-specific data information, they suffer from the
projection discontinuity problem. Specifically, the feasible region $\Omega$
(the $L_{0}$ ball) of $M$ is non-convex. Therefore, the projection does not have the non-expansion property that is guaranteed for projections onto closed convex sets. As a result, a small perturbation of $M$ can lead to a totally
different projection. For example, as illustrated in Fig. 2, suppose that
$p=0.5$ and
$M_{1}=\operatorname{diag}\\{0.99,1\\},M_{2}=\operatorname{diag}\\{1,0.99\\}$.
Though $M_{1}\approx M_{2}$, we have
$\Pi_{\Omega}(M_{1})=\operatorname{diag}\\{0,1\\}$ while
$\Pi_{\Omega}(M_{2})=\operatorname{diag}\\{1,0\\}$, which is quite different.
Consequently, the projection is very sensitive to parameter-update noise. As a result, the selection found at one step is hard to keep consistent with the previous one, leading to large changes in which parameters are selected. Such inconsistency impairs the overall performance.
Figure 2: Projection discontinuity problem.
## 3 Theoretical Analysis of the Sparse Fine-tuned Model
Suppose we have a pre-trained model $\operatorname{\mathcal{M}}^{0}$ with parameters $\theta^{0}$ and fine-tune the sparse fine-tuned model $\operatorname{\mathcal{M}}$ by updating only $\lfloor pm\rfloor$ parameters.
We will first show that sparsity implies a regularization of the original
model. Then, we prove that if a model is a sparse fine-tuned model, the model
stability can benefit from the sparsity. Next, we give a theoretical analysis
of the model generalization error bound and show that sparsity contributes to
reducing the generalization error. It should be noted that in the proofs, we
only use properties from Definition 1. Therefore, our theory is applicable to
all model categories (random approaches, rule-based approaches, and
projection-based approaches) that agree with Definition 1.
### 3.1 Sparse Fine-tuned Model as a Regularizer
As analyzed in Section 2.2, most of the models choose the parameter mask $M$
with different approaches and optimize the parameters $\theta$ accordingly.
Here, we treat the matrix $M$ as a given parameter and denote
$\theta=\theta^{0}+M\Delta\theta$. The sparse fine-tuned optimization in
Problem (1) can be reformulated as:
$\displaystyle\min_{\theta}\mathcal{L}(\theta)$ (2)
$\displaystyle\text{s.t.}\ \ \|(I-M)(\theta-\theta^{0})\|^{2}=0,$
where $M=\operatorname{diag}\\{M_{11},\cdots,M_{mm}\\}$ is a diagonal matrix
with $M_{ii}\in\\{0,1\\}$. By Lagrangian duality, solving Problem (2) is
equivalent to solving the following problem:
$\small\bar{L}=\min_{\theta}\max_{\lambda}\mathcal{L}(\theta)+\lambda\|(I-M)(\theta-\theta^{0})\|^{2}.$
(3)
Then, we derive a new regularized problem with the following proposition.
###### Proposition 1.
Optimizing Problem (2) implies optimizing the upper bound $\bar{L}$ of the
following regularized problem:
$L_{R}=\min_{\theta}\mathcal{L}(\theta)+\|(I-M)(\theta-\theta^{0})\|^{2}\leq\bar{L}.$
(4)
The proof can be found in Appendix A.1. It can be concluded that optimizing
Problem (2) is the same as optimizing the upper bound of the original loss
function $\mathcal{L}(\theta)$ with a regularization term
$\|(I-M)(\theta-\theta^{0})\|^{2}$. We will show later that such
regularization contributes to the stability of the sparse fine-tuned model.
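A one-line sketch of why Proposition 1 holds (the full proof is in Appendix A.1): write $c(\theta)=\|(I-M)(\theta-\theta^{0})\|^{2}\geq 0$; then for every fixed $\theta$,

```latex
\mathcal{L}(\theta) + c(\theta)
  \;\leq\; \max_{\lambda}\,\bigl[\mathcal{L}(\theta) + \lambda\, c(\theta)\bigr]
  \qquad (\text{the right-hand side dominates the choice } \lambda = 1),
```

and taking $\min_{\theta}$ of both sides yields $L_{R}\leq\bar{L}$.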
### 3.2 Stability Analysis
Stability has been studied in many previous research works Bousquet and
Elisseeff (2002); Shalev-Shwartz et al. (2010); Shalev-Shwartz and Ben-David
(2014); Hardt et al. (2016); Kuzborskij and Lampert (2018); Charles and
Papailiopoulos (2018); Fu et al. (2021) in many different forms. We focus on
one of the commonly used notions, namely, the Pointwise Hypothesis Stability
(PHS) which focuses on analyzing the change of model output after a training
sample is removed. Following Charles and Papailiopoulos (2018), we denote the
original training data as $S=\\{z_{1},\cdots,z_{n}\\}$ and the dataset without
one sample as $S^{i}=S\backslash
z_{i}=\\{z_{1},\cdots,z_{i-1},z_{i+1},\cdots,z_{n}\\}$, where $z_{i}$ is the
$i$th training sample. We also define $i\sim U(n)$ as a sampling procedure
from a uniform distribution with $n$ samples. $\operatorname{\mathcal{A}}(S)$
is defined as model parameters obtained by running algorithm
$\operatorname{\mathcal{A}}$ on data $S$.
###### Definition 3 (Pointwise Hypothesis Stability, Bousquet and Elisseeff
(2002)).
We say that a learning algorithm $\operatorname{\mathcal{A}}$ has pointwise
hypothesis stability $\epsilon$ with respect to a loss function $\ell$, if
$\small\mathbb{E}_{S,i\sim
U(n)}[|\ell(\operatorname{\mathcal{A}}(S^{i}),z_{i})-\ell(\operatorname{\mathcal{A}}(S),z_{i})|]\leq\epsilon.$
(5)
Here, $\ell(\theta,z_{i})$ is the single sample loss for $z_{i}$ when the
model parameter is $\theta$. We assume that
$\operatorname{\mathcal{A}}(S^{i})$ is close to
$\operatorname{\mathcal{A}}(S)$. As $\operatorname{\mathcal{A}}(S)$ is the
optimal solution, the Hessian matrix at $\operatorname{\mathcal{A}}(S)$ is a
positive-semidefinite matrix. We can derive our bound for PHS in the following
theorem.
###### Theorem 1 (Stability).
If the loss function $\ell$ is $\rho-$Lipschitz,
$\operatorname{\mathcal{A}}(S^{i})$ is close to
$\operatorname{\mathcal{A}}(S)$, the Hessian matrix
$\nabla^{2}\mathcal{L}(\operatorname{\mathcal{A}}(S))$ at
$\operatorname{\mathcal{A}}(S)$ is positive-semidefinite with a singular value
decomposition $U\operatorname{diag}(\Lambda)U^{-1}$,
$\Lambda=\\{\Lambda_{1},\cdots,\Lambda_{m}\\}$ and
$\Lambda_{min}=\min\\{\Lambda_{1},\cdots,\Lambda_{m}\\}$, then the expectation
of the loss $\mathbb{E}_{M}L_{R}$ has a pointwise hypothesis stability as:
$\mathbb{E}_{S,i\sim
U(n)}[|\ell(\operatorname{\mathcal{A}}(S^{i}),z_{i})-\ell(\operatorname{\mathcal{A}}(S),z_{i})|]\leq\frac{2\rho^{2}}{(\Lambda_{min}+2(1-p))n}.$
(6)
The proof can be found in Appendix A.2. It can be observed from Theorem 1 that
as the sparsity parameter $p$ decreases, the upper bound also decreases.
Therefore, sparse models imply better stability, which explains most of the
empirical results observed in many recent works He et al. (2021b); Lee et al.
(2019); Houlsby et al. (2019); Zaken et al. (2021); Sung et al. (2021); Liu et
al. (2021); Ding et al. (2022). It should also be noted that if $p$ is small
enough, the upper bound will not change significantly as $p$ continues to
decrease. This is because in this case, the denominator is dominated by
$\Lambda_{min}$ which is related to the landscape of the function.
Empirically, if the sparsity is too small, the landscape will heavily depend
on how the parameters are chosen and thus the stability is impaired.
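To make the behaviour of the bound in Eq. (6) concrete, the following sketch evaluates it numerically; the values of $\rho$, $\Lambda_{min}$, and $n$ are illustrative assumptions of ours, not taken from the paper:

```python
# Pointwise-hypothesis-stability bound of Eq. (6); rho, lambda_min, and n
# are hypothetical values chosen only for illustration.
def phs_bound(p, rho=1.0, lambda_min=0.5, n=10_000):
    """Evaluate 2*rho^2 / ((Lambda_min + 2*(1 - p)) * n)."""
    return 2 * rho**2 / ((lambda_min + 2 * (1 - p)) * n)

bounds = {p: phs_bound(p) for p in (0.5, 0.1, 0.01, 0.001)}

# The bound shrinks as the sparsity parameter p decreases...
assert bounds[0.001] < bounds[0.01] < bounds[0.1] < bounds[0.5]
# ...but saturates once Lambda_min dominates the denominator.
assert abs(phs_bound(0.01) - phs_bound(0.001)) < 1e-6
```

With $\Lambda_{min}=0.5$, moving $p$ from $0.01$ to $0.001$ changes the bound by less than $10^{-6}$, matching the saturation effect discussed above.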
### 3.3 Generalization Analysis
With the bound for the stability, we can then get the generalization error
bound for the sparse fine-tuned model.
###### Theorem 2 (Generalization).
We denote the generalization error as
$R(\operatorname{\mathcal{A}},S)=\mathbb{E}_{z}\ell(\operatorname{\mathcal{A}}(S),z)$
and the empirical error as
$\hat{R}(\operatorname{\mathcal{A}},S)=\frac{1}{n}\sum_{i=1}^{n}\ell(\operatorname{\mathcal{A}}(S),z_{i})$.
Then, for some constant $C$, with probability $1-\delta$ we have
$R(\operatorname{\mathcal{A}},S)\leq\hat{R}(\operatorname{\mathcal{A}},S)+\sqrt{\frac{C^{2}+\frac{24C\rho^{2}}{\Lambda_{min}+2(1-p)}}{2n\delta}}.$
(7)
The proof can be found in Appendix A.3. This result shows that the
generalization error upper bound becomes smaller as the fine-tuned parameters
become sparser. Intuitively, if a model is stable, a perturbation has less
effect on it, so the model is less likely to overfit. Note, however, that the
generalization error bound is determined by both the empirical error
$\hat{R}(\operatorname{\mathcal{A}},S)$ and the sparsity. Therefore, as the
mask becomes sparser, even though the second term decreases, the training
error $\hat{R}(\operatorname{\mathcal{A}},S)$ may increase once there are too
few tunable parameters to fit the data. Consequently, as the sparsity
decreases, the generalization error will first decrease and then increase. We
examine this conjecture in the experiments.
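This trade-off can be illustrated numerically. The sketch below evaluates the second term of Eq. (7) together with a toy stand-in for the empirical error; the constants and the form of `empirical_error` are our own assumptions, chosen only to exhibit the first-decrease-then-increase shape:

```python
import math

def gen_gap(p, C=1.0, rho=1.0, lambda_min=0.5, n=10_000, delta=0.05):
    """Second term of Eq. (7):
    sqrt((C^2 + 24*C*rho^2 / (Lambda_min + 2*(1 - p))) / (2*n*delta))."""
    return math.sqrt(
        (C**2 + 24 * C * rho**2 / (lambda_min + 2 * (1 - p))) / (2 * n * delta)
    )

def empirical_error(p):
    """Hypothetical empirical error: grows when too few parameters are tunable."""
    return 0.05 + 1e-5 / p

# Upper bound on the generalization error, as in Eq. (7).
risk = {p: empirical_error(p) + gen_gap(p) for p in (0.2, 0.02, 0.002, 0.0002)}

# As sparsity decreases, the total first decreases, then increases.
assert risk[0.02] < risk[0.2]
assert risk[0.02] < risk[0.002] < risk[0.0002]
```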
## 4 Second-order Approximation Method
In Section 3, we theoretically prove the effectiveness of sparsity in fine-
tuning. However, it remains an open question how to choose the tunable
parameters. As discussed in Section 2.2, the random and rule-based approaches
are robust to noise perturbation because the tunable parameters are fixed
during training. However, these methods tune the same parameters on all kinds
of tasks without utilizing the information in the task-specific data. The
projection-based approaches, on the other hand, fully utilize the data
information but suffer from the projection discontinuity problem: noise in the
parameters may frequently change which parameters are selected, making the
optimization procedure unstable.
To address both problems, we propose a novel Second-order Approximation Method
(SAM) that utilizes the data information to help decide the parameter mask
while avoiding the projection discontinuity problem. Instead of choosing the
parameters randomly or by simple rules, we propose a novel second-order
approximation of Problem (1) that makes the optimization target analytically
solvable. We then directly obtain the optimal solution for the parameter mask
$M$ and fix the mask while training the other parameters $\theta$.
Specifically, as indicated by Radiya-Dixit and Wang (2020), the fine-tuned
parameters are close to the pre-trained parameters. We can approximate the
loss function with its second-order Taylor expansion as
$\mathcal{L}(\theta^{0}+M\Delta\theta)\approx\operatorname{\mathcal{L}}(\theta^{0})+\nabla\operatorname{\mathcal{L}}(\theta^{0})^{\mathrm{T}}M\Delta\theta+\frac{1}{2}(M\Delta\theta)^{\mathrm{T}}HM\Delta\theta.$
Unfortunately, the Hessian matrix $H$ is expensive to compute especially for a
large neural model. To solve this problem, we adopt the widely used technique
Dembo et al. (1982); Ricotti et al. (1988); Bishop and Nasrabadi (2006);
Bollapragada et al. (2019); Xu et al. (2020); Yao et al. (2021b, a) of
approximating the Hessian matrix with a diagonal matrix denoted as
$H=\operatorname{diag}\\{h_{1},h_{2},\cdots,h_{n}\\}$. We also assume that $H$
is positive semidefinite, as the pre-trained weights are close to the global
minimizer Radiya-Dixit and Wang (2020) of each downstream task. Then, Problem
(1) can be reformulated as:
$\displaystyle\min_{\Delta\theta}\ \operatorname{\mathcal{L}}(\theta^{0})+\nabla\operatorname{\mathcal{L}}(\theta^{0})^{\mathrm{T}}M\Delta\theta+\frac{1}{2}(M\Delta\theta)^{\mathrm{T}}HM\Delta\theta$
(8)
$\displaystyle\text{s.t.}\ \ \|M\|_{0}=\lfloor mp\rfloor;\ \ M_{ij}=0,\ \forall i\neq j;\ \ M_{ii}\in\\{0,1\\}.$
With the above setup, we can get the optimal parameter mask $M$ for Problem
(8) based on the following theorem:
###### Theorem 3.
If
$\hat{M}_{ii}=\mathds{1}(\sum_{j=1}^{m}\mathds{1}(|\frac{\nabla\operatorname{\mathcal{L}}(\theta^{0})_{i}^{2}}{h_{i}}|>|\frac{\nabla\operatorname{\mathcal{L}}(\theta^{0})_{j}^{2}}{h_{j}}|)\geq
m-\lfloor mp\rfloor)$, where
$\nabla\operatorname{\mathcal{L}}(\theta^{0})_{i}$ is the $i$th element of the
gradient vector $\nabla\operatorname{\mathcal{L}}(\theta^{0})$, then
$\inf_{\Delta\theta}\operatorname{\mathcal{L}}(\theta^{0}+\hat{M}\Delta\theta)\leq\inf_{\begin{subarray}{c}\Delta\theta,\|M\|_{0}=\lfloor
mp\rfloor;\\\ M_{ij}=0,\forall i\neq
j;M_{ii}\in\\{0,1\\}\end{subarray}}\operatorname{\mathcal{L}}(\theta^{0}+M\Delta\theta).$
(9)
The proof can be found in Appendix A.4. Selecting parameters according to
Theorem 3 attains the minimal value of the approximation in Problem (8). The
remaining question is how to calculate the diagonal of the Hessian matrix;
unfortunately, computing the diagonal of the Hessian is as expensive as
computing the full Hessian. To address this, instead of minimizing the target
function in Problem (8), we propose to optimize its upper bound
$\displaystyle\min_{\Delta\theta}\ \operatorname{\mathcal{L}}(\theta^{0})+\nabla\operatorname{\mathcal{L}}(\theta^{0})^{\mathrm{T}}M\Delta\theta+\frac{1}{2}(M\Delta\theta)^{\mathrm{T}}DM\Delta\theta$
(10)
$\displaystyle\text{s.t.}\ \ \|M\|_{0}=\lfloor mp\rfloor;\ \ M_{ij}=0,\ \forall i\neq j;\ \ M_{ii}\in\\{0,1\\},$
where
$D=\text{diag}\\{|\lambda_{max}|,|\lambda_{max}|,\cdots,|\lambda_{max}|\\}$
and $\lambda_{max}$ is the maximal eigenvalue of $H$. This follows directly
from the Rayleigh quotient: for all $x\neq 0$, $x^{T}Hx\leq
\lambda_{max}x^{T}x\leq|\lambda_{max}|x^{T}x=x^{T}Dx$. The resulting SAM
algorithm is therefore straightforward, based on Theorem 3. We first compute
the gradient $\nabla\operatorname{\mathcal{L}}(\theta^{0})_{i}$ for the $i$th
parameter $\theta_{i}$. We then calculate
$|\nabla\operatorname{\mathcal{L}}(\theta^{0})_{i}^{2}|$ and take the top
$\lfloor mp\rfloor$ parameters to optimize. The selection of parameters is
kept fixed during the optimization procedure.
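The selection step can be sketched in a few lines of NumPy; this is our own illustration of the rule above, not the authors' implementation, and the array `grad` stands in for the accumulated gradient $\nabla\mathcal{L}(\theta^{0})$:

```python
import numpy as np

def sam_mask(grad, p):
    """Diagonal 0/1 mask selecting the top floor(m*p) coordinates by squared
    gradient, i.e. the SAM rule of Theorem 3 with H approximated by
    |lambda_max| * I (so the per-coordinate denominators cancel)."""
    m = grad.size
    k = int(np.floor(m * p))
    mask = np.zeros(m, dtype=int)
    if k > 0:
        top = np.argsort(grad**2)[-k:]  # indices of the k largest grad_i^2
        mask[top] = 1
    return mask

grad = np.array([0.1, -2.0, 0.3, 1.5, -0.05])
mask = sam_mask(grad, p=0.4)            # floor(5 * 0.4) = 2 parameters kept
assert mask.sum() == 2
assert mask[1] == 1 and mask[3] == 1    # -2.0 and 1.5 have the largest squares
```

During training, only the coordinates with `mask[i] == 1` receive updates; the mask itself never changes.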
## 5 Experiments
### 5.1 Experimental Setup
Following most previous works Phang et al. (2018); Lee et al. (2019); Dodge et
al. (2020); Xu et al. (2021), we use the original development set as the test
set to report scores, since the original test sets are only available via the
leaderboard with a limited number of submissions. Unlike many previous works
that train models without validation, we split the original training set by
randomly sampling 10% as a new development set and use the remaining 90% of
the samples to train the model. Instead of training for a fixed number of
epochs, we use the new development set for early stopping, setting the
tolerance to 40 for all models. We build our models with the jiant
(https://jiant.info/) Phang et al. (2020) framework and test them on several
GLUE Wang et al. (2018) and SuperGLUE Wang et al. (2019) tasks.
Following the setting of Lee et al. (2019); Xu et al. (2021), we choose
several tasks including Corpus of Linguistic Acceptability (CoLA) Warstadt et
al. (2019), Semantic Textual Similarity Benchmark (STS-B) Cer et al. (2017),
Microsoft Research Paraphrase Corpus (MRPC) Dolan and Brockett (2005),
Recognizing Textual Entailment (RTE) Dagan et al. (2005); Giampiccolo et al.
(2007); Bentivogli et al. (2009), Commitment Bank (CB) De Marneffe et al.
(2019), Choice of Plausible Alternatives (COPA) Roemmele et al. (2011), and
Winograd Schema Challenge (WSC) Levesque et al. (2012). We compare our model
with many strong baseline models including Random, Mixout, BitFit, MagPruning,
Adapter, LoRA, DiffPruning, and ChildPruning. The details of these models have
been extensively discussed in Section 2.2 and we adopt the same evaluation
methods as Wang et al. (2018, 2019) to evaluate the models. We run each
experiment 10 times with different random seeds and report the scores with
corresponding standard deviations. As many previous experiments were conducted
under different settings, we re-implement all the baseline models in the jiant
framework for a fair comparison. For the Adapter and LoRA models, we
incorporate AdapterHub (https://adapterhub.ml/) Pfeiffer et al. (2020) and
loralib (https://github.com/microsoft/LoRA) into jiant. Following the setting
of Guo et al. (2021), we set the sparsity to 0.005 for all models. In SAM, we
calculate $\nabla\mathcal{L}(\theta^{0})_{i}$ by accumulating the gradient
over a few burn-in steps, since we cannot load all the training data into
memory; the number of burn-in steps is chosen from
$\\{500,600,700,800,900,1000,2000\\}$ on the development set as a hyper-
parameter. For CoLA and MRPC, we set the burn-in steps to 600; for STS-B, to
1000; for RTE, CB, and COPA, to 700; and for WSC, to 2000. We fine-tune the
models based on RoBERTa-base Liu et al. (2019) provided by the transformers
(https://huggingface.co/docs/transformers/model_doc/roberta) toolkit Wolf et
al. (2020), and run the models on an NVIDIA TITAN RTX GPU with 24GB of memory.
Model | CoLA | STS-B | MRPC | RTE | CB | COPA | WSC | AVG
---|---|---|---|---|---|---|---|---
FullTuning | 58.36±1.74 | 89.80±0.52 | 89.55[1]±0.81 | 76.03±2.14 | 88.93[2]±2.37[2] | 67.70±4.41 | 53.10±6.18 | 74.78±2.60
Random | 58.35±1.05[2] | 89.81±0.11[1] | 88.73±0.80 | 72.71±3.23 | 90.54[1]±3.39 | 68.80±2.64 | 52.88±5.97 | 74.55±2.46
MixOut | 58.66±1.96 | 90.15[3]±0.17 | 88.69±0.60[3] | 77.55[1]±1.64[1] | 86.51±4.13 | 71.30±4.84 | 52.98±6.78 | 75.12[3]±2.88
Bitfit | 56.67±1.45 | 90.12±0.14[3] | 87.35±0.58[2] | 72.74±2.47 | 86.96±3.20 | 71.20±3.79 | 55.10±5.39 | 74.31±2.43
MagPruning | 56.57±2.47 | 90.30[2]±0.14[3] | 88.09±0.79 | 73.53±1.84[3] | 81.25±3.50 | 71.50[3]±2.46[2] | 55.67±2.73[1] | 73.85±1.99[2]
Adapter | 62.11[1]±1.22[3] | 90.05±0.13[2] | 89.29[3]±0.60[3] | 76.93[3]±2.05 | 87.32±4.62 | 69.50±2.54[3] | 57.02[2]±5.27 | 76.03[2]±2.35
LoRA | 60.88[3]±1.48 | 87.19±0.51 | 89.53[2]±0.62 | 76.97[2]±1.92 | 84.64±3.76 | 69.70±2.83 | 56.84[3]±4.52 | 75.11±2.24[3]
DiffPruning | 58.53±1.49 | 89.59±0.34 | 78.79±6.09 | 69.93±7.87 | 86.25±2.65[3] | 72.10[2]±2.91 | 53.37±3.60[3] | 72.65±3.57
ChildPruning | 60.00±1.29 | 89.97±1.51 | 87.19±3.86 | 75.76±4.38 | 86.61±3.22 | 69.40±4.00 | 55.59±3.81 | 74.93±3.15
SAM | 60.89[2]±0.96[1] | 90.59[1]±0.14[3] | 88.84±0.49[1] | 76.79±1.72[2] | 88.93[2]±1.75[1] | 74.30[1]±2.45[1] | 59.52[1]±3.08[2] | 77.12[1]±1.51[1]
Table 1: Main experiment. We run each experiment 10 times with different
random seeds and report means and standard deviations. The bracketed number is
the rank of the score within the corresponding column. Due to the space limit,
we attach the training time analysis and the significance test in Appendices
A.5 and A.7.
### 5.2 Experimental Results
Main Experiment. The main experimental results are shown in Table 1. We draw
the following conclusions: (1) Most of the parameter-efficient models achieve
better performance than the FullTuning model, consistent with the observations
of many previous works; this supports our theoretical analysis in Theorem 2
that parameter-efficient models have better generalization capability. (2)
Most of the parameter-efficient models are more stable than the FullTuning
model, again consistent with many previous empirical results, which supports
our stability analysis in Theorem 1. (3) Interestingly, even the Random model
outperforms the FullTuning model, showing that sparsity itself contributes to
improving performance. (4) Our proposed SAM model outperforms the baseline
models on several tasks and ranks in the top 3 on most tasks, validating the
effectiveness of the parameter selection method of Theorem 3. Due to the space
limit, we attach the training time analysis and the significance test in
Appendices A.5 and A.7.
Figure 3: Projection discontinuity problem.

Figure 4: Relation between stability and overall performance.

Figure 5: Effectiveness of sparsity.

Model | CoLA | STS-B | MRPC | RTE | CB | COPA | WSC | AVG
---|---|---|---|---|---|---|---|---
FullTuning | 60.74[2]±1.89 | 90.11[3]±0.26 | 88.74[3]±1.08 | 75.37[3]±1.93 | 84.29±4.21 | 69.60±2.94 | 54.81±7.51 | 74.81±2.83
Random | 56.00±1.84 | 89.79±0.20 | 88.57±0.72[2] | 73.00±2.01 | 89.29[2]±4.92 | 70.30±2.69[3] | 56.87±4.29 | 74.83±2.38
MixOut | 60.37[3]±1.33 | 90.11[3]±0.13[3] | 88.50±0.78[3] | 74.51±1.28[2] | 83.75±3.14[3] | 69.40±4.80 | 57.88±6.15 | 74.93[3]±2.52
Bitfit | 55.26±0.78[1] | 89.98±0.15 | 86.87±1.27 | 71.36±1.71 | 91.29[1]±2.27[1] | 71.80[2]±3.92 | 55.29±9.90 | 74.55±2.86
MagPruning | 56.45±1.80 | 90.26[2]±0.11[1] | 87.35±0.85 | 72.24±2.14 | 84.46±3.58 | 69.20±3.54 | 59.71[1]±3.88[2] | 74.24±2.27[2]
Adapter | 60.05±1.88 | 89.92±0.19 | 88.79[1]±0.80 | 74.55±1.80 | 86.61±4.97 | 68.80±2.40[1] | 55.63±7.53 | 74.91±2.79
LoRA | 61.46[1]±1.27[3] | 86.73±0.38 | 88.28±1.06 | 76.46[1]±1.34[3] | 88.69[3]±5.32 | 67.75±2.49[2] | 58.85[3]±4.27[3] | 75.46[2]±2.30[3]
DiffPruning | 58.36±1.45 | 89.52±0.27 | 77.46±5.31 | 70.76±9.01 | 85.18±2.65[2] | 70.40[3]±3.07 | 55.38±4.30 | 72.44±3.72
ChildPruning | 59.40±2.30 | 89.33±3.23 | 88.43±0.80 | 75.11±2.87 | 85.71±4.07 | 70.30±4.54 | 54.04±7.24 | 74.62±3.58
SAM | 59.52±1.12[2] | 90.45[1]±0.12[2] | 88.79[1]±0.69[1] | 75.74[2]±1.27[1] | 86.79±4.39 | 74.00[1]±2.79 | 59.52[2]±3.32[1] | 76.40[1]±1.96[1]
Table 2: Data perturbation stability. The setting is the same as in the main
experiment except that we run the experiments on different sampled datasets.
Due to the space limit, we attach the significance test in Appendix A.7.
Projection Discontinuity Problem. To give an intuitive illustration of the
projection discontinuity problem in projection-based approaches, we plot the
training curve of the DiffPruning method on the CB task. As illustrated in
Fig. 3, we adjust the mask every 600 training steps. Each time the mask
changes, the training loss jumps back to almost its initial value, showing
that changing the mask severely disrupts the training procedure due to the
projection discontinuity problem.
Relation between Stability and Overall Performance. Theorem 2 shows that
stability implies better generalization. To further validate this, we examine
how the stability ranks and the overall performance ranks are correlated in
the main experiment. In Fig. 4, the x-axis is the stability rank in each main
experiment and the y-axis is the corresponding overall performance rank; for
each vertical line at a given stability rank, the dot marks the mean overall
performance rank and the line length indicates the standard deviation. The two
ranks are positively correlated, indicating that more stable models usually
have better generalization capability. To quantify this relationship, we
calculate Spearman's rank correlation coefficient Spearman (1904),
$\rho=\frac{\operatorname{cov}(R(S),R(V))}{\sigma_{R(S)}\sigma_{R(V)}}$, where
$R(S)$ and $R(V)$ are the rank variables, $\operatorname{cov}(R(S),R(V))$ is
their covariance, and $\sigma_{R(S)}$ and $\sigma_{R(V)}$ are their standard
deviations. We obtain $\rho=0.4356$ with p-value $=0.000014<0.05$, indicating
that the correlation between the two rank variables is significant.
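For reference, Spearman's $\rho$ as defined above can be computed directly from the rank variables; this is a simple version without tie correction (in practice a library routine such as `scipy.stats.spearmanr` would be used):

```python
from statistics import mean, pstdev

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the rank
    variables, rho = cov(R(x), R(y)) / (sigma_R(x) * sigma_R(y))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r

    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = mean([(a - mx) * (b - my) for a, b in zip(rx, ry)])
    return cov / (pstdev(rx) * pstdev(ry))

# Perfectly concordant ranks give rho = 1; fully reversed ranks give rho = -1.
assert abs(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]) - 1.0) < 1e-12
assert abs(spearman_rho([1, 2, 3, 4], [40, 30, 20, 10]) + 1.0) < 1e-12
```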
Effectiveness of Sparsity. To further verify the theoretical analysis in
Theorems 1 and 2, we conduct a new experiment showing how the overall
performance and the stability change with the sparsity. We vary the sparsity
of the SAM model over {0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05,
0.1, 0.2} and plot the relationship between sparsity and the mean/standard
deviation on both the test set and the training set. The results are shown in
Fig. 5. We conclude that (1) as the sparsity ratio decreases, the mean and the
standard deviation on most tasks also decrease, meaning the models become more
stable with better generalization, consistent with the bounds in Theorems 1
and 2; and (2) once the sparsity ratio drops below a certain threshold, the
models become quite unstable and performance drops sharply, because the
empirical error increases drastically, as can be seen in the Train Mean and
Train Std scores in Fig. 5. At the same time, under such circumstances,
decreasing the sparsity ratio further cannot effectively lower the bound, so
this observation is also consistent with our discussion of Theorems 1 and 2.
Data Perturbation Stability. In the main experiment, we use different random
seeds. However, it is unknown whether the performance remains stable under
perturbations of the dataset. We conduct a new experiment to verify data
perturbation stability by training the model on 10 different training sets,
each obtained by randomly removing 10% of the samples from our original
training set. The results are shown in Table 2. The data perturbation
stability results are similar to those of the main experiment, and our
proposed SAM model still has the best data perturbation stability as well as
the best overall performance among all the models.
## 6 Related Works
Fine-tuning a pre-trained model Peters et al. (2018); Devlin et al. (2019);
Lan et al. (2020); Radford et al. (2018, 2019); Brown et al. (2020); Dong et
al. (2019); Qiu et al. (2020); Chen et al. (2022); Liu et al. (2022) has been
shown to be very promising in recent years. However, fine-tuning the full
model yields a separate model of the same size for each task, and many works
indicate that full fine-tuning is unstable Devlin et al. (2019); Phang et al.
(2018); Lee et al. (2019); Zhu et al. (2020); Dodge et al. (2020);
Pruksachatkun et al. (2020); Mosbach et al. (2020); Zhang et al. (2020); Zhao
et al. (2021). To solve this problem, many researchers propose parameter-
efficient methods that fine-tune only a small part of the pre-trained
parameters. These methods are found to be more stable than fine-tuning the
full model He et al. (2021b); Lee et al. (2019); Houlsby et al. (2019); Zaken
et al. (2021); Sung et al. (2021); Liu et al. (2021). To date, however, no
previous work has provided a theoretical analysis of the stability of
parameter-efficient models.
Depending on how the parameter-efficient models choose which parameters to
optimize, we categorize them into 1) random approaches (Random and Mixout Lee
et al. (2019)), 2) rule-based approaches (BitFit Zaken et al. (2021),
MagPruning Han et al. (2015a, b); Lee et al. (2021), Adapter Houlsby et al.
(2019); Pfeiffer et al. (2020); Rücklé et al. (2021); He et al. (2021b), and
LoRA Hu et al. (2022)), and 3) projection-based approaches (DiffPruning Mallya
et al. (2018); Sanh et al. (2020); Guo et al. (2021) and ChildPruning Xu et
al. (2021)). We refer readers to Section 2.2 for a more detailed discussion of
these models. Despite the promising results achieved by these models, the
random and rule-based approaches do not utilize the information in task-
specific data, while the projection-based approaches suffer from the
projection discontinuity problem.
Apart from parameter-efficient fine-tuning methods, many other approaches
Xuhong et al. (2018); Jiang et al. (2020); Hua et al. (2021) have been
proposed to regularize the parameters to enhance generalization capability.
Moreover, several works Salman et al. (2020); Jiang et al. (2020); Li and
Zhang (2021); Hua et al. (2021) propose to train models adversarially, while
others You et al. (2019); Liang et al. (2021); Ansell et al. (2021); Chen et
al. (2021) utilize lottery ticket approaches to prune the network. Besides,
prompt-tuning methods Liu et al. (2021); Lester et al. (2021); Li and Liang
(2021) use prefixes to adapt the model to new domains without changing the
model parameters; the continuous prompts method Li and Liang (2021) can be
categorized as a rule-based approach. Our work focuses on approaches that
fine-tune only a small part of the model, which differs from these methods in
structure and procedure.
## 7 Conclusions
In this paper, we seek to understand the effectiveness of parameter-efficient
fine-tuning models. Depending on how the tunable parameters are chosen, we
first categorize most of the models into three categories: random approaches,
rule-based approaches, and projection-based approaches. We then show that all
models in the three categories are sparse fine-tuned models and give a
theoretical analysis of their stability and generalization error. We further
show that the random and rule-based approaches do not utilize the task data
information, while the projection-based approaches suffer from the projection
discontinuity problem. We propose a novel SAM model to alleviate both problems
and conduct extensive experiments that support the correctness of our
theoretical analysis and the effectiveness of the proposed model.
## Acknowledgments
The authors gratefully acknowledge the support of the funding from UKRI under
project code ES/T012277/1.
## References
* Ansell et al. (2021) Alan Ansell, Edoardo Maria Ponti, Anna Korhonen, and Ivan Vulić. 2021. Composable sparse fine-tuning for cross-lingual transfer. _arXiv preprint arXiv:2110.07560_.
* Bentivogli et al. (2009) Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. In _TAC_.
* Bishop and Nasrabadi (2006) Christopher M Bishop and Nasser M Nasrabadi. 2006. _Pattern recognition and machine learning_ , volume 4. Springer.
* Bollapragada et al. (2019) Raghu Bollapragada, Richard H Byrd, and Jorge Nocedal. 2019. Exact and inexact subsampled newton methods for optimization. _IMA Journal of Numerical Analysis_ , 39(2):545–578.
* Bousquet and Elisseeff (2002) Olivier Bousquet and André Elisseeff. 2002. Stability and generalization. _The Journal of Machine Learning Research_ , 2:499–526.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901.
* Cer et al. (2017) Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017\. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. _arXiv preprint arXiv:1708.00055_.
* Charles and Papailiopoulos (2018) Zachary Charles and Dimitris Papailiopoulos. 2018. Stability and generalization of learning algorithms that converge to global optima. In _International conference on machine learning_ , pages 745–754. PMLR.
* Chen et al. (2022) Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, and Shangsong Liang. 2022. Revisiting parameter-efficient tuning: Are we really there yet? _arXiv preprint arXiv:2202.07962_.
* Chen et al. (2021) Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Zhangyang Wang, and Jingjing Liu. 2021. Earlybert: Efficient bert training via early-bird lottery tickets. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 2195–2207.
* Dagan et al. (2005) Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In _Machine Learning Challenges Workshop_ , pages 177–190. Springer.
* De Marneffe et al. (2019) Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The commitmentbank: Investigating projection in naturally occurring discourse. In _proceedings of Sinn und Bedeutung_ , volume 23, pages 107–124.
* Dembo et al. (1982) Ron S Dembo, Stanley C Eisenstat, and Trond Steihaug. 1982. Inexact newton methods. _SIAM Journal on Numerical analysis_ , 19(2):400–408.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of NAACL-HLT_ , pages 4171–4186.
* Ding et al. (2022) Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. 2022. Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. _arXiv preprint arXiv:2203.06904_.
* Dodge et al. (2020) Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah A Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping.
* Dolan and Brockett (2005) Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In _Third International Workshop on Paraphrasing (IWP2005)_.
* Dong et al. (2019) Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. _Advances in Neural Information Processing Systems_ , 32.
* Elisseeff et al. (2005) Andre Elisseeff, Theodoros Evgeniou, Massimiliano Pontil, and Leslie Pack Kaelbing. 2005. Stability of randomized learning algorithms. _Journal of Machine Learning Research_ , 6(1).
* Fedus et al. (2021) William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. _arXiv preprint arXiv:2101.03961_.
* Fu et al. (2021) Zihao Fu, Wai Lam, Anthony Man-Cho So, and Bei Shi. 2021. A theoretical analysis of the repetition problem in text generation. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 35, pages 12848–12856.
* Giampiccolo et al. (2007) Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B Dolan. 2007. The third pascal recognizing textual entailment challenge. In _Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing_ , pages 1–9.
* Guo et al. (2021) Demi Guo, Alexander M Rush, and Yoon Kim. 2021. Parameter-efficient transfer learning with diff pruning. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 4884–4896.
* Han et al. (2015a) Song Han, Huizi Mao, and William J Dally. 2015a. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. _arXiv preprint arXiv:1510.00149_.
* Han et al. (2015b) Song Han, Jeff Pool, John Tran, and William Dally. 2015b. Learning both weights and connections for efficient neural network. _Advances in neural information processing systems_ , 28.
* Hardt et al. (2016) Moritz Hardt, Ben Recht, and Yoram Singer. 2016. Train faster, generalize better: Stability of stochastic gradient descent. In _International conference on machine learning_ , pages 1225–1234. PMLR.
* He et al. (2021a) Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. 2021a. Towards a unified view of parameter-efficient transfer learning. _arXiv preprint arXiv:2110.04366_.
* He et al. (2021b) Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jiawei Low, Lidong Bing, and Luo Si. 2021b. On the effectiveness of adapter-based tuning for pretrained language model adaptation. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 2208–2222.
* Houlsby et al. (2019) Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In _International Conference on Machine Learning_ , pages 2790–2799. PMLR.
* Hu et al. (2022) Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In _International Conference on Learning Representations_.
* Hua et al. (2021) Hang Hua, Xingjian Li, Dejing Dou, Chengzhong Xu, and Jiebo Luo. 2021. Noise stability regularization for improving bert fine-tuning. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 3229–3241.
* Jiang et al. (2020) Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. Smart: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 2177–2190.
* Karimi Mahabadi et al. (2021) Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. 2021. Compacter: Efficient low-rank hypercomplex adapter layers. _Advances in Neural Information Processing Systems_ , 34.
* Kenton and Toutanova (2019) Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of NAACL-HLT_ , pages 4171–4186.
* Kim et al. (2021) Seungwon Kim, Alex Shum, Nathan Susanj, and Jonathan Hilgart. 2021. Revisiting pretraining with adapters. In _Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)_ , pages 90–99.
* Kuzborskij and Lampert (2018) Ilja Kuzborskij and Christoph Lampert. 2018. Data-dependent stability of stochastic gradient descent. In _International Conference on Machine Learning_ , pages 2815–2824. PMLR.
* Lagunas et al. (2021) François Lagunas, Ella Charlaix, Victor Sanh, and Alexander M Rush. 2021. Block pruning for faster transformers. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 10619–10629.
* Lan et al. (2020) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In _International Conference on Learning Representations_.
* Lee et al. (2019) Cheolhyoung Lee, Kyunghyun Cho, and Wanmo Kang. 2019. Mixout: Effective regularization to finetune large-scale pretrained language models. In _International Conference on Learning Representations_.
* Lee et al. (2021) Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, and Jinwoo Shin. 2021. Layer-adaptive sparsity for the magnitude-based pruning. In _International Conference on Learning Representations_.
* Lester et al. (2021) Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 3045–3059.
* Levesque et al. (2012) Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In _Thirteenth international conference on the principles of knowledge representation and reasoning_.
* Lewis et al. (2020) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7871–7880.
* Li and Zhang (2021) Dongyue Li and Hongyang Zhang. 2021. Improved regularization and robustness for fine-tuning in neural networks. _Advances in Neural Information Processing Systems_ , 34.
* Li and Liang (2021) Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 4582–4597.
* Liang et al. (2021) Jianze Liang, Chengqi Zhao, Mingxuan Wang, Xipeng Qiu, and Lei Li. 2021. Finding sparse structures for domain specific neural machine translation. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 35, pages 13333–13342.
* Liu et al. (2022) Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel. 2022. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning.
* Liu et al. (2021) Xiao Liu, Kaixuan Ji, Yicheng Fu, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2021. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. _arXiv preprint arXiv:2110.07602_.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach.
* Mahabadi et al. (2021) Rabeeh Karimi Mahabadi, Sebastian Ruder, Mostafa Dehghani, and James Henderson. 2021. Parameter-efficient multi-task fine-tuning for transformers via shared hypernetworks. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 565–576.
* Mallya et al. (2018) Arun Mallya, Dillon Davis, and Svetlana Lazebnik. 2018. Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , pages 67–82.
* Mao et al. (2021) Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Wen-tau Yih, and Madian Khabsa. 2021. Unipelt: A unified framework for parameter-efficient language model tuning. _arXiv preprint arXiv:2110.07577_.
* Mosbach et al. (2020) Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2020. On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines. In _International Conference on Learning Representations_.
* Mostafa and Wang (2019) Hesham Mostafa and Xin Wang. 2019. Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization. In _International Conference on Machine Learning_ , pages 4646–4655. PMLR.
* Panahi et al. (2021) Aliakbar Panahi, Seyran Saeedi, and Tom Arodz. 2021. Shapeshifter: a parameter-efficient transformer using factorized reshaped matrices. _Advances in Neural Information Processing Systems_ , 34.
* Peters et al. (2018) Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In _NAACL_.
* Pfeiffer et al. (2020) Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020. Adapterhub: A framework for adapting transformers. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 46–54.
* Phang et al. (2018) Jason Phang, Thibault Févry, and Samuel R Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks.
* Phang et al. (2020) Jason Phang, Phil Yeres, Jesse Swanson, Haokun Liu, Ian F. Tenney, Phu Mon Htut, Clara Vania, Alex Wang, and Samuel R. Bowman. 2020. jiant 2.0: A software toolkit for research on general-purpose text understanding models. http://jiant.info/.
* Pruksachatkun et al. (2020) Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel Bowman. 2020. Intermediate-task transfer learning with pretrained language models: When and why does it work? In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5231–5247.
* Qiu et al. (2020) Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020\. Pre-trained models for natural language processing: A survey. _Science China Technological Sciences_ , 63(10):1872–1897.
* Radford et al. (2018) Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. _OpenAI blog_ , 1(8):9.
* Radiya-Dixit and Wang (2020) Evani Radiya-Dixit and Xin Wang. 2020. How fine can fine-tuning be? learning efficient language models. In _International Conference on Artificial Intelligence and Statistics_ , pages 2435–2443. PMLR.
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of Machine Learning Research_ , 21:1–67.
* Ricotti et al. (1988) Lucio Prina Ricotti, Susanna Ragazzini, and Giuseppe Martinelli. 1988. Learning of word stress in a sub-optimal second order back-propagation neural network. In _ICNN_ , volume 1, pages 355–361.
* Roemmele et al. (2011) Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In _AAAI spring symposium: logical formalizations of commonsense reasoning_ , pages 90–95.
* Rücklé et al. (2021) Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2021. Adapterdrop: On the efficiency of adapters in transformers. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 7930–7946.
* Salman et al. (2020) Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. 2020\. Do adversarially robust imagenet models transfer better? _Advances in Neural Information Processing Systems_ , 33:3533–3545.
* Sanh et al. (2020) Victor Sanh, Thomas Wolf, and Alexander Rush. 2020. Movement pruning: Adaptive sparsity by fine-tuning. _Advances in Neural Information Processing Systems_ , 33:20378–20389.
* Shalev-Shwartz and Ben-David (2014) Shai Shalev-Shwartz and Shai Ben-David. 2014. _Understanding machine learning: From theory to algorithms_. Cambridge university press.
* Shalev-Shwartz et al. (2010) Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. 2010. Learnability, stability and uniform convergence. _The Journal of Machine Learning Research_ , 11:2635–2670.
* Spearman (1904) Charles Spearman. 1904. The proof and measurement of association between two things. _The American journal of psychology_ , 15(1):72–101.
* Sung et al. (2021) Yi-Lin Sung, Varun Nair, and Colin A Raffel. 2021. Training neural networks with fixed sparse masks. _Advances in Neural Information Processing Systems_ , 34.
* Wang et al. (2019) Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. _Advances in neural information processing systems_ , 32.
* Wang et al. (2018) Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In _Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , pages 353–355.
* Warstadt et al. (2019) Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. _Transactions of the Association for Computational Linguistics_ , 7:625–641.
* Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 38–45, Online. Association for Computational Linguistics.
* Xu et al. (2020) Peng Xu, Fred Roosta, and Michael W Mahoney. 2020. Newton-type methods for non-convex optimization under inexact hessian information. _Mathematical Programming_ , 184(1):35–70.
* Xu et al. (2021) Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, and Fei Huang. 2021. Raise a child in large language model: Towards effective and generalizable fine-tuning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 9514–9528.
* Xuhong et al. (2018) LI Xuhong, Yves Grandvalet, and Franck Davoine. 2018. Explicit inductive bias for transfer learning with convolutional networks. In _International Conference on Machine Learning_ , pages 2825–2834. PMLR.
* Yao et al. (2021a) Zhewei Yao, Amir Gholami, Sheng Shen, Mustafa Mustafa, Kurt Keutzer, and Michael Mahoney. 2021a. Adahessian: An adaptive second order optimizer for machine learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 35, pages 10665–10673.
* Yao et al. (2021b) Zhewei Yao, Peng Xu, Fred Roosta, and Michael W Mahoney. 2021b. Inexact nonconvex newton-type methods. _Informs Journal on Optimization_ , 3(2):154–182.
* You et al. (2019) Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Richard G Baraniuk, Zhangyang Wang, and Yingyan Lin. 2019. Drawing early-bird tickets: Toward more efficient training of deep networks. In _International Conference on Learning Representations_.
* Zaken et al. (2021) Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. _arXiv preprint arXiv:2106.10199_.
* Zhang et al. (2020) Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2020. Revisiting few-sample bert fine-tuning. In _International Conference on Learning Representations_.
* Zhao et al. (2021) Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In _International Conference on Machine Learning_ , pages 12697–12706. PMLR.
* Zhu et al. (2020) Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2020. Freelb: Enhanced adversarial training for natural language understanding. In _ICLR_.
Appendix. Supplementary Material
## Appendix A.1 Proof of Proposition 1
###### Proposition 1.
Optimizing Problem (2) is equivalent to optimizing the upper bound $\bar{L}$
of the following regularized problem:
$L_{R}=\min_{\theta}\mathcal{L}(\theta)+\|(I-M)(\theta-\theta^{0})\|^{2}\leq\bar{L}.$
(11)
###### Proof.
We write the Lagrangian of Problem (2) as:
$\bar{L}=\min_{\theta}\max_{\lambda}\mathcal{L}(\theta)+\lambda\|(I-M)(\theta-\theta^{0})\|^{2}$
where $\lambda$ is the Lagrange multiplier. Therefore, Problem (2) is
equivalent to solving the following problem:
$\displaystyle\min_{\theta}\max_{\lambda}\mathcal{L}(\theta)+\lambda\|(I-M)(\theta-\theta^{0})\|^{2}$
$\displaystyle\geq\max_{\lambda}\min_{\theta}\mathcal{L}(\theta)+\lambda\|(I-M)(\theta-\theta^{0})\|^{2}$
$\displaystyle\geq\min_{\theta}\mathcal{L}(\theta)+\|(I-M)(\theta-\theta^{0})\|^{2}\qquad\text{(taking }\lambda=1\text{)}$
∎
## Appendix A.2 Proof of Theorem 1
Given training dataset $S=\\{z_{1},\cdots,z_{n}\\}$, $S^{i}=S\backslash
z_{i}=\\{z_{1},\cdots,z_{i-1},z_{i+1},\cdots,z_{n}\\}$. Loss function
$\mathcal{L}(\theta)=\frac{1}{n}\sum_{i=1}^{n}\ell(z_{i})$.
The following proof is adapted from Shalev-Shwartz and Ben-David (2014), where
the original lemma relies on a convexity assumption that is not guaranteed for
neural networks. We use a Taylor expansion instead of the convexity
assumption, which makes the argument more suitable for neural network models.
The cost is that the bound depends on a constant determined by the local
curvature of the loss around the minimum.
###### Lemma 1.
Assume that the loss function $\ell$ is $\rho$-Lipschitz and that
$\operatorname{\mathcal{A}}(S^{i})$ is close to $\operatorname{\mathcal{A}}(S)$.
The Hessian matrix $\nabla^{2}\mathcal{L}(\operatorname{\mathcal{A}}(S))$ at
$\operatorname{\mathcal{A}}(S)$ is a positive-semidefinite matrix with a
singular value decomposition as $U\operatorname{diag}(\Lambda)U^{-1}$,
$\Lambda=\\{\Lambda_{1},\cdots,\Lambda_{m}\\}$ and
$\Lambda_{min}=\min\\{\Lambda_{1},\cdots,\Lambda_{m}\\}$. Then, the learning
algorithm $\operatorname{\mathcal{A}}$ defined by
$\operatorname{\mathcal{A}}(S)=\arg\min_{w}\frac{1}{n}\sum_{i=1}^{n}\ell(z_{i})+\lambda\|w-w_{0}\|^{2}$
has pointwise hypothesis stability $\epsilon$ with rate
$\frac{2\rho^{2}}{(\Lambda_{min}+2\lambda)n}$. It follows that:
$\mathbb{E}_{S,i\sim
U(n)}[|\ell(\operatorname{\mathcal{A}}(S^{i}),z_{i})-\ell(\operatorname{\mathcal{A}}(S),z_{i})|]\leq\frac{2\rho^{2}}{(\Lambda_{min}+2\lambda)n}$
###### Proof.
We denote $f_{S}(w)=\mathcal{L}(w)+\lambda\|w-w_{0}\|^{2}$ and
$\operatorname{\mathcal{A}}(S)=\arg\min_{w}f_{S}(w)$. As
$\operatorname{\mathcal{A}}(S)$ minimizes $f_{S}(w)$, we have $\nabla
f_{S}(\operatorname{\mathcal{A}}(S))=0$. For every $v$ close to
$\operatorname{\mathcal{A}}(S)$, a second-order Taylor expansion gives:
$\displaystyle f_{S}(v)=f_{S}(\operatorname{\mathcal{A}}(S))+$
$\displaystyle[v-\operatorname{\mathcal{A}}(S)]^{T}\nabla
f_{S}(\operatorname{\mathcal{A}}(S))+\frac{1}{2}[v-\operatorname{\mathcal{A}}(S)]^{T}\nabla^{2}f_{S}(\operatorname{\mathcal{A}}(S))[v-\operatorname{\mathcal{A}}(S)]$
$\displaystyle f_{S}(v)=f_{S}(\operatorname{\mathcal{A}}(S))+$
$\displaystyle\frac{1}{2}[v-\operatorname{\mathcal{A}}(S)]^{T}\nabla^{2}f_{S}(\operatorname{\mathcal{A}}(S))[v-\operatorname{\mathcal{A}}(S)]$
$\displaystyle f_{S}(v)-f_{S}(\operatorname{\mathcal{A}}(S))$
$\displaystyle=\frac{1}{2}[v-\operatorname{\mathcal{A}}(S)]^{T}\nabla^{2}f_{S}(\operatorname{\mathcal{A}}(S))[v-\operatorname{\mathcal{A}}(S)]$
$\displaystyle f_{S}(v)-f_{S}(\operatorname{\mathcal{A}}(S))$
$\displaystyle=\frac{1}{2}[v-\operatorname{\mathcal{A}}(S)]^{T}\nabla^{2}_{w=A(S)}(\mathcal{L}(w)+\lambda\|w-w_{0}\|^{2})[v-\operatorname{\mathcal{A}}(S)]$
$\displaystyle f_{S}(v)-f_{S}(\operatorname{\mathcal{A}}(S))$
$\displaystyle=\frac{1}{2}[v-\operatorname{\mathcal{A}}(S)]^{T}(\nabla^{2}\mathcal{L}(\operatorname{\mathcal{A}}(S))+2\lambda
I)[v-\operatorname{\mathcal{A}}(S)]$ $\displaystyle
f_{S}(v)-f_{S}(\operatorname{\mathcal{A}}(S))$
$\displaystyle=\frac{1}{2}[v-\operatorname{\mathcal{A}}(S)]^{T}(U\operatorname{diag}(\Lambda)U^{-1}+2\lambda
I)[v-\operatorname{\mathcal{A}}(S)]$ $\displaystyle
f_{S}(v)-f_{S}(\operatorname{\mathcal{A}}(S))$
$\displaystyle=\frac{1}{2}[v-\operatorname{\mathcal{A}}(S)]^{T}(U(\operatorname{diag}(\Lambda)+2\lambda
I)U^{-1})[v-\operatorname{\mathcal{A}}(S)]$ $\displaystyle
f_{S}(v)-f_{S}(\operatorname{\mathcal{A}}(S))$
$\displaystyle=\frac{1}{2}[v-\operatorname{\mathcal{A}}(S)]^{T}U\operatorname{diag}(\Lambda+2\lambda)U^{-1}[v-\operatorname{\mathcal{A}}(S)]$
$\displaystyle f_{S}(v)-f_{S}(\operatorname{\mathcal{A}}(S))$
$\displaystyle=\frac{1}{2}[v-\operatorname{\mathcal{A}}(S)]^{T}U\operatorname{diag}(\sqrt{\Lambda_{1}+2\lambda},\cdots,\sqrt{\Lambda_{m}+2\lambda})\cdot$
$\displaystyle\ \ \ \ \ \
\operatorname{diag}(\sqrt{\Lambda_{1}+2\lambda},\cdots,\sqrt{\Lambda_{m}+2\lambda})U^{-1}[v-\operatorname{\mathcal{A}}(S)]$
$\displaystyle f_{S}(v)-f_{S}(\operatorname{\mathcal{A}}(S))$
$\displaystyle=\frac{1}{2}[v-\operatorname{\mathcal{A}}(S)]^{T}U\operatorname{diag}(\sqrt{\Lambda_{1}+2\lambda},\cdots,\sqrt{\Lambda_{m}+2\lambda})U^{-1}\cdot$
$\displaystyle\ \ \ \ \ \
U\operatorname{diag}(\sqrt{\Lambda_{1}+2\lambda},\cdots,\sqrt{\Lambda_{m}+2\lambda})U^{-1}[v-\operatorname{\mathcal{A}}(S)]$
$\displaystyle f_{S}(v)-f_{S}(\operatorname{\mathcal{A}}(S))$
$\displaystyle=\frac{1}{2}\|U\operatorname{diag}(\sqrt{\Lambda_{1}+2\lambda},\cdots,\sqrt{\Lambda_{m}+2\lambda})U^{-1}[v-\operatorname{\mathcal{A}}(S)]\|^{2}$
$\displaystyle f_{S}(v)-f_{S}(\operatorname{\mathcal{A}}(S))$
$\displaystyle\geq\frac{1}{2}(\Lambda_{min}+2\lambda)\|v-\operatorname{\mathcal{A}}(S)\|^{2}$
Then, if $n$ is large enough, by the definition of $f_{S}(w)$, for all $u,v$
we have:
$\displaystyle f_{S}(v)-f_{S}(u)$
$\displaystyle=L_{S}(v)+\lambda\|v-w_{0}\|^{2}-(L_{S}(u)+\lambda\|u-w_{0}\|^{2})$
$\displaystyle=L_{S^{i}}(v)+\lambda\|v-w_{0}\|^{2}-(L_{S^{i}}(u)+\lambda\|u-w_{0}\|^{2})+$
$\displaystyle\frac{\ell(v,z_{i})-\ell(u,z_{i})}{n}$
Then, we choose
$v=\operatorname{\mathcal{A}}(S^{i}),u=\operatorname{\mathcal{A}}(S)$. As $v$
minimizes $L_{S^{i}}(v)+\lambda\|v-w_{0}\|^{2}$, we have:
$f_{S}(\operatorname{\mathcal{A}}(S^{i}))-f_{S}(\operatorname{\mathcal{A}}(S))\leq\frac{\ell(\operatorname{\mathcal{A}}(S^{i}),z_{i})-\ell(\operatorname{\mathcal{A}}(S),z_{i})}{n}$
$\frac{1}{2}(\Lambda_{min}+2\lambda)\|\operatorname{\mathcal{A}}(S^{i})-\operatorname{\mathcal{A}}(S)\|^{2}\leq\frac{\ell(\operatorname{\mathcal{A}}(S^{i}),z_{i})-\ell(\operatorname{\mathcal{A}}(S),z_{i})}{n}$
As the loss function $\ell(\cdot,z_{i})$ is $\rho$-Lipschitz, we have:
$|\ell(\operatorname{\mathcal{A}}(S^{i}),z_{i})-\ell(\operatorname{\mathcal{A}}(S),z_{i})|\leq\rho\|\operatorname{\mathcal{A}}(S^{i})-\operatorname{\mathcal{A}}(S)\|$
(12)
Then, we have:
$\displaystyle\frac{1}{2}(\Lambda_{min}+2\lambda)\|\operatorname{\mathcal{A}}(S^{i})-\operatorname{\mathcal{A}}(S)\|^{2}\leq\frac{\rho\|\operatorname{\mathcal{A}}(S^{i})-\operatorname{\mathcal{A}}(S)\|}{n}$
$\displaystyle\|\operatorname{\mathcal{A}}(S^{i})-\operatorname{\mathcal{A}}(S)\|\leq\frac{2\rho}{(\Lambda_{min}+2\lambda)n}$
Then, plugging this back into Equation (12), we have:
$|\ell(\operatorname{\mathcal{A}}(S^{(i)}),z_{i})-\ell(\operatorname{\mathcal{A}}(S),z_{i})|\leq\frac{2\rho^{2}}{(\Lambda_{min}+2\lambda)n}.$
As this holds for any $S,i$ we immediately obtain:
$\mathbb{E}_{S,i\sim
U(n)}[|\ell(\operatorname{\mathcal{A}}(S^{(i)}),z_{i})-\ell(\operatorname{\mathcal{A}}(S),z_{i})|]\leq\frac{2\rho^{2}}{(\Lambda_{min}+2\lambda)n}.$
∎
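The $O(1/n)$ shrinkage of $\|\operatorname{\mathcal{A}}(S^{i})-\operatorname{\mathcal{A}}(S)\|$ established in Lemma 1 can be illustrated numerically. The sketch below is a toy ridge-regression analogue with squared loss — not the paper's actual models, and all data and constants are made up — which solves the regularized problem with and without one sample and checks that the minimizer shifts less as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_argmin(X, y, lam, w0):
    # Closed-form minimizer of (1/n) * ||X w - y||^2 + lam * ||w - w0||^2.
    n, d = X.shape
    A = (2.0 / n) * X.T @ X + 2.0 * lam * np.eye(d)
    b = (2.0 / n) * X.T @ y + 2.0 * lam * w0
    return np.linalg.solve(A, b)

def leave_one_out_shift(n, d=5, lam=0.1):
    # ||A(S^i) - A(S)|| for a random dataset of size n.
    X = rng.normal(size=(n, d))
    y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
    w0 = np.zeros(d)
    w_full = ridge_argmin(X, y, lam, w0)
    w_loo = ridge_argmin(X[1:], y[1:], lam, w0)  # drop one sample
    return np.linalg.norm(w_loo - w_full)

shift_small_n = leave_one_out_shift(n=100)
shift_large_n = leave_one_out_shift(n=10_000)
print(shift_small_n, shift_large_n)  # the shift shrinks roughly like O(1/n)
```

The stability constant in the lemma scales the same way: a larger $n$ (or a larger regularization weight $\lambda$) makes the leave-one-out perturbation smaller.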
###### Theorem 1 (Stability).
If the loss function $\ell$ is $\rho$-Lipschitz,
$\operatorname{\mathcal{A}}(S^{i})$ is close to
$\operatorname{\mathcal{A}}(S)$, the Hessian matrix
$\nabla^{2}\mathcal{L}(\operatorname{\mathcal{A}}(S))$ at
$\operatorname{\mathcal{A}}(S)$ is positive-semidefinite with a singular value
decomposition $U\operatorname{diag}(\Lambda)U^{-1}$,
$\Lambda=\\{\Lambda_{1},\cdots,\Lambda_{m}\\}$ and
$\Lambda_{min}=\min\\{\Lambda_{1},\cdots,\Lambda_{m}\\}$, then the expectation
of the loss $\mathbb{E}_{M}L_{R}$ has a pointwise hypothesis stability as:
$\mathbb{E}_{S,i\sim
U(n)}[|\ell(\operatorname{\mathcal{A}}(S^{i}),z_{i})-\ell(\operatorname{\mathcal{A}}(S),z_{i})|]\leq\frac{2\rho^{2}}{(\Lambda_{min}+2(1-p))n}.$
(13)
###### Proof.
$\displaystyle\mathbb{E}_{M}L_{R}$
$\displaystyle=\mathcal{L}(\theta)+\mathbb{E}\|(I-M)(\theta^{0}-\theta)\|^{2}$
$\displaystyle=\mathcal{L}(\theta)+\mathbb{E}\sum_{i=1}^{m}(1-M_{ii})^{2}(\theta^{0}_{i}-\theta_{i})^{2}$
$\displaystyle=\mathcal{L}(\theta)+\sum_{i=1}^{m}(\theta^{0}_{i}-\theta_{i})^{2}\mathbb{E}(1-M_{ii})^{2}$
$\displaystyle=\mathcal{L}(\theta)+\sum_{i=1}^{m}(\theta^{0}_{i}-\theta_{i})^{2}(1-p)$
$\displaystyle=\mathcal{L}(\theta)+(1-p)\|(\theta^{0}-\theta)\|^{2}$
From Lemma 1, we have:
$\mathbb{E}_{S,i\sim
U(n)}[|\ell(\operatorname{\mathcal{A}}(S^{i}),z_{i})-\ell(\operatorname{\mathcal{A}}(S),z_{i})|]\leq\frac{2\rho^{2}}{(\Lambda_{min}+2(1-p))n}$
∎
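The key expectation step in the proof, $\mathbb{E}(1-M_{ii})^{2}=1-p$ for Bernoulli mask entries, holds exactly because $(1-M_{ii})^{2}=1-M_{ii}$ when $M_{ii}\in\{0,1\}$. A quick Monte Carlo check (with an arbitrary $p$):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3
# M_ii ~ Bernoulli(p); since M_ii is 0 or 1, (1 - M_ii)^2 = 1 - M_ii,
# so E[(1 - M_ii)^2] = 1 - p, the step used in the proof above.
samples = rng.binomial(1, p, size=1_000_000)
estimate = np.mean((1 - samples) ** 2)
print(estimate)  # close to 1 - p = 0.7
```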
## Appendix A.3 Proof of Theorem 2
We first give a lemma from Theorem 11 of Bousquet and Elisseeff (2002); it has
been extended to randomized algorithms in Elisseeff et al. (2005).
###### Lemma 2.
For any learning algorithm $\operatorname{\mathcal{A}}$ with pointwise
hypothesis stability $\beta$ with respect to a loss function such that $0\leq
c(y,y^{\prime})\leq C$, we have with probability $1-\delta$,
$R(\operatorname{\mathcal{A}},S)\leq\hat{R}(\operatorname{\mathcal{A}},S)+\sqrt{\frac{C^{2}+12Cn\beta}{2n\delta}},$
where $c(y,y^{\prime})=|y-y^{\prime}|$ is the absolute loss function.
###### Theorem 2 (Generalization).
We denote the generalization error as
$R(\operatorname{\mathcal{A}},S)=\mathbb{E}_{z}\ell(\operatorname{\mathcal{A}}(S),z)$
and the empirical error as
$\hat{R}(\operatorname{\mathcal{A}},S)=\frac{1}{n}\sum_{i=1}^{n}\ell(\operatorname{\mathcal{A}}(S),z)$.
Then, for some constant $C$, we have with probability $1-\delta$,
$R(\operatorname{\mathcal{A}},S)\leq\hat{R}(\operatorname{\mathcal{A}},S)+\sqrt{\frac{C^{2}+\frac{24C\rho^{2}}{\Lambda_{min}+2(1-p)}}{2n\delta}}.$
(14)
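To see how the Theorem 2 bound behaves, one can evaluate the slack term on the right-hand side of Equation (14) directly. All constants below are hypothetical, chosen only to expose the $O(1/\sqrt{n})$ scaling of the generalization gap:

```python
import numpy as np

# Illustrative constants (all hypothetical): loss bound C, Lipschitz
# constant rho, smallest Hessian eigenvalue, mask keep probability p,
# and confidence level delta.
C, rho, lam_min, p, delta = 1.0, 1.0, 0.1, 0.5, 0.05

def gen_gap_bound(n):
    # Slack term of Equation (14): sqrt((C^2 + 24*C*rho^2/(Lambda_min + 2(1-p))) / (2*n*delta)).
    return np.sqrt((C**2 + 24 * C * rho**2 / (lam_min + 2 * (1 - p))) / (2 * n * delta))

print(gen_gap_bound(1_000), gen_gap_bound(10_000))  # shrinks like 1/sqrt(n)
```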
## Appendix A.4 Proof of Theorem 3
###### Theorem 3.
If
$\hat{M}_{ii}=\mathds{1}(\sum_{j=1}^{m}\mathds{1}(|\frac{\nabla\operatorname{\mathcal{L}}(\theta^{0})_{i}^{2}}{h_{i}}|>|\frac{\nabla\operatorname{\mathcal{L}}(\theta^{0})_{j}^{2}}{h_{j}}|)\geq
m-\lfloor mp\rfloor)$, where
$\nabla\operatorname{\mathcal{L}}(\theta^{0})_{i}$ is the $i$th element of the
gradient vector $\nabla\operatorname{\mathcal{L}}(\theta^{0})$, then
$\inf_{\Delta\theta}\operatorname{\mathcal{L}}(\theta^{0}+\hat{M}\Delta\theta)\leq\inf_{\begin{subarray}{c}\Delta\theta,\|M\|_{0}=\lfloor
mp\rfloor;\\\ M_{ij}=0,\forall i\neq
j;M_{ii}\in\\{0,1\\}\end{subarray}}\operatorname{\mathcal{L}}(\theta^{0}+M\Delta\theta).$
(15)
###### Proof.
We denote $\mathcal{I}=\\{i|\hat{M}_{ii}=1\\}$, $\forall
M\in\Omega=\\{M|\|M\|_{0}=\lfloor mp\rfloor;M_{ij}=0,\forall i\neq
j;M_{ii}\in\\{0,1\\}\\}$ , we have
$\displaystyle\inf_{\Delta\theta,M\in\Omega}\operatorname{\mathcal{L}}(\theta^{0}+M\Delta\theta)$
$\displaystyle=\inf_{\Delta\theta,M\in\Omega}\operatorname{\mathcal{L}}(\theta^{0})+\nabla\operatorname{\mathcal{L}}(\theta^{0})^{\mathrm{T}}M\Delta\theta+\frac{1}{2}(M\Delta\theta)^{\mathrm{T}}HM\Delta\theta$
$\displaystyle=\inf_{\Delta\theta,M\in\Omega}\operatorname{\mathcal{L}}(\theta^{0})+\sum_{i=1}^{m}M_{ii}\nabla\operatorname{\mathcal{L}}(\theta^{0})_{i}\Delta\theta_{i}+\frac{1}{2}\sum_{i=1}^{m}M_{ii}h_{i}\Delta\theta_{i}^{2}$
$\displaystyle=\inf_{\Delta\theta,M\in\Omega}\operatorname{\mathcal{L}}(\theta^{0})+\sum_{i=1}^{m}M_{ii}(\frac{1}{2}h_{i}\Delta\theta_{i}^{2}+\nabla\operatorname{\mathcal{L}}(\theta^{0})_{i}\Delta\theta_{i})$
$\displaystyle=\inf_{M\in\Omega}\operatorname{\mathcal{L}}(\theta^{0})-\sum_{i=1}^{m}M_{ii}\frac{\nabla\operatorname{\mathcal{L}}(\theta^{0})_{i}^{2}}{2h_{i}}$
$\displaystyle\geq\operatorname{\mathcal{L}}(\theta^{0})-\sum_{i\in\mathcal{I}}\frac{\nabla\operatorname{\mathcal{L}}(\theta^{0})_{i}^{2}}{2h_{i}}\
\ \ \ \ \ \ \ \ \ \ \text{(By the definition of $\hat{M}$ and $\mathcal{I}$)}$
$\displaystyle=\operatorname{\mathcal{L}}(\theta^{0})-\sum_{i=1}^{m}\hat{M}_{ii}\frac{\nabla\operatorname{\mathcal{L}}(\theta^{0})_{i}^{2}}{2h_{i}}$
$\displaystyle=\inf_{\Delta\theta}\operatorname{\mathcal{L}}(\theta^{0}+\hat{M}\Delta\theta)$
∎
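The mask construction in Theorem 3 amounts to keeping the $\lfloor mp\rfloor$ coordinates with the largest saliency score $\nabla\mathcal{L}(\theta^{0})_{i}^{2}/h_{i}$. A minimal sketch in numpy — the gradient and diagonal Hessian values are made up for illustration:

```python
import numpy as np

def sam_mask(grad, h, p):
    # Diagonal 0/1 mask keeping the floor(m*p) coordinates with the
    # largest saliency score grad_i^2 / h_i, as in Theorem 3.
    m = grad.shape[0]
    k = int(np.floor(m * p))
    score = grad**2 / h
    mask = np.zeros(m)
    mask[np.argsort(score)[-k:]] = 1.0  # top-k saliency scores get M_ii = 1
    return mask

grad = np.array([3.0, -1.0, 0.5, 2.0])   # hypothetical gradient at theta^0
h = np.array([1.0, 2.0, 0.5, 4.0])       # hypothetical diagonal Hessian entries
mask = sam_mask(grad, h, p=0.5)          # keep floor(4 * 0.5) = 2 coordinates
print(mask)  # → [1. 0. 0. 1.]
```

Here the scores are $9,\,0.5,\,0.5,\,1$, so the first and last coordinates are selected.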
## Appendix A.5 Training Time Analysis
To further analyze the computational cost of each method, we list the training
time for the different models in Table 3. Note that since we adopt an
early-stopping strategy, models stop training when they converge, so the
number of training steps may differ between models. The results show that:
(1) the Adapter and LoRA models outperform the FullTuning model because they
tune only a few newly added parameters; (2) the other parameter-efficient
models take more time than the FullTuning model because, in the current
implementation, they all use a mask to determine which parameters to tune and
thus cannot save computation; (3) our proposed SAM model outperforms all other
models except Adapter and LoRA, as it has a faster convergence rate, but it
still needs a mask and therefore takes longer to train than the FullTuning
model. As noted by Guo et al. (2021) and Houlsby et al. (2019), although most
parameter-efficient models take longer to train, they need less storage space,
since only the changed parameters have to be stored. This is particularly
useful when there are many downstream tasks.
| | CoLA | MRPC | STS-B | RTE | CB | COPA | WSC | AVG |
|---|---|---|---|---|---|---|---|---|
| FullTuning | 0.74 | 1.04 | 1.90 | 2.63 | 3.18 | 0.66 | 1.55 | 1.67 |
| ChildPruning | 3.91 | 1.36 | 2.12 | 3.77 | 3.13 | 0.91 | 2.34 | 2.51 |
| Adapter | 0.89 | 1.24 | 3.43 | 3.51 | 1.16 | 0.41 | 0.47 | 1.59 |
| LoRA | 1.39 | 1.23 | 2.12 | 4.87 | 2.15 | 0.60 | 1.43 | 1.97 |
| Bitfit | 2.30 | 1.70 | 6.70 | 7.02 | 2.18 | 1.20 | 0.87 | 3.14 |
| DiffPruning | 0.60 | 1.07 | 6.61 | 5.30 | 2.86 | 1.21 | 1.06 | 2.67 |
| Random | 0.41 | 3.36 | 4.34 | 6.70 | 1.60 | 1.27 | 0.69 | 2.62 |
| SAM | 2.31 | 1.68 | 2.78 | 3.15 | 2.70 | 0.72 | 0.82 | 2.02 |

Table 3: Training time (in hours) for each model.
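The timing observation above — masked tuning still pays for the full gradient every step — can be sketched with a toy masked-SGD loop. This is plain SGD on a quadratic, not the paper's training setup, and all values are illustrative: the full gradient is computed each iteration and then multiplied by a fixed 0/1 mask, so frozen parameters never move, but no per-step computation is saved.

```python
import numpy as np

rng = np.random.default_rng(2)

theta0 = rng.normal(size=8)              # pretrained parameters theta^0
theta = theta0.copy()
target = rng.normal(size=8)              # toy optimum of the quadratic loss
mask = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=float)  # fixed 0/1 mask

for _ in range(500):
    grad = 2.0 * (theta - target)        # full gradient: same cost as full tuning
    theta -= 0.05 * mask * grad          # only masked coordinates move

frozen_shift = np.abs((theta - theta0)[mask == 0]).max()
print(frozen_shift)  # → 0.0 (frozen parameters never move)
```

Since the unmasked coordinates stay exactly at $\theta^{0}$, only the masked ones need to be stored after training, which is the storage advantage noted above.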
## Appendix A.6 Tunable Parameter Ratio Comparison
To make the results comparable, we set the tunable parameter ratio in each
experiment as close to the others as possible. Table 4 shows the ratio of
tunable parameters for each model. The tunable parameter counts for Adapter
and BitFit are fixed and cannot be changed. For the LoRA model, we follow the
official setting for the size of the tunable parameters. For the other models,
including our proposed SAM model, we follow Guo et al. (2021) and set the
tunable ratio to 0.5% to make the models comparable with each other.
| | FullTuning | ChildPruning | Adapter | LoRA | Bitfit | DiffPruning | Random | MixOut | MagPruning | SAM |
|---|---|---|---|---|---|---|---|---|---|---|
| %Param | 100 | 0.5 | 0.72 | 0.91 | 0.09 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 |

Table 4: Tunable parameter ratio comparison.
## Appendix A.7 T-Test of the Significance of SAM
To show how significantly our proposed SAM model outperforms the other models,
we conduct a t-test comparing SAM's results with each other model's results.
If SAM outperforms another model, the t-statistic is greater than 0; if it
outperforms that model significantly, the p-value is less than 0.05. We report
the t-statistic/p-value pairs for the main experiment in Table 5 and for the
data perturbation experiments in Table 6. The results show that the SAM model
outperforms the corresponding models significantly (p-value < 0.05) in most
cases.
| | CoLA | STS-B | MRPC | RTE | CB | COPA | WSC |
|---|---|---|---|---|---|---|---|
| FullTuning | 3.81/0.00 | 4.43/0.00 | -1.69/0.11 | 0.83/0.42 | -0.00/1.00 | 3.36/0.00 | 2.91/0.01 |
| Random | 5.38/0.00 | 12.34/0.00 | 0.60/0.56 | 3.35/0.00 | -1.26/0.22 | 3.47/0.00 | 3.13/0.01 |
| MixOut | 3.07/0.01 | 5.81/0.00 | 0.83/0.42 | -0.96/0.35 | 1.60/0.13 | 1.47/0.16 | 2.84/0.01 |
| Bitfit | 7.27/0.00 | 6.89/0.00 | 4.68/0.00 | 4.04/0.00 | 1.62/0.12 | 1.73/0.10 | 2.39/0.03 |
| MagPruning | 4.88/0.00 | 4.34/0.00 | 2.43/0.03 | 3.78/0.00 | 5.88/0.00 | 1.83/0.08 | 2.84/0.01 |
| Adapter | -2.34/0.03 | 7.93/0.00 | -1.12/0.28 | -0.16/0.87 | 0.98/0.34 | 3.07/0.01 | 1.60/0.13 |
| LoRA | 0.02/0.99 | 18.99/0.00 | -1.87/0.08 | -0.21/0.84 | 3.10/0.01 | 2.85/0.01 | 1.76/0.10 |
| DiffPruning | 4.00/0.00 | 8.16/0.00 | 4.97/0.00 | 2.55/0.02 | 2.53/0.02 | 1.38/0.18 | 3.78/0.00 |
| ChildPruning | 1.67/0.11 | 1.27/0.22 | 1.35/0.19 | 0.66/0.52 | 1.90/0.07 | 2.63/0.02 | 2.59/0.02 |

Table 5: T-test results (t-statistic/p-value) comparing SAM with each other model on each task. Green marks entries with t-statistic > 0 and p-value < 0.05 (SAM significantly outperforms the corresponding model); red marks entries with t-statistic < 0 and p-value < 0.05 (the corresponding model significantly outperforms SAM). Uncolored entries are not statistically significant at p-value = 0.05.

| | CoLA | STS-B | MRPC | RTE | CB | COPA | WSC |
|---|---|---|---|---|---|---|---|
| FullTuning | -0.25/0.81 | 3.61/0.00 | 0.12/0.90 | 1.14/0.27 | 1.23/0.23 | 3.26/0.00 | 1.76/0.10 |
| Random | 3.72/0.00 | 7.99/0.00 | 0.66/0.52 | 3.68/0.00 | -1.14/0.27 | 2.96/0.01 | 2.49/0.02 |
| MixOut | 0.06/0.95 | 5.33/0.00 | 0.82/0.42 | 2.47/0.02 | 1.69/0.11 | 2.48/0.02 | 0.78/0.44 |
| Bitfit | 6.34/0.00 | 6.65/0.00 | 3.99/0.00 | 5.64/0.00 | -2.49/0.02 | 1.37/0.19 | 1.27/0.22 |
| MagPruning | 3.39/0.01 | 3.22/0.00 | 3.96/0.00 | 4.34/0.00 | 1.23/0.23 | 3.19/0.01 | 0.05/0.96 |
| Adapter | 0.32/0.75 | 6.82/0.00 | -0.02/0.99 | 2.14/0.05 | 0.08/0.94 | 6.23/0.00 | 2.64/0.02 |
| LoRA | -1.09/0.30 | 26.73/0.00 | 1.21/0.24 | 0.00/1.00 | -0.72/0.48 | 4.67/0.00 | 0.49/0.63 |
| DiffPruning | 2.04/0.06 | 9.16/0.00 | 6.34/0.00 | 1.85/0.08 | 0.94/0.36 | 2.60/0.02 | 2.24/0.04 |
| ChildPruning | 0.74/0.48 | 1.05/0.31 | 1.02/0.32 | 1.16/0.26 | 0.54/0.60 | 2.08/0.05 | 2.09/0.05 |

Table 6: T-test for the significance of SAM in the data perturbation stability experiments. The color scheme is the same as in Table 5.
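The sign convention in Tables 5 and 6 (positive t-statistic when SAM's mean score is higher) can be reproduced with Welch's t-statistic. The per-run scores below are hypothetical, not the paper's numbers, and the p-value, which additionally requires the Student-t CDF, is omitted:

```python
import numpy as np

def welch_t(a, b):
    # Welch's t-statistic: positive when the first sample's mean is higher.
    a, b = np.asarray(a, float), np.asarray(b, float)
    return (a.mean() - b.mean()) / np.sqrt(
        a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    )

# Hypothetical per-run scores (not the paper's actual numbers).
sam_scores = [0.62, 0.64, 0.63, 0.65, 0.61]
baseline_scores = [0.58, 0.60, 0.57, 0.59, 0.61]
t = welch_t(sam_scores, baseline_scores)
print(round(t, 2))  # → 4.0
```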
## Appendix A.8 Limitations and Future Directions
In this work, we prove in Theorem 3 that the SAM model achieves an
approximately optimal value under a second-order approximation. There is
therefore still room for improvement with respect to the real target function.
One could explore other assumptions on the target function, such as quadratic
growth or the Polyak-Łojasiewicz condition; different assumptions may yield
different approximate optimal solutions. Besides, although the current model
achieves better results with better stability, its training time is somewhat
longer than that of the full-tuning model. This is because, in the current
implementation, sparsity is achieved with a gradient mask, so each step still
computes the full gradient. Future work could improve the implementation to
accelerate training.
## Appendix A.9 Broader Impact Statement
This paper proposes a theoretical analysis of existing methods and an improved
method under the same task setting. It may help researchers better understand
existing models, and the proposed SAM model improves performance. The task is
widely studied in the NLP community, and we do not conduct any experiments
involving living beings. Our work does not raise safety, security, human
rights, or environmental concerns, so we foresee no negative societal impact.
All data used in this paper come from widely used datasets, for which we give
a detailed description and source. As far as we know, these datasets contain
no personally identifiable information, and no bias cases have been reported
in the previous literature.
HIP-2022-31/TH
# Palatini formulation for gauge theory: implications for slow-roll inflation
Syksy Räsänen, University of Helsinki, Department of Physics and Helsinki
Institute of Physics, P.O. Box 64, FIN-00014 University of Helsinki, Finland

Yosef Verbin, Astrophysics Research Center, the Open University of Israel,
Raanana 4353701, Israel
###### Abstract
We consider a formulation of gauge field theory where the gauge field
$A_{\alpha}$ and the field strength $F_{\alpha\beta}$ are independent
variables, as in the Palatini formulation of gravity. For the simplest gauge
field action, this is known to be equivalent to the usual formulation. We add
non-minimal couplings between $F_{\alpha\beta}$ and a scalar field, solve for
$F_{\alpha\beta}$ and insert it back into the action. This leads to modified
gauge field and scalar field terms. We consider slow-roll inflation and show
that because of the modifications to the scalar sector, adding higher order
terms to the inflaton potential does not spoil its flatness, unlike in the
usual case. Instead it makes the effective potential closer to quadratic.
## 1 Introduction
There are different formulations of general relativity. One of the most
studied alternatives to the metric formulation is the Palatini formulation
(also called the metric-affine formulation) where the metric and the affine
connection are independent variables [1, 2]. For the simplest gravitational
action (i.e. the Einstein–Hilbert action) and minimally coupled matter (i.e.
matter that does not directly couple to the connection), the equation of
motion of the connection gives back the Levi–Civita connection (up to a
projective transformation, which is a symmetry of the action). For a more
complicated action, the connection is not Levi–Civita, and the two
formulations are physically distinct theories.
The Palatini formulation is also known as the first-order formulation, because
only first derivatives of the fields appear in the Riemann tensor and hence in
Lagrangians constructed algebraically from it, unlike in the metric
formulation. Inserting the solution of the connection equation of motion back
into the action makes it possible to eliminate the connection and obtain a theory where
the metric is the only degree of freedom. The connection is thus sometimes
called an auxiliary field, although it is the metric that appears
algebraically in the original action, not the connection.
Analogously, electromagnetism and other gauge theories are usually formulated
in what we call the gauge field formulation, where the gauge field is the only
variable. However, they can also be formulated in the Palatini style by taking
the field strength to be an independent variable in addition to the gauge
field. In the gauge field formulation, the simplest action is quadratic in the
exterior derivative of the gauge field (plus the commutator for non-Abelian
theories). In the Palatini formulation, it is split into a term that is
quadratic in the field strength and another that is bilinear in the field
strength and the gauge field. This formulation is used in the Faddeev–Jackiw
formalism to make quantisation easier [3, 4]. Solving the classical equation
of motion and inserting the solution into the action gives back the quadratic
action for the gauge field. This procedure is nothing but a Legendre
transformation, similar to the one used to turn actions that are non-linear in
the Riemann tensor into the Einstein–Hilbert form in the Palatini formulation
for gravity [5, 6, 7].
However, as in the case of gravity, the Palatini formulation is not equivalent
to the usual formulation if the gauge field action is more complicated or if
other fields couple directly to the field strength. We investigate this
possibility for an Abelian gauge field. We keep the action linear in the gauge
field, but include linear and quadratic terms in the field strength, with non-
minimal couplings to a scalar field. We solve for the field strength to first
order in the kinetic term of the scalar field and insert the solution back
into the action. We discuss consequences for slow-roll inflation.
In section 2 we present the formulation, write down the action, solve for the
field strength and insert it back into the action. In section 3 we consider
the case of polynomial non-minimal couplings, in particular Higgs inflation.
In section 4 we discuss what would change for a non-Abelian gauge theory, or
if we include fermions or higher order terms in the field strength. In section
5 we summarise our findings and mention open issues.
## 2 Palatini formulation in the gauge sector
### 2.1 Comparison of the Palatini formulation for gravity and gauge fields
Let us start from the simplest action for an Abelian gauge field $A_{\alpha}$,
$$\begin{aligned}
S &= \int\text{d}^{4}x\sqrt{-g}\left(-A_{\alpha\beta}A^{\alpha\beta}\right) \\
&= \int\text{d}^{4}x\sqrt{-g}\left(-F_{\alpha\beta}A^{\alpha\beta}+\frac{1}{4}F_{\alpha\beta}F^{\alpha\beta}\right)\,, \qquad (1)
\end{aligned}$$
where because of invariance under the gauge transformations $A_{\alpha}\to
A_{\alpha}+\partial_{\alpha}\sigma$ ($\sigma$ is an arbitrary function) the
gauge field appears only in the combination
$A_{\alpha\beta}\equiv\partial_{[\alpha}A_{\beta]}$. On the second line we
have introduced the field strength $F_{\alpha\beta}$. If we consider the
Palatini formulation, where the field strength is an independent variable,
then varying the action with respect to $F_{\alpha\beta}$ gives the equation
of motion $F_{\alpha\beta}=2A_{\alpha\beta}$. Inserting this back into the
action restores the form on the first line. Actions of the form on the second
line that are linear in the derivatives of $A_{\alpha}$ may be more convenient
for quantisation, as in the Faddeev–Jackiw formalism [3, 4].
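The variation and back-substitution described above can be checked with a short symbolic computation. This is a scalar surrogate (treating the contractions $F_{\alpha\beta}A^{\alpha\beta}$, $F_{\alpha\beta}F^{\alpha\beta}$, and $A_{\alpha\beta}A^{\alpha\beta}$ as products of two scalars), a sketch rather than the full tensor calculation:

```python
import sympy as sp

# Scalar stand-ins: A represents A_{ab}, F the independent field strength
A, F = sp.symbols("A F")

# Second line of eq. (1): L = -F.A + (1/4) F.F
L = -F * A + sp.Rational(1, 4) * F**2

# Equation of motion dL/dF = 0 gives F = 2A
F_sol = sp.solve(sp.diff(L, F), F)[0]
assert F_sol == 2 * A

# Substituting back recovers the first line of (1): -A.A
assert sp.simplify(L.subs(F, F_sol) + A**2) == 0
print(F_sol, L.subs(F, F_sol))
```

The same cancellation holds index by index in the tensor computation, since the variation is algebraic in $F_{\alpha\beta}$.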
Analogously, we can consider gravity in the Palatini formulation, where the
connection is an independent variable. For the simplest action, the
Einstein–Hilbert action, we have (we use units where the reduced Planck mass
is unity)
$$\begin{aligned}
S &= \int\text{d}^{4}x\sqrt{-g}\,\frac{1}{2}g^{\alpha\beta}R_{\alpha\beta} \\
&= \int\text{d}^{4}x\sqrt{-g}\,g^{\alpha\beta}\left(\partial_{[\mu}\Gamma^{\mu}_{\beta]\alpha}+\Gamma^{\nu}{}_{[\nu|\mu|}\Gamma^{\mu}{}_{\beta]\alpha}\right) \\
&= \int\text{d}^{4}x\sqrt{-g}\,g^{\alpha\beta}\left(\frac{1}{2}\mathring{R}_{\alpha\beta}+\mathring{\nabla}_{[\mu}L^{\mu}{}_{\beta]\alpha}+L^{\nu}{}_{[\nu|\mu|}L^{\mu}{}_{\beta]\alpha}\right)\,, \qquad (2)
\end{aligned}$$
where on the second line we have expressed the Ricci tensor $R_{\alpha\beta}$
in terms of the connection $\Gamma^{\gamma}_{\alpha\beta}$, and on the third
line we have decomposed the connection as
$\Gamma^{\gamma}_{\alpha\beta}=\mathring{\Gamma}^{\gamma}_{\alpha\beta}+L^{\gamma}{}_{\alpha\beta}$,
where $\mathring{\Gamma}^{\gamma}_{\alpha\beta}$ is the Levi–Civita connection
and $L^{\gamma}{}_{\alpha\beta}$ is the disformation tensor; $\mathring{}$
refers to quantities defined with the Levi–Civita connection. As derivatives
of $L^{\gamma}{}_{\alpha\beta}$ appear only in the linear term, which is a
total derivative, the equation of motion for $L^{\gamma}{}_{\alpha\beta}$ is
algebraic. Thus the connection does not involve any new degrees of freedom.
For the Einstein–Hilbert action the equation of motion shows that
$L^{\gamma}{}_{\alpha\beta}$ vanishes up to the projective transformation
$L^{\gamma}{}_{\alpha\beta}\to
L^{\gamma}{}_{\alpha\beta}+\delta^{\gamma}{}_{\beta}\sigma_{\alpha}$
($\sigma_{\alpha}$ is an arbitrary vector). The projective transformation does
not change the action and is unphysical.\footnote{We could analogously write $F_{\alpha\beta}=2A_{\alpha\beta}+C_{\alpha\beta}$ on the second line of (1), giving the Lagrangian $-A_{\alpha\beta}A^{\alpha\beta}+\frac{1}{4}C_{\alpha\beta}C^{\alpha\beta}$. It is then transparent that variation with respect to $C_{\alpha\beta}$ gives $C_{\alpha\beta}=0$, and there is no counterpart to the accidental projective symmetry of the gravity sector.} We could equivalently vary with respect to the
full $\Gamma^{\gamma}_{\alpha\beta}$, obtain the Levi–Civita connection (up to
the projective transformation) and insert it back into the action to recover
the Einstein–Hilbert action in the metric formulation.
A gravity action that is non-linear in the derivatives of the connection can
lead to new dynamical degrees of freedom. Alternatively, it can simply change
the relation between existing degrees of freedom, as in the case when the
action depends non-linearly on the Ricci scalar [8, 9, 10, 11]. If we include
matter coupled to the connection (but not its derivatives), then the algebraic
equation of motion in general gives a non-zero solution for
$L^{\gamma}{}_{\alpha\beta}$, which, when inserted back into the action, leads
to modified interactions for matter.
The dependence on the dynamical field (the metric or the gauge field) is more
complicated in the gravity case than in the gauge case, but the essential
similarity is that both actions (1) and (2) are quadratic in the field
strength. If we include terms that couple other fields directly to
$F_{\alpha\beta}$, the solution will no longer be
$F_{\alpha\beta}=2A_{\alpha\beta}$, and the equation of motion for
$A_{\alpha\beta}$ will be modified, in analogy to the gravity sector in the
Palatini case. As $F_{\alpha\beta}$ is an independent field, it does not have
to be antisymmetric like $A_{\alpha\beta}$. Analogously, in the gravitational
Palatini formulation it is not necessary for the connection to be symmetric as
in the metric case.\footnote{There are also versions of the Palatini formulation where constraints are imposed on the connection a priori. The most common constraints are taking the connection to be symmetric, or the non-metricity to be zero. The former condition is sometimes considered to be part of the definition of the Palatini formulation, with the term metric-affine formulation reserved for the case without constraints. A theory with the latter condition is known as Einstein–Cartan gravity. Another common constraint is taking the non-metricity tensor to be proportional to a vector field, $\nabla_{\gamma}g_{\alpha\beta}=V_{\gamma}g_{\alpha\beta}$.} We compare
the gravity and the gauge case in table 1, starting with the symmetry of the
dynamical field that constrains how it may appear in the action. Let us now
look at how non-minimal couplings affect the symmetric and antisymmetric parts
of $F_{\alpha\beta}$.
| | gravity sector | gauge sector |
|---|---|---|
| dynamical field | $g_{\alpha\beta}$ | $A_{\alpha}$ |
| assumed symmetry | $x^{\alpha}\to x^{\prime}{}^{\alpha}(x)$ | $A_{\alpha}\to A_{\alpha}+\partial_{\alpha}\sigma$ |
| accidental symmetry | $\Gamma^{\gamma}_{\alpha\beta}\to\Gamma^{\gamma}_{\alpha\beta}+\delta^{\gamma}{}_{\beta}\sigma_{\alpha}$ | $-$ |
| Palatini field strength | $\Gamma^{\gamma}_{\alpha\beta}$ | $F_{\alpha\beta}$ |
| usual field strength | $\mathring{\Gamma}^{\gamma}_{\alpha\beta}$ | $2\partial_{[\alpha}A_{\beta]}$ |
| change in the usual components of the field strength | $\Gamma^{\gamma}_{(\alpha\beta)}-\mathring{\Gamma}^{\gamma}_{\alpha\beta}$ | $F_{[\alpha\beta]}-2\partial_{[\alpha}A_{\beta]}$ |
| new field strength components | $\Gamma^{\gamma}_{[\alpha\beta]}$ | $F_{(\alpha\beta)}$ |
Table 1: Comparison of the Palatini formulation in the gravity and in the
gauge sector.
### 2.2 Non-minimal coupling to a scalar field
We generalise the action (1) by including direct couplings between the field
strength and a real scalar field. In the gravity sector, we include a non-
minimal coupling to the Ricci scalar. We assume that gravity too is described
by the Palatini formulation; in section 3 we comment on what would change in
the metric formulation. We will later take the gauge field and the scalar
field to be the $U(1)$ field and the Higgs field of the Standard Model,
respectively, and consider implications for Higgs inflation [12, 13] (for
reviews, see [14, 15, 16]). For the moment they remain general. In the gauge
sector, we keep the action linear in $A_{\alpha}$, and consider all algebraic
terms up to second order in the field strength $F_{\alpha\beta}$ and
$A_{\alpha}$, and at most linear in the kinetic term of the scalar field. The
action is
$$\begin{aligned}
S = \int\text{d}^{4}x\sqrt{-g}\Big\{ &\frac{1}{2}f(\varphi)R-\frac{1}{2}X-V(\varphi)+a_{1}(\varphi,X)A_{\alpha\beta}F^{\alpha\beta}+a_{2}(\varphi,X){}^{*}A_{\alpha\beta}F^{\alpha\beta} \\
&+\left[a_{3}(\varphi)A^{\alpha}{}_{\mu}F^{\mu\beta}+a_{4}(\varphi)A^{\alpha}{}_{\mu}F^{\beta\mu}+a_{5}(\varphi){}^{*}A^{\alpha}{}_{\mu}F^{\mu\beta}+a_{6}(\varphi){}^{*}A^{\alpha}{}_{\mu}F^{\beta\mu}\right]X_{\alpha\beta} \\
&+\tilde{b}_{1}(\varphi,X)F^{\alpha\beta}F_{\alpha\beta}+\tilde{b}_{2}(\varphi,X)F^{\alpha\beta}F_{\beta\alpha}+b_{3}(\varphi,X)^{*}F^{\alpha\beta}F_{\alpha\beta}+c_{1}(\varphi,X)F^{\alpha}{}_{\alpha}+c_{2}(\varphi,X)(F^{\alpha}{}_{\alpha})^{2} \\
&+\left[d_{1}(\varphi)F^{\alpha\beta}+d_{2}(\varphi)F^{\mu}{}_{\mu}F^{\alpha\beta}+d_{3}(\varphi)F^{\alpha\mu}F_{\mu}{}^{\beta}+d_{4}(\varphi)F^{\mu\alpha}F_{\mu}{}^{\beta}+d_{5}(\varphi)F^{\alpha\mu}F^{\beta}{}_{\mu}\right. \\
&\left.+d_{6}(\varphi)^{*}F^{\alpha\mu}F_{\mu}{}^{\beta}+d_{7}(\varphi)^{*}F^{\alpha\mu}F^{\beta}{}_{\mu}\right]X_{\alpha\beta}\Big\}\,, \qquad (3)
\end{aligned}$$
where $R\equiv g^{\alpha\beta}R_{\alpha\beta}$,
$X_{\alpha\beta}\equiv\partial_{\alpha}\varphi\partial_{\beta}\varphi$,
$X\equiv g^{\alpha\beta}X_{\alpha\beta}$, and
${}^{*}F^{\alpha\beta}\equiv\frac{1}{2}\epsilon^{\alpha\beta\mu\nu}F_{\mu\nu}$.
By our assumptions, the coefficient functions are at most linear in $X$. While
$\varphi$ and $A_{\alpha\beta}$ are even and ${}^{*}A_{\alpha\beta}$ is odd
under parity, the transformation of $F_{\alpha\beta}$ under parity is not defined a priori; it is determined by the solution to the equation of motion.
We decompose the first two terms on the second line of (3) as
$\tilde{b}_{1}F^{\alpha\beta}F_{\alpha\beta}+\tilde{b}_{2}F^{\alpha\beta}F_{\beta\alpha}=b_{1}F^{(\alpha\beta)}F_{(\alpha\beta)}+b_{2}F^{[\alpha\beta]}F_{[\alpha\beta]}$,
where $b_{1}\equiv\tilde{b}_{1}+\tilde{b}_{2}$ and
$b_{2}\equiv\tilde{b}_{1}-\tilde{b}_{2}$. This makes it transparent that only
the $d_{i}$ terms (with $i=3\ldots 7$) couple the symmetric part
$F_{(\alpha\beta)}$ and the antisymmetric part $F_{[\alpha\beta]}$. The terms
linear in $F_{\alpha\beta}$, with coefficients $a_{i}$, $c_{1}$ and $d_{1}$,
give the source terms for $F_{\alpha\beta}$.
Minimising the action with respect to $F_{\alpha\beta}$ gives the equations
$$\begin{aligned}
0 = {} & 2b_{1}F_{(\alpha\beta)}+(c_{1}+2c_{2}F)g_{\alpha\beta}+(d_{1}+d_{2}F)X_{\alpha\beta}+d_{2}F^{\mu\nu}X_{\mu\nu}g_{\alpha\beta}+(d_{3}+d_{4}+d_{5})(F_{(\alpha\mu)}X^{\mu}{}_{\beta}+F_{(\beta\mu)}X^{\mu}{}_{\alpha}) \\
&+(d_{4}-d_{5})(F_{[\alpha\mu]}X^{\mu}{}_{\beta}+F_{[\beta\mu]}X^{\mu}{}_{\alpha})+(d_{6}+d_{7})^{*}F_{\mu(\alpha}X^{\mu}{}_{\beta)}+(a_{3}+a_{4})A_{\mu(\alpha}X^{\mu}{}_{\beta)}+(a_{5}+a_{6}){}^{*}A_{\mu(\alpha}X^{\mu}{}_{\beta)}\,, \qquad (4) \\
0 = {} & a_{1}A_{\alpha\beta}+a_{2}{}^{*}A_{\alpha\beta}+2b_{2}F_{[\alpha\beta]}+2b_{3}{}^{*}F_{\alpha\beta}+(a_{3}-a_{4})A_{\mu[\alpha}X^{\mu}{}_{\beta]}+(a_{5}-a_{6}){}^{*}A_{\mu[\alpha}X^{\mu}{}_{\beta]} \\
&+(d_{4}-d_{5})(F_{(\alpha\mu)}X^{\mu}{}_{\beta}-F_{(\beta\mu)}X^{\mu}{}_{\alpha})+(-d_{3}+d_{4}+d_{5})(F_{[\alpha\mu]}X^{\mu}{}_{\beta}-F_{[\beta\mu]}X^{\mu}{}_{\alpha})+(d_{6}-d_{7})^{*}F^{\mu}{}_{[\alpha}X_{\beta]\mu} \\
&-\frac{1}{2}\epsilon_{\alpha\beta}{}^{\mu\nu}\left[(d_{6}+d_{7})F_{(\mu\rho)}+(d_{6}-d_{7})F_{[\mu\rho]}\right]X^{\rho}{}_{\nu}\,, \qquad (5)
\end{aligned}$$
where
${}^{*}A_{\alpha\beta}\equiv\frac{1}{2}\epsilon_{\alpha\beta\mu\nu}A^{\mu\nu}$.
The general solution of these linear equations is rather involved. We only consider the solution to first order in $X_{\alpha\beta}$, which is sufficient for slow-roll inflation.\footnote{Kinetic terms beyond linear order in $X_{\alpha\beta}$ could be useful for kinetically driven inflation, and it would be straightforward to include higher order terms in the action and the solution [17, 18].} We can then solve the equations iteratively, with the
result
$$\begin{aligned}
F_{(\alpha\beta)} = {} & -\frac{c_{1}}{2b_{1}+8c_{2}}g_{\alpha\beta}+\frac{2(b_{1}+4c_{2})c_{2}d_{1}+b_{1}c_{1}d_{2}-2c_{1}c_{2}(2d_{2}+d_{3}+d_{4}+d_{5})}{4b_{1}(b_{1}+4c_{2})^{2}}Xg_{\alpha\beta} \\
&-\frac{(b_{1}+4c_{2})d_{1}-c_{1}(2d_{2}+d_{3}+d_{4}+d_{5})}{2b_{1}(b_{1}+4c_{2})}X_{\alpha\beta} \\
&+\frac{2\alpha_{1}(d_{4}-d_{5})-\alpha_{2}(d_{6}+d_{7})-a_{3}-a_{4}}{2b_{1}}A_{\mu(\alpha}X^{\mu}{}_{\beta)}+\frac{2\alpha_{2}(d_{4}-d_{5})-\alpha_{1}(d_{6}+d_{7})-a_{5}-a_{6}}{2b_{1}}{}^{*}A_{\mu(\alpha}X^{\mu}{}_{\beta)} \\
F_{[\alpha\beta]} = {} & \alpha_{1}A_{\alpha\beta}+\alpha_{2}{}^{*}A_{\alpha\beta}+(\alpha_{3}A_{\mu[\alpha}+\alpha_{4}{}^{*}A_{\mu[\alpha})X^{\mu}{}_{\beta]}+\frac{1}{2}\alpha_{5}\epsilon_{\alpha\beta}{}^{\mu\nu}A_{\mu\rho}X^{\rho}{}_{\nu}\,. \qquad (6)
\end{aligned}$$
The coefficients $\alpha_{i}$ are lengthy functions of $a_{i}$, $b_{i}$, and
$d_{i}$, and we do not need them for our slow-roll inflation analysis, so they
are relegated to the appendix. Note that $\alpha_{1}$ and
$\alpha_{2}$ include terms proportional to $X$. The field strength
$F_{\alpha\beta}$ has a mixture of even and odd parity terms. For minimal
coupling, we would have $\alpha_{i}=0$ for $i\geq 3$, $F_{(\alpha\beta)}=0$,
and only $A_{\alpha\beta}$ and its dual ${}^{*}A_{\alpha\beta}$ would remain
in the solution for $F_{[\alpha\beta]}$, with constant coefficients. Were we
to impose the constraint that $F_{\alpha\beta}$ is antisymmetric, we would get
only the second equation in (6). (To linear order in $X_{\alpha\beta}$, the
solution for the antisymmetric part does not depend on the couplings of the
symmetric part; the reverse is not true. This does not hold at second order
and beyond.) This would be analogous to the gravitational Palatini formulation
in the case where the connection is taken to be symmetric. There the non-
metric contributions modify the symmetric part of the connection, and in the
gauge case, the antisymmetric part of the field strength would also deviate
from the standard result $F_{\alpha\beta}=2A_{\alpha\beta}$ due to the non-
minimal couplings.
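The leading (zeroth order in $X_{\alpha\beta}$) term of the solution (6) can be reproduced from equation (4) by hand. A sympy sketch, using a pure-trace ansatz $F_{(\alpha\beta)}=\lambda g_{\alpha\beta}$ in four dimensions (here `b1`, `c1`, `c2` stand for the coupling functions evaluated at $X=0$):

```python
import sympy as sp

b1, c1, c2, lam = sp.symbols("b1 c1 c2 lam")

# Eq. (4) with X_ab = 0 reduces to 2 b1 F_(ab) + (c1 + 2 c2 F) g_ab = 0;
# with F_(ab) = lam * g_ab, the trace in 4 dimensions is F = 4*lam.
eom = 2 * b1 * lam + c1 + 2 * c2 * (4 * lam)
lam_sol = sp.solve(eom, lam)[0]

# Matches the first term of the solution (6): F_(ab) = -c1/(2 b1 + 8 c2) g_ab
assert sp.simplify(lam_sol + c1 / (2 * b1 + 8 * c2)) == 0
print(lam_sol)
```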
We do not impose any constraints on $F_{\alpha\beta}$, and insert the solution (6) back
into the action (3), which then involves only $\varphi$ and $A_{\alpha}$. We
expand $b_{i}(\varphi,X)=b_{i}^{(0)}(\varphi)+b_{i}^{(1)}(\varphi)X$, and
correspondingly for $c_{i}$. The action is then, to linear order in
$X_{\alpha\beta}$,
$$\begin{aligned}
S &= \int\text{d}^{4}x\sqrt{-g}\left[\frac{1}{2}fR-\frac{1}{2}X-\frac{1}{2}\Delta K\,X-V-\Delta V+\beta_{1}A_{\alpha\beta}A^{\alpha\beta}+\beta_{2}A_{\alpha\beta}{}^{*}A^{\alpha\beta}+\left(\beta_{3}A^{\alpha\mu}A^{\beta}{}_{\mu}+\beta_{4}A^{\alpha\mu}{}^{*}A^{\beta}{}_{\mu}\right)X_{\alpha\beta}\right] \\
&= \int\text{d}^{4}x\sqrt{-g}\left[\frac{1}{2}R-\frac{1}{2}\frac{1+\Delta K}{f}X-\frac{V+\Delta V}{f^{2}}+\beta_{1}A_{\alpha\beta}A^{\alpha\beta}+\beta_{2}A_{\alpha\beta}{}^{*}A^{\alpha\beta}+\left(\beta_{3}A^{\alpha\mu}A^{\beta}{}_{\mu}+\beta_{4}A^{\alpha\mu}{}^{*}A^{\beta}{}_{\mu}\right)X_{\alpha\beta}\right] \\
&\equiv \int\text{d}^{4}x\sqrt{-g}\left[\frac{1}{2}R-\frac{1}{2}KX-U+\beta_{1}A_{\alpha\beta}A^{\alpha\beta}+\beta_{2}A_{\alpha\beta}{}^{*}A^{\alpha\beta}+\left(\beta_{3}A^{\alpha\mu}A^{\beta}{}_{\mu}+\beta_{4}A^{\alpha\mu}{}^{*}A^{\beta}{}_{\mu}\right)X_{\alpha\beta}\right]\,, \qquad (7)
\end{aligned}$$
where the gauge field term coefficients $\beta_{i}$ (which we do not need for the analysis of slow-roll inflation) are given in the appendix. In the second equality we have carried out the conformal transformation $g_{\alpha\beta}\to f^{-1}g_{\alpha\beta}$ to make the scalar field minimally coupled.\footnote{Note that we cannot shift the direct coupling of the scalar field to $A_{\alpha\beta}A^{\alpha\beta}$ to the scalar sector as we do with the direct coupling to $R$, as $\sqrt{-g}A_{\alpha\beta}A^{\alpha\beta}$ is invariant under the conformal transformation.} In the last equality we have defined
$K\equiv(1+\Delta K)/f$, $U\equiv(V+\Delta V)/f^{2}$. This action contains
both even and odd parity terms. The additive contributions to the kinetic term
and the potential coming from the non-minimal coupling to the field strength
are
$$\begin{aligned}
\Delta K &= -c_{1}^{(0)}\,\frac{-2(b_{1}^{(0)}+4c_{2}^{(0)})d_{1}+c_{1}^{(0)}(4d_{2}+d_{3}+d_{4}+d_{5})+4c_{1}^{(0)}(b_{1}^{(1)}+4c_{2}^{(1)})-8c_{1}^{(1)}(b_{1}^{(0)}+4c_{2}^{(0)})}{2(b_{1}^{(0)}+4c_{2}^{(0)})^{2}} \\
\Delta V &= \frac{c_{1}^{(0)}{}^{2}}{b_{1}^{(0)}+4c_{2}^{(0)}}\,. \qquad (8)
\end{aligned}$$
For stability, the coupling functions have to be such that $U$ is bounded from
below. We should also have $K>0$, unless we include higher order terms in
$X_{\alpha\beta}$, which could make the sum of the kinetic sector terms
positive even if $K<0$.
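The expression for $\Delta V$ in (8) can be verified by inserting the zeroth-order pure-trace solution back into the quadratic Lagrangian terms. A sympy sketch (with `b1`, `c1`, `c2` standing for $b_{1}^{(0)}$, $c_{1}^{(0)}$, $c_{2}^{(0)}$):

```python
import sympy as sp

b1, c1, c2 = sp.symbols("b1 c1 c2", positive=True)

# Zeroth-order solution for the symmetric part: F_(ab) = lam * g_ab
lam = -c1 / (2 * b1 + 8 * c2)
F = 4 * lam  # trace in 4 dimensions

# Relevant Lagrangian terms at X = 0: b1 F^(ab)F_(ab) + c1 F + c2 F^2,
# with F^(ab)F_(ab) = 4 lam^2 for a pure-trace F_(ab)
L_terms = b1 * 4 * lam**2 + c1 * F + c2 * F**2

# These terms equal -Delta V, reproducing eq. (8): Delta V = c1^2/(b1 + 4 c2)
assert sp.simplify(L_terms + c1**2 / (b1 + 4 * c2)) == 0
print(sp.simplify(L_terms))
```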
If the scalar field is constant, then only the terms
$A_{\alpha\beta}A^{\alpha\beta}$ and $A_{\alpha\beta}{}^{*}A^{\alpha\beta}$
(and vacuum energy) remain, with constant coefficients. The latter is a total
derivative, and so does not affect the equations of motion. The former gives
the standard $U(1)$ Lagrangian (up to an irrelevant constant rescaling of
$A_{\alpha\beta}$). If the scalar field is not constant, the non-minimal
couplings break the conformal symmetry of the gauge field, and could be used
for magnetogenesis, with the parity odd terms leading to helical magnetic
fields [19]. They could also be important during reheating, and when
calculating loop corrections.
Let us compare the effect of the non-minimal couplings to the field strength
in the gravity and in the gauge sector on the scalar field. In the Palatini
formulation for gravity, the non-minimal coupling $f$ to the Ricci scalar is
shifted to the scalar sector where it manifests as the multiplicative factors
$1/f$ in the kinetic term and $1/f^{2}$ in the potential. In contrast, the
non-minimal couplings to the gauge field strength manifest as an additive term
in both the kinetic term and the potential. Modifications in either sector
lead to novel effects on their own, and also have interesting interplay. Let
us consider this in the case of polynomial coupling functions.
## 3 The case of polynomial coupling functions
We consider inflation at the classical level, and put $A_{\alpha}=0$. The
scalar field kinetic term in (8) contains 11 coupling functions, and the
potential depends on 3 functions. The behaviour of the theory depends strongly
on how these functions are chosen. We look at the case when all the functions
are polynomial, and include operators up to dimension $D\geq 4$ in the
Lagrangian. The coupling functions can then be written as follows:
$$\begin{aligned}
f(\varphi) &= \sum_{n=0}^{D-2}f_{n}\varphi^{n}\,, \quad b_{i}^{(k)}(\varphi)=\sum_{n=0}^{D-4-4k}b_{in}^{(k)}\varphi^{n}\,, \quad c_{1}^{(k)}(\varphi)=\sum_{n=0}^{D-2-4k}c_{1n}^{(k)}\varphi^{n}\,, \\
c_{2}^{(k)}(\varphi) &= \sum_{n=0}^{D-4-4k}c_{2n}^{(k)}\varphi^{n}\,, \quad d_{1}(\varphi)=\sum_{n=0}^{D-6}d_{1n}\varphi^{n}\,, \quad d_{i}(\varphi)=\sum_{n=0}^{D-8}d_{in}\varphi^{n}\ \text{ for }i>1\,. \qquad (9)
\end{aligned}$$
In particular, $\varphi$ could be the radial mode of the Standard Model Higgs,
in which case only even powers of the field appear; we include only even powers. We first consider the case when only terms up to $D=4$ are included.
Then the original potential is
$V=\frac{1}{2}m^{2}\varphi^{2}+\frac{1}{4}\varphi^{4}$, and the only non-zero
coupling functions are
$$f = M^{2}+\xi\varphi^{2}\,, \quad b_{1}^{(0)}=\text{constant}\,, \quad c_{1}^{(0)}\propto\varphi^{2}\,, \quad c_{2}^{(0)}=\text{constant}\,, \qquad (10)$$
where $M$ and $\xi$ are constants. The new additive contribution to the
potential is $\Delta V\propto c_{1}^{(0)}{}^{2}\propto\varphi^{4}$, so it has the same highest power of $\varphi$ as the original potential $V$. There is no
additive contribution to the kinetic term. So there is no change to the scalar
Lagrangian coming from the gauge sector, apart from a redefinition of the
quartic coupling of the scalar field.
If we include operators up to dimension $D>4$, the situation changes. The
potential and the kinetic term become rational functions, and it may be
possible to realise different inflationary scenarios by tuning the
coefficients, as in [20, 21]. Let us consider only the leading terms in (9) in
the limit of large $\varphi$. We first look only at the terms coming from the
gauge sector, neglecting the non-minimal coupling to gravity, i.e. we put
$f=1$. The leading behaviour of the additive contribution to the potential is
$\Delta V\propto\varphi^{D}$, the same as that of the original potential $V$. All of the
leading terms in the numerator (and separately in the denominator) of the
additive contribution $\Delta K$ to the kinetic term have the same power of
$\varphi$, and we get $K\propto\varphi^{D-4}$. Defining the scalar field
$\chi$ with a canonical kinetic term as
$$\chi = \pm\int\text{d}\varphi\sqrt{K(\varphi)}\,, \qquad (11)$$
we get the leading behaviour $\chi\propto\varphi^{\frac{D-2}{2}}$, i.e.
$\varphi\propto\chi^{\frac{2}{D-2}}$. In terms of the canonical field, the
potential is thus $U=V+\Delta V\propto\chi^{\frac{2D}{D-2}}$. For $D=4$ we
recover the earlier result $U\propto\chi^{4}$. For $D=6$ we get
$U\propto\chi^{3}$. In the limit of large $D$, when we include more and more
non-renormalisable terms in the spirit of effective field theory, the
potential approaches the form $U\propto\chi^{2}$. So higher powers of the
field, due to for example quantum corrections, do not spoil the flatness of
the potential, but instead make it closer to quadratic. Such a potential is by
itself not viable for inflation, because even though the spectral index agrees
with observations, the tensor-to-scalar ratio is above the current
observational upper bound [22, 23]. Including an $R^{2}$ term in the action
will bring the tensor-to-scalar ratio down without affecting the spectral
index [24]. An $R_{(\alpha\beta)}R^{(\alpha\beta)}$ term has the same effect,
while terms higher order in $R_{(\alpha\beta)}$ also change the spectral index
[25, 26].
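The power counting leading to $U\propto\chi^{2D/(D-2)}$ can be tabulated for a few values of $D$; a small sympy sketch:

```python
import sympy as sp

D = sp.symbols("D", positive=True)

# Large-phi behaviour with f = 1: K ~ phi^(D-4), so eq. (11) gives
# chi ~ phi^((D-2)/2), i.e. phi ~ chi^(2/(D-2)), and
# U = V + Delta V ~ phi^D ~ chi^(2D/(D-2)).
U_exponent = 2 * D / (D - 2)

assert U_exponent.subs(D, 4) == 4           # D = 4: quartic potential
assert U_exponent.subs(D, 6) == 3           # D = 6: U ~ chi^3
assert sp.limit(U_exponent, D, sp.oo) == 2  # D -> infinity: approaches quadratic
print([U_exponent.subs(D, d) for d in (4, 6, 8, 12)])
```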
Let us now look at the interplay of the non-minimal couplings in the gauge and
the gravity sector. The leading contribution from the gauge sector is $\Delta
K\propto\varphi^{D-4}$, which for $D>4$ dominates over the original kinetic
term that is simply unity. Because of the gravity sector, the kinetic term is
divided by $f\propto\varphi^{D-2}$, so we overall have $K\propto\varphi^{-2}$.
Independent of the value of $D$, according to (11) this leads to
$\chi\propto\ln(\varphi/\varphi_{0})$, where $\varphi_{0}$ is a constant. The
potential $V+\Delta V\propto\varphi^{D}$ is divided by $f^{2}$, so we get
$U\propto\varphi^{4-D}\propto e^{-(D-4)\gamma\chi}$. For $D>4$, such an
exponential potential does not give slow roll in agreement with observations,
unless the coefficient $\gamma$ (which is a combination of the coefficients
that appear in the kinetic term) is tuned to be very small. This is a known
problem for Higgs inflation if terms with ever higher powers of the field are
added to the action in the Jordan frame. One proposed solution is to assume
that the classical asymptotic shift symmetry in the Einstein frame extends to
the quantum-corrected potential [14, 16, 27, 28, 29, 30, 31, 32, 33].
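The exponential potential and the need to tune $\gamma$ can be made concrete with the first slow-roll parameter $\epsilon=\frac{1}{2}(U'/U)^{2}$. A sympy sketch, with the assumed normalisation $K=1/(\gamma\varphi)^{2}$:

```python
import sympy as sp

phi, chi = sp.symbols("phi chi", positive=True)
D, gamma = sp.symbols("D gamma", positive=True)

# K ~ 1/(gamma*phi)^2, so eq. (11) gives a logarithmic canonical field
chi_of_phi = sp.integrate(1 / (gamma * phi), phi)
assert sp.simplify(chi_of_phi - sp.log(phi) / gamma) == 0

# With phi = exp(gamma*chi), the potential U ~ phi^(4-D) is exponential
U = sp.exp(-(D - 4) * gamma * chi)

# The first slow-roll parameter is constant; it is small only for small gamma
epsilon = sp.simplify((sp.diff(U, chi) / U) ** 2 / 2)
assert sp.simplify(epsilon - gamma**2 * (D - 4) ** 2 / 2) == 0
print(epsilon)
```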
The origin of the problem is that the non-minimal coupling
$f\propto\varphi^{D-2}$ to the Ricci scalar appears quadratically in the
potential, which goes like $V\propto\varphi^{D}$, so
$V/f^{2}\propto\varphi^{4-D}$ asymptotes to a constant only if $D=4$. Solving
the problem requires a new contribution to the potential that is quadratic in
$\varphi^{D-2}$, like $f$. The non-minimal coupling to $F^{\alpha}{}_{\alpha}$
does this: we see from (7) and (3) that the additive contribution to the
potential is proportional to $c_{1}^{(0)}{}^{2}\propto\varphi^{2(D-2)}$.
However, this term is divided by the non-minimal couplings to
$F_{\alpha\beta}F^{\alpha\beta}$ and $(F^{\alpha}{}_{\alpha})^{2}$, which are
$\propto\varphi^{D-4}$. If there were a principle forbidding the scalar field from coupling to terms non-linear in the field strength, the potential would remain asymptotically flat for all $D$.
As inflation happens at finite field value, the contribution of higher order
terms can be small, and the potential flat, if their coefficients are small.
It is a quantitative question how large the coefficients can be without
spoiling successful inflation. If we have non-minimal coupling only in the
gravity sector, Higgs inflation is much more sensitive to higher order terms
in the Palatini formulation than in the metric formulation [34]. This is
related to the fact that although the potential is multiplied by $1/f^{2}$ in
both cases, the kinetic term is different. In both formulations it is
multiplied by $1/f$, but in the metric case there is also an additive
contribution $\propto(f_{,\varphi}/f)^{2}$. For $f\propto\varphi^{D-2}$, this
contribution is proportional to $\varphi^{-2}$ for all values of $D$. With
$D>4$ this term dominates over the $1/f\propto\varphi^{2-D}$ term, and we get
$\chi\propto\ln(\varphi/\varphi_{0})$ for all $D\geq 4$. In contrast, in the
Palatini formulation, we have $\chi\propto\varphi^{\frac{4-D}{2}}$ for $D>4$.
The coefficients are also different, so the formulations do not agree even for
$D=4$. As discussed above, including non-minimal coupling both in the gauge
and in the gravity sector makes the field relation logarithmic for all $D\geq
4$ also in the Palatini formulation, which may affect the sensitivity to
higher order corrections found in [34]. However, this cannot be determined by
considering the large field limit, but requires a more detailed analysis.
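The different field redefinitions in the metric and Palatini formulations can be checked symbolically; a sympy sketch with $f\propto\varphi^{D-2}$:

```python
import sympy as sp

phi, D = sp.symbols("phi D", positive=True)

# Non-minimal coupling at large field values
f = phi**(D - 2)

# Metric formulation: the extra kinetic piece ~ (f'/f)^2 scales as phi^(-2)
# for every D
extra = sp.simplify((sp.diff(f, phi) / f) ** 2)
assert sp.simplify(extra - (D - 2) ** 2 / phi**2) == 0

# Palatini formulation (gravity-sector coupling only): K = 1/f, so e.g. for
# D = 6, chi = int phi^(-2) dphi ~ phi^(-1) = phi^((4-D)/2)
chi6 = sp.integrate(phi ** sp.Integer(-2), phi)
assert chi6 == -1 / phi
print(extra, chi6)
```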
## 4 Discussion
So far we have considered an Abelian gauge theory. If we instead consider a
non-Abelian group, the gauge invariant term for the gauge field is
$A_{\alpha\beta}^{a}\equiv\partial_{[\alpha}A^{a}_{\beta]}+\frac{1}{2}ig[A_{\alpha},A_{\beta}]^{a}$,
where $a$ is a gauge group index and $g$ is the gauge coupling. The
corresponding field strength $F^{a}_{\alpha\beta}$ has three indices, like the
affine connection $\Gamma^{\gamma}_{\alpha\beta}$. In the gravity case all
indices refer to spacetime structure, so it is possible to source the
connection with any kind of field, because derivatives can provide indices. In
contrast, in the non-Abelian gauge case, we need a source term with the right
gauge structure to give the index $a$. If the scalar field is a gauge singlet,
the only source term is $A_{\alpha\beta}^{a}$, and hence $F_{\alpha\beta}^{a}$
will be proportional to $A_{\alpha\beta}^{a}$. Therefore there is no change in
the pure scalar sector relevant for inflation. If the scalar field is in the
fundamental representation (like the Standard Model Higgs), it can act as a
source for $F_{\alpha\beta}^{a}$ provided it appears in even powers. In this
sense, there is not much difference with the Abelian case. If the scalar field
and the gauge field (whether Abelian or non-Abelian) are Standard Model
fields, the couplings between them in (7) are constrained by collider
observations. However, the constraints are extremely weak, because the
derivative couplings are suppressed by the Planck scale. For example, consider
$\beta_{1},\beta_{2}\propto\varphi^{2}$ and $\beta_{3},\beta_{4}=$ constant.
The $\beta_{1},\beta_{2}$ terms lead to interactions proportional to (dropping
all indices) $(E/M_{{}_{\mathrm{Pl}}})^{2}A^{2}\varphi^{2}$, where $E$ is the
energy, the field $A$ is either the photon, $W$ or $Z$, and we have restored
the reduced Planck mass. This four-vertex is suppressed relative to the
Standard Model contribution by the factor
$(E/M_{{}_{\mathrm{Pl}}})^{2}\lesssim 10^{-29}$ at currently accessible
collider energies. The $\beta_{3},\beta_{4}$ terms lead to similar
interactions with one extra suppression factor of
$(E/M_{{}_{\mathrm{Pl}}})^{2}$.
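As a rough order-of-magnitude check of the quoted bound, assuming a representative collider energy $E\sim 10\,\mathrm{TeV}$ and the reduced Planck mass $M_{{}_{\mathrm{Pl}}}\approx 2.4\times 10^{18}\,\mathrm{GeV}$:

```latex
\left(\frac{E}{M_{{}_{\mathrm{Pl}}}}\right)^{2}
\sim\left(\frac{10^{4}\,\mathrm{GeV}}{2.4\times 10^{18}\,\mathrm{GeV}}\right)^{2}
\approx 2\times 10^{-29}\,,
```

consistent with the $\lesssim 10^{-29}$ suppression stated above.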
Including a fermion field $\psi$ leads to a number of possible new terms. The
simplest linear couplings to $F_{\alpha\beta}$ (to generate a source term) are
$\bar{\psi}\psi F^{\alpha}{}_{\alpha}$ and $\bar{\psi}\gamma^{A}\gamma^{B}\psi
e^{\alpha}{}_{A}e^{\beta}{}_{B}F_{[\alpha\beta]}$, where $\gamma^{A}$ is a
gamma matrix and $e^{\alpha}{}_{A}$ is a tetrad. Solving for the field
strength, these lead to the extra contributions
$F_{(\alpha\beta)}\propto\bar{\psi}\psi g_{\alpha\beta}$ and
$F_{[\alpha\beta]}\propto\bar{\psi}[\gamma^{A},\gamma^{B}]\psi e_{\alpha
A}e_{\beta B}$ in the solution (2.2) for $F_{\alpha\beta}$. The symmetric
contribution, when inserted back into the action, leads (at zeroth order in
$X_{\alpha\beta}$) to $\bar{\psi}\psi$ and $(\bar{\psi}\psi)^{2}$ multiplied
by functions of the scalar field. The antisymmetric contribution leads to
$\bar{\psi}\gamma^{A}\gamma^{B}\psi
e^{\alpha}{}_{A}e^{\beta}{}_{B}A_{\alpha\beta}$,
$\bar{\psi}\gamma^{A}\gamma^{B}\psi
e^{\alpha}{}_{A}e^{\beta}{}_{B}{}^{*}A_{\alpha\beta}$, and
$\bar{\psi}[\gamma^{A},\gamma^{B}]\psi\bar{\psi}[\gamma_{A},\gamma_{B}]\psi$.
In the non-Abelian case, depending on the representation structure of the
fermions, symmetry may prevent bare mass terms like $\bar{\psi}\psi$ (as
happens in the Standard Model), but the term $\bar{\psi}\psi
F^{\alpha}{}_{\alpha}$ remains allowed, and will generate couplings between
the scalar field and the fermions that give mass terms, providing a mechanism
to generate masses with the gauge field strength without a scalar field.
Unless there is a fermion condensate during inflation, the fermion terms are
not expected to be relevant during inflation, but could be important for dark
matter production, as in the case of non-minimal couplings to fermions in the
gravity sector [35, 36].
Finally, we have assumed that the action does not contain terms higher than
quadratic in the field strength $F_{\alpha\beta}$. Let us consider what
happens if we allow terms of arbitrary power in $F_{\alpha\beta}$, but still
include only operators up to dimension $D$. The higher order terms lead to a
non-linear equation of motion for the gauge field. After solving for the field
strength the action will be non-polynomial in the gauge field, and will
contain an infinite number of powers of it. For electromagnetism such terms
are strongly constrained by observations. The constraints are satisfied if the
coefficients of the terms vanish (or are strongly suppressed) in the
electroweak vacuum, but the terms could still play a role in magnetogenesis or
during preheating. During inflation, if we restrict to coefficients that are
polynomial in $\varphi$, then at large fields we still have (to zeroth order
in $X_{\alpha\beta}$) $F_{(\alpha\beta)}\propto\varphi^{2}g_{\alpha\beta}$,
leading to an additive contribution to the potential that is
$\propto\varphi^{D}$ as in the quadratic case. Similarly, the leading
contribution to $K(\varphi)$ is still $\propto\varphi^{D-4}$, so there is no
qualitative change to the scalar sector.
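The exponent $\frac{2D}{D-2}$ quoted in the conclusions can be traced in a few lines. This is a schematic reconstruction, assuming (as stated above) that the leading kinetic function scales as $K(\varphi)\propto\varphi^{D-4}$ and the potential as $\varphi^{D}$ at large fields:

```latex
\mathrm{d}\chi\propto\sqrt{K(\varphi)}\,\mathrm{d}\varphi
\propto\varphi^{\frac{D-4}{2}}\,\mathrm{d}\varphi
\quad\Rightarrow\quad
\chi\propto\varphi^{\frac{D-2}{2}}\,,
\qquad
V\propto\varphi^{D}\propto\chi^{\frac{2D}{D-2}}\,,
```

so $D=4$ gives $\chi^{4}$, $D=6$ gives $\chi^{3}$, and the exponent approaches $2$ as $D\to\infty$.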
## 5 Conclusions
We have considered a formulation of gauge field theory where the field
strength $F_{\alpha\beta}$ is an independent variable, whose symmetric part
can be non-zero. We take the action to be linear in the gauge field, and
include non-minimal couplings between $F_{\alpha\beta}$ and a scalar field,
considering terms linear and quadratic in $F_{\alpha\beta}$, and linear in the
kinetic term of the scalar field. As in the Palatini formulation for gravity,
the field equation of the field strength then leads to a dependence on the
scalar field, not only on the gauge field. The theory is physically distinct
from the case where the field strength is a priori fixed in terms of the gauge
field. If the scalar field is constant, we recover usual gauge field theory
plus vacuum energy. Otherwise, there are new terms that involve the gauge
field and the scalar field.
We consider slow-roll inflation, with the gauge field set to zero. The effect
of the field strength, shifted to the scalar sector, gives an additive
contribution to the kinetic term and the potential of the scalar field. If the
non-minimal couplings are polynomial in the scalar field and we include only
operators up to dimension $D$ in the Lagrangian, then the resulting potential
is $\propto\chi^{\frac{2D}{D-2}}$, where $\chi$ is the canonical scalar field.
For $D=4$, there is no change, for $D=6$ the potential changes from the naive
result $\chi^{6}$ to $\chi^{3}$, and for larger values of $D$ the potential
approaches $\chi^{2}$. Adding higher order powers from loop corrections thus
does not spoil the flatness of the potential, but instead makes it closer to
quadratic. A quadratic potential leads to a tensor-to-scalar ratio above the
observational upper limit, which can be lowered by including an $R^{2}$ term in
the action if gravity is also described in the Palatini formulation [24].
In Higgs inflation, which relies on non-minimal coupling between gravity and
the Higgs field, the potential is asymptotically flat if only operators up to
$D=4$ are included. In the Palatini formulation for gravity, Higgs inflation
is more sensitive to higher order terms that break the asymptotic flatness
than in the metric formulation [34]. The reason is that the mapping between
the Jordan frame Higgs field and the Einstein frame canonical field is
different. In the large field limit, in the Palatini case the relation between
the Jordan frame Higgs field and the canonical field is a power-law unless
$D=4$ when it is logarithmic, whereas in the metric case it is logarithmic for
all $D\geq 4$, and the coefficients are also different. With the non-minimal
gauge field couplings we have discussed, the relation is logarithmic in the
Palatini case as well for all $D\geq 4$. It would be interesting to study
whether this has an impact on the sensitivity to higher order corrections. If
the scalar field couples only to terms linear in $F_{\alpha\beta}$, the
asymptotic flatness of the potential is preserved exactly both in the metric
and in the Palatini formulation of gravity.
If terms higher than quadratic in $F_{\alpha\beta}$ are included in the action
(keeping the couplings polynomial), the situation remains qualitatively the
same for the scalar field. The gauge field part of the action, however, becomes
non-polynomial, though it may still be well-behaved. If derivatives of the field
strength appear in the action, we may have extra degrees of freedom as in the
gravitational Palatini case, and the resulting theory can be unstable. The
modifications to the gauge field sector could be important for magnetogenesis
during inflation. They could also affect the duration of reheating and thus
have an impact on inflationary predictions.
## Acknowledgments
S.R. thanks Matti Heikinheimo for a useful discussion. Y. V. gratefully
acknowledges the Helsinki Institute of Physics of the University of Helsinki
for hospitality during the visit where this work was carried out.
The coefficients in the solution (2.2) for $F_{[\alpha\beta]}$ are
$\displaystyle\begin{aligned}
\alpha_{1}&=-\frac{a_{1}b_{2}-a_{2}b_{3}}{2(b_{2}^{2}-b_{3}^{2})}+\left[\frac{(a_{1}b_{2}-a_{2}b_{3})b_{3}[b_{2}(d_{6}-d_{7})+b_{3}(-d_{3}+d_{4}+d_{5})]}{4b_{2}(b_{2}^{2}-b_{3}^{2})^{2}}-\frac{2a_{2}b_{3}(-d_{3}+d_{4}+d_{5})+a_{2}b_{2}(d_{6}-d_{7})+2(a_{5}-a_{6})b_{2}b_{3}}{8b_{2}(b_{2}^{2}-b_{3}^{2})}\right]X\,,\\
\alpha_{2}&=\frac{a_{1}b_{3}-a_{2}b_{2}}{2(b_{2}^{2}-b_{3}^{2})}+\left[-\frac{(a_{1}b_{3}-a_{2}b_{2})b_{3}[b_{2}(d_{6}-d_{7})+b_{3}(-d_{3}+d_{4}+d_{5})]}{4b_{2}(b_{2}^{2}-b_{3}^{2})^{2}}+\frac{b_{3}[a_{2}(-d_{6}+d_{7})+2(a_{5}-a_{6})b_{3}]}{8b_{2}(b_{2}^{2}-b_{3}^{2})}\right]X\,,\\
\alpha_{3}&=-\frac{a_{1}(-d_{3}+d_{4}+d_{5})+(a_{3}-a_{4})b_{2}+(a_{5}-a_{6})b_{3}}{2(b_{2}^{2}-b_{3}^{2})}\,,\\
\alpha_{4}&=\frac{(a_{1}b_{2}-a_{2}b_{3})(d_{6}-d_{7})+2(a_{1}b_{3}-a_{2}b_{2})(-d_{3}+d_{4}+d_{5})-2(a_{5}-a_{6})(b_{2}^{2}-b_{3}^{2})}{4b_{2}(b_{2}^{2}-b_{3}^{2})}\,,\\
\alpha_{5}&=-\frac{(a_{1}b_{2}-a_{2}b_{3})(d_{6}-d_{7})+2a_{1}b_{3}(-d_{3}+d_{4}+d_{5})+2(a_{3}-a_{4})b_{2}b_{3}+2(a_{5}-a_{6})b_{3}^{2}}{4b_{2}(b_{2}^{2}-b_{3}^{2})}\,.
\end{aligned}$ (12)
Note that the coefficients $a_{i}$ and $b_{i}$ contain parts proportional to
$X$.
The coefficients of the gauge field terms in the action (7) are
$\displaystyle\begin{aligned}
\beta_{1}&=a_{1}\alpha_{1}+a_{2}\alpha_{2}+b_{2}(\alpha_{1}^{2}+\alpha_{2}^{2})+2b_{3}\alpha_{1}\alpha_{2}\\
&\quad+\frac{1}{2}\left[-a_{2}\alpha_{4}+(-a_{5}+a_{6})\alpha_{2}-2b_{2}\alpha_{2}\alpha_{4}-2b_{3}\alpha_{1}\alpha_{4}+(-d_{3}+d_{4}+d_{5})\alpha_{2}^{2}+(-d_{6}+d_{7})\alpha_{1}\alpha_{2}\right]X\,,\\
\beta_{2}&=a_{1}\alpha_{2}+a_{2}\alpha_{1}+2b_{2}\alpha_{1}\alpha_{2}+b_{3}(\alpha_{1}^{2}+\alpha_{2}^{2})\,,\\
\beta_{3}&=-a_{1}\alpha_{3}+a_{2}(\alpha_{4}+\alpha_{5})+(-a_{3}+a_{4})\alpha_{1}+(a_{5}-a_{6})\alpha_{2}+2b_{2}(-\alpha_{1}\alpha_{3}+\alpha_{2}\alpha_{4}+\alpha_{2}\alpha_{5})\\
&\quad+2b_{3}(\alpha_{1}\alpha_{4}+\alpha_{1}\alpha_{5}-\alpha_{2}\alpha_{3})+(-d_{3}+d_{4}+d_{5})(\alpha_{1}^{2}-\alpha_{2}^{2})\,,\\
\beta_{4}&=a_{1}(-\alpha_{4}+\alpha_{5})-a_{2}\alpha_{3}+(-a_{3}+a_{4})\alpha_{2}+(-a_{5}+a_{6})\alpha_{1}+2b_{2}(-\alpha_{1}\alpha_{4}+\alpha_{1}\alpha_{5}-\alpha_{2}\alpha_{3})\\
&\quad+2b_{3}(-\alpha_{1}\alpha_{3}-\alpha_{2}\alpha_{4}+\alpha_{2}\alpha_{5})+2(-d_{3}+d_{4}+d_{5})\alpha_{1}\alpha_{2}+(-d_{6}+d_{7})(\alpha_{1}^{2}+\alpha_{2}^{2})\,.
\end{aligned}$ (13)
Note that the coefficients $a_{i}$, $b_{i}$, and $\alpha_{i}$ contain parts
proportional to $X$.
## References
* [1] A. Einstein, _Einheitliche Feldtheorie von Gravitation und Elektrizität_ , _Sitzungber.Preuss.Akad.Wiss._ 22 (1925) 414–419.
* [2] M. Ferraris, M. Francaviglia and C. Reina, _Variational formulation of general relativity from 1915 to 1925 “Palatini’s method” discovered by Einstein in 1925_ , _Gen.Rel.Grav._ 14 (1982) 243–254.
* [3] L. D. Faddeev, _Feynman integral for singular Lagrangians_ , _Theor. Math. Phys._ 1 (1969) 1–13.
* [4] L. D. Faddeev and R. Jackiw, _Hamiltonian Reduction of Unconstrained and Constrained Systems_ , _Phys. Rev. Lett._ 60 (1988) 1692–1694.
* [5] G. Magnano, M. Ferraris and M. Francaviglia, _Nonlinear gravitational Lagrangians_ , _Gen. Rel. Grav._ 19 (1987) 465.
* [6] J.-i. Koga and K.-i. Maeda, _Equivalence of black hole thermodynamics between a generalized theory of gravity and the Einstein theory_ , _Phys. Rev. D_ 58 (1998) 064020, [gr-qc/9803086].
* [7] V. I. Afonso, C. Bejarano, J. Beltran Jimenez, G. J. Olmo and E. Orazi, _The trivial role of torsion in projective invariant theories of gravity with non-minimally coupled matter fields_ , _Class. Quant. Grav._ 34 (2017) 235003, [1705.03806].
* [8] B. Shahid-Saless, _First-Order Formalism Treatment of R + R**2 Gravity_ , _Phys. Rev._ D35 (1987) 467–470.
* [9] T. P. Sotiriou and S. Liberati, _Metric-affine f(R) theories of gravity_ , _Annals Phys._ 322 (2007) 935–966, [gr-qc/0604006].
* [10] T. P. Sotiriou and V. Faraoni, _f(R) Theories Of Gravity_ , _Rev. Mod. Phys._ 82 (2010) 451–497, [0805.1726].
* [11] G. J. Olmo, _Palatini Approach to Modified Gravity: f(R) Theories and Beyond_ , _Int. J. Mod. Phys._ D20 (2011) 413–462, [1101.3864].
* [12] F. L. Bezrukov and M. Shaposhnikov, _The Standard Model Higgs boson as the inflaton_ , _Phys. Lett._ B659 (2008) 703–706, [0710.3755].
* [13] F. Bauer and D. A. Demir, _Inflation with Non-Minimal Coupling: Metric versus Palatini Formulations_ , _Phys. Lett._ B665 (2008) 222–226, [0803.2664].
* [14] F. Bezrukov, _The Higgs field as an inflaton_ , _Class. Quant. Grav._ 30 (2013) 214001, [1307.0708].
* [15] F. Bezrukov and M. Shaposhnikov, _Inflation, LHC and the Higgs boson_ , _Comptes Rendus Physique_ 16 (2015) 994–1002.
* [16] J. Rubio, _Higgs inflation_ , _Front. Astron. Space Sci._ 5 (2019) 50, [1807.02376].
* [17] C. Armendariz-Picon, T. Damour and V. F. Mukhanov, _k - inflation_ , _Phys. Lett. B_ 458 (1999) 209–218, [hep-th/9904075].
* [18] J. Garriga and V. F. Mukhanov, _Perturbations in k-inflation_ , _Phys. Lett. B_ 458 (1999) 219–225, [hep-th/9904176].
* [19] R. Durrer and A. Neronov, _Cosmological Magnetic Fields: Their Generation, Evolution and Observation_ , _Astron. Astrophys. Rev._ 21 (2013) 62, [1303.7121].
* [20] S. Räsänen, _Higgs inflation in the Palatini formulation with kinetic terms for the metric_ , _Open J. Astrophys._ 2 (2019) 1, [1811.09514].
* [21] M. Långvik, J.-M. Ojanperä, S. Raatikainen and S. Räsänen, _Higgs inflation with the Holst and the Nieh–Yan term_ , _Phys. Rev. D_ 103 (2021) 083514, [2007.12595].
* [22] Planck collaboration, Y. Akrami et al., _Planck 2018 results. X. Constraints on inflation_ , _Astron. Astrophys._ 641 (2020) A10, [1807.06211].
* [23] BICEP, Keck collaboration, P. A. R. Ade et al., _Improved Constraints on Primordial Gravitational Waves using Planck, WMAP, and BICEP/Keck Observations through the 2018 Observing Season_ , _Phys. Rev. Lett._ 127 (2021) 151301, [2110.00483].
* [24] V.-M. Enckell, K. Enqvist, S. Räsänen and L.-P. Wahlman, _Inflation with $R^{2}$ term in the Palatini formalism_, _JCAP_ 02 (2019) 022, [1810.05536].
* [25] J. Annala, _Higgs inflation and higher-order gravity in Palatini formulation_ , Master’s Thesis, University of Helsinki, http://urn.fi/URN:NBN:fi:hulib-202005292527 (May, 2020), [2106.09438].
* [26] J. Annala and S. Räsänen, _Inflation with $R_{(\alpha\beta)}$ terms in the Palatini formulation_, _JCAP_ 09 (2021) 032, [2106.12422].
* [27] F. Bezrukov, A. Magnin, M. Shaposhnikov and S. Sibiryakov, _Higgs inflation: consistency and generalisations_ , _JHEP_ 01 (2011) 016, [1008.5157].
* [28] D. P. George, S. Mooij and M. Postma, _Quantum corrections in Higgs inflation: the real scalar case_ , _JCAP_ 1402 (2014) 024, [1310.2157].
* [29] F. Bezrukov, J. Rubio and M. Shaposhnikov, _Living beyond the edge: Higgs inflation and vacuum metastability_ , _Phys. Rev._ D92 (2015) 083512, [1412.3811].
* [30] J. Rubio, _Higgs inflation and vacuum stability_ , _J. Phys. Conf. Ser._ 631 (2015) 012032, [1502.07952].
* [31] D. P. George, S. Mooij and M. Postma, _Quantum corrections in Higgs inflation: the Standard Model case_ , _JCAP_ 1604 (2016) 006, [1508.04660].
* [32] J. Fumagalli and M. Postma, _UV (in)sensitivity of Higgs inflation_ , _JHEP_ 05 (2016) 049, [1602.07234].
* [33] F. Bezrukov, M. Pauly and J. Rubio, _On the robustness of the primordial power spectrum in renormalized Higgs inflation_ , _JCAP_ 1802 (2018) 040, [1706.05007].
* [34] R. Jinno, M. Kubota, K.-y. Oda and S. C. Park, _Higgs inflation in metric and Palatini formalisms: Required suppression of higher dimensional operators_ , _JCAP_ 03 (2020) 063, [1904.05699].
* [35] M. Shaposhnikov, A. Shkerin, I. Timiryasov and S. Zell, _Higgs inflation in Einstein-Cartan gravity_ , _JCAP_ 02 (2021) 008, [2007.14978].
* [36] M. Shaposhnikov, A. Shkerin, I. Timiryasov and S. Zell, _Einstein-Cartan Portal to Dark Matter_ , _Phys. Rev. Lett._ 126 (2021) 161301, [2008.11686].
# A black hole solution in conformal supergravity
Pedro D. Alvarez<EMAIL_ADDRESS>Departamento de Física, Universidad de Antofagasta, Aptdo. 02800, Chile
Cristóbal Corral<EMAIL_ADDRESS>Instituto de Ciencias Exactas y Naturales, Universidad Arturo Prat, Playa Brava 3256, 1111346, Iquique, Chile; Facultad de Ciencias, Universidad Arturo Prat, Avenida Arturo Prat Chacón 2120, 1110939, Iquique, Chile
Jorge Zanelli<EMAIL_ADDRESS>Centro de Estudios Científicos (CECs), Arturo Prat 514, Valdivia, Chile; Universidad San Sebastián, General Lagos 1163, Valdivia, Chile
###### Abstract
We present a three-parameter family of analytic black-hole solutions in the
bosonic sector of a four-dimensional supersymmetric model with matter fields
in the adjoint representation. The solutions are endowed with a curvature and
torsional singularities which are both surrounded by an event horizon. They
are asymptotically Lorentz flat, representing the torsional generalization of
the Riegert black hole in conformal gravity. We compute the partition function
to first order in the saddle-point approximation which turns out to be finite
without any reference to boundary counterterms. We find a non-maximally
symmetric thermalized ground state, whose existence is relevant when studying
Hawking-Page phase transitions. Finally, we discuss future directions
regarding its extended phase space.
## I Introduction
Supersymmetry (SUSY) is an appealing approach to address different problems in
theoretical physics. From the viewpoint of quantum field theory, it attracted
considerable interest as a mechanism to cancel divergences arising at the one-
loop level coming from fermions and bosons. It is also a robust candidate for
solving the hierarchy problem in the Standard Model of particle physics Martin
(2010); Cohen _et al._ (1996); Dimopoulos and Giudice (1995). Moreover, by
combining spacetime and internal symmetries in a graded Lie algebra, SUSY
circumvents the Coleman-Mandula theorem Coleman and Mandula (1967), offering a
natural framework to unify those hitherto segregated symmetries. Supergravity,
on the other hand, provides an interesting gravitational setup where the
one-loop divergences are renormalized, in contrast to what happens in general
relativity (for a review see Van Nieuwenhuizen (1981)). Supergravity is the
gauge theory of SUSY and appears as the low-energy limit of string theory
Blumenhagen _et al._ (2013); Ortin (2015).
In recent years, a novel method of implementing SUSY, inspired by ideas from
Yang-Mills theories, supergravity and Einstein-Cartan gravity, has been
proposed Alvarez _et al._ (2012, 2014a, 2015). In this approach –dubbed
unconventional supersymmetry (USUSY)–, the spin connection, the gauge
potentials and matter fields belong to a gauge connection for a superextension
of the anti-de Sitter (AdS) or Poincaré groups (for a review see Alvarez _et
al._ (2021a)). What is unconventional about this approach is that matter
fields are in the adjoint representation of the superalgebra. This feature is
faithful to the spirit of supersymmetry, treating bosons and fermions on equal
footing as parts of the same gauge connection. Fermionic fields representing
matter are directly incorporated in the gauge connection by means of a
Clifford algebra-valued soldering form that provides a metric structure and,
eventually, the inclusion of gravity.
This way of implementing SUSY does not produce a pairing of boson-fermion
states as in standard SUSY Sohnius (1985); Alvarez _et al._ (2012, 2014a) and
therefore, USUSY models greatly differ from standard SUSYs and supergravities
Stelle and West (1978). The lack of superpartners remarkably coincides with
the absence of supporting evidence for them at the LHC. Recently, we
have constructed chiral gauge models for $SU(2,2|2)$ Alvarez _et al._ (2020)
and $SU(2,2|N)$ Alvarez _et al._ (2021b), and we have also described the
embedding of the bi-fundamental spinor representation in the superconformal
algebras Alvarez _et al._ (2022). These developments can be seen as a step
towards defining a grand unified theory based on the conformal superalgebras.
Conformal supergravity is an interesting gauge theory for the conformal
superalgebra studied by many authors in the last decades Kaku _et al._
(1977); Ferrara _et al._ (1977); Kaku and Townsend (1978); Kaku _et al._
(1978); Bergshoeff _et al._ (1981, 1983); Fradkin and Tseytlin (1985, 1985);
Liu and Tseytlin (1998); Butter (2010); Ferrara _et al._ (2018); D’Auria and
Ravera (2021). Indeed, by a proper gauge fixing, it is possible to arrive from
conformal supergravity to the standard $\mathcal{N}=1$ supergravity in four
dimensions Freedman and Van Proeyen (2012). In Refs. Alvarez _et al._ (2014b,
2020, 2021c), we discussed a conformal supergravity theory based on the
$\mathfrak{su}(2,2|N)$ gauge algebra written in a MacDowell-Mansouri-Townsend
form MacDowell and Mansouri (1977); Townsend (1977), which can be regarded as
the $3+1$ generalization of the proposal in Ref. Horne and Witten
(1989).111For a similar approach see Ref. Andrianopoli and D’Auria (2014),
whose applications in holography have been studied in Ref. Andrianopoli _et
al._ (2021a). Its novelty relies on the implementation of supersymmetry in the
adjoint representation, whose partially broken conformal symmetry is realized
explicitly in the action. Remarkably, the Dirac supercharges render the full
$R$-symmetry manifest Trigiante (2017), where the latter is identified with
the internal $SU(N)\times U(1)$ group.
In the present paper, we explore some gravitational vacuum solutions of the
theory, where “vacuum” means absence of fermionic matter fields. Similar
configurations have been found in Chern-Simons AdS5 supergravity models
Giribet _et al._ (2014); Andrianopoli _et al._ (2021b). Here, we focus on
four-dimensional topological black holes with constant-curvature transverse
sections possessing torsional hair in unconventional conformal supergravity.
We find a three-parameter family of analytic asymptotically locally Lorentz-
flat solutions that represents the torsional generalization of the Riegert
metric Riegert (1984) in conformal gravity. We compute its temperature and
partition function to first order in the saddle-point approximation. In a
certain limit, we demonstrate that the black hole becomes a particular two-
parameter family of solutions, possessing a central singularity dressed by a
horizon and a nonvanishing temperature, while its free energy, mass, and
entropy are identically zero. This class of spacetimes has been found in Weyl gravity
as well and it has been regarded as a thermalized ground state Lu _et al._
(2012). Here, we show their existence in unconventional conformal supergravity
as well.
The paper is organized as follows: In Sec. II, we define the basics of the
model, namely, the field content, action principle, and field equations, which
are the essential ingredients for finding new vacuum solutions. In Sec. III, we
present the novel topological black hole solution with torsion, alongside
its properties. In Sec. IV, we find the thermalized ground state as a
limiting case of the three-parameter family of black holes. Finally, in Sec. V
we summarize our results.
## II The model
Since we are interested in studying vacuum solutions, we consider the
purely bosonic sector of the theory discussed in Alvarez _et al._
(2021c).222A general construction of this theory including spinors can be
found in Ref. Alvarez _et al._ (2021c). The bosonic part of the gauge
connection $1$-form is then given by
$\mathds{A}=\frac{1}{2}\omega^{ab}\mathds{J}_{ab}+f^{a}\mathds{J}_{a}+g^{a}\mathds{K}_{a}+h\mathds{D}+A^{I}\mathds{T}_{I}+A\mathds{Z}\,,$
(1)
where $\mathds{J}_{a},\mathds{K}_{a},\mathds{J}_{ab}$, and $\mathds{D}$ denote
the generators of the conformal group $SO(3,2)$, while $\mathds{T}_{I}$ and
$\mathds{Z}$ represent the generators of the internal $SU(N)$ and $U(1)$
groups, respectively. Hereon, upper and lowercase Latin indices label internal
$SU(N)$ and Lorentz generators, respectively, while Greek indices denote
spacetime coordinates. Each of the generators is contracted with its
respective $1$-form compensating field and the field strength associated to
this gauge connection is
$\mathds{F}=\text{d}\mathds{A}+\mathds{A}\wedge\mathds{A}=\frac{1}{2}\mathcal{R}^{ab}\mathds{J}_{ab}+F^{a}\mathds{J}_{a}+G^{a}\mathds{K}_{a}+H\mathds{D}+F^{I}\mathds{T}_{I}+F\mathds{Z}\,,$
(2)
where d is the exterior derivative and $\wedge$ is the exterior product of
differential forms. The explicit expressions of the curvature components are
$\displaystyle\mathcal{R}^{ab}$ $\displaystyle=R^{ab}+f^{a}\wedge
f^{b}-g^{a}\wedge g^{b}\,,$ $\displaystyle F^{a}$
$\displaystyle=\text{D}f^{a}+g^{a}\wedge h\,,$ $\displaystyle G^{a}$
$\displaystyle=\text{D}g^{a}+f^{a}\wedge h\,,$ (3) $\displaystyle H$
$\displaystyle=\text{d}h+f^{a}\wedge g_{a}\,,$ $\displaystyle F^{I}$
$\displaystyle=\text{d}A^{I}+\frac{1}{2}f_{JK}^{I}A^{J}\wedge A^{K}\,,$
$\displaystyle F$ $\displaystyle=\text{d}A\,,$ (4)
with $R^{ab}=\text{d}\omega^{ab}+\omega^{a}_{\ c}\wedge\omega^{cb}$ being the
Lorentz curvature $2$-form and D denotes the covariant derivative with respect
to the Lorentz connection.
The dynamics of the theory is described by a generalization of the MacDowell-
Mansouri action MacDowell and Mansouri (1977) for the bosonic sector of the
superconformal group, that is,
$\mathcal{S}=-\int\langle\mathds{F}\wedge\circledast\ \mathds{F}\rangle\,,$
(5)
where $\langle\ldots\rangle$ denotes trace over the internal indices and the
dual operator $\circledast$ is defined through
$\circledast\mathds{F}=S\left(\frac{1}{2}\mathcal{R}^{ab}\mathds{J}_{ab}+F^{a}\mathds{J}_{a}+G^{a}\mathds{K}_{a}\right)+(\varepsilon_{1}\ast)H\mathds{D}+(\varepsilon_{2}\ast)F^{I}\mathds{T}_{I}+(\varepsilon_{3}\ast)F\mathds{Z}\,.$
(6)
Here $\varepsilon_{i}=\pm 1$, with $i=1,2,3$, $\ast$ is the Hodge dual, and
$S=i\gamma_{5}$ with $\gamma_{5}$ being the chiral gamma matrix defined such
that $S^{2}=-\mathbb{I}$. The ambiguity in $\varepsilon_{i}$ can be eliminated
by demanding the correct sign of the kinetic terms of the bosonic fields in
the action. This, in turn, will depend on the details of the algebra
representation. More details of this operator can be found in Alvarez _et
al._ (2021b).
The action functional (5) can be thought of as a Yang-Mills theory for an
embedding $G^{+}\hookrightarrow SU(2,2|N)$. Using the properties of the
supertrace (cf. Ref. Alvarez _et al._ (2021b)), we can express the bosonic
sector of the Lagrangian in terms of the curvature components as
$\displaystyle\mathscr{L}_{\text{bos}}$
$\displaystyle=\frac{1}{4}\epsilon_{abcd}\mathcal{R}^{ab}\wedge\mathcal{R}^{cd}-\varepsilon_{1}(H+f^{a}\wedge
g_{a})\wedge\ast(H+f^{b}\wedge g_{b})$
$\displaystyle-\frac{1}{2}\varepsilon_{2}F^{I}\wedge\ast
F^{I}-4\left(\frac{4}{N}-1\right)\varepsilon_{3}F\wedge\ast F\,.$ (7)
A ghost-free action demands $\varepsilon_{1}=\varepsilon_{2}=1$. The value of
$\varepsilon_{3}$, on the other hand, depends on $N$: the absence of ghosts
demands $\varepsilon_{3}=\pm 1$ for $N<4$ and $N>4$, respectively, such that
$\varepsilon_{3}(N-4)<0$. For $N=4$, however, the $U(1)$ sector drops out from
the Lagrangian (7).
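The sign condition above follows from a quick check, assuming the $U(1)$ kinetic term must carry the same (negative) sign as the ghost-free $SU(N)$ term $-\frac{1}{2}F^{I}\wedge\ast F^{I}$ in (7):

```latex
-4\left(\frac{4}{N}-1\right)\varepsilon_{3}<0
\;\Longleftrightarrow\;
\left(\frac{4}{N}-1\right)\varepsilon_{3}>0
\;\Longleftrightarrow\;
\varepsilon_{3}\left(N-4\right)<0\,,
```

and for $N=4$ the coefficient vanishes identically, which is why the $U(1)$ sector drops out of the Lagrangian.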
Varying the Lagrangian (7) with respect to $f^{a}$ and $g^{a}$ yields
$\displaystyle\epsilon_{abcd}f^{b}\wedge\mathcal{R}^{cd}-2g_{a}\wedge\ast(H+f^{b}\wedge
g_{b})$ $\displaystyle=0\,,$ (8)
$\displaystyle\epsilon_{abcd}g^{b}\wedge\mathcal{R}^{cd}-2f_{a}\wedge\ast(H+f^{b}\wedge
g_{b})$ $\displaystyle=0\,,$ (9)
respectively. We will search for solutions of Eqs. (8) and (9)
under the assumption of Lorentz invariance, which is naturally guaranteed if
both $f^{a}$ and $g^{a}$ are conformally related to a soldering form $e^{a}$
that provides the metric structure for the theory. This means
$f^{a}=\rho(x)e^{a}$, $g^{a}=\sigma(x)e^{a}$, where $\rho(x)$ and $\sigma(x)$
are arbitrary scalar functions and $e^{a}=e^{a}_{\mu}\text{d}x^{\mu}$ with
$g_{\mu\nu}=\eta_{ab}\,e^{a}_{\mu}\,e^{b}_{\nu}$. This choice breaks the
conformal invariance and, therefore, the Weyl rescaling is no longer a gauge
symmetry. Then, in the simplest version of this theory, we can take $h=0$.
These assumptions imply $\mathcal{R}^{ab}=R^{ab}-\Lambda(x)e^{a}\wedge e^{b}$,
where we have defined $\Lambda(x):=\sigma^{2}(x)-\rho^{2}(x)$. Note that
asymptotically locally AdS solutions are allowed provided $\Lambda(x)<0$.
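The identity quoted above follows by direct substitution of the ansatz $f^{a}=\rho(x)e^{a}$, $g^{a}=\sigma(x)e^{a}$ into the first relation in Eq. (3):

```latex
\mathcal{R}^{ab}
=R^{ab}+f^{a}\wedge f^{b}-g^{a}\wedge g^{b}
=R^{ab}+\left[\rho^{2}(x)-\sigma^{2}(x)\right]e^{a}\wedge e^{b}
=R^{ab}-\Lambda(x)\,e^{a}\wedge e^{b}\,,
```

with $\Lambda(x)=\sigma^{2}(x)-\rho^{2}(x)$; the AdS condition $\Lambda(x)<0$ thus corresponds to $\sigma^{2}<\rho^{2}$.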
The remaining field equations are obtained by extremizing the Lagrangian (7)
with respect to the dynamical fields in the AdS sector. Then, the full set of
field equations is
$\displaystyle\epsilon_{abcd}\text{D}\mathcal{R}^{cd}$ $\displaystyle=0\,,$
$\displaystyle\epsilon_{abcd}e^{b}\wedge\mathcal{R}^{cd}$ $\displaystyle=0\,,$
$\displaystyle\mathfrak{D}\ast F^{I}$ $\displaystyle=0\,,$
$\displaystyle\text{d}\ast F$ $\displaystyle=0\,,$ (10)
where $\mathfrak{D}\equiv D_{(A^{J}\mathds{T}_{J})}$ is the covariant
derivative for the internal gauge connection. These equations, alongside (8)
and (9), are second-order dynamics equations for the gauge connection
$\mathds{A}$ that determine the states of the theory. For the sake of
simplicity, we henceforth focus on the sector with vanishing $SU(N)$ and
$U(1)$ gauge fields.
In order to look for the ground state of the theory, we notice that these
equations admit $\mathcal{R}^{ab}=0$ as solution. Choosing
$\rho(x)^{2}-\sigma(x)^{2}=\pm\ell^{-2}$ to be constant, locally AdS or dS
spacetimes are obtained, that is,
$R^{ab}\pm\ell^{-2}e^{a}\wedge e^{b}=0\,.$ (11)
These geometries include maximally symmetric spaces with up to $10$ globally
defined Killing vectors, which can be genuinely interpreted as vacuum
configurations. The symmetric case $\rho(x)^{2}=\sigma(x)^{2}$, on the other
hand, yields
$\epsilon_{abcd}e^{b}\wedge R^{cd}=0\,,$ (12)
which includes locally Lorentz-flat configurations, $R^{ab}=0$, as well as
nontrivial asymptotically locally Lorentz-flat solutions. This can be achieved
if the torsion $2$-form is covariantly conserved. We present a solution of the
latter class in the next section.
## III Static spherically symmetric solution
Here, we look for black hole solutions to the field equations (10). We work in
the first-order formalism where the vielbein $1$-form,
$e^{a}=e^{a}{}_{\mu}\text{d}x^{\mu}$, and the Lorentz connection $1$-form,
$\omega^{ab}=\omega^{ab}{}_{\mu}\text{d}x^{\mu}$, define the Lorentz curvature
and torsion $2$-forms through the Cartan structure equations as
$\displaystyle R^{ab}$
$\displaystyle=\text{d}\omega^{ab}+\omega^{a}{}_{c}\wedge\omega^{cb}=\tfrac{1}{2}R^{ab}_{\
\mu\nu}\text{d}x^{\mu}\wedge\text{d}x^{\nu}\,,$ (13) $\displaystyle T^{a}$
$\displaystyle=\text{d}e^{a}+\omega^{a}{}_{b}\wedge e^{b}=\tfrac{1}{2}T^{a}_{\
\mu\nu}\text{d}x^{\mu}\wedge\text{d}x^{\nu}\,,$ (14)
respectively. These quantities satisfy the Bianchi identities
$\text{D}R^{ab}=0$ and $\text{D}T^{a}=R^{a}{}_{b}\wedge e^{b}$. The Lorentz
connection $1$-form can be decomposed as
$\omega^{ab}=\mathring{\omega}^{ab}+\kappa^{ab}$, where
$\mathring{\omega}^{ab}$ denotes its torsion-free part satisfying
$\text{d}e^{a}+\mathring{\omega}^{a}{}_{b}\wedge e^{b}=0$ and $\kappa^{ab}$ is
the contorsion $1$-form defined through $T^{a}=\kappa^{a}{}_{b}\wedge e^{b}$.
This decomposition allows rewriting the Lorentz curvature $2$-form as
$R^{ab}=\mathring{R}^{ab}+\mathring{\text{D}}\kappa^{ab}+\kappa^{a}{}_{c}\wedge\kappa^{cb}$,
where $\mathring{R}^{ab}=\tfrac{1}{2}\mathring{R}^{ab}_{\
\mu\nu}\text{d}x^{\mu}\wedge\text{d}x^{\nu}$ represents its torsion-free part
related to the Riemann tensor through $\mathring{R}^{\lambda\rho}_{\
\mu\nu}=e^{\lambda}{}_{a}e^{\rho}{}_{b}\mathring{R}^{ab}_{\ \mu\nu}$ with
$e^{\mu}{}_{a}$ being the inverse vielbein, while
$\mathring{\text{D}}\kappa^{ab}$ denotes the covariant derivative of the
contorsion $2$-form with respect to the torsion-free Lorentz connection. From
hereon, ringed quantities denote torsion-free geometric objects constructed
out of $\mathring{\omega}^{ab}$.
In order to search for static topological black-hole solutions to the field
equations (10), we focus on a metric ansatz possessing a constant-curvature
base manifold, namely,
$\text{d}s^{2}=g_{\mu\nu}\text{d}x^{\mu}\otimes\text{d}x^{\nu}=-f(r)\text{d}t^{2}+\frac{\text{d}r^{2}}{f(r)}+r^{2}\text{d}\Sigma^{2}_{(k)}\,,$
(15)
where the line element of the two-dimensional base manifold is parametrized as
$\displaystyle\text{d}\Sigma_{(k)}^{2}=\bar{g}_{\bar{\mu}\bar{\nu}}\text{d}\bar{x}^{\bar{\mu}}\otimes\text{d}\bar{x}^{\bar{\nu}}=\delta_{\bar{a}\bar{b}}\bar{e}^{\bar{a}}\otimes\bar{e}^{\bar{b}}\,.$
(16)
Here, barred quantities are intrinsically defined on the two-dimensional
transverse section $\text{d}\Sigma_{(k)}^{2}$, and $k=\pm 1,0$ stands for
spherical, hyperbolic, or flat topology, respectively. According to the
relation $g_{\mu\nu}=\eta_{ab}e^{a}_{\mu}e^{b}_{\nu}$, we choose the vielbein
basis
$\displaystyle e^{0}$ $\displaystyle=\sqrt{f(r)}\,\text{d}t\,,$ $\displaystyle
e^{1}$ $\displaystyle=\frac{\text{d}r}{\sqrt{f(r)}}\,,$ $\displaystyle
e^{\bar{a}}$ $\displaystyle=r\,\bar{e}^{\bar{a}}\,.$ (17)
Demanding that the remaining bosonic fields be invariant under the same
isometry group as the metric (15) (when $k=1$, $-1$, or $0$, the metric (15)
is locally invariant under the $\mbox{SO}(3)\times\mathbb{R}$,
$\mbox{SO}(1,2)\times\mathbb{R}$, or $\mbox{ISO}(2)\times\mathbb{R}$ isometry
group, respectively; the ansatz (18) is obtained by decomposing
$\omega^{ab}=\mathring{\omega}^{ab}+\kappa^{ab}$ and reading off the
independent components from the invariance of $\mathring{\omega}^{ab}$ and
$\kappa^{ab}$ under these isometry groups), we find $\Lambda=\Lambda(r)$, and
$\displaystyle\omega^{01}$
$\displaystyle=\omega_{1}(r)e^{0}+\omega_{2}(r)e^{1}\,,$
$\displaystyle\omega^{02}$
$\displaystyle=\omega_{3}(r)e^{2}+\omega_{4}(r)e^{3}\,,$ (18a)
$\displaystyle\omega^{03}$
$\displaystyle=-\omega_{4}(r)e^{2}+\omega_{3}(r)e^{3}\,,$
$\displaystyle\omega^{12}$
$\displaystyle=\omega_{5}(r)e^{2}+\omega_{6}(r)e^{3}\,,$ (18b)
$\displaystyle\omega^{13}$
$\displaystyle=-\omega_{6}(r)e^{2}+\omega_{5}(r)e^{3}\,,$
$\displaystyle\omega^{23}$
$\displaystyle=\omega_{7}(r)e^{0}+\omega_{8}(r)e^{1}+\mathring{\omega}^{23}\,.$
(18c)
Here, $\omega_{i}(r)$, with $i=1,\dots,8$, are functions of the radial
coordinate to be determined. The nontrivial components of the
Lorentz curvature and torsion can be easily found from Eqs. (13) and (14)
alongside Eqs. (17) and (18).
Inserting these ansätze into the field equations (10), we find
$\displaystyle\omega_{2}=\omega_{3}=\omega_{4}=\omega_{6}=\omega_{7}=\omega_{8}$
$\displaystyle=0\,,$ (19a)
$\displaystyle\Lambda^{\prime}fr+2f\Lambda+2\sqrt{f}\,r\Lambda\omega_{5}$
$\displaystyle=0\,,$ (19b)
$\displaystyle\left(fr^{2}\Lambda^{2}\right)^{\prime}+2\sqrt{f}\,r^{2}\Lambda^{2}\left(\omega_{5}-\omega_{1}\right)$
$\displaystyle=0\,,$ (19c) $\displaystyle
2\omega_{5}^{\prime}f\,r^{2}+2f\omega_{5}\,r+k\sqrt{f}+3\sqrt{f}\,\Lambda\,r^{2}-\sqrt{f}\omega_{5}^{2}\,r^{2}$
$\displaystyle=0\,,$ (19d)
$\displaystyle\frac{k}{r^{2}}+3\Lambda+2\omega_{1}\omega_{5}-\omega_{5}^{2}$
$\displaystyle=0\,,$ (19e)
where prime denotes differentiation with respect to $r$. This nonlinear system
of first-order ordinary differential equations is solved by
$\displaystyle f(r)$
$\displaystyle=\gamma-\frac{2m}{r}-\frac{(\gamma^{2}-k^{2})r}{6m}+a\,r^{2}\,,$
$\displaystyle\omega_{5}(r)$
$\displaystyle=\frac{\sqrt{f}(k+\frac{1}{2}f^{\prime\prime}r^{2}-f^{\prime}r+f)}{r\left(f^{\prime}r-2f\right)}\,,$
(20a) $\displaystyle\omega_{1}(r)$
$\displaystyle=\sqrt{f}\,\frac{\text{d}}{\text{d}r}\ln(\omega_{5}\,r)\,,$
$\displaystyle\Lambda(r)$
$\displaystyle=\frac{k-\omega_{5}\,r^{2}\left(\omega_{5}-2\omega_{1}\right)}{3r^{2}}\,.$
(20b)
This solution represents a three-parameter family of topological black holes
characterized by the integration constants $\gamma$, $m$, and $a$. It can be
checked that, as $r\to\infty$, the metric function behaves as $f\sim
ar^{2}+\mathcal{O}(r)$, and therefore $\omega_{1}\sim\omega_{5}\to 0$. Then,
from Eq. (20b), one finds that $\Lambda\to 0$. Thus, even though the solution
represents a weakened asymptotically AdS metric, the geometry is
asymptotically Lorentz flat from a Riemann-Cartan viewpoint. A similar feature
has been observed for locally AdS configurations in three dimensions that
represent Lorentz-flat spacetimes in the presence of covariantly constant
torsion Alvarez _et al._ (2014c). Indeed, the solution (20) exhibits this
behavior asymptotically in four dimensions. The nonvanishing components of the
Lorentz curvature and torsion are given by
$\displaystyle R^{01}$ $\displaystyle=R_{I}(r)\,e^{0}\wedge e^{1}\,,$
$\displaystyle R^{0\bar{a}}$ $\displaystyle=R_{II}(r)\,e^{0}\wedge
e^{\bar{a}}\,,$ $\displaystyle R^{1\bar{a}}$
$\displaystyle=R_{II}(r)\,e^{1}\wedge e^{\bar{a}}\,,$ (21a) $\displaystyle
R^{23}$ $\displaystyle=R_{I}(r)\,e^{2}\wedge e^{3}\,,$ $\displaystyle T^{a}$
$\displaystyle=T_{I}(r)\,e^{1}\wedge e^{a}\,,$ $\displaystyle T^{\bar{a}}$
$\displaystyle=T_{II}(r)\,\epsilon^{\bar{a}}{}_{\,\bar{b}\bar{c}}\,e^{\bar{b}}\wedge
e^{\bar{c}}\;,$ (21b)
where the particular values for the solution (20) are
$\displaystyle R_{I}(r)$
$\displaystyle=\frac{\left[k\left(\gamma-k\right)^{2}-36am^{2}\right]r^{3}+6m(\gamma-k)^{2}r^{2}-36m^{2}(\gamma-k)r+72m^{3}}{\left[(\gamma-k)r-6m\right]^{2}r^{3}}\,,$
(22a) $\displaystyle R_{II}(r)$
$\displaystyle=\frac{\left[(\gamma+k)(\gamma-k)^{2}-72am^{2}\right]r^{3}-6m(\gamma-k)^{2}r^{2}+36m^{2}(\gamma-k)r-72m^{3}}{2\left[(\gamma-k)r-6m\right]^{2}r^{3}}\,,$
(22b) $\displaystyle T_{I}(r)$
$\displaystyle=\frac{\gamma-k}{(\gamma-k)r-6m}\sqrt{ar^{2}-\frac{(\gamma^{2}-k^{2})}{6m}r+\gamma-\frac{2m}{r}}\,,$
(22c) $\displaystyle T_{II}(r)$ $\displaystyle=0\,.$ (22d)
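As a quick numerical sanity check of the asymptotic behavior noted above (below Eq. (20)), one can evaluate $\omega_{5}(r)$ from Eq. (20a) at large radius and confirm that it decays like $1/r$. The parameter values below ($\gamma=0.5$, $m=a=k=1$) are illustrative choices, not taken from the text:

```python
import numpy as np

# Illustrative parameter values (not from the paper)
gamma, m, a, k = 0.5, 1.0, 1.0, 1

def f(r):    # metric function, Eq. (20a)
    return gamma - 2*m/r - (gamma**2 - k**2)*r/(6*m) + a*r**2

def fp(r):   # f'(r)
    return 2*m/r**2 - (gamma**2 - k**2)/(6*m) + 2*a*r

def fpp(r):  # f''(r)
    return -4*m/r**3 + 2*a

def omega5(r):
    # Eq. (20a): omega_5 = sqrt(f) (k + f'' r^2/2 - f' r + f) / (r (f' r - 2 f))
    return np.sqrt(f(r)) * (k + fpp(r)*r**2/2 - fp(r)*r + f(r)) / (r*(fp(r)*r - 2*f(r)))

# omega_5 decays like 1/r, consistent with the asymptotically
# Lorentz-flat behavior discussed in the text
for r in (1e3, 1e4, 1e5):
    assert abs(omega5(r)) < 100/r
```

The same decay holds for $\omega_{1}$, so that $\Lambda\to 0$ follows from Eq. (20b) as stated in the text.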
We shall discuss the main properties of this three-parameter family of black-
hole solutions in the next section.
## IV Features of the torsional black hole
Here, we analyze different aspects of the solution, including the structure of
its singularities and its thermodynamic properties in the presence of torsion.
Moreover, we derive a particular limit in which it becomes a thermalized
ground state.
### IV.1 Curvature
The linear term in $r$ of the metric function $f(r)$ sources the non-Einstein
mode of the solution. This can be seen as follows: First, consider the
torsion-free Riemann tensor $\mathring{R}^{\mu\nu}_{\ \lambda\rho}$
constructed out of the Levi-Civita connection associated to the line element
(15) with metric function (20a). The traceless Ricci tensor
$\mathring{H}_{\mu\nu}\equiv\mathring{R}_{\mu\nu}-\frac{1}{4}g_{\mu\nu}\mathring{R}$
vanishes identically for Einstein spaces. The curvature invariant
$\displaystyle\mathring{H}_{\mu\nu}\mathring{H}^{\mu\nu}=\frac{(\gamma-k)^{2}[(\gamma+k)r-6m]^{2}}{36m^{2}r^{4}}\,,$
(23)
shows that, in this case, $\mathring{H}_{\mu\nu}$ is nonvanishing. Thus, we
conclude that Eq. (20) represents a non-Einstein space. It is worth mentioning
that Maldacena showed in Ref. Maldacena (2011) that imposing Neumann boundary
conditions on the Fefferman-Graham expansion of the weakened asymptotically
locally AdS metric allows one to remove the linear mode present in conformal
gravity. This procedure selects Einstein spaces from the space of solutions of
conformal gravity, whose renormalization is guaranteed for asymptotically
locally AdS spaces due to conformal invariance Grumiller _et al._ (2014).
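The invariant (23) can be cross-checked symbolically using the textbook mixed Ricci components of a static metric of the form (15), $\mathring{R}^{t}{}_{t}=\mathring{R}^{r}{}_{r}=-\tfrac{1}{2}f''-f'/r$ and $\mathring{R}^{\theta}{}_{\theta}=\mathring{R}^{\phi}{}_{\phi}=-f'/r+(k-f)/r^{2}$; a short sympy sketch:

```python
import sympy as sp

r, gamma, m, a, k = sp.symbols('r gamma m a k', positive=True)

# Metric function of the solution, Eq. (20a)
f = gamma - 2*m/r - (gamma**2 - k**2)*r/(6*m) + a*r**2

# Mixed torsion-free Ricci components for ds^2 = -f dt^2 + dr^2/f + r^2 dSigma_(k)^2
Rt = -sp.diff(f, r, 2)/2 - sp.diff(f, r)/r      # R^t_t = R^r_r
Ra = -sp.diff(f, r)/r + (k - f)/r**2            # R^theta_theta = R^phi_phi

# With two pairs of equal eigenvalues, the traceless part satisfies
# H^t_t = -H^theta_theta = (R^t_t - R^theta_theta)/2, hence
# H_{mu nu} H^{mu nu} = (R^t_t - R^theta_theta)^2
HH = sp.simplify((Rt - Ra)**2)

HH_claimed = (gamma - k)**2 * ((gamma + k)*r - 6*m)**2 / (36*m**2*r**4)
assert sp.simplify(HH - HH_claimed) == 0  # reproduces Eq. (23)
```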
The solution is Bach-flat, as can be checked by noticing that the Bach
tensor,
$\displaystyle\mathring{B}_{\mu\nu}=\nabla^{\lambda}\mathring{C}_{\mu\nu\lambda}-\mathring{S}^{\lambda\rho}\mathring{W}_{\mu\lambda\rho\nu}\,,$
(24)
vanishes identically for the solution (20), where
$\mathring{C}_{\mu\nu\lambda}=2\nabla_{[\lambda}\mathring{S}_{\nu]\mu}$ and
$\mathring{S}_{\mu\nu}=\tfrac{1}{2}\left(\mathring{R}_{\mu\nu}-\tfrac{1}{6}g_{\mu\nu}\mathring{R}\right)$
are the torsion-free Cotton and Schouten tensors, respectively. Thus, we
conclude that spacetime (20) can be regarded as a torsional generalization of
the Riegert metric found in the second-order (purely metric) formulation of
conformal gravity Riegert (1984). Notice that, in the presence of the extra
fields required by USUSY, torsion is switched on, so that the first and second
order formulations are not equivalent: the torsional features of the geometry
do not vanish. These torsional contributions can be moved to the right hand
side of the field equations (10), acting as a source for the torsion-free part
of the geometry. This occurs in many situations in which the first and second
order formalisms are not equivalent Toloza and Zanelli (2013); Espiro and
Vásquez (2016); Castillo-Felisola _et al._ (2015, 2016); Barrientos _et al._
(2017); Cid _et al._ (2018); Cisterna _et al._ (2019); Barrientos _et al._
(2019).
For $\gamma\neq k$, the solution (20) has a curvature singularity at $r=0$, as
can be seen from the torsion-free Ricci scalar [see also Eq. (23)]
$\displaystyle\mathring{R}$ $\displaystyle=\mathring{R}^{\mu\nu}_{\
\mu\nu}=-12a+\frac{\left(\gamma^{2}-k^{2}\right)}{m\,r}-\frac{2(\gamma-k)}{r^{2}}\,.$
(25)
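The Ricci scalar (25) can be verified in the same static-metric setting, where $\mathring{R}=-f''-4f'/r+2(k-f)/r^{2}$ for a line element of the form (15); a short sympy sketch:

```python
import sympy as sp

r, gamma, m, a, k = sp.symbols('r gamma m a k', positive=True)

# Metric function, Eq. (20a)
f = gamma - 2*m/r - (gamma**2 - k**2)*r/(6*m) + a*r**2

# Torsion-free Ricci scalar of ds^2 = -f dt^2 + dr^2/f + r^2 dSigma_(k)^2
R = -sp.diff(f, r, 2) - 4*sp.diff(f, r)/r + 2*(k - f)/r**2

# Eq. (25)
R_claimed = -12*a + (gamma**2 - k**2)/(m*r) - 2*(gamma - k)/r**2
assert sp.simplify(R - R_claimed) == 0
```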
The central singularity is surrounded by a horizon located at $r=r_{h}$,
defined by the condition $f(r_{h})=0$. This is a cubic equation for $r_{h}$
with one real root and two complex-conjugate ones. The real root represents
the black hole’s horizon; its explicit form is rather cumbersome and not very
illuminating. The conditions on the integration constants that make $r_{h}>0$
are discussed in Eq. (28) below.
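The real root can be obtained numerically: multiplying $f(r_{h})=0$ by $r_{h}$ gives the cubic $ar_{h}^{3}-\tfrac{\gamma^{2}-k^{2}}{6m}r_{h}^{2}+\gamma r_{h}-2m=0$. The following sketch (with illustrative parameter values, not taken from the text) finds the horizon radius and checks it:

```python
import numpy as np

def horizon_radius(gamma, m, a, k):
    """Positive real root of f(r) = 0 for the metric function (20a),
    i.e. of the cubic a r^3 - (gamma^2 - k^2)/(6m) r^2 + gamma r - 2m = 0."""
    roots = np.roots([a, -(gamma**2 - k**2)/(6*m), gamma, -2*m])
    real_pos = [z.real for z in roots if abs(z.imag) < 1e-9 and z.real > 0]
    return max(real_pos)

# Illustrative values (not from the paper): k = 1, m = a = 1, gamma = 0.5
gamma, m, a, k = 0.5, 1.0, 1.0, 1
rh = horizon_radius(gamma, m, a, k)

f = lambda r: gamma - 2*m/r - (gamma**2 - k**2)*r/(6*m) + a*r**2
assert abs(f(rh)) < 1e-8  # r_h is indeed a root of the metric function
```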
### IV.2 Torsion
Besides the singularity at $r=0$, there exists a torsional singularity at
$r_{s}\equiv 6m(\gamma-k)^{-1}$, as can be seen from the invariant
$\displaystyle T_{\mu\nu\lambda}T^{\mu\nu\lambda}$
$\displaystyle=\frac{6}{r_{s}(r-r_{s})^{2}}\,\left[ar_{s}\,r^{2}-(\gamma+k)r+\gamma\,r_{s}-\frac{2mr_{s}}{r}\right]\,,$
(26)
where $T^{\lambda}_{\ \mu\nu}=e^{\lambda}{}_{a}T^{a}_{\ \mu\nu}$. The
conditions on the parameters of the solution that ensure that the torsional
singularity lies inside the black hole’s event horizon, $r_{h}>r_{s}$, are
discussed below.
From (22) it is apparent that the Lorentz curvature $R^{ab}$ and the torsion
components $T^{a}$ diverge both at $r=r_{s}$ and at $r=0$. In fact, the same
is true of all polynomial invariants obtained by contracting these tensors.
However, it is a remarkable feature of the solution that torsion-free
quantities, like the Ricci scalar (25), are regular at $r=r_{s}$. The same
holds for other torsion-free geometric quantities, like the Kretschmann scalar
or the quadratic curvature invariant (23). In particular, the torsion-free Weyl
tensor
$\mathring{W}^{\mu\nu}_{\lambda\rho}=\mathring{R}^{\mu\nu}_{\lambda\rho}-4\delta^{[\mu}_{[\lambda}\mathring{S}^{\nu]}_{\rho]}$,
$\displaystyle\mathring{W}^{\mu\nu}_{\lambda\rho}$
$\displaystyle=-\frac{(\gamma-k)r-6m}{3r^{2}}\delta^{\mu\nu}_{\lambda\rho}\,,$
(27)
is regular everywhere except at $r=0$. In fact, the solution is conformally
flat at $r=r_{s}$, where the Weyl tensor (27) vanishes.
The torsional singularity does not affect the motion of spinless test
particles since they couple to the torsion-free Christoffel connection and
follow geodesics rather than autoparallels Hehl _et al._ (1976, 1995).
Spinning particles and fermions in general, on the other hand, couple to
torsion Hehl and Datta (1971); Hehl (1976); Chandia and Zanelli (1998).
Therefore, their propagation must be sensitive to the torsional singularity in
this background. This should be taken into account when studying the geodesic
completeness of this solution for spinning test particles.
The existence of a horizon, together with the condition $r_{s}<r_{h}$ that
enforces the cosmic censorship principle, restricts the parameters. For the
sake of simplicity, consider the particular case of a weakened asymptotically
AdS solution with $m>0$, $a>0$, $k=1$, and $-1\leq\gamma\leq 1$. In this case,
the conditions $0<r_{s}<r_{h}$ translate into
$\displaystyle\left(-1\leq\gamma<1\quad\wedge\quad\frac{2-3\gamma+\gamma^{3}}{108m^{2}}\leq a\right)\quad\vee\quad\left(\gamma=1\quad\wedge\quad 0<a\right)\,.$ (28)
Similar restrictions on parameter space are found for the asymptotically de
Sitter case so that the curvature and torsional singularities are hidden by
the event horizon. Nevertheless, if $a<0$, the solution is endowed with a
cosmological horizon as well.
The solution (20) possesses a covariantly constant torsion, which can be
checked by using the Bianchi identity $\text{D}T^{a}=R^{a}{}_{b}\wedge e^{b}$
and noticing that the Lorentz curvature $2$-form (22) satisfies
$R^{a}{}_{b}\wedge e^{b}=0$. Geometries with covariantly constant torsion
include constant Lorentz curvature solutions and Lorentz-flat geometries as
particular cases. In three dimensions, the most general solution to
$\text{D}T^{a}=0$ has at least one pseudoscalar degree of freedom Alvarez _et
al._ (2014c) and admits a Proca-like excitation Andrianopoli _et al._ (2022).
In four or higher dimensions additional degrees of freedom could be present.
The torsional Riegert black hole (20) is continuously connected to the
torsion-free Schwarzschild-AdS and -dS solutions with a constant-curvature
base manifold in the limit $\gamma\to k$, for $a>0$ and $a<0$, respectively.
When $\gamma=-k$, however, the linear mode in the metric (20a) is absent and
the torsion is nonvanishing. Nevertheless, this case represents a naked
singularity located at $r=0$, since no set of parameters allows for a horizon. When
$\gamma=k=0$, the metric becomes the cylindrical black hole studied by Lemos
in Ref. Lemos (1995). Thus, a similar large gauge transformation could give
rise to a nonzero angular momentum when $k=0$ and $\gamma\neq 0$ such that the
torsion is nonvanishing. However, in this work we focus on the static case for
the sake of simplicity and we postpone stationary extensions for future
studies.
### IV.3 Thermodynamics
The thermodynamic properties of these black holes can be derived from the
partition function $\mathcal{Z}$ (see Gibbons and Hawking (1977); Hawking and
Page (1983) for details). This can be computed to first order in the saddle-
point approximation through $\ln\mathcal{Z}\approx-I_{E}$, where $I_{E}$ is
the Euclidean on-shell action. The latter can be obtained by performing the
analytic continuation $t\to-i\tau$ in the line element (15). The absence of
conical singularities when the Euclidean time coordinate is identified as
$\tau\sim\tau+\beta$ fixes this period to be $\beta=4\pi/f^{\prime}(r_{h})$.
Then, the Hawking temperature for the torsional Riegert black hole given by
(20) is
$\displaystyle T_{H}=\frac{\gamma+k}{4\pi r_{s}}+\frac{3m-\gamma r_{h}}{2\pi
r_{h}^{2}}\,.$ (29)
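Equation (29) can be checked numerically against the geometric definition $T_{H}=\beta^{-1}=f^{\prime}(r_{h})/(4\pi)$ that follows from the period fixed above. The parameter values below are illustrative, not taken from the text:

```python
import numpy as np

# Illustrative values (not from the paper)
gamma, m, a, k = 0.5, 1.0, 1.0, 1

# Horizon: positive real root of r f(r) = a r^3 - (gamma^2-k^2)/(6m) r^2 + gamma r - 2m
roots = np.roots([a, -(gamma**2 - k**2)/(6*m), gamma, -2*m])
rh = max(z.real for z in roots if abs(z.imag) < 1e-9 and z.real > 0)

fp = 2*m/rh**2 - (gamma**2 - k**2)/(6*m) + 2*a*rh  # f'(r_h)
T_geometric = fp / (4*np.pi)                        # T_H = f'(r_h)/(4 pi)

rs = 6*m/(gamma - k)                                # torsional singularity radius
T_eq29 = (gamma + k)/(4*np.pi*rs) + (3*m - gamma*rh)/(2*np.pi*rh**2)  # Eq. (29)

assert abs(T_geometric - T_eq29) < 1e-8
```

The agreement is exact: eliminating $a$ via $f(r_{h})=0$ in $f^{\prime}(r_{h})/(4\pi)$ reproduces Eq. (29) term by term.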
In contrast to the Schwarzschild-AdS solution, the temperature of the
torsional Riegert black hole behaves as
$\displaystyle T_{H}\approx\frac{\gamma^{2}-k^{2}}{24\pi
m}+\mathcal{O}(r_{h}^{-1})\;\;\;\;\mbox{when}\;\;\;\;r_{h}\to\infty\,.$ (30)
In the torsion-free limit $\gamma\to k$, however, it behaves as usual, namely,
$T_{H}\approx\mathcal{O}(r_{h}^{-1})$ as $r_{h}\to\infty$.
A direct evaluation of the Euclidean on-shell action (5) on the three-
parameter family of black-hole solutions (20) gives the partition function to
first order in the saddle-point approximation, that is,
$\displaystyle\ln\mathcal{Z}\approx-
I_{E}=\frac{\beta\Omega_{(k)}\left[(\gamma-k)^{2}r_{h}^{2}-6m(\gamma-k)r_{h}+12m^{2}\right]}{3r_{h}^{3}}\,,$
(31)
where $\Omega_{(k)}$ denotes the volume of the base manifold. Remarkably,
conformal invariance renders its value finite without any reference to
boundary counterterms. This feature has been observed in different theories
possessing conformal invariance in the metric formalism Grumiller _et al._
(2014); Anastasiou _et al._ (2021a, b); Barrientos _et al._ (2022), and we
hereby provide additional evidence of it for conformal supergravity with
nontrivial torsion. Moreover, the cosmological term in the solution (20)
arises as an integration constant, offering a natural scenario to study the
extended phase space of black-hole thermodynamics as proposed in Kubiznak _et
al._ (2017), as well as its critical behavior Kubiznak and Mann (2012). Due to
the number of free parameters, the phase structure will be richer, and we
postpone a deeper study of this point to the future.
### IV.4 Thermalized ground state
An interesting limit of the solution (20) is $\gamma\to k$ and $m\to 0$, with
$k(\gamma-k)(3m)^{-1}\equiv b$ (or $\frac{\gamma-k}{6m}=r_{s}^{-1}$) held
fixed. In this limit, the torsional singularity moves to $r_{s}=2kb^{-1}$.
Then, we obtain
$\displaystyle f(r)=k-br+ar^{2}\,,$ (32)
where $\omega_{1}(r)$, $\omega_{5}(r)$, and $\Lambda(r)$ can be obtained
directly from the relations in Eq. (20). This solution possesses weakened
AdS or dS asymptotics when $a>0$ or $a<0$, respectively, and it is weakly
asymptotically locally flat when $a=0$. In this case, the Lorentz curvature
and torsion $2$-forms are
$\displaystyle R^{ab}$
$\displaystyle=\frac{k-a\,r_{s}^{2}}{\left(r-r_{s}\right)^{2}}\,e^{a}\wedge
e^{b}\;\;\;\;\;\mbox{and}\;\;\;\;\;T^{a}=\frac{\sqrt{f(r)}}{r-r_{s}}\,e^{1}\wedge
e^{a}\,.$ (33)
Additionally, one can check that this spacetime is both conformally and Bach
flat.
From the Bianchi identity $\text{D}T^{a}=R^{a}{}_{b}\wedge e^{b}$, it is
straightforward to see that the torsion $2$-form is covariantly conserved as
well, namely, $\text{D}T^{a}=0$. Moreover, this solution has curvature and
torsional singularities at $r=0$ and $r=r_{s}$, respectively. The
horizons are located at the two positive real roots of the metric function
$f(r)$, i.e. $f(r_{\pm})=0$, where
$\displaystyle r_{\pm}=\frac{1}{2a}\left(b\pm\sqrt{b^{2}-4ak}\right)\,.$ (34)
The existence of a horizon requires $b^{2}-4ak\geq 0$, and the bound is
saturated in the extremal case, which can only happen if $ak>0$. In the non-
extremal case, the condition $ak<0$ guarantees that a horizon will always
exist. Additionally, we demand that the torsional singularity lie behind the
horizon by imposing $r_{+}>r_{s}$. Note that the torsion vanishes at the
horizon, as can be seen from Eq. (33). Moreover, in the extremal case there is
no torsional singularity whatsoever, as can be checked from the torsional
invariants, which become constant, e.g.
$T_{\mu\nu\lambda}T^{\mu\nu\lambda}=\tfrac{3b^{2}}{2k}$. Notice that the
solution (32) is non-Einstein for $b\neq 0$, as can be seen from the torsion-
free traceless Ricci tensor $\mathring{H}_{\mu\nu}$, that is,
$\displaystyle\mathring{H}_{\mu\nu}\mathring{H}^{\mu\nu}=\frac{b^{2}}{r^{2}}\,.$
(35)
The solution (32) has a rather peculiar thermodynamic behavior. First, its
Hawking temperature is nonvanishing and is given by
$\displaystyle T_{H}=\frac{k}{2\pi r_{+}\,r_{s}}(r_{+}-r_{s})\,.$ (36)
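The temperature (36) agrees with the geometric definition $T_{H}=f^{\prime}(r_{+})/(4\pi)$ for the metric function (32); a short numerical sketch (illustrative parameter values, not from the text):

```python
import numpy as np

# Illustrative non-extremal values (not from the paper): b^2 - 4ak = 5 > 0
k, a, b = 1.0, 1.0, 3.0

rp = (b + np.sqrt(b**2 - 4*a*k)) / (2*a)  # outer horizon r_+, Eq. (34)
rs = 2*k/b                                 # torsional singularity r_s = 2k/b

f = lambda r: k - b*r + a*r**2             # metric function, Eq. (32)
assert abs(f(rp)) < 1e-12                  # r_+ is a root of f
assert rp > rs                             # singularity lies behind the horizon

T_geometric = (-b + 2*a*rp) / (4*np.pi)    # T_H = f'(r_+)/(4 pi)
T_eq36 = k*(rp - rs) / (2*np.pi*rp*rs)     # Eq. (36)
assert abs(T_geometric - T_eq36) < 1e-10
```

Eliminating $a$ via $f(r_{+})=0$ shows both expressions equal $(br_{+}-2k)/(4\pi r_{+})$, so the agreement is exact.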
The requirement that the torsional singularity must lie behind the black
hole’s horizon, i.e. $r_{+}>r_{s}$, implies that the Hawking temperature is
positive definite. However, a direct computation of the Euclidean on-shell
action shows that it vanishes identically. This implies that its free energy,
mass, and entropy are zero. Therefore, we conclude that this limit represents
a conformally flat ground state of the theory that is not a maximally
symmetric space, although continuously connected to global AdS or dS in the
limit $b\to 0$ for $a>0$ or $a<0$, respectively. Nevertheless, since its
Hawking temperature (36) is nonvanishing, it can be regarded as a thermalized
ground state, similar to the Schwarzschild-AdS solution in critical gravity Lu
and Pope (2011). This ground state has been studied in metric formulation of
conformal gravity Lu _et al._ (2012) and the solution (32)–(33) represents
its torsional generalization embedded in the bosonic sector of unconventional
conformal supergravity.
## V Summary
We have explored a sector of the space of solutions of a particular
formulation of conformal supergravity. First, we check that this theory
possesses maximally symmetric spaces as ground states. Additionally, we show
that black-hole solutions also exist; the latter can be regarded as the
torsional extension of the Riegert spacetime Riegert (1984). The compensating
fields for translations and special conformal transformations of the $SO(3,2)$
group generate a backreaction that behaves as a dynamical cosmological
constant. Indeed, we find that the solution is weakened asymptotically
locally AdS; from the Riemann-Cartan viewpoint, however, it represents a novel
example of an asymptotically Lorentz-flat black hole.
This solution presents a number of interesting features. First, it possesses a
torsional singularity where the metric becomes conformally flat. Spinning test
particles should be sensitive to this kind of singularity, providing an
interesting setup to study autoparallel rather than geodesic
completeness. The conditions on the integration constants of the solution are
found such that the curvature and torsion singularities lie behind the
horizon, avoiding a violation of the cosmic censorship conjecture. When
$SU(N)$ and $U(1)$ connections are turned off, the black-hole solution is
continuously connected to Schwarzschild-(A)dS in the torsion-free limit.
Additionally, the torsion is covariantly conserved, something that could be
relevant for computing conserved charges. In a particular limit, the torsional
black hole becomes a thermalized ground state, namely, a non-maximally
symmetric configuration with positive definite temperature whose free energy
vanishes. This configuration is likely to be of relevance in the study of the
related Hawking-Page phase transitions.
Other interesting properties of this theory can be explored in the future. For
instance, since the cosmological constant term in Eq. (20) appears as an
integration constant, this theory offers a natural scenario to study the
extended phase space of the torsional Riegert black hole in the Gibbs
canonical ensemble by interpreting the cosmological term as a thermodynamic
pressure Kubiznak _et al._ (2017). On the other hand, stationary solutions in
unconventional conformal supergravity are certainly of interest to us. We
believe that the role of torsion will be important in that case due to the
axial symmetry involved in configurations of this type, and novel topological
features associated with torsion are expected to arise (see Chandia and
Zanelli (1997)). Finally, holographic properties of torsion can be studied in
conformal supergravity, since torsion is interpreted as the source of the spin
density in the dual field theory Banados _et al._ (2006); Klemm and Tagliabue
(2008); Blagojevic _et al._ (2013). We expect to return to a deeper study of
these aspects in the near future.
###### Acknowledgements.
We thank Giorgos Anastasiou, Laura Andrianopoli, Ignacio J. Araya, Oscar
Castillo-Felisola, Rodrigo Olea, and Omar Valdivia for useful discussions. P.
A. acknowledges MINEDUC-UA project ANT 1755 and Semillero de Investigación
project SEM18-02 from Universidad de Antofagasta, Chile. The work of C. C. and
J. Z. is supported by Agencia Nacional de Investigación y Desarrollo (ANID)
through FONDECYT No 11200025, 1210500, and 1220862.
## References
* Martin (2010) S. P. Martin, Adv. Ser. Direct. High Energy Phys. 21, 1 (2010).
* Cohen _et al._ (1996) A. G. Cohen, D. Kaplan, and A. Nelson, Phys. Lett. B388, 588 (1996).
* Dimopoulos and Giudice (1995) S. Dimopoulos and G. Giudice, Phys. Lett. B357, 573 (1995).
* Coleman and Mandula (1967) S. R. Coleman and J. Mandula, Phys. Rev. 159, 1251 (1967).
* Van Nieuwenhuizen (1981) P. Van Nieuwenhuizen, Phys. Rept. 68, 189 (1981).
* Blumenhagen _et al._ (2013) R. Blumenhagen, D. Lüst, and S. Theisen, _Basic concepts of string theory_, Theoretical and Mathematical Physics (Springer, Heidelberg, Germany, 2013).
* Ortin (2015) T. Ortin, _Gravity and Strings_, 2nd ed., Cambridge Monographs on Mathematical Physics (Cambridge University Press, 2015).
* Alvarez _et al._ (2012) P. D. Alvarez, M. Valenzuela, and J. Zanelli, JHEP 1204, 058 (2012).
* Alvarez _et al._ (2014a) P. D. Alvarez, P. Pais, and J. Zanelli, Phys. Lett. B 735, 314 (2014a).
* Alvarez _et al._ (2015) P. D. Alvarez, P. Pais, E. Rodríguez, P. Salgado-Rebolledo, and J. Zanelli, Class. Quant. Grav. 32, 175014 (2015).
* Alvarez _et al._ (2021a) P. D. Alvarez, L. Delage, M. Valenzuela, and J. Zanelli, Symmetry 13, 628 (2021a).
* Sohnius (1985) M. F. Sohnius, Phys. Rept. 128, 39 (1985).
* Stelle and West (1978) K. S. Stelle and P. C. West, Phys. Lett. B74, 330 (1978).
* Alvarez _et al._ (2020) P. D. Alvarez, M. Valenzuela, and J. Zanelli, JHEP 07, 205 (2020).
* Alvarez _et al._ (2021b) P. D. Alvarez, A. R. Chavez, and J. Zanelli, (2021b), arXiv:2111.09845 [hep-th] .
* Alvarez _et al._ (2022) P. D. Alvarez, R. A. Chavez, and J. Zanelli, J. Math. Phys. 63, 042304 (2022).
* Kaku _et al._ (1977) M. Kaku, P. K. Townsend, and P. van Nieuwenhuizen, Phys. Lett. B 69, 304 (1977).
* Ferrara _et al._ (1977) S. Ferrara, M. Kaku, P. K. Townsend, and P. van Nieuwenhuizen, Nucl. Phys. B 129, 125 (1977).
* Kaku and Townsend (1978) M. Kaku and P. K. Townsend, Phys. Lett. B 76, 54 (1978).
* Kaku _et al._ (1978) M. Kaku, P. K. Townsend, and P. van Nieuwenhuizen, Phys. Rev. D 17, 3179 (1978).
* Bergshoeff _et al._ (1981) E. Bergshoeff, M. de Roo, and B. de Wit, Nucl. Phys. B 182, 173 (1981).
* Bergshoeff _et al._ (1983) E. Bergshoeff, M. de Roo, and B. de Wit, Nucl. Phys. B 217, 489 (1983).
* Fradkin and Tseytlin (1985) E. S. Fradkin and A. A. Tseytlin, Phys. Rept. 119, 233 (1985).
* Liu and Tseytlin (1998) H. Liu and A. A. Tseytlin, Nucl. Phys. B 533, 88 (1998).
* Butter (2010) D. Butter, Annals Phys. 325, 1026 (2010).
* Ferrara _et al._ (2018) S. Ferrara, A. Kehagias, and D. Lüst, JHEP 08, 197 (2018).
* D’Auria and Ravera (2021) R. D’Auria and L. Ravera, Phys. Rev. D 104, 084034 (2021).
* Freedman and Van Proeyen (2012) D. Z. Freedman and A. Van Proeyen, _Supergravity_ (Cambridge Univ. Press, Cambridge, UK, 2012).
* Alvarez _et al._ (2014b) P. D. Alvarez, P. Pais, and J. Zanelli, Phys. Lett. B 735, 314 (2014b).
* Alvarez _et al._ (2021c) P. D. Alvarez, L. Delage, M. Valenzuela, and J. Zanelli, JHEP 07, 176 (2021c).
* MacDowell and Mansouri (1977) S. W. MacDowell and F. Mansouri, Phys. Rev. Lett. 38, 739 (1977), [Erratum: Phys. Rev. Lett. 38, 1376 (1977)].
* Townsend (1977) P. K. Townsend, Phys. Rev. D 15, 2795 (1977).
* Horne and Witten (1989) J. H. Horne and E. Witten, Phys. Rev. Lett. 62, 501 (1989).
* Andrianopoli and D’Auria (2014) L. Andrianopoli and R. D’Auria, JHEP 08, 012 (2014).
* Andrianopoli _et al._ (2021a) L. Andrianopoli, B. L. Cerchiai, R. Matrecano, O. Miskovic, R. Noris, R. Olea, L. Ravera, and M. Trigiante, JHEP 02, 141 (2021a).
* Trigiante (2017) M. Trigiante, Phys. Rept. 680, 1 (2017).
* Giribet _et al._ (2014) G. Giribet, N. Merino, O. Miskovic, and J. Zanelli, JHEP 08, 083 (2014).
* Andrianopoli _et al._ (2021b) L. Andrianopoli, G. Giribet, D. L. Díaz, and O. Miskovic, JHEP 11, 123 (2021b).
* Riegert (1984) R. J. Riegert, Phys. Rev. Lett. 53, 315 (1984).
* Lu _et al._ (2012) H. Lu, Y. Pang, C. N. Pope, and J. F. Vazquez-Poritz, Phys. Rev. D 86, 044011 (2012).
* Alvarez _et al._ (2014c) P. D. Alvarez, P. Pais, E. Rodríguez, P. Salgado-Rebolledo, and J. Zanelli, Phys. Lett. B 738, 134 (2014c).
* Maldacena (2011) J. Maldacena, (2011), arXiv:1105.5632 [hep-th] .
* Grumiller _et al._ (2014) D. Grumiller, M. Irakleidou, I. Lovrekovic, and R. McNees, Phys. Rev. Lett. 112, 111102 (2014).
* Toloza and Zanelli (2013) A. Toloza and J. Zanelli, Class. Quant. Grav. 30, 135003 (2013).
* Espiro and Vásquez (2016) J. L. Espiro and Y. Vásquez, Gen. Rel. Grav. 48, 117 (2016).
* Castillo-Felisola _et al._ (2015) O. Castillo-Felisola, C. Corral, S. Kovalenko, I. Schmidt, and V. E. Lyubovitskij, Phys. Rev. D 91, 085017 (2015).
* Castillo-Felisola _et al._ (2016) O. Castillo-Felisola, C. Corral, S. del Pino, and F. Ramírez, Phys. Rev. D 94, 124020 (2016).
* Barrientos _et al._ (2017) J. Barrientos, F. Cordonier-Tello, F. Izaurieta, P. Medina, D. Narbona, E. Rodríguez, and O. Valdivia, Phys. Rev. D 96, 084023 (2017).
* Cid _et al._ (2018) A. Cid, F. Izaurieta, G. Leon, P. Medina, and D. Narbona, JCAP 04, 041 (2018).
* Cisterna _et al._ (2019) A. Cisterna, C. Corral, and S. del Pino, Eur. Phys. J. C 79, 400 (2019).
* Barrientos _et al._ (2019) J. Barrientos, F. Cordonier-Tello, C. Corral, F. Izaurieta, P. Medina, E. Rodríguez, and O. Valdivia, Phys. Rev. D 100, 124039 (2019).
* Hehl _et al._ (1976) F. W. Hehl, P. Von Der Heyde, G. D. Kerlick, and J. M. Nester, Rev. Mod. Phys. 48, 393 (1976).
* Hehl _et al._ (1995) F. W. Hehl, J. D. McCrea, E. W. Mielke, and Y. Ne’eman, Phys. Rept. 258, 1 (1995).
* Hehl and Datta (1971) F. W. Hehl and B. K. Datta, J. Math. Phys. 12, 1334 (1971).
* Hehl (1976) F. W. Hehl, Rept. Math. Phys. 9, 55 (1976).
* Chandia and Zanelli (1998) O. Chandia and J. Zanelli, Phys. Rev. D 58, 045014 (1998).
* Andrianopoli _et al._ (2022) L. Andrianopoli, B. L. Cerchiai, R. D’Auria, R. Noris, L. Ravera, M. Trigiante, and J. Zanelli, (in preparation) (2022).
* Lemos (1995) J. P. S. Lemos, Phys. Lett. B 353, 46 (1995).
* Gibbons and Hawking (1977) G. W. Gibbons and S. W. Hawking, Phys. Rev. D 15, 2752 (1977).
* Hawking and Page (1983) S. W. Hawking and D. N. Page, Commun. Math. Phys. 87, 577 (1983).
* Anastasiou _et al._ (2021a) G. Anastasiou, I. J. Araya, and R. Olea, JHEP 01, 134 (2021a).
* Anastasiou _et al._ (2021b) G. Anastasiou, I. J. Araya, C. Corral, and R. Olea, JHEP 07, 156 (2021b).
* Barrientos _et al._ (2022) J. Barrientos, A. Cisterna, C. Corral, and M. Oyarzo, JHEP 05, 110 (2022).
* Kubiznak _et al._ (2017) D. Kubiznak, R. B. Mann, and M. Teo, Class. Quant. Grav. 34, 063001 (2017).
* Kubiznak and Mann (2012) D. Kubiznak and R. B. Mann, JHEP 07, 033 (2012).
* Lu and Pope (2011) H. Lu and C. N. Pope, Phys. Rev. Lett. 106, 181302 (2011).
* Chandia and Zanelli (1997) O. Chandia and J. Zanelli, Phys. Rev. D 55, 7580 (1997).
* Banados _et al._ (2006) M. Banados, O. Miskovic, and S. Theisen, JHEP 06, 025 (2006).
* Klemm and Tagliabue (2008) D. Klemm and G. Tagliabue, Class. Quant. Grav. 25, 035011 (2008).
* Blagojevic _et al._ (2013) M. Blagojevic, B. Cvetkovic, O. Miskovic, and R. Olea, JHEP 05, 103 (2013).
# A Bayesian Approach to Reconstructing Interdependent Infrastructure Networks
from Cascading Failures
Yu Wang, Jin-Zhu Yu, and Hiba Baroud
Yu Wang is with the Department of Civil and Environmental Engineering &
Department of Computer Science, Vanderbilt University, Nashville, TN, USA.
E-mail: <EMAIL_ADDRESS>. Jin-Zhu Yu is with the Department of Civil
Engineering, University of Texas at Arlington, Arlington, TX, USA.
E-mail: <EMAIL_ADDRESS>. Hiba Baroud is with the Department of Civil and
Environmental Engineering, Vanderbilt University, Nashville, TN, USA.
E-mail: <EMAIL_ADDRESS>.
###### Abstract
Analyzing the behavior of complex interdependent networks requires complete
information about the network topology and the interdependent links across
networks. For many applications such as critical infrastructure systems,
understanding network interdependencies is crucial to anticipate cascading
failures and plan for disruptions. However, data on the topology of individual
networks are often publicly unavailable due to privacy and security concerns.
Additionally, interdependent links are often only revealed in the aftermath of
a disruption as a result of cascading failures. We propose a scalable
nonparametric Bayesian approach to reconstruct the topology of interdependent
infrastructure networks from observations of cascading failures. Metropolis-
Hastings algorithm coupled with the infrastructure-dependent proposal are
employed to increase the efficiency of sampling possible graphs. Results of
reconstructing a synthetic system of interdependent infrastructure networks
demonstrate that the proposed approach outperforms existing methods in both
accuracy and computational time. We further apply this approach to reconstruct
the topology of one synthetic and two real-world systems of interdependent
infrastructure networks, including gas-power-water networks in Shelby County,
TN, USA, and an interdependent system of power-water networks in Italy, to
demonstrate the general applicability of the approach.
###### Index Terms:
Cascading failures, network interdependency, network reconstruction,
statistical network models.
## 1 Introduction
Networks offer a powerful tool for describing and analyzing various systems
with complex interactions, such as ecological, biological, technological, and
social systems [1, 2, 3]. Examples in infrastructure systems include modeling
water distribution systems as a network wherein facilities like pumping
stations and storage tanks are modeled as nodes and water pipes as links.
Other examples include modeling interdependent infrastructure systems (e.g.,
coupled power and water systems) as multi-layer networks to analyze their
vulnerability and resilience to disruptive events [4]. Ideally, performing
such network-level assessments requires data from real infrastructure networks
with complete information on the network topology, flow, and
interdependencies. However, data on the topology of real-world infrastructure
networks are often not available either due to privacy and security concerns
or decentralized operations of different infrastructure sectors [5]. As a
result, infrastructure network performance has been evaluated using model-
driven techniques that rely on assumptions of the existence and importance of
interdependencies. More recent research advances have focused on generating
synthetic infrastructure data to overcome real network data challenges [6, 7,
8]. To achieve a more accurate representation of network interdependencies,
this study proposes a data-driven approach to infer the structure of
interdependent critical infrastructure (ICI) networks based on observations of
cascading failures and other data sources.
Network reconstruction from observations of the dynamics on the target network
is a fundamental but challenging inverse problem [9, 10]. This problem has
garnered significant attention from researchers across several domains with
research advances made to (i) capture network structure using partial
information [11, 12, 13], and (ii) anticipate dynamic changes [14, 15, 16].
While applications such as social, biological, and ecological networks have
obvious dynamic patterns, the structure of critical infrastructure networks
has been assumed to be static and deterministic unless the structure is
subject to disruptions. Some studies have applied network reconstruction to an
individual infrastructure, such as road networks [17], but a generic approach
for learning the topology of ICIs is still lacking.
### 1.1 Background and related work
Network reconstruction aims to infer the network topology from direct or
indirect data that are inherently connected to the underlying network such as
observations of dynamic processes on the network [18, 3, 9]. Examples of such
dynamic processes include the diffusion of news among social media [19], the
propagation of cascading failures in infrastructures [20], and the spreading
of influence among political parties [21]. Although a single sequence of
cascading failures may not provide sufficient information on the underlying
network, combining many sequences of cascading failures can enable robust
network reconstruction to obtain valuable insight into the functionality of
the underlying network [22].
Data-driven network reconstruction approaches fall into three categories: (i)
graph embedding-based approaches [23, 24, 25], (ii) optimization-based
approaches, and (iii) Bayesian approaches [26, 27, 9, 28].
Graph embedding-based approaches, including different types of graph neural
networks [29, 30] that first require graph embedding, aim to map nodes into a
low-dimensional vector space wherein the quantitative relationships among
nodes reflect the topology of the original network [23, 24, 25]. The low-
dimensional representation of the original networks is then used for link
prediction. However, graph embedding-based approaches are not suitable for
reconstructing infrastructure networks because graph embedding requires a
relatively high amount of topological information about the target network,
which is hard to obtain for real infrastructure systems. Optimization-based
approaches mainly include matrix factorization [31] and compressed sensing
[32, 33]. Similar to graph embedding-based approaches, optimization-based
approaches also assume that a subset of the target network is observed;
we therefore resort to other approaches to tackle the problem of
infrastructure network reconstruction.
The Bayesian approach has been commonly used to uncover the complete network
structure from partial observations [27], primarily due to its ability to
quantify the uncertainty of the estimated network topology, which is
important because in many cases only sparse observations of failure cascades
are available [9, 10]. Bayesian approaches for network reconstruction are
either parametric or nonparametric. In parametric approaches, such as maximum
likelihood estimation [34] or expectation maximization [35], the topology of
the target network is usually assumed to be generated by a predefined
statistical graph model. Then a fixed set of parameters of the parametric
model are fitted such that the model is most likely to produce the data about
the underlying network. Parametric models require that the network type is
known, but many real-world systems are comprised of a diverse set of
interconnected networks that do not fall into a well-defined network type.
In contrast, nonparametric approaches do not assume a particular model form
and use a large number of samples of possible topologies to identify the most
probable network. Topologies that are compatible with the observed dynamic
process will have a higher posterior probability. Several models have been
developed along this line. Peixoto [27] integrates a Bayesian approach with
epidemic spreading models and the Ising model to learn the network structure
and community labels simultaneously from measurements of network dynamics.
Gray et al. [9] propose a similar Bayesian approach to infer the network
structure that employs the independent cascade model of information diffusion
for dynamical processes on networks, which is analogous to the susceptible-
infected (SI) model for epidemic spreading. These models demonstrate the power
of the nonparametric Bayesian approach in reconstructing networks like social
networks and information networks, but they are not specifically designed for
infrastructure networks.
### 1.2 Contributions
In this paper, we develop a nonparametric Bayesian approach to infer the
discrete posterior distribution of the topology of the target infrastructure
networks from observations of cascading failures. The inference is performed
using the Markov Chain Monte Carlo (MCMC) algorithm with infrastructure-
dependent proposals. Our contributions are twofold:
1.
We devise an infrastructure-dependent proposal in applying the MCMC algorithm
for reconstructing different types of infrastructure networks with
considerations to their unique topological constraints. Taking advantage of
the block structure of ICI networks, we leverage the hierarchical stochastic
block model (HSBM) to compute the prior probability of the potential graphs.
The topological constraints guarantee that the topology of the proposed
network matches that of the corresponding real infrastructure and
significantly reduce the sampling space of network topology. The convergence
of the graph MCMC algorithm is proved analytically and demonstrated
numerically.
2.
We design three computational optimization techniques to improve the
efficiency of the graph MCMC algorithm in reconstructing large-scale networks
where the number of possible network topologies grows exponentially with the
number of nodes. The performance of these three techniques is demonstrated
through experiments on reconstructing synthetic ICI networks.
The rest of this paper is organized as follows: Section 2 introduces the
general Bayesian model for reconstructing the network topology based on
cascading failures data. The graph MCMC algorithm with the infrastructure-
dependent proposal to infer the Bayesian model is described in Section 3. In
Section 4, we present three computational optimization techniques to improve
the computational efficiency of the proposed network reconstruction approach.
Numerical experiments for validating the proposed approach are shown in
Section 5, followed by concluding remarks in Section 6.
## 2 Bayesian approach for network reconstruction
Network reconstruction using Bayesian approaches can be defined as follows.
###### Definition 1
(Network reconstruction). Given a set of time series data $\bm{C}$ about the
binary state of nodes that are generated from a cascading failure process on a
network with unknown adjacency matrix $\bm{A}$, the network structure is
inferred by finding the adjacency matrix $\bm{A}$ with the highest conditional
probability $P(\bm{A}|\bm{C})$. The set of cascading failure data $\bm{C}$
contains independent cascading failure scenarios (sequences)
$\bm{c}^{i},i\in\\{1,2,...,C\\}$, starting from time 1, where
$\bm{c}^{i}_{t,j}=1$ indicates node $j$ fails at time $t$ in the $i$-th
cascading failure scenario. The failure of node $j$ occurs when at least one
of its adjacent nodes fails and this failure successfully propagates along
links to the node $j$.
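As a concrete illustration of Definition 1, the cascade data $\bm{C}$ can be encoded as binary arrays. The helper below is an illustrative sketch, not the authors' implementation; the cumulative-state encoding (a node stays failed once it fails) is an assumption consistent with Eq. (6) below.

```python
import numpy as np

# Hypothetical encoding of one cascade scenario c^i from Definition 1:
# a (T x n) binary array whose entry [t, j] is 1 if node j has failed
# by time step t (states are treated as cumulative, an assumption).
def make_scenario(n_nodes, failure_times):
    """failure_times: {node index: 1-indexed time step at which it fails}."""
    T = max(failure_times.values())
    c = np.zeros((T, n_nodes), dtype=int)
    for node, t in failure_times.items():
        c[t - 1:, node] = 1  # once failed, the node stays failed
    return c

# Node 0 fails at t=1 and the failure propagates to nodes 1 and 3 at t=2.
c1 = make_scenario(5, {0: 1, 1: 2, 3: 2})
```

Stacking many such scenarios gives the data set $\bm{C}$ over which the likelihood of Section 2.2 is evaluated.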
In the Bayesian approach for network reconstruction, we need to estimate the
probability distribution of the network topology given the observations,
i.e., a set of initial node failure scenarios and the subsequent cascading
failure sequences. Specifically, conditioned on the time series
observations on node failures $\bm{C}$, we estimate the posterior distribution
$P(\bm{A}|\bm{C})$ for the adjacency matrix $\bm{A}$ of the underlying network
via Bayes’ rule [27, 9]:
$P(\bm{A}|\bm{C})=\frac{P(\bm{C}|\bm{A})P(\bm{A})}{P(\bm{C})}\propto
P(\bm{C}|\bm{A})P(\bm{A}),$ (1)
where $P(\bm{C}|\bm{A})$ is the likelihood that the cascading failures
$\bm{C}$ occur in a network with topology $\bm{A}$, $P(\bm{A})$ encodes the
prior information on the network topology, and $P(\bm{C})$ is the
normalization constant which represents the total evidence for the cascading
failure data $\bm{C}$ [27]. Details on calculating the graph prior $P(\bm{A})$
and the likelihood $P(\bm{C}|\bm{A})$ are presented in Sections 2.1 and 2.2,
respectively.
### 2.1 Hierarchical stochastic block model
This section introduces how to generate the prior probability of the network
topology using the HSBM. Calculating the prior probability of the topology
$P(\bm{A})$ requires realistic possible samples of the target network to be
reconstructed. Different types of networks having different prior
probabilities $P(\bm{A})$ can be generated by different graph models. The
graph model we adopt here should represent the realistic behaviors of the
target network. Generative graph models for real networks, such as the
classical random network, the scale-free network, and the small-world network, are
not applicable to real-world ICIs because nodes in infrastructure networks are
grouped by blocks and hierarchy [36]. As such, a suitable model for
characterizing ICI networks is the HSBM.
The HSBM is built on the stochastic block model, which divides nodes into
different blocks based on node membership. The HSBM then adds a
hierarchical structure to each block based on levels of node functionality in
each block. As such, the probability of an edge between two nodes depends on
the blocks to which the nodes belong and the hierarchical levels at which the
nodes are positioned. Considering a multilayer network with a set of blocks
$\mathcal{M}=\\{M_{1},M_{2},...,M_{|\mathcal{M}|}\\}$ and the number of nodes
in each block is $n_{M},M\in\mathcal{M}$, each single block $M\in\mathcal{M}$
has a hierarchical structure
$\mathcal{L}=\\{L_{1},L_{2},...,L_{|\mathcal{L}|}\\}$ where $n_{M}$ nodes are
further divided into $|\mathcal{L}|$ levels. Considering the set of nodes and
edges in the multilayer network as
$\mathcal{V}^{\mathcal{M}}=\bigcup\limits_{M\in\mathcal{M}}{\mathcal{V}^{M}},\mathcal{E}^{\mathcal{M}}=\bigcup\limits_{M\in\mathcal{M}}{\mathcal{E}^{M}}$
and denoting the block and the level labels of node $i$ as
$b_{i}\in\mathcal{M}$ and $l_{i}\in\mathcal{L}$, the probability of an
edge $(i,j)$ is an independent Bernoulli random variable $p_{ij}$ conditioned
on the block and the level labels $b_{i},b_{j},l_{i},l_{j}$ of nodes $i,j$.
Therefore, the prior probability $P(\bm{A})$ conditioned on the block and
layer assignment, $\mathcal{M}$ and $\mathcal{L}$, is calculated as
$P(\bm{A}|\mathcal{M},\mathcal{L})=\prod\limits_{\forall
i,j\in\mathcal{V}^{\mathcal{M}},i\neq
j}{p_{ij}^{\bm{A}_{ij}}(1-p_{ij})^{1-\bm{A}_{ij}}},$ (2)
$p_{ij}=g(b_{i},b_{j},l_{i},l_{j}),$ (3)
where $g$ is a HSBM function that calculates the edge probability based on the
block and the layer assignments of nodes. Conditioning the graph prior
probability $P(\bm{A})$ on the HSBM model, we introduce prior knowledge on
network topology $P(\bm{A}|\mathcal{M},\mathcal{L})$ from the function $g$. In
ICIs, $\mathcal{M}$ corresponds to individual infrastructure sectors (e.g.,
power grid, gas distribution network) and $\mathcal{L}$ represents different
types of facilities with different functionalities (e.g., power supply and
distribution nodes).
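As a sketch of how Eqs. (2)-(3) can be evaluated, the snippet below computes the log prior of a small directed graph. The edge-probability function `g`, the block and level labels, and the adjacency representation are all illustrative assumptions, not the paper's calibrated HSBM.

```python
import math
from itertools import permutations

# Log of the HSBM prior P(A | M, L) in Eq. (2): a product of independent
# Bernoulli terms whose edge probabilities come from g (Eq. (3)).
def hsbm_log_prior(adj, blocks, levels, g):
    """adj: dict node -> set of successors; blocks/levels: node -> label."""
    logp = 0.0
    for i, j in permutations(blocks, 2):  # all ordered pairs i != j
        p = g(blocks[i], blocks[j], levels[i], levels[j])
        present = j in adj.get(i, set())
        logp += math.log(p) if present else math.log(1.0 - p)
    return logp

# Toy g (an assumption): edges are likely within a block going one level
# down the hierarchy, and rare otherwise.
def g(bi, bj, li, lj):
    return 0.6 if (bi == bj and lj == li + 1) else 0.05

blocks = {0: "water", 1: "water", 2: "power"}
levels = {0: 0, 1: 1, 2: 0}   # 0 = supply, 1 = transmission
adj = {0: {1}}                # one edge: water supply -> water transmission
lp = hsbm_log_prior(adj, blocks, levels, g)
```

Working in log space keeps the product of many Bernoulli factors numerically stable, which matters once the prior is combined with the likelihood in Eq. (1).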
### 2.2 Cascading failure model
After employing the HSBM to model the prior information on the network
topology, we further utilize the cascading failure model to compute the
likelihood used in Eq. (1) in this section. Denoting the adjacency matrix of
the network to be reconstructed as $\bm{A}^{*}$, the prior knowledge on the
network topology is updated with the likelihood $P(\bm{C}|\bm{A})$ towards the
target topology $\bm{A}^{*}$ that determines the diffusion of the cascading
failure encoded in $\bm{C}$. Disasters cause initial failures of
infrastructure components, which then trigger flow redistribution of resources
and energy within and across the infrastructure networks [37, 38]. The flow
redistribution will then overload and damage additional components, leading to
cascading failures until the system reaches a new stable state without newly
failed components. We leverage the SI model to simulate the cascading failure
occurring on infrastructure networks wherein the failure of facilities is
modelled by the ‘infection’ of nodes and the flow redistribution is modelled
by the probabilistic propagation of the infection in the network.
Considering a failure propagation from time step $t$ to $t+1$ on node $j$
shown in Fig. 1 and further assuming that the probability of failure
propagating from node $i$ to $j$ is $q_{ij}$, the failure probability of node
$j$ is calculated as
$\displaystyle P(\bm{c}^{i}_{t+1,j}$
$\displaystyle=1|\bm{c}^{i}_{t,j}=0,\bm{c}^{i}_{t,1}=1,...,\bm{c}^{i}_{t,v}=0)$
$\displaystyle=1-(1-q_{1j})(1-q_{2j})\cdots(1-q_{v-1,j}).$ (4)
Figure 1: Failure propagation from time $t$ to $t+1$ for node $j$, red
represents failed nodes while black represents nodes that are still
functional.
Due to the independence between any two cascading failure scenarios
$\bm{c}^{i}$ and $\bm{c}^{j}\;(i\neq j,\;i,j\in\\{1,...,C\\})$, the likelihood of
the cascading failure data $\bm{C}$ encodes the failure propagation across all
cascading failure scenarios $i\in\\{1,...,C\\}$, over all time steps for the
entire disruption duration $\\{1,...,T(\bm{c}^{i})\\}$, and among all nodes in
the multilayer network $j\in\mathcal{V}^{\mathcal{M}}$. This likelihood is
given by
$P(\bm{C}|\bm{A})=\prod\limits^{C}_{i=1}{\prod\limits_{t=1}^{T(\bm{c}^{i})-1}{\prod\limits_{j=1}^{|\mathcal{V}^{\mathcal{M}}|}{P(\bm{c}^{i}_{t+1,j}|\bm{c}^{i}_{t})}}},$
(5)
where $T(\bm{c}^{i})$ is the number of time steps that the cascading failure
scenario $\bm{c}^{i}$ lasts and $P(\bm{c}^{i}_{t+1,j}|\bm{c}^{i}_{t})$ is the
probability of the operational status of node $j$ at time step $t+1$, which
is calculated by incorporating the operable node status into Eq. (4):
$\displaystyle P\left(\bm{c}^{i}_{t+1,j}|\bm{c}^{i}_{t}\right)$
$\displaystyle={\left[1-\prod_{k\in\\{\mathcal{V}^{\mathcal{M}}\backslash j\\}}\left[1-q_{kj}\bm{A}_{kj}\bm{c}_{t,k}^{i}(1-\bm{c}_{t-1,k}^{i})\right]\right]^{\bm{c}_{t+1,j}^{i}(1-\bm{c}_{t,j}^{i})}}$
$\displaystyle\;\cdot{\left[\prod_{k\in\\{\mathcal{V}^{\mathcal{M}}\backslash j\\}}\left[1-q_{kj}\bm{A}_{kj}\bm{c}_{t,k}^{i}(1-\bm{c}_{t-1,k}^{i})\right]\right]^{(1-\bm{c}_{t+1,j}^{i})(1-\bm{c}_{t,j}^{i})}},$
(6)
where the first term on the right-hand side represents the probability of
failure propagating to node $j$, while the second term represents the
probability of the complementary event, i.e., failure not propagating to node
$j$. The complementary property is guaranteed by the fact that only one of the
two binary terms $\bm{c}^{i}_{t+1,j}(1-\bm{c}^{i}_{t,j})$ and
$(1-\bm{c}^{i}_{t+1,j})(1-\bm{c}^{i}_{t,j})$ can be 1. The term
$\bm{c}^{i}_{t,k}(1-\bm{c}^{i}_{t-1,k})$ ensures that the failure propagation
is Markovian, i.e. the failure of nodes at the current time step is only
impacted by nodes that failed at the last time step, which matches the failure
process of infrastructure networks, such as the progressive collapse of
structures and gradual outages of the power stations [39]. For non-Markovian
failure propagation wherein nodes that fail several time steps ago can still
affect nodes at the current time step, such as information diffusion and
epidemic spreading in social networks, one can simply remove the term
$\bm{c}^{i}_{t-1,k}$ to incorporate these cases into the cascading failure
model.
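A direct (unoptimized) evaluation of the log of Eq. (6) for a single scenario can be sketched as follows. The uniform propagation probability $q$, the cumulative binary states, and the nested-list layout are assumptions of this illustration; Section 4.1 replaces the double loop over nodes with a linear-time pass.

```python
import math

# Sketch of the log-likelihood of one cascade scenario under Eq. (6).
# Assumptions: uniform propagation probability q; c[t][k] is a cumulative
# binary state (1 = node k has failed by step t); at t = 0 the previous
# state is all zeros so the Markovian "newly failed" indicator is defined.
def scenario_log_likelihood(A, c, q):
    n, T = len(c[0]), len(c)
    ll = 0.0
    for t in range(T - 1):
        prev = [0] * n if t == 0 else c[t - 1]
        for j in range(n):
            if c[t][j]:          # node j already failed: both exponents are 0
                continue
            # probability that NO newly failed neighbour transmits to j
            survive = 1.0
            for k in range(n):
                if A[k][j] and c[t][k] and not prev[k]:
                    survive *= 1.0 - q
            # (math.log(0) would signal data impossible under this topology)
            ll += math.log(1.0 - survive) if c[t + 1][j] else math.log(survive)
    return ll

# Chain 0 -> 1 -> 2; node 0 fails at t=1, node 1 at t=2, node 2 at t=3.
A = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
c = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
ll = scenario_log_likelihood(A, c, q=0.8)
```

Summing this quantity over all scenarios $\bm{c}^{i}$ gives the log of the full likelihood in Eq. (5).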
## 3 Graph MCMC for network reconstruction
By generating possible network topologies with the graph prior model from
Section 2.1 and incorporating the cascading failure model in Section 2.2, Eq.
(1) becomes
$P(\bm{A}|\bm{C},\mathcal{M},\mathcal{L})\propto
P(\bm{C}|\bm{A},\mathcal{M},\mathcal{L})P(\bm{A}|\mathcal{M},\mathcal{L})P(\mathcal{M},\mathcal{L}),$
(7)
where $P(\bm{C}|\bm{A},\mathcal{M},\mathcal{L})$ is the likelihood and
$P(\bm{A}|\mathcal{M},\mathcal{L})\cdot P(\mathcal{M},\mathcal{L})$ is the
prior probability of the network topology. Since the cascading failure
scenarios, $\bm{C}$, are not related to the block and layer assignments,
$\mathcal{M}$ and $\mathcal{L}$, in calculating the prior probability of
proposed networks by the HSBM, the likelihood is
$P(\bm{C}|\bm{A},\mathcal{M},\mathcal{L})=P(\bm{C}|\bm{A}).$ (8)
Given that $P(\mathcal{M},\mathcal{L})$ does not contain $\bm{A}$ and using
Eq. (8), Eq. (7) is simplified as follows
$P(\bm{A}|\bm{C},\mathcal{M},\mathcal{L})\propto
P(\bm{C}|\bm{A})P(\bm{A}|\mathcal{M},\mathcal{L}).$ (9)
### 3.1 Metropolis-Hastings algorithm
Since Eq. (9) usually does not have a closed-form solution, simulation
techniques are typically leveraged to generate samples of the posterior
distributions of possible network topologies. MCMC methods such as the
Metropolis-Hastings (M-H) algorithm and Gibbs sampling are commonly used to
perform Bayesian inference through sampling [40, 41]. In this study, the M-H
algorithm is employed to implement the proposed Bayesian approach since the Gibbs sampling
algorithm requires an analytical solution to the conditional distributions of
each parameter in the model. At each step in the M-H algorithm, a new
multilayer network $\mathcal{M}^{\prime}$ with topology $\bm{A}^{\prime}$ is
proposed based on the current network $\mathcal{M}$ with topology $\bm{A}$
using the proposal distribution $Q(\bm{A}^{\prime}|\bm{A})$. The M-H algorithm
accepts the new proposal probabilistically with the ratio $\gamma$, which is
given by
$\displaystyle\gamma=\min(1,\frac{P(\bm{C}|\bm{A}^{\prime})P(\bm{A}^{\prime}|\mathcal{M},\mathcal{L})}{P(\bm{C}|\bm{A})P(\bm{A}|\mathcal{M},\mathcal{L})}\frac{Q(\bm{A}|\bm{A}^{\prime})}{Q(\bm{A}^{\prime}|\bm{A})}).$
(10)
The performance of the algorithm depends on the choice of the proposal
distribution $Q(\bm{A}^{\prime}|\bm{A})$. The random-walk graph proposal is
commonly used in the M-H algorithm, where a random pair of nodes is chosen
and the edge between them is either created or removed. However, this
approach can generate invalid or unrealistic proposals of network topology.
For example, in generating proposals of possible underlying topologies of
interdependent water and power networks, a likely edge is from pumping
stations to storage tanks, representing water extraction from nearby rivers at
pumping stations and its transportation to storage tanks to be stored for
future use. However, random graph proposals may add edges going from storage
tanks to pumping stations which is not realistic. As such, we devise an
infrastructure-dependent proposal method that imposes additional constraints
on the topology of the proposed networks.
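The acceptance test of Eq. (10) is typically implemented in log space for numerical stability. The sketch below assumes placeholder log-posterior and log-proposal values; `log_post_*` stands for $\log P(\bm{C}|\bm{A}) + \log P(\bm{A}|\mathcal{M},\mathcal{L})$ of the proposed and current graphs.

```python
import math, random

# Sketch of the M-H acceptance test of Eq. (10) in log space.
# log_q_back / log_q_fwd are log Q(A|A') and log Q(A'|A); all arguments
# here are placeholders supplied by the surrounding sampler.
def mh_accept(log_post_new, log_post_old, log_q_back, log_q_fwd, rng=random):
    log_gamma = min(0.0, (log_post_new - log_post_old)
                         + (log_q_back - log_q_fwd))
    return rng.random() < math.exp(log_gamma)

# An improving move under a symmetric proposal is always accepted.
accepted = mh_accept(-10.0, -12.0, 0.0, 0.0)
```

With a symmetric proposal the $Q$ terms cancel and the rule reduces to the plain Metropolis criterion; an asymmetric proposal (such as the TNT sampler of Section 4.3) must supply the correct `log_q_back`/`log_q_fwd`.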
### 3.2 Infrastructure-dependent proposal
Infrastructure-dependent proposal ensures that only networks with a topology
conforming to ICIs are considered. This is achieved by first analyzing the
basic topology of ICIs, from which we abstract the topological constraints for
designing the infrastructure-dependent proposal. We consider that each
individual network in ICIs has a hierarchical structure with three levels
corresponding to supply, transmission, and demand facilities. Given a set of
infrastructure networks, $\mathcal{M}$, where supply, transmission and demand
nodes are denoted as $\text{s},\text{t},\text{d}$, and a set of interdependent
links, $\mathcal{I}$, across the networks, we can express the topological
constraints as follows:
1. $\forall M\in\mathcal{M},\forall i\in\mathcal{V}_{\text{s}}^{M},\exists j\in\mathcal{V}_{\text{d}}^{M},i\mathop{\rightarrow}\limits^{\text{path}}j$.
2. $\forall M\in\mathcal{M},\forall i\in\mathcal{V}_{\text{d}}^{M},\exists j\in\mathcal{V}_{\text{s}}^{M},j\mathop{\rightarrow}\limits^{\text{path}}i$.
3. $\forall M\in\mathcal{M},\forall i\in\mathcal{V}_{\text{s}}^{M},\exists j\in\mathcal{V}_{\text{t}}^{M},i\mathop{\rightarrow}\limits^{\text{path}}j$.
4. $\forall M\in\mathcal{M},\forall i\in\mathcal{V}_{\text{t}}^{M},\exists j\in\mathcal{V}_{\text{s}}^{M},j\mathop{\rightarrow}\limits^{\text{path}}i$.
5. $\forall I\in\mathcal{I},\forall i\in\mathcal{V}_{\text{t}}^{I},\exists j\in\mathcal{V}_{\text{d}}^{I},i\mathop{\rightarrow}\limits^{\text{path}}j$.
6. $\forall I\in\mathcal{I},\forall i\in\mathcal{V}_{\text{d}}^{I},\exists j\in\mathcal{V}_{\text{t}}^{I},j\mathop{\rightarrow}\limits^{\text{path}}i$.
7. $\forall M\in\mathcal{M},\forall(i,j)\in\mathcal{E}^{M},i\in\mathcal{V}_{1},j\in\mathcal{V}_{2},\\{\mathcal{V}_{1},\mathcal{V}_{2}\\}\in\\{\\{\mathcal{V}^{M}_{\text{s}},\mathcal{V}^{M}_{\text{t}}\\},\\{\mathcal{V}^{M}_{\text{t}},\mathcal{V}^{M}_{\text{d}}\\},\\{\mathcal{V}^{M}_{\text{s}},\mathcal{V}^{M}_{\text{d}}\\}\\}$.
8. $\forall I\in\mathcal{I},\forall(i,j)\in\mathcal{E}^{I},i\in\mathcal{V}^{I}_{\text{s}},j\in\mathcal{V}^{I}_{\text{d}}$.
9. $\forall M\in\mathcal{M},\forall I\in\mathcal{I},\text{no cycles in }M,I$.
Typically, resources are generated or extracted at supply nodes and
transported via transmission nodes to demand nodes where resources are further
distributed to local residents. Therefore, in every infrastructure network,
$M\in\mathcal{M}$, every supply node $i\in\mathcal{V}^{M}_{\text{s}}$ is
connected by a path to at least one demand node
$j\in\mathcal{V}^{M}_{\text{d}}$ and vice versa, corresponding to constraints
1-2. Every transmission node $i\in\mathcal{V}^{M}_{\text{t}}$ is connected by
a path from at least one supply node $j\in\mathcal{V}^{M}_{\text{s}}$ and to
at least one demand node $k\in\mathcal{V}^{M}_{\text{d}}$, corresponding to
the constraints 3-4. Similarly, for every interdependent link
$I\in\mathcal{I}$, we have the same constraints for connectivity from the
supply nodes to demand nodes, according to constraints 5-6. Resources move
from supply nodes (at the high level) to demand nodes (at the low level) with
no resources flowing back, resulting in only forward edges from high-level
nodes to low-level nodes according to constraints 7-8; this automatically
leads to no cycles in each individual network, as stated by constraint 9.
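Constraints 1-2 (and, analogously, 3-6) amount to reachability queries on the directed graph. A BFS-based check might look like the following sketch; the dict-of-sets adjacency representation is an assumption of this illustration.

```python
from collections import deque

# Does a directed path exist from `start` to any node in `targets`?
def reaches_any(adj, start, targets):
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        if u in targets:
            return True
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# Constraints 1-2: every supply node reaches some demand node, and every
# demand node is reachable from some supply node (checked on the reverse).
def satisfies_supply_demand(adj, supply, demand):
    radj = {}
    for u, vs in adj.items():
        for v in vs:
            radj.setdefault(v, set()).add(u)
    return (all(reaches_any(adj, s, demand) for s in supply)
            and all(reaches_any(radj, d, supply) for d in demand))

adj = {"s1": {"t1"}, "t1": {"d1"}}      # s1 -> t1 -> d1
ok = satisfies_supply_demand(adj, {"s1"}, {"d1"})
bad = satisfies_supply_demand({"s1": set()}, {"s1"}, {"d1"})
```

Running such BFS/DFS checks on every node is exactly the cost that Theorem 1 in Section 4.2 avoids for incremental one-edge proposals.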
The M-H algorithm with infrastructure-dependent proposal and considerations of
the nine constraints is outlined in Alg. 1. First, a basic adjacency matrix
$\bm{A}^{0}$ is generated following steps 1-5 to initiate the M-H algorithm.
While we construct $\bm{A}^{0}$ by connecting failed nodes in sequential time
steps, any method that generates graphs with $P(\bm{C}|\bm{A})>0$ could be
used. In step 10, we randomly select a pair of nodes $(i,j)$ from the subset
of the feasible set
$\mathcal{S}=\\{\\{\mathcal{V}^{M}_{\text{s}},\mathcal{V}^{M}_{\text{t}}\\},\\{\mathcal{V}^{M}_{\text{s}},\mathcal{V}^{M}_{\text{d}}\\},\\{\mathcal{V}^{M}_{\text{t}},\mathcal{V}^{M}_{\text{d}}\\},\\{\mathcal{V}^{I}_{\text{s}},\mathcal{V}^{I}_{\text{d}}\\}\\},\forall
M\in\mathcal{M},\forall I\in\mathcal{I}\\}$, which guarantees that constraints
(7)-(9) are satisfied. Then we add or remove the edge between that pair of
nodes $(i,j)$ and generate a new candidate network following steps 11-15. The
candidate network is accepted as a sample of the posterior distribution of
$\mathcal{A}$, if the addition or removal of this edge does not violate
constraints (1)-(6). Otherwise, we reject this candidate, propose another pair
of nodes, and check the feasibility of the proposed topology, as shown in
steps 16-22. Note that the block and layer assignments required in calculating
the acceptance ratio, Eq. (10), are already given since blocks correspond to
infrastructure networks $\mathcal{M},\mathcal{I}$ and layers correspond to
facility types as defined in Section 2.1.
The advantage of this infrastructure-dependent proposal over the random-walk
proposal lies in improved sampling efficiency. By imposing additional
topological constraints, invalid topology proposals are eliminated, thereby
increasing the accuracy of sampling. Moreover, such elimination of invalid
topology proposals reduces the space of the candidate topology (Appendix B),
which significantly decreases the mixing time of the M-H algorithm. We also
show that the Markov chain constructed using the infrastructure-dependent
proposal converges (Appendix A).
Algorithm 1: M-H algorithm with infrastructure-dependent proposal
1:The set of networks to be reconstructed $\mathcal{M},\mathcal{I}$ with known
types of facilities in sets
$\mathcal{V}^{M}_{\text{s}},\mathcal{V}^{M}_{\text{t}},\mathcal{V}^{M}_{\text{d}}$,
the cascading failure data $\bm{C}$, the maximum number of iterations
$Iter_{max}$, the feasible set $\mathcal{S}$ from which the pair of nodes for
which an edge is added or removed is randomly picked
2:The posterior distribution of the adjacency matrix $\mathcal{A}$ of the
target networks
3:for cascading failure scenario $i\in\\{1,...,C\\}$ do
4: for $t\in\\{1,...,T(\bm{c}^{i})-1\\}$ do
5: Add a link between every node failing at $t$ and
6: every node failing at $t+1$
7: end for
8:end for$\triangleright$ Initialize the adjacency matrix $\bm{A}^{0}$
9:$\text{Iter}\leftarrow 0$
10:while $\text{Iter}\leq Iter_{max}$ do$\triangleright$ Start M-H sampling
11: $\bm{A}\leftarrow\bm{A}^{\text{Iter}}$
12: Randomly pick a pair of nodes $(i,j)$, $i\in\mathcal{V}_{1}$,
13: $j\in$ $\mathcal{V}_{2}$ and the set
$\\{\mathcal{V}_{1},\mathcal{V}_{2}\\}\in\mathcal{S}$
14:$\triangleright$ Impose constraints (7)-(9)
15: if $\bm{A}_{ij}=0$ then
16: $\bm{A}_{ij}\leftarrow 1$
17: else
18: $\bm{A}_{ij}\leftarrow 0$
19: end if$\triangleright$ Add or remove the link
20: if $\bm{A}$ causes no violation of constraints (1)-(6) then
21: Calculate the acceptance ratio $\gamma$ by Eq. (10)
22: if $\gamma\geq p\sim U(0,1)$ then
23: Accept $\bm{A}$ as a sample of the posterior
24: distribution of $\mathcal{A}$
25: $\text{Iter}\leftarrow\text{Iter}+1$
26: end if
27: end if
28:end while
29:return $\mathcal{A}$
## 4 Computational techniques for improving the sampling efficiency
Although the infrastructure-dependent proposal significantly reduces the space
of the candidate network topology and thus improves the sampling efficiency,
the computational expense can still be very high due to the computation of
proposing and updating the topology of the network at each iteration. At each
iteration, steps 8-23 in Alg. 1 take $\mathcal{O}(1)$ operations to randomly
choose node pairs from the feasible set and add or remove associated links.
Evaluating the likelihood $P(\bm{C}|\bm{A})$ in calculating the acceptance
ratio $\gamma$ takes
$\mathcal{O}(\sum_{i=1}^{C}{T(\bm{c}^{i})|\mathcal{V}^{\mathcal{M}}|^{2}})$
time to double loop over all nodes in $\mathcal{V}^{\mathcal{M}}$ at time step
$t$ to compute the failure probability of every node at $t+1$ and iterate over
all time steps in $\\{1,...,T(\bm{c}^{i})-1\\}$ through all cascading
scenarios $\bm{c}^{i},i\in\\{1,...,C\\}$. Validating whether the generated
network $\bm{A}$ violates constraints (1)-(6) requires
$\mathcal{O}(|\mathcal{V}^{\mathcal{M}}|(|\mathcal{V}^{\mathcal{M}}|+|\mathcal{E}^{\mathcal{M}}|)+|\mathcal{V}^{\mathcal{I}}|(|\mathcal{V}^{\mathcal{I}}|+|\mathcal{E}^{\mathcal{I}}|))$
as we check the existence of paths by tree-search algorithms such as Depth
First Search (DFS) or Breadth First Search (BFS) on every node in every
network. So each iteration of Alg. 1 takes
$\mathcal{O}\Big(\underbrace{\sum\limits_{i=1}^{C}T(\bm{c}^{i})|\mathcal{V}^{\mathcal{M}}|^{2}}_{\text{Likelihood calculation}}+\underbrace{|\mathcal{V}^{\mathcal{M}\cup\mathcal{I}}|(|\mathcal{V}^{\mathcal{M}\cup\mathcal{I}}|+|\mathcal{E}^{\mathcal{M}\cup\mathcal{I}}|)}_{\text{Graph validation}}\Big)$ time, which is
nonlinear due to $|\mathcal{V}^{\mathcal{M}}|^{2}$ in the likelihood
calculation and the $|\mathcal{V}^{\mathcal{M}\cup\mathcal{I}}|^{2}$ in the
graph validation. Since the interdependent infrastructure network is a sparse
and tripartite graph (supply-transmission-demand), we devise the following
three optimization techniques to speed up the computation.
### 4.1 Likelihood calculation
In the likelihood calculation, the quadratic term
$|\mathcal{V}^{\mathcal{M}}|^{2}$ is due to evaluating the failure probability
of each node $j$ by considering failure propagation from all other failed
nodes that are connected to node $j$,
$1-\prod\limits_{k\in\\{\mathcal{V}^{\mathcal{M}}\backslash
j\\}}(1-q_{kj}\bm{A}_{kj}\bm{c}^{i}_{t,k}(1-\bm{c}^{i}_{t-1,k}))$. This
computation can be simplified by only considering the failed neighbors of
node $j$, which gives
$1-{\prod_{k\in\\{{{\mathcal{N}}^{\mathcal{M}}}(j)\\}}}(1-{q_{kj}}{\bm{c}}_{t,k}^{i}(1-{\bm{c}}_{t-1,k}^{i}))$.
Instead of double looping over all nodes, we traverse the edge list and
accumulate the probability of failure propagation following Eq. (6) over
edges whose head nodes have newly failed, grouped by their common tail
nodes. See Fig. 2 for an
example where we aim to calculate the failure probability of nodes $1$ to $5$
at the next time step based on the current failure status $\bm{C}_{t}^{i}$ by
simply traversing the edge list. Since the whole process requires iterating
over the adjacency list (an array of linked lists wherein each linked list
describes the neighbors of a particular node), the time complexity of
calculating the likelihood is reduced to
$\mathcal{O}(\sum_{i=1}^{C}{T(\bm{c}^{i})(|\mathcal{V}^{\mathcal{M}}|+|\mathcal{E}^{\mathcal{M}}|)})$.
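The edge-list pass described above can be sketched as follows; the uniform propagation probability $q$ and the flat data layout are assumptions of this illustration.

```python
# Sketch of the Section 4.1 speed-up: one pass over the edge list per time
# step accumulates, for each tail node j, the probability that no newly
# failed head node transmits a failure to j.
def step_failure_probs(edges, now_failed, newly_failed, q, n):
    survive = [1.0] * n
    for k, j in edges:                  # directed edge k -> j
        if k in newly_failed:
            survive[j] *= 1.0 - q
    # failure probability at t+1 for each still-functional node j
    return [0.0 if j in now_failed else 1.0 - survive[j] for j in range(n)]

edges = [(0, 1), (0, 2), (1, 2)]
# node 0 has just failed; nodes 1 and 2 are still functional
p = step_failure_probs(edges, now_failed={0}, newly_failed={0}, q=0.5, n=3)
```

Each time step now costs one sweep over the edges plus one over the nodes, matching the $\mathcal{O}(|\mathcal{V}^{\mathcal{M}}|+|\mathcal{E}^{\mathcal{M}}|)$ bound stated above.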
Figure 2: Traversing the edge list to calculate the failure probability
### 4.2 Graph validation
In graph validation, the second-order term
$|\mathcal{V}^{\mathcal{M}\cup\mathcal{I}}|(|\mathcal{V}^{\mathcal{M}\cup\mathcal{I}}|+|\mathcal{E}^{\mathcal{M}\cup\mathcal{I}}|)$
is due to checking the corresponding paths on every node in the network.
However, considering the tripartite structure of ICI networks and since every
iteration of the M-H algorithm only adds or removes one edge, the following
theorem is proposed to guarantee that constraints (1)-(6) are validated in
linear time.
###### Theorem 1
Let $\overrightarrow{\bm{A}}^{\prime},\overleftarrow{\bm{A}}^{\prime}$ denote
the adjacency list and the reversed adjacency list of the proposed graph
$\bm{G}^{\prime}$ after adding or removing the link $(i,j)$ selected from the
feasible set $\mathcal{S}$. Given that the graph $\bm{G}$ in the last
iteration of the M-H algorithm has already satisfied constraints (1)-(9) and
the current proposed graph $\bm{G}^{\prime}$ has already satisfied constraints
(7)-(8), we claim that $\bm{G}^{\prime}$ satisfies constraints (1)-(6), (9) if
and only if $\overrightarrow{\bm{A}}^{\prime}_{i}\neq\\{\\}$ and
$\overleftarrow{\bm{A}}^{\prime}_{j}\neq\\{\\}$.
The proof of Theorem 1 is given in Appendix C. Theorem 1 indicates that we can
validate the proposed topology by checking only the out-degree of the head
node $i$ and the in-degree of the tail node $j$, which takes only linear
time ${\mathcal{O}(|{V^{{\cal M}\cup{\cal I}}}|+|{\mathcal{E}^{{\cal
M}\cup{\cal I}}}|)}$.
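Under the assumptions of Theorem 1, the constant-time check per proposal might look like the following sketch (the adjacency-list data structures are our own illustrative names):

```python
def valid_after_move(adj_out, adj_in, i, j):
    """Sketch of the Theorem 1 check: after adding or removing edge (i, j),
    the remaining constraints hold iff node i keeps a non-empty out-adjacency
    list and node j keeps a non-empty in-adjacency list."""
    return len(adj_out[i]) > 0 and len(adj_in[j]) > 0
```

Since only the two endpoint degrees are inspected, the check is O(1) per move; maintaining the adjacency lists across moves is what keeps the overall validation linear in the graph size.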
### 4.3 Tie-No-Tie sampler
The third technique we employ to improve the sampling efficiency is to replace
the random sampler at step 10 in Alg. 1 with the 'tie-no-tie' (TNT) sampler
[9, 42]. The random sampler selects node pairs uniformly at random; in sparse
graphs it therefore proposes node pairs joined by non-edges (edges not present
in the network), and hence edge additions, far more often than node pairs
joined by edges, and hence edge removals. However, the newly added edges are
likely to be rejected because they violate constraints (1)-(9) or produce a
network topology discouraged by the cascading failure data. The TNT sampler
instead first selects the set of edges or the set of non-edges with equal
probability, and then toggles a random node pair in that set, which increases
the frequency of edge removals in the proposal. Since edge removals are more
likely to be accepted in sparse graphs, the sampling efficiency is improved.
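A minimal sketch of one TNT proposal step (our own illustrative code, not the implementation from [9, 42]) is:

```python
import random

def tnt_propose(edges, non_edges):
    """Tie-no-tie proposal: choose the edge set or the non-edge set with
    equal probability, then toggle a uniformly chosen node pair from it."""
    if random.random() < 0.5 and edges:
        return random.choice(sorted(edges)), "remove"   # propose removing an edge
    return random.choice(sorted(non_edges)), "add"      # propose adding a non-edge
```

In a sparse graph the edge set is small, so conditioning the draw on the set (rather than on all node pairs) raises the share of removal proposals, as described above.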
## 5 Numerical experiments
In this section, we apply the proposed Bayesian approach to one synthetic and
two real-world ICIs. In the first case study, we design five methods from
different combinations of the infrastructure-dependent proposal and different
computational optimization techniques (Table I). By comparing the performance
of these methods on reconstructing the synthetic system of ICI networks, we
select the best method and apply it to two systems of real-world ICIs.
### 5.1 Synthetic networks
The synthetic system contains three blocks corresponding to water, power, and
gas networks, each of which has three hierarchical levels corresponding to
supply, transmission, and demand facilities (Fig. 3). All three networks have
two supply nodes, three transmission nodes, and five demand nodes. The
interdependency across networks is simulated using the approach for Synthetic
Interdependent Critical Infrastructure Networks (SICIN) in [6], which requires
as input the degree distribution extracted from real-world infrastructures of
the same type. Two types of interdependencies are considered: 1) pumping
stations require electricity from 12kV substations for pumping water from
nearby rivers and 2) power gate stations depend on water from water delivery
stations for cooling purposes; gas gate stations require electricity from 12kV
substations for extracting natural gas from underground while power gate
stations depend on natural gas from gas delivery stations for generating
electricity. Both interdependencies are modeled by accounting for the distance
between facilities. We denote the water-power-gas SICIN
that we aim to reconstruct as $\bm{A}^{*}$ hereafter. The cascading failure
data $\bm{C}$ is simulated using the SI epidemic model; the ratio of initially
failed nodes is set to $0.2$, and the probability of failure propagation $q$
in Eq. (2.2) is set to $0.4$ to obtain sufficient cascading failure data, in
line with prior work in the literature [9]. To
demonstrate the dependency of the performance of reconstructing network
topology on the amount of cascading data $\bm{C}$, we design three
experiments: (i) $E_{5}^{5}$: five cascading failure scenarios with at least
five time steps each; (ii) $E_{5}^{15}$: 15 cascading failure scenarios with
at least five time steps each; (iii) $E_{5}^{40}$: 40 cascading failure
scenarios with at least five time steps each. All three experiments are performed with and without
considering infrastructure-dependent proposals.
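The SI-style cascade generation described above (initial failure ratio 0.2, propagation probability $q=0.4$) can be sketched as follows; the function and variable names are our own, not the authors':

```python
import numpy as np

def simulate_cascade(edges, n_nodes, q=0.4, init_ratio=0.2, max_t=10, seed=0):
    """SI-style cascade: at each step, every not-yet-failed neighbor of a
    failed head node fails independently with probability q."""
    rng = np.random.default_rng(seed)
    failed = np.zeros(n_nodes, dtype=bool)
    n_init = max(1, int(init_ratio * n_nodes))
    failed[rng.choice(n_nodes, size=n_init, replace=False)] = True
    history = [failed.copy()]
    for _ in range(max_t):
        new = failed.copy()
        for k, j in edges:
            if failed[k] and not failed[j] and rng.random() < q:
                new[j] = True
        if new.sum() == failed.sum():  # cascade has stopped spreading
            break
        failed = new
        history.append(failed.copy())
    return np.array(history)
```

Each row of the returned history corresponds to one failure-state vector $\bm{c}^{i}_{t}$; in the SI model, failures never recover, so the failed set grows monotonically.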
Figure 3: The water-power-gas SICIN. Solid lines represent links within a
network while dashed lines represent interdependent links between networks.
Since the average degree is a basic metric for the connectivity and density of
a network, affecting features such as the clustering
coefficient and the diameter [43], we select the average degree as the
representative network feature and draw its trace plot to validate the
convergence of the M-H algorithm as shown in Figs. 4 and 5. Regardless of
whether we consider the infrastructure-dependent proposal, the average degree
of the reconstructed network in $E_{5}^{40}$ (green chain) is closer to the
target value (black dashed line) than that in $E_{5}^{5}$ and $E_{5}^{15}$
(blue and orange chains). This outcome is primarily driven by the amount of
cascading failure data: $E_{5}^{40}$, with more cascading failure scenarios,
contains more node-pair information and covers a wider range of the network
topology than the data with fewer scenarios in $E_{5}^{5}$ and $E_{5}^{15}$.
Therefore, in $E_{5}^{40}$,
the posterior of these edges is updated with more data of the corresponding
node pairs, drawing the distribution closer to the target posterior. Also,
when compared to the first two experiments, the value of the average degree
converges to the target with fewer iterations in $E_{5}^{40}$, i.e., fewer
iterations at the warm-up stage.
Figure 4: Average degree trace plot without infrastructure-dependent proposal
Figure 5: Average degree trace plot with infrastructure-dependent proposal
Comparing Figs. 4 and 5, we observe that in all three experiments the average
degree of the reconstructed network converges more quickly, and closer to the
target value, with the infrastructure-dependent proposal than without it.
Since the infrastructure-dependent proposal exerts extra topological
constraints on the proposed network to generate more realistic candidate
topologies, it improves both the accuracy and the computational efficiency.
Next, we evaluate the accuracy of network reconstruction using the adjacency
matrix. Suppose that the set of networks in the posterior distribution is
$\mathcal{A}$; the edge probability $p_{ij}$ for each pair of nodes $(i,j)$ is
defined as the ratio of the number of graphs with the edge between $i,j$,
$|\\{{\mathbf{A}\in\mathcal{A}}:\mathbf{A}_{ij}=1\\}|$, and the total number
of graphs in the posterior, $|\mathcal{A}|$, as follows
${p_{ij}}=\frac{|\\{{\mathbf{A}\in\mathcal{A}}:\mathbf{A}_{ij}=1\\}|}{|\mathcal{A}|}.\qquad(11)$
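Given a list of posterior adjacency-matrix samples, Eq. (11) reduces to an element-wise average (a sketch; `samples` is our own name):

```python
import numpy as np

def edge_probabilities(samples):
    """Eq. (11): the fraction of posterior graphs that contain each edge,
    i.e. the element-wise mean of the sampled adjacency matrices."""
    return np.mean(np.stack(samples), axis=0)
```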
The edge probability for each pair of nodes in the network is displayed on the
heatmap of the adjacency matrix shown in Fig. 6. In the first row of Fig. 6, a
number of noisy edges that are not present in the real networks are proposed;
they appear in areas of the heatmap that are blank in the target adjacency
matrix. In contrast, when using the infrastructure-dependent
proposal (shown in the second row of Fig. 6), the heatmap of the adjacency
matrix conforms to the block structure in the original network. Furthermore,
since ${E}_{5}^{40}$ uses more cascading failure data to update the network
topology, we have higher confidence in the predicted edges. Therefore, the
heatmap of the adjacency matrix is darker in $E_{5}^{40}$ than in the other
two experiments, indicating higher edge probability.
Figure 6: The heatmap of the adjacency matrix reconstructed without (the first
row) and with (the second row) infrastructure-dependent proposal under three
experiment settings $E_{5}^{5}$ (a), $E_{5}^{15}$ (b), and $E_{5}^{40}$ (c).
We now show the accuracy of the proposed network reconstruction approach. We
set a probability threshold $p$ and classify node pairs into two categories:
node pair connected by an edge if $p_{ij}\geq p$ and node pair not connected
if $p_{ij}<p$. We then count the node pairs that are correctly classified and
compute the F1-score, which is the harmonic mean of precision and recall. The
precision-recall curve is plotted in Figs.
18 and 19 in Appendix E. For each of the three experiments, the best F1-score
is 0.44, 0.53, and 0.84 for experiments without infrastructure-dependent
proposal, and 0.72, 0.85, and 0.95 for experiments with infrastructure-
dependent proposal. This outcome is consistent with the previous observation
that the accuracy of the reconstructed network improves with additional
cascading data. In addition, our infrastructure-dependent proposal
significantly improves the performance of reconstructing networks using the
same amount of cascading failure data.
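The threshold sweep described above can be sketched as follows (our own helper, assuming `p_edge` holds the Eq. (11) edge probabilities and `a_true` the target adjacency matrix):

```python
import numpy as np

def best_f1(p_edge, a_true, thresholds=np.linspace(0.05, 0.95, 19)):
    """Scan probability thresholds and return the best F1-score for
    classifying node pairs as edge / non-edge."""
    best = 0.0
    for p in thresholds:
        pred = (p_edge >= p).astype(int)
        tp = int(np.sum((pred == 1) & (a_true == 1)))
        fp = int(np.sum((pred == 1) & (a_true == 0)))
        fn = int(np.sum((pred == 0) & (a_true == 1)))
        if tp == 0:
            continue  # F1 undefined or zero; skip this threshold
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best
```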
To demonstrate the computational performance of incorporating three
optimization techniques, we devise five methods by combining different
techniques and the infrastructure-dependent proposal. The details of the five
methods are outlined in Table I where the checkmark denotes that the
corresponding method uses the corresponding technique. "IP" refers to using
the infrastructure-dependent proposal and "Validation" refers to validating the
proposed graph according to Theorem 1. We implement each of the five methods
10 times to reconstruct the synthetic networks and report their averaged
reconstructing time and F1-score in Fig. 7. In each run, we propose 3000
samples and use the first 2000 samples as warm-ups.
TABLE I: The configuration of five methods for constructing ICIs.

Method | IP | TNT | Edgelist | Validation
---|---|---|---|---
M-1 | ✓ | ✓ | ✓ | ✓
M-2 | ✓ | ✓ | ✓ |
M-3 | ✓ | ✓ | |
M-4 | ✓ | | |
M-5 | | | |
Figure 7: The time (a) and the F1-score (b) of reconstructing the synthetic
networks using the five methods.
In Fig. 7(a), M-1 (blue) and M-2 (orange) have the shortest running times, and
the gap between them and the other three methods widens as the amount of data
increases from $E_{5}^{5}$ to $E_{5}^{40}$. This indicates that traversing the
edge list rather than the adjacency matrix to calculate the likelihood
significantly reduces the computational load, especially when more failure
data is used. The
comparable running time between M-1 and M-2 indicates that applying graph
validation according to Theorem 1 brings nearly no improvement in
computational efficiency. This is because proposing networks by removing edges
in the TNT sampler can easily generate pairs of disconnected nodes; once such
a pair is found, constraints (1)-(6) are violated, so we save the time that
would otherwise be needed to check the remaining node pairs. Comparing M-3
(green) and M-4 (red), replacing the random sampler with the TNT sampler
reduces the computational load because the TNT sampler proposes more valid
topological variation by removing edges rather than adding invalid edges that
would be later rejected. The running time of M-4 (red) is greater than that of
M-5 (purple) in $E_{5}^{5}$ while it is less in $E_{5}^{15},E_{5}^{40}$. Given
the smaller amount of cascading failure data in $E_{5}^{5}$, the proposed
networks can be easily accepted without applying the infrastructure-dependent
proposal to check any topological constraint in advance. Therefore, the
sampling process is completed in a shorter time in M-5 where no extra
operations on checking the network topology are required. However, for larger
cascading failure data in $E_{5}^{15}$ and $E_{5}^{40}$ and without verifying
the network topology for constraints (1)-(9) in advance, the topology
candidates proposed by the random sampler are more likely to be rejected as
they are not supported by the cascading failure data (low likelihood).
Therefore, the time spent on proposing such invalid samples outweighs the time
for checking the topological constraints, leading to shorter times in M-4 than
in M-5 for $E_{5}^{15}$ and $E_{5}^{40}$. In Fig. 7(b), the first four
methods achieve roughly the same F1-score, all better than M-5, which
demonstrates the power of the infrastructure-dependent proposal. However, M-4,
which uses the random sampler, is slightly better than M-1, M-2, and M-3,
which use the TNT sampler: some edges present in the target network are
removed by the TNT sampler in those methods, and the random sampler, with its
higher chance of adding edges, is more likely to recover them. The average
F1-scores and running times of the five methods are provided in Tables II and
III, respectively. Considering
the trade-off between the running time and the reconstructing accuracy, M-1 is
selected to be used in the following two real-world case studies.
TABLE II: The average F1-score of the five methods.

Method | $E_{5}^{5}$ | $E_{5}^{15}$ | $E_{5}^{40}$
---|---|---|---
M-1 | 0.718 | 0.837 | 0.948
M-2 | 0.716 | 0.836 | 0.938
M-3 | 0.705 | 0.832 | 0.937
M-4 | 0.728 | 0.853 | 0.954
M-5 | 0.307 | 0.591 | 0.841

(Bold font in the original table denotes the top two methods.)
TABLE III: The average running time (s) of the five methods.

Method | $E_{5}^{5}$ | $E_{5}^{15}$ | $E_{5}^{40}$
---|---|---|---
M-1 | 57.608 | 162.836 | 712.096
M-2 | 54.257 | 161.850 | 702.410
M-3 | 95.876 | 368.205 | 2067.501
M-4 | 261.652 | 697.319 | 3762.572
M-5 | 150.339 | 944.409 | 20305.656
### 5.2 Interdependent power, water, and gas networks in Shelby County
The experiments demonstrating the performance improvement brought by the
infrastructure-dependent proposal have thus far only considered synthetic
infrastructure networks. Therefore, we apply the proposed method to
reconstruct real-world ICIs. The first case study considers the gas-power-
water networks of Shelby County, Tennessee (Fig. 8). Since the size of these networks is
larger than the synthetic networks, we increase the amount of cascading
failure data to ensure a good performance of network reconstruction. The
cascading failure data $E_{20}^{40}$ contains 40 cascading failure scenarios,
each with at least 20 time steps. The probability of failure propagation is
varied from $0.1$ to $0.9$ in increments of $0.1$ to cover low, middle, and
high probabilities of failure propagation. The F1-score and time of reconstructing
the system of Shelby County infrastructure networks with different failure
propagation probabilities are shown in Fig. 9.
Figure 8: Interdependent gas-power-water networks of Shelby County. Solid lines represent links within a network while dashed lines represent interdependent links.
Regarding the computation time, the network topology generated to initialize
the reconstruction process contains invalid edges that do not belong to the
target network, so the subsequent reconstruction process must remove them by
proposing the corresponding edge removals. However, under a high probability
of failure propagation, removing these invalid edges lowers the likelihood of
the network topology relative to the original network, so the removal
proposals are more likely to be rejected. Therefore, obtaining the same number
of posterior samples takes longer at high failure-propagation probabilities
than at low ones, as observed in the reconstruction time increasing with the
probability of failure propagation. For the
F1-score, a high probability of failure propagation prompts adding more edges
between consecutively failed nodes, and the reconstructed network becomes
denser than the target one. Conversely, a very low probability prompts
removing more edges, and the reconstructed network becomes sparser than the
target one. Both situations lower the reconstruction accuracy and, as observed
in Fig. 9, the peak F1-score appears when the probability of failure
propagation is around 0.4.
Figure 9: F1-score and time of reconstructing Shelby County networks.
### 5.3 Interdependent power and water networks in Ideal City
The second case study considers the system of interdependent power and water
networks (Fig. 10) extracted from a virtual city that mimics a typical Italian
building stock with geographic boundaries spanning the city of Turin, Italy.
The data was provided by [44] and processed to obtain subgraphs of the power
and water networks. The water distribution network has one supply node
(pumping station), 10 transmission nodes (storage tanks), and 67 demand nodes
(delivery stations) while the power network consists of one supply node (gate
station), 10 transmission nodes (intermediate stations), and 61 demand nodes
(delivery substations). The geographical locations of facilities in these two
networks are reorganized using a simulated annealing algorithm
based on the population distribution in Italy [45]. Since the topology of this
network is more tree-like than the networks in the previous two case studies,
cascading failures are more difficult to propagate, leading to shorter
cascading failure sequences. To obtain a reasonably good performance of
network reconstruction, we increase the number of cascading scenarios to 100,
each of which has at least 12 failure sequences. The F1-score of network
reconstruction with different parameters is presented in Fig. 11.
Figure 10: The interdependent power and water networks of Ideal City. Solid
lines represent links within a network while dashed lines represent
interdependent links. Figure 11: F1-score and time of reconstructing
interdependent power and water networks in the Ideal City.
Similar to the observation in the second case study, the reconstructing time
increases as the probability of failure propagation increases because the
proposals of removing the invalid edges are more likely to be rejected.
However, the peak of the F1-score appears when the probability of failure
propagation is around 0.2, which is different from the previous case study.
This indicates that the probability of failure propagation that achieves the
best reconstructing performance depends on the particular topology of the ICI
to be reconstructed. Another observation is that even though the number of
cascading scenarios used here is much greater than in the previous case study,
the best F1-score (0.775) is lower than the best F1-score of the previous case
study (0.875). This is because cascading failures are more difficult to
propagate in the Ideal City networks, which consist of only two tree-like
structures; therefore, less information about the network topology is encoded
in the cascading failure sequences than in the previous case study.
## 6 Conclusion
This paper presents a Bayesian approach for reconstructing the topology of
systems of ICI networks based on cascading failure data. The HSBM is used to
capture the clustering and hierarchical structure of ICIs. The computational
complexity resulting from the exponential growth of the number of potential
topologies is addressed by a new infrastructure-dependent proposal that takes
into account the topological constraints of ICIs. Furthermore, we propose three techniques
to significantly improve the sampling efficiency. Results of three case
studies demonstrate the effectiveness and transferability of the proposed
approach.
Future work can be carried out to investigate the influence of cascading
failure data and the network structure on the reconstruction performance.
While the proposed approach uses the SI epidemic model to simulate cascading
failures in infrastructures, future work can explore other types of failure
simulation models, such as susceptible-infected-susceptible (SIS),
susceptible–infected–recovered (SIR), and their stochastic variants. In the
last case study we demonstrate that the performance of reconstructing the
network topology based on the cascading failure data varies from network to
network and heavily relies on the topology of the networks. As such, further
research can help identify the network topology for which the proposed
reconstruction method is best suited.
## References
* [1] B. D. Fath, U. M. Scharler, R. E. Ulanowicz, and B. Hannon, “Ecological network analysis: Network construction,” _Ecological Modelling_ , vol. 208, no. 1, pp. 49–55, 2007.
* [2] P. Sharma, D. J. Bucci, S. K. Brahma, and P. K. Varshney, “Communication network topology inference via transfer entropy,” _IEEE Transactions on Network Science and Engineering_ , vol. 7, no. 1, pp. 562–575, 2019.
* [3] I. Brugere, B. Gallagher, and T. Y. Berger-Wolf, “Network structure inference, a survey: Motivations, methods, and applications,” _ACM Computing Surveys (CSUR)_ , vol. 51, no. 2, pp. 1–39, 2018.
* [4] C.-W. Ten, C.-C. Liu, and G. Manimaran, “Vulnerability assessment of cybersecurity for scada systems,” _IEEE Transactions on Power Systems_ , vol. 23, no. 4, pp. 1836–1846, 2008.
* [5] W. L. Hamilton, R. Ying, and J. Leskovec, “Representation learning on graphs: Methods and applications,” _arXiv preprint arXiv:1709.05584_ , 2017.
* [6] Y. Wang, J.-Z. Yu, and H. Baroud, “Generating synthetic systems of interdependent critical infrastructure networks,” _IEEE Systems Journal_ , vol. 16, no. 2, pp. 3191–3202, 2022.
* [7] N. Ahmad, M. Chester, E. Bondank, M. Arabi, N. Johnson, and B. L. Ruddell, “A synthetic water distribution network model for urban resilience,” _Sustainable and Resilient Infrastructure_ , pp. 1–15, 2020.
* [8] C. Zhai, T. Y.-j. Chen, A. G. White, and S. D. Guikema, “Power outage prediction for natural hazards using synthetic power distribution systems,” _Reliability Engineering & System Safety_, vol. 208, p. 107348, 2021.
* [9] C. Gray, L. Mitchell, and M. Roughan, “Bayesian inference of network structure from information cascades,” _IEEE Transactions on Signal and Information Processing over Networks_ , vol. 6, pp. 371–381, 2020.
* [10] A. Braunstein, A. Ingrosso, and A. P. Muntoni, “Network reconstruction from infection cascades,” _Journal of the Royal Society Interface_ , vol. 16, no. 151, p. 20180844, 2019.
* [11] A. Vajdi and C. M. Scoglio, “Identification of missing links using susceptible-infected-susceptible spreading traces,” _IEEE Transactions on Network Science and Engineering_ , vol. 6, no. 4, pp. 917–927, 2018.
* [12] Y. Zhao, Y.-J. Wu, E. Levina, and J. Zhu, “Link prediction for partially observed networks,” _Journal of Computational and Graphical Statistics_ , vol. 26, no. 3, pp. 725–733, 2017.
* [13] K. Huang, Z. Wang, and M. Jusup, “Incorporating latent constraints to enhance inference of network structure,” _IEEE Transactions on Network Science and Engineering_ , vol. 7, no. 1, pp. 466–475, 2018.
* [14] H.-F. Zhang, F. Xu, Z.-K. Bao, and C. Ma, “Reconstructing of networks with binary-state dynamics via generalized statistical inference,” _IEEE Transactions on Circuits and Systems I: Regular Papers_ , vol. 66, no. 4, pp. 1608–1619, 2018.
* [15] M. Wilinski and A. Y. Lokhov, “Scalable learning of independent cascade dynamics from partial observations,” _arXiv preprint arXiv:2007.06557_ , 2020.
* [16] C. Jiang, J. Gao, and M. Magdon-Ismail, “True nonlinear dynamics from incomplete networks,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 34, no. 01, 2020, pp. 131–138.
* [17] T. K. Dey, J. Wang, and Y. Wang, “Road network reconstruction from satellite images with machine learning supported by topological methods,” in _Proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems_ , 2019, pp. 520–523.
* [18] A. Clauset, C. Moore, and M. E. Newman, “Hierarchical structure and the prediction of missing links in networks,” _Nature_ , vol. 453, no. 7191, pp. 98–101, 2008.
* [19] A. Louni and K. Subbalakshmi, “Diffusion of information in social networks,” in _Social Networking_. Springer, 2014, pp. 1–22.
* [20] R. G. Little, “Controlling cascading failure: Understanding the vulnerabilities of interconnected infrastructures,” _Journal of Urban Technology_ , vol. 9, no. 1, pp. 109–123, 2002.
* [21] J. H. Parmelee and S. L. Bichard, _Politics and the Twitter revolution: How tweets influence the relationship between political leaders and the public_. Lexington Books, 2011.
* [22] S. Pajevic and D. Plenz, “Efficient network reconstruction from dynamical cascades identifies small-world topology of neuronal avalanches,” _PLoS Computational Biology_ , vol. 5, no. 1, p. e1000271, 2009.
* [23] A. Grover and J. Leskovec, “node2vec: Scalable feature learning for networks,” in _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , 2016, pp. 855–864.
* [24] W. Hamilton, Z. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” in _Advances in Neural Information Processing Systems_ , 2017, pp. 1024–1034.
* [25] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” _arXiv preprint arXiv:1609.02907_ , 2016.
* [26] E. K. Kao, S. T. Smith, and E. M. Airoldi, “Hybrid mixed-membership blockmodel for inference on realistic network interactions,” _IEEE Transactions on Network Science and Engineering_ , vol. 6, no. 3, pp. 336–350, 2018.
* [27] T. P. Peixoto, “Network reconstruction and community detection from dynamics,” _Physical Review Letters_ , vol. 123, no. 12, p. 128301, 2019.
* [28] J.-G. Young, G. T. Cantwell, and M. Newman, “Bayesian inference of network structure from unreliable data,” _Journal of Complex Networks_ , vol. 8, no. 6, p. cnaa046, 2020.
* [29] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, “The graph neural network model,” _IEEE Transactions on Neural Networks_ , vol. 20, no. 1, pp. 61–80, 2008.
* [30] M. Chen, J. Zhang, Z. Zhang, L. Du, Q. Hu, S. Wang, and J. Zhu, “Inference for network structure and dynamics from time series data via graph neural network,” _arXiv preprint arXiv:2001.06576_ , 2020.
* [31] M. F. Ochs and E. J. Fertig, “Matrix factorization for transcriptional regulatory network inference,” in _2012 IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB)_. IEEE, 2012, pp. 387–396.
* [32] A. M. Madni, “A systems perspective on compressed sensing and its use in reconstructing sparse networks,” _IEEE Systems Journal_ , vol. 8, no. 1, pp. 23–27, 2013.
* [33] Z. Shen, W.-X. Wang, Y. Fan, Z. Di, and Y.-C. Lai, “Reconstructing propagation networks with natural diversity and identifying hidden sources,” _Nature Communications_ , vol. 5, no. 1, pp. 1–10, 2014.
* [34] M. Gomez-Rodriguez, J. Leskovec, and A. Krause, “Inferring networks of diffusion and influence,” _ACM Transactions on Knowledge Discovery from Data (TKDD)_ , vol. 5, no. 4, pp. 1–37, 2012.
* [35] M. Wu, J. Chen, S. He, Y. Sun, S. Havlin, and J. Gao, “Discrimination universally determines reconstruction of multiplex networks,” _arXiv preprint arXiv:2001.09809_ , 2020.
* [36] M. Ouyang, “Review on modeling and simulation of interdependent critical infrastructure systems,” _Reliability Engineering & System safety_, vol. 121, pp. 43–60, 2014.
* [37] O. Yagan, D. Qian, J. Zhang, and D. Cochran, “Optimal allocation of interconnecting links in cyber-physical systems: Interdependence, cascading failures, and robustness,” _IEEE Transactions on Parallel and Distributed Systems_ , vol. 23, no. 9, pp. 1708–1720, 2012.
* [38] Y. Zhang and O. Yağan, “Robustness of interdependent cyber-physical systems against cascading failures,” _IEEE Transactions on Automatic Control_ , 2019.
* [39] J. M. Adam, F. Parisi, J. Sagaseta, and X. Lu, “Research and practice on progressive collapse and robustness of building structures in the 21st century,” _Engineering Structures_ , vol. 173, pp. 122–149, 2018.
* [40] S. H. Cheung and J. L. Beck, “Bayesian model updating using hybrid Monte Carlo simulation with application to structural dynamic models with many uncertain parameters,” _Journal of Engineering Mechanics_ , vol. 135, no. 4, pp. 243–255, 2009.
* [41] J.-Z. Yu, M. Whitman, A. Kermanshah, and H. Baroud, “A hierarchical bayesian approach for assessing infrastructure networks serviceability under uncertainty: A case study of water distribution systems,” _Reliability Engineering & System Safety_, p. 107735, 2021.
* [42] D. Lusher, J. Koskinen, and G. Robins, _Exponential random graph models for social networks: Theory, methods, and applications_. Cambridge University Press, 2013.
* [43] M. A. Serrano and M. Boguná, “Tuning clustering in random networks with arbitrary degree distributions,” _Physical Review E_ , vol. 72, no. 3, p. 036133, 2005.
* [44] S. Marasco, A. Cardoni, A. Z. Noori, O. Kammouh, M. Domaneschi, and G. P. Cimellarof, “Integrated platform to assess seismic resilience at the community level,” _Sustainable Cities and Society_ , p. 102506, 2020.
* [45] Italian National Institute of Statistics, “Resident population,” http://demo.istat.it/, 2019.
# GPT-Neo for commonsense reasoning - a theoretical and practical lens
Rohan Kashyap, Vivek Kashyap, and Narendra C.P (narendracp@bit-bangalore.edu.in), Bangalore Institute of Technology, Bangalore, India
(2023)
###### Abstract.
Recent work has demonstrated substantial gains in pre-training large-language
models (LLMs) followed by supervised fine-tuning on the downstream task. In
this paper, we evaluate the performance of the GPT-neo model on $6$
commonsense reasoning benchmark tasks. We examine the performance of the
smaller GPT-neo models against several larger baselines such as GPT-$3$,
Llama-$2$, MPT and Falcon. Upon fine-tuning with the
appropriate set of hyperparameters, our model achieves competitive accuracy on
several tasks. We also investigate and substantiate our results using
attention-head visualization to better understand the model performance.
Finally, we conduct a range of robustness tests to gauge the model performance
under numerous settings.
Transformers
## 1\. Introduction
Commonsense reasoning plays a central role in neural language understanding.
Recent years have seen the rise of large language models (LLMs), such as
GPT-$3$ (b10, ), T5 (b30, ), Falcon (b47, ) and Llama (b25, ; b26, ).
Autoregressive language models pretrained on large corpora of self-supervised
data and aligned using Reinforcement Learning from Human Feedback (RLHF)
achieve alignment with human preferences on numerous tasks. Advances in
natural language processing tasks such as reading comprehension, question
answering and inductive reasoning demonstrate the effectiveness of these huge
pre-trained models on numerous downstream tasks, and methods such as
instruction fine-tuning (b16, ) and chain-of-thought prompting (b15, ) have
shown excellent few-shot learning capabilities.
In particular, (b15, ) demonstrates the ability of LLMs to perform multi-step
reasoning on complex arithmetic and symbolic reasoning tasks as an emergent
property: a series of intermediate reasoning steps leads the model to the
final output, a method known as chain-of-thought prompting. The parameter
count of these models typically ranges from a few million to billions, and
their performance follows strict scaling laws (b14, ), with larger models
being more sample efficient and only mildly prone to overfitting.
The transformer architecture has achieved immense success in NLP, generative
modeling (b2, ; b10, ) and reinforcement learning tasks. The self-attention
mechanism is the central idea of the transformer; it helps capture long-term
dependencies, and can thus learn coherent patterns and the compositional
relationships between them. It is given as:
$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}(QK^{T}/\sqrt{d_{k}})V$
where $Q,K,V$ represents the input query, key and value matrix and $d_{k}$
denotes the key dimension respectively. Thus, they produce cohesive and
plausible text when adequately tuned to the right temperature ($0\leq T\leq
1$) during model inference and trained for several epochs ($>20$). This
challenges the LMs ability to extrapolate and draw useful information from the
training data to gain knowledge about the physical world to generate the
correct answers. We believe that the statistical patterns implicitly present
in the pre-trained text corpus is known to provide the model with a few innate
priors and experience and is a possible explanation for its performance on
few-shot learning methods such as the GPT-$3$ model (b10, ; b13, ).
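As an illustration, the scaled dot-product attention defined above can be sketched in a few lines of NumPy. This is a toy sketch with random inputs, not the model's actual implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 query tokens, d_k = 4
K = rng.normal(size=(5, 4))   # 5 key tokens
V = rng.normal(size=(5, 4))   # 5 value vectors
out = scaled_dot_product_attention(Q, K, V)
```

Each output row is a convex combination of the value vectors, weighted by query-key similarity.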
With regard to commonsense and arithmetic reasoning tasks (b24, ; b28, ; b29, ),
large language models (LLMs) such as GPT-3, Falcon and Llama achieve excellent
performance, but fail to generalize well as model size decreases. This leads to
misrepresentation of knowledge and factual data in LMs.
### 1.1. Contribution
In this regard, we evaluate the GPT-neo model in a relatively low-parameter
regime and gauge its performance against other baselines with larger model
sizes. Our main contributions in this paper are as follows:
* •
We investigate the generalization capabilities of the GPT-neo model through
the lens of commonsense reasoning. In particular, we employ a supervised
learning objective and test the model on a suite of $6$ tasks, namely Piqa,
Winogrande, Hellaswag, Storycloze, BoolQ and OpenBookQA.
* •
We conduct adequate comparisons with competitive baselines such as GPT-$3$,
Llama and Falcon for all our tasks in both zero-shot setting and fine-tuning
methods.
* •
We also conduct extensive robustness tests and visualisations to assess the
model's ability for in-context learning and to examine overfitting concerns,
in Sections $9$ and $10$ respectively.
Our main goal is not to demonstrate state-of-the-art results but to show that
the GPT-neo model is competitive with counterparts such as the GPT-$3$ model,
even though it is smaller by a factor of $64$.
## 2\. Can LLMs Reason?
As discussed in (b53, ; b54, ), reasoning encompasses the ability of LMs to
perform deduction, induction, abduction, analogy, commonsense inference and
systematic multi-step problem solving. This equips the model with abstraction,
allowing it to generalize to unseen test examples.
While LLMs are not trained explicitly to reason, reasoning is more often an
emergent behaviour in which the model mimics true reasoning through less
robust and less generalizable mechanisms, such as memorization or pattern
matching against contexts seen in the training data.
For example, (b55, ) examines the in-context performance of GPT-$3$ on
arithmetic tasks, assessing model performance as a function of the frequency
of the test samples in the training set. They observe that the model performs
significantly worse on numbers not present in the training data and thus lacks
a general ability to perform arithmetic. In addition, they demonstrate that
superior performance on these tasks relies largely on memorization, mimicking
reasoning-like abilities by matching patterns present in the pre-training
data.
Similarly, to test the memorization hypothesis, (b56, ) construct
"counterfactual tasks" to assess whether LMs perform faithful reasoning on
commonsense and arithmetic tasks. Their results reveal degraded performance on
several counterfactual tasks, and they attribute this gap to overfitting and
an absence of generalization, even though the counterfactual variants require
the same reasoning ability as the original tasks.
## 3\. MODEL GENERALIZATION
(b3, ) defines generalization as sensitivity to abstract analogies, which
requires a series of intermediate reasoning steps for complex downstream
tasks. It involves adapting to novel situations by converting past experience
into future skills. In this regard, neural networks have long been known to
perform well as function approximators on smooth (continuous) data manifolds
without discontinuities when trained with gradient descent methods. This paper
is an attempt in this direction, exploring at least a few of these intuitive
questions using the commonsense reasoning benchmarks as the standard of
measure in all our experiments.
It is believed that the success of LLMs is mainly due to two mechanisms:
in-context learning, where predicting later tokens is easier because the
autoregressive model can utilize data present in context, i.e., the earlier
tokens of the input ($x_{i-1}$) itself (b35, ); and induction heads, which
mainly perform prefix matching (similar to skip-grams) and thus increase the
logits of attended tokens (b36, ). This is known to drive meta-learning and
the model's ability to extract useful information from earlier context,
allowing information to flow forward to attention heads in subsequent layers.
In recent work, (b48, ) investigate the trajectory of the training process in
masked language models (MLMs) to understand phase transitions and the
specialised attention heads behind emergent behaviour at different points in
training. In particular, they introduce Syntactic Attention Structure to
examine the learning of complex linguistic phenomena in MLMs, which aids model
interpretability.
Likewise, (b49, ) introduce a distributional simplicity bias that identifies
the features of the training data that influence the network. They argue that
neural networks learn higher-order statistics and correlations only later in
the training process, which deviates from the general conception that models
progress from linear to non-linear functions during stochastic gradient
descent training.
(b32, ) study the in-context learning of LLMs using selection algorithms on
various generation tasks, handling multiple input sequences without explicit
prompting of the right task. This also allows for meta-learning, adapting the
parameter initialization within a few epochs of fine-tuning on downstream
tasks.
It is observed that as model size increases, the network learns concepts that
it passes on to downstream tasks and thus generalises, through transfer
learning, to instances beyond those seen during training, as discussed in
(b10, ; b14, ; b22, ). However, (b19, ) postulate the model-wise double
descent phenomenon: as model size increases, test performance first decreases,
in a U-like curve, and then increases. The former behaviour is expected from
the classical bias-variance tradeoff of statistical learning theory, given as:
$MSE=bias^{2}+variance$
where $MSE$ is the mean-squared error; larger models exhibit lower bias but
higher variance. Therefore, as model size increases beyond a certain
threshold, called the over-parameterization regime, we expect the
generalization error on the test dataset to increase, whereas (b19, ) argue
that neural networks exhibit the opposite: the generalization error decreases
as model complexity grows.
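The decomposition quoted above can be checked directly by simulation. The following toy sketch uses a hypothetical shrinkage estimator (not from the paper) and verifies that the empirical MSE equals $bias^{2}+variance$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 2.0
n, trials = 10, 20000

# Hypothetical shrinkage estimator: 0.8 * sample mean.
# Shrinking introduces bias but reduces variance.
samples = rng.normal(true_mean, 1.0, size=(trials, n))
estimates = 0.8 * samples.mean(axis=1)

mse = np.mean((estimates - true_mean) ** 2)   # empirical mean-squared error
bias = estimates.mean() - true_mean           # empirical bias (negative here)
variance = estimates.var()                    # empirical variance
# Algebraic identity: MSE = bias^2 + variance (holds exactly for
# empirical moments, up to floating-point error).
```

The identity is exact because expanding `mean((e - t)^2)` around `mean(e)` yields precisely `(mean(e) - t)^2 + var(e)`.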
(b23, ) examined model robustness with increasing capacity under
projected-gradient-descent adversarial examples. They observed that beyond a
certain network size there is a sharp transition in model behaviour, with a
steady increase in training accuracy under strong adversaries.
Likewise, (b20, ) observe that in the over-parameterization regime, i.e., when
model complexity is large enough to drive the training error to zero, the
average test error decreases while the worst-group error, i.e., the error on
minority samples, worsens. The best worst-group error was observed for models
in the under-parameterized regime with non-zero training error. This is
attributed to memorization of the minority samples in the training set, which
leads to poor generalization on unseen examples.
One way to test GPT-neo’s abilities in the fine-tuning phase under limited
dataset constraints is through simple commonsense reasoning tasks, which
require recognizing novel patterns that are unlikely to have occurred during
pre-training and adapting quickly to the given task.
We demonstrate the effectiveness of our approach on a diverse array of tasks
for natural language understanding. The results show that the model leverages
the feature representations learned during unsupervised pre-training for
larger performance gains during downstream tasks.
Figure 1. Input format for GPT-neo architecture for multiple-choice task.
Start and delim indicates the start and delimiter token respectively.
Winogrande | The trophy doesn’t fit into the brown suitcase because the __ is too large.
---|---
| Choice 1. Trophy Choice 2. Suitcase
Piqa | To make a tomato sauce taste more like a pizza sauce,
| Choice 1. Mix extra paprika into the sauce to brighten the flavor.
| Choice 2. Mix a little bit of sugar into the sauce to sweeten it.
HellaSwag | Making a cake: Several cake pops are shown on a display. A woman and girl are shown making
| Choice 1. Bake them, then frost and decorate.
| Choice 2. Taste them as they place them on plates.
| Choice 3. Put the frosting on the cake as they pan it.
| Choice 4. Come out and begin decorating the cake as well.
StoryCloze | I was walking to my house after riding the school bus. When I arrived at the front, I tried to open
| the door. The door was locked. When I tried to look for my keys, I couldn’t find them anywhere.
| Choice 1. I then found my spare key and entered.
| Choice 2. I was locked out and had to move to a new home.
BoolQ | In Australia, each state has its own constitution. Each state constitution preceded the
| Constitution of Australia as constitutions of the then separate British colonies, but all the states
| ceded powers to the Parliament of Australia as part of federation in 1901.
| Question: Does each Australian state have its own constitution?
| Answer: True
OpenBookQA | Context: A stem is used to store water by some plants.
| Question: Stems are to flowers as
| Choice 1. Dogs are to cats.
| Choice 2. Cows are to cud.
| Choice 3. Bees are to pollen.
| Choice 4. Silos are to grains.
Table 1. An example is specified for each of the tasks used for evaluating the
GPT-neo model with the correct answer choice indicated using the bold mark.
### 3.1. Related Work
Recent work on evaluating LLMs for commonsense natural language inference has
garnered significant attention, particularly for larger models under zero-shot
and few-shot settings. (b40, ) conduct a comprehensive examination of four
commonsense benchmark tasks. Similarly, (b43, ) study LLMs under zero-shot
settings and strong supervision, assessing model performance under various
design choices, such as input format and scoring function, to better
understand the scaling laws of large models on NLI tasks.
(b44, ) examine the transfer learning proficiency of language models on
downstream tasks using intermediate pre-training and adapter-based methods.
Similarly, (b45, ) introduce a weak supervision method that incorporates
prompts into answer choices for effective knowledge transfer in a
question-answering format, i.e., using noisy predictions to produce final
predictions on commonsense benchmark tasks.
In contrast, our work focuses on assessing the performance of smaller language
models against larger ones, and we also conduct numerous robustness tests and
visualisations to investigate the discrepancy between these models as size
increases.
Our choice of the GPT-neo model for evaluation is motivated by its use of
local and linear self-attention (b37, ), Mixture of Experts (MoE) (b38, ) and
axial positional embeddings (b39, ) in the model architecture, which aid
in-context learning and reduce computational overhead while training on TPUs.
## 4\. Dataset
Commonsense tasks provide a way to assess the model’s capability to interpret
complex structural patterns that are not explicitly mentioned in the input
text and are not part of the pre-training data. The commonsense reasoning
ability of large pre-trained models depends heavily on dataset quality and
demands high levels of abstract reasoning (b34, ; b35, ). These tasks include
reading comprehension, question answering, sentence completion and
classification. For multiple-choice tasks, we concatenate the context,
question, and answer choice for each example to obtain $k$ such sentences (one
per choice) and pass them as input to the model, as shown in Figure 1. We
compute each choice’s probability via softmax scores over their logit values.
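The scoring step described above can be sketched as follows. The per-token log-probabilities here are hypothetical stand-ins for the values a fine-tuned GPT-neo head would produce for each concatenated context-question-choice sequence:

```python
import numpy as np

def choice_scores(per_token_logprobs):
    """Sum each choice's per-token log-probs into a sequence
    log-likelihood, then softmax across the k choices."""
    totals = np.array([sum(lp) for lp in per_token_logprobs])
    z = totals - totals.max()            # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()  # normalized choice probabilities
    return probs

# Hypothetical per-token log-probs for k = 2 answer choices.
choice_a = [-0.2, -0.5, -0.1]   # e.g. "... the trophy is too large"
choice_b = [-1.1, -0.9, -1.4]   # e.g. "... the suitcase is too large"
probs = choice_scores([choice_a, choice_b])
pred = int(np.argmax(probs))    # index of the predicted answer choice
```

The choice whose concatenated sequence has the highest log-likelihood under the model is selected.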
For our experiments, we consider six diverse standard commonsense reasoning
benchmark datasets (see Table 1) and examine how well the model adapts to each
of these tasks.
For example, consider the sentence "The corner table in a restaurant ordered a
beer. The waiter served them the drink". Humans can quickly establish that it
is the people at the corner table who ordered the beer, and that "them" refers
to the people and not the corner table, because humans have a hard-coded
notion of what people are. Language models, on the other hand, cannot readily
capture such implicit information or knowledge and hence fall significantly
short on these tasks. Although relatively trivial for humans, these tasks are
extremely hard for machines that merely rely on statistical patterns without
proper understanding and abstraction capabilities. This paper shows that
popular approaches to large-scale language pre-training, while highly
successful on many abstract tasks, fall short when a physical world model is
required.
### 4.1. Winogrande
Winogrande (b4, ) is inspired by the Winograd Schema Challenge, with increased
task complexity. It consists of pairs of near-identical questions with two
answer choices, each containing a trigger word that flips the correct answer.
The task is to find the right entity for the pronoun. The training set
consists of $44$k examples. For example: "The trophy doesn’t fit into the
brown suitcase because the $trophy^{*}/suitcase$ is too large." Filling the
blank here requires inference about the relative sizes of the trophy and the
suitcase.
### 4.2. Piqa
The PIQA (b6, ) (Physical Interaction Question Answering) dataset evaluates
language representations and their knowledge of the physical world. It
describes how an object is built, used, or operated, which requires physical
reasoning to select the right choice. It consists of over $16k$ training
question-answer pairs, with an additional $2k$ validation and $3k$ test
examples. As shown in Table 1, predicting the right choice, "mix extra paprika
into the sauce to brighten the flavor", requires in-context learning and
establishing coherent relations between the tomato sauce, the paprika and the
pizza sauce.
### 4.3. StoryCloze
The Story Cloze (b7, ) dataset involves selecting a plausible ending to a long
story, framed as a multiple-choice question with $2$ answer choices. It
consists of $4k$ examples based on everyday events. It evaluates the extent to
which the model learns causal and temporal relations between different
entities within the given context.
### 4.4. HellaSwag
HellaSwag (b8, ) is a commonsense natural language inference task with a data
format similar to the Swag dataset but of higher quality. It involves picking
the best ending for a story: for each input, the model is presented with
context from a caption and four choices, and must predict what might happen
next in a coherent way. It contains a total of $50k$ sentences with an average
length of $230$ word tokens.
### 4.5. BoolQ
BoolQ is a question-answering dataset with yes/no answer choices containing
$16k$ examples. Each example consists of a reading-comprehension passage, a
question and an answer, with the title as additional context. The task entails
drawing various inferences from the passage to find the correct answer choice,
since the questions are generated in an unprompted and unconstrained setting.
In Table 1, inferring the answer choice requires establishing adequate causal
relations between the Constitution of Australia and the federation act of
1901.
### 4.6. OpenBookQA
OpenBookQA is an advanced question-answering task used for commonsense NLI,
consisting of $6k$ examples. It is modeled as an open-book exam where each
example consists of a set of elementary science facts (the context), a
question and $4$ answer choices. As shown in Table 1, predicting the right
choice requires a deeper understanding of the input text and general
commonsense knowledge to logically connect the facts presented as context and
establish the analogy "silos are to grains".
## 5\. Model
The GPT-neo model was released by EleutherAI. It is a transformer-based,
decoder-only autoregressive language model that uses a causal self-attention
mechanism to learn contextual representations of individual word tokens.
GPT-neo was pre-trained using the standard language modeling objective of
next-token prediction with causal self-attention. The objective is to maximize
the log-likelihood of the data, given as:
$L(\theta)=\sum_{i}\log p(x_{i}|x_{i-1},\ldots,x_{2},x_{1};\theta)$
where $p$ is the conditional probability of $x_{i}$ given its context
$\left\\{x_{j}\right\\}_{j=1}^{i-1}$, modeled using the GPT-neo model with
parameters $\theta$. The model is trained using gradient descent methods.
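The next-token objective above can be sketched as follows, with random logits standing in for real model outputs (illustrative only):

```python
import numpy as np

def sequence_log_likelihood(logits, tokens):
    """L(theta) = sum_i log p(x_i | x_1, ..., x_{i-1}).
    logits[i] are the model's next-token logits after seeing tokens[:i]."""
    total = 0.0
    for i, tok in enumerate(tokens):
        z = logits[i] - logits[i].max()          # stable log-softmax
        log_probs = z - np.log(np.exp(z).sum())
        total += log_probs[tok]                  # log-prob of the observed token
    return total

rng = np.random.default_rng(0)
vocab, seq_len = 8, 5
logits = rng.normal(size=(seq_len, vocab))   # stand-in for model outputs
tokens = [3, 1, 4, 1, 5]
ll = sequence_log_likelihood(logits, tokens)
```

Training maximizes this quantity (equivalently, minimizes the average negative log-likelihood) over the corpus; with uniform logits it reduces to $-\log(\text{vocab})$ per token.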
The GPT-neo model is similar to the GPT-$3$ model, with some key differences.
It was pre-trained on the Pile dataset (b1, ), which consists of a diverse
collection of $22$ high-quality datasets derived from numerous sources,
including Books$3$, Pile-CC and DM-Mathematics. As model size increases, it
shows a performance boost similar to that of the GPT-$3$ models pre-trained on
the Common Crawl dataset, and significant gains when fine-tuned on downstream
tasks.
Figure 2. GPT-neo architecture for multiple-choice task.
We adapt the model parameters to the supervised learning task. Let
$\mathcal{C}$ denote the training set, where each example consists of a
sequence of input tokens $x^{1},\ldots,x^{m}$ together with a label $y$. The
objective is to maximize the conditional probability $P(y|x)$ over the entire
training set, given as:
$l(\mathcal{C})=\sum_{(x,y)}\log P\left(y\mid x^{1},\ldots,x^{m}\right)$
First, the input tokens are passed through the model to obtain the final
transformer block’s activation $h_{n}$, which is then fed to a linear
classifier to obtain the model predictions $P$, given as:
$\displaystyle h_{0}$ $\displaystyle=AW_{e}+W_{ps}$ $\displaystyle h_{l}$
$\displaystyle=\operatorname{transformer}\\_\text{block
}\left(h_{l-1}\right)\forall l\in[1,n]$ $\displaystyle P$
$\displaystyle=\operatorname{softmax}\left(h_{n}W_{e}^{T}\right)$
where $A$ is the context vector for tokens, $W_{e}$ is the embedding matrix
and $W_{ps}$ is the positional-encoding matrix. We are primarily interested in
evaluating: (1) the accuracy and (2) the cross-entropy loss on the test
dataset given as:
$\mathcal{L}=-\sum_{i=1}^{d}y_{i}\log\hat{y_{i}}$
where $y_{i}$ and $\hat{y_{i}}$ correspond to the true and predicted class
labels respectively. We also examine the model’s robustness under
sentence-level adversarial attacks and visualize the attention maps, in
Sections 9 and 10 respectively.
## 6\. Baselines
We compare the performance of the GPT-neo model with several baselines. The
GPT-3 model is our primary point of comparison since it offers similar model
sizes ($125$M, $350$M, $1.3$B & $2.7$B). We evaluate the GPT-$3$ model under
zero-shot and few-shot settings on all our tasks, using the standard
hyperparameters discussed in (b10, ).
We also assess performance against a family of pre-trained LLMs, namely MPT
($7$B), Falcon ($7$B), Llama ($7$B & $13$B) and GPT-J ($6$B), in both the
zero-shot setting and with discriminative fine-tuning for adequate model
comparison. We also evaluate bidirectional language models, BERT-base ($110$M)
and BERT-large ($340$M), and report their accuracy scores for all our tasks,
as shown in Tables 2 $\&$ 5.
Method M | MPT | Falcon | Llama-1 | Llama-2 | Fairseq | GPT-3 Zero Shot | GPT-3 Few Shot | GPT-J | BERT | BERT-L | GPT-Neo
---|---|---|---|---|---|---|---|---|---|---|---
| 7B | 7B | 7B | 13B | 7B | 13B | 125M | 1.3B | 2.7B | 13B | 125M | 350M | 1.3B | 2.7B | 13B | 125M | 350M | 1.3B | 2.7B | 13B | 6B | 110M | 340M | 125M | 350M | 1.3B | 2.7B
Piqa | 79.3 | 75.4 | 78.1 | 80.0 | 78.0 | 80.3 | 66.1 | 72.3 | 75.0 | 75.6 | 63.5 | 69.1 | 74.1 | 74.6 | 78.4 | 62.3 | 68.1 | 73.2 | 74.9 | 78.6 | 76.1 | 66.1 | 65.9 | 62.0 | 64.0 | 69.1 | 71.1
BoolQ | 74.9 | 66.5 | 74.5 | 77.1 | 76.9 | 80.6 | 55.9 | 57.6 | 60.2 | 64.1 | 48.6 | 59.8 | 61.4 | 66.5 | 65.7 | 42.9 | 59.4 | 63.7 | 69.2 | 69.1 | 71.1 | 76.2 | 61.9 | 42.6 | 46.7 | 53.7 | 59.3
Hellaswag | 75.9 | 73.7 | 75.4 | 71.2 | 76.9 | 79.6 | 29.1 | 43.7 | 48.4 | 54.4 | 32.7 | 41.7 | 54.1 | 62.1 | 69.0 | 32.4 | 42.7 | 53.4 | 61.3 | 70.5 | 65.7 | 39.4 | 45.1 | 28.1 | 30.1 | 38.6 | 41.6
Winogrande | 67.9 | 65.8 | 69.1 | 72.0 | 68.2 | 71.7 | 50.0 | 60.2 | 61.5 | 66.5 | 51.9 | 51.1 | 56.5 | 61.8 | 68.0 | 50.2 | 51.5 | 58.1 | 61.5 | 67 | 63.3 | 60.2 | 62.9 | 50.0 | 51.1 | 54.0 | 55.5
Storycloze | 82.9 | 80.3 | 81.1 | - | 82.6 | - | 65.0 | 70.9 | 75.8 | 76.2 | 62.3 | 66.5 | 72.4 | 74.2 | 74.1 | 61.5 | 68.2 | 75.1 | 79.2 | 82.3 | 55.3 | 65.4 | 80.5 | 45.6 | 51.0 | 54.7 | 59.2
OpenBookQA | 50.4 | 51.6 | 56.1 | 55.3 | 57.4 | 56.0 | 34.0 | 48.9 | 48.5 | 54.1 | 31.6 | 40.2 | 45.8 | 51.0 | 54.9 | 36.0 | 42.6 | 49.5 | 54.2 | 59.8 | 50.2 | 52.1 | 61.4 | 35.5 | 39.2 | 42.2 | 46.3
Table 2. Test accuracy for commonsense reasoning tasks using the GPT-neo model
and model baselines. The best performer across all methods is denoted using
the bold mark (excluding model size $\geq 6$B parameters). For ease of
comparison, we color the second best performer with blue color.
## 7\. Experimental Setting
In this section, we give a general overview of the training procedure used for
fine-tuning both GPT-neo and the baseline models. We perform all model
fine-tuning on Google Cloud TPU VMs with TPU-V8 accelerators. We use the JAX
framework for model training and evaluation, obtaining significant training
speed-ups using vmap and pmap for vectorisation and parallelisation across
multiple devices. We train the models for $7$ epochs and use a fixed random
seed throughout our experiments. We also add a dropout of $0.2$ for
regularization, and use the byte pair encoding (BPE) tokenization scheme with
a vocabulary size of $50257$ tokens.
For all our tasks, we use the Adafactor optimizer with a learning rate from
$2e-5$ to $3e-7$ and do not use gradient clipping. We use a linear learning
rate decay schedule with warm-up over $0.1\%$ of training, annealed to a final
value of $3e-7$. We deviate slightly from the standard GPT-$3$ approach in
that we pad input sequences to the maximum input-token length. This does not
affect model performance and also reduces the memory footprint on the TPU
accelerators.
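A minimal sketch of such a warm-up-then-linear-decay schedule, using the peak value of $2e-5$ and floor of $3e-7$ stated above. The exact schedule shape used in our setup is simplified here and should be read as an assumption, not a verbatim reproduction:

```python
def learning_rate(step, total_steps, peak=2e-5, floor=3e-7, warmup_frac=0.001):
    """Linear warm-up over warmup_frac (0.1%) of training, then linear
    decay from `peak` down to `floor` by the final step."""
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        # Ramp linearly from 0 up to the peak learning rate.
        return peak * step / warmup_steps
    # Decay linearly from peak to floor over the remaining steps.
    frac = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak + (floor - peak) * frac
```

For example, with `total_steps=10000` the rate reaches its peak at step 10 and anneals to $3e-7$ at the final step.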
We use a batch size ($k$) of $32$, i.e., $4$ examples per device. Through
experiments on batch sizes $k=[8,16,32]$, we conclude that $k=32$
significantly outperforms the others on each of our datasets.
Parameter | Value
---|---
$n_{heads}$ | $16$
$n_{layers}$ | $24$
$d_{model}$ | $768$
$n_{linear}$ | $2048$
$d_{vocab}$ | $50257$
Table 3. Training Parameters used for GPT-neo model training.
In this section, we assess GPT-neo’s performance on a suite of commonsense
reasoning benchmark tasks involving sentence completion and multiple-choice
natural language inference, using the setting discussed in Section 7. We also
report baseline scores (Section 6) to ensure comparisons across numerous model
sizes.
Figure 3 compares the performance of the GPT-neo model against random
baselines on all our tasks. The random baseline is the GPT-neo model without
discriminative fine-tuning, where the probability of choosing the correct
answer choice is $p(y|x)=1/k$, with $k$ the number of choices.
## 8\. Results
Figure 3. Performance comparison of the GPT-neo model and random baselines for
each of our tasks.
In the following section, we present our results for all the tasks discussed
in Section 4. Note that we exclude the results for the $13$B-parameter models
from the following discussion, as our largest model has $2.7$B parameters;
their results are presented in Table 2 for reference.
### 8.1. Winogrande
We evaluate on the Winogrande test set and obtain an accuracy of $55.5\%$,
which is competitive with the GPT-$3$ model. We also observe a $5.5\%$
increase in accuracy as model size grows from $125$M to $2.7$B parameters.
However, the GPT-$3$ few-shot and BERT-large models achieve the top-$2$
accuracies of $61.8\%$ and $62.9\%$ respectively (excluding models $\geq
6$B), which we attribute to their larger model sizes.
### 8.2. Piqa
Here, we achieve an accuracy of $71.1\%$ upon fine-tuning, only about $4\%$
lower than the GPT-3 few-shot and Fairseq models. The model remains
competitive with both Fairseq and GPT-3 as model size increases, and trails
the Falcon and MPT models by $4\%$ and $8\%$ respectively.
### 8.3. HellaSwag
We obtain an accuracy of $41.6\%$ on the Hellaswag dataset, our worst
performance among all the tasks. The GPT-$3$ model significantly outperforms
ours, by $19\%$, in both the zero-shot and few-shot settings. At the $13$B
model size, Llama-$1$ and Llama-$2$ obtain excellent results, with a best
accuracy of $79.6\%$, which is $9\%$ higher than the GPT-$3$ few-shot result.
### 8.4. StoryCloze
On the StoryCloze dataset, our model achieves an accuracy of $59.2\%$. In
contrast to HellaSwag, the GPT-$3$ model outperforms GPT-neo by a margin of
$15$-$18\%$, even for the $125$M and $350$M versions. Surprisingly, the
Fairseq model outperforms the GPT-$3$ few-shot model at the $125$M and $350$M
sizes. However, GPT-$3$ excels as its size scales beyond $1$B parameters,
achieving the highest accuracy of $82.3\%$ in the few-shot setting.
### 8.5. BoolQ
On this task, GPT-neo achieves an accuracy of $59.3\%$, while Llama-$2$
attains a significantly higher $80.6\%$. While GPT-neo remains competitive
with the $125$M-parameter GPT-$3$ few-shot model, with only a marginal
difference of $0.3\%$, it becomes evident that as model size increases GPT-$3$
adapts rapidly to the task, showing a substantial $17.5\%$ improvement from
the $125$M to the $350$M model, whereas GPT-neo shows a comparatively modest
increase of $4.5\%$ over the same transition.
### 8.6. OpenBookQA
We obtain an accuracy of $46.3\%$ with our largest model, comparable with the
Fairseq model, though the GPT-$3$ few-shot model remains $8\%$ better. We also
observe that BERT-large, with just $340$M parameters, obtains the highest
accuracy of $61.4\%$ among all the models. Larger models such as MPT and
Falcon obtain accuracies of $50.4\%$ and $51.6\%$, only $4\%$ higher than
GPT-neo. The attention-head visualization for the example in Table 1 is
discussed in Section 10.
We also report mean accuracy scores in Table 5 for models with more than
$20$B parameters. Since these models are computationally expensive and incur
memory overhead, we do not perform supervised fine-tuning for models $\geq
20$B parameters, but report their scores from the literature (b10, ; b26, ;
b27, ; b42, ; b45, ). We also include our original GPT-neo results from Table
2 for comparison with larger models. As expected, performance exhibits a
significant and consistent improvement as model size increases across all
benchmark tasks.
## 9\. Robustness Test
Figure 4. Reframed sentences-we construct additional samples by modifying the
original sentence or its answer choice, which is used to test the model during
inference time.
(b22, ) examine a universal law of robustness, which states that to truly
memorize the dataset (in the sense of low training error, i.e.,
$L_{train}\ll\epsilon$, where $\epsilon\rightarrow 0$) and to do so robustly
(measured using the Lipschitz 111A function $f:R^{d}\rightarrow R$ is
$L$-Lipschitz with respect to the norm $\|.\|$ if $\forall x,y\in
R^{d},|f(x)-f(y)|/\|x-y\|\leq L$. constant $L$) requires that:
$p\geq nd$
where $p$ is the number of model parameters, $n$ is the number of
high-dimensional data points and $d$ is the ambient input dimension. This is
called dramatic overparametrization.
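The Lipschitz constant in this statement can be lower-bounded empirically from sample pairs. The sketch below (illustrative only, not part of our pipeline) applies the footnote's definition to a function whose true constant is known:

```python
import numpy as np

def lipschitz_lower_bound(f, xs):
    """Empirical lower bound on the Lipschitz constant of f:
    the max over sample pairs of |f(x) - f(y)| / ||x - y||."""
    best = 0.0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            denom = np.linalg.norm(xs[i] - xs[j])
            if denom > 0:
                best = max(best, abs(f(xs[i]) - f(xs[j])) / denom)
    return best

rng = np.random.default_rng(0)
xs = rng.normal(size=(50, 3))
# f(x) = 2 * ||x|| is exactly 2-Lipschitz (reverse triangle inequality),
# so the empirical bound can never exceed 2.
f = lambda x: 2.0 * np.linalg.norm(x)
L_hat = lipschitz_lower_bound(f, xs)
```

A network's true Lipschitz constant is generally intractable to compute, which is why such pairwise estimates only ever give lower bounds.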
We are mainly interested in evaluating whether our results are robust to the
possibility that one of our assumptions is inaccurate. We perform robustness
tests on a few examples to examine GPT-neo’s ability to learn the right
semantics at token level.
We follow the conventional dual-text-instance method discussed in (b12, ;
b42, ), constructing a distorted sentence from the original test sample and
inferring the robustness measure from the pair. Note that we perform these
robustness tests in-context, i.e., with predictions at the sentence level.
Each task involves adding noise to the original sentence by adding, deleting
or replacing appropriate word tokens, and then predicting the similarity
scores between the two sentences with respect to the input context.
Method M | Piqa | BoolQ | HellaSwag | Storycloze | Winogrande
---|---|---|---|---|---
Addition | 90 | 81 | 81 | 94 | 74
Subtraction | 80 | 62 | 63 | 85 | 62
Replace | 92 | 71 | 84 | 93 | 75
Swap | 96 | 67 | 66 | 91 | 67
Table 4. Estimation accuracy (in %) for robustness tests across all the tasks.
Dataset M | MPT | Falcon | Llama-1 | Llama-2 | GPT-3 | GPT-Neo X | GPT-Neo | Human
---|---|---|---|---|---|---|---|---
| 30B | 40B | 33B | 65B | 70B | 175B | 20B | 2.7B |
Piqa | 81.9 | 82.4 | 82.3 | 82.8 | 82.8 | 82.3 | 78 | 72.14 | 94.09
BoolQ | 79.0 | 83.1 | 83.1 | 85.3 | 85.0 | 77.5 | - | 57.3 | 90.0
Hellaswag | 79.9 | 83.6 | 82.8 | 84.2 | 85.3 | 79.3 | 71.2 | 42.73 | 95.6
Winogrande | 71 | 76.9 | 76 | 77 | 80.2 | 77.7 | 66.5 | 56.5 | 94.1
OpenBookQA | 52.0 | 56.6 | 58.6 | 60.2 | 60.2 | 65.4 | 32.6 | 56.5 | 92.0
Storycloze | - | - | - | - | - | 87.7 | - | - | 91.5
Table 5. Performance comparison of Large language models ($\geq 20$B) from the
literature. We follow similar convention as that in Table 2.
For each robustness task, we create $100$ test samples by randomly sampling
from the test dataset and forming pairs of coherent test sentences, test for
consistent results (defined as the $\%$ of test samples with accurate
predictions on both the original and distorted sentences) across all
instances, and report the mean accuracy scores in Table 4.
The goal of this task is to ascertain that the model predicts the answer
choice corresponding to the original sentence whenever the distorted sentence
maintains logical coherence, while allowing a random selection otherwise, as
in the subtraction task. We test the model’s capabilities on four tasks, given
as follows:
1. (1)
Addition: words are added to the sentence such that it remains semantically
equivalent, and the model is required to output the same answer choice for
both sentences.
2. (2)
Subtraction: the polarity of the sentence is reversed to negate its semantic
meaning; the sentences contradict each other while the answer choice for both
the original and new sentence remains the same.
3. (3)
Swap: The role of each noun swapped with one another and is expected to pick
the right choices by contextual inference by identifying the roles of each
noun respectively.
4. (4)
Replace: We replace its word tokens so that they are logically coherent and
should output the right choice depending on the context presented in the
sentence.
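The consistency metric described above can be sketched in a few lines. Here `predict` is a hypothetical stand-in for the model's answer-choice function (not the paper's actual evaluation harness), and the toy pairs mimic the addition and subtraction tasks:

```python
from typing import Callable, List, Tuple

def consistency_score(
    pairs: List[Tuple[str, str]],      # (original, distorted) sentence pairs
    labels: List[int],                 # gold answer-choice index per pair
    predict: Callable[[str], int],     # model's answer-choice prediction
) -> float:
    """Percentage of pairs where BOTH the original and the distorted
    sentence receive the correct prediction ('consistent results')."""
    consistent = sum(
        predict(orig) == y and predict(dist) == y
        for (orig, dist), y in zip(pairs, labels)
    )
    return 100.0 * consistent / len(pairs)

# Toy stand-in model: predicts choice 1 iff the sentence contains "not".
toy_predict = lambda s: int("not" in s)

pairs = [("it is good", "it is truly good"),   # addition-style pair
         ("it is good", "it is not good")]     # subtraction-style pair
labels = [0, 0]
print(consistency_score(pairs, labels, toy_predict))  # 50.0
```

The first pair is scored consistent (both predictions correct), the second is not, giving 50%.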
Figure 5. Samples used for measuring the robustness of the GPT-Neo models
using the various methods: addition, subtraction, swapping, and replacing.
Two sentences are shown for each method: the original (first row) and the
distorted (second row) sample. The model should predict the correct answer
choice for the original sentence and behave consistently on the distorted one.
In Table 4, we present the mean estimation accuracy of the robustness tests
for the various datasets. The reported accuracies correspond to predictions
obtained using the GPT-Neo model in a zero-shot setting, without further
fine-tuning on the additional distorted test samples. The Piqa and Storycloze
datasets perform consistently well when tested under different settings on
all our tasks. Hellaswag, however, demonstrates strong performance only on
two specific tasks (add and replace) while exhibiting a considerable drop in
accuracy on the remaining tasks.
Likewise, performance on the BoolQ and Winogrande datasets is somewhat
degraded, with some overfitting concerns and an inability to adapt to the
given context. A closer inspection of the incorrect responses reveals that
the model frequently struggles to infer contextual meanings and at times is
insensitive to such changes for both positive and negative sentences.
Overall, GPT-Neo displays reasonable proficiency when probed in various
zero-shot settings on at least a few of these tasks. (We do not report
robustness test scores on the OpenBookQA dataset because plausible
modifications to the original sentence were not possible while preserving the
semantics with respect to the answer choices.)
## 10\. Visualization
Figure 6. Attention head for the GPT-Neo model, Layer $6$, for the HellaSwag
example discussed in Table 1.
In this section, we visualise the attention patterns of the GPT-Neo model on
sentence completion tasks using the source and destination tokens. We
visualise the overall attention pattern and individual attention heads for a
few examples, as shown in Figures 6 $\&$ 7. We are mainly interested in
understanding in-context learning and induction heads, i.e., compositions of
attention heads that work together to copy patterns. It is observed that the
lower layers of transformer models capture global token dependencies across
multiple tokens, which enables the transfer of relevant information across
layers and thus the prediction of the correct answer choice (b50, ; b51, ).
Figure 7. Attention head for the GPT-Neo model, Layer $6$, for the OpenBookQA
example discussed in Table 1.
In Figures 6 $\&$ 7, we observe previous-token attention heads that simply
attend back to the preceding token, as well as first-token heads that fall
back to the first token and do nothing, as discussed in (b50, ). We also find
specialized attention heads whose pattern is diagonal, while most heads have
no clearly identifiable structure. In Figure 8, we demonstrate an interesting
hook into the attention pattern activations for the relevant head, as
discussed in (b52, ).
Figure 8. Attention hook for GPT-based LLMs as demonstrated in (b52, ).
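As a toy numerical illustration of what a "previous-token head" looks like (a simplified numpy heuristic, not TransformerLens functionality), one can score a head by the fraction of attention mass it places one position back:

```python
import numpy as np

def head_diagonality(attn: np.ndarray, offset: int = 1) -> float:
    """Fraction of total attention mass a head places on the token
    `offset` positions back (offset=1 flags 'previous-token' heads).
    `attn` is a (seq, seq) lower-triangular attention matrix."""
    seq = attn.shape[0]
    mass = sum(attn[i, i - offset] for i in range(offset, seq))
    return float(mass / attn.sum())

seq = 6
# A synthetic head that always attends to the previous token.
prev_head = np.zeros((seq, seq))
prev_head[0, 0] = 1.0
for i in range(1, seq):
    prev_head[i, i - 1] = 1.0

# A head that attends uniformly over all earlier positions.
uniform_head = np.tril(np.ones((seq, seq)))
uniform_head /= uniform_head.sum(axis=-1, keepdims=True)

print(head_diagonality(prev_head))     # ~0.83: mostly previous-token
print(head_diagonality(uniform_head))  # much lower
```

The same score applied to real attention matrices (e.g. those behind Figures 6 and 7) would separate previous-token heads from unstructured ones.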
## 11\. Limitation
Although we conduct an examination of the GPT-Neo model, our comparisons are
not exhaustive in terms of the datasets considered and the ablation studies
for understanding model performance under robustness tests. Likewise, due to
limited computational resources, we were unable to perform extended model
training for more epochs or supervised fine-tuning of large models, in
particular for the Storycloze dataset. In subsequent work, we hope to extend
this further by probing smaller models on a larger class of datasets with
interpretable results. We hope our work provides some preliminary motivation
for, and understanding of, the impact of smaller models viewed against larger
models.
## 12\. Conclusion
In this work, we illustrate the commonsense reasoning capabilities of the
GPT-Neo model on a suite of $6$ diverse tasks. Furthermore, we incorporated
various model baselines for comparative analysis and demonstrated that the
GPT-Neo model delivers competitive performance on several tasks as the model
size is increased. We also investigated the model using attention head
visualization and conducted robustness tests to better understand model
performance under various settings.
## 13\. Acknowledgments
We thank the anonymous reviewers for their review and suggestions.
## References
* (1) Gao, Leo, et al. ”The pile: An 800gb dataset of diverse text for language modeling.” arXiv preprint arXiv:2101.00027 (2020).
* (2) Vaswani, Ashish, et al. ”Attention is all you need.” Advances in neural information processing systems 30 (2017).
* (3) Chollet, François. ”On the measure of intelligence.” arXiv preprint arXiv:1911.01547 (2019).
* (4) Sakaguchi, Keisuke, et al. ”Winogrande: An adversarial winograd schema challenge at scale.” Communications of the ACM 64.9 (2021): 99-106.
* (5) Rajani, Nazneen Fatema, et al. ”Explain yourself! leveraging language models for commonsense reasoning.” arXiv preprint arXiv:1906.02361 (2019).
* (6) Bisk, Yonatan, et al. ”Piqa: Reasoning about physical commonsense in natural language.” Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 05. 2020.
* (7) Mostafazadeh, Nasrin, et al. ”A corpus and evaluation framework for deeper understanding of commonsense stories.” arXiv preprint arXiv:1604.01696 (2016).
* (8) Zellers, Rowan, et al. ”HellaSwag: Can a machine really finish your sentence?.” arXiv preprint arXiv:1905.07830 (2019).
* (9) Huang, Lifu, et al. ”Cosmos QA: Machine reading comprehension with contextual commonsense reasoning.” arXiv preprint arXiv:1909.00277 (2019).
* (10) Brown, Tom, et al. ”Language models are few-shot learners.” Advances in neural information processing systems 33 (2020): 1877-1901.
* (11) Devlin, Jacob, et al. ”Bert: Pre-training of deep bidirectional transformers for language understanding.” arXiv preprint arXiv:1810.04805 (2018).
* (12) Zhou, Xuhui, et al. ”Evaluating commonsense in pre-trained language models.” Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 05. 2020.
* (13) Radford, Alec, et al. ”Language models are unsupervised multitask learners.” OpenAI blog 1.8 (2019): 9.
* (14) Kaplan, Jared, et al. ”Scaling laws for neural language models.” arXiv preprint arXiv:2001.08361 (2020).
* (15) Wei, Jason, et al. ”Chain of thought prompting elicits reasoning in large language models.” arXiv preprint arXiv:2201.11903 (2022).
* (16) Wei, Jason, et al. ”Finetuned language models are zero-shot learners.” arXiv preprint arXiv:2109.01652 (2021).
* (17) Reed, Scott, et al. ”A generalist agent.” arXiv preprint arXiv:2205.06175 (2022).
* (18) Frei, Spencer, Niladri S. Chatterji, and Peter Bartlett. ”Benign overfitting without linearity: Neural network classifiers trained by gradient descent for noisy linear data.” Conference on Learning Theory. PMLR, 2022.
* (19) Nakkiran, Preetum, et al. ”Deep double descent: Where bigger models and more data hurt.” Journal of Statistical Mechanics: Theory and Experiment 2021.12 (2021): 124003.
* (20) Sagawa, Shiori, et al. ”An investigation of why overparameterization exacerbates spurious correlations.” International Conference on Machine Learning. PMLR, 2020.
* (21) Liu, Ruibo, et al. ”Mind’s Eye: Grounded Language Model Reasoning through Simulation.” arXiv preprint arXiv:2210.05359 (2022).
* (22) Bubeck, Sébastien, and Mark Sellke. ”A universal law of robustness via isoperimetry.” Advances in Neural Information Processing Systems 34 (2021): 28811-28822.
* (23) Madry, Aleksander, et al. ”Towards deep learning models resistant to adversarial attacks.” arXiv preprint arXiv:1706.06083 (2017).
* (24) Cobbe, Karl, et al. ”Training verifiers to solve math word problems.” arXiv preprint arXiv:2110.14168 (2021).
* (25) Touvron, Hugo, et al. ”Llama: Open and efficient foundation language models.” arXiv preprint arXiv:2302.13971 (2023).
* (26) Touvron, Hugo, et al. ”Llama 2: Open foundation and fine-tuned chat models.” arXiv preprint arXiv:2307.09288 (2023).
* (27) Ott, Myle, et al. ”fairseq: A fast, extensible toolkit for sequence modeling.” arXiv preprint arXiv:1904.01038 (2019).
* (28) Talmor, Alon, et al. ”Commonsenseqa 2.0: Exposing the limits of ai through gamification.” arXiv preprint arXiv:2201.05320 (2022).
* (29) Geva, Mor, et al. ”Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies.” Transactions of the Association for Computational Linguistics 9 (2021): 346-361.
* (30) Raffel, Colin, et al. ”Exploring the limits of transfer learning with a unified text-to-text transformer.” The Journal of Machine Learning Research 21.1 (2020): 5485-5551.
* (31) Poggio, Tomaso, et al. ”Why and when can deep-but not shallow-networks avoid the curse of dimensionality: a review.” International Journal of Automation and Computing 14.5 (2017): 503-519.
* (32) Bai, Yu, Fan Chen, Huan Wang, Caiming Xiong, and Song Mei. ”Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection.” arXiv preprint arXiv:2306.04637 (2023).
* (33) Allen-Zhu, Zeyuan, and Yuanzhi Li. ”Physics of Language Models: Part 1, Context-Free Grammar.” arXiv preprint arXiv:2305.13673 (2023).
* (34) Arora, Sanjeev, and Anirudh Goyal. ”A theory for emergence of complex skills in language models.” arXiv preprint arXiv:2307.15936 (2023).
* (35) Von Oswald, Johannes, et al. ”Transformers learn in-context by gradient descent.” International Conference on Machine Learning. PMLR, 2023.
* (36) Olsson, Catherine, et al. ”In-context learning and induction heads.” arXiv preprint arXiv:2209.11895 (2022).
* (37) Beltagy, Iz, Matthew E. Peters, and Arman Cohan. ”Longformer: The long-document transformer.” arXiv preprint arXiv:2004.05150 (2020).
* (38) Shazeer, Noam, et al. ”Outrageously large neural networks: The sparsely-gated mixture-of-experts layer.” arXiv preprint arXiv:1701.06538 (2017).
* (39) Ho, Jonathan, et al. ”Axial attention in multidimensional transformers.” arXiv preprint arXiv:1912.12180 (2019).
* (40) Zellers, Rowan, et al. ”Swag: A large-scale adversarial dataset for grounded commonsense inference.” arXiv preprint arXiv:1808.05326 (2018).
* (41) Madaan, Aman, et al. ”Language models of code are few-shot commonsense learners.” arXiv preprint arXiv:2210.07128 (2022).
* (42) Zhou, Xuhui, et al. ”Evaluating commonsense in pre-trained language models.” Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 05. 2020.
* (43) Li, Xiang Lorraine, et al. ”A systematic investigation of commonsense knowledge in large language models.” Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022.
* (44) Poth, Clifton, et al. ”What to pre-train on? efficient intermediate task selection.” arXiv preprint arXiv:2104.08247 (2021).
* (45) Arora, Simran, et al. ”Ask me anything: A simple strategy for prompting language models.” arXiv preprint arXiv:2210.02441 (2022).
* (46) Chung, Hyung Won, et al. ”Scaling instruction-finetuned language models.” arXiv preprint arXiv:2210.11416 (2022).
* (47) Penedo, Guilherme, et al. ”The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only.” arXiv preprint arXiv:2306.01116 (2023).
* (48) Angelica Chen, et al. ”Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs.” arXiv:2309.07311 (2023).
* (49) Refinetti, Maria, Alessandro Ingrosso, and Sebastian Goldt. ”Neural networks trained with SGD learn distributions of increasing complexity.” International Conference on Machine Learning. PMLR, 2023.
* (50) Olsson, Catherine, et al. ”In-context learning and induction heads.” arXiv preprint arXiv:2209.11895 (2022).
* (51) Elhage, Nelson, et al. ”A mathematical framework for transformer circuits.” Transformer Circuits Thread 1 (2021).
* (52) Nanda, Neel. ”TransformerLens.” https://github.com/neelnanda-io/TransformerLens (2022).
* (53) Mitchell, Melanie, and David C. Krakauer. ”The debate over understanding in AI’s large language models.” Proceedings of the National Academy of Sciences 120.13 (2023).
* (54) Mitchell, Melanie. ”How do we know how smart AI systems are?.” Science 381.6654 (2023).
* (55) Razeghi, Yasaman, et al. ”Impact of pretraining term frequencies on few-shot numerical reasoning.” Findings of the Association for Computational Linguistics: EMNLP 2022. 2022.
* (56) Wu, Zhaofeng, et al. ”Reasoning or reciting? exploring the capabilities and limitations of language models through counterfactual tasks.” arXiv preprint arXiv:2307.02477 (2023).
* (57) Huang, Jie, and Kevin Chen-Chuan Chang. ”Towards reasoning in large language models: A survey.” arXiv preprint arXiv:2212.10403 (2022).
Explorations in the Space of S-Matrices
A thesis submitted for the degree of
Doctor of Philosophy
in the Faculty of Sciences
Parthiv Haldar
Centre for High Energy Physics
Indian Institute of Science
Bangalore - 560012. India.
I hereby declare that this thesis “Explorations in the Space of S-Matrices” is based on my own research work, that I carried out with my collaborators in the Centre for High Energy Physics, Indian Institute of Science, during my tenure as a PhD student under the supervision of Prof. Aninda Sinha. The work presented here does not feature as the work of someone else in obtaining a degree in this or any other institute. Any other work that may have been quoted or used in writing this thesis has been duly cited and acknowledged.
Date: Parthiv Haldar
Certified by:
Prof. Aninda Sinha
Centre for High Energy Physics
Indian Institute of Science
Bangalore: 560012
First and foremost, I would like to express my gratitude towards my advisor Prof. Aninda Sinha. Without his constant encouragement and support, it would not have been possible to complete this journey. His encouragement has provided me with the courage and conviction to explore less traversed paths in my journey for knowledge. Most importantly, he kept his unwavering trust and belief in me when I myself lost them. I couldn't thank him more for being the emotional support during the most vulnerable moments of my life! I count myself luckiest because I went looking for a thesis advisor, but at the end of $5$ years Aninda became much more than just a thesis advisor!
I was fortunate to work with outstanding collaborators Subham Dutta Chowdhury, Kallol Sen, Kausik Ghosh, Parijat Dey, Prashanth Raman and Ahmadullah Zahed. Their zeal for explorations inspired me to push myself towards betterment.
I am indebted to some of the excellent courses that I was part of during my Integrated PhD programme. In particular, the quantum mechanics courses taught by Prof. Diptiman Sen and Prof. Rohini Godbole had been my asset during my entire PhD journey. The long journey at IISc would not have been possible without the companionship and friendship of Sahel Dey, Prerana Biswas, Sayantan Ghosh. The many a precious moments I made with you kept me going even in the darkest hours! Thank you guys for bearing with me for so long!
Last but not the least, without my parents' unrelenting support I would not have dared to walk the path! They stood beside me in my every decision, right and wrong! In their many sacrifices, they gave wings to my dream!
CHAPTER: SYNOPSIS
The quantum theory of relativistic particles is centred on studying quantum amplitudes for the scattering of quantum particles. Such amplitudes can be viewed as elements of an abstract matrix called the S-matrix. The S-matrix is thus one of the most important observables in the quantum dynamics of relativistic particles. The traditional way of constructing the S-matrix is by introducing non-observable entities called quantum fields, with infinitely many degrees of freedom. Quantum field theory succeeds impressively in this endeavour. However, due to the infinite number of degrees of freedom of the quantum fields, one has to deal with various infinities along the way. While there are well-developed mathematical techniques, collectively called renormalization, to tackle such infinities and extract finite observable results, it would be desirable to find ways to work around them. Heisenberg first proposed such a framework, formulated entirely in terms of the S-matrix and guided by the fundamental physical principles of unitarity, relativistic causality and relativistic covariance, along with other symmetry principles as warranted. In a modern revival of that idea, the S-matrix can be used to define an abstract theory space. We can thus explore the space of physically consistent theories by studying the consequences of the aforementioned physical principles on the S-matrix. This thesis is devoted to such explorations for S-matrix amplitudes for the $2-2$ scattering of identical particles.
In the first part of
the thesis, we discuss a novel mathematical way of exploring the space of S-matrices
using the tools of the mathematical field of geometric function theory (GFT). GFT is the study of geometric properties of complex analytic functions regarded as mappings between complex planes. We use two particular classes of functions that play prominent roles in GFT: univalent functions and typically real functions. Univalent functions are analytic functions that are also injective; geometrically, they are conformal mappings. These functions are known to satisfy various bounding relations. The most famous of these is de Branges' theorem, also known as the Bieberbach conjecture, which bounds the Taylor coefficients of univalent functions. Not only the Taylor coefficients but the functions themselves are bounded, as described by the Koebe growth theorem. The other class, the typically real functions, are functions with real Taylor coefficients; they have positive imaginary part in the upper half-plane and negative imaginary part in the lower half-plane. These functions satisfy interesting bounds similar to those for univalent functions. However, the connection between these functions and scattering amplitudes is not obvious. A novel dispersive representation of scattering amplitudes enables us to unearth these connections with univalent and typically real functions.
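For reference, the two classical bounds invoked here can be stated precisely. For a function $f(z)=z+\sum_{n\geq 2}a_{n}z^{n}$ univalent on the unit disc:

```latex
% de Branges' theorem (Bieberbach conjecture): coefficient bound
|a_n| \le n \qquad \text{for all } n \ge 2,
% Koebe growth theorem: two-sided bound on the function itself
\frac{|z|}{(1+|z|)^{2}} \;\le\; |f(z)| \;\le\; \frac{|z|}{(1-|z|)^{2}},
\qquad |z| < 1.
```

Both bounds are saturated by the Koebe function $k(z)=z/(1-z)^{2}$.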
Dispersion relations and crossing symmetry are two important consequences of the causality principle. The usual fixed-transfer dispersion relation is not manifestly crossing symmetric, and therefore crossing symmetry has to be imposed separately, which is quite a non-trivial task. It would be nice to have a manifestly crossing symmetric dispersive representation. Auberson and Khuri came up with such a dispersive representation, with a kernel that is manifestly crossing symmetric. In the process of deriving this dispersion relation, the Mandelstam variables are parametrized as rational functions of a complex variable $\tilde{z}$ and another parameter $a$. When expressed as functions of the Mandelstam variables, $\tilde{z}$ and $a$ are crossing symmetric. The dispersion kernel turns out to be a rational function of $\tilde{z}$ and $a$. We establish that the kernel is univalent as well as typically real inside the unit disc $|\tilde{z}|<1$ when $a$ takes values in a certain real interval. Further, using unitarity and the typically real property of the crossing symmetric dispersion kernel, it turns out that the scattering amplitude itself is a typically real function inside the unit disc in the complex $\tilde{z}$ plane for the same real values of $a$. We would like to emphasize that the crossing symmetric dispersion relation is crucial for establishing this connection. Now, employing the bounding properties of univalent and typically real functions mentioned above, we obtain bounds on the Taylor coefficients of the expansion of the scattering amplitude about $\tilde{z}=0$, as well as upper and lower bounds on the amplitude itself. Physically, expanding the amplitude about $\tilde{z}=0$ corresponds to a low-energy expansion in a basis of crossing symmetric functions. These expansions can be interpreted as an EFT amplitude, and the corresponding Taylor coefficients can be expressed as ratios of Wilson coefficients of the EFT.
Thus we obtain $2$-sided bounds on ratios of Wilson coefficients. Such $2$-sided bounds had previously been obtained only numerically; our analysis sheds light, for the first time, on a concrete mathematical origin for them. We apply this analysis to scattering amplitudes of identical massive bosons, pions, effective string amplitudes, the $4$-photon amplitude and $4$-graviton amplitudes.
In the second part of the thesis we turn our attention to holographic S-matrices. The conjectural $AdS/CFT$ holography provides a way to construct flat space scattering amplitudes from the Mellin amplitudes of a conformal field theory (CFT) by taking a large radius limit of the dual $AdS$ space. Various analytic properties of flat space scattering amplitudes are encoded in corresponding properties of the CFT Mellin amplitude. Flat space $2-2$ scattering amplitudes are known to satisfy a high energy bound called the Froissart-Martin bound, which follows from the axiomatic analyticity and unitarity properties of the S-matrix. The Froissart-Martin bound is one of the robust consistency tests for a flat space scattering amplitude. Therefore, if a holographic construction of the S-matrix is to work, one should be able to derive the Froissart-Martin bound systematically starting from the $4$-point Mellin amplitude of a holographic CFT. We provide such a derivation in the second part of the thesis. We find that our holographic derivation reproduces the exact Froissart-Martin bound in $4$ spacetime dimensions, while in higher spacetime dimensions we obtain weaker bounds. We discuss possible reasons for this behaviour.
Great things are not accomplished by those who yield to
trends and fads and popular opinion.
Jack Kerouac
CHAPTER: PUBLICATIONS AND PREPRINTS
The thesis is based on the following works
* (Chapter 3 and Chapter 4) Quantum field theory and the Bieberbach conjecture,
P. Haldar, A. Sinha and A. Zahed,
SciPost Phys. 11, 002 (2021), [arXiv:2103.12108 [hep-th]].
* (Chapter 5) Crossing Symmetric Spinning S-matrix Bootstrap: EFT bounds,
S. D. Chowdhury, K. Ghosh, P. Haldar, P. Raman and A. Sinha,
Submitted to SciPost Phys., [arXiv:2112.11755 [hep-th]].
* (Chapter 6) Froissart bound for/from CFT Mellin amplitudes,
P. Haldar and A. Sinha,
SciPost Phys. 8, 095 (2020), [arXiv:1911.05974 [hep-th]].
Other works that were completed during PhD, not included in the thesis
* Relative entropy in scattering and the S-matrix bootstrap,
A. Bose, P. Haldar, A. Sinha, P. Sinha and S.S. Tiwari,
SciPost Phys. 9, 081 (2020), [arXiv:2006.12213 [hep-th]].
* Regge amplitudes in generalized fishnet and chiral fishnet theories,
S. Dutta Chowdhury, P. Haldar and K. Sen,
JHEP 12, 117 (2020), [arXiv:2008.10201 [hep-th]].
* On the Regge limit of Fishnet correlators
S. Dutta Chowdhury, P. Haldar and K. Sen,
JHEP 10, 249 (2019), [arXiv:1908.01123 [hep-th]].
# A survey of deep learning optimizers - first and second order methods
ROHAN V KASHYAP<EMAIL_ADDRESS>1234-5678-9012 Indian Institute of
ScienceBangaloreKarnatakaIndia560012
(2023)
###### Abstract.
Deep learning optimization involves minimizing a high-dimensional loss
function in the weight space, which is often perceived as difficult due to
inherent obstacles such as saddle points, local minima, ill-conditioning of
the Hessian, and limited compute resources. In this paper, we provide a
comprehensive review of $14$ standard optimization methods successfully used
in deep learning research, and a theoretical assessment of the difficulties
in numerical optimization from the optimization literature.
Copyright 2023. POMACS, volume 37, number 4, article 111, August 2023.
## 1\. Introduction
Optimization plays a central role in machine learning (b1, ), statistical
physics, pure mathematics (b11, ; b16, ; b30, ), random matrix theory, and
scientific research more broadly. Deep learning involves the optimization of
non-convex loss functions over continuous, high-dimensional spaces. Neural
network training minimizes a differentiable, continuous loss function using
gradient-based optimization techniques that use first-order or second-order
information to update the weight parameters and thereby seek a critical
point. Gradient descent (b30, ), quasi-Newton (b62, ), BFGS (b59, ) and
conjugate gradient (b55, ) methods are commonly used to perform such
minimizations and find the optimal solution.
Definition 1. Given $K$ weight matrices $(\theta_{k})_{k\leq K}$, the output
$y=f(x)$ of a deep neural network with an activation function $\phi$ is given as:
(1)
$f(x)=\phi\left(\phi\left(\cdots\phi\left(x\cdot\theta_{1}\right)\cdots\right)\cdot\theta_{K-1}\right)\cdot\theta_{K},$
where $x\in R^{d}$ is the input to the model. Let
$\left\\{x_{i},y_{i}\right\\}_{i=1}^{m}$ denote the set of training examples.
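Equation (1) can be sketched directly in code; the layer widths below are illustrative choices, not values from the text:

```python
import numpy as np

def forward(x, thetas, phi=np.tanh):
    """Deep net of Eq. (1): phi after each of the first K-1 affine maps,
    no activation after the final one."""
    h = x
    for theta in thetas[:-1]:
        h = phi(h @ theta)
    return h @ thetas[-1]

rng = np.random.default_rng(0)
dims = [4, 8, 8, 1]   # input dim d=4 plus illustrative layer widths
thetas = [rng.standard_normal((a, b)) for a, b in zip(dims[:-1], dims[1:])]
y = forward(rng.standard_normal(4), thetas)
print(y.shape)  # (1,)
```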
In principle, the goal of a deep learning algorithm is to reduce the expected
generalization error $\mathbb{E}(L(f(x;\theta),y))$ where $L$ is the
designated loss-function such as mean-squared error, and $f(x;\theta)$ is the
function estimate for $y=f(x)$ with parameters $\theta$. The most common
estimator is the maximum likelihood estimation (MLE) for $\theta$ defined as:
$\boldsymbol{\theta}_{\mathrm{ML}}=\underset{\boldsymbol{\theta}}{\arg\max}\hskip
1.99997pt\mathbb{E}_{\mathbf{x}\sim\hat{p}_{\text{data }}}\log p_{\text{model
}}(\boldsymbol{x};\boldsymbol{\theta})$
where $p_{\text{model }}$ is the model distribution. This amounts to
minimizing the Kullback-Leibler (KL) divergence (a non-symmetric measure of
dissimilarity, not a metric) between the empirical data distribution
$\hat{p}_{\text{data}}$ and the model distribution $p_{\text{model}}$:
$D_{\mathrm{KL}}\left(\hat{p}_{\text{data }}\|p_{\text{model
}}\right)=\mathbb{E}_{\mathbf{x}\sim\hat{p}_{\text{data
}}}\left[\log\hat{p}_{\text{data }}(\boldsymbol{x})-\log p_{\text{model
}}(\boldsymbol{x})\right]$
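A minimal numerical check of this equivalence, using an illustrative categorical model over three classes: the MLE is exactly the empirical frequency vector, which drives the KL term to zero, while any other model distribution gives a strictly positive divergence.

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0          # terms with p = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Empirical data distribution from six samples over three classes.
samples = np.array([0, 0, 1, 2, 0, 1])
p_hat = np.bincount(samples, minlength=3) / len(samples)

# The categorical MLE equals the empirical frequencies, so the
# divergence vanishes; any other model distribution does worse.
print(kl(p_hat, p_hat))               # 0.0
print(kl(p_hat, np.ones(3) / 3) > 0)  # True
```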
Since the true data distribution is largely unknown, we minimize the expected
loss on the training set - a quantity called the empirical risk given as
follows:
$\mathbb{E}_{\boldsymbol{x},\mathrm{y}\sim\hat{p}_{\mathrm{data}}(\boldsymbol{x},y)}[L(f(\boldsymbol{x};\boldsymbol{\theta}),y)]=\frac{1}{m}\sum_{i=1}^{m}L\left(f\left(\boldsymbol{x}^{(i)};\boldsymbol{\theta}\right),y^{(i)}\right)$
where $L$ is the loss function, $f(x;\theta)$ is the prediction output for
input $x$, and $\hat{p}_{data}$ is the empirical distribution. In a
supervised setting, $y$ is the target output and $p(y|x)$ is the probability
distribution to be estimated. MLE is asymptotically the best estimator and is
consistent in the sense that it converges (pointwise, i.e.,
$\hat{\theta}\rightarrow\theta^{*}$, not in distribution) to the parameters
of the true data distribution $\hat{p}_{\text{data }}$ as
$m\rightarrow\infty$. Although in theory this method is highly prone to
overfitting, since models with high capacity can simply memorize the training
set, empirical results surprisingly dictate the opposite: models generalize
to unseen examples in the test set within their capacity. A standard way to
control overfitting is to add a penalty term, called the regularizer, to the
empirical risk:
$\tilde{L}(\boldsymbol{\theta})=\underbrace{\frac{1}{N}\sum_{i=1}^{N}L(y(\mathbf{x},\boldsymbol{\theta}),t)}_{\text{training
loss }}+\underbrace{\mathcal{R}(\boldsymbol{\theta})}_{\text{regularizer }}$
where $\mathcal{R}(\boldsymbol{\theta})$ is the regularization term, such as
$L1$ or $L2$ regularization.
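As a concrete instance of the regularized objective $\tilde{L}$, here is a sketch assuming mean-squared error for the training loss and $\mathcal{R}(\boldsymbol{\theta})=\lambda\|\boldsymbol{\theta}\|_{2}^{2}$ ($L2$ regularization); the data and $\lambda$ are illustrative:

```python
import numpy as np

def ridge_loss(theta, X, y, lam=0.1):
    """Empirical risk (mean squared error) plus the L2 regularizer
    R(theta) = lam * ||theta||^2."""
    residual = X @ theta - y
    return float(np.mean(residual ** 2) + lam * np.sum(theta ** 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true

# Even a perfect fit pays the weight penalty when lam > 0.
print(ridge_loss(theta_true, X, y, lam=0.1))  # 0.525 (pure penalty term)
print(ridge_loss(theta_true, X, y, lam=0.0))  # 0.0
```

The penalty biases the minimizer toward smaller weights, trading a little training loss for smoother solutions.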
However, high-dimensional non-convex optimization comes with no theoretical
guarantees, yet it obtains state-of-the-art results on numerous tasks such as
image classification (b64, ), text processing (b63, ), and representation
learning (b65, ). Deep nets are composed of layers of affine transformations
followed by point-wise non-linear activations such as the ReLU, sigmoid and
tanh functions. The choice of the non-linear function is thus important for
avoiding the vanishing and exploding gradient problems (b33, ).
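A quick numerical illustration of why the sigmoid invites vanishing gradients: its derivative never exceeds $0.25$, so backpropagation through $K$ layers multiplies up to $K$ such factors and the gradient signal decays geometrically.

```python
import numpy as np

def sigmoid_grad(z):
    """Derivative of the logistic sigmoid: s(z) * (1 - s(z))."""
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)

# The derivative peaks at 0.25 (at z = 0); a chain of K such factors
# shrinks the backpropagated gradient geometrically.
for K in (1, 5, 20):
    print(K, sigmoid_grad(0.0) ** K)  # 0.25, then rapidly toward 0
```

ReLU-style activations avoid this particular decay because their derivative is exactly $1$ on the active region.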
### 1.1. Mathematical Preliminaries and Notations
Let $(x^{(i)},y^{(i)})$ denote the training examples and their corresponding
targets. Let $f(\cdot;\theta)$ denote a neural network with
parameters $\theta$ and a loss function $L$. For $R^{n}$-valued functions
$\nabla$ corresponds to the gradient operator. For all the optimization
methods discussed in the paper, $g$ corresponds to the computed gradient
vector across multiple training samples, $\epsilon$ is the learning rate and
$\rho$ is the decay rate. For first order methods, $r$ is the accumulated
square gradient and $v$ is the velocity vector respectively. For second order
methods, we denote the Hessian matrix of the loss function $L$ with respect to
the model parameters $\theta$ by $H$ and $\lambda_{i}$’s are the corresponding
eigenvalues. Unless stated otherwise, the update rule is given as
$\theta=\theta-\Delta\theta$, where $\Delta\theta=\epsilon g$. We denote the
basic version of stochastic gradient descent algorithm in Algorithm 1.
1 Initialize: Initial parameter $\theta$, Learning rate $\epsilon$.
2 while _stopping criterion not met_ do
3 Sample $m$ training examples ${x^{(1)},...,x^{(m)}}$ and their corresponding
targets $y^{(i)}$.
4 Compute gradient estimate:
$g\leftarrow\nabla_{\boldsymbol{\theta}}\left[\frac{1}{m}\sum_{i=1}^{m}L\left(f\left(\boldsymbol{x}^{(i)};\boldsymbol{\theta}\right),\boldsymbol{y}^{(i)}\right)\right]$
5 Apply update: $\theta\leftarrow\theta-\epsilon g$.
6 end while
Algorithm 1 Stochastic Gradient Descent
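A minimal sketch of Algorithm 1 on a toy least-squares problem (the problem and hyperparameters are illustrative, not from the text):

```python
import numpy as np

def sgd(grad_fn, theta, data, epsilon=0.05, batch=4, steps=300, seed=0):
    """Algorithm 1: sample a mini-batch, form the gradient estimate g,
    apply the update theta <- theta - epsilon * g."""
    rng = np.random.default_rng(seed)
    X, y = data
    for _ in range(steps):
        idx = rng.integers(0, len(y), size=batch)
        g = grad_fn(theta, X[idx], y[idx])
        theta = theta - epsilon * g
    return theta

# Noiseless least squares: L = (1/m) sum_i (x_i . theta - y_i)^2.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))
theta_true = np.array([2.0, -1.0])
y = X @ theta_true

grad = lambda th, Xb, yb: 2.0 * Xb.T @ (Xb @ th - yb) / len(yb)
theta_hat = sgd(grad, np.zeros(2), (X, y))
print(np.round(theta_hat, 2))  # close to [ 2. -1.]
```

Because each step touches only `batch` examples, the per-step cost is independent of the training-set size $m$, which is the $O(1)$ scaling discussed below.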
### 1.2. Literature Overview
The most important algorithm for machine learning optimization is stochastic
gradient descent (SGD). It extends gradient descent to handle large training
sets without becoming computationally expensive, as discussed in Algorithm 1.
Since we perform gradient updates at each step using a mini-batch of
examples, the asymptotic per-step cost of SGD is $O(1)$ as a function of $m$,
where $m$ is the number of training examples. In this section, we provide a
detailed discussion of the various problems associated with this method.
Figure 1. Illustration of flat and sharp minima. Source: Keskar et al. (b46, ).
### 1.3. Flat Minima
Hochreiter et al. (b45, ) conjectured that the generalization of deep nets is
related to the curvature of the loss at the converged solution. Their ”flat
minima search” algorithm uses Hessian information to find a large region in
the weight space, a ”flat minimum”, in which all solutions have a small error
value. It is also noted that high-error local minima traps do not appear when
the model is overparameterized. Keskar et al. (b46, ) demonstrated that
large-batch methods tend to converge to sharp minimizers (characterized by
many large positive Hessian eigenvalues) and thus generalize somewhat worse
than small-batch methods, which tend to converge to flat minimizers
(characterized by numerous small eigenvalues), even though both achieve
comparable training accuracies (b4, ).
This is attributed to the empirical evidence, discussed in (b13, ), (b45, )
and (b46, ), that large-batch methods are attracted to regions of sharp
minima, while the basins found by small-batch methods are wider; since the
iterates are unable to escape these wider basins of attraction, they
generalize better. Although a larger batch size provides a more accurate
approximation of the true gradient with fewer fluctuations in the update
step, it is computationally expensive and leads to fewer update steps, while
a smaller batch size offers a regularizing effect due to the noise it adds
during the learning process. However, this can lead to high variance in the
gradient estimate and thus requires a smaller learning rate to maintain
stability and converge to an optimal solution. It turns out that most
directions in the weight space have similar loss values, since the Hessian
has a large number of eigenvalues close to zero, even at random
initialization points.
Figure 2. a. The $2$-D visualization of the SGD trajectory. Source: Shervine
(b67, ). b. Illustration of projected gradient descent, where $w$ is the
current parameter estimate, $w^{\prime}$ is the update after a gradient step,
and $P_{C}(w^{\prime})$ projects this onto the constraint set $C$. Source:
Kevin P. Murphy (b44, )
### 1.4. Linear Subspace
Goodfellow et al. (b47, ) show the existence of a linear subspace during
neural network training, with no barriers and a smooth path connecting the
initialization and the final minimum. They also show that poor conditioning of
the Hessian and variance in the gradient estimate are significant obstacles in
the SGD training process: these can divert the algorithm completely from the
desired basin of attraction. Thus the final minimum, the final region of
parameter space, and the width of the minimum that SGD converges to depend
largely on the learning rate, batch size, and gradient covariance, each
yielding different geometries and generalization properties.
### 1.5. SGD Trajectory
Draxler et al. (b13, ) constructed continuous paths between minima of neural
networks and conjectured that the loss minima are not isolated points but
essentially form a continuous manifold. Moreover, since these paths are flat,
this implies the existence of a single connected manifold of low loss rather
than distinct basins of attraction. More precisely, the part of the parameter
space where the loss remains below a certain low threshold forms a connected
region of low-error valleys, as shown in Figure 3. This reflects the
generalization capabilities of the network, since it provides evidence that
large neural nets have enough parameters to produce accurate predictions even
while undergoing huge structural changes. This is quite a deviation from the
earlier literature, where minima are typically depicted as points at the
bottom of a strictly convex valley of a certain width, with the network
parameters given by the location of the minimum.
### 1.6. SGD Analysis
Stanislaw et al. (b40, ) conjectured that the SGD process depends on the ratio
of the learning rate to the batch size, which thus determines the width of the
endpoint and its generalization capabilities. First, as we approach a local
minimum, the loss surface can be approximated by a quadratic bowl with the
minimum at zero loss. Thus, the training dynamics can be approximated by an
Ornstein-Uhlenbeck process 222The Ornstein-Uhlenbeck process $x_{t}$ is a
Gaussian process defined by the stochastic differential equation
$dx_{t}=-\theta x_{t}dt+\sigma dW_{t}$., as discussed in Section 3. Second,
the covariance of the gradients
$C=\frac{1}{N}\sum_{i=1}^{N}\left(g_{i}-g\right)^{T}\left(g_{i}-g\right)$,
where $g_{i}$ is the gradient of the loss function $L$ with respect to the
training example ($x_{i},y_{i}$), and the Hessian $H$ of the loss function are
approximately equal, which is possibly the reason SGD escapes sharp minima.
Since the loss functions of deep neural networks are typically non-convex,
with complex structure and potentially multiple minima and saddle points, SGD
generally converges to different regions of parameter space, with different
geometries and generalization properties (b2, ; b3, ), depending on the
optimization hyper-parameters and the initialization.
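The Ornstein-Uhlenbeck process from the footnote above can be simulated directly with the Euler-Maruyama scheme; the following sketch (parameter values are illustrative assumptions) shows the mean-reverting behavior that makes it a plausible model of late-stage SGD dynamics:

```python
import numpy as np

def ornstein_uhlenbeck(x0=2.0, theta=1.0, sigma=0.3, dt=0.01,
                       n_steps=20000, seed=0):
    """Euler-Maruyama simulation of dx_t = -theta * x_t dt + sigma dW_t."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))      # Wiener increment
        x[t + 1] = x[t] - theta * x[t] * dt + sigma * dw
    return x

path = ornstein_uhlenbeck()
# The process mean-reverts to 0 with stationary variance sigma^2 / (2 theta).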
Figure 3. Left: A slice through the one million-dimensional training loss
function of DenseNet-40-12 on CIFAR10 and the minimum energy path as discussed
in (b13, ). The plane is spanned by the two minima and the mean of the path
nodes. Source: Felix Draxler (b13, )
### 1.7. Curse of Dimensionality
Finding a good descent direction in a high-dimensional space is a difficult
problem, but it is not nearly as difficult as navigating an error surface with
complicated obstacles within multiple low-dimensional subspaces. It is also
highly likely that the non-convex loss landscape is such that it contains
regions of local minima at high energies which full-batch methods couldn’t
have possibly avoided. But, the stochastic nature of SGD due to its inherent
noise and a smaller batch size doesn’t suffer from this problem and is not
trapped in irrelevant basins of attraction. With non-convex functions, such as
neural nets, it is possible to have many local minima. Indeed, nearly any deep
model is guaranteed to have an extremely large number of local minima.
However, this is not necessarily a major problem. It is empirically observed
that as the dimensionality $N$ increases, local minima with high error
relative to the global minimum occur with a probability that is exponentially
small in $N$ (b5, ). This is mainly because neural nets exhibit model
identifiability problem, which results in an extremely large amount of local
minima, but all are equivalent in their cost function value. They do not have
very high error values relative to the global minima and are thus good
solutions that stochastic gradient descent often finds itself at during the
training process.
### 1.8. Critical Points
Critical points 333A critical point is a point whose first-order derivative is
zero i.e., $f^{\prime}(x)=0$. with high costs are far more likely to be saddle
points (b5, ). We are not exactly concerned with finding the exact minimum of
a function but seek only to reduce its value significantly to obtain a good
generalization error. In the gradient descent method, the main problem is not
the direction but its step size along each eigenvector because the update step
is directly proportional to the corresponding eigenvalue. Thus, we move
towards the optimal point if the eigenvalue is positive else; we move away
from it if it is negative along those directions, and this slows down the
learning process since now we might circumnavigate a tall mountain or a flat
region of high error or might simply take a very long time to move away from
the saddle points.
Figure 4. For two dimensions, the lines of constant of the error surface E are
oval in shape. The eigenvectors of H along the major and minor axes. The
eigenvalues measure the steepness of E along each eigendirection. Source:Yann
LeCun (b1, )
For a convex function, such a flat region corresponds to the global minima,
but in a non-convex optimization problem, it could correspond to a point which
a high value of the loss function. Although, Newton’s method solves this
slowness issue by rescaling the gradients in each direction with the inverse
of Hessian, it can often take the wrong direction and be applied iteratively
as long as the Hessian remains positive definite. Thus for a locally quadratic
function, Newton’s method jumps directly to the minimum, but if the
eigenvalues are not all positive (near a saddle point), then the update can
lead us to move in the wrong direction. Also, since second-order methods are
computationally intensive, the computational complexity of computing the
Hessian is $O\left(N^{3}\right)$, and thus, practically only networks with a
small number of parameters can be trained via Newton’s method.
Several methods such as BFGS and saddle-free newton method (b5, ; b53, ; b54,
) are proposed to overcome this problem in non-convex settings, specifically
when the Hessian has negative eigenvalues. This includes adding a constant
along the Hessian diagonal, which largely offsets the negative eigenvalues.
Recently, Luke (b48, ) established the connections between gradient explosion
and the initialization point by measuring the recurrent Jacobian. They found
that with a random initialization, the neural network’s policy is poorly
behaved and has many eigenvalues with a norm greater than length $1$, and thus
the gradient norm grows. However, for a stable initialization (shrinks the
initialization by multiplying by $0.01$) which was picked specifically to
prevent the gradients from exploding, we find that many eigenvalues close to
$1$, and thus the gradients do not explode. Sutskever et al. (b9, )
demonstrated that poorly initialized and the absence of momentum perform worse
than well-initialized networks with momentum.
Figure 5. The loss surface represented as a union of $n$-dimensional manifolds
called $n$-wedges. A model of the low-loss manifold comprising of $2$-wedges
in a $3$-dimensional space. Source: Stanislav (b8, )
### 1.9. Difficulties in Neural Network Optimization
The main challenges for neural net optimization in these high-dimensional
spaces can be ill-conditioning of the Hessian, poor condition number, local
minima, plateaus, and flat regions, the proliferation of saddle points, and
poor correspondence between local and global structure. The existence of a
large number of saddle points can significantly slow down learning since they
are regions of high error plateaus and give the impression of the presence of
a local minimum. Also, since these critical points are surrounded by plateaus
of small curvature, second-order methods such as the Newton method are
attracted to saddle points and thus slow down the learning process (b5, ).
Also, a common observation is that at the end of training process, the norm of
the gradients $\nabla f(x)$ is not necessarily small i.e., $\nabla
f(x)<\epsilon$, and the Hessian $H$ has negative eigenvalues, indicating that
the algorithm did not converge to the local minima (b23, ).
It is also possible that models with high capacity have a large number of
local minima which can increase the number of interactions and cause the
Hessian to be ill-conditioned. There are also wide, flat regions of constant
value where the gradients and Hessian are all zero. Such degenerate locations
pose a significant problem to optimization algorithms. We conjecture that
large neural nets often converge to local minima, most of which are equivalent
and yield similar generalization performance on the test set. Also, the
probability of finding a high error local minima is exponentially small in
high-dimensional structures and decreases quickly with network size.
Figure 6. a. Batch gradient descent moves directly downhill. b. SGD takes
steps in a noisy direction, but moves downhill on average. Source: Roger
Grosse (b21, )
## 2\. First Order Methods
### 2.1. Momentum
Polyak et al. (b49, ) introduced the momentum method to accelerate the
learning process of first-order gradient-based methods, especially in regions
of high curvature. It also addresses the poor conditioning of the Hessian
matrix and the variance in the stochastic gradient descent process. This
algorithm computes an update step by accumulating an exponentially decaying
moving average of the past gradients and continues to move in their direction.
The momentum term increases for dimensions whose gradients point in the same
directions and reduces updates for dimensions whose gradient change
directions, and we gain a faster convergence. Although it can be applied to
both full-batch and mini-batch learning methods, stochastic gradient descent
combined with mini-batches and a momentum term has been the golden rule in the
deep learning community to get excellent results on perception tasks. It
dampens oscillations in directions of high curvature by combining gradients of
opposite signs and also yields a larger learning rate in regions of low
curvature. It builds but the speed in directions of consistent gradient and
thus traverses a lot faster than the steepest descent due to its accumulated
velocity.
If the momentum is close to $1$, this algorithm is a lot faster, and it is
equally essential to ensure a smaller momentum value (close to $0.5$) at the
start of the learning process. In the beginning, since the points are randomly
initialized in the weight space, there may be large gradients, and the
velocity term ($v$) can lead in not being entirely beneficial. Thus, once
these large gradients disappear and we find the desired basin of attraction,
we can increase the momentum term smoothly to its final value, which is $0.9$
to $0.99$, to ensure that learning is not stuck and wastes computational time
in traversing flat regions where the gradients are almost zero.
$\begin{array}[]{l}\boldsymbol{v}\leftarrow\alpha\boldsymbol{v}-\epsilon\nabla_{\theta}\left(\frac{1}{m}\sum_{i=1}^{m}L\left(\boldsymbol{f}\left(\boldsymbol{x}^{(i)};\boldsymbol{\theta}\right),\boldsymbol{y}^{(i)}\right)\right)\\\
\boldsymbol{\theta}\leftarrow\boldsymbol{\theta}+\boldsymbol{v}\end{array}$
The momentum algorithm accelerated convergence to a local minimum, requiring
fewer iterations than the steepest descent method, precisely by $\sqrt{R}$
times fewer iterations, where R is the condition number of the curvature at
the local minimum. The velocity term plays the role of velocity as in physical
analogy in which it accelerates the particle through the parameter space in
reinforced directions, and thus the velocity vector is also known as imparting
momentum to the particle. The hyperparameter determines how quickly the
contributions of previous gradients exponentially decay. Previously in
gradient descent, the step size was proportional to the norm of the gradient.
Now, it depends on how large and aligned a sequence of gradients are. The step
size is largest when many gradients point in exactly the same direction. We
can think of the analogy of a ball rolling down a curved hill, and whenever it
descends a steep part of the surface, it gathers speed and continues sliding
in that direction.
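The heavy-ball update above translates directly into code. The following sketch (hyperparameters and the ill-conditioned quadratic are illustrative assumptions) accumulates the exponentially decaying velocity described in the text:

```python
import numpy as np

def momentum_descent(grad, theta0, lr=0.02, alpha=0.9, n_steps=300):
    """Heavy-ball update: v <- alpha*v - lr*grad(theta); theta <- theta + v."""
    theta = np.array(theta0, dtype=float)
    v = np.zeros_like(theta)
    for _ in range(n_steps):
        v = alpha * v - lr * grad(theta)   # accumulate decaying velocity
        theta = theta + v                  # move in the velocity direction
    return theta

# Ill-conditioned quadratic J(theta) = 0.5 theta^T A theta, condition number 50.
A = np.diag([1.0, 50.0])
theta = momentum_descent(lambda th: A @ th, [1.0, 1.0])
```

On this quadratic the velocity damps the oscillations along the stiff direction while building up speed along the shallow one, so the iterates reach the minimum at the origin.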
### 2.2. Nesterov Momentum
Figure 7. Nesterov momentum first makes a big jump in the direction of the
previously accumulated gradient and then computes the gradient where you end
up and thus makes a correction. Source: Geoffrey Hinton (b29, )
Sutskever (b9, ) introduced a variant of classical momentum (CM) in line with
the work on Nesterov's accelerated gradient (NAG) method for optimizing convex
functions. The update rule is given as:
$\boldsymbol{v}\leftarrow\alpha\boldsymbol{v}-\epsilon\nabla_{\boldsymbol{\theta}}\left[\frac{1}{m}\sum_{i=1}^{m}L\left(f\left(\boldsymbol{x}^{(i)};\boldsymbol{\theta}+\alpha\boldsymbol{v}\right),\boldsymbol{y}^{(i)}\right)\right]$
$\boldsymbol{\theta}\leftarrow\boldsymbol{\theta}+\boldsymbol{v}$
where $\alpha\in[0,1]$ is a hyperparameter that determines the contributions
of previous gradients across multiple iterations. From the formula, it is
clear that NAG is similar to CM444CM is the abbreviation for Classical
Momentum as discussed in Section 2.1. except for where the gradient is
evaluated. With Nesterov momentum, the gradient is evaluated after the current
velocity is applied: in NAG we first make a big jump in the direction of the
previously accumulated gradient, then compute the gradient where we end up,
and make a correction. This gradient-based correction factor is vital for a
stable gradient update, especially for larger values of $\mu$. The gradient
correction to the velocity responds much more quickly in NAG: if $\mu v_{t}$
is a poor update and an inappropriate velocity, then NAG will point back
towards $\theta_{t}$ more strongly than $\nabla f\left(\theta_{t}\right)$
does, thus providing a larger and more timely correction to $v_{t}$ than CM.
It can therefore avoid oscillations and is much more effective than CM along
high-curvature directions, and it is more tolerant of larger values of $\mu$
than CM. For convex functions, Nesterov momentum achieves a convergence rate
of $O\left(1/K^{2}\right)$, where $K$ is the number of steps.
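The only change relative to classical momentum is the look-ahead point at which the gradient is evaluated, which the following sketch makes explicit (hyperparameters and the test quadratic are illustrative assumptions):

```python
import numpy as np

def nesterov(grad, theta0, lr=0.02, alpha=0.9, n_steps=300):
    """NAG: evaluate the gradient at the look-ahead point theta + alpha*v."""
    theta = np.array(theta0, dtype=float)
    v = np.zeros_like(theta)
    for _ in range(n_steps):
        v = alpha * v - lr * grad(theta + alpha * v)  # gradient after the jump
        theta = theta + v
    return theta

# Same ill-conditioned quadratic as for classical momentum.
A = np.diag([1.0, 50.0])
theta = nesterov(lambda th: A @ th, [1.0, 1.0])
```

Relative to classical momentum with the same $\alpha$ and $\epsilon$, the look-ahead gradient corrects the velocity one step earlier, which is what makes NAG more tolerant of large momentum values along stiff directions.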
Figure 8. The effect of different learning rates $\epsilon$ and momentum
coefficients $\mu$. (left) The norm of the residual of the parameter value
after projecting the parameters onto a $1$-D subspace, for all $t$. (right)
The divergence of SGD from the main linear path when trained using a maxout
network on MNIST. Source: Goodfellow (b35, )
### 2.3. Adaptive Learning Rates
The delta-bar-delta algorithm (b50, ) introduced the concept of a separate
adaptive learning rate for each individual connection (weight), set
empirically by observing its gradient after each iteration. The idea is that
if the gradient stays consistent, i.e., keeps the same sign, we increase the
learning rate; if the gradient sign reverses, we decrease it. The intuition is
that for deep neural nets the appropriate learning rate varies widely across
weights: the magnitude of the gradients is often different for different
layers, and if the weight initialization is small, the gradients are smaller
for early layers than for later ones. We start with a local gain of $1$ for
each weight and use small additive increases or multiplicative decreases
depending on the sign of the (mini-batch) gradient. The multiplicative
decreases ensure that big gains decay rapidly when oscillations start. It is
essential to limit the gains to a reasonable range and to use bigger mini-
batches, so that the sign of the gradient is not due to the sampling error of
the mini-batches.
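The per-weight gain rule described above can be sketched as follows; the increment $\kappa$ and decay factor $\phi$ are illustrative assumptions, not values from the text:

```python
import numpy as np

def delta_bar_delta_step(gain, g, g_prev, kappa=0.05, phi=0.95):
    """Per-weight gain update: small additive increase when the gradient
    keeps its sign, multiplicative decrease when it flips."""
    same_sign = np.sign(g) == np.sign(g_prev)
    return np.where(same_sign, gain + kappa, gain * phi)

# Coordinates 0 and 2 keep their sign (gain rises); coordinate 1 flips.
gain = delta_bar_delta_step(np.ones(3),
                            np.array([1.0, -2.0, 0.5]),
                            np.array([0.5, 1.0, 0.2]))
```

The multiplicative decrease dominates the additive increase once a weight starts oscillating, which is exactly the rapid gain decay the text describes.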
Jacobs (b50, ) proposed a method for combining adaptive learning rates with
momentum. He conjectures that instead of using the sign agreement between the
current gradient and the previous gradient, we use the current gradient and
the velocity of the weight and thus combine the advantages of momentum and
adaptive learning rates. Since momentum doesn’t care about axis-aligned
effects, it can deal with diagonal ellipses and traverse in diagonal
directions quickly.
### 2.4. Adagrad
The Adagrad (b51, ) algorithm individually adapts the learning rates of all
model parameters by scaling them inversely proportional to the square root of
the sum of the historical squared values of the gradient.
$\boldsymbol{g}\leftarrow\frac{1}{m}\nabla_{\boldsymbol{\theta}}\sum_{i}L\left(f\left(\boldsymbol{x}^{(i)};\boldsymbol{\theta}\right),\boldsymbol{y}^{(i)}\right)$
$\boldsymbol{r}\leftarrow\boldsymbol{r}+\boldsymbol{g}\odot\boldsymbol{g}$
The parameter with the largest gradient has the largest decrease in its
learning rate, while the parameter with the smallest gradient has the smallest
decrease. This results in faster progress in the more gently sloped regions of
parameter space and works well when the gradient is sparse. For non-convex
optimization problems, however, accumulating the squared gradients from the
beginning of training can result in a premature and excessive decrease in the
learning rate. The parameter update is given as:
$\Delta\boldsymbol{\theta}=-\frac{\epsilon}{\sqrt{\delta+\boldsymbol{r}}}\odot\boldsymbol{g}$
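The accumulation and update equations above can be sketched in a short loop (hyperparameters and the test quadratic are illustrative assumptions):

```python
import numpy as np

def adagrad(grad, theta0, lr=0.5, delta=1e-7, n_steps=500):
    """Adagrad: accumulate squared gradients in r and scale each
    coordinate's step by 1/sqrt(delta + r), per the equations above."""
    theta = np.array(theta0, dtype=float)
    r = np.zeros_like(theta)
    for _ in range(n_steps):
        g = grad(theta)
        r += g * g                              # historical squared gradients
        theta -= lr * g / np.sqrt(delta + r)    # per-coordinate scaling
    return theta

# Quadratic with very different curvatures per coordinate.
A = np.diag([1.0, 50.0])
theta = adagrad(lambda th: A @ th, [1.0, 1.0])
```

Note how the steep coordinate immediately accumulates a large $r$ and therefore takes steps of the same size as the shallow one, the equalizing effect described in the text.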
### 2.5. RMSProp
The RMSProp algorithm (b29, ) modifies Adagrad by changing the accumulated
squared gradient into an exponentially weighted moving average. In the non-
convex setting, since Adagrad decreases the learning rate rapidly using the
entire history of squared gradients, learning may halt before arriving at the
desired basin of attraction or locally convex bowl. RMSProp uses an
exponentially decaying average of the squared gradients to discard the distant
past, and can converge rapidly after finding a convex bowl, where it behaves
like an instance of AdaGrad initialized within that bowl, since AdaGrad is
designed to converge rapidly on a convex function.
$\boldsymbol{r}\leftarrow\rho\boldsymbol{r}+(1-\rho)\boldsymbol{g}\odot\boldsymbol{g}\vspace{0.5em}$
If the eigenvectors of the Hessian are axis-aligned (a dubious assumption in
general), RMSProp can correct for the curvature. Since RMSProp lacks a bias-
correction term, it can often lead to large step sizes and divergence when
gradients are sparse.
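The change relative to Adagrad is a single line: the accumulator becomes an exponentially weighted moving average. A minimal sketch (hyperparameters and the test quadratic are illustrative assumptions):

```python
import numpy as np

def rmsprop(grad, theta0, lr=0.05, rho=0.9, delta=1e-6, n_steps=1000):
    """RMSProp: replace Adagrad's growing sum with an exponentially
    decaying average of squared gradients."""
    theta = np.array(theta0, dtype=float)
    r = np.zeros_like(theta)
    for _ in range(n_steps):
        g = grad(theta)
        r = rho * r + (1 - rho) * g * g       # decaying average, not a sum
        theta -= lr * g / np.sqrt(delta + r)
    return theta

A = np.diag([1.0, 50.0])
theta = rmsprop(lambda th: A @ th, [1.0, 1.0])
```

Because `r` forgets old gradients, the effective step size does not shrink to zero; on this deterministic quadratic the iterates settle into a small neighborhood of the minimum rather than converging exactly.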
### 2.6. Adam
Adam (b18, ) (adaptive moments) follows the work of RMSProp (b29, ) and
momentum, with a few distinctions such as invariance to diagonal rescaling of
the gradients. First, Adam incorporates momentum by computing the biased
first-order moment of the gradient as an exponentially decaying moving
average. Second, similar to the RMSProp algorithm, it accumulates the second-
order moment of the gradient as an exponentially decaying average. This
combination has theoretical guarantees in convex settings. Since the moving
averages are initialized to vectors of zeros, the first-moment estimate ($s$)
and second-moment estimate ($r$) are biased towards zero during the initial
steps, especially when the decay rates $\rho_{1}$ and $\rho_{2}$ are close to
$1$. 555$\odot$ corresponds to the element-wise product.
$\boldsymbol{s}\leftarrow\rho_{1}\boldsymbol{s}+\left(1-\rho_{1}\right)\boldsymbol{g};\hskip
10.00002pt\boldsymbol{r}\leftarrow\rho_{2}\boldsymbol{r}+\left(1-\rho_{2}\right)\boldsymbol{g}\vspace{0.5em}$
Thus, we introduce bias corrections to these estimates to account for their
initialization for both the momentum term and the second-order moment
(uncentered variance).
$\hat{\boldsymbol{s}}\leftarrow\frac{\boldsymbol{s}}{1-\rho_{1}^{t}};\hskip
10.00002pt\hat{\boldsymbol{r}}\leftarrow\frac{\boldsymbol{r}}{1-\rho_{2}^{t}}$
Since RMSProp lacks this correction factor, it has a high bias in the early
stages of training, while Adam is robust to the choice of hyperparameters.
Since the accumulated squared gradients approximate the diagonal of the Fisher
information matrix, the method is more adaptive and leads to faster
convergence. The parameter update is given as:666In Adam optimization,
$\delta$ corresponds to a small constant of the order of $10^{-6}$ to ensure
numerical stability.
$\Delta\boldsymbol{\theta}=-\epsilon\frac{\hat{\boldsymbol{s}}}{\sqrt{\hat{\boldsymbol{r}}}+\delta}$
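Putting the moment accumulation, bias correction, and update equations together gives the following sketch (hyperparameters and the test quadratic are illustrative assumptions):

```python
import numpy as np

def adam(grad, theta0, lr=0.01, rho1=0.9, rho2=0.999, delta=1e-6,
         n_steps=3000):
    """Adam: bias-corrected first moment s and second moment r,
    combined per the update equations above."""
    theta = np.array(theta0, dtype=float)
    s = np.zeros_like(theta)
    r = np.zeros_like(theta)
    for t in range(1, n_steps + 1):
        g = grad(theta)
        s = rho1 * s + (1 - rho1) * g          # first moment (momentum)
        r = rho2 * r + (1 - rho2) * g * g      # second moment
        s_hat = s / (1 - rho1 ** t)            # bias correction
        r_hat = r / (1 - rho2 ** t)
        theta -= lr * s_hat / (np.sqrt(r_hat) + delta)
    return theta

A = np.diag([1.0, 50.0])
theta = adam(lambda th: A @ th, [1.0, 1.0])
```

At $t=1$ the corrections divide by $1-\rho_1$ and $1-\rho_2$ exactly, undoing the zero initialization; without them the first steps would be scaled down by factors of $10$ and $1000$ respectively.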
## 3\. Second-order methods
Figure 9. Illustration of the gradient descent and Newton’s method for a
quadratic function. Newton’s method converges in a single step (direct route)
by exploiting the curvature information contained in the Hessian matrix.
### 3.1. Preliminaries and Motivation
For a scalar-valued function, the matrix containing the second-order
derivatives is the Hessian matrix ($H$); the Hessian is a measure of the
curvature or concavity of the function, as shown in Figure 10. The matrix
containing all the first-order derivatives of a vector-valued function
$f:R^{n}\rightarrow R^{m}$ is known as the Jacobian matrix ($J$). The Jacobian
of $f:R^{n}\rightarrow R^{m}$ and the Hessian of a scalar-valued function
$f:R^{n}\rightarrow R$ are given as:
$\mathbf{J}=\begin{bmatrix}\nabla^{\mathrm{T}}f_{1}\\\ \vdots\\\
\nabla^{\mathrm{T}}f_{m}\end{bmatrix}=\begin{bmatrix}\frac{\partial
f_{1}}{\partial x_{1}}&\cdots&\frac{\partial f_{1}}{\partial x_{n}}\\\
\vdots&\ddots&\vdots\\\ \frac{\partial f_{m}}{\partial
x_{1}}&\cdots&\frac{\partial f_{m}}{\partial
x_{n}}\end{bmatrix}\quad\mathbf{H}=\begin{bmatrix}\frac{\partial^{2}f}{\partial
x_{1}^{2}}&\frac{\partial^{2}f}{\partial x_{1}\partial
x_{2}}&\cdots&\frac{\partial^{2}f}{\partial x_{1}\partial x_{n}}\\\
\frac{\partial^{2}f}{\partial x_{2}\partial
x_{1}}&\frac{\partial^{2}f}{\partial
x_{2}^{2}}&\cdots&\frac{\partial^{2}f}{\partial x_{2}\partial x_{n}}\\\
\vdots&\vdots&\ddots&\vdots\\\ \frac{\partial^{2}f}{\partial x_{n}\partial
x_{1}}&\frac{\partial^{2}f}{\partial x_{n}\partial
x_{2}}&\cdots&\frac{\partial^{2}f}{\partial
x_{n}^{2}}\end{bmatrix}\vspace{1em}$
The Hessian matrix $H$ is real and symmetric, with $H_{ij}=H_{ji}$. The
condition number of the Hessian is defined as the ratio of the largest
eigenvalue to the smallest, i.e., the ratio of the steepest ridge's steepness
to the shallowest ridge's steepness. Since the Hessian is symmetric with real
eigenvalues, and its eigenvectors form an orthogonal basis, its
eigendecomposition is given as:
$\boldsymbol{H}=\boldsymbol{Q}\operatorname{diag}(\boldsymbol{\lambda})\boldsymbol{Q}^{-1}$
where $Q$ is an orthogonal matrix ($Q^{T}Q=I$) with one eigenvector per column
and $\boldsymbol{\lambda}$ is the vector of eigenvalues. In higher dimensions,
the eigendecomposition of the Hessian can be used to test a critical point.
The point is a local minimum when $H$ is positive definite (all
$\lambda_{i}>0$); likewise, when $H$ is negative definite (all
$\lambda_{i}<0$), the point is a local maximum. A saddle point has both
positive and negative eigenvalues: it is a local minimum across one
cross-section and a local maximum within another.
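The second-derivative test just described is a few lines of NumPy (the function name and example Hessians are illustrative):

```python
import numpy as np

def classify_critical_point(H):
    """Classify a critical point from the Hessian's eigenvalues:
    all positive -> local minimum, all negative -> local maximum,
    mixed signs -> saddle point."""
    lam = np.linalg.eigvalsh(H)   # eigvalsh: H is symmetric
    if np.all(lam > 0):
        return "minimum"
    if np.all(lam < 0):
        return "maximum"
    if np.any(lam > 0) and np.any(lam < 0):
        return "saddle"
    return "degenerate"           # zero eigenvalues: test is inconclusive

# f(x, y) = x^2 - y^2 has a saddle at the origin; its Hessian is constant.
H_saddle = np.array([[2.0, 0.0], [0.0, -2.0]])
```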
Second-order optimization methods (b30, ; b34, ; b52, ) utilize the second
derivatives for the weight update. Newton's method arises naturally from the
second-order Taylor series approximation of the function $f$, ignoring the
higher-order terms:
$f(x)\approx
f(x_{0})+(x-x_{0})^{\top}\nabla_{x}f(x_{0})+\frac{1}{2}(x-x_{0})^{\top}H(x-x_{0})$
where $H$ is the Hessian of $f$ with respect to $x$ evaluated at $x_{0}$.
Solving for the critical point $x^{*}$ we get the Newton update rule:
$x^{*}=x_{0}-H(f)(x_{0})^{-1}\nabla_{x}f(x_{0})$
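The update rule above can be checked on a quadratic, where a single Newton step lands exactly on the minimizer (the matrix and starting point below are illustrative):

```python
import numpy as np

def newton_step(grad, hess, x):
    """One Newton update: x* = x - H(x)^{-1} grad(x), via a linear solve."""
    return x - np.linalg.solve(hess(x), grad(x))

# For f(x) = 0.5 x^T A x - b^T x: grad = A x - b and H = A, so one step
# from any starting point reaches the minimizer A^{-1} b exactly.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x1 = newton_step(lambda x: A @ x - b, lambda x: A, np.array([5.0, 5.0]))
```

Using `solve` rather than forming the explicit inverse is the standard numerically preferred way to apply $H^{-1}$.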
Figure 10. A quadratic function with various curvatures. The dashed line
indicates the descent direction and the value of the cost function that
gradient descent expects as it moves downhill. With negative curvature, the
cost function decreases faster than the gradient predicts; with positive
curvature, the function decreases much more slowly than expected and
eventually begins to increase. With no curvature, the gradient correctly
predicts the decrease. Source: Ian Goodfellow (b23, )
Convex functions have strong theoretical guarantees because they are well-
behaved: they lack saddle points, and all their local minima correspond to the
global minimum, precisely because the Hessian $H$ is positive semi-definite.
For a quadratic function, the Hessian $H$ is constant, so its conditioning is
fixed. A detailed analysis for a quadratic convex function is given in Section
3.2. However, for non-convex functions the Hessian matrix $\nabla^{2}f_{k}$
may not always be positive definite, in which case the Newton step of eq. 3.1
may not be a descent direction. Moreover, an ill-conditioned Hessian matters
most near the minimum: if the curvature varies strongly across directions, we
may follow a zig-zag path, leading to sub-optimal performance.
To understand the ill-conditioned problem, we consider a convex quadratic
objective given as:
$\mathcal{J}(\theta)=\frac{1}{2}\theta^{T}A\theta$
where $A$ is a positive semi-definite symmetric matrix. If the Hessian of the
objective, i.e., $\nabla^{2}\mathcal{J}(\theta)=A$, is ill-conditioned, then
the gradient also changes rapidly, and the update using gradient descent
unrolls as follows:
$\begin{array}[]{rlr}\theta_{k+1}&\leftarrow\theta_{k}-\alpha\nabla\mathcal{J}\left(\theta_{k}\right)&\\\
&=\theta_{k}-\alpha A\theta_{k}&\\\ &=(I-\alpha A)\theta_{k}&\\\
\Longrightarrow\theta_{k}&=(I-\alpha A)^{k}\theta_{0}&\\\ &=\left(I-\alpha
Q\Lambda Q^{T}\right)^{k}\theta_{0}&\\\
&=\left[Q(I-\alpha\Lambda)Q^{T}\right]^{k}\theta_{0}&\\\
&=Q(I-\alpha\Lambda)^{k}Q^{T}\theta_{0}&\end{array}$
where $Q\Lambda Q^{T}$ is the eigendecomposition of $A$, as discussed in eq.
3.1. The stability conditions are as follows:
1\. $0<\alpha\lambda_{i}\leq 1:$ the error along that eigendirection decays to
$0$ at a rate that depends on $\alpha\lambda_{i}$.
2\. $1<\alpha\lambda_{i}\leq 2:$ it oscillates (while decaying for
$\alpha\lambda_{i}<2$).
3\. $\alpha\lambda_{i}>2:$ it is unstable (diverges).
where $\alpha$ is the learning rate of the algorithm and $\lambda_{i}$ are the
eigenvalues. Hence, the learning rate must be bounded by
$\alpha<2/\lambda_{\max}$ to prevent instability. This ensures that we avoid
overshooting the minimum and going uphill in directions of strong positive
curvature. It also bounds the rate of progress in the other directions, given
as $\alpha\lambda_{i}<2\lambda_{i}/\lambda_{\max}$.
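The stability bound $\alpha<2/\lambda_{\max}$ can be demonstrated numerically by iterating $\theta\leftarrow(I-\alpha A)\theta$ on either side of the threshold (the matrix and learning rates below are illustrative):

```python
import numpy as np

def gd_on_quadratic(A, theta0, lr, n_steps=100):
    """Iterate theta <- (I - lr*A) theta for J(theta) = 0.5 theta^T A theta."""
    theta = np.array(theta0, dtype=float)
    for _ in range(n_steps):
        theta = theta - lr * (A @ theta)
    return theta

A = np.diag([1.0, 10.0])   # lambda_max = 10, so stability requires lr < 0.2
stable = gd_on_quadratic(A, [1.0, 1.0], lr=0.15)     # lr*lambda_max = 1.5 < 2
unstable = gd_on_quadratic(A, [1.0, 1.0], lr=0.25)   # lr*lambda_max = 2.5 > 2
```

With `lr=0.25` the stiff coordinate is multiplied by $|1-0.25\cdot 10|=1.5$ each step and blows up, exactly as case 3 above predicts.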
Given a starting point $x_{i}$, one can construct a local quadratic
approximation of the objective function that matches the first and second
derivative values at that point. We then minimize this quadratic approximation
instead of the original objective function. The minimizer of the approximation
is used as the starting point of the next step, and the procedure is repeated
iteratively. This ensures that the search direction accounts for the curvature
information along almost all directions, at every iteration, as shown in
Figure 12.
In the case where the loss surface has the shape of an elongated quadratic
function or an ellipsoid, as shown in Figure 11, we can transform it into a
spherical shape using a whitening transform, since the Hessian inverse
$H^{-1}$ spheres out the error surface locally, and we can then perform
gradient descent in the new coordinate system.
Figure 11. The first-order gradient descent method fails to exploit the
curvature information contained in the Hessian matrix. The condition number
greatly influences the SGD trajectory; we illustrate this using a quadratic
function $f(x)$ whose Hessian has condition number $5$, meaning that the
direction of most curvature has five times more curvature than the direction
of least curvature. Here, the most curvature is in the direction $[1,1]$, and
the least curvature is in the direction $[1,-1]$. The red lines indicate the
path followed by gradient descent. On this ellipsoidal quadratic function,
gradient descent follows a zig-zag path, repeatedly bouncing off the canyon
walls (since the gradient is perpendicular to the level sets), and it spends a
lot of time descending the canyon walls because they are the steepest feature.
Gradient descent thus takes a large number of steps to converge, and near the
minimum it oscillates back and forth until it converges. If the step size is
large, it can overshoot and reach the opposite canyon wall on the next
iteration, so the steepest direction is not the best search direction. The
large positive eigenvalue of the Hessian, corresponding to the eigenvector
pointed in this direction, indicates that this directional derivative is
rapidly increasing, so an optimization algorithm based on the Hessian could
predict that the steepest direction is not actually a promising search
direction in this context. Source: Ian Goodfellow (b23, ).
### 3.2. Quadratic Function Optimization
For a locally quadratic function, by rescaling the gradients with the inverse
of the Hessian, Newton's method jumps directly to the minimum and thus
converges in a single step. If the function is nearly quadratic, this is a
very good estimate of the minimizer of $f$: since $f$ is twice differentiable,
the quadratic model of $f$ is very accurate when $x$ is near the local
minimum. When $f$ is not quadratic but can be locally approximated as a
positive definite quadratic, Newton's method is applied iteratively using eq.
3.3, with a quadratic rate of convergence. The following result, as stated in
(b30, ), asserts the rate of convergence of Newton's method.
THEOREM 1. Suppose that $f$ is twice differentiable and that the Hessian
$\nabla^{2}f(x)$ is Lipschitz continuous in a neighborhood of a solution
$x^{*}$ at which the second-order sufficient conditions are satisfied.
Consider the iteration $x_{k+1}=x_{k}+p_{k}$, where $p_{k}$ is given by
$-\nabla^{2}f_{k}^{-1}\nabla f_{k}$. Then:
(i) if the starting point $x_{0}$ is sufficiently close to $x^{*}$, the
sequence of iterates converges to $x^{*}$.
(ii) the rate of convergence of $\left\\{x_{k}\right\\}$ is quadratic.
(iii) the sequence of gradient norms $\left\\{\left\|\nabla
f_{k}\right\|\right\\}$ converges quadratically to zero.
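As a concrete check of the one-step property stated above, the following is a minimal NumPy sketch; the quadratic below is an arbitrary illustrative choice, not taken from the references:

```python
import numpy as np

A = np.array([[10.0, 2.0], [2.0, 1.0]])    # positive definite Hessian
b = np.array([1.0, -3.0])

def grad(x):
    return A @ x - b                        # gradient of 1/2 x^T A x - b^T x

x0 = np.array([50.0, -50.0])                # arbitrary starting point
x1 = x0 - np.linalg.solve(A, grad(x0))      # one Newton step: x - H^{-1} grad

x_star = np.linalg.solve(A, b)              # exact minimizer A^{-1} b
print(np.allclose(x1, x_star))              # True: a single step suffices
```

Because the objective is exactly quadratic, the Newton step lands on the minimizer regardless of the conditioning of $A$ or the distance of the starting point.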
1 Initialize: Initial point $x_{0}$.
2 for _$k=0,1,2,\ldots$_ do
3 Factorize the matrix $B_{k}=\nabla^{2}f\left(x_{k}\right)+E_{k}$, where
$E_{k}=0$ if $\nabla^{2}f\left(x_{k}\right)$ is sufficiently positive definite
4 otherwise, $E_{k}$ is chosen to ensure that $B_{k}$ is sufficiently positive
definite.
5 Solve $B_{k}p_{k}=-\nabla f\left(x_{k}\right)$.
6 $x_{k+1}\leftarrow x_{k}+\alpha_{k}p_{k}$
7 end for
Algorithm 2 Newton’s Method with Hessian Modification
### 3.3. Newton’s Method
In Newton’s optimization algorithm, to find a local minimum $x^{*}$ via the
iteration sequence $x_{0}\rightarrow x_{1}\rightarrow
x_{2}\rightarrow\ldots\rightarrow x_{k}$, the Hessian
$\nabla^{2}f\left(x_{k}\right)$ must be positive definite at every iteration
step $k$; otherwise the Newton search direction $p_{k}$ might fail the
descent condition $\nabla f_{k}^{T}p_{k}<0$. If $f$ is strongly convex then
the iterates converge quadratically fast to $x^{*}=\arg\min_{x}f(x)$, i.e.,
$\left\|x_{k+1}-x_{*}\right\|\leq\frac{1}{2}\left\|x_{k}-x_{*}\right\|^{2},\quad\forall
k\geq 0.$
The update rule is given as follows:
$\boldsymbol{\theta}\leftarrow\boldsymbol{\theta}-\boldsymbol{H^{-1}g}$
where $H$ is the Hessian matrix of the loss function $L$ with respect to
$\theta$. Since the Hessian ($H=\nabla^{2}f(x_{k})$) is symmetric, we can
determine the Newton update using the Cholesky algorithm.777Every symmetric
positive definite matrix $A$ can be written as $A=LDL^{T}$, where $L$ is a
lower triangular matrix with unit diagonal elements and $D$ is a diagonal
matrix with positive elements on the diagonal. An essential feature of
Newton’s method is that it is invariant under linear changes of coordinates.
If $H$ is only positive semi-definite, nothing confirms that the iterate has
converged to a minimizer, since the higher derivatives of $f(x)$ are unknown.
The algorithm can diverge if the Hessian is not positive definite
($\lambda_{i}\leq 0$ indicates flat regions or directions of negative
curvature).
Thus, alternative methods that ensure eq. 3.3 corresponds to a descent
direction, while retaining the second-order information in $\nabla^{2}f_{k}$
and a superlinear rate of convergence, are described in the following
sections; this matters because the computation of the exact Hessian $H$ is
computationally expensive and error-prone. For example, in Quasi-Newton
methods an approximation $B_{k}$ to the inverse of the Hessian is computed at
each step using a low-rank formula, while in a trust region approach
$\nabla^{2}f_{k}$ is used to form a quadratic model that is minimized in a
ball around the current iterate $x_{k}$.
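A minimal NumPy sketch of a Hessian modification in the spirit of Algorithm 2, using the eigenvalue-shift damping $E_{k}=\tau I$ on an illustrative non-convex function (the function, starting point, and constants are our own choices, not from the references):

```python
import numpy as np

# f has minima at (1, 1) and (-1, -1) and a saddle point at the origin,
# where the Hessian is indefinite and a raw Newton step is unreliable.
def f(x):    return x[0]**4 + x[1]**4 - 4 * x[0] * x[1]
def grad(x): return np.array([4 * x[0]**3 - 4 * x[1],
                              4 * x[1]**3 - 4 * x[0]])
def hess(x): return np.array([[12 * x[0]**2, -4.0],
                              [-4.0, 12 * x[1]**2]])

x = np.array([0.4, -0.3])                  # starts near the saddle at the origin
for _ in range(50):
    H = hess(x)
    w = np.linalg.eigvalsh(H)
    tau = max(0.0, 0.1 - w.min())          # E_k = tau * I shifts eigenvalues up
    B = H + tau * np.eye(2)                # B_k is sufficiently positive definite
    x = x + np.linalg.solve(B, -grad(x))   # unit step (alpha_k = 1)

print(x)  # close to a minimizer; f(1, 1) = f(-1, -1) = -2
```

Near the saddle the shift $\tau$ is active and forces a descent direction; once the iterates enter a region where the Hessian is sufficiently positive definite, $\tau=0$ and the pure Newton step with its quadratic convergence takes over.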
Figure 12. Newton’s method constructs a local quadratic approximation of the
function about the current point, moves towards the minimizer of this
quadratic model, and iteratively progresses towards the minimum.
Source: Nicholas Vieau Alger (b17, ).
### 3.4. Dynamics of Optimization
As discussed in Section 3.3, the major drawback of Newton’s method is that
the update step can move in the wrong direction when an eigenvalue
$\lambda_{i}$ of the Hessian $H$ is negative: along the corresponding
eigenvector it moves opposite to the gradient descent step, and thus in a
direction of increasing error rather than towards $\theta^{*}$. To address
this issue, the generalized trust region (b30, ), saddle-free Newton (b5, ),
and natural gradient descent methods are used; they are discussed briefly as
follows:
* •
In the trust region method, a damping constant $\alpha$ is added along the
diagonal of the Hessian to offset the directions of negative curvature,
shifting each eigenvalue to $\lambda_{i}+\alpha$. However, to ensure a
descent direction one must have $\lambda_{i}+\alpha>0$, which can result in
very small steps along the remaining eigen-directions.
* •
Although the truncated Newton method ignores directions of negative
curvature, it can get stuck at saddle points. In contrast, natural gradient
descent relies on the Fisher information matrix $F$ to incorporate the
curvature information of the parameter manifold, with the update rule:
$w_{t+1}=w_{t}-\eta F\left(w_{t}\right)^{-1}g_{t}$
* •
However, (b5, ) argue that the natural gradient descent method can also
suffer from negative curvature and can converge to a non-stationary point
when the Fisher matrix is rank-deficient. The saddle-free Newton method
provides an elegant solution to this, with the update rule:
$\Delta\theta=-|\mathbf{H}|^{-1}\nabla f$
* •
Since it rescales the gradient by $|\mathbf{H}|^{-1}$ rather than applying
the raw second-order Taylor-series approximation, as in classical methods, it
can move further in directions of low curvature and escape saddle points, as
shown in Figure 14.
### 3.5. Gauss-Newton and Levenberg-Marquardt algorithm
Following 1.1, to define the non-linear least squares problem, consider a set
of $m$ points $(x_{i},y_{i})$ and the curve $y=f(x,\theta)$, where
$\theta\in R^{n}$ denotes the model parameters and $i\in[m]$, with $m\geq n$.
We define the non-linear least squares problem as:
$\min_{\theta}\sum_{i=1}^{m}\|y_{i}-f(x_{i},\theta)\|_{2}^{2}=\min_{\theta}\sum_{i=1}^{m}\|r_{i}\|_{2}^{2}\vspace{0.3em}$
where $r_{i}=y_{i}-f(x_{i},\theta)$ are the residuals for $i=1,2,\ldots,m$.
The Gauss-Newton (b10, ; b52, ) method minimizes non-linear least squares
problems using the square of the Jacobian (not the Hessian). It can be viewed
as a modified Newton’s method with line search, in which the Hessian is
approximated as $\nabla^{2}f_{k}\approx J_{k}^{T}J_{k}$, and it has
$O\left(N^{3}\right)$ complexity. The update rule is given as:
$\Delta\theta=-(J_{k}^{T}J_{k})^{-1}J_{k}^{T}r\vspace{0.4em}$
where $J(x)=\left[\begin{array}[]{c}\nabla r_{1}(x)^{T},\nabla
r_{2}(x)^{T},\ldots,\nabla r_{m}(x)^{T}\end{array}\right]^{T}$ is the
Jacobian of the residuals and $r=(r_{1},\ldots,r_{m})^{T}$. Note that the
Gauss-Newton method does not require the computation of the individual
residual Hessians $\nabla^{2}r_{i}$, which saves significant computational
time. Also, near the minimum the residuals are significantly smaller, i.e.,
$\|r_{i}\|\leq\epsilon$, which leads to rapid convergence.
The Levenberg-Marquardt (b53, ; b54, ) algorithm is a damped variant of the
Gauss-Newton method that adds a diagonal regularization to the approximate
Hessian, taking care of extreme directions of curvature and thus preventing
the update from moving in the wrong direction. This addresses the case in
which the Jacobian $J(x)$ is rank-deficient (i.e., its columns are not
linearly independent), while maintaining similar local convergence
properties, since we only replace the line search with a trust-region
strategy. The new update rule is given as:
$\Delta\theta=-(J_{k}^{T}J_{k}+\lambda I)^{-1}J_{k}^{T}r$
where $\lambda$ is the regularization parameter and $I$ is the identity
matrix. The regularization parameter addresses the case in which the
approximate Hessian is nearly singular or has directions corresponding to
small negative eigenvalues.
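A minimal sketch of the Levenberg-Marquardt update on a hypothetical exponential model $f(x,\theta)=\theta_{0}e^{\theta_{1}x}$; for brevity the damping $\lambda$ is held fixed, whereas a full implementation adapts it from the gain ratio:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20)
theta_true = np.array([2.0, -1.5])
y = theta_true[0] * np.exp(theta_true[1] * x)   # noiseless observations

def residuals(theta):
    return theta[0] * np.exp(theta[1] * x) - y

def jacobian(theta):                            # dr_i / dtheta_j
    e = np.exp(theta[1] * x)
    return np.stack([e, theta[0] * x * e], axis=1)

theta, lam = np.array([1.0, 0.0]), 1e-2         # fixed damping for the sketch
for _ in range(100):
    r, J = residuals(theta), jacobian(theta)
    # LM step: solve (J^T J + lam I) dtheta = -J^T r
    theta = theta + np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)

print(theta)  # close to theta_true = [2.0, -1.5]
```

Because the data are noiseless, $J^{T}r=0$ exactly at $\theta_{\mathrm{true}}$, so the damped iteration has the true parameters as its fixed point.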
Figure 13. Eigenvalue spectrum in a 4-layer shared-weights network. Source:
Yann LeCun (b1, )
### 3.6. Conjugate Gradients
The conjugate gradient (b55, ; b56, ) method is an iterative optimization
method that performs its update steps along mutually conjugate
($H$-orthogonal) directions and thus avoids the explicit computation of the
inverse of the Hessian. This approach overcomes a weakness of the gradient
descent method by combining the current gradient step with a memory component
given by a linear combination of the previous directions. For a quadratic
function, gradient descent fails to exploit the curvature information present
in the Hessian matrix: it descends the canyon iteratively by following a
zig-zag pattern, wasting most of its time repeatedly hitting the canyon walls
while making little progress. This happens because each line search direction
is orthogonal to the previous one, since the minimum of the objective along a
given direction is reached where the new steepest descent direction is
orthogonal to it. Such a descent direction does not preserve the progress
made along the previous direction, partially undoing it, so we must
repeatedly re-minimize along previous directions. Thus, we obtain an
ineffective zig-zag pattern of progress towards the optimum, and because of
the large step size, it often overshoots the minimum. Conjugate gradients
address this problem by making the current search direction conjugate to the
previous line search directions, so that it does not spoil the progress made
in previous iterations. This method is only suitable for solving positive
definite systems, i.e. when the Hessian has all positive eigenvalues. The
conjugacy condition correctly finds the minimum along the direction $d_{i}$,
since the error $e_{i+1}$ is $H$-orthogonal to $d_{i}$. Two directions
$d_{t-1}$ and $d_{t}$ are conjugate if they satisfy:
$\boldsymbol{d}_{t}^{\top}\boldsymbol{H}\boldsymbol{d}_{t-1}=0$
where $H$ is the Hessian matrix. The conjugate directions can be obtained by
a Gram-Schmidt-like conjugation process, and the method converges in at most
$n$ line searches for an $n$-dimensional quadratic. At training iteration
$t$, the current search direction $d_{t}$ is computed as discussed in
Algorithm 3.
1 Initialize: Initial point $x_{0}$, $r_{0}\leftarrow b-Ax_{0}$,
$d_{0}\leftarrow r_{0}$, $i\leftarrow 0$.
2 while _$r_{i}\neq 0$_ do
3 $\alpha_{i}\leftarrow\frac{r_{i}^{T}r_{i}}{d_{i}^{T}Ad_{i}}$.
4 $x_{i+1}\leftarrow x_{i}+\alpha_{i}d_{i}$.
5 $r_{i+1}\leftarrow r_{i}-\alpha_{i}Ad_{i}$.
6 $\beta_{i+1}\leftarrow\frac{r_{i+1}^{T}r_{i+1}}{r_{i}^{T}r_{i}}$.
7 $d_{i+1}\leftarrow r_{i+1}+\beta_{i+1}d_{i}$.
8 $i\leftarrow i+1$
9 end while
Algorithm 3 Conjugate Gradient Method
The parameter $\beta_{t}$ controls how much of the previous direction
contributes to the current search direction. Since choosing $\beta_{t}$ from
the eigenvectors of $H$ would be computationally expensive, we mainly use two
popular formulas (b57, ; b58, ) for computing $\beta_{t}$:
1\. The Fletcher-Reeves:
$\beta_{t}=\frac{\nabla_{\boldsymbol{\theta}}J\left(\boldsymbol{\theta}_{t}\right)^{\top}\nabla_{\boldsymbol{\theta}}J\left(\boldsymbol{\theta}_{t}\right)}{\nabla_{\boldsymbol{\theta}}J\left(\boldsymbol{\theta}_{t-1}\right)^{\top}\nabla_{\boldsymbol{\theta}}J\left(\boldsymbol{\theta}_{t-1}\right)}$
2\. The Polak-Ribiere:
$\beta_{t}=\frac{\left(\nabla_{\theta}J\left(\theta_{t}\right)-\nabla_{\theta}J\left(\theta_{t-1}\right)\right)^{\top}\nabla_{\theta}J\left(\theta_{t}\right)}{\nabla_{\theta}J\left(\theta_{t-1}\right)^{\top}\nabla_{\theta}J\left(\theta_{t-1}\right)}$
In the non-linear conjugate gradient method, $\beta_{t}$ is periodically
reset to zero: the method restarts by forgetting the past search directions
and taking the unaltered gradient at that point as the current direction,
which helps keep the search locally optimal and speeds up convergence. The
initial point also has a strong influence on the number of steps taken to
converge: the Fletcher-Reeves method is guaranteed to converge only if the
initial point is sufficiently close to the desired minimum, whereas the
Polak-Ribiere method typically converges much more quickly.
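Algorithm 3 for a linear system $Ax=b$ can be sketched as follows; the Fletcher-Reeves form of $\beta$ appears naturally, and the test system is an illustrative choice of ours:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Solve the SPD system A x = b; converges in at most n steps exactly."""
    x = x0.copy()
    r = b - A @ x                            # residual = negative gradient
    d = r.copy()                             # first direction: steepest descent
    while np.linalg.norm(r) > tol:
        alpha = (r @ r) / (d @ A @ d)        # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * (A @ d)
        beta = (r_new @ r_new) / (r @ r)     # Fletcher-Reeves form
        d = r_new + beta * d                 # new direction, A-conjugate to d
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])       # symmetric positive definite
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, np.zeros(2))
print(np.allclose(A @ x, b))                 # True
```

For this $2\times 2$ system the loop terminates after two iterations up to floating-point tolerance, matching the at-most-$n$-line-searches property.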
Figure 14. Illustration of various optimization methods for various quadratic
functions. The yellow dot indicates the starting point. Source: Pascanu (b5, ).
### 3.7. BFGS
The quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) (b59, ) method
iteratively builds an approximation to the inverse of the Hessian at each
step. BFGS incorporates second-order information using a low-rank
approximation of the Hessian, achieving something similar to the method of
conjugate gradients without the computational burden of Newton’s method. The
explicit calculation of the Hessian is not required, since the secant
equation is used to construct the approximation matrix from successive
gradient vectors.
1 Initialize: Initial point $x_{0}$, Hessian approximation $B_{0}$, where
$B_{0}$ is positive definite.
2 for _$k=0,1,2,\ldots$_ do
3 Determine the search direction $p_{k}$ by solving
$B_{k}\mathbf{p}_{k}=-\nabla f\left(\mathbf{x}_{k}\right)$, i.e.,
$p_{k}=-H_{k}\nabla f(x_{k})$ with $H_{k}=B_{k}^{-1}$.
4 Perform a line search in this direction to determine a step size
$\alpha_{k}$ satisfying the Wolfe conditions.
5 Update $x_{k+1}\leftarrow x_{k}+\alpha_{k}p_{k}$, and define
$s_{k}=x_{k+1}-x_{k}$ and $y_{k}=\nabla f(x_{k+1})-\nabla f(x_{k})$.
6
$H_{k+1}=\left(I-\rho_{k}s_{k}y_{k}^{T}\right)H_{k}\left(I-\rho_{k}y_{k}s_{k}^{T}\right)+\rho_{k}s_{k}s_{k}^{T}$,
where $\rho_{k}=\frac{1}{y_{k}^{T}s_{k}}$.
7 end for
Algorithm 4 BFGS Method
At each iteration we perform a line search along the quasi-Newton direction;
the method takes more steps to converge than Newton’s method because of the
limited precision of the approximation to the true inverse Hessian,
$H_{k}=B_{k}^{-1}$, as illustrated in Algorithm 4. The computational
complexity of the BFGS method is $O\left(N^{2}\right)$ per iteration, since
no matrix inversion is required. The rank-two update in Algorithm 4 can be
viewed as a diagonal plus low-rank approximation to the inverse Hessian
$H_{k}$. Since it builds its second-order estimate from gradient information
alone, it is sometimes more efficient than Newton’s method for non-quadratic
functions, with a super-linear rate of convergence and a usually lower cost
per iteration. Moreover, BFGS has self-correcting properties: when a previous
estimate is bad and slows down the convergence, BFGS corrects itself within
the next few steps. The approximation to the Hessian matrix $H_{k}$ is mainly
computed using: 1. the finite difference method; 2. Hessian-vector products;
3. diagonal computation; 4. the square Jacobian approximation.
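A minimal sketch of the BFGS inverse-Hessian update on the Rosenbrock function, with a simple backtracking (Armijo) line search standing in for a full Wolfe search and a curvature guard to keep the approximation positive definite (these simplifications are ours):

```python
import numpy as np

def f(x):    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
def grad(x): return np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                              200*(x[1] - x[0]**2)])

x, H = np.array([-1.2, 1.0]), np.eye(2)      # H approximates the inverse Hessian
for _ in range(500):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    p = -H @ g                               # quasi-Newton direction
    alpha = 1.0
    while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
        alpha *= 0.5                         # backtracking (Armijo condition)
    s = alpha * p
    y = grad(x + s) - g
    if y @ s > 1e-10:                        # curvature guard: keep H positive definite
        rho = 1.0 / (y @ s)
        I = np.eye(2)
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
    x = x + s

print(x)  # close to the minimizer (1, 1)
```

Skipping the update when $y^{T}s\leq 0$ is the usual safeguard when the line search does not enforce the Wolfe curvature condition.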
Figure 15. An example of a radial transformation on a 2-dimensional space to
illustrate the effect of a local perturbation on the relative flatness
between the minima. Only the areas in blue and red, i.e. inside the ball
$B_{2}(\hat{\theta},\delta)$, are affected. Here,
$\psi(r,\hat{r},\delta,\rho)=\mathbb{1}(r\notin[0,\delta])r+\mathbb{1}(r\in[0,\hat{r}])\rho\frac{r}{\hat{r}}+\mathbb{1}(r\in]\hat{r},\delta])\left((\rho-\delta)\frac{r-\delta}{\hat{r}-\delta}+\delta\right)$
represents the function under consideration and
$g^{-1}(\theta)=\frac{\psi\left(\|\theta-\hat{\theta}\|_{2},\hat{r},\delta,\rho\right)}{\|\theta-\hat{\theta}\|_{2}}(\theta-\hat{\theta})+\hat{\theta}$
is the radial-basis transformation. Source: Dinh (b4, )
### 3.8. L-BFGS
Since the memory cost of storing and manipulating the inverse of the Hessian
is prohibitively large for deep neural nets, the L-BFGS (b60, ) method is
used to circumvent this issue. It maintains a simple and compact
approximation of the Hessian, constructed using only the $m$ most recent
iterations to incorporate the curvature information, where typically
$3<m<20$. Thus, instead of storing the fully dense $n\times n$ approximation,
we modify $H_{k}$ by implicitly storing only $m$ correction pairs, thereby
including curvature information from only the $m$ most recent iterations (we
store only a set of length-$n$ vectors). At each stage we add and delete
information: the oldest correction pair is discarded and the new
$s_{k},y_{k}$ pair is stored.
This yields a good rate of convergence and may even outperform Hessian-free
methods, since we maintain only a fixed amount of memory and iteratively
discard curvature information from the distant past, which is unlikely to
influence the behavior of the Hessian at the current iteration, thus saving
storage. The update rule is the same as in the BFGS method, as discussed in
Algorithm 4; the only modification is that the vector pair
$\left\{s_{k-m},y_{k-m}\right\}$ is discarded from storage, with the
$s_{k},y_{k}$ and $x_{k}$ updates following Algorithm 4. The main weakness of
L-BFGS is that it converges slowly for functions whose Hessian is
ill-conditioned, with a wide distribution of eigenvalues.
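The storage scheme above is usually implemented via the two-loop recursion, which applies the inverse-Hessian approximation to a vector using only the stored $(s_{i},y_{i})$ pairs. A minimal sketch on an illustrative quadratic with exact line search (the test problem and step rule are our own choices):

```python
import numpy as np
from collections import deque

def two_loop(g, pairs):
    """Return an approximation to H_k @ g using only the stored (s, y) pairs."""
    q, alphas = g.copy(), []
    for s, y in reversed(pairs):             # newest to oldest
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if pairs:                                # initial scaling H_0 = gamma * I
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(pairs, reversed(alphas)):   # oldest to newest
        rho = 1.0 / (y @ s)
        q += (a - rho * (y @ q)) * s
    return q

# Minimize f = 1/2 x^T A x - b^T x; for a quadratic, y = grad(x+s) - grad(x) = A s.
A, b = np.diag([1.0, 10.0]), np.array([1.0, 1.0])
x, pairs = np.zeros(2), deque(maxlen=5)      # keep only m = 5 correction pairs
for _ in range(20):
    g = A @ x - b
    if np.linalg.norm(g) < 1e-12:
        break
    p = -two_loop(g, pairs)
    alpha = -(g @ p) / (p @ A @ p)           # exact line search (quadratic f)
    s = alpha * p
    pairs.append((s, A @ (x + s) - b - g))   # store (s_k, y_k); oldest auto-dropped
    x = x + s

print(x)  # close to the minimizer A^{-1} b = [1.0, 0.1]
```

The `deque(maxlen=m)` makes the discard of the oldest pair $\{s_{k-m},y_{k-m}\}$ automatic, mirroring the fixed-memory scheme described above.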
### 3.9. ADAHESSIAN
Figure 16. Illustration of the local and global curvature wherein the local
curvature information is avoided using an exponential moving average.
Adahessian converges in $1000$ iterations without a moving average, while it
takes only seven iterations with the moving average enabled. Source: Zhewei
Yao (b34, )
Adahessian (b34, ) is a second-order stochastic optimization method that
incorporates the curvature information present in the Hessian for faster
convergence and reduced memory overhead, using:
1\. a Hessian diagonal approximation based on Hutchinson’s method;
2\. an RMS exponential moving average to reduce the variation of the diagonal
Hessian approximation across iterations;
3\. block diagonal averaging to reduce the variance of the Hessian diagonal
elements.
As discussed earlier, second-order methods precondition the gradient with the
inverse of the Hessian, which is designed to accelerate learning for
ill-conditioned problems, i.e., problems with flat curvature in some
directions and sharp curvature in others, by rescaling the gradient vector
along each of these directions. By handling extreme curvature directions with
appropriate gradient rotation and scaling, such methods converge reliably to
a critical point, and are thus more dependable than other adaptive
first-order methods. But this comes at a prohibitive cost: it is
computationally infeasible to compute the Hessian at each iteration, and
doing so does not scale to modern neural network architectures. Thus, the
Hessian matrix is approximated by a diagonal operator:
$\Delta w=\operatorname{diag}(\mathbf{H})^{-k}\mathbf{g}$
where $\Delta w$ is the weight update and $0\leq k\leq 1$ is the Hessian
power. The Hessian diagonal, denoted $D=\operatorname{diag}(\mathrm{H})$, is
computed using Hutchinson’s method. To obtain the action of the Hessian
$\mathbf{H}$ without explicit computation, the Hessian-free identity
$\frac{\partial\mathbf{g}^{T}z}{\partial\theta}=\frac{\partial\mathbf{g}^{T}}{\partial\theta}z+\mathbf{g}^{T}\frac{\partial
z}{\partial\theta}=\frac{\partial\mathbf{g}^{T}}{\partial\theta}z=\mathbf{H}z$
is used, where $z$ is a random vector drawn from the Rademacher distribution.
The Hessian diagonal is then given as the expectation of $z\odot\mathbf{H}z$,
an estimator that performs well across a variety of tasks:
$\boldsymbol{D}=\operatorname{diag}(\mathbf{H})=\mathbb{E}[z\odot(\mathbf{H}z)]$
This has a computational overhead equivalent to one gradient backpropagation
step. A moving-average term (spatial averaging) and Hessian momentum (with an
update rule similar to Adam) are employed to smooth out stochastic variations
in the curvature information between iterations, leading to faster
convergence for strictly convex, smooth functions. The spatially averaged
diagonal is computed as:
$D^{(s)}[ib+j]=\frac{\sum_{k=1}^{b}D[ib+k]}{b},\text{ for }1\leq j\leq b,0\leq
i\leq\frac{d}{b}-1$
where $D^{(s)}$ is the spatially averaged Hessian diagonal and $b$ is the
block size. Momentum for the Hessian diagonal with spatial averaging is then
applied as shown in Table 1.
Term | Equation
---|---
$\overline{\boldsymbol{D}}_{t}$ | $\sqrt{\frac{\left(1-\beta_{2}\right)\sum_{i=1}^{t}\beta_{2}^{t-i}\boldsymbol{D}_{i}^{(s)}\boldsymbol{D}_{i}^{(s)}}{1-\beta_{2}^{t}}}$
$m_{t}$ | $\frac{\left(1-\beta_{1}\right)\sum_{i=1}^{t}\beta_{1}^{t-i}\mathbf{g}_{i}}{1-\beta_{1}^{t}}$
$v_{t}$ | $\left(\overline{\boldsymbol{D}}_{t}\right)^{k}$
Table 1. Update Rule
The parameter update rule is given as, $\theta_{t+1}=\theta_{t}-\eta
m_{t}/v_{t}$, where $m_{t}$ and $v_{t}$ are the first and second order moments
for AdaHessian with $0<\beta_{1}<\beta_{2}<1$.
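Hutchinson's estimator for the Hessian diagonal can be sketched as follows, using a small explicit matrix as a stand-in Hessian (AdaHessian instead obtains $Hz$ from backpropagation without ever forming $H$):

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[3.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 2.0]])               # stand-in Hessian

# diag(H) = E[z ⊙ (H z)] for Rademacher z, since E[z_i z_j] = delta_ij.
est, n_samples = np.zeros(3), 10_000
for _ in range(n_samples):
    z = rng.choice([-1.0, 1.0], size=3)       # Rademacher vector
    est += z * (H @ z)                        # z ⊙ (H z)
est /= n_samples

print(est)  # close to diag(H) = [3.0, 1.0, 2.0]
```

The off-diagonal entries of $H$ only contribute zero-mean noise, which is exactly the variance that AdaHessian's moving average and block averaging are designed to suppress.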
Optimizer | CIFAR-10 | CIFAR-100
---|---|---
SGD | $94.33\pm 0.1$ | $79.12\pm 0.54$
Polyak | $95.62\pm 0.07$ | $78.11\pm 0.37$
Adam | $94.76\pm 0.23$ | $77.24\pm 0.12$
LOOKAHEAD | $\textbf{95.63}\pm\textbf{0.03}$ | $\textbf{79.42}\pm\textbf{0.05}$
Dataset | CIFAR-10 | ImageNet
---|---|---
| ResNet20 | ResNet32 | ResNet18
SGD | 93.01 $\pm$ 0.05 | $\mathbf{94.23}\pm\mathbf{0.17}$ | 70.01
Adam | 90.73 $\pm$ 0.23 | 91.53 $\pm$ 0.1 | 65.42
AdamW | 91.96 $\pm$ 0.15 | 92.82 $\pm$ 0.1 | 67.82
ADAHESSIAN | $\mathbf{93.11}\pm\mathbf{0.25}$ | 93.08 $\pm$ 0.10 | $\mathbf{70.12}$
Figure 17. (left) Test accuracy ($\pm$ standard deviation) across $10$ trials
using different optimizers on the CIFAR-$10/100$ datasets with the
ResNet$18$ architecture. (right) Model accuracy ($\pm$ standard deviation)
across $10$ trials using different optimizers on the CIFAR-$10$ and ImageNet
datasets with the ResNet$20$, ResNet$32$, and ResNet$18$ architectures. All
models are trained for $200$ epochs with a batch size of $128$ and an initial
learning rate of $0.1/0.001/0.01$ for SGD/Adam/AdamW respectively. For
Adahessian we set the block size to $9$, $k=1$, and a learning rate of
$0.15$; for Lookahead we set $k=5$ and use a learning rate of $0.1$. The
exact training details mentioned in (b34, ; b61, ) were used for model
training.
### 3.10. Newton-CG method
The line search Newton-CG (b30, ) method relies on Hessian-vector products
$\nabla^{2}f_{k}d$, has attractive global convergence properties with
super-linear convergence, and can produce useful search directions even when
the Hessian is indefinite. This Hessian-free (HF) method does not require
explicit knowledge of the Hessian; instead, it uses finite differences to
access the action of the true Hessian at each point. The product of the
Hessian $\nabla^{2}f_{k}$ and a vector $d$ is approximated by the
finite-differencing formula:
$\nabla^{2}f_{k}d\approx\frac{\nabla f\left(x_{k}+hd\right)-\nabla
f\left(x_{k}\right)}{h}$
for some small differencing interval $h$. Also, using the Gauss-Newton matrix
$G$ instead of the Hessian can give better search directions, since $G$ is
always positive semi-definite and thus avoids the problem of negative
curvature. CG is designed for positive-definite systems, while the Hessian
may be indefinite at points far from the solution; to account for this we can
terminate CG early, or handle negative-curvature directions explicitly when
they are detected. Since the HF approach has no access to the true Hessian,
we may run across directions of extremely low or even negative curvature.
Along such directions the line search can achieve only small reductions in
the function value even after many function evaluations, and if we are not
careful such a direction can produce a very large CG step, possibly taking
the iterate outside the region where the Newton approximation is valid. One
solution is to add a damping parameter to the Hessian,
$\mathrm{B}=\mathrm{H}+\lambda\mathrm{I}$, where $I$ is the identity matrix.
This reconditions $H$ and controls the length of each CG step, mimicking a
trust region method in which $\lambda$ is adapted using the reduction ratio:
$\rho_{k}=\frac{f\left(x_{k}+p_{k}\right)-f\left(x_{k}\right)}{q_{k}\left(p_{k}\right)-q_{k}(0)}.$
$\text{ If }\rho_{k}<\frac{1}{4},\text{ then
}\lambda\leftarrow\alpha\lambda.\hskip 3.99994pt\text{ If}\hskip
5.0pt\rho_{k}>\frac{3}{4},\text{ then }\lambda\leftarrow\alpha^{-1}\lambda.$
The main drawback of this method appears when the Hessian
$\nabla^{2}f(x_{k})$ is nearly singular, which requires many function
evaluations without a significant reduction in the function value. To
alleviate this difficulty, the trust-region Newton-CG method or the
Newton-Lanczos method can be used.
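The finite-difference Hessian-vector product at the heart of the HF approach can be sketched as follows (the test function is an illustrative choice of ours):

```python
import numpy as np

def f(x):    return x[0]**2 * x[1] + np.sin(x[0]) + x[1]**2
def grad(x): return np.array([2*x[0]*x[1] + np.cos(x[0]), x[0]**2 + 2*x[1]])

def hvp(x, d, h=1e-6):
    """H d ≈ (grad(x + h d) - grad(x)) / h: one extra gradient evaluation."""
    return (grad(x + h * d) - grad(x)) / h

x = np.array([0.7, -1.3])
d = np.array([1.0, 2.0])
# Exact Hessian of f at x, for comparison only:
H = np.array([[2*x[1] - np.sin(x[0]), 2*x[0]],
              [2*x[0],                2.0   ]])
print(np.allclose(hvp(x, d), H @ d, atol=1e-4))  # True
```

Each CG iteration inside Newton-CG needs exactly one such product, so the Hessian is never formed or stored.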
### 3.11. Lookahead
The Lookahead (b61, ) optimizer is orthogonal to the previous approaches and
proposes a distinct way of updating the weights: it maintains two sets of
weights, slow weights ($\phi$) and fast weights ($\theta$). It runs an inner
optimization loop for $k$ steps using a standard method such as SGD or Adam
to update the fast weights iteratively, and follows this with a linear
interpolation of the slow weights along the direction $\theta-\phi$ in weight
space. After each slow-weights update, the fast weights are reset to the new
slow weights, making rapid progress in low-curvature directions with reduced
oscillations and quicker convergence. The slow-weights trajectory can be
characterized as an exponential moving average (Polyak averaging) of the fast
weights, which has strong theoretical guarantees. After $k$ inner-loop steps,
the slow weights are updated as follows:
$\displaystyle\phi_{t+1}$
$\displaystyle=\phi_{t}+\alpha\left(\theta_{t,k}-\phi_{t}\right)$
$\displaystyle=\alpha\left[\theta_{t,k}+(1-\alpha)\theta_{t-1,k}+\ldots+(1-\alpha)^{t-1}\theta_{0,k}\right]+(1-\alpha)^{t}\phi_{0}$
where $\alpha$ is the learning rate for the slow weights $\phi$. For a choice
of inner optimizer $A$ and a mini-batch of $d$ data points, the update rule
for the fast weights is:
$\theta_{t,i+1}=\theta_{t,i}+A\left(L,\theta_{t,i-1},d\right)$
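A minimal sketch of the Lookahead loop, wrapping plain SGD as the inner optimizer $A$ on an illustrative quadratic loss (the loss, $k$, $\alpha$, and learning rate are our own choices):

```python
import numpy as np

def grad(theta):                       # gradient of an illustrative quadratic loss
    return 2.0 * (theta - np.array([3.0, -1.0]))

phi = np.zeros(2)                      # slow weights
k, alpha, lr = 5, 0.5, 0.1
for _ in range(40):                    # outer loop
    theta = phi.copy()                 # reset fast weights to the slow weights
    for _ in range(k):                 # inner loop: k SGD steps on fast weights
        theta -= lr * grad(theta)
    phi = phi + alpha * (theta - phi)  # slow update along the direction theta - phi

print(phi)  # close to the minimizer [3.0, -1.0]
```

Each outer iteration contracts the slow-weight error by a fixed factor here; with a stochastic or curved loss, the interpolation instead damps the oscillations of the fast-weight trajectory.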
## 4\. Conclusion
We have summarized first-order and second-order optimization methods from a
theoretical viewpoint, drawing on the optimization literature. High-error
local minima traps do not appear when the model is overparameterized: local
minima with high error relative to the global minimum occur with
exponentially small probability in $N$, and critical points with high cost
are far more likely to be saddle points. Finally, since second-order methods
are computationally intensive, with an $O(N^{3})$ complexity for computing
with the Hessian, only networks with a small number of parameters can
practically be trained via Newton’s method.
## 5\. Acknowledgments
We thank the anonymous reviewers for their review and suggestions.
## References
* (1) LeCun, Y. A., Bottou, L., Orr, G. B., and Muller, K. R. (2012). Efficient backprop. In Neural networks: Tricks of the trade (pp. 9-48). Springer, Berlin, Heidelberg.
* (2) Lee, J. D., Simchowitz, M., Jordan, M. I., and Recht, B. (2016, June). Gradient descent only converges to minimizers. In Conference on learning theory (pp. 1246-1257). PMLR.
* (3) Sagun, L., Guney, V. U., Arous, G. B., and LeCun, Y. (2014). Explorations on high dimensional landscapes. arXiv preprint arXiv:1412.6615.
* (4) Dinh, L., Pascanu, R., Bengio, S., and Bengio, Y. (2017, July). Sharp minima can generalize for deep nets. In International Conference on Machine Learning (pp. 1019-1028). PMLR.
* (5) Dauphin, Y. N., Pascanu, R., Gulcehre, C., Cho, K., Ganguli, S., and Bengio, Y. (2014). Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. Advances in neural information processing systems, 27.
* (6) Martens, James. ”Deep learning via hessian-free optimization.” ICML. Vol. 27. 2010.
* (7) Pascanu, Razvan, et al. ”On the saddle point problem for non-convex optimization.” arXiv preprint arXiv:1405.4604 (2014).
* (8) Fort, Stanislav, and Stanislaw Jastrzebski. ”Large scale structure of neural network loss landscapes.” Advances in Neural Information Processing Systems 32 (2019).
* (9) Sutskever, Ilya, et al. ”On the importance of initialization and momentum in deep learning.” International conference on machine learning. PMLR, 2013.
* (10) Moré, Jorge J., and Danny C. Sorensen. Newton’s method. No. ANL-82-8. Argonne National Lab., IL (USA), 1982.
* (11) Fletcher, Roger. Practical methods of optimization. John Wiley and Sons, 2013.
* (12) Choromanska, Anna, et al. ”The loss surfaces of multilayer networks.” Artificial intelligence and statistics. PMLR, 2015.
* (13) Draxler, F., Veschgini, K., Salmhofer, M., and Hamprecht, F. (2018, July). Essentially no barriers in neural network energy landscape. In International conference on machine learning (pp. 1309-1318). PMLR.
* (14) Li, Chunyuan, et al. ”Measuring the intrinsic dimension of objective landscapes.” arXiv preprint arXiv:1804.08838 (2018).
* (15) Jastrzębski, Stanisław, et al. ”Three factors influencing minima in sgd.” arXiv preprint arXiv:1711.04623 (2017).
* (16) Nesterov, Yurii. ”Introductory lectures on convex optimization: a basic course.” (2004).
* (17) Alger, Nicholas Vieau. Data-scalable Hessian preconditioning for distributed parameter PDE-constrained inverse problems. Diss. 2019.
* (18) Kingma, Diederik P., and Jimmy Ba. ”Adam: A method for stochastic optimization.” arXiv preprint arXiv:1412.6980 (2014).
* (19) Shewchuk, Jonathan Richard. ”An introduction to the conjugate gradient method without the agonizing pain.” (1994): 1.
* (20) Gill, Philip E., and Walter Murray. ”Newton-type methods for unconstrained and linearly constrained optimization.” Mathematical Programming 7.1 (1974): 311-350.
* (21) Roger Grosse and Jimmy Ba. ”CSC 421/2516 Lectures 7–8: Optimization”.
* (22) Boyd, Stephen, Stephen P. Boyd, and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
* (23) Ian Goodfellow, Yoshua Bengio, Aaron Courville. ”Deep Learning”, (2016).
* (24) Luenberger, David G., and Yinyu Ye. Linear and nonlinear programming. Vol. 2. Reading, MA: Addison-wesley, 1984.
* (25) Moré, Jorge J., and Danny C. Sorensen. ”On the use of directions of negative curvature in a modified Newton method.” Mathematical Programming 16.1 (1979): 1-20.
* (26) Gill, Philip E., and Walter Murray. ”Newton-type methods for unconstrained and linearly constrained optimization.” Mathematical Programming 7.1 (1974): 311-350.
* (27) Nesterov, Yu E. ”A method for solving the convex programming problem with convergence rate $O\bigl(\frac{1}{k^{2}}\bigr)$.” Dokl. Akad. Nauk SSSR. Vol. 269. 1983.
* (28) Nesterov, Yurii. Introductory lectures on convex optimization: A basic course. Vol. 87. Springer Science and Business Media, 2003.
* (29) Hinton, Geoffrey, Nitish Srivastava, and Kevin Swersky. ”Neural networks for machine learning lecture 6a overview of mini-batch gradient descent.” Cited on 14.8 (2012): 2.
# Fast-SNARF: A Fast Deformer for Articulated Neural Fields
Xu Chen*1,2, Tianjian Jiang*1, Jie Song1, Max Rietmann3, Andreas Geiger2,4, Michael J. Black2, Otmar Hilliges1
1ETH Zürich 2Max Planck Institute for Intelligent Systems, Tübingen 3NVIDIA 4University of Tübingen
*Xu Chen and Tianjian Jiang contributed equally.
###### Abstract
Neural fields have revolutionized the area of 3D reconstruction and novel view
synthesis of _rigid_ scenes. A key challenge in making such methods applicable
to _articulated_ objects, such as the human body, is to model the deformation
of 3D locations between the rest pose (a canonical space) and the deformed
space. We propose a new articulation module for neural fields, Fast-SNARF,
which finds accurate correspondences between canonical space and posed space
via iterative root finding. Fast-SNARF is a drop-in replacement in
functionality to our previous work, SNARF, while significantly improving its
computational efficiency. We contribute several algorithmic and implementation
improvements over SNARF, yielding a speed-up of $150\times$. These
improvements include voxel-based correspondence search, pre-computing the
linear blend skinning function, and an efficient software implementation with
CUDA kernels. Fast-SNARF enables efficient and simultaneous optimization of
shape and skinning weights given deformed observations without correspondences
(e.g. 3D meshes). Because learning of deformation maps is a crucial component
in many 3D human avatar methods and since Fast-SNARF provides a
computationally efficient solution, we believe that this work represents a
significant step towards the practical creation of 3D virtual humans.
## 1 Introduction
3D avatars are an important building block for many emerging applications in
the metaverse, AR/VR and beyond. To this end, an algorithm to reconstruct and
animate non-rigid articulated objects, such as humans, accurately and quickly
is required. This challenging task requires modeling the 3D shape and
deformation of the human body – a complex, articulated, non-rigid object. For
such techniques to be widely applicable, it is paramount that algorithms do
not require manually provided annotations nor that subjects appear in a-priori
known poses. Therefore, inferring the transformation that 3D locations undergo
between the posed observation space and some canonical space is the key
challenge to attain a model that can be animated.
Figure: Fast-SNARF for Articulated Neural Fields. Fast-SNARF finds accurate
correspondences between canonical space and posed space while being
150$\times$ faster than our previous method SNARF [9]. Fast-SNARF enables
optimizing shape and skinning weights given deformed observations without
correspondences (e.g. 3D meshes).
Static shape modeling has recently seen much progress with the advent of
neural fields [33, 43, 36, 37]. Such representations are promising due to
their ability to represent complex geometries of arbitrary topology at
arbitrary resolution, by leveraging multi layer perceptrons (MLPs) to encode
spatial quantities of interest (e.g. occupancy probabilities) in 3D space.
Recent work [37] has further achieved fast reconstruction and real-time view
synthesis of rigid scenes with high quality. However, to enable fast _non-
rigid_ reconstruction and realistic _animation_ of articulated objects, a
robust and fast articulation module is needed.
Articulation of neural fields is typically modeled via deformation of 3D
space, which warps neural fields from a rest pose (canonical space) into any
target pose (posed space), leveraging dense deformation fields. Several
techniques have been proposed to construct such deformation fields. Building
upon traditional mesh-based linear blend skinning (LBS) [22], several works
[23, 35, 51, 44, 56] learn dense skinning weight fields in _posed space_ and
then derive the deformation fields via LBS. While inheriting the smooth
deformation properties of LBS, the resulting skinning weight fields cannot
generalize to unseen poses, because they are _pose-dependent_: changes in
pose lead to drastic changes in the spatial layout of the deformation field,
and these changes have not been observed at training time for unseen poses.
line of work approximates the mapping as piece-wise rigid transformations [13,
39], which suffers from discontinuous artifacts at joints. The mapping could
also be approximated based on a skinned base mesh [21], which can lead to
inaccuracies due to the mismatch between the base mesh and the actual shape
and suffers from erroneous nearest neighbor associations in regions with self-
contact.
Our recent work, SNARF [9], overcomes these problems by design in that it
learns a skinning weight field in canonical space which is _pose-independent_.
This formulation allows for natural deformation due to the smooth deformation
properties of LBS and generalizes to unseen poses because of the pose-
independent canonical skinning weights. Furthermore, in contrast to previous
methods [23, 35, 51, 44, 56], pose-independent skinning weights can be learned
unsupervised, i.e. without the need for ground-truth skinning weights or other
forms of annotations.
However, a major limitation of SNARF is the algorithm’s computational
inefficiency. While learning a canonical skinning weight field enables
generalization, the deformation from posed to canonical space is defined
implicitly, and hence can only be determined numerically via iterative root
finding. The efficiency of the operations at each root finding iteration play
a critical role in the speed of the overall articulation module. Therefore,
computationally expensive operations in SNARF, such as computing LBS and
evaluating the skinning weight field, parameterized by an MLP, lead to
prohibitively slow speed – learning an animatable avatar from 3D meshes takes
8 hours on high-end GPUs.
In this paper, we propose Fast-SNARF, an articulation module that is fast yet
preserves the accuracy and robustness of SNARF. We achieve this by
significantly reducing the computation at each root finding iteration in the
articulation module. First, we use a compact voxel grid to represent the
skinning weight field instead of an MLP. The voxel-based representation can
replace MLPs without loss of fidelity because the skinning weight field is
naturally smooth, and is pose-independent in our formulation. In addition,
exploiting the linearity of LBS, we factor out LBS computations into a pre-
computation stage without loss of accuracy. As a result, the costly MLP
evaluations and LBS calculations in SNARF are replaced by a single tri-linear
interpolation step, which is lightweight and fast. Together with a custom CUDA
kernel implementation, Fast-SNARF can deform points with a speed-up of
$150\times$ w.r.t. SNARF (from 800 ms to 5 ms) without loss of accuracy.
In our experiments we follow the setting of SNARF and learn an animatable
avatar, including its shape and skinning weights, from 3D scans in various
poses, represented by a pose-conditioned occupancy field parameterized by an
MLP. The overall inference and training speed, including both articulation and
evaluation of the canonical shape MLP, is increased by $30\times$ and
$15\times$ respectively. Note that the speed bottleneck is shifted from
articulation (in SNARF) to evaluating the canonical shape MLP (in Fast-SNARF).
Fast-SNARF is also faster than other articulation modules and is significantly
more accurate, as we show empirically. While we focus on learning occupancy
networks, Fast-SNARF may be interfaced with other neural fields in the same
manner that SNARF and variants have been utilized [26, 58, 64, 24].
We hope Fast-SNARF will accelerate research on articulated 3D shape
representations and release the code on our project webpage
(https://github.com/xuchen-ethz/fast-snarf) to facilitate future research.
Relation to SNARF [9]: This paper is an extension of SNARF [9], a conference
paper published at ICCV ’21 which models articulation of neural fields. This
paper addresses the main limitation of SNARF, i.e. its computational
inefficiency via a series of algorithmic and implementation improvements
described in Section 4. We provide a speed and accuracy comparison of Fast-
SNARF with SNARF and other baseline methods, and thorough ablations in Section
5.
## 2 Related Work
### 2.1 Rigid Neural Fields
Neural fields have emerged as a powerful tool to model complex rigid shapes
with arbitrary topology in high fidelity by leveraging the expressiveness of
neural networks. These neural networks regress the distance to the surface
[43], occupancy probability [33], color [41] or radiance [36] of 3D points.
Conditioning on local information such as 2D image features or 3D point cloud
features produces more detailed reconstructions [10, 17, 46, 49, 50] than
using global features. Such representations can be trained with direct 3D
supervision, e.g. ground truth occupancy or distance to the surface, or can be
trained indirectly with raw 3D point clouds [2, 16, 51] or 2D images [36, 38,
53, 61].
Fast Rigid Neural Fields: One major limitation of neural field representations
is their slow training and inference speed, mainly due to the fact that
multiple evaluations of deep neural networks are necessary to generate images
and each of these evaluations is time-consuming. Several works have recently
been proposed to improve the training [28, 55, 52, 54, 37, 6] and inference
speed [48, 15, 19, 62]. The core idea is to leverage explicit representations
[46], such as voxel grids or hash tables, to store features for a sparse set
of points in space. The dense field can then be obtained by interpolating
sparse features and by decoding the features using neural networks. Instead of
point locations, these networks take features as input, which are more
informative, enabling the network to be shallow and hence more computationally
efficient. However, the underlying explicit representations have a fixed
spatial layout which limits these methods to rigid shapes.
Our proposed articulation module can deform rigid neural fields to enable non-
rigid animation at inference time and enable learning from deformed
observations during training. Importantly, our module runs at a comparable
speed to recent fast rigid neural field representations (e.g. [37]) and is
thus complementary to advancements made in accelerating neural fields.
### 2.2 Articulation of Neural Fields
Recently, several articulation algorithms for neural fields have been
proposed. These methods serve as a foundation for many tasks such as
generative modeling of articulated objects or humans [12, 8, 20, 40, 3, 63],
and reconstructing animatable avatars from scans [13, 51, 35, 9, 56, 27, 34],
depth [42, 57, 14], videos [39, 45, 29, 7, 64, 47, 24, 59, 58, 26, 25] or a
single image [21, 18, 60].
Part Based Models: One option is to model articulated shapes as a composition
of multiple parts [13, 39, 34]. Rigidly transforming these parts according to
the input bone transformations produces deformed shapes. While preserving the
global structure after articulation, the continuity of surface deformations is
violated, causing artifacts at the intersections of parts. Moreover, inferring
the correct part assignment from raw data is challenging and typically
requires ground-truth supervision.
Backward Skinning: Another line of work [23, 51, 35, 56] proposes to learn
skinning weight fields in deformed space and then derive the backward warping
field using LBS to map points in deformed space to canonical ones. Such
methods are straightforward to implement but inherently suffer from poor
generalization to unseen poses. Backward deformation fields are defined in
deformed space and, hence, inherently deform with the pose. Thus, the network
must memorize deformation fields for different spatial configurations, making
it difficult to generate deformations that have not been seen during training.
Learning such pose-dependent skinning weight fields is also challenging, thus
existing methods often rely on strong supervision via ground-truth skinning
weights. Moreover, due to the varying spatial configuration, such pose-
dependent skinning weights cannot be modeled using acceleration data
structures such as explicit voxel grids.
Forward Skinning: Learning the skinning weights in canonical space instead of
deformed space is a natural way to resolve the generalization issue. However,
deriving the mapping from deformed to canonical points with canonical skinning
weights is not straightforward, because the skinning weights of the deformed
query points are unknown. Thus, SNARF [9] attains this mapping using an
iterative root finding formulation, which finds the canonical points that are
forward-skinned to the deformed query location. This formulation enables the
articulation of neural fields into arbitrary poses, even those unseen during
training. The pose-independent canonical skinning weights can be learned
unsupervised without the need for ground-truth skinning weights. Moreover,
multiple canonical correspondences can be found using such methods, which is
important to handle self-contact. This forward skinning formulation has
already found widespread use in many tasks, such as generative modeling [8],
or personalized avatar reconstruction from scans [27], depth [14], or images
[24, 64, 58, 26].
However, one major limitation of this formulation is its slow speed due to the
expensive computation at each root finding iteration. The original SNARF model
relies on an MLP to parameterize the skinning weight field. At each root
finding iteration, SNARF requires evaluating the MLP to compute LBS weights,
which is time-consuming. This limitation is further amplified when combining
forward skinning with rendering algorithms that require many queries along
many rays (cf. [11]). To reduce computation time, existing methods [24, 58]
use an explicit mesh to tighten the search space of root finding. However,
these methods introduce the overhead of mesh extraction and still require days
of training time to learn avatars from images.
We address this problem by using a voxel-based parameterization of the
skinning weight field and by factoring out the LBS computation into a
pre-computation stage. Fast-SNARF does not require mesh extraction in the
training loop and is therefore more versatile and much faster to train than
methods that rely on meshes (e.g. [24]): minutes vs. days. Our method also
enables learning the skinning weights jointly with the shape.
## 3 Differentiable Forward Skinning
Figure 1: General Framework for Articulated Neural Field Representations.
Given a query point in deformed space $\mathbf{x}^{\prime}$ and the input pose
(represented as joint angles $\mathbf{p}$ and 6D transformations
$\mathbf{B}$), an articulation module first finds its canonical
correspondences $\mathbf{x}^{*}$. The canonical shape representation
${f}_{\sigma_{f}}$ then outputs the occupancy probabilities or densities at
$\\{\mathbf{x}^{*}\\}$ which are finally aggregated to yield the occupancy
probability or density of the query point $\mathbf{x}^{\prime}$.
In this section, we briefly summarize the differentiable forward skinning
approach proposed in SNARF [9]. We then discuss Fast-SNARF in Section 4.
General Pipeline: Figure 1 illustrates the general pipeline for modeling
articulated neural fields. Given a query point in posed space, an articulation
module first finds its correspondences in canonical space according to the
input body pose. Then the canonical shape properties are evaluated at the
correspondence locations. When multiple correspondences exist, multiple values
of these properties are predicted and aggregated into one value as the final
output.
Canonical Neural Fields: Canonical shape properties can be modeled using any
coordinate-based representation, e.g. occupancy fields [33] or radiance fields
[36]. For convenience, we follow SNARF and use occupancy fields as an example.
The occupancy field in SNARF [9] is defined as
$\displaystyle{f}_{\sigma_{f}}:\mathbb{R}^{3}\times\mathbb{R}^{n_{p}}$
$\displaystyle\rightarrow[0,1],$ (1) $\displaystyle\mathbf{x},\mathbf{p}$
$\displaystyle\mapsto o.$ (2)
Here ${f}_{\sigma_{f}}$ is the occupancy field that predicts the occupancy
probability $o$ for any canonical point $\mathbf{x}$. The parameters of the
occupancy field are denoted as $\sigma_{f}$. It can be optionally conditioned
on the articulated pose $\mathbf{p}$ to model pose-dependent local
deformations such as clothing wrinkles.
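As a concrete illustration, the occupancy field can be any coordinate-based network mapping a 3D point and a pose code to a probability in $[0,1]$. A minimal NumPy sketch (the paper parameterizes this field as an MLP in a deep learning framework; the layer sizes, initialization, and function names here are illustrative, not the authors' implementation):

```python
import numpy as np

def occupancy_mlp(x, p, params):
    """Toy occupancy field f_{sigma_f}: maps a canonical point x (3,) and a
    pose code p (n_p,) to an occupancy probability in [0, 1].
    `params` is a list of (W, b) weight/bias pairs; all names are illustrative."""
    h = np.concatenate([x, p])
    for W, b in params[:-1]:
        h = np.maximum(W @ h + b, 0.0)       # ReLU hidden layers
    W, b = params[-1]
    logit = (W @ h + b).item()
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid keeps the output in [0, 1]

def init_params(n_in, hidden, rng):
    """Random one-hidden-layer MLP with a scalar output (toy sizes)."""
    return [(rng.standard_normal((hidden, n_in)) * 0.1, np.zeros(hidden)),
            (rng.standard_normal((1, hidden)) * 0.1, np.zeros(1))]
```

The sigmoid output guarantees a valid occupancy probability regardless of the network weights.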
Neural Blend Skinning: In SNARF, the articulation is modeled using LBS. To
apply LBS to continuous neural fields, a skinning weight field in canonical
space is defined as:
$\displaystyle\mathbf{w}_{\sigma_{w}}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{n_{b}},$
(3)
where $\sigma_{w}$ are the parameters and $n_{b}$ denotes the number of bones.
In SNARF, this field is parameterized as an MLP. However, any other
coordinate-based representation can be used instead. Given the skinning
weights $\mathbf{w}$ of a 3D point $\mathbf{x}$ and the bone transformations
$\boldsymbol{B}=\\{\boldsymbol{B}_{1},\dots,\boldsymbol{B}_{n_{b}}\\}$
($\boldsymbol{B}_{i}\in SE(3)$) that correspond to a particular body pose
$\mathbf{p}$, the 6D transformation
$\mathbf{T}(\mathbf{x})\in\mathbb{R}^{3\times 4}$ of a canonical point is
determined by the following convex combination:
$\displaystyle\mathbf{T}(\mathbf{x})=\sum_{i=1}^{n_{\text{b}}}{w}_{\sigma_{w},i}(\mathbf{x})\cdot\boldsymbol{B}_{i}.$
(4)
The deformed point corresponding to the canonical point is then computed as
$\displaystyle\mathbf{x}^{\prime}=\mathbf{d}_{\sigma_{w}}(\mathbf{x},\boldsymbol{B})=\mathbf{T}(\mathbf{x})\cdot\mathbf{x}.$
(5)
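Eqs. (4) and (5) can be sketched in a few lines of NumPy for a single point (illustrative only; the paper's implementation operates on batched tensors on the GPU):

```python
import numpy as np

def blend_transform(weights, bones):
    """Eq. (4): convex combination of bone transforms.
    weights: (n_b,) skinning weights; bones: (n_b, 3, 4) rigid transforms."""
    return np.einsum('i,ijk->jk', weights, bones)   # (3, 4) blended transform

def forward_skin(x, weights, bones):
    """Eq. (5): deform a canonical point x (3,) into posed space."""
    T = blend_transform(weights, bones)
    return T @ np.append(x, 1.0)                    # homogeneous coordinates
```

With all weight on a single bone, the point simply follows that bone's rigid transform, which is the piece-wise rigid limit of LBS.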
Correspondence Search: The canonical skinning weight field and Eq. (5) define
the mapping from canonical points to deformed ones, i.e.
$\mathbf{x}\rightarrow\mathbf{x}^{\prime}$. However, generating posed shapes
requires the inverse mapping, i.e. $\mathbf{x}^{\prime}\rightarrow\mathbf{x}$,
which is defined implicitly as the root of the following equation:
$\displaystyle\mathbf{d}_{\sigma_{w}}(\mathbf{x},\boldsymbol{B})-\mathbf{x}^{\prime}=\mathbf{0}.$
(6)
The roots of this equation cannot be analytically solved in closed form.
Instead, the solution can be attained numerically via standard Newton or
quasi-Newton optimization methods, which iteratively find a location
$\mathbf{x}$ that satisfies Eq. (6) (see Fig. 2):
$\displaystyle\mathbf{x}^{k+1}$
$\displaystyle=\mathbf{x}^{k}-(\mathbf{J}^{k})^{-1}\cdot(\mathbf{d}_{\sigma_{w}}(\mathbf{x}^{k},\boldsymbol{B})-\mathbf{x}^{\prime}).$
(7)
Here $\mathbf{J}$ is the Jacobian matrix of
$\mathbf{d}_{\sigma_{w}}(\mathbf{x}^{k},\boldsymbol{B})-\mathbf{x}^{\prime}$.
To avoid computing the Jacobian at each iteration, Broyden’s method [5] is
used with a low-rank approximation $\tilde{\mathbf{J}}$ of $\mathbf{J}^{-1}$.
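A minimal per-point NumPy sketch of this root-finding loop (a simplification for illustration: the paper runs a batched GPU version, and the iteration count, thresholds, and the specific inverse-Jacobian update used here are our assumptions):

```python
import numpy as np

def broyden_root(g, x0, J0_inv, n_iter=30, eps=1e-6):
    """Find a root of g(x) = 0 via the quasi-Newton iteration of Eq. (7),
    maintaining a low-rank-updated approximation of the inverse Jacobian
    instead of recomputing it each step."""
    x, J_inv = x0.astype(float), J0_inv.astype(float)
    gx = g(x)
    for _ in range(n_iter):
        if np.linalg.norm(gx) < eps:
            break
        x_new = x - J_inv @ gx                      # Eq. (7) update
        g_new = g(x_new)
        s, y = x_new - x, g_new - gx
        denom = s @ (J_inv @ y)
        if abs(denom) > 1e-12:                      # rank-one inverse update
            J_inv += np.outer(s - J_inv @ y, s @ J_inv) / denom
        x, gx = x_new, g_new
    return x
```

On a toy affine deformation $\mathbf{d}(\mathbf{x}) = \mathbf{A}\mathbf{x} + \mathbf{t}$, the loop recovers the canonical point that maps to a given deformed query.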
Handling Multiple Correspondences: Multiple roots, denoted by the set
$\\{\mathbf{x}_{i}^{*}\\}$, might exist due to self-contact where multiple
canonical correspondences of one deformed point exist (see green and blue
points in Fig. 1). Multiple roots are found by initializing the optimization-
based root finding procedure with different starting locations and exploiting
the local convergence of the optimizer. The initial states
$\\{\mathbf{x}_{i}^{0}\\}$ are thereby obtained by transforming the deformed
point $\mathbf{x}^{\prime}$ rigidly to the canonical space for each of the
$n_{b}$ bones, and the initial Jacobian matrices $\\{\mathbf{J}_{i}^{0}\\}$
are the spatial gradients of the skinning weight field at the corresponding
initial states:
$\displaystyle\mathbf{x}^{0}_{i}=\boldsymbol{B}_{i}^{-1}\cdot\mathbf{x}^{\prime}\quad\mathbf{J}^{0}_{i}=\frac{\partial{\mathbf{d}_{\sigma_{w}}(\mathbf{x},\boldsymbol{B})}}{\partial{\mathbf{x}}}\bigg{|}_{\mathbf{x}=\mathbf{x}_{i}^{0}}$
(8)
The final set of correspondences is determined by their convergence:
$\displaystyle\mathcal{X}^{*}=\left\\{\mathbf{x}^{*}_{i}\mid\left\|\mathbf{d}_{\sigma_{w}}(\mathbf{x}^{*}_{i},\boldsymbol{B})-\mathbf{x}^{\prime}\right\|_{2}<\epsilon\right\\},$
(9)
where $\epsilon$ is the convergence threshold.
Aggregating Multiple Correspondences: The maximum over the occupancy
probabilities of all canonical correspondences gives the final occupancy
prediction:
$\displaystyle
o^{\prime}(\mathbf{x}^{\prime},\mathbf{p})=\max_{\mathbf{x}^{*}\in\mathcal{X}^{*}}\\{{f}_{\sigma_{f}}(\mathbf{x}^{*},\mathbf{p})\\}.$
(10)
Losses: The canonical neural fields and the skinning weights can be learned
jointly from observations in the deformed space. SNARF assumes direct 3D
supervision and uses the binary cross entropy loss
$\mathcal{L}_{BCE}(o(\mathbf{x}^{\prime},\mathbf{p}),o_{gt}(\mathbf{x}^{\prime}))$
between the predicted and ground-truth occupancy for any deformed point. In
addition, two auxiliary losses are applied during the first epoch to bootstrap
training. SNARF randomly samples points along the bones that connect joints in
canonical space and encourages their occupancy probabilities to be one.
Moreover, SNARF encourages the skinning weights of all joints to be $1$ for
their parent bones.
Gradients: To learn the skinning weights $\mathbf{w}_{\sigma_{w}}$ using a
loss applied on the predicted occupancy probability in posed space
$\mathcal{L}(o(\mathbf{x}^{\prime},\mathbf{p}))$, the gradient of
$\mathcal{L}$ wrt. $\sigma_{w}$ is required. Applying the chain rule, the
gradient $\frac{\partial{\mathcal{L}}}{\partial{\sigma_{w}}}$ is given by
$\displaystyle\frac{\partial{\mathcal{L}}}{\partial{\sigma_{w}}}=\frac{\partial{\mathcal{L}}}{\partial{o}}\cdot\frac{\partial{o}}{\partial{{f}_{\sigma_{f}}}}\cdot\frac{\partial{{f}_{\sigma_{f}}(\mathbf{x}^{*})}}{\partial{\mathbf{x}^{*}}}\cdot\frac{\partial{\mathbf{x}^{*}}}{\partial{\sigma_{w}}},$
(11)
where $\mathbf{x}^{*}$ is the root as defined in Eq. (9).
The last term cannot be obtained using standard auto-differentiation because
$\mathbf{x}^{*}$ is determined by the iterative correspondence search using
${\sigma_{w}}$. This iterative procedure is not trivially differentiable. To
overcome this problem, implicit differentiation is used to derive the
following analytical form of the last term:
$\displaystyle\frac{\partial{\mathbf{x}^{*}}}{\partial{\sigma_{w}}}=-\left(\frac{\partial{\mathbf{d}_{\sigma_{w}}(\mathbf{x}^{*},\boldsymbol{B})}}{\partial{\mathbf{x}^{*}}}\right)^{-1}\cdot\frac{\partial{\mathbf{d}_{\sigma_{w}}(\mathbf{x}^{*},\boldsymbol{B})}}{\partial{\sigma_{w}}}.$
(12)
Substituting Eq. (12) into Eq. (11) yields the gradient term
$\frac{\partial{\mathcal{L}}}{\partial{\sigma_{w}}}$ which then allows
skinning weights to be learned with standard back-propagation.
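The implicit-differentiation identity in Eq. (12) can be checked on a one-dimensional toy deformation for which the root and its gradient are known in closed form (the toy $d_\sigma(x) = \sigma x$ is our illustration, not the paper's deformation model):

```python
import numpy as np

def implicit_grad(d_dx, d_dsigma):
    """Eq. (12) in 1D: gradient of the root x* w.r.t. the parameter sigma,
    obtained by implicit differentiation of d_sigma(x) - x' = 0."""
    return -d_dsigma / d_dx

def root(sigma, x_prime):
    """For the toy deformation d_sigma(x) = sigma * x, the root of
    sigma * x - x' = 0 is x*(sigma) = x' / sigma in closed form."""
    return x_prime / sigma
```

The analytic gradient from Eq. (12) agrees with a central finite difference of the closed-form root.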
## 4 Fast Differentiable Forward Skinning
Figure 2: MLP-based Forward Skinning (SNARF). Given a point in deformed space
$\mathbf{x}^{\prime}$, SNARF finds its canonical correspondences
$\mathbf{x}^{*}$ that satisfy the forward skinning equation (5) via root
finding. Multiple correspondences can be reliably found by initializing the
root finding algorithm with multiple starting points derived from the bone
transformations.
While the formulation mentioned above can articulate neural fields with good
quality and generalization ability, the original SNARF algorithm is
computationally expensive, which limits its wider application. As a reference,
determining the correspondences of 200k points takes 800ms on an NVIDIA Quadro
RTX 6000 GPU. In the following, we describe how Fast-SNARF overcomes this
issue, reducing the computation time from 800ms to 5ms (Table II).
### 4.1 Voxel-based Correspondence Search
The core of our method is to factor out costly computations at each root
finding iteration in SNARF, including MLP evaluations and LBS calculations,
into a pre-computation stage as illustrated in Algorithm 1.
Voxel-based Skinning Field: The main speed bottleneck of SNARF lies in
computing Eq. (7) at each iteration of Broyden’s method. Computing Eq. (7) is
time-consuming because it involves querying skinning weights, which are
parameterized via an MLP in SNARF, and then computing LBS. We notice that the
skinning weight field does not contain high-frequency details as illustrated
in Fig. 5. Therefore, we re-parameterize the skinning weight field
$\mathbf{w}$ with a low-resolution voxel grid $\\{\mathbf{w}_{v}\\}$ with
skinning weights $\mathbf{w}_{v}$ defined for each grid point
$\mathbf{x}_{v}$. The skinning weights of any non-grid-aligned point in space
are then obtained via tri-linear interpolation. We found that a resolution of
$64\times 64\times 16$ is sufficient to describe the skinning weights in all
experiments. Note that we use lower resolution along the $z$-axis due to the
“flatness” of the human body along this dimension in canonical space.
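The voxel lookup that replaces the skinning MLP is a standard tri-linear interpolation over the eight surrounding grid points. A minimal NumPy sketch (the paper's version runs inside a CUDA kernel; the clamping behaviour at the grid boundary here is an assumption):

```python
import numpy as np

def trilerp(grid, u):
    """Tri-linearly interpolate a voxel grid of vectors.
    grid: (X, Y, Z, C) values at integer grid points; u: (3,) query in voxel
    coordinates with 0 <= u_i <= size_i - 1."""
    i0 = np.floor(u).astype(int)
    i0 = np.minimum(i0, np.array(grid.shape[:3]) - 2)   # clamp so i0+1 is valid
    f = u - i0                                          # fractional offsets
    out = np.zeros(grid.shape[3])
    for dx in (0, 1):                                   # blend the 8 corners
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out
```

Queries at grid points return the stored value exactly, and a query halfway between two neighbouring grid points returns their average.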
Pre-computing LBS: Computing linear blend skinning (Eq. (4)) at each root
finding iteration also impacts speed. To further improve computation
efficiency, we note that an explicit voxel-based skinning weights
representation $\\{\mathbf{w}_{v}\\}$ allows us to compute the linearly
blended skinning transformations for grid points $\\{\mathbf{T}_{v}\\}$ given
current body poses:
$\displaystyle\mathbf{T}_{v}=\sum_{i=1}^{n_{\text{b}}}w_{v,i}\cdot\boldsymbol{B}_{i}.$
(13)
Then, during root finding, the required transformation at any canonical point
$\mathbf{T}(\mathbf{x})$ can be determined by tri-linearly interpolating
neighbouring transformations in $\\{\mathbf{T}_{v}\\}$. Thus, LBS only needs
to be run for a small set of grid points instead of all query points in the
root finding procedure.
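The correctness of this pre-computation rests on linearity: because both LBS (Eq. (4)) and interpolation are linear in the skinning weights, interpolating pre-blended transforms $\\{\mathbf{T}_{v}\\}$ is exactly equivalent to first interpolating weights and then blending. A minimal NumPy check along one grid axis (illustrative; the full method interpolates tri-linearly):

```python
import numpy as np

def blend(w, bones):
    """Eq. (13): T_v = sum_i w_i * B_i for one grid point.
    w: (n_b,) weights; bones: (n_b, 3, 4) bone transforms."""
    return np.einsum('i,ijk->jk', w, bones)

def lerp(a, b, t):
    """Linear interpolation between two neighbouring grid values."""
    return (1 - t) * a + t * b
```

Interpolating transforms and interpolating weights give identical results, so moving the blend into a pre-computation stage loses no accuracy.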
Figure 3: Voxel-based Forward Skinning (Fast-SNARF). In comparison with SNARF
(cf. Fig. 2), Fast-SNARF uses a voxel-based representation to speed up the
iterative correspondence search. The skinning weight field is represented as a
voxel grid. For each pose, we first pre-compute LBS for each grid point,
yielding a transformation field. For each query deformed point
$\mathbf{x}^{\prime}$, Fast-SNARF finds its canonical correspondences
$\mathbf{x}^{*}$ which satisfy
$\mathbf{T}(\mathbf{x}^{*})\cdot\mathbf{x}^{*}=\mathbf{x}^{\prime}$.
Custom CUDA Kernel: Broyden’s method is iterative and involves many small
operations that have to be computed per query point, such as arithmetic
operations on small matrices and reading values from the voxel grid. We note
that these operations can be computed independently for each query point. This motivates
us to implement this module with a custom CUDA kernel instead of using native
functions in standard deep learning frameworks. The handwritten kernel,
parallelized over query points, fuses the entire method into a single kernel
that keeps working variables in registers, avoiding unnecessary time and
memory costs from launching native kernels and synchronizing intermediate
results. The input to our CUDA kernel for iterative root finding is the pre-
computed voxel grid of transformations $\\{\mathbf{T}_{v}\\}$, the bone
transformations $\mathbf{B}$ as well as query points $\mathbf{x}^{\prime}$.
The kernel first computes the multiple initialization states (Eq. (8)). Then,
at each root finding iteration, the kernel tri-linearly interpolates
$\\{\mathbf{T}_{v}\\}$ and transforms the points (Eq. (5)), and applies
Broyden’s update (Eq. (7)). After each iteration $k$, we filter diverged and
converged points $\mathbf{x}^{k}$ by checking whether
$\left\|\mathbf{d}_{\sigma_{w}}(\mathbf{x}^{k},\boldsymbol{B})-\mathbf{x}^{\prime}\right\|_{2}$
is larger than the divergence threshold or smaller than the convergence
threshold, further reducing the number of required computations.
Algorithm 1 Correspondence Search
Inputs:
  $\\{(\mathbf{x}^{\prime},\mathbf{x}^{0},\tilde{\mathbf{J}}^{0})\\}$ query points and initialization
  $\boldsymbol{B}$ bone transformations
  $\mathbf{w}_{\sigma_{w}}$ skinning weights MLP
Variant 1: MLP-based Search (SNARF)
for $(\mathbf{x}^{\prime},\mathbf{x}^{0},\tilde{\mathbf{J}}^{0})\in\\{(\mathbf{x}^{\prime},\mathbf{x}^{0},\tilde{\mathbf{J}}^{0})\\}$ in parallel do
  for $k\leftarrow 0,n$ do
    $w_{1},...,w_{n_{b}}\leftarrow\mathbf{w}_{\sigma_{w}}(\mathbf{x}^{k})$ $\triangleright$ costly operations inside root finding
    $\mathbf{T}\leftarrow\sum_{i=1}^{n_{\text{b}}}{w}_{i}(\mathbf{x}^{k})\cdot\boldsymbol{B}_{i}$
    $\mathbf{x}^{k+1},\tilde{\mathbf{J}}^{k+1}\leftarrow\text{broyden}(\mathbf{x}^{k},\tilde{\mathbf{J}}^{k},\mathbf{T},\mathbf{x}^{\prime})$ $\triangleright$ Eq. (6)
  end for
end for
return: $\\{\mathbf{x}^{n}\\}$
Variant 2: Voxel-based Search (Fast-SNARF)
for each $\mathbf{x}_{v}\in\\{\mathbf{x}_{v}\\}$ in parallel do $\triangleright$ pre-computation
  $w_{1},...,w_{n_{b}}\leftarrow\mathbf{w}_{\sigma_{w}}(\mathbf{x}_{v})$
  $\mathbf{T}_{v}\leftarrow\sum_{i=1}^{n_{\text{b}}}{w}_{i}\cdot\boldsymbol{B}_{i}$
end for
for $(\mathbf{x}^{\prime},\mathbf{x}^{0},\tilde{\mathbf{J}}^{0})\in\\{(\mathbf{x}^{\prime},\mathbf{x}^{0},\tilde{\mathbf{J}}^{0})\\}$ in parallel do
  for $k\leftarrow 0,n$ do
    $\mathbf{T}\leftarrow\text{trilerp}(\mathbf{x}^{k},\\{\mathbf{T}_{v}\\})$ $\triangleright$ lightweight operation
    $\mathbf{x}^{k+1},\tilde{\mathbf{J}}^{k+1}\leftarrow\text{broyden}(\mathbf{x}^{k},\tilde{\mathbf{J}}^{k},\mathbf{T},\mathbf{x}^{\prime})$ $\triangleright$ Eq. (6)
  end for
end for
return: $\\{\mathbf{x}^{n}\\}$
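The trilerp step of Variant 2 can be sketched as follows. This is a NumPy illustration rather than the CUDA kernel; the `(D, H, W, 4, 4)` grid layout and the bounding-box arguments are assumed conventions.

```python
import numpy as np

def trilerp_transforms(x, T_v, bbox_min, bbox_max):
    """Tri-linearly interpolate a voxel grid of 4x4 transforms at points x.

    x     : (N, 3) canonical query points
    T_v   : (D, H, W, 4, 4) per-voxel LBS transformation matrices
    bbox_min, bbox_max : (3,) bounds of the grid in canonical space
    """
    D, H, W = T_v.shape[:3]
    hi = np.array([D, H, W]) - 1
    # continuous voxel coordinates in [0, resolution - 1]
    u = (x - bbox_min) / (bbox_max - bbox_min) * hi
    u = np.clip(u, 0, hi - 1e-6)
    i0 = np.floor(u).astype(int)
    f = u - i0                                    # fractional offsets
    out = np.zeros((len(x), 4, 4))
    for corner in ((0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1),
                   (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)):
        c = np.array(corner)
        # trilinear weight of this corner for every query point
        w = np.prod(np.where(c, f, 1.0 - f), axis=1)
        idx = np.minimum(i0 + c, hi)
        out += w[:, None, None] * T_v[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out
```

Because each query only reads its eight neighboring voxels, this is far cheaper than evaluating the skinning MLP inside every root-finding iteration.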
Remove Duplicate Correspondences: A further important speed optimization
pertains to the treatment of multiple correspondences found by the root
finding algorithm. The set of valid correspondences contains duplicates
because different initial states can converge to the same solution. To avoid
unnecessary evaluation of the canonical neural fields for these duplicates, we
detect duplicate solutions by their relative distances in canonical space and
discard them.
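A simple realization of this filtering, sketched in NumPy: the greedy scheme and the `dist_tol` threshold are illustrative assumptions, since the text only states that duplicates are detected by relative distance in canonical space.

```python
import numpy as np

def remove_duplicate_correspondences(candidates, valid, dist_tol=1e-3):
    """Greedily keep one representative per cluster of canonical points.

    candidates : (N, 3) converged canonical correspondences
    valid      : (N,) boolean convergence flags
    Returns the indices of unique valid correspondences.
    """
    keep = []
    for i in np.flatnonzero(valid):
        p = candidates[i]
        # discard i if it is within dist_tol of an already-kept solution
        if all(np.linalg.norm(p - candidates[j]) > dist_tol for j in keep):
            keep.append(i)
    return np.array(keep, dtype=int)
```

Only the kept points are then passed to the canonical neural field, avoiding redundant evaluations.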
### 4.2 Skinning Weights Optimization
Analogous to SNARF, in theory, Fast-SNARF supports learning skinning weights
with the analytical gradients in Eq. (12). However, there are two practical
challenges.
Approximated Gradient: The first problem is that Eq. (12) involves
computing derivatives and the matrix inverse
$\left(\frac{\partial{\mathbf{d}_{\sigma_{w}}(\mathbf{x}^{*},\boldsymbol{B})}}{\partial{\mathbf{x}^{*}}}\right)^{-1}$,
which is time-consuming and impedes our goal of fast training. To address this,
we note that this term is identical to the inverse of the Jacobian
$\mathbf{J}$ in the last iteration of root finding (Eq. (7)):
$\displaystyle\left(\frac{\partial{\mathbf{d}_{\sigma_{w}}(\mathbf{x}^{*},\boldsymbol{B})}}{\partial{\mathbf{x}^{*}}}\right)^{-1}={\underbrace{\left(\frac{\partial{\mathbf{d}_{\sigma_{w}}(\mathbf{x}^{*},\boldsymbol{B})-\mathbf{x}^{\prime}}}{\partial{\mathbf{x}^{*}}}\right)}_{\mathbf{J}}}^{-1}$
(14)
because the deformed point $\mathbf{x}^{\prime}$ is a given input and is
independent of the canonical correspondence $\mathbf{x}^{*}$. The inverse of
the Jacobian $\mathbf{J}$ is approximated in Broyden’s method as
$\tilde{\mathbf{J}}$. Thus, we use $\tilde{\mathbf{J}}$ directly:
$\displaystyle\frac{\partial{\mathbf{x}^{*}}}{\partial{\sigma_{w}}}=-\tilde{\mathbf{J}}\cdot\frac{\partial{\mathbf{d}_{\sigma_{w}}(\mathbf{x}^{*},\boldsymbol{B})}}{\partial{\sigma_{w}}}.$
(15)
Distilling Smooth Skinning Fields: The second problem is that the voxel-based
parameterization lacks the global smoothness bias of MLPs; optimizing voxels
directly would therefore result in a noisy skinning weight field. To obtain
smooth skinning weights while using voxel-based correspondence search, a
common approach is to apply a total variation regularizer. However, we
experimentally found that this regularization does not lead to the desired
smoothness of the skinning weights and negatively affects the accuracy of the
generated shapes. We thus propose a new approach: we parameterize the skinning
weight field with an MLP during training, but continuously distill the MLP
into a voxel-based skinning weight field at each training iteration. The
skinning weight field is thus smooth by design, due to the intermediate use of
an MLP. At each training iteration, we compute the skinning weight voxel grid
on the fly by evaluating the MLP at grid points $\\{\mathbf{x}_{v}\\}$, and
then use our fast voxel-based correspondence search. In this scheme, the
parameters of the MLP are optimized during training; the voxels are not
optimized directly and serve only to store the weights.
The conversion from MLP to voxels does introduce additional computation during
training, but the overhead is minor since the voxel grid is low resolution.
The inference speed is not influenced at all because the MLP is used during
training only. This yields on-par accuracy with SNARF as we inherit the
inductive smoothness bias of the MLP-based skinning weight model.
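The per-iteration distillation step amounts to baking the MLP into a grid. A minimal sketch follows; in a real training loop this evaluation would be a differentiable operation in the deep-learning framework so that gradients reach the MLP parameters, and the function names here are illustrative.

```python
import numpy as np

def distill_skinning_mlp(mlp, grid_points):
    """Bake the MLP skinning field into a voxel grid (done each iteration).

    mlp         : callable, (M, 3) points -> (M, n_bones) skinning weights
    grid_points : (D, H, W, 3) canonical coordinates of the voxel centers
    """
    D, H, W, _ = grid_points.shape
    w = mlp(grid_points.reshape(-1, 3))   # one batched MLP evaluation
    return w.reshape(D, H, W, -1)         # smooth-by-design weight voxels
```

Since the grid is low resolution, this single batched evaluation adds only minor overhead per iteration.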
### 4.3 Learning Avatars from 3D Scans
We can use our articulation module to learn animatable human avatars with
realistic cloth deformations from 3D scans. Given a set of 3D meshes in
various body poses, our method learns the human shape in canonical space as an
occupancy field alongside the canonical skinning weight field which is needed
for animation. We model the canonical shape using an occupancy field and use
the same training losses as SNARF [9] (see Section 3).
## 5 Experiments
### 5.1 Minimally Clothed Humans
We first evaluate the speed and accuracy of our method and baselines on
minimally clothed humans.
#### 5.1.1 Dataset
We follow the same evaluation protocol as NASA [13] and SNARF [9]. More
specifically, we use the DFaust [4] subset of AMASS [32] for training and
evaluating our model on SMPL meshes of people in minimal clothing. This
dataset covers 10 subjects of varying body shapes. For each subject, we use 10
sequences, from which we randomly select one sequence for validation, using
the rest for training. For each frame in a sequence, 20K points are sampled,
of which half are sampled uniformly in space and half in near-surface regions,
obtained by first applying Poisson disk sampling on the mesh surface and then
adding isotropic Gaussian noise with $\sigma=0.01$ to the sampled point
locations. In addition to the “within distribution” evaluation on
DFaust, we test “out of distribution” performance on another subset of AMASS,
namely PosePrior [1]. This subset contains challenging, extreme poses, not
present in DFaust.
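The point-sampling scheme above can be sketched as follows, assuming surface samples from Poisson disk sampling are already available; the helper name and bounding-box arguments are illustrative.

```python
import numpy as np

def sample_training_points(surface_pts, bbox_min, bbox_max,
                           n_total=20000, sigma=0.01, seed=None):
    """Sample supervision points: half uniform in space, half near-surface.

    surface_pts : (S, 3) surface samples (e.g. from Poisson disk sampling)
    """
    rng = np.random.default_rng(seed)
    n_half = n_total // 2
    # half of the points: uniform inside the bounding box
    uniform = rng.uniform(bbox_min, bbox_max, size=(n_half, 3))
    # other half: surface samples perturbed by isotropic Gaussian noise
    idx = rng.integers(0, len(surface_pts), size=n_half)
    near = surface_pts[idx] + rng.normal(0.0, sigma, size=(n_half, 3))
    return np.concatenate([uniform, near], axis=0)
```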
Method | IoU bbox (within distr.) | IoU surf (within distr.) | IoU bbox (out of distr.) | IoU surf (out of distr.) | Articulation (inference) | Shape (inference) | Total (inference) | Training Time
---|---|---|---|---|---|---|---|---
Pose-ONet* | 79.34% | 58.61% | 49.21% | 28.69% | 0ms | 28.95ms | 29.88ms | 16min
Backward-LBS* | 81.68% | 87.44% | 66.93% | 68.93% | 12.39ms | 27.67ms | 40.60ms | 31min
NASA | 96.14% | 86.98% | 83.16% | 60.21% | - | - | 582ms | 4h
SNARF | 97.31% | 90.38% | 93.97% | 80.65% | 806.67ms | 186.82ms | 994.01ms | 8h
Fast-SNARF | 97.41% | 90.52% | 94.20% | 81.25% | 5.27ms | 27.78ms | 34.70ms | 25min
TABLE I: Quantitative Results on Minimally Clothed Humans. The mean IoU of uniformly sampled points in space (IoU bbox) and of points near the surface (IoU surf), as well as the inference and training times, are reported. Our method achieves similar accuracy to SNARF (the previous state of the art) while being much faster, and outperforms all other baselines in terms of accuracy. Improvements are more pronounced for points near the surface and for poses outside the training distribution. Our method is also faster than all baselines except Pose-ONet. Note that Pose-ONet and Backward-LBS (marked with *) produce distorted shapes, as shown in Fig. 4.
[Figure 4 image grid: rows Pose-ONet, Backward, NASA, SNARF, Fast-SNARF, Ground Truth; columns Within Distribution and Out of Distribution.]
Figure 4: Qualitative Results on Minimally Clothed Humans. Our method and
SNARF produce results similar to the ground-truth with correct pose and
plausible local details, both for poses within the training distribution and
more extreme (OOD) poses. In contrast, the baseline methods suffer from
various artifacts including incorrect poses (Pose-ONet), degenerate shapes
(Pose-ONet and Backward), and discontinuities near joints (NASA), which become
more severe for unseen poses.
#### 5.1.2 Baselines
We consider SNARF as our main baseline, in addition to the following
baselines. For SNARF, “Back-LBS”, and “Pose-ONet”, we use the same training
losses and hyperparameters as in Fast-SNARF.
Pose-Conditioned Occupancy Networks (Pose-ONet): This baseline extends
Occupancy Networks [33] by directly concatenating the pose input to the
occupancy network.
Backward Skinning (Back-LBS): This baseline implements the concept of backward
skinning similar to [23]. A network takes a deformed point and pose condition
as input and outputs the skinning weights of the deformed point. The deformed
point is then warped back to canonical space via LBS and the canonical
correspondence is fed into the canonical shape network to query occupancy.
NASA: NASA [13] models articulated human bodies as a composition of multiple
parts, each of which transforms rigidly and deforms according to the pose.
Note that in contrast to us, NASA requires ground-truth skinning weights for
surface points as supervision. We use the official NASA implementation
provided by the authors.
#### 5.1.3 Results and Discussion
Within Distribution Accuracy: Overall, all methods perform well in this
relatively simple setting, as shown in Table I. Our method achieves on-par or
better accuracy compared to SNARF and provides an improvement over other
baselines. Our method produces bodies with smooth surfaces and correct poses
as shown in Fig. 4. In contrast, NASA suffers from discontinuous artifacts
near joints. Back-LBS and Pose-ONet suffer from missing body parts.
Out of Distribution (OOD) Accuracy: In this setting, we test the trained
models on a different dataset, PosePrior [1], to assess the performance in
more realistic settings, where poses can be far from those in the training
set. Unseen poses cause drastic performance degradation to the baseline
methods as shown in Table I. In contrast, similar to SNARF, our method
degrades gracefully despite test poses being drastically different from
training poses and very challenging. As can be seen in Fig. 4, our method
generates natural shapes for the given poses while NASA fails to generate
correctives at bone intersections for unseen poses, leading to noticeable
artifacts. Pose-ONet fails to generate meaningful shapes and Back-LBS produces
distorted bodies due to incorrect skinning weights.
Speed Comparison: We report the training and inference speed of all methods on
a single NVIDIA Quadro RTX 6000 GPU. In this setting, with MLP-based canonical
shape, Fast-SNARF can be trained within 25 minutes and produces accurate
shapes in any pose. Baseline methods that reach similar speed, i.e. Pose-ONet,
and Back-LBS, do not produce satisfactory results (see Fig. 4). Compared to
the original SNARF, our improvements, detailed in Section 4, lead to a speed-
up of $150\times$ for the articulation module without loss of accuracy, as
shown in Table I. Fast-SNARF also dramatically boosts the training speed (25
minutes vs. 8 hours). Compared to NASA, Fast-SNARF evaluates the canonical
shape MLP only for true correspondences, while NASA always generates many
candidate correspondences, one for each bone, and needs to evaluate the
canonical shape MLP for all candidates, leading to slow inference (582ms vs.
35ms) and training (4 hours vs. 25 minutes).
Ablation - MLP Distillation: SNARF optimizes an MLP-based skinning field,
resulting in smooth skinning weights but slow training and inference. In Fast-
SNARF, we adopt an MLP distillation strategy: we optimize an MLP-based
skinning weight field for smoothness, but convert it on the fly to a low
resolution voxel grid at each training iteration, to enable voxel-based
correspondence search. In this way, Fast-SNARF learns a similarly smooth
skinning field as shown in Fig. 5, yet is much faster than SNARF (see Table
II).
We also compare this MLP distillation strategy with a naive strategy in which
we directly optimize the skinning weights at each grid point with an
additional total variation loss on the skinning weight voxel grid. As shown
in Table II, directly optimizing the skinning weight voxels (w/o MLP
distillation) leads to inferior results. This accuracy degradation is due to noisy skinning
weights as shown in Fig. 5. In contrast, our strategy distills smooth skinning
weights voxels from the MLP while introducing only a slight overhead during
training (25 minutes vs. 23 minutes).
Ablation - Voxel Grid Resolution: We study the effect of different resolutions
of the skinning weight voxel grid. The results are shown in Table II. In
general, higher resolutions lead to higher accuracy but longer training and
inference time. A resolution of $32\times 32\times 8$ or $64\times 64\times
16$ yields a good balance between accuracy and speed. A grid of lower
resolution $16\times 16\times 4$ cannot fully represent the skinning weight
field and leads to a noticeable accuracy degradation (by $2.8\%$). On the
other hand, further increasing the resolution to $128\times 128\times 32$
produces diminishing returns, i.e. only $0.3\%$ IoU improvement, because the
skinning weight field is naturally smooth and does not contain high-frequency
details. Also, higher resolution significantly slows down training and
inference, by more than a factor of two, because 1) more points need to be
evaluated when converting the MLP to voxels during training, and 2) the
high-resolution voxel grid no longer fits into the GPU’s shared memory, which
significantly impacts read speeds.
Configurations | Accuracy | Inference | Training
---|---|---|---
Baseline SNARF | 80.7% | 807ms + 187ms | 8h
+ Voxel-based search | - | 61ms + 187ms | -
+ Pre-compute LBS | - | 40ms + 187ms | -
+ CUDA kernel | - | 5.3ms + 187ms | -
+ Filter corres. | - | 5.3ms + 28ms | -
Fast-SNARF | 81.2% | 5.3ms + 28ms | 25min
w/o MLP distillation | 78.2% | 5.3ms + 28ms | 23min
$16\times 16\times 4$ | 78.3% | 3.6ms + 28ms | 23min
$32\times 32\times 8$ | 81.1% | 4.6ms + 28ms | 24min
$64\times 64\times 16$ | 81.2% | 5.3ms + 28ms | 25min
$128\times 128\times 32$ | 81.5% | 16ms + 28ms | 52min
TABLE II: Quantitative Ablation Study. We report accuracy (the mean IoU of points near the surface in the out-of-distribution setting), inference speed (articulation speed + shape query speed), and training time of several ablative baselines.
[Figure 5 image grid: columns SNARF (8 h training), Fast-SNARF (25 min), and w/o MLP distillation (23 min).]
Figure 5: Skinning Weight Learning Strategies. We show skinning weights
learned with three different strategies, as well as the corresponding training
times. See text.
Figure 6: Qualitative Results on Clothed Humans [31]. Our method and SNARF
both learn realistic clothing shape and deformations. In contrast, the
baseline method using a skinned base mesh produces fewer details due to
inaccurate deformation when the base mesh mismatches the actual shape
(highlighted with a red circle).
### 5.2 Clothed Avatar from Scans
Dataset: We use the registered meshes from CAPE [31] and their registered SMPL
parameters to train our model. We use 8 subjects with different clothing types
for evaluation. We train a model for each subject and clothing condition.
Baselines: Clothed humans are more challenging to model than minimally clothed
humans due to the clothing details and non-linear deformations. Since most
baselines from Section 5.1 already suffer from implausible shapes and
artifacts, we exclude them in this evaluation. Instead, we keep SNARF as our
major baseline, and also include a new baseline denoted as “SMPL NN”. This
baseline assumes that a skinned base mesh is given, such as SMPL [30]. Given a
pose, such a method first deforms the SMPL model to the target pose using
mesh-based LBS. Then for each query point in deformed space, its corresponding
skinning weights are defined as the skinning weights of its nearest vertex on
the deformed SMPL mesh. Finally, with the skinning weights, the query point
can be transformed back to the canonical space based on inverse LBS.
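The nearest-vertex weight lookup of this baseline can be sketched as follows. This is a brute-force NumPy version for illustration; a practical implementation would use a spatial data structure such as a k-d tree.

```python
import numpy as np

def nearest_vertex_skinning(query, deformed_verts, vert_weights):
    """Copy skinning weights from the nearest posed SMPL vertex.

    query          : (N, 3) points in deformed space
    deformed_verts : (V, 3) posed SMPL vertices
    vert_weights   : (V, n_bones) per-vertex SMPL skinning weights
    """
    # brute-force pairwise distances; nearest posed vertex per query
    d = np.linalg.norm(query[:, None, :] - deformed_verts[None, :, :], axis=-1)
    nn = np.argmin(d, axis=1)
    return vert_weights[nn]
```

With the looked-up weights $w_i$ and bone transforms $\boldsymbol{B}_i$, inverse LBS then maps a deformed query back to canonical space via $(\sum_i w_i \boldsymbol{B}_i)^{-1}\mathbf{x}^{\prime}$.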
Results: The results are shown in Fig. 6. Our method can generate realistic
clothed humans in various poses including details on the face and clothing
(e.g. the collar on the left sample). The clothing also deforms naturally with
the body poses (e.g. the collar on the left sample and the lapel on the right
sample). While SNARF produces results of similar quality, training our method
only requires a fraction of SNARF’s training time (80 minutes vs. 20 hours).
Compared with the SMPL NN baseline, our results contain much more detail
because our method derives accurate correspondences between the deformed space
and canonical space. SMPL NN suffers from overly smooth shapes due to
inaccurate correspondences when the actual shape and the skinned base mesh do
not match well, e.g. around the lapel.
## 6 Conclusion
We propose Fast-SNARF, a fast, robust, and universal articulation module for
neural field representations. Fast-SNARF is built upon the idea of
differentiable forward skinning from SNARF [9], but is orders of magnitude
faster than SNARF thanks to a series of algorithmic and implementation
improvements. These include voxel-based correspondence search, LBS pre-
computation, a custom CUDA kernel implementation for root finding, duplicate
correspondence removal, approximated implicit gradients, and online
MLP-to-voxel conversion. The resulting algorithm can find correspondences as
accurately as SNARF while being $150\times$ faster. This leads to significant
speed-up in various real-world applications of forward skinning algorithms.
Using Fast-SNARF we are able to learn animatable human avatars from scans
15$\times$ faster than SNARF, and in contrast to SNARF, the speed bottleneck
is now the canonical shape query instead of the articulation module. We
believe Fast-SNARF’s speed and accuracy will open new applications and
accelerate research on non-rigid 3D reconstruction.
Acknowledgements: Xu Chen was supported by the Max Planck ETH Center for
Learning Systems. Andreas Geiger was supported by the DFG EXC number 2064/1 -
project number 390727645. We thank Yao Feng, Jinlong Yang, Yuliang Xiu and
Alex Zicong Fan for their feedback.
Disclosure: MJB has received research gift funds from Adobe, Intel, Nvidia,
Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen
Technologies, and Meshcapade GmbH. MJB’s research was performed solely at, and
funded solely by, the Max Planck Society.
## References
* [1] Ijaz Akhter and Michael J. Black. Pose-conditioned joint angle limits for 3D human pose reconstruction. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2015.
* [2] Matan Atzmon and Yaron Lipman. Sal: Sign agnostic learning of shapes from raw data. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
* [3] Alexander W. Bergman, Petr Kellnhofer, Wang Yifan, Eric R. Chan, David B. Lindell, and Gordon Wetzstein. Generative neural articulated radiance fields. Arxiv, 2022.
* [4] Federica Bogo, Javier Romero, Gerard Pons-Moll, and Michael J. Black. Dynamic FAUST: Registering human bodies in motion. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017.
* [5] Charles G Broyden. A class of methods for solving nonlinear simultaneous equations. Mathematics of computation, 1965.
* [6] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In Proc. of the European Conf. on Computer Vision (ECCV), 2022.
* [7] Jianchuan Chen, Ying Zhang, Di Kang, Xuefei Zhe, Linchao Bao, Xu Jia, and Huchuan Lu. Animatable neural radiance fields from monocular rgb videos. arXiv.org, 2021.
* [8] Xu Chen, Tianjian Jiang, Jie Song, Jinlong Yang, Michael J Black, Andreas Geiger, and Otmar Hilliges. gdna: Towards generative detailed neural avatars. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
* [9] Xu Chen, Yufeng Zheng, Michael J Black, Otmar Hilliges, and Andreas Geiger. SNARF: Differentiable forward skinning for animating non-rigid neural implicit shapes. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2021.
* [10] Julian Chibane, Thiemo Alldieck, and Gerard Pons-Moll. Implicit functions in feature space for 3D shape reconstruction and completion. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
* [11] Enric Corona, Tomas Hodan, Minh Vo, Francesc Moreno-Noguer, Chris Sweeney, Richard Newcombe, and Lingni Ma. Lisa: Learning implicit shape and appearance of hands. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
* [12] Enric Corona, Albert Pumarola, Guillem Alenyà, Gerard Pons-Moll, and Francesc Moreno-Noguer. SMPLicit: Topology-aware generative model for clothed people. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.
* [13] Boyang Deng, JP Lewis, Timothy Jeruzalski, Gerard Pons-Moll, Geoffrey Hinton, Mohammad Norouzi, and Andrea Tagliasacchi. Neural articulated shape approximation. In European Conference on Computer Vision (ECCV), 2020.
* [14] Zijian Dong, Chen Guo, Jie Song, Xu Chen, Andreas Geiger, and Otmar Hilliges. PINA: Learning a personalized implicit neural avatar from a single RGB-D video sequence. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
* [15] Stephan J Garbin, Marek Kowalski, Matthew Johnson, Jamie Shotton, and Julien Valentin. Fastnerf: High-fidelity neural rendering at 200fps. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2021.
* [16] Amos Gropp, Lior Yariv, Niv Haim, Matan Atzmon, and Yaron Lipman. Implicit geometric regularization for learning shapes. In Proc. of the International Conf. on Machine learning (ICML), 2020.
* [17] Tong He, John Collomosse, Hailin Jin, and Stefano Soatto. Geo-pifu: Geometry and pixel aligned implicit functions for single-view human reconstruction. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
* [18] Tong He, Yuanlu Xu, Shunsuke Saito, Stefano Soatto, and Tony Tung. ARCH++: Animation-ready clothed human reconstruction revisited. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.
* [19] Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, and Paul Debevec. Baking neural radiance fields for real-time view synthesis. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2021.
* [20] Fangzhou Hong, Mingyuan Zhang, Liang Pan, Zhongang Cai, Lei Yang, and Ziwei Liu. Avatarclip: Zero-shot text-driven generation and animation of 3d avatars. ACM Trans. on Graphics, 2022.
* [21] Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, and Tony Tung. Arch: Animatable reconstruction of clothed humans. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
* [22] Doug L. James and Christopher D. Twigg. Skinning mesh animations. ACM Trans. on Graphics, 24(3):399, 2005.
* [23] Timothy Jeruzalski, David IW Levin, Alec Jacobson, Paul Lalonde, Mohammad Norouzi, and Andrea Tagliasacchi. Nilbs: Neural inverse linear blend skinning. arXiv.org, 2004.05980, 2020.
* [24] Boyi Jiang, Yang Hong, Hujun Bao, and Juyong Zhang. Selfrecon: Self reconstruction your digital avatar from monocular video. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
* [25] Wei Jiang, Kwang Moo Yi, Golnoosh Samei, Oncel Tuzel, and Anurag Ranjan. Neuman: Neural human radiance field from a single video. In Proc. of the European Conf. on Computer Vision (ECCV), 2022.
* [26] Ruilong Li, Julian Tanke, Minh Vo, Michael Zollhoefer, Jürgen Gall, Angjoo Kanazawa, and Christoph Lassner. Tava: Template-free animatable volumetric actors. In Proc. of the European Conf. on Computer Vision (ECCV), 2022.
* [27] Siyou Lin, Hongwen Zhang, Zerong Zheng, Ruizhi Shao, and Yebin Liu. Learning implicit templates for point-based clothed human modeling. In Proc. of the European Conf. on Computer Vision (ECCV), 2022.
* [28] Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural sparse voxel fields. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
* [29] Lingjie Liu, Marc Habermann, Viktor Rudnev, Kripasindhu Sarkar, Jiatao Gu, and Christian Theobalt. Neural Actor: Neural free-view synthesis of human actors with pose control. ACM Trans. on Graphics, 2021.
* [30] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multi-person linear model. ACM Trans. on Graphics, 2015.
* [31] Qianli Ma, Jinlong Yang, Anurag Ranjan, Sergi Pujades, Gerard Pons-Moll, Siyu Tang, and Michael J. Black. Learning to dress 3D people in generative clothing. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
* [32] Naureen Mahmood, Nima Ghorbani, Nikolaus F. Troje, Gerard Pons-Moll, and Michael J. Black. AMASS: Archive of motion capture as surface shapes. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
* [33] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3D reconstruction in function space. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
* [34] Marko Mihajlovic, Shunsuke Saito, Aayush Bansal, Michael Zollhoefer, and Siyu Tang. COAP: Compositional articulated occupancy of people. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
* [35] Marko Mihajlovic, Yan Zhang, Michael J Black, and Siyu Tang. LEAP: Learning articulated occupancy of people. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.
* [36] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In Proc. of the European Conf. on Computer Vision (ECCV), 2020.
* [37] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. on Graphics, 41, 2022.
* [38] Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Differentiable volumetric rendering: Learning implicit 3D representations without 3D supervision. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
* [39] Atsuhiro Noguchi, Xiao Sun, Stephen Lin, and Tatsuya Harada. Neural articulated radiance field. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2021.
* [40] Atsuhiro Noguchi, Xiao Sun, Stephen Lin, and Tatsuya Harada. Unsupervised learning of efficient geometry-aware neural articulated representations. In Proc. of the European Conf. on Computer Vision (ECCV), 2022.
* [41] Michael Oechsle, Lars Mescheder, Michael Niemeyer, Thilo Strauss, and Andreas Geiger. Texture fields: Learning texture representations in function space. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
* [42] Pablo Palafox, Aljaž Božič, Justus Thies, Matthias Nießner, and Angela Dai. Neural parametric models for 3D deformable shapes. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2021.
* [43] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. DeepSDF: Learning continuous signed distance functions for shape representation. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
* [44] Sida Peng, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Hujun Bao, and Xiaowei Zhou. Animatable neural radiance fields for human body modeling. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2021.
* [45] Sida Peng, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Xiaowei Zhou, and Hujun Bao. Animatable neural radiance fields for modeling dynamic human bodies. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2021.
* [46] Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, and Andreas Geiger. Convolutional occupancy networks. In Proc. of the European Conf. on Computer Vision (ECCV), 2020.
* [47] Sida Peng, Shangzhan Zhang, Zhen Xu, Chen Geng, Boyi Jiang, Hujun Bao, and Xiaowei Zhou. Animatable neural implicit surfaces for creating avatars from videos. arXiv preprint arXiv:2203.08133, 2022.
* [48] Christian Reiser, Songyou Peng, Yiyi Liao, and Andreas Geiger. Kilonerf: Speeding up neural radiance fields with thousands of tiny mlps. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2021.
* [49] Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, and Hao Li. PIFu: Pixel-aligned implicit function for high-resolution clothed human digitization. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
* [50] Shunsuke Saito, Tomas Simon, Jason Saragih, and Hanbyul Joo. PIFuHD: Multi-level pixel-aligned implicit function for high-resolution 3D human digitization. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
* [51] Shunsuke Saito, Jinlong Yang, Qianli Ma, and Michael J. Black. SCANimate: Weakly supervised learning of skinned clothed avatar networks. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.
* [52] Sara Fridovich-Keil and Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
* [53] Vincent Sitzmann, Michael Zollhöfer, and Gordon Wetzstein. Scene representation networks: Continuous 3D-structure-aware neural scene representations. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
* [54] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
* [55] Towaki Takikawa, Joey Litalien, Kangxue Yin, Karsten Kreis, Charles Loop, Derek Nowrouzezahrai, Alec Jacobson, Morgan McGuire, and Sanja Fidler. Neural geometric level of detail: Real-time rendering with implicit 3D shapes. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.
* [56] Garvita Tiwari, Nikolaos Sarafianos, Tony Tung, and Gerard Pons-Moll. Neural-GIF: Neural generalized implicit functions for animating people in clothing. In International Conference on Computer Vision (ICCV), October 2021.
* [57] Shaofei Wang, Marko Mihajlovic, Qianli Ma, Andreas Geiger, and Siyu Tang. MetaAvatar: Learning animatable clothed human models from few depth images. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
* [58] Shaofei Wang, Katja Schwarz, Andreas Geiger, and Siyu Tang. Arah: Animatable volume rendering of articulated human sdfs. In Proc. of the European Conf. on Computer Vision (ECCV), 2022.
* [59] Chung-Yi Weng, Brian Curless, Pratul P. Srinivasan, Jonathan T. Barron, and Ira Kemelmacher-Shlizerman. HumanNeRF: Free-viewpoint rendering of moving people from monocular video. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
* [60] Yuliang Xiu, Jinlong Yang, Dimitrios Tzionas, and Michael J Black. ICON: Implicit Clothed humans Obtained from Normals. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
* [61] Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Basri Ronen, and Yaron Lipman. Multiview neural surface reconstruction by disentangling geometry and appearance. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
* [62] Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, and Angjoo Kanazawa. Plenoctrees for real-time rendering of neural radiance fields. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2021.
* [63] Jianfeng Zhang, Zihang Jiang, Dingdong Yang, Hongyi Xu, Yichun Shi, Guoxian Song, Zhongcong Xu, Xinchao Wang, and Jiashi Feng. Avatargen: A 3d generative model for animatable human avatars. Arxiv, 2022.
* [64] Yufeng Zheng, Victoria Fernández Abrevaya, Xu Chen, Marcel C Bühler, Michael J Black, and Otmar Hilliges. IMAvatar: Implicit morphable head avatars from videos. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.
|
# Action-GPT: Leveraging Large-scale Language Models for Improved and Generalized Action Generation
Sai Shashank Kalakonda
<EMAIL_ADDRESS>
Shubh Maheshwari
<EMAIL_ADDRESS>
Ravi Kiran Sarvadevabhatla
<EMAIL_ADDRESS>
Centre for Visual Information Technology
IIIT Hyderabad, Hyderabad, INDIA 500032
###### Abstract
We introduce Action-GPT, a plug-and-play framework for incorporating Large
Language Models (LLMs) into text-based action generation models. Action
phrases in current motion capture datasets contain minimal and to-the-point
information. By carefully crafting prompts for LLMs, we generate richer and
fine-grained descriptions of the action. We show that utilizing these detailed
descriptions instead of the original action phrases leads to better alignment
of text and motion spaces. We introduce a generic approach compatible with
stochastic (e.g. VAE-based) and deterministic (e.g. MotionCLIP) text-to-motion
models. In addition, the approach enables multiple text descriptions to be
utilized. Our experiments show (i) noticeable qualitative and quantitative
improvement in the quality of synthesized motions, (ii) benefits of utilizing
multiple LLM-generated descriptions, (iii) suitability of the prompt function,
and (iv) zero-shot generation capabilities of the proposed approach. Code,
pretrained models and sample videos will be made available at
https://actiongpt.github.io.
Figure 2: Action-GPT Overview: Given an action phrase ($x$), we first create a
suitable prompt using an engineered prompt function $f_{\text{prompt}}(x)$.
The result ($x_{\text{prompt}}$) is passed to a large-scale language model
(GPT-3) to obtain multiple action descriptions ($D_{i}$) containing fine-
grained body movement details. The corresponding deep text representations
$v_{i}$ are obtained using Description Embedder. The aggregated version of
these embeddings $v_{aggr}$ is processed by the Text Encoder. During training,
the action pose sequence $H_{1},...,H_{N}$ is processed by a Motion Encoder.
The encoders are associated with a deterministic sampler (autoencoder) [3] or
a VAE style generative model [4, 1]. During training (shown with black), the
latent text embedding $Z_{T}$ and the latent motion embedding $Z_{M}$ are
aligned. During inference (shown in green), the sampled text embedding is
provided to the Motion Decoder, which outputs the generated action sequence
${\widehat{H}}$.
Figure 1: Sample text conditioned action generations from a state-of-the-art
model TEACH [1] (top row) and our large language model-based approach (Action-
GPT-TEACH - bottom row). By incorporating large language models, our approach
results in noticeably improved generation quality for seen and unseen
categories. The conditioning action phrases for seen categories are taken from
BABEL [2] whereas unseen action phrases were provided by a user.
## 1 Introduction
Human motion generation finds a vast set of applications spanning from
entertainment (e.g. game and film industry) to virtual reality and robotics.
There have been significant contributions to category-conditioned human motion
generation [5], with some works capable of generation at scale [6, 7].
However, the generated samples are restricted to a finite set of action
categories. More recent approaches focus on text-conditioned motion generation
by constraining motion and language representations via a jointly optimized
latent space [4, 3, 1].
The recent development of large-scale language models (LLMs) [8, 9] has
triggered a paradigm shift in the field of Natural Language Processing (NLP).
These models, pre-trained on enormous amounts of text [10], have demonstrated
impressive generalization capabilities for challenging zero-shot setting tasks
such as text generation [11]. This exciting advance has also driven progress
for various applications in computer vision [12, 13], including the related
task of pose-based human action recognition [12].
The appeal of LLMs lies in their ability to generate task-relevant text
when provided a so-called prompt - a small piece of text - as input.
Motivated by this observation and the advances mentioned above, we introduce
Action-GPT, an approach that utilizes the generative power of LLMs to improve
the quality and generalization capabilities of action generative models. In
particular, we demonstrate that our plug-and-play approach can be used to
advance existing state-of-the-art motion generation architectures in a
practical manner.
Our contributions are summarized below:
* •
To the best of our knowledge, we are the first to incorporate Large Language
Models (LLMs) for text-conditioned motion generation.
* •
We introduce a carefully crafted prompt function that enables the generation
of meaningful descriptions for a given action phrase.
* •
We introduce Action-GPT, a generic plug-and-play framework which is compatible
with stochastic (e.g. VAE-based [1, 4]) and deterministic (e.g. MotionCLIP
[3]) text-to-motion models. In addition, our framework enables multiple
generated text descriptions to be utilized for action generation.
* •
Via qualitative and quantitative experiments, we demonstrate (i) noticeable
improvement in the quality of synthesized motions, (ii) benefits of utilizing
multiple LLM-generated descriptions, (iii) suitability of the prompt function,
and (iv) zero-shot generation capabilities of the proposed approach.
Code, pretrained models and sample videos will be made available at
https://actiongpt.github.io.
Figure 3: Visual comparison of generated motion sequences across models
trained on Action-GPT framework on BABEL [2] dataset. Note that the
generations using Action-GPT are well-aligned with the semantic information of
action phrases. The example in the bottom right shows latent space editing:
Action-GPT is better able to transfer the drink from mug style from a
standing to a sitting pose.
## 2 Action-GPT
Our objective is to generate an actor performing a motion conditioned on a
given action phrase. The input action phrase is natural language text that
gives a high-level description of the action. It is denoted as a sequence of
words $x=[w_{1},w_{2},...,w_{M}]$. The action is represented as a sequence of
human poses ${H}=\\{H_{1},\dots,H_{n},\dots,H_{N}\\}$ where $N$ represents the
number of timesteps. The human pose $H_{n}\in\mathcal{R}^{|J|\times 6}$, where
$J$ is the number of joints, is the parametric SMPL [14] representation which
encodes the global trajectory and the parent-relative joint rotations using the
6D rotation representation [15].
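For reference, the 6D representation stores the first two columns of a rotation matrix, and the full matrix is recovered by Gram-Schmidt orthogonalization [15]. A minimal NumPy sketch (illustrative, not the authors' code):

```python
import numpy as np

def rotation_6d_to_matrix(d6: np.ndarray) -> np.ndarray:
    """Recover a 3x3 rotation matrix from the 6D representation of Zhou et
    al. [15]: the two 3-vectors are re-orthogonalized via Gram-Schmidt and
    become the first two columns; the third column is their cross product."""
    a1, a2 = d6[:3], d6[3:]
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - np.dot(b1, a2) * b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=-1)  # columns b1, b2, b3

# Sanity check: the 6D vector (1,0,0, 0,1,0) maps to the identity rotation.
R = rotation_6d_to_matrix(np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))
```

This continuous parameterization avoids the discontinuities of Euler angles and quaternions, which is why it is preferred for learned regression of joint rotations.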
Our proposed framework Action-GPT can be incorporated in an autoencoder [3] or
a Variational Auto Encoder [4, 1] based text-conditioned motion generation
model. These motion generation models aim to generate a motion sequence
conditioned on the text input by learning a joint latent space between the
text and motion modalities. The key components in these models are Text
Encoder $\mathcal{T}_{enc}$, Motion Encoder $\mathcal{M}_{enc}$ and Motion
Decoder $\mathcal{M}_{dec}$. The two text and motion encoders encode the text
sequence and motion sequence to text $Z_{T}$ and motion $Z_{M}$ latent
embeddings of the same dimension, respectively. In the case of autoencoders,
the latent embeddings are obtained in a deterministic fashion, whereas in
Variational Auto Encoders, the latent embeddings are sampled from the Gaussian
distribution $\mathcal{N}(\mu,\Sigma)$, where $(\mu,\Sigma)$ are the outputs
of the encoder. The motion decoder, on the other hand, uses the latent
embedding $Z$ as input to generate a sequence of motion poses
${\widehat{H}}=\\{\widehat{H}_{1},\dots,\widehat{H}_{n},\dots,\widehat{H}_{N}\\}$.
Fig. 2 provides an overview of our approach to incorporate LLM (GPT-3 in our
case) into the text-conditioned motion generation models. In contrast to
training directly using the action phrase $x$ from the dataset, our framework
uses carefully crafted GPT-3 generated text descriptions $D_{i}$, which
provide low-level details about the movement of individual body parts. The
proposed framework consists of three steps (1) Constructing a prompt function
$f_{prompt}$, (2) Aggregating multiple GPT-3 generated text descriptions
$D_{i}$, and finally, (3) utilizing the GPT-3 generated text descriptions
$D_{i}$ in T2M models.
### 2.1 Prompt strategy
For a given action phrase $x$, we generate low-level body movement details
using GPT-3 [9]. GPT-3 is an autoregressive transformer model which generates
human-like textual descriptions relevant to the small amount of input text
provided. However, directly providing the action phrase as input to GPT-3
fails to yield the desired detailed body movement information and leads to
unrealistic motion generations (see Fig. 4). This motivates a suitable prompt
function [10]. After multiple empirical trials,
we determine the following prompting function $f_{prompt}$: Describe a
person’s body movements who is performing the action [x] in detail.
Specifically, adding Describe a person’s to the prompt restricts the
description from generic information to character movement. The phrase body
movements forces GPT-3 to explain the motion of individual body parts. Lastly,
in detail forces the descriptions to provide low-level details. Fig. 5
showcases the importance of each component of our prompt function. We provide
GPT-3 generated text description $(D)$ and corresponding generated action
sequence ${\widehat{H}}$ for the action phrase act like a dog along with the
observations in the rightmost column.
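The prompt function $f_{prompt}$ amounts to a single string template; a minimal sketch:

```python
def f_prompt(x: str) -> str:
    """Prompt function from Sec. 2.1; the action phrase replaces [x]:
    'Describe a person's body movements who is performing the action [x]
    in detail.'"""
    return (f"Describe a person's body movements who is performing "
            f"the action {x} in detail.")

x_prompt = f_prompt("act like a dog")
```

Each fragment of the template plays the role described above: "Describe a person's" focuses the output on character movement, "body movements" elicits per-body-part motion, and "in detail" forces low-level descriptions.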
Figure 4: This figure highlights the importance of the prompt function.
Observe that directly feeding the action phrase text ($x$) to GPT-3 results in
poor-quality generations. In contrast, the fine-grained body movement details
in the prompt-based text enable higher fidelity generations (last column).
Note that the coloured text descriptions correspond to different body movement
details. Figure 5: The table showcases the descriptions generated by GPT-3
(D), generated action sequences (${\widehat{H}}$) for the action phrase (x =)
act like a dog using different prompt strategies along with the observations
(right most column). Notice that our prompt function (bottom row) generates
the highest amount of required body movement descriptions, generating the most
realistic action sequence. Note that the coloured text descriptions correspond
to the body movement details.
### 2.2 Aggregating multiple descriptions
Given an action prompt $x_{prompt}$, GPT-3 is capable of generating multiple
textual descriptions $D_{1},\dots,D_{k}$ describing the action-specific
information. The randomly generated $k$ descriptions contain common and
description-specific text segments, which enhance the overall richness of
action description (see Fig. 6). Therefore, we utilize multiple descriptions
as part of the text-processing pipeline. The GPT-3 generated $k$ text
descriptions $D_{1},\dots,D_{k}$ are passed through a Description Embedder
$D_{emb}$ to obtain corresponding description embeddings $v_{1},\dots,v_{k}$.
These $k$ description embeddings are aggregated into a single embedding
$v_{aggr}$ using an Embedding Aggregator $E_{aggr}$. We consider average
operation as our Embedding Aggregator unless stated otherwise.
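With fixed-size description embeddings, the Embedding Aggregator reduces to a mean over the $k$ vectors; a sketch in which the embedding dimension $e=512$ is a placeholder, not the actual embedder dimension:

```python
import numpy as np

def aggregate_embeddings(v: np.ndarray) -> np.ndarray:
    """Embedding Aggregator E_aggr: average k description embeddings of
    shape (k, e) into a single embedding v_aggr of shape (e,)."""
    return v.mean(axis=0)

# k = 4 hypothetical description embeddings of placeholder dimension e = 512.
v = np.random.randn(4, 512)
v_aggr = aggregate_embeddings(v)
```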
### 2.3 Utilizing GPT-3 generated text descriptions in T2M models
The text encoder $\mathcal{T}_{enc}$ inputs the aggregated embedding
$v_{aggr}$, and the outputs are sampled to generate text latent embeddings
$Z_{T}$. In a similar fashion, motion encoder $\mathcal{M}_{enc}$ inputs the
sequence of motion poses ${H}$ = {${H}_{1},\dots,{H}_{N}$}, where
${H}_{n}\in\mathcal{R}^{|J|\times 6}$ and samples motion latent embeddings
$Z_{M}$. In a deterministic approach [3], the latent embeddings are generated
directly as outputs of the encoder, whereas in a VAE-based approach [4, 1],
the encoders generate distribution parameters $\mu$ and $\Sigma$, and the
latent embeddings are sampled from the Gaussian distribution
$\mathcal{N}(\mu,\Sigma)$. The generated text and motion embeddings are
provided to the motion decoder, which generates the human motion sequence
${\widehat{H}}$ = {${\widehat{H}}_{1},\dots,{\widehat{H}}_{N}$}, where
${\widehat{H}}_{i}\in\mathcal{R}^{|J|\times 6}$; this sequence is passed
through forward kinematics to generate the corresponding 3D mesh sequence.
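The VAE-style sampling step can be sketched with the standard reparameterization trick; the latent dimension $d=256$ and the diagonal covariance are placeholder assumptions, not the models' actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu: np.ndarray, sigma: np.ndarray) -> np.ndarray:
    """Sample Z ~ N(mu, diag(sigma^2)) via the reparameterization trick,
    as in the VAE-based variants [4, 1]; sigma holds per-dimension
    standard deviations so gradients can flow through mu and sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

mu = np.zeros(256)           # placeholder latent dimension d = 256
sigma = 0.1 * np.ones(256)
z_t = sample_latent(mu, sigma)
```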
Figure 6: This figure highlights the importance of using multiple GPT-3
generated descriptions ($D_{1},...,D_{k}$), $k=4$ for each action phrase in
Action-GPT framework for TEACH [1]. Notice the visibly improved generation
quality when multiple prompted descriptions are used (right column). Body
movement text common across descriptions is highlighted in blue. Movements
unique to each description are highlighted in pink.
## 3 Experiments
BABEL [2] is a large dataset with language labels describing the actions being
performed in motion capture sequences. It contains about 43 hours of mocap
sequences, comprising over 65k textual labels which belong to over 250 unique
action categories. We primarily focus our results on BABEL, considering its
vast and diverse set of motion sequences assigned to short text sequences,
which contain an average of 3-4 words. The action phrases of the BABEL dataset
are to the point and precise about the action information without any
additional details about the actor.
### 3.1 Models
We demonstrate our framework on state-of-the-art text conditioned motion
generation models – TEMOS [4], MotionCLIP [3] and TEACH [1]. Since TEACH is an
extension of TEMOS and uses only pairs of motion data of BABEL, we also
demonstrate our results on TEMOS by retraining it on all single action data
segments of BABEL. We train these three models as per our framework using
their publicly available codes and will call them Action-GPT-[model] further.
Action-GPT-MotionCLIP: Similar to MotionCLIP [3], we use CLIP’s pretrained
text encoder [16] as our description embedder, but we input all $k$ text
descriptions and generate the aggregated vector representation $v_{aggr}$
using the embedding aggregator. Note that, as in MotionCLIP, we use the frozen
CLIP-ViT-B/32 model. We follow the same motion auto-encoder setup as that of
MotionCLIP. There is no additional text encoder and sampling process as the
constructed $v_{aggr}$ itself is used as the text embedding $Z_{T}$, and the
output of the motion encoder is used as the motion embedding $Z_{M}$.
Action-GPT-TEMOS: Instead of providing action phrase text, we pass multiple
textual descriptions extracted from GPT-3 to DistilBERT [17] to obtain
description embedding $v_{i}\in R^{n_{i}\times e}$, where $n_{i}$ is the
number of words in description $D_{i}$ and $e$ is the DistilBERT embedding
dimension, thus $v_{aggr}\in R^{G\times e}$ where $G=max(n_{i})$. Note that
similar to TEMOS, we use pre-trained DistilBERT and freeze its weights during
training. The text encoder, motion encoder, and motion decoder used are the
same as those of TEMOS. The sampled text embedding $Z_{T}$ and motion embedding
$Z_{M}$ are both $\in R^{d}$ where $d$ is the dimension of latent space.
Action-GPT-TEACH: Since TEACH [1] is an extension of TEMOS, the process of
generating description embeddings $v_{i}$ is the same as that of in Action-
GPT-TEMOS. The text encoder, motion encoder, and motion decoder are the
same as those of TEACH. As TEACH is trained on pairs of action data, the
training iteration consists of two forward passes where an action phrase and
its corresponding motion sequence are provided as input in each pass. In
addition, a set of the last few frames of generated motion in the first pass
are also provided as input in the second pass. In both passes, our framework
uses the generated description embeddings corresponding to the input action
phrases.
Detailed diagrams illustrating the differences between original architectures
and their LLM-based variants along with the additional details regarding
training and testing can be found in Appendix 5.
### 3.2 Implementation details
We access GPT-3 via OpenAI API Beta Access program. Unless stated otherwise,
we use the largest GPT-3 model available, davinci-002. The Action-GPT prompt
strategy consumes a maximum of 140 tokens together for prompt and generation.
We use the completions API endpoint with the parameters temperature and top-p
set to 0.5 and 1, ensuring we have well-defined diverse descriptions. All the
other parameters are set to their defaults. We conduct all our experiments on
cluster machines with Intel Xeon E5-2640 v4 CPUs and Nvidia GeForce GTX Ti
12GB GPUs running Ubuntu 16.04.
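The request described above could be assembled as follows; the parameter names follow the legacy OpenAI Completions API, while the model identifier and treating the 140-token budget as `max_tokens` are our assumptions, not the authors' exact code:

```python
def build_completion_request(x_prompt: str, k: int = 4) -> dict:
    """Assemble the GPT-3 request parameters from Sec. 3.2 (sketch)."""
    return {
        "model": "text-davinci-002",  # assumed identifier for davinci-002
        "prompt": x_prompt,
        "n": k,                       # k descriptions D_1..D_k per phrase
        "max_tokens": 140,            # paper's token budget (assumption:
                                      # applied as the completion limit)
        "temperature": 0.5,
        "top_p": 1,
    }

req = build_completion_request(
    "Describe a person's body movements who is performing "
    "the action walk in detail.")
# openai.Completion.create(**req) would return k choices; the network
# call is omitted here.
```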
### 3.3 Quantitative analysis
We follow the metrics employed in TEACH [1] for quantitative evaluation,
namely Average Positional Error (APE) and Average Variational Error (AVE),
measured on the root joint and the rest of the body joints separately. Mean
local corresponds to the joint position in the local coordinate system (with
respect to the root), whereas mean global corresponds to the joint position in
the global coordinate system. The APE and AVE for a joint are the averages of
the L2 distances between the generated and ground truth joint positions and
variances, respectively – refer to appendix 8 for details. Tab. 1 summarizes
the results of using our framework in comparison with the default setup for
each model. Incorporating detailed descriptions using GPT-3 shows an
improvement over all the APE (except for MotionCLIP) and AVE metrics. The
root-joint metrics for MotionCLIP are empty since it generates only the local
pose without any locomotion.
### 3.4 Qualitative analysis
In Fig. 3, we provide qualitative comparisons of the model generations. We
observe that the generations from our framework are more realistic and well-
aligned with the semantic information of the action phrases compared to the
default approach. The generations are able to capture the low-level fine-
grained details of the action suggested by the original text phrase input.
Please refer to the appendix for video examples of zero-shot generations,
comparisons with baselines, and examples showing diversity in generated
sequences.
| Model | Method | APE root joint | APE global traj | APE mean local | APE mean global | AVE root joint | AVE global traj | AVE mean local | AVE mean global |
|---|---|---|---|---|---|---|---|---|---|
| MotionCLIP | Default | - | - | 0.556 | 0.541 | - | - | 0.056 | 0.020 |
| | Action-GPT | - | - | 0.590 | 0.571 | - | - | 0.042 | 0.019 |
| TEMOS | Default | 0.597 | 0.574 | 0.162 | 0.644 | 0.113 | 0.112 | 0.010 | 0.122 |
| | Action-GPT | 0.561 | 0.540 | 0.151 | 0.605 | 0.101 | 0.100 | 0.010 | 0.109 |
| TEACH | Default | 0.674 | 0.654 | 0.159 | 0.717 | 0.222 | 0.220 | 0.014 | 0.234 |
| | Action-GPT | 0.606 | 0.586 | 0.159 | 0.650 | 0.204 | 0.202 | 0.014 | 0.216 |
Table 1: Quantitative evaluation on the BABEL test set. APE = Average Positional Error (↓), AVE = Average Variance Error (↓).
| Architectural Component | Ablation Details | APE root joint | APE global traj | APE mean local | APE mean global | AVE root joint | AVE global traj | AVE mean local | AVE mean global |
|---|---|---|---|---|---|---|---|---|---|
| Number of generated descriptions ($k$) | $k=1$ | 0.655 | 0.635 | 0.159 | 0.698 | 0.216 | 0.214 | 0.015 | 0.228 |
| | $k=2$ | 0.637 | 0.617 | 0.158 | 0.680 | 0.211 | 0.209 | 0.015 | 0.223 |
| | $k=8$ | 0.632 | 0.613 | 0.157 | 0.674 | 0.212 | 0.210 | 0.015 | 0.224 |
| GPT-3 capacity | curie | 0.642 | 0.622 | 0.159 | 0.680 | 0.216 | 0.214 | 0.015 | 0.228 |
| Ours ($k=4$) | davinci | 0.606 | 0.586 | 0.158 | 0.650 | 0.204 | 0.202 | 0.014 | 0.216 |
Table 2: Performance scores for ablative variants. APE = Average Positional Error (↓), AVE = Average Variance Error (↓).
### 3.5 Ablations
We perform an ablation study to understand the underlying effects of the
Action-GPT framework. All of the ablation experiments are carried out on the
Action-GPT-TEACH model unless stated otherwise, as it is capable of handling a
series of action phrases as input.
Number of GPT-3 Text Sequences: We analyzed the influence of the number of
generated descriptions in the Action-GPT-TEACH framework by varying $k$ over
$\{1, 2, 4, 8\}$. We observed that for all values of $k$, Action-GPT-TEACH performs
better than the default TEACH and the best results are obtained for $k=4$ (see
Tab. 2). Increasing $k$ improves performance up to a point. However,
aggregating too many descriptions can lead to the
injection of excessive noise, which dilutes the presence of text related to
body movement details.
Language Model Capacity: OpenAI provides GPT-3 in different model capacities,
davinci being the largest. We analyzed the influence of curie, the second
largest GPT-3 model, on the motion sequence generations of the Action-GPT-
TEACH $(k=4)$ framework. Results show that a larger model capacity helps in
generating more realistic motion sequences, as the generated text descriptions
provide more relevant and detailed information.
## 4 Discussion and Conclusion
The key to good quality and generalizable text-conditioned action generation
models lies in improving the alignment between the text and motion
representations. Through our Action-GPT framework, we show that such alignment
can be achieved efficiently by employing Large Language Models whose operation
is guided by a judiciously crafted prompt function. Sentences in GPT-generated
descriptions contain procedural text which corresponds to sub-actions. During
training, the regularity and frequency of such procedural text likely enable
better alignment with corresponding motion sequence counterparts. We also
hypothesize that the diversity of procedural sentences in the descriptions
enables better compositionality for unseen (zero-shot) generation settings.
The plug-and-play nature of our approach is practical for adoption within
state-of-the-art text-conditioned action generation models. Our experimental
results demonstrate the generalization capabilities and action fidelity
improvement for multiple adopted models, qualitatively and quantitatively. In
addition, we also highlight the role of various prompt function components and
the benefit of utilizing multiple prompts for improved generation quality.
## References
* [1] Nikos Athanasiou, Mathis Petrovich, Michael J. Black, and Gül Varol, “TEACH: Temporal Action Compositions for 3D Humans,” in International Conference on 3D Vision (3DV), 2022.
* [2] Abhinanda R. Punnakkal, Arjun Chandrasekaran, Nikos Athanasiou, Alejandra Quiros-Ramirez, and Michael J. Black, “BABEL: Bodies, action and behavior with english labels,” in Proceedings IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), June 2021, pp. 722–731.
* [3] Guy Tevet, Brian Gordon, Amir Hertz, Amit H Bermano, and Daniel Cohen-Or, “MotionCLIP: Exposing human motion generation to CLIP space,” arXiv preprint arXiv:2203.08063, 2022.
* [4] Mathis Petrovich, Michael J. Black, and Gül Varol, “TEMOS: Generating diverse human motions from textual descriptions,” in European Conference on Computer Vision (ECCV), 2022.
* [5] Mathis Petrovich, Michael J. Black, and Gül Varol, “Action-conditioned 3D human motion synthesis with transformer VAE,” in International Conference on Computer Vision (ICCV), 2021.
* [6] Shubh Maheshwari, Debtanu Gupta, and Ravi Kiran Sarvadevabhatla, “Mugl: Large scale multi person conditional action generation with locomotion,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), January 2022, pp. 257–265.
* [7] Debtanu Gupta, Shubh Maheshwari, Sai Shashank Kalakonda, Manasvi, and Ravi Kiran Sarvadevabhatla, “Dsag: A scalable deep framework for action-conditioned multi-actor full body motion synthesis,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), January 2023.
* [8] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro, “Megatron-LM: Training multi-billion parameter language models using model parallelism,” arXiv preprint arXiv:1909.08053, 2019.
* [9] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al., “Language models are few-shot learners,” arXiv preprint arXiv:2005.14165, 2020.
* [10] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig, “Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing,” 2021.
* [11] Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-Woo Lee, and Woomyeong Park, “Gpt3mix: Leveraging large-scale language models for text augmentation,” 2021.
* [12] Wangmeng Xiang, Chao Li, Yuxuan Zhou, Biao Wang, and Lei Zhang, “Language supervised training for skeleton-based action recognition,” 2022.
* [13] Sarah Pratt, Rosanne Liu, and Ali Farhadi, “What does a platypus look like? generating customized prompts for zero-shot image classification,” 2022.
* [14] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black, “SMPL: A skinned multi-person linear model,” ACM Trans. Graphics (Proc. SIGGRAPH Asia), vol. 34, no. 6, pp. 248:1–248:16, Oct. 2015.
* [15] Yi Zhou, Connelly Barnes, Lu Jingwan, Yang Jimei, and Li Hao, “On the continuity of rotation representations in neural networks,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
* [16] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever, “Learning transferable visual models from natural language supervision,” in Proceedings of the 38th International Conference on Machine Learning, Marina Meila and Tong Zhang, Eds. 18–24 Jul 2021, vol. 139 of Proceedings of Machine Learning Research, pp. 8748–8763, PMLR.
* [17] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf, “DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter,” 2019.
Appendix
## 5 Models
We demonstrate our framework on state-of-the-art text conditioned motion
generation models – TEMOS [4], MotionCLIP [3] and TEACH [1], all trained on
BABEL [2]. We name their LLM (i.e. GPT here) based variants as Action-
GPT-[model].
Action-GPT-MotionCLIP: In MotionCLIP [3], for a given action phrase $x$, the
CLIP text embedding of the phrase $x$ is considered as its corresponding
latent text embedding $Z_{T}$, whereas in our Action-GPT framework the
aggregated vector embedding $v_{aggr}\in R^{c}$, where $c$ is the CLIP text
embedding dimension, constructed for the action phrase $x$ is considered as
its latent text embedding $Z_{T}$. In detail, we first construct $x_{prompt}$
using the action phrase $x$ and prompt function $f_{prompt}$. The $x_{prompt}$
is then input to LLM (i.e. GPT-3) to generate $k$ textual descriptions
$D_{i}$. Using the CLIP text encoder, we then construct $k$ corresponding CLIP
text embeddings $v_{i}$. These $k$ CLIP text embeddings $v_{i}$ are then
aggregated into a single embedding $v_{aggr}$ using an Embedding aggregator
(average operation here). The constructed $v_{aggr}$ is the corresponding
latent text embedding $Z_{T}$ for the action phrase $x$ (see Fig. 7).
We trained this model using the same training configurations as mentioned in
MotionCLIP [3] and provided the results on the test split. We generated the
metrics for the baseline MotionCLIP [3] using the pre-trained model provided.
Figure 7: Action-GPT-MotionCLIP Overview: We extend MotionCLIP [3] by
incorporating LLM (i.e. GPT-3). The box highlighted in green showcases the
generation of $k$ text descriptions $D_{i}$ as the output of LLM on input
$x_{prompt}$, which is constructed using the prompt function $f_{prompt}$ and
action phrase $x$. The box highlighted in blue showcases the aggregation of
description embeddings $v_{i}$, outputs of the CLIP text encoder. The
aggregated embedding $v_{aggr}$ is considered as the latent text embedding
$Z_{T}$, on which the text loss $\mathcal{L}_{text}$ is computed. All the
other components apart from the highlighted boxes represent the original
architecture of MotionCLIP [3].
Action-GPT-TEMOS: Instead of providing the action phrase $x$ directly to
DistilBERT [17] as in TEMOS [4], we first construct $x_{prompt}$ and
generate $k$ textual descriptions $D_{i}$ using LLM (i.e. GPT-3). These $k$
textual descriptions are then input to DistilBERT [17] to obtain $k$
corresponding description embeddings $v_{i}\in R^{n_{i}\times e}$, where
$n_{i}$ is the number of words in description $D_{i}$ and $e$ is the
DistilBERT embedding dimension. The $k$ description embeddings are aggregated
to a single embedding $v_{aggr}\in R^{G\times e}$, where $G=max(n_{i})$ using
the average operation. This aggregated embedding $v_{aggr}$ is then used as
input to the Text Encoder along with the text distribution tokens
$\mu_{token}^{T}$ and $\Sigma_{token}^{T}$. The subsequent training and
inference process is carried out the same as in TEMOS [4] (see Fig. 8).
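The aggregation over variable-length token embeddings can be sketched as below; zero-padding each $v_{i}$ to $G$ tokens before averaging is our assumption, as the paper specifies only the average operation:

```python
import numpy as np

def aggregate_token_embeddings(vs: list) -> np.ndarray:
    """Aggregate k per-token description embeddings v_i of shape (n_i, e)
    into v_aggr of shape (G, e), with G = max(n_i), by zero-padding each
    v_i to G tokens and averaging over the k descriptions."""
    e = vs[0].shape[1]
    G = max(v.shape[0] for v in vs)
    padded = np.zeros((len(vs), G, e))
    for i, v in enumerate(vs):
        padded[i, : v.shape[0]] = v
    return padded.mean(axis=0)

# Two hypothetical descriptions of 3 and 5 tokens, embedding dim e = 8.
v_aggr = aggregate_token_embeddings([np.ones((3, 8)), np.ones((5, 8))])
```

Positions covered by both descriptions average to 1.0, while positions present in only the longer description average the real embedding with the zero padding.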
We trained this model on a 4 GPU setup with a batch size of 4 on each GPU,
keeping all other parameters the same as provided in TEMOS [4]. We trained the
baseline model of TEMOS [4] and its LLM-based variant on BABEL [2] using the
single action data segments provided by TEACH [1] and generated the metrics on
the corresponding test set.
Figure 8: Action-GPT-TEMOS Overview: We extend TEMOS [4] by incorporating LLM
(i.e. GPT-3). The box highlighted in green showcases the generation of $k$
text descriptions $D_{i}$ using LLM on input $x_{prompt}$, which is
constructed using the prompt function $f_{prompt}$ and action phrase $x$. The
$k$ text descriptions $D_{i}$ are input to DistilBERT to generate
corresponding description embeddings $v_{i}$. The box highlighted in blue
showcases the aggregation of description embeddings $v_{i}$ to
$v_{aggr}^{1:G}$ using an embedding aggregator. All the other components apart
from the highlighted boxes represent the original architecture of TEMOS [4].
Figure 9: Action-GPT-TEACH Overview: We extend TEACH [1] by incorporating LLM
(i.e. GPT-3). As TEACH can generate an action sequence for a series of action
phrases, we use $x^{i}$ to denote the $i^{th}$ phrase. The box highlighted in
green showcases the $k$ generated text descriptions $D_{j}^{i}$ using LLM on
input $x_{prompt}^{i}$, which is constructed using the prompt function
$f_{prompt}$ and action phrase $x^{i}$. The $k$ text descriptions $D_{j}^{i}$
are input to DistilBERT to generate corresponding sentence embeddings
$v_{j}^{i}$, which are aggregated into a single embedding $v_{aggr}^{i,1:G}$
using an embedding aggregator. All the other components apart from the
highlighted boxes represent the original architecture of TEACH [1].
Action-GPT-TEACH: Since TEACH [1] is an extension of TEMOS [4], the process of
generating the aggregated description embedding $v_{aggr}$ is the same as that of
Action-GPT-TEMOS. In addition, TEACH can generate an action sequence for a
series of action phrases as input $x=\\{x^{1},\dots,x^{i},\dots,x^{s}\\}$. So,
we compute $v_{aggr}$ $s$ number of times, once for each phrase $x^{i}$. In
detail, for an action phrase $x^{i}$ we generate $k$ text descriptions
$D_{1}^{i},..,D_{j}^{i},..,D_{k}^{i}$. The $k$ text descriptions are input to
DistilBERT to generate corresponding description embeddings
$v_{1}^{i},..,v_{j}^{i},..,v_{k}^{i}$, where $v_{j}^{i}\in R^{n_{j}^{i}\times
e}$, where $n_{j}^{i}$ is the number of words in description $D_{j}^{i}$ and
$e$ is the DistilBERT embedding dimension. The $k$ description embeddings
$v_{j}^{i}$ are aggregated to a single embedding $v_{aggr}^{i}\in R^{G\times
e}$, where $G=max(n_{j}^{i})$ using the average operation. The aggregated
embedding $v_{aggr}^{i}$ is input to Past-conditioned Text Encoder $T_{enc}$
along with the learnable tokens $\mu_{token},\sum_{token},$ SEP token and
${{I}}^{i-1}_{N_{i-1}-P:N_{i-1}}$, the motion features generated using Past
Encoder, corresponding to the last $P$ frames of previous generated action
sequence ${\widehat{H}}^{i-1}_{N_{i-1}-P:N_{i-1}}$. The subsequent training and
inference process is the same as that of TEACH (see Fig. 9).
Similar to Action-GPT-TEMOS, we trained this model on a 4-GPU setup with a
batch size of 4 on each GPU, keeping all other parameters the same as provided
in TEACH [1]. We generated the metrics for the baseline TEACH [1] using the
provided pre-trained model.
## 6 Diverse Generations
Our Action-GPT framework can generate diverse action sequences for a given
action phrase $x$, utilizing the capability of LLMs to generate diverse text
descriptions for a given prompt. We generate multiple text descriptions
$D_{i}$ for an action phrase $x$, and using them as input, multiple action
sequences ${\widehat{H}}_{i}$ are generated.
## 7 Zero-Shot Generations
Our approach enables the generation of action sequences ${\widehat{H}}$ for
unseen action phrases (zero-shot). The intuition behind this is that our
Action-GPT framework aligns the text and motion spaces using textual
information about low-level, detailed body movements instead of using the
action phrase directly. Hence, while the action phrase itself may be unseen,
the low-level body movement details in the corresponding generated text
descriptions $D_{i}$ will not be completely unseen.
Results comparing the baseline models to their LLM-based variants, along with
diverse and zero-shot generations, can be found at
https://actiongpt.github.io.
Figure 10: Actions with locomotion generated using Action-GPT-TEACH. Each text
color (column 2) corresponds to the detailed text generated by GPT-3, and the
mesh of the same color shows the pose sequence generated for that sub-action.
The green curve shows the trajectory; the red points mark the start and end of
the motion. Action-GPT is able to generate diverse examples involving
locomotion, such as walking in a circle, running, hopping over an obstacle, and
kicking.
## 8 Metrics
We show results using the generative quality metrics used in [4, 1],
namely Average Positional Error (APE) and Average Variational Error (AVE). For
all the metrics, the smaller the score, the better the generative quality.
Average Positional Error (APE): For a joint $j$, APE is calculated as the
average of the L2 distances between the generated and ground-truth joint
positions over the timesteps ($N$) and the number of test samples ($S$).
APE$[j]=\dfrac{1}{SN}\sum_{s\in S}\sum_{n\in
N}\|\widehat{H}_{s,n}[j]-H_{s,n}[j]\|_{2}$
Average Variational Error (AVE): For a joint $j$, AVE is calculated as the
average of the L2 distances between the generated and ground-truth variances.
AVE$[j]=\dfrac{1}{S}\sum_{s\in
S}\|\widehat{\sigma}_{s}[j]-\sigma_{s}[j]\|_{2}$
where $\sigma[j]$ denotes the variance of the joint $j$,
$\sigma[j]=\dfrac{1}{N-1}\sum_{n\in N}(\widetilde{H}[j]-H_{n}[j])^{2}$
$\widetilde{H}[j]$ is calculated as the mean of the joint $j$ over $N$
timesteps.
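As an illustration (our own sketch, not code released with the paper), the APE and AVE definitions above translate directly to NumPy; the array layout `(S, N, J, 3)` for samples, timesteps, joints, and coordinates is an assumption:

```python
import numpy as np

def ape(H_hat, H):
    """Average Positional Error per joint: mean L2 distance between generated
    and ground-truth joint positions over samples S and timesteps N.
    H_hat, H: arrays of shape (S, N, J, 3)."""
    return np.linalg.norm(H_hat - H, axis=-1).mean(axis=(0, 1))  # shape (J,)

def ave(H_hat, H):
    """Average Variational Error per joint: mean L2 distance between the
    per-sample joint-position variances (unbiased, over timesteps)."""
    var_hat = H_hat.var(axis=1, ddof=1)  # (S, J, 3)
    var_gt = H.var(axis=1, ddof=1)
    return np.linalg.norm(var_hat - var_gt, axis=-1).mean(axis=0)  # shape (J,)
```

The root-joint, global-trajectory, and local/global mean variants described next then follow by selecting the appropriate joints and coordinates before calling these helpers.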
We calculate four error variants for both APE and AVE:
* •
root joint error is calculated on the root joint using all three coordinates
$X$, $Y$, and $Z$.
* •
global traj error is calculated on the root joint using only the two
coordinates $X$ and $Y$.
* •
mean local error is calculated as the average of all the joint errors in the
local coordinate system with respect to the root joint.
* •
mean global error is calculated as the average of all the joint errors in the
global coordinate system.
## 9 Locomotion and root movement
Fig. 10 shows the diverse actions involving locomotion generated using Action-
GPT. Our prompt strategy provides detailed descriptions which include the
direction of the locomotion along with the coordination among relevant body
parts. We further verify this quantitatively by evaluating the Average
Positional Error and Average Variational Error of the global trajectory
metric; see Table 1. We observe that the Action-GPT variants are better
aligned with the root trajectory than the baselines.
## 10 Current limitations
* •
Finger motion: Current frameworks use SMPL to represent the pose, and SMPL
does not contain detailed finger joints. Therefore, actions requiring detailed
finger motion, such as rock-paper-scissors or pointing fingers, cannot be
generated satisfactorily by the current framework, even from GPT-generated
descriptions.
* •
Complex Actions: Actions containing complex and vigorous body movements such
as yoga and dance poses cannot be generated by our current framework.
* •
Long duration action sequences: Due to the limited duration of training data
action sequences ($<10$ secs), our method cannot generate long sequences.
Model | Method | Avg. Training Time (secs) | Avg. Inference Time (secs)
---|---|---|---
MotionCLIP | Default | 585 - 598 | 0.32 - 0.34
| Action-GPT | 700 - 712 | 0.53 - 0.62
TEMOS | Default | 225 - 230 | 0.8 - 0.96
| Action-GPT | 255 - 260 | 1.44 - 1.76
TEACH | Default | 234 - 240 | 1.6 - 2.1
| Action-GPT | 315 - 320 | 3.4 - 3.8
Table 3: Computation costs of baselines and their Action-GPT ($k=4$) variants.
## 11 Computation cost analysis
The number of parameters of the Action-GPT variants is unchanged relative to
the baseline models, since we use frozen pre-trained text embedding models.
However, the computation time during both training and inference is higher for
the Action-GPT ($k=4$) variants than for the original baselines, as shown in
Table 3. For training time, we take the mean of the time taken over one
hundred epochs. For inference time, we use a batch size of $16$ and average
over $10$ repetitions. The increase in computation time of the Action-GPT
variants is due to the use of $k$ GPT-3 text descriptions, each containing
around $128$ words, whereas the baseline models use a single action phrase of
around $5$-$8$ words.
# Convergence Analyses of Davis–Yin Splitting via Scaled Relative Graphs II:
Convex Optimization Problems
Soheun Yi<EMAIL_ADDRESS>Ernest K. Ryu<EMAIL_ADDRESS>
###### Abstract
The prior work of [arXiv:2207.04015, 2022] used scaled relative graphs (SRG)
to analyze the convergence of Davis–Yin splitting (DYS) iterations on monotone
inclusion problems. In this work, we use this machinery to analyze DYS
iterations on convex optimization problems and obtain state-of-the-art linear
convergence rates.
††journal: Journal of Mathematical Analysis and Applications
[label1] organization=Seoul National University, Department of Mathematical
sciences, addressline=1 Gwanak-ro, Gwanak-gu, city=Seoul, postcode=08826,
country=Korea
## 1 Introduction
Consider the problem
$\begin{array}[]{ll}\underset{x\in{\mathcal{H}}}{\mbox{minimize}}&f(x)+g(x)+h(x),\end{array}$
(1)
where ${\mathcal{H}}$ is a Hilbert space, $f$, $g$, and $h$ are convex,
closed, and proper functions, and $h$ is differentiable with $L$-Lipschitz
continuous gradients. The Davis–Yin splitting (DYS) [1] solves this problem by
performing the fixed-point iteration with
${\varbbT}={\varbbI}-\mathrm{Prox}_{\alpha g}+\mathrm{Prox}_{\alpha
f}(2\mathrm{Prox}_{\alpha g}-{\varbbI}-\alpha\nabla h\mathrm{Prox}_{\alpha
g}),$ (2)
where $\alpha>0$, $\mathrm{Prox}_{\alpha f}$ and $\mathrm{Prox}_{\alpha g}$
are the proximal operators with respect to $\alpha f$ and $\alpha g$, and
${\varbbI}$ is the identity mapping. DYS has been used as a building block for
various
algorithms for a diverse range of optimization problems [2, 3, 4, 5, 6, 7].
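To make the iteration with (2) concrete, here is a minimal sketch (a toy example of ours, not from the paper) running the DYS fixed-point iteration on three quadratics, for which the proximal operators and the minimizer of $f+g+h$ are available in closed form:

```python
import numpy as np

# Toy instance: f = (p/2)||x||^2, g = (q/2)||x-b||^2, h = (r/2)||x-c||^2.
# All parameter values below are illustrative choices.
p, q, r = 1.0, 2.0, 0.5
b = np.array([1.0, -2.0])
c = np.array([3.0, 0.5])
alpha = 0.5                        # step size; grad h is r-Lipschitz, alpha < 2/r

prox_f = lambda x: x / (1 + alpha * p)                    # Prox_{alpha f}
prox_g = lambda x: (x + alpha * q * b) / (1 + alpha * q)  # Prox_{alpha g}
grad_h = lambda x: r * (x - c)                            # gradient of h

def T(z):
    """One application of the DYS operator (2)."""
    zg = prox_g(z)
    return z - zg + prox_f(2 * zg - z - alpha * grad_h(zg))

z = np.zeros(2)
for _ in range(200):
    z = T(z)
x = prox_g(z)                                 # minimizer recovered from the fixed point
x_star = (q * b + r * c) / (p + q + r)        # closed-form minimizer of f + g + h
```

Note that the solution of (1) is read off as $\mathrm{Prox}_{\alpha g}(z^{\star})$, not as the fixed point $z^{\star}$ itself.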
Much prior work has been dedicated to analyzing the convergence rate of DYS
iterations [1, 8, 9, 10, 11, 12, 13]. Recently, Lee, Yi, and Ryu [14]
leveraged the recently introduced scaled relative graphs (SRG) [15] to obtain
tighter analyses. However, the focus of [14] was on DYS applied to general
class monotone operators, rather than the narrower class of subdifferential
operators.
In this paper, we use the SRG theory of [14] to analyze the linear convergence
rates of DYS applied to convex optimization problems and obtain state-of-the-
art rates.
### 1.1 Prior works
Splitting methods for monotone inclusion problems have been a potent tool for
explaining and deriving many convex optimization algorithms [16, 17, 18].
Renowned examples of this methodology include forward-backward splitting (FBS)
[19, 20], Douglas–Rachford splitting (DRS) [21, 22, 23], and the alternating
directions method of multipliers (ADMM) [24], which have been widely used in
applications. While FBS and DRS address the sum of two monotone operators,
Davis and Yin proposed a splitting method for the sum of three monotone
operators [1], thereby unifying the aforementioned methods. It has found a
variety of applications [2, 3, 4, 5, 6, 7] and inspired many variants,
including stochastic DYS [25, 26, 27, 28, 29], inexact DYS [30], adaptive DYS
[31], inertial DYS [32], and primal-dual DYS [33].
Compared to the substantial number of studies on DYS, there is little
literature on the linear convergence analysis of the DYS iteration. One
approach is to formulate SDPs that numerically find the tight contraction
factors of DYS: Ryu, Taylor, Bergeling, and Giselsson [11] and Wang, Fazlyab,
Chen, and Preciado [12] carried out this approach using the performance
estimation problem (PEP) and integral quadratic constraints (IQC),
respectively. However, this approach does not give an analytical expression
for the contraction factors, and far less literature provides one; two such
analyses are due to Davis and Yin [1] and to Condat and Richtárik [13], via
inequality arguments. On the other hand, Lee, Yi, and Ryu [14] took a
different approach utilizing scaled relative graphs.
This novel tool, the scaled relative graphs (SRG) [15], renders a new approach
to analyzing the behavior of multi-valued operators (in particular, including
nonlinear operators) by mapping them to the extended complex plane. This
theory was further studied by Huang, Ryu, and Yin [34], where they identified
the SRG of normal matrices. Furthermore, Pates leveraged the
Toeplitz–Hausdorff theorem to identify SRGs of linear operators [35]. This
approach has also been used for analyzing nonlinear operators. To prove the
tightness of the averaged coefficient of the composition of averaged operators
[36] and the DYS operator [1], Huang, Ryu, and Yin performed analyses of SRGs
of those operators [37]. Moreover, there is some literature on applying SRGs
to control theory. SRGs have been leveraged by Chaffey, Forni, and Sepulchre
to examine input-output properties of feedback systems [38, 39]. Chaffey and
Sepulchre have further applied them as an experimental tool to characterize
the behavior of a given model [40, 41, 42].
### 1.2 Preliminaries
##### Multi-valued operators.
In general, we follow notations regarding multi-valued operators as in [16,
18]. Write ${\mathcal{H}}$ for a real Hilbert space with inner product
$\langle\cdot,\cdot\rangle$ and norm $\lVert\cdot\rVert$. To represent that
${\varbbA}$ is a multi-valued operator defined on ${\mathcal{H}}$, write
${\varbbA}\colon{\mathcal{H}}\rightrightarrows{\mathcal{H}}$, and define its
domain as
$\mathrm{dom}\,{\varbbA}=\left\\{x\in{\mathcal{H}}\,|\,{\varbbA}x\neq\emptyset\right\\}$.
We say ${\varbbA}$ is single-valued if all outputs of ${\varbbA}$ are
singletons or the empty set, and identify ${\varbbA}$ with the function from
$\mathrm{dom}\,{\varbbA}$ to ${\mathcal{H}}$.
Define the graph of an operator ${\varbbA}$ as
$\mathrm{graph}({\varbbA})=\\{(x,u)\in{\mathcal{H}}\times{\mathcal{H}}\,|\,u\in{\varbbA}x\\}.$
We do not distinguish ${\varbbA}$ and $\mathrm{graph}({\varbbA})$ for the sake
of notational simplicity. For instance, it is valid to write
$(x,u)\in{\varbbA}$ to mean $u\in{\varbbA}x$. Define the inverse of
${\varbbA}$ as
${\varbbA}^{-1}=\\{(u,x)\,|\,(x,u)\in{\varbbA}\\},$
scalar multiplication with an operator as
$\alpha{\varbbA}=\\{(x,\alpha u)\,|\,(x,u)\in{\varbbA}\\},$
the identity operator as
${\varbbI}=\\{(x,x)\,|\,x\in{\mathcal{H}}\\},$
and
${\varbbI}+\alpha{\varbbA}=\\{(x,x+\alpha u)\,|\,(x,u)\in{\varbbA}\\}$
for any $\alpha\in\mathbb{R}$. Define the resolvent of ${\varbbA}$ with
stepsize $\alpha>0$ as
${\varbbJ}_{\alpha{\varbbA}}=({\varbbI}+\alpha{\varbbA})^{-1}.$
Note that ${\varbbJ}_{\alpha{\varbbA}}$ is a single-valued operator if
${\varbbA}$ is monotone, or equivalently if
$\langle x-y,u-v\rangle\geq 0$ for all $(x,u)$, $(y,v)\in{\varbbA}$. Define
addition and composition of operators
${\varbbA}\colon{\mathcal{H}}\rightrightarrows{\mathcal{H}}$ and
${\varbbB}\colon{\mathcal{H}}\rightrightarrows{\mathcal{H}}$ as
$\displaystyle{\varbbA}+{\varbbB}$
$\displaystyle=\left\\{(x,u+v)\,|\,(x,u)\in{\varbbA},(x,v)\in{\varbbB}\right\\},$
$\displaystyle{\varbbA}{\varbbB}$
$\displaystyle=\left\\{(x,s)\,|\,\exists\,u\;\text{such
that}\;(x,u)\in{\varbbB},(u,s)\in{\varbbA}\right\\}.$
We call ${\mathcal{A}}$ a class of operators if it is a set of operators. For
any real scalar $\alpha\in\mathbb{R}$, define
$\alpha{\mathcal{A}}=\\{\alpha{\varbbA}\,|\,{\varbbA}\in{\mathcal{A}}\\}$
and
${\varbbI}+\alpha{\mathcal{A}}=\\{{\varbbI}+\alpha{\varbbA}\,|\,{\varbbA}\in{\mathcal{A}}\\}.$
Define
${\mathcal{A}}^{-1}=\\{{\varbbA}^{-1}\,|\,{\varbbA}\in{\mathcal{A}}\\}$
and ${\varbbJ}_{\alpha{\mathcal{A}}}=({\varbbI}+\alpha{\mathcal{A}})^{-1}$ for
$\alpha>0$.
##### Subdifferential operators.
Unless otherwise stated, functions defined on ${\mathcal{H}}$ are extended
real-valued, which means
$f\colon{\mathcal{H}}\to\mathbb{R}\cup\left\\{\pm\infty\right\\}.$
For a function $f$, we define the subdifferential operator $\partial f$ via
$\partial f(x)=\left\\{g\in{\mathcal{H}}\,|\,f(y)\geq f(x)+\langle
g,y-x\rangle,\forall y\in{\mathcal{H}}\right\\}$
(we allow $\infty\geq\infty$ and $-\infty\geq-\infty$). In some cases, the
subdifferential operator $\partial f$ is a single-valued operator. Then, we
write $\nabla f=\partial f$.
##### Proximal operators.
We call $f$ a CCP function if it is convex, closed, and proper [18, 16]. For
a CCP function
$f\colon{\mathcal{H}}\to\mathbb{R}\cup\left\\{\pm\infty\right\\}$ and
$\alpha>0$, we define the proximal operator with respect to $\alpha f$ as
$\mathrm{Prox}_{\alpha
f}(x)=\operatorname*{argmin}_{y\in{\mathcal{H}}}\left\\{\alpha
f(y)+\frac{1}{2}\lVert x-y\rVert^{2}\right\\}.$
Then, ${\varbbJ}_{\alpha\partial f}=\mathrm{Prox}_{\alpha f}$.
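As a quick check (our own example, with an arbitrary step size), for the 1-D function $f(x)=|x|$ the proximal operator is soft thresholding, and the resolvent identity $({\varbbI}+\alpha\partial f)^{-1}=\mathrm{Prox}_{\alpha f}$ can be verified directly: any $y\in({\varbbI}+\alpha\partial f)(x)$ is mapped back to $x$.

```python
import numpy as np

alpha = 0.7   # illustrative step size

def prox_abs(y):
    """Proximal operator of alpha * |.|, i.e. soft thresholding."""
    return np.sign(y) * np.maximum(np.abs(y) - alpha, 0.0)
```

For $x\neq 0$ we have $\partial|x|=\{\mathrm{sign}(x)\}$, so $y=x+\alpha\,\mathrm{sign}(x)$ should satisfy $\mathrm{Prox}_{\alpha f}(y)=x$, and the result can also be cross-checked against a brute-force argmin of the proximal objective.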
##### Class of functions and subdifferential operators.
Define $f\colon{\mathcal{H}}\to\mathbb{R}\cup\left\\{\pm\infty\right\\}$ being
$\mu$-strongly convex (for $\mu\in(0,\infty)$) and $L$-smooth (for
$L\in(0,\infty)$) as they are defined in [43]. Write
${\mathcal{F}}_{\mu,L}=\left\\{f\,|\,f\text{ is }\mu\text{-strongly convex,
}L\text{-smooth, and CCP.}\right\\}.$
for the collection of functions that are simultaneously $\mu$-strongly convex
and $L$-smooth. For notational simplicity, we extend ${\mathcal{F}}_{\mu,L}$ to
allow $\mu=0$ or $L=\infty$ by defining
$\displaystyle{\mathcal{F}}_{0,L}$ $\displaystyle=\left\\{f\,|\,f\text{ is
}L\text{-smooth and CCP.}\right\\},$ $\displaystyle{\mathcal{F}}_{\mu,\infty}$
$\displaystyle=\left\\{f\,|\,f\text{ is }\mu\text{-strongly convex and
CCP.}\right\\},$ $\displaystyle{\mathcal{F}}_{0,\infty}$
$\displaystyle=\left\\{f\,|\,f\text{ is CCP.}\right\\}.$
for $\mu$, $L\in(0,\infty)$.
The subdifferential operators of functions in ${\mathcal{F}}_{\mu,L}$ are
collected in
$\partial{\mathcal{F}}_{\mu,L}=\left\\{\partial
f\,|\,f\in{\mathcal{F}}_{\mu,L}\right\\}.$
##### Complex set notations.
Denote $\overline{\mathbb{C}}=\mathbb{C}\cup\left\\{\infty\right\\}$, and
define $0^{-1}=\infty$ and $\infty^{-1}=0$ in $\overline{\mathbb{C}}$. For
$A\subset\overline{\mathbb{C}}$ and $\alpha\in\mathbb{C}$, define
$\alpha A=\left\\{\alpha z\,|\,z\in
A\right\\},\quad\alpha+A=\left\\{\alpha+z\,|\,z\in A\right\\},\quad
A^{-1}=\left\\{z^{-1}\,|\,z\in A\right\\}.$
For $A\subseteq\mathbb{C}$, define the boundary of $A$ as
$\partial A=\overline{A}\setminus\mathrm{int}A.$
We clarify that the usage of the $\partial$ operator differs depending on
whether it is applied to a function or to a complex set: the former is the
subdifferential operator, and the latter is the boundary operator. For circles
and disks in the complex plane, write
${\mathrm{Circ}}(z,r)=\\{w\in\mathbb{C}\,|\,\lvert
w-z\rvert=r\\},\qquad{\mathrm{Disk}}(z,r)=\\{w\in\mathbb{C}\,|\,\lvert
w-z\rvert\leq r\\}$
for $z\in\mathbb{C}$ and $r\in(0,\infty)$. Note,
${\mathrm{Circ}}(z,r)=\partial{\mathrm{Disk}}(z,r)$.
##### Scaled relative graphs [15].
Define the SRG of an operator
${\varbbA}\colon{\mathcal{H}}\rightrightarrows{\mathcal{H}}$ as
$\displaystyle\mathcal{G}({\varbbA})$
$\displaystyle=\left\\{\frac{\|u-v\|}{\|x-y\|}\exp\left[\pm
i\angle(u-v,x-y)\right]\,\Big{|}\,u\in{\varbbA}x,\,v\in{\varbbA}y,\,x\neq
y\right\\}$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\bigg{(}\cup\\{\infty\\}\text{
if ${\varbbA}$ is not single-valued}\bigg{)}.$
where the angle between $x\in{\mathcal{H}}$ and $y\in{\mathcal{H}}$ is defined
as
$\displaystyle\angle(x,y)=\left\\{\begin{array}[]{ll}\arccos\left({\tfrac{\langle
x,y\rangle}{\lVert x\rVert\lVert y\rVert}}\right)&\text{ if }x\neq 0,\,y\neq
0\\\ 0&\text{ otherwise.}\end{array}\right.$
Note, SRG is a subset of $\overline{\mathbb{C}}$. Define the SRG of a class of
operators ${\mathcal{A}}$ as
$\mathcal{G}({\mathcal{A}})=\bigcup_{{\varbbA}\in{\mathcal{A}}}\mathcal{G}({\varbbA}).$
Say ${\mathcal{A}}$ is _SRG-full_ if
$\displaystyle{\varbbA}\in{\mathcal{A}}\quad\Leftrightarrow\quad{\mathcal{G}}({\varbbA})\subseteq{\mathcal{G}}({\mathcal{A}}).$
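The SRG definition above can be explored numerically. Below is a sketch (our own, with illustrative choices) that samples SRG points of a single-valued operator on $\mathbb{R}^{n}$ from random pairs; only the positive-angle branch is returned, the SRG being symmetric about the real axis:

```python
import numpy as np

def srg_samples(A, dim, n_pairs=500, seed=0):
    """Sample points of the SRG of a single-valued operator A on R^dim:
    each pair (x, y) contributes (||Ax-Ay||/||x-y||) * exp(i * angle(Ax-Ay, x-y)).
    Assumes A(x) != A(y) for the sampled pairs (e.g. A injective)."""
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(n_pairs):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        du, dx = A(x) - A(y), x - y
        ratio = np.linalg.norm(du) / np.linalg.norm(dx)
        cos_ang = np.clip(du @ dx / (np.linalg.norm(du) * np.linalg.norm(dx)),
                          -1.0, 1.0)
        pts.append(ratio * np.exp(1j * np.arccos(cos_ang)))
    return np.array(pts)
```

For instance, for $\nabla f$ with $f(x)=\tfrac{1}{2}x^{\top}Dx$ and $D$ having spectrum in $[\mu,L]=[1,4]$, all samples land inside the disk of center $(\mu+L)/2$ and radius $(L-\mu)/2$, consistent with Fact 2 below.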
###### Fact 1 (Theorem 4, 5 [15]).
If ${\mathcal{A}}$ is a class of operators, then
${\mathcal{G}}(\alpha{\mathcal{A}})=\alpha{\mathcal{G}}({\mathcal{A}}),\quad{\mathcal{G}}({\varbbI}+{\mathcal{A}})=1+{\mathcal{G}}({\mathcal{A}}),\quad{\mathcal{G}}({\mathcal{A}}^{-1})={\mathcal{G}}({\mathcal{A}})^{-1}.$
where $\alpha$ is a nonzero real number. If ${\mathcal{A}}$ is furthermore
SRG-full, then $\alpha{\mathcal{A}},{\varbbI}+{\mathcal{A}}$, and
${\mathcal{A}}^{-1}$ are SRG-full.
###### Fact 2 (Proposition 2 [15]).
Let $0<\mu<L<\infty$. Then
${\mathcal{G}}(\partial\mathcal{F}_{0,\infty})=\left\\{z\,|\,\operatorname{Re}z\geq 0\right\\}\cup\\{\infty\\},\quad{\mathcal{G}}(\partial\mathcal{F}_{\mu,\infty})=\left\\{z\,|\,\operatorname{Re}z\geq\mu\right\\}\cup\\{\infty\\},$
${\mathcal{G}}(\partial\mathcal{F}_{0,L})={\mathrm{Disk}}\left(\tfrac{L}{2},\tfrac{L}{2}\right),\quad{\mathcal{G}}(\partial\mathcal{F}_{\mu,L})={\mathrm{Disk}}\left(\tfrac{\mu+L}{2},\tfrac{L-\mu}{2}\right).$
(The original statement presents these four SRGs as shaded regions in the complex plane.)
##### DYS operators.
Let
$\displaystyle{\varbbT}_{{\varbbA},{\varbbB},{\varbbC},\alpha,\lambda}={\varbbI}-\lambda{\varbbJ}_{\alpha{\varbbB}}+\lambda{\varbbJ}_{\alpha{\varbbA}}(2{\varbbJ}_{\alpha{\varbbB}}-{\varbbI}-\alpha{\varbbC}{\varbbJ}_{\alpha{\varbbB}})$
be the DYS operator for operators
${\varbbA}\colon{\mathcal{H}}\rightrightarrows{\mathcal{H}}$,
${\varbbB}\colon{\mathcal{H}}\rightrightarrows{\mathcal{H}}$, and
${\varbbC}\colon{\mathcal{H}}\rightrightarrows{\mathcal{H}}$ with stepsize
$\alpha\in(0,\infty)$ and averaging parameter $\lambda\in(0,\infty)$. In this
paper, we usually take ${\varbbA}=\partial f$, ${\varbbB}=\partial g$, and
${\varbbC}=\nabla h$ for some CCP functions $f$, $g$, and $h$ defined on
${\mathcal{H}}$, to obtain
${\varbbT}_{\partial f,\partial g,\nabla
h,\alpha,\lambda}={\varbbI}-\lambda\mathrm{Prox}_{\alpha
g}+\lambda\mathrm{Prox}_{\alpha f}(2\mathrm{Prox}_{\alpha
g}-{\varbbI}-\alpha\nabla h\,\mathrm{Prox}_{\alpha g}),$
which we call the _subdifferential DYS operator_.
Let
${\varbbT}_{{\mathcal{A}},{\mathcal{B}},{\mathcal{C}},\alpha,\lambda}=\left\\{{\varbbT}_{{\varbbA},{\varbbB},{\varbbC},\alpha,\lambda}\,|\,{\varbbA}\in{\mathcal{A}},{\varbbB}\in{\mathcal{B}},{\varbbC}\in{\mathcal{C}}\right\\}$
be the class of DYS operators for operator classes ${\mathcal{A}}$,
${\mathcal{B}}$, and ${\mathcal{C}}$ with $\alpha,\lambda\in(0,\infty)$.
Define
$\displaystyle\zeta_{\text{DYS}}(z_{A},z_{B},z_{C};\alpha,\lambda)$
$\displaystyle=1-\lambda z_{B}+\lambda z_{A}(2z_{B}-1-\alpha z_{C}z_{B})$
$\displaystyle=1-\lambda z_{A}-\lambda z_{B}+\lambda(2-\alpha
z_{C})z_{A}z_{B},$
which exhibits symmetry with respect to $z_{A}$ and $z_{B}$, and
${\mathcal{Z}}^{\text{DYS}}_{{\mathcal{A}},{\mathcal{B}},{\mathcal{C}},\alpha,\lambda}=\left\\{\zeta_{\text{DYS}}(z_{A},z_{B},z_{C};\alpha,\lambda)\,|\,z_{A}\in{\mathcal{G}}\left({\varbbJ}_{\alpha{\mathcal{A}}}\right),z_{B}\in{\mathcal{G}}\left({\varbbJ}_{\alpha{\mathcal{B}}}\right),z_{C}\in{\mathcal{G}}\left({\mathcal{C}}\right)\right\\}$
for operator classes ${\mathcal{A}}$, ${\mathcal{B}}$, and ${\mathcal{C}}$
with $\alpha,\lambda\in(0,\infty)$.
##### Identifying the tight Lipschitz coefficient via SRG.
Say a subset of $\overline{\mathbb{C}}$ is a _generalized disk_ if it is a
disk, $\\{z\,|\,\operatorname{Re}z\geq a\\}\cup\\{\infty\\}$, or
$\\{z\,|\,\operatorname{Re}z\leq a\\}\cup\\{\infty\\}$ for a real number $a$.
The following is the key fact for calculating the Lipschitz coefficients of
the DYS operators via SRG.
###### Fact 3 (Corollary 1 of [14]).
Let $\alpha,\lambda>0$. Let ${\mathcal{A}}$ and ${\mathcal{B}}$ be SRG-full
classes of monotone operators where
${\mathcal{G}}\left({\varbbI}+\alpha{\mathcal{A}}\right)$ forms a generalized
disk. Let ${\mathcal{C}}$ be an SRG-full class of single-valued operators.
Assume ${\mathcal{G}}\left({\mathcal{A}}\right)$,
${\mathcal{G}}\left({\mathcal{B}}\right)$, and
${\mathcal{G}}\left({\mathcal{C}}\right)$ are nonempty. Then,
$\sup_{\begin{subarray}{c}{\varbbT}\in{\varbbT}_{{\mathcal{A}},{\mathcal{B}},{\mathcal{C}},\alpha,\lambda}\\\
x,y\in\mathrm{dom}\,{\varbbT},x\neq
y\end{subarray}}\frac{\lVert{\varbbT}x-{\varbbT}y\rVert}{\lVert
x-y\rVert}=\sup_{z\in{\mathcal{Z}}^{\text{DYS}}_{{\mathcal{A}},{\mathcal{B}},{\mathcal{C}},\alpha,\lambda}}\lvert
z\rvert.$
In fact, the original version of Fact 3 holds when
${\mathcal{G}}\left({\varbbI}+\alpha{\mathcal{A}}\right)$ has a more general
property, namely an arc property. We can efficiently bound the right-hand side
of the equality in Fact 3 by using the following fact.
###### Fact 4 (Lemma 1 of [14]).
Let $f\colon\mathbb{C}^{3}\to\mathbb{C}$ be a polynomial of three complex
variables. Let $A$, $B$, and $C$ be compact subsets of $\,\mathbb{C}$. Then,
$\max_{\begin{subarray}{c}z_{A}\in A,z_{B}\in B,\\\ z_{C}\in
C\end{subarray}}\lvert
f(z_{A},z_{B},z_{C})\rvert=\max_{\begin{subarray}{c}z_{A}\in\partial
A,z_{B}\in\partial B,\\\ z_{C}\in\partial C\end{subarray}}\lvert
f(z_{A},z_{B},z_{C})\rvert.$
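Fact 4 is a maximum-modulus-type statement, and it can be illustrated numerically (our own sketch; the polynomial and disks below are arbitrary choices loosely resembling $\zeta_{\text{DYS}}$):

```python
import numpy as np

def f(zA, zB, zC):
    # An arbitrary polynomial in three complex variables.
    return 1 - zA - zB + (2 - zC) * zA * zB

rng = np.random.default_rng(1)

def disk_pts(c, r, n):
    """Random points inside Disk(c, r)."""
    rad = r * np.sqrt(rng.uniform(size=n))
    th = rng.uniform(0.0, 2 * np.pi, n)
    return c + rad * np.exp(1j * th)

def circ_pts(c, r, n):
    """Evenly spaced points on Circ(c, r)."""
    return c + r * np.exp(1j * np.linspace(0.0, 2 * np.pi, n, endpoint=False))

# |f| over random interior triples should not exceed its max over the
# product of the boundary circles (up to grid resolution).
interior_max = np.abs(f(disk_pts(0.5, 0.5, 300),
                        disk_pts(0.5, 0.5, 300),
                        disk_pts(1.0, 1.0, 300))).max()
bA = circ_pts(0.5, 0.5, 120)
bB = circ_pts(0.5, 0.5, 120)
bC = circ_pts(1.0, 1.0, 120)
boundary_max = np.abs(f(bA[:, None, None],
                        bB[None, :, None],
                        bC[None, None, :])).max()
```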
## 2 Lipschitz factors of subdifferential DYS
We present Lipschitz factors of subdifferential DYS operators. To the best of
our knowledge, the following results give the best known linear convergence
rates of the DYS iteration, improving on Theorem 9 of [13] and Theorems 3, 4,
and 5 of [14]. To clarify, our rates are never slower than the prior rates and
are faster in most cases.
###### Theorem 1.
Let $f\in{\mathcal{F}}_{\mu_{f},L_{f}}$, $g\in{\mathcal{F}}_{\mu_{g},L_{g}}$,
and $h\in{\mathcal{F}}_{\mu_{h},L_{h}}$, where
$\displaystyle 0\leq\mu_{f}<L_{f}\leq\infty,\quad
0\leq\mu_{g}<L_{g}\leq\infty,\quad 0\leq\mu_{h}<L_{h}<\infty.$
Let $\lambda>0$ be an averaging parameter and $\alpha>0$ be a step size.
Throughout this theorem, define $r/\infty=0$ for any real number $r$. Write
$\displaystyle
C_{f}=\frac{1}{2}\left(\frac{1}{1+\alpha\mu_{f}}+\frac{1}{1+\alpha
L_{f}}\right),\quad
C_{g}=\frac{1}{2}\left(\frac{1}{1+\alpha\mu_{g}}+\frac{1}{1+\alpha
L_{g}}\right),$ $\displaystyle
R_{f}=\frac{1}{2}\left(\frac{1}{1+\alpha\mu_{f}}-\frac{1}{1+\alpha
L_{f}}\right),\quad
R_{g}=\frac{1}{2}\left(\frac{1}{1+\alpha\mu_{g}}-\frac{1}{1+\alpha
L_{g}}\right),$ $\displaystyle d=\max\\{\lvert
2-\lambda-\alpha\mu_{h}\rvert,\lvert 2-\lambda-\alpha L_{h}\rvert\\}.$
If $\lambda<1/C_{f}$, then ${\varbbT}_{\partial f,\partial g,\nabla h,\alpha,\lambda}$ is $\rho_{f}$-Lipschitz, where
$\displaystyle\rho_{f}^{2}=\left(1-\lambda\frac{C_{f}^{2}-R_{f}^{2}}{C_{f}}\right)\max\bigg{\\{}\left(1-\frac{\lambda}{1+\alpha\mu_{g}}\right)^{2}+\frac{\lambda
d^{2}}{1/C_{f}-\lambda}\left(\frac{1}{1+\alpha\mu_{g}}\right)^{2},$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\quad\left(1-\frac{\lambda}{1+\alpha
L_{g}}\right)^{2}+\frac{\lambda d^{2}}{1/C_{f}-\lambda}\left(\frac{1}{1+\alpha
L_{g}}\right)^{2}\bigg{\\}}.$
Symmetrically, if $\lambda<1/C_{g}$, then ${\varbbT}_{\partial f,\partial g,\nabla h,\alpha,\lambda}$ is
$\rho_{g}$-Lipschitz, where
$\displaystyle\rho_{g}^{2}=\left(1-\lambda\frac{C_{g}^{2}-R_{g}^{2}}{C_{g}}\right)\max\bigg{\\{}\left(1-\frac{\lambda}{1+\alpha\mu_{f}}\right)^{2}+\frac{\lambda
d^{2}}{1/C_{g}-\lambda}\left(\frac{1}{1+\alpha\mu_{f}}\right)^{2},$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\quad\left(1-\frac{\lambda}{1+\alpha
L_{f}}\right)^{2}+\frac{\lambda d^{2}}{1/C_{g}-\lambda}\left(\frac{1}{1+\alpha
L_{f}}\right)^{2}\bigg{\\}}.$
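As a numerical sanity check (our own; all parameter values below are arbitrary), $\rho_{f}^{2}$ can be transcribed directly and compared against the contraction of the DYS map on scalar quadratics, for which the map is affine:

```python
import numpy as np

def rho_f_squared(mu_f, L_f, mu_g, L_g, mu_h, L_h, alpha, lam):
    """Evaluate rho_f^2 of Theorem 1; infinite L is handled via r/inf = 0."""
    inv = lambda t: 0.0 if np.isinf(t) else 1.0 / (1.0 + alpha * t)
    C_f = 0.5 * (inv(mu_f) + inv(L_f))
    R_f = 0.5 * (inv(mu_f) - inv(L_f))
    d = max(abs(2 - lam - alpha * mu_h), abs(2 - lam - alpha * L_h))
    assert lam < 1.0 / C_f, "Theorem 1 requires lambda < 1/C_f"
    # the two candidate values of the z_g-dependent factor
    term = lambda s: (1 - lam * s) ** 2 + lam * d ** 2 / (1.0 / C_f - lam) * s ** 2
    return (1 - lam * (C_f ** 2 - R_f ** 2) / C_f) * max(term(inv(mu_g)),
                                                         term(inv(L_g)))
```

For scalar quadratics $f=\tfrac{p}{2}x^{2}$, $g=\tfrac{q}{2}(x-b)^{2}$, $h=\tfrac{r}{2}x^{2}$ with $p\in[\mu_{f},L_{f}]$, $q\in[\mu_{g},L_{g}]$, $r\in[\mu_{h},L_{h}]$, the DYS map has slope $1-\lambda z_{g}+\lambda z_{f}(2z_{g}-1-\alpha r z_{g})$ with $z_{f}=1/(1+\alpha p)$ and $z_{g}=1/(1+\alpha q)$, so the bound can be tested against all such instances.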
###### Theorem 2.
Let $f$, $g$, $h$, $\mu_{f}$, $L_{f}$, $\mu_{g}$, $L_{g}$, $\mu_{h}$, $L_{h}$,
$\lambda$, and $\alpha$ be the same as in Theorem 1. Additionally, assume
$\lambda<2-\frac{\alpha(\mu_{h}+L_{h})}{2}$. Define $\infty/\infty^{2}=0$.
Write
$\displaystyle\nu_{f}=\min\left\\{\frac{2\mu_{f}+\mu_{h}}{(1+\alpha\mu_{f})^{2}},\frac{2L_{f}+\mu_{h}}{(1+\alpha
L_{f})^{2}}\right\\},\quad\nu_{g}=\min\left\\{\frac{2\mu_{g}+\mu_{h}}{(1+\alpha\mu_{g})^{2}},\frac{2L_{g}+\mu_{h}}{(1+\alpha
L_{g})^{2}}\right\\},$
$\displaystyle\theta=\frac{2}{4-\alpha(\mu_{h}+L_{h})}.$
Then, ${\varbbT}_{\partial f,\partial g,\nabla h,\alpha,\lambda}$ is $\rho$-contractive, where
$\displaystyle\rho^{2}=1-\lambda\theta+\lambda\sqrt{\left(\theta-\alpha\nu_{f}\right)\left(\theta-\alpha\nu_{g}\right)}.$
### 2.1 Proofs of Theorems 1 and 2
To leverage Fact 3, we need to set adequate SRG-full operator classes
${\mathcal{A}}$, ${\mathcal{B}}$, and ${\mathcal{C}}$ to apply it. The most
natural choice is to set
${\mathcal{A}}=\partial{\mathcal{F}}_{\mu_{f},L_{f}},\quad{\mathcal{B}}=\partial{\mathcal{F}}_{\mu_{g},L_{g}},\quad{\mathcal{C}}=\partial{\mathcal{F}}_{\mu_{h},L_{h}}.$
However, this choice is not appropriate since these are not SRG-full classes.
To overcome this issue, we introduce the following operator classes:
$\displaystyle{\mathcal{D}}_{f}$
$\displaystyle=\\{{\varbbA}\colon{\mathcal{H}}\rightrightarrows{\mathcal{H}}\,|\,{\mathcal{G}}({\varbbA})\subseteq{\mathcal{G}}(\partial{\mathcal{F}}_{\mu_{f},L_{f}})\\},$
$\displaystyle{\mathcal{D}}_{g}$
$\displaystyle=\\{{\varbbB}\colon{\mathcal{H}}\rightrightarrows{\mathcal{H}}\,|\,{\mathcal{G}}({\varbbB})\subseteq{\mathcal{G}}(\partial{\mathcal{F}}_{\mu_{g},L_{g}})\\},$
$\displaystyle{\mathcal{D}}_{h}$
$\displaystyle=\\{{\varbbC}\colon{\mathcal{H}}\rightrightarrows{\mathcal{H}}\,|\,{\mathcal{G}}({\varbbC})\subseteq{\mathcal{G}}(\partial{\mathcal{F}}_{\mu_{h},L_{h}})\\}.$
To elaborate, we gather all operators whose SRG lies within
${\mathcal{G}}(\partial{\mathcal{F}}_{\mu_{f},L_{f}})$ to form
${\mathcal{D}}_{f}$, and similarly for ${\mathcal{D}}_{g}$ and
${\mathcal{D}}_{h}$. Then, ${\mathcal{D}}_{f}$,
${\mathcal{D}}_{g}$, and ${\mathcal{D}}_{h}$ are SRG-full classes by
definition. We now consider ${\mathcal{A}}={\mathcal{D}}_{f}$,
${\mathcal{B}}={\mathcal{D}}_{g}$, and ${\mathcal{C}}={\mathcal{D}}_{h}$ in
the following proof.
We mention two elementary facts.
###### Fact 5.
For $a,b,c,d\in[0,\infty)$,
$\left(\sqrt{ab}+\sqrt{cd}\right)^{2}\leq(a+c)(b+d).$
###### Proof.
This inequality is an instance of Cauchy–Schwarz. ∎
###### Fact 6.
Let $k$, $l$, and $r$ be positive real numbers, and let $b$ and $c$ be real numbers.
For $z\in{\mathrm{Circ}}(c,r)$,
$k\lvert z-b\rvert^{2}+l\lvert z\rvert^{2}$
is maximized at $z=c-r$ or $z=c+r$.
###### Proof to Fact 6.
Observe that
$k\lvert z-b\rvert^{2}+l\lvert
z\rvert^{2}=(k+l)\left|z-\frac{kb}{k+l}\right|^{2}+\frac{klb^{2}}{k+l},$
and the distance from $\frac{kb}{k+l}$ to $z\in{\mathrm{Circ}}(c,r)$ is
maximized at $z=c-r$ if $\frac{kb}{k+l}>c$ and at $z=c+r$ otherwise. ∎
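Fact 6 is also easy to confirm numerically (our own check; the constants below are arbitrary):

```python
import numpy as np

# On Circ(c, r), k|z - b|^2 + l|z|^2 should peak at one of the two real
# points c - r or c + r; constants below are illustrative.
k, l, b, c, r = 1.3, 0.7, 2.0, 0.4, 0.3
z = c + r * np.exp(1j * np.linspace(0.0, 2 * np.pi, 5000))
vals = k * np.abs(z - b) ** 2 + l * np.abs(z) ** 2
endpoint_max = max(k * (c - r - b) ** 2 + l * (c - r) ** 2,
                   k * (c + r - b) ** 2 + l * (c + r) ** 2)
```

Here $kb/(k+l)=1.3>c$, so the maximum is attained at $z=c-r$, in line with the proof.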
We now prove Theorems 1 and 2, using a proof technique similar to that
introduced in the previous work [14].
###### Proof to Theorem 1.
We first prove the first statement and show the other by the same reasoning.
Invoking Fact 3 and Fact 4, it suffices to show that
$\lvert\zeta_{\text{DYS}}(z_{f},z_{g},z_{h};\alpha,\lambda)\rvert^{2}\leq\rho_{f}^{2}$
for
$\displaystyle z_{f}$
$\displaystyle\in\partial{\mathcal{G}}\left({\varbbJ}_{\alpha{\mathcal{D}}_{f}}\right)={\mathrm{Circ}}\left(C_{f},R_{f}\right),$
$\displaystyle z_{g}$
$\displaystyle\in\partial{\mathcal{G}}\left({\varbbJ}_{\alpha{\mathcal{D}}_{g}}\right)={\mathrm{Circ}}\left(C_{g},R_{g}\right),$
$\displaystyle z_{h}$
$\displaystyle\in\partial{\mathcal{G}}\left({\mathcal{D}}_{h}\right)={\mathrm{Circ}}\left(\frac{L_{h}+\mu_{h}}{2},\frac{L_{h}-\mu_{h}}{2}\right)$
(refer to Fact 2 and Fact 1 to see why
$\partial{\mathcal{G}}\left({\varbbJ}_{\alpha{\mathcal{D}}_{f}}\right)$,
$\partial{\mathcal{G}}\left({\varbbJ}_{\alpha{\mathcal{D}}_{g}}\right)$, and
${\mathcal{G}}\left({\mathcal{D}}_{h}\right)$ take these forms) when
$\lambda<1/C_{f}$ holds.
Define
$r=\frac{d}{1/C_{f}-\lambda}.$
Then,
$\displaystyle\lvert\zeta_{\text{DYS}}(z_{f},z_{g},z_{h};\alpha,\lambda)\rvert^{2}$
$\displaystyle=\lvert 1-\lambda z_{f}-\lambda z_{g}+\lambda(2-\alpha
z_{h})z_{f}z_{g}\rvert^{2}$ $\displaystyle=\lvert(1-\lambda z_{f})(1-\lambda
z_{g})+\lambda(2-\lambda-\alpha z_{h})z_{f}z_{g}\rvert^{2}$
$\displaystyle\leq\left(\lvert(1-\lambda z_{f})(1-\lambda
z_{g})\rvert+\lvert\lambda(2-\lambda-\alpha z_{h})z_{f}z_{g}\rvert\right)^{2}$
$\displaystyle\stackrel{{\scriptstyle(i)}}{{\leq}}\left(\lvert(1-\lambda
z_{f})(1-\lambda z_{g})\rvert+\lambda d\lvert z_{f}z_{g}\rvert\right)^{2}$
$\displaystyle\stackrel{{\scriptstyle(ii)}}{{\leq}}\left(\lvert 1-\lambda
z_{f}\rvert^{2}+\lambda dr^{-1}\lvert z_{f}\rvert^{2}\right)\left(\lvert
1-\lambda z_{g}\rvert^{2}+\lambda dr\lvert z_{g}\rvert^{2}\right).$ (3)
We have used $\lvert 2-\lambda-\alpha z_{h}\rvert\leq\max\\{\lvert
2-\lambda-\alpha\mu_{h}\rvert,\lvert 2-\lambda-\alpha L_{h}\rvert\\}=d$ in
$(i)$ and Fact 5 in $(ii)$.
Recall that
$\partial{\mathcal{G}}\left({\varbbJ}_{\alpha{\mathcal{D}}_{f}}\right)={\mathrm{Circ}}(C_{f},R_{f})$.
This renders
$\lvert 1-\lambda z_{f}\rvert^{2}+\lambda dr^{-1}\lvert
z_{f}\rvert^{2}=\frac{\lambda}{C_{f}}\lvert z_{f}-C_{f}\rvert^{2}+1-\lambda
C_{f}=1-\lambda\frac{C_{f}^{2}-R_{f}^{2}}{C_{f}}.$ (4)
For the latter term, $z_{g}=\frac{1}{1+\alpha\mu_{g}}$ or
$z_{g}=\frac{1}{1+\alpha L_{g}}$ gives the maximum, invoking Fact 6. Therefore,
$\displaystyle\lvert 1-\lambda z_{g}\rvert^{2}+\lambda dr\lvert
z_{g}\rvert^{2}$
$\displaystyle\leq\max\bigg{\\{}\left(1-\frac{\lambda}{1+\alpha\mu_{g}}\right)^{2}+\frac{\lambda
d^{2}}{1/C_{f}-\lambda}\left(\frac{1}{1+\alpha\mu_{g}}\right)^{2},$
$\displaystyle\qquad\qquad\left(1-\frac{\lambda}{1+\alpha
L_{g}}\right)^{2}+\frac{\lambda d^{2}}{1/C_{f}-\lambda}\left(\frac{1}{1+\alpha
L_{g}}\right)^{2}\bigg{\\}}.$ (5)
(4) and (5) together give
$\lvert\zeta_{\text{DYS}}(z_{f},z_{g},z_{h};\alpha,\lambda)\rvert^{2}\leq\rho_{f}^{2}$
which concludes the proof of the first statement. The second statement can be
proven by the same reasoning. ∎
###### Proof to Theorem 2.
By the same reasoning in the proof of Theorem 1, it suffices to show that
$\lvert\zeta_{\text{DYS}}(z_{f},z_{g},z_{h};\alpha,\lambda)\rvert\leq\rho$
for
$\displaystyle
z_{f}\in\partial{\mathcal{G}}\left({\varbbJ}_{\alpha{\mathcal{D}}_{f}}\right),\quad
z_{g}\in\partial{\mathcal{G}}\left({\varbbJ}_{\alpha{\mathcal{D}}_{g}}\right),\quad
z_{h}\in\partial{\mathcal{G}}\left({\mathcal{D}}_{h}\right).$
Recall that $\theta=\frac{2}{4-\alpha(\mu_{h}+L_{h})}$. Note that
$\partial{\mathcal{G}}\left({\mathcal{D}}_{h}\right)={\mathrm{Circ}}\left(\frac{L_{h}+\mu_{h}}{2},\frac{L_{h}-\mu_{h}}{2}\right)$
thus
$\lvert 2-\theta^{-1}-\alpha
z_{h}\rvert=\alpha\bigg{\lvert}z_{h}-\frac{L_{h}+\mu_{h}}{2}\bigg{\rvert}=\alpha\frac{L_{h}-\mu_{h}}{2}=2-\alpha\mu_{h}-\theta^{-1}.$
(6)
Now, observe
$\displaystyle\lvert\zeta_{\text{DYS}}(z_{f},z_{g},z_{h};\alpha,\lambda)-(1-\lambda\theta)\rvert^{2}$
$\displaystyle=\lambda^{2}\lvert\theta-z_{f}-z_{g}+(2-\alpha
z_{h})z_{f}z_{g}\rvert^{2}$
$\displaystyle=\lambda^{2}\lvert\theta^{-1}(z_{f}-\theta)(z_{g}-\theta)+(2-\theta^{-1}-\alpha
z_{h})z_{f}z_{g}\rvert^{2}$
$\displaystyle\leq\lambda^{2}\left(\theta^{-1}\lvert z_{f}-\theta\rvert\lvert
z_{g}-\theta\rvert+\lvert 2-\theta^{-1}-\alpha z_{h}\rvert\lvert
z_{f}\rvert\lvert z_{g}\rvert\right)^{2}$
$\displaystyle\stackrel{{\scriptstyle(i)}}{{=}}\lambda^{2}\left(\theta^{-1}\lvert
z_{f}-\theta\rvert\lvert
z_{g}-\theta\rvert+(2-\alpha\mu_{h}-\theta^{-1})\lvert z_{f}\rvert\lvert
z_{g}\rvert\right)^{2}$
$\displaystyle\stackrel{{\scriptstyle(ii)}}{{\leq}}\lambda^{2}\left(\theta^{-1}\lvert
z_{f}-\theta\rvert^{2}+(2-\alpha\mu_{h}-\theta^{-1})\lvert
z_{f}\rvert^{2}\right)\left(\theta^{-1}\lvert
z_{g}-\theta\rvert^{2}+(2-\alpha\mu_{h}-\theta^{-1})\lvert
z_{g}\rvert^{2}\right).$ (7)
Here, $(i)$ follows from (6) and $(ii)$ follows from Fact 5.
Invoking Fact 6,
$\theta^{-1}\lvert z_{f}-\theta\rvert^{2}+(2-\alpha\mu_{h}-\theta^{-1})\lvert
z_{f}\rvert^{2}$
is maximized at either $z_{f}=\frac{1}{1+\alpha L_{f}}$ or
$z_{f}=\frac{1}{1+\alpha\mu_{f}}$. The expression evaluates to
$\displaystyle\theta^{-1}\lvert
z_{f}-\theta\rvert^{2}+(2-\alpha\mu_{h}-\theta^{-1})\lvert
z_{f}\rvert^{2}=\theta-\alpha\frac{2L_{f}+\mu_{h}}{(1+\alpha L_{f})^{2}}$
when $z_{f}=\frac{1}{1+\alpha L_{f}}$ and similarly to
$\displaystyle\theta^{-1}\lvert
z_{f}-\theta\rvert^{2}+(2-\alpha\mu_{h}-\theta^{-1})\lvert
z_{f}\rvert^{2}=\theta-\alpha\frac{2\mu_{f}+\mu_{h}}{(1+\alpha\mu_{f})^{2}}$
when $z_{f}=\frac{1}{1+\alpha\mu_{f}}$. Hence,
$\displaystyle\theta^{-1}\lvert
z_{f}-\theta\rvert^{2}+(2-\alpha\mu_{h}-\theta^{-1})\lvert z_{f}\rvert^{2}$
$\displaystyle\leq\theta-\alpha\min\left\\{\frac{2L_{f}+\mu_{h}}{(1+\alpha
L_{f})^{2}},\frac{2\mu_{f}+\mu_{h}}{(1+\alpha\mu_{f})^{2}}\right\\}$
$\displaystyle=\theta-\alpha\nu_{f}.$ (8)
Similarly, we have
$\theta^{-1}\lvert z_{g}-\theta\rvert^{2}+(2-\alpha\mu_{h}-\theta^{-1})\lvert
z_{g}\rvert^{2}\leq\theta-\alpha\nu_{g}.$ (9)
Combining (7), (8), and (9), we obtain
$\lvert\zeta_{\text{DYS}}(z_{f},z_{g},z_{h};\alpha,\lambda)\rvert\leq
1-\lambda\theta+\lambda\sqrt{\left(\theta-\alpha\nu_{f}\right)\left(\theta-\alpha\nu_{g}\right)},$
which is the desired bound. ∎
### 2.2 Comparison with previous results
We now compare our linear convergence rates with prior results and show that
our results are better in general.
#### 2.2.1 Comparison with the result of [13].
Condat and Richtárik [13] established the following rate.
###### Fact 7 (Setting $\omega=0$ in Theorem 9 of [13]).
Assume $\mu_{g}>0$ or $\mu_{h}>0$, and that $L_{f}$, $L_{h}\in(0,\infty)$.
Furthermore, suppose that $\alpha\in(0,2/L_{h})$. Consider problem (1) where
$f\in{\mathcal{F}}_{0,L_{f}}$, $g\in{\mathcal{F}}_{\mu_{g},\infty}$, and
$h\in{\mathcal{F}}_{\mu_{h},L_{h}}$. Let $x^{\star}$ be a solution for (1).
The DYS iteration converges with the geometric rate
$\rho_{\text{old}}^{2}=\max\left\\{\frac{(1-\alpha\mu_{h})^{2}}{1+\alpha\mu_{g}},\frac{(1-\alpha
L_{h})^{2}}{1+\alpha\mu_{g}},\frac{\alpha L_{f}}{\alpha L_{f}+2}\right\\}.$
Assuming the same setting as in Fact 7, we obtain the following as a direct
consequence of Theorem 1.
###### Corollary 1.
Assume the same conditions as in Fact 7. Then,
${\varbbT}_{\partial f,\partial g,\nabla h,\alpha,1}$ is
$\rho_{\text{new}}$-contractive, where
$d=\max\\{\lvert 1-\alpha\mu_{h}\rvert,\lvert 1-\alpha L_{h}\rvert\\}$
and
$\rho_{\text{new}}^{2}=\max\left\\{\frac{d^{2}}{1+2\alpha\mu_{g}},\frac{1}{(1+\alpha
L_{f})^{2}}\left(\alpha^{2}L_{f}^{2}+\frac{d^{2}}{1+2\alpha\mu_{g}}\right)\right\\}.$
(10)
The following proposition shows that the contraction factor introduced in
Corollary 1 is strictly better than that of Fact 7 whenever $\mu_{g}>0$.
###### Proposition 1.
Let $\mu_{g}$, $\mu_{h}$, $L_{f}$, $L_{h}$, $\alpha$ be as in Fact 7. Let
$\rho_{\text{old}}$ and $\rho_{\text{new}}$ be as in Fact 7 and Corollary 1,
respectively. Then, $\rho_{\text{new}}<\rho_{\text{old}}$ if $\mu_{g}>0$, and
$\rho_{\text{new}}=\rho_{\text{old}}$ if $\mu_{g}=0$.
###### Proof of Proposition 1.
Denote $d=\max\\{\lvert 1-\alpha\mu_{h}\rvert,\lvert 1-\alpha L_{h}\rvert\\}$
and recall that
$\displaystyle\rho_{\text{old}}^{2}$
$\displaystyle=\max\left\\{\frac{(1-\alpha\mu_{h})^{2}}{1+\alpha\mu_{g}},\frac{(1-\alpha
L_{h})^{2}}{1+\alpha\mu_{g}},\frac{\alpha L_{f}}{\alpha
L_{f}+2}\right\\}=\max\left\\{\frac{d^{2}}{1+\alpha\mu_{g}},\frac{\alpha
L_{f}}{\alpha L_{f}+2}\right\\}$ $\displaystyle\rho_{\text{new}}^{2}$
$\displaystyle=\max\left\\{\frac{d^{2}}{1+2\alpha\mu_{g}},\frac{1}{(1+\alpha
L_{f})^{2}}\left(\alpha^{2}L_{f}^{2}+\frac{d^{2}}{1+2\alpha\mu_{g}}\right)\right\\}.$
First, consider the case where
$\frac{d^{2}}{1+2\alpha\mu_{g}}\geq\frac{\alpha L_{f}}{\alpha L_{f}+2}.$
In this case,
$\displaystyle\frac{d^{2}}{1+2\alpha\mu_{g}}\geq\frac{1}{(1+\alpha
L_{f})^{2}}\left(\alpha^{2}L_{f}^{2}+\frac{d^{2}}{1+2\alpha\mu_{g}}\right)$
holds, and together with the assumption that $\mu_{g}>0$,
$\displaystyle\rho_{\text{new}}^{2}=\frac{d^{2}}{1+2\alpha\mu_{g}}<\frac{d^{2}}{1+\alpha\mu_{g}}\leq\rho_{\text{old}}^{2}.$
Otherwise if
$\frac{d^{2}}{1+2\alpha\mu_{g}}<\frac{\alpha L_{f}}{\alpha L_{f}+2},$
observe that
$\displaystyle\rho_{\text{new}}^{2}$ $\displaystyle=\frac{1}{(1+\alpha
L_{f})^{2}}\left(\alpha^{2}L_{f}^{2}+\frac{d^{2}}{1+2\alpha\mu_{g}}\right)$
$\displaystyle<\frac{1}{(1+\alpha
L_{f})^{2}}\left(\alpha^{2}L_{f}^{2}+\frac{\alpha L_{f}}{\alpha
L_{f}+2}\right)$ $\displaystyle=\frac{\alpha L_{f}}{\alpha L_{f}+2}$
$\displaystyle\leq\rho_{\text{old}}^{2}.$
∎
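As an illustrative numerical sanity check (not part of the original argument), one can sample admissible parameters and compare the two squared contraction factors displayed above; the function name `contraction_factors_sq` is introduced here for convenience:

```python
import random

# Illustrative check of Proposition 1: rho_new^2 (Corollary 1) should never
# exceed rho_old^2 (Fact 7) over admissible parameter draws.
def contraction_factors_sq(mu_g, mu_h, L_f, L_h, alpha):
    d2 = max(abs(1 - alpha * mu_h), abs(1 - alpha * L_h)) ** 2
    rho_old2 = max(d2 / (1 + alpha * mu_g), alpha * L_f / (alpha * L_f + 2))
    q = d2 / (1 + 2 * alpha * mu_g)
    rho_new2 = max(q, (alpha ** 2 * L_f ** 2 + q) / (1 + alpha * L_f) ** 2)
    return rho_old2, rho_new2

random.seed(0)
for _ in range(10_000):
    L_h = random.uniform(0.1, 10.0)
    mu_h = random.uniform(0.0, L_h)            # 0 <= mu_h <= L_h
    L_f = random.uniform(0.1, 10.0)
    mu_g = random.uniform(0.0, 10.0)
    alpha = random.uniform(1e-3, 1.998 / L_h)  # alpha in (0, 2/L_h)
    rho_old2, rho_new2 = contraction_factors_sq(mu_g, mu_h, L_f, L_h, alpha)
    assert rho_new2 <= rho_old2 + 1e-12        # never worse
```
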
#### 2.2.2 Comparison with [14].
Lee, Yi, and Ryu introduced contraction factors in Theorems 3, 4, and 5 of
[14]. The following proposition restates those contraction factors and shows
that ours are strictly better in most cases, owing to our stronger assumptions
on the operators.
###### Proposition 2.
Let $\mu_{f}$, $\mu_{g}$, $L_{f}$, $L_{g}$, $L_{h}$, $\alpha$, and $\lambda$
be as in Theorem 1. Additionally, assume $\alpha L_{h}<4$ and
$\lambda<2-\frac{\alpha L_{h}}{2}$. Then,
1. 1.
Assume $f\in{\mathcal{F}}_{\mu_{f},L_{f}}$, $g\in{\mathcal{F}}_{0,\infty}$,
and $h\in{\mathcal{F}}_{0,L_{h}}$. Then, Theorem 3 of [14] implies
${\varbbT}_{\partial f,\partial g,\nabla h,\alpha,\lambda}$ is $\rho_{\text{old}}$-contractive, where
$\rho_{\text{old}}=1-\frac{2\lambda}{4-\alpha
L_{h}}+\lambda\sqrt{\frac{2}{4-\alpha L_{h}}\left(\frac{2}{4-\alpha
L_{h}}-\frac{2\alpha\mu_{f}}{\alpha^{2}L_{f}^{2}+2\alpha\mu_{f}+1}\right)}.$
Meanwhile, Theorem 2 implies ${\varbbT}_{\partial f,\partial g,\nabla h,\alpha,\lambda}$ is $\rho_{\text{new}}$-contractive,
where
$\rho_{\text{new}}=1-\frac{2\lambda}{4-\alpha
L_{h}}+\lambda\sqrt{\frac{2}{4-\alpha L_{h}}\left(\frac{2}{4-\alpha
L_{h}}-\alpha\min\left\\{\frac{2\mu_{f}}{(1+\alpha\mu_{f})^{2}},\frac{2L_{f}}{(1+\alpha
L_{f})^{2}}\right\\}\right)},$
and $\rho_{\text{new}}<\rho_{\text{old}}$ if $\mu_{f}>0$, and
$\rho_{\text{new}}=\rho_{\text{old}}$ if $\mu_{f}=0$.
2. 2.
Assume $f\in{\mathcal{F}}_{0,L_{f}}$, $g\in{\mathcal{F}}_{\mu_{g},\infty}$,
and $h\in{\mathcal{F}}_{0,L_{h}}$. Then,
${\varbbT}_{\partial f,\partial g,\nabla h,\alpha,\lambda}$ is
$\rho_{\text{old}}$-contractive where
$\rho_{\text{old}}=\sqrt{1-2\lambda\alpha\min\left\\{\frac{(2-\lambda)\mu_{g}}{(1+\alpha^{2}L_{f}^{2})(2-\lambda+2\alpha\mu_{g})},\frac{(2-\lambda)(\mu_{g}+L_{f})+2\alpha\mu_{g}L_{f}}{(1+\alpha
L_{f})^{2}(2-\lambda+2\alpha\mu_{g})}\right\\}}.$
Meanwhile, Theorem 1 implies ${\varbbT}_{\partial f,\partial g,\nabla h,\alpha,\lambda}$ is $\rho_{\text{new}}$-contractive
where
$\rho_{\text{new}}=\sqrt{1-2\lambda\alpha\min\left\\{\frac{(2-\lambda)\mu_{g}}{2-\lambda+2\alpha\mu_{g}},\frac{(2-\lambda)(\mu_{g}+L_{f})+2\alpha\mu_{g}L_{f}}{(1+\alpha
L_{f})^{2}(2-\lambda+2\alpha\mu_{g})}\right\\}},$
and $\rho_{\text{new}}\leq\rho_{\text{old}}$, with
$\rho_{\text{new}}<\rho_{\text{old}}$ if
$(2-\lambda)(1-2\alpha\mu_{g}+\alpha^{2}L_{f}^{2})+2\alpha\mu_{g}(1+\alpha^{2}L_{f}^{2})>0$
and $\mu_{g}>0$.
3. 3.
Assume $f\in{\mathcal{F}}_{0,L_{f}}$, $g\in{\mathcal{F}}_{0,\infty}$, and
$h\in{\mathcal{F}}_{\mu_{h},L_{h}}$. Then,
${\varbbT}_{\partial f,\partial g,\nabla h,\alpha,\lambda}$ is
$\rho_{\text{old}}$-contractive where
$\rho_{\text{old}}=\sqrt{1-2\lambda\alpha\min\left\\{\frac{\mu_{h}\left(1-\frac{\alpha
L_{h}}{2(2-\lambda)}\right)}{1+\alpha^{2}L_{f}^{2}},\frac{L_{f}+\mu_{h}\left(1-\frac{\alpha
L_{h}}{2(2-\lambda)}\right)}{(1+\alpha L_{f})^{2}}\right\\}}.$
Meanwhile, Theorem 1 implies ${\varbbT}_{\partial f,\partial g,\nabla h,\alpha,\lambda}$ is $\rho_{\text{new}}$-contractive
where
$\rho_{\text{new}}=\sqrt{1-2\lambda\alpha\min\left\\{\xi,\frac{L_{f}+\xi}{(1+\alpha
L_{f})^{2}}\right\\}},$
where
$\xi=\min\left\\{\mu_{h}\left(1-\frac{\alpha\mu_{h}}{2(2-\lambda)}\right),L_{h}\left(1-\frac{\alpha
L_{h}}{2(2-\lambda)}\right)\right\\}.$
Furthermore, $\rho_{\text{new}}<\rho_{\text{old}}$ if $\mu_{h}>0$, and
$\rho_{\text{new}}=\rho_{\text{old}}$ if $\mu_{h}=0$.
###### Proof of Proposition 2-1.
To show $\rho_{\text{new}}<\rho_{\text{old}}$ given $\mu_{f}>0$, it suffices
to show
$\min\left\\{\frac{2\mu_{f}}{(1+\alpha\mu_{f})^{2}},\frac{2L_{f}}{(1+\alpha
L_{f})^{2}}\right\\}>\frac{2\mu_{f}}{\alpha^{2}L_{f}^{2}+2\alpha\mu_{f}+1},$
or equivalently, that both
$\displaystyle\frac{2\mu_{f}}{(1+\alpha\mu_{f})^{2}}>\frac{2\mu_{f}}{\alpha^{2}L_{f}^{2}+2\alpha\mu_{f}+1}$
$\displaystyle\frac{2L_{f}}{(1+\alpha
L_{f})^{2}}>\frac{2\mu_{f}}{\alpha^{2}L_{f}^{2}+2\alpha\mu_{f}+1}$
which are straightforward from $L_{f}>\mu_{f}>0$. ∎
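The two inequalities above can likewise be spot-checked numerically; this is an illustrative sketch (not part of the proof), with sampling ranges chosen arbitrarily:

```python
import random

# Spot-check the key inequality in the proof of Proposition 2-1: for
# L_f > mu_f > 0 and alpha > 0,
#   min{2 mu_f/(1+alpha mu_f)^2, 2 L_f/(1+alpha L_f)^2}
#     > 2 mu_f / (alpha^2 L_f^2 + 2 alpha mu_f + 1).
random.seed(1)
for _ in range(10_000):
    L_f = random.uniform(0.1, 10.0)
    mu_f = random.uniform(1e-3, 0.999 * L_f)   # enforce L_f > mu_f > 0
    alpha = random.uniform(1e-3, 10.0)
    lhs = min(2 * mu_f / (1 + alpha * mu_f) ** 2,
              2 * L_f / (1 + alpha * L_f) ** 2)
    rhs = 2 * mu_f / (alpha ** 2 * L_f ** 2 + 2 * alpha * mu_f + 1)
    assert lhs > rhs
```
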
###### Proof of Proposition 2-2.
$\rho_{\text{new}}\leq\rho_{\text{old}}$ is obvious.
By straightforward calculations, the mentioned conditions imply
$\frac{(2-\lambda)\mu_{g}}{(1+\alpha^{2}L_{f}^{2})(2-\lambda+2\alpha\mu_{g})}<\frac{(2-\lambda)(\mu_{g}+L_{f})+2\alpha\mu_{g}L_{f}}{(1+\alpha
L_{f})^{2}(2-\lambda+2\alpha\mu_{g})}.$ (11)
Thus,
$\displaystyle\rho_{\text{old}}$
$\displaystyle=\sqrt{1-2\lambda\alpha\frac{(2-\lambda)\mu_{g}}{(1+\alpha^{2}L_{f}^{2})(2-\lambda+2\alpha\mu_{g})}}$
$\displaystyle\rho_{\text{new}}$
$\displaystyle=\sqrt{1-2\lambda\alpha\min\left\\{\frac{(2-\lambda)\mu_{g}}{2-\lambda+2\alpha\mu_{g}},\frac{(2-\lambda)(\mu_{g}+L_{f})+2\alpha\mu_{g}L_{f}}{(1+\alpha
L_{f})^{2}(2-\lambda+2\alpha\mu_{g})}\right\\}}$
holds. Then,
$\displaystyle\frac{(2-\lambda)\mu_{g}}{(1+\alpha^{2}L_{f}^{2})(2-\lambda+2\alpha\mu_{g})}$
$\displaystyle<\frac{(2-\lambda)\mu_{g}}{2-\lambda+2\alpha\mu_{g}}$
together with (11) gives $\rho_{\text{new}}<\rho_{\text{old}}$. ∎
###### Proof of Proposition 2-3.
First, observe that $0<\mu_{h}<L_{h}$ implies
$\xi>\mu_{h}\left(1-\frac{\alpha L_{h}}{2(2-\lambda)}\right).$
Therefore,
$\displaystyle\xi$ $\displaystyle>\frac{\mu_{h}\left(1-\frac{\alpha
L_{h}}{2(2-\lambda)}\right)}{1+\alpha^{2}L_{f}^{2}},$
$\displaystyle\frac{L_{f}+\xi}{(1+\alpha L_{f})^{2}}$
$\displaystyle>\frac{L_{f}+\mu_{h}\left(1-\frac{\alpha
L_{h}}{2(2-\lambda)}\right)}{(1+\alpha L_{f})^{2}}$
which give $\rho_{\text{new}}<\rho_{\text{old}}$. ∎
## 3 Discussion and conclusion
The reduction of Fact 3 allows us to obtain the Lipschitz coefficients of
Theorems 1 and 2 by characterizing the maximum modulus of
$\displaystyle{\mathcal{Z}}^{\text{DYS}}_{{\mathcal{A}},{\mathcal{B}},{\mathcal{C}},\alpha,\lambda}=\left\\{\zeta_{\text{DYS}}(z_{f},z_{g},z_{h};\alpha,\lambda)\,\middle|\,z_{f}\in{\mathcal{G}}\left({\mathcal{J}}_{\alpha{\mathcal{D}}_{f}}\right),z_{g}\in{\mathcal{G}}\left({\mathcal{J}}_{\alpha{\mathcal{D}}_{g}}\right),z_{h}\in{\mathcal{G}}\left({\mathcal{D}}_{h}\right)\right\\},$
where $\zeta_{\text{DYS}}=1-\lambda z_{B}+\lambda z_{A}(2z_{B}-1-\alpha
z_{C}z_{B})$ is a relatively simple polynomial. This only requires elementary
mathematics, and it is considerably easier than directly analyzing
$\left\\{\frac{\lVert{\varbbT}x-{\varbbT}y\rVert}{\lVert
x-y\rVert}\,\middle|\,{\varbbT}\in{\varbbT}_{{\mathcal{D}}_{f},{\mathcal{D}}_{g},{\mathcal{D}}_{h},\alpha,\lambda},\,x,y\in\mathrm{dom}\,{\varbbT},\,x\neq
y\right\\}.$
Furthermore, by obtaining tighter bounds on the set
${\mathcal{Z}}^{\text{DYS}}_{{\mathcal{A}},{\mathcal{B}},{\mathcal{C}},\alpha,\lambda}$,
one can improve upon the contraction factors presented in this work. We leave
the tighter analysis to future work.
We point out that the explicit and simple description of
${\mathcal{Z}}^{\text{DYS}}_{{\mathcal{A}},{\mathcal{B}},{\mathcal{C}},\alpha,\lambda}$
allows one to investigate it in a numerical and computer-assisted manner.
Sampling points from
${\mathcal{Z}}^{\text{DYS}}_{{\mathcal{D}}_{f},{\mathcal{D}}_{g},{\mathcal{D}}_{h},\alpha,\lambda}$
is straightforward, and doing so provides a numerical estimate of the maximum
modulus. For example, Figure 1 depicts
${\mathcal{Z}}^{\text{DYS}}_{{\mathcal{D}}_{f},{\mathcal{D}}_{g},{\mathcal{D}}_{h},\alpha,\lambda}$
with a specific choice of $\mu_{f}$, $\mu_{g}$, $\mu_{h}$, $L_{f}$, $L_{g}$,
$L_{h}$, $\alpha$, and $\lambda$. It shows that $\rho_{g}$, the contraction
factor of Theorem 1, is valid but not tight; the gap between
${\mathcal{Z}}^{\text{DYS}}_{{\mathcal{D}}_{f},{\mathcal{D}}_{g},{\mathcal{D}}_{h},\alpha,\lambda}$
and ${\mathrm{Circ}}(0,\rho_{g})$ indicates the contraction factor has room
for improvement. Interestingly, if we modify the proof of Theorem 1 to choose
$r$ in (3) more carefully, we seem to obtain a tight contraction factor in the
instance of Figure 1. Specifically, when we numerically minimize $\rho$ as a
function of $r$, we observe that ${\mathrm{Circ}}(0,\rho)$ touches
${\mathcal{Z}}^{\text{DYS}}_{{\mathcal{D}}_{f},{\mathcal{D}}_{g},{\mathcal{D}}_{h},\alpha,\lambda}$
in Figure 1 and the contact indicates tightness.
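The sampling procedure described above can be sketched in a few lines. This is an illustrative sketch: the boundary circles for $z_f$, $z_g$, $z_h$ and the $\rho$ expression are assembled from the quantities appearing in the proof of Theorem 2, the parameters are those of Figure 1, and the names `circle`, `max_mod`, and `rho` are introduced here:

```python
import numpy as np

# Figure 1 parameters
mu_f, mu_g, mu_h = 0.7, 2.0, 0.8
L_f, L_g, L_h = 1.5, 3.0, 1.3
alpha, lam = 0.9, 1.0

def circle(a, b, n):
    """Circle in the complex plane with real diameter [a, b]."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return (a + b) / 2 + (b - a) / 2 * np.exp(1j * t)

# Boundary circles: resolvent SRGs pass through 1/(1+alpha*L) and
# 1/(1+alpha*mu); the SRG of grad h is Circ((L_h+mu_h)/2, (L_h-mu_h)/2).
zf = circle(1 / (1 + alpha * L_f), 1 / (1 + alpha * mu_f), 60)
zg = circle(1 / (1 + alpha * L_g), 1 / (1 + alpha * mu_g), 60)
zh = circle(mu_h, L_h, 60)

ZF, ZG, ZH = np.meshgrid(zf, zg, zh, indexing="ij")
zeta = 1 - lam * ZG + lam * ZF * (2 * ZG - 1 - alpha * ZH * ZG)
max_mod = np.abs(zeta).max()         # Monte-Carlo estimate of max |zeta_DYS|

# Contraction factor of Theorem 2 for the same parameters
theta = 2 / (4 - alpha * (mu_h + L_h))
nu_f = min((2 * L_f + mu_h) / (1 + alpha * L_f) ** 2,
           (2 * mu_f + mu_h) / (1 + alpha * mu_f) ** 2)
nu_g = min((2 * L_g + mu_h) / (1 + alpha * L_g) ** 2,
           (2 * mu_g + mu_h) / (1 + alpha * mu_g) ** 2)
rho = 1 - lam * theta + lam * np.sqrt((theta - alpha * nu_f) * (theta - alpha * nu_g))
assert max_mod <= rho + 1e-9         # the bound is valid, though not tight
```
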
Figure 1:
${\mathcal{Z}}^{\text{DYS}}_{{\mathcal{D}}_{f},{\mathcal{D}}_{g},{\mathcal{D}}_{h},\alpha,\lambda}$
with ${\mathrm{Circ}}(0,\rho_{f})$, ${\mathrm{Circ}}(0,\rho_{g})$, and
${\mathrm{Circ}}(0,\rho)$, where $\mu_{f}=0.7$, $\mu_{g}=2$, $\mu_{h}=0.8$,
$L_{f}=1.5$, $L_{g}=3$, $L_{h}=1.3$, $\alpha=0.9$, and $\lambda=1$.
## References
* [1] D. Davis, W. Yin, A three-operator splitting scheme and its optimization applications, Set-valued and variational analysis 25 (4) (2017) 829–858.
* [2] M. Yan, A new primal–dual algorithm for minimizing the sum of three functions with a linear operator, Journal of Scientific Computing 76 (3) (2018) 1698–1717.
* [3] Y. Wang, H. Zhou, S. Zu, W. Mao, Y. Chen, Three-operator proximal splitting scheme for 3-d seismic data reconstruction, IEEE Geoscience and Remote Sensing Letters 14 (10) (2017) 1830–1834.
* [4] J. A. Carrillo, K. Craig, L. Wang, C. Wei, Primal dual methods for wasserstein gradient flows, Foundations of Computational Mathematics 22 (2) (2022) 389–443.
* [5] D. Van Hieu, L. Van Vy, P. K. Quy, Three-operator splitting algorithm for a class of variational inclusion problems, Bulletin of the Iranian Mathematical Society 46 (4) (2020) 1055–1071.
* [6] M. Weylandt, Splitting methods for convex bi-clustering and co-clustering, 2019 IEEE Data Science Workshop (2019) 237–242.
* [7] H. Heaton, D. McKenzie, Q. Li, S. W. Fung, S. Osher, W. Yin, Learn to predict equilibria via fixed point networks, arXiv preprint arXiv:2106.00906 (2021).
* [8] F. J. Aragón-Artacho, D. Torregrosa-Belén, A direct proof of convergence of Davis–Yin splitting algorithm allowing larger stepsizes, Set-Valued and Variational Analysis 30 (2022) 1–19.
* [9] M. N. Dao, H. M. Phan, An adaptive splitting algorithm for the sum of three operators, arXiv:2104.05460 (2021).
* [10] F. Pedregosa, On the convergence rate of the three operator splitting scheme, arXiv preprint arXiv:1610.07830 (2016).
* [11] E. K. Ryu, A. B. Taylor, C. Bergeling, P. Giselsson, Operator splitting performance estimation: Tight contraction factors and optimal parameter selection, SIAM Journal on Optimization 30 (3) (2020) 2251–2271.
* [12] H. Wang, M. Fazlyab, S. Chen, V. M. Preciado, Robust convergence analysis of three-operator splitting, Allerton Conference on Communication, Control, and Computing (2019).
* [13] L. Condat, P. Richtárik, RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates (2022). arXiv:2207.12891.
* [14] J. Lee, S. Yi, E. K. Ryu, Convergence Analyses of Davis-Yin Splitting via Scaled Relative Graphs, arXiv preprint arXiv:2207.04015 (2022). arXiv:2207.04015.
* [15] E. K. Ryu, R. Hannah, W. Yin, Scaled relative graphs: Nonexpansive operators via 2D Euclidean geometry, Mathematical Programming 194 (1–2) (2022) 569–619.
* [16] H. H. Bauschke, P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd Edition, Springer, 2017.
* [17] E. K. Ryu, S. Boyd, Primer on monotone operator methods, Appl. Comput. Math 15 (1) (2016) 3–43.
* [18] E. K. Ryu, W. Yin, Large-scale convex optimization via monotone operators, Cambridge University Press, 2022.
* [19] R. E. Bruck, On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space, Journal of Mathematical Analysis and Applications 61 (1) (1977) 159–164.
* [20] G. B. Passty, Ergodic convergence to a zero of the sum of monotone operators in Hilbert space, Journal of Mathematical Analysis and Applications 72 (2) (1979) 383–390.
* [21] D. W. Peaceman, H. H. Rachford, Jr, The numerical solution of parabolic and elliptic differential equations, Journal of the Society for Industrial and Applied Mathematics 3 (1) (1955) 28–41.
* [22] J. Douglas, H. H. Rachford, On the numerical solution of heat conduction problems in two and three space variables, Transactions of the American Mathematical Society 82 (2) (1956) 421–439.
* [23] P.-L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM Journal on Numerical Analysis 16 (6) (1979) 964–979.
* [24] D. Gabay, B. Mercier, A dual algorithm for the solution of nonlinear variational problems via finite element approximation, Computers & Mathematics with Applications 2 (1) (1976) 17–40.
* [25] A. Yurtsever, A. Gu, S. Sra, Three operator splitting with subgradients, stochastic gradients, and adaptive learning rates, NeurIPS (2021).
* [26] A. Yurtsever, B. C. Vũ, V. Cevher, Stochastic three-composite convex minimization, NeurIPS (2016).
* [27] V. Cevher, B. C. Vũ, A. Yurtsever, Stochastic forward Douglas–Rachford splitting method for monotone inclusions, in: A. R. Pontus Giselsson (Ed.), Large-Scale and Distributed Optimization, Springer, 2018, pp. 149–179.
* [28] A. Yurtsever, V. Mangalick, S. Sra, Three Operator Splitting with a Nonconvex Loss Function, ICML (2021).
* [29] F. Pedregosa, K. Fatras, M. Casotto, Proximal splitting meets variance reduction, in: AISTATS, 2019.
* [30] C. Zong, Y. Tang, Y. J. Cho, Convergence analysis of an inexact three-operator splitting algorithm, Symmetry 10 (11) (2018) 563.
* [31] F. Pedregosa, G. Gidel, Adaptive three operator splitting, ICML (2018).
* [32] F. Cui, Y. Tang, Y. Yang, An inertial three-operator splitting algorithm with applications to image inpainting, Applied Set-Valued Analysis and Optimization 1 (2019).
* [33] A. Salim, L. Condat, K. Mishchenko, P. Richtárik, Dualize, split, randomize: Toward fast nonsmooth optimization algorithms, Journal of Optimization Theory and Applications (2022).
* [34] X. Huang, E. K. Ryu, W. Yin, Scaled relative graph of normal matrices, arXiv preprint arXiv:2001.02061 (2019).
* [35] R. Pates, The Scaled Relative Graph of a Linear Operator, arXiv preprint arXiv:2106.05650 (2021).
* [36] N. Ogura, I. Yamada, Non-strictly convex minimization over the fixed point set of an asymptotically shrinking nonexpansive mapping, Numerical Functional Analysis and Optimization 23 (1-2) (2002) 113–137.
* [37] X. Huang, E. K. Ryu, W. Yin, Tight coefficients of averaged operators via scaled relative graph, Journal of Mathematical Analysis and Applications 490 (1) (2020) 124211.
* [38] T. Chaffey, F. Forni, R. Sepulchre, Graphical Nonlinear System Analysis, arXiv preprint arXiv:2107.11272 (2021).
* [39] T. Chaffey, F. Forni, R. Sepulchre, Scaled relative graphs for system analysis, IEEE Conference on Decision and Control (2021).
* [40] T. Chaffey, R. Sepulchre, Monotone one-port circuits, arXiv preprint arXiv:2111.15407 (2021).
* [41] T. Chaffey, A rolled-off passivity theorem, Systems & Control Letters 162 (2022) 105198.
* [42] T. Chaffey, A. Padoan, Circuit model reduction with scaled relative graphs, arXiv preprint arXiv:2204.01434 (2022).
* [43] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course (2014).
# Diffuse Emission of Galactic High-Energy Neutrinos from a Global Fit of
Cosmic Rays
Georg Schwefer (Institute for Theoretical Particle Physics and Cosmology
(TTK), RWTH Aachen University, 52056 Aachen, Germany; III. Physikalisches
Institut B, RWTH Aachen University, 52056 Aachen, Germany; Max-Planck-Institut
für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany)
<EMAIL_ADDRESS>
Philipp Mertsch (Institute for Theoretical Particle Physics and Cosmology
(TTK), RWTH Aachen University, 52056 Aachen, Germany)
Christopher Wiebusch (III. Physikalisches Institut B, RWTH Aachen University,
52056 Aachen, Germany)
(August 30, 2024)
###### Abstract
In the standard picture of galactic cosmic rays, a diffuse flux of high-energy
gamma-rays and neutrinos is produced from inelastic collisions of cosmic ray
nuclei with the interstellar gas. The neutrino flux is a guaranteed signal for
high-energy neutrino observatories such as IceCube, but has not been found
yet. Experimental searches for this flux constitute an important test of the
standard picture of galactic cosmic rays. Both an observation and a non-
observation would have important implications for the physics of cosmic ray
acceleration and transport. We present CRINGE, a new model of galactic diffuse
high-energy gamma-rays and neutrinos, fitted to recent cosmic ray data from
AMS-02, DAMPE, IceTop, and KASCADE. We quantify the uncertainties for
the predicted emission from the cosmic ray model, but also from the choice of
source distribution, gas maps and cross-sections. We consider the possibility
of a contribution from unresolved sources. Our model predictions exhibit
significant deviations from older models. Our fiducial model is available at
this https URL.
## 1 Introduction
Galactic diffuse emission (GDE) of photons and neutrinos is radiation produced
within the interstellar medium (ISM) of the Galaxy. Electromagnetic radiation
has been observed at all wavelengths ranging from radio and microwaves up to
PeV gamma-rays. Of particular interest are photons and neutrinos from hadronic
interactions of galactic cosmic rays (GCRs) with the interstellar gas while
propagating through the Galaxy (Strong et al., 2000). Both, diffuse galactic
emission of high-energy photons and neutrinos, offer invaluable information on
the spatial and spectral distribution of GCRs elsewhere in the Galaxy (Tibaldo
et al., 2021), providing immediate information on the century-old problem of
the origin of galactic cosmic rays.
At gamma-ray energies, three processes contribute to the high-energy GDE of
photons: decay of neutral pions, produced in inelastic collisions of GCR
nuclei with interstellar gas; bremsstrahlung from GCR electrons and positrons
on the interstellar gas; and inverse Compton scattering, that is, the
upscattering of soft radiation backgrounds by GCR electrons and positrons.
Furthermore, a number of extended structures have been discovered in high-
energy gamma-rays, like the Fermi bubbles (Dobler et al., 2010; Su et al.,
2010; Ackermann et al., 2014) or galactic radio loops (Berkhuijsen et al.,
1971), as well as an isotropic, extragalactic gamma-ray flux (Ackermann et
al., 2015). Finally, a certain fraction of the observed diffuse emission is
likely due to so-called unresolved sources, that is sources with fluxes below
the experimental detection thresholds (Vecchiotti et al., 2022, 2022).
Untangling the various contributions to GDE in gamma-rays is a formidable task
that is in part made difficult by the degeneracy between the hadronic and
leptonic contributions.
Unlike the multi-component fluxes of photons, the flux of galactic diffuse
neutrinos offers a clean window to the study of GCRs because neutrinos uniquely
originate from the decay of charged mesons, i.e. pions, that are produced in
hadronic interactions. This flux is a guaranteed signal for the IceCube
Neutrino Observatory (Aartsen et al., 2017a) and other high-energy neutrino
telescopes (Ageron et al., 2011; Adrian-Martinez et al., 2016; Avrorin et al.,
2018), but a discovery has thus far remained elusive. However, recent analyses
(Aartsen et al., 2019a; Albert et al., 2018; Aartsen et al., 2017b) suggest
that such a discovery could be in reach within the near future.
Precise models of diffuse neutrino emission play a two-fold role in the
searches: First, they quantify our expectations for searches, allowing us to
estimate what bearings (non-)observations have for our understanding of models
of GCRs. Second, they provide detailed spatio-spectral templates for targeted
experimental searches which otherwise suffer from atmospheric and
extragalactic backgrounds.
Model predictions of diffuse galactic neutrino emission can be based on
observed photon fluxes because of the close connection between the parent
particles, neutral and charged pions, that originate from the same
interactions of GCRs. Of the models available in the literature, the
Fermi-$\pi^{0}$ model (Ackermann et al., 2012) and the KRA$\gamma$ model
(Gaggero et al., 2015b) have been employed in most of the recent experimental
searches (Aartsen et al., 2017b; Albert et al., 2018; Aartsen et al., 2019a).
The Fermi-$\pi^{0}$ model assumes a factorization into the angular
distribution of the $\pi^{0}$-component as modelled by the Fermi collaboration
and a spectrum that above a few $\mathrm{GeV}$ is a single power law with the
same spectral index of $\gamma\approx 2.7$ in all directions. While designed
for use at GeV energies, for neutrino studies, the model spectra have been
extrapolated to higher energies with the same unbroken power law. In previous
studies, the spectral index has also been left free to float. The KRA$\gamma$
model instead exhibits harder gamma-ray spectra in the galactic center
direction, leading to a different morphology at the energies of interest for
observations of high-energy neutrinos (see also de la Torre Luque et al.
(2022) for a recent update of the KRA$\gamma$ model). In addition, there are
also a number of analytical parametrizations of fluxes of high-energy gamma-
rays and neutrinos (Joshi et al., 2014; Lipari & Vernetto, 2018; Fang &
Murase, 2021).
These existing models, however, suffer from two main drawbacks: First, they
have not been systematically fitted to the latest data of local GCR
observations. Given the various uncertainties in the modelling of GDE, local
data on GCR should serve as an important anchor point and one would hope that
the models are able to reproduce these local data (e.g. Marinos et al., 2022).
Second, the models suffer from a lack of quantitative estimates of model
uncertainties. While the sources of such uncertainties are manifold (fit
uncertainties from the GCR model, choice of gas maps and cross-sections), both
uses of GDE models (see above) rely heavily on a proper estimate of model
uncertainties.
In this paper, we aim at updating existing models in the light of recent data
on local GCR fluxes, and present the CRINGE111Cosmic Ray-fitted Intensities of
Galactic Emission model. We also quantify the uncertainties from the GCR model
parameters and the various choices for GDE inputs.
While we hope to present a useful state-of-the-art model for high-energy
diffuse galactic neutrinos, we must caution that we rely on existing,
_conventional_ models for the sources and transport of GCRs. Various anomalies
in gamma-ray observations have recently pointed to the need for overhauls of
such conventional models (Acero et al., 2016; Yang et al., 2016). For
instance, the observation of a hardening of the gamma-ray emission towards the
inner Galaxy has not been fully understood.
The outline of the paper is as follows: In Sec. 2, we start by describing the
various ingredients for predictions of the GDE in high-energy neutrinos and
gamma-rays. We lay out the GCR model and describe the global fit to local GCR
data. Next, we discuss the various choices for other inputs of the diffuse
model, that is gas maps, cross-sections and the photon backgrounds. We also
present a model of unresolved sources and explain our treatment of gamma-ray
absorption. Our results are presented in Sec. 3, first for the global fit to
local measurements of GCRs, then for the diffuse fluxes of high-energy gamma-
rays and neutrinos. An extended discussion in the context of other GDE
observables can be found in Sec. 4. We conclude in Sec. 5.
## 2 Method
The intensity of hadronic gamma-rays and neutrinos from longitude and latitude
$(l,b)$ and at energy $E$, $J(l,b,E)$, is given as the line-of-sight integral
(e.g. Strong et al. (2000)) of the volume emissivity, generated by the
inelastic collisions of hadronic GCRs with the gas in the interstellar medium,
$\displaystyle J(l,b,E)$
$\displaystyle=\frac{1}{4\pi}\sum_{m,n}\int_{0}^{\infty}\mathrm{d}s\int_{E}^{\infty}\mathrm{d}E^{\prime}\,\frac{\mathrm{d}\sigma_{m,n}}{\mathrm{d}E}(E^{\prime},E)$
(1) $\displaystyle\times
J_{m}(\bm{r},E^{\prime})n_{\text{gas},n}(\bm{r})\Big{|}_{\bm{r}=\bm{r}(l,b,s)}\,.$
Here, $J_{m}(\bm{r},E)=v/(4\pi)\psi_{m}(\bm{r},p)$ is the GCR intensity of
species $m$, where $\psi$ is the isotropic CR density per unit energy.
$(\mathrm{d}\sigma_{m,n}/\mathrm{d}E)(E^{\prime},E)$ is the differential
cross-section for production of gamma-rays or neutrinos of energy $E$ from
inelastic collisions of GCR species $m$ of energy $E^{\prime}$ on gas of
species $n$. Finally, $n_{\text{gas},n}(\bm{r})$ denotes the 3D distribution
of gas, mostly atomic and molecular hydrogen in the Galaxy.
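As a toy illustration of Eq. (1), and not the paper's actual pipeline, the line-of-sight integral can be evaluated numerically for a single CR species on a single gas species. The power-law CR intensity, the exponential gas disk, the delta-function cross-section $\mathrm{d}\sigma/\mathrm{d}E=\sigma_0\,\delta(E-\kappa E^{\prime})$ (which collapses the $E^{\prime}$ integral), and all parameter values below are illustrative assumptions:

```python
import numpy as np

KPC_CM = 3.086e21   # cm per kpc
R_SUN = 8.2         # kpc, assumed galactocentric distance of the Sun

def galactocentric(l, b, s):
    """(r, z) in kpc for a point at distance s [kpc] along direction (l, b) [rad]."""
    x = R_SUN - s * np.cos(b) * np.cos(l)
    y = -s * np.cos(b) * np.sin(l)
    z = s * np.sin(b)
    return np.hypot(x, y), z

def J_toy(l, b, E, n_s=500, s_max=30.0):
    """Trapezoidal line-of-sight integral of Eq. (1) for one species pair,
    with dsigma/dE = sigma0 * delta(E - kappa*E') so that the E' integral
    collapses to (sigma0/kappa) * J_cr(E/kappa)."""
    kappa, sigma0 = 0.1, 3e-26                             # toy values, cm^2
    s = np.linspace(0.0, s_max, n_s)                       # kpc
    r, z = galactocentric(l, b, s)
    J_cr = 1e-2 * (E / kappa) ** -2.7 * np.exp(-r / 10.0)  # toy GCR intensity
    n_gas = 0.9 * np.exp(-r / 3.0 - np.abs(z) / 0.1)       # toy gas disk, cm^-3
    integrand = sigma0 / kappa * J_cr * n_gas
    ds = np.diff(s) * KPC_CM
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * ds) / (4.0 * np.pi)
```

With these toy inputs the emission is, as expected, brightest toward the inner Galaxy and strongly concentrated in the plane.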
Similarly, the intensity of gamma-rays originating from inverse Compton
scattering of cosmic ray leptons, $J_{\mathrm{IC}}(l,b,E)$, can be calculated
as
$\displaystyle J_{\mathrm{IC}}(l,b,E)$
$\displaystyle=\frac{1}{4\pi}\sum_{e^{+},e^{-}}\int_{0}^{\infty}\mathrm{d}s\int_{0}^{\infty}\mathrm{d}E_{0}\int_{0}^{\infty}\mathrm{d}E^{\prime}\,$
(2)
$\displaystyle\frac{\mathrm{d}\sigma_{\mathrm{KN}}}{\mathrm{d}E}(E^{\prime},E,E_{0})$
$\displaystyle\times
J_{e}(\bm{r},E^{\prime})n_{\text{ISRF}}(\bm{r},E_{0})\Big{|}_{\bm{r}=\bm{r}(l,b,s)}\,.$
Here, $(\mathrm{d}\sigma_{\mathrm{KN}}/\mathrm{d}E)(E^{\prime},E,E_{0})$ is
the Klein-Nishina cross-section and $n_{\text{ISRF}}(\bm{r},E_{0})$ is the
number density of radiation field photons per unit energy (Blumenthal &
Gould, 1970). The predicted intensity of high-energy gamma-rays or neutrinos as a
function of direction and energy therefore depends on these three inputs: a
model for the distribution of GCRs, a map of the interstellar gas and
radiation field in the Milky Way, and the hadronic production cross-sections.
In the following, we will detail our modelling choices for each of these
inputs. We will also describe the global fit that we employed to determine the
parameters of our GCR model.
### 2.1 Galactic Cosmic Ray Model
The propagation of GCRs is usually modelled with the transport equation
(Parker, 1965; Ginzburg & Syrovatskii, 1964),
$\frac{\partial\psi}{\partial t}-\frac{\partial}{\partial
x_{i}}\kappa_{ij}\frac{\partial\psi}{\partial
x_{j}}+u_{i}\frac{\partial\psi}{\partial x_{i}}+\frac{\partial}{\partial
p}\left(b\frac{\partial\psi}{\partial p}\right)=q\,.$ (3)
The boundary conditions employed are that $\psi$ vanishes on the surface of a
cylinder of half-height $z_{\text{max}}$ and radius $r_{\text{max}}$. We have
assumed $r_{\text{max}}=20\,\text{kpc}$ and adopted
$z_{\text{max}}=6\,\text{kpc}$ (Evoli et al., 2020). The solution of this
partial differential equation depends on the assumed transport parameters,
that is the spatial diffusion tensor $\kappa_{ij}$, the advection velocity
$u_{i}$, the momentum loss rate $b$ and the source density $q$. We solve eq.
(3), employing a publicly available version of the DRAGON code (Evoli et al.,
2017), assuming axisymmetry with respect to the direction perpendicular to the
galactic disk. In the following, we describe the various transport parameters.
#### 2.1.1 Diffusion
GCRs diffuse due to resonant interactions with a spectrum of turbulent
magnetic fields (Berezinskii et al., 1990). Here, we assume isotropy, such that
the diffusion tensor is a diffusion coefficient $D$ times the unit matrix,
$\kappa_{ij}=D\delta_{ij}$. If the power spectrum of turbulence were known, the
diffusion coefficient $D$ could be computed. In phenomenological applications,
however, the diffusion coefficient is oftentimes assumed to follow a certain
parametric form.
As for its rigidity-dependence, we employ a power law with four breaks,
$\displaystyle D(\mathcal{R})$
$\displaystyle=D_{0}\beta\left(\frac{\mathcal{R}}{\mathcal{R}_{12}}\right)^{-\delta_{1}}$
(4)
$\displaystyle\times\prod_{i=1}^{4}\left(1+\left(\frac{\mathcal{R}}{\mathcal{R}_{i(i+1)}}\right)^{1/s_{i(i+1)}}\right)^{-s_{i(i+1)}(\delta_{i+1}-\delta_{i})}$
hence four break rigidities $\{\mathcal{R}_{i(i+1)}\}$, four softness
parameters $\{s_{i(i+1)}\}$, five spectral indices $\{\delta_{i}\}$, and
one normalization $D_{0}$.
Of course, such spectral breaks should only be introduced if required for an
accurate description of the data (see Vittino et al. (2019), where the
necessity of spectral breaks in the diffusion coefficient is discussed for
models of GCR electrons and positrons). At the same time, it is important to
motivate the breaks from a physical mechanism, e.g. from features in the power
spectrum of turbulence. In the following, we provide some pointers as to the
physical origin of the breaks we consider. While the exact break parameters
will be determined from a global fit to local GCR data, we indicate some
benchmarks.
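As a concrete illustration, eq. (4) can be implemented directly; the break positions, softness parameters and spectral indices below are illustrative benchmarks only, not our fitted values, and $\beta$ is evaluated for protons.

```python
import math

# Sketch of the diffusion coefficient of eq. (4) for protons.  All numerical
# values (breaks, softnesses, indices, normalization) are illustrative only.
M_P = 0.938  # proton mass in GeV

def beta_proton(R):
    """Velocity v/c of a proton at rigidity R (in GV); for Z = 1 the
    momentum in GeV equals the rigidity in GV."""
    return R / math.sqrt(R**2 + M_P**2)

def D_of_R(R, D0=5e28, R12=4.0, breaks=(4.0, 300.0, 2e5, 3e6),
           softness=(0.1, 0.1, 0.1, 0.1),
           deltas=(0.6, 0.5, 0.35, 0.5, 0.7)):
    """Power law with four smoothed breaks, eq. (4); R and breaks in GV."""
    D = D0 * beta_proton(R) * (R / R12) ** (-deltas[0])
    for Rb, s, dlo, dhi in zip(breaks, softness, deltas[:-1], deltas[1:]):
        D *= (1.0 + (R / Rb) ** (1.0 / s)) ** (-s * (dhi - dlo))
    return D
```

Well above the highest break each smoothed factor reduces to a pure power law, so the local slope of $D(\mathcal{R})$ approaches $-\delta_{5}$, as expected from eq. (4).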
At rigidities of a few GV and above, the propagated spectra are to a first
approximation proportional to the source spectrum divided by the diffusion
coefficient. Under the constraint of producing the observed spectral indices,
the spectral indices of the source spectra and of the diffusion coefficient
are therefore approximately degenerate. This degeneracy gives us the freedom
to choose a pure power law for the source spectra, see below, and instead
absorb possible spectral breaks in the source spectrum into breaks of the
diffusion coefficient.
The first spectral break ($\delta_{2}-\delta_{1}>0$) in the diffusion
coefficient at a few GV is a hardening of its spectrum and serves to absorb a
spectral softening of the source spectrum. Such spectral breaks have been
observed in the gamma-ray spectra of supernova remnants (SNRs) by Fermi-LAT
(Abdo et al., 2009a, b, 2010a, 2010b). One possibility is that the break in
the diffusion coefficient is due to self-confinement of GCRs in the near-
source environment (Jacobs et al., 2022). Such a break has also been
introduced into models of GCRs on purely phenomenological grounds, that is in
order to achieve a better fit to locally measured spectra when not including
reacceleration, as is the case for the model constructed here. We note that
even for a model that includes reacceleration such a break is likely necessary
(Strong et al., 2011).
The hardening break in the GCR spectra ($\delta_{3}-\delta_{2}<0$) at
$\mathcal{R}_{23}\simeq 300\,\mathrm{GV}$ has been found to be present both in
primary (Panov et al., 2007; Ahn et al., 2010; Adriani et al., 2011; Aguilar
et al., 2015a, b, 2017) and secondary GCRs (Aguilar et al., 2018). The fact
that the break is more pronounced in secondary species points to a propagation
effect (Vladimirov et al., 2012; Génolini et al., 2017) rather than a feature
in the source spectrum. It has been shown (Blasi et al., 2012; Evoli et al.,
2018) that such a break can be rather naturally explained as a transition in
the turbulence power spectrum from self-generated turbulence dominating $D$
for low rigidities to external turbulence for high rigidities.
A further softening in the GCR spectra ($\delta_{4}-\delta_{3}>0$) has been
observed in the spectra of proton and helium by the DAMPE (An et al., 2019;
Alemanno et al., 2021) and CALET (Adriani et al., 2019; Brogi & Kobayashi,
2021) experiments. Earlier indications from the CREAM experiment exist (Yoon
et al., 2017). While it has been suggested to be the spectral feature of an
individual nearby source (Malkov & Moskalenko, 2021; Fornieri et al., 2021),
statistically such a scenario is considered unlikely (Génolini et al., 2017).
Instead, it might be attributed to a cut-off of one population of sources,
e.g. supernova remnants (SNRs), before a different population takes over at
higher rigidities.
Finally, the softening ($\delta_{5}-\delta_{4}>0$) break in the GCR spectra
around $\sim\text{PV}$ is the well-known cosmic ray “knee”. Although it has
been discovered in the all-particle spectrum of cosmic rays as early as 1959
(Kulikov & Khristiansen, 1959), there is still no consensus about its origin.
The KASCADE and KASCADE-Grande experiments have identified it to be consistent
with a break at fixed rigidity for different elements (Antoni et al., 2005;
Apel et al., 2011). Therefore the leading hypotheses are that it corresponds
either to the maximum rigidities at which galactic magnetic fields can contain
GCRs or to the maximum rigidity of galactic accelerators of cosmic rays
(Hillas, 1984).
#### 2.1.2 Source Injection
We follow the usual assumption of a factorization of the source term
$q=q(\bm{r},p)$ in eq. (3) into a spatial source distribution $S(r,z)$ and an
injection spectrum, $g(p)$,
$q(\bm{r},p)=S(r,z)\,g(p)\,.$ (5)
The spatial source distribution $S(r,z)$ is an input to the cosmic ray model
with a significant impact on the morphology of the resulting diffuse emission.
To estimate the associated uncertainty, we use the four commonly used
distributions from Ferrière (2001), Case & Bhattacharya (1998), Lorimer et al.
(2006) and Yusifov & Kücük (2004). All of these are analytical
parametrizations based on population studies of SNRs, massive stars as
progenitors or pulsars as relics of supernovae in the Milky Way. Thus, they
all serve as a proxy for the distribution of SNRs, the likely preeminent
sources of galactic cosmic rays. For an early inference of the radial source
distribution from diffuse GeV gamma-rays, see Stecker & Jones (1977).
While overall all four parametrizations agree qualitatively, there is
significant quantitative disagreement between them. This is true in particular
towards the galactic center, where the source density of the Case &
Bhattacharya (1998) and Yusifov & Kücük (2004) models is forced to zero, while
the distributions of Ferrière (2001) and Lorimer et al. (2006) attain finite
values.
Figure 1: Radial profiles and relative differences to the Ferrière (2001)
model of the four GCR source distributions used in this work.
The injection spectra $g_{i}(p)$ for all nuclei $i$ are assumed to be pure
power laws with energy-independent, but in general different, spectral
indices, that is $g(p)\propto p^{-\gamma_{i}}$. Possible breaks in the source
spectra can to a certain extent be absorbed into breaks in the diffusion
coefficient, see the discussion in Sec. 2.1.1.
For the calculation of the diffuse emission, the relevant quantity for
hadronic production is the all-nucleon flux as a function of kinetic energy
per nucleon, $E_{\text{kin}}/n$. As $E_{\text{kin}}/n\propto(Z/A)\mathcal{R}$,
spectral features at a common rigidity $\mathcal{R}$ appear at similar
$E_{\text{kin}}/n$ for all nuclei. The spectra of nuclei heavier than helium
therefore feature a similar $E_{\text{kin}}/n$-dependence to those of lighter
nuclei, and their contribution to the all-nucleon flux remains small at all
$E_{\text{kin}}/n$: the all-nucleon flux in our model is dominated by the
contributions from protons and helium. This allows us to approximate the
subdominant contributions of cosmic ray nuclei heavier than helium via a
scaling factor in the hadronic production cross-sections for $p$-$p$
interactions as in Casandjian (2015), significantly reducing the computation
time.
Besides the three spectral indices $\gamma_{\text{p}}$, $\gamma_{\text{He}}$
and $\gamma_{\text{C}}$, the source abundances of helium, $N_{\text{He}}$, and
carbon, $N_{\text{C}}$, relative to that of protons are also free parameters
in the model. The proton abundance itself is set arbitrarily to $1.06\times
10^{6}$, as is for example done in the supplementary material to Ackermann et
al. (2012) (https://galprop.stanford.edu/PaperIISuppMaterial/).
For the lepton spectra, the situation is somewhat more complicated. Following
Vittino et al. (2019), we model the injection spectrum of electrons as a twice
broken power law, with the spectral indices $\gamma^{e^{-}}_{i}$ and break
energies $E^{e^{-}}_{i(i+1)}$ as free fit parameters. To account for the very
short propagation distances of cosmic ray electrons at TeV energies, which are
not correctly grasped by the assumed smooth source distribution, the injection
spectrum is exponentially cut off at
$E^{e^{-}}_{\text{cut}}=20\,\mathrm{TeV}$. This is similar to the choices made
in Mertsch (2018), where such a cut-off was found to be necessary to match the
TeV $e^{-}+e^{+}$ data.
To account for the spectral hardening in the positron spectrum at
$\mathrm{GeV}$ energies (Aguilar et al., 2019a), an extra source component
yielding equal amounts of electrons and positrons is added to the model.
Similarly to Vittino et al. (2019), it represents additional astrophysical
lepton sources, but is agnostic of any precise models for these sources. The
assumed spatial distribution is the same as for all other sources. The
spectrum of this extra component is modelled as a once broken power law with
an exponential cut-off. The break energy is fixed to
$E^{\text{extra}}_{12}=50\,\mathrm{GeV}$, the cut-off is at
$E^{\text{extra}}_{\text{cut}}=600\,\mathrm{GeV}$. The two spectral indices
$\gamma^{\text{extra}}_{i}$ are free fit parameters.
### 2.2 Interstellar Medium Components
Computing the diffuse emission of neutrinos and hadronic gamma-rays requires a
3D map $n_{\text{gas}}(\bm{r})$ of the gas in the Galaxy, see eq. (1). Such 3D
maps can be obtained from galactic surveys of gas line emission. Those are
based on the fact that galactic rotation induces different relative velocities
between the gas and the observer. Assuming a velocity model, e.g. a rotation
curve, this can be used to convert the survey into a 3D map. Given the
curve, this can be used to convert the survey into a 3D map. Given the
uncertainties of such a reconstruction, models with rather large bins in
galactocentric radius were developed, so-called ring models (see appendix B
of Ackermann et al. (2012) for details). More recently, more
sophisticated Bayesian inference techniques have been used (Mertsch & Vittino,
2021; Mertsch & Phan, 2022). Alternatively, analytical parametrizations of the
spatial gas distributions have been suggested (Lipari & Vernetto, 2018) as
well as more complicated parametrizations that were fit to data (Jóhannesson
et al., 2018). Our choices for the maps of both atomic and molecular gas are
described in Secs. 2.2.1 and 2.2.2.
For the computation of leptonic Inverse Compton gamma-ray emission, a model of
the ISRF of the Milky Way is required. We describe the models used in this
work in Sec. 2.2.3.
#### 2.2.1 Atomic Hydrogen
Atomic hydrogen (Hi) is traced by the well-known $21\,\mathrm{cm}$ emission
line from the hyperfine transition. Combining the data from various
telescopes, the LAB survey (Kalberla et al., 2005) and the more recent HI4PI
survey (Ben Bekhti et al., 2016) have become available. The quantity measured
by these surveys is the brightness temperature
$T_{\text{B}}(l,b,v_{\text{LSR}})$ of the emission line as a function of
direction and radial velocity $v_{\text{LSR}}$ with respect to the local
standard of rest. The transformation into a differential column density
$(\mathrm{d}N_{\text{H}\textsc{i}}/\mathrm{d}v_{\text{LSR}})(l,b,v_{\text{LSR}})$
can be calculated from the thermodynamics of two-level systems (Draine, 2010;
Ackermann et al., 2012)
$\frac{\mathrm{d}N_{\text{H}\textsc{i}}}{\mathrm{d}v_{\text{LSR}}}=CT_{\text{s}}\tau=-CT_{\text{s}}\ln{\left(1-\frac{T_{\text{B}}}{T_{\text{s}}-T_{\text{CMB}}}\right)}\,,$
(6)
with $C=1.823\times
10^{18}\,\mathrm{cm}^{-2}\,(\mathrm{K}\,\mathrm{km}\,\mathrm{s}^{-1})^{-1}$.
This involves the spin temperature $T_{\text{s}}$, which is equivalent to the
population ratio of the excited state to the ground state of the hyperfine
structure transition. A large $T_{\text{s}}$, corresponding to a large
population of the excited state and in consequence little self-absorption and
a small optical depth, therefore results in less gas column density being
inferred from the observed $T_{\text{B}}$ of the emission. In fact, the limit
$T_{\text{s}}\rightarrow\infty$ corresponds to the optically thin limit and
poses a lower limit on the total amount of Hi (Mertsch & Phan, 2022).
Correspondingly, low $T_{\text{s}}$ lead to higher inferred column densities.
$T_{\text{s}}$ is quite uncertain and also thought to vary across the Galaxy
(Ackermann et al., 2012; Ben Bekhti et al., 2016). Typically, models assume a
constant value of either $T_{\text{s}}\rightarrow\infty$ or
$T_{\text{s}}\approx 100-500\,\mathrm{K}$, corresponding to a regime where
optical depth becomes relevant (see for example Ackermann et al. (2012);
Mertsch & Phan (2022)).
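A minimal sketch of eq. (6) and its optically thin limit; the value used for $T_{\text{CMB}}$ is our assumption, as the text does not quote one.

```python
import math

# Sketch of eq. (6): HI differential column density from the 21 cm brightness
# temperature T_B for an assumed spin temperature T_s.
# T_CMB = 2.725 K is an assumption on our part (not quoted in the text).
C = 1.823e18        # cm^-2 (K km/s)^-1
T_CMB = 2.725       # K

def dN_dv(T_B, T_s):
    """Differential HI column density in cm^-2 (km/s)^-1, eq. (6)."""
    return -C * T_s * math.log(1.0 - T_B / (T_s - T_CMB))

def dN_dv_thin(T_B):
    """Optically thin limit T_s -> infinity: dN/dv = C * T_B."""
    return C * T_B
```

For, e.g., $T_{\text{B}}=30\,\mathrm{K}$, the $T_{\text{s}}=150\,\mathrm{K}$ value exceeds the optically thin one, illustrating that lower spin temperatures yield higher inferred column densities.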
In order to estimate the uncertainty associated with the choice of the gas
distributions, we employ three different models to calculate diffuse emission.
These are
* •
GALPROP: This is the gas map described as ${}^{\mathrm{T}}150^{\mathrm{C}}5$
in Ackermann et al. (2012). It is based on the LAB Hi survey data
deconvolved into 17 rings and assumes $T_{\text{s}}=150\;\mathrm{K}$. It also
includes a correction for dark gas calculated from dust reddening maps
(Ackermann et al., 2012; Grenier et al., 2005).
* •
GALPROP-OT: This is exactly the same distribution as the GALPROP model, except
that the gas is assumed to be optically thin at a spin temperature of
$T_{\text{s}}=10^{5}\;\mathrm{K}$. In Ackermann et al. (2012), this model is
called ${}^{\mathrm{T}}100000^{\mathrm{C}}5$.
* •
HERMES: This model uses the HI4PI Hi survey data deconvolved into 11 rings and
assuming $T_{\text{s}}=300\;\mathrm{K}$ (Remy et al., 2018; see also
https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/aux/4fgl/Galactic_Diffuse_Emission_Model_for_the_4FGL_Catalog_Analysis.pdf).
#### 2.2.2 Molecular Hydrogen
Maps of molecular hydrogen (H2) have been derived from the CfA survey
compilation (Dame et al., 2001) of emission of the $J=1\to 0$ line of
carbon monoxide (Heyer & Dame, 2015). Similar to the $21\,\mathrm{cm}$ surveys,
it provides a map of brightness temperatures
$T_{\text{B}}(l,b,v_{\text{LSR}})$. The conversion to the differential column
density
$(\mathrm{d}N_{\text{H${}_{2}$}{}}/\mathrm{d}v_{\text{LSR}})(l,b,v_{\text{LSR}})$
is however not as rigorous as the procedure in eq. (6). Instead, a
phenomenological conversion factor $X_{\text{CO}}$ is used to obtain the
following relation
$\frac{\mathrm{d}N_{\text{H${}_{2}$}}}{\mathrm{d}v_{\text{LSR}}}=X_{\text{CO}}T_{\text{B}}.$
(7)
There is great uncertainty associated with the $X_{\text{CO}}$ factor: While
locally a value around $2\times
10^{20}\,\mathrm{cm}^{-2}\,(\mathrm{K}\,\mathrm{km}\,\mathrm{s}^{-1})^{-1}$ is
generally recommended (Bolatto et al., 2013), different works assume values as
low as $0.5\times
10^{20}\,\mathrm{cm}^{-2}\,(\mathrm{K}\,\mathrm{km}\,\mathrm{s}^{-1})^{-1}$
(Liu & Yang, 2022) and as high as $8\times
10^{20}\,\mathrm{cm}^{-2}\,(\mathrm{K}\,\mathrm{km}\,\mathrm{s}^{-1})^{-1}$
(de la Torre Luque et al., 2022). Also, $X_{\text{CO}}$ is often taken to vary
with galactocentric radius, with different models assuming different radial
dependencies. A common description is a ring-based model, where
$X_{\text{CO}}$ is constant within different galactocentric rings. Examples of
this are Gaggero et al. (2015b), where $X_{\text{CO}}$ is fixed in two rings,
and Ackermann et al. (2012), where it is left as a free fit parameter in 13
rings. In these models, $X_{\text{CO}}$ typically increases with
galactocentric radius. This is explained through a decrease in metallicity in
the outer Galaxy that increases the H2 to CO ratio (Gaggero et al., 2015b).
In this study, we conservatively opt for a constant value of
$X_{\text{CO}}=2\times
10^{20}\;\mathrm{cm}^{-2}\;(\mathrm{K}\;\mathrm{km}\;\mathrm{s}^{-1})^{-1}$
throughout the Galaxy, in accordance with the above-mentioned local
recommendations.
#### 2.2.3 Interstellar Radiation Field
Models of the interstellar radiation field (ISRF) in the Milky Way require,
besides the well-measured homogeneous cosmic microwave background (CMB),
calculations of the accumulated galactic starlight and the infrared emission
from dust. These calculations are challenging as the absorption and re-
emission of starlight from dust couples these components. Different analytical
(e.g. Vernetto & Lipari, 2016) and numerical (e.g. Porter et al., 2008, 2017)
approaches are used, yielding quantitatively different results. To account for
this uncertainty in the calculation of Inverse Compton gamma-ray fluxes, we
consider two different models, namely those from Porter et al. (2008)
(henceforth called GALPROP) and Vernetto & Lipari (2016).
### 2.3 Hadronic Production Cross-sections
The calculation of hadronic gamma-ray and neutrino production cross-sections
in different hadronic interaction models is associated with sizeable
uncertainties, as was for example recently detailed by Koldobskiy et al.
(2021). To estimate the resulting uncertainties on the diffuse emission
models, we have used three different models for our calculations. These are
* •
K&K: This model combines the parametrization of the total inelastic $p$-$p$
cross-section from Kafexhiu et al. (2014) with the secondary yields and
spectra from Kelner et al. (2006). The latter is based on an analytical
parametrization of the SIBYLL hadronic interaction model (Fletcher et al.,
1994). Note that this parametrization is only valid for primary energies
above $100\,\mathrm{GeV}$ (Kelner et al., 2006).
* •
KamaeExtended: For primary energies below $500\,\mathrm{TeV}$, this model uses
the parametrization from Kamae et al. (2006) which is in large parts derived
from the PYTHIA 6.2 event generator. At higher energies, it is extended with
the K&K model. This follows the prescription used in Gaggero et al. (2015a).
* •
AAfrag: This is based on interpolation tables from the hadronic interaction
model QGSJET-II-04m described in Kachelriess et al. (2019); Koldobskiy et al.
(2021). Below $4\,\mathrm{GeV}$ primary energy, it is complemented by the
parametrization from Kamae et al. (2006).
The K&K and KamaeExtended parametrizations are only available for $p$-$p$
interactions (Kamae et al., 2006; Kelner et al., 2006) and interactions of
heavier gas and cosmic ray nuclei need to be described via scaling factors of
the $p$-$p$ cross-sections. AAfrag in principle contains explicit models for
the interactions of heavier nuclei (Koldobskiy et al., 2021). In this work,
these are however treated through the same scaling factors as used for the
other cross-section parametrizations. The scheme used for these scaling
factors directly follows Casandjian (2015): For protons and helium cosmic ray
nuclei, which are included in the sum in eq. (1), the scaling factors relative
to the $p$-$p$ cross-sections for different targets are taken from Mori
(2009). As already described in Sec. 2.1.2, interactions of cosmic ray nuclei
heavier than helium are not calculated explicitly, but are rather treated
through an increase in the rate of $p$-$p$ interactions.
### 2.4 Unresolved Sources
Individual sources of high-energy gamma-rays or neutrinos vary in luminosity
(also known as intrinsic brightness) and in distance from the observer. The
resulting fluxes (also known as apparent brightnesses) can therefore vary over
many orders of magnitude. Observations are, however, flux-limited due to a
number of effects, like the presence of backgrounds or source confusion. A
source with a flux smaller than a certain threshold value can therefore not be
detected with the required significance and all such sources contribute
collectively to the observed diffuse flux, even though the emission is
originating from spatially well-defined regions and not from the interstellar
medium as for the truly diffuse emission produced by GCRs. These sources and
their collective flux are commonly referred to as “unresolved sources”.
We have modelled the flux of unresolved sources by extrapolating spectra and
luminosity distributions from $\gamma$-ray observations at TeV energies. We
largely follow Vecchiotti et al. (2022) who considered the unresolved sources
to be dominated by pulsar-powered sources. While it might appear that such
sources would be leptonic, we remain agnostic as to the nature of the
particles producing the high-energy gamma-rays inside the sources. Later, when
modelling the flux of high-energy neutrinos, we also consider a contribution
from the same unresolved, pulsar-powered sources as in gamma-rays. Here, we
briefly summarize the salient points of the model and enumerate the adopted
parameter values (Vecchiotti et al., 2022).
The intensity from unresolved sources of high-energy gamma-rays of energy $E$
observed from the direction $(l,b)$ is the cumulative intensity of all sources
below the detection threshold $J_{\text{th}}$,
$J_{\text{unres}}(E,l,b)=\int_{0}^{J_{\text{th}}}\mathrm{d}J\,p_{J}(J;E,l,b)\,J\,,$
(8)
where $p_{J}$ is the probability density for the intensity $J$ at energy $E$
and from the direction $(l,b)$. We assume that the source density factorizes
into a volume density of sources $S_{*}(\bm{r})$ and a probability
distribution of source rates $p_{\Gamma}(\Gamma)$. Under this condition, eq.
(8) can be shown to lead to
$\displaystyle J_{\text{unres}}(E,l,b)$
$\displaystyle=\varphi(E)\int_{L_{\text{min}}}^{L_{\text{max}}}\mathrm{d}L\,\frac{\mathrm{d}N}{\mathrm{d}L}\frac{\Gamma}{4\pi}$
(9)
$\displaystyle\times\int_{\sqrt{\Gamma/(4\pi\Phi_{\text{th}})}}^{\infty}\mathrm{d}s\,S_{*}(\bm{r}(s,l,b))\,.$
Here, $\varphi(E)$ denotes the spectrum of a single source,
$\varphi(E)=\frac{\beta-1}{1-100^{1-\beta}}\left(\frac{E}{1\,\text{TeV}}\right)^{-\beta}\exp\left[-\frac{E}{E_{\text{cut}}}\right]\,,$
(10)
normalized to $\sim 1$ in the energy range $1$ to $100\,\text{TeV}$, assuming
a cut-off energy $E_{\text{cut}}=500\,\text{TeV}$. For the luminosity function
$\mathrm{d}N/\mathrm{d}L$, we follow again Vecchiotti et al. (2022) in
adopting a power law form,
$\frac{\mathrm{d}N}{\mathrm{d}L}=\frac{\Gamma_{*}\tau(\alpha-1)}{L_{\text{max}}}\left(\frac{L}{L_{\text{max}}}\right)^{-\alpha}\,,$
(11)
with $\Gamma_{*}=1.9\times 10^{-2}\,\text{yr}^{-1}$, $\tau=1.8\times
10^{3}\,\text{yr}$, $\alpha=1.5$ and $L_{\text{max}}\equiv
L_{\text{max,HESS}}=4.9\times 10^{35}\,\text{erg}\,\text{s}^{-1}$. The source
rate $\Gamma$ can be related to source luminosity $L$ as
$\Gamma=(\beta-2)/(\beta-1)L$. Throughout, we have assumed a spectral index
$\beta=2.3$. We have varied the flux threshold $\Phi_{\text{th}}$ between
$0.01$ and $0.1\,\Phi_{\text{Crab}}$ where for the 1 to $100\,\text{TeV}$
energy range $\Phi_{\text{Crab}}=2.26\times
10^{-11}\text{cm}^{-2}\,\text{s}^{-1}$. Finally, for the spatial distribution
$S_{*}(\bm{r})$ we have assumed the distribution from Lorimer et al. (2006)
and $\bm{r}(s,l,b)$ denotes the position at a distance $s$ from the observer
in the direction $(l,b)$. The per-flavor neutrino intensity $J_{\nu}(E_{\nu})$
is related to this gamma-ray intensity
$J_{\gamma}(E_{\gamma})=J_{\text{unres}}(E_{\gamma})$ as (e.g. Ahlers &
Murase, 2014; Fang & Murase, 2021)
$J_{\nu}(E_{\nu})=2J_{\gamma}(2E_{\nu})\,.$ (12)
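A sketch of the single-source spectrum of eq. (10) and the luminosity function of eq. (11), using the parameter values quoted above; the trapezoidal normalization check at the end is purely illustrative.

```python
import math

# Sketch of the single-source spectrum, eq. (10), and the luminosity
# function, eq. (11), with the parameter values quoted in the text.
BETA  = 2.3         # spectral index
E_CUT = 500.0       # cut-off energy in TeV
ALPHA = 1.5
GAMMA_STAR = 1.9e-2 # yr^-1
TAU   = 1.8e3       # yr
L_MAX = 4.9e35      # erg s^-1

def phi(E, with_cutoff=True):
    """Single-source spectrum (E in TeV), eq. (10): normalized over
    1-100 TeV before the exponential cut-off is applied."""
    norm = (BETA - 1.0) / (1.0 - 100.0 ** (1.0 - BETA))
    val = norm * E ** (-BETA)
    if with_cutoff:
        val *= math.exp(-E / E_CUT)
    return val

def dN_dL(L):
    """Luminosity function, eq. (11) (L in erg s^-1)."""
    return GAMMA_STAR * TAU * (ALPHA - 1.0) / L_MAX * (L / L_MAX) ** (-ALPHA)

# Numerical check of the normalization on a log-spaced grid:
n = 4000
lnE = [i * math.log(100.0) / (n - 1) for i in range(n)]
E = [math.exp(x) for x in lnE]
f = [phi(e, with_cutoff=False) * e for e in E]   # integrand in d(ln E)
integral = sum(0.5 * (f[i] + f[i + 1]) * (lnE[i + 1] - lnE[i])
               for i in range(n - 1))
```

The integral evaluates to $\simeq 1$, confirming the stated normalization of the power-law part of $\varphi(E)$ over $1$ to $100\,\text{TeV}$.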
### 2.5 Gamma-Ray Absorption
Above a few $\mathrm{TeV}$, gamma-rays are subject to absorption in photon-
photon interactions with the ISRF during propagation in the Milky Way. In a
fully self-consistent treatment, this is accounted for through the inclusion
of the survival probability $\exp(-\tau(E,s(l,b)))$ in eq. (1), where,
following e.g. Vernetto & Lipari (2016), the optical depth $\tau$ is
calculated as the line-of-sight integral
$\tau(E,s(l,b))=\int_{0}^{s(l,b)}\mathrm{d}s^{\prime}\,K(\bm{r}(l,b,s^{\prime}),E).$
(13)
This depends on the absorption coefficient $K({\bm{r}},E)$, which for an
isotropic ISRF with a photon density per unit energy
$n_{\mathrm{ISRF}}(\bm{r},E_{0})$ is calculated as
$K({\bm{r}},E)=\int\mathrm{d}E_{0}\,n_{\mathrm{ISRF}}(\bm{r},E_{0})\,\langle\sigma_{\gamma\gamma}(E,E_{0})\rangle$
(14)
with the interaction cross-section $\sigma_{\gamma\gamma}$ averaged over the
angle between the two interaction partners.
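A minimal sketch of the optical depth of eq. (13) and the resulting survival probability; the constant absorption coefficient used in the example is a toy stand-in for the full ISRF calculation.

```python
import math

# Sketch of eq. (13): the optical depth as a line-of-sight integral of the
# absorption coefficient K, and the survival probability exp(-tau).
# The constant K below is a toy stand-in for the full ISRF calculation.

def optical_depth(K_of_s, s_total, n=1000):
    """Trapezoidal integral tau = int_0^s K(s') ds'."""
    h = s_total / (n - 1)
    vals = [K_of_s(i * h) for i in range(n)]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

def survival(K_of_s, s_total):
    """Survival probability exp(-tau) of a gamma-ray over distance s."""
    return math.exp(-optical_depth(K_of_s, s_total))

# Toy example: constant absorption coefficient along 8 kpc
K0 = 0.05            # kpc^-1, illustrative
p = survival(lambda s: K0, 8.0)
```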
The overall effect of the absorption thus depends on the assumed galactic
distributions of cosmic rays and gas, the ISRF model and the direction in the
sky. The dependence on both the choice of ISRF model and details of the cosmic
ray and gas distributions is however only weak (Vernetto & Lipari, 2016;
Breuhaus et al., 2022), in parts because the dominant contribution to the
absorption stems from interactions with the homogeneous CMB.
Therefore, and because the fully self-consistent calculation as laid out
above adds another line-of-sight integration to eq. (1) and is thus
computationally expensive, we use a simplified approach instead. In this, we
obtain the absorbed gamma-ray intensity averaged over a given window in the
sky $\Omega$, $J_{\mathrm{abs}}(E,\Omega)$ from the non-absorbed intensity $J$
as
$J_{\mathrm{abs}}(E,\Omega)=p_{\mathrm{abs}}(E,\Omega)J(E,\Omega)$ (15)
with a separately calculated absorption probability
$p_{\mathrm{abs}}(E,\Omega)$. We calculate this following the prescriptions
described in App. E of Breuhaus et al. (2022). We also assume the same ISRF
model (Popescu et al., 2017) and analytical descriptions of the distributions
of galactic gas (Ferrière, 1998; Ferrière et al., 2007) and cosmic rays
(Lipari & Vernetto, 2018) as used there. (We are grateful to Mischa Breuhaus
for providing his implementation of the calculation, which makes use of the
GAMERA code (Hahn, 2016).)
### 2.6 Global Fit
The GCR model described in Sec. 2.1 contains a total of 26 free parameters
describing the modeling of sources and transport of GCRs, see Table 1. We
determine those by fits to local measurements of GCR intensities, which
requires an additional set of 8 parameters that are also listed in Table 1. In
the following, we describe these additional parameters, the datasets
considered, the numerical tools used as well as the procedure adopted for the
global fit.
#### 2.6.1 Data
We consider both data from direct measurements by the space experiments AMS-02
and DAMPE as well as indirect measurements from IceTop and KASCADE.
Specifically, we fit to protons (Aguilar et al., 2015a), helium (Aguilar et
al., 2017), carbon (Aguilar et al., 2017), electrons (Aguilar et al., 2019b),
positrons (Aguilar et al., 2019a), and the boron-to-carbon ratio from AMS-02
(Aguilar et al., 2018), and to the proton (An et al., 2019) and helium
(Alemanno et al., 2021) data from DAMPE. For IceTop (Aartsen et al., 2019b)
and KASCADE (Antoni et al., 2005) data, we use the proton and helium analyses
based on the SIBYLL-2.1 interaction model.
The measurements of the hadronic intensities are however not necessarily
consistent between different experiments. The most commonly cited reason for
this are uncertainties in the (relative) energy scale calibrations of
different experiments, in particular for indirect observations such as those
by IceTop and KASCADE, but also for calorimetric experiments such as DAMPE
(Adriani et al., 2022). We here follow Dembinski et al. (2018) in introducing
one nuisance parameter $\alpha_{k}$ per experiment $k$ in order to rescale the
experimental intensities. For instance, given a reported intensity $J(E)$ as a
function of reported energy $E$, we rescale it according to
$\tilde{J}(\tilde{E})=\frac{1}{\alpha_{k}}J(\alpha_{k}\tilde{E})\,.$ (16)
The nuisance parameters $\{\alpha_{k}\}$ are also determined by the fit. We
impose a log-normal prior with a width of $30\,\%$ on each $\alpha_{k}$. For
AMS-02 and DAMPE, the data are sufficiently constraining that the choice of
prior has no influence (see Table 1).
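The rescaling of eq. (16) can be sketched as follows; the power-law intensity used in the example is purely illustrative.

```python
# Sketch of the energy-scale rescaling of eq. (16): an experiment-specific
# nuisance parameter alpha shifts the reported energy scale of an intensity
# J(E) while conserving the total number of events.

def rescale(J, alpha):
    """Return the rescaled intensity J~(E~) = J(alpha * E~) / alpha."""
    return lambda E: J(alpha * E) / alpha

# Toy example: a pure power law J(E) = E^-2.7 rescaled by alpha = 1.1
J = lambda E: E ** -2.7
J_tilde = rescale(J, 1.1)
```

For a power law $E^{-\gamma}$, the rescaling amounts to a shift in normalization by $\alpha^{-(\gamma+1)}$ while leaving the spectral index unchanged.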
Finally, we treat solar modulation in the force-field model (Gleeson & Axford,
1968). We allow for four different modulation potentials, for protons, nuclei,
electrons and positrons: $\phi_{\text{p}}$, $\phi_{\text{Nuc}}$,
$\phi_{e^{-}}$ and $\phi_{e^{+}}$.
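As a reminder (the formula itself is not spelled out in the text), the force-field model maps the local interstellar spectrum (LIS) to the modulated one via a single potential; a minimal sketch for protons, with an illustrative power-law LIS:

```python
import math

# Reminder sketch of the force-field approximation (Gleeson & Axford, 1968)
# for protons: the modulated intensity at kinetic energy T follows from the
# LIS evaluated at T + phi (for Z = 1, phi in GV equals the energy loss in
# GeV), suppressed by a kinematic factor.
M_P = 0.938   # proton mass, GeV

def modulate(J_lis, T, phi):
    """Force-field modulated intensity at kinetic energy T (GeV)."""
    T_lis = T + phi                     # energy lost in the heliosphere
    factor = (T * (T + 2.0 * M_P)) / (T_lis * (T_lis + 2.0 * M_P))
    return factor * J_lis(T_lis)

# Toy LIS: pure power law in kinetic energy (illustrative only)
J_lis = lambda T: T ** -2.7
```

For $\phi>0$ the modulated flux is suppressed at low energies and approaches the LIS at high energies.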
#### 2.6.2 Numerical Tools
We have used a publicly available version of the DRAGON code (Evoli et al.,
2008) that solves the transport eq. (3) with a finite difference method. In
this, we have implemented our multi-break model for the diffusion coefficient
(eq. (4)) and the parameters associated with it (see Table 1).
For computing the hadronic diffuse emission, we have employed the HERMES code
(Dundovic et al., 2021). This code provides a flexible framework for computing
the volume emissivities, given the spatially resolved GCR spectra and gas
densities, see eq. (1). In addition to the models and parametrizations
provided with the publicly available version of HERMES, we have added the
above mentioned GALPROP and GALPROP-OT gas maps, the AAfrag cross-section
parametrization and respective interfaces to the code.
#### 2.6.3 Procedure
For fitting our model to the observational data, we adopt a Gaussian
likelihood function, combining the quoted statistical and systematic
uncertainties of the AMS-02 and DAMPE data in quadrature. For KASCADE and IceTop,
we have used the statistical uncertainty only and have subsumed the additional
systematic uncertainties into a potentially larger energy scale uncertainty.
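The combination of uncertainties in quadrature can be sketched as a simple $\chi^{2}$ (equal to $-2\ln L$ up to a constant); the data values below are placeholders.

```python
import math

# Sketch of the Gaussian likelihood: statistical and systematic uncertainties
# are added in quadrature per data point; returns -2 ln L up to a constant.
def chi2(data, model, stat, sys):
    total = 0.0
    for d, m, s_stat, s_sys in zip(data, model, stat, sys):
        sigma = math.sqrt(s_stat**2 + s_sys**2)
        total += ((d - m) / sigma) ** 2
    return total
```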
Given the large number of free parameters, finding the best fit is a non-
trivial task. It has been recognized that conventional optimizers cannot
guarantee to find the global minimum of the negative log-likelihood. Instead,
it has been suggested to use Markov Chain Monte Carlo (MCMC) techniques for
the minimization (Korsmeier & Cuoco, 2016; Mertsch et al., 2021). While
computationally rather expensive, MCMC samplers are much more robust and less
prone to getting stuck in local minima. At the same time, once the MCMC chain
has converged, the ensemble of samples can be used as an estimate of the
parameter credible intervals.
We have employed an affine-invariant method (Goodman & Weare, 2010) as
implemented in the emcee package (Foreman-Mackey et al., 2013). The inherent
parallel nature allows for a significant speed-up compared to serial MCMC
samplers. Overall, 90000 MCMC samples were drawn, with a single DRAGON
calculation taking around 12 minutes on a single core. In order to speed up
the convergence towards the global minimum, we distinguish between parameters
for the DRAGON code (“slow parameters”) and other parameters, that is the
energy rescalings $\{\alpha_{k}\}$ and the force-field potentials (“fast
parameters”). The “slow” parameters are sampled by the MCMC method while the
“fast” parameters are profiled over after each DRAGON run. For all further
calculations, including the estimates of the uncertainties on the GCR
observables shown below, we use the final 6000 samples drawn after convergence
of the MCMC chain.
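The fast/slow split can be illustrated with a toy example in which a single linear normalization plays the role of a fast parameter and is profiled out in closed form; this is a conceptual sketch, not our actual pipeline (which profiles the energy rescalings and force-field potentials after each DRAGON run and samples the slow parameters with emcee).

```python
# Toy illustration of the fast/slow parameter split: for each slow-parameter
# point, the fast parameter (here one linear normalization alpha) is profiled
# out analytically before the likelihood is returned to the sampler.

def profiled_chi2(slow_model, data, sigma):
    """Profile a linear normalization alpha out of
    chi2(alpha) = sum ((d_i - alpha * m_i) / sigma_i)^2
    and return (chi2_min, alpha_hat)."""
    num = sum(d * m / s**2 for d, m, s in zip(data, slow_model, sigma))
    den = sum(m * m / s**2 for m, s in zip(slow_model, sigma))
    alpha_hat = num / den          # closed-form least-squares profile
    chi2_min = sum(((d - alpha_hat * m) / s) ** 2
                   for d, m, s in zip(data, slow_model, sigma))
    return chi2_min, alpha_hat

# An MCMC sampler then only explores the slow parameters; e.g. in emcee the
# log-probability function would simply return -0.5 * chi2_min.
data  = [2.0, 4.1, 5.9]
model = [1.0, 2.0, 3.0]    # prediction from one slow-parameter point
sigma = [0.1, 0.1, 0.1]
chi2_min, alpha_hat = profiled_chi2(model, data, sigma)
```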
## 3 Results
### 3.1 Galactic Cosmic Ray Intensities
Figure 2: Corner plot of the marginalized distributions of the “slow”
parameters of our model. The values of the break rigidities
$\mathcal{R}_{i(i+1)}$ are given in GV. The dark blue regions are the
estimated $68\,\%$ credible intervals of the 2D marginalized
distributions. Similarly, the $68\,\%$ credible intervals in the 1D
marginalized distributions are indicated by the dashed blue lines. The light
blue regions represent the estimated $95\,\%$ credible intervals of the 2D
marginalized distributions. The orange markers indicate the median value for
each parameter, which we adopt as our best-fit parameter value. Figure 3:
Corner plot (cont’d from Figure 2).
Figure 4: Corner plot (cont’d from Figs. 2 and 3).
Figure 5: Corner plot (cont’d from Figs. 2, 3 and 4). The values of the break
energies $E^{e-}_{i(i+1)}$ are given in GeV.
Figures 2, 3, 4 and 5 show (different parts of) the corner plot, that is the
collection of the one-dimensional marginal distributions of parameters and of
two-dimensional distributions of pairs of parameters. We have made use of the
corner package (Foreman-Mackey, 2016) adopting a value of $1$ for the smooth
and smooth1d keywords. This is made necessary by the high dimensionality of
our parameter space. Here, we have chosen to show the distributions for the
“slow” parameters of the MCMC scan only, that is those parameters which are
input parameters for the GCR propagation code. We remind the reader that for
each set of those “slow” parameters, we have determined the “fast” parameters,
that is the remaining ones, through profiling, meaning the likelihood is
maximized with respect to the “fast” parameters only while the “slow”
parameters are kept fixed.
While most of the parameters are uncorrelated, some of the correlations and
anti-correlations that do exist are noteworthy:
* •
The best-known anti-correlations among GCR parameters are those between the
source spectral indices and the spectral indices of the diffusion
coefficient. These are most visible in the rigidity range where the data are
most constraining. Indeed, the spectral index $\delta_{2}$ of the diffusion
coefficient for $\mathcal{R}$ between $\mathcal{R}_{12}$ and
$\mathcal{R}_{23}$ is anti-correlated with the source spectral indices of
various species, that is $\gamma^{\text{p}}$, $\gamma^{\text{He}}$ and
$\gamma_{1}^{\text{extra}}$, see Figure 3.
* •
These anti-correlations between source spectral indices and diffusion
coefficient spectral indices also induce correlations between different source
spectral indices, see for instance the correlation of $\gamma^{\text{p}}$ and
$\gamma^{\text{He}}$ in Figure 4.
* •
Another apparent correlation is the one between the normalization $D_{0}$,
defined as the value of the diffusion coefficient at the break rigidity
$\mathcal{R}_{12}$, and this break rigidity, see Figure 2.
* •
The spectral indices of the diffusion coefficient above and below a break can
be correlated or anti-correlated with the break rigidity. For instance, the
break at $\mathcal{R}_{23}$ is a softening of the diffusion coefficient, that
is $(\delta_{3}-\delta_{2})<0$. Increasing $\mathcal{R}_{23}$ can thus be
compensated to a certain degree by making the spectral index above,
$\delta_{3}$, even smaller. This explains the anti-correlation between
$\delta_{3}$ and $\mathcal{R}_{23}$, seen in Figure 2.
* •
The correlation between $\delta_{3}$ and $\mathcal{R}_{34}$ instead, see
Figure 2, is due to the fact that the data would prefer a smaller
$\mathcal{R}_{34}$ were $\delta_{3}$ chosen to be smaller.
In Table 1, we list the best-fit values of the various “slow” and “fast”
parameters as well as their $68\,\%$ credible intervals (here we employ the
Bayesian terminology even though, strictly speaking, it only applies to
unbiased samples from the posterior distribution). We have found the best-fit
values to coincide with the maxima and medians of the marginal distributions.
For the “slow” parameters, we have determined the edges of the credible
intervals as the $16\,\%$ and $84\,\%$ quantiles of the marginal
distributions. The edges of the credible intervals for the “fast” parameters
are similarly calculated as the $16\,\%$ and $84\,\%$ quantiles of their
distribution over all samples.
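The interval construction above takes only a few lines of numpy; the sample array below is a synthetic stand-in for the post-convergence MCMC samples of a single parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for the 6000 post-convergence samples of one parameter.
samples = rng.normal(2.383, 0.009, 6000)

best_fit = np.median(samples)                 # adopted best-fit value
lo, hi = np.quantile(samples, [0.16, 0.84])   # edges of the 68% interval
minus, plus = best_fit - lo, hi - best_fit
# Quoted as best_fit^{+plus}_{-minus}, e.g. 2.383^{+0.009}_{-0.009}.
```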
Table 1: Free parameters and their best-fit values.
Source parameters
---
$\gamma^{\mathrm{p}}$ | $={2.383}^{+0.008}_{-0.009}$ | Proton source spectral index
$\gamma^{\mathrm{He}}$ | $={2.324}^{+0.006}_{-0.007}$ | Helium source spectral index
$\gamma^{\mathrm{C}}$ | $={2.339}^{+0.006}_{-0.006}$ | Carbon source spectral index
$N_{\mathrm{He}}$ | $={87520}^{+1170}_{-1200}$ | Helium source abundance for a fixed proton abundance of $N_{\mathrm{p}}=1.06\times 10^{6}$
$N_{\mathrm{C}}$ | $={3101}^{+69}_{-80}$ | Carbon source abundance for a fixed proton abundance of $N_{\mathrm{p}}=1.06\times 10^{6}$
$\gamma^{e^{-}}_{1}$ | $={2.359}^{+0.065}_{-0.087}$ | Electron source spectral index below $E^{e-}_{12}$
$\log_{10}\left[E^{e-}_{12}/\text{GeV}\right]$ | $={0.680}^{+0.025}_{-0.018}$ | Energy of first break in electron source spectrum
$\gamma^{e^{-}}_{2}$ | $={2.869}^{+0.018}_{-0.020}$ | Electron source spectral index between $E^{e-}_{12}$ and $E^{e-}_{23}$
$\log_{10}\left[E^{e-}_{23}/\text{GeV}\right]$ | $={1.663}^{+0.026}_{-0.024}$ | Energy of second break in electron source spectrum
$\gamma^{e^{-}}_{3}$ | $={2.545}^{+0.016}_{-0.013}$ | Electron source spectral index above $E^{e-}_{23}$
$\gamma^{\mathrm{extra}}_{1}$ | $={2.386}^{+0.019}_{-0.026}$ | Spectral index of extra lepton component below $E^{\text{extra}}_{12}=50\,\mathrm{GeV}$
$\gamma^{\mathrm{extra}}_{2}$ | $=1.520^{+0.021}_{-0.018}$ | Spectral index of extra lepton component above $E^{\text{extra}}_{12}=50\,\mathrm{GeV}$
Transport parameters
$D_{0}$ | $={5.18}^{+0.36}_{-0.37}$ | Normalization of diffusion coefficient in $10^{28}\,\mathrm{cm}^{2}\,\mathrm{s}^{-1}$
$\delta_{1}$ | $={0.0116}^{+0.0019}_{-0.0019}$ | Spectral index of diffusion coefficient below $\mathcal{R}_{12}$
$\log_{10}\left[\mathcal{R}_{12}/\text{GV}\right]$ | $={0.711}^{+0.013}_{-0.015}$ | First break rigidity
$s_{12}$ | $={0.0630}^{+0.0098}_{-0.0072}$ | Softness of break at $\mathcal{R}_{12}$
$\delta_{2}$ | $={0.566}^{+0.012}_{-0.010}$ | Spectral index of diffusion coefficient between $\mathcal{R}_{12}$ and $\mathcal{R}_{23}$
$\log_{10}\left[\mathcal{R}_{23}/\text{GV}\right]$ | $={2.571}^{+0.073}_{-0.069}$ | Second break rigidity
$s_{23}$ | $={0.75}^{+0.10}_{-0.11}$ | Softness of break at $\mathcal{R}_{23}$
$\delta_{3}$ | $={0.159}^{+0.036}_{-0.044}$ | Spectral index of diffusion coefficient between $\mathcal{R}_{23}$ and $\mathcal{R}_{34}$
$\log_{10}\left[\mathcal{R}_{34}/\text{GV}\right]$ | $={4.23}^{+0.27}_{-0.19}$ | Third break rigidity
$s_{34}$ | $={0.167}^{+0.050}_{-0.052}$ | Softness of break at $\mathcal{R}_{34}$
$\delta_{4}$ | $={0.453}^{+0.141}_{-0.070}$ | Spectral index of diffusion coefficient between $\mathcal{R}_{34}$ and $\mathcal{R}_{45}$
$\log_{10}\left[\mathcal{R}_{45}/\text{GV}\right]$ | $={5.89}^{+0.16}_{-0.11}$ | Fourth break rigidity
$s_{45}$ | $={0.022}^{+0.11}_{-0.07}$ | Softness of break at $\mathcal{R}_{45}$
$\delta_{5}$ | $={1.050}^{+0.063}_{-0.046}$ | Spectral index of diffusion coefficient above $\mathcal{R}_{45}$
Solar modulation parameters
$\phi_{p}$ | $=0.781^{+0.029}_{-0.036}$ | Fisk potential for protons in GV
$\phi_{e^{+}}$ | $=0.610^{+0.014}_{-0.017}$ | Fisk potential for positrons in GV
$\phi_{e^{-}}$ | $=1.039^{+0.031}_{-0.037}$ | Fisk potential for electrons in GV
$\phi_{\mathrm{nuc}}$ | $=0.795^{+0.027}_{-0.043}$ | Fisk potential for nuclei in GV
Energy scale shift parameters
$\alpha_{\mathrm{AMS-02}}$ | $=1.021^{+0.023}_{-0.022}$ | Energy scale shift for AMS-02
$\alpha_{\mathrm{DAMPE}}$ | $=1.037^{+0.025}_{-0.022}$ | Energy scale shift for DAMPE
$\alpha_{\mathrm{KASCADE}}$ | $=0.880^{+0.132}_{-0.092}$ | Energy scale shift for KASCADE
$\alpha_{\mathrm{IceTop}}$ | $=0.584^{+0.060}_{-0.084}$ | Energy scale shift for IceTop
Finally, our best-fit model predictions for the proton, helium, electron,
positron and carbon intensities as well as boron-to-carbon ratio are shown in
Figure 6. The proton and helium intensities are shown as a function of
$E_{\text{kin}}/n$ as this is the relevant quantity for the production of
diffuse emission. For the helium intensities, transforming the experimental
data to match these units requires an assumption about the
${}^{3}\text{He}/^{4}\text{He}$ ratio. We use the AMS-02 result
${}^{3}\text{He}/^{4}\text{He}\;(R)=0.1476\left(R/4\;\mathrm{GV}\right)^{-0.294}$
provided in Aguilar et al. (2021) and extrapolate this to higher energies. We
emphasize that this transformation is relevant only for the illustration in
Figure 6 and that all datasets are fitted in their respective measured units.
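The unit transformation for helium relies only on the quoted AMS-02 ratio; a minimal sketch of the parametrization (the function names are ours, not from any published code):

```python
def he3_he4_ratio(R_GV):
    """AMS-02 parametrization of the 3He/4He ratio as a function of
    rigidity R in GV (Aguilar et al. 2021), extrapolated to higher
    energies as done in the text."""
    return 0.1476 * (R_GV / 4.0) ** -0.294

def he4_fraction(R_GV):
    """Fraction of helium nuclei that are 4He at rigidity R."""
    r = he3_he4_ratio(R_GV)
    return 1.0 / (1.0 + r)
```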
Figure 6: Best-fit spectra for various GCR primaries and secondaries: protons (top left), helium (top right), electrons (middle left), positrons (middle right), boron-to-carbon ratio (bottom left) and carbon (bottom right). The top panels show the GCR intensities, with the solid black line indicating the best-fit model and the grey and light orange bands showing the $68\,\%$ and $95\,\%$ uncertainty intervals. We have also overplotted the observations by AMS-02 (Aguilar et al., 2015a, 2017, 2019b, 2019a, 2018), DAMPE (An et al., 2019; Alemanno et al., 2021), IceTop (Aartsen et al., 2019b) and KASCADE (Antoni et al., 2005). The lower panels are pull plots.
Table 2: $\chi^{2}$ of fits to local cosmic ray data.
Experiment | Dataset | # Data points | $\chi^{2}$
---|---|---|---
AMS-02 | Protons | 72 | 22.0
AMS-02 | Helium | 68 | 14.8
AMS-02 | Carbon | 68 | 32.2
AMS-02 | B/C | 67 | 54.1
AMS-02 | Electrons | 72 | 32.8
AMS-02 | Positrons | 72 | 131.2
DAMPE | Protons | 17 | 8.2
DAMPE | Helium | 23 | 5.9
KASCADE | Protons | 20 | 31.7
KASCADE | Helium | 14 | 94.4
IceTop | Protons | 19 | 52.3
IceTop | Helium | 13 | 39.7
The best-fit model (black line) reproduces the GCR data well, with an overall
satisfactory goodness of fit. We have listed the $\chi^{2}$ values for the
individual observables in Table 2. The pull distributions highlight some
systematic deviations, for instance in the boron-to-carbon ratio below a few
GV. This is due to our restriction to a pure diffusion model; allowing for
advection or reacceleration would lead to a better fit at GV rigidities
(Heinbach & Simon, 1995).
The $68\,\%$ and $95\,\%$ uncertainty bands are shown in grey and light
orange, respectively. They are narrow where the data are sufficiently
constraining and wider where the data are less constraining. Examples of the
latter are the proton and helium spectra beyond the energies where direct
observations are available, that is beyond a few hundred TeV/n. Despite the
energy-rescaling parameters that were allowed to float freely, there are still
some discrepancies between IceTop and KASCADE measurements. As the diffuse
neutrino flux is mostly produced by protons and helium, this will be the
dominant uncertainty from the GCR fit.
### 3.2 Diffuse Gamma-Ray Intensities
In this section, we present the gamma-ray intensities predicted by the CRINGE
model. As discussed above, the diffuse predictions depend on a number of
inputs beyond the parameters of the GCR models that we have fitted to local
observations. These are the spatial distribution of GCR sources, the spatial
distribution of atomic and molecular hydrogen as well as the cross-sections
for the production of gamma-rays and neutrinos. In addition, the gamma-ray
intensities depend on the choice of the ISRF. Given our ignorance of the true
source distribution, gas distribution, cross-sections and ISRF, the choice
induces another source of uncertainty beyond the uncertainty from the GCR fit.
In the following, we have chosen a combination of these inputs as a default
model. In particular, we have adopted the source distribution by Ferrière
(2001), the GALPROP galactic gas maps as well as the AAfrag production cross-
sections. The fiducial Inverse Compton flux is calculated assuming the GALPROP
ISRF model.
We refer to this choice of parameters together with the best-fit parameters of
the GCR model as the fiducial model (available at
https://doi.org/10.5281/zenodo.7373010). For the different sources of
uncertainties, we have computed the respective standard deviations separately.
For the uncertainty stemming from a different choice of GCR parameters, we
have determined half of the central $68\,\%$ range of the posterior
distribution of intensities. For the other sources of uncertainties, we have
fixed the GCR parameters to their best-fit values and computed the standard
deviation from the set of diffuse fluxes obtained for different choices of
the input, as listed in Secs. 2.1.2, 2.2, 2.3 and 2.2.3. In the following
figures, we have stacked these uncertainties into uncertainty bands.
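The two kinds of uncertainty described above can be sketched as follows. The arrays are synthetic stand-ins (posterior intensity samples and fluxes for alternative input choices), and reading "stacked" as bands drawn cumulatively around the fiducial model is our assumption about the plotting convention.

```python
import numpy as np

rng = np.random.default_rng(2)
n_energy = 50

# Stand-in posterior sample of intensities: (n_samples, n_energy).
posterior_fluxes = rng.lognormal(0.0, 0.1, (2000, n_energy))
# Stand-in fluxes for alternative input choices (gas maps, cross-sections,
# ...), computed at the best-fit GCR parameters: (n_choices, n_energy).
alt_fluxes = rng.lognormal(0.0, 0.05, (4, n_energy))

# GCR-fit uncertainty: half the central 68% range of the posterior.
q16, q84 = np.quantile(posterior_fluxes, [0.16, 0.84], axis=0)
sigma_gcr = 0.5 * (q84 - q16)

# Input-choice uncertainty: standard deviation over the alternatives.
sigma_inputs = np.std(alt_fluxes, axis=0)

# Stack the bands around the fiducial model, one outside the other.
fiducial = np.median(posterior_fluxes, axis=0)
band_inner = (fiducial - sigma_gcr, fiducial + sigma_gcr)
band_outer = (fiducial - sigma_gcr - sigma_inputs,
              fiducial + sigma_gcr + sigma_inputs)
```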
The fiducial model’s intensity as well as the uncertainties around this
intensity depend on direction in the sky. For gamma-ray intensities, we have
adopted two sky windows for which observational data have been presented at
TeV and PeV energies. In Figure 7, we show the prediction of our model for the
diffuse gamma-ray fluxes in the windows $|b|<5^{\circ}$,
$25^{\circ}<l<100^{\circ}$ and $|b|<5^{\circ}$, $50^{\circ}<l<200^{\circ}$.
Uncertainties due to the GCR parameters, the source distribution, the gas maps
and the cross-sections are indicated by red, yellow, blue and green bands,
respectively. Additionally, the uncertainty of the Inverse Compton intensity
stemming from the uncertainty of the ISRF is indicated by the purple band. We
also take into account the expected intensity from unresolved sources
following the model by Vecchiotti et al. (2022). The uncertainties related to
the flux threshold varying between $0.01$ and $0.1\,\Phi_{\text{Crab}}$ are
shown by the orange bands. The default intensity of unresolved sources added
to our fiducial diffuse model is the geometric mean of the intensities
corresponding to the upper and lower end of that range.
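The default choice above is simply the geometric mean of the two threshold cases; with illustrative (not fitted) intensity values:

```python
import math

# Illustrative unresolved-source intensities at the two flux thresholds,
# 0.01 and 0.1 Phi_Crab (placeholder numbers, not from the paper).
I_at_low_threshold = 4.0
I_at_high_threshold = 1.0

# Geometric mean, adopted as the default unresolved-source intensity.
I_default = math.sqrt(I_at_low_threshold * I_at_high_threshold)
```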
Figure 7: Gamma-ray intensities as a function of energy in the two windows
$|b|<5^{\circ}$, $25^{\circ}<l<100^{\circ}$ (top row) and $|b|<5^{\circ}$,
$50^{\circ}<l<200^{\circ}$ (bottom row). In the left column, we show our model
prediction for the best-fit GCR parameters, combined with the Ferrière (2001)
source distribution, the GALPROP galactic gas maps, the AAfrag production
cross-sections as well as the GALPROP ISRF. In the right column, we have also
added the Vecchiotti et al. (2022) model for the unresolved sources. The
various uncertainties are indicated by the shaded bands. We also compare with
the observations by Tibet AS$\gamma$+MD (Amenomori et al., 2021), ARGO-YBJ
(Bartoli et al., 2015a) and LHAASO (Zhao et al., 2021). The upper panels show
the absolute intensities, the lower panels are normalized to the fiducial
intensity.
Without the inclusion of unresolved sources (left column of Figure 7), our
model spectrum is close to $E^{-2.7}$ for gamma-ray energies between
$10\,\text{GeV}$ and tens of TeV. Beyond a few tens of TeV, the spectrum
softens due to the spectral breaks in the nuclear spectra at tens of TV and at
the knee around $1\,\text{PV}$, see the nuclear spectra in Figure 6. As far as
the uncertainties are concerned, below a TeV, the uncertainty from the gas
maps dominates, but the total uncertainty remains below $20\,\%$. Above a
TeV, the uncertainties from the GCR model and from the cross-sections grow;
individually they can be as large as $35\,\%$ and $20\,\%$, respectively. The
other uncertainties, related to the choice of gas maps and cosmic ray source
distribution, are independent of energy and remain below $10\,\%$ in all
regions of the sky. Even within these uncertainties, our model
without unresolved sources cannot account for the data by LHAASO (Zhao et al.,
2021) and Tibet AS$\gamma$+MD (Amenomori et al., 2021), in neither of the sky
windows.
The inclusion of the unresolved sources significantly enhances the intensities
overall and leads to a much harder, close to $E^{-2.3}$ spectrum for gamma-ray
energies between a few hundreds of GeV and a few hundreds of TeV. The right
column of Figure 7 shows that the gamma-ray intensities are now in much better
agreement with the data by LHAASO (Zhao et al., 2021) and Tibet AS$\gamma$+MD
(Amenomori et al., 2021), in both sky windows. However, the uncertainty in the
prediction of the intensity from unresolved sources is sizeable: Between $\sim
100\,\text{GeV}$ and a few PeV, it dominates the total uncertainty and can
be as large as a factor of $2$.
### 3.3 Diffuse Neutrino Intensities
Figure 8: Neutrino intensity as a function of neutrino energy for the inner
Galaxy window (top panels), outer Galaxy window (middle panels) and high-
latitude window (bottom panels). In the left column, only truly diffuse
emission is shown, in the right column, unresolved sources are included as
well. In each plot, we show our model prediction for the best-fit GCR
parameters, combined with the Ferrière (2001) source distribution, the GALPROP
galactic gas maps as well as the AAfrag production cross-sections. The
Fermi-$\pi^{0}$ model is indicated by the dashed grey line, the KRA$\gamma$-5
and KRA$\gamma$-50 models by the solid and dotted grey lines. The coloured
bands in the top panel indicate the uncertainties due to the GCR parameters
(red band), the source distribution (yellow band), the gas maps (blue bands)
and the cross-sections (green band). In the lower panels, the uncertainty
bands are presented after normalization to the fiducial model.
For neutrinos, we have chosen three regions of the sky over which the diffuse
intensities are averaged: An inner Galaxy window ($|b|<8^{\circ}$,
$|l|<80^{\circ}$), an outer Galaxy window ($|b|<8^{\circ}$, $|l|>80^{\circ}$)
and a high-latitude window ($|b|>8^{\circ}$). These are canonical choices in
models of galactic diffuse emission. In Figure 8, the fiducial model
intensities are again shown by the black, solid lines and the uncertainties
are indicated by bands of different colours: GCR parameters (red), source
distributions (yellow), gas maps (blue) and cross-sections (green). In the
lower panel, the uncertainty bands are shown after normalization to the
fiducial model intensity. We also indicate the predictions of other models,
that is the Fermi-$\pi^{0}$ model (Ackermann et al., 2012, dashed grey line),
the KRA$\gamma$-5 model (Gaggero et al., 2015b, dotted grey line) as well as
the KRA$\gamma$-50 model (Gaggero et al., 2015b, solid grey line). All
neutrino intensities shown are per-flavor intensities. We derive these from
the all-flavor intensity under the assumption that neutrino oscillations lead
to a $1:1:1$ flavor ratio at Earth (Gaisser et al., 2016). For the inner
Galaxy window and without unresolved sources (top left panel of Figure 8), our
model spectrum is close to $E^{-2.7}$ for neutrino energies between
$10\,\text{GeV}$ and tens of TeV. Beyond a TeV, the spectrum softens due to
the spectral breaks in the nuclear spectra. Below a TeV, the uncertainty is
dominated by the cross-section uncertainty which can be as large as $20\,\%$.
At $\sim 1\,\text{TeV}$, the cross-section uncertainties are smallest, and
grow again for higher energies. Also the uncertainties from the GCR model grow
significantly beyond a TeV and reach about $40\,\%$ at $10\,\text{PeV}$. The
other uncertainties, related to the choice of gas maps and cosmic ray source
distribution, are again independent of energy and remain below $10\,\%$ in
all regions of the sky.
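The window definitions and the flavor conversion used above are simple enough to state as code (a sketch; the function names are ours):

```python
def sky_window(l_deg, b_deg):
    """Classify a direction into the three averaging windows used in the
    text. Longitude l is taken in (-180, 180] degrees."""
    if abs(b_deg) > 8.0:
        return "high-latitude"
    return "inner" if abs(l_deg) < 80.0 else "outer"

def per_flavor(all_flavor_intensity):
    """Per-flavor neutrino intensity, assuming oscillations lead to a
    1:1:1 flavor ratio at Earth (Gaisser et al. 2016)."""
    return all_flavor_intensity / 3.0
```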
Comparing our spectrum in the inner Galaxy window with the largely
featureless, $E^{-2.7}$ spectrum of the Fermi-$\pi^{0}$ model, we find our
model to give slightly harder spectra below a few TeV, due to the break in the
nuclei spectra at $\sim 300\,\text{GV}$ that had not been considered in the
Fermi-$\pi^{0}$ model. Above tens of TeV, though, the Fermi-$\pi^{0}$ model is
clearly harder due to the unbroken power law extrapolation of the spectra from
GeV to PeV energies. We judge that extrapolation to be poorly justified in
light of data at the knee and recent data just below the knee. Below a few
hundred TeV and a few PeV, the predictions from the KRA$\gamma$-5 and
KRA$\gamma$-50 models, respectively, are significantly harder than ours. This
is of course in part due to the harder spectral index exhibited by these
models in the inner Galaxy, but also due to the choice of the cross-sections
from Kamae et al. (2006), which as shown in Figure 11, leads to systematically
harder spectra than the AAfrag parametrization adopted in our fiducial model.
Already at $100\,\text{TeV}$, both models overpredict our neutrino intensity
by roughly an order of magnitude. At a few PeV, this difference has grown to
almost two orders of magnitude in the case of the KRA$\gamma$-50 model. Note
that spectra are generally softer above a few hundreds of TeV (a few PeV) for
KRA$\gamma$-5 (KRA$\gamma$-50) due to the assumed exponential cut-off.
The inclusion of unresolved sources in the inner Galaxy window (top right
panel of Figure 8) leads to a much enhanced neutrino intensity below $\sim
1\,\text{PeV}$. Our prediction is much closer to the prediction from the
KRA$\gamma$-5 model, even though the origin of the hard spectrum is very
different.
Figure 9: Neutrino intensity as a function of galactic longitude for a
neutrino energy of $3\,\text{TeV}$. We show our model prediction for the best-
fit GCR parameters, combined with the Ferrière (2001) source distribution, the
GALPROP galactic gas maps as well as the AAfrag production cross-sections. The
Fermi-$\pi^{0}$ model is indicated by the dashed grey line, the KRA$\gamma$-5
and KRA$\gamma$-50 models by the solid and dotted grey lines. In the lower
panel, the uncertainty bands are presented after normalization to the fiducial
model.
For the outer Galaxy window (middle row of Figure 8), the intensities are
overall smaller by about a factor two to three, but the spectral shapes are
rather similar. This is to be expected given that we assume no spatial
variation of the diffusion coefficient or the source spectra throughout the
Galaxy. Noticeable is, however, an increased uncertainty from the source
distribution (compare the yellow bands in the upper and middle rows of Figure
8). At first this might seem surprising, given that in absolute terms, the
source distributions differ less in the outer Galaxy than in the inner Galaxy,
see Figure 1. However, this can be explained by the fact that, for the same
local source density, the distributions featuring a lower source density in
the inner Galaxy lead to a lower cosmic ray flux at Earth. Since we
normalize the local flux of our models such that local measurements are
reproduced, we correspondingly scale up the fluxes for lower central source
densities. This decreases the resulting uncertainty in the galactic center
region and increases it towards larger galactocentric radii.
For the case without unresolved sources (middle left panel of Figure 8), the
comparison with the Fermi-$\pi^{0}$ model in this sky window shows that both
models are compatible between $10$ and $100\,\text{GeV}$, but at larger
energies show spectral differences similar to those in the inner Galaxy
window. The KRA$\gamma$ models are at the lower envelope of the uncertainty
band below a few TeV, but start exceeding the upper end of our uncertainty
band above $\sim 10\,\text{TeV}$. Again, the disagreement can be up to two
orders of magnitude at $1\,\text{PeV}$.
For the case where unresolved sources are taken into account (middle right
panel of Figure 8), the intensities are again significantly harder. However,
in this case the prediction is larger even than that of the KRA$\gamma$-5
model for all energies and larger than that of the KRA$\gamma$-50 model for
neutrino energies smaller than a few hundred TeV.
The situation is somewhat similar for high galactic latitudes, as shown in the
lower row of Figure 8, albeit at an overall normalization reduced by a factor
$\sim 6$ with respect to the outer Galaxy window. The uncertainties from the
source distribution are in between those for the inner and outer Galaxy
windows, as could be expected.
In Figure 9, we also show profiles of the neutrino intensity in galactic
longitude at an energy of $3\,\text{TeV}$. We have chosen an energy at which
all sources of uncertainties contribute $\sim 10\,\%$ to the overall
uncertainty. The prediction from our fiducial model with the Ferrière (2001)
source distribution, the GALPROP gas maps and the AAfrag cross-sections is
again shown by the black line and uncertainties around that are denoted by the
bands in the same fashion as in Figure 8. We also compare with the
longitudinal profiles of the Fermi-$\pi^{0}$ and the KRA$\gamma$ models. The
linear ordinate axis highlights the large differences between those models and
our model, in particular for the inner Galaxy. While the uncertainty from the
GCR model and the cross-section uncertainties are independent of longitude,
the uncertainties from the source distribution and gas maps, of course, depend
on longitude. Specifically, the CR source distribution uncertainty is largest
in the outer Galaxy, has a minimum for $|l|\simeq 45^{\circ}$, where on
average the emission is produced at galactocentric radii similar to the solar
radius, and increases again towards the galactic center. The uncertainties
from the gas maps instead are largest towards $|l|\simeq 45^{\circ}$. This can
be interpreted as a generic model uncertainty of the map reconstruction:
Towards this direction the gradients in the velocity field are rather small
such that the distance uncertainty is largest.
## 4 Discussion
Figure 10: Relative count differences to P8R3 ULTRACLEANVETO Fermi-LAT data
for our fiducial model combining the best-fit GCR parameters with the
Ferrière (2001) source distribution, the GALPROP galactic gas maps as well as
the AAfrag production cross-sections. Also shown are the relative count
difference for a model that has all parameters equal to this fiducial model
apart from the spectral indices of the diffusion coefficient, which depend on
galactocentric radius as described in eq. (17) with
$A_{i}=0.035\;\mathrm{kpc}^{-1}$. The $B_{i}$ are chosen such that
$\delta_{i}(r=r_{\odot})$ agree with the values from Table 1.
In this paper, we have predicted the high-energy diffuse neutrino intensity
from the Galaxy, based on a model of GCRs that fits measured cosmic ray data
between GV and $100\,\text{PV}$ rigidities. We have also predicted diffuse
gamma-ray fluxes in the TeV to PeV energy range and compared to results from
ARGO-YBJ, Tibet AS-$\gamma$+MD as well as LHAASO. It is, however, natural to
ask whether our models agree also with GeV gamma-rays as for instance measured
by Fermi-LAT (Atwood et al., 2009a). Such a comparison is of course made
difficult by the various anomalies discussed above: the hardening of diffusive
GeV emission towards the inner Galaxy, the higher than predicted intensities
in the inner Galaxy as well as the galactic center excess. Admittedly, the
model framework considered here does not alleviate any of these anomalies. For
instance, we show in Figure 10, the relative differences between the measured
Fermi-LAT counts (we are grateful to Markus Ackermann for providing us with
the counts maps and instrument response functions) and the prediction of our
fiducial model for three different sky regions also shown in Figure 8. In the
inner Galaxy window (top panel), it can be seen that while the model is in
agreement with the GeV gamma-ray data within uncertainties except at energies
above $40\,\text{GeV}$, the observed gamma-ray fluxes are harder than the
model fluxes. In addition to the central Galaxy, there is a significant
discrepancy between GeV gamma-ray data and our model predictions in the outer
Galaxy window, where we underproduce the data by some $20\,\%$. We note that
this issue has received relatively little attention so far and therefore awaits
further clarification. Finally, we find our model to largely reproduce the
data at high latitudes, unlike some of the previous studies (Orlando, 2018).
We have also explored the framework of KRA$\gamma$-like models which aim for
solving the puzzle of the observed spectral hardening in Fermi-LAT data
towards the galactic center by modifying the diffusion constant in the inner
Galaxy as
$\delta_{i}(r)=\begin{cases}A_{i}\,r+B_{i}&r<11\;\mathrm{kpc}\\ A_{i}\times 11\;\mathrm{kpc}+B_{i}&r\geq 11\;\mathrm{kpc}\end{cases},$ (17)
leading to harder spectra from that region. However, we have found that such
modifications can only partly solve the hardening issue. This is illustrated
in the upper panel of Figure 10. There, we have superimposed the prediction
for a KRA$\gamma$-like model that is equal to our fiducial model apart from
the diffusion coefficient, which depends on galactocentric radius as described
in eq. (17) with $A_{i}=0.035\;\mathrm{kpc}^{-1}$. The $B_{i}$ are chosen such
that $\delta_{i}(r=r_{\odot})$ agree with the values from Table 1. The
residuals of this model show a trend rather similar to our fiducial model with
predicted spectra that feature a larger flux normalization but remain too
soft. This is evidence that a significant part of the enhancement and spectral
hardening of the KRA$\gamma$ models with respect to the Fermi-$\pi^{0}$ model
(which assumes homogeneous diffusion) is only due to the hardening break in
the diffusion coefficient at around $300\,\text{GV}$.
In the outer Galaxy window, the KRA$\gamma$-like model even features slightly
larger residuals with the Fermi-LAT data than our fiducial model. At high
latitudes, where local emission dominates, the two models coincide as
expected.
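Eq. (17) is straightforward to implement. Here $B_i$ is an illustrative placeholder: in the paper it is fixed so that $\delta_i(r_\odot)$ matches the values in Table 1.

```python
def delta_i(r_kpc, A_i=0.035, B_i=0.40):
    """Radially dependent spectral index of the diffusion coefficient,
    eq. (17): linear in galactocentric radius r up to 11 kpc, then
    constant. A_i is in kpc^-1; B_i here is an illustrative value,
    not taken from the fit."""
    return A_i * min(r_kpc, 11.0) + B_i
```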
Another interesting outcome of our global fit of the GCR model is the spectral
position of the knee. The break in the predicted proton spectrum, for
instance, occurs at a few hundred TeV, which is somewhat lower than traditionally
considered. Note that this energy might in fact be compatible with claims by
ARGO-YBJ Bartoli et al. (2015b). Returning to Figure 6, we note that the low
value for the break rigidity $\mathcal{R}_{45}$ has been driven by the KASCADE
helium data. In a sense, the low $\mathcal{R}_{45}$ is not surprising, but a
consequence of the 300 GV break which leads to the dominance of helium even
before the proton knee (O’C. Drury, 2018). The only way, in fact, to get a
more pronounced peak at higher rigidities would be to change the energy scale
corrections with respect to what our fit determined. This could be possible if
there was another hardening break between 10 TV and 1 PeV. We note that there
have been indications of such a break from DAMPE (Stolpovskiy, 2022), but we
have not yet included these preliminary data.
## 5 Summary and conclusion
We have presented CRINGE, a new model for the diffuse emission of high-energy
gamma-rays and neutrinos from the Galaxy. For the transport of GCRs, we have
adopted a simple diffusion model with homogeneous diffusion coefficient, but
we have allowed for a number of breaks in the rigidity-dependence. We have
determined the free parameters of our model by fitting to locally measured
spectra of proton, helium, electrons and positrons as well as to the boron-to-
carbon ratio. Adopting an MCMC method, we have determined the uncertainties of
the fitted parameters and provided uncertainty bands for the predicted GCR
intensities and the boron-to-carbon ratio. Our GCR model successfully
describes these data in the energy range between $1\,\text{GeV}$ and
$100\,\text{PeV}$.
Combining the best-fit GCR parameters with a fiducial choice for the source
distribution, gas maps, cross-sections and photon backgrounds, we have
computed the diffuse emission of high-energy gamma-rays and neutrinos at
energies between $10\,\text{GeV}$ and $10\,\text{PeV}$. We have also estimated
the uncertainties due to the possibility of alternative choices of inputs. In
addition, we have allowed for the presence of unresolved, pulsar-powered
sources of high-energy gamma-rays and neutrinos. Comparing our results with
the intensity of high-energy gamma-rays observed by ARGO-YBJ, Tibet
AS$\gamma$+MD and LHAASO, we have found very good agreement for the case that
unresolved sources are taken into account. For our neutrino predictions, we
have compared with the Fermi-$\pi^{0}$ and the KRA$\gamma$ models that have
previously been employed in experimental searches. Without the unresolved
sources, our predictions are usually lower than the KRA$\gamma$ models for
$E\gtrsim 1\,\text{TeV}$ and higher than the Fermi-$\pi^{0}$ model for
$E\lesssim 100\,\text{TeV}$. Taking the unresolved sources into account, our
predictions are mostly comparable to or even higher than the KRA$\gamma$
models.
Future studies of the galactic diffuse neutrino flux by IceCube and KM3NeT
will be able to employ our model predictions as spatio-spectral templates.
Such searches present an important test of the model of GCRs. The possible
observation of or bounds on the diffuse galactic flux of high-energy neutrinos
will present important constraints on models of acceleration and transport of
GCRs at TeV and PeV energies. In addition, we can hope to gain some insights
on the presence of unresolved hadronic sources.
We are grateful to Markus Ackermann for providing us with the Fermi-LAT counts
maps and instrument response functions. We also thank Mischa Breuhaus for
providing his implementation of the calculation of gamma-ray absorption in the
Milky Way. Daniele Gaggero, Pedro de la Torre Luque and Ottavio Fornieri are
gratefully acknowledged for help with the DRAGON code. We also thank Carmelo
Evoli for advice on the HERMES code. We acknowledge use of the HEALPix package
(Gorski et al., 2005). G.S. acknowledges membership in the International Max
Planck Research School for Astronomy and Cosmic Physics at the University of
Heidelberg (IMPRS-HD).
## References
* Aartsen et al. (2017a) Aartsen, M. G., et al. 2017a, Journal of Instrumentation, 12, doi: 10.1088/1748-0221/12/03/P03012
* Aartsen et al. (2017b) —. 2017b, ApJ, 849, 67, doi: 10.3847/1538-4357/aa8dfb
* Aartsen et al. (2019a) —. 2019a, ApJ, 886, 12, doi: 10.3847/1538-4357/ab4ae2
* Aartsen et al. (2019b) —. 2019b, Phys. Rev. D, 100, 082002, doi: 10.1103/PhysRevD.100.082002
* Abdo et al. (2009a) Abdo, A. A., et al. 2009a, ApJ, 706, L1, doi: 10.1088/0004-637X/706/1/L1
* Abdo et al. (2009b) —. 2009b, Science, 327, 1103, doi: 10.1126/science.1182787
* Abdo et al. (2010a) —. 2010a, ApJ, 718, 348, doi: 10.1088/0004-637X/718/1/348
* Abdo et al. (2010b) —. 2010b, ApJ, 722, 1303, doi: 10.1088/0004-637X/722/2/1303
* Abdollahi et al. (2019) Abdollahi, S., et al. 2019, ApJS, 247, 33, doi: 10.3847/1538-4365/ab6bcb
* Acero et al. (2016) Acero, F., et al. 2016, ApJS, 223, 26, doi: 10.3847/0067-0049/223/2/26
* Ackermann et al. (2012) Ackermann, M., et al. 2012, ApJ, 750, 3, doi: 10.1088/0004-637X/750/1/3
* Ackermann et al. (2014) —. 2014, ApJ, 793, 64, doi: 10.1088/0004-637X/793/1/64
* Ackermann et al. (2015) —. 2015, ApJ, 799, 86, doi: 10.1088/0004-637X/799/1/86
* Adrian-Martinez et al. (2016) Adrian-Martinez, S., et al. 2016, J. Phys. G, 43, 084001, doi: 10.1088/0954-3899/43/8/084001
* Adriani et al. (2011) Adriani, O., et al. 2011, Science, 332, 69, doi: 10.1126/science.1199172
* Adriani et al. (2019) —. 2019, Phys. Rev. Lett., 122, 181102, doi: 10.1103/PhysRevLett.122.181102
* Adriani et al. (2022) —. 2022, Journal of Instrumentation, 17, doi: 10.1088/1748-0221/17/08/P08014
* Ageron et al. (2011) Ageron, M., et al. 2011, Nuclear Instruments and Methods in Physics Research A, 656, 11, doi: 10.1016/j.nima.2011.06.103
* Aguilar et al. (2015a) Aguilar, M., et al. 2015a, Phys. Rev. Lett., 114, 171103, doi: 10.1103/PhysRevLett.114.171103
* Aguilar et al. (2015b) —. 2015b, Phys. Rev. Lett., 115, 211101, doi: 10.1103/PhysRevLett.115.211101
* Aguilar et al. (2017) —. 2017, Phys. Rev. Lett., 119, 251101, doi: 10.1103/PhysRevLett.119.251101
* Aguilar et al. (2018) —. 2018, Phys. Rev. Lett., 120, 021101, doi: 10.1103/PhysRevLett.120.021101
* Aguilar et al. (2019a) —. 2019a, Phys. Rev. Lett., 122, 041102, doi: 10.1103/PhysRevLett.122.041102
* Aguilar et al. (2019b) —. 2019b, Phys. Rev. Lett., 122, 101101, doi: 10.1103/PhysRevLett.122.101101
* Aguilar et al. (2021) —. 2021, Physics Reports, 894, 1, doi: 10.1016/j.physrep.2020.09.003
* Ahlers & Murase (2014) Ahlers, M., & Murase, K. 2014, Phys. Rev. D, 90, 023010, doi: 10.1103/PHYSREVD.90.023010
* Ahn et al. (2010) Ahn, H. S., et al. 2010, ApJ, 714, L89, doi: 10.1088/2041-8205/714/1/L89
* Albert et al. (2018) Albert, A., et al. 2018, ApJ, 868, L20, doi: 10.3847/2041-8213/aaeecf
* Alemanno et al. (2021) Alemanno, F., et al. 2021, Phys. Rev. Lett., 126, 201102, doi: 10.1103/PhysRevLett.126.201102
* Amenomori et al. (2021) Amenomori, M., et al. 2021, Phys. Rev. Lett., 126, 141101, doi: 10.1103/PhysRevLett.126.141101
* An et al. (2019) An, Q., et al. 2019, Sci. Adv., 5, doi: 10.1126/sciadv.aax3793
* Antoni et al. (2005) Antoni, T., et al. 2005, Astropart. Phys., 24, 1, doi: 10.1016/j.astropartphys.2005.04.001
* Apel et al. (2011) Apel, W. D., et al. 2011, Phys. Rev. Lett., 107, 171104, doi: 10.1103/PhysRevLett.107.171104
* Atwood et al. (2009a) Atwood, W. B., et al. 2009a, ApJ, 697, 1071, doi: 10.1088/0004-637X/697/2/1071
* Atwood et al. (2009b) Atwood, W. B., Abdo, A. A., Ackermann, M., et al. 2009b, The Astrophysical Journal, 697, 1071, doi: 10.1088/0004-637X/697/2/1071
* Avrorin et al. (2018) Avrorin, A. D., et al. 2018, EPJ Web Conf., 191, 01006, doi: 10.1051/epjconf/201819101006
* Bartoli et al. (2015a) Bartoli, B., et al. 2015a, ApJ, 806, doi: 10.1088/0004-637X/806/1/20
* Bartoli et al. (2015b) —. 2015b, Phys. Rev. D, 92, 092005, doi: 10.1103/PhysRevD.92.092005
* Ben Bekhti et al. (2016) Ben Bekhti, N., et al. 2016, A&A, 594, doi: 10.1051/0004-6361/201629178
* Berezinskii et al. (1990) Berezinskii, V. S., Bulanov, S. V., Dogiel, V. A., & Ptuskin, V. S. 1990, Astrophysics of cosmic rays (North Holland)
* Berkhuijsen et al. (1971) Berkhuijsen, E. M., Haslam, C. G. T., & Salter, C. J. 1971, A&A, 14, 252
* Blasi et al. (2012) Blasi, P., Amato, E., & Serpico, P. D. 2012, Phys. Rev. Lett., 109, 061101, doi: 10.1103/PhysRevLett.109.061101
* Blumenthal & Gould (1970) Blumenthal, G. R., & Gould, R. J. 1970, Reviews of Modern Physics, 42, 237, doi: 10.1103/RevModPhys.42.237
* Bolatto et al. (2013) Bolatto, A. D., Wolfire, M., & Leroy, A. K. 2013, ARA&A, 51, 207, doi: 10.1146/ANNUREV-ASTRO-082812-140944
* Breuhaus et al. (2022) Breuhaus, M., Hinton, J. A., Joshi, V., Reville, B., & Schoorlemmer, H. 2022, A&A, 661, doi: 10.1051/0004-6361/202141318
* Brogi & Kobayashi (2021) Brogi, P., & Kobayashi, K. 2021, PoS, ICRC2021, 101, doi: 10.22323/1.395.0101
* Bruel et al. (2018) Bruel, P., et al. 2018, arXiv, doi: 10.48550/arxiv.1810.11394
* Casandjian (2015) Casandjian, J.-M. 2015, ApJ, 806, 240, doi: 10.1088/0004-637X/806/2/240
* Casandjian & Grenier (2009) Casandjian, J.-M., & Grenier, I. 2009, arXiv, doi: 10.48550/arxiv.0912.3478
* Case & Bhattacharya (1998) Case, G. L., & Bhattacharya, D. 1998, ApJ, 504, 761, doi: 10.1086/306089
* Dame et al. (2001) Dame, T. M., Hartmann, D., & Thaddeus, P. 2001, The Astrophysical Journal, 547, 792, doi: 10.1086/318388
* de la Torre Luque et al. (2022) de la Torre Luque, P., Gaggero, D., Grasso, D., et al. 2022, arXiv, doi: 10.48550/arXiv.2203.15759
* Dembinski et al. (2018) Dembinski, H. P., et al. 2018, PoS, ICRC2017, 533, doi: 10.22323/1.301.0533
* Dobler et al. (2010) Dobler, G., Finkbeiner, D. P., Cholis, I., Slatyer, T. R., & Weiner, N. 2010, ApJ, 717, 825, doi: 10.1088/0004-637X/717/2/825
* Draine (2010) Draine, B. T. 2010, Physics of the Interstellar and Intergalactic Medium (Princeton University Press), doi: 10.1515/9781400839087
* Dundovic et al. (2021) Dundovic, A., Evoli, C., Gaggero, D., & Grasso, D. 2021, A&A, 653, doi: 10.1051/0004-6361/202140801
* Evoli et al. (2018) Evoli, C., Blasi, P., Morlino, G., & Aloisio, R. 2018, Phys. Rev. Lett., 121, 021102, doi: 10.1103/PhysRevLett.121.021102
* Evoli et al. (2008) Evoli, C., Gaggero, D., Grasso, D., & Maccione, L. 2008, J. Cosmology Astropart. Phys, 10, 018, doi: 10.1088/1475-7516/2008/10/018
* Evoli et al. (2020) Evoli, C., Morlino, G., Blasi, P., & Aloisio, R. 2020, Phys. Rev. D, 101, 023013, doi: 10.1103/PhysRevD.101.023013
* Evoli et al. (2017) Evoli, C., et al. 2017, J. Cosmology Astropart. Phys, 2017, doi: 10.1088/1475-7516/2017/02/015
* Fang & Murase (2021) Fang, K., & Murase, K. 2021, ApJ, 919, 93, doi: 10.3847/1538-4357/ac11f0
* Ferrière (1998) Ferrière, K. 1998, ApJ, 497, 759, doi: 10.1086/305469
* Ferrière (2001) —. 2001, Reviews of Modern Physics, 73, 1031, doi: 10.1103/RevModPhys.73.1031
* Ferrière et al. (2007) Ferrière, K., Gillard, W., & Jean, P. 2007, A&A, 467, 611, doi: 10.1051/0004-6361:20066992
* Fletcher et al. (1994) Fletcher, R. S., Gaisser, T. K., Lipari, P., & Stanev, T. 1994, Phys. Rev. D, 50, 5710, doi: 10.1103/PhysRevD.50.5710
* Foreman-Mackey (2016) Foreman-Mackey, D. 2016, The Journal of Open Source Software, 1, 24, doi: 10.21105/joss.00024
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306, doi: 10.1086/670067
* Fornieri et al. (2021) Fornieri, O., et al. 2021, Phys. Rev. D, 104, 103013, doi: 10.1103/PhysRevD.104.103013
* Gaggero et al. (2015a) Gaggero, D., Grasso, D., Marinelli, A., Urbano, A., & Valli, M. 2015a, ApJ, 815, L25, doi: 10.1088/2041-8205/815/2/L25
* Gaggero et al. (2015b) Gaggero, D., Urbano, A., Valli, M., & Ullio, P. 2015b, Phys. Rev. D, 91, 083012, doi: 10.1103/PhysRevD.91.083012
* Gaisser et al. (2016) Gaisser, T. K., Engel, R., & Resconi, E. 2016, Cosmic Rays and Particle Physics (Cambridge University Press)
* Genolini et al. (2017) Genolini, Y., Salati, P., Serpico, P., & Taillet, R. 2017, A&A, 600, A68, doi: 10.1051/0004-6361/201629903
* Génolini et al. (2017) Génolini, Y., et al. 2017, Phys. Rev. Lett., 119, 241101, doi: 10.1103/PhysRevLett.119.241101
* Ginzburg & Syrovatskii (1964) Ginzburg, V. L., & Syrovatskii, S. I. 1964, The Origin of Cosmic Rays (Pergamon Press, Macmillan Company)
* Gleeson & Axford (1968) Gleeson, L. J., & Axford, W. I. 1968, ApJ, 154, 1011, doi: 10.1086/149822
* Goodman & Weare (2010) Goodman, J., & Weare, J. 2010, Comm. App. Math. and Comp. Sci., 5, 65, doi: 10.2140/camcos.2010.5.65
* Gorski et al. (2005) Gorski, K. M., et al. 2005, ApJ, 622, 759, doi: 10.1086/427976
* Grenier et al. (2005) Grenier, I. A., Casandjian, J.-M., & Terrier, R. 2005, Science, 307, 1292, doi: 10.1126/SCIENCE.1106924
* Hahn (2016) Hahn, J. 2016, PoS, ICRC2015, 917, doi: 10.22323/1.236.0917
* Heinbach & Simon (1995) Heinbach, U., & Simon, M. 1995, ApJ, 441, 209, doi: 10.1086/175350
* Heyer & Dame (2015) Heyer, M. H., & Dame, T. 2015, ARA&A, 53, 583, doi: 10.1146/annurev-astro-082214-122324
* Hillas (1984) Hillas, A. M. 1984, ARA&A, 22, 425, doi: 10.1146/annurev.aa.22.090184.002233
* Jacobs et al. (2022) Jacobs, H., Mertsch, P., & Phan, V. H. M. 2022, J. Cosmology Astropart. Phys, 05, 024, doi: 10.1088/1475-7516/2022/05/024
* Jóhannesson et al. (2018) Jóhannesson, G., Porter, T. A., & Moskalenko, I. V. 2018, ApJ, 856, 45, doi: 10.3847/1538-4357/aab26e
* Joshi et al. (2014) Joshi, J. C., Winter, W., & Gupta, N. 2014, MNRAS, 439, 3414, doi: 10.1093/mnras/stu189
* Kachelriess et al. (2019) Kachelriess, M., Moskalenko, I. V., & Ostapchenko, S. 2019, Computer Physics Communications, 245, doi: 10.1016/j.cpc.2019.08.001
* Kafexhiu et al. (2014) Kafexhiu, E., Aharonian, F., Taylor, A. M., & Vila, G. S. 2014, Phys. Rev. D, 90, 123014, doi: 10.1103/PhysRevD.90.123014
* Kalberla et al. (2005) Kalberla, P. M., et al. 2005, A&A, 440, 775, doi: 10.1051/0004-6361:20041864
* Kamae et al. (2006) Kamae, T., Karlsson, N., Mizuno, T., Abe, T., & Koi, T. 2006, ApJ, 647, 692, doi: 10.1086/505189
* Kelner et al. (2006) Kelner, S. R., Aharonian, F. A., & Bugayov, V. V. 2006, Phys. Rev. D, 74, doi: 10.1103/PhysRevD.74.034018
* Koldobskiy et al. (2021) Koldobskiy, S., et al. 2021, Phys. Rev. D, 104, 123027, doi: 10.1103/PhysRevD.104.123027
* Korsmeier & Cuoco (2016) Korsmeier, M., & Cuoco, A. 2016, Phys. Rev. D, 94, 123019, doi: 10.1103/PhysRevD.94.123019
* Kulikov & Khristiansen (1959) Kulikov, G. V., & Khristiansen, G. B. 1959, J. Exp. Th. Phys., 35, 441
* Lipari & Vernetto (2018) Lipari, P., & Vernetto, S. 2018, Phys. Rev. D, 98, 043003, doi: 10.1103/PhysRevD.98.043003
* Liu & Yang (2022) Liu, B., & Yang, R. 2022, A&A, 659, A101, doi: 10.1051/0004-6361/202039759
* Lorimer et al. (2006) Lorimer, D. R., et al. 2006, MNRAS, 372, 777, doi: 10.1111/j.1365-2966.2006.10887.x
* Malkov & Moskalenko (2021) Malkov, M. A., & Moskalenko, I. V. 2021, ApJ, 911, 151, doi: 10.3847/1538-4357/abe855
* Marinos et al. (2022) Marinos, P. D., Rowell, G. P., Porter, T. A., & Jóhannesson, G. 2022, MNRAS, doi: 10.1093/mnras/stac3222
* Mertsch (2018) Mertsch, P. 2018, J. Cosmology Astropart. Phys, doi: 10.1088/1475-7516/2018/11/045
* Mertsch & Phan (2022) Mertsch, P., & Phan, V. H. M. 2022, arXiv, doi: 10.48550/arXiv.2202.02341
* Mertsch & Vittino (2021) Mertsch, P., & Vittino, A. 2021, A&A, 655, A64, doi: 10.1051/0004-6361/202141000
* Mertsch et al. (2021) Mertsch, P., Vittino, A., & Sarkar, S. 2021, Phys. Rev. D, 104, 103029, doi: 10.1103/physrevd.104.103029
* Mori (2009) Mori, M. 2009, Astropart. Phys., 31, doi: 10.1016/j.astropartphys.2009.03.004
* O’C. Drury (2018) O’C. Drury, L. 2018, PoS, ICRC2017, 1081, doi: 10.22323/1.301.1081
* Orlando (2018) Orlando, E. 2018, MNRAS, 475, 2724, doi: 10.1093/mnras/stx3280
* Panov et al. (2007) Panov, A. D., et al. 2007, Bull. Russ. Acad. Sci. Phys., 71, 494, doi: 10.3103/S1062873807040168
* Parker (1965) Parker, E. N. 1965, Planet. Space Sci., 13, 9, doi: 10.1016/0032-0633(65)90131-5
* Popescu et al. (2017) Popescu, C. C., et al. 2017, MNRAS, 470, 2539, doi: 10.1093/mnras/stx1282
* Porter et al. (2017) Porter, T. A., Johannesson, G., & Moskalenko, I. V. 2017, ApJ, 846, 67, doi: 10.3847/1538-4357/aa844d
* Porter et al. (2008) Porter, T. A., Moskalenko, I. V., Strong, A. W., Orlando, E., & Bouchet, L. 2008, ApJ, 682, 400, doi: 10.1086/589615
* Remy et al. (2018) Remy, Q., Grenier, I. A., Casandjian, J. M., & Fermi-LAT Collaboration. 2018, 8th International Fermi Symposium, Baltimore, MD
* Stecker & Jones (1977) Stecker, F. W., & Jones, F. C. 1977, ApJ, 217, 843, doi: 10.1086/155631
* Stolpovskiy (2022) Stolpovskiy, M. 2022, Proceedings of the 41st International Conference on High Energy physics (ICHEP2022)
* Strong et al. (2000) Strong, A. W., Moskalenko, I. V., & Reimer, O. 2000, ApJ, 537, 763, doi: 10.1086/309038
* Strong et al. (2011) Strong, A. W., Orlando, E., & Jaffe, T. R. 2011, A&A, 534, A54, doi: 10.1051/0004-6361/201116828
* Su et al. (2010) Su, M., Slatyer, T. R., & Finkbeiner, D. P. 2010, ApJ, 724, 1044, doi: 10.1088/0004-637X/724/2/1044
* Tibaldo et al. (2021) Tibaldo, L., Gaggero, D., & Martin, P. 2021, Universe, 7, 141, doi: 10.3390/universe7050141
* Vecchiotti et al. (2022) Vecchiotti, V., Pagliaroli, G., & Villante, F. L. 2022, Communications Physics, 5, 161, doi: 10.1038/s42005-022-00939-7
* Vecchiotti et al. (2022) Vecchiotti, V., Zuccarini, F., Villante, F. L., & Pagliaroli, G. 2022, ApJ, 928, 19, doi: 10.3847/1538-4357/ac4df4
* Vernetto & Lipari (2016) Vernetto, S., & Lipari, P. 2016, Phys. Rev. D, 94, 063009, doi: 10.1103/physrevd.94.063009
* Vittino et al. (2019) Vittino, A., Mertsch, P., Gast, H., & Schael, S. 2019, Phys. Rev. D, 100, 043007, doi: 10.1103/PhysRevD.100.043007
* Vladimirov et al. (2012) Vladimirov, A. E., Jóhannesson, G., Moskalenko, I. V., & Porter, T. A. 2012, ApJ, 752, 68, doi: 10.1088/0004-637X/752/1/68
* Yang et al. (2016) Yang, R., Aharonian, F., & Evoli, C. 2016, Phys. Rev. D, 93, 123007, doi: 10.1103/PhysRevD.93.123007
* Yoon et al. (2017) Yoon, Y. S., et al. 2017, ApJ, 839, 5, doi: 10.3847/1538-4357/aa68e4
* Yusifov & Kücük (2004) Yusifov, I., & Kücük, I. 2004, A&A, 422, 545, doi: 10.1051/0004-6361:20040152
* Zhao et al. (2021) Zhao, S., Zhang, R., Zhang, Y., & Yuan, Q. 2021, PoS, ICRC2021, 859, doi: 10.22323/1.395.0859
## Appendix A Comparison to Fermi-LAT Data
In the following, we provide some details on the preparation of Fermi-LAT data
that we use in Sec. 4. In order to compare a diffuse model to pixelized Fermi-
LAT count maps, we have used forward folding. For a given gamma-ray intensity
$J$, the expected counts per solid angle and reconstructed energy are
calculated as
$\mathcal{C}(E_{\text{reco}},\bm{\theta}_{\text{reco}})=\int\mathrm{d}E\int\mathrm{d}\Omega\,J(E,\bm{\theta})\,\mathcal{E}(E,\bm{\theta})\,\mathrm{PSF}(|\bm{\theta}-\bm{\theta}_{\text{reco}}|,E)\,\mathrm{ED}(E_{\text{reco}}|E),$
(A1)
with the instrument response functions (IRFs) exposure $\mathcal{E}$, energy-
dependent point spread function $\mathrm{PSF}$ and the energy dispersion
$\mathrm{ED}$. From this, the expected counts in pixel $i$ of energy bin $k$,
$\mu_{ik}$, follow as
$\mu_{ik}=\int^{E_{\text{max},k}}_{E_{\text{min},k}}\mathrm{d}E_{\text{reco}}\int_{\Omega_{i}}\mathrm{d}\Omega\,\mathcal{C}(E_{\text{reco}},\bm{\theta}_{\text{reco}}).$
(A2)
This quantity is then compared directly to the Fermi-LAT data counts to obtain
the results presented in Figure 10.
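The forward folding of Eqs. (A1) and (A2) can be sketched in discretised form. The following is a minimal numpy illustration in which the response arrays (exposure, a per-energy PSF smoothing matrix, and an energy-dispersion matrix) are placeholder assumptions, not the actual Fermi-LAT IRF format:

```python
import numpy as np

def forward_fold(J, exposure, psf_kernel, edisp, dE, dOmega):
    """Discretised version of Eqs. (A1)-(A2): fold a model intensity
    J[E, pix] (true energy x sky pixel) with the instrument response.

    psf_kernel[k] is a per-energy spatial smoothing matrix (pix x pix),
    edisp[E_reco, E] maps true to reconstructed energy. All response
    arrays are illustrative placeholders.
    """
    n_E, n_pix = J.shape
    smeared = np.empty_like(J)
    for k in range(n_E):
        # exposure-weighted intensity, then PSF smoothing at true energy k
        smeared[k] = psf_kernel[k] @ (J[k] * exposure[k]) * dOmega
    # apply energy dispersion and integrate over true energy
    counts = edisp @ (smeared * dE[:, None])   # shape [E_reco, pix]
    return counts
```

With an identity PSF and energy dispersion and unit exposure, the expected counts reduce to $J\,\mathrm{d}E\,\mathrm{d}\Omega$, which makes for a convenient sanity check.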
The Fermi-LAT dataset used in this work consists of 8 years of Pass 8 P8R3
ULTRACLEANVETO data from the same time window as used for the construction of
the 4FGL catalogue (Abdollahi et al., 2019) together with the corresponding
IRFs. Pass 8 P8R3 is the latest Fermi-LAT data release, featuring the current
up-to-date processing and event reconstruction (Bruel et al., 2018). The
ULTRACLEANVETO event selection features the lowest instrumental background and
is recommended for the study of large scale structures (Bruel et al., 2018).
The dataset used in this work features events binned into 22 logarithmic
energy bins ranging from $1.6\;\mathrm{GeV}$ to $2.32\;\mathrm{TeV}$. Besides
that, events are separated into four classes according to the quality of their
angular reconstruction. These are labelled PSF0-PSF3, with a larger number
representing better reconstruction quality; each class contains about the same
number of events but has its own PSF (see
https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_LAT_IRFs/).
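As a quick consistency check on the binning, logarithmically spaced edges between $1.6\,\mathrm{GeV}$ and $2.32\,\mathrm{TeV}$ reproduce the edge ratio $E_{\text{max},i}/E_{\text{min},i}=\sqrt{2}$ quoted later in this appendix; the choice of 22 edges below is an assumption made to match that ratio:

```python
import numpy as np

# Logarithmic energy grid over the quoted Fermi-LAT range; 22 edges is an
# assumption chosen so that consecutive edges are separated by the stated
# factor E_max,i / E_min,i = sqrt(2).
edges_GeV = np.geomspace(1.6, 2.32e3, 22)
ratios = edges_GeV[1:] / edges_GeV[:-1]
print(ratios[0])   # ~1.414, i.e. approximately sqrt(2)
```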
The total gamma-ray flux measured by Fermi-LAT of course consists not only of
galactic diffuse emission, but also of emission from both point-like and
extended sources as well as isotropic extragalactic emission and the remaining
instrumental backgrounds. For the latter, a recommended model for the P8R3
ULTRACLEANVETO selection is provided publicly
(https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html)
and is included as an additional flux component in the model. In addition to
the hadronic diffuse emission and the Inverse Compton component discussed in
the main text, we also include the contribution from bremsstrahlung of
cosmic ray leptons. This component can be safely neglected above
$20\,\mathrm{GeV}$ due to its steeply falling spectrum, but contributes at the
level of up to $5\,\%$ at energies of a few GeV. With this, the total
intensity that is compared to data is
$J=J_{\mathrm{had}}+J_{\mathrm{IC}}+J_{\mathrm{brems}}+J_{\mathrm{isotropic}}+J_{\mathrm{instrument}}.$
(A3)
Note that this does not include the intensity from either resolved or
unresolved sources. While we do not consider unresolved sources in this
analysis, we mask out known sources to stay as independent of the details of
source emission models as possible. This includes all 5064 sources included in
the 4FGL, of which 75 are spatially extended (Abdollahi et al., 2019) as well
as two large-scale features in the galactic gamma-ray sky that cannot be
reproduced with the modelling approach pursued here: the Fermi bubbles (Su et
al., 2010) and the north polar spur (NPS) (Acero et al., 2016; Casandjian &
Grenier, 2009). For the Fermi bubbles, a publicly available spatial template
(https://dspace.mit.edu/handle/1721.1/105492) similar to that
shown in Acero et al. (2016) is used, while the mask for the NPS is
constructed following the template shown in Acero et al. (2016). For the
sources from the 4FGL, a circular mask with radius $\sigma_{\text{mask}}$ is
assumed. For the spatially extended sources, the mask size is not
$\sigma_{\text{mask}}$, but rather the larger
$\sigma_{ext}=\sqrt{\sigma_{\text{mask}}^{2}+\sigma^{2}_{\text{src}}}$, with
$\sigma_{\text{src}}$ the radius of the source. $\sigma_{\text{mask}}$ must of
course be chosen large enough to mask out a large fraction of the source
emission, but also needs to be small enough to still leave a significant
fraction of pixels unmasked, in particular towards the galactic center where
the source density is highest. As a compromise between these considerations, a
value of $\sigma_{\text{mask}}=0.5^{\circ}$ is chosen, and the data considered
in this work are restricted to the PSF3 and PSF2 classes at energies
$E>5\;\mathrm{GeV}$. This ensures that all sources are masked out to at least
$2.3\times$ the $68\,\%$ PSF containment angle. For simplicity, and to be
even more restrictive at higher energies, $\sigma_{\text{mask}}$ is fixed
independently of gamma-ray energy. As a consequence of only considering the
Fermi-LAT data above $5\;\mathrm{GeV}$, where the energy resolution is much
better than at lower energies (e.g. Atwood et al., 2009b), energy dispersion
is neglected. Given this energy threshold and the rather wide logarithmic
energy binning of the dataset used here with
$E_{\text{max},i}/E_{\text{min},i}=\sqrt{2}$, the systematic error introduced
by this simplification was found to be below $5\,\%$ in all cases. On the
high-energy end, we restrict our analysis to gamma-ray energies below
$100\;\mathrm{GeV}$. This cut-off ensures that the Poisson uncertainty on the
data counts also remains below $5\,\%$ in all cases.
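The source-masking rule described above (a circular mask of radius $\sigma_{\text{mask}}$, enlarged to $\sqrt{\sigma_{\text{mask}}^{2}+\sigma_{\text{src}}^{2}}$ for extended sources) can be sketched as follows; the flat pixel arrays stand in for a HEALPix pixelisation, and the function name is illustrative:

```python
import numpy as np

def mask_sources(pix_lon, pix_lat, src_lon, src_lat, src_ext_deg,
                 sigma_mask_deg=0.5):
    """Return a boolean array that is True for unmasked pixels.

    Each source masks a disc of radius sqrt(sigma_mask^2 + sigma_src^2),
    as in Appendix A; all angles in degrees. Illustrative sketch, not the
    actual analysis pipeline.
    """
    keep = np.ones(len(pix_lon), dtype=bool)
    for lon, lat, ext in zip(src_lon, src_lat, src_ext_deg):
        radius = np.hypot(sigma_mask_deg, ext)
        # great-circle separation between each pixel and the source
        d = np.degrees(np.arccos(np.clip(
            np.sin(np.radians(pix_lat)) * np.sin(np.radians(lat))
            + np.cos(np.radians(pix_lat)) * np.cos(np.radians(lat))
            * np.cos(np.radians(pix_lon - lon)), -1.0, 1.0)))
        keep &= d > radius
    return keep
```

In a real analysis the disc query would be done directly on the HEALPix grid (e.g. with a disc-query routine), but the geometry is the same.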
## Appendix B Input variations for best-fit GCR parameters
The quantification of the uncertainties of galactic diffuse emission stemming
from the choice of gas distributions, hadronic production cross-sections,
cosmic ray source distribution and interstellar radiation field as described
in Sec. 3.3 are based on discrete variations of the respective model inputs
assuming the best-fit GCR parameters. In the following, we show the fluxes for
each of these discrete variations to illuminate the shape of the uncertainty
bands displayed in Sec. 3.
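A minimal sketch of how such an envelope can be assembled from the discrete variations, normalised to the fiducial model as in the lower panels of the figures below (the function name and array layout are assumptions):

```python
import numpy as np

def uncertainty_band(fiducial, variants):
    """Envelope of discrete input variations, normalised to the fiducial
    model. A sketch of the construction behind the uncertainty bands.

    fiducial: flux array of shape (n_E,); variants: list of such arrays,
    one per alternative gas map / cross-section / source distribution.
    Returns (lower, upper) relative deviations, each containing zero.
    """
    rel = np.array([v / fiducial - 1.0 for v in variants])
    lower = np.minimum(rel.min(axis=0), 0.0)
    upper = np.maximum(rel.max(axis=0), 0.0)
    return lower, upper
```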
### B.1 Hadronic Production Cross-sections
Figure 11 shows the all-sky averaged spectra and relative differences of both
the diffuse neutrino and hadronic gamma-ray flux for the three hadronic
production cross-sections described in Sec. 2.2. All other inputs are the same
as for our fiducial model. We remind the reader that the K&K cross-sections
are only valid at primary energies above $100\,\mathrm{GeV}$ and that the
KamaeExtended model is equal to the K&K model at primary energies above
$500\,\mathrm{TeV}$. The latter is then reflected at somewhat lower energies
in the diffuse emission spectra as is clearly visible in Figure 11. When
comparing the differences between the cross-sections for neutrinos to those
for hadronic gamma-rays, one can see that for hadronic gamma-rays, the cross-
sections are in close agreement below $1\,\mathrm{TeV}$. This is different for
neutrinos, where the difference between the AAfrag and KamaeExtended
parametrizations is already sizable at lower energies. We also highlight that
the KamaeExtended model predicts consistently harder spectra than the AAfrag
model for both neutrinos and hadronic gamma-rays.
Figure 11: All-sky averaged neutrino (left panels) and hadronic gamma-ray
(right panel) intensity as a function of energy. We show our model prediction
for the different hadronic production cross-sections described in Sec. 2.3
using the best-fit GCR parameters combined with the GALPROP galactic gas
maps and the Ferrière (2001) source distribution. The blue line obtained using
the AAfrag cross-sections corresponds to our fiducial model. In the lower
panels, the relative deviations after normalization to this fiducial model are
presented.
### B.2 Gas distribution
Figure 12 shows the longitudinal profiles of the diffuse neutrino flux at
$E=3\;\mathrm{TeV}$ for latitudes $|b|<5^{\circ}$ for the three galactic gas
maps described in Sec. 2.2. All other inputs are the same as for our fiducial
model. The difference between the GALPROP and GALPROP-OT models arises solely
from the assumption of a different value for the spin temperature
$T_{\text{s}}$. As expected, the GALPROP-OT model, assuming optically thin gas,
leads to a lower intensity. The differences with respect to the HERMES gas maps
stem from the reliance on different $21\,\mathrm{cm}$ surveys, the
reconstruction into galactocentric rings of different sizes, the assumed
values of $T_{\text{s}}$ and the treatment of dark gas.
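The role of the spin temperature can be illustrated with the standard 21 cm conversion (see e.g. Draine 2010); this is a sketch of the textbook formula, not the GALPROP or HERMES implementations:

```python
import numpy as np

C_HI = 1.823e18  # cm^-2 per (K km/s), standard 21 cm conversion constant

def column_density(T_B, dv_kms, T_s=np.inf):
    """HI column density from 21 cm brightness temperature T_B (K).

    For finite spin temperature T_s the optical-depth correction
    T_s * ln(T_s / (T_s - T_B)) is applied per velocity channel; with
    T_s -> inf this reduces to the optically thin case N ~ integral T_B dv,
    which is why the optically thin assumption yields a lower column.
    """
    T_B = np.asarray(T_B, dtype=float)
    if np.isinf(T_s):
        channel = T_B
    else:
        channel = T_s * np.log(T_s / (T_s - T_B))
    return C_HI * np.sum(channel) * dv_kms
```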
Figure 12: Neutrino intensity as a function of galactic longitude for a
neutrino energy of $3\,\text{TeV}$. We show our model prediction for the
different gas maps described in Sec. 2.2 using the best-fit GCR parameters
combined with the Ferrière (2001) source distribution and the AAfrag
production cross-sections. The blue profile obtained using the GALPROP
galactic gas maps corresponds to our fiducial model. In the lower panel, the
relative deviations after normalization to this fiducial model are presented.
### B.3 Cosmic ray source distribution
Figure 13 shows the longitudinal profiles of the diffuse neutrino flux at
$E=3\;\mathrm{TeV}$ for latitudes $|b|<5^{\circ}$ for the four source
distributions described in Sec. 2.1.2. All other inputs are the same as for
our fiducial model. The relative intensities of the models towards and away
from the galactic center reflect the shape of the radial profiles as shown in
Figure 1. For the reasons described in Sec. 3.3, the relative differences
between the intensities are larger away from the galactic center than towards
it. Furthermore, as also mentioned in Sec. 3.3, the coincidence of the
intensities at $|l|\approx 45^{\circ}$ can be explained by the fact that in
this direction, the emission is on average produced at galacto-centric radii
similar to the solar radius and is thus constrained by the fit to local GCR
intensities. Overall, the source distribution from Case & Bhattacharya (1998)
gives the flattest emission profile, while the emission obtained with the
distribution from Lorimer et al. (2006) is the most strongly peaked towards
the galactic center.
Figure 13: Neutrino intensity as a function of galactic longitude for a
neutrino energy of $3\,\text{TeV}$. We show our model prediction for the
different source distributions described in Sec. 2.1 and shown in Figure 1
using the best-fit GCR parameters combined with the GALPROP galactic gas
maps and the AAfrag production cross-sections. The blue profile obtained using
the Ferrière (2001) source distribution corresponds to our fiducial model. In
the lower panel, the relative deviations after normalization to this fiducial
model are presented.
### B.4 Interstellar radiation field
Figure 14 shows the spectra of the Inverse Compton gamma-ray intensity in
different windows in the sky. The spectra are calculated using the best-fit
GCR parameters combined with the Ferrière (2001) source distribution and are shown for both
ISRF models described in Sec. 2.2.3. Similar to the hadronic component of the
gamma-ray and neutrino intensities, the Inverse Compton intensity also varies
over the different windows in the sky, with the largest intensity towards the
galactic center. The cut-off of the intensities above $10\,\mathrm{TeV}$
reflects the cut-off in the electron source spectra at
$E^{e^{-}}_{\text{cut}}=20\,\mathrm{TeV}$ as described in Sec. 2.1.2. The
GALPROP ISRF model produces intensities at least $25\,\%$ larger than the
model from Vernetto & Lipari (2016) in all regions in the sky and at all
energies. However, because the Inverse Compton intensity is a subdominant
contribution to the overall gamma-ray intensity, the overall uncertainty
stemming from the ISRF model choice as shown in Figures 7 and 10 is
correspondingly reduced.
Figure 14: Inverse Compton gamma-ray intensity as a function of energy in
different regions in the sky. We show our model prediction for the ISRF models
described in Sec. 2.2.3 using the best-fit GCR parameters combined with
the Ferrière (2001) source distribution. The blue profiles obtained using the
GALPROP ISRF model correspond to our fiducial model. In the lower panels, the
relative deviations after normalization to this fiducial model are presented.
# Learning from Good Trajectories in Offline Multi-Agent Reinforcement
Learning
Qi Tian1, Kun Kuang1 (corresponding author), Furui Liu2, Baoxiang Wang3
###### Abstract
Offline multi-agent reinforcement learning (MARL) aims to learn effective
multi-agent policies from pre-collected datasets, which is an important step
toward the deployment of multi-agent systems in real-world applications.
However, in practice, each individual behavior policy that generates multi-
agent joint trajectories usually has a different level of how well it
performs, _e.g._, one agent may follow a random policy while the other agents follow medium
policies. In the cooperative game with global reward, one agent learned by
existing offline MARL often inherits this random policy, jeopardizing the
performance of the entire team. In this paper, we investigate offline MARL
with explicit consideration on the diversity of agent-wise trajectories and
propose a novel framework called Shared Individual Trajectories (SIT) to
address this problem. Specifically, an attention-based reward decomposition
network assigns the credit to each agent through a differentiable key-value
memory mechanism in an offline manner. These decomposed credits are then used
to reconstruct the joint offline datasets into prioritized experience replay
with individual trajectories, thereafter agents can share their good
trajectories and conservatively train their policies with a graph attention
network (GAT) based critic. We evaluate our method in both discrete control
(_i.e._ , StarCraft II and multi-agent particle environment) and continuous
control (_i.e._ , multi-agent mujoco). The results indicate that our method
achieves significantly better results in complex and mixed offline multi-agent
datasets, especially when the difference of data quality between individual
trajectories is large.
## Introduction
Multi-agent reinforcement learning (MARL) has shown its powerful ability to
solve many complex decision-making tasks, _e.g._, game playing (Samvelyan et
al. 2019). However, deploying MARL to practical applications is not easy since
interaction with the real world is usually prohibitive, costly, or risky
(García 2015), _e.g._, autonomous driving (Shalev-Shwartz, Shammah, and
Shashua 2016). Thus offline MARL, which aims to learn multi-agent policies in
the previously-collected, non-expert datasets without further interaction with
environments, is an ideal way to cope with practical problems.
Recently, Yang et al. (2021) first investigated offline MARL and found that
multi-agent systems are more susceptible to extrapolation error (Fujimoto,
Meger, and Precup 2019), _i.e._ , a phenomenon in which unseen state-action
pairs are erroneously estimated, compared to offline single-agent
reinforcement learning (RL) (Fujimoto, Meger, and Precup 2019; Kumar et al.
2019; Wu, Tucker, and Nachum 2019; Kumar et al. 2020; Fujimoto and Gu 2021).
Then they proposed implicit constraint Q-learning (ICQ), which can effectively
alleviate this problem by only trusting the state-action pairs given in the
dataset for value estimation.
Figure 1: (a)$\sim$(e) Data composition of data replay in episodic
online/offline RL/MARL, where the online learning agents in (a) and (c) are
assumed to be medium policies. (f) Episode return in the 3s_vs_5z map of StarCraft
II with the offline data structure of (e).
In this paper, we point out that in addition to extrapolation error, the
diversity of trajectories in the offline multi-agent dataset also deserves
attention since the composition of this dataset is much more complex than its
single-agent version. For better illustration, we briefly summarize the data
quality of data replay in episodic online/offline RL/MARL as is shown in
Figure 1. In online settings, since the data replay is updated rollingly based
on the learning policy, the quality of all data points is approximately the
same as is shown in Figure 1(a) and Figure 1(c), where line connections in
Figure 1(c) represent multi-agent systems only provide one global reward at
each timestep of each episode. In offline settings, there is no restriction on
the quality of collected data, thus the single-agent data replay usually
contains multi-source and suboptimal data as is shown in Figure 1(b). Figure
1(d) directly extends the offline single-agent data composition to a multi-
agent version. In practical tasks, however, the individual trajectories within
an offline multi-agent joint trajectory usually differ in quality, as shown in
Figure 1(e). For example, consider a task in which two robotic arms (agents)
work together to lift a large circular object, with the reward being the
height of its center of mass. Suppose two workers operate the two robotic arms
simultaneously to collect offline data, but differ in their proficiency at
operating them. The joint data generated in this case forms an agent-wise
imbalanced multi-agent dataset.
To investigate the performance of the current state-of-the-art offline MARL
algorithm, ICQ, on the imbalanced dataset, we test it on the 3s_vs_5z map in
StarCraft II (Samvelyan et al. 2019). Specifically, as is shown in Figure
1(f), we first obtain random joint policy (violet dotted) and medium joint
policy (red dotted) through online training, and then utilize these two joint
policies to construct an agent-wise imbalanced dataset with the data structure
of Figure 1(e). We can observe that the performance of ICQ (green) is lower
than that of the online medium joint policy (red dotted), as ${\rm agent_{1}}$
and ${\rm agent_{2}}$ may inherit the random behavior policies that generate
the data. One might attempt to solve this issue of ICQ by randomly exchanging
local observations and actions among agents, so that ${\rm agent_{1}}$ and
${\rm agent_{2}}$ can share some medium-quality individual trajectories of
${\rm agent_{3}}$. Unfortunately, the performance of this alternative (orange)
is similar to ICQ, since the proportion of the medium-quality individual data
in the data replay does not change under the constraint of the total reward.
In this paper, we propose a novel algorithmic framework called Shared
Individual Trajectories (SIT) to address this problem. It first learns an
Attention-based Reward Decomposition Network with Ensemble Mechanism (ARDNEM),
which assigns credits to each agent through a differentiable key-value memory
mechanism in an offline manner. These credits are then used to reconstruct the
original joint trajectories into a Decomposed Prioritized Experience Replay
(DPER) of individual trajectories, after which agents can share their good
trajectories and conservatively train their policies with a graph-attention-
network-based critic. Extensive experiments on both discrete control (_i.e._ ,
StarCraft II and the multi-agent particle environment) and continuous control
(_i.e._ , multi-agent mujoco) show that our method achieves significantly
better results on complex, mixed offline multi-agent datasets, especially
when the quality gap between individual trajectories is large.
## Related Work
#### Offline MARL.
Recently, Yang et al. (2021) first explored offline MARL and found that multi-
agent systems are more susceptible to extrapolation error (Fujimoto, Meger,
and Precup 2019) compared to offline single-agent RL (Fujimoto, Meger, and
Precup 2019; Kumar et al. 2020; Fujimoto and Gu 2021), and then they proposed
implicit constraint Q-learning (ICQ) to solve this problem. Jiang and Lu
(2021a) investigated the mismatch problem of transition probabilities in fully
decentralized offline MARL. However, they assume that each agent makes the
decision based on the global state and the reward output by the environment
can accurately evaluate each agent’s action. This is fundamentally different
from our work since we focus on the partially observable setting in global
reward games (Chang, Ho, and Kaelbling 2003), which is a more practical
situation. Other works (Meng et al. 2021; Jiang and Lu 2021b) have made some
progress on offline MARL training with online fine-tuning. Unfortunately, all
existing methods neglect the diversity of trajectories in offline multi-agent
datasets; we take the first step to fill this gap.
#### Multi-agent credit assignment.
In cooperative multi-agent environments, all agents receive one total reward.
The multi-agent credit assignment problem aims to correctly allocate the
reward signal to each individual agent for better group coordination (Chang,
Ho, and Kaelbling 2003). One popular class of solutions is value
decomposition, which decomposes the team value function into agent-wise value
functions in an online
fashion under the framework of the Bellman equation (Sunehag et al. 2018;
Rashid et al. 2018; Yang et al. 2020b; Li et al. 2021). Different from these
works, in this paper, we focus on explicitly decomposing the total reward into
individual rewards in an offline fashion under the regression framework, and
these decomposed rewards will be used to reconstruct the offline prioritized
dataset.
#### Experience replay in RL/MARL.
Experience replay, a mechanism for reusing historical data (Lin 1992), is
widely used in online RL (Wang et al. 2020a). Many prior works focus on
improving its data utilization (Schaul et al. 2016; Zha et al. 2019; Oh et al.
2020); _e.g._ , prioritized experience replay (PER) (Schaul et al. 2016) takes
the temporal-difference (TD) error as a metric of a sample's value and
performs importance sampling according to it. Unfortunately,
this metric fails in offline training due to severe overestimation in offline
scenarios. In online MARL, most works related to the experience replay focus
on stable decentralized multi-agent training (Foerster et al. 2017;
Omidshafiei et al. 2017; Palmer et al. 2018), but these methods usually rely
on auxiliary information, _e.g._ , the training iteration number, timestamp,
or exploration rate, which is often unavailable in offline settings. SEAC
(Christianos, Schäfer, and Albrecht 2020) is the work most closely related to
ours, as it also shares individual trajectories among agents during online
training. However, SEAC assumes that each agent can directly obtain a local
reward, and it does not consider the importance of each individual trajectory.
We instead must decompose the global reward into local rewards in an offline
manner, and then determine the quality of each individual trajectory from the
decomposed rewards for priority sampling.
## Preliminaries
Figure 2: Overall framework of Shared Individual Trajectories (SIT) in offline
MARL
A fully cooperative multi-agent task in the global reward game (Chang, Ho, and
Kaelbling 2003) can be described as a Decentralized Partially Observable
Markov Decision Process (Dec-POMDP) (Oliehoek and Amato 2016) consisting of a
tuple
$\langle\emph{N},\mathcal{S},\mathcal{A},\mathcal{T},\mathcal{R},\mathcal{O},\Omega,\gamma\rangle$.
Let $N$ represent the number of agents and $\mathcal{S}$ the true state space
of the environment. At each timestep $t\in\mathbb{Z}^{+}$ of episode
$k\in\mathbb{Z}^{+}$, each agent $i\in N\equiv\{1,\dots,N\}$ takes an action
$a_{i}\in\mathcal{A}$, forming a joint action
$\boldsymbol{a}\in\bm{\mathcal{A}}\equiv\mathcal{A}^{N}$. Let
$\mathcal{T}(s^{\prime}|s,\boldsymbol{a}):\mathcal{S}\times\bm{\mathcal{A}}\rightarrow\mathcal{S}$
represent the state transition function. All agents share the global reward
function
$r(s,\boldsymbol{a}):\mathcal{S}\times\bm{\mathcal{A}}\rightarrow\mathcal{R}$
and $\gamma\in[0,1)$ represents the discount factor. Let $J$ represent the
type of agent $i$, which is the prior knowledge given by the environment (Wang
et al. 2020b; Yang et al. 2020a). All agent IDs belonging to type $J$ are
denoted as $j\in\mathbb{J}$, where $\mathbb{J}$ represents the set of IDs of
all agents isomorphic to agent $i$ (including $i$). We consider a partially
observable setting in which each agent receives an individual observation
$o_{i}\in\Omega$ according to the observation function
$\mathcal{O}(s,a):\mathcal{S}\times\mathcal{A}\rightarrow\Omega$. Each agent
has an action-observation history
$\tau_{i}\in\bm{T}\equiv\left(\Omega\times\mathcal{A}\right)^{t}$, on which it
conditions a stochastic policy $\pi_{i}\left(a_{i}|\tau_{i}\right)$. Following
Gronauer (2022), we focus on episodic training in this paper.
## Methodology
In this section, we propose a new algorithmic framework called Shared
Individual Trajectories (SIT) in offline MARL, which can maximally exploit the
useful knowledge in the complex and mixed offline multi-agent datasets to
boost the performance of multi-agent systems. SIT consists of three stages as
shown in Figure 2: Stage I: learning an Attention-based Reward Decomposition
Network with Ensemble Mechanism (ARDNEM). Stage II: reconstructing the
original joint offline dataset into Decomposed Prioritized Experience Replay
(DPER) based on the learned ARDNEM. Stage III: conservative offline actor
training with a graph attention network (GAT) based critic on DPER.
### Attention-Based Reward Decomposition Network with Ensemble Mechanism
It is non-trivial to evaluate the behavior of individual trajectories in an
offline dataset under a global reward game, since only one total reward is
accessible. We propose a reward decomposition network, shown in Figure 2(a),
to solve this problem.
Specifically, the individual reward $r_{i}^{t,k}$ at timestep $t$ of episode
$k$ can be approximately estimated from the local observation $o_{i}^{t,k}$
and action $a_{i}^{t,k}$ through a reward network $f_{i}$, _i.e._ ,
$r_{i}^{t,k}=f_{i}(o_{i}^{t,k},a_{i}^{t,k})$. To learn the contribution weight
$\lambda_{i}^{t,k}$ of each individual reward $r_{i}^{t,k}$ to the total
reward, we use an attention mechanism (_i.e._ , the differentiable key-value
memory model (Oh et al. 2016)). That is, we first encode the global state
$s^{t,k}$ and local
observation-action pair $(o^{t,k}_{i},a^{t,k}_{i})$ into embedding vectors
$e^{t,k}_{s}$ and $e^{t,k}_{i}$ with two multi-layer perceptrons (MLPs)
respectively, then we pass the similarity value between the global state’s
embedding vector $e^{t,k}_{s}$ and the individual features’ embedding vector
$e^{t,k}_{i}$ into a softmax
$\lambda_{i}^{t,k}\propto\exp((e_{s}^{t,k})^{T}W_{q}^{T}W_{k}e_{i}^{t,k})\,,$
(1)
where the learnable weight $W_{q}$ and $W_{k}$ transform $e_{s}^{t,k}$ and
$e_{i}^{t,k}$ into the query and key. Thus the estimated total reward
$\hat{r}_{tot}^{t,k}$ at each timestep $t$ of episode $k$ can be expressed as
$\hat{r}_{tot}^{t,k}=\sum_{i=1}^{N}\lambda_{i}^{t,k}r_{i}^{t,k}=\sum_{i=1}^{N}\lambda_{i}^{t,k}\cdot
f_{i}(o_{i}^{t,k},a_{i}^{t,k})\,.$ (2)
Since misestimated individual rewards will have a large impact on agent
learning, we want to obtain the uncertainty corresponding to each estimated
value to correct subsequent training. Following Chua et al. (2018), we
introduce an ensemble mechanism (Osband et al. 2016), _i.e._ , $M$
decomposition networks are learned simultaneously. Finally, our Attention-
based Reward Decomposition Network with Ensemble Mechanism (ARDNEM) w.r.t.
parameters $\psi$ can be trained on the original joint offline dataset with
the following mean-squared error (MSE) loss
$\mathcal{L}_{{\rm
MSE}}(\psi)=\frac{1}{M}\sum_{m=1}^{M}\left(\sum_{i=1}^{N}\lambda_{i,m}^{t,k}\cdot
f_{i,m}(o_{i}^{t,k},a_{i}^{t,k})-r_{tot}^{t,k}\right)^{2}\,,$ (3)
where $r_{tot}^{t,k}$ represents the true total reward in the offline dataset.
In practice, the reward network parameters of different agents are shared for
the scalability of our method.
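The attention weighting of Eqs. (1)-(2) can be sketched in a few lines of plain Python. This is an illustrative toy, not the paper's implementation: the embeddings and the matrices `W_q`, `W_k` are assumed to be given as plain lists, and the learned networks are abstracted away.

```python
import math

def matvec(W, v):
    # W is a matrix given as a list of rows
    return [sum(a * b for a, b in zip(row, v)) for row in W]

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_weights(e_s, e_agents, W_q, W_k):
    """Eq. (1): lambda_i proportional to exp((W_q e_s) . (W_k e_i)),
    normalised over agents with a softmax."""
    q = matvec(W_q, e_s)
    scores = [sum(a * b for a, b in zip(q, matvec(W_k, e_i)))
              for e_i in e_agents]
    return softmax(scores)

def estimated_total_reward(lams, rewards):
    """Eq. (2): r_tot_hat = sum_i lambda_i * r_i."""
    return sum(l * r for l, r in zip(lams, rewards))
```

During training, the MSE of Eq. (3) between `estimated_total_reward` and the true team reward would be backpropagated through both the reward networks and the attention parameters.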
### Decomposed Prioritized Experience Replay
As is shown in Figure 2(b), after ARDNEM is learned, we can use it to estimate
individual rewards for all local trajectories. Specifically, since ARDNEM
adopts the ensemble mechanism, we take the mean of $M$ model output
$\lambda_{i,m}^{t,k}r_{i,m}^{t,k}$ as the estimation of the weighted
individual reward $\hat{r}_{i}^{t,k}$:
$\hat{r}_{i}^{t,k}=\frac{1}{M}\sum_{m=1}^{M}\lambda_{i,m}^{t,k}r_{i,m}^{t,k}\,.$
(4)
Its corresponding variance is defined as the uncertainty $\hat{u}_{i}^{t,k}$
of the model prediction:
$\hat{u}_{i}^{t,k}=\sqrt{\frac{1}{M}\sum_{m=1}^{M}\left(\lambda_{i,m}^{t,k}r_{i,m}^{t,k}-\hat{r}_{i}^{t,k}\right)^{2}}\,.$
(5)
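Eqs. (4)-(5) amount to a per-sample mean and population standard deviation over the $M$ ensemble heads; a minimal plain-Python sketch with hypothetical inputs:

```python
import math

def ensemble_estimate(weighted_rewards):
    """weighted_rewards: the M values lambda_{i,m} * r_{i,m} for one
    (agent, timestep) pair. Returns (r_hat, u_hat) per Eqs. (4)-(5)."""
    M = len(weighted_rewards)
    r_hat = sum(weighted_rewards) / M  # Eq. (4): ensemble mean
    u_hat = math.sqrt(
        sum((x - r_hat) ** 2 for x in weighted_rewards) / M)  # Eq. (5)
    return r_hat, u_hat
```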
To maximally exploit the useful knowledge in the data replay, we need a metric
to distinguish the importance of each data. Many previous works in online RL
do this through temporal-difference (TD) error (Schaul et al. 2016; Zha et al.
2019; Oh et al. 2020). However, the value function suffers from severe
overestimation in offline settings (Fujimoto, Meger, and Precup 2019), thus TD
error is not an ideal choice. In this paper, considering that offline datasets
are static and limited, we believe that high-quality data should be valued.
Therefore, we use Monte Carlo return $\hat{g}_{i}^{t,k}$ to measure the
importance (or quality) of the data at each timestep $t$ of episode $k$:
$\hat{g}_{i}^{t,k}=\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}\hat{r}_{i}^{t^{\prime},k}\,,$
(6)
where $\gamma$ represents the discount factor.
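The discounted return of Eq. (6) can be computed for a whole episode with one backward pass; a sketch, not the paper's code:

```python
def monte_carlo_returns(rewards, gamma):
    """Eq. (6): g_t = sum over t' >= t of gamma^(t'-t) * r_{t'}, for every t."""
    g = 0.0
    returns = [0.0] * len(rewards)
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g  # g_t = r_t + gamma * g_{t+1}
        returns[t] = g
    return returns
```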
After the above efforts, each episode $k$ in the original joint trajectories
(_i.e._ , $\\{o_{1}^{t,k},a_{1}^{t,k},o_{1}^{t+1,k},\dots,o_{{\rm
N}}^{t,k},a_{{\rm N}}^{t,k},o_{{\rm N}}^{t+1,k},$
$s^{t,k},s^{t+1,k},r_{tot}^{t,k}\\}^{t=1,\dots,T}$) can be decomposed to the
agent-wise individual trajectories (_i.e._ ,
$\\{\\{o_{i}^{t,k},a_{i}^{t,k},o_{i}^{t+1,k},\bm{o_{-i}}^{t,k},$
$\hat{r}_{i}^{t,k},\hat{u}_{i}^{t,k},\hat{g}_{i}^{t,k}\\}^{t=1,\dots,T}\\}^{i=1,\dots,{\rm
N}}$), where $\bm{o_{-i}}^{t,k}$ represents all local observations except for
agent $i$. We then store all these individual trajectories into
single/multiple Decomposed Prioritized Experience Replay (DPER) according to
their agent type. Since we train multi-agent policies in an episodic manner,
the mean of Monte Carlo return $\hat{g}_{i}^{t,k}$ for each episode $k$ is
used as the sampling priority
$\hat{p}_{i}^{k}=\frac{1}{T}\sum_{t=1}^{T}\hat{g}_{i}^{t,k}$. To trade off
priority sampling and uniform sampling, we use a softmax function with a
temperature factor $\alpha$ to reshape the priorities in the decomposed
dataset:
$p_{i}^{k}=\frac{e^{\hat{p}_{i}^{k}/\alpha}}{\sum_{j\in\mathbb{J}}\sum_{k^{\prime}=1}^{K}e^{\hat{p}_{j}^{k^{\prime}}/\alpha}}\,,$
(7)
where $\mathbb{J}$ represents the set of IDs of all agents isomorphic to agent
$i$ (including $i$) and $K$ is the number of episodes. The temperature factor
$\alpha$ determines how much prioritization is used, with
$\alpha\rightarrow\infty$ corresponding to uniform sampling.
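The priority reshaping of Eq. (7) is a temperature-controlled softmax over episode priorities. A plain-Python sketch, with the priorities assumed already scaled to a common range:

```python
import math

def reshape_priorities(p_hats, alpha):
    """Eq. (7): p_k = exp(p_hat_k / alpha), normalised over all episodes
    (of all isomorphic agents). Large alpha -> uniform sampling."""
    m = max(p_hats)  # shift by the max for numerical stability
    es = [math.exp((p - m) / alpha) for p in p_hats]
    total = sum(es)
    return [e / total for e in es]
```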
In practice, considering the large gap in reward scales across environments,
all sampling priorities $\hat{p}_{i}^{k}$ are linearly scaled to a uniform
range, allowing our method to share the same temperature factor $\alpha$
across environments. Meanwhile, we use the sum-tree (Schaul et al. 2016) as
the storage structure of DPER to improve the sampling efficiency.
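For reference, a sum-tree in the sense of Schaul et al. (2016) can be sketched in a few lines; this toy assumes the capacity is a power of two and is not the implementation used in the paper.

```python
import random

class SumTree:
    """Binary sum-tree: leaf i holds priority p_i, each internal node the
    sum of its children, so sampling proportional to p is O(log n)."""
    def __init__(self, capacity):
        # capacity must be a power of two in this toy version
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity)  # leaves at [capacity, 2*capacity)

    def update(self, idx, priority):
        i = self.capacity + idx
        delta = priority - self.tree[i]
        while i >= 1:  # propagate the change up to the root
            self.tree[i] += delta
            i //= 2

    def total(self):
        return self.tree[1]

    def sample(self, u=None):
        """Descend with u uniform in [0, total); returns a leaf index."""
        if u is None:
            u = random.random() * self.total()
        i = 1
        while i < self.capacity:  # walk down to a leaf
            left = 2 * i
            if u < self.tree[left]:
                i = left
            else:
                u -= self.tree[left]
                i = left + 1
        return i - self.capacity
```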
StarCraft II (SC II)

| Quality | Map | Behavior | BC | QMIX | MABCQ | MACQL | ICQ | Ours |
|---|---|---|---|---|---|---|---|---|
| Low | 2s_vs_1sc | 2.8 | 3.2$\pm$1.2 | 1.1$\pm$0.0 | 1.7$\pm$0.0 | 4.5$\pm$0.1 | 4.4$\pm$0.1 | 10.5$\bm{\pm}$0.4 |
| Low | 3s_vs_5z | 2.9 | 2.9$\pm$0.4 | 2.3$\pm$0.2 | 4.5$\pm$0.2 | 4.5$\pm$0.1 | 5.2$\pm$0.1 | 11.1$\bm{\pm}$0.8 |
| Low | 2s3z | 3.2 | 3.1$\pm$1.7 | 2.6$\pm$0.0 | 4.7$\pm$0.7 | 5.7$\pm$0.4 | 9.1$\pm$0.3 | 12.3$\bm{\pm}$1.6 |
| Low | 8m | 3.1 | 3.1$\pm$0.2 | 2.0$\pm$0.6 | 5.7$\pm$1.2 | 2.5$\pm$0.6 | 5.5$\pm$0.5 | 10.8$\bm{\pm}$0.4 |
| Low | 1c3s5z | 5.5 | 5.9$\pm$0.8 | 2.4$\pm$1.9 | 8.4$\pm$1.2 | 6.1$\pm$1.2 | 9.4$\pm$0.0 | 13.7$\bm{\pm}$0.3 |
| Low | 10m_vs_11m | 3.8 | 4.2$\pm$0.7 | 0.7$\pm$0.5 | 4.3$\pm$1.8 | 5.2$\pm$0.1 | 8.0$\pm$0.2 | 12.3$\bm{\pm}$0.7 |
| Medium | 2s_vs_1sc | 9.8 | 9.8$\pm$0.2 | 3.6$\pm$1.6 | 6.6$\pm$1.6 | 9.8$\pm$0.1 | 10.2$\pm$0.0 | 16.5$\bm{\pm}$1.8 |
| Medium | 3s_vs_5z | 7.0 | 7.6$\pm$0.4 | 2.6$\pm$1.6 | 5.8$\pm$0.8 | 8.8$\pm$1.4 | 13.5$\pm$1.2 | 17.8$\bm{\pm}$1.5 |
| Medium | 2s3z | 7.0 | 7.1$\pm$0.3 | 2.3$\pm$0.5 | 5.8$\pm$1.2 | 7.1$\pm$0.5 | 9.1$\pm$0.3 | 14.8$\bm{\pm}$0.3 |
| Medium | 8m | 9.8 | 9.6$\pm$0.3 | 2.4$\pm$0.0 | 4.0$\pm$1.6 | 10.5$\pm$1.2 | 13.4$\pm$0.6 | 14.8$\bm{\pm}$1.1 |
| Medium | 1c3s5z | 10.3 | 10.0$\pm$0.1 | 8.1$\pm$1.6 | 11.8$\pm$2.0 | 10.1$\pm$0.2 | 12.7$\pm$0.3 | 15.4$\bm{\pm}$0.4 |
| Medium | 10m_vs_11m | 9.2 | 9.2$\pm$0.3 | 1.7$\pm$0.4 | 3.5$\pm$0.6 | 9.5$\pm$0.5 | 10.6$\pm$0.4 | 13.6$\bm{\pm}$1.3 |

Multi-agent Particle Environment (MPE)

| Quality | Map | Behavior | BC | QMIX | MABCQ | MACQL | ICQ | Ours |
|---|---|---|---|---|---|---|---|---|
| Low | CN_3ls3l | -157.7 | 161.1$\pm$4.2 | -231.9$\pm$3.0 | -174.3$\pm$22.7 | -217.2$\pm$18.1 | -117.8$\pm$11.5 | -92.4$\bm{\pm}$7.2 |
| Low | CN_4ls4l | -278.4 | -274.7$\pm$6.0 | -316.4$\pm$17.3 | -194.5$\pm$8.3 | -300.5$\pm$7.3 | -231.4$\pm$2.2 | -120.7$\bm{\pm}$13.3 |
| Low | PP_3p1p | -249.5 | -253.7$\pm$10.8 | -336.9$\pm$22.9 | -239.9$\pm$32.3 | -326.5$\pm$18.6 | -227.7$\pm$6.7 | -105.6$\bm{\pm}$9.0 |
| Medium | CN_3ls3l | -107.4 | -111.0$\pm$1.8 | -256.1$\pm$3.9 | -107.3$\pm$13.9 | -231.9$\pm$13.2 | -83.0$\pm$3.2 | -54.5$\bm{\pm}$2.9 |
| Medium | CN_4ls4l | -166.2 | -161.4$\pm$5.4 | -290.0$\pm$5.3 | -146.0$\pm$8.3 | -282.8$\pm$14.0 | -184.6$\pm$10.2 | -72.3$\bm{\pm}$2.3 |
| Medium | PP_3p1p | -155.4 | -156.2$\pm$7.3 | -296.6$\pm$28.8 | -230.2$\pm$27.9 | -272.5$\pm$14.4 | -158.6$\pm$5.2 | -80.4$\bm{\pm}$4.7 |

Multi-Agent mujoco (MAmujoco)

| Quality | Map | Behavior | BC | FacMAC | MABCQ | MACQL | ICQ | Ours |
|---|---|---|---|---|---|---|---|---|
| Low | HalfCheetah_2l | -110.5 | -110.3$\pm$1.1 | -152.5$\pm$18.5 | -100.9$\pm$2.8 | -70.8$\pm$29.1 | -109.3$\pm$3.1 | -0.3$\bm{\pm}$0.1 |
| Low | Walker_2l | -21.6 | -27.9$\pm$12.4 | -34.3$\pm$10.3 | -28.7$\pm$14.0 | 16.8$\pm$39.1 | -21.2$\pm$8.5 | 105.8$\bm{\pm}$20.8 |
| Medium | HalfCheetah_2l | 41.7 | 45.3$\pm$5.4 | -95.3$\pm$45.6 | 64.9$\pm$15.0 | 20.3$\pm$34.7 | 50.4$\pm$18.9 | 164.1$\bm{\pm}$13.0 |
| Medium | Walker_2l | 71.6 | 80.3$\pm$19.9 | -11.7$\pm$5.7 | 87.0$\pm$17.1 | 41.0$\pm$21.9 | 75.8$\pm$12.5 | 167.1$\bm{\pm}$36.6 |

Table 1: The mean and variance of the episodic return on agent-wise imbalanced
multi-agent datasets of various maps.
### Conservative Policy Learning with GAT-Based Critic
In this subsection, we will use the obtained type-wise DPER for multi-agent
policy learning under the centralized training decentralized execution (CTDE)
paradigm. As is shown in Figure 2(c), the input of the centralized critic for
each agent $i$ consists of local information and global information.
Specifically, the former includes the local action-observation history
$\tau^{t,k}_{i}=(o^{t,k}_{i},a^{t-1,k}_{i})$ (Peng et al. 2021) and current
action $a_{i}^{t,k}$ of each agent $i$. We encode them into the local
embedding vector $e_{i,{\rm local}}^{t,k}$ with two MLPs $f_{{\rm local}}$,
_i.e._ , $e_{i,{\rm local}}^{t,k}=f_{{\rm
local}}(\tau^{t,k}_{i},a_{i}^{t,k})$. The latter includes the local observations
of all agents $(o_{i}^{t,k},\bm{o_{-i}}^{t,k})$. We construct these
observations as a fully connected graph and aggregate the global embedding
vector $e_{i,{\rm global}}^{t,k}$ via a graph attention network (GAT)
(Veličković et al. 2018), as
$\displaystyle
w_{i,j}^{t,k}=\frac{\exp(\text{LeakyReLU}(W_{2}^{T}[W_{1}o_{i}^{t,k};W_{1}o_{j}^{t,k}]))}{\sum_{j^{\prime}\in
N}\exp(\text{LeakyReLU}(W_{2}^{T}[W_{1}o_{i}^{t,k};W_{1}o_{j^{\prime}}^{t,k}]))}$ (8)
$\displaystyle e_{i,{\rm global}}^{t,k}={\textstyle\sum_{j\in
N}w_{i,j}^{t,k}W_{1}o_{j}^{t,k}}\,,$
where $W_{1}$ and $W_{2}$ represent the learnable weights in GAT.
$(\cdot)^{T}$ represents transposition. $[\cdot;\cdot]$ represents
concatenation operation. Then, the centralized critic of each agent $i$ can be
expressed as $Q_{i}(\tau_{i}^{t,k},a_{i}^{t,k},\bm{o_{-i}}^{t,k})=f_{{\rm
agg}}([e_{i,{\rm local}}^{t,k};e_{i,{\rm global}}^{t,k}])$, where $f_{{\rm
agg}}$ represents the aggregation network with two MLPs. In order to simplify
the expression, we denote the critic of agent $i$ as $Q_{i}$ in our subsequent
description.
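Eq. (8) is the standard GAT attention coefficient followed by a weighted sum of the projected observations. The following plain-Python sketch (matrices given as lists of rows, `W2` as a vector scoring the concatenated pair; all names are illustrative, not the paper's code) makes the two steps explicit.

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def gat_aggregate(obs, W1, W2, slope=0.2):
    """Eq. (8): attention over a fully connected graph of agent
    observations, then e_global_i = sum_j w_ij * (W1 o_j)."""
    def matvec(W, v):
        return [sum(a * b for a, b in zip(row, v)) for row in W]
    h = [matvec(W1, o) for o in obs]  # projected observations W1 o_j
    embeds = []
    for i in range(len(obs)):
        # score every pair (i, j) via W2^T [h_i ; h_j], then softmax over j
        scores = [leaky_relu(sum(a * b for a, b in zip(W2, h[i] + h[j])),
                             slope)
                  for j in range(len(obs))]
        m = max(scores)
        es = [math.exp(s - m) for s in scores]
        total = sum(es)
        w = [e / total for e in es]
        embeds.append([sum(w[j] * h[j][d] for j in range(len(obs)))
                       for d in range(len(h[i]))])
    return embeds
```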
To alleviate the severe extrapolation error in offline agent learning, we plug
the filtering mechanism of CRR (Wang et al. 2020c) into individual policy
learning. This method can implicitly constrain the forward KL divergence
between the learning policy and the behavior policy, which is widely used in
offline single-agent learning (Wang et al. 2020c; Nair et al. 2020; Gulcehre
et al. 2021) and multi-agent learning (Yang et al. 2021). Formally, suppose
that the type of agent $i$ is $J$, and its corresponding DPER is denoted as
$\mathbb{B}_{J}$. All agent IDs belonging to type $J$ are denoted as
$j\in\mathbb{J}$. The priority-based sampling strategy in this dataset is
denoted as $P_{J}$. When the data at timestep $t$ of episode $k$ is sampled,
the actor $\pi_{i}$ w.r.t. parameters $\theta_{i}$ and the critic $Q_{i}$
w.r.t. parameters $\phi_{i}$ of agent $i$ are trained as follows
$\mathcal{L}_{{\rm
critic}}(\phi_{i})=\mathbb{E}_{P_{J}\left(\mathbb{B}_{J}\right)}\left[\frac{\eta}{\hat{u}_{i(j)}^{t,k}}\left(\hat{r}_{i(j)}^{t,k}+\gamma
Q_{i}^{\prime}-Q_{i}\right)^{2}\right]$ (9) $\mathcal{L}_{{\rm
actor}}(\theta_{i})=\mathbb{E}_{P_{J}\left(\mathbb{B}_{J}\right)}\left[-\frac{\eta}{\hat{u}_{i(j)}^{t,k}}\frac{e^{Q_{i}/\beta}}{Z}\left.Q_{i}\right|_{a=a_{i(j)}^{t,k}}\right]\,,$
(10)
where $(\cdot)_{i(j)}^{(\cdot)}$ indicates that although the original data is
sampled from the trajectory of agent $j$, it is used for training the network
of agent $i$. $Q_{i}^{\prime}$ represents target critic. $e^{Q_{i}/\beta}/Z$
is the filtering trick in CRR, where $Z$ is the normalization coefficient
within a mini-batch and $\beta$ is used to control how conservative the policy
update is. The uncertainty $1/\hat{u}_{(\cdot)}^{t,k}$ indicates that policy
learning should value individual rewards $\hat{r}_{i(j)}^{t,k}$ that are
precisely estimated, since a small $\hat{u}_{(\cdot)}^{t,k}$ means that the
reward network has high confidence for the corresponding predicted reward.
$\eta$ is used to control the importance weight of the uncertainty on actor-
critic learning.
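Per sample, the uncertainty weighting in Eqs. (9)-(10) reduces to simple scalar factors; the following sketch shows one critic TD term and one actor weight on scalar toy quantities, not the paper's network code.

```python
import math

def critic_loss_term(r_hat, u_hat, q, q_target_next, gamma, eta):
    """One-sample term of Eq. (9): (eta / u) * (r_hat + gamma * Q' - Q)^2."""
    td_error = r_hat + gamma * q_target_next - q
    return (eta / u_hat) * td_error ** 2

def actor_weight(q, beta, z, u_hat, eta):
    """Weight multiplying the policy objective in Eq. (10):
    (eta / u) * exp(Q / beta) / Z -- the CRR-style filter, scaled so that
    confidently estimated rewards (small u) count more."""
    return (eta / u_hat) * math.exp(q / beta) / z
```

A small uncertainty `u_hat` enlarges both terms, so samples whose decomposed reward is confidently estimated dominate training.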
Figure 3: Intermediate results of ARDNEM and DPER modules on 2s3z datasets.
## Experiments
In this section, we first introduce the data generation method for agent-wise
imbalanced multi-agent datasets in StarCraft II (Samvelyan et al. 2019),
multi-agent particle environment (MPE) (Lowe et al. 2017) and multi-agent
mujoco (MAmujoco) (Peng et al. 2021). Then, we evaluate our method SIT on
these datasets. Finally, we conduct analytical experiments to better
illustrate the superiority of our method.
### Data Generation for Agent-Wise Imbalanced Multi-Agent Datasets
We construct the agent-wise imbalanced multi-agent datasets based on six maps
in StarCraft II (see Table 2 in Appendix C.1) and three maps in MPE (see Table
3 and Figure 5 in Appendix C.1) for discrete control, and two maps in MAmujoco
(see Figure 6 in Appendix C.1) for continuous control. In order to obtain
diverse behavior policies for generating these imbalanced datasets, we first
online train the joint policies based on QMIX (Rashid et al. 2018) (discrete)
or FacMAC (Peng et al. 2021) (continuous) in each environment and store them
at fixed intervals during the training process. Then, these saved joint
policies are deposited into random, medium and expert policy pools according
to their episode returns, and the details are elaborated in Table 4, Table 5,
Table 6 of Appendix C.2. Since the policy levels among the individual agents
during online training are balanced (_e.g._ , the policy level of each
individual agent in a medium joint policy is also medium), we can directly
sample the required individual behavior policies from different policy pools
to generate the agent-wise imbalanced datasets.
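The data-generation recipe above can be sketched as follows, where `policy_pools` maps a level name to saved checkpoints from joint training and `rollout` is a hypothetical environment hook that runs one episode with the chosen joint behavior policy (both names are illustrative).

```python
import random

def build_imbalanced_dataset(policy_pools, levels, n_episodes, rollout):
    """Assemble an agent-wise imbalanced dataset: agent i's behavior
    policy is drawn from the pool matching its assigned level, e.g.
    levels = ['random', 'medium', 'expert'] for the three Stalkers
    in 3s_vs_5z."""
    episodes = []
    for _ in range(n_episodes):
        joint_policy = [random.choice(policy_pools[level])
                        for level in levels]
        episodes.append(rollout(joint_policy))
    return episodes
```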
For convenience, each dataset is denoted by the type and policy level of all
individual behavior policies; _e.g._ , in the 3s_vs_5z map of StarCraft II,
the agent-wise imbalanced multi-agent dataset
100%$[{\rm s_{1}^{r},s_{2}^{m},s_{3}^{e}}]$ indicates that the
policy levels of three Stalkers (agents) that generate this data are random,
medium and expert, respectively. In practice, since we are interested in non-
expert data, we generate low-quality and medium-quality agent-wise imbalanced
datasets based on the average episode return for all environments. Each of
them contains 10000 trajectories and the detailed configuration is shown in
Table 7, Table 8 and Table 9 of Appendix C.3.
### Evaluation on Agent-Wise Imbalanced Multi-Agent Datasets
We compare our proposed method against QMIX (discrete), FacMAC (continuous),
behavior cloning (BC), multi-agent versions of BCQ (Fujimoto, Meger, and
Precup 2019) (MABCQ) and CQL (Kumar et al. 2020) (MACQL), and the existing
state-of-the-art algorithm ICQ (Yang et al. 2021). The value decomposition
structure of MABCQ and MACQL follows Yang et al. (2021). Details of the
baseline implementations are in Appendix B. To better demonstrate the
effectiveness of our method, we employ the fine-tuned hyper-parameters
provided by the authors of BCQ and CQL. Details of the experimental
configurations are in Appendix D.
Table 1 shows the mean and variance of the episodic return of different
algorithms with 5 random seeds on tested maps, where the result corresponding
to Behavior represents the average episode return of the offline dataset. It
can be found that our method significantly outperforms all baselines on all
maps, and on some maps is even 2x better than the existing state-of-the-art
method ICQ (_e.g._ , the low-quality dataset based on 3s_vs_5z in StarCraft
II, the medium-quality dataset based on PP_3p1p in MPE, and all datasets in
MAmujoco), which demonstrates that our method can effectively find and exploit
good individual trajectories in agent-wise imbalanced multi-agent datasets to
boost the overall performance of multi-agent systems. Note that since the
online algorithms QMIX and FacMAC cannot handle the extrapolation error in
offline scenarios, their performance is much lower than that of the other
methods. We also provide
the training curves for all datasets in Appendix E.3 to better understand the
strengths of our approach.
Figure 4: Ablation experiments and the performance on agent-wise balanced
datasets.
### A Closer Look at SIT
ARDNEM and DPER are two important modules of our SIT. The former is used to
estimate individual rewards, and the latter constructs type-wise prioritized
experience replays. To better demonstrate the effectiveness of our method, we
investigate two questions: 1) Can the learned ARDNEM correctly decompose the
global reward into individual rewards? and 2) Can the priority in DPER
accurately reflect the quality of individual trajectories? To answer these two
questions, we show some intermediate results of our method on medium-quality
and low-quality datasets of 2s3z, where the composition of the former is ${\rm
100\%[s_{1}^{r},s_{2}^{e},z_{1}^{r},z_{2}^{e},z_{3}^{e}]}$ and the composition
of the latter is ${\rm
50\%[s_{1}^{r},s_{2}^{r},z_{1}^{r},z_{2}^{r},z_{3}^{r}]+50\%[s_{1}^{r},s_{2}^{m},z_{1}^{r},z_{2}^{m},z_{3}^{m}]}$.
Figure 3(a) illustrates the decomposed weighted reward $\hat{r}$, estimated by
ARDNEM (denoted as WR in the legend) during training on the medium-quality
dataset. We can observe that ${\rm s_{1}}$’s reward (red) is lower than ${\rm
s_{2}}$’s reward (violet), and ${\rm z_{1}}$’s reward (blue) is lower than
${\rm z_{2}}$’s reward (orange) and ${\rm z_{3}}$’s reward (green). This
decomposition result of ARDNEM agrees with our expectation because all
individual trajectories of ${\rm s_{1}}$ and ${\rm z_{1}}$ are generated by
random behavior policies, while the individual trajectories of other agents
are generated by expert behavior policies. Figure 3(d) illustrates the
decomposed weighted reward on the low-quality dataset. Similarly, the
comparisons also agree with the intended decomposition we desire to achieve.
We further provide the corresponding illustrations for all maps in Appendix
E.4, and their decomposition result also matches their data composition (Table
7, Table 8 and Table 9 in Appendix C.3).
Since 2s3z map has two types of agents, we store the individual trajectories
of each agent into the corresponding prioritized experience replays by their
type. For the medium-quality dataset, Figure 3(b) and Figure 3(c) show the
priority distribution of all individual trajectories in each prioritized
experience replay. These show that most individual trajectories of ${\rm
s_{1}}$ have lower priority than those of ${\rm s_{2}}$, and that ${\rm
z_{1}}$ has lower priority than ${\rm z_{2}}$ and ${\rm z_{3}}$. This
comparison is fully consistent with the quality of the individual
trajectories, indicating that the priorities in DPER are as intended. Figure
3(e) and Figure 3(f) lead to similar conclusions on the low-quality dataset. A
closer look at Figure 3(f) reveals that the priority distribution is
multimodal, indicating that it correctly captures the quality distribution
even when the variance of the quality is large, _e.g._ , 50% ${\rm
z_{2}^{r}}$ + 50% ${\rm z_{2}^{m}}$.
### Analytical Experiment
We conduct some analytical experiments to better understand our proposed
method. All experiments are evaluated on the low-quality dataset of 2s3z
unless otherwise stated.
#### Ablation study.
Figure 4(a) illustrates the ablation study of stage I and stage III, where
‘(w/o decom)’ means that the critic of each agent is directly trained
based on the total reward without decomposition. ‘(w/o attention)’ represents
that the weight $\lambda$ of each estimated reward is always equal to 1
instead of a learnable value. ‘(w/o ensemble)’ removes the ensemble mechanism.
‘(w/o GAT)’ removes the GAT aggregation. The result shows that each part is
important in our SIT. To better illustrate the role of priority sampling in
stage II, we construct an extreme low-quality dataset based on 2s3z, _i.e._ ,
${\rm
99.5\%[s_{1}^{r},s_{2}^{r},z_{1}^{r},z_{2}^{r},z_{3}^{r}]+0.5\%[s_{1}^{r},s_{2}^{m},z_{1}^{r},z_{2}^{m},z_{3}^{m}]}$.
The result in Figure 4(b) shows that the priority sampling in stage II plays a
key role when the good individual trajectories are sparse in the dataset.
#### Evaluation on agent-wise balanced multi-agent datasets.
To evaluate the performance of our method on agent-wise balanced multi-agent
datasets, we generate datasets with random, medium and expert quality on the
2s3z map for evaluation. Figure 4(c) shows that our method achieves
performance similar to ICQ on these datasets and significantly outperforms the
other baselines, which indicates that our method still works well on
agent-wise balanced datasets.
#### Computational complexity.
According to our experiments, the running time of Stage I (supervised
learning) is only 10% of ICQ's, as it does not involve complex operations
(_e.g._ , the CRR trick) during gradient backpropagation. The running time of
Stage II is negligible (about 10 s), as it only requires inference of the
ARDNEM. The running time of Stage III is similar to ICQ's. Almost all
experimental maps can be evaluated within 1 GPU-day, except the 8m, 1c3s5z and
10m_vs_11m maps in StarCraft II, which take 2-3 GPU-days.
We also conduct experiments about hyperparameter selection $\alpha,\beta$ and
$\eta$. See Appendix E.2 for details.
## Conclusion
In this paper, we investigate the diversity of individual trajectories in
offline multi-agent datasets and empirically show that current offline
algorithms cannot fully exploit the useful knowledge in complex and mixed
datasets, _e.g._ , agent-wise imbalanced multi-agent datasets. To address this
problem, we propose a novel algorithmic framework called Shared Individual
Trajectories (SIT). It can effectively decompose the joint trajectories into
individual trajectories, so that the good individual trajectories can be
shared among agents to boost the overall performance of multi-agent systems.
Extensive experiments on both discrete control (_i.e._ , StarCraft II and MPE)
and continuous control (_i.e._ , MAmujoco) demonstrate the effectiveness and
superiority of our method in complex and mixed datasets.
## Acknowledgments
This work was supported in part by National Key Research and Development
Program of China (2022YFC3340900), National Natural Science Foundation of
China (No. 62037001, U20A20387, No. 62006207), Young Elite Scientists
Sponsorship Program by CAST (2021QNRC001), Project by Shanghai AI Laboratory
(P22KS00111), the StarryNight Science Fund of Zhejiang University Shanghai
Institute for Advanced Study (SN-ZJU-SIAS-0010), Natural Science Foundation of
Zhejiang Province (LQ21F020020), Fundamental Research Funds for the Central
Universities (226-2022-00142, 226-2022-00051). Baoxiang Wang is partially
supported by National Natural Science Foundation of China (62106213, 72150002)
and Shenzhen Science and Technology Program (RCBS20210609104356063,
JCYJ20210324120011032).
## References
* Bazzan (2009) Bazzan, A. L. 2009. Opportunities for multiagent systems and multiagent reinforcement learning in traffic control. _Autonomous Agents and Multi-Agent Systems_ , 18(3): 342–375.
* Brockman et al. (2016) Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; and Zaremba, W. 2016. Openai gym. _arXiv preprint arXiv:1606.01540_.
* Chang, Ho, and Kaelbling (2003) Chang, Y.-H.; Ho, T.; and Kaelbling, L. 2003. All learning is local: Multi-agent learning in global reward games. In _NeurIPS_.
* Christianos, Schäfer, and Albrecht (2020) Christianos, F.; Schäfer, L.; and Albrecht, S. 2020. Shared experience actor-critic for multi-agent reinforcement learning. In _NeurIPS_.
* Chua et al. (2018) Chua, K.; Calandra, R.; McAllister, R.; and Levine, S. 2018. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In _NeurIPS_.
* Foerster et al. (2017) Foerster, J.; Nardelli, N.; Farquhar, G.; Afouras, T.; Torr, P. H.; Kohli, P.; and Whiteson, S. 2017. Stabilising experience replay for deep multi-agent reinforcement learning. In _ICML_.
* Fujimoto and Gu (2021) Fujimoto, S.; and Gu, S. S. 2021. A minimalist approach to offline reinforcement learning. In _NeurIPS_.
* Fujimoto, Meger, and Precup (2019) Fujimoto, S.; Meger, D.; and Precup, D. 2019. Off-policy deep reinforcement learning without exploration. In _ICML_.
* García (2015) García, J. 2015. A comprehensive survey on safe reinforcement learning. _Journal of Machine Learning Research_ , 16(1): 1437–1480.
* Gronauer (2022) Gronauer, S. 2022. Multi-agent deep reinforcement learning: a survey. _Artificial Intelligence Review_ , 55(2): 895–943.
* Gulcehre et al. (2021) Gulcehre, C.; Colmenarejo, S. G.; Wang, Z.; Sygnowski, J.; Paine, T.; Zolna, K.; Chen, Y.; Hoffman, M.; Pascanu, R.; and de Freitas, N. 2021. Regularized behavior value estimation. _arXiv preprint arXiv:2103.09575_.
* Hüttenrauch et al. (2019) Hüttenrauch, M.; Adrian, S.; Neumann, G.; et al. 2019. Deep reinforcement learning for swarm systems. _Journal of Machine Learning Research_ , 20(54): 1–31.
* Jiang and Lu (2021a) Jiang, J.; and Lu, Z. 2021a. Offline decentralized multi-agent reinforcement learning. _arXiv preprint arXiv:2108.01832_.
* Jiang and Lu (2021b) Jiang, J.; and Lu, Z. 2021b. Online Tuning for Offline Decentralized Multi-Agent Reinforcement Learning. OpenReview preprint.
* Kumar et al. (2019) Kumar, A.; Fu, J.; Soh, M.; Tucker, G.; and Levine, S. 2019. Stabilizing off-policy q-learning via bootstrapping error reduction. In _NeurIPS_.
* Kumar et al. (2020) Kumar, A.; Zhou, A.; Tucker, G.; and Levine, S. 2020. Conservative q-learning for offline reinforcement learning. In _NeurIPS_.
* Li et al. (2021) Li, J.; Kuang, K.; Wang, B.; Liu, F.; Chen, L.; Wu, F.; and Xiao, J. 2021. Shapley counterfactual credits for multi-agent reinforcement learning. In _SIGKDD_.
* Lin (1992) Lin, L.-J. 1992. Self-improving reactive agents based on reinforcement learning, planning and teaching. _Machine learning_ , 8(3): 293–321.
* Lowe et al. (2017) Lowe, R.; Wu, Y. I.; Tamar, A.; Harb, J.; Pieter Abbeel, O.; and Mordatch, I. 2017. Multi-agent actor-critic for mixed cooperative-competitive environments. In _NeurIPS_.
* Meng et al. (2021) Meng, L.; Wen, M.; Yang, Y.; Le, C.; Li, X.; Zhang, W.; Wen, Y.; Zhang, H.; Wang, J.; and Xu, B. 2021. Offline Pre-trained Multi-Agent Decision Transformer: One Big Sequence Model Conquers All StarCraftII Tasks. _arXiv preprint arXiv:2112.02845_.
* Nair et al. (2020) Nair, A.; Gupta, A.; Dalal, M.; and Levine, S. 2020. Awac: Accelerating online reinforcement learning with offline datasets. _arXiv preprint arXiv:2006.09359_.
* Oh et al. (2016) Oh, J.; Chockalingam, V.; Lee, H.; et al. 2016. Control of memory, active perception, and action in minecraft. In _ICML_.
* Oh et al. (2020) Oh, Y.; Lee, K.; Shin, J.; Yang, E.; and Hwang, S. J. 2020. Learning to Sample with Local and Global Contexts in Experience Replay Buffer. In _ICLR_.
* Oliehoek and Amato (2016) Oliehoek, F. A.; and Amato, C. 2016. _A concise introduction to decentralized POMDPs_. Springer.
* Omidshafiei et al. (2017) Omidshafiei, S.; Pazis, J.; Amato, C.; How, J. P.; and Vian, J. 2017. Deep decentralized multi-task multi-agent reinforcement learning under partial observability. In _ICML_.
* Osband et al. (2016) Osband, I.; Blundell, C.; Pritzel, A.; and Van Roy, B. 2016. Deep exploration via bootstrapped DQN. In _NeurIPS_.
* Palmer et al. (2018) Palmer, G.; Tuyls, K.; Bloembergen, D.; and Savani, R. 2018. Lenient Multi-Agent Deep Reinforcement Learning. In _AAMAS_.
* Peng et al. (2021) Peng, B.; Rashid, T.; Schroeder de Witt, C.; Kamienny, P.-A.; Torr, P.; Böhmer, W.; and Whiteson, S. 2021. Facmac: Factored multi-agent centralised policy gradients. In _NeurIPS_.
* Rashid et al. (2018) Rashid, T.; Samvelyan, M.; Schroeder, C.; Farquhar, G.; Foerster, J.; and Whiteson, S. 2018. Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning. In _ICML_.
* Samvelyan et al. (2019) Samvelyan, M.; Rashid, T.; Schroeder de Witt, C.; Farquhar, G.; Nardelli, N.; Rudner, T. G.; Hung, C.-M.; Torr, P. H.; Foerster, J.; and Whiteson, S. 2019. The StarCraft Multi-Agent Challenge. In _AAMAS_.
* Schaul et al. (2016) Schaul, T.; Quan, J.; Antonoglou, I.; and Silver, D. 2016. Prioritized Experience Replay. In _ICLR_.
* Shalev-Shwartz, Shammah, and Shashua (2016) Shalev-Shwartz, S.; Shammah, S.; and Shashua, A. 2016. Safe, multi-agent, reinforcement learning for autonomous driving. _arXiv preprint arXiv:1610.03295_.
* Sunehag et al. (2018) Sunehag, P.; Lever, G.; Gruslys, A.; Czarnecki, W. M.; Zambaldi, V. F.; Jaderberg, M.; Lanctot, M.; Sonnerat, N.; Leibo, J. Z.; Tuyls, K.; et al. 2018. Value-Decomposition Networks For Cooperative Multi-Agent Learning Based On Team Reward. In _AAMAS_.
* Todorov, Erez, and Tassa (2012) Todorov, E.; Erez, T.; and Tassa, Y. 2012. Mujoco: A physics engine for model-based control. In _2012 IEEE/RSJ international conference on intelligent robots and systems_ , 5026–5033. IEEE.
* Veličković et al. (2018) Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; and Bengio, Y. 2018. Graph Attention Networks. In _ICLR_.
* Wang et al. (2020a) Wang, H.-n.; Liu, N.; Zhang, Y.-y.; Feng, D.-w.; Huang, F.; Li, D.-s.; and Zhang, Y.-m. 2020a. Deep reinforcement learning: a survey. _Frontiers of Information Technology & Electronic Engineering_.
* Wang et al. (2020b) Wang, W.; Yang, T.; Liu, Y.; Hao, J.; Hao, X.; Hu, Y.; Chen, Y.; Fan, C.; and Gao, Y. 2020b. Action semantics network: Considering the effects of actions in multiagent systems. In _ICLR_.
* Wang et al. (2020c) Wang, Z.; Novikov, A.; Zolna, K.; Merel, J. S.; Springenberg, J. T.; Reed, S. E.; Shahriari, B.; Siegel, N.; Gulcehre, C.; Heess, N.; et al. 2020c. Critic regularized regression. In _NeurIPS_.
* Wu, Tucker, and Nachum (2019) Wu, Y.; Tucker, G.; and Nachum, O. 2019. Behavior regularized offline reinforcement learning. _arXiv preprint arXiv:1911.11361_.
* Yang et al. (2020a) Yang, Y.; Hao, J.; Chen, G.; Tang, H.; Chen, Y.; Hu, Y.; Fan, C.; and Wei, Z. 2020a. Q-value path decomposition for deep multiagent reinforcement learning. In _ICML_.
* Yang et al. (2020b) Yang, Y.; Hao, J.; Liao, B.; Shao, K.; Chen, G.; Liu, W.; and Tang, H. 2020b. Qatten: A general framework for cooperative multiagent reinforcement learning. _arXiv preprint arXiv:2002.03939_.
* Yang et al. (2021) Yang, Y.; Ma, X.; Chenghao, L.; Zheng, Z.; Zhang, Q.; Huang, G.; Yang, J.; and Zhao, Q. 2021. Believe what you see: Implicit constraint approach for offline multi-agent reinforcement learning. In _NeurIPS_.
* Zha et al. (2019) Zha, D.; Lai, K.-H.; Zhou, K.; and Hu, X. 2019. Experience Replay Optimization. In _IJCAI_.
## Appendix
### A. Algorithm
Input: offline joint dataset $\mathbb{B}$, ARDNEM training epochs $L_{{\rm ARDNEM}}$, actor-critic training epochs $L_{{\rm AC}}$, priority sampling hyperparameter $\alpha$, conservative hyperparameter $\beta$, uncertainty weight $\eta$, network update interval $d$.

1. Initialize the ARDNEM $z$ with parameters $\psi$, the critic networks $Q_{i}$ with parameters $\phi_{i}$, and the actor networks $\pi_{i}$ with parameters $\theta_{i}$, all randomly.
2. Initialize the target critic networks $Q_{i}^{\prime}$.
3. Initialize the DPER $\mathbb{B}_{J}=\emptyset$.

// Stage I: learn the ARDNEM.
4. for $l_{{\rm ARDNEM}}=1$ to $L_{{\rm ARDNEM}}$ do
5.     Sample joint trajectories $\{o_{1}^{t,k},a_{1}^{t,k},o_{1}^{t+1,k},\dots,o_{{\rm N}}^{t,k},a_{{\rm N}}^{t,k},o_{{\rm N}}^{t+1,k},s^{t,k},r_{tot}^{t,k}\}$ from $\mathbb{B}$.
6.     Train the reward decomposition network $z$ with uniform sampling according to $\mathcal{L}_{{\rm MSE}}(\psi)=\mathbb{E}_{{\rm Uniform}(\mathbb{B})}\frac{1}{M}\sum_{m=1}^{M}\left(z(o_{1}^{t,k},a_{1}^{t,k},\dots,o_{{\rm N}}^{t,k},a_{{\rm N}}^{t,k},s^{t,k})-r_{tot}^{t,k}\right)^{2}$.

// Stage II: reconstruct the joint offline dataset into the type-wise DPER.
7. for $k=1$ to $K$ do
8.     for $i=1$ to $N$ do
9.         for $t=1$ to $T$ do
10.            Obtain the ensembled weighted rewards $\{\lambda_{i,m}^{t,k}r_{i,m}^{t,k}\}_{m=1,\dots,M}$ from the input $\{o_{i}^{t,k},a_{i}^{t,k},s^{t,k}\}$ and the learned ARDNEM.
11.            Calculate the estimated weighted reward $\hat{r}_{i}^{t,k}=\frac{1}{M}\sum_{m=1}^{M}\lambda_{i,m}^{t,k}r_{i,m}^{t,k}$.
12.            Calculate the uncertainty $\hat{u}_{i}^{t,k}=\sqrt{\frac{1}{M}\sum_{m=1}^{M}\left(\lambda_{i,m}^{t,k}r_{i,m}^{t,k}-\hat{r}_{i}^{t,k}\right)^{2}}$.
13.            Calculate the Monte Carlo return $\hat{g}_{i}^{t,k}=\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}\hat{r}_{i}^{t^{\prime},k}$.
14.        Calculate the episode priority $\hat{p}_{i}^{k}=\frac{1}{T}\sum_{t=1}^{T}\hat{g}_{i}^{t,k}$.
15.        Reshape the episode priority $p_{i}^{k}=\frac{e^{\hat{p}_{i}^{k}/\alpha}}{\sum_{j\in\mathbb{J},\,k=1}^{K}e^{\hat{p}_{j}^{k}/\alpha}}$.
16.        Store the individual trajectory $\{\{o_{i}^{t,k},a_{i}^{t,k},o_{i}^{t+1,k},\bm{o_{-i}}^{t,k},\hat{r}_{i}^{t,k},\hat{u}_{i}^{t,k}\}_{t=1,\dots,T},p_{i}^{k}\}$ in the DPER $\mathbb{B}_{J}$ according to the agent type $J$.

// Stage III: conservative policy training with the GAT-based critic.
17. for $l_{{\rm AC}}=1$ to $L_{{\rm AC}}$ do
18.     Sample individual trajectories from $\mathbb{B}_{J}$ with the priority-based sampling strategy $P_{J}$ for each agent $i$.
19.     Train the critic networks $Q_{i}$ according to $\mathcal{L}_{{\rm critic}}(\phi_{i})=\mathbb{E}_{P_{J}\left(\mathbb{B}_{J}\right)}\left[\frac{\eta}{\hat{u}_{i(j)}^{t,k}}\left(\hat{r}_{i(j)}^{t,k}+\gamma Q_{i}^{\prime}\left(\tau_{i(j)}^{t+1,k},a_{i(j)}^{t+1,k},\bm{o_{-i(-j)}}^{t+1,k}\right)-Q_{i}\left(\tau_{i(j)}^{t,k},a_{i(j)}^{t,k},\bm{o_{-i(-j)}}^{t,k}\right)\right)^{2}\right]$.
20.     Train the actor networks $\pi_{i}$ according to $\mathcal{L}_{{\rm actor}}(\theta_{i})=\mathbb{E}_{P_{J}\left(\mathbb{B}_{J}\right)}\left[-\frac{\eta}{\hat{u}_{i(j)}^{t,k}}\frac{e^{Q_{i}\left(\tau_{i(j)}^{t,k},a_{i(j)}^{t,k},\bm{o_{-i(-j)}}^{t,k}\right)/\beta}}{Z}\left.Q_{i}\left(\tau_{i(j)}^{t,k},a,\bm{o_{-i(-j)}}^{t,k}\right)\right|_{a=a_{i(j)}^{t,k}}\right]$.
21.     if $l_{{\rm AC}}$ mod $d=0$ then update the target networks: $Q_{i}^{\prime}=Q_{i}$.

Algorithm 1: Shared Individual Trajectories (SIT)
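As a rough pure-Python sketch (not the authors' implementation; the function names are ours), the per-timestep reward statistics, the Monte Carlo return, and the priority softmax of Stage II can be written as:

```python
import math

def reward_stats(ensemble_rewards):
    """Ensemble mean (estimated weighted reward) and ensemble std
    (uncertainty) over the M weighted rewards lambda_m * r_m of one timestep."""
    m = len(ensemble_rewards)
    r_hat = sum(ensemble_rewards) / m
    u_hat = math.sqrt(sum((x - r_hat) ** 2 for x in ensemble_rewards) / m)
    return r_hat, u_hat

def episode_priority(rewards, gamma=0.99):
    """Episode priority: discounted Monte Carlo returns averaged over the episode."""
    T = len(rewards)
    returns = [sum(gamma ** (tp - t) * rewards[tp] for tp in range(t, T))
               for t in range(T)]
    return sum(returns) / T

def softmax_priorities(priorities, alpha=0.2):
    """Reshape episode priorities into sampling probabilities with temperature alpha."""
    z = [math.exp(p / alpha) for p in priorities]
    s = sum(z)
    return [x / s for x in z]
```

Here a small $\alpha$ concentrates sampling on the highest-priority episodes, while a large $\alpha$ approaches uniform sampling.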
### B. Baseline implementations
BC. Each agent policy $\pi_{i}$ w.r.t. parameters $\theta_{i}$ is optimized by
the following loss
$\displaystyle\mathcal{L}_{\rm
BC}(\theta_{i})=\mathbb{E}_{\tau_{i},a_{i}\sim\mathbb{B}}[-\log(\pi_{i}(a_{i}\mid\tau_{i}))].$
(11)
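A minimal sketch of this negative log-likelihood loss for discrete actions (illustrative helper, not the paper's code):

```python
import math

def bc_loss(action_probs, actions):
    """Mean negative log-likelihood of dataset actions under the policy.

    action_probs: per-step lists of action probabilities pi_i(. | tau_i);
    actions: index of the dataset action taken at each step.
    """
    return sum(-math.log(p[a]) for p, a in zip(action_probs, actions)) / len(actions)
```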
MABCQ. Suppose the mixer network $Q$ w.r.t. parameters $\phi$, the individual
$Q$ networks $Q_{i}$ w.r.t. parameters $\phi_{i}$, and the behavior policies
$\mu_{i}$ w.r.t. parameters $\theta_{i}$. We first train the behavior policies
$\mu_{i}$ by behavior cloning; the agent networks are then trained with the
following loss
$\begin{aligned} \mathcal{L}_{\rm
BCQ}(\phi,\phi_{i})&=\mathbb{E}_{\bm{\tau},\bm{a},r,\bm{\tau^{\prime}}\sim\mathbb{B}}\left[\left(r+\gamma\max_{\tilde{\bm{a}}^{[j]}}Q^{\prime}(\bm{\tau^{\prime}},\tilde{\bm{a}}^{[j]})-Q(\bm{\tau},\bm{a})\right)^{2}\right]\\\
&\qquad\quad\quad\tilde{\bm{a}}^{[j]}=\bm{a}^{[j]}+\xi(\bm{\tau},\bm{a}^{[j]})\end{aligned},$
(12)
where $Q(\bm{\tau},\bm{a})=w_{i}Q_{i}(\tau_{i},a_{i})+b$ and
$\xi(\bm{\tau},\bm{a}^{[j]})$ denotes the perturbation model, which is
decomposed as $\xi_{i}(\tau_{i},a_{i}^{[j]})$. If
$\frac{\mu_{i}(a_{i}^{[j]}\mid\tau_{i})}{\max_{j=1,\dots,m}\mu_{i}(a_{i}^{[j]}\mid\tau_{i})}\leq\zeta$
for agent $i$, then $a_{i}^{[j]}$ is considered an unfamiliar action and
$\xi_{i}(\tau_{i},a_{i}^{[j]})$ masks $a_{i}^{[j]}$ in the
$Q_{i}$-value maximization.
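The masking rule can be sketched as follows for discrete actions (hypothetical helper name; assumes access to the cloned behavior probabilities $\mu_{i}(\cdot\mid\tau_{i})$):

```python
def bcq_argmax(q_values, mu_probs, zeta=0.3):
    """Argmax over Q restricted to actions the cloned behavior policy deems
    familiar: actions with mu(a)/max_a' mu(a') <= zeta are masked out."""
    m = max(mu_probs)
    allowed = [i for i, p in enumerate(mu_probs) if p / m > zeta]
    return max(allowed, key=lambda i: q_values[i])
```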
MACQL. Suppose the mixer network $Q$ w.r.t. parameters $\phi$ and the
individual $Q$ networks $Q_{i}$ w.r.t. parameters $\phi_{i}$; the agent
networks are then trained with the following loss
$\displaystyle\mathcal{L}_{\rm CQL}(\phi,\phi_{i})$ $\displaystyle=\beta_{\rm
CQL}\mathbb{E}_{\tau_{i},a_{i},\bm{\tau},\bm{a}\sim\mathbb{B}}\left[\sum_{i}\log\sum_{a_{i}}\exp(w_{i}Q_{i}(\tau_{i},a_{i})+b)-\mathbb{E}_{\bm{a}\sim\bm{\mu}(\bm{a}\mid\bm{\tau})}[Q(\bm{\tau},\bm{a})]\right]$
(13)
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{1}{2}\mathbb{E}_{\bm{\tau}\sim\mathbb{B},\bm{a}\sim\bm{\mu}(\bm{a}\mid\bm{\tau})}\left[\left(y_{\rm
CQL}-Q(\bm{\tau},\bm{a})\right)^{2}\right],$
where $y_{\rm CQL}$ is calculated based on $n$-step off-policy estimation
(e.g., Tree Backup algorithm).
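For a single state with discrete actions, the conservative term in Equation (13) reduces to a log-sum-exp over actions minus the value of the dataset action; a sketch under that simplification (our naming, not the paper's code):

```python
import math

def cql_penalty(q_values, data_action, beta_cql=2.0):
    """Single-state conservative term: beta * (logsumexp_a Q(a) - Q(a_data)).
    Minimizing it pushes Q down on out-of-distribution actions relative to
    the dataset action."""
    lse = math.log(sum(math.exp(q) for q in q_values))
    return beta_cql * (lse - q_values[data_action])
```

Note that the penalty shrinks as the dataset action's Q-value rises relative to the others, which is the intended conservatism.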
ICQ. Suppose the centralized critic network $Q$ w.r.t. parameters $\phi$, the
individual critic networks $Q_{i}$ w.r.t. parameters $\phi_{i}$, and the
individual policies $\pi_{i}$ w.r.t. parameters $\theta_{i}$; the
actor-critic is then trained with the following losses
$\displaystyle\mathcal{L}_{{\rm actor}}(\theta_{i})$
$\displaystyle=\mathbb{E}_{\tau_{i},a_{i}\sim\mathbb{B}}\left[-\frac{1}{Z_{i}}\log(\pi_{i}(a_{i}\mid\tau_{i}))\exp\left(\frac{w_{i}Q_{i}(\tau_{i},a_{i})}{\beta_{{\rm
ICQ}}}\right)\right]$ (14) $\displaystyle\mathcal{L}_{{\rm
critic}}(\phi,\phi_{i})$
$\displaystyle=\mathbb{E}_{\bm{\tau},\bm{a},r,\bm{\tau^{\prime}},\bm{a^{\prime}}\sim\mathbb{B}}\left[\sum_{t\geq
0}(\gamma\lambda_{{\rm
ICQ}})^{t}\left[r+\gamma\frac{\exp\left(\frac{1}{\beta_{{\rm
ICQ}}}Q(\bm{\tau^{\prime}},\bm{a^{\prime}})\right)}{Z}Q(\bm{\tau^{\prime}},\bm{a^{\prime}})-Q(\bm{\tau},\bm{a})\right]\right]^{2},$
where $Q(\bm{\tau},\bm{a})=w_{i}Q_{i}(\tau_{i},a_{i})+b$ represents the linear
mixer of the centralized critic network.
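The $\exp(Q/\beta_{\rm ICQ})/Z$ weighting in the actor loss can be sketched as a normalized softmax over a batch of dataset samples (illustrative helper; the max-subtraction is a standard numerical-stability trick, not part of the paper):

```python
import math

def icq_weights(q_values, beta=0.1):
    """Normalized weights exp(Q/beta)/Z over a batch of dataset samples;
    a small beta concentrates the actor update on the highest-Q samples."""
    m = max(q_values)  # subtract max before exponentiating for stability
    z = [math.exp((q - m) / beta) for q in q_values]
    s = sum(z)
    return [x / s for x in z]
```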
### C. Environment and offline dataset information
C.1 Environment information
In this paper, we construct the agent-wise imbalanced multi-agent datasets
based on six maps in StarCraft II, three modified maps for partially
observable settings in the multi-agent particle environment (MPE) and two maps
in multi-agent mujoco (MAmujoco). To better understand our experiments, we
give a brief introduction to these environments.
Maps | Controlled ally agents | Built-in enemy agents | Difficulties
---|---|---|---
2s_vs_1sc | 2 Stalkers | 1 Spine Crawler | Super Hard
3s_vs_5z | 3 Stalkers | 5 Zealots | Super Hard
2s3z | 2 Stalkers & 3 Zealots | 2 Stalkers & 3 Zealots | Super Hard
8m | 8 Marines | 8 Marines | Super Hard
1c3s5z | 1 Colossus, 3 Stalkers & 5 Zealots | 1 Colossus, 3 Stalkers & 5 Zealots | Super Hard
10m_vs_11m | 10 Marines | 11 Marines | Super Hard
Table 2: The information of test maps on StarCraft II.
#### StarCraft II.
StarCraft II micromanagement aims to accurately control individual units to
complete cooperative tasks. In this environment, each enemy unit is
controlled by the built-in AI, while each allied unit is controlled by a
learned policy. The local observation of each agent contains the following
attributes for both allied and enemy units within the sight range: ${\rm
distance}$, ${\rm relative\_x}$, ${\rm relative\_y}$, ${\rm health}$, ${\rm
shield}$, and ${\rm unit\_type}$. The action space of each agent includes
${\rm noop}$, ${\rm move[direction]}$, ${\rm attack[enemyid]}$, and ${\rm
stop}$. Under the control of these actions, each agent can move and attack in
various maps. The reward is based on the hit-point damage dealt to the
enemy units, together with special bonuses for killing enemy units and
winning the battle. In our experiments, we choose 2s_vs_1sc, 3s_vs_5z, 2s3z,
8m, 1c3s5z, and 10m_vs_11m for evaluation; more detailed information about
these maps is given in Table 2.
Figure 5: Three modified maps for partially observable settings in MPE.

Maps | Controlled agents | Targets | Visible agents | Visible targets
---|---|---|---|---
CN_3ls3l (Cooperative navigation) | 3 Landmark Seekers | 3 Landmarks | 1 Landmark Seeker | 2 Landmarks
CN_4ls4l (Cooperative navigation) | 4 Landmark Seekers | 4 Landmarks | 1 Landmark Seeker | 2 Landmarks
PP_3p1p (Predator-prey) | 3 Predators | 1 Prey | 1 Predator | 1 Prey
Table 3: The information of test maps on MPE.
#### Multi-agent particle environment (MPE).
MPE is a common benchmark containing various multi-agent games. In this
paper, we choose the discrete versions of cooperative navigation (CN) and
predator-prey (PP) for evaluation. In the CN task, landmark seekers (agents)
must cooperate through physical actions to reach a set of landmarks without
colliding with each other. In the PP task, several slower cooperating
predators must chase a faster prey and avoid collisions in a randomly
generated environment. We set the prey as a heuristic agent whose movement
direction is always away from the nearest predator. Note that in the original
MPE environment, each agent can directly observe the global information of
the environment. However, we modify the environment to the partially
observable setting, which is more challenging and practical; we therefore
redefine the local observations of the agents in each map as shown in
Figure 5 and Table 3.
Figure 6: Two maps in MAmujoco.
#### Multi-agent mujoco (MAmujoco).
MAmujoco is a novel benchmark for continuous cooperative multi-agent robotic
control, built on the popular single-agent MuJoCo control suite (Todorov,
Erez, and Tassa 2012) included with OpenAI Gym (Brockman et al. 2016).
Each agent's action space in MAmujoco is given by the joint action space over
all motors controllable by that agent. As shown in Figure 6, we select the
HalfCheetah_2l and Walker_2l maps with 200 termination steps for evaluation.
In these two maps, each agent controls the joints of one color. For example,
the agent corresponding to the green partition in HalfCheetah_2l (Figure 6(a))
consists of three joints (joint ids 1, 2, and 3) and four adjacent body
segments. Each joint has an action space $[-1,1]$, so the action space of
each agent is a 3-dimensional vector with each entry in $[-1,1]$.
C.2 Policy pools for each environment
To obtain diverse behavior policies for generating agent-wise imbalanced
multi-agent datasets, we first train joint policies online with QMIX (Rashid
et al. 2018) (discrete) or FacMAC (Peng et al. 2021) (continuous) in each
environment and store them at fixed intervals during training. These saved
joint policies are then assigned to random, medium, and expert policy pools
according to their episode returns. Table 4, Table 5, and Table 6 show the
episode returns corresponding to the different policy pools in each map,
where the 'Episode return range' is determined by fully random policies and
the best policies trained by QMIX or FacMAC. Once these policy pools are
constructed, we can directly sample the required individual behavior policies
from the different pools to generate the agent-wise imbalanced multi-agent
datasets.
Maps | Random pools | Medium pools | Expert pools | Episode return range
---|---|---|---|---
2s_vs_1sc | 0$\sim$6 | 6$\sim$14 | 14$\sim$20 | 0$\sim$20
3s_vs_5z | 0$\sim$6 | 6$\sim$14 | 14$\sim$20 | 0$\sim$20
2s3z | 0$\sim$6 | 6$\sim$14 | 14$\sim$20 | 0$\sim$20
8m | 0$\sim$6 | 6$\sim$14 | 14$\sim$20 | 0$\sim$20
1c3s5z | 0$\sim$6 | 6$\sim$14 | 14$\sim$20 | 0$\sim$20
10m_vs_11m | 0$\sim$6 | 6$\sim$14 | 14$\sim$20 | 0$\sim$20
Table 4: Episode return of different policy pools in StarCraft II.

Maps | Random pools | Medium pools | Expert pools | Episode return range
---|---|---|---|---
CN_3ls3l | -174.41$\sim$-133.28 | -133.28$\sim$-78.44 | -78.44$\sim$-37.32 | -174.41$\sim$-37.32
CN_4ls4l | -288.89$\sim$-214.47 | -214.47$\sim$-115.24 | -115.24$\sim$-40.81 | -288.89$\sim$-40.81
PP_3p1p | -306.57$\sim$-231.08 | -231.08$\sim$-130.44 | -130.44$\sim$-54.96 | -306.57$\sim$-54.96
Table 5: Episode return of different policy pools in MPE.

Maps | Random pools | Medium pools | Expert pools | Episode return range
---|---|---|---|---
HalfCheetah_2l | -203.47$\sim$-69.42 | -69.42$\sim$64.63 | 64.63$\sim$198.70 | -203.47$\sim$198.70
Walker_2l | -74.51$\sim$27.35 | 27.35$\sim$129.21 | 129.21$\sim$231.09 | -74.51$\sim$231.09
Table 6: Episode return of different policy pools in MAmujoco.
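A sketch of the pool assignment (hypothetical helper; the thresholds in Tables 4-6 are map-specific, while even thirds of the return range are an assumption here):

```python
def assign_pool(episode_return, lo, hi):
    """Assign a saved policy to the random/medium/expert pool by where its
    episode return falls in [lo, hi]. Even thirds are an assumption for
    illustration; the paper's tables use map-specific thresholds."""
    third = (hi - lo) / 3.0
    if episode_return < lo + third:
        return "random"
    if episode_return < lo + 2.0 * third:
        return "medium"
    return "expert"
```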
C.3 Data components of agent-wise imbalanced offline multi-agent datasets
After the policy pools are ready, we sample individual behavior policies
from these pools according to the data compositions in Table 7, Table 8, and
Table 9, and generate the agent-wise imbalanced offline datasets. Note that
since we are interested in non-expert data, we mainly generate two types of
datasets, _i.e._, low-quality and medium-quality datasets.
Quality | Maps | Components | Average episode return | Episode number
---|---|---|---|---
Low Quality | 2s_vs_1sc | ${\rm 50\%[s_{1}^{r},s_{2}^{r}]+50\%[s_{1}^{r},s_{2}^{m}]}$ | 2.84 | 10000
Low Quality | 3s_vs_5z | ${\rm 50\%[s_{1}^{r},s_{2}^{r},s_{3}^{r}]+50\%[s_{1}^{r},s_{2}^{r},s_{3}^{m}]}$ | 2.94 | 10000
Low Quality | 2s3z | ${\rm 50\%[s_{1}^{r},s_{2}^{r},z_{1}^{r},z_{2}^{r},z_{3}^{r}]+50\%[s_{1}^{r},s_{2}^{m},z_{1}^{r},z_{2}^{m},z_{3}^{m}]}$ | 3.24 | 10000
Low Quality | 8m | ${\rm 50\%[m_{1}^{r},m_{2}^{r},m_{3}^{r},m_{4}^{r},m_{5}^{r},m_{6}^{r},m_{7}^{r},m_{8}^{r}]+50\%[m_{1}^{r},m_{2}^{r},m_{3}^{r},m_{4}^{r},m_{5}^{r},m_{6}^{r},m_{7}^{r},m_{8}^{m}]}$ | 3.10 | 10000
Low Quality | 1c3s5z | ${\rm 50\%[c_{1}^{r},s_{1}^{r},s_{2}^{r},s_{3}^{r},z_{1}^{r},z_{2}^{r},z_{3}^{r},z_{4}^{r},z_{5}^{r}]+50\%[c_{1}^{m},s_{1}^{r},s_{2}^{r},s_{3}^{m},z_{1}^{r},z_{2}^{r},z_{3}^{r},z_{4}^{r},z_{5}^{m}]}$ | 5.56 | 10000
Low Quality | 10m_vs_11m | ${\rm 50\%[m_{1}^{r},m_{2}^{r},m_{3}^{r},m_{4}^{r},m_{5}^{r},m_{6}^{r},m_{7}^{r},m_{8}^{r},m_{9}^{r},m_{10}^{r}]+50\%[m_{1}^{r},m_{2}^{r},m_{3}^{m},m_{4}^{m},m_{5}^{m},m_{6}^{m},m_{7}^{m},m_{8}^{m},m_{9}^{m},m_{10}^{m}]}$ | 3.82 | 10000
Medium Quality | 2s_vs_1sc | ${\rm 100\%[s_{1}^{r},s_{2}^{e}]}$ | 9.86 | 10000
Medium Quality | 3s_vs_5z | ${\rm 100\%[s_{1}^{r},s_{2}^{r},s_{3}^{e}]}$ | 7.06 | 10000
Medium Quality | 2s3z | ${\rm 100\%[s_{1}^{r},s_{2}^{e},z_{1}^{r},z_{2}^{e},z_{3}^{e}]}$ | 7.04 | 10000
Medium Quality | 8m | ${\rm 100\%[m_{1}^{r},m_{2}^{e},m_{3}^{e},m_{4}^{e},m_{5}^{e},m_{6}^{e},m_{7}^{e},m_{8}^{e}]}$ | 9.85 | 10000
Medium Quality | 1c3s5z | ${\rm 100\%[c_{1}^{e},s_{1}^{r},s_{2}^{r},s_{3}^{e},z_{1}^{r},z_{2}^{r},z_{3}^{r},z_{4}^{r},z_{5}^{e}]}$ | 10.31 | 10000
Medium Quality | 10m_vs_11m | ${\rm 100\%[m_{1}^{r},m_{2}^{e},m_{3}^{e},m_{4}^{e},m_{5}^{e},m_{6}^{e},m_{7}^{e},m_{8}^{e},m_{9}^{e},m_{10}^{e}]}$ | 9.23 | 10000
Table 7: Data components of agent-wise imbalanced offline multi-agent datasets in StarCraft II.

Quality | Maps | Components | Average episode return | Episode number
---|---|---|---|---
Random Quality | CN_3ls3l | ${\rm 50\%[ls_{1}^{r},ls_{2}^{r},ls_{3}^{r}]+50\%[ls_{1}^{r},ls_{2}^{r},ls_{3}^{m}]}$ | -157.70 | 10000
Random Quality | CN_4ls4l | ${\rm 50\%[ls_{1}^{r},ls_{2}^{r},ls_{3}^{r},ls_{4}^{r}]+50\%[ls_{1}^{r},ls_{2}^{r},ls_{3}^{r},ls_{4}^{m}]}$ | -278.40 | 10000
Random Quality | PP_3p1p | ${\rm 50\%[p_{1}^{r},p_{2}^{r},p_{3}^{r}]+50\%[p_{1}^{r},p_{2}^{r},p_{3}^{m}]}$ | -249.58 | 10000
Medium Quality | CN_3ls3l | ${\rm 100\%[ls_{1}^{r},ls_{2}^{m},ls_{3}^{e}]}$ | -107.46 | 10000
Medium Quality | CN_4ls4l | ${\rm 100\%[ls_{1}^{r},ls_{2}^{r},ls_{3}^{e},ls_{4}^{e}]}$ | -166.26 | 10000
Medium Quality | PP_3p1p | ${\rm 100\%[p_{1}^{r},p_{2}^{m},p_{3}^{e}]}$ | -155.46 | 10000
Table 8: Data components of agent-wise imbalanced offline multi-agent datasets in MPE.

Quality | Maps | Components | Average episode return | Episode number
---|---|---|---|---
Random Quality | HalfCheetah_2l | ${\rm 100\%[l_{1}^{m},l_{2}^{r}]}$ | -110.52 | 10000
Random Quality | Walker_2l | ${\rm 100\%[l_{1}^{r},l_{2}^{m}]}$ | -21.67 | 10000
Medium Quality | HalfCheetah_2l | ${\rm 100\%[l_{1}^{r},l_{2}^{e}]}$ | 41.79 | 10000
Medium Quality | Walker_2l | ${\rm 100\%[l_{1}^{r},l_{2}^{e}]}$ | 71.68 | 10000
Table 9: Data components of agent-wise imbalanced offline multi-agent datasets in MAmujoco.
### D. Experimental configurations
D.1 Hardware configurations
In all experiments, the GPU is NVIDIA A100 and the CPU is AMD EPYC 7H12
64-Core Processor.
D.2 Hyperparameters
In our proposed algorithmic framework SIT, we first train the ARDNEM with the
hyperparameters shown in Table 10, then take the last checkpoint to construct
the type-wise DPER, and finally use the hyperparameters shown in Table 11 to
train the agent policies. To ensure a fair comparison, the other baselines
also share most hyperparameters with ours during agent training, as shown in
Table 11.
Hyperparameter | Value
---|---
Learning rate | $1\times 10^{-4}$
Optimizer | RMS
Training epoch | 20000
Gradient clipping | 10
Reward network dimension | 64
Attention hidden dimension | 64
Activation function | ReLU
Batch size | 32
Replay buffer size | $1.0\times 10^{4}$
Ensemble number | 5
Table 10: Hyperparameters sheet for ARDNEM learning
Hyperparameter | Value
---|---
Shared |
Agent network learning rate | $5\times 10^{-4}$
Optimizer | RMS
Discount factor | 0.99
Training epoch | 15000
Parameters update interval | 100
Gradient clipping | 10
Mixer/Critic hidden dimension | 32
RNN hidden dimension | 64
Activation function | ReLU
Batch size | 32
Replay buffer size | $1.0\times 10^{4}$
MABCQ |
$\zeta$ | 0.3
MACQL |
$\beta_{\rm CQL}$ | 2.0
ICQ |
Critic network learning rate | $1\times 10^{-4}$
$\beta_{\rm ICQ}$ | 0.1
$\lambda_{\rm ICQ}$ | 0.8
Ours |
Critic network learning rate | $1\times 10^{-4}$
$\alpha$ | 0.2
$\beta$ | 0.1
$\eta$ | 1
Rescaled priority range | [0,20]
Table 11: Hyperparameters sheet for agent learning
### E. Experiments
E.1 Reward decomposition network with the monotonic nonlinear constraint
(MNC).
Figure 7: Experiment about reward decomposition network with the monotonic
nonlinear constraint (MNC).
In our SIT, we propose an attention-based reward decomposition network in
Stage I. A natural alternative is a decomposition module with a monotonic
nonlinear constraint (MNC), similar to QMIX (Rashid et al. 2018).
Specifically, we only need to use the hypernetwork shown in Figure 7(a), so
that the total reward $r_{tot}$ and the individual rewards $r_{i}$ satisfy the
following relationship to achieve reward decomposition
$\frac{\partial r_{tot}}{\partial r_{i}}\geq 0,\quad\forall i\in N.$ (15)
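One standard way to enforce Equation (15), as in QMIX-style hypernetworks, is to take the absolute value of the produced mixing weights; a linear sketch (our simplification of the nonlinear mixer):

```python
def mnc_mix(individual_rewards, raw_weights, bias=0.0):
    """Monotonic linear mixer: using |w_i| gives d r_tot / d r_i = |w_i| >= 0,
    so raising any individual reward can never decrease the total reward."""
    return sum(abs(w) * r for w, r in zip(raw_weights, individual_rewards)) + bias
```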
Figure 7(b) shows the reward decomposition result of MNC. We observe two
problems with this result. 1) The gaps between individual rewards are small,
so the quality of the trajectories cannot be well distinguished. 2) More
importantly, the reward decomposition result is incorrect. According to the
data composition of the low-quality dataset of 2s3z (Table 7), we expect
s1 (red) $<$ s2 (violet) and z1 (blue) $<$ z2 (orange), z3 (green); Figure
7(b) does not match this expectation, whereas the decomposition result of our
SIT does (Figure 3(d)). Therefore, policy learning with MNC-decomposed
rewards cannot achieve satisfactory results, as shown in Figure 7(c).
E.2 Selection of the hyperparameters $\alpha$, $\beta$, and $\eta$.
Figure 8: Hyperparameter selection.
The hyperparameter $\alpha$ in Equation (7) determines how much
prioritization is used when sampling training data. As $\alpha\rightarrow 0$,
we only sample the highest-quality individual trajectories, while
$\alpha\rightarrow\infty$ corresponds to uniform sampling. The hyperparameter
$\beta$ in Equation (10) controls how conservative the policy update is
during training. As $\beta\rightarrow 0$, there are no constraints on the
policy updates, while $\beta\rightarrow\infty$ makes agent learning
equivalent to behavior cloning. The hyperparameter $\eta$ in Equation (9) and
Equation (10) controls the importance weight of the uncertainty in
actor-critic learning. In all experiments, we choose $\alpha=0.2$,
$\beta=0.1$, and $\eta=1$ according to Figure 8.
E.3 Training curves
Figure 9: The performance of different algorithms on the low-quality datasets (StarCraft II).
Figure 10: The performance of different algorithms on the low-quality datasets (MPE).
Figure 11: The performance of different algorithms on the low-quality datasets (MAmujoco).
Figure 12: The performance of different algorithms on the medium-quality datasets (StarCraft II).
Figure 13: The performance of different algorithms on the medium-quality datasets (MPE).
Figure 14: The performance of different algorithms on the medium-quality datasets (MAmujoco).
E.4 Decomposed weighted individual reward
Figure 15: Decomposed weighted individual reward on the low-quality datasets (StarCraft II).
Figure 16: Decomposed weighted individual reward on the low-quality datasets (MPE).
Figure 17: Decomposed weighted individual reward on the low-quality datasets (MAmujoco).
Figure 18: Decomposed weighted individual reward on the medium-quality datasets (StarCraft II).
Figure 19: Decomposed weighted individual reward on the medium-quality datasets (MPE).
Figure 20: Decomposed weighted individual reward on the medium-quality datasets (MAmujoco).
(1) Departamento de Astronomía, Facultad Ciencias Físicas y Matemáticas, Universidad de Concepción, Av. Esteban Iturra s/n Barrio Universitario, Casilla 160, Concepción, Chile; e-mail: <EMAIL_ADDRESS>
(2) Centre for Astrochemical Studies, Max-Planck-Institut für Extraterrestrische Physik, Gießenbachstr. 1, 85749 Garching bei München, Germany
(3) Dipartimento di Fisica “G. Occhialini”, Università degli Studi di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy
(4) INFN, Sezione di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy
(5) INAF-Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, I-50125 Firenze, Italy
# Non-ideal magnetohydrodynamics of self-gravitating filaments
Nicol Gutiérrez-Vera (1), Tommaso Grassi (2), Stefano Bovino (1), Alessandro Lupi (3,4), Daniele Galli (5), and Dominik R.G. Schleicher (1)
(Received ; accepted )
###### Abstract
Context. Filaments have been studied in detail through observations and
simulations. A range of numerical works have separately investigated how
chemistry and diffusion effects, as well as magnetic fields and their
structure, impact the gas dynamics of filaments. However, non-ideal effects
have hardly been explored thus far.
Aims. We investigate how non-ideal magnetohydrodynamic (MHD) effects,
combined with a simplified chemical model, affect the evolution and accretion
of a star-forming filament.
Methods. We modeled an accreting self-gravitating turbulent filament using
lemongrab, a one-dimensional (1D) non-ideal MHD code that includes chemistry.
We explore the influence of non-ideal MHD, the orientation and strength of the
magnetic field, and the cosmic ray ionization rate, on the evolution of the
filament, with particular focus on the width and accretion rate.
Results. We find that the filament width and the accretion rate are determined
by the magnetic field properties, including the initial strength, the coupling
with the gas controlled by the cosmic ray ionization rate, and the orientation
of the magnetic field with respect to the accretion flow direction. Increasing
the cosmic-ray ionization rate leads to a behavior closer to that of ideal
MHD, enhancing the magnetic pressure support and, hence, damping the accretion
efficiency with a consequent broadening of the filament width. For the same
reason, we obtained a narrower width and a larger accretion rate when we
reduced the initial magnetic field strength. Overall, while these factors
affect the final results by approximately a factor of 2, removing the non-
ideal MHD effects results in a much greater variation (up to a factor of 7).
Conclusions. The inclusion of non-ideal MHD effects and the cosmic-ray
ionization is crucial for the study of self-gravitating filaments and in
determining critical observable quantities, such as the filament width and
accretion rate.
###### Key Words.:
star formation – ISM: clouds – methods: numerical – magnetohydrodynamics (MHD)
## 1 Introduction
Filamentary structures in molecular clouds (e.g., Ward-Thompson et al. 2010,
for the Polaris Flare cloud, Kirk et al. 2013, for the Taurus region, and
Alves de Oliveira et al. 2014, for the Chamaeleon cloud complex) have been
revealed by Herschel observations (André et al., 2010; Molinari et al., 2010),
suggesting that pre-stellar cores may form from the gravitational
fragmentation of marginally supercritical and magnetized filaments (Könyves et
al., 2015; Benedettini et al., 2018). The role of the magnetic field in the
star formation process, and particularly of its orientation with respect to
the filament axis, has been widely discussed (Soler et al., 2013; Planck
Collaboration et al., 2016b) and it represents a crucial point in
understanding how the cores embedded in the filament grow in mass and trigger
the formation of protostellar objects. Observational data from Zeeman-effect
surveys (Crutcher, 2012) show that the maximum strength of the interstellar
magnetic field is $\sim 10$ $\mu$G for gas with densities below $n_{\rm H}\sim
300$ cm-3, increasing significantly at higher densities. This increase is
determined by the balance between ambipolar diffusion and the accumulation of
the magnetic field due to condensation. The comparison between magnetized and
non-magnetized models, either in equilibrium (Toci & Galli, 2015a, b) or
dynamical (Hennebelle & Audit, 2008), shows that magnetic fields have a strong
influence on the filamentary structure and the fragmentation process within
the filaments (Tilley & Pudritz, 2007; Kirk et al., 2015). A relevant driver
of these processes is the geometry of the magnetic field.
Polarization measurements often show that the orientation of the magnetic
field is nearly perpendicular to the major axis of the star-forming filament,
but aligned with low-density filamentary structures (e.g., Soler et al., 2013;
Planck Collaboration et al., 2016a; Jow et al., 2018; Soler, 2019), a result
also found in numerical simulations (e.g., Nakamura & Li, 2008; Soler &
Hennebelle, 2017; Seifried et al., 2020; Körtgen & Soler, 2020). Cox et al.
(2016) analyzed the filamentary structure of the Musca cloud, differentiating
the main filament and the low-density filamentary structures close to it,
showing that they are parallel to the plane-of-the-sky local magnetic field
and quasi-perpendicular to the main filament. Additional observations revealed
a large-scale network of sub-filaments connected to the filamentary structures
(e.g., Schneider et al., 2010), aligned with the direction of the magnetic
field, which can represent a mass reservoir for further growth of the filament
and the cores embedded into it.
The accretion of filaments is important in the context of star formation,
because it could play a key role in preventing dense filaments from collapsing
to spindles, and in maintaining constant widths during the evolutionary
process (André, 2017). Some authors have focused on different ways in which
the ambient material could be accreted, while considering the constraints
imposed by the magnetic field direction inside molecular clouds. Based on
Herschel observations, Shimajiri et al. (2019) showed that the accretion of
ambient gas can be driven by the gravitational potential of the filament. This
provides a strong support for the scenario of mass accretion along magnetic
field lines (oriented nearly perpendicular to the filament) proposed by
Palmeirim et al. (2013). Assuming a magnetic field perpendicular to the
filament, Hennebelle & André (2013) considered radial accretion in a self-
gravitating filament due to the combination of accretion-driven turbulence and
dissipation of the turbulence by ion-neutral friction. Gómez et al. (2022)
proposed that the width measured by observations may change depending on the
tracer used and then suggested a tracer-dependent estimate of the accretion
rate onto the filament. In Gómez & Vázquez-Semadeni (2014), a similar behavior
was shown, whereby filaments were proposed to be river-like flow structures,
with the gas falling onto the filament changing direction as the gas density
increases, and accreting mainly in the perpendicular direction.
Magnetic fields also have an impact on the characteristic width of the
filament. From an analysis of Herschel observations, Arzoumanian et al. (2011)
identified a peaked distribution of filament widths around 0.1 pc, including
both low-column density and star-forming filaments. Whether or not this width
is a universal filament property is still a matter of debate (e.g., Smith et
al., 2014; Panopoulou et al., 2017; Hacar et al., 2022), but several
theoretical works have attempted to explain it. For example, in simulations of
idealized filaments, Seifried & Walch (2015) found that the 0.1 pc width
(Arzoumanian et al., 2011) assumed in their initial conditions could be
maintained if the magnetic field is longitudinal.
Ideal magnetohydrodynamics (MHD) may not be sufficient for studying filaments,
due to the importance of ambipolar diffusion and other non-ideal processes
(Grassi et al., 2019). The coupling between magnetic field and gas depends on
the density and the ionization degree (Shu, 1983). The resistivity coefficient
of ambipolar diffusion is indeed strongly dependent on the abundance of ions
(e.g., Grassi et al. 2019), with collisions between charged dust grains and
neutral gas particles dominating the momentum transfer. In this context,
cosmic rays play a fundamental role as they ionize molecular clouds (Padovani
et al., 2009; Padovani & Galli, 2011). Chen & Ostriker (2014) simulated core
formation in colliding flows including ambipolar diffusion, finding a
transient ambipolar diffusion phase during the formation of the shock layer
that allows for the formation of cores with higher mass-to-flux ratio. In a
subsequent work, Ntormousi et al. (2016) reported wider filaments as a
consequence of non-ideal effects. However, numerical simulations including
ambipolar diffusion in MHD turbulence have not determined that it plays a role
in setting a characteristic scale (Oishi & Mac Low, 2006; Burkhart et al.,
2015).
In this paper, we explore how different physical parameters, such as the
magnetic field strength and cosmic ray ionization rate, affect the evolution
and accretion of a self-gravitating filament, including chemistry and non-
ideal effects. In Sect. 2, we introduce our initial conditions and the non-
ideal magnetohydrodynamic equations, together with details on the chemistry
and microphysics employed in this work. In Sect. 3, we discuss the reference
model and the parameter study, along with the impact of those parameters on
the accretion process. In Sect. 4, we present a discussion of the main results
referring to the accretion rates and filament widths. Finally, in Sect. 5, we
discuss some limitations of our approach and in Sect. 6, we summarize our main
conclusions.
## 2 Methodology
In the following, we describe the initial conditions and the theoretical
framework adopted in this work.
### 2.1 Initial conditions
Our model consists of a self-gravitating turbulent filament with 0.1 pc width
in a 2 pc size periodic box sampled with 1024 cells, which can accrete mass
from both sides of the $x$-axis, namely, the coordinate on which the
hydrodynamical equations are solved. For our study, we model the filament as a
nonuniform slab with an oblique magnetic field.
Figure 1: Representative sketch of our set-up. The code evolves a vector
variable with three components ($x,\,y,\,z$) along one coordinate ($x$);
hence, $y$ and $z$ have periodic boundary conditions. The $x$ coordinate
traverses the filament (shaded gradient) and the initial density distribution
along this component follows a Plummer-like profile with maximum density,
$\rho_{\rm ridge}$, and background density, $\rho_{0}$, see Eq. (1). The gas
accretes along the same axis, as indicated by the arrows. The magnetic field
is inclined by an angle $\theta$ with respect to the $x$-axis, where
$\theta=\arctan{B_{y}/B_{x}}$. We note that the color palette employed for the
filament is only included for illustration purposes.
To model the filament, we adopted a Plummer-like profile as shown in
Arzoumanian et al. (2011), given by:
$\rho(x)=\frac{\rho_{\mathrm{ridge}}}{\left[1+(x/x_{\rm
flat})^{2}\right]^{p/2}},$ (1)
where $\rho_{\rm ridge}$ is the central density of the filament, $x_{\rm
flat}$ is the characteristic width of the flat inner plateau of the filament,
and $p$ is the typical exponent of the profile. For this study, we assumed the
same parameters as adopted in Körtgen et al. (2018), namely, $p=2$ and $x_{\rm
flat}=0.033$ pc, and we aligned the minor axis of the filament with the
$x$-axis. Figure 1 shows a sketch of the filament setup.
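As a concrete illustration, the initial profile of Eq. (1) with the parameters above can be set up in a few lines (a minimal Python sketch; the grid layout mirrors Sect. 2.1, but the variable names and the cell-centred grid are our own choices, not lemongrab's actual initialization):

```python
import numpy as np

# Parameters from Sect. 2.1 / Table 1
rho_ridge = 3.424e-19       # g cm-3, central (ridge) density
x_flat = 0.0333             # pc, width of the flat inner plateau
p = 2.0                     # exponent of the Plummer-like profile
L_box, n_cells = 2.0, 1024  # box size in pc, number of cells

def plummer_profile(x, rho_ridge, x_flat, p):
    """Plummer-like filament profile of Eq. (1)."""
    return rho_ridge / (1.0 + (x / x_flat) ** 2) ** (p / 2.0)

# Cell-centred grid with the filament ridge at x = 0
x = (np.arange(n_cells) + 0.5) * L_box / n_cells - L_box / 2.0
rho = plummer_profile(x, rho_ridge, x_flat, p)
```

A quick consistency check: for $p=2$ the profile falls to exactly half the ridge density at $x=x_{\rm flat}$.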
The box is initialized with a uniform temperature of 15 K and is filled with a
low-density medium with ${\rho_{\rm 0}\sim 2\times 10^{-21}}$ g cm-3 outside
the filament region, with the latter modeled according to the Plummer-like
profile in Eq. (1), where we set $\rho_{\mathrm{ridge}}\sim 3\times 10^{-19}$
g cm-3. The mass per unit length is set to $(M/L)=3\,(M/L)_{\rm
crit}$, where $(M/L)_{\rm crit}=2c_{\rm s}^{2}/G$ (Inutsuka & Miyama, 1992) and
$c_{\rm s}=\sqrt{k_{\rm B}T/\mu m_{\rm p}}$ is the speed of sound, with $G$,
$k_{\rm B}$, $\mu$, and $m_{\rm p}$ being the gravitational constant, the
Boltzmann constant, the mean molecular weight, and the proton mass,
respectively. This gives $(M/L)\sim 86$ M⊙ pc-1. We note that we initialize the
filament above the critical line mass to guarantee collapse toward the filament axis.
The free-fall time of the background gas, defined as:
$t_{\rm ff}=\sqrt{\frac{3\pi}{32G\rho_{0}}},$ (2)
corresponds to $\sim 1.3$ Myr.
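Both characteristic numbers are straightforward to verify in cgs units (a sketch; we assume $\mu\approx 2$, which reproduces the quoted $(M/L)\sim 86$ M⊙ pc-1, since the adopted mean molecular weight is not stated explicitly):

```python
import numpy as np

# Physical constants (cgs)
G = 6.674e-8          # gravitational constant
k_B = 1.380649e-16    # Boltzmann constant
m_p = 1.6726e-24      # proton mass
M_sun, pc, Myr = 1.989e33, 3.0857e18, 3.156e13

T = 15.0              # K, initial temperature
mu = 2.0              # mean molecular weight (assumed; not stated in the text)
rho_0 = 2.0e-21       # g cm-3, background density

# Critical line mass (Inutsuka & Miyama 1992) and the adopted 3x value
c_s2 = k_B * T / (mu * m_p)     # sound speed squared
ml = 3.0 * (2.0 * c_s2 / G)     # g cm-1
ml_msun_pc = ml * pc / M_sun    # M_sun pc-1, ~86

# Free-fall time of the background gas, Eq. (2)
t_ff_Myr = np.sqrt(3.0 * np.pi / (32.0 * G * rho_0)) / Myr
```

This gives $(M/L)\approx 86$ M⊙ pc-1 and $t_{\rm ff}\approx 1.5$ Myr; the small difference from the quoted $\sim$1.3 Myr reflects that $\rho_{0}$ is given to only one significant figure.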
The filament is initialized including turbulence with Mach number
$\mathcal{M}=2$ in the $\varv_{x},\varv_{y}$, and $\varv_{z}$ components of
the velocity field, following a Burgers-like power spectrum, as described in
Bovino et al. (2019) (see also Körtgen et al., 2017). Given the randomness of
the initial velocity field, to avoid any net momentum that could produce a
drift in the filament along the direction of the gravitational force, we
subtract the mean initial velocity of the $x$-component.
### 2.2 Magnetic field orientation
The magnetic field ${\bf B}$ is set perpendicular to the $z$-axis and is
tilted relative to the $x$-axis (the accretion flow direction) by an angle
$\theta=\arctan(B_{y}/B_{x})$, as shown in Fig. 1. We note that when
$\theta=0$ the field is parallel to the flow direction, while for
$\theta=\pi/2$ the field is perpendicular to the flow. Given the
dimensionality of the code and the implicit assumption that
$\partial_{y}=\partial_{z}=0$, the solenoidal condition requires
$\partial_{x}B_{x}=0$. Hence, $B_{x}$ remains constant throughout the box,
whereas the initial condition of the $y$ component of the field is assumed to
scale as (Crutcher et al., 2010):
$B_{y}=B_{y,0}\left(\frac{\rho}{\rho_{0}}\right)^{k},$ (3)
where $k=0.5$, and $B_{y,0}\equiv B_{0}\sin{\theta}$, with $B_{0}=10\,\mu$G.
We note that since $B_{x}$ is constant by construction, $\theta$ also varies
with $\rho$; the value of $\theta$ quoted for our initial conditions refers to
$B_{y}=B_{y,0}$.
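For the reference case ($B_{0}=10$ $\mu$G, $\theta=77\degree$), the decomposition and the scaling of Eq. (3) reproduce the component values of Table 1 (an illustrative sketch; the function names are ours):

```python
import numpy as np

B0 = 10.0                  # muG, initial field strength (Table 1)
theta0 = np.radians(77.0)  # reference-case inclination to the x-axis

# Component decomposition (Sect. 2.2)
B_x = B0 * np.cos(theta0)   # constant in space and time (Eq. 8)
B_y0 = B0 * np.sin(theta0)  # initial y-component at rho = rho_0

def B_y(rho_ratio, B_y0=B_y0, k=0.5):
    """Density scaling of the y-component, Eq. (3); rho_ratio = rho/rho_0."""
    return B_y0 * rho_ratio ** k

def theta_local(rho_ratio):
    """Local field inclination; grows with density since B_x is fixed."""
    return np.arctan2(B_y(rho_ratio), B_x)
```

With $k=0.5$, compressing the gas by a factor of 4 doubles $B_{y}$, steepening the local angle toward perpendicular in the dense ridge, qualitatively as seen in Fig. 4.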
### 2.3 Numerical framework
To study the evolution of the filament, we employ the lemongrab code (Grassi
et al., 2019), a 1D code that solves the non-ideal MHD equations time-
dependently. The code also includes cooling and heating processes as well as
grain chemistry (including charged grains). For the purposes of this study, we
added self-gravity to lemongrab, as described below.
The coupled non-ideal MHD equations can be written as:
$\displaystyle\partial_{t}\,\rho=-\partial_{x}[\rho\varv_{x}],$ (4)
$\displaystyle\partial_{t}[\rho\varv_{x}]=-\partial_{x}\left[\rho\varv_{x}^{2}+P^{*}-\frac{B_{x}^{2}}{4\pi}\right]-\rho\partial_{x}\Phi,$
(5)
$\displaystyle\partial_{t}[\rho\varv_{y}]=-\partial_{x}\left[\rho\varv_{x}\varv_{y}-\frac{B_{x}B_{y}}{4\pi}\right],$
(6)
$\displaystyle\partial_{t}[\rho\varv_{z}]=-\partial_{x}\left[\rho\varv_{x}\varv_{z}-\frac{B_{x}B_{z}}{4\pi}\right],$
(7) $\displaystyle\partial_{t}B_{x}=0,$ (8)
$\displaystyle\partial_{t}B_{y}=-\partial_{x}\left[\varv_{x}B_{y}-\varv_{y}B_{x}+\frac{\eta_{\rm
AD}}{|{\bf B}|^{2}}(F_{B,x}B_{y}-F_{B,y}B_{x})\right],$ (9)
$\displaystyle\partial_{t}B_{z}=-\partial_{x}\left[\varv_{x}B_{z}-\varv_{z}B_{x}+\frac{\eta_{\rm
AD}}{|{\bf B}|^{2}}(F_{B,z}B_{x}-F_{B,x}B_{z})\right],$ (10)
$\displaystyle\partial_{t}{\cal E}=\Gamma_{\rm cr}-\Lambda-\rho\varv_{x}\partial_{x}\Phi-\partial_{x}\left\{({\cal E}+P^{*})\varv_{x}-\frac{B_{x}}{4\pi}({\bf v}\cdot{\bf B})\right.$
$\displaystyle\left.-\frac{\eta_{\rm AD}}{4\pi|{\bf B}|^{2}}[(F_{B,z}B_{x}-F_{B,x}B_{z})B_{z}-(F_{B,x}B_{y}-F_{B,y}B_{x})B_{y}]\right\},$ (11)
$\displaystyle\partial_{t}(\rho X_{i})=-\partial_{x}(\rho X_{i}\varv_{x})+\mathcal{P}_{i}-\rho X_{i}\mathcal{L}_{i}.$ (12)
In the above equations, $\Phi$ is the gravitational potential, $\Lambda$ is
the cooling rate, $\Gamma_{\rm cr}$ is the cosmic ray heating rate, and
$\mathcal{P}_{i}$ and $\mathcal{L}_{i}$ are the production and loss terms of
the $i$th chemical species.
The total pressure $P^{*}$ is:
$P^{*}=P+\frac{|{\bf B}|^{2}}{8\pi}.$ (13)
The total energy density is
${\cal E}=\frac{P}{\gamma-1}+\frac{\rho|{\bf v}|^{2}}{2}+\frac{|{\bf
B}|^{2}}{8\pi},$ (14)
with the thermal pressure $P$ related to the temperature $T$, needed by the
chemistry, via the ideal gas law:
$P=\frac{\rho k_{\rm B}}{\mu m_{\rm p}}T.$ (15)
The components of the Lorentz force are
$\displaystyle F_{B,x}$ $\displaystyle=$ $\displaystyle-
B_{y}\cdot\partial_{x}B_{y}-B_{z}\cdot\partial_{x}B_{z},$ (16) $\displaystyle
F_{B,y}$ $\displaystyle=$ $\displaystyle B_{x}\cdot\partial_{x}B_{y},$ (17)
$\displaystyle F_{B,z}$ $\displaystyle=$ $\displaystyle
B_{x}\cdot\partial_{x}B_{z}.$ (18)
The ambipolar diffusion resistivity coefficient is
$\eta_{\rm AD}=c^{2}\left(\frac{\sigma_{\rm P}}{\sigma_{\rm P}^{2}+\sigma_{\rm
H}^{2}}-\frac{1}{\sigma_{||}}\right),$ (19)
where $\sigma_{\rm P}$, $\sigma_{\rm H}$, and $\sigma_{||}$ are, respectively,
the Pedersen, Hall, and parallel conductivities, and $c$ is the speed of
light.
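Equation (19) is simple to encode; the sketch below uses made-up conductivity values purely for illustration (in the code the conductivities follow from the chemical abundances, as in Grassi et al. 2019):

```python
c_light = 2.998e10  # speed of light, cm s-1

def eta_AD(sigma_P, sigma_H, sigma_par):
    """Ambipolar diffusion resistivity, Eq. (19).

    sigma_P, sigma_H, sigma_par are the Pedersen, Hall, and parallel
    conductivities; any values passed here are illustrative only.
    """
    return c_light**2 * (sigma_P / (sigma_P**2 + sigma_H**2)
                         - 1.0 / sigma_par)
```

In the well-coupled limit, $\sigma_{\rm H}\to 0$ and $\sigma_{\rm P}\to\sigma_{||}$, so $\eta_{\rm AD}\to 0$ and ideal MHD is recovered; a reduced Pedersen conductivity yields a finite resistivity.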
Given the spatial discretization of the code, the Poisson equation can be
easily discretized over the 1D grid at second-order via central differencing,
as
$\partial^{2}_{x}\Phi\approx\frac{\Phi_{k+1}-2\Phi_{k}+\Phi_{k-1}}{\Delta
x^{2}}=4\pi{\rm G}\rho_{k},$ (20)
where $\Phi_{k}$ and $\rho_{k}$ are the gravitational potential and density of
the $k$th cell (with $k$ ranging from 1 to $N$, the number of resolution
elements), and $\Delta x$ is the spatial resolution of the
simulation. For the boundary cells, we have included different boundary
conditions (periodic, Dirichlet, or outflowing) among which the user can
choose, although in this work we only consider periodic ones, that is,
$\Phi_{1}=\Phi_{N}$. The potential is then calculated via matrix
inversion, and the gravitational acceleration is finally obtained via central
differencing as:
$g_{x}=-\partial_{x}\Phi\approx-\frac{\Phi_{k+1}-\Phi_{k-1}}{2\Delta x}.$ (21)
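The self-gravity solve described above can be sketched as follows (a minimal dense-matrix version for the periodic case; how lemongrab fixes the gauge of the singular periodic system is not specified, so the zero-mean constraint below is our assumption):

```python
import numpy as np

def solve_poisson_periodic(rho, dx, G=6.674e-8):
    """Solve d^2 Phi / dx^2 = 4 pi G rho on a periodic 1D grid (Eq. 20).

    The periodic Laplacian is singular (Phi is defined up to an additive
    constant), so we subtract the mean density and fix the gauge by
    replacing one equation with a zero-mean condition on Phi.
    """
    n = rho.size
    src = 4.0 * np.pi * G * (rho - rho.mean()) * dx**2
    # Second-order central-difference Laplacian with periodic wrap-around
    A = (-2.0 * np.eye(n)
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    A[0, -1] = A[-1, 0] = 1.0
    # Gauge fixing: replace the first row with sum(Phi) = 0
    A[0, :] = 1.0
    src[0] = 0.0
    phi = np.linalg.solve(A, src)
    # Gravitational acceleration by central differencing (Eq. 21)
    g = -(np.roll(phi, -1) - np.roll(phi, 1)) / (2.0 * dx)
    return phi, g
```

Because the zero-mean source makes the dropped Poisson row redundant, the remaining equations plus the gauge condition determine $\Phi$ uniquely.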
In this work, we employ a cooling function which depends on both temperature
and number density. It corresponds to conditions of collisional ionization
equilibrium typical of the interstellar gas, determined using cloudy (Ferland
et al., 1998) and tabulated by Shen et al. (2013). We imposed a temperature
floor of 10 K by including an artificial heating term, defined as
$\Lambda(T=10\,$K$)$, and added to the $\partial_{t}{\cal E}$ equation.
For details of the calculation of pressure, heating, cosmic ray ionization
rate, conductivities, and the chemical evolution of the species, we refer to
Grassi et al. (2019). In this work, we employed their reduced chemical
network, including eight species: electrons e-, X, X+, and neutral and charged
grains, namely g, g+, g++, g-, and g--. The species X and X+ are a proxy of all
neutrals and cations produced by a chain of fast reactions following H2
ionization (see Fujii et al., 2011; Grassi et al., 2019, for details). We
assumed that the mass of X is the same as the mass of H2, and the dust grain
rates were weighted assuming an MRN distribution (Mathis et al., 1977) with the
same characteristics as in Grassi et al. (2019). The cosmic ray ionization
rate of X is assumed to be initially uniform at a value $\zeta_{\rm
cr}=10^{-17}$ s-1. We set the initial electron fraction to $f_{i}=n_{\rm
e^{-}}/n_{\rm H_{2}}=10^{-7}$ and the dust-to-gas mass ratio to
${\mathcal{D}}=10^{-2}$, as listed in Table 1.
## 3 Results
In this section, we present the results of our simulations, divided into four
parts. In Section 3.1, we present our reference model based on the initial
conditions listed in Table 1. In Section 3.2, we show the effects of varying
the orientation and strength of the magnetic field. Separately, in Section
3.3, we consider different cosmic ray ionization rates and, finally, in
Section 3.4 we explore the effect of other physical quantities such as the
initial turbulence seed, Mach number, density regimes, and $k$ exponent of the
magnetic field-density relation, namely, Eq. 3.
### 3.1 Reference case
Physical quantity | Numerical value | Units
---|---|---
$\rho_{\mathrm{ridge}}$ | $3.424\times 10^{-19}$ | g cm-3
$x_{\rm flat}$ | 0.0333 | pc
$L_{\text{box}}$ | 2 | pc
$T$ | 15 | K
$\mathcal{M}$ | 2 | -
$B_{0}$ | 10 | $\mu$G
$B_{x,0}$ | 2.25 | $\mu$G
$B_{y,0}$ | 9.74 | $\mu$G
$B_{z}$ | 0 | $\mu$G
$f_{i}$ | $10^{-7}$ | -
$\zeta_{\rm cr}$ | $10^{-17}$ | s-1
$\mathcal{D}$ | $10^{-2}$ | -
$\varv_{x},\varv_{y},\varv_{z}$ | see text | -
Table 1: Initial conditions for the filament setup of the reference case.
Figure 2: Profiles along the $x$-axis of various physical quantities of the
model at three different times, $t/t_{\rm ff}=$ 0.1, 0.5, and 1.0: (a) density
$\rho$, (b) temperature $T$, (c) $x$-component of the velocity $\varv_{x}$,
(d) total velocity magnitude $|{\bf v}|$, (e) $y$-component of the magnetic
field $B_{y}$, (f) Mach number $\mathcal{M}$, (g) total magnetic field
magnitude $|{\bf B}|$, (h) mass flux $\dot{\Sigma}$, (i) ambipolar diffusion
resistivity coefficient $\eta_{\rm AD}$, and (j) electron abundance
$e^{-}/$H2. The solid lines correspond to non-ideal MHD, the dashed lines to
ideal MHD. The gray area in the middle spans 0.1 pc, to guide the reader in
identifying the typical width of the filaments. We note that in panel (i), in
the ideal case (dashed lines), $\eta_{\rm AD}$ is reported for the sake of
comparison, but it is not included in the MHD equations.
In Fig. 2, we show, for the ideal and the non-ideal cases, the evolution of
density, temperature, velocity in the $x$-direction, total velocity, the
$y$-component of the magnetic field, total magnetic field, Mach number, mass
flux, ambipolar diffusion resistivity coefficient, and electron fraction as a
function of the $x$-coordinate at three different times, for our reference
case (see Table 1). The same panels include a comparison with the ideal MHD
case (dashed lines).
At early times, the density shows a peak with an initial maximum of about
$3\times 10^{-19}$ g cm-3 ($9\times 10^{4}$ cm-3) in the center, slightly
increasing at later times. The peak values of the $y$-component and the total
magnetic field are higher than 100 $\mu$G, triggered by the increase in
density. In the outer regions, at 0.1$t_{\rm ff}$ the density is about
$2\times 10^{-21}$ g cm-3 ($6\times 10^{2}$ cm-3). The initial turbulence
affects the first stages of the simulation, generating fluctuations in the
magnetic field, velocity, and temperature, the latter reaching values of about 20 K.
The density and velocity profiles both indicate that a shock front is
generated at around $0.05-0.1$ pc from the filament center, after the initial
stages.
From the velocity field, we calculated the Mach number as
$\mathcal{M}=\varv/c_{\rm s}$, where
$\varv=(\varv_{x}^{2}+\varv_{y}^{2}+\varv_{z}^{2})^{1/2}$ is the total
velocity magnitude, and $c_{\rm s}$ is the local sound speed in each cell.
Similarly to the velocity field, the Mach number shows strong fluctuations at
the first stages of evolution, going from values around 5 (highly supersonic)
in the outer regions to $\mathcal{M}<1$ (subsonic) in the center. The same
behavior is seen at later times. From the density and the velocity field, we
estimate the mass flux, reflecting the accretion flow in the $x$-direction:
$\dot{\Sigma}=\rho\varv_{x}\,.$ (22)
At early times, the mass flux shows peaks of ${\sim 10^{3}}$ M⊙ Myr-1 pc-2 as
a result of the density increase; the flux remains high at subsequent times,
with only a slight decrease.
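The order of magnitude of these peaks can be checked directly from Eq. (22) by converting to the figure's units; the density and velocity below are illustrative values read off the shocked region of Fig. 2, not exact simulation output:

```python
# Unit conversion: g cm-2 s-1  ->  M_sun Myr-1 pc-2
M_sun, pc, Myr = 1.989e33, 3.0857e18, 3.156e13
flux_conv = Myr * pc**2 / M_sun

def mass_flux(rho, v_x):
    """Accretion mass flux of Eq. (22), returned in M_sun Myr-1 pc-2."""
    return rho * v_x * flux_conv

# Representative post-shock values (illustrative picks)
rho_shock = 5.0e-20  # g cm-3
v_shock = 2.0e5      # cm s-1 (2 km/s, the quoted peak infall speed)
```

These representative numbers give a flux of order $10^{3}$ M⊙ Myr-1 pc-2, consistent with the quoted peaks.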
The gravitational acceleration due to the mass accumulated in the filament
leads to peak velocities up to 2 km s-1 moving inward, consistent with the
expected free-fall due to the available mass. The $y$- and $z$-components of
the velocity (not shown) exhibit a similar behavior to the $x$-component, but
reach peak values of $1$ km s-1 and $0.5$ km s-1, respectively, with the
latter close to the edges of the simulation box. This relatively small
difference in magnitude is caused by the initial statistics of the turbulence,
which has the same dispersion in the three spatial components but zero average
only in the $x$-component.
The thermal profile, after an initial isothermal state, shows three distinct
regions: (i) a cold background gas at around 10 K, (ii) an efficiently heated
region with a temperature that is higher than 100 K produced by gas shocking,
and (iii) the filament ridge where the high density leads to efficient cooling
(i.e., cooling time shorter than the dynamical time). This brings the gas down
to 10 K.
The $x$-component of the magnetic field remains constant in space and time by
construction, while the $z$-component (not shown) is initially zero because of
the alignment, and later develops fluctuations of a few $\mu$G due to the
interaction with turbulence and velocity fluctuations. The $y$-component
dominates the evolution of the total magnetic field, as seen in panels (e) and
(g) of Fig. 2.
We note some differences from the ideal MHD case. In particular, over time,
the density peak tends to be broader in the ideal MHD case, due to the
increased magnetic pressure and stronger coupling between the magnetic field
and the density.
Figure 3: Density and velocity along the $x$-axis for different initial
angles. The solid line corresponds to the reference case. We show
$t=0.5t_{\rm ff}$ in the top panels and $t=1.0t_{\rm ff}$ in the bottom
panels. The gray area
in the middle corresponds to 0.1 pc to guide the reader to identify the
typical width of the filaments.
During the evolution, the density profile further steepens. At low densities,
namely, $\rho\lesssim 5\times 10^{-20}$ g cm-3, the electron fraction (panel
j of Fig. 2) remains above $10^{-6}$, but as the density approaches a maximum value of
about $10^{-18}$ g cm-3, the electron fraction drops to $\sim 10^{-10}$ due to
efficient recombination. The decrease observed in the ambipolar diffusion at
high densities ($n_{\rm H}\sim 10^{5}$ cm-3) is determined by the electrons
and X+, which are the dominant charge carriers, while grains take over at
$n_{\rm H}>10^{8}$ cm-3 (Marchand et al., 2016). The negatively charged grain
abundance (not shown) is strictly linked to the electrons due to
recombination; thus, the evolution of the two species is highly correlated.
When the electron abundance drops off, the abundance of neutral and positively
charged grains increases at high densities. Ambipolar diffusion keeps
decreasing until the ion abundance increases enough for the diffusion with
respect to the neutrals to become significant.
### 3.2 Magnetic field geometry and strength
Figure 4: Inclination of the magnetic field and $y$-component for the
different initial angles. The solid line corresponds to the reference case. We
have $t=0.5t_{\rm ff}$ in the top panels and $t=1.0t_{\rm ff}$ in the bottom
panels. The gray area in the middle corresponds to 0.1 pc to guide the reader
to identify the typical width of the filaments.
To study how the initial geometry of the magnetic field influences the
evolution of the filament, we tested four different initial inclinations
$\theta$ with respect to the $x$-axis, corresponding to $\theta=5\degree$
(almost parallel to the flow), $\theta=45\degree$, $\theta=77\degree$
(reference case), and $\theta=90\degree$ (perpendicular to the flow), while
the other parameters are kept unchanged. In Fig. 3 and Fig. 4, the density,
velocity and orientation angle of the magnetic field are shown at different
times as a function of position. From now, we present the evolution of the
model at two representative times: at half a free-fall time 0.5$t_{\rm ff}$,
and after one free-fall time 1.0$t_{\rm ff}$. The central peak of the density
and its width do not change very significantly for the different angles.
However, when the magnetic field is nearly parallel to the gas flow, the flow
is only weakly impeded by the magnetic field, as the gas can move freely along
the field lines. As a result, the density in the outer parts becomes even
lower when $\theta$ is small, without significantly changing the density peak
in the center, as the extra mass that is added to the center is very small.
Similarly, the peak velocity increases as $\theta$ becomes aligned with the flow.
In general, the orientation of the magnetic field evolves both as a function
of position and as a function of time. In particular, in the outer parts of
the filament, the orientation angle decreases as a function of time and the
magnetic field tends to align with the flow of the gas. In the filament
itself, on the other hand, the velocities are reduced due to increased thermal
pressure and the perpendicular component of the magnetic field is compressed
and amplified as more gas is accreted onto the filament. As a result, the
orientation becomes closer to perpendicular within the dense component of the
gas.
Figure 5: Density and velocity along the $x$-axis for the different initial
magnitudes of the magnetic field. The solid line corresponds to the reference
case. We have $t=0.5t_{\rm ff}$ in the top panels and $t=1.0t_{\rm ff}$ in the
bottom panels. The gray area in the middle corresponds to 0.1 pc to guide the
reader to identify the typical width of the filaments.
It is important to note that because of the assumed symmetry, the simplified
setup here may not necessarily hold in a realistic setting. Nevertheless, what
we can see clearly from Fig. 4 is the change in the angle of the magnetic
field between the outer and the inner parts of the filament. In a realistic
setting, such a change of the angle may for instance correspond to a
“U”-shaped magnetic field, as suggested by Gómez et al. (2018), or potentially
other types of configurations that imply a change of the angle as a function
of scale.
To study the effect of the magnetic field strength, we determined how the
structure of the filament depends on four different initial field strengths,
corresponding to no field (numerically $\sim 10^{-25}$ $\mu$G), 5 $\mu$G (weak
field), 10 $\mu$G (reference case), and 20 $\mu$G (strong field), while the
other parameters were kept unchanged. We calculated the normalized mass-to-
flux ratio $(M/\Phi_{B})/(M/\Phi_{B})_{\rm crit}$ for the initial conditions to
show the magnetically supercritical nature of our simulations: $\sim$141 for 5
$\mu$G, $\sim$62 for 10 $\mu$G, and $\sim$31 for 20 $\mu$G. In Fig. 5, the
density and the $x$-velocity are shown as a function of position. The magnetic
field strength is one of the parameters that mostly affect the evolution of
the filament, as highlighted after $0.5t_{\rm ff}$: as the magnetic field
strength increases, the density peak of the filament appears smaller, while
the filament appears wider. Because of the limited mass available, this also
results in a background density that decreases (on average) for stronger
magnetic fields (we note that because of the initial turbulent velocity, the
local density in some cells does not necessarily reflect this trend). A simple
explanation for this behavior is the increasing magnetic pressure, which
counteracts gravity, slowing down accretion onto the filament, as can be also
seen in the velocity profile, where the peak velocity is higher for weaker
magnetic fields. This suppression results in higher densities in the
background region, which also correspond to a higher average value of the
magnetic field (despite the non-ideal effects) and lower inflow velocities
(which arise from the combined effect of a smaller mass concentration
producing a weaker gravitational pull and a larger pressure gradient
counteracting it). At later times, these differences are almost washed out,
since gravity dominates the system evolution, finally leading to an efficient
accretion of material onto the central overdensity, as shown in the bottom
panels. Nonetheless, mild differences in the density and velocity profiles
remain in the underdense region outside the filament.
At later times, the density profile shows a trend similar to that of the
earlier stages, whereas the velocity exhibits a peculiar evolution. In
particular, the most weakly magnetized case ($5\,\mu$G) now has the lowest
velocities. This result
is due to the inversion from a magnetically dominated pressure to a thermally-
dominated regime, where the previously higher inflow velocities produced a
much stronger shock heating of the gas near the filament, which then developed
into a strong pressure gradient suppressing the inflow. In any case, these
differences remain small, especially considering the corresponding gas
densities.
### 3.3 Cosmic ray ionization rate
Figure 6: Density and velocity on the $x$-axis for different cosmic ray
ionization rates. The black line corresponds to the reference case. We have
$t=0.5t_{\rm ff}$ in the top panels and $t=1.0t_{\rm ff}$ in the bottom
panels. The gray area in the middle corresponds to 0.1 pc to guide the reader
to identify the typical width of the filaments.
Figure 7: Ionization fraction and ambipolar diffusion resistivity coefficient
for the different cosmic ray ionization rates. The black line corresponds to
the reference case. We have $t=0.5t_{\rm ff}$ in the top panels and
$t=1.0t_{\rm ff}$ in the bottom panels. The gray area in the middle corresponds to
0.1 pc to guide the reader to identify the typical width of the filaments.
To test the effect of the cosmic ray ionization rate (CRIR) on the evolution
of the filament, we performed three simulations with $\zeta_{\rm cr}=10^{-18}$
s-1 (low), $10^{-17}$ s-1 (reference), and $10^{-16}$ s-1 (high),
respectively, while the other parameters were kept unchanged. The density,
velocity, electron abundance, and ambipolar diffusion resistivity coefficient
are shown in Fig. 6 and Fig. 7 for the different cases and at two different
times. Higher cosmic-ray ionization rates produce an increase in the
ionization fraction in the low-density accretion region (see Fig. 7), which
corresponds to a stronger coupling between the gas and the magnetic field. As
shown in Fig. 7, a lower electron density (left panels) corresponds to less
ideal MHD behavior, namely, a higher ambipolar diffusion coefficient (right
panels), establishing a direct relation between the CRIR and non-ideal behavior.
Analogously, the density profile shows a broader distribution caused by slow
accretion toward the filament center, as shown in the top right panel of Fig.
6. This behavior is also confirmed by the velocity profile at 0.5$t_{\rm ff}$,
which shows in the high CRIR case a shift of the $\varv_{x}$ peak toward
higher radii, determined by the relatively higher magnetic pressure support.
This peak is no longer present at 1.0$t_{\rm ff}$, where the magnetic pressure
halts the gravitational collapse, producing the slowest infall in the high
CRIR case.
### 3.4 Other parameters
To complete our analysis, we studied the effects of additional parameters on
the evolution of the filament. We independently explored different initial
Mach numbers, initial central densities, random seeds for the turbulence, and
the exponent of the magnetic field-density relation, Eq. (3), while keeping
the other parameters unchanged.
By changing the initial turbulence seed, we did not observe relevant
differences in the final evolution of the filament, suggesting that in our
case the global features are dominating over the local turbulence, which
dissipates over time. In fact, the turbulence-induced fluctuations (e.g., in
the density profile) are more prominent at earlier stages.
For the different density regimes, we reduced $\rho_{\rm ridge}$ by a factor
of 10 and 2, with the free-fall time increasing to $\sim$4.3 Myr and $\sim$1.9
Myr, respectively. As expected, at lower density it becomes harder for the
filament to accrete material and there is no real evolution over time, with
the density remaining around the ambient values. Similarly, reducing the density
by a factor of 2 produces broader density profiles and a slower evolution. By
changing the exponent of the magnetic field-density relation from $k=0.5$ to
$k=0.4$, Eq. (3), the filament reaches higher temperatures, velocities, and
central density, but almost the same density peak as the reference, since the
lower magnetic pressure cannot slow down the accretion efficiently.
Conversely, for $k=0.6$ (cf. 0.65 in Crutcher et al. 2010) the
magnetic field is higher in both the ambient and the filament and it becomes
more perpendicular to the flow compared to the previous cases, resulting in a
slower accretion. Although these parameters affect the global evolution, their
effect is less pronounced than the effect of CRIR and of the initial magnetic
field geometry.
## 4 Discussion
| Reference | Ideal | $\theta=5$° | $\theta=45$° | $\theta=90$° | no-field | 5 $\mu$G | 20 $\mu$G | $10^{-18}$ s$^{-1}$ | $10^{-16}$ s$^{-1}$
---|---|---|---|---|---|---|---|---|---|---
⟨$\dot{\Sigma}$⟩/⟨$\dot{\Sigma}_{\rm ref}$⟩ | 1 | 0.410 | 1.372 | 1.366 | 0.960 | 1.684 | 1.663 | 0.919 | 1.317 | 0.706
FWHM/FWHM$_{\rm ref}$ | 1 | 7.214 | 0.575 | 0.698 | 1.046 | 0.534 | 0.922 | 1.159 | 0.755 | 2.447
⟨$\dot{\Sigma}$⟩ (M$_\odot$ Myr$^{-1}$ pc$^{-2}$) | 206.15 | 83.828 | 282.87 | 281.55 | 197.92 | 347.08 | 342.80 | 189.48 | 271.58 | 145.58
FWHM (pc) | 0.023 | 0.164 | 0.013 | 0.016 | 0.024 | 0.012 | 0.022 | 0.027 | 0.018 | 0.057
Table 2: Time-averaged mass flux ratio, FWHM ratio, time-averaged mass flux
(M$_\odot$ Myr$^{-1}$ pc$^{-2}$), and FWHM (pc) at $t=1.0t_{\rm ff}$, for an
ideal case and for the different inclination angles, magnetic field strengths,
and cosmic ray ionization rates. The reference case has $\theta=77$°,
$B_{0}=10\,\mu$G, and $\zeta=10^{-17}$ s$^{-1}$. We note that
$\theta=90\degree$ indicates that the magnetic field is perpendicular to the
flow, and no-field corresponds to $B_{0}=0\,\mu$G.
In order to assess the impact of the different parameters during the evolution
of the gas, we should define one or more metrics that describe quantitatively
the shape and the global morphological characteristics of the filament. The
formal definition of its shape and width has been discussed by several
authors, as we report in Appendix A. For the aims of our work, we found
that our simulation results are best interpreted by defining two global
metrics: (i) the average accretion mass flux, $\langle\dot{\Sigma}\rangle$,
and (ii) the full-width-at-half-maximum, FWHM, obtained by fitting the density
profile with a Gaussian function (see, e.g., Arzoumanian et al. 2011),
including the background up to $0.2$ pc for a more suitable fit of the curve.
The former is averaged over time, while the latter is calculated at
$1.0t_{\rm ff}$, that is, at the end of the simulation. A summary of these two
metrics and their ratios with respect to the reference case is reported in
Table 2.
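To illustrate how the FWHM metric can be extracted in practice, the sketch below measures the width of a synthetic Gaussian density profile with a constant background from its half-maximum crossings. This is a minimal numpy-only alternative to the Gaussian fit described above; the profile amplitude, width, and background level are hypothetical, not values from the paper.

```python
import numpy as np

def fwhm_of_profile(x, rho, background):
    """Width at half of the background-subtracted peak, via linear interpolation."""
    y = rho - background
    half = 0.5 * y.max()
    above = y >= half
    i0 = np.argmax(above)                          # first sample above half maximum
    i1 = len(above) - 1 - np.argmax(above[::-1])   # last sample above half maximum
    # interpolate both crossings for sub-grid accuracy
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

x = np.linspace(-0.2, 0.2, 4001)                   # pc, range used for the fit above
sigma = 0.01                                       # pc, hypothetical profile width
rho = 1e4 * np.exp(-x**2 / (2 * sigma**2)) + 50.0  # Gaussian ridge + background
print(fwhm_of_profile(x, rho, 50.0))               # ~ 2*sqrt(2 ln 2)*sigma ≈ 0.0235 pc
```

For a Gaussian profile the recovered width reduces to the analytic relation FWHM $=2\sqrt{2\ln 2}\,\sigma$, which is also how a fitted $\sigma$ would be converted to a FWHM.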
As both metrics are controlled by the gas pressure support (either thermal or
magnetic), the accretion mass flux and the FWHM appear to be anticorrelated:
the FWHM increases for higher pressure, while the accretion rate tends to be
reduced. This can be explained by observing the ideal MHD case, where the
contribution of the magnetic pressure to the total pressure reduces both the
accretion and the capability of the filament to reach narrower configurations,
resulting in larger FWHMs. In fact, in this case, the FWHM is seven times
larger than in the reference case. For the same reason, the CRIR cases present an
increased FWHM in the high-CRIR model, and vice versa for the low-CRIR. The
magnetic pressure plays a crucial role also when changing the initial magnetic
field; reducing the field strength (no-$B$ and 5 $\mu$G) narrows the density
profile, while increasing the magnetic strength to $20$ $\mu$G causes a
relatively broader FWHM.
By changing the orientation of the initial magnetic field ($\theta$), we
observed a narrower FWHM when $B_{y}$ tends to be parallel to the flow
(i.e., $\theta=5$°, almost perpendicular to the filament ridge). This decrease
is driven by the magnitude of $B_{y}$, which is the only component that (by
construction) varies spatially ($B_{z}$ is also variable, but it is less
relevant in this context, being always smaller than $B_{y}$; moreover, $B_{z}$
is expected to remain zero except for turbulence-induced fluctuations, which
average to zero). When $\theta$ is small, $B_{x}$ is the dominant component of
the magnetic field, namely, the component that (having no spatial gradient)
plays a minor role in the MHD equations. In fact, in the right panels of Fig.
3, $v_{x}$ is larger for the smallest $\theta$, being dominated by
self-gravity and resulting in higher accretion and a narrower FWHM.
To assess the impact of turbulence on the filament properties, we show in Fig.
8 the time evolution of the FWHM for different values of the Mach number for
the reference case. High Mach-number cases tend to behave similarly to
$\mathcal{M}=2$, except at earlier times when turbulence is dominant.
Figure 8: Time evolution of the FWHM ratio of $\mathcal{M}=4$ (dot-dashed
line) and $\mathcal{M}=6$ (dashed line) with $\mathcal{M}=2$ (reference).
## 5 Limitations of the model
As in any numerical model, some approximations and assumptions need to be
introduced in order to make the problem computationally tractable when
including detailed microphysics and when a parameter study is planned. The
main limitation in our approach is the 1D approximation employed to model the
filament along the $x$ coordinate. Despite the code having three components
for the vector variables, such as velocity and magnetic field, the $y$ and $z$
coordinates have periodic boundary conditions that limit the exploration of
the spatial variability. Additionally, in a 1D code the solenoidal condition
imposes by construction that the $x$ component of the magnetic field is
constant in time and space. Periodic boundary conditions are also imposed at
the edges of the simulation domain, avoiding either an infinitely growing
accretion onto the filament (as with outflow boundaries) or nonphysical
configurations (as with zero-derivative, Neumann-type boundary conditions).
Ideally, this could be avoided by using a much larger simulation box, where
the boundaries remain unaffected during the simulation due to their distance
from the central filament ridge. However, such a set-up requires much more
computational resources or an adaptive mesh, which is beyond the aims of the
present work.
Although the microphysics is limited by the reduced set of chemical reactions,
it is nevertheless capable of capturing the main features of larger
chemical networks, especially the influence on the non-ideal behavior driven
by ambipolar diffusion, as discussed in Grassi et al. (2019). In addition,
since the chemistry has a noticeable influence on the cooling functions, the
current network might reduce the capability of our model to determine time-
dependent effects of the thermal evolution. Finally, since our reduced
chemical network cannot be used to determine the ratio between the atomic and
molecular species accurately, we use a constant molecular weight and adiabatic
index that might somewhat affect the results. Additional limitations and
method details are discussed in Grassi et al. (2019), where the main features
of the code are also presented.
## 6 Summary and conclusions
In this 1D study, we modeled an accreting self-gravitating turbulent filament
using lemongrab, a non-ideal MHD code that includes chemistry and
microphysical processes. We explored how the main parameters, namely, the
configuration and strengths of the magnetic field and the cosmic ray
ionization rate, as well as other parameters such as the Mach number, initial
turbulence level, initial ridge density, and the exponent of the magnetic
field-density relation affect the evolution and accretion of the filament. Our
main results can be summarized as follows:
1. 1.
Including non-ideal MHD is crucial with regard to the filament evolution. The
ideal case produces a wider filament, with an FWHM that is approximately seven
times greater than the non-ideal model, due to the increased magnetic pressure
support. This suggests that non-ideal MHD is fundamental to understanding the
accretion process.
2. 2.
Independently of the initial configuration of the magnetic field, the magnetic
field lines in the central part of the filament bend to reach a perpendicular
configuration with respect to the $x$-axis (i.e., the accretion direction).
This is consistent, for example, with the “U”-shaped model of Gómez et al. (2018).
3. 3.
Higher cosmic ray ionization rates produce higher ionization degrees that
correspond to a more ideal gas. As the coupling with the magnetic field is
stronger, the magnetic pressure halts the collapse, resulting in a broader
filament.
4. 4.
We did not find any strong change in the final FWHM value with the Mach
number, pointing to a less pronounced role of the turbulence in regulating the
evolution and the final properties of the filament compared to microphysics
and the magnetic pressure. However, a one-dimensional model could be
inadequate to address this specific question.
We conclude that adding magnetic fields and non-ideal MHD effects
significantly affects the evolution of a collapsing filament, its width, and
its accretion rate, while other parameters play only a minor role. Special
attention should be given to the cosmic ray ionization rate, which strongly
affects the coupling between the gas and the magnetic field. It is therefore
fundamental for future works to include a proper cosmic-ray propagation
scheme to accurately study its effect on the accretion rate and on the
filament width.
###### Acknowledgements.
We are grateful to the referee for the detailed report. NGV acknowledges
support by the National Agency for Research and Development (ANID) /
Scholarship Program / Magíster Nacional/2020 - 22200413. DRGS thanks for
funding via Fondecyt Regular (project code 1201280). SB and DS gratefully
acknowledge support by the ANID BASAL projects ACE210002 and FB210003. AL
acknowledges funding from MIUR under the grant PRIN 2017-MB8AEZ. TG
acknowledges the financial support of the Max Planck Society.
## References
* Alves de Oliveira et al. (2014) Alves de Oliveira, C., Schneider, N., Merín, B., et al. 2014, A&A, 568, A98
* André et al. (2010) André, P., Men’shchikov, A., Bontemps, S., et al. 2010, A&A, 518, L102
* André (2017) André, P. 2017, Comptes Rendus Geoscience, 349, 187
* Arzoumanian et al. (2011) Arzoumanian, D., André, P., Didelon, P., et al. 2011, A&A, 529, L6
* Benedettini et al. (2018) Benedettini, M., Pezzuto, S., Schisano, E., et al. 2018, A&A, 619, A52
* Bovino et al. (2019) Bovino, S., Ferrada-Chamorro, S., Lupi, A., et al. 2019, ApJ, 887, 224
* Burkhart et al. (2015) Burkhart, B., Collins, D. C., & Lazarian, A. 2015, ApJ, 808, 48
* Chen & Ostriker (2014) Chen, C.-Y. & Ostriker, E. C. 2014, ApJ, 785, 69
* Cox et al. (2016) Cox, N. L. J., Arzoumanian, D., André, P., et al. 2016, A&A, 590, A110
* Crutcher (2012) Crutcher, R. M. 2012, Annual Review of Astronomy and Astrophysics, 50, 29
* Crutcher et al. (2010) Crutcher, R. M., Wandelt, B., Heiles, C., Falgarone, E., & Troland, T. H. 2010, ApJ, 725, 466
* Ferland et al. (1998) Ferland, G. J., Korista, K. T., Verner, D. A., et al. 1998, PASP, 110, 761
* Fujii et al. (2011) Fujii, Y. I., Okuzumi, S., & Inutsuka, S.-i. 2011, ApJ, 743, 53
* Gómez & Vázquez-Semadeni (2014) Gómez, G. C. & Vázquez-Semadeni, E. 2014, ApJ, 791, 124
* Gómez et al. (2018) Gómez, G. C., Vázquez-Semadeni, E., & Zamora-Avilés, M. 2018, MNRAS, 480, 2939
* Gómez et al. (2022) Gómez, G. C., Walsh, C., & Palau, A. 2022, MNRAS, 513, 1244
* Grassi et al. (2019) Grassi, T., Padovani, M., Ramsey, J. P., et al. 2019, MNRAS, 484, 161
* Hacar et al. (2022) Hacar, A., Clark, S., Heitsch, F., et al. 2022, arXiv e-prints, arXiv:2203.09562
* Hennebelle & André (2013) Hennebelle, P. & André, P. 2013, A&A, 560, A68
* Hennebelle & Audit (2008) Hennebelle, P. & Audit, E. 2008, in EAS Publications Series, Vol. 31, EAS Publications Series, ed. C. Kramer, S. Aalto, & R. Simon, 15–18
* Inutsuka & Miyama (1992) Inutsuka, S.-I. & Miyama, S. M. 1992, ApJ, 388, 392
* Jow et al. (2018) Jow, D. L., Hill, R., Scott, D., et al. 2018, MNRAS, 474, 1018
* Kirk et al. (2015) Kirk, H., Klassen, M., Pudritz, R., & Pillsworth, S. 2015, ApJ, 802, 75
* Kirk et al. (2013) Kirk, J. M., Ward-Thompson, D., Palmeirim, P., et al. 2013, MNRAS, 432, 1424
* Könyves et al. (2015) Könyves, V., André, P., Men’shchikov, A., et al. 2015, A&A, 584, A91
* Körtgen et al. (2017) Körtgen, B., Bovino, S., Schleicher, D. R. G., Giannetti, A., & Banerjee, R. 2017, MNRAS, 469, 2602
* Körtgen et al. (2018) Körtgen, B., Bovino, S., Schleicher, D. R. G., et al. 2018, MNRAS, 478, 95
* Körtgen & Soler (2020) Körtgen, B. & Soler, J. D. 2020, MNRAS, 499, 4785
* Marchand et al. (2016) Marchand, P., Masson, J., Chabrier, G., et al. 2016, A&A, 592, A18
* Mathis et al. (1977) Mathis, J. S., Rumpl, W., & Nordsieck, K. H. 1977, ApJ, 217, 425
* Molinari et al. (2010) Molinari, S., Swinyard, B., Bally, J., et al. 2010, A&A, 518, L100
* Nakamura & Li (2008) Nakamura, F. & Li, Z.-Y. 2008, ApJ, 687, 354
* Ntormousi et al. (2016) Ntormousi, E., Hennebelle, P., André, P., & Masson, J. 2016, A&A, 589, A24
* Oishi & Mac Low (2006) Oishi, J. S. & Mac Low, M.-M. 2006, ApJ, 638, 281
* Padovani & Galli (2011) Padovani, M. & Galli, D. 2011, A&A, 530, A109
* Padovani et al. (2009) Padovani, M., Galli, D., & Glassgold, A. E. 2009, A&A, 501, 619
* Palmeirim et al. (2013) Palmeirim, P., André, P., Kirk, J., et al. 2013, A&A, 550, A38
* Panopoulou et al. (2017) Panopoulou, G. V., Psaradaki, I., Skalidis, R., Tassis, K., & Andrews, J. J. 2017, MNRAS, 466, 2529
* Planck Collaboration et al. (2016a) Planck Collaboration, Adam, R., Ade, P. A. R., et al. 2016a, A&A, 586, A135
* Planck Collaboration et al. (2016b) Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016b, A&A, 586, A136
* Priestley & Whitworth (2022) Priestley, F. D. & Whitworth, A. P. 2022, MNRAS, 509, 1494
* Schneider et al. (2010) Schneider, N., Csengeri, T., Bontemps, S., et al. 2010, A&A, 520, A49
* Seifried & Walch (2015) Seifried, D. & Walch, S. 2015, MNRAS, 452, 2410
* Seifried et al. (2020) Seifried, D., Walch, S., Weis, M., et al. 2020, MNRAS, 497, 4196
* Shen et al. (2013) Shen, S., Madau, P., Guedes, J., et al. 2013, ApJ, 765, 89
* Shimajiri et al. (2019) Shimajiri, Y., André, P., Palmeirim, P., et al. 2019, A&A, 623, A16
* Shu (1983) Shu, F. H. 1983, ApJ, 273, 202
* Smith et al. (2014) Smith, R. J., Glover, S. C. O., & Klessen, R. S. 2014, MNRAS, 445, 2900
* Soler (2019) Soler, J. D. 2019, A&A, 629, A96
* Soler & Hennebelle (2017) Soler, J. D. & Hennebelle, P. 2017, A&A, 607, A2
* Soler et al. (2013) Soler, J. D., Hennebelle, P., Martin, P. G., et al. 2013, ApJ, 774, 128
* Tilley & Pudritz (2007) Tilley, D. A. & Pudritz, R. E. 2007, MNRAS, 382, 73
* Toci & Galli (2015a) Toci, C. & Galli, D. 2015a, MNRAS, 446, 2110
* Toci & Galli (2015b) Toci, C. & Galli, D. 2015b, MNRAS, 446, 2118
* Ward-Thompson et al. (2010) Ward-Thompson, D., Kirk, J. M., André, P., et al. 2010, A&A, 518, L92
## Appendix A Boundaries of the filament
Figure 9: Time evolution of the mass flux according to the different methods
adopted here. Circles correspond to non-ideal MHD, stars to ideal MHD.
As there is no unique definition of the boundaries of a filament, we
calculated the mass flux for the reference case at different distances from
the filament axis to make a comparison with the observational data. These
positions are given by: (a) the minimum values of the velocity-divergence (or
$x$-velocity gradient), as in Priestley & Whitworth (2022); (b) $x=\pm 0.05$
pc from the axis of the filament (half of the filament width reported by
Arzoumanian et al. 2011); (c) the distance where the enclosed mass is 80% of
the total mass in the computational box; and (d) where the flow becomes
supersonic with a possible shock with Mach number $\mathcal{M}=1$. Fig. 9
shows the evolution of the mass fluxes at each time-step for each case, where
circles correspond to non-ideal MHD and stars to ideal MHD. Looking at the
ideal case (stars), we note that the difference is not very large after
0.3$t_{\rm ff}$. The time-averaged mass fluxes for non-ideal MHD are 206.15
M⊙ Myr-1 pc-2 for criterion (a); 194.02 M⊙ Myr-1 pc-2 for criterion (b);
715.87 M⊙ Myr-1 pc-2 for criterion (c); and 438.38 M⊙ Myr-1 pc-2
($\mathcal{M}=1$) for criterion (d).
Observational studies, such as in Palmeirim et al. (2013), provided an
estimate of the accretion rate in the B211 filament based on the observed mass
per unit length and the 12CO (1–0) inflow velocity, finding a value of $\sim
27$–50 M⊙ Myr-1 pc-1. With this accretion rate, it would take $\sim 1$–2 Myr
to form the B211 filament, in reasonable agreement with the free-fall time of
$\sim 1.3$ Myr of our model. Since the accretion rate is estimated in the
observations at a distance of 0.4 pc from the filament axis, the width-based
method (criterion b) is not appropriate for a comparison, because of the
smaller radius used and because the width of the filament changes with time.
The methods based on the Mach number (d) and on $0.8M_{\rm tot}$ (c) are also
not well suited for a comparison, as they assume a position too close to the
filament center. On the other hand, the velocity-divergence method (a), which
is the one we adopt, shows the greatest similarity to the observations: the
observed inflow velocity is $\sim 0.6$–1.1 km s-1, whereas in our model the
velocity along the $x$-axis at the free-fall time $t=t_{\rm ff}$ is $\sim 1.2$
km s-1 at $\sim 0.1$ pc from the center of the filament.
# Information geometry and synchronization phase transition in Kuramoto model
Artem Alexandrov <EMAIL_ADDRESS> Moscow Institute of Physics and Technology,
Dolgoprudny, 141700, Russia; Laboratory of Complex Networks, Brain and
Consciousness Research Center, Moscow, Russia
Alexander Gorsky Moscow Institute of Physics and Technology, Dolgoprudny,
141700, Russia; Institute for Information Transmission Problems, Moscow,
127994, Russia; Laboratory of Complex Networks, Brain and Consciousness
Research Center, Moscow, Russia
###### Abstract
We discuss how synchronization in the Kuramoto model can be treated in terms
of information geometry. We argue that the Fisher information is sensitive to
the synchronization transition; specifically, the components of the Fisher
metric diverge at the critical point. Our approach is based on the recently
proposed relation between the Kuramoto model and geodesics on hyperbolic space.
## I Introduction
The Kuramoto model is the paradigmatic model for synchronization phenomena in
nonlinear systems. Despite its deceptively simple form, many open questions
remain concerning different aspects of the model. Some of these aspects
concern synchronization in systems with complicated topology, comprehensively
described in Rodrigues _et al._ (2016). Other aspects focus on the rigorous
description of the Kuramoto model with all-to-all couplings (the complete
graph case). It was shown that in the case of identical oscillators the
Kuramoto model exhibits low-dimensional dynamics. The emergent dimensional
reduction occurs due to the existence of integrals of motion, initially
discovered by trial and error in the papers Watanabe and Strogatz (1993,
1994). These integrals of motion were investigated comprehensively in the work
Marvel _et al._ (2009), where it was demonstrated that they emerge due to the
invariance of the dynamics under Möbius transformations. The Kuramoto dynamics
can thus be represented as motion on a Möbius group orbit. Moreover, it was
recently noticed that the corresponding low-dimensional dynamics can be
described in terms of gradient flows on a two-dimensional hyperbolic manifold
Chen _et al._ (2017).
However, the origin of the hyperbolic manifold was not established. At first
glance, this manifold simply inherits the symmetries of the Möbius group and
has no special features. We shall instead interpret the hyperbolic manifold as
induced by probability measures, that is, as a statistical manifold (see Amari
(2016) for a review). In particular, the AdS2 metric emerges immediately for
the shifted Gaussian distribution. In a generic situation the hyperbolic
metric emerges as the Fisher metric on the parameter space. The Fisher metric
is the probabilistic counterpart of the quantum metric familiar from quantum
mechanics, where one introduces the so-called quantum geometric tensor, which
has the quantum metric as its real part and the Berry curvature as its
imaginary part Provost and Vallee (1980); Berry (1984). It is built from the
quantum wave function depending on some parameter space, but its classical
analogue exists as well.
In this study we focus on the hyperbolic-geometry description of the Kuramoto
model and its connection to information geometry. In particular, we explain
the hyperbolic geometry involved as the geometry of a statistical manifold. We
shall argue that the equation of motion for the Kuramoto model has the form of
a gradient flow for the Kullback-Leibler divergence. The properties of the
induced metric, or, in other words, the response of the system to a
perturbation of its parameters, serve as an indicator of a phase transition
Zanardi and Paunković (2006); Zanardi _et al._ (2007a); You _et al._ (2007);
Garnerone _et al._ (2009). In particular, the diagonal elements of the quantum
metric provide a good tool for locating critical curves, as has been
demonstrated in several examples Kolodrubetz _et al._ (2013, 2017);
Pandey _et al._ (2020); Buonsante and Vezzani (2007); Dey _et al._ (2012).
We shall demonstrate in a clear-cut manner that the components of the Fisher
information metric similarly identify the critical curve for the phase
transition in the Kuramoto-Sakaguchi model. Hence the Fisher information
metric yields a proper order parameter for the synchronization phase
transition.
The paper is organized as follows. In section II we recall the Kuramoto model,
focusing on its description in terms of the Möbius group. The dimensional
reduction provides a description of the model in terms of the motion of a
point on a hyperbolic disc. In section III we present the key aspects of
information geometry. In section IV we apply information geometry to the
Kuramoto and Kuramoto-Sakaguchi models. We treat the coordinates of a point on
the disc as the two parameters on which the distributions involved depend. It
turns out that the Cauchy distribution governs the Kuramoto model, while the
von Mises distribution is relevant for the Kuramoto-Sakaguchi model. In both
cases we demonstrate that the singularity of the Fisher metric coincides with
the synchronization transition, adding a new example of the relation between
singularities of the Fisher metric and phase transitions. In section V we
formulate the results and open questions. General comments concerning the
relations between the distributions and the possible role of $q$-Gaussian
distributions in the Kuramoto model can be found in the Appendices.
## II Brief review of Kuramoto model
The Kuramoto model on a complete graph with identical oscillators has the
following equations of motion,
$\dot{\theta}_{i}=\omega+\frac{\lambda}{N}\sum_{j=1}^{N}\sin\left(\theta_{j}-\theta_{i}\right),$
(1)
where $N$ is the number of oscillators, $\theta_{i}\in[0,2\pi)$, $\lambda>0$
is the coupling constant, and $\omega$ is the oscillator eigenfrequency. The
order parameter (synchronization measure) is defined by
$r(t)=\frac{1}{N}\sum_{j=1}^{N}z_{j}(t),$ (2)
where we introduced the complex variables $z_{i}(t)=\exp(i\theta_{i}(t))$. The
model exhibits a so-called _synchronization transition_ from a given initial
state to a synchronized state with order-parameter amplitude
$|r|\rightarrow 1$, which does not depend on time. In terms of particles on
the unit circle, this means that in the synchronized state all the particles
are concentrated at one point that moves with a certain average frequency.
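The transition is easy to observe numerically. The following minimal sketch (not code from the paper) integrates eq. (1) with a forward Euler step, using the identity $\frac{\lambda}{N}\sum_{j}\sin(\theta_{j}-\theta_{i})=\lambda\,\mathrm{Im}\,(re^{-i\theta_{i}})$; the values of $N$, $\omega$, $\lambda$, and the time step are illustrative.

```python
import numpy as np

def kuramoto_order_parameter(theta0, omega, lam, dt=0.01, steps=2000):
    """Euler integration of eq. (1) on a complete graph; returns |r(t)|."""
    theta = theta0.copy()
    r_abs = np.empty(steps)
    for n in range(steps):
        r = np.mean(np.exp(1j * theta))                 # order parameter, eq. (2)
        theta = theta + dt * (omega + lam * np.imag(r * np.exp(-1j * theta)))
        r_abs[n] = abs(r)
    return r_abs

rng = np.random.default_rng(0)
N, omega, lam = 500, 1.0, 2.0                           # illustrative values
theta0 = rng.uniform(0.0, 2.0 * np.pi, N)
r_abs = kuramoto_order_parameter(theta0, omega, lam)
print(r_abs[0], r_abs[-1])   # |r| grows from O(1/sqrt(N)) toward 1
```

Starting from uniformly random phases, $|r(0)|\sim N^{-1/2}$, and the incoherent state is unstable for identical oscillators with $\lambda>0$, so $|r|$ approaches 1.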
The equations of motion become
$\dot{z}_{i}=i\omega
z_{i}+\frac{\lambda}{2}\left(r-\overline{r}z_{i}^{2}\right).$ (3)
The evolution of $z_{i}=z_{i}(t)$ can be represented as the action of Möbius
transformation,
$z_{i}(t)=\mathcal{M}(z_{i}^{0}),\quad z_{i}^{0}\equiv\exp(i\theta_{i}(t=0)),$
(4)
where the parameters of Möbius transformation depend on time. Following the
paper Chen _et al._ (2017), this transformation can be parametrized as
follows,
$\mathcal{M}(z)=\zeta\frac{z-w}{1-\overline{w}z},$ (5)
where $\zeta\in\mathbb{C}$, $|\zeta|=1$, $w\in\mathbb{C}$, $|w|<1$. The time
derivative of $z_{i}$ is
$\dot{z}_{i}=-\frac{\dot{w}\zeta}{1-|w|^{2}}+\left(\dot{\zeta}\overline{\zeta}+\frac{\dot{\overline{w}}w-\dot{w}\overline{w}}{1-|w|^{2}}\right)z_{i}+\frac{\dot{\overline{w}}\overline{\zeta}}{1-|w|^{2}}z_{i}^{2}.$ (6)
Matching the RHS of eq. (6) and the RHS of eq. (3), we obtain the system of
equations,
$\begin{gathered}\dot{w}=-\frac{\lambda}{2}\left(1-\left|w\right|^{2}\right)\overline{\zeta}r,\\\
\dot{\zeta}=i\omega\zeta-\frac{\lambda}{2}\left(\overline{w}r-w\overline{r}\zeta^{2}\right).\end{gathered}$
(7)
The key feature of these equations is that they decouple:
$\overline{\zeta}r=\overline{\zeta}\frac{1}{N}\sum_{j=1}^{N}\zeta\frac{z_{j}^{0}-w}{1-\overline{w}z_{j}^{0}}=\left|\zeta\right|^{2}\frac{1}{N}\sum_{j=1}^{N}\frac{z_{j}^{0}-w}{1-\overline{w}z_{j}^{0}}=\frac{1}{N}\sum_{j=1}^{N}\frac{z_{j}^{0}-w}{1-\overline{w}z_{j}^{0}};$ (8)
hence the equation for $w$ does not involve the variable $\zeta$.
This equation is our object of interest; it was shown previously Chen _et
al._ (2017) that it can be rewritten as a gradient flow on the hyperbolic
disk $\mathbb{D}=\{z:|z|<1\}$,
$\dot{w}=-\frac{\lambda}{2}\left(1-\left|w\right|^{2}\right)^{2}\frac{\partial}{\partial\overline{w}}\left\{\frac{1}{N}\sum_{j=1}^{N}\ln P\left(w,z_{j}^{0}\right)\right\},$ (9)
where $P=P(w,z_{0})$ is the Poisson kernel,
$P(w,z_{0})=\frac{1-|w|^{2}}{|w-z_{0}|^{2}},\quad z_{0}\in\mathbb{S}^{1},\ w\in\mathbb{D},\ \mathbb{S}^{1}=\{z:|z|=1\}.$ (10)
To see how the Poisson kernel appears, one substitutes expression (8) into
eq. (7) and integrates with respect to $\overline{w}$. In the paper Chen _et
al._ (2017) the authors already noticed that for $|w|\neq 1$ the fixed point
of eq. (9) coincides with the conformal barycenter of the $N$ points on
$\mathbb{S}^{1}$, whose existence and uniqueness are guaranteed Cantarella and
Schumacher (2022). Note also that the $|w|\rightarrow 1$ limit corresponds to
the synchronized state.
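The equivalence of the decoupled $w$-equation, eqs. (7)-(8), with the gradient-flow form (9) can be checked numerically. The sketch below compares the two right-hand sides at an arbitrary point, evaluating $\partial/\partial\overline{w}$ of the (real) mean log Poisson kernel as the Wirtinger derivative $\frac{1}{2}(\partial_{u}+i\partial_{v})$, $w=u+iv$, by central finite differences; the phases and the value of $w$ are arbitrary test data.

```python
import numpy as np

def mean_lnP(w, z0):
    """Mean of ln P(w, z0) over the initial phases, with P the Poisson kernel, eq. (10)."""
    return np.mean(np.log((1.0 - abs(w) ** 2) / np.abs(w - z0) ** 2))

def rhs_gradient(w, z0, lam, h=1e-6):
    """RHS of the gradient flow, eq. (9), via a finite-difference Wirtinger derivative."""
    du = (mean_lnP(w + h, z0) - mean_lnP(w - h, z0)) / (2.0 * h)
    dv = (mean_lnP(w + 1j * h, z0) - mean_lnP(w - 1j * h, z0)) / (2.0 * h)
    return -0.5 * lam * (1.0 - abs(w) ** 2) ** 2 * 0.5 * (du + 1j * dv)

def rhs_direct(w, z0, lam):
    """RHS of the decoupled w-equation, eqs. (7)-(8)."""
    return -0.5 * lam * (1.0 - abs(w) ** 2) * np.mean((z0 - w) / (1.0 - np.conj(w) * z0))

rng = np.random.default_rng(2)
z0 = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, 10))
w = 0.3 + 0.2j
print(rhs_gradient(w, z0, 1.0), rhs_direct(w, z0, 1.0))   # the two agree
```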
Using this representation of the Kuramoto dynamics, we now discuss several
features of the model related to the interconnections between information
geometry and synchronization.
## III Information geometry and Kuramoto model
### III.1 Basic concepts of information geometry
In this section we discuss the basic concepts of information geometry and
develop an information-geometric view of the Kuramoto model. We mainly focus
on the continuum $N\rightarrow\infty$ limit of the Kuramoto model.
To begin with, let us provide some essentials of information geometry. We
follow the book Amari (2016), where more details can be found. Consider a
family of probability density functions $p=p(\xi;x)$, where $\xi$ is the
vector of parameters and $x$ is the vector of variables. Over all possible
values of $\xi$, this family forms the manifold $\mathbb{M}=\{p(\xi;x)\}$,
called a statistical manifold. To quantify how two distributions
$p(\xi_{1};x)$ and $p(\xi_{2};x)$ differ from each other, one can consider a
divergence function $D[\xi_{1};\xi_{2}]$. It is possible to define different
divergence functions, but they are meaningful only if they are invariant and
decomposable Amari (2009).
A large class of such divergence functions consists of the standard
$f$-divergences, which can be represented as
$D_{f}[p;q]=\int dx\,p(x)f\left(\frac{q(x)}{p(x)}\right),\quad f(1)=0,\ f^{\prime\prime}(1)=1,$ (11)
where $f$ is a convex function. For $f(v)=-\ln v$ the corresponding
$f$-divergence is nothing but the Kullback-Leibler divergence.
Another important type of divergence is the so-called $\alpha$-divergence,
defined by (see Amari (2016) and Amari (2009) for details)
$f_{\alpha}(v)=\frac{4}{1-\alpha^{2}}\left[1-v^{(1+\alpha)/2}\right],\quad\alpha\neq
1.$ (12)
An important property of the divergence function is that, for two
sufficiently close points, $\xi_{1}=\xi+d\xi$ and $\xi_{2}=\xi$, it has the
following Taylor expansion at the point $\xi$,
$D[\xi_{1};\xi_{2}]=\frac{1}{2}g_{ij}(\xi)d\xi_{i}d\xi_{j}+\mathcal{O}(|d\xi|^{3}),$
(13)
where $g_{ij}$ is the metric of the manifold $\mathbb{M}$. Any standard
$f$-divergence gives the same metric $g_{ij}$, which coincides with the Fisher
metric $g^{F}_{ij}$,
$g_{ij}^{\text{F}}=\int dx\,p(x;\xi)\frac{\partial\ln
p(x;\xi)}{\partial\xi_{i}}\frac{\partial\ln p(x;\xi)}{\partial\xi_{j}}.$ (14)
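As a sanity check of eq. (14), the sketch below evaluates the Fisher metric of the Gaussian family $p(x;\mu,\sigma)$ by direct numerical integration and recovers the standard result $g^{\text{F}}=\mathrm{diag}(1/\sigma^{2},\,2/\sigma^{2})$; the grid resolution and integration span are arbitrary numerical choices.

```python
import numpy as np

def fisher_metric_gaussian(mu, sigma, n_grid=20001, span=12.0):
    """Numerical Fisher metric, eq. (14), for the Gaussian in (mu, sigma) coordinates."""
    x = np.linspace(mu - span * sigma, mu + span * sigma, n_grid)
    dx = x[1] - x[0]
    p = np.exp(-((x - mu) ** 2) / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    # score functions: d ln p / d mu and d ln p / d sigma
    scores = [(x - mu) / sigma**2, ((x - mu) ** 2 - sigma**2) / sigma**3]
    g = np.array([[np.sum(p * a * b) * dx for b in scores] for a in scores])
    return g

g = fisher_metric_gaussian(0.0, 2.0)
print(g)   # close to diag(1/sigma^2, 2/sigma^2) = diag(0.25, 0.5)
```

Note that $(d\mu^{2}+2\,d\sigma^{2})/\sigma^{2}$ is a hyperbolic metric of constant negative curvature, consistent with the AdS2 statement in the introduction.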
Strictly speaking, to completely describe the statistical manifold
$\mathbb{M}$ we need one more object, called the skewness tensor $T^{\text{F}}$,
$T_{ijk}^{\text{F}}=\int dx\,p(x;\xi)\frac{\partial\ln p(x;\xi)}{\partial\xi_{i}}\frac{\partial\ln p(x;\xi)}{\partial\xi_{j}}\frac{\partial\ln p(x;\xi)}{\partial\xi_{k}}.$ (15)
Finally, the statistical manifold is the triplet $(\mathbb{M},\,g,\,T)$. For a
given standard $f$-divergence, the metric tensor $g$ (obtained from the Taylor
expansion) coincides with $g^{\text{F}}$, whereas the corresponding skewness
tensor is given by $T=\alpha T^{\text{F}}$ with
$\alpha=2f^{\prime\prime\prime}(1)+3$.
The pair $g^{\text{F}}$ and $T^{\text{F}}$ allows one to define the so-called
$\alpha$-connections on the statistical manifold, given by
$\Gamma^{\pm\alpha}_{ijk}=\Gamma_{ijk}^{0}\mp\frac{\alpha}{2}T_{ijk}^{\text{F}}.$
(16)
The statistical manifold is called $\alpha$-flat if the corresponding
$\alpha$-Riemannian curvature tensor vanishes everywhere, with the condition
$R_{ijkl}^{\alpha}=R_{ijkl}^{-\alpha}$. Note that, in contrast to a usual
Riemannian manifold, a statistical manifold can have non-zero torsion. It was
shown that such $\alpha$-flat manifolds have a dual affine structure, which
gives a dual (via the Legendre transformation) coordinate system on the
statistical manifold. Such a manifold is also called a dual flat manifold. For
a dual flat manifold, the metric and skewness tensors are given by
$g_{ij}^{\text{F}}=\partial_{i}\partial_{j}\psi(\xi),\quad T_{ijk}^{\text{F}}=\partial_{i}\partial_{j}\partial_{k}\psi(\xi),\quad\partial_{i}\equiv\frac{\partial}{\partial\xi_{i}},$
(17)
where $\psi(\xi)$ is a convex function. For a dual flat manifold, the
canonical divergence is the so-called Bregman divergence.
### III.2 Gradient flow on statistical manifold
For a dual flat manifold, the existence of $\psi(\xi)$ allows one to consider
the gradient flow with respect to the Fisher metric Nakamura (1993); Fujiwara
and Amari (1995),
$\frac{d\xi}{dt}=-\left(g^{\text{F}}\right)^{-1}\partial_{\xi}\psi(\xi).$ (18)
Some examples of gradient flows on statistical manifolds are discussed in
Nakamura (1993).
The simplest illustrative example of such a gradient flow corresponds to the
statistical manifold of Gaussian distributions. It is straightforward to
check that the manifold formed by Gaussian distributions,
$\mathbb{M}_{\text{G}}=\{p(\mu,\sigma;x)\,|\,\mu\in\mathbb{R},\,\sigma>0\}$, is
$\alpha$-flat for $\alpha=\pm 1$. One can also generalize this statement to
the exponential family,
$p(\xi;x)=\exp\left[\xi_{i}g_{i}(x)+h(x)-\psi(\xi)\right],$ (19)
which forms a dual flat manifold with canonical coordinate system $\xi$ and
potential function $\psi=\psi(\xi)$. In the case of the Gaussian distribution,
the coordinate system is $(\xi_{1},\xi_{2})$ with $\xi_{1}=\mu/\sigma^{2}$,
$\xi_{2}=-1/(2\sigma^{2})$ and
$\psi=\mu^{2}/(2\sigma^{2})+\ln\left(\sqrt{2\pi}\sigma\right)$. For the
exponential family, the Bregman divergence coincides with the
Kullback-Leibler divergence. The gradient flow for the Gaussian statistical
manifold reads
$\dot{\mu}=-\mu;\quad\dot{\sigma}=-\frac{\sigma^{2}+\mu^{2}}{2\sigma}.$ (20)
Note that this system possesses a conserved integral of motion,
$H=\sigma^{2}/\mu-\mu$, which corresponds to the arc radius.
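The system (20) and its integral of motion can be checked numerically; this sketch (our illustration, with initial data chosen so that $\sigma_{0}>\mu_{0}$ and the flow stays in $\sigma>0$) integrates eq. (20) with RK4 and monitors $H=\sigma^{2}/\mu-\mu$.

```python
import numpy as np

# Sketch: RK4 integration of the gradient flow (20); the quantity
# H = sigma^2/mu - mu should stay constant along the trajectory.
def rhs(y):
    mu, sigma = y
    return np.array([-mu, -(sigma**2 + mu**2) / (2*sigma)])

y = np.array([1.0, 2.0])            # (mu0, sigma0), illustrative values
H0 = y[1]**2 / y[0] - y[0]
dt = 1e-4
for _ in range(20000):              # integrate up to t = 2
    k1 = rhs(y); k2 = rhs(y + dt/2*k1)
    k3 = rhs(y + dt/2*k2); k4 = rhs(y + dt*k3)
    y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
H1 = y[1]**2 / y[0] - y[0]
mu_exact = 1.0 * np.exp(-2.0)       # mu(t) = mu0 exp(-t) follows directly
```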
### III.3 Cauchy distribution
One can show that the family of univariate elliptic distributions forms
$\alpha$-flat statistical manifolds Mitchell (1988). However, there is one
orphan distribution in this family: the Cauchy distribution,
$p_{\text{C}}(\gamma,\beta;x)=\frac{1}{\pi}\frac{\gamma}{\gamma^{2}+(x-\beta)^{2}},\,\gamma>0,\,\beta\in\mathbb{R},$
(21)
which can be written in a form more convenient for our purposes, McCullagh's
representation McCullagh (1992),
$p_{\text{C}}(w;x)=\frac{\operatorname{\mathrm{Im}\,}w}{\pi|x-w|^{2}},\quad
w=\beta+i\gamma.$ (22)
The corresponding Fisher metric reads
$g_{ij}^{\text{F}}=\frac{1}{2\gamma^{2}}\begin{pmatrix}1&0\\0&1\end{pmatrix}.$ (23)
It is straightforward to verify that for the Cauchy distribution the skewness
tensor $T^{\text{F}}$ vanishes identically, so at first glance no value of
$\alpha$ yields a dual flat structure.
However, we can interpret the Cauchy distribution as a member of the
$q$-Gaussian family, whose probability density function is defined as
$\mathcal{N}_{q}(\beta,x)=\frac{\sqrt{\beta}}{C_{q}}\exp_{q}\left\{-\beta x^{2}\right\},\quad\beta>0,$ (24)
where $C_{q}$ is a normalization constant. The Cauchy distribution can be
considered as a $q$-Gaussian with $q=2$, as was done in Nielsen (2020)
(we discuss some properties of $q$-Gaussians in the Appendix),
$p_{\text{C}}(\gamma,\beta;x)=\left.\mathcal{N}_{q}\left(\gamma^{-2},(x-\beta)^{2}\right)\right|_{q=2}.$
(25)
Using the definition of the $q$-exponential, it is straightforward to verify
that the distribution function $p_{\text{C}}(\gamma,\beta;x)$ can also be
represented as
$p_{\text{C}}(\gamma,\beta;x)=\left.\exp_{q}\left(\xi_{i}g_{i}(x)-\psi(\xi)\right)\right|_{q=2}$
(26)
with $\xi_{1}=2\pi\beta/\gamma$, $\xi_{2}=-\pi/\gamma$, $g_{1}(x)=x$,
$g_{2}(x)=x^{2}$, and the potential function
$\psi(\xi)=-\frac{\pi^{2}}{\xi_{2}}-\frac{\xi_{1}^{2}}{4\xi_{2}}-1.$ (27)
Next, it was shown that the following divergence
$D_{\text{B-T}}[\xi_{1};\xi_{2}]=\left(\int dx\,p(\xi_{2};x)^{2}\right)^{-1}\left[\int dx\,\frac{p(\xi_{2};x)^{2}}{p(\xi_{1};x)}-1\right]$ (28)
is a Bregman divergence, i.e. it describes a dual flat manifold. This
divergence is called the Bregman-Tsallis divergence. The metric corresponding
to the Bregman-Tsallis divergence does not coincide with the Fisher metric and
can be computed as
$g_{ij}^{q}=\frac{\partial^{2}\psi(\xi)}{\partial\xi_{i}\partial\xi_{j}},$
(29)
where $\xi$ corresponds to the canonical coordinate system on the dual flat
manifold. In the case of the Cauchy distribution, we have
$g_{ij}^{q}=\frac{2\pi}{\gamma}\begin{pmatrix}1&0\\0&1\end{pmatrix},$ (30)
and it is clear that $g^{q}$ and $g^{\text{F}}_{\text{C}}$ are related via a
conformal transformation. The dual coordinates $\eta_{i}$ are defined by
$\eta_{i}=\frac{\partial\psi(\xi)}{\partial\xi_{i}}=\left(-\frac{\xi_{1}}{2\xi_{2}},\,\frac{4\pi^{2}+\xi_{1}^{2}}{4\xi_{2}^{2}}\right)=(\beta,\,\gamma^{2}+\beta^{2})$
(31)
with the corresponding dual potential,
$\phi(\eta)=1-2\pi\sqrt{\eta_{2}-\eta_{1}^{2}}.$ (32)
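The dual structure (27)-(32) can be cross-checked numerically: the gradient of $\psi$ must reproduce $\eta=(\beta,\,\gamma^{2}+\beta^{2})$, and the Legendre transform $\xi\cdot\eta-\psi(\xi)$ must reproduce $\phi(\eta)$. A sketch (our illustration, arbitrary $\beta$, $\gamma$):

```python
import numpy as np

# Sketch: check eta_i = d psi / d xi_i and phi = xi.eta - psi (Legendre)
# for the Cauchy potential (27), with xi_1 = 2 pi beta/gamma, xi_2 = -pi/gamma.
beta, gamma, eps = 0.4, 1.1, 1e-6
xi1, xi2 = 2*np.pi*beta/gamma, -np.pi/gamma

def psi(x1, x2):
    return -np.pi**2/x2 - x1**2/(4*x2) - 1

eta1 = (psi(xi1 + eps, xi2) - psi(xi1 - eps, xi2)) / (2*eps)  # expect beta
eta2 = (psi(xi1, xi2 + eps) - psi(xi1, xi2 - eps)) / (2*eps)  # expect gamma^2 + beta^2

phi_legendre = xi1*eta1 + xi2*eta2 - psi(xi1, xi2)
phi_closed   = 1 - 2*np.pi*np.sqrt(eta2 - eta1**2)            # eq. (32)
```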
The gradient flow (18) for the Cauchy distribution reads
$\begin{cases}\dot{\xi}_{1}=-\displaystyle\frac{\xi_{1}}{8\pi^{2}}\left(4\pi^{2}+\xi_{1}^{2}\right),\\[10.0pt]
\dot{\xi}_{2}=\displaystyle\frac{\xi_{2}}{8\pi^{2}}\left(4\pi^{2}-\xi_{1}^{2}\right)\end{cases}\leftrightarrow\begin{cases}\dot{\beta}=-\beta,\\[10.0pt]
\dot{\gamma}=\displaystyle\frac{\beta^{2}-\gamma^{2}}{2\gamma}\end{cases}$
(33)
(the overall sign in the first equation is fixed by consistency with
$\xi_{1}=2\pi\beta/\gamma$ and $\dot{\beta}=-\beta$).
Note that these equations are quite similar to the Gaussian case, eq. (20).
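As a consistency check (our addition), along the $(\beta,\gamma)$ form of eq. (33) the dual coordinates $\eta=(\beta,\,\gamma^{2}+\beta^{2})$ of eq. (31) evolve linearly, $\dot{\eta}=-\eta$, as one expects for a gradient flow in a dual flat setting; in particular $(\gamma^{2}+\beta^{2})/\beta$ is conserved. The following sketch verifies this at random points:

```python
import numpy as np

# Sketch: verify that the flow beta' = -beta, gamma' = (beta^2-gamma^2)/(2 gamma)
# is linear in the dual coordinates eta = (beta, gamma^2 + beta^2): eta' = -eta.
rng = np.random.default_rng(1)
max_err = 0.0
for _ in range(100):
    beta, gamma = rng.uniform(0.1, 3.0, size=2)
    beta_dot  = -beta
    gamma_dot = (beta**2 - gamma**2) / (2*gamma)
    eta1_dot  = beta_dot                             # d(beta)/dt
    eta2_dot  = 2*beta*beta_dot + 2*gamma*gamma_dot  # d(gamma^2 + beta^2)/dt
    err = max(abs(eta1_dot + beta), abs(eta2_dot + gamma**2 + beta**2))
    max_err = max(max_err, err)
```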
Having introduced the relevant concepts from information geometry, we can now
turn to the interpretation of the Kuramoto model in these terms.
## IV Fisher metric as the order parameter for synchronization transition
### IV.1 Fisher metric and phase transitions
The quantum geometric tensor has become an effective tool for the analysis of
topological and critical phenomena in complicated many-body systems. It is
defined as
$Q_{ij}=\langle\partial_{i}\Psi|\partial_{j}\Psi\rangle-\langle\partial_{i}\Psi|\Psi\rangle\langle\Psi|\partial_{j}\Psi\rangle,$
(34)
where the indices $i,j$ correspond to the coordinates on the parameter space.
Its real part is the quantum metric, while its imaginary part is the Berry
curvature Provost and Vallee (1980); Berry (1984),
$Q_{ij}=g_{ij}+iF_{ij}.$ (35)
The quantum metric can be thought of as a two-point correlator or fidelity,
whose behavior is expected to quantify the properties of the system Zanardi
and Paunković (2006); Zanardi _et al._ (2007a); You _et al._ (2007). This
generic argument can be made more precise if we focus on the geometry of the
metric: it was argued in Zanardi and Paunković (2006); Zanardi _et al._
(2007a) that a singularity of the metric or of the Ricci curvature corresponds
to the position of the phase transition. A classification of the induced
geometries attributed to the ground state was found in Kolodrubetz _et al._
(2013). The relation between the singularities of the metric and phase
transitions is not a completely rigorous statement, but there is a convincing
list of examples, e.g. Buonsante and Vezzani (2007); Dey _et al._ (2012) for
the Dicke and Hubbard models. A review of this subject can be found in
Kolodrubetz _et al._ (2017). The parameters of the Hamiltonian or the momenta
can be used to evaluate the components of the quantum metric. More recently it
was recognized that the behavior of the metric near the singular point can
distinguish integrable from chaotic behavior Pandey _et al._ (2020). The
singularities of the different components of the metric provide information
about criticality in the different directions of the parameter space.
The Fisher information metric parallels the quantum metric in quantum
mechanics, with the probability substituting for the squared modulus of the
wave function. The probability obeys the Fokker-Planck (FP) equation instead
of the Schrödinger equation, hence below we focus on solutions of FP
equations. The notion of a phase transition in stochastic problems also
requires some care; however, the relation between quantum and semiclassical
criticalities is seen in the temperature component of the information metric
Zanardi _et al._ (2007b). It turns out that the synchronization phase
transition provides another clear-cut example in which the behavior of the
Fisher metric yields an identification of the phase transition in a classical
system. In this Section we demonstrate that the representation of the
Kuramoto model in terms of distributions does the job. In the pure Kuramoto
model we shall see a kind of classical version of a quantum phase transition
at zero temperature, while in the Kuramoto-Sakaguchi model we have an
effective temperature due to noise.
### IV.2 Fisher metric in Kuramoto model
In the paper Chen _et al._ (2017) the authors introduced a hyperbolic
description of the Kuramoto model. We argue that the hyperbolic space arises
as the statistical manifold of wrapped Cauchy distributions. The normalized
version of the Poisson kernel is given by
$p_{\text{wC}}(w;z)=\frac{1}{2\pi}\frac{1-|w|^{2}}{|w-z|^{2}},\quad|z|=1,\quad\oint_{|z|=1}dz\,\frac{p_{\text{wC}}(w;z)}{iz}=1,$ (36)
where wC denotes "wrapped Cauchy". It is convenient to use the polar
representation, $w=re^{i\phi}$ and $z=e^{i\theta}$, which gives
$p_{\text{wC}}(r,\phi;\theta)=\frac{1}{2\pi}\frac{1-r^{2}}{1-2r\cos(\theta-\phi)+r^{2}},\quad 0\leq r<1,\quad\phi=\phi\,\text{mod}\,2\pi.$ (37)
This is a PDF with two parameters, $r$ and $\phi$. All such PDFs form the
manifold $\mathbb{M}_{\text{wC}}$,
$\mathbb{M}_{\text{wC}}=\left\{p_{\text{wC}}(r,\phi;\theta)\,|\,r\in[0,1),\,\phi=\phi\,\text{mod}\,2\pi\right\},$
and the Fisher metric $g_{ij}^{\text{F}}$ is given by (14), which can be
represented in the more compact form
$g_{ij}^{\text{F}}=4\int_{-\pi}^{+\pi}d\theta\,\partial_{i}\sqrt{p_{\text{wC}}(r,\phi;\theta)}\,\partial_{j}\sqrt{p_{\text{wC}}(r,\phi;\theta)}.$
It is straightforward to compute this integral and obtain the components of
the metric,
$g_{rr}^{\text{F}}=\frac{2}{(1-r^{2})^{2}},\quad g_{\phi\phi}^{\text{F}}=\frac{2r^{2}}{(1-r^{2})^{2}},\quad g_{r\phi}^{\text{F}}=g_{\phi r}^{\text{F}}=0;$ (38)
hence, reintroducing $w=re^{i\phi}$, we can write
$ds^{2}_{\text{F}}=\frac{2\left(dr^{2}+r^{2}d\phi^{2}\right)}{(1-r^{2})^{2}}\equiv\frac{2\,dw\,d\overline{w}}{(1-|w|^{2})^{2}},$
(39)
which coincides with the hyperbolic disk metric. Also, a direct computation of
(15) shows that $T^{\text{F}}\equiv 0$. This is no surprise: the wrapped
Cauchy distribution is simply related to the usual one, and the manifold
formed by the usual Cauchy distributions is nothing but the upper half-plane
model of the AdS2 space. The wrapped Cauchy distributions also form the AdS2
space, but in terms of the Poincaré disk model. The two models can be mapped
to each other via a conformal transformation.
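The components (38) can be confirmed directly from definition (14); a sketch (our illustration, arbitrary $(r,\phi)$):

```python
import numpy as np

# Sketch: Fisher metric of the wrapped Cauchy family (37) by finite
# differences of ln p; expect g_rr = 2/(1-r^2)^2, g_pp = 2 r^2/(1-r^2)^2.
r, phi, eps = 0.5, 0.3, 1e-5
theta = np.linspace(-np.pi, np.pi, 20001)

def logp(r, phi):
    return np.log((1 - r**2)/(2*np.pi)) \
         - np.log(1 - 2*r*np.cos(theta - phi) + r**2)

p = np.exp(logp(r, phi))
dl_dr = (logp(r + eps, phi) - logp(r - eps, phi)) / (2*eps)
dl_dp = (logp(r, phi + eps) - logp(r, phi - eps)) / (2*eps)

g_rr = np.trapz(p * dl_dr**2, theta)       # expect 2/(1-r^2)^2
g_pp = np.trapz(p * dl_dp**2, theta)       # expect 2 r^2/(1-r^2)^2
g_rp = np.trapz(p * dl_dr * dl_dp, theta)  # expect 0
```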
We can also represent the wrapped Cauchy distribution as a $q$-exponential
with $q=2$, which can be checked in a straightforward way. Computing the
canonical coordinates $(\xi_{1},\xi_{2})$ and then deriving the potential
function, we can find the metric $g_{ij}^{q}$. In
$(w,\,\overline{w})$-coordinates the metric obtained from the corresponding
potential function is given by
$g_{ij}^{q}=\frac{2\pi}{1-|w|^{2}}\begin{pmatrix}1&0\\0&1\end{pmatrix}\rightarrow
ds_{q}^{2}=\frac{4\pi\,dw\,d\overline{w}}{1-|w|^{2}},$ (40)
so the $q$-metric is again conformally equivalent to the Fisher metric.
The wrapped Cauchy distribution plays a crucial role in the Kuramoto model
dynamics, since it is invariant under the Möbius group action. Let $z$ be a
random variable with wrapped Cauchy distribution, $z\sim p_{\text{wC}}(w;z)$,
and let $z^{\prime}=\mathcal{M}(z)$ be the image of $z$ under a Möbius
transformation $\mathcal{M}$. It was proven that if $z\sim
p_{\text{wC}}(w;z)$, then $z^{\prime}\sim
p_{\text{wC}}(\mathcal{M}(w);z^{\prime})$, i.e. wrapped Cauchy distributions
are closed with respect to the action of Möbius transformations. This fact
leads us to conclude that the wrapped Cauchy distribution is a kind of
universal distribution for the Kuramoto model dynamics. This implication
resonates with the role of the Ott-Antonsen (O-A) ansatz Ott and Antonsen
(2008).
We would like to emphasize that the Fisher information blows up in the
synchronized phase, which corresponds to the limit $|w|\rightarrow 1$. In the
Kuramoto-Sakaguchi model the relation between the singularity of the Fisher
metric and the phase transition will be made more transparent.
### IV.3 Fisher metric in Kuramoto-Sakaguchi model
Consider now the Kuramoto-Sakaguchi model Sakaguchi (1988), i.e. the Kuramoto
model with noise,
$\dot{\theta}_{i}=\omega_{i}+\frac{\lambda}{N}\sum_{j=1}^{N}\sin\left(\theta_{j}-\theta_{i}\right)+\eta_{i}(t),$
(41)
where $\eta_{i}=\eta_{i}(t)$ is a stochastic term with the properties
$\langle\eta_{i}(t)\rangle=0$ and
$\langle\eta_{i}(t)\eta_{j}(t^{\prime})\rangle=2D\delta_{ij}\delta(t-t^{\prime})$,
where $D>0$ is the noise amplitude. This model was initially considered by
Sakaguchi, who showed that there is a continuous phase transition for the
conventional order parameter $r$ with respect to the coupling constant
$\lambda$. The critical coupling can be obtained from the analysis of the
self-consistency equation or from the stability analysis of the incoherent
state, and is given by
$\frac{1}{\lambda_{c}}=\frac{D}{2}\int_{-\infty}^{+\infty}\frac{d\omega\,g(\omega)}{\omega^{2}+D^{2}}.$
(42)
Both methods deal with the Fokker-Planck equation, which arises in the
continuum limit of the Kuramoto-Sakaguchi model. In the case of identical
frequencies, i.e. $g(\omega)=\delta(\omega)$, the critical coupling becomes
$\lambda_{c}=2D$. Moreover, in this case one can easily find the stationary
solution of the Fokker-Planck equation in the rotating reference frame
Goldobin _et al._ (2018). The stationary solution $\rho_{0}$ reads
$\rho_{0}(\theta,\omega)=\frac{1}{2\pi I_{0}(\lambda|r|/D)}\exp\left\{\frac{\lambda|r|}{D}\cos\left(\arg r-\theta\right)\right\}.$ (43)
The self-consistency equation for the Kuramoto-Sakaguchi model is very simple,
$|r|=\frac{I_{1}(\lambda|r|/D)}{I_{0}(\lambda|r|/D)},$ (44)
where it is straightforward to see that a non-trivial solution exists for
$\lambda>\lambda_{c}=2D$ (see fig. 1). This equation is similar to the
self-consistency equation of the XY model and to that of the stationary
Hamiltonian mean field model. This is not a surprise, because the noise
strength $D$ plays the role of a temperature. In the case of non-identical
frequencies, the stationary distribution can be found explicitly Alexandrov
(2023), but the Fisher metric can be evaluated only numerically.
Figure 1: Continuous phase transition in Sakaguchi-Kuramoto model
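The self-consistency equation (44) is easy to explore numerically. The sketch below (our illustration) solves it by fixed-point iteration, using the integral representation of $I_{0,1}$ so that only numpy is required; it exhibits $r=0$ below $\lambda_{c}=2D$ and a non-trivial root above.

```python
import numpy as np

# Sketch: solve |r| = I1(lam |r|/D) / I0(lam |r|/D) by fixed-point iteration.
def bessel_i(n, x, m=4001):
    # integral representation I_n(x) = (1/pi) int_0^pi e^{x cos t} cos(n t) dt
    t = np.linspace(0.0, np.pi, m)
    return np.trapz(np.exp(x*np.cos(t)) * np.cos(n*t), t) / np.pi

def order_parameter(lam, D, r0=0.5, iters=500):
    r = r0
    for _ in range(iters):
        kappa = lam * r / D
        r = bessel_i(1, kappa) / bessel_i(0, kappa)
    return r

D = 1.0
r_below = order_parameter(1.9*D, D)  # lambda < 2D: only r = 0 survives
r_above = order_parameter(3.0*D, D)  # lambda > 2D: nontrivial root
```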
The stationary distribution described above is nothing but the von Mises
distribution, which belongs to the exponential family. In general form, it
reads
$p_{\text{vM}}(\kappa,\mu;\theta)=\frac{1}{2\pi I_{0}(\kappa)}\exp\left\{\kappa\cos(\theta-\mu)\right\},\quad\kappa\geq 0,\quad\mu=\mu\,\text{mod}\,2\pi.$ (45)
The computation of the Fisher metric for the von Mises distribution is
straightforward,
$g_{ij}^{\text{F}}=\begin{pmatrix}1-\displaystyle\frac{I_{1}(\kappa)^{2}}{I_{0}(\kappa)^{2}}-\frac{I_{1}(\kappa)}{\kappa I_{0}(\kappa)}&0\\0&\displaystyle\frac{\kappa I_{1}(\kappa)}{I_{0}(\kappa)}\end{pmatrix},$ (46)
where $(i,j)=(\kappa,\mu)$. Setting $\kappa=\lambda|r|/D$, we obtain the
Fisher metric corresponding to the stationary solution of the
Kuramoto-Sakaguchi model. We can interpret it in a twofold way: either we fix
$|r|$ and plot the components of the metric as functions of $\lambda$ and
$D$, or we solve the self-consistency equation (44), find the order parameter
$|r|$ as a function of $\lambda$ and $D$, and then consider the components of
the metric tensor as functions of $\lambda$ and $D$ only. The second
interpretation gives the dependencies shown in fig. 2: the component
$g_{11}^{\text{F}}$ does not exist for $\lambda<\lambda_{c}$, whereas the
component $g_{22}^{\text{F}}$ is identically zero for $\lambda<\lambda_{c}$.
The first interpretation tells us to plot the components of the Fisher metric
as functions of the order parameter $|r|$ and $\lambda/D$, see fig. 3. The
transition point coincides with $\lambda_{c}=2D$, hence the Fisher metric
provides an order parameter for the synchronization transition.
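The metric (46) can be verified in the same spirit (our illustration; the values of $\kappa$ and $\mu$ are arbitrary):

```python
import numpy as np

# Sketch: Fisher metric (46) of the von Mises family, checked by finite
# differences of ln p; Bessel I_n via its integral representation.
kappa, mu, eps = 1.5, 0.4, 1e-5
theta = np.linspace(-np.pi, np.pi, 20001)

def bessel_i(n, x, m=20001):
    t = np.linspace(0.0, np.pi, m)
    return np.trapz(np.exp(x*np.cos(t)) * np.cos(n*t), t) / np.pi

def logp(kappa, mu):
    return kappa*np.cos(theta - mu) - np.log(2*np.pi*bessel_i(0, kappa))

p = np.exp(logp(kappa, mu))
dl_dk = (logp(kappa + eps, mu) - logp(kappa - eps, mu)) / (2*eps)
dl_dm = (logp(kappa, mu + eps) - logp(kappa, mu - eps)) / (2*eps)

I0, I1 = bessel_i(0, kappa), bessel_i(1, kappa)
g_kk = np.trapz(p * dl_dk**2, theta)  # expect 1 - (I1/I0)^2 - I1/(kappa I0)
g_mm = np.trapz(p * dl_dm**2, theta)  # expect kappa I1/I0
```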
Figure 2: Non-zero components of the Fisher metric for the Kuramoto-Sakaguchi model as functions of $\lambda/D$, with $\lambda_{c}=2D$
Figure 3: Non-zero components of the Fisher metric for the Kuramoto-Sakaguchi model as functions of $|r|$ and $\lambda/D$, with $\lambda_{c}=2D$
### IV.4 Kullback-Leibler divergence in Kuramoto model
Having discussed the Fisher metric, we would like to point out the following
fact: the proposed hyperbolic-space description of the Kuramoto model dynamics
coincides with the gradient flow of the Kullback-Leibler divergence on the
statistical manifold.
Indeed, in the continuum limit, $N\rightarrow\infty$, we deal with the initial
distribution of phases $f_{0}=f_{0}(z)$, $z=e^{i\theta}$. The gradient flow in
the limit $N\rightarrow\infty$ becomes
$\dot{w}=-\frac{\lambda}{2}\left(1-|w|^{2}\right)^{2}\frac{\partial}{\partial\overline{w}}\oint_{|z|=1}\frac{dz\,f_{0}(z)\ln p_{\text{wC}}(w;z)}{iz},$ (47)
where we have rewritten the integral over $\theta$ as a contour integral over
the unit circle $|z|=1$ in the complex plane. Our key observation is that the
integral on the right-hand side of eq. (47) is nothing but the so-called
cross entropy of the two distributions with PDFs $f_{0}(z)$ and
$p_{\text{wC}}(w;z)$,
$S_{\text{cross}}=\oint_{|z|=1}\frac{dz\,f_{0}(z)\ln p_{\text{wC}}(w;z)}{iz}.$
(48)
This quantity is tightly connected to the well-known Kullback-Leibler
divergence,
$S_{\text{K-L}}[f,g]=-\oint_{|z|=1}dz\,\frac{f(z)}{iz}\ln\frac{g(z)}{f(z)}=-S_{0}[f(z)]-S_{\text{cross}},$ (49)
where $f=f(z)$ and $g=g(z)$ are two different PDFs of wrapped distributions
and $S_{0}[f(z)]$ denotes the entropy of the distribution $f(z)$. We notice
that the gradient with respect to $\overline{w}$ in eq. (47) has the form
$\frac{\partial S_{\text{K-L}}}{\partial\overline{w}}=-\frac{\partial}{\partial\overline{w}}\oint_{|z|=1}\frac{dz\,f(z)\ln g(z)}{iz}.$ (50)
So we conclude that the gradient flow can be expressed in terms of the
Kullback-Leibler divergence,
$\dot{w}=+\frac{\lambda}{2}\left(1-\left|w\right|^{2}\right)^{2}\frac{\partial
S_{\text{K-L}}[f_{0},p_{\text{wC}}]}{\partial\overline{w}},$ (51)
where $f_{0}=f_{0}(z)$ is the initial distribution and
$p_{\text{wC}}=p_{\text{wC}}(w;z)$ is the wrapped Cauchy distribution.
As mentioned earlier, the Kullback-Leibler divergence in general does not
coincide with the Bregman divergence. Nevertheless, eq. (51) clearly shows
that the Kuramoto model dynamics can be treated as the gradient flow of the
Kullback-Leibler divergence on the statistical manifold formed by wrapped
Cauchy distributions. Therefore, to understand this dynamics we can compute
the Kullback-Leibler divergence between a given initial distribution
$f_{0}(z)$ and the wrapped Cauchy distribution $p_{\text{wC}}(w;z)$. Let us
note that the uniform distribution $p_{\text{U}}(z)=(2\pi)^{-1}$ is nothing
but the wrapped Cauchy distribution with $w=0$, and the Dirac delta
distribution is also a wrapped Cauchy distribution, with $w=1$.
If the initial distribution $f_{0}(z)$ is uniform, $f_{0}(z)=(2\pi)^{-1}$, the
Kullback-Leibler divergence is easy to compute:
$S_{\text{K-L}}[f_{0},p_{\text{wC}}]=-\ln\left(1-|w|^{2}\right).$ (52)
Substituting this expression into the gradient flow, we find
$\dot{w}=\frac{\lambda}{2}\left(1-|w|^{2}\right)w.$ (53)
The obtained equation is interesting for two reasons. First, it is similar to
the equation appearing in the Ott-Antonsen ansatz. Second, it tells us that in
the large-$N$ limit the Kuramoto model with all-to-all couplings and identical
frequencies is _dual_ to a _single_ Landau-Stuart oscillator. It is quite
straightforward to obtain the solution $w=w(t)$, and then we can see how the
Kullback-Leibler divergence evolves in time. Until the transition moment, the
divergence is negligibly small (which is quite clear, because an instantaneous
distribution on the unit circle is still similar to the uniform $f_{0}$). At
the transition moment, the Kullback-Leibler divergence starts to grow
significantly and finally blows up: the final distribution on the unit circle
is drastically different from the uniform one, which corresponds to complete
synchronization.
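Both the closed form (52) and the ensuing flow (53) can be checked numerically. In the sketch below (our illustration; $r_{0}$, $\lambda$ and the integration time are arbitrary) we use the fact that for real $w$ the quantity $u=|w|^{2}$ obeys the logistic equation $\dot{u}=\lambda u(1-u)$, whose solution is known in closed form.

```python
import numpy as np

# Sketch 1: KL divergence (52) between the uniform distribution and the
# wrapped Cauchy distribution, computed on a grid.
r, lam = 0.4, 1.0
theta = np.linspace(-np.pi, np.pi, 200001)
p_wc = (1 - r**2) / (2*np.pi*(1 - 2*r*np.cos(theta) + r**2))
f0 = np.full_like(theta, 1/(2*np.pi))
S_kl = np.trapz(f0 * np.log(f0 / p_wc), theta)
S_closed = -np.log(1 - r**2)                   # eq. (52)

# Sketch 2: for real w, u = |w|^2 obeys u' = lam u (1 - u); compare
# RK4 integration of the flow (53) with the logistic closed form.
u, dt, T = r**2, 1e-3, 5.0
f = lambda v: lam * v * (1 - v)
for _ in range(int(T/dt)):
    k1 = f(u); k2 = f(u + dt/2*k1); k3 = f(u + dt/2*k2); k4 = f(u + dt*k3)
    u += dt/6*(k1 + 2*k2 + 2*k3 + k4)
u_closed = r**2*np.exp(lam*T) / (1 - r**2 + r**2*np.exp(lam*T))
```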
## V Conclusion
In this work we revisited the hyperbolic description of the Kuramoto model
proposed in Chen _et al._ (2017), relating it to statistical manifolds. The
key observation of our study is that the singularity of the Fisher information
metric captures the synchronization transition: the metrics corresponding to
the Kuramoto and Kuramoto-Sakaguchi models blow up in the synchronized state.
We thereby extend the list of examples in which phase transitions are
identified via the singularities of the Fisher metric. We have focused on the
information geometry describing the all-to-all Kuramoto-Sakaguchi model, but
it would be interesting to work out the information metric for more general
graph architectures, such as the star graph, for which an exact solution is
available Alexandrov (2023).
The presented approach can also be useful for cases with complicated
topology, i.e. when an analytical treatment is extremely difficult. Based on
the fact that the Fisher metric has a singularity at the transition point, it
seems possible to extract explicit expressions for the model parameters that
give rise to the synchronized state. Moreover, there is no obstruction to
generalizations of the Kuramoto model, as emphasized by the described example
of the model with noise. This allows one to use the information geometry
framework for generalizations that include phase lags, external forces and so
on.
We have noticed that the gradient flow in the Kuramoto model can be
represented as a gradient flow on a statistical manifold. However, instead of
a potential function, one deals with the Kullback-Leibler divergence. As was
emphasized, the Kullback-Leibler divergence is not the Bregman divergence for
the (wrapped) Cauchy distribution, and it is reasonable to represent such a
flow in terms of the Bregman-Tsallis divergence. But the metric obtained via
the potential function of the Bregman-Tsallis divergence does not coincide
with the Fisher metric (it is conformally equivalent, but different). We leave
a closer treatment of the connection between synchronization and the
Bregman-Tsallis divergence for future research. Next, the discussion of the
Cauchy distribution as a member of the $q$-exponential family naturally raises
the question of wrapped $q$-exponential distributions. As far as we know, the
properties of such distributions have not been examined. This line of research
may be interesting in the information geometry context.
In our study the coordinates on the $\mathrm{SL}(2,\mathbb{R})$ orbit play the
role of the parameters due to the dimensional reduction of the dynamics.
Hence, to some extent, one could be confused as to why the effective degrees
of freedom play the role of effective parameters. However, we know a very
precise example of the same nature: the duality between inhomogeneous spin
chains and integrable many-body systems with long-range interactions of
Calogero-Ruijsenaars type, which has been formulated in probabilistic terms in
Gorsky _et al._ (2022). In that case the parameters, the inhomogeneities of
the spin chain, get identified with the coordinates of the particles in the
Calogero model, which on the other hand are coordinates on the group orbit.
That is, the Fisher metric on the parameters of the spin chain gets mapped
into a clear-cut object on the Calogero-Ruijsenaars side. We shall discuss
this analogy and relation elsewhere.
The hyperbolic geometry and the large number of degrees of freedom at its
boundary invite a holographic description. However, in this case there are
some subtle points, since the boundary theory is classical and the equations
are of first order in time derivatives. Nevertheless, the description in terms
of the conformal barycenter has a lot in common with the dynamics of a kind of
baryonic vertex, since the Poisson kernels are related to the geodesics
connecting points at the boundary and in the bulk. We postpone these issues
for a separate study. A general discussion of the relation of probabilistic
manifolds to holography can be found in Erdmenger _et al._ (2020).
The concluding remark concerns the application of information geometry to
neuroscience. The idea is not new Amari (2016); however, our findings provide
new perspectives on this issue. The Kuramoto model is widely used as a model
for the synchronization of functional connectomes, hence one could expect that
the properties of the Fisher metric for the Kuramoto model on more general
graphs will provide a new approach to brain rhythm generation. In particular,
it would be very interesting to investigate the synchronization of several
functional connectomes via the properties of the Fisher metric, and the
possible dependence of this synchronization on the architecture of the
structural connectome.
## Acknowledgements
We are grateful to A. Polkovnikov for the useful discussions, S. Kato for the
discussion on Kato-Jones distribution properties and to F. Nielsen for the
comments concerning Cauchy distribution statistical manifold. A.G. thanks IHES
and Nordita where the part of the work has been done for the hospitality and
support. A.A. thanks grant by Brain Program of the IDEAS Research Center and
grant №18-12-00434$\Pi$ by Russian Science Foundation. A.G. thanks grant
№075-15-2020-801 by Ministry of Science and Higher Education of Russian
Federation and grant by Basis Foundation.
## Appendix A On Möbius transformations and wrapped distributions
In this Appendix we describe some facts concerning wrapped distributions,
which can be obtained from non-wrapped ones via a so-called
"compactification". For a random variable $x\in(-\infty,+\infty)$ with
probability density function $p(x)$, we introduce the complex variable
$z=e^{ix}$ and consider its phase $\theta=\arg z$, $\theta\in(-\pi,+\pi]$,
which has the _wrapped_ probability density function $p_{\text{w}}(\theta)$.
The connection between $p(x)$ and its wrapped version is the following:
$p_{\text{w}}(\theta)=\sum_{k=-\infty}^{+\infty}p(\theta+2\pi k).$ (54)
All the common concepts of probability theory work for wrapped distributions
as well.
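As a concrete instance of (54) (our illustration): wrapping a Cauchy density with scale $\gamma$ and location $\beta$ is known to give the wrapped Cauchy density of Sec. IV with $r=e^{-\gamma}$ and $\phi=\beta$; a truncated branch sum reproduces this on a grid.

```python
import numpy as np

# Sketch: wrap the Cauchy density by the branch sum (54) and compare with
# the wrapped Cauchy density with r = exp(-gamma), phi = beta.
gamma, beta = 0.8, 0.5
theta = np.linspace(-np.pi, np.pi, 2001)

def cauchy(x):
    return gamma / (np.pi*(gamma**2 + (x - beta)**2))

# truncated sum over branches k; the tail falls off as 1/k^2
p_wrap = sum(cauchy(theta + 2*np.pi*k) for k in range(-2000, 2001))

r = np.exp(-gamma)
p_wc = (1 - r**2) / (2*np.pi*(1 - 2*r*np.cos(theta - beta) + r**2))
max_gap = np.max(np.abs(p_wrap - p_wc))
norm = np.trapz(p_wrap, theta)   # should be close to 1
```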
There are (at least) four common wrapped distributions usually discussed in
the context of the Kuramoto model: the uniform distribution, the wrapped
Cauchy distribution, the von Mises distribution and the recently introduced
Kato-Jones distribution. The uniform distribution is trivial, and the wrapped
Cauchy distribution is well known. The von Mises distribution arises in the
Kuramoto-Sakaguchi model as the stationary solution of the corresponding
Fokker-Planck equation. The Kato-Jones distribution was introduced in Kato and
Jones (2010) and is a Möbius-transformed von Mises distribution; this family
is invariant under the action of the Möbius group. This list of distributions
is easy to extend: for the sake of completeness, we can also mention the
cardioid distribution and its Möbius image, and the wrapped normal
distribution and its Möbius image.
Figure 4: Kuramoto-model-related statistical manifolds. One can obtain
$p_{\text{K-J}}$ via the Möbius transformation $\mathcal{M}$ and then reduce
$\mathbb{M}_{\text{K-J}}$ to the wrapped Cauchy submanifold
$\mathbb{M}_{\text{wC}}$ and the von Mises submanifold
$\mathbb{M}_{\text{vM}}$. Both $p_{\text{K-J}}$ and $p_{\text{wC}}$ belong to
the Möbius-invariant submanifold $\mathbb{M}_{\mathcal{M}}$, whereas
$p_{\text{vM}}$ does not
The invariance under the Möbius group action is crucial in the Kuramoto model,
so here we focus on the triple of the wrapped Cauchy distribution, the von
Mises distribution and the Kato-Jones distribution. We would like to establish
the interconnections between the members of this triple in the information
geometry context. First of all, let us write down the probability density
functions for each member of the triple,
$p_{\text{wC}}(r,\phi;\theta)=\frac{1}{2\pi}\frac{1-r^{2}}{1-2r\cos(\theta-\phi)+r^{2}},\quad 0\leq r<1,\quad\phi=\phi\,\text{mod}\,2\pi,$ (55)
$p_{\text{vM}}(\kappa,\mu;\theta)=\frac{1}{2\pi I_{0}(\kappa)}\exp\left\{\kappa\cos(\theta-\mu)\right\},\quad\kappa\geq 0,\quad\mu=\mu\,\text{mod}\,2\pi,$ (56)
$p_{\text{K-J}}(\kappa,r,\nu,\mu;\theta)=\frac{1}{2\pi I_{0}(\kappa)}\frac{1-r^{2}}{1-2r\cos(\theta-\mu-\nu)+r^{2}}\exp\left\{\frac{\kappa\cos(\mu+\theta)+\kappa r^{2}\cos(\mu+2\nu-\theta)-2r\kappa\cos\nu}{1-2r\cos(\theta-\mu-\nu)+r^{2}}\right\},\quad\kappa\geq 0,\quad 0\leq r<1,\quad\mu=\mu\,\text{mod}\,2\pi,\quad\nu=\nu\,\text{mod}\,2\pi.$ (57)
From these expressions it is straightforward to see the following facts: the
wrapped Cauchy and von Mises distributions form two-dimensional statistical
manifolds; the von Mises distribution belongs to the exponential family; the
Kato-Jones distribution forms a four-dimensional statistical manifold. The
wrapped Cauchy distribution can be obtained from the Kato-Jones distribution
by setting $\kappa\rightarrow 0$, whereas the von Mises distribution is
obtained by setting $r\rightarrow 0$. Therefore, we can treat the statistical
manifolds corresponding to the wrapped Cauchy distribution and to the von
Mises distribution as submanifolds of the Kato-Jones statistical manifold. The
wrapped distributions $p_{\text{wC}}$ and $p_{\text{K-J}}$ belong to the
Möbius-invariant statistical manifold $\mathbb{M}_{\mathcal{M}}$, whereas
$p_{\text{vM}}$ does not. However, from $p_{\text{K-J}}$ one can obtain
$p_{\text{vM}}$ and $p_{\text{wC}}$ by taking an appropriate limit,
$r\rightarrow 0$ or $\kappa\rightarrow 0$, respectively (see fig. 4).
## Appendix B On $q$-Gaussian distributions in the Kuramoto model
The Kuramoto model shares some similarities with the famous Hamiltonian mean
field (HMF) model Miritello _et al._ (2009a); Pluchino and Rapisarda (2006),
which describes $N$ particles on $\mathbb{S}^{1}$ interacting via a cosine
potential with all-to-all couplings,
$H_{\text{HMF}}=\sum_{i=1}^{N}\frac{p_{i}^{2}}{2m}+\frac{\lambda}{2N}\sum_{i<j}\cos\left(\theta_{j}-\theta_{i}\right),$
(58)
where $\lambda$ is the coupling constant. This model is prototypical for the
study of systems with long-range interactions (LRI). In the HMF model, LRI is
caused by the all-to-all couplings. The effects of LRI in the HMF model are
quite well known: it was shown that LRI causes the violent relaxation
phenomenon and the existence of quasi-stationary states Yamaguchi _et al._
(2004). It is worth mentioning that the quantum version of the HMF model also
exhibits such phenomena. One quite interesting consequence of LRI is the
appearance of so-called Tsallis $q$-statistics Tsallis and Tirnakli (2010).
Due to the presence of LRI, the entropy of the system becomes non-extensive.
Tsallis and co-authors developed the appropriate machinery to describe this
non-extensivity Umarov _et al._ (2008). Roughly speaking, the parameter $q$
measures the non-extensivity of a system. The $q$-statistics deals with
so-called $q$-Gaussian distributions (see Umarov _et al._ (2008) for a
detailed discussion). Such $q$-Gaussian distributions differ from the usual
Gaussian distribution for $q\neq 1$: they have heavier tails (see fig. 5), and
in the limit $q\rightarrow 1$ one recovers the usual Gaussian distribution.
Figure 5: $q$-Gaussians
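The heavy-tailed behaviour is easy to see numerically. Below is a small sketch (not code from the paper) of the unnormalized $q$-Gaussian built from the Tsallis $q$-exponential; it reduces to the Gaussian as $q\to 1$ and to a Cauchy-type density at $q=2$:

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential e_q(x); reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * np.asarray(x, dtype=float)
    # outside the support (base <= 0) the q-exponential is defined as 0
    return np.where(base > 0, np.abs(base) ** (1.0 / (1.0 - q)), 0.0)

def q_gaussian(x, q, beta=1.0):
    """Unnormalized q-Gaussian, p(x) proportional to e_q(-beta x^2)."""
    return q_exponential(-beta * np.asarray(x, dtype=float) ** 2, q)
```

For example, at $x=3$ the $q=1.7$ density is orders of magnitude larger than the Gaussian value, illustrating the heavier tail, while $q=2$ yields the Cauchy form $1/(1+\beta x^2)$ mentioned later in the text.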
Next, in the paper Tirnakli _et al._ (2007) the authors presented a method, based on $q$-statistics, to capture correlations between degrees of freedom. Although there is no mathematically rigorous derivation of $q$-statistics for a system with LRI, this method allows one to verify (at least numerically) that $q$-statistics takes place. For instance, it was shown that $q$-statistics appears in the HMF model Pluchino _et al._ (2009, 2008). Based on the idea that the HMF model and the Kuramoto model are related to each other, in Miritello _et al._ (2009b) the authors examined fingerprints of $q$-statistics in the Kuramoto model. The key observation is the following. From the phases $\theta_{i}(t)$ one constructs the so-called CLT-variables $y_{i}$ (see Tirnakli _et al._ (2007) for details),
$y_{i}=\frac{1}{\sqrt{M}}\sum_{k=1}^{M}\theta_{i}(k\delta),$ (59)
where $\delta>0$ is a pre-defined time interval. In the Kuramoto model it was demonstrated that for $\lambda<\lambda_{c}$ the variables $y_{i}$ obey a $q$-Gaussian distribution with $q\approx 1.7$, i.e. a $q$-CLT holds. The authors considered the case of _non-identical_ frequencies and focused on the cases of uniform and Gaussian frequency distributions. For $\lambda>\lambda_{c}$ the variables $y_{i}$ were well fitted by the usual Gaussian distribution.
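As an illustration of this construction (not taken from Miritello _et al._ (2009b); the coupling, frequency distribution, and time step below are assumptions for the sketch), one can integrate the Kuramoto model in its mean-field form and accumulate the CLT-variables of Eq. (59):

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, delta_steps, M = 200, 0.05, 10, 100
lam = 0.5                                  # coupling (illustrative value)
omega = rng.normal(0.0, 1.0, N)            # non-identical Gaussian frequencies
theta = rng.uniform(-np.pi, np.pi, N)

samples = []
t = 0
while len(samples) < M:
    # mean-field form of the Kuramoto equations, explicit Euler step
    z = np.mean(np.exp(1j * theta))        # order parameter r * exp(i psi)
    theta = theta + dt * (omega + lam * np.abs(z) * np.sin(np.angle(z) - theta))
    t += 1
    if t % delta_steps == 0:
        samples.append(theta.copy())       # record theta_i(k * delta)

y = np.sum(samples, axis=0) / np.sqrt(M)   # CLT-variables y_i of Eq. (59)
```

A histogram of the resulting `y` values is what one would then fit against a $q$-Gaussian.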
Having briefly discussed the $q$-Gaussians, we would like to draw attention to the fact that $q$-Gaussian distributions have already appeared in the context of the Kuramoto model. Indeed, the continuum limit of the Kuramoto model can be described in the Ott-Antonsen framework Ott and Antonsen (2008). In the case of identical oscillators, the distribution function for the continuum limit can be obtained explicitly. This distribution function is nothing more than the normalized Poisson kernel, i.e. the wrapped Cauchy distribution. Of course, the parameters of the distribution depend on time and on the initial conditions. As we have shown, the usual Cauchy distribution belongs to the $q$-exponential family with $q=2$, so the wrapped Cauchy distribution also inherits the properties of the $q$-exponential family (the wrapped and non-wrapped versions of the Cauchy distribution are related to each other via a conformal transformation).
## References
* Rodrigues _et al._ (2016) F. A. Rodrigues, T. K. D. Peron, P. Ji, and J. Kurths, Physics Reports 610, 1 (2016).
* Watanabe and Strogatz (1993) S. Watanabe and S. H. Strogatz, Physical review letters 70, 2391 (1993).
* Watanabe and Strogatz (1994) S. Watanabe and S. H. Strogatz, Physica D: Nonlinear Phenomena 74, 197 (1994).
* Marvel _et al._ (2009) S. A. Marvel, R. E. Mirollo, and S. H. Strogatz, Chaos: An Interdisciplinary Journal of Nonlinear Science 19, 043104 (2009).
* Chen _et al._ (2017) B. Chen, J. R. Engelbrecht, and R. Mirollo, Journal of Physics A: Mathematical and Theoretical 50, 355101 (2017).
* Amari (2016) S.-I. Amari, _Information geometry and its applications_, Vol. 194 (Springer, 2016).
* Provost and Vallee (1980) J. Provost and G. Vallee, Communications in Mathematical Physics 76, 289 (1980).
* Berry (1984) M. V. Berry, Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 392, 45 (1984).
* Zanardi and Paunković (2006) P. Zanardi and N. Paunković, Phys. Rev. E 74, 031123 (2006).
* Zanardi _et al._ (2007a) P. Zanardi, P. Giorda, and M. Cozzini, Phys. Rev. Lett. 99, 100603 (2007a).
* You _et al._ (2007) W.-L. You, Y.-W. Li, and S.-J. Gu, Physical Review E 76, 022101 (2007).
* Garnerone _et al._ (2009) S. Garnerone, D. Abasto, S. Haas, and P. Zanardi, Physical Review A 79, 032302 (2009).
* Kolodrubetz _et al._ (2013) M. Kolodrubetz, V. Gritsev, and A. Polkovnikov, Physical Review B 88, 064304 (2013).
* Kolodrubetz _et al._ (2017) M. Kolodrubetz, D. Sels, P. Mehta, and A. Polkovnikov, Physics Reports 697, 1 (2017).
* Pandey _et al._ (2020) M. Pandey, P. W. Claeys, D. K. Campbell, A. Polkovnikov, and D. Sels, Phys. Rev. X 10, 041017 (2020).
* Buonsante and Vezzani (2007) P. Buonsante and A. Vezzani, Physical review letters 98, 110601 (2007).
* Dey _et al._ (2012) A. Dey, S. Mahapatra, P. Roy, and T. Sarkar, Physical Review E 86, 031137 (2012).
* Cantarella and Schumacher (2022) J. Cantarella and H. Schumacher, SIAM Journal on Applied Algebra and Geometry 6, 503 (2022).
* Amari (2009) S.-I. Amari, IEEE Transactions on Information Theory 55, 4925 (2009).
* Nakamura (1993) Y. Nakamura, Japan journal of industrial and applied mathematics 10, 179 (1993).
* Fujiwara and Amari (1995) A. Fujiwara and S.-i. Amari, Physica D: Nonlinear Phenomena 80, 317 (1995).
* Mitchell (1988) A. F. Mitchell, International Statistical Review/Revue Internationale de Statistique , 1 (1988).
* McCullagh (1992) P. McCullagh, Biometrika 79, 247 (1992).
* Nielsen (2020) F. Nielsen, Entropy 22, 713 (2020).
* Zanardi _et al._ (2007b) P. Zanardi, L. C. Venuti, and P. Giorda, Physical Review A 76, 062318 (2007b).
* Ott and Antonsen (2008) E. Ott and T. M. Antonsen, Chaos: An Interdisciplinary Journal of Nonlinear Science 18, 037113 (2008).
* Sakaguchi (1988) H. Sakaguchi, Progress of theoretical physics 79, 39 (1988).
* Goldobin _et al._ (2018) D. S. Goldobin, I. V. Tyulkina, L. S. Klimenko, and A. Pikovsky, Chaos: An Interdisciplinary Journal of Nonlinear Science 28, 101101 (2018).
* Alexandrov (2023) A. Alexandrov, Chaos, Solitons & Fractals 167, 113056 (2023).
* Gorsky _et al._ (2022) A. Gorsky, M. Vasilyev, and A. Zotov, Journal of High Energy Physics 2022, 1 (2022).
* Erdmenger _et al._ (2020) J. Erdmenger, K. Grosvenor, and R. Jefferson, SciPost Physics 8, 073 (2020).
* Kato and Jones (2010) S. Kato and M. Jones, Journal of the American Statistical Association 105, 249 (2010).
* Miritello _et al._ (2009a) G. Miritello, A. Pluchino, and A. Rapisarda, EPL (Europhysics Letters) 85, 10007 (2009a).
* Pluchino and Rapisarda (2006) A. Pluchino and A. Rapisarda, Physica A: Statistical Mechanics and its Applications 365, 184 (2006).
* Yamaguchi _et al._ (2004) Y. Y. Yamaguchi, J. Barré, F. Bouchet, T. Dauxois, and S. Ruffo, Physica A: Statistical Mechanics and its Applications 337, 36 (2004).
* Tsallis and Tirnakli (2010) C. Tsallis and U. Tirnakli, in _Journal of Physics: Conference Series_ (IOP Publishing, 2010).
* Umarov _et al._ (2008) S. Umarov, C. Tsallis, and S. Steinberg, Milan journal of mathematics 76, 307 (2008).
* Tirnakli _et al._ (2007) U. Tirnakli, C. Beck, and C. Tsallis, Physical Review E 75, 040106 (2007).
* Pluchino _et al._ (2009) A. Pluchino, A. Rapisarda, and C. Tsallis, EPL (Europhysics Letters) 85, 60006 (2009).
* Pluchino _et al._ (2008) A. Pluchino, A. Rapisarda, and C. Tsallis, Physica A: Statistical Mechanics and its Applications 387, 3121 (2008).
* Miritello _et al._ (2009b) G. Miritello, A. Pluchino, and A. Rapisarda, Physica A: Statistical Mechanics and its Applications 388, 4818 (2009b).
11institutetext: Faculty of Computer Science, Dalhousie University, Nova Scotia, Canada
11email: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
22institutetext: 2Keys Corporation - An Interac Company, Ottawa, Canada
22email: <EMAIL_ADDRESS>
http://www.2keys.ca
# A Boosting Approach to Constructing an Ensemble Stack††thanks: Supported by
2Keys Corporation - An Interac Company.
Zhilei Zhou 11 Ziyu Qiu 11 Brad Niblett 22 Andrew Johnston 22 Jeffrey
Schwartzentruber 11 Nur Zincir-Heywood 11 0000-0003-2796-7265 Malcolm I.
Heywood 11 0000-0002-1521-0671
###### Abstract
An approach to evolutionary ensemble learning for classification is proposed
in which boosting is used to construct a stack of programs. Each application
of boosting identifies a single champion and a residual dataset, i.e. the
training records that thus far were not correctly classified. The next program
is only trained against the residual, with the process iterating until some
maximum ensemble size or no further residual remains. Training against a
residual dataset actively reduces the cost of training. Deploying the ensemble
as a stack also means that only one classifier might be necessary to make a
prediction, so improving interpretability. Benchmarking studies are conducted
to illustrate competitiveness with the prediction accuracy of current state-
of-the-art evolutionary ensemble learning algorithms, while providing
solutions that are orders of magnitude simpler. Further benchmarking with a
high cardinality dataset indicates that the proposed method is also more
accurate and efficient than XGBoost.
###### Keywords:
Boosting · Stacking · Genetic Programming.
## 1 Introduction
Ensemble learning represents a widely employed meta-learning scheme for
deploying multiple models under supervised learning tasks [6, 22]. In general,
there are three basic formulations: Bagging, Boosting or Stacking. We note,
however, that most instances of ensemble learning adopt one scheme alone (§2).
Moreover, Boosting and Bagging represent the most widely adopted approaches.
In this work we investigate the utility of Boosting for constructing a Stacked
ensemble. Thus, rather than members of an ensemble being deployed in parallel,
our Stack assumes that ensemble members are deployed in the order that they
were originally discovered. We therefore partner Stacking with a Boosting
methodology for constructing the ensemble. This will enable us to
incrementally develop ensemble classifiers that are explicitly focused on what
previous members of the ensemble cannot do. In addition, we are then able to
explicitly scale ensemble learning to large cardinality datasets.
A benchmarking study demonstrates the effectiveness of the approach by first
comparing with recent evolutionary ensemble learning algorithms on a suite of
previously benchmarked low cardinality datasets. Having established the
competitiveness of the proposed approach from the perspective of classifier
accuracy, we then benchmark using a classification task consisting of 800
thousand records. The best of previous approaches is demonstrated not to scale
under these conditions. Conversely, the efficiency of the proposed approach
implies that parameter tuning is still feasible under the high cardinality
setting.
The balance of the paper is organized as follows. Section 2 reviews key
concepts from ensemble learning and summarizes research in evolutionary
ensemble learning. The proposed approach for using Boosting to develop a
Stacked ensemble is then detailed (§3.1). The approach to benchmarking and
parameterizing algorithms is then outlined (§4) and the results of two
benchmarking studies detailed (§5). Conclusions and future research completes
the work (§6).
## 2 Related Work
Ensemble learning appears in three basic forms, summarized as follows and
assuming (without loss of generality) that the underlying goal is to produce
solutions for a classification problem:
* •
Bagging: _n_ classifiers are independently constructed and an averaging scheme
is adopted to aggregate the _n_ independent predictions into a single ensemble
recommendation.111Classification tasks often assume the majority vote,
although voting/weighting schemes might be evolved [2]. The independence of
the _n_ classifiers is established by building each classifier on a
_different_ sample taken from the original training partition, or a ‘bag’.
* •
Boosting: constructs _n_ classifiers sequentially. Classifier performance is
used to incrementally reweight the probability of sampling training data,
$\mathcal{D}$, to appear in the training sample used to train the next
classifier. Thus, each classifier encounters a sample of $\mathcal{D}/n$
training records. Post-training the ensemble again assumes an averaging scheme
to aggregate the _n_ labels into a single label.
* •
Stacking: assumes that a heterogeneous set of _n_ classifiers is trained on a common training partition. The predictions from the _n_ classifiers are then ‘stacked’ to define an $n\times|\mathcal{D}|$ dataset that trains a ‘meta-classifier’ to produce the overall classification prediction.
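For instance, the Bagging aggregation step above amounts to no more than a majority vote over the _n_ independent predictions (a toy illustration, not code from the paper):

```python
import numpy as np

# label predictions of n = 3 classifiers for 3 records (rows = classifiers)
preds = np.array([[0, 1, 1],
                  [0, 1, 0],
                  [1, 1, 1]])

# majority vote per record (binary labels)
votes = preds.sum(axis=0)
ensemble = (votes > preds.shape[0] // 2).astype(int)
```

Here the ensemble labels are `[0, 1, 1]`: the second and third records receive at least two votes for class 1.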
Such a division of ensemble learning architectures reflects the view that the
learning algorithm only constructs one model at a time. However, genetic
programming (GP) develops multiple models simultaneously. Thus, one approach
for GP ensemble learning might be to divide the population into islands and
expose each island to different data subsets, where the data subset is
constructed using a process synonymous with Bagging or Boosting (e.g. [11, 12,
8]). Some of the outcomes resulting from this research theme were that
depending on the degree of isolation between populations, classifiers might
result that were collectively strong, but individually weak [12], or that
solution simplicity might be enabled by the use of ensemble methods [11].
Indeed, solution simplicity has been empirically reported for other GP
ensemble formulations [14].
Another recurring theme is to assume that an individual takes the form of a
‘multi-tree’ [24, 16, 1]. In this case, an ‘individual’ is a team of _n_ Tree-
structured programs. Constraints are then enforced on the operation of
variation operators in order to maintain context between programs in the
multi-tree. This concept was developed further under the guise of ‘levels of
selection’ in which selection can operate at the ‘level’ of a program or
ensemble [25, 28]. However, in order to do so, it was necessary to have
different performance definitions for each level.
Virgolin revisits the theme of evolving ensembles under Bagging while
concentrating on the role of selection [26]. Specifically, individuals are
evaluated w.r.t. _all_ bags. The cost of fitness evaluation over all bags is
minimized by evaluating programs once and caching the corresponding fitness
values. The motivation for evaluating performance over all the bags is to
promote uniform development toward the performance goal.
Boosting and Bagging in particular have also motivated the development of
mechanisms for decoupling GP from the cardinality of the training partition.
In essence, GP performance evaluation is only ever conducted relative to the
content of a data subset, $DS$, where $|DS|\ll|\mathcal{D}|$. However, the data
subset is periodically resampled, with biases introduced to reflect the
difficulty of labelling records correctly and the frequency of selecting data
to appear in the data subset [10, 23]. This relationship was then later
explicitly formulated from the perspective of competitively coevolving
ensembles against the data subset [13, 14, 15], i.e. data subset and ensemble
experience different performance functions.
Several coevolutionary approaches have also been proposed in which programs
are defined in one population and ensembles are defined by another, e.g. [13,
14, 15, 20]. Programs are free to appear in multiple ensembles and the size of
the ensemble is determined through evolution.
The concept of ‘Stacking’ has been less widely employed. However, the cascade-
correlation approach to evolving neural networks might be viewed through this
perspective [7]. Specifically, cascade-correlation begins with a single
‘perceptron’ and identifies the error residual. Assuming that the performance
goal has not been reached, a new perceptron is added that receives as input
all the previous perceptron outputs and the original data attributes. Each
additional perceptron is trained to minimize the corresponding residual error.
Potter and deJong demonstrated a coevolutionary approach to evolving neural
networks using this architecture [18]. Curry et al. benchmarked such an
approach for training layers of GP classifiers under the cascade-correlation
architecture [5]. However, they discovered that the GP classifiers could
frequently degenerate to doing no more than copy the input from the previous
layer.
Finally, the concept of interpreting the output of a GP program as a dimension
in a new feature space rather than a label is also of significance to several
ensemble methods. Thus, programs comprising an ensemble might be rewarded for
mapping to a (lower-dimensional) feature space that can be clustered into
class-consistent regions [15, 17]. Multiple programs appearing in an ensemble
define the dimensions of the new feature space. Likewise, ensembles based on
multi-trees have been rewarded for mapping to a feature space that maximizes
the performance of a linear discriminant classifier [1].
## 3 Evolving an Ensemble Stack using Boosting
The motivation of this work is to use a combination of boosting and stacking
to address data cardinality while constructing the GP ensemble. Our insight is
that classifiers can be sequentially added to the ensemble. After adding a
classifier, the data correctly classified is removed from the training
partition. Thus, as classifiers are added to the ensemble the cardinality of
the training partition decreases. Moreover, the next classifier evolved is
explicitly directed to label what the ensemble cannot currently label. Post
training, the ensemble is deployed as a ‘stack’ in which classifiers are
deployed in the order in which they were evolved. As will become apparent, we
need not visit all members of the ensemble stack in order to make a
prediction. In the following, we first present the evolutionary cycle adopted
for developing the Boosted Ensemble Stack (§3.1) and then discuss how the
Ensemble Stack is deployed (§3.2).
### 3.1 The Boosting Ensemble Stack Algorithm
Algorithm 1 summarizes the overall approach taken to boosting in this
research. The training dataset, $\mathcal{D}$, is defined in terms of a matrix
$X^{t}$ of _n_ (input) records (each comprised of _d_ -attributes) and a
vector $Y^{t}$ of _n_ labels (i.e. supervised learning for classification). We
may sample records pairwise, $\langle\vec{x}_{p}\in X^{t},y_{p}\in
Y^{t}\rangle$, during training. The outer loop (Step 3) defines the number of
‘boosting epochs’, where this sets an upper limit on ensemble size. Step 5
initializes a new population, consisting of program decision stumps alone
(single node programs). Step 6 performs fitness evaluation, ranking, parent
pool selection, and variation for a limited number of generations
(Max_GP_Epoch). Specifically, the following form is assumed:
1. 1.
Fitness evaluation (Step 7) assumes the Gini index in which the output of each
program is modelled as a distribution (Algorithm 2). Fitter individuals are
those with a higher Gini index.
2. 2.
Parent pool selection (_PP_) implies that the worst %_Gap_ programs are
deleted, leaving the parent pool as the survivors (Algorithm 1, Step 9).
3. 3.
Test for early stopping (Step 10) where this is defined in terms of fitness
and ‘bin purity’ a concept defined below.
4. 4.
Variation operators (Step 21) are limited to 1) cloning %_Gap_ parents, 2)
adding a single new node to a clone where the parameters of the new node are
chosen stochastically, and 3) mutating any of the parameters in the resulting
offspring.
Algorithm 1 StackBoost($\langle X^{t},Y^{t}\rangle$, New_Pop_size,
Max_Boost_epoch, Max_GP_epoch). _PP_ is the parent pool
1:$Best\\_Fit\leftarrow 0$
2:$Ensemble\leftarrow[]$ $\triangleright$ Initialize ensemble stack to null
3:for $i\leftarrow 1$ to Max_Boost_epoch do
4: $best\leftarrow$ False
5: _Pop_ $\leftarrow$ _Initialize_(New_Pop_size) $\triangleright$ Initialize
a new pop
6: for ($j\leftarrow 1$ to Max_GP_epoch) $\&$ ($best=$ False) do
$\triangleright$ Evolve population
7: $Fitness\leftarrow$[ $GiniIndexFitness(N,X^{t},Y^{t})$ for $N\in Pop$]
8: $Ranked\\_Pop\leftarrow[\arg\mbox{ sort}(Fitness,Pop)]$
9: $PP\leftarrow Ranked\\_Pop(Gap)$
10: while $Tree\in PP$ do $\triangleright$ Test for champion
11: $Histogram,Interval\leftarrow fitHist(Tree,X^{t},Y^{t})$
12: if ($\exists Interval\in Histogram=$ Pure) then
13: if ($Tree.Fitness>Best\\_Fit$) then
14: $best\leftarrow$ True $\triangleright$ Exit early if champion found
15: $Best\\_Fit\leftarrow Tree.Fitness$
16: $Champion\\_Histogram\leftarrow Histogram$
17: $Champion\leftarrow Tree$ $\triangleright$ Record champion
18: end if
19: end if
20: end while
21: _Offspring_ $\leftarrow$ Variation($PP$, $\%Gap$)
22: $Trees\leftarrow PP\ \cup$ _Offspring_
23: end for
24: $\langle X^{\prime},Y^{\prime}\rangle\leftarrow\emptyset$
25: for $Interval\in Champion\\_Histogram$ do $\triangleright$ Identify Pure
bin content
26: if $Interval=Pure$ then
27: $\langle X^{\prime},Y^{\prime}\rangle\leftarrow$ copy($\langle
x_{p},y_{p}\rangle\in Interval$) $\triangleright$ Identify correctly labelled
data
28: end if
29: end for
30: $\langle X^{t},Y^{t}\rangle\leftarrow\langle X^{t},Y^{t}\rangle/\langle
X^{\prime},Y^{\prime}\rangle$ $\triangleright$ Define residual dataset
31: $Ensemble.push(Champion)$ $\triangleright$ Add _Champion_ to ensemble
stack
32: if $\langle X^{t},Y^{t}\rangle=\emptyset$ then $\triangleright$ Case of
early stopping
33: return $Ensemble$
34: end if
35:end for
36:return $Ensemble$
Algorithm 2 Gini Index Fitness ($Tree,X,Y$) returns gini index weighted by
model complexity.
1:$Total(i)\leftarrow$ # records mapped to interval ‘ _i_ ’
2:$Count(i,c)\leftarrow$ # class ‘ _c_ ’ mapped to interval ‘ _i_ ’
3:$\\#Inst(c)\leftarrow$ # records of class ‘ _c_ ’ in $Y$
4:_%Used_bins_ is the % of bins with a non-zero count
5:for $i\leftarrow 1$ to NumBin do
6: for $c\leftarrow 1$ to NumClass do
7: if $\mbox{Total}(i)\neq 0$ then
8: $hist(i,c)\leftarrow\frac{Count(i,c)}{Total(i)\times\\#Inst(c)}$
9: else
10: $hist(i,c)\leftarrow 0$
11: end if
12: $GiniIndex\leftarrow GiniIndex+hist(i,c)^{2}\times\\#Inst(c)$
13: end for
14:end for
15:return $GiniIndex+\alpha(\%Used\\_bins)$
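A direct Python transcription of Algorithm 2 might look as follows (a sketch: the equal-interval binning of the output range and the normalization of %Used_bins are assumptions not fixed by the pseudocode):

```python
import numpy as np

def gini_fitness(yhat, y, num_bins=8, alpha=0.0):
    """Sketch of Algorithm 2: quantize program outputs yhat into equal
    intervals and reward class concentration within each bin."""
    yhat = np.asarray(yhat, dtype=float)
    y = np.asarray(y)
    lo, hi = yhat.min(), yhat.max()
    bins = np.minimum((num_bins * (yhat - lo) / (hi - lo + 1e-12)).astype(int),
                      num_bins - 1)
    classes = np.unique(y)
    n_inst = {c: int(np.sum(y == c)) for c in classes}   # #Inst(c)
    gini, used = 0.0, 0
    for i in range(num_bins):
        in_bin = bins == i
        total = int(in_bin.sum())                        # Total(i)
        if total == 0:
            continue                                     # empty bins contribute nothing
        used += 1
        for c in classes:
            count = int(np.sum(in_bin & (y == c)))       # Count(i, c)
            hist = count / (total * n_inst[c])
            gini += hist ** 2 * n_inst[c]
    return gini + alpha * used / num_bins                # %Used_bins regularization
```

A program whose outputs separate the classes into distinct bins scores higher than one that mixes them, which is exactly the pressure the fitness function is meant to exert.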
In assuming a performance function based on the Gini index, programs perform
the following mapping: $\hat{y}_{p}=f(x_{p})$ where $x_{p}$ is input record
_p_ and $\hat{y}_{p}$ is the corresponding output from the program. There is
no attempt to treat $\hat{y}_{p}$ as a predicted classification label for
record _p_. Instead, $\hat{y}_{p}$ represents the mapping of the original
input, $x_{p}$, to a scalar value on a 1-dimensional number line, $\hat{y}$.
After mapping all _p_ inputs to their corresponding $\hat{y}_{p}$ we quantize
$\hat{y}$ into NumBin intervals (Step 11), as illustrated by Figure 1. Each
interval defines an equal non-overlapping region of the number line $\hat{y}$
and an associated bin. The bins act as a container for the labels, $y_{p}=c$,
associated with each $\hat{y}$ appearing in the bin’s interval. Three types of bin may now appear:
* •
Empty bins: have no $\hat{y}_{p}$ appearing in their interval.
* •
Pure bins: have $\hat{y}_{p}$ appearing in their interval such that the
majority of labels $y_{p}$ are the same. A ‘pure bin’ assumes the label
$y_{p}$ that reflects the majority of the bin content and is declared when the
following condition holds,
$\frac{C_{bin}-y^{*}}{C_{bin}}<\beta$ (1)
where $C_{bin}$ is the number of records appearing in the bin, $C(c)$ is the number of records of class $c$ in the bin, and $y^{*}=\max_{c}C(c)$ is the majority-class count.
* •
Ambiguous bins: have $\hat{y}_{p}$ appearing in their interval such that the bin-purity condition does not hold.
Figure 1: Illustration for the relationship between program outputs
($\hat{y}_{p}$), intervals, bins, labels ($y_{p}=c$) and bin type.
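Taken literally, Eq. (1) can be sketched as follows (a hedged transcription; `bin_labels` stands for the class labels $y_p$ collected in one bin):

```python
from collections import Counter

def is_pure(bin_labels, beta):
    """Literal reading of Eq. (1): a bin is declared 'pure' when the
    fraction of non-majority records, (C_bin - y*) / C_bin, is below beta."""
    c_bin = len(bin_labels)
    y_star = max(Counter(bin_labels).values())   # y* = max_c C(c)
    return (c_bin - y_star) / c_bin < beta
```

With a small `beta`, only bins strongly dominated by one class pass the test.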
If a pure bin is encountered for an individual during training (Step 12) that also has the best fitness, then the next champion for inclusion in the ensemble has been identified. Moreover, each new ensemble member has to improve on the fitness of the previous ensemble member (Step 13).
Note that multiple Pure bins can appear, where this does not place any
constraint on the labels representing different Pure bins. All training
records corresponding to the ‘Pure bins’ are then identified (Step 27) and
removed to define a candidate residual dataset (Step 30).
Finally, the champion individual is then ‘pushed’ to the list of ensemble
members (Step 31). Note that the order in which champions are pushed to the
list is significant (§3.2). At the next iteration of the algorithm, an
entirely new population of programs is evolved to specifically label what the
members of the ensemble currently cannot label. Moreover, ‘early stopping’
might appear if the residual dataset is empty (Step 32).
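The overall residual-removal loop can be condensed into the following self-contained sketch (an assumption-laden toy: ensemble members are single-feature threshold rules rather than evolved trees, and the ‘pure bin’ is replaced by a pure threshold region):

```python
import numpy as np

def fit_stack(X, y, max_members=10, purity=0.99):
    """Boosting a stack: each member claims the largest pure region
    x[f] <= thr, the matching records are removed, and the next member
    trains on the residual."""
    stack, X, y = [], np.asarray(X, dtype=float), np.asarray(y)
    for _ in range(max_members):
        best = None
        for f in range(X.shape[1]):
            for thr in np.unique(X[:, f]):
                region = X[:, f] <= thr
                labels, counts = np.unique(y[region], return_counts=True)
                frac = counts.max() / region.sum()
                if frac >= purity and (best is None or region.sum() > best[3]):
                    best = (f, thr, labels[counts.argmax()], region.sum())
        if best is None:
            break                             # no pure region left to claim
        f, thr, label, _ = best
        stack.append((f, thr, label))
        keep = X[:, f] > thr                  # residual dataset
        X, y = X[keep], y[keep]
        if len(y) == 0:
            break                             # early stopping: residual empty
    return stack

def predict(stack, x):
    """Deploy the stack first-in first-out; the first member whose pure
    region contains x supplies the label."""
    for f, thr, label in stack:
        if x[f] <= thr:
            return label
    return stack[-1][2]                       # fall through to deepest member
</br>```

On a trivially separable 1-D dataset this yields a two-member stack, with the first member answering the ‘easy’ records and the second handling the residual, mirroring the intended deployment order.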
### 3.2 Evaluating an Ensemble Stack Post Training
Post-training, we need to define how the ensemble operates collectively to produce a single label, since during training programs are constructed ‘independently’, without direct reference to other members of the ensemble.
Algorithm 3 summarizes how an ensemble is deployed as a ‘stack’ and evaluated
following the order in which each ensemble member was originally added to the
ensemble, i.e. the stack chooses programs under a first-in, first-out order
(Step 3). Program _i_ may only suggest a label for bins that were originally
identified as ‘Pure bins’. If the program produces a $\hat{y}_{p}$
corresponding to any other type of bin (Empty or Ambiguous), then the next
program, $i+1$, is called on to suggest its mapping $\hat{y}_{p}$ for record
_p_ (Step 16). At any point where a program maps the input to a ‘Pure bin’,
then the corresponding label for that bin can be looked up. If this matches
the actual label, $y_{p}$, then the ensemble prediction is correct (Step 13).
This also implies that the entire ensemble need not be evaluated in order to
make a prediction.
The first-in, first-out deployment of programs mimics the behaviour of
‘falling rule lists’, a model of deployment significant to interpretable
machine learning [21]. Thus, the most frequent queries are answered by the
classifier deployed earliest. As the classes of queries become less frequent
(more atypical) prediction is devolved to classifiers appearing deeper in the
list (queue). The proposed evolutionary ensemble learner adds programs to the
stack in precisely this order.
Algorithm 3 StackEvaluation($X,Y$, Ensemble, stack_depth)
1:$\\#correct\leftarrow\\#error\leftarrow 0$
2:for $\langle x_{p},y_{p}\rangle\in X,Y$ do $\triangleright$ Loop over all
data partition
3: _Initialize.Stack_(Ensemble) $\triangleright$ Initialize stack, using
original ‘fifo’ order
4: $n=$ stack_depth $\triangleright$ Initialize to max ensemble members
5: repeat
6: _Tree $\leftarrow$ pop_(_Stack_) $\triangleright$ Pop a tree, oldest first
7: $\langle bin_{p},\hat{y}_{p}\rangle\leftarrow Tree(x_{p})$ $\triangleright$
Execute Tree on record $x_{p}$
8: if $bin_{p}$ is Pure then $\triangleright$ Test for a ‘Pure’ bin type
9: $n\leftarrow 0$ $\triangleright$ Pure bin, so use this Tree for prediction
10: if $\hat{y}_{p}\neq y_{p}$ then $\triangleright$ Case of prediction not
matching class
11: $\\#error++$ $\triangleright$ Update classification performance
12: else
13: $\\#correct++$
14: end if
15: else
16: $n--$ $\triangleright$ Not a pure bin, so next Tree
17: end if
18: until $n$ is 0 $\triangleright$ No further trees in Ensemble
19:end for
20:return $\langle\\#error,\\#correct\rangle$
### 3.3 Using an Extremely Large Number of Bins
One approach to parameterizing BStacGP is to force the number of bins to be very low (2 or 3). The intuition is that this rewards BStacGP for discovering a mapping of records to bins under which at least one bin becomes pure. This approach will be adopted later for the ‘small scale’ benchmark. Under high-cardinality datasets the opposite approach is assumed. The insight behind this is that there is sufficient data to support multiple pure bins, so we maximize the number of training records mapped to pure bins by the same program. This will also result in the fastest decrease in data cardinality during training.
Taking this latter approach to the limit, we set the number of bins to the resolution of a 32-bit representation, i.e. $2^{32}$ bins. During training, the training data partition again defines bins as pure, ambiguous or empty.
However, given the resolution of the bins, most bins should actually be
‘pure’, reducing the number of boosting epochs necessary to build the
ensemble. Under test conditions, given that the bins have a high resolution,
it is also likely that test data will be mapped to ‘empty’ bins. Hence, under
test conditions, the test data is labelled by the pure bin that it is closest
to. If the closest bin is not pure but ambiguous, then the next tree in the
stack is queried.
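Under this high-resolution regime the test-time lookup reduces to a nearest-neighbour search over the pure bins recorded during training; a minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def nearest_pure_bin_label(yhat_value, pure_centers, pure_labels):
    """A test output rarely lands exactly in a previously seen bin, so
    label it by the closest pure bin recorded during training."""
    pure_centers = np.asarray(pure_centers, dtype=float)
    idx = int(np.argmin(np.abs(pure_centers - yhat_value)))
    return pure_labels[idx]
```

If the nearest bin turned out to be ambiguous rather than pure, the query would instead be passed to the next tree in the stack, as described above.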
## 4 Experimental Methodology
A benchmarking comparison will be made against a set of five datasets from a
recent previous study [26]. This enables us to establish to what degree the
proposed approach is competitive with five state-of-the-art approaches for GP
ensemble classification. The datasets appearing in this comparison are all
widely used binary classification tasks from the UCI
repository,222https://archive-beta.ics.uci.edu as summarized by Table 1.
The training and test partitions are stratified to provide the same class
distribution in each partition and the training/test split is always 70/30%.
The algorithms appearing in this comparison take the following form:
* •
2SEGP: A bagging approach to evolving GP ensembles in which the underlying
design goal is to maintain uniform performance across the multiple bootstrap
bags. Such an approach was demonstrated to be particularly competitive with
other recent developments in evolutionary ensemble learning [26]. In addition,
the availability of a code base enables us to make additional benchmarking
comparisons under a high cardinality dataset.
* •
eGPw: Represents the best-performing configuration of the cooperative
coevolutionary approach to ensemble learning proposed in [20]. Specifically,
benchmarking revealed the ability to discover simple solutions to binary
classification problems while also being competitive with Random Forests and
XGBoost.
* •
DNGP: Represents an approach to GP ensembles in which diversity maintenance
represents the underlying design goal [27]. Diversity maintenance represents a
reoccurring theme in evolutionary ensemble learning, where the underlying
motivation is to reduce the correlation between ensemble members. The
framework was reimplemented and deployed in the benchmarking study of [26].
* •
M3GP: Is not explicitly an evolutionary ensemble learner but does evolve a set
of programs to perform feature engineering [17]. The underlying objective is
to discover a mapping to a new feature space that enables clustering to
separate between classes. M3GP has been extensively benchmarked in multiple
contexts and included in this study as an example of what GP classification
can achieve without evolutionary ensemble learning being the design goal [17,
3].
Table 1: Properties of the benchmarking datasets. Class distribution reflects the distribution of positive to negative class instances.

| Dataset | # Features (_d_) | Cardinality ($|\mathcal{D}|$) | Class distribution (%P / %N) |
|---|---|---|---|
| _Small benchmarking datasets_ | | | |
| BCW | 11 | 683 | 30/65 |
| HEART | 13 | 270 | 45/55 |
| IONO | 33 | 351 | 65/35 |
| PARKS | 23 | 195 | 75/25 |
| SONAR | 61 | 208 | 46/54 |
| _Large benchmarking dataset_ | | | |
| CTU | 8 | 801132 | 55/45 |
The second benchmarking study is then performed on a large cardinality dataset
describing an intrusion detection task. This dataset is the union of the
normal and botnet data from the CTU-13 dataset [9], resulting in hundreds of
thousands of data records (Table 1). In this case, we compare the best two
evolutionary ensembles from the first study with C4.5 [19] and XGBoost [4].
The latter represent very efficient non-evolutionary machine learning
approaches to classification.
Table 2: BStacGP parameterization. _Gap_ defines the size of the parent pool. $\beta$ is defined by Eqn. (1). $\alpha$ weights the fitness regularization term, Algorithm 2. NumBin appears in Algorithm 3 and the remaining parameters in Algorithm 1.

| Parameter | Small scale, fast | Small scale, slow | Large scale, fast | Large scale, slow |
|---|---|---|---|---|
| Max_Boost_epoch | 1000 | 1000 | 10 | 10 |
| Max_GP_epoch | 30 | 30 | 3 | 6 |
| New_Pop_Size | 30 | 1000 | 30 | 30 |
| _Gap_ | 10 | 300 | 10 | 10 |
| NumBin | 2 | 2 | $2^{32}$ | $2^{32}$ |
| Bin Purity ($\beta$) | 0.99 | 0.99 | 0.6 | 0.75 |
| Regularization ($\alpha$) | 0.0 | 0.0 | 0.4 | 0.4 |
| Num. Trials | 40 | 40 | 40 | 40 |
Table 2 summarizes the parameters assumed for BStacGP. Two parameterizations
are considered in each benchmarking study: ‘slow and complex’ versus ‘fast and
simple’. One insight is that higher data cardinality can imply a higher bin
count. We therefore use the lowest bin count (2) and a high purity threshold
(0.99) as a starting point under the Small Scale benchmarking study. Such a
combination forces at least one of the 2 bins to satisfy the high bin purity
threshold. We then differentiate between ‘slow and complex’ versus ‘fast and
simple’ scenarios by increasing the population size (with the parent pool
increasing proportionally). The value for Max_Boost_epoch is set intentionally
high, where in practice such an ensemble size is not encountered due to early
stopping being triggered (Algorithm 1, Step 32). Under the large scale benchmark, the largest bin count was assumed; this does not imply that every bin must contain values, but it does give the distribution the highest resolution.
## 5 Results
Two benchmarking studies are performed (hardware: a laptop with an Intel i7 10700k CPU, 4.3 GHz, single core). The first assumes a suite of ‘small scale’ classification
tasks (§5.1) that recently provided the basis for comparing several state-of-
the-art GP evolutionary ensemble learners [26]. The second study reports
results on a single large cardinality dataset using the best two evolutionary
ensemble learners from the first study, and two non-evolutionary methods
(§5.2). Hereafter, the proposed approach is referred to as BStacGP.
### 5.1 Small Scale Classification Tasks
Tables 3 and 4 report benchmarking for the five small datasets. In the
previous benchmarking study, 2SEGP and M3GP were the best-performing
algorithms on these datasets [26]. Introducing the proposed Boosted Stack
ensemble (BStacGP) to the comparison changes the ranking somewhat. That said,
all five GP formulations perform well on the BCW dataset ($>95\%$ under test),
whereas the widest variance in classifier performance appears under HEART and
SONAR. Applying the Friedman non-parametric test for multiple models to the
ranking of test performance fails to reject the null hypothesis (all
algorithms are ranked equally). Given that these are all strong classification algorithms, this is not in itself unexpected.
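To make the ranking comparison concrete, the Friedman statistic over the test-partition accuracies can be computed from scratch. The sketch below is illustrative, not the paper's implementation; it uses the accuracies of Table 4 (taking the BStacGP ‘slow’ column) and assigns average ranks to ties:

```python
def friedman_statistic(scores_by_algorithm):
    """Friedman test statistic: scores_by_algorithm maps an algorithm
    name to one score per dataset; higher scores get better (lower) ranks,
    with ties receiving average ranks."""
    names = list(scores_by_algorithm)
    k, n = len(names), len(next(iter(scores_by_algorithm.values())))
    rank_sums = dict.fromkeys(names, 0.0)
    for d in range(n):
        row = sorted(names, key=lambda m: -scores_by_algorithm[m][d])
        i = 0
        while i < k:
            j = i
            while j + 1 < k and (scores_by_algorithm[row[j + 1]][d]
                                 == scores_by_algorithm[row[i]][d]):
                j += 1
            for m in row[i:j + 1]:          # ranks i+1..j+1, averaged over ties
                rank_sums[m] += (i + j) / 2 + 1
            i = j + 1
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums.values()) \
        - 3.0 * n * (k + 1)

# Test accuracies from Table 4 (BStacGP 'slow' column).
scores = {
    "BStacGP": [0.960, 0.803, 0.924, 0.951, 0.760],
    "2SEGP":   [0.965, 0.815, 0.896, 0.936, 0.738],
    "DNGP":    [0.959, 0.815, 0.901, 0.917, 0.730],
    "eGPw":    [0.956, 0.790, 0.830, 0.822, 0.762],
    "M3GP":    [0.957, 0.778, 0.871, 0.897, 0.810],
}
chi2 = friedman_statistic(scores)
print(round(chi2, 2))  # 6.6 — below the 9.49 critical value (4 d.o.f., 5% level)
```

The statistic stays below the chi-square critical value, consistent with the failure to reject the null hypothesis reported above.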
Table 3: Classifier accuracy on the _training partition_ for small binary datasets. Bold indicates the best-performing classifier on the dataset Dataset | BStacGP | 2SEGP | DNGP | eGPw | M3GP
---|---|---|---|---|---
| slow | fast | [26] | [26] | [26] | [26]
BCW | 0.994 | 0.995 | 0.995 | 0.979 | 0.983 | 0.971
| $\pm$ 0.006 | $\pm$ 0.005 | $\pm$ 0.005 | $\pm$ 0.010 | $\pm$ 0.008 | $\pm$ 0.002
HEART | 1.000 | 0.985 | 0.944 | 0.915 | 0.907 | 0.970
| $\pm$ 0.0 | $\pm$ 0.015 | $\pm$ 0.022 | $\pm$ 0.021 | $\pm$ 0.025 | $\pm$ 0.017
IONO | 0.993 | 0.983 | 0.976 | 0.955 | 0.884 | 0.932
| $\pm$ 0.008 | $\pm$ 0.017 | $\pm$ 0.017 | $\pm$ 0.015 | $\pm$ 0.032 | $\pm$ 0.042
PARKS | 0.982 | 0.996 | 0.948 | 0.931 | 0.923 | 0.981
| $\pm$ 0.018 | $\pm$ 0.004 | $\pm$ 0.011 | $\pm$ 0.057 | $\pm$ 0.042 | $\pm$ 0.024
SONAR | 0.999 | 1.000 | 0.966 | 0.924 | 0.924 | 1.000
| $\pm$ 0.001 | $\pm$ 0.0 | $\pm$ 0.034 | $\pm$ 0.043 | $\pm$ 0.034 | $\pm$ 0.012
Table 4: Classifier accuracy on the _test partition_ for small binary datasets. Bold indicates the best-performing result Dataset | BStacGP | 2SEGP | DNGP | eGPw | M3GP
---|---|---|---|---|---
| slow | fast | [26] | [26] | [26] | [26]
BCW | 0.96 | 0.957 | 0.965 | 0.959 | 0.956 | 0.957
| $\pm$ 0.017 | $\pm$ 0.022 | $\pm$ 0.018 | $\pm$ 0.019 | $\pm$ 0.018 | $\pm$ 0.014
HEART | 0.803 | 0.796 | 0.815 | 0.815 | 0.790 | 0.778
| $\pm$ 0.094 | $\pm$ 0.052 | $\pm$ 0.062 | $\pm$ 0.049 | $\pm$ 0.034 | $\pm$ 0.069
IONO | 0.924 | 0.901 | 0.896 | 0.901 | 0.830 | 0.871
| $\pm$ 0.027 | $\pm$ 0.047 | $\pm$ 0.047 | $\pm$ 0.026 | $\pm$ 0.057 | $\pm$ 0.057
PARKS | 0.951 | 0.937 | 0.936 | 0.917 | 0.822 | 0.897
| $\pm$ 0.017 | $\pm$ 0.013 | $\pm$ 0.012 | $\pm$ 0.055 | $\pm$ 0.064 | $\pm$ 0.051
SONAR | 0.76 | 0.728 | 0.738 | 0.730 | 0.762 | 0.810
| $\pm$ 0.083 | $\pm$ 0.131 | $\pm$ 0.067 | $\pm$ 0.063 | $\pm$ 0.060 | $\pm$ 0.071
Av.Rank | 2.0 | 3.8 | 2.7 | 3.2 | 5.0 | 4.3
However, BStacGP and 2SEGP are the two highest ranked algorithms on training
and test. With this in mind we consider the average solution complexity and
time to evolve solutions. In the case of complexity, the average number of
nodes in the Tree structured GP individuals comprising an ensemble is counted,
Figure 2. It is apparent that BStacGP is able to discover solutions that are
typically an order of magnitude simpler. Figure 3 summarizes the wall clock
time to conduct training. Both BStacGP and 2SEGP are implemented in Python.
BStacGP typically completes training an order of magnitude earlier than 2SEGP.
In summary, because BStacGP incrementally evolves classifiers only against the ‘residual’ misclassified data, it achieves a significant decrease in computational cost and model complexity.
Figure 2: Solution Complexity for BStacGP and 2SEGP on small scale
classification tasks. ‘fast’ and ‘slow’ represent the two BStacGP
parameterizations. Figure 3: Training time for BStacGP and 2SEGP on small
scale classification tasks. ‘fast’ and ‘slow’ represent the two BStacGP
parameterizations.
### 5.2 Large Scale Classification Task
BStacGP and 2SEGP accounted for 4 of the 5 best classifier performances under the test partition of the ‘Small Scale’ task (Table 4). With this in mind, we now compare the performance of BStacGP and 2SEGP under the high-cardinality CTU dataset with the non-evolutionary classifiers C4.5 and XGBoost. Table 5 reports performance for the average (over 40 trials) of BStacGP, C4.5 and XGBoost, and a single run of 2SEGP; the high run-time cost of 2SEGP precluded performing multiple trials on this benchmark (2SEGP parameterization: pop. size 500, ensemble size 50, max. tree size 500). The best-performing algorithm under test is XGBoost; however, BStacGP is within $1.3\%$ of this result. Conversely, 2SEGP returns a 10% lower accuracy. This result is undoubtedly in part because it took 4 hours to perform 20 generations (2SEGP used 100 generations in the small scale task).
In the case of solution complexity (as measured by node count), BStacGP
returns solutions that are 2 to 3 orders of magnitude simpler than any
comparator algorithm. From the perspective of the computational cost of
performing training, C4.5 is the fastest. However, of the ensemble algorithms,
BStacGP is twice as fast as XGBoost and 3 orders of magnitude faster than
2SEGP.
Table 5: CTU dataset comparison assuming the ‘slow’ parameterization Algorithm | BStacGP | 2SEGP | Decision Tree | XGBoost
---|---|---|---|---
Train Accuracy | 0.987 | 0.8437 | 0.9986 | 0.9716
Test Accuracy | 0.953 | 0.8419 | 0.9625 | 0.9681
number of nodes | 157.9 | 8454 | 52801 | 26792
number of trees | 2.87 | 50 | 1 | 300
avg. tree depth | 6.03 | – | 55 | 6
time (sec) | 23.09 | 13160.92 | 1.54 | 46.92
A second experiment is performed using the CTU dataset, the hypothesis, in
this case, being that constraining C4.5 and XGBoost to the same complexity as
BStacGP will have a significant impact on their classification performance.
Put another way, the comparator algorithms will not be able to discover
solutions with similar complexity to BStacGP without significantly
compromising their classification accuracy.
Table 6: CTU dataset low complexity comparison. BStacGP assumes the fast parameterization from Table 2 Algorithm | BStacGP | Decision Tree | XGBoost
---|---|---|---
Train Accuracy | 0.985 | 0.914 | 0.8796
Test Accuracy | 0.952 | 0.915 | 0.8795
number of nodes | 47.55 | 59 | 75
number of trees | 2.25 | 1 | 5
avg. tree depth | 3.8 | 8 | 3
time (sec) | 11.28 | 0.83 | 0.41
Table 6 summarizes performance under the low complexity condition. It is now
apparent that BStacGP is able to maintain solution accuracy on this task as
well as further reduce the computation necessary to identify such a solution.
This implies that BStacGP has the potential to scale to tasks that other formulations of evolutionary ensemble learning cannot reach, while maintaining state-of-the-art classification performance. Thus, as dataset cardinality increases, algorithm efficiency has an increasing impact, because extensive parameter tuning is no longer feasible; efficiency is therefore an important practical property.
Post training, BStacGP solutions can be queried. For example, the first
program from a typical BStacGP stack ensemble provided $\approx 57\%$ of the labels, the second $\approx 24\%$, the third $\approx 15\%$, and the fourth
$\approx 4\%$. This illustrates the ‘fall through’ nature of BStacGP operation
in which most of the data is labeled by a single GP program. The complexities of the trees associated with each stack level are, respectively, 138, 250, 305 and 407 nodes, again illustrating how complexity increases with position in the stack.
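The ‘fall through’ deployment described above can be sketched as a simple loop. This is an illustrative reconstruction (the `stack_predict` helper and the toy three-level stack are ours, not the paper's code), where each level labels only the records it is confident about and defers the rest:

```python
def stack_predict(stack, records):
    """stack: list of callables mapping a record to a label or to None
    (None = defer to the next level); the final level must always label.
    Returns per-record labels plus how many records each level answered."""
    labels, coverage = {}, []
    remaining = list(records)
    for level, program in enumerate(stack):
        deferred = []
        for r in remaining:
            y = program(r)
            if y is None and level < len(stack) - 1:
                deferred.append(r)          # falls through to the next level
            else:
                labels[r] = y
        coverage.append(len(remaining) - len(deferred))
        remaining = deferred
    return labels, coverage

# Toy three-level stack over scalar 'records': the first level answers
# only where it is confident, and the last level answers everything.
stack = [
    lambda x: 1 if x >= 0.7 else None,
    lambda x: 0 if x <= 0.3 else None,
    lambda x: 1 if x >= 0.5 else 0,
]
labels, coverage = stack_predict(stack, [0.05, 0.2, 0.4, 0.55, 0.8, 0.9])
print(coverage)  # [2, 2, 2]: records answered per stack level
```

The `coverage` list plays the role of the per-level label percentages quoted in the text.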
## 6 Conclusion
An approach to evolutionary ensemble learning is proposed that employs
boosting to develop a stack of GP programs. Key components of the framework
include: 1) interpreting the program output as a distribution; 2) quantizing
the distribution into intervals and therefore ‘binning’ the number of records
mapped to an interval; 3) making predictions on the basis of ‘bin purity’; and
4) removing records from the training partition corresponding to correctly
classified instances. The combination of these properties incrementally
reduces the cardinality of the training partition as classifiers are added to
the ensemble, and explicitly focuses the role of the next program on what
previous programs could not classify. Moreover, the resulting ensemble is then
deployed as a ‘Stack’. This is important because it now means that only part
of the ensemble is responsible for providing a label. Such an approach may
improve the explainability of the resulting ensemble.
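The four components above can be combined into a compact training-loop sketch. Here `evolve`, the bin count, and the purity threshold are illustrative stand-ins for Algorithms 1–3, not the paper's implementation:

```python
def train_stack(records, labels, evolve, num_bins=2, beta=0.99, max_levels=10):
    """records: feature values; labels: parallel 0/1 labels.
    evolve(recs, labs) -> program mapping a record to a real-valued output."""
    stack, recs, labs = [], list(records), list(labels)
    while recs and len(stack) < max_levels:
        program = evolve(recs, labs)
        outs = [program(r) for r in recs]                 # (1) output distribution
        lo, width = min(outs), (max(outs) - min(outs)) / num_bins or 1.0
        bin_of = lambda o: min(int((o - lo) / width), num_bins - 1)
        bins = {}
        for o, y in zip(outs, labs):                      # (2) quantize into bins
            bins.setdefault(bin_of(o), []).append(y)
        pure = {b: round(sum(ys) / len(ys))               # (3) bin purity >= beta
                for b, ys in bins.items()
                if max(sum(ys), len(ys) - sum(ys)) / len(ys) >= beta}
        stack.append((program, lo, width, pure))
        keep = [i for i, (o, y) in enumerate(zip(outs, labs))
                if pure.get(bin_of(o)) != y]              # (4) drop solved records
        if len(keep) == len(recs):
            break                                         # no progress: stop early
        recs, labs = [recs[i] for i in keep], [labs[i] for i in keep]
    return stack

# Toy run on a trivially separable 1-D task; `evolve` just returns identity.
stack = train_stack([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1],
                    evolve=lambda recs, labs: (lambda r: r))
print(len(stack), stack[0][3])  # 1 {0: 0, 1: 1}: one level, both bins pure
```

Each round thus shrinks the training partition, which is the mechanism behind the reduced computational cost reported in Section 5.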
Accuracy of the BStacGP framework on small cardinality datasets previously
employed for benchmarking is empirically shown to be comparable to state-of-
the-art evolutionary ensemble learners. Moreover, training time and model
simplicity is significantly improved. This property is shown to be key to
scaling BStacGP much more efficiently to a large cardinality dataset
containing hundreds of thousands of records. Indeed the results are
competitive with non-evolutionary methods.
Future work will scale BStacGP to multi-class classification and continue to
investigate scalability and solution transparency.
## References
* [1] Badran, K.M.S., Rockett, P.I.: Multi-class pattern classification using single, multi-dimensional feature-space feature extraction evolved by multi-objective genetic programming and its application to network intrusion detection. Genetic Programming and Evolvable Machines 13(1), 33–63 (2012)
* [2] Brameier, M., Banzhaf, W.: Evolving teams of predictors with linear genetic programming. Genetic Programming and Evolvable Machines 2(4), 381–407 (2001)
* [3] Cava, W.G.L., Silva, S., Danai, K., Spector, L., Vanneschi, L., Moore, J.H.: Multidimensional genetic programming for multiclass classification. Swarm and Evolutionary Computation 44, 260–272 (2019)
* [4] Chen, T., Guestrin, C.: Xgboost: A scalable tree boosting system. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 785–794. ACM (2016)
* [5] Curry, R., Lichodzijewski, P., Heywood, M.I.: Scaling genetic programming to large datasets using hierarchical dynamic subset selection. IEEE Transactions on Systems, Man, and Cybernetics – Part B 37(4), 1065–1073 (2007)
* [6] Dietterich, T.G.: An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Mach. Learn. 40(2), 139–157 (2000)
* [7] Fahlman, S.E., Lebiere, C.: The cascade-correlation learning architecture. In: Advances in Neural Information Processing Systems. vol. 2, pp. 524–532. Morgan Kaufmann (1989)
* [8] Folino, G., Pizzuti, C., Spezzano, G.: Training distributed GP ensemble with a selective algorithm based on clustering and pruning for pattern classification. IEEE Transactions on Evolutionary Computation 12(4), 458–468 (2008)
* [9] García, S., Grill, M., Stiborek, J., Zunino, A.: An empirical comparison of botnet detection methods. Computer Security 45, 100–123 (2014)
* [10] Gathercole, C., Ross, P.: Dynamic training subset selection for supervised learning in genetic programming. In: Proceedings of the Parallel Problem Solving from Nature Conference. LNCS, vol. 866, pp. 312–321. Springer (1994)
* [11] Iba, H.: Bagging, boosting, and bloating in genetic programming. In: Proceedings of the Genetic and Evolutionary Computation Conference. pp. 1053–1060. Morgan Kaufmann (1999)
* [12] Imamura, K., Soule, T., Heckendorn, R.B., Foster, J.A.: Behavioral diversity and a probabilistically optimal GP ensemble. Genetic Programming and Evolvable Machines 4(3), 235–253 (2003)
* [13] Lichodzijewski, P., Heywood, M.I.: Managing team-based problem solving with symbiotic bid-based genetic programming. In: Ryan, C., Keijzer, M. (eds.) Proceedings of the Genetic and Evolutionary Computation Conference. pp. 363–370. ACM (2008)
* [14] Lichodzijewski, P., Heywood, M.I.: Symbiosis, complexification and simplicity under GP. In: Proceedings of the Genetic and Evolutionary Computation Conference. pp. 853–860. ACM (2010)
* [15] McIntyre, A.R., Heywood, M.I.: Classification as clustering: A pareto cooperative-competitive GP approach. Evolutionary Computation 19(1), 137–166 (2011)
* [16] Muni, D.P., Pal, N.R., Das, J.: A novel approach to design classifiers using genetic programming. IEEE Transactions on Evolutionary Computation 8(2), 183–196 (2004)
* [17] Muñoz, L., Silva, S., Trujillo, L.: M3gp – multiclass classification with GP. In: Proceedings of the European Conference on Genetic Programming. LNCS, vol. 9025, pp. 78–91 (2015)
* [18] Potter, M.A., Jong, K.A.D.: Cooperative coevolution: An architecture for evolving coadapted subcomponents. Evolutionary Computation 8(1), 1–29 (2000)
* [19] Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann (1993)
* [20] Rodrigues, N.M., Batista, J.E., Silva, S.: Ensemble genetic programming. In: Proceedings of the European Conference on Genetic Programming. LNCS, vol. 12101, pp. 151–166 (2020)
* [21] Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1(5), 206–215 (2019)
* [22] Sipper, M., Moore, J.H.: Symbolic-regression boosting. CoRR abs/2206.12082 (2022)
* [23] Song, D., Heywood, M.I., Zincir-Heywood, A.N.: Training genetic programming on half a million patterns: an example from anomaly detection. IEEE Transactions on Evolutionary Computation 9(3), 225–239 (2005)
* [24] Soule, T.: Voting teams: a cooperative approach to non-typical problems using genetic programming. In: Proceedings of the Genetic and Evolutionary Computation Conference. pp. 916–922. Morgan Kaufmann (1999)
* [25] Thomason, R., Soule, T.: Novel ways of improving cooperation and performance in ensemble classifiers. In: Proceedings of the Genetic and Evolutionary Computation Conference. pp. 1708–1715. ACM (2007)
* [26] Virgolin, M.: Genetic programming is naturally suited to evolve bagging ensembles. In: Proceedings of the Genetic and Evolutionary Computation Conference. pp. 830–839. ACM (2021)
* [27] Wang, S., Mei, Y., Zhang, M.: Novel ensemble genetic programming hyper-heuristics for uncertain capacitated arc routing problem. In: Proceedings of the Genetic and Evolutionary Computation Conference. pp. 1093–1101. ACM (2019)
* [28] Wu, S.X., Banzhaf, W.: Rethinking multilevel selection in genetic programming. In: Proceedings of the Genetic and Evolutionary Computation Conference. pp. 1403–1410. ACM (2011)
# Towards learning optimized kernels for complex Langevin
Daniel Alvestad, Rasmus Larsen, Alexander Rothkopf
Faculty of Science and Technology, University of Stavanger, 4021 Stavanger, Norway
###### Abstract
We present a novel strategy aimed at restoring correct convergence in complex
Langevin simulations. The central idea is to incorporate system-specific prior
knowledge into the simulations, in order to circumvent the NP-hard sign
problem. In order to do so, we modify complex Langevin using kernels and
propose the use of modern auto-differentiation methods to learn optimal kernel
values. The optimization process is guided by functionals encoding relevant
prior information, such as symmetries or Euclidean correlator data. Our
approach recovers correct convergence in the non-interacting theory on the
Schwinger-Keldysh contour for any real-time extent. For the strongly coupled
quantum anharmonic oscillator we achieve correct convergence up to three times the real-time extent of the previous benchmark study. An appendix sheds light on the fact that for correct convergence not only the absence of boundary terms but also the correct Fokker-Planck spectrum is crucial.
###### keywords:
stochastic quantization, complex Langevin, sign problem, real-time, machine learning
## 1 Motivation
Strongly correlated quantum systems underlie some of the most pressing open questions in modern theoretical physics, whether it is the transport of highly energetic partons through a liquid of deconfined quarks and gluons busza_heavy_2018, created in heavy-ion collisions Foka:2016vta, or the transport of non-relativistic fermions chien_quantum_2015, captured in the iconic Hubbard model qin_hubbard_2021 at low energies. When formulated in
Minkowski time, quantum field theories so far have defied a treatment by
conventional Monte-Carlo simulation techniques, due to the presence of the
notorious sign problem gattringer_approaches_2016 ; Pan:2022fgf . And while
progress has been made in extracting real-time dynamics from Euclidean time
simulations using e.g. Bayesian inference Rothkopf:2022ctl , the sign problem
prevails by rendering the extraction ill-posed and equally exponentially hard.
The sign problem has been proven to be NP-hard Troyer:2004ge , which entails
that no generic solution method is likely to exist. In turn, if we wish to
make inroads towards overcoming the sign problem, system-specific solutions
are called for.
Over the past decade, several approaches to tackle the sign problem have been
put forward gattringer_approaches_2016 ; berger_complex_2021 . They can be
divided into system-specific and system-agnostic approaches. The reformulation
strategies discussed e.g. in Refs. chandrasekharan_meron-cluster_1999 ;
mercado_qcd_2011 ; Kloiber:2013rba are an example of the former class, where
the partition function of the original system is re-expressed in terms of new
degrees of freedom, for which no sign problem exists. While highly successful
in the systems for which a reformulation has been discovered, no systematic
prescription exists to transfer the approach to other systems. The other
approaches, among them reweighting, extrapolation from sign-problem free
parameter ranges de_forcrand_constraining_2010 ; braun_imaginary_2013 ;
braun_zero-temperature_2015 ; guenther_qcd_2017 , density of states
wang_efficient_2001 ; langfeld_density_2012 ; gattringer_density_2015 , tensor
networks orus_tensor_2019 ; hauschild_efficient_2018 , Lefschetz thimbles
rom_shifted-contour_1997 ; cristoforetti_new_2012 ; Alexandru:2020wrj and
complex Langevin (CL) damgaard_stochastic_1987 ; namiki_stochastic_1992 all
propose a generic recipe to estimate observables in systems with a sign
problem. As the NP-hard sign problem however requires system-specific
strategies, all of these methods are destined to fail in some form or the
other. Be it that their costs scale excessively when deployed to realistic
systems (e.g. reweighting, Lefschetz thimbles, tensor networks) or that they
simply fail to converge to the correct solution (complex Langevin).
Both the Lefschetz Thimbles and complex Langevin belong to the class of
complexification strategies berger_complex_2021 . They attempt to circumvent
the sign problem by moving the integration of the Feynman integral into the
complex plane. After complexifying the degrees of freedom, the former proposes
to integrate over a specific subspace on which the imaginary part of the
Feynman weight remains constant (thimble), while the latter proposes to carry
out a diffusion process of the coupled real- and imaginary part of the
complexified degrees of freedom.
In this paper our focus lies on the complex Langevin approach, as it has been
shown to reproduce correctly the physics of several strongly correlated model
systems, albeit in limited parameter ranges Seiler:2017wvd . Most importantly
in its naive implementation it scales only with the volume of the system,
similar to conventional Monte-Carlo simulations. In the past, complex Langevin
had suffered from two major drawbacks: the occurrence of unstable
trajectories, called runaways, and the convergence to incorrect solutions. In a
previous publication Alvestad:2021hsi we have shown how to avoid runaways by
deploying inherently stable implicit solvers (c.f. the use of adaptive step
size AartsJames2010 ). In this study we propose a novel strategy to restore
correct convergence in the complex Langevin approach.
One crucial step towards establishing complex Langevin as a reliable tool to
attack the sign problem is to identify when it converges to incorrect
solutions. The authors of ref. Aarts:2011ax and later Nagata:2016vkn
discovered that in order for CL to reproduce the correct expectation values of
the underlying theory, the histograms of the sampled degrees of freedom must
fall off rapidly in the imaginary direction. Otherwise boundary terms spoil
the proof of correct convergence. The absence of boundary terms has been established as a necessary criterion, and efforts are underway Sexty:2021nov to
compensate for their presence to restore correct convergence.
With QCD at the center of attention, the gauge cooling strategy Seiler:2012wz
; berges_real-time_2008 , based on exploiting gauge freedom, has been
proposed. It has recently been amended by the dynamic stabilization approach
Attanasio:2018rtq ; Aarts:2016qhx , which modifies the CL stochastic dynamics
with an additional drift term. Both are based on the idea that by pulling the
complexified degrees of freedom closer to the real axis, boundary terms can be
avoided. Their combination has led to impressive improvements in the correct
convergence of complex Langevin in the context of QCD thermodynamics at finite baryon chemical potential Aarts:2022krz and is currently explored in
the simulation of real-time gauge theory Boguslavski:2022vjz .
We focus here on scalar systems formulated in real-time on the Schwinger-
Keldysh contour (for a Lefschetz thimble perspective see Alexandru:2016gsd ;
Alexandru:2017lqr ). For scalars, gauge freedom does not offer a rescue from
the convergence problems. The fact that dynamical stabilization introduces a
non-holomorphic modification of the drift term means that the original proof
of convergence is not applicable, which is why we refrain from deploying it
here. Furthermore the boundary term correction requires that the eigenvalues
of the Fokker-Planck equation associated with the original system lie in the
lower half of the complex plane, which is not necessarily the case in the
scalar systems that we investigate.
The convergence problem in real-time complex Langevin is intimately connected
with the extent of the real-time contour BergesSexty2007 . In a previous
publication Alvestad:2021hsi we showed that for a common benchmark system,
the strongly correlated quantum anharmonic oscillator, real-time simulations
directly on the SK contour are feasible for times up to $mt_{\rm max}=0.5$.
Convergence quickly breaks down when extending the contour beyond this point.
Within the complex Langevin community, coordinate transformations and
redefinitions of the degrees of freedom have been used in the past to weaken
the sign problem in a system specific manner (see e.g. discussion in
Aarts:2012ft ). All of these reformulations can be captured mathematically by
introducing a so called kernel for complex Langevin. It amounts to a
simultaneous modification of the drift and noise contribution to the CL
stochastic dynamics. In the past it has been used to improve the
autocorrelation time in real-valued Langevin simulations
namiki_stochastic_1992 and has been explored in simple model systems to
restore the convergence of complex Langevin (see e.g. Okamoto:1988ru ). The construction of kernels, as discussed in the literature, applies only to specific systems, and so far no systematic strategy exists to make kernels work in more realistic theories.
Our study takes inspiration from both conceptual and technical developments in
the machine learning community. In machine learning, an optimization
functional, based on prior knowledge and data is used to train an algorithm to
perform a specific task. The algorithm depends on a set of parameters, e.g.
the weights of a neural network, which need to be tuned to minimize the
prescribed optimization functional. Highly efficient automatic differentiation
programming techniques baydin2018automatic have been developed to compute the
dependence of the outcome of complex algorithms on their underlying
parameters. Here we utilize them to put forward a systematic strategy to
incorporate prior knowledge about the system into the CL evolution by learning
optimal kernels.
In section 2 we review the concept of kernelled Langevin, first in the context
of Euclidean time simulations and subsequently for use in complex Langevin. In
section 3 we show how the concept of a kernel emerges in a simple model system
and how it relates to the Lefschetz thimbles of the model. Subsequently we
discuss that a constant kernel can be used to restore convergence of real-time
complex Langevin for the quantum harmonic oscillator. The kernel found in this
fashion will help us to improve the convergence of the interacting theory too.
Section 4 introduces the central concept of our study: a systematic strategy
to learn optimal kernels for complex Langevin, based on system-specific prior
information. Numerical results from deploying a constant kernel to the quantum
anharmonic oscillator are presented in section 4.3 (Source code for the kernel
optimization and simulation is written in Julia and available at
daniel_alvestad_2022_7373498 ), leading to a significant extension of correct
convergence. In the appendices, we discuss some of the limitations of constant
kernels and show in the context of simple models that correct convergence
requires not only the vanishing of boundary terms but also that the spectrum of the associated Fokker-Planck equation remain negative.
## 2 Neutral and non-neutral modifications of Langevin dynamics
Stochastic quantization, the framework underlying Langevin simulations, sets
out to construct a stochastic process for fields in an artificial additional
time direction $\tau_{\rm L}$ with a noise structure, which correctly
reproduces the quantum statistical fluctuations in the original theory. In the
context of conventional Monte-Carlo simulations in Euclidean time, where
expectation values of observables are given by the path integral
$\displaystyle\langle O\rangle=\frac{1}{Z}\int{\cal
D}\phi\;O[\phi]e^{-S_{E}[\phi]},\quad S_{E}[\phi]=\int d^{d}xL_{E}[\phi],$ (1)
with Euclidean action $S_{E}$, the goal thus is to guarantee at late Langevin
times a distribution of fields $\Phi[\phi]\propto{\rm
exp}\big{(}-S_{E}[\phi]\big{)}$. The chain of configurations $\phi(\tau_{L})$
underlying the distribution $\Phi[\phi]$, can then be used to evaluate the
expectation values of observables $O$ from the mean of samples $\langle
O\rangle=\lim_{\tau_{\rm L}\to\infty}\frac{1}{\tau_{\rm L}}\int_{0}^{\tau_{\rm
L}}d\tau_{\rm L}^{\prime}O[\phi(\tau_{\rm L}^{\prime})]$. The simplest
stochastic process, which realizes this goal and which is therefore commonly
deployed is
$\displaystyle\frac{d\phi}{d\tau_{\rm L}}=-\frac{\delta
S_{E}[\phi]}{\delta\phi(x)}+\eta(x,\tau_{\rm L})\quad\textrm{with}$ (2)
$\displaystyle\langle\eta(x,\tau_{\rm
L})\rangle=0,\quad\langle\eta(x,\tau_{\rm L})\eta(x^{\prime},\tau_{\rm
L}^{\prime})\rangle=2\delta(x-x^{\prime})\delta(\tau_{\rm L}-\tau_{\rm
L}^{\prime}).$
Its drift term is given by the derivative of the action $S_{E}$ and the noise
terms $\eta$ are Gaussian. The associated Fokker-Planck equation reads
$F_{\textrm{FP}}=\int
d^{d}x\;\frac{\partial}{\partial\phi(x)}\left(\frac{\partial}{\partial\phi(x)}+\frac{\delta
S_{E}[\phi]}{\delta\phi(x)}\right),\quad\frac{\partial\Phi(\phi,\tau_{\rm
L})}{\partial\tau_{\rm L}}=F_{\textrm{FP}}\Phi(\phi,\tau_{\rm L}).$ (3)
For an in-depth review of the approach see e.g. ref. namiki_stochastic_1992 .
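For intuition, the process of eq. 2 can be discretized with a simple Euler-Maruyama step. The following sketch treats a single anharmonic degree of freedom with $S_{E}(\phi)=\phi^{2}/2+\lambda\phi^{4}/4$; the coupling, step size, and trajectory length are illustrative choices, not values from this work:

```python
import math
import random

def langevin_samples(dS, dtau=1e-3, n_steps=200_000, seed=1):
    """Euler-Maruyama discretization of eq. (2) for one degree of freedom:
    phi -> phi - dS(phi)*dtau + sqrt(2*dtau)*eta, with unit-variance
    Gaussian eta so that the noise matches the 2*delta normalization."""
    rng = random.Random(seed)
    phi, samples = 0.0, []
    for _ in range(n_steps):
        phi += -dS(phi) * dtau + math.sqrt(2.0 * dtau) * rng.gauss(0.0, 1.0)
        samples.append(phi)
    return samples

lam = 0.4  # quartic coupling; drift is dS/dphi = phi + lam * phi**3
samples = langevin_samples(lambda p: p + lam * p**3)
phi2 = sum(s * s for s in samples) / len(samples)
print(phi2)  # estimates <phi^2>; the quartic term pushes it below 1
```

The late-time histogram of `samples` approximates $\Phi[\phi]\propto\exp(-S_{E}[\phi])$, and observables are read off as Langevin-time averages, exactly as in the text.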
In the following we will discuss the fact that there exists the freedom to
introduce a so called kernel into eq. 3, which as a purely real quantity
allows us to modify the above Fokker-Planck equation without spoiling the
convergence to the correct stationary solution
$\Phi[\phi]=\lim_{\tau_{L}\to\infty}\Phi[\phi,\tau_{L}]\propto{\rm
exp}\big{(}-S_{E}[\phi]\big{)}$. One may use this freedom to improve
autocorrelation times of the simulation and for other problem-specific
optimizations as has been explored in the literature.
Subsequently we will turn our attention to the case of complex Langevin, where
the simplest stochastic process proposed by stochastic quantization is not
guaranteed to converge to the correct solution. In that case we will explore
how a reparametrization of the associated Fokker-Planck equations through in
general complex kernels can be used to not only change the convergence speed
but actually to change the stationary distribution itself, allowing us to
recover correct convergence where the naive process fails.
### 2.1 Kernelled Real Langevin
As alluded to above, there exists a freedom to reparametrize the Fokker-Planck
eq. 3 by introducing a real-valued kernel function
$K_{ij}(x,x^{\prime},\phi;\tau_{\rm L})$
$F_{\textrm{FP}}=\sum_{i,j}\int d^{d}x\int
d^{d}x^{\prime}\;\frac{\partial}{\partial\phi_{i}(x)}K_{ij}(x,x^{\prime},\phi;\tau_{\rm
L})\left(\frac{\partial}{\partial\phi_{j}(x^{\prime})}+\frac{\delta
S_{E}[\phi]}{\delta\phi_{j}(x^{\prime})}\right).$ (4)
Written in its most generic form, it may couple the different degrees of
freedom of the system (according to the $ij$ indices), it may couple different
space-time points (according to its $x$ and $x^{\prime}$ dependence) and may
depend explicitly both on the Langevin time $\tau_{\rm L}$, as well as the
field degrees of freedom $\phi$. The corresponding Langevin equation reads
$\displaystyle\frac{d\phi_{i}(x,\tau_{\rm L})}{d\tau_{\rm L}}=$
$\displaystyle\sum_{j}\left\\{-\int
d^{d}x^{\prime}K_{ij}(x,x^{\prime};\phi)\frac{\delta
S_{E}[\phi]}{\delta\phi_{j}(x^{\prime},\tau_{\rm L})}+\int
d^{d}x^{\prime}\frac{\delta
K_{ij}(x,x^{\prime};\phi)}{\delta\phi_{j}(x^{\prime},\tau_{\rm L})}\right.$
(5) $\displaystyle\left.+\int
d^{d}x^{\prime}H_{ij}(x,x^{\prime};\phi)\eta(x^{\prime},\tau_{\rm
L})\right\\}\quad\textrm{with}$ $\displaystyle K_{ij}(x,x^{\prime};\phi)=$
$\displaystyle\sum_{k}\int
d^{d}x^{\prime\prime}H_{ik}(x,x^{\prime\prime};\phi)H_{jk}(x^{\prime},x^{\prime\prime};\phi),$
where in the last equation we assume that $K$ is factorizable. In practice we
will either choose kernels that can be factorized using the square root of
their eigenvalues, or we will start directly by constructing a function $H$
that combines into an admissible $K$.
Let us gain some intuition about the role of the kernel by considering it in
its simplest form: a constant scalar kernel, which multiplies each d.o.f. by a
real number $\gamma$. Inspecting eq. 5 we find that, as it appears in front of
the drift term and as a square root in front of the noise term, $\gamma$
simply leads to a redefinition of the Langevin time coordinate $\tau_{\rm
L}^{\prime}=\gamma\tau_{\rm L}$. While the stationary solution is left
unchanged, the convergence time has been modified.
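This time-rescaling can be checked in a minimal numerical sketch. The toy action $S_{E}(x)=x^{2}/2$ for a single real d.o.f., the step size, and the chain lengths below are our own illustrative choices, not taken from the text; the stationary value $\langle x^{2}\rangle=1$ is recovered for any $\gamma$, while $\gamma$ only sets the relaxation rate.

```python
import numpy as np

def kernelled_langevin(gamma, n_steps=1_000_000, dt=0.01, seed=7):
    """Euler discretization of dx/dtau_L = -gamma S'(x) + sqrt(gamma) eta,
    for the toy action S(x) = x^2 / 2 and noise with <eta eta'> = 2 delta."""
    rng = np.random.default_rng(seed)
    # pre-generate the noise increments sqrt(gamma) * eta, variance 2 gamma dt
    noise = np.sqrt(2.0 * gamma * dt) * rng.normal(size=n_steps)
    x, second_moment = 0.0, 0.0
    for xi in noise:
        x += -gamma * x * dt + xi  # kernel in the drift, its square root in the noise
        second_moment += x * x
    return second_moment / n_steps

# <x^2> is gamma-independent; only the autocorrelation time changes.
est1, est5 = kernelled_langevin(1.0), kernelled_langevin(5.0)
print(est1, est5)
```

Both estimates agree with the exact stationary value within statistical errors, illustrating that a constant kernel is a neutral modification of the dynamics.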
Even for more general kernels, the fact that $K$ appears in the generalized
Fokker-Planck eq. 4 on the outside of the parenthesis
$\left(\frac{\partial}{\partial\phi_{i}(x)}+\frac{\delta
S_{E}[\phi]}{\delta\phi_{i}(x)}\right)$ tells us that the stationary
distribution remains unchanged. It goes without saying that choosing
$K_{ij}(x,x^{\prime};\phi)=\delta_{ij}\delta(x^{\prime}-x)$ we regain the
standard Langevin eq. 2.
### 2.2 Kernelled Complex Langevin
Let us now consider the application of stochastic quantization to complex-
valued path integrals, in particular to those describing real-time physics in
Minkowski time. Here the observables are given by Feynman’s path integral
$\displaystyle\langle O\rangle=\frac{1}{Z}\int{\cal
D}\phi\;O[\phi]e^{iS_{M}[\phi]},\quad S_{M}[\phi]=\int d^{d}xL_{M}[\phi],$ (6)
which houses the Minkowski time action of the theory $S_{M}$. Stochastic
quantization in this case proposes to modify the real-valued stochastic
process of eq. 2 via the substitution $-S_{E}\to iS_{M}$ such that
$\displaystyle\frac{d\phi}{d\tau_{\rm L}}=i\frac{\delta
S_{M}[\phi]}{\delta\phi(x)}+\eta(x,\tau_{\rm L})\quad\textrm{with}$ (7)
$\displaystyle\langle\eta(x,\tau_{\rm
L})\rangle=0,\quad\langle\eta(x,\tau_{\rm L})\eta(x^{\prime},\tau_{\rm
L}^{\prime})\rangle=2\delta(x-x^{\prime})\delta(\tau_{\rm L}-\tau_{\rm
L}^{\prime}).$
It is obvious that even if one starts out with purely real degrees of freedom
at $\tau_{\rm L}=0$, the presence of the complex drift term necessitates the
complexification $\phi=\phi_{\rm R}+i\phi_{\rm I}$, with $\phi_{\rm R}$ and
$\phi_{\rm I}$ obeying coupled stochastic evolution equations.
In the complexified scenario, the question of correct convergence is not as
simple to answer as in the purely real case. The most stringent criterion
refers to whether complex Langevin reproduces the correct expectation values
$\displaystyle\lim_{\tau_{\rm L}\to\infty}\frac{1}{\tau_{\rm
L}}\int_{0}^{\tau_{\rm L}}d\tau_{\rm
L}^{\prime}O[\phi_{R}+i\phi_{I}]\overset{?}{=}\frac{1}{Z}\int{\cal
D}\phi\;O[\phi]e^{iS_{M}[\phi]}$ (8)
of the theory, defined on the right. Indeed, it has been found that the
dynamics of eq. 7 may violate the equality in eq. 8; that is, complex Langevin
converges, but not to the correct solution. In this study we set out to
recover correct convergence by introducing kernels into the complex Langevin
dynamics.
To this end we consider a not-necessarily real kernel function
$K(x,x^{\prime};\phi)$ which enters the complexified dynamics as
$\displaystyle\frac{d\phi}{d\tau_{\rm L}}=\int
d^{d}x^{\prime}\left\\{iK(x,x^{\prime};\phi)\frac{\delta
S_{M}[\phi]}{\delta\phi(x^{\prime},\tau_{\rm L})}+\frac{\delta
K(x,x^{\prime};\phi)}{\delta\phi(x^{\prime},\tau_{\rm
L})}+H(x,x^{\prime};\phi)\eta(x^{\prime},\tau_{\rm L})\right\\}$ (9)
$\displaystyle\textrm{with}\quad\langle\eta(x,\tau_{\rm
L})\rangle=0,\quad\langle\eta(x,\tau_{\rm L})\eta(x^{\prime},\tau_{\rm
L}^{\prime})\rangle=2\delta(x-x^{\prime})\delta(\tau_{\rm L}-\tau_{\rm
L}^{\prime})$ $\displaystyle\textrm{and}\quad K(x,x^{\prime};\phi)=\int
d^{d}x^{\prime\prime}H(x,x^{\prime\prime};\phi)H(x^{\prime},x^{\prime\prime};\phi).$
Expressed as two separate but coupled stochastic processes for the real- and
imaginary part of the complexified field we obtain
$\displaystyle\frac{d\phi_{R}}{d\tau_{\rm L}}=\int
d^{d}x^{\prime}\left.\left\\{\textrm{Re}\left[K[\phi]i\frac{\delta
S_{M}[\phi]}{\delta\phi}+\frac{\delta
K[\phi]}{\delta\phi}\right]+\textrm{Re}\left[H[\phi]\right]\eta\right\\}\right|_{\phi=\phi_{R}+i\phi_{I}},$
(10) $\displaystyle\frac{d\phi_{I}}{d\tau_{\rm L}}=\int
d^{d}x^{\prime}\left.\left\\{\textrm{Im}\left[K[\phi]i\frac{\delta
S_{M}[\phi]}{\delta\phi}+\frac{\delta
K[\phi]}{\delta\phi}\right]+\textrm{Im}\left[H[\phi]\right]\eta\right\\}\right|_{\phi=\phi_{R}+i\phi_{I}}.$
Note that at this point we are dealing with two different concepts of Fokker-
Planck equations. One describes how the probability distribution
$\Phi[\phi_{R},\phi_{I}]$ of the real- and imaginary part $\phi_{R}$,
$\phi_{I}$ of the complexified field evolve under eq. 9
$\displaystyle\frac{\partial\Phi}{\partial\tau_{\rm L}}=$
$\displaystyle\left[\left(\frac{\partial}{\partial\phi^{R}}H_{R}+\frac{\partial}{\partial\phi^{I}}H_{I}\right)^{2}-\frac{\partial}{\partial\phi^{R}}{\rm
Re}\left\\{iK\frac{\partial S_{M}}{\partial\phi}+\frac{\partial
K}{\partial\phi}\right\\}\right.$ (11)
$\displaystyle\left.-\frac{\partial}{\partial\phi^{I}}{\rm
Im}\left\\{iK\frac{\partial S_{M}}{\partial\phi}+\frac{\partial
K}{\partial\phi}\right\\}\right]\Phi=L_{K}\Phi.$
We denote by $L_{K}$ the operator of the real Fokker-Planck equation, which
has been separated into real and imaginary parts; it is not to be confused
with the original, now complex, Fokker-Planck operator $F_{FP}$, which is
defined on the real part of $\phi$ only. For the term quadratic in derivatives
we have split the kernel $K$ into the product of $H$ functions, as shown in
eq. 9, such that each derivative acts on either the real or the imaginary part
of $H$. Since it is the noise term of the Langevin eq. 9 that translates into
a term quadratic in derivatives in the Fokker-Planck language, it is there
that $H$ appears in eq. 11.
It is important to recognize that the correct late Langevin-time distribution
of this Fokker-Planck equation is purely real and therefore is not related in
a trivial manner to the Feynman weight ${\rm exp[iS_{M}]}$ of the original
path integral, as has been established in simple models in the literature as
discussed e.g. in refs. Klauder:1985kq ; Giudice:2013eva ; Abe:2016hpd ;
Seiler:2017vwj ; Salcedo:2018uop .
The other Fokker-Planck equation is not a genuine Fokker-Planck equation in
the statistical sense, as it does not describe the evolution of a real-valued
probability density $P[\phi,\tau_{L}]$ but instead that of a complex-valued
distribution $\rho(\phi,\tau_{L})$
$\displaystyle\frac{\partial}{\partial\tau_{L}}\rho(\phi,\tau_{L})=F_{FP}\rho(\phi,\tau_{L}),$
(12) $\displaystyle F_{FP}=\sum_{i,j}\int d^{d}x\int
d^{d}x^{\prime}\;\frac{\partial}{\partial\phi_{i}(x)}K_{ij}(x,x^{\prime},\phi;\tau_{\rm
L})\left(\frac{\partial}{\partial\phi_{j}(x^{\prime})}-i\frac{\delta
S_{M}[\phi]}{\delta\phi_{j}(x^{\prime})}\right).$
It is this equation whose late-time limit we expect to reproduce the Feynman
weight, $\lim_{\tau_{L}\to\infty}\rho(\phi,\tau_{L})={\rm exp[iS_{M}]}$, and
which we will refer to in the following as the complex Fokker-Planck equation.
Significant progress in the understanding of the convergence properties of
complex Langevin has been made starting with ref. Aarts:2011ax in the form of
so-called correctness criteria.
The criteria most often discussed in the literature are boundary terms (for a
detailed exposition see refs. Aarts:2011ax ; Nagata:2016vkn ). They tell us
whether the expectation value calculated from the real distribution
$\Phi(\phi^{R},\phi^{I};\tau_{\rm L})$ (eq. 11), which we can sample using the
CL, is the same as the expectation value obtained from the complex
distribution $\rho(\phi;\tau_{\rm L})$. The latter can only be obtained by
solving the complex Fokker-Planck equation, eq. 12. The two expectation
values only agree, $\langle\mathcal{O}\rangle_{\Phi}(\tau_{\rm
L})=\langle\mathcal{O}\rangle_{\rho}(\tau_{\rm L})$, if
$\Phi(\phi^{R},\phi^{I};\tau_{\rm L})$ falls off exponentially fast. If it
does not fall off sufficiently fast, it will produce boundary terms and the
equality in eq. 8 is not valid. This criterion is however not sufficient, as
it does not guarantee that the equilibrium distribution of the complex
Fokker-Planck equation is ${\rm exp[iS_{M}]}$. These two criteria combined are
however sufficient to claim convergence of the CL to the true solution. To
show that the correctness criterion still holds after introducing a kernel
into the CL, we revisit the proof in appendix A.
How can a kernel help to restore correct convergence? Not only do we need to
make sure that no boundary terms arise in sampling eq. 10, but also that the
complex Fokker-Planck equation has a unique and correct complex stationary
distribution. That is, in general we need a non-neutral modification of the
complex Langevin dynamics.
If we were to introduce a real-valued kernel, as in the case of conventional
real-valued Langevin, we would be able to change the speed of convergence but
not the stationary solution. On the other hand, since the drift term is
complex, there is no reason not to consider also complex-valued kernels, which
act differently on the stochastic processes for $\phi_{R}$ and $\phi_{I}$,
representing a genuine non-neutral modification of the corresponding
Fokker-Planck eq. 11. Similarly, in the complex Fokker-Planck eq. 12 the
presence of a complex $K_{ij}$ can change the stationary distribution through
a reshuffling of the associated eigenvalues, as discussed in more detail in
appendix A. For a comprehensive discussion of different modifications to
complex Langevin, including kernels, see also ref. Aarts:2012ft .
In the following sections we will start by constructing an explicit example
of a field-independent kernel that improves convergence in the free theory,
and find that it can restore correct convergence in the interacting theory to
some degree. We will then present our novel strategy to learn optimal kernels
for the restoration of correct convergence and showcase their efficiency in a
benchmark model system. Subsequently we discuss the limitations of
field-independent kernels and shed light on how kernels connect to the
correctness criteria.
## 3 A field independent kernel for real-time complex Langevin
In this section, we will manually construct one specific field-independent
kernel and demonstrate its use to improve convergence in real-time simulations
of the quantum anharmonic oscillator. The form of the kernel is motivated by
insight gained in a simple one d.o.f. model and reveals an interesting
connection between kernelled Langevin and the thimble approach. Since in the
following only low dimensional model systems are considered, we will refer to
the dynamical degrees of freedom from now on as $x$.
### 3.1 A kernel for the simplest real-time model
Following ref. Okamoto:1988ru let us investigate the simplest model of real-
time physics, the integrals
$\displaystyle\langle x^{n}\rangle=\frac{1}{Z}\int\;dx\,x^{n}\,{\rm
exp}[-\frac{1}{2}ix^{2}],\quad Z=\int\;dx\,\,{\rm exp}[-\frac{1}{2}ix^{2}].$
(13)
Attempting to evaluate this expression using the complex Langevin approach for
$x(\tau_{L})$ leads to the stochastic process
$\displaystyle\frac{dx}{d\tau_{L}}=-ix+\eta,$ (14)
with Gaussian noise $\eta$. Equation 14 fails to reproduce the correct values
of $\langle x^{n}\rangle$.
We can understand this failure by recognizing that without regularization the
original integral in eq. 13 is not well defined and this lack of
regularization is inherited by the Langevin eq. 14.
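This failure can be made tangible in a few lines of code (a sketch of our own, not from the original study): writing $x=x_{R}+ix_{I}$, the drift $-ix$ of eq. 14 is a pure rotation in the $(x_{R},x_{I})$ plane, so the noise injected into the real part is never damped and $\langle|x|^{2}\rangle$ grows linearly in Langevin time (at rate 2 for the noise normalization used here) instead of equilibrating.

```python
import numpy as np

rng = np.random.default_rng(0)
n_traj, n_steps, dt = 200, 50_000, 1e-3   # 200 trajectories evolved up to tau_L = 50

x = np.zeros(n_traj, dtype=complex)
for _ in range(n_steps):
    eta = np.sqrt(2.0 * dt) * rng.normal(size=n_traj)  # real noise, <eta eta'> = 2 delta
    x = x + (-1j) * x * dt + eta                       # naive complex Langevin, eq. (14)

# <|x|^2> grows like 2 tau_L instead of settling to a stationary value
print(np.mean(np.abs(x) ** 2))
```

The printed value is of order $2\tau_{L}$, signalling the absence of a stationary distribution for the unregularized process.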
One way to proceed is to explicitly modify the action by introducing a
regulator term, such as $\epsilon x^{2}$. The integral becomes well-defined
and its value is obtained by letting $\epsilon\rightarrow 0$ at the end of the
computation. In a numerical setting this requires explicitly including the
regulator term, carrying out the corresponding simulation for different values
of $\epsilon$, and extrapolating $\epsilon\rightarrow 0$. There are two
drawbacks to this strategy: first, it requires several evaluations of the
simulation, which can be expensive for large systems. Second, the relaxation
time of the simulation grows as $\epsilon$ becomes smaller, and hence in
practice we cannot make $\epsilon$ arbitrarily small.
Let us consider an alternative strategy for solving the integral of eq. 13,
which relies on contour deformations: the so-called Lefschetz thimble method.
We will carry out a change of variables in the integral which moves the
integration path into the complex plane and in turn weakens the oscillatory
nature of the integrand. This method is based on a continuous change of
variables according to the following gradient descent equation
$\frac{d\tilde{x}}{d\tau}=\overline{\frac{dS_{E}[\tilde{x}]}{d\tilde{x}}},$
(15)
which complexifies the degree of freedom, $\tilde{x}=a+ib$. Equation 15
evolves the formerly real-valued $x$ towards the so called Lefschetz thimble
which is the optimal contour deformation where the imaginary part of the
action stays constant.
Following the steps outlined in Woodward:2022pet , we solve the flow of eq. 15
analytically, which gives $\tilde{x}(x,\tau)=x(\cosh(\tau)-i\sinh(\tau))$. For
large values of $\tau$ it leads to
$\tilde{x}(x,\tau)\overset{\tau\gg
1}{\approx}x(1-i)\frac{e^{\tau}}{2}=\frac{x\,e^{\tau}}{\sqrt{2}}e^{-i\frac{\pi}{4}}.$
(16)
The above equation tells us that the optimal thimble in this system lies on
the downward $45^{\circ}$ diagonal in the complex plane,
$z(x)=xe^{-i\frac{\pi}{4}}$. On this contour the integrand of the original
integral eq. 13 reduces to a real Gaussian $e^{-\frac{1}{2}x^{2}}$, for which
no regularization is required.
If we flow for just a very small $\tau=\epsilon$, we obtain on the other hand
$\cosh(\tau)-i\sinh(\tau)\approx 1-i\epsilon$ and
$\displaystyle\int\;dx\,{\rm
exp}[-\frac{1}{2}ix^{2}]=\int\;dx\,\frac{\partial\tilde{x}}{\partial x}\;{\rm
exp}[-\frac{1}{2}i\tilde{x}^{2}]$ (17)
$\displaystyle=(1-i\epsilon)\int\;dx\;{\rm
exp}[-\frac{1}{2}ix^{2}(1-i\epsilon)^{2}]\approx(1-i\epsilon)\int\;dx\;{\rm
exp}[-\frac{1}{2}ix^{2}-\epsilon x^{2}].$
We see that the term $\epsilon$ here takes on the role of a regulator in the
action, but due to its simultaneous presence in the Jacobian, the value of the
integral is not changed. This is different from introducing the regulator in
the action alone.
Hence the obvious benefit of the deformation method is that we can introduce a
regulator to tame oscillations without the need to extrapolate that regulator
in the end. The closer we approach the optimal thimble, the easier the
integral will be to solve numerically.
How can such a coordinate transformation be implemented in complex Langevin?
Intuitively the action in the integral is what influences the drift in complex
Langevin and the measure is related to the noise structure. The above tells us
that the change we introduced will therefore affect the drift quadratically,
while it occurs in the noise linearly. Thinking back to eq. 9, we see that
this is just how a field-independent kernel modifies the complex Langevin
equations.
For the optimal thimble with $z(x)=xe^{-i\frac{\pi}{4}}$ the modification in
the drift therefore becomes $K=e^{-i\frac{\pi}{2}}=\frac{1}{i}$ and for the
noise $H=\sqrt{K}=\sqrt{-i}$. This leads to the following stochastic process
$\displaystyle\frac{dx}{d\tau_{L}}=-x+\sqrt{-i}\eta,$ (18)
which was already identified as optimal in ref. Okamoto:1988ru . This
stochastic process converges to the correct solution of the integral eq. 13.
Interestingly, the imaginary unit has disappeared from the drift term: the
kernel $K$ exactly cancels it there and instead moves it into the noise term.
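For the Gaussian integral of eq. 13 the exact moments are known, e.g. $\langle x^{2}\rangle=1/i=-i$, so correct convergence of eq. 18 is straightforward to verify numerically. A minimal sketch follows; step size, burn-in, and trajectory counts are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n_traj, n_steps, dt, burn_in = 100, 20_000, 1e-2, 2_000

H = np.sqrt(-1j)                # noise coefficient, H^2 = K = 1/i
x = np.zeros(n_traj, dtype=complex)
acc, count = 0.0 + 0.0j, 0

for n in range(n_steps):
    eta = np.sqrt(2.0 * dt) * rng.normal(size=n_traj)
    x = x - x * dt + H * eta    # kernelled process, eq. (18): real drift, rotated noise
    if n >= burn_in:
        acc += np.sum(x * x)
        count += n_traj

print(acc / count)              # approaches the exact value <x^2> = -i
```

Note that the noise coefficient $\sqrt{-i}=e^{-i\pi/4}$ places the samples along the $45^{\circ}$ diagonal, i.e. along the thimble, consistent with the right panel of fig. 1.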
As the last step, let us show explicitly that the choice of kernel above
indeed amounts to a coordinate transform. Following Aarts:2012ft we have
$\displaystyle\frac{dx}{d\tau_{L}}=$ $\displaystyle-HH^{T}\frac{\partial
S_{E}(x)}{\partial x}+H\eta$ (19) $\displaystyle\Rightarrow
H^{-1}\frac{dx}{d\tau_{L}}=-H^{T}\frac{\partial S_{E}(x)}{\partial x}+\eta$
(20)
$\displaystyle\Rightarrow\frac{du}{d\tau_{L}}=-H^{T}(H^{T})^{-1}\frac{\partial
T(u)}{\partial u}+\eta=-\frac{\partial T(u)}{\partial u}+\eta.$ (21)
Here $x=Hu$ and $T(u)=S_{E}(Hu)=S_{E}(x)$. We find that introducing a kernel
$K=HH^{T}$ in the evolution equation for $x$ has the same effect as carrying
out a coordinate transformation to $u=H^{-1}x$.
As an example of the complex Langevin dynamics in the absence (left panel) and
presence (right panel) of the kernel discussed above, we show the
corresponding scatter plots in fig. 1. The kernel has indeed rotated the noise
into the direction of the thimble, along which the system now samples. Note
that while the naive CL dynamics have been implemented using the semi-implicit
Euler-Maruyama scheme to avoid runaways, we are able to carry out the
kernelled dynamics with a fully explicit solver without adaptive step size.
The reason is that on the deformed contour the integral has already been
regularized.
Figure 1: Distribution of a complex Langevin simulation (scatter points) and
the Lefschetz thimble (red line) for the model eq. 13. (left) simulation
according to the naive CL eq. 14 and (right) simulation after introducing the
kernel in eq. 18. The color of the scatter points refers to the number of
measurements recorded at the corresponding position, a lighter color indicates
a larger number. Note that the optimal kernel has moved the sampling onto the
single thimble present in this simple system.
This result shows that in the simple model discussed here we can find a kernel
that both restores correct convergence of the complex Langevin dynamics and at
the same time removes the need for a regulator. Both are related to the fact
that the kernel effectively instituted a coordinate transform that amounts to
a deformation of the integration contour into the complex plane. In the next
section, we will consider a similar kernel for the harmonic oscillator.
We investigate the relation between Lefschetz thimbles and kernel-controlled
complex Langevin in section B.2, where a similar analysis is performed in a
system where a $\frac{\lambda}{4}x^{4}$ term has been added to the action.
Since such a one-degree-of-freedom model has a non-linear drift term, a
constant kernel will not suffice to remove the imaginary unit from the drift
term, and hence the complex Langevin process will not sample directly on the
thimble as it did in the example above.
### 3.2 A kernel for the harmonic oscillator
When constructing a kernel for the harmonic oscillator, we encounter
difficulties related to the stability and convergence of the complex Langevin
process similar to those in the previous section. In order to see how an
optimal kernel can be chosen, we revisit the discussion originally found in
refs. Okamoto:1988ru ; namiki_stochastic_1992 .
The continuum action of this one-dimensional system is given by
$S_{M}=\int dt\left\\{\frac{1}{2}\left(\frac{\partial x(t)}{\partial
t}\right)^{2}-\frac{1}{2}m^{2}x^{2}(t)\right\\}=\int
dt\left\\{\frac{1}{2}x(t)\Big{(}-\partial_{t}^{2}-m^{2}\Big{)}x(t)\right\\}.$
(22)
In quantum mechanics the field $\phi$ is the position $x(t)$, and the
coordinate, previously denoted $x$, is the time $t$. The corresponding
complex Langevin equation reads
$\displaystyle\frac{dx(t,\tau_{L})}{d\tau_{L}}=-i\Big{(}\partial_{t}^{2}+m^{2}\Big{)}x(t,\tau_{L})+\eta(t,\tau_{L}).$
(23)
In the absence of a regularization, this stochastic process is unstable and
does not converge to the correct result. In analogy with the results for the
simple model system in the previous section, we will argue analytically that
correct convergence can be achieved in this system via a kernel with the
property
$-\Big{(}\partial_{t}^{2}+m^{2}\Big{)}K(t-t^{\prime})=i\delta(t-t^{\prime})$.
This kernel renders the drift term trivial, proportional to $x$ itself, and
moves all complex structure into the noise.
Following Okamoto:1988ru ; namiki_stochastic_1992 we solve eq. 23
analytically and obtain for the two-point correlator in Fourier space
$\langle
x(\omega,\tau_{L})x(\omega^{\prime},\tau_{L}^{\prime})\rangle=\delta(\omega+\omega^{\prime})\frac{i}{\omega^{2}-m^{2}}\left(e^{i(\omega^{2}-m^{2})|\tau_{L}-\tau_{L}^{\prime}|}-e^{i(\omega^{2}-m^{2})(\tau_{L}+\tau_{L}^{\prime})}\right).$
(24)
Obviously this expression does not have a well-defined value in the late
Langevin-time limit. Introducing an explicit regulator of the form $i\epsilon
x(t)^{2}$ yields
$S_{M}=\int
dt\,\frac{1}{2}\left\\{\left(\partial_{t}x(t)\right)^{2}-(m^{2}-i\epsilon)x^{2}(t)\right\\}$
(25)
and improves the situation, as now the stochastic process correctly converges
to
$\lim_{\tau_{L}\to\infty}\langle
x(\omega,\tau_{L})x(\omega^{\prime},\tau_{L})\rangle=\delta(\omega+\omega^{\prime})\frac{i}{\omega^{2}-m^{2}+i\epsilon}.$
(26)
A careful analysis of the associated Fokker-Planck equation in ref.
Nakazato:1986zy however reveals that the relaxation time towards the correct
solution scales with $1/\epsilon$. That is, carrying out a CL simulation based
on a small regulator $\epsilon$ will lead to slow convergence. In addition,
one needs to take the limits $\epsilon\rightarrow 0$ and $\Delta\tau_{\rm
L}\rightarrow 0$, which may not commute Huffel:1985ma .
Since the action of the harmonic oscillator in Fourier space decouples into a
collection of non-interacting modes, we may deploy for each mode a strategy
similar to the one we considered in the simple model of the preceding section.
That is, we introduce a kernel which moves the integration onto the single
thimble of each mode:
$\displaystyle\frac{\partial}{\partial\tau_{\rm
L}}x(\omega,\tau_{L})=i\tilde{K}(\omega)\;\frac{\delta S_{M}[x]}{\delta
x(\omega)}+\sqrt{\tilde{K}(\omega)}\xi(\omega,\tau_{L}),$ (27)
$\displaystyle\langle\xi(\omega,\tau_{L})\rangle=0,\quad\langle\xi(\omega,\tau_{L})\xi(\omega^{\prime},\tau_{\rm
L}^{\prime})\rangle=2\delta(\omega+\omega^{\prime})\delta(\tau_{L}-\tau_{L}^{\prime}).$
(28)
This train of thought leads us to choose the following field-independent
kernel, which was explored before in ref. Okamoto:1988ru :
$\tilde{K}(\omega)=\frac{iA(\omega)}{\omega^{2}-m^{2}+i\epsilon},\quad
K(t)=\int\frac{d\omega}{(2\pi)}\tilde{K}(\omega)e^{-i\omega t},\quad$ (29)
where $A(\omega)$ is a real, positive and even function of $\omega$. Thus for
a constant $A(\omega)$, $\tilde{K}(\omega)$ is nothing but the propagator of
the free theory in momentum space.
The corresponding correlation function is found to read
$\lim_{\tau_{\rm L}\to\infty}\langle x(\omega,\tau_{\rm
L})x(\omega^{\prime},\tau_{\rm
L})\rangle=\delta(\omega+\omega^{\prime})\frac{\tilde{K}(\omega)}{A(\omega)}=\delta(\omega+\omega^{\prime})\frac{i}{\omega^{2}-m^{2}+i\epsilon},$
(30)
which is the correct result. The most important difference from simply
introducing a regulator in the action lies in the fact that now the relaxation
time for each mode is proportional to $1/A(\omega)$ and not to $1/\epsilon$,
and no extrapolation in $\epsilon$ needs to be carried out. For completeness
let us note the corresponding coordinate-space complex Langevin process
$\displaystyle\frac{\partial}{\partial\tau_{\rm L}}x(t,\tau_{L})=i\int
dt^{\prime}\;K(t-t^{\prime})\;\frac{\delta S_{M}[x]}{\delta
x(t^{\prime})}+\chi(t,\tau_{L}),$ (31) $\displaystyle\chi(t,\tau_{L})=\int
d\omega e^{i\omega t}\sqrt{\tilde{K}(\omega)}\xi(\omega,\tau_{L}).$ (32)
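The defining property of this kernel, $-(\partial_{t}^{2}+m^{2})K(t-t^{\prime})=i\delta(t-t^{\prime})$ up to the regulator, can be checked numerically in a few lines. Here both sides are diagonalized on a periodic frequency grid; the grid size, period, and value of $\epsilon$ are arbitrary illustrative choices of ours, with $A(\omega)=1$.

```python
import numpy as np

N, T = 256, 10.0                 # number of grid points and time period (arbitrary)
m, eps = 1.0, 1e-4

# angular frequencies of the periodic grid; none sits exactly at omega^2 = m^2
omega = 2.0 * np.pi * np.fft.fftfreq(N, d=T / N)
K_tilde = 1j / (omega**2 - m**2 + 1j * eps)   # eq. (29) with A(omega) = 1

# -(d_t^2 + m^2) is diagonal in Fourier space with eigenvalues omega^2 - m^2, so
# -(d_t^2 + m^2) K(t - t') = i delta(t - t') becomes (omega^2 - m^2) K_tilde = i
lhs = (omega**2 - m**2) * K_tilde
print(np.max(np.abs(lhs - 1j)))  # deviation of order eps
```

The coordinate-space kernel $K(t)$ of eq. 29 is then obtained from `K_tilde` by an inverse Fourier transform, and the kernelled drift in eq. 31 becomes a convolution.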
### 3.3 A kernel for real-time Langevin on the thermal SK contour
The analytic study of the one-d.o.f. model and of the harmonic oscillator has
provided us with insight into how a kernel can be used both to satisfy the
need for regularization of the path integral and to achieve convergence of
the associated complex Langevin equation to the correct solution in practice.
We will now construct the corresponding kernel for the harmonic oscillator at
finite temperature, discretized on the Schwinger-Keldysh contour. Numerical
simulations will confirm the effectiveness of the kernel in the non-
interacting theory.
The Schwinger-Keldysh contour for a quantum system at finite temperature
encompasses three branches. The forward branch runs along the conventional
time axis up to a real time $t_{\rm max}$, and the degrees of freedom
associated with it are labeled $x^{+}(t)$. The backward branch, with
$x^{-}(t)$, returns to the initial time $t_{0}$ in reverse, and the Euclidean
branch, which houses $x^{E}(-i\tau)$, extends along the negative imaginary
time axis. The physical length of the imaginary-time branch dictates the
inverse temperature of the system. A sketch of our contour setup is shown in
the left panel of fig. 2.
In the action of the system, the integration over time is rewritten into an
integration over a common contour parameter $\gamma$. The d.o.f. on the
different branches are then distinguished by the values of the contour
parameter $x(\gamma)$ and we will drop the superscript in the remainder of the
text.
As sketched in the right panel of fig. 2, we will refer to the equal- and
unequal-time two-point correlation functions along the SK contour in the
following, plotted against the contour parameter. The reader can identify the
values along the forward and backward branch as being mirrored, connecting to
the values on the Euclidean branch that show the expected periodicity of a
thermal theory.
Figure 2: (left) Sketch of the Schwinger-Keldysh contour deployed in our study
with forward $x^{+}(t)$ and backward $x^{-}(t)$ branches on the real-time
axis, connected to an imaginary time branch $x^{E}(-i\tau)$. The contour
parameter $\gamma$ is used to address all branches in a unified manner.
(right) Sketch of the visualization of our observables along the contour
parameter $\gamma$ for the example of $mt_{\rm max}=1$. The analytic solution
of the real- and imaginary part of the equal time correlator $\langle
x^{2}(\gamma)\rangle$ and the unequal time correlator $\langle
x(0)x(\gamma)\rangle$ will be plotted in the real-time $\gamma<2mt_{\rm max}$
and subsequently Euclidean domain $\gamma>2mt_{\rm max}$.
When discretizing the action for use in a numerical simulation the direction
of each branch of the SK contour is encoded in a contour spacing
$a_{i}\in\mathbb{C}$. Computing the drift term for an arbitrary contour yields
$\displaystyle i\frac{\partial S_{M}[x]}{\partial
x_{j}}=\frac{i}{\frac{1}{2}\left(|a_{j}|+|a_{j-1}|\right)}\Big{\\{}$
$\displaystyle\frac{x_{j}-x_{j-1}}{a_{j-1}}-\frac{x_{j+1}-x_{j}}{a_{j}}-\frac{1}{2}\left[a_{j-1}+a_{j}\right]\frac{\partial
V(x_{j})}{\partial x_{j}}\Big{\\}}.$ (33)
This expression simplifies if we use a constant magnitude step-size
$|a_{i}|=|a|$, such that the prefactor in the above equation can be reduced to
$\frac{i}{|a|}$. In that case we can go over to a convenient matrix-vector
notation
$i\nabla_{x}S_{M}[{\bm{x}}]=\frac{1}{|a|}\;iM{\bm{x}},$ (34)
where
$M_{jk}=\begin{cases}\frac{1}{a_{j-1}}+\frac{1}{a_{j}}-\frac{1}{2}\left[a_{j-1}+a_{j}\right]m^{2},&j=k\\\
-\frac{1}{a_{j}},&j=k-1\\\ -\frac{1}{a_{j-1}},&j=k+1.\end{cases}$ (35)
Based on the findings in the previous sections, the optimal discrete
free-theory kernel in coordinate space is given by the inverse of the kinetic
operator $M$,
$K=H\;H^{T}=iM^{-1},$ (36)
where $H$ is the factorized kernel used in the noise term. The form of this
kernel relies on the matrix $M$ being invertible and $M^{-1}$ being
factorizable, both of which hold. We obtain $H=\sqrt{iM^{-1}}$ by using the
square root of the eigenvalues. Written in differential form with Wiener
processes $d{\bm{W}}$, the corresponding Langevin equation reads
$d{\bm{x}}=\frac{1}{|a|}\left(\frac{i}{M}iM{\bm{x}}\right)d\tau_{L}+\sqrt{2\frac{i}{M}}d{\bm{W}}=-\frac{1}{|a|}{\bm{x}}d\tau_{L}+\sqrt{2\frac{i}{M}}d{\bm{W}},$
(37)
which leaves us with a complex non-diagonal noise coefficient
$\sqrt{\frac{2i}{M}}$ and a drift term pointing in the direction of
$-{\bm{x}}$.
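The construction of eqs. 35-37 can be sketched for a small contour. The branch lengths below and the periodic closure of the contour (natural for the thermal trace, though not spelled out in the text) are our own illustrative choices; we verify that the kernelled drift is $-{\bm{x}}/|a|$ and that the eigenvalue square root indeed factorizes $K$.

```python
import numpy as np

m, dt = 1.0, 0.1
Nt, Ntau = 10, 10                    # points per real-time branch / Euclidean branch

# contour spacings: forward (+dt), backward (-dt), Euclidean (-i dt); |a_j| = dt everywhere
a = np.concatenate([np.full(Nt, dt), np.full(Nt, -dt), np.full(Ntau, -1j * dt)])
N = a.size

M = np.zeros((N, N), dtype=complex)  # eq. (35), closed periodically around the contour
for j in range(N):
    jm = (j - 1) % N
    M[j, j] = 1.0 / a[jm] + 1.0 / a[j] - 0.5 * (a[jm] + a[j]) * m**2
    M[j, (j + 1) % N] = -1.0 / a[j]
    M[j, jm] = -1.0 / a[jm]

K = 1j * np.linalg.inv(M)            # eq. (36): the free-theory kernel
lam, V = np.linalg.eig(K)
H = V @ np.diag(np.sqrt(lam)) @ np.linalg.inv(V)   # square root via the eigenvalues

# eq. (37): the kernelled drift is trivial, K (i M x) = -x for every x
print(np.max(np.abs(K @ (1j * M) + np.eye(N))))
print(np.max(np.abs(H @ H - K)))
```

Since $M$ (and hence $K$) is complex symmetric, its principal square root $H$ is symmetric as well, so $HH^{T}=HH=K$ as required by eq. 36.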
Let us demonstrate the effect of this kernel by carrying out a simulation for
the following parameters. We discretize the canonical SK contour with
$N_{t}=50$ points on the forward and backward branch each and $N_{\tau}=5$
points on the imaginary branch. Note that we do not introduce any tilt here.
Choosing a mass parameter $m=1$, the imaginary branch extends up to
$m\tau_{\rm max}=1$. As real-time extent, we choose $mt_{\rm max}=10$. The
value chosen here is arbitrary as the kernelled dynamics of the free theory
are stable and converge for any real-time extent. The results of the
simulation without a kernel are given in the top panel of fig. 3 and rely on
the implicit Euler-Maruyama scheme to avoid the occurrence of runaway
solutions. The results with our choice of kernel are shown in the bottom panel
and were obtained using a simple forward-stepping Euler scheme at
$\Delta\tau_{\rm L}=10^{-3}$ without adaptive step size. In each case we
generate $100$ different trajectories, saving configurations at every
$m\Delta_{\tau_{L}}=0.1$ in Langevin time up to a total of $m\tau_{L}=100$.
Each panel in fig. 3 showcases four quantities plotted against the contour
parameter $\gamma$. Their values for $0<m\gamma<10$ are obtained on the
forward branch, those for $10<m\gamma<20$ on the backward branch and the small
piece $20<m\gamma<21$ denotes the Euclidean time results. The real- and
imaginary part of the equal time expectation value $\langle
x^{2}(\gamma)\rangle$ are plotted as green and pink data points respectively.
The real- and imaginary-part of the unequal time correlator $\langle
x(0)x(\gamma)\rangle$ on the other hand are plotted as orange and blue data
points. The analytically known values from solving the Schrödinger equation
are underlaid as black solid lines.
Figure 3: The result of a complex Langevin simulation of the real-time
harmonic oscillator without a kernel (top) and with a kernel (bottom) for the
observables $\langle x^{2}\rangle$ and the correlator $\langle
x(0)x(t)\rangle$. Simulations are carried out with 100 trajectories up to
$m\tau_{L}=100$ in Langevin time, saving at every $m\Delta_{\tau_{L}}=0.1$,
using the implicit Euler-Maruyama scheme with $\theta=1.0$ and adaptive
step-size with tolerance $10^{-3}$ (top) and the explicit Euler-Maruyama
scheme with $\theta=0$ and fixed step-size $\Delta\tau_{\rm L}=10^{-3}$
(bottom). Values of the correlators from the solution of the Schrödinger
equation are given as solid black lines.
The results without a kernel both deviate from the correct result and exhibit
relatively large uncertainties. The reason lies in the slow relaxation towards
the correct result due to the presence of an explicit regulator. Here the
regulator is provided by our use of an implicit numerical scheme
(Euler-Maruyama with implicitness parameter $\theta=1.0$ and an adaptive
step-size with a maximum step size of $10^{-3}$), but it could equally well be
introduced by adding a small term $i\epsilon x^{2}$ to the system action.
Using a stronger regulator, e.g., tilting the contour, would yield a shorter
relaxation time, but any such explicit regulator distorts the results away
from the actual $\epsilon\rightarrow 0$ physical solution. It is interesting
to note that it is the equal-time observable $\langle x^{2}\rangle$ that
performs the worst. We will see later on that this is the hardest observable
to reproduce accurately.
For the bottom plot we use the free theory propagator kernel, eq. 36. This
simulation now aligns excellently with the true solution for all the
observables.
After applying the kernel, the problem is regularized and thus less stiff, and
we can revert to a fixed step-size explicit Euler-Maruyama scheme, here with
$\Delta\tau_{\rm L}=10^{-3}$. The fact that we do not need to
impose an explicit regulator term is important as in this case we only need to
take the limit $\Delta\tau_{\rm L}\to 0$ to obtain a physical result, and do
not need to extrapolate the regulator term to zero ($\epsilon\rightarrow 0$).
This might be important in light of the recent work in ref. Matsumoto:2022ccq ,
which shows that one encounters subtleties in taking the limit of the
regulator $\epsilon\rightarrow 0$.
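The fixed step-size update described above can be condensed into a single Euler-Maruyama step with a kernel. The following is a minimal sketch for a scalar kernel; the function name and the scalar-kernel simplification are ours, not taken from the authors' code, and in the text the kernel is a matrix acting on all contour points.

```python
import numpy as np

def kerneled_em_step(x, K, drift, dtau, noise=None):
    """One explicit Euler-Maruyama update of the kerneled Langevin equation
    dx = K * drift(x) * dtau + sqrt(2 * K * dtau) * dW, for a scalar kernel K.
    With noise=None only the deterministic (drift) part of the step is taken."""
    eta = noise if noise is not None else np.zeros_like(x)
    return x + K * drift(x) * dtau + np.sqrt(2.0 * K * dtau) * eta
```

For complex Langevin, `x` and `K` would be complex-valued and the noise would remain real, so that the square-root factor carries the complex phase of the kernel.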
### 3.4 A kernel for the quantum anharmonic oscillator
Not only did the kernel in the free theory change the convergence behavior of
the complex Langevin simulation, it also removed the need for a regulator in
the action. The obvious next step is to explore the interacting theory, where
the problem of convergence to the wrong solution is more severe. The potential
term in the action is now given by
$V(x)=\frac{1}{2}mx(t)^{2}+\frac{\lambda}{4!}x(t)^{4},$ (38)
where we use $m=1$ and $\lambda=24$. This choice of parameters has been
used in the past as a benchmark for strongly-interacting real-time complex
Langevin in refs. BergesSexty2007 ; Alvestad:2021hsi . As ref. BergesSexty2007
formulated the real-time dynamics on a tilted contour they found correct
convergence up to $t^{\textrm{max}}=0.8$, while ref. Alvestad:2021hsi worked
with an untilted contour and observed onset of incorrect convergence already
above $t^{\textrm{max}}=0.5$. In the following we will remain with an untilted
contour.
We find that, using the free kernel of eq. 36, the convergence to the correct
solution can be extended slightly to around $t^{\textrm{max}}=0.75$. If in
addition we modify the free theory kernel by rescaling the contributions from
the kinetic term with a common prefactor $g$ and shifting the mass term away
from the free theory value $m$,
$M_{jk}(g,m_{g})=\begin{cases}\frac{g}{a_{j-1}}+\frac{g}{a_{j}}-\frac{1}{2}\left[a_{j-1}+a_{j}\right]m_{g}^{2},&j=k\\ -\frac{g}{a_{j}},&j=k-1\\ -\frac{g}{a_{j-1}},&j=k+1,\end{cases}$ (39)
convergence can be pushed up to $t_{\textrm{max}}=1.0$ by using the
heuristically determined parameter values $g=0.8$ and $m_{g}=1.8$. The CL
equation we simulate is given by
$d{\bm{x}}=\frac{1}{|a|}\left[\frac{i}{M(g,m_{g})}i\left(M(1,m){\bm{x}}+\frac{\lambda}{3}x^{2}{\bm{x}}\right)\right]d\tau_{L}+\sqrt{\frac{2i}{M(g,m_{g})}}d{\bm{W}}.$
(40)
We carry out simulations, assigning $N_{t}=10$ points to the forward and
backward branches each and $N_{\tau}=10$ points to the imaginary branch of the
contour. Here we use the implicit Euler-Maruyama scheme with implicitness
parameter $\theta=0.5$. Even though we do not need a regulator in the presence
of the kernel, the system retains some of its stiffness in contrast to the
free theory. The use of an explicit scheme with, e.g., adaptive step-size is
possible; however, we find it more efficient to rely on an implicit scheme, as
it allows the use of much larger Langevin step sizes.
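As a rough illustration of the $\theta$-scheme referred to here, the implicit step can be solved by fixed-point iteration. This is a minimal sketch under simplifying assumptions (real-valued drift, fixed step-size, plain fixed-point iteration rather than a Newton solve), with names of our choosing.

```python
def theta_em_step(x, f, dt, theta=0.5, noise=0.0, n_iter=50):
    """One theta-scheme Euler-Maruyama step,
    x1 = x + [theta*f(x1) + (1-theta)*f(x)]*dt + noise,
    solved for x1 by fixed-point iteration. theta=0 is the explicit scheme,
    theta=1 the fully implicit one used for the regulating runs in the text."""
    x1 = x
    for _ in range(n_iter):
        x1 = x + (theta * f(x1) + (1.0 - theta) * f(x)) * dt + noise
    return x1
```

For a linear drift $f(x)=-x$ and $\theta=1$, this converges to the exact implicit solution $x_{1}=x_{0}/(1+\Delta t)$.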
The results of two simulations with a maximum real-time extent
$t_{\textrm{max}}=1.0$ are shown in fig. 4. One is carried out without a
kernel and using an implicit scheme (top) and the other in the presence of a
kernel based on the parameters $g=0.8$ and $m_{g}=1.8$ (bottom). The graphs
show the real and imaginary part of the equal time $\langle x(t)x(t)\rangle$
(green and pink data points) and unequal time correlator $\langle
x(0)x(t)\rangle$ (orange and blue data points) plotted against the contour
parameter $\gamma$. The ranges $0<m\gamma<1$ and $1<m\gamma<2$ correspond to
the forward and backward branches of the contour, while $2<m\gamma<2.9$
denotes the imaginary-time branch.
The top panel indicates that naive complex Langevin fails to converge to the
correct solution at this real-time extent of $mt_{\textrm{max}}=1$. It is
interesting to point out the failure of CL at two specific points along the
contour: the first is the starting point at $\gamma=0$, which is connected
by periodic boundary conditions to the Euclidean path. Then at the turning
point of the contour at maximum real-time extent, corresponding to
$m\gamma=1$, the real part of the $\langle x^{2}\rangle$ observable lies
significantly away from the true solution. This point seems to be most
affected by the convergence problem of the CLE.
Figure 4: Anharmonic oscillator at $m=1$ and $\lambda=24$ up to
$t_{\textrm{max}}=1$ in real-time without (top) and in the presence (bottom)
of the heuristic free theory kernel with $g=0.8$ and $m_{g}=1.8$ from eq. 40.
Both simulations generate 100 trajectories, each simulated up to
$m\tau_{L}=100$ in Langevin time, saving configurations every
$m\Delta\tau_{L}=0.01$. We deploy the Euler-Maruyama solver with $\theta=1.0$
without kernel (top), and $\theta=0.5$ with kernel (bottom). Values of the
correlators from the solution of the Schrödinger equation are given as solid
black lines.
In the lower panel, the simulation in the presence of the modified free theory
kernel is presented. The outcome of the kernelled complex Langevin evolution
is very close to the correct solution and shows only small statistical
uncertainties. Note however that especially the observable $\langle
x^{2}\rangle$ still shows some deviation from the true result beyond the
statistical error bars, indicating that exact correct convergence has not yet
been achieved. (This behavior may be understood in terms of boundary terms:
the kernel manages to significantly reduce their magnitude, but for $\langle
x^{2}\rangle$, where it differs from the true solution, the boundary terms,
while small, are not exactly zero.)
The above results are promising, as they indicate that in principle the
convergence problem of real-time complex Langevin can be attacked by use of a
kernel. At the same time, explicitly constructed kernels, such as the modified
free theory kernel, are limited in the range of real-time extent in which they
are effective. The question at hand is how to systematically construct kernels
that will restore convergence at even larger real-time extent.
## 4 Learning optimal kernels
In this section, we introduce our novel strategy to systematically construct
kernels to improve the convergence of real-time complex Langevin. Our goal is
to overcome the limitations of explicitly parametrized kernels, such as the
one of eq. 39. While optimal parameter values $g$ and $m_{g}$ were found for
this kernel, they only achieved correct convergence for a limited $mt_{\rm
max}\leq 1$. Most importantly, it is not clear how to systematically modify
that kernel to achieve convergence at larger real-time extents.
Instead we set out to use a generic parametrization of the kernel. We propose
an expansion in a complete set of basis functions of the dynamical
d.o.f. In this study, as a proof of principle, we will restrict ourselves to a
field-independent kernel, which can be understood as the first term in an
expansion in powers of the field. This field-independent kernel for the
quantum anharmonic oscillator on the Schwinger-Keldysh contour will take the
form of a $\tau_{L}$ independent matrix $K$ with $(2N_{t}+N_{\tau})^{2}$
entries, multiplying the $2N_{t}$ d.o.f. on the forward and backward contour
and the $N_{\tau}$ ones on the imaginary time branch. It is the values of
these matrix entries that we set out to tune in order to achieve optimal
convergence.
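The layout of the field-independent kernel described above can be sketched in a few lines; the variable names are ours, and the trivial unit kernel stands in for the tuned matrix.

```python
import numpy as np

def contour_dof(Nt, Ntau):
    """Total number of complexified d.o.f. on the discretized contour:
    Nt points each on the forward and backward branches plus Ntau points
    on the imaginary-time branch."""
    return 2 * Nt + Ntau

Nt, Ntau = 10, 10
n = contour_dof(Nt, Ntau)
K = np.eye(n, dtype=complex)      # field-independent kernel: (2Nt+Ntau)^2 tunable entries
x = np.zeros(n, dtype=complex)    # field values stacked along the contour
kerneled_drift = K @ (-x)         # the kernel simply multiplies the drift vector
```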
Even though simple model systems indicate that a field-dependent kernel is
needed to achieve correct convergence in the case of strong complex drift terms,
we find that an optimal field-independent kernel can already extend the range
of convergence of the anharmonic oscillator out to $mt_{\rm max}=1.5$, three
times larger than the previous record set for CL in ref. BergesSexty2007 .
In order to obtain kernel values that restore correct convergence, we
formulate an optimization problem based on a cost functional, which
incorporates prior knowledge about the system of interest. Taking advantage of
modern programming techniques that allow us to compute the dependence of a
full complex Langevin simulation on the entries of the kernel, we propose to
iteratively learn the optimal kernel. The fact that we incorporate prior
information into the simulation opens a novel path to beat the notorious sign
problem, i.e. for the first time complex Langevin can be amended by system-
specific information in order to restore correct convergence.
### 4.1 The optimization functional
In order to guarantee that a complex Langevin simulation converges to the true
solution, we must fulfill the correctness criteria of ref. Aarts:2011ax . First we
must ensure the absence of boundary terms, and second that the late-time
distribution of the complex Fokker-Planck equation is indeed
$\exp[iS_{M}]$. Constructing a loss function for both criteria, however, is only
feasible for very low dimensional models, as it entails calculating the
eigenvalues and eigenvectors of the complex Fokker-Planck operator, which is
prohibitively expensive already for the anharmonic oscillator discussed here.
Instead we will retain only the first ingredient of the correctness criteria,
the absence of boundary terms, and use other prior information in order to
guide the kernelled complex FP equation to the correct stationary solution.
The boundary terms can be calculated via the expectation value $\langle
L_{c}\mathcal{O}\rangle_{Y}$ where $L_{c}$ is the Langevin operator and
$\mathcal{O}$ refers to any observable (for a detailed discussion see e.g.
Scherzer:2018hid ). In appendix A we demonstrate that the correctness
criterion still holds with a kernel and how to calculate these boundary terms.
Besides the boundary terms, we often possess additional relevant prior
information about the system at hand. We can e.g. compute correlation
functions in Euclidean time using conventional Monte-Carlo methods. In
addition, we know that in thermal equilibrium the correlation functions on the
forward and backward branch are related due to the KMS relation. In order to
exploit this prior information it is vital for the CL equations to be
formulated on the canonical SK contour, whose real-time branches lie parallel
to each other and connect to the Euclidean branch at the origin. In a tilted
contour setup, access to the Euclidean branch is limited and the comparison of
the values on the forward and backward branch is much more involved. In
addition, symmetries provide powerful constraints to the simulation, as e.g.
time-translation invariance in a thermal system renders local observables such
as $\langle x^{n}(\gamma)\rangle$ constant along the full contour.
We quantify the distance of the simulated result from the behavior dictated by
prior knowledge via a loss function $L^{\rm prior}$. The comparison is carried
out on the level of expectation values of observables, where a priori known
values from conventional Euclidean simulations are referred to as
$\langle{\cal O}\rangle_{\rm MC}$ and those from the complex Langevin
simulation in the presence of a kernel by $\langle{\cal O}\rangle_{K}$.
In principle one can distinguish between four categories of prior knowledge:
* •
Euclidean correlators ($L^{\textrm{eucl}}$), which are accessible via
conventional Monte-Carlo simulations:
$L^{\textrm{eucl}}=\sum_{\cal O}\int
d\tau\Big{|}\left\langle\mathcal{O}(\tau)\right\rangle_{K}-\left\langle\mathcal{O}(\tau)\right\rangle_{\rm
MC}\Big{|}^{2}/\sigma^{2}_{\langle{\cal
O}(\tau)\rangle_{K}}\quad\tau\in\textrm{imaginary time}$
* •
Model symmetries ($L^{\textrm{sym}}$), which exploit that the expectation
values of observables ${\cal O}$ must remain invariant under a symmetry
transformation $T_{\xi}$ governed by a continuous (or discrete) parameter
$\xi$:
$L^{\textrm{sym}}=\sum_{{\cal O}}\int d\xi|\langle T_{\xi}{\cal
O}\rangle_{K}-\langle{\cal O}\rangle_{K}|^{2}/\sigma^{2}_{\langle{\cal
O}(\tau)\rangle_{K}}$
* •
Contour symmetries ($L^{{\cal C}}$), which arise predominantly in systems
in thermal equilibrium:
$L^{{\cal C}}=\sum_{\cal O}\int d\gamma\Big{|}\langle\mathcal{O}^{\cal
C^{+}}(\gamma)\rangle_{K}-F[\langle\mathcal{O}^{\cal
C^{-}}(\gamma)\rangle_{K}]\Big{|}^{2}/\sigma^{2}_{\langle{\cal
O}(\tau)\rangle_{K}}\quad{\rm with\,F\,analytically\,known}$
* •
Boundary terms ($L^{\textrm{BT}}$), which can be explicitly computed from the
outcome of the kernelled Langevin simulation:
$L^{\textrm{BT}}=\sum_{\cal O}\Big{|}\langle
L_{c}(K)\mathcal{O}\rangle_{K}\Big{|}^{2}/\sigma^{2}_{\langle{\cal
O}(\tau)\rangle_{K}}$
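Each of the contributions above shares the same structure: a variance-normalized squared deviation summed over observables. The helper below is an illustrative sketch of one such contribution; the function name and flat-array interface are our simplifications, not the authors' implementation.

```python
import numpy as np

def prior_contribution(obs_K, var_K, obs_prior):
    """One additive contribution to L^prior: squared deviations of complex
    Langevin expectation values <O>_K from a-priori known values, each
    normalized by the variance of the CL estimate."""
    obs_K = np.asarray(obs_K, dtype=complex)
    var_K = np.asarray(var_K, dtype=float)
    obs_prior = np.asarray(obs_prior, dtype=complex)
    return float(np.sum(np.abs(obs_K - obs_prior) ** 2 / var_K))
```

The Euclidean, symmetry, contour, and boundary-term pieces would each call such a helper with the appropriate pair of observable sets before being combined as in eq. 41.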
In practice one wishes to combine as many of these different contributions as
possible. To this end they must be added as dimensionless quantities. This is
why each of the terms above is normalized by the variance of the complex
Langevin simulation. In order for the combined functional to provide a
meaningful distinction of the success of convergence (also in the case of e.g.
the free theory as shown in fig. 3) we propose to introduce an overall
normalization for the combined prior functional
$\displaystyle L^{\rm prior}={\cal N}_{\rm
tot}\Big{(}L^{\textrm{eucl}}+L^{\textrm{sym}}+L^{{\cal
C}}+L^{\textrm{BT}}\Big{)}.$ (41)
There is an element of arbitrariness in the choice of overall normalization,
and we find that the best distinction between wrong and correct convergence is
achieved if one uses the relative error of the observable that is most
difficult to reproduce. For the systems studied here, this amounts to the
relative error of the equal-time correlator obtained in the complex Langevin
simulation with respect to the correct known value from Euclidean simulations,
${\cal N}_{\rm tot}=\max_{\gamma}\{\sigma_{\langle
x^{2}\rangle_{K}}(\gamma)/\langle x^{2}\rangle_{\rm MC}(\gamma)\}$.
We retain the subscript $K$ on the expectation values above in order to
emphasize that the loss functional depends implicitly on the choice of kernel
used in the underlying complex Langevin simulation. The number of observables
${\cal O}$ contained in the cost functional is not specified here and depends
on the problem at hand. In practice, we find that including the a priori
known Euclidean one- and two-point functions often already allows us to
reliably distinguish between correct and incorrect convergence.
In the next section, we will discuss both fully general numerical strategies
to locate the minimum of the optimization functional, as well as an
approximate low-cost approach, which we have deployed in the present study.
### 4.2 Optimization strategies
#### 4.2.1 General approach
The task at hand is to find the critical point of a cost functional that is
comprised of a subset of the contributions listed in the previous section,
i.e. of $L^{\rm eucl}$, $L^{\rm sym}$, $L^{\cal C}$, or $L^{\rm BT}$. Generically
each contribution can be written as the expectation value of a known function
$G$, depending on the dynamical degrees of freedom $x$ and the kernel $K$,
i.e. $L^{\rm prior}[K]=\left|\left\langle G[x,K]\right\rangle_{K}\right|^{2}$.
In order to make the dependence of the expectation value on the kernel
explicit we consider the d.o.f. within the simulation to explicitly depend on
$K$ as $x(K)$. This allows us to remove the subscript $K$ from the expectation
value so that $L^{\rm prior}[K]=\left|\left\langle
G[x(K),K]\right\rangle\right|^{2}$.
Let us characterize the kernel via a set of variables $\kappa$. We emphasize
that this does not limit the general nature of the approach, as $\kappa$ may
refer to the prefactors of a general expansion of the kernel in a complete set
of basis functions.
To efficiently locate its critical point we deploy standard numerical
optimization algorithms, which utilize the information of the gradient of the
functional with respect to the parameters of the kernel. The computational
challenge lies in determining the gradient robustly. In the continuum, the
gradient of the loss reads
$\displaystyle\nabla_{\kappa}L^{\rm prior}[K]=$ $\displaystyle 2\frac{\langle
G[x(K),K]\rangle}{\left|\langle
G[x(K),K]\rangle\right|}\left\langle\nabla_{\kappa}G[x(K),K]\right\rangle$
(42) $\displaystyle=$ $\displaystyle 2\frac{\langle
G[x(K),K]\rangle}{\left|\langle
G[x(K),K]\rangle\right|}\left\{\left\langle\nabla_{x}G[x(K),K]\cdot\nabla_{\kappa}x\right\rangle+\left\langle\nabla_{\kappa}G[x,K]\right\rangle\right\}$ (43)
In order to evaluate eq. 43 we need to compute the change in the field $x(K)$,
which depends on the kernel. This requires taking the gradient of the CL
simulation itself. While this is a demanding task, dedicated methods to
evaluate such gradients have been developed, which underpin the recent progress
in the machine learning community. They are known as differentiable programming
techniques (for an in-depth review see e.g. ref. baydin2018automatic ).
As a first option, we considered using direct auto-differentiation on the
full loss function, as we are dealing with the standard setting of estimating
the gradient of a high-dimensional functional whose output is a single
number. (Auto-differentiation is a method to compute derivatives to machine
precision on digital computers based on an efficient use of the chain rule,
exploiting elementary arithmetic operations in the form of dual variables;
see e.g. ref. harrison2021brief . We have used the Julia libraries Zygote.jl
Zygote.jl-2018 and ForwardDiff.jl for computing gradients.) For small systems
with a number of degrees of freedom ${\cal O}(10)$,
forward auto-differentiation is feasible even though it requires multiple runs
of the full CL simulation. As the number of independent d.o.f. grows, backward
auto-differentiation allows us to reduce the number of necessary simulation runs,
trading computational cost for increased memory demands to store intermediate
results of the chain rule it computes internally. We find that already for the
quantum anharmonic oscillator this direct computation of the gradient is too
costly and thus not practical.
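The dual-variable mechanism behind forward-mode auto-differentiation can be illustrated in a few lines. This is a toy stand-in for a ForwardDiff.jl-style implementation, not the actual library; the class and function names are ours.

```python
class Dual:
    """Minimal forward-mode dual number: value + eps * derivative, with eps^2 = 0.
    Arithmetic on Duals propagates exact derivatives through the chain rule."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (u v)' = u v' + u' v
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

def grad(f, x):
    """Derivative of a scalar function f at x via one forward pass."""
    return f(Dual(x, 1.0)).der
```

One forward pass yields the derivative with respect to a single input, which is why the cost of forward mode scales with the number of parameters, while reverse mode trades this for the memory overhead described in the text.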
A more advanced approach, which promises to avoid the cost and memory
limitations of direct auto-differentiation, is given by so-called sensitivity
analysis methods, such as, e.g., adjoint methods for stochastic differential equations. A
detailed discussion of these methods is beyond the scope of this paper and the
interested reader is referred to refs. cao2003adjoint ;
schafer2020differentiable ; rackauckas2020universal for further details.
We find that for the specific case of real-time complex Langevin, these
methods in their standard implementation, as provided e.g. in ref.
rackauckas2020universal , are challenged in estimating the gradient robustly.
We believe that the difficulty here lies in the stiffness of the underlying
stochastic differential equation. One possible way out is to deploy
sensitivity analysis methods specifically developed for chaotic systems, such
as Least Squares Shadowing algorithms, discussed e.g. in refs. wang2014least ;
Ni_2017 . While these methods at this point are still too slow to be deployed
in CL simulations, the rapid development in this field over the past years is
promising.
Our survey of differentiable programming techniques indicates that while
possible in principle, the optimization of the loss functional $L(K)$ is
currently plagued by issues of computational efficiency. We believe that
implementing by hand the adjoint method for the real-time complex Langevin
systems considered here will offer a significant improvement in speed and
robustness compared to the generic implementations on the market. This line of
work goes beyond the scope of this manuscript and will be considered in an
upcoming study.
To make headway in spite of these methods' limitations, in the following we
propose an approach to compute an approximate low-cost gradient, which in
practice allows us to significantly reduce the values of the optimization
functional.
#### 4.2.2 A low-cost update from a heuristic gradient
Our goal is to compute a gradient, which allows us to approximately minimize
the cost functional $L^{\rm prior}[K]$ without the need to take derivatives of
the CL simulation. The approach we propose here relies on using a different
optimization functional, whose form is motivated by the need to avoid boundary
terms. While updating the values of the kernel according to a heuristic
gradient obtained from this alternative functional, we will monitor the values
of the true optimization functional, selecting the kernel which achieves the
lowest value of $L^{\rm prior}[K]$.
We saw that the optimal kernel for the free theory reduces the drift term to a
term in the direction of $-x$. This drift term points towards the origin. In
this spirit we construct a functional that penalizes drift away from the
origin.
The starting point is the following expression, where we define $D=-iK\partial
S_{M}/\partial x$ as the drift term modified by the kernel
$D(x,K)\cdot(-x)=||D(x,K)||\;||x||\cos{\theta}.$ (44)
Here $\theta$ denotes the angle between the drift and the optimal
direction $-x$. As we wish to align the drift with $-x$, our optimization
problem becomes finding a kernel $K$ such that
$\min_{K}\left\{\Big{|}D(x,K)\cdot(-x)-||D(x,K)||\;||x||\Big{|}\right\}.$ (45)
We can write down different loss functionals which encode this minimum
$L_{D}=\left\langle\Big{|}D(x,K)\cdot(-x)-||D(x,K)||\;||x||\Big{|}^{\xi}\right\rangle=\frac{1}{T}\int d\tau_{\rm L}\Big{|}D(x(\tau_{\rm L}),K)\cdot(-x(\tau_{\rm L}))-||D(x(\tau_{\rm L}),K)||\;||x(\tau_{\rm L})||\Big{|}^{\xi}$ (46)
The choice of $\xi$ determines how steep the gradients of the functional are,
and we find that in practice a value in the range $1\leq\xi\leq 2$ leads to
the most efficient minimization when $L_{D}$ is used to construct the
heuristic gradient we describe below.
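For a single configuration, the integrand of eq. 46 can be sketched as follows. We use real arrays for simplicity (the complexified case splits into real and imaginary parts); the function name is illustrative.

```python
import numpy as np

def heuristic_loss(D, x, xi=1.5):
    """Single-configuration sketch of L_D (eq. 46): zero when the kerneled
    drift D is exactly aligned with the inward direction -x, and positive
    otherwise, raised to the power xi."""
    D, x = np.asarray(D, dtype=float), np.asarray(x, dtype=float)
    mis = np.dot(D, -x) - np.linalg.norm(D) * np.linalg.norm(x)
    return float(np.abs(mis) ** xi)
```

A drift pointing exactly towards the origin gives zero loss, while any misalignment is penalized with a strength controlled by $\xi$.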
Note that turning the drift towards the origin differs from the strategy
employed by dynamical stabilization. The scalar counterpart to minimizing the
unitarity norm is driving the values of the complexified $x$ towards the real
axis. In addition, in dynamical stabilization a non-holomorphic term is added
to the action. Here the CL equation is modified only by a kernel, which still
leads to a holomorphic complex Langevin equation that leaves the correctness
criteria intact.
The exact gradient of the functional $L_{D}$ of eq. 46 also contains the
costly derivatives over the whole CL simulation. However we find that in
practice for values $1\leq\xi\leq 2$ in $L_{D}$ these contributions can be
neglected. We believe the reason lies in the fact that $L_{D}$ consists of
the difference between two terms that contain the same powers of $x$. That is,
we carry out the optimization using only the explicit dependence of
$L_{D}$ on the kernel $K$, which is computed using standard auto-
differentiation. This approximate gradient allows us to locate kernel values
which significantly reduce the values of the true optimization functional
$L^{\rm prior}[K]$. The kernels identified in this way in turn achieve
correct convergence on contours with larger real-time extent than previously
possible. (Note that disregarding the costly terms in the gradient of the
true cost functional $L^{\rm prior}[K]$ introduced in section 4.1 did not lead
to a viable minimization of its values.)
The full optimization scheme can be summarized as follows:
1. Initialize the kernel parameters, yielding the initial kernel $K_{1}$.
2. Carry out the CL simulation with $K_{i}$ and save the configurations $\{x_{j}\}_{i}$, where the subscript indicates the iteration (starting with $i=1$).
3. Compute the values of the loss functions $L_{D}$ and $L^{\rm prior}[K_{i}]$.
4. Compute the gradients of the loss function $L_{D}(\{x_{j}\}_{i},K_{i})$ with respect to the kernel parameters using auto-differentiation.
5. Update the kernel parameters using one step of the ADAM optimization scheme.
6. Rerun the CL simulation with the new kernel $K_{i+1}$ and save a new set of configurations $\{x_{j}\}_{i+1}$.
7. Loop over steps 3 - 6 for $N$ steps, or until $L_{D}$ has reached a minimum, and then select the kernel parameters with the smallest $L^{\rm prior}[K_{i}]$.
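The steps above can be condensed into a schematic loop. Here the CL simulation and auto-differentiated gradient (steps 2-4) are replaced by a trivial surrogate loss over a single kernel parameter, so everything outside `adam_step` is a placeholder, not the authors' setup.

```python
import numpy as np

def adam_step(p, g, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update (step 5 of the scheme); `state` carries the moment estimates."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])   # bias-corrected first moment
    v_hat = state["v"] / (1 - b2 ** state["t"])   # bias-corrected second moment
    return p - lr * m_hat / (np.sqrt(v_hat) + eps)

# Surrogate stand-ins for the CL simulation and its heuristic gradient:
surrogate_loss = lambda k: (k - 0.8) ** 2
surrogate_grad = lambda k: 2.0 * (k - 0.8)

kappa, state = 0.0, {"t": 0, "m": 0.0, "v": 0.0}
best = (surrogate_loss(kappa), kappa)
for _ in range(2000):
    kappa = adam_step(kappa, surrogate_grad(kappa), state, lr=0.01)
    best = min(best, (surrogate_loss(kappa), kappa))   # step 7: track the best kernel
```

As in step 7 of the scheme, the best parameters seen along the trajectory are retained rather than the final iterate.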
We will demonstrate the efficiency of the proposed optimization based on the
heuristic gradient in the next section, where we learn optimal kernels for the
quantum harmonic and anharmonic oscillator on the thermal Schwinger-Keldysh
real-time contour.
### 4.3 Learning optimal kernels for the thermal harmonic oscillator
To put the strategy laid out in the previous section to the test, we set out
here to learn a field-independent kernel for the quantum harmonic oscillator on
the canonical Schwinger-Keldysh contour at finite temperature. In section 3.3
we had identified one kernel by hand, which actually minimizes the low-cost
functional of eq. 46. We will compare it to the learned kernel at the end of
this section.
We simulate on the canonical Schwinger-Keldysh contour with real-time extent
$mt_{\rm max}=10$ and an imaginary time branch of length $m\beta=1$. The
contour will be discretized with steps of equal magnitude $|a_{i}|=|a|$ such
that $N_{t}=25$ points are assigned to the forward and backward branch each
and $N_{\tau}=5$ to the imaginary time axis.
The field-independent kernel therefore is a complex $55\times 55$ matrix,
which we parametrize via two real matrices $A$ and $B$ such that $K=e^{A+iB}$.
This choice is arbitrary and is based on the observation that the minimization
procedure is more robust for the exponentiated matrices than when using $A+iB$
directly. The kernel is initialized to unity before the start of the
optimization by setting all elements of $A$ and $B$ to zero. The optimization
itself, as discussed in the previous section, is carried out using the
approximate gradient following from $L_{D}$ with a choice of $\xi=2$.
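The parametrization $K=e^{A+iB}$ can be sketched as follows. A truncated Taylor series stands in for a library matrix exponential (e.g. `scipy.linalg.expm`) to keep the sketch dependency-free; the function name is ours.

```python
import numpy as np

def kernel_from_params(A, B, n_terms=30):
    """Build the kernel K = exp(A + i B) from two real parameter matrices,
    using a truncated Taylor series sum_n M^n / n! for the matrix exponential
    (adequate for the moderate matrix norms of this sketch)."""
    M = np.asarray(A, dtype=float) + 1j * np.asarray(B, dtype=float)
    K = np.eye(len(M), dtype=complex)
    term = np.eye(len(M), dtype=complex)
    for n in range(1, n_terms):
        term = term @ M / n
        K = K + term
    return K
```

Setting all elements of $A$ and $B$ to zero reproduces the unit-kernel initialization described in the text, and the exponential form guarantees that $K$ stays invertible throughout the optimization.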
One needs to choose the actual cost functional $L^{\rm prior}$ based on prior
knowledge through which to monitor the optimization success. We decide to
include the known values of the Euclidean two-point correlator $\langle
x(0)x(-i\tau)\rangle$ and exploit the knowledge about the symmetries of the
system, which require that $\langle x\rangle=\langle x^{3}\rangle=0$, as well
as $\langle x^{2}(\gamma)\rangle=\langle x^{2}(0)\rangle$. This leads us to
the following functional
$\displaystyle L^{\rm prior}={\cal N}_{\rm tot}\Big{(}$ (47)
$\displaystyle\sum_{i\in SK}\Big\{|\langle
x(\gamma_{i})\rangle_{K}|^{2}/\sigma^{2}_{x_{\rm K}}+|\langle
x^{3}(\gamma_{i})\rangle_{K}|^{2}/\sigma^{2}_{x^{3}_{\rm K}}+|\langle
x^{2}(0)\rangle_{\rm MC}-\langle
x^{2}(\gamma_{i})\rangle_{K}|^{2}/\sigma^{2}_{x^{2}_{\rm K}}\Big\}$
$\displaystyle+$ $\displaystyle\sum_{i\in{\rm Eucl.}}|\langle
x(0)x(\tau_{i})\rangle_{\rm MC}-\langle
x(0)x(\tau_{i})\rangle_{K}|^{2}/\sigma^{2}_{xx_{\rm K}}\Big{)}$
where the first sum runs over all points of the discretized Schwinger-Keldysh
contour, while the second sum only contains the correlator on the Euclidean
branch. As discussed before, the overall normalization is based on the
uncertainty of the equal-time $x^{2}$ correlator.
Since we start from a trivial kernel, we must make sure that our simulation
algorithm provides a regularization and remains stable even for stiff
dynamics. Therefore we solve the complex Langevin stochastic differential
equation using the implicit Euler-Maruyama scheme with implicitness parameter
$\theta=1.0$ and adaptive step-size. For every update of the CL configurations
we simulate $30$ different trajectories up to a Langevin time of
$m\tau_{L}=30$, with a thermalization period of $m\tau_{L}=5$ in Langevin time
before we start collecting the configurations every
$m\Delta\tau_{L}=0.05$ in Langevin time. To calculate the expectation
values, we compute sub-averages from the saved configurations in each
trajectory separately. The final mean and variance are then estimated from the
results of the different trajectories.
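The two-stage error estimate described above (sub-averages per trajectory, then statistics over trajectories) can be sketched as follows; the function name is ours.

```python
import numpy as np

def trajectory_estimate(samples):
    """Mean and standard error of an observable from independent trajectories:
    average within each trajectory first, then combine the per-trajectory
    means. `samples` has shape (n_trajectories, n_configs)."""
    samples = np.asarray(samples)
    traj_means = samples.mean(axis=1)                        # sub-average per trajectory
    mean = traj_means.mean()
    err = traj_means.std(ddof=1) / np.sqrt(len(traj_means))  # error over trajectories
    return mean, err
```

Averaging within each trajectory first makes the combined estimates effectively independent, so the naive standard error over trajectories is unbiased despite autocorrelations along Langevin time.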
The iterative optimization of the kernel values, based on the low-cost
functional and its approximate gradient, is performed using the ADAM (Adaptive
Moment Estimation) optimizer with a learning rate of $0.001$. This is an
improved gradient-descent optimizer, which combines gradient-descent momentum
with an adaptive learning rate.
Since we know that the complex Langevin simulation will be the slowest part of
the optimization scheme, we only run the full CL simulation every fifth
optimization step. For this simple model it would not pose a computation-time
problem to update the expectation values in $L_{D}$ after every kernel update,
but for realistic models in higher dimensions this might be too expensive. As
the distributions of the observables should be similar for a small change in
the kernel, we indeed find that not updating the CL configurations at every
update step still allows us to obtain a good estimate of the heuristic
gradient.
Starting with the unit kernel, the functionals $L_{D}=6.58\times 10^{11}$ and
$L^{\rm prior}=107$ show appreciable deviation from zero. After 32 steps of
the ADAM optimizer we manage to find values of $K$ which reduce the value to
$L^{\rm prior}=26.5$, indicating that the a priori known information has been
well recovered.
Figure 5: Harmonic oscillator in the presence of our learned optimal kernel,
based on the heuristic gradient from the loss function in eq. 46. We simulate
on the SK contour with $mt_{\rm max}=10$ in real-time and choose $m=1$.
Simulating up to $m\tau_{L}=100$ in Langevin time, we combine samples from 100
(top) and 400 (bottom) different trajectories. Note that improved statistics
diminishes the residual oscillatory artifacts in $\langle x^{2}\rangle$.
Values of the correlators from the solution of the Schrödinger equation are
given as solid black lines.
The results for the simulation with the optimal learned kernel are plotted in
fig. 5, based on $100$ trajectories (top) and $400$ trajectories (bottom), each
of which progresses up to $m\tau_{L}=100$ in Langevin time. The x-axis refers
to the contour parameter $\gamma$, such that at $m\gamma=10$ we are at the
turning point of the real-time branch of the Schwinger-Keldysh contour and at
$m\gamma=20$ the contour has returned to the origin, before extending along
the imaginary axis to $mt=-i$. We plot the real- and imaginary part of the
unequal time correlator $\langle x(0)x(\gamma)\rangle$ as orange and blue data
points, while the real- and imaginary part of the equal time expectation value
$\langle x^{2}(\gamma)\rangle$ are given in green and pink respectively. The
analytically known values from solving the Schrödinger equation are underlaid
as black solid lines.
How has the learned kernel improved the outcome? When comparing to a
simulation without kernel in the top panel of fig. 3 we see that using the
same amount of numerical resources (i.e. 100 trajectories at $m\tau_{L}=100$)
the learned kernel has reduced the resulting error bars significantly. On the
other hand, in the top panel of fig. 5, residual oscillations in $\langle
x^{2}\rangle$ seem to persist. One may ask whether these indicate incorrect
convergence, which is why we provide in the lower panel the result after
including 400 trajectories at the same Langevin time extent. One can see that
not only are the error bars further reduced, but also the oscillatory artifacts
have diminished. The improvement amounts to another factor of two in terms of
$L^{\rm prior}$ from the value $L^{\rm prior}=26.5$ in the top panel to
$L^{\rm prior}=13.4$ in the lower panel. We emphasize that we did not use the
analytically known solution of the system for the optimization procedure.
Figure 6: (left) An explanatory sketch of our visualization of the complex
kernel used in the simulations. The top major panel denotes the values of the
real part and the lower major panel those of the imaginary part of the kernel.
Inside each panel the values of the kernel are ordered along the contour
parameter, indicating which parts of the kernel couple which range on the
contour. (center) The free theory kernel in eq. 36, constructed explicitly in
the previous section for $mt_{\textrm{max}}=10$. The repeating pattern
indicates an oscillatory behavior in coordinate space arising from the fact
that this kernel is just the propagator of the free theory. (right) In the
optimal learned kernel based on the low-cost update we have subtracted the
unit matrix from the real-part to avoid it dominating the other structures. We
find that the learned kernel exhibits some of the structure of the manually
constructed kernel but in general has a simpler form, which nevertheless
manages to achieve correct convergence of the complex Langevin dynamics.
Let us inspect the learned kernel and compare it to the free theory propagator
kernel of eq. 36. In fig. 6 we visualize the structures of the kernel by
plotting a heat-map of the matrix entries of the complex matrix kernel. The
left sketch shows how the matrix is structured, where the top panel refers to
the real part and the lower panel to the imaginary part. The entries of the
matrices are laid out corresponding to the contour parameter $\gamma$. The
smaller regions inside the two panels indicate how the kernel mixes points
along the time contour. The $++$ corresponds to the mixing of the forward
branch of the contour, while $+-$ mixes the forward and backward branch time
points. There exists also a small strip involving the Euclidean points, mixing
with the real-time points ($E+$ and $E-$), as well as a small corner ($EE$)
mixing within the Euclidean points.
The different regions shown in the sketch can easily be recognized in the two
kernel structure plots. Note that we have subtracted the unit matrix from the
real-part of the optimized kernel to more clearly expose off-diagonal
structures, if present. The manually constructed free theory propagator kernel
(center), as expected from being the inverse free propagator, exhibits an
oscillatory pattern. It leads to a significant coupling between the forward
and backward time points, due to an anti-diagonal structure in the real and
imaginary parts, forming an oscillatory cross pattern. This anti-diagonal
behavior is much less pronounced in the optimized kernel (right). In its real
part it mainly exhibits a diagonal which is not as wide as in the manually
constructed kernel. There is however a small negative structure present, an
off-diagonal band, similar to the black part in the middle panel.
For the imaginary part, the patterns close to the diagonal are similar between
the manually constructed kernel and the optimal learned kernel. Both possess a
diagonal close to zero and a broad sub/super-diagonal that switches sign at
the turning points between the $++$ and $--$ part of the time contour. We also
see that the anti-diagonal structure is similar for a very short part in the
$+-$ and $-+$ quadrants in the imaginary panel. The rest of the $+-$ and $-+$
quadrant seems to contain noise.
While some similarities exist between the explicit kernel and the optimal
learned kernel, it appears that correct convergence requires some non-trivial
structure in the imaginary part of $K$. The learned kernel achieves correct
convergence with much less structure than the manually constructed one.
### 4.4 Learning optimal kernels for the strongly coupled anharmonic
oscillator
After successfully testing the learning strategy for a field-independent
kernel in the free theory in the previous section, we are now ready to attack
the central task of this study: learning an optimal kernel for a strongly
coupled quantum system in order to extend the correct convergence of the
corresponding real-time complex Langevin simulation.
We deploy the same parameter set as before with $m=1$ and $\lambda=24$. In
section 3.4 we showed that for a real-time extent of $mt_{\textrm{max}}=1$ an
explicit kernel based on insight from the free theory can be constructed,
which allows us to restore correct convergence within statistical
uncertainties (see fig. 4).
Here we set out to learn an optimal kernel based only on the combination of
our low-cost functional and prior knowledge of the Euclidean two-point
functions and time-translation invariance of the thermal system. Since we
restrict ourselves to a field-independent kernel we expect that our approach
will be able to improve on the manually constructed kernel but will itself be
limited in the maximum real-time extent up to which correct convergence can be
achieved.
As testing ground we selected three different real-time extents,
$mt_{\textrm{max}}=1$, $mt_{\textrm{max}}=1.5$ and $mt_{\textrm{max}}=2$, all
of which show convergence to the wrong solution when performing naive complex
Langevin evolution.
We discretize the real-time contour with a common magnitude of the lattice
spacing, $|a_{i}|=|a|$; i.e. the number of grid points changes with the
maximum real-time extent. In the case of $mt_{\textrm{max}}=2$, for example, we use
$N_{t}=20$ on the forward and backward part of the real-time contour each, and
$N_{\tau}=10$ for the imaginary part of the contour. Due to the stiffness of
the complex Langevin equations in the interacting case, all CL simulations are
performed with the Euler-Maruyama scheme with $\theta=0.6$ and adaptive step-
size. We simulate $40$ different trajectories up to $m\tau_{L}=40$ in Langevin
time, computing observables at every $m\Delta\tau_{L}=0.02$ step.
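As an illustration of the update scheme, the kerneled Langevin step can be
sketched in a few lines of Python. This is a simplified, explicit fixed-step
Euler-Maruyama version written by us for illustration only; the implicit
$\theta$-weighting and the adaptive step-size control used in the actual
simulations are omitted, and all names and interfaces are our own choices.

```python
import numpy as np

def cl_step(x, drift, K, H, dtau, rng):
    """One explicit Euler-Maruyama step of kerneled complex Langevin.

    x     : complexified configuration z = x^R + i x^I, shape (N,)
    drift : callable returning the unkerneled drift at z, shape (N,)
    K     : complex kernel matrix, shape (N, N)
    H     : noise coefficient with H @ H = K
    dtau  : Langevin time step
    rng   : numpy Generator supplying the real Gaussian noise
    """
    eta = rng.normal(size=x.shape)  # real white noise
    return x + K @ drift(x) * dtau + H @ eta * np.sqrt(2.0 * dtau)
```

With $K=H=\mathbb{1}$ this reduces to the naive complex Langevin update.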
The setup for learning the optimal kernel is very similar to that in the
previous section. The kernel parametrization is given by $K=e^{A+iB}$, where
$A$ and $B$ are real matrices. We search for the critical point of the true
loss function $L^{\rm prior}$ (see eq. 47) via the heuristic gradient obtained
from the loss function $L_{D}$ of eq. 46. We find that minimization proceeds
efficiently, when choosing the parameter $\xi=1$ in $L_{D}$. The optimal
kernel is chosen according to the lowest values observed in $L^{\rm prior}$.
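The matrix-exponential parametrization can be sketched as follows; a
convenient by-product is that a noise coefficient $H$ with $HH=K$ is obtained
simply by halving the exponent, since a matrix commutes with itself. The
helper names are our own; this is an illustrative sketch, not the paper's
optimization code.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

def build_kernel(A, B):
    """Kernel K = exp(A + iB) for real matrices A, B (always invertible)."""
    return expm(A + 1j * B)

def build_noise_coeff(A, B):
    """H with H @ H = K, obtained by halving the exponent of K."""
    return expm(0.5 * (A + 1j * B))
```

The exponential map guarantees an invertible kernel for any real $A$, $B$,
so the kernelled dynamics remains a non-degenerate reparametrization.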
Figure 7: Complex Langevin simulation of the strongly coupled anharmonic
oscillator on the thermal Schwinger-Keldysh contour in the absence (left) of a
kernel and in the presence of the optimal learned field-independent kernel
(right). The top row corresponds to results from a contour with real-time
extent $mt_{\textrm{max}}=1$, while the center row shows results for
$mt_{\textrm{max}}=1.5$ and the bottom row for $mt_{\textrm{max}}=2$. Values
of the correlators from the solution of the Schrödinger equation are given as
solid black lines.
We find that for a trivial unit kernel where complex Langevin fails, the cost
functional $L^{\rm prior}$ based on prior information indicates values of
$L^{\rm prior}_{mt_{\rm max}=1}=942$, $L^{\rm prior}_{mt_{\rm
max}=1.5}=597320$ and $L^{\rm prior}_{mt_{\rm max}=2}=12923$. In the left
column of fig. 7 the rows correspond, from top to bottom, to results of the
naive CL simulation for $mt_{\textrm{max}}=1$, $mt_{\textrm{max}}=1.5$ and
$mt_{\textrm{max}}=2$ respectively. As in previous comparison plots the real-
and imaginary part of the unequal time correlation function $\langle
x(0)x(\gamma)\rangle$ is given by orange and blue data points, while the real-
and imaginary part of the equal time expectation value $\langle
x^{2}(\gamma)\rangle$ is represented by the green and pink symbols
respectively. The analytically known values from solving the Schrödinger
equation are underlaid as black solid lines.
The results of real-time CL in the presence of the optimal learned kernel for
the anharmonic oscillator are shown in the right column of fig. 7. For
$mt_{\textrm{max}}=1$ we manage to lower the value to $L^{\rm prior}_{mt_{\rm
max}=1}=14.3$. At this low value all the plotted correlation functions agree
with the true solution within uncertainties. Note that we manage to restore
correct convergence for the unequal time correlation function on the real-time
axis, even though no prior information about these points was provided in
$L^{\rm prior}$ nor $L_{D}$. In contrast to the use of the modified free
theory kernel, we see here that $\langle x^{2}\rangle$ does not show a
systematic shift on the real-time branches anymore.
We continue to the second row where, via an optimal learned kernel, we manage
to extend the correct convergence of CL to $mt_{\textrm{max}}=1.5$, a region
inaccessible to the modified free theory kernel. The value of the
functional encoding our prior knowledge has reduced to $L^{\rm prior}_{mt_{\rm
max}=1.5}=48.1$. We find that the unequal time correlation function values are
reproduced excellently, while the real- and imaginary part of $\langle
x^{2}\rangle$ show residual deviations from the correct solution around those
points along the SK contour, where the path exhibits sharp turns, i.e. at the
end point $\gamma=t_{\rm max}$ and the point where the real-time and Euclidean
branch meet $\gamma=2t_{\rm max}$.
The results shown in the third row clearly spell out the limitation of the
field-independent kernel we deploy in this study. At $mt_{\textrm{max}}=2$ we
do not manage to reduce the value of the cost functional below $L^{\rm
prior}_{mt_{\rm max}=2}=759$. Correspondingly, in the bottom row of fig. 7 it
is clear that CL fails to converge to the correct solution even in the
presence of the field-independent kernel. Interestingly, the imaginary part of
the unequal-time two-point correlator still agrees very well with the true
solution on the forward branch, while its real part already shows significant
deviations from the correct solution. This deviation of the unequal-time
correlation function also affects the values of the equal-time correlation
function, which is far from constant and thus leads to a penalty in $L^{\rm
prior}$, correctly indicating the failure of correct convergence.
There are two possible reasons behind the failure of convergence at
$mt_{\textrm{max}}=2$. One is that the low-cost gradient obtained from $L_{D}$
is unable to bring the kernel close to those values required for restoring
correct convergence. The other is that the field-independent kernel is not
expressive enough to encode the change in CL dynamics needed to restore
correct convergence. In simple models it is e.g. known from ref. Okano:1991tz
that field-independent kernels may fail to restore correct convergence for
large imaginary drift. We believe that, as a next step, the investigation of
field-dependent kernels is most promising.
Figure 8: A detailed comparison of the unequal time correlation functions
$\langle x(0)x(t)\rangle$ from fig. 7 evaluated in the presence of the optimal
learned field-independent kernel on contours with $mt_{\textrm{max}}=1,1.5$
and $2$ respectively. The different colored circles correspond to the real part
while the squares to the imaginary part of the correlator. Values of the
correlators from the solution of the Schrödinger equation are given as solid
black lines.
The unequal time correlation function is most relevant phenomenologically, as
it encodes the particle content and occupation numbers in the system. We thus
compare in fig. 8 the values of $\langle x(0)x(t)\rangle$ along the forward
real-time extent of the contour for $mt_{\textrm{max}}=1.0,1.5$ and $2.0$ to
the correct solution given as black solid line. Here we can see in more detail
that for a real-time extent of $1$ and $1.5$ CL with the optimal learned
kernel converges to the true solution within uncertainties. At $2$ the real
part of the correlator begins to deviate from the correct solution. Note that
the points where convergence is most difficult to achieve are $t=0$ and
$t=t_{\rm max}$. Similarly, we find that these are also the points where the
equal-time correlator deviates the most from the correct solution, an
important fact, as it allows this deviation to contribute to the penalty in
$L^{\rm prior}$.
Figure 9: (left) Free theory kernel for the SK contour with $mt_{\rm
max}=1.5$. (center) The optimal learned kernel in the interacting theory for
$mt_{\rm max}=1.5$, which achieves correct convergence of CL. The diagonal
entries with values close to unity are subtracted from the kernel. (right) The
kernel obtained as a result of the optimization procedure in the case of
$mt_{\rm max}=2.0$, which does not achieve correct convergence. At the turning
point at $t_{\rm max}$ and when connecting to the Euclidean domain the kernel
for the interacting theory shows nontrivial structure not present in the free
theory.
In fig. 9 we plot a heat map of the values of the kernels with $mt_{\rm
max}=1.5$ (center) and $mt_{\rm max}=2$ (right) compared to the free theory
propagator kernel from eq. 36 (left) for $mt_{\rm max}=1.5$ (for a sketch of
the structure of the heat map, see the left panel of fig. 6). We have
subtracted the unit matrix from the real-part of the two optimized kernels.
They both exhibit a diagonal band in the real part, which is thinner than the
one in the free theory kernel. It is interesting to see that both show non-
trivial structures passing through the $t_{\rm max}$ point and when
connecting to the Euclidean branch. In the imaginary part the structures have
more similarity with the free theory propagator kernel, where a sign change
occurs as one moves away from the diagonal. The difference in the optimal
kernels between $mt_{\rm max}=1.5$ and $mt_{\rm max}=2$ is small overall.
## 5 Summary and Conclusion
In this paper we proposed a novel strategy to recover correct convergence of
real-time complex Langevin simulations by incorporating prior information into
the simulation via a learned kernel. The effectiveness of the strategy was
demonstrated for the strongly coupled anharmonic oscillator on the
Schwinger-Keldysh contour by extending correct convergence in this benchmark system up
to $mt_{\rm max}=1.5$, three times the previously accessible range of $mt_{\rm
max}=0.5$.
After discussing the concept of neutral and non-neutral modifications of
Langevin dynamics by use of real and complex kernels, we demonstrated that an
explicitly constructed complex kernel can be used to improve the convergence
behavior of real-time complex Langevin on the Schwinger-Keldysh contour.
Taking insight from a single d.o.f. model and the harmonic oscillator,
approximately correct convergence in the strongly coupled anharmonic
oscillator was achieved up to $mt_{\rm max}=1$. As no systematic extension to
the explicit construction of that kernel exists, we instead proposed to learn
optimal kernels using prior information.
The ingredients to learning an optimal kernel are prior information and an
efficient prescription for computing gradients. Prior information comes in the
form of a priori known Euclidean correlation functions, known symmetries of the
theory and the Schwinger-Keldysh contour, as well as information on the
boundary terms. Here we included only the first two types of information,
which sufficed to achieve improvements in convergence. We surveyed different
modern differential programming techniques that in principle allow a direct
optimization of the kernel based on the full prior information, but found that
in their standard implementations they are of limited use in practice due to
runtime or memory limitations. Instead we constructed an approximate gradient
based on an alternative optimization functional, inspired by the need to avoid
the presence of boundary terms. This optimization functional possesses a
gradient, which can be approximated with much lower cost than that of the
original optimization functional. The low-cost gradient in practice is
computed using standard auto-differentiation. By minimizing with this gradient
and monitoring success via the full prior information cost functional we
proposed, we were able to locate optimal kernels.
Our strategy was successfully applied first to the harmonic oscillator on the
thermal SK contour. We managed to restore correct convergence with an optimal
learned field-independent kernel that shows a simpler structure compared to
the manually constructed kernel. This result bodes well for future studies,
where we will investigate in detail the structure of the optimal learned
kernel to draw conclusions about the optimal analytic structure for extending
the approach to a field-dependent kernel.
The central result of our study is the restoration of correct convergence in
the strongly correlated anharmonic oscillator on the thermal SK contour up to
a real-time extent of $mt_{\rm max}=1.5$, which is beyond the reach of any
manually constructed kernel proposed so far. We find some remnant deviations
of the equal-time correlation function $\langle x^{2}\rangle$ from the true
solution at the turning points of the SK contour. The phenomenologically
relevant unequal-time correlation function $\langle x(0)x(t)\rangle$ on the
real-time branch on the other hand reproduces the correct solution within
statistical uncertainty.
While our strategy based on a field-independent kernel is successful in a
range three times the previous state-of-the-art, we find that the restricted
choice of kernel limits its success at larger real-time extent.
We conclude that our study provides a proof-of-principle for the restoration
of correct convergence in complex Langevin based on the inclusion of prior
information via kernels. Future work will focus on extending the approach to
field-dependent kernels, carefully reassessing the discretization prescription
of the SK contour at the turning points, and improving the efficiency of the
differential programming techniques necessary to carry out a minimization
directly on the full prior knowledge cost functional.
## Acknowledgements
The team of authors gladly acknowledges support by the Research Council of
Norway under the FRIPRO Young Research Talent grant 286883. The numerical
simulations have been partially carried out on computing resources provided by
UNINETT Sigma2 - the National Infrastructure for High Performance Computing
and Data Storage in Norway under project NN9578K-QCDrtX “Real-time dynamics of
nuclear matter under extreme conditions”.
## Appendix A Correctness criterion in the presence of a kernel
In this section we discuss the correctness criterion in the presence of a
kernel in the CL evolution. As mentioned in section 2.2 there are two parts to
the correctness criterion that need to be fulfilled in order for complex
Langevin to converge to the correct solution. We must avoid boundary terms for
the real-valued distribution $\Phi(x^{R},x^{I},\tau_{\rm L})$ and the complex
Fokker-Planck eq. 12 must have the correct equilibrium distribution. If both
conditions are fulfilled the equal sign of eq. 8 holds.
To check if the equilibrium distribution of $\rho(x,\tau_{L})$ is ${\rm
exp[iS_{M}]}$ we need to either solve the Fokker-Planck equation explicitly,
or inspect the eigenvalue spectrum of the Fokker-Planck equation
Klauder:1985kq . To make inference about correct convergence based on the
eigenspectrum, the eigenvectors of the Fokker-Planck operator must form a
complete set, as otherwise there exist non-orthogonal zero modes competing
with the $e^{iS_{M}}$ stationary distribution. For a non-self-adjoint operator
this is not always the case.
To show the connection between the eigenvalues of the Fokker-Planck equation
and the equilibrium distribution we use a similarity transform to define the
operator $G$ from the Fokker-Planck operator $L$ including the kernel
$\displaystyle G(x)=$ $\displaystyle
UL(x)U^{-1}=e^{-\frac{1}{2}iS_{M}(x)}L(x)e^{\frac{1}{2}iS_{M}(x)}$ (48)
$\displaystyle=$ $\displaystyle\left(\frac{\partial}{\partial
x}+\frac{1}{2}i\frac{\partial S_{M}}{\partial
x}\right)K[x]\left(\frac{\partial}{\partial x}-\frac{1}{2}i\frac{\partial
S_{M}}{\partial x}\right),$
which by definition has the same eigenvalues as $L$. The transformation is
carried out here to follow closely the conventional way of proving correct
convergence for a real action $S$. That is, when $S$ is real, $G$ becomes a
self-adjoint and hence negative semi-definite operator. For complex actions,
$iS_{M}$, this transformation is not necessary for the following arguments. It
is however useful in practice as a pre-conditioner for calculating the
eigenvalues of the Fokker Planck operator. The complex distribution
$\rho(x,\tau_{L})$ is also transformed based on the same transformation, such
that
$\tilde{\rho}(x,\tau_{L})=e^{-i\frac{1}{2}S_{M}}\rho(x,\tau_{L}),\quad\textrm{where}\quad\dot{\tilde{\rho}}(x,\tau_{L})=G(x)\tilde{\rho}(x,\tau_{L})$
(49)
is the Fokker-Planck equation for the transformed operator. Since we are
interested in the stationary distribution, we construct the eigenvalue
equation
$G(x)\psi_{n}(x)=\lambda_{n}\psi_{n}(x).$ (50)
Due to the form of the operator we know that it must have at least one zero
eigenvalue, $\lambda=0$, associated with the eigenvector
$e^{i\frac{1}{2}S_{M}}$.
The formal solution of the Fokker-Planck equation after the similarity
transform of eq. 49 is given by
$\tilde{\rho}(x;\tau_{L})=e^{\tau_{L}G(x)}\tilde{\rho}(x,0)$ (51)
and by expanding $\tilde{\rho}(x;0)$ in the eigenbasis $\psi_{n}$, and using
$\psi_{0}e^{\lambda_{0}t}=e^{\frac{1}{2}iS_{M}}$ we get
$\displaystyle\rho(x;\tau_{L})=$ $\displaystyle
e^{\frac{1}{2}iS_{M}(x)}\tilde{\rho}(x;\tau_{L})$ (52) $\displaystyle=$
$\displaystyle
e^{\frac{1}{2}iS_{M}(x)}\sum_{n=0}^{\infty}a_{n}\psi_{n}(x)e^{\lambda_{n}\tau_{L}}=ce^{iS_{M}(x)}+\sum_{n=1}^{\infty}a_{n}\psi_{n}(x)e^{\lambda_{n}\tau_{L}}$
(53)
such that when $\tau_{L}\rightarrow\infty$ only the first term is left, namely
the equilibrium distribution ${\rm exp[iS_{M}]}$. This is however only true if
$\textrm{Re}\;\lambda_{n}<0$ for all $n\geq 1$, in which case the spectrum of
$G$ provides information on the equilibrium distribution of the Fokker-Planck
equation.
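For a single degree of freedom this spectral statement can be made concrete
numerically. The sketch below (our own illustrative interface) discretizes the
transformed operator $G$ of eq. 48 with central finite differences on a
uniform real grid and returns the eigenvalues with the largest real parts; for
the complexified real-time problem such a direct diagonalization is precisely
what remains computationally out of reach.

```python
import numpy as np

def fp_spectrum(S_prime, K, x, n_eigs=5):
    """Eigenvalues (largest real parts first) of the discretized operator
    G = (d/dx + (i/2) S') K (d/dx - (i/2) S')  of eq. 48,
    for a constant (field-independent) complex kernel K on a grid x.
    """
    n, h = len(x), x[1] - x[0]
    # central finite-difference first derivative with zero boundary values
    D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
    Sp = np.diag(S_prime(x))
    G = (D + 0.5j * Sp) @ (K * (D - 0.5j * Sp))
    w = np.linalg.eigvals(G)
    return w[np.argsort(-w.real)][:n_eigs]
```

Diagonalizing a dense grid operator like this scales badly with the number of
degrees of freedom, which is why the boundary-term check below is the
criterion used in practice.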
The second condition that needs to be satisfied is that the sampling of CL
gives the same distribution as the complex Fokker-Planck equation. To
establish that it does, we follow the correctness criterion of ref.
Aarts:2011ax . Let us show that the criterion also holds in the presence of a
kernel by revisiting some central steps of the original proof. We start with
the Fokker-Planck equation for complex Langevin eq. 11, which operates on a
real distribution $\Phi(x^{R},x^{I};t)$ for the complexified degrees of
freedom $x^{R}$ and $x^{I}$. Let us take a look at the Fokker-Planck equation,
which evolves the distribution of an observable ${\cal O}$
$\displaystyle\partial_{\tau_{L}}{\cal O}$
$\displaystyle(x^{R},x^{I})=\left[(H_{R}\partial_{x^{R}}+H_{I}\partial_{x^{I}})^{2}+{\rm
Re}\left\{iK[x^{R}+ix^{I}]\nabla S_{M}+\frac{\partial
K[x^{R}+ix^{I}]}{\partial x^{R}}\right\}\partial_{x^{R}}\right.$ (54)
$\displaystyle\left.+{\rm Im}\left\{iK[x^{R}+ix^{I}]\nabla
S_{M}+\frac{\partial K[x^{R}+ix^{I}]}{\partial
x^{R}}\right\}\partial_{x^{I}}\right]{\cal O}(x^{R},x^{I})=L_{K}^{T}{\cal
O}(x^{R},x^{I}),$
where we can identify the operator $L_{K}$ to be the bilinear adjoint of the
Fokker-Planck operator $L_{K}^{T}$ Aarts:2011ax . If we assume that ${\cal O}$
is holomorphic, we know that $\partial_{x^{I}}{\cal O}=i\partial_{x^{R}}{\cal
O}\rightarrow i\partial_{z}{\cal O}$, where for the last equality we have used
the following relation between derivatives $\partial_{x^{R}}{\cal
O}(x^{R}+ix^{I})\rightarrow\partial_{z}f(z)$ with $z=x^{R}+ix^{I}$. Replacing
derivatives yields the following Langevin equation for the holomorphic
observable ${\cal O}$ expressed in the complex variable $z$
$\displaystyle\partial_{\tau_{L}}{\cal O}$
$\displaystyle=\left[K[z]\partial_{z}^{2}+iK[z]\nabla
S_{M}\partial_{z}+\frac{\partial K[z]}{\partial
z}\partial_{z}\right]{\cal O}=\left[\partial_{z}+i\nabla
S_{M}\right]K[z]\partial_{z}{\cal O}=\tilde{L}_{K}^{T}{\cal O},$ (55)
where in the last equality we have used that
$K[z]\partial_{z}^{2}+(\partial_{z}K[z])\partial_{z}=\partial_{z}K[z]\partial_{z}$
based on the product rule for derivatives. We have now shown that
$(\tilde{L}_{K}^{T}-L_{K}^{T}){\cal O}=0$ for a Fokker-Planck equation with a
field dependent kernel. In turn, we conclude that the correctness criterion
also holds for a kernelled complex Langevin equation.
For the above derivation to hold, no boundary terms may arise, which are given
by Scherzer:2018hid
$B_{n}=\int
dx^{R}dx^{I}\Phi(x^{R},x^{I})\left(\tilde{L}_{K}^{T}\right)^{n}\mathcal{O}(x^{R}+ix^{I})$
(56)
where $\tilde{L}_{K}^{T}$ is the Langevin operator given by
$\tilde{L}_{K}^{T}=(\partial_{z}+i\nabla S_{M})K[z]\partial_{z}.$ (57)
The formal criterion is then that the observable
$\langle\tilde{L}_{K}^{T}\mathcal{O}\rangle$ should be zero. This expression
for $B$ includes contributions from the full range of values of the d.o.f.
between $-\infty$ and $\infty$. Including all of these will introduce
significant amounts of noise in the expectation value. This can be avoided by
introducing a cut-off $\Omega$ for the values for $x^{R}$ and $x^{I}$ in the
calculation of the observable. The boundary terms of eq. 56 are thus
calculated using
$B_{n}^{\Omega}=\left\langle\left(\tilde{L}_{K}^{T}\right)^{n}\mathcal{O}(x^{R}+ix^{I})\right\rangle_{\Omega}=\left\langle\begin{cases}\left(\tilde{L}_{K}^{T}\right)^{n}\mathcal{O}(x^{R}+ix^{I}),&\text{if
}x^{R}\leq\Omega_{x^{R}}\text{ and }x^{I}\leq\Omega_{x^{I}}\\\
0,&\text{otherwise}\end{cases}\right\rangle$ (58)
where $\Omega_{x^{R}}$ and $\Omega_{x^{I}}$ denote the individual cutoffs for
the real- and imaginary part respectively. In the case of scalar fields (which
in contrast to gauge fields do not feature a compact dimension), we need to
cut off in both the $x^{R}$ and $x^{I}$ directions. In this paper we take the
cut-off region to be a square: for all values outside the square, the
contributions to the expectation value are set to zero.
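In practice the cutoff expectation value of eq. 58 amounts to masking samples.
A minimal sketch for a scalar degree of freedom follows; the helper is our
own, and we interpret the square cut-off symmetrically, via absolute values of
the real and imaginary parts.

```python
import numpy as np

def cutoff_expectation(values, z, omega_r, omega_i):
    """Expectation value with the square cutoff of eq. 58.

    Contributions from samples with |x^R| > omega_r or |x^I| > omega_i
    are set to zero; they still count in the normalization (the mean
    is taken over all samples).
    """
    z = np.asarray(z)
    inside = (np.abs(z.real) <= omega_r) & (np.abs(z.imag) <= omega_i)
    return np.mean(np.where(inside, np.asarray(values), 0.0))
```

Scanning `omega_r`, `omega_i` and looking for a plateau of the boundary-term
observable is then straightforward.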
Since the observable of interest in the simple models is $z^{2}$ (as it is
the most difficult to capture accurately), we find the boundary term
observable of eq. 58 to be
$\displaystyle\tilde{L}_{K}^{T}\;z^{2}=$ $\displaystyle(\nabla_{z}+i\nabla
S_{M})K(z)\nabla_{z}z^{2}=(\nabla_{z}+i\nabla S_{M})K(z)2z$ $\displaystyle=$
$\displaystyle 2((\nabla_{z}K(z))z+K(z)+i\nabla S_{M}K(z)z)$ $\displaystyle=$
$\displaystyle 2K(z)(1+i\nabla S_{M}z)+2(\nabla_{z}K(z))z,$ (59)
which for a field-independent kernel reduces to
$\langle\tilde{L}_{K}^{T}\;z^{2}\rangle_{\Omega}=\langle 2K(1+iz\nabla
S_{M})\rangle_{\Omega}$.
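The short derivation above is easy to verify symbolically. The following
snippet (our own check, using sympy) applies the operator of eq. 57 to $z^{2}$
and confirms the result of eq. 59 for a general field-dependent kernel:

```python
import sympy as sp

z = sp.symbols('z')
K = sp.Function('K')(z)   # field-dependent kernel K(z)
S = sp.Function('S')(z)   # action S_M(z)
Sp = sp.diff(S, z)        # written as nabla S_M in the text

# Apply tilde{L}_K^T = (d/dz + i nabla S_M) K d/dz  (eq. 57) to z^2
inner = K * sp.diff(z**2, z)
applied = sp.expand(sp.diff(inner, z) + sp.I * Sp * inner)

# eq. 59: 2 K (1 + i z nabla S_M) + 2 (dK/dz) z
expected = sp.expand(2*K*(1 + sp.I*z*Sp) + 2*sp.diff(K, z)*z)
```

The difference `applied - expected` simplifies to zero.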
We have discussed both ingredients necessary to establish correct convergence
of our simulation in the presence of a kernel, i.e. the behavior of the
Fokker-Planck spectrum and boundary terms. The boundary terms can be
calculated in practice without problems, while the eigenvalues of the Fokker-
Planck operator of eq. 50 so far remain out of reach for realistic systems,
due to computational cost.
## Appendix B Constant kernels and correct convergence in simple models
In this appendix we investigate concrete examples of our optimization
procedure and the corresponding learned kernels in one-degree of freedom
models, for which in the literature (see e.g. Okamoto:1988ru ; Okano:1991tz )
kernels have been constructed by hand. The motivation behind this appendix is
to understand how the kernels affect the behavior of the complex Langevin
simulation, in particular how they are connected to the idea of minimizing the
drift loss in eq. 46. To this end we connect complex Langevin to the Lefschetz
thimbles and the correctness criterion Aarts:2011ax .
We investigate the one-degree of freedom model with the action
$S=\frac{1}{2}\sigma x^{2}+\frac{\lambda}{4}x^{4},$ (60)
which leads to the following partition function
$Z=\int dxe^{-S},$ (61)
i.e. we use the same convention as in the literature Okamoto:1988ru ;
Okano:1991tz ; Aarts:2013fpa . Note that this is a different convention from
the main text, as $S$ can now have an imaginary part. This model is interesting
as it exhibits similar properties as the interacting real-time model: the
convergence problem appears, breaking both the boundary term condition and the
equilibrium distribution of the Fokker-Planck equation for various parameters.
We will therefore take a closer look at two specific parameter sets. The
first is $\sigma=4i$ and $\lambda=2$, where we can find an optimal kernel; as
the second we choose $\sigma=-1+4i$ with the same $\lambda=2$, where for
correct convergence we have to go beyond a constant, field-independent kernel.
In section 3.1 we looked at a variant of this model corresponding to
$\sigma=i$ and $\lambda=0$ in eq. 60. The optimal field-independent kernel
$K=-i$ transforms the complex Langevin equation such that it samples exactly
on the Lefschetz thimble. In contrast, the models considered here have more
than one critical point, and hence the relation to the Lefschetz thimbles is
not as simple. The critical points of eq. 60 can be found via
$\frac{\partial S(x)}{\partial x}=0,$ (62)
which are located at $x=0,\pm\sqrt{-\sigma/\lambda}$ Aarts:2011ax . We see that
the smaller the real part of the $\sigma$ parameter becomes, the further out
into the complex plane the two critical points away from the origin are
located.
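These critical points are easy to check numerically by finding the roots of
$\partial S/\partial x=\sigma x+\lambda x^{3}$ from eq. 62; for $\sigma=4i$
and $\lambda=2$ the two non-trivial ones sit at $\pm(1-i)$. A small sketch of
ours, using numpy's polynomial root finder (which accepts complex
coefficients):

```python
import numpy as np

# Critical points of S = (sigma/2) x^2 + (lambda/4) x^4 are the roots of
# S'(x) = lambda x^3 + sigma x  (eq. 62)
sigma, lam = 4j, 2.0
critical_points = np.roots([lam, 0.0, sigma, 0.0])
```

Varying `sigma` here shows directly how the two non-trivial roots move out
into the complex plane.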
### B.1 Non-uniqueness of the optimization
In this study we used the optimization functional
$L_{D}=\left\langle\Big{|}D(x)\cdot(-x)-||D(x)||\;||x||\Big{|}^{\xi}\right\rangle$
(63)
with $D=K\delta S/\delta x$, to compute an approximate gradient for the
minimization of the true cost functional $L^{\rm prior}$. $L_{D}$ was
constructed with the idea in mind that in order to remove boundary terms we
wish to penalize drift away from the origin. In this appendix we discuss the
fact that there exist multiple critical points to $L_{D}$, which may or may
not correspond to a kernel that restores correct convergence. In practice we
distinguish between these solutions by testing the success of the
corresponding kernel in restoring correct convergence via the value of $L^{\rm
prior}$.
Let us start with the parameter set $\sigma=4i$ and $\lambda=2$ in eq. 60. For
this choice ref. Okamoto:1988ru showed that a constant kernel can be
constructed that restores correct convergence.
In a one-degree of freedom model, where the constant kernel is nothing but a
complex number, we can optimize by brute force. Using the parametrization
$K=e^{i\theta},\quad H=\sqrt{K}=e^{i\frac{\theta}{2}}$ (64)
we only have to consider a single compact parameter: $\theta\in[0,2\pi)$. A
scan of the $\theta$ values reveals two minima of the $L_{D}$ loss function.
One at $\theta_{1}=\frac{\pi}{3}$ and one at $\theta_{2}=\frac{2\pi}{3}$,
where the first one corresponds to the kernel found manually in ref.
Okamoto:1988ru . When deriving the optimal kernel, the authors also obtained
two solutions, which correspond to these two kernels. They selected the
correct one by requiring the kernel to belong to the first Riemann sheet when
taking a square root. In our case, we too need to select the correct one and
in this simple model can use the correctness criteria directly to do so.
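For a constant kernel $K=e^{i\theta}$ the loss of eq. 63 can be evaluated on a
set of configurations and scanned over $\theta$ directly. A minimal sketch
with our own interface; complex numbers are treated as 2-vectors
$(x^{R},x^{I})$ for the dot product and norms, as in eq. 63.

```python
import numpy as np

def drift_loss(theta, z, dS, xi=1.0):
    """L_D of eq. 63 for a constant kernel K = exp(i*theta).

    z  : complexified configurations sampled along the CL evolution
    dS : callable giving delta S / delta x at z, so that D = K * dS(z)
    The loss vanishes when D is exactly anti-parallel to z, i.e. when
    the kerneled drift of eq. 63 points back towards the origin.
    """
    z = np.asarray(z)
    D = np.exp(1j * theta) * dS(z)
    dot = D.real * (-z.real) + D.imag * (-z.imag)
    return np.mean(np.abs(dot - np.abs(D) * np.abs(z)) ** xi)
```

Evaluating this on a grid of $\theta\in[0,2\pi)$ realizes the brute-force scan
described above.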
To proceed in this direction, let us take a look at the complex Langevin
distribution according to the two kernels found in the optimization process
and compare them to the Lefschetz thimble structure of the model. The thimble
here consists of three different parts, as shown by the red lines in fig. 10,
together with the critical points (green points). Note that the thimbles
always cross through the critical points. The distribution of the complex
Langevin evolution is shown as a point cloud. The three different
distributions shown in each panel correspond to the case of (top left)
$K_{0}=1$, (top right) $K_{1}={\rm exp}[-i\pi/3]$ and (bottom) $K_{2}={\rm
exp}[-i2\pi/3]$. One can clearly see that for the trivial kernel complex
Langevin tries to sample parallel to the real axis. As we saw in section 3 the
angle parametrizing the kernel translates into a preferred sampling direction.
In the top right and bottom of fig. 10, we have plotted the complex Langevin
distribution obtained after introducing one of the two kernels that minimize
$L_{D}$. Again we find that the angle of the noise term decides where CL
samples. We see that the highest density of the CL distribution lies along the
direction in which the thimble passes through the critical point at the
origin. Further out from the origin, the distribution follows closely the
angle of the noise term, which is $H_{1}=\sqrt{e^{-i\pi/3}}=e^{-i\pi/6}$ for
the first kernel (top right) and $H_{2}=\sqrt{e^{-i2\pi/3}}=e^{-i\pi/3}$ for
the second (bottom). That is, sampling with the first kernel leads to samples
slightly closer to the parts of the thimble going out along the real axis,
whereas the other kernel favors sampling more closely along the parts of the
thimble that eventually run off to infinity. We will
give a formal explanation for this behavior in the next paragraphs.
Figure 10: Distribution of the complex Langevin simulation and the Lefschetz
thimble (red line) for the model of eq. 60 with $\sigma=4i$ and $\lambda=2$,
using different kernels; $K_{0}=1$ (top left), $K_{1}={\rm exp}[-i\pi/3]$ (top
right) and $K_{2}={\rm exp}[-i2\pi/3]$ (bottom). The green points denote the
critical points given by the solution to eq. 62. The color in the distribution
heat map corresponds to the number of samples at the corresponding position (a
lighter color refers to a higher value).
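To illustrate how such point clouds arise, a kernelled complex Langevin update for this model can be sketched in a few lines. This is a minimal sketch, not the authors' implementation: the step size, the naive adaptive-step rule, and the trajectory length are assumptions.

```python
import numpy as np

# Quartic model of eq. 60: S(x) = sigma/2 x^2 + lambda/4 x^4,
# here with sigma = 4i, lambda = 2 as in fig. 10.
sigma, lam = 4j, 2.0

def drift(x):
    return -(sigma * x + lam * x**3)   # -dS/dx

def run_cl(theta, eps=1e-3, n_steps=50_000, n_therm=1_000, seed=1):
    """Complex Langevin with constant kernel K = exp(i*theta):
    dx = K * drift * dt + sqrt(K) * dW, with a naive adaptive step
    (an assumption) to tame large drift excursions."""
    rng = np.random.default_rng(seed)
    K = np.exp(1j * theta)
    H = np.sqrt(K)                     # noise coefficient H = sqrt(K)
    x, samples = 0.1 + 0.0j, []
    for n in range(n_steps):
        d = K * drift(x)
        e = eps / max(1.0, abs(d))     # shrink step where drift is large
        x = x + e * d + np.sqrt(2 * e) * H * rng.standard_normal()
        if n >= n_therm:
            samples.append(x)
    return np.asarray(samples)

cloud_trivial = run_cl(0.0)            # K0 = 1: samples parallel to real axis
cloud_kernel = run_cl(-np.pi / 3)      # K1 = exp(-i*pi/3)
```

Scatter-plotting the resulting complex samples reproduces point clouds of the kind shown in fig. 10.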
As shown in appendix A, the correctness criterion consists of two parts: the
first states that no boundary terms may appear, and the second requires that
the eigenvalues of the complex Fokker-Planck operator have negative real
parts. For this simple model we can evaluate both criteria, as illustrated in
fig. 11.
Figure 11: (left) Boundary term according to the $x^{2}$ observable for the
model of eq. 60 with $\sigma=4i$ and $\lambda=2$, evaluated for the three
different kernels $K_{i}$ discussed in the main text. (right) the five
eigenvalues of the Fokker-Planck operator with the largest real part (blue
lines) plotted against the kernel parameter $\theta$. The position of the two
kernels that optimize $L_{D}$ are indicated by red lines.
The left plot contains the boundary terms for the real-part of the observable
$\langle x^{2}\rangle$. Each of the curves corresponds to one of the three
kernels $K_{i}$. They are computed using the boundary term expectation value
of eq. 58. We see that both of the kernels lead to very small values of the
boundary terms for this observable, while the complex Langevin process without
kernel exhibits a clear boundary term. However, at this point we cannot yet
say which of the two kernels, if either, produces the correct solution.
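A cutoff-dependent boundary-term estimate for the observable $x^{2}$ can be sketched as follows. This is a hedged sketch: the precise form of eq. 58 is not reproduced here, and the expression $L_{K}O=K\,(O''-S'O')$ for the kernelled Langevin operator acting on a holomorphic observable is an assumption.

```python
import numpy as np

sigma, lam = 4j, 2.0

def L_O(x, theta):
    """Kernelled Langevin operator applied to O = x^2 (assumed form):
    L_K O = K * (O'' - S'(x) O'), with O' = 2x and O'' = 2."""
    K = np.exp(1j * theta)
    dS = sigma * x + lam * x**3
    return K * (2.0 - 2.0 * x * dS)

def boundary_term(samples, theta, cutoffs):
    """Estimate <Theta(Omega - |x|) L_K O> over CL samples for a list of
    cutoffs Omega; a nonzero plateau at large Omega signals boundary terms."""
    samples = np.asarray(samples)
    return np.array([np.mean(np.where(np.abs(samples) < om,
                                      L_O(samples, theta), 0.0))
                     for om in cutoffs])
```

Applied to the point clouds of fig. 10, the trivial-kernel samples would be expected to produce a plateau at a nonzero value, while the optimized kernels should give estimates compatible with zero.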
In order to see which of them is correct, we need to look at the right plot in
fig. 11, where the five eigenvalues of the Fokker-Planck operator with the
largest real part are plotted (blue lines) against the kernel parameter
$\theta$; the red lines indicate the position of the two kernels that optimize
$L_{D}$. The eigenvalue calculation is carried out using a restarted Arnoldi
solver, which internally uses a Krylov-Schur method. We see that there is a
region of $\theta$ in which all eigenvalues satisfy
$\textrm{Re}(\lambda)\leq 0$, and it includes the kernel
$\theta_{1}=-\frac{\pi}{3}$. It is exactly this kernel that, when incorporated
into the complex Langevin evolution, gives the right solution for the model.
For smaller $\theta$, the eigenvalues eventually cross zero. This is the
region where one finds the second kernel
$\theta_{2}=-\frac{2\pi}{3}$. We can therefore attribute the failure to
restore correct convergence with the second kernel to a violation of the
correctness criterion pertaining to the spectrum of the complex Fokker-Planck
equation.
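Such a spectrum can be obtained numerically by discretizing the complex Fokker-Planck operator on a grid. The sketch below makes several assumptions: the finite-difference discretization, Dirichlet boundaries, and the grid parameters are choices made here, and a dense eigensolver is used for this small toy problem, whereas the restarted Arnoldi solver mentioned in the text scales to much larger operators.

```python
import numpy as np
import scipy.sparse as sp

sigma, lam = 4j, 2.0
N, L = 400, 10.0                      # grid size and box (assumptions)
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]

def fp_matrix(theta):
    """Finite-difference complex Fokker-Planck operator with constant
    kernel, L_K rho = K * d/dx [ d rho/dx + S'(x) rho ],
    with Dirichlet boundaries (a discretization choice)."""
    K = np.exp(1j * theta)
    D = sp.diags([-0.5, 0.5], [-1, 1], shape=(N, N)) / h       # d/dx
    D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / h**2
    Sp = sp.diags(sigma * x + lam * x**3)                      # S'(x)
    return K * (D2 + D @ Sp)

def leading_eigenvalues(theta, k=5):
    """k eigenvalues with the largest real part."""
    vals = np.linalg.eigvals(fp_matrix(theta).toarray())
    return vals[np.argsort(-vals.real)][:k]

ev = leading_eigenvalues(-np.pi / 3)
```

Scanning `theta` and plotting the real parts of `ev` reproduces the kind of eigenvalue curves shown in the right panel of fig. 11.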
The interesting point here is that the boundary terms alone do not distinguish
between the two kernels, as both lead to quickly decaying distributions.
### B.2 Limitation of constant kernels and boundary terms
Let us now turn to the parameter set $\sigma=-1+4i$ and $\lambda=2$, for which
no constant kernel exists that restores correct convergence. It is, however,
possible to construct a field-dependent kernel that solves the problem [63].
We can understand this behavior as follows: the constant kernel that is
optimal in the sense of removing boundary terms does not achieve convergence
of the complex Fokker-Planck equation to the correct equilibrium distribution
$e^{-S}$.
This can be seen again by plotting the CL distribution for some of the local
minima of the $L_{D}$ loss function. For this parameter set there are more
than two local minima, but we have picked out two solutions with the
interesting property that both exhibit no boundary terms and still do not
converge to the true solution. The chosen kernels have the parameters
$\theta_{3}=-\frac{3\pi}{4}$ and $\theta_{4}=\frac{\pi}{2}$.
Figure 12: Distribution of the complex Langevin simulation and the Lefschetz
thimbles (red line) for the model of eq. 60 with $\sigma=-1+4i$ and
$\lambda=2$, using different kernels; $K_{0}=1$ (top left),
$K_{3}=e^{-i\frac{3\pi}{4}}$ (top right) and $K_{4}=e^{i\frac{\pi}{2}}$
(bottom). The green points are the critical points given by the solution to
eq. 62.
The CL distribution together with the thimble is plotted in fig. 12 for the
three different kernels $K_{0}=1$ (top left), $K_{3}=\exp(i\theta_{3})$ (top
right) and $K_{4}=\exp(i\theta_{4})$ (bottom). We see that the thimbles show
three distinct structures, connecting at infinity. To obtain the thimbles we
evolve the gradient flow equation starting from a small offset from the
critical points (which are all saddle points) and then combine the six parts of
the thimbles. The CL distribution without a kernel (top left plot in fig. 12)
again favors sampling parallel to the real-axis, while the two other kernels
sample completely different parts of the thimbles. The distribution for
$K_{3}$ is located along the thimble crossing the origin. The other kernel
($K_{4}$), follows the other two thimbles crossing the critical points away
from the origin. We can explain this behavior with the angle of the noise
coefficient. For $K_{3}$ we have an angle of $-\frac{3\pi}{8}$ against the
real axis and for $K_{4}$ we have an angle of $\frac{\pi}{4}$ against the real
axis.
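The gradient-flow construction described above can be sketched in a few lines. This is a minimal illustration; the step size, the size of the initial offset, and the cutoff at large $|x|$ are assumptions, and the simple Euler integration only approximately traces the thimbles.

```python
import numpy as np

sigma, lam = -1 + 4j, 2.0

def dS(x):
    return sigma * x + lam * x**3      # S'(x) for the model of eq. 60

# Critical points solve S'(x) = 0 (eq. 62): x = 0 and x = +/- sqrt(-sigma/lam)
crit = [0.0 + 0.0j, np.sqrt(-sigma / lam), -np.sqrt(-sigma / lam)]

def flow(x0, dt=1e-3, n_max=20_000, r_max=10.0):
    """Holomorphic gradient flow dx/dt = conj(S'(x)), along which Im(S)
    stays constant and Re(S) grows; started from a small offset off a
    saddle it approximately traces one half of a thimble."""
    path, x = [x0], x0
    for _ in range(n_max):
        x = x + dt * np.conj(dS(x))
        if abs(x) > r_max:             # this part runs off to infinity
            break
        path.append(x)
    return np.asarray(path)

# Two opposite offsets per critical point give the six parts of the thimbles
parts = [flow(c + s * 1e-3) for c in crit for s in (+1.0, -1.0)]
```

Plotting the six `parts` in the complex plane reproduces the red thimble curves of fig. 12.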
Figure 13: (left) Boundary term according to the $x^{2}$ observable for the
model of eq. 60 with $\sigma=-1+4i$ and $\lambda=2$, evaluated for the three
different kernels $K_{i}$ discussed in the main text. (right) the five
eigenvalues of the Fokker-Planck operator with the largest real part (blue
lines) plotted against the kernel parameter $\theta$. The position of the two
kernels that optimize $L_{D}$ are indicated by red lines.
In fig. 13 (left) the boundary terms for this set of model parameters are
calculated for the observable $\textrm{Re}\;\langle x^{2}\rangle$ and plotted
for increasing square-box cutoff. We see that without a kernel, boundary terms
are present, as the blue data points do not go to zero for large cutoff. This
can also be seen directly from the distribution in fig. 12,
which exhibits a large spread and hence the falloff of the distribution is not
fast enough. For the two kernels, $K_{3}$ and $K_{4}$, that correspond to a
local minimum in $L_{D}$, the system does not show any boundary terms. This is
an important point as even though we have avoided boundary terms, the CL
dynamics under the kernels $K_{3}$ and $K_{4}$ still does not converge to the
correct solution. It thus appears that removing the boundary terms is in
general not sufficient to achieve correct convergence. In fact one also
needs to be sure that the complex Fokker-Planck equation converges to the
desired equilibrium distribution.
In fig. 13 (right) we show the five eigenvalues of the Fokker-Planck equation
with the largest real-part plotted against the parameter $\theta$ which
determines the kernel $K=e^{i\theta}$. For the parameters chosen here, we find
that both kernels lie outside of the admissible region where
$\Re\lambda\leq 0$. (An interesting observation was made in [63]: combining
kernels that sample different parts of the thimble into a single
field-dependent kernel seems to work well. The motivation was to find a kernel
that reduces the drift term to $-x$ when either the $x^{2}$ or the $x^{4}$
term in the action dominates. A similar argument for constructing a
field-dependent kernel can now be made via the minima of the $L_{D}$ loss
function, which favor sampling different parts of the thimbles.)
Interestingly, at $\theta=0$ the eigenvalues all lie in the left half of the
complex plane, but there the boundary criterion is not fulfilled. As one
increases the imaginary part of $\sigma$, e.g. at $\sigma=-1+5i$, one finds
that the eigenvalues for the identity kernel $K=1$ already take on positive
real parts.
Including the calculation of eigenvalues in the cost functional would be
possible for simple models such as the one of eq. 60, but for larger, more
realistic systems the dimension of the Fokker-Planck operator scales as
$N^{d}$, where $N$ is the number of points along each of the $d$ dimensions. Even
for the anharmonic oscillator on the SK contour, the calculation of the
Fokker-Planck eigenvalues is too costly in practice. We therefore need a
different way of distinguishing which kernel leads to correct convergence. As
discussed in detail in the main text of this manuscript we thus propose to
collect as much prior information about the system as possible in the cost
functional $L^{\rm prior}$, based on which the success of the optimal kernel
according to $L_{D}$ is judged.
## Competing interests
The authors declare that they have no competing interests.
## Author’s contributions
* •
D. Alvestad: concept development, algorithmic development, code
implementation, data analysis, writing
* •
R. Larsen: supervising code development
* •
A. Rothkopf: project conceptualization, funding acquisition, supervision,
writing
## References
* (1) Busza, W., Rajagopal, K., van der Schee, W.: Heavy Ion Collisions: The Big Picture, and the Big Questions. Annual Review of Nuclear and Particle Science 68(1), 339–376 (2018). doi:10.1146/annurev-nucl-101917-020852. arXiv:1802.04801 [hep-ph, physics:hep-th, physics:nucl-ex, physics:nucl-th]. Accessed 2022-08-30
* (2) Foka, P., Janik, M.A.: An overview of experimental results from ultra-relativistic heavy-ion collisions at the CERN LHC: Bulk properties and dynamical evolution. Rev. Phys. 1, 154–171 (2016). doi:10.1016/j.revip.2016.11.002. 1702.07233
* (3) Chien, C.-C., Peotta, S., Di Ventra, M.: Quantum transport in ultracold atoms. Nature Physics 11(12), 998–1004 (2015). doi:10.1038/nphys3531. Number: 12 Publisher: Nature Publishing Group. Accessed 2022-08-30
* (4) Qin, M., Schäfer, T., Andergassen, S., Corboz, P., Gull, E.: The Hubbard model: A computational perspective. arXiv:2104.00064 [cond-mat] (2021). arXiv: 2104.00064. Accessed 2022-01-16
* (5) Gattringer, C., Langfeld, K.: Approaches to the sign problem in lattice field theory. International Journal of Modern Physics A 31(22), 1643007 (2016). doi:10.1142/S0217751X16430077. Number: 22 arXiv: 1603.09517. Accessed 2021-02-09
* (6) Pan, G., Meng, Z.Y.: Sign Problem in Quantum Monte Carlo Simulation (2022). 2204.08777
* (7) Rothkopf, A.: Bayesian inference of real-time dynamics from lattice QCD. Frontiers in Physics 10 (2022). doi:10.3389/fphy.2022.1028995
* (8) Troyer, M., Wiese, U.-J.: Computational complexity and fundamental limitations to fermionic quantum Monte Carlo simulations. Phys. Rev. Lett. 94, 170201 (2005). doi:10.1103/PhysRevLett.94.170201. cond-mat/0408370
* (9) Berger, C.E., Rammelmüller, L., Loheac, A.C., Ehmann, F., Braun, J., Drut, J.E.: Complex Langevin and other approaches to the sign problem in quantum many-body physics. Physics Reports 892, 1–54 (2021). doi:10.1016/j.physrep.2020.09.002. Accessed 2021-10-31
* (10) Chandrasekharan, S., Wiese, U.-J.: Meron-cluster solution of fermion sign problems. Physical Review Letters 83(16), 3116 (1999). Number: 16
* (11) Mercado, Y.D., Evertz, H.G., Gattringer, C.: QCD Phase Diagram According to the Center Group. Physical review letters 106(22), 222001 (2011). Number: 22
* (12) Kloiber, T., Gattringer, C.: Dual Methods for Lattice Field Theories at Finite Density. PoS LATTICE2013, 206 (2014). doi:10.22323/1.187.0206. 1310.8535
* (13) de Forcrand, P., Philipsen, O.: Constraining the QCD Phase Diagram by Tricritical Lines at Imaginary Chemical Potential. Phys. Rev. Lett. 105(15), 152001 (2010). doi:10.1103/PhysRevLett.105.152001. Number: 15
* (14) Braun, J., Chen, J.-W., Deng, J., Drut, J.E., Friman, B., Ma, C.-T., Tsai, Y.-D.: Imaginary Polarization as a Way to Surmount the Sign Problem in Ab Initio Calculations of Spin-Imbalanced Fermi Gases. Phys. Rev. Lett. 110(13), 130404 (2013). doi:10.1103/PhysRevLett.110.130404. Number: 13
* (15) Braun, J., Drut, J.E., Roscher, D.: Zero-Temperature Equation of State of Mass-Imbalanced Resonant Fermi Gases. Phys. Rev. Lett. 114(5), 050404 (2015). doi:10.1103/PhysRevLett.114.050404. Number: 5
* (16) Guenther, J.N., Bellwied, R., Borsányi, S., Fodor, Z., Katz, S.D., Pásztor, A., Ratti, C., Szabó, K.K.: The QCD equation of state at finite density from analytical continuation. Nuclear Physics A 967, 720–723 (2017). doi:10.1016/j.nuclphysa.2017.05.044
* (17) Wang, F., Landau, D.P.: Efficient, multiple-range random walk algorithm to calculate the density of states. Physical review letters 86(10), 2050 (2001). Number: 10
* (18) Langfeld, K., Lucini, B., Rago, A.: Density of States in Gauge Theories. Physical Review Letters 109(11) (2012). doi:10.1103/physrevlett.109.111601. Number: 11
* (19) Gattringer, C., Törek, P.: Density of states method for the $\mathbb{Z}_{3}$ spin model. Phys. Lett. B 747, 545–550 (2015). doi:10.1016/j.physletb.2015.06.017. 1503.04947
* (20) Orus, R.: Tensor networks for complex quantum systems. Nature Reviews Physics 1(9), 538–550 (2019). doi:10.1038/s42254-019-0086-7. Number: 9 arXiv: 1812.04011. Accessed 2020-01-12
* (21) Hauschild, J., Pollmann, F.: Efficient numerical simulations with Tensor Networks: Tensor Network Python (TeNPy). SciPost Physics Lecture Notes (2018). doi:10.21468/scipostphyslectnotes.5
* (22) Rom, N., Charutz, D., Neuhauser, D.: Shifted-contour auxiliary-field Monte Carlo: circumventing the sign difficulty for electronic-structure calculations. Chemical physics letters 270(3-4), 382–386 (1997). Number: 3-4
* (23) Cristoforetti, M., Di Renzo, F., Scorzato, L., AuroraScience Collaboration: New approach to the sign problem in quantum field theories: High density QCD on a Lefschetz thimble. Physical Review D 86(7), 074506 (2012). Number: 7
* (24) Alexandru, A., Basar, G., Bedaque, P.F., Warrington, N.C.: Complex paths around the sign problem. Rev. Mod. Phys. 94(1), 015006 (2022). doi:10.1103/RevModPhys.94.015006. 2007.05436
* (25) Damgaard, P.H., Hüffel, H.: Stochastic quantization. Physics Reports 152(5), 227–398 (1987). doi:10.1016/0370-1573(87)90144-X. Number: 5. Accessed 2020-01-11
* (26) Namiki, M., Ohba, I., Okano, K., Yamanaka, Y., Kapoor, A.K., Nakazato, H., Tanaka, S.: Stochastic quantization. Lect. Notes Phys. Monogr. 9, 1–217 (1992). doi:10.1007/978-3-540-47217-9
* (27) Seiler, E.: Status of Complex Langevin. EPJ Web Conf. 175, 01019 (2018). doi:10.1051/epjconf/201817501019. 1708.08254
* (28) Alvestad, D., Larsen, R., Rothkopf, A.: Stable solvers for real-time Complex Langevin. JHEP 08, 138 (2021). doi:10.1007/JHEP08(2021)138. 2105.02735
* (29) Aarts, G., James, F.A., Seiler, E., Stamatescu, I.-O.: Adaptive stepsize and instabilities in complex Langevin dynamics. Physics Letters B 687(2-3), 154–159 (2010). doi:10.1016/j.physletb.2010.03.012. Adaptive step-size. 0912.0617
* (30) Aarts, G., James, F.A., Seiler, E., Stamatescu, I.-O.: Complex Langevin: Etiology and Diagnostics of its Main Problem. Eur. Phys. J. C 71, 1756 (2011). doi:10.1140/epjc/s10052-011-1756-5. 1101.3270
* (31) Nagata, K., Nishimura, J., Shimasaki, S.: Argument for justification of the complex Langevin method and the condition for correct convergence. Phys. Rev. D 94(11), 114515 (2016). doi:10.1103/PhysRevD.94.114515. 1606.07627
* (32) Sexty, D., Seiler, E., Stamatescu, I.-O., Hansen, M.W.: Complex Langevin boundary terms in lattice models. PoS LATTICE2021, 194 (2022). doi:10.22323/1.396.0194. 2112.02924
* (33) Seiler, E., Sexty, D., Stamatescu, I.-O.: Gauge cooling in complex Langevin for QCD with heavy quarks. Phys. Lett. B 723, 213–216 (2013). doi:10.1016/j.physletb.2013.04.062. 1211.3709
* (34) Berges, J., Sexty, D.: Real-time gauge theory simulations from stochastic quantization with optimized updating. Nuclear Physics B 799(3), 306–329 (2008). doi:10.1016/j.nuclphysb.2008.01.018. Number: 3. Accessed 2020-01-08
* (35) Attanasio, F., Jäger, B.: Dynamical stabilisation of complex Langevin simulations of QCD. Eur. Phys. J. C 79(1), 16 (2019). doi:10.1140/epjc/s10052-018-6512-7. 1808.04400
* (36) Aarts, G., Attanasio, F., Jäger, B., Sexty, D.: Complex Langevin in Lattice QCD: dynamic stabilisation and the phase diagram. Acta Phys. Polon. Supp. 9, 621 (2016). doi:10.5506/APhysPolBSupp.9.621. 1607.05642
* (37) Aarts, G., Allton, C., Bignell, R., Burns, T.J., García-Mascaraque, S.C., Hands, S., Jäger, B., Kim, S., Ryan, S.M., Skullerud, J.-I.: Open charm mesons at nonzero temperature: results in the hadronic phase from lattice QCD (2022). 2209.14681
* (38) Boguslavski, K., Hotzy, P., Müller, D.I.: A stabilizing kernel for complex Langevin simulations of real-time gauge theories. In: 39th International Symposium on Lattice Field Theory (2022). 2210.08020
* (39) Alexandru, A., Basar, G., Bedaque, P.F., Vartak, S., Warrington, N.C.: Monte Carlo Study of Real Time Dynamics on the Lattice. Phys. Rev. Lett. 117(8), 081602 (2016). doi:10.1103/PhysRevLett.117.081602. 1605.08040
* (40) Alexandru, A., Basar, G., Bedaque, P.F., Ridgway, G.W.: Schwinger-Keldysh formalism on the lattice: A faster algorithm and its application to field theory. Phys. Rev. D 95(11), 114501 (2017). doi:10.1103/PhysRevD.95.114501. 1704.06404
* (41) Berges, J., Borsányi, S., Sexty, D., Stamatescu, I.-O.: Lattice simulations of real-time quantum fields. Physical Review D 75(4) (2007). doi:10.1103/physrevd.75.045007. hep-lat/0609058
* (42) Aarts, G., James, F.A., Pawlowski, J.M., Seiler, E., Sexty, D., Stamatescu, I.-O.: Stability of complex Langevin dynamics in effective models. JHEP 03, 073 (2013). doi:10.1007/JHEP03(2013)073. 1212.5231
* (43) Okamoto, H., Okano, K., Schulke, L., Tanaka, S.: The Role of a Kernel in Complex Langevin Systems. Nucl. Phys. B 324, 684–714 (1989). doi:10.1016/0550-3213(89)90526-9
* (44) Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
* (45) Alvestad, D.: KernelCL: Towards Learning Optimized Kernels for Complex Langevin. doi:10.5281/zenodo.7373498. https://doi.org/10.5281/zenodo.7373498
* (46) Klauder, J.R., Petersen, W.P.: Numerical integration of multiplicative noise stochastic differential equations. SIAM J. Num. Anal. 22, 1153–1166 (1985)
* (47) Giudice, P., Aarts, G., Seiler, E.: Localised distributions in complex Langevin dynamics. PoS LATTICE2013, 200 (2014). doi:10.22323/1.187.0200. 1309.3191
* (48) Abe, Y., Fukushima, K.: Analytic studies of the complex Langevin equation with a Gaussian ansatz and multiple solutions in the unstable region. Phys. Rev. D 94(9), 094506 (2016). doi:10.1103/PhysRevD.94.094506. 1607.05436
* (49) Seiler, E., Wosiek, J.: Positive Representations of a Class of Complex Measures. J. Phys. A 50(49), 495403 (2017). doi:10.1088/1751-8121/aa9310. 1702.06012
* (50) Salcedo, L.L.: Positive representations of complex distributions on groups. J. Phys. A 51(50), 505401 (2018). doi:10.1088/1751-8121/aaea16. 1805.01698
* (51) Woodward, S., Saffin, P.M., Mou, Z.-G., Tranberg, A.: Optimisation of Thimble Simulations and Quantum Dynamics of Multiple Fields in Real Time (2022). 2204.10101
* (52) Nakazato, H., Yamanaka, Y.: Minkowski stochastic quantization. In: 23rd International Conference on High-Energy Physics (1986)
* (53) Huffel, H., Landshoff, P.V.: Stochastic Diagrams and Feynman Diagrams. Nucl. Phys. B 260, 545 (1985). doi:10.1016/0550-3213(85)90050-1
* (54) Matsumoto, N.: Comment on the subtlety of defining a real-time path integral in lattice gauge theories. PTEP 2022(9), 093–03 (2022). doi:10.1093/ptep/ptac106. 2206.00865
* (55) Scherzer, M., Seiler, E., Sexty, D., Stamatescu, I.-O.: Complex Langevin and boundary terms. Phys. Rev. D 99(1), 014512 (2019). doi:10.1103/PhysRevD.99.014512. 1808.05187
* (56) Harrison, D.: A brief introduction to automatic differentiation for machine learning. arXiv preprint arXiv:2110.06209 (2021)
* (57) Innes, M.: Don’t unroll adjoint: Differentiating ssa-form programs. CoRR abs/1810.07951 (2018). 1810.07951
* (58) Cao, Y., Li, S., Petzold, L., Serban, R.: Adjoint sensitivity analysis for differential-algebraic equations: The adjoint dae system and its numerical solution. SIAM journal on scientific computing 24(3), 1076–1089 (2003)
* (59) Schäfer, F., Kloc, M., Bruder, C., Lörch, N.: A differentiable programming method for quantum control. Machine Learning: Science and Technology 1(3), 035009 (2020)
* (60) Rackauckas, C., Ma, Y., Martensen, J., Warner, C., Zubov, K., Supekar, R., Skinner, D., Ramadhan, A.: Universal differential equations for scientific machine learning. arXiv preprint arXiv:2001.04385 (2020)
* (61) Wang, Q., Hu, R., Blonigan, P.: Least squares shadowing sensitivity analysis of chaotic limit cycle oscillations. Journal of Computational Physics 267, 210–224 (2014)
* (62) Ni, A., Wang, Q.: Sensitivity analysis on chaotic dynamical systems by non-intrusive least squares shadowing (NILSS). Journal of Computational Physics 347, 56–77 (2017). doi:10.1016/j.jcp.2017.06.033
* (63) Okano, K., Schulke, L., Zheng, B.: Kernel controlled complex Langevin simulation: Field dependent kernel. Phys. Lett. B 258, 421–426 (1991). doi:10.1016/0370-2693(91)91111-8
* (64) Aarts, G.: Lefschetz thimbles and stochastic quantization: Complex actions in the complex plane. Phys. Rev. D 88(9), 094501 (2013). doi:10.1103/PhysRevD.88.094501. 1308.4811
††thanks: These authors contributed equally to this work.
††thanks: These authors both closely supervised the work.
# High-fidelity generation of four-photon GHZ states on-chip
Mathias Pont Centre for Nanosciences and Nanotechnologies, CNRS, Université
Paris-Saclay, UMR 9001, 10 Boulevard Thomas Gobert, 91120, Palaiseau, France
Giacomo Corrielli Istituto di Fotonica e Nanotecnologie - Consiglio Nazionale
delle Ricerche (IFN-CNR), p.za Leonardo da Vinci 32, 20133 Milano, Italy
Andreas Fyrillas Quandela SAS, 7 Rue Léonard de Vinci, 91300 Massy, France
Iris Agresti Dipartimento di Fisica, Sapienza Università di Roma, P.le Aldo
Moro 5, 00185, Rome, Italy University of Vienna, Faculty of Physics,
Boltzmanngasse 5, 1090 Vienna, Austria Gonzalo Carvacho Dipartimento di
Fisica, Sapienza Università di Roma, P.le Aldo Moro 5, 00185, Rome, Italy
Nicolas Maring Quandela SAS, 7 Rue Léonard de Vinci, 91300 Massy, France
Pierre-Emmanuel Emeriau Quandela SAS, 7 Rue Léonard de Vinci, 91300 Massy,
France Francesco Ceccarelli Istituto di Fotonica e Nanotecnologie -
Consiglio Nazionale delle Ricerche (IFN-CNR), p.za Leonardo da Vinci 32, 20133
Milano, Italy Ricardo Albiero Istituto di Fotonica e Nanotecnologie -
Consiglio Nazionale delle Ricerche (IFN-CNR), p.za Leonardo da Vinci 32, 20133
Milano, Italy Paulo Henrique Dias Ferreira Physics Department, Federal
University of São Carlos, São Carlos 13565-905, SP, Brazil Istituto di
Fotonica e Nanotecnologie - Consiglio Nazionale delle Ricerche (IFN-CNR), p.za
Leonardo da Vinci 32, 20133 Milano, Italy Niccolo Somaschi Quandela SAS, 7
Rue Léonard de Vinci, 91300 Massy, France Jean Senellart Quandela SAS, 7 Rue
Léonard de Vinci, 91300 Massy, France Isabelle Sagnes Centre for
Nanosciences and Nanotechnologies, CNRS, Université Paris-Saclay, UMR 9001, 10
Boulevard Thomas Gobert, 91120, Palaiseau, France Martina Morassi Centre for
Nanosciences and Nanotechnologies, CNRS, Université Paris-Saclay, UMR 9001, 10
Boulevard Thomas Gobert, 91120, Palaiseau, France Aristide Lemaître Centre
for Nanosciences and Nanotechnologies, CNRS, Université Paris-Saclay, UMR
9001, 10 Boulevard Thomas Gobert, 91120, Palaiseau, France Pascale Senellart
Centre for Nanosciences and Nanotechnologies, CNRS, Université Paris-Saclay,
UMR 9001, 10 Boulevard Thomas Gobert, 91120, Palaiseau, France Fabio
Sciarrino Dipartimento di Fisica, Sapienza Università di Roma, P.le Aldo Moro
5, 00185, Rome, Italy Marco Liscidini Dipartimento di Fisica, Università di
Pavia, Via Bassi 6, 27100, Pavia, Italy Nadia Belabas Centre for
Nanosciences and Nanotechnologies, CNRS, Université Paris-Saclay, UMR 9001, 10
Boulevard Thomas Gobert, 91120, Palaiseau, France Roberto Osellame Istituto
di Fotonica e Nanotecnologie - Consiglio Nazionale delle Ricerche (IFN-CNR),
p.za Leonardo da Vinci 32, 20133 Milano, Italy
###### Abstract
Mutually entangled multi-photon states are at the heart of all-optical quantum
technologies. While impressive progress has been reported in the generation
of such quantum light states using free-space apparatuses, high-fidelity,
high-rate on-chip entanglement generation is crucial for future scalability. In
this work, we use a bright quantum-dot based single-photon source to
demonstrate the high-fidelity generation of 4-photon Greenberger-Horne-Zeilinger
(GHZ) states with a low-loss reconfigurable glass photonic circuit. We
reconstruct the density matrix of the generated states using full quantum-
state tomography reaching an experimental fidelity to the target
$\ket{\text{GHZ}_{4}}$ of $\mathcal{F}_{\text{GHZ}_{4}}=(86.0\pm 0.4)\,\%$,
and a purity of $\mathcal{P}_{\text{GHZ}_{4}}=(76.3\pm 0.6)\,\%$. The
entanglement of the generated states is certified with a semi device-
independent approach through the violation of a Bell-like inequality by more
than 39 standard deviations. Finally, we carry out a four-partite quantum
secret sharing protocol on-chip where a regulator shares with three
interlocutors a sifted key with up to 1978 bits, achieving a qubit-error rate
of 10.87%. These results establish that the quantum-dot technology combined
with glass photonic circuitry for entanglement generation on chip offers a
viable path for intermediate scale quantum computation and communication.
Entangled multi-partite states have a pivotal role in quantum technologies
based on multiple platforms, ranging from trapped-ions [1] to superconducting
qubits [2]. In photonics, over the past two decades, major advances in the
generation of multi-photon entanglement have been achieved by exploiting
spontaneous parametric down-conversion (SPDC) and free-space apparatuses [3,
4, 5, 6, 7]. Very recently, one dimensional linear cluster states were
generated on demand through atom-photon entanglement [8] or spin-photon
entanglement [9, 10] at high rate [8, 10] and high indistinguishability [9,
10]. While the complexity of the generated states in some of these works [6,
7, 8] is still unmatched, bulk equipment and free-space propagation pose
limitations on scalability and real-world applications.
Consequently, in the last few years, there has been a significant focus on the
generation of multi-photon states in integrated circuits, with noteworthy
results in the demonstration of reconfigurable graph states in silicon-based
devices via on-chip spontaneous four-wave mixing (SFWM) [11, 12, 13, 14]. In
these works, the generation of the input state is probabilistic, for it is
obtained via post-selection on squeezed light generated by SPDC or SFWM. An
intrinsic issue of this approach is the emission of unwanted photon pairs,
whose generation probability is proportional to the average number of
generated photons. One thus has to reach a trade-off between a large
generation rate and a high coincidence-to-accidental ratio [15].
An alternative approach to the generation of multi-photon states harnesses
optically engineered quantum-dot (QD) emitters that operate as deterministic
bright sources of indistinguishable single-photons in wavelength ranges well
suited for high-efficiency single-photon detectors [16, 17, 18, 19]. Recently,
the potential of such high-performance single-photon sources (SPS) for the
generation of multi-photon states has been highlighted with bulk optics [20].
QDs are also compatible with integrated photonic chips [21, 22] and, in
particular, with glass optical circuits fabricated by femtosecond laser
micromachining (FLM) [23, 24]. These devices offer an efficient interfacing
with optical fibers, low losses at the QD emission frequencies, and the
possibility of integrating thermal phase shifters to achieve full circuit
programmability. Thanks to these characteristics, the combined use of QD-based
SPS and laser-written photonic processors has proven to be an effective
platform for quantum information processing [25, 26].
In this work, we demonstrate for the first time the on-chip generation and
characterization of a 4-photon Greenberger-Horne-Zeilinger (GHZ) state [27]
using a bright SPS in combination with a reconfigurable laser-written photonic
chip. We achieve a complete characterization of the generated state through
the reconstruction of its density matrix via quantum state tomography. By
using a semi device-independent approach, we further test non-classical
correlations, certify entanglement, non-biseparability, and study the
robustness of the generated states to noise. Finally, as a proof-of-principle
that our platform is application-ready, we show that it can be used to
implement a 4-partite quantum secret sharing protocol [28]. Our approach
combines the practical assets of bright QD-based SPS, efficient single-photon
detectors, and low-loss, scalable, integrated optical circuits fabricated
using FLM.
## I Results
### I.1 Path-encoded 4-GHZ generator
Among graph states, GHZ states are striking examples of maximally entangled
states that are considered a pivotal resource for photonic quantum computing,
since they can be used as building blocks for the construction of high-
dimension cluster states [29]. They are also of interest for quantum
communication and cryptography protocols [28, 30].
In this work, we target 4-partite GHZ states of the form:
$\ket{\text{GHZ}_{4}}=\frac{\ket{0101}+\ket{1010}}{\sqrt{2}}$ (1)
encoded in the path degree of freedom (dual-rail). In Fig.1 the conceptual
scheme of our path-encoded 4-partite GHZ generator chip is depicted. It is
composed of a first layer of balanced beam splitters (50/50 directional
couplers) followed by waveguide permutations (3D waveguide crossings). The
4-photon input states are created using a high-performance QD based single-
photon source (Quandela e-Delight-LA) [16] and a time-to-spatial demultiplexer
(Quandela DMX6) (see Methods), which initialise the input states to
$\ket{0000}$. With this scheme, the generation of the GHZ states is
conditioned on the presence of one and only one photon per qubit. Finally, the
chip allows for the characterisation of the generated states by means of four
reconfigurable Mach-Zehnder interferometers (MZI), each one implementing
single-qubit Pauli projective measurements ($\sigma_{x}$, $\sigma_{y}$,
$\sigma_{z}$) in the path degree of freedom. The overall system efficiency
enables us to detect useful 4-fold coincidence events at the rate of 0.5 Hz
with a pump rate of 79 MHz. This is on par with the recent record for
generating entangled states in integrated photonics [14]. Because of the short
lifetime of our photons (145 ps), a pump rate of 500 MHz is achievable, which
would yield a generation rate $>$3 times higher than [14]. Further details
about the chip functioning, its manufacturing and the experimental setup are
provided in the Supplementary Information (S-I).
Figure 1: Integrated path-encoded 4-GHZ generator. Conceptual layout. For each
qubit i, the upper and lower waveguides encode the computational basis
{$\ket{0}_{i}$, $\ket{1}_{i}$}. The preparation of the state in Eq. (1) along
the orange dashed line (dots and crosses encoding the $\ket{0101}$ and
$\ket{1010}$ states respectively) is conditioned ("&" and red arrow) on the
detection of one and only one photon (red dot) per qubit. The projective
measurements of the tomography stage are performed by four thermally
reconfigurable MZIs. One of the phase shifters before MZI1 is also used for
controlling the phase $\theta$ (Supplementary S-I).
Our photonic chip is reconfigurable; it can thus generate a whole class of GHZ
states of the form
$\ket{\text{GHZ}_{4}}^{(\theta)}=\left(\ket{0101}+e^{i\theta}\ket{1010}\right)/\sqrt{2}$,
parametrized by an internal phase $\theta$ corresponding to the algebraic sum
of the optical phases acquired by each photon in the different paths of the
preparation stage after the beam splitters (Supplementary S-I.1). The phase
$\theta$ can be controlled with a single phase-shifter localized on one of
these paths, as depicted in Fig. 1. To prepare the targeted GHZ states, we use
a single-qubit Pauli projector $M_{0}^{(i)}$ for each qubit $i\in\{1,\dots,4\}$ (see
Methods for the definition of each $M_{0}^{(i)}$), and compute the expectation value
$\langle\hat{\Theta}\rangle=\langle
M_{0}^{(1)}M_{0}^{(2)}M_{0}^{(3)}M_{0}^{(4)}\rangle$. For our class of GHZ
states, $\langle\hat{\Theta}\rangle$ can be used as an internal phase witness
as $\langle\hat{\Theta}\rangle=(\sqrt{2}/2)\cos{\theta}$, and it reaches its
maximum value for the target state $\ket{\text{GHZ}_{4}}$ at
$\theta=0~{}[2\pi]$. Fig. 2.a shows the measured
$\langle\hat{\Theta}_{exp}\rangle$ as a function of the driving power of one
of the outer thermal phase shifters of MZI1, and it demonstrates that we have
full control over the value of $\theta$ in the state preparation. The recorded
4-photon coincidence probability distribution corresponding to our target
state is reported in Fig. 2.b. We measure the maximal value of
$\langle\hat{\Theta}_{exp}\rangle=0.56\pm 0.01$, which is limited by some
experimental imperfections discussed in Sec. I.2.
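The $\cos\theta$ dependence of the internal phase witness follows directly from the projector definitions given in the Methods section. This NumPy sketch (an idealized, lossless calculation, not the authors' analysis code) reproduces $\langle\hat{\Theta}\rangle=(\sqrt{2}/2)\cos\theta$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(*ops):
    """Tensor product of single-qubit operators, qubit 1 first."""
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

def ghz4(theta):
    """(|0101> + e^{i theta} |1010>) / sqrt(2)."""
    psi = np.zeros(16, dtype=complex)
    psi[0b0101] = 1.0
    psi[0b1010] = np.exp(1j * theta)
    return psi / np.sqrt(2)

# Projectors M0^(i) from the Methods section, qubits 1..4.
M0 = [(sx + sz) / np.sqrt(2), -sx, sx, -sx]
Theta = kron_all(*M0)   # Theta_hat = M0^(1) M0^(2) M0^(3) M0^(4)

for theta in np.linspace(0, 2 * np.pi, 9):
    psi = ghz4(theta)
    val = np.vdot(psi, Theta @ psi).real
    assert np.isclose(val, (np.sqrt(2) / 2) * np.cos(theta))
```

At $\theta=0$ the witness saturates its maximum of $\sqrt{2}/2\approx 0.707$, to be compared with the measured $0.56\pm 0.01$.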
Figure 2: Preparation of 4-GHZ states in the path-encoded basis and
reconstruction of its density matrix. (a) A single phase shifter (see Fig.
1.a) is scanned over a $\sim$ 50 mW range of electrical driving power P. For
each value of $\theta$ and each 4-qubit state we acquire the 4-photon
coincidence rates for 900 s and compute the phase witness
$\langle\hat{\Theta}_{exp}\rangle$ (blue circles) which is fitted with a
cosine function with free amplitude (red line). The error bars are computed
assuming a shot noise limited error on the detected 4-fold coincidences. Using
the fit parameters, the internal phase of the GHZ state is set to
$\theta=0~{}[2\pi]$ when P=52.81 mW. (b) The experimental (theoretical)
4-photon coincidence probability distribution measured in this configuration
are reported in blue bars (dotted black bars). Along the horizontal arrow, the
qubit states are ordered as {0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111,
1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111}. (c-d) The real (top) and
imaginary (bottom) parts of the reconstructed density matrix
$\rho=\ket{\text{GHZ}_{4}}\bra{\text{GHZ}_{4}}$ for the target (c, grey) are
compared to the reconstruction from the experimental 4-photon tomography data
using maximum likelihood estimation $\rho_{exp}$ (d, green). The noise ($\pm 10^{-9}$)
in (c) arises from the numerical method (maximum likelihood estimation) and is
orders of magnitude smaller than the noise arising from the imperfections of
the experimental setup ($\pm 0.03$) in $\rho_{exp}$.
There are many tools available to detect entanglement and to estimate the
fidelity of a multipartite system with minimal resources. Entanglement
witnesses [31, 32], Bell-like inequalities [33], or the so-called GHZ paradox
[27, 34] all require only a few Pauli projective measurements to characterize
the GHZ states. Here we choose the stabilizer witness for GHZ states
$\mathcal{W}_{\text{GHZ}_{4}}$ (Methods) first introduced in [35], for which a
measured negative expectation value signals the presence of entanglement. This
witness requires only two projective measurements to detect entanglement, and
to compute a lower bound of the fidelity of the generated state to our target
$\ket{\text{GHZ}_{4}}$, where
$\mathcal{F}_{\text{GHZ}_{4}}\geq(1-\langle\mathcal{W}_{\text{GHZ}_{4}}\rangle)/2$.
We found $\langle\mathcal{W}_{\text{exp}}\rangle=-0.65\pm 0.03$, which
certifies that the generated state is entangled. The lower bound for the
fidelity of the generated state to the target is
$\mathcal{F}_{\text{GHZ}_{4}}\geq 0.82\pm 0.01$.
### I.2 On-chip quantum-state tomography
The generated states can be fully characterised through the reconstruction of
the density matrix via maximum likelihood estimation [36] from a full quantum-
state tomography. Most previous on-chip entanglement generation protocols
based on SPDC or SFWM sources use partial analysis of the state, such as an
entanglement witness or the stabilizer group, to determine the state fidelity
with respect to the target and to detect entanglement. Here the high single-
photon rate of the QD source and the low insertion losses of the chip allow us
to obtain the density matrix of the 4-qubit state.
Table 1: Numerical simulation of the fidelity and the purity of a 4-qubit GHZ
state computed from the reconstruction of the density matrix with maximum
likelihood estimation [37]. The impact of each dominant experimental source of
noise (second row) is assessed using independently measured parameters
(Supplementary S-II). The main sources of noise (bold) are the partial
distinguishability of the input 4-photon states and the unbalanced efficiency
of the single-photon detectors. Error bars are obtained from Monte Carlo
simulations assuming a Poissonian distribution of the measured counts when
measuring the sources of noise.
To fully reconstruct the $16\times 16$ density matrix, $3^{4}=81$ projective
measurements, corresponding to all possible combinations of
$(\sigma_{x},\sigma_{y},\sigma_{z})$ among the four qubits, are necessary. The
density matrix is determined by using a maximum likelihood estimation to
restrict the numerical approximation to physical states. The result is shown
in Fig. 2.d. From the experimental density matrix $\rho_{\text{exp}}$ we
calculated the fidelity
$\mathcal{F}_{\text{GHZ}_{4}}=\bra{\text{GHZ}_{4}}\rho_{exp}\ket{\text{GHZ}_{4}}=0.860\pm
0.004$ and the state purity
$\mathcal{P}_{\text{GHZ}_{4}}=\mathrm{Tr}\{\rho_{exp}^{2}\}=0.763\pm 0.006$. Our
four-photon results establish a new state-of-the-art in terms of fidelity and
purity for integrated implementations of GHZ states. The previous record
values, demonstrated in Ref. [14], were a fidelity of $0.80\pm 0.02$ and a
purity of $0.72\pm 0.04$, achieved for a two-photon GHZ state rather than a
multipartite state as in our case.
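Both figures of merit are straightforward to evaluate from any density matrix. A minimal sketch (using a hypothetical white-noise-mixed state in place of the measured $\rho_{exp}$, for illustration only):

```python
import numpy as np

# Ideal target |GHZ4> = (|0101> + |1010>)/sqrt(2), qubit 1 as MSB.
psi = np.zeros(16)
psi[0b0101] = psi[0b1010] = 1 / np.sqrt(2)
rho_ideal = np.outer(psi, psi)

# Toy noisy state: ideal GHZ mixed with white noise.
# This is NOT the authors' reconstructed rho_exp; p = 0.9 is arbitrary.
p = 0.9
rho = p * rho_ideal + (1 - p) * np.eye(16) / 16

fidelity = psi @ rho @ psi             # F = <GHZ4| rho |GHZ4>
purity = np.trace(rho @ rho).real      # P = Tr(rho^2)

# Analytic values for this mixture: F = p + (1-p)/16, P = p^2 + (1-p^2+ (p-1)^2 p... )
assert np.isclose(fidelity, p + (1 - p) / 16)
assert np.isclose(purity, p**2 + (2 * p * (1 - p) + (1 - p)**2) / 16)
```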
In what follows we investigate all the sources of noise in our system to
analyse quantitatively what is limiting our values of fidelity and purity. In
order to explore the effect of each experimental imperfection, we use a
phenomenological model (Supplementary S-II) based on the measured
characteristics of the experimental setup to perform numerical simulations of
the experiment [37]. The model accounts for i) the imperfections of the
single-photon source (Supplementary S-II.1), namely the imperfect single-
photon purity and indistinguishability of the input state made of four
simultaneous photons, ii) the imperfections of the preparation of the GHZ
states (Supplementary S-II.2), namely the imperfect directional couplers and
initialisation of the internal phase $\theta$ in the preparation stage (see
Fig. 1), and iii) the imperfections of the projective measurements,
implemented by the MZIs, and detectors (Supplementary S-II.3) experimentally
dominated by unbalanced detection efficiencies, modeled by imperfect
projective measurements. Each imperfection is studied independently to uncover
the main source of noise in the system. The results of the numerical
simulations and the corresponding values of fidelity and purity are shown in
Tab. 1.
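The qualitative effect of partial distinguishability can be illustrated with a toy dephasing model, in which the GHZ coherence is scaled by an effective pairwise photon overlap $V$. This is a deliberate simplification for illustration only, not the authors' full phenomenological model of Ref. [37]:

```python
import numpy as np

def ghz_rho(V):
    """Toy model: GHZ density matrix whose coherence is scaled by the
    effective pairwise photon indistinguishability V (illustrative only)."""
    a, b = 0b0101, 0b1010
    rho = np.zeros((16, 16))
    rho[a, a] = rho[b, b] = 0.5
    rho[a, b] = rho[b, a] = 0.5 * V
    return rho

psi = np.zeros(16)
psi[0b0101] = psi[0b1010] = 1 / np.sqrt(2)

for V in (1.0, 0.88, 0.5):
    rho = ghz_rho(V)
    F = psi @ rho @ psi                 # fidelity to the target
    P = np.trace(rho @ rho)             # purity
    assert np.isclose(F, (1 + V) / 2)   # F = (1+V)/2 in this model
    assert np.isclose(P, (1 + V**2) / 2)
```

For $V=1$ the ideal values $F=P=1$ are recovered, while any $V<1$ degrades both, consistent with distinguishability being a dominant noise source.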
The imperfections of the single photon source, namely the multiphoton terms
and the partial distinguishability of the 4-photon input state, limit the
achievable values of fidelity and purity. The multiphoton component
$g^{(2)}(0)$ of the single-photon stream, measured independently in a Hanbury
Brown and Twiss setup, is $g^{(2)}(0)=0.005\pm 0.001$. The
indistinguishability of two subsequent photons (12.3 ns time delay) measured
with a Hong-Ou-Mandel (HOM) interferometer [38] is $M_{s}=0.962\pm 0.02$. The
indistinguishability of the 4-photon input state is limited by long-term
fluctuations of the emitter environment (electrical and magnetic noise) when
using the time-to-spatial demultiplexing scheme that synchronizes photons up
to 500 ns apart (see setup in S-I.2). The indistinguishability of long-delay
(500 ns time delay) photons is measured through the demultiplexer and the chip
used as multiple HOM interferometers (Supplementary S-I.2.1). We measure a
minimal 2-photon interference visibility $V_{\text{min}}=0.88\pm 0.01$. The
indistinguishability of the photons is degraded by the imperfect temporal
overlap and imperfect polarisation control of the photons at the output of the
demultiplexer. It is also affected by fabrication imperfections of the optical
circuit. We use all the accessible pairwise indistinguishabilities (see
Supplementary S-I.2.1) as inputs for the model. All the imperfections from the
chip input to the detectors are thus taken into account twice, which explains
why we underestimate the fidelity and purity when all the sources of noise are
combined.
### I.3 Causal inequality for GHZ state certification
Figure 3: Semi-device-independent certification of the $\text{GHZ}_{4}$ state. (a) Causal
structure. In the DAG shown here, there are three different kinds of nodes:
$\Lambda$ is a hidden variable (blue box), (M0, M1) are measurement settings
(yellow boxes) and each party is associated with a variable (i) outputting
measurement outcomes. (b) Violation of the Bell-like inequality
characterizing 4-partite GHZ states. The violation of the Bell-like inequality
certifies the presence of non-classical correlations. We study the dependence
of the entanglement of the GHZ state on the dominant source of noise, i.e. the
minimal pairwise 2-photon indistinguishability. A half-wave plate is used to
rotate the polarisation of photon C and make it distinguishable from A, B and
D. The measured 2-photon mean wavepacket overlap (HOM) is shown in the insets
for 2 datapoints. The experimental data (black squares) and simulation (red
circles) demonstrate a good numerical match. Error bars for the violation of
the inequality, referring to the 95% confidence interval, are obtained from
error propagation assuming a shot noise limited error on the total number of
4-fold coincidences. Error bars for the minimal HOM correspond to the standard
deviation of the measured 2-photon interference visibility distribution
measured between each combination of binary measurements. The violation of the
Bell-like inequality reaches a maximal value of $\mathcal{I}^{2}=7.49\pm 0.04$
when all photons have the highest pairwise indistinguishability, which
certifies non-classical correlations and non-bi-separability within 39 standard
deviations.
We further certify the presence of non-classical correlations within the
generated state, by adopting an approach requiring fewer measurements than the
full quantum state tomography and minimal assumptions on the experimental
apparatus, i.e. in a semi-device-independent fashion.
We use Eq. (2), a special case of the generic Bell inequalities for self-testing
graph states found in [39]. Under the assumptions detailed and
justified in Supplementary S-III, a violation of Eq. (2) guarantees the
presence of non-classical correlations among the parties. As for a bipartite
Bell measurement, the orthogonal measurement bases
$(M_{0}^{(i)},M_{1}^{(i)})$ for each of the four qubits $i\in\{1,\dots,4\}$
have been set to obtain the highest violation of Eq. (2) for the target state
$\ket{\text{GHZ}_{4}}$, i.e. $6\sqrt{2}\approx 8.49$. Each of these single-qubit Pauli
projectors is defined in the Methods section. A maximal quantum violation
self-tests that the generated state has the form of the target state [40, 41].
$\begin{split}\mathcal{I}^{2}=\sum_{i=2}^{4}\langle
M_{0}^{(1)}M_{1}^{(i)}\rangle-\sum_{i=2}^{4}\langle
M_{1}^{(1)}M_{1}^{(i)}\rangle+3\langle
M_{0}^{(1)}M_{0}^{(2)}M_{0}^{(3)}M_{0}^{(4)}\rangle+3\langle
M_{1}^{(1)}M_{0}^{(2)}M_{0}^{(3)}M_{0}^{(4)}\rangle\leq 6\end{split}$ (2)
We compute the expectation values of the left hand side of Eq. (2) (see
Methods). Satisfying Eq. (2) would mean the measured probabilities could be
compatible with a local hidden variable model, according to the directed
acyclic graph in Fig. 3.a [42]. On the contrary, a violation of Eq. (2)
certifies the presence of non-classical correlations among the parties. In our
case, the largest experimental estimate of $\mathcal{I}^{2}$ is $7.49\pm
0.04>6$, which violates the classical bound described in Eq. (2) by 39
standard deviations.
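Using the measurement settings listed in the Methods section, the ideal quantum value of Eq. (2) can be verified numerically. The sketch below (an idealized calculation assuming a perfect state and perfect measurements) recovers the maximal violation $6\sqrt{2}$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def op(which):
    """Tensor product over the 4 qubits; `which` maps a 0-based qubit
    index to a 2x2 operator, all other qubits get the identity."""
    out = np.array([[1.0 + 0j]])
    for q in range(4):
        out = np.kron(out, which.get(q, id2))
    return out

# Measurement settings from the Methods section (index 0 = qubit 1).
M0 = [(sx + sz) / np.sqrt(2), -sx, sx, -sx]
M1 = [(sx - sz) / np.sqrt(2), -sz, sz, -sz]

psi = np.zeros(16, dtype=complex)
psi[0b0101] = psi[0b1010] = 1 / np.sqrt(2)
ev = lambda O: np.vdot(psi, O @ psi).real

# Left-hand side of Eq. (2) for the ideal target state.
I2 = (sum(ev(op({0: M0[0], i: M1[i]})) for i in (1, 2, 3))
      - sum(ev(op({0: M1[0], i: M1[i]})) for i in (1, 2, 3))
      + 3 * ev(op({0: M0[0], 1: M0[1], 2: M0[2], 3: M0[3]}))
      + 3 * ev(op({0: M1[0], 1: M0[1], 2: M0[2], 3: M0[3]})))

assert np.isclose(I2, 6 * np.sqrt(2))   # maximal quantum value, > classical bound of 6
```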
Furthermore, we address the robustness of our inequality violation with
respect to the experimental noise. Since we have identified partial
distinguishability as the main source of noise in our system, we vary in a
controlled way the indistinguishability among the parties, i.e. make one of
the four photons distinguishable in the polarisation degree of freedom using a
half-wave plate, to gauge how robust the entanglement is with respect to this
issue. In Fig. 3.b, we report the measured values for $\mathcal{I}^{2}$ while
increasing the photon distinguishability, which we calibrated with independent
measurements (Supplementary S-I.2.1). Our setup can tolerate a substantial
amount of distinguishability before inequality $\mathcal{I}^{2}$ is not
violated anymore. In Fig. 3.b we observe a good agreement between experimental
data and simulations for different levels of noise, which reveals that our
model can faithfully describe the generated state for a wide range of input
parameters.
### I.4 Quantum secret sharing
We now examine the suitability of our approach to implement "quantum secret
sharing" (QSS), a protocol presented in 1999 by Hillery et al. [28]. QSS
considers the practical case of a regulator who wants to share a secret string
of random bits with three interlocutors, in such a way that they can access
the secret message only if they all cooperate. In this protocol, the
regulator prepares a string of 4-qubit states of the form of Eq. (1),
keeps one qubit and distributes the other three to the three parties. All four
parties then randomly choose a basis for measuring the state of their qubit:
$\sigma_{x}$ or $\sigma_{y}$. The sifted key is extracted, on average with a
50% success probability, after public basis sharing (see Supplementary S-IV
for more details).
We performed a proof-of-principle implementation of this QSS protocol by
generating the 4-qubit GHZ state with our chip and by exploiting the
reconfigurable MZIs to perform the required projective measurements. Each
party measures its share of the 4-qubit GHZ state by randomly selecting a
measurement basis, and by recording the measurement outcome in the raw key
when the first 4-photon coincidence event occurs. This procedure is repeated
until the target length for the raw key has been reached. The key sifting is
then performed by discarding the raw bits that correspond to non-valid basis
choices. The raw bit generation rate we obtained is about 0.5 Hz. This rate
incorporates the dwell time to reach stable settings ($\sim$ 100 ms) for the
randomly chosen projective measurements.
We evaluate the total number of errors by calculating the quantum bit error
rate (QBER) on the sifted key. The most accurate uninterrupted run has a QBER
of $(10.87\pm 0.01)\%$, which guarantees secure communication as it is below
the required threshold of 11% [43], with a raw key length of 4060 bits and a
sifted key length of 1978 bits.
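The 50% average sifting probability follows from simple counting: for a GHZ state, only the basis combinations with an even number of $\sigma_{y}$ choices yield deterministically correlated outcomes. A minimal sketch (illustrative only; the sign conventions of the correlations are omitted):

```python
from itertools import product

# Each of the 4 parties independently picks sigma_x ("X") or sigma_y ("Y").
# For a GHZ state, only rounds with an even number of "Y" choices are kept
# during sifting; the rest carry no shared correlation and are discarded.
valid = [bases for bases in product("XY", repeat=4)
         if bases.count("Y") % 2 == 0]

# 8 of the 16 equally likely basis combinations survive sifting,
# giving the 50% average success probability quoted in the text.
assert len(valid) == 8
```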
## II Discussion
In this work we demonstrated the generation of 4-photon multipartite GHZ
states by integrating a deterministic solid-state QD-based SPS with a
reconfigurable glass photonic chip. We achieved a post-selected 4-fold
coincidence rate of 0.5 Hz with a pump rate of 79 MHz, which allowed us to
perform full quantum state tomography, with a fidelity of 86% to the target
state. A 4-photon coincidence rate of 10 Hz was reached by removing one stage
of spectral filtering on the single-photon source, with a limited effect
($+0.007$ on $g^{(2)}(0)$ and $-0.05$ on $M_{s}$, see Section S-I.2). The combination of
high fidelity and high rate, as well as the overall platform stability (stable
enough for highly demanding measurements such as full quantum state
tomography), shows the suitability of the platform for intermediate-scale
computing or communication protocols.
Our systematic analysis and numerical modeling show that, despite the
state-of-the-art performance of the source, the effective overall
indistinguishability of the photons is the main factor limiting the fidelity.
We thus identify several handles to further improve these results. Our observations
indeed indicate that the effective indistinguishability of the QD photons at
long delays is limited by the stability of the voltage source used to operate
the QD. More precise temporal overlap of the photons and better polarization
control at the input of the chip would also increase the net photon overlap.
In the current demonstration, the chip has a 50% insertion loss. We
foresee that this can be improved to 30%. Finally, the operation rate can be
brought up to at least 500 MHz thanks to the short photon profile. All
these improvements would significantly improve the state fidelity and
purity, and increase the performance of applications, e.g. a higher bitrate and
a lower QBER in the QSS protocol.
Altogether, these qualitative and quantitative results constitute an
important milestone in the generation and use of high-dimensional quantum
states. They prove that the integration of QD-based SPSs with glass chips for
generation and manipulation of multipartite states is now mature and can lead
to performances comparable to those achievable by state-of-the-art free space
implementations. The deterministic nature of QD-based SPSs along with the
integration of low-loss, stable, and reconfigurable optical elements in a
scalable photonic glass chip puts our platform at the forefront of the
development of practical photon-based quantum devices.
Recent works have shown the ability to generate linear cluster states at high
rates using an entangling gate in a fiber loop [44], or through spin-photon
entanglement [10], harnessing a similar QD-based single-photon source.
Generating multipartite entanglement on chip, as demonstrated here, will be
key to obtain the 2D cluster states required for measurement-based quantum
computation.
## Acknowledgements
This work is partly supported by the European Union’s Horizon 2020 FET OPEN
project PHOQUSING (Grant ID 899544), the European Union’s Horizon 2020
Research and Innovation Programme QUDOT-TECH under the Marie Sklodowska-Curie
Grant Agreement No. 861097, the French RENATECH network, and the Paris Ile-de-
France Region in the framework of DIM SIRTEQ. The fabrication of the photonic
chip was partially performed at PoliFAB, the micro- and nanofabrication
facility of Politecnico di Milano (www.polifab.polimi.it). F.C. and R.O. would
like to thank the PoliFAB staff for the valuable technical support.
## Author contributions
The development of the experimental platform used in this work was made
possible by the collaboration of multiple teams, as reflected by the large
number of authors and institutions involved. M.P.: Single-photon source (SPS)
sample design, experimental investigation, data analysis, formal analysis,
numerical simulations, methodology, visualization, writing, G.Co.:
conceptualization, methodology, photonic chip fabrication & characterisation,
visualization, writing, A.F.: experimental investigation, data analysis,
numerical simulations, I.A.: data analysis, numerical simulations,
visualization, writing, G.Ca.: conceptualization, formal analysis,
visualization, writing, N.M.: experimental investigation, supervision, P-E.E.:
conceptualization, formal analysis, writing, F.C., R.A., P.H.D.F.: photonic
chip fabrication & characterisation, N.S., I.S.: SPS nano-processing, J.S.:
numerical simulations, M.M., A.L.: SPS sample growth, P.S.: SPS sample design
& nano-processing, conceptualization, methodology, supervision, writing,
funding acquisition, F.S.: conceptualization, supervision, funding acquisition,
M.L.: conceptualization, methodology, visualization, writing, N.B., R.O.:
conceptualization, methodology, data analysis, visualization, writing,
supervision, funding acquisition.
## Data & code availability
The data generated as part of this work is available upon reasonable request
(mathias.pont@polytechnique.org). The code used for the numerical simulations
is available at [37].
## References
* Roos _et al._ [2004] C. F. Roos, M. Riebe, H. Haffner, W. Hansel, J. Benhelm, G. P. Lancaster, C. Becher, F. Schmidt-Kaler, and R. Blatt, Science 304, 1478 (2004).
* DiCarlo _et al._ [2010] L. DiCarlo, M. D. Reed, L. Sun, B. R. Johnson, J. M. Chow, J. M. Gambetta, L. Frunzio, S. M. Girvin, M. H. Devoret, and R. J. Schoelkopf, Nature 467, 574 (2010).
* Pan _et al._ [2001] J.-W. Pan, M. Daniell, S. Gasparoni, G. Weihs, and A. Zeilinger, Physical Review Letters 86, 4435 (2001).
* Walther _et al._ [2005] P. Walther, K. J. Resch, T. Rudolph, E. Schenck, H. Weinfurter, V. Vedral, M. Aspelmeyer, and A. Zeilinger, Nature 434, 169 (2005).
* Kiesel _et al._ [2005] N. Kiesel, C. Schmid, U. Weber, G. Tóth, O. Gühne, R. Ursin, and H. Weinfurter, Physical Review Letters 95, 210502 (2005).
* Zhong _et al._ [2018] H.-S. Zhong, Y. Li, W. Li, L.-C. Peng, Z.-E. Su, Y. Hu, Y.-M. He, X. Ding, W. Zhang, H. Li, _et al._ , Physical Review Letters 121, 250505 (2018).
* Wang _et al._ [2018] X.-L. Wang, Y.-H. Luo, H.-L. Huang, M.-C. Chen, Z.-E. Su, C. Liu, C. Chen, W. Li, Y.-Q. Fang, X. Jiang, _et al._ , Physical Review Letters 120, 260502 (2018).
* Thomas _et al._ [2022] P. Thomas, L. Ruscio, O. Morin, and G. Rempe, Nature 608, 677 (2022).
* Cogan _et al._ [2021] D. Cogan, Z.-E. Su, O. Kenneth, and D. Gershoni, arXiv preprint arXiv:2110.05908 (2021).
* Coste _et al._ [2022] N. Coste, D. Fioretto, N. Belabas, S. Wein, P. Hilaire, R. Frantzeskakis, M. Gundin, B. Goes, N. Somaschi, M. Morassi, _et al._ , arXiv preprint arXiv:2207.09881 (2022).
* Adcock _et al._ [2019] J. C. Adcock, C. Vigliar, R. Santagati, J. W. Silverstone, and M. G. Thompson, Nature Communications 10, 1 (2019).
* Reimer _et al._ [2019] C. Reimer, S. Sciara, P. Roztocki, M. Islam, L. Romero Cortés, Y. Zhang, B. Fischer, S. Loranger, R. Kashyap, A. Cino, _et al._ , Nature Physics 15, 148 (2019).
* Llewellyn _et al._ [2020] D. Llewellyn, Y. Ding, I. I. Faruque, S. Paesani, D. Bacco, R. Santagati, Y.-J. Qian, Y. Li, Y.-F. Xiao, M. Huber, _et al._ , Nature Physics 16, 148 (2020).
* Vigliar _et al._ [2021] C. Vigliar, S. Paesani, Y. Ding, J. C. Adcock, J. Wang, S. Morley-Short, D. Bacco, L. K. Oxenløwe, M. G. Thompson, J. G. Rarity, _et al._ , Nature Physics 17, 1137 (2021).
* Takesue and Shimizu [2010] H. Takesue and K. Shimizu, Optics Communications 283, 276 (2010).
* Somaschi _et al._ [2016] N. Somaschi, V. Giesz, L. De Santis, J. Loredo, M. P. Almeida, G. Hornecker, S. L. Portalupi, T. Grange, C. Anton, J. Demory, _et al._ , Nature Photonics 10, 340 (2016).
* Wang _et al._ [2019a] H. Wang, Y.-M. He, T.-H. Chung, H. Hu, Y. Yu, S. Chen, X. Ding, M.-C. Chen, J. Qin, X. Yang, _et al._ , Nature Photonics 13, 770 (2019a).
* Uppu _et al._ [2020] R. Uppu, F. T. Pedersen, Y. Wang, C. T. Olesen, C. Papon, X. Zhou, L. Midolo, S. Scholz, A. D. Wieck, A. Ludwig, _et al._ , Science Advances 6, eabc8268 (2020).
* Tomm _et al._ [2021] N. Tomm, A. Javadi, N. O. Antoniadis, D. Najer, M. C. Löbl, A. R. Korsch, R. Schott, S. R. Valentin, A. D. Wieck, A. Ludwig, _et al._ , Nature Nanotechnology 16, 399 (2021).
* Li _et al._ [2020] J.-P. Li, J. Qin, A. Chen, Z.-C. Duan, Y. Yu, Y. Huo, S. Höfling, C.-Y. Lu, K. Chen, and J.-W. Pan, ACS Photonics 7, 1603 (2020).
* Wang _et al._ [2019b] H. Wang, J. Qin, X. Ding, M.-C. Chen, S. Chen, X. You, Y.-M. He, X. Jiang, L. You, Z. Wang, _et al._ , Physical Review Letters 123, 250503 (2019b).
* de Goede _et al._ [2022] M. de Goede, H. Snijders, P. Venderbosch, B. Kassenberg, N. Kannan, D. H. Smith, C. Taballione, J. P. Epping, H. v. d. Vlekkert, and J. J. Renema, arXiv preprint arXiv:2204.05768 (2022).
* Corrielli _et al._ [2021] G. Corrielli, A. Crespi, and R. Osellame, Nanophotonics 10, 3789 (2021).
* Meany _et al._ [2015] T. Meany, M. Gräfe, R. Heilmann, A. Perez-Leija, S. Gross, M. J. Steel, M. J. Withford, and A. Szameit, Laser & Photonics Reviews 9, 363 (2015).
* Antón _et al._ [2019] C. Antón, J. C. Loredo, G. Coppola, H. Ollivier, N. Viggianiello, A. Harouri, N. Somaschi, A. Crespi, I. Sagnes, A. Lemaitre, _et al._ , Optica 6, 1471 (2019).
* Pont _et al._ [2022] M. Pont, R. Albiero, S. E. Thomas, N. Spagnolo, F. Ceccarelli, G. Corrielli, A. Brieussel, N. Somaschi, H. Huet, A. Harouri, _et al._ , Physical Review X 12, 031033 (2022).
* Greenberger _et al._ [1989] D. M. Greenberger, M. A. Horne, and A. Zeilinger, in _Bell’s theorem, quantum theory and conceptions of the universe_ (Springer, 1989) pp. 69–72.
* Hillery _et al._ [1999] M. Hillery, V. Bužek, and A. Berthiaume, Physical Review A 59, 1829 (1999).
* Li _et al._ [2015] Y. Li, P. C. Humphreys, G. J. Mendoza, and S. C. Benjamin, Physical Review X 5, 041007 (2015).
* Proietti _et al._ [2021] M. Proietti, J. Ho, F. Grasselli, P. Barrow, M. Malik, and A. Fedrizzi, Science advances 7, eabe0395 (2021).
* Gühne _et al._ [2007] O. Gühne, C.-Y. Lu, W.-B. Gao, and J.-W. Pan, Physical Review A 76, 030305 (2007).
* Gühne and Tóth [2009] O. Gühne and G. Tóth, Physics Reports 474, 1 (2009).
* Baccari _et al._ [2017] F. Baccari, D. Cavalcanti, P. Wittek, and A. Acín, Phys. Rev. X 7, 021042 (2017).
* Mermin [1990] N. D. Mermin, American Journal of Physics 58, 731 (1990).
* Tóth and Gühne [2005] G. Tóth and O. Gühne, Physical Review A 72, 022340 (2005).
* Altepeter _et al._ [2005] J. B. Altepeter, E. R. Jeffrey, and P. G. Kwiat, Advances in Atomic, Molecular, and Optical Physics 52, 105 (2005).
* Pont [2022] M. Pont, Zenodo (2022), 10.5281/zenodo.7219737.
* Ollivier _et al._ [2021] H. Ollivier, S. Thomas, S. Wein, I. M. de Buy Wenniger, N. Coste, J. Loredo, N. Somaschi, A. Harouri, A. Lemaitre, I. Sagnes, _et al._ , Physical Review Letters 126, 063602 (2021).
* Baccari _et al._ [2020] F. Baccari, R. Augusiak, I. Šupić, J. Tura, and A. Acín, Phys. Rev. Lett. 124, 020402 (2020).
* Agresti _et al._ [2021] I. Agresti, B. Polacchi, D. Poderini, E. Polino, A. Suprano, I. Supic, J. Bowles, G. Carvacho, D. Cavalcanti, and F. Sciarrino, PRX Quantum 2, 020346 (2021).
* Wu _et al._ [2021] D. Wu, Q. Zhao, X.-M. Gu, H.-S. Zhong, Y. Zhou, L.-C. Peng, J. Qin, Y.-H. Luo, K. Chen, L. Li, N.-L. Liu, C.-Y. Lu, and J.-W. Pan, Phys. Rev. Lett. 127, 230503 (2021).
* Pearl [2009] J. Pearl, Causality: Models, Reasoning and Inference, 2nd ed. (Cambridge University Press, 2009).
* Shor and Preskill [2000] P. W. Shor and J. Preskill, Physical Review Letters 85, 441 (2000).
* Istrati _et al._ [2020] D. Istrati, Y. Pilnyak, J. Loredo, C. Antón, N. Somaschi, P. Hilaire, H. Ollivier, M. Esmann, L. Cohen, L. Vidro, _et al._ , Nature Communications 11, 1 (2020).
* Dousse _et al._ [2008] A. Dousse, L. Lanco, J. Suffczyński, E. Semenova, A. Miard, A. Lemaître, I. Sagnes, C. Roblin, J. Bloch, and P. Senellart, Physical Review Letters 101, 267404 (2008).
* Nowak _et al._ [2014] A. Nowak, S. Portalupi, V. Giesz, O. Gazzano, C. Dal Savio, P.-F. Braun, K. Karrai, C. Arnold, L. Lanco, I. Sagnes, _et al._ , Nature Communications 5, 1 (2014).
* Berthelot _et al._ [2006] A. Berthelot, I. Favero, G. Cassabois, C. Voisin, C. Delalande, P. Roussignol, R. Ferreira, and J.-M. Gérard, Nature Physics 2, 759 (2006).
* Thomas _et al._ [2021] S. Thomas, M. Billard, N. Coste, S. Wein, H. Ollivier, O. Krebs, L. Tazaïrt, A. Harouri, A. Lemaitre, I. Sagnes, _et al._ , Physical Review Letters 126, 233601 (2021).
* Bergamasco _et al._ [2017] N. Bergamasco, M. Menotti, J. Sipe, and M. Liscidini, Physical Review Applied 8, 054014 (2017).
* Crespi [2015] A. Crespi, Physical Review A 91, 013811 (2015).
* Piacentini _et al._ [2021] S. Piacentini, T. Vogl, G. Corrielli, P. K. Lam, and R. Osellame, Laser & Photonics Reviews 15, 2000167 (2021).
* Ceccarelli _et al._ [2020] F. Ceccarelli, S. Atzeni, C. Pentangelo, F. Pellegatta, A. Crespi, and R. Osellame, Laser & Photonics Reviews 14, 2000024 (2020).
* Heurtel _et al._ [2022] N. Heurtel, A. Fyrillas, G. de Gliniasty, R. L. Bihan, S. Malherbe, M. Pailhas, B. Bourdoncle, P.-E. Emeriau, R. Mezher, L. Music, _et al._ , arXiv preprint arXiv:2204.00602 (2022).
* Brod _et al._ [2019] D. J. Brod, E. F. Galvão, N. Viggianiello, F. Flamini, N. Spagnolo, and F. Sciarrino, Physical Review Letters 122, 063602 (2019).
* Oszmaniec and Brod [2018] M. Oszmaniec and D. J. Brod, New Journal of Physics 20, 092002 (2018).
* Shalm _et al._ [2015] L. K. Shalm, E. Meyer-Scott, B. G. Christensen, P. Bierhorst, M. A. Wayne, M. J. Stevens, T. Gerrits, S. Glancy, D. R. Hamel, M. S. Allman, _et al._ , Physical Review Letters 115, 250402 (2015).
* Hensen _et al._ [2015] B. Hensen, H. Bernien, A. E. Dréau, A. Reiserer, N. Kalb, M. S. Blok, J. Ruitenberg, R. F. Vermeulen, R. N. Schouten, C. Abellán, _et al._ , Nature 526, 682 (2015).
* Giustina _et al._ [2015] M. Giustina, M. A. Versteegh, S. Wengerowsky, J. Handsteiner, A. Hochrainer, K. Phelan, F. Steinlechner, J. Kofler, J.-Å. Larsson, C. Abellán, _et al._ , Physical Review Letters 115, 250401 (2015).
## Methods
### Single-photon source
The bright SPS consists of a single InAs QD deterministically embedded in the
center of a micropillar [16]. The sample was fabricated using the in-situ
fabrication technology [45, 46] from a wafer grown by molecular beam epitaxy
composed of a $\lambda$-cavity and two distributed Bragg reflectors (DBR) made
of GaAs/Al$_{0.95}$Ga$_{0.05}$As $\lambda/4$ layers with 36 (18) pairs for the bottom
(top). The top (bottom) DBR is gradually p(n)-doped and electrically
contacted. The resulting p-i-n diode is driven in the reverse bias regime to
reduce the charge noise [47] and to tune the QD in resonance with the
microcavity. The resonance of the QD with the cavity mode at
$\lambda_{\text{QD}}$=928 nm is actively stabilised in real time with a
feedback loop on the total detected single-photon count rate. The sample is
placed in a closed-loop cryostat operating at 5 K. The LA phonon-assisted
excitation [48] is provided by a shaped Ti:Sa laser at
$\lambda_{\text{excitation}}$=927.4 nm, generating $\sim 15$ ps pulses with a
repetition rate of 79 MHz. The (polarised) first-lens brightness, defined as
the (polarised) single-photon count rate before the first optical element and
computed from the loss budget presented in Supplementary S-I.2.2, is
$\beta_{FL}\sim 50\%$ ($\beta_{FL}\sim 38\%$ polarised), leading to a detected count rate
of 18.9 MHz (12.3 MHz) when accounting (not accounting) for the 65% efficiency of
the SNSPD. To improve the single-photon purity and indistinguishability of the
source a narrow optical filter
($\text{FWHM}_{\text{filter}}=4\times\text{FWHM}_{\text{photon}}$, T=60%) is
added to the laser filtering module (Supplementary S-I.2). With this
additional spectral filter the detected single-photon count rate is 11.3 MHz
(7.4 MHz).
The single-photon stream is split into four spatial modes using an acousto-
optic based time-to-spatial demultiplexer. The time of arrival of each photon
at the input of the optical circuit is synchronised with fibered delays (0 ns,
180 ns, 360 ns, 540 ns). The polarisation of each output is actively
controlled with motorised paddles for five minutes every hour, to compensate
for the temperature instability in the laboratory.
### Projectors
The projectors $M_{0}^{(i)}$ and $M_{1}^{(i)}$ used in the definition of the
operator $\hat{\Theta}$ for the characterization of the phase $\theta$, and in
the Bell-like inequality expressed by Eq. (2) are:
$M_{0}^{(1)}=\frac{\sigma_{x}+\sigma_{z}}{\sqrt{2}}$,
$M_{1}^{(1)}=\frac{\sigma_{x}-\sigma_{z}}{\sqrt{2}}$,
$M_{0}^{(3)}=\sigma_{x}$, $M_{1}^{(3)}=\sigma_{z}$,
$M_{0}^{(2)}=M_{0}^{(4)}=-\sigma_{x}$ and
$M_{1}^{(2)}=M_{1}^{(4)}=-\sigma_{z}$.
### Expectation values
For a given 4-qubit projector $\hat{E}$, the expectation value is computed as
$\langle\hat{E}\rangle=\sum_{i=1}^{16}p_{i}\mathcal{E}_{i}$, where $p_{i}$ is
the probability of detecting the 4-qubit output state $i$ associated to the
measured normalized 4-photon coincidence rate of each possible output state
and $\mathcal{E}_{i}=\pm 1$ is the product of the individual outcomes where +1
is associated with the detection of $\ket{0}$ and -1 is associated with
$\ket{1}$. Note that whenever $\mathds{1}$ appears in an expectation value for
a qubit, we always record +1 (irrespective of which detector has clicked),
which amounts to tracing out the corresponding qubit.
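As an illustration, the outcome-averaging rule above can be sketched in a few lines of Python. The dictionary format and the function name are ours for illustration, not the paper's analysis code; probabilities are assumed already normalised from the measured coincidence rates.

```python
def expectation(probs, pauli_mask):
    """<E> = sum_i p_i * E_i over the 16 possible 4-qubit outcomes.

    probs: dict mapping outcome tuples such as (0, 1, 0, 1) to normalised
    4-photon coincidence probabilities.
    pauli_mask: length-4 booleans; False marks a qubit measured with the
    identity, whose outcome is recorded as +1 regardless of which
    detector clicked (i.e. that qubit is traced out).
    """
    total = 0.0
    for outcome, p in probs.items():
        eigenvalue = 1
        for bit, active in zip(outcome, pauli_mask):
            if active:
                eigenvalue *= 1 if bit == 0 else -1
        total += p * eigenvalue
    return total

# Toy check: an ideal GHZ state measured in sigma_z^{(x4)} only populates
# the outcomes 0101 and 1010, each with probability 1/2.
probs = {(0, 1, 0, 1): 0.5, (1, 0, 1, 0): 0.5}
print(expectation(probs, (True, True, True, True)))   # -> 1.0
```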
### Stabilizer witness
The stabilizer witness $\mathcal{W}_{\text{GHZ}_{4}}$ [31] can be computed
from the generating operators
$g_{1}^{(\text{GHZ}_{4})}=\sigma_{x}\otimes\sigma_{x}\otimes\sigma_{x}\otimes\sigma_{x}=\sigma_{x}^{\otimes
4}$ where $\otimes$ is the Kronecker product of the Pauli matrices, and
$g_{k}^{(\text{GHZ}_{4})}=-\sigma_{z}^{(k-1)}\otimes\sigma_{z}^{(k)}$ for
$k=2,3,4$ (the identity operator $\mathds{1}$ has been omitted for two of the
parties not involved) as
$\frac{\mathcal{W}_{\text{GHZ}_{4}}}{3}=\mathbb{1}-\frac{2}{3}\left[\frac{g_{1}^{(\text{GHZ}_{4})}+\mathbb{1}}{2}+\prod_{k=2}^{4}\frac{g_{k}^{(\text{GHZ}_{4})}+\mathbb{1}}{2}\right].$
(3)
To compute these expectation values, we need to perform only two projective
measurements, namely $\sigma_{x}^{\otimes 4}$ and $\sigma_{z}^{\otimes 4}$.
This witness gives a lower bound on the fidelity via
$\mathcal{F}_{\text{GHZ}_{4}}\geq(1-\langle\mathcal{W}_{\text{GHZ}_{4}}\rangle)/2$.
A measured negative expectation value signals the presence of entanglement.
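Since the witness only needs the two settings $\sigma_{x}^{\otimes 4}$ and $\sigma_{z}^{\otimes 4}$, its evaluation from measured outcome probabilities is short enough to sketch; the function below is an illustration with our naming, not the paper's analysis code.

```python
def witness_ghz4(p_x, p_z):
    """<W_GHZ4> from the two settings of eq. (3), plus the fidelity bound.

    p_x, p_z: dicts mapping 4-bit outcome tuples to probabilities for the
    sigma_x^{(x4)} and sigma_z^{(x4)} settings respectively.
    """
    # <(g1 + 1)/2> with g1 = sigma_x^{(x4)}
    exp_x = sum(p * (-1) ** sum(outcome) for outcome, p in p_x.items())
    proj_g1 = (exp_x + 1) / 2
    # <prod_k (g_k + 1)/2>: probability, in the sigma_z setting, that every
    # neighbouring pair anti-correlates, i.e. the outcome is 0101 or 1010
    proj_gk = p_z.get((0, 1, 0, 1), 0.0) + p_z.get((1, 0, 1, 0), 0.0)
    w = 3 - 2 * (proj_g1 + proj_gk)
    return w, (1 - w) / 2          # witness and fidelity lower bound

# Ideal GHZ statistics: <sigma_x^(x4)> = +1 and perfect anti-correlation in z.
w, f_min = witness_ghz4({(0, 0, 0, 0): 1.0},
                        {(0, 1, 0, 1): 0.5, (1, 0, 1, 0): 0.5})
print(w, f_min)   # -> -1.0 1.0
```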
Supplementary Information
## S-I Experimental implementation
### S-I.1 The photonic circuit
Figure S1: (a) Schematic layout of the integrated photonic chip for 4-photon
path-encoded GHZ state generation. In the right inset a microscope picture of
a waveguide crossing is shown. Inset scalebars correspond to 150 $\mu$m
(horizontal) and 15 $\mu$m (vertical).
In Fig. S1 a schematic of the chip operation is depicted. Its layout is
inspired by Bergamasco et al. [49], where a scheme for the generation of a
path-encoded 3-qubit GHZ state is presented. Here, the scheme is generalized
to 4 qubits.
Up to a global phase with no physical meaning, the field scattering matrix of
this device (with spatial modes numbered top to bottom from 1 to 8) from input
to the orange dashed line reads as:
$U=\frac{1}{\sqrt{2}}\begin{bmatrix}e^{i\theta_{1}}&ie^{i\theta_{1}}&0&0&0&0&0&0\\\
0&0&e^{i\theta_{2}}&ie^{i\theta_{2}}&0&0&0&0\\\
ie^{i\theta_{3}}&e^{i\theta_{3}}&0&0&0&0&0&0\\\
0&0&0&0&e^{i\theta_{4}}&ie^{i\theta_{4}}&0&0\\\
0&0&ie^{i\theta_{5}}&e^{i\theta_{5}}&0&0&0&0\\\
0&0&0&0&0&0&e^{i\theta_{6}}&ie^{i\theta_{6}}\\\
0&0&0&0&ie^{i\theta_{7}}&e^{i\theta_{7}}&0&0\\\
0&0&0&0&0&0&ie^{i\theta_{8}}&e^{i\theta_{8}}\\\ \end{bmatrix}.$ (S1)
The values $\theta_{k}$ indicate the optical phase that light acquires in
propagating from the chip input to the orange dashed line along
spatial mode $k$, and are represented as static phase plates in Fig. S1.
Consecutive pairs of odd and even spatial modes are used to encode the qubit
computational basis states $\ket{0}$ and $\ket{1}$. If the circuit is fed with
four indistinguishable photons in the separable state
$\ket{\Psi_{\text{in}}}=\ket{0000}$, it is possible to show, from equation S1
and applying the standard rules to compute the output state of a linear
optical network when single photons are used as input [50], that, up to a
global phase term, the qubit state at the orange dashed line, conditioned on
the presence of one and only one photon per qubit, is a 4-fold GHZ state of
the form:
$\ket{\text{GHZ}_{4}}^{(\theta)}=\frac{\ket{0101}+e^{i\theta}\ket{1010}}{\sqrt{2}}.$
(S2)
This process takes place with probability 1/8. The phase term $\theta$ within
equation S2 reads as:
$\theta=\theta_{1}-\theta_{2}-\theta_{3}+\theta_{4}+\theta_{5}-\theta_{6}-\theta_{7}+\theta_{8}$
(S3)
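The conditioned output state can be checked numerically from eq. (S1) by computing matrix permanents for the two postselected photon patterns. This is an independent sketch in plain NumPy (not the Perceval code of ref. [37]); the sign relating the relative phase to eq. (S3) depends on the mode-labelling convention.

```python
import numpy as np
from itertools import permutations

def permanent(m):
    """Brute-force matrix permanent; fine for 4x4 matrices."""
    n = m.shape[0]
    return sum(np.prod([m[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 8)         # static phases theta_1..theta_8
U = np.zeros((8, 8), dtype=complex)
# rows = modes at the orange dashed line, columns = input modes (eq. S1)
rows = [(0, (0, 1), False), (1, (2, 3), False), (2, (0, 1), True),
        (3, (4, 5), False), (4, (2, 3), True), (5, (6, 7), False),
        (6, (4, 5), True), (7, (6, 7), True)]
for r, (c0, c1), crossed in rows:
    ph = np.exp(1j * theta[r]) / np.sqrt(2)
    U[r, c0], U[r, c1] = (1j * ph, ph) if crossed else (ph, 1j * ph)

inputs = [0, 2, 4, 6]                        # |0000>: photons in modes 1,3,5,7
amp_0101 = permanent(U[np.ix_([0, 3, 4, 7], inputs)])  # photons in modes 1,4,5,8
amp_1010 = permanent(U[np.ix_([1, 2, 5, 6], inputs)])  # photons in modes 2,3,6,7
# Each pattern has probability 1/16; together 1/8, as stated above.
print(abs(amp_0101) ** 2, abs(amp_1010) ** 2)          # both ~ 1/16 = 0.0625
rel_phase = np.angle(amp_1010 * np.conj(amp_0101))
# rel_phase reproduces theta of eq. (S3) up to an overall sign fixed by the
# labelling convention of the modes.
```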
After the dashed line, a set of four reconfigurable Mach Zehnder
interferometers (MZIs) allows one to perform the projective measurements
(Supplementary S-III) required to characterize the generated state.
Figure S2: a) Microscope picture of the photonic chip after the fabrication of
the isolation trenches around the waveguide in the thermal-shifter region. b)
Microscope picture of the same view of panel a) after the deposition of the
CrAu layer and the laser ablation of the resistive circuit. Bright regions
correspond to CrAu coated surfaces.
The optical circuit was fabricated using femtosecond laser micromachining
technology [24, 23] on a borosilicate glass substrate (EagleXG), by using a
commercial femtosecond laser source emitting infrared ($\lambda$=1030 nm)
laser pulses with duration of 170 fs and at the repetition rate of 1 MHz. The
focusing optics used was a 0.5 numerical aperture water-immersion microscope
objective. Single-mode waveguides for 926 nm light have been obtained with 320
nJ/pulse energy, a writing velocity of 20 mm/s, and by performing 6 overlapped
writing scans per waveguide. The inscription depth is 35 $\mu$m below the top
surface. After laser irradiation, a thermal annealing treatment (same recipe
as [51]) was also performed to improve the waveguide performance. The overall
circuit length is 4 cm. All directional couplers (DC) are identical and their
geometry is optimized to reach a balanced 50/50 splitting ratio. The actual
values of the DC reflectivities (defined as the fraction of light power that
remains in the coupler BAR mode) have been experimentally characterized with a
926 nm laser diode for horizontally (H) and vertically (V) polarized light,
and the results are reported in Tab. S3.
The waveguide crossings are implemented by spline trajectories, and the
relative waveguides distance at the crossing point is 15 $\mu$m, resulting in
an optical cross talk $<$ -50 dB (no cross talk detected, upper bound set by
the resolution of the measurement).
To ensure full MZI programmability, a redundant set of 16 thermal
phase shifters has been integrated in the chip, distributed as depicted in
Fig. S1. The thermal phase shifters are fabricated according to the geometry
presented in ref. [52] for improving the power consumption and thermal cross
talk between adjacent shifters. In particular, waveguide isolation trenches
(100 $\mu$m wide) are fabricated by water-assisted laser ablation between
neighbouring waveguides (separated by 127 $\mu$m), at the
position of the thermal phase shifters (Fig. S2.a). Then, a Cr-Au metallic
bilayer (5 nm Cr + 100 nm Au) is deposited on the top surface of the sample by
thermal evaporation and annealed at 300 °C for 3 h in vacuum. Finally, the
resistive heaters (length of 3 mm, width of 10 $\mu$m) and the corresponding
contact pads are patterned by fs-laser irradiation (see Fig. S2.b). The
electrical resistances of all thermal shifters in the chip fall within the
range 410–430 $\Omega$. Finally, the photonic chip is permanently
glued to optical fiber arrays (fiber model: FUD-3602, from Nufern) at both
input and output, and the overall fiber-to-fiber device insertion losses are
$\sim$2.7 dB for all channels.
Due to the presence of thermal cross talk between thermal shifters belonging
to the same column, the relations that link the currents $I_{j}$ dissipated
on the resistors $R_{j}$ and the induced phase shifts $\alpha_{i}$ and $\phi_{i}$
in the MZIs are well approximated by:
$\displaystyle[\alpha_{1}~{}\alpha_{2}~{}\alpha_{3}~{}\alpha_{4}]^{T}$
$\displaystyle=A~{}[I_{1}^{2}~{}I_{2}^{2}~{}I_{3}^{2}~{}I_{4}^{2}~{}I_{5}^{2}~{}I_{6}^{2}~{}I_{7}^{2}~{}I_{8}^{2}~{}]^{T},$
(S4) $\displaystyle[\phi_{1}~{}\phi_{2}~{}\phi_{3}~{}\phi_{4}]^{T}$
$\displaystyle=\Phi_{0}+B~{}[I_{9}^{2}~{}I_{10}^{2}~{}I_{11}^{2}~{}I_{12}^{2}~{}I_{13}^{2}~{}I_{14}^{2}~{}I_{15}^{2}~{}I_{16}^{2}]^{T},$
(S5)
where:
$\displaystyle A$
$\displaystyle=\begin{bmatrix}53.031&-54.123&-10.807&-4.293&-2.302&-1.307&-1.000&-0.733\\\
2.915&9.016&49.504&-48.858&-9.342&-3.271&-1.604&-0.801\\\
1.094&1.304&4.330&9.644&51.987&-53.094&-11.325&-3.920\\\
0.828&1.124&1.604&2.203&4.162&11.675&54.696&-51.980\\\
\end{bmatrix}\frac{krad}{A^{2}}$ (S6) $\displaystyle\Phi_{0}$
$\displaystyle=\begin{bmatrix}3.8656\\\ 2.838\\\ 0.798\\\ 0.990\\\
\end{bmatrix}rad,\qquad
B=\begin{bmatrix}53.604&-52.942&-12.937&-4.535&-2.067&-1.504&0&-0.730\\\
3.779&10.918&52.829&-54.752&-9.796&-3.963&0&-1.201\\\
1.165&1.870&3.826&11.283&48.144&-54.833&0&-3.791\\\
0.706&0.926&1.338&2.199&3.731&11.630&0&-52.863\\\
\end{bmatrix}\frac{krad}{A^{2}}.$ (S7)
Note that resistor $R_{15}$ was damaged during the fabrication process (hence the column of zeros in $B$).
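Equations (S4)-(S7) are linear in the squared currents, so a phase calibration can be sketched as a least-squares solve. The target phases below are arbitrary illustration values; a real calibration would additionally enforce $I_{j}^{2}\geq 0$ (e.g. via non-negative least squares) and skip the damaged resistor R15.

```python
import numpy as np

# Crosstalk matrix A of eq. (S6), converted from krad/A^2 to rad/A^2.
A = np.array([
    [53.031, -54.123, -10.807, -4.293, -2.302, -1.307, -1.000, -0.733],
    [2.915, 9.016, 49.504, -48.858, -9.342, -3.271, -1.604, -0.801],
    [1.094, 1.304, 4.330, 9.644, 51.987, -53.094, -11.325, -3.920],
    [0.828, 1.124, 1.604, 2.203, 4.162, 11.675, 54.696, -51.980],
]) * 1e3

target = np.array([np.pi, np.pi / 2, 0.0, np.pi / 4])   # desired alpha_i (rad)

# 4 equations in 8 unknowns I_j^2: lstsq returns the minimum-norm exact
# solution; entries may come out negative, which a physical calibration
# must avoid by constraining the solve.
i_sq, *_ = np.linalg.lstsq(A, target, rcond=None)
print(np.allclose(A @ i_sq, target))                    # -> True
```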
### S-I.2 Full optical setup
The experimental setup for on-chip generation of a path-encoded 4-GHZ state
using a bright QDSPS is presented in Fig. S3.
Figure S3: Experimental setup for on-chip generation of a path-encoded 4-GHZ
state using a bright QD SPS. A stream of single photons is generated with a
bright QD ($\sim$19 MHz single-photon detected rate in a single-mode fiber
using single-photon detectors) deterministically embedded in a micropillar.
The four-photon state is generated with a time-to-spatial demultiplexer, sets
of waveplates (QWP (purple) & HWP (blue)) to control the polarisation and
fibered delays (0 ns, 180 ns, 360 ns, 540 ns) to control the arrival time of
the single photons at the input of the optical circuit. The 4-GHZ state is
postselected with 4-photon coincidence measurements with superconducting
nanowire single-photon detectors (SNSPD) and a time-to-digital converter
(correlator). A feedback loop ensures the stability of the setup during the
measurement. It controls the spectral resonance of the QD with the microcavity
(real time), the indistinguishability of the four-photon state (5 minutes per
hour) and the fidelity of the projective measurements in the optical circuit
(before each measurement). The temperature of the optical circuit is
stabilized at $\sim 295$ K with a $\sim 1$ mK resolution.
#### S-I.2.1 Single-photon purity & indistinguishability
The single-photon purity of the source, defined as
$\mathcal{P}_{QD}=1-g^{(2)}(0)$, characterised independently in a Hanbury
Brown and Twiss setup, is $g^{(2)}(0)=0.012\pm 0.001$ without spectral
filtering, and can be lowered to $g^{(2)}(0)=0.005\pm 0.001$ with a narrow
optical filter which does not block the single photons
($\text{FWHM}_{\text{filter}}=4\times\text{FWHM}_{\text{photon}}$), but
improves the rejection of the near-resonant excitation laser. The effect of
this additional spectral filter on the overall transmission of the optical
setup is detailed in the loss budget (see Tab.S2).
The layout of the optical chip makes it possible to measure 4 of the 6
two-photon wavepacket overlaps (AB, AC, BD and CD, see inset of Fig. 3.b) by
setting the phase of all the MZIs to $\Phi_{i}=\pi/2$. In this configuration,
we measure the 2-photon interference fringe visibility $V_{\text{HOM}}$, and
compute the indistinguishability $M_{s}$ by taking into account the multi-
photon component [38]. Because of the time-to-spatial demultiplexing, the
delay between two interfering photons differs from pair to pair. We
present the measured indistinguishability between all accessible pairs in Tab.
S1.
MZI Nº | Interfering photons | Delay | $V_{\text{HOM}}$ with (without) etalon | $M_{s}$ with (without) etalon
---|---|---|---|---
MZI1 | Photons A & B | $\sim$ 180 ns | $0.91\pm 0.01$ ($0.85\pm 0.01$) | $0.92\pm 0.01$ ($0.88\pm 0.01$)
MZI2 | Photons A & C | $\sim$ 360 ns | $0.90\pm 0.01$ ($0.82\pm 0.01$) | $0.91\pm 0.01$($0.85\pm 0.01$)
MZI3 | Photons B & D | $\sim$ 360 ns | $0.87\pm 0.01$ ($0.80\pm 0.01$) | $0.88\pm 0.01$($0.83\pm 0.01$)
MZI4 | Photons C & D | $\sim$ 180 ns | $0.90\pm 0.01$ ($0.84\pm 0.01$) | $0.91\pm 0.01$ ($0.87\pm 0.01$)
Table S1: 2-photon interference fringe visibility $V_{\text{HOM}}$ and
indistinguishability $M_{s}$ of each pair of interfering photons. The
multiphoton component is $g^{(2)}(0)=0.005\pm 0.001$ ($g^{(2)}(0)=0.012\pm
0.001$) with (without) the etalon.
#### S-I.2.2 Experimental characterisation of the loss budget
All component optical transmissions are measured independently in order to
compute the efficiencies mentioned in the main text: the collection efficiency
of the optical setup $\eta_{C}$, the active demultiplexing efficiency
$\eta_{\text{DMX}}$, the transmission of the photonic chip, mainly dominated
by the insertion losses $\eta_{chip}$ and the single-photon detector
efficiency $\eta_{D}$ are detailed in Tab. S2. The (polarised) first-lens
brightness of the source, defined as the (linearly polarised) single-photon
count rate before the first optical element divided by the repetition rate of
the laser, is ($\beta_{FL}\sim 40\%$) $\beta_{FL}\sim 50\%$. The fibered
collection efficiency of the single-photon is $\eta_{C}$=48% ($\eta_{C}$=29%)
and the overall transmission of the setup including the demultiplexing scheme
and the photonic chip is $\eta$=17% ($\eta$=10%) without (with) the etalon
filter, not including detection efficiency. The filling factor (FF) induced by
the switching time of the demultiplexer is also not included in the overall
efficiency, as it only reduces the effective repetition rate (RR) of the
laser. The detected 4-photon coincidence rate, which must be weighted by the
1/8 probability of detecting one and only one photon per qubit, is thus given by:
$C_{4-photon}=RR\times
FF\times(\beta_{FL}\times\eta_{C}\times\eta_{\text{DMX}}\times\eta_{chip}\times\eta_{D})^{4}$
(S8)
Table S2: Loss budget
Single-photon collection efficiency ($\eta_{C}$)
---
First lens | 0.99
Cryostat window | 0.99
Polarisation control (QWP+HWP) | 0.96
Coupling into SM fiber | 0.70
Laser rejection (bandpass filters) | 0.73
Laser rejection (etalon filter) | 0.60
Degree of linear polarization | 0.70
Demultiplexing efficiency
---
Optical efficiency ($\eta_{\text{DMX}}$) | 0.75
Filling factor (FF) | 0.67
Photonic chip ($\eta_{chip}$)
Insertion losses | 0.54
Detector efficiency ($\eta_{D}$)
Mean detection efficiency | 0.65
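As a consistency sketch, eq. (S8) can be evaluated with the no-etalon figures of Tab. S2 and the parameters quoted above. The combination below is ours and only illustrates the orders of magnitude (a 4-fold rate of order 100 Hz after the 1/8 postselection).

```python
RR = 79e6         # laser repetition rate (Hz)
FF = 0.67         # demultiplexer filling factor
beta_FL = 0.50    # first-lens brightness (unpolarised)
eta_C = 0.48      # fibered collection efficiency, without etalon
eta_DMX = 0.75    # demultiplexer optical efficiency
eta_chip = 0.54   # photonic-chip insertion losses
eta_D = 0.65      # mean SNSPD detection efficiency

per_photon = beta_FL * eta_C * eta_DMX * eta_chip * eta_D
c4 = RR * FF * per_photon ** 4      # 4-fold coincidences at the detectors (Hz)
print(round(c4), round(c4 / 8))     # the 1/8 postselects one photon per qubit
```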
## S-II Modeling experimental imperfections
We build on a phenomenological model first introduced in ref. [26] to identify
the main sources of noise in the system. The model developed here constructs
the input multi-photon state from experimentally measured metrics, and enables
analysis of the calculated photon number tomography at the output of the
simulated optical circuit. The simulations are implemented using the
open-access linear optical circuit development framework Perceval [53]. The code
developed for this work and used for the numerical simulations is available at
[37].
### S-II.1 Imperfect single-photon source
The imperfect single-photon source is modeled by a statistical mixture of Fock
states (Fig. S4). The ideal single-photon state $\ket{1}$ is mixed with
distinguishable photons $\ket{\tilde{1}}$ (indistinguishability $M\neq 1$),
and additional noise photons $\ket{\bar{1}}$ (multiphoton component
$g^{(2)}(0)\neq 0$). These hypotheses together with the initial
characterisation of the source ($M$ and $g^{(2)}(0)$) allow the computation of
the probabilities $p_{0}$, $p_{1}$, and $p_{2}$ that the single-photon source
emits 0, 1, and 2 photons per time bin respectively. Here, for a bright QD
based single-photon source with a LA phonon-assisted excitation scheme, we can
consider that the emission process is deterministic, and set $p_{0}$=0.
Moreover, as $g^{(2)}(0)\approx 0$, we neglect all the terms with more than
two photons, which sets $p_{1}+p_{2}=1$.
Figure S4: Representation of the state generated by an ideal single photon
source (a) and an imperfect single-photon source (b). $\beta$ is the
occupation probability of the QD, defined as $\beta=1-p_{0}$. M and g2 are the
indistinguishability and purity of the single-photon state. Imperfect values
are modeled respectively by photons in states $\ket{\tilde{1}}$ and
$\ket{\bar{1}}$.
$\begin{cases}g^{(2)}(0)=\frac{2p_{2}}{(p_{1}+2p_{2})^{2}}\\\
p_{1}+p_{2}\approx 1\\\ p_{0}\approx 0\end{cases}$ (S9)
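Eq. (S9) reduces to a quadratic in $p_{2}$ once $p_{1}+p_{2}=1$ is substituted; a short solver (our sketch, not the paper's code) recovers the photon-number probabilities from the measured $g^{(2)}(0)$.

```python
import math

def photon_number_probs(g2):
    """Solve eq. (S9) for (p1, p2) with p0 ~ 0 and p1 + p2 = 1.

    Substituting p1 + 2*p2 = 1 + p2 gives g2*(1 + p2)**2 = 2*p2, i.e.
    g2*p2**2 + (2*g2 - 2)*p2 + g2 = 0; the physical root is the small one.
    """
    a, b, c = g2, 2 * g2 - 2, g2
    p2 = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return 1 - p2, p2

p1, p2 = photon_number_probs(0.005)   # filtered-source value from the text
print(p1, p2)                         # p2 comes out close to g2/2
```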
The indistinguishability of each input state i=A,…,D is defined by the
probability $x_{i}$ to be in a master single-photon state $\ket{1}$, shared
with all other inputs, and by the probability $(1-x_{i})$ to be in a
subsidiary state $\ket{\tilde{1}}_{i}$, distinguishable from the main state,
and all other subsidiary states $\ket{\tilde{1}}_{j}$ for $j\neq i$. The best
estimators $x_{i}$, i=A,…,D, are computed with least-squares optimization so that
all the mean wavepacket overlaps between photons measured in four simulated
Hong-Ou-Mandel interferometers fit the values measured experimentally. The two
wavepacket overlaps $M_{BC}$ and $M_{AD}$ are not experimentally accessible,
but can be computed from the $x_{i}$. The 4 measured mean wavepacket overlaps
provide upper and lower bounds [54] on $M_{BC}$ and $M_{AD}$. These bounds are
included in the numerical optimization to find $x_{i}$ so that the result is
physically acceptable.
$\begin{cases}M_{BD}+M_{CD}-1\leqslant M_{BC}\leqslant 1-|M_{BD}-M_{CD}|\\\
M_{AB}+M_{BD}-1\leqslant M_{AD}\leqslant 1-|M_{AB}-M_{BD}|\end{cases}$
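The bounds can be evaluated directly from the measured overlaps; the helper below is a sketch with our naming, using the values quoted in Supplementary S-II.1.

```python
def overlap_bounds(m_ab, m_bd, m_cd):
    """Lower/upper bounds on the unmeasured overlaps M_BC and M_AD [54]."""
    m_bc = (m_bd + m_cd - 1, 1 - abs(m_bd - m_cd))
    m_ad = (m_ab + m_bd - 1, 1 - abs(m_ab - m_bd))
    return m_bc, m_ad

bc, ad = overlap_bounds(m_ab=0.924, m_bd=0.881, m_cd=0.921)
print(bc, ad)   # M_BC in [~0.80, 0.96], M_AD in [~0.81, 0.96]
```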
Figure S5: Left: Graph encoding of the known experimental data. Vertices are
photons, and edges are measured two-photon overlaps. Right: Bounds on the
unmeasured overlaps computed from the measured 2-photon mean wavepacket
overlaps.
Before demultiplexing, the input state in a given time bin generated by the
imperfect single-photon source after losses can be defined by the following
statistical mixture of states
$\ket{\Psi}\bra{\Psi}_{i}=c_{\ket{0}}\ket{0}\bra{0}_{i}+c_{\ket{1}}\ket{1}\bra{1}_{i}+c_{\ket{\tilde{1}}}\ket{\tilde{1}}\bra{\tilde{1}}_{i}+c_{\ket{\bar{1}}}\ket{\bar{1}}\bra{\bar{1}}_{i}+c_{\ket{(1,\bar{1})}}\ket{(1,\bar{1})}\bra{(1,\bar{1})}_{i}+c_{\ket{(\tilde{1},\bar{1})}}\ket{(\tilde{1},\bar{1})}\bra{(\tilde{1},\bar{1})}_{i}$
(S10)
where
* •
$\ket{0}$ is the vacuum.
* •
$\ket{1}_{i}$ is one single photon, identical for all inputs and time bins
(signal).
* •
$\ket{\tilde{1}}_{i}$ is a one-photon state, distinguishable from $\ket{1}$ and
all $\ket{\tilde{1}}_{j}$ for $j\neq i$. It models the partial
distinguishability of the input state.
* •
$\ket{\bar{1}}_{i}$ is a one-photon state, distinguishable from
$\ket{1}$, $\ket{\tilde{1}}_{i}$ and all $\ket{\bar{1}}_{j}$ and
$\ket{\tilde{1}}_{j}$ for $j\neq i$. It models the unwanted multiphoton
components via a two-photon state containing one single photon (signal) and one
noise photon, which is distinguishable from the signal [38].
The losses in the experimental apparatus are comparable for all optical modes,
so they commute with all linear optical elements, including detection
efficiencies [55]. We can thus define an overall transmission parameter per
photon, $\eta$. The probabilities for state $i$ are thus modelled as
* •
$c_{\ket{0}}^{2}=1-(\eta\cdot p_{1}+\eta^{2}\cdot p_{2}+2\eta(1-\eta)\cdot p_{2})$
* •
$c_{\ket{1}}^{2}=\eta x_{i}\cdot p_{1}+\eta(1-\eta)x_{i}\cdot p_{2}$
* •
$c_{\ket{\tilde{1}}}^{2}=\eta(1-x_{i})\cdot p_{1}+\eta(1-\eta)(1-x_{i})\cdot p_{2}$
* •
$c_{\ket{\bar{1}}}^{2}=\eta(1-x_{i})\cdot p_{1}+\eta(1-\eta)(1-x_{i})\cdot p_{2}$
* •
$c_{\ket{(1,\bar{1})}}^{2}=\eta^{2}x_{i}\cdot p_{2}$
* •
$c_{\ket{(\tilde{1},\bar{1})}}^{2}=\eta^{2}(1-x_{i})\cdot p_{2}$
The multi-photon input states generated by the demultiplexer are given by all
possible combinations of the $\ket{\Psi}_{i}$ states for i=A,…,D, leading to
$6^{4}=1296$ instances. Using Perceval, the permanent of the scattering matrix
corresponding to a given input state and a unitary matrix representing the
optical chip (which varies depending on the configuration of the phase
shifters) is computed. Each output state (number of photons per optical mode)
is mapped to an outcome in the computational basis ($\ket{0}=0$ or $\ket{1}=1$
for each party). Because we use threshold single-photon detectors, the
probability for a detector to click is the same whatever the number of photons
arriving, which leads to the following mapping $(\text{number of photons per
output port})\rightarrow(\text{qubit values})$
$\displaystyle(1,0,1,0,1,0,1,0)\rightarrow(0,0,0,0)$
$\displaystyle(0,1,1,0,1,0,1,0)\rightarrow(1,0,0,0)$
$\displaystyle(0,2,1,0,1,0,1,0)\rightarrow(1,0,0,0)$ $\displaystyle...$
The probability of an outcome is given by the sum of the probabilities of all
output states yielding that outcome, weighted by the probability of the input.
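The threshold-detector mapping can be made explicit as a small function; this is our sketch of the rule above, with events where zero or two detectors of a qubit pair click returned as None (discarded by the postselection).

```python
def click_pattern_to_outcome(photons):
    """Map photon numbers in the 8 output modes to a 4-qubit outcome.

    Threshold detectors only report a click (>= 1 photon), so e.g.
    (0,2,1,0,...) and (0,1,1,0,...) give the same outcome. Events where
    both or neither detector of a qubit pair click are discarded (None).
    """
    outcome = []
    for q in range(4):
        click0 = photons[2 * q] > 0
        click1 = photons[2 * q + 1] > 0
        if click0 == click1:          # 0 or 2 clicks on this qubit pair
            return None
        outcome.append(0 if click0 else 1)
    return tuple(outcome)

print(click_pattern_to_outcome((1, 0, 1, 0, 1, 0, 1, 0)))   # -> (0, 0, 0, 0)
print(click_pattern_to_outcome((0, 2, 1, 0, 1, 0, 1, 0)))   # -> (1, 0, 0, 0)
```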
The simulation is restricted to the input states with $\geq 4$ photons (1041
instances), and the input states associated with a negligible probability ($10^{8}$
times smaller than the dominant term) are neglected. To run the numerical
simulations presented in Tab. 1, the parameters used, independently determined
experimentally, are $g^{(2)}(0)=0.005\pm 0.001$, $M_{AB}=0.924\pm 0.002$,
$M_{BD}=0.881\pm 0.003$, $M_{CD}=0.921\pm 0.002$, $M_{AC}=0.915\pm 0.002$ and
$\eta=0.039\pm 0.001$. The typical computation time to reconstruct the density
matrix (cf. Fig. 2.b) with these simulations is $\sim 30$ min.
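The 1041-instance count quoted above can be cross-checked by enumerating one mixture term per input; the photon numbers per term follow the six terms of eq. (S10).

```python
from itertools import product

# Photon number carried by each of the six terms of eq. (S10):
# |0>, |1>, |~1>, |bar 1>, |(1, bar 1)>, |(~1, bar 1)>
photons_per_term = [0, 1, 1, 1, 2, 2]

combos = list(product(photons_per_term, repeat=4))   # one term per input A..D
at_least_4 = sum(1 for c in combos if sum(c) >= 4)
print(len(combos), at_least_4)   # -> 1296 1041
```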
### S-II.2 Imperfect preparation
We define the directional coupler (first row of beam splitters, see Fig. S1)
reflectivity $\mathcal{R}$ as the fraction of optical power that, after the
coupler, remains in the upper waveguide. The target value is 0.5 (balanced
coupler). In Tab. S3 we report the value of $\mathcal{R}_{x}$ as measured for
DCx at the wavelengths of 926 nm for both H- and V-polarized lights. The
reflectivity of each DC exhibits a deviation from the ideal behavior $<1\%$.
The values reported in Tab. S3 have been measured before pigtailing output
fibers. Measurements performed after this operation may be distorted by
differential losses at the circuit input/output due to imperfect gluing. This
effect is taken into account when computing the scattering matrix using the
mean value for H- and V-polarized light, as we do not control the
polarisation entering the chip.
Polarization | DC1 | DC2 | DC3 | DC4
---|---|---|---|---
H | 0.499 | 0.505 | 0.490 | 0.502
V | 0.501 | 0.505 | 0.491 | 0.504
Table S3: Directional coupler reflectivities.
The imperfect preparation of the internal phase $\theta$ of the GHZ states is
not taken into account in the model, as it would only affect the fidelity of
the state to the target, and not the purity. The fidelity of the state to the
whole class of GHZ states of the form
$\ket{\text{GHZ}_{4}}^{(\theta)}=\left(\ket{0101}+e^{i\theta}\ket{1010}\right)/\sqrt{2}$
was calculated, and we find a maximum fidelity of $0.863\pm 0.004$ at
$\theta=0.065$ rad, which falls within the error bar of the fidelity to the
target. We can thus conclude that the imperfect initialisation of the internal
phase is negligible.
### S-II.3 Imperfect detectors & tomography
The fidelity of a projective measurement corresponds to the implementation of a
given projection using the tunable Mach-Zehnder interferometers (MZI). While a
fine calibration of all crosstalks between the phase-shifters in the optical
circuit allows one to implement any gate with a near-unity fidelity, the
unbalanced efficiency of the SNSPDs effectively reduces the fidelity of the
projective measurement. As the SNSPD efficiency depends strongly on the
polarisation, it varies sporadically over the time needed to run the experiment
($\sim 50$ h for the reconstruction of the density matrix) because of external
noise (temperature, vibration, etc.). To counterbalance this effect, we
slightly correct the calibration of each phase shifter so that the detected
balance meets the target.
To characterise the fidelity of the projective measurement, we switch off 2
channels of the DMX so that only two inputs of the optical chip are used to
address each MZI only on the upper (lower) input for each odd (even) party.
In this configuration the single-photon source is used as a classical source
of light and the balance between the two detectors is compared to the target.
The error is computed for each of the 81 measurements needed to reconstruct
the density matrix. The mean error for each party before (after) re-
calibration is 2.6% (1.5%) for party A, 5.5% (1.4%) for party B, 3.2% (0.5%)
for party C, and 1.6% (3.3%) for party D.
## S-III Bell-like inequality measurements
The phase shifts associated with each Pauli projective measurement are reported
in Tab. S4.
Measurement | Projector | $\chi$ | $\psi$ | $\alpha$ | $\phi$
---|---|---|---|---|---
$\sigma_{x}$ | $0.707\ket{0}+0.707\ket{1}$ | $\frac{\pi}{4}$ | 0 | $0$ | $\frac{\pi}{2}$
$-\sigma_{x}$ | $0.707\ket{0}-0.707\ket{1}$ | $-\frac{\pi}{4}$ | 0 | $0$ | $\frac{3\pi}{2}$
$\sigma_{y}$ | $0.707\ket{0}+i0.707\ket{1}$ | $\frac{\pi}{4}$ | $\frac{\pi}{2}$ | $\frac{\pi}{2}$ | $\frac{\pi}{2}$
$\sigma_{z}$ | $\ket{0}$ | 0 | 0 | $0$ | $\pi$
$-\sigma_{z}$ | $\ket{1}$ | $\frac{\pi}{2}$ | 0 | $0$ | $0$
$\frac{\sigma_{x}+\sigma_{z}}{\sqrt{2}}$ | $0.924\ket{0}+0.383\ket{1}$ | $\frac{\pi}{8}$ | 0 | $0$ | $\frac{3\pi}{4}$
$\frac{\sigma_{x}-\sigma_{z}}{\sqrt{2}}$ | $0.383\ket{0}+0.924\ket{1}$ | $\frac{3\pi}{8}$ | 0 | $0$ | $\frac{\pi}{4}$
Table S4: Phase shifts to perform the projectors required by inequality in Eq.
(2). $\chi$ and $\psi$ are the parameters characterizing the eigenstate of
each operator corresponding to eigenvalue +1:
$\cos\chi\ket{0}+e^{i\psi}\sin\chi\ket{1}$. $\alpha$ refers to the phase
difference between the input modes of the Mach-Zehnder interferometer. $\phi$
refers to the phase difference between the arms of the Mach-Zehnder
interferometer (see Fig. S1).
The measurements that are required to obtain the highest violation of Eq. (2)
in Section I.3 are the following:
$M_{0}=\frac{\sigma_{x}+\sigma_{z}}{\sqrt{2}}$ and
$M_{1}=\frac{\sigma_{x}-\sigma_{z}}{\sqrt{2}}$ for party $1$,
$M_{0}=\sigma_{x}$ and $M_{1}=\sigma_{z}$, for party $3$ and
$M_{0}=-\sigma_{x}$ and $M_{1}=-\sigma_{z}$, for parties $2$ and $4$. The
corresponding parameters $\alpha_{i}$ and $\phi_{i}$ (with $i\in\{1,\dots,4\}$)
are shown in Tab. S4. As mentioned, this inequality requires $8$ expectation values,
which are summarized in Tab. S5.
Operator | Party 1 | Party 2 | Party 3 | Party 4
---|---|---|---|---
$M_{1}^{(1)}M_{1}^{(2)}$ | $\frac{\sigma_{x}-\sigma_{z}}{\sqrt{2}}$ | $-\sigma_{z}$ | $\mathds{1}$ | $\mathds{1}$
$M_{1}^{(1)}M_{1}^{(3)}$ | $\frac{\sigma_{x}-\sigma_{z}}{\sqrt{2}}$ | $\mathds{1}$ | $\sigma_{z}$ | $\mathds{1}$
$M_{1}^{(1)}M_{1}^{(4)}$ | $\frac{\sigma_{x}-\sigma_{z}}{\sqrt{2}}$ | $\mathds{1}$ | $\mathds{1}$ | $-\sigma_{z}$
$M_{0}^{(1)}M_{1}^{(2)}$ | $\frac{\sigma_{x}+\sigma_{z}}{\sqrt{2}}$ | $-\sigma_{z}$ | $\mathds{1}$ | $\mathds{1}$
$M_{0}^{(1)}M_{1}^{(3)}$ | $\frac{\sigma_{x}+\sigma_{z}}{\sqrt{2}}$ | $\mathds{1}$ | $\sigma_{z}$ | $\mathds{1}$
$M_{0}^{(1)}M_{1}^{(4)}$ | $\frac{\sigma_{x}+\sigma_{z}}{\sqrt{2}}$ | $\mathds{1}$ | $\mathds{1}$ | $-\sigma_{z}$
$M_{0}^{(1)}M_{0}^{(2)}M_{0}^{(3)}M_{0}^{(4)}$ | $\frac{\sigma_{x}+\sigma_{z}}{\sqrt{2}}$ | $-\sigma_{x}$ | $\sigma_{x}$ | $-\sigma_{x}$
$M_{1}^{(1)}M_{0}^{(2)}M_{0}^{(3)}M_{0}^{(4)}$ | $\frac{\sigma_{x}-\sigma_{z}}{\sqrt{2}}$ | $-\sigma_{x}$ | $\sigma_{x}$ | $-\sigma_{x}$
Table S5: Measurements operators that maximize the violation of Eq. (2). The
maximal achievable value is $6\sqrt{2}\approx 8.4853$.
Non-classicality certification techniques which rely on the violations of
given causal constraints are only valid for processes that are describable
through the corresponding causal structures. For this reason, although these
violations require a priori no assumption on the inner functioning of the
device generating the states to certify, it is still necessary to ensure that
the cause-effect relationships among the variables are the correct ones.
To draw conclusions from violating the classical bound of inequality in Eq.
(2) we need to ensure that the causal structure presented in Fig. 3.a
describes faithfully our experimental implementation. We do so by using part
of our knowledge of the integrated circuit, depicted in Fig. 1 and detailed in
Fig. S1.a, thus making our conclusions semi-device independent. To validate
the causal structure we make the fair sampling assumption and we then need to
ensure the two following constraints: (i) Measurement Independence: there is
no mutual influence among the measurement choices, and (ii) Causal influence:
the parties $1$, $2$, $3$ and $4$ are independent. For condition (i), no
observational data can exclude that measurement choices have a common cause
in their past, i.e. a superdeterministic scenario can never be excluded. For
condition (ii), our implementation exploiting an integrated photonic circuit
cannot rely on space-like separation between the parties that would enforce
the condition of no direct causal influence between them by physical
principles [56, 57, 58]. Hence, we use our knowledge of the apparatus to
estimate whether conditions (i) and (ii) are satisfied by our apparatus. We
identify the main source of causal influence among the implemented
measurements to be thermal cross-talk. When some of the circuit
resistors are heated to perform given measurements, heat dispersion also
affects the other ones, introducing a correlation between the operations carried
out on different photons. However, this effect can be considered negligible,
as shown by the chip characterization measurements reported in the
Supplementary S-I.1.
We thus assume that the level of causal influence of our experimental
apparatus is not sufficient to reproduce the observed correlations, thus
justifying that the observed violation of the Bell-like inequality certifies
the presence of non-classical correlations.
## S-IV Protocol for Quantum Secret Sharing
We summarize in the following the 4-parties quantum secret sharing protocol
described in [28], adapted to the use of our target state.
1. 1.
The regulator A prepares a string of 4-qubits in a GHZ state of the form:
$\ket{\text{GHZ}_{4}}=(\ket{1010}+\ket{0101})/\sqrt{2}$, keeps qubit 1 and
distributes qubit 2 to B, qubit 3 to C and qubit 4 to D.
2. 2.
All four parties randomly choose a measurement basis for their qubit, either
$\sigma_{x}$ or $\sigma_{y}$, whose eigenstates are $|\pm
x\rangle$ and $|\pm y\rangle$ respectively.
3. 3.
Depending on the choice of each party, it is possible to identify the
following cases:
1. (a)
All parties measure the qubits according to the same basis, e.g. $\sigma_{x}$.
Accordingly, the state of the qubits can be written as an equal superposition
of all terms that contain only an even number of $\ket{-x}$ states. Thanks to
this, B, C and D can determine the result of A by comparing their results: if
they measured an even number of $\ket{-x}$ states then A would find its qubit
in the state $\ket{+x}$, and vice versa.
2. (b)
One of the parties measures on a different basis with respect to all others.
Accordingly, the 4-qubits state can be written as an equal superposition of
all 16 possible combinations of the measurements eigenstates and, due to this,
the joint knowledge of the results of B, C and D produces no information on
A’s.
3. (c)
Two parties sharing correlated qubits, e.g. A and C, measure on $\sigma_{x}$,
while the others measure on $\sigma_{y}$. In this case the overall state can
be written as a superposition of all terms that contain only an odd number of
negative states ($\ket{-x}$ or $\ket{-y}$). As in case (a), parties
B, C and D can determine A's result by counting the number of negative states
they found.
(d) Two parties sharing anti-correlated qubits, e.g. A and B, measure
$\sigma_{x}$, while the others measure $\sigma_{y}$. In this case the overall
state can be written as a superposition of all terms that contain an even
number of negative states. Again, A's result can be retrieved from those of B,
C and D.
4. B, C and D publicly announce their measurement bases. A compares their
choices with its own, instructs B, C and D to discard the results belonging to
case (b), and makes its own basis choice public for the other cases. Among the
16 possible combinations of choices, 2 belong to case (a), 8 to case (b), 2 to
case (c) and 4 to case (d), so on average there is a 50% probability of
obtaining a useful result.
5. The qubit sharing and measurement process is iterated 2N times, which leads
to the establishment of a sifted key of length N.
6. A part of the sifted key can be exchanged publicly and used for quantum bit
error rate (QBER) evaluation. As in the BB84 QKD scheme, eavesdropping on the
communication can be detected through anomalously high values of the QBER.
7. If the QBER is within the agreed threshold, the remaining part of the
string can be used for error correction and privacy amplification.
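The parity rules in cases (a), (c) and (d) can be checked numerically. The following sketch is our own illustration (not code from [28]): it prepares $\ket{\text{GHZ}_4}$ as a statevector, with qubit 1 as the most significant bit, rotates each qubit into its chosen measurement basis, and samples outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

# target state (|1010> + |0101>)/sqrt(2); qubit 1 = most significant bit
psi = np.zeros(16, complex)
psi[0b1010] = psi[0b0101] = 1 / np.sqrt(2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # maps |+x>, |-x> to |0>, |1>
Sdag = np.diag([1.0, -1.0j])                  # H @ Sdag maps |+y>, |-y> to |0>, |1>

def measure(state, bases):
    """Rotate every qubit into its measurement basis ('x' or 'y') and sample
    one 4-bit outcome; bit value 1 means the '-' eigenstate was found."""
    U = np.array([[1.0 + 0j]])
    for b in bases:
        U = np.kron(U, H if b == 'x' else H @ Sdag)
    probs = np.abs(U @ state) ** 2
    k = rng.choice(16, p=probs / probs.sum())
    return [(k >> (3 - i)) & 1 for i in range(4)]  # outcomes for A, B, C, D

# case (a): all sigma_x                               -> even number of '-'
# case (c): correlated pair (A, C) on sigma_x         -> odd number of '-'
# case (d): anti-correlated pair (A, B) on sigma_x    -> even number of '-'
for bases, parity in [("xxxx", 0), ("xyxy", 1), ("xxyy", 0)]:
    assert all(sum(measure(psi, bases)) % 2 == parity for _ in range(200))
```

Sampling many shots per basis choice confirms the parity structure that the parties exploit to reconstruct A's result.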
[a]Fabian Joswig
# Exploring distillation at the $SU(3)$ flavour symmetric point
Felix Erben, Maxwell T. Hansen, Nelson Pitanga Lachini, Antonin Portelli
###### Abstract
In these proceedings we present an exact distillation setup with stabilised
Wilson fermions at the $SU(3)$ flavour symmetric point utilising the
flexibility of the Grid and Hadrons software libraries. This work is a
stepping stone towards a non-perturbative investigation of hadronic
$D$-decays, for which one needs to control the multi-hadron final states. As a
first step we study two-to-two $s$-wave scattering of pseudoscalar mesons. In
particular we examine the reliability of the extraction of finite-volume
energies as a function of the number of eigenvectors of the gauge-covariant
Laplacian entering our distillation setup.
## 1 Introduction
The violation of charge conjugation symmetry (C) as well as the violation of
this combined with parity (CP) are necessary conditions to explain the matter-
antimatter asymmetry in the universe [1]. There are various known sources of
CP-violation in the standard model of particle physics (see e.g. refs. [2, 3]
for pedagogical overviews), but their combined effect appears to be too small
to account for the observed asymmetry [4]. Recently the LHCb experiment
observed nonzero CP asymmetry in the decay of charmed hadrons for the first
time [5] and estimated the difference of the time-integrated asymmetries in
$D^{0}\rightarrow K^{-}K^{+}$ and $D^{0}\rightarrow\pi^{-}\pi^{+}$ decays to
be
$\displaystyle\Delta
A_{\mathrm{CP}}=A_{\mathrm{CP}}(K^{-}K^{+})-A_{\mathrm{CP}}(\pi^{-}\pi^{+})=(-15.4\pm
2.9)\times 10^{-4}\,.$ (1)
These kinds of decays can be a test case for beyond-the-standard-model
dynamics in the up-quark sector but the corresponding theoretical standard
model predictions are difficult to compute reliably. (See for example ref.
[6].)
In these proceedings, we describe our progress towards a first calculation of
hadronic $D$-decays from first principles using Monte Carlo simulations of
lattice QCD at heavier-than-physical quark masses. Our strategy to obtain the
desired decay amplitude is to compute the required matrix element via an
effective four-quark Hamiltonian $\mathcal{H}_{\mathrm{weak}}$ [7] in a finite
spacetime volume. We can then relate the finite-volume matrix element to its
infinite-volume counterpart via the relation due to Lellouch and Lüscher [8]
and subsequent generalizations [9, 10, 11, 12, 13, 14]:
$\displaystyle|A|^{2}=8\pi\left\{q\frac{\partial\phi}{\partial q}+k\frac{\partial\delta_{0}}{\partial k}\right\}_{k=k_{n}}\frac{E_{n}^{2}m_{D}}{k_{n}^{3}}\,\left|Z^{\overline{\mathrm{MS}}}\langle n,L|\mathcal{H}_{\mathrm{weak}}|D,L\rangle\right|^{2}\,,$
(2)
where the quantity in angled brackets is a finite-volume matrix element that
can be determined via lattice QCD, and depends on the box length $L$ and the
final state $n$ as indicated. The renormalization factor
$Z^{\overline{\mathrm{MS}}}$ is the link between the lattice regularized
matrix element and its continuum counterpart. The factor multiplying the
renormalized matrix element, often called the Lellouch-Lüscher factor, relates
it to the infinite-volume decay amplitude and depends on $\phi(q)$ (a known
geometric function of $q=kL/(2\pi)$ where $k$ is the momentum of the decay
products). The factor additionally depends on $\delta_{0}$, the $s$-wave
scattering phase of the final-state particles. A central challenge here is
that the $\langle n,L|$ state must satisfy $E_{n}=m_{D}$, where $m_{D}$ is the
incoming meson mass, in order to define a physical decay.
On the way towards a first full computation of hadronic $D$-decays on the
lattice, various theoretical and computational challenges have to be overcome.
(See ref. [15] for a recent more general discussion of numerical challenges in
lattice QCD simulations.) In these proceedings we particularly focus on
studying $K\pi$ scattering, with an eye on $D\to K\pi$ decays. The scattering
study is required for two reasons. First, as can be seen in eq. (2), the
scattering phase shift is required to extract the physical observable. Second,
as we detail below, the analysis requires the construction of optimized
operators that can then also be used to create the excited $\langle n,L|$
state in the decay.
## 2 Computational setup
In this study we work on a set of gauge-field ensembles with three flavors of
stabilised Wilson fermions [16] and tree-level Symanzik improved gluons. The
ensembles were generated by the OPEN LATtice initiative [17, 18, 19] using the
openQCD software package [20]. The three degenerate sea quarks in the
simulation are tuned such that the sum of their masses matches the physical
sum of the up, down and strange quark masses. The gauge-field ensembles which we plan to use
this project are summarized in Table 1. All preliminary results presented in
these proceedings are only based on the coarsest ensemble, labeled a12m400. We
plan to extend this calculation to two additional ensembles with very similar
pion masses and physical volumes but finer lattice spacings, which can be
considered to lie on an approximate line of constant physics. This will allow
us to control the continuum limit of the calculation which is especially
important for observables with heavy valence quarks.
Label | $T\times L^{3}/a^{4}$ | $\beta$ | $\kappa$ | a (fm) | $m_{\pi}$ (MeV)
---|---|---|---|---|---
a12m400 | $96\times 24^{3}$ | 3.685 | 0.1394305 | 0.12 | 410
a094m400 | $96\times 32^{3}$ | 3.8 | 0.1389630 | 0.094 | 410
a064m400 | $96\times 48^{3}$ | 4.0 | 0.1382720 | 0.064 | 410
Table 1: Planned gauge-field ensembles for this project. All preliminary
results were generated on ensemble a12m400. We plan to include the two finer
ensembles in the near future.
For the computation of the relevant correlation functions, we make use of the
exact distillation method described in ref. [21]. In this approach, a smearing
matrix $\mathcal{S}$ is obtained from the low-mode subspace of the three-
dimensional gauge-covariant Laplacian
$\displaystyle\mathcal{S}(t)=\sum_{k=1}^{N_{\mathrm{vec}}}u_{k}(t)u_{k}(t)^{\dagger}\,,$
(3)
where $u_{k}$ are the eigenvectors of $-\nabla^{2}$. Correlation functions can
then be cost-effectively built from the smeared quark fields
$\displaystyle\tilde{q}=\mathcal{S}q\,.$ (4)
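A toy version of eq. (3) can be written down with a free (unit-gauge-link) one-dimensional Laplacian. This is our own illustration: it ignores gauge links, colour, and the time dependence of $\mathcal{S}(t)$, but shows that the smearing matrix is a projector onto the low-mode subspace and spreads a point source.

```python
import numpy as np

# free (unit-gauge-link) Laplacian on a small periodic 1-d lattice; in
# production S(t) is built from the gauge-covariant 3-d Laplacian instead
L, n_vec = 32, 8
lap = -2 * np.eye(L) + np.eye(L, k=1) + np.eye(L, k=-1)
lap[0, -1] = lap[-1, 0] = 1          # periodic boundary conditions

# eigenvectors of -nabla^2, low modes first
w, u = np.linalg.eigh(-lap)
u = u[:, np.argsort(w)][:, :n_vec]

# distillation smearing matrix S = sum_k u_k u_k^dagger   (eq. 3)
S = u @ u.conj().T

# S projects onto the low-mode subspace, so it is idempotent ...
assert np.allclose(S @ S, S)

# ... and a smeared point source q~ = S q (eq. 4) is spread over nearby sites
q = np.zeros(L)
q[L // 2] = 1.0
q_smeared = S @ q
```

Increasing `n_vec` (a stand-in for $N_{\mathrm{vec}}$) narrows the spatial spread of the smeared source, which is the trade-off discussed in section 3.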
For the $I=3/2$ $s$-wave channel, which will be the focus of these
proceedings, we construct correlation functions from $K\pi$ two-hadron
interpolators with different momenta.
From the relevant operators for a given channel we construct a correlator
matrix $C$ and solve a generalized eigenvalue problem (GEVP) [22, 23, 24, 25]
$\displaystyle
C(t)v_{i}(t,t_{0})=\lambda_{i}(t,t_{0})C(t_{0})v_{i}(t,t_{0})\,,$ (5)
in order to obtain the eigenvalues $\lambda_{i}(t,t_{0})\sim
e^{-aE_{i}(L)(t-t_{0})}$ for a desired state $i$.
Our computational setup is based on the Grid [26] and Hadrons [27] program
libraries. The distillation modules are also used in an ongoing $K\pi$
scattering study at the physical point using Domain Wall fermions [28, 29].
For the error analysis we make use of the $\Gamma$-method approach [30, 31] in
the pyerrors implementation [32].
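For orientation, a stripped-down version of the $\Gamma$-method for a primary observable can be sketched in a few lines. This is our own simplification: pyerrors [32] additionally handles derived observables, multiple replica, and Wolff's full automatic windowing criterion, whereas the window rule below is a crude stand-in.

```python
import numpy as np

def gamma_method(data, s_tau=1.5):
    """Simplified Gamma-method error for a primary observable.

    Uses a crude windowing rule W >= s_tau * tau_int instead of Wolff's
    g(W) < 0 criterion; illustration only."""
    n = len(data)
    d = data - data.mean()
    # autocorrelation function Gamma(t) and normalised rho(t)
    gamma = np.array([d[: n - t] @ d[t:] / (n - t) for t in range(n // 2)])
    rho = gamma / gamma[0]
    tau_int = 0.5
    for w in range(1, n // 2 - 1):
        tau_int = 0.5 + rho[1 : w + 1].sum()
        if w >= s_tau * max(tau_int, 0.5):
            break
    # error of the mean including autocorrelations
    err = np.sqrt(2.0 * tau_int / n * gamma[0])
    return data.mean(), err, tau_int

# strongly autocorrelated AR(1) test series: the naive error underestimates
rng = np.random.default_rng(1)
x = np.empty(4000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.8 * x[i - 1] + rng.normal()
mean, err, tau = gamma_method(x)
assert tau > 1.0 and err > x.std() / np.sqrt(len(x))
```

The ratio of the $\Gamma$-method error to the naive $\sigma/\sqrt{N}$ estimate is $\sqrt{2\tau_{\mathrm{int}}}$, which is why accounting for autocorrelations matters for Monte Carlo data.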
## 3 Eigenvector dependence of the finite-volume energies
When choosing the number of eigenvectors $N_{\mathrm{vec}}$ of the gauge-
covariant Laplacian for the smearing matrix, eq. 3, one has to find a
compromise between the statistical error and anticipated smearing radius on
the one side and the computational cost and memory requirement on the other.
One method to get an idea of the required number of eigenvectors is to look at
the spatial distribution of the distillation operator as suggested in ref.
[21]. This spatial distribution function, defined as
$\displaystyle\Psi(r)=\sum_{\mathbf{x}}\frac{\sqrt{\mathrm{tr}\mathcal{S}_{\mathbf{x},\mathbf{x}+\mathbf{r}}\mathcal{S}_{\mathbf{x}+\mathbf{r},\mathbf{x}}}}{\sqrt{\mathrm{tr}\mathcal{S}_{\mathbf{x},\mathbf{x}}\mathcal{S}_{\mathbf{x},\mathbf{x}}}}\,,$
(6)
with $r=|\mathbf{r}|$ is shown in Figure 1 for ensemble a12m400 and
$N_{\mathrm{vec}}\in\{20,40,60\}$.
Figure 1: Smearing profile of the distillation operator as a function of
$N_{\mathrm{vec}}$ on ensemble a12m400 with stout-smearing [33] parameters
$\rho=0.1$ and $n=1$.
The smearing profile allows one to get an estimate for the smearing radius for
a given number of eigenvectors and therefore an idea of the relevant scales in
the computation. It is, however, non-trivial to determine how the estimated
smearing radius translates into operator overlaps and the overall statistical
precision of the data. Instead of using the smearing profile as a benchmark for the impact of
$N_{\mathrm{vec}}$ we opt for an empirical approach and study the quality of
finite-volume energies extracted via a GEVP as a function of
$N_{\mathrm{vec}}$.
In this contribution we restrict our discussion to the repulsive $I=3/2$
$s$-wave channel. In Figure 2 we show the effective masses defined by
$\displaystyle
am_{\mathrm{eff}}(t)=\log\left(\frac{\lambda(t,t_{0})}{\lambda(t+1,t_{0})}\right)\,,$
(7)
where $\lambda$ is an eigenvalue obtained by solving the GEVP defined in eq. 5
for a given state $i$ and reference time slice $t_{0}=2$.
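Eqs. (5) and (7) can be illustrated with a synthetic two-state correlator matrix (our own mock data, not the measured $K\pi$ correlators). With exact two-state input, the $2\times2$ GEVP resolves both energies exactly:

```python
import numpy as np
from scipy.linalg import eigh

# mock two-state correlator C_ij(t) = sum_n Z_in Z_jn exp(-a E_n t)
E = np.array([0.5, 0.9])                  # lattice energies a*E_n (made up)
Z = np.array([[1.0, 0.4], [0.3, 1.0]])    # overlap factors (made up)
C = lambda t: (Z * np.exp(-E * t)) @ Z.T

t0 = 2
# solve C(t) v = lambda C(t0) v for each t              (eq. 5)
lam = np.array([np.sort(eigh(C(t), C(t0), eigvals_only=True))[::-1]
                for t in range(t0, 12)])

# effective masses a m_eff(t) = log(lambda(t)/lambda(t+1))   (eq. 7)
m_eff = np.log(lam[:-1] / lam[1:])
assert np.allclose(m_eff, E)              # both states resolved
```

On real data the eigenvalues carry statistical noise and excited-state contamination, so $m_{\mathrm{eff}}(t)$ only plateaus at the energies, as in Figure 2.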
Figure 2: Effective masses from a GEVP with $t_{0}=2$ for different values of
$N_{\mathrm{vec}}$ for $i=0,1,2,3$ from bottom to top calculated on 77
configurations of ensemble a12m400.
From the bottom panel it becomes obvious that $N_{\mathrm{vec}}$ has very
little impact on the quality of the extracted ground-state energy. In line
with our expectation, a smaller number of eigenvectors corresponds to a larger
smearing radius and thus results in slightly better overlap with the ground
state, which can be seen from the fact that the effective mass settles to a
plateau at earlier source-sink separations $x_{0}/a$. This observation changes
drastically for the higher excited states, for which a larger number of
eigenvectors improves both the statistical quality and the overlap of the
GEVP-extracted correlator with the desired state. Particularly for the third
excited state (top panel in Figure 2), $N_{\mathrm{vec}}=20$ does not seem to
suffice to obtain a reliable estimate of the associated finite-volume energy.
The fact that the plateaus for all relevant states in this channel agree at
moderate source-sink separations for both $N_{\mathrm{vec}}=40$ and
$N_{\mathrm{vec}}=60$ confirms that $60$ eigenvectors is a reasonable
compromise, and we proceed with this setup.
## 4 Lellouch-Lüscher factors
From the finite-volume energies extracted from our preferred data set with
$N_{\mathrm{vec}}=60$ we can derive the corresponding scattering phase shifts
via the relation
$\displaystyle\delta_{0}(q)=\arctan\left(\frac{\pi^{\frac{3}{2}}q}{Z_{00}(1;q^{2})}\right)\,,\quad
q=\frac{kL}{2\pi}\,,$ (8)
originally derived by Lüscher [34, 35]. The corresponding phase shifts which
we obtain for the $I=3/2$ $s$-wave channel are shown in Figure 3 as a function
of $k/m_{\pi}$, together with a linear fit to the data.
Figure 3: $I=3/2$ $s$-wave scattering phase shifts as a function of
$k/m_{\pi}$, together with a linear fit to the data.
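The linear fit behind Figure 3 amounts to a simple least-squares problem. The sketch below uses made-up $(k/m_{\pi},\delta_{0})$ points for a weakly repulsive channel, not the values read off the figure:

```python
import numpy as np

# illustrative (k/m_pi, delta_0) points for a weakly repulsive channel;
# these numbers are made up for the sketch, not taken from Figure 3
k_over_mpi = np.array([0.33, 0.81, 1.14, 1.41])
delta0 = np.array([-0.048, -0.139, -0.201, -0.252])   # radians

slope, intercept = np.polyfit(k_over_mpi, delta0, 1)

# the fitted slope gives d(delta_0)/dk = slope / m_pi, the derivative that
# enters the Lellouch-Luescher factor in eq. (2) through k * d(delta_0)/dk
delta_model = lambda k_mpi: slope * k_mpi + intercept
assert slope < 0         # repulsive channel: phase shift decreases with k
```

Any smooth parametrization of $\delta_{0}(k)$ would do; the linear model simply provides the derivative needed to evaluate the proportionality factors at the measured momenta.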
With this model for the scattering phase shift we can get an estimate for the
“Lellouch-Lüscher” factors relating the finite-volume to the infinite-volume
decay amplitudes which are summarized in Table 2.
$q$ | F
---|---
$0.110(16)$ | $117(27)$
$1.0253(87)$ | $69.84(65)$
$1.4375(93)$ | $59.60(41)$
$1.7530(96)$ | $80.99(37)$
Table 2: Finite-to-infinite-volume proportionality factors
$F^{2}=8\pi\left\{q\frac{\partial\phi}{\partial q}+k\frac{\partial\delta_{0}}{\partial k}\right\}\frac{E_{n}^{2}}{k_{n}^{3}}$.
Figure 4: Finite-to-infinite-volume proportionality factors as a function of
$q$, divided by their non-interacting counterparts. The shaded blue area is
not a fit to the data but an estimate of the expected statistical uncertainty
for values of $q$ which are not covered by our data set.
To get an idea of the overall impact of our scattering calculation on the
proportionality factors we display these factors divided by their non-
interacting counterparts in Figure 4. For all four $q$ values in our
calculation we see a statistically significant difference from unity,
highlighting the importance of the scattering analysis for the extraction of
hadronic $D$-decay amplitudes.
## 5 Conclusions & Outlook
In these proceedings we describe our steps towards the first ab-initio
calculation of hadronic decays of $D$-mesons. In our simplified setup we focus
on the $D\rightarrow K\pi$ decay channel at non-physical quark masses. In
order to construct operators which excite $K\pi$ final states with energies
close to the $D$-meson mass we make use of the exact distillation method. We
used an empirical approach to determine the number of eigenvectors of the
gauge-covariant Laplacian and found that $N_{\mathrm{vec}}=60$ is a good
compromise for our setup. With the results from a scattering phase shift analysis for the
repulsive $I=3/2$ $K\pi$ channel we were able to obtain first results for the
“Lellouch-Lüscher” proportionality factors which relate the finite-volume
matrix elements to the infinite-volume decay amplitudes.
## Acknowledgments
M. T. H. and F. J. are supported by UKRI Future Leader Fellowship
MR/T019956/1. N. L. and A. P. received funding from the European Research
Council (ERC) under the European Union’s Horizon 2020 research and innovation
programme under grant agreement No 813942. A. P., M. T. H. and F. E. are
supported in part by UK STFC grant ST/P000630/1. A. P. and F. E. also received
funding from the European Research Council (ERC) under the European Union’s
Horizon 2020 research and innovation programme under grant agreements No
757646.
This work used the DiRAC Extreme Scaling service at the University of
Edinburgh, operated by the Edinburgh Parallel Computing Centre on behalf of
the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by
BEIS capital funding via STFC capital grant ST/R00238X/1 and STFC DiRAC
Operations grant ST/R001006/1. DiRAC is part of the National e-Infrastructure.
The authors acknowledge the open lattice initiative for providing the gauge
ensembles. Generating the ensembles the open lattice initiative received
support from the computing centres hpc-qcd (CERN), Occigen (CINES), Jean-Zay
(IDRIS) and Irène-Joliot-Curie (TGCC) under projects
(2020,2021,2022)-A0080511504 and (2020,2021,2022)-A0080502271 by GENCI as well
as project 2021250098 by PRACE and from the DiRAC Extreme Scaling service.
## References
* [1] A. D. Sakharov, _Violation of CP Invariance, C asymmetry, and baryon asymmetry of the universe_ , _Pisma Zh. Eksp. Teor. Fiz._ 5 (1967) 32–35.
* [2] Y. Nir, _CP violation in and beyond the standard model_ , in _27th SLAC Summer Institute on Particle Physics: CP Violation in and Beyond the Standard Model_ , pp. 165–243, 7, 1999. hep-ph/9911321.
* [3] A. J. Buras, _Flavor dynamics: CP violation and rare decays_ , _Subnucl. Ser._ 38 (2002) 200–337, [hep-ph/0101336].
* [4] V. A. Rubakov and M. E. Shaposhnikov, _Electroweak baryon number nonconservation in the early universe and in high-energy collisions_ , _Usp. Fiz. Nauk_ 166 (1996) 493–537, [hep-ph/9603208].
* [5] (LHCb), R. Aaij et al., _Observation of CP Violation in Charm Decays_ , _Phys. Rev. Lett._ 122 (2019) 211803, [1903.08726].
* [6] A. Khodjamirian and A. A. Petrov, _Direct CP asymmetry in $D\to\pi^{-}\pi^{+}$ and $D\to K^{-}K^{+}$ in QCD-based approach_, _Phys. Lett. B_ 774 (2017) 235–242, [1706.07780].
* [7] G. Buchalla, A. J. Buras and M. E. Lautenbacher, _Weak decays beyond leading logarithms_ , _Rev. Mod. Phys._ 68 (1996) 1125–1144, [hep-ph/9512380].
* [8] L. Lellouch and M. Lüscher, _Weak transition matrix elements from finite volume correlation functions_ , _Commun. Math. Phys._ 219 (2001) 31–44, [hep-lat/0003023].
* [9] C. h. Kim, C. T. Sachrajda and S. R. Sharpe, _Finite-volume effects for two-hadron states in moving frames_ , _Nucl. Phys. B_ 727 (2005) 218–243, [hep-lat/0507006].
* [10] N. H. Christ, C. Kim and T. Yamazaki, _Finite volume corrections to the two-particle decay of states with non-zero momentum_ , _Phys. Rev. D_ 72 (2005) 114506, [hep-lat/0507009].
* [11] M. T. Hansen and S. R. Sharpe, _Multiple-channel generalization of Lellouch-Lüscher formula_ , _Phys. Rev. D_ 86 (2012) 016007, [1204.0826].
* [12] R. A. Briceño and Z. Davoudi, _Moving multichannel systems in a finite volume with application to proton-proton fusion_ , _Phys. Rev._ D88 (2013) 094507, [1204.1110].
* [13] V. Bernard, D. Hoja, U. Meißner and A. Rusetsky, _Matrix elements of unstable states_ , _JHEP_ 09 (2012) 023, [1205.4642].
* [14] R. A. Briceño, M. T. Hansen and A. Walker-Loud, _Multichannel 1 $\rightarrow$ 2 transition amplitudes in a finite volume_, _Phys. Rev._ D91 (2015) 034501, [1406.5965].
* [15] P. Boyle et al., _Lattice QCD and the Computational Frontier_ , in _2022 Snowmass Summer Study_ , 3, 2022. 2204.00039.
* [16] A. Francis, P. Fritzsch, M. Lüscher and A. Rago, _Master-field simulations of O( $a$)-improved lattice QCD: Algorithms, stability and exactness_, _Comput. Phys. Commun._ 255 (2020) 107355, [1911.04533].
* [17] OPEN LATtice initiative. https://openlat1.gitlab.io/.
* [18] A. S. Francis, F. Cuteri, P. Fritzsch, G. Pederiva, A. Rago, A. Schindler et al., _Properties, ensembles and hadron spectra with Stabilised Wilson Fermions_ , _PoS_ LATTICE2021 (2022) 118, [2201.03874].
* [19] F. Cuteri, A. Francis, P. Fritzsch, G. Pederiva, A. Rago, A. Shindler et al., _Gauge generation and dissemination in OpenLat_ , 12, 2022. 2212.07314.
* [20] M. Lüscher and S. Schaefer. http://luscher.web.cern.ch/luscher/openQCD.
* [21] (Hadron Spectrum), M. Peardon, J. Bulava, J. Foley, C. Morningstar, J. Dudek, R. G. Edwards et al., _A Novel quark-field creation operator construction for hadronic physics in lattice QCD_ , _Phys. Rev. D_ 80 (2009) 054506, [0905.2160].
* [22] K. G. Wilson. Talk at the Abingdon Meeting on Lattice Gauge Theories, 1981.
* [23] C. Michael and I. Teasdale, _Extracting glueball masses from lattice QCD_ , _Nucl. Phys. B_ 215 (1983) 433–446.
* [24] M. Lüscher and U. Wolff, _How to Calculate the Elastic Scattering Matrix in Two-dimensional Quantum Field Theories by Numerical Simulation_ , _Nucl. Phys. B_ 339 (1990) 222–252.
* [25] B. Blossier, M. Della Morte, G. von Hippel, T. Mendes and R. Sommer, _On the generalized eigenvalue method for energies and matrix elements in lattice field theory_ , _JHEP_ 04 (2009) 094, [0902.1265].
* [26] P. Boyle, A. Yamaguchi, G. Cossu and A. Portelli, _Grid: A next generation data parallel C++ QCD library_ , 1512.03487.
* [27] A. Portelli, R. Abott, N. Asmussen, A. Barone, P. A. Boyle, F. Erben et al., _aportelli/hadrons: Hadrons v1.3_ , Mar., 2022. 10.5281/zenodo.6382460.
* [28] N. P. Lachini, P. Boyle, F. Erben, M. Marshall and A. Portelli, _$K\pi$ scattering at physical pion mass using distillation_, _PoS_ LATTICE2021 (2022) 435, [2112.09804].
* [29] N. P. Lachini, P. Boyle, F. Erben, M. Marshall and A. Portelli, _Towards $K\pi$ scattering with domain-wall fermions at the physical point using distillation_, 2211.16601.
* [30] (ALPHA), U. Wolff, _Monte Carlo errors with less errors_ , _Comput. Phys. Commun._ 156 (2004) 143–153, [hep-lat/0306017]. [Erratum: Comput.Phys.Commun. 176, 383 (2007)].
* [31] A. Ramos, _Automatic differentiation for error analysis of Monte Carlo data_ , _Comput. Phys. Commun._ 238 (2019) 19–35, [1809.01289].
* [32] F. Joswig, S. Kuberski, J. T. Kuhlmann and J. Neuendorf, _pyerrors: a python framework for error analysis of Monte Carlo data_ , 2209.14371.
* [33] C. Morningstar and M. J. Peardon, _Analytic smearing of SU(3) link variables in lattice QCD_ , _Phys. Rev. D_ 69 (2004) 054501, [hep-lat/0311018].
* [34] M. Lüscher, _Two particle states on a torus and their relation to the scattering matrix_ , _Nucl. Phys. B_ 354 (1991) 531–578.
* [35] M. Lüscher, _Signatures of unstable particles in finite volume_ , _Nucl. Phys. B_ 364 (1991) 237–251.
††thanks: CONTACT Alexey Melnikov. Email<EMAIL_ADDRESS> Please
refer to the updated and published version of the paper, which includes the
most recent additions and revisions: Mohammad Kordzanganeh, Markus Buchberger,
Basil Kyriacou, Maxim Povolotskii, Wilhelm Fischer, Andrii Kurkin, Wilfrid
Somogyi, Asel Sagingalieva, Markus Pflitsch, Alexey Melnikov. Benchmarking
simulated and physical quantum processing units using quantum and hybrid
algorithms. Adv. Quantum Technol. 6, 2300043 (2023), DOI:
10.1002/qute.202300043
# Benchmarking simulated and physical quantum processing units
using quantum and hybrid algorithms
Mohammad Kordzanganeh Markus Buchberger Basil Kyriacou Maxim Povolotskii
Wilhelm Fischer Andrii Kurkin Wilfrid Somogyi Asel Sagingalieva Markus
Pflitsch Alexey Melnikov Terra Quantum AG, 9000 St. Gallen, Switzerland
QMware AG, 9000 St. Gallen, Switzerland
###### Abstract
Powerful hardware services and software libraries are vital tools for quickly
and affordably designing, testing, and executing quantum algorithms. A robust
large-scale study of how the performance of these platforms scales with the
number of qubits is key to providing quantum solutions to challenging industry
problems. This work benchmarks the runtime and accuracy for a representative
sample of specialized high-performance simulated and physical quantum
processing units. Results show the QMware simulator can reduce the runtime for
executing a quantum circuit by up to 78% compared to the next fastest option
for algorithms with fewer than 27 qubits. The AWS SV1 simulator offers a
runtime advantage for larger circuits, up to the maximum 34 qubits available
with SV1. Beyond this limit, QMware can execute circuits as large as 40
qubits. Physical quantum devices, such as Rigetti’s Aspen-M2, can provide an
exponential runtime advantage for circuits with more than 30 qubits. However,
the high financial cost of physical quantum processing units presents a
serious barrier to practical use. Moreover, only IonQ’s Harmony quantum device
achieves high fidelity with more than four qubits. This study paves the way to
understanding the optimal combination of available software and hardware for
executing practical quantum algorithms.
## I Introduction
Quantum computing is a rapidly growing field of technology with increasingly
useful applications across both industry and research. This new paradigm of
computing has the potential to solve classically-intractable problems, by
exploiting an exponentially-increasing computational space. This allows
quantum algorithms to dramatically reduce the runtime for solving
computationally resource-intensive problems.
There is a plethora of quantum algorithms, of which parameterized quantum
circuits represent the most general form. Quantum neural networks (QNNs) are
quantum machine learning (QML) algorithms [1, 2, 3, 4] that leverage powerful
techniques developed for classical neural networks, to optimize this
parameterized structure, and have already been applied to solve a number of
industrial problems [5, 6, 7, 8, 9, 10, 11, 12]. The complexity and
performance of classical neural networks employed to solve data-intensive
problems has grown dramatically in the last decade. Although algorithmic
efficiency has played a partial role in improving performance, hardware
development (including parallelism and increased scale and spending) is the
primary driver behind the progress of artificial intelligence [13, 14]. Unlike
their classical counterparts, QNNs are able to learn a generalized model of a
dataset from a substantially smaller training set [15, 16, 17] and typically
have the potential to do so with polynomially or exponentially simpler models
[18, 19, 20]. Thus, they provide a promising opportunity to subvert the
scaling problem encountered in classical machine learning [21, 22, 5, 23, 24,
25, 26, 27], which presents a serious challenge for data-intensive problems
that are increasingly bottle-necked by hardware limitations [28, 29, 30].
Nonetheless, even for a small dataset, training QNNs requires on the order of
a million circuit evaluations. This is a consequence of the product of the
number of data points, the evaluations required for calculating the gradient
[31], and the number of iterations before a solution is reached. This makes them a relatively
challenging and resource-intensive use case for quantum processing units
(QPU). Therefore, QNNs require stable, on-demand, and accurate quantum circuit
execution. A plethora of different options for executing quantum circuits
exist. These are either physical QPUs or classical hardware simulating quantum
behaviour. In both cases multiple vendor options and services are available.
Establishing the combination of software and hardware that provides the
optimum runtime, cost, and accuracy is crucial to the future of democratizing
quantum software development.
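The "on the order of a million circuit evaluations" estimate follows from a simple product. The numbers below are illustrative, not the paper's benchmark settings:

```python
# rough training cost of a QNN with the parameter-shift rule
# (illustrative numbers, not the settings used in this study)
n_params = 20        # trainable circuit parameters
n_data = 500         # training points
n_epochs = 50        # optimizer iterations over the dataset

evals_per_gradient = 2 * n_params          # two shifted circuits per parameter
total_circuit_evaluations = n_data * evals_per_gradient * n_epochs
assert total_circuit_evaluations == 1_000_000
```

Even these modest settings already require a million circuit executions, which is why runtime per circuit dominates the practicality of training on QPUs.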
In this study, different pairings of software development kits (SDKs) and
hardware platforms are compared in order to determine the fastest and most
cost-efficient route to developing novel quantum algorithms. This benchmark is
performed using QNNs, which represent the most general form of a quantum
algorithm. The benchmark indicates an advantage in using the QMware basiq
simulator for circuits with 2 to 26 qubits, AWS SV1 for 28-34 qubits, and
QMware basiq for 36-40 qubits. Additionally, QPUs from four different vendors
(IonQ, Oxford Quantum Circuits (OQC), IBM, and Rigetti) were benchmarked for
runtime, accuracy, and cost. The results show QPUs could become time-
competitive in a practical use case for circuits with 30 qubits or more.
However, the current low fidelity attained by many of these systems precludes
their application to industrial problems. The Python implementation code for
the benchmarks presented in this study is available in Ref. [32].
This investigation is not an exhaustive benchmark of all available systems,
and it is worth acknowledging the existence of other state-of-the-art
services. This includes hardware quantum simulators such as IBM’s simulator
statevector and Atos’s Quantum Learning Machine [33], as well as software
backends such as the IBM Qiskit machine learning suite and the Qulacs package
[34, 35]. It also includes QPUs such as IBM’s Eagle processor [36],
Honeywell’s System Model H1 [37], and Google’s Sycamore processor [38]. The
authors also acknowledge other works that involved a similar technique in
performing benchmarks on quantum hardware. These include hardware specific
performance benchmarks such as Refs. [39, 40] as well as metric-specific tests
such as Refs. [41, 42, 43, 44, 45, 46, 47].
The work is organized in five sections. Sec. II describes the methodology,
including the benchmark algorithms, the hardware and software tested, as well
as the results of the runtime benchmark. Sec. III details the cost of
executing the proposed quantum circuits on various QPUs and compares their
fidelity. Sec. IV discusses the execution of large quantum circuits, with as
many as 40 qubits, and the implementation of multi-threading in simulators.
Finally, Sec. IV.3 provides a summary of the findings and outlines key
challenges for the quantum computing industry.
## II Runtime Benchmarking
Figure 1: The scheme for the benchmark. Simulated and native QPU stacks are
benchmarked in the present work using open-source software and proprietary
software on AWS, IonQ, OQC, Rigetti, IBM, and QMware.
Quantum simulators are bipartite systems, consisting of a software library and
the hardware on which the software is run. Both play a crucial role in the
development and execution of a quantum circuit. Although the various software
libraries available for quantum simulation are often implemented in a
hardware-agnostic way, the internal implementation of the linear algebraic
methods and the manner in which quantum logic gates are compiled will have a
significant impact on performance. This is true for the execution of any
quantum circuit, but particularly relevant for gradient calculations when
optimizing a QNN during the training phase, due to the use of gradient-based
optimization techniques. In particular, the standard parameter-shift method of
calculating the gradient of the circuit output with respect to each of the $n$
trainable parameters increases the number of expectation values evaluated by a
factor of $2n$. By comparison, the forward pass of a trained QNN requires
evaluating just a single expectation value, which can usually be obtained with
fewer than one thousand circuit shots. The specification of the simulator
hardware also plays an important role in the ability to quickly and
efficiently optimize variational quantum algorithms. Moreover, the synergy
between software and hardware influences how much computational overhead is
required and the ease with which quantum algorithms can be designed, tested,
and deployed.
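The $2n$ scaling of the parameter-shift rule can be seen in a one-parameter toy example. This is our own sketch; libraries such as PennyLane implement this rule for general parameterized gates:

```python
import numpy as np

# toy one-qubit circuit: <Z> after RY(theta)|0>, whose analytic value is cos(theta)
def expval(theta):
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return state @ np.diag([1, -1]) @ state

def parameter_shift_grad(theta, s=np.pi / 2):
    # d<Z>/dtheta = [f(theta + s) - f(theta - s)] / (2 sin s): two circuit
    # evaluations per parameter, hence 2n extra circuits for n parameters
    return (expval(theta + s) - expval(theta - s)) / (2 * np.sin(s))

theta = 0.7
assert np.isclose(parameter_shift_grad(theta), -np.sin(theta))
```

Unlike finite differences, the shifted evaluations give the exact gradient of the expectation value (up to shot noise on hardware), but the cost still grows linearly with the number of trainable parameters.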
For many quantum computing applications, the open-source PennyLane Python
library is extremely popular [48]. It is also the recommended quantum
simulation library for the AWS Braket computing service, which provides a
performance-optimized version of the PennyLane library [49]. PennyLane offers
a variety of qubit devices. The most commonly used is the default.qubit, which
implements a Python backend using NumPy, TensorFlow, JAX, and PyTorch. More
recently the lightning.qubit device was introduced, which implements a high-
performance C++ backend [48]. QMware’s cloud computing service provides a
quantum simulator stack which supports open-source, hardware-agnostic
libraries, such as PennyLane, in addition to QMware’s bespoke quantum
computing Python library basiq [50]. The basiq library also supports a
PennyLane plugin, which translates circuits built using the PennyLane SDK into
circuits that can be executed using the basiq backend.
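What all of these backends ultimately compute is a sequence of linear-algebra operations on a $2^{n}$-dimensional statevector. The deliberately minimal sketch below is our own illustration of that idea, not the API of any of the libraries above:

```python
import numpy as np

def apply_gate(state, gate, target, n_qubits):
    """Apply a one-qubit gate to `target` by reshaping the 2^n statevector."""
    state = state.reshape([2] * n_qubits)
    state = np.moveaxis(np.tensordot(gate, state, axes=([1], [target])),
                        0, target)
    return state.reshape(-1)

n = 2
state = np.zeros(2 ** n, complex)
state[0] = 1.0                                   # |00>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = apply_gate(state, H, 0, n)               # Hadamard on qubit 0 (MSB)

# CNOT (control = qubit 0, target = qubit 1) applied as a full matrix for brevity
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
state = CNOT @ state                             # Bell state (|00> + |11>)/sqrt(2)
assert np.allclose(np.abs(state) ** 2, [0.5, 0, 0, 0.5])
```

Production simulators avoid materializing full gate matrices and parallelize the tensor contractions, which is where the hardware and implementation differences benchmarked here come from.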
### II.1 Methodology
Figure 2: Overview of benchmarking methodology and results. Boxes clockwise from top left: examples of the $n_{d}$-dimensional dataset used for benchmarking for $n_{d}=1,2,3$, each containing $m=500$ data points; the architecture of the pure quantum (top) and hybrid (bottom) neural networks used for the benchmarking calculations; the classification accuracy of the HQNN and an equivalent classical neural network, illustrating the improved learning rate of a hybrid approach; the relative runtime of different software and hardware stacks for a range of circuit sizes, showing the performance advantage of the basiq SDK using QMware’s cloud computing service; training (left) and inference (right) runtime for the HQNN on various simulator software and hardware stacks; training (left) and inference (right) runtime for the pure QNN on various simulator software and hardware stacks (see legend at bottom). Dashed lines in the runtime plots illustrate the most performant stack for a given range of qubits. The inference plots in both cases also illustrate the runtime using real QPUs.
Hardware | Qubits | Native Gates
---|---|---
Rigetti Aspen M-2 | 80 | $\mathrm{RX}$, $\mathrm{RZ}$, $\mathrm{CZ}$, $\mathrm{CP}$
IBMQ Falcon r5.11 | 27 | $\mathrm{I}$, $\mathrm{CX}$, $\mathrm{IFELSE}$, $\mathrm{RZ}$, $\mathrm{SX}$, $\mathrm{X}$
IonQ Harmony | 11 | $\mathrm{GPI}$, $\mathrm{GPI2}$, $\mathrm{MS}$
OQC Lucy | 8 | $\mathrm{I}$, $\mathrm{ECR}$, $\mathrm{V}$, $\mathrm{X}$, $\mathrm{RZ}$
Table 1: Quantum processing units used in the hardware benchmarking tests,
their qubit counts, and native gates.
Initially, the performance of the QMware HQC4020 simulator is compared to the
performance of the AWS Braket ml.m5.24xlarge instance. In both cases the
PennyLane lightning.qubit backend is used in order to evaluate the performance
of the underlying hardware systems; these are referred to as QPL and APL, respectively.
The performance of the QMware basiq library is then benchmarked using both a
native basiq implementation of the QNN and HQNN. This runtime benchmark is
executed on the QMware HQC4020 simulator using 384 vCPUs across all circuit
sizes. Results are compared to the high-performance AWS Braket SV1 simulator
(ASV), as well as the previous QPL and APL benchmark. Finally, the runtime
performance of these simulator stacks is also compared to runtimes achieved
for (H)QNN inference (forward pass) using real QPUs. The QPUs included are
IonQ’s Harmony, OQC’s Lucy, Rigetti’s Aspen M-2, and IBM Quantum’s Falcon
r5.11.
The ml.m5.24xlarge AWS notebook instance provides 96 vCPUs and 384 GiB of
random access memory (RAM). By comparison, the QMware HQC4020 simulator has 12
TB of RAM and 384 vCPUs in total, of which a maximum of 48 are utilized
throughout the benchmarking tests executed on the QMware service. The
exception to this is when benchmarking the PennyLane lightning.qubit backend,
which implements no parallelization for the parameter-shift method. Hence the
results for the QMware PennyLane lightning.qubit (QPL) and AWS PennyLane
lightning.qubit (APL) benchmarks represent single-core calculations on both
hardware services.
The metric used in benchmarks is the training time per epoch per training
sample, which is measured using the Python time library. The expectation value
of each circuit measurement, corresponding to the output prediction of the
(H)QNN, is obtained using 1000 circuit shots. Measuring the execution time for
multiple circuit shots has the effect of reducing the proportion of circuit
initialization time in the quoted runtime for each stack. In practice, quantum
circuits usually require hundreds or thousands of circuit shots to obtain an
accurate expectation value. Benchmarks are performed for a range of circuit
sizes (number of qubits). This is achieved by varying the dimension of the
dataset, $n_{d}\in[1,15]$; the number of qubits increases linearly with the
dimension, $n_{q}=2n_{d}$ (see Sec. II.2 for details).
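The per-epoch, per-sample timing metric can be sketched using the Python standard library (a minimal illustration; `train_step` is a hypothetical stand-in for one training update, not a function from the paper's code):

```python
import time

def time_per_epoch_per_sample(train_step, samples):
    """Wall-clock training time for one epoch, normalized per training sample."""
    start = time.perf_counter()      # monotonic, high-resolution clock
    for x in samples:
        train_step(x)                # one training update per sample
    elapsed = time.perf_counter() - start
    return elapsed / len(samples)

# Example with a dummy training step standing in for a circuit evaluation:
t = time_per_epoch_per_sample(lambda x: sum(i * i for i in range(1000)), range(100))
```

Using `time.perf_counter` rather than `time.time` avoids artifacts from system clock adjustments during long benchmark runs.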
Although the IBM Quantum Cloud development platform allows users to retrieve
quantum processor runtimes, the AWS Braket computing platform precludes the
possibility of measuring low-level process times. As a result, all times
quoted, for both the AWS and QMware benchmarks, include the interaction with the
runtime environment and the backend that compiles, initializes, and invokes
the quantum circuit. This follows the standard practice in quantum benchmarks
and reflects a real-world use case encompassing the full software and hardware
stack [51]. For the benchmark involving the QPU devices available via AWS
Braket, quoted times also include any potential queue time incurred on the
vendor backend. The proportion of the total runtime this constitutes varies
according to the number and complexity of tasks submitted by other users of
the same QPU (as detailed in the Amazon Braket Developer Guide) and cannot be
accurately estimated through the AWS user interface.
### II.2 Benchmark Dataset
The dataset for the benchmarks presented here is an $n_{d}$-dimensional abstracted
version of the well-documented two-dimensional sklearn.datasets.make_circles
for binary classification tasks [52]. It consists of points sampled from two
concentric circular shells, distributed randomly about two nominal mean radii,
$r_{\text{inner}}<r_{\text{outer}}$. The distribution of the data points about
the mean radius is described by a normal distribution, and both shells use the
same standard deviation. The method for creating the dataset is adapted from
that proposed by [53] for sampling points on a sphere. First, an
$n_{d}$-dimensional vector is created with components $v_{i}$ sampled randomly
from a standard normal distribution with mean $\mu=0$ and standard deviation
$\sigma=1$:
$\vec{v}=\begin{bmatrix}v_{1}\\\ v_{2}\\\ \vdots\\\
v_{n_{d}}\end{bmatrix},\quad v_{i}\in\mathcal{N}(0,1).$ (1)
The vector is then normalized to length $r$ to obtain a point sampled uniformly
at random from the surface of a sphere in $n_{d}$ dimensions. A vector of random
noise, $\vec{\rho}$, sampled independently from the distribution
$\mathcal{N}(0,\sigma^{2})$, is applied to each component of the vector:
$\vec{x}=r\frac{\vec{v}}{\norm{\vec{v}}}+\vec{\rho},\quad\rho_{i}\in\mathcal{N}(0,\sigma^{2}).$
(2)
Data points $\vec{x}$ are sampled from two such distributions to create an
outer shell with classification label $y_{i}=0$ and inner shell with
classification label $y_{i}=1$. In the present work $r_{\text{outer}}=1.0$ is
used to create points in the outer shell and $r_{\text{inner}}=0.2$ to create
points in the inner shell, with $\sigma=0.3$ for both shells. The dimension of
the dataset determines the number of features used as input to the neural
network. By linearly increasing the dimension of the dataset, circuits with a
varying number of qubits ($n_{q}=2n_{d}$) can be benchmarked without changing
the underlying rubric of the classification problem itself. Examples of the
one-, two-, and three-dimensional datasets are shown in Fig. 2.
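The construction in Eqs. (1)–(2) can be sketched in NumPy as follows (a minimal sketch; the function and argument names are illustrative, not taken from the paper's code):

```python
import numpy as np

def make_shells(n_d, m, r_inner=0.2, r_outer=1.0, sigma=0.3, seed=0):
    """Sample m points from two concentric n_d-dimensional noisy shells."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for r, label in [(r_outer, 0), (r_inner, 1)]:
        v = rng.standard_normal((m // 2, n_d))                 # Eq. (1): v_i ~ N(0, 1)
        x = r * v / np.linalg.norm(v, axis=1, keepdims=True)   # normalize to radius r
        x += rng.normal(0.0, sigma, size=x.shape)              # Eq. (2): noise ~ N(0, sigma^2)
        X.append(x)
        y.append(np.full(m // 2, label))
    return np.concatenate(X), np.concatenate(y)

X, y = make_shells(n_d=2, m=500)
```

For `n_d=2` this reduces to a noisy version of the two concentric circles of `sklearn.datasets.make_circles`; increasing `n_d` generalizes the same classification problem to higher dimensions.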
### II.3 Learning Models
The following sections describe the architecture of the hybrid quantum-
classical (HQNN) and pure QNNs used in the benchmarking tests. In all cases,
the networks are trained using a binary cross-entropy loss function and the
Adam optimizer with a learning rate of $\alpha$ = 0.3 [54].
#### II.3.1 Quantum Neural Network
The QNN used in this benchmark consists of a multi-qubit variational quantum
circuit, and is based on the model proposed by [15]. The model employs a
sequential $\mathrm{R_{x}}$ $\mathrm{R_{z}}$ two-axis rotation encoding scheme
to embed each of the data features in a single qubit state on the odd-numbered
qubits. The feature encoding is followed by an entangling layer of sequential
‘nearest-neighbor’ CNOT gates across all qubits. A layer of trainable single-
axis $\mathrm{R_{y}}$ rotations is then applied to each qubit, followed by a
final layer of sequential ‘cascading’ CNOT gates across all qubits. Finally,
the expectation value of the Pauli-Z operator ($\sigma_{z}$) is measured for
each of the even-numbered qubits. The mean expectation value across the
$n_{d}$ measurement qubits is interpreted as a probability of the input
belonging to the class $y_{i}=1$.
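The circuit just described can be illustrated with a small self-contained statevector simulation. This is a plain-NumPy sketch for illustration only (the paper's implementations use PennyLane and basiq); in particular, mapping the mean $\langle Z\rangle$ to a probability via $(1+\langle Z\rangle)/2$ is an assumption:

```python
import numpy as np

def apply_1q(psi, U, q, n):
    """Apply a 2x2 gate U to qubit q of an n-qubit state vector."""
    psi = np.moveaxis(psi.reshape([2] * n), q, 0)
    psi = np.tensordot(U, psi, axes=(1, 0))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cnot(psi, c, t, n):
    """Apply a CNOT with control c and target t."""
    psi = psi.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[c] = 1                        # restrict to the control = |1> subspace
    ta = t if t < c else t - 1        # target axis index after removing control axis
    psi[tuple(idx)] = np.flip(psi[tuple(idx)], axis=ta)   # X on the target
    return psi.reshape(-1)

def rx(a): return np.array([[np.cos(a/2), -1j*np.sin(a/2)], [-1j*np.sin(a/2), np.cos(a/2)]])
def ry(a): return np.array([[np.cos(a/2), -np.sin(a/2)], [np.sin(a/2), np.cos(a/2)]])
def rz(a): return np.array([[np.exp(-1j*a/2), 0], [0, np.exp(1j*a/2)]])

def qnn_probability(features, weights):
    """Forward pass of the QNN: n_d features on n_q = 2*n_d qubits."""
    n_d = len(features)
    n = 2 * n_d
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    for i, x in enumerate(features):              # RX-RZ encoding on odd qubits
        q = 2 * i + 1
        psi = apply_1q(psi, rx(x), q, n)
        psi = apply_1q(psi, rz(x), q, n)
    for q in range(n - 1):                        # nearest-neighbor entangling CNOTs
        psi = apply_cnot(psi, q, q + 1, n)
    for q in range(n):                            # trainable RY layer
        psi = apply_1q(psi, ry(weights[q]), q, n)
    for q in range(n - 1):                        # final cascading CNOTs
        psi = apply_cnot(psi, q, q + 1, n)
    probs = np.abs(psi.reshape([2] * n)) ** 2
    z_vals = []                                   # <Z> on each even-numbered qubit
    for q in range(0, n, 2):
        marg = np.moveaxis(probs, q, 0).reshape(2, -1).sum(axis=1)
        z_vals.append(marg[0] - marg[1])
    return (1 + np.mean(z_vals)) / 2              # mean <Z> mapped to [0, 1] (assumption)

p = qnn_probability(np.array([0.3, 1.1]), np.full(4, np.pi / 4))
```

With all features and weights at zero every gate reduces to the identity on $\ket{00\ldots 0}$, so the class-1 probability is exactly one; this gives a quick sanity check of the implementation.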
To obtain the gradients of the loss function with respect to each of the
parameters, the standard analytical parameter-shift algorithm is used, in
which the gradient of an expectation value, with respect to the $i$-th
parameter, is obtained via measurement of two additional expectation values.
This method requires a total of $2n_{w}+1$ circuit evaluations to obtain the
gradient of the loss with respect to $n_{w}$ circuit parameters and thus
scales linearly with the number of trainable parameters. Unlike the more
efficient adjoint and backpropagation methods, which are usually available for
quantum simulators, parameter-shift is the only currently available algorithm
that can be implemented natively on a QPU. It therefore provides an upper
bound on the cost of training a QNN using a QPU, and on the runtime of the
benchmarked simulators [55, 56].
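The parameter-shift rule can be illustrated in its simplest setting, a single $\mathrm{R_{y}}$ rotation on one qubit, for which $\langle Z\rangle = \cos\theta$ and two shifted evaluations recover the exact derivative (a self-contained numerical sketch, not the paper's code):

```python
import numpy as np

def expval_z(theta):
    """<Z> after RY(theta) applied to |0>; analytically equal to cos(theta)."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # RY(theta)|0>
    probs = state ** 2
    return probs[0] - probs[1]

def parameter_shift_grad(theta, shift=np.pi / 2):
    """Exact gradient from two additional circuit evaluations per parameter.

    For n_w parameters, the loss itself plus two shifted evaluations per
    parameter gives the 2*n_w + 1 circuit evaluations quoted in the text.
    """
    return 0.5 * (expval_z(theta + shift) - expval_z(theta - shift))

theta = 0.7
grad = parameter_shift_grad(theta)   # analytically -sin(0.7)
```

Unlike finite differences, the shift rule is exact for gates generated by Pauli operators, which is why it remains usable on noisy hardware where tiny finite-difference steps would be swamped by shot noise.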
#### II.3.2 Hybrid Quantum Neural Network
The HQNN is a bipartite model consisting of a QNN and a multi-layer perceptron
(MLP) classical neural network, with the expectation value of each measurement
qubit in the QNN used as an input feature for the first layer of the MLP. The
quantum component of the HQNN follows the same architecture as described in
Sec. II.3.1, and uses the same parameter-shift method to obtain the gradients
of the expectation values. The MLP is built using the PyTorch library and
contains three linear layers with sizes $[n_{d},40,1]$, with ReLU and Sigmoid
activation functions applied to the input and hidden layer, respectively [57].
The gradients in the classical component of the HQNN are computed using the
standard back-propagation algorithm, via the native implementation in the
PyTorch library. For the inference benchmark that utilizes QPUs or the AWS SV1
device, the classical part of the network is executed using the AWS
ml.m5.24xlarge compute instance in all cases except for the IBM Quantum Falcon
r5.11, whose classical part was executed on the QMware HQC4020.
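Reading the sizes $[n_{d},40,1]$ as layer widths, the classical head of the HQNN can be sketched in PyTorch as follows (the exact module layout is an assumption for illustration, not the paper's code):

```python
import torch
import torch.nn as nn

n_d = 4  # number of QNN measurement qubits feeding the classical head

# MLP with widths [n_d, 40, 1]: ReLU after the input layer,
# Sigmoid on the output so the result reads as a class-1 probability.
mlp = nn.Sequential(
    nn.Linear(n_d, 40),
    nn.ReLU(),
    nn.Linear(40, 1),
    nn.Sigmoid(),
)

z_expvals = torch.rand(8, n_d) * 2 - 1   # batch of 8 mock <Z> values in [-1, 1]
out = mlp(z_expvals)                     # class-1 probabilities, shape (8, 1)
```

During training, gradients flow through this module via standard back-propagation, while gradients for the quantum parameters upstream come from the parameter-shift rule; the two are chained at the $\langle Z\rangle$ interface.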
### II.4 Results
This section presents the results of the benchmark methodology outlined in
Sec. II.1. The results of the benchmark are illustrated in Fig. 2. A table of
these results is also given in App. B, Tab. 3 and Tab. 4. The measured runtime
values (runtime per epoch, per training sample) are averaged across 100
repeats in order to obtain a mean and standard deviation for circuits up to 24
qubits in size for the training benchmark, and 26 qubits for the inference
benchmark. A similar approach to averaging is impractical for larger circuit
sizes, owing to the significantly longer total runtime. Thus, only a single
value with no standard deviation is quoted for circuits larger than 24 qubits.
In all cases Chauvenet’s criterion is applied in order to filter anomalous
runtime measurements that arise due to extraneous hardware processes [58].
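Chauvenet's criterion rejects a measurement when the expected number of equally extreme samples, under a normal distribution fitted to the data, falls below one half. A minimal sketch using only NumPy and the standard library (illustrative, not the paper's code):

```python
import math
import numpy as np

def chauvenet_filter(x):
    """Keep points whose expected count of equally extreme samples is >= 0.5."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    z = np.abs(x - mu) / sigma
    # Two-sided tail probability P(|Z| > z) for a standard normal: erfc(z / sqrt(2))
    tail = np.array([math.erfc(zi / math.sqrt(2)) for zi in z])
    return x[len(x) * tail >= 0.5]

runtimes = [10.0] * 9 + [100.0]     # one obvious timing outlier
kept = chauvenet_filter(runtimes)   # the 100.0 measurement is rejected
```

Applied to repeated runtime measurements, this removes isolated spikes caused by extraneous hardware processes while leaving the bulk of the distribution untouched.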
Label | Hardware | SDK | Backend
---|---|---|---
QBN | QMware HQC4020 | basiq | basiq C++
QPL | QMware HQC4020 | PennyLane | lightning.qubit
APL | AWS ml.m5.24xlarge | PennyLane | lightning.qubit
ASV | AWS SV1 | PennyLane | SV1
Table 2: The abbreviations associated with each hardware, SDK, and backend.
#### II.4.1 Training
In general, the QMware HQC4020 and the AWS ml.m5.24xlarge hardware achieve
similar performance using the PennyLane lightning.qubit backend (QPL and APL,
respectively; see Tab. 2 for a list of abbreviations). When benchmarking with
the QNN, QPL performs similarly to APL for circuits with fewer than 27 qubits
in size, with a relative runtime of $t_{\text{QPL}}/t_{\text{APL}}$ =
$0.90(17)$. When benchmarking with the HQNN the average relative runtime is
$t_{\text{QPL}}/t_{\text{APL}}$ = $0.95(16)$.
Comparing the QMware HQC4020 hardware using a native basiq implementation
(QBN) to the PennyLane lightning.qubit implementation (QPL) shows a clear
advantage for the native basiq implementation across all circuit sizes for
both QNNs and HQNNs. The QBN implementation achieves an average relative
runtime of $t_{\text{QBN}}/t_{\text{QPL}}$ = $0.56(32)$ in the QNN benchmark,
and $t_{\text{QBN}}/t_{\text{QPL}}$ = $0.54(30)$ for HQNNs. It is also notable
that the native QMware implementation is most performant for models using
more than 20 qubits, where an average relative runtime of
$t_{\text{QBN}}/t_{\text{QPL}}$ = $0.22(4)$ is obtained for the QNNs and
$t_{\text{QBN}}/t_{\text{QPL}}$ = $0.21(2)$ for the HQNNs.
The AWS Braket SV1 device is a high-performance managed device designed for
simulating large quantum circuits up to a maximum of 34 qubits.
Correspondingly, it outperforms QBN for circuits with 28 or 30 qubits with an
average relative runtime across QNNs and HQNNs of
$t_{\text{QBN}}/t_{\text{ASV}}$ = $1.35(11)$. Conversely, it has very poor
performance for small and medium circuits, with a relative runtime
approximately two to four orders of magnitude slower than the other available
simulator stacks for circuits with fewer than 20 qubits. Across all circuit
sizes smaller than 28 qubits, the QBN implementation outperforms ASV for both
QNNs and HQNNs. The average relative runtime is
$t_{\text{QBN}}/t_{\text{ASV}}$ = $0.000\,41(25)$ for circuits with 14 qubits
or less, $t_{\text{QBN}}/t_{\text{ASV}}$ = $0.0049(35)$ for circuits with 16
to 20 qubits, and $t_{\text{QBN}}/t_{\text{ASV}}$ = $0.36(29)$ for circuits
with 22 to 26 qubits.
#### II.4.2 Inference
When training a QNN, the number of circuit evaluations increases linearly with
the number of trainable parameters. In contrast, the inference (forward pass)
requires only a single evaluation of the output expectation values. As a
result, the total runtime is reduced significantly. The results of the
inference runtime benchmark broadly follow the same trends as for the training
runtime. One exception is the pair of QPL and APL runtimes, which agree
within one standard deviation for circuits of fewer than 16 qubits.
The average relative runtime is $t_{\text{QPL}}/t_{\text{APL}}$ =
$1.04(10)$ for QNNs and $t_{\text{QPL}}/t_{\text{APL}}$ = $1.03(22)$ for
HQNNs. For larger circuits, APL performs marginally better than QPL with
$t_{\text{QPL}}/t_{\text{APL}}$ = $1.11(10)$ and
$t_{\text{QPL}}/t_{\text{APL}}$ = $1.12(9)$ for QNNs and HQNNs, respectively.
Crucially, the inference tests include a benchmark of physical QPUs, as listed
in Tab. 1. For low numbers of qubits the QPU runtimes are orders of magnitude
longer than their simulator counterparts. This is likely due to the additional
overhead incurred in compiling and initializing circuits on a QPU, as well as the
queue time incurred on the vendor side, where multiple AWS users may be
accessing the QPU device simultaneously. On the other hand, the runtime for
QPUs increases linearly with the number of qubits. For large numbers of
qubits, the exponentially increasing simulator runtime exceeds the fixed time
cost associated with QPUs. This results in a ‘threshold’ circuit size above
which it becomes exponentially faster to execute a QNN using a QPU device.
The limited number of qubits available for many QPUs means that, in most
cases, this threshold is not attainable. Of the four QPUs in the present
study, only Rigetti’s Aspen M-2 has a sufficient number of qubits to be time-
competitive with the simulator stacks tested. The threshold occurs at 30
qubits for QBN and ASV where the relative runtime is
$t_{\text{ASV}}/t_{\text{M-2}}$ = 1.03, $t_{\text{QBN}}/t_{\text{M-2}}$ = 1.53
for the QNN and $t_{\text{ASV}}/t_{\text{M-2}}$ = 1.02,
$t_{\text{QBN}}/t_{\text{M-2}}$ = 1.51 for the HQNN. For smaller circuit
sizes, OQC’s Lucy produces runtimes that are faster by a factor of
$t_{\text{Lucy}}/t_{\text{M-2}}$ = $0.22(3)$ compared to Rigetti’s Aspen M-2.
In contrast, the runtime for IonQ’s Harmony QPU is on average a factor
$t_{\text{Harmony}}/t_{\text{M-2}}$ = $13.5(52)$ slower than Aspen M-2. The
IBMQ Falcon r5.11 QPU performs similarly to the Rigetti Aspen M-2, with an
average relative runtime of $t_{\text{Falcon}}/t_{\text{M-2}}$ = $0.87(18)$.
Notably, vendor management of the QPUs results in a runtime that varies
significantly from job to job. The percentage variance in runtime measured for
inference on a QPU is generally higher than for simulator execution.
Specifically, the average percentage standard deviation across the 10 repeat
measurements is on the order of 125%, 16%, 21%, and 7% for IonQ, Aspen M-2,
Lucy, and Falcon r5.11, respectively.
## III Accuracy and Cost Analysis
Figure 3: Comparison of circuit fidelity (left), training cost (right) and
inference cost (inset) for various publicly available QPUs. The training price
estimates are calculated on the assumption that the model is trained over 100
epochs, with 100 training samples and 1000 circuit shots per expectation
value. For IBM Falcon r5.11 the prices are obtained in CHF and the conversion
is calculated using a rate of 1.06 USD = 1.00 CHF. Note that the IBM Cloud
Estimator service offers a method for circuit training that could reduce the
IBM prices by up to two orders of magnitude; see Sec. III.1 for a detailed
explanation.
### III.1 Cost Evaluation
As described in Sec. I, a primary consideration in developing and testing QNNs
and HQNNs is the financial cost of training a network. In general, the pricing
structure of publicly available QPUs is such that the training cost is
proportional to the number of distinct quantum circuits that must be evaluated
during the training process. When using the parameter-shift method for
gradient calculations, the number of distinct quantum circuits is proportional
to the number of trainable parameters. Hence, for the QNN and HQNN considered
in Sec. II.3.1 and Sec. II.3.2, the training cost scales linearly with the
number of qubits. For an increasing number of epochs, and for an increasing
number of training samples, this rapidly results in millions or billions of
distinct quantum circuits that must be initialized and evaluated on the QPU.
In the present work, QPUs are accessed through AWS Braket and IBM Cloud Qiskit
Runtime. The AWS pricing scheme includes both a per-task cost, incurred for
the execution of a given quantum circuit, and a per-shot cost which is applied
to the number of shots specified for that quantum circuit. By comparison,
Qiskit Runtime implements a pricing scheme on the basis of runtime with a
fixed price per second for executing a quantum circuit. Fig. 3(b) illustrates
an example of how the estimated cost scales with circuit size for the four
QPUs presented in this work. The consequence of this QPU pricing structure is
that for circuits larger than a few qubits, training a QNN on a QPU becomes
prohibitively expensive. Prices range from approximately 1000 USD for a two-
qubit QNN using Rigetti’s Aspen M-2 or OQC’s Lucy, to more than
10 million USD for a 26 qubit QNN using IBM’s Falcon r5.11. It
is worth clarifying that the training in this benchmark treats every quantum
evaluation as a new circuit on which many shots can be executed. The IBM
Qiskit Runtime offers an alternative method, in which the quantum circuit is
set up once for a large initial cost, and all subsequent circuit executions in
that epoch reuse the same setup with varying parameters, determined by the
dataset and the parameter-shift rule. Assuming a circuit set-up time of
$t_{1}=5~\text{s}$ and a per-shot runtime of $t_{2}=250~\mu\text{s}$, the cost
of training the same circuit could be dramatically reduced from tens of
millions of USD to approximately $212{,}800$ USD. This multi-parameter,
multi-circuit functionality is specific to the IBM Cloud, and thus the authors
decided to set up every circuit anew in all cases in the interest of fairness.
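The quoted reduction can be checked with a short calculation. Assuming one circuit set-up per epoch and a pay-as-you-go rate of 1.60 USD per second (the rate and the per-epoch set-up count are assumptions made for illustration; for a 26-qubit QNN with one trainable parameter per qubit they reproduce the quoted figure):

```python
# Hypothetical cost sketch for the reuse-based Qiskit Runtime workflow.
n_w = 26                # trainable parameters: one RY per qubit of a 26-qubit QNN
epochs = 100
samples = 100
shots = 1000
t_setup = 5.0           # seconds per circuit set-up (t1)
t_shot = 250e-6         # seconds per shot (t2)
usd_per_second = 1.60   # assumed runtime rate

circuits_per_sample = 2 * n_w + 1                 # parameter-shift evaluations + loss
total_shots = epochs * samples * circuits_per_sample * shots
total_seconds = epochs * t_setup + total_shots * t_shot
cost_usd = total_seconds * usd_per_second         # approximately 212,800 USD
```

The shot time (132,500 s) dwarfs the set-up time (500 s), so under this workflow the per-shot runtime, not the circuit set-up, dominates the bill.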
In contrast to the high cost of training, the forward pass of a QNN does not
entail a gradient calculation and thus requires only a single task with around
100–1000 circuit shots. The cost of using a QPU for the inference stage of a
trained QNN is therefore considerably lower. For the QPUs accessed through AWS Braket,
the cost is fixed for a given number of shots, irrespective of circuit size.
For QPUs accessed through Qiskit Runtime, the cost increases linearly with
circuit size, since pricing is per second of runtime and QPU runtime grows
linearly with circuit size. A typical
forward-pass of a QNN can be executed at a cost of approximately 1 USD for
QPUs accessed through AWS Braket, or between 10 – 40 USD for QPUs accessed
through Qiskit Runtime.
### III.2 Accuracy Evaluation
Understanding the role noise plays in executing quantum circuits is critical
to leveraging quantum computers. Vendors often provide esoteric measures of
fidelity, gate accuracy and characteristic timescales. Although these are
valuable metrics for assessing the relative performance of different QPUs,
evaluating the overall effect of these different sources of error on a typical
quantum circuit can be challenging. A holistic measure of noise in a quantum
circuit can be achieved with a straightforward empirical procedure.
First, the parameters of all trainable gates are fixed at a value of $\pi/4$.
The input feature values are each set to $\pi^{2}/4$, and the QNN is augmented
by applying the adjoint of the entire circuit prior to measurement. In a
noiseless circuit this results in a final state vector with zero amplitude for
all basis states except the computational ground state, i.e., $\ket{00\ldots 0}$. In
a physical QPU, various noise sources degrade the fidelity of the circuit,
resulting in a non-zero probability of measuring one of the other
$2^{n_{q}}-1$ computational basis states. An accuracy measure is then obtained
by counting the proportion of states measured in the computational ground
state over 1000 circuit shots. This is commonly referred to as the fidelity of
a quantum state, $F=\left|\innerproduct{00...0}{\psi}\right|^{2}$. In this
case the fidelity of the final quantum state is measured relative to the
computational ground state. This fidelity measurement is repeated by executing
10 such jobs on each QPU to obtain a mean accuracy. The state vectors needed
to obtain fidelities for individual shots are not available through the
PennyLane SDK with AWS. Consequently, the AWS Braket software library is used
instead to construct the same quantum circuits described in Sec. II.3 for the
physical-QPU accuracy tests presented here, with the exception of the IBM
Falcon r5.11, which was tested from QMware and accessed through IBM Cloud’s
Qiskit Runtime API.
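The fidelity procedure above amounts to running a circuit followed by its adjoint and counting ground-state outcomes. A toy version with a random circuit unitary and a crude depolarizing-style noise model (the noise model is purely illustrative, not a model of any specific QPU):

```python
import numpy as np

rng = np.random.default_rng(1)
n_q, shots = 3, 1000
dim = 2 ** n_q

# Random "circuit" unitary U via QR decomposition of a random complex matrix.
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim)))

def ground_state_fidelity(p_noise):
    """Run U then U^dagger on |0...0>, mix in uniform noise, count |0...0> outcomes."""
    psi0 = np.zeros(dim)
    psi0[0] = 1.0
    psi = Q.conj().T @ (Q @ psi0)                  # adjoint test: ideally back to |0...0>
    probs = np.abs(psi) ** 2
    probs = (1 - p_noise) * probs + p_noise / dim  # depolarizing-style mixing
    probs /= probs.sum()                           # guard against rounding error
    outcomes = rng.choice(dim, size=shots, p=probs)
    return np.mean(outcomes == 0)                  # estimate of F = |<0...0|psi>|^2

f_ideal = ground_state_fidelity(0.0)   # noiseless: fidelity of 1
f_noisy = ground_state_fidelity(0.5)   # noisy: fidelity well below 1
```

On a noiseless simulator the adjoint exactly undoes the circuit, so every shot returns the ground state; the measured shortfall on hardware aggregates all noise sources into a single end-to-end figure.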
The results, shown in Fig. 3(a) and Tab. 9, vary significantly between QPUs.
The IonQ Harmony device attains greater than 99% fidelity across circuit
sizes from two to ten qubits. The fidelity of the OQC Lucy device is high for
small circuits with only two qubits, but degrades significantly for larger
circuits, with $67\pm 2$ %, $64\pm 2$ % and $57.6\pm 0.9$ % fidelity for four,
six, and eight qubits, respectively. The performance of the Rigetti Aspen M-2
and IBMQ Falcon r5.11 QPUs is similarly high for the two qubit circuit, with
fidelities of $89\pm 6$ % and $93.2\pm 0.4$ %, respectively. However, the
fidelity of these devices approaches zero for circuits larger than
approximately eight qubits.
Note that the IonQ Harmony device has full connectivity between its 11 qubits,
potentially resulting in a shallower transpilation depth that could contribute
to its higher overall accuracy. Importantly, there are additional
considerations that might contribute to this result, such as internal
optimizations or error correction algorithms. Since this study focuses on
performing a practical benchmark, these results were obtained from the end-
user’s perspective with minimal modification or tuning. Thus, this work does
not rule out potential gains for other devices if the appropriate tuning is
performed.
## IV Discussion and Summary
Figure 4: Hardware statistics for the QMware HQC4020 simulator. (a) The
optimal number of threads for simulating an $n$-qubit quantum circuit. Faster
runtimes are indicated with a paler shade, demonstrating how the optimal
number of threads varies for increasing circuit size. (b) The optimum thread
count as a function of the number of qubits. (c) Total CPU and RAM
utilization over time during the execution of a 40 qubit QNN. (d) The memory
required to store the state vector for an increasing number of qubits.
### IV.1 Large Quantum Circuits
The amount of random access memory (RAM) utilized in simulating any noiseless
quantum circuit is a function of the dimension of the vector space, and hence
grows exponentially with the number of qubits. An $n$-qubit state is specified
by $2^{n}$ complex amplitudes, and thus requires approximately $16\times
2^{n}$ bytes of memory. This equates to approximately 16 GB of RAM for a 30
qubit circuit.
QNNs using large numbers of qubits are highly susceptible to the barren
plateau phenomenon, where the gradients of a QNN with randomly initialized
parameters vanish exponentially as the number of qubits increases [59]. This
could present a serious obstacle to training a QML model with a large number
of qubits unless a variety of mitigation methods are employed [60, 61, 62,
63]. Solving the barren plateau problem is crucial to achieving highly
expressive models that could outperform classical machine learning methods on
complex datasets. Access to development services that are able to simulate
QNNs with a large number of qubits is essential to solving the barren plateau
problem.
To explore the performance of QMware’s HQC4020 with an increasing number of
qubits, additional benchmarks are performed for QNNs and HQNNs with up to 40
qubits. The 40 qubit simulation is made feasible by reducing the floating-point
precision to a single-precision (32 bit) representation; for all other circuit
sizes double precision (64 bit) is retained. Fig. 4(d) illustrates the
exponential increase in memory usage for a range of circuit sizes up to 40
qubits, for both single and double precision representations.
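The memory requirement follows directly from the $16\times 2^{n}$ relation (a small sketch; the comparison against the 12 TB capacity quoted in the text illustrates why the 40-qubit run drops to single precision):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for 2**n complex amplitudes (16 B double, 8 B single precision)."""
    return bytes_per_amplitude * 2 ** n_qubits

gib = statevector_bytes(30) / 2 ** 30   # 30 qubits, double precision: 16 GiB
double_40 = statevector_bytes(40)       # ~17.6 TB: exceeds the 12 TB of RAM
single_40 = statevector_bytes(40, 8)    # ~8.8 TB: fits within 12 TB
```

Each additional qubit doubles the state vector, so halving the bytes per amplitude buys exactly one extra qubit at fixed memory.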
Tab. 11 gives the inference runtimes for circuits with up to 40 qubits, up to
a maximum of 34 qubits in the case of ASV. The runtimes for the QBN and ASV
simulators increase exponentially, with QBN reaching a runtime greater than 13
hours for the 40 qubit QNN. Owing to the long simulator runtimes for large
circuits, multiple repeats are not possible, and thus it is hard to draw
meaningful conclusions from the single QBN and ASV trials presented for
circuits larger than 27 qubits. Nonetheless, in the case of QNNs, ASV achieves
a relative runtime of $t_{\text{ASV}}/t_{\text{QBN}}$ = $1.34(9)$ for circuits
with 28 to 34 qubits. In the case of HQNNs there is no clear advantage, with
$t_{\text{ASV}}/t_{\text{QBN}}$ = $0.95(22)$.
### IV.2 Multi-threading
The runtime performance for circuits with many qubits can be improved using
various parallelization techniques. A common method for achieving a
substantial increase in runtime performance for linear algebra operations is
to compute matrix-vector and matrix-matrix products with the aid of multi-
threading. The QMware basiq library provides native support for a multi-
threaded approach to low-level C++ linear algebra operations. However,
determining the optimal number of threads to execute a circuit with a given
number of qubits is not straightforward. Fig. 4(a) illustrates a cross-
sectional study of runtime for circuits with up to 20 qubits with multi-
threading across as many as 24 threads. In Fig. 4(b) the optimum number of
threads for a given qubit count is shown. These results can be applied
generally to achieve best-in-class performance with the QMware HQC4020 using
the basiq software library.
PennyLane provides some parallelization for gradient calculations using the
adjoint operator method [56], which is applicable to parameterized quantum
algorithms. However, the authors are not aware of any low-level
parallelization in the PennyLane library for general linear algebra
calculations encountered in generic quantum circuits.
### IV.3 Conclusion
This work presents a comprehensive study of various quantum computing
platforms, using both simulated and physical quantum processing units. The
results presented in Sec. II.4 demonstrate a clear runtime advantage for the
QMware basiq library executed on the QMware cloud computing service. Relative
to the next fastest classical simulator, QMware achieves a runtime reduction
of up to 78% across all algorithms with fewer than 27 qubits. In particular,
the QMware basiq library achieves an average relative runtime of $0.56(32)$
with respect to the PennyLane library for QNNs and $0.54(30)$ for HQNNs.
The QMware HQC4020 hardware benchmarked against AWS ml.m5.24xlarge achieves a
comparable relative runtime of $0.90(17)$ for QNNs and $0.95(16)$ for HQNNs.
Thus, the advantage offered by the QMware cloud computing service is primarily
due to a harmonious interplay between software and hardware. This performance
advantage can be attributed to the superior multi-threading support present in
the basiq library. AWS also provides the SV1 simulator, which performs
marginally better than QMware basiq for large circuits with more than 27
qubits in the case of QNNs (up to the SV1 maximum of 34 qubits). There is no
clear advantage for either QMware or SV1 in the case of HQNNs. Additionally,
QMware is the only tested simulator that has the capability of simulating
circuits with 34-40 qubits.
The price and scarcity of quantum hardware means it is more time and cost
efficient to develop algorithms and train QNNs with quantum simulators such as
Amazon Web Services Braket or QMware’s cloud computing service. This is
particularly true for variational quantum algorithms and QNNs which represent
a promising utilization of quantum technologies in artificial intelligence on
NISQ computers. In contrast to the exponential runtime scaling encountered in
quantum simulators, the runtime of QPUs scales linearly with the circuit size.
Publicly available QPUs are already able to achieve runtime improvements over
simulator hardware for large numbers of qubits. For example, Rigetti’s Aspen
M-2 is able to execute 40 qubit circuits in approximately one minute, which is
less than 0.02% of the runtime measured for QMware’s simulator.
As quantum hardware improves and the number of qubits available grows, it will
become possible to gain a substantial runtime advantage over simulator
hardware when executing large quantum circuits. However, the fidelity tests
presented in Sec. III.2 indicate that accurate inference with such a large
quantum circuit is not yet possible. Moreover, the cost of accessing these
quantum devices makes training a QNN on currently available QPUs prohibitively
expensive. Owing to the exponentially large computational state space, QNNs
with relatively few qubits are able to tackle challenging data science and
industry problems.
Thus, the key to success in the field of quantum computing is to improve the
cost and accuracy of QPUs, and to integrate them well within classical
infrastructure. A hybrid interplay between quantum and classical machines is
key to seamlessly harnessing the best performance of QPUs and simulators,
depending on the use case.
### IV.4 Author contributions
M.K., M.B., Maxim P., W.F., A.K., W.S., and A.S. wrote the benchmarking Python
code. B.K. reviewed the existing benchmarking literature, wrote the Appendix,
and automated the benchmarking Python code. M.K., Maxim P., and A.S. performed
benchmarking of simulated quantum processing units. M.K., A.K., and W.S.
performed benchmarking of physical quantum processing units. A.M., M.K., W.S.,
A.K., and A.S. analyzed the results. M.B. and W.F. set up the QMware
simulator, and prepared the basiq SDK. M.B. monitored the resource utilization
on the QMware simulator, and analyzed the multithreading efficiency. W.S. and
M.K., and A.M. wrote the initial version of the main text. A.S. and A.M.
prepared the figures. All authors have contributed to improving the
manuscript, read, and agreed to the final version of the manuscript. Markus P.
and A.M. performed project administration and supervision.
## References
* [1] Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. Quantum machine learning. Nature, 549(7671):195, 2017.
* [2] Vedran Dunjko and Hans J Briegel. Machine learning & artificial intelligence in the quantum domain: a review of recent progress. Reports on Progress in Physics, 81(7):074001, 2018.
* [3] Lucas Lamata. Quantum machine learning and quantum biomimetics: A perspective. Machine Learning: Science and Technology, 1(3):033002, 2020.
* [4] Alexey Melnikov, Mohammad Kordzanganeh, Alexander Alodjants, and Ray-Kuang Lee. Quantum machine learning: from physics to software engineering. Advances in Physics: X, 8(1):2165452, 2023.
* [5] Javier Alcazar, Vicente Leyton-Ortega, and Alejandro Perdomo-Ortiz. Classical versus quantum models in machine learning: Insights from a finance application. Machine Learning: Science and Technology, 1(3):035003, 2020.
* [6] Asel Sagingalieva, Andrii Kurkin, Artem Melnikov, Daniil Kuhmistrov, et al. Hyperparameter optimization of hybrid quantum neural networks for car classification. arXiv preprint arXiv:2205.04878, 2022.
* [7] Marco Pistoia, Syed Farhan Ahmad, Akshay Ajagekar, Alexander Buts, Shouvanik Chakrabarti, Dylan Herman, Shaohan Hu, Andrew Jena, Pierre Minssen, Pradeep Niroula, Arthur Rattew, Yue Sun, and Romina Yalovetzky. Quantum Machine Learning for Finance. arXiv preprint arXiv:2109.04298, 2021.
* [8] Arsenii Senokosov, Alexander Sedykh, Asel Sagingalieva, and Alexey Melnikov. Quantum machine learning for image classification. arXiv preprint arXiv:2304.09224, 2023.
* [9] Manuel S Rudolph, Ntwali Bashige Toussaint, Amara Katabarwa, Sonika Johri, Borja Peropadre, and Alejandro Perdomo-Ortiz. Generation of high-resolution handwritten digits with an ion-trap quantum computer. Physical Review X, 12(3):031010, 2022.
* [10] Asel Sagingalieva, Mohammad Kordzanganeh, Nurbolat Kenbayev, Daria Kosichkina, Tatiana Tomashuk, and Alexey Melnikov. Hybrid quantum neural network for drug response prediction. Cancers, 15(10):2705, 2023.
* [11] Serge Rainjonneau, Igor Tokarev, Sergei Iudin, Saaketh Rayaprolu, Karan Pinto, Daria Lemtiuzhnikova, Miras Koblan, Egor Barashov, Mohammad Kordzanganeh, Markus Pflitsch, et al. Quantum algorithms applied to satellite mission planning for Earth observation. arXiv preprint arXiv:2302.07181, 2023.
* [12] Alexandr Sedykh, Maninadh Podapaka, Asel Sagingalieva, Nikita Smertyak, Karan Pinto, Markus Pflitsch, and Alexey Melnikov. Quantum physics-informed neural networks for simulating computational fluid dynamics in complex shapes. arXiv preprint arXiv:2304.11247, 2023.
* [13] Liane Bernstein, Alexander Sludds, Ryan Hamerly, Vivienne Sze, Joel Emer, and Dirk Englund. Freely scalable and reconfigurable optical hardware for deep learning. Scientific Reports, 11(1):3144, 2021.
* [14] Danny Hernandez and Tom B. Brown. Measuring the Algorithmic Efficiency of Neural Networks. arXiv preprint arXiv:2005.04305, 2020.
* [15] Michael Perelshtein, Asel Sagingalieva, Karan Pinto, Vishal Shete, et al. Practical application-specific advantage through hybrid quantum computing. arXiv preprint arXiv:2205.04858, 2022.
* [16] Matthias C. Caro, Hsin-Yuan Huang, M. Cerezo, Kunal Sharma, Andrew Sornborger, Lukasz Cincio, and Patrick J. Coles. Generalization in quantum machine learning from few training data. Nature Communications, 13(1):4919, 2022.
* [17] Amira Abbas, David Sutter, Christa Zoufal, Aurelien Lucchi, Alessio Figalli, and Stefan Woerner. The power of quantum neural networks. Nature Computational Science, 1(6):403–409, 2021.
* [18] Sergio Boixo, Sergei V. Isakov, Vadim N. Smelyanskiy, Ryan Babbush, Nan Ding, Zhang Jiang, Michael J. Bremner, John M. Martinis, and Hartmut Neven. Characterizing quantum supremacy in near-term devices. Nature Physics, 14(6):595–600, 2018.
* [19] Rocco A. Servedio and Steven J. Gortler. Equivalences and Separations Between Quantum and Classical Learnability. SIAM Journal on Computing, 33(5):1067–1092, 2004.
* [20] Diego Ristè, Marcus P. da Silva, Colm A. Ryan, Andrew W. Cross, Antonio D. Córcoles, John A. Smolin, Jay M. Gambetta, Jerry M. Chow, and Blake R. Johnson. Demonstration of quantum advantage in machine learning. npj Quantum Information, 3(1):16, 2017.
* [21] Marcello Benedetti, Erika Lloyd, Stefan Sack, and Mattia Fiorentini. Parameterized quantum circuits as machine learning models. Quantum Science and Technology, 4(4):043001, 2019.
* [22] Maria Schuld and Francesco Petruccione. Learning with Quantum Models, pages 247–272. Springer International Publishing, Cham, 2018.
* [23] Mohammad Kordzanganeh, Daria Kosichkina, and Alexey Melnikov. Parallel hybrid networks: an interplay between quantum and classical neural networks. arXiv preprint arXiv:2303.03227, 2023.
* [24] Dimitrios Emmanoulopoulos and Sofija Dimoska. Quantum Machine Learning in Finance: Time Series Forecasting. arXiv preprint arXiv:2202.00599, 2022.
* [25] Brian Coyle, Maxwell Henderson, Justin Chan Jin Le, Niraj Kumar, Marco Paini, and Elham Kashefi. Quantum versus classical generative modelling in finance. Quantum Science and Technology, 6(2):024013, 2021.
* [26] Bob Coecke, Giovanni de Felice, Konstantinos Meichanetzidis, and Alexis Toumi. Foundations for Near-Term Quantum Natural Language Processing. arXiv preprint arXiv:2012.03755, 2020.
* [27] Konstantinos Meichanetzidis, Alexis Toumi, Giovanni de Felice, and Bob Coecke. Grammar-Aware Question-Answering on Quantum Computers. arXiv preprint arXiv:2012.03756, 2020.
* [28] Xiaowei Xu, Yukun Ding, Sharon Xiaobo Hu, Michael Niemier, Jason Cong, Yu Hu, and Yiyu Shi. Scaling for edge inference of deep neural networks. Nature Electronics, 1(4):216–222, 2018.
* [29] Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, and Joel Emer. Efficient Processing of Deep Neural Networks: A Tutorial and Survey. arXiv preprint arXiv:1703.09039, 2017.
* [30] Mark Horowitz. 1.1 Computing’s energy problem (and what we can do about it). In IEEE International Solid-State Circuits Conference, pages 10–14, 2014.
* [31] Maria Schuld, Ville Bergholm, Christian Gogolin, Josh Izaac, and Nathan Killoran. Evaluating analytic gradients on quantum hardware. Physical Review A, 99(3):032331, 2019.
* [32] Mohammad Kordzanganeh, Markus Buchberger, Basil Kyriacou, Maxim Povolotskii, Wilhelm Fischer, Andrii Kurkin, Wilfrid Somogyi, Asel Sagingalieva, Markus Pflitsch, and Alexey Melnikov. Benchmarking simulated and physical quantum processing units using quantum and hybrid algorithms. https://github.com/terra-quantum-public/benchmarking, 2023.
* [33] Atos. Quantum Learning Machine. https://atos.net/en/solutions/quantum-learning-machine, 2022.
* [34] Gadi Aleksandrowicz et al. Qiskit: An open-source framework for quantum computing. IBM Quantum, 2019.
* [35] Koki Aoyama. Qulacs. https://github.com/qulacs/qulacs, 2022.
* [36] Jerry Chow, Oliver Dial, and Jay Gambetta. IBM quantum breaks the 100‑qubit processor barrier. https://research.ibm.com/blog/127-qubit-quantum-processor-eagle, 2021.
* [37] J. M. Pino, J. M. Dreiling, C. Figgatt, et al. Demonstration of the trapped-ion quantum CCD computer architecture. Nature, 592:209–213, 2021.
* [38] Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, et al. Quantum supremacy using a programmable superconducting processor. Nature, 574:505–510, 2019.
* [39] K. Wright, K. M. Beck, S. Debnath, J. M. Amini, et al. Benchmarking an 11-qubit quantum computer. Nature Communications, 10, 2019.
* [40] Alexander J. McCaskey, Zachary P. Parks, Jacek Jakowski, Shirley V. Moore, et al. Quantum chemistry as a benchmark for near-term quantum computers. npj Quantum Information, 5, 2019.
* [41] Salonik Resch and Ulya R. Karpuzcu. Benchmarking quantum computers and the impact of quantum noise, 2019.
* [42] Thomas Lubinski, Sonika Johri, Paul Varosy, Jeremiah Coleman, et al. Application-oriented performance benchmarks for quantum computing. https://github.com/SRI-International/QC-App-Oriented-Benchmarks, 2021.
* [43] Pacific Northwest National Laboratory. QASMBench benchmark suite. https://github.com/pnnl/QASMBench, 2022.
* [44] Daniel Mills, Seyon Sivarajah, Travis L. Scholten, and Ross Duncan. Application-motivated, holistic benchmarking of a full quantum computing stack. Quantum, 5:415, 2021.
* [45] Q-score. https://github.com/myQLM/qscore, 2022.
* [46] Pierre-Luc Dallaire-Demers, Michal Stechly, Jerome F. Gonthier, Ntwali Toussaint Bashige, et al. An application benchmark for fermionic quantum simulations, 2020.
* [47] Koen Mesman, Zaid Al-Ars, and Matthias Möller. Qpack: Quantum approximate optimization algorithms as universal benchmark for quantum computers, 2021.
* [48] Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, et al. PennyLane: Automatic differentiation of hybrid quantum-classical computations. arXiv preprint arXiv:1811.04968, 2022.
* [49] Amazon Web Services. Amazon braket. https://aws.amazon.com/, 2020.
* [50] QMware. QMware — The first global quantum cloud. https://qm-ware.com/, 2022.
* [51] Andrew Wack, Hanhee Paik, Ali Javadi-Abhari, Petar Jurcevic, Ismael Faro, Jay M. Gambetta, and Blake R. Johnson. Quality, Speed, and Scale: Three key attributes to measure the performance of near-term quantum computers. arXiv preprint arXiv:2110.14108, 2021.
* [52] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
* [53] George Marsaglia. Choosing a Point from the Surface of a Sphere. The Annals of Mathematical Statistics, 43(2):645–646, 1972.
* [54] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980, 2017.
* [55] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
* [56] Tyson Jones and Julien Gacon. Efficient calculation of gradients in classical simulations of variational quantum algorithms. arXiv preprint arXiv:2009.02823, 2020.
* [57] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, et al. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc., 2019.
* [58] William Chauvenet. A Manual of Spherical and Practical Astronomy: Spherical astronomy. J. B. Lippincott & Company, 1863.
* [59] Jarrod R. McClean, Sergio Boixo, Vadim N. Smelyanskiy, Ryan Babbush, and Hartmut Neven. Barren plateaus in quantum neural network training landscapes. Nature Communications, 9(1):4812, 2018.
* [60] Chen Zhao and Xiao-Shan Gao. Analyzing the barren plateau phenomenon in training quantum neural networks with the ZX-calculus. Quantum, 5:466, 2021.
* [61] Tyler Volkoff and Patrick J. Coles. Large gradients via correlation in random parameterized quantum circuits. Quantum Science and Technology, 6(2):025008, 2021.
* [62] Edward Grant, Leonard Wossnig, Mateusz Ostaszewski, and Marcello Benedetti. An initialization strategy for addressing barren plateaus in parametrized quantum circuits. Quantum, 3:214, 2019.
* [63] Mohammad Kordzanganeh, Pavel Sekatski, Leonid Fedichkin, and Alexey Melnikov. An exponentially-growing family of universal quantum circuits. arXiv preprint arXiv:2212.00736, 2022.
* [64] Matt Langione, JF Bobier, L Krayer, H Park, and A Kumar. The Race to Quantum Advantage Depends on Benchmarking. Boston Consulting Group, Tech. Rep, 2022.
* [65] Junchao Wang, Guoping Guo, and Zheng Shan. SoK: Benchmarking the Performance of a Quantum Computer. Entropy, 24(10):1467, 2022.
* [66] Kristel Michielsen, Madita Nocon, Dennis Willsch, Fengping Jin, Thomas Lippert, and Hans De Raedt. Benchmarking gate-based quantum computers. Computer Physics Communications, 220:44–55, 2017.
* [67] Daniel Koch, Brett Martin, Saahil Patel, Laura Wessing, and Paul M Alsing. Demonstrating NISQ era challenges in algorithm design on IBM’s 20 qubit quantum computer. AIP Advances, 10(9):095101, 2020.
* [68] Andrew W Cross, Lev S Bishop, Sarah Sheldon, Paul D Nation, and Jay M Gambetta. Validating quantum computers using randomized model circuits. Physical Review A, 100(3):032328, 2019.
* [69] Peter Chapman. Scaling IonQ’s Quantum Computers: The Roadmap. https://ionq.com/, 2020.
* [70] Arjan Cornelissen, Johannes Bausch, and András Gilyén. Scalable benchmarks for gate-based quantum computers. arXiv preprint arXiv:2104.10698, 2021.
* [71] Byron Tasseff, Tameem Albash, Zachary Morrell, Marc Vuffray, Andrey Y Lokhov, Sidhant Misra, and Carleton Coffrin. On the Emerging Potential of Quantum Annealing Hardware for Combinatorial Optimization. arXiv preprint arXiv:2210.04291, 2022.
## Appendix A Current Benchmarking Landscape
As described in Ref. [64], quantum benchmarking faces many more challenges
than its classical counterparts. After all, there is no singular “quantum
technology.” Quantum devices are made from different materials and used for
vastly different purposes. This makes it difficult to predict how the field
will progress, as each type of technology develops breakthroughs at different
rates. Additionally, quantum devices are ultimately compared not only to other
quantum computers but also to the performance of classical devices, itself an
ever-moving target.
Given the multitude of challenges, it is important to ask how any benchmark
result compares to others. This appendix presents a non-exhaustive list of
key takeaways from other benchmarking papers, to better situate this paper
within the wider landscape. The year of each paper's publication is shown in
Figure 5. For more information on the current state of quantum benchmarking,
Ref. [65] provides an excellent summary of many benchmarking papers; in it,
Wang et al. separate quantum benchmarks into three classes: physical,
aggregative, and application-level.
Figure 5: Timeline of the key benchmarking papers considered in this
appendix. The bracketed numbers correspond to the reference numbers used in
this paper.
Physical benchmarks are hardware-specific. These tasks measure the
engineering capabilities and limitations of quantum devices, such as the
impacts of decoherence and noise, which are established as major obstacles
to the development of fully functional quantum devices [66, 41]. Examples of
physical benchmarking papers include Ref. [39], which measures qubit
coherences and gate fidelities of IonQ's trapped-ytterbium qubits, and
Ref. [67], which measures T1 and T2 coherence times as well as the CCNOT
gate and the adjoint Quantum Fourier Transform on a 20-qubit IBM device.
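As a simplified illustration of this class of benchmark, a T1 relaxation time can be estimated by fitting an exponential decay to measured excited-state populations. The sketch below uses synthetic data and an assumed decay model $P(t)=e^{-t/T_{1}}$; it is not data from any of the devices above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed relaxation model: excited-state population P(t) = exp(-t / T1).
def decay(t, t1):
    return np.exp(-t / t1)

# Synthetic "measurement": a qubit with T1 = 50 us, sampled at 20 delay
# times with a small amount of readout noise.
t = np.linspace(0.0, 200e-6, 20)
rng = np.random.default_rng(0)
populations = decay(t, 50e-6) + rng.normal(0.0, 0.01, t.size)

# Least-squares fit recovers T1 from the noisy data.
(t1_fit,), _ = curve_fit(decay, t, populations, p0=[30e-6])
print(f"Estimated T1 = {t1_fit * 1e6:.1f} us")
```

A real physical benchmark would repeat such fits over many qubits and interleave them with gate-fidelity measurements, but the fitting step itself is of this form.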
Aggregated benchmarks measure how well multiple qubits and gates work
together. Perhaps the most widely used of these hardware-agnostic metrics is
Quantum Volume. Introduced by IBM in Ref. [68], Quantum Volume measures the
maximum circuit width (number of qubits) and depth (layers of gates) that a
device can run such that the two numbers are equal and the fidelity is
maintained above a certain threshold. For a square circuit of width and
depth $N$, the Quantum Volume is defined as $2^{N}$; the side length $N$
itself is sometimes referred to as the number of Algorithmic Qubits [69].
For example, a quantum device that can attain high fidelity with 4 qubits
and a circuit depth of 4 has a Quantum Volume of $2^{4}=16$ and is said to
have 4 Algorithmic Qubits.
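The definition above can be expressed in a few lines of Python. The pass/fail results here are hypothetical placeholders; an actual Quantum Volume measurement derives them from the heavy-output statistics of randomized model circuits [68]:

```python
def quantum_volume(passes):
    """Quantum Volume from square-circuit results.

    `passes` maps a square-circuit size N (width = depth = N) to whether
    the device kept fidelity above the required threshold at that size.
    Returns 2**N for the largest passing N (N being the number of
    Algorithmic Qubits), or 1 if no size passes.
    """
    passing = [n for n, ok in passes.items() if ok]
    return 2 ** max(passing) if passing else 1

# Hypothetical device: passes up to width = depth = 4, fails at 5.
results = {2: True, 3: True, 4: True, 5: False}
print(quantum_volume(results))  # Quantum Volume 16, i.e. 4 Algorithmic Qubits
```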
Lastly, application-based benchmarks, which include this paper, demonstrate
the abilities of a device to perform specific algorithms and tasks. A large
portion of the excitement around future quantum computers is their potential
for speedups in a variety of common algorithms and optimizations. As the
applications for quantum devices are vast, application-based benchmarks vary
dramatically. For the sake of clarity, this category of benchmarks is divided
into subcategories and is presented in the following paragraphs in roughly
chronological order. These subcategories are quantum chemistry, multiple task
suites, visual representation, and quantum annealing.
One application of quantum devices is the simulation of chemical reactions,
which are by their nature quantum mechanical. The usefulness of this task has
led to the simulation of quantum chemistry becoming a persistent benchmark for
quantum computers. For example, Ref. [40] utilizes simulated alkali metal
hydrides to test the Variational Quantum Eigensolver on IBM and Rigetti’s
superconducting devices. With this approach, they were able to reach a high
enough accuracy in their simulations to reproduce the results of these
chemical reactions. In a similar vein, Ref. [46] proposes a problem from
solid-state physics, a one-dimensional Fermi-Hubbard model. In one dimension,
this model has an exact solution, which makes it straightforward to test the
results of the Variational Quantum Eigensolver. By layering VQE on top of
itself, they managed to recreate the theoretical one-dimensional results with
Google’s Sycamore hardware.
Later, entire suites of benchmarks were developed to test various tasks and
metrics, such as QASMBench [43]. QASMBench is a benchmarking suite of
computational problems for quantum devices based on the OpenQASM assembly
language. It presents metrics in multiple tasks including linear algebra,
chemistry, optimization, and cryptography. Other suites have also been
developed, such as QScore [45] and QPack [47] which both present benchmarking
that centers around Max-Cut and similar optimization problems.
Then, building on the framework of IBM's volumetric benchmarking, Ref. [44]
considers specific tasks for which deep circuits (larger circuit depth) and
shallow circuits (larger circuit width) are better suited, such as state
preparation and IQP circuits respectively. Ref. [42], meanwhile, uses
volumetric benchmarks as a backdrop against which to measure a suite of
quantum algorithms, such as the Quantum Fourier Transform and Grover's
Search. These algorithms are tested
many times at different depths and widths. Consistently, as the tasks increase
in size and move farther away from the center of the quantum volume square,
the fidelity drops. In this way, the relationship between a device’s quantum
volume and the fidelity of quantum operations is readily apparent through
visually accessible figures that show when the devices begin to fail. The use
of visual assessments allows for an intuitive understanding of benchmarking
and has become more standard. For example, Ref. [70] tests 21 quantum
devices with algorithms chosen not only for performance but also with
visual accessibility in mind.
Finally, benchmarks from the field of quantum annealing, which exploits the
correspondence between a physical Ising model and quadratic unconstrained
binary optimization (QUBO) to solve combinatorial optimization tasks, are
necessary for annealing devices. Ref. [71] showed that specific instances of
quantum annealing on D-Wave's Advantage were able to produce solutions
within $0.5\%$ of the best-known solution in a fraction of the time of the
fastest classical algorithms. While this is not a fundamental speedup,
order-of-magnitude differences in execution time nevertheless have
substantial industrial potential.
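To make the QUBO objective concrete, the following sketch brute-forces the minimum of $x^{T}Qx$ over binary vectors for a toy Max-Cut instance on a three-node path graph. The matrix is illustrative only, and exhaustive search is tractable only for small $n$, which is exactly why annealers are of interest:

```python
import itertools
import numpy as np

def solve_qubo(Q):
    """Exhaustively minimise x^T Q x over binary vectors x (small n only)."""
    n = Q.shape[0]
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        energy = x @ Q @ x
        if energy < best_e:
            best_x, best_e = bits, energy
    return best_x, best_e

# Max-Cut on the path graph 0-1-2 as a QUBO: each cut edge lowers the
# energy by 1, so the optimum cuts both edges (e.g. x = (0, 1, 0)).
Q = np.array([[-1,  2,  0],
              [ 0, -2,  2],
              [ 0,  0, -1]])
x, e = solve_qubo(Q)
print(x, e)  # a cut of both edges, energy -2
```

An annealer targets the same objective by relaxing a physical Ising system into a low-energy configuration rather than enumerating bitstrings.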
Application-based benchmarks span a wide field of practical industry tasks
as well as theoretical problems tailored to comparing devices. Ultimately,
while many benchmarks have been proposed, they share a common aim: to
quantify the degree to which a given quantum device can perform industry
tasks, and to communicate the results effectively. With this in mind, Figure
6 provides an inexact categorization of these benchmarks based on the
specificity of their metrics (that is, the generality of a given benchmark
with respect to the hardware required and the algorithms available) as well
as their practicality on current noisy quantum computers.
This paper's testing of supervised learning on various devices serves as an
application-based benchmark, yielding preliminary results on the role of
neural networks in quantum machine learning. It compares the time and
accuracy performance of a range of real and simulated quantum devices on a
classification task. The future role that quantum machines will play in
high-level neural networks and deep learning remains to be determined, but
these results demonstrate practical effects of quantum hardware as well as
quantum-inspired classical hardware.
Figure 6: Key benchmarking papers from the past 5 years plotted with
specificity versus practicality. Specificity describes the requirements of
hardware as well as the generality of algorithms presented by the benchmarking
paper. Practicality refers to the use of metrics that apply to real-world
industry problems, as opposed to theoretical problems that have been
manufactured for the purposes of benchmarks themselves. It should be
reiterated that all quadrants have important uses. Naturally, the placement of
each paper is subjective, and others may come to different conclusions about
where each benchmark belongs.
## Appendix B Raw numerical results
In this section, the raw results of the benchmarks are provided in tabular
view. Tab. 3 demonstrates the data obtained during the training process,
whereas Tab. 4 shows the data obtained during the inference process. The blank
spaces represent two types of non-applicability: 1) the standard deviations of
training (inference) runtimes were calculated up to 24 (26) qubits to avoid
long execution times as well as limiting our carbon emission, and 2) as not
all QPUs included enough qubits to fulfill all parts, their numbers are
presented up to their highest even-qubit availability.
Tabs. 5, 6, 7, 8 show the dependence of the QBN runtimes on the number of
threads for quantum training, quantum inference, hybrid inference, and
hybrid training, respectively. In all cases, runtimes for the larger
circuits generally improve as the number of threads increases; moreover, in
some cases even low-qubit circuits benefit from more than a single thread.
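The "Best thread count" rows of Tabs. 5, 6, 7, and 8 are obtained simply by selecting, for each qubit count, the thread setting with the lowest mean runtime. A minimal sketch, using the 20-qubit pure-quantum-training means from Tab. 5:

```python
def best_thread_count(mean_runtimes):
    """Return the thread count whose mean runtime (seconds) is lowest."""
    return min(mean_runtimes, key=mean_runtimes.get)

# Mean training runtimes for the 20-qubit circuit (Tab. 5), per thread count.
runtimes_20q = {1: 6.6382, 2: 3.7856, 4: 2.3383, 8: 1.5209,
                12: 1.3844, 16: 1.2194, 20: 1.2593, 24: 1.2116}
print(best_thread_count(runtimes_20q))  # 24
```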
Furthermore, Tab. 9 provides the numerical results for the QPU accuracy
tests, whereas the corresponding times are shown in Tab. 10. These runtimes
are largely in agreement with those in Tab. 3. However, it is worth noting
that these circuits are twice as deep as the ones used in the runtime
benchmarks, as explained in Sec. III.2. Tab. 11 shows the runtimes
associated with running large quantum circuits. A characteristic feature of
this regime is a scaling on the quantum hardware that is unavailable to any
classical machine. This hints at a decisive quantum advantage in this
regime, conditional on the accuracy also becoming competitive.
Finally, Tab. 12 lists the pricing of the various QPUs used in this work for
training and inference.
Quantum Neural Network
---
$n_{\text{qubits}}$ | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20 | 22 | 24 | 26 | 28 | 30
QBN | MEAN | 0.0024 | 0.0054 | 0.0099 | 0.0160 | 0.0248 | 0.0437 | 0.0866 | 0.1636 | 0.3786 | 1.2116 | 9.8348 | 46.9552 | 217.3697 | 1005.2324 | 4640.2727
STD | 0.0003 | 0.0002 | 0.0005 | 0.0006 | 0.0007 | 0.0163 | 0.0026 | 0.0122 | 0.0251 | 0.0283 | 0.0858 | 0.4396 | | |
QPL | MEAN | 0.0039 | 0.0095 | 0.0132 | 0.0216 | 0.0317 | 0.0455 | 0.0709 | 0.1724 | 0.7224 | 4.6597 | 36.8044 | 211.0829 | 998.0364 | 5139.8335 | 28121.3965
STD | 0.0006 | 0.0173 | 0.0002 | 0.0109 | 0.0144 | 0.0172 | 0.0184 | 0.0286 | 0.0383 | 0.1668 | 0.9413 | 8.1045 | | |
APL | MEAN | 0.0058 | 0.0099 | 0.0175 | 0.0285 | 0.0407 | 0.0587 | 0.0882 | 0.2106 | 0.7963 | 3.7189 | 39.8270 | 212.9200 | 1165.6600 | 4968.9300 | 22281.9225
STD | 0.0109 | 0.0002 | 0.0003 | 0.0094 | 0.0072 | 0.0110 | 0.0105 | 0.0179 | 0.0252 | 0.1924 | 0.8422 | 2.0854 | | |
ASV | MEAN | 14.0362 | 24.6182 | 36.7133 | 50.7568 | 62.1815 | 75.1483 | 86.7919 | 99.1183 | 112.1725 | 121.7680 | 136.4578 | 185.8316 | 285.5654 | 794.7542 | 3212.0178
STD | 1.2146 | 1.4740 | 2.6514 | 9.8396 | 3.5893 | 7.2071 | 9.1376 | 7.6116 | 15.6195 | 9.7491 | 8.7736 | 11.5878 | | |
Hybrid Quantum Neural Network
---
QBN | MEAN | 0.0029 | 0.0055 | 0.0096 | 0.0162 | 0.0248 | 0.0437 | 0.0774 | 0.1498 | 0.3656 | 1.1677 | 9.7629 | 46.5285 | 217.2490 | 1003.2794 | 4676.9508
STD | 0.0001 | 0.0003 | 0.0004 | 0.0005 | 0.0007 | 0.0175 | 0.0030 | 0.0029 | 0.0116 | 0.0168 | 0.0978 | 0.7141 | | |
QPL | MEAN | 0.0042 | 0.0098 | 0.0135 | 0.0219 | 0.0328 | 0.0484 | 0.0695 | 0.1729 | 0.7119 | 4.9052 | 41.7528 | 213.8433 | 1057.6980 | 5150.4466 | 27499.8175
STD | 0.0006 | 0.0171 | 0.0002 | 0.0112 | 0.0192 | 0.0270 | 0.0156 | 0.0290 | 0.0373 | 0.2127 | 0.9243 | 8.0949 | | |
APL | MEAN | 0.0057 | 0.0096 | 0.0167 | 0.0272 | 0.0388 | 0.0563 | 0.0861 | 0.2027 | 0.7968 | 3.8129 | 39.2271 | 212.5060 | 1043.6700 | 4926.9200 | 22429.9246
STD | 0.0109 | 0.0002 | 0.0003 | 0.0092 | 0.0072 | 0.0109 | 0.0131 | 0.0184 | 0.0281 | 0.2071 | 1.0127 | 2.9169 | | |
ASV | MEAN | 14.4530 | 26.6481 | 38.8279 | 51.0942 | 64.5029 | 74.9574 | 88.3625 | 100.9840 | 112.5433 | 122.2724 | 138.2863 | 189.3105 | 290.3000 | 821.8000 | 3155.0000
STD | 1.2316 | 3.3369 | 3.5860 | 9.2361 | 7.7340 | 4.5772 | 6.6438 | 13.1945 | 11.3345 | 8.8118 | 8.5446 | 9.0637 | | |
Table 3: Benchmarking results – training of quantum and hybrid quantum neural
networks.
Quantum Neural Network
---
$n_{\text{qubits}}$ | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20 | 22 | 24 | 26 | 28 | 30
Simulator
QBN | MEAN | 0.0006 | 0.0008 | 0.0010 | 0.0014 | 0.0014 | 0.0021 | 0.0030 | 0.0046 | 0.0109 | 0.0295 | 0.2269 | 0.9442 | 4.0763 | 17.6377 | 77.0772
STD | 0.0000 | 0.0002 | 0.0003 | 0.0001 | 0.0004 | 0.0008 | 0.0006 | 0.0007 | 0.0009 | 0.0018 | 0.0135 | 0.0271 | 0.0520 | |
QPL | MEAN | 0.0011 | 0.0015 | 0.0019 | 0.0040 | 0.0027 | 0.0032 | 0.0041 | 0.0079 | 0.0249 | 0.1285 | 0.9461 | 4.1994 | 20.3387 | 99.3129 | 390.8168
STD | 0.0003 | 0.0001 | 0.0001 | 0.0169 | 0.0002 | 0.0003 | 0.0003 | 0.0007 | 0.0010 | 0.0183 | 0.0550 | 0.1163 | 1.5112 | |
APL | MEAN | 0.0011 | 0.0016 | 0.0020 | 0.0025 | 0.0030 | 0.0035 | 0.0044 | 0.0076 | 0.0241 | 0.0975 | 0.8895 | 4.2057 | 18.7627 | 81.3886 | 340.6302
STD | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0003 | 0.0014 | 0.0056 | 0.0069 | 0.0149 | 0.1176 | |
ASV | MEAN | 2.9186 | 2.9810 | 2.9658 | 2.9148 | 2.9611 | 2.9631 | 3.1480 | 3.0285 | 3.6932 | 3.1081 | 3.1192 | 4.5938 | 5.7911 | 14.1463 | 52.1173
STD | 0.6806 | 0.7128 | 0.6408 | 0.6332 | 0.6130 | 0.6648 | 1.3843 | 0.7282 | 6.0972 | 0.7290 | 0.6413 | 3.8130 | 1.0294 | |
QPU
IonQ Harmony | MEAN | 257.9168 | 155.6078 | 455.7227 | 128.1430 | 456.8760 | | | | | | | | | |
STD | 405.6818 | 264.2781 | 493.2422 | 83.4089 | 582.5769 | | | | | | | | | |
OQC Lucy | MEAN | 3.5553 | 4.2667 | 4.8857 | 5.0052 | | | | | | | | | | |
STD | 0.5025 | 0.5352 | 1.2849 | 0.9200 | | | | | | | | | | |
Rigetti Aspen-M2 | MEAN | 19.8218 | 20.3571 | 21.5349 | 20.0242 | 24.5956 | 27.0217 | 36.3195 | 29.0213 | 30.4019 | 39.2857 | 38.8354 | 39.2298 | 41.1695 | 45.7342 | 50.5362
STD | 3.1108 | 2.5300 | 5.8806 | 2.7064 | 4.8186 | 3.3928 | 9.9247 | 15.6830 | 4.8891 | 4.8322 | 7.1394 | 9.9047 | 9.0877 | 3.7660 | 9.3982
IBMQ Falcon r5.11 | MEAN | 18.9823 | 19.9964 | 19.5158 | 20.8323 | 21.5527 | 21.6090 | 23.3926 | 23.9918 | 25.2614 | 27.1860 | 28.9943 | 30.2249 | 31.6950 | |
STD | 0.6948 | 2.3790 | 0.5109 | 0.6333 | 0.7015 | 0.4522 | 0.9337 | 0.4424 | 1.1654 | 0.8637 | 0.6914 | 1.1444 | 0.8600 | |
Hybrid Quantum Neural Network
---
Simulator
QBN | MEAN | 0.0006 | 0.0008 | 0.0010 | 0.0014 | 0.0014 | 0.0021 | 0.0030 | 0.0046 | 0.0109 | 0.02952 | 0.229162 | 0.95632 | 4.119108 | 18.019226 | 77.074412
STD | 0.0000 | 0.0002 | 0.0003 | 0.0001 | 0.0004 | 0.0008 | 0.0006 | 0.0007 | 0.0009 | 0.001796 | 0.012569 | 0.015437 | 0.092773 | |
QPL | MEAN | 0.0011 | 0.0015 | 0.0019 | 0.0040 | 0.0028 | 0.0034 | 0.0043 | 0.0079 | 0.0270 | 0.1314 | 0.9457 | 4.6933 | 20.6906 | 104.8952 | 466.5611
STD | 0.0002 | 0.0001 | 0.0001 | 0.0167 | 0.0003 | 0.0005 | 0.0004 | 0.0007 | 0.0150 | 0.0074 | 0.0398 | 0.1462 | 2.6173 | |
APL | MEAN | 0.0011 | 0.0016 | 0.0021 | 0.0025 | 0.0030 | 0.0036 | 0.0045 | 0.0078 | 0.0246 | 0.1001 | 0.9136 | 4.2659 | 18.6986 | 95.8960 | 398.8728
STD | 1.0000 | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0001 | 0.0002 | 0.0003 | 0.0017 | 0.0077 | 0.0103 | 0.0187 | |
ASV | MEAN | 2.7970 | 2.8320 | 2.8027 | 2.7822 | 2.7476 | 2.7362 | 3.4099 | 2.8901 | 2.8085 | 3.0879 | 2.8273 | 3.6522 | 5.3048 | 14.3117 | 51.9528
STD | 0.6013 | 1.2064 | 0.6225 | 0.5762 | 0.4415 | 0.4935 | 6.0821 | 0.7496 | 0.5778 | 2.9173 | 0.5501 | 0.6063 | 0.6095 | |
QPU
IonQ Harmony | MEAN | 262.3067 | 232.7949 | 379.4396 | 129.2485 | 455.5415 | | | | | | | | | |
STD | 405.5434 | 342.4183 | 509.7297 | 83.8298 | 575.1551 | | | | | | | | | |
OQC Lucy | MEAN | 3.7157 | 4.5734 | 4.6802 | 5.6457 | | | | | | | | | | |
STD | 0.7375 | 0.7556 | 0.9706 | 1.4995 | | | | | | | | | | |
Rigetti Aspen-M2 | MEAN | 16.8365 | 20.7644 | 23.3819 | 20.5349 | 24.0114 | 25.4231 | 33.9155 | 27.0000 | 28.1941 | 37.1067 | 44.8568 | 35.0372 | 40.1152 | 45.8019 | 51.1153
STD | 2.8938 | 2.6985 | 6.3815 | 2.6125 | 4.6563 | 3.5133 | 4.2292 | 5.8891 | 3.2389 | 4.3796 | 9.6188 | 3.9946 | 6.5734 | 7.5121 | 7.1591
IBMQ Falcon r5.11 | MEAN | 18.6851 | 19.0753 | 21.7630 | 32.3815 | 22.5955 | 22.1030 | 22.6698 | 23.5265 | 24.7820 | 26.7634 | 28.3862 | 29.9197 | 31.7038 | |
STD | 0.4072 | 0.5190 | 5.5041 | 9.8036 | 2.5923 | 0.3981 | 0.4838 | 0.5665 | 0.6743 | 0.6848 | 0.4064 | 0.4930 | 0.6160 | |
Table 4: Benchmarking results – inference of quantum and hybrid quantum neural
networks.
Pure Quantum Training
---
Threads | $n_{\text{qubits}}$ | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20
1 | MEAN | 0.0024 | 0.0054 | 0.0099 | 0.0160 | 0.0248 | 0.0437 | 0.0964 | 0.3243 | 1.4042 | 6.6382
STD | 0.0003 | 0.0002 | 0.0005 | 0.0006 | 0.0007 | 0.0163 | 0.0023 | 0.0021 | 0.0040 | 0.0066
2 | MEAN | 0.0026 | 0.0060 | 0.0118 | 0.0195 | 0.0299 | 0.0456 | 0.1439 | 0.2196 | 1.1028 | 3.7856
STD | 0.0001 | 0.0002 | 0.0002 | 0.0004 | 0.0009 | 0.0016 | 0.2070 | 0.0255 | 0.3134 | 0.0075
4 | MEAN | 0.0027 | 0.0066 | 0.0139 | 0.0211 | 0.0331 | 0.0501 | 0.0866 | 0.1724 | 0.5542 | 2.3383
STD | 0.0001 | 0.0001 | 0.0021 | 0.0009 | 0.0017 | 0.0021 | 0.0026 | 0.0114 | 0.0102 | 0.0400
8 | MEAN | 0.0029 | 0.0090 | 0.0165 | 0.0272 | 0.0406 | 0.0561 | 0.0904 | 0.1636 | 0.4147 | 1.5209
STD | 0.0003 | 0.0008 | 0.0012 | 0.0013 | 0.0020 | 0.0018 | 0.0043 | 0.0122 | 0.0132 | 0.0262
12 | MEAN | 0.0036 | 0.0125 | 0.0236 | 0.0356 | 0.0517 | 0.0716 | 0.1064 | 0.1674 | 0.3927 | 1.3844
STD | 0.0002 | 0.0019 | 0.0006 | 0.0003 | 0.0006 | 0.0009 | 0.0050 | 0.0043 | 0.0061 | 0.0323
16 | MEAN | 0.0037 | 0.0121 | 0.0261 | 0.0431 | 0.0603 | 0.0835 | 0.1104 | 0.1737 | 0.3786 | 1.2194
STD | 0.0001 | 0.0002 | 0.0005 | 0.0008 | 0.0007 | 0.0016 | 0.0078 | 0.0026 | 0.0251 | 0.0373
20 | MEAN | 0.0040 | 0.0130 | 0.0335 | 0.0583 | 0.0711 | 0.1098 | 0.1424 | 0.1930 | 0.4020 | 1.2593
STD | 0.0004 | 0.0008 | 0.0006 | 0.0011 | 0.0008 | 0.0022 | 0.0026 | 0.0036 | 0.0095 | 0.0298
24 | MEAN | 0.0042 | 0.0163 | 0.0355 | 0.0618 | 0.0879 | 0.1229 | 0.1577 | 0.2179 | 0.4339 | 1.2116
STD | 0.0001 | 0.0009 | 0.0004 | 0.0008 | 0.0006 | 0.0047 | 0.0028 | 0.0062 | 0.0158 | 0.0283
Best thread count | | 1 | 1 | 1 | 1 | 1 | 1 | 4 | 8 | 16 | 24
Table 5: The performance of QMware basiq native (QBN) when training the QNN
for different numbers of threads.
Pure Quantum Forward
---
Threads | $n_{\text{qubits}}$ | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20
1 | MEAN | 0.0008 | 0.0007 | 0.0009 | 0.0011 | 0.0015 | 0.0020 | 0.0034 | 0.0099 | 0.0400 | 0.1687
STD | 0.0001 | 0.0001 | 0.0002 | 0.0003 | 0.0005 | 0.0006 | 0.0008 | 0.0008 | 0.0148 | 0.0087
2 | MEAN | 0.0007 | 0.0008 | 0.0010 | 0.0012 | 0.0030 | 0.0020 | 0.0030 | 0.0068 | 0.0236 | 0.0935
STD | 0.0001 | 0.0002 | 0.0002 | 0.0001 | 0.0002 | 0.0003 | 0.0003 | 0.0004 | 0.0002 | 0.0024
4 | MEAN | 0.0006 | 0.0008 | 0.0011 | 0.0017 | 0.0017 | 0.0021 | 0.0026 | 0.0050 | 0.0150 | 0.0576
STD | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0000 | 0.0002 | 0.0000 | 0.0002 | 0.0004 | 0.0019
8 | MEAN | 0.0006 | 0.0009 | 0.0011 | 0.0018 | 0.0018 | 0.0021 | 0.0026 | 0.0039 | 0.0118 | 0.0389
STD | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0000 | 0.0001 | 0.0000 | 0.0000 | 0.0005 | 0.0040
12 | MEAN | 0.0007 | 0.0013 | 0.0014 | 0.0022 | 0.0022 | 0.0026 | 0.0029 | 0.0046 | 0.0122 | 0.0388
STD | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0000 | 0.0001 | 0.0000 | 0.0003 | 0.0014 | 0.0037
16 | MEAN | 0.0007 | 0.0013 | 0.0016 | 0.0026 | 0.0023 | 0.0029 | 0.0030 | 0.0048 | 0.0111 | 0.0346
STD | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0002 | 0.0001 | 0.0002 | 0.0002 | 0.0009 | 0.0022
20 | MEAN | 0.0008 | 0.0015 | 0.0022 | 0.0033 | 0.0036 | 0.0035 | 0.0039 | 0.0059 | 0.0119 | 0.0356
STD | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0002 | 0.0001 | 0.0001 | 0.0001 | 0.0005 | 0.0018
24 | MEAN | 0.0009 | 0.0016 | 0.0028 | 0.0036 | 0.0036 | 0.0044 | 0.0044 | 0.0063 | 0.0120 | 0.0355
STD | 0.0000 | 0.0000 | 0.0001 | 0.0001 | 0.0001 | 0.0006 | 0.0002 | 0.0001 | 0.0004 | 0.0021
Best thread count | | 4 | 1 | 1 | 1 | 1 | 1 | 8 | 8 | 16 | 16
Table 6: The performance of QMware basiq native (QBN) when performing
inference with the QNN for different numbers of threads.
Hybrid Quantum Forward
---
Threads | $n_{\text{qubits}}$ | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20
1 | MEAN | 0.0008 | 0.0008 | 0.0010 | 0.0028 | 0.0014 | 0.0021 | 0.0036 | 0.0104 | 0.0419 | 0.1680
STD | 0.0001 | 0.0002 | 0.0003 | 0.0159 | 0.0004 | 0.0008 | 0.0008 | 0.0013 | 0.0168 | 0.0105
2 | MEAN | 0.0007 | 0.0010 | 0.0011 | 0.0014 | 0.0016 | 0.0022 | 0.0331 | 0.0071 | 0.0242 | 0.0932
STD | 0.0001 | 0.0001 | 0.0001 | 0.0002 | 0.0002 | 0.0002 | 0.0076 | 0.0003 | 0.0034 | 0.0017
4 | MEAN | 0.0006 | 0.0010 | 0.0011 | 0.0014 | 0.0021 | 0.0022 | 0.0059 | 0.0052 | 0.0177 | 0.0617
STD | 0.0000 | 0.0000 | 0.0001 | 0.0001 | 0.0002 | 0.0001 | 0.0061 | 0.0002 | 0.0064 | 0.0103
8 | MEAN | 0.0006 | 0.0010 | 0.0012 | 0.0015 | 0.0023 | 0.0024 | 0.0030 | 0.0046 | 0.0118 | 0.0414
STD | 0.0001 | 0.0001 | 0.0000 | 0.0000 | 0.0001 | 0.0001 | 0.0006 | 0.0007 | 0.0011 | 0.0100
12 | MEAN | 0.0007 | 0.0014 | 0.0014 | 0.0017 | 0.0026 | 0.0029 | 0.0035 | 0.0055 | 0.0123 | 0.0363
STD | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0003 | 0.0002 | 0.0001 | 0.0008 | 0.0046
16 | MEAN | 0.0008 | 0.0015 | 0.0016 | 0.0020 | 0.0030 | 0.0032 | 0.0039 | 0.0055 | 0.0109 | 0.0308
STD | 0.0001 | 0.0000 | 0.0000 | 0.0001 | 0.0001 | 0.0004 | 0.0006 | 0.0002 | 0.0009 | 0.0019
20 | MEAN | 0.0008 | 0.0016 | 0.0023 | 0.0026 | 0.0035 | 0.0038 | 0.0053 | 0.0060 | 0.0121 | 0.0312
STD | 0.0000 | 0.0001 | 0.0002 | 0.0000 | 0.0001 | 0.0002 | 0.0003 | 0.0002 | 0.0004 | 0.0019
24 | MEAN | 0.0009 | 0.0019 | 0.0031 | 0.0034 | 0.0044 | 0.0040 | 0.0057 | 0.0062 | 0.0118 | 0.0295
STD | 0.0000 | 0.0002 | 0.0000 | 0.0001 | 0.0001 | 0.0002 | 0.0002 | 0.0001 | 0.0002 | 0.0018
Best thread count | | 4 | 1 | 1 | 4 | 1 | 1 | 8 | 8 | 16 | 24
Table 7: The performance of QMware basiq native (QBN) when performing
inference using the HQNN for different numbers of threads.
Hybrid Quantum Training
---
Threads | $n_{\text{qubits}}$ | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20
1 | MEAN | 0.0008 | 0.0007 | 0.0009 | 0.0011 | 0.0015 | 0.0020 | 0.0034 | 0.0099 | 0.0400 | 0.1687
STD | 0.0001 | 0.0001 | 0.0002 | 0.0003 | 0.0005 | 0.0006 | 0.0008 | 0.0008 | 0.0148 | 0.0087
2 | MEAN | 0.0007 | 0.0008 | 0.0010 | 0.0012 | 0.0030 | 0.0020 | 0.0030 | 0.0068 | 0.0236 | 0.0935
STD | 0.0001 | 0.0002 | 0.0002 | 0.0001 | 0.0002 | 0.0003 | 0.0003 | 0.0004 | 0.0002 | 0.0024
4 | MEAN | 0.0006 | 0.0008 | 0.0011 | 0.0017 | 0.0017 | 0.0021 | 0.0026 | 0.0050 | 0.0150 | 0.0576
STD | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0000 | 0.0002 | 0.0000 | 0.0002 | 0.0004 | 0.0019
8 | MEAN | 0.0006 | 0.0009 | 0.0011 | 0.0018 | 0.0018 | 0.0021 | 0.0026 | 0.0039 | 0.0118 | 0.0389
STD | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0000 | 0.0001 | 0.0000 | 0.0000 | 0.0005 | 0.0040
12 | MEAN | 0.0007 | 0.0013 | 0.0014 | 0.0022 | 0.0022 | 0.0026 | 0.0029 | 0.0046 | 0.0122 | 0.0388
STD | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0000 | 0.0001 | 0.0000 | 0.0003 | 0.0014 | 0.0037
16 | MEAN | 0.0007 | 0.0013 | 0.0016 | 0.0026 | 0.0023 | 0.0029 | 0.0030 | 0.0048 | 0.0111 | 0.0346
STD | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0002 | 0.0001 | 0.0002 | 0.0002 | 0.0009 | 0.0022
20 | MEAN | 0.0008 | 0.0015 | 0.0022 | 0.0033 | 0.0036 | 0.0035 | 0.0039 | 0.0059 | 0.0119 | 0.0356
STD | 0.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0002 | 0.0001 | 0.0001 | 0.0001 | 0.0005 | 0.0018
24 | MEAN | 0.0009 | 0.0016 | 0.0028 | 0.0036 | 0.0036 | 0.0044 | 0.0044 | 0.0063 | 0.0120 | 0.0355
STD | 0.0000 | 0.0000 | 0.0001 | 0.0001 | 0.0001 | 0.0006 | 0.0002 | 0.0001 | 0.0004 | 0.0021
Best thread count | | 4 | 1 | 1 | 1 | 1 | 1 | 8 | 8 | 16 | 16
Table 8: The performance of QMware basiq native (QBN) when training the hybrid
quantum network for different numbers of threads. Accuracy, %
---
$n_{\text{qubits}}$ | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18
IonQ Harmony | MEAN | 99.76 | 99.73 | 99.52 | 99.19 | 99.12 | | | |
STD | 0.20 | 0.17 | 0.12 | 0.27 | 0.30 | | | |
Rigetti Aspen-M2 | MEAN | 88.77 | 5.47 | 0.67 | 0.15 | 0.04 | 0.02 | 0.00 | 0.00 | 0.00
STD | 6.81 | 0.75 | 2.65 | 0.12 | 0.07 | 0.04 | 0.00 | 0.00 | 0.00
OQC Lucy | MEAN | 93.76 | 67.87 | 63.66 | 57.64 | | | | |
STD | 0.69 | 1.98 | 1.59 | 0.93 | | | | |
IBMQ Falcon r5.11 | MEAN | 93.23 | 54.40 | 2.97 | 1.40 | 0.13 | 0.03 | 0.00 | |
STD | 0.42 | 4.20 | 0.50 | 0.22 | 0.05 | 0.05 | 0.00 | |
Table 9: QPU accuracies (%).
Times, seconds
---
$n_{\text{qubits}}$ | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18
IonQ Harmony | MEAN | 350.2784 | 80.0766 | 339.8358 | 17.9963 | 73.2473 | | | |
STD | 501.8712 | 59.0565 | 630.4188 | 7.0536 | 102.1677 | | | |
Rigetti Aspen-M2 | MEAN | 58.7262 | 52.1269 | 60.1630 | 64.3308 | 58.6161 | 85.4907 | 85.7971 | 81.8377 | 99.0498
STD | 35.1338 | 21.8146 | 17.8216 | 16.9935 | 20.6759 | 26.5828 | 22.4378 | 33.9639 | 19.8638
OQC Lucy | MEAN | 3.4535 | 3.3095 | 3.3682 | 3.4450 | | | | |
STD | 0.4990 | 0.2007 | 0.2636 | 0.3664 | | | | |
IBMQ Falcon r5.11 | MEAN | 43.1557 | 20.1590 | 17.6970 | 18.2476 | 21.6717 | 18.6053 | 18.9948 | |
STD | 48.3393 | 4.9506 | 0.8301 | 0.8283 | 8.4172 | 0.3280 | 0.9642 | |
Table 10: QPU runtimes for the accuracy test.
$n_{\text{qubits}}$ | 32 | 34 | 36 | 38 | 40
QNN | QBN | 304.1323 | 1343.5551 | 5952.5783 | 28893.5688 | 48970.1862
| ASV | 237.3554 | 986.5651 | | |
| Rigetti Aspen-M2 MEAN | 51.9887 | 52.1990 | 59.0934 | 60.9397 | 73.9763
| Rigetti Aspen-M2 STD | 8.9592 | 7.9419 | 11.5971 | 7.4355 | 11.4524
HQNN | QBN | 270.6306 | 1244.3166 | 3450.6118 | 18630.4505 | 47273.2937
| ASV | 323.9802 | 1414.4067 | | |
| Rigetti Aspen-M2 MEAN | 49.9818 | 52.4805 | 61.9770 | 64.4956 | 67.4168
| Rigetti Aspen-M2 STD | 6.9296 | 7.0688 | 17.2564 | 7.6582 | 7.9843
Table 11: Inference runtimes beyond 30 qubits.
Prices, inference in US dollars and training in thousands of US dollars
---
$n_{\text{qubits}}$ | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20 | 22 | 24 | 26
IonQ Harmony | Inference | 10.30 | 10.30 | 10.30 | 10.30 | 10.30 | | | | | | | |
| Training | 515.00 | 927.00 | 1339.00 | 1751.00 | 2163.00 | | | | | | | |
Rigetti Aspen-M2 | Inference | 0.34 | 0.34 | 0.34 | 0.34 | 0.34 | 0.34 | 0.34 | 0.34 | 0.34 | 0.34 | 0.34 | 0.34 | 0.34
| Training | 16.75 | 30.15 | 43.55 | 56.95 | 70.35 | 83.75 | 97.15 | 110.55 | 123.95 | 137.35 | 150.75 | 164.15 | 177.55
OQC Lucy | Inference | 0.34 | 0.34 | 0.34 | 0.34 | | | | | | | | |
| Training | 16.75 | 30.15 | 43.55 | 56.95 | | | | | | | | |
IBM Falcon r5.11 | Inference Mean | 16.21 | 16.29 | 17.11 | 18.67 | 19.90 | 20.22 | 22.18 | 23.57 | 24.97 | 28.00 | 30.54 | 32.83 | 35.04
| Inference Std | 4.66 | 2.52 | 3.38 | 1.34 | 1.10 | 0.80 | 0.99 | 0.98 | 0.73 | 0.90 | 0.80 | 0.84 | 0.82
| Training Mean | 81.04 | 146.63 | 222.46 | 317.33 | 417.81 | 505.59 | 643.31 | 777.97 | 923.83 | 1147.91 | 1374.12 | 1608.50 | 1857.09
| Training Std | 23.30 | 22.66 | 43.93 | 22.86 | 23.06 | 19.97 | 28.67 | 32.39 | 26.83 | 37.07 | 36.18 | 40.98 | 43.49
Table 12: Cost of executing inference and training runs on QPUs.
# A variational method for functionals depending on eigenvalues
Romain Petrides Romain Petrides, Université de Paris, Institut de
Mathématiques de Jussieu - Paris Rive Gauche, bâtiment Sophie Germain, 75205
PARIS Cedex 13, France <EMAIL_ADDRESS>
###### Abstract.
We develop a systematic variational method for functionals depending on
eigenvalues of Riemannian manifolds. It is based on a new concept of
Palais-Smale sequences, which can be constructed thanks to a generalization of
classical min-max methods on $\mathcal{C}^{1}$ functionals to locally
Lipschitz functionals. We prove convergence results for these Palais-Smale
sequences arising from combinations of Laplace eigenvalues or combinations of
Steklov eigenvalues in dimension 2.
Optimization of eigenvalues of operators (the Laplacian with Dirichlet or
Neumann boundary conditions, the Dirichlet-to-Neumann operator, the
bi-Laplacian, the magnetic Laplacian, etc.) is a common field of spectral
geometry. We consider the eigenvalues as functionals depending on the shape
and topology of the domain, on the operator, and/or on the geometric structure
(Riemannian metric, CR structure, sub-Riemannian metric, etc.). One old and
celebrated problem was independently solved by Faber [Fab23] in 1923 and Krahn
[Kra25] in 1925: the domains minimizing the first Laplace eigenvalue with
Dirichlet boundary conditions among domains of the same volume in
$\mathbb{R}^{n}$ are Euclidean balls. This problem is very similar to the
classical isoperimetric problem, and the proof of this result uses the
isoperimetric inequality; for this reason, even when the perimeter is not
involved in the normalization of an eigenvalue functional (by a prescribed
area, perimeter, diameter, Cheeger constant, etc.), shape optimization for it
is often called an isoperimetric problem for the eigenvalue.
We can distinguish two main families of eigenvalue optimization problems. In
the first one, the ambient geometry is prescribed (for instance, the Euclidean
space $\mathbb{R}^{n}$, the sphere, the hyperbolic space, etc.) and the
optimization is with respect to the shape and topology of a domain in this
ambient space. Emblematic results are the Faber-Krahn inequality
[Fab23][Kra25] and the Szegö-Weinberger inequality [Sze54][Wei56]. In the
second one, the ambient topology is prescribed (we work on a fixed manifold)
but the optimization holds with respect to the metric on the manifold, or to
potentials involved in the eigenvalue operator. An emblematic result is
Hersch's inequality [Her70]: the round sphere maximizes the first Laplace
eigenvalue among metrics of the same area on the $2$-sphere. In both problems,
we look for bounds on eigenvalues, optimal inequalities, and critical
domains/metrics/potentials realizing these bounds.
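For concreteness, the two emblematic inequalities just mentioned can be stated in scale-invariant form as follows (a standard formulation, added here for the reader's convenience):

```latex
% Faber--Krahn: among open sets of given volume in R^n, the ball B
% minimizes the first Dirichlet eigenvalue; in scale-invariant form:
\lambda_1(\Omega)\,|\Omega|^{2/n} \;\geq\; \lambda_1(B)\,|B|^{2/n}
\qquad \text{for every bounded open set } \Omega\subset\mathbb{R}^n .

% Hersch: on the 2-sphere, the renormalized first Laplace eigenvalue
% is maximized by the round metric, with maximal value 8*pi:
\bar\lambda_1(g) \;=\; \lambda_1(g)\,A_g(\mathbb{S}^2) \;\leq\; 8\pi
\qquad \text{for every metric } g \text{ on } \mathbb{S}^2 .
```

In both cases equality characterizes the optimizer (the ball, respectively the round metric).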
The current paper is devoted to the second family of problems. In principle,
the bigger the space of variations is, the richer the critical points of the
functional are. For instance, critical metrics for combinations of Laplace
eigenvalues over Riemannian metrics with prescribed volume are associated to
minimal surfaces into ellipsoids (see [Pet21]), while critical metrics for
Steklov eigenvalues with prescribed perimeter are associated to free boundary
minimal surfaces into ellipsoids (see [Pet22]). If only one eigenvalue appears
in the functional, the target ellipsoids are spheres/balls, as was first
noticed by Nadirashvili [Nad96] for Laplace eigenvalues and by Fraser and
Schoen for Steklov eigenvalues [FS13][FS16]. This gives an elegant connection
with the theory of minimal surfaces. If we look for critical metrics with
respect to variations in a conformal class, we only obtain harmonic maps
instead of minimal immersions [ESI03][ESI08][FS13][Pet21][Pet22]. Other
examples of critical metrics will be given in [PT22] thanks to a unified
approach based on computations of subdifferentials (see e.g. [Cla13] and the
discussion below). Since harmonic maps enjoy a regularity theory (see e.g.
[Hel96][Riv08]), a long line of investigations into the variational aspects of
eigenvalue functionals can be carried out.
In the past decades, many variational methods have been proposed since the
seminal works by Nadirashvili [Nad96] on the maximization of the first Laplace
eigenvalue on tori and by Fraser and Schoen [FS16] on the maximization of the
first Steklov eigenvalue on surfaces with boundary of genus $0$. We briefly
explain the idea on the example of the maximization of one eigenvalue in a
conformal class $[g]=\\{e^{2u}g;u\in\mathcal{C}^{\infty}\left(M\right)\\}$:
$\Lambda_{k}(M,[g])=\sup_{\tilde{g}\in[g]}\bar{\lambda}_{k}(\tilde{g})$
where $\bar{\lambda}_{k}$ is a renormalized eigenvalue. Notice that conformal
classes are convenient not only because the space of variations is a space of
functions, but also because eigenvalues admit upper bounds in this space
[Kor93][Has11]. The main idea was to build a specific maximizing sequence of
conformal factors that emerges from a regularized variational problem.
* •
In [Nad96] (Laplacian, dimension 2), the author maximizes the first eigenvalue
$\bar{\lambda}_{1}$ on the smaller admissible space $E_{N}$ of conformal
factors $f\in\mathcal{C}^{\infty}(M)$ such that $0\leq f\leq N$ for
$N\in\mathbb{N}$, giving, as $N\to+\infty$, a maximizing sequence of
$L^{\infty}$ factors $f_{N}\in E_{N}$ for
$\Lambda_{1}(\Sigma,[g])=\sup_{\tilde{g}\in[g]}\bar{\lambda}_{1}(\tilde{g})$.
* •
In [FS16], [Pet14a] (Laplacian, dimension 2), the authors maximize a relaxed
functional $f\mapsto\bar{\lambda}_{1}\left(K_{\varepsilon}(f)g\right)$, where
$K_{\varepsilon}(f)$ is the solution at time $\varepsilon>0$ of the heat
equation with respect to $g$ with initial data $f$, obtaining a maximizing
sequence $K_{\varepsilon}(\nu_{\varepsilon})$ of smooth positive conformal
factors as $\varepsilon\to 0$, for some maximal probability measure
$\nu_{\varepsilon}$ of the relaxed functional
$\nu\mapsto\bar{\lambda}_{1}\left(K_{\varepsilon}(\nu)g\right)$.
* •
In [GP22], (Conformal Laplacian, dimension $n\geq 3$) the authors proposed to
modify both the functional and the space of admissible variations.
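For the heat-equation relaxation in the second item, the regularized conformal factor can be written explicitly (a standard formula, added for illustration; $p_{\varepsilon}$ denotes the heat kernel of $(M,g)$):

```latex
% Heat-kernel smoothing of a probability measure nu at time eps > 0:
K_{\varepsilon}(\nu)(x) \;=\; \int_{M} p_{\varepsilon}(x,y)\, d\nu(y),
% i.e. K_eps(nu) is the smooth positive solution of the heat equation
% with respect to g, with initial data nu, evaluated at time eps.
```

This makes transparent why the relaxed maximizers $K_{\varepsilon}(\nu_{\varepsilon})$ are smooth and positive for every $\varepsilon>0$.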
Whatever the choice, the main difficulty is to obtain convergence of this
maximizing sequence of conformal factors to a regular conformal factor. Since
these maximizing sequences come from the maximization of a regularized
variational problem, we obtain Euler-Lagrange equations which are expected to
provide regularity estimates on the sequence, in order to pass to the limit.
Of course, these expectations can only be met if sequences of critical metrics
already satisfy a priori regularity estimates and compactness properties. This
is the case for conformal factors associated to harmonic maps [Hel96][Riv08].
The second method (see [Pet14a]), improved in [Pet18] and [Pet19] (Laplace and
Steklov eigenvalues with higher index) is now performed for combinations of
eigenvalues [Pet21] [Pet22]. The first method (see [Nad96] [NS15]) was
improved in [KNPP20] for Laplace eigenvalues of higher index. It is also worth
mentioning that there is an indirect method to maximize first and second
conformal Laplace eigenvalues [KS20] [KS22] based on min-max methods to build
harmonic maps. While it is difficult to generalize it to higher eigenvalues or
combinations, this gives a nice characterization of the maximizers, also
leading to quantified inequalities on first and second eigenvalues [KNPS21].
In the current paper, we simplify, unify and generalize the previous
variational methods by defining a notion of Palais-Smale sequences of
conformal factors. This is a significant improvement, for example for the
following reasons:
* •
We can observe that the maximizing sequences extracted from the maximization
of functionals relaxed by the heat kernel,
$e^{2u_{\varepsilon}}=K_{\varepsilon}[\nu_{\varepsilon}]$ (in
[Pet14a][Pet18][Pet21][Pet22]), satisfy the properties of Palais-Smale
sequences as $\varepsilon\to 0$. Notice that these sequences
$(e^{2u_{\varepsilon}})_{\varepsilon>0}$ are canonical in the sense that they
satisfy even more regularity properties (for instance, there are
$\mathcal{C}^{0}$ a priori estimates on eigenfunctions) than a generic
Palais-Smale sequence. However, working with these sequences is technically
very demanding.
* •
All the previous methods are ad hoc, while the concept of Palais-Smale
sequences gives a systematic approach.
* •
Palais-Smale sequences can be extracted from min-max problems on combinations
of eigenvalues, while the previous methods seem specific to maximization, and
some of them to the maximization of only one eigenvalue.
* •
This new method is also used in [Pet22a] for variational problems on the
Laplacian in higher dimensions and is promising to solve many other
variational problems on eigenvalues.
Classically, Palais-Smale sequences for a $\mathcal{C}^{1}$ functional
$E:X\to\mathbb{R}$ are sequences $(x_{n})$ such that $E(x_{n})\to c$ and
$\left|DE(x_{n})\right|\to 0$. The main problem is that a functional involving
eigenvalues (depending on a space $X$ of metrics, conformal factors,
potentials, etc.) is not a $\mathcal{C}^{1}$ functional. Of course, it is
$\mathcal{C}^{1}$ at any point at which the involved eigenvalues are simple,
but we often have multiplicity of eigenvalues at the critical points,
corresponding to intersections of smooth branches of eigenvalues. However,
thanks to F. Clarke (see e.g. [Cla13]), the subdifferential $\partial E(x)$
plays the role of the differential for locally Lipschitz functionals. Roughly
speaking, it is a set of subgradients containing all the information on the
first variation of the functional, and in particular on the derivatives
corresponding to the smooth branches of eigenvalues at points of multiplicity
(see [PT22] for more details). Then, criticality of $E$ at $x$ can be defined
by $0\in\partial E(x)$. The current paper quantifies this property by defining
a pseudo-norm $\left|\partial E(x)\right|$ on subdifferentials such that
$x\mapsto\left|\partial E(x)\right|$ is lower semi-continuous and criticality
is characterized by $\left|\partial E(x)\right|=0$. A Palais-Smale sequence
$(x_{n})$ is then simply defined by $E(x_{n})\to c$ and $\left|\partial
E(x_{n})\right|\to 0$.
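A one-dimensional caricature (our illustration, not from the paper) already shows why Clarke's subdifferential is the right object at eigenvalue crossings. Take two smooth "branches" crossing at the maximizer:

```latex
% Two smooth branches E_1(x) = x and E_2(x) = -x cross at x = 0,
% mimicking an eigenvalue of multiplicity two at a maximizer:
E(x) = \min(x,\,-x) = -|x|, \qquad
\partial E(0) = \overline{\mathrm{co}}\,\{E_1'(0),\,E_2'(0)\} = [-1,\,1].
% Since 0 \in [-1,1], the maximizer x = 0 is critical in the sense
% 0 \in \partial E(x), although E is not differentiable there.
```

The subdifferential records the derivatives of both branches, exactly as in (1.1) and (1.2) below, where the convex hull runs over orthonormal families of eigenfunctions.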
With this point of view, the extraction of Palais-Smale sequences from
variational problems on eigenvalue functionals follows the classical
constructions for $\mathcal{C}^{1}$ functionals (see for instance the nice
book [Str08]). This method is explained in Section 1. In particular, we
explain how this new point of view applies to min-max variational problems on
combinations of eigenvalues. As in the previous methods, the main difficulty
is to prove convergence of Palais-Smale sequences. A large part of the current
paper is devoted to proving this convergence for functionals depending on
combinations of Laplace eigenvalues (proof of Proposition 2.1 in Section 2) or
combinations of Steklov eigenvalues (proof of Proposition 3.1 in Section 3) in
dimension 2. We rewrite here a proof of the main theorems in [Pet21] and
[Pet22] in order to simplify and clarify the techniques used there, and we
give new convergence results on Palais-Smale sequences ($H^{-1}$ convergence
of maximizing sequences, convergence of spectral indices to the spectral index
of the limit). A short summary of the systematic technique we use for the
convergence of Palais-Smale sequences is given after Proposition 2.1. This
technique is also developed in [Pet22a] for eigenvalues of the Laplacian in
higher dimensions, with all the specificities due to higher dimensions. We
emphasize that this systematic approach is promising for solving many other
variational problems on eigenvalues.
## 1\. The variational approach and consequences
### 1.1. Examples of functionals and spaces of variation
We denote by $E:X\subset V\to\mathbb{R}$ a functional depending on
combinations of eigenvalues, where $X$ is a subset of a Banach space $V$; $E$
is a locally Lipschitz functional. We define its subdifferential as
$\partial E(f)=\\{\zeta\in V^{\star};\forall v\in
V,E^{\circ}(f;v)\geq\left\langle\zeta,v\right\rangle\\}$
where
$E^{\circ}(f;v):=\limsup_{\tilde{f}\to f;t\to
0^{+}}\frac{E(\tilde{f}+tv)-E(\tilde{f})}{t}$
is a generalized directional derivative of $E$.
Concerning Laplace eigenvalues on a compact surface without boundary $\Sigma$,
$\lambda_{0}\leq\lambda_{1}(g)\leq\cdots\leq\lambda_{k}(g)\leq\cdots,$
we denote by $\bar{\lambda}_{k}(g)=\lambda_{k}(g)A_{g}(\Sigma)$ the
renormalized eigenvalue, where $A_{g}(\Sigma)$ is the area of $\Sigma$ with
respect to $g$, and by $E_{k}(g)$ the space of (possibly multiple)
eigenfunctions associated to $\lambda_{k}(g)$. We set
$V=\mathcal{C}^{0}\left(\Sigma\right)\text{ and
}X=\\{f\in\mathcal{C}^{0}(\Sigma),f>0\\}$
and for $f\in X$ we set
$E(f):=F\left(\bar{\lambda}_{1}(fg),\cdots,\bar{\lambda}_{m}(fg)\right)$
where $F:\mathbb{R}^{m}\to\mathbb{R}_{+}$ is a $\mathcal{C}^{1}$ function such
that for all $i\in\\{1,\cdots,m\\}$, $\partial_{i}F\leq 0$ everywhere. We
denote by $\partial E(f)$ the subdifferential of $E$ at $f$. We proved in
[PT22] that
(1.1) $\begin{split}\partial
E(f)\subset\overline{co}_{(\phi_{1},\cdots,\phi_{m})\in\mathcal{O}_{E(f)}}\left\\{\sum_{i=1}^{m}d_{i}\bar{\lambda}_{i}(fg)\left(1-\left(\phi_{i}\right)^{2}\right)\right\\}\end{split}$
where $(\phi_{1},\cdots,\phi_{m})$ lies in the set $\mathcal{O}_{E(f)}$ of
$L^{2}(\Sigma,fg)$-orthonormal families where $\phi_{i}\in E_{i}(fg)$ and
$d_{i}=\partial_{i}F\left(\bar{\lambda}_{1}(fg),\cdots,\bar{\lambda}_{m}(fg)\right)\leq
0$. Notice that in this case the subdifferential is a space of functions,
while it is defined as a subspace of $V^{\star}$, the set of Radon measures:
here, we identify a function $\psi$ with the measure $\psi dA_{g}$.
Concerning Steklov eigenvalues on a compact surface with boundary $\Sigma$,
$\sigma_{0}\leq\sigma_{1}(g)\leq\cdots\leq\sigma_{k}(g)\leq\cdots,$
we denote by $\bar{\sigma}_{k}(g)=\sigma_{k}(g)L_{g}(\partial\Sigma)$ the
renormalized eigenvalue where $L_{g}(\partial\Sigma)$ is the length of the
boundary $\partial\Sigma$ of $\Sigma$ with respect to $g$, and again by
$E_{k}(g)$ the space of (possibly multiple) eigenfunctions associated to
$\sigma_{k}(g)$. We set
$V=\mathcal{C}^{0}\left(\partial\Sigma\right)\text{ and
}X=\\{f\in\mathcal{C}^{0}(\partial\Sigma),f>0\\}$
and for $f\in X$ we set
$E(f):=F\left(\bar{\sigma}_{1}(\tilde{f}^{2}g),\cdots,\bar{\sigma}_{m}(\tilde{f}^{2}g)\right)$
where $F:\mathbb{R}^{m}\to\mathbb{R}_{+}$ is a $\mathcal{C}^{1}$ function such
that for all $i\in\\{1,\cdots,m\\}$, $\partial_{i}F\leq 0$ everywhere, and
$\tilde{f}:\Sigma\to\mathbb{R}$ denotes a smooth extension of $f$ to $\Sigma$;
note that $\sigma_{k}(\tilde{f}^{2}g)$ does not depend on the choice of the
extension of $f$, by conformal invariance. We denote by $\partial E(f)$ the
subdifferential of $E$ at $f$. We proved in [PT22] that
(1.2) $\begin{split}\partial
E(f)\subset\overline{co}_{(\phi_{1},\cdots,\phi_{m})\in\mathcal{O}_{E(f)}}\left\\{\sum_{i=1}^{m}d_{i}\bar{\sigma}_{i}(\tilde{f}^{2}g)\left(1-\left(\phi_{i}\right)^{2}\right)\right\\}\end{split}$
where $(\phi_{1},\cdots,\phi_{m})$ lies in the set $\mathcal{O}_{E(f)}$ of
$L^{2}(\partial\Sigma,fdL_{g})$-orthonormal families where $\phi_{i}\in
E_{i}(\tilde{f}^{2}g)$ and
$d_{i}=\partial_{i}F\left(\bar{\sigma}_{1}(\tilde{f}^{2}g),\cdots,\bar{\sigma}_{m}(\tilde{f}^{2}g)\right)\leq
0$. Notice that in this case the subdifferential is a space of functions on
$\partial\Sigma$, while it is defined as a subspace of $V^{\star}$, the set of
Radon measures on $\partial\Sigma$: here, we identify a function $\psi$ with
the measure $\psi dL_{g}$ on $\partial\Sigma$.
### 1.2. A pseudo-norm on subdifferentials
We set
$\left|\partial E(f)\right|=-\min_{\tau\in Y}\max_{\psi\in\partial
E(f)}\left\langle\tau,\psi\right\rangle$
the pseudo-norm of the subdifferential $\partial E(f)$, where $Y\subset
V^{\star}$ is the unit sphere of $V^{\star}$: it is the set of probability
measures on $\Sigma$ for Laplace eigenvalues, or the set of probability
measures on $\partial\Sigma$ for Steklov eigenvalues, and
$\left\langle.,.\right\rangle$ is the natural pairing between $V^{\star}$ and
$V$. Notice that
* •
$\left|\partial E(f)\right|\geq 0$ because $\partial E(f)$ is a subset of a
finite dimensional vector space and we can always find $\tau\in Y$ such that
$\forall\psi\in\partial E(f),\left\langle\tau,\psi\right\rangle=0$
* •
If $E$ is differentiable at $f$, then $\left|\partial E(f)\right|$ is nothing
but the norm of the linear form $DE(f)$ in $V^{\star}$.
* •
The infimum in the definition of $\left|\partial E(f)\right|$ is a minimum
because of the compactness of the sphere with respect to the weak $\star$
topology on $V^{\star}$ (the set of Radon measures) and because $\partial
E(f)$ can be seen as a set of continuous functions.
* •
By a standard density theorem,
$\left|\partial E(f)\right|=-\inf_{\tau\in\tilde{Y}}\max_{\psi\in\partial
E(f)}\left\langle\tau,\psi\right\rangle$
where $\tilde{Y}=\\{hdA_{g}\in Y;h\in\mathcal{C}^{0}(\Sigma),h\geq
0,\int_{\Sigma}hdA_{g}=1\\}$ for Laplace eigenvalues, naturally seeing a
function $h$ as the measure $hdA_{g}$, and $\tilde{Y}=\\{hdL_{g}\in
Y;h\in\mathcal{C}^{0}(\partial\Sigma),h\geq
0,\int_{\partial\Sigma}hdL_{g}=1\\}$ for Steklov eigenvalues. $\left|\partial
E(f)\right|$ has to be seen as the absolute value of the least right
derivative of $E$ among all the variations $v_{t}=th$ for
$hdA_{g}\in\tilde{Y}$.
###### Claim 1.1.
The map $f\mapsto\left|\partial E(f)\right|$ is lower semi-continuous.
###### Proof.
Let $f_{n}\to f$ in $X$. By definition of the subdifferential, we have the
following semi-continuity result on subdifferentials
$\partial
E(f)\supset\left\\{\lim_{n\to+\infty}\varphi_{n};(\varphi_{n})_{n\in\mathbb{N}}\text{
is a convergent sequence with }\varphi_{n}\in\partial E(f_{n})\right\\}$
Let $\tau\in Y$. $\partial E(f_{n})$ is compact as a closed and bounded subset
of a finite-dimensional space. Then, let $\psi_{n}\in\partial E(f_{n})$ be
such that
$\left\langle\tau,\psi_{n}\right\rangle=\max_{\psi\in\partial
E(f_{n})}\left\langle\tau,\psi\right\rangle$
By convergence of eigenvalues and eigenfunctions, up to a subsequence,
$\psi_{n}$ converges to some function $\psi_{\infty}$ in $\mathcal{C}^{0}$ as
$n\to+\infty$. More precisely, we have by ellipticity of the eigenfunction
equations
$\left|\left\langle\tau,\psi_{\infty}-\psi_{n}\right\rangle\right|\leq\|\psi_{\infty}-\psi_{n}\|_{\mathcal{C}^{0}}\leq
C\left(\|f_{n}-f\|_{\mathcal{C}^{0}}+\sum_{j=1}^{N}\|\phi^{j}-\phi_{n}^{j}\|_{L^{2}}+\sum_{i=1}^{m}\left|\lambda_{n}^{i}-\lambda^{i}\right|\right),$
where $\lambda_{n}^{i}=\lambda_{i}(f_{n}g)$, $\lambda^{i}=\lambda_{i}(fg)$,
$N$ is the dimension of $E_{1}(f_{n}g)+\cdots+E_{m}(f_{n}g)$,
$(\phi_{n}^{1},\cdots,\phi_{n}^{N})$ is an $L^{2}(f_{n}g)$-orthonormal basis
of eigenfunctions for $f_{n}g$ in this space converging in $L^{2}$ to
$(\phi^{1},\cdots,\phi^{N})$, an $L^{2}(fg)$-orthonormal family of
eigenfunctions with respect to $fg$, and $C$ is a constant depending on
$\|f\|_{\mathcal{C}^{0}}$, the $\lambda_{i}$ and $\sup\partial_{i}F$ in a
neighborhood of $(\lambda^{1},\cdots,\lambda^{m})$. Knowing in addition that
$\psi_{\infty}\in\partial E(f)$, we obtain that
$\begin{split}-\left|\partial E(f_{n})\right|=\min_{\mu\in
Y}\max_{\psi\in\partial
E(f_{n})}\left\langle\mu,\psi\right\rangle\leq\left\langle\tau,\psi_{n}\right\rangle\leq\left\langle\tau,\psi_{\infty}\right\rangle+\left|\left\langle\tau,\psi_{\infty}-\psi_{n}\right\rangle\right|\\\
\leq\max_{\psi\in\partial
E(f)}\left\langle\tau,\psi\right\rangle+C\left(\|f_{n}-f\|_{\mathcal{C}^{0}}+\sum_{j=1}^{N}\|\phi^{j}-\phi_{n}^{j}\|_{L^{2}}+\sum_{i=1}^{m}\left|\lambda_{n}^{i}-\lambda^{i}\right|\right)\end{split}$
Passing to the infimum on $\tau\in Y$, we get
$-\left|\partial E(f_{n})\right|\leq-\left|\partial E(f)\right|+o(1)$
as $n\to+\infty$, and finally we have
$\left|\partial E(f)\right|\leq\liminf_{n\to+\infty}\left|\partial
E(f_{n})\right|$
∎
###### Claim 1.2.
Let $f\in X$. The following propositions are equivalent
* (i)
$0\in\partial E(f)$
* (ii)
$\partial E(f)\cap\\{a\in\mathcal{C}^{0}(\Sigma);a\geq 0\\}\neq\emptyset$
* (iii)
$\left|\partial E(f)\right|=0$
###### Proof.
Of course, $(i)\Rightarrow(ii)$. We also have $(ii)\Rightarrow(i)$: indeed, if
there is $\psi\in\partial E(f)\cap\\{a\in\mathcal{C}^{0}(\Sigma);a\geq
0\\}$, then, knowing that any function in $\partial E(f)$ satisfies
$\int_{\Sigma}\psi dA_{g}=0$, we deduce that $\psi=0$.
We now prove the contrapositive of $(iii)\Rightarrow(ii)$. If $\partial
E(f)\cap\\{a\in\mathcal{C}^{0}(\Sigma);a\geq 0\\}=\emptyset$, then by the
Hahn-Banach theorem there is a Radon measure $\tau$ such that
$\forall\psi\in\partial E(f),\left\langle\tau,\psi\right\rangle<0$
and
$\forall a\in\mathcal{C}^{0}(\Sigma),a\geq
0\Rightarrow\left\langle\tau,a\right\rangle\geq 0$
which implies that $\tau$ is a non-negative Radon measure. Up to
renormalization, we can assume that $\tau$ is a probability measure. We obtain
that $\left|\partial E(f)\right|>0$.
Finally, we prove the contrapositive of $(ii)\Rightarrow(iii)$. If
$\left|\partial E(f)\right|>0$, then there is $\tau\in Y$ such that
$\forall\psi\in\partial E(f),\left\langle\tau,\psi\right\rangle<0$
This non-negative measure $\tau$ also satisfies
$\forall a\in\mathcal{C}^{0}(\Sigma),a\geq
0\Rightarrow\left\langle\tau,a\right\rangle\geq 0$
so that $\partial E(f)\cap\\{a\in\mathcal{C}^{0}(\Sigma);a\geq 0\\}=\emptyset$.
∎
The set of regular points of $E$ is denoted by
$X_{r}=\\{f\in X;\left|\partial E(f)\right|>0\\}=\\{f\in X;0\notin\partial
E(f)\\}$
and the set of critical points, which we look for throughout the current
paper, by
$X_{c}=\\{f\in X;\left|\partial E(f)\right|=0\\}=\\{f\in X;0\in\partial
E(f)\\}.$
### 1.3. A deformation Lemma
Thanks to the definition of $\left|\partial E(f)\right|$ we have an adaptation
of the classical deformation lemma (thanks to downhill directions) for
$\mathcal{C}^{1}$ functionals to the functionals depending on eigenvalues:
###### Proposition 1.1.
If there are $\varepsilon_{0}>0$ and $\delta>0$ such that
$\forall f\in
X,\left|E(f)-c\right|<\varepsilon_{0}\Rightarrow\left|\partial
E(f)\right|\geq\delta\hskip 2.84544pt,$
then for all $\varepsilon\in(0,\varepsilon_{0})$, there is $\eta:X\to X$ such
that
* •
$\eta(f)=f$ for any $f\in\\{E\geq c+\varepsilon_{0}\\}\cup\\{E\leq
c-\varepsilon_{0}\\}$
* •
$\forall f\in X,E(\eta(f))\leq E(f)$
* •
$\eta(\\{E\leq c+\varepsilon\\})\subset\\{E\leq c-\varepsilon\\}$
We first build a pseudo-vector field adapted to our problem.
###### Claim 1.3.
Let $\varepsilon>0$. There is a locally Lipschitz vector field $v:X_{r}\to X$
such that for all $f\in X_{r}$
* •
$\left\|v(f)\right\|_{1}<2$
* •
$\forall\psi\in\partial E(f),\left\langle
v(f),\psi\right\rangle<-\frac{\left|\partial E(f)\right|}{2}$
* •
$v(f)\geq 0$
###### Proof.
We first fix $f_{0}\in X$ and we build an adapted image $v_{0}(f_{0})$
satisfying the conclusion of the Claim.
Let $\tau_{0}\in\tilde{Y}$ such that
$-\left|\partial E(f_{0})\right|\geq\max_{\psi\in\partial
E(f_{0})}\left\langle\tau_{0},\psi\right\rangle-\frac{\left|\partial
E(f_{0})\right|}{4}\hskip 2.84544pt.$
We choose $v_{0}(f_{0})=\tau_{0}$ so that
* •
$\left\|v_{0}(f_{0})\right\|_{1}\leq 1<2$
* •
$\forall\psi\in\partial E(f_{0}),\left\langle
v_{0}(f_{0}),\psi\right\rangle\leq-\frac{3}{4}\left|\partial
E(f_{0})\right|<-\frac{\left|\partial E(f_{0})\right|}{2}$
* •
$v_{0}(f_{0})\geq 0$
Now, we aim to define $v$ by patching together the vectors $v_{0}(f_{0})$ in
order to obtain a locally Lipschitz vector field $v:X_{r}\to X$.
Let $f_{0}\in X_{r}$. Let $\Omega_{f_{0}}$ be an open neighborhood of $f_{0}$
in $X_{r}$ such that for all $f\in\Omega_{f_{0}}$
$\forall\psi\in\partial
E(f),\left\langle\psi,v_{0}(f_{0})\right\rangle<-\frac{\left|\partial
E(f)\right|}{2}$
We notice that
$X_{r}=\bigcup_{f_{0}\in X_{r}}\Omega_{f_{0}}.$
Since $X_{r}$ is paracompact, one has a family of open sets
$(\omega_{i})_{i\in I}$ such that
* •
$X_{r}=\bigcup_{i\in I}\omega_{i}$
* •
$\forall i\in I,\exists f_{i}\in X_{r},\omega_{i}\subset\Omega_{f_{i}}$
* •
for all $u\in X_{r}$ there is an open set $\Omega$ such that $u\in\Omega$ and
$\Omega\cap\omega_{i}=\emptyset$ except for a finite number of indices $i$.
We set $\psi_{i}(u)=d\left(u,X_{r}\setminus\omega_{i}\right)$ and
$\eta_{i}(u)=\frac{\psi_{i}(u)}{\sum_{j\in I}\psi_{j}(u)}$, and the vector
field
$v(f)=\sum_{i\in I}\eta_{i}(f)v_{0}(f_{i})$
satisfies the conclusion of the claim. ∎
###### Proof.
(of Proposition 1.1) We define a vector field $\Phi:X\to X$ by setting, for $f\in X$,
$\Phi(f)=\frac{d(f,A)}{d(f,A)+d(f,B)}v(f)$
where $v:X_{r}\to X$ is given by Claim 1.3 and we define the sets
$A=\\{E\leq c-\varepsilon_{0}\\}\cup\\{E\geq c+\varepsilon_{0}\\}$
and
$B=\\{c-\varepsilon\leq E\leq c+\varepsilon\\}\hskip 2.84544pt.$
Let $\eta$ be a solution of
(1.3) $\begin{cases}\frac{d}{dt}\eta_{t}(f)=\Phi\left(\eta_{t}(f)\right)\\\
\eta_{0}(f)=f\hskip 2.84544pt.\end{cases}$
Such a solution $\eta$ exists since $\Phi$ is locally Lipschitz. Moreover,
$\eta$ is well defined on $\mathbb{R}_{+}$ since $\Phi$ is bounded. Let $t\geq
0$, and $f\in X$. We have by elementary properties on the subdifferential that
(1.4)
$\begin{split}\frac{d}{dt}E(\eta_{t}(f))&\leq\max\\{\left\langle\Phi(\eta_{t}(f)),\psi\right\rangle;\psi\in\partial
E\left(\eta_{t}(f)g\right)\\}\\\
&\leq-\frac{d(\eta_{t}(f),A)}{d(\eta_{t}(f),A)+d(\eta_{t}(f),B)}\frac{\left|\partial{E}(\eta_{t}(f))\right|}{2}\leq
0\end{split}$
for any $t\geq 0$. It is clear that for any $t\geq 0$, $\eta_{t}(f)=f$ for
$f\in A$ and that
$\eta_{t}\left(\\{E\leq c-\varepsilon\\}\right)\subset\\{E\leq
c-\varepsilon\\}\hskip 2.84544pt.$
It remains to prove that for $t_{0}>0$ small enough we also have that
$\eta_{t_{0}}\left(B\right)\subset\\{E\leq c-\varepsilon\\}\hskip 2.84544pt.$
Let $f\in B$ and let $t_{\star}$ be the smallest time such that
$E(\eta_{t_{\star}}(f))=c-\varepsilon$. Then, for $0\leq t\leq t_{\star}$, we
have
$\eta_{t}(f)\in
B\Rightarrow\frac{d}{dt}E(\eta_{t}(f))\leq-\frac{\delta}{2}\hskip 2.84544pt,$
since by assumption, $c-\varepsilon\leq E(\eta_{t}(f))\leq c+\varepsilon$
implies that $\left|\partial{E}(\eta_{t}(f))\right|\geq\delta$ and that
$d(\eta_{t}(f),B)=0$. We deduce by integration that
$E(\eta_{t_{\star}}(f))-E(f)\leq-\frac{\delta}{2}t_{\star}$
so that $t_{\star}\leq\frac{4\varepsilon}{\delta}$. Then, letting
$t_{0}=\frac{4\varepsilon}{\delta}$, we obtain $\eta_{t_{0}}(B)\subset\\{E\leq
c-\varepsilon\\}$. Therefore,
$\eta_{t_{0}}(\\{E\leq c+\varepsilon\\})\subset\\{E\leq c-\varepsilon\\}$
and we obtain the proposition. ∎
### 1.4. Construction of Palais-Smale sequences
Let $\mathcal{A}\subset\mathcal{P}(X)$. We assume that
$c=\inf_{A\in\mathcal{A}}\sup_{f\in A}E(f).$
is finite. We give a sufficient condition on $\mathcal{A}$ to define an
associated Palais-Smale sequence:
###### Claim 1.4.
We assume that there is $\alpha>0$ such that for any homeomorphism $\eta:X\to
X$ such that $E\left(\eta(f)\right)\leq E(f)$, and such that $\eta(f)=f$ for
any $f\in\\{\left|E-c\right|>\alpha\\}$, we have
$A\in\mathcal{A}\Rightarrow\eta(A)\in\mathcal{A}.$
Then, there is a sequence $f_{\varepsilon}\in X$ such that
$\left|E(f_{\varepsilon})-c\right|\leq\varepsilon$ and
$\delta_{\varepsilon}=\left|\partial E(f_{\varepsilon})\right|\to 0$ as
$\varepsilon\to 0$.
###### Proof.
If it is not the case, then there are $\delta_{0}>0$ and $\varepsilon_{0}>0$
satisfying the assumptions of the deformation lemma. We can assume that
$\varepsilon_{0}\leq\alpha$. Then, for $0<\varepsilon<\varepsilon_{0}$, we
let $\eta:X\to X$ be the homeomorphism given by the deformation lemma. By
definition of $c$, let $A\in\mathcal{A}$ be such that
$\sup_{f\in A}E(f)\leq c+\varepsilon$
Then, we have that
$\sup_{f\in\eta\left(A\right)}E(f)\leq c-\varepsilon$
where $\eta(A)\in\mathcal{A}$ by assumption. This contradicts the definition
of $c$. ∎
For instance:
* •
If $\mathcal{A}=\\{\\{f\\};f\in X\\}$, we obtain a minimizing sequence
satisfying the Palais-Smale condition.
* •
If we have two strict local minimizers $f_{1}$ and $f_{2}$ of $E$, then if
$\mathcal{A}=\\{\gamma([0,1]);\gamma:[0,1]\to X\text{ continuous and
}\gamma(0)=f_{1},\gamma(1)=f_{2}\\}$
we obtain a new Palais-Smale sequence.
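The second example is the classical mountain-pass construction; written out explicitly (a standard formulation, not specific to this paper), the min-max level is

```latex
% Mountain-pass level between two strict local minimizers f_1, f_2:
c \;=\; \inf_{\gamma\in\Gamma}\;\sup_{t\in[0,1]} E(\gamma(t)),
\qquad
\Gamma \;=\; \{\gamma\in\mathcal{C}([0,1],X)\;:\;\gamma(0)=f_1,\ \gamma(1)=f_2\}.
% Claim 1.4 applies: a deformation eta fixing {|E - c| > alpha} and
% decreasing E maps admissible paths to admissible paths, so the claim
% yields a Palais-Smale sequence at the level c.
```
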
Up to standard regularizations, we can assume that a Palais-Smale sequence is
a sequence of smooth positive functions $e^{2u_{\varepsilon}}$ satisfying
$\left|E(e^{2u_{\varepsilon}})-c\right|\leq\varepsilon\text{ and
}\delta_{\varepsilon}=\left|\partial E(e^{2u_{\varepsilon}})\right|\to 0\text{
as }\varepsilon\to 0.$
The equality $\delta_{\varepsilon}=\left|\partial E(f_{\varepsilon})\right|$
can be rewritten as
$\forall\tau\in Y,\exists\psi\in\partial
E(e^{2u_{\varepsilon}});-\left\langle\tau,\psi\right\rangle\leq\delta_{\varepsilon}$
which can be rewritten as
$\forall\tau\in Y,\exists\psi\in\partial
E(e^{2u_{\varepsilon}});-\left\langle\tau,\psi+\delta_{\varepsilon}\right\rangle\leq
0$
where $\delta_{\varepsilon}$ is a constant function in $V$, meaning that
(1.5) $-\left(\partial
E(e^{2u_{\varepsilon}})+\\{\delta_{\varepsilon}\\}\right)\cap\\{a\in V;a\leq
0\\}\neq\emptyset.$
Indeed, if not, we use the classical Hahn-Banach theorem to separate these two
sets (the first one is compact, the second one is closed in $V$) by some $\tau\in
V^{\star}$ satisfying
$\begin{split}\forall\psi\in\partial
E(e^{2u_{\varepsilon}}),\left\langle\tau,-(\psi+\delta_{\varepsilon})\right\rangle>0\end{split}$
and
$\forall a\in V;a\leq 0,\left\langle\tau,a\right\rangle\leq 0.$
The second condition implies that $\tau$ is a non-negative Radon measure. Up
to a renormalization, we may assume that $\tau\in Y$ is a probability measure, and
we obtain a contradiction. Therefore, the sequence
$e^{2u_{\varepsilon}}$ satisfies the Palais-Smale condition given in the assumptions of Proposition
2.1 (with (1.1) and (1.5)) and Proposition 3.1 (with (1.2) and (1.5)), up to a
renormalization of $\delta_{\varepsilon}$, denoting again by $\delta_{\varepsilon}$
a sequence converging to $0$ as $\varepsilon\to 0$.
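The rewriting step above, from $-\left\langle\tau,\psi\right\rangle\leq\delta_{\varepsilon}$ to $-\left\langle\tau,\psi+\delta_{\varepsilon}\right\rangle\leq 0$, can be checked in one line; it only uses that $\tau\in Y$ is a probability measure, so that pairing $\tau$ with the constant function $\delta_{\varepsilon}$ gives $\delta_{\varepsilon}$ itself:

```latex
-\left\langle\tau,\psi+\delta_{\varepsilon}\right\rangle
  = -\left\langle\tau,\psi\right\rangle
    - \delta_{\varepsilon}\left\langle\tau,1\right\rangle
  = -\left\langle\tau,\psi\right\rangle - \delta_{\varepsilon}
  \leq \delta_{\varepsilon} - \delta_{\varepsilon} = 0.
```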
## 2. Convergence of Palais-Smale sequences for finite combinations of
Laplace eigenvalues in dimension 2
###### Proposition 2.1.
Let $e^{2u_{\varepsilon}}$, $\Lambda_{\varepsilon}$, and
$\Phi_{\varepsilon}:\Sigma\to\mathbb{R}^{m}$ be smooth sequences
satisfying the Palais-Smale assumption $(PS)$ as $\varepsilon\to 0$, that is:
* •
$\Lambda_{\varepsilon}=diag(\lambda_{k_{1}^{\varepsilon}}^{\varepsilon},\cdots,\lambda_{k_{m}^{\varepsilon}}^{\varepsilon})$
where the diagonal terms are eigenvalues associated to $e^{2u_{\varepsilon}}$
with uniformly bounded spectral indices $k_{i}^{\varepsilon}$,
$\lambda_{k_{1}^{\varepsilon}}^{\varepsilon}\leq\cdots\leq\lambda_{k_{m}^{\varepsilon}}^{\varepsilon}$
and $\Lambda_{\varepsilon}\to\Lambda=(\lambda_{k_{1}},\cdots,\lambda_{k_{m}})$
as $\varepsilon\to 0$.
* •
$\Delta_{g}\Phi_{\varepsilon}=e^{2u_{\varepsilon}}\Lambda_{\varepsilon}\Phi_{\varepsilon}$
* •
$\int_{\Sigma}e^{2u_{\varepsilon}}dA_{g}=\int_{\Sigma}\left|\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}e^{2u_{\varepsilon}}dA_{g}=\int_{\Sigma}\left|\nabla\Phi_{\varepsilon}\right|_{g}^{2}dA_{g}=1$
* •
For $i\in\\{1,\cdots,m\\}$,
$t_{i}^{\varepsilon}=\int_{\Sigma}\left(\phi_{i}^{\varepsilon}\right)^{2}e^{2u_{\varepsilon}}dA_{g}$
and $\sum_{i}\lambda_{k_{i}^{\varepsilon}}^{\varepsilon}t_{i}^{\varepsilon}=1$.
* •
$\left|\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}\geq
1-\delta_{\varepsilon}$ uniformly, where $\delta_{\varepsilon}\to 0$ as
$\varepsilon\to 0$.
Then, up to a subsequence, $\Phi_{\varepsilon}$ bubble-tree converges in
$W^{1,2}$ to maps $\Phi_{0}:\Sigma\to\mathcal{E}_{\Lambda}$ and
$\Phi_{j}:\mathbb{S}^{2}\to\mathcal{E}_{\Lambda}$ for $j=1,\cdots,l$ ($l\geq
0$), with an energy identity:
$1=\int_{\Sigma}\left|\nabla\Phi_{0}\right|_{g}^{2}dA_{g}+\sum_{j=1}^{l}\int_{\mathbb{S}^{2}}\left|\nabla\Phi_{j}\right|_{h}^{2}dA_{h}$
Moreover, $\Phi_{j}$ are smooth harmonic maps for $j=0,\cdots,l$ and their
$i$-th coordinates are eigenfunctions associated to $\lambda_{k_{i}}$ on the
surface $\Sigma\cup\bigcup_{1\leq i\leq l}\mathbb{S}^{2}$ with respect to the
metrics
$\frac{\left|\nabla\Phi_{0}\right|_{\Lambda,g}^{2}}{\left|\Lambda\Phi_{0}\right|^{2}}g$
on $\Sigma$ and
$\frac{\left|\nabla\Phi_{j}\right|_{\Lambda,h}^{2}}{\left|\Lambda\Phi_{j}\right|^{2}}h$
on $\mathbb{S}^{2}$.
We can summarize the proof in the following way: we have
$\Delta_{g}\Phi_{\varepsilon}=e^{2u_{\varepsilon}}\Lambda_{\varepsilon}\Phi_{\varepsilon}\text{
and }\left|\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}\geq
1-\delta_{\varepsilon}\text{ and
}\int_{\Sigma}e^{2u_{\varepsilon}}dA_{g}=\int_{\Sigma}\left|\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}e^{2u_{\varepsilon}}dA_{g}$
where $\Phi_{\varepsilon}$ is harmonic if and only if
$\left|\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}=1$. We aim at
proving that $\Phi_{\varepsilon}$ converges to $\Phi$ as $\varepsilon\to 0$ in
a suitable space so that if $e^{2u_{\varepsilon}}dA_{g}\to_{\star}\nu$ with
respect to weak$\star$ convergence of measures and
$\Lambda_{\varepsilon}\to\Lambda$, the equation on $\Phi_{\varepsilon}$ passes
to the limit and we get
$\Delta_{g}\Phi=\nu\Lambda\Phi\text{ and }\left|\Phi\right|_{\Lambda}^{2}=1,$
which is exactly the equation for weak harmonic maps. By regularity of $\Phi$,
computing
$0=\Delta_{g}\left(\left|\Phi\right|^{2}_{\Lambda}\right)$ we obtain that
$\nu=\frac{\left|\nabla\Phi\right|^{2}_{\Lambda}}{\left|\Phi\right|_{\Lambda}^{2}}dA_{g}$
is a regular measure. This idea has to be carried out up to a bubble tree (see
Claim 2.1).
The whole proof is based on local energy-convexity results for the harmonic
replacement $\Psi_{\varepsilon}$ of
$\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}$, where
$\omega_{\varepsilon}:=\left|\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}$,
on disks $\mathbb{D}_{r}(p)$ of small energy of $\Phi_{\varepsilon}$:
(2.1)
$\frac{1}{2}\int_{\mathbb{D}_{r}(p)}\left|\nabla\left(\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right)\right|^{2}\leq\int_{\mathbb{D}_{r}(p)}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|^{2}-\int_{\mathbb{D}_{r}(p)}\left|\nabla\Psi_{\varepsilon}\right|^{2}$
where the harmonic replacement $\Psi_{\varepsilon}$ is defined as the unique
harmonic map into the ellipsoid $\mathcal{E}_{\Lambda_{\varepsilon}}$ such
that $\Psi_{\varepsilon}=\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}$ on
$\partial\mathbb{D}_{r}(p)$ (see [CM08, LP19]). Taking the harmonic
replacement of $\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}$ is justified
because of the crucial assumption $\omega_{\varepsilon}\geq
1-\delta_{\varepsilon}$, and its consequences: $\omega_{\varepsilon}$
converges to $1$ in some sense and
$\nabla\left(\Phi_{\varepsilon}-\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right)$
converges to $0$ in $L^{2}$ (see (2.3) and (2.4)).
Therefore, we just have to prove that the right-hand term in (2.1) converges
to $0$ as $\varepsilon\to 0$. This comes from the equation on
$\Phi_{\varepsilon}$ and an extra assumption on the disk $\mathbb{D}_{r}(p)$
of replacement: we assume that the first Laplace eigenvalue with Dirichlet
boundary condition satisfies
$\lambda_{\star}\left(\mathbb{D}_{r}(p)\right)\geq\lambda_{k_{m}^{\varepsilon}}^{\varepsilon}$.
Then
(2.2)
$\int_{\mathbb{D}_{r}(p)}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|^{2}-\int_{\mathbb{D}_{r}(p)}\left|\nabla\Psi_{\varepsilon}\right|^{2}\to
0\text{ as }\varepsilon\to 0.$
Using (2.1), (2.2) and standard convergence theorems for harmonic maps, we
obtain the strong $H^{1}$ convergence of
$\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}$ and $\Phi_{\varepsilon}$ to
a function $\Phi$ satisfying the desired weak harmonic map equation and the
regularity of the limiting measure $\nu$. We globalize these local
convergences by noticing that they hold on a fixed neighborhood of any point
except at most $k_{m}+1$ points, which we call bad points (see Claim 2.3 and
(2.5)). They do not interfere with the final regularity result and energy
identities. The proof of local strong $H^{1}$ convergence is given in Claim
2.5. However, in this Claim, we need the assumption that all the eigenvalues
involved in the Palais-Smale sequence are uniformly lower bounded because in
this case, the local smallness of the energy of $\Phi_{\varepsilon}$ is
implied by (2.5). Therefore, we prove an adaptation of this simple technique
in the general case (see subsection 2.3.4).
Throughout the proof, every local computation is made in the exponential chart
centered at points $p\in M$, defined on balls whose radius is controlled by
the injectivity radius with respect to $g$: $inj_{g}(M)$. Without loss of
generality, we can assume that $inj_{g}(M)\geq 2$ and argue on the
unit disk $\mathbb{D}$ endowed with the metric $\exp_{p}^{\star}g$, still
denoted $g$. We do not change the notation for the metrics and functions in
the charts. Moreover, when there is no ambiguity, we sometimes omit the
measure of integration associated to $g$, $dA_{g}$, inside the
integrals, as well as the index $g$ in $\left|\nabla\varphi\right|_{g}^{2}$, in order
to lighten the computations.
### 2.1. A bubble tree structure
For a Radon measure $\mu$ on $\Sigma$ (or $\mathbb{R}^{2}$), we denote by
$\tilde{\mu}$ the continuous part of $\mu$, defined as the atomless measure
such that
$\mu=\tilde{\mu}+\sum_{x\in\mathcal{A}(\mu)}\alpha_{x}\delta_{x},$
where $\mathcal{A}(\mu)$ is the set of atoms of $\mu$ and
$\alpha_{x}\in\mathbb{R}$.
###### Claim 2.1.
Let $e^{2u_{\varepsilon}}$ be a sequence of smooth positive functions such that
$\int_{\Sigma}e^{2u_{\varepsilon}}dA_{g}=1$ and
$\liminf_{\varepsilon\to 0}\lambda_{k}(e^{2u_{\varepsilon}})>0.$
Then, up to a subsequence as $\varepsilon\to 0$, there are an integer $l\geq 0$
and, if $l>0$, sequences of points $q_{\varepsilon}^{i}$ for $i=1,\cdots,l$ and
scales $\alpha_{\varepsilon}^{l}\leq\cdots\leq\alpha_{\varepsilon}^{1}\to 0$
as $\varepsilon\to 0$ such that $e^{2u_{\varepsilon}}dA_{g}$ weak-${\star}$
converges to a non-negative Radon measure $\mu_{0}$ on $\Sigma$, and
$e^{2u_{\varepsilon}\left(\alpha_{\varepsilon}^{i}z+q_{\varepsilon}^{i}\right)}$
weak-${\star}$ converges in (any compact set of)
$\mathbb{R}^{2}$ to a non-negative and non-zero Radon measure $\mu_{i}$ on
$\mathbb{R}^{2}$ for any $i$. Moreover, their associated continuous parts
preserve the mass in the limit:
$\lim_{\varepsilon\to
0}\int_{\Sigma}e^{2u_{\varepsilon}}dA_{g}=\int_{\Sigma}d\tilde{\mu}_{0}+\sum_{i=1}^{l}\int_{\mathbb{R}^{2}}d\tilde{\mu}_{i}=1\text{
and }\forall i\geq 1,\int_{\mathbb{R}^{2}}d\tilde{\mu}_{i}>0,$
and we have that $1\leq\mathbf{1}_{\tilde{\mu}_{0}\neq 0}+l\leq k$. In
addition, setting
$F_{i}=\\{j>i;\left(\frac{d_{g}(q_{i}^{\varepsilon},q_{j}^{\varepsilon})}{\alpha_{i}^{\varepsilon}}\right)\text{
is bounded}\\}$, we have for $j>i$,
$j\in
F_{i}\Rightarrow\frac{\alpha_{j}^{\varepsilon}}{\alpha_{i}^{\varepsilon}}\to
0\text{ and }j\notin
F_{i}\Rightarrow\frac{d_{g}(q_{i}^{\varepsilon},q_{j}^{\varepsilon})}{\alpha_{i}^{\varepsilon}}\to+\infty$
The last condition also reads as
$\frac{\alpha_{i}^{\varepsilon}}{\alpha_{j}^{\varepsilon}}+\frac{\alpha_{j}^{\varepsilon}}{\alpha_{i}^{\varepsilon}}+\frac{d_{g}(q_{i}^{\varepsilon},q_{j}^{\varepsilon})}{\alpha_{i}^{\varepsilon}+\alpha_{j}^{\varepsilon}}\to+\infty$
which is the standard condition for a bubble tree. A large part of this section
is devoted to proving that the continuous measures $\tilde{\mu}_{i}$ for
$i\in\\{0,\cdots,l\\}$ are absolutely continuous with respect to $dA_{g}$ if
$i=0$ and to the Euclidean metric if $i\geq 1$, with densities equal to
energy densities of harmonic maps. The proof of Claim 2.1 is already
written in [Kok14, Pet18, KNPP20]. The selection of scales of
concentration is based on Hersch’s trick, because it uses the conformal group
of the sphere to balance continuous measures on a sphere. The selection
stops because of the min-max characterization of $\lambda_{k}$: if there are
more than $k+1$ scales of concentration, we can build $k+1$ independent test
functions with arbitrarily small Rayleigh quotients, contradicting the first
assumption of Claim 2.1.
### 2.2. Some convergence of $\omega_{\varepsilon}$ to $1$ and first
replacement of $\Phi_{\varepsilon}$
We set
$\omega_{\varepsilon}=\left|\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}$.
We first prove that, in some sense, $\omega_{\varepsilon}$ converges to $1$ and
that $\Phi_{\varepsilon}$ has a similar $H^{1}$ behaviour to
$\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}$.
###### Claim 2.2.
We have that
(2.3)
$\int_{\Sigma}\frac{\left|\nabla\omega_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}}dA_{g}=O(\delta_{\varepsilon})$
(2.4)
$\int_{\Sigma}\left|\nabla\left(\Phi_{\varepsilon}-\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right)\right|_{\Lambda_{\varepsilon}}^{2}=O\left(\delta_{\varepsilon}^{\frac{1}{2}}\right)$
as $\varepsilon\to 0$.
###### Proof.
We integrate
$\Delta_{g}\Phi_{\varepsilon}=\Lambda_{\varepsilon}e^{2u_{\varepsilon}}\Phi_{\varepsilon}$
against $\Lambda_{\varepsilon}\Phi_{\varepsilon}$ and
$\frac{\Lambda_{\varepsilon}\Phi_{\varepsilon}}{\omega_{\varepsilon}}$. We
obtain
$\int_{\Sigma}\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}\right|^{2}e^{2u_{\varepsilon}}=\int_{\Sigma}\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}$
and
$\int_{\Sigma}\frac{\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}}e^{2u_{\varepsilon}}=\int_{\Sigma}\frac{\Lambda_{\varepsilon}\Phi_{\varepsilon}}{\omega_{\varepsilon}}\Delta_{g}\Phi_{\varepsilon}=\int_{\Sigma}\left(\frac{\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}}{\omega_{\varepsilon}}-\frac{\left|\nabla\omega_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}}\right)=\int_{\Sigma}\omega_{\varepsilon}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|_{\Lambda_{\varepsilon}}^{2}$
Therefore
$\begin{split}\int_{\Sigma}\frac{\left|\nabla\omega_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}}dA_{g}=&\int_{\Sigma}\left(\frac{\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}}{\omega_{\varepsilon}}-\omega_{\varepsilon}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|_{\Lambda_{\varepsilon}}^{2}\right)dA_{g}\\\
=&\int_{\Sigma}\left(\frac{\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}}{\omega_{\varepsilon}}-e^{2u_{\varepsilon}}\frac{\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}}\right)dA_{g}\\\
=&\int_{\Sigma}\left(\frac{\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}}{\omega_{\varepsilon}}-\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}\right)dA_{g}-\int_{\Sigma}e^{2u_{\varepsilon}}\left(\frac{\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}}-\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}\right|^{2}\right)\\\
=&\int_{\Sigma}\frac{\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}}{\omega_{\varepsilon}}\left(1-\omega_{\varepsilon}\right)dA_{g}+\int_{\Sigma}\frac{\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}^{2}}e^{2u_{\varepsilon}}(\omega_{\varepsilon}^{2}-\omega_{\varepsilon})\\\
\end{split}$
Since $\omega_{\varepsilon}\geq\sqrt{1-\delta_{\varepsilon}}$ and $x\mapsto
x^{2}-x$ is increasing on $\left[\frac{1}{2},+\infty\right)$, we know that
$\omega_{\varepsilon}^{2}-\omega_{\varepsilon}=\omega_{\varepsilon}(\omega_{\varepsilon}-1)\geq
1-\delta_{\varepsilon}-\sqrt{1-\delta_{\varepsilon}}$ so that
$\int_{\Sigma}\frac{\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}^{2}}e^{2u_{\varepsilon}}(\omega_{\varepsilon}^{2}-\omega_{\varepsilon})\leq\sup\left|\frac{\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}^{2}}\right|\left(\int_{\Sigma}e^{2u_{\varepsilon}}(\omega_{\varepsilon}^{2}-\omega_{\varepsilon})+\left(\sqrt{1-\delta_{\varepsilon}}+\delta_{\varepsilon}-1\right)\right)$
Noticing the crucial equality
$\int_{\Sigma}e^{2u_{\varepsilon}}\omega_{\varepsilon}^{2}=\int_{\Sigma}e^{2u_{\varepsilon}}=1$,
we obtain that
$\int_{\Sigma}e^{2u_{\varepsilon}}(\omega_{\varepsilon}^{2}-\omega_{\varepsilon})=\int_{\Sigma}e^{2u_{\varepsilon}}(1-\omega_{\varepsilon})\leq
1-\sqrt{1-\delta_{\varepsilon}}$
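This last estimate unpacks as follows, using the crucial equality $\int_{\Sigma}e^{2u_{\varepsilon}}\omega_{\varepsilon}^{2}=\int_{\Sigma}e^{2u_{\varepsilon}}=1$ together with $\omega_{\varepsilon}\geq\sqrt{1-\delta_{\varepsilon}}$:

```latex
\int_{\Sigma}e^{2u_{\varepsilon}}(\omega_{\varepsilon}^{2}-\omega_{\varepsilon})
  = 1-\int_{\Sigma}e^{2u_{\varepsilon}}\omega_{\varepsilon}
  = \int_{\Sigma}e^{2u_{\varepsilon}}(1-\omega_{\varepsilon})
  \leq \int_{\Sigma}e^{2u_{\varepsilon}}\left(1-\sqrt{1-\delta_{\varepsilon}}\right)
  = 1-\sqrt{1-\delta_{\varepsilon}}.
```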
so that
$\begin{split}\int_{\Sigma}\frac{\left|\nabla\omega_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}}dA_{g}\leq&\int_{\Sigma}\frac{\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}}{\omega_{\varepsilon}}\left(1-\omega_{\varepsilon}\right)dA_{g}+\delta_{\varepsilon}\sup\left|\frac{\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}^{2}}\right|\\\
\leq&\int_{\Sigma}\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}\frac{1-\sqrt{1-\delta_{\varepsilon}}}{\sqrt{1-\delta_{\varepsilon}}}+\delta_{\varepsilon}\sup\left|\frac{\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}^{2}}\right|\\\
\leq&\max_{j}\lambda_{k_{j}^{\varepsilon}}^{\varepsilon}\left(\frac{1-\sqrt{1-\delta_{\varepsilon}}}{\sqrt{1-\delta_{\varepsilon}}}+\delta_{\varepsilon}\right)=O\left(\delta_{\varepsilon}\right)\end{split}$
We obtain (2.3).
Let’s prove (2.4). We have that
$\nabla\left(\Phi_{\varepsilon}-\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right)=\left(1-\frac{1}{\omega_{\varepsilon}}\right)\nabla\Phi_{\varepsilon}+\frac{\nabla\omega_{\varepsilon}}{\omega_{\varepsilon}^{2}}\Phi_{\varepsilon}$
and then
$\left|\nabla\left(\Phi_{\varepsilon}-\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right)\right|_{\Lambda_{\varepsilon}}^{2}=\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}\left(1-\frac{1}{\omega_{\varepsilon}}\right)^{2}+\frac{\left|\nabla\omega_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}^{2}}+2\frac{\left|\nabla\omega_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}}\left(1-\frac{1}{\omega_{\varepsilon}}\right)$
so that
$\int_{\Sigma}\left|\nabla\left(\Phi_{\varepsilon}-\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right)\right|_{\Lambda_{\varepsilon}}^{2}\leq\int_{\Sigma}\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}\left(1-\frac{1}{\omega_{\varepsilon}}\right)^{2}+O\left(\delta_{\varepsilon}\right)$
as $\varepsilon\to 0$ and
$\begin{split}\int_{\Sigma}\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}&\left(1-\frac{1}{\omega_{\varepsilon}}\right)^{2}=-\int_{\Sigma}div\left(\nabla\Phi_{\varepsilon}\left(1-\frac{1}{\omega_{\varepsilon}}\right)^{2}\right)\Lambda_{\varepsilon}\Phi_{\varepsilon}\\\
=&\int_{\Sigma}\left(1-\frac{1}{\omega_{\varepsilon}}\right)^{2}\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}\right|^{2}e^{2u_{\varepsilon}}+2\int_{\Sigma}\nabla\left(\frac{1}{\omega_{\varepsilon}}\right)\omega_{\varepsilon}\nabla\omega_{\varepsilon}\left(1-\frac{1}{\omega_{\varepsilon}}\right)\\\
\leq&\left(\int_{\Sigma}\frac{\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}\right|^{4}}{\omega_{\varepsilon}^{2}}e^{2u_{\varepsilon}}\right)^{\frac{1}{2}}\left(\int_{\Sigma}e^{2u_{\varepsilon}}\left(\omega_{\varepsilon}-\frac{1}{\omega_{\varepsilon}}\right)^{2}\right)^{\frac{1}{2}}-2\int_{\Sigma}\frac{\left|\nabla\omega_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}^{2}}\left(\omega_{\varepsilon}-1\right)\\\
\leq&\max_{j}\lambda_{k_{j}}^{\varepsilon}\left(\int_{\Sigma}e^{2u_{\varepsilon}}\left(\frac{\omega_{\varepsilon}^{2}-1}{\omega_{\varepsilon}^{2}}\right)dA_{g}\right)^{\frac{1}{2}}+O\left(\delta_{\varepsilon}^{2}\right)\\\
\leq&O\left(\delta_{\varepsilon}^{\frac{1}{2}}\right)\end{split}$
as $\varepsilon\to 0$, where we crucially used again
$\int_{\Sigma}e^{2u_{\varepsilon}}\omega_{\varepsilon}^{2}=\int_{\Sigma}e^{2u_{\varepsilon}}=1$.
We then obtain (2.4). This completes the proof of the claim.
∎
### 2.3. Regularity of the limiting measures
We apply Claim 2.1 to the Palais-Smale sequence given by Proposition 2.1. We
choose to prove only the regularity of $\tilde{\mu}_{0}$. The regularity of
$\tilde{\mu}_{i}$ will follow the same arguments because of the scale
invariance of all the equations satisfied by the Palais-Smale sequence.
In order to apply the a priori estimates locally to the map
$\Phi_{\varepsilon}$, we have to detect the points where the smallness assumptions
of $\varepsilon$-regularity type results fail, that is:
* •
Disks $\mathbb{D}_{r^{\varepsilon}_{i}}(p^{\varepsilon}_{i})$ satisfying
$\lambda_{\star}\left(\mathbb{D}_{r^{\varepsilon}_{i}}(p^{\varepsilon}_{i}),e^{2u_{\varepsilon}}dA_{g}\right)\leq\lambda_{k_{m}^{\varepsilon}}$
* •
Points and scales of concentration of
$\left(\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}+e^{2u_{\varepsilon}}\right)dA_{g}$
For the first point, we prove that the equality occurs for at most finitely many
disks, thanks to the assumption that the indices $k_{i}^{\varepsilon}$ of
$\lambda_{i}^{\varepsilon}=\lambda_{k_{i}^{\varepsilon}}\left(e^{2u_{\varepsilon}}g\right)$
are uniformly bounded. For the second point, we prove that, thanks to global
properties of the system of equations, the concentration function of this
energy is controlled by $e^{2u_{\varepsilon}}dA_{g}$.
#### 2.3.1. Disjoint small disks with small critical fundamental state and
bad points
###### Claim 2.3.
Up to the extraction of a subsequence of
$\\{e^{2u_{\varepsilon}}g\\}_{\varepsilon>0}$ we can find a maximal collection
of points $p_{1}^{\varepsilon},\cdots,p_{s}^{\varepsilon}\in\Sigma$ with
$0\leq s\leq k_{m}$ such that $p_{i}^{\varepsilon}\to p_{i}$ as
$\varepsilon\to 0$ and positive scales $r_{1}^{\varepsilon}\leq\cdots\leq
r_{s}^{\varepsilon}$ such that for any $1\leq i\leq s$, setting
$A_{i}^{\varepsilon}$ as
$A_{i}^{\varepsilon}=\left\\{r>0;\overline{\mathbb{D}_{r}\left(p\right)}\subset\Sigma\setminus\left(\bigcup_{j=1}^{i}\overline{\mathbb{D}_{r_{j}^{\varepsilon}}\left(p_{j}^{\varepsilon}\right)}\right)\text{
and
}\lambda_{\star}\left(\mathbb{D}_{r}\left(p\right),e^{2\tilde{u}_{\varepsilon}}dx\right)\leq\lambda_{k_{m}}(e^{2u_{\varepsilon}}g)\right\\}$
$\lim_{\varepsilon\to 0}\left(\inf A_{s}^{\varepsilon}\right)>0,$
$r_{i}^{\varepsilon}:=\min A_{i}^{\varepsilon}\to 0\text{ as }\varepsilon\to 0,$
$\overline{\mathbb{D}_{r_{i}^{\varepsilon}}\left(p_{i}^{\varepsilon}\right)}\subset\Sigma\setminus\left(\bigcup_{j=1}^{i-1}\overline{\mathbb{D}_{r_{j}^{\varepsilon}}\left(p_{j}^{\varepsilon}\right)}\right),$
$\lambda_{\star}\left(\mathbb{D}_{r_{i}^{\varepsilon}}\left(p_{i}^{\varepsilon}\right),e^{2\tilde{u}_{\varepsilon}}dx\right)=\lambda_{k_{m}}(e^{2u_{\varepsilon}}g).$
###### Proof.
We first define
$A_{0}^{\varepsilon}=\left\\{r>0;\overline{\mathbb{D}_{r}\left(p\right)}\subset\Sigma\text{
and
}\lambda_{\star}\left(\mathbb{D}_{r}\left(p\right),e^{2\tilde{u}_{\varepsilon}}dx\right)\leq\lambda_{k_{m}}(e^{2u_{\varepsilon}}g)\right\\}$
Notice first that if $\lim_{\varepsilon\to 0}\inf A_{0}^{\varepsilon}>0$, then
$s=0$, there is no such sequence, and the claim is proved. Otherwise,
$\lim_{\varepsilon\to 0}\inf A_{0}^{\varepsilon}=0$. We set
$r_{1}^{\varepsilon}=\min A_{0}^{\varepsilon}$ (notice that the infimum is a
minimum) and we choose $p_{1}^{\varepsilon}$ such that
$\overline{\mathbb{D}_{r_{1}^{\varepsilon}}\left(p_{1}^{\varepsilon}\right)}\subset\Sigma$
and
$\lambda_{\star}\left(\mathbb{D}_{r_{1}^{\varepsilon}}\left(p_{1}^{\varepsilon}\right),e^{2\tilde{u}_{\varepsilon}}dx\right)\leq\lambda_{k_{m}}(e^{2u_{\varepsilon}}g)$.
Since $r_{1}^{\varepsilon}$ is a minimum, the previous inequality has to be an
equality.
If for a given $i\geq 1$ the sequences
$r_{1}^{\varepsilon},\cdots,r_{i}^{\varepsilon}$,
$p_{1}^{\varepsilon},\cdots,p_{i}^{\varepsilon}$ are built, then if
$\lim_{\varepsilon\to 0}\inf A_{i}^{\varepsilon}>0$, the construction
terminates and $s=i$. Otherwise $\lim_{\varepsilon\to 0}\inf
A_{i}^{\varepsilon}=0$ and we set $r_{i+1}^{\varepsilon}=\min
A_{i}^{\varepsilon}$, and we choose $p_{i+1}^{\varepsilon}$ such that
$\overline{\mathbb{D}_{r_{i+1}^{\varepsilon}}\left(p_{i+1}^{\varepsilon}\right)}\subset\Sigma$
and
$\lambda_{\star}\left(\mathbb{D}_{r_{i+1}^{\varepsilon}}\left(p_{i+1}^{\varepsilon}\right),e^{2\tilde{u}_{\varepsilon}}dx\right)\leq\lambda_{k_{m}}(e^{2u_{\varepsilon}}g)$.
Since $r_{i+1}^{\varepsilon}$ is a minimum, the previous inequality has to be
an equality.
Finally, we prove that this construction terminates after at most $k_{m}$
steps. Indeed, if not, we set
$D_{i}^{\varepsilon}=\mathbb{D}_{r_{i}^{\varepsilon}}\left(p_{i}^{\varepsilon}\right)$
for $i\in\\{1,\cdots,k_{m}+1\\}$. The domains $D_{i}^{\varepsilon}$ are
disjoint in $\Sigma$ and we have
$\lambda_{\star}\left(D_{i}^{\varepsilon},e^{2u_{\varepsilon}}\right)=\lambda_{k_{m}}(e^{2u_{\varepsilon}}g)$.
Let $\varphi_{i}^{\varepsilon}$ be a first Dirichlet eigenfunction on
$D_{i}^{\varepsilon}$, extended by $0$ to $\Sigma$. We use these functions as
test functions for the variational characterization of
$\lambda_{k_{m}}(e^{2u_{\varepsilon}},g)$:
$\begin{split}\lambda_{k_{m}}(e^{2u_{\varepsilon}},g)=\inf_{E_{k_{m}+1}}\max_{\varphi\in
E_{k_{m}+1}}\frac{\int_{\Sigma}\left|\nabla\varphi\right|_{g}^{2}dA_{g}}{\int_{\Sigma}\left(\varphi\right)^{2}e^{2u_{\varepsilon}}dA_{g}}\\\
\leq\max_{0\leq i\leq
k_{m}}\left\\{\frac{\int_{\Sigma}\left|\nabla\varphi_{i}^{\varepsilon}\right|_{g}^{2}dA_{g}}{\int_{\Sigma}\left(\varphi_{i}^{\varepsilon}\right)^{2}e^{2u_{\varepsilon}}dA_{g}}\right\\}=\lambda_{k_{m}}(e^{2u_{\varepsilon}},g)\end{split}$
where the inequality holds because the test functions have support in disjoint
sets.
Therefore, we obtain the case of equality in the variational characterization
of $\lambda_{k_{m}}(e^{2u_{\varepsilon}},g)$ and deduce that there is a linear
combination of the $\varphi_{i}^{\varepsilon}$ which is an eigenfunction
associated to $\lambda_{k_{m}}(e^{2u_{\varepsilon}},g)$. Such a function
vanishes on an open set: this is absurd. ∎
From now until the end of the proof, we set
$r_{\star}(p)=\min\left\\{\lim_{\varepsilon\to 0}\inf A_{s}^{\varepsilon},\min_{i=1,\cdots,s}\frac{\left|p-p_{i}\right|}{2}\right\\}>0,$
and we say that $p_{1},\cdots,p_{s}$ are bad points.
#### 2.3.2. Non concentration of the energy far from bad points
###### Claim 2.4.
Let $p\in\Sigma\setminus\\{p_{1},\cdots,p_{s}\\}$, then
(2.5) $\lim_{r\to 0}\limsup_{\varepsilon\to
0}\int_{\mathbb{D}_{r}(p)}e^{2u_{\varepsilon}}=\lim_{r\to
0}\limsup_{\varepsilon\to
0}\int_{\mathbb{D}_{r}(p)}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|^{2}_{\Lambda_{\varepsilon}}=\lim_{r\to
0}\limsup_{\varepsilon\to
0}\int_{\mathbb{D}_{r}(p)}\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}=0$
###### Proof.
Let $\eta\in\mathcal{C}_{c}^{\infty}(\mathbb{D}_{\sqrt{r}}(p))$ with
$0\leq\eta\leq 1$, $\eta=1$ in $\mathbb{D}_{r}(p)$ and
$\int_{\Sigma}\left|\nabla\eta\right|^{2}_{g}\leq\frac{C}{\ln\left(\frac{1}{r}\right)}$,
and we first have
$\int_{\Sigma}\eta
e^{2u_{\varepsilon}}\leq\left(\int_{\Sigma}\eta^{2}e^{2u_{\varepsilon}}\right)^{\frac{1}{2}}\leq\left(\frac{1}{\lambda_{\star}\left(\mathbb{D}_{\sqrt{r}}(p),e^{2u_{\varepsilon}}\right)}\int_{\mathbb{D}_{\sqrt{r}}(p)}\left|\nabla\eta\right|^{2}\right)^{\frac{1}{2}}\leq\left(\frac{C}{\lambda_{k_{m}^{\varepsilon}}\ln\frac{1}{r}}\right)^{\frac{1}{2}}$
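For completeness, a standard logarithmic cutoff realizing the capacity bound $\int_{\Sigma}\left|\nabla\eta\right|^{2}_{g}\leq\frac{C}{\ln\left(\frac{1}{r}\right)}$ assumed above is the following (a sketch in the flat chart; on the surface the constant only changes by a factor controlled by the metric):

```latex
\eta(x)=
\begin{cases}
  1 & \text{if } |x-p|\leq r,\\[3pt]
  \dfrac{\ln\left(\sqrt{r}/|x-p|\right)}{\ln\left(1/\sqrt{r}\right)}
    & \text{if } r\leq|x-p|\leq\sqrt{r},\\[3pt]
  0 & \text{if } |x-p|\geq\sqrt{r},
\end{cases}
\qquad
\int_{\mathbb{R}^{2}}\left|\nabla\eta\right|^{2}dx
  =\frac{2\pi}{\ln\left(1/\sqrt{r}\right)}
  =\frac{4\pi}{\ln\frac{1}{r}}.
```

This $\eta$ is only Lipschitz; a mollification makes it smooth and compactly supported in $\mathbb{D}_{\sqrt{r}}(p)$ without changing the order of the bound.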
Now, we integrate the equation
$\Delta\Phi_{\varepsilon}=\Lambda_{\varepsilon}e^{2u_{\varepsilon}}\Phi_{\varepsilon}$
against
$\eta\Lambda_{\varepsilon}\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}^{2}}$
and we obtain
$\int_{\Sigma}\eta\left\langle\nabla{\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}^{2}}},\nabla\Phi_{\varepsilon}\right\rangle_{\Lambda_{\varepsilon}}+\int_{\Sigma}\nabla\eta\nabla{\Lambda_{\varepsilon}\Phi_{\varepsilon}}\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}^{2}}=\int_{\Sigma}\eta
e^{2u_{\varepsilon}}$
so that
$\begin{split}\int_{\Sigma}\eta\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|_{\Lambda_{\varepsilon}}^{2}=&\int_{\Sigma}\eta
e^{2u_{\varepsilon}}-\int_{\Sigma}\eta\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}^{2}}\nabla\omega_{\varepsilon}\Lambda_{\varepsilon}\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\int_{\Sigma}\nabla\eta\nabla{\Phi_{\varepsilon}}\Lambda_{\varepsilon}\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}^{2}}\\\
\leq&\left(\frac{C}{\lambda_{k_{m}^{\varepsilon}}\ln\frac{1}{r}}\right)^{\frac{1}{2}}+\int_{\Sigma}\eta\frac{\left|\nabla\omega_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}}+\left(\int_{\mathbb{D}_{\sqrt{r}}(p)}\frac{\left|\nabla{\omega_{\varepsilon}}\right|^{2}}{\omega_{\varepsilon}^{2}}\right)^{\frac{1}{2}}\left(\int_{\Sigma}\left|\nabla\eta\right|^{2}\right)^{\frac{1}{2}}\end{split}$
where we used that
$\left|\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}=\omega_{\varepsilon}$,
so that
$\int_{\mathbb{D}_{\sqrt{r}}(p)}\eta\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|_{\Lambda_{\varepsilon}}^{2}\leq\left(\frac{C}{\ln\frac{1}{r}}\right)^{\frac{1}{2}}\left(\lambda_{k_{m}^{\varepsilon}}^{-\frac{1}{2}}+O\left(\delta_{\varepsilon}^{\frac{1}{2}}\right)\right)+O\left(\delta_{\varepsilon}\right)$
Now, to conclude, we have by (2.4) that for any $p\in\Sigma$,
$\int_{\mathbb{D}_{\sqrt{r}}(p)}\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}-\int_{\mathbb{D}_{r}(p)}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|_{\Lambda_{\varepsilon}}^{2}\leq
O\left(\delta_{\varepsilon}^{\frac{1}{4}}\right)$
and since we know that
$\int_{\Sigma}\left|\nabla\phi_{i}^{\varepsilon}\right|^{2}=t_{i}^{\varepsilon}\lambda_{k_{i}^{\varepsilon}}$
we obtain that
$\int_{\mathbb{D}_{\sqrt{r}}(p)}\left|\nabla\Phi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}\leq\max_{j}\lambda_{k_{j}}^{\varepsilon}\int_{\mathbb{D}_{\sqrt{r}}(p)}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|_{\Lambda_{\varepsilon}}^{2}+\sum_{i;\lambda_{k_{i}^{\varepsilon}}\to
0}t_{i}^{\varepsilon}\lambda_{k_{i}^{\varepsilon}}+O\left(\delta_{\varepsilon}^{\frac{1}{4}}\right)$
which completes the proof of the Claim. ∎
#### 2.3.3. Local $H^{1}$-convergence of eigenfunctions by harmonic
replacement, in the case $\lambda_{k_{1}}\neq 0$
Since the case $\lambda_{k_{1}}\neq 0$ is simpler, we assume it in the
following claim in order to obtain the strong $H^{1}$ convergence of
$\Phi_{\varepsilon}$ far from $p_{1},\cdots,p_{s}$.
###### Claim 2.5.
We assume that the eigenvalues $\lambda_{k_{i}^{\varepsilon}}$ are lower
bounded by a positive constant. Then $\Phi_{\varepsilon}$ converges to a
harmonic map $\Phi_{0}:\Sigma\to\mathcal{E}_{\Lambda}$ in
$H^{1}_{loc}(\Sigma\setminus\\{p_{1},\cdots,p_{s}\\})$.
###### Proof.
Let $p\in\Sigma\setminus\\{p_{1},\cdots,p_{s}\\}$. Since we have (2.5) and
the $\lambda_{k_{i}^{\varepsilon}}^{\varepsilon}$ are lower bounded by a
positive constant, let $r>0$ be such that for any $\varepsilon>0$,
$\int_{\mathbb{D}_{r}(p)}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|^{2}_{g}<\varepsilon_{0}.$
Let
$\Psi_{\varepsilon}:\mathbb{D}_{r}(p)\to\mathcal{E}_{\Lambda_{\varepsilon}}$
be the harmonic replacement of
$\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}$ and we obtain by [CM08]
Theorem 3.1 (see also [LP19] Theorem 1.2)
(2.6)
$\int_{\mathbb{D}_{r}(p)}\left|\nabla\left(\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right)\right|^{2}_{g}\leq
2\left(\int_{\mathbb{D}_{r}(p)}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|^{2}_{g}-\int_{\mathbb{D}_{r}(p)}\left|\nabla\Psi_{\varepsilon}\right|^{2}_{g}\right).$
Let’s prove that the right-hand term converges to $0$ as $\varepsilon\to 0$.
We test the function
$\frac{\Phi_{\varepsilon}^{i}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}^{i}$
in the variational characterization of
$\lambda_{\star}:=\lambda_{\star}\left(\mathbb{D}_{r}(p),e^{2u_{\varepsilon}}\right)$:
$\lambda_{k_{i}^{\varepsilon}}\int_{\mathbb{D}_{r}(p)}\left(\frac{\Phi_{\varepsilon}^{i}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}^{i}\right)^{2}e^{2u_{\varepsilon}}\leq\lambda_{\star}\int_{\mathbb{D}_{r}(p)}\left(\frac{\Phi_{\varepsilon}^{i}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}^{i}\right)^{2}e^{2u_{\varepsilon}}\leq\int_{\mathbb{D}_{r}(p)}\left|\nabla\left(\frac{\Phi_{\varepsilon}^{i}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}^{i}\right)\right|^{2}$
and we sum on $i$ to get
(2.7)
$\int_{\mathbb{D}_{r}(p)}\left|\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}e^{2u_{\varepsilon}}\leq\int_{\mathbb{D}_{r}(p)}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|^{2}+\int_{\mathbb{D}_{r}(p)}\left|\nabla\Psi_{\varepsilon}\right|^{2}-2\int_{\mathbb{D}_{r}(p)}\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\nabla\Psi_{\varepsilon}$
Now we test the equation
$\Delta\Phi_{\varepsilon}=\Lambda_{\varepsilon}e^{2u_{\varepsilon}}\Phi_{\varepsilon}$
against
$\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}^{2}}-\frac{\Psi_{\varepsilon}}{\omega_{\varepsilon}}$
and, after an integration by parts (using that
$\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}^{2}}-\frac{\Psi_{\varepsilon}}{\omega_{\varepsilon}}=0$
on $\partial\mathbb{D}_{r}(p)$), we get
$\int_{\mathbb{D}_{r}(p)}\left(\frac{1}{\omega_{\varepsilon}}\nabla\Phi_{\varepsilon}\nabla\left(\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right)+\nabla\frac{1}{\omega_{\varepsilon}}\nabla\Phi_{\varepsilon}\left(\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right)\right)=\int_{\mathbb{D}_{r}(p)}\left\langle\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}},\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right\rangle_{\Lambda_{\varepsilon}}e^{2u_{\varepsilon}}$
so that,
(2.8)
$\begin{split}\int_{\mathbb{D}_{r}(p)}\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\nabla\left(\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right)=&\int_{\mathbb{D}_{r}(p)}\left\langle\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}},\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right\rangle_{\Lambda_{\varepsilon}}e^{2u_{\varepsilon}}-\int_{\mathbb{D}_{r}(p)}\nabla\frac{1}{\omega_{\varepsilon}}\nabla\Phi_{\varepsilon}\left(\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right)\\\
&+\int_{\mathbb{D}_{r}(p)}\Phi_{\varepsilon}\nabla\frac{1}{\omega_{\varepsilon}}\nabla\left(\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right)\end{split}$
Knowing that
$\left|\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|_{\Lambda_{\varepsilon}}=\left|\Psi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}$,
it is clear that
$2\left\langle\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}},\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right\rangle_{\Lambda_{\varepsilon}}=\left|\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}$
and multiplying (2.8) by $2$, we obtain
(2.9)
$\begin{split}&2\int_{\mathbb{D}_{r}(p)}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|^{2}-2\int_{\mathbb{D}_{r}(p)}\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\nabla\Psi_{\varepsilon}\\\
&=\int_{\mathbb{D}_{r}(p)}\left|\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}e^{2u_{\varepsilon}}+2\int_{\mathbb{D}_{r}(p)}\frac{\nabla\omega_{\varepsilon}}{\omega_{\varepsilon}}\left(\frac{\nabla\Phi_{\varepsilon}}{\omega_{\varepsilon}}\left(\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right)-\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\nabla\left(\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right)\right)\\\
&\leq\int_{\mathbb{D}_{r}(p)}\left|\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\right|_{\Lambda_{\varepsilon}}^{2}e^{2u_{\varepsilon}}+O\left(\frac{\delta_{\varepsilon}^{\frac{1}{2}}}{\lambda_{k_{1}^{\varepsilon}}^{\frac{1}{2}}}\right)\end{split}$
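For the reader's convenience, the identity $2\left\langle a,a-b\right\rangle_{\Lambda_{\varepsilon}}=\left|a-b\right|_{\Lambda_{\varepsilon}}^{2}$ used above (with $a=\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}$ and $b=\Psi_{\varepsilon}$) is simply the expansion of the square combined with the equality of norms $\left|a\right|_{\Lambda_{\varepsilon}}=\left|b\right|_{\Lambda_{\varepsilon}}$:

```latex
2\left\langle a,a-b\right\rangle_{\Lambda_{\varepsilon}}
  =\left|a-b\right|_{\Lambda_{\varepsilon}}^{2}
   +\left|a\right|_{\Lambda_{\varepsilon}}^{2}
   -\left|b\right|_{\Lambda_{\varepsilon}}^{2}
  =\left|a-b\right|_{\Lambda_{\varepsilon}}^{2}\,.
```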
Summing (2.7) and (2.9), we get
$\int_{\mathbb{D}_{r}(p)}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|^{2}_{g}-\int_{\mathbb{D}_{r}(p)}\left|\nabla\Psi_{\varepsilon}\right|^{2}_{g}\leq
O\left(\frac{\delta_{\varepsilon}^{\frac{1}{2}}}{\lambda_{k_{1}^{\varepsilon}}^{\frac{1}{2}}}\right)$
By (2.6), we obtain that
$\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}-\Psi_{\varepsilon}\to 0\text{
in }H^{1}\left(\mathbb{D}_{r}(p)\right)$
Now since $\Psi_{\varepsilon}$ converges in
$\mathcal{C}^{k}\left(\mathbb{D}_{\frac{r}{2}}(p)\right)$ for any
$k\in\mathbb{N}$ to some harmonic map $\Phi$, and because the property is true
for all $p\in\Sigma\setminus\\{p_{1},\cdots,p_{s}\\}$, we have that
$\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}$ converges to some harmonic
map $\Phi_{0}$ in $H^{1}_{loc}(\Sigma\setminus\\{p_{1},\cdots,p_{s}\\})$. Then
by (2.4) $\Phi_{\varepsilon}$ converges to $\Phi_{0}$ in
$H^{1}_{loc}(\Sigma\setminus\\{p_{1},\cdots,p_{s}\\})$.
∎
#### 2.3.4. Local $H^{1}$-convergence of eigenfunctions by harmonic
replacement, general case
In the general case, the obstruction to carrying out the proof of Claim 2.5 is
the lack of $L^{\infty}$ control of
$\frac{\phi_{\varepsilon}^{i}}{\omega_{\varepsilon}}$ in the case
$\lambda_{k_{i}^{\varepsilon}}\to 0$. However, thanks to the Courant-Lebesgue
lemma and the equation satisfied by $\phi_{\varepsilon}^{i}$, we can still
obtain boundedness of $\phi_{\varepsilon}^{i}$, but only on some circle.
This yields the needed boundedness of the harmonic replacement (see
Claim 2.7). Before that, we need a bound on the eigenfunctions
$\phi_{\varepsilon}^{i}$ in
$L^{2}_{loc}\left(\Sigma\setminus\left\\{p_{1},\cdots,p_{s}\right\\},g\right)$,
remembering that the $L^{2}\left(\Sigma,e^{2u_{\varepsilon}}g\right)$-norm of
$\phi_{\varepsilon}^{i}$ is $\sqrt{t_{i}^{\varepsilon}}$. The Poincaré
inequality used here is also a first step in [Pet14a] and [Pet18].
###### Claim 2.6.
We assume that
$\lim_{r\to 0}\liminf_{\varepsilon\to
0}\int_{\Sigma\setminus\bigcup_{j=1}^{s}\mathbb{D}_{r}(p_{j})}e^{2u_{\varepsilon}}dA_{g}>0$
Then, for any $i\in\\{1,\cdots,m\\}$, for any $\rho>0$ small enough, there is
a constant $C(\rho)>0$ such that for any $\varepsilon$,
$\int_{\Sigma\setminus\bigcup_{j=1}^{s}\mathbb{D}_{\rho}(p_{j})}\left(\phi_{\varepsilon}^{i}-m_{\varepsilon,i,\rho}\right)^{2}dA_{g}\leq
C(\rho)\int_{\Sigma}\left|\nabla\phi_{\varepsilon}^{i}\right|_{g}^{2}dA_{g}$
where
$m_{\varepsilon,i,\rho}=\int_{\Sigma\setminus\bigcup_{j=1}^{s}\mathbb{D}_{\rho}(p_{j})}\phi_{\varepsilon}^{i}\frac{e^{2u_{\varepsilon}}dA_{g}}{\int_{\Sigma\setminus\bigcup_{j=1}^{s}\mathbb{D}_{\rho}(p_{j})}e^{2u_{\varepsilon}}dA_{g}}\text{
and }\lim_{\rho\to 0}\lim_{\varepsilon\to
0}\frac{\left|m_{\varepsilon,i,\rho}\right|}{\sqrt{t_{\varepsilon}^{i}}}\leq
1$
###### Proof.
We have the following Poincaré inequality for any domain $\Omega$ of $\Sigma$
$\int_{\Omega}\left(\zeta-\int_{\Omega}\zeta\frac{e^{2u_{\varepsilon}}}{\int_{\Omega}e^{2u_{\varepsilon}}dA_{g}}dA_{g}\right)^{2}dA_{g}\leq
C\left\|\frac{e^{2u_{\varepsilon}}}{\int_{\Omega}e^{2u_{\varepsilon}}dA_{g}}\right\|_{H^{-1}\left(\Omega\right)}^{2}\int_{\Omega}\left|\nabla\zeta\right|_{g}^{2}dA_{g}.$
We choose $\Omega=\Sigma\setminus\bigcup_{j=1}^{s}\mathbb{D}_{\rho}(p_{j})$
for $\rho>0$, and we aim to prove that there is a constant $C(\rho)>0$ such
that
$\left\|\frac{e^{2u_{\varepsilon}}}{\int_{\Omega}e^{2u_{\varepsilon}}dA_{g}}\right\|_{H^{-1}\left(\Omega\right)}^{2}\leq
C(\rho).$
With the first assumption of Claim 2.6, it suffices to prove that
$\left\|e^{2u_{\varepsilon}}\right\|_{H^{-1}\left(\Omega\right)}^{2}\leq
C(\rho)$. This holds by globalizing, via a partition of unity, the following
estimate coming from Claim 2.3: for any
$p\in\Sigma\setminus\\{p_{1},\cdots,p_{s}\\}$ and $\zeta\in
H^{1}_{0}\left(\mathbb{D}_{r_{\star}(p)}(p)\right)$,
$\begin{split}\int_{\mathbb{D}_{r_{\star}(p)}(p)}\zeta
e^{2u_{\varepsilon}}\leq\left(\int_{\mathbb{D}_{r_{\star}(p)}(p)}\zeta^{2}e^{2u_{\varepsilon}}\right)^{\frac{1}{2}}&\leq\left(\frac{1}{\lambda_{\star}\left(\mathbb{D}_{r_{\star}(p)}(p),e^{2u_{\varepsilon}}\right)}\int_{\mathbb{D}_{r_{\star}(p)}(p)}\left|\nabla\zeta\right|^{2}\right)^{\frac{1}{2}}\\\
&\leq\left(\frac{1}{\lambda_{k_{m}}\left(e^{2u_{\varepsilon}}g\right)}\right)^{\frac{1}{2}}\left(\int_{\mathbb{D}_{r_{\star}(p)}(p)}\left|\nabla\zeta\right|^{2}\right)^{\frac{1}{2}}\end{split}$
where $\lambda_{k_{m}}\left(e^{2u_{\varepsilon}}g\right)$ is uniformly lower
bounded. The number of disks in the partition of unity gives the dependence
on $\rho$ of the upper bound of
$\left\|e^{2u_{\varepsilon}}\right\|_{H^{-1}\left(\Omega\right)}^{2}$. Then,
we apply the Poincaré inequality to all the eigenfunctions in order to obtain
the Claim. ∎
Up to a subsequence, let $I\in\\{1,\cdots,m\\}$ be such that for $i\leq I$,
$\lambda_{\varepsilon}^{i}\to 0$ and for $i\geq I+1$,
$\lambda_{\varepsilon}^{i}$ is uniformly lower bounded. Thanks to Claim 2.6,
we know that for $1\leq i\leq I$, there is a sequence of constant functions
$m_{\varepsilon}^{i}$ (which does not depend on $\rho$) such that
(2.10) $\phi_{\varepsilon}^{i}-m_{\varepsilon}^{i}\to 0\text{ in
}H^{1}_{loc}(\Sigma\setminus\\{p_{1},\cdots,p_{s}\\})$
and
(2.11) $\forall\rho_{0}>0,\exists\varepsilon_{0}>0,\forall
0<\varepsilon\leq\varepsilon_{0},\left|m_{\varepsilon}^{i}\right|\leq\sqrt{t_{\varepsilon}^{i}}\left(1+\rho_{0}\right)$
We denote
$\Phi_{\varepsilon}=(\phi_{\varepsilon}^{1},\cdots,\phi_{\varepsilon}^{I},\widetilde{\Phi_{\varepsilon}})$.
It remains to prove the $H^{1}$ convergence of
$\widetilde{\Phi_{\varepsilon}}$. We assume that there is a constant $c_{0}$
such that for any $\varepsilon>0$,
(2.12) $1-\sum_{i=1}^{I}t_{i}^{\varepsilon}\lambda_{i}^{\varepsilon}\geq
c_{0}$
because otherwise we would directly have the $H^{1}$ convergence of
$\widetilde{\Phi_{\varepsilon}}$ to $0$. Let’s first prove that, in this
case, the harmonic replacement of $\widetilde{\Phi_{\varepsilon}}$ is well
defined.
###### Claim 2.7.
For any $p\in\Sigma\setminus\\{p_{1},\cdots,p_{s}\\}$, there is $r_{0}>0$ such
that for any $\varepsilon$ there is $r_{0}\leq r_{\varepsilon}\leq
r_{\star}(p)$, such that the harmonic replacement
$\widetilde{\Psi_{\varepsilon}}:\Sigma\to\mathcal{E}_{\widetilde{\Lambda_{\varepsilon}}}$
of $\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}$
on $\mathbb{D}_{r_{\varepsilon}}(p)$ satisfies
(2.13)
$\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\widetilde{\Psi_{\varepsilon}}\right|^{2}_{g}dA_{g}\leq\varepsilon_{0}$
where $\widetilde{\omega_{\varepsilon}}$ is defined on
$\partial\mathbb{D}_{r_{\varepsilon}}(p)$ by
$\widetilde{\omega_{\varepsilon}}=\sqrt{\sum_{i=I+1}^{m}\lambda_{k_{i}^{\varepsilon}}\left(\phi_{i}^{\varepsilon}\right)^{2}}=\sqrt{\omega_{\varepsilon}^{2}-\sum_{i=1}^{I}\lambda_{k_{i}^{\varepsilon}}\left(\phi_{i}^{\varepsilon}\right)^{2}}\text{
and
}\widetilde{\Lambda_{\varepsilon}}=\mathrm{diag}\left(\lambda_{\varepsilon}^{I+1},\cdots,\lambda_{\varepsilon}^{m}\right)$
and
(2.14)
$\frac{\omega_{\varepsilon}}{\widetilde{\omega_{\varepsilon}}}\leq\sqrt{\frac{2}{c_{0}}}$
###### Proof.
Let $\eta>0$ be a small constant to be chosen later. Recall that, up to a
subsequence, $I\in\\{1,\cdots,m\\}$ is such that
$\lambda_{\varepsilon}^{i}\to 0$ for $i\leq I$ and
$\lambda_{\varepsilon}^{i}$ is uniformly lower bounded for $i\geq I+1$.
Thanks to (2.5), let $r>0$ be a radius such that $r\leq r_{\star}(p)$ and for
any $\varepsilon>0$,
$\int_{\mathbb{D}_{r}(p)}\left|\nabla\Phi_{\varepsilon}\right|_{g}^{2}\leq\eta\varepsilon_{0}.$
By the Courant-Lebesgue lemma, let $\frac{r}{2}\leq r_{\varepsilon}\leq r$ be
a radius such that
(2.15)
$\int_{\partial\mathbb{D}_{r_{\varepsilon}}(p)}\left|\partial_{\theta}\Phi_{\varepsilon}\right|^{2}d\theta\leq\frac{1}{\ln
2}\int_{\mathbb{D}_{r}(p)}\left|\nabla\Phi_{\varepsilon}\right|_{g}^{2}\leq\frac{1}{\ln
2}\eta\varepsilon_{0}.$
As a consequence,
(2.16) $\forall
q,q^{\prime}\in\partial\mathbb{D}_{r_{\varepsilon}}(p);\left|\Phi_{\varepsilon}(q)-\Phi_{\varepsilon}(q^{\prime})\right|^{2}\leq\frac{\pi}{\ln
2}\eta\varepsilon_{0}.$
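For completeness, (2.16) follows from (2.15): two points $q,q^{\prime}\in\partial\mathbb{D}_{r_{\varepsilon}}(p)$ can be joined along a boundary arc of angular length at most $\pi$, so by the fundamental theorem of calculus and the Cauchy–Schwarz inequality,

```latex
\left|\Phi_{\varepsilon}(q)-\Phi_{\varepsilon}(q^{\prime})\right|
  \leq\int\left|\partial_{\theta}\Phi_{\varepsilon}\right|d\theta
  \leq\sqrt{\pi}\left(\int_{\partial\mathbb{D}_{r_{\varepsilon}}(p)}
      \left|\partial_{\theta}\Phi_{\varepsilon}\right|^{2}d\theta\right)^{\frac{1}{2}},
```

and squaring, together with (2.15), gives (2.16).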
By the classical trace embedding of
$H^{1}\left(\mathbb{D}_{r_{\varepsilon}}(p)\right)$ into
$L^{2}\left(\partial\mathbb{D}_{r_{\varepsilon}}(p)\right)$ and (2.10), we
have that for $1\leq i\leq I$,
$\int_{\partial\mathbb{D}_{r_{\varepsilon}}(p)}\left(\phi_{\varepsilon}^{i}-m_{\varepsilon}^{i}\right)^{2}\to
0$
as $\varepsilon\to 0$. Using this and (2.16), we obtain for $1\leq i\leq I$
and $\varepsilon$ small enough:
$\forall
q,q^{\prime}\in\partial\mathbb{D}_{r_{\varepsilon}}(p);\left|\phi_{\varepsilon}^{i}(q)-m_{\varepsilon}^{i}\right|^{2}\leq\frac{\pi}{\ln
2}\eta\varepsilon_{0}.$
On $\partial\mathbb{D}_{r_{\varepsilon}}(p)$, we have from (2.11) with
$\rho_{0}=\frac{c_{0}}{4}$ that for $\varepsilon$ small enough,
$\begin{split}\frac{\widetilde{\omega_{\varepsilon}}^{2}}{\omega_{\varepsilon}^{2}}=1-\frac{1}{\omega_{\varepsilon}^{2}}\sum_{i=1}^{I}\lambda_{k_{i}^{\varepsilon}}\left(\phi_{i}^{\varepsilon}\right)^{2}&\geq
1-\frac{1}{1+\delta_{\varepsilon}}\sum_{i=1}^{I}\lambda_{k_{i}^{\varepsilon}}\left(m_{i}^{\varepsilon}+\sqrt{\frac{\pi}{\ln
2}\eta\varepsilon_{0}}\right)^{2}\\\ &\geq
1-\sum_{i=1}^{I}\lambda_{k_{i}^{\varepsilon}}t_{i}^{\varepsilon}(1+\rho_{0})-O(\lambda_{k_{I}^{\varepsilon}}+\delta_{\varepsilon})\geq\frac{3c_{0}}{4}-O(\lambda_{k_{I}^{\varepsilon}}+\delta_{\varepsilon})\end{split}$
and we obtain (2.14). Then $\widetilde{\Psi_{\varepsilon}}$ is well defined.
Let’s prove the energy bound (2.13). Let
$H_{\varepsilon}:\mathbb{D}_{r_{\varepsilon}}(p)\to\mathbb{R}^{m-I}$ be the
(Euclidean) harmonic extension of
$\widetilde{\Phi_{\varepsilon}}:\partial\mathbb{D}_{r_{\varepsilon}}(p)\to\mathbb{R}^{m-I}$
on $\mathbb{D}_{r_{\varepsilon}}(p)$. We have that
$\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\widetilde{\Psi_{\varepsilon}}\right|^{2}_{g}dA_{g}\leq\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\frac{H_{\varepsilon}}{\left|H_{\varepsilon}\right|_{\widetilde{\Lambda_{\varepsilon}}}}\right|^{2}_{g}dA_{g}$
where we choose $\eta$ such that
$\left|H_{\varepsilon}\right|_{\widetilde{\Lambda_{\varepsilon}}}$ is
uniformly lower bounded by a positive constant:
$\min_{q\in\mathbb{D}_{r_{\varepsilon}}(p)}\left|H_{\varepsilon}(q)\right|_{\widetilde{\Lambda_{\varepsilon}}}^{2}\geq\sum_{i=I+1}^{m}\lambda_{k_{i}^{\varepsilon}}\min_{q\in\mathbb{D}_{r_{\varepsilon}}(p)}\left(H_{i}^{\varepsilon}(q)\right)^{2}\geq\sum_{i=I+1}^{m}\lambda_{k_{i}^{\varepsilon}}\min_{q\in\partial\mathbb{D}_{r_{\varepsilon}}(p)}\left(\phi_{i}^{\varepsilon}(q)\right)^{2}$
where the second inequality comes from the maximum principle for harmonic
functions. Let $q_{\varepsilon}^{i}\in\partial\mathbb{D}_{r_{\varepsilon}}(p)$
be such that
$\phi_{\varepsilon}^{i}(q_{\varepsilon}^{i})^{2}=\min_{q\in\partial\mathbb{D}_{r_{\varepsilon}}(p)}\left(\phi_{i}^{\varepsilon}(q)\right)^{2}$.
For $q\in\partial\mathbb{D}_{r_{\varepsilon}}(p)$, we have
$\begin{split}\left|H_{\varepsilon}\right|_{\widetilde{\Lambda_{\varepsilon}}}(q)\geq\widetilde{\omega_{\varepsilon}}(q)-\sqrt{\sum_{i=I+1}^{m}\lambda_{k_{i}^{\varepsilon}}\left(\phi_{\varepsilon}^{i}(q)-\phi_{\varepsilon}^{i}(q_{\varepsilon}^{i})\right)^{2}}\geq\sqrt{\frac{c_{0}}{2}}-\sqrt{\sum_{i=I+1}^{m}\lambda_{k_{i}^{\varepsilon}}\frac{\pi}{\ln
2}\eta\varepsilon_{0}}\end{split}$
so that choosing
$\eta\leq\left(\sum_{i=I+1}^{m}\lambda_{k_{i}^{\varepsilon}}\frac{\pi}{\ln
2}\varepsilon_{0}\right)^{-1}\frac{c_{0}}{8}$, we obtain that
$\left|H_{\varepsilon}\right|_{\widetilde{\Lambda_{\varepsilon}}}$ is
uniformly lower bounded by $\sqrt{\frac{c_{0}}{8}}$. By a straightforward
computation, we obtain that
$\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\frac{H_{\varepsilon}}{\left|H_{\varepsilon}\right|_{\widetilde{\Lambda_{\varepsilon}}}}\right|^{2}_{g}dA_{g}\leq
K\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla
H_{\varepsilon}\right|^{2}_{g}dA_{g}\leq
K\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\widetilde{\Phi_{\varepsilon}}\right|^{2}_{g}dA_{g}\leq
K\eta\varepsilon_{0}$
where $K$ is independent of $\varepsilon$ and $\eta$. Choosing $\eta\leq
K^{-1}$ completes the proof of the Claim. ∎
In order to define a convenient replacement, we define for
$i\in\\{1,\cdots,I\\}$ the harmonic function $\psi_{\varepsilon}^{i}$ such
that $\psi_{\varepsilon}^{i}=\phi_{\varepsilon}^{i}$ on
$\partial\mathbb{D}_{r_{\varepsilon}}(p)$. We complete the definition of a
replacement of $\Phi_{\varepsilon}$ in the whole target ellipsoid
(2.17)
$\Psi_{\varepsilon}=\left(\psi_{\varepsilon}^{1},\cdots,\psi_{\varepsilon}^{I},\widetilde{\omega_{\varepsilon}}\widetilde{\Psi_{\varepsilon}}\right):\mathbb{D}_{r_{\varepsilon}}(p)\to\mathcal{E}_{\Lambda_{\varepsilon}}\text{
where
}\widetilde{\omega_{\varepsilon}}=\sqrt{\omega_{\varepsilon}^{2}-\sum_{i=1}^{I}\lambda_{k_{i}^{\varepsilon}}\left(\psi_{\varepsilon}^{i}\right)^{2}}$
We are now in a position to prove the expected general result.
###### Claim 2.8.
$\Phi_{\varepsilon}$ converges to a harmonic map
$\Phi_{0}:\Sigma\to\mathcal{E}_{\Lambda}$ in
$H^{1}_{loc}(\Sigma\setminus\\{p_{1},\cdots,p_{s}\\})$.
###### Proof.
Let $p\in\Sigma\setminus\\{p_{1},\cdots,p_{s}\\}$, and let $r_{\varepsilon}$
and $\Psi_{\varepsilon}$, the replacement of $\Phi_{\varepsilon}$ on
$\mathbb{D}_{r_{\varepsilon}}(p)$, be given by (2.17). We first test
$\frac{\widetilde{\Phi_{\varepsilon}}-\widetilde{\omega_{\varepsilon}}\widetilde{\Psi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}^{2}}$
against the equation satisfied by $\widetilde{\Phi_{\varepsilon}}$. We obtain
$\begin{split}\int_{\mathbb{D}_{r_{\varepsilon}}(p)}&\left\langle\nabla\widetilde{\Phi_{\varepsilon}},\nabla\frac{\widetilde{\Phi_{\varepsilon}}-\widetilde{\omega_{\varepsilon}}\widetilde{\Psi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}^{2}}\right\rangle=\int_{\mathbb{D}_{r_{\varepsilon}}(p)}e^{2u_{\varepsilon}}\left\langle\widetilde{\Lambda_{\varepsilon}}\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}},\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}-\widetilde{\Psi_{\varepsilon}}\right\rangle\\\
=&\frac{1}{2}\int_{\mathbb{D}_{r_{\varepsilon}}(p)}e^{2u_{\varepsilon}}\left|\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}-\widetilde{\Psi_{\varepsilon}}\right|_{\widetilde{\Lambda_{\varepsilon}}}^{2}-\frac{1}{2}\sum_{i=1}^{I}\lambda_{k_{i}^{\varepsilon}}\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\frac{e^{2u_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}^{2}}\left(\left(\phi_{\varepsilon}^{i}\right)^{2}-\left(\psi_{\varepsilon}^{i}\right)^{2}\right)\\\
\leq&\frac{1}{2}\sum_{i=I+1}^{m}\frac{\lambda_{k_{i}^{\varepsilon}}}{\lambda_{k_{m}^{\varepsilon}}}\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\left(\frac{\widetilde{\phi_{\varepsilon}^{i}}}{\widetilde{\omega_{\varepsilon}}}-\widetilde{\psi_{\varepsilon}^{i}}\right)\right|^{2}\\\
&+\sum_{i=1}^{I}\frac{8}{c_{0}}\lambda_{k_{i}^{\varepsilon}}\left(\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left(\phi_{\varepsilon}^{i}+\psi_{\varepsilon}^{i}\right)^{2}e^{2u_{\varepsilon}}\right)^{\frac{1}{2}}\left(\frac{1}{\lambda_{k_{m}^{\varepsilon}}}\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\left(\phi_{\varepsilon}^{i}-\psi_{\varepsilon}^{i}\right)\right|^{2}\right)^{\frac{1}{2}}\\\
\leq&\frac{1}{2}\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\left(\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}-\widetilde{\Psi_{\varepsilon}}\right)\right|^{2}+C\sum_{i=1}^{I}\frac{8}{c_{0}}\frac{\lambda_{k_{i}^{\varepsilon}}}{\lambda_{k_{m}^{\varepsilon}}}\left(t_{\varepsilon}^{i}\right)^{\frac{1}{2}}\left(\lambda_{\varepsilon}^{i}t_{\varepsilon}^{i}\right)^{\frac{1}{2}}\\\
\leq&\frac{1}{2}\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\left(\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}-\widetilde{\Psi_{\varepsilon}}\right)\right|^{2}+O\left(\left(\lambda_{k_{I}^{\varepsilon}}\right)^{\frac{1}{2}}\right)\end{split}$
which gives
(2.18)
$\begin{split}\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\right|^{2}-&\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\widetilde{\Psi_{\varepsilon}}\right|^{2}\leq
O\left(\left(\lambda_{k_{I}^{\varepsilon}}\right)^{\frac{1}{2}}\right)\\\
&+\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\frac{\nabla\widetilde{\omega_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\left(\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\nabla\left(\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}-\widetilde{\Psi_{\varepsilon}}\right)-\left(\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}-\widetilde{\Psi_{\varepsilon}}\right)\frac{\nabla\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\right)\\\
\leq&O\left(\left(\lambda_{k_{I}^{\varepsilon}}\right)^{\frac{1}{2}}+\delta_{\varepsilon}^{\frac{1}{2}}\right)\end{split}$
Now, we test
$\widetilde{\Psi_{\varepsilon}}-\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}$
against the harmonic map equation satisfied by
$\widetilde{\Psi_{\varepsilon}}$. We obtain in a similar way
$\begin{split}I:=2\int_{\mathbb{D}_{r_{\varepsilon}}(p)}&\left\langle\nabla\left(\widetilde{\Psi_{\varepsilon}}-\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\right),\nabla\widetilde{\Psi_{\varepsilon}}\right\rangle=\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\frac{\left|\nabla\widetilde{\Psi_{\varepsilon}}\right|_{\widetilde{\Lambda_{\varepsilon}}}^{2}}{\left|\widetilde{\Lambda_{\varepsilon}}\widetilde{\Psi_{\varepsilon}}\right|^{2}}\left\langle\widetilde{\Lambda_{\varepsilon}}\widetilde{\Psi_{\varepsilon}},\widetilde{\Psi_{\varepsilon}}-\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\right\rangle\\\
=&\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\frac{\left|\nabla\widetilde{\Psi_{\varepsilon}}\right|_{\widetilde{\Lambda_{\varepsilon}}}^{2}}{\left|\widetilde{\Lambda_{\varepsilon}}\widetilde{\Psi_{\varepsilon}}\right|^{2}}\left(\left|\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}-\widetilde{\Psi_{\varepsilon}}\right|_{\widetilde{\Lambda_{\varepsilon}}}^{2}-\sum_{i=1}^{I}\frac{\lambda_{k_{i}^{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}^{2}}\left(\left(\psi_{\varepsilon}^{i}\right)^{2}-\left(\phi_{\varepsilon}^{i}\right)^{2}\right)\right)\\\
\end{split}$
and using the consequence of the $\varepsilon$-regularity result on harmonic
maps
$\left|\nabla\Psi_{\varepsilon}\right|^{2}\leq\frac{C}{\left(1-\left|x\right|\right)^{2}}$
and a Hardy inequality (this is how energy convexity results are proved in
[LP19]), we obtain:
$\begin{split}I\leq&C\varepsilon_{0}\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\left(\widetilde{\Psi_{\varepsilon}}-\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\right)\right|^{2}+C\varepsilon_{0}\sum_{i=1}^{I}\frac{8}{c_{0}}\lambda_{k_{i}^{\varepsilon}}\left(t_{\varepsilon}^{i}\right)^{\frac{1}{2}}\left(\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\left(\psi_{\varepsilon}^{i}-\phi_{\varepsilon}^{i}\right)\right|^{2}\right)^{\frac{1}{2}}\\\
\leq&C\varepsilon_{0}\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\left(\widetilde{\Psi_{\varepsilon}}-\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\right)\right|^{2}+O\left(\left(\lambda_{k_{I}^{\varepsilon}}\right)^{\frac{1}{2}}\right).\end{split}$
Letting $\varepsilon_{0}$ be small enough such that
$1-C\varepsilon_{0}\geq\frac{1}{2}$ we obtain that
$\begin{split}\frac{1}{2}\int_{\mathbb{D}_{r_{\varepsilon}}(p)}&\left|\nabla\left(\widetilde{\Psi_{\varepsilon}}-\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\right)\right|^{2}\leq\left(1-C\varepsilon_{0}\right)\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\left(\widetilde{\Psi_{\varepsilon}}-\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\right)\right|^{2}\\\
\leq&\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\left(\widetilde{\Psi_{\varepsilon}}-\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\right)\right|^{2}-2\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left\langle\nabla\left(\widetilde{\Psi_{\varepsilon}}-\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\right),\nabla\widetilde{\Psi_{\varepsilon}}\right\rangle+O\left(\left(\lambda_{k_{I}^{\varepsilon}}\right)^{\frac{1}{2}}\right)\\\
\leq&\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\right|^{2}-\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\widetilde{\Psi_{\varepsilon}}\right|^{2}+O\left(\left(\lambda_{k_{I}^{\varepsilon}}\right)^{\frac{1}{2}}\right).\end{split}$
This implies
$\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\left(\widetilde{\Psi_{\varepsilon}}-\frac{\widetilde{\Phi_{\varepsilon}}}{\widetilde{\omega_{\varepsilon}}}\right)\right|^{2}=O\left(\left(\lambda_{k_{I}^{\varepsilon}}\right)^{\frac{1}{2}}+\delta_{\varepsilon}^{\frac{1}{2}}\right)$
and we also obtain
$\int_{\mathbb{D}_{r_{\varepsilon}}(p)}\left|\nabla\left(\frac{\widetilde{\omega_{\varepsilon}}}{\omega_{\varepsilon}}\widetilde{\Psi_{\varepsilon}}-\frac{\widetilde{\Phi_{\varepsilon}}}{\omega_{\varepsilon}}\right)\right|^{2}=O\left(\left(\lambda_{k_{I}^{\varepsilon}}\right)^{\frac{1}{2}}+\delta_{\varepsilon}^{\frac{1}{2}}\right)$
as $\varepsilon\to 0$. This gives the expected local $H^{1}$ comparison of
$\frac{\widetilde{\Phi_{\varepsilon}}}{\omega_{\varepsilon}}$ with a harmonic
map into $\mathcal{E}_{\widetilde{\Lambda_{\varepsilon}}}$ times the function
$\frac{\widetilde{\omega_{\varepsilon}}}{\omega_{\varepsilon}}$, which is
uniformly bounded above by $1$, uniformly lower bounded by
$\sqrt{\frac{c_{0}}{2}}$, and converges in $H^{1}$ to a constant function; we
then conclude as at the end of the proof of Claim 2.5. ∎
#### 2.3.5. The limiting measure has a smooth density
Let $\zeta\in\mathcal{C}_{c}^{\infty}\left(\mathbb{D}_{r}(p)\right)$. We have
that
$\displaystyle\int_{\Sigma}\zeta\left(\Lambda_{\varepsilon}\Phi_{\varepsilon}e^{2u_{\varepsilon}}dA_{g}-\Lambda\Phi_{0}d\tilde{\mu}_{0}\right)$
$\displaystyle=$
$\displaystyle\int_{\Sigma}\zeta\left(\Lambda_{\varepsilon}\Phi_{\varepsilon}-\Lambda\Phi_{0}\right)e^{2u_{\varepsilon}}dA_{g}$
$\displaystyle\quad+\int_{\Sigma}\zeta\Lambda\Phi_{0}\left(e^{2u_{\varepsilon}}dA_{g}-d\tilde{\mu}_{0}\right).$
For the first term on the right-hand side, we have
$\displaystyle\int_{\Sigma}\zeta\left(\Lambda_{\varepsilon}\Phi_{\varepsilon}-\Lambda\Phi_{0}\right)e^{2u_{\varepsilon}}dA_{g}$
$\displaystyle\leq$
$\displaystyle\left(\int_{\mathbb{D}_{r}(p)}\zeta^{2}\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}-\Lambda\Phi_{0}\right|^{2}e^{2u_{\varepsilon}}dA_{g}\right)^{\frac{1}{2}}$
$\displaystyle\leq$
$\displaystyle\left(\frac{1}{\lambda_{\star}(\mathbb{D}_{r}(p),e^{2u_{\varepsilon}})}\int_{\mathbb{D}_{r}(p)}\left|\nabla\left(\zeta\left|\Lambda_{\varepsilon}\Phi_{\varepsilon}-\Lambda\Phi_{0}\right|\right)\right|^{2}_{g}dA_{g}\right)^{\frac{1}{2}}$
$\displaystyle\leq$ $\displaystyle
C\left(\int_{\mathbb{D}_{r}(p)}\left|\nabla\left(\Phi_{\varepsilon}-\Phi_{0}\right)\right|^{2}_{g}dA_{g}\right)^{\frac{1}{2}}$
for some constant $C$ independent of $\varepsilon$. Passing to the limit
$\varepsilon\to 0$ in the weak formulation of the eigenvalue equation
$\Delta_{g}\Phi_{\varepsilon}=\Lambda_{\varepsilon}\Phi_{\varepsilon}e^{2u_{\varepsilon}}$,
we get
$\Delta\Phi_{0}=\Lambda\Phi_{0}\tilde{\mu}_{0}$
and since $\Phi_{0}$ is harmonic, we obtain that
$\tilde{\mu}_{0}=\frac{\left|\nabla\Phi_{0}\right|^{2}_{\Lambda}}{\left|\Lambda\Phi_{0}\right|_{\Lambda}^{2}}dA_{g}$.
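Indeed, as in the equation tested against $\widetilde{\Psi_{\varepsilon}}$ above, a harmonic map $\Phi_{0}$ into the ellipsoid $\mathcal{E}_{\Lambda}$ satisfies

```latex
\Delta_{g}\Phi_{0}
  =\frac{\left|\nabla\Phi_{0}\right|^{2}_{\Lambda}}
        {\left|\Lambda\Phi_{0}\right|_{\Lambda}^{2}}\,\Lambda\Phi_{0},
```

so comparing with $\Delta\Phi_{0}=\Lambda\Phi_{0}\tilde{\mu}_{0}$ identifies the density of $\tilde{\mu}_{0}$ with respect to $dA_{g}$.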
By the same analysis in the scales $\alpha_{\varepsilon}^{j}$ centered at
$q_{\varepsilon}^{j}$, we obtain limits $\Phi_{j}$ which are harmonic maps on
$\mathbb{R}^{2}$ into $\mathcal{E}_{\Lambda}$ such that
$\tilde{\mu}_{j}=\frac{\left|\nabla\Phi_{j}\right|^{2}_{\Lambda}}{\left|\Lambda\Phi_{j}\right|_{\Lambda}^{2}}dx$.
Up to a stereographic projection and a standard removability theorem for
point singularities of finite-energy harmonic maps at $\infty$, these
quantities for $1\leq j\leq l$ can be seen on $\mathbb{S}^{2}$.
### 2.4. Extra convergence results
#### 2.4.1. Global $H^{1}$ convergence of eigenfunctions
Notice that at this stage, $\Phi_{j}$ for $0\leq j\leq l$ are weak limits of
(the rescalings of) $\Phi_{\varepsilon}$ in $H^{1}\left(\Sigma\right)$ (or
$H^{1}_{loc}\left(\mathbb{R}^{2}\right)$). We prove here that the convergence
of this bubble tree is strong, meaning that we have the energy identity given
by Proposition 2.1. The main point is to prove that $H^{1}$ convergence also
holds in a neighborhood of the bad points. We set for $\rho>0$, and $1\leq
j\leq l$ ($l$ is given by Proposition 2.1)
$U_{0}(\rho)=\Sigma\setminus\bigcup_{i=1}^{s_{0}}\mathbb{D}_{\rho}(p_{i}^{0})\text{
and
}U_{j}(\rho)=\mathbb{D}_{\frac{1}{\rho}}\setminus\bigcup_{i=1}^{s_{j}}\mathbb{D}_{\rho}(p_{i}^{j})$
where $\\{p_{1}^{0},\cdots,p_{s_{0}}^{0}\\}=\\{p_{1},\cdots,p_{s}\\}$ are the
bad points given by Claim 2.3 and for $1\leq j\leq l$,
$\\{p_{1}^{j},\cdots,p_{s_{j}}^{j}\\}$ are the rescaled bad points: it is the
set of the finite limits (up to a subsequence as $\varepsilon\to 0$) of
$\frac{p_{i}^{\varepsilon}-q_{i}^{\varepsilon}}{\alpha_{i}^{\varepsilon}}$
where the indices $i$ satisfy
$\frac{r_{i}^{\varepsilon}}{\alpha_{i}^{\varepsilon}}\to 0$ as $\varepsilon\to
0$. We set for $1\leq j\leq l$,
$e^{2u_{j}^{\varepsilon}(z)}=\left(\alpha_{\varepsilon}^{j}\right)^{2}e^{2u_{\varepsilon}(q_{\varepsilon}^{j}+\alpha_{\varepsilon}^{j}z)}$
and
$\Phi_{j}^{\varepsilon}(z)=\Phi_{\varepsilon}(q_{\varepsilon}^{j}+\alpha_{\varepsilon}^{j}z)$,
the rescaled quantities at $j$ (for $j=0$, there is no rescaling). We denote
$Q(\varepsilon,\rho)$ any quantity which satisfies $\lim_{\rho\to
0}\lim_{\varepsilon\to 0}Q(\varepsilon,\rho)=0$. By Proposition 2.1, we have:
$\int_{\Sigma}\left|\nabla\Phi_{\varepsilon}\right|^{2}=\int_{\Sigma}e^{2u_{\varepsilon}}=\sum_{j=0}^{l}\int_{U_{j}(\sqrt{\rho})}e^{2u_{j}^{\varepsilon}}+Q(\varepsilon,\rho)=\int_{\Sigma}d\tilde{\mu}_{0}+\sum_{j=1}^{l}\int_{\mathbb{R}^{2}}d\tilde{\mu}_{j}+Q(\varepsilon,\rho)$
and testing the limit equations satisfied by $\Phi_{j}$ (limit of
$\Phi_{j}^{\varepsilon}$ as $\varepsilon\to 0$) against $\Phi_{j}$, we obtain
$\int_{\Sigma}\left|\nabla\Phi_{\varepsilon}\right|^{2}=\int_{\Sigma}\left|\nabla\Phi_{0}\right|^{2}dA_{g}+\sum_{j=1}^{l}\int_{\mathbb{R}^{2}}\left|\nabla\Phi_{j}\right|^{2}+Q(\varepsilon,\rho)$
and letting $\varepsilon\to 0$ and then $\rho\to 0$, and using the weak
convergence, we obtain the expected energy identity.
#### 2.4.2. $H^{-1}$ convergence of conformal factors
###### Claim 2.9.
We assume that $\lambda_{1}(e^{2u_{\varepsilon}}g)$ is uniformly lower bounded
by a positive constant. Then, $\Phi_{\varepsilon}$ converges strongly in
$H^{1}\left(\Sigma\right)$ and $e^{2u_{\varepsilon}}$ converges strongly to
$e^{2u}$ in $H^{-1}\left(\Sigma\right)$.
###### Proof.
Notice that, thanks to the assumption that
$\lambda_{1}^{\varepsilon}:=\lambda_{1}(e^{2u_{\varepsilon}}g)$ is uniformly
lower bounded, there is no bubble tree, and $\Phi_{\varepsilon}$
converges in $H^{1}$. Let $\Psi\in H^{1}(\Sigma)$. We have
$\int_{\Sigma}\nabla\Phi\nabla\Psi=\int_{\Sigma}e^{2u}\left\langle\Lambda\Phi,\Psi\right\rangle\text{
and
}\int_{\Sigma}\nabla\Phi_{\varepsilon}\nabla\Psi=\int_{\Sigma}e^{2u_{\varepsilon}}\left\langle\Lambda_{\varepsilon}\Phi_{\varepsilon},\Psi\right\rangle$
so that
$\left|\int_{\Sigma}e^{2u}\left\langle\Lambda\Phi,\Psi\right\rangle-\int_{\Sigma}e^{2u_{\varepsilon}}\left\langle\Lambda_{\varepsilon}\Phi_{\varepsilon},\Psi\right\rangle\right|\leq\left\|\nabla\left(\Phi-\Phi_{\varepsilon}\right)\right\|_{2}\left\|\nabla\Psi\right\|_{2}.$
Knowing that
$\begin{split}&\left|\int_{\Sigma}\left\langle\Lambda_{\varepsilon}\left(\Phi-\int_{\Sigma}\Phi
e^{2u_{\varepsilon}}-\Phi_{\varepsilon}\right),\Psi\right\rangle
e^{2u_{\varepsilon}}\right|\\\
=&\left|\int_{\Sigma}\left\langle\Lambda_{\varepsilon}\left(\Phi-\int_{\Sigma}\Phi
e^{2u_{\varepsilon}}-\Phi_{\varepsilon}\right),\left(\Psi-\int_{\Sigma}\Psi
e^{2u_{\varepsilon}}\right)\right\rangle
e^{2u_{\varepsilon}}\right|\leq\frac{1}{\lambda_{1}^{\varepsilon}}\left\|\nabla\left(\Phi-\Phi_{\varepsilon}\right)\right\|_{2}\left\|\nabla\Psi\right\|_{2}\end{split}$
we obtain that
$\begin{split}\left|\int_{\Sigma}e^{2u}\left\langle\Lambda\Phi,\Psi\right\rangle-\int_{\Sigma}e^{2u_{\varepsilon}}\left\langle\Lambda_{\varepsilon}\left(\Phi-\int_{\Sigma}\Phi
e^{2u_{\varepsilon}}\right),\Psi\right\rangle\right|\leq&C\left\|\nabla\left(\Phi-\Phi_{\varepsilon}\right)\right\|_{2}\left\|\nabla\Psi\right\|_{2}\end{split}$
Since $\int_{\Sigma}\Phi e^{2u}=0$, it is clear that
$\int_{\Sigma}\Phi
e^{2u_{\varepsilon}}=\int_{\Sigma}\Phi\left(e^{2u_{\varepsilon}}-e^{2u}\right)\to
0$
as $\varepsilon\to 0$ by weak convergence of $e^{2u_{\varepsilon}}dA_{g}$ to
$e^{2u}dA_{g}$. We choose $\varepsilon$ small enough so that
$\left|\int_{\Sigma}\Phi e^{2u_{\varepsilon}}\right|_{\Lambda}\leq\frac{1}{4}$
and $\left|\Lambda_{\varepsilon}-\Lambda\right|\leq\frac{1}{4}$. Now, setting
$\Psi=\frac{\Lambda_{\varepsilon}\left(\Phi-\int_{\Sigma}\Phi
e^{2u_{\varepsilon}}\right)}{\left|\Lambda_{\varepsilon}\left(\Phi-\int_{\Sigma}\Phi
e^{2u_{\varepsilon}}\right)\right|^{2}}f$, for some function $f\in H^{1}$, we
obtain
$\begin{split}\left|\int_{\Sigma}e^{2u}\left\langle\Lambda\Phi,\frac{\Lambda_{\varepsilon}\left(\Phi-\int_{\Sigma}\Phi
e^{2u_{\varepsilon}}\right)}{\left|\Lambda_{\varepsilon}\left(\Phi-\int_{\Sigma}\Phi
e^{2u_{\varepsilon}}\right)\right|^{2}}\right\rangle
f-\int_{\Sigma}fe^{2u_{\varepsilon}}\right|\leq&C\left\|\nabla\left(\Phi-\Phi_{\varepsilon}\right)\right\|_{2}\left\|\nabla\Psi\right\|_{2}\end{split}$
so that
$\begin{split}\left|\int_{\Sigma}f\left(e^{2u}-e^{2u_{\varepsilon}}\right)\right|\leq&C\left\|\nabla\left(\Phi-\Phi_{\varepsilon}\right)\right\|_{2}\left\|\nabla\Psi\right\|_{2}\\\
&+\left|\int_{\Sigma}\frac{\left|\nabla\Phi\right|_{\Lambda}^{2}}{\left|\Lambda\Phi\right|^{2}}\frac{1}{\left|\Lambda_{\varepsilon}\left(\Phi-\int_{\Sigma}\Phi
e^{2u_{\varepsilon}}\right)\right|}\left(\left|\Lambda_{\varepsilon}\int_{\Sigma}\Phi
e^{2u_{\varepsilon}}\right|+\left|(\Lambda_{\varepsilon}-\Lambda)\Phi\right|\right)f\right|\\\
\leq&C\left\|\nabla\left(\Phi-\Phi_{\varepsilon}\right)\right\|_{2}\left\|f\right\|_{H^{1}}+C^{\prime}\left(\left|\int_{\Sigma}\Phi
e^{2u_{\varepsilon}}\right|+\left|\Lambda-\Lambda_{\varepsilon}\right|\right)\left\|f\right\|_{1}\end{split}$
∎
#### 2.4.3. Convergence of spectral indices
###### Claim 2.10.
Up to a subsequence, we can assume that $k_{i}^{\varepsilon}=k_{i}$ is
constant and we have the following upper semi-continuity of eigenvalues
$\lim_{\varepsilon\to
0}\lambda_{k_{i}}^{\varepsilon}\leq\lambda_{k_{i}}\left((\Sigma,e^{2u_{0}}g)\sqcup\bigsqcup_{j=1}^{l}(\mathbb{S}^{2},e^{2u_{j}}h)\right).$
###### Proof.
We use $\theta_{0},\theta_{1},\cdots,\theta_{k_{m}}$, an orthonormal family of
the first $(k_{m}+1)$ eigenfunctions for
$(\Sigma,\tilde{\mu}_{0})\sqcup\bigsqcup_{j=1}^{l}(\mathbb{R}^{2},\tilde{\mu}_{j})$
as $k_{m}+1$ test functions for the variational characterization of
$\lambda_{1}(\Sigma,e^{2u_{\varepsilon}}g),\cdots,\lambda_{k_{m}}\left(\Sigma,e^{2u_{\varepsilon}}g\right)$.
For that, we fix $\rho>0$ and we let
$\eta_{0}\in\mathcal{C}^{\infty}_{c}\left(\Sigma(\rho)\right)\text{ with
}\eta_{0}=1\text{ on }\Sigma\left(\sqrt{\rho}\right)\text{ and
}\int_{\Sigma}\left|\nabla\eta_{0}\right|^{2}_{g}dA_{g}\leq\frac{C}{\ln\frac{1}{\rho}},$
and for $1\leq j\leq l$
$\eta_{j}\in\mathcal{C}^{\infty}_{c}\left(\Omega_{j}(\rho)\right)\text{ with
}\eta_{j}=1\text{ on }\Omega_{j}(\sqrt{\rho})\text{ and
}\int_{\mathbb{R}^{2}}\left|\nabla\eta_{j}\right|^{2}_{g}dA_{g}\leq\frac{C}{\ln\frac{1}{\rho}},$
where
$\Sigma(\rho)=\Sigma\setminus\bigcup_{i=1}^{L_{0}}\mathbb{D}_{\rho}(q_{i}^{0})\text{
and
}\Omega_{j}(\rho)=\mathbb{D}_{\frac{1}{\rho}}\setminus\bigcup_{i=1}^{L_{j}}\mathbb{D}_{\rho}(q_{i}^{j})$
where $\\{q_{1}^{j},\cdots,q_{L_{j}}^{j}\\}$ is the set of atoms of $\mu_{j}$,
and we set
$\varphi_{a}^{\varepsilon}(z)=\eta_{0}(z)\theta_{a}(z)+\eta_{1}\left(\frac{z-q_{1}^{\varepsilon}}{\alpha_{1}^{\varepsilon}}\right)\theta_{a}\left(\frac{z-q_{1}^{\varepsilon}}{\alpha_{1}^{\varepsilon}}\right)+\cdots+\eta_{l}\left(\frac{z-q_{l}^{\varepsilon}}{\alpha_{l}^{\varepsilon}}\right)\theta_{a}\left(\frac{z-q_{l}^{\varepsilon}}{\alpha_{l}^{\varepsilon}}\right)$
for $0\leq a\leq k_{m}$ where all the involved functions in the sum have
disjoint support for $\varepsilon$ small enough. We obtain
$\begin{split}\lambda_{a}(\Sigma,e^{2u_{\varepsilon}}g)\leq&\max_{\varphi\in\left\langle\varphi_{0}^{\varepsilon},\cdots,\varphi_{a}^{\varepsilon}\right\rangle}\frac{\int_{\Sigma}\left|\nabla\varphi\right|^{2}_{g}dA_{g}}{\int_{\Sigma}\varphi^{2}e^{2u_{\varepsilon}}dA_{g}}\\\
\leq&\max_{\theta\in\left\langle\theta_{0},\cdots,\theta_{a}\right\rangle}\frac{\int_{\Sigma}\left|\nabla\theta\right|^{2}_{g}dA_{g}+\sum_{i=1}^{L}\int_{\left(\mathbb{R}^{2}\right)_{i}}\left|\nabla\theta\right|^{2}}{\int_{\Sigma(\sqrt{\rho})}\theta^{2}d\tilde{\mu}_{0}+\sum_{i=1}^{L}\int_{\Omega_{i}(\sqrt{\rho})}\theta^{2}d\tilde{\mu}_{i}}\frac{1}{1-o(1)}+\frac{\tilde{C}}{\sqrt{\ln\left(\frac{1}{\rho}\right)}}\\\
\leq&\lambda_{a}\left((\Sigma,e^{2u}g)\sqcup\bigsqcup_{j=1}^{l}(\mathbb{S}^{2},e^{2u_{j}}h)\right)\frac{1}{1-c\rho}\frac{1}{1-o(1)}+\frac{\tilde{C}}{\sqrt{\ln\left(\frac{1}{\rho}\right)}}\end{split}$
as $\varepsilon\to 0$. Letting $\varepsilon\to 0$ and then $\rho\to 0$ gives
the expected inequality. ∎
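For the reader's convenience, we recall the standard variational (min-max) characterization of Laplace eigenvalues in a conformal class that underlies the test-function argument above; this is a classical fact, stated here in the notation of the present section:
$\lambda_{k}\left(\Sigma,e^{2u}g\right)=\min_{V\subset H^{1}(\Sigma),\ \dim V=k+1}\ \max_{\varphi\in V\setminus\\{0\\}}\frac{\int_{\Sigma}\left|\nabla\varphi\right|_{g}^{2}dA_{g}}{\int_{\Sigma}\varphi^{2}e^{2u}dA_{g}}\hskip
2.84544pt.$
Testing any fixed $(k+1)$-dimensional subspace therefore yields an upper bound on $\lambda_{k}$, which is how the functions $\varphi_{a}^{\varepsilon}$ are used in the proof above.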
###### Claim 2.11.
We assume that the Palais-Smale sequence is a minimizing sequence for
$e^{2u}\mapsto
E(e^{2u}):=F(\bar{\lambda}_{l_{1}}(\Sigma,e^{2u}g),\cdots,\bar{\lambda}_{l_{p}}(\Sigma,e^{2u}g))$,
where $l_{j}=k_{I_{j}}=\cdots=k_{I_{j+1}-1}$ and
$1=I_{1}<\cdots<I_{p}<I_{p+1}=m+1$. Then, we have that
$\lim_{\varepsilon\to
0}\lambda_{k_{i}}^{\varepsilon}=\lambda_{k_{i}}\left((\Sigma,e^{2u}g)\sqcup\bigsqcup_{j=1}^{l}(\mathbb{S}^{2},e^{2u_{j}}h)\right).$
###### Proof.
From Claim 2.10, we have
$\lim_{\varepsilon\to 0}\lambda_{k_{i}}^{\varepsilon}=\lim_{\varepsilon\to
0}\lambda_{k_{i}}(\Sigma,e^{2u_{\varepsilon}}g)\leq\lambda_{k_{i}}\left((\Sigma,e^{2u}g)\sqcup\bigsqcup_{j=1}^{l}(\mathbb{S}^{2},e^{2u_{j}}h)\right).$
Now, if the inequality is strict for one of the $p$ inequalities corresponding
to $i=l_{1},\cdots,l_{p}$, this would mean that the sequence is not a
minimizing sequence. Indeed, the infimum of the functional $E$ over the
conformal metrics to $(\Sigma,[g])$ is smaller than the infimum of $E$ over
$(\tilde{\Sigma},[\tilde{g}])=(\Sigma,[g])\sqcup\bigsqcup_{j=1}^{l}(\mathbb{S}^{2},[h])$.
Let’s see how to prove that. It suffices to take a metric $\tilde{g}_{\delta}$
on
$(\tilde{\Sigma},[\tilde{g}])=(\Sigma,[g])\sqcup\bigsqcup_{j=1}^{l}(\mathbb{S}^{2},[h])$
such that $E(\tilde{g}_{\delta})\leq\inf
E_{(\tilde{\Sigma},[\tilde{g}])}+\delta$. We denote
$\tilde{g}_{\delta}=e^{2u_{\delta}}g$ in $\Sigma$ and
$e^{2u_{i,\delta}}dx=\pi^{\star}\tilde{g}_{\delta}$ in
$\left(\mathbb{R}^{2}\cup\\{\infty\\}\right)_{i}$ where $\pi$ is a
stereographic projection for $i=1,\cdots,l$. Taking disjoint points
$p_{1},\cdots,p_{l}\in\Sigma$ and a sequence of scale $\alpha\to 0$ we set
$g_{\delta,\alpha}(z)=\left(e^{2u_{\delta}(z)}+\sum_{i=1}^{l}\frac{1}{\alpha^{2}}\eta\left(\frac{z-p_{i}}{\alpha}\right)e^{2u_{i,\delta}\left(\frac{z-p_{i}}{\alpha}\right)}\right)g(z)$
where
$\eta\in\mathcal{C}^{\infty}_{c}\left(\mathbb{D}_{\frac{1}{\delta}}\setminus\mathbb{D}_{\delta}\right)$
is a cut-off function with $\eta=1$ on
$\mathbb{D}_{\frac{1}{\sqrt{\delta}}}\setminus\mathbb{D}_{\sqrt{\delta}}$ and
$\int_{\Sigma}\left|\nabla\eta\right|^{2}\leq\frac{C}{\ln\left(\frac{1}{\delta}\right)}$.
We obtain by asymptotic computations left to the reader (see [CES03] for an
analogous asymptotic computation)
$\inf E_{\Sigma,[g]}\leq E(g_{\delta,\alpha})\to
E_{\tilde{\Sigma},[\tilde{g}]}\left(\tilde{g}_{\delta}\right)+A(\delta)\text{
as }\alpha\to 0$
where $A(\delta)\to 0$ as $\delta\to 0$. Letting then $\delta\to 0$ gives the
expected result. ∎
###### Claim 2.12.
We assume that the Palais-Smale sequence satisfies that
$\lambda_{1}(\Sigma,e^{2u_{\varepsilon}}g)$ is uniformly lower bounded by a
positive constant, that it is associated to a functional $e^{2u}\mapsto
E(e^{2u}):=F(\bar{\lambda}_{1}(\Sigma,e^{2u}g),\cdots,\bar{\lambda}_{m}(\Sigma,e^{2u}g))$
and that the sequences of nonnegative numbers
$\left|\partial_{i}F(\lambda_{1}^{\varepsilon},\cdots,\lambda_{m}^{\varepsilon})\right|$
are uniformly lower bounded by a positive constant. Then, there is no bubbling
and we have that for any $1\leq i\leq m$,
$\lim_{\varepsilon\to
0}\lambda_{i}^{\varepsilon}=\lambda_{i}\left(\Sigma,e^{2u_{0}}g\right).$
###### Proof.
From Claim 2.9 we know that there is no bubbling and that
$e^{2u_{\varepsilon}}$ converges to $e^{2u_{0}}$ in $H^{-1}$. We extract from
the map $\Phi_{\varepsilon}:\Sigma\to\mathcal{E}_{\Lambda_{\varepsilon}}$ in
the Palais-Smale sequence, an independent family
$\left(\phi_{0}^{\varepsilon},\cdots,\phi_{m}^{\varepsilon}\right)$ of
eigenfunctions associated to the eigenvalues
$\left(0,\lambda_{1}^{\varepsilon},\cdots,\lambda_{m}^{\varepsilon}\right)$
with respect to the metric $e^{2u_{\varepsilon}}$. We use them as test
functions for $\lambda_{k}(\Sigma,e^{2u_{0}}g)$ (with $0\leq k\leq m$) because
they converge in $H^{1}$. Since we know that
$\frac{\phi_{k}^{\varepsilon}}{\omega_{\varepsilon}}$ are sequences of bounded
functions in $L^{\infty}$, and that
$\Phi_{\varepsilon}-\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}$ converges
to $0$ in $H^{1}$, we have for any $k,l\geq 1$,
$\left(\frac{\phi_{k}^{\varepsilon}}{\omega_{\varepsilon}}\frac{\phi_{l}^{\varepsilon}}{\omega_{\varepsilon}}-\phi_{k}^{\varepsilon}\phi_{l}^{\varepsilon}\right)e^{2u_{\varepsilon}}\to
0\text{ and
}\left(\frac{\phi_{k}^{\varepsilon}}{\omega_{\varepsilon}}\frac{\phi_{l}^{\varepsilon}}{\omega_{\varepsilon}}-\phi_{k}^{\varepsilon}\phi_{l}^{\varepsilon}\right)e^{2u_{0}}\to
0\text{ in }L^{1}(g)\text{ as }\varepsilon\to 0,$
and
$\frac{\phi_{k}^{\varepsilon}}{\omega_{\varepsilon}}\frac{\phi_{l}^{\varepsilon}}{\omega_{\varepsilon}}\left(e^{2u_{\varepsilon}}-e^{2u_{0}}\right)\to
0\text{ and
}\nabla\frac{\phi_{k}^{\varepsilon}}{\omega_{\varepsilon}}\nabla\frac{\phi_{l}^{\varepsilon}}{\omega_{\varepsilon}}-\nabla\phi_{k}^{\varepsilon}\nabla\phi_{l}^{\varepsilon}\to
0\text{ in }L^{1}(g)\text{ as }\varepsilon\to 0.$
Then,
$\lambda_{k}\left(\Sigma,e^{2u_{0}}g\right)\leq\max_{\varphi\in\left\langle
1,\phi_{1}^{\varepsilon},\cdots,\phi_{k}^{\varepsilon}\right\rangle}\frac{\int_{\Sigma}\left|\nabla\varphi\right|_{g}^{2}dA_{g}}{\int_{\Sigma}\varphi^{2}e^{2u_{0}}dA_{g}}=\max_{\varphi\in\left\langle
1,\phi_{1}^{\varepsilon},\cdots,\phi_{k}^{\varepsilon}\right\rangle}\frac{\int_{\Sigma}\left|\nabla\varphi\right|_{g}^{2}dA_{g}}{\int_{\Sigma}\varphi^{2}e^{2u_{\varepsilon}}dA_{g}}+o(1)=\lambda_{k}^{\varepsilon}+o(1)$
and with Claim 2.10 we have the expected result. ∎
###### Remark 2.1.
It would be interesting to get more general conditions such that the spectral
indices of eigenvalues associated to a Palais-Smale sequence converge to the
spectral indices of the limit, in particular when there is bubbling. It is
also linked to the question of strong $H^{1}$ convergence of all the
eigenfunctions associated to the Palais-Smale sequence.
## 3. Convergence of Palais-Smale sequences for finite combinations of
Steklov eigenvalues in dimension 2
In the current section, we prove the new ideas needed to establish the
convergence of Palais-Smale sequences associated to Steklov eigenvalues. In
order to simplify the presentation of the proof, we focus on the case without
bubbling, since the bubbling phenomenon is the same as for Laplace
eigenvalues. More generally, we refer to the analogous case of Laplace
eigenvalues in the previous section in order to complete all the arguments of
the following proposition that are skipped.
###### Proposition 3.1.
Let $e^{u_{\varepsilon}}$, $\sigma_{\varepsilon}$,
$\Phi_{\varepsilon}:\Sigma\to\mathbb{R}^{m}$ be smooth sequences
satisfying the Palais-Smale assumption $(PS)$ as $\varepsilon\to 0$, that is
* •
$\sigma_{\varepsilon}=diag(\sigma_{k_{1}^{\varepsilon}}^{\varepsilon},\cdots,\sigma_{k_{m}^{\varepsilon}}^{\varepsilon})$
where the diagonal terms are eigenvalues associated to $e^{u_{\varepsilon}}$
with uniformly bounded spectral indices $k_{i}^{\varepsilon}$, and
$\sigma_{\varepsilon}\to\sigma$ as $\varepsilon\to 0$.
* •
$\Delta_{g}\Phi_{\varepsilon}=0$ in $\Sigma$ and
$\partial_{\nu}\Phi_{\varepsilon}=e^{u_{\varepsilon}}\sigma_{\varepsilon}\Phi_{\varepsilon}$
on $\partial\Sigma$
* •
$\int_{\partial\Sigma}e^{u_{\varepsilon}}dL_{g}=\int_{\partial\Sigma}\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}e^{u_{\varepsilon}}dL_{g}=\int_{\Sigma}\left|\nabla\Phi_{\varepsilon}\right|_{g}^{2}dA_{g}=1$
* •
$\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}\geq 1-\delta_{\varepsilon}$
uniformly on $\partial\Sigma$, where $\delta_{\varepsilon}\to 0$ as
$\varepsilon\to 0$.
Then, up to a subsequence, $\Phi_{\varepsilon}$ bubble tree converges in
$W^{1,2}$ to $\Phi_{0}:\Sigma\to co\left(\mathcal{E}_{\sigma}\right)$, and
$\Phi_{j}:\mathbb{D}\to co\left(\mathcal{E}_{\sigma}\right)$ for
$j=1,\cdots,l$ ($l\geq 0$) with an energy identity:
$1=\int_{\Sigma}\left|\nabla\Phi_{0}\right|_{g}^{2}dA_{g}+\sum_{j=1}^{l}\int_{\mathbb{D}}\left|\nabla\Phi_{j}\right|_{h}^{2}dA_{h}$
Moreover, $\Phi_{j}$ are smooth harmonic maps with free boundary for
$j=0,\cdots,l$ and their $i$-th coordinates are eigenfunctions associated to
$\sigma_{k_{i}}$ on the surface $\Sigma\cup\bigcup_{1\leq i\leq l}\mathbb{D}$
with respect to the metrics $e^{2u}g$ on $\Sigma$ such that
$e^{u}=\partial_{\nu}\Phi_{0}.\Phi_{0}$ on $\partial\Sigma$ and
$e^{2v_{j}}\xi$ on $\mathbb{D}$ such that
$e^{v_{j}}=\partial_{\nu}\Phi_{j}.\Phi_{j}$ on $\mathbb{S}^{1}$. Moreover
$e^{u_{\varepsilon}}dL_{g}$ bubble tree converges in $W^{-\frac{1}{2},2}$ to
$\partial_{\nu}\Phi_{0}.\Phi_{0}dL_{g}$ in $\Sigma$ and
$\partial_{\nu}\Phi_{j}.\Phi_{j}dL_{h}$ on $\mathbb{S}^{1}$.
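Throughout this section, we use the min-max characterization of Steklov eigenvalues analogous to the Laplace case, in which the Dirichlet energy is taken in the interior and the weighted $L^{2}$-norm on the boundary; again a classical fact, recalled in the notation above:
$\sigma_{k}\left(\Sigma,e^{u}\right)=\min_{V\subset H^{1}(\Sigma),\ \dim V=k+1}\ \max_{\varphi\in V\setminus\\{0\\}}\frac{\int_{\Sigma}\left|\nabla\varphi\right|_{g}^{2}dA_{g}}{\int_{\partial\Sigma}\varphi^{2}e^{u}dL_{g}}\hskip
2.84544pt.$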
### 3.1. Some convergence of $\omega_{\varepsilon}$ to $1$ and first
replacement of $\Phi_{\varepsilon}$
We let $\omega_{\varepsilon}$ be the harmonic extension on $\Sigma$ of
$\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}$. We set
$\widetilde{\Phi_{\varepsilon}}=\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}.$
We integrate $\Delta_{g}\Phi_{\varepsilon}=0$ in $\Sigma$ and
$\partial_{\nu}\Phi_{\varepsilon}=\sigma_{\varepsilon}e^{u_{\varepsilon}}\Phi_{\varepsilon}$
on $\partial\Sigma$ against $\sigma_{\varepsilon}\Phi_{\varepsilon}$ and
$\frac{\sigma_{\varepsilon}\Phi_{\varepsilon}}{\omega_{\varepsilon}}$. We
obtain
(3.1)
$\int_{\partial\Sigma}\left|\sigma_{\varepsilon}\Phi_{\varepsilon}\right|^{2}e^{u_{\varepsilon}}dL_{g}=\int_{\Sigma}\left|\nabla\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}dA_{g}$
and
(3.2)
$\int_{\partial\Sigma}\frac{\left|\sigma_{\varepsilon}\Phi_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}}e^{u_{\varepsilon}}=\int_{\Sigma}\frac{\sigma_{\varepsilon}\Phi_{\varepsilon}}{\omega_{\varepsilon}}\Delta_{g}\Phi_{\varepsilon}=\int_{\Sigma}\left\langle\nabla\widetilde{\Phi_{\varepsilon}},\nabla\Phi_{\varepsilon}\right\rangle_{\sigma_{\varepsilon}}$
We have that
(3.3)
$\left\langle\nabla\widetilde{\Phi_{\varepsilon}},\nabla\Phi_{\varepsilon}\right\rangle_{\sigma_{\varepsilon}}=\frac{\left\langle\left(\nabla\Phi_{\varepsilon}-\widetilde{\Phi_{\varepsilon}}\nabla\omega_{\varepsilon}\right),\nabla\Phi_{\varepsilon}\right\rangle_{\sigma_{\varepsilon}}}{\omega_{\varepsilon}}=\frac{\left|\nabla\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{\omega_{\varepsilon}}+\left\langle\nabla\frac{1}{\omega_{\varepsilon}},\sigma_{\varepsilon}\Phi_{\varepsilon}.\nabla\Phi_{\varepsilon}\right\rangle\hskip
2.84544pt,$
noticing that
$\sigma_{\varepsilon}\Phi_{\varepsilon}.\nabla\Phi_{\varepsilon}=\left\langle\Phi_{\varepsilon},\nabla\Phi_{\varepsilon}\right\rangle_{\sigma_{\varepsilon}}=\nabla\left(\frac{\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{2}\right)$,
we compute the right-hand side of (3.2), by considering first the
last term in (3.3):
$\begin{split}\int_{\Sigma}\left\langle\nabla\frac{1}{\omega_{\varepsilon}},\sigma_{\varepsilon}\Phi_{\varepsilon}.\nabla\Phi_{\varepsilon}\right\rangle
dA_{g}=&\int_{\Sigma}\left\langle\nabla\frac{1}{\omega_{\varepsilon}},\nabla\frac{\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{2}\right\rangle
dA_{g}\\\
=&\int_{\Sigma}\frac{\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{2}\Delta_{g}\left(\frac{1}{\omega_{\varepsilon}}\right)dA_{g}+\int_{\partial\Sigma}\frac{\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{2}\partial_{\nu}\left(\frac{1}{\omega_{\varepsilon}}\right)dL_{g}\\\
=&\int_{\Sigma}\frac{\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{\omega_{\varepsilon}^{3}}\left(\frac{\omega_{\varepsilon}}{2}\Delta_{g}\omega_{\varepsilon}-\left|\nabla\omega_{\varepsilon}\right|_{g}^{2}\right)dA_{g}-\int_{\partial\Sigma}\frac{\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{2\omega_{\varepsilon}^{2}}\partial_{\nu}\omega_{\varepsilon}dL_{g}\\\
=&-\int_{\Sigma}\frac{\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{\omega_{\varepsilon}^{3}}\left|\nabla\omega_{\varepsilon}\right|_{g}^{2}dA_{g}\hskip
2.84544pt,\end{split}$
where we noticed in the last step that $\Delta_{g}\omega_{\varepsilon}=0$
and
$\omega_{\varepsilon}=\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}$
on $\partial\Sigma$. Therefore, using (3.2) and (3.3),
$\begin{split}\int_{\Sigma}&\left|\nabla\omega_{\varepsilon}\right|^{2}\frac{\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{\omega_{\varepsilon}^{3}}dA_{g}=\int_{\Sigma}\frac{\left|\nabla\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{\omega_{\varepsilon}}dA_{g}-\int_{\partial\Sigma}\frac{\left|\sigma_{\varepsilon}\Phi_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}}e^{u_{\varepsilon}}dL_{g}\hskip
2.84544pt.\end{split}$
Since $\omega_{\varepsilon}^{2}\geq 1-\delta_{\varepsilon}$ by the maximum
principle, we also have by (3.1) that
$\int_{\Sigma}\frac{\left|\nabla\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{\omega_{\varepsilon}}dA_{g}\leq\frac{1}{\sqrt{1-\delta_{\varepsilon}}}\int_{\Sigma}\left|\nabla\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}dA_{g}=\frac{1}{\sqrt{1-\delta_{\varepsilon}}}\int_{\partial\Sigma}\left|\sigma_{\varepsilon}\Phi_{\varepsilon}\right|^{2}e^{u_{\varepsilon}}dL_{g}\hskip
2.84544pt,$
so that since
$\omega_{\varepsilon}^{2}-\omega_{\varepsilon}=\omega_{\varepsilon}(\omega_{\varepsilon}-1)\geq
1-\delta_{\varepsilon}-\sqrt{1-\delta_{\varepsilon}}$
$\begin{split}\int_{\Sigma}\left|\nabla\omega_{\varepsilon}\right|^{2}\frac{\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}}{\omega_{\varepsilon}^{3}}dA_{g}\leq&\int_{\partial\Sigma}\left(\omega_{\varepsilon}-1\right)\frac{\left|\sigma_{\varepsilon}\Phi_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}}e^{u_{\varepsilon}}dL_{g}+\left(\frac{1}{\sqrt{1-\delta_{\varepsilon}}}-1\right)\int_{\partial\Sigma}\left|\sigma_{\varepsilon}\Phi_{\varepsilon}\right|^{2}e^{u_{\varepsilon}}dL_{g}\\\
\leq&\max_{\Sigma}\frac{\left|\sigma_{\varepsilon}\Phi_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}^{2}}\left(\int_{\partial\Sigma}\left(\omega_{\varepsilon}^{2}-\omega_{\varepsilon}\right)e^{u_{\varepsilon}}dL_{g}+\sqrt{1-\delta_{\varepsilon}}+\delta_{\varepsilon}-1\right)\\\
&+\max_{j}\sigma_{k_{j}^{\varepsilon}}^{\varepsilon}\left(\frac{1}{\sqrt{1-\delta_{\varepsilon}}}-1\right)\\\
\leq&\max_{j}\sigma_{k_{j}^{\varepsilon}}^{\varepsilon}\left(\delta_{\varepsilon}+\frac{1}{\sqrt{1-\delta_{\varepsilon}}}-1\right)=O\left(\delta_{\varepsilon}\right)\end{split}$
as $\varepsilon\to 0$, where we crucially used the global equality
$\int_{\partial\Sigma}e^{u_{\varepsilon}}\omega_{\varepsilon}^{2}=\int_{\partial\Sigma}e^{u_{\varepsilon}}=1,$
which holds because
$\omega_{\varepsilon}=\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}$
on $\partial\Sigma$ and
$\int_{\partial\Sigma}\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}e^{u_{\varepsilon}}dL_{g}=1$.
We also have that
$\nabla\left(\Phi_{\varepsilon}-\widetilde{\Phi_{\varepsilon}}\right)=\left(1-\frac{1}{\omega_{\varepsilon}}\right)\nabla\Phi_{\varepsilon}+\frac{\nabla\omega_{\varepsilon}}{\omega_{\varepsilon}^{2}}\Phi_{\varepsilon}$
and then
$\left|\nabla\left(\Phi_{\varepsilon}-\widetilde{\Phi_{\varepsilon}}\right)\right|_{\sigma_{\varepsilon}}^{2}=\left|\nabla\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}\left(1-\frac{1}{\omega_{\varepsilon}}\right)^{2}+\frac{\left|\nabla\omega_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}^{2}}\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}+\frac{\left\langle\nabla\omega_{\varepsilon},\nabla\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}\right\rangle}{\omega_{\varepsilon}^{2}}\left(1-\frac{1}{\omega_{\varepsilon}}\right)\hskip
2.84544pt.$
We have that
$\begin{split}\int_{\Sigma}\left|\nabla\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}&\left(1-\frac{1}{\omega_{\varepsilon}}\right)^{2}=-\int_{\Sigma}div\left(\nabla\Phi_{\varepsilon}\left(1-\frac{1}{\omega_{\varepsilon}}\right)^{2}\right)\sigma_{\varepsilon}\Phi_{\varepsilon}+\int_{\partial\Sigma}\left|\sigma_{\varepsilon}\Phi_{\varepsilon}\right|^{2}\left(1-\frac{1}{\omega_{\varepsilon}}\right)^{2}e^{u_{\varepsilon}}\\\
=&\int_{\Sigma}\left\langle\nabla\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2},\nabla\frac{1}{\omega_{\varepsilon}}\right\rangle\left(1-\frac{1}{\omega_{\varepsilon}}\right)+\int_{\partial\Sigma}\left|\sigma_{\varepsilon}\Phi_{\varepsilon}\right|^{2}\left(1-\frac{1}{\omega_{\varepsilon}}\right)^{2}e^{u_{\varepsilon}}\\\
\end{split}$
so that
$\begin{split}\int_{\Sigma}\left|\nabla\left(\Phi_{\varepsilon}-\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right)\right|_{\sigma_{\varepsilon}}^{2}=&\int_{\partial\Sigma}\left|\sigma_{\varepsilon}\Phi_{\varepsilon}\right|^{2}\left(1-\frac{1}{\omega_{\varepsilon}}\right)^{2}e^{u_{\varepsilon}}+\int_{\Sigma}\frac{\left|\nabla\omega_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}^{2}}\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}\\\
\leq&\left(\int_{\partial\Sigma}\frac{\left|\sigma_{\varepsilon}\Phi_{\varepsilon}\right|^{4}}{\omega_{\varepsilon}^{2}}e^{u_{\varepsilon}}\right)^{\frac{1}{2}}\left(\int_{\partial\Sigma}e^{u_{\varepsilon}}\left(\omega_{\varepsilon}-\frac{1}{\omega_{\varepsilon}}\right)^{2}\right)^{\frac{1}{2}}+O\left(\delta_{\varepsilon}\right)\\\
\leq&\max_{j}\sigma_{k_{j}^{\varepsilon}}^{\varepsilon}\left(\int_{\partial\Sigma}e^{u_{\varepsilon}}\left(\frac{1}{\omega_{\varepsilon}^{2}}-1\right)\right)^{\frac{1}{2}}+O\left(\delta_{\varepsilon}\right)\end{split}$
as $\varepsilon\to 0$, where we crucially used again
$\int_{\partial\Sigma}e^{u_{\varepsilon}}\omega_{\varepsilon}^{2}=\int_{\partial\Sigma}e^{u_{\varepsilon}}=1$.
We then obtain that
(3.4)
$\int_{\Sigma}\left|\nabla\left(\Phi_{\varepsilon}-\widetilde{\Phi_{\varepsilon}}\right)\right|_{\sigma_{\varepsilon}}^{2}=O\left(\delta_{\varepsilon}^{\frac{1}{2}}\right)\text{
as }\varepsilon\to 0.$
### 3.2. $H^{1}$-convergence of eigenfunctions by harmonic replacement
(simple case)
Let $p\in\partial\Sigma\setminus\\{p_{1},\cdots,p_{s}\\}$, where
$\\{p_{1},\cdots,p_{s}\\}$ are bad points (see Claim 2.3 in the Laplace case).
Referring to the Laplace case, we also have an analogous property to (2.5)
far from bad points. Let $r>0$ be such that for any $\varepsilon>0$,
$\int_{\mathbb{D}^{+}_{r}(p)}\left|\nabla\widetilde{\Phi_{\varepsilon}}\right|^{2}_{g}<\varepsilon_{0}.$
Then, let
$\Psi_{\varepsilon}:\mathbb{D}^{+}_{r}(p)\to\mathcal{E}_{\sigma_{\varepsilon}}$
be the harmonic replacement of $\widetilde{\Phi_{\varepsilon}}$ and we obtain
by [LP19], Theorem 1.2
(3.5)
$\int_{\mathbb{D}^{+}_{r}(p)}\left|\nabla\left(\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right)\right|^{2}_{g}\leq\frac{1}{2}\left(\int_{\mathbb{D}^{+}_{r}(p)}\left|\nabla\widetilde{\Phi_{\varepsilon}}\right|^{2}_{g}-\int_{\mathbb{D}^{+}_{r}(p)}\left|\nabla\Psi_{\varepsilon}\right|^{2}_{g}\right)\hskip
2.84544pt.$
Let’s prove that the right-hand term converges to $0$ as $\varepsilon\to 0$.
We test the function
$\widetilde{\Phi_{\varepsilon}}^{i}-\Psi_{\varepsilon}^{i}$ in the variational
characterization of
$\sigma_{\star}:=\sigma_{\star}\left(\mathbb{D}^{+}_{r}(p),e^{u_{\varepsilon}}\right)$:
$\sigma_{k_{i}^{\varepsilon}}\int_{I_{r}(p)}\left(\widetilde{\Phi_{\varepsilon}}^{i}-\Psi_{\varepsilon}^{i}\right)^{2}e^{u_{\varepsilon}}\leq\sigma_{\star}\int_{I_{r}(p)}\left(\widetilde{\Phi_{\varepsilon}}^{i}-\Psi_{\varepsilon}^{i}\right)^{2}e^{u_{\varepsilon}}\leq\int_{\mathbb{D}^{+}_{r}(p)}\left|\nabla\left(\widetilde{\Phi_{\varepsilon}}^{i}-\Psi_{\varepsilon}^{i}\right)\right|^{2}$
and we sum on $i$ to get
(3.6)
$\int_{I_{r}(p)}\left|\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}e^{u_{\varepsilon}}\leq\int_{\mathbb{D}^{+}_{r}(p)}\left|\nabla\widetilde{\Phi_{\varepsilon}}\right|^{2}+\int_{\mathbb{D}^{+}_{r}(p)}\left|\nabla\Psi_{\varepsilon}\right|^{2}-2\int_{\mathbb{D}^{+}_{r}(p)}\nabla\widetilde{\Phi_{\varepsilon}}\nabla\Psi_{\varepsilon}$
Now we test the equation $\Delta\Phi_{\varepsilon}=0$ and
$\partial_{\nu}\Phi_{\varepsilon}=\sigma_{\varepsilon}e^{u_{\varepsilon}}\Phi_{\varepsilon}$
against
$\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}^{2}}-\frac{\Psi_{\varepsilon}}{\omega_{\varepsilon}}$
and we get after an integration by parts knowing that
$\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}^{2}}-\frac{\Psi_{\varepsilon}}{\omega_{\varepsilon}}=0$
on $\partial\mathbb{D}^{+}_{r}(p)\setminus I_{r}(p)$
$\int_{\mathbb{D}^{+}_{r}(p)}\left(\frac{1}{\omega_{\varepsilon}}\nabla\Phi_{\varepsilon}\nabla\left(\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right)+\nabla\frac{1}{\omega_{\varepsilon}}\nabla\Phi_{\varepsilon}\left(\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right)\right)=\int_{I_{r}(p)}\left\langle\widetilde{\Phi_{\varepsilon}},\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right\rangle_{\sigma_{\varepsilon}}e^{u_{\varepsilon}}$
so that,
(3.7)
$\begin{split}\int_{\mathbb{D}_{r}^{+}(p)}\nabla\widetilde{\Phi_{\varepsilon}}\nabla\left(\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right)=&\int_{I_{r}(p)}\left\langle\widetilde{\Phi_{\varepsilon}},\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right\rangle_{\sigma_{\varepsilon}}e^{u_{\varepsilon}}-\int_{\mathbb{D}^{+}_{r}(p)}\nabla\frac{1}{\omega_{\varepsilon}}\nabla\Phi_{\varepsilon}\left(\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right)\\\
&+\int_{\mathbb{D}^{+}_{r}(p)}\Phi_{\varepsilon}\nabla\frac{1}{\omega_{\varepsilon}}\nabla\left(\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right)\end{split}$
Knowing that
$\left|\widetilde{\Phi_{\varepsilon}}\right|_{\sigma_{\varepsilon}}=\left|\Psi_{\varepsilon}\right|_{\sigma_{\varepsilon}}$
on $\partial\Sigma$, it is clear that
$2\left\langle\widetilde{\Phi_{\varepsilon}},\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right\rangle_{\sigma_{\varepsilon}}=\left|\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}$
and multiplying (3.7) by $2$, we obtain
(3.8)
$\begin{split}&2\int_{\mathbb{D}^{+}_{r}(p)}\left|\nabla\widetilde{\Phi_{\varepsilon}}\right|^{2}-2\int_{\mathbb{D}^{+}_{r}(p)}\nabla\widetilde{\Phi_{\varepsilon}}\nabla\Psi_{\varepsilon}\\\
&=\int_{I_{r}(p)}\left|\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}e^{u_{\varepsilon}}+2\int_{\mathbb{D}^{+}_{r}(p)}\frac{\nabla\omega_{\varepsilon}}{\omega_{\varepsilon}}\left(\frac{\nabla\Phi_{\varepsilon}}{\omega_{\varepsilon}}\left(\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right)-\widetilde{\Phi_{\varepsilon}}\nabla\left(\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right)\right)\\\
\end{split}$
Summing (3.6) and (3.8), we get
$\int_{\mathbb{D}_{r}^{+}(p)}\left|\nabla\frac{\Phi_{\varepsilon}}{\omega_{\varepsilon}}\right|^{2}_{g}-\int_{\mathbb{D}^{+}_{r}(p)}\left|\nabla\Psi_{\varepsilon}\right|^{2}_{g}\leq
2\int_{\mathbb{D}^{+}_{r}(p)}\frac{\nabla\omega_{\varepsilon}}{\omega_{\varepsilon}}\left(\frac{\nabla\Phi_{\varepsilon}}{\omega_{\varepsilon}}\left(\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right)-\widetilde{\Phi_{\varepsilon}}\nabla\left(\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right)\right)$
Reducing $\varepsilon_{0}$ if necessary, we assume that
$\left|\widetilde{\Phi_{\varepsilon}}\right|_{\sigma_{\varepsilon}}^{2}\geq\frac{1}{2}$
in $\mathbb{D}_{r}^{+}(p)$ and we obtain that
$\begin{split}2\left|\int_{\mathbb{D}^{+}_{r}(p)}\frac{\nabla\omega_{\varepsilon}}{\omega_{\varepsilon}}\left(\frac{\nabla\Phi_{\varepsilon}}{\omega_{\varepsilon}}\left(\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right)-\widetilde{\Phi_{\varepsilon}}\nabla\left(\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\right)\right)\right|&\leq
O\left(\frac{\left(\int_{\Sigma}\frac{\left|\nabla\omega_{\varepsilon}\right|^{2}}{\omega_{\varepsilon}^{4}}\left|\Phi_{\varepsilon}\right|_{\sigma_{\varepsilon}}^{2}\right)^{\frac{1}{2}}}{\sigma_{k_{1}^{\varepsilon}}}\right)\\\
&=O\left(\frac{\delta_{\varepsilon}^{\frac{1}{2}}}{\sigma_{k_{1}^{\varepsilon}}}\right)\end{split}$
By (3.5), we obtain that
$\widetilde{\Phi_{\varepsilon}}-\Psi_{\varepsilon}\to 0\text{ in
}H^{1}\left(\mathbb{D}_{r}^{+}(p)\right).$
Now since $\Psi_{\varepsilon}$ converges in
$\mathcal{C}^{k}\left(\mathbb{D}_{\frac{r}{2}}^{+}(p)\right)$ for any
$k\in\mathbb{N}$ to some free boundary harmonic map $\Psi$, and because the
property is true for all
$p\in\partial\Sigma\setminus\\{p_{1},\cdots,p_{s}\\}$, we have that
$\widetilde{\Phi_{\varepsilon}}$ converges to some free boundary harmonic map
$\Phi$ in $H^{1}_{loc}(\\{x\in\Sigma;d(x,\partial\Sigma)\leq
r_{0}\\}\setminus\\{p_{1},\cdots,p_{s}\\})$ for some $r_{0}>0$. Then by (3.4)
and since $\Phi_{\varepsilon}$ is a harmonic map into a Euclidean space,
$\Phi_{\varepsilon}$ converges to some free boundary harmonic map $\Phi$ in
$H^{1}_{loc}(\Sigma\setminus\\{p_{1},\cdots,p_{s}\\})$.
### 3.3. The limiting measure has a smooth density
Let $\zeta\in\mathcal{C}_{c}^{\infty}\left(\mathbb{D}_{r}^{+}(p)\right)$ where
$p\in\partial\Sigma\setminus\\{p_{1},\cdots,p_{s}\\}$ and
$\\{p_{1},\cdots,p_{s}\\}$ are bad points. We have that
$\displaystyle\int_{\partial\Sigma}\zeta\left(\sigma_{\varepsilon}\Phi_{\varepsilon}e^{u_{\varepsilon}}dL_{g}-\sigma\Phi
d\nu\right)$ $\displaystyle=$
$\displaystyle\int_{\partial\Sigma}\zeta\left(\sigma_{\varepsilon}\Phi_{\varepsilon}-\sigma\Phi\right)e^{u_{\varepsilon}}dL_{g}$
$\displaystyle\quad+\int_{\partial\Sigma}\zeta\sigma\Phi\left(e^{u_{\varepsilon}}dL_{g}-d\nu\right)\hskip
2.84544pt.$
Then, for the first term on the right-hand side, we have that
$\displaystyle\int_{\partial\Sigma}\zeta\left(\sigma_{\varepsilon}\Phi_{\varepsilon}-\sigma\Phi\right)e^{u_{\varepsilon}}dL_{g}$
$\displaystyle\leq$
$\displaystyle\left(\int_{I_{r}(p)}\zeta^{2}\left|\sigma_{\varepsilon}\Phi_{\varepsilon}-\sigma\Phi\right|^{2}e^{u_{\varepsilon}}dL_{g}\right)^{\frac{1}{2}}$
$\displaystyle\leq$
$\displaystyle\left(\frac{1}{\sigma_{\star}(\mathbb{D}^{+}_{r}(p),e^{u_{\varepsilon}})}\int_{\mathbb{D}^{+}_{r}(p)}\left|\nabla\left(\zeta\left|\sigma_{\varepsilon}\Phi_{\varepsilon}-\sigma\Phi\right|\right)\right|^{2}_{g}dA_{g}\right)^{\frac{1}{2}}$
$\displaystyle\leq$ $\displaystyle
C\left(\int_{\mathbb{D}^{+}_{r}(p)}\left|\nabla\left(\Phi_{\varepsilon}-\Phi\right)\right|^{2}_{g}dA_{g}\right)^{\frac{1}{2}}$
for some constant $C$ independent of $\varepsilon$. Passing to the limit
$\varepsilon\to 0$ in the weak formulation of the eigenvalue equation
$\Delta_{g}\Phi_{\varepsilon}=0$ in $\Sigma$ and
$\partial_{\nu}\Phi_{\varepsilon}=\sigma_{\varepsilon}\Phi_{\varepsilon}e^{u_{\varepsilon}}$
on $\partial\Sigma$, we get
$\Delta\Phi=0\text{ in }\Sigma\text{ and
}\partial_{\nu}\Phi\,dL_{g}=\sigma\Phi\,d\nu\text{ on }\partial\Sigma$
and since $\left|\Phi\right|_{\sigma}^{2}=1$ on $\partial\Sigma$, we obtain
that $\nu=\Phi.\partial_{\nu}\Phi\,dL_{g}$. Knowing that $\Phi$ is smooth (see
[JLZ19]), we obtain a smooth density for $\nu$.
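For completeness, the weak formulation invoked in the limiting argument above is the standard distributional restatement of the eigenvalue equation: for every $\zeta\in\mathcal{C}^{\infty}(\Sigma)$,
$\int_{\Sigma}\nabla\Phi_{\varepsilon}\nabla\zeta\,dA_{g}=\int_{\partial\Sigma}\zeta\,\sigma_{\varepsilon}\Phi_{\varepsilon}e^{u_{\varepsilon}}dL_{g}\hskip
2.84544pt,$
and the limit equation follows by combining the $H^{1}_{loc}$ convergence of $\Phi_{\varepsilon}$ with the weak convergence of the boundary measures $e^{u_{\varepsilon}}dL_{g}$ to $d\nu$.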
## References
* [CES03] B. Colbois, A. El Soufi, Extremal eigenvalues of the Laplacian in a conformal class of metrics: the ‘conformal spectrum’, Ann. Global Anal. Geom. 24 (2003), no.4, 337–349.
* [Cla13] F. Clarke, Functional analysis, calculus of variations and optimal control, Graduate Texts in Mathematics, 264, Springer, London, 2013, xiv+591 pp.
* [CM08] T.H. Colding, W.P. Minicozzi II, Width and finite extinction time of Ricci flow., Geom. Topol., 12, 2008, 5, 2537–2586.
* [ESI86] A. El Soufi, S. Ilias, Immersions minimales, première valeur propre du laplacien et volume conforme, Mathematische Annalen, 1986, 275, 257-267.
* [ESI03] A. El Soufi and S. Ilias, Extremal metrics for the first eigenvalue of the Laplacian in a conformal class, Proc. Amer. Math. Soc. 131, 2003, 1611-1618.
* [ESI08] A. El Soufi, S. Ilias, Laplacian eigenvalue functionals and metric deformations on compact manifolds, Journal of Geometry and Physics, 58, Issue 1, January 2008, 89-104.
* [Fab23] G. Faber, Beweis, dass unter allen homogenen Membranen von gleicher Fläche und gleicher Spannung die kreisförmige den tiefsten Grundton gibt, Sitz. ber. bayer. Akad. Wiss., 1923, 169–172.
* [FS13] A. Fraser, R. Schoen, Minimal surfaces and eigenvalue problems, Contemporary Mathematics, 599, 2013, 105–121
* [FS16] A. Fraser, R. Schoen, Sharp eigenvalue bounds and minimal surfaces in the ball, Invent. Math. 203, 2016, 823–890.
* [FN99] L. Friedlander, N. Nadirashvili, A differential invariant related to the first eigenvalue of the Laplacian, Internat. Math. Res. Notices, 1999, 17, 939–952,
* [GP22] M.J. Gursky, S. Pérez-Ayala, Variational properties of the second eigenvalue of the conformal Laplacian, J. Funct. Anal., 282, 2022, no.8, Paper No. 109371, 60
* [Has11] A. Hassannezhad, Conformal upper bounds for the eigenvalues of the Laplacian and Steklov problem, J. Funct. Anal., 261, 2011, no 12, 3419–3436
* [Hel96] F. Hélein, Harmonic maps, conservation laws and moving frames, Cambridge Tracts in Mathematics, 150, Second Edition, Cambridge University Press, 2002, xxvi+264.
* [Her70] J. Hersch, Quatre propriétés isopérimétriques de membranes sphériques homogènes, C.R. Acad. Sci. Paris Sér. A-B 270, 1970, A1645–A1648.
* [JLZ19] J. Jost, L. Liu and M. Zhu, The qualitative behavior at the free boundary for approximate harmonic maps from surfaces, Math. Ann., 374, 2019, 1-2, 133–177
* [Kar21] M. Karpukhin, Index of minimal spheres and isoperimetric eigenvalue inequalities, Invent. Math., 223, 2021, no. 1, 335–377
* [KNPP17] M.A. Karpukhin, N. Nadirashvili, A. Penskoi, I. Polterovich, An isoperimetric inequality for Laplace eigenvalues on the sphere. arXiv: 1706.05713. To appear in J. Differential Geom..
* [KNPP20] M. Karpukhin, N. Nadirashvili, A. Penskoi, I. Polterovich, Conformally maximal metrics for Laplace eigenvalues on surfaces, arXiv:2003.02871v2
* [KNPS21] M. Karpukhin, M. Nahon, I. Polterovich, D. Stern, Stability of isoperimetric inequalities for Laplace eigenvalues on surfaces, arXiv:2106.15043
* [KS20] M. Karpukhin, D. L. Stern, Min-max harmonic maps and a new characterization of conformal eigenvalues, arXiv preprint (2020), arXiv:2004.04086, 69 pp.
* [KS22] M. Karpukhin, D. L. Stern, Existence of harmonic maps and eigenvalue optimization in higher dimensions, arXiv preprint (2022), arXiv:2207.13635, 60 pp.
* [Kok14] G. Kokarev, Variational aspects of Laplace eigenvalues on Riemannian surfaces, Adv. Math. 258, 2014, 191–239.
* [Kor93] N. Korevaar, Upper bounds for eigenvalues of conformal metrics, J. Differential Geom., 37, 1993, 1, 73–93
* [Kra25] E. Krahn, Über eine von Rayleigh formulierte Minimaleigenschaft des Kreises, Math. Ann., 94, 1925, 1, 97–100
* [LP19] P. Laurain, R. Petrides, Existence of min-max free boundary disks realizing the width of a manifold, Advances in Mathematics 352, 2019, 326–371.
* [LY82] P. Li, S.T. Yau, A new conformal invariant and its applications to the Willmore conjecture and the first eigenvalue of compact surfaces, Invent. Math. 69, 1982, 269–291.
* [Nad96] N. Nadirashvili, Berger’s isoperimetric problem and minimal immersions of surfaces, Geom. Func. Anal. 6, 1996, 877-897.
* [NS15] N. Nadirashvili, Y. Sire, Conformal spectrum and harmonic maps, Moscow Math. J., 15, 2014, 1, 123–140
* [Pet14] R. Petrides, Maximization of the second conformal eigenvalue of spheres, Proc. Amer. Math. Soc., 142, 2014, 7 , 2385–2394
* [Pet15] R. Petrides, On a rigidity result for the first conformal eigenvalue of the Laplacian, J. Spectr. Theory, 5, 2015, no.1, 227–234
* [Pet14a] R. Petrides, Existence and regularity of maximal metrics for the first Laplace eigenvalue on surfaces, Geom. Funct. Anal. 24, 2014, 1336–1376.
* [Pet18] R. Petrides, On the existence of metrics which maximize Laplace eigenvalues on surfaces, Int. Math. Res. Not., 14, 2018, 4261–4355.
* [Pet19] R. Petrides, Maximizing Steklov eigenvalues on surfaces, J. Differential Geom. Volume 113, 2019, no.1, 95-188.
* [Pet21] R. Petrides, Extremal metrics for combinations of Laplace eigenvalues and minimal surfaces into ellipsoids, submitted
* [Pet22] R. Petrides, Shape optimization for combinations of Steklov eigenvalues on Riemannian surfaces, submitted
* [Pet22a] R. Petrides, Maximizing one Laplace eigenvalue on n-dimensional manifolds, submitted
* [PT22] R. Petrides, D. Tewodrose, Subdifferentials and critical points of eigenvalue functionals
* [Riv08] T. Rivière, Conservation laws for conformally invariant variational problems, Inventiones Mathematicae, 168, 2007, 1–22.
* [Sze54] G. Szegö, Inequalities for certain eigenvalues of a membrane of given area, J. Rational Mech. Anal., 3, 1954, 343–356
* [Str08] M. Struwe, Variational Methods: Applications to Nonlinear Partial Differential Equations and Hamiltonian Systems, Springer Berlin, Heidelberg, Edition 4, 2008, XX, 302
* [Wei56] H.F. Weinberger, An isoperimetric inequality for the $N$-dimensional free membrane problem, J. Rational Mech. Anal., 5, 1956, 633–636
* [YY80] P.C. Yang, S.T. Yau, Eigenvalues of the Laplacian of compact Riemannian surfaces and minimal submanifolds, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 7, 1980, no.1, 53–63.
# Adversarial graph burning densities
Karen Gunderson, William Kellough, JD Nir, and Hritik Punj
###### Abstract.
Graph burning is a discrete-time process that models the spread of influence
in a network. Vertices are either _burning_ or _unburned_ , and in each round,
a burning vertex causes all of its neighbours to become burning before a new
_fire source_ is chosen to become burning. We introduce a variation of this
process that incorporates an adversarial game played on a nested, growing
sequence of graphs. Two players, Arsonist and Builder, play in turns: Builder
adds a certain number of new unburned vertices and edges incident to these to
create a larger graph, then every vertex neighbouring a burning vertex becomes
burning, and finally Arsonist ‘burns’ a new fire source. This process repeats
forever. Arsonist is said to win if the limiting fraction of burning vertices
tends to 1, while Builder is said to win if this fraction is bounded away from
1.
The central question of this paper is determining if, given that Builder adds
$f(n)$ vertices at turn $n$, either Arsonist or Builder has a winning
strategy. In the case that $f(n)$ is asymptotically polynomial, we give
threshold results for which player has a winning strategy.
###### 2020 Mathematics Subject Classification:
Primary: 05C57; Secondary: 05C42, 05C63
The first author gratefully acknowledges funding from NSERC, the second author
was supported in part from the University of Manitoba Faculty of Science
Undergraduate Research Award program, and the fourth author was supported by
the University of Manitoba Undergraduate Research Award program.
## 1\. Introduction
Graph burning is a discrete-time process that models the spread of influence
in a network. Vertices are in one of two states: either _burning_ or
_unburned_. In each round, a burning vertex causes all of its neighbours to
become burning and a new _fire source_ is chosen: a previously unburned vertex
whose state is changed to burning. The updates repeat until all vertices are
burning. The _burning number_ of a graph $G$, denoted $b(G)$, is the minimum
number of rounds required to burn all of the vertices of $G$.
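This definition can be checked directly on small examples: after $k$ rounds, the source chosen in round $i$ has burned the ball of radius $k-i$ around it, so $b(G)$ is the least $k$ for which some sequence of $k$ sources covers $V(G)$ by such balls. A brute-force sketch in Python (our own illustrative helpers, restricted to paths for simplicity) confirms $b(P_{n})=\lceil n^{1/2}\rceil$ for small $n$:

```python
from itertools import product
from math import isqrt

def path_burned(n, sources):
    """On the path P_n (vertices 0..n-1), check whether the source
    sequence burns everything: after k rounds, the round-i source
    has burned the ball of radius k - i around it."""
    k = len(sources)
    return all(
        any(abs(v - x) <= k - i for i, x in enumerate(sources, start=1))
        for v in range(n)
    )

def burning_number_path(n):
    """Least k such that some length-k source sequence burns P_n."""
    k = 1
    while True:
        if any(path_burned(n, s) for s in product(range(n), repeat=k)):
            return k
        k += 1

# b(P_n) = ceil(sqrt(n)), and ceil(sqrt(n)) == isqrt(n - 1) + 1 for n >= 1
for n in range(1, 13):
    assert burning_number_path(n) == isqrt(n - 1) + 1
```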
Graph burning first appeared in print in a paper of Alon [1], motivated by a
question of Brandenburg and Scott at Intel, and was formulated as a
transmission problem involving a set of processors. It was then independently
studied by Bonato, Janssen, and Roshanbin [6, 7, 19] who gave bounds and
characterized the burning number for various graph classes. They showed in [7]
that the burning number of every connected graph is equal to the burning
number of one of its spanning trees and conjectured that if $G$ is a connected
graph on $n$ vertices, then $b(G)\leq\lceil n^{1/2}\rceil$. If true, this
would be the best possible upper bound as the path with $n$ vertices, $P_{n}$,
satisfies $b(P_{n})=\lceil n^{1/2}\rceil$. Currently, the tightest known bound
was given by Bastide, Bonamy, Bonato, Charbit, Kamali, Pierron, and Rabie [2]
who showed that $b(G)\leq\lceil\sqrt{4n/3}\rceil+1$. Norin and Turcotte
recently showed [18] that $b(G)\leq(1+o(1))\sqrt{n}$ and so the conjecture
holds asymptotically. Here, we require only the following bound from [3]:
(1) $b(G)\leq\sqrt{2n}.$
Burning density, introduced by Bonato, Gunderson, and Shaw [5], considers the
burning process when the graph itself is growing over time. Consider a
sequence of graphs:
$G_{1}\subseteq G_{2}\subseteq G_{3}\subseteq\cdots$
such that if $i<j$, the subgraph of $G_{j}$ induced by the vertices $V(G_{i})$
is $G_{i}$. That is, the ‘old graph’ does not change, but at each step ‘new
vertices’ arrive with some edges either between new vertices or between new
vertices and old vertices. The burning process in this setting evolves as: the
graph grows, the fire spreads, and we select an additional fire source.
At step $n$ of the process, let $V_{n}$ be the set of vertices and let $B_{n}$
be the set of burning vertices. In this context, there may never be a step
when every vertex is burning, but one can consider the proportion of burning
vertices. Given the sequences of graphs $\mathbf{G}=(G_{i})_{i\geq 1}$ and
fire sources $\mathbf{v}=(v_{i})_{i\geq 1}$, define the _lower burning
density_
$\underline{\delta}(\mathbf{G},\mathbf{v})=\liminf_{n\to\infty}\frac{|B_{n}|}{|V_{n}|}$,
the _upper burning density_
$\overline{\delta}(\mathbf{G},\mathbf{v})=\limsup_{n\to\infty}\frac{|B_{n}|}{|V_{n}|}$,
and the _burning density_
$\delta(\mathbf{G},\mathbf{v})=\lim_{n\to\infty}\frac{|B_{n}|}{|V_{n}|}$, if
it exists. As we will demonstrate, it is possible to construct examples where
the fraction of burning vertices fluctuates infinitely often and the burning
density does not exist.
In traditional graph burning there is a choice of where to place the fires,
but the process is otherwise deterministic. In this paper, we modify the setup
for burning densities to define an adversarial ‘game’. In this game there are
two players, Builder and Arsonist, who play as follows: Given a function
$f(n)$, known to both players, at time step $n$,
* •
Builder adds $f(n)$ vertices to $G_{n-1}$ and edges incident to these new
vertices to create a connected graph $G_{n}$,
* •
the fire spreads, and
* •
Arsonist selects an additional fire source $v_{n}$.
Note that the definition of burning density does not require $G_{n}$ to be
connected, but we impose this restriction in order to prevent Builder from
playing the trivial strategy of adding $f(n)$ independent vertices. In this
game (that goes on forever), Arsonist wins if the burning density is
$\delta(\mathbf{G},\mathbf{v})=1$ and Builder wins if
$\overline{\delta}(\mathbf{G},\mathbf{v})<1$. That is, Arsonist wins if they
can eventually burn all but a negligible fraction of the vertices and Builder
wins if they can eventually guarantee that for all time steps going forward, a
constant fraction of the vertices are unburned. If
$\overline{\delta}(\mathbf{G},\mathbf{v})=1$ but
$\underline{\delta}(\mathbf{G},\mathbf{v})<1$, which is to say if the burning
density does not exist but the proportion of burning vertices approaches $1$
infinitely often, we say that neither player wins and the game ends in a draw.
Assume that both Builder and Arsonist play ‘perfectly’: among all possible
choices, they always choose the best.
One may equivalently view this game as measuring the worst-case burning
density of a graph burning process in which the graphs are known to be
connected and the orders of the graphs to be burned are known, but the
specific choice of $G_{n}$ is only revealed on turn $n$.
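The round structure above is easy to simulate. The following sketch (illustrative only: both strategies are simple heuristics rather than the perfect play assumed above, and all helper names are ours) plays the game on a path with $f(n)=2$. Builder appends two vertices per turn, the fire spreads, and Arsonist ignites the midpoint of the largest unburned interval; one can check by induction that the burning fraction after turn $t$ is $(2t-1)/(2t)$, which tends to $1$:

```python
def largest_gap_midpoint(n, burning):
    """Midpoint of the longest maximal run of unburned vertices on 0..n-1."""
    best_len, best_mid, start = 0, None, None
    for v in range(n + 1):
        if v < n and v not in burning:
            if start is None:
                start = v
        elif start is not None:
            if v - start > best_len:
                best_len, best_mid = v - start, (start + v - 1) // 2
            start = None
    return best_mid

def play(f, steps):
    """One game: Builder appends f(t) path vertices, the fire spreads
    one step, then Arsonist ignites greedily.  Returns |B_n|/|V_n|."""
    n, burning, fracs = 0, set(), []
    for t in range(1, steps + 1):
        n += f(t)  # Builder grows the path 0..n-1
        burning |= {u for v in burning for u in (v - 1, v + 1) if 0 <= u < n}
        src = largest_gap_midpoint(n, burning)  # Arsonist's move
        if src is not None:
            burning.add(src)
        fracs.append(len(burning) / n)
    return fracs

fracs = play(lambda t: 2, steps=200)
assert fracs[-1] > 0.99  # fraction (2t-1)/(2t) -> 1
```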
The game we have described is an adversarial zero-sum game with perfect
information. This means at most one player can win and both players know of
all options available to each player at every step in the game. Such games
have a long history in the literature, tracing back to von Neumann and
Morgenstern [17]. Gale and Stewart [11] introduced the idea of infinite games
in which a winner is selected only after countably many turns. Infinite games
are used in mathematical logic and have been used in computer science to study
computation processes [12]. Other examples of games on graphs that have (or
may potentially have) infinitely many rounds include a Maker-Breaker game for
infinite connected components of the origin in a subgraph of the infinite grid
[9], a Ramsey-game where a win for one of the players involves an infinite
graph [13], a variation of Cops and Robbers on an infinite graph [14], and
energy-parity games [8].
The central question of this paper is determining whether, given that Builder
adds $f(n)$ vertices at turn $n$, Arsonist or Builder has a winning strategy.
Our main result proves there is a threshold at which functions which grow
polynomially switch from being Arsonist-win to Builder-win.
###### Theorem 1.
Let $\alpha>0$ and $f:\mathbb{Z}^{+}\to\mathbb{Z}^{+}$ satisfy
$f(n)=\Theta(n^{\alpha})$. Then in the adversarial burning game in which
Builder adds $f(n)$ vertices on turn $n$, Arsonist has a winning strategy if
$\alpha<1$ and Builder has a winning strategy if $\alpha\geq 1$.
Indeed, Theorem 1 is a direct consequence of a broader classification theorem
which considers functions that may fluctuate between growth rates.
###### Theorem 2.
Let $f:\mathbb{Z}^{+}\to\mathbb{Z}^{+}$ and consider the adversarial graph
burning game in which Builder adds $f(n)$ vertices on turn $n$. Then:
* •
Arsonist has a winning strategy if there exists $\alpha<1$ such that
$f(n)=O(n^{\alpha})$ and $f(n)=\omega(n^{2\alpha-1})$, and
* •
Builder has a winning strategy if either $f(n)=\Theta(n)$ or there exist
$c>2$ and $n_{0}\in\mathbb{Z}^{+}$ such that $f(n)\geq cn$ for all $n\geq n_{0}$.
Note that if $f(n)=\Theta(n^{\alpha})$ for $\alpha>1$, then, for sufficiently
large $n$, $f(n)>2.1n$, say, and so Theorem 1 follows from Theorem 2.
Additionally, Theorem 2 also covers superpolynomial functions.
In [7], it was shown that determining extremal values of burning numbers for
connected graphs can always be reduced to the restricted problem on trees.
Similarly, we show that, in terms of understanding which player has a winning
strategy, it suffices to assume that Builder always builds trees.
Given a function $f(n)$, define a _legal construction_ for Builder to be a
sequence of graphs $(G_{1},G_{2},\dots)$ such that for each
$n\in\mathbb{Z}^{+}$, $G_{n}$ is a connected graph with $\sum_{k=1}^{n}f(k)$
vertices, $G_{n}\subseteq G_{n+1}$, and the subgraph of $G_{n+1}$ induced by
$V(G_{n})$ is $G_{n}$.
###### Theorem 3.
Let $f(n)$ be given. For every legal construction
$\mathbf{G}=(G_{1},G_{2},\dots)$ for Builder and for all choices of fire
sources, $\mathbf{v}$, every legal construction
$\mathbf{T}=(T_{1},T_{2},\dots)$ such that $T_{n}$ is a spanning tree of
$G_{n}$, for all $n\in\mathbb{Z}^{+}$, satisfies
$\underline{\delta}(\mathbf{T},\mathbf{v})\leq\underline{\delta}(\mathbf{G},\mathbf{v})$
and
$\overline{\delta}(\mathbf{T},\mathbf{v})\leq\overline{\delta}(\mathbf{G},\mathbf{v})$.
In particular, if Arsonist has a strategy that succeeds given any sequence of
trees Builder selects, they have a winning strategy for all legal
constructions. Similarly, if Builder has a winning strategy, they also have a
winning strategy using a legal construction consisting of a sequence of trees.
We prove Theorem 3 in Section 2 and Theorem 2 in Section 3 before closing with
a few unresolved questions regarding the adversarial burning game.
### 1.1. Acknowledgements
The authors offer thanks to Ruiyang Chen who gave useful feedback on some
preliminary discussions of the problems considered in this paper as well as
the anonymous referee who offered several useful refinements to the results
and examples in this paper.
## 2\. Restricting Builder to trees
In this section we prove Theorem 3.
###### Proof.
For a legal construction $\mathbf{G}=(G_{1},G_{2},\dots)$, let
$\mathbf{T}=(T_{1},T_{2},\dots)$ be any sequence of graphs such that
$T_{n}\subseteq T_{n+1}$, the subgraph of $T_{n+1}$ induced by $V(T_{n})$ is
$T_{n}$, and $T_{n}$ is a spanning tree of $G_{n}$ for all
$n\in\mathbb{Z}^{+}$. Let $\mathbf{v}=(v_{1},v_{2},\dots)$ be any sequence of
fire sources for Arsonist in $\mathbf{G}$. Suppose Arsonist burns the same
sequence of vertices $\mathbf{v}$ when Builder uses the legal construction
$\mathbf{T}$.
Since $T_{n}$ contains a subset of the edges of $G_{n}$ for every
$n\in\mathbb{Z}^{+}$, the number of burning vertices in $T_{n}$ is at most the
number of burning vertices in $G_{n}$. Furthermore, $T_{n}$ and $G_{n}$ have
the same number of vertices for all $n\in\mathbb{Z}^{+}$. Thus
$\underline{\delta}(\mathbf{T},\mathbf{v})\leq\underline{\delta}(\mathbf{G},\mathbf{v})$
and
$\overline{\delta}(\mathbf{T},\mathbf{v})\leq\overline{\delta}(\mathbf{G},\mathbf{v})$.
∎
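The monotonicity in this proof can be checked directly on a small static example. The sketch below (our own illustrative helper) spreads fire with a fixed source sequence on the cycle $C_{6}$ and on a spanning path of it, and confirms the tree never has more burning vertices than the graph:

```python
def burn_sizes(n, edges, sources):
    """Round i: the fire spreads along the edges, then sources[i] ignites.
    Returns the number of burning vertices after each round."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    burning, sizes = set(), []
    for s in sources:
        burning |= {u for v in burning for u in adj[v]}
        burning.add(s)
        sizes.append(len(burning))
    return sizes

cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
spanning_path = cycle[:-1]  # drop edge (5, 0): a spanning tree of C_6
sources = [0, 3, 5]

g_sizes = burn_sizes(6, cycle, sources)
t_sizes = burn_sizes(6, spanning_path, sources)
assert all(t <= g for t, g in zip(t_sizes, g_sizes))
```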
Based on Theorem 3, we assume hereafter that Builder’s sequence of graphs is a
growing sequence of trees.
## 3\. Polynomial thresholds
In this section we prove Theorem 2 by describing winning strategies in each
scenario. We start by proving that Arsonist has a winning strategy when the
number of vertices Builder is given is bounded by two sublinear polynomials.
###### Proposition 4.
Let $\alpha<1$ and let $f:\mathbb{Z}^{+}\to\mathbb{Z}^{+}$ be a function with
$f(n)=O(n^{\alpha})$ and $f(n)=\omega(n^{2\alpha-1})$. Then Arsonist has a
winning strategy in the adversarial burning game when Builder receives $f(n)$
new vertices at step $n$.
###### Proof.
Let $c>0$ be such that $f(n)\leq cn^{\alpha}$ for all $n$ sufficiently large.
Let $\mathbf{G}=(G_{1},G_{2},\dots)$ be the sequence of graphs Builder
constructs and for every $i$, let $V_{i}=V(G_{i})$.
Fix $N_{1}\in\mathbb{Z}^{+}$ large, and allow Arsonist to choose vertices to
burn arbitrarily in the first $N_{1}$ steps of the game. Recursively define a
sequence of integers $(N_{k})_{k\geq 1}$ as follows. For every $k\geq 1$
define $A_{k}=b(G_{N_{k}})$, the burning number of $G_{N_{k}}$, and
$N_{k+1}=N_{k}+A_{k}$. Arsonist’s strategy will be to burn the graph
$G_{N_{k}}$ in $A_{k}$ steps (which we bound by Equation (1)) between time
$N_{k}$ and time $N_{k+1}$, ignoring any vertices that had been burning before
time $N_{k}$. Thus we have $|B_{N_{k}}|\geq|V_{N_{k-1}}|$ for all $k\geq 2$ as
the number of vertices burning at time $N_{k}$ is at least the number of
vertices in $G_{N_{k-1}}$.
We first estimate the number of new vertices added between time $N_{k-1}$ and
$N_{k}$: $\sum_{n=N_{k-1}+1}^{N_{k}}f(n)$. For this, we use the inequality
(2) $(1+x)^{1+\alpha}\leq 1+2(1+\alpha)x,$
which holds for $0\leq x<1$ when $\alpha<1$.
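Inequality (2) is elementary; as a quick numeric spot-check (illustrative only), it can be verified on a grid of values in the range where it is applied:

```python
# Spot-check (1 + x)^(1 + a) <= 1 + 2(1 + a)x on a grid of
# 0 <= x < 1 and 0 < a < 1, the range used in the proof.
for i in range(1, 10):
    a = i / 10
    for j in range(100):
        x = j / 100
        assert (1 + x) ** (1 + a) <= 1 + 2 * (1 + a) * x
```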
$\displaystyle\sum_{n=N_{k-1}+1}^{N_{k}}f(n)$ $\displaystyle\leq
c\sum_{n=N_{k-1}+1}^{N_{k}}n^{\alpha}$ $\displaystyle\leq
c\int_{N_{k-1}}^{N_{k}+1}x^{\alpha}dx$
$\displaystyle=\frac{c}{\alpha+1}\left((N_{k}+1)^{\alpha+1}-(N_{k-1})^{\alpha+1}\right)$
$\displaystyle=\frac{c}{\alpha+1}\left((N_{k-1}+A_{k-1}+1)^{\alpha+1}-(N_{k-1})^{\alpha+1}\right)$
$\displaystyle=\frac{c}{\alpha+1}(N_{k-1})^{\alpha+1}\left(\left(1+\frac{A_{k-1}}{N_{k-1}}+\frac{1}{N_{k-1}}\right)^{\alpha+1}-1\right)$
$\displaystyle\leq\frac{c}{\alpha+1}(N_{k-1})^{\alpha+1}\left(\left(1+\frac{2A_{k-1}}{N_{k-1}}\right)^{\alpha+1}-1\right)$
$\displaystyle\leq\frac{c}{\alpha+1}(N_{k-1})^{\alpha+1}\left(1+\frac{4(\alpha+1)A_{k-1}}{N_{k-1}}-1\right)$
(by Equation (2)) $\displaystyle=4c(N_{k-1})^{\alpha}A_{k-1}$
$\displaystyle\leq 4c(N_{k-1})^{\alpha}\cdot\sqrt{2|V_{N_{k-1}}|}$ (by
Equation (1))
To estimate the total number of vertices, we consider two cases depending on
whether $\alpha\geq 1/2$ or $\alpha<1/2$. First, assume that $\alpha\geq 1/2$
so that $2\alpha-1\geq 0$.
We first claim that
(3) $|V_{N_{k}}|=\omega((N_{k})^{2\alpha}).$
Fix $M>0$ and choose $N$ large enough such that $f(n)\geq 4\alpha M\cdot
n^{2\alpha-1}$ for all $n\geq N$. Then for $k$ large enough that $N_{k}\geq
N$,
$\displaystyle|V_{N_{k}}|$ $\displaystyle=\sum_{n=1}^{N_{k}}f(n)$
$\displaystyle\geq\sum_{n=N+1}^{N_{k}}4\alpha Mn^{2\alpha-1}$
$\displaystyle\geq 4\alpha M\int_{N}^{N_{k}}x^{2\alpha-1}dx$
$\displaystyle\geq 2M\left(\left(N_{k}\right)^{2\alpha}-N^{2\alpha}\right)$
$\displaystyle\geq M\left(N_{k}\right)^{2\alpha}$
for $N_{k}\geq 2^{1/(2\alpha)}N$. As $M$ was arbitrary, we have shown
$|V_{N_{k}}|=\omega((N_{k})^{2\alpha})$.
To estimate the fraction of burning vertices in the graph $G_{N_{k}}$, note
that
$|V_{N_{k}}|=|V_{N_{k-1}}|+\sum_{n=N_{k-1}+1}^{N_{k}}f(n).$
Therefore, using Equation (3),
$\frac{\sum_{n=N_{k-1}+1}^{N_{k}}f(n)}{|V_{N_{k-1}}|}\leq\frac{4\sqrt{2}c(N_{k-1})^{\alpha}}{\sqrt{|V_{N_{k-1}}|}}=4\sqrt{2}c\frac{(N_{k-1})^{\alpha}}{\omega((N_{k-1})^{\alpha})}=o(1).$
Turning to the case where $\alpha<1/2$, note that the lower-bound condition on
the function $f$ reduces to $f(n)>0$. Then, using the bound
$|V_{N_{k-1}}|\geq N_{k-1}$ gives
$\frac{\sum_{n=N_{k-1}+1}^{N_{k}}f(n)}{|V_{N_{k-1}}|}\leq\frac{4\sqrt{2}c(N_{k-1})^{\alpha}}{\sqrt{|V_{N_{k-1}}|}}\leq
4\sqrt{2}c\frac{(N_{k-1})^{\alpha}}{N_{k-1}^{1/2}}=o(1).$
Thus in either case,
$\displaystyle\liminf_{k\to\infty}\frac{|V_{N_{k-1}}|}{|V_{N_{k}}|}$
$\displaystyle=\liminf_{k\to\infty}\frac{|V_{N_{k-1}}|}{|V_{N_{k-1}}|+\sum_{n=N_{k-1}+1}^{N_{k}}f(n)}$
$\displaystyle=\liminf_{k\to\infty}\frac{|V_{N_{k-1}}|}{(1+o(1))|V_{N_{k-1}}|}$
$\displaystyle=1$
For any $n\geq N_{1}$, choose $k$ largest such that $n\geq N_{k}$. Then
$|B_{n}|\geq|B_{N_{k}}|\geq|V_{N_{k-1}}|$ and $|V_{n}|\leq|V_{N_{k+1}}|$, so
we have
$\displaystyle\underline{\delta}(\mathbf{G},\mathbf{v})$
$\displaystyle=\liminf_{n\to\infty}\frac{|B_{n}|}{|V_{n}|}$
$\displaystyle\geq\liminf_{k\to\infty}\frac{|V_{N_{k-1}}|}{|V_{N_{k+1}}|}$
$\displaystyle=\left(\liminf_{k\to\infty}\frac{|V_{N_{k-1}}|}{|V_{N_{k}}|}\right)\left(\liminf_{k\to\infty}\frac{|V_{N_{k}}|}{|V_{N_{k+1}}|}\right)$
$\displaystyle=1,$
where we note that the limit infimum is bounded above by $1$ and therefore
exists, which permits splitting the limit across the product
$\frac{|V_{N_{k-1}}|}{|V_{N_{k}}|}\cdot\frac{|V_{N_{k}}|}{|V_{N_{k+1}}|}$. So
as
$1\geq\overline{\delta}(\mathbf{G},\mathbf{v})\geq\underline{\delta}(\mathbf{G},\mathbf{v})=1,$
the burning density, $\delta(\mathbf{G},\mathbf{v})$, exists and equals $1$,
so Arsonist wins.
∎
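The recursion in this proof can also be explored numerically. The sketch below is an approximation, not the proof itself: it uses the continuous estimate $|V_{N}|\approx N^{1+\alpha}/(1+\alpha)$ for $f(n)=n^{\alpha}$ with $\alpha=0.7$, and the bound $\sqrt{2|V_{N_{k}}|}$ from Equation (1) in place of the true burning number. It tracks the ratio of newly added vertices to $|V_{N_{k-1}}|$, which decays to $0$ as in the proof:

```python
ALPHA = 0.7  # any 1/2 <= alpha < 1 behaves similarly

def volume(N):
    """Continuous approximation of |V_N| for f(n) = n**ALPHA."""
    return N ** (1 + ALPHA) / (1 + ALPHA)

N = 1000.0
ratios = []
for _ in range(150):
    A = (2 * volume(N)) ** 0.5  # stands in for A_k = b(G_{N_k}), via Eq. (1)
    N_next = N + A              # the recursion N_{k+1} = N_k + A_k
    ratios.append((volume(N_next) - volume(N)) / volume(N))
    N = N_next

# the fraction of newly added vertices tends to 0,
# which is what drives the burning density to 1
assert ratios[-1] < ratios[0] / 2
assert ratios[-1] < 0.1
```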
Note that the lower-bound conditions on the function $f$ in Proposition 4 are
necessary to guarantee a winning strategy for Arsonist. To see this,
consider the following examples.
###### Example 5.
First, for $\alpha<1/2$, since the lower bound condition on the function $f$
in Proposition 4 reduces to simply $f(n)>0$, we consider a function $f$ that
alternates between $0$ and $\lfloor n^{\alpha}\rfloor$. Throughout, suppose
that Builder builds a path. Given any $N_{0}$ and $|V_{N_{0}}|$, let $f(n)=0$
for $N_{0}<n\leq N_{1}$, with $N_{1}$ large enough so that
$|V_{N_{1}}|<N_{1}^{\alpha}$. Then, for $N_{2}=N_{1}+\lfloor
N_{1}^{\alpha/2}\rfloor$, define $f(n)=\lfloor n^{\alpha}\rfloor$ when
$N_{1}<n\leq N_{2}$. The total number of vertices at time $N_{2}$ is
$|V_{N_{2}}|\geq|V_{N_{1}}|+N_{1}^{\alpha}\cdot\lfloor
N_{1}^{\alpha/2}\rfloor\geq|V_{N_{1}}|+N_{1}^{\alpha}\cdot(N_{1}^{\alpha/2}-1)\geq
N_{1}^{3\alpha/2}-N_{1}^{\alpha}.$
To bound from above the number of burning vertices at time $N_{2}$, we assume
that all vertices in $V_{N_{1}}$ are burning. As Builder constructs a path,
the vertices Arsonist burns from time $N_{1}+1$ to $N_{2}$ form (possibly
overlapping) intervals of lengths $1$ through $N_{2}-(N_{1}+1)=\lfloor
N_{1}^{\alpha/2}\rfloor-1$, with at most $N_{1}^{\alpha/2}$ additional
vertices burning due to spread from $V_{N_{1}}$. Together, this gives
$|B_{N_{2}}|\leq|V_{N_{1}}|+N_{1}^{\alpha/2}+\sum_{\ell=1}^{\lfloor
N_{1}^{\alpha/2}\rfloor-1}(2\ell-1)\leq
N_{1}^{\alpha}+N_{1}^{\alpha/2}+(N_{1}^{\alpha/2})^{2}\leq 3N_{1}^{\alpha}.$
Thus,
(4)
$\frac{|B_{N_{2}}|}{|V_{N_{2}}|}\leq\frac{3N_{1}^{\alpha}}{N_{1}^{3\alpha/2}-N_{1}^{\alpha}}.$
For $N_{1}$ sufficiently large, the upper bound in equation (4) is arbitrarily
small. Repeating these phases of growth shows that for any choice of fire
sources, $\underline{\delta}(\mathbf{G},\mathbf{v})=0$, so we do not have
$\delta(\mathbf{G},\mathbf{v})=1$ and Arsonist does not win.
###### Example 6.
Consider now an example where, for $1/2\leq\alpha<1$, the function $f$
fluctuates between $f(n)=\lfloor n^{\alpha}\rfloor$ and $f(n)=\lfloor
n^{2\alpha-1}\rfloor$. Again, assume that Builder builds a path.
Fix $0<\varepsilon<1/8$. Given any $N_{0}$, let $M_{0}$ be the number of
vertices added by time $N_{0}$. Assuming that $f(n)=\lfloor
n^{2\alpha-1}\rfloor$ for $N_{0}<n\leq N$, then as $N$ tends to infinity, the
number of vertices at time $N$ is
$\displaystyle|V_{N}|$ $\displaystyle=M_{0}+\sum_{n=N_{0}+1}^{N}\lfloor
n^{2\alpha-1}\rfloor$
$\displaystyle=M_{0}+(1+o(1))\int_{N_{0}}^{N}x^{2\alpha-1}\ dx$
$\displaystyle=M_{0}+\frac{(1+o(1))}{2\alpha}\left(N^{2\alpha}-N_{0}^{2\alpha}\right)$
$\displaystyle=\frac{(1+o(1))}{2\alpha}N^{2\alpha},$
for any fixed value of $N_{0}$ or $M_{0}$. Thus, given $N_{0}$ and the number
of vertices added up to this point, let $N_{1}$ be large enough so that if,
for $N_{0}<n\leq N_{1}$, $f(n)=\lfloor n^{2\alpha-1}\rfloor$, then
$\left||V_{N_{1}}|-\frac{N_{1}^{2\alpha}}{2\alpha}\right|\leq\varepsilon
N_{1}^{2\alpha}.$
Set $N_{2}=N_{1}+\lfloor N_{1}^{\alpha}\rfloor$ and for $N_{1}<n\leq N_{2}$,
let $f(n)=\lfloor n^{\alpha}\rfloor$. To estimate the number of burning
vertices at time $N_{2}$, assume that all vertices in $V_{N_{1}}$ are burning
as are all vertices added up to time $N_{1}+\lfloor N_{1}^{\alpha}/2\rfloor$.
Accounting for the spread of burning vertices from these and the balls
surrounding the vertices Arsonist selected after time $N_{1}+\lfloor
N_{1}^{\alpha}/2\rfloor$, we see that
$\displaystyle|B_{N_{2}}|$
$\displaystyle\leq|V_{N_{1}}|+\sum_{n=N_{1}+1}^{N_{1}+\lfloor
N_{1}^{\alpha}/2\rfloor}n^{\alpha}+(N_{2}-(N_{1}+\lfloor
N_{1}^{\alpha}/2\rfloor))+\sum_{\ell=1}^{\lfloor N_{1}^{\alpha}\rfloor-\lfloor
N_{1}^{\alpha}/2\rfloor}(2\ell-1)$ (5)
$\displaystyle\leq\left(\frac{1}{2\alpha}+\varepsilon\right)N_{1}^{2\alpha}+\int_{N_{1}+1}^{N_{1}+N_{1}^{\alpha}/2+1}x^{\alpha}\
dx+\frac{N_{1}^{\alpha}}{2}+1+\left(\frac{N_{1}^{\alpha}}{2}+1\right)^{2}$
To estimate the integral in equation (5), we extend the inequality in Equation
(2) by noting
(6) $(1+x)^{\alpha+1}\leq 1+(\alpha+1)x+(\alpha+1)\alpha x^{2},$
which holds for $x<1$ for any $\alpha<1$. Therefore,
$\displaystyle\int_{N_{1}+1}^{N_{1}+N_{1}^{\alpha}/2+1}x^{\alpha}\ dx$
$\displaystyle=\frac{1}{\alpha+1}\left((N_{1}+N_{1}^{\alpha}/2+1)^{\alpha+1}-(N_{1}+1)^{\alpha+1}\right)$
$\displaystyle=\frac{(N_{1}+1)^{\alpha+1}}{\alpha+1}\left(\left(1+\frac{N_{1}^{\alpha}}{2(N_{1}+1)}\right)^{\alpha+1}-1\right)$
$\displaystyle\leq\frac{(N_{1}+1)^{\alpha+1}}{\alpha+1}\left(1+\frac{(\alpha+1)N_{1}^{\alpha}}{2(N_{1}+1)}+\frac{(\alpha+1)\alpha
N_{1}^{2\alpha}}{4(N_{1}+1)^{2}}-1\right)$ (by Equation (6))
$\displaystyle=\frac{N_{1}^{\alpha}(N_{1}+1)^{\alpha}}{2}+\frac{\alpha
N_{1}^{2\alpha}}{4(N_{1}+1)^{1-\alpha}}$
$\displaystyle\leq\frac{N_{1}^{2\alpha}}{2}(1+1/N_{1})^{\alpha}+\frac{\alpha
N_{1}^{3\alpha-1}}{4}$
$\displaystyle\leq\frac{N_{1}^{2\alpha}}{2}\left(1+\frac{2\alpha}{N_{1}}\right)+\frac{\alpha
N_{1}^{3\alpha-1}}{4}$ (by Equation (2)) (7)
$\displaystyle\leq\frac{N_{1}^{2\alpha}}{2}+2\alpha N_{1}^{3\alpha-1}.$
Since
$\left(\frac{N_{1}^{\alpha}}{2}+1\right)^{2}=\frac{N_{1}^{2\alpha}}{4}+N_{1}^{\alpha}+1\leq\frac{N_{1}^{2\alpha}}{4}+2N_{1}^{\alpha}$,
combining equations (5) and (7) gives
$\displaystyle|B_{N_{2}}|$
$\displaystyle\leq\left(\frac{3}{4}+\frac{1}{2\alpha}+\varepsilon\right)N_{1}^{2\alpha}+\frac{N_{1}^{\alpha}}{2}+1+2\alpha
N_{1}^{3\alpha-1}+2N_{1}^{\alpha}$
$\displaystyle\leq\left(\frac{3}{4}+\frac{1}{2\alpha}+\varepsilon\right)N_{1}^{2\alpha}+5N_{1}^{3\alpha-1}.$
Meanwhile,
$\displaystyle|V_{N_{2}}|$
$\displaystyle\geq|V_{N_{1}}|+\sum_{n=N_{1}+1}^{N_{1}+\lfloor
N_{1}^{\alpha}\rfloor}(n^{\alpha}-1)$ $\displaystyle\geq|V_{N_{1}}|+\lfloor
N_{1}^{\alpha}\rfloor((N_{1}+1)^{\alpha}-1)$
$\displaystyle\geq\left(\frac{1}{2\alpha}-\varepsilon\right)N_{1}^{2\alpha}+(N_{1}^{\alpha}-1)N_{1}^{\alpha}$
$\displaystyle=\left(1+\frac{1}{2\alpha}-\varepsilon\right)N_{1}^{2\alpha}-N_{1}^{\alpha}.$
Thus, the fraction of burning vertices at time $N_{2}$ is bounded by
(8)
$\frac{|B_{N_{2}}|}{|V_{N_{2}}|}\leq\frac{\left(\frac{3}{4}+\frac{1}{2\alpha}+\varepsilon\right)N_{1}^{2\alpha}+5N_{1}^{3\alpha-1}}{\left(1+\frac{1}{2\alpha}-\varepsilon\right)N_{1}^{2\alpha}-N_{1}^{\alpha}}.$
For $N_{1}$ sufficiently large, the upper bound in equation (8) is arbitrarily
close to $\frac{3/4+1/(2\alpha)+\varepsilon}{1+1/(2\alpha)-\varepsilon}$,
which is strictly less than $1$. Repeating the construction gives a function
$f$ so that for any sequence of activated vertices,
$\underline{\delta}(\mathbf{G},\mathbf{v})<1$ and so Arsonist does not win.
Next, we consider the adversarial burning game in which the number of vertices
Builder is given grows either linearly or superlinearly.
###### Proposition 7.
Let $f:\mathbb{Z}^{+}\to\mathbb{Z}^{+}$ be a function so that either
* •
$f=\Theta(n)$, or
* •
there exist $c>2$ and $n_{0}\in\mathbb{Z}^{+}$ so that for all $n\geq n_{0}$,
$f(n)\geq cn$.
Then given $f(n)$ new vertices at step $n$, Builder has a winning strategy in
the adversarial burning game.
###### Proof.
We consider the strategy in which Builder constructs an increasingly long path
by always adding new vertices to one end of the previous path.
First, consider the case where $f=\Theta(n)$. Let $n_{0},\alpha,\beta>0$ (with
$\alpha\leq\beta$) be such that for each $n\geq n_{0}$, $\alpha n\leq
f(n)\leq\beta n$. Consider the state of the game on some turn $N>n_{0}$.
Regardless of Arsonist’s strategy, some initial portion of Builder’s path
consists entirely of burning vertices while fire sources that Arsonist has
burned more recently form small (potentially overlapping) intervals. To bound
the total number of burning vertices, we choose some turn $k_{0}$ satisfying
$N>k_{0}\geq n_{0}$ and we assume that the paths added in the process up to
turn $k_{0}$ are entirely burning and that fires started after turn $k_{0}$ do
not intersect. This gives us three collections of burning vertices: an initial
segment of length $\sum_{k=1}^{k_{0}}f(k)$, an additional $N-k_{0}$ vertices
extending that initial segment that are potentially burned from that segment,
and $N-k_{0}$ non-intersecting balls of radius $1$ through $N-k_{0}$, giving
the estimate
$|B_{N}|\leq\sum_{k=1}^{k_{0}}f(k)+(N-k_{0})+\sum_{\ell=1}^{N-k_{0}}(2\ell-1)=\sum_{k=1}^{k_{0}}f(k)+(N-k_{0})^{2}+O(N).$
The total number of vertices by turn $N$ is
$|V_{N}|=\sum_{k=1}^{N}f(k)\geq\sum_{k=1}^{k_{0}}f(k)+\sum_{k=k_{0}+1}^{N}\alpha
k\geq\sum_{k=1}^{k_{0}}f(k)+\frac{\alpha}{2}(N^{2}-k_{0}^{2}).$
Thus for any choice of $k_{0}$ we have a burning density of at most
$\displaystyle\frac{|B_{N}|}{|V_{N}|}$
$\displaystyle\leq\frac{\sum_{k=1}^{k_{0}}f(k)+(N-k_{0})^{2}+O(N)}{\sum_{k=1}^{k_{0}}f(k)+\frac{\alpha}{2}(N^{2}-k_{0}^{2})}$
$\displaystyle=1-\frac{\frac{\alpha}{2}(N^{2}-k_{0}^{2})-(N-k_{0})^{2}+O(N)}{\sum_{k=1}^{k_{0}}f(k)+\frac{\alpha}{2}(N^{2}-k_{0}^{2})}$
We choose
$k_{0}=\left\lfloor\frac{2}{2+\alpha}N\right\rfloor$
which gives
$\frac{\alpha}{2}(N^{2}-k_{0}^{2})-(N-k_{0})^{2}+O(N)=\frac{\alpha^{2}}{2(\alpha+2)}N^{2}+O(N).$
We also have
$\displaystyle\sum_{k=1}^{k_{0}}f(k)+\frac{\alpha}{2}(N^{2}-k_{0}^{2})$
$\displaystyle=\sum_{k=1}^{n_{0}-1}f(k)+\sum_{k=n_{0}}^{k_{0}}f(k)+\frac{\alpha}{2}(N^{2}-k_{0}^{2})$
$\displaystyle\leq\sum_{k=1}^{n_{0}-1}f(k)+\sum_{k=n_{0}}^{k_{0}}\beta
k+\frac{\alpha}{2}(N^{2}-k_{0}^{2})$
$\displaystyle=\beta\frac{k_{0}(k_{0}+1)}{2}+\frac{\alpha}{2}(N^{2}-k_{0}^{2})+\sum_{k=1}^{n_{0}-1}f(k)-\beta\frac{n_{0}(n_{0}-1)}{2}$
$\displaystyle=\left(\frac{\alpha^{3}+4\alpha^{2}+4\beta}{2(\alpha+2)^{2}}\right)N^{2}+O(N)$
and so
$\displaystyle\frac{\frac{\alpha}{2}(N^{2}-k_{0}^{2})-(N-k_{0})^{2}+O(N)}{\sum_{k=1}^{k_{0}}f(k)+\frac{\alpha}{2}(N^{2}-k_{0}^{2})}$
$\displaystyle\geq\frac{\frac{\alpha^{2}}{2(\alpha+2)}N^{2}+O(N)}{(\frac{\alpha^{3}+4\alpha^{2}+4\beta}{2(\alpha+2)^{2}})N^{2}+O(N)}$
$\displaystyle=\frac{\alpha^{2}(2+\alpha)}{\alpha^{3}+4\alpha^{2}+4\beta}+o(1).$
For sufficiently large $N$, we have
$\frac{|B_{N}|}{|V_{N}|}\leq
1-\frac{\alpha^{2}(2+\alpha)}{\alpha^{3}+4\alpha^{2}+4\beta}+o(1)<1$
and so Builder wins.
Now if instead $c>2$ and $f(n)\geq cn$ when $n\geq n_{0}$, we may instead
bound
$|B_{N}|\leq\sum_{k=1}^{N}(2k-1)=N^{2}$
as
$|V_{N}|\geq|V_{n_{0}}|+\sum_{k=n_{0}+1}^{N}ck=\frac{c}{2}N^{2}+O(N)$
and thus once again we have
$\limsup_{N\to\infty}\frac{|B_{N}|}{|V_{N}|}\leq\lim_{N\to\infty}\frac{N^{2}}{\frac{c}{2}N^{2}+O(N)}=\frac{2}{c}<1.$
∎
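The leading-order algebra in the proof above can be spot-checked numerically. The sketch below treats $k_{0}$ as exactly $\frac{2}{2+\alpha}N$ (floors and lower-order terms dropped), so it is an illustration of the closed forms, not part of the proof.

```python
# Numerically spot-check the leading-order identities used in the proof of
# Proposition 7, treating k0 as exactly 2N/(2+alpha) (floors dropped).
def leading_terms(alpha, beta, N):
    k0 = 2.0 * N / (2.0 + alpha)
    num = 0.5 * alpha * (N**2 - k0**2) - (N - k0) ** 2
    den = 0.5 * beta * k0**2 + 0.5 * alpha * (N**2 - k0**2)
    return num, den

for alpha in (0.1, 0.5, 1.0, 2.0):
    for beta in (0.2, 1.0, 3.0):
        N = 1.0  # both identities are homogeneous of degree 2 in N
        num, den = leading_terms(alpha, beta, N)
        # Closed forms stated in the proof:
        num_cf = alpha**2 * N**2 / (2 * (alpha + 2))
        den_cf = (alpha**3 + 4 * alpha**2 + 4 * beta) * N**2 / (2 * (alpha + 2) ** 2)
        gap_cf = alpha**2 * (2 + alpha) / (alpha**3 + 4 * alpha**2 + 4 * beta)
        assert abs(num - num_cf) < 1e-12
        assert abs(den - den_cf) < 1e-12
        assert abs(num / den - gap_cf) < 1e-12
print("leading-order identities check out")
```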
Proposition 7 also covers the case of superlinear functions: if $f=\omega(n)$,
then $f(n)>2.1n$ for all sufficiently large $n$, and so Builder has a winning
strategy.
In the case where the function $f$ fluctuates between a linear function and a
sublinear function, Builder can no longer necessarily win, as shown by the
following example.
###### Example 8.
Consider a function $f$ for which there are $0<\alpha<1$ and $\beta>0$ so that
at each $n$ either $f(n)>\beta n$ or else $f(n)=\lfloor n^{\alpha}\rfloor$.
Fix $0<\varepsilon<1/2$. At any given time $N_{0}$, let $N_{1}$ be large
enough that setting $f(n)>\beta n$ for $N_{0}<n\leq N_{1}$ gives
$|V_{N_{1}}|>\frac{(1-\varepsilon)\beta}{2}N_{1}^{2}.$
Set $N_{2}=N_{1}+\lceil\sqrt{2|V_{N_{1}}|}\rceil$ so that, using Equation (1),
by time $N_{2}$ Arsonist can burn all of the vertices in $V_{N_{1}}$ no matter
which sequence of graphs Builder chose and which fire sources Arsonist chose
before turn $N_{1}$. For $N_{1}<n\leq N_{2}$, let $f(n)=\lfloor
n^{\alpha}\rfloor$. Note that by the assumption on $N_{1}$,
$N_{1}<\sqrt{\frac{2}{\beta(1-\varepsilon)}|V_{N_{1}}|}<\sqrt{\frac{4}{\beta}|V_{N_{1}}|}$.
The number of new vertices added during this time is
$\displaystyle\sum_{n=N_{1}+1}^{N_{2}}f(n)$
$\displaystyle\leq\lceil\sqrt{2|V_{N_{1}}|}\rceil N_{2}^{\alpha}$
$\displaystyle=\lceil\sqrt{2|V_{N_{1}}|}\rceil\left(N_{1}+\lceil\sqrt{2|V_{N_{1}}|}\rceil\right)^{\alpha}$
$\displaystyle\leq
2\sqrt{|V_{N_{1}}|}\left(\sqrt{\frac{4}{\beta}|V_{N_{1}}|}+2\sqrt{|V_{N_{1}}|}\right)^{\alpha}$
$\displaystyle\leq 4(\beta^{-1/2}+1)^{\alpha}|V_{N_{1}}|^{(\alpha+1)/2}.$
Thus, the fraction of burning vertices at time $N_{2}$ is
$\frac{|B_{N_{2}}|}{|V_{N_{2}}|}\geq\frac{|V_{N_{1}}|}{|V_{N_{1}}|+4(\beta^{-1/2}+1)^{\alpha}|V_{N_{1}}|^{(\alpha+1)/2}}.$
Since $(\alpha+1)/2<1$, for $N_{1}$ sufficiently large the fraction of
burning vertices at time $N_{2}$ is arbitrarily close to $1$. Thus, repeating
this construction shows that there is a sequence of activated vertices with
$\overline{\delta}(\mathbf{G},\mathbf{v})=1$, and so Builder cannot have a
winning strategy.
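The lower bound from Example 8 can be evaluated numerically to see the guaranteed burning fraction at time $N_{2}$ approach $1$ as $|V_{N_{1}}|$ grows; the values $\alpha=0.5$, $\beta=1$ below are illustrative choices, not from the text.

```python
# Evaluate the Example 8 lower bound
#   |B_{N2}|/|V_{N2}| >= V / (V + 4*(beta**-0.5 + 1)**alpha * V**((alpha+1)/2)),
# where V = |V_{N1}|, for growing V. Parameters are illustrative only.
alpha, beta = 0.5, 1.0
c = 4 * (beta ** -0.5 + 1) ** alpha
bounds = []
for V in (1e2, 1e4, 1e8, 1e12):
    bounds.append(V / (V + c * V ** ((alpha + 1) / 2)))
print(bounds)
# The sequence increases toward 1, matching the argument that the burning
# density can be pushed arbitrarily close to 1.
assert all(b2 > b1 for b1, b2 in zip(bounds, bounds[1:]))
assert bounds[-1] > 0.99
```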
## 4. Further directions
While Theorem 2 resolves the question of which player wins the adversarial
burning game for a large collection of functions, there are clearly many
functions not covered by either the results given here or a direct application
of the proof techniques. As we have shown, it is possible to construct
examples that fluctuate between different growth rates in which neither player
wins, and even where the lower burning density is 0 and the upper burning
density is 1.
While a classification of those functions for which either Arsonist or Builder
has a winning strategy seems to be out of reach, it is natural to seek to
expand the class of functions for which the winner is known. The most natural
question is whether Proposition 7 is best possible. The proposition resolves
the question for asymptotically linear functions as well as functions that
stay strictly larger than $2n$ but does not address, for example, a function
that alternates between $n$ and $\lceil n^{1.1}\rceil$.
###### Question 1.
Does there exist a function $f:\mathbb{Z}^{+}\to\mathbb{Z}^{+}$ such that
$f(n)=\Omega(n)$ and Builder does not have a winning strategy in the
adversarial burning game given $f(n)$ vertices on turn $n$?
One approach to answer Question 1 would be to determine whether the problem
exhibits monotonicity under certain conditions. It seems natural that if
Builder has a winning strategy given $f(n)$ vertices, any additional vertices
could only help.
###### Question 2.
Let $f,g:\mathbb{Z}^{+}\to\mathbb{Z}^{+}$ be functions such that $f(n)\leq
g(n)$ for all $n\in\mathbb{Z}^{+}$. If Builder wins the adversarial burning
game when given $f(n)$ vertices at every step $n$, does it necessarily follow
that Builder still wins if they are given $g(n)$ vertices at every step?
Because of the possibility that neither player wins, Question 2 can be
formulated from Arsonist’s perspective: if Arsonist already wins when Builder
is given $g(n)\geq f(n)$ vertices, what can we say when Builder is only given
$f(n)$ vertices? Note that, perhaps surprisingly, if $f(n)\leq g(n)$ and
Arsonist has a winning strategy for the function $g(n)$, it is not necessarily
true that Arsonist has a winning strategy for the function $f(n)$, as seen by
Examples 5 and 6 where the Arsonist can win with $g(n)=\lfloor
n^{\alpha}\rfloor$, but cannot win with a function that fluctuates between
$\lfloor n^{\alpha}\rfloor$ and a smaller function. However, in these
examples it is only shown that Builder can bound the burning density away
from 1 infinitely often, which leaves open the following problem.
###### Question 3.
Let $f,g:\mathbb{Z}^{+}\to\mathbb{Z}^{+}$ be functions such that $f(n)\leq
g(n)$ for all $n\in\mathbb{Z}^{+}$. If Arsonist wins the adversarial burning
game when Builder is given $g(n)$ vertices at every step $n$, does it
necessarily follow that Builder cannot win if they are given $f(n)$ vertices
at every step $n$?
One could also consider a weaker type of monotonicity. Suppose instead of
decreasing the number of vertices Builder is given each turn, we only require
that at each step the total number of vertices Builder has received is
smaller. That is, instead of requiring $f(n)\leq g(n)$ for each
$n\in\mathbb{Z}^{+}$, we have the weaker condition
$\sum_{k=1}^{n}f(k)\leq\sum_{k=1}^{n}g(k)$.
In each instance where we analyzed a specific strategy for Builder, we
considered the consequences of Builder constructing a path. The conjecture that paths
have extremal burning number makes this a natural choice of strategy. However,
it is not uncommon in combinatorial games that an extremal example is not
optimal in the context of a game. This suggests the following question.
###### Question 4.
Let $f:\mathbb{Z}^{+}\to\mathbb{Z}^{+}$. If Builder has some winning strategy
in the adversarial burning game adding $f(n)$ vertices at step $n$, is
building a path guaranteed to be a winning strategy for Builder?
If true, one would no longer need to consider which strategy Builder chooses,
essentially reducing the question of determining a winner to bounding the
burning density of a path that gains $f(n)$ vertices at step $n$.
Finally, one may also consider variations of the adversarial burning game
where either Builder or Arsonist is replaced by a random process. Mitsche,
Prałat, and Roshanbin [16] studied the graph burning process when new fire
sources were selected uniformly at random, which generalizes naturally to the
adversarial game, replacing Arsonist by a uniform random process.
Alternatively, one could replace Builder with a _uniform random recursive
tree_, which is generated by adding vertices one by one, selecting an
existing vertex uniformly at random to be the parent of each new vertex. For
more on recursive trees, see [10, 15].
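A uniform random recursive tree is straightforward to generate; the following minimal sketch (an illustration, not tied to any reference implementation) grows one directly from the description above:

```python
import random

def uniform_random_recursive_tree(n, seed=None):
    """Return parent[] for a uniform random recursive tree on vertices 0..n-1.

    Vertex 0 is the root; each subsequent vertex attaches to an existing
    vertex chosen uniformly at random, exactly as described in the text.
    """
    rng = random.Random(seed)
    parent = [None]  # the root has no parent
    for v in range(1, n):
        parent.append(rng.randrange(v))  # uniform over the v existing vertices
    return parent

tree = uniform_random_recursive_tree(10, seed=0)
print(tree)
# Every non-root vertex has a parent with a smaller label ("recursive" tree).
assert tree[0] is None and all(0 <= tree[v] < v for v in range(1, 10))
```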
## References
* [1] N. Alon, Transmitting in the n-dimensional cube, _Discrete Applied Mathematics_ 37 (1992) 9–11.
* [2] P. Bastide, M. Bonamy, A. Bonato, P. Charbit, S. Kamali, T. Pierron, and M. Rabie, Improved pyrotechnics: closer to the burning graph conjecture, arXiv preprint, arXiv:2110.10530 [math.CO], 2022.
* [3] S. Bessy, A. Bonato, J. Janssen, D. Rautenbach, and E. Roshanbin, Bounds on the burning number, _Discrete Appl. Math._ , 235 (2018), 16–22.
* [4] A. Bonato, A survey of graph burning, _Contrib. Discrete Math._ , 16 (2021), 185–197.
* [5] A. Bonato, K. Gunderson, and A. Shaw, Burning the plane: Densities of the infinite Cartesian grid, _Graphs and Combinatorics_ 36 (2020), 1311–1335.
* [6] A. Bonato, J. Janssen, and E. Roshanbin, Burning a graph as a model of social contagion, _Lecture Notes in Comput. Sci._ , 8882 (2014), 13–22.
* [7] A. Bonato, J. Janssen, and E. Roshanbin, How to burn a graph, _Internet Mathematics_ 12 (2016), no. 1–2, 85–100.
* [8] K. Chatterjee, L. Doyen, Energy parity games, _Theoret. Comput. Sci._ 458 (2012), 49–60.
* [9] A. N. Day, V. Falgas-Ravry, Maker-breaker percolation games II: escaping to infinity, _J. Combin. Theory Ser. B_ 151 (2021), 482–508.
* [10] M. Drmota, _Random trees: an interplay between combinatorics and probability_. Springer Science and Business Media, 2009.
* [11] D. Gale and F. Stewart, Infinite games with perfect information, _Contributions to the Theory of Games_ 2 (1953) 2–16.
* [12] Y. Gurevich, Infinite Games, _Current trends in theoretical computer science: essays and tutorials_ , World Scientific (1993) 235–244.
* [13] D. Hefetz, C. Kusch, L. Narins, A. Pokrovskiy, C. Requilé, A. Sarid, Strong Ramsey games: drawing on an infinite board, _J. Combin. Theory Ser. A_ 150 (2017), 248–266.
* [14] F. Lehner, Pursuit evasion on infinite graphs, _Theoret. Comput. Sci._ 655 (2016), part A, 30–40.
* [15] H. Mahmoud and R. Smythe, A survey of recursive trees, _Theory Probab. Math. Statist._ , 51, (1995), 1–27.
* [16] D. Mitsche, P. Prałat, and E. Roshanbin, Burning graphs—a probabilistic perspective, _Graphs Combin._ , 33 (2017), 449–471.
* [17] J. von Neumann and O. Morgenstern, _Theory of Games and Economic Behavior_ , Princeton University Press, 2007. Originally published 1947.
* [18] S. Norin and J. Turcotte, The burning number conjecture holds asymptotically, arXiv preprint, arXiv:2207.04035 [math.CO], 2022.
* [19] E. Roshanbin, Burning a graph as a model of social contagion, PhD thesis, Dalhousie University, Halifax, NS, 2016.
# Time-linear quantum transport simulations with correlated nonequilibrium
Green’s functions
R. Tuovinen QTF Centre of Excellence, Department of Physics, P.O. Box 64,
00014 University of Helsinki, Finland Department of Physics, Nanoscience
Center, P.O. Box 35, 40014 University of Jyväskylä, Finland Y. Pavlyukh
Institute of Theoretical Physics, Faculty of Fundamental Problems of
Technology, Wroclaw University of Science and Technology, 50-370 Wroclaw,
Poland E. Perfetto Dipartimento di Fisica, Università di Roma Tor Vergata,
Via della Ricerca Scientifica 1, 00133 Rome, Italy INFN, Sezione di Roma Tor
Vergata, Via della Ricerca Scientifica 1, 00133 Rome, Italy G. Stefanucci
Dipartimento di Fisica, Università di Roma Tor Vergata, Via della Ricerca
Scientifica 1, 00133 Rome, Italy INFN, Sezione di Roma Tor Vergata, Via della
Ricerca Scientifica 1, 00133 Rome, Italy
###### Abstract
We present a time-linear scaling method to simulate open and correlated
quantum systems out of equilibrium. The method inherits from many-body
perturbation theory the possibility to choose selectively the most relevant
scattering processes in the dynamics, thereby paving the way to the real-time
characterization of correlated ultrafast phenomena in quantum transport. The
open system dynamics is described in terms of an embedding correlator from
which the time-dependent current can be calculated using the Meir-Wingreen
formula. We show how to efficiently implement our approach through a simple
grafting into recently proposed time-linear Green’s function methods for
closed systems. Electron-electron and electron-phonon interactions can be
treated on equal footing while preserving all fundamental conservation laws.
Introduction.– Few systems in nature are in equilibrium. Behind the facade of,
e.g., calm and stationary transport, the electrical and heat currents run
violently. Such out-of-equilibrium dynamics bridges fields like quantum
transport and optics Miwa _et al._ (2019); Hübener _et al._ (2021), atomic
and molecular physics Ruggenthaler _et al._ (2018); Månsson
_et al._ (2021), spectroscopy in solids Buzzi _et al._ (2020); Dendzik _et
al._ (2020); Nicoletti _et al._ (2022), and cavity materials engineering
Latini _et al._ (2021); Bao _et al._ (2022); Schlawin _et al._ (2022).
Recent progress in state-of-the-art time-resolved pump-probe spectroscopy and
transport measurements has pushed the temporal resolution down to the
femtosecond time scale McIver _et al._ (2020); Sung _et al._ (2020); De Sio
_et al._ (2021); Abdo _et al._ (2021); Niedermayr _et al._ (2022).
Inherently, the associated phenomena are time-dependent; the complex many-body
systems are far from equilibrium, with no guarantee of instantly relaxing to
stationary states.
The theory of quantum transport began with the pioneering works of Landauer
and Büttiker Landauer (1957); Büttiker _et al._ (1985); Büttiker (1986) and
became a mature field after the works of Meir, Wingreen and Jauho Meir and
Wingreen (1992); Jauho _et al._ (1994) who provided a general formula for the
time-dependent current through correlated junctions in terms of nonequilibrium
Green’s functions (NEGF). NEGF is an _ab initio_ method suitable to deal with
both bosonic and fermionic interacting particles, in and out of equilibrium
Danielewicz (1984); Stefanucci and van Leeuwen (2013); Balzer and Bonitz
(2013). Nonetheless, the ability to harness the full power of the Meir-
Wingreen formula Meir and Wingreen (1992) is hampered by the underlying two-
time structure of the NEGF, a feature that makes real-time simulations
computationally challenging.
In this Letter, we present a time-linear scaling NEGF theory for open and
correlated quantum systems. The resulting method is strikingly simple, with
ordinary differential equations (ODE) only. Correlation effects originating
from different scattering mechanisms are included through a proper selection
of Feynman diagrams, and all fundamental conservation laws are preserved. The
Meir-Wingreen formula is rewritten in terms of an embedding correlator which
allows for evaluating the time-dependent current at a time-linear cost. We use
the method to study transport of electron-hole pairs, highlighting the pivotal
role of correlations in capturing velocity renormalizations and decoherence
mechanisms.
Figure 1: Quantum transport setup. A nanowire (finite quantum system) is
contacted to left ($\alpha=L$) and right ($\alpha=R$) electrodes and lies over
a substrate (or gate electrode) $\alpha=$gate. The nanowire is driven out of
equilibrium by time dependent voltages $V_{\alpha}(t)$ [$V_{\rm
bias}=V_{L}-V_{R}$] and laser pulses $E_{\rm pulse}(t)$.
Kadanoff-Baym equations for open systems.– We consider a finite quantum
system, be it a molecule or a nanostructure, with one-electron integrals
$h_{ij}(t)$ and two-electron integrals $v_{ijmn}$ in some orthonormal one-
particle basis of dimension $N_{\rm sys}$, see Fig. 1. The time-dependence of
$h_{ij}(t)=h^{0}_{ij}+\langle i|V_{\rm
gate}(t)+\hat{\mathbf{d}}\cdot{\mathbf{E}_{\rm pulse}}(t)|j\rangle$ is due to
a time-dependent gate voltage $V_{\mathrm{gate}}(t)$ and to a possible laser
pulse $\mathbf{E}_{\rm pulse}(t)$ coupled to the electronic dipole operator
$\hat{\mathbf{d}}$. The system is said to be open if it is in contact with
electronic reservoirs with which it can exchange particles and hence energy.
This is the typical quantum transport setup, the finite system being the
junction and the electronic reservoirs being the electrodes. Neglecting
correlation effects in the electrodes and between the electrodes and the
system, the Green’s function $G_{ij}(z,z^{\prime})$ with times $z,z^{\prime}$
on the Keldysh contour $C$ satisfies the equation of motion (EOM) (in the
$N_{\rm sys}\times N_{\rm sys}$ matrix form) Jauho _et al._ (1994); Haug and
Jauho (2008); Myöhänen _et al._ (2008, 2009); Stefanucci and van Leeuwen
(2013)
$\displaystyle\left[{\rm i}\frac{d}{dz}-h^{e}(z)\right]G(z,z^{\prime})=\delta(z,z^{\prime})+\int_{C}d\bar{z}\;\big{[}\Sigma_{\rm c}(z,\bar{z})+\Sigma_{\rm em}(z,\bar{z})\big{]}G(\bar{z},z^{\prime}).$ (1)
In Eq. (1) $h^{e}_{ij}(z)=h_{ij}(z)+V^{\rm HF}_{ij}(z)$ is the one-electron
Hamiltonian properly renormalized by the Hartree-Fock (HF) potential $V^{\rm
HF}_{ij}(z)=-i\sum_{mn}(v_{imnj}-v_{imjn})G_{nm}(z,z^{+})$, $\Sigma_{\rm c}$
is the correlation self-energy due to electron-electron interactions, and
$\Sigma_{\rm em}$ is the embedding self-energy accounting for all virtual
processes where an electron from orbital $i$ leaves the system to occupy some
energy level in one of the electrodes and thereafter moves back to the system
in orbital $j$.
The Kadanoff-Baym equations (KBE) for the open system follow from Eq. (1) by
choosing the times $z$ and $z^{\prime}$ on different branches of the Keldysh
contour Stefanucci and van Leeuwen (2013). In particular the EOM for the
$N_{\rm sys}\times N_{\rm sys}$ electronic density matrix
$\rho(t)=-iG(z,z^{+})=-iG^{<}(t,t)$ can easily be derived by subtracting Eq.
(1) from its adjoint and then setting $z=t$ on the forward branch and
$z^{\prime}=t$ on the backward branch:
$\displaystyle i\frac{d}{dt}\rho(t)=\Big{(}h^{e}(t)\rho(t)-iI_{\rm
c}(t)-iI_{\rm em}(t)\Big{)}-{\rm h.c.}.$ (2)
The collision integral $I_{\rm c}$ is the convolution between the correlation
self-energy and the Green’s function $G$ whereas the embedding integral
$I_{\rm em}$, the main focus of this work, is the convolution between the
embedding self-energy and $G$. For $I_{\rm em}=0$ the system is closed (no
electrodes) and the KBE have been implemented in a number of works using
different approximations to $\Sigma_{\rm c}$; these include the second-Born
Dahlen and van Leeuwen (2007); Balzer _et al._ (2012), the $GW$ and
$T$-matrix von Friesen _et al._ (2009); Stan _et al._ (2009); Puig von
Friesen _et al._ (2010); Schüler _et al._ (2020), the Fan-Migdal Säkkinen
_et al._ (2015); Schüler _et al._ (2016), and approximations based on the
nonequilibrium dynamical mean-field theory Freericks _et al._ (2006); Aoki
_et al._ (2014); Strand _et al._ (2015); Golež _et al._ (2017). KBE studies
of open systems are less numerous Myöhänen _et al._ (2008, 2009); Puig von
Friesen _et al._ (2010). In all cases the unfavorable $O(N_{t}^{3})$ scaling
with the number of time steps $N_{t}$ limits the KBE, and hence the
possibility of studying ultrafast correlated dynamics, to relatively small
systems, although promising progress has recently been achieved Schüler
_et al._ (2020); Kaye and Golež (2021); Meirinhos _et al._ (2022); Dong _et
al._ (2022).
Generalized Kadanoff-Baym Ansatz.– For any given correlation self-energy the
direct solution of the EOM for the density matrix, see Eq. (2), is
computationally less complex than solving the KBE and opens the door to a
wealth of nonequilibrium phenomena Perfetto and Stefanucci (2018). To date the
most efficient way to make the collision integral a functional of $\rho$ is
the Generalized Kadanoff-Baym Ansatz (GKBA) Lipavský _et al._ (1986),
$G^{\lessgtr}(t,t^{\prime})=-G^{R}(t,t^{\prime})\rho^{\lessgtr}(t^{\prime})+\rho^{\lessgtr}(t)G^{A}(t,t^{\prime})$,
with $G^{R/A}$ the $N_{\rm sys}\times N_{\rm sys}$ retarded/advanced
quasi-particle propagators, $\rho^{>}=\rho-1$ and $\rho^{<}=\rho$. The GKBA respects the
causal structure and it preserves all fundamental conservation laws for
$\Phi$-derivable approximations Baym (1962) to $\Sigma_{\rm c}$ Karlsson _et
al._ (2021). In closed systems the GKBA-EOM can be exactly reformulated in
terms of a coupled set of ODEs Schlünzen _et al._ (2020); Joost _et al._
(2020) for several major approximations to $\Sigma_{\rm c}$, the most notable
being $GW$ and $T$-matrix Schlünzen _et al._ (2020); Joost _et al._ (2020);
Perfetto _et al._ (2022), $GW$ and $T$-matrix plus exchange Pavlyukh _et
al._ (2021, 2022a), Fan-Migdal Karlsson _et al._ (2021) and the doubly-
screened $G\widetilde{W}$ Pavlyukh _et al._ (2022b). In essence, the idea is
to introduce high-order correlators $\mbox{$\mathcal{G}$}^{a=1,\ldots,n}(t)$
($\mbox{$\mathcal{G}$}^{1}=\rho$), write $I_{\rm
c}[\\{\mbox{$\mathcal{G}$}^{a}\\}]$ as a functional of them, and solve the
coupled EOMs
$i\frac{d}{dt}\mbox{$\mathcal{G}$}^{a}=\mbox{$\mathcal{I}$}^{a}[\\{\mbox{$\mathcal{G}$}^{a}\\}]$
($\mbox{$\mathcal{I}$}^{1}=\big{(}h^{e}\rho-iI_{\rm c}\big{)}-{\rm h.c.}$).
For all aforementioned methods the system of ODEs can be closed using a
relatively small number of correlators (the highest number being $n=7$ in
$G\widetilde{W}$). Extending the ODE formulation to open systems would enable
performing time-linear ($O(N_{t})$ scaling) NEGF simulations of correlated
junctions and hence studying, e.g., the formation of Kondo singlets Krivenko
_et al._ (2019), blocking dynamics of polarons Albrecht _et al._ (2013);
Wilner _et al._ (2014a), bistability and hysteresis Galperin _et al._
(2005); Wilner _et al._ (2013), phonon dynamics and heating Galperin _et
al._ (2007a, b); Wilner _et al._ (2014b); Rizzi _et al._ (2016),
nonconservative dynamics Todorov _et al._ (2001); Dundas _et al._ (2009);
Bode _et al._ (2011); Chen _et al._ (2019), molecular electroluminescence
Miwa _et al._ (2019) as well as transport and optical response of junctions
under periodic drivings Cabra _et al._ (2020); Zheng _et al._ (2013), see
also Ref. Ridley _et al._ (2022) for a recent review.
Below we show that the set of ODEs for closed systems can be coupled to one
more ODE for the embedding correlator $\mbox{$\mathcal{G}$}^{\rm em}$ to
effectively open the system, thus providing a time-linear method to solve Eq.
(2). Equation (2) was originally investigated using the integral (convolution)
form of the collision and embedding integrals in Refs. Latini _et al._
(2014); Tuovinen _et al._ (2020, 2021); Tuovinen (2021). It was emphasized
therein that the GKBA propagators $G^{R/A}$ chosen for closed-system
simulations need to be modified. This change affects all other ODEs in an
extremely elegant way while preserving the overall computational complexity.
Time-linear method.– Let $\Sigma_{\alpha}$ be the embedding self-energy of
electrode $\alpha=1,\ldots,N_{\rm leads}$, hence $\Sigma_{\rm
em}=\sum_{\alpha}\Sigma_{\alpha}$. In the so-called wide-band limit
approximation (WBLA) wbl , the retarded and lesser components read Stefanucci
and van Leeuwen (2013); Tuovinen _et al._ (2014); sup
$\displaystyle\Sigma^{R}_{\alpha}(t,t^{\prime})=-\frac{i}{2}s^{2}_{\alpha}(t)\delta(t,t^{\prime})\,\Gamma_{\alpha},$ (3a)
$\displaystyle\Sigma^{<}_{\alpha}(t,t^{\prime})=is_{\alpha}(t)s_{\alpha}(t^{\prime})e^{-i\phi_{\alpha}(t,t^{\prime})}\int\frac{d\omega}{2\pi}f(\omega-\mu)e^{-i\omega(t-t^{\prime})}\,\Gamma_{\alpha},$ (3b)
where $s_{\alpha}(t)$ is the switch-on function for the contact between the
system and electrode $\alpha$, $\Gamma_{\alpha}$ is the $N_{\rm sys}\times
N_{\rm sys}$ quasi-particle line-width matrix due to electrode $\alpha$,
$\phi_{\alpha}(t,t^{\prime})\equiv\int_{t^{\prime}}^{t}d\bar{t}\;V_{\alpha}(\bar{t})$
is the accumulated phase due to the time-dependent voltage $V_{\alpha}$ Ridley
_et al._ (2015), and $f(\omega-\mu)=1/(e^{\beta(\omega-\mu)}+1)$ is the Fermi
function at inverse temperature $\beta$ and chemical potential $\mu$. The
matrix elements
$\Gamma_{\alpha,ij}=2\pi\sum_{k}T_{ik\alpha}\delta(\mu-\epsilon_{k\alpha})T_{k\alpha
j}$ can be calculated from the transition amplitudes $T_{k\alpha
j}=T^{\ast}_{jk\alpha}$ from orbital $j$ to level $k$ in electrode $\alpha$
having the energy dispersion $\epsilon_{k\alpha}$. The exact form of the
embedding integral is then
$\displaystyle I_{\rm em}(t)=\sum_{\alpha}I_{\alpha}(t)=\int d\bar{t}\;\Sigma^{<}_{\rm em}(t,\bar{t})G^{A}(\bar{t},t)+\frac{1}{2}\Gamma(t)\rho(t),$ (4)
with $\Gamma(t)\equiv\sum_{\alpha}s^{2}_{\alpha}(t)\Gamma_{\alpha}$. In Ref.
Latini _et al._ (2014) it was shown that the mean-field approximation of Eq.
(2), i.e., $I_{\rm c}=0$, is exactly reproduced in GKBA provided that
$\displaystyle
G^{R}(t,t^{\prime})=-i\theta(t-t^{\prime})Te^{-i\int_{t^{\prime}}^{t}d\bar{t}\big{(}h^{e}(\bar{t})-i\Gamma(\bar{t})/2\big{)}},$
(5)
and $G^{A}(t^{\prime},t)=[G^{R}(t,t^{\prime})]^{{\dagger}}$. Equation (5)
reduces to the propagator of closed systems for $\Gamma=0$. In open systems,
however, setting $\Gamma=0$ is utterly inadequate as no steady state would
ever be attained. Beyond the mean-field approximation we close Eq. (2) using
the GKBA with propagators as in Eq. (5) pro .
To construct the time-linear method we use an efficient pole expansion (PE)
scheme for the Fermi function Hu _et al._ (2010)
$f(\omega)=\frac{1}{2}-\sum_{l}\eta_{l}\left(\frac{1}{\beta\omega+i\zeta_{l}}+\frac{1}{\beta\omega-i\zeta_{l}}\right)$,
${\rm Re}[\zeta_{l}]>0$, to rewrite the lesser self-energy for $t>t^{\prime}$
as
$\Sigma_{\alpha}^{<}(t,t^{\prime})=\frac{i}{2}s_{\alpha}^{2}(t)\delta(t-t^{\prime})\Gamma_{\alpha}-s_{\alpha}(t)\sum_{l}\frac{\eta_{l}}{\beta}F_{l\alpha}(t,t^{\prime})\Gamma_{\alpha}$
with
$\displaystyle
F_{l\alpha}(t,t^{\prime})=s_{\alpha}(t^{\prime})e^{-i\phi_{\alpha}(t,t^{\prime})}e^{-i(\mu-i\frac{\zeta_{l}}{\beta})(t-t^{\prime})}.$
(6)
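As a sanity check of the expansion, the simplest admissible pole set, the Matsubara poles $\zeta_{l}=(2l-1)\pi$ with $\eta_{l}=1$, can be truncated and compared against the exact Fermi function. The efficient PE scheme of Hu _et al._ uses optimized poles; the Matsubara choice below is purely illustrative:

```python
import math

def fermi(x):
    """Exact Fermi function of x = beta*(omega - mu)."""
    return 1.0 / (math.exp(x) + 1.0)

def fermi_pole_expansion(x, n_poles):
    """Truncated f(x) = 1/2 - sum_l eta_l (1/(x+i*zeta_l) + 1/(x-i*zeta_l))
    with Matsubara poles zeta_l = (2l-1)*pi, eta_l = 1; the two terms combine
    to 2x/(x^2 + zeta_l^2), so the partial sum is real."""
    s = 0.5
    for l in range(1, n_poles + 1):
        zeta = (2 * l - 1) * math.pi
        s -= 2 * x / (x * x + zeta * zeta)
    return s

# Convergence of the bare Matsubara set is slow (error ~ 1/n_poles), which is
# why optimized pole schemes are preferred in practice.
for x in (-5.0, -1.0, 0.0, 1.0, 5.0):
    assert abs(fermi(x) - fermi_pole_expansion(x, 2000)) < 1e-3
print("truncated Matsubara expansion matches the Fermi function")
```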
Inserting the result into Eq. (4) the EOM Eq. (2) for the density matrix
becomes
$\displaystyle i\frac{d}{dt}\rho$ $\displaystyle=\Big{(}h^{e}_{\rm
eff}\rho+\frac{i}{4}\Gamma+i\sum_{l\alpha}s_{\alpha}\frac{\eta_{l}}{\beta}\Gamma_{\alpha}\mbox{$\mathcal{G}$}^{\rm
em}_{l\alpha}-iI_{\rm c}\Big{)}-{\rm h.c.},$ (7)
where $h^{e}_{\rm eff}(t)\equiv h^{e}(t)-i\Gamma(t)/2$ is the effective (non-
self-adjoint) mean-field Hamiltonian and $\mbox{$\mathcal{G}$}^{\rm
em}_{l\alpha}(t)\equiv\int\\!\\!d\bar{t}\;F_{l\alpha}(t,\bar{t})G^{A}(\bar{t},t)$
is the $N_{\rm sys}\times N_{\rm sys}$ embedding correlator. Taking into
account the explicit expressions in Eqs. (5) and (6) we find
$\displaystyle i\frac{d}{dt}\mbox{$\mathcal{G}$}^{\rm em}_{l\alpha}(t)=-s_{\alpha}(t)-\mathcal{G}^{\rm em}_{l\alpha}(t)\Big{(}h^{e{\dagger}}_{\rm eff}(t)-V_{\alpha}(t)-\mu+i\frac{\zeta_{l}}{\beta}\Big{)}.$ (8)
Equations (7) and (8), together with the ODEs for $I_{\rm c}$, form a coupled
system of ODEs for correlated real-time simulations of open systems. This
time-linear method becomes similar to the one of Refs. Croy and Saalmann
(2009); Zheng _et al._ (2010); Zhang _et al._ (2013); Kwok _et al._ (2013);
Wang _et al._ (2013); Kwok _et al._ (2019) for $I_{\rm c}=0$. The scaling
with the system size of Eq. (8) grows like $N_{\rm sys}^{3}\times N_{p}\times
N_{\rm leads}$ where $N_{p}$ is the number of poles for the expansion of
$f(\omega)$ sup .
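To make the structure of Eqs. (7), (8) and (10) concrete, the following sketch propagates a toy two-level junction with $I_{\rm c}=0$ and the HF potential dropped, i.e., a noninteracting illustration of the embedding machinery only. The plain Matsubara poles $\zeta_{l}=(2l-1)\pi$, $\eta_{l}=1$ stand in for an optimized PE scheme, and all parameter values are invented for the demo:

```python
import numpy as np

# --- toy parameters (illustrative only) -------------------------------------
beta, mu = 5.0, 0.0
h = np.array([[0.0, -0.5], [-0.5, 0.0]], dtype=complex)  # one-electron Hamiltonian
Gam = [np.diag([0.5, 0.0]).astype(complex),              # Gamma_L (left lead, site 1)
       np.diag([0.0, 0.5]).astype(complex)]              # Gamma_R (right lead, site 2)
V = [0.0, 0.0]                                           # lead voltages (equilibrium)
L = 200                                                  # Matsubara poles, eta_l = 1
zeta = (2 * np.arange(1, L + 1) - 1) * np.pi

Gam_tot = Gam[0] + Gam[1]                                # s_alpha(t) = 1 (sudden switch-on)
h_eff = h - 0.5j * Gam_tot                               # effective mean-field Hamiltonian
I2 = np.eye(2, dtype=complex)

def deriv(rho, Gem):
    """RHS of Eqs. (7) and (8) with I_c = 0; Gem has shape (2 leads, L, 2, 2)."""
    X = h_eff @ rho + 0.25j * Gam_tot
    for a in range(2):
        X += 1j / beta * Gam[a] @ Gem[a].sum(axis=0)     # i sum_l (eta_l/beta) Gamma_a Gem
    drho = -1j * (X - X.conj().T)
    dGem = np.empty_like(Gem)
    for a in range(2):
        M = h_eff.conj().T - (V[a] + mu) * I2            # + i*zeta_l/beta added per pole
        dGem[a] = 1j * I2 + 1j * (Gem[a] @ M + (1j * zeta / beta)[:, None, None] * Gem[a])
    return drho, dGem

def current(rho, Gem, a):
    """Meir-Wingreen current of lead a, Eq. (10)."""
    inner = (2 * rho - I2) / 4 - Gem[a].sum(axis=0) / beta
    return 2 * np.real(np.trace(Gam[a] @ inner))

rho = 0.5 * I2                                           # half-filled initial state
Gem = np.zeros((2, L, 2, 2), dtype=complex)
dt = 0.005
for _ in range(6000):                                    # RK4 to t = 30 (steady state)
    k1 = deriv(rho, Gem)
    k2 = deriv(rho + dt / 2 * k1[0], Gem + dt / 2 * k1[1])
    k3 = deriv(rho + dt / 2 * k2[0], Gem + dt / 2 * k2[1])
    k4 = deriv(rho + dt * k3[0], Gem + dt * k3[1])
    rho += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    Gem += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])

JL, JR = current(rho, Gem, 0), current(rho, Gem, 1)
print("occupations:", np.real(np.diag(rho)), " J_L + J_R:", JL + JR)
assert np.allclose(rho, rho.conj().T, atol=1e-8)         # density matrix stays Hermitian
assert abs(np.trace(rho).real - 1.0) < 0.1               # particle-hole symmetric filling
assert abs(JL + JR) < 1e-2                               # continuity at the steady state
```

At equilibrium the steady-state currents cancel, which is the continuity condition in disguise; a finite bias $V_{L}=-V_{R}\neq 0$ would instead yield a finite steady current.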
An alternative time-linear method can be constructed from the spectral
decomposition (SD) of the embedding self-energy
$\Sigma_{\alpha,ij}(z,z^{\prime})=s_{\alpha}(z)s_{\alpha}(z^{\prime})\sum_{k}T_{ik\alpha}g_{k\alpha}(z,z^{\prime})T_{k\alpha
j}$, where $g_{k\alpha}$ is the Green’s function of the isolated electrode. In
this case, one would rewrite the embedding integral as $I_{{\rm
em},ij}=\sum_{k\alpha}T_{ik\alpha}\widetilde{\mbox{$\mathcal{G}$}}^{\rm
em}_{k\alpha j}$ and derive an ODE for the scalar quantities
$\widetilde{\mbox{$\mathcal{G}$}}^{\rm em}_{k\alpha j}=\sum_{m}\int
d\bar{t}\,[g^{R}_{k\alpha}(t,\bar{t})T_{k\alpha m}G^{<}_{mj}(\bar{t},t)+g^{<}_{k\alpha}(t,\bar{t})T_{k\alpha
m}G^{A}_{mj}(\bar{t},t)]$ using the GKBA for the lesser Green’s function. The
scaling with the system size of the scalar ODE for all
$\widetilde{\mbox{$\mathcal{G}$}}^{\rm em}_{k\alpha j}$ grows like $N_{\rm
sys}^{2}\times N_{k}\times N_{\rm leads}$ sup , where $N_{k}$ is the number of
$k$-points needed for the discretization. The SD scheme is ill-advised for the
following reasons. If the electrodes are not wide band then the calculation of
the mean-field propagator scales cubically in time; any other approximation to
$G^{R}$, including Eq. (5), would be inconsistent and could even lead to
unphysical time evolutions, e.g., no steady states for constant voltages. If
the electrodes are wide band then $N_{k}$ could be orders of magnitude larger
than $N_{\rm sys}\times N_{p}$ to achieve convergence, hence $N_{\rm
sys}^{2}\times N_{k}\gg N_{\rm sys}^{3}\times N_{p}$. This statement is proven
numerically below; see also Supplemental Material sup .
The quasi-particle broadening $\Gamma$ in the propagators, see Eq. (5), is
only responsible for a minor change in the ODEs for the high-order correlators
of closed systems. We focus here on the $T$-matrix approximation in the
particle-hole channel ($T^{ph}$) as $T^{ph}$-simulations of open systems are
reported below; similar arguments apply to all other approximations in Ref.
Pavlyukh _et al._ (2022b). The collision integral is $I_{{\rm
c},ij}=-i\sum_{lmn}v_{inml}\mbox{$\mathcal{G}$}^{\rm c}_{lmjn}$ where
$\mbox{$\mathcal{G}$}^{\rm
c}_{lmjn}=-\langle\hat{d}^{{\dagger}}_{j}\hat{d}^{{\dagger}}_{n}\hat{d}_{l}\hat{d}_{m}\rangle_{\rm
c}$ is the correlated part of the equal-time two-particle Green’s function
Stefanucci and van Leeuwen (2013). Following Refs. Pavlyukh _et al._ (2021,
2022a), we construct the matrices in the two-electron space
$\mbox{\boldmath$\mathcal{G}$}^{\rm c}_{\begin{subarray}{c}ij\\\
nm\end{subarray}}(t)\equiv\mbox{$\mathcal{G}$}^{\rm c}_{imjn}(t)$,
$\mbox{\boldmath$v$}_{\begin{subarray}{c}ij\\\ nm\end{subarray}}\equiv
v_{imnj}$ and $\mbox{\boldmath$\rho$}^{\lessgtr}_{\begin{subarray}{c}ij\\\
nm\end{subarray}}\equiv\rho^{\lessgtr}_{ij}\rho^{\gtrless}_{mn}$ (boldface
letters are used to distinguish them from matrices in one-electron space). The
only difference in the derivation of the EOM for $\mathcal{G}$ of closed
systems Pavlyukh _et al._ (2021, 2022a) comes from the fact that the EOM for
the propagator contains $h^{e}_{\rm eff}$ instead of $h^{e}$. The final result
is therefore
$\displaystyle i\frac{d}{dt}\mbox{\boldmath$\mathcal{G}$}^{\rm
c}=\Big{(}\mbox{\boldmath$\rho$}^{<}\mbox{\boldmath$v$}\mbox{\boldmath$\rho$}^{>}+\big{(}\mbox{\boldmath$h$}^{e}_{\rm
eff}+\mbox{\boldmath$\rho$}^{\Delta}\mbox{\boldmath$v$}\big{)}\mbox{\boldmath$\mathcal{G}$}^{\rm
c}\Big{)}-{\rm h.c.},$ (9)
where
$\mbox{\boldmath$\rho$}^{\Delta}=\mbox{\boldmath$\rho$}^{>}-\mbox{\boldmath$\rho$}^{<}$
and ${\mbox{\boldmath$h$}^{e}_{\mathrm{eff},}}_{\begin{subarray}{c}ij\\\
nm\end{subarray}}\equiv h^{e}_{{\rm
eff},ij}\delta_{mn}-\delta_{ij}h^{e{\dagger}}_{{\rm eff},nm}$. The solution of
the coupled ODEs for $\rho$, $\mbox{$\mathcal{G}$}^{\rm em}$ and
$\mbox{$\mathcal{G}$}^{\rm c}$ yields the time-dependent evolution of open
systems in the $T^{ph}$ approximation. Similarly, one can show that all the
212 NEGF methods of Ref. Pavlyukh _et al._ (2022b) are only affected by the
replacement $h^{e}\to h^{e}_{\rm eff}$. The addition of $I_{\rm em}$ in the
EOM for $\rho$, along with the propagation of the embedding correlator
according to Eq. (8), allows for studying open systems with a large number of
NEGF methods. These include methods to deal with the electron-phonon
interaction as well Karlsson _et al._ (2021). Noteworthily, all NEGF methods
in Ref. Pavlyukh _et al._ (2022b) guarantee the satisfaction of fundamental
conservation laws like the continuity equation and the energy balance.
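The two-electron-space bookkeeping around Eq. (9) can be illustrated with a short numpy sketch. The pair-index flattening used below (row pair $(i,m)$, column pair $(j,n)$) is a convention chosen for this demo and is only checked for self-consistency; the published index conventions may differ:

```python
import numpy as np

# Random 2-orbital example of the two-electron-space matrices of Eq. (9).
N = 2
rng = np.random.default_rng(1)
h_eff = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # plays h^e_eff (non-self-adjoint)
G = rng.normal(size=(N, N, N, N)) + 1j * rng.normal(size=(N, N, N, N))  # plays G^c

eye = np.eye(N)
hdag = h_eff.conj().T
# bold-h: h_eff acting on the first pair index minus h_eff^dagger on the second.
H4 = (np.einsum('ij,mn->imjn', h_eff, eye)
      - np.einsum('ij,nm->imjn', eye, hdag))

# Flatten pairs (i,m) -> i*N + m so pair tensors become N^2 x N^2 matrices.
H2 = H4.reshape(N * N, N * N)
G2 = G.reshape(N * N, N * N)

# The flattened matrix product reproduces the index-wise action of bold-h.
prod = (H2 @ G2).reshape(N, N, N, N)
direct = (np.einsum('ip,pmjn->imjn', h_eff, G)
          - np.einsum('qm,iqjn->imjn', hdag, G))
assert np.allclose(prod, direct)
print("pair-index flattening is self-consistent")
```

This is the design point of the two-electron-space notation: once the pair indices are flattened, the EOM for $\mathcal{G}^{\rm c}$ is an ordinary matrix ODE.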
Charge current.– Charge distributions, local currents, local moments, etc.,
can be extracted from the one-particle density matrix $\rho$. Information on
the electron-hole pair correlation function is carried by the $T^{ph}$
correlator $\mbox{$\mathcal{G}$}^{\rm c}$. The embedding correlator
$\mbox{$\mathcal{G}$}^{\rm em}$ is instead crucial for calculating the time-
dependent current $J_{\alpha}(t)$ at the interface between the system and
electrode $\alpha$. This current is given by the Meir-Wingreen formula Meir
and Wingreen (1992) and it can be written as the contribution of the $\alpha$
electrode to the embedding integral Jauho _et al._ (1994); Haug and Jauho
(2008); Stefanucci and van Leeuwen (2013), see Eq. (4), $J_{\alpha}(t)=2{\rm
Re}{\rm Tr}[I_{\alpha}(t)]$. Expressing the embedding self-energy in terms of
$\mathcal{G}^{\rm em}$ we find
$\displaystyle J_{\alpha}(t)=2s_{\alpha}(t){\rm Re}\,{\rm Tr}\Big{[}\Gamma_{\alpha}\Big{(}s_{\alpha}(t)\frac{2\rho(t)-1}{4}-\sum_{l}\frac{\eta_{l}}{\beta}\mathcal{G}^{\rm em}_{l\alpha}(t)\Big{)}\Big{]}.$ (10)
Satisfaction of the continuity equation implies
$\mathrm{CE}\equiv\frac{d}{dt}{\rm Tr}[\rho]+\sum_{\alpha}J_{\alpha}=0$.
Spectral decomposition vs pole expansion.– We first study a two-level molecule
coupled to two one-dimensional tight-binding electrodes. We set
$h_{11}=h_{22}=0$, $h_{12}=h_{21}=-T/2$ and measure all energies in units of
$T>0$. We consider an interaction $v_{ijmn}=v_{ij}\delta_{in}\delta_{jm}$ with
$v_{11}=v_{22}=1$ and $v_{12}=v_{21}=0.5$. The chemical potential is fixed at
the middle of the HF gap of the uncontacted system, in our case $\mu=1$, and
the inverse temperature is $\beta=100$. The electrodes are parameterized by
on-site and hopping energies $a_{\alpha}=\mu$ (half-filled electrodes) and
$b_{\alpha}=-8$, respectively; the energy dispersion thus takes the form
$\epsilon_{k\alpha}=a_{\alpha}-2|b_{\alpha}|\cos[\pi k/(N_{k}+1)]$, with
$N_{k}$ the number of discretized $k$ points. The left and right electrodes
are coupled to the first and second molecular levels, respectively, with
transition amplitude $T_{\alpha}=-0.2$, $\alpha=L,R$. As $T_{\alpha}\ll
b_{\alpha}$ the WBLA is accurate and one finds
$\Gamma_{L,ij}=\delta_{i1}\delta_{j1}\gamma_{L}$ and
$\Gamma_{R,ij}=\delta_{i2}\delta_{j2}\gamma_{R}$ with
$\gamma_{\alpha}=2T^{2}_{\alpha}/|b_{\alpha}|=0.01$.
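The parameters of this setup can be collected in a short numerical sketch (ours, not the authors' production code; all variable names are illustrative):

```python
import numpy as np

# Two-level molecule of this section; energies in units of T.
T = 1.0
h = np.array([[0.0, -T / 2], [-T / 2, 0.0]])   # molecular Hamiltonian h_ij
mu, beta = 1.0, 100.0                          # chemical potential, inverse temperature
b = -8.0                                       # electrode hopping energy b_alpha
T_tun = -0.2                                   # tunneling amplitude T_alpha

# WBLA linewidths gamma_alpha = 2 T_alpha^2 / |b_alpha| = 0.01;
# the left (right) lead couples only to level 1 (level 2).
gamma = 2 * T_tun**2 / abs(b)
Gamma_L = np.diag([gamma, 0.0])
Gamma_R = np.diag([0.0, gamma])
print(gamma)  # 0.01
```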
Figure 2: Dynamics of the two-level molecule contacted to left and right
electrodes. Occupation of the first level (a) and current at the left
interface (b) using the SD ($k$-points $N_{k}$) and PE (poles $N_{p}$)
schemes. The inset in panel (b) shows the continuity equation
$\mathrm{CE}=\frac{d}{dt}\mathrm{Tr}[\rho]+\sum_{\alpha}J_{\alpha}$. Energies
in units of $T$ and time in units of $1/T$.
In Fig. 2 we present time-dependent HF results for the occupation of the first
level [panel (a)] and the current at the left interface [panel (b)]. We
adiabatically switch on the contacts between the molecule and the electrodes
for $t<0$ with a “sine-squared” switch-on function Karlsson _et al._ (2018),
and then drive the system away from equilibrium with a constant bias
$V_{L}=T/2$, $V_{R}=0$ for $t\geq 0$ (hence $V_{\mathrm{gate}}=0$,
$E_{\mathrm{pulse}}=0$, $V_{\mathrm{bias}}=V_{L}-V_{R}$). The time-linear PE
and SD schemes perform similarly at convergence, as they should. However,
within the time-frame of the simulation, $N_{k}=1000$ $k$-points are needed to
converge the SD scheme, against only $N_{p}=20$ poles needed to converge the
PE scheme. Furthermore, the convergence with $N_{p}$ is independent of the
maximum simulation time, whereas $N_{k}$ must grow linearly with it;
otherwise finite-size effects, such as those visible for $N_{k}=500$ at time
$t\simeq 50$, take place. Steady values are attained on a time scale of a few
$1/\gamma_{\alpha}$ time units (see Supplemental Material). The inset in Fig. 2(b) shows that the
continuity equation is satisfied with high accuracy.
Correlated electron-hole transport.– Transport of correlated electron-holes
($eh$) is a fundamental process in photovoltaic junctions Ponseca _et al._
(2017); Pastor _et al._ (2019). We study the relaxation of a suddenly created
$eh$ in a two-band direct gap one-dimensional semiconductor coupled to WBLA
electrodes. The Hamiltonian of the system reads
$\hat{H}=\sum_{ij\nu}h_{ij\nu}\hat{d}_{i\nu}^{\dagger}\hat{d}_{j\nu}+U\sum_{i}\hat{n}_{iv}\hat{n}_{ic}$,
where $\hat{d}_{i\nu}$ destroys an electron in the $i$-th valence ($\nu=v$) or
conduction ($\nu=c$) orbital, and
$\hat{n}_{i\nu}\equiv\hat{d}_{i\nu}^{\dagger}\hat{d}_{i\nu}$ is the orbital
occupation. The one-electron integrals are $h_{iiv}=-\epsilon_{0}<0$,
$h_{iic}=\epsilon_{0}-U$ on site and $h_{ijv}=-h_{ijc}=T>0$ for nearest
neighbors Perfetto _et al._ (2019). In equilibrium the HF gap is
$\Delta=2(\epsilon_{0}-2T)$. The left and right electrodes are coupled to the
left- and right-most orbitals, respectively, with tunneling strength
$\gamma_{\alpha}$ independent of $\alpha$. Henceforth all energies are
measured in units of $\Delta/2$; we set $\epsilon_{0}=4.5$, $T=1.75$,
$\gamma_{\alpha}=0.1$ and work at inverse temperature $\beta=100$. The
equilibrium chemical potential is set in the middle of the HF gap of the
uncontacted system.
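The one-body blocks of this two-band Hamiltonian can be written down explicitly (an illustrative sketch; array names are ours):

```python
import numpy as np

# Two-band semiconductor of this section; energies in units of Delta/2,
# parameter values as quoted in the text.
N, eps0, T, U = 20, 4.5, 1.75, 2.0
hop = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
h_v = -eps0 * np.eye(N) + T * hop        # valence block:    h_iiv = -eps0, h_ijv = +T
h_c = (eps0 - U) * np.eye(N) - T * hop   # conduction block: h_iic = eps0 - U, h_ijc = -T

# Equilibrium HF gap of the isolated system: Delta = 2*(eps0 - 2T)
Delta = 2 * (eps0 - 2 * T)
print(Delta)  # 2.0
```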
Figure 3: Dynamics of an electron-hole pair in a one-dimensional semiconductor
junction with $N=20$ cells. (a-b) Time-dependent current at the right
interface in HF (dashed) and $T^{ph}$ (solid) for $U=1$ (a) and $U=2$ (b). (c)
Correlated part of the total number of $eh$ pairs for different interaction
strengths. (d-e) Conduction occupations (color map) versus time (vertical
axis) and cell position (horizontal axis) in HF (d) and $T^{ph}$ (e). (f)
Difference between panel (e) and the same dynamics without electrodes. The dashed lines
are guides to the eye.
At time $t=0$ we suddenly couple the system to the electrodes and create an
$eh$ excitation at the left-most orbitals, hence
$\rho_{iv,iv}(0)=(1-\delta_{i1})$ and $\rho_{ic,ic}(0)=\delta_{i1}$. In Fig.
3(a) and (b), we show the current at the right interface in two different
many-body methods, i.e., HF and $T^{ph}$, and for two values $U=1,2$ of the
$eh$ attraction. The results indicate that: (i) the velocity of the $eh$
wavepacket is faster in HF than $T^{ph}$ (each spike corresponds to an $eh$
bouncing at the right interface); (ii) the HF dynamics is coherent, the
wavepacket travelling almost undisturbed, whereas in $T^{ph}$ correlations are
responsible for a fast decoherence and wavepacket spreading. The slower
velocity in $T^{ph}$ is rationalized in Fig. 3(c) where we plot the correlated
part of the total number of $eh$ pairs:
$\sum_{i}\langle(1-\hat{n}_{iv})\hat{n}_{ic}\rangle_{\rm c}=-\sum_{i}\mathcal{G}^{\rm c}_{\begin{subarray}{c}ic\,iv\\ iv\,ic\end{subarray}}$. The build-up of
correlations is almost instantaneous. The initially uncorrelated $eh$ pair
binds and becomes heavier, thus reducing the propagation speed. The observed
decay at longer times is due to electron and hole tunneling into the
electrodes; at the steady state about $10^{-2}$ conduction electrons and
valence holes remain in the system (not shown). Both decoherence and velocity
reduction are well visible in Fig. 3(d) and (e) where we display the color
plot of the conduction occupations $n_{ic}(t)=\rho_{ic,ic}(t)$ in HF and
$T^{ph}$ for $U=2$. In $T^{ph}$ the $eh$ wavepacket loses coherence and
spreads after bouncing back and forth a few times. In Fig. 3(f) we analyze the
effect of the electrodes by showing the difference between the open and closed
system dynamics. In the open system the amplitude of the wavepacket and the
localization of the charge decrease faster than in the isolated system.
In conclusion, we put forward a time-linear approach to study the correlated
dynamics of open systems with a large number of NEGF methods. Our work
extends the reach of the Meir-Wingreen formula, allowing its use in contexts
and at levels of approximation that were previously unattainable in practice. The
ODE formulation lends itself to parallel computation, adaptive time-stepping
implementations and restart protocols, thus opening new avenues for the ab
initio description of time-dependent quantum transport phenomena.
###### Acknowledgements.
R.T. wishes to thank the Academy of Finland for funding under Project No.
345007. Y.P. acknowledges funding from NCN Grant POLONEZ BIS 1,
“Nonequilibrium electrons coupled with phonons and collective orders”,
2021/43/P/ST3/03293. G.S. and E.P. acknowledge funding from MIUR PRIN Grant
No. 20173B72NB, from the INFN17-Nemesys project. G.S. and E.P. acknowledge Tor
Vergata University for financial support through projects ULEXIEX and TESLA.
We also acknowledge CSC – IT Center for Science, Finland, for computational
resources.
## References
* Miwa _et al._ (2019) K. Miwa, H. Imada, M. Imai-Imada, K. Kimura, M. Galperin, and Y. Kim, Nano Lett. 19, 2803 (2019).
* Hübener _et al._ (2021) H. Hübener, U. De Giovannini, C. Schäfer, J. Andberger, M. Ruggenthaler, J. Faist, and A. Rubio, Nat. Mater. 20, 438 (2021).
* Ruggenthaler _et al._ (2018) M. Ruggenthaler, N. Tancogne-Dejean, J. Flick, H. Appel, and A. Rubio, Nat. Rev. Chem. 2, 0118 (2018).
* Månsson _et al._ (2021) E. P. Månsson, S. Latini, F. Covito, V. Wanie, M. Galli, E. Perfetto, G. Stefanucci, H. Hübener, U. De Giovannini, M. C. Castrovilli, A. Trabattoni, F. Frassetto, L. Poletto, J. B. Greenwood, F. Légaré, M. Nisoli, A. Rubio, and F. Calegari, Commun. Chem. 4, 73 (2021).
* Buzzi _et al._ (2020) M. Buzzi, D. Nicoletti, M. Fechner, N. Tancogne-Dejean, M. A. Sentef, A. Georges, T. Biesner, E. Uykur, M. Dressel, A. Henderson, T. Siegrist, J. A. Schlueter, K. Miyagawa, K. Kanoda, M.-S. Nam, A. Ardavan, J. Coulthard, J. Tindall, F. Schlawin, D. Jaksch, and A. Cavalleri, Phys. Rev. X 10, 031028 (2020).
* Dendzik _et al._ (2020) M. Dendzik, R. P. Xian, E. Perfetto, D. Sangalli, D. Kutnyakhov, S. Dong, S. Beaulieu, T. Pincelli, F. Pressacco, D. Curcio, S. Y. Agustsson, M. Heber, J. Hauer, W. Wurth, G. Brenner, Y. Acremann, P. Hofmann, M. Wolf, A. Marini, G. Stefanucci, L. Rettig, and R. Ernstorfer, Phys. Rev. Lett. 125, 096401 (2020).
* Nicoletti _et al._ (2022) D. Nicoletti, M. Buzzi, M. Fechner, P. E. Dolgirev, M. H. Michael, J. B. Curtis, E. Demler, G. D. Gu, and A. Cavalleri, Proc. Natl. Acad. Sci. 119, e2211670119 (2022).
* Latini _et al._ (2021) S. Latini, D. Shin, S. A. Sato, C. Schäfer, U. D. Giovannini, H. Hübener, and A. Rubio, Proc. Natl. Acad. Sci. 118, e2105618118 (2021).
* Bao _et al._ (2022) C. Bao, P. Tang, D. Sun, and S. Zhou, Nat. Rev. Phys. 4, 33 (2022).
* Schlawin _et al._ (2022) F. Schlawin, D. M. Kennes, and M. A. Sentef, Appl. Phys. Rev. 9, 011312 (2022).
* McIver _et al._ (2020) J. W. McIver, B. Schulte, F.-U. Stein, T. Matsuyama, G. Jotzu, G. Meier, and A. Cavalleri, Nat. Phys. 16, 38 (2020).
* Sung _et al._ (2020) J. Sung, C. Schnedermann, L. Ni, A. Sadhanala, R. Y. S. Chen, C. Cho, L. Priest, J. M. Lim, H.-K. Kim, B. Monserrat, P. Kukura, and A. Rao, Nat. Phys. 16, 171 (2020).
* De Sio _et al._ (2021) A. De Sio, E. Sommer, X. T. Nguyen, L. Groß, D. Popović, B. T. Nebgen, S. Fernandez-Alberti, S. Pittalis, C. A. Rozzi, E. Molinari, E. Mena-Osteritz, P. Bäuerle, T. Frauenheim, S. Tretiak, and C. Lienau, Nat. Nanotech. 16, 63 (2021).
* Abdo _et al._ (2021) M. Abdo, S. Sheng, S. Rolf-Pissarczyk, L. Arnhold, J. A. J. Burgess, M. Isobe, L. Malavolti, and S. Loth, ACS Photonics 8, 702 (2021).
* Niedermayr _et al._ (2022) A. Niedermayr, M. Volkov, S. A. Sato, N. Hartmann, Z. Schumacher, S. Neb, A. Rubio, L. Gallmann, and U. Keller, Phys. Rev. X 12, 021045 (2022).
* Landauer (1957) R. Landauer, IBM J. Res. Develop. 1, 233 (1957).
* Büttiker _et al._ (1985) M. Büttiker, Y. Imry, R. Landauer, and S. Pinhas, Phys. Rev. B 31, 6207 (1985).
* Büttiker (1986) M. Büttiker, Phys. Rev. Lett. 57, 1761 (1986).
* Meir and Wingreen (1992) Y. Meir and N. S. Wingreen, Phys. Rev. Lett. 68, 2512 (1992).
* Jauho _et al._ (1994) A.-P. Jauho, N. S. Wingreen, and Y. Meir, Phys. Rev. B 50, 5528 (1994).
* Danielewicz (1984) P. Danielewicz, Ann. Phys. (N. Y.) 152, 239 (1984).
* Stefanucci and van Leeuwen (2013) G. Stefanucci and R. van Leeuwen, _Nonequilibrium Many-Body Theory of Quantum Systems: A Modern Introduction_ (Cambridge University Press, Cambridge, 2013).
* Balzer and Bonitz (2013) K. Balzer and M. Bonitz, _Nonequilibrium Green’s Functions Approach to Inhomogeneous Systems_ (Springer, 2013).
* Haug and Jauho (2008) H. Haug and A.-P. Jauho, _Quantum Kinetics in Transport and Optics of Semiconductors_ (Springer, New York, 2008).
* Myöhänen _et al._ (2008) P. Myöhänen, A. Stan, G. Stefanucci, and R. van Leeuwen, EPL (Europhysics Letters) 84, 67001 (2008).
* Myöhänen _et al._ (2009) P. Myöhänen, A. Stan, G. Stefanucci, and R. van Leeuwen, Phys. Rev. B 80, 115107 (2009).
* Dahlen and van Leeuwen (2007) N. E. Dahlen and R. van Leeuwen, Phys. Rev. Lett. 98, 153004 (2007).
* Balzer _et al._ (2012) K. Balzer, S. Hermanns, and M. Bonitz, EPL (Europhysics Letters) 98, 67002 (2012).
* von Friesen _et al._ (2009) M. P. von Friesen, C. Verdozzi, and C.-O. Almbladh, Phys. Rev. Lett. 103, 176404 (2009).
* Stan _et al._ (2009) A. Stan, N. E. Dahlen, and R. van Leeuwen, J. Chem. Phys. 130, 224101 (2009).
* Puig von Friesen _et al._ (2010) M. Puig von Friesen, C. Verdozzi, and C.-O. Almbladh, Phys. Rev. B 82, 155108 (2010).
* Schüler _et al._ (2020) M. Schüler, D. Golež, Y. Murakami, N. Bittner, A. Herrmann, H. U. Strand, P. Werner, and M. Eckstein, Comp. Phys. Commun. 257, 107484 (2020).
* Säkkinen _et al._ (2015) N. Säkkinen, Y. Peng, H. Appel, and R. van Leeuwen, J. Chem. Phys. 143, 234102 (2015).
* Schüler _et al._ (2016) M. Schüler, J. Berakdar, and Y. Pavlyukh, Phys. Rev. B 93, 054303 (2016).
* Freericks _et al._ (2006) J. K. Freericks, V. M. Turkowski, and V. Zlatić, Phys. Rev. Lett. 97, 266408 (2006).
* Aoki _et al._ (2014) H. Aoki, N. Tsuji, M. Eckstein, M. Kollar, T. Oka, and P. Werner, Rev. Mod. Phys. 86, 779 (2014).
* Strand _et al._ (2015) H. U. R. Strand, M. Eckstein, and P. Werner, Phys. Rev. X 5, 011038 (2015).
* Golež _et al._ (2017) D. Golež, L. Boehnke, H. U. R. Strand, M. Eckstein, and P. Werner, Phys. Rev. Lett. 118, 246402 (2017).
* Kaye and Golež (2021) J. Kaye and D. Golež, SciPost Phys. 10, 091 (2021).
* Meirinhos _et al._ (2022) F. Meirinhos, M. Kajan, J. Kroha, and T. Bode, SciPost Phys. Core 5, 030 (2022).
* Dong _et al._ (2022) X. Dong, E. Gull, and H. U. R. Strand, Phys. Rev. B 106, 125153 (2022).
* Perfetto and Stefanucci (2018) E. Perfetto and G. Stefanucci, J. Phys. Condens. Matter 30, 465901 (2018).
* Lipavský _et al._ (1986) P. Lipavský, V. Špička, and B. Velický, Phys. Rev. B 34, 6933 (1986).
* Baym (1962) G. Baym, Phys. Rev. 127, 1391 (1962).
* Karlsson _et al._ (2021) D. Karlsson, R. van Leeuwen, Y. Pavlyukh, E. Perfetto, and G. Stefanucci, Phys. Rev. Lett. 127, 036402 (2021).
* Schlünzen _et al._ (2020) N. Schlünzen, J.-P. Joost, and M. Bonitz, Phys. Rev. Lett. 124, 076601 (2020).
* Joost _et al._ (2020) J.-P. Joost, N. Schlünzen, and M. Bonitz, Phys. Rev. B 101, 245101 (2020).
* Perfetto _et al._ (2022) E. Perfetto, Y. Pavlyukh, and G. Stefanucci, Phys. Rev. Lett. 128, 016801 (2022).
* Pavlyukh _et al._ (2021) Y. Pavlyukh, E. Perfetto, and G. Stefanucci, Phys. Rev. B 104, 035124 (2021).
* Pavlyukh _et al._ (2022a) Y. Pavlyukh, E. Perfetto, D. Karlsson, R. van Leeuwen, and G. Stefanucci, Phys. Rev. B 105, 125134 (2022a).
* Pavlyukh _et al._ (2022b) Y. Pavlyukh, E. Perfetto, and G. Stefanucci, Phys. Rev. B 106, L201408 (2022b).
* Krivenko _et al._ (2019) I. Krivenko, J. Kleinhenz, G. Cohen, and E. Gull, Phys. Rev. B 100, 201104 (2019).
* Albrecht _et al._ (2013) K. F. Albrecht, A. Martin-Rodero, R. C. Monreal, L. Mühlbacher, and A. Levy Yeyati, Phys. Rev. B 87, 085127 (2013).
* Wilner _et al._ (2014a) E. Y. Wilner, H. Wang, M. Thoss, and E. Rabani, Phys. Rev. B 89, 205129 (2014a).
* Galperin _et al._ (2005) M. Galperin, M. A. Ratner, and A. Nitzan, Nano Lett. 5, 125 (2005).
* Wilner _et al._ (2013) E. Y. Wilner, H. Wang, G. Cohen, M. Thoss, and E. Rabani, Phys. Rev. B 88, 045137 (2013).
* Galperin _et al._ (2007a) M. Galperin, M. A. Ratner, and A. Nitzan, J. Phys. Condens. Matter 19, 103201 (2007a).
* Galperin _et al._ (2007b) M. Galperin, A. Nitzan, and M. A. Ratner, Phys. Rev. B 75, 155312 (2007b).
* Wilner _et al._ (2014b) E. Y. Wilner, H. Wang, M. Thoss, and E. Rabani, Phys. Rev. B 90, 115145 (2014b).
* Rizzi _et al._ (2016) V. Rizzi, T. N. Todorov, J. J. Kohanoff, and A. A. Correa, Phys. Rev. B 93, 024306 (2016).
* Todorov _et al._ (2001) T. N. Todorov, J. Hoekstra, and A. P. Sutton, Phys. Rev. Lett. 86, 3606 (2001).
* Dundas _et al._ (2009) D. Dundas, E. J. McEniry, and T. N. Todorov, Nat. Nanotech. 4, 99 (2009).
* Bode _et al._ (2011) N. Bode, S. V. Kusminskiy, R. Egger, and F. von Oppen, Phys. Rev. Lett. 107, 036804 (2011).
* Chen _et al._ (2019) F. Chen, K. Miwa, and M. Galperin, J. Phys. Chem. A 123, 693 (2019).
* Cabra _et al._ (2020) G. Cabra, I. Franco, and M. Galperin, J. Chem. Phys. 152, 094101 (2020).
* Zheng _et al._ (2013) X. Zheng, Y. Yan, and M. Di Ventra, Phys. Rev. Lett. 111, 086601 (2013).
* Ridley _et al._ (2022) M. Ridley, N. W. Talarico, D. Karlsson, N. L. Gullo, and R. Tuovinen, J. Phys. A: Math. Theor. 55, 273001 (2022).
* Latini _et al._ (2014) S. Latini, E. Perfetto, A.-M. Uimonen, R. van Leeuwen, and G. Stefanucci, Phys. Rev. B 89, 075306 (2014).
* Tuovinen _et al._ (2020) R. Tuovinen, D. Golež, M. Eckstein, and M. A. Sentef, Phys. Rev. B 102, 115157 (2020).
* Tuovinen _et al._ (2021) R. Tuovinen, R. van Leeuwen, E. Perfetto, and G. Stefanucci, J. Chem. Phys. 154, 094104 (2021).
* Tuovinen (2021) R. Tuovinen, New J. Phys. 23, 083024 (2021).
* (72) As the GKBA performs poorly for narrow band electrodes Latini _et al._ (2014) we only consider the WBLA.
* Tuovinen _et al._ (2014) R. Tuovinen, E. Perfetto, G. Stefanucci, and R. van Leeuwen, Phys. Rev. B 89, 085131 (2014).
* (74) See Supplemental Material for a more detailed discussion.
* Ridley _et al._ (2015) M. Ridley, A. MacKinnon, and L. Kantorovich, Phys. Rev. B 91, 125433 (2015).
* (76) More refined propagators can be considered, see Ref. Latini _et al._ (2014), without affecting the overall computational scaling.
* Hu _et al._ (2010) J. Hu, R.-X. Xu, and Y. Yan, J. Chem. Phys. 133, 101106 (2010).
* Croy and Saalmann (2009) A. Croy and U. Saalmann, Phys. Rev. B 80, 245311 (2009).
* Zheng _et al._ (2010) X. Zheng, G. Chen, Y. Mo, S. Koo, H. Tian, C. Yam, and Y. Yan, J. Chem. Phys. 133, 114101 (2010).
* Zhang _et al._ (2013) Y. Zhang, S. Chen, and G. Chen, Phys. Rev. B 87, 085110 (2013).
* Kwok _et al._ (2013) Y. H. Kwok, H. Xie, C. Y. Yam, X. Zheng, and G. H. Chen, J. Chem. Phys. 139, 224111 (2013).
* Wang _et al._ (2013) R. Wang, D. Hou, and X. Zheng, Phys. Rev. B 88, 205126 (2013).
* Kwok _et al._ (2019) Y. Kwok, G. Chen, and S. Mukamel, Nano Lett. 19, 7006 (2019).
* Karlsson _et al._ (2018) D. Karlsson, R. van Leeuwen, E. Perfetto, and G. Stefanucci, Phys. Rev. B 98, 115148 (2018).
* Ponseca _et al._ (2017) C. S. Ponseca, P. Chábera, J. Uhlig, P. Persson, and V. Sundström, Chem. Rev. 117, 10940 (2017).
* Pastor _et al._ (2019) E. Pastor, J.-S. Park, L. Steier, S. Kim, M. Grätzel, J. R. Durrant, A. Walsh, and A. A. Bakulin, Nat. Commun. 10, 3962 (2019).
* Perfetto _et al._ (2019) E. Perfetto, D. Sangalli, A. Marini, and G. Stefanucci, Phys. Rev. Materials 3, 124601 (2019).
Supplemental Material for “Time-linear quantum transport simulations with
correlated nonequilibrium Green’s functions”
In this Supplemental Material, we consider only the embedding self-energy
between the system and the electrodes. Other collision terms, such as
electron-electron and electron-phonon interactions, can be taken care of by a
separate calculation Karlsson _et al._ (2021); Pavlyukh _et al._ (2022).
## I Quantum transport setup
We consider the following Hamiltonian for the quantum-correlated system
coupled to electrodes
$\displaystyle\hat{H}$
$\displaystyle=\sum_{k\alpha,\sigma}\epsilon_{k\alpha}\hat{d}_{k\alpha,\sigma}^{\dagger}\hat{d}_{k\alpha,\sigma}+\sum_{ij,\sigma}h_{ij}\hat{d}_{i,\sigma}^{\dagger}\hat{d}_{j,\sigma}$
$\displaystyle+\sum_{ik\alpha,\sigma}\left(T_{ik\alpha}\hat{d}_{i,\sigma}^{\dagger}\hat{d}_{k\alpha,\sigma}+\text{h.c.}\right)$
$\displaystyle+\frac{1}{2}\sum_{\begin{subarray}{c}ijmn\\\
\sigma\sigma^{\prime}\end{subarray}}v_{ijmn}\hat{d}_{i,\sigma}^{\dagger}\hat{d}_{j,\sigma^{\prime}}^{\dagger}\hat{d}_{m,\sigma^{\prime}}\hat{d}_{n,\sigma},$
(1)
where $\hat{d}^{(\dagger)}$ are the electronic annihilation (creation)
operators, $\epsilon_{k\alpha}$ is the energy dispersion of the $\alpha$-th
electrode, $h_{ij}$ are the one-particle matrix elements of the system,
$T_{ik\alpha}$ are the tunneling matrix elements between the system and the
electrodes, and $v_{ijmn}$ are the Coulomb integrals of the system. An
out-of-equilibrium condition, driving charge carriers through the system, is
introduced by assigning a time dependence on the horizontal branch of the
Keldysh contour,
$\epsilon_{k\alpha}\to\epsilon_{k\alpha}+V_{\alpha}(t)\equiv\bar{\epsilon}_{k\alpha}(t)$,
$h_{ij}\to h_{ij}(t)$, and $T_{ik\alpha}\to T_{ik\alpha}s_{\alpha}(t)$, where
$V_{\alpha}(t)$ is the (time-dependent) bias-voltage profile, and
$s_{\alpha}(t)$ is a switch-on function for the system-electrode coupling.
While the two-body interaction is itself instantaneous, we could also ramp the
strength of the Coulomb integrals with a switch-on function.
## II Retarded self-energy and the effective Hamiltonian
The retarded self-energy appears in the equation of motion for the retarded
Green’s function:
$\left[{\rm i}\frac{{\rm d}}{{\rm
d}t}-h^{e}(t)\right]G^{\text{R}}(t,t^{\prime})=\delta(t,t^{\prime})+\int{\rm
d}\bar{t}\Sigma^{\text{R}}(t,\bar{t})G^{\text{R}}(\bar{t},t^{\prime}),$ (2)
where $h^{e}(t)$ is the (time-dependent) one-electron Hamiltonian, including
the Hartree-Fock potential. The self-energy is constructed from the tunneling
matrices and the electrode Green’s function as
$\Sigma^{\text{R}}(t,t^{\prime})=\sum_{\alpha}\Sigma^{\text{R}}_{\alpha}(t,t^{\prime})$
with Tuovinen _et al._ (2014)
$\Sigma_{\alpha,ij}^{\text{R}}(t,t^{\prime})=\sum_{k}T_{ik\alpha}s_{\alpha}(t)g_{k\alpha}^{\text{R}}(t,t^{\prime})T_{k\alpha
j}s_{\alpha}(t^{\prime}),$ (3)
where we assumed the tunneling strength between the system and electrode
$\alpha$ depends on time only through an overall switch-on function
$s_{\alpha}(t)$. The free-electron, retarded Green’s function of the
$\alpha$-th electrode is
$g_{k\alpha}^{\text{R}}(t,t^{\prime})=-{\rm i}\theta(t-t^{\prime}){\rm
e}^{-{\rm i}\phi_{\alpha}(t,t^{\prime})}{\rm e}^{-{\rm
i}\epsilon_{k\alpha}(t-t^{\prime})}$ (4)
with $\epsilon_{k\alpha}$ and
$\phi_{\alpha}(t,t^{\prime})\equiv\int_{t^{\prime}}^{t}{\rm
d}\bar{t}V_{\alpha}(\bar{t})$ the energy dispersion and the bias-voltage phase
factor, respectively. Inserting this into the expression for the self-energy,
we can transform the $k$-summation into a frequency integral as
$\displaystyle\Sigma_{\alpha,ij}^{\text{R}}(t,t^{\prime})$
$\displaystyle=-{\rm i}s_{\alpha}(t)s_{\alpha}(t^{\prime}){\rm e}^{-{\rm
i}\phi_{\alpha}(t,t^{\prime})}\int\frac{{\rm d}\omega}{2\pi}{\rm e}^{-{\rm
i}\omega(t-t^{\prime})}$ $\displaystyle\hskip 50.0pt\times
2\pi\sum_{k}T_{ik\alpha}\delta(\omega-\epsilon_{k\alpha})T_{k\alpha
j}\theta(t-t^{\prime})$ $\displaystyle=-\frac{{\rm
i}}{2}s_{\alpha}^{2}(t)\delta(t-t^{\prime})\Gamma_{\alpha,ij},$ (5)
where we employed the wide-band limit approximation (WBLA),
$\Gamma_{\alpha,ij}(\omega)=2\pi\sum_{k}T_{ik\alpha}\delta(\omega-\epsilon_{k\alpha})T_{k\alpha
j}\simeq\Gamma_{\alpha,ij}(\mu)\equiv\Gamma_{\alpha,ij}$, and we used
$\int\frac{{\rm d}\omega}{2\pi}{\rm e}^{-{\rm
i}\omega(t-t^{\prime})}\theta(t-t^{\prime})=\delta(t-t^{\prime})/2$. Inserting
this into the r.h.s. of Eq. (2) we obtain
$\displaystyle\int{\rm
d}\bar{t}\Sigma^{\text{R}}(t,\bar{t})G^{\text{R}}(\bar{t},t^{\prime})$
$\displaystyle=\int{\rm d}\bar{t}\sum_{\alpha}[-{\rm
i}\Gamma_{\alpha}s_{\alpha}^{2}(t)/2]\delta(t-\bar{t})G^{\text{R}}(\bar{t},t^{\prime})$
$\displaystyle=-\frac{{\rm
i}}{2}\sum_{\alpha}\Gamma_{\alpha}s_{\alpha}^{2}(t)G^{\text{R}}(t,t^{\prime}),$
(6)
so we may write the equation of motion as
$\left[{\rm i}\frac{{\rm d}}{{\rm
d}t}-h^{e}_{\text{eff}}(t)\right]G^{\text{R}}(t,t^{\prime})=\delta(t,t^{\prime})$
(7)
with the non-self-adjoint, effective Hamiltonian $h^{e}_{\text{eff}}(t)\equiv
h^{e}(t)-\frac{{\rm i}}{2}\sum_{\alpha}\Gamma_{\alpha}s_{\alpha}^{2}(t)$. When
considering the GKBA for the lesser and greater Green’s functions,
$G^{\lessgtr}(t,t^{\prime})=-G^{\text{R}}(t,t^{\prime})\rho^{\lessgtr}(t^{\prime})+\rho^{\lessgtr}(t)G^{\text{A}}(t,t^{\prime}),$
(8)
we take the quasi-particle propagators for the coupled system as
$G^{\text{R}/\text{A}}(t,t^{\prime})=\mp{\rm
i}\theta[\pm(t-t^{\prime})]\mathrm{T}{\rm e}^{-{\rm
i}\int_{t^{\prime}}^{t}{\rm d}\bar{t}h^{e}_{\text{eff}}(\bar{t})}$. Within the
GKBA, the lesser and greater Green’s functions thus satisfy Karlsson _et al._
(2021)
$\displaystyle{\rm i}\frac{{\rm d}}{{\rm d}t}G^{\lessgtr}(t,t^{\prime})$
$\displaystyle=h^{e}_{\text{eff}}(t)G^{\lessgtr}(t,t^{\prime}),$ (9)
$\displaystyle-{\rm i}\frac{{\rm d}}{{\rm
d}t^{\prime}}G^{\lessgtr}(t,t^{\prime})$
$\displaystyle=G^{\lessgtr}(t,t^{\prime})h^{e\dagger}_{\text{eff}}(t^{\prime}).$
(10)
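For a constant $h^{e}_{\text{eff}}$ the time-ordered exponential reduces to an ordinary matrix exponential, $G^{\text{R}}(t,t^{\prime})=-{\rm i}\theta(t-t^{\prime}){\rm e}^{-{\rm i}h^{e}_{\text{eff}}(t-t^{\prime})}$, and Eq. (7) can be checked numerically. The sketch below is ours, with an arbitrary illustrative $2\times 2$ effective Hamiltonian:

```python
import numpy as np

# Arbitrary 2x2 example, not a system from the text.
h = np.array([[0.0, -0.5], [-0.5, 0.0]])
Gamma = 0.01 * np.eye(2)
h_eff = h - 0.5j * Gamma                # non-self-adjoint effective Hamiltonian

w, S = np.linalg.eig(h_eff)             # h_eff = S diag(w) S^{-1}
Sinv = np.linalg.inv(S)

def GR(tau):                            # G^R(t, t') with tau = t - t'
    if tau <= 0.0:
        return np.zeros((2, 2), dtype=complex)
    return -1j * (S @ np.diag(np.exp(-1j * w * tau)) @ Sinv)

# [i d/dt - h_eff] G^R should vanish for t > t' (central finite difference)
tau, dtau = 3.0, 1e-5
dGR = (GR(tau + dtau) - GR(tau - dtau)) / (2 * dtau)
residual = 1j * dGR - h_eff @ GR(tau)
print(np.max(np.abs(residual)))         # small finite-difference error
```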
## III Lesser self-energy
For the equations of motion for the one-particle density matrix, we also need
the lesser self-energy Tuovinen _et al._ (2014):
$\Sigma_{\alpha,ij}^{<}(t,t^{\prime})=\sum_{k}T_{ik\alpha}s_{\alpha}(t)g_{k\alpha}^{<}(t,t^{\prime})T_{k\alpha
j}s_{\alpha}(t^{\prime}),$ (11)
where the free-electron, lesser Green’s function of the $\alpha$-th electrode
is
$g_{k\alpha}^{<}(t,t^{\prime})={\rm i}f(\epsilon_{k\alpha}-\mu){\rm e}^{-{\rm
i}\int_{t^{\prime}}^{t}{\rm
d}\bar{t}[\epsilon_{k\alpha}+V_{\alpha}(\bar{t})]}.$ (12)
Here, it is worth noting that the initial condition originates from the
Matsubara component, i.e., the Fermi function contains the unbiased electrode
energy dispersion shifted by the chemical potential Stefanucci and van Leeuwen
(2013). Transforming the $k$-summation into a frequency integral gives
$\displaystyle\Sigma_{\alpha,ij}^{<}(t,t^{\prime})$ $\displaystyle={\rm
i}s_{\alpha}(t)s_{\alpha}(t^{\prime}){\rm e}^{-{\rm
i}\phi_{\alpha}(t,t^{\prime})}\int\frac{{\rm d}\omega}{2\pi}f(\omega-\mu){\rm
e}^{-{\rm i}\omega(t-t^{\prime})}$ $\displaystyle\hskip 50.0pt\times
2\pi\sum_{k}T_{ik\alpha}\delta(\omega-\epsilon_{k\alpha})T_{k\alpha j}.$ (13)
In matrix form, we may then write (employing again the WBLA) Ridley _et al._
(2017)
$\Sigma_{\alpha}^{<}(t,t^{\prime})={\rm
i}\Gamma_{\alpha}s_{\alpha}(t)s_{\alpha}(t^{\prime}){\rm e}^{-{\rm
i}\phi_{\alpha}(t,t^{\prime})}\int\frac{{\rm d}\omega}{2\pi}f(\omega-\mu){\rm
e}^{-{\rm i}\omega(t-t^{\prime})}.$ (14)
Away from the time diagonal, $t\neq t^{\prime}$, the integral is well
behaved; a divergence arises, however, at $t=t^{\prime}$. At zero
temperature one finds $\Sigma^{<}(t,t^{\prime}\sim
t)\sim\frac{1}{t-t^{\prime}}-{\rm i}\pi\delta(t-t^{\prime})$, see Eq. (11) in Ref.
Stefanucci _et al._ (2008). This divergence is, however, harmless since in the
EOM the self-energy is always convoluted with the Green’s function over time;
see below. The time-linear scheme is “blind” to the divergence of the
embedding self-energy since it is cast in terms of the embedding correlator,
which is the integral of the power-law divergent part of $\Sigma^{<}$ and the
advanced Green’s function.
The Fermi function can be evaluated employing a pole expansion Hu _et al._
(2010); Ridley _et al._ (2017)
$f(x)\equiv\frac{1}{{\rm e}^{\beta
x}+1}=\frac{1}{2}-\lim_{N_{p}\to\infty}\sum_{l=1}^{N_{p}}\eta_{l}\left(\frac{1}{\beta
x+{\rm i}\zeta_{l}}+\frac{1}{\beta x-{\rm i}\zeta_{l}}\right),$ (15)
where $\eta$ and $\pm{\rm i}\zeta$ are the residues and poles ($\zeta>0$),
respectively. In the Matsubara case, one would take $\eta_{l}=1$ and
$\zeta_{l}=\pi(2l-1)$, but the expansion can be optimized through the solution
of an eigenvalue problem of a specific, tridiagonal matrix Hu _et al._
(2010). Because of the exponential ${\rm e}^{-{\rm i}\omega(t-t^{\prime})}$,
the nontrivial part of the integral is closed on the lower-half complex plane
for $t>t^{\prime}$ and on the upper-half complex plane for $t<t^{\prime}$:
$\displaystyle\Sigma_{\alpha}^{<}(t,t^{\prime})$ $\displaystyle={\rm
i}\Gamma_{\alpha}s_{\alpha}(t)s_{\alpha}(t^{\prime}){\rm e}^{-{\rm
i}\phi_{\alpha}(t,t^{\prime})}\frac{1}{2}\int\frac{{\rm d}\omega}{2\pi}{\rm
e}^{-{\rm i}\omega(t-t^{\prime})}$ $\displaystyle-{\rm
i}\Gamma_{\alpha}s_{\alpha}(t)s_{\alpha}(t^{\prime}){\rm e}^{-{\rm
i}\phi_{\alpha}(t,t^{\prime})}\sum_{l}\frac{\eta_{l}}{\beta}$
$\displaystyle\times\int\frac{{\rm
d}\omega}{2\pi}\left(\frac{1}{\omega-\mu+{\rm
i}\zeta_{l}/\beta}+\frac{1}{\omega-\mu-{\rm i}\zeta_{l}/\beta}\right){\rm
e}^{-{\rm i}\omega(t-t^{\prime})}$ $\displaystyle=\frac{{\rm
i}}{2}\Gamma_{\alpha}s_{\alpha}^{2}(t)\delta(t-t^{\prime})-{\rm
i}\Gamma_{\alpha}s_{\alpha}(t)s_{\alpha}(t^{\prime}){\rm e}^{-{\rm
i}\phi_{\alpha}(t,t^{\prime})}\sum_{l}\frac{\eta_{l}}{\beta}$
$\displaystyle\times\left[-{\rm i}{\rm e}^{-{\rm i}(\mu-{\rm
i}\frac{\zeta_{l}}{\beta})(t-t^{\prime})}\theta(t-t^{\prime})+{\rm i}{\rm
e}^{-{\rm i}(\mu+{\rm
i}\frac{\zeta_{l}}{\beta})(t-t^{\prime})}\theta(t^{\prime}-t)\right],$ (16)
where we noticed a missing prefactor $1/\beta$ in Ref. Ridley _et al._
(2017). In our situation, we only require the $t>t^{\prime}$ contribution:
$\displaystyle\Sigma_{\alpha}^{<}(t,t^{\prime})$ $\displaystyle=\frac{{\rm
i}}{2}s_{\alpha}^{2}(t)\delta(t-t^{\prime})\Gamma_{\alpha}$ $\displaystyle-
s_{\alpha}(t)\sum_{l}\frac{\eta_{l}}{\beta}s_{\alpha}(t^{\prime}){\rm
e}^{-{\rm i}\phi_{\alpha}(t,t^{\prime})}{\rm e}^{-{\rm i}(\mu-{\rm
i}\zeta_{l}/\beta)(t-t^{\prime})}\Gamma_{\alpha}.$ (17)
We observe that the inclusion of a finite number of poles removes the power-law
divergence and makes the function numerically more tractable Ridley _et al._
(2017).
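As an illustration of Eq. (15), the Matsubara variant of the expansion can be evaluated numerically (a sketch of ours; the optimized residues and poles of Hu _et al._ would simply replace the arrays below):

```python
import numpy as np

# Pole expansion of the Fermi function, Eq. (15), with the Matsubara
# choice eta_l = 1, zeta_l = pi*(2l - 1).
def fermi_pole_expansion(x, beta, n_poles):
    zeta = np.pi * (2 * np.arange(1, n_poles + 1) - 1)
    y = beta * np.asarray(x, dtype=float)
    # each conjugate pole pair contributes eta_l * 2y / (y^2 + zeta_l^2)
    return 0.5 - np.sum(2 * y / (y**2 + zeta[:, None]**2), axis=0)

x = np.linspace(-0.5, 0.5, 11)
exact = 1.0 / (np.exp(100.0 * x) + 1.0)
approx = fermi_pole_expansion(x, 100.0, 2000)
print(np.max(np.abs(approx - exact)))  # slow Matsubara convergence in N_p
```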
## IV Embedding integral
Inserting the GKBA, Eq. (8), into the embedding integral appearing in Eq. (2) of the
main text yields
$\displaystyle I_{\text{em}}(t)$ $\displaystyle=\int{\rm
d}\bar{t}\left[\Sigma^{<}(t,\bar{t})G^{\text{A}}(\bar{t},t)+\Sigma^{\text{R}}(t,\bar{t})G^{<}(\bar{t},t)\right]$
$\displaystyle=\int{\rm
d}\bar{t}\left\\{\Sigma^{<}(t,\bar{t})G^{\text{A}}(\bar{t},t)+\frac{1}{2}\Gamma(t)\delta(t-\bar{t})[-{\rm
i}G^{<}(\bar{t},t)]\right\\}$ $\displaystyle=\int{\rm
d}\bar{t}\Sigma^{<}(t,\bar{t})G^{\text{A}}(\bar{t},t)+\frac{1}{2}\Gamma(t)\rho(t),$
(18)
where we defined
$\Gamma(t)\equiv\sum_{\alpha}s_{\alpha}^{2}(t)\Gamma_{\alpha}$ and
$\rho(t)\equiv-{\rm i}G^{<}(t,t)$. Further, inserting Eq. (17) gives
$\displaystyle I_{\text{em}}(t)$
$\displaystyle=\frac{1}{2}\Gamma(t)\rho(t)+\sum_{\alpha}\int{\rm
d}\bar{t}\left\\{\frac{{\rm
i}}{2}s_{\alpha}^{2}(t)\delta(t-\bar{t})\Gamma_{\alpha}G^{\text{A}}(\bar{t},t)\right.$
$\displaystyle\left.-s_{\alpha}(t){\textstyle\sum_{l}}\frac{\eta_{l}}{\beta}s_{\alpha}(\bar{t}){\rm
e}^{-{\rm i}\phi_{\alpha}(t,\bar{t})}{\rm e}^{-{\rm i}(\mu-{\rm
i}\zeta_{l}/\beta)(t-\bar{t})}\Gamma_{\alpha}G^{\text{A}}(\bar{t},t)\right\\}$
$\displaystyle=\frac{1}{2}\Gamma(t)\rho(t)-\frac{1}{4}\Gamma(t)-\sum_{l\alpha}s_{\alpha}(t)\frac{\eta_{l}}{\beta}\Gamma_{\alpha}\mathcal{G}_{l\alpha}^{\text{em}}(t),$
(19)
where we introduced the embedding correlator
$\mathcal{G}_{l\alpha}^{\text{em}}(t)=\int{\rm
d}\bar{t}s_{\alpha}(\bar{t}){\rm e}^{-{\rm i}\phi_{\alpha}(t,\bar{t})}{\rm
e}^{-{\rm i}\mu(t-\bar{t})}{\rm
e}^{-\zeta_{l}(t-\bar{t})/\beta}G^{\text{A}}(\bar{t},t).$ (20)
In Eq. (19), the prefactor of the second term results from the implicit step
function within the advanced Green’s function: $\int{\rm
d}\bar{t}\delta(t-\bar{t})G^{\text{A}}(\bar{t},t)={\rm i}/2$.
## V Equations of motion
With the above-derived embedding integral, the equation of motion for the
electronic density matrix becomes (here we omit the collision integral
$I_{\text{c}}$ resulting from, e.g., electron-electron or electron-phonon
correlations; it can be directly included)
$\displaystyle{\rm i}\frac{{\rm d}}{{\rm d}t}\rho(t)$
$\displaystyle=\left[h^{e}(t)\rho(t)-{\rm
i}I_{\text{em}}(t)\right]-\text{h.c.}$
$\displaystyle=\left[h^{e}(t)\rho(t)-\frac{{\rm
i}}{2}\Gamma(t)\rho(t)+\frac{{\rm i}}{4}\Gamma(t)\right.$ $\displaystyle\hskip
20.0pt\left.+{\rm
i}{\textstyle\sum_{l\alpha}}s_{\alpha}(t)\frac{\eta_{l}}{\beta}\Gamma_{\alpha}\mathcal{G}_{l\alpha}^{\text{em}}(t)\right]-\text{h.c.}$
$\displaystyle=\left[h^{e}_{\text{eff}}(t)\rho(t)+\frac{{\rm
i}}{4}\Gamma(t)+{\rm
i}\sum_{l\alpha}s_{\alpha}(t)\frac{\eta_{l}}{\beta}\Gamma_{\alpha}\mathcal{G}_{l\alpha}^{\text{em}}(t)\right]-\text{h.c.},$
(21)
where we utilized the effective Hamiltonian. The equation of motion for the
embedding correlator is derived as
$\displaystyle{\rm i}\frac{{\rm d}}{{\rm
d}t}\mathcal{G}_{l\alpha}^{\text{em}}(t)$ $\displaystyle=\int{\rm
d}\bar{t}s_{\alpha}(\bar{t})\left\\{{\rm i}\frac{{\rm d}}{{\rm d}t}{\rm
e}^{-{\rm i}\int_{\bar{t}}^{t}{\rm d}\tau\left[V_{\alpha}(\tau)+\mu-{\rm
i}\zeta_{l}/\beta\right]}\right\\}G^{\text{A}}(\bar{t},t)$
$\displaystyle-\int{\rm d}\bar{t}s_{\alpha}(\bar{t}){\rm e}^{-{\rm
i}\int_{\bar{t}}^{t}{\rm d}\tau\left[V_{\alpha}(\tau)+\mu-{\rm
i}\zeta_{l}/\beta\right]}\left\\{-{\rm i}\frac{{\rm d}}{{\rm
d}t}G^{\text{A}}(\bar{t},t)\right\\}$ $\displaystyle=\int{\rm
d}\bar{t}s_{\alpha}(\bar{t})\left[V_{\alpha}(t)+\mu-{\rm
i}\zeta_{l}/\beta\right]{\rm e}^{-{\rm i}\int_{\bar{t}}^{t}{\rm
d}\tau\left[V_{\alpha}(\tau)+\mu-{\rm
i}\zeta_{l}/\beta\right]}G^{\text{A}}(\bar{t},t)$ $\displaystyle-\int{\rm
d}\bar{t}s_{\alpha}(\bar{t}){\rm e}^{-{\rm i}\int_{\bar{t}}^{t}{\rm
d}\tau\left[V_{\alpha}(\tau)+\mu-{\rm
i}\zeta_{l}/\beta\right]}\left[G^{\text{A}}(\bar{t},t)h^{e\dagger}_{\text{eff}}(t)+\delta(\bar{t}-t)\right]$
$\displaystyle=-s_{\alpha}(t)-\mathcal{G}_{l\alpha}^{\text{em}}(t)\left[h^{e\dagger}_{\text{eff}}(t)-V_{\alpha}(t)-\mu+{\rm
i}\zeta_{l}/\beta\right],$ (22)
where we used the equation of motion (7) of the quasi-particle propagator for the coupled system, together with
$G^{\text{A}}(\bar{t},t)=[G^{\text{R}}(t,\bar{t})]^{\dagger}$.
The embedding correlator in Eq. (20) is an $N_{\text{sys}}\times
N_{\text{sys}}$ matrix. The multiplication in Eq. (V) thus leads to an overall
computational complexity of $N_{\text{sys}}^{3}\times N_{p}\times N_{\rm leads}$.
## VI $k$-resolved spectral decomposition
Alternatively, we could also write the embedding integral (IV) directly in
terms of the $k$-resolved electrode Green’s functions (spectral
decomposition):
$\displaystyle I_{\text{em},ij}(t)$ $\displaystyle=\sum_{m}\int{\rm
d}\bar{t}\left[\Sigma_{im}^{<}(t,\bar{t})G_{mj}^{\text{A}}(\bar{t},t)+\Sigma_{im}^{\text{R}}(t,\bar{t})G_{mj}^{<}(\bar{t},t)\right]$
$\displaystyle=\sum_{k\alpha m}\int{\rm
d}\bar{t}\left[T_{ik\alpha}s_{\alpha}(t)g_{k\alpha}^{<}(t,\bar{t})T_{k\alpha
m}s_{\alpha}(\bar{t})G_{mj}^{\text{A}}(\bar{t},t)\right.$ $\displaystyle\hskip
50.0pt\left.+T_{ik\alpha}s_{\alpha}(t)g_{k\alpha}^{\text{R}}(t,\bar{t})T_{k\alpha
m}s_{\alpha}(\bar{t})G_{mj}^{<}(\bar{t},t)\right]$
$\displaystyle=\sum_{k\alpha}T_{ik\alpha}s_{\alpha}(t)\widetilde{\mathcal{G}}_{k\alpha
j}^{\text{em}}(t),$ (23)
where we introduced another embedding correlator
$\displaystyle\widetilde{\mathcal{G}}_{k\alpha j}^{\text{em}}(t)$
$\displaystyle=\sum_{m}\int{\rm
d}\bar{t}\left[g_{k\alpha}^{<}(t,\bar{t})T_{k\alpha
m}s_{\alpha}(\bar{t})G_{mj}^{\text{A}}(\bar{t},t)\right.$ $\displaystyle\hskip
50.0pt\left.+g_{k\alpha}^{\text{R}}(t,\bar{t})T_{k\alpha
m}s_{\alpha}(\bar{t})G_{mj}^{<}(\bar{t},t)\right].$ (24)
The embedding integral in Eq. (VI) enters the equation of motion (2) in the
main text. The correlator $\widetilde{\mathcal{G}}_{k\alpha j}^{\text{em}}$ is
a scalar quantity, distinct from the embedding correlator
$\mathcal{G}_{l\alpha}^{\text{em}}$ in Eq. (20), which is an
$N_{\text{sys}}\times N_{\text{sys}}$ matrix for every pole $l$ and electrode
$\alpha$.
To derive an equation of motion for $\widetilde{\mathcal{G}}_{k\alpha
j}^{\text{em}}(t)$ we need the equations of motion for the free-electron
electrode Green’s functions, cf. Eqs. (4) and (12):
$\displaystyle{\rm i}\frac{{\rm d}}{{\rm
d}t}g_{k\alpha}^{\text{R}}(t,t^{\prime})$
$\displaystyle=\bar{\epsilon}_{k\alpha}(t)g_{k\alpha}^{\text{R}}(t,t^{\prime})+\delta(t-t^{\prime}),$
(25) $\displaystyle{\rm i}\frac{{\rm d}}{{\rm
d}t}g_{k\alpha}^{<}(t,t^{\prime})$
$\displaystyle=\bar{\epsilon}_{k\alpha}(t)g_{k\alpha}^{<}(t,t^{\prime}),$ (26)
where $\bar{\epsilon}_{k\alpha}(t)=\epsilon_{k\alpha}+V_{\alpha}(t)$ is the
biased electrode energy dispersion, see below Eq. (I). We then find
$\displaystyle{\rm i}\frac{{\rm d}}{{\rm d}t}\widetilde{\mathcal{G}}_{k\alpha
j}^{\text{em}}(t)$ $\displaystyle=\sum_{m}\int{\rm d}\bar{t}\left\\{\left[{\rm
i}\frac{{\rm d}}{{\rm d}t}g_{k\alpha}^{<}(t,\bar{t})\right]T_{k\alpha
m}s_{\alpha}(\bar{t})G_{mj}^{\text{A}}(\bar{t},t)\right.$ $\displaystyle\hskip
40.0pt\left.-g_{k\alpha}^{<}(t,\bar{t})T_{k\alpha
m}s_{\alpha}(\bar{t})\left[-{\rm i}\frac{{\rm d}}{{\rm
d}t}G_{mj}^{\text{A}}(\bar{t},t)\right]\right.$ $\displaystyle\hskip
40.0pt\left.+\left[{\rm i}\frac{{\rm d}}{{\rm
d}t}g_{k\alpha}^{\text{R}}(t,\bar{t})\right]T_{k\alpha
m}s_{\alpha}(\bar{t})G_{mj}^{<}(\bar{t},t)\right.$ $\displaystyle\hskip
40.0pt\left.-g_{k\alpha}^{\text{R}}(t,\bar{t})T_{k\alpha
m}s_{\alpha}(\bar{t})\left[-{\rm i}\frac{{\rm d}}{{\rm
d}t}G_{mj}^{<}(\bar{t},t)\right]\right\\}$ $\displaystyle=\sum_{m}\int{\rm
d}\bar{t}\left\\{\bar{\epsilon}_{k\alpha}(t)g_{k\alpha}^{<}(t,\bar{t})T_{k\alpha
m}s_{\alpha}(\bar{t})G_{mj}^{\text{A}}(\bar{t},t)\right.$ $\displaystyle\hskip
40.0pt\left.-g_{k\alpha}^{<}(t,\bar{t})T_{k\alpha
m}s_{\alpha}(\bar{t})\left[{\textstyle\sum_{p}}G_{mp}^{\text{A}}(\bar{t},t)h_{\text{eff},pj}^{e\dagger}(t)+\delta(t-\bar{t})\right]\right.$
$\displaystyle\hskip
40.0pt\left.+\left[\bar{\epsilon}_{k\alpha}(t)g_{k\alpha}^{\text{R}}(t,\bar{t})+\delta(t-\bar{t})\right]T_{k\alpha
m}s_{\alpha}(\bar{t})G_{mj}^{<}(\bar{t},t)\right.$ $\displaystyle\hskip
40.0pt\left.-g_{k\alpha}^{\text{R}}(t,\bar{t})T_{k\alpha
m}s_{\alpha}(\bar{t}){\textstyle\sum_{p}}G_{mp}^{<}(\bar{t},t)h_{\text{eff},pj}^{e\dagger}(t)\right\\}$
$\displaystyle=\bar{\epsilon}_{k\alpha}(t)\widetilde{\mathcal{G}}_{k\alpha
j}^{\text{em}}(t)+{\rm i}\sum_{m}T_{k\alpha
m}s_{\alpha}(t)\left[\rho_{mj}(t)-f(\epsilon_{k\alpha}-\mu)\delta_{mj}\right]$
$\displaystyle-\sum_{p}\widetilde{\mathcal{G}}_{k\alpha
p}^{\text{em}}(t)h_{\text{eff},pj}^{e\dagger}(t),$ (27)
where we employed Eq. (12) for the electrode Green’s function in the equal-time
limit, and the GKBA equation of motion (10) for the lesser Green’s function.
The solution of the EOM for all the scalar quantities
$\widetilde{\mathcal{G}}_{k\alpha j}^{\text{em}}(t)$ scales as
$N_{\text{sys}}^{2}\times N_{k}\times N_{\rm leads}$. The scaling ratio
between the pole-expansion scheme and the spectral-decomposition scheme is
therefore $N_{p}\times N_{\text{sys}}/N_{k}$.
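The cost bookkeeping behind this comparison can be sketched numerically (a minimal sketch; the function names and the counting convention — one dense $N_{\text{sys}}\times N_{\text{sys}}$ matrix product per pole and lead versus one $N_{\text{sys}}^{2}$ update per $k$ point and lead — are ours):

```python
def cost_pole(n_sys, n_p, n_leads):
    # Pole expansion: an N_sys x N_sys matrix product per pole and per lead
    return n_sys**3 * n_p * n_leads

def cost_spectral(n_sys, n_k, n_leads):
    # k-resolved spectral decomposition: an N_sys^2 update per k point and lead
    return n_sys**2 * n_k * n_leads

def scaling_ratio(n_sys, n_p, n_k):
    # Ratio (pole / spectral) quoted in the text: N_p * N_sys / N_k
    return n_p * n_sys / n_k
```

For instance, with $N_{\text{sys}}=10$, $N_p=20$, $N_k=500$ the ratio is $0.4$, i.e. the pole scheme is cheaper whenever $N_p N_{\text{sys}} < N_k$.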
Figure 1: Transient currents at the left electrode interface in the dimer case
compared to the converged results shown in Fig. 2 of main text. Top:
$k$-points spectral decomposition with varying number of points $N_{k}$.
Bottom: pole-expansion with varying number of poles $N_{p}$.
In Fig. 1, we show a calculation corresponding to the dimer system studied in
Fig. 2 of the main text. The system is coupled to one-dimensional
tight-binding electrodes (on-site energy $a_{\alpha}=\mu$, hopping $b_{\alpha}$)
with the energy dispersion
$\epsilon_{k\alpha}=a_{\alpha}-2|b_{\alpha}|\cos[\pi k/(N_{k}+1)]$ and the
tunneling matrix elements (to the terminal sites of the electrodes)
$T_{ik\alpha}=T_{\alpha}\sqrt{2/(N_{k}+1)}\sin(\pi k/(N_{k}+1))$, where
$N_{k}$ is the number of discretized $k$ points. The left-interface
current, evaluated for different numbers of $k$ points ($N_{k}$) or poles
($N_{p}$), is compared to the converged result shown in the main text. We see
that a recurrence time due to a finite-size effect is present in the
spectral-decomposition scheme. This recurrence time equals $N_{k}/|b_{\alpha}|$
and corresponds to the time it takes for the electronic wavefront to travel
from one of the interfaces to the boundary of the corresponding electrode and
back. Other limitations of the spectral-decomposition scheme are discussed in
the main text. In contrast, the pole-expansion scheme shows no finite-size
effects; instead, if the number of poles is too low, the steady value of
the current is inaccurate. Compared to the spectral-decomposition scheme, the
pole-expansion scheme converges extremely fast.
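The electrode model above can be reproduced in a few lines (a sketch; the default parameter values and the function name are ours). A useful consistency check is the sum rule $\sum_{k}T_{ik\alpha}^{2}=T_{\alpha}^{2}$, which follows from the normalization of the sine transform:

```python
import numpy as np

def electrode_modes(n_k, a=0.0, b=1.0, t_alpha=0.5):
    """Dispersion and tunneling elements of a finite one-dimensional
    tight-binding electrode with n_k sites, as defined in the text."""
    k = np.arange(1, n_k + 1)
    eps = a - 2.0 * abs(b) * np.cos(np.pi * k / (n_k + 1))
    T = t_alpha * np.sqrt(2.0 / (n_k + 1)) * np.sin(np.pi * k / (n_k + 1))
    return eps, T
```

As $N_k\to\infty$ the dispersion fills the band $[a-2|b|,\,a+2|b|]$, consistent with the recurrence time $N_k/|b_\alpha|$ growing with the electrode size.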
Within the temporal window of Fig. 2 in the main text and Fig. 1 above (up to
$t=100$), the current has not yet reached a steady value. By evolving longer
in time, however, a well defined steady state is attained. In Fig. 2 we show
the results of a longer time evolution (up to $t=2000$); the current
saturation is clearly visible. For this long-time evolution, a significantly
higher number of $k$-points is required to reach converged results, in
contrast to the number of poles which is instead the same.
Figure 2: Long-time behaviour corresponding to Fig. 2(b) of the main text
using the spectral-decomposition scheme and the pole expansion of the Fermi
function.
## VII Meir-Wingreen formula
The current between the system and the $\alpha$-th electrode can be calculated
from the Meir-Wingreen formula, which consists of the $\alpha$-th electrode
contribution to the embedding integral, cf. Eq. (IV):
$\displaystyle J_{\alpha}(t)=2\mathrm{Re}\mathrm{Tr}I_{\alpha}(t)$
$\displaystyle=2\mathrm{Re}\mathrm{Tr}\left[\frac{1}{2}\Gamma_{\alpha}s_{\alpha}(t)\rho(t)-\frac{1}{4}\Gamma_{\alpha}s_{\alpha}(t)-\sum_{l}s_{\alpha}(t)\frac{\eta_{l}}{\beta}\Gamma_{\alpha}\mathcal{G}_{l\alpha}^{\text{em}}(t)\right]$
$\displaystyle=2s_{\alpha}(t)\mathrm{Re}\mathrm{Tr}\left[\Gamma_{\alpha}\left(\frac{2\rho(t)-1}{4}-{\sum_{l}}\frac{\eta_{l}}{\beta}\mathcal{G}_{l\alpha}^{\text{em}}(t)\right)\right],$
(28)
where the trace also contains a sum over spin.
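The equivalence of the expanded and factored forms of Eq. (28) is easy to verify numerically (a spinless toy check with random matrices; all names are ours, and $2\rho(t)-1$ is read as $2\rho(t)-\mathbb{1}$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_poles, beta, s = 4, 3, 10.0, 0.7
Gamma = rng.standard_normal((n, n))
Gamma = Gamma + Gamma.T                      # toy (symmetric) level-width matrix
rho = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
eta = rng.standard_normal(n_poles)
G_em = rng.standard_normal((n_poles, n, n)) + 1j * rng.standard_normal((n_poles, n, n))

def current_expanded():
    # Second line of Eq. (28), term by term
    M = 0.5 * s * Gamma @ rho - 0.25 * s * Gamma
    for l in range(n_poles):
        M = M - s * (eta[l] / beta) * Gamma @ G_em[l]
    return 2.0 * np.real(np.trace(M))

def current_factored():
    # Last line of Eq. (28), with Gamma_alpha factored out of the trace
    inner = (2.0 * rho - np.eye(n)) / 4.0
    for l in range(n_poles):
        inner = inner - (eta[l] / beta) * G_em[l]
    return 2.0 * s * np.real(np.trace(Gamma @ inner))
```

The two functions agree to machine precision, as they must, since the last line of Eq. (28) only factors $\Gamma_{\alpha}$ and $s_{\alpha}(t)$ out of the trace.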
Alternatively, the Meir-Wingreen formula can be cast in terms of the
$k$-resolved embedding correlator:
$J_{\alpha}(t)=2\mathrm{Re}\sum_{ik}T_{ik\alpha}s_{\alpha}(t)\widetilde{\mathcal{G}}_{k\alpha
i}^{\text{em}}(t).$ (29)
Remarkably, calculating the time-dependent current adds no extra complexity in
either case. The current formula is completely specified in terms of the
single-time embedding correlator, which is readily available when evolving the
coupled system of ODEs. With the pole expansion, this corresponds to Eqs. (V)
and (V), and with the $k$-resolved spectral decomposition, to the first
equality of Eq. (V), and Eqs. (VI) and (27).
Maximizing one Laplace eigenvalue on n-dimensional manifolds
Romain Petrides
Romain Petrides, Université de Paris, Institut de Mathématiques de Jussieu - Paris Rive Gauche, bâtiment Sophie Germain, 75205 PARIS Cedex 13, France
We prove existence and regularity results for the problem of maximization of one Laplace eigenvalue with respect to metrics of the same volume lying in a conformal class of a Riemannian manifold of dimension $n\geq 3$.
Let $M$ be a smooth compact manifold of dimension $n$. Given a Riemannian metric $g$ on $M$, we denote the sequence of eigenvalues of the Laplace operator $\Delta_g = -div_g \nabla$ by
$$ 0 = \lambda_0 \leq \lambda_1(g) \leq \lambda_2(g) \leq \cdots \leq \lambda_k(g) \leq \cdots \to +\infty \text{ as } k \to +\infty . $$
The eigenvalue $\lambda_0$ is associated to constant functions and, if $M$ is connected, all the others are positive and depend on $g$. We will focus on the following natural scale-invariant functional
$$\bar{\lambda}_k(g) = \lambda_k(g)V_{g}(M)^{\frac{2}{n}}, $$
where $V_{g}(M)$ is the volume of $(M,g)$. We let $[g] = \{ e^{2u} g ; u \in \mathcal{C}^{\infty}\left(M\right) \}$ be a given conformal class of metrics. In the current paper, we aim to study the following variational problem
$$ \Lambda_k(M,[g]) = \sup_{\tilde{g} \in [g]} \lambda_k(\tilde{g})V_{\tilde{g}}(M)^{\frac{2}{n}} = \sup_{\tilde{g} \in [g]} \bar{\lambda}_k(\tilde{g}).$$
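The volume factor is exactly what makes $\bar{\lambda}_k$ invariant under a constant rescaling of the metric: for $c>0$,

```latex
\lambda_k(c\,g) = c^{-1}\lambda_k(g), \qquad V_{c\,g}(M) = c^{\frac{n}{2}} V_g(M),
\qquad\text{hence}\qquad
\bar{\lambda}_k(c\,g) = c^{-1}\lambda_k(g)\left(c^{\frac{n}{2}} V_g(M)\right)^{\frac{2}{n}} = \bar{\lambda}_k(g),
```

so the supremum over a conformal class may equivalently be taken over metrics of unit volume.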
Studying the maximization of $\bar{\lambda}_k$ is more relevant than the minimization because the well-known examples of Cheeger dumbbells prove $\inf_{[g]} \bar\lambda_k = 0$. Moreover, while it is standard in geometric analysis to restrict a functional on metrics to a conformal class (e.g. the Yamabe problem, problems for Q-curvature, etc.), there are deep reasons to do so in the context of spectral geometry:
On the one hand, $\Lambda_k(M,[g]) < +\infty$ was proved by Yang-Yau [43] for $k=1$ and $n=2$, then by Li-Yau [29] and El Soufi-Ilias [6], [7] for $k=1$ and $n\geq 2$ thanks to the conformal volume, and was generalized by Korevaar [27] to any $k,n$ (see also [16]). On the other hand, the supremum of the scale-invariant functional $\bar{\lambda}_k(g)$ over all the metrics $g$ on $M$ is always $+\infty$ in dimension $n\geq 3$ [3]. In dimension $2$, the bound on $\Lambda_k(M,[g])$ is nothing but a tool to obtain an upper bound for $\bar{\lambda}_k(g)$ for any metric $g$ depending only on $k$ and the topology of $M$ (see Yang-Yau [43] for $k=1$ and Korevaar [27] for $k\geq 1$). It is then natural to compute $\Lambda_k(M,[g])$, depending on $k$ and the conformal class. As a fundamental result, it was proved by Hersch [17] in dimension $2$ and El Soufi-Ilias [7] for $n\geq 3$ that
$$ \Lambda_1( \mathbb{S}^n,[h]) = \bar\lambda_1\left( \mathbb{S}^n, h \right) = n \omega_n^{\frac{2}{n}}$$
where $h$ is the round metric on the $n$-sphere $\mathbb{S}^n$ and $\omega_n$ is its volume, and that the round metric is the unique maximizer. Many other computations of $\Lambda_k(M,[g])$, with existence (or not) of maximizers, are given e.g. in [32], [2], [33], [34], [36], [35], [21], [23], [18], [26], and references therein.
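As a quick numeric illustration of this value (a sketch; we use the standard surface-measure formula $\omega_n = 2\pi^{(n+1)/2}/\Gamma\!\left(\frac{n+1}{2}\right)$, and the function names are ours):

```python
import math

def sphere_volume(n):
    # Volume (surface measure) of the unit n-sphere S^n inside R^{n+1}
    return 2.0 * math.pi ** ((n + 1) / 2.0) / math.gamma((n + 1) / 2.0)

def lambda1_bar_round(n):
    # n * omega_n^{2/n}: value of lambda_1 * V^{2/n} for the round metric on S^n
    return n * sphere_volume(n) ** (2.0 / n)
```

For $n=2$ this recovers Hersch's bound $8\pi$, since $\omega_2 = 4\pi$ and $\lambda_1(\mathbb{S}^2,h)=2$.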
Furthermore, as observed by Nadirashvili [32] and El Soufi-Ilias [8][9][10], critical metrics for $\bar{\lambda}_k$ correspond to harmonic maps into spheres.
For instance, and more precisely, in dimension $2$, the conformal factors $e^{2u}$ of the critical metrics $\tilde{g} = e^{2u} g$ for $\bar{\lambda}_k$ are nothing but the energy densities associated to the equation of harmonic maps into spheres with respect to $g$.
This observation was crucial to obtain regularity of critical metrics constructed by a suitable variational method, through regularity theorems for weakly harmonic maps.
Finally, studying the variational problem $\Lambda_k(M,[g])$ is a first step to look for critical metrics of $\bar{\lambda}_k(g)$ over all the metrics. As observed by Nadirashvili [32] and El Soufi-Ilias [8], critical metrics with respect to variations in the space of 2-symmetric tensors arise as induced metrics of minimal immersions of $M$ into spheres, and conversely, any induced metric of a minimal immersion into a sphere can be seen as the critical metric of one of the functionals $\bar{\lambda}_k$.
For instance, existence of minimal immersions into spheres by first eigenfunctions is given in [36] and [30] by maximization of $\lambda_1$ over all the metrics on any closed surface. While maximization over all the metrics is not possible in dimension $n\geq 3$ [3], one approach by min-max was given by Friedlander and Nadirashvili [13].
Their invariant was essentially computed in dimension $2$ in [36] [19] and is often not realized. Notice that the variational techniques developed in the current paper and in [38] are substantial to initiate the construction of minimal immersions into spheres by a min-max method on the eigenvalues $\bar{\lambda}_k$.
In the past decades, many works have been done in dimension $2$ to compute $\Lambda_k(M,[g])$ and to give methods to prove existence and regularity of maximal metrics, after the seminal works by Nadirashvili on the torus [32] and Fraser-Schoen for Steklov eigenvalues on surfaces of genus zero [11]. In the current paper, we are interested in the generalization to $n\geq 3$ of the study for any $k$ and $M$ with $n=2$ given in [36] [37] and [38]. These works are based on the construction of a maximizing sequence of metrics $e^{2u_\eps} g$ and associated maps $\Phi_\eps : M \to \R^{p_\eps}$ which are "almost" harmonic in the following sense:
$$ \Delta_g \Phi_\eps = \lambda_k(e^{2u_\eps} g) e^{2u_\eps} \Phi_\eps $$
and there is $\delta_\eps \to 0$ such that
$$ \left\vert \Phi_\eps \right\vert^2 \geq 1-\delta_\eps \text{ and } \int_M \left\vert\Phi_\eps \right\vert^2 e^{2u_\eps} = \int_M e^{2u_\eps} = 1.$$
These maps are harmonic into a sphere if and only if $\left\vert \Phi_\eps \right\vert^2 = 1$. In [36] and [37], this sequence is built by maximization of a regularized functional depending on a parameter $\eps$, and in [38] we simplify the selection of this maximizing sequence thanks to a new concept of Palais-Smale sequences for the functional $\bar{\lambda}_k$ in the level sets $\bar{\lambda}_k \geq \Lambda_k(M,[g]) - \eps$. The goal is then to pass to the limit as $\eps\to 0$ on $(\Phi_\eps)$ using the elliptic estimates on this super-critical system of equations (in the sense that
$e^{2u_\eps}$ only belongs to $L^1$ and $\Phi_\eps$ is not uniformly bounded).
If this is possible, we have $e^{2u_\eps}dv_g\to \nu$ in the weak-$\star$ topology of measures,
$\Phi_\eps \to \Phi$ and $\lambda_k(e^{2u_\eps} g)\to \lambda$, so that
$$ \Delta_g \Phi = \lambda \nu \Phi \text{ and } \left\vert \Phi \right\vert^2 = 1 $$
in a weak sense. Computing $0 = \Delta \left\vert \Phi \right\vert^2$ gives $\nu = \frac{\left\vert \nabla \Phi \right\vert^2}{\lambda} dv_g$, so that the limit equation is the equation of weakly harmonic maps, known to be smooth and strongly harmonic; hence the limiting measure is smooth.
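The identification of $\nu$ can be spelled out. With the sign convention $\Delta_g = -div_g \nabla$ fixed above and $\left\vert \Phi \right\vert^2 = 1$,

```latex
0 = \tfrac{1}{2}\Delta_g \left\vert \Phi \right\vert^2
  = \langle \Phi, \Delta_g \Phi \rangle - \left\vert \nabla \Phi \right\vert_g^2
  = \lambda \left\vert \Phi \right\vert^2 \nu - \left\vert \nabla \Phi \right\vert_g^2\, dv_g
  = \lambda\,\nu - \left\vert \nabla \Phi \right\vert_g^2\, dv_g,
```

read as an equality of measures, which gives $\nu = \lambda^{-1}\left\vert \nabla \Phi \right\vert_g^2\, dv_g$.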
However, establishing existence and regularity of metrics achieving $\Lambda_k(M,[g])$ in dimension $n\geq 3$ involves other difficulties. The first one is that contrary to dimension $n=2$, if $n\geq 3$ the Dirichlet energy and the Laplacian are not conformally invariant: for a metric $\tilde{g} = e^{2u}g$, we have
$$\lambda_k(\tilde{g}) = \min_{E \in \mathcal{G}_{k+1}\left(\mathcal{C}^\infty(M)\right)} \max_{\varphi \in E\setminus \{0\}} \frac{\int_M \left\vert\nabla \varphi \right\vert_g^2 e^{(n-2)u}dv_g}{\int_M \varphi^2 e^{nu}dv_g} $$
so that the possible degeneracy of a maximizing sequence of conformal factors $e^{2u_\eps}$ not only appears on the right-hand side of the elliptic equation for eigenfunctions, as in dimension $2$ (already leading to a super-critical elliptic equation because we only have an $L^1$ control of $e^{nu_\eps}$),
$$ -div_g\left( e^{\left(n-2\right) u_\eps} \nabla \varphi_\eps \right) = \lambda_k(e^{2u_\eps} g) e^{nu_\eps} \varphi_\eps, $$
but also on the left-hand side, where the operator $-div_g\left( e^{\left(n-2\right) u_\eps} \nabla \cdot \right)$ may lose its elliptic properties as $\eps\to 0$. The second difficulty is that in dimension $2$ there are bounds on the multiplicity of the eigenvalue $\lambda_k(g)$ depending only on $k$ and the topology of $M$, while this is not the case in dimension $n\geq 3$ (see [4]), even with the restriction to a conformal class. This boundedness was often used to initiate compactness arguments on the sequence of maximizing metrics associated to an "almost critical" system of equations approaching the system of equations of a harmonic map into a sphere. Indeed, the number of equations in the system is automatically uniformly bounded for $n=2$. This is a priori not true in higher dimensions.
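For convenience we record the transformation rules behind the weights $e^{(n-2)u}$ and $e^{nu}$ in the quotient above: for $\tilde{g} = e^{2u}g$,

```latex
\left\vert \nabla \varphi \right\vert_{\tilde{g}}^2\, dv_{\tilde{g}}
= e^{-2u}\left\vert \nabla \varphi \right\vert_{g}^2\, e^{nu}\, dv_g
= \left\vert \nabla \varphi \right\vert_{g}^2\, e^{(n-2)u}\, dv_g,
\qquad
\varphi^2\, dv_{\tilde{g}} = \varphi^2\, e^{nu}\, dv_g ,
```

and for $n=2$ the gradient weight drops out: this is precisely the conformal invariance of the Dirichlet energy that fails in higher dimensions.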
In the following result, we overcome these problems by establishing a natural generalization of the maximization results for $\Lambda_k(M,[g])$ in dimension $2$ to higher dimensions:
Let $(M, [g])$ be a compact connected manifold of dimension $n\geq 3$ endowed with a conformal class and $k\geq 1$. If
$$ \Lambda_k(M,[g]) > \Lambda_k(\widetilde{M},[\widetilde{g}]) $$
for any $(\widetilde{M},[\widetilde{g}]) = (M,[g]) \sqcup \left(\mathbb{S}^n,[h]\right)\sqcup \cdots \sqcup \left(\mathbb{S}^n,[h]\right) $ or $ (\widetilde{M},[\widetilde{g}]) = \left(\mathbb{S}^n,[h]\right)\sqcup \cdots \sqcup \left(\mathbb{S}^n,[h]\right) $ where $h$ is the round metric on $\mathbb{S}^n$, then for some $\alpha>0$, there is a non-negative factor $f \in \mathcal{C}^{0,\alpha}\left(M\right) \cap \mathcal{C}^{\infty}(M\setminus Z)$ where $Z = \{ z \in M ; f(z) = 0 \}$ such that
$$ \lambda_k(f g) \left(\int_M f^{\frac{n}{2}}dv_g\right)^{\frac{2}{n}} = \Lambda_k(M,[g]) $$
Moreover, $f = \frac{\left\vert \nabla \Phi \right\vert_g^2}{\lambda_k(f g)}$, where $\Phi : M \to \mathbb{S}^p$ is some $n$-harmonic map into a sphere, whose coordinate functions are eigenfunctions with respect to $\lambda_k(f g)$.
Notice that, as in dimension $n=2$ (see [36] and [37]), the strict inequality assumptions we make are natural to ensure the compactness of the sequence of critical metrics, and more generally some compactness for maximizing sequences of metrics. These strict inequality assumptions are a simple way to prevent the bubbling that may happen (for more information on the bubble tree convergence in this context, see e.g. [37], [23], [22] for $n=2$, or the more general Theorem <ref> and the discussion around it).
Notice also that, as in dimension $n=2$, the conformal factor $f$ appears as the energy density of an $n$-harmonic map into a sphere
$$ -div_g\left( \left\vert \nabla \Phi \right\vert^{n-2}_g \nabla \Phi \right) = \left\vert \nabla \Phi \right\vert^n_g \Phi. $$
The appearance of $n$-harmonic maps into a sphere in this context was already observed in [20] (see also [39]). In fact, $f$ is the limit of a maximizing sequence of conformal factors, and we conclude the proof of the theorem by noticing that $f = \frac{\left\vert \nabla \Phi \right\vert_g^2}{\lambda_k(f g)}$, where $\Phi$ is a weak locally minimizing $n$-harmonic map into a sphere. The regularity theory for these maps in the literature implies $\Phi \in \mathcal{C}^{1,\alpha}$ for some $\alpha\in (0,1)$, and not more. The lack of higher regularity is well known because the weight $\left\vert \nabla \Phi \right\vert^{n-2}_g$ inside the divergence term of the equations may vanish, so that we have a degenerate elliptic system. Therefore $f \in \mathcal{C}^{0,\alpha}$ is the optimal regularity. This is a very common conclusion when we look for conformal metrics as solutions of a variational problem (see e.g. in [1] or [14] the concept of generalized metrics). Notice also that even in dimension $2$, while the zero set $Z$ of $f$ has to be discrete, it may be non-empty. Therefore, even if the conformal factor $f$ is smooth for $n=2$, the associated metric may have conical singularities. Of course, $f \in \mathcal{C}^{\infty}\left(M\setminus Z\right)$ in the domain where the equation of $\Phi$ is elliptic and
the metric $\tilde{g} = fg$ is regular. Since on $M\setminus Z$ we have $\left\vert \nabla \Phi \right\vert^2_{\tilde{g}} = \lambda_k(\tilde{g})$, and thanks to the conformal invariance of the $n$-harmonic equation, the $n$-harmonic map into a sphere with respect to $g$ becomes a $2$-harmonic map into a sphere with respect to $\tilde{g} = fg$
$$ \Delta_{\tilde{g}} \Phi = \left\vert \nabla \Phi \right\vert^2_{\tilde{g}} \Phi $$
as was originally observed in [8] for smooth critical metrics. In fact, $\Phi$ is a $p$-harmonic map with respect to $\tilde{g}$ for any $p$, and $p=n$ is the adapted exponent that gives a conformally invariant equation. This is one reason why we have to deal with $n$-harmonic maps.
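The role of the exponent $p=n$ can be checked directly: for $\tilde{g} = e^{2u}g$ on an $n$-dimensional manifold,

```latex
\int_M \left\vert \nabla \Phi \right\vert_{\tilde{g}}^{p}\, dv_{\tilde{g}}
= \int_M \left( e^{-2u}\left\vert \nabla \Phi \right\vert_{g}^{2} \right)^{\frac{p}{2}} e^{nu}\, dv_g
= \int_M \left\vert \nabla \Phi \right\vert_{g}^{p}\, e^{(n-p)u}\, dv_g ,
```

which is independent of the conformal factor exactly when $p = n$.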
Notice also that it is not clear that generalized metrics $f.g$ have a discrete spectrum and that any eigenvalue has a finite dimensional associated space of eigenfunctions. In the current paper, we actually prove that it holds true as a consequence of
For the factor $f$ given by Theorem <ref>, the embedding $W^{1,2}\left( f.g\right) \to L^{2}\left( f.g\right)$ is compact and the eigenfunctions with respect to $\Delta_{f.g}$ are $\mathcal{C}^{1,\alpha}$ functions.
This compactness property on the weight $f$ generalizes to higher dimensions the concept of "admissible measures" developed in dimension 2, e.g. in [23]. Thanks to this result, we a posteriori deduce that the multiplicity of the $k$-th eigenvalue associated to $f.g$ is finite and that the target sphere $\mathbb{S}^p$ in Theorem <ref> is finite dimensional. The proof of Theorem <ref> is based on a new understanding of $\mathcal{C}^{1}$ $n$-harmonic maps: they can be seen as local limits of $(\tau,n)$-harmonic maps, defined as minimizers of the regularized functional
$$ \Psi \mapsto \int \left(\left\vert \nabla \Psi \right\vert^2 + \tau \right)^{\frac{n}{2}}. $$
Then, we can generalize to systems some recent improvements of the regularity theory for the $p$-Laplace equation (see [40]) by computing Caccioppoli (or reverse Hölder) type inequalities, independent of $\tau$, on the equation satisfied by the gradient of $(\tau,n)$-harmonic maps. We deduce that the $W^{1,2}$ norm of some power of $f$ is controlled, and we obtain local embeddings $W^{1,2}\left( f.g\right) \to L^{2\kappa_0}\left( f.g\right)$ for some $\kappa_0 > 1$, which imply compact embeddings in $L^p(f.g)$ for $p < 2 \kappa_0$. From this new technique, we also deduce a higher regularity result for $n$-harmonic maps into spheres.
Coming back to our original problem, we know that the strict inequality involved in the assumptions of Theorem <ref> always holds true for $k= 1$ if $(M,[g])$ is not equivalent to a sphere (see [35]). Since by [7], the theorem holds on the conformal class of the round sphere, we then have the existence result for the first eigenvalue:
Let $(M, [g])$ be a compact connected manifold of dimension $n\geq 3$ endowed with a conformal class, then for some $\alpha>0$, there is a non-negative factor $f \in \mathcal{C}^{0,\alpha}\left(M\right) \cap \mathcal{C}^{\infty}(M\setminus Z)$ where $Z = \{ z \in M ; f(z) = 0 \}$ such that
$$ \lambda_1(f g) \left(\int_M f^{\frac{n}{2}}dv_g\right)^{\frac{2}{n}} = \Lambda_1(M,[g]) $$
Moreover, $f = \frac{\left\vert \nabla \Phi \right\vert_g^2}{\lambda_1(f g)}$, where $\Phi : M \to \mathbb{S}^p$ is some $n$-harmonic map into a sphere, whose coordinate functions are eigenfunctions with respect to $\lambda_1(f g)$.
This is the generalization for $n\geq 3$ of the main theorem in [36].
In fact, for $k > 1$ we always have existence of maximal configurations, but they may be a disjoint union of at most $k$ connected manifolds $(\widetilde{M},[\tilde{g}]) = (M,[g]) \sqcup \left(\mathbb{S}^n,[h]\right)\sqcup \cdots \sqcup \left(\mathbb{S}^n,[h]\right) $ or $ (\widetilde{M},[\widetilde{g}]) = \left(\mathbb{S}^n,[h]\right)\sqcup \cdots \sqcup \left(\mathbb{S}^n,[h]\right) $ endowed with metrics maximizing lower eigenvalues in their conformal class. This is a consequence of the bubble tree convergence proved in the current paper. For instance, bubbling happens for $\Lambda_2(\mathbb{S}^n,[h])$, which is never realized on $\mathbb{S}^n$ [26], but is realized by a disjoint union of two round spheres of the same volume. We add the strict inequality assumption in Theorem <ref> to be sure to obtain a genuine maximizer on $M$. Moreover, as a contrapositive, if there is no maximizer for $\Lambda_k\left(M,[g]\right)$, we deduce from Theorem <ref> and a result by Colbois-El Soufi [2] that the inequality in the assumption of Theorem <ref> is an equality. For instance, proving that $\Lambda_k\left(\mathbb{S}^n,[h]\right)$ is not realized in the conformal class of a round sphere gives a natural way to prove the following conjecture:
$$ \Lambda_k\left(\mathbb{S}^n,[h]\right) = n k^{\frac{2}{n}} \omega_n^{\frac{2}{n}} $$
that is known to be true for $k=1$ ([17] and [7]), $k=2$ ([33] [34] and [26]) and $n=2$ ([21]). This strategy was used in dimension 2 in [21] and [18].
As we will explain in section <ref>, we prove Theorem <ref> by noticing that the maximization of $\Lambda_k(M,[g])$ is in fact the same as the maximization of the more general functional
$$ \bar{\lambda}_k(g,\alpha,\beta) = \lambda_k(g, \alpha, \beta) \frac{ \int_M \beta dv_g }{ \left(\int_M \alpha^{\frac{n}{n-2}}dv_g \right)^{\frac{n-2}{n}}} $$
among non-negative functions $\alpha$, $\beta$, where
$$ \lambda_k(g,\alpha,\beta) = \inf_{E \subset \mathcal{G}_{k+1}\left(\mathcal{C}^\infty\left(M\right)\right)} \sup_{\phi\in E\setminus\{0\}} \frac{\int_M \left\vert \nabla \phi \right\vert_g^2 \alpha dv_g}{\int_M\phi^2 \beta dv_g} . $$
Indeed, we have:
Let $(M, [g])$ be a compact connected manifold of dimension $n\geq 3$ endowed with a conformal class and $k\geq 1$. Then
$$ \Lambda_k(M,[g]) = \sup_{\alpha \geq 0 , \beta \geq 0} \bar{\lambda}_k(g,\alpha,\beta) $$
More precisely, the maximizers $(\widetilde{M},[\tilde{g}]) = (M,[g]) \sqcup \left(\mathbb{S}^n,[h]\right)\sqcup \cdots \sqcup \left(\mathbb{S}^n,[h]\right) $ or $ (\widetilde{M},[\widetilde{g}]) = \left(\mathbb{S}^n,[h]\right)\sqcup \cdots \sqcup \left(\mathbb{S}^n,[h]\right) $ of $\bar{\lambda}_k(g, \alpha, \alpha^{\frac{n}{n-2}})$ for non-negative functions $\alpha$ are the same as the maximizers of $\bar{\lambda}_k(g, \alpha, \beta)$ for couples $(\alpha,\beta)$ of non-negative functions.
Very recently, Karpukhin and Stern [24] proposed another variational problem
$$ \nu_k(M,g) = \sup_{\beta\geq 0} \left( \lambda_k(g,1,\beta) \int_M \beta dv_g \right) $$
which was more convenient for generalizing their techniques in dimension 2 [23] to dimension $n\geq 3$ [24]. One reason is that the eigenvalue equations and harmonic map equations associated to critical potentials $\beta$ never become a degenerate elliptic system. Notice that the techniques we use in the current paper are adaptable to prove existence and regularity results for $\nu_k(M,g)$ for any $k$. As an immediate consequence of Theorem <ref>, we have that
$$ \nu_k(M,g) \leq \Lambda_k(M,[g])V_g(M)^{\frac{n-2}{n}} $$
with equality if and only if $g$ is a maximizer of $\Lambda_k(M,[g])$. For instance, by Theorem <ref>, there is a maximizer for $k=1$. This inequality was previously proved only for specific conformal classes in [24].
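One way to see this inequality is to restrict the supremum to $\alpha \equiv 1$ in the functional $\bar{\lambda}_k(g,\alpha,\beta)$:

```latex
\nu_k(M,g) = \sup_{\beta\geq 0}\,\lambda_k(g,1,\beta)\int_M \beta\, dv_g
= V_g(M)^{\frac{n-2}{n}}\sup_{\beta\geq 0}\,\bar{\lambda}_k(g,1,\beta)
\leq V_g(M)^{\frac{n-2}{n}}\,\Lambda_k(M,[g]),
```

since fixing $\alpha \equiv 1$ can only decrease the supremum $\sup_{\alpha,\beta}\bar{\lambda}_k(g,\alpha,\beta) = \Lambda_k(M,[g])$.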
Notice that the techniques and results given in the current paper generalize to many other eigenvalue problems: combinations of Laplace eigenvalues, Steklov eigenvalues, etc. This will be written in forthcoming papers.
The paper is organized as follows: in section <ref> we first explain the variational approach to prove Theorem <ref>. In particular, we define and select a Palais-Smale sequence for this variational problem. In section <ref>, we prove the bubble tree convergence of Palais-Smale sequences, leading to Theorem <ref> and Theorem <ref>. In particular, we prove at the end of section <ref> the compact embedding of Theorem <ref>. In section <ref>, we prove all the necessary regularity results on $n$-harmonic maps into (possibly infinite-dimensional) spheres, generalizing the classical $\eps$-regularity results (e.g. in [42][41][28][31]), strong convergence results for sequences of harmonic maps [5], point removability [42][28][31], etc. In particular, inspired by [42], we prove a priori regularity estimates which are independent of the dimension of the target manifold. Along the way, we prove a new result concerning harmonic maps: any $\mathcal{C}^1$ $n$-harmonic map is a locally minimizing harmonic map and is locally a strong limit of minimizing $(\tau,n)$-harmonic maps. This leads to the higher regularity results for $\mathcal{C}^1$ harmonic maps that we use in the proof of Theorem <ref>.
§ THE VARIATIONAL APPROACH
§.§ Splitting the conformal factor into two densities
Let $(M,g)$ be a Riemannian manifold of dimension $n$. We consider $\alpha$ and $\beta$ two non-negative functions, as weights for the following eigenvalue problem:
$$ \lambda_k(\alpha,\beta) = \inf_{E \subset \mathcal{G}_{k+1}\left(\mathcal{C}^\infty\left(M\right)\right)} \sup_{\phi\in E\setminus\{0\}} \frac{\int_M \left\vert \nabla \phi \right\vert_g^2 \alpha dv_g}{\int_M\phi^2 \beta dv_g} $$
Notice that if $\beta = \alpha^{\frac{n}{n-2}}$ is a positive function, $\lambda_k(\alpha,\beta)$ is nothing but the $k$-th Laplace eigenvalue associated to the metric $\beta^{\frac{2}{n}}g$. Thanks to this remark, a natural functional invariant under dilation to consider is
$$ \bar{\lambda}_k(\alpha,\beta) = \lambda_k(\alpha, \beta) \frac{ \int_M \beta dv_g }{ \left(\int_M \alpha^{\frac{n}{n-2}}dv_g \right)^{\frac{n-2}{n}}} . $$
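The remark above can be checked directly: writing $\tilde{g} = \beta^{\frac{2}{n}} g$ with $\beta$ positive, one has $dv_{\tilde{g}} = \beta \, dv_g$ and $\left\vert \nabla \phi \right\vert_{\tilde{g}}^2 = \beta^{-\frac{2}{n}} \left\vert \nabla \phi \right\vert_g^2$, so that
$$ \int_M \left\vert \nabla \phi \right\vert_{\tilde{g}}^2 dv_{\tilde{g}} = \int_M \beta^{\frac{n-2}{n}} \left\vert \nabla \phi \right\vert_g^2 dv_g = \int_M \left\vert \nabla \phi \right\vert_g^2 \alpha \, dv_g \text{ and } \int_M \phi^2 dv_{\tilde{g}} = \int_M \phi^2 \beta \, dv_g $$
as soon as $\alpha = \beta^{\frac{n-2}{n}}$, that is $\beta = \alpha^{\frac{n}{n-2}}$: the Rayleigh quotients defining $\lambda_k(\alpha,\beta)$ and the $k$-th Laplace eigenvalue of $\tilde{g}$ then coincide.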
If the functions $\alpha$ and $\beta$ are smooth and positive, the compact embeddings of the natural weighted $L^2$ and $W^{1,2}$ spaces involved in this problem ensure the existence of eigenfunctions. Let $(\alpha,\beta)$ be such that $\int_M \beta dv_g = 1$ and $\int_M \alpha^{\frac{n}{n-2}} dv_g = 1$. The subdifferential of $-\bar\lambda_k$ at $(\alpha,\beta)$ (see [39] for the definition and computation of the subdifferential) satisfies
\begin{equation} \label{subdifferential} - \partial\left( - \bar{\lambda}_k\right)(\alpha,\beta) \subset co \left\{
\left( \left\vert \nabla \phi \right\vert_g^2 - \bar\lambda_k \alpha^{\frac{2}{n-2}}, \bar\lambda_k \left( 1 - \phi^2 \right) \right) ; \phi \in E_k(\alpha,\beta), \int_M \phi^2 \beta dv_g = 1
\right\}
\end{equation}
where $\bar\lambda_k = \bar\lambda_k(\alpha,\beta)$. Notice that, as a convention, we compute the subdifferential of $-\bar{\lambda}_k$ because the subdifferential is well adapted to the minimization of functionals. If $(\alpha,\beta)$ is critical, we have by definition that $0 \in \partial\left( -\bar{\lambda}_k\right)(\alpha,\beta)$. Then, there are eigenfunctions $\Phi = (\phi_1,\cdots, \phi_p)$ such that
$$ \left\vert \nabla \Phi \right\vert_g^2 - \bar\lambda_k(\alpha,\beta) \alpha^{\frac{2}{n-2}} = 0 \text{ and } \left\vert \Phi \right\vert^2 = 1. $$
Using the system of equations on $\Phi$ (eigenvalue equations)
$$ -div_g(\alpha \nabla \Phi) = \bar{\lambda}_k(\alpha,\beta) \beta \Phi $$
we obtain by computing $0 = -div_g( \alpha \nabla \left\vert \Phi \right\vert^2 ) $ that
$$ \alpha \left\vert \nabla \Phi \right\vert^2 = \bar{\lambda}_k(\alpha,\beta) \beta $$
so that $ \beta = \alpha^{\frac{n}{n-2}}$. Then, if a maximizer of $\bar\lambda_k(\alpha,\beta)$ exists, it is a maximizer for the Laplace eigenvalue $\bar\lambda_k$ in a conformal class.
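The computation of $ -div_g( \alpha \nabla \left\vert \Phi \right\vert^2 ) $ can be detailed as follows: using the eigenvalue equations and $\Phi \cdot \nabla \Phi = \frac{1}{2}\nabla\left\vert\Phi\right\vert^2$,
$$ 0 = -div_g\left( \alpha \nabla \left\vert \Phi \right\vert^2 \right) = -2\, \Phi \cdot div_g\left( \alpha \nabla \Phi \right) - 2 \alpha \left\vert \nabla \Phi \right\vert_g^2 = 2 \bar{\lambda}_k(\alpha,\beta) \beta \left\vert \Phi \right\vert^2 - 2 \alpha \left\vert \nabla \Phi \right\vert_g^2 \hskip.1cm, $$
and $\left\vert \Phi \right\vert^2 = 1$ gives $\alpha \left\vert \nabla \Phi \right\vert^2 = \bar{\lambda}_k(\alpha,\beta) \beta$ as claimed.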
Notice that the computation of the subdifferential is valid as soon as we assume that the embedding of the weighted Sobolev spaces involved in the eigenvalue functional $W^{1,2}(\alpha,\beta) \to L^2(\beta)$ is compact (see [39]), leading to the existence of eigenfunctions. The maximizers we obtain at the very end of the proof of Theorem <ref> are not necessarily smooth and positive everywhere, but they do satisfy this compact embedding property.
§.§ A deformation Lemma
We denote by
$$ X = \{(\alpha,\beta) \in \mathcal{C}^0(M)\times \mathcal{C}^0(M), \alpha>0, \beta>0 \} $$
Let $(\alpha,\beta)\in X$ with $\int_M \beta dv_g = 1$ and $\int_M \alpha^{\frac{n}{n-2}} dv_g = 1$. We have that formula (<ref>) holds true for the subdifferential $\partial\bar{\lambda}_k(\alpha,\beta) $
and we set
$$ \left\vert \partial \bar{\lambda}_k(\alpha,\beta) \right\vert = \max_{\tau\in \tilde{X}} \min_{\psi \in \partial \bar\lambda_k(\alpha,\beta)} \left\langle \tau , \psi \right\rangle $$
a pseudo-norm of the subdifferential $ \partial \bar\lambda_k(\alpha,\beta) $, where
$$\tilde{X} = \{ (\dot{\alpha},\dot{\beta}) \in \mathcal{C}^0\left(M\right)\times \mathcal{C}^0\left(M\right) ; \dot{\alpha}\geq 0, \dot{\beta}\geq 0 \text{ and } \int_M \dot{\alpha} dv_g + \int_M \dot{\beta} dv_g = 1 \} $$
Notice that $\left\vert \partial \bar{\lambda}_k(\alpha,\beta) \right\vert \geq 0$ has to be seen as the largest right derivative of $\bar\lambda_k$ among all the variations $v_t = t (\dot{\alpha},\dot{\beta})$ for $\dot{\alpha},\dot{\beta} \in \tilde{X}$.
We let
$$ X_{r} = \{ (\alpha,\beta)\in X ; \left\vert \partial \bar\lambda_k(\alpha,\beta) \right\vert > 0 \} $$
be the set of regular points of $\bar\lambda_k$. The set of critical points is defined as
$$ X_{c} = \{ (\alpha,\beta)\in X ; \left\vert \partial \bar\lambda_k(\alpha,\beta) \right\vert = 0 \}. $$
Notice that we are looking for critical points in the set $ X_c $ (in the current paper: maximizers).
If there are $\eps_0 >0$ and $\delta>0$ such that
$$ \forall (\alpha,\beta) \in X, \left\vert \bar\lambda_k(\alpha,\beta) - c \right\vert < \eps_0 \Rightarrow \left\vert \partial \bar\lambda_k(\alpha,\beta) \right\vert \geq \delta \hskip.1cm, $$
Then $\forall \eps \in (0,\eps_0)$, there is a continuous map $\eta : X \to X$ such that
* $\eta(\alpha,\beta) = (\alpha,\beta)$ for any $(\alpha,\beta)\in \{ \bar\lambda_k\geq c + \eps_0 \} \cup \{\bar\lambda_k \leq c-\eps_0\}$
* $\eta(\{ \bar{\lambda}_k \geq c - \eps\} ) \subset \{ \bar\lambda_k \geq c + \eps\} $
During the proof, we use the notations $E = \bar\lambda_k$ and $x =(\alpha,\beta) \in X $, $\tau = (\dot{\alpha},\dot{\beta})\in \tilde{X}$. We first build an adapted pseudo-vector field for our problem.
Let $\eps>0$. There is a locally Lipschitz vector field $v : X_r \to \tilde{X}$ such that for all $x \in X_r$
* $ \left\| v(x)\right\|_1 < 2 $
* $\forall \psi \in \partial E(x) , \left\langle v(x),\psi \right\rangle > \frac{ \left\vert \partial E(x) \right\vert}{2} $
* $v(x) \geq 0$
We first fix $x_0 \in X_r$ and we build an adapted image $v_0(x_0)$ satisfying the conclusion of the Claim.
Let $\tau_0 \in \tilde{X}$ be such that
$$ \left\vert \partial E(x_0) \right\vert \leq \min_{\psi\in \partial E(x_0)} \left\langle \tau_0,\psi \right\rangle + \frac{\left\vert \partial E(x_0) \right\vert}{4} \hskip.1cm.$$
We choose $v_0(x_0) = \tau_0 $ so that
* $ \left\| v_0(x_0)\right\|_1 \leq 1 < 2 $
* $\forall \psi \in \partial E(x_0) , \left\langle v_0(x_0),\psi \right\rangle \geq \frac{3}{4} \left\vert \partial E(x_0) \right\vert > \frac{\left\vert \partial E(x_0) \right\vert}{2} $
* $v_0(x_0) \geq 0$
Now, we aim at defining $v$ by some transformation of $v_0$ in order to obtain a locally Lipschitz vector field $v : X_r \to \tilde{X}$.
Let $x_0 \in X_{r}$. Let $\Omega_{x_0}$ be an open neighborhood of $x_0$ in $X_{r}$ such that for all $x\in \Omega_{x_0}$
$$\forall \psi \in \partial E(x) , \left\langle \psi,v_0(x_0) \right\rangle > \frac{\left\vert \partial E(x) \right\vert}{2}$$
We notice that
$$ X_{r} = \bigcup_{x_0 \in X_{r}} \Omega_{x_0}.$$
Since $X_{r}$ is paracompact, one has a family of open sets $(\omega_i)_{i\in I}$ such that
* $ X_{r} = \bigcup_{i \in I} \omega_i $
* $\forall i\in I, \exists x_i \in X_{r}, \omega_i \subset \Omega_{x_i}$
* for all $ u\in X_{r}$ there is an open set $\Omega$ such that $u\in \Omega$ and $\Omega \cap \omega_i = \emptyset$ except for a finite number of indices $i$.
We set $\psi_i(u) = d\left(u, X_{r} \setminus \omega_i \right)$ and $\eta_i(u) = \frac{\psi_i(u)}{\sum_{j\in I}\psi_j(u)}$, and the vector field
$$ v(x) = \sum_{i\in I} \eta_i(x) v_0(x_i) $$
satisfies the conclusion of the Claim.
(of proposition <ref>) We define a vector field $\Phi : X \to \mathcal{C}^0\left(M\right)\times \mathcal{C}^0\left(M\right)$ by, for $x\in X$,
$$ \Phi(x) = \frac{d(x,A)}{d(x,A)+d(x,B)} v(x)$$
where $v : X_r\to \tilde{X}$ is given by Claim <ref> and we define the sets
$$ A = \{ E\leq c-\eps_0\} \cup \{E\geq c+\eps_0\} $$
$$ B = \{ c-\eps \leq E \leq c+\eps\} \hskip.1cm.$$
Let $\eta$ be a solution of
\begin{equation}
\begin{cases}
\frac{d}{dt}\eta_t(x) = \Phi\left( \eta_t(x)\right) \\
\eta_0(x) = x \hskip.1cm.
\end{cases}
\end{equation}
Such a solution $\eta$ exists since $\Phi$ is locally Lipschitz. Moreover, $\eta$ is well defined on $\mathbb{R}_+$ since $\Phi$ is bounded. Let $t\geq 0$, and $x \in X$. We have thanks to the subdifferential that
\begin{equation}\begin{split} \frac{d}{dt} E(\eta_t(x)) & \geq \min \{ \left\langle \Phi(\eta_t(x)), \psi \right\rangle ; \psi \in \partial E\left(\eta_t(x) \right) \} \\
& \geq \frac{d(\eta_t(x),A)}{d(\eta_t(x),A)+d(\eta_t(x),B)} \frac{\left\vert \partial{E}(\eta_t(x)) \right\vert}{ 2 } \geq 0
\end{split}\end{equation}
for any $t\geq 0$. It is clear that for any $t\geq 0$, $\eta_t(x) = x$ for $x\in A$ and that
$$\eta_t\left( \{ E\geq c+ \eps \} \right) \subset\{ E\geq c+\eps \} \hskip.1cm.$$
It remains to prove that for $t_0 >0$ small enough we also have that
$$\eta_{t_0}\left( B \right) \subset\{ E\geq c+ \eps \} \hskip.1cm.$$
Let $t_{\star}$ be the smallest time such that $E\left(\eta_{t_{\star}}(x)\right) = c + \eps$. Then, for $0\leq t \leq t_{\star}$, we have
$$ \eta_t(x) \in B \Rightarrow \frac{d}{dt}E(\eta_t(x)) \geq \frac{\delta}{2} $$
since by assumption, $c - \eps \leq E(\eta_t(x)) \leq c + \eps $ implies that $\left\vert \partial{E}(\eta_t(x)) \right\vert \geq \delta$ and that $d(\eta_t(x),B) = 0$. We deduce by integration that
$$ E(\eta_{t_{\star}}(x)) - E(x) \geq \frac{\delta}{2}t_{\star} $$
so that $t_{\star} \leq \frac{4\eps}{\delta}$. Then, letting $t_0 = \frac{4\eps}{\delta}$, we obtain $\eta_{t_0}(B)\subset \{ E\geq c+\eps \}$. Therefore,
$$\eta_{t_0}(\{ E\geq c- \eps \}) \subset \{ E\geq c+ \eps \}$$
and we obtain the proposition.
§.§ Existence of Palais-Smale sequences for the maximization problem
If we take $c = \sup_X \bar{\lambda}_k < + \infty$, proposition <ref> implies, arguing by contradiction, that for any sequences $\eps\to 0$ and $\delta_\eps \to 0$ there is a sequence of $(\alpha_\eps , \beta_\eps) \in X$ such that
$$ \bar\lambda_k(\alpha_\eps , \beta_\eps) > c -\eps \text{ and } \left\vert \partial\bar\lambda_k(\alpha_\eps , \beta_\eps)\right\vert < \delta_\eps $$
Now, up to classical convolutions, pairs of smooth positive functions are dense in $X$, so that we have a sequence of pairs of smooth positive functions $(\alpha_\eps,e^{nu_\eps} )$ such that
$$ \bar\lambda_k( \alpha_\eps,e^{nu_\eps} ) > c -\eps \text{ and } \left\vert \partial\bar\lambda_k( \alpha_\eps,e^{nu_\eps} )\right\vert < \delta_\eps $$
The first condition gives that the pair $( \alpha_\eps,e^{nu_\eps} )$ is a maximizing sequence. The second condition can be rewritten as
$$\forall (\dot \alpha, \dot \beta)\in \tilde{X}, \exists \Psi \in -\partial\left( -\bar{\lambda}_k\right)(\alpha_\eps,e^{nu_\eps}) ; \left\langle (\dot \alpha, \dot \beta) , \Psi \right\rangle \leq \delta_\eps . $$
We denote $Q_\eps = \lambda_\eps^{\frac{n-2}{2}} \alpha_\eps$, so that $\alpha_\eps = \lambda_\eps^{\frac{2-n}{2}} Q_\eps$, where $ \lambda_\eps = \bar\lambda_k(\alpha_\eps,e^{nu_\eps})$. The previous condition then rewrites as
\begin{equation*}
\begin{split}
\forall (\dot \alpha, \dot \beta)\in \tilde{X}, \exists (\psi_1,\psi_2) \in -\partial\left( -\bar{\lambda}_k\right)(\lambda_\eps^{\frac{2-n}{2}}Q_\eps,e^{nu_\eps}) ; \\
\left\langle (\dot \alpha, \dot \beta) , (\psi_1-\delta_\eps,\psi_2-\delta_\eps) \right\rangle \leq 0
\end{split}
\end{equation*}
meaning that
$$ \left( - \partial \left(-\bar\lambda_k\right)( \lambda_{\eps}^{\frac{2-n}{2}}Q_\eps, e^{nu_\eps} ) - \{ ( \delta_\eps, \delta_\eps) \} \right) \cap \{ (a,b) \in \mathcal{C}^0(M) ; a \leq 0, b\leq 0 \} \neq \emptyset.$$
Indeed, if not, we use the classical Hahn-Banach theorem to separate these two sets (the first one is compact, the second one is closed in $\mathcal{C}^0 \times \mathcal{C}^0$) by a linear form $(\mu_1,\mu_2)$ (where $\mu_1$ and $\mu_2$ are Radon measures, belonging to the dual space of continuous functions) satisfying
\begin{equation*}
\begin{split} \forall (\psi_1,\psi_2) \in -\partial\left( -\bar{\lambda}_k\right)(\lambda_\eps^{\frac{2-n}{2}}Q_\eps,e^{nu_\eps}) ,
\int_M \left( \psi_1 d\mu_1 +\psi_2 d\mu_2 \right) > 0
\end{split}
\end{equation*}
$$ \forall (a,b) \in \mathcal{C}^0(M) ; a \leq 0, b\leq 0, \int_M( a d\mu_1 + bd\mu_2) \leq 0 $$
The second condition implies that $\mu_1$ and $\mu_2$ are non-negative Radon measures. Up to a renormalization, we assume that $\int_M d\mu_1 + \int_M d\mu_2 = 1$ and we obtain a contradiction. Therefore we obtain the following Palais-Smale condition: there is a map $\Phi_\eps = (\phi_1^\eps,\cdots, \phi_{p_\eps}^\eps)$ such that
$$ - div_g\left( \lambda_\eps^{\frac{2-n}{2}}Q_\eps \nabla \Phi_{\eps} \right) = \lambda_{\eps} e^{nu_{\eps}} \Phi_{\eps} $$
$$\int_{M} e^{nu_{\eps}}dv_g = \int_{M} \left\vert \Phi_{\eps} \right\vert^2 e^{nu_{\eps}} dv_g =\lambda_\eps^{-\frac{n}{2}} \int_{M} Q_\eps \left\vert \nabla\Phi_{\eps} \right\vert_g^2 dv_g = \lambda_\eps^{-\frac{n}{2}} \int_M Q_\eps^{\frac{n}{n-2}} dv_g = 1$$
$\left\vert \nabla \Phi_\eps \right\vert^2 \leq Q_\eps^{\frac{2}{n-2}} + \delta_\eps$ and
$\left\vert \Phi_\eps \right\vert^2 \geq 1 - \frac{\delta_\eps}{\lambda_\eps}$
where $\delta_\eps \to 0 $ as $\eps\to 0$.
§ CONVERGENCE OF PALAIS-SMALE SEQUENCES FOR ONE LAPLACE EIGENVALUE IN DIMENSION $n\geq 3$
In the current section, we aim at proving Theorem <ref>, as a consequence of the following
Let $e^{nu_{\eps}}$, $\lambda_{\eps}$, $\Phi_{\eps} : M \to \mathbb{R}^{p_\eps}$ be smooth sequences satisfying the Palais-Smale assumption $(PS)$ as $\eps\to 0$, that is
* $ - div_g\left( \lambda_\eps^{\frac{2-n}{2}}Q_\eps \nabla \Phi_{\eps} \right) = \lambda_{\eps} e^{nu_{\eps}} \Phi_{\eps} $ and the eigenvalue index of $\lambda_\eps$ is uniformly bounded
* $\int_{M} e^{nu_{\eps}}dv_g = \int_{M} \left\vert \Phi_{\eps} \right\vert^2 e^{nu_{\eps}} dv_g =\lambda_\eps^{-\frac{n}{2}} \int_{M} Q_\eps \left\vert \nabla\Phi_{\eps} \right\vert_g^2 dv_g = \lambda_\eps^{-\frac{n}{2}} \int_M Q_\eps^{\frac{n}{n-2}} dv_g = 1$
* $\left\vert \nabla \Phi_\eps \right\vert_g^2 \leq Q_\eps^{\frac{2}{n-2}} + \delta_\eps$ uniformly where $\delta_{\eps}\to 0$ as $\eps\to 0$.
* $\left\vert \Phi_\eps \right\vert^2 \geq 1 - \delta_\eps$ uniformly, where $\delta_{\eps}\to 0$ as $\eps\to 0$.
Then, up to a subsequence and rearrangement of coordinates of $\Phi_\eps$, $p_\eps \to p$, $\lambda_\eps\to \lambda$ and $\Phi_{\eps}$ bubble tree converges in $W^{1,n}$ to $\Phi_0 : M \to \mathbb{S}^p$, $\Phi_j : \mathbb{S}^n \to \mathbb{S}^p$ for $j=1,\cdots,l$ ($l\geq 0$) with an energy identity:
$$ \lambda^{\frac{n}{2}} = \int_{M} \left\vert \nabla \Phi_{0} \right\vert_g^n dv_g + \sum_{j=1}^l \int_{\mathbb{S}^n} \left\vert \nabla \Phi_{j} \right\vert_h^n dv_h $$
Moreover, $\Phi_j$ are $\mathcal{C}^{0,\alpha}$ $n$-harmonic maps for $j=0,\cdots,l$ and their coordinates are eigenfunctions associated to $\lambda$ on the manifold $M \cup \bigcup_{1 \leq j\leq l} \mathbb{S}^n$ endowed with the generalized metrics $\frac{\left\vert \nabla \Phi_0 \right\vert^2}{\lambda}g$ on $M$ and $\frac{\left\vert \nabla \Phi_j \right\vert^2}{ \lambda }h$ on the $j$-th sphere $\mathbb{S}^n$.
All along the proof, every local computation is made in the exponential chart centered at a point $p \in M$, defined on balls whose radius is controlled by the injectivity radius with respect to $g$: $inj_g(M)$. Without loss of generality, we can assume that $inj_g(M) \geq 2$ and make arguments on the unit ball $\mathbb{B}_n$ endowed with the metric $\exp_{p}^\star g$, still denoted $g$. We do not change the notations of the metrics and functions in the charts. Moreover, when there is no ambiguity, we do not specify the measure of integration $dv_g$ associated to $g$ inside the integrals, in order to lighten the computations.
§.§ A first bubble tree structure
For a Radon measure $\mu$ on $M$ (or $\mathbb{R}^n$), we denote by $\tilde{\mu}$ its continuous part, that is, the measure $\tilde{\mu}$ which does not have any atom and such that $\mu - \tilde{\mu}$ is a (possibly infinite) linear combination of Dirac masses.
Let $Q_\eps$, $e^{nu_\eps}$ be sequences of smooth positive weights such that
$\int_{M} e^{nu_{\eps}}dv_g = \int_M Q_\eps^{\frac{n}{n-2}} dv_g = 1$ and
$$ \liminf_{\eps\to 0} \lambda_k(Q_\eps, e^{nu_\eps}) > 0 $$
Then, up to a subsequence as $\eps\to 0$, there are an integer $t\geq 0$ and, if $t>0$, sequences of points $q_\eps^i$ for $i= 1,\cdots, t$ and scales $\alpha_\eps^t \leq \cdots \leq \alpha_\eps^1 \to 0$ as $\eps \to 0$ such that: $ e^{nu_\eps}$ converges weakly-${\star}$ in $M$ to a non-negative Radon measure $\mu_0$; and for any $i$, $e^{n u_\eps\left( \alpha_\eps^i z + q_\eps^i \right)} $ converges weakly-${\star}$ in (any compact set of) $\mathbb{R}^n$ to a non-negative and non-zero Radon measure $\mu_i $ on $\mathbb{R}^n$. Moreover, the associated continuous parts preserve the mass in the limit:
$$\lim_{\eps\to 0} \int_M e^{nu_\eps}dv_g = \int_M d\tilde{\mu}_0 + \sum_{i=1}^{t} \int_{\R^n} d\tilde{\mu}_i = 1 \text{ and } \forall i\geq 1, \int_{\R^n} d\tilde{\mu}_i >0 $$
and we have that $1 \leq \mathbf{1}_{\tilde{\mu}_0 \neq 0} + t \leq k$. Finally if we set $F_i = \{ j> i ; \left(\frac{d_g(q_i^\eps,q_j^\eps)}{\alpha_i^\eps}\right) \text{ is bounded} \}$, we have for $j>i$,
$$ j\in F_i \Rightarrow \frac{\alpha_j^\eps}{\alpha_i^\eps} \to 0 \text{ and } j\notin F_i \Rightarrow \frac{d_g(q_i^\eps,q_j^\eps)}{\alpha_i^\eps} \to +\infty $$
In other words, the last condition reads as
$$\frac{\alpha_i^\eps}{\alpha_j^\eps} +
\frac{\alpha_j^\eps}{\alpha_i^\eps} +
\frac{d_g(q_i^\eps,q_j^\eps)}{\alpha_i^\eps + \alpha_j^\eps}
\to +\infty $$
which is the standard condition for a bubble tree. The rest of section <ref> is devoted to proving that the continuous measures $\tilde\mu_i$ for $i\in \{0,\cdots,t\}$ are absolutely continuous with respect to $dv_g$ if $i= 0$ and to the Lebesgue measure if $i\geq 1$, with densities equal to energy densities of $n$-harmonic maps.
We choose to skip the proof of Claim <ref> because it is a simple adaptation to higher dimensions of standard arguments in dimension 2 that can be found in [25], [36], [37], [23], [22]. The selection of the scales of concentration is based on Hersch's trick, because it uses the conformal group of the sphere to balance continuous measures on a sphere. The selection stops because of the min-max characterization of $\lambda_k$: if there are at least $k+1$ scales of concentration, we can build $k+1$ independent test functions with arbitrarily small Rayleigh quotients, contradicting the first assumption of Claim <ref>.
§.§ Some convergence of $\omega_{\eps}$ to $1$ and first replacement of $\Phi_{\eps}$
We set $\omega_{\eps} = \left\vert \Phi_{\eps} \right\vert$. We first prove that, in some sense, $\omega_{\eps}$ converges to $1$ and that $\Phi_{\eps}$ has a similar $H^1$ behaviour to $\frac{\Phi_{\eps}}{\omega_{\eps}}$.
We have that
\begin{equation} \label{eqomegaepsto1dimn} \int_{M} Q_\eps \frac{\left\vert \nabla \omega_\eps \right\vert_g^2}{\omega_\eps}dv_g
= O(\delta_{\eps}) \end{equation}
\begin{equation} \label{eqradialreplacementdimn} \int_{M} Q_\eps \left\vert \nabla\left( \Phi_\eps - \frac{\Phi_\eps}{\omega_\eps} \right) \right\vert_g^2 dv_g = O\left( \delta_\eps \right) \end{equation}
and for all $q\leq n$, and for any sequence of functions $f_\eps$ in $L^{\frac{n}{n-q}}\left(M,g\right)$,
\begin{equation} \label{eqQepsPhiepsdimn} \int_{M} Q_\eps^{\frac{q}{n-2}} f_\eps dv_g = \int_{M} \left\vert \nabla \Phi_\eps \right\vert_g^q f_\eps dv_g + \| f_{\eps} \|_{L^{\frac{n}{n-q}}} O\left(\delta_\eps^{\frac{2}{n}}\right) \end{equation}
as $\eps\to 0$.
We integrate $ - div_g\left(\lambda_{\eps}^{\frac{2-n}{2}} Q_\eps \nabla \Phi_{\eps} \right) = \lambda_{\eps} e^{nu_{\eps}} \Phi_{\eps} $ against $ \Phi_\eps$ and $\frac{ \Phi_\eps}{\omega_\eps}$. We obtain
$$ \int_{M}\left\vert \Phi_\eps \right\vert^2 e^{nu_\eps} = \lambda_\eps^{-\frac{n}{2}}\int_{M} Q_\eps \left\vert \nabla\Phi_\eps \right\vert^2 $$
\begin{equation*} \begin{split}
\int_{M} \frac{\left\vert \Phi_\eps \right\vert^2}{\omega_\eps} e^{n u_\eps} = & - \lambda_{\eps}^{-\frac{n}{2}} \int_{M}\frac{\Phi_\eps}{\omega_\eps}div_g\left(Q_\eps \nabla \Phi_{\eps} \right) \\
= & \lambda_{\eps}^{-\frac{n}{2}} \int_{M} Q_\eps \left(\frac{\left\vert \nabla\Phi_\eps \right\vert^2}{\omega_\eps} - \frac{\left\vert \nabla\omega_\eps \right\vert^2}{\omega_\eps}\right) \\
= & \lambda_{\eps}^{-\frac{n}{2}} \int_{M}\omega_\eps Q_\eps \left\vert \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert^2 \end{split} \end{equation*}
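The last equality above relies on the pointwise identity, valid wherever $\omega_\eps > 0$,
$$ \omega_\eps^2 \left\vert \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert^2 = \left\vert \nabla \Phi_\eps \right\vert^2 - \left\vert \nabla \omega_\eps \right\vert^2 \hskip.1cm, $$
obtained by expanding $\nabla \frac{\Phi_\eps}{\omega_\eps} = \frac{\nabla \Phi_\eps}{\omega_\eps} - \frac{\Phi_\eps \nabla \omega_\eps}{\omega_\eps^2}$ and using that $\Phi_\eps \cdot \nabla \Phi_\eps = \omega_\eps \nabla \omega_\eps$ and $\left\vert \Phi_\eps \right\vert^2 = \omega_\eps^2$.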
\begin{equation*}
\begin{split}
\int_{M}Q_\eps \frac{\left\vert \nabla \omega_\eps \right\vert^2}{\omega_\eps}dv_g = & \int_{M}Q_\eps \left(\frac{\left\vert \nabla \Phi_\eps \right\vert^2}{\omega_\eps} -\omega_\eps \left\vert \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert^2 \right)dv_g \\
= & \int_{M} \left( Q_\eps \frac{\left\vert \nabla \Phi_\eps \right\vert^2}{\omega_\eps} - \lambda_\eps^{\frac{n}{2}} e^{nu_\eps}\frac{\left\vert \Phi_\eps \right\vert^2}{\omega_\eps} \right)dv_g \\
= & \int_{M}\left(\frac{Q_\eps\left\vert \nabla \Phi_\eps \right\vert^2}{\omega_\eps} -Q_\eps\left\vert \nabla \Phi_\eps \right\vert^2 \right) - \lambda_\eps^{\frac{n}{2}} \int_{M} e^{nu_\eps} \left(\frac{\left\vert \Phi_\eps \right\vert^2}{\omega_\eps} - \left\vert \Phi_\eps \right\vert^2 \right) \\
= & \int_{M}\frac{Q_\eps\left\vert \nabla \Phi_\eps \right\vert^2}{\omega_\eps} \left( 1 - \omega_\eps \right) + \lambda_\eps^{\frac{n}{2}}\int_{M} e^{nu_\eps}(1 -\omega_{\eps}) \\
\end{split}
\end{equation*}
noticing the crucial equality $\int_{M} e^{nu_\eps}\omega_\eps^2 = \int_{M} \left\vert \Phi_\eps \right\vert^2 e^{nu_\eps} = \int_{M} e^{nu_\eps} = 1 $, which follows from the normalizations satisfied by the Palais-Smale sequence. We know that $ \omega_\eps - 1 \geq \sqrt{1-\delta_{\eps}} - 1$, so that
\begin{equation*}
\begin{split}
\int_{M}Q_\eps \frac{\left\vert \nabla \omega_\eps \right\vert^2}{\omega_\eps}dv_g
\leq & \int_{M}Q_\eps \left\vert \nabla \Phi_\eps \right\vert^2 \frac{1- \sqrt{1-\delta_\eps}}{\sqrt{1-\delta_\eps}} + \lambda_\eps^{\frac{n}{2}} \left(1-\sqrt{1-\delta_{\eps}} \right) = O\left( \delta_\eps \right)
\end{split}
\end{equation*}
and we obtain (<ref>).
Let's prove (<ref>). We have that
$$ \nabla\left( \Phi_\eps - \frac{\Phi_\eps}{\omega_\eps} \right) = \left(1-\frac{1}{\omega_\eps}\right) \nabla \Phi_\eps + \frac{\nabla \omega_\eps}{\omega_\eps^2} \Phi_\eps$$
and then
$$ \left\vert \nabla\left( \Phi_\eps - \frac{\Phi_\eps}{\omega_\eps}\right) \right\vert^2 = \left\vert \nabla \Phi_\eps \right\vert^2 \left( 1-\frac{1}{\omega_\eps} \right)^2 + \frac{\left\vert \nabla \omega_\eps \right\vert^2}{\omega_\eps^2} + 2 \frac{\left\vert \nabla \omega_\eps \right\vert^2}{\omega_\eps} \left( 1-\frac{1}{\omega_\eps} \right) $$
so that
\begin{equation*}
\int_{M}Q_\eps \left\vert \nabla\left( \Phi_\eps - \frac{\Phi_\eps}{\omega_\eps} \right) \right\vert^2 \leq \int_{M} Q_\eps \left\vert \nabla \Phi_\eps \right\vert_g^{2} \left( 1-\frac{1}{\omega_\eps} \right)^2 + O\left( \delta_\eps\right)
\end{equation*}
as $\eps\to 0$ and
\begin{equation*}
\begin{split}
\int_{M} Q_\eps \left\vert \nabla \Phi_\eps \right\vert^2 & \left( 1-\frac{1}{\omega_\eps} \right)^2 = - \int_{M} div\left( Q_\eps \nabla \Phi_\eps \left( 1-\frac{1}{\omega_\eps} \right)^2 \right) \Phi_\eps \\
= & \lambda_\eps^{\frac{n}{2}} \int_{M} \left( 1-\frac{1}{\omega_\eps} \right)^2 \left\vert \Phi_\eps \right\vert^2e^{nu_\eps} + 2\int_{M}Q_\eps \nabla\left(\frac{1}{\omega_\eps}\right) \omega_\eps \nabla \omega_\eps \left(1-\frac{1}{\omega_\eps}\right) \\
= & \lambda_\eps^{\frac{n}{2}} \int_{M} e^{nu_\eps} \left(\omega_\eps^2 + \frac{1}{\omega_\eps} - 2\right) - 2\int_{M} Q_\eps \frac{\left\vert \nabla \omega_\eps \right\vert^2}{\omega_\eps^2} \left(\omega_\eps-1\right) \\
= & \lambda_\eps^{\frac{n}{2}} \int_{M} e^{nu_\eps} \left(\frac{1-\omega_\eps}{\omega_\eps} \right) dv_g + O\left( \delta_\eps^{2}\right) = O\left(\delta_\eps\right) \\
\end{split}
\end{equation*}
as $\eps\to 0$, where we crucially used again $ \int_{M} e^{nu_\eps}\omega_\eps^2 = \int_{M} e^{nu_\eps} = 1$. We then obtain (<ref>). Now, let's prove (<ref>). We define $A_\eps$ by
$$ A_\eps^2 = Q_\eps^{\frac{2}{n-2}} +\delta_\eps - \left\vert \nabla \Phi_\eps \right\vert^2 $$
$A_\eps$ is a non-negative function and we know that
$$ \int_M \left\vert \nabla \Phi_\eps \right\vert^2_g Q_\eps dv_g = \int_M Q_\eps^{\frac{n}{n-2}} dv_g = 1. $$
Therefore, multiplying $A_\eps^2$ by $Q_\eps$ and integrating, we obtain
$$ \int_M A_\eps^2 Q_\eps dv_g = O\left( \delta_\eps \right) \text{ as }\eps\to 0.$$
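Indeed, writing $A_\eps^2 Q_\eps = Q_\eps^{\frac{n}{n-2}} + \delta_\eps Q_\eps - \left\vert \nabla \Phi_\eps \right\vert^2 Q_\eps$ and using the two normalizations above, we get
$$ \int_M A_\eps^2 Q_\eps \, dv_g = 1 + \delta_\eps \int_M Q_\eps \, dv_g - 1 = \delta_\eps \int_M Q_\eps \, dv_g = O\left( \delta_\eps \right) \hskip.1cm, $$
since $\int_M Q_\eps \, dv_g \leq V_g(M)^{\frac{2}{n}} \left( \int_M Q_\eps^{\frac{n}{n-2}} \, dv_g \right)^{\frac{n-2}{n}} = V_g(M)^{\frac{2}{n}}$ is bounded by Hölder's inequality.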
Now, we also obtain that
$$ \int_M A_\eps^2 \left( \left\vert \nabla \Phi_\eps \right\vert^2 + A_\eps^2\right)^{\frac{n-2}{2}} = \int_M A_\eps^2\left(Q_\eps^{\frac{2}{n-2}}+ \delta_\eps\right)^{\frac{n-2}{2}} \leq c_n \left( \int_M A_\eps^2 Q_\eps + \delta_\eps^{\frac{n-2}{2}} \int_M A_\eps^2 \right) $$
for a constant $c_n$ depending only on the dimension, so that by bounding the left-hand side from below we obtain
$$ \int_M A_\eps^2 \left\vert \nabla \Phi_\eps \right\vert_g^{n-2} + \int_M A_\eps^n = O\left( \delta_\eps + \delta_\eps^{\frac{n-2}{2}} \int_M A_\eps^2\right) $$
and in particular, we deduce
$$ \int_M A_\eps^n = O\left( \delta_\eps\right) $$
Now, writing
$$ Q_\eps^{\frac{q}{n-2}} - \left\vert \nabla \Phi_\eps \right\vert^{q} = Q_\eps^{\frac{q}{n-2}} - \left(Q_\eps^{\frac{2}{n-2}} + \delta_\eps\right)^{\frac{q}{2}} + \left(A_\eps^2 + \left\vert \nabla \Phi_\eps \right\vert^2\right)^{\frac{q}{2}} - \left\vert \nabla \Phi_\eps \right\vert^{q} $$
we obtain that
$$ \int_M \left(Q_\eps^{\frac{q}{n-2}} - \left\vert \nabla \Phi_\eps \right\vert^{q}\right) f_\eps = \| f_{\eps} \|_{L^{\frac{n}{n-q}}} \left(O\left(\delta_\eps\right) + O\left(\delta_\eps^{\frac{2}{n}}\right) \right) $$
which completes the proof of the claim.
From the previous claim and assumptions on the Palais-Smale sequence, we already deduce a global $W^{1,q}$ convergence of $\frac{\Phi_\eps}{\omega_\eps}$ for any $q< n$.
Up to a subsequence, there is a map $\Phi_0 : M \to \mathbb{S}^{\mathbb{N}}$ such that
$$ \left\vert \Phi_0 \right\vert^2 := \sum_{i=1}^{+\infty} \left(\phi_0^i\right)^2 = 1 \text{ a.e. and } \int_M \left\vert \nabla \Phi_0 \right\vert_g^n dv_g \leq \limsup_{\eps \to 0} \int_M \left\vert \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert_g^n dv_g $$
where $\left\vert \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert_g^2 := \sum_{i=1}^{+\infty} \left\vert \nabla\frac{\phi_\eps^i}{\omega_\eps}\right\vert_g^2$ and such that for any $1\leq p < +\infty$, $1\leq q < n$,
$$ \int_M \left( \left\vert \frac{\Phi_\eps}{\omega_\eps} -\Phi_0 \right\vert^p + \left\vert \nabla \left(\frac{\Phi_\eps}{\omega_\eps}-\Phi_0 \right) \right\vert_g^{q} \right)dv_g \to 0 $$
as $\eps \to 0$.
We use Claim <ref> in the appendix where we extend $\Phi_\eps$ by $\phi_\eps^i = 0$ for $i \geq p_\eps + 2$. We have that
$$ -div_g\left( \left\vert \nabla\frac{ \Phi_\eps}{\omega_\eps} \right\vert^{n-2}\nabla \frac{\Phi_\eps}{\omega_\eps} \right) = \lambda_\eps^{\frac{n}{2}} e^{nu_\eps} \Phi_\eps - div_g\left( \left\vert \nabla\frac{ \Phi_\eps}{\omega_\eps} \right\vert^{n-2}\nabla \frac{\Phi_\eps}{\omega_\eps} - Q_\eps \nabla \Phi_\eps \right) $$
We notice that $A_\eps = \lambda_\eps^{\frac{n}{2}} e^{nu_\eps} \Phi_\eps $ is such that $\left(\left\vert A_\eps \right\vert \right)_\eps $ is bounded in $L^1$, and it remains to prove that
$$ B_\eps = - div_g\left( \left\vert \nabla\frac{ \Phi_\eps}{\omega_\eps} \right\vert^{n-2}\nabla \frac{\Phi_\eps}{\omega_\eps} - Q_\eps \nabla \Phi_\eps \right) $$
satisfies $\| B_{\eps} \|_{W^{-1,n}} \to 0$ as $\eps \to 0$. Let $\eta : M \to \mathbb{R}^{\mathbb{N}}$ be such that
$$ \int_M \left( \left\vert \eta \right\vert^n + \left\vert \nabla\eta \right\vert_g^{n} \right)dv_g < +\infty. $$
We have by an integration by parts that
\begin{equation*}
\begin{split} \int_M B_\eps . \eta dv_g = & \int_M \nabla \eta \left( \left\vert \nabla\frac{ \Phi_\eps}{\omega_\eps} \right\vert^{n-2}\nabla \frac{\Phi_\eps}{\omega_\eps} - Q_\eps \nabla \Phi_\eps \right) \\
= & \int_M \left( \left\vert \nabla\frac{ \Phi_\eps}{\omega_\eps} \right\vert^{n-2} - \left\vert \nabla\Phi_\eps \right\vert^{n-2} \right) \nabla \eta \nabla \frac{\Phi_\eps}{\omega_\eps} + \int_M \left( \left\vert \nabla\Phi_\eps \right\vert^{n-2} - Q_\eps \right) \nabla \eta \nabla \frac{\Phi_\eps}{\omega_\eps} \\
& + \int_M Q_\eps \nabla \eta . \nabla\left( \Phi_\eps - \frac{\Phi_\eps}{\omega_\eps}\right) dv_g
\end{split}
\end{equation*}
The second right-hand term satisfies by (<ref>) and then a Hölder inequality
$$ \left\vert \int_M \left( \left\vert \nabla\Phi_\eps \right\vert^{n-2} - Q_\eps \right) \nabla \eta \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert \leq O\left( \delta_\eps^{\frac{2}{n}} \right) \| \left\vert \nabla \eta \right\vert \|_{L^n} . $$
The third right-hand term satisfies by a Hölder inequality, then (<ref>), and then another Hölder inequality,
$$ \left\vert \int_M Q_\eps \nabla \eta . \nabla\left( \Phi_\eps - \frac{\Phi_\eps}{\omega_\eps}\right) dv_g \right\vert \leq O\left( \delta_\eps \right) \| \left\vert \nabla \eta \right\vert \|_{L^n} . $$
We write the first right-hand term as
$$ \left\vert \int_M \left( \left\vert \nabla\frac{ \Phi_\eps}{\omega_\eps} \right\vert^{n-2} - \left\vert \nabla\Phi_\eps \right\vert^{n-2} \right) \nabla \eta \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert \leq c_n \int_M \left\vert \nabla\left( \Phi_\eps - \frac{\Phi_\eps}{\omega_\eps} \right) \right\vert_g \left\vert \nabla \Phi_\eps \right\vert_g^{n-2} \left\vert \nabla \eta \right\vert_g $$
for a constant $c_n$ depending only on the dimension $n$ where we used that
$$ \left\vert \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert_g^2 \leq \frac{\left\vert \nabla \Phi_\eps \right\vert_g^2}{1-\delta_\eps} \leq 2 \left\vert \nabla \Phi_\eps \right\vert_g^2 $$
for $\eps$ small enough. Then, we have
\begin{equation*}
\begin{split}
\int_M \left\vert \nabla\left( \Phi_\eps - \frac{\Phi_\eps}{\omega_\eps} \right) \right\vert_g \left\vert \nabla \Phi_\eps \right\vert_g^{n-2} \left\vert \nabla \eta \right\vert_g = & \int_M \left\vert \nabla\left( \Phi_\eps - \frac{\Phi_\eps}{\omega_\eps} \right) \right\vert_g \left( \left\vert \nabla \Phi_\eps \right\vert_g^{n-2} - Q_\eps \right) \left\vert \nabla \eta \right\vert_g \\
&+ \int_M \left\vert \nabla\left( \Phi_\eps - \frac{\Phi_\eps}{\omega_\eps} \right) \right\vert_g Q_\eps \left\vert \nabla \eta \right\vert_g
\end{split}
\end{equation*}
and as before, we apply (<ref>) and then a Hölder inequality for the first term and Hölder inequalities and (<ref>) for the second term to obtain
$$ \left\vert \int_M \left( \left\vert \nabla\frac{ \Phi_\eps}{\omega_\eps} \right\vert^{n-2} - \left\vert \nabla\Phi_\eps \right\vert^{n-2} \right) \nabla \eta \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert \leq O\left( \delta_\eps^{\frac{2}{n}} \right) \| \left\vert \nabla \eta \right\vert \|_{L^n} $$
and gathering all the previous inequalities gives the Claim.
In the next subsections, we aim at proving that the strong convergence holds in a better space, in order to pass to the limit on the sequence of equations satisfied by $\Phi_\eps$.
§.§ Regularity of the limiting measures
We choose to prove in detail the regularity of $\tilde\mu_0$ on $M$, meaning that it is absolutely continuous with respect to $dv_g$ and that its density is $\mathcal{C}^{0,\alpha}$, being the energy density of a $n$-harmonic map.
The proof of the regularity of the measures $\tilde\mu_i$ in $\mathbb{R}^n$ is similar, because locally, all the regularity estimates we make hold for any rescalings of the involved functions centered at $q_\eps^i$ with scale $\alpha_\eps^i$. Just notice that since the metric rescales as $g\left(\alpha_\eps^i z + q_\eps^i\right)$, it converges to a Euclidean metric as $\eps\to 0$. Up to a reverse stereographic projection and thanks to the point removability property of $n$-harmonic maps, the regular measures in $\mathbb{R}^n$ give $\mathcal{C}^{0,\alpha}$ conformal factors for the round sphere.
§.§.§ Selection of bad points
In the following, we perform local regularity estimates on $(\Phi_\eps)$. These estimates can only be done far from "bad points" we select in Claim <ref>.
For $\Omega \subset M$ a domain of $M$, we set
$$ \lambda_\star(\Omega, e^{nu_\eps}, Q_\eps ) = \inf_{\varphi \in \mathcal{C}_c^{\infty}\left(\Omega\right) } \frac{\int_\Omega \left\vert \nabla \varphi \right\vert_g^2 Q_\eps dv_g }{\int_\Omega \varphi^2 e^{nu_\eps}dv_g }. $$
Then, knowing that $\lambda_\eps$ is a $k$-th eigenvalue on $M$ we have:
Up to a subsequence, there are $0< r_\star <1$ and a set of at most $k$ bad points $B = \{p_1,\cdots, p_s\} \subset M$ such that for any $p \in M\setminus\{p_1,\cdots,p_s\}$ and any $r < \min\left( r_\star, d_g(p,\{p_1,\cdots,p_s\} ) \right) $, we have
$$ \lambda_\star\left( \mathbb{B}_r(p), e^{nu_\eps}, Q_\eps \right) \geq \lambda_\eps^{\frac{2}{n}} . $$
Notice that the atoms of $\mu_0$ belong to this set of points $\{p_1,\cdots,p_s\}$.
We set
$$ r_\eps^1 = \inf\{ r>0 ; \exists p\in M, \lambda_\star\left( \mathbb{B}_r(p), e^{nu_\eps}, Q_\eps \right) < \lambda_\eps^{\frac{2}{n}} \}. $$
If $r_\eps^1$ does not converge to $0$, then up to a subsequence, there is $r_\star$ such that $r_\eps^1 > r_\star$ and Claim <ref> is proved for this $r_\star$ and $B= \emptyset$.
If $r_\eps^1 \to 0$, then we let $p_1^\eps$ be such that $ \lambda_\star\left( \mathbb{B}_{r_\eps^1}(p^1_\eps), e^{nu_\eps}, Q_\eps \right) = \lambda_\eps^{\frac{2}{n}} $.
By induction, assume that for $j \in \mathbb{N}$ we have constructed $r_\eps^1 \leq r_\eps^2 \leq \cdots \leq r_\eps^{j-1}$ such that $r_\eps^{j-1}\to 0$ and points $p^1_\eps,\cdots,p^{j-1}_\eps$ such that
$$ \forall i_1 \neq i_2, \mathbb{B}_{r^{i_1}_\eps}(p^{i_1}_\eps) \cap \mathbb{B}_{r^{i_2}_\eps}(p^{i_2}_\eps) = \emptyset \text{ and } \forall i, \lambda_\star\left( \mathbb{B}_{r_\eps^i}(p^i_\eps), e^{nu_\eps}, Q_\eps \right) = \lambda_\eps^{\frac{2}{n}}$$
then we set
$$ r_\eps^j = \inf\{ r>0 ; \exists p\in M, \forall i, \mathbb{B}_r(p) \cap \mathbb{B}_{r^i_\eps}(p^i_\eps) = \emptyset \text{ and } \lambda_\star\left( \mathbb{B}_r(p), e^{nu_\eps}, Q_\eps \right) < \lambda_\eps^{\frac{2}{n}} \} $$
Then if $r_\eps^j $ does not converge to $0$, up to a subsequence there is $r_\star$ such that $r_\eps^j > r_\star $, and Claim <ref> is proved for this $r_\star$ and $B= \{p_1,\cdots, p_{j-1}\}$, where up to a subsequence we take $p_1,\cdots, p_{j-1}$ as the limits of $p_1^\eps,\cdots, p_{j-1}^\eps$ as $\eps \to 0$. If $r_\eps^j \to 0$, then let $p_j^\eps$ be such that $ \lambda_\star\left( \mathbb{B}_{r_\eps^j}(p^j_\eps), e^{nu_\eps}, Q_\eps \right) = \lambda_\eps^{\frac{2}{n}} $ and $\mathbb{B}_{r_\eps^j}(p_\eps^j) \cap \mathbb{B}_{r_\eps^i}(p_\eps^i)=\emptyset$ for $i<j$.
This induction process has to stop: indeed, if we have constructed $r_\eps^1 \leq r_\eps^2 \leq \cdots \leq r_\eps^{k+1}$ such that $r_\eps^{k+1}\to 0$ and points $p^1_\eps,\cdots,p^{k+1}_\eps$ such that
$$ \forall i_1 \neq i_2, \mathbb{B}_{r^{i_1}_\eps}(p^{i_1}_\eps) \cap \mathbb{B}_{r^{i_2}_\eps}(p^{i_2}_\eps) = \emptyset \text{ and } \forall i, \lambda_\star\left( \mathbb{B}_{r_\eps^i}(p^i_\eps), e^{nu_\eps}, Q_\eps \right) = \lambda_\eps^{\frac{2}{n}}$$
then, using the first eigenfunction $\varphi_i$ associated to the eigenvalue $\lambda_\star\left( \mathbb{B}_{r_\eps^i}(p^i_\eps), e^{nu_\eps}, Q_\eps \right)$, extended by $0$ in $M \setminus \mathbb{B}_{r_\eps^i}(p^i_\eps)$, we have, by the min-max characterization of the $k$-th eigenvalue $\lambda_\eps$ on $M$ and since the $\varphi_i$ have disjoint supports and are therefore orthogonal, that
$$ \lambda_\eps \leq \max_{i=1,\cdots, k+1} \frac{\int_M \left\vert \nabla \varphi_i \right\vert^2_g Q_\eps \lambda_\eps^{\frac{n-2}{n}} dv_g}{\int_M \left(\varphi_i\right)^2 e^{nu_\eps} dv_g} \leq \lambda_\eps^{\frac{2}{n}} \lambda_\eps^{\frac{n-2}{n}} = \lambda_\eps . $$
The case of equality in the min-max characterization of $\lambda_\eps$ implies that a linear combination of the $\varphi_i$ is an eigenfunction for $\lambda_\eps$, which is impossible since it vanishes on an open set.
§.§.§ Non concentration of the energy
Away from the bad points, the energy cannot concentrate:
Let $p\in M \setminus \{p_1,\cdots, p_s\}$, then
\begin{equation} \label{noconcentdimn} \begin{split}
\lim_{r\to 0} \limsup_{\eps\to 0} \int_{\mathbb{B}_r(p)} e^{nu_{\eps}} & = \lim_{r\to 0} \limsup_{\eps\to 0}\int_{\mathbb{B}_r(p)} Q_\eps \left\vert \nabla \frac{\Phi_{\eps}}{\omega_{\eps}} \right\vert^2 \\
& = \lim_{r\to 0} \limsup_{\eps\to 0}\int_{\mathbb{B}_r(p)} Q_\eps \left\vert \nabla \Phi_{\eps} \right\vert^2 = 0 \end{split}
\end{equation}
Let $\eta \in \mathcal{C}_c^{\infty}(\mathbb{B}_{\sqrt{r}}(p))$ with $0\leq \eta \leq 1$, $\eta = 1$ in $\mathbb{B}_{r}(p)$ and $\int_{M}\left\vert \nabla \eta\right\vert^n_{g} \leq \frac{C}{\ln\left(\frac{1}{r}\right)}$. Using <ref>, we first have
\begin{equation*} \begin{split} \int_{M} \eta e^{nu_{\eps}} \leq \left( \int_{M} \eta^2 e^{nu_{\eps}}\right)^{\frac{1}{2}} & \leq \left(\frac{1}{\lambda_{\star}\left(\mathbb{B}_{\sqrt{r}}(p),e^{nu_{\eps}} , Q_\eps\right)} \int_{\mathbb{B}_{\sqrt{r}}(p)} \left\vert \nabla \eta \right\vert^2 \right)^{\frac{1}{2}} \\
& \leq \left( \frac{1}{\lambda_{\eps}^{\frac{2}{n}}} \left( \int_{\mathbb{B}_{\sqrt{r}}(p)} \left\vert \nabla \eta \right\vert^n \right)^{\frac{2}{n}} \right)^{\frac{1}{2}} \leq \left( \frac{C}{ \lambda_\eps \ln\frac{1}{r} }\right)^{\frac{1}{n}}
\end{split} \end{equation*}
Now, we integrate the equation $- div_g \left(Q_\eps \nabla \Phi_{\eps} \right) = \lambda_{\eps}^{\frac{n}{2}} e^{nu_{\eps}} \Phi_{\eps}$ against $\eta \frac{\Phi_{\eps}}{\omega_{\eps}^2}$ and we obtain
$$ \int_{M}Q_\eps \eta \left\langle \nabla{ \frac{ \Phi_{\eps}}{\omega_{\eps}^2}},\nabla\Phi_{\eps} \right\rangle + \int_{M}Q_\eps \nabla \eta \nabla{ \Phi_{\eps}} \frac{\Phi_{\eps}}{\omega_\eps^2} = \lambda_\eps^{\frac{n}{2}} \int_{M} \eta e^{nu_{\eps}} $$
so that
\begin{equation*}
\begin{split}
\int_{M} \eta Q_\eps \left\vert \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert^2 = & \lambda_\eps^{\frac{n}{2}} \int_{M} \eta e^{nu_{\eps}} - \int_{M} \eta Q_\eps \nabla\frac{\Phi_{\eps}}{\omega_{\eps}^2}\nabla \omega_{\eps} \frac{\Phi_{\eps}}{\omega_{\eps}} \\
& - \int_{M} Q_\eps \nabla \eta \nabla{\Phi_{\eps}} \frac{\Phi_{\eps}}{\omega_\eps^2} \\
\leq & \left( \frac{C}{ \ln\frac{1}{r}} \right)^{\frac{1}{n}} + \int_M \eta Q_\eps \frac{\left\vert \nabla \omega_\eps \right\vert^2 }{\omega_\eps} \\
& + \left( \int_{\mathbb{B}_{\sqrt{r}}(p)} Q_\eps \frac{\left\vert \nabla{\omega_\eps} \right\vert^2}{\omega_\eps^2} \right)^{\frac{1}{2}} \left( \int_{M} Q_\eps \left\vert \nabla\eta \right\vert^2 \right)^{\frac{1}{2}}
\end{split}
\end{equation*}
where we used that $\left\vert \Phi_\eps \right\vert =\omega_\eps$, so that
$$ \int_{ \mathbb{B}_{\sqrt{r}}(p) } \eta Q_\eps \left\vert \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert^2 \leq \left( \frac{C}{\ln\frac{1}{r}} \right)^{\frac{1}{n}} \left(1+O\left(\delta_\eps^{\frac{1}{2}}\right)\right) +O\left(\delta_{\eps} \right) $$
Now, to conclude, we have by (<ref>) that for any $p\in M$,
$$ \int_{ \mathbb{B}_{\sqrt{r}}(p) } Q_\eps \left\vert \nabla \Phi_{\eps} \right\vert^2 - \int_{ \mathbb{B}_r(p) } Q_\eps \left\vert \nabla \frac{\Phi_{\eps}}{\omega_{\eps}} \right\vert^2 \leq O\left( \delta_{\eps}^{\frac{1}{2}} \right) $$
and the proof of the Claim is complete.
§.§.§ $W^{1,n}$-convergence of eigenfunctions and $n$-harmonic replacement
Let $p\in M\setminus\{p_1,\cdots,p_s\}$ and $\rho_\eps > 0$ be such that $\mathbb{B}_{\rho_\eps}(p) \subset M\setminus\{p_1,\cdots,p_s\}$. We denote by $\Psi_\eps : \mathbb{B}_{\rho_\eps}(p) \to \mathbb{S}^{p_\eps}$ the $n$-harmonic replacement of $\frac{\Phi_\eps}{\omega_\eps}$ on $\mathbb{B}_{\rho_\eps}(p)$ and we set
$$ P_\eps = \left( \left\vert \nabla \Psi_\eps \right\vert^{2} \right)^{\frac{n-2}{2}}.$$
The following four upper estimates hold:
\begin{equation} \label{estdiffphioveromegapsidimn} \int_{\mathbb{B}_{\rho_\eps}(p)} Q_\eps \left\vert \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert^2_g - \int_{\mathbb{B}_{\rho_\eps}(p)} Q_\eps \left\vert \nabla \Psi_{\eps} \right\vert^2_g \leq O\left(\delta_\eps^{\frac{1}{2}}\right) \end{equation}
\begin{equation} \label{estdiffphipsidimn} \int_{\mathbb{B}_{\rho_\eps}(p)} Q_\eps \left\vert \nabla \Phi_\eps \right\vert^2_g - \int_{\mathbb{B}_{\rho_\eps}(p)} Q_\eps \left\vert \nabla \Psi_{\eps} \right\vert^2_g \leq O\left(\delta_\eps^{\frac{1}{2}}\right) \end{equation}
\begin{equation} \label{estdiffphipsidimnP} \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \Phi_\eps \right\vert^2_g - \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \Psi_{\eps} \right\vert^2_g \leq O\left(\delta_\eps^{\frac{1}{2}} + \delta_\eps^{\frac{2}{n}}\right) \end{equation}
\begin{equation} \label{estdiffphioveromegapsidimnP} \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert^2_g - \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \Psi_{\eps} \right\vert^2_g \leq O\left(\delta_\eps^{\frac{1}{2}} + \delta_\eps^{\frac{2}{n}}\right) \end{equation}
as $\eps \to 0$.
We first prove (<ref>).
We test the function $\frac{\Phi_{\eps}^i}{\omega_{\eps}}- \Psi_{\eps}^i$ in the variational characterization of $\lambda_{\star}:= \lambda_{\star}\left(\mathbb{B}_{\rho_\eps}(p),e^{nu_{\eps}} , \frac{Q_\eps}{\lambda_\eps^{\frac{n-2}{2}}} \right)$:
$$ \lambda_{\eps}^{\frac{n}{2}} \int_{ \mathbb{B}_{\rho_\eps}(p) } \left(\frac{\Phi_{\eps}^i}{\omega_{\eps}}- \Psi_{\eps}^i\right)^2e^{nu_{\eps}} \leq \frac{\lambda_{\eps}}{\lambda_\star} \int_{ \mathbb{B}_{\rho_\eps}(p) } Q_\eps \left\vert \nabla\left(\frac{\Phi_{\eps}^i}{\omega_{\eps}}- \Psi_{\eps}^i\right)\right\vert^2$$
and we sum over $i$ to get, thanks to Claim <ref>,
\begin{equation} \label{eqtestlambdastardimn} \begin{split} \lambda_{\eps}^{\frac{n}{2}} & \int_{ \mathbb{B}_{\rho_\eps}(p) } \left\vert \frac{\Phi_{\eps}}{\omega_{\eps}}- \Psi_{\eps}\right\vert^2e^{nu_{\eps}} \\ \leq & \int_{ \mathbb{B}_{\rho_\eps}(p) } Q_\eps \left\vert \nabla\frac{\Phi_{\eps}}{\omega_{\eps}}\right\vert^2 + \int_{ \mathbb{B}_{\rho_\eps}(p) }Q_\eps\left\vert \nabla \Psi_{\eps} \right\vert^2 - 2\int_{ \mathbb{B}_{\rho_\eps}(p) }Q_\eps \nabla\frac{\Phi_{\eps}}{\omega_{\eps}}\nabla \Psi_{\eps} \end{split} \end{equation}
Now we test the equation $ - div_g\left(\lambda_{\eps}^{\frac{2-n}{2}} Q_\eps \nabla \Phi_{\eps} \right) = \lambda_{\eps} e^{nu_{\eps}} \Phi_{\eps} $ against $\frac{\Phi_{\eps}}{\omega_{\eps}^2} - \frac{\Psi_{\eps}}{\omega_{\eps}}$ and we get, after an integration by parts, knowing that $\frac{\Phi_{\eps}}{\omega_{\eps}^2} - \frac{\Psi_{\eps}}{\omega_{\eps}} = 0$ on $\partial \mathbb{B}_{\rho_\eps}(p)$,
\begin{equation} \begin{split} \int_{\mathbb{B}_{\rho_\eps}(p)} &Q_\eps \left(\frac{1}{\omega_{\eps}}\nabla \Phi_{\eps} \nabla\left(\frac{\Phi_{\eps}}{\omega_{\eps}} - \Psi_{\eps}\right) + \nabla\frac{1}{\omega_{\eps}}\nabla \Phi_{\eps} \left(\frac{\Phi_{\eps}}{\omega_{\eps}} - \Psi_{\eps}\right) \right) \\
= & \lambda_\eps^{\frac{n}{2}} \int_{\mathbb{B}_{\rho_\eps}(p)} \left\langle \frac{\Phi_{\eps}}{\omega_{\eps}}, \frac{\Phi_{\eps}}{\omega_{\eps}} - \Psi_{\eps} \right\rangle e^{nu_{\eps}} \end{split} \end{equation}
so that,
\begin{equation} \label{eqtestagainstequationdimn}
\begin{split}
\int_{\mathbb{B}_{\rho_\eps}(p)} & Q_\eps \nabla \frac{\Phi_{\eps}}{\omega_{\eps}} \nabla\left(\frac{\Phi_{\eps}}{\omega_{\eps}} - \Psi_{\eps}\right) = \lambda_\eps^{\frac{n}{2}} \int_{\mathbb{B}_{\rho_\eps}(p)} \left\langle \frac{\Phi_{\eps}}{\omega_{\eps}}, \frac{\Phi_{\eps}}{\omega_{\eps}} - \Psi_{\eps} \right\rangle e^{nu_{\eps}} \\ & - \int_{\mathbb{B}_{\rho_\eps}(p)} Q_\eps \nabla\frac{1}{\omega_{\eps}}\nabla \Phi_{\eps} \left(\frac{\Phi_{\eps}}{\omega_{\eps}} - \Psi_{\eps}\right)
+ \int_{\mathbb{B}_{\rho_\eps}(p)} Q_\eps \Phi_{\eps} \nabla \frac{1}{\omega_{\eps}} \nabla\left(\frac{\Phi_{\eps}}{\omega_{\eps}} - \Psi_{\eps}\right)
\end{split}
\end{equation}
Knowing that $\left\vert \frac{\Phi_{\eps}}{\omega_{\eps}} \right\vert = \left\vert \Psi_\eps \right\vert$, it is clear that $2\left\langle \frac{\Phi_{\eps}}{\omega_{\eps}}, \frac{\Phi_{\eps}}{\omega_{\eps}} - \Psi_{\eps} \right\rangle = \left\vert \frac{\Phi_{\eps}}{\omega_{\eps}}- \Psi_{\eps}\right\vert^2$ and multiplying (<ref>) by $2$, we obtain
\begin{equation} \label{eqtestagainstequation2dimn}
\begin{split}
& 2 \int_{\mathbb{B}_{\rho_\eps}(p)} Q_\eps \left\vert \nabla \frac{\Phi_{\eps}}{\omega_{\eps}}\right\vert^2 - 2 \int_{\mathbb{B}_{\rho_\eps}(p)} Q_\eps \nabla \frac{\Phi_{\eps}}{\omega_{\eps}} \nabla \Psi_{\eps} = \lambda_\eps^{\frac{n}{2}} \int_{\mathbb{B}_{\rho_\eps}(p)} \left\vert \frac{\Phi_{\eps}}{\omega_{\eps}}- \Psi_{\eps}\right\vert^2 e^{nu_{\eps}}
\\ & + 2 \int_{\mathbb{B}_{\rho_\eps}(p)} Q_\eps \frac{\nabla\omega_{\eps}}{\omega_{\eps}} \left( \frac{\nabla \Phi_{\eps}}{\omega_{\eps}} \left(\frac{\Phi_{\eps}}{\omega_{\eps}} - \Psi_{\eps}\right) - \frac{\Phi_{\eps}}{\omega_{\eps}} \nabla\left(\frac{\Phi_{\eps}}{\omega_{\eps}} - \Psi_{\eps} \right) \right) \\
\leq & \lambda_\eps^{\frac{n}{2}} \int_{\mathbb{B}_{\rho_\eps}(p)} \left\vert \frac{\Phi_{\eps}}{\omega_{\eps}}- \Psi_{\eps}\right\vert^2 e^{nu_{\eps}} + O\left( \delta_{\eps}^{\frac{1}{2}}\right)
\end{split}
\end{equation}
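For completeness, the identity invoked above is elementary: for any two vectors $a$, $b$ of the same norm (here $a = \frac{\Phi_{\eps}}{\omega_{\eps}}$ and $b = \Psi_\eps$, both of norm $1$), expanding the square gives
$$ \left\vert a - b \right\vert^2 = \left\vert a \right\vert^2 - 2\left\langle a, b \right\rangle + \left\vert b \right\vert^2 = 2\left\vert a \right\vert^2 - 2\left\langle a, b \right\rangle = 2\left\langle a, a - b \right\rangle . $$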
Summing (<ref>) and (<ref>), we get (<ref>). Then, (<ref>) easily follows from (<ref>) and (<ref>) by a triangle inequality. Now let's prove (<ref>). We have that
\begin{equation*} \begin{split} \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \Phi_\eps \right\vert^2_g - \int_{\mathbb{B}_{\rho_\eps}(p)} & P_\eps \left\vert \nabla \Psi_\eps \right\vert_g^{2} = \int_{\mathbb{B}_{\rho_\eps}(p)} Q_\eps \left\vert \nabla \Phi_\eps \right\vert^2_g - \int_{\mathbb{B}_{\rho_\eps}(p)} Q_\eps \left\vert \nabla \Psi_\eps \right\vert_g^{2} \\
& + \int_{\mathbb{B}_{\rho_\eps}(p)} \left(\left\vert \nabla \Psi_\eps\right\vert^{n-2} - \left\vert \nabla \Phi_\eps\right\vert^{n-2}\right) \left( \left\vert \nabla \Phi_\eps \right\vert^2_g - \left\vert \nabla\Psi_\eps \right\vert^2_g \right) \\
& + \int_{\mathbb{B}_{\rho_\eps}(p)} \left(\left\vert \nabla \Phi_\eps\right\vert^{n-2} - Q_\eps \right) \left( \left\vert \nabla \Phi_\eps \right\vert^2_g - \left\vert \nabla\Psi_\eps \right\vert^2_g \right) \\
\leq & O\left( \delta_\eps^{\frac{1}{2}}\right) + O\left( \delta_\eps^{\frac{2}{n}} \right)
\end{split} \end{equation*}
where we used (<ref>) and (<ref>). Finally, we prove (<ref>). We have by (<ref>) that
\begin{equation*} \begin{split} \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \frac{\Phi_{\eps}}{\omega_{\eps}} \right\vert^2_g - \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \Psi_\eps \right\vert_g^{2} \leq & O\left(\delta_\eps^{\frac{1}{2}} + \delta_\eps^{\frac{2}{n}} \right) + \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left( \left\vert \nabla \frac{\Phi_{\eps}}{\omega_{\eps}} \right\vert^2_g - \left\vert \nabla\Phi_\eps \right\vert^2_g \right) \\
\leq & O\left( \delta_\eps^{\frac{1}{2}} + \delta_\eps^{\frac{2}{n}}\right) + \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left(1-\omega_\eps^2\right) \left\vert \nabla \frac{\Phi_{\eps}}{\omega_{\eps}} \right\vert^2_g \\
\leq & O\left( \delta_\eps^{\frac{1}{2}} + \delta_\eps^{\frac{2}{n}} + \delta_\eps\right)
\end{split} \end{equation*}
since $\omega_\eps^2 \geq 1-\delta_\eps$, and we obtain (<ref>). The proof is complete.
There is $\gamma_{n,g} >0$ such that if
$$\int_{\mathbb{B}_{\rho}(p)} Q_{\eps}^{\frac{n}{n-2}}dv_g \leq \gamma_{n,g} $$
then there is $\frac{\rho}{2} <\rho_\eps < \rho $ such that the $n$-harmonic replacement $\Psi_{\eps} : \mathbb{B}_{\rho_\eps}(p) \to \mathbb{S}^{p_\eps}$ of $\frac{\Phi_\eps}{\omega_\eps}$ satisfies, for any $r < \frac{{\rho}}{4}$:
\begin{equation} \label{mainestdimn} \int_{\mathbb{B}_r(p)} \left(P_\eps + Q_\eps\right) \left\vert \nabla \left( \Phi_\eps - \Psi_{\eps}\right) \right\vert^2_g \to 0 \end{equation}
as $\eps \to 0$.
STEP 1: Up to reducing $\gamma_{n,g}>0$, there is $\frac{\rho}{2}<\rho_\eps < \rho$ such that, up to a rotation of the coordinates of $\Psi_\eps$, the first coordinate of the $n$-harmonic replacement $\Psi_\eps : \mathbb{B}_{\rho_\eps}(p) \to \mathbb{S}^{p_\eps}$ is uniformly bounded from below:
\begin{equation} \label{eqphiepsuniformlylowerbounded}
\forall x \in \mathbb{B}^{n}_{\rho_\eps}(p) , \psi_\eps^{1}(x ) \geq \frac{1}{2}. \end{equation}
Proof of Step 1: We apply the Courant-Lebesgue lemma to obtain $\frac{\rho}{2} < \rho_\eps < \rho$ such that
\begin{equation*}
\begin{split} \int_{\mathbb{S}^{n-1}}\left( \sum_{i=1}^{p_\eps+1}\left\vert \partial_\theta \psi_\eps^{i} \left( \rho_\eps \theta \right) \right\vert^{2} \right)^{\frac{n}{2}} d\theta_{\mathbb{S}^{n-1}} \leq & \frac{1}{\ln 2} \int_{\mathbb{B}_{\rho}(p)} \left( \sum_{i=1}^{p_\eps+1} \left\vert \nabla \psi_\eps^{i} \left( y \right) \right\vert^{2} \right)^{\frac{n}{2}} dy \\
\leq & \frac{C_g}{\ln 2} \int_{\mathbb{B}_{\rho}(p)} \left\vert \nabla \Psi_\eps \right\vert^{n}_g dv_g
\end{split}
\end{equation*}
for a constant $C_g$ depending only on $g$. Now, for a given $1\leq i_0\leq p_\eps+1$, the classical Morrey-Sobolev embedding provides a constant $a_n$ such that for any $z,z' \in \mathbb{S}^{n-1}$
\begin{equation*}
\begin{split}
\left\vert \phi_\eps^{i_0}(\rho_\eps z+p) - \phi_\eps^{i_0}(\rho_\eps z'+p) \right\vert \leq & a_n \left( \int_{\mathbb{S}^{n-1}}\left\vert \partial_\theta \phi_\eps^{i_0} \left( \rho_\eps\theta \right) \right\vert^{n} d\theta_{\mathbb{S}^{n-1}} \right)^{\frac{1}{n}} d_{\mathbb{S}^{n-1}}(z,z')^{\frac{1}{n}} \\
\leq & a_n 2^{\frac{1}{n}} \frac{C_g}{\ln 2} \left( \int_{\mathbb{B}_{\rho}(p)} \left\vert \nabla \Phi_\eps \right\vert^{n}_g dv_g\right)^{\frac{1}{n}} \\
\leq & \frac{1}{8}
\end{split}
\end{equation*}
where we choose $a_n 2^{\frac{1}{n}} \frac{C_g}{\ln 2} \gamma_{n,g}^{\frac{1}{n}} \leq \frac{1}{8}$. Now, let $z_0^\eps \in \mathbb{S}^{n-1}$ be such that
$$\omega_\eps(\rho_\eps z_0^\eps + p) = \max_{z\in \mathbb{S}^{n-1}} \omega_\eps(\rho_\eps z + p)$$
Then, up to a rotation of $\Phi_\eps$, we assume that $\phi_\eps^{1}(\rho_\eps z_0^\eps+p) = \omega_\eps(\rho_\eps z_0^\eps+p)$ and $\phi_\eps^{i}(\rho_\eps z_0^\eps+p) = 0$ for $i \neq 1$. Knowing in addition that $\omega_\eps^2 \geq 1-\delta_\eps$, this implies that
\begin{equation} \label{eqlowerboundpsieps1boundary}
\forall z \in \mathbb{S}^{n-1} , \frac{\phi_{\eps}^{1}(\rho_\eps z+p)}{\omega_\eps(\rho_\eps z+p)} \geq \frac{\phi_\eps^{1}(\rho_\eps z_0^\eps+p)-\frac{1}{8 }}{\omega_\eps(\rho_\eps z_0^\eps+p)} \geq 1 - \frac{1}{8\sqrt{1-\delta_\eps}} \geq \frac{3}{4} . \end{equation}
Now, let's focus on the $n$-harmonic replacement $\Psi_\eps$ of $\frac{\Phi_\eps}{\omega_\eps}$ on $\mathbb{B}_{\rho_\eps}(p)$. We define the following extension of $\Psi_\eps$ to $\mathbb{R}^n$ by setting $\widetilde{\Psi_\eps}(\rho_\eps z) = \Psi_\eps\left(\rho_\eps \frac{z}{\left\vert z \right\vert^2}\right)$ if $\left\vert z \right\vert\geq 1$, so that, by conformal invariance of the $n$-energy,
$$ \int_{\mathbb{R}^n} \left\vert \nabla \widetilde{\Psi_\eps} \right\vert^n_g dv_g \leq 2c_g \int_{\mathbb{B}_{\rho_\eps}(p)} \left\vert \nabla \Psi_\eps \right\vert^n_gdv_g \leq 2 c_g \gamma_{n,g} $$
Now let $x_\eps$ be such that $\psi_\eps^1(x_\eps) = \min_{x\in \overline{\mathbb{B}_{\rho_\eps}(p)}} \psi_\eps^1(x)$; we aim to prove that $\psi_\eps^1(x_\eps) \geq \frac{1}{2}$. We assume that $\left\vert x_\eps \right\vert < \rho_\eps $, since otherwise (<ref>) proves Step 1. We set $d_\eps = \rho_\eps - \left\vert x_\eps \right\vert$ and $y_\eps = \rho_\eps \frac{x_\eps}{\left\vert x_\eps \right\vert}$. By applying the Courant-Lebesgue theorem and Morrey-Sobolev embeddings again (see the beginning of Step 1), we obtain a radius $\frac{d_\eps}{2} < R_\eps < d_\eps $ such that
$$ \sup_{w,w' \in \mathbb{S}_{R_\eps}^{n-1}(y_\eps)} \left\vert \psi_\eps^{1}(w) - \psi_\eps^{1}(w') \right\vert \leq a_n 2^{\frac{1}{n}}\frac{C_g}{\ln 2} \left(\int_{\mathbb{B}_{\rho_\eps}(p)} \left\vert \nabla \widetilde{\Psi_\eps} \right\vert^n_gdv_g\right)^{\frac{1}{n}} \leq \frac{1}{8} $$
if we assume $a_n 2^{\frac{1}{n}}\frac{C_g}{\ln 2} 2 c_g \gamma_{n,g}^{\frac{1}{n}} \leq \frac{1}{8}$. Now the $\eps$-regularity result, independent of the dimension of the target manifold (see Claim <ref>), gives that
$$ \sup_{w \in \mathbb{B}^n_{\frac{d_\eps}{2}}(x_\eps)} \left\vert \psi_\eps^1(w) - \psi_\eps^1(x_\eps) \right\vert \leq C_{n,g}^{\frac{1}{n}} \left(\int_{\mathbb{B}^n_{d_\eps}(x_\eps)} \left\vert \nabla \Psi_\eps \right\vert^n_g\right)^{\frac{1}{n}} \leq \frac{1}{8}$$
if we assume that $C_{n,g}^{\frac{1}{n}} \gamma_{n,g}^{\frac{1}{n}} \leq \frac{1}{8}$. Then, gathering all the previous inequalities and noticing that $\mathbb{B}^n_{\frac{d_\eps}{2}}(x_\eps) \cap \mathbb{S}_{R_\eps}(y_\eps) \neq \emptyset$ and that $\mathbb{S}_{\rho_\eps}(p) \cap \mathbb{S}_{R_\eps}(y_\eps) \neq \emptyset$, we obtain the desired lower bound.
STEP 2: We prove that
\begin{equation} \label{eqstep2convergesto0}
\begin{split} \sum_i \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \left(\frac{\phi_\eps^i}{\omega_\eps}-\psi_\eps^i\right) - \nabla\left( \ln \psi_\eps^{1} \right) \left(\frac{\phi_\eps^i}{\omega_\eps}-\psi_\eps^i\right) \right\vert^2_g dv_g \\ = \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps\left( \left\vert \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert^2 - \left\vert \nabla \Psi_\eps \right\vert^2 \right) \leq O\left(\delta_\eps^{\frac{1}{2}}+ \delta_\eps^{\frac{2}{n}}\right)
\end{split}\end{equation}
Proof of Step 2:
We recall that $\Psi_{\eps}:\mathbb{B}_{\rho_\eps}(p) \to \mathbb{S}^{p_\eps} $ is the $n$-harmonic replacement of $\frac{\Phi_{\eps}}{\omega_{\eps}}$ and we obtain
\begin{equation*}
\begin{split}
\int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \left(\frac{\Phi_{\eps}}{\omega_{\eps}} - \Psi_{\eps}\right) \right\vert^2_g & - \left(\int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \frac{\Phi_{\eps}}{\omega_{\eps}} \right\vert^2_g - \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \Psi_{\eps} \right\vert^2_g \right) \\ = & 2 \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\langle \nabla \Psi_\eps, \nabla \left(\Psi_\eps - \frac{\Phi_{\eps}}{\omega_{\eps}}\right) \right\rangle_g \\
= & 2 \int_{\mathbb{B}_{\rho_\eps}(p)} -div_g\left(P_\eps \nabla \Psi_\eps\right) \left(\Psi_\eps - \frac{\Phi_{\eps}}{\omega_{\eps}}\right) \\
= & 2 \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \Psi_\eps \right\vert_g^2 \Psi_\eps \left(\Psi_\eps - \frac{\Phi_{\eps}}{\omega_{\eps}}\right) \\
= & \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \Psi_\eps \right\vert_g^2 \left\vert\Psi_\eps - \frac{\Phi_{\eps}}{\omega_{\eps}}\right\vert^2
\end{split}
\end{equation*}
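The passage from the second to the third line above uses the Euler-Lagrange equation satisfied by the $n$-harmonic map $\Psi_\eps$ into the sphere, which, with the notation $P_\eps = \left\vert \nabla \Psi_\eps \right\vert^{n-2}$, reads
$$ - div_g\left( P_\eps \nabla \Psi_\eps \right) = P_\eps \left\vert \nabla \Psi_\eps \right\vert_g^2 \Psi_\eps , $$
while the last line uses that $2\left\langle \Psi_\eps , \Psi_\eps - \frac{\Phi_{\eps}}{\omega_{\eps}} \right\rangle = \left\vert \Psi_\eps - \frac{\Phi_{\eps}}{\omega_{\eps}} \right\vert^2$, since $\left\vert \Psi_\eps \right\vert = \left\vert \frac{\Phi_{\eps}}{\omega_{\eps}} \right\vert = 1$.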
Now, for a given $u \in \mathcal{C}^{\infty}\left(\mathbb{B}_{\rho_\eps}(p)\right) \cap \mathcal{C}^{0}\left(\overline{\mathbb{B}_{\rho_\eps}(p)}\right)$ such that $u=0$ on $\partial\mathbb{B}_{\rho_\eps}(p)$, we have the following computation:
\begin{equation*}
\begin{split}
\int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla u - \nabla\left( \ln \psi_\eps^{1} \right) u \right\vert^2_g dv_g = & \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla u \right\vert_g^2 + \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \frac{\left\vert \nabla \psi_\eps^{1} \right\vert_g^2}{\left(\psi_\eps^{1}\right)^2} u^2 \\
& - \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \frac{\nabla \psi_\eps^{1}}{\psi_\eps^{1}} \nabla u^2 \\
= & \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla u \right\vert_g^2 - \int_{\mathbb{B}_{\rho_\eps}(p)} \frac{-div_g\left( P_\eps \nabla \psi_\eps^{1} \right)}{\psi_\eps^{1}} u^2 \\
= & \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla u \right\vert_g^2 - \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \Psi_{\eps} \right\vert_g^2 u^2
\end{split}
\end{equation*}
so that testing this for $u= \frac{\phi_\eps^i}{\omega_\eps}-\psi_\eps^i$ and summing over $i$, we obtain
\begin{equation*}
\begin{split}
\sum_i \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \left(\frac{\phi_\eps^i}{\omega_\eps}-\psi_\eps^i\right) - \nabla\left( \ln \psi_\eps^{1} \right) \left(\frac{\phi_\eps^i}{\omega_\eps}-\psi_\eps^i\right) \right\vert^2_g dv_g \\
= \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla\left( \frac{\Phi_\eps}{\omega_\eps}-\Psi_\eps \right) \right\vert_g^2 - \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps \left\vert \nabla \Psi_{\eps} \right\vert_g^2 \left\vert\frac{\Phi_\eps}{\omega_\eps}-\Psi_\eps\right\vert^2.
\end{split}
\end{equation*}
Finally, using the previous computations and (<ref>), we obtain (<ref>).
STEP 3: We prove that
\begin{equation} \label{eqstep3convergesto0Peps}
\int_{\mathbb{B}_{\rho_\eps}(p)} \left( \left\vert \nabla \Psi_\eps\right\vert^{n-2} + \left\vert \nabla \Phi_\eps\right\vert^{n-2} \right) \left\vert\nabla \left( \frac{\Phi_\eps}{\omega_\eps} - \Phi_\eps \right)\right\vert^2 = O\left(\delta_\eps^{\frac{2}{n}} \right) \text{ as } \eps\to 0
\end{equation}
and that
\begin{equation} \label{eqstep3convergesto0}
\int_{\mathbb{B}_{\rho_\eps}(p)} \left( \left\vert \nabla \Psi_\eps\right\vert^{n-2} + \left\vert \nabla \Phi_\eps\right\vert^{n-2} \right) \left( \left\vert\nabla \Phi_\eps \right\vert - \left\vert \nabla \Psi_\eps\right\vert\right)^2 = O\left(\delta_\eps^{\frac{1}{2}} + \delta_\eps^{\frac{2}{n}} \right) \text{ as } \eps\to 0
\end{equation}
Proof of Step 3: By the same computations as for the proof of (<ref>), we have
$$ \left\vert \nabla \left(\Phi_\eps - \frac{\Phi_{\eps}}{\omega_{\eps}}\right) \right\vert^2 \leq \frac{ \left\vert \nabla\omega_\eps \right\vert^2 }{\omega_\eps^2} + \left\vert \nabla \Phi_\eps \right\vert^2 \leq 2 \left\vert \nabla \Phi_\eps \right\vert^2 \leq 2 Q_\eps^{\frac{2}{n-2}} $$
so that writing $2 = \frac{2(n-2)}{n} + \frac{4}{n} $
\begin{equation*} \begin{split} \int_{\mathbb{B}_r(p)} & \left( \left\vert \nabla \Psi_\eps\right\vert^{n-2} + \left\vert \nabla \Phi_\eps\right\vert^{n-2} \right) \left\vert \nabla \left(\Phi_\eps - \frac{\Phi_{\eps}}{\omega_{\eps}}\right) \right\vert^2_g \\
\leq & 2^{\frac{2(n-2)}{n}} \int_{\mathbb{B}_r(p)} \left( \left\vert \nabla \Psi_\eps\right\vert^{n-2} + \left\vert \nabla \Phi_\eps\right\vert^{n-2} \right) Q_\eps^{\frac{2}{n}} \left\vert \nabla \left(\Phi_\eps - \frac{\Phi_{\eps}}{\omega_{\eps}}\right) \right\vert^{\frac{4}{n}}_g \\
\leq & 2^{\frac{2(n-2)}{n}} \left(\int_{\mathbb{B}_r(p)} \left( \left\vert \nabla \Psi_\eps\right\vert^{n-2} + \left\vert \nabla \Phi_\eps\right\vert^{n-2} \right)^{\frac{n}{n-2}} \right)^{\frac{n-2}{n}} \left(\int_{\mathbb{B}_r(p)} Q_\eps \left\vert \nabla \left(\Phi_\eps - \frac{\Phi_{\eps}}{\omega_{\eps}}\right) \right\vert^2_g\right)^{\frac{2}{n}} \\
\leq & O\left( \delta_\eps^{\frac{2}{n}}\right)
\end{split} \end{equation*}
where we used (<ref>) for the last inequality, and we obtain (<ref>).
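To make the exponent bookkeeping in the Hölder step above explicit: the splitting $2 = \frac{2(n-2)}{n} + \frac{4}{n}$ lets one bound the factor $\left\vert \nabla \left(\Phi_\eps - \frac{\Phi_{\eps}}{\omega_{\eps}}\right) \right\vert^{\frac{2(n-2)}{n}}$ by $2^{\frac{n-2}{n}} Q_\eps^{\frac{2}{n}}$ (absorbed in the constant $2^{\frac{2(n-2)}{n}}$ above), and Hölder's inequality is then applied with the conjugate exponents $\frac{n}{n-2}$ and $\frac{n}{2}$, which indeed satisfy
$$ \frac{n-2}{n} + \frac{2}{n} = 1 . $$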
Now let's prove (<ref>). By (<ref>), we have that
\begin{equation*} \begin{split} O\left(\delta_\eps^{\frac{1}{2}}\right)& \geq \int_{\mathbb{B}_{\rho_\eps}(p)} Q_\eps\left( \left\vert \nabla \Phi_\eps \right\vert^2 - \left\vert \nabla \Psi_\eps \right\vert^2 \right) \\
& \geq \int_{\mathbb{B}_{\rho_\eps}(p)} P_\eps\left( \left\vert \nabla \Phi_\eps \right\vert^2 - \left\vert \nabla \Psi_\eps \right\vert^2 \right) + \int_{\mathbb{B}_{\rho_\eps}(p)} \left(Q_\eps-P_\eps\right)\left( \left\vert \nabla\Phi_\eps \right\vert^2 - \left\vert \nabla \Psi_\eps \right\vert^2 \right) \\
& \geq O\left(\delta_\eps^{\frac{1}{2}}+ \delta_\eps^{\frac{2}{n}}\right) + \int_{\mathbb{B}_{\rho_\eps}(p)} \left(Q_\eps-P_\eps\right)\left( \left\vert \nabla \Phi_\eps \right\vert^2 - \left\vert \nabla \Psi_\eps \right\vert^2 \right)
\end{split}
\end{equation*}
where we used, for the last inequality, (<ref>), proved in Step 2, and (<ref>). Now,
\begin{equation*} \begin{split} \int_{\mathbb{B}_{\rho_\eps}(p)} \left(Q_\eps-P_\eps\right) & \left( \left\vert \nabla \Phi_\eps \right\vert^2 - \left\vert \nabla \Psi_\eps \right\vert^2 \right) \geq \int_{\mathbb{B}_{\rho_\eps}(p)} \left(Q_\eps-\left\vert \nabla \Phi_\eps \right\vert^{n-2}\right)\left( \left\vert \nabla \Phi_\eps \right\vert^2 - \left\vert \nabla \Psi_\eps \right\vert^2 \right) \\
&+ \int_{\mathbb{B}_{\rho_\eps}(p)} \left( \left\vert \nabla \Phi_\eps \right\vert^{n-2} - \left\vert \nabla \Psi_\eps \right\vert^{n-2} \right)\left( \left\vert \nabla \Phi_\eps \right\vert^2 - \left\vert \nabla \Psi_\eps \right\vert^2 \right) \\
\geq & O\left(\delta_\eps^{\frac{2}{n}}\right) + \int_{\mathbb{B}_{\rho_\eps}(p)} \left( \left\vert \nabla \Phi_\eps \right\vert - \left\vert \nabla \Psi_\eps \right\vert \right)^2 \left( \left\vert \nabla \Phi_\eps \right\vert^{n-2} + \left\vert \nabla \Psi_\eps \right\vert^{n-2} \right)
\end{split}
\end{equation*}
where we used (<ref>) again for the last inequality and we obtain (<ref>) by gathering the previous inequalities.
STEP 4: We prove that
\begin{equation} \label{eqstep4convergesto0} \sum_i \int_{\mathbb{B}_{\rho_\eps}(p)} \left\vert \nabla\Phi_\eps \right\vert_g^{n-2} \left\vert \nabla \left(\frac{\phi_\eps^i}{\omega_\eps}-\psi_\eps^i\right) - \nabla\left( \ln \psi_\eps^{1} \right) \left(\frac{\phi_\eps^i}{\omega_\eps}-\psi_\eps^i\right) \right\vert^2_g dv_g \to 0 \text{ as } \eps\to 0 \end{equation}
Proof of Step 4: (<ref>) follows from (<ref>) if we prove that
$$ \sum_i \int_{\mathbb{B}_{\rho_\eps}(p)}\left( \left\vert \nabla\Phi_\eps \right\vert_g^{n-2} - \left\vert \nabla\Psi_\eps \right\vert_g^{n-2} \right)\left\vert \nabla \left(\frac{\phi_\eps^i}{\omega_\eps}-\psi_\eps^i\right) - \nabla\left( \ln \psi_\eps^{1} \right) \left(\frac{\phi_\eps^i}{\omega_\eps}-\psi_\eps^i\right) \right\vert^2_g dv_g \to 0 $$
as $\eps \to 0$. We denote this quantity by $D_\eps$. Using (<ref>) in the last inequality, we have that
\begin{equation*}
\begin{split}D_\eps \leq & c_n \int_{\mathbb{B}_{\rho_\eps}(p)}\left\vert \left\vert \nabla\Phi_\eps \right\vert - \left\vert \nabla\Psi_\eps \right\vert \right\vert \left( \left\vert \nabla\Phi_\eps \right\vert_g^{n-3} + \left\vert \nabla\Psi_\eps \right\vert^{n-3} \right)\left( 2 \left\vert \nabla \left( \frac{\Phi_\eps}{\omega_\eps} - \Psi_\eps \right) \right\vert^2 + 8 \left\vert \nabla \Psi_\eps \right\vert^2 \right) \\
\leq & O\left( \left(\int_{\mathbb{B}_{\rho_\eps}(p)} \left( \left\vert \nabla \Phi_\eps \right\vert - \left\vert \nabla \Psi_\eps \right\vert \right)^2 \left( \left\vert \nabla \Phi_\eps \right\vert^{n-2} + \left\vert \nabla \Psi_\eps \right\vert^{n-2} \right)\right)^{\frac{1}{2}} \right) \to 0 \text{ as } \eps\to 0
\end{split}\end{equation*}
STEP 5: We prove that for any $r < \frac{\rho}{4}$
\begin{equation} \label{eqestPepsphioveromegadimn} \int_{\mathbb{B}_{r}(p)} P_\eps \left\vert \nabla \left( \frac{\Phi_\eps}{\omega_\eps} - \Psi_{\eps}\right) \right\vert^2_g \to 0 \text{ as } \eps\to 0
\end{equation}
Proof of Step 5: We set $\rho_{0} = \lim_{\eps\to 0} \rho_\eps$. From (<ref>), we have that for any $\mathbb{B}_r(p) \subset \mathbb{B}_{\rho_\eps}(p)$ with $r < \rho_0$,
\begin{equation} \label{eqineqforanysubsetA}
\begin{split} \int_{\mathbb{B}_r(p)} P_{\eps}\left\vert \nabla\left(\frac{ \Phi_\eps}{\omega_\eps}-\Psi_\eps\right) \right\vert^2 \leq & 2 \int_{\mathbb{B}_r(p)} P_{\eps} \left\vert \nabla\left( \ln\psi_\eps^{1} \right) \right\vert^2 \left\vert \frac{ \Phi_\eps}{\omega_\eps}-\Psi_\eps\right\vert^2 \\
& + 2 \sum_i \int_{\mathbb{B}_r(p)} P_\eps \left\vert \nabla \left(\frac{\phi_\eps^i}{\omega_\eps}-\psi_\eps^i\right) - \nabla\left( \ln \psi_\eps^{1} \right) \left(\frac{\phi_\eps^i}{\omega_\eps}-\psi_\eps^i\right) \right\vert^2_g dv_g \\
\leq &2 \int_{\mathbb{B}_r(p)} P_{\eps} \left\vert \nabla\left( \ln\psi_\eps^{1} \right) \right\vert^2 \left\vert \frac{ \Phi_\eps}{\omega_\eps}-\Psi_\eps\right\vert^2 + O\left(\delta_\eps^{\frac{1}{2}}+\delta_\eps^{\frac{2}{n}}\right) \\
\end{split}
\end{equation}
We aim to prove that the right-hand side converges to $0$ as $\eps\to 0$.
By Claim <ref> applied to $\frac{\Phi_\eps}{\omega_\eps}$ and Claim <ref> applied to $\Psi_\eps$, we know that, up to a subsequence, the maps $\frac{\Phi_\eps}{\omega_\eps} : \mathbb{B}_{\rho_\eps}(p) \to \mathbb{S}^{p_\eps}$ and $\Psi_\eps : \mathbb{B}_{\rho_\eps}(p) \to \mathbb{S}^{p_\eps}$ converge to maps $\Phi_0 : \mathbb{B}_{\rho_0}(p) \to \mathbb{R}^{N}$ and $\Psi_0 : \mathbb{B}_{\rho_0}(p) \to \mathbb{R}^{N}$ in $W^{1,q}$ for any $1 \leq q < n$,
and $\Psi_0$ is an $n$-harmonic map into the unit sphere of $\mathbb{R}^N$.
Moreover, we know from (<ref>) and (<ref>) that
$$ \left(\left\vert \nabla \Psi_\eps \right\vert_g^{\frac{n-2}{2}} + \left\vert \nabla \frac{\Phi_\eps}{\omega_\eps} \right\vert_g^{\frac{n-2}{2}}\right) \left( \nabla \Psi_\eps - \nabla\left( \frac{\Phi_\eps}{\omega_\eps}\right) - \nabla \ln \left( \psi_\eps^1\right) \left( \Psi_\eps- \frac{\Phi_\eps}{\omega_\eps}\right) \right) \to 0 $$
strongly in $L^2$ as $\eps\to 0$. Passing to the limit as $\eps\to 0$, we obtain
$$ \left( \left\vert \nabla \Psi_0 \right\vert^{\frac{n-2}{2}} + \left\vert \nabla \Phi_0 \right\vert^{\frac{n-2}{2}} \right)\left( \nabla \Psi_0 - \nabla \Phi_0 - \nabla \ln \left( \psi_0^1\right) \left( \Psi_0 - \Phi_0\right) \right) = 0$$
If we define
$$ Z = \{ z\in \mathbb{B}_{\rho_0}(p) ; \left\vert \nabla \Psi_0 \right\vert^{\frac{n-2}{2}} + \left\vert \nabla \Phi_0 \right\vert^{\frac{n-2}{2}} = 0 \}, $$
we obtain that
$$ \forall z \in \mathbb{B}_{\rho_0}(p) \setminus Z , \left( \nabla \Psi_0 - \nabla \Phi_0 - \nabla \ln \left( \psi_0^1\right) \left( \Psi_0 - \Phi_0\right)\right)(z) = 0 $$
It is clear that $\Psi_0 - \Phi_0 = 0$ on $\partial\mathbb{B}_{\rho_0}(p)$ as the limit of functions satisfying $\Psi_\eps - \frac{\Phi_\eps}{\omega_\eps} = 0$ on $\partial\mathbb{B}_{\rho_\eps}(p)$. Now by Claim <ref> and Step 1, we know that
$$\forall z \in \mathbb{B}_{\rho_0}(p), \left\vert \nabla \ln\psi_0^1 \right\vert^2(z) \leq 4 \frac{C_{n,g}^{\frac{2}{n}} \gamma_{n,g}^{\frac{2}{n}}}{\left(\rho_0-\left\vert z \right\vert\right)^2} $$
Therefore, we have by a classical Hardy inequality that
\begin{equation*}
\begin{split} \int_{\mathbb{B}_{\rho_0}(p) } \left\vert \nabla\left( \Phi_0 - \Psi_0\right) \right\vert^2 = \int_{\mathbb{B}_{\rho_0}(p) \setminus Z} \left\vert \nabla\left( \Phi_0 - \Psi_0\right) \right\vert^2 = & \int_{\mathbb{B}_{\rho_0}(p) \setminus Z} \left\vert \nabla \ln\psi_0^1 \right\vert^2 \left\vert \Phi_0 - \Psi_0\right\vert^2 \\
\leq & 4 C_{n,g}^{\frac{2}{n}} \gamma_{n,g}^{\frac{2}{n}} \int_{\mathbb{B}_{\rho_0}(p) } \frac{ \left\vert \Phi_0 - \Psi_0\right\vert^2 }{\left(\rho_0-\left\vert z \right\vert\right)^2} dv_g \\
\leq & \tilde{C}_{n,g} \gamma_{n,g}^{\frac{2}{n}} \int_{\mathbb{B}_{\rho_0}(p) } \left\vert \nabla\left( \Phi_0 - \Psi_0\right) \right\vert_g^2dv_g
\end{split}
\end{equation*}
so that, assuming that $\tilde{C}_{n,g} \gamma_{n,g}^{\frac{2}{n}}\leq \frac{1}{2}$, we obtain that $\nabla \Phi_0 = \nabla \Psi_0$ a.e., hence that $\Phi_0(z) = \Psi_0(z)$ for a.e. $z \in \mathbb{B}_{\rho_0}(p)\setminus Z$, and then that $\Phi_0 = \Psi_0$ a.e. Now, coming back to (<ref>), it is clear that the right-hand side
$$ 2 \int_{\mathbb{B}_r(p)} P_{\eps} \left\vert \nabla\left( \ln\psi_\eps^{1} \right) \right\vert^2 \left\vert \frac{ \Phi_\eps}{\omega_\eps}-\Psi_\eps\right\vert^2 \to 0 $$
as $\eps\to 0$ by uniform boundedness of $\nabla \Psi_\eps$ and $\nabla \psi_\eps^1$ on $\mathbb{B}_r(p)$ (see Claim <ref> and Step 1 of the current claim) and by strong convergence to $0$ of $\frac{\Phi_{\eps}}{\omega_\eps} - \Psi_\eps $ in $L^2$. The proof of (<ref>) is complete.
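For the reader's convenience, we recall the form of the Hardy inequality used above (stated in a chart centered at $p$; on the convex domain $\mathbb{B}_{\rho_0}(p)$ the classical one-dimensional constant $4$ is valid, and the constant $\tilde{C}_{n,g}$ absorbs the comparison between $dv_g$ and the Euclidean measure): for any $v \in W^{1,2}_0\left(\mathbb{B}_{\rho_0}(p)\right)$,
$$ \int_{\mathbb{B}_{\rho_0}(p)} \frac{v^2}{\left(\rho_0 - \left\vert z \right\vert \right)^2} \, dz \leq 4 \int_{\mathbb{B}_{\rho_0}(p)} \left\vert \nabla v \right\vert^2 dz. $$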
Step 6: We complete the proof of (<ref>). We have:
\begin{equation*}
\begin{split}
\int_{\mathbb{B}_r(p)} Q_\eps \left\vert \nabla \left( \Phi_\eps - \Psi_{\eps}\right) \right\vert^2_g \leq & 2 \int_{\mathbb{B}_r(p)} Q_\eps \left\vert \nabla \left( \Phi_\eps - \frac{\Phi_{\eps}}{\omega_\eps}\right) \right\vert^2_g + 2 \int_{\mathbb{B}_r(p)} P_\eps \left\vert \nabla \left( \frac{\Phi_\eps}{\omega_\eps} - \Psi_{\eps}\right) \right\vert^2_g \\
& + 2 \int_{\mathbb{B}_r(p)} \left(\left\vert \nabla \Psi_\eps \right\vert_g^{n-2} - \left\vert \nabla \Phi_\eps \right\vert_g^{n-2} \right) \left\vert \nabla \left( \frac{\Phi_\eps}{\omega_\eps} - \Psi_{\eps}\right) \right\vert^2_g \\
& + 2 \int_{\mathbb{B}_r(p)} \left(\left\vert \nabla \Phi_\eps \right\vert_g^{n-2} -Q_\eps \right) \left\vert \nabla \left( \frac{\Phi_\eps}{\omega_\eps} - \Psi_{\eps}\right) \right\vert^2_g
\end{split} \end{equation*}
The first right-hand term converges to $0$ because of (<ref>), the second because of (<ref>), the third by the use of a Hölder inequality and (<ref>), and the fourth thanks to (<ref>).
The proof of (<ref>) is complete.
We have shown that, up to a subsequence, $\Psi_\eps$ and $\frac{\Phi_\eps}{\omega_\eps}$ respectively converge to some maps $\Psi_0 : \mathbb{B}_r(p)\to \mathbb{R}^{\mathbb{N}}$ and $\Phi_0 : \mathbb{B}_r(p) \to \mathbb{R}^{\mathbb{N}}$ in the following sense: for all $1\leq p<+\infty$ and $1 \leq q < n $,
$$ \left(\int_{\mathbb{B}_r(p)} \left\vert \frac{\Phi_\eps}{\omega_\eps} - \Phi_0 \right\vert^p\right)^{\frac{1}{p}} + \left(\int_{\mathbb{B}_r(p)} \left\vert \nabla\left( \frac{\Phi_\eps}{\omega_\eps} - \Phi_0 \right) \right\vert^q\right)^{\frac{1}{q}} \to 0$$
as $\eps \to 0$. $\Psi_\eps$ satisfies the same convergence properties, and in fact stronger ones that we do not need: there is some $\alpha\in (0,1)$ such that $\nabla \Psi_\eps$ converges to $\nabla \Psi_0$ in $\mathcal{C}^{0,\alpha}$. Thanks to Claim <ref>, we obtain that
$$ \int_{\mathbb{B}_r(p)} \left\vert \nabla \left( \frac{\Phi_\eps}{\omega_\eps} - \Psi_\eps\right) \right\vert^n \to 0 $$
so that by uniqueness of the limit, $\Psi_0 = \Phi_0$.
§.§.§ The limiting measure has a $\mathcal{C}^{0,\alpha}$ density
Let $\zeta \in \mathcal{C}_c^{\infty}\left( \mathbb{B}_r(p) , \mathbb{R}^{N}\right) $. We have that
\begin{equation*}
\begin{split} \int_{M} \zeta \left(\lambda_{\eps}^{\frac{n}{2}} \Phi_{\eps} e^{nu_{\eps}}dv_g - \lambda^{\frac{n}{2}} \Phi_0 d\nu\right) = \left(\lambda_\eps^{\frac{n}{2}} - \lambda^{\frac{n}{2}} \right) \int_{M} \zeta \Phi_\eps e^{nu_{\eps}}dv_g \\ + \lambda^{\frac{n}{2}} \left(
\int_{M} \zeta \left( \Phi_\eps - \Phi_0 \right) e^{nu_{\eps}}dv_g
+ \int_{M} \zeta \Phi_0 \left(e^{nu_{\eps}}dv_g - d\nu\right)\right)
\end{split}
\end{equation*}
Then, for the term $\int_{M} \zeta \left( \Phi_\eps - \Phi_0 \right) e^{nu_{\eps}}dv_g$, we have thanks to Claim <ref> that
\begin{equation*} \begin{split} &\left\vert \int_{M} \zeta \left( \Phi_{\eps} - \Phi_0\right) e^{nu_{\eps}}dv_g \right\vert \leq \left(\int_{\mathbb{B}_{r}(p)} \zeta^2 \left\vert \Phi_{\eps} - \Phi_0 \right\vert^2 e^{nu_{\eps}}dv_g\right)^{\frac{1}{2}} \\
& \leq \left(\frac{\lambda_\eps^{\frac{2-n}{2}}}{\lambda_{\star}\left(\mathbb{B}_{r}(p),e^{n u_{\eps}}, \frac{ Q_\eps}{\lambda_\eps^{\frac{n-2}{2}}}\right)} \int_{\mathbb{B}_{r}(p)} Q_\eps \left\vert\nabla \left(\zeta \left\vert \Phi_{\eps} - \Phi_0 \right\vert \right) \right\vert^2_g dv_g\right)^{\frac{1}{2}} \\
& \leq C\left( \left( \int_{\mathbb{B}_{r}(p)} Q_\eps \left\vert \nabla \left(\Phi_{\eps} - \Psi_\eps \right) \right\vert^2\right)^{\frac{1}{2}} + \left( \int_{\mathbb{B}_{r}(p)} \left\vert \nabla \left(\Psi_{\eps} - \Phi_0 \right) \right\vert^n \right)^{\frac{1}{n}} \right) \\
& \to 0 \text{ as } \eps \to 0
\end{split}
\end{equation*}
for some constant $C$ independent of $\eps$. Passing to the limit $\eps\to 0$ in the weak formulation of the eigenvalue equation $-div_g\left( Q_\eps \nabla \Phi_\eps \right) = \lambda_\eps^{\frac{n}{2}} \Phi_\eps e^{nu_{\eps}}$, we get
$$-div_g\left( \left\vert \nabla \Phi_0 \right\vert_g^{n-2} \nabla \Phi_0 \right) = \lambda^{\frac{n}{2}} \Phi_0 \tilde\mu_0 $$
and since $\Phi_0$ is $n$-harmonic, we obtain that $\tilde\mu_0 = \frac{\left\vert\nabla \Phi_0 \right\vert^n}{\lambda^{\frac{n}{2}} }dv_g$ and the density is a non-negative $\mathcal{C}^{0,\alpha}$ function.
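Indeed, since $\Phi_0$ maps into the unit sphere, the $n$-harmonic map equation reads $-div_g\left( \left\vert \nabla \Phi_0 \right\vert_g^{n-2} \nabla \Phi_0 \right) = \left\vert \nabla \Phi_0 \right\vert_g^{n} \Phi_0$, so that (a formal computation, valid in the distributional sense)
$$ \lambda^{\frac{n}{2}} \Phi_0 \tilde\mu_0 = \left\vert \nabla \Phi_0 \right\vert_g^{n} \Phi_0 \, dv_g, \quad \text{and taking the scalar product with } \Phi_0, \text{ using } \left\vert \Phi_0 \right\vert = 1, \quad \tilde\mu_0 = \frac{\left\vert \nabla \Phi_0 \right\vert_g^{n}}{\lambda^{\frac{n}{2}}} dv_g. $$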
Then we obtain that for some global $n$-harmonic map $\Phi_0 : M\setminus \{p_1,\cdots,p_s\} \to \mathbb{R}^{\mathbb{N}}$, $\tilde\mu_0 = \frac{\left\vert\nabla \Phi_0 \right\vert^n}{\lambda^{\frac{n}{2}} }dv_g$.
By a point-removability theorem (Claim <ref>), $\Phi_0$ can be extended to $\Phi_0 : M \to \mathbb{R}^{\mathbb{N}}$ as an $n$-harmonic map, and the conformal factor has the expected regularity. As already said, $\tilde{\mu}_i = \frac{\left\vert\nabla \Phi_i \right\vert^n}{\lambda^{\frac{n}{2}} } dx$ for some $n$-harmonic map on the Euclidean space $\mathbb{R}^n$. A pullback by the stereographic projection $\pi$ of the round sphere and a point-removability theorem (Claim <ref>) give the expected regularity on $\pi^{\star} \tilde{\mu}_i$.
In the next section, we prove Theorem <ref>: the embedding $W^{1,2}(f . g) \to L^2(f . g)$ is compact, where $f = \frac{\left\vert \nabla \Phi_0 \right\vert_g^{2}}{\lambda} $ in $M$ (and $f = \frac{\left\vert \nabla \Phi_i \right\vert_g^{2}}{\lambda} $ in $\mathbb{R}^n$). We can deduce thanks to Remark <ref> that the target sphere of $\Phi_0$ (and $\Phi_i$) can be reduced to a finite dimensional sphere. The proof of Proposition <ref> is complete.
§.§ A compact embedding for the weighted Sobolev spaces associated to the limiting metrics
In this section, we prove Theorem <ref>.
Let $f : M \to \mathbb{R}_+$ be a non-negative continuous function and denote by $Z$ its zero set. We define $L^2(f.g)$ as the set of measurable functions $u : M \to \R$ such that
$$ \| u \|_{L^{2}(f.g)}^2 := \int_M u^2 f^{\frac{n}{2}} dv_g < +\infty. $$
We define $ W^{1,2}(f.g) $ as the completion of $ \mathcal{C}^{\infty}\left(M\right)$ with respect to the semi-norm given by
$$ \| u \|_{W^{1,2}(f.g)}^2 = \int_M u^2 f^{\frac{n}{2}} dv_g + \int_M \left\vert \nabla u \right\vert^2 f^{\frac{n-2}{2}} dv_g. $$
We aim at proving that for $f = \frac{\left\vert \nabla \Phi_0 \right\vert_g^{2}}{\lambda}$ given by Theorem <ref>, the embedding $W^{1,2}(f . g) \to L^2(f . g)$ is compact. For that purpose, we will prove the following local Sobolev embedding:
There are $\kappa_0 > 1$, $r_0>0$ and $C_0 >0$ such that for any $p \in M $, any $0 < r \leq r_0$ and any $u\in \mathcal{C}^{\infty}_c\left(\mathbb{B}_r(p)\right)$
$$ \left(\int_{\mathbb{B}_r(p)} u^{2\kappa_0} f^{\frac{n}{2}} dv_g \right)^{\frac{1}{\kappa_0}} \leq C_0 r^{2-\frac{n(\kappa_0-1)}{\kappa_0}} \int_{\mathbb{B}_r(p)} \left\vert \nabla u \right\vert_g^2 f^{\frac{n-2}{2}} dv_g . $$
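Before giving the proof, note that the power of $r$ in the claim is the one dictated by scaling: freezing $f$ to a constant and rescaling lengths by $r$, both sides behave like $r^{\frac{n}{\kappa_0}}$, since
$$ \left( \int_{\mathbb{B}_r(p)} u^{2\kappa_0} \, dv_g \right)^{\frac{1}{\kappa_0}} \sim \left(r^n\right)^{\frac{1}{\kappa_0}} \quad \text{and} \quad r^{2-\frac{n(\kappa_0-1)}{\kappa_0}} \int_{\mathbb{B}_r(p)} \left\vert \nabla u \right\vert_g^2 \, dv_g \sim r^{2-\frac{n(\kappa_0-1)}{\kappa_0}} \, r^{n-2} = r^{\frac{n}{\kappa_0}}. $$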
By a classical Sobolev inequality, there is a universal constant $C_{0}$ such that
\begin{equation*}
\begin{split} \left(\int_{\mathbb{B}_r(p)} u^{2\kappa_0} f^{\frac{n}{2}} dv_g \right)^{\frac{1}{\kappa_0}} \leq & C_0 r^{2-\frac{n(\kappa_0-1)}{\kappa_0}} \int_{\mathbb{B}_r(p)} \left\vert \nabla\left( u f^{\frac{n}{4\kappa_0}}\right) \right\vert_g^2 dv_g \\
\leq & 2 C_0 r^{2-\frac{n(\kappa_0-1)}{\kappa_0}} \left( \int_{\mathbb{B}_r(p)} \left\vert \nabla u \right\vert_g^2 f^{\frac{n}{2\kappa_0}} dv_g + \int_{\mathbb{B}_r(p)} \left\vert \nabla f^{\frac{n}{4\kappa_0}} \right\vert_g^2 u^2 \right)
\end{split}
\end{equation*}
We aim to estimate the last right-hand term, noticing that $f$ is nothing but the limit of $B:= \left\vert \nabla \Psi_\tau \right\vert^2_g$ as $(\tau,m) \to (0,+\infty)$, where $\Psi_\tau : \mathbb{B}_r(p)\to \mathbb{S}^m$ is a $(\tau,n)$-harmonic map, if we choose $r \leq \frac{r_\star}{2}$, where
$$r_{\star} = \sup\left\{ r >0 , \sup_x \int_{\mathbb{B}_r(x)} \left( f + \tau\right)^{\frac{n}{2}} \leq \frac{\eps_{n,g}}{2} \right\}. $$
We recall that $\Psi_\tau$ satisfies by Claim <ref>
$$ \mu^{n-2} B \left(\kappa_g + B\right) \geq - div_g\left( a^\tau \nabla \mu^n \right) + \left\vert \nabla^2 \Psi \right\vert^2 \mu^{n-2} +\frac{n-2}{4} \mu^{n-4} \left\vert \nabla B \right\vert^2 $$
where $\mu = \left(B + \tau\right)^{\frac{1}{2}}$ and
\begin{equation*} a_{i,j}^\tau = \frac{1}{n}\left( \delta_{i,j} + (n-2) \frac{\nabla_i \Psi.\nabla_j \Psi}{B + \tau} \right).
\end{equation*}
We multiply this equation by $u^{2} \mu^{-\nu_0}$, where $\nu_0 = \frac{n(\kappa_0 - 1)}{\kappa_0}$, and integrate by parts to get
\begin{equation*}
\begin{split} \int_{\mathbb{B}_r(p)} u^2 \left(a\left( \nabla \mu^{-\nu_0}\nabla\left( \mu^n\right)\right) + \frac{n-2}{4} \mu^{n-4-\nu_0} \left\vert \nabla B \right\vert^2_g + \left\vert \nabla^2 \Psi \right\vert^2_g \mu^{n-2-\nu_0} \right) \\
\leq \int_{\mathbb{B}_r(p)} u^2 B \mu^{n-2-\nu_0}(B+k_g) - \int_{\mathbb{B}_r(p)} 2u \mu^{-\nu_0} a\left( \nabla u \nabla\left( \mu^n \right)\right)
\end{split} \end{equation*}
Knowing that $\frac{1}{n} \left\vert X \right\vert^2 \leq a_{i,j}X^i X^j \leq \frac{n-1}{n}\left\vert X \right\vert^2$, and the following computations
\begin{equation*}
\nabla \mu^{-\nu_0}\nabla\left( \mu^n \right) = - \nu_0 n \mu^{n-2-\nu_0} \left\vert \nabla \mu \right\vert^2 \text{ and } \left\vert \nabla B \right\vert^2 = 4 \mu^2 \left\vert \nabla \mu \right\vert^2
\end{equation*}
we obtain that
\begin{equation*}
\begin{split} & \left( n-2 - (n-1)\nu_0 \right) \int_{\mathbb{B}_r(p)} u^2 \mu^{n-2-\nu_0} \left\vert \nabla \mu \right\vert^2_g + \int_{\mathbb{B}_r(p)} u^2 \left\vert \nabla^2 \Psi \right\vert^2_g \mu^{n-2-\nu_0} \\
\leq & (n-1) \int_{\mathbb{B}_r(p)} 2 \left\vert u \nabla u \right\vert \mu^{n-1-\nu_0} \left\vert \nabla \mu \right\vert
+ \int_{\mathbb{B}_r(p)} u^2 \mu^{n-\nu_0}(B+k_g) \\
\leq & (n-1)\left( \theta^2 \int_{\mathbb{B}_r(p)} u^2 \mu^{n-2-\nu_0} \left\vert \nabla \mu \right\vert^2_g + \frac{1}{\theta^2} \int_{\mathbb{B}_r(p)} \mu^{n-\nu_0} \left\vert \nabla u \right\vert^2_g\right) \\
& + \int_{\mathbb{B}_r(p)} u^2 \mu^{n-\nu_0}(B+k_g)
\end{split} \end{equation*}
so that, knowing that $ \left\vert \nabla^2 \Psi \right\vert^2_g \geq \left\vert \nabla \mu \right\vert^2_g $ (by a Cauchy–Schwarz inequality),
\begin{equation*}
\begin{split} \left(1 - \nu_0 - \theta^2\right) & \int_{\mathbb{B}_r(p)} u^2 \mu^{n-2-\nu_0} \left\vert \nabla \mu \right\vert^2_g \\
\leq & \frac{1}{\theta^2} \int_{\mathbb{B}_r(p)} \mu^{n-\nu_0} \left\vert \nabla u \right\vert^2_g + \frac{1}{n-1} \int_{\mathbb{B}_r(p)} u^2 \mu^{n-\nu_0}(B+k_g)
\end{split} \end{equation*}
and choosing $\nu_0 < 1$ and $\theta^2 = \frac{1-\nu_0}{2}$, we obtain
\begin{equation*}
\int_{\mathbb{B}_r(p)} u^2 \left\vert \nabla \mu^{\frac{n-\nu_0}{2}} \right\vert^2_g
\leq \frac{4}{(1-\nu_0)^2} \int_{\mathbb{B}_r(p)} \mu^{n-\nu_0} \left\vert \nabla u \right\vert^2_g + \frac{2}{(1-\nu_0)(n-1)} \int_{\mathbb{B}_r(p)} u^2 \mu^{n-\nu_0}(B+k_g)
\end{equation*}
and passing to the limit as $\tau \to 0$, using that $\mu^{2} \to f$ and noticing that $n-\nu_0 = \frac{n}{\kappa_0}$, we obtain
\begin{equation*}
\int_{\mathbb{B}_r(p)} u^2 \left\vert \nabla f^{\frac{n}{4\kappa_0}} \right\vert^2_g
\leq \frac{4}{(1-\nu_0)^2} \int_{\mathbb{B}_r(p)} f^{\frac{n}{2\kappa_0}} \left\vert \nabla u \right\vert^2_g + \frac{2}{(1-\nu_0)(n-1)} \int_{\mathbb{B}_r(p)} u^2 f^{\frac{n}{2\kappa_0}}(f+k_g)
\end{equation*}
so that there is a constant $c:= c(n, 1-\nu_0) $ such that
\begin{equation*}
\begin{split} \left(\int_{\mathbb{B}_r(p)} u^{2\kappa_0} f^{\frac{n}{2}} dv_g \right)^{\frac{1}{\kappa_0}} \leq & 2 C_0 r^{2-\frac{n(\kappa_0-1)}{\kappa_0}} \int_{\mathbb{B}_r(p)} \left\vert \nabla u \right\vert_g^2 f^{\frac{n}{2\kappa_0}} ( 1 + c) dv_g \\
& + 2 C_0 r^{2-\frac{n(\kappa_0-1)}{\kappa_0}} c \int_{\mathbb{B}_r(p)} u^2 f^{\frac{n}{2\kappa_0}}(f+k_g) dv_g \\
\leq & 2 C_0 r^{2-\frac{n(\kappa_0-1)}{\kappa_0}} A^{1-\frac{n(\kappa_0-1)}{2\kappa_0}} \left(1+c \right) \int_{\mathbb{B}_r(p)} \left\vert \nabla u \right\vert_g^2 f^{\frac{n-2}{2}}dv_g \\
& + 2 C_0 c_g r^2 c \left(A+k_g\right) \left(\int_{\mathbb{B}_r(p)} u^{2\kappa_0} f^{\frac{n}{2}} dv_g \right)^{\frac{1}{\kappa_0}}
\end{split}
\end{equation*}
where we used that $f$ is bounded by a constant $A$ on $M$ and applied a Hölder inequality for the last step. Taking $r>0$ small enough so that $2 C_0 c_g r^2 c \left(A+k_g\right) < \frac{1}{2}$, we obtain the expected Sobolev inequality.
In the classical Sobolev inequality, the optimal exponent $2\kappa_0$ is $\frac{2n}{n-2}$.
Our Caccioppoli-type estimate involved in the proof of the Sobolev inequality seems to be optimal in comparison with other similar regularity results on the $n$-harmonic equation (e.g. [40]). In this case, we only obtain $2\kappa_0 < \frac{2n}{n-1}$.
It would be interesting to improve $\nu_0 >0$ so that
$f^{\frac{n-\nu_0}{4}}$ is an $H^1$ function.
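Let us record where the threshold $\frac{2n}{n-1}$ comes from: the proof above requires $\nu_0 < 1$, and with $\nu_0 = \frac{n(\kappa_0-1)}{\kappa_0}$ this is equivalent to
$$ \frac{n(\kappa_0 - 1)}{\kappa_0} < 1 \iff (n-1)\kappa_0 < n \iff 2\kappa_0 < \frac{2n}{n-1}. $$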
Now, it is clear that the embedding $W^{1,2}(f . g) \to L^2(f . g)$ is compact. Indeed, any sequence of functions $(u_k)$ bounded in $W^{1,2}(f . g)$ is bounded in $L^{2\kappa_0}(f . g)$ (after taking a partition of unity to globalize the previous Sobolev inequality). We also know that, up to a subsequence, it converges weakly to some $u\in W^{1,2}(f . g)$. Moreover, using the classical compact embedding $W^{1,2}(g) \to L^2(g)$ on any compact subset of $M \setminus Z$, where $Z = \{ x\in M; f(x)=0\}$, we deduce that up to a subsequence, $(u_k)$ converges almost everywhere to $u$ with respect to the measure $f^{\frac{n}{2}} dv_g$. These two properties prove that, up to a subsequence, $(u_k)$ converges strongly in $L^2(f . g)$.
§.§ Conclusion
Let us prove Theorem <ref>. By Section <ref>, there is a Palais–Smale sequence for the maximization problem. We denote $\lambda_\eps^{\frac{n}{2}} = \lambda_k(Q_\eps,e^{nu_\eps})$. We apply Proposition <ref> to this sequence. Theorem <ref> then follows from upper semi-continuity of $\bar\lambda_k$. Indeed, let $\theta_0,\cdots,\theta_k$ be the first $(k+1)$ eigenfunctions of the limiting manifold endowed with the generalized metrics $(\widetilde{M},f.g) = (M,f_0.g) \sqcup \left(\mathbb{S}^n, f_1.h \right)\sqcup \cdots \sqcup \left(\mathbb{S}^n,f_t.h\right) $ if $f_0\neq 0$, or $ (\widetilde{M},f.g) = \left(\mathbb{S}^n,f_1.h\right)\sqcup \cdots \sqcup \left(\mathbb{S}^n,f_t.h\right) $ if $f_0 = 0$, where $f_0 dv_g = \tilde{\mu}_0$ and $f_i dv_h = \pi^{\star}\tilde{\mu}_i$ (where $\pi : \mathbb{S}^n \to \mathbb{R}^n \cup \{\infty\}$ is a stereographic projection) and the $f_i$ are $\mathcal{C}^{0,\alpha}$ functions. We use $\theta_0,\cdots,\theta_k$ as test functions in the variational characterization of $\lambda_\eps$ in the following way: we set
$$ \tilde{\theta}_i^\eps(x) = \eta_0(x) \theta_i(x) + \eta_1\left( \frac{x- q_1^\eps}{\alpha_1^\eps} \right) \theta_i \circ \pi^{-1}\left( \frac{x- q_1^\eps}{\alpha_1^\eps} \right) + \cdots + \eta_t \left( \frac{x- q_t^\eps}{\alpha_t^\eps} \right) \theta_i\circ \pi^{-1}\left( \frac{x- q_t^\eps}{\alpha_t^\eps} \right) $$
where the points $q_i^\eps$ and scales $\alpha_i^\eps$ are given by Claim <ref> and we define cut-off functions $\eta_i \in \mathcal{C}_c^\infty\left(\Omega_i(\rho) \right)$ such that $\eta_i = 1$ in $\Omega_i(\sqrt{\rho})$ and
$$\int_{M} \left\vert \nabla \eta_0 \right\vert_g^n \leq \frac{C}{\ln{\frac{1}{\rho}}} \text{ and } \int_{\mathbb{R}^n} \left\vert \nabla \eta_i \right\vert_g^n \leq \frac{C}{\ln{\frac{1}{\rho}}} \text{ if } i \geq 1 $$
where, if we denote by $p_i^1, \cdots, p_i^{s_i}$ the rescaled bad points,
$$ \Omega_0(\rho) = M \setminus \bigcup_{j=1}^{s_0} \mathbb{B}_\rho\left(p_0^j \right) \text{ and } \Omega_i(\rho) = \mathbb{B}_{\frac{1}{\rho}}\setminus \bigcup_{j=1}^{s_i} \mathbb{B}_\rho\left(p_i^j \right) \text{ if } i \geq 1. $$
Then, knowing that $Q_\eps\left( q_i^\eps + \alpha_i^\eps x \right) \to f^{\frac{n-2}{2}}$ in $L^{\frac{n}{n-2}}\left(\Omega_i(\rho)\right)$ and $\lambda_\eps^{\frac{n}{2}}e^{nu_\eps}\left( q_i^\eps + \alpha_i^\eps x \right) \to f^{\frac{n}{2}}$ for the weak-$\star$ convergence of measures on $\Omega_i(\rho)$, a straightforward computation gives that
$$ \lambda_\eps \leq \max_{\varphi \in \left\langle \tilde{\theta}_0^\eps,\cdots,\tilde{\theta}_k^\eps \right\rangle} \frac{\int_M \left\vert \nabla \varphi \right\vert_g^2 \lambda_\eps^{\frac{2-n}{2}}Q_\eps dv_g}{ \int_M \varphi^2 e^{n u_\eps} dv_g} \leq \lambda_k\left( \tilde{M}, f.h \right) + o(1) + c \left(\frac{C}{\ln{\frac{1}{\rho}}}\right)^{\frac{1}{n}} $$
as $\eps \to 0$. Letting $\eps \to 0$ and then $\rho \to 0$ gives
$$ \limsup_{\eps\to 0} \lambda_\eps \leq \lambda_k\left( \tilde{M}, f.h \right) $$
so that $\Lambda_k(M, [g]) \leq \lambda_k\left( \tilde{M}, f.h \right) $. By a result of Colbois and El-Soufi [2] (see also [12] in the Steklov case), we must have equality, so that the limit of $\lambda_\eps$ is the $k$-th eigenvalue of the limit of the maximizing sequence. We also obtain Theorem <ref>: under the strict inequality assumption, there is no bubbling ($t=0$).
§ INDEPENDENCE OF REGULARITY ESTIMATES FOR HARMONIC MAPS WITH RESPECT TO THE DIMENSION OF THE TARGET SPHERE
In this section, we aim to generalize the known estimates on $n$-harmonic maps, paying careful attention to their possible dependence on the dimension of the target sphere.
§.§ A priori estimates for $n$-harmonic maps from a $n$-manifold into spheres
We say that $\Psi : \mathbb{B}^n \to \mathbb{S}^p$ is a $\tau$-approximated $n$-harmonic map if it is the limit as $\tau \to 0$ of a sequence of maps $\Psi_{\tau} : \mathbb{B}^n \to \mathbb{S}^p$ which are solutions of the following minimization problems
$$ \inf \left\{ \int_{\mathbb{B}^n} \left(\left\vert \nabla \varphi \right\vert_g^2 + \tau\right)^{\frac{n}{2}} dv_g ; \varphi : \mathbb{B}^n \to \mathbb{S}^p \text{ and } \varphi = \Psi \text{ on } \partial\mathbb{B}^n \right\} $$
It satisfies the following Euler-Lagrange equation
$$ -div_g\left( \left(\left\vert \nabla \Psi_\tau \right\vert_g^2 + \tau \right)^{\frac{n-2}{2}} \nabla \Psi_\tau \right) = \left(\left\vert \nabla \Psi_\tau \right\vert_g^2 + \tau \right)^{\frac{n-2}{2}} \left\vert \nabla \Psi_\tau \right\vert_g^2 \Psi_\tau $$
and we have that $\Psi_{\tau} \in \mathcal{C}^{\infty}\left( \mathbb{B}^n \right)$ [28, 41]. We say that $\Psi_\tau$ is a $(\tau,n)$-harmonic map. For instance, we will deduce from Proposition <ref> and [41] that any $n$-harmonic map into a sphere is locally a $\tau$-approximated $n$-harmonic map. In particular, we deduce local uniqueness of the harmonic replacement of $n$-harmonic maps into a sphere $\mathbb{S}^p$. We also have the following proposition:
There are constants $\eps_0$ and $C_{n}$ such that for any $p\geq 2$ and any $\tau$-approximated $n$-harmonic map $\Psi: \mathbb{B}^n \to \mathbb{S}^p$ such that
$$ \int_{ \mathbb{B}^n} \left\vert \nabla \Psi \right\vert_g^n \leq \eps_0 $$
we have for any ball $\mathbb{B}^n_r(p)\subset \mathbb{B}^n$
$$ r^n \left\| \nabla \Psi \right\|_{\mathcal{C}^{0}\left(\mathbb{B}^n_{\frac{r}{2}}(p)\right)}^n \leq C_n \int_{\mathbb{B}^n_r(p)} \left\vert \nabla \Psi \right\vert_g^n dv_g $$
In order to prove Proposition <ref>, we first prove that a $(\tau,n)$-harmonic map satisfies the following
For any $p\geq 2$ and any $(\tau,n)$-harmonic map $\Psi_ \tau : \mathbb{B}^n \to \mathbb{S}^p$, there is $(a_{i,j}^\tau)_{1\leq i,j \leq n}$ such that for any $X \in \R^n$
$$ \frac{1}{n} \left\vert X \right\vert^2 \leq a_{i,j}^\tau X^iX^j \leq \frac{n-1}{n} \left\vert X \right\vert^2 \text{ and } \forall i,j, \left\vert a_{i,j}^\tau \right\vert \leq 1$$
and such that
$$ u\left(\kappa_g + \left\vert \nabla \Psi \right\vert_g^2\right) \geq P \left\vert \nabla \Psi \right\vert^2 \left(\kappa_g + \left\vert \nabla \Psi \right\vert_g^2\right) \geq - div_g\left( a^\tau \nabla u \right) + \left\vert \nabla^2 \Psi \right\vert^2 P +\frac{n-2}{4} \mu^{n-4} \left\vert \nabla B \right\vert^2 $$
where $B = \left\vert \nabla \Psi_\tau \right\vert_g^2 $, $\mu = \left(B + \tau \right)^{\frac{1}{2}} $, $P= \mu^{n-2}$,
$ u = \mu^n $ and $\kappa_g$ is a constant such that $Ric_g \geq -\kappa_g g$.
During the proof we set $\Psi = \Psi_\tau$, $B= \left\vert \nabla \Psi \right\vert_g^2$, $P = \left(B+\tau\right)^{\frac{n-2}{2}}$ and $u = u_\tau = \left(B+\tau\right)^{\frac{n}{2}} = \mu^n$. We have that
\begin{equation*} \begin{split} & \nabla \Psi . \Delta\left( P \nabla \Psi \right) \\ = & \sum_i \left( - \nabla_i\left( \nabla \Psi . \nabla_i\left(P\nabla \Psi \right) \right) + \nabla_i\nabla\Psi . \nabla_i\left( P \nabla \Psi \right) \right) \\
= & - \sum_i \left( \nabla_i\left( \left\vert \nabla \Psi \right\vert^2 \nabla_i P \right) + \nabla_i\left( \nabla_i \nabla \Psi . \nabla \Psi P \right) \right) \\
& + \sum_i \left( P \nabla_i \nabla \Psi . \nabla_i \nabla \Psi + \nabla_i P \nabla_i \nabla \Psi . \nabla\Psi \right) \\
=& - \sum_i \nabla_i\left( \left( \frac{n-2}{2}B \left(B+\tau\right)^{\frac{n-4}{2}} + \frac{1}{2} P \right) \nabla_i B \right) \\
& + \left\vert \nabla^2 \Psi \right\vert^2 P +\frac{n-2}{4} \left(B+\tau\right)^{\frac{n-4}{2}} \left\vert \nabla B \right\vert^2 \\
\end{split}
\end{equation*}
Now, we aim at computing $\nabla \Psi . \Delta\left( P \nabla \Psi \right)$ in another way, using the equation on the $n$-harmonic map $\Psi$. We write $\Delta_g = - d_g ^\star d_g - d_g d_g^\star$ the Laplacian acting on forms, so that
\begin{equation*}
\begin{split}
& \left\langle d\Psi , \Delta_g\left( P d \Psi \right) \right\rangle_g = - \left\langle d\Psi , \left(d_g^\star d_g + d_gd_g^\star\right)\left( P d \Psi \right) \right\rangle_g \\
= & -\left\langle d\Psi , d_g^\star\left( dP \wedge d\Psi \right)\right\rangle_g + \left\langle d\Psi , d\left( P \left\vert \nabla \Psi \right\vert_g^2 \Psi \right) \right\rangle_g + P \left\langle d\Psi , \star\left( (\star F) \wedge d \Psi \right) \right\rangle_g \\
= & - \left\langle d\Psi , d_g^\star\left( dP \wedge d\Psi \right)\right\rangle_g + P \left\vert \nabla \Psi \right\vert_g^4 - P Ric_g(\nabla \Psi, \nabla \Psi)
\end{split}
\end{equation*}
where we used for the second equality that $d_g d_g\Psi = F \Psi$, where $F$ is the curvature 2-form, that $- d_g^\star\left( P d\Psi\right) = P \left\vert \nabla \Psi \right\vert^2 \Psi $ and that $\Psi . d\Psi = 0 $. We use again $\Psi . d\Psi = 0 $ for the third equality. Now, for a function $\theta \in \mathcal{C}^{\infty}_c\left(\mathbb{B}^n\right)$, we have
\begin{equation*}
\begin{split} - \int_{\mathbb{B}^n} \theta & \left\langle d_g\Psi , d_g^\star\left( d P \wedge d\Psi \right)\right\rangle_g dv_g = \int_{\mathbb{B}^n} \left\langle d\theta \wedge d\Psi , dP \wedge d\Psi \right\rangle_g dv_g \\
& = \int_{\mathbb{B}^n} \left( \left\langle d\theta , d P \right\rangle_g B - \left\langle d\theta , d\Psi \right\rangle_g \left\langle dP , d\Psi \right\rangle_g \right)dv_g \\
& = - \int_{\mathbb{B}^n} \theta \nabla_i \left( \frac{n-2}{2} \left(B+\tau\right)^{\frac{n-4}{2}}\left( B \delta_{i,j} - \nabla_i \Psi \nabla_j \Psi \right) \nabla_j B\right)
\end{split}
\end{equation*}
where we used again for the second equality that $d_g d_g\Psi = F \Psi$ and $\Psi.d\Psi= 0$. Combining the previous equalities
\begin{equation*} \begin{split} P \kappa_g \left\vert \nabla \Psi \right\vert_g^2 + P \left\vert \nabla \Psi \right\vert_g^4 \geq & - \sum_i \nabla_i\left( \left( \frac{n-2}{2}B \left(B+\tau\right)^{\frac{n-4}{2}} + \frac{1}{2} P \right) \nabla_i B \right) \\
& + \sum_{i,j} \frac{n-2}{2} \nabla_i \left( \left(B+\tau\right)^{\frac{n-4}{2}} \left(B\delta_{i,j} - \nabla_i \Psi \nabla_j \Psi \right) \nabla_j B \right) \\
& + \left\vert \nabla^2 \Psi \right\vert^2 P +\frac{n-2}{4} \left(B+\tau\right)^{\frac{n-4}{2}} \left\vert \nabla B \right\vert^2
\end{split} \end{equation*}
Noticing that
$$ \nabla u = \frac{n}{2} P \nabla B $$
we obtain that
\begin{equation*} u \left(\kappa_g + \left\vert \nabla \Psi \right\vert_g^2\right) \geq - div_g\left( a^\tau \nabla u \right) + \left\vert \nabla^2 \Psi \right\vert^2 P +\frac{n-2}{4} \left(B+\tau\right)^{\frac{n-4}{2}} \left\vert \nabla B \right\vert^2
\end{equation*}
where
\begin{equation*} a_{i,j}^\tau = \frac{1}{n}\left( \delta_{i,j} + (n-2) \frac{\nabla_i \Psi.\nabla_j \Psi}{B + \tau} \right).
\end{equation*}
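The bounds on $a^\tau$ stated in the claim can be checked directly: by the Cauchy–Schwarz inequality, for any $X \in \R^n$,
$$ 0 \leq \sum_{a} \left( \sum_i X^i \nabla_i \Psi^a \right)^2 \leq \left\vert X \right\vert^2 \left\vert \nabla \Psi \right\vert^2 \leq \left\vert X \right\vert^2 \left( B + \tau \right), $$
so that
$$ \frac{1}{n} \left\vert X \right\vert^2 \leq a_{i,j}^\tau X^i X^j \leq \frac{1 + (n-2)}{n} \left\vert X \right\vert^2 = \frac{n-1}{n} \left\vert X \right\vert^2 \quad \text{and} \quad \left\vert a_{i,j}^\tau \right\vert \leq 1. $$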
We deduce $\eps$-regularity results, independent of the dimension of the target sphere, for these maps:
For any $n\geq 3$, there are $\eps_{n,g} >0$ and a constant $C_{n,g}$ such that for any $p\geq 2$, any $\tau >0$ and any $(\tau,n)$-harmonic map $\Psi_ \tau : \mathbb{B}^n \to \mathbb{S}^p$ such that
$$ \int_{ \mathbb{B}^n} \left(\left\vert \nabla \Psi_{\tau} \right\vert_g^2 + \tau\right)^{\frac{n}{2}} dv_g \leq \eps_{n,g} $$
we have for any ball $\mathbb{B}^n_r(p)\subset \mathbb{B}^n$
$$ r^n \left\| \nabla \Psi_\tau \right\|_{\mathcal{C}^{0}\left(\mathbb{B}^n_{\frac{r}{2}}(p)\right)}^n \leq {C_{n,g}} \int_{\mathbb{B}^n_r(p)} \left(\left\vert \nabla \Psi_{\tau} \right\vert_g^2 + \tau\right)^{\frac{n}{2}} dv_g $$
Notice that it is sufficient to prove that there is $\eps_{n,g}$ small enough such that for any $p \in \mathbb{B}^n$ and any $r_0>0$ such that $\mathbb{B}^n_{r_0}(p) \subset \mathbb{B}^n$
$$ \left\vert \nabla \Psi_\tau(p) \right\vert^n \leq \frac{\delta}{ r_0^n} \text{ where } \int_{\mathbb{B}^n_{r_0}(p)} \left(\left\vert \nabla \Psi_{\tau} \right\vert_g^2 + \tau\right)^{\frac{n}{2}} dv_g = \delta \int_{ \mathbb{B}^n} \left(\left\vert \nabla \Psi_{\tau} \right\vert_g^2 + \tau\right)^{\frac{n}{2}} dv_g \leq \delta \eps_{n,g} $$
We set
$$F(x) = \left( r_0 - \left\vert x- p \right\vert \right)^n \left\vert \nabla \Psi_\tau \right\vert_g^n$$
and we let $x_0 \in \mathbb{B}^n_{r_0}(p)$ be such that $F(x_0) = \sup_{x\in \mathbb{B}^n_{r_0}(p)} F(x) $. Notice that it is sufficient to prove that, for $\eps_{n,g}$ small enough, $F(x_0)\leq \delta$. We assume by contradiction that $F(x_0) > \delta $. We set $\sigma>0$ such that
$$ \sigma^n \left\vert \nabla \Psi_{\tau} \right\vert_g^n(x_0) = \frac{\delta}{4} $$
Since $F(x_0) >\delta$, we have that $2\sigma \leq r_0 - \left\vert x_0-p \right\vert$. By the triangle inequality, we have for $x \in \mathbb{B}_\sigma(x_0)$ that
$$ \frac{1}{2} \leq \frac{r_0 - \left\vert x-p \right\vert}{r_0 - \left\vert x_0-p \right\vert} \leq 2 $$
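Indeed, for $x \in \mathbb{B}_\sigma(x_0)$ the triangle inequality gives $\left\vert \left\vert x-p \right\vert - \left\vert x_0-p \right\vert \right\vert \leq \sigma$, so that, using $2\sigma \leq r_0 - \left\vert x_0-p \right\vert$,
$$ r_0 - \left\vert x-p \right\vert \geq r_0 - \left\vert x_0-p \right\vert - \sigma \geq \frac{1}{2}\left(r_0 - \left\vert x_0-p \right\vert\right) \text{ and } r_0 - \left\vert x-p \right\vert \leq r_0 - \left\vert x_0-p \right\vert + \sigma \leq 2\left(r_0 - \left\vert x_0-p \right\vert\right). $$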
Since $F$ realizes its maximum at $x_0$, we have
\begin{equation*}
\begin{split}
\left(r_0 - \left\vert x_0-p \right\vert\right)^n \sup_{x\in\mathbb{B}^n_\sigma(x_0)} \left\vert \nabla \Psi_\tau(x) \right\vert^n \leq 2^n \sup_{x\in \mathbb{B}^n_\sigma(x_0)} F(x) = 2^n F(x_0) \\
= 2^n \left(r_0 - \left\vert x_0-p \right\vert\right)^n \left\vert \nabla \Psi_\tau(x_0) \right\vert^n
\end{split}
\end{equation*}
so that by definition of $\sigma$
\begin{equation} \label{eqestgradpsitausigma} \sup_{x\in\mathbb{B}^n_\sigma(x_0)} \left\vert \nabla \Psi_\tau(x) \right\vert^n \leq 2^n \left\vert \nabla \Psi_\tau(x_0) \right\vert^n = \frac{2^{n-2}\,\delta}{\sigma^n}. \end{equation}
We set $\tilde{g}(z) = g(x_0+\sigma z)$, $\tilde{\tau} = \sigma^2 \tau$, $\widetilde{\Psi}(z) = \Psi( x_0 + \sigma z )$ and $\widetilde{Q} = \left( \left\vert \nabla \widetilde{\Psi} \right\vert^2 + \tilde{\tau} \right)^{\frac{n-2}{2}}$ and $\tilde{u} = \left( \left\vert \nabla \widetilde{\Psi} \right\vert^2 + \tilde{\tau} \right)^{\frac{n}{2}}$ so that we obtain
$$ -div_{\tilde{g}}\left( \widetilde{Q} \nabla \widetilde{\Psi} \right) = \widetilde{Q} \left\vert \nabla \widetilde{\Psi} \right\vert^2 \widetilde{\Psi} $$
and by Claim <ref>,
$$ -div_{\tilde{g}}(\tilde{a} \nabla \tilde{u}) \leq \tilde{u} \left( \left\vert \nabla \widetilde{\Psi} \right\vert_{\tilde{g}}^2 + \kappa_{\tilde{g}} \right) $$
where $\left(\tilde{a}_{i,j}\right)$ is such that for any $X \in \R^n$
$$ \frac{1}{n} \left\vert X \right\vert^2 \leq \tilde{a}_{i,j} X^iX^j \leq \frac{n-1}{n} \left\vert X \right\vert^2 \text{ and } \forall i,j, \left\vert \tilde{a}_{i,j} \right\vert \leq 1$$
By the rescaled version of (<ref>), we obtain
$$ -div_{\tilde{g}}(\tilde{a} \nabla \tilde{u}) \leq \tilde{u} \left(1 + \kappa_{\tilde{g}} \right) $$
By standard elliptic a priori estimates for smooth positive subsolutions (see e.g. [15], Chapter 4) and knowing that $\tilde{u} \geq 0$, we obtain that
$$ \frac{\delta}{4} = \left\vert \nabla \widetilde{\Psi}\right\vert_{\tilde{g}}^n(0) \leq \tilde{u}(0) \leq K_{n} \left(1 + \kappa_{\tilde{g}} \right) \int_{\mathbb{B}^n} \tilde{u} \leq K_{n,g} \int_{\mathbb{B}^n_{r_0}(p)} u_\tau dv_g \leq K_{n,g} \delta \eps_{n,g} $$
for some constants $K_n$ and $K_{n,g}$. Setting $\eps_{n,g} = \frac{1}{8K_{n,g}}$ gives the Claim.
Proposition <ref> then follows by letting $\tau \to 0$ along the $(\tau,n)$-harmonic maps that converge to the $\tau$-approximated $n$-harmonic map.
§.§ Global strong convergences independent of the dimension of the target manifold
The following claim adapts a result by Courilleau [5] to infinite-dimensional target manifolds.
Let $u_k : M \to \mathbb{R}^{\mathbb{N}}$ be a sequence of maps such that
$$\limsup_{k\to +\infty} \int_{M} \left( \left\vert u_k \right\vert^2 + \left\vert \nabla u_k \right\vert_g^n \right)dv_g < +\infty $$
We assume that
$$ div_g\left( \left\vert \nabla u_k \right\vert_g^{n-2} \nabla u_k^i \right) = A_k^i + B_k^i $$
$$ \limsup_{k\to +\infty} \left( \int_M \left\vert A_k \right\vert + \| B_k \|_{W^{-1,n}} \right)< +\infty $$
$$ \| B_k \|_{W^{-1,n}} \to 0 \text{ as } k \to +\infty $$
where we set
$$ \left\vert u_k \right\vert^2 = \sum_{i=0}^{+\infty} \left(u_k^i\right)^2 \text{ and } \left\vert \nabla u_k \right\vert_g^{2} = \sum_{i=0}^{+\infty} \left\vert \nabla u_k^i \right\vert_g^2 $$
and for $n_\star = \frac{n}{n-1}$,
$$ \| B_k \|_{W^{-1,n}} = \sup_{ \varphi : M \to \mathbb{R}^{\mathbb{N}} } \frac{\left\vert \int_M \sum_i B_k^i \varphi_i dv_g \right\vert }{ \left( \int_M \left(\left\vert \varphi \right\vert^{n_\star} + \left\vert \nabla \varphi \right\vert^{n_\star}\right)dv_g\right)^{\frac{1}{n_\star}} } $$
Then, up to a subsequence, there is $u : M \to \mathbb{R}^{\mathbb{N}}$ such that for any $1\leq p < +\infty$ and any $1\leq q < n$,
\begin{equation} \label{eqLqconvergencegrad} \int_M \left( \left\vert u_k -u \right\vert^p + \left\vert \nabla \left(u_k-u\right) \right\vert_g^{q} \right)dv_g \to 0 \end{equation}
as $k\to +\infty$.
We first notice that by classical compact Sobolev embeddings, up to a diagonal extraction of subsequences, there is a subsequence of $(u_k)$ such that for any $i \in\mathbb{N}$, we have
$$ u_k^i \rightharpoonup u^i \text{ in } W^{1,n} $$
$$ u_k^i \to u^i \text{ in } L^p \text{ for any } 1\leq p < +\infty $$
$$ u_k^i \to u^i \text{ a.e. }$$
$$ \forall k \in \mathbb{N}, \int_M \left(u_k^i - u^i \right)^2 dv_g \leq 2^{-i}$$
$$ \forall (\varphi_i), \int_M \left(\left\vert \varphi \right\vert^{n_\star} + \left\vert \nabla \varphi \right\vert^{n_\star}\right)dv_g < +\infty \Rightarrow \int_M \sum_i \nabla \left(u_k^i - u^i \right) \nabla \varphi_i \to 0 \text{ as } k\to +\infty$$
Then, by weak convergence, we have that for any $m\in \mathbb{N}$
$$ \int_M \left(\sum_{i=0}^m \left\vert \nabla u^i \right\vert_g^2\right)^{\frac{n}{2}}dv_g \leq \liminf_{k\to+\infty} \int_M \left(\sum_{i=0}^m \left\vert \nabla u_k^i \right\vert_g^2\right)^{\frac{n}{2}}dv_g \leq \liminf_{k\to+\infty} \int_M \left\vert \nabla u_k \right\vert_g^ndv_g $$
so that passing to the limit as $m\to +\infty$, we have
$$ \int_M \left\vert \nabla u \right\vert_g^ndv_g \leq \liminf_{k\to+\infty} \int_M \left\vert \nabla u_k \right\vert_g^ndv_g. $$
By the same argument, using Sobolev embeddings, we have for any $1 \leq p< +\infty$
$$ \int_M \left\vert u \right\vert^p dv_g \leq \liminf_{k\to +\infty} \int_M \left\vert u_k \right\vert^p dv_g $$
In fact, we have equality in the previous inequality. Indeed, for any $m\in \mathbb{N}$
$$ \lim_{k\to +\infty } \int_M \left(\sum_{i=0}^m \left( u_k^i - u^i \right)^2\right)^{\frac{p}{2}} dv_g = 0 $$
so that, integrating and using $\int_M \left(u_k^i - u^i \right)^2 dv_g \leq 2^{-i}$, we get
$$ \int_M \left\vert u_k - u \right\vert^2 dv_g \leq \int_M \sum_{i=0}^m \left( u_k^i - u^i \right)^2 dv_g + 2^{-m}. $$
Since $(u_k - u)$ is bounded in every $L^q$ (by the Sobolev embeddings used above), a Hölder interpolation inequality then gives, for any $m\in \mathbb{N}$,
$$ \limsup_{k\to +\infty } \int_M \left\vert u_k - u \right\vert^p dv_g \leq C_p\, 2^{-m\theta_p} \text{ for some } \theta_p \in (0,1], $$
and we obtain the first part of (<ref>). Let us now prove (<ref>).
STEP 1: Up to a new subsequence, we have that
\begin{equation} \label{eqstep1courilleau} \sum_{i=0}^{+\infty}\left( \left\vert \nabla u_k \right\vert_g^{n-2} \nabla u_k^i - \left\vert \nabla u \right\vert_g^{n-2} \nabla u^i \right).\left( \nabla u_k^i - \nabla u^i \right) \to_{a.e.} 0 \text{ as } k\to +\infty \end{equation}
Proof of Step 1: we have that
\begin{equation*}
\begin{split} \sum_{i=0}^{+\infty}\left( \left\vert \nabla u_k \right\vert_g^{n-2} \nabla u_k^i - \left\vert \nabla u \right\vert_g^{n-2} \nabla u^i \right).\left( \nabla u_k^i - \nabla u^i \right) \\
\geq \left( \left\vert \nabla u_k \right\vert_g -\left\vert \nabla u \right\vert_g \right)\left( \left\vert \nabla u_k \right\vert_g^{n-1} -\left\vert \nabla u \right\vert_g^{n-1} \right) \geq 0
\end{split}
\end{equation*}
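The first inequality above can be checked directly: writing $X = \nabla u_k$ and $Y = \nabla u$ at a given point, expanding the sum and applying the Cauchy–Schwarz inequality $\langle X, Y \rangle \leq \left\vert X \right\vert \left\vert Y \right\vert$ gives the following short verification.

```latex
\begin{aligned}
\sum_{i=0}^{+\infty}\left( \vert X \vert^{n-2} X^i - \vert Y \vert^{n-2} Y^i \right)\cdot\left( X^i - Y^i \right)
&= \vert X \vert^n + \vert Y \vert^n - \left( \vert X \vert^{n-2} + \vert Y \vert^{n-2} \right)\langle X, Y \rangle \\
&\geq \vert X \vert^n + \vert Y \vert^n - \vert X \vert \vert Y \vert \left( \vert X \vert^{n-2} + \vert Y \vert^{n-2} \right) \\
&= \left( \vert X \vert - \vert Y \vert \right)\left( \vert X \vert^{n-1} - \vert Y \vert^{n-1} \right) \geq 0 .
\end{aligned}
```

The last quantity is nonnegative since both factors have the sign of $\vert X \vert - \vert Y \vert$.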
Then, we just need upper bounds.
Let $\delta>0$. By Egorov's theorem, there is $E_{\delta}\subset\subset M$ such that $Vol_g(M\setminus E_\delta) < \delta$ and such that $\left\vert u_k - u \right\vert^2$ converges uniformly to $0$ in $E_\delta$. We aim at proving that
\begin{equation} \label{convergesto0egoroff} \lim_{k\to +\infty} \int_{E_\delta} \sum_{i=0}^{+\infty}\left( \left\vert \nabla u_k \right\vert_g^{n-2} \nabla u_k^i - \left\vert \nabla u \right\vert_g^{n-2} \nabla u^i \right).\left( \nabla u_k^i - \nabla u^i \right)dv_g = 0 \end{equation}
Let $\eps>0$. We let $\delta_\eps>0$ be such that for any measurable set $A$ with $Vol_g(M\setminus A) < \delta_\eps$,
$$ \int_{M\setminus A} \left\vert \nabla u \right\vert^n dv_g < \eps $$
and we use this with $A_{\delta_\eps} = E_{\delta_\eps} \cup E_\delta$. By uniform convergence, let $k_0$ be such that for any $k\geq k_0$, $\left\vert u-u_k \right\vert \leq \eps$ on $A_{\delta_\eps}$. We then have, for a cut-off function $\eta \in \mathcal{C}^{\infty}_c(M)$ such that $\eta \leq 1$ and $\eta = 1$ in $A_{\delta_\eps}$, and a truncation function $\beta_\eps : \mathbb{R}^{\mathbb{N}} \to \mathbb{R}^{\mathbb{N}}$ defined by $\beta_\eps(v) = v$ if $\left\vert v \right\vert \leq \eps$ and $\beta_\eps(v) = \eps \frac{v}{\left\vert v \right\vert}$ if $\left\vert v \right\vert >\eps$, that
\begin{equation*}
\begin{split} \int_{M} & \eta \sum_{i=0}^{+\infty} \left( \left\vert \nabla u_k \right\vert_g^{n-2} \nabla u_k^i - \left\vert \nabla u \right\vert_g^{n-2} \nabla u^i \right).\left( \nabla \beta_\eps(u_k - u)^i \right)dv_g \\
\leq & \int_M \eta \sum_i \left(A_k^i + B_k^i\right) \beta_{\eps}\left(u_k - u\right)^i - \int_M \sum_i \left( \nabla \eta \left\vert \nabla u_k \right\vert_g^{n-2} \nabla u_k^i \right) \beta_{\eps}\left(u_k - u\right)^i \\
& - \int_{M} \eta \sum_{i} \left\vert \nabla u \right\vert_g^{n-2} \nabla u^i . \nabla \left( \beta_{\eps}\left(u_k - u\right)^i \right) \\
\leq & \left( \int_M \left\vert A_k \right\vert \right) \left\| \left\vert \beta_{\eps}\left(u_k - u \right) \right\vert \right\|_{\infty} + \left\| \nabla \eta \right\|_{\infty} \left(\int_M \left\vert \nabla u_k \right\vert^{n}\right)^{\frac{n-1}{n}} \left(\int_{M} \left\vert \beta_\eps(u_k-u) \right\vert^n\right)^{\frac{1}{n}} \\
&+ \| B_k \|_{W^{-1,n}} \| \left\vert \beta_\eps(u_k-u) \right\vert \|_{W^{1,n}} + \left\vert \int_{M} \eta \sum_{i} \left\vert \nabla u \right\vert_g^{n-2} \nabla u^i . \nabla \left( \beta_{\eps}\left(u_k - u\right)^i \right) \right\vert \\
\leq & \left( \int_M \left\vert A_k \right\vert \right) \eps + o(1) \text{ as } k \to +\infty
\end{split}
\end{equation*}
We also have that
\begin{equation*}
\begin{split} & \left\vert \int_{M\setminus A_{\delta_\eps}} \eta \sum_{i=0}^{+\infty} \left( \left\vert \nabla u_k \right\vert_g^{n-2} \nabla u_k^i - \left\vert \nabla u \right\vert_g^{n-2} \nabla u^i \right)\left( \nabla \beta_\eps(u_k - u)^i \right)dv_g \right\vert \\
& \leq \int_{M\setminus A_{\delta_\eps}}\left( \left\vert \nabla u_k \right\vert^{n-1} \left\vert \nabla \beta_\eps(u_k-u) \right\vert + \left\vert \nabla u \right\vert^{n-1} \left\vert \nabla \beta_\eps(u_k-u) \right\vert \right) \\
& \leq C \int_{M\setminus A_{\delta_\eps}}\left( \left\vert \nabla u_k \right\vert^{n-1} \left\vert \nabla u \right\vert + \left\vert \nabla u \right\vert^{n-1} \left\vert \nabla u_k \right\vert \right) \\
& \leq C \left( \left(\int_M \left\vert \nabla u_k \right\vert^{n}\right)^{\frac{n-1}{n}} \eps^{\frac{1}{n}} + \left(\int_M \left\vert \nabla u_k \right\vert^{n}\right)^{\frac{1}{n}} \eps^{\frac{1}{n_\star}} \right)
\end{split}
\end{equation*}
and we finally obtain
\begin{equation*}
\begin{split} \limsup_{k\to +\infty} \int_{E_\delta} & \sum_{i=0}^{+\infty} \left( \left\vert \nabla u_k \right\vert_g^{n-2} \nabla u_k^i - \left\vert \nabla u \right\vert_g^{n-2} \nabla u^i \right).\left( \nabla (u_k^i - u^i) \right)dv_g \\
\leq & \limsup_{k\to +\infty} \int_{A_{\delta_\eps}} \eta \sum_{i=0}^{+\infty} \left( \left\vert \nabla u_k \right\vert_g^{n-2} \nabla u_k^i - \left\vert \nabla u \right\vert_g^{n-2} \nabla u^i \right)\left( \nabla \beta_\eps(u_k - u)^i \right)dv_g \\
\leq & \limsup_{k\to +\infty}\left( \left( \int_M \left\vert A_k \right\vert \right) \eps + C \left( \left(\int_M \left\vert \nabla u_k \right\vert^{n}\right)^{\frac{n-1}{n}} \eps^{\frac{1}{n}} + \left(\int_M \left\vert \nabla u_k \right\vert^{n}\right)^{\frac{1}{n}} \eps^{\frac{1}{n_\star}} \right) \right)
\end{split}
\end{equation*}
Letting $\eps\to 0$ gives (<ref>). Then, letting $\delta\to 0$ gives Step 1.
STEP 2: We have that
$$ \left\vert \nabla u_k - \nabla u \right\vert_g \to_{a.e.} 0 \text{ as } k\to +\infty $$
Proof of Step 2: For a fixed $z \in M$ such that (<ref>) occurs at the point $z$, we set $X_k = \nabla u_k(z) $, $X = \nabla u(z)$ and, for any $Y \in \left(\mathbb{R}^n\right)^{\mathbb{N}}$,
$$ \left\vert Y \right\vert^2 = \sum_{i=0}^{+\infty}g^{a,b}(z) \left(Y^i\right)_a \left(Y^i\right)_b $$
By Step 1, we have that
$$ \sum_{i=0}^{+\infty}\left( \left\vert X_k \right\vert^{n-2} X_k^i - \left\vert X \right\vert^{n-2} X^i \right).\left( X_k^i - X^i \right) \to 0 \text{ as } k \to +\infty$$
First, it is clear that $\left\vert X_k \right\vert$ is bounded. Indeed, we deduce from the previous convergence that
$$ \left\vert X_k \right\vert^n + \left\vert X \right\vert^n \leq \left(\left\vert X_k \right\vert^{n-2} + \left\vert X \right\vert^{n-2}\right)\left\vert X_k \right\vert \left\vert X \right\vert + o(1) \text{ as } k \to +\infty. $$
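Indeed, a short sketch: if $\left\vert X_k \right\vert \to +\infty$ along a subsequence, dividing the previous inequality by $\left\vert X_k \right\vert^n$ yields

```latex
1 \;\leq\; \frac{\vert X_k \vert^n + \vert X \vert^n}{\vert X_k \vert^n}
\;\leq\; \frac{\vert X \vert}{\vert X_k \vert} + \left( \frac{\vert X \vert}{\vert X_k \vert} \right)^{n-1} + \frac{o(1)}{\vert X_k \vert^n}
\;\longrightarrow\; 0 \quad \text{as } k \to +\infty,
```

a contradiction, so $\left\vert X_k \right\vert$ stays bounded.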
Up to a subsequence there is $Z$ such that $X_k$ converges weakly to $Z$ in $l^2\left(\left(\mathbb{R}^n\right)^{\mathbb{N}}\right)$. Up to another subsequence, we set $\alpha = \lim_{k \to +\infty} \left\vert X_k \right\vert$ and $\beta = \left\vert X \right\vert $. Then, passing to the limit, we have
$$ \alpha^n + \beta^n = \left(\alpha^{n-2} + \beta^{n-2}\right) \left\langle X, Z \right\rangle $$
By the Cauchy–Schwarz inequality, and since $\left\vert Z \right\vert \leq \alpha$, we obtain
$$ \alpha^n + \beta^n \leq \left(\alpha^{n-2} + \beta^{n-2}\right) \alpha \beta $$
so that since $2\alpha \beta \leq \alpha^2 + \beta^2$,
$$ \alpha^n + \beta^n \leq \left( \alpha^2 \beta^{n-2} + \beta^2 \alpha^{n-2} \right)$$
which implies that
$$ \left(\alpha^{n-2} - \beta^{n-2}\right)\left(\alpha^2 -\beta^2\right) \leq 0 $$
and that all the previous inequalities are in fact equalities: $\alpha = \beta = \left\vert Z \right\vert $ and $\left\langle X,Z \right\rangle = \alpha^2$. In particular, $Z = X$ and $X_k$ converges strongly to $Z = X$. Since the limit is independent of the subsequence, Step 2 is proved.
STEP 3: Conclusion: since $\left\vert \nabla u_k -\nabla u \right\vert$ is bounded in $L^n(M,g)$ and converges almost everywhere to $0$, it converges strongly to $0$ in $L^q(M,g)$ for any $q < n$.
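The concluding convergence is the standard Egorov/Hölder argument; as a sketch, write $f_k = \left\vert \nabla u_k - \nabla u \right\vert_g$, which satisfies $\sup_k \| f_k \|_{L^n} \leq C$ by the assumptions of the claim, and pick an Egorov set $E$ on which $f_k \to 0$ uniformly with $Vol_g(M\setminus E)$ arbitrarily small. Then, for $q < n$,

```latex
\int_M f_k^q \, dv_g
= \int_{E} f_k^q \, dv_g + \int_{M\setminus E} f_k^q \, dv_g
\leq Vol_g(M)\, \sup_{E} f_k^q
+ \left( \int_M f_k^n \, dv_g \right)^{\frac{q}{n}} Vol_g\left(M\setminus E\right)^{1-\frac{q}{n}},
```

where the last term comes from the Hölder inequality with exponent $\frac{n}{q}$; the first term vanishes as $k\to +\infty$ and the second is at most $C^q\, Vol_g(M\setminus E)^{1-\frac{q}{n}}$.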
§.§ Point removability
Let $\Psi : \mathbb{B}\setminus\{0\} \to \mathbb{R}^{\mathbb{N}}$ be a $\mathcal{C}^1$ $n$-harmonic map into an infinite dimensional sphere such that
$$\int_{\mathbb{B}} \left\vert \nabla \Psi \right\vert_g^n dv_g < +\infty \text{ and } \left\vert \nabla \Psi \right\vert(x) \leq \frac{C}{\left\vert x \right\vert} $$
Then $\Psi : \mathbb{B}\setminus\{0\} \to \mathbb{R}^{\mathbb{N}}$ extends to a $\mathcal{C}^{1,\alpha}$ map on $\mathbb{B}$ which is $n$-harmonic.
We can follow and generalize the proof of [31] to infinite-dimensional target spheres.
§.§ Regular n-harmonic maps are locally $(\tau,n)$-approximated harmonic maps
The aim of this section is to prove the following result for $\mathcal{C}^1$ $n$-harmonic maps into a possibly infinite-dimensional target sphere.
There is $\delta_{n,g}>0$ such that for any weak $n$-harmonic map $\Phi : M \to \mathbb{R}^{\mathbb{N}}$ into a possibly infinite dimensional target sphere, such that $\Phi \in \mathcal{C}^{1}$ and for any ball $\mathbb{B}_r(p)$ such that
$$ \int_{\mathbb{B}_{2r}(p)} \left\vert \nabla \Phi \right\vert^n_g dv_g \leq \delta_{n,g} $$
for any $\left(\Psi_{\tau,m}\right)_{\tau>0, m\in \mathbb{N}}$ minimizers of
$$\Psi \mapsto \int_{\mathbb{B}_r(p)} \left( \left\vert \nabla \Psi\right\vert_g^2 + \tau \right)^{\frac{n}{2}} dv_g $$
among the $W^{1,n}$ maps $\Psi : M \to \mathbb{S}^m$ such that $\left\vert \Psi \right\vert^2 =_{a.e} 1$ and $\Psi = \frac{(\phi_0,\cdots,\phi_m)}{\left(\sum_{i=0}^m (\phi_i)^2 \right)^{\frac{1}{2}}}$ on $\partial \mathbb{B}_{r}(p)$, we have that $\Psi_{\tau,m}$ converges to $\Phi$ as $(\tau,m) \to (0,+\infty)$.
Step 1: Since $\Phi$ is a $\mathcal{C}^1$ function, it satisfies the a priori estimate on the gradient for $x \in \mathbb{B}_r(p)$
$$\left\vert \nabla \Phi(x) \right\vert_g^2 \leq C \int_{\mathbb{B}_{2r}(p)} \left\vert \nabla \Phi \right\vert^n_g dv_g \leq C \delta_{n,g} $$
We have for $x \in \mathbb{B}_r(p)$ that
$$ \left\vert \phi_1(0) - \phi_1(x) \right\vert^2 \leq r^2 C_g \sup_{z \in \mathbb{B}_r(p)}\left\vert \nabla \Phi(z) \right\vert_g^2 \leq C_g C \delta_{n,g}$$
so that, taking $\delta_{n,g}$ small enough and up to a rotation of coordinates so that $\phi_1(0) = 1$, we can assume that
$$ \forall x \in \mathbb{B}_r(p), \ \left\vert\phi_1(x) \right\vert \geq \frac{3}{4}. $$
Moreover, up to reducing $\delta_{n,g}$, and for $\tau$ small enough, $\Psi_{\tau,m}$ has to satisfy
$$ \forall x \in \mathbb{B}_r(p), \ \left\vert\left(\psi_{\tau,m}\right)_1(x) \right\vert \geq \frac{1}{2} $$
thanks to the same trick as in the proof of Step 1 of Claim <ref>, based on the Courant–Lebesgue lemma and Morrey–Sobolev embeddings.
Step 2: We set $\tilde \Phi_m = \frac{\Phi_m}{\left\vert \Phi_m \right\vert}$ where $ \Phi_m = (\phi_0,\cdots,\phi_m, 0,\cdots) $. We denote by $\tilde{\Psi}_{\tau,m} : M \to \mathbb{R}^{\mathbb{N}}$ the minimizer of
$$\Psi \mapsto \int_{\mathbb{B}_r(p)} \left( \left\vert \nabla \Psi\right\vert_g^2 + \tau \right)^{\frac{n}{2}} dv_g $$
among the $W^{1,n}$ maps $\Psi : M \to \mathbb{S}^m$ such that $\left\vert \Psi \right\vert^2 =_{a.e} 1$ and $\Psi = \tilde \Phi_m$ on $\partial \mathbb{B}_{r}(p)$. We also set
$$\Psi_{\tau,m} = \left\vert \Phi_m \right\vert \tilde{\Psi}_{\tau,m} + \Phi- \Phi_m \text{ so that } \Psi_{\tau,m} - \Phi = \left\vert \Phi_m \right\vert \left( \tilde{\Psi}_{\tau,m} - \tilde\Phi_m \right).$$
We have:
\begin{equation*}
\begin{split}\int_{\mathbb{B}_r(p)} \left\vert \nabla\left( \Phi - \Psi_{\tau,m}\right) \right\vert_g^2 \left\vert \nabla \Phi \right\vert_g^{n-2} - \left( \int_{\mathbb{B}_r(p)} \left\vert \nabla \Psi_{\tau,m} \right\vert_g^{2} \left\vert \nabla \Phi \right\vert_g^{n-2} - \int_{\mathbb{B}_r(p)} \left\vert \nabla \Phi \right\vert_g^n \right) \\
= 2 \int_{\mathbb{B}_r(p)} \left\vert \nabla \Phi \right\vert_g^{n-2} \nabla \Phi . \nabla\left( \Phi - \Psi_{\tau,m}\right)
= 2 \int_{\mathbb{B}_r(p)} \left\vert \nabla \Phi \right\vert_g^{n} \Phi . \left( \Phi - \Psi_{\tau,m} \right) \\
= \int_{\mathbb{B}_r(p)} \left\vert \nabla \Phi \right\vert_g^{n} \left\vert \Phi - \Psi_{\tau,m} \right\vert^2
\end{split}
\end{equation*}
and we obtain
\begin{equation*}
\begin{split}\int_{\mathbb{B}_r(p)} \left\vert \nabla\left( \Phi - \Psi_{\tau,m}\right) - \nabla \ln \phi_1\left( \Phi - \Psi_{\tau,m}\right) \right\vert_g^2 \left\vert \nabla \Phi \right\vert_g^{n-2} \\
= \int_{\mathbb{B}_r(p)} \left\vert \nabla\left( \Phi - \Psi_{\tau,m}\right) \right\vert_g^2 \left\vert \nabla \Phi \right\vert_g^{n-2} - \int_{\mathbb{B}_r(p)} \left\vert \nabla \Phi \right\vert_g^{n} \left\vert \Phi - \Psi_{\tau,m} \right\vert^2
\end{split}
\end{equation*}
so that
\begin{equation*}
\begin{split}
\int_{\mathbb{B}_r(p)} \left\vert \nabla\left( \Phi - \Psi_{\tau,m}\right) - \nabla \ln \phi_1\left( \Phi - \Psi_{\tau,m}\right) \right\vert_g^2 \left\vert \nabla \Phi \right\vert_g^{n-2} = \int_{\mathbb{B}_r(p)} \left\vert \nabla \Psi_{\tau,m} \right\vert_g^{2} \left\vert \nabla \Phi \right\vert_g^{n-2} \\
- \int_{\mathbb{B}_r(p)} \left\vert \nabla \Phi \right\vert_g^n \\
= \int_{\mathbb{B}_r(p)} \left\vert \nabla \Phi \right\vert_g^{n-2} \left\vert \Phi_m \right\vert^2 \left( \left\vert \nabla \tilde{\Psi}_{\tau,m} \right\vert_g^{2} - \left\vert \nabla \tilde{\Phi}_m \right\vert_g^{2} \right)
\end{split}
\end{equation*}
Moreover, we have by similar computations that
\begin{equation*}
\begin{split}
\int_{\mathbb{B}_r(p)} & \left\vert \nabla\left( \tilde{\Phi}_m - \tilde\Psi_{\tau,m}\right) - \nabla \ln \left(\tilde\Psi_{\tau,m}\right)_1\left( \tilde\Phi_m - \tilde\Psi_{\tau,m}\right) \right\vert_g^2 \left(\left\vert \nabla \tilde\Psi_{\tau,m} \right\vert_g^2 + \tau\right)^{\frac{n-2}{2}} \\
= & \int_{\mathbb{B}_r(p)} \left\vert \nabla \tilde\Phi_m \right\vert_g^{2} \left(\left\vert \nabla \tilde\Psi_{\tau,m} \right\vert_g^2 + \tau\right)^{\frac{n-2}{2}} - \int_{\mathbb{B}_r(p)} \left\vert \nabla \tilde\Psi_{\tau,m} \right\vert_g^2 \left(\left\vert \nabla \tilde\Psi_{\tau,m} \right\vert_g^2 + \tau\right)^{\frac{n-2}{2}} \\
\end{split}
\end{equation*}
and we obtain
\begin{equation*}
\begin{split}
A:= & \int_{\mathbb{B}_r(p)} \left\vert \nabla\left( \tilde\Phi_m - \tilde\Psi_{\tau,m}\right) - \nabla \ln \left(\tilde\Psi_{\tau,m}\right)_1\left( \tilde\Phi_m - \tilde\Psi_{\tau,m}\right) \right\vert_g^2 \left( \left\vert \nabla \tilde\Psi_{\tau,m} \right\vert_g^2 + \tau\right)^{\frac{n-2}{2}} \\
& + \int_{\mathbb{B}_r(p)} \left\vert \nabla\left( \Phi - \Psi_{\tau,m}\right) - \nabla \ln \phi_1\left( \Phi - \Psi_{\tau,m}\right) \right\vert_g^2 \left\vert \nabla \Phi \right\vert_g^{n-2} \\
\leq & \int_{\mathbb{B}_r(p)}\left( \left\vert \nabla \tilde{\Psi}_{\tau,m} \right\vert_g^{2} - \left\vert \nabla \tilde{\Phi}_m \right\vert_g^{2} \right)\left( \left\vert \nabla \Phi \right\vert_g^{n-2} \left\vert \Phi_m \right\vert^2- \left(\left\vert \nabla \tilde\Psi_{\tau,m} \right\vert_g^2 + \tau\right)^{\frac{n-2}{2}} \right) \\
\end{split}
\end{equation*}
so that splitting the right-hand side into three terms, we have
\begin{equation*}
\begin{split}
A \leq & \int_{\mathbb{B}_r(p)}\left( \left\vert \nabla \tilde{\Psi}_{\tau,m} \right\vert_g^{2} - \left\vert \nabla \tilde{\Phi}_m \right\vert_g^{2} \right)\left( \left\vert \nabla \tilde\Phi_m \right\vert_g^{n-2} -\left\vert \nabla \tilde\Psi_{\tau,m} \right\vert_g^{n-2} \right) \\
& + \int_{\mathbb{B}_r(p)}\left( \left\vert \nabla \tilde{\Psi}_{\tau,m} \right\vert_g^{2} - \left\vert \nabla \tilde{\Phi}_m \right\vert_g^{2} \right)\left( \left\vert \nabla \Phi \right\vert_g^{n-2} \left\vert \Phi_m \right\vert^2 - \left\vert \nabla \tilde{\Phi}_m \right\vert_g^{n-2} \right) \\
& + \int_{\mathbb{B}_r(p)}\left( \left\vert \nabla \tilde{\Psi}_{\tau,m} \right\vert_g^{2} - \left\vert \nabla \tilde{\Phi}_m \right\vert_g^{2} \right) \left( \left\vert \nabla \tilde\Psi_{\tau,m} \right\vert_g^{n-2} - \left(\left\vert \nabla \tilde\Psi_{\tau,m} \right\vert_g^2 + \tau\right)^{\frac{n-2}{2}} \right) \end{split}
\end{equation*}
and since the first term on the right-hand side is nonpositive, we obtain
\begin{equation*}
\begin{split}
A \leq & \int_{\mathbb{B}_r(p)}\left( \left\vert \nabla \tilde{\Psi}_{\tau,m} \right\vert_g^{2} + \left\vert \nabla \tilde{\Phi}_m \right\vert_g^{2} \right)\left\vert \left\vert \nabla \Phi \right\vert_g^{n-2} \left\vert \Phi_m \right\vert^2 - \left\vert \nabla \tilde{\Phi}_m \right\vert_g^{n-2} \right\vert \\
& + \int_{\mathbb{B}_r(p)}\left( \left\vert \nabla \tilde{\Psi}_{\tau,m} \right\vert_g^{2} + \left\vert \nabla \tilde{\Phi}_m \right\vert_g^{2} \right) \left\vert \left\vert \nabla \tilde\Psi_{\tau,m} \right\vert_g^{n-2} - \left(\left\vert \nabla \tilde\Psi_{\tau,m} \right\vert_g^2 + \tau\right)^{\frac{n-2}{2}} \right\vert .
\end{split}
\end{equation*}
Knowing that
$$ \left\vert \nabla \Phi \right\vert^2 = \left\vert \Phi_m \right\vert^2\left\vert \nabla \tilde\Phi_m \right\vert^2 + \left\vert \nabla\left\vert \Phi_m \right\vert \right\vert^2 + \left\vert \nabla \left(\Phi-\Phi_m\right) \right\vert_g^2 $$
and that $\Phi_m$ and $\nabla \Phi_m$ converge a.e. to $\Phi$ and $\nabla \Phi$, it is clear by the dominated convergence theorem that the first integral converges to $0$ as $m\to +\infty$. Thanks to the a priori estimates of Claim <ref>, up to a subsequence, $\Psi$ is the limit as $(\tau,m) \to (0,+\infty)$ of $\tilde\Psi_{\tau,m}$. We easily obtain that $\Psi$ is also the limit of $\Psi_{\tau,m}$ and that the second integral in the previous inequality converges to $0$ as $(\tau,m)\to (0,+\infty)$. Moreover, if we set
$$ Z = \{ x \in \mathbb{B}_r(p) ; \left\vert \nabla \Psi \right\vert_g^2 + \left\vert \nabla \Phi \right\vert_g^2 = 0 \}, $$
we have
$$ \left\vert \nabla\left( \Phi - \Psi \right) - \nabla \ln \psi_1\left( \Phi - \Psi \right) \right\vert_g^2 + \left\vert \nabla\left( \Phi - \Psi \right) - \nabla \ln \phi_1\left( \Phi - \Psi \right) \right\vert_g^2 = 0 \text{ in } \mathbb{B}_r(p)\setminus Z. $$
so that
\begin{equation*}
\begin{split}
\int_{\mathbb{B}_r(p)} \left\vert \nabla\left( \Phi - \Psi \right) \right\vert_g^2 = \int_{\mathbb{B}_r(p)\setminus Z} \left\vert \nabla\left( \Phi - \Psi \right) \right\vert_g^2 \leq 2 \int_{\mathbb{B}_r(p)\setminus Z} \left( \left\vert \nabla \ln \psi_1 \right\vert^2 + \left\vert \nabla \ln \phi_1 \right\vert^2 \right) \left\vert \Phi-\Psi \right\vert^2 \\
\leq 2 C \delta_{n,g}^{\frac{2}{n}} \int_{\mathbb{B}_r(p)} \frac{\left\vert \Phi-\Psi \right\vert^2}{\left(r-\left\vert x \right\vert\right)^2} \leq \tilde{C} \delta_{n,g}^{\frac{2}{n}} \int_{\mathbb{B}_r(p)} \left\vert \nabla\left( \Phi - \Psi \right) \right\vert_g^2
\end{split}
\end{equation*}
where the second inequality comes from classical weak estimates on the gradient when it satisfies an $L^{\infty}$ $\eps$-regularity property, and the third one comes from a classical Hardy inequality. Taking $\delta_{n,g}$ small enough gives $\Phi = \Psi$.
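For completeness, the Hardy inequality invoked here is the classical boundary-distance form; on the ball, $r - \left\vert x \right\vert$ is the distance to the boundary, and the inequality below holds (with the standard constant $4$ on convex Euclidean domains) for maps vanishing on $\partial \mathbb{B}_r(p)$, which is the relevant case since $\Psi$ inherits the boundary values of $\Phi$:

```latex
\int_{\mathbb{B}_r(p)} \frac{\left\vert v \right\vert^2}{\left( r - \left\vert x \right\vert \right)^2}\, dx
\;\leq\; 4 \int_{\mathbb{B}_r(p)} \left\vert \nabla v \right\vert^2 dx ,
\qquad v \in W^{1,2}_0\left( \mathbb{B}_r(p) \right).
```

On $(M,g)$ the same bound holds up to a constant depending on $g$, which is absorbed in $\tilde{C}$.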
[1]
B. Ammann, E. Humbert, The second Yamabe invariant, J. Funct. Anal., 235, 2006, no.2, 377–412
[2]
B. Colbois, A. El Soufi,
Extremal eigenvalues of the Laplacian in a conformal class of metrics: the `conformal spectrum',
Ann. Global Anal. Geom. 24 (2003), no.4, 337–349.
[3]
B. Colbois, J. Dodziuk, Riemannian metrics with large $\lambda_1$, Proc. Amer. Math. Soc, 122, 1994, no.3, 905–906
[4]
Y. Colin de Verdière, Sur la multiplicité de la première valeur propre non nulle du laplacien, Comment. Math. Helv., 61, 1986, no.2, 254–270
[5]
P. Courilleau, A compactness result for $p$-harmonic maps, Differential Integral Equations, 14, 2001, no.1, 75–84
[6]
A. El Soufi, S. Ilias, Le volume conforme et ses applications d'après Li et Yau, Sém. Théorie Spectrale et Géométrie, Institut Fourier, 1983–1984, No. VII, (1984).
[7]
A. El Soufi, S. Ilias, Immersions minimales, première valeur propre du laplacien et volume conforme, Mathematische Annalen, 1986, 275, 257-267
[8]
A. El Soufi, S. Ilias, Riemannian manifolds admitting isometric immersions by their first eigenfunctions, Pacific J. Math. 195, 2000, 91–99.
[9]
A. El Soufi and S. Ilias, Extremal metrics for the first eigenvalue of the Laplacian in a conformal class, Proc. Amer. Math. Soc. 131, 2003, 1611-1618.
[10]
A. El Soufi, S. Ilias, Laplacian eigenvalue functionals and metric deformations on compact manifolds, Journal of Geometry and Physics, 58, Issue 1, January 2008, 89-104
[11]
A. Fraser, R. Schoen, Sharp eigenvalue bounds and minimal surfaces in the ball, Invent. Math. 203, 2016, 823–890.
[12]
A. Fraser, R. Schoen, Some results on higher eigenvalue optimization, Calc. Var. Partial Differential Equations, 59, 2020, 5, Paper No. 151, 22
[13]
L. Friedlander, N. Nadirashvili, A differential invariant related to the first eigenvalue of
the Laplacian, Internat. Math. Res. Notices, 1999, 17, 939–952
[14]
M.J. Gursky, S. Pérez-Ayala, Variational properties of the second eigenvalue of the conformal Laplacian, J. Funct. Anal., 282, 2022, no.8, Paper No. 109371, 60
[15]
Q. Han, F. Lin, Elliptic partial differential equations, Courant Lecture Notes in Mathematics, 1, New York University, Courant Institute of Mathematical Sciences, New York; American Mathematical Society, Providence, RI, 1997, x+144 pp.
[16]
A. Hassannezhad, Conformal upper bounds for the eigenvalues of the Laplacian and Steklov problem, J. Funct. Anal., 261, 2011, no 12, 3419–3436
[17]
J. Hersch, Quatre propriétés isopérimétriques de
membranes sphériques homogènes, C.R. Acad. Sci. Paris Sér. A-B 270, 1970, A1645–A1648.
[18]
M. Karpukhin,
Index of minimal spheres and isoperimetric eigenvalue inequalities, Invent. Math., 223, 2021, no. 1, 335–377
[19]
M. Karpukhin, V. Medvedev, On the Friedlander-Nadirashvili invariants of surfaces, Math. Ann., 379, 2021, 3-4, 1767–1805,
[20]
M. Karpukhin, A. Métras, Laplace and Steklov extremal metrics via $n$-harmonic maps, J. Geom. Anal., 32, 2022, no.5, Paper No. 154, 36
[21]
M.A. Karpukhin, N. Nadirashvili, A. Penskoi, I. Polterovich, An isoperimetric inequality for Laplace eigenvalues on the sphere, arXiv:1706.05713. To appear in J. Differential Geom.
[22]
M.A. Karpukhin, N. Nadirashvili, A. Penskoi, I. Polterovich, Conformally maximal metrics for Laplace eigenvalues on surfaces, arXiv:2003.02871v2
[23]
M.A. Karpukhin, D. L. Stern, Min-max harmonic maps and a new characterization of conformal eigenvalues, arXiv preprint (2020), arXiv:2004.04086, 69 pp
[24]
M.A. Karpukhin, D. L. Stern, Existence of harmonic maps and eigenvalue optimization in higher dimensions, arXiv preprint (2022), arXiv:2207.13635, 60 pp
[25]
G. Kokarev, Variational aspects of Laplace eigenvalues on Riemannian surfaces, Adv. Math. 258, 2014, 191–239.
[26]
H.N. Kim, Maximization of the second Laplacian eigenvalue on the sphere, Proc. Amer. Math. Soc., 150, 2022, no.8, 3501–3512
[27]
N. Korevaar, Upper bounds for eigenvalues of conformal metrics, J. Differential Geom., 37, 1993, 1, 73–93
[28]
R. Hardt, F.-H. Lin, Mappings minimizing the $L^p$ norm of the gradient, Comm. Pure Appl. Math., 40, 1987, no.5, 555–588
[29]
P. Li, S.T. Yau, A new conformal invariant and its applications to the Willmore conjecture and the first eigenvalue of compact surfaces, Invent. Math. 69, 1982, 269–291.
[30]
H. Matthiesen, A. Siffert,
Handle attachment and the normalized first eigenvalue, preprint.
[31]
L. Mou, P. Yang, Regularity for $n$-harmonic maps, J. Geom. Anal., 6, 1996, no.1, 91–112
[32]
N. Nadirashvili, Berger's isoperimetric problem and minimal immersions of surfaces, Geom. Func. Anal. 6, 1996, 877-897.
[33]
N. Nadirashvili, Isoperimetric inequality for the second eigenvalue of a sphere, J. Differential Geom. 61, 2002, 335–340.
[34]
R. Petrides, Maximization of the second conformal eigenvalue of spheres, Proc. Amer. Math. Soc., 142, 2014, 7 , 2385–2394
[35]
R. Petrides, On a rigidity result for the first conformal eigenvalue of the
Laplacian, J. Spectr. Theory, 5, 2015, no.1, 227–234
[36]
R. Petrides, Existence and regularity of maximal metrics for the first Laplace eigenvalue on surfaces, Geom. Funct. Anal. 24, 2014, 1336–1376.
[37]
R. Petrides, On the existence of metrics which maximize Laplace eigenvalues on surfaces, Int. Math. Res. Not. , 14, 2018, 4261–4355.
[38]
R. Petrides, A variational method for functionals depending on eigenvalues, submitted
[39]
R. Petrides, D. Tewodrose, Subdifferentials and critical points of eigenvalue functionals
[40]
S. Sarsa, Note on an elementary inequality and its application to the
regularity of $p$-harmonic functions, Ann. Fenn. Math., 47, 2022, no.1, 139–153
[41]
P. Strzelecki, Regularity of $p$-harmonic maps from the $p$-dimensional ball into a sphere, Manuscripta Math., 82, 1994, no.3-4, 407–415
[42]
K. Uhlenbeck, Regularity for a class of non-linear elliptic systems, Acta Math., 138, 1977, no.3-4, 219–240
[43]
P.C. Yang, S.T. Yau, Eigenvalues of the Laplacian of compact Riemannian surfaces and minimal submanifolds, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 7, 1980, no.1, 53–63.
# Witnessing the non-objectivity of an unknown quantum dynamics
Davide Poderini, International Institute of Physics, Federal University of Rio Grande do Norte, 59078-970, Natal, Brazil
Giovanni Rodari, Dipartimento di Fisica - Sapienza Università di Roma, P.le Aldo Moro 5, I-00185 Roma, Italy
George Moreno, International Institute of Physics, Federal University of Rio Grande do Norte, 59078-970, Natal, Brazil; Departamento de Computação, Universidade Federal Rural de Pernambuco, 52171-900, Recife, Pernambuco, Brazil
Emanuele Polino, Dipartimento di Fisica - Sapienza Università di Roma, P.le Aldo Moro 5, I-00185 Roma, Italy
Ranieri Nery, International Institute of Physics, Federal University of Rio Grande do Norte, 59078-970, Natal, Brazil
Alessia Suprano, Dipartimento di Fisica - Sapienza Università di Roma, P.le Aldo Moro 5, I-00185 Roma, Italy
Cristhiano Duarte, International Institute of Physics, Federal University of Rio Grande do Norte, 59078-970, Natal, Brazil; School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT, United Kingdom
Fabio Sciarrino <EMAIL_ADDRESS>, Dipartimento di Fisica - Sapienza Università di Roma, P.le Aldo Moro 5, I-00185 Roma, Italy
Rafael Chaves <EMAIL_ADDRESS>, International Institute of Physics, Federal University of Rio Grande do Norte, 59078-970, Natal, Brazil; School of Science and Technology, Federal University of Rio Grande do Norte, Natal, Brazil
###### Abstract
Quantum Darwinism offers an explanation for the emergence of classical
objective features – those we are used to at macroscopic scales – from quantum
properties at the microscopic level. The interaction of a quantum system with
its surroundings redundantly proliferates information to many parts of the
environment, making it accessible and objective to different observers. But
given that one cannot probe the quantum system directly, only its environment,
how can one witness whether an unknown quantum property can be deemed
objective or not? Here we propose a probabilistic framework to analyze this question and
show that objectivity implies a Bell-like inequality. Among several other
results, we show quantum violations of this inequality, yielding a device-independent
proof of the non-objectivity of quantum correlations and giving rise to the
phenomenon we name “collective hallucination”: observers probing distinct
parts of the environment can agree upon their measurement outcome of a given
observable, yet that outcome can be totally uncorrelated with the property of
the quantum system that the fixed observable should be probing. We also implement
an appealing photonic experiment where the temporal degree of freedom of
photons is the quantum system of interest, while their polarization acts as
the environment. Employing a fully black-box approach, we achieve the
violation of a Bell inequality, thus certifying the non-objectivity of the
underlying quantum dynamics in a fully device-independent framework.
## I Introduction
Understanding how the quantum information encoded into a microscopic system
leads to classical features, those observed at the macroscopic scales, remains
a central question in the quantum foundations. In the early days of quantum
theory, the comprehension of the quantum-classical boundary relied on arguably
vague notions such as wave-particle duality [1], complementarity [2, 3] or
even that of a human observer [4]. Nowadays, the tools and concepts of quantum
information offer a more well-grounded framework to address those questions.
The study of decoherence [5, 6], for instance, shows that quantum properties,
such as coherence and entanglement, are degraded due to the interaction of a
quantum system with its surrounding environment, a process that becomes more
noticeable the larger the quantum system is [7], beautifully explaining some
crucial aspects of the quantum to classical transition [8, 9, 10]. Simply put,
decoherence selects the so-called pointer states [11]—natural candidates for
the macroscopically observed classical states obtained after a
measurement—while coherent superpositions of those are suppressed.
Decoherence, however, does not solve by itself the problem of how information
contained in the pointer states becomes available to different measurement
apparatuses, nor how this is turned into objective information, that is,
independent of observers.
That spreading of objective information is the central topic that gave rise to
the idea of quantum Darwinism [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
23, 24, 25, 26]. In quantum Darwinism, the environment—the same entity
responsible for decoherence— is also seen as a special carrier of information
about the quantum system, insofar as redundantly propagating the information
of the naturally selected pointer states to many external observers.
Crucially, the emergence of a classical notion of objectivity is a generic
feature of quantum dynamics [22]. Irrespective of the specific modelling for
the interaction with the environment, whenever the information about the
pointer states is accessible to sufficiently many observers, the evolution
will gradually resemble one where a specific observable is measured by all of
them.
But what if other measurements, not necessarily those related to a pointer
observable, are performed? In particular, if the system-environment dynamics
is not known, how can one test for objectivity or, rather, the absence of it?
Those are precisely the questions we address in this work.
Building on the results of [22], we propose a probabilistic framework to
address the question of an emergent notion of objectivity. In this
probabilistic setting, we associate an observer with each part of the
environment (see Fig. 1), and we show that the ability of each observer to
encode and retrieve classical information about a quantum system translates
into the emergence of an objective value for a measurement outcome.
Objectivity here ought to be understood in the sense that it reflects a sort
of common knowledge among the observers—_a property of a quantum system is
objective when it is simultaneously agreed upon by all agents_. From that,
considering a particular case of two observers, we show that the Clauser-
Horne-Shimony-Holt (CHSH) inequality [27], a paradigmatic Bell inequality in
the study of quantum non-locality [28], can be also be turned into a witness
of non-objectivity.
More precisely, in our probabilistic setup, the violation of a CHSH inequality
implies the phenomenon we name “collective hallucination”. This collective
hallucination means that several observers can mutually agree upon their
outcome for the measurement of a given observable; still, that outcome can be
completely uncorrelated from the property of the quantum system it should be
related to. We also prove that if objectivity is demanded for all measurements
performed by the observers in the CHSH scenario, then true objectivity follows,
reflecting not only agreement between observers but also correlation with
properties of the quantum system under scrutiny. Finally, we provide a proof-
of-principle experimental realization of our framework. Employing birefringent
plates placed inside a Sagnac interferometer, the temporal degree of freedom
of photons gets entangled with their polarization, the former being the quantum
system of interest while the latter acts as its environment within the quantum
Darwinism scenario.
## II Emergence of Objectivity in Quantum Darwinism
In the following, we review the basic notions of quantum Darwinism. We give
special emphasis to the standpoint of [22], where the authors prove that a
well-defined notion of objectivity is a generic property of any quantum
dynamics. We then move forward and prove our first result, a generalization of
the findings of [22] in a general probabilistic setting, that is, not
necessarily relying on (but certainly including) quantum theory.
We are interested in a general scenario where $n+1$ quantum systems interact
arbitrarily, being described at a certain instant of time by a density
operator $\rho_{AB_{1},\dots,B_{n}}$—at this level of generality, it is
irrelevant whether we refer to a closed system or a part of a larger system.
The sub-system $A$ describes the quantum system of interest, and $B_{i}$
stands for the different fractions of the environment. Each fraction $B_{i}$
interacts with $A$ and also possibly among themselves in such a way that the
quantum information originally contained in $A$ can be redundantly spread over
the joint system. In a quantum description of the process, this information
spreading is represented by a completely positive and trace preserving (CPTP)
map $\Lambda:\mathcal{D}(A)\rightarrow\mathcal{D}(B_{1}\otimes\dots\otimes
B_{n})$ where $\mathcal{D}(A)$ is the set of density operators on the Hilbert
space associated with system $A$ (similarly for the $B_{i}$’s). The scenario is
illustrated in Fig. 1.
Figure 1: Quantum Darwinism scenario. The figure depicts the general scenario
considered in quantum Darwinism: one central system $A$ interacts with the
environment described by $n$ systems $B_{1},\ldots,B_{n}$. As a result of this
interaction, part of the information contained in $A$ is transferred to the
environment and replicated in each system $B_{i}$.
Within this context, Ref. [22] makes a distinction between two notions of
objectivity, that of observables and that of outcomes. The former states that
the observers should extract information about the same observable of the
system by probing parts of the environment, which would be associated with the
"pointer basis" selected by the system-environment interactions. The latter
considers that not only should the observable be the same, but also the value
of the measurement outcome should be agreed upon by the observers.
Regarding the objectivity of observables, it follows that, quite generally,
the map $\Lambda$ can be well approximated by a measure-and-prepare map, such
that the reduced map for most subsets of observers is given by
$\mathcal{E}_{\mathcal{B}}(\rho_{A})=\sum_{k}\operatorname{tr}[\rho_{A}\,F_{k}]\,\sigma_{\mathcal{B}}^{k},$
(1)
where $\mathcal{B}$ is the subset of observers (or degrees of freedom of the
environment being observed), $\{F_{k}\}_{k}$ is a POVM which _should be the
same_ for all subsets $\mathcal{B}$ of the same size,
$\sigma_{\mathcal{B}}^{k}$ is the (joint) quantum state for the observers in
$\mathcal{B}$, prepared according to the outcome $k$ of $F_{k}$ and
$\rho_{A}=\mathrm{Tr}_{B_{1}\dots,B_{n}}(\rho_{A,B_{1},\dots,B_{n}})$. More
precisely, the results in [22] $(i)$ provide an upper bound for how close a
family of measure-and-prepare maps sharing the same POVM are to the true
reduced evolution $\mathcal{E}_{\mathcal{B}}$ of smaller portions
$\mathcal{B}$’s and $(ii)$ show that for a suitable fraction of the observers
and for a large enough number of total observers, the bound gets closer to
zero, meaning that all observers would agree they are obtaining information
about the same property of $\rho_{A}$, determined by the observable described
via the POVM $\{F_{k}\}_{k}$.
Regarding the objectivity of outcomes, Ref. [22] introduces the guessing
probability of the outcome $k$ obtained with $F_{k}$ for all observers in the
subset $\mathcal{B}$—tacitly assuming that the dynamics for each environment’s
fraction of interest has exactly the form of Eq. (1). Consider a distribution
$\{p_{i}\}$ and a set of states $\{\sigma_{i}\}$ for
$i\in\{1,\ldots,m\}$, and let $p_{\mathrm{guess}}(p_{i},\sigma_{i})$ be the
guessing probability defined as:
$p_{\mathrm{guess}}(p_{i},\sigma_{i})=\max_{\{E_{i}\}}\sum_{i}p_{i}\operatorname{tr}(E_{i}\sigma_{i}),$
(2)
representing the capability of the ensemble of states
$\{\sigma_{i}\}_{i\in[m]}$ to properly encode $m$ classical states
distributed according to $\{p_{i}\}_{i\in[m]}$. It follows that if there
exists a $\delta$ with $0<\delta<1$ such that for every observer $B_{k}$, with
$k\in\{1,\ldots,n\}$,
$\min_{\rho\in\mathcal{D}(A)}\left\{p_{\mathrm{guess}}\left(\operatorname{tr}(F_{i}\rho),\sigma_{B_{k}}^{i}\right)\right\}\geq
1-\delta,$ (3)
then there exists some POVM $\{E^{k}_{i}\}$ for each $B_{i}$ such that
$\min_{\rho\in\mathcal{D}(A)}\sum_{i}\operatorname{tr}(F_{i}\rho)\operatorname{tr}\left[\left(\bigotimes_{k=1}^{n}E^{k}_{i}\right)\sigma^{i}_{B_{1},\ldots,B_{n}}\right]\geq
1-6n\delta^{1/4},$ (4)
where $\{F_{i}\}$ is an appropriate POVM and $\sigma^{i}_{B_{k}}$ is the
density matrix relative to the party $B_{k}$ only. Qualitatively, Eq. (3)
combined with Eq. (4) show that if each $B_{i}$ is capable of properly
encoding the outcomes of a measurement on the $A$ system, then one can assign
an objective value to it, shared by all the $B_{i}$, in the sense that
experimenters probing each single $B_{i}$ will all get the same value, with
high probability.
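For the two-hypothesis case ($m=2$), the maximisation in Eq. (2) admits the closed-form Helstrom solution, which a few lines of NumPy can evaluate. The states and priors below are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def guess_prob_two_states(p0, rho0, p1, rho1):
    """Optimal guessing probability (Eq. (2)) for m = 2 hypotheses.

    For two states the maximisation over POVMs has the closed-form
    Helstrom solution p_guess = (1 + ||p0*rho0 - p1*rho1||_1) / 2.
    """
    gamma = p0 * rho0 - p1 * rho1          # Helstrom matrix
    eigs = np.linalg.eigvalsh(gamma)       # Hermitian, so real spectrum
    trace_norm = np.abs(eigs).sum()        # trace norm = sum of |eigenvalues|
    return 0.5 * (1.0 + trace_norm)

# Perfectly distinguishable encodings (orthogonal pure states) ...
ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])
rho0, rho1 = ket0 @ ket0.T, ket1 @ ket1.T
print(guess_prob_two_states(0.5, rho0, 0.5, rho1))   # -> 1.0

# ... versus identical encodings, where guessing is pure chance.
print(guess_prob_two_states(0.5, rho0, 0.5, rho0))   # -> 0.5
```

The two extremes bracket the $\delta$-dependent regime of Eq. (3): good encodings push $p_{\mathrm{guess}}$ towards $1-\delta$ for small $\delta$.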
Our first goal is to extend this notion of objectivity beyond the machinery
of quantum-mechanical algebra and instead rely on a purely probabilistic
approach. (A possible route to generalizing quantum Darwinism to generalized
probabilistic theories, GPTs, was proposed in [57]; that work, however, is
concerned with defining what an idealized quantum Darwinism process would look
like in GPTs, more precisely a general version of a fan-out gate, and does not
consider the noisy version of such a process.) There are two main reasons for
this choice. The first is
that properties often seen as inherently quantum mechanical are, in fact, also
features of generalized probability theories, including monogamy of
correlations [30] and the impossibility of broadcasting information [31], just
to cite a few. Understanding informational principles in such generalized
settings often leads to deeper insights about quantum theory itself [32, 33].
The other main reason for our approach is of practical relevance and relates
to what is often called the device-independent approach to quantum information
[34], the paradigmatic examples of which are Bell inequality violations,
non-contextuality inequality violations and their use in cryptographic
protocols [35, 36, 37]. In the device-independent setting, one can reach non-
trivial conclusions about the quantum states being prepared or the
measurements being performed by simply relying on the classical information
obtained by measurement outcomes—without resorting to a detailed description
of the experimental apparatus. In the particular case of quantum Darwinism, as
we will see, it will allow us to not only define the concept of objectivity
irrespective of any underlying dynamics or measurement setups, but also derive
testable constraints on whether the statistics observed in the experiment can
be deemed objective or not.
In our proposed setting, each agent $i$ has access to a portion $B_{i}$ of the
environment surrounding $A$. Additionally, each agent $i$ is free to
independently choose to measure one out of many possible observables
$x_{i}\in\{x_{i}^{1},x_{i}^{2},...,x_{i}^{m_{i}}\}$, obtaining the
corresponding outcome $b_{i}\in\{b_{i}^{1},b_{i}^{2},...,b_{i}^{o_{i}}\}$.
If we focus only on the aggregated statistics involved in this process, the
scenario is thus described by a joint probability distribution
$p(b_{1},\dots,b_{n}|x_{1},\dots,x_{n})=\sum_{a}p(a,b_{1},\dots,b_{n}|x_{1},\dots,x_{n}),$
(5)
where $a$ is the outcome one would observe had a direct measurement of the
system $A$ been performed, that measurement corresponding to the pointer-state
observable (assuming it exists) defined by a given dynamics. Each
$x_{i}$ represents the random variable parametrizing which observable the
$i$-th agent, having access to the portion $B_{i}$ of the environment,
measures in a given run of the experiment.
According to the Born rule, a quantum description of the same scenario is
given by
$\displaystyle p(a,b_{1},\dots,b_{n}|x_{1},\dots,x_{n})=$ (6)
$\displaystyle\mathrm{Tr}[(F_{a}\otimes E^{1,x_{1}}_{b_{1}}\dots\otimes
E^{n,x_{n}}_{b_{n}})\,\rho_{A,B_{1},\dots,B_{n}}],$
where $\rho_{A,B_{1},\dots,B_{n}}$ is the density operator representing the
quantum state shared by all the environments $B_{i}$’s plus the central system
$A$, and where $\{E_{b_{i}}^{i,x_{i}}\}_{b_{i}}$ is the POVM representing a
possible choice of measurement that the $i$-th agent can realize on her
fraction of the environment. It is exactly Eq. (6) that motivates a general
probabilistic description where the joint distribution
$p(a,b_{1},\dots,b_{n}|x_{1},\dots,x_{n})$ has to fulfil three natural
assumptions.
The first, called _no-superdeterminism_, states that
$p(a|x_{1},\dots,x_{n})=p(a),$ (7)
for every $i\in[n]$ and for every
$x_{i}\in\{x_{i}^{1},x_{i}^{2},...,x_{i}^{m_{i}}\}$. In other words, the
choices of which observable to measure can be made by each agent independently
of how the $A$ system has been prepared or which are the pointer observables
defined by a given dynamics. This is reminiscent of the measurement
independence (also called the “free-will” assumption) in Bell’s theorem [38, 39].
The second assumption, named _no-signalling_ , states that
$p(b_{i}|a,x_{1},\dots,x_{n})=p(b_{i}|a,x_{i}),$ (8)
for all $i\in[n]$ and for all
$b_{i}\in\{b_{i}^{1},b_{i}^{2},...,b_{i}^{o_{i}}\}$. This is analogous to
the no-signalling condition in Bell’s theorem and implies that the choice of
observables made by a given observer should not have any direct causal effect
on the statistics of all other observers.
Our final assumption, which we name _$\delta$ -objectivity_, is structured as
follows. Let $\delta>0$ represent an error parameter. For each agent $i$,
denote $x^{*}_{i}$ their choice of measurement corresponding to the case where
their outcome should be correlated with the outcome $a$. In a quantum
description, that would precisely correspond to the pointer-state observable
on system $A$, that is, corresponding to a POVM $\{E_{k}\}_{k}$ reproducing
as reliably as possible the observable $\{F_{k}\}_{k}$ emerging in the
effective measure-and-prepare dynamics in Eq. (1). The outcome $b_{i}$ is
$\delta$-objective if, for each observer, we have that
$\sum_{a}p(a)\,p(b_{i}=a|a,x_{i}^{*})\geq 1-\delta.$ (9)
That this assumption captures a well-defined notion of objectivity
will be justified by our first result below. For now, notice that as there
is always a POVM attaining the optimal value for the guessing probability, we
can create a parallel involving Eq. (6), the equation defining
$p_{\mathrm{guess}}$, and the quantity in Eq. (4), as shown below:
$\displaystyle
p_{\mathrm{guess}}(\operatorname{tr}(F_{i}\rho_{A}),\sigma_{i})=\max_{\{E_{i}\}}\sum_{i}\operatorname{tr}(F_{i}\rho_{A})\operatorname{tr}(E_{i}\sigma_{i})$
$\displaystyle\longleftrightarrow\sum_{a}p(a)p(b_{i}=a|a,x^{*})$ (10)
$\displaystyle\min_{\rho_{A}}\sum_{i}\operatorname{tr}(F_{i}\rho_{A})\operatorname{tr}\left(\bigotimes_{k}E^{k}_{i}\sigma_{i}^{1,\ldots,n}\right)$
$\displaystyle\longleftrightarrow\sum_{a}p(a)p(b_{1}=b_{2}=\cdots=b_{n}=a|a,x_{1}^{*},\ldots,x_{n}^{*})$
(11)
With that, we can state our first result, proven in Appendix VIII.1,
justifying our $\delta$-objectivity assumption.
###### Result 1.
If there exists a positive $\delta\leq 1$ such that for every
$k\in\{1,\ldots,n\}$:
$\sum_{a}p(a)p(b_{k}=a|a,x_{k}^{*})\geq 1-\delta,$ (12)
then we have
$\displaystyle\sum_{a}p(a)p(b_{1}=b_{2}=\cdots=b_{n}=a|a,x_{1}^{*},\ldots,x_{n}^{*})\geq 1-n\delta.$
Remark: Result 1 says that a result analogous to Eq. (4) continues to hold,
even in a fully probabilistic setting. Put another way, the inequality
$\sum_{a}p(a)p(b_{1}=b_{2}=...=b_{n}=a|x_{1}^{*},...,x_{n}^{*})\geq 1-n\delta$
expresses the possibility of assigning an objective nature to the outcome
obtained by each observer. Recall that objectivity here means that regardless
of the outcome obtained by each agent, that outcome is agreed upon among all
the $B_{i}$’s, that is,
$p(b_{1}=b_{2}=\cdots=b_{n}|x_{1}^{*},\ldots,x_{n}^{*})=1$. What is more, it
also reflects a property related to an observable described by a POVM
$\\{F_{k}\\}_{k}$ acting on the subsystem $A$. In particular, when there is a
perfect local agreement (i.e., when $\delta=0$, implying
$\sum_{a}p(b_{k}=a|x_{k}^{*})=1$ for every agent), Result 1 guarantees that
$\sum_{a}p(b_{1}=b_{2}=\cdots=b_{n}=a|x_{1}^{*},\ldots,x_{n}^{*})=1$. One can
read this implication as saying that perfect local agreement implies perfect
global agreement.
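The bound in Result 1 behaves like a union bound over the $n$ observers. A quick Monte-Carlo sanity check under an assumed independent-noise model (an illustration of ours, not the general setting of the theorem, which makes no independence assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
n, delta, shots = 5, 0.02, 200_000

# Toy model (an assumption, not the general setting of Result 1):
# each observer reads the true outcome a, but with probability delta
# their bit is flipped, so per-observer agreement is exactly 1 - delta.
a = rng.integers(0, 2, size=shots)
agree_all = np.ones(shots, dtype=bool)
for _ in range(n):
    flip = rng.random(shots) < delta
    b = np.where(flip, 1 - a, a)           # erroneous readout flips the bit
    agree_all &= (b == a)

p_global = agree_all.mean()                # estimate of global agreement
print(p_global, 1 - n * delta)             # empirical value vs. the bound
assert p_global >= 1 - n * delta - 0.01    # Result 1 bound (sampling slack)
```

Here the exact global agreement is $(1-\delta)^{n}$, which Bernoulli's inequality bounds below by $1-n\delta$, matching the theorem's scaling.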
## III Bell-like inequalities witnessing non-objectivity
The conditions of no-superdeterminism, no-signalling and $\delta$-objectivity,
eqs. (7), (8) and (9) respectively, clearly define a notion of objectivity of
outcomes in the probabilistic setting. Nevertheless, note that those
conditions involve the outcome $a$, which by assumption is not directly
observable, as any information about it can only be obtained indirectly, by
correlations of it with the outcomes $b_{i}$’s. Thus, similarly to Bell’s
theorem, $a$ plays the role of a latent or hidden variable. However, the
conjunction of assumptions (7), (8) and (9) does imply testable constraints for
the observed correlations among the outcomes $b_{i}$. We approach those
testable constraints in this section.
To do so, we consider the particular case of only two observers ($n=2$).
Each observer has two possible measurements available to them. After each
measurement, one of two possible values is output. Put another
way, $x_{1},x_{2},b_{1},b_{2}\in\{0,1\}$. Moreover, we specify $x_{1}^{*}$
as $x_{1}=0$ and $x_{2}^{*}$ as $x_{2}=0$—recall that each $x_{i}^{\ast}$
corresponds to the special case where the outcome should be correlated with
the outcome $a$. We can then state our second result.
###### Result 2.
Any observed correlation $p(b_{1},b_{2}|x_{1},x_{2})$ compatible with the
conditions (7), (8) and (9), fulfills the inequality
$\mathrm{CHSH}_{\delta,\epsilon}=\left<B^{0}_{1}B^{0}_{2}\right>+\left<B^{0}_{1}B^{1}_{2}\right>-\left<B^{1}_{1}B^{0}_{2}\right>+\left<B^{1}_{1}B^{1}_{2}\right>\leq
2+4\delta-2\epsilon,$ (13)
with $\left<B^{0}_{1}B^{0}_{2}\right>=1-2\epsilon$ where
$\left<B^{x_{1}}_{1}B^{x_{2}}_{2}\right>=\sum(-1)^{b_{1}+b_{2}}p(b_{1},b_{2}|x_{1},x_{2})$
is the expectation value of the observables corresponding to inputs $x_{1}$
and $x_{2}$.
Notice that Eq. (13) is a relaxed version of the CHSH inequality [27] with one
additional constraint. In Eq. (13) we impose that
$\left<B^{0}_{1}B^{0}_{2}\right>=1-2\epsilon$, meaning that both observers are
in agreement (up to a discordance factor of $2\epsilon$) whenever they decide
to measure the special inputs $x_{1}^{*}=0$ and $x_{2}^{*}=0$, respectively.
Notice that our Result 1 implies that $\delta\geq\epsilon/2$ while
$\delta=\epsilon/2$ corresponds to the “darwinistic” case where the
disagreement $\delta$ between the observers and the latent observable follows
directly from the observable disagreement $\epsilon$ between the observers
themselves. Thus, any observed value $\operatorname{CHSH}_{\delta,\epsilon}>2$
implies that $\delta>\epsilon/2$, witnessing non-objectivity even in the case
of non-perfect agreement between the observers ($\epsilon>0$).
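As a sketch of how the witness of Eq. (13) could be evaluated from an experimental frequency table $p(b_{1},b_{2}|x_{1},x_{2})$, consider the helper below. The deterministic strategy used to exercise it is a hypothetical example of ours:

```python
import numpy as np

def correlator(p, x1, x2):
    """<B1^{x1} B2^{x2}> = sum_{b1,b2} (-1)^{b1+b2} p(b1,b2|x1,x2).

    `p[x1][x2]` is a 2x2 array with entries p(b1, b2 | x1, x2).
    """
    signs = np.array([[1, -1], [-1, 1]])   # (-1)^{b1+b2} sign pattern
    return float((signs * p[x1][x2]).sum())

def chsh_witness(p):
    """Left-hand side of Eq. (13): B00 + B01 - B10 + B11."""
    return (correlator(p, 0, 0) + correlator(p, 0, 1)
            - correlator(p, 1, 0) + correlator(p, 1, 1))

# Deterministic classical strategy: both observers always output 0.
det = np.zeros((2, 2, 2, 2))               # indices: x1, x2, b1, b2
det[:, :, 0, 0] = 1.0

eps = (1 - correlator(det, 0, 0)) / 2      # agreement defect, here 0
print(chsh_witness(det))                   # -> 2.0
```

This strategy saturates the classical bound $2+4\delta-2\epsilon$ at $\delta=\epsilon=0$; any experimental table yielding a larger witness value signals $\delta>\epsilon/2$.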
Considering the case where $\delta=0$ and $\epsilon=0$, our next result shows
that quantum theory can violate the $\operatorname{CHSH}_{0,0}$ inequality
while respecting $\left<B^{0}_{1}B^{0}_{2}\right>=1$. That is, the observers
agree among themselves, but their outcomes do not reflect the property of the
system $A$ with which they are assumed to be fully correlated—a phenomenon we
call “collective hallucination”.
###### Result 3.
Quantum theory allows a violation of $\operatorname{CHSH}_{0,0}$ up to the
value $5/2$ while respecting $\left<B^{0}_{1}B^{0}_{2}\right>=1$. In
particular, the maximal violation allows one to self-test a maximally
entangled two-qubit state, which at the same time certifies one bit of
randomness and also implies a monogamy relation. That is, even though the
observers agree among themselves, the outcome of each one of them is
completely uncorrelated from system $A$.
In the following, we will discuss in more depth the consequences of these
results; a detailed proof is presented in Appendix VIII.3. Notice
that the violation $\operatorname{CHSH}_{0,0}=5/2$ is achieved by considering
the state
$\left|\psi\right\rangle_{B_{1}B_{2}}=\frac{1}{\sqrt{2}}(\left|00\right\rangle+\left|11\right\rangle),$
(14)
and choosing $B_{j}^{0}=\sigma_{z}$ for $j=1,2$ and
$\displaystyle B_{1}^{1}$
$\displaystyle=-\frac{\sigma_{z}}{2}-\frac{\sqrt{3}}{2}\sigma_{x}$ (15)
$\displaystyle B_{2}^{1}$
$\displaystyle=\frac{\sigma_{z}}{2}-\frac{\sqrt{3}}{2}\sigma_{x}\;.$ (16)
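The quantum realization of Eqs. (14)-(16) can be checked directly with the Born rule; a minimal NumPy verification:

```python
import numpy as np

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

# Maximally entangled state of Eq. (14) and observables of Eqs. (15)-(16).
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)
B1 = {0: sz, 1: -sz / 2 - np.sqrt(3) / 2 * sx}
B2 = {0: sz, 1: sz / 2 - np.sqrt(3) / 2 * sx}

def corr(x1, x2):
    """Born-rule correlator <B1^{x1} (x) B2^{x2}> (cf. Eq. (6))."""
    return float(np.trace(rho @ np.kron(B1[x1], B2[x2])).real)

# B00 + B01 - B10 + B11, the witness of Eq. (13).
chsh = corr(0, 0) + corr(0, 1) - corr(1, 0) + corr(1, 1)
print(round(corr(0, 0), 12), round(chsh, 12))   # -> 1.0 2.5
```

The individual correlators come out as $1$, $1/2$, $-1/2$ and $1/2$, reproducing both the perfect-agreement condition and $\operatorname{CHSH}_{0,0}=5/2$ of Result 3.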
As detailed in Appendix VIII.3, the proof that
$\operatorname{CHSH}_{0,0}=5/2$ is the maximum quantum violation relies on the
fact that, together with the agreement condition
$\left<B^{0}_{1}B^{0}_{2}\right>=1$, it allows one to self-test the maximally
entangled state. Recall that the possibility of performing a self-test is a
sufficient condition to ensure that the quantum probability distribution
achieving $\operatorname{CHSH}_{0,0}=5/2$ is unique, as discussed in Ref.
[40]. Combining that uniqueness with the convex nature of the set of quantum
correlations, it thus follows that $\operatorname{CHSH}_{0,0}=5/2$ is the
maximal possible violation. Otherwise, if there were a distribution leading to
a higher violation, there would be different ways of mixing it with other
probability distributions (say, the ones leading to maximal violation of other
symmetries of this inequality) in order to obtain two different correlations
reaching $\operatorname{CHSH}_{0,0}=5/2$, a situation that would forbid the
possibility of self-testing.
Furthermore, following the arguments of Ref. [41], we can state that being an
extreme point of the set of quantum behaviors ensures that any third-party
event is uncorrelated with the outcomes of the observers, i.e., it holds that
any realization $a$ of some third variable $A$ is such that
$p(a,b_{1},b_{2}|x_{1},x_{2})=p(b_{1},b_{2}|x_{1},x_{2})p(a)$. Finally,
because the $\operatorname{CHSH}_{0,0}$ inequality is invariant under the
transformation $b_{1}^{\prime}=(b_{1}+1)\bmod 2$ (the same holds for a similar
transformation of $b_{2}$), and the behavior leading to its maximal violation
is unique, we can certify a bit of randomness [42], either $b_{1}$ or $b_{2}$.
In particular, the certification of a random bit and the fact that any third
party is uncorrelated implies that the probability of guessing the outcome of
one of the participants is always $1/2$.
Figure 2: a, b) Minimal values possible for $\delta$ as a function of
$\operatorname{CHSH}_{\delta,\epsilon}$ value corresponding to the observable
distribution $p(b_{1},b_{2}|x_{1},x_{2})$. Results were obtained using the
third level of NPA hierarchy [43]. Different curves correspond to different
values for outcome agreement between observers, given by the constraint
$\sum_{b_{1}}p(b_{1}=b_{2}|x_{1}=x^{*},\,x_{2}=x^{*})=1-\epsilon$, or
equivalently $\left<B^{0}_{1}B^{0}_{2}\right>=1-2\epsilon$. A change in the
behavior for the maximal violation of CHSH can be seen between values
$\left<B^{0}_{1}B^{0}_{2}\right>\leq 1/\sqrt{2}$ (shown in panel a), and
values $\left<B^{0}_{1}B^{0}_{2}\right>\geq 1/\sqrt{2}$ (panel b), where the
former increases with increasing $\epsilon$, while the latter decreases with
increasing $\epsilon$. At the same time, the restriction
$\delta\geq\epsilon/2$ from Result 1 can be observed throughout the plots,
with saturation occurring in all cases for
$\operatorname{CHSH}_{\delta,\epsilon}=2$. It can also be seen that a sharp
rise in $\delta$ occurs near maximal violation for each $\epsilon$, which
leads to numerical instabilities in these regions. For this reason, no curve
reaches $\delta=0.5$ and terminal points are different for each curve. c)
Optimal values for the violation of CHSH inequality with the constraint
$\left<B^{0}_{1}B^{0}_{2}\right>=1-2\epsilon$, with explicit quantum
realizations found numerically. The points achieve the self-testing criterion
of [44], satisfying
$\mathrm{asin}(\left<B^{0}_{1}B^{0}_{2}\right>)+\mathrm{asin}(\left<B^{0}_{1}B^{1}_{2}\right>)+\mathrm{asin}(\left<B^{1}_{1}B^{0}_{2}\right>)-\mathrm{asin}(\left<B^{1}_{1}B^{1}_{2}\right>)=\pi$,
together with the constraints
$\left<B^{0}_{1}B^{1}_{2}\right>=\left<B^{1}_{1}B^{0}_{2}\right>=-\left<B^{1}_{1}B^{1}_{2}\right>$.
The analytical curve is obtained by combining the equations, resulting in
$\operatorname{CHSH}_{\delta,\epsilon}=1-2\epsilon+3\sin(\frac{\pi}{3}-\frac{1}{3}\mathrm{asin}(1-2\epsilon))$.
It is worth noting that, seen from the perspective of Bell’s theorem, Result 3
also has interesting consequences for randomness certification. Differently
from the usual setup, which requires the maximal violation
$\operatorname{CHSH}_{0,0}=2\sqrt{2}$ to certify one bit of randomness, the
agreement condition $\left<B^{0}_{1}B^{0}_{2}\right>=1$ permits reaching the
same with a smaller CHSH inequality violation. Furthermore, the standard
scenario with $\operatorname{CHSH}_{0,0}=2\sqrt{2}$ requires that a third
input is measured by either of the parties if one wishes to directly establish
a common secret bit between them. This follows from the fact that in this
case,
$\left<B^{0}_{1}B^{0}_{2}\right>=\left<B^{0}_{1}B^{1}_{2}\right>=-\left<B^{1}_{1}B^{0}_{2}\right>=\left<B^{1}_{1}B^{1}_{2}\right>=1/\sqrt{2}$,
that is, the measurement outcomes are not completely correlated. In turn, in
our case, the condition $\left<B^{0}_{1}B^{0}_{2}\right>=1$ already guarantees
the perfectly correlated secret bit.
Moving beyond the case of the maximal violation
$\operatorname{CHSH}_{0,0}=5/2$, we can also probe, via a semi-definite
program detailed in Appendix VIII.4, the minimum value of $\delta$ required to explain
a given value of $\operatorname{CHSH}_{\delta,\epsilon}$, with the results
shown in Fig. 2. There we also consider the effect of imperfections on the
agreement between the observers, that is, we allow
$\left<B^{0}_{1}B^{0}_{2}\right>=1-2\epsilon$, a condition of relevance for
experimental tests of our witness. Interestingly, we observe that for any
value of $\epsilon$ there is always a quantum violation of the
$\operatorname{CHSH}_{\delta,\epsilon}$ inequality leading to $\delta=1/2$,
that is, the outcomes of the observers are completely uncorrelated from the
outcome $a$ of the central system they are supposedly probing. Within the
range $0\leq\epsilon\leq\frac{1}{2}(1-1/\sqrt{2})$, as we increase
$\epsilon$ we also increase the maximum quantum violation of
$\operatorname{CHSH}_{\delta,\epsilon}$. From this point on, which corresponds
to $\operatorname{CHSH}_{\delta,\epsilon}=2\sqrt{2}$ and
$\left<B^{0}_{1}B^{0}_{2}\right>=1/\sqrt{2}$, we see the opposite behavior,
since the maximum possible violation of
$\operatorname{CHSH}_{\delta,\epsilon}$ decreases as we increase $\epsilon$ in
the range $\frac{1}{2}(1-1/\sqrt{2})\leq\epsilon\leq 1/2$.
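The analytic optimal-violation curve quoted in the caption of Fig. 2 can be sanity-checked at two special points: $\epsilon=0$ should recover the maximal value $5/2$ of Result 3, and the crossover $\epsilon=\frac{1}{2}(1-1/\sqrt{2})$ should recover $2\sqrt{2}$. The formula is taken from the text, not re-derived here:

```python
import numpy as np

def chsh_max(eps):
    """Analytic curve from the caption of Fig. 2:
    CHSH = 1 - 2*eps + 3*sin(pi/3 - asin(1 - 2*eps)/3)."""
    return 1 - 2 * eps + 3 * np.sin(np.pi / 3 - np.arcsin(1 - 2 * eps) / 3)

eps_star = 0.5 * (1 - 1 / np.sqrt(2))      # crossover value from the text
print(round(chsh_max(0.0), 6))             # -> 2.5 (Result 3)
print(round(chsh_max(eps_star), 6))        # -> 2.828427, i.e. 2*sqrt(2)
```

Both limits agree with the behavior described above, which is a useful consistency check when reproducing the curves of Fig. 2.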
Generalizing our scenario, we can now consider the case where system $A$ has
not only one but two properties, corresponding to the outcomes
$a_{0}$ and $a_{1}$, which we assume to be correlated with the outcomes of
measurements performed by the two observers. In this case, the conditions (7),
(8) and (9) now involve a joint probability distribution
$p(a_{0},a_{1},b_{1},b_{2}|x_{1},x_{2})$ and, in particular, assuming
$\delta=0$ for simplicity, the objectivity condition implies that
$\displaystyle\left\{\begin{array}{ll}p(a_{i}=b_{1}|x_{1}=i,x_{2})&=1\\
p(a_{i}=b_{2}|x_{1},x_{2}=i)&=1\end{array}\right.$ (19)
Put differently, if $x_{i}=0$ then the outcome $b_{i}$ should be correlated
with $a_{0}$; if $x_{i}=1$ then the outcome $b_{i}$ should be correlated with
$a_{1}$. Similarly to the previous case, it follows that
$\operatorname{CHSH}_{\delta,0}$ constraints the correlations compatible with
this scenario. However, as stated in our next result, proven in the Appendix
VIII.5, differently from the previous case, if we impose a stronger notion of
agreement of the observers for both possible measurements, that is,
$p(b_{1}=b_{2}|x_{1}=x_{2})$, then there are no quantum violations of
$\operatorname{CHSH}_{\delta,0}$.
###### Result 4.
If we impose that $p(b_{1}=b_{2}|x_{1}=x_{2})=1$ for all $x_{1}$ and $x_{2}$,
then all quantum correlations are compatible with the assumptions (7), (8) and
(19).
This shows that if the observers agree on the outcomes of all measurements
being performed, then necessarily the observed correlations are compatible
with underlying statistics in which the observed outcomes are correlated with
the property of the system they are probing, represented by the probability
distribution $p(a)$.
It is worth remarking that such non-objectivity bounds can also serve as
witnesses of post-quantum correlations. Differently from Result 4, which holds
for quantum correlations, no-signalling correlations (those respecting eqs. (7)
and (8), a set that includes the quantum set) do allow for violations even in
the case where the observers agree on the outcomes of all measurements being
performed. To illustrate this, it is enough to consider the paradigmatic
PR-box [32], given by
$p(b_{1},b_{2}|x_{1},x_{2})=\frac{1}{2}\delta_{b_{1}\oplus
b_{2},x_{1}\bar{x_{2}}}\;.$ (20)
The PR-box is such that the observers agree on the outcomes of both possible
measurement inputs but at the same time can violate the CHSH inequality up to
its algebraic maximum of $\operatorname{CHSH}_{0,0}=4$. In general, via our
Result 4, the violation of the CHSH inequality under the constraint of
concordance between the observers directly implies the post-quantum nature of
the correlations.
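Both features of the PR-box of Eq. (20) can be checked numerically: it satisfies perfect agreement on every pair of equal inputs, yet reaches the algebraic maximum of the witness:

```python
def pr_box(b1, b2, x1, x2):
    """PR-box of Eq. (20): b1 XOR b2 = x1 AND (NOT x2), uniform otherwise."""
    return 0.5 if (b1 ^ b2) == (x1 & (1 - x2)) else 0.0

def corr(x1, x2):
    """Two-outcome correlator <B1^{x1} B2^{x2}> for the box."""
    return sum((-1) ** (b1 + b2) * pr_box(b1, b2, x1, x2)
               for b1 in (0, 1) for b2 in (0, 1))

# Perfect agreement on every pair of equal inputs ...
agree = [sum(pr_box(b, b, x, x) for b in (0, 1)) for x in (0, 1)]
# ... yet the witness B00 + B01 - B10 + B11 hits its algebraic maximum.
chsh = corr(0, 0) + corr(0, 1) - corr(1, 0) + corr(1, 1)
print(agree, chsh)   # -> [1.0, 1.0] 4.0
```

Since quantum correlations cannot do this (Result 4), observing agreement together with any violation immediately certifies post-quantum behavior.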
## IV Proof-of-principle experiment: setup
In the same spirit as Bell’s theorem, the constraints we want to test do not
depend on any specific dynamics and do not need to assume any precise physical
theory. In the following we describe a proof-of-principle photonic experiment
realizing a physical interaction dynamics whose output state can be naturally
mapped into the quantum Darwinism scenario.
The quantum mechanical description of the scheme and the interaction between
the involved photonic degrees of freedom provide only an intuitive picture,
and we stress that the conclusions drawn from the experimental data are fully
device-independent and the dynamics could in principle be unknown.
In our scheme, we consider the temporal degree of freedom of photons as the
observed system $A$, while the polarization of two photons represents a pair
of observer systems $B_{1}$ and $B_{2}$, acting as the environment (see Fig.
3a). To simplify the presentation, we consider a model where the temporal
modes of the photons are treated separately; for a more in-depth analysis the
reader can refer to Appendix VIII.6. More specifically, two photons interact with a
birefringent crystal, thus coupling the polarization and the temporal delay.
Both the generation and the interaction occur within a Sagnac interferometer,
after which the photons are spatially separated and their (possibly) entangled
polarization carries information on the temporal delay, as shown in Fig. 3b.
The action $\hat{T}$ of the birefringent plate on a single photon with state
$\left|\Psi\right\rangle=\alpha\left|H\right\rangle+\beta\left|V\right\rangle$
is to introduce a delay between the $\left|H\right\rangle$ and
$\left|V\right\rangle$ polarization states, coupling them with a temporal
wavefunction $\Phi(t)$, such that
$\hat{T}\left|\Psi\right\rangle=\hat{T}[(\alpha\left|H\right\rangle+\beta\left|V\right\rangle)\,\left|\Phi(t)\right\rangle]=\alpha\left|H\right\rangle\left|\Phi(t+\Delta
t_{H})\right\rangle+\beta\left|V\right\rangle\left|\Phi(t)\right\rangle\;,$
(21)
where $\Delta t_{H}$ is the temporal delay, induced by the birefringent plate,
on the horizontal polarization with respect to the vertical polarization.
Notice that this operation realizes a dephasing channel, a paradigmatic model
to formalize decoherence in realistic physical situations, among which
measurements are particular cases [45]. Indeed, a complex interaction
involving many degrees of freedom with discrete or continuous spectrum (all
realistic measurements involve several degrees of freedom of different nature)
can be modeled as a unitary operation involving the considered qubit states,
in our case the polarizations, and an external state corresponding to the
system $A$, that in our case corresponds to the joint temporal degree of
freedom.
More specifically, the state of the qubit interacting with the birefringent
plate, modeled as the dephasing channel involving time as the system $A$, can
be described by the following evolution:
$\begin{split}&\left|H\right\rangle\otimes\left|0\right\rangle_{A}\xrightarrow{\hat{T}}\Delta\left|H\right\rangle\otimes\left|0\right\rangle_{A}+\sqrt{1-|\Delta|^{2}}\left|H\right\rangle\otimes\left|1\right\rangle_{A}\;,\\
&\left|V\right\rangle\otimes\left|0\right\rangle_{A}\xrightarrow{\hat{T}}\left|V\right\rangle\otimes\left|0\right\rangle_{A}\;,\end{split}$
(22)
where $\Delta$ is the dephasing parameter, given by the overlap between the
two-photon joint temporal wavefunction in which the horizontal polarization is
retarded with respect to the vertical one and the wavefunction without
retardation. The states $\left|0\right\rangle_{A}$ and
$\left|1\right\rangle_{A}$ are orthonormal states of the observed system,
${}_{A}\left\langle i\mid j\right\rangle_{A}=\delta_{ij}$ with $i,j=0,1$
(here, $\delta$ is the Kronecker delta), corresponding to non-overlapping
terms of the photonic temporal wavefunction (see Appendix VIII.6).
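As a sanity check of Eq. (22): tracing out the temporal mode leaves the
polarization populations untouched and multiplies the $HV$ coherence by
$\Delta$. The following minimal NumPy sketch (an illustration added here, not
part of the experimental analysis) implements the isometry of Eq. (22) and the
partial trace over the temporal degree of freedom:

```python
import numpy as np

def dephase(alpha, beta, Delta):
    """Apply the dephasing channel of Eq. (22) to alpha|H> + beta|V> and
    return the reduced polarization density matrix after tracing out time."""
    # Isometry C^2 -> C^2 (x) C^2 in the ordered basis |H0>, |H1>, |V0>, |V1>:
    # |H>|0> -> Delta |H>|0> + sqrt(1-|Delta|^2) |H>|1>,   |V>|0> -> |V>|0>
    V = np.array([[Delta,                        0.0],
                  [np.sqrt(1 - abs(Delta) ** 2), 0.0],
                  [0.0,                          1.0],
                  [0.0,                          0.0]], dtype=complex)
    psi = V @ np.array([alpha, beta], dtype=complex)     # joint pure state
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # (pol, t, pol', t')
    return np.trace(rho, axis1=1, axis2=3)               # trace out time

rho = dephase(1 / np.sqrt(2), 1 / np.sqrt(2), 0.5)
# Populations stay 1/2; the coherence <H|rho|V> = Delta * alpha * conj(beta) = 0.25
```

For $\Delta=0$ the coherence vanishes entirely, recovering the fully dephased
limit discussed below.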
Consider the scheme in Fig. 3. A nonlinear crystal is placed within the
interferometer, and a laser pump beam enters along an input of a
dual-wavelength polarizing beam splitter (DPBS); its interaction with the
crystal generates pairs of photons in the state $\left|HV\right\rangle$. The
pump traverses the interferometer along both a clockwise and a
counter-clockwise direction, and the relative amplitude of these contributions
depends on the polarization state of the pump beam at the input of the DPBS.
Note that this is a common scheme for the generation of polarization-entangled
photon pairs [46]. A birefringent plate is then inserted in the Sagnac
interferometer after the crystal (as seen from the anti-clockwise path). The
pump is unaffected by the plate, which induces only a negligible phase shift
of the pump field, since the pump coherence time is much larger than the
introduced delay. On the other hand, the anti-clockwise generated
$\left|HV\right\rangle$ state is affected by the dephasing channel as in Eq.
(22), while the clockwise term is unaffected since it does not pass through
the plate. Hence, for the anti-clockwise generation, the state is given by
$\begin{split}&\left|H\right\rangle\left|V\right\rangle\otimes\left|0\right\rangle_{A}\left|0\right\rangle_{A}\xrightarrow{\hat{T}}\Delta\left|H\right\rangle\left|V\right\rangle\otimes\left|0\right\rangle_{A}\left|0\right\rangle_{A}+\\\
&+\sqrt{1-|\Delta|^{2}}\left|H\right\rangle\left|V\right\rangle\otimes\left|1\right\rangle_{A}\left|0\right\rangle_{A}\;.\end{split}$
(23)
Figure 3: Proof-of-principle experimental setup. a) Conceptual scheme of the
experiment, where the polarization of the photons represents the environments
while the temporal delay in the joint wavefunction represents the quantum
system of interest. b) Experimental setup. A Sagnac-based
polarization-entangled photon source generates pairs of degenerate photons at
$808$ nm. Inside the Sagnac interferometer, the photons generated in the
anti-clockwise direction interact with the birefringent plates, which couple
the polarizations to the temporal degree of freedom. The strength of the
interaction is parametrized by $\Delta$, indicating the overlap between the
temporal wavefunctions of the horizontal and vertical polarizations, and is
varied by changing the thickness of the birefringent plates. Finally, the
photons are collected and detected by single-photon detectors. In detail: M =
mirror; PBS = polarizing beam splitter; HWP = half-waveplate; QWP =
quarter-waveplate; DM = dichroic mirror; DHWP = dual-wavelength
half-waveplate; APDs = avalanche photo-diode detectors.
Considering a pump beam in a diagonal polarization state, the (non-normalized)
final state after the DPBS of the Sagnac interferometer will be:
$\begin{split}&\frac{1}{\sqrt{2}}[\Delta\left|H\right\rangle\left|V\right\rangle\left|0\right\rangle_{A}\left|0\right\rangle_{A}+\\\
&+\sqrt{1-|\Delta|^{2}}\left|H\right\rangle\left|V\right\rangle\otimes\left|1\right\rangle_{A}\left|0\right\rangle_{A}-\left|V\right\rangle\left|H\right\rangle\otimes\left|0\right\rangle_{A}\left|0\right\rangle_{A}]\;.\end{split}$
(24)
Tracing out the time degree of freedom, which nonetheless leaves its
information encoded in the polarization state of the photons, the final state
of the two observer systems is given by
$\displaystyle\rho_{\mathrm{f}}=|\Delta|^{2}\left|\Psi^{-}\right\rangle\left\langle\Psi^{-}\right|+(1-|\Delta|^{2})\rho_{\mathrm{mix}}\;,$
(25)
where $\left|\Psi^{-}\right\rangle$ is the singlet polarization state, and
$\rho_{\mathrm{mix}}$ is the mixed state
$\rho_{\mathrm{mix}}=(\left|HV\right\rangle\left\langle
HV\right|+\left|VH\right\rangle\left\langle VH\right|)/2$.
In this way, an interaction occurs between the polarization of the two photons
and their time degree of freedom by means of the same birefringent plate.
After such an interaction, the polarizations of the photons, i.e., the
environment systems $B_{1}$ and $B_{2}$, carry information about the time
degree of freedom defined as the observed system $A$ (see Appendix VIII.6 for
a detailed description of the mapping between the quantum Darwinism scenario
and the experimental realization). The strength of this interaction can be
tuned by changing the thickness of the birefringent plates. To illustrate
this, we describe two extremal conditions. When no birefringent plate is
present, no interaction occurs, so the temporal state of the photons is
uncorrelated with the polarization. In this case, $\Delta=1$ and, from Eq.
(25), the final polarization state of the photons is the maximally entangled
state $\frac{1}{\sqrt{2}}(\left|HV\right\rangle-\left|VH\right\rangle)$.
Conversely, when the thickness of the birefringent plate introduces a temporal
delay greater than the coherence time of the photons, $\Delta=0$ and the
global state, from Eq. (24), is
$(\left|H\right\rangle\left|V\right\rangle\otimes\left|1\right\rangle_{A}\left|0\right\rangle_{A}-\left|V\right\rangle\left|H\right\rangle\otimes\left|0\right\rangle_{A}\left|0\right\rangle_{A})/\sqrt{2}$.
From Eq. (25), tracing out the time degree of freedom, the polarization state
is the mixed state $(\left|HV\right\rangle\left\langle
HV\right|+\left|VH\right\rangle\left\langle VH\right|)/2$.
To summarize, considering the state of the photonic polarizations in Eq. (25):
when the coupling is maximal ($\Delta=0$), the polarization values of the two
photons effectively correspond to the presence ($\left|H\right\rangle$ for the
first photon and $\left|V\right\rangle$ for the second) or absence
($\left|V\right\rangle$ for the first photon and $\left|H\right\rangle$ for
the second) of a temporal delay of the wavefunction. Conversely, when no
coupling is present, no information on the temporal delay is stored in the
polarization of the photons. From a quantum Darwinism perspective, the
polarization plays the role of an environment, mediating the interaction
between the (indirectly) observed system, here the temporal delay, and the
measurement apparatus.
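The interplay between $\Delta$ and the attainable CHSH violation can be made
quantitative. Applying the Horodecki criterion (a standard result, not used
explicitly in the text) to $\rho_{\mathrm{f}}$ of Eq. (25) gives a maximal
CHSH value $2\sqrt{1+|\Delta|^{4}}$, interpolating between no violation at
$\Delta=0$ and the Tsirelson bound $2\sqrt{2}$ at $\Delta=1$. A short
numerical sketch of this computation, added here for illustration:

```python
import numpy as np

# Pauli matrices in the polarization basis |H> = |0>, |V> = |1>
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def chsh_max(Delta):
    """Maximal CHSH value of rho_f in Eq. (25), via the Horodecki criterion:
    S_max = 2*sqrt(sum of the two largest eigenvalues of T^T T)."""
    psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # singlet
    rho_sing = np.outer(psi_minus, psi_minus.conj())
    HV = np.zeros(4); HV[1] = 1.0        # |HV> = |01>
    VH = np.zeros(4); VH[2] = 1.0        # |VH> = |10>
    rho_mix = (np.outer(HV, HV) + np.outer(VH, VH)) / 2
    rho_f = abs(Delta) ** 2 * rho_sing + (1 - abs(Delta) ** 2) * rho_mix
    # Correlation matrix T_ij = Tr[rho_f (sigma_i x sigma_j)]
    paulis = [sx, sy, sz]
    T = np.array([[np.trace(rho_f @ np.kron(si, sj)).real
                   for sj in paulis] for si in paulis])
    ev = np.sort(np.linalg.eigvalsh(T.T @ T))
    return 2 * np.sqrt(ev[-1] + ev[-2])   # two largest eigenvalues

# chsh_max(1.0) -> 2*sqrt(2) (maximal violation); chsh_max(0.0) -> 2 (none)
```

For intermediate couplings one finds $T=\mathrm{diag}(-|\Delta|^{2},
-|\Delta|^{2},-1)$, hence the closed form $2\sqrt{1+|\Delta|^{4}}$ quoted
above.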
Figure 4: Experimental data. Optimal values of the experimental CHSH parameter
as a function of the constraint $\left\langle B^{0}_{1}\mid
B^{0}_{2}\right\rangle=1-2\epsilon$, for different values of the temporal
overlap $\Delta$ between the wavefunctions of the two polarizations. The
dashed lines represent the optimal values calculated from the theoretical
model of the experimental state. Moreover, for $\Delta=1$ and $\Delta=0.91$,
results obtained using the ab-initio approach are also reported, indicated by
the black and blue points, respectively. The error bars are calculated by
assuming Poissonian statistics.
## V Experimental results
We performed the measurements using four different delays between the two
polarizations, induced by the birefringent plates. To each delay corresponds a
strength of the interaction, parameterized by the overlap $\Delta_{i}$, with
$i=1,2,3,4$, from Eq. (22).
Once the overlap $\Delta_{i}$ is fixed, the measurements are performed by
varying the agreement $\left<B^{0}_{1}B^{0}_{2}\right>=1-2\epsilon$ between
the measurements of the two observers. For each agreement, the violation of
the CHSH inequality is optimized using the information from quantum state
tomography [47]. More specifically, from the tomography we extract the values
of the rotation angles of the measurement waveplates able to reach, within
experimental errors, the desired agreement and the corresponding maximum CHSH
parameter achievable by the generated state.
The results are shown in Fig. 4. For the case with maximum interaction,
$\Delta=0$, the polarization of the two photons becomes maximally entangled
with the time degree of freedom and so, from the monogamy of entanglement
[48], no entanglement is possible between the polarizations. Thus, no
violation of the CHSH inequality is observed (see green points in Fig. 4). In
this case, one can argue for the emergence of objectivity, since the observers
measuring the polarization of the two photons not only agree among themselves,
but their measurement outcomes can indeed correspond to an objective property,
represented in our proof-of-principle experiment by the time degree of freedom
of the photons.
Conversely, when the interaction is absent ($\Delta=1$), the experimental
entangled state is able to violate the CHSH inequality up to a value
$S^{exp}=2.475\pm 0.008$ with an observed agreement $\left\langle
B^{0}_{1}\mid B^{0}_{2}\right\rangle=1-2\epsilon=0.956\pm 0.002$, that is,
$\epsilon=0.022\pm 0.001$. Using the $\operatorname{CHSH}_{\delta,\epsilon}$
inequality (13), valid for general no-signalling correlations, we see that
this corresponds to $\delta\geq 0.124\pm 0.002$. The effect becomes more
pronounced when assuming the validity of quantum theory for all systems
involved. A numerical computation approximating the quantum set by a superset
of allowed distributions [43] returns $\delta=0.49\pm 0.01$, revealing that
the observed variables should be almost completely uncorrelated with any
candidate for the system $A$ property. We thus obtain solid experimental
evidence of the (noisy) "collective hallucination": even though the two
observers agree on the supposedly measured value, the latter cannot
correspond to an objective property of the quantum system of interest.
In order to bring our experimental evidence for non-objectivity closer to the
spirit of the device-independent paradigm, we also perform measurements using
the ab-initio approach introduced in [49], where experimental violations of
classical constraints are found and optimized in a fully black-box scenario,
without any knowledge of the generated state and the measurement apparatuses.
More specifically, while usually in an experiment one tries to violate some
inequality using precise knowledge of the employed experimental apparatus, in
an ab-initio approach one does not assume any prior information and, based
only on the (noisy) output statistics, adaptively learns the optimal values of
some controllable parameters in order to optimize a given cost function, such
as the violation of a Bell-like inequality.
In our experiment there are $8$ parameters to be optimized by the algorithm,
corresponding to the values of the angles of pairs of waveplates (one pair for
each measurement station) for each of the $4$ measurements needed to evaluate
the CHSH parameter. In particular, the optimization first reaches the target
value of the agreement $\left<B^{0}_{1}B^{0}_{2}\right>=1-2\epsilon$ by tuning
the $4$ involved waveplate parameters, and then reaches a global optimum for
the CHSH value by tuning the other $4$ parameters associated with the
remaining CHSH measurements $\\{B^{1}_{1},B^{1}_{2}\\}$. Details on the
ab-initio optimization protocol can be found in Appendix VIII.7. For each run
of the protocol, the algorithm performs $350$ iterations, i.e., the number of
points sampled in the pair of $4$-dimensional parameter spaces associated with
the CHSH measurements $\\{\\{B^{0}_{1},B^{0}_{2}\\},\\{B^{1}_{1},B^{1}_{2}\\}\\}$:
the first four parameters are related to the measurements
$\\{B^{0}_{1},B^{0}_{2}\\}$, fixing a value for the agreement between the
observers, while the others are related to the remaining operators
$\\{B^{1}_{1},B^{1}_{2}\\}$. The results for the CHSH values experimentally
achieved with the ab-initio approach are shown in Fig. 4.
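A toy version of such a black-box loop can be sketched on the idealized,
noiseless state: a derivative-free optimizer adjusts simulated waveplate
angles using only the resulting correlator values, with no model of the state.
This sketch is ours; the actual protocol of [49] and Appendix VIII.7 differs
in its parametrization and its handling of experimental noise. It uses SciPy's
Nelder-Mead simplex method:

```python
import numpy as np
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def obs(theta):
    """Dichotomic polarization observable for an analysis angle theta."""
    return np.cos(2 * theta) * sz + np.sin(2 * theta) * sx

def chsh(angles, rho):
    """CHSH value for measurement angles (a0, a1, b0, b1) on state rho."""
    a0, a1, b0, b1 = angles
    E = lambda a, b: np.trace(rho @ np.kron(obs(a), obs(b))).real
    return E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)

# Ideal Delta = 1 state: the polarization singlet
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# The optimizer sees only correlator values, mimicking a black-box setting;
# random restarts help escape local optima of the CHSH landscape.
rng = np.random.default_rng(0)
best = 0.0
for _ in range(10):
    res = minimize(lambda x: -chsh(x, rho),
                   x0=rng.uniform(0, np.pi, 4), method="Nelder-Mead")
    best = max(best, -res.fun)
# best approaches the Tsirelson bound 2*sqrt(2)
```

In the experiment the objective is evaluated from photon-count statistics
rather than from a density matrix, but the adaptive structure of the search is
the same.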
The experimental points collected with the ab-initio approach can reach values
higher than those achieved using quantum state tomography information. This is
possible because, within an ab-initio framework, errors in the
characterization of the optical setup, such as the optical axes of the
waveplates, are compensated automatically by the optimization process. For all
the curves with $\Delta\neq 0$ and for each value of the observed agreement,
the CHSH inequality is violated, thus witnessing a degree of non-objectivity
in a device-independent way.
## VI Discussion
Comprehending how microscopic quantum features give rise to the observed
macroscopic properties is a central goal of decoherence theory and, in
particular, of quantum Darwinism. Importantly, the emergence of objectivity,
that is, the fact that different observers agree on the properties of a
quantum system under observation, can be seen as a generic feature as long as
the information about the quantum system is successfully spread to the
environment it is interacting with. It is unclear, however, how to witness the
presence, or rather the absence, of such objectivity in practice. Can we
witness non-objectivity by simply probing the environment, without any
knowledge of the underlying dynamics?
To answer this question in the positive, we establish a probabilistic
framework to cast objectivity through an operational lens, building on the
results of [22]. Within this setting, we propose three properties defining
what is to be expected from generic objective behaviour: no-superdeterminism,
no-signalling and $\delta$-objectivity, the latter stating that
$p(b_{i}=a|x_{i}^{*})\geq 1-\delta$, where $x_{i}^{*}$ denotes the measurement
for which the observer should try to recover as well as possible the
information about the system $A$ encoded in the probability $p(a)$. These
conditions play a role similar to that of local realism in Bell’s theorem
[28]. In particular, the notion of $\delta$-objectivity is justified by our
first result, stating that the local agreement between a given observer and
the quantum system of interest translates into a global notion of agreement
between all observers having access to some part of the environment.
We then showed that a generalization of the seminal CHSH inequality [27]
constrains the set of possible correlations compatible with the three
aforementioned assumptions. Furthermore, the violation of this inequality
offers a device-independent witness of the non-objectivity of the underlying
process, while naturally quantifying how much objectivity one should give up
in order to explain the observed correlations. Further, we proved that quantum
mechanics allows for violations of this inequality and, in particular, leads
to a monogamy relation implying that, even though the observers agree among
themselves, their outcomes are completely uncorrelated from the system they
supposedly should be correlated with: a phenomenon we name "collective
hallucination", which we have experimentally probed using a photonic setup
where the quantum property of interest is encoded in the temporal degree of
freedom of photons, the polarization of which plays the role of the
environment to which the information should redundantly proliferate.
For scenarios where the probed system has more than one property of interest,
we demonstrated that if the observers have to agree on the measurement
outcomes of all performed measurements, then quantum correlations are
compatible with the assumptions of no-superdeterminism, no-signalling and
objectivity, that is, they cannot violate any Bell-like inequality. This
constraint can be violated by correlations beyond those allowed by quantum
theory and can thus be employed as a test of post-quantumness.
To our knowledge, this is the first connection of quantum Darwinism, and the
notion of objectivity it entails, with Bell inequalities and
device-independent quantum information processing, a bridge that deserves
further investigation. For instance, it would be interesting to generalize our
results to a larger number of observers and consider measurements with more
outcomes.
At the same time, one should understand paradigmatic dynamics considered in
the literature of quantum Darwinism [13, 14, 15, 16, 17, 18, 19, 20] under
this new perspective, and explore the connections with other objectivity
measures [50] valid in the quantum framework. It is worth remarking also that
our approach could lead to substantial refinements in recent tests for the
emergence of objectivity [51, 52, 53]. That said, we should also note that
another bridge connecting quantum Darwinism, Spekkens contextuality and
quantum information has recently been erected: adapting the
prepare-and-measure scenario to the usual environment-as-a-witness framework,
the authors of Ref. [24] proved that Spekkens non-contextuality holds for each
observer whenever the environment proliferates the information about the
central system appropriately. Their notion of classicality differs from ours,
insofar as here we consider mutual agreement rather than non-contextuality as
a signature of classicality, and our connection with the foundations of
quantum mechanics is via Bell scenarios. Additionally, our work goes a step
further, as we investigate our theoretical findings with a proof-of-principle
experiment.
Finally, we notice that the $\delta$-objectivity constraint we consider here
is mathematically very similar to the notion of absoluteness of events
employed to analyze a generalization of the Wigner’s friend experiment [54,
55, 56] in the foundations of quantum theory. Further exploring the
connections between quantum Darwinism/objectivity and Wigner’s friend
experiment/absoluteness of events is another relevant research direction that
we hope our results might trigger.
## VII Acknowledgements
This work was supported by the Serrapilheira Institute (Grant No.
Serra-1708-15763), by the Simons Foundation (Grant Number 884966, AF), the
Brazilian National Council for Scientific and Technological Development (CNPq)
via the National Institute for Science and Technology on Quantum Information
(INCT-IQ) and Grants No. 406574/2018-9 and 307295/2020-6. This research was
also supported by the Fetzer Franklin Fund of the John E. Fetzer Memorial
Trust and by grant number FQXi-RFP-IPW-1905 from the Foundational Questions
Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley
Community Foundation. We acknowledge support from the Templeton Foundation via
The Quantum Information Structure of Spacetime (QISS) Project (qiss.fr), Grant
Agreement No. 61466 (the opinions expressed in this publication are those of
the author(s) and do not necessarily reflect the views of the John Templeton
Foundation).
## References
* Broglie [1924] L. de Broglie, A tentative theory of light quanta, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 47, 446 (1924).
* Bohr _et al._ [1928] N. Bohr _et al._ , _The quantum postulate and the recent development of atomic theory_ , Vol. 3 (Printed in Great Britain by R. & R. Clarke, Limited, 1928).
* Wootters and Zurek [1979] W. K. Wootters and W. H. Zurek, Complementarity in the double-slit experiment: Quantum nonseparability and a quantitative statement of bohr’s principle, Physical Review D 19, 473 (1979).
* Wigner [1963] E. P. Wigner, The problem of measurement, American Journal of Physics 31, 6 (1963).
* Schlosshauer [2007] M. A. Schlosshauer, _Decoherence: and the quantum-to-classical transition_ (Springer Science & Business Media, 2007).
* Joos _et al._ [2013] E. Joos, H. D. Zeh, C. Kiefer, D. J. Giulini, J. Kupsch, and I.-O. Stamatescu, _Decoherence and the appearance of a classical world in quantum theory_ (Springer Science & Business Media, 2013).
* Aolita _et al._ [2008] L. Aolita, R. Chaves, D. Cavalcanti, A. Acín, and L. Davidovich, Scaling laws for the decay of multiqubit entanglement, Physical Review Letters 100, 080501 (2008).
* Brune _et al._ [1996] M. Brune, E. Hagley, J. Dreyer, X. Maitre, A. Maali, C. Wunderlich, J. M. Raimond, and S. Haroche, Observing the progressive decoherence of the “meter” in a quantum measurement, Physical Review Letters 77, 4887 (1996).
* Arndt _et al._ [1999] M. Arndt, O. Nairz, J. Vos-Andreae, C. Keller, G. Van der Zouw, and A. Zeilinger, Wave–particle duality of c60 molecules, nature 401, 680 (1999).
* Sonnentag and Hasselbach [2007] P. Sonnentag and F. Hasselbach, Measurement of decoherence of electron waves and visualization of the quantum-classical transition, Physical review letters 98, 200402 (2007).
* Zurek [2003] W. H. Zurek, Decoherence, einselection, and the quantum origins of the classical, Reviews of modern physics 75, 715 (2003).
* Zurek [2009] W. H. Zurek, Quantum darwinism, Nature physics 5, 181 (2009).
* Ollivier _et al._ [2004] H. Ollivier, D. Poulin, and W. H. Zurek, Objective properties from subjective quantum states: Environment as a witness, Physical review letters 93, 220401 (2004).
* Ollivier _et al._ [2005] H. Ollivier, D. Poulin, and W. H. Zurek, Environment as a witness: Selective proliferation of information and emergence of objectivity in a quantum universe, Physical review A 72, 042113 (2005).
* Blume-Kohout and Zurek [2006] R. Blume-Kohout and W. H. Zurek, Quantum darwinism: Entanglement, branches, and the emergent classicality of redundantly stored quantum information, Physical review A 73, 062310 (2006).
* Korbicz _et al._ [2014] J. K. Korbicz, P. Horodecki, and R. Horodecki, Objectivity in a noisy photonic environment through quantum state information broadcasting, Physical review letters 112, 120402 (2014).
* Riedel and Zurek [2010] C. J. Riedel and W. H. Zurek, Quantum darwinism in an everyday environment: Huge redundancy in scattered photons, Phys. Rev. Lett. 105, 020404 (2010).
* Blume-Kohout and Zurek [2008] R. Blume-Kohout and W. H. Zurek, Quantum darwinism in quantum brownian motion, Phys. Rev. Lett. 101, 240405 (2008).
* Zwolak _et al._ [2009] M. Zwolak, H. T. Quan, and W. H. Zurek, Quantum darwinism in a mixed environment, Phys. Rev. Lett. 103, 110402 (2009).
* Riedel and Zurek [2011] C. J. Riedel and W. H. Zurek, Redundant information from thermal illumination: quantum darwinism in scattered photons, New Journal of Physics 13, 073038 (2011).
* Horodecki _et al._ [2015] R. Horodecki, J. K. Korbicz, and P. Horodecki, Quantum origins of objectivity, Physical review A 91, 032122 (2015).
* Brandao _et al._ [2015] F. G. Brandao, M. Piani, and P. Horodecki, Generic emergence of classical features in quantum darwinism, Nature communications 6, 1 (2015).
* Qi and Ranard [2021] X.-L. Qi and D. Ranard, Emergent classicality in general multipartite states and channels, Quantum 5, 555 (2021).
* Baldijão _et al._ [2021] R. Baldijão, R. Wagner, C. Duarte, B. Amaral, and M. T. Cunha, Emergence of noncontextuality under quantum darwinism, PRX Quantum 2 (2021).
* Çakmak _et al._ [2021] B. Çakmak, Ö. E. Müstecaplıoğlu, M. Paternostro, B. Vacchini, and S. Campbell, Quantum darwinism in a composite system: Objectivity versus classicality, Entropy 23, 995 (2021).
* Touil _et al._ [2022] A. Touil, B. Yan, D. Girolami, S. Deffner, and W. H. Zurek, Eavesdropping on the decohering environment: quantum darwinism, amplification, and the origin of objective classical reality, Physical Review Letters 128, 010401 (2022).
* Clauser _et al._ [1969] J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Proposed experiment to test local hidden-variable theories, Physical review letters 23, 880 (1969).
* Brunner _et al._ [2014] N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, Bell nonlocality, Reviews of Modern Physics 86, 419 (2014).
* Note [1] A possible route to generalization of quantum Darwinism to GPTs was proposed in [57]. Unfortunately, the authors are concerned with defining what an idealized quantum Darwinism process would look like in GPTs, more precisely, a general version of a fan-out gate, and do not consider the noisy version of such a process.
* Barrett _et al._ [2006] J. Barrett, A. Kent, and S. Pironio, Maximally nonlocal and monogamous quantum correlations, Physical review letters 97, 170409 (2006).
* Barnum _et al._ [2007] H. Barnum, J. Barrett, M. Leifer, and A. Wilce, Generalized no-broadcasting theorem, Physical review letters 99, 240501 (2007).
* Popescu and Rohrlich [1994] S. Popescu and D. Rohrlich, Quantum nonlocality as an axiom, Foundations of Physics 24, 379 (1994).
* Müller and Masanes [2016] M. P. Müller and L. Masanes, Information-theoretic postulates for quantum theory, in _Quantum Theory: Informational Foundations and Foils_, edited by G. Chiribella and R. W. Spekkens (Springer Netherlands, Dordrecht, 2016) pp. 139–170.
* Pironio _et al._ [2016] S. Pironio, V. Scarani, and T. Vidick, Focus on device independent quantum information, New Journal of Physics 18, 100202 (2016).
* Ekert [1991] A. K. Ekert, Quantum cryptography based on bell’s theorem, Phys. Rev. Lett. 67, 661 (1991).
* Pironio _et al._ [2010] S. Pironio, A. Acín, S. Massar, A. B. de La Giroday, D. N. Matsukevich, P. Maunz, S. Olmschenk, D. Hayes, L. Luo, T. A. Manning, _et al._ , Random numbers certified by bell’s theorem, Nature 464, 1021 (2010).
* Cabello _et al._ [2011] A. Cabello, V. D’Ambrosio, E. Nagali, and F. Sciarrino, Hybrid ququart-encoded quantum cryptography protected by kochen-specker contextuality, Phys. Rev. A 84, 030302 (2011).
* Hall [2016] M. J. Hall, The significance of measurement independence for bell inequalities and locality, in _At the frontier of spacetime_ (Springer, 2016) pp. 189–204.
* Chaves _et al._ [2021] R. Chaves, G. Moreno, E. Polino, D. Poderini, I. Agresti, A. Suprano, M. R. Barros, G. Carvacho, E. Wolfe, A. Canabarro, _et al._ , Causal networks and freedom of choice in bell’s theorem, PRX Quantum 2, 040323 (2021).
* Šupić and Bowles [2020] I. Šupić and J. Bowles, Self-testing of quantum systems: a review, Quantum 4, 337 (2020).
* Masanes _et al._ [2006] L. Masanes, A. Acin, and N. Gisin, General properties of nonsignaling theories, Phys. Rev. A 73, 012112 (2006).
* Dhara _et al._ [2013] C. Dhara, G. Prettico, and A. Acín, Maximal quantum randomness in bell tests, Phys. Rev. A 88, 052116 (2013).
* Navascués _et al._ [2007] M. Navascués, S. Pironio, and A. Acín, Bounding the set of quantum correlations, Phys. Rev. Lett. 98, 010401 (2007).
* Wang _et al._ [2016] Y. Wang, X. Wu, and V. Scarani, All the self-testings of the singlet for two binary measurements, New Journal of Physics 18, 025021 (2016).
* Preskill [1998] J. Preskill, Lecture notes for physics 229: Quantum information and computation, California Institute of Technology 16, 1 (1998).
* Fedrizzi _et al._ [2007] A. Fedrizzi, T. Herbst, A. Poppe, T. Jennewein, and A. Zeilinger, A wavelength-tunable fiber-coupled source of narrowband entangled photons, Optics Express 15, 15377 (2007).
* James _et al._ [2005] D. F. James, P. G. Kwiat, W. J. Munro, and A. G. White, On the measurement of qubits, in _Asymptotic Theory of Quantum Statistical Inference: Selected Papers_ (World Scientific, 2005) pp. 509–538.
* Osborne and Verstraete [2006] T. J. Osborne and F. Verstraete, General monogamy inequality for bipartite qubit entanglement, Physical review letters 96, 220503 (2006).
* Poderini _et al._ [2022] D. Poderini, E. Polino, G. Rodari, A. Suprano, R. Chaves, and F. Sciarrino, Ab initio experimental violation of bell inequalities, Physical Review Research 4, 013159 (2022).
* Girolami _et al._ [2022] D. Girolami, A. Touil, B. Yan, S. Deffner, and W. H. Zurek, Redundantly amplified information suppresses quantum correlations in many-body systems, arXiv preprint arXiv:2202.09328 (2022).
* Ciampini _et al._ [2018] M. A. Ciampini, G. Pinna, P. Mataloni, and M. Paternostro, Experimental signature of quantum darwinism in photonic cluster states, Physical Review A 98, 020101 (2018).
* Chen _et al._ [2019] M.-C. Chen, H.-S. Zhong, Y. Li, D. Wu, X.-L. Wang, L. Li, N.-L. Liu, C.-Y. Lu, and J.-W. Pan, Emergence of classical objectivity of quantum darwinism in a photonic quantum simulator, Science Bulletin 64, 580 (2019).
* Unden _et al._ [2019] T. K. Unden, D. Louzon, M. Zwolak, W. H. Zurek, and F. Jelezko, Revealing the emergence of classicality using nitrogen-vacancy centers, Phys. Rev. Lett. 123, 140402 (2019).
* Bong _et al._ [2020] K.-W. Bong, A. Utreras-Alarcón, F. Ghafari, Y.-C. Liang, N. Tischler, E. G. Cavalcanti, G. J. Pryde, and H. M. Wiseman, A strong no-go theorem on the wigner’s friend paradox, Nature Physics 16, 1199 (2020).
* Moreno _et al._ [2021] G. Moreno, R. Nery, C. Duarte, and R. Chaves, Events in quantum mechanics are maximally non-absolute (2021).
* Wiseman _et al._ [2022] H. M. Wiseman, E. G. Cavalcanti, and E. G. Rieffel, A "thoughtful" local friendliness no-go theorem: a prospective experiment with new assumptions to suit, arXiv preprint arXiv:2209.08491 (2022).
* Baldijão _et al._ [2022] R. D. Baldijão, M. Krumm, A. J. Garner, and M. P. Müller, Quantum darwinism and the spreading of classical information in non-classical theories, Quantum 6, 636 (2022).
* Navascués _et al._ [2008] M. Navascués, S. Pironio, and A. Acín, A convergent hierarchy of semidefinite programs characterizing the set of quantum correlations, New Journal of Physics 10, 073013 (2008).
* Boyd [2020] R. W. Boyd, _Nonlinear optics_ (Academic press, 2020).
* Chang [2012] K.-H. Chang, Stochastic nelder–mead simplex method–a new globally convergent direct search method for simulation optimization, European journal of operational research 220, 684 (2012).
## VIII Appendix
### VIII.1 Proof of Result 1
###### Proof.
We start by summing all the conditions $p(b_{k}=a,a|x_{k}^{*})\geq 1-\delta$
and expanding them in terms of the joint distribution
$p(b_{1}=a,b_{2}=a,\ldots,b_{n}=a,a|x_{1}^{*},\ldots,x_{n}^{*}),p(b_{1}\neq
a,b_{2}=a,\ldots,b_{n}=a,a|x_{1}^{*},\ldots,x_{n}^{*}),\ldots$. To simplify
the notation we use $p_{s}=p_{\beta_{1},\beta_{2},\ldots,\beta_{n}}$, where
$s$ is a string of bits $\beta_{i}\in\\{0,1\\}$ denoting the terms in which
$b_{i}=a$ ($\beta_{i}=0$) or $b_{i}\neq a$ ($\beta_{i}=1$). So, for example,
$p_{0}=p_{00\ldots 0}$ represents the term where all the $b_{i}$ agree with
$a$, while $p_{1}=p_{10\ldots 0}$ represents the one where only $b_{1}$ does
not. Calling $H(s)$ the Hamming distance between $00\ldots 0$ and $s$, i.e.,
the number of $1$s in the bitstring $s$, we obtain:
$\displaystyle n(1-\delta)$
$\displaystyle\leq\sum_{k}p(b_{k}=a|x_{k}^{*})=\sum_{s\in\\{0,1\\}^{n}}(n-H(s))p_{s}\leq$
(26) $\displaystyle\leq p_{0}+(n-1)\left(p_{0}+\sum_{s:H(s)\geq
1}p_{s}\right)\leq p_{0}+(n-1)$ (27)
where we used the normalization condition for the $p_{s}$. From this we
directly obtain (1). ∎
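The chain of inequalities (26)-(27) holds for an arbitrary joint distribution,
so the bound $p_{0}\geq 1-n\delta$ can also be verified numerically: for any
$p_{s}$, take $\delta=1-\min_{k}p(b_{k}=a|x_{k}^{*})$ and check the
inequality. A randomized sanity check (illustrative only, not part of the
proof):

```python
import numpy as np

def check_bound(n=4, trials=500, seed=0):
    """Check p_0 >= 1 - n*delta on random distributions p_s over n-bit strings,
    where bit k of s is beta_k (beta_k = 0 means b_k = a)."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        p = rng.random(2 ** n)
        p[0] += rng.random() * n          # bias toward full agreement s = 00...0
        p /= p.sum()                      # normalize: sum_s p_s = 1
        # p(b_k = a | x_k^*) = sum of p_s over bitstrings s with beta_k = 0
        marg = [sum(p[s] for s in range(2 ** n) if ((s >> k) & 1) == 0)
                for k in range(n)]
        delta = 1 - min(marg)             # tightest delta these marginals allow
        assert p[0] >= 1 - n * delta - 1e-12
    return True

check_bound()  # passes for every sampled distribution
```

Since the derivation above makes no assumption beyond normalization, no random
distribution can violate the bound; the check is a consistency test of the
algebra, not of any physical model.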
#### Tightness of the bound
###### Proof.
To prove the tightness of the bound consider the distribution
$p(a,b_{1},\ldots,b_{n}|x_{1},\ldots,x_{n})$ defined as follows by
highlighting the settings and the variables associated with special settings
$x^{*}_{i}$:
$p(a,b_{1},\ldots,b_{n}|x^{*}_{i_{1}},\ldots,x^{*}_{i_{k}},x_{i_{k+1}},\ldots,x_{i_{n}})=p_{\delta}(a,b_{i_{1}},\ldots,b_{i_{k}})p_{U}(b_{i_{k+1}},\ldots,b_{i_{n}})$
(28)
where
$p_{\delta}(a,b_{i_{1}},\ldots,b_{i_{k}})=\left\\{\begin{aligned}
&(1-k\delta)/2&\text{for}\quad a=b_{i_{l}}\forall l\leq k\\\
&\delta/2&\text{for}\quad a=b_{i_{l}}\forall l\leq k,l\neq m,b_{i_{m}}\neq
a\\\ &0&\text{elsewhere}\end{aligned}\right.$ (29)
while $p_{U}(b_{i_{k+1}},\ldots,b_{i_{n}})=2^{k-n}$ is the uniform
distribution over the remaining variables. First, notice that this
distribution for $n+1$ variables reduces to the one for $n$ variables upon
marginalizing over $b_{n+1}$. This is obvious when $x_{n+1}\neq x^{*}_{n+1}$,
because of the definition (28), while for $x_{n+1}=x^{*}_{n+1}$ we have
$\displaystyle
p(a,b_{1},\ldots,b_{n}|x^{*}_{i_{1}},\ldots,x^{*}_{i_{k}})=p(a,b_{1},\ldots,b_{n},b_{n+1}=a|x^{*}_{i_{1}},\ldots,x^{*}_{i_{k}},x^{*}_{n+1})+$
(30) $\displaystyle+p(a,b_{1},\ldots,b_{n},b_{n+1}\neq
a|x^{*}_{i_{1}},\ldots,x^{*}_{i_{k}},x^{*}_{n+1})=(1-k\delta)/2\quad\text{when}\quad
a=b_{i_{l}}\forall l\leq k$ (31) $\displaystyle
p(a,b_{1},\ldots,b_{n}|x^{*}_{i_{1}},\ldots,x^{*}_{i_{k}})=p(a,b_{1},\ldots,b_{n},b_{n+1}=a|x^{*}_{i_{1}},\ldots,x^{*}_{i_{k}},x^{*}_{n+1})+$
(32) $\displaystyle+p(a,b_{1},\ldots,b_{n},b_{n+1}\neq
a|x^{*}_{i_{1}},\ldots,x^{*}_{i_{k}},x^{*}_{n+1})=\delta/2\quad\text{when}\quad
a=b_{i_{l}}\forall l\leq k,l\neq m,b_{i_{m}}\neq a$ (33)
It is now easy to show that these distributions satisfy (9). Indeed, the
condition holds trivially for $n=1$, while for general $n$, (9)
applies to the marginal on all but one of the $b_{i}$, which corresponds to
the distribution (28) for $n=1$ as we just proved.
Similarly, to show that (29) satisfies the no-signaling condition
$\sum_{a,b_{j}}p(a,b_{i},b_{j}|x_{i},x_{j})=\sum_{a,b_{j}}p(a,b_{i},b_{j}|x_{i},x_{j}^{\prime})\quad\forall\;x_{j},x_{j}^{\prime}$,
we again use the fact that marginalizing on $b_{j}$ gives (28) for $n=1$,
which is independent of the choice of $x_{j}$. ∎
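As a numerical illustration (names hypothetical), one can construct the distribution (29) explicitly for $k=n$ and verify that it is normalized, saturates every condition $\sum_{a}p(b_{k}=a,a|x_{k}^{*})=1-\delta$, and attains $p_{0}=1-n\delta$ exactly:

```python
import itertools

def p_delta(n, delta):
    """The distribution of Eq. (29) with every setting special (k = n),
    as a dict over outcome tuples (a, b_1, ..., b_n)."""
    p = {}
    for a in (0, 1):
        for bs in itertools.product((0, 1), repeat=n):
            disagree = sum(b != a for b in bs)
            if disagree == 0:
                p[(a,) + bs] = (1 - n * delta) / 2
            elif disagree == 1:
                p[(a,) + bs] = delta / 2
            else:
                p[(a,) + bs] = 0.0
    return p

def saturates_bound(n, delta, tol=1e-9):
    """Check normalization, saturation of each agreement condition at
    1 - delta, and p_0 = 1 - n*delta for the distribution above."""
    p = p_delta(n, delta)
    if abs(sum(p.values()) - 1) > tol:
        return False
    for k in range(n):
        agree_k = sum(q for key, q in p.items() if key[1 + k] == key[0])
        if abs(agree_k - (1 - delta)) > tol:
            return False
    p0 = sum(q for key, q in p.items()
             if all(b == key[0] for b in key[1:]))
    return abs(p0 - (1 - n * delta)) < tol
```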
### VIII.2 Proof of Result 2
###### Proof.
The proof closely follows Ref. [55]. Given a distribution
$p(b_{1},b_{2}|x_{1},x_{2},a)$, let us define the following quantities:
$\displaystyle\gamma^{(a)}_{x_{1},x_{2}}\equiv
p(b_{1}=0,b_{2}=0|x_{1},x_{2},a),$ (34)
$\displaystyle\beta^{(a)}_{x_{1}}\equiv p(b_{1}=0|x_{1},a),$ (35)
$\displaystyle\eta^{(a)}_{x_{2}}\equiv p(b_{2}=0|x_{2},a).$ (36)
Now, using the definition $\langle
B^{x_{1}}_{1}B^{x_{2}}_{2}\rangle_{a}=\sum(-1)^{b_{1}+b_{2}}p(b_{1},b_{2}|x_{1},x_{2},a)$,
the NS condition, and the fact that
$\max\\{0,\beta^{(a)}_{x_{1}}+\eta^{(a)}_{x_{2}}-1\\}\leq\gamma^{(a)}_{x_{1},x_{2}}\leq\min\\{\beta^{(a)}_{x_{1}},\eta^{(a)}_{x_{2}}\\}$,
one can show that
$\displaystyle
2\left|\beta^{(a)}_{x_{1}}+\eta^{(a)}_{x_{2}}-1\right|-1\leq\langle
B^{x_{1}}_{1}B^{x_{2}}_{2}\rangle_{a}\leq
1-2\left|\beta^{(a)}_{x_{1}}-\eta^{(a)}_{x_{2}}\right|,$ (37)
which can be used to bound the quantity
$\operatorname{CHSH}_{\delta,\epsilon}^{(a)}=\langle
B^{0}_{1}B^{0}_{2}\rangle_{a}+\langle B^{0}_{1}B^{1}_{2}\rangle_{a}-\langle
B^{1}_{1}B^{0}_{2}\rangle_{a}+\langle B^{1}_{1}B^{1}_{2}\rangle_{a}$, as
$\displaystyle\operatorname{CHSH}_{\delta,\epsilon}^{(a)}\leq\langle
B^{0}_{1}B^{0}_{2}\rangle_{a}+3-2J^{(a)}_{\delta},$ (38)
where
$\displaystyle J^{(a)}_{\delta}$
$\displaystyle\equiv\left|\beta^{(a)}_{0}-\eta^{(a)}_{1}\right|+\left|-\beta^{(a)}_{1}+\eta^{(a)}_{1}\right|+\left|\beta^{(a)}_{1}+\eta^{(a)}_{0}-1\right|$
$\displaystyle\geq\left|\beta_{0}^{(a)}-\beta_{1}^{(a)}\right|+\left|\beta^{(a)}_{1}+\eta^{(a)}_{0}-1\right|$
$\displaystyle\geq\left|\beta_{0}^{(a)}+\eta_{0}^{(a)}-1\right|=\left|p(b_{1}=0|x_{1}=0,a)+p(b_{2}=0|x_{2}=0,a)-1\right|.$
All the inequalities above are obtained via a direct application of the
triangle inequality. In the second line, equality can be obtained by setting
$\eta_{1}^{(a)}=\frac{\beta_{0}^{(a)}+\beta_{1}^{(a)}}{2}$, which always lies
between $\beta_{0}^{(a)}$ and $\beta_{1}^{(a)}$ and is therefore an allowed
choice. The inequality of the third line can always be saturated by setting
$\beta_{1}^{(a)}=\frac{\beta_{0}^{(a)}-\eta_{0}^{(a)}+1}{2}$, which makes both
terms equal and is also an available choice.
Now, looking at the quantity $\operatorname{CHSH}_{\delta,\epsilon}$ and
recalling that $\sum_{a}p(a)\langle
B^{0}_{1}B^{0}_{2}\rangle_{a}=1-2\epsilon$, it follows that
$\displaystyle\operatorname{CHSH}_{\delta,\epsilon}$
$\displaystyle=\sum_{a}p(a)\operatorname{CHSH}_{\delta,\epsilon}^{(a)}$
$\displaystyle\leq 4-2\,\epsilon-2\sum_{a}p(a)J^{(a)}_{\delta}$
$\displaystyle\leq
4-2\,\epsilon-2\sum_{a}p(a)\left|\beta_{0}^{(a)}+\eta_{0}^{(a)}-1\right|.$
(39)
Let $\tilde{\beta}_{0}^{(a)}=p(b_{1}=1|x_{1}=0,a)$ and
$\tilde{\eta}_{0}^{(a)}=p(b_{2}=1|x_{2}=0,a)$. Due to normalization, it holds
that
$\left|\beta_{0}^{(a)}+\eta_{0}^{(a)}-1\right|=\left|\tilde{\beta}_{0}^{(a)}+\tilde{\eta}_{0}^{(a)}-1\right|$.
This allows rewriting the last inequality of Eq. (39) as
$\displaystyle\operatorname{CHSH}_{\delta,\epsilon}\leq
4-2\,\epsilon-\sum_{a}p(a)\left(\,\left|\beta_{0}^{(a)}+\eta_{0}^{(a)}-1\right|+\left|\tilde{\beta}_{0}^{(a)}+\tilde{\eta}_{0}^{(a)}-1\right|\,\right).$
(40)
Using the triangle inequality to combine the $a=b_{i}$ terms and to combine
the $a\neq b_{i}$ terms leads to
$\displaystyle\operatorname{CHSH}_{\delta,\epsilon}\leq
4-2\,\epsilon-\left|p(b_{1}=a|x_{1}=0)+p(b_{2}=a|x_{2}=0)-1\right|-\left|p(b_{1}=1-a|x_{1}=0)+p(b_{2}=1-a|x_{2}=0)-1\right|,$
(41)
which relates the terms in the moduli to the $\delta$-objectivity inequality
(9) and its complement. The expression in the first modulus satisfies
$\sum_{i}\,\sum_{a}p(b_{i}=a,a|x_{i})-1\geq 1-2\delta$, while the expression
in the second modulus gives $\sum_{i}\,\sum_{a}p(b_{i}=1-a,a|x_{i})-1\leq
2\delta-1$. Since $\delta\leq 1/2$, we arrive at
$\displaystyle\operatorname{CHSH}_{\delta,\epsilon}\leq
2-2\,\epsilon+4\,\delta,$ (42)
which concludes the proof.
∎
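The elementary bounds (37), on which the proof rests, follow from the Fréchet bounds $\max\\{0,\beta+\eta-1\\}\leq\gamma\leq\min\\{\beta,\eta\\}$ together with $\langle B_{1}B_{2}\rangle=1-2\beta-2\eta+4\gamma$; a quick numerical check over random marginals (a sketch, with hypothetical names) is:

```python
import random

def correlator_bounds_hold(trials=1000, seed=1, tol=1e-12):
    """Check Eq. (37): for marginals beta = p(b1=0), eta = p(b2=0) and any
    joint gamma = p(b1=0, b2=0) within the Frechet bounds, the correlator
    E = 1 - 2*beta - 2*eta + 4*gamma satisfies
    2|beta + eta - 1| - 1 <= E <= 1 - 2|beta - eta|."""
    rng = random.Random(seed)
    for _ in range(trials):
        beta, eta = rng.random(), rng.random()
        lo, hi = max(0.0, beta + eta - 1.0), min(beta, eta)
        gamma = lo + rng.random() * (hi - lo)
        E = 1 - 2 * beta - 2 * eta + 4 * gamma
        if not (2 * abs(beta + eta - 1) - 1 - tol <= E
                <= 1 - 2 * abs(beta - eta) + tol):
            return False
    return True
```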
### VIII.3 Proof of Result 3 and the self-test of the state maximally
violating $\mathrm{CHSH}_{0,0}$
The idea of performing a self-test of a target state $|\psi\rangle$ from the
knowledge about the existence of a given state $|\Psi\rangle$ fulfilling some
special property consists of exploiting this property to build a local
isometry which, when applied to $|\Psi\rangle$, returns the tensor product of
$|\psi\rangle$ with some ancillary, irrelevant state. Here we will follow
steps similar to those shown in section 4 of Ref. [40], which is an up-to-date
review of self-testing protocols.
Let $|\Psi^{*}\rangle$ be a state leading to the maximal quantum violation of
the $\operatorname{CHSH}_{0,0}$ inequality when one of the parts measures the
observables (and hence Hermitian operators) $B^{0}_{1}$ and $B^{1}_{1}$ and
the other part measures $B^{0}_{2}$ and $B^{1}_{2}$, i.e.,
$\displaystyle\langle\Psi^{*}|B^{0}_{1}B^{0}_{2}+B^{0}_{1}B^{1}_{2}-B^{1}_{1}B^{0}_{2}+B^{1}_{1}B^{1}_{2}|\Psi^{*}\rangle=\frac{5}{2},$
(43)
in which, by hypothesis, it holds that
$\langle\Psi^{*}|B^{0}_{1}B^{0}_{2}|\Psi^{*}\rangle=1$, a condition which
entails that
$\displaystyle B^{0}_{1}|\Psi^{*}\rangle=B^{0}_{2}|\Psi^{*}\rangle.$ (44)
Our first step is to prove the following relation,
$\displaystyle\langle\Psi^{*}|\left[(B^{0}_{1}-B^{0}_{2})^{2}+\frac{1}{2}\left[(B^{1}_{2}-B^{0}_{2})-B^{1}_{1}\right]^{2}\right]|\Psi^{*}\rangle$
$\displaystyle=$
$\displaystyle\langle\Psi^{*}|\left[\frac{5}{2}\mathbbm{1}-\operatorname{CHSH}_{0,0}\right]|\Psi^{*}\rangle$
(45)
This relation can be achieved via a straightforward calculation using Eq. (44)
and the condition $\langle\Psi^{*}|B^{0}_{1}B^{0}_{2}|\Psi^{*}\rangle=1$:
$\displaystyle\langle\Psi^{*}|\left[(B^{0}_{1}-B^{0}_{2})^{2}+\frac{1}{2}\left[(B^{1}_{2}-B^{0}_{2})-B^{1}_{1}\right]^{2}\right]|\Psi^{*}\rangle$
$\displaystyle=$
$\displaystyle\langle\Psi^{*}|\left[(B^{0}_{1})^{2}-2B^{0}_{1}B^{0}_{2}+(B^{0}_{2})^{2}+\frac{1}{2}(B^{1}_{2}-B^{0}_{2})^{2}\right.$
$\displaystyle-$
$\displaystyle\left.B^{1}_{1}(B^{1}_{2}-B^{0}_{2})+\frac{1}{2}(B^{1}_{1})^{2}\right]|\Psi^{*}\rangle$
$\displaystyle=$
$\displaystyle\langle\Psi^{*}|\left[2\mathbbm{1}-2B^{0}_{1}B^{0}_{2}+\frac{1}{2}((B^{1}_{2})^{2}-B^{1}_{2}B^{0}_{2}-B^{0}_{2}B^{1}_{2}+(B^{0}_{2})^{2})\right.$
$\displaystyle-$
$\displaystyle\left.B^{1}_{1}B^{1}_{2}+B^{1}_{1}B^{0}_{2}+\frac{1}{2}\mathbbm{1}\right]|\Psi^{*}\rangle$
$\displaystyle=$
$\displaystyle\langle\Psi^{*}|\left[\frac{7}{2}\mathbbm{1}-2B^{0}_{1}B^{0}_{2}-\frac{1}{2}(B^{1}_{2}B^{0}_{2}+B^{0}_{2}B^{1}_{2})-B^{1}_{1}B^{1}_{2}+B^{1}_{1}B^{0}_{2}\right]|\Psi^{*}\rangle,$
now we use that $\langle\Psi^{*}|B^{0}_{1}B^{0}_{2}|\Psi^{*}\rangle=1$ to
obtain,
$\displaystyle\langle\Psi^{*}|\left[(B^{0}_{1}-B^{0}_{2})^{2}+\frac{1}{2}\left[(B^{1}_{2}-B^{0}_{2})-B^{1}_{1}\right]^{2}\right]|\Psi^{*}\rangle$
$\displaystyle=$
$\displaystyle\langle\Psi^{*}|\left[\frac{5}{2}\mathbbm{1}-B^{0}_{1}B^{0}_{2}-\frac{1}{2}(B^{1}_{2}B^{0}_{2}+B^{0}_{2}B^{1}_{2})-B^{1}_{1}B^{1}_{2}+B^{1}_{1}B^{0}_{2}\right]|\Psi^{*}\rangle,$
and then we use Eq. (44), from which we get
$\displaystyle\langle\Psi^{*}|\left[(B^{0}_{1}-B^{0}_{2})^{2}+\frac{1}{2}\left[(B^{1}_{2}-B^{0}_{2})-B^{1}_{1}\right]^{2}\right]|\Psi^{*}\rangle$
$\displaystyle=$
$\displaystyle\langle\Psi^{*}|\left[\frac{5}{2}\mathbbm{1}-B^{0}_{1}B^{0}_{2}-B^{0}_{1}B^{1}_{2}+B^{1}_{1}B^{0}_{2}-B^{1}_{1}B^{1}_{2}\right]|\Psi^{*}\rangle$
$\displaystyle=$
$\displaystyle\langle\Psi^{*}|\left[\frac{5}{2}\mathbbm{1}-\operatorname{CHSH}_{0,0}\right]|\Psi^{*}\rangle,$
which concludes the proof of Eq. 45.
By combining Eqs. (45) and (43), we conclude that:
$\displaystyle(B^{1}_{2}-B^{0}_{2})|\Psi^{*}\rangle=B^{1}_{1}|\Psi^{*}\rangle.$
(46)
and using again relation (44),
$\displaystyle(B^{0}_{1}+B^{1}_{1})|\Psi^{*}\rangle=B^{1}_{2}|\Psi^{*}\rangle.$
(47)
Now, using Eqs. (44), (46), and (47),
$\displaystyle\langle\Psi^{*}|\\{B^{0}_{1},B^{1}_{1}\\}|\Psi^{*}\rangle$
$\displaystyle=\langle\Psi^{*}|B^{0}_{1}B^{1}_{1}+B^{1}_{1}B^{0}_{1}|\Psi^{*}\rangle$
$\displaystyle=\langle\Psi^{*}|B^{0}_{1}(B^{1}_{2}-B^{0}_{2})+B^{1}_{1}B^{0}_{2}|\Psi^{*}\rangle$
$\displaystyle=\langle\Psi^{*}|(B^{1}_{2}-B^{0}_{2})B^{0}_{1}+B^{0}_{2}B^{1}_{1}|\Psi^{*}\rangle$
$\displaystyle=\langle\Psi^{*}|B^{1}_{2}B^{0}_{1}-B^{0}_{2}B^{0}_{1}+B^{0}_{2}(B^{1}_{2}-B^{0}_{2})|\Psi^{*}\rangle$
$\displaystyle=\langle\Psi^{*}|B^{1}_{2}B^{0}_{2}-B^{0}_{2}B^{0}_{2}+B^{0}_{2}B^{1}_{2}-B^{0}_{2}B^{0}_{2}|\Psi^{*}\rangle$
$\displaystyle=\langle\Psi^{*}|B^{1}_{2}B^{0}_{2}+B^{0}_{2}B^{1}_{2}-2\mathbbm{1}|\Psi^{*}\rangle$
$\displaystyle=\langle\Psi^{*}|\\{B^{0}_{2},B^{1}_{2}\\}-2\mathbbm{1}|\Psi^{*}\rangle.$
(48)
Here, because $\langle\Psi^{*}|[B^{0}_{1},B^{1}_{1}]|\Psi^{*}\rangle=0$
and $\langle\Psi^{*}|[B^{0}_{2},B^{1}_{2}]|\Psi^{*}\rangle=0$, it holds that
$\displaystyle\left\\{\begin{array}[]{l}\langle\Psi^{*}|\\{B^{0}_{1},B^{1}_{1}\\}|\Psi^{*}\rangle=2\langle\Psi^{*}|B^{1}_{1}B^{0}_{1}|\Psi^{*}\rangle=2\langle\Psi^{*}|B^{1}_{1}B^{0}_{2}|\Psi^{*}\rangle\\\
\langle\Psi^{*}|\\{B^{0}_{2},B^{1}_{2}\\}|\Psi^{*}\rangle=2\langle\Psi^{*}|B^{0}_{2}B^{1}_{2}|\Psi^{*}\rangle=2\langle\Psi^{*}|B^{0}_{1}B^{1}_{2}|\Psi^{*}\rangle\end{array}\right.,$
(51)
and so
$\displaystyle 2\langle\Psi^{*}|B^{1}_{1}B^{0}_{2}|\Psi^{*}\rangle$
$\displaystyle=\langle\Psi^{*}|2B^{0}_{1}B^{1}_{2}-2\mathbbm{1}|\Psi^{*}\rangle$
$\displaystyle\implies\langle\Psi^{*}|B^{0}_{1}B^{1}_{2}-B^{1}_{1}B^{0}_{2}|\Psi^{*}\rangle=1.$
Because of the constraint
$\langle\Psi^{*}|B^{0}_{1}B^{0}_{2}|\Psi^{*}\rangle=1$ and Eq. (43), this
means that
$\displaystyle\frac{1}{2}$
$\displaystyle=\langle\Psi^{*}|B^{1}_{1}B^{1}_{2}|\Psi^{*}\rangle$
$\displaystyle=\langle\Psi^{*}|B^{1}_{1}(B^{0}_{1}+B^{1}_{1})|\Psi^{*}\rangle$
$\displaystyle=\langle\Psi^{*}|B^{1}_{1}B^{0}_{1}+\mathbbm{1}|\Psi^{*}\rangle$
$\displaystyle\implies\langle\Psi^{*}|B^{1}_{1}B^{0}_{1}|\Psi^{*}\rangle=-\frac{1}{2}.$
Going back to Eq. (51),
$\displaystyle\langle\Psi^{*}|\\{B^{0}_{1},B^{1}_{1}\\}|\Psi^{*}\rangle=-1\;\;\;\implies\;\;\;\\{B^{0}_{1},B^{1}_{1}\\}|\Psi^{*}\rangle=-|\Psi^{*}\rangle,$
(52)
which can be substituted into Eq. (48) to obtain
$\displaystyle\langle\Psi^{*}|\\{B^{0}_{2},B^{1}_{2}\\}|\Psi^{*}\rangle=1\;\;\;\implies\;\;\;\\{B^{0}_{2},B^{1}_{2}\\}|\Psi^{*}\rangle=|\Psi^{*}\rangle$
(53)
Now let us define the following operators:
$\displaystyle Z_{1}\equiv B^{0}_{1}$ $\displaystyle
X_{1}\equiv-\frac{\sqrt{3}}{3}\left(2B^{1}_{1}+B^{0}_{1}\right)$
$\displaystyle Z_{2}\equiv B^{0}_{2}$ $\displaystyle
X_{2}\equiv\frac{\sqrt{3}}{3}\left(B^{0}_{2}-2B^{1}_{2}\right).$ (54)
These operators are such that,
$\displaystyle Z_{1}|\Psi^{*}\rangle=Z_{2}|\Psi^{*}\rangle,$ (55)
from Eq. (44). Additionally, from Eqs. (46) and (47):
$\displaystyle X_{1}|\Psi^{*}\rangle$ $\displaystyle=$
$\displaystyle-\frac{\sqrt{3}}{3}\left(2B^{1}_{1}+B^{0}_{1}\right)|\Psi^{*}\rangle$
(56) $\displaystyle=$
$\displaystyle-\frac{\sqrt{3}}{3}\left(2(B^{1}_{2}-B^{0}_{2})+B^{0}_{2}\right)|\Psi^{*}\rangle$
$\displaystyle=$
$\displaystyle-\frac{\sqrt{3}}{3}\left(2B^{1}_{2}-B^{0}_{2}\right)|\Psi^{*}\rangle$
$\displaystyle=$ $\displaystyle X_{2}|\Psi^{*}\rangle.$
Also, the effect of the anticommutator of $Z_{1}$ and $X_{1}$,
$\\{Z_{1},X_{1}\\}$, and the anticommutator of $Z_{2}$ and $X_{2}$ are related
when acting on $|\Psi^{*}\rangle$:
$\displaystyle\\{Z_{1},X_{1}\\}|\Psi^{*}\rangle$
$\displaystyle=-\frac{\sqrt{3}}{3}\left(2B^{0}_{1}B^{1}_{1}+B^{0}_{1}B^{0}_{1}+2B^{1}_{1}B^{0}_{1}+B^{0}_{1}B^{0}_{1}\right)|\Psi^{*}\rangle$
$\displaystyle=-\frac{\sqrt{3}}{3}\left(2\\{B^{0}_{1},B^{1}_{1}\\}+2\mathbbm{1}\right)|\Psi^{*}\rangle$
$\displaystyle=0$ (57)
and similarly,
$\displaystyle\\{Z_{2},X_{2}\\}|\Psi^{*}\rangle$
$\displaystyle=\frac{\sqrt{3}}{3}\left(B^{0}_{2}B^{0}_{2}-2B^{0}_{2}B^{1}_{2}+B^{0}_{2}B^{0}_{2}-2B^{1}_{2}B^{0}_{2}\right)|\Psi^{*}\rangle$
$\displaystyle=\frac{\sqrt{3}}{3}\left(-2\\{B^{0}_{2},B^{1}_{2}\\}+2\mathbbm{1}\right)|\Psi^{*}\rangle$
$\displaystyle=0$ (58)
in which we used relations (52) and (53).
Furthermore, $Z_{1}$ and $Z_{2}$ are unitary by construction, and $X_{1}$ and
$X_{2}$ act on $|\Psi^{*}\rangle$ as unitary operators:
$\displaystyle X_{1}^{\dagger}X_{1}|\Psi^{*}\rangle$
$\displaystyle=\frac{1}{3}\left(2B^{1}_{1}+B^{0}_{1}\right)\left(2B^{1}_{1}+B^{0}_{1}\right)|\Psi^{*}\rangle$
$\displaystyle=\frac{1}{3}\left(4B^{1}_{1}B^{1}_{1}+2B^{1}_{1}B^{0}_{1}+2B^{0}_{1}B^{1}_{1}+B^{0}_{1}B^{0}_{1}\right)|\Psi^{*}\rangle$
$\displaystyle=\frac{1}{3}\left(5\mathbbm{1}+2(B^{1}_{1}B^{0}_{1}+B^{0}_{1}B^{1}_{1})\right)|\Psi^{*}\rangle$
$\displaystyle=\frac{1}{3}\left(5\mathbbm{1}-2\mathbbm{1}\right)|\Psi^{*}\rangle$
$\displaystyle=|\Psi^{*}\rangle.$ (59)
The proof for $X_{2}$ follows analogously, using relation (53). Finally, as
the sub-indices on the $B$'s label different parties, it also holds that
$[Z_{1},Z_{2}]=0$ as well as $[X_{1},X_{2}]=0$.
Now consider the local isometry depicted in Fig. 5. The final state is denoted
$|\Psi_{final}\rangle$ and given by:
Figure 5: The partial swap circuit, a local isometry usually employed in
self-testing protocols.
$\displaystyle|\Psi_{final}\rangle=\frac{1}{4}\sum_{j,k=0}^{1}|jk\rangle\otimes\left[Z_{1}^{j}(\mathbbm{1}+(-1)^{j}X_{1})Z_{2}^{k}(\mathbbm{1}+(-1)^{k}X_{2})|\Psi^{*}\rangle\right].$
(60)
Because of relation (56) and because $X_{1}$ commutes with $X_{2}$, all terms
for which $j+k=1$ vanish, leading to
$\displaystyle|\Psi_{final}\rangle=\frac{1}{4}|00\rangle\otimes\left[(\mathbbm{1}+X_{1})(\mathbbm{1}+X_{2})|\Psi^{*}\rangle\right]+\frac{1}{4}|11\rangle\otimes\left[Z_{1}(\mathbbm{1}-X_{1})Z_{2}(\mathbbm{1}-X_{2})|\Psi^{*}\rangle\right],$
(61)
in which we can use the fact that $Z_{1}$ and $X_{1}$, as well as $Z_{2}$ and
$X_{2}$, anticommute on $|\Psi^{*}\rangle$ to perform the following
calculation,
$\displaystyle
Z_{1}(\mathbbm{1}-X_{1})Z_{2}(\mathbbm{1}-X_{2})|\Psi^{*}\rangle$
$\displaystyle=(Z_{1}-Z_{1}X_{1})(Z_{2}-Z_{2}X_{2})|\Psi^{*}\rangle$
$\displaystyle=(Z_{1}+X_{1}Z_{1})(Z_{2}+X_{2}Z_{2})|\Psi^{*}\rangle$
$\displaystyle=(\mathbbm{1}+X_{1})Z_{1}(\mathbbm{1}+X_{2})Z_{2}|\Psi^{*}\rangle$
$\displaystyle=(\mathbbm{1}+X_{1})(\mathbbm{1}+X_{2})Z_{1}Z_{2}|\Psi^{*}\rangle,$
$\displaystyle=(\mathbbm{1}+X_{1})(\mathbbm{1}+X_{2})Z_{1}Z_{1}|\Psi^{*}\rangle,$
(62)
$\displaystyle=(\mathbbm{1}+X_{1})(\mathbbm{1}+X_{2})\mathbbm{1}|\Psi^{*}\rangle,$
(63)
which leads to
$\displaystyle|\Psi_{final}\rangle$
$\displaystyle=\frac{1}{4}|00\rangle\otimes\left[(\mathbbm{1}+X_{1})(\mathbbm{1}+X_{2})|\Psi^{*}\rangle\right]+\frac{1}{4}|11\rangle\otimes\left[(\mathbbm{1}+X_{1})(\mathbbm{1}+X_{2})|\Psi^{*}\rangle\right]$
$\displaystyle=\frac{1}{\sqrt{2}}\left(|00\rangle+|11\rangle\right)\otimes\left[\frac{1}{2\sqrt{2}}(\mathbbm{1}+X_{1})(\mathbbm{1}+X_{2})|\Psi^{*}\rangle\right]$
$\displaystyle=|\phi^{+}\rangle\otimes|\zeta_{junk}\rangle,$
thus completing our self-test.
### VIII.4 A semi-definite program formulation of the problem
Given the distribution $p(b_{1},b_{2}|x_{1},x_{2})$ one may search for the
least $\delta$ necessary to describe it with a $\delta$-objective distribution
p(a,b_{1},b_{2}|x_{1},x_{2})$. For a general distribution, this optimization
can be carried out with a linear program; the restriction to
quantum-attainable distributions $p(a,b_{1},b_{2}|x_{1},x_{2})$, however, is a
hard task in general. Ideally, the objective would be finding the best
combination of density operator $\rho_{AB_{1}B_{2}}$ and POVMs $M_{A}^{(a)}$,
$M^{(b_{1})}_{B_{1},\,x_{1}}$ and $M^{(b_{2})}_{B_{2},\,x_{2}}$ such that
$p(a,b_{1},b_{2}|x_{1},x_{2})=\operatorname{tr}\left(\rho_{AB_{1}B_{2}}M_{A}^{(a)}\otimes
M^{(b_{1})}_{B_{1},\,x_{1}}\otimes
M^{(b_{2})}_{B_{2},\,x_{2}}\right),\,\forall a,b_{1},b_{2},x_{1},x_{2},$ (64)
satisfying constraint (9) with the lowest value possible for $\delta$. Since
no restriction is made on the dimensions of the involved Hilbert spaces, these
would also become parameters to be included in the optimization. Upper bounds
on the optimal $\delta$ can be obtained simply by providing a generic
realization for $p(a,b_{1},b_{2}|x_{1},x_{2})$ that is not necessarily the
optimal one (noticing that we are minimizing $\delta$). The optimal $\delta$
can be bounded from below by relaxing the problem and imposing some, but not
all, of the necessary conditions that any quantum-attainable distribution must
satisfy.
A standard method in the literature is to employ the NPA hierarchy [43] of
semidefinite program tests, which is proven to converge to the set of quantum
distributions [58]. The method proposes to further constrain the possible
distributions $p(a,b_{1},b_{2}|x_{1},x_{2})$ by requiring that it should also
be compatible with a matrix of statistical moments that can always be made
positive semidefinite whenever there exists a quantum realization for $p$. For
a given $p$ with unknown realization, the moment matrix is built up by
considering free variables and the constraints implied if a quantum
realization exists: a sequence of observables $O_{1},\ldots,O_{n}$ and a
quantum state $|\psi\rangle_{AB_{1}B_{2}}$ are assumed, which can be chosen
pure by using a high enough dimension for the Hilbert space, and each entry in
the moment matrix $\Gamma$ is given by
$\Gamma_{ij}=\langle\psi|O_{i}^{\dagger}O_{j}|\psi\rangle$. In general, the
observables are chosen to be the measurement operators $M_{A}^{(a)}$,
$M^{(b_{1})}_{B_{1},\,x_{1}}$ and $M^{(b_{2})}_{B_{2},\,x_{2}}$ (which can
also be assumed projective by using a large enough dimension) and strings
formed by them, with length up to a given $k\geq 1$. In some cases, the
entries can
be associated with components of the distribution
$p(a,b_{1},b_{2}|x_{1},x_{2})$, or they can be identified with other entries.
If a quantum realization for $p$ exists, the moment matrix is positive
semidefinite for every $k$; failure to obtain a positive semidefinite moment
matrix $\Gamma$ for some $k$ therefore certifies that the behavior is
post-quantum.
Employing this method to bound a distribution $p(a,b_{1},b_{2}|x_{1},x_{2})$
which reduces to an observable distribution $p(b_{1},b_{2}|x_{1},x_{2})$ with
specific values for the CHSH expression allows us to bound the objectivity
that can be associated with the observed values when $x_{1}=0$ or $x_{2}=0$. In
Fig. 2 we show different curves for the minimum $\delta$ as a function of the
CHSH value, considering also different agreements between the observers, given
by the extra (linear) constraint
$\sum_{b_{1}}p(b_{1}=b_{2}|x_{1}=0,\,x_{2}=0)=1-\epsilon$, with
$0\leq\epsilon\leq 0.5$.
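As a toy illustration of the moment-matrix idea (this is not the solver used to produce Fig. 2, and the function names are our own): at level 1 of the hierarchy for the CHSH scenario, with vanishing marginals, the free entries $\langle A_{0}A_{1}\rangle,\langle B_{0}B_{1}\rangle$ set to zero, and all four correlators equal to $\pm S/4$ with the CHSH sign pattern, positive semidefiniteness of $\Gamma$ already reproduces the Tsirelson bound $S\leq 2\sqrt{2}$:

```python
import math

def cholesky_psd(mat, tol=1e-12):
    """Return True iff mat is positive definite, via a plain Cholesky test."""
    n = len(mat)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = mat[i][i] - s
                if d <= tol:  # non-positive pivot: matrix is not PD
                    return False
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (mat[i][j] - s) / L[j][j]
    return True

def npa1_moment_matrix(chsh):
    """Level-1 moment matrix for the operator list (1, A0, A1, B0, B1):
    zero marginals, free entries <A0 A1> = <B0 B1> set to zero (a particular
    choice of the free variables, adequate for this symmetric pattern),
    correlators chsh/4 with sign pattern (+, +, +, -)."""
    c = chsh / 4.0
    return [
        [1.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, c, c],
        [0.0, 0.0, 1.0, c, -c],
        [0.0, c, c, 1.0, 0.0],
        [0.0, c, -c, 0.0, 1.0],
    ]
```

The matrix stays positive definite for CHSH values below $2\sqrt{2}\approx 2.83$ and loses positivity above it, mirroring the convergence statement above.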
### VIII.5 Proof of Result 4
To prove Result 4, we are going to show that it is always possible to define a
classical joint distribution $p(a_{1},a_{2})$ in such a way that
$(a_{1},a_{2})$ plays the role of a hidden variable in a classical model that
can explain any distribution $p(b_{1},b_{2}|x_{1},x_{2})$ obtained in an
experiment with quantum systems.
First of all, we take advantage of the fact that we place no constraint on the
dimension of the systems under consideration to work only with pure states. So
consider a pure state
$|\Psi\rangle\in(\mathcal{H}_{1}\otimes\mathcal{H}_{2})$. Let
$\\{M^{(1)}_{0|0},M^{(1)}_{1|0},M^{(1)}_{0|1},M^{(1)}_{1|1},M^{(2)}_{0|0},M^{(2)}_{1|0},M^{(2)}_{0|1},M^{(2)}_{1|1}\\}$
be a set of projectors, such that $\sum_{b=0}^{1}M^{(i)}_{b,x}=\mathbbm{1}$
for $i,x\in\\{0,1\\}$, $M^{(i)}_{b,x}\in\mathcal{L}(\mathcal{H}_{i})$, and
$\displaystyle
M^{(i)}_{b,x}=\sum_{k}|\phi^{(i)}_{k,b,x}\rangle\langle\phi^{(i)}_{k,b,x}|$
(65)
in which
$\displaystyle\langle\phi^{(i)}_{k,b,x}|\phi^{(i)}_{k^{\prime},b^{\prime},x}\rangle=\delta_{b,b^{\prime}}\delta_{k,k^{\prime}}.$
(66)
The condition $p(b_{1}=b_{2}|x_{1}=x_{2})=1$ implies that $E_{00}=E_{11}=1$
and hence no symmetry of the CHSH expression having these two terms with
different signs can be violated. It also implies that,
$\displaystyle|\Psi\rangle$ $\displaystyle=$
$\displaystyle\sum_{k,l}\alpha_{k,l}|\phi^{(0)}_{k,0,0}\rangle\otimes|\phi^{(1)}_{l,0,0}\rangle+\sum_{r,s}\beta_{r,s}|\phi^{(0)}_{r,1,0}\rangle\otimes|\phi^{(1)}_{s,1,0}\rangle$
(67)
and, at the same time,
$\displaystyle|\Psi\rangle$ $\displaystyle=$
$\displaystyle\sum_{p,q}\alpha^{\prime}_{p,q}|\phi^{(0)}_{p,0,1}\rangle\otimes|\phi^{(1)}_{q,0,1}\rangle+\sum_{u,v}\beta^{\prime}_{u,v}|\phi^{(0)}_{u,1,1}\rangle\otimes|\phi^{(1)}_{v,1,1}\rangle.$
(68)
Now, define $\tilde{p}(a_{0},a_{1})$ as follows,
$\displaystyle\tilde{p}(a_{0},a_{1})$ $\displaystyle\equiv$ $\displaystyle
p(b_{1}=a_{0},b_{2}=a_{1}|x_{1}=0,x_{2}=1)$ $\displaystyle=$
$\displaystyle\langle\Psi|M_{a_{0}|0}^{(0)}\otimes
M_{a_{1}|1}^{(1)}|\Psi\rangle$ $\displaystyle=$
$\displaystyle\langle\Psi|\left[\left(\sum_{j}|\phi^{(0)}_{j,a_{0},0}\rangle\langle\phi^{(0)}_{j,a_{0},0}|\right)\otimes\left(\sum_{k}|\phi^{(1)}_{k,a_{1},1}\rangle\langle\phi^{(1)}_{k,a_{1},1}|\right)\right]\left(\sum_{p,q}\alpha^{\prime}_{p,q}|\phi^{(0)}_{p,0,1}\rangle\otimes|\phi^{(1)}_{q,0,1}\rangle+\sum_{u,v}\beta^{\prime}_{u,v}|\phi^{(0)}_{u,1,1}\rangle\otimes|\phi^{(1)}_{v,1,1}\rangle\right)$
$\displaystyle=$
$\displaystyle\langle\Psi|\left[\left(\sum_{j}|\phi^{(0)}_{j,a_{0},0}\rangle\langle\phi^{(0)}_{j,a_{0},0}|\right)\otimes\mathbbm{1}^{(1)}\right]\left(\sum_{k,p,q}\left(\alpha^{\prime}_{pq}\delta_{a_{1},0}+\beta^{\prime}_{p,q}\delta_{a_{1},1}\right)\delta_{k,q}|\phi_{p,a_{1},1}^{(0)}\rangle\otimes|\phi^{(1)}_{k,a_{1},1}\rangle\right)$
$\displaystyle=$
$\displaystyle\left(\sum_{u,v}\alpha_{u,v}^{*}\langle\phi^{(0)}_{u,0,0}|\otimes\langle\phi^{(1)}_{v,0,0}|+\sum_{r,s}\beta_{r,s}^{*}\langle\phi^{(0)}_{r,1,0}|\otimes\langle\phi^{(1)}_{s,1,0}|\right)\left[\left(\sum_{j}|\phi^{(0)}_{j,a_{0},0}\rangle\langle\phi^{(0)}_{j,a_{0},0}|\right)\otimes\mathbbm{1}^{(1)}\right]$
$\displaystyle\times$
$\displaystyle\left(\sum_{k,p,q}\left(\alpha^{\prime}_{pq}\delta_{a_{1},0}+\beta^{\prime}_{p,q}\delta_{a_{1},1}\right)\delta_{k,q}|\phi_{p,a_{1},1}^{(0)}\rangle\otimes|\phi^{(1)}_{k,a_{1},1}\rangle\right)$
$\displaystyle=$
$\displaystyle\left(\sum_{j,u,v}\left(\alpha_{u,v}^{*}\delta_{a_{0},0}+\beta_{u,v}^{*}\delta_{a_{0},1}\right)\delta_{v,j}\langle\phi^{(0)}_{j,a_{0},0}|\otimes\langle\phi^{(1)}_{j,a_{0},0}|\right)\left(\sum_{k,p}\left(\alpha^{\prime}_{p,q}\delta_{a_{1},0}+\beta^{\prime}_{p,q}\delta_{a_{1},1}\right)|\phi_{p,a_{1},1}^{(0)}\rangle\otimes|\phi^{(1)}_{k,a_{1},1}\rangle\right)$
$\displaystyle=$
$\displaystyle\sum_{j,u,k,p}\left(\alpha_{u,v}^{*}\delta_{a_{0},0}+\beta_{u,v}^{*}\delta_{a_{0},1}\right)\left(\alpha^{\prime}_{p,q}\delta_{a_{1},0}+\beta^{\prime}_{p,q}\delta_{a_{1},1}\right)\langle\phi^{(0)}_{j,a_{0},0}|\phi_{p,a_{1},1}^{(0)}\rangle\langle\phi^{(1)}_{j,a_{0},0}|\phi^{(1)}_{k,a_{1},1}\rangle$
$\displaystyle=$ $\displaystyle\left(\langle\Psi|M_{a_{1}|1}^{(0)}\otimes
M_{a_{0}|0}^{(1)}|\Psi\rangle\right)^{*}$ $\displaystyle=$
$\displaystyle\langle\Psi|M_{a_{1}|1}^{(0)}\otimes
M_{a_{0}|0}^{(1)}|\Psi\rangle$ $\displaystyle=$ $\displaystyle
p(b_{1}=a_{1},b_{2}=a_{0}|x_{1}=1,x_{2}=0)$
Hence, given a quantum distribution (which always satisfies the NS condition)
obeying $p(b_{1}=b_{2}|x_{1}=x_{2})=1$ one can always define a distribution
$\tilde{p}(a_{0},a_{1})$ such that, by construction
$\tilde{p}(a_{0},a_{1}|x_{1},x_{2})=\tilde{p}(a_{0},a_{1})$, i.e., it
satisfies the NSD condition. More precisely, for any quantum distribution
obeying $p(b_{1}=b_{2}|x_{1}=x_{2})=1$, it holds that,
$\displaystyle
p(b_{1},b_{2}|x_{1},x_{2})=\left\\{\begin{array}[]{ll}\delta_{b_{1},b_{2}}\tilde{p}(a_{0}=b_{1})&\mbox{,
if }x_{1}=0,\mbox{ and }x_{2}=0\\\ \tilde{p}(a_{0}=b_{1},a_{1}=b_{2})&\mbox{,
if }x_{1}=0,\mbox{ and }x_{2}=1\\\ \tilde{p}(a_{1}=b_{1},a_{0}=b_{2})&\mbox{,
if }x_{1}=1,\mbox{ and }x_{2}=0\\\
\delta_{b_{1},b_{2}}\tilde{p}(a_{1}=b_{1})&\mbox{, if }x_{1}=1,\mbox{ and
}x_{2}=1\end{array}\right.,$ (73)
in which we have just proven the second and the third cases; one can obtain
the first and the fourth by applying the NS condition, combined with the
imposition $p(b_{1}=b_{2}|x_{1}=x_{2})=1$, to the second and third cases.
Another way of writing Eq. (73) is as follows
$\displaystyle
p(b_{1},b_{2}|x_{1},x_{2})=\sum_{a_{1},a_{2}}\delta_{b_{1},f_{a_{1},a_{2}}(x_{1})}\delta_{b_{2},f_{a_{1},a_{2}}(x_{2})}p(a_{1},a_{2}),$
(74)
in which
$\displaystyle f_{a_{1},a_{2}}(x)=a_{x}.$ (75)
By definition, Eq. (74) is a local hidden-variable model for
$p(b_{1},b_{2}|x_{1},x_{2})$, implying that this distribution cannot violate
the CHSH inequality, as announced in Result 4.
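Result 4 can be illustrated numerically: building $p(b_{1},b_{2}|x_{1},x_{2})$ from Eq. (74) with $f_{a_{1},a_{2}}(x)=a_{x}$ for an arbitrary hidden-variable distribution, every CHSH sign placement stays at or below the local bound of 2 (a plain-Python sketch; the names are our own):

```python
import itertools
import random

def chsh_from_lhv(p_hidden):
    """Build p(b1, b2 | x1, x2) via Eq. (74), f_{a1,a2}(x) = a_x, and return
    the largest |CHSH| over the four placements of the minus sign."""
    def p(b1, b2, x1, x2):
        return sum(q for (a1, a2), q in p_hidden.items()
                   if b1 == (a1, a2)[x1] and b2 == (a1, a2)[x2])
    # correlators E_{x1 x2} = sum (-1)^{b1+b2} p(b1, b2 | x1, x2)
    E = {(x1, x2): sum((-1) ** (b1 + b2) * p(b1, b2, x1, x2)
                       for b1 in (0, 1) for b2 in (0, 1))
         for x1 in (0, 1) for x2 in (0, 1)}
    return max(abs(sum(sign[x] * E[x] for x in E))
               for sign in (dict(zip(E, s))
                            for s in itertools.product((1, -1), repeat=4)
                            if list(s).count(-1) == 1))

def random_lhv_chsh(seed=0):
    """CHSH value of the model (74) for a random hidden distribution."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(4)]
    t = sum(w)
    p_hidden = {(a1, a2): w[2 * a1 + a2] / t
                for a1 in (0, 1) for a2 in (0, 1)}
    return chsh_from_lhv(p_hidden)
```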
### VIII.6 Mapping the experimental scheme to the Darwin scenario
In section IV we described the dynamics of the system treating the temporal
modes of the two photons independently, to simplify the presentation. Here we
describe in more detail the evolution of the system and environment in terms
of their joint temporal mode. In the experimental scheme, two photons,
generated by a collinear type-II SPDC (Spontaneous Parametric Down-Conversion)
process in a ppKTP (Periodically-Poled Potassium Titanyl Phosphate) crystal
(see section IV), interact with a birefringent crystal in one of the branches
of the Sagnac interferometer. In this interaction, the polarization and the
temporal mode of the two photons get entangled. The strength of the
interaction can be tuned by increasing the thickness of the birefringent
plate.
In order to map the scheme into the quantum Darwinism scenario described in
section II, we identify the polarization of the two photons after they leave
the interferometer as two fragments of the environment, identified by the
four-dimensional Hilbert space
$\mathcal{H}_{B_{1}}\otimes\mathcal{H}_{B_{2}}$. On the other hand, the joint
temporal mode of the two photons represents the quantum system probed by the
environment, corresponding to the Hilbert space $\mathcal{H}_{A}$. A joint
temporal
state of the two photons generated in our scheme can be written, in general,
in the following form:
$\left|f_{HV}\right\rangle=\int
d\omega_{H}d\omega_{V}\tilde{f}(\omega_{H},\omega_{V})a^{\dagger}_{H,\omega_{H}}a^{\dagger}_{V,\omega_{V}}\left|0\right\rangle=\int
dt_{H}dt_{V}f(t_{H},t_{V})A^{\dagger}_{H,t_{H}}A^{\dagger}_{V,t_{V}}\left|0\right\rangle\;,$
(76)
where $a^{\dagger}_{H,\omega_{H}},a^{\dagger}_{V,\omega_{V}}$ are the creation
operators for photons with horizontal and vertical polarization respectively,
while $A^{\dagger}_{P,t}$ is defined as
$A^{\dagger}_{P,t}=\int\frac{d\omega}{2\pi}e^{-i\omega
t}a^{\dagger}_{P,\omega}\;.$ (77)
If we consider a monochromatic pump with frequency $\omega_{p}$, the joint
spectral amplitude is
$\tilde{f}(\omega_{H},\omega_{V})=\delta(\omega_{p}-\omega_{H}-\omega_{V})\tilde{f}^{\prime}(\omega_{H},\omega_{V})$,
where $\tilde{f}^{\prime}(\omega_{H},\omega_{V})$ corresponds to the
phase-matching function in the SPDC process [59], and its Fourier transform,
i.e. the joint temporal function, will depend only on the difference $\Delta
t=t_{H}-t_{V}$, apart from a global phase:
$f(t_{H},t_{V})=e^{i\omega_{p}(t_{H}+t_{V})/2}g(t_{H}-t_{V})$.
The birefringent crystal will change this temporal mode by introducing a shift
$t_{H}\rightarrow t_{H}+\tau$. The overlap with the original time mode is then
given by
$\Delta=\left\langle f_{HV}\mid
f_{HV}^{\tau}\right\rangle=e^{i\omega_{p}\tau/2}\int d\Delta t\;g^{*}(\Delta
t)g(\Delta t+\tau),$ (78)
where $\left|f_{HV}^{\tau}\right\rangle$ is the state with the translated
temporal mode $f(t_{H}+\tau,t_{V})$. Using the notation just introduced we can
describe the state in the interferometer, after the birefringent crystal, as
$\frac{1}{\sqrt{2}}\left(e^{-i\phi}\left|f_{HV},\mathrm{CW}\right\rangle+\left|f_{HV}^{\tau},\mathrm{CCW}\right\rangle\right)=\frac{1}{\sqrt{2}}\left(e^{-i\phi}\left|f_{HV},\mathrm{CW}\right\rangle+\Delta\left|f_{HV},\mathrm{CCW}\right\rangle+\gamma\left|f_{HV}^{\perp},\mathrm{CCW}\right\rangle\right)$
(79)
where $\mathrm{CCW},\mathrm{CW}$ represent the counter-clockwise and clockwise
orientations inside the interferometer, and
$\gamma\left|f_{HV}^{\perp}\right\rangle=(\mathds{I}-\left|f_{HV}\right\rangle\left\langle
f_{HV}\right|)\left|f_{HV}^{\tau}\right\rangle$ is the projection of
$\left|f_{HV}^{\tau}\right\rangle$ on the space orthogonal to
$\left|f_{HV}\right\rangle$, with $\gamma=\sqrt{1-|\Delta|^{2}}$. The phase
$\phi$ can be appropriately tuned using a liquid crystal (not shown in Fig. 3).
If we now associate a qubit
$\left|0\right\rangle_{A},\left|1\right\rangle_{A}$ to
$\left|f_{HV}\right\rangle$ and its orthogonal state, respectively, we end up,
after the interferometer, with the state
$\frac{1}{\sqrt{2}}\left(e^{-i\phi}\left|V\right\rangle_{B_{1}}\left|H\right\rangle_{B_{2}}+\Delta\left|H\right\rangle_{B_{1}}\left|V\right\rangle_{B_{2}}\right)\left|0\right\rangle_{A}+\frac{\gamma}{\sqrt{2}}\left|H\right\rangle_{B_{1}}\left|V\right\rangle_{B_{2}}\left|1\right\rangle_{A}\;,$
(80)
which, after tracing out $A$, corresponds exactly to the state (25).
This shows that the internal degree of freedom probed by the environment is
precisely the _joint_ temporal mode of the two photons and, in particular, the
mode $\left|f_{HV}\right\rangle$ and its orthogonal complement.
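The overlap (78) can be evaluated for a concrete temporal amplitude. The sketch below assumes a Gaussian $g(t)$ of width $\sigma$ (an illustrative assumption; the actual joint temporal function is set by the phase matching of the ppKTP crystal) and computes $|\Delta|$ by numerical integration, which for this choice equals $e^{-\tau^{2}/8\sigma^{2}}$:

```python
import math

def overlap_modulus(tau, sigma, t_max=10.0, n_steps=4000):
    """|Delta| from Eq. (78) for a Gaussian temporal amplitude
    g(t) = (2*pi*sigma^2)^(-1/4) * exp(-t^2 / (4*sigma^2)),
    computed by trapezoidal integration of g(t) g(t + tau)."""
    def g(t):
        return ((2 * math.pi * sigma ** 2) ** -0.25
                * math.exp(-t * t / (4 * sigma ** 2)))
    dt = 2 * t_max / n_steps
    ts = [-t_max + i * dt for i in range(n_steps + 1)]
    vals = [g(t) * g(t + tau) for t in ts]
    # trapezoidal rule; endpoint contributions are negligible at +-10 sigma
    return dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```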
### VIII.7 Ab-initio optimization protocol
The gradient-free optimization algorithm employed in our proof-of-principle
experiment is based on the Stochastic Nelder-Mead (SNM) algorithm introduced
in Ref. [60], considered an improvement over the well-known Nelder-Mead
algorithm when dealing with noisy cost functions
$F(\vec{x})=f(\vec{x})+\mathcal{N}(\vec{x})$. In particular, it tends to avoid
getting stuck in a local optimum of the function provided to the algorithm,
which could be due to statistical fluctuations in the noise term
$\mathcal{N}(\vec{x})$. This algorithm has recently been used for the
ab-initio optimization of the violation of Bell inequalities and for
randomness generation within the device-independent approach in experimental
tasks [49]. In this approach, the experimental apparatus is treated as a black
box,
that is, the action of the controllable parameters (see Fig. 6 a) is unknown
to the algorithm and only the noisy output statistics are used to reach the
optimal value of a given cost function, which in our case is composed of the
agreement between the observers and, subsequently, the CHSH parameter.
Figure 6: $\bm{a)}$ Conceptual scheme of the ab-initio optimization process in
an experimental scenario. Both the photonic system being produced and the
measurement stations performing polarization measurement are unknown to the
algorithm, i.e. they are treated as black boxes. Nonetheless, the goal of the
algorithm is to find the maximal value of a noisy function $S(\vec{x})$, e.g.
the CHSH quantity, by progressively tuning the parameters $\vec{x}$, which
have an a priori unknown relation with the measurements being carried out. In
this case, the parameters $\vec{x}$ have a one-to-one correspondence with the
rotation of two motorized waveplates. $\bm{b)}$
Pictorial representation of possible evolutions of the geometrical simplex
tracked by the SNM algorithm as described carried out by the SNM algorithm in
Appendix G. First, the barycenter $x_{bar}$ of the simplex is computed,
together with a reflection point $x_{ref}$. If such a point is promising, i.e.
$F(x_{ref})\leq F(x_{min})$, an expansion point $x_{exp}$ is computed. On the
contrary, if $F(x_{max})\leq F(x_{ref})$, a contraction point $x_{contr}$ is
tested. If such a step fails to find a point such that $F(x_{contr})\leq
F(x_{max})$, an Adaptive Random Search is performed. With probability $1-P$, a
local search occurs: a new point $x_{RNG}$ is uniformly sampled in a
hypersphere of radius $\epsilon$ centered in a randomly chosen simplex point.
The SNM algorithm performs the optimization of a given cost function over a
$d$-dimensional parameter space, finding
$\min_{\vec{x}}F(\vec{x})\quad\mathrm{where}\quad
F(\vec{x}):\mathbb{R}^{d}\rightarrow\mathbb{R},$ (81)
starting from an initial simplex of $d+1$ points
$\Sigma_{0}=\\{x_{0},\dots,x_{d}\\}$, sampled in a given parameter range
through the Latin Hypercube Sampling algorithm [60].
The algorithm flow proceeds as follows (see also Fig. 6 b for a graphical
representation of the simplex evolutions):
0.
At first, the cost function $F(\cdot)$ is evaluated at each point of the
initial simplex $\Sigma_{0}$.
1.
If $|\Sigma_{k}|>d+1$, the point at which the cost function assumes the
highest value is discarded.
2.
The remaining points of the simplex $\Sigma_{k}$ are ranked with respect to
the value of the cost function $F(x_{i})$, and $x_{max}$, $x_{2ndmax}$, and
$x_{min}$, i.e. the points at which the cost function assumes the highest,
second highest, and lowest values, are identified.
3.
The barycenter of the simplex points is computed as
$x_{bar}=\sum_{x\in\Sigma_{k}\setminus x_{max}}x/d,$
and the reflection point is computed as
$x_{ref}=(1+\alpha)x_{bar}-\alpha x_{max};\quad\alpha>0.$
The cost function $F(x_{ref})$ is evaluated. If $F(x_{min})\leq
F(x_{ref})<F(x_{2ndmax})$, then $\Sigma_{k+1}=\Sigma_{k}\cup x_{ref}$. Else:
(a)
If $F(x_{ref})\leq F(x_{min})$, further expand the simplex in the direction of
$x_{ref}$, computing an expansion point:
$x_{exp}=(1-\gamma)x_{bar}+\gamma x_{ref};\quad\gamma>1.$
Compute the cost function in $x_{exp}$: if $F(x_{exp})<F(x_{ref})$ then
$\Sigma_{k+1}=\Sigma_{k}\cup x_{exp}$, else $\Sigma_{k+1}=\Sigma_{k}\cup
x_{ref}$. Go back to step 1.
(b)
If $F(x_{2ndmax})\leq F(x_{ref})<F(x_{max})$, perform an external contraction:
$x_{contr}=(1-\beta)x_{bar}+\beta x_{ref};\quad 0\leq\beta\leq 1.$
Compute the cost function in the contracted point $x_{contr}$: if
$F(x_{contr})\leq F(x_{ref})$, impose $\Sigma_{k+1}=\Sigma_{k}\cup x_{contr}$
and go back to step 1.
(c)
If instead $F(x_{max})\leq F(x_{ref})$, perform an internal contraction:
$x_{contr}=(1-\beta)x_{bar}+\beta x_{max};\quad 0\leq\beta\leq 1.$
Compute the cost function in the contracted point $x_{contr}$: if
$F(x_{contr})\leq F(x_{max})$, impose $\Sigma_{k+1}=\Sigma_{k}\cup x_{contr}$
and go back to step 1.
4.
If the contraction fails, perform an Adaptive Random Search (ARS),
stochastically sampling the parameter space:
(a)
With probability $P$, a global search is performed, choosing a point $x_{RNG}$
uniformly sampling the parameter space within given bounds for each parameter.
(b)
With probability $1-P$, a local search is performed: a point of the simplex
$x_{i}$ is randomly chosen and $x_{RNG}$ is uniformly sampled within an
hypersphere of radius $\epsilon$ centered in $x_{i}$.
The adaptive random search is performed until a point such that
$F(x_{RNG})<F(x_{max})$ is found. Impose $\Sigma_{k+1}=\Sigma_{k}\cup x_{RNG}$
and go back to step 1.
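The flow above can be condensed into a short sketch. The following is a minimal, illustrative implementation rather than the authors' code: the hyperparameter defaults, the plain uniform initial sampling standing in for Latin Hypercube Sampling, and the radius-uniform ball sampling in the ARS step are our own simplifying assumptions.

```python
import numpy as np

def snm_minimize(F, bounds, n_evals=200, alpha=1.0, gamma=2.0, beta=0.5,
                 P=0.3, eps=0.1, rng=None):
    """Minimal sketch of the Stochastic Nelder-Mead loop (steps 0-4 above).

    F      : noisy cost function R^d -> R
    bounds : sequence of (low, high) pairs, one per parameter
    """
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)
    d = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    # Step 0: initial simplex of d+1 points (plain uniform sampling here
    # stands in for the Latin Hypercube Sampling used in the paper).
    X = lo + (hi - lo) * rng.random((d + 1, d))
    fX = np.array([F(x) for x in X])
    evals = d + 1
    while evals < n_evals:
        # Steps 1-2: discard the worst extra point, rank the rest.
        order = np.argsort(fX)
        X, fX = X[order][:d + 1], fX[order][:d + 1]
        x_max, f_min, f_2ndmax, f_max = X[-1], fX[0], fX[-2], fX[-1]
        # Step 3: reflect x_max through the barycenter of the other points.
        x_bar = X[:-1].mean(axis=0)
        x_ref = (1 + alpha) * x_bar - alpha * x_max
        f_ref = F(x_ref); evals += 1
        if f_min <= f_ref < f_2ndmax:
            x_new, f_new = x_ref, f_ref
        elif f_ref < f_min:                       # (a) expansion
            x_exp = (1 - gamma) * x_bar + gamma * x_ref
            f_exp = F(x_exp); evals += 1
            x_new, f_new = (x_exp, f_exp) if f_exp < f_ref else (x_ref, f_ref)
        else:                                     # (b)/(c) contraction
            x_tgt = x_ref if f_ref < f_max else x_max
            x_con = (1 - beta) * x_bar + beta * x_tgt
            f_con = F(x_con); evals += 1
            if f_con <= min(f_ref, f_max):
                x_new, f_new = x_con, f_con
            else:
                # Step 4: Adaptive Random Search until x_max is improved on.
                while True:
                    if rng.random() < P:          # global search
                        x_new = lo + (hi - lo) * rng.random(d)
                    else:                         # local search near a vertex
                        centre = X[rng.integers(d + 1)]
                        step = rng.normal(size=d)
                        step *= eps * rng.random() / np.linalg.norm(step)
                        x_new = np.clip(centre + step, lo, hi)
                    f_new = F(x_new); evals += 1
                    if f_new < f_max or evals >= n_evals:
                        break
        X, fX = np.vstack([X, x_new]), np.append(fX, f_new)
    best = int(np.argmin(fX))
    return X[best], fX[best]
```

On a smooth cost function the reflection, expansion, and contraction steps dominate; the ARS step only fires when a contraction fails, which is most useful when $\mathcal{N}(\vec{x})$ traps the simplex at a spurious optimum.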
Within our experimental approach, performing polarization measurements of
single photon pairs, each observable $B^{i}_{j}$ is associated with two
parameters $(\delta^{i}_{j},\phi^{i}_{j})$ corresponding to the rotation
angles of the optical axes of the two waveplates implementing, together with
the PBS and single-photon detectors, such measurements. Although the SNM
algorithm has no knowledge of this physical equivalence, the parameters over
which it has to optimize a given cost function have a one-to-one
correspondence with a set of angles $(\delta^{i}_{j},\phi^{i}_{j})$, which are
then provided to the waveplates’ motorized rotation mounts, while expectation
values $\left<B^{k}_{1}B^{l}_{2}\right>$ are obtained by recording
photon-coincidence counts on single-photon avalanche photodiodes within a
window of $2.4$ ns.
Moreover, note that in our proof-of-principle experiment, we need to perform a
constrained optimization of the CHSH parameter:
$\mathrm{CHSH}=\left<B^{0}_{1}B^{0}_{2}\right>+\left<B^{0}_{1}B^{1}_{2}\right>-\left<B^{1}_{1}B^{0}_{2}\right>+\left<B^{1}_{1}B^{1}_{2}\right>$ (82)
under the constraint $\left<B^{0}_{1}B^{0}_{2}\right>=1-2\epsilon$, for a
given $\epsilon$. To address this problem, we perform first an optimization of
the agreement between the two observers and then the optimization of the CHSH
value, according to the following two-step process:
1.
First, fix a value of $\epsilon$ and perform the 4-parameter minimization of
the function:
$A=\left\lvert\left<B^{0}_{1}B^{0}_{2}\right>-(1-2\epsilon)\right\rvert$
The minimization is performed for 100 proposed points, with early stopping if
$A<0.03$.
2.
Keeping fixed the parameters related to $\\{B^{0}_{1},B^{0}_{2}\\}$, perform a
minimization of:
$\mathrm{S}=-\left\lvert\left<B^{0}_{1}B^{0}_{2}\right>+\left<B^{0}_{1}B^{1}_{2}\right>-\left<B^{1}_{1}B^{0}_{2}\right>+\left<B^{1}_{1}B^{1}_{2}\right>\right\rvert$
Extract $S$ and $\epsilon=\frac{1-\left<B^{0}_{1}B^{0}_{2}\right>}{2}$ from
the best point in the simplex after 250 explored points in the 4-dimensional
parameter space corresponding to $\\{B^{1}_{1},B^{1}_{2}\\}$.
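The two cost functions of this procedure can be sketched against a toy correlation model: the ideal-state correlation $E(a,b)=\cos 2(a-b)$ stands in for the measured expectation values $\left<B^{k}_{1}B^{l}_{2}\right>$, and the parametrization by two analyzer angles per observer is our own simplification of the waveplate parameters.

```python
import numpy as np

# Toy stand-in for the measured expectation values <B^k_1 B^l_2>: for an
# ideal maximally entangled polarization state, measurements at analyzer
# angles (a, b) give the correlation E(a, b) = cos(2(a - b)).
def E(a, b):
    return np.cos(2.0 * (a - b))

def agreement_cost(settings, eps):
    """Step 1 cost: A = |<B^0_1 B^0_2> - (1 - 2*eps)|,
    minimized over the settings (a0, b0) of the first observables."""
    a0, b0 = settings
    return abs(E(a0, b0) - (1.0 - 2.0 * eps))

def chsh_cost(settings, a0, b0):
    """Step 2 cost: S = -|CHSH| with the sign pattern of Eq. (82),
    with the step-1 settings (a0, b0) held fixed and (a1, b1) free."""
    a1, b1 = settings
    chsh = E(a0, b0) + E(a0, b1) - E(a1, b0) + E(a1, b1)
    return -abs(chsh)
```

In this toy model, with $\epsilon=0$ the step-1 cost vanishes at $(a_{0},b_{0})=(0,0)$, and the CHSH combination of Eq. (82) then evaluates to $2.5$ at $(a_{1},b_{1})=(\pi/3,\pi/6)$.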
# Distribution-Free Joint Independence Testing and Robust Independent
Component Analysis Using Optimal Transport
Ziang Niu, Applied Mathematics and Computational Science, University of
Pennsylvania, Philadelphia, USA (<EMAIL_ADDRESS>) and Bhaswar B.
Bhattacharya, Department of Statistics and Data Science, University of
Pennsylvania, Philadelphia, USA (<EMAIL_ADDRESS>)
###### Abstract.
In this paper, we study the problem of measuring and testing joint
independence for a collection of multivariate random variables. Using the
emerging theory of optimal transport (OT)-based multivariate ranks, we propose
a distribution-free test for multivariate joint independence. Towards this we
introduce the notion of rank joint distance covariance $(\mathrm{RJdCov})$,
the higher-order rank analogue of the celebrated distance covariance measure,
which can capture the dependencies among all the subsets of the variables. The
$\mathrm{RJdCov}$ can be easily estimated from the data without any moment
assumptions and the associated test for joint independence is universally
consistent. We derive the asymptotic null distribution of the
$\mathrm{RJdCov}$ estimate, using which we can readily calibrate the test
without any knowledge of the (unknown) marginal distributions (due to the
distribution-free property). We also provide an efficient data-agnostic
resampling-based implementation of the test which controls Type-I error in
finite samples and is consistent with only a fixed number of resamples. In
addition to being distribution-free and universally consistent, the proposed
test is also statistically efficient, that is, it has non-trivial asymptotic
(Pitman) efficiency (power against $1/\sqrt{n}$ alternatives). We demonstrate
this by computing the limiting local power of the test for both mixture
alternatives and joint Konijn alternatives. We then use the $\mathrm{RJdCov}$
measure to develop a new method for independent component analysis (ICA) that
is easy to implement and robust to outliers and contamination. Extensive
simulations are performed to illustrate the efficacy of the proposed test in
comparison to other existing methods. Finally, we apply the proposed method to
learn the higher-order dependence structure among different US industries
based on stock prices. As a byproduct of our theoretical analysis, we develop
a version of Hoeffding’s classical combinatorial central limit theorem for
multiple independent permutations and a multivariate Hájek representation
result for joint rank statistics, which might be of independent interest.
###### Key words and phrases:
Asymptotic efficiency, combinatorial central limit theorem, distance
correlation, joint dependence measures, multivariate ranks, optimal transport,
Stein’s method.
## 1\. Introduction
Given $r$ random vectors taking values in
$\mathbb{R}^{d_{1}},\mathbb{R}^{d_{2}},\ldots,\mathbb{R}^{d_{r}}$ with
marginal laws $\mu_{1},\mu_{2},\ldots,\mu_{r}$, respectively, and joint law
$\mu$ on $\mathbb{R}^{d}$, where $d=\sum_{s=1}^{r}d_{s}$, the mutual
independence testing problem is given by
$\displaystyle
H_{0}:\mu=\mu_{1}\otimes\mu_{2}\otimes\cdots\otimes\mu_{r}\qquad\text{versus}\qquad
H_{1}:\mu\neq\mu_{1}\otimes\mu_{2}\otimes\cdots\otimes\mu_{r},$ (1.1)
where $\mu_{1}\otimes\mu_{2}\otimes\cdots\otimes\mu_{r}$ is the product of the
marginal distributions $\mu_{1},\mu_{2},\ldots,\mu_{r}$. This has been
extensively studied in the case $r=2$, which is the classical pairwise
independence testing problem for a collection of random variables. For
univariate distributions, that is, $d_{1}=d_{2}=1$, nonparametric tests for
pairwise independence begin with the celebrated results of Hoeffding [37] and
Blum et al. [7]. These tests are distribution-free, that is, the distributions
of the test statistics under $H_{0}$ do not depend on the (unknown) marginal
distributions, and consistent against a general class of alternatives. Since
then a slew of methods for measuring and testing univariate pairwise
independence have been proposed. These include, among several others, results
of Yanagimoto [75], Feuerverger [21], Bergsma and Dassios [5], Nandy et al.
[56], Heller et al. [36] and the recent breakthrough of Chatterjee [14, 15].
Pairwise independence testing is more challenging for dimensions greater than
1, that is, either $d_{1}>1$ or $d_{2}>1$, because of the lack of a natural
ordering in multivariate data. One of the most popular methods for pairwise
multivariate independence testing is the celebrated distance covariance
($\mathrm{dCov}$) of Székely et al. [70] and Székely and Rizzo [69], which
measures dependence between 2 random vectors based on the difference of their
joint and marginal characteristic functions and can be estimated using the
correlation among the pairwise distances. The distance covariance is zero if
and only if the null hypothesis of independence holds, provided $\mu_{1}$ and
$\mu_{2}$ have finite first moments. Another approach is based on the Hilbert-
Schmidt independence criterion (HSIC), which also provides a consistent
kernel-based test for pairwise independence (see Gretton et al. [28, 27, 29]).
Sejdinovic et al. [65] showed that distance covariance and the kernel-based
independence criterion are, in fact, equivalent if the kernel is chosen based
on the relevant distance function. Other popular methods include mutual
information-based tests [2, 6, 43, 74], graph-based methods [19, 34, 22], the
maximal information coefficient [61], ranking of interpoint distances [35,
55], ball covariance [58], and binning approaches based on partitions of the
sample space [35, 50, 77] (see [41, 49] for reviews of the various methods).
However, none of the aforementioned multivariate tests simultaneously inherit
the distribution-free and universal consistency properties of the rank-based
univariate tests. A breakthrough in this direction was made recently by Deb
and Sen [18] and Shi et al. [67] using optimal transport (OT) based
multivariate ranks [16, 31, 30]. The OT-based tests for pairwise independence
are distribution-free in finite samples, computationally feasible, universally
consistent (that is, the power of the tests converge to 1 as the sample size
increases), and enjoy attractive efficiency properties [68] (see also [20,
50]).
In this paper we consider the mutual independence testing problem (1.1) for
more than 2 variables. Measuring and testing higher-order dependence111We say
a collection of random variables $X_{1},\ldots,X_{r}$ has higher-order
dependency if they are pairwise independent but not jointly independent. and
understanding the dependence structure between the variables are much more
involved tasks than detecting pairwise dependence. Examples of such higher-
order dependencies abound in the literature. This includes, among others,
applications in diagnostic checking for directed acyclic graphs (DAGs), where
the noise variables are assumed to be jointly independent, hence inferring
pairwise independence is not enough, and independent component analysis (ICA),
which entails finding a suitable transformation of multivariate data with
mutually independent components (see Section 5 for details). In fact, Böttcher
[8] collects over 350 datasets featuring statistically significant higher-
order dependencies. While there are a few results for mutual independence
testing in the univariate case, where $d_{1}=d_{2}=\cdots=d_{r}=1$ (see [4,
17, 23, 42, 60] and the references therein), in higher dimensions the problem
is much more challenging. Towards this, Pfister et al. [59] generalized the
HSIC to multiple variables and obtained kernel-based tests for joint independence
(see also Sejdinovic et al. [64] for a kernel-based 3-variable interaction
test). Recently, Böttcher et al. [9] and Chakraborty and Zhang [12] proposed
higher-order generalizations of the distance covariance, which provide
consistent tests for multivariate joint independence. This is referred to as
the total distance multivariance by Böttcher et al. [9] or the joint distance
covariance ($\mathrm{JdCov}$) by Chakraborty and Zhang [12]. One of the
attractive properties of the distance multivariance/$\mathrm{JdCov}$ is that
it has a hierarchical structure that can capture the dependence among any
subset of the variables. Recently, Roy et al. [63] proposed tests for joint
independence based on pairwise distances and linear projections (see also
Banerjee and Ghosh [3] for generalizations to functional data). However, none
of the aforementioned methods possesses the distribution-free property of the
univariate methods. In fact, these tests usually have intractable (non-
Gaussian) asymptotic distributions under $H_{0}$ that depend on the (unknown)
marginal distributions and are generally calibrated using permutation methods.
In this paper we propose a distribution-free test for joint independence using
the framework of OT-based multivariate ranks. Towards this, we introduce the
notion of rank joint distance covariance $(\mathrm{RJdCov})$, the rank
analogue of the $\mathrm{JdCov}$, which is obtained by aggregating the
(higher-order) rank distance covariances ($\mathrm{RdCov}$) over all the
subsets of variables (Section 2.4). The higher-order $\mathrm{RdCov}$
characterizes the mutual independence of a subset of variables given their
lower-order independence and the $\mathrm{RJdCov}$ characterizes the total
mutual independence of all the $r$ variables (Proposition 2.10). The
$\mathrm{RdCov}$ and, hence, the $\mathrm{RJdCov}$ can be consistently
estimated from data without any moment assumptions (Theorem 2.12).
Consequently, since OT-based multivariate ranks are themselves distribution-
free (see Section 2.3), we can construct a distribution-free test for
multivariate joint independence based on the estimated $\mathrm{RJdCov}$. The
proposed test has the following properties:
* •
Distribution-free, universally consistent tests: Since $\mathrm{RJdCov}$ is
well-defined whenever the marginal distributions
$\mu_{1},\mu_{2},\ldots,\mu_{r}$ are absolutely continuous and is zero if and
only if the null hypothesis of joint independence holds, our proposed test is
consistent whenever the joint distribution has absolutely continuous marginals
(Proposition 3.2). Moreover, we can use the $\mathrm{RdCov}$ estimates to
obtain distribution-free, universally consistent tests for higher-order
independence of a subset of variables given their lower-order independence. For
example, if a collection of variables is pairwise independent, the
$\mathrm{RJdCov}$ measure can be used to construct distribution-free,
consistent tests for 3-way or higher-order dependence (see Remark 3.3 for a
discussion and Section 7 for an application in real data).
* •
Asymptotic null distribution: In Section 3.1 we derive the limiting
distribution of the proposed test under the null hypothesis of joint
independence. In fact, we derive the limiting joint distribution of the
$\mathrm{RdCov}$ estimates for all the subsets of $r$ variables, which can be
expressed as a squared integral of a certain Gaussian process or,
equivalently, as an infinite weighted sum of independent chi-squared
distributions (Theorem 3.7). One remarkable property of the limiting
distribution is that the distributions of the $\mathrm{RdCov}$ estimates
across the various subsets are asymptotically independent, thereby
facilitating the hierarchical testing for higher-order dependence mentioned
above. Since the OT-based multivariate ranks are distributed as uniform random
permutations, the $\mathrm{RdCov}$ estimates can be expressed as a
combinatorial sum indexed by multiple (more than 1) independent permutations.
For this we use Stein’s method based on exchangeable pairs to develop a
version of Hoeffding’s classical combinatorial central limit theorem for
multi-dimensional arrays (tensors) indexed by independent permutations (see
Theorem A.1 in Appendix A), a result which might be of independent interest.
The distribution-free property implies that the asymptotic null distribution
of the test statistic does not depend on the distribution of the data
generating mechanism, hence, can be readily used to calibrate the test
statistic.
* •
Finite sample properties: The distribution-free property also allows us to
approximate the quantiles of the null distribution in finite samples using a
data-agnostic resampling technique. Consequently, we can calibrate the test
statistic to have finite sample level $\alpha$ (Section 3.2). Interestingly,
this finite sample implementation is consistent with only a finite number of
resamples (Theorem 3.11). Hence, we can implement the proposed test both
asymptotically and in finite samples efficiently, without having to estimate
any nuisance parameters of the marginal distributions or use computationally
intensive data-dependent permutation techniques to approximate the rejection
thresholds.
* •
Asymptotic efficiency: In Section 4 we establish the asymptotic (Pitman)
efficiency of the proposed test by computing its limiting power for local
contiguous alternatives (alternatives shrinking towards $H_{0}$ at rate
$O(1/\sqrt{n})$) [71]. Specifically, we derive the asymptotic distribution of
the proposed test for two kinds of local alternatives: mixture alternatives
(Theorem 4.1) and joint Konijn alternatives (Theorem 4.4). To the best of our
knowledge, these are the first results on the efficiency properties of
nonparametric joint independence tests. This implies that the proposed test,
in addition to being distribution-free, universally consistent, and
computationally feasible, is also statistically efficient, making it
particularly attractive for modern data applications. The proof of the local
power analysis is based on a Hájek representation result for the
$\mathrm{RdCov}$ estimates (Proposition 4.5), which allows us to replace the
empirical rank maps with their population counterparts without altering the
limiting distribution of the test statistic under $H_{0}$.
Going beyond hypothesis testing, the $\mathrm{RJdCov}$ can be used more
generally to quantify deviations from joint independence. Specifically, in
Section 5 using the $\mathrm{RJdCov}$ measure as an objective function we
develop a new method for the independent component analysis (ICA) problem that
is robust to outliers and contamination. ICA is a method for extracting
independent components from multivariate data that emerged from research in
artificial neural networks and has found applications in blind source
separation, feature extraction, computational biology, and time series
analysis (see [39] for a review). Our estimator, which is obtained by
minimizing the $\mathrm{RJdCov}$ measure, can be computed efficiently and is
consistent for the independent components (Theorem 5.2). We illustrate the
effectiveness of the proposed ICA estimator with the approach of Matteson and
Tsay [52] based on the $\mathrm{dCov}$ in various simulation settings (Section
6.4).
In Section 6 we compare the performance of the proposed test with other
popular tests for joint independence, such as the dHSIC [59], $\mathrm{JdCov}$
[12], and the $\mathrm{dCov}$ based measure in [52]. Our test performs well
across a variety of data distributions and is especially powerful compared to
the existing tests in contamination models and heavy-tailed distributions.
Finally, in Section 7 we apply our method to a dataset consisting of stock
prices of different companies in the USA. Our approach sheds more light on the
structure of the higher-order dependencies in this data, producing results
that are more interpretable than those of existing methods. Implementations of
all the tests can be found on GitHub.
## 2\. Rank-Based Joint Independence Measures
In this section, we define a rank-based measure of joint independence by
combining higher-order distance covariances with optimal transportation of
measures. We review the relevant concepts regarding higher-order distance
covariance and joint dependence measures in Section 2.2 and OT-based
multivariate ranks in Section 2.3. The rank-based joint dependence measures
are introduced in Section 2.4 and their estimation is discussed in Section
2.5. We begin by introducing some notations.
### 2.1. Notations
Fix $r\geq 2$ and suppose $\bm{X}=(X_{1},\ldots,X_{r})$ is a random vector,
where each $X_{i}$ is a random variable taking values in $\mathbb{R}^{d_{i}}$,
for $1\leq i\leq r$ and $d_{0}=\sum_{i=1}^{r}d_{i}$. The characteristic
function of $X_{i}$ will be denoted by
$\phi_{X_{i}}(t)=\mathbb{E}\left[e^{\iota\langle t,X_{i}\rangle}\right],$
for $t\in\mathbb{R}^{d_{i}}$.
###### Definition 2.1.
The random vector $\bm{X}=(X_{1},\ldots,X_{r})$ is said to be $q$-independent
(for some $2\leq q\leq r$) if for any sub-family
$\\{i_{1},i_{2},\ldots,i_{q}\\}\subset\\{1,2,\ldots,r\\}$ the random variables
$X_{i_{1}},X_{i_{2}},\ldots,X_{i_{q}}$ are mutually independent.
Throughout we will denote
$w_{d_{i}}(t):=\frac{1}{c_{d_{i}}\|t\|^{1+d_{i}}}\quad\text{where
}c_{d_{i}}:=\frac{\pi^{\frac{1+d_{i}}{2}}}{\Gamma\left(\frac{1+d_{i}}{2}\right)},$
for $1\leq i\leq r$. Moreover, for $t_{i}\in\mathbb{R}^{d_{i}}$ define
$\displaystyle\mathrm{d}w_{i}:=w_{d_{i}}(t_{i})\mathrm{d}t_{i}=\frac{\mathrm{d}t_{i}}{c_{d_{i}}\|t_{i}\|^{1+d_{i}}}\quad\text{and
}\quad\mathrm{d}w=\prod_{i=1}^{r}\mathrm{d}w_{i}.$ (2.1)
Finally, for $p\geq 1$, let $\mathcal{P}(\mathbb{R}^{p})$ and
$\mathcal{P}_{ac}(\mathbb{R}^{p})$ denote the collection of all probability
distributions and Lebesgue absolutely continuous probability distributions on
$\mathbb{R}^{p}$, respectively.
### 2.2. Higher-Order Distance Covariance and Joint Dependence Measures
The celebrated distance covariance $(\mathrm{dCov})$ of Székely and Rizzo [70]
is a powerful measure for independence between two random vectors. It has been
widely used for testing of independence [70, 66], feature screening [48] and
general association analysis, including canonical component analysis [79] and
independent component analysis [52]. Recently, there have been efforts to
generalize the notion of $\mathrm{dCov}$ beyond pairwise independence to joint
independence of a collection of $r>2$ random vectors. Towards this,
independently and concurrently, Böttcher et al. [9] and Chakraborty and Zhang
[12] proposed the following higher-order generalization of $\mathrm{dCov}$.
This is referred to as the distance multivariance (by Böttcher et al. [9]) or
the $r$-th order $\mathrm{dCov}$ (by Chakraborty and Zhang [12]).
###### Definition 2.2 ([9, Definition 2.1] and [12, Definition 1]).
The $r$-th order $\mathrm{dCov}$ of $(X_{1},X_{2},\ldots,X_{r})$ is defined as
the positive square root of
$\displaystyle\mathrm{dCov}^{2}(X_{1},\ldots,X_{r})=\int_{\mathbb{R}^{d_{0}}}\left|\mathbb{E}\left[\prod_{i=1}^{r}(\phi_{X_{i}}(t_{i})-e^{\iota\langle
t_{i},X_{i}\rangle})\right]\right|^{2}\mathrm{d}w.$ (2.2)
where $\phi_{X_{i}}(t_{i})=\mathbb{E}\left[e^{\iota\langle
t_{i},X_{i}\rangle}\right]$ for $t_{i}\in\mathbb{R}^{d_{i}}$, and
$\mathrm{d}w$ is as defined in (2.1).
Clearly, $\mathrm{dCov}(X_{1},X_{2},\ldots,X_{r})=0$ whenever the collection
$(X_{1},X_{2},\ldots,X_{r})$ are mutually independent. However, the converse
is only true for $r=2$, in which case (2.2) reduces to the classical
$\mathrm{dCov}$ between two random vectors $X_{1}$ and $X_{2}$ [70]. In other
words, when $r\geq 3$ the joint independence of $(X_{1},X_{2},\ldots,X_{r})$
is not a necessary condition for $\mathrm{dCov}(X_{1},X_{2},\ldots,X_{r})$ to
be zero. (For example, $\mathrm{dCov}(X_{1},X_{2},X_{3})=0$ whenever
$X_{1}\perp\\!\\!\\!\perp(X_{2},X_{3})$.) One can conclude
$(X_{1},X_{2},\ldots,X_{r})$ are jointly independent when
$\mathrm{dCov}(X_{1},X_{2},\ldots,X_{r})=0$ under the additional assumption
that $(X_{1},X_{2},\ldots,X_{r})$ are $(r-1)$-independent (see Definition 2.1
and [9, Theorem 3.4]), but not in general. To characterize joint independence
one has to consider all possible interactions between the $r$ variables
$X_{1},X_{2},\ldots,X_{r}$. This leads to the notion of the total distance
multivariance [9] or the joint distance correlation ($\mathrm{JdCov}$) [12].
Towards this, we need the following notation: For
$S\subseteq\\{1,2,\ldots,r\\}$ with $|S|\geq 2$ denote by
$\bm{X}_{S}=(X_{i})_{i\in S}$ and
$\displaystyle\mathrm{dCov}^{2}(\bm{X}_{S})=\int_{\mathbb{R}^{d_{S}}}\left|\mathbb{E}\left[\prod_{i\in
S}(\phi_{X_{i}}(t_{i})-e^{\iota\langle
t_{i},X_{i}\rangle})\right]\right|^{2}\prod_{i\in S}\mathrm{d}w_{i},$ (2.3)
where $d_{S}:=\sum_{i\in S}d_{i}$. In other words,
$\mathrm{dCov}^{2}(\bm{X}_{S})$ is the $|S|$-th order $\mathrm{dCov}$ for the
variables in $\bm{X}_{S}$.
###### Definition 2.3 ([12, Definition 2]).
Given a vector of non-negative weights $\bm{C}=(C_{2},C_{3},\ldots,C_{r})$, the
joint distance covariance $(\mathrm{JdCov})$ of the random vector
$\bm{X}=(X_{1},X_{2},\ldots,X_{r})$ is defined as:
$\displaystyle\mathrm{JdCov}^{2}(\bm{X};\bm{C})=\sum_{s=2}^{r}C_{s}\sum_{\begin{subarray}{c}S\subseteq\\{1,2,\ldots,r\\}\\\
|S|=s\end{subarray}}\mathrm{dCov}^{2}(\bm{X}_{S}).$ (2.4)
$\mathrm{JdCov}$ completely characterizes the joint independence of
$(X_{1},X_{2},\ldots,X_{r})$, that is,
$\displaystyle\mathrm{JdCov}^{2}(\bm{X};\bm{C})=0\text{ if and only if
}(X_{1},X_{2},\ldots,X_{r})\text{ are mutually independent }$ (2.5)
(see [12, Proposition 3]). Moreover, by choosing all the weights to be equal,
that is, $C_{s}=1$, for $2\leq s\leq r$, one gets the total distance
multivariance as in [9, Definition 2.1]. Chakraborty and Zhang [12] suggest
choosing $C_{s}=c^{r-s}$, for some constant $c>0$, which allows the following
simpler expression that does not require evaluating all the $(2^{r}-r-1)$
dCov terms in (2.4).
###### Lemma 2.4 (Proposition 4 in [12]).
For any $c\geq 0$,
$\displaystyle\mathrm{JdCov}^{2}(\bm{X};c):=\mathrm{JdCov}^{2}(\bm{X};(c^{r-s})_{2\leq
s\leq
r})=\mathbb{E}\left[\prod_{i=1}^{r}(U_{i}(X_{i},X_{i}^{\prime})+c)\right]-c^{r},$
where $(X_{1}^{\prime},\ldots,X_{r}^{\prime})$ is an independent copy of
$\bm{X}=(X_{1},X_{2},\ldots,X_{r})$ and, for $1\leq i\leq r$,
$\displaystyle U_{i}(x,x^{\prime})$
$\displaystyle=\mathbb{E}\|x-X_{i}^{\prime}\|+\mathbb{E}\|X_{i}-x^{\prime}\|-\|x-x^{\prime}\|-\mathbb{E}\|X_{i}-X_{i}^{\prime}\|$
$\displaystyle=\int_{\mathbb{R}^{d_{i}}}\bigg{\\{}(\phi_{X_{i}}(t)-e^{\iota\langle
t,x\rangle})(\phi_{X_{i}}(-t)-e^{-\iota\langle
t,x^{\prime}\rangle})\bigg{\\}}w_{d_{i}}(t)\mathrm{d}t.$
Depending on whether $c>1$ or $c<1$, $\mathrm{JdCov}^{2}(\bm{X};c)$ puts more
or less weight on the lower-order dependence terms, respectively. For
example, if the joint distribution of $\\{X_{1},\ldots,X_{r}\\}$ is known to
be Gaussian, where mutual independence is equivalent to pairwise independence,
larger values of $c$ should be considered. Otherwise, a smaller $c<1$ makes
more sense.
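A plug-in estimate of $\mathrm{JdCov}^{2}(\bm{X};c)$ can be sketched directly from Lemma 2.4, replacing each $U_{i}$ by its empirical (doubly centered distance) analogue and the outer expectation by an average over sample pairs. The estimator form below is inferred from the population formula rather than taken verbatim from [12], and all names are ours.

```python
import numpy as np

def u_hat(X):
    """Empirical analogue of U_i in Lemma 2.4 for samples X of shape
    (n, d_i): with D[k, l] = ||X_k - X_l||, return
    row_mean + col_mean - D - grand_mean. By construction every row
    (and column) of the result sums to zero."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return (D.mean(axis=0, keepdims=True) + D.mean(axis=1, keepdims=True)
            - D - D.mean())

def jdcov2_hat(samples, c=1.0):
    """Plug-in estimate of JdCov^2(X; c): average over sample pairs of
    prod_i (U_i + c), minus c^r. `samples` is a list of (n, d_i) arrays,
    one per random vector."""
    prod = np.ones_like(u_hat(samples[0]))
    for X in samples:
        prod = prod * (u_hat(X) + c)
    return float(prod.mean() - c ** len(samples))
```

For $r=2$ and $c=0$ this reduces to a V-statistic version of the classical squared distance covariance; larger $c$ up-weights the lower-order terms, as discussed above.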
### 2.3. Multivariate Ranks Based on Optimal Transport
To motivate the notion of multivariate ranks based on optimal transport, let
us recall some fundamental facts about univariate ranks. Suppose that
$X\sim\mu\in\mathcal{P}_{ac}(\mathbb{R})$ is a random variable with
distribution function $F$. Here, the distribution function itself serves as
the one-dimensional population rank function, which has the property that
$F(X)\sim\mathrm{Unif}([0,1])$, that is, $F$ transports $\mu$ to
$\mathrm{Unif}([0,1])$, the uniform distribution on $[0,1]$. In fact, when
$\mu$ has finite second moment, it can be shown that $F$ is the almost
everywhere unique map that transports $\mu$ to $\mathrm{Unif}([0,1])$ and
minimizes the expected squared-error cost, that is,
$F=\mathrm{arg}\min_{T:T(X)\sim\mathrm{Unif}([0,1])}\mathbb{E}_{X\sim\mu}[(X-T(X))^{2}],$
where the minimization is over all functions $T$ that transport the
distribution of $X$ to the uniform distribution on $[0,1]$. This shows that in
dimension 1, ranks can be thought of as the univariate analogue of the
celebrated Monge transportation problem [54, 73]. Hallin et al. [31, 30] used
this interpretation of ranks and proposed the following multivariate
generalization.
###### Definition 2.5 ([18, 31, 30]).
Given a random variable $X\sim\mu$ and a pre-specified reference distribution
$\nu$, with $\mu,\nu\in\mathcal{P}_{ac}(\mathbb{R}^{d})$, the multivariate
population rank map $R_{\mu}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ is
defined as:
$\displaystyle
R_{\mu}:=\mathrm{arg}\min_{T:T(X)\sim\nu}\mathbb{E}_{X\sim\mu}[||X-T(X)||^{2}],$
(2.6)
where the minimum is over all functions
$T:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ that transport the distribution
of $X$ to $\nu$ and $||\cdot||$ denotes the usual Euclidean norm in
$\mathbb{R}^{d}$.222Note that, even though the definition in (2.6) is not
meaningful when $X\sim\mu$ does not have finite second moment, OT-based
multivariate ranks can still be defined using the Brenier-McCann theorem [53,
10], which says that there exists an almost everywhere unique map
$R_{\mu}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$, which is the gradient of a
convex function and satisfies $R_{\mu}(X)\sim\nu$. This notion coincides
with that in Definition 2.5, whenever $\mu$ and $\nu$ have finite second
moment.
To define the empirical analogue of the population multivariate rank we begin
with a ‘natural’ discretization
$\mathcal{H}_{N}^{d}:=\\{h_{1}^{d},\ldots,h_{N}^{d}\\}$ of the pre-specified
reference distribution $\nu\in\mathcal{P}_{ac}(\mathbb{R}^{d})$, that is, the
empirical measure $\nu_{N}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{h_{i}^{d}}$
converges weakly to $\nu$. The natural choice for $d=1$ is
$\mathcal{H}_{N}^{1}=\\{1/N,2/N,\ldots,1\\}$, whose empirical distribution
converges to $\mathrm{Unif}([0,1])$. For higher dimensions one can choose
$\mathcal{H}_{N}^{d}$ as $N$ i.i.d. points from $\nu$ or, more commonly, a
deterministic quasi-Monte Carlo sequence such as the Halton sequence [18, 38].
Then, given i.i.d. samples $X_{1},X_{2},\ldots,X_{N}$ from a distribution
$\mu\in\mathcal{P}_{ac}(\mathbb{R}^{d})$, the multivariate empirical rank map
$\hat{R}(X_{i})$ is the optimal transport map from the empirical distribution
of the data $\hat{\mu}_{N}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_{i}}$ to
$\hat{\nu}_{N}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{h_{i}^{d}}$. In other words,
$\displaystyle\hat{R}(X_{i})=h_{\hat{\sigma}_{N}(i)}^{d},$ (2.7)
where
$\displaystyle\mathbf{\hat{\sigma}}_{N}:=\mathrm{arg}\min_{\sigma\in
S_{N}}\sum_{i=1}^{N}||X_{i}-h_{{\sigma(i)}}^{d}||^{2},$
and $S_{N}$ denotes the set of all $N!$ permutations of $\\{1,2,\ldots,N\\}$.
This is a minimum bipartite matching problem (also known as the assignment
problem), which is a fundamental problem in combinatorial optimization that
can be solved exactly in $O(N^{3})$ time [40] and approximately in near-linear
time (see [1] and the references therein).
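The construction of the empirical rank map in (2.7) can be sketched as follows, using the exact Hungarian-type solver `scipy.optimize.linear_sum_assignment` for the assignment problem (the function and variable names are ours, and this is an illustrative sketch rather than the authors' implementation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import qmc

def empirical_rank_map(X, seed=0):
    """Empirical multivariate ranks: optimally assign the data points to a
    Halton discretization of Unif([0,1]^d), as in (2.7).  Sketch only."""
    n, d = X.shape
    H = qmc.Halton(d=d, seed=seed).random(n)   # reference grid H_n^d
    # cost[i, j] = ||X_i - h_j||^2
    cost = ((X[:, None, :] - H[None, :, :]) ** 2).sum(axis=2)
    row, col = linear_sum_assignment(cost)     # optimal permutation sigma-hat
    ranks = np.empty_like(H)
    ranks[row] = H[col]                        # R-hat(X_i) = h_{sigma-hat(i)}
    return ranks
```

For a square cost matrix the solver is exact and polynomial-time; for large $N$ it could be replaced by a near-linear approximate matching routine.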
A remarkable feature of the optimal transport based multivariate empirical
ranks defined in (2.7) is that they mimic the distribution-free property of
univariate ranks [18, 31].
###### Proposition 2.6 ([31, Proposition 1.6.1] and [18, Proposition 2.2]).
Suppose that $X_{1},\ldots,X_{N}$ are i.i.d. samples from a distribution
$\mu\in\mathcal{P}_{ac}(\mathbb{R}^{d})$. Then the vector of empirical ranks
$(\hat{R}(X_{1}),\hat{R}(X_{2}),\ldots,\hat{R}(X_{N}))$
is uniformly distributed over the $N!$ permutations of the fixed grid
$\mathcal{H}_{N}^{d}$.
The result above suggests a general strategy for constructing non-parametric
distribution-free tests, by replacing the data points with their empirical
ranks (appropriately defined depending on the testing problem). This strategy
was recently adopted in [18, 67] to construct distribution-free,
computationally efficient, and universally consistent multivariate two-sample
and independence tests. For other interesting properties and applications of
optimal transport based ranks see [16, 24, 33, 32] and the references therein.
###### Remark 2.7.
Common choices of the reference distribution $\nu$ are
$\mathrm{Unif}([0,1]^{d})$, the uniform distribution on the $d$-dimensional
cube $[0,1]^{d}$ [18], the spherical uniform distribution $U_{d}$, which is
the product of the uniform distribution on $[0,1)$ and the uniform
distribution on the unit sphere $S^{d-1}$ [30, 67], and
$N(\bm{0},\bm{I}_{d})$, the $d$-dimensional standard Gaussian [20]. Hereafter,
for concreteness we will choose $\nu=\mathrm{Unif}([0,1]^{d})$ and
$\mathcal{H}_{N}^{d}:=\\{h_{1}^{d},\ldots,h_{N}^{d}\\}$ will denote the Halton
sequence corresponding to this distribution. However, our results continue to
hold for any reference distribution $\nu$ with finite moments, which, in
particular, includes the spherical uniform and the standard multivariate
Gaussian.
### 2.4. Rank-Based Joint Dependence Measures
We are now ready to define the rank-based counterparts of the $d$-th order
$\mathrm{dCov}$ and $\mathrm{JdCov}$. Towards this, suppose, as before,
$(X_{1},\ldots,X_{r})$ is an $r$-dimensional random vector, where each $X_{i}$
is a random variable taking values in $\mathbb{R}^{d_{i}}$ with distribution
$\mu_{i}$, for $1\leq i\leq r$, and $d_{0}=\sum_{i=1}^{r}d_{i}$. Throughout we
will assume $\mu_{i}\in\mathcal{P}_{ac}(\mathbb{R}^{d_{i}})$ and denote the
population rank map of $\mu_{i}$ by $R_{\mu_{i}}$ (recall Definition 2.5), for
$1\leq i\leq r$.
###### Definition 2.8.
The $r$-th order rank $\mathrm{dCov}$ $(\mathrm{RdCov})$ of
$(X_{1},X_{2},\ldots,X_{r})$ is defined as the positive square root of
$\displaystyle\mathrm{RdCov}^{2}(X_{1},\ldots,X_{r})=\int_{\mathbb{R}^{d_{0}}}\left|\mathbb{E}\left[\prod_{i=1}^{r}\left\\{\mathbb{E}\left(e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i})\rangle}\right)-e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i})\rangle}\right\\}\right]\right|^{2}\mathrm{d}w$ (2.8)
where $t_{i}\in\mathbb{R}^{d_{i}}$, for $1\leq i\leq r$, and $\mathrm{d}w$ is
as defined in (2.1).
Note that the $r$-th order $\mathrm{RdCov}$ is obtained from the $r$-th order
$\mathrm{dCov}$ by replacing $(X_{1},X_{2},\ldots,X_{r})$ with their
population rank maps
$(R_{\mu_{1}}(X_{1}),R_{\mu_{2}}(X_{2}),\ldots,R_{\mu_{r}}(X_{r}))$. For
$r=2$, (2.8) reduces to the rank distance covariance of $(X_{1},X_{2})$
defined in [18, 67], which, like the classical $\mathrm{dCov}$, is zero if and
only if $X_{1}\perp\\!\\!\\!\perp X_{2}$. For $r\geq 3$, as in
$\mathrm{JdCov}$, we need to consider all the interactions between the $r$
variables $(X_{1},X_{2},\ldots,X_{r})$ to characterize joint independence.
This leads to the notion of rank joint distance covariance
($\mathrm{RJdCov}$):
###### Definition 2.9.
Given a vector of non-negative weights $\bm{C}=(C_{2},C_{3},\ldots,C_{r})$, the
rank joint distance covariance $(\mathrm{RJdCov})$ of the random vector
$\bm{X}=(X_{1},X_{2},\ldots,X_{r})$ is defined as:
$\displaystyle\mathrm{RJdCov}^{2}(\bm{X};\bm{C})=\sum_{s=2}^{r}C_{s}\sum_{\begin{subarray}{c}S\subseteq\\{1,2,\ldots,r\\}\\\
|S|=s\end{subarray}}\mathrm{RdCov}^{2}(\bm{X}_{S}),$ (2.9)
where
$\displaystyle\mathrm{RdCov}^{2}(\bm{X}_{S})=\int_{\mathbb{R}^{d_{S}}}\left|\mathbb{E}\left[\prod_{i\in
S}\left\\{\mathbb{E}\left(e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i})\rangle}\right)-e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i})\rangle}\right\\}\right]\right|^{2}\prod_{i\in
S}\mathrm{d}w_{i}.$ (2.10)
Moreover, for any $c\geq 0$,
$\displaystyle\mathrm{RJdCov}^{2}(\bm{X};c):=\mathrm{RJdCov}^{2}(\bm{X};(c^{r-s})_{2\leq
s\leq
r})=\sum_{s=2}^{r}c^{r-s}\sum_{\begin{subarray}{c}S\subseteq\\{1,2,\ldots,r\\}\\\
|S|=s\end{subarray}}\mathrm{RdCov}^{2}(\bm{X}_{S}).$ (2.11)
$($Note that
$\mathrm{RdCov}^{2}(X_{1},\ldots,X_{r})=\mathrm{RJdCov}^{2}(\bm{X};0)$.$)$
As in Lemma 2.4, $\mathrm{RJdCov}^{2}(\bm{X};c)$ has the following compact
representation:
$\displaystyle\mathrm{RJdCov}^{2}(\bm{X};c):=\mathbb{E}\left[\prod_{i=1}^{r}(W_{i}(R_{\mu_{i}}(X_{i}),R_{\mu_{i}}(X_{i}^{\prime}))+c)\right]-c^{r},$
(2.12)
where $(X_{1}^{\prime},\ldots,X_{r}^{\prime})$ is an independent copy of
$\bm{X}=(X_{1},X_{2},\ldots,X_{r})$ and, for $1\leq i\leq r$,
$\displaystyle W_{i}(x,x^{\prime})$
$\displaystyle=\mathbb{E}\|x-R_{\mu_{i}}(X_{i}^{\prime})\|+\mathbb{E}\|R_{\mu_{i}}(X_{i})-x^{\prime}\|-\|x-x^{\prime}\|-\mathbb{E}\|R_{\mu_{i}}(X_{i})-R_{\mu_{i}}(X_{i}^{\prime})\|.$
(2.13)
Note that $R_{\mu_{1}}(X_{1}),R_{\mu_{2}}(X_{2}),\ldots,R_{\mu_{r}}(X_{r})$
are independent, with $R_{\mu_{i}}(X_{i})\sim\mathrm{Unif}([0,1]^{d_{i}})$, when the variables
$X_{1},X_{2},\ldots,X_{r}$ are mutually independent (by the definition of the
optimal transport maps). Therefore, the distribution of
$\mathrm{RJdCov}^{2}(\bm{X};\bm{C})$ does not depend on the marginal
distributions of $X_{1},X_{2},\ldots,X_{r}$ under mutual independence.
Moreover, $\mathrm{RJdCov}^{2}(\bm{X};\bm{C})$ characterizes the joint
independence of $X_{1},X_{2},\ldots,X_{r}$.
###### Proposition 2.10.
Let $\mathrm{RdCov}$ and $\mathrm{RJdCov}$ be as defined in (2.10) and (2.9),
respectively. Then the following hold:
* $(1)$
For any $S\subseteq\\{1,2,\ldots,r\\}$ with $|S|\geq 2$,
$\mathrm{RdCov}(\bm{X}_{S})=0$ if the variables $(X_{i})_{i\in S}$ are
independent. Moreover, the converse holds whenever the variables
$(X_{i})_{i\in S}$ are $|S|-1$ independent.
* $(2)$
For any vector of positive weights $\bm{C}$,
$\mathrm{RJdCov}^{2}(\bm{X};\bm{C})=0$ if and only if
$(X_{1},X_{2},\ldots,X_{r})$ are mutually independent.
###### Proof.
Note that $\\{R_{\mu_{i}}(X_{i})\\}_{i\in S}$ are independent when
$(X_{i})_{i\in S}$ are mutually independent, which implies
$\mathrm{RdCov}^{2}(\bm{X}_{S})=0$, by definition. For the converse, note that
$\mathrm{RdCov}^{2}(\bm{X}_{S})=0$ and $\\{R_{\mu_{i}}(X_{i})\\}_{i\in S}$
are $|S|-1$ independent implies $\\{R_{\mu_{i}}(X_{i})\\}_{i\in S}$ are
mutually independent by [9, Theorem 3.4]. Then, since
$\mu_{i}\in\mathcal{P}_{ac}(\mathbb{R}^{d_{i}})$, by McCann’s theorem [53]
(see also [18, Proposition 2.1]) there exists a measurable function
$Q_{\mu_{i}}$ such that $Q_{\mu_{i}}(R_{\mu_{i}}(X_{i}))=X_{i}$
$\mu_{i}$-almost everywhere, for $1\leq i\leq r$. This implies that
$(X_{i})_{i\in S}$ are mutually independent, which completes the proof
of (1).
For (2) note that $\mathrm{RJdCov}^{2}(\bm{X};\bm{C})=0$ if and only if
$\mathrm{RdCov}^{2}(\bm{X}_{S})=0$ for all $S\subseteq\\{1,2,\ldots,r\\}$ with
$|S|\geq 2$. The result then follows from (1). ∎
### 2.5. Consistent Estimation of $\mathrm{RJdCov}$
In this section we discuss how $\mathrm{RdCov}$ and $\mathrm{RJdCov}$ can be
consistently estimated given $n$ i.i.d. samples $\\{\bm{X}_{a}\\}_{a=1}^{n}$,
with $\bm{X}_{a}=(X_{1}^{(a)},\ldots,X_{r}^{(a)})$, from a distribution
$\mu\in\mathcal{P}_{ac}(\mathbb{R}^{d_{0}})$. The natural plug-in estimator of
$\mathrm{RdCov}^{2}(\bm{X}_{S})$, for $S\subseteq\\{1,2,\ldots,r\\}$ with
$|S|\geq 2$, is:
$\displaystyle\mathrm{RdCov}^{2}_{n}(\bm{X}_{S}):=\int_{\mathbb{R}^{d_{S}}}\left|\frac{1}{n}\sum_{b=1}^{n}\left\\{\prod_{i\in
S}\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(a)})\rangle}-e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(b)})\rangle}\right)\right\\}\right|^{2}\prod_{i\in
S}\mathrm{d}w_{i},$ (2.14)
where $\hat{R}_{i}$ is the empirical rank map for the $i$-th marginal
distribution, that is, the optimal transport map from
$\frac{1}{n}\sum_{a=1}^{n}\delta_{X_{i}^{(a)}}$ to
$\frac{1}{n}\sum_{a=1}^{n}\delta_{h_{a}^{d_{i}}}$, with
$\mathcal{H}_{n}^{d_{i}}=\\{h_{1}^{d_{i}},h_{2}^{d_{i}},\ldots,h_{n}^{d_{i}}\\}$
the Halton sequence in $[0,1]^{d_{i}}$, for $1\leq i\leq r$. Using the
representation in (2.12) and (2.13) the estimate in (2.14) can be written as:
$\displaystyle\mathrm{RdCov}^{2}_{n}(\bm{X}_{S})$
$\displaystyle=\frac{1}{n^{2}}\sum_{1\leq a,b\leq n}\prod_{i\in
S}\hat{\mathcal{E}}_{i}(a,b),$ (2.15)
where
$\displaystyle\hat{\mathcal{E}}_{i}(a,b)$
$\displaystyle:=\frac{1}{n}\sum_{v=1}^{n}\|\hat{R}_{i}(X_{i}^{(a)})-\hat{R}_{i}(X_{i}^{(v)})\|+\frac{1}{n}\sum_{u=1}^{n}\|\hat{R}_{i}(X_{i}^{(u)})-\hat{R}_{i}(X_{i}^{(b)})\|$
$\displaystyle\hskip
90.3375pt-\|\hat{R}_{i}(X_{i}^{(a)})-\hat{R}_{i}(X_{i}^{(b)})\|-\frac{1}{n^{2}}\sum_{1\leq
u,v\leq n}\|\hat{R}_{i}(X_{i}^{(u)})-\hat{R}_{i}(X_{i}^{(v)})\|,$ (2.16)
for $1\leq a,b\leq n$. Consequently, the natural plug-in estimate of (2.9) is
$\displaystyle\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})=\sum_{s=2}^{r}C_{s}\sum_{\begin{subarray}{c}S\subseteq\\{1,2,\ldots,r\\}\\\
|S|=s\end{subarray}}\mathrm{RdCov}^{2}_{n}(\bm{X}_{S}),$ (2.17)
and that of (2.11) (using the representation in (2.12)) is
$\displaystyle\mathrm{RJdCov}^{2}_{n}(\bm{X};c)$
$\displaystyle:=\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\prod_{i=1}^{r}\left(\hat{\mathcal{E}}_{i}(a,b)+c\right)-c^{r}.$
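The product representation above translates directly into code. The sketch below assumes each variable has already been rank-transformed (e.g. via an optimal-assignment rank map); `centered_distance_matrix` computes $\hat{\mathcal{E}}_{i}$ as in (2.16), and all names are ours, not the paper's:

```python
import numpy as np

def centered_distance_matrix(R):
    """Doubly-centered Euclidean distance matrix of a rank-transformed
    sample R (n x d): the matrix E-hat_i(a, b) of (2.16)."""
    D = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=2)
    return (D.mean(axis=1, keepdims=True) + D.mean(axis=0, keepdims=True)
            - D - D.mean())

def rjdcov2_n(rank_list, c=0.0):
    """Plug-in estimate of RJdCov^2(X; c): average over (a, b) of
    prod_i (E-hat_i(a, b) + c), minus c^r."""
    n = rank_list[0].shape[0]
    prod = np.ones((n, n))
    for R in rank_list:
        prod = prod * (centered_distance_matrix(R) + c)
    return prod.mean() - c ** len(rank_list)
```

With `c=0` this reduces to the $r$-th order $\mathrm{RdCov}^{2}_{n}$ of (2.15).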
###### Remark 2.11.
For $S\subseteq\\{1,2,\ldots,r\\}$ with $|S|\geq 2$, consider the function
$\theta_{S}:(\mathbb{R}^{d_{S}})^{n}\rightarrow\mathbb{R}_{\geq 0}$
$\displaystyle\theta_{S}((x_{i}^{(1)})_{i\in S},\ldots,(x_{i}^{(n)})_{i\in
S}):=\int_{\mathbb{R}^{d_{S}}}\left|\frac{1}{n}\sum_{b=1}^{n}\left\\{\prod_{i\in
S}\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},x_{i}^{(a)}\rangle}-e^{\iota\langle
t_{i},x_{i}^{(b)}\rangle}\right)\right\\}\right|^{2}\prod_{i\in
S}\mathrm{d}w_{i},$ (2.18)
where $x_{i}^{(a)}\in\mathbb{R}^{d_{i}}$, for $1\leq i\leq r$ and, hence,
$(x_{i}^{(a)})_{i\in S}\in\mathbb{R}^{d_{S}}$, for $1\leq a\leq n$. Evaluating
$\theta_{S}$ at the data points gives us the natural plug-in estimate of
$\mathrm{dCov}(\bm{X}_{S})$ (recall (2.3)), which is denoted by:
$\mathrm{dCov}_{n}(\bm{X}_{S}):=\theta_{S}(\bm{X}_{S}^{(1)},\ldots,\bm{X}_{S}^{(n)}),$
where $\bm{X}_{S}^{(a)}:=(X_{i}^{(a)})_{i\in S}\in\mathbb{R}^{d_{S}}$, for
$1\leq a\leq n$. On the other hand, evaluating $\theta_{S}$ on the rank
transformed data gives us the estimate of $\mathrm{RdCov}$ in (2.14).
Specifically, recalling (2.14) note that
$\displaystyle\mathrm{RdCov}_{n}(\bm{X}_{S})=\theta_{S}(\hat{\bm{R}}_{S}^{(1)},\ldots,\hat{\bm{R}}_{S}^{(n)}),$
(2.19)
where $\hat{\bm{R}}_{S}^{(a)}:=(\hat{R}_{i}(X_{i}^{(a)}))_{i\in
S}\in\mathbb{R}^{d_{S}}$, for $1\leq a\leq n$. The function $\theta_{S}$
will play an important role in the calibration of the independence tests based
on $\mathrm{RJdCov}_{n}$ discussed in Section 3.
To establish the consistency of $\mathrm{RJdCov}_{n}$ we will assume the
following:
###### Assumption 1.
For every $1\leq i\leq r$, the empirical distribution of
$\mathcal{H}_{n}^{d_{i}}=\\{h_{1}^{d_{i}},h_{2}^{d_{i}},\ldots,h_{n}^{d_{i}}\\}$
converges weakly to $\mathrm{Unif}([0,1]^{d_{i}})$.
As mentioned before, Assumption 1 holds whenever $\mathcal{H}_{n}^{d_{i}}$ is
the Halton sequence in $[0,1]^{d_{i}}$ (which is our default choice), a
random sample of $n$ i.i.d. $\mathrm{Unif}([0,1]^{d_{i}})$ points, or another
quasi-Monte Carlo sequence (see [18, Appendix D] for a discussion of the
various choices of $\mathcal{H}_{n}^{d_{i}}$). Under this assumption we
have the following result:
###### Theorem 2.12.
Suppose Assumption 1 holds. Then for any $S\subseteq\\{1,2,\ldots,r\\}$ with
$|S|\geq 2$,
$\mathrm{RdCov}^{2}_{n}(\bm{X}_{S})\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}\mathrm{RdCov}^{2}(\bm{X}_{S}).$
Consequently, $\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}\mathrm{RJdCov}^{2}(\bm{X};\bm{C})$.
The proof of Theorem 2.12 is given in Appendix B. One of the highlights of the
above result is that it holds without any moment assumptions on the data
generating distribution. This is in contrast to consistency results for
$\mathrm{JdCov}$ which require the distribution to satisfy certain moment
conditions. Consequently, the rank-based joint dependence measures are more
robust to heavy-tailed distributions, as will be seen from the simulations in
Section 6.
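The "no moment assumptions" point can be seen concretely: however heavy-tailed the data, the rank-transformed sample always lies in $[0,1]^{d}$. A small sketch, assuming `scipy` and with all names our own:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import qmc

# Cauchy data: no finite moments of any positive integer order.
rng = np.random.default_rng(0)
X = rng.standard_cauchy(size=(100, 2))

# Rank-transform by optimal assignment to a Halton grid in [0,1]^2.
H = qmc.Halton(d=2, seed=0).random(100)
cost = ((X[:, None, :] - H[None, :, :]) ** 2).sum(axis=2)
_, col = linear_sum_assignment(cost)
ranks = H[col]

# The ranks are bounded regardless of the tails of X, which is why the
# consistency in Theorem 2.12 needs no moment conditions on the data.
print(ranks.min(), ranks.max())
```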
## 3\. Distribution-Free Joint Independence Testing
In this section we discuss new distribution-free tests for multivariate joint
independence testing. Given i.i.d. samples $\\{\bm{X}_{a}\\}_{a=1}^{n}$, with
$\bm{X}_{a}=(X_{1}^{(a)},\ldots,X_{r}^{(a)})$, from a distribution
$\mu\in\mathcal{P}_{ac}(\mathbb{R}^{d_{0}})$ with marginal distributions
$\mu_{1},\mu_{2},\ldots,\mu_{r}$ such that
$\mu_{i}\in\mathcal{P}_{ac}(\mathbb{R}^{d_{i}})$, for $1\leq i\leq r$,
consider the joint independence testing problem:
$\displaystyle
H_{0}:\mu=\mu_{1}\otimes\mu_{2}\otimes\cdots\otimes\mu_{r}\quad\text{ versus
}\quad H_{1}:\mu\neq\mu_{1}\otimes\mu_{2}\otimes\cdots\otimes\mu_{r}.$ (3.1)
Our test for (3.1) will be based on rejecting $H_{0}$ for ‘large’ values of
$\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})$, for a given choice of non-negative
weights $\bm{C}=(C_{2},C_{3},\ldots,C_{r})$. Note that the vector of ranks
$\\{\hat{R}_{i}(X_{i}^{(a)})\\}_{1\leq a\leq n}$ is distributed uniformly
over the $n!$ permutations of $\mathcal{H}_{n}^{d_{i}}$, for $1\leq i\leq r$
(by Proposition 2.6), and under $H_{0}$ these vectors are independent over
$1\leq i\leq r$. Hence, the distribution of
$\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})$ under $H_{0}$
does not depend on the marginal distributions
$\mu_{1},\mu_{2},\ldots,\mu_{r}$.
###### Proposition 3.1 (Distribution-freeness under $H_{0}$).
Under $H_{0}$, the distribution of $\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})$ is
universal, that is, it is free of $\mu_{1},\ldots,\mu_{r}$.
The above result automatically gives a finite sample distribution-free
independence test that uniformly controls the Type-I error for (3.1). To this
end, fix a level $\alpha\in(0,1)$ and let $c_{\alpha,n}$ denote the upper
$\alpha$ quantile of the universal distribution in Proposition 3.1. Consider
the test function:
$\phi_{n}(\bm{C}):=\bm{1}\left\\{\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})\geq
c_{\alpha,n}\right\\}.$ (3.2)
This test is exactly distribution-free for all $n\geq 1$ and uniformly of
level $\alpha$ under $H_{0}$,333Strictly speaking, to guarantee exact level
$\alpha$, we have to randomize $\phi_{n}(\bm{C})$, as the exact distribution
of $\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})$ is discrete. However, this makes
no practical difference unless $n$ is very small. that is,
$\sup_{\mu=\mu_{1}\otimes\mu_{2}\otimes\cdots\otimes\mu_{r}}\mathbb{E}_{\mu}\left[\phi_{n}(\bm{C})\right]=\alpha.$
(3.3)
Moreover, since $\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})$ is a consistent
estimate of $\mathrm{RJdCov}^{2}(\bm{X};\bm{C})$, which characterizes joint
independence, the test $\phi_{n}(\bm{C})$ is consistent against all fixed
alternatives. This is summarized in the following proposition (see Appendix B
for the proof).
###### Proposition 3.2 (Universal consistency).
Suppose Assumption 1 holds. Then the test $\phi_{n}(\bm{C})$ is universally
consistent, that is, for any
$\mu\neq\mu_{1}\otimes\mu_{2}\otimes\cdots\otimes\mu_{r}$,
$\lim_{n\rightarrow\infty}\mathbb{E}_{\mu}\left[\phi_{n}(\bm{C})\right]=1.$
(3.4)
###### Remark 3.3.
Note that, while $\mathrm{RJdCov}_{n}(\bm{X};\bm{C})$ provides a distribution-
free, universally consistent test for joint independence, the individual
$\mathrm{RdCov}(\bm{X}_{S})$ can be used for testing the joint independence of
the subset of variables $\bm{X}_{S}$ given that they are $|S|-1$ independent.
This is because $\mathrm{RdCov}_{n}(\bm{X}_{S})=0$ if and only if the
variables $\bm{X}_{S}$ are mutually independent, provided they are $|S|-1$
independent. For example, if we are interested in testing pairwise
independence:
$H_{0}:X_{1},X_{2},\ldots,X_{r}\text{ are pairwise independent }\quad\text{
versus }\quad H_{1}:\text{ not }H_{0},$
then the test which rejects $H_{0}$ for ‘large’ values of
$\sum_{1\leq i<j\leq r}\mathrm{RdCov}^{2}_{n}(X_{i},X_{j}),$
will be distribution-free and consistent. This is, in fact, the rank-analogue
of the test based on pairwise distance covariances studied in [76]. Similarly,
if we know that 3 variables $(X_{1},X_{2},X_{3})$ are pairwise independent,
then $\mathrm{RdCov}^{2}_{n}(X_{1},X_{2},X_{3})$ can be used to obtain a
distribution-free, consistent test for the mutual independence of
$(X_{1},X_{2},X_{3})$. Another attractive property of $\mathrm{RdCov}$ is that
the collection $\mathrm{RdCov}_{n}(\bm{X}_{S})$ is asymptotically independent
over $S\subseteq\\{1,2,\ldots,r\\}$ under the null hypothesis of joint
independence (see Theorem 3.7), which can be leveraged to learn the
higher-order dependencies among the variables $X_{1},X_{2},\ldots,X_{r}$.
###### Remark 3.4.
As mentioned in the Introduction, another related measure for joint
independence based on the second-order $\mathrm{dCov}$ was proposed by
Matteson and Tsay [52], and its rank version was discussed in [18]. Although
these measures characterize joint independence, unlike
$\mathrm{JdCov}/\mathrm{RJdCov}$ they are unable to capture the dependence
structure among the variables. In Section 6 we compare our method with the
test proposed in [52] in simulations.
### 3.1. Asymptotic Null Distribution
In this section we will derive the asymptotic null distribution of the
collection of $2^{r}-r-1$ random variables:
$\displaystyle\\{\mathrm{RdCov}_{n}(\bm{X}_{S}):S\in\mathcal{T}\\},$ (3.5)
where $\mathcal{T}:=\\{S\subseteq\\{1,2,\ldots,r\\}\text{ such that }|S|\geq
2\\}$, and consequently that of $\mathrm{RJdCov}_{n}(\bm{X};\bm{C})$. To
describe the limit we need the following definition:
###### Definition 3.5.
Denote by $\\{Z_{S}\\}_{S\in\mathcal{T}}$ a collection of mutually independent
complex-valued Gaussian processes such that, for each $S\in\mathcal{T}$,
$\displaystyle\\{Z_{S}(\bm{t}):\bm{t}\in\mathbb{R}^{d_{S}}\\}$ (3.6)
is a complex-valued Gaussian process indexed by $\mathbb{R}^{d_{S}}$ (recall
$d_{S}=\sum_{i\in S}d_{i}$) with zero mean and covariance function:
$\displaystyle
C_{S}(\bm{t},\bm{t}^{\prime}):=\operatorname{Cov}[Z_{S}(\bm{t}),Z_{S}(\bm{t}^{\prime})]=\prod_{i\in
S}\left(\mathbb{E}\left[e^{\iota\langle
t_{i}-t_{i}^{\prime},U_{i}\rangle}\right]-\mathbb{E}\left[e^{\iota\langle
t_{i},U_{i}\rangle}\right]\mathbb{E}\left[e^{\iota\langle-
t_{i}^{\prime},U_{i}\rangle}\right]\right),$ (3.7)
where
* •
$\bm{t}=(t_{i})_{i\in S}\in\mathbb{R}^{d_{S}}$ and
$\bm{t}^{\prime}=(t_{i}^{\prime})_{i\in S}\in\mathbb{R}^{d_{S}}$, with
$t_{i},t_{i}^{\prime}\in\mathbb{R}^{d_{i}}$, for $i\in S$, and
* •
$U_{1},U_{2},\ldots,U_{r}$ are independent with
$U_{i}\sim\mathrm{Unif}([0,1]^{d_{i}})$, for $1\leq i\leq r$.
###### Remark 3.6.
The covariance function (3.7) can be expressed in closed form using the fact
that
$\mathbb{E}\left[e^{\iota\langle
t_{i},U_{i}\rangle}\right]=\prod_{j=1}^{d_{i}}\frac{e^{\iota t_{ij}}-1}{\iota
t_{ij}},$
for $U_{i}\sim\mathrm{Unif}([0,1]^{d_{i}})$ and $t_{i}=(t_{ij})_{1\leq j\leq
d_{i}}\in\mathbb{R}^{d_{i}}$. Hence,
$C_{S}(\bm{t},\bm{t}^{\prime})=\prod_{i\in
S}\left(\prod_{j=1}^{d_{i}}\frac{e^{\iota(t_{ij}-t_{ij}^{\prime})}-1}{\iota(t_{ij}-t_{ij}^{\prime})}-\prod_{j=1}^{d_{i}}\frac{e^{\iota
t_{ij}}-1}{\iota t_{ij}}\prod_{j=1}^{d_{i}}\frac{e^{-\iota
t_{ij}^{\prime}}-1}{-\iota t_{ij}^{\prime}}\right),$
where $t_{i}=(t_{ij})_{1\leq j\leq d_{i}}\in\mathbb{R}^{d_{i}}$ and
$t_{i}^{\prime}=(t_{ij}^{\prime})_{1\leq j\leq d_{i}}\in\mathbb{R}^{d_{i}}$.
With the above definition we can now describe the asymptotic distribution of
the collection (3.5) under the null hypothesis of joint independence.
###### Theorem 3.7.
Suppose Assumption 1 holds. Then under $H_{0}$ as in (3.1), as
$n\rightarrow\infty$,
$\displaystyle\\{n\mathrm{RdCov}_{n}(\bm{X}_{S}):S\in\mathcal{T}\\}\stackrel{{\scriptstyle
D}}{{\to}}\left\\{\int_{\mathbb{R}^{d_{S}}}|Z_{S}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\right\\}_{S\in\mathcal{T}},$ (3.8)
for $Z_{S}(\bm{t})$ as defined in (3.6). Consequently, for any vector of
non-negative weights $\bm{C}=(C_{2},C_{3},\ldots,C_{r})$, under $H_{0}$ as in
(3.1),
$\displaystyle n\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})\stackrel{{\scriptstyle
D}}{{\to}}\sum_{s=2}^{r}C_{s}\sum_{\begin{subarray}{c}S\subseteq\\{1,2,\ldots,r\\}\\\
|S|=s\end{subarray}}\int_{\mathbb{R}^{d_{S}}}|Z_{S}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}.$ (3.9)
The proof of Theorem 3.7 is given in Appendix C. One of the challenges in
dealing with $\mathrm{RdCov}_{n}$ and $\mathrm{RJdCov}_{n}$ is that they are
functions of dependent multivariate ranks. To handle this dependence, we
develop a version of the classical Hoeffding’s combinatorial central limit
theorem for $r$-tensors indexed by multiple independent permutations (see
Theorem A.1 in Appendix A), a result which might be more broadly useful in the
analysis of other nonparametric tests. Our proof of the combinatorial CLT uses
Stein’s method based on exchangeable pairs, which, combined with techniques
from empirical process theory, gives the result in Theorem 3.7.
###### Remark 3.8.
(Equivalent representation of the limiting distribution) By considering the
process $Z_{S}(\bm{t})$ as a random element in $L^{2}(\mathbb{R}^{d_{S}})$ and
using an eigen-expansion of the integral operator associated with the
covariance kernel (3.7), the limit in (3.8) can be expressed as:
$\int_{\mathbb{R}^{d_{S}}}|Z_{S}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\stackrel{{\scriptstyle
D}}{{=}}\sum_{j=1}^{\infty}\lambda_{j}Z_{j}^{2},$
where $\\{\lambda_{j}\\}_{j\geq 1}$ are positive constants and
$Z_{1},Z_{2},\ldots$ are i.i.d. $N(0,1)$ (see [46, Chapter 1, Section 2]).
Hence, by the independence of the processes $\\{Z_{S}\\}_{S\in\mathcal{T}}$,
the limit in (3.9) can be represented as an infinite weighted sum of
chi-squares:
$n\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})\stackrel{{\scriptstyle
D}}{{\to}}\sum_{s=2}^{r}C_{s}\sum_{\begin{subarray}{c}S\subseteq\\{1,2,\ldots,r\\}\\\
|S|=s\end{subarray}}\int_{\mathbb{R}^{d_{S}}}|Z_{S}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\stackrel{{\scriptstyle
D}}{{=}}\sum_{j=1}^{\infty}\lambda_{j}^{\prime}Z_{j}^{2},$
for a sequence of positive constants $\\{\lambda_{j}^{\prime}\\}_{j\geq 1}$
and i.i.d. $N(0,1)$ random variables $Z_{1},Z_{2},\ldots$.
###### Remark 3.9.
Another interesting consequence of Theorem 3.7 is that the collection
$\\{\mathrm{RdCov}_{n}(\bm{X}_{S}):S\in\mathcal{T}\\}$ is asymptotically
independent. This can be attributed to the representation of
$\mathrm{RdCov}_{n}(\bm{X}_{S})$ as a product of zero mean (under $H_{0}$)
random variables (recall (2.14)). This property is shared by the
$\mathrm{JdCov}$ (equivalently, the distance multivariance) which has a
similar product structure (see [9, Theorem 4.10]).
### 3.2. Approximating the Cut-off and Finite Sample Properties
Note that the limit in (3.8), as expected, does not depend on the distribution
of the data or the specific choices of
$\mathcal{H}_{n}^{d_{1}},\mathcal{H}_{n}^{d_{2}},\ldots,\mathcal{H}_{n}^{d_{r}}$.
This is a manifestation of the distribution-free property of the multivariate
ranks. Hence, if $c_{\alpha}$ denotes the $(1-\alpha)$-th quantile of the
distribution on the RHS of (3.9), the test which rejects $H_{0}$ when
$\displaystyle\phi_{n}^{\mathrm{asymp}}(\bm{C})=\bm{1}\\{n\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})>c_{\alpha}\\},$
(3.10)
will have asymptotic level $\alpha$ and be universally consistent for the
hypothesis (3.1). Although this test is asymptotically valid, there is, in
general, no tractable form of the quantile $c_{\alpha}$ for the distribution
(3.9). Nevertheless, using the distribution-free property of the multivariate
ranks we can still approximate the quantiles of (3.8) (and consequently (3.9))
in a data-agnostic manner. This is because $\mathrm{RdCov}_{n}(\bm{X}_{S})$ is
a function of the multivariate ranks (recall (2.14) and (2.19)), and the ranks
are independent and uniformly distributed over the $n!$ permutations of
$\mathcal{H}_{n}^{d_{i}}$, for $1\leq i\leq r$ (by Proposition 2.6).
Therefore, we can compute approximate quantiles of
$\mathrm{RJdCov}_{n}(\bm{X};\bm{C})$ as follows: For $1\leq b\leq B$ repeat the
following two steps:
following two steps:
* $(1)$
Generate $r$ i.i.d. uniform random permutations
$\bm{\pi}:=\\{\pi_{1},\pi_{2},\ldots,\pi_{r}\\}$ from $S_{n}$.
* $(2)$
Compute the value of the statistic $\mathrm{RJdCov}_{n}(\bm{X};\bm{C})$ on the
permuted Halton sequences
$\pi_{1}\mathcal{H}_{n}^{d_{1}},\pi_{2}\mathcal{H}_{n}^{d_{2}},\ldots,\pi_{r}\mathcal{H}_{n}^{d_{r}}$,
where
$\pi_{i}\mathcal{H}_{n}^{d_{i}}:=\\{h_{\pi_{i}(1)}^{d_{i}},h_{\pi_{i}(2)}^{d_{i}},\ldots,h_{\pi_{i}(n)}^{d_{i}}\\},$
for $1\leq i\leq r$. More precisely, we evaluate
$\displaystyle\mathrm{RJdCov}_{n}^{(b)}(\bm{X};\bm{C})=\sum_{s=2}^{r}C_{s}\sum_{\begin{subarray}{c}S\subseteq\\{1,2,\ldots,r\\}\\\
|S|=s\end{subarray}}\theta_{S}(\bm{h}_{S,\bm{\pi}}^{(1)},\ldots,\bm{h}_{S,\bm{\pi}}^{(n)}),$
(3.11)
where $\theta_{S}$ is the function in (2.18) and
$\bm{h}_{S,\bm{\pi}}^{(a)}:=(h_{\pi_{i}(a)}^{d_{i}})_{i\in
S}\in\mathbb{R}^{d_{S}}$, for $1\leq a\leq n$.
Then the permutation $(1-\alpha)$-th quantile of
$\mathrm{RJdCov}_{n}(\bm{X};\bm{C})$ is obtained as:
$\displaystyle
c_{n,B,\alpha}:=\min\left\\{\mathrm{RJdCov}_{n}^{(s)}(\bm{X};\bm{C}):\frac{1}{B}\sum_{b=1}^{B}\bm{1}\left\\{\mathrm{RJdCov}_{n}^{(b)}(\bm{X};\bm{C})>\mathrm{RJdCov}_{n}^{(s)}(\bm{X};\bm{C})\right\\}\leq\alpha\right\\}.$
(3.12)
To compute the permutation $p$-value, let $R$ be the rank of
$\mathrm{RJdCov}_{n}(\bm{X};\bm{C})$ (the original value computed from the
data) in the sequence
$\displaystyle(\mathrm{RJdCov}_{n}^{(1)}(\bm{X};\bm{C}),\mathrm{RJdCov}_{n}^{(2)}(\bm{X};\bm{C}),\ldots,\mathrm{RJdCov}_{n}^{(B)}(\bm{X};\bm{C})),$
by breaking ties at random, where $R=1$ denotes the rank of the largest
element. Then the permutation $p$-value is defined as $p_{B}:=R/(B+1)$. Note
that if $H_{0}$ is true,
$\mathbb{P}(p_{B}\leq\alpha)=\lfloor\alpha(B+1)\rfloor/(B+1)\leq\alpha$. In
other words,
$\displaystyle\tilde{\phi}_{n}(\bm{C}):=\bm{1}\\{p_{B}\leq\alpha\\}$ (3.13)
is a finite-sample level $\alpha$ test.
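The data-agnostic calibration described above can be sketched as follows; the statistic is passed in as a callable evaluated on permuted Halton grids, the $p$-value follows one standard convention for $p_{B}$, and all names are our own, not from the paper:

```python
import numpy as np
from scipy.stats import qmc

def null_statistics(n, dims, stat_fn, B=200, seed=0):
    """B draws from the universal null distribution: evaluate stat_fn on
    independently permuted Halton grids -- no data is needed."""
    rng = np.random.default_rng(seed)
    grids = [qmc.Halton(d=d, seed=seed + i).random(n)
             for i, d in enumerate(dims)]
    null = np.empty(B)
    for b in range(B):
        permuted = [H[rng.permutation(n)] for H in grids]  # pi_i H_n^{d_i}
        null[b] = stat_fn(permuted)
    return null

def perm_pvalue(observed, null):
    """Permutation p-value of the observed statistic among the null draws."""
    return (1 + int(np.sum(null >= observed))) / (len(null) + 1)
```

Since the null draws depend only on $n$, the dimensions, and the grids, they can be computed once and reused across multiple tests, which is the efficiency gain noted in Remark 3.10.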
###### Remark 3.10.
Note that the resampling method described above is different from the
(data-dependent) bootstrap/permutation method used to calibrate
$\mathrm{JdCov}(\bm{X},\bm{C})$ (or other independence tests which are not
distribution-free), where the asymptotic distribution depends on the (unknown)
marginal distributions of $X_{1},X_{2},\ldots,X_{r}$ (see [12, Proposition
9]). On the other hand, our method can be used to compute the cut-offs by only
permuting the Halton sequences, without any knowledge of the data, and hence
is much more efficient when one has to do multiple tests.
Although (3.13) produces a test which has level $\alpha$ in finite samples,
its power properties with a finite number of permutations are a priori
unclear. This is due to the fact that, for any collection of $r$ random
permutations $\bm{\pi}=\\{\pi_{1},\pi_{2},\ldots,\pi_{r}\\}$, the RHS of
(3.11) tends to zero with high probability (see Proposition D.1 in Appendix
D). This combined
with Theorem 2.12 leads to the following result:
###### Theorem 3.11 (Consistency with finite number of resamples).
Suppose Assumption 1 holds. Fix $\alpha\in(0,1)$ and suppose $B\geq
1/\alpha-1$. Then, for any fixed
$\mu\neq\mu_{1}\otimes\mu_{2}\otimes\cdots\otimes\mu_{r}$,
$\lim_{n\rightarrow\infty}\mathbb{E}_{\mu}[\tilde{\phi}_{n}(\bm{C})]=1.$
###### Proof.
Define $T:=\mathrm{RJdCov}^{2}(\bm{X};\bm{C})$,
$T_{n}:=\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})$, and
$T_{n}^{(b)}:=\mathrm{RJdCov}_{n}^{(b)}(\bm{X};\bm{C})$, for $1\leq b\leq B$
(as defined in (3.11)). Then, since $B\geq 1/\alpha-1$,
$\displaystyle\mathbb{E}_{\mu}[\tilde{\phi}_{n}(\bm{C})]$
$\displaystyle=\mathbb{P}(p_{B}\leq\alpha)$
$\displaystyle\geq\mathbb{P}\left(T_{n}>T/2\text{ and }T_{n}^{(b)}<T/2\text{
for }1\leq b\leq B\right)$ $\displaystyle\geq
1-\left\\{\mathbb{P}(T_{n}<T/2)+B\cdot\mathbb{P}(T_{n}^{(1)}>T/2)\right\\}$
(by the union bound) $\displaystyle\rightarrow 1,$
where the last convergence is due to the consistency result in Theorem 2.12
and Proposition D.1 in Appendix D. ∎
## 4\. Local Power Analysis
In this section, we derive the asymptotic local power of the statistic
$\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})$ under the contiguous alternatives. In
the context of testing pairwise independence two types of local alternatives
have been considered in the literature: (1) mixture alternatives [20, 68] and
(2) Konijn alternatives, which were originally proposed in [44], later
investigated in [26], and used more recently in [20, 68]. To the best of
our knowledge, local power analysis has not been carried out for joint
independence testing of more than 2 variables. The first step towards this is
to define contiguous versions of the mixture and Konijn alternatives for
multiple random vectors. For this suppose the joint distribution $\mu$ has
density $f$ with respect to the Lebesgue measure in $\mathbb{R}^{d_{0}}$ and
the marginal distributions $\mu_{1},\mu_{2},\ldots,\mu_{r}$ have densities
$f_{X_{1}},f_{X_{2}},\ldots,f_{X_{r}}$ with respect to the Lebesgue measures
in $\mathbb{R}^{d_{1}},\mathbb{R}^{d_{2}},\ldots,\mathbb{R}^{d_{r}}$,
respectively.
### 4.1. Mixture Alternatives
The mixture alternatives are constructed as follows: given $\delta>0$,
consider
the following mixture density in $\mathbb{R}^{d_{0}}$:
$\displaystyle f(\bm{x})=(1-\delta)\prod_{i=1}^{r}f_{X_{i}}(x_{i})+\delta
g(\bm{x}),$
where $x_{i}\in\mathbb{R}^{d_{i}}$ for $1\leq i\leq r$,
$\bm{x}=(x_{1},x_{2},\ldots,x_{r})\in\mathbb{R}^{d_{0}}$, and
$g\neq\prod_{i=1}^{r}f_{X_{i}}$ is a probability density function with respect
to the Lebesgue measure in $\mathbb{R}^{d_{0}}$ such that the following holds:
###### Assumption 2.
The support of $g$ is contained in that of $\prod_{i=1}^{r}f_{X_{i}}$ and
$0<\mathbb{E}\left[\left(\frac{g(\bm{X})}{\prod_{i=1}^{r}f_{X_{i}}(X_{i})}-1\right)^{2}\right]<\infty,$
where the expectation is taken under $H_{0}$, that is,
$\bm{X}=(X_{1},X_{2},\ldots X_{r})\sim\prod_{i=1}^{r}f_{X_{i}}$.
Under this assumption contiguous local alternatives are obtained by
considering local perturbations of the mixing proportion $\delta$ as follows:
$H_{0}:\delta=0\qquad\mathrm{versus}\qquad H_{1}:\delta=h/\sqrt{n},$ (4.1)
for some $h>0$. This type of alternative captures all local additive
perturbations from $H_{0}$ by compressing potential lower-order dependence
among the variables into the function $g$. The following theorem derives the
distribution of $\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})$ under $H_{1}$ as in
(4.1).
###### Theorem 4.1.
Suppose Assumptions 1 and 2 hold. Then under $H_{1}$ as in (4.1), as
$n\rightarrow\infty$,
$\displaystyle\\{n\mathrm{RdCov}^{2}_{n}(\bm{X}_{S}):S\in\mathcal{T}\\}\stackrel{{\scriptstyle
D}}{{\to}}\left\\{\int_{\mathbb{R}^{d_{S}}}|Z_{S}(\bm{t})+\mu_{S}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\right\\}_{S\in\mathcal{T}},$ (4.2)
with $Z_{S}(\bm{t})$ as defined in (3.6) and
$\displaystyle\mu_{S}(\bm{t})=h\cdot\mathbb{E}_{\bm{X}\sim
H_{0}}\left[\left(\frac{g(\bm{X})}{\prod_{i=1}^{r}f_{X_{i}}(X_{i})}-1\right)\prod_{i\in
S}\left\\{\mathbb{E}\left[e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i})\rangle}\right]-e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i})\rangle}\right\\}\right],$ (4.3)
where $\bm{t}=(t_{i})_{i\in S}$ and $t_{i}\in\mathbb{R}^{d_{i}}$ for $i\in S$.
Consequently, for any vector of non-negative weights
$\bm{C}=(C_{1},C_{2},\ldots,C_{r})$, under $H_{1}$ as in (4.1),
$\displaystyle n\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})\stackrel{{\scriptstyle
D}}{{\to}}\sum_{s=2}^{r}C_{s}\sum_{\begin{subarray}{c}S\subseteq\\{1,2,\ldots,r\\}\\\
|S|=s\end{subarray}}\int_{\mathbb{R}^{d_{S}}}|Z_{S}(\bm{t})+\mu_{S}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}.$ (4.4)
The proof of this theorem is given in Appendix E. Using this result we can
derive the limiting local power of the test based on
$\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})$. Specifically, if
$\mathcal{H}_{\bm{C}}^{(h)}$ denotes the CDF of the limiting distribution in
(4.4), then the local asymptotic power of the test
$\phi_{n}^{\mathrm{asymp}}(\bm{C})$ (recall (3.10)) is given by
$\lim_{n\rightarrow\infty}\mathbb{E}_{H_{1}}[\phi_{n}^{\mathrm{asymp}}(\bm{C})]=1-\mathcal{H}_{\bm{C}}^{(h)}(c_{\alpha}).$
This implies that $\phi_{n}^{\mathrm{asymp}}(\bm{C})$ (and similarly
$\phi_{n}(\bm{C})$ and $\tilde{\phi}_{n}(\bm{C})$ in (3.2) and (3.13),
respectively) have non-trivial Pitman efficiency and are rate-optimal, in the
sense that
$\lim_{|h|\rightarrow\infty}\lim_{n\rightarrow\infty}\mathbb{E}_{H_{1}}[\phi_{n}^{\mathrm{asymp}}(\bm{C})]=1.$
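As an illustration, the mixture alternative with $\delta=h/\sqrt{n}$ can be simulated directly once a dependent density $g$ is fixed. The sketch below makes hypothetical choices not taken from the paper: uniform marginals on $[0,1]$ and an FGM-type $g(\bm{x})=1+\prod_{i}(2x_{i}-1)$, which is bounded by 2 and has support contained in that of the product density, so Assumption 2 holds.

```python
import numpy as np

def sample_mixture_alternative(n, r, h, rng=None):
    """Draw n samples in [0,1]^r from f_delta = (1-delta)*prod(Unif) + delta*g,
    with delta = h/sqrt(n) and g(x) = 1 + prod_i(2x_i - 1), an FGM-type density
    (a hypothetical choice) whose support lies inside that of the product."""
    rng = np.random.default_rng(rng)
    delta = h / np.sqrt(n)
    out = np.empty((n, r))
    for b in range(n):
        if rng.random() < delta:
            # rejection sampling from g: envelope 2 * uniform density, g <= 2
            while True:
                x = rng.random(r)
                if 2.0 * rng.random() <= 1.0 + np.prod(2.0 * x - 1.0):
                    out[b] = x
                    break
        else:
            out[b] = rng.random(r)  # independent product of uniform marginals
    return out

X = sample_mixture_alternative(500, 3, h=2.0, rng=0)
```

Samples drawn this way can feed any joint-independence test to estimate local power empirically.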
### 4.2. Konijn Alternative
The Konijn family of alternatives [44] for 2 random vectors can be defined as
follows: Given two independent random vectors
$X_{1}^{\prime}\sim\mu_{1}\in\mathcal{P}_{ac}(\mathbb{R}^{d_{1}})$,
$X_{2}^{\prime}\sim\mu_{2}\in\mathcal{P}_{ac}(\mathbb{R}^{d_{2}})$, and
$\delta>0$, consider the law of
$\displaystyle\begin{pmatrix}X_{1}\\\
X_{2}\end{pmatrix}=\begin{pmatrix}(1-\delta)\bm{I}_{d_{1}}&\delta\bm{M}_{1}\\\
\delta\bm{M}_{2}&(1-\delta)\bm{I}_{d_{2}}\end{pmatrix}\begin{pmatrix}X_{1}^{\prime}\\\
X_{2}^{\prime}\end{pmatrix}$ (4.5)
where $\bm{M}_{1}$ and $\bm{M}_{2}$ are $d_{1}\times d_{2}$ and $d_{2}\times
d_{1}$ dimensional deterministic matrices, respectively. Note that when
$\delta=0$ the random vectors $X_{1},X_{2}$ are independent, and as $\delta$
increases, dependence is introduced through the matrices $\bm{M}_{1}$ and
$\bm{M}_{2}$. Gieser [25] showed that if one chooses $\delta=h/\sqrt{n}$, for some $h>0$,
then the distributions of $(X_{1},X_{2})$ and
$(X_{1}^{\prime},X_{2}^{\prime})$ are contiguous when $X_{1}^{\prime}$ and
$X_{2}^{\prime}$ are elliptically symmetric distributions centered at
$\theta_{1}\in\mathbb{R}^{d_{1}}$ and $\theta_{2}\in\mathbb{R}^{d_{2}}$ with
covariance matrices $\Sigma_{1}$ and $\Sigma_{2}$, respectively. Here, we
extend the Konijn family to multiple random vectors as follows:
###### Definition 4.2.
Given $r$ independent random vectors
$X_{1}^{\prime},X_{2}^{\prime},\ldots,X_{r}^{\prime}$, where
$X_{i}^{\prime}\sim\mu_{i}\in\mathcal{P}_{ac}(\mathbb{R}^{d_{i}})$, for $1\leq
i\leq r$, and $\delta>0$, consider the law of
$\displaystyle\bm{X}=\begin{pmatrix}X_{1}\\\ X_{2}\\\ \vdots\\\
X_{r}\end{pmatrix}=\begin{pmatrix}(1-\delta)\bm{I}_{d_{1}}&\delta\bm{M}_{1,2}&\cdots&\delta\bm{M}_{1,r}\\\
\delta\bm{M}_{2,1}&(1-\delta)\bm{I}_{d_{2}}&\cdots&\delta\bm{M}_{2,r}\\\
\vdots&\vdots&\ddots&\vdots\\\
\delta\bm{M}_{r,1}&\delta\bm{M}_{r,2}&\cdots&(1-\delta)\bm{I}_{d_{r}}\\\
\end{pmatrix}\begin{pmatrix}X_{1}^{\prime}\\\ X_{2}^{\prime}\\\ \vdots\\\
X_{r}^{\prime}\end{pmatrix}:=\bm{A}_{\delta}\begin{pmatrix}X_{1}^{\prime}\\\
X_{2}^{\prime}\\\ \vdots\\\ X_{r}^{\prime}\end{pmatrix},$ (4.6)
where $\bm{M}_{i,j}$, for $1\leq i\neq j\leq r$, is a $d_{i}\times d_{j}$
dimensional deterministic matrix.
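The block matrix $\bm{A}_{\delta}$ in (4.6) and the corresponding Konijn sample are straightforward to construct numerically. The sketch below uses hypothetical defaults not taken from the paper: identity off-diagonal blocks $\bm{M}_{i,j}$ and standard normal $X_{i}^{\prime}$.

```python
import numpy as np

def konijn_sample(n, dims, delta, M=None, rng=None):
    """Sample n draws of X = A_delta @ X' per (4.6), where dims = (d_1,...,d_r).
    The i-th diagonal block of A_delta is (1-delta)*I and the (i,j)-th
    off-diagonal block is delta*M[i, j] (M: optional dict keyed by 0-based
    (i, j); identity blocks by default, a hypothetical choice)."""
    rng = np.random.default_rng(rng)
    r, d0 = len(dims), sum(dims)
    offs = np.cumsum((0,) + tuple(dims))  # block offsets into A_delta
    A = np.zeros((d0, d0))
    for i in range(r):
        for j in range(r):
            blk = ((1 - delta) * np.eye(dims[i]) if i == j
                   else delta * (M[i, j] if M is not None
                                 else np.eye(dims[i], dims[j])))
            A[offs[i]:offs[i+1], offs[j]:offs[j+1]] = blk
    Xp = rng.standard_normal((n, d0))  # independent X'_1,...,X'_r (std. normal)
    return Xp @ A.T, A                 # rows are samples of X = A_delta X'

X, A = konijn_sample(300, (2, 2, 2), delta=0.4, rng=0)
```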
Note that the matrix $\bm{A}_{\delta}$ in (4.6) is a $d_{0}\times d_{0}$
block matrix with $(1-\delta)\bm{I}_{d_{i}}$ in the $i$-th diagonal block for $1\leq
i\leq r$ and $\delta\bm{M}_{i,j}$ in the $(i,j)$-th block for $1\leq i\neq j\leq r$.
When $\delta=0$, the matrix $\bm{A}_{\delta}$ is the identity and, hence, by
continuity there exists a neighborhood $\Theta$ of zero such that
$\bm{A}_{\delta}$ is invertible for $\delta\in\Theta$. Therefore, by a change
of variable formula, for $\delta\in\Theta$, the density of $\bm{X}$ can be
written as:
$\displaystyle
f_{\bm{X}}(\bm{x}|\delta)=\frac{1}{|\mathrm{det}(\bm{A}_{\delta})|}f_{\bm{X}^{\prime}}(\bm{A}_{\delta}\bm{x}),$
(4.7)
where $f_{\bm{X}^{\prime}}(\cdot)=\prod_{i=1}^{r}f_{X_{i}^{\prime}}(\cdot)$ is
the density of $(X_{1}^{\prime},X_{2}^{\prime},\ldots,X_{r}^{\prime})$.
Hereafter, we will denote
$f^{\prime}_{\bm{X}}(\bm{x}|\delta)=\frac{\mathrm{d}}{\mathrm{d}t}f_{\bm{X}}(\bm{x}|t)\big|_{t=\delta}$
and assume the following:
###### Assumption 3.
The family of distributions $\\{f_{\bm{X}}(\cdot|\delta)\\}_{\delta\in\Theta}$
satisfies the following:
* •
The support of $f_{\bm{X}}(\cdot|\delta)$ does not depend on
$\delta\in\Theta$. In other words, the set
$\mathcal{S}:=\\{\bm{x}\in\mathbb{R}^{d_{0}}:f_{\bm{X}}(\bm{x}|\delta)>0\\}$
does not depend on $\delta$.
* •
The map $\bm{x}\mapsto\sqrt{f_{\bm{X}}(\bm{x}|0)}$ is continuously
differentiable.
* •
The Fisher information
$I(\delta)=\int_{\mathbb{R}^{d_{0}}}\left(\frac{f^{\prime}_{\bm{X}}(\bm{x}|\delta)}{f_{\bm{X}}(\bm{x}|\delta)}\right)^{2}f_{\bm{X}}(\bm{x}|\delta)\mathrm{d}\bm{x}$
is well-defined, finite, strictly positive, and continuous at $\delta=0$.
Under the above assumption, we consider the following sequence of local
hypotheses:
$H_{0}:\delta=0\qquad\mathrm{versus}\qquad H_{1}:\delta=h/\sqrt{n},$ (4.8)
for some $h>0$. We show in Lemma E.1 that $H_{0}$ and $H_{1}$ as in (4.8) are
contiguous under Assumption 3. This assumption also ensures that the family
$\\{f_{\bm{X}}(\cdot|\delta)\\}_{\delta\in\Theta}$ is quadratic mean-
differentiable (QMD) (see [47, Definition 12.2.1]) and, hence, Le Cam’s theory
of contiguity applies (see [47, Chapter 12] and [71, Chapter 6]).
###### Remark 4.3.
Assumption 3 is satisfied, for example, when $X_{1},X_{2},\ldots,X_{r}$ are
centered elliptical distributions, under certain non-degeneracy conditions on
the respective scale matrices (as discussed in [68, Example 5.1] for the
2-variable case).
To describe the limiting distribution of $\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})$
under $H_{1}$, denote the likelihood ratio and its derivative as:
$L(\bm{x}|\delta):=\frac{f_{\bm{X}}(\bm{x}|\delta)}{f_{\bm{X}}(\bm{x}|0)}=\frac{f_{\bm{X}}(\bm{x}|\delta)}{f_{\bm{X}^{\prime}}(\bm{x})}\quad\text{and}\quad
L^{\prime}(\bm{x}|\delta):=\frac{\nabla
f_{\bm{X}}(\bm{x}|\delta)}{f_{\bm{X}^{\prime}}(\bm{x})},$
where $\nabla
f_{\bm{X}}(\bm{x}|\delta)=\frac{\partial}{\partial\delta}f_{\bm{X}}(\bm{x}|\delta)$.
###### Theorem 4.4 (Local power for Konijn alternatives).
Suppose Assumptions 1 and 3 hold. Then under $H_{1}$ as in (4.8), as
$n\rightarrow\infty$,
$\displaystyle\\{n\mathrm{RdCov}^{2}_{n}(\bm{X}_{S}):S\in\mathcal{T}\\}\stackrel{{\scriptstyle
D}}{{\to}}\left\\{\int_{\mathbb{R}^{d_{S}}}|Z_{S}(\bm{t})+\nu_{S}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\right\\}_{S\in\mathcal{T}},$ (4.9)
with $Z_{S}(\bm{t})$ as defined in (3.6) and
$\displaystyle\nu_{S}(\bm{t})=h\cdot\mathbb{E}_{\bm{X}\sim
H_{0}}\left[L^{\prime}(\bm{X}|0)\prod_{i\in
S}\left\\{\mathbb{E}\left[e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i})\rangle}\right]-e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i})\rangle}\right\\}\right],$ (4.10)
where $\bm{t}=(t_{i})_{i\in S}$ and $t_{i}\in\mathbb{R}^{d_{i}}$ for $i\in S$.
Consequently, for any vector of non-negative weights
$\bm{C}=(C_{1},C_{2},\ldots,C_{r})$, under $H_{1}$ as in (4.8),
$\displaystyle n\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})\stackrel{{\scriptstyle
D}}{{\to}}\sum_{s=2}^{r}C_{s}\sum_{\begin{subarray}{c}S\subseteq\\{1,2,\ldots,r\\}\\\
|S|=s\end{subarray}}\int_{\mathbb{R}^{d_{S}}}|Z_{S}(\bm{t})+\nu_{S}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}.$ (4.11)
### 4.3. Hájek Representation and Proof Outline
The proofs of Theorems 4.1 and 4.4 are given in Appendix E. One of the main
ingredients of the proof is a Hájek representation for the empirical process
associated with $\mathrm{RdCov}_{n}(\bm{X}_{S})$, which shows that we can
replace the empirical rank maps in the definition of
$\mathrm{RdCov}_{n}(\bm{X}_{S})$ with their population counterparts without
altering its limiting distribution under $H_{0}$. Since this result could be
of independent interest, we summarize it as a proposition below. To describe
the result, consider the following empirical process:
$\displaystyle Z_{S,n}(\bm{t})$
$\displaystyle=\frac{1}{n}\sum_{b=1}^{n}\left\\{\prod_{i\in
S}\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(a)})\rangle}-e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(b)})\rangle}\right)\right\\},$ (4.12)
where $S\in\mathcal{T}$, $\bm{t}=(t_{i})_{i\in S}\in\mathbb{R}^{d_{S}}$ and
$t_{i}\in\mathbb{R}^{d_{i}}$, for $i\in S$. Recalling (2.14), note that
$\displaystyle\mathrm{RdCov}^{2}_{n}(\bm{X}_{S})=\int|Z_{S,n}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}.$ (4.13)
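To make (4.12) concrete, the sketch below evaluates $Z_{S,n}(\bm{t})$ at a fixed $\bm{t}$ in the special case where every component in $S$ is one-dimensional, so that the empirical rank map $\hat{R}_{i}$ reduces to (rescaled) empirical ranks; this one-dimensional setting is an illustrative assumption, not the general multivariate rank map.

```python
import numpy as np

def Z_Sn(X, t):
    """Evaluate the empirical process (4.12) at t for 1-dimensional components.
    X: (n, |S|) array, column i holding the n samples of component i in S.
    t: length-|S| array of evaluation points.
    Uses hat{R}_i(x) = rank(x)/n, the ECDF-based rank map in dimension 1."""
    n, s = X.shape
    R = np.argsort(np.argsort(X, axis=0), axis=0) / n    # ranks in [0, 1)
    phase = np.exp(1j * t[None, :] * R)                  # e^{i t_i R_i(x^(b))}
    factors = phase.mean(axis=0, keepdims=True) - phase  # inner bracket of (4.12)
    return factors.prod(axis=1).mean()                   # average over b

rng = np.random.default_rng(0)
z = Z_Sn(rng.standard_normal((200, 3)), np.array([1.0, 0.5, 2.0]))
```

Since each factor has magnitude at most 2, $|Z_{S,n}(\bm{t})|\leq 2^{|S|}$; the weighted integral of $|Z_{S,n}|^{2}$ over $\bm{t}$ then yields $\mathrm{RdCov}^{2}_{n}$ as in (4.13).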
One of the challenges of dealing with the process $Z_{S,n}(\bm{t})$ is the
dependence across the indices $i\in S$ arising from the empirical rank maps
$\\{\hat{R}_{i}(\cdot)\\}_{i\in S}$. The Hájek representation result shows
that this dependence is asymptotically negligible under $H_{0}$. Towards this,
define the oracle version of the empirical process $Z_{S,n}$, where the
empirical rank maps $\\{\hat{R}_{i}(\cdot)\\}_{i\in S}$ are replaced with their
population analogues $\\{R_{\mu_{i}}(\cdot)\\}_{i\in S}$:
$\displaystyle Z_{S,n}^{\textrm{oracle}}(\bm{t})$
$\displaystyle=\frac{1}{n}\sum_{b=1}^{n}\left\\{\prod_{i\in
S}\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i}^{(a)})\rangle}-e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i}^{(b)})\rangle}\right)\right\\},$ (4.14)
for $\bm{t}$ as in (3.6). We are now ready to state the Hájek representation
result which shows that the difference between $Z_{S,n}$ and its oracle
version is $o_{P}(1/\sqrt{n})$ under $H_{0}$.
###### Proposition 4.5.
Suppose Assumption 1 holds. Then under $H_{0}$ as $n\rightarrow\infty$,
$\sqrt{n}\left(Z_{S,n}(\bm{t})-Z_{S,n}^{\mathrm{oracle}}(\bm{t})\right)=o_{P}(1),$
for all $S\in\mathcal{T}$ and $\bm{t}\in\mathbb{R}^{d_{S}}$.
Once the Hájek representation is established the limiting distribution of
$\\{\mathrm{RdCov}(\bm{X}_{S})\\}_{S\in\mathcal{T}}$ under contiguous local
alternatives can be obtained as follows:
* •
The first step is to show that the alternatives considered in (4.1) and (4.8) are
mutually contiguous with $H_{0}$. While this is well-known for the case of 2
variables and relatively straightforward to extend to multiple variables for
the mixture alternative (4.1), for the Konijn alternative (4.8) establishing
contiguity is much more involved. Invoking Le Cam’s second lemma [71], we show
in Lemma E.1 that Konijn alternatives as in (4.6) are contiguous under
Assumption 3.
* •
Next, we derive the joint distribution of the collection of empirical
processes $\\{\sqrt{n}Z_{S,n}^{\mathrm{oracle}}(\bm{t})\\}_{S\in\mathcal{T}}$
and the log-likelihood under $H_{0}$. Proposition 4.5 together with contiguity
and Le Cam’s third lemma [71] gives the joint distribution of
$\\{\sqrt{n}Z_{S,n}^{\mathrm{oracle}}(\bm{t})\\}_{S\in\mathcal{T}}$ under
$H_{1}$ as in (4.1) and (4.8). The results in (4.2) and (4.9) then follow by
the representation in (4.13) and the continuous mapping theorem.
###### Remark 4.6.
Note that, as expected, one obtains the limiting null distribution of
$n\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})$ by substituting $h=0$ in Theorems 4.1
and 4.4. In fact, the Hájek representation result provides an alternative
strategy for proving the asymptotic null distribution, one that does not require
the combinatorial CLT for multiple permutations. Nevertheless, we present both
approaches because the technical by-products emerging from them, namely the
Hájek representation and the combinatorial CLT, are results of independent
interest. In particular, the scope of the combinatorial CLT with multiple
permutations goes beyond rank-based methods to other nonparametric
independence tests, such as methods based on geometric graphs [22].
To validate the asymptotic results discussed above, we present a finite sample
simulation comparing the empirical power of the $\mathrm{RJdCov}$ based test
with the test based on $\mathrm{JdCov}$ studied in [12]. Towards this we consider
the following 3 distributions: (1) the multivariate normal (MVN), (2) the
third-order multivariate normal copula (MVN copula) (a random variable $X$
is said to follow a $p$-dimensional $r$-th order, for $r>0$, multivariate
normal copula if $X\stackrel{{\scriptstyle D}}{{=}}Z^{r}$, where $Z\sim
N_{p}(\bm{\mu},\bm{\Sigma})$, as in [12, Example 1]), and (3) the multivariate
student distribution (MVT) with 5 degrees of freedom, with dimensions
$d_{1}=d_{2}=d_{3}=2$. In all 3 cases the mean vectors are set to zero and
the covariance matrix $\Sigma=((\sigma_{ij}))_{1\leq i,j\leq 2}$ has
$\sigma_{11}=\sigma_{22}=1$ and $\sigma_{12}=\sigma_{21}=0.5$. We set the
number of variables $r=3$, the matrices $\bm{M}_{i,j}=\bm{I}_{2}$, for $1\leq
i\neq j\leq 3$ and compute the matrix $\bm{A}_{\delta}$ as in (4.6) by varying
$\delta$ as follows: $\delta\in\\{0,0.4,0.8,1.2,1.6\\}$ for the Gaussian case,
$\delta\in\\{0,0.1,0.2,0.3,0.4\\}$ for the Gaussian copula case, and
$\delta\in\\{0,0.25,0.5,0.75,1\\}$ for the multivariate $t$-distribution.
Figure 1 shows the empirical power (over 200 repetitions) with sample size
$n=300$ as a function of the signal strength ($\delta$ varying as above), for
tests based on the $\mathrm{JdCov}$ and the $\mathrm{RJdCov}$. The plots show
that our method has comparable power with $\mathrm{JdCov}$ for the Gaussian
and $t$-distributions, while it is significantly better than the
$\mathrm{JdCov}$ in the more heavy-tailed Gaussian copula model, illustrating
the attractive efficiency properties of rank-based distribution-free methods.
Figure 1. Empirical power against Konijn alternatives with 3 variables: (a)
bivariate normal, (b) bivariate normal copula model of order $3$, and (c)
bivariate $t$-distribution with $5$ degrees of freedom.
## 5\. Robust Independent Component Analysis
In this section, we apply the $\mathrm{RJdCov}$ measure to develop a robust
method for independent component analysis (ICA). Towards this, let
$X\in\mathbb{R}^{r}$ be a random vector satisfying the following moment
conditions:
###### Assumption 4.
The random vector $X$ has a continuous distribution with $\mathbb{E}[X]=0$ and
$\mathbb{E}[\|X\|^{2}]<\infty$.
The ICA model assumes that there is a random vector $S\in\mathbb{R}^{r}$ with
jointly independent coordinates such that $X=\bm{M}S$, where $\bm{M}$ is a
nonsingular mixing matrix. Without loss of generality, we assume $S$ is
standardized such that $\mathbb{E}[S]=0$ and
$\operatorname{Var}[S]=\bm{I}_{r}$. Then by the spectral decomposition:
$\Sigma_{X}=\operatorname{Var}[X]=\bm{P}\Lambda\bm{P}^{\top},$
where $\bm{P}$ and $\Lambda$ are the $r\times r$ matrix of eigenvectors and
the diagonal matrix of the corresponding eigenvalues of $\Sigma_{\bm{X}}$,
respectively. Define $\bm{O}:=\Lambda^{-\frac{1}{2}}\bm{P}^{\top}$ and
$Z=\bm{O}X$. Note that $\mathbb{E}[Z]=0$ and
$\operatorname{Var}[Z]=\bm{I}_{r}$. Therefore, when the ICA model holds we can
write
$\displaystyle S=\bm{M}^{-1}X=\bm{M}^{-1}\bm{O}^{-1}Z=\bm{W}Z,$ (5.1)
where $\bm{W}=\bm{M}^{-1}\bm{O}^{-1}$. This implies,
$\bm{I}=\operatorname{Var}[S]=\bm{W}\operatorname{Var}[Z]\bm{W}^{\top}=\bm{W}\bm{W}^{\top}$,
that is, $\bm{W}$ is an orthogonal matrix.
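The whitening step $Z=\bm{O}X$ with $\bm{O}=\Lambda^{-\frac{1}{2}}\bm{P}^{\top}$ can be sketched as follows, substituting the sample covariance matrix for $\Sigma_{X}$ as one would in practice (a standard empirical substitution, not a detail specified in the text above).

```python
import numpy as np

def whiten(X):
    """Whiten centered data: Z = X @ O.T with O = Lambda^{-1/2} P^T, where
    P, Lambda come from the spectral decomposition of the sample covariance."""
    Xc = X - X.mean(axis=0)
    evals, P = np.linalg.eigh(np.cov(Xc, rowvar=False))
    O = np.diag(evals ** -0.5) @ P.T
    return Xc @ O.T, O

rng = np.random.default_rng(0)
raw = rng.standard_normal((2000, 3)) @ np.array([[2.0, 0.5, 0.0],
                                                 [0.0, 1.0, 0.3],
                                                 [0.0, 0.0, 0.7]])
Z, O = whiten(raw)
# np.cov(Z, rowvar=False) is (numerically) the identity matrix
```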
Following Matteson and Tsay [52] we restrict $\bm{W}\in\mathcal{SO}(r)$, where
$\mathcal{SO}(r)$ denotes the set of $r\times r$ orthogonal matrices with
determinant 1 (the rotation group) and parameterize $\bm{W}$ by a vector
$\theta$, where $\theta$ is a vectorized triangular array of rotation angles
of length $r(r-1)/2$ indexed by $\\{i,j:1\leq i<j\leq r\\}$, as follows:
$\displaystyle\bm{W}=\bm{W}(\theta)=\bm{Q}^{(r-1)}\cdots\bm{Q}^{(1)},\quad\text{with
}\bm{Q}^{(i)}=\bm{Q}_{i,r}(\theta_{i,r})\cdots\bm{Q}_{i,i+1}(\theta_{i,i+1}),$
(5.2)
where $\bm{Q}_{i,j}(\theta)$ is the identity matrix $\bm{I}_{r}$ with the
$(i,i)$ and $(j,j)$ elements replaced by $\cos(\theta)$, the $(i,j)$ element
replaced by $-\sin(\theta)$ and the $(j,i)$ element replaced by
$\sin(\theta)$. To ensure the existence and uniqueness of the representation
(5.2) define
$\displaystyle\Theta=\Bigg{\\{}\theta_{i,j}:\begin{cases}0\leq\theta_{1,j}<2\pi,\\\
0\leq\theta_{i,j}<\pi,&i\neq 1.\end{cases}\Bigg{\\}}.$ (5.3)
Then by [51, Theorem 2.3.2] there exists a unique inverse map of
$\bm{W}\in\mathcal{SO}(r)$ into $\theta\in\Theta$ such that the mapping is
assured to be continuous if either all elements on the main-diagonal of $W$
are positive, or all elements of $W$ are nonzero (see [52, 51] for more
details on such decompositions).
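The parameterization (5.2) is a composition of Givens rotations; a minimal sketch of constructing $\bm{W}(\theta)$ from a triangular array of angles follows (the dict representation of $\theta$ is our own convention for illustration).

```python
import numpy as np

def givens(r, i, j, theta):
    """Q_{i,j}(theta): identity with entries (i,i), (j,j) = cos(theta),
    (i,j) = -sin(theta), (j,i) = sin(theta). Indices 1-based as in (5.2)."""
    Q = np.eye(r)
    Q[i-1, i-1] = Q[j-1, j-1] = np.cos(theta)
    Q[i-1, j-1] = -np.sin(theta)
    Q[j-1, i-1] = np.sin(theta)
    return Q

def W_of_theta(theta, r):
    """W(theta) = Q^{(r-1)} ... Q^{(1)}, where
    Q^{(i)} = Q_{i,r}(theta_{i,r}) ... Q_{i,i+1}(theta_{i,i+1});
    theta is a dict keyed by (i, j) with 1 <= i < j <= r."""
    W = np.eye(r)
    for i in range(1, r):               # accumulate Q^{(r-1)} ... Q^{(1)}
        Qi = np.eye(r)
        for j in range(r, i, -1):       # Q_{i,r} ... Q_{i,i+1}, left to right
            Qi = Qi @ givens(r, i, j, theta[(i, j)])
        W = Qi @ W                      # left-multiply, so Q^{(i)} stacks outward
    return W

theta = {(i, j): 0.1 * (i + j) for i in range(1, 4) for j in range(i + 1, 5)}
W = W_of_theta(theta, 4)
# W is orthogonal with determinant 1, i.e. an element of SO(4)
```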
Another identification issue with the ICA model is the sign and order of the
independent components. Suppose $\bm{P}_{\pm}$ is a signed permutation matrix
and then note that $X=\bm{M}S=\bm{M}\bm{P}^{\top}_{\pm}\bm{P}_{\pm}S$ in which
case $\bm{P}_{\pm}S$ are the new independent components and
$\bm{M}\bm{P}^{\top}_{\pm}$ is the new mixing matrix. This necessitates
considering metrics that are invariant to the scale, sign, and order of the
independent components, when comparing different estimates (see Section 6.4
for details).
In general the ICA model (5.1) is misspecified, in which case $\theta$ is
chosen such that the components of $\bm{W}(\theta)Z$ are as close to mutually
independent as possible. To this end, the $\mathrm{RJdCov}$ provides a robust
measure for quantifying deviations from joint independence. Hence, one chooses
$\theta$ by minimizing the following objective function:
$\displaystyle\theta_{0}={\arg\min}_{\theta}\mathrm{RJdCov}^{2}(\bm{W}(\theta)Z;c).$
(5.4)
Recall that the (population) rank map in the 1-dimensional case is the
distribution function; hence, $\mathrm{RJdCov}^{2}(\bm{W}(\theta)Z;c)$ can be
computed by replacing $R_{\mu_{i}}$ in (2.8) with $F_{W_{i}(\theta)^{\top}Z}$,
the distribution function of $W_{i}(\theta)^{\top}Z$, for $1\leq i\leq r$,
where $W_{i}(\theta)$ denotes the transpose of the $i$-th row of $\bm{W}(\theta)$.
### 5.1. $\mathrm{RJdCov}$-Based ICA Estimator
We now discuss how one can estimate $\theta$ from samples by minimizing
the empirical version of (5.4). Towards this, let
$\bm{X}_{n}=\\{X_{1},X_{2},\ldots,X_{n}\\}$ be i.i.d. samples distributed as
$X$ satisfying Assumption 4. We define approximately uncorrelated observations
$\hat{\bm{Z}}_{n}=\bm{X}_{n}\hat{\bm{O}}^{\top}$, where $\hat{\bm{O}}$ is
computed as $\bm{O}$ above, from the spectral decomposition of the sample
covariance matrix of $\bm{X}_{n}$. Assumption 4 implies that
$\operatorname{Var}[\hat{\bm{Z}}_{n}]\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}\bm{I}_{r}$. To simplify notation, we omit the
steps described above and assume $\bm{Z}_{n}=\\{Z_{1},Z_{2},\ldots,Z_{n}\\}$ are
i.i.d. samples distributed as $Z$, which has zero mean vector and identity
covariance matrix. Also, we arrange $\\{Z_{1},Z_{2},\ldots Z_{n}\\}$ such that
the $i$-th row of $\bm{Z}_{n}$ is $Z_{i}^{\top}$, that is, $\bm{Z}_{n}$ has
dimension $n\times r$.
To estimate $\theta$ from $\bm{Z}_{n}$ we replace $\mathrm{RJdCov}^{2}$ in
(5.4) with its empirical version
$\displaystyle\hat{\theta}_{n}={\arg\min}_{\theta}\mathrm{RJdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\theta)^{\top};c).$
(5.5)
Note that for any value of $\theta$ the objective function
$\mathrm{RJdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\theta)^{\top};c)$ can be computed
easily, since in dimension 1 the empirical rank map is the empirical
distribution function. Specifically, the empirical rank map for the $i$-th
component is
$\hat{F}_{i,n}(t|\theta)=\frac{1}{n}\sum_{a=1}^{n}\bm{1}\\{W_{i}^{\top}(\theta)Z_{a}\leq
t\\}$. To establish the consistency of $\hat{\theta}_{n}$ we need the
following definition:
###### Definition 5.1.
We say $\hat{\theta}_{n}$ is consistent for $\theta$ over $\mathcal{SO}(r)$ if
the function $\bm{W}(\theta)$ is continuous in $\theta$ and
$\mathcal{D}(\bm{W}(\hat{\theta}_{n}),\bm{W}(\theta))\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}0$, as $n\rightarrow\infty$, for any metric $\mathcal{D}$
on $\mathcal{SO}(r)$ satisfying $\mathcal{D}(\bm{W},\bm{A})=0$ if and only if
there exists a signed permutation matrix $\bm{P}_{\pm}$ such that
$\bm{W}=\bm{P}_{\pm}\bm{A}$.
With this definition we can now state the consistency result of
$\hat{\theta}_{n}$. Towards this, recall the definition of $\Theta$ from (5.3)
and let $\bar{\Theta}$ denote a sufficiently large compact subset of the space
$\Theta$.
###### Theorem 5.2.
Suppose Assumption 4 holds and the density function of $Z$ is uniformly
bounded. Moreover, assume there exists a unique minimizer
$\theta_{0}\in\bar{\Theta}$ for (5.4) and that $\bm{W}(\theta)$ satisfies the
conditions for a unique continuous inverse to exist. Then $\hat{\theta}_{n}$
is consistent for $\theta_{0}$ over $\mathcal{SO}(r)$.
The proof of Theorem 5.2 is given in Appendix F.2. This shows that the
proposed estimator $\hat{\theta}_{n}$ is consistent for $\theta_{0}$ for
metrics which are invariant to the non-identifiability issues of the ICA
problem. The assumption of uniform boundedness is required to guarantee the
distribution function is Lipschitz, which ensures the equicontinuity of the
objective function.
### 5.2. Practical Implementation
One of the challenges in directly implementing gradient-based methods to
minimize the objective in (5.5) is that the empirical distribution functions
are not differentiable with respect to $\theta$. Matteson and Tsay [52]
suggested a practical way to circumvent this issue by estimating the
univariate distribution functions $F_{W_{i}(\theta)^{\top}Z}$, for $1\leq
i\leq r$, using kernel-based methods. Specifically, we consider the following
problem:
$\displaystyle\tilde{\theta}_{n}=\arg\min_{\theta}\mathrm{JdCov}^{2}_{n}((\tilde{F}_{1}(\bm{Z}_{n}W_{1}(\theta)),\ldots,\tilde{F}_{r}(\bm{Z}_{n}W_{r}(\theta));c),$
(5.6)
with
$\tilde{F}_{1}(\bm{Z}_{n}W_{1}(\theta))=(\tilde{F}_{1}(W_{1}(\theta)^{\top}Z_{1}),\ldots,\tilde{F}_{1}(W_{1}(\theta)^{\top}Z_{n}))^{\top}$
and
$\displaystyle\tilde{F}_{i}(s)=\frac{1}{n}\sum_{a=1}^{n}G\bigg{(}\frac{s-W_{i}(\theta)^{\top}Z_{a}}{h_{n}(i)}\bigg{)},$
(5.7)
where $G$ is the integral of a density kernel and $h_{n}(i)$ is a data
dependent bandwidth for the $i$-th component, for $1\leq i\leq r$ (see Section
2 in [52] for practical choices of $h_{n}(i)$). The optimization problem (5.6)
can now be solved using a gradient-based method (see Section F.1 for the
computation of the gradient).
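A minimal sketch of the smoothed distribution-function estimate (5.7) follows, taking $G$ to be the standard normal CDF (the integral of the Gaussian density kernel) — one common choice, assumed here for illustration rather than prescribed by the text.

```python
import math

def smooth_cdf(s, projections, h):
    """Kernel-smoothed CDF estimate (5.7): (1/n) * sum_a G((s - w'z_a) / h),
    with G the standard normal CDF (an assumed kernel choice).
    projections: scalars W_i(theta)' Z_a; h: bandwidth h_n(i) > 0."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return sum(Phi((s - p) / h) for p in projections) / len(projections)

vals = [smooth_cdf(s, [0.1, -0.4, 1.2, 0.3, -0.9], h=0.2)
        for s in (-2.0, 0.0, 2.0)]
# vals is nondecreasing in s and lies in [0, 1]
```

Unlike the empirical distribution function, this estimate is smooth in $s$ (and hence in $\theta$ through the projections), which is what makes gradient-based optimization of (5.6) feasible.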
The advantage of using kernel-based estimates is that they have nice
theoretical properties, which can be used to establish the consistency of the
estimate $\tilde{\theta}_{n}$. In particular, if we assume that $h_{n}(i)$ is
a positive measurable function of the $i$-th coordinates of
$Z_{1},Z_{2},\ldots,Z_{n}$ such that $h_{n}(i)\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}0$ as $n\rightarrow\infty$, then
$\displaystyle\sup_{s\in\mathbb{R}}\big{|}\tilde{F}_{i}(s)-F_{i}(s)\big{|}\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}0,$
as $n\rightarrow\infty$, for $1\leq i\leq r$ (see Corollary 1 in [11]). Then,
whenever $G$ is Lipschitz and Assumption 4 and the conditions in Theorem 5.2
hold, similarly to the proof of Theorem 2.1 in [52], by replacing their objective
function with $\mathrm{RJdCov}_{n}^{2}$, it can be shown that
$\tilde{\theta}_{n}$ is consistent for $\theta_{0}$ over $\mathcal{SO}(r)$.
## 6\. Simulation Study
In this section, we showcase the efficacy of the proposed test, both in terms
of Type-I error control and power, across a range of simulation settings. For
our test we implement the finite sample version discussed in Section 3.2 with
cut-off chosen as in (3.12) and all the weights set to 1. We compare our
method (hereafter referred to as $\mathrm{RJdCov}$) with the JdCov based test
in [12], the dCov based test studied by Matteson and Tsay [52] (hereafter
denoted as MT), and the dHSIC based test [59]. The section is organized as
follows: In Section 6.1 we present the results on Type-I error control. The
empirical power for testing joint independence is given in Section 6.2 and
results for detecting higher-order dependence are in Section 6.3. The
performance of the ICA estimator is studied in Section 6.4.
### 6.1. Type-I Error Control
To illustrate the finite sample Type-I error control of the proposed test we
consider $\bm{X}=(X_{1}^{\top},X_{2}^{\top},X_{3}^{\top})^{\top}$ in the
following three settings:
* $(1)$
Multivariate Gaussian: $X_{1},X_{2},X_{3}$ are i.i.d.
$N_{3}(\bm{0},\bm{I}_{3})$.
* $(2)$
Third-order Gaussian copula: $X_{1},X_{2},X_{3}$ are independent random
vectors in $\mathbb{R}^{3}$ with i.i.d. coordinates distributed as $Z^{3}$,
where $Z\sim N(0,1)$.
* $(3)$
Cauchy: $X_{1},X_{2},X_{3}$ are independent random vectors in $\mathbb{R}^{3}$
with i.i.d. coordinates distributed as $\mathcal{C}(0,1)$, the Cauchy
distribution with location parameter 0 and scale parameter 1.
Table 1 shows the empirical Type-I error probability (over 500 repetitions)
with sample size $n=500$ for the different tests in the above 3 situations.
Our test satisfactorily controls Type-I error in finite samples, validating
the theoretical results. The other tests also have good Type-I error control.
Table 1. Empirical probability of Type-I error of the different tests. Distribution | Multivariate Gaussian | Gaussian Copula | Cauchy
---|---|---|---
Proposed | 0.038 | 0.042 | 0.042
JdCov | 0.036 | 0.030 | 0.046
Matteson-Tsay (MT) | 0.056 | 0.046 | 0.056
dHSIC | 0.032 | 0.074 | 0.050
### 6.2. Power Results
In this section we compare the empirical power of $\mathrm{RJdCov}$ with the
existing methods in the 4 settings described below. In the first 2 cases we
have (light-tailed) multivariate Gaussian distributions with covariance
structure as in [12, Example 2], the third setting is a (heavy-tailed) Cauchy
regression model, and the fourth is a Cauchy sine dependence model (similar to
[78]). Throughout we set the sample size $n=500$ and compute the empirical
power over 500 repetitions.
Figure 2. Power of the different tests in (a) the multivariate Gaussian model
with autoregressive correlation structure and (b) the multivariate Gaussian
model with banded covariance matrix.
* $(1)$
Multivariate Gaussian with autoregressive covariance structure: In this case
the data is generated as follows: $\bm{X}\sim N_{9}(\bm{0},\Sigma)$ where
$\Sigma=((\sigma_{ij}))_{1\leq i,j\leq 9}$ has an autocorrelation structure,
that is, $\sigma_{ij}=\rho^{|i-j|}$. Then the first 3 coordinates of $\bm{X}$
are set as $X_{1}$, second 3 coordinates as $X_{2}$, and the last 3
coordinates as $X_{3}$. Figure 2 (a) shows the empirical power of the
different tests as $\rho$ varies between $0.05$ to $0.25$ over a grid of 5
equally spaced values.
* $(2)$
Multivariate Gaussian with banded covariance matrix: Here, $\bm{X}\sim
N_{9}(\bm{0},\Sigma)$ where $\Sigma=((\sigma_{ij}))_{1\leq i,j\leq 9}$ has a
banded structure, that is, $\sigma_{ii}=1$ for $1\leq i\leq 9$,
$\sigma_{ij}=\rho$ for $1\leq|i-j|\leq 2$, and $\sigma_{ij}=0$ otherwise. As
before, the first 3 coordinates of $\bm{X}$ are set as $X_{1}$, second 3
coordinates as $X_{2}$, and the last 3 coordinates as $X_{3}$. Figure 2 (b)
shows the empirical power of the different tests as $\rho$ varies between
$0.05$ to $0.25$ over a grid of 5 equally spaced values.
* $(3)$
Cauchy regression: In this case
$\bm{X}=(X_{1}^{\top},X_{2}^{\top},X_{3}^{\top})^{\top}$ is generated as
follows:
$\displaystyle X_{i}=Z_{i}+a\cdot V,$ (6.1)
for $1\leq i\leq 3$, where $Z_{i}\in\mathbb{R}^{3}$ have i.i.d. coordinates
distributed according to $\mathcal{C}(0,1)$, and
$V=(V_{1},V_{2},V_{3})^{\top}$ with $V_{1}=V_{2}=V_{3}\sim\mathcal{C}(0,1)$.
Note that the variables $X_{1},X_{2},X_{3}$ are mutually dependent because of
the shared noise vector. Figure 3 (a) shows the empirical power of the
different tests as $a$ varies between $0.2$ to 1 over a grid of 5 equally
spaced values.
* $(4)$
Sine dependence: In this case
$\bm{X}=(X_{1}^{\top},X_{2}^{\top},X_{3}^{\top})^{\top}$ is generated as in
(6.1), but with a different noise distribution. Specifically, we fix $a=1$,
generate $Z_{i}\in\mathbb{R}^{3}$ with i.i.d. coordinates distributed
according to $\mathcal{C}(0,1)$ as before, but change
$V=(V_{1},V_{2},V_{3})^{\top}$ as $V_{1}=V_{2}=V_{3}=\sin(b\cdot W)$ where
$W\sim\mathcal{C}(0,1)$. Figure 3 (b) shows the empirical power of the
different tests as $b$ varies between $0.1$ to $0.5$ over a grid of 5 equally
spaced values.
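The data-generating mechanism (6.1) shared by settings (3) and (4) can be sketched as follows (standard Cauchy sampling via numpy; an illustrative reimplementation, not the authors' code).

```python
import numpy as np

def cauchy_regression_sample(n, a, rng=None):
    """Generate (X_1, X_2, X_3) per (6.1): X_i = Z_i + a*V, where each Z_i has
    3 i.i.d. C(0,1) coordinates and V repeats one C(0,1) draw across its 3
    coordinates (the shared noise that induces mutual dependence)."""
    rng = np.random.default_rng(rng)
    V = np.repeat(rng.standard_cauchy((n, 1)), 3, axis=1)  # V_1 = V_2 = V_3
    return [rng.standard_cauchy((n, 3)) + a * V for _ in range(3)]

X1, X2, X3 = cauchy_regression_sample(500, a=0.6, rng=0)
```

Setting (4) is obtained by replacing the repeated Cauchy draw with $\sin(b\cdot W)$ for $W\sim\mathcal{C}(0,1)$.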
We observe from the plots in Figure 2 that the $\mathrm{RJdCov}$ along with
the other methods have very similar performance in the Gaussian settings. On
the other hand, Figure 3 shows in the heavy-tailed Cauchy and sine dependence
settings the $\mathrm{RJdCov}$ significantly outperforms the other tests.
Overall the $\mathrm{RJdCov}$ emerges as the preferred method by exhibiting
good power over a range of data distributions.
Figure 3. Power of the different tests in (a) the Cauchy regression model and
(b) the sine dependence model.
### 6.3. Higher-Order Dependence
In this section we consider situations where the variables are pairwise
independent but jointly dependent. Specifically, generate $(X,Y,Z^{\prime})$
as follows:
* •
Suppose $X$, $Y$ and $Z$ are independent random vectors in $\mathbb{R}^{d}$
with i.i.d. coordinates distributed according to $F$, where $F$ is a
distribution on $\mathbb{R}$ that is symmetric around $0$.
* •
For $1\leq s\leq d-1$, set $Z_{s}^{\prime}=Z_{s}$ and set
$Z^{\prime}_{d}=-Z_{d}$ if $X_{d}Y_{d}Z_{d}$ is positive, otherwise set
$Z^{\prime}_{d}=Z_{d}$.
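The two-step construction above can be sketched as follows, with the hypothetical choice $F=N(0,1)$; by construction the product $X_{d}Y_{d}Z^{\prime}_{d}$ never ends up positive.

```python
import numpy as np

def pairwise_independent_triple(n, d, rng=None):
    """Generate (X, Y, Z') per the construction above with F = N(0,1)
    (an assumed choice of the symmetric F): Z'_s = Z_s for s < d, and the
    last coordinate's sign is flipped whenever X_d * Y_d * Z_d > 0."""
    rng = np.random.default_rng(rng)
    X, Y, Z = (rng.standard_normal((n, d)) for _ in range(3))
    Zp = Z.copy()
    flip = X[:, -1] * Y[:, -1] * Z[:, -1] > 0
    Zp[flip, -1] = -Z[flip, -1]
    return X, Y, Zp

X, Y, Zp = pairwise_independent_triple(1000, 3, rng=0)
# X[:, -1] * Y[:, -1] * Zp[:, -1] is never positive
```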
It can easily be checked that $X,Y,Z^{\prime}$ are pairwise independent.
However, they are mutually dependent, since, for example,
$Z_{d}^{\prime}X_{d}Y_{d}$ is always non-positive. In the simulations we
choose $F$ to be one of the following 4 distributions: (1) $N(0,1)$, (2) the
$t$-distribution with 2 degrees of freedom $(t_{(2)})$, (3) the
$t$-distribution with 3 degrees of freedom $(t_{(3)})$, and (4) the Cauchy
distribution $\mathcal{C}(0,1)$, equivalently, the $t$-distribution with 1
degree of freedom.
In Table 2 we show the empirical power (over 500 iterations) with sample size
$n=500$ of the following 3 tests (and their corresponding versions based on
$\mathrm{dCov}$/$\mathrm{JdCov}$):
* •
The test for pairwise independence that rejects for large values of
$\mathrm{RdCov}^{2}_{n}(X,Y)+\mathrm{RdCov}^{2}_{n}(Y,Z^{\prime})+\mathrm{RdCov}^{2}_{n}(X,Z^{\prime}).$
* •
The test for higher-order independence that rejects for large values of
$\mathrm{RdCov}^{2}_{n}(X,Y,Z^{\prime})$.
* •
The test for joint independence based on $\mathrm{RJdCov}_{n}^{2}$.
The results in Table 2 show that for all 4 distributions considered, both
the $\mathrm{RJdCov}$ and $\mathrm{JdCov}$ based tests maintain the nominal
level under the null of pairwise independence (the first column reports the
empirical probability of Type-I error).
However, the $\mathrm{JdCov}$ based test fails to detect higher-order
dependence for the Cauchy distribution and joint dependence for the $t_{(2)}$
and the Cauchy distribution. This is not unexpected because the consistency of
the $\mathrm{dCov}$ based tests relies on certain moment assumptions which are
not satisfied by heavy-tailed distributions like the $t_{(2)}$ and the Cauchy
(recall that $t_{(2)}$ does not have finite variance and $\mathcal{C}(0,1)$
does not have finite mean). On the other hand, the $\mathrm{RJdCov}$ based
tests easily detect both the higher-order and joint dependences in all 4
cases, which once again highlights the robustness and broad applicability of the
proposed method.
Table 2. Empirical power for detecting higher-order dependence. Dependence | Pairwise | Higher-Order | Joint
---|---|---|---
Tests | JdCov | Proposed | JdCov | Proposed | JdCov | Proposed
Gaussian | 0.058 | 0.054 | 1 | 1 | 1.000 | 0.942
$t_{(3)}$ | 0.052 | 0.064 | 1 | 1 | 0.868 | 0.904
$t_{(2)}$ | 0.056 | 0.040 | 1 | 1 | 0.098 | 0.888
Cauchy | 0.064 | 0.044 | 0.04 | 1 | 0.050 | 0.788
### 6.4. Performance of the ICA Estimator
To evaluate the performance of the distribution-free ICA estimator proposed in
Section 5, we consider the following model:
$\displaystyle X=\bm{M}Z,$
where $Z=(Z_{1},Z_{2},\ldots,Z_{d})$, with $Z_{1},Z_{2},\ldots,Z_{d}$ i.i.d.
from some distribution $F$, and $\bm{M}$ is a $d\times d$ random mixing matrix
with condition number between 1 and 2, generated using the R package
ProDenICA (https://cran.r-project.org/web/packages/ProDenICA/index.html). We
consider 12 candidate distributions for $F$, as described in Table 3. For each
candidate distribution we apply the gradient descent algorithm to solve the
optimization program (5.6) (where the gradient is derived in Section 5.2). We
compare our results with the method proposed in Matteson and Tsay [52]. As in
[52], the estimation error is measured using the following metric, which is
designed to resolve the non-identifiability issues:
$\displaystyle
D(\hat{M},M)=\frac{1}{\sqrt{d-1}}\inf_{C\in\mathcal{C}}\|C\hat{M}^{-1}M-\bm{I}_{d}\|_{F}$
where
* •
$\hat{M}$ is the estimated mixing matrix,
* •
$\|\cdot\|_{F}$ denotes the Frobenius norm,
* •
$\mathcal{C}=\\{C\in\mathcal{M}:C=P_{\pm}B\text{ for some }P_{\pm}\text{ and
}B\\}$, where $\mathcal{M}$ is the set of $d\times d$ nonsingular
matrices, $P_{\pm}$ is a $d\times d$ signed permutation matrix, and $B$ is a
$d\times d$ diagonal matrix with positive diagonal elements.
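The infimum over $C$ can be computed exactly for small $d$: for each of the $d!$ row assignments the optimal sign and positive scale have a closed form, namely $\min_{s,b>0}\|sb\,r-e_{j}\|^{2}=1-r_{j}^{2}/\|r\|^{2}$ for a row $r$ matched to coordinate $j$. A self-contained Python sketch of this (illustrative, not the JADE implementation):

```python
import itertools

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_inv(M):
    """Gauss-Jordan inverse with partial pivoting; assumes M nonsingular."""
    d = len(M)
    A = [row[:] + [float(i == j) for j in range(d)]
         for i, row in enumerate(M)]
    for c in range(d):
        piv = max(range(c, d), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        A[c] = [v / A[c][c] for v in A[c]]
        for r in range(d):
            if r != c and A[r][c] != 0.0:
                f = A[r][c]
                A[r] = [v - f * w for v, w in zip(A[r], A[c])]
    return [row[d:] for row in A]

def error_metric(M_hat, M):
    """D(M_hat, M) = inf_C ||C M_hat^{-1} M - I_d||_F / sqrt(d-1), with
    C = (signed permutation) x (positive diagonal); only the d! row
    assignments need enumerating, the sign and scale are closed-form."""
    d = len(M)
    A = mat_mul(mat_inv(M_hat), M)
    best = min(
        sum(1.0 - A[p[j]][j] ** 2 / sum(v * v for v in A[p[j]])
            for j in range(d))
        for p in itertools.permutations(range(d))
    )
    return (max(best, 0.0) / (d - 1)) ** 0.5

M = [[2.0, 1.0, 0.0], [0.0, 1.0, 3.0], [1.0, 0.0, 1.0]]
# Permuting and rescaling the columns of M (even with negative scales) is
# exactly the ICA non-identifiability, so D should vanish:
M_hat = [[M[i][p] * s for p, s in zip([2, 0, 1], [0.5, -2.0, 3.0])]
         for i in range(3)]
identity = [[float(i == j) for j in range(3)] for i in range(3)]
print(error_metric(M_hat, M), error_metric(identity, M))
```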
The computation of the metric $D$ is available in the R package JADE [57]. We
set the dimension $d=3$ and the sample size $n=500$, and present the results
in Figure 4, where the indices on the $x$-axis denote the different
distributions described in Table 3. The results in Table 3 show that our
estimator has smaller estimation error and variance, except for distribution
(f).
Table 3. Description of the distributions for the ICA simulations and the means and standard deviations of the estimation errors. | | Estimation Error (Standard Deviation)
---|---|---
Index | Distribution | Proposed | MT
$\mathsf{(a)}$ | $\mathsf{N(0,1)^{3}}$ | 0.129 (0.077) | 0.192 (0.094)
$\mathsf{(b)}$ | $\mathsf{N(0,1)^{5}}$ | 0.170 (0.109) | 0.209 (0.120)
$\mathsf{(c)}$ | $\mathsf{Gamma(5,1)}$ | 0.067 (0.028) | 0.073 (0.049)
$\mathsf{(d)}$ | $\mathsf{Gamma(10,1)}$ | 0.127 (0.069) | 0.148 (0.088)
$\mathsf{(e)}$ | $\mathsf{\frac{3}{10}Exp(1)+\frac{7}{10}Exp(5)}$ | 0.034 (0.046) | 0.038 (0.055)
$\mathsf{(f)}$ | $\mathsf{\frac{3}{10}N(-2,1)+\frac{7}{10}N(2,1)}$ | 0.157 (0.121) | 0.181 (0.120)
$\mathsf{(g)}$ | $\mathsf{Uniform(0,1)}$ | 0.189 (0.110) | 0.310 (0.132)
$\mathsf{(h)}$ | $\mathsf{\frac{7}{10}N(-2,3)+\frac{3}{10}N(2,1)}$ | 0.115 (0.117) | 0.174 (0.134)
$\mathsf{(i)}$ | $\mathsf{\frac{5}{10}N(-2,2)+\frac{5}{10}N(2,2)}$ | 0.390 (0.105) | 0.435 (0.104)
$\mathsf{(j)}$ | $\mathsf{(\frac{5}{10}N(-2,2)+\frac{5}{10}N(2,2))^{3}}$ | 0.121 (0.085) | 0.205 (0.093)
$\mathsf{(k)}$ | $\mathsf{(\frac{5}{10}N(-2,2)+\frac{5}{10}N(2,2))^{5}}$ | 0.191 (0.097) | 0.236 (0.107)
$\mathsf{(l)}$ | $\mathsf{(\frac{5}{10}N(-2,2)+\frac{5}{10}N(2,2))^{7}}$ | 0.131 (0.084) | 0.157 (0.101)
Figure 4. ICA estimation error across 12 distributions: the blue boxplots
correspond to the Matteson-Tsay estimator and the green boxplots correspond
to the proposed estimator based on $\mathrm{RJdCov}$.
## 7\. Real data application
In this section, we apply our method to the stock price data available in the
R package gmm. The data can be accessed using the command data(finance). The
dataset contains prices for 24 stocks corresponding to 20 companies in the US,
from January 5th, 1993 to January 30th, 2009. Since the dataset has several
missing entries before 2003, we use data from the 2003-2009 period for 9 of
these companies (described below) in our analysis. To account for the
temporal dependence, we average the stock prices over each week, which leaves
$n=300$ data points. We categorize the companies into 6 industries according
to the North American Industry Classification System (NAICS) as follows: (1)
the Real Estate (RE) category, which contains the company NHP; (2) the Food
and Retail (FR) category, which includes the companies WMK and GCO; (3) the
Manufacture (M) category, including ROG, MAT, and VOXX; (4) the Finance (F)
category, with the company FNM; (5) Communications Systems (JCS); and (6) Zoom
Technologies (ZOOM). We create 2 separate categories for the companies JCS
and ZOOM because their stock prices seem to vary independently of the other
companies, possibly due to their relatively small scales, even though they
both belong to the information industry.
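The weekly-averaging step can be sketched as follows (a pure-Python illustration; the field layout of the gmm finance data is not reproduced here, so the input format is an assumption):

```python
from datetime import date, timedelta
from collections import defaultdict

def weekly_average(daily):
    """daily: list of (date, price) pairs -> list of weekly mean prices,
    grouped by (ISO year, ISO week) and returned in chronological order."""
    buckets = defaultdict(list)
    for day, price in daily:
        iso = day.isocalendar()
        buckets[(iso[0], iso[1])].append(price)
    return [sum(v) / len(v) for _, v in sorted(buckets.items())]

# Two weeks of synthetic daily prices starting on a Monday:
start = date(2003, 1, 6)
daily = [(start + timedelta(days=i), 100.0 + (i % 5)) for i in range(14)]
weekly = weekly_average(daily)
print(weekly)
```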
### 7.1. Results and Analysis
To understand the dependency structure between the 6 categories (industries)
described above, we first test the pairwise dependence of the stock prices
between different industries. Then we test for higher-order dependencies
between triples of categories for which the null hypothesis of pairwise
independence is accepted. We report the results for the tests based on
$\mathrm{RdCov}/\mathrm{RJdCov}$ and $\mathrm{dCov}/\mathrm{JdCov}$ for
comparison. Throughout we set the significance level to $0.05$.
The $p$-values for the pairwise independence tests are summarized in Table 4.
Note that this entails performing ${6\choose 2}=15$ tests, so the $p$-values
are adjusted using the Benjamini-Hochberg (BH) procedure. To better visualize
the results, we plot the
dependency graph between 6 categories using function dependence.structure in R
package multivariance [8]; the results are shown in Figure 5. From these we
can see that the hypothesis of pairwise independence is accepted by
$\mathrm{RdCov}$ for the following 3 triples: (I) Real Estate-JCS-ZOOM, (II)
Finance-JCS-ZOOM, and (III) Manufacture-JCS-ZOOM. On the other hand,
$\mathrm{dCov}$ accepts the pairwise independence hypothesis only for the
triple Real Estate-JCS-ZOOM. This might be due to the small scale of the
companies JCS and ZOOM, which tends to make them independent of each other and
of the other categories. Moreover, observe that the Manufacture and the Food
and Retail categories tend to be dependent on all the others, which is also
reasonable because their services are essential to the other industries.
Table 4. $p$-values for pairwise independence testing among the 6 categories. | $p$-values
---|---
Industry-Industry | JdCov | Proposed
RE-JCS | 0.096 | 0.136
RE-ZOOM | 0.188 | 0.107
RE-FR | 0.002 | 0.000
RE-M | 0.002 | 0.000
RE-F | 0.002 | 0.000
JCS-ZOOM | 0.152 | 0.136
JCS-FR | 0.002 | 0.004
JCS-M | 0.002 | 0.258
JCS-F | 0.102 | 0.408
ZOOM-FR | 0.008 | 0.010
ZOOM-M | 0.012 | 0.058
ZOOM-F | 0.032 | 0.968
FR-M | 0.002 | 0.000
FR-F | 0.002 | 0.000
M-F | 0.002 | 0.000
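The BH adjustment used above is only a few lines to implement. The following sketch (with illustrative p-values, not those of Table 4) computes the adjusted values $q_{(i)}=\min_{j\geq i} m\,p_{(j)}/j$, capped at 1, in the original order:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values: walk the sorted p-values from
    the largest down, keeping a running minimum of m * p_(j) / j."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running = 1.0
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running = min(running, m * pvals[i] / rank)
        adj[i] = running
    return adj

raw = [0.002, 0.03, 0.2, 0.004, 0.6]   # illustrative raw p-values
adjusted = [round(q, 4) for q in bh_adjust(raw)]
print(adjusted)  # → [0.01, 0.05, 0.25, 0.01, 0.6]
```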
The $p$-values for testing third-order independence for the 3 triples for
which the hypothesis of pairwise independence is accepted are given in Table
5. We observe from Table 5 that the third-order $\mathrm{dCov}$ based
$p$-value, which tests for the null hypothesis of joint independence of RE,
JCS, and ZOOM given their pairwise independence, accepts the null hypothesis
at level $0.05$. However, the $\mathrm{JdCov}$ based $p$-value, which tests
for the null hypothesis of joint independence of RE, JCS, and ZOOM (without
any assumptions on pairwise independence), rejects the null hypothesis of
joint independence at level $0.05$. Given that the pairwise independence
hypothesis has been accepted, for consistent hierarchical interpretability one
would ideally have expected the third-order $\mathrm{dCov}$ and the
$\mathrm{JdCov}$ tests to reach the same conclusion. On the other hand, the
conclusions from the
$\mathrm{RdCov}$ and $\mathrm{RJdCov}$ $p$-values are hierarchically
consistent. Moreover, Figure 5(b) shows that there is higher-order dependence
between JCS, ZOOM, and Manufacture industries when using $\mathrm{RdCov}$ and
$\mathrm{RJdCov}$. This may be because JCS, ZOOM, and the Manufacture category
typically serve as downstream industries, while direct links between them are
not obvious due to the relatively small scales of the companies.
Figure 5. Stock prices data: (a) Dependency graph obtained using higher-order
$\mathrm{dCov}$, (b) Dependency graph obtained using higher-order
$\mathrm{RdCov}$.
Table 5. $p$-values for third order independence given pairwise independence. | $\mathrm{dCov}$ | $\mathrm{JdCov}$ | $\mathrm{RdCov}$ | $\mathrm{RJdCov}$
---|---|---|---|---
Real estate-JCS- ZOOM | 0.101 | 0.020 | 0.187 | 0.063
Finance-JCS-ZOOM | | | 0.274 | 0.425
Manufacture-JCS-ZOOM | | | 0.003 | 0.006
### Acknowledgment
The authors are grateful to Björn Böttcher and David Matteson for sharing
their code and datasets. BBB was partly supported by NSF CAREER Grant
DMS-2046393 and a Sloan Research Fellowship.
## References
* Agarwal and Sharathkumar [2014] P. K. Agarwal and R. Sharathkumar. Approximation algorithms for bipartite matching with metric and geometric costs. In _STOC’14—Proceedings of the 2014 ACM Symposium on Theory of Computing_ , pages 555–564. ACM, New York, 2014.
* Ai et al. [2022] C. Ai, L.-H. Sun, Z. Zhang, and L. Zhu. Testing unconditional and conditional independence via mutual information. _Journal of Econometrics_ , 2022.
* Banerjee and Ghosh [2022] B. Banerjee and A. K. Ghosh. Test of independence for Hilbertian random variables. _Stat_ , 11(1):e474, 2022.
* Beran et al. [2007] R. Beran, M. Bilodeau, and P. L. de Micheaux. Nonparametric tests of independence between random vectors. _Journal of Multivariate Analysis_ , 98(9):1805–1824, 2007.
* Bergsma and Dassios [2014] W. Bergsma and A. Dassios. A consistent test of independence based on a sign covariance related to Kendall's tau. _Bernoulli_ , 20(2):1006–1028, 2014.
* Berrett and Samworth [2019] T. B. Berrett and R. J. Samworth. Nonparametric independence testing via mutual information. _Biometrika_ , 106(3):547–566, 2019.
* Blum et al. [1961] J. R. Blum, J. Kiefer, and M. Rosenblatt. Distribution free tests of independence based on the sample distribution function. _Ann. Math. Statist._ , 32:485–498, 1961.
* Böttcher [2020] B. Böttcher. Dependence and dependence structures: estimation and visualization using the unifying concept of distance multivariance. _Open Statistics_ , 1(1):1–48, 2020.
* Böttcher et al. [2019] B. Böttcher, M. Keller-Ressel, and R. L. Schilling. Distance multivariance: New dependence measures for random vectors. _The Annals of Statistics_ , 47(5):2757–2789, 2019.
* Brenier [1991] Y. Brenier. Polar factorization and monotone rearrangement of vector-valued functions. _Communications on pure and applied mathematics_ , 44(4):375–417, 1991.
* Chacon and Rodriguez-Casal [2010] J. E. Chacon and A. Rodriguez-Casal. A note on the universal consistency of the kernel distribution function estimator. _Statistics & Probability Letters_, 80(17-18):1414–1419, 2010.
* Chakraborty and Zhang [2019] S. Chakraborty and X. Zhang. Distance metrics for measuring joint dependence with application to causal inference. _Journal of the American Statistical Association_ , 2019.
* Chatterjee [2007] S. Chatterjee. Stein’s method, 2007. URL https://souravchatterjee.su.domains/AllLectures.pdf.
* Chatterjee [2021] S. Chatterjee. A new coefficient of correlation. _Journal of the American Statistical Association_ , 116(536):2009–2022, 2021.
* Chatterjee [2022] S. Chatterjee. A survey of some recent developments in measures of association. _arXiv preprint arXiv:2211.04702_ , 2022.
* Chernozhukov et al. [2017] V. Chernozhukov, A. Galichon, M. Hallin, and M. Henry. Monge–Kantorovich depth, quantiles, ranks and signs. _The Annals of Statistics_ , 45(1):223–256, 2017.
* Csörgő [1985] S. Csörgő. Testing for independence by the empirical characteristic function. _Journal of Multivariate Analysis_ , 16(3):290–299, 1985.
* Deb and Sen [2021] N. Deb and B. Sen. Multivariate rank-based distribution-free nonparametric testing using measure transportation. _Journal of the American Statistical Association_ , pages 1–16, 2021.
* Deb et al. [2020] N. Deb, P. Ghosal, and B. Sen. Measuring association on topological spaces using kernels and geometric graphs. _arXiv preprint arXiv:2010.01768_ , 2020.
* Deb et al. [2021] N. Deb, B. B. Bhattacharya, and B. Sen. Efficiency lower bounds for distribution-free Hotelling-type two-sample tests based on optimal transport. _arXiv preprint arXiv:2104.01986_ , 2021.
* Feuerverger [1993] A. Feuerverger. A consistent test for bivariate dependence. _International Statistical Review/Revue Internationale de Statistique_ , pages 419–433, 1993.
* Friedman and Rafsky [1983] J. H. Friedman and L. C. Rafsky. Graph-theoretic measures of multivariate association and prediction. _The Annals of Statistics_ , pages 377–391, 1983.
* Gaißer et al. [2010] S. Gaißer, M. Ruppert, and F. Schmid. A multivariate version of Hoeffding’s phi-square. _Journal of Multivariate Analysis_ , 101(10):2571–2586, 2010.
* Ghosal and Sen [2022] P. Ghosal and B. Sen. Multivariate ranks and quantiles using optimal transport: Consistency, rates and nonparametric testing. _The Annals of Statistics_ , 50(2):1012–1037, 2022.
* Gieser [1993] P. W. Gieser. _A new nonparametric test for independence between two sets of variates_. PhD thesis, University of Florida, 1993.
* Gieser and Randles [1997] P. W. Gieser and R. H. Randles. A nonparametric test of independence between two vectors. _Journal of the American Statistical Association_ , 92(438):561–567, 1997.
* Gretton et al. [2005a] A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In _International conference on algorithmic learning theory_ , pages 63–77. Springer, 2005a.
* Gretton et al. [2005b] A. Gretton, R. Herbrich, A. Smola, O. Bousquet, B. Schölkopf, et al. Kernel methods for measuring independence. 2005b.
* Gretton et al. [2007] A. Gretton, K. Fukumizu, C. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. _Advances in neural information processing systems_ , 20, 2007.
* Hallin [2017] M. Hallin. On distribution and quantile functions, ranks and signs in $\mathbb{R}^{d}$. _ECARES Working Papers_ , 2017.
* Hallin et al. [2020a] M. Hallin, E. del Barrio, J. A. Cuesta-Albertos, and C. Matrán. On distribution and quantile functions, ranks, and signs in $\mathbb{R}^{d}$: a measure transportation approach. _Ann.Stat. (to appear)_ , 2020a.
* Hallin et al. [2020b] M. Hallin, D. La Vecchia, and H. Liu. Center-outward R-estimation for semiparametric VARMA models. _Journal of the American Statistical Association_ , pages 1–14, 2020b.
* Hallin et al. [2022] M. Hallin, D. Hlubinka, and Š. Hudecová. Efficient fully distribution-free center-outward rank tests for multiple-output regression and MANOVA. _Journal of the American Statistical Association_ , pages 1–17, 2022.
* Heller et al. [2012] R. Heller, M. Gorfine, and Y. Heller. A class of multivariate distribution-free tests of independence based on graphs. _Journal of Statistical Planning and Inference_ , 142(12):3097–3106, 2012.
* Heller et al. [2013] R. Heller, Y. Heller, and M. Gorfine. A consistent multivariate test of association based on ranks of distances. _Biometrika_ , 100(2):503–510, 2013.
* Heller et al. [2016] R. Heller, Y. Heller, S. Kaufman, B. Brill, and M. Gorfine. Consistent distribution-free $k$-sample and independence tests for univariate random variables. _The Journal of Machine Learning Research_ , 17(1):978–1031, 2016.
* Hoeffding [1948] W. Hoeffding. A Non-Parametric Test of Independence. _The Annals of Mathematical Statistics_ , 19(4):546–557, 1948.
* Hofer [2009] R. Hofer. On the distribution properties of Niederreiter-Halton sequences. _J. Number Theory_ , 129(2):451–463, 2009. ISSN 0022-314X.
* Hyvärinen and Oja [2000] A. Hyvärinen and E. Oja. Independent component analysis: algorithms and applications. _Neural networks_ , 13(4-5):411–430, 2000.
* Jonker and Volgenant [1987] R. Jonker and A. Volgenant. A shortest augmenting path algorithm for dense and sparse linear assignment problems. _Computing_ , 38(4):325–340, 1987.
* Josse and Holmes [2016] J. Josse and S. Holmes. Measuring multivariate association and beyond. _Stat. Surv._ , 10:132–167, 2016.
* Kankainen [1995] A. Kankainen. _Consistent testing of total independence based on the empirical characteristic function_. University of Jyväskylä, 1995.
* Kinney and Atwal [2014] J. B. Kinney and G. S. Atwal. Equitability, mutual information, and the maximal information coefficient. _Proceedings of the National Academy of Sciences_ , 111(9):3354–3359, 2014.
* Konijn [1956] H. Konijn. On the power of certain tests for independence in bivariate populations. _The Annals of Mathematical Statistics_ , 27(2):300–323, 1956.
* Kosorok [2008] M. R. Kosorok. _Introduction to empirical processes and semiparametric inference._ Springer, 2008.
* Kuo [1975] H.-H. Kuo. Gaussian measures in Banach spaces. _Gaussian Measures in Banach Spaces_ , pages 1–109, 1975.
* Lehmann and Romano [2005] E. L. Lehmann and J. P. Romano. _Testing Statistical Hypotheses_. Springer Texts in Statistics. Springer, New York, third edition, 2005. ISBN 0-387-98864-5.
* Li et al. [2012] R. Li, W. Zhong, and L. Zhu. Feature screening via distance correlation learning. _Journal of the American Statistical Association_ , 107(499):1129–1139, 2012.
* Ma [2022] J. Ma. Evaluating independence and conditional independence measures. _arXiv preprint arXiv:2205.07253_ , 2022.
* Ma and Mao [2019] L. Ma and J. Mao. Fisher exact scanning for dependency. _Journal of the American Statistical Association_ , 114(525):245–258, 2019.
* Matteson [2008] D. S. Matteson. _Statistical inference for multivariate nonlinear time series_. The University of Chicago, 2008.
* Matteson and Tsay [2017] D. S. Matteson and R. S. Tsay. Independent component analysis via distance covariance. _Journal of the American Statistical Association_ , 112(518):623–637, 2017.
* McCann [1995] R. J. McCann. Existence and uniqueness of monotone measure-preserving maps. _Duke Mathematical Journal_ , 80(2):309–323, 1995.
* Monge [1781] G. Monge. Mémoire sur la théorie des déblais et des remblais. _Histoire de l’Académie Royale des Sciences de Paris_ , 1781.
* Moon and Chen [2022] H. Moon and K. Chen. Interpoint-ranking sign covariance for the test of independence. _Biometrika_ , 109(1):165–179, 2022.
* Nandy et al. [2016] P. Nandy, L. Weihs, and M. Drton. Large-sample theory for the Bergsma-Dassios sign covariance. _Electronic Journal of Statistics_ , 10(2):2287–2311, 2016.
* Nordhausen et al. [2014] K. Nordhausen, J. Cardoso, J. Miettinen, H. Oja, E. Ollila, and S. Taskinen. JADE: JADE and other BSS methods as well as some BSS performance criteria. _R package version_ , pages 1–9, 2014.
* Pan et al. [2019] W. Pan, X. Wang, H. Zhang, H. Zhu, and J. Zhu. Ball covariance: A generic measure of dependence in Banach space. _Journal of the American Statistical Association_ , 2019.
* Pfister et al. [2018] N. Pfister, P. Bühlmann, B. Schölkopf, and J. Peters. Kernel-based tests for joint independence. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_ , 80(1):5–31, 2018.
* Póczos et al. [2012] B. Póczos, Z. Ghahramani, and J. Schneider. Copula-based kernel dependency measures. _arXiv preprint arXiv:1206.4682_ , 2012.
* Reshef et al. [2011] D. N. Reshef, Y. A. Reshef, H. K. Finucane, S. R. Grossman, G. McVean, P. J. Turnbaugh, E. S. Lander, M. Mitzenmacher, and P. C. Sabeti. Detecting novel associations in large data sets. _Science_ , 334(6062):1518–1524, 2011.
* Resnick [2013] S. I. Resnick. _A Probability Path_. Springer Science & Business Media, 2013.
* Roy et al. [2020] A. Roy, S. Sarkar, A. K. Ghosh, and A. Goswami. On some consistent tests of mutual independence among several random vectors of arbitrary dimensions. _Statistics and Computing_ , 30(6):1707–1723, 2020.
* Sejdinovic et al. [2013a] D. Sejdinovic, A. Gretton, and W. Bergsma. A kernel test for three-variable interactions. _Advances in Neural Information Processing Systems_ , 26, 2013a.
* Sejdinovic et al. [2013b] D. Sejdinovic, B. Sriperumbudur, A. Gretton, and K. Fukumizu. Equivalence of distance-based and RKHS-based statistics in hypothesis testing. _Ann. Statist._ , 41(5):2263–2291, 2013b.
* Sejdinovic et al. [2013c] D. Sejdinovic, B. Sriperumbudur, A. Gretton, and K. Fukumizu. Equivalence of distance-based and RKHS-based statistics in hypothesis testing. _The Annals of Statistics_ , pages 2263–2291, 2013c.
* Shi et al. [2020a] H. Shi, M. Drton, and F. Han. Distribution-free consistent independence tests via center-outward ranks and signs. _Journal of the American Statistical Association_ , pages 1–16, 2020a.
* Shi et al. [2020b] H. Shi, M. Hallin, M. Drton, and F. Han. On universally consistent and fully distribution-free rank tests of vector independence. _arXiv preprint arXiv:2007.02186_ , 2020b.
* Székely and Rizzo [2009] G. J. Székely and M. L. Rizzo. Brownian distance covariance. _The Annals of Applied Statistics_ , 3(4):1236–1265, 2009.
* Székely et al. [2007] G. J. Székely, M. L. Rizzo, and N. K. Bakirov. Measuring and testing dependence by correlation of distances. _The Annals of Statistics_ , 35(6):2769–2794, 2007.
* Van der Vaart [2000] A. W. Van der Vaart. _Asymptotic Statistics_ , volume 3. Cambridge University Press, 2000.
* Van Der Vaart and Wellner [1996] A. W. Van Der Vaart and J. Wellner. _Weak convergence and empirical processes: with applications to statistics_. Springer Science & Business Media, 1996.
* Villani [2003] C. Villani. _Topics in optimal transportation_ , volume 58 of _Graduate Studies in Mathematics_. American Mathematical Society, Providence, RI, 2003. ISBN 0-8218-3312-X.
* Wu et al. [2009] E. H. Wu, L. Philip, and W. K. Li. A smoothed bootstrap test for independence based on mutual information. _Computational statistics & data analysis_, 53(7):2524–2536, 2009.
* Yanagimoto [1970] T. Yanagimoto. On measures of association and a related problem. _Annals of the Institute of Statistical Mathematics_ , 22(1):57–63, 1970.
* Yao et al. [2018] S. Yao, X. Zhang, and X. Shao. Testing mutual independence in high dimension via distance covariance. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_ , 80(3):455–480, 2018.
* Zhang [2019] K. Zhang. Bet on independence. _Journal of the American Statistical Association_ , 114(528):1620–1637, 2019.
* Zhang et al. [2018] Q. Zhang, S. Filippi, A. Gretton, and D. Sejdinovic. Large-scale kernel methods for independence testing. _Statistics and Computing_ , 28(1):113–130, 2018.
* Zhu et al. [2020] H. Zhu, R. Li, R. Zhang, and H. Lian. Nonlinear functional canonical correlation analysis via distance covariance. _Journal of Multivariate Analysis_ , 180:104662, 2020.
## Appendix A Combinatorial CLT with multiple permutations
In this section we derive an analogue of Hoeffding’s combinatorial CLT [37]
for tensors with multiple random permutations. We begin with the following
assumption:
###### Assumption 5.
Fix $r\geq 2$ and suppose $\bm{A}_{n}=((a_{i_{1},i_{2},\ldots,i_{r}}))_{1\leq
i_{1},i_{2},\ldots,i_{r}\leq n}$ is a sequence of $r$-tensors satisfying the
following conditions:
* $(1)$
For all $1\leq s\leq r$ and $1\leq
i_{1},i_{2},\ldots,i_{s-1},i_{s+1},\ldots,i_{r}\leq n$,
$\displaystyle\sum_{i_{s}=1}^{n}a_{i_{1},i_{2},\cdots,i_{r}}=0.$ (A.1)
* $(2)$
There are universal constants $K_{1},K_{2}>0$ such that
$|a_{i_{1},i_{2},\ldots,i_{r}}|\leq K_{1}/\sqrt{n}$, for all $1\leq
i_{1},i_{2},\ldots,i_{r}\leq n$, and $\sum_{1\leq i_{1},i_{2},\ldots,i_{r}\leq
n}a_{i_{1},i_{2},\ldots,i_{r}}^{2}\geq K_{2}n^{r-1}$.
Under this assumption we have the following theorem:
###### Theorem A.1.
Fix $r\geq 2$ and suppose $\bm{A}_{n}=((a_{i_{1},i_{2},\ldots,i_{r}}))_{1\leq
i_{1},i_{2},\cdots,i_{r}\leq n}$ is a sequence of $r$-tensors satisfying
Assumption 5. Consider $r-1$ independent uniformly random permutations
$\pi_{1},\pi_{2},\ldots,\pi_{r-1}$ of the set $\\{1,\ldots,n\\}$ and define
$C_{n}:=\sum_{i=1}^{n}a_{i,\pi_{1}(i),\ldots,\pi_{r-1}(i)}.$
Then, as $n\rightarrow\infty$,
$\displaystyle\frac{C_{n}}{\sqrt{\operatorname{Var}[C_{n}]}}\stackrel{{\scriptstyle
D}}{{\to}}N(0,1).$
The proof of Theorem A.1 is given in Section A.1. We first show that by
centering $\bm{A}_{n}$, condition (1) in Assumption 5 holds without loss of
generality.
###### Remark A.2.
Suppose $\bm{A}_{n}$ is an $r$-tensor not satisfying (A.1). Then we can
construct a tensor
$\tilde{\bm{A}}_{n}=((\tilde{a}_{i_{1},i_{2},\ldots,i_{r}}))_{1\leq
i_{1},i_{2},\ldots,i_{r}\leq n}$ which satisfies (A.1). We illustrate
this for $r=3$. For $1\leq i_{1},i_{2},i_{3}\leq n$, define
$\displaystyle\tilde{a}_{i_{1},i_{2},i_{3}}=a_{i_{1},i_{2},i_{3}}-a_{\bullet,i_{2},i_{3}}-a_{i_{1},\bullet,i_{3}}-a_{i_{1},i_{2},\bullet}+a_{i_{1},\bullet,\bullet}+a_{\bullet,i_{2},\bullet}+a_{\bullet,\bullet,i_{3}}-a_{\bullet,\bullet,\bullet}$
where
* •
$a_{\bullet,i_{2},i_{3}}=\frac{1}{n}\sum_{i_{1}=1}^{n}a_{i_{1},i_{2},i_{3}}$,
$a_{i_{1},\bullet,i_{3}}=\frac{1}{n}\sum_{i_{2}=1}^{n}a_{i_{1},i_{2},i_{3}}$,
$a_{i_{1},i_{2},\bullet}=\frac{1}{n}\sum_{i_{3}=1}^{n}a_{i_{1},i_{2},i_{3}}$;
* •
$a_{i_{1},\bullet,\bullet}=\frac{1}{n^{2}}\sum_{1\leq i_{2},i_{3}\leq
n}a_{i_{1},i_{2},i_{3}}$,
$a_{\bullet,i_{2},\bullet}=\frac{1}{n^{2}}\sum_{1\leq i_{1},i_{3}\leq
n}a_{i_{1},i_{2},i_{3}}$,
$a_{\bullet,\bullet,i_{3}}=\frac{1}{n^{2}}\sum_{1\leq i_{1},i_{2}\leq
n}a_{i_{1},i_{2},i_{3}}$;
* •
$a_{\bullet,\bullet,\bullet}=\frac{1}{n^{3}}\sum_{1\leq i_{1},i_{2},i_{3}\leq
n}a_{i_{1},i_{2},i_{3}}$.
Clearly, $\tilde{\bm{A}}_{n}=((\tilde{a}_{i_{1},i_{2},i_{3}}))_{1\leq
i_{1},i_{2},i_{3}\leq n}$ satisfies (A.1).
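The centering in Remark A.2 is straightforward to implement and check. The sketch below (ours, in Python) builds the centered tensor from the one- and two-index averages defined in the bullet points above and verifies that every axis-sum vanishes:

```python
import random

def center_tensor(a):
    """Three-way centering of an n x n x n tensor (Remark A.2): subtract
    the three two-index averages, add back the three one-index averages,
    and subtract the grand mean."""
    n = len(a)
    r = range(n)
    g = sum(a[i][j][k] for i in r for j in r for k in r) / n**3
    m_jk = [[sum(a[i][j][k] for i in r) / n for k in r] for j in r]
    m_ik = [[sum(a[i][j][k] for j in r) / n for k in r] for i in r]
    m_ij = [[sum(a[i][j][k] for k in r) / n for j in r] for i in r]
    m_i = [sum(a[i][j][k] for j in r for k in r) / n**2 for i in r]
    m_j = [sum(a[i][j][k] for i in r for k in r) / n**2 for j in r]
    m_k = [sum(a[i][j][k] for i in r for j in r) / n**2 for k in r]
    return [[[a[i][j][k] - m_jk[j][k] - m_ik[i][k] - m_ij[i][j]
              + m_i[i] + m_j[j] + m_k[k] - g
              for k in r] for j in r] for i in r]

rng = random.Random(0)
n = 4
a = [[[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
     for _ in range(n)]
t = center_tensor(a)

# Condition (A.1): the sum along each of the three axes vanishes for
# every choice of the remaining two indices.
axis_sums = (
    [sum(t[i][j][k] for i in range(n)) for j in range(n) for k in range(n)]
    + [sum(t[i][j][k] for j in range(n)) for i in range(n) for k in range(n)]
    + [sum(t[i][j][k] for k in range(n)) for i in range(n) for j in range(n)]
)
print(max(abs(s) for s in axis_sums))
```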
### A.1. Proof of Theorem A.1
For notational convenience we will present the proof of Theorem A.1 for $r=3$;
the proof extends to general $r$ by the same arguments. Hereafter, we will
assume $\bm{A}_{n}=((a_{i_{1},i_{2},i_{3}}))_{1\leq i_{1},i_{2},i_{3}\leq n}$
is a sequence of 3-tensors satisfying Assumption 5. We begin by computing
$\operatorname{Var}[C_{n}]$.
###### Lemma A.3.
Suppose Assumption 5 holds. Then
$\displaystyle\operatorname{Var}[C_{n}]=\frac{n-2}{n(n-1)^{2}}\sum_{1\leq
i_{1},i_{2},i_{3}\leq n}a_{i_{1},i_{2},i_{3}}^{2}=\Theta(1).$
###### Proof.
Note that
$\displaystyle\operatorname{Var}[C_{n}]=\sum_{i=1}^{n}\operatorname{Var}[a_{i,\pi_{1}(i),\pi_{2}(i)}]+\sum_{1\leq
i_{1}\neq i_{1}^{\prime}\leq
n}\operatorname{Cov}[a_{i_{1},\pi_{1}(i_{1}),\pi_{2}(i_{1})},a_{i_{1}^{\prime},\pi_{1}(i_{1}^{\prime}),\pi_{2}(i_{1}^{\prime})}].$
(A.2)
Assumption 5 implies that $\mathbb{E}[a_{i,\pi_{1}(i),\pi_{2}(i)}]=0$, hence,
$\displaystyle\sum_{i=1}^{n}\operatorname{Var}[a_{i,\pi_{1}(i),\pi_{2}(i)}]=\sum_{i=1}^{n}\mathbb{E}[a_{i,\pi_{1}(i),\pi_{2}(i)}^{2}]=\frac{1}{n^{2}}\sum_{1\leq
i_{1},i_{2},i_{3}\leq n}a_{i_{1},i_{2},i_{3}}^{2}.$ (A.3)
Moreover,
$\displaystyle\operatorname{Cov}[a_{i_{1},\pi_{1}(i_{1}),\pi_{2}(i_{1})},a_{i_{1}^{\prime},\pi_{1}(i_{1}^{\prime}),\pi_{2}(i_{1}^{\prime})}]$
$\displaystyle=\mathbb{E}[a_{i_{1},\pi_{1}(i_{1}),\pi_{2}(i_{1})}a_{i_{1}^{\prime}\pi_{1}(i_{1}^{\prime})\pi_{2}(i^{\prime}_{1})}]$
$\displaystyle=\frac{1}{n^{2}(n-1)^{2}}\sum_{1\leq i_{2}\neq
i_{2}^{\prime}\leq n}\sum_{1\leq i_{3}\neq i_{3}^{\prime}\leq
n}a_{i_{1},i_{2},i_{3}}a_{i_{1}^{\prime},i_{2}^{\prime},i_{3}^{\prime}}$
$\displaystyle=\frac{1}{n^{2}(n-1)^{2}}\sum_{1\leq i_{2},i_{3}\leq
n}\left\\{a_{i_{1},i_{2},i_{3}}\sum_{\begin{subarray}{c}1\leq
i_{2}^{\prime}\leq n\\\ i_{2}^{\prime}\neq
i_{2}\end{subarray}}\sum_{\begin{subarray}{c}1\leq i_{3}^{\prime}\leq n\\\
i_{3}^{\prime}\neq
i_{3}\end{subarray}}a_{i_{1}^{\prime},i_{2}^{\prime},i_{3}^{\prime}}\right\\}$
$\displaystyle=\frac{1}{n^{2}(n-1)^{2}}\sum_{1\leq i_{2},i_{3}\leq
n}\left\\{a_{i_{1},i_{2},i_{3}}\sum_{\begin{subarray}{c}1\leq
i_{2}^{\prime}\leq n\\\ i_{2}^{\prime}\neq
i_{2}\end{subarray}}(-a_{i_{1}^{\prime},i_{2}^{\prime},i_{3}})\right\\}$
$\displaystyle=\frac{1}{n^{2}(n-1)^{2}}\sum_{1\leq i_{2},i_{3}\leq
n}\left\\{a_{i_{1},i_{2},i_{3}}a_{i_{1}^{\prime},i_{2},i_{3}}\right\\}.$
Hence,
$\displaystyle\sum_{1\leq i_{1}\neq i_{1}^{\prime}\leq
n}\operatorname{Cov}[a_{i_{1},\pi_{1}(i_{1}),\pi_{2}(i_{1})},a_{i_{1}^{\prime},\pi_{1}(i_{1}^{\prime}),\pi_{2}(i_{1}^{\prime})}]$
$\displaystyle=\frac{1}{n^{2}(n-1)^{2}}\sum_{1\leq i_{1}\neq
i_{1}^{\prime}\leq n}\sum_{1\leq i_{2},i_{3}\leq
n}\left\\{a_{i_{1},i_{2},i_{3}}a_{i_{1}^{\prime},i_{2},i_{3}}\right\\}$
$\displaystyle=\frac{1}{n^{2}(n-1)^{2}}\sum_{1\leq i_{1},i_{2},i_{3}\leq
n}a_{i_{1},i_{2},i_{3}}\sum_{\begin{subarray}{c}1\leq i_{1}^{\prime}\leq n\\\
i_{1}^{\prime}\neq i_{1}\end{subarray}}a_{i_{1}^{\prime},i_{2},i_{3}}$
$\displaystyle=-\frac{1}{n^{2}(n-1)^{2}}\sum_{1\leq i_{1},i_{2},i_{3}\leq
n}a_{i_{1},i_{2},i_{3}}^{2}.$ (A.4)
Therefore, combining (A.3) and (A.4) with (A.2), the lemma follows. ∎
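Lemma A.3 is easy to check by Monte Carlo. The sketch below (ours) uses a rank-one tensor $a_{i_{1},i_{2},i_{3}}=x_{i_{1}}y_{i_{2}}z_{i_{3}}$ with centered vectors $x,y,z$, which satisfies (A.1) since every axis-sum vanishes (the derivation of the variance formula uses only this zero-sum condition), and compares the empirical variance of $C_{n}$ with the closed form:

```python
import random

rng = random.Random(7)
n = 12

def centered(n):
    """A Gaussian vector recentered to have exactly zero sum."""
    v = [rng.gauss(0, 1) for _ in range(n)]
    m = sum(v) / n
    return [t - m for t in v]

x, y, z = centered(n), centered(n), centered(n)

# Lemma A.3: Var[C_n] = (n-2) / (n (n-1)^2) * sum of a^2; for the
# rank-one tensor the sum of squares factorizes across x, y, z.
sum_sq = (sum(t * t for t in x) * sum(t * t for t in y)
          * sum(t * t for t in z))
predicted = (n - 2) / (n * (n - 1) ** 2) * sum_sq

reps = 40000
samples = []
for _ in range(reps):
    p1 = list(range(n)); rng.shuffle(p1)
    p2 = list(range(n)); rng.shuffle(p2)
    samples.append(sum(x[i] * y[p1[i]] * z[p2[i]] for i in range(n)))
mean_c = sum(samples) / reps
mc_var = sum((c - mean_c) ** 2 for c in samples) / reps
print(predicted, mc_var)
```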
Since $\operatorname{Var}[C_{n}]=\Theta(1)$ by Assumption 5, we can always
normalize the elements of $\bm{A}_{n}$ such that
$\operatorname{Var}[C_{n}]=1$. Hereafter, we will assume $\bm{A}_{n}$ is
normalized such that $\operatorname{Var}[C_{n}]=1$. To prove the CLT in
Theorem A.1 we will use Stein's method based on exchangeable pairs [13]. The
first step towards this is to construct an exchangeable pair for the random
permutations $(\pi_{1},\pi_{2})$. This is done as follows:
* •
Choose a transposition $(I,J)$ uniformly at random from the set of
transpositions on $\\{1,\ldots,n\\}$ and define
$\pi_{1}^{\prime}:=\pi_{1}\circ(I,J)$, that is,
$\pi_{1}^{\prime}(I)=\pi_{1}(J)$, $\pi_{1}^{\prime}(J)=\pi_{1}(I)$, and
$\pi_{1}^{\prime}(\ell)=\pi_{1}(\ell)$ for $\ell\notin\\{I,J\\}$.
* •
Similarly, choose an independent transposition $(K,L)$ uniformly at random
from the set of transpositions on $\\{1,\ldots,n\\}$ and define
$\pi_{2}^{\prime}:=\pi_{2}\circ(K,L)$.
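The transposition construction can be sketched directly. In zero-indexed Python, composing $\pi_{1}$ with $(I,J)$ simply swaps two entries of the permutation array:

```python
import random

rng = random.Random(3)
n = 8
pi1 = list(range(n))
rng.shuffle(pi1)                 # pi_1: a uniformly random permutation

I, J = rng.sample(range(n), 2)   # a uniformly random transposition, I != J
pi1_prime = pi1[:]
pi1_prime[I], pi1_prime[J] = pi1[J], pi1[I]

# pi_1' agrees with pi_1 everywhere except at I and J, where the two
# values are swapped; it is still a permutation of {0, ..., n-1}.
print(pi1, (I, J), pi1_prime)
```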
###### Lemma A.4.
$\\{(\pi_{1},\pi_{2}),(\pi_{1}^{\prime},\pi_{2}^{\prime})\\}$ is an
exchangeable pair.
###### Proof.
Denote by $S_{n}$ the collection of all permutations of $\\{1,2,\ldots,n\\}$
and suppose $A_{1},A_{2},A_{1}^{\prime},A_{2}^{\prime}\subseteq S_{n}$. Then by
the independence between $(\pi_{1},\pi_{1}^{\prime})$ and
$(\pi_{2},\pi_{2}^{\prime})$,
$\displaystyle\mathbb{P}((\pi_{1},\pi_{2})\in A_{1}\times
A_{2},(\pi_{1}^{\prime},\pi_{2}^{\prime})\in A_{1}^{\prime}\times
A_{2}^{\prime})$ $\displaystyle=\mathbb{P}(\pi_{1}\in
A_{1},\pi_{1}^{\prime}\in A_{1}^{\prime})\mathbb{P}(\pi_{2}\in
A_{2},\pi_{2}^{\prime}\in A_{2}^{\prime})$
$\displaystyle=\mathbb{P}(\pi_{1}^{\prime}\in A_{1},\pi_{1}\in
A_{1}^{\prime})\mathbb{P}(\pi_{2}^{\prime}\in A_{2},\pi_{2}\in
A_{2}^{\prime})$
$\displaystyle=\mathbb{P}((\pi_{1}^{\prime},\pi_{2}^{\prime})\in A_{1}\times
A_{2},(\pi_{1},\pi_{2})\in A_{1}^{\prime}\times A_{2}^{\prime}),$
where the second equality uses the fact that $(\pi_{1},\pi_{1}^{\prime})$
and $(\pi_{2},\pi_{2}^{\prime})$ are marginally two exchangeable pairs. ∎
Recall that $C_{n}=\sum_{i=1}^{n}a_{i,\pi_{1}(i),\pi_{2}(i)}$ and define
$\displaystyle
C_{n}^{\prime}=\sum_{i=1}^{n}a_{i,\pi_{1}^{\prime}(i),\pi_{2}^{\prime}(i)}.$
(A.5)
By Lemma A.4, $(C_{n},C_{n}^{\prime})$ is an exchangeable pair. Hence, by
Stein’s method based on exchangeable pairs we can obtain bounds on the
Wasserstein distance between $C_{n}$ and $N(0,1)$ in terms of the moments of
the difference $C_{n}^{\prime}-C_{n}$. To this end, recall that the
Wasserstein distance between random variables $X\sim\mu$ and $Y\sim\nu$ on
$\mathbb{R}$ is defined as
$\mathrm{Wass}(X,Y)=\sup\left\\{\left|\int f\mathrm{d}\mu-\int
f\mathrm{d}\nu\right|:f\text{ is }1-\text{Lipschitz}\right\\},$
where a function $f:\mathbb{R}\rightarrow\mathbb{R}$ is 1-Lipschitz if
$|f(x)-f(y)|\leq|x-y|$, for all $x,y\in\mathbb{R}$. We now invoke the
following theorem:
###### Theorem A.5.
[13] Let $(C_{n},C_{n}^{\prime})$ be as defined above. If
$\mathbb{E}[C_{n}^{\prime}-C_{n}|C_{n}]=-\lambda C_{n}$, for some
$0<\lambda<1$, and $\mathbb{E}[C_{n}^{2}]=1$, then
$\displaystyle\mathrm{Wass}(C_{n},N(0,1))\leq\sqrt{\frac{2}{\pi}\operatorname{Var}\left[\mathbb{E}\left[\frac{(C_{n}^{\prime}-C_{n})^{2}}{2\lambda}\Big{|}C_{n}\right]\right]}+\frac{1}{3\lambda}\mathbb{E}\left[|C_{n}^{\prime}-C_{n}|^{3}\right].$
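For one-dimensional empirical distributions with equal sample sizes, the Wasserstein distance above admits a simple closed form: the optimal coupling matches order statistics. A minimal stdlib sketch (the function name is ours, not the paper's):

```python
def wasserstein_1d(xs, ys):
    """1-Wasserstein distance between two equal-size empirical samples on
    the real line: the mean absolute difference of sorted values."""
    assert len(xs) == len(ys)
    pairs = zip(sorted(xs), sorted(ys))
    return sum(abs(x - y) for x, y in pairs) / len(xs)
```

For instance, `wasserstein_1d([0, 1, 3], [5, 6, 8])` returns `5.0`, since the sorted samples differ by 5 at every quantile.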
The rest of the proof is organized as follows:
* (1)
In Lemma A.6 we evaluate $\mathbb{E}[C_{n}^{\prime}-C_{n}|C_{n}]$ and show
that $\mathbb{E}[C_{n}^{\prime}-C_{n}|C_{n}]=-\lambda C_{n}$ holds with
$\lambda=\Theta(1/n)$.
* (2)
Next, in Lemma A.7 we show
$\frac{1}{\lambda}\mathbb{E}[|C_{n}^{\prime}-C_{n}|^{3}]=O(1/\sqrt{n})$.
* (3)
Finally, in Lemma A.8 we show that
$\operatorname{Var}[\mathbb{E}[\frac{(C_{n}^{\prime}-C_{n})^{2}}{2\lambda}|C_{n}]]=O(1/n)$.
These three steps combined with Theorem A.5 imply
$\mathrm{Wass}(C_{n},N(0,1))=O(1/\sqrt{n})$, thus completing the proof of
Theorem A.1.
###### Lemma A.6.
$\mathbb{E}[C_{n}^{\prime}-C_{n}|C_{n}]=-\lambda C_{n}$, where
$\displaystyle\lambda=\frac{4(n^{2}-7n+11)}{n^{2}(n-1)}=\Theta\left(\frac{1}{n}\right).$
###### Proof.
Note that
$\displaystyle
C_{n}^{\prime}-C_{n}=\begin{cases}\Delta_{1}(I,J)-\Delta_{0}(I,J)&\text{if
}K=I,L=J\text{ or }K=J,L=I,\\\ \Delta_{2}(I,J,K)-\Delta_{0}(I,J,K)&\text{if
}K\neq J,L=I,\\\ \Delta_{2}(I,J,L)-\Delta_{0}(I,J,L)&\text{if }K=I,L\neq J,\\\
\Delta_{3}(I,J,K)-\Delta_{0}(I,J,K)&\text{if }K\neq I,L=J,\\\
\Delta_{3}(I,J,L)-\Delta_{0}(I,J,L)&\text{if }K=J,L\neq I,\\\
\Delta_{4}(I,J,K,L)-\Delta_{0}(I,J,K,L)&\text{otherwise;}\end{cases}$ (A.6)
where
$\displaystyle\Delta_{0}(I,J)$
$\displaystyle:=a_{I,\pi_{1}(I),\pi_{2}(I)}+a_{J,\pi_{1}(J),\pi_{2}(J)},$
$\displaystyle\Delta_{0}(I,J,K)$
$\displaystyle:=a_{I,\pi_{1}(I),\pi_{2}(I)}+a_{J,\pi_{1}(J),\pi_{2}(J)}+a_{K,\pi_{1}(K),\pi_{2}(K)},$
$\displaystyle\Delta_{0}(I,J,K,L)$
$\displaystyle:=a_{I,\pi_{1}(I),\pi_{2}(I)}+a_{J,\pi_{1}(J),\pi_{2}(J)}+a_{K,\pi_{1}(K),\pi_{2}(K)}+a_{L,\pi_{1}(L),\pi_{2}(L)};$
and
$\displaystyle\Delta_{1}(I,J)$
$\displaystyle:=a_{I,\pi_{1}(J),\pi_{2}(J)}+a_{J,\pi_{1}(I),\pi_{2}(I)}$
$\displaystyle\Delta_{2}(I,J,\cdot)$
$\displaystyle:=a_{I,\pi_{1}(J),\pi_{2}(\cdot)}+a_{J,\pi_{1}(I),\pi_{2}(J)}+a_{\cdot,\pi_{1}(\cdot),\pi_{2}(I)}$
$\displaystyle\Delta_{3}(I,J,\cdot)$
$\displaystyle:=a_{I,\pi_{1}(J),\pi_{2}(I)}+a_{J,\pi_{1}(I),\pi_{2}(\cdot)}+a_{\cdot,\pi_{1}(\cdot),\pi_{2}(J)}$
$\displaystyle\Delta_{4}(I,J,K,L)$
$\displaystyle:=a_{I,\pi_{1}(J),\pi_{2}(I)}+a_{J,\pi_{1}(I),\pi_{2}(J)}+a_{K,\pi_{1}(K),\pi_{2}(L)}+a_{L,\pi_{1}(L),\pi_{2}(K)}.$
Denote $\eta=\frac{1}{n(n-1)}$. Then (A.6) implies,
$\displaystyle\mathbb{E}[C_{n}^{\prime}-C_{n}|\pi_{1},\pi_{2}]=\eta^{2}\left(S_{1}+S_{2}+S_{3}+S_{4}\right)$
where
$\displaystyle S_{1}$ $\displaystyle:=2\sum_{1\leq i\neq j\leq
n}\left\\{\Delta_{1}(i,j)-\Delta_{0}(i,j)\right\\}$ $\displaystyle S_{2}$
$\displaystyle:=\sum_{1\leq i\neq j\neq k\leq
n}\left\\{\Delta_{2}(i,j,k)+\Delta_{3}(i,j,k)-2\Delta_{0}(i,j,k)\right\\}$
$\displaystyle S_{3}$ $\displaystyle:=\sum_{1\leq i\neq j\neq\ell\leq
n}\left\\{\Delta_{2}(i,j,\ell)+\Delta_{3}(i,j,\ell)-2\Delta_{0}(i,j,\ell)\right\\}$
$\displaystyle S_{4}$ $\displaystyle:=\sum_{1\leq i\neq j\neq k\neq\ell\leq
n}\left\\{\Delta_{4}(i,j,k,\ell)-\Delta_{0}(i,j,k,\ell)\right\\}.$
Note that, by Assumption 5, $\sum_{1\leq i\neq j\leq
n}a_{i,\pi_{1}(j),\pi_{2}(j)}=-\sum_{j=1}^{n}a_{j,\pi_{1}(j),\pi_{2}(j)}=-C_{n}$.
Hence,
$\displaystyle S_{1}=-4C_{n}-4(n-1)C_{n}.$
Computing the other terms similarly and simplifying gives
$\displaystyle\mathbb{E}[C_{n}^{\prime}-C_{n}|C_{n}]=\mathbb{E}[C_{n}^{\prime}-C_{n}|\pi_{1},\pi_{2}]=-\lambda
C_{n}$
where
$\displaystyle\lambda=\frac{4(n-2)(n-3)}{n(n-1)^{2}}+\frac{4(n-2)(5-3n)}{n^{2}(n-1)^{2}}+\frac{4}{n^{2}(n-1)}=\frac{4(n^{2}-7n+11)}{n^{2}(n-1)}=\Theta\left(\frac{1}{n}\right).$
This completes the proof of Lemma A.6. ∎
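The algebraic simplification of $\lambda$ at the end of the proof can be checked with exact rational arithmetic; a quick sanity check (ours, not part of the paper):

```python
from fractions import Fraction

def lam_three_terms(n):
    """The three terms for lambda obtained in the proof of Lemma A.6."""
    n = Fraction(n)
    return (4 * (n - 2) * (n - 3) / (n * (n - 1) ** 2)
            + 4 * (n - 2) * (5 - 3 * n) / (n ** 2 * (n - 1) ** 2)
            + 4 / (n ** 2 * (n - 1)))

def lam_closed_form(n):
    """The claimed closed form 4(n^2 - 7n + 11) / (n^2 (n - 1))."""
    n = Fraction(n)
    return 4 * (n ** 2 - 7 * n + 11) / (n ** 2 * (n - 1))

# The two expressions agree exactly for every integer n >= 4.
```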
###### Lemma A.7.
$\frac{1}{\lambda}\mathbb{E}[|C_{n}^{\prime}-C_{n}|^{3}]=O(1/\sqrt{n})$.
###### Proof.
Denote $\eta=\frac{1}{n(n-1)}$. Then (A.6) implies,
$\displaystyle\mathbb{E}[|C_{n}^{\prime}-C_{n}|^{3}|\pi_{1},\pi_{2}]\lesssim\eta^{2}(W_{1}+W_{2}+W_{3}),$
where
$\displaystyle W_{1}$ $\displaystyle:=\sum_{1\leq i\neq j\leq
n}\left(|\Delta_{1}(i,j)|^{3}+|\Delta_{0}(i,j)|^{3}\right),$ $\displaystyle
W_{2}$ $\displaystyle:=\sum_{1\leq i\neq j\neq k\leq
n}\left(|\Delta_{2}(i,j,k)|^{3}+|\Delta_{3}(i,j,k)|^{3}+|\Delta_{0}(i,j,k)|^{3}\right),$
$\displaystyle W_{3}$ $\displaystyle:=\sum_{1\leq i\neq j\neq k\neq\ell\leq
n}\left(|\Delta_{4}(i,j,k,\ell)|^{3}+|\Delta_{0}(i,j,k,\ell)|^{3}\right).$
Using the fact $|a_{i_{1},i_{2},i_{3}}|\leq C/\sqrt{n}$, it follows that
$W_{1}+W_{2}+W_{3}=O(n^{5/2})$. Hence, using $\eta=\Theta(1/n^{2})$ and
$\lambda=\Theta(1/n)$ (by Lemma A.6),
$\displaystyle\frac{1}{\lambda}\mathbb{E}[|C_{n}^{\prime}-C_{n}|^{3}]=\frac{1}{\lambda}\mathbb{E}\left[\mathbb{E}\left[|C_{n}^{\prime}-C_{n}|^{3}|\pi_{1},\pi_{2}\right]\right]=O(1/\sqrt{n}),$
completing the proof of the lemma. ∎
###### Lemma A.8.
$\operatorname{Var}[\mathbb{E}[\frac{(C_{n}^{\prime}-C_{n})^{2}}{2\lambda}|C_{n}]]=O(1/n)$.
###### Proof.
Recall that for any sigma-fields $\mathcal{F}_{1}\subseteq\mathcal{F}_{2}$,
$\displaystyle\operatorname{Var}[\mathbb{E}[X|\mathcal{F}_{1}]]\leq\operatorname{Var}[\mathbb{E}[X|\mathcal{F}_{2}]].$
Hence, to prove the lemma it suffices to bound
$\operatorname{Var}[\mathbb{E}[|C_{n}^{\prime}-C_{n}|^{2}|\pi_{1},\pi_{2}]]$.
We first compute
$\displaystyle\mathbb{E}[|C_{n}^{\prime}-C_{n}|^{2}|\pi_{1},\pi_{2}]=\eta^{2}(T_{1}+T_{2}+T_{3}+T_{4}),$
where $\eta=\frac{1}{n(n-1)}$ and, recalling (A.6),
$\displaystyle T_{1}$ $\displaystyle:=2\sum_{1\leq i\neq j\leq
n}|\Delta_{1}(i,j)-\Delta_{0}(i,j)|^{2},$ $\displaystyle T_{2}$
$\displaystyle:=2\sum_{1\leq i\neq j\neq k\leq
n}|\Delta_{2}(i,j,k)-\Delta_{0}(i,j,k)|^{2},$ $\displaystyle T_{3}$
$\displaystyle:=2\sum_{1\leq i\neq j\neq k\leq
n}|\Delta_{3}(i,j,k)-\Delta_{0}(i,j,k)|^{2},$ $\displaystyle T_{4}$
$\displaystyle:=\sum_{1\leq i\neq j\neq k\neq\ell\leq
n}|\Delta_{4}(i,j,k,\ell)-\Delta_{0}(i,j,k,\ell)|^{2}.$
Denote $T_{\nu}=T_{1}+T_{2}+T_{3}$. This implies,
$\displaystyle\operatorname{Var}[\mathbb{E}[|C_{n}^{\prime}-C_{n}|^{2}|\pi_{1},\pi_{2}]]\lesssim\eta^{4}\operatorname{Var}[T_{\nu}]+\eta^{4}\operatorname{Var}[T_{4}]+2\eta^{4}\sqrt{\operatorname{Var}[T_{\nu}]\operatorname{Var}[T_{4}]}.$
(A.7)
By Assumption 5, $T_{\nu}=O(n^{2})$. Hence,
$\displaystyle\eta^{4}\operatorname{Var}[T_{\nu}]=O(1/n^{4}).$ (A.8)
We will now show that $\eta^{4}\operatorname{Var}[T_{4}]=O(1/n^{3})$. For this
define
$\displaystyle b_{ijk\ell}:=|\Delta_{4}(i,j,k,\ell)-\Delta_{0}(i,j,k,\ell)|.$
Then
$\displaystyle\operatorname{Var}[T_{4}]=\sum_{1\leq i\neq j\neq k\neq\ell\leq
n}\sum_{1\leq i^{\prime}\neq j^{\prime}\neq k^{\prime}\neq\ell^{\prime}\leq
n}\left\\{\mathbb{E}[b_{ijk\ell}^{2}b_{i^{\prime}j^{\prime}k^{\prime}\ell^{\prime}}^{2}]-\mathbb{E}[b_{ijk\ell}^{2}]\mathbb{E}[b_{i^{\prime}j^{\prime}k^{\prime}\ell^{\prime}}^{2}]\right\\}=S_{1}+S_{2},$
(A.9)
where
$\displaystyle S_{1}$ $\displaystyle:=\sum_{\begin{subarray}{c}\text{not all
different}\\\ 1\leq i\neq j\neq k\neq\ell\leq n,1\leq i^{\prime}\neq
j^{\prime}\neq k^{\prime}\neq\ell^{\prime}\leq
n\end{subarray}}\left\\{\mathbb{E}[b_{ijk\ell}^{2}b_{i^{\prime}j^{\prime}k^{\prime}\ell^{\prime}}^{2}]-\mathbb{E}[b_{ijk\ell}^{2}]\mathbb{E}[b_{i^{\prime}j^{\prime}k^{\prime}\ell^{\prime}}^{2}]\right\\}$
$\displaystyle S_{2}$ $\displaystyle:=\sum_{1\leq i\neq i^{\prime}\neq j\neq
j^{\prime}\neq k\neq k^{\prime}\neq\ell\neq\ell^{\prime}\leq
n}\left\\{\mathbb{E}[b_{ijk\ell}^{2}b_{i^{\prime}j^{\prime}k^{\prime}\ell^{\prime}}^{2}]-\mathbb{E}[b_{ijk\ell}^{2}]\mathbb{E}[b_{i^{\prime}j^{\prime}k^{\prime}\ell^{\prime}}^{2}]\right\\}.$
(A.10)
For $S_{1}$, we bound
$\displaystyle\eta^{4}S_{1}\leq\frac{1}{n^{4}(n-1)^{4}}\sum_{\begin{subarray}{c}\text{not
all different}\\\ 1\leq i\neq j\neq k\neq\ell\leq n,1\leq i^{\prime}\neq
j^{\prime}\neq k^{\prime}\neq\ell^{\prime}\leq
n\end{subarray}}\mathbb{E}\left[b_{ijk\ell}^{2}b_{i^{\prime}j^{\prime}k^{\prime}\ell^{\prime}}^{2}\right]=O(1/n^{3}),$
(A.11)
since $b_{ijk\ell}=O(1/\sqrt{n})$ by Assumption 5 and there are $O(n^{7})$
terms in the sum in (A.11). Next, we bound $S_{2}$. To this end, fix $r\geq 1$
and denote $(n)_{r}:=n(n-1)\cdots(n-r+1)$ and $[n]_{r}$ the collection of
ordered $r$ element subsets of $\\{1,2,\ldots,n\\}$ with distinct elements.
For $\bm{s}=(s_{1},s_{2},s_{3},s_{4})\in[n]_{4}$ and
$\bm{s}^{\prime}=(s_{1}^{\prime},s_{2}^{\prime},s_{3}^{\prime},s_{4}^{\prime})\in[n]_{4}$,
define
$\Delta(i,j,k,\ell;\bm{s},\bm{s}^{\prime}):=\left(a_{i,s_{2},s^{\prime}_{1}}+a_{j,s_{1},s^{\prime}_{2}}+a_{k,s_{3},s^{\prime}_{4}}+a_{\ell,s_{4},s^{\prime}_{3}}-a_{i,s_{1},s^{\prime}_{1}}-a_{j,s_{2},s^{\prime}_{2}}-a_{k,s_{3},s^{\prime}_{3}}-a_{\ell,s_{4},s^{\prime}_{4}}\right)^{2}$
For $\bm{t}=(t_{1},t_{2},t_{3},t_{4})\in[n]_{4}$ and
$\bm{t}^{\prime}=(t_{1}^{\prime},t_{2}^{\prime},t_{3}^{\prime},t_{4}^{\prime})\in[n]_{4}$,
$\Delta(i,j,k,\ell;\bm{t},\bm{t}^{\prime})$ is defined similarly. Then
$\displaystyle\mathbb{E}[b_{ijk\ell}^{2}b_{i^{\prime}j^{\prime}k^{\prime}\ell^{\prime}}^{2}]$
$\displaystyle=\frac{1}{(n)_{8}^{2}}\sum_{(\bm{s},\bm{t})\in[n]_{8}}\sum_{(\bm{s}^{\prime},\bm{t}^{\prime})\in[n]_{8}}\Delta(i,j,k,\ell;\bm{s},\bm{s}^{\prime})\Delta(i,j,k,\ell;\bm{t},\bm{t}^{\prime})$
(A.12)
and
$\displaystyle\mathbb{E}[b_{ijk\ell}^{2}]\mathbb{E}[b_{i^{\prime}j^{\prime}k^{\prime}\ell^{\prime}}^{2}]$
$\displaystyle=\frac{1}{(n)_{4}^{4}}\sum_{\bm{s},\bm{s}^{\prime},\bm{t},\bm{t}^{\prime}\in[n]_{4}}\Delta(i,j,k,\ell;\bm{s},\bm{s}^{\prime})\Delta(i,j,k,\ell;\bm{t},\bm{t}^{\prime}).$
(A.13)
Then we further decompose
$\displaystyle\sum_{\bm{s},\bm{s}^{\prime},\bm{t},\bm{t}^{\prime}\in[n]_{4}}\Delta(i,j,k,\ell;\bm{s},\bm{s}^{\prime})\Delta(i,j,k,\ell;\bm{t},\bm{t}^{\prime})$
$\displaystyle=\sum_{(\bm{s},\bm{t})\in[n]_{8}}\sum_{(\bm{s}^{\prime},\bm{t}^{\prime})\in[n]_{8}}\Delta(i,j,k,\ell;\bm{s},\bm{s}^{\prime})\Delta(i,j,k,\ell;\bm{t},\bm{t}^{\prime})+\mathcal{R},$
where
$\displaystyle\mathcal{R}:=$
$\displaystyle\sum_{\begin{subarray}{c}\bm{s},\bm{s}^{\prime},\bm{t},\bm{t}^{\prime}\in[n]_{4}\\\
\bm{s}\cap\bm{t}\neq\varnothing\text{ or
}\bm{s}^{\prime}\cap\bm{t}^{\prime}\neq\varnothing\end{subarray}}\Delta(i,j,k,\ell;\bm{s},\bm{s}^{\prime})\Delta(i,j,k,\ell;\bm{t},\bm{t}^{\prime}).$
Then, denoting by $\sum_{\mathcal{D}}$ the sum over indices $1\leq i\neq
i^{\prime}\neq j\neq j^{\prime}\neq k\neq
k^{\prime}\neq\ell\neq\ell^{\prime}\leq n$ and noting from Assumption 5 that
$\displaystyle\Delta(i,j,k,\ell;\bm{s},\bm{s}^{\prime})\Delta(i,j,k,\ell;\bm{t},\bm{t}^{\prime})=O\left(\frac{1}{n^{2}}\right),$
(A.14)
we obtain $\eta^{4}\sum_{\mathcal{D}}\mathcal{R}=O(1/n^{3})$. Recalling (A.10),
(A.12), and (A.13) then gives,
$\displaystyle
S_{2}\leq\sum_{\mathcal{D}}\left(\frac{1}{(n)_{8}^{2}}-\frac{1}{(n)_{4}^{4}}\right)\sum_{(\bm{s},\bm{t})\in[n]_{8}}\sum_{(\bm{s}^{\prime},\bm{t}^{\prime})\in[n]_{8}}\Delta(i,j,k,\ell;\bm{s},\bm{s}^{\prime})\Delta(i,j,k,\ell;\bm{t},\bm{t}^{\prime})+\sum_{\mathcal{D}}\mathcal{R}.$
Note that $\frac{1}{(n)_{8}^{2}}-\frac{1}{(n)_{4}^{4}}=O(1/n^{17})$. Hence, by
(A.14),
$\eta^{4}S_{2}=O(1/n^{3}).$
Combining this with (A.9) and (A.11), it follows that
$\eta^{4}\operatorname{Var}[T_{4}]=O(1/n^{3})$. This together with (A.7) and
(A.8) implies,
$\displaystyle\operatorname{Var}\left[\mathbb{E}\left[|C_{n}^{\prime}-C_{n}|^{2}|\pi_{1},\pi_{2}\right]\right]=O(1/n^{3}).$
Since $\lambda=\Theta(1/n)$, by Lemma A.6, the result in Lemma A.8 follows. ∎
## Appendix B Proof of Theorem 2.12 and Proposition 3.2
This section is organized as follows: We prove the consistency of the estimate
$\mathrm{RJdCov}_{n}^{2}(\bm{X}_{S})$ (recall definition from (2.14)) in
Section B.1. The consistency of the corresponding test (3.2) is proved in
Section B.2.
### B.1. Proof of Theorem 2.12
Recall from (2.15) and (2.5) that
$\displaystyle\mathrm{RdCov}^{2}_{n}(\bm{X}_{S})$
$\displaystyle=\frac{1}{n^{2}}\sum_{1\leq a,b\leq n}\prod_{i\in
S}\hat{\mathcal{E}}_{i}(a,b),$ (B.1)
where
$\displaystyle\hat{\mathcal{E}}_{i}(a,b)$
$\displaystyle:=\frac{1}{n}\sum_{v=1}^{n}\|\hat{R}_{i}(X_{i}^{(a)})-\hat{R}_{i}(X_{i}^{(v)})\|+\frac{1}{n}\sum_{u=1}^{n}\|\hat{R}_{i}(X_{i}^{(u)})-\hat{R}_{i}(X_{i}^{(b)})\|$
$\displaystyle\hskip
90.3375pt-\|\hat{R}_{i}(X_{i}^{(a)})-\hat{R}_{i}(X_{i}^{(b)})\|-\frac{1}{n^{2}}\sum_{1\leq
u,v\leq n}\|\hat{R}_{i}(X_{i}^{(u)})-\hat{R}_{i}(X_{i}^{(v)})\|.$
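The term $\hat{\mathcal{E}}_{i}(a,b)$ is the familiar double-centering of a pairwise distance matrix. A stdlib sketch (applied to raw points rather than the rank maps $\hat{R}_{i}$, purely for illustration; names are ours):

```python
import math

def double_centered(points):
    """Compute E(a, b) = row_mean(a) + row_mean(b) - d(a, b) - grand_mean
    for the pairwise Euclidean distance matrix of the given points,
    matching the structure of the term in (B.1). Since the distance
    matrix is symmetric, column means coincide with row means."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    row = [sum(r) / n for r in dist]   # (1/n) sum_v d(a, v)
    grand = sum(row) / n               # (1/n^2) sum_{u, v} d(u, v)
    return [[row[a] + row[b] - dist[a][b] - grand for b in range(n)]
            for a in range(n)]
```

By construction the entries of the centered matrix sum to zero over $a,b$, which is the degeneracy exploited throughout the proof.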
Now, expanding the product gives,
$\displaystyle\frac{1}{n^{2}}\sum_{1\leq a,b\leq n}\prod_{i\in
S}\hat{\mathcal{E}}_{i}(a,b)=\frac{1}{n^{2}}\sum_{(\ell_{i})_{i\in
S}}\sum_{1\leq a,b\leq n}\prod_{i\in S}\Delta^{(\ell_{i})}_{i}(a,b),$ (B.2)
where $\ell_{i}\in\\{1,2,3,4\\}$ for $i\in\\{1,\ldots,d\\}$ and
$\displaystyle\Delta_{i}^{(1)}(a,b)$
$\displaystyle=\frac{1}{n}\sum_{v=1}^{n}\|\hat{R}_{i}(X_{i}^{(a)})-\hat{R}_{i}(X_{i}^{(v)})\|,$
$\displaystyle\Delta_{i}^{(2)}(a,b)$
$\displaystyle=\frac{1}{n}\sum_{u=1}^{n}\|\hat{R}_{i}(X_{i}^{(b)})-\hat{R}_{i}(X_{i}^{(u)})\|,$
$\displaystyle\Delta_{i}^{(3)}(a,b)$
$\displaystyle=-\|\hat{R}_{i}(X_{i}^{(a)})-\hat{R}_{i}(X_{i}^{(b)})\|,$
$\displaystyle\Delta_{i}^{(4)}(a,b)$
$\displaystyle=-\frac{1}{n^{2}}\sum_{1\leq u,v\leq
n}\|\hat{R}_{i}(X_{i}^{(u)})-\hat{R}_{i}(X_{i}^{(v)})\|.$
(Note that $\Delta_{i}^{(1)}(a,b)$, $\Delta_{i}^{(2)}(a,b)$, and
$\Delta_{i}^{(4)}(a,b)$ do not depend on $b$, $a$, and $a,b$, respectively.
Nevertheless, for reasons of symmetry we keep the dependence on $a,b$ in the
notation.) Similarly, define
$\displaystyle\tilde{\Delta}_{i}^{(1)}(a,b)$
$\displaystyle=\frac{1}{n}\sum_{v=1}^{n}\|R_{\mu_{i}}(X_{i}^{(a)})-R_{\mu_{i}}(X_{i}^{(v)})\|,$
$\displaystyle\tilde{\Delta}_{i}^{(2)}(a,b)$
$\displaystyle=\frac{1}{n}\sum_{u=1}^{n}\|R_{\mu_{i}}(X_{i}^{(b)})-R_{\mu_{i}}(X_{i}^{(u)})\|,$
$\displaystyle\tilde{\Delta}_{i}^{(3)}(a,b)$
$\displaystyle=-\|R_{\mu_{i}}(X_{i}^{(a)})-R_{\mu_{i}}(X_{i}^{(b)})\|,$
$\displaystyle\tilde{\Delta}_{i}^{(4)}(a,b)$
$\displaystyle=-\frac{1}{n^{2}}\sum_{1\leq u,v\leq
n}\|R_{\mu_{i}}(X_{i}^{(u)})-R_{\mu_{i}}(X_{i}^{(v)})\|.$
For $\Delta_{i}^{(1)}(a,b)$ and $\tilde{\Delta}_{i}^{(1)}(a,b)$, we have
$\displaystyle\left|\Delta_{i}^{(1)}(a,b)-\tilde{\Delta}_{i}^{(1)}(a,b)\right|$
$\displaystyle\leq\frac{1}{n}\sum_{v=1}^{n}\left|\|\hat{R}_{i}(X_{i}^{(a)})-\hat{R}_{i}(X_{i}^{(v)})\|-\|R_{\mu_{i}}(X_{i}^{(a)})-R_{\mu_{i}}(X_{i}^{(v)})\|\right|$
$\displaystyle\leq\frac{1}{n}\sum_{v=1}^{n}\|\hat{R}_{i}(X_{i}^{(v)})-R_{\mu_{i}}(X_{i}^{(v)})\|+\|\hat{R}_{i}(X_{i}^{(a)})-R_{\mu_{i}}(X_{i}^{(a)})\|,$
where the second inequality holds by the reverse triangle inequality.
Similarly,
$\displaystyle\left|\Delta_{i}^{(2)}(a,b)-\tilde{\Delta}_{i}^{(2)}(a,b)\right|$
$\displaystyle\leq\frac{1}{n}\sum_{u=1}^{n}\|\hat{R}_{i}(X_{i}^{(u)})-R_{\mu_{i}}(X_{i}^{(u)})\|+\|\hat{R}_{i}(X_{i}^{(b)})-R_{\mu_{i}}(X_{i}^{(b)})\|$
and
$\displaystyle\big{|}\Delta_{i}^{(3)}(a,b)-\tilde{\Delta}_{i}^{(3)}(a,b)\big{|}$
$\displaystyle\leq\|\hat{R}_{i}(X_{i}^{(b)})-R_{\mu_{i}}(X_{i}^{(b)})\|+\|\hat{R}_{i}(X_{i}^{(a)})-R_{\mu_{i}}(X_{i}^{(a)})\|.$
For the last term, we bound
$\displaystyle\left|\Delta_{i}^{(4)}(a,b)-\tilde{\Delta}_{i}^{(4)}(a,b)\right|\leq\frac{1}{n}\sum_{u=1}^{n}\|\hat{R}_{i}(X_{i}^{(u)})-R_{\mu_{i}}(X_{i}^{(u)})\|+\frac{1}{n}\sum_{v=1}^{n}\|\hat{R}_{i}(X_{i}^{(v)})-R_{\mu_{i}}(X_{i}^{(v)})\|.$
Hence, by [18, Theorem 2.1], for $1\leq i\leq r$ and $1\leq s\leq 4$,
$\displaystyle\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\big{|}\Delta_{i}^{(s)}(a,b)-\tilde{\Delta}_{i}^{(s)}(a,b)\big{|}\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}0.$ (B.3)
Note that $|\tilde{\Delta}_{i}^{(\ell_{i})}(a,b)|\leq\sqrt{d_{i}}$ and
$|\Delta_{i}^{(\ell_{i})}(a,b)|\leq\sqrt{d_{i}}$, for all $1\leq a\neq b\leq
n$ and $1\leq i\leq r$. Then for any $T\subseteq S$ and $i\notin T$, by the
triangle inequality, we have
$\displaystyle\frac{1}{n^{2}}\sum_{1\leq a,b\leq n}\left|\prod_{j\in
T}\tilde{\Delta}_{j}^{(\ell_{j})}(a,b)\prod_{j\in S\backslash
T}\Delta_{j}^{(\ell_{j})}(a,b)-\prod_{j\in
T\bigcup\\{i\\}}\tilde{\Delta}_{j}^{(\ell_{j})}(a,b)\prod_{j\in
S\backslash(T\bigcup\\{i\\})}\Delta_{j}^{(\ell_{j})}(a,b)\right|$
$\displaystyle\leq\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\left|\Delta_{i}^{(\ell_{i})}(a,b)-\tilde{\Delta}_{i}^{(\ell_{i})}(a,b)\right|\prod_{j\in
S\backslash\\{i\\}}\sqrt{d_{j}}\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}0,$
by (B.3). This implies, by a telescoping argument together with (B.2),
$\displaystyle\left|\mathrm{RdCov}^{2}_{n}(\bm{X}_{S})-\frac{1}{n^{2}}\sum_{1\leq
a\neq b\leq n}\prod_{i\in
S}{\mathcal{E}}_{i}^{\mathrm{oracle}}(a,b)\right|\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}0,$ (B.4)
where
$\displaystyle{\mathcal{E}}_{i}^{\mathrm{oracle}}(a,b)$
$\displaystyle=\frac{1}{n}\sum_{v=1}^{n}\|R_{\mu_{i}}(X_{i}^{(a)})-R_{\mu_{i}}(X_{i}^{(v)})\|+\frac{1}{n}\sum_{u=1}^{n}\|R_{\mu_{i}}(X_{i}^{(b)})-R_{\mu_{i}}(X_{i}^{(u)})\|$
$\displaystyle\hskip
90.3375pt-\|R_{\mu_{i}}(X_{i}^{(a)})-R_{\mu_{i}}(X_{i}^{(b)})\|-\frac{1}{n^{2}}\sum_{1\leq
u,v\leq n}\|R_{\mu_{i}}(X_{i}^{(u)})-R_{\mu_{i}}(X_{i}^{(v)})\|.$
Since
$\\{(R_{\mu_{1}}(X_{1}^{(a)}),R_{\mu_{2}}(X_{2}^{(a)}),\ldots,R_{\mu_{d}}(X_{d}^{(a)}))\\}_{1\leq
a\leq n}$ are i.i.d., applying [12, Proposition 8] to the rank transformed
data shows that
$\displaystyle\frac{1}{n^{2}}\sum_{1\leq a,b\leq n}\prod_{i\in
S}{\mathcal{E}}_{i}^{\mathrm{oracle}}(a,b)\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}\mathrm{RdCov}^{2}(\bm{X}_{S}).$
This combined with (B.4) completes the proof of Theorem 2.12.
### B.2. Proof of Proposition 3.2
Recall that $c_{\alpha,n}$ is the upper $\alpha$ quantile of the universal
distribution in Proposition 3.1. Note from (3.9) in Theorem 3.7 that
$c_{\alpha,n}=O(1/n)$. Moreover,
$\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}\mathrm{RJdCov}^{2}(\bm{X};\bm{C})>0$ whenever
$\mu\neq\mu_{1}\otimes\mu_{2}\otimes\cdots\otimes\mu_{d}$ by Theorem 2.12 and
Proposition 2.10. Hence, recalling (3.2), for
$\mu\neq\mu_{1}\otimes\mu_{2}\otimes\cdots\otimes\mu_{d}$,
$\displaystyle\mathbb{E}_{\mu}\left[\phi_{n}(\bm{C})\right]=\mathbb{P}_{\mu}(\mathrm{RJdCov}^{2}_{n}(\bm{X};\bm{C})>c_{\alpha,n})\rightarrow
1.$
## Appendix C Proof of Theorem 3.7
Recall from (4.12) the definition of the empirical process $Z_{S,n}(\bm{t})$,
for $S\in\mathcal{T}$, $\bm{t}=(t_{i})_{i\in S}\in\mathbb{R}^{d_{S}}$ and
$t_{i}\in\mathbb{R}^{d_{i}}$, for $i\in S$:
$\displaystyle Z_{S,n}(\bm{t})$
$\displaystyle=\frac{1}{n}\sum_{b=1}^{n}\left\\{\prod_{i\in
S}\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(a)})\rangle}-e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(b)})\rangle}\right)\right\\}.$ (C.1)
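A direct transcription of (C.1) in plain Python may help parse the nesting of the sums and products; the data layout and names below are ours:

```python
import cmath

def Z_Sn(ts, Rs):
    """Empirical process Z_{S,n}(t) of (C.1).

    ts[i] holds the vector t_i and Rs[i] the n mapped points
    hat{R}_i(X_i^{(a)}), a = 1..n, for each coordinate i in S."""
    n = len(Rs[0])
    total = 0j
    for b in range(n):
        prod = complex(1.0)
        for t, R in zip(ts, Rs):
            # e^{iota <t_i, R_i(X^{(a)})>} for a = 1..n
            phases = [cmath.exp(1j * sum(tc * xc for tc, xc in zip(t, x)))
                      for x in R]
            prod *= sum(phases) / n - phases[b]
        total += prod
    return total / n
```

For $|S|=1$ the inner factors sum to zero over $b$, so the process vanishes identically; it is non-trivial only for $|S|\geq 2$.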
Under the assumption of mutual independence,
$\mathbb{E}_{H_{0}}[Z_{S,n}(\bm{t})]=0$. Next, we compute the covariance
between $Z_{S,n}(\bm{t})$ and $Z_{S,n}(\bm{t}^{\prime})$.
###### Lemma C.1.
Suppose $S\in\mathcal{T}$ and $\bm{t}=(t_{i})_{i\in
S},\bm{t}^{\prime}=(t_{i}^{\prime})_{i\in S}\in\mathbb{R}^{d_{S}}$. Then under
$H_{0}$,
$\displaystyle\lim_{n\rightarrow\infty}n\operatorname{Cov}[Z_{S,n}(\bm{t}),Z_{S,n}(\bm{t}^{\prime})]=C_{S}(\bm{t},\bm{t}^{\prime}),$
(C.2)
where $C_{S}(\bm{t},\bm{t}^{\prime})$ is as defined in (3.7). Moreover, for any
$S_{1}\neq S_{2}\in\mathcal{T}$ and $\bm{t}=(t_{i})_{i\in
S_{1}}\in\mathbb{R}^{d_{S_{1}}}$, $\bm{t}^{\prime}=(t_{i}^{\prime})_{i\in
S_{2}}\in\mathbb{R}^{d_{S_{2}}}$, under $H_{0}$,
$\displaystyle\operatorname{Cov}[Z_{S_{1},n}(\bm{t}),Z_{S_{2},n}(\bm{t}^{\prime})]=0.$
(C.3)
###### Proof.
Denote, for $1\leq b\leq n$ and $i\in S$,
$\displaystyle Z_{S,n}(\bm{t},b,i):=\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(a)})\rangle}-e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(b)})\rangle}$ (C.4)
and its complex conjugate
$\bar{Z}_{S,n}(\bm{t},b,i):=\frac{1}{n}\sum_{a=1}^{n}e^{-\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(a)})\rangle}-e^{-\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(b)})\rangle}.$
Then for $\bm{t}=(t_{i})_{i\in S},\bm{t}^{\prime}=(t_{i}^{\prime})_{i\in
S}\in\mathbb{R}^{d_{S}}$,
$\displaystyle nZ_{S,n}(\bm{t})\bar{Z}_{S,n}(\bm{t}^{\prime})$
$\displaystyle=\frac{1}{n}\sum_{1\leq b,b^{\prime}\leq n}\prod_{i\in
S}Z_{S,n}(\bm{t},b,i)\bar{Z}_{S,n}(\bm{t}^{\prime},b^{\prime},i)$
$\displaystyle=T_{1}+T_{2},$ (C.5)
where
$\displaystyle T_{1}$ $\displaystyle:=\frac{1}{n}\sum_{b=1}^{n}\prod_{i\in
S}Z_{S,n}(\bm{t},b,i)\bar{Z}_{S,n}(\bm{t}^{\prime},b,i)$ $\displaystyle T_{2}$
$\displaystyle:=\frac{1}{n}\sum_{1\leq b\neq b^{\prime}\leq n}\prod_{i\in
S}Z_{S,n}(\bm{t},b,i)\bar{Z}_{S,n}(\bm{t}^{\prime},b^{\prime},i).$
Note that under $H_{0}$,
$\\{\hat{R}_{i}(X_{i}^{(1)}),\ldots,\hat{R}_{i}(X_{i}^{(n)})\\}$ are uniformly
distributed over the fixed grid
$\mathcal{H}_{n}^{d_{i}}=\\{h_{1}^{d_{i}},h_{2}^{d_{i}},\ldots,h_{n}^{d_{i}}\\}$
and independent over $1\leq i\leq r$. Therefore,
$\displaystyle\mathbb{E}[Z_{S,n}(\bm{t},b,i)\bar{Z}_{S,n}(\bm{t}^{\prime},b,i)]$
$\displaystyle=\mathbb{E}\left[\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(a)})\rangle}-e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(b)})\rangle}\right)\left(\frac{1}{n}\sum_{a=1}^{n}e^{-\iota\langle
t_{i}^{\prime},\hat{R}_{i}(X_{i}^{(a)})\rangle}-e^{-\iota\langle
t_{i}^{\prime},\hat{R}_{i}(X_{i}^{(b)})\rangle}\right)\right]$
$\displaystyle=\mathbb{E}\left[\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},h_{a}^{d_{i}}\rangle}-e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(b)})\rangle}\right)\left(\frac{1}{n}\sum_{a=1}^{n}e^{-\iota\langle
t_{i}^{\prime},h_{a}^{d_{i}}\rangle}-e^{-\iota\langle
t_{i}^{\prime},\hat{R}_{i}(X_{i}^{(b)})\rangle}\right)\right]$
$\displaystyle=\frac{1}{n}\sum_{u=1}^{n}\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},h_{a}^{d_{i}}\rangle}-e^{\iota\langle
t_{i},h_{u}^{d_{i}}\rangle}\right)\left(\frac{1}{n}\sum_{a=1}^{n}e^{-\iota\langle
t_{i}^{\prime},h_{a}^{d_{i}}\rangle}-e^{-\iota\langle
t_{i}^{\prime},h_{u}^{d_{i}}\rangle}\right)$
$\displaystyle=\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i}-t_{i}^{\prime},h_{u}^{d_{i}}\rangle}-\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},h_{a}^{d_{i}}\rangle}\right)\left(\frac{1}{n}\sum_{a=1}^{n}e^{-\iota\langle
t_{i}^{\prime},h_{a}^{d_{i}}\rangle}\right).$
This implies, since the collection
$\\{Z_{S,n}(\bm{t},b,i)\bar{Z}_{S,n}(\bm{t}^{\prime},b,i)\\}_{i\in S}$ is
independent,
$\displaystyle\mathbb{E}[T_{1}]$
$\displaystyle=\frac{1}{n}\sum_{b=1}^{n}\prod_{i\in
S}\mathbb{E}[Z_{S,n}(\bm{t},b,i)\bar{Z}_{S,n}(\bm{t}^{\prime},b,i)]$
$\displaystyle=\prod_{i\in S}\left(\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i}-t_{i}^{\prime},h_{u}^{d_{i}}\rangle}-\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},h_{a}^{d_{i}}\rangle}\right)\left(\frac{1}{n}\sum_{a=1}^{n}e^{-\iota\langle
t_{i}^{\prime},h_{a}^{d_{i}}\rangle}\right)\right)$ $\displaystyle\rightarrow
C_{S}(\bm{t},\bm{t}^{\prime}),$ (C.6)
by Assumption 1. Similarly,
$\displaystyle\mathbb{E}[Z_{S,n}(\bm{t},b,i)\bar{Z}_{S,n}(\bm{t}^{\prime},b^{\prime},i)]$
$\displaystyle=\mathbb{E}\left[\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},h_{a}^{d_{i}}\rangle}-e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(b)})\rangle}\right)\left(\frac{1}{n}\sum_{a=1}^{n}e^{-\iota\langle
t_{i}^{\prime},h_{a}^{d_{i}}\rangle}-e^{-\iota\langle
t_{i}^{\prime},\hat{R}_{i}(X_{i}^{(b^{\prime})})\rangle}\right)\right]$
$\displaystyle=\frac{1}{n(n-1)}\sum_{1\leq u\neq v\leq
n}\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},h_{a}^{d_{i}}\rangle}-e^{\iota\langle
t_{i},h_{u}^{d_{i}}\rangle}\right)\left(\frac{1}{n}\sum_{a=1}^{n}e^{-\iota\langle
t_{i}^{\prime},h_{a}^{d_{i}}\rangle}-e^{-\iota\langle
t_{i}^{\prime},h_{v}^{d_{i}}\rangle}\right)$
$\displaystyle=\frac{1}{n(n-1)}\sum_{1\leq u\neq v\leq n}e^{\iota\langle
t_{i},h_{u}^{d_{i}}\rangle-\iota\langle
t_{i}^{\prime},h_{v}^{d_{i}}\rangle}-\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},h_{a}^{d_{i}}\rangle}\right)\left(\frac{1}{n}\sum_{a=1}^{n}e^{-\iota\langle
t_{i}^{\prime},h_{a}^{d_{i}}\rangle}\right)$
$\displaystyle=-\frac{1}{(n-1)}\left[\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i}-t_{i}^{\prime},h_{u}^{d_{i}}\rangle}-\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},h_{a}^{d_{i}}\rangle}\right)\left(\frac{1}{n}\sum_{a=1}^{n}e^{-\iota\langle
t_{i}^{\prime},h_{a}^{d_{i}}\rangle}\right)\right].$
This implies,
$\displaystyle\mathbb{E}[T_{2}]$
$\displaystyle=\frac{(-1)^{|S|}}{(n-1)^{|S|-1}}\prod_{i\in
S}\left[\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i}-t_{i}^{\prime},h_{u}^{d_{i}}\rangle}-\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},h_{a}^{d_{i}}\rangle}\right)\left(\frac{1}{n}\sum_{a=1}^{n}e^{-\iota\langle
t_{i}^{\prime},h_{a}^{d_{i}}\rangle}\right)\right]$ $\displaystyle\rightarrow
0,$ (C.7)
by Assumption 1. Combining (C.6) and (C.7) with (C.5), the result in (C.2) follows.
Next, suppose $S_{1}\neq S_{2}\in\mathcal{T}$ and $\bm{t}=(t_{i})_{i\in
S_{1}},\bm{t}^{\prime}=(t_{i}^{\prime})_{i\in S_{2}}$. Then
$\displaystyle Z_{S_{1},n}(\bm{t})\bar{Z}_{S_{2},n}(\bm{t}^{\prime})$
$\displaystyle=\frac{1}{n^{2}}\sum_{1\leq b,b^{\prime}\leq n}\prod_{i\in
S_{1}\bigcap
S_{2}}Z_{S_{1},n}(\bm{t},b,i)\bar{Z}_{S_{2},n}(\bm{t}^{\prime},b^{\prime},i)\prod_{i\in
S_{1}\backslash S_{2}}Z_{S_{1},n}(\bm{t},b,i)\prod_{i\in S_{2}\backslash
S_{1}}\bar{Z}_{S_{2},n}(\bm{t}^{\prime},b^{\prime},i).$
Note that the collections $\\{Z_{S_{1},n}(\bm{t},b,i)\\}_{i\in S_{1}\backslash
S_{2}}$ and $\\{Z_{S_{2},n}(\bm{t}^{\prime},b^{\prime},i)\\}_{i\in
S_{2}\backslash S_{1}}$ are independent. Hence,
$\displaystyle\mathbb{E}[Z_{S_{1},n}(\bm{t})\bar{Z}_{S_{2},n}(\bm{t}^{\prime})]$
$\displaystyle=\frac{1}{n^{2}}\sum_{1\leq b,b^{\prime}\leq n}\prod_{i\in
S_{1}\bigcap
S_{2}}\mathbb{E}[Z_{S_{1},n}(\bm{t},b,i)\bar{Z}_{S_{2},n}(\bm{t}^{\prime},b^{\prime},i)]\prod_{i\in
S_{1}\backslash S_{2}}\mathbb{E}[Z_{S_{1},n}(\bm{t},b,i)]\prod_{i\in
S_{2}\backslash
S_{1}}\mathbb{E}[\bar{Z}_{S_{2},n}(\bm{t}^{\prime},b^{\prime},i)]$
$\displaystyle=0,$
since
$\mathbb{E}[Z_{S,n}(\bm{t},b,i)]=\mathbb{E}[\bar{Z}_{S,n}(\bm{t},b,i)]=0$ for
$i\in S$ under $H_{0}$ and at least one of the sets $S_{1}\backslash S_{2}$
and $S_{2}\backslash S_{1}$ is non-empty. This proves (C.3). ∎
We use the following result, which is a restatement of [62, Theorem 8.6.2],
to prove Theorem 3.7. To this end, denote $D_{S}(\delta)=\prod_{i\in
S}T_{i}(\delta)$ with
$T_{i}(\delta)=\\{t_{i}\in\mathbb{R}^{d_{i}}:\delta\leq\|t_{i}\|\leq
1/\delta\\}$.
###### Lemma C.2.
Suppose the following conditions hold:
1. $(1)$
As $n\rightarrow\infty$,
$\left\\{n\int_{D_{S}(\delta)}|Z_{S,n}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\right\\}_{S\in\mathcal{T}}\stackrel{{\scriptstyle
D}}{{\rightarrow}}\left\\{\int_{D_{S}(\delta)}|Z_{S}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\right\\}_{S\in\mathcal{T}};$
2. $(2)$
As $\delta\rightarrow 0$,
$\left\\{\int_{D_{S}(\delta)}|Z_{S}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\right\\}_{S\in\mathcal{T}}\stackrel{{\scriptstyle
D}}{{\rightarrow}}\left\\{\int_{\mathbb{R}^{\sum_{i\in
S}d_{i}}}|Z_{S}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\right\\}_{S\in\mathcal{T}};$
3. $(3)$
As $n\rightarrow\infty$ followed by $\delta\rightarrow 0$,
$\mathbb{E}\left|\left\\{n\int_{D_{S}(\delta)}|Z_{S,n}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\right\\}_{S\in\mathcal{T}}-\left\\{n\int_{\mathbb{R}^{\sum_{i\in
S}d_{i}}}|Z_{S,n}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\right\\}_{S\in\mathcal{T}}\right|^{2}=0.$
Then the distributional convergence in Theorem 3.7 holds.
Establishing Condition (1) in Lemma C.2: Fix a positive integer $K\geq 1$ and
denote by ${\bm{\mathcal{Z}}}_{n}$ the $\kappa:=K|\mathcal{T}|$ dimensional
random vector
$\displaystyle{\bm{\mathcal{Z}}}_{n}:=\left(\left(Z_{S,n}(\bm{t}_{S}^{(1)}),Z_{S,n}(\bm{t}_{S}^{(2)}),\ldots,Z_{S,n}(\bm{t}_{S}^{(K)})\right)\right)_{S\in\mathcal{T}},$
(C.8)
where $\bm{t}_{S}^{(\ell)}\in\mathbb{R}^{d_{S}}$, for $1\leq\ell\leq K$. We
will use the Cramér-Wold device to establish the joint asymptotic normality of
${\bm{\mathcal{Z}}}_{n}$. For this, suppose
$\bm{\eta}=\left(\bm{\eta}_{S}\right)_{S\in\mathcal{T}}\in\mathbb{R}^{\kappa},\text{
where }\bm{\eta}_{S}=(\eta_{S}^{(1)},\eta_{S}^{(2)},\ldots,\eta_{S}^{(K)})$
and consider
$\displaystyle\sqrt{n}\langle\bm{\eta},{\bm{\mathcal{Z}}}_{n}\rangle=\sum_{S\in\mathcal{T}}\sum_{\ell=1}^{K}\eta_{S}^{(\ell)}\left(\sqrt{n}Z_{S,n}(\bm{t}_{S}^{(\ell)})\right)=\sum_{i=1}^{n}\big{\\{}A_{i,\pi_{1}(i),\ldots,\pi_{d-1}(i)}+\iota
B_{i,\pi_{1}(i),\ldots,\pi_{d-1}(i)}\big{\\}},$
where the arrays $A_{i_{1},\ldots,i_{r}}$ and $B_{i_{1},\ldots,i_{r}}$, for
$1\leq i_{1},i_{2},\ldots,i_{r}\leq n$, collect the real and imaginary parts
of the summands of $\sqrt{n}\langle\bm{\eta},{\bm{\mathcal{Z}}}_{n}\rangle$,
respectively. For $a,b\in\mathbb{R}$ consider,
$\displaystyle
a\sum_{i=1}^{n}A_{i,\pi_{1}(i),\ldots,\pi_{d-1}(i)}+b\sum_{i=1}^{n}B_{i,\pi_{1}(i),\ldots,\pi_{d-1}(i)}.$
(C.9)
Note from (C.1) that the modulus of each summand in (C.9) is bounded by
$C/n^{\frac{1}{2}}$, for some constant $C$ depending only on
$r,\bm{\eta},a,b$. Moreover,
$\displaystyle\mathbb{E}\left[\prod_{i\in
S}\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i}^{(\ell)},\hat{R}_{i}(X_{i}^{(a)})\rangle}-e^{\iota\langle
t_{i}^{(\ell)},\hat{R}_{i}(X_{i}^{(b)})\rangle}\right)\right]=0,$
for $\bm{t}_{S}^{(\ell)}\in\mathbb{R}^{d_{S}}$. Hence,
$A_{i_{1},\ldots,i_{r}},B_{i_{1},\ldots,i_{r}}$ are centered around $0$.
Moreover,
$\displaystyle\operatorname{Cov}\left[a\sum_{i=1}^{n}A_{i,\pi_{1}(i),\ldots,\pi_{d-1}(i)},b\sum_{i=1}^{n}B_{i,\pi_{1}(i),\ldots\pi_{d-1}(i)}\right]$
$\displaystyle=ab\operatorname{Cov}\left[\sum_{i=1}^{n}A_{i,\pi_{1}(i),\ldots,\pi_{d-1}(i)},\sum_{i=1}^{n}B_{i,\pi_{1}(i),\ldots,\pi_{d-1}(i)}\right]$
$\displaystyle=\frac{ab}{2}\mathrm{Im}\left(\operatorname{Var}\left[\sum_{S\in\mathcal{T}}\sum_{\ell=1}^{K}\eta_{S}^{(\ell)}\left(\sqrt{n}Z_{S,n}(\bm{t}_{S}^{(\ell)})\right)\right]\right)$
$\displaystyle=\frac{ab}{2}\sum_{S\in\mathcal{T}}\mathrm{Im}\left(\operatorname{Var}\left[\sum_{\ell=1}^{K}\eta_{S}^{(\ell)}\left(\sqrt{n}Z_{S,n}(\bm{t}_{S}^{(\ell)})\right)\right]\right)$
$\displaystyle\rightarrow\frac{ab}{2}\sum_{S\in\mathcal{T}}\mathrm{Im}\left(\sum_{\ell=1}^{K}(\eta_{S}^{(\ell)})^{2}C_{S}(\bm{t}_{S}^{(\ell)},\bm{t}_{S}^{(\ell)})+\sum_{1\leq\ell,\ell^{\prime}\leq
K}\eta_{S}^{(\ell)}\eta_{S}^{(\ell^{\prime})}C_{S}(\bm{t}_{S}^{(\ell)},\bm{t}_{S}^{(\ell^{\prime})})\right)$
$\displaystyle=\frac{ab}{2}\sum_{S\in\mathcal{T}}\mathrm{Im}\left(\operatorname{Var}\left[\sum_{\ell=1}^{K}\eta_{S}^{(\ell)}Z_{S}(\bm{t}_{S}^{(\ell)})\right]\right).$
(C.10)
by Definition 3.5.
To compute the variance, we have the following lemma, which follows directly
from the definitions.
###### Lemma C.3.
Suppose $Z=X+iY$, where $X,Y$ are real-valued random variables. Define
$\mathbb{E}[Z]=\mathbb{E}[X]+i\mathbb{E}[Y]$. Then
$\mathrm{Var}[X]=\frac{1}{2}\left(\mathrm{Re}[\mathbb{E}[Z^{2}]]+\mathbb{E}[|Z|^{2}]\right)$
whenever $\mathbb{E}[X]=0$.
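Lemma C.3 is an immediate identity: with $Z=X+iY$, $\mathrm{Re}(Z^{2})=X^{2}-Y^{2}$ and $|Z|^{2}=X^{2}+Y^{2}$, so the half-sum equals $\mathbb{E}[X^{2}]=\mathrm{Var}[X]$ when $\mathbb{E}[X]=0$. A quick numerical sanity check on simulated data (ours, not part of the paper):

```python
import random

rng = random.Random(1)
xs = [rng.gauss(0.0, 1.0) for _ in range(1000)]
ys = [0.5 * x + rng.gauss(0.0, 1.0) for x in xs]   # correlated imaginary part
mean_x = sum(xs) / len(xs)
xs = [x - mean_x for x in xs]                      # enforce E[X] = 0 in-sample

var_x = sum(x * x for x in xs) / len(xs)                        # Var[X]
re_z2 = sum(x * x - y * y for x, y in zip(xs, ys)) / len(xs)    # Re E[Z^2]
abs_z2 = sum(x * x + y * y for x, y in zip(xs, ys)) / len(xs)   # E[|Z|^2]
# var_x equals (re_z2 + abs_z2) / 2 up to floating-point rounding
```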
By Lemma C.3,
$\displaystyle\mathrm{Var}\left[\sum_{i=1}^{n}A_{i,\pi_{1}(i),\ldots,\pi_{d-1}(i)}\right]=\frac{1}{2}\left(\mathrm{Re}\left(\mathbb{E}\left[n\langle\bm{\eta},{\bm{\mathcal{Z}}}_{n}\rangle^{2}\right]\right)+\mathbb{E}\left[|\sqrt{n}\langle\bm{\eta},{\bm{\mathcal{Z}}}_{n}\rangle|^{2}\right]\right).$
(C.11)
For the second term on the RHS above, Lemma C.1 implies that
$\displaystyle\mathbb{E}\left[|\sqrt{n}\langle\bm{\eta},{\bm{\mathcal{Z}}}_{n}\rangle|^{2}\right]$
$\displaystyle=\sum_{S\in\mathcal{T}}\mathbb{E}\left[\left|\sum_{\ell=1}^{K}\eta_{S}^{(\ell)}\left(\sqrt{n}Z_{S,n}(\bm{t}_{S}^{(\ell)})\right)\right|^{2}\right]$
$\displaystyle=\sum_{S\in\mathcal{T}}\sum_{\ell,\ell^{\prime}=1}^{K}\eta_{S}^{(\ell)}\eta_{S}^{(\ell^{\prime})}\mathbb{E}\left[nZ_{S,n}(\bm{t}_{S}^{(\ell)})\bar{Z}_{S,n}(\bm{t}_{S}^{(\ell^{\prime})})\right]$
$\displaystyle\rightarrow\sum_{S\in\mathcal{T}}\sum_{1\leq\ell,\ell^{\prime}\leq
K}\eta_{S}^{(\ell)}\eta_{S}^{(\ell^{\prime})}C_{S}(\bm{t}_{S}^{(\ell)},\bm{t}_{S}^{(\ell^{\prime})})$
$\displaystyle=\sum_{S\in\mathcal{T}}\mathbb{E}\left[\left|\sum_{\ell=1}^{K}\eta_{S}^{(\ell)}Z_{S}(\bm{t}_{S}^{(\ell)})\right|^{2}\right].$
(C.12)
Similarly, we can show that
$\displaystyle\mathbb{E}\left[n\langle\bm{\eta},{\bm{\mathcal{Z}}}_{n}\rangle^{2}\right]$
$\displaystyle=\sum_{S\in\mathcal{T}}\sum_{\ell,\ell^{\prime}=1}^{K}\eta_{S}^{(\ell)}\eta_{S}^{(\ell^{\prime})}\mathbb{E}\left[nZ_{S,n}(t_{S}^{(\ell)})Z_{S,n}(t_{S}^{(\ell^{\prime})})\right]$
$\displaystyle\rightarrow\sum_{S\in\mathcal{T}}\mathbb{E}\left[\left(\sum_{\ell=1}^{K}\eta_{S}^{(\ell)}Z_{S}(\bm{t}_{S}^{(\ell)})\right)^{2}\right].$
(C.13)
Therefore, combining (C.11) with (C.12) and (C.13) gives,
$\displaystyle\lim_{n\rightarrow\infty}\mathrm{Var}\left[\sum_{i=1}^{n}A_{i,\pi_{1}(i),\ldots,\pi_{d-1}(i)}\right]$
$\displaystyle=\lim_{n\rightarrow\infty}\frac{1}{2}\left(\mathrm{Re}\left(\mathbb{E}\left[n\langle\bm{\eta},{\bm{\mathcal{Z}}}_{n}\rangle^{2}\right]\right)+\mathbb{E}\left[|\sqrt{n}\langle\bm{\eta},{\bm{\mathcal{Z}}}_{n}\rangle|^{2}\right]\right)$
$\displaystyle=\frac{1}{2}\left(\mathrm{Re}\left(\mathbb{E}\left[\langle\bm{\eta},{\bm{\mathcal{Z}}}\rangle^{2}\right]\right)+\mathbb{E}\left[|\langle\bm{\eta},{\bm{\mathcal{Z}}}\rangle|^{2}\right]\right),$
where
$\displaystyle{\bm{\mathcal{Z}}}:=\left(\left(Z_{S}(\bm{t}_{S}^{(1)}),Z_{S}(\bm{t}_{S}^{(2)}),\ldots,Z_{S}(\bm{t}_{S}^{(K)})\right)\right)_{S\in\mathcal{T}}.$
(C.14)
Hence, by Theorem A.1, under $H_{0}$ (recall (C.8)),
$\displaystyle\sqrt{n}{\bm{\mathcal{Z}}}_{n}\stackrel{{\scriptstyle
D}}{{\rightarrow}}{\bm{\mathcal{Z}}}.$
This establishes the finite-dimensional convergence of the process
$\sqrt{n}Z_{S,n}(\bm{t})$. To establish the weak convergence of the process
$\sqrt{n}Z_{S,n}(\bm{t})\stackrel{{\scriptstyle
D}}{{\longrightarrow}}Z_{S}(\bm{t})$ in $L^{\infty}(D_{S}(\delta))$, we need
to prove the following stochastic equicontinuity statement for all
$S\in\mathcal{T}$.
###### Lemma C.4.
For any given $\eta>0$,
$\displaystyle\lim_{\varepsilon\rightarrow
0}\limsup_{n\rightarrow\infty}\mathbb{P}\left\\{\sup_{\substack{\bm{t},\bm{t}^{\prime}\in
D_{S}(\delta)\\\
\|\bm{t}-\bm{t}^{\prime}\|\leq\varepsilon}}\big{|}\sqrt{n}Z_{S,n}(\bm{t})-\sqrt{n}Z_{S,n}(\bm{t}^{\prime})\big{|}>\eta\right\\}=0.$
(C.15)
###### Proof of Lemma C.4.
By [72, Theorem 2.11.1], to prove Lemma C.4 it is enough to show the
following:
1. (1)
There exists a sequence of $\eta_{n}$ such that
$\displaystyle\sup_{1\leq b\leq n}\sup_{\bm{t}\in
D_{S}(\delta)}\left|\Gamma(\bm{t},b)\right|\leq\eta_{n},\
\Gamma(\bm{t},b):=\frac{1}{\sqrt{n}}\prod_{i\in
S}\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(a)})\rangle}-e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(b)})\rangle}\right).$ (C.16)
2. (2)
For any sequence $\varepsilon_{n}\downarrow 0$,
$\displaystyle\sup_{\bm{t},\bm{t}^{\prime}\in
D_{S}(\delta),\|\bm{t}-\bm{t}^{\prime}\|\leq\varepsilon_{n}}\mathbb{E}\bigg{|}\sqrt{n}Z_{S,n}(\bm{t})-\sqrt{n}Z_{S,n}(\bm{t}^{\prime})\bigg{|}^{2}\rightarrow
0.$ (C.17)
3. (3)
For any sequence $\varepsilon_{n}\downarrow 0$,
$\displaystyle\int_{0}^{\varepsilon_{n}}\left[\log\left(N\left(\eta,D_{S}(\delta),d_{n}(\cdot,\cdot)\right)\right)\right]^{\frac{1}{2}}\mathrm{d}\eta\stackrel{{\scriptstyle
P}}{{\rightarrow}}0,$ (C.18)
where $N(\eta,D_{S}(\delta),d_{n}(\cdot,\cdot))$ denotes the $\eta$ covering
number of the set $D_{S}(\delta)$ based on the random metric
$d_{n}(\cdot,\cdot)$ satisfying
$\displaystyle
d_{n}^{2}(\bm{t},\bm{t}^{\prime})=\sum_{b=1}^{n}\left|\Gamma(\bm{t},b)-\Gamma(\bm{t}^{\prime},b)\right|^{2}.$
(C.19)
Note that $|\Gamma(\bm{t},b)|\leq 2^{|S|}/\sqrt{n}$ so that (C.16) is
immediate. The next lemma proves (C.17).
###### Lemma C.5.
There exists a constant $C>0$, depending only on $d_{i}$ for $1\leq i\leq
r$, such that
$\displaystyle\mathbb{E}\big{|}\sqrt{n}Z_{S,n}(\bm{t})-\sqrt{n}Z_{S,n}(\bm{t}^{\prime})\big{|}^{2}\leq
C\|\bm{t}-\bm{t}^{\prime}\|.$
Consequently, the condition in (C.17) holds.
###### Proof.
For ease of notation, define
$\displaystyle F^{(1)}(\bm{t},\bm{t}^{\prime},i):=$
$\displaystyle\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i}-t_{i}^{\prime},h_{u}^{d_{i}}\rangle}-1$ $\displaystyle
F^{(2)}(\bm{t},i):=$ $\displaystyle
1-\left(\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i},h_{u}^{d_{i}}\rangle}\right)\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle-
t_{i},h_{a}^{d_{i}}\rangle}\right)$ $\displaystyle
F^{(3)}(\bm{t},\bm{t}^{\prime},i):=$
$\displaystyle\bigg{(}\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i}^{\prime},h_{u}^{d_{i}}\rangle}\bigg{)}\bigg{(}\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle-
t_{i}^{\prime},h_{u}^{d_{i}}\rangle}\bigg{)}-\bigg{(}\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i},h_{u}^{d_{i}}\rangle}\bigg{)}\bigg{(}\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle-
t_{i}^{\prime},h_{u}^{d_{i}}\rangle}\bigg{)}.$
Then as in the proof of Lemma C.1 we can compute
$\displaystyle\mathbb{E}\left|\sqrt{n}Z_{S,n}(\bm{t})-\sqrt{n}Z_{S,n}(\bm{t}^{\prime})\right|^{2}$
$\displaystyle=\mathbb{E}\left|\sqrt{n}Z_{S,n}(\bm{t})\right|^{2}+\mathbb{E}\left|\sqrt{n}Z_{S,n}(\bm{t}^{\prime})\right|^{2}-\mathbb{E}\left[nZ_{S,n}(\bm{t})\bar{Z}_{S,n}(\bm{t}^{\prime})\right]-\mathbb{E}\left[n\bar{Z}_{S,n}(\bm{t})Z_{S,n}(\bm{t}^{\prime})\right]$
$\displaystyle=c_{S,n}(T_{1}+T_{2}+T_{3}+T_{4}),$ (C.20)
where $c_{S,n}:=1+\frac{(-1)^{|S|}}{(n-1)^{|S|-1}}$ and
$\displaystyle T_{1}$ $\displaystyle=\prod_{i\in S}F^{(2)}(\bm{t},i),$
$\displaystyle T_{2}$ $\displaystyle=\prod_{i\in
S}F^{(2)}(\bm{t}^{\prime},i),$ $\displaystyle T_{3}$
$\displaystyle=-\prod_{i\in
S}\left\\{F^{(1)}(\bm{t},\bm{t}^{\prime},i)+F^{(2)}(\bm{t}^{\prime},i)+F^{(3)}(\bm{t},\bm{t}^{\prime},i)\right\\},$
$\displaystyle T_{4}$ $\displaystyle=-\prod_{i\in
S}\left\\{\bar{F}^{(1)}(\bm{t},\bm{t}^{\prime},i)+F^{(2)}(\bm{t},i)+F^{(3)}(\bm{t}^{\prime},\bm{t},i)\right\\},$
where $\bar{F}^{(1)}$ denotes the complex conjugate of $F^{(1)}$. We will now
show that there exist universal constants $C_{1},C_{2}$ such that for all
$i\in S$,
$\displaystyle|F^{(1)}(\bm{t},\bm{t}^{\prime},i)|\leq
C_{1}\|\bm{t}-\bm{t}^{\prime}\|\text{ and }\
|F^{(3)}(\bm{t},\bm{t}^{\prime},i)|\leq C_{2}\|\bm{t}-\bm{t}^{\prime}\|.$
Towards this, using $|e^{\iota t}-1|\leq|t|$ and the Cauchy-Schwarz
inequality, we have
$\displaystyle|F^{(1)}(\bm{t},\bm{t}^{\prime},i)|\leq\frac{1}{n}\sum_{u=1}^{n}|\langle
t_{i}-t_{i}^{\prime},h_{u}^{d_{i}}\rangle|\leq\|\bm{t}-\bm{t}^{\prime}\|\sqrt{d_{0}}.$
Similarly, we bound
$\displaystyle|F^{(3)}(\bm{t},\bm{t}^{\prime},i)|$
$\displaystyle\leq\left|\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i}^{\prime},h_{u}^{d_{i}}\rangle}-\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i},h_{u}^{d_{i}}\rangle}\right|$
$\displaystyle\leq\left|\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i}^{\prime}-t_{i},h_{u}^{d_{i}}\rangle}-1\right|$
$\displaystyle\leq\|\bm{t}-\bm{t}^{\prime}\|\sqrt{d_{0}}.$
Also, we notice that
$|F^{(1)}(\bm{t},\bm{t}^{\prime},i)|,|F^{(2)}(\bm{t},i)|,|F^{(3)}(\bm{t},\bm{t}^{\prime},i)|\leq
2$ for all $\bm{t},\bm{t}^{\prime}\in\mathbb{R}^{d_{0}}$ and $i\in S$. This
implies,
$\displaystyle|T_{1}+T_{2}+T_{3}+T_{4}|\leq C\|\bm{t}-\bm{t}^{\prime}\|.$
Applying this in (C.20) completes the proof of Lemma C.5. ∎
Now we prove (C.18). For this we need the following lemma:
###### Lemma C.6.
There exists a universal constant $C_{0}>0$ such that
$\displaystyle d_{n}(\bm{t},\bm{t}^{\prime})\leq
C_{0}\|\bm{t}-\bm{t}^{\prime}\|,$
for $d_{n}(\cdot,\cdot)$ as defined in (C.19).
###### Proof.
We begin by computing
$d^{2}_{n}(\bm{t},\bm{t}^{\prime})=\sum_{b=1}^{n}|\Gamma(\bm{t},b)-\Gamma(\bm{t}^{\prime},b)|^{2}$.
Note that
$\displaystyle|\Gamma(\bm{t},b)-\Gamma(\bm{t}^{\prime},b)|=\frac{1}{\sqrt{n}}\left|\prod_{i\in
S}\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},h_{a}^{d_{i}}\rangle}-e^{\iota\langle
t_{i},h_{b}^{d_{i}}\rangle}\right)-\prod_{i\in
S}\left(\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i}^{\prime},h_{a}^{d_{i}}\rangle}-e^{\iota\langle
t_{i}^{\prime},h_{b}^{d_{i}}\rangle}\right)\right|.$
Now, recall the following inequality
$\displaystyle\left|\prod_{a=1}^{n}w_{a}-\prod_{a=1}^{n}v_{a}\right|\leq\sum_{a=1}^{n}|w_{a}-v_{a}|,\quad|w_{a}|,|v_{a}|\leq
1,\text{ for all }1\leq a\leq n.$ (C.21)
Then
$\displaystyle|\Gamma(\bm{t},b)-\Gamma(\bm{t}^{\prime},b)|\leq\frac{2^{|S|-1}}{\sqrt{n}}\sum_{i\in
S}\left|\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},h_{a}^{d_{i}}\rangle}-\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i}^{\prime},h_{a}^{d_{i}}\rangle}+e^{\iota\langle
t_{i}^{\prime},h_{b}^{d_{i}}\rangle}-e^{\iota\langle
t_{i},h_{b}^{d_{i}}\rangle}\right|.$
Note that
$\displaystyle\left|\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i},h_{a}^{d_{i}}\rangle}-\frac{1}{n}\sum_{a=1}^{n}e^{\iota\langle
t_{i}^{\prime},h_{a}^{d_{i}}\rangle}\right|\leq\frac{1}{n}\sum_{a=1}^{n}\left|e^{\iota\langle
t_{i},h_{a}^{d_{i}}\rangle}-e^{\iota\langle
t_{i}^{\prime},h_{a}^{d_{i}}\rangle}\right|$
$\displaystyle\leq\frac{1}{n}\sum_{a=1}^{n}\|\bm{t}-\bm{t}^{\prime}\|\|h_{a}^{d_{i}}\|$
$\displaystyle\leq\sqrt{d_{0}}\|\bm{t}-\bm{t}^{\prime}\|.$
Similarly,
$\displaystyle|e^{\iota\langle t_{i},h_{b}^{d_{i}}\rangle}-e^{\iota\langle
t_{i}^{\prime},h_{b}^{d_{i}}\rangle}|\leq\|\bm{t}-\bm{t}^{\prime}\|\|h_{b}^{d_{i}}\|\leq\sqrt{d_{0}}\|\bm{t}-\bm{t}^{\prime}\|.$
Therefore,
$\displaystyle|\Gamma(\bm{t},b)-\Gamma(\bm{t}^{\prime},b)|\leq\frac{2^{|S|}}{\sqrt{n}}\sum_{i\in
S}\sqrt{d_{0}}\|\bm{t}-\bm{t}^{\prime}\|=\frac{|S|2^{|S|}\sqrt{d_{0}}}{\sqrt{n}}\|\bm{t}-\bm{t}^{\prime}\|,$
which completes the proof of Lemma C.6. ∎
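As an illustration (not part of the proof), the Lipschitz property in Lemma C.6 can be checked numerically, with $C_{0}=|S|2^{|S|}\sqrt{d_{0}}$ read off from the argument above; the sample size, block dimensions, and generic point sets standing in for the grids $h^{d_{i}}$ below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dims = 200, [1, 2]                    # |S| = 2 blocks; d_0 = sum(dims) = 3
h = [rng.random((n, d)) for d in dims]   # stand-ins for the grids h^{d_i}

def gamma(t_blocks, b):
    """Gamma(t, b) from (C.16): product over i in S of centred empirical
    characteristic-function terms, scaled by 1/sqrt(n)."""
    val = 1.0 + 0.0j
    for t_i, h_i in zip(t_blocks, h):
        phases = np.exp(1j * (h_i @ t_i))   # e^{i<t_i, h_i^{(a)}>}, a = 1..n
        val *= phases.mean() - phases[b]
    return val / np.sqrt(n)

def d_n(t_blocks, s_blocks):
    """Random metric d_n from (C.19)."""
    total = sum(abs(gamma(t_blocks, b) - gamma(s_blocks, b)) ** 2
                for b in range(n))
    return np.sqrt(total)

S_size, d0 = len(dims), sum(dims)
C0 = S_size * 2**S_size * np.sqrt(d0)    # Lipschitz constant from Lemma C.6

for _ in range(20):
    t = [rng.normal(size=d) for d in dims]
    s = [t_i + 0.05 * rng.normal(size=d) for t_i, d in zip(t, dims)]
    dist = np.sqrt(sum(np.sum((t_i - s_i) ** 2) for t_i, s_i in zip(t, s)))
    assert d_n(t, s) <= C0 * dist + 1e-12
```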
To complete the proof of (C.18) we compute the following covering number:
$\displaystyle
N(\eta,D_{S}(\delta),\|\cdot\|)\leq\left(\frac{3}{\eta}\right)^{d_{0}}\frac{\mathrm{vol}(D_{S}(\delta))}{\mathrm{vol}(B_{1}(0))}\leq\left(\frac{3}{\eta}+1\right)^{d_{0}}\frac{\mathrm{vol}(D_{S}(\delta))}{\mathrm{vol}(B_{1}(0))}.$
Then by Lemma C.6,
$\displaystyle\int_{0}^{\varepsilon_{n}}\left[\log\left(N(\eta,D_{S}(\delta),d_{n}(\cdot,\cdot))\right)\right]^{\frac{1}{2}}\mathrm{d}\eta$
$\displaystyle\leq\int_{0}^{\varepsilon_{n}}\left[\log\left(N(\eta/C_{0},D_{S}(\delta),\|\cdot\|_{2})\right)\right]^{\frac{1}{2}}\mathrm{d}\eta$
$\displaystyle\leq\int_{0}^{\varepsilon_{n}}\left(\log\left(\frac{\mathrm{vol}(D_{S}(\delta))}{\mathrm{vol}(B_{1}(0))}\right)\right)^{\frac{1}{2}}\mathrm{d}\eta+\int_{0}^{\varepsilon_{n}}\left(\frac{3C_{0}d_{0}}{\eta}\right)^{\frac{1}{2}}\mathrm{d}\eta$
$\displaystyle\rightarrow 0$
as $\varepsilon_{n}\rightarrow 0.$ This completes the proof of (C.18) and,
hence, the proof of Lemma C.4. ∎
Lemma C.4 together with the finite dimensional convergence established earlier
implies that the process $\sqrt{n}Z_{S,n}(\bm{t})\stackrel{{\scriptstyle
D}}{{\longrightarrow}}Z_{S}(\bm{t})$ in $L^{\infty}(D_{S}(\delta))$. This
completes the proof of condition (1) in Lemma C.2.
Establishing Condition (3) in Lemma C.2: Note that for all $S\in\mathcal{T}$,
$\displaystyle\mathbb{E}\left|n\int_{D_{S}(\delta)}|Z_{S,n}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}-n\int_{\mathbb{R}^{d_{0}}}|Z_{S,n}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\right|=\mathbb{E}\left|n\int_{D_{S}^{c}(\delta)}|Z_{S,n}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\right|.$
Then by triangle inequality and Fubini’s theorem, to establish Condition (3)
in Lemma C.2 it is enough to show that
$\displaystyle\lim_{\delta\rightarrow
0}\limsup_{n\rightarrow\infty}\int_{D_{S}^{c}(\delta)}n\mathbb{E}\left|Z_{S,n}(\bm{t})\right|^{2}\prod_{i\in
S}\mathrm{d}w_{i}=0.$ (C.22)
Recalling the definition of $D_{S}(\delta)=\prod_{i\in S}T_{i}(\delta)$, we
have the following inequality,
$\displaystyle\int_{D_{S}^{c}(\delta)}n\mathbb{E}\left|Z_{S,n}(\bm{t})\right|^{2}\prod_{i\in
S}\mathrm{d}w_{i}$ $\displaystyle\leq\sum_{i\in
S}\left\\{\int_{\|t_{i}\|<\delta}n\mathbb{E}\left|Z_{S,n}(\bm{t})\right|^{2}\prod_{i\in
S}\mathrm{d}w_{i}+\int_{\|t_{i}\|>1/\delta}n\mathbb{E}\left|Z_{S,n}(\bm{t})\right|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\right\\}.$ (C.23)
Now, define
$\displaystyle G_{i}(\delta,t)=\int_{\|w\|<\delta}\frac{1-\cos(\langle
t,w\rangle)}{c_{d_{i}}\|w\|^{d_{i}+1}}\mathrm{d}w.$
Then by Lemma C.1, we obtain
$\displaystyle\int_{\|t_{i}\|<\delta}n\mathbb{E}\left|Z_{S,n}(\bm{t})\right|^{2}\prod_{i\in
S}\mathrm{d}w_{i}$
$\displaystyle=\int_{\|t_{i}\|<\delta}\left(1+\frac{(-1)^{|S|}}{(n-1)^{|S|-1}}\right)\prod_{i\in
S}\left[1-\left\\{\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i},h_{u}^{d_{i}}\rangle}\right\\}\left\\{\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle-
t_{i},h_{u}^{d_{i}}\rangle}\right\\}\right]\prod_{i\in S}\mathrm{d}w_{i}$
$\displaystyle\leq 2\int_{\|t_{i}\|<\delta}\prod_{i\in
S}\left[1-\left\\{\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i},h_{u}^{d_{i}}\rangle}\right\\}\left\\{\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle-
t_{i},h_{u}^{d_{i}}\rangle}\right\\}\right]\prod_{i\in S}\mathrm{d}w_{i}$
$\displaystyle=\frac{2}{n^{2|S|}}\sum_{\begin{subarray}{c}u_{1},\ldots,u_{|S|}\\\
v_{1},\ldots,v_{|S|}\end{subarray}}\int_{\|t_{i}\|<\delta}\frac{1-\cos\left(\langle
t_{i},h_{u_{i}}^{d_{i}}-h_{v_{i}}^{d_{i}}\rangle\right)}{c_{d_{i}}\|t_{i}\|^{1+d_{i}}}\mathrm{d}t_{i}\prod_{i^{\prime}\in
S\backslash\\{i\\}}\left(1-\cos\left(\langle
t_{i^{\prime}},h_{u_{i^{\prime}}}^{d_{i^{\prime}}}-h_{v_{i^{\prime}}}^{d_{i^{\prime}}}\rangle\right)\right)\prod_{i^{\prime}\in
S\backslash\\{i\\}}\mathrm{d}w_{i^{\prime}}$
$\displaystyle=\frac{2}{n^{2|S|}}\sum_{\begin{subarray}{c}u_{1},\ldots,u_{|S|}\\\
v_{1},\ldots,v_{|S|}\end{subarray}}G_{i}(\delta,h_{u_{i}}^{d_{i}}-h_{v_{i}}^{d_{i}})\prod_{i^{\prime}\in
S\backslash\\{i\\}}\|h_{u_{i^{\prime}}}^{d_{i^{\prime}}}-h_{v_{i^{\prime}}}^{d_{i^{\prime}}}\|.$
(C.24)
Note that the second equality above holds due to the fact that $\sin$ is an
odd function and hence integrates to $0$ over symmetric sets and the last
equality holds due to [70, Lemma 1]. Now, suppose
$t\in\\{t:\|t\|\leq\sqrt{d_{0}}\\}$. Then by [70, Lemma 1],
$\displaystyle G_{i}(\delta,t)\leq\|t\|\leq\sqrt{d_{0}}.$
Also, by dominated convergence theorem, $\lim_{\delta\rightarrow
0}G_{i}(\delta,t)=0$ for any $t\in\\{t:\|t\|\leq\sqrt{d_{0}}\\}$. Then by
Assumption 1 and (C.24),
$\displaystyle\lim_{n\rightarrow\infty}\int_{\|t_{i}\|<\delta}n\mathbb{E}\left|Z_{S,n}(\bm{t})\right|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\leq
2\mathbb{E}\left[G_{i}(\delta,U_{i}^{a}-U_{i}^{b})\prod_{i^{\prime}\in
S\backslash\\{i\\}}\|U_{i^{\prime}}^{a}-U_{i^{\prime}}^{b}\|\right],$
where $U_{i}^{a},U_{i}^{b}$ are two independent samples from
$\mathrm{Unif}([0,1]^{d_{i}})$. Applying the dominated convergence theorem
again gives,
$\displaystyle\lim_{\delta\rightarrow
0}\mathbb{E}\left[G_{i}(\delta,U_{i}^{a}-U_{i}^{b})\prod_{i^{\prime}\in
S\backslash\\{i\\}}\|U_{i^{\prime}}^{a}-U_{i^{\prime}}^{b}\|\right]=0,$
which implies,
$\displaystyle\lim_{\delta\rightarrow
0}\lim_{n\rightarrow\infty}\int_{\|t_{i}\|<\delta}n\mathbb{E}\left|Z_{S,n}(\bm{t})\right|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\leq\lim_{\delta\rightarrow
0}2\mathbb{E}\left[G_{i}(\delta,U_{i}^{a}-U_{i}^{b})\prod_{i^{\prime}\in
S\backslash\\{i\\}}\|U_{i^{\prime}}^{a}-U_{i^{\prime}}^{b}\|\right]=0.$ (C.25)
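The behaviour of $G_{i}(\delta,t)$ used above (monotone in $\delta$, bounded by $\|t\|$, and vanishing as $\delta\rightarrow 0$) can be illustrated numerically in dimension $d_{i}=1$, where $c_{1}=\pi$ is the constant for which the full-line integral equals $|t|$; the grid sizes below are arbitrary:

```python
import numpy as np

def G1(delta, t, m=100_000):
    """Midpoint-rule approximation of G_i(delta, t) in dimension d_i = 1:
    (1/pi) * integral over |w| < delta of (1 - cos(t*w)) / w^2 dw.
    Here c_1 = pi, the constant for which the full-line integral is |t|."""
    dw = delta / m
    w = (np.arange(m) + 0.5) * dw           # midpoints in (0, delta)
    f = (1.0 - np.cos(t * w)) / (np.pi * w**2)
    return 2.0 * f.sum() * dw               # doubled by symmetry in w

t = 0.8
vals = [G1(d, t) for d in (2.0, 0.5, 0.1, 0.01)]

# G_i(delta, t) decreases with delta, stays below |t|, and vanishes as
# delta -> 0, as used via dominated convergence in (C.25).
assert all(a > b for a, b in zip(vals, vals[1:]))
assert vals[0] <= abs(t)
assert vals[-1] < 1e-2
```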
On the other hand,
$\displaystyle\int_{\|t_{i}\|>1/\delta}n\mathbb{E}\left|Z_{S,n}(\bm{t})\right|^{2}\prod_{i\in
S}\mathrm{d}w_{i}$
$\displaystyle\leq\frac{2}{n^{2|S|}}\sum_{\begin{subarray}{c}u_{1},\ldots,u_{|S|}\\\
v_{1},\ldots,v_{|S|}\end{subarray}}\int_{\|t_{i}\|>1/\delta}\frac{1-\cos\left(\langle
t_{i},h_{u_{i}}^{d_{i}}-h_{v_{i}}^{d_{i}}\rangle\right)}{c_{d_{i}}\|t_{i}\|^{1+d_{i}}}\mathrm{d}t_{i}\prod_{i^{\prime}\in
S\backslash\\{i\\}}\left(1-\cos\left(\langle
t_{i^{\prime}},h_{u_{i^{\prime}}}^{d_{i^{\prime}}}-h_{v_{i^{\prime}}}^{d_{i^{\prime}}}\rangle\right)\right)\prod_{i^{\prime}\in
S\backslash\\{i\\}}\mathrm{d}w_{i^{\prime}}$
$\displaystyle=\frac{2}{n^{2|S|}}\sum_{\begin{subarray}{c}u_{1},\ldots,u_{|S|}\\\
v_{1},\ldots,v_{|S|}\end{subarray}}\int_{\|t_{i}\|>1/\delta}\frac{1-\cos\left(\langle
t_{i},h_{u_{i}}^{d_{i}}-h_{v_{i}}^{d_{i}}\rangle\right)}{c_{d_{i}}\|t_{i}\|^{1+d_{i}}}\mathrm{d}t_{i}\prod_{i^{\prime}\in
S\backslash\\{i\\}}\|h_{u_{i^{\prime}}}^{d_{i^{\prime}}}-h_{v_{i^{\prime}}}^{d_{i^{\prime}}}\|$
$\displaystyle\leq\frac{2}{n^{2|S|}}\sum_{\begin{subarray}{c}u_{1},\ldots,u_{|S|}\\\
v_{1},\ldots,v_{|S|}\end{subarray}}\prod_{i^{\prime}\in
S\backslash\\{i\\}}\|h_{u_{i^{\prime}}}^{d_{i^{\prime}}}-h_{v_{i^{\prime}}}^{d_{i^{\prime}}}\|\int_{\|t_{i}\|>1/\delta}\frac{1}{c_{d_{i}}\|t_{i}\|^{1+d_{i}}}\mathrm{d}t_{i}.$
Now, note that
$\displaystyle\int_{\|t_{i}\|>1/\delta}\frac{1}{c_{d_{i}}\|t_{i}\|^{1+d_{i}}}\mathrm{d}t_{i}\leq
K_{d_{i}}\delta,$
where $K_{d_{i}}$ is a constant that only depends on $d_{i}$. This implies,
$\displaystyle\lim_{n\rightarrow\infty}\int_{\|t_{i}\|>1/\delta}n\mathbb{E}\left|Z_{S,n}(\bm{t})\right|^{2}\prod_{i\in
S}\mathrm{d}w_{i}$
$\displaystyle\leq\lim_{n\rightarrow\infty}\frac{2}{n^{2|S|}}\sum_{\begin{subarray}{c}u_{1},\ldots,u_{|S|}\\\
v_{1},\ldots,v_{|S|}\end{subarray}}\prod_{i^{\prime}\in
S\backslash\\{i\\}}\|h_{u_{i^{\prime}}}^{d_{i^{\prime}}}-h_{v_{i^{\prime}}}^{d_{i^{\prime}}}\|K_{d_{i}}\delta$
$\displaystyle=2\mathbb{E}\left[\prod_{i^{\prime}\in
S\backslash\\{i\\}}\|U_{i^{\prime}}^{a}-U_{i^{\prime}}^{b}\|\right]K_{d_{i}}\delta.$
Taking $\delta\rightarrow 0$ on both sides above we conclude
$\displaystyle\lim_{\delta\rightarrow
0}\lim_{n\rightarrow\infty}\int_{\|t_{i}\|>1/\delta}n\mathbb{E}\left|Z_{S,n}(\bm{t})\right|^{2}\prod_{i\in
S}\mathrm{d}w_{i}=0.$ (C.26)
Combining (C.25) and (C.26) establishes (C.22), which completes the proof of
statement (3) in Lemma C.2.
Establishing Condition (2) in Lemma C.2: First note that, as
$\delta\rightarrow 0$,
$\displaystyle Z_{S}(\bm{t})\bm{1}\big{\\{}\bm{t}\in
D_{S}(\delta)\big{\\}}\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}Z_{S}(\bm{t})\bm{1}\big{\\{}\bm{t}\in\mathbb{R}^{d_{1}}\times\cdots\times\mathbb{R}^{d_{|S|}}\big{\\}}.$
Also,
$\displaystyle\int_{\mathbb{R}^{\sum_{i\in
S}d_{i}}}\mathbb{E}\left[|Z_{S}(\bm{t})|^{2}\right]\prod_{i\in
S}\mathrm{d}w_{i}$ $\displaystyle=\int_{\mathbb{R}^{\sum_{i\in
S}d_{i}}}\prod_{i\in S}\left[1-\mathbb{E}\left[e^{\iota\langle
t_{i},U_{i}\rangle}\right]\mathbb{E}\left[e^{\iota\langle-
t_{i},U_{i}\rangle}\right]\right]\prod_{i\in S}\mathrm{d}w_{i}$
$\displaystyle=\prod_{i\in
S}\mathbb{E}\left\|U_{i}-U^{\prime}_{i}\right\|<\infty,$
where $U_{i},U^{\prime}_{i}\sim\mathrm{Unif}([0,1]^{d_{i}})$. Then by
dominated convergence theorem,
$\int_{D_{S}(\delta)}|Z_{S}(\bm{t})|^{2}\prod_{i\in
S}\mathrm{d}w_{i}\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}\int_{\mathbb{R}^{\sum_{i\in
S}d_{i}}}|Z_{S}(\bm{t})|^{2}\prod_{i\in S}\mathrm{d}w_{i},$
which establishes Condition (2) in Lemma C.2. $\Box$
## Appendix D Consistency with Finite Resamples
In this section we prove the technical result required for establishing the
consistency of the resampling based implementation of the test as shown in
Theorem 3.11.
###### Proposition D.1.
Let $\bm{\pi}=\\{\pi_{1},\ldots,\pi_{r}\\}$ be a collection of independent
uniform random permutations in $S_{n}$. Then for any
$S\subseteq\\{1,2,\ldots,r\\}$ with $|S|\geq 2$,
$\displaystyle\theta_{S}(\bm{h}_{S,\bm{\pi}}^{(1)},\ldots,\bm{h}_{S,\bm{\pi}}^{(n)})\stackrel{{\scriptstyle
P}}{{\rightarrow}}0,$ (D.1)
where $\theta_{S}$ is as defined in (2.18) and
$\bm{h}_{S,\bm{\pi}}^{(a)}:=(h_{\pi_{i}(a)}^{d_{i}})_{i\in
S}\in\mathbb{R}^{d_{S}}$, for $1\leq a\leq n$. Consequently, recalling (3.11),
$\mathrm{RdCov}_{n}^{(1)}(\bm{X};\bm{C})\stackrel{{\scriptstyle
P}}{{\rightarrow}}0.$
###### Proof of Proposition D.1.
Note that since $\theta_{S}$ is always non-negative, to prove (D.1) it
suffices to show that
$\displaystyle\lim_{n\rightarrow\infty}\mathbb{E}[\theta_{S}(\bm{h}_{S,\bm{\pi}}^{(1)},\ldots,\bm{h}_{S,\bm{\pi}}^{(n)})]=0.$
Recalling (2.15), (2.5) and (2.19),
$\theta_{S}(\bm{h}_{S,\bm{\pi}}^{(1)},\ldots,\bm{h}_{S,\bm{\pi}}^{(n)})$ can
be written as
$\displaystyle\theta_{S}(\bm{h}_{S,\bm{\pi}}^{(1)},\ldots,\bm{h}_{S,\bm{\pi}}^{(n)})=\frac{1}{n^{2}}\sum_{1\leq
a,b\leq
n}\prod_{i=1}^{r}\hat{w}_{i}(h_{\pi_{i}(a)}^{d_{i}},h_{\pi_{i}(b)}^{d_{i}}),$
where, for $x,y\in\mathbb{R}^{d_{i}}$,
$\displaystyle\hat{w}_{i}(x,y)=\frac{1}{n}\sum_{v=1}^{n}\|x-h_{v}^{d_{i}}\|+\frac{1}{n}\sum_{u=1}^{n}\|h_{u}^{d_{i}}-y\|-\|x-y\|-\frac{1}{n^{2}}\sum_{1\leq
u,v\leq n}\|h_{u}^{d_{i}}-h_{v}^{d_{i}}\|.$
Note that, for $1\leq i\leq r$,
$\displaystyle\mathbb{E}\left[\hat{w}_{i}(h_{\pi_{i}(a)}^{d_{i}},h_{\pi_{i}(b)}^{d_{i}})\right]=\frac{1}{n(n-1)}\sum_{1\leq
s\neq t\leq n}\hat{w}_{i}(h_{s}^{d_{i}},h_{t}^{d_{i}})\rightarrow 0,$
by Assumption 1. Hence, by independence of the permutations,
$\displaystyle\mathbb{E}[\theta_{S}(\bm{h}_{S,\bm{\pi}}^{(1)},\ldots,\bm{h}_{S,\bm{\pi}}^{(n)})]$
$\displaystyle=\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\prod_{i=1}^{r}\mathbb{E}[\hat{w}_{i}(h_{\pi_{i}(a)}^{d_{i}},h_{\pi_{i}(b)}^{d_{i}})]$
$\displaystyle=\prod_{i=1}^{r}\left(\frac{1}{n(n-1)}\sum_{1\leq s\neq t\leq
n}\hat{w}_{i}(h_{s}^{d_{i}},h_{t}^{d_{i}})\right)\rightarrow 0.$
This completes the proof of Proposition D.1. ∎
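For illustration, $\theta_{S}(\bm{h}_{S,\bm{\pi}}^{(1)},\ldots,\bm{h}_{S,\bm{\pi}}^{(n)})$ can be computed directly from the doubly-centred kernels $\hat{w}_{i}$ above. The following sketch (with arbitrary sample size and dimensions, and generic point sets standing in for the grids $h^{d_{i}}$) exhibits both the exact row-centring of $\hat{w}_{i}$ and the concentration of $\theta_{S}$ near $0$ under independent uniform permutations, consistent with (D.1):

```python
import numpy as np

rng = np.random.default_rng(2)
n, dims = 100, [1, 2]                    # r = |S| = 2 coordinates; arbitrary
h = [rng.random((n, d)) for d in dims]   # stand-ins for the grids h^{d_i}

def centred_kernel(hi):
    """n x n matrix of hat{w}_i(h^{(a)}, h^{(b)}): the doubly-centred
    (negated) distance kernel from the proof of Proposition D.1."""
    D = np.linalg.norm(hi[:, None, :] - hi[None, :, :], axis=-1)
    return D.mean(axis=0)[None, :] + D.mean(axis=1)[:, None] - D - D.mean()

W = [centred_kernel(hi) for hi in h]

# Each centred kernel has rows and columns summing to zero exactly,
# which is what makes the cross terms in (E.4)-style expansions vanish.
for Wi in W:
    assert np.abs(Wi.sum(axis=0)).max() < 1e-9

def theta(perms):
    """theta_S on the permuted sample:
    (1/n^2) sum_{a,b} prod_i hat{w}_i(h^{d_i}_{pi_i(a)}, h^{d_i}_{pi_i(b)})."""
    prod = np.ones((n, n))
    for Wi, pi in zip(W, perms):
        prod *= Wi[np.ix_(pi, pi)]
    return prod.sum() / n**2

# Under independent uniform permutations, theta_S is non-negative and small.
samples = [theta([rng.permutation(n) for _ in dims]) for _ in range(50)]
assert min(samples) >= -1e-9
assert np.mean(samples) < 0.05
```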
## Appendix E Proof of Results in Section 4
This section is organized as follows: We begin with the proof of the Hájek
projection result in Section E.1. The proof of Theorem 4.1 is given in
Section E.2. Theorem 4.4 is proved in Section E.3.
### E.1. Proof of Proposition 4.5
To prove Proposition 4.5 we will show
$\displaystyle\mathbb{E}_{H_{0}}|\sqrt{n}Z_{S,n}^{\mathrm{oracle}}(\bm{t})-\sqrt{n}Z_{S,n}(\bm{t})|^{2}=o(1).$
(E.1)
Hereafter, all moments will be computed under $H_{0}$, so we will omit the
subscript $H_{0}$ for notational convenience. To begin with, note that
$\displaystyle\mathbb{E}|\sqrt{n}Z_{S,n}^{\mathrm{oracle}}(\bm{t})-\sqrt{n}Z_{S,n}(\bm{t})|^{2}$
$\displaystyle=\mathbb{E}[nZ_{S,n}^{\mathrm{oracle}}(\bm{t})\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t})]+\mathbb{E}[nZ_{S,n}(\bm{t})\bar{Z}_{S,n}(\bm{t})]-\mathbb{E}[nZ_{S,n}(\bm{t})\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t})]-\mathbb{E}[n\bar{Z}_{S,n}(\bm{t})Z_{S,n}^{\mathrm{oracle}}(\bm{t})]$
$\displaystyle:=A_{n}+B_{n}-C_{n}-D_{n}.$ (E.2)
By Lemma C.1,
$\lim_{n\rightarrow\infty}B_{n}=\lim_{n\rightarrow\infty}\mathbb{E}[nZ_{S,n}(\bm{t})\bar{Z}_{S,n}(\bm{t})]=\lim_{n\rightarrow\infty}n\mathrm{Cov}[Z_{S,n}(\bm{t}),\bar{Z}_{S,n}(\bm{t})]=C_{S}(\bm{t},\bm{t}).$
Similarly, as in the proof of Lemma C.1 we can show that
$\lim_{n\rightarrow\infty}A_{n}=C_{S}(\bm{t},\bm{t}).$
Now, consider $C_{n}$. For $1\leq b\leq n$ and $i\in S$, recall the definition
of $Z_{S,n}(\bm{t},b,i)$ from (C.4) and define
$\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},b,i):=\frac{1}{n}\sum_{a=1}^{n}e^{-\iota\langle
t_{i},R_{\mu_{i}}(X_{i}^{(a)})\rangle}-e^{-\iota\langle
t_{i},R_{\mu_{i}}(X_{i}^{(b)})\rangle}.$
Then
$\displaystyle C_{n}$
$\displaystyle=\mathbb{E}[nZ_{S,n}(\bm{t})\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t})]$
$\displaystyle=\mathbb{E}\left[\frac{1}{n}\sum_{1\leq b,b^{\prime}\leq
n}\prod_{i\in
S}Z_{S,n}(\bm{t},b,i)\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},b^{\prime},i)\right]$
$\displaystyle=\prod_{i\in
S}\mathbb{E}[Z_{S,n}(\bm{t},1,i)\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},1,i)]+(n-1)\prod_{i\in
S}\mathbb{E}[Z_{S,n}(\bm{t},1,i)\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},2,i)],$
(E.3)
using the distributional symmetry of the rank maps. Note that for any
$i^{\prime}\in S$,
$\sum_{b=1}^{n}\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},b,i^{\prime})=0,$
hence,
$\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},1,i^{\prime})=-\sum_{b=2}^{n}\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},b,i^{\prime})$.
This implies,
$\displaystyle\mathbb{E}[Z_{S,n}(\bm{t},1,i^{\prime})\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},1,i^{\prime})]$
$\displaystyle=-\sum_{b=2}^{n}\mathbb{E}[Z_{S,n}(\bm{t},1,i^{\prime})\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},b,i^{\prime})]$
$\displaystyle=-(n-1)\mathbb{E}[Z_{S,n}(\bm{t},1,i^{\prime})\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},2,i^{\prime})].$
Hence,
$\displaystyle(n-1)\prod_{i\in
S}\mathbb{E}[Z_{S,n}(\bm{t},1,i)\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},2,i)]$
$\displaystyle=(n-1)\mathbb{E}[Z_{S,n}(\bm{t},1,i^{\prime})\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},2,i^{\prime})]\prod_{i\in
S\backslash\\{i^{\prime}\\}}\mathbb{E}[Z_{S,n}(\bm{t},1,i)\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},2,i)]$
$\displaystyle=-\mathbb{E}[Z_{S,n}(\bm{t},1,i^{\prime})\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},1,i^{\prime})]\prod_{i\in
S\backslash\\{i^{\prime}\\}}\mathbb{E}[Z_{S,n}(\bm{t},1,i)\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},2,i)]$
$\displaystyle=-\mathbb{E}[Z_{S,n}(\bm{t},1,i^{\prime})\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},1,i^{\prime})]\prod_{i\in
S\backslash\\{i^{\prime}\\}}\mathbb{E}[Z_{S,n}(\bm{t},1,i)]\mathbb{E}[\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},2,i)]=0.$
(E.4)
Combining (E.3) and (E.4) gives,
$\displaystyle
C_{n}=\mathbb{E}[nZ_{S,n}(\bm{t})\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t})]$
$\displaystyle=\prod_{i\in
S}\mathbb{E}[Z_{S,n}(\bm{t},1,i)\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},1,i)].$
(E.5)
To compute the limit of the RHS of (E.5) note that
$\displaystyle\left|Z_{S,n}(\bm{t},1,i)-\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},1,i)\right|$
$\displaystyle=\left|\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i}^{(u)})\rangle}-e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i}^{(1)})\rangle}-\frac{1}{n}\sum_{u=1}^{n}e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(u)})\rangle}+e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(1)})\rangle}\right|$
$\displaystyle\leq\frac{1}{n}\sum_{u=1}^{n}\bigg{|}e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i}^{(u)})\rangle}-e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(u)})\rangle}\bigg{|}+\bigg{|}e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i}^{(1)})\rangle}-e^{\iota\langle
t_{i},\hat{R}_{i}(X_{i}^{(1)})\rangle}\bigg{|}$
$\displaystyle\leq\frac{1}{n}\sum_{u=1}^{n}\left|\langle
t_{i},R_{\mu_{i}}(X_{i}^{(u)})-\hat{R}_{i}(X_{i}^{(u)})\rangle\right|+\left|\langle
t_{i},R_{\mu_{i}}(X_{i}^{(1)})-\hat{R}_{i}(X_{i}^{(1)})\rangle\right|$
$\displaystyle\leq\frac{1}{n}\sum_{u=1}^{n}\|t_{i}\|\|R_{\mu_{i}}(X_{i}^{(u)})-\hat{R}_{i}(X_{i}^{(u)})\|+\|t_{i}\|\|R_{\mu_{i}}(X_{i}^{(1)})-\hat{R}_{i}(X_{i}^{(1)})\|$
$\displaystyle\rightarrow 0,$
by [18, Theorem 2.1]. Moreover, since $Z_{S,n}(\bm{t},1,i)$ and
$\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},1,i)$ are uniformly bounded in $n$, by
the dominated convergence theorem,
$\lim_{n\rightarrow\infty}\mathbb{E}\left|Z_{S,n}(\bm{t},1,i)-\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},1,i)\right|=0.$
This implies
$\displaystyle\left|\mathbb{E}[Z_{S,n}(\bm{t},1,i)\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},1,i)]-\mathbb{E}[Z_{S,n}(\bm{t},1,i)\bar{Z}_{S,n}(\bm{t},1,i)]\right|$
$\displaystyle\leq\mathbb{E}\left[\left|Z_{S,n}(\bm{t},1,i)\right|\left|\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},1,i)-\bar{Z}_{S,n}(\bm{t},1,i)\right|\right]\rightarrow
0,$
as $n\rightarrow\infty$. Hence, from (E.5), arguing as in the proof of Lemma C.1,
$\displaystyle\lim_{n\rightarrow\infty}C_{n}=\lim_{n\rightarrow\infty}\prod_{i\in
S}\mathbb{E}[Z_{S,n}(\bm{t},1,i)\bar{Z}_{S,n}^{\mathrm{oracle}}(\bm{t},1,i)]=\lim_{n\rightarrow\infty}\prod_{i\in
S}\mathbb{E}[Z_{S,n}(\bm{t},1,i)\bar{Z}_{S,n}(\bm{t},1,i)]=C_{S}(\bm{t},\bm{t}).$
Similarly, we can show that
$\lim_{n\rightarrow\infty}D_{n}=C_{S}(\bm{t},\bm{t})$. Recalling (E.1) the
proof of Proposition 4.5 follows.
### E.2. Proof of Theorem 4.1
As in the proof of Theorem 3.7, denote $\kappa:=K|\mathcal{T}|$ and (recall
(C.8))
$\displaystyle{\bm{\mathcal{Z}}}_{n}:=\left(\left(Z_{S,n}(\bm{t}_{S}^{(1)}),Z_{S,n}(\bm{t}_{S}^{(2)}),\ldots,Z_{S,n}(\bm{t}_{S}^{(K)})\right)\right)_{S\in\mathcal{T}}.$
Define $\delta_{n}:=h/\sqrt{n}$ and
$\displaystyle
V_{n}:=\sum_{a=1}^{n}\log\left(\frac{f(\bm{X}_{a})}{\prod_{i=1}^{r}f_{X_{i}}(X_{i}^{(a)})}\right)=\sum_{a=1}^{n}\log\left(\frac{(1-\delta_{n})\prod_{i=1}^{r}f_{X_{i}}(X_{i}^{(a)})+\delta_{n}g(\bm{X}_{a})}{\prod_{i=1}^{r}f_{X_{i}}(X_{i}^{(a)})}\right),$
(E.6)
the log-likelihood ratio of the data for the hypothesis (4.1). To obtain
the limiting distribution of $\sqrt{n}{\bm{\mathcal{Z}}}_{n}$ under $H_{1}$ as
in (4.1), we invoke Le Cam’s third lemma (see [71, Example 6.7]), which
entails deriving the joint distribution of $(\sqrt{n}\bm{\mathcal{Z}}_{n},V_{n})$
under $H_{0}$. Note that by Proposition 4.5, it is enough to derive the joint
distribution of $(\sqrt{n}\bm{\mathcal{Z}}_{n}^{\mathrm{oracle}},V_{n})$ under
$H_{0}$. Also, it can be shown that
$\displaystyle\sqrt{n}Z_{S,n}^{\mathrm{oracle}}(\bm{t})-\sqrt{n}\tilde{Z}_{S,n}(\bm{t})\stackrel{{\scriptstyle
P}}{{\longrightarrow}}0,$ (E.7)
where
$\displaystyle\sqrt{n}\tilde{Z}_{S,n}(\bm{t}):=\frac{1}{\sqrt{n}}\sum_{u=1}^{n}\prod_{i\in
S}\left\\{\mathbb{E}\left[e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i})\rangle}\right]-e^{\iota\langle
t_{i},R_{\mu_{i}}(X_{i}^{(u)})\rangle}\right\\}.$ (E.8)
Therefore, we only need to obtain the limiting distribution
$(\sqrt{n}\tilde{\bm{\mathcal{Z}}}_{n},V_{n})$ under $H_{0}$, where
$\displaystyle\tilde{\bm{\mathcal{Z}}}_{n}:=((\tilde{Z}_{S,n}(\bm{t}_{S}^{(1)}),\ldots,\tilde{Z}_{S,n}(\bm{t}_{S}^{(K)})))_{S\in\mathcal{T}}.$
(E.9)
To this end, a Taylor expansion around $\delta=0$ in (E.6) gives,
$\displaystyle V_{n}=\dot{V}_{n}+o_{P}(1),$
where
$\dot{V}_{n}:=\frac{h}{\sqrt{n}}\sum_{a=1}^{n}\left(\frac{g(\bm{X}_{a})}{\prod_{i=1}^{r}f_{X_{i}}(X_{i}^{(a)})}-1\right)-\frac{h^{2}}{2}\mathbb{E}\left[\frac{g(\bm{X})}{\prod_{i=1}^{r}f_{X_{i}}(X_{i})}-1\right]^{2}.$
Therefore, under $H_{0}$, by the multivariate central limit theorem,
$\displaystyle(\sqrt{n}\tilde{\bm{\mathcal{Z}}}_{n},\dot{V}_{n})\stackrel{{\scriptstyle
D}}{{\rightarrow}}N_{\kappa}\left(\begin{pmatrix}\bm{0}\\\
-\tfrac{1}{2}\gamma\end{pmatrix},\begin{pmatrix}\bm{\mathcal{D}}&\bm{\mu}\\\
\bm{\mu}^{\top}&\gamma\\\ \end{pmatrix}\right),$ (E.10)
where
$\gamma:=h^{2}\mathbb{E}\left[\frac{g(\bm{X})}{\prod_{i=1}^{r}f_{X_{i}}(X_{i})}-1\right]^{2}$,
$\bm{\mathcal{D}}$ is a $\kappa\times\kappa$ block-diagonal matrix with
elements $C_{S}(\bm{t}^{(\ell)},\bm{t}^{(\ell^{\prime})})$ for
$\ell,\ell^{\prime}\in\\{1,2,\ldots,K\\}$ and $S\in\mathcal{T}$, and
$\displaystyle\bm{\mu}=((\mu_{S}(\bm{t}^{(1)}),\mu_{S}(\bm{t}^{(2)}),\ldots,\mu_{S}(\bm{t}^{(K)})))_{S\in\mathcal{T}},$
where $\mu_{S}(\cdot)$ is as defined in (4.3).
Then by Proposition 4.5 and (E.7), under $H_{0}$,
$(\sqrt{n}\bm{\mathcal{Z}}_{n},V_{n})$ converges to the same
limit as in (E.10). Hence, by Le Cam’s third lemma ([71, Example 6.7]), under
$H_{1}$,
$\displaystyle\sqrt{n}\bm{\mathcal{Z}}_{n}\stackrel{{\scriptstyle
D}}{{\rightarrow}}N_{\kappa}(\bm{\mu},\bm{\mathcal{D}})\stackrel{{\scriptstyle
D}}{{=}}\bm{\mathcal{Z}}+\bm{\mu},$
for $\bm{\mathcal{Z}}$ as defined in (C.14). This establishes the finite
dimensional convergence of the process $\sqrt{n}Z_{S,n}(\bm{t})$ under
$H_{1}$. Finally, since $H_{0}$ and $H_{1}$ are mutually contiguous, the
condition in (C.15) holds under $H_{1}$ as well. Hence, the process
$\\{\sqrt{n}Z_{S,n}(\bm{t}):S\in\mathcal{T}\\}$ converges weakly to
$\\{Z_{S}(\bm{t})+\mu_{S}(\bm{t}):S\in\mathcal{T}\\}$ and the result in (4.2)
follows by the continuous mapping theorem. This completes the proof of Theorem
4.1. $\Box$
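For reference, the version of Le Cam’s third lemma invoked above (see [71, Example 6.7]) can be summarized as follows:

```latex
\text{If, under } H_{0},\quad
\begin{pmatrix} T_{n} \\ \log L_{n} \end{pmatrix}
\stackrel{D}{\rightarrow}
N\!\left(\begin{pmatrix} \bm{0} \\ -\tfrac{1}{2}\sigma^{2} \end{pmatrix},
\begin{pmatrix} \Sigma & \tau \\ \tau^{\top} & \sigma^{2} \end{pmatrix}\right),
\quad\text{then, under } H_{1},\quad
T_{n} \stackrel{D}{\rightarrow} N(\tau,\Sigma).
```

In the application above, $T_{n}=\sqrt{n}\bm{\mathcal{Z}}_{n}$, $\Sigma=\bm{\mathcal{D}}$, $\tau=\bm{\mu}$, and $\sigma^{2}=\gamma$, which yields the stated limit $\bm{\mathcal{Z}}+\bm{\mu}$.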
### E.3. Proof of Theorem 4.4
To show Theorem 4.4, we will first show the contiguity between $H_{0}$ and
$H_{1}$ under the given assumptions. For this, denote by $\log(L_{n})$ the
log-likelihood ratio for the hypothesis (4.8), and define
$W_{n}=2\sum_{i=1}^{n}\bigg{\\{}\frac{f_{\bm{X}}^{\frac{1}{2}}(\bm{X}_{i}|\delta_{n})}{f_{\bm{X}}^{\frac{1}{2}}(\bm{X}_{i}|0)}-1\bigg{\\}}\text{
and }T_{n}=\delta_{n}\sum_{i=1}^{n}L^{\prime}(\bm{X}_{i}|0),$
where $\delta_{n}=h/\sqrt{n}$. To show the contiguity between $H_{0}$ and
$H_{1}$, we will use Le Cam’s second lemma [47, Corollary 12.3.1]. The
following lemma verifies the conditions required for applying Le Cam’s second
lemma.
###### Lemma E.1.
Denote $\delta_{n}=h/\sqrt{n}$ and suppose Assumption 3 holds. Then under $H_{0}$
the following conditions hold:
* $(a)$
$W_{n}=T_{n}-\tfrac{1}{4}\gamma+o_{P}(1)$, where
$\operatorname{Var}_{H_{0}}[T_{n}]=\gamma<\infty$.
* $(b)$
The summands of the likelihood ratio are uniformly asymptotically negligible, that is,
$\displaystyle\mathbb{P}_{n}\bigg{(}\bigg{|}\frac{f_{\bm{X}}(\bm{X}_{i}|\delta_{n})}{f_{\bm{X}}(\bm{X}_{i}|0)}-1\bigg{|}>\varepsilon\bigg{)}\rightarrow
0,$ for every $\varepsilon>0$.
The proof of Lemma E.1 is given in Section E.3.1. First we apply it to
complete the proof of Theorem 4.4. To this end, note that since
$T_{n}\stackrel{{\scriptstyle D}}{{\rightarrow}}N(0,\gamma)$, by Lemma E.1 (a)
$W_{n}\stackrel{{\scriptstyle D}}{{\rightarrow}}N(-\frac{1}{4}\gamma,\gamma)$.
Hence, by Lemma E.1 and Le Cam’s second lemma, the Konjin local alternative
$H_{1}$ in (4.8) is contiguous to $H_{0}$. Also, by Le Cam’s second lemma
we know that $\log(L_{n})-(T_{n}-\frac{1}{2}\gamma)=o_{P}(1)$. Therefore, as
in (E.10) under $H_{0}$, by the multivariate central limit theorem,
$\displaystyle(\sqrt{n}\tilde{\bm{\mathcal{Z}}}_{n},\log(L_{n}))\stackrel{{\scriptstyle
D}}{{\rightarrow}}N_{\kappa}\left(\begin{pmatrix}\bm{0}\\\
-\tfrac{1}{2}\gamma\end{pmatrix},\begin{pmatrix}\bm{\mathcal{D}}&\bm{\nu}\\\
\bm{\nu}^{\top}&\gamma\\\ \end{pmatrix}\right),$ (E.11)
where $\tilde{\bm{\mathcal{Z}}}_{n}$ is as in (E.9), $\bm{\mathcal{D}}$ as in
(E.10) and
$\displaystyle\bm{\nu}=((\nu_{S}(\bm{t}^{(1)}),\nu_{S}(\bm{t}^{(2)}),\ldots,\nu_{S}(\bm{t}^{(K)})))_{S\in\mathcal{T}},$
where $\nu_{S}(\cdot)$ is as defined in (4.10). The rest of the proof can now be
completed by arguments similar to the proof of Theorem 4.1.
#### E.3.1. Proof of Lemma E.1
We begin by computing the derivative of $f_{\bm{X}}(\bm{x}|\delta)$ (as
defined in (4.7)) with respect to $\delta$. This is stated in the following
lemma which follows by a direct computation.
###### Lemma E.2.
Consider the family of distributions
$\\{f_{\bm{X}}(\bm{x}|\delta)\\}_{\delta\in\Theta}$ as in (4.7). Then
$\displaystyle\frac{\partial}{\partial\delta}f_{\bm{X}}(\bm{x}|\delta)=-\frac{1}{|\det(\bm{A}_{\delta})|}\mathrm{tr}(-\bm{A}_{\delta}^{-1}P)f_{\bm{X}}(\bm{A}_{\delta}^{-1}\bm{x}|0)+\frac{1}{|\det(\bm{A}_{\delta})|}(\bm{A}_{\delta}^{-1}P\bm{A}_{\delta}^{-1}\bm{x})^{\top}\nabla
f_{\bm{X}}(\bm{x}|0),$
where
$P=-\frac{\partial}{\partial\delta}\bm{A}_{\delta}\Big{|}_{\delta=0}=\begin{pmatrix}\bm{I}_{d_{1}}&-\bm{M}_{1,2}&\cdots&-\bm{M}_{1,r}\\\
-\bm{M}_{2,1}&\bm{I}_{d_{2}}&\cdots&-\bm{M}_{2,r}\\\
\vdots&\vdots&\ddots&\vdots\\\
-\bm{M}_{r,1}&-\bm{M}_{r,2}&\cdots&\bm{I}_{d_{r}}\\\ \end{pmatrix},$
and $\nabla
f_{\bm{X}}(\bm{x}|0)$ denotes the gradient of $f_{\bm{X}}(\bm{x}|0)$ with respect to $\bm{x}$.
We begin with the proof of Lemma E.1 (a). From Assumption 3 it follows that
$\operatorname{Var}_{H_{0}}[T_{n}]=h^{2}\mathbb{E}_{H_{0}}[(L^{\prime}(\bm{X}|0))^{2}]<\infty$.
To prove $W_{n}=T_{n}-\tfrac{1}{4}\gamma+o_{P}(1)$, it suffices to show that
$\displaystyle\mathbb{E}_{H_{0}}[W_{n}]\rightarrow-\tfrac{1}{4}\gamma\quad\text{and}\quad\operatorname{Var}_{H_{0}}[W_{n}-T_{n}]\rightarrow
0,$ (E.12)
as $n\rightarrow\infty$. To this end, denote
$s(\bm{x})=f^{\frac{1}{2}}_{\bm{X}}(\bm{x}|0)$. Then $\nabla s(\bm{x})=(\nabla
f_{\bm{X}}(\bm{x}|0))/(2f_{\bm{X}}^{\frac{1}{2}}(\bm{x}|0))$ and
$\displaystyle\mathbb{E}_{H_{0}}[W_{n}]=2n\mathbb{E}_{H_{0}}[L^{\frac{1}{2}}(\bm{X}|\delta_{n})]-2n$
$\displaystyle=2n\mathbb{E}_{H_{0}}\left[\frac{|\bm{A}_{\delta_{n}}|^{-\frac{1}{2}}s(\bm{A}_{\delta_{n}}^{-1}\bm{X})}{s(\bm{X})}\right]-2n$
$\displaystyle=-h^{2}\int\bigg{\\{}\frac{|\bm{A}_{\delta_{n}}|^{-\frac{1}{2}}s(\bm{A}_{\delta_{n}}^{-1}\bm{x})-s(\bm{x})}{\delta_{n}}\bigg{\\}}^{2}\mathrm{d}x.$
To compute the limit of the integral in the RHS above we first show that given
the assumptions, the quadratic mean differentiability (QMD) condition is
satisfied for $f_{\bm{X}}(\bm{x}|\delta)$ around $\delta=0$. It is easy to
verify that $f_{\bm{X}}(\bm{x}|\delta)$ is continuously differentiable with
respect to $\delta$ around the neighborhood of $\delta=0$ for any
$\bm{x}\in\operatorname{supp}\\{f_{\bm{X}}(\bm{x}|0)\\}$. This can be seen
from the expression for $\partial f/\partial\delta$ in Lemma E.2, since
$f_{\bm{X}}(\bm{x}|0)$ is continuously differentiable by assumption.
Combined with Assumption 3, which guarantees the well-definedness and continuity of
$\mathbb{E}_{H_{0}}[(L^{\prime}(\bm{X}|\delta))^{2}]$ around $\delta=0$, it follows
that the QMD condition is satisfied at $\delta=0$. Then, by a Taylor
expansion,
$\displaystyle\frac{|\bm{A}_{\delta_{n}}|^{-\frac{1}{2}}s(\bm{A}_{\delta_{n}}^{-1}\bm{x})-s(\bm{x})}{\delta_{n}}=\frac{1}{2}L^{\prime}(\bm{x}|0)s(\bm{x})+\frac{r_{n}}{\delta_{n}}$
where $r_{n}=o(\delta_{n}^{2})$. By QMD, we know $\int
r_{n}^{2}\mathrm{d}x=o(\delta_{n}^{2})$. Therefore we have
$\displaystyle\bigg{|}\int\bigg{\\{}\frac{|\bm{A}_{\delta_{n}}|^{-\frac{1}{2}}s(\bm{A}_{\delta_{n}}^{-1}\bm{x})-s(\bm{x})}{\delta_{n}}\bigg{\\}}^{2}\mathrm{d}x-\int\bigg{\\{}\frac{1}{2}L^{\prime}(\bm{x}|0)s(\bm{x})\bigg{\\}}^{2}\mathrm{d}x\bigg{|}$
$\displaystyle=\bigg{|}2\int\bigg{\\{}\frac{1}{2}L^{\prime}(\bm{x}|0)s(\bm{x})\bigg{\\}}\frac{r_{n}}{\delta_{n}}\mathrm{d}x+\int\frac{r_{n}^{2}}{\delta_{n}^{2}}\mathrm{d}x\bigg{|}$
$\displaystyle\leq
2\sqrt{\int\bigg{\\{}\frac{1}{2}L^{\prime}(\bm{x}|0)s(\bm{x})\bigg{\\}}^{2}\mathrm{d}x\int\frac{r_{n}^{2}}{\delta_{n}^{2}}\mathrm{d}x}+\int\frac{r_{n}^{2}}{\delta_{n}^{2}}\mathrm{d}x$
$\displaystyle\rightarrow 0,$
using the third part of Assumption 3. This implies,
$\displaystyle\lim_{n\rightarrow\infty}\mathbb{E}_{H_{0}}[W_{n}]=-h^{2}\int\bigg{\\{}\frac{1}{2}L^{\prime}(\bm{x}|0)s(\bm{x})\bigg{\\}}^{2}\mathrm{d}x$
$\displaystyle=-\frac{h^{2}}{4}\mathbb{E}_{H_{0}}\left[(L^{\prime}(\bm{X}|0))^{2}\right]$
$\displaystyle=-\frac{h^{2}}{4}\operatorname{Var}_{H_{0}}\left[L^{\prime}(\bm{X}|0)\right]=-\frac{1}{4}\gamma,$
(E.13)
where the second-to-last equality uses
$\mathbb{E}_{H_{0}}\left[L^{\prime}(\bm{X}|0)\right]=0$, which follows from the QMD condition. This
completes the proof of the first condition in (E.12). To show the second
condition in (E.12) note that
$\displaystyle\operatorname{Var}_{H_{0}}[W_{n}-T_{n}]$
$\displaystyle=4h^{2}\operatorname{Var}_{H_{0}}\left[\frac{|\bm{A}_{\delta_{n}}|^{-\frac{1}{2}}s(\bm{A}_{\delta_{n}}^{-1}\bm{X})}{\delta_{n}s(\bm{X})}-\frac{1}{\delta_{n}}-\frac{1}{2}L^{\prime}(\bm{X}|0)\right]$
$\displaystyle\leq
4h^{2}\mathbb{E}_{H_{0}}\left[\left(\frac{|\bm{A}_{\delta_{n}}|^{-\frac{1}{2}}s(\bm{A}_{\delta_{n}}^{-1}\bm{X})}{\delta_{n}s(\bm{X})}-\frac{1}{\delta_{n}}-\frac{1}{2}L^{\prime}(\bm{X}|0)\right)^{2}\right]$
$\displaystyle=4h^{2}\int\bigg{\\{}\frac{|\bm{A}_{\delta_{n}}|^{-\frac{1}{2}}s(\bm{A}_{\delta_{n}}^{-1}\bm{x})-s(\bm{x})}{\delta_{n}}-\frac{1}{2}L^{\prime}(\bm{x}|0)s(\bm{x})\bigg{\\}}^{2}\mathrm{d}x.$
Following the same idea as in the proof of (E.13), the QMD condition gives
$\displaystyle\lim_{n\rightarrow\infty}\operatorname{Var}_{H_{0}}[W_{n}-T_{n}]$
$\displaystyle\leq
4h^{2}\lim_{n\rightarrow\infty}\int\bigg{\\{}\frac{|\bm{A}_{\delta_{n}}|^{-\frac{1}{2}}s(\bm{A}_{\delta_{n}}^{-1}\bm{x})-s(\bm{x})}{\delta_{n}}-\frac{1}{2}L^{\prime}(\bm{x}|0)s(\bm{x})\bigg{\\}}^{2}\mathrm{d}x$
$\displaystyle=\lim_{n\rightarrow\infty}4h^{2}\int\frac{r_{n}^{2}}{\delta_{n}^{2}}\mathrm{d}x$
$\displaystyle=0.$
This establishes the second condition in (E.12), and hence completes the
proof of Lemma E.1 (a).
Now, we prove Lemma E.1 (b). Define $Z:=L^{\prime}(\bm{X}|0)$ and
$\displaystyle
Z_{n}:=\frac{f_{\bm{X}}(\bm{X}|\delta_{n})-f_{\bm{X}}(\bm{X}|0)}{\delta_{n}f_{\bm{X}}(\bm{X}|0)}.$
By a Taylor’s expansion,
$\displaystyle
f^{\frac{1}{2}}_{\bm{X}}(\bm{x}|\delta_{n})=f^{\frac{1}{2}}_{\bm{X}}(\bm{x}|0)+\delta_{n}\frac{1}{2}L^{\prime}(\bm{x}|0)f^{\frac{1}{2}}_{\bm{X}}(\bm{x}|0)+r_{n},$
where $r_{n}=o(\delta_{n}^{2})$ and $\int r_{n}^{2}/\delta_{n}^{2}\,\mathrm{d}x=o(1)$.
This implies,
$\displaystyle\mathbb{E}_{H_{0}}[|Z_{n}|]$
$\displaystyle\leq\int\bigg{|}\frac{f_{\bm{X}}(\bm{x}|\delta_{n})-f_{\bm{X}}(\bm{x}|0)}{\delta_{n}}\bigg{|}\mathrm{d}x$
$\displaystyle\leq\int\bigg{|}\delta_{n}\bigg{\\{}\frac{1}{2}L^{\prime}(\bm{x}|0)f^{\frac{1}{2}}_{\bm{X}}(\bm{x}|0)\bigg{\\}}^{2}\bigg{|}\mathrm{d}x+2\int\bigg{|}r_{n}\frac{1}{2}L^{\prime}(\bm{x}|0)f^{\frac{1}{2}}_{\bm{X}}(\bm{x}|0)\bigg{|}\mathrm{d}x$
$\displaystyle\qquad+2\int\left|\frac{1}{2}f_{\bm{X}}(\bm{x}|0)L^{\prime}(\bm{x}|0)\right|\mathrm{d}x+2\int\bigg{|}\frac{r_{n}}{\delta_{n}}f^{\frac{1}{2}}_{\bm{X}}(\bm{x}|0)\bigg{|}dx+\int\bigg{|}\frac{r_{n}^{2}}{\delta_{n}}\bigg{|}\mathrm{d}x.$
Then, for $n$ large enough, the Cauchy-Schwarz inequality gives,
$\displaystyle\mathbb{E}_{H_{0}}[|Z_{n}|]$
$\displaystyle\leq\mathbb{E}_{H_{0}}\left[|L^{\prime}(\bm{X}|0)|\right]+o(1)<\infty.$
Therefore,
$\displaystyle\mathbb{P}\bigg{(}\bigg{|}\frac{f_{\bm{X}}(\bm{X}_{i}|\delta_{n})}{f_{\bm{X}}(\bm{X}_{i}|0)}-1\bigg{|}>\varepsilon\bigg{)}=\mathbb{P}\bigg{(}\bigg{|}\delta_{n}Z_{n}\bigg{|}>\varepsilon\bigg{)}\leq\frac{h\mathbb{E}(|Z_{n}|)}{\varepsilon\sqrt{n}}\rightarrow
0.$
This completes the proof of Lemma E.1 (b). $\Box$
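As an illustrative sanity check (not part of the proof), the expansion $W_{n}=T_{n}-\tfrac{1}{4}\gamma+o_{P}(1)$ of Lemma E.1 (a) can be verified numerically in the one-dimensional Gaussian shift family $f(x|\delta)=N(\delta,1)$, for which $L^{\prime}(x|0)=x$ and $\gamma=h^{2}$. The choice of family, sample size, and number of replications below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
h, n, reps = 1.0, 500, 20_000
delta = h / np.sqrt(n)                 # local alternative delta_n = h / sqrt(n)

X = rng.normal(size=(reps, n))         # reps independent samples under H_0
# For the N(delta, 1) shift family:
# sqrt(f(x|delta) / f(x|0)) = exp(delta * x / 2 - delta**2 / 4)
sqrt_ratio = np.exp(delta * X / 2 - delta**2 / 4)
W = 2 * (sqrt_ratio - 1).sum(axis=1)   # one realization of W_n per replication

gamma = h**2                           # gamma = h^2 * Var[L'(X|0)] = h^2 here
print(W.mean())                        # concentrates near -gamma / 4 = -0.25
```

The empirical mean of $W_{n}$ over replications lands close to $-\gamma/4$, matching condition (a) of the lemma.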
## Appendix F Proof of results in Section 5
This section is organized as follows: In Section F.1 we compute the gradient
for the objective in (5.6). The proof of Theorem 5.2 is given in Section F.2.
### F.1. Gradient Computation of $\mathrm{RJdCov}_{n}^{2}$
In practice, we use a gradient descent algorithm to solve the optimization
problem in (5.6). To compute the gradient of the objective function we use the
chain rule,
$\displaystyle\frac{\partial\mathrm{JdCov}^{2}_{n}(\bm{\tilde{F}}(\bm{Z}_{n}\bm{W}(\theta)^{\top});c)}{\partial\tilde{F}_{i}(W_{i}(\theta)^{\top}Z_{a})}=\frac{1}{n^{2}}\sum_{b=1}^{n}\prod_{i^{\prime}\neq
i}^{r}\big{\\{}\tilde{W}_{i^{\prime}}(a,b)+c\big{\\}}\frac{\partial\tilde{\mathcal{E}}_{i}(a,b)}{\partial\tilde{F}_{i}(W_{i}(\theta)^{\top}Z_{a})},$
where
$\bm{\tilde{F}}(\bm{Z}_{n}\bm{W}(\theta)^{\top})=(\tilde{F}_{1}(\bm{Z}W_{1}(\theta)),\ldots,\tilde{F}_{d}(\bm{Z}W_{d}(\theta)))$
and $\tilde{\mathcal{E}}_{i}(a,b)$ is obtained after replacing
$\hat{R}_{i}(X_{i}^{(a)})$ with $\tilde{F}_{i}(W_{i}(\theta)^{\top}Z_{a})$ and
$\hat{R}_{i}(X_{i}^{(b)})$ with $\tilde{F}_{i}(W_{i}(\theta)^{\top}Z_{b})$ in
the expression of $\hat{\mathcal{E}}_{i}(a,b)$ (recall (2.5)). Moreover,
$\partial\tilde{\mathcal{E}}_{i}(a,b)/\partial\tilde{F}_{i}(W_{i}(\theta)^{\top}Z_{a})$
can be approximated, after omitting an $O(1/n)$ term, as follows:
$\displaystyle\frac{1}{n}\sum_{v=1}^{n}\mathrm{sign}(\tilde{F}_{i}(W_{i}(\theta)^{\top}Z_{a})-\tilde{F}_{i}(W_{i}(\theta)^{\top}Z_{v}))-\mathrm{sign}(\tilde{F}_{i}(W_{i}(\theta)^{\top}Z_{a})-\tilde{F}_{i}(W_{i}(\theta)^{\top}Z_{b})),$
for all $\theta\in\bar{\Theta}$, where $\mathrm{sign}(x)$ equals 1 if $x>0$,
$-1$ if $x<0$, and 0 otherwise. In the optimization procedure, suppose the last
iterate is $\theta^{(k-1)}$; then we can compute
$\displaystyle\frac{\partial\tilde{F}_{i}(W_{i}(\theta)^{\top}Z_{a})}{\partial
W_{i}(\theta)}\bigg{|}_{\theta=\theta^{(k-1)}}=\frac{1}{n}\sum_{v=1}^{n}G^{\prime}\bigg{(}\frac{W_{i}^{\top}(\theta^{(k-1)})Z_{a}-W_{i}^{\top}(\theta^{(k-1)})Z_{v}}{h_{n}(i)}\bigg{)}\frac{Z_{a}}{h_{n}(i)},$
where $G^{\prime}$ is the derivative of the function $G$ in (5.7).
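To make the updates above concrete, the following minimal sketch implements a smoothed empirical rank $\tilde{F}_{i}$ with a logistic kernel and its gradient with respect to the projection direction. The logistic choice of $G$, the bandwidth, and all function names are illustrative assumptions (the paper's $G$ is defined in (5.7)); for simplicity the sketch differentiates through both the $Z_{a}$ and $Z_{v}$ terms of the chain rule, whereas the display above keeps only the $Z_{a}/h_{n}(i)$ factor:

```python
import numpy as np

def G(x):
    """Logistic smoothing kernel (an illustrative choice for G in (5.7))."""
    return 1.0 / (1.0 + np.exp(-x))

def smoothed_rank(w, Z, a, h):
    """F~(w^T Z_a) = (1/n) * sum_v G((w^T Z_a - w^T Z_v) / h)."""
    proj = Z @ w
    return np.mean(G((proj[a] - proj) / h))

def smoothed_rank_grad(w, Z, a, h):
    """Gradient of the smoothed rank with respect to w (full chain rule)."""
    proj = Z @ w
    u = (proj[a] - proj) / h
    Gp = G(u) * (1.0 - G(u))          # G'(u) for the logistic kernel
    return (Gp[:, None] * (Z[a] - Z) / h).mean(axis=0)
```

A finite-difference check confirms the gradient, which can then be plugged into any first-order update for $\theta^{(k)}$.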
### F.2. Proof of Theorem 5.2
Throughout this proof we will drop the dependence on $c$ from the notations of
$\mathrm{RJdCov}^{2}/\mathrm{RJdCov}^{2}_{n}$. First we will show that the
distribution function of $W_{i}(\theta)^{\top}Z$ is Lipschitz by showing the
density is uniformly bounded. This follows from the convolution formula for
the sum of independent random variables: if $X$ and $Y$ are two
independent random variables, then the density of $X+Y$ satisfies
$\displaystyle
f_{X+Y}(z)=\int_{-\infty}^{\infty}f_{Y}(z-x)f_{X}(x)\mathrm{d}x\leq\sup_{y}f_{Y}(y).$
Therefore, we know the density of $W_{i}(\theta)^{\top}Z$ is uniformly
bounded, hence the distribution function of $W_{i}(\theta)^{\top}Z$ is
Lipschitz.
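The bound $f_{X+Y}(z)\leq\sup_{y}f_{Y}(y)$ (and, by symmetry, $\leq\sup_{x}f_{X}(x)$) is easy to confirm numerically; the sketch below convolves a Gaussian and a Laplace density on a grid, where the particular densities and grid are arbitrary illustrative choices:

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
f_X = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # density of X ~ N(0, 1)
f_Y = 0.5 * np.exp(-np.abs(x))                 # density of Y ~ Laplace(0, 1)

# Riemann-sum approximation of f_{X+Y}(z) = \int f_Y(z - t) f_X(t) dt
f_sum = np.convolve(f_X, f_Y, mode="same") * dx

print(f_sum.max() <= min(f_X.max(), f_Y.max()))  # True
```

Since the sum's density is bounded by the smaller of the two suprema, the corresponding CDF is Lipschitz with that constant.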
Now suppose $\mathcal{D}$ is a metric satisfying the conditions in Theorem
5.2. Partition $\mathcal{SO}(r)$ into equivalence classes by identifying
elements at $\mathcal{D}$-distance zero. Let
$\mathcal{SO}(r)_{\mathcal{D}}$ be the quotient space
$\mathcal{SO}(r)/\mathcal{D}$ of these equivalence classes. Then
$\bm{W}=\bm{A}$ on $\mathcal{SO}(r)_{\mathcal{D}}$ if and only if
$\mathcal{D}(\bm{W},\bm{A})=0$. To establish the consistency of
$\hat{\theta}_{n}$ we will use the following result about the consistency of
$M$-estimators.
###### Theorem F.1 (Theorem 2.12 in [45]).
Assume for some $\theta_{0}\in\Theta$ that
$\liminf_{n\rightarrow\infty}M(\theta_{n})\geq M(\theta_{0})$ implies
$d(\theta_{n},\theta_{0})\rightarrow 0$ for any sequence
$\\{\theta_{n}\\}\subset\Theta$. Then, for a sequence of estimators
$\hat{\theta}_{n}\in\Theta$, if
$M_{n}(\hat{\theta}_{n})=\sup_{\theta\in\Theta}M_{n}(\theta)-o(1)$ and
$\sup_{\theta\in\Theta}|M_{n}(\theta)-M(\theta)|\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}0$, then
$d(\hat{\theta}_{n},\theta_{0})\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}0$.
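As a toy illustration of Theorem F.1 (deliberately simpler than the RJdCov objective), take $M_{n}(\theta)=-\frac{1}{n}\sum_{i}(X_{i}-\theta)^{2}$ with $X_{i}\sim N(\theta_{0},1)$: the population criterion $M(\theta)=-(\theta-\theta_{0})^{2}-1$ has a well-separated maximum at $\theta_{0}$, $M_{n}$ converges uniformly on compacts, and the grid maximizer converges to $\theta_{0}$. The grid, seed, and sample sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
theta0 = 2.0
grid = np.linspace(0.0, 4.0, 401)           # candidate parameter values

def M_n(X, thetas):
    """Empirical criterion M_n(theta) = -(1/n) * sum_i (X_i - theta)^2."""
    return -np.mean((X[:, None] - thetas[None, :]) ** 2, axis=0)

for n in (100, 10_000):
    X = rng.normal(theta0, 1.0, size=n)
    theta_hat = grid[np.argmax(M_n(X, grid))]
    print(n, abs(theta_hat - theta0))       # grid maximizer error
```

Here the maximizer of $M_{n}$ is the grid point nearest the sample mean, so the error is of order $n^{-1/2}$ plus the grid resolution.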
We will apply the above result with $M_{n}=\mathrm{RJdCov}^{2}_{n}$ and
$M=\mathrm{RJdCov}^{2}$. We begin by proving the uniform convergence of the
objective functions in the following lemma.
###### Lemma F.2.
Under Assumptions in Theorem 5.2, as $n\rightarrow\infty$,
$\displaystyle\sup_{\theta:W(\theta)\in\mathcal{SO}(r)_{\mathcal{D}}}\big{|}\mathrm{RJdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\theta)^{\top})-\mathrm{RJdCov}^{2}(\bm{W}(\theta)Z)\big{|}\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}0.$
###### Proof of Lemma F.2.
The proof of this lemma relies on the stochastic Arzelà-Ascoli lemma. To this
end, note that $\mathcal{SO}(r)_{\mathcal{D}}$, being a quotient of the compact
group $\mathcal{SO}(r)$, is compact. We have also shown in Theorem 2.12 that
$\mathrm{RJdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\theta)^{\top})\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}\mathrm{RJdCov}^{2}(\bm{W}(\theta)Z),$
for all $\theta$. Therefore, to apply the stochastic Arzelà-Ascoli lemma we need
to check the stochastic equicontinuity condition, that is, we need to prove
$\displaystyle\lim_{c\rightarrow\infty}\limsup_{n\rightarrow\infty}m_{1/c}(\mathrm{RJdCov}^{2}_{n})=0\quad\text{almost
surely},$
where
$\displaystyle
m_{1/c}(\mathrm{RJdCov}^{2}_{n})=\sup\\{|\mathrm{RJdCov}^{2}_{n}$
$\displaystyle(\bm{Z}_{n}\bm{W}(\theta)^{\top})-\mathrm{RJdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\psi)^{\top})|:$
$\displaystyle\bm{W}(\theta),\bm{W}(\psi)\in\mathcal{SO}(r)_{\mathcal{D}},\|\bm{W}(\theta)-\bm{W}(\psi)\|_{F}<1/c\\}.$
Consider
$\displaystyle|\mathrm{RJdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\theta)^{\top})-\mathrm{RJdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\psi)^{\top})|$
$\displaystyle\leq c^{d-2}\sum_{1\leq i_{1}<i_{2}\leq
r}\bigg{|}\mathrm{RdCov}^{2}_{n}(Z_{i_{1}}^{\top}W_{i_{1}}(\theta),Z_{i_{2}}^{\top}W_{i_{2}}(\theta))-\mathrm{RdCov}^{2}_{n}(Z_{i_{1}}^{\top}W_{i_{1}}(\psi),Z_{i_{2}}^{\top}W_{i_{2}}(\psi))\bigg{|}$
$\displaystyle\qquad\qquad\qquad+\cdots+\big{|}\mathrm{RdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\theta)^{\top})-\mathrm{RdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\psi)^{\top})\big{|}.$
Therefore, we only need to prove that
$\displaystyle\lim_{c\rightarrow\infty}\limsup_{n\rightarrow\infty}m_{1/c}(\mathrm{RdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}^{\top}_{\mathcal{S}}))=0,$
for all $\mathcal{S}\subset\\{1,\ldots,r\\}$, where $\bm{W}_{\mathcal{S}}$ is
the row submatrix of $\bm{W}$ with rows indexed by $\mathcal{S}$ and
$\displaystyle
m_{1/c}(\mathrm{RdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}^{\top}_{\mathcal{S}}))$
$\displaystyle=\sup\\{|\mathrm{RdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}_{\mathcal{S}}(\theta)^{\top})-\mathrm{RdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}_{\mathcal{S}}(\psi)^{\top})|:$
$\displaystyle\hskip
61.42993pt\bm{W}(\theta),\bm{W}(\psi)\in\mathcal{SO}(r)_{\mathcal{D}},\|\bm{W}(\theta)-\bm{W}(\psi)\|_{F}<1/c\\}.$
Now, consider the telescoping sum
$\displaystyle|\mathrm{RdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}_{\mathcal{S}}(\theta)^{\top})-\mathrm{RdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}_{\mathcal{S}}(\psi)^{\top})|$
$\displaystyle=\bigg{|}\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\prod_{i\in\mathcal{S}}\hat{\mathcal{E}}_{i}(a,b;\theta)-\frac{1}{n^{2}}\sum_{1\leq
a,b\leq n}\prod_{i\in\mathcal{S}}\hat{\mathcal{E}}_{i}(a,b;\psi)\bigg{|}$
$\displaystyle=\bigg{|}\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\prod_{i\in\mathcal{S}}\hat{\mathcal{E}}_{i}(a,b;\theta)-\frac{1}{n^{2}}\sum_{1\leq
a,b\leq
n}\hat{\mathcal{E}}_{i_{1}}(a,b;\psi)\prod_{i\in\mathcal{S}\backslash\\{i_{1}\\}}\hat{\mathcal{E}}_{i}(a,b;\theta)$
$\displaystyle+\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\hat{\mathcal{E}}_{i_{1}}(a,b;\psi)\prod_{i\in\mathcal{S}\backslash\\{i_{1}\\}}\hat{\mathcal{E}}_{i}(a,b;\theta)-\frac{1}{n^{2}}\sum_{1\leq
a,b\leq
n}\hat{\mathcal{E}}_{i_{1}}(a,b;\psi)\hat{\mathcal{E}}_{i_{2}}(a,b;\psi)\prod_{i\in\mathcal{S}\backslash\\{i_{1},i_{2}\\}}\hat{\mathcal{E}}_{i}(a,b;\theta)$
$\displaystyle+\cdots+$ $\displaystyle+\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\hat{\mathcal{E}}_{i_{|\mathcal{S}|}}(a,b;\theta)\prod_{i\in\mathcal{S}\backslash\\{i_{|\mathcal{S}|}\\}}\hat{\mathcal{E}}_{i}(a,b;\psi)-\frac{1}{n^{2}}\sum_{1\leq
a,b\leq n}\prod_{i\in\mathcal{S}}\hat{\mathcal{E}}_{i}(a,b;\psi)\bigg{|}$
$\displaystyle\leq C\sum_{i=1}^{r}\bigg{|}\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\big{\\{}\hat{\mathcal{E}}_{i}(a,b;\theta)-\hat{\mathcal{E}}_{i}(a,b;\psi)\big{\\}}\bigg{|}$
where $C$ is a universal constant and the last inequality holds because each
$\hat{\mathcal{E}}_{i}(a,b)$ is uniformly bounded. Next, consider
$\displaystyle\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\big{\\{}\hat{\mathcal{E}}_{i}(a,b;\theta)-\hat{\mathcal{E}}_{i}(a,b;\psi)\big{\\}}$
$\displaystyle\leq\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\frac{1}{n}\sum_{v=1}^{n}\big{|}\hat{R}_{i}(W_{i}^{\top}(\theta)Z_{a})-\hat{R}_{i}(W_{i}^{\top}(\psi)Z_{a})+\hat{R}_{i}(W_{i}^{\top}(\psi)Z_{v})-\hat{R}_{i}(W_{i}(\theta)^{\top}Z_{v})\big{|}$
$\displaystyle+\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\frac{1}{n}\big{|}\hat{R}_{i}(W_{i}^{\top}(\theta)Z_{a})-\hat{R}_{i}(W_{i}^{\top}(\psi)Z_{a})+\hat{R}_{i}(W_{i}^{\top}(\psi)Z_{b})-\hat{R}_{i}(W_{i}^{\top}(\theta)Z_{b})\big{|}$
$\displaystyle+\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\frac{1}{n}\sum_{u=1}^{n}\big{|}\hat{R}_{i}(W_{i}^{\top}(\theta)Z_{b})-\hat{R}_{i}(W_{i}^{\top}(\psi)Z_{b})+\hat{R}_{i}(W_{i}^{\top}(\psi)Z_{u})-\hat{R}_{i}(W_{i}(\theta)^{\top}Z_{u})\big{|}$
$\displaystyle+\frac{1}{n^{2}}\sum_{u,v=1}^{n}\big{|}\hat{R}_{i}(W_{i}^{\top}(\theta)Z_{u})-\hat{R}_{i}(W_{i}^{\top}(\psi)Z_{u})+\hat{R}_{i}(W_{i}^{\top}(\psi)Z_{v})-\hat{R}_{i}(W_{i}^{\top}(\theta)Z_{v})\big{|}$
$\displaystyle\leq\frac{8}{n}\sum_{a=1}^{n}\big{|}\hat{R}_{i}(W_{i}(\theta)^{\top}Z_{a})-R_{\mu_{i}}(W_{i}(\theta)^{\top}Z_{a})\big{|}+\frac{8}{n}\sum_{a=1}^{n}\big{|}\hat{R}_{i}(W_{i}^{\top}(\psi)Z_{a})-R_{\mu_{i}}(W_{i}^{\top}(\psi)Z_{a})\big{|}$
$\displaystyle+\frac{8}{n}\sum_{a=1}^{n}\big{|}R_{\mu_{i}}(W_{i}(\theta)^{\top}Z_{a})-R_{\mu_{i}}(W_{i}^{\top}(\psi)Z_{a})\big{|}.$
As $n\rightarrow\infty$, the first and second terms go to zero by
the consistency of the empirical ranks. For the last term we use
the Lipschitzness of $R_{\mu_{i}}(\cdot)$, which holds because $R_{\mu_{i}}(\cdot)$
is the CDF of $W_{i}(\theta)^{\top}Z$, whose density is uniformly bounded.
Therefore, if $0<K<\infty$ is the bound on the density of $W_{i}(\theta)^{\top}Z$,
$\displaystyle\frac{1}{n^{2}}\sum_{1\leq a,b\leq
n}\big{\\{}\hat{\mathcal{E}}_{i}(a,b;\theta)-\hat{\mathcal{E}}_{i}(a,b;\psi)\big{\\}}$
$\displaystyle\leq\frac{8K}{n}\sum_{k=1}^{n}\big{|}W_{i}(\theta)^{\top}Z_{k}-W_{i}^{\top}(\psi)Z_{k}\big{|}$
$\displaystyle\leq
8K\|W_{i}(\theta)-W_{i}(\psi)\|\frac{1}{n}\sum_{k=1}^{n}\|Z_{k}\|.$
Then, by the strong law of large numbers, we know
$\frac{1}{n}\sum_{k=1}^{n}\|Z_{k}\|\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}\mathbb{E}(\|Z\|)<\infty$. Therefore we can conclude that
$\displaystyle\lim_{c\rightarrow\infty}\limsup_{n\rightarrow\infty}m_{1/c}(\mathrm{RdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}^{\top}_{\mathcal{S}}))=0,$
for all $\mathcal{S}\subset\\{1,\ldots,r\\}$. This completes the proof of Lemma
F.2. ∎
To complete the proof of Theorem 5.2, recall the definition of $\theta_{0}$
(from (5.4)) and $\hat{\theta}_{n}$ (from (5.5)) and note that
$\displaystyle\mathrm{RJdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\hat{\theta}_{n})^{\top})\leq\mathrm{RJdCov}_{n}^{2}(\bm{Z}_{n}\bm{W}(\theta_{0})^{\top})\text{
and
}\mathrm{RJdCov}^{2}(\bm{W}(\theta_{0})Z)\leq\mathrm{RJdCov}^{2}(\bm{W}(\hat{\theta}_{n})Z).$
Therefore, we have
$\displaystyle\mathrm{RJdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\theta_{0})^{\top})-\mathrm{RJdCov}^{2}(\bm{W}(\theta_{0})Z)$
$\displaystyle\geq\mathrm{RJdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\hat{\theta}_{n})^{\top})-\mathrm{RJdCov}^{2}(\bm{W}(\theta_{0})Z)$
$\displaystyle\geq\mathrm{RJdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\hat{\theta}_{n})^{\top})-\mathrm{RJdCov}^{2}(\bm{W}(\hat{\theta}_{n})Z).$
This implies,
$\displaystyle|\mathrm{RJdCov}_{n}^{2}(\bm{Z}_{n}\bm{W}(\hat{\theta}_{n})^{\top})-\mathrm{RJdCov}^{2}(\bm{W}(\theta_{0})Z)|$
$\displaystyle\leq\max\\{|\mathrm{RJdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\theta_{0})^{\top})-\mathrm{RJdCov}^{2}(\bm{W}(\theta_{0})Z)|,|\mathrm{RJdCov}^{2}_{n}(\bm{Z}_{n}\bm{W}(\hat{\theta}_{n})^{\top})-\mathrm{RJdCov}^{2}(\bm{W}(\hat{\theta}_{n})Z)|\\}$
$\displaystyle\leq\sup_{\theta:W(\theta)\in\mathcal{SO}(r)_{\mathcal{D}}}\big{|}\mathrm{RJdCov}_{n}^{2}(\bm{Z}_{n}\bm{W}(\theta)^{\top})-\mathrm{RJdCov}^{2}(\bm{W}(\theta)Z)\big{|}\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}0.$
Then, since $\mathrm{RJdCov}^{2}(\bm{W}(\theta)Z)$ is Lipschitz continuous
with respect to $\theta$, Theorem F.1 implies that
$\mathcal{D}(\bm{W}(\hat{\theta}_{n}),\bm{W}(\theta_{0}))\stackrel{{\scriptstyle
a.s.}}{{\rightarrow}}0$ for
$\bm{W}(\theta_{0})\in\mathcal{SO}(r)_{\mathcal{D}}$. Since the map
$\bm{W}(\theta)\mapsto\theta$ is continuous by the assumption, we conclude the
result in Theorem 5.2. $\Box$
[a] School Of Physical Sciences, Indian Association for the Cultivation of Science, 2A and 2B, Raja S.C. Mullick Road, Kolkata 700032, India
[b] Harish-Chandra Research Institute, A CI of Homi Bhabha National Institute, Chhatnag Road, Jhunsi, Prayagraj (Allahabad) 211019, India
[c] Regional Centre for Accelerator-based Particle Physics, Harish-Chandra Research Institute, Prayagraj (Allahabad) 211019, India
# Interplay among gravitational waves, dark matter and collider signals in the
singlet scalar extended type-II seesaw model
Purusottam Ghosh,[a] Tathagata Ghosh[b,c] and Subhojit Roy[b,c]
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
We study the prospect of simultaneous explanation of tiny neutrino masses,
dark matter (DM), and the observed baryon asymmetry of the Universe in a
$Z_{3}$-symmetric complex singlet scalar extended type-II seesaw model. The
complex singlet scalar plays the role of DM. Analyzing the thermal history of
the model, we identify the region of the parameter space that can generate a
first-order electroweak phase transition (FOEWPT) in the early Universe, and
the resulting stochastic gravitational waves (GW) can be detected at future
space/ground-based GW experiments. First, we find that light triplet scalars
do favor an FOEWPT. In our study, we choose the type-II seesaw part of the
parameter space in such a way that light triplet scalars, especially the
doubly charged ones, evade the strong bounds from their canonical searches at
the Large Hadron Collider (LHC). However, the relevant part of the parameter
space, where FOEWPT can happen only due to strong SM doublet-triplet
interactions, is in tension with the SM-like Higgs decay to a pair of photons,
which has already excluded the bulk of this parameter space. On the other
hand, the latest spin-independent DM direct detection constraints from
XENON-1T and PandaX-4T eliminate a significant amount of parameter space
relevant for the dark-sector-assisted FOEWPT scenarios, and such an FOEWPT is only
possible when the complex scalar DM is significantly underabundant. In short,
we conclude from our analysis that the absence of new physics at the HL-LHC
and/or various DM experiments in the near future will severely limit the
prospects of detecting a stochastic GW at future GW experiments and will
exclude the possibility of electroweak baryogenesis within this model.
###### Keywords:
Beyond Standard Model, Cosmology of Theories beyond the Standard Model,
Electroweak Phase transition, Gravitational wave, Dark Matter
Preprint: HRI-RECAPP-2022-013
## 1 Introduction
The discovery of the Higgs Boson at the Large Hadron Collider (LHC) around 125
GeV ATLAS:2012yve ; CMS:2012qbp was a breakthrough moment in our
understanding of the laws of nature, completing the particle content of the
Standard Model (SM). Despite the unparalleled successes of the SM, it cannot
be a complete theory of nature as it fails to explain several phenomena in
nature. Among many other issues, the SM cannot explain either the observed
baryon asymmetry of the Universe (BAU) or tiny neutrino masses, in addition to
not providing a good dark matter (DM) candidate.
In order to generate baryon asymmetry in any model, one needs to satisfy the
three well-known Sakharov conditions Sakharov:1967dj . One of the conditions,
i.e., the out-of-equilibrium condition can be achieved if the electroweak
phase transition (EWPT) in the SM had been a strong first-order one. However,
after the discovery of the 125 GeV Higgs, one can conclude that it is only a
smooth crossover. Therefore, the SM cannot satisfy all the Sakharov’s
conditions. Going beyond the SM (BSM), with the addition of new scalars, one
can modify the scalar potential so that a first-order electroweak phase
transition (FOEWPT) can occur in the early Universe. Then one can generate the
observed BAU via the electroweak baryogenesis (EWBG) mechanism Trodden:1998ym
; Anderson:1991zb ; Huet:1995sh ; Morrissey:2012db .
On the other hand, several astrophysical and cosmological data, including
rotation curves of galaxies, the bullet cluster, gravitational lensing, and
the anisotropy of the cosmic microwave background Hu:2001bc , provide strong
motivation for the presence of DM in the Universe. From analyses of the CMB
anisotropies Hu:2001bc by the WMAP WMAP:2006bqn and PLANCK
Planck:2018vyg experiments, one finds that around a quarter of the energy
density of the Universe is composed of non-baryonic, non-luminous matter whose
only known interaction is gravitational. Apart from its abundance, which is
precisely measured by the PLANCK experiment to be $\Omega_{\rm
DM}h^{2}=0.120\pm 0.001$ Planck:2018vyg , the nature of DM and its non-
gravitational interactions are still unknown to us.
At the same time, the evidence of neutrino oscillations Kajita:2016cak ;
McDonald:2016ixn has opened up a new frontier in particle physics, requiring
small neutrino masses and, consequently, offering further evidence for
phenomena beyond the SM. Various seesaw mechanisms, such as type-I
Minkowski:1977sc ; Yanagida:1979as ; Mohapatra:1979ia , type-II
Konetschny:1977bn ; Magg:1980ut ; Lazarides:1980nt ; Schechter:1980gr ;
Cheng:1980qt ; Bilenky:1980cx etc., are the most popular way to generate tiny
neutrino masses. In this article, we focus on the type-II seesaw only, where the
SM scalar sector is extended by an $SU(2)$ triplet ($\Delta$) with hypercharge
$Y=2$. Neutrinos obtain masses after the electroweak symmetry breaking (EWSB)
when the neutral component of the triplet acquires an induced vacuum
expectation value (${\it vev}$).
We extend the type-II seesaw model by an extra complex singlet ($S$) and
further impose a discrete $Z_{3}$ symmetry on the model, due to which $S$
becomes a stable DM candidate. The presence of the triplet and the complex
singlet in the model and their interaction with the SM Higgs doublet can
modify the Higgs potential in such a way that a strong first-order electroweak
phase transition (SFOEWPT) is feasible in the early Universe. Hence, by
proposing this economical extension of the SM, we attempt to alleviate three
of its major shortcomings.
The non-observation of DM in its direct search experiments has severely
constrained the simplest weakly interacting massive particle (WIMP)-like DM
scenarios. For instance, DM direct searches impose stringent limits on the
parameter space of the simplest SM Higgs portal scalar DM Casas:2017jjg ;
Bhattacharya:2017fid . However, the situation can drastically change in the
presence of new DM annihilation channels, and one can evade those constraints.
In this model, DM annihilation through the portal interaction of DM
with the triplet sector, together with the $Z_{3}$-symmetry-induced DM semi-annihilation
channels, allows a significant amount of permissible parameter space for DM that
satisfies the observed DM relic density constraint from the PLANCK experiment
Planck:2018vyg , the DM direct detection (DMDD) limits from XENON-1T
Aprile:2018dbl and PANDAX-4T PandaX-4T:2021bab , and the DM indirect search
bounds from FERMI-LAT Fermi-LAT:2017bpc and MAGIC MAGIC:2016xys .
A natural consequence of a first-order phase transition (FOPT) in the early
Universe is the generation of relic gravitational waves (GW) via the
nucleation of bubbles of the broken phase Apreda:2001us ; Grojean:2004xa ;
Weir:2017wfa ; Alves:2018oct ; Alves:2018jsw ; Alves:2019igs ; Alves:2020bpi ;
Chatterjee:2022pxf ; Caprini:2019egz ; Witten:1984rs ; Hogan:1986qda ;
Ellis:2018mja ; Alanne:2019bsm . Such a GW signal can be detected in LISA
LISA:2017pwj , or other proposed ground-based and space-borne experiments,
viz., ALIA Gong:2014mca , Taiji Hu:2017mde , TianQin TianQin:2015yph , aLigo+
Harry:2010zz , Big Bang Observer (BBO) Corbin:2005ny and Ultimate(U)-DECIGO
Kudoh:2005as , if the amplitude is high enough. These studies have received more
attention since the observation of GWs by the LIGO and VIRGO collaborations
LIGOScientific:2016aoc ; LIGOScientific:2017vwq ; LIGOScientific:2018mvr ;
LIGOScientific:2020ibl . Note that the first evidence of stochastic GWs has
been announced very recently by NanoGrav NANOGrav:2023gor and EPTA
EPTA:2023fyk collaborations. This discovery provides a new opportunity to
explore physics scenarios beyond the SM. Certain regions of the parameter
space of the present model can be probed via various future GW experiments due
to the production of the GW as a result of an FOEWPT in the scalar sector.
In this paper, we study the interplay among the three sectors mentioned above,
i.e. the production of GW resulting from FOEWPT, the DM prospect, and the LHC
probes of this model. We perform a dedicated scan of the motivated region of
the parameter space. Thereafter, we impose various theoretical constraints,
such as boundedness of the scalar potential from below, electroweak vacuum
stability at zero temperature, perturbativity, unitarity, as well as the
relevant experimental constraints coming from the neutrino sector, flavor
sector, electroweak precision study, dark sector and the heavy Higgs searches
at the LHC. We allow minimal mixing between SM-like Higgs and the triplet-like
neutral scalar to comply with the Higgs signal strength bounds. We further
study the dependencies of the decay width of the SM Higgs into a photon pair
on the model parameters. We then discuss the overall phenomenology of a
complex scalar DM in the presence of the scalar triplet and the SM doublet and
the novel effect of $Z_{3}-$symmetry induced semi-annihilation processes. We
split the study of the FOPT of this model into two parts. First, we examine
the parameter space, where an FOEWPT is facilitated by the triplet in
conjunction with the SM doublet. Next, we study the feasibility of generating
an FOEWPT with the help of the dark sector. Thereafter, we establish a
correlation between the precise measurement of the observed Higgs boson
properties with the detection prospect of the produced GW when the doublet-
triplet interactions are strong. Then we analyze the connection between a
large DM direct detection spin-independent (DMDDSI) cross-section and an
FOEWPT due to significant interaction between DM and the SM Higgs doublet. To
illustrate such correlations, we present a few benchmark points in this work
and discuss the patterns of phase transitions in each case followed by
thorough discussions on the detection prospect at various GW detectors, the
signal-to-noise ratio (SNR) for LISA and possible complementing searches at
the HL-LHC and future DM direct-detection experiments.
The paper is organized as follows: In section 2, we briefly discuss the
theoretical framework of the model, including the interaction pieces of the
Lagrangian, the WIMP DM production mechanism and its possible detection
prospects, the study of EWPT and the production of GW from it. The relevant
theoretical and experimental constraints on the model parameter space are
outlined in section 3. The motivated region of parameter space for this study
has been discussed in section 4. In section 5, we present the results of this
work. Finally, we summarize the outcome of our discussion in section 6.
Various tadpole relations, the field-dependent masses of the scalar and
fermionic degrees of freedom with thermal corrections (daisy coefficients),
the Feynman diagrams for DM annihilation, and the various constraints are
presented in the appendix.
## 2 The theoretical framework
### 2.1 The Model
As discussed above, despite its many successes, the SM cannot explain the
smallness of neutrino masses or the observed matter-antimatter asymmetry, nor
can it accommodate a particle DM candidate, among other shortcomings. To
address these three issues, we consider a model that introduces an additional
discrete $Z_{3}$ symmetry and enlarges the particle content by adding an
$SU(2)_{L}$ scalar triplet, $\Delta$, with hypercharge $Y=2$, and a complex
scalar singlet, $S$, which transforms non-trivially under $Z_{3}$ Yang:2021buu
. The quantum numbers of the BSM scalars in our model, along with the SM Higgs
doublet, under the extended gauge group $SU(3)_{C}\times SU(2)_{L}\times
U(1)_{Y}\times Z_{3}$ are tabulated in Table 1.
Fields | Composition | $(SU(3)_{C},\,SU(2)_{L},\,U(1)_{Y},\,Z_{3})$
---|---|---
Complex scalar DM | $S=\frac{1}{\sqrt{2}}(h_{s}+ia_{s})$ | $(1,\,1,\,0,\,e^{i\frac{2\pi}{3}})$
Scalar triplet | $\Delta=\left(\begin{matrix}\frac{\Delta^{+}}{\sqrt{2}}&\Delta^{++}\\\ \frac{1}{\sqrt{2}}\big{(}h_{t}+ia_{t}\big{)}&-\frac{\Delta^{+}}{\sqrt{2}}\end{matrix}\right)$ | $(1,\,3,\,2,\,1)$
Higgs doublet | $H=\left(\begin{matrix}G^{+}\\\ \frac{1}{\sqrt{2}}\big{(}h_{d}+ia_{d}\big{)}\end{matrix}\right)$ | $(1,\,2,\,1,\,1)$
Table 1: Charge assignments of the scalar fields in the model under the gauge
group $\mathcal{G}\equiv\mathcal{G}_{\rm SM}\otimes Z_{3}$ where
$\mathcal{G}_{\rm SM}\equiv SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}$.
Hypercharge ($Y$) of the field is obtained by using the relation:
$Q=I_{3}+\frac{Y}{2}$, where $I_{3}$ is the third component of isospin and $Q$
is the electromagnetic charge.
The part of the Lagrangian relevant for our study in this paper is given
below:
$\displaystyle\mathcal{L}\supset\left(D^{\mu}H\right)^{\dagger}\left(D_{\mu}H\right)+{\rm
Tr}\left[\left(D^{\mu}\Delta\right)^{\dagger}\left(D_{\mu}\Delta\right)\right]-V(\Delta,H)+\mathcal{L}^{\rm
Yukawa}+\mathcal{L}^{\rm DM},$ (1)
where
$\displaystyle D_{\mu}H$ $\displaystyle=$
$\displaystyle\Big{(}\partial_{\mu}-ig_{2}\frac{\sigma^{a}}{2}W_{\mu}^{a}-ig_{1}\frac{Y_{H}}{2}B_{\mu}\Big{)}H,$
$\displaystyle D_{\mu}\Delta$ $\displaystyle=$
$\displaystyle\partial_{\mu}\Delta-
ig_{2}\left[\frac{\sigma^{a}}{2}W_{\mu}^{a},\Delta\right]-{ig_{1}}\frac{Y_{\Delta}}{2}B_{\mu}\Delta~{}.$
(2)
The most general scalar potential involving the SM Higgs doublet and the
scalar triplet can be written as Arhrib:2011uy ; Yang:2021buu :
$\displaystyle V(H,\Delta)$ $\displaystyle=$
$\displaystyle-\mu_{H}^{2}(H^{\dagger}H)+\lambda_{H}(H^{\dagger}H)^{2}~{}$ (3)
$\displaystyle+\mu_{\Delta}^{2}{\rm
Tr}\left[\Delta^{\dagger}\Delta\right]+\lambda_{1}\left(H^{\dagger}H\right){\rm
Tr}\left[\Delta^{\dagger}\Delta\right]+\lambda_{2}\left({\rm
Tr}[\Delta^{\dagger}\Delta]\right)^{2}$ $\displaystyle+\lambda_{3}~{}{\rm
Tr}[\left(\Delta^{\dagger}\Delta\right)^{2}]+\lambda_{4}~{}\left(H^{\dagger}\Delta\Delta^{\dagger}H\right)+\left[\mu\left(H^{T}i\sigma^{2}\Delta^{\dagger}H\right)+h.c.\right]~{}~{}.$
Note that in the above potential $\mu_{\Delta}^{2}>0$, so the Higgs triplet
alone cannot trigger spontaneous symmetry breaking (SSB). As in the SM, EWSB
occurs when the doublet, $H$, develops a vacuum expectation value (VEV),
$\langle{H}\rangle={v_{d}}/{\sqrt{2}}$. However, the presence of the cubic
term $H^{T}i\sigma^{2}\Delta^{\dagger}H$ induces a non-vanishing VEV for the
triplet, $\langle{\Delta}\rangle=v_{t}/\sqrt{2}$, after EWSB. The scalar
fields $H$ and $\Delta$ can then be represented as:
$\displaystyle H=\left(\begin{matrix}G^{+}\\\
\frac{h_{d}+v_{d}+ia_{d}}{\sqrt{2}}\end{matrix}\right)~{},~{}~{}\Delta=\left(\begin{matrix}\frac{\Delta^{+}}{\sqrt{2}}&\Delta^{++}\\\
\frac{h_{t}+v_{t}+ia_{t}}{\sqrt{2}}&-\frac{\Delta^{+}}{\sqrt{2}}\end{matrix}\right).$
(4)
Note that $\sqrt{v_{d}^{2}+2v_{t}^{2}}=v=246$ GeV, and in the alignment limit
$v_{t}\ll v_{d}$. Minimizing the scalar potential at the vacuum ($v_{d}$ and
$v_{t}$), the bare mass parameters of the scalar potential can be expressed in
terms of the other free parameters, yielding the conditions:
$\displaystyle\mu_{H}^{2}$ $\displaystyle=$
$\displaystyle-\frac{2M_{\Delta}^{2}v_{t}^{2}}{v_{d}^{2}}+\frac{1}{2}v_{t}^{2}(\lambda_{1}+\lambda_{4})+\lambda_{H}v_{d}^{2}$
$\displaystyle\mu_{\Delta}^{2}$ $\displaystyle=$ $\displaystyle
M_{\Delta}^{2}-v_{t}^{2}(\lambda_{2}+\lambda_{3})-\frac{1}{2}v_{d}^{2}(\lambda_{1}+\lambda_{4})~{}~{}~{}~{}{\rm
with}~{}~{}~{}M_{\Delta}^{2}=\frac{\mu v_{d}^{2}}{\sqrt{2}v_{t}}.$ (5)
The field composition of $V(H,\Delta)$: The two CP-even states ($h_{d}$ and
$h_{t}$) mix after EWSB and give rise to two physical eigenstates
($h^{0}$, $H^{0}$) under an orthogonal transformation. The physical and
interaction states are related as
$\displaystyle
h^{0}=h_{d}~{}\cos\theta_{t}+h_{t}~{}\sin\theta_{t}~{},~{}~{}~{}~{}H^{0}=-h_{d}~{}\sin\theta_{t}+h_{t}~{}\cos\theta_{t}$
(6)
where $\theta_{t}$ is the mixing angle defined by
$\displaystyle\tan 2\theta_{t}$ $\displaystyle=$
$\displaystyle\frac{\sqrt{2}\mu
v_{d}-(\lambda_{1}+\lambda_{4})v_{d}~{}v_{t}}{M_{\Delta}^{2}-\frac{1}{4}\lambda_{H}v_{d}^{2}+(\lambda_{2}+\lambda_{3})v_{t}^{2}}~{}.$
(7)
The corresponding mass eigenvalues of these physical eigenstates are given by
$\displaystyle m_{h^{0}}^{2}$ $\displaystyle=$
$\displaystyle\left(M_{\Delta}^{2}+2v_{t}^{2}(\lambda_{2}+\lambda_{3})\right)\sin^{2}\theta_{t}+2\lambda_{H}v_{d}^{2}\cos^{2}\theta_{t}-\frac{v_{t}\sin
2\theta_{t}\left(2M_{\Delta}^{2}-v_{d}^{2}(\lambda_{1}+\lambda_{4})\right)}{v_{d}}$
$\displaystyle m_{H^{0}}^{2}$ $\displaystyle=$
$\displaystyle\left(M_{\Delta}^{2}+2v_{t}^{2}(\lambda_{2}+\lambda_{3})\right)\cos^{2}\theta_{t}+2\lambda_{H}v_{d}^{2}\sin^{2}\theta_{t}+\frac{v_{t}\sin
2\theta_{t}\left(2M_{\Delta}^{2}-v_{d}^{2}(\lambda_{1}+\lambda_{4})\right)}{v_{d}}.$ (8)
With the mass hierarchy $m_{h^{0}}<m_{H^{0}}$, the lighter state, $h^{0}$,
acts like the SM Higgs with mass $m_{h^{0}}=125$ GeV.
Similarly, the mixing between the two CP-odd states ($a_{d}$, $a_{t}$) leads
to one massless Goldstone mode, which is absorbed by the $Z$ boson of the SM,
and one massive pseudoscalar, $A^{0}$, whose mass ($m_{A^{0}}$) is given by
$\displaystyle m_{A^{0}}^{2}$ $\displaystyle=$
$\displaystyle\frac{M_{\Delta}^{2}\left(4v_{t}^{2}+v_{d}^{2}\right)}{v_{d}^{2}}~{}~{}.$
(9)
In the singly charged scalar sector, the orthogonal rotation of the $G^{\pm}$
and $\Delta^{\pm}$ fields leads to one massless Goldstone state, absorbed by
the longitudinal components of the SM gauge bosons $W^{\pm}$, and one massive
charged scalar eigenstate, $H^{\pm}$. The mass of $H^{\pm}$ is given by
$\displaystyle m_{H^{\pm}}^{2}$ $\displaystyle=$
$\displaystyle\frac{\left(2v_{t}^{2}+v_{d}^{2}\right)\left(4M_{\Delta}^{2}-\lambda_{4}v_{d}^{2}\right)}{4v_{d}^{2}}.$
(10)
The doubly charged scalar ($H^{\pm\pm}\equiv\Delta^{\pm\pm}$) mass is given
by,
$\displaystyle m_{H^{\pm\pm}}^{2}$ $\displaystyle=$ $\displaystyle
M_{\Delta}^{2}-\lambda_{3}v_{t}^{2}-\frac{\lambda_{4}v_{d}^{2}}{2}~{}~{}.$
(11)
Therefore, the scalar potential $V(H,\Delta)$ contains seven massive physical
states: two CP-even states ($h^{0},~{}H^{0}$), one CP-odd state $(A^{0})$, two
singly charged scalars ($H^{\pm}$) and two doubly charged scalars
($H^{\pm\pm}$). The scalar potential $V(H,\Delta)$ has seven independent free
parameters,
$\displaystyle\\{M_{\Delta},~{}v_{t},~{}\lambda_{H}(\simeq
0.129),~{}\lambda_{1},~{}\lambda_{2},~{}\lambda_{3},~{}\lambda_{4}\\}$ (12)
whereas the physical masses and mixing angle can be expressed in terms of the
free parameters using the above equations.
In the limiting case $v_{t}^{2}/v_{d}^{2}\ll 1$, the following mass relations
hold:
$\displaystyle\big{(}m_{H^{\pm\pm}}^{2}-m_{H^{\pm}}^{2}\big{)}\approx-\frac{\lambda_{4}v_{d}^{2}}{4}~{};~{}~{}\big{(}m_{H^{\pm}}^{2}-m_{A^{0}}^{2}\big{)}\approx-\frac{\lambda_{4}v_{d}^{2}}{4}~{};~{}~{}m_{H^{0}}\approx
m_{A^{0}}\approx M_{\Delta}.$ (13)
Depending on the sign of the quartic coupling, $\lambda_{4}$, there are two
types of mass hierarchy among different components of the triplet scalar:
* •
when $\lambda_{4}$ is negative: $m_{H^{\pm\pm}}>m_{H^{\pm}}>m_{H^{0},A^{0}}$,
* •
when $\lambda_{4}$ is positive: $m_{H^{\pm\pm}}<m_{H^{\pm}}<m_{H^{0},A^{0}}$.
In this work, we particularly focus on the first scenario, where $\lambda_{4}$
is negative. The mass difference between $H^{\pm\pm}$ and $H^{\pm}$ is defined
as $\Delta m=m_{H^{\pm\pm}}-m_{H^{\pm}}$.
Neutrino masses: The Yukawa Lagrangian involving the SM lepton doublets ($L$)
and scalar triplet ($\Delta$) to generate neutrino masses can be written as
FileviezPerez:2008jbu ,
$-\mathcal{L}^{\rm
Yukawa}=(y_{L})_{\alpha\beta}~{}\overline{L_{\alpha}^{c}}~{}i\sigma^{2}\Delta~{}L_{\beta}+h.c.$
(14)
where $y_{L}$ is a $3\times 3$ symmetric complex matrix. The heavy triplet
scalar generates neutrino masses via the familiar type-II seesaw mechanism
once the neutral triplet scalar acquires the induced non-zero VEV, $v_{t}$.
The light neutrino mass matrix in the flavor basis is given by
FileviezPerez:2008jbu
$(m_{\nu})_{\alpha\beta}=\sqrt{2}(y_{L})_{\alpha\beta}{v_{t}},$ (15)
where $\\{\alpha,\beta\\}$ are the flavour indices and $m_{\nu}$ is the
$3\times 3$ light neutrino mass matrix. To obtain the physical neutrino
masses, the mass matrix $m_{\nu}$ can be diagonalised using the
Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix, $U_{\text{PMNS}}$, as
$U_{\text{PMNS}}^{T}~{}m_{\nu}~{}U_{\text{PMNS}}=\text{diag}(m_{1},m_{2},m_{3}),$
(16)
where
$U_{\text{PMNS}}=\begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\\\
-c_{23}s_{12}-s_{23}s_{13}c_{12}e^{i\delta}&c_{23}c_{12}-s_{23}s_{13}s_{12}e^{i\delta}&s_{23}c_{13}\\\
s_{23}s_{12}-c_{23}s_{13}c_{12}e^{i\delta}&-s_{23}c_{12}-c_{23}s_{13}s_{12}e^{i\delta}&c_{23}c_{13}\end{pmatrix}\begin{pmatrix}1&0&0\\\
0&e^{i\alpha_{2}}&0\\\ 0&0&e^{i\alpha_{3}}\end{pmatrix}.$ (17)
$m_{i}~{}(i=1,2,3)$ are the eigenvalues of the light neutrino mass eigenstates
$\nu_{i}$. Here $c_{ij}=\cos\theta_{ij}$ and $s_{ij}=\sin\theta_{ij}$ (with
$i,j=1,2,3$ and $i\neq j$), where the $\theta_{ij}$ are the three mixing
angles (with $0\leq\theta_{ij}\leq\pi/2$) of the neutrino sector, and $\delta$
denotes the Dirac CP phase (with $0\leq\delta\leq 2\pi$) responsible for CP
violation in neutrino oscillations. The $\alpha_{2,3}$ are the Majorana
phases, confined to the range $[0,\pi]$. The neutrino oscillation parameters
are related to the three mixing angles ($\theta_{ij}$) and to the mass-squared
differences, defined as $\Delta m_{sol}^{2}\equiv\Delta
m_{21}^{2}=m_{2}^{2}-m_{1}^{2}$ and $\Delta m_{atm}^{2}\equiv\Delta
m_{31}^{2}=m_{3}^{2}-m_{1}^{2}$ for $m_{3}>m_{2}>m_{1}$ (Normal Hierarchy
(NH)), or $\Delta m_{atm}^{2}\equiv\Delta m_{32}^{2}=m_{3}^{2}-m_{2}^{2}$ for
$m_{2}>m_{1}>m_{3}$ (Inverted Hierarchy (IH)).
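Equations 15-17 can be inverted to fix the triplet Yukawa matrix from the oscillation data: $m_{\nu}=U_{\text{PMNS}}^{*}\,\text{diag}(m_{i})\,U_{\text{PMNS}}^{\dagger}$ and $y_{L}=m_{\nu}/(\sqrt{2}v_{t})$. A sketch of this inversion (angle and mass values in the usage are placeholders, not a fit to data):

```python
import numpy as np

def pmns(th12, th13, th23, delta, alpha2=0.0, alpha3=0.0):
    """PMNS matrix of equation 17 (angles in radians)."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    ed = np.exp(1j * delta)
    U = np.array([
        [c12 * c13, s12 * c13, s13 * np.conj(ed)],
        [-c23 * s12 - s23 * s13 * c12 * ed,
          c23 * c12 - s23 * s13 * s12 * ed, s23 * c13],
        [ s23 * s12 - c23 * s13 * c12 * ed,
         -s23 * c12 - c23 * s13 * s12 * ed, c23 * c13]])
    return U @ np.diag([1.0, np.exp(1j * alpha2), np.exp(1j * alpha3)])

def yukawa_from_masses(m_light, U, v_t):
    """Symmetric triplet Yukawa y_L from equations 15-16:
    m_nu = U* diag(m) U^dagger, then y_L = m_nu / (sqrt(2) v_t)."""
    m_nu = np.conj(U) @ np.diag(m_light) @ U.conj().T
    return m_nu / (np.sqrt(2) * v_t)
```

By construction, $U_{\text{PMNS}}^{T}\,m_{\nu}\,U_{\text{PMNS}}$ recovers the diagonal light-mass matrix, and $y_{L}$ comes out complex symmetric, as required.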
The DM Lagrangian: The $Z_{3}$-invariant Lagrangian for the scalar
singlet, $S$, which acts as a stable DM candidate, can be written as
Yang:2021buu
$\displaystyle\mathcal{L}^{\rm DM}=\frac{1}{2}|\partial_{\mu}S|^{2}-V^{\rm
DM},$
where $V^{\rm DM}$ is the scalar potential involving the DM, $S$. The scalar
potential reads
$\displaystyle V^{\rm DM}$ $\displaystyle=$
$\displaystyle\mu_{S}^{2}~{}|S|^{2}+\lambda_{S}|S|^{4}+\frac{\mu_{3}}{3!}\Big{(}S^{3}+{S^{*}}^{3}\Big{)}$
(18) $\displaystyle+\lambda_{SH}H^{\dagger}H(S^{*}S)+\lambda_{S\Delta}~{}{\rm
Tr}\left[\Delta^{\dagger}\Delta\right]\big{(}S^{*}S\big{)}~{}~{}~{}~{}.$
The complex scalar field $S$ is a stable DM candidate whose relic abundance is
set by the freeze-out mechanism. After EWSB, the mass of the DM turns out to be:
$\displaystyle
m_{S}^{2}=\mu_{S}^{2}+\frac{1}{2}\lambda_{SH}v_{d}^{2}+\frac{1}{2}\lambda_{S\Delta}v_{t}^{2}\equiv
m_{\rm DM}^{2}~{}.$ (19)
The free parameters involved in the dark sector are the following:
$\displaystyle\\{~{}m_{S}(\equiv m_{\rm
DM}),~{}\mu_{3},~{}\lambda_{SH},~{}\lambda_{S\Delta},~{}\lambda_{S}\\}.$ (20)
In the following subsection, we examine in detail how the aforementioned free
parameters play a role in the phenomenology of DM.
### 2.2 The Dark Matter
In this subsection, we discuss the phenomenology of the scalar DM of this
model in the presence of the scalar triplet and the SM scalar doublet. The
singlet-like DM interacts with the doublet ($H$) and the triplet ($\Delta$)
sectors of the model through the scalar portal interactions
$\lambda_{SH}(S^{*}S)(H^{\dagger}H)$ and $\lambda_{S\Delta}(S^{*}S){\rm
Tr}[\Delta^{\dagger}\Delta]$, respectively. These interactions are essential
for the DM annihilation in the early Universe. At the same time, the cubic
term, $\mu_{3}(S^{3}+{S^{*}}^{3})$, in the dark scalar potential gives rise to
the novel feature of DM semi-annihilation. In the early Universe, DM is in
thermal equilibrium with the bath particles via the portal interactions as
well as the cubic interaction in the dark sector. During this period, the
interaction rate between DM and the thermal bath particles (TBPs),
$\Gamma_{\rm DM-TBP}=n_{S}\langle{\sigma v}_{SS\to{\rm TBPs}}\rangle$,
dominates over the Hubble expansion rate ($\mathcal{H}$). As the Universe
expands, DM freezes out from the thermal bath when $\Gamma_{\rm DM-
TBP}<\mathcal{H}$, yielding the DM density of the current Universe. This
phenomenon is often referred to as the WIMP miracle Kolb:1990vq . In this
setup, the number density of DM is mainly governed by several number-changing
processes, which are classified as:
$\displaystyle{\rm DM~{}annihilation}:\hskip 8.5359pt$
$\displaystyle(i)~{}S~{}S^{*}~{}\to{\rm SM~{}~{}SM}~{}~{}$
$\displaystyle(ii)~{}S~{}S^{*}~{}\to{\rm X~{}~{}Y};~{}~{}~{}\\{\rm
X,Y\\}=\\{H^{0},A^{0},H^{\pm},H^{\pm\pm}\\}$ $\displaystyle{\rm DM~{}semi-
annihilation}:\hskip 8.5359pt$ $\displaystyle~{}~{}~{}~{}~{}S~{}S~{}\to
S~{}\xi;~{}~{}~{}~{}~{}~{}~{}\\{\xi\\}=\\{h^{0},H^{0}\\}~{},$
where the Feynman diagrams of these number changing processes of DM are shown
in figures 16, 17 and 18, respectively, in appendix D. The evolution of DM
number density ($n_{S}$) in terms of more convenient variables, such as the
co-moving number density ($Y_{S}=\frac{{n_{S}}}{s}$) and dimensionless
parameter $x=\frac{m_{S}}{T}$, is described by the Boltzmann equation (BEQ) in
the following form Hektor:2019ote ; Yang:2021buu :
$\displaystyle\frac{dY_{S}}{dx}$ $\displaystyle=$
$\displaystyle-\frac{s~{}\langle\sigma v\rangle_{S~{}S\to
SM~{}SM}}{\mathcal{H}~{}x}(Y_{S}^{2}-{Y_{S}^{eq}}^{2})~{}\Theta\Big{(}m_{S}-m_{SM}\Big{)}$
(21) $\displaystyle-\frac{s}{\mathcal{H}~{}x}\sum_{X,Y}\langle\sigma
v\rangle_{S~{}S\to
X~{}Y}(Y_{S}^{2}-{Y_{X}^{eq}}{Y_{Y}^{eq}})~{}\Theta\Big{(}2m_{S}-m_{X}-m_{Y}\Big{)}$
$\displaystyle-\frac{1}{2}\frac{s~{}\langle\sigma v\rangle_{S~{}S\to
S~{}\xi}}{\mathcal{H}~{}x}(Y_{S}^{2}-Y_{S}Y_{S}^{eq})~{}\Theta\Big{(}m_{S}-m_{SM}\Big{)}~{}.$
Depending on the mass hierarchies among the DM and the various scalars of this
model, different number-changing processes contribute to the DM number
density; this is encoded in the BEQ through the theta functions ($\Theta$).
The co-moving equilibrium density of the $i$-th particle is defined as
$Y_{i}^{eq}=\frac{45}{4\pi^{4}}\frac{g_{i}}{g_{*s}}\Big{(}\frac{m_{i}}{T}\Big{)}^{2}K_{2}\big{[}\frac{m_{i}}{T}\big{]}$,
where $g_{i}$ is the number of internal degrees of freedom of the $i$-th
species, $g_{*s}$ is the effective number of relativistic degrees of freedom
associated with the entropy density, and $K_{2}$ is the modified Bessel
function of the second kind. The entropy density and Hubble expansion rate are
defined as $s=g_{*s}\frac{2\pi^{2}}{45}T^{3}$ and
$\mathcal{H}=\sqrt{\frac{\pi^{2}g_{*}}{90}}\frac{1}{M_{\rm Pl}}T^{2}$,
respectively, with $g_{*}=106.7$ and $M_{\rm Pl}=2.4\times 10^{18}$ GeV.
$\langle\sigma v\rangle_{SS\to ij}$ is the thermal average cross-section of
the corresponding number-changing process $SS\to ij$ Kolb:1990vq . Solving
the above BEQ, today’s DM relic density is given by Kolb:1990vq ,
$\displaystyle\Omega_{S}h^{2}$ $\displaystyle=$ $\displaystyle 2.752\times
10^{8}~{}\Big{(}\frac{m_{S}}{\rm GeV}\Big{)}~{}Y_{S}(x\to\infty).$ (22)
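A toy version of the BEQ (equation 21), keeping only the annihilation term with a constant thermal average cross-section, already reproduces the familiar freeze-out behaviour feeding into equation 22. The sketch below uses an implicit Euler step (stable for the stiff early evolution) and a large-$x$ asymptotic form of $K_{2}$; it is an order-of-magnitude illustration only, not a substitute for the full micrOMEGAs computation used in this work:

```python
import math

M_PL = 2.4e18     # reduced Planck mass in GeV, as quoted in the text
G_STAR = 106.7    # relativistic degrees of freedom, as quoted in the text

def y_eq(x, g=2.0, g_star_s=G_STAR):
    """Equilibrium comoving density; K2 replaced by its large-x asymptotic form."""
    k2 = math.sqrt(math.pi / (2 * x)) * math.exp(-x) * (1 + 15.0 / (8 * x))
    return 45.0 / (4 * math.pi**4) * (g / g_star_s) * x**2 * k2

def relic_density(m_s, sigma_v, x_start=5.0, x_end=1000.0, n_steps=20000):
    """Annihilation-only limit of equation 21, then equation 22.
    sigma_v is the constant thermal average cross-section in GeV^-2."""
    # lam = s/H evaluated at T = m_s, so that s/(H x) = lam / x^2
    lam = (G_STAR * (2 * math.pi**2 / 45) * m_s * M_PL
           / math.sqrt(math.pi**2 * G_STAR / 90))
    h = (math.log(x_end) - math.log(x_start)) / n_steps
    x, y = x_start, y_eq(x_start)
    for _ in range(n_steps):
        x_new = x * math.exp(h)
        dx = x_new - x
        k = lam * sigma_v / x_new**2
        yeq = y_eq(x_new)
        # implicit Euler: y_new = y - dx*k*(y_new^2 - yeq^2), solved exactly
        y = (-1 + math.sqrt(1 + 4 * dx * k * (y + dx * k * yeq**2))) / (2 * dx * k)
        x = x_new
    return 2.752e8 * m_s * y   # equation 22
```

With $\langle\sigma v\rangle$ of order $10^{-9}$ GeV$^{-2}$ this returns $\Omega_{S}h^{2}$ of order $0.1$, and the relic density scales roughly inversely with the cross-section, as expected for freeze-out.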
The DM abundance ($\Omega_{S}h^{2}$) obtained from equations 21 and 22 depends
on the following independent free parameters, which enter the thermal average
cross-sections:
$\displaystyle\\{m_{S},~{}\mu_{3},~{}\lambda_{SH},~{}\lambda_{S\Delta}\\}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}{\rm
from~{}dark~{}sector},$
$\displaystyle\\{~{}\lambda_{1},~{}\lambda_{2},~{}\lambda_{3},~{}\lambda_{4}~{}\\}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}{\rm
from~{}scalar~{}triplet~{}sector.}$ (23)
The dark-sector self-coupling, $\lambda_{S}$, does not affect the computation
of the relic density. We will discuss the role of the above parameters on the
DM density in section 5.1. Note that the present-day DM relic density is
constrained by the Planck observation Planck:2018vyg , as will be discussed in
subsection 3.1.
Figure 1: Feynman diagram for the spin-independent DM-nucleon scattering
process for the scalar DM.
Several attempts have been made to detect DM in laboratory experiments. DMDD
is one of them: the incoming DM flux scatters off the nuclei in the target
crystals, and the recoil of the target nuclei can be searched for as a DM
signal. The effective spin-independent DM-nucleon scattering cross-section
($\sigma_{\rm DD}^{\rm SI}$), mediated via the $t$-channel $CP$-even scalar
($h^{0}$ and $H^{0}$) exchange diagrams shown in figure 1, reads (in the
$q\to 0$ limit):
$\sigma_{\rm DD}^{\rm SI}=\Big{(}\frac{\Omega_{S}h^{2}}{\Omega_{\rm
DM}h^{2}}\Big{)}~{}\Big{(}\frac{1}{4\pi}\bigg{(}\frac{f_{n}\mu_{n}}{m_{S}}\bigg{)}^{2}\bigg{(}\frac{m_{n}}{v_{d}}\bigg{)}^{2}~{}\bigg{[}\frac{\lambda_{D1}}{m_{h^{0}}^{2}}+\frac{\lambda_{D2}}{m_{H^{0}}^{2}}\bigg{]}^{2}\Big{)},$
(24)
where $\Big{(}{\Omega_{S}h^{2}}/{\Omega_{\rm DM}h^{2}}\Big{)}\equiv\xi_{\rm
DM}$ is the fractional DM density. The couplings $\lambda_{D1}$ and
$\lambda_{D2}$ are defined as
$\displaystyle\lambda_{D1}$ $\displaystyle=$
$\displaystyle\Big{(}~{}~{}\lambda_{SH}~{}v_{d}\cos\theta_{t}+\lambda_{S\Delta}v_{t}\sin\theta_{t}\Big{)}\cos\theta_{t}~{}\xrightarrow{\theta_{t}\to
0}~{}\simeq\lambda_{SH}~{}v_{d}$ $\displaystyle\lambda_{D2}$ $\displaystyle=$
$\displaystyle\Big{(}-\lambda_{SH}~{}v_{d}\sin\theta_{t}+\lambda_{S\Delta}v_{t}\cos\theta_{t}\Big{)}\sin\theta_{t}\xrightarrow{\theta_{t}\to
0}~{}\simeq 0$ (25)
The reduced mass of the DM-nucleon system is
$\mu_{n}=\frac{m_{S}m_{n}}{m_{S}+m_{n}}$, with $m_{n}=0.946$ GeV, and the
nucleon form factor is $f_{n}=0.28$ Alarcon:2012nr . In the limiting case
$\theta_{t}\to 0$, $\sigma_{\rm DD}^{\rm SI}$ depends solely on the Higgs-
portal DM coupling, $\lambda_{SH}$, and the DM mass, $m_{S}$. The null results
from various DM direct-search experiments, such as XENON-1T Aprile:2018dbl
and PANDA-4T PandaX-4T:2021bab , put a strong constraint on the DMDD cross-
section, which can be further expressed in terms of the model parameters.
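Equations 24-25 can be evaluated directly. In the sketch below the heavy-Higgs mass and all coupling values in the usage are placeholders, and the factor $1~{\rm GeV}^{-2}\simeq 3.894\times 10^{-28}~{\rm cm}^{2}$ converts the result to cm$^{2}$:

```python
import math

GEV2_TO_CM2 = 3.894e-28   # (hbar c)^2 in cm^2 GeV^2

def sigma_si(m_s, lam_sh, lam_sdelta, v_t, theta_t, m_h0=125.0, m_H0=500.0,
             xi_dm=1.0, v_d=246.0, m_n=0.946, f_n=0.28):
    """Spin-independent DM-nucleon cross-section (equations 24-25), in cm^2."""
    mu_n = m_s * m_n / (m_s + m_n)                       # reduced mass
    lam_d1 = (lam_sh * v_d * math.cos(theta_t)
              + lam_sdelta * v_t * math.sin(theta_t)) * math.cos(theta_t)
    lam_d2 = (-lam_sh * v_d * math.sin(theta_t)
              + lam_sdelta * v_t * math.cos(theta_t)) * math.sin(theta_t)
    amp = lam_d1 / m_h0**2 + lam_d2 / m_H0**2
    sigma_gev = (xi_dm / (4 * math.pi) * (f_n * mu_n / m_s)**2
                 * (m_n / v_d)**2 * amp**2)
    return sigma_gev * GEV2_TO_CM2
```

For $\theta_{t}=0$ the triplet-portal coupling drops out entirely, reproducing the limiting case discussed above, and typical Higgs-portal inputs land in the $10^{-46}$-$10^{-44}$ cm$^{2}$ ballpark probed by current experiments.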
At the same time, DM can also be detected by various indirect-search
experiments, such as the space-based Fermi Large Area Telescope (Fermi-LAT)
Fermi-LAT:2017bpc and the ground-based Major Atmospheric Gamma-ray Imaging
Cherenkov (MAGIC) telescopes MAGIC:2016xys . These experiments look for the
gamma-ray flux produced via the production of SM particles, either through DM
annihilation or via decay, in the local Universe. The photons emitted from
WIMP-like DM lie in the gamma-ray regime and act as ideal messengers of DM
indirect detection (DMID). The total gamma-ray flux due to DM annihilation
into SM charged pairs ($\psi\overline{\psi}$) in a specific energy range is
given by MAGIC:2016xys
$\displaystyle\Phi_{\psi\overline{\psi}}$ $\displaystyle=$
$\displaystyle\frac{1}{4\pi}\frac{\langle\sigma
v\rangle_{SS\to\psi\overline{\psi}}}{2m_{S}^{2}}\int_{E_{\rm min}}^{E_{\rm
max}}\frac{dN_{\gamma}}{dE_{\gamma}}dE_{\gamma}\underbrace{\int
dx~{}\rho_{S}^{2}\big{(}r(b,l,x)\big{)}}_{J}~{},$ (26)
where $\psi:=\\{\mu^{-},\tau^{-},b,W^{-}\\}$. The $J$-factor contains the
astrophysical information about the DM distribution in the galaxy, and
$\frac{dN_{\gamma}}{dE_{\gamma}}$ is the energy spectrum of the incoming
photons from DM annihilation. Measuring the gamma-ray flux and using standard
astrophysical inputs, the indirect-search experiments put limits on the
thermal average cross-section of DM annihilation into different SM charged
pairs, such as
$\mu^{-}\mu^{+},~{}\tau^{-}\tau^{+},~{}b\overline{b},~{}W^{-}W^{+}$. To
compare the experimental bound with the theoretically estimated thermal
average cross-section of the corresponding DM annihilation channel, one can
express equation 26 as:
$\displaystyle\langle\sigma v\rangle_{SS\to\psi\overline{\psi}}^{\rm eff}$
$\displaystyle=$ $\displaystyle\big{(}\Omega_{S}/\Omega_{\rm
DM}\big{)}^{2}\langle\sigma v\rangle_{SS\to\psi\overline{\psi}}=\xi_{\rm
DM}^{2}\langle\sigma v\rangle_{SS\to\psi\overline{\psi}}~{}~{}.$ (27)
Here, the thermal average cross-sections for DMID are scaled by the fractional
DM density as $\xi_{\rm DM}^{2}$. In this setup, the thermal average
annihilation cross-sections $\langle\sigma
v\rangle_{SS\to\psi\overline{\psi}}$ are dominated by the $s$-channel SM
Higgs ($h^{0}$) exchange diagram and vary as
$\propto\lambda_{SH}^{2}/m_{S}^{2}$ (for $\sin\theta_{t}\to 0$). The non-
observation of any DM signal in such indirect-detection experiments puts a
constraint on the thermal average cross-section of DM annihilation into
different SM charged pairs; those constraints are further expressed in terms
of the model parameters.
So far, we have discussed the analytical forms of the DM abundance and the
DMDDSI and DMID cross-sections, which will help in understanding the numerical
results. For the numerical analysis, we use publicly available packages: we
first implement the model in ${\tt FeynRules}$ Alloul:2013bka and then feed
the output into ${\tt micrOMEGAs}$ Belanger:2006is to obtain the DM abundance,
$\Omega_{S}h^{2}$, and the DMDDSI and DMID cross-sections.
### 2.3 Study of EWPT and the production of GW
In this section, we discuss the theoretical formulation of EWPT in our model
and examine the possibility of the EWPT being a strong first-order one, which
in turn may facilitate EWBG. We also outline the production of stochastic GW
from FOPT.
Note that in the present study of EWPT, we assume that no spontaneous or
explicit $CP$ violation occurs in the Higgs sector. However, one can
incorporate a $Z_{3}$-symmetric $CP$-violating dimension-7 operator,
$y_{t}\,\overline{Q}\tilde{H}\,\big{(}1+ic\frac{S^{3}}{\Lambda^{3}}\big{)}\,t_{R}\,+\,h.c.$,
in this model to generate the $CP$ violation which is essential for EWBG to
take place. We do not perform a detailed calculation of EWBG in this paper,
since
the focus of the work is to study the correlation/interplay of the prospects
of detecting stochastic GW from an SFOEWPT in an almost unconstrained
parameter space at various future proposed GW experiments, the collider
signals of this parameter space, and the prospects of detecting DM at various
DM direct detection (DMDD) experiments. However, we should mention that the
detailed baryogenesis calculation for an equivalent $Z_{2}$-symmetric
$CP$-violating dimension-6 operator has been performed in the literature
Vaskonen:2016yiu ; Ellis:2022lft . One can follow the same prescription for
our $Z_{3}$ symmetric $CP$-violating dimension-7 operator given above. We
leave such an EWBG calculation for a future study.
#### 2.3.1 Effective Higgs potential at finite-temperature
The tree-level scalar potential of the model (see equation 3) relevant for the
study of EWPT, in terms of the $CP$-even Higgs fields {$h_{d}$, $h_{t}$ and
$h_{s}$}, can be written as
$\displaystyle V_{0}(h_{d},h_{t},h_{s})=$
$\displaystyle\,\frac{1}{4}(\lambda_{H}h_{d}^{4}+(\lambda_{2}+\lambda_{3})h_{t}^{4}+\lambda_{S}h_{s}^{4}+(\lambda_{1}+\lambda_{4})h_{d}^{2}h_{t}^{2}+\lambda_{SH}h_{d}^{2}h_{s}^{2}+\lambda_{S\Delta}h_{t}^{2}h_{s}^{2})$
$\displaystyle+\frac{\mu_{3}}{6\sqrt{2}}h_{s}^{3}-\frac{\mu}{\sqrt{2}}h_{d}^{2}h_{t}-\frac{\mu_{H}^{2}}{2}h_{d}^{2}+\frac{\mu_{\Delta}^{2}}{2}h_{t}^{2}+\frac{\mu_{S}^{2}}{2}h_{s}^{2}.$
(28)
The vacuum expectation values (VEVs) of the scalar fields are assumed to be
real at all temperatures. As discussed earlier, at $T=0$ the minimum of the
scalar potential lies at $\langle h_{d}\rangle\equiv v_{d}$, $\langle
h_{t}\rangle\equiv v_{t}$ and $\langle h_{s}\rangle\equiv 0$, where
$v=\sqrt{v_{d}^{2}+2v_{t}^{2}}=246$ GeV. The zero-temperature tree-level
potential (equation 28) receives quantum corrections from all fields that
couple to $h_{d}$, $h_{t}$ and $h_{s}$. These corrections can be written in
terms of the well-known one-loop Coleman-Weinberg (CW) potential
Coleman:1973jx in the following form:
$\displaystyle V_{\rm
CW}(h_{d},h_{t},h_{s},T=0)=\sum_{i}(-1)^{2s_{i}}n_{i}\frac{m_{i}^{4}(h_{d},h_{t},h_{s})}{64\pi^{2}}\left[\ln\frac{m_{i}^{2}(h_{d},h_{t},h_{s})}{Q^{2}}-C_{i}\right]\;\
,$ (29)
where $Q$ is the renormalization scale, which we set to $Q=246$ GeV, and the
sum over $i$ runs over all particles in the model. The constants $C_{i}$
depend on the choice of renormalization scheme. In this work, we adopt the
$\overline{\text{MS}}$ scheme, whereby $C_{i}=\frac{1}{2}$ for the transverse
polarizations of the gauge bosons and $C_{i}=\frac{3}{2}$ for their
longitudinal polarizations and for all other particles (scalars and
fermions); i.e., $C_{W^{\pm},Z}=5/6$ for the gauge bosons as a whole and
$C_{i}=3/2$ for the other species. Here, $m_{i}$ is the field-dependent mass
of the $i$-th species (see appendix C for details), and $n_{i}$ and $s_{i}$
are its number of degrees of freedom and spin, respectively. The numbers of
degrees of freedom of all the fields in the model relevant for equation 29 are
given below:
$n_{h_{d},h_{t},h_{s},a_{d},a_{t},a_{s},G^{0}}=1,\,\,n_{H_{\Delta}^{\pm},H_{\Delta}^{\pm\pm},G^{\pm}}=2,\,\,n_{Z}=3,\,\,n_{W^{\pm}}=6,\,\,n_{t}=12\,\,.$
(30)
In this work, we use the Landau gauge, in which the ghost bosons decouple and
therefore need not be considered in our calculation.
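The CW sum of equation 29 is straightforward to code once the field-dependent masses are known at a given field point. A generic sketch (the list-based interface is our own choice; massless modes are skipped since $m^{4}\ln m^{2}\to 0$):

```python
import math

def v_cw(masses_sq, n_dof, spins, c_consts, Q=246.0):
    """One-loop Coleman-Weinberg potential of equation 29, given the
    field-dependent squared masses m_i^2 evaluated at one field point."""
    total = 0.0
    for m2, n, s, C in zip(masses_sq, n_dof, spins, c_consts):
        if m2 == 0.0:
            continue  # massless modes: m^4 log(m^2) -> 0
        total += ((-1)**(2 * s) * n * m2**2 / (64 * math.pi**2)
                  * (math.log(abs(m2) / Q**2) - C))
    return total
```

The $(-1)^{2s_{i}}$ factor makes a single boson at $m=Q$ contribute negatively (the bracket reduces to $-C_{i}$), while a fermion species flips the sign, matching the structure of equation 29.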
Note that the location of the electroweak minimum in field space for the
tree-level potential is shifted by the one-loop CW potential. Thus, suitable
counter-terms need to be added to the effective potential to ensure that its
minimum coincides with that of the tree-level potential in the
$\overline{\text{MS}}$ renormalization scheme. The added counter-terms are
parameterized as
$\displaystyle V_{\rm
CT}=\delta_{\mu_{H}^{2}}h_{d}^{2}+\delta_{\mu_{\Delta}^{2}}h_{t}^{2}+\delta_{\lambda_{H}}h_{d}^{4}+\delta_{\lambda_{2}}h_{t}^{4}+\delta_{\lambda_{S\Delta}}h_{d}^{2}h_{t}^{2}+\delta_{\lambda_{SH}}h_{d}^{2}h_{s}^{2}\;.$
(31)
Various coefficients in $V_{\rm CT}$ (of equation 31) can be found from the
on-shell renormalization conditions at zero temperature:
$\frac{\partial(V_{\text{CW}}+V_{\text{CT}})}{\partial\phi_{i}}=0\,,\qquad\frac{\partial^{2}(V_{\text{CW}}+V_{\text{CT}})}{\partial\phi_{i}\,\partial\phi_{j}}=0\,,$
(32)
where $\phi_{i},\phi_{j}\in\\{h_{d},h_{t},h_{s}\\}$. All the derivatives are
taken at the true EW minimum, i.e., $h_{d}=v_{d}$, $h_{t}=v_{t}$ and
$h_{s}=0$. The various coefficients of the counter-term potential are
presented in appendix B. Note that in the Landau gauge, which we adopt in this
work, the masses of the Goldstone modes vanish at the physical minimum. As a
result, estimating physical masses and couplings from derivatives of the
loop-corrected effective potential leads to divergences Martin:2014bca ;
Elias-Miro:2014pca . These can be handled with an infrared regulator, shifting
the Goldstone masses as $m_{G}^{2}\to m_{G}^{2}+\mu_{\rm IR}^{2}$. In this
work, we adopt the approach of references Baum:2020vfl ; Chatterjee:2022pxf ,
for which setting $\mu_{\rm IR}^{2}=1\,{\rm GeV}^{2}$ is sufficient for the
numerical calculation.
At finite temperature, the effective potential receives additional
corrections, which at the one-loop level are given by Dolan:1973qd ;
Weinberg:1974hy ; Kirzhnits:1976ts
$\displaystyle V_{\rm
th}(h_{d},h_{t},h_{s},T)=\frac{T^{4}}{2\pi^{2}}\,\sum_{i}n_{i}J_{B,F}\left(\frac{m_{i}^{2}(h_{d},h_{t},h_{s})}{T^{2}}\right)\;,$
(33)
where $n_{i}$ are the numbers of degrees of freedom for particles as discussed
earlier. The thermal functions $J_{B(F)}$ are for bosonic (fermionic)
particles defined as
$J_{B/F}(y^{2})=\pm{\rm
Re}\int_{0}^{\infty}x^{2}\ln\left(1\mp e^{-\sqrt{x^{2}+y^{2}}}\right){\rm
d}{x}\,,$ (34)
where the upper (lower) sign corresponds to bosons (fermions) and the sum
includes all the particles described in equation 29.
Note that at high temperatures the perturbative expansion of the effective
potential is no longer valid: quadratically divergent contributions from the
non-zero Matsubara modes need to be re-summed through the insertion of thermal
masses in the one-loop propagator. The corrections apply only to the Matsubara
zero-modes, i.e., to the longitudinal polarization states of the vector bosons
and to all the scalars, since the thermal contributions to the transverse
modes are suppressed by gauge symmetry Espinosa:1992kf . To make the expansion
reliable, we adopt the well-known Parwani method Parwani:1991gq and denote the
thermally improved field-dependent masses by $M_{i}^{2}$, where
$M_{i}^{2}=\text{eigenvalues}[m_{i}^{2}+\Pi(T^{2})]$, with
$\Pi(T^{2})=c_{ij}T^{2}$ and $c_{ij}$ the so-called daisy coefficients. Note
that the gauge symmetries together with the discrete $Z_{3}$ symmetry of the
present model set the off-diagonal terms of the $\Pi(T^{2})$ matrix to zero.
The daisy coefficients are listed in equation 68.
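The Parwani prescription amounts to diagonalizing the thermally corrected mass matrix at each field point and temperature. A minimal sketch (the $2\times 2$ example in the test is made up; in the present model $\Pi(T^{2})$ is diagonal, as noted above):

```python
import numpy as np

def thermal_masses(mass_sq_matrix, daisy_coeffs, T):
    """Thermally improved masses in the Parwani scheme:
    M_i^2 = eigenvalues[m^2 + Pi(T^2)], with diagonal Pi = c T^2."""
    pi_matrix = np.diag(daisy_coeffs) * T**2
    return np.linalg.eigvalsh(np.asarray(mass_sq_matrix) + pi_matrix)
```

`eigvalsh` returns the eigenvalues in ascending order; positive daisy coefficients push the thermal masses up with temperature, which is what eventually restores the symmetry at high $T$.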
Including the Coleman-Weinberg and the thermal corrections, the temperature-
dependent effective potential at one-loop order is given by
$V_{\text{eff}}(T)=V_{0}+V_{\rm CW}(M_{i}^{2})+V_{\rm CT}+V_{\rm
th}(M_{i}^{2},T)\,.$ (35)
The EWPT may now be investigated using this potential, with the minimum of the
potential tracked as a function of temperature. Note that the total one-loop
effective potential (equation 35) carries explicit gauge dependence: the
important quantities in the study of phase transitions, such as the locations
of the extrema of $V_{\text{eff}}(T)$ and the ratio $\phi_{c}(T_{c})/T_{c}$,
are gauge-dependent Dolan:1973qd ; Nielsen:1975fs ; Fukuda:1975di ;
Laine:1994zq ; Baacke:1993aj ; Baacke:1994ix ; Garny:2012cg ;
Espinosa:2016nld ; Patel:2011th ; Arunasalam:2021zrs ; Lofgren:2021ogg (using
the Nielsen identities Nielsen:1975fs ; Fukuda:1975di , one can construct
gauge-independent quantities from the effective potential). In addition, the
one-loop effective potential of equation 35 explicitly depends on the choice
of the renormalization scale ($Q$), which might have a more significant impact
than the gauge uncertainty Laine:2017hdk .
The production of the stochastic GW from FOPTs will be discussed in the next
subsection. Later, we shall use this to identify a suitable region of
parameter space of the model that can be investigated using various future GW
experiments.
#### 2.3.2 Production of stochastic GW from FOPT
In this $Z_{3}$-symmetric singlet scalar extended type-II seesaw model, multi-
step FOPTs may occur in the early Universe, potentially generating a
stochastic background of GW. The amplitude of this stochastic background, in
contrast to GW from a binary system, is a random quantity that is unpolarized,
isotropic and has a Gaussian distribution Allen:1997ad . As a result, the two-
point correlation function can characterize this, and it is proportional to
the power spectral density $\Omega_{\text{GW}}{\rm h}^{2}$. The “cross-
correlation” method between two or more detectors can be used to detect this
kind of stochastic GW Caprini:2015zlo ; Cai:2017cbj ; Caprini:2018mtu ;
Romano:2016dpx ; Christensen:2018iqi .
To determine the GW signals arising from FOPTs and evaluate their evolution
with temperature, we require knowledge of the following set of
phase-transition parameters: $T_{n},\alpha,\beta/H_{n},v_{w}$. Here, the
bubble nucleation temperature, $T_{n}$, is approximately the temperature at
which one bubble per Hubble volume forms due to the phase transition. The
dimensionless quantity $\alpha$
is defined as the ratio of the energy released from the phase transition to
the total radiation energy density at the time when the phase transition is
complete. Thus, it relates to the energy budget of the FOPT and is given by
Espinosa:2010hh
$\alpha=\frac{\rho_{\text{vac}}}{\rho^{*}_{\text{rad}}}=\frac{1}{\rho^{*}_{\text{rad}}}\left[T\frac{\partial\Delta
V(T)}{\partial T}-\Delta V(T)\right]\Bigg{|}_{T_{*}},$ (36)
where $T_{*}=T|_{t_{*}}$, with $t_{*}$ the time at which the FOPT completes.
In the absence of substantial reheating effects, $T_{*}\simeq T_{n}$. The
total radiation energy density of the plasma background is
$\rho^{*}_{\text{rad}}=g_{*}\pi^{2}T_{*}^{4}/30$, where $g_{*}$ is the number
of relativistic degrees of freedom at $T=T_{*}$. In this work, we consider
$g_{*}\sim 100$. The difference between the potential energies at the false
minimum and the true minimum is denoted by $\Delta
V(T)=V_{\text{false}}(T)-V_{\text{true}}(T)$. The other two parameters are
$\beta$, which roughly indicates the inverse time duration of the phase
transition, and $v_{w}$, the bubble wall velocity.
The tunnelling probability from the false vacuum to the true vacuum per unit
time per unit volume is given by Turner:1992tz
$\Gamma(T)\simeq T^{4}\left(\frac{S_{3}\left(T\right)}{2\pi
T}\right)^{3/2}e^{-S_{3}\left(T\right)/T},$ (37)
where the Euclidean action $S_{3}\left(T\right)$ of the background field
($\phi$), in the spherical polar coordinate, is given by
$S_{3}\left(T\right)=4\pi\int dr\,r^{2}\left[\dfrac{1}{2}\left(\partial_{r}\vec{\phi}\right)^{2}+V_{eff}\right]\,\,,$
(38)
with $V_{eff}$ being the finite-temperature total effective potential defined
in equation 35, and $\vec{\phi}$ denoting the three scalar-field directions
that acquire vevs. The condition for the formation of a bubble of critical
size can be found by extremizing this Euclidean action. For this study, we use
the publicly available toolbox $\mathbf{CosmoTransitions}$ Wainwright:2011kj ,
which employs a path-deformation method, to find bounce solutions of the above
Euclidean action; this is the most technically challenging part of the
computation. The bubble nucleation rate per unit volume at temperature $T$
scales as $\frac{\Gamma(T)}{V}\propto T^{4}e^{-S_{3}/T}$. The nucleation
temperature is usually determined by solving the following equation for the
nucleation rate,
$\int_{T_{n}}^{\infty}\frac{dT}{T}\frac{\Gamma(T)}{H(T)^{4}}\simeq 1.$ (39)
This equation states that the nucleation probability of a single bubble within
one horizon volume is approximately 1, and it reduces to the condition
$S_{3}(T)/T\approx 140$, solving which one can obtain the nucleation
temperature $T_{n}$ Apreda:2001us . This is the highest temperature for which
$S_{3}/T\lesssim 140$. If $S_{3}/T>140$ for all $T>0$, the transition to the
true vacuum, i.e. the nucleation of bubbles of critical size of the broken
phase, does not happen. This occurs because either the barrier between the two
local minima is too high or the distance between them in field space is too
large, resulting in a low tunnelling probability. For $T<T_{c}$, if this
condition is not fulfilled, the system remains trapped in the metastable
(false) minimum. Thus, in spite of the presence of a deeper minimum of the
potential at zero temperature, the Universe remains in the false minimum. This
suggests that, rather than merely establishing the existence of a critical
temperature for an FOEWPT, it is more important to verify the successful
nucleation of a bubble of the broken electroweak phase.
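Once $S_{3}(T)/T$ is available (for instance from the CosmoTransitions bounce solver), the reduced condition $S_{3}(T_{n})/T_{n}\approx 140$ can be solved by bisection; a minimal sketch, with `s3_over_t` a hypothetical callable:

```python
def nucleation_temperature(s3_over_t, T_low, T_high, target=140.0, tol=1e-8):
    """Solve S3(T)/T = target (the reduced form of eq. 39) by bisection.
    Assumes S3/T grows monotonically with T on [T_low, T_high], i.e.
    s3_over_t(T_low) < target < s3_over_t(T_high); T_n is then the highest
    temperature with S3/T <= target."""
    lo, hi = T_low, T_high
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if s3_over_t(mid) < target:
            lo = mid   # action small enough: nucleation efficient here
        else:
            hi = mid   # action too large: no nucleation yet
    return 0.5 * (lo + hi)
```

In a realistic scan each evaluation of `s3_over_t` involves a full bounce computation, so the bisection tolerance is chosen accordingly.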
The inverse time duration of the phase transition, $\beta$, is obtained from
the relation
$\beta=-\frac{d(S_{3}/T)}{dt}\Bigr{|}_{t_{*}}\simeq\dfrac{\dot{\Gamma}}{\Gamma}=H_{*}T_{*}\frac{d(S_{3}/T)}{dT}\Bigr{|}_{T_{*}},$
(40)
where $H_{*}$ is the Hubble rate at $T_{*}$. For a strong GW signal, the ratio
$\beta/H_{*}$ needs to be small, which implies a relatively slow phase
transition.
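Given the same $S_{3}(T)/T$ profile, equation 40 can be evaluated with a central finite difference; a sketch:

```python
def beta_over_H(s3_over_t, T_star, dT=1e-4):
    """beta/H_* = T d(S3/T)/dT evaluated at T_* (eq. 40),
    using a central finite difference."""
    return T_star * (s3_over_t(T_star + dT) - s3_over_t(T_star - dT)) / (2.0 * dT)
```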
Having defined these parameters, we need two more quantities to calculate the
GW power spectrum from an FOPT in the early Universe. When a phase transition
occurs in a thermal plasma, the released energy is shared between the kinetic
energy of the plasma, whose bulk fluid motion sources GW, and heating of the
plasma. The quantity $\kappa_{v}$ is the fraction of the latent heat converted
into bulk motion of the fluid, which takes the form Espinosa:2010hh ;
Chiang:2019oms
$\kappa_{v}\simeq\left[\dfrac{\alpha}{0.73+0.083\sqrt{\alpha}+\alpha}\right]\,\,.$
(41)
Finally, we need $\kappa_{\text{turb}}$, the fraction of $\kappa_{v}$ that
goes into generating magnetohydrodynamic (MHD) turbulence in the plasma. The
value of $\kappa_{\text{turb}}$ is unknown, but it is expected that
$\kappa_{\text{turb}}\approx(0.05-0.10)\,\kappa_{v}$ Hindmarsh:2015qta , and
we take this fraction to be 0.1 in all our GW calculations.
Now we have all the relevant parameters at our disposal, using which we can
calculate the GW energy density spectrum.
It is generally accepted that during a cosmological FOPT, GW can be produced
by three distinct processes: bubble wall collisions, long-lasting sound waves
in the plasma, and MHD turbulence. For bubble collisions, the GW
is generated by the stress-energy tensor located at the wall, and the
mechanism is referred to as the “envelope approximation”222In fact, the
“Envelope approximation” is actually two approximations. In the first
approximation, the expanding bubble’s stress-energy tensor is believed to be
non-zero only in an infinitesimally thin shell on the bubble surfaces. In the
second approximation, when two bubbles interact, it is assumed that this
stress-energy tensor vanishes instantly, leaving just the ‘envelopes’ of the
bubbles to interact Kosowsky:1992rz ; Kosowsky:1991ua ; Kosowsky:1992vn ;
Caprini:2019egz .. However, for a phase transition proceeding in a thermal
plasma, bubble collisions’ contribution to the total GW energy density is
believed to be negligible Bodeker:2017cim . On the other hand, the bulk motion
of the plasma gives rise to velocity perturbations in it, resulting in the
generation of sound waves in a medium made up of relativistic particles. The
sound waves receive the majority of the energy released during the phase
transition Hindmarsh:2013xza ; Giblin:2013kea ; Giblin:2014qia ;
Hindmarsh:2015qta . This relatively long-living acoustic production of GW is
often regarded as the most dominant one 333Here, we are presuming that a
bubble expanding in plasma can achieve a relativistic terminal velocity. The
energy in the scalar field is negligible in this situation and the most
significant contributions to the signal are predicted to come from the fluid’s
bulk motion in the form of sound waves and/or MHD. This is the non-runaway
bubble in a plasma (NP) scenario Caprini:2015zlo ; Schmitz:2020syl ..
Percolation may also cause turbulence in the plasma, particularly MHD
turbulence since the plasma is completely ionized. This corresponds to the
third source Caprini:2006jb ; Kahniashvili:2008pf ; Kahniashvili:2008pe ;
Kahniashvili:2009mf ; Caprini:2009yp ; Kisslinger:2015hua of GW production.
Thus, summing the contributions from the sound waves and the MHD turbulence,
one obtains the overall GW intensity $\Omega_{\text{GW}}{\rm h}^{2}$ from the
SFOPT at a given frequency $f$,
$\Omega_{\text{GW}}h^{2}\simeq\Omega_{\text{sw}}h^{2}+\Omega_{\text{turb}}h^{2},$
(42)
where $h\approx 0.673$ DES:2017txv .
The contribution to the total GW power spectrum from the sound waves,
$\Omega_{\text{sw}}h^{2}$, in equation 42 can be modelled by the following fit
formula Hindmarsh:2019phv
$\Omega_{\text{SW}}{\rm h}^{2}=2.65\times
10^{-6}\Upsilon(\tau_{SW})\left(\dfrac{\beta}{H_{\star}}\right)^{-1}v_{w}\left(\dfrac{\kappa_{v}\alpha}{1+\alpha}\right)^{2}\left(\dfrac{g_{*}}{100}\right)^{-\frac{1}{3}}\left(\dfrac{f}{f_{\text{SW}}}\right)^{3}\left[\dfrac{7}{4+3\left(\dfrac{f}{f_{\text{SW}}}\right)^{2}}\right]^{\frac{7}{2}}\,\,,$
(43)
where $T_{\star}$ is the temperature just after the end of GW production and
$H_{\star}$ is the Hubble rate at that time. For this analysis, we consider
the approximation that $T_{\star}\approx T_{n}$. The present day peak
frequency $f_{\text{SW}}$ for the sound wave contribution is
$f_{\text{SW}}=1.9\times 10^{-5}~\text{Hz}\left(\dfrac{1}{v_{w}}\right)\left(\dfrac{\beta}{H_{\star}}\right)\left(\dfrac{T_{n}}{100~\text{GeV}}\right)\left(\dfrac{g_{*}}{100}\right)^{\frac{1}{6}}\,\,.$
(44)
Recent studies Guo:2020grp ; Hindmarsh:2020hop have shown that sound waves
have a finite lifetime, which causes $\Omega_{\text{sw}}h^{2}$ to be
suppressed. The multiplicative factor $\Upsilon(\tau_{SW})$ is included in
equation 43 to account for this suppression:
$\Upsilon(\tau_{SW})=1-\frac{1}{\sqrt{1+2\tau_{\text{sw}}H_{\ast}}}.$ (45)
Here, the lifetime $\tau_{\text{sw}}$ is taken to be the time scale on which
turbulence develops, approximately given by Pen:2015qta ; Hindmarsh:2017gnf :
$\displaystyle\tau_{\text{sw}}\sim\frac{R_{\ast}}{\bar{U}_{f}},$ (46)
where $R_{\ast}$ is the mean bubble separation, and $\bar{U}_{f}$ is the root-
mean-squared (RMS) fluid velocity. $R_{\ast}$ is related to the duration of
the phase transition parameter, $\beta$, through the relation
$R_{\ast}=(8\pi)^{1/3}v_{w}/\beta$ Hindmarsh:2019phv ; Guo:2020grp . On the
other hand, hydrodynamic analyses Hindmarsh:2019phv ; Weir:2017wfa have shown
that the RMS fluid velocity is given by
$\bar{U}_{f}=\sqrt{3\kappa_{v}\alpha/4}$. At
$\tau_{\text{sw}}\rightarrow\infty$, $\Upsilon(\tau_{SW})$ approaches the
asymptotic value $1$.
In contrast, the contribution $\Omega_{\text{turb}}h^{2}$ from the MHD
turbulence in equation 42 can be modeled by Caprini:2015zlo
$\Omega_{\text{turb}}h^{2}=3.35\times
10^{-4}\left(\frac{H_{*}}{\beta}\right)\left(\frac{\kappa_{\text{turb}}\alpha}{1+\alpha}\right)^{\frac{3}{2}}\left(\frac{100}{g_{*}}\right)^{\frac{1}{3}}v_{w}\frac{(f/f_{\text{turb}})^{3}}{[1+(f/f_{\text{turb}})]^{\frac{11}{3}}(1+8\pi
f/h_{*})},$ (47)
where the present-day peak frequency of the turbulence-produced GW spectrum,
$f_{\text{turb}}$, is given by
$f_{\text{turb}}=2.7\times
10^{-2}~{}\text{mHz}\frac{1}{v_{w}}\left(\frac{\beta}{H_{*}}\right)\left(\frac{T_{*}}{100~{}\text{GeV}}\right)\left(\frac{g_{*}}{100}\right)^{\frac{1}{6}},$
(48)
with the parameter
$h_{*}=16.5\times 10^{-6}~\text{Hz}\left(\dfrac{T_{n}}{100~\text{GeV}}\right)\left(\dfrac{g_{*}}{100}\right)^{\frac{1}{6}}\,\,.$
(49)
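For illustration, equations 41 to 49 can be combined into a single routine for $\Omega_{\text{GW}}h^{2}$ at a given frequency; this sketch assumes $T_{*}\approx T_{n}$, $\kappa_{\text{turb}}=0.1\,\kappa_{v}$ as in the text, and frequencies in Hz:

```python
import math

def gw_spectrum(f, alpha, beta_over_H, T_n, v_w=1.0, g_star=100.0,
                kappa_turb_frac=0.1):
    """Sound-wave plus MHD-turbulence GW spectrum Omega_GW h^2 at
    frequency f (Hz), following eqs. 41-49 with T_* ~ T_n (GeV)."""
    # efficiency factors (eq. 41)
    kappa_v = alpha / (0.73 + 0.083 * math.sqrt(alpha) + alpha)
    kappa_turb = kappa_turb_frac * kappa_v
    # sound-wave lifetime suppression (eqs. 45-46), with
    # R_* H_* = (8 pi)^(1/3) v_w / (beta/H_*)
    U_f = math.sqrt(3.0 * kappa_v * alpha / 4.0)   # RMS fluid velocity
    tau_H = (8.0 * math.pi) ** (1.0 / 3.0) * v_w / (beta_over_H * U_f)
    upsilon = 1.0 - 1.0 / math.sqrt(1.0 + 2.0 * tau_H)
    # peak frequencies (eqs. 44, 48) and the scale h_* (eq. 49)
    common = beta_over_H * (T_n / 100.0) * (g_star / 100.0) ** (1.0 / 6.0) / v_w
    f_sw = 1.9e-5 * common
    f_turb = 2.7e-5 * common
    h_star = 16.5e-6 * (T_n / 100.0) * (g_star / 100.0) ** (1.0 / 6.0)
    # sound-wave contribution (eq. 43)
    x = f / f_sw
    omega_sw = (2.65e-6 * upsilon / beta_over_H * v_w
                * (kappa_v * alpha / (1.0 + alpha)) ** 2
                * (g_star / 100.0) ** (-1.0 / 3.0)
                * x ** 3 * (7.0 / (4.0 + 3.0 * x * x)) ** 3.5)
    # MHD-turbulence contribution (eq. 47)
    y = f / f_turb
    omega_turb = (3.35e-4 / beta_over_H
                  * (kappa_turb * alpha / (1.0 + alpha)) ** 1.5
                  * (100.0 / g_star) ** (1.0 / 3.0) * v_w
                  * y ** 3 / ((1.0 + y) ** (11.0 / 3.0)
                              * (1.0 + 8.0 * math.pi * f / h_star)))
    return omega_sw + omega_turb
```

The spectrum peaks near $f_{\text{SW}}$ (of order mHz for $T_{n}\sim 100$ GeV and $\beta/H_{*}\sim 100$) and falls off at higher frequencies.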
Before ending our discussion of the framework of GW production from an FOPT in
the early Universe, a few words on EWBG are in order. For the early Universe
to undergo successful EWBG via an SFOEWPT, one of the parameters discussed in
this subsection, the bubble wall velocity ($v_{w}$), plays an important role.
From the preceding GW calculation, it is clear that a larger $v_{w}$ is
required for stronger GW production, but achieving the observed
matter-antimatter asymmetry requires a small, subsonic $v_{w}$. Consequently,
a significantly large $v_{w}$, which can generate detectable GW signals, is
detrimental to producing the observed baryon asymmetry.
However, recent studies recognise that $v_{w}$ may not be the velocity that
enters the EWBG calculation, due to the non-trivial plasma velocity profile
surrounding the bubble wall No:2011fi . This would require an in-depth
analysis of particle transport near the wall, which we leave for a future
study. For the current work, we assume that expanding bubbles reach a
relativistic terminal velocity in the plasma, i.e., $v_{w}\simeq 1$.
## 3 Theoretical and experimental constraints
Having set up the framework for our study, we now discuss the various
theoretical and phenomenological constraints on the model parameters used in
our analysis. The theoretical constraints, coming from EW vacuum stability at
zero temperature, perturbativity and partial-wave unitarity, and the
phenomenological constraints from the flavour sector, the neutrino oscillation
data and the electroweak precision observables, including the $\rho$
parameter, are well discussed in the literature Yang:2021buu . For the sake of
completeness, we discuss them in appendices E and F. Other phenomenological
constraints, from the DM experiments and from the neutral and charged Higgs
boson searches at the LHC, are discussed below. Such a discussion is useful to
determine the targeted region of parameter space for our work.
### 3.1 DM constraints
Besides neutrino masses, another objective of the model used in this paper is
to explain the observed DM relic density of the Universe. In Section 2.2 we
provided the analytical formulae necessary to calculate the DM abundance,
$\Omega_{S}h^{2}$, and the spin-independent DM direct detection cross-section
($\sigma_{\rm DD}^{\rm SI}$) within this model. Here, we discuss how we study
the DM phenomenology of the model numerically. First, we generate the model
files using FeynRules Alloul:2013bka , then feed the necessary files into
micrOMEGAs(v4.3) Belanger:2006is and perform a scan over the model parameters
relevant to the dark sector, as listed in equation 23. The package micrOMEGAs
evaluates $\Omega_{S}h^{2}$ and $\sigma_{\rm DD}^{\rm SI}$ for each benchmark
point.
The Planck collaboration has measured the DM relic abundance to be $0.120\pm
0.001$, which we take into account in our DM analysis. On the other hand, the
most stringent bounds on $\sigma_{\rm DD}^{\rm SI}$ are provided by the latest
XENON-1T Aprile:2018dbl and PandaX-4T PandaX-4T:2021bab results. While
imposing these upper limits on $\sigma_{\rm DD}^{\rm SI}$, we take into
account a scale factor $\xi_{\rm DM}$, defined as the ratio of the computed
relic density of the DM particle of the model to the observed DM relic density
of the Universe, i.e., $\xi_{\rm DM}=\frac{\Omega_{S}h^{2}}{0.120}$. This
factor alleviates the strong bounds on $\sigma_{\rm DD}^{\rm SI}$ for a
significant fraction of our scanned points. More details on how the DM relic
density and DMDD constraints are incorporated in our numerical analysis are
given in Section 5.1.
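Schematically, the rescaling applied before comparison with the direct-detection limits is:

```python
def rescaled_sigma_si(omega_s_h2, sigma_si, omega_obs_h2=0.120):
    """Effective spin-independent cross-section compared against the
    XENON-1T / PandaX-4T upper limits: the scale factor
    xi_DM = Omega_S h^2 / 0.120 accounts for S making up only a fraction
    of the DM density when it is under-abundant (a sketch)."""
    xi_dm = omega_s_h2 / omega_obs_h2
    return xi_dm * sigma_si
```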
Indirect searches for DM by experiments such as Fermi-LAT and MAGIC also
constrain the thermally averaged annihilation cross-section of DM into charged
SM pairs, $\xi_{\rm DM}^{2}\langle\sigma
v\rangle_{SS\to\psi\overline{\psi}}$, where
$\psi:=\\{\mu^{-},\tau^{-},b,W^{-}\\}$ MAGIC:2016xys . When the DM is
under-abundant, the scale factor $\xi_{\rm DM}$ further reduces the effective
indirect-search cross-section and alleviates the bounds on the corresponding
annihilation process. As in the direct search, the same portal coupling
$\lambda_{SH}$ is involved in the indirect-search cross-section. The direct
search for DM puts a strong constraint on $\lambda_{SH}$, which leads to an
indirect-search cross-section well below the existing upper bound for the
corresponding process. Therefore, in our discussion of the available DM
parameter space in Section 5.1, we ignore the indirect-search constraints,
which are automatically satisfied once the direct-search constraints are taken
into account.
### 3.2 LHC constraints
Finally, we discuss the phenomenology of the Higgs sector at the LHC in this
subsection. The production of the triplet scalars at the LHC via $s$-channel
exchange of $Z/\gamma^{*}$ and $W^{\pm}$ leads to various final states of SM
particles from their prompt decays. The wide array of phenomenological
consequences of type-II seesaw at the LHC has been thoroughly studied in the
literature Huitu:1996su ; Chakrabarti:1998qy ; Chun:2003ej ; Akeroyd:2005gt ;
Garayoa:2007fw ; Kadastik:2007yd ; Akeroyd:2007zv ; FileviezPerez:2008jbu ;
delAguila:2008cj ; Akeroyd:2009hb ; Melfo:2011nx ; Aoki:2011pz ;
Akeroyd:2011zza ; Chiang:2012dk ; Chun:2012jw ; Akeroyd:2012nd ; Chun:2012zu ;
Dev:2013ff ; Banerjee:2013hxa ; delAguila:2013mia ; Chun:2013vma ;
Kanemura:2013vxa ; Kanemura:2014goa ; Kanemura:2014ipa ; kang:2014jia ;
Han:2015hba ; Han:2015sca ; Das:2016bir ; Babu:2016rcr ; Mitra:2016wpr ;
Cai:2017mow ; Ghosh:2017pxl ; Crivellin:2018ahj ; Du:2018eaw ; Dev:2018kpa ;
Antusch:2018svb ; Aboubrahim:2018tpf ; deMelo:2019asm ; Primulando:2019evb ;
Padhan:2019jlc ; Chun:2019hce ; Ashanujjaman:2021txz ; Mandal:2022zmy ;
Dutta:2014dba . Numerous collider searches have been carried out by the CMS
and ATLAS collaborations to hunt for the same at the LHC ATLAS:2012hi ;
Chatrchyan:2012ya ; ATLAS:2014kca ; Khachatryan:2014sta ; CMS:2016cpz ;
CMS:2017pet ; Aaboud:2017qph ; CMS:2017fhs ; Aaboud:2018qcu ; Aad:2021lzu .
However, there has not been any evidence of an excess over the SM background
expectations so far. Thus, these searches strongly constrain the parameter
space relevant to the type-II seesaw model.
Figure 2: [Left] Branching ratios of doubly charged scalar, $H^{\pm\pm}$ as a
function of $v_{t}$ for $m_{H^{\pm\pm}}=300$ GeV and $\Delta m=15$ GeV.
[Right] Phase diagram for $H^{\pm\pm}$ decays to different modes is shown in
$\Delta m-v_{t}$ plane by different contour lines: the leptonic decay mode by
blue dotted line, the gauge boson decay mode by orange dotted line and the
cascade decay mode by the blue line. Each contour line corresponds to $99\%$
of the branching ratio for the corresponding decay channel. The mass of
$H^{\pm\pm}$ is kept fixed at $m_{H^{\pm\pm}}=300$ GeV. The cascade-dominated
region ($Br(H^{\pm\pm}\to{W^{\pm}}^{*}H^{\pm})>99\%$), in which we are
interested in our discussion, is shown by the shaded orange region.
The most tantalizing collider signature of a type-II seesaw motivated model is
the production and decay of $H^{\pm\pm}$. It is well known that $H^{\pm\pm}$
can decay to $\ell^{\pm}\ell^{\pm}$ ($W^{\pm}W^{\pm})$ for low (relatively
large) values of $v_{t}$, when the triplet scalars are degenerate in mass. If
one introduces a mass-splitting between the triplet scalars, the
$H^{\pm\pm}\rightarrow H^{\pm}W^{\pm}$ decay channel additionally opens up for
$\Delta m>0$
scenarios. However, as we have seen in Section F.3, the EW precision
observables limit the mass-splitting, $\Delta m$, to be less than 40 GeV.
Hence, this restricts $H^{\pm\pm}$ to decay to either $H^{\pm}\ell^{\pm}\nu$
or $H^{\pm}q\bar{q^{\prime}}$ via an off-shell $W^{\pm*}$. For $\Delta
m\lesssim\mathcal{O}(2)$ GeV, where quarks cease to behave as free particles,
the decay $H^{\pm\pm}\to H^{\pm}\pi^{\pm}$ must also be considered.
Similar behavior can be observed for decays of $H^{\pm}$ as well, again for
$\Delta m>0$, leading to a cascade decay of $H^{\pm\pm}$ in turn. In fact, for
$v_{t}\sim 10^{-3}-10^{-6}$ GeV, the $H^{\pm\pm}(H^{\pm})\rightarrow
H^{\pm}(H^{0}/A^{0})W^{\pm*}$ decay becomes the dominant decay channel for
$\Delta m\gtrsim 10$ GeV. For a detailed discussion on the various decay
channels of $H^{\pm\pm}(H^{\pm})$ we refer interested readers to references
FileviezPerez:2008jbu ; Ghosh:2018drw . To illustrate the above points, we
show the branching ratios of $H^{\pm\pm}$ to $\ell^{\pm}\ell^{\pm}$
($W^{\pm}W^{\pm}$) and the cascade channel ($H^{\pm}W^{\pm*}$) for
$m_{H^{\pm\pm}}=300$ GeV with $\Delta m=15$ GeV in the left panel of figure 2.
In contrast, we display the
Br$(H^{\pm\pm}\rightarrow H^{\pm}W^{\pm*})>99\%$ region by the shaded area in
the right panel of figure 2 as a function of $v_{t}$ as well as $\Delta m$.
In the $\Delta m=0$ case, the ATLAS collaboration places the most stringent
bound of $m_{H^{\pm\pm}}\gtrsim 870$ GeV, assuming
Br($H^{\pm\pm}\rightarrow\mu^{\pm}\mu^{\pm}$) to be 100$\%$ Aaboud:2017qph ,
and using 36.1 fb$^{-1}$ of data at $\sqrt{s}=13$ TeV. On the other hand,
ATLAS excludes $m_{H^{\pm\pm}}\lesssim 350$ GeV in the $W^{\pm}W^{\pm}$ final
state with 139 fb$^{-1}$ of data Aad:2021lzu . The first bound is applicable
in the small
$v_{t}$ regime, while the second is for relatively large $v_{t}$ values. It is
clear from the discussion in the previous paragraph that the above limits are
not applicable to the whole $\Delta m-m_{H^{\pm\pm}}$ plane for the range of
$v_{t}$ where the cascade decay has a significant branching ratio.
Let us first consider $\Delta m<0$ scenarios. For a large mass-splitting
($\mathcal{O}(10)$ GeV) and a moderate $v_{t}$, cascade decays of
$H^{0}/A^{0}\to H^{\pm}W^{\mp*}$ and $H^{\pm}\to H^{\pm\pm}W^{\mp*}$, dominate
over other decays. In this regime, $H^{\pm\pm}$ is the lightest triplet
scalar, and decays to either $\ell^{\pm}\ell^{\pm}$ or $W^{\pm}W^{\pm}$
depending on the choice of $v_{t}$. Hence, the effective pair production
cross-section of $H^{\pm\pm}$ is augmented by cascade decays of $H^{0},A^{0}$
and $H^{\pm}$. So, one can expect the bounds on $m_{H^{\pm\pm}}$ to be
stronger in $\Delta m<0$ scenarios than in the $\Delta m=0$ case. In a recent
analysis, reference Ashanujjaman:2021txz has shown by recasting various ATLAS
and CMS analyses that for $\Delta m=-10$ ($-30$) GeV the present exclusion
limit for $m_{H^{\pm\pm}}$ is 1115 (1076) GeV with $v_{t}\sim 10^{-5}-10^{-6}$
GeV. In the same paper, the authors have improved the bound on
$m_{H^{\pm\pm}}$ for $\Delta m=0$ to 955 GeV (420 GeV) for small (relatively
large) $v_{t}$.
On the other hand, in $\Delta m>0$ scenarios, $H^{\pm\pm}$ is the heaviest
triplet member, and again for a mass-splitting of $\mathcal{O}(10)$ GeV and a
moderate $v_{t}$, $H^{\pm\pm}$ will mostly undergo the cascade decay,
$H^{\pm\pm}\rightarrow W^{\pm*}H^{\pm}\rightarrow
W^{\pm*}W^{\pm*}\,H^{0}/A^{0}$. $H^{0}$ and $A^{0}$ mostly decay to invisible
neutrinos for $v_{t}<$ $10^{-4}$ GeV. $H^{0}$ can also decay to $b\bar{b}$ or
$\tau^{+}\tau^{-}$ if its mixing angle with the SM Higgs is sufficiently
large. The EW precision bound of $\Delta m<40$ GeV ensures that the leptons
and jets arising from $H^{\pm\pm}$ cascade decay will be soft and may not
cross the $p_{T}^{\ell/j}$ thresholds required in canonical
$\ell^{\pm}\ell^{\pm}$ and $W^{\pm}W^{\pm}$ searches for $H^{\pm\pm}$.
Reference Ashanujjaman:2021txz shows that even with 3 ab$^{-1}$ of integrated
luminosity, the LHC will be unable to constrain $m_{H^{\pm\pm}}\gtrsim 200$
GeV for $\Delta m=10$ GeV (30 GeV), with $v_{t}$ around $10^{-3}$ GeV to
$10^{-5}$ GeV ($10^{-3}$ GeV to $10^{-6}$ GeV). This scenario is akin to
compressed supersymmetry spectra, and a dedicated search strategy needs to be
developed to probe this region Baer:2014kya ; Dutta:2012xe ; Dutta:2014jda ;
Ajaib:2015yma ; Dutta:2017nqv .
In addition, the measurements of the properties of the observed 125 GeV Higgs
boson can also restrict some of our model parameters. As the observed Higgs
boson's couplings to the SM particles are found to be almost SM-like, no
significant mixing is allowed between the neutral $CP$-even components of the
triplet and the SM doublet. In order to satisfy the Higgs signal strength
bounds, the mixing angle $\sin\theta_{t}$, defined in equation 7, should be
below 0.1 ATLAS:2016neq ; CMS:2018uag . The observed Higgs can decay to a pair
of DM scalars if $m_{S}<m_{h^{0}}/2$. Hence, we impose the limit
Br($h^{0}\rightarrow$ invisible) $<$ 0.15 on our model parameters CMS:2018yfx
.
Furthermore, the decay width of the SM Higgs into a photon pair can be
affected by the presence of $H^{\pm}$ and $H^{\pm\pm}$ in the loops.
$\Gamma(h^{0}\rightarrow\gamma\gamma)$ can be either enhanced or reduced
compared to the SM value depending on the signs of the quartic couplings
$\lambda_{1}$ and $\lambda_{4}$. Since the partial decay width of the SM Higgs
into photons is tiny, the total decay width of $h^{0}$ barely changes. The
signal strength parameter of the $h^{0}\rightarrow\gamma\gamma$ channel is
defined as
$\mu_{\gamma\gamma}=\frac{\Gamma^{\rm NP}[h^{0}\rightarrow\gamma\gamma]}{\Gamma^{\rm SM}[h^{0}\rightarrow\gamma\gamma]},$
(50)
where $\Gamma^{\text{SM (NP)}}[h^{0}\rightarrow\gamma\gamma]$ denotes the
decay rate without (with) the inclusion of new physics. We consider that the
production cross-section of $h^{0}$ will remain the same when the effect of NP
is included since $\cos\theta_{t}\rightarrow 1$. For more details on
$\Gamma(h^{0}\rightarrow\gamma\gamma)$ within the type-II seesaw model we
refer the reader to references Arhrib:2011vc ; Gunion:1989we . The current
limits on $\mu_{\gamma\gamma}$ from ATLAS and CMS are $1.04^{+0.10}_{-0.09}$
ATLAS:2022tnm and $1.12{\pm 0.09}$ CMS:2021kom , respectively.
## 4 Choice of parameter space
### 4.1 Implications from Higgs searches
As discussed in subsection 3.2, LHC direct searches for $H^{\pm\pm}$ will not
be able to constrain triplet-like scalar masses even at 200 GeV for a moderate
$v_{t}$ and a relatively large $\Delta m$ ($>10$ GeV) Ashanujjaman:2021txz .
Depending on the value of $v_{t}$ and the mixing with $h^{0}$, in this
scenario $H^{0}/A^{0}$ decays into either
$\nu\nu/b\bar{b}/\tau^{+}\tau^{-}$ or $h^{0}h^{0}/ZZ/h^{0}Z$. Henceforth, in
this work, we focus on the region of parameter space of the type-II seesaw
part of the model with 10 GeV $<\Delta m<$ 40 GeV and $10^{-6}$ GeV
$<v_{t}<10^{-3}$ GeV. On the other hand, the rich scalar sector of this model
can generate stochastic GW signals from an FOPT in the early Universe. As the
LHC fails to constrain this region through direct searches for triplet-like
scalars, we try to probe this region of parameter space with GW experiments.
For $\Delta m>0$, $\lambda_{4}$ needs to be negative (see equation 13). Thus,
we consider the $\lambda_{4}<0$ region of parameter space in our analysis. For
a successful FOPT to take place in the Higgs sector, the two most important
quartic couplings are $\lambda_{1}$ and $\lambda_{4}$. They are responsible
for mixing the doublet and the triplet fields (see section 2.3.1). We vary the
absolute value of these couplings from 0 to 4. Other quartic couplings of the
triplet-sector $\lambda_{2}$ and $\lambda_{3}$ do not significantly affect
FOPT. Thus, we set them to fixed values for the rest of our study. Since we
focus on light triplet-like scalars, we require $m_{H^{\pm\pm}}$ within 200
GeV to 600 GeV, and for that purpose, we vary $M_{\Delta}$ from 100 GeV to 800
GeV. In addition, for the above choice of our $v_{t}$ and $m_{H^{\pm\pm}}$
values, our benchmark points pass the flavour constraints discussed in
subsection F.2 as well.
The precision measurements of $h^{0}$ couplings at the LHC restrict the mixing
between the doublet-like and the triplet-like $CP$-even neutral scalars to
small values. Therefore, we limit our scan to tiny $\sin\theta_{t}$ values to
allow minimum mixing and maintain the SM-like nature of $h^{0}$. However,
loop-induced $h^{0}$ decays, such as $h^{0}\rightarrow\gamma\gamma$, can
deviate significantly from the observed limits even at small $\sin\theta_{t}$
values. We discuss the variation of the signal strength, $\mu_{\gamma\gamma}$
of the $h^{0}\rightarrow\gamma\gamma$ channel in the above-discussed region of
parameter space below.
Figure 3: Variation of the signal strength $\mu_{\gamma\gamma}$ in the
$\lambda_{1}-\lambda_{4}$ plane. The other two parameters, $M_{\Delta}$ and
$v_{t}$, of the triplet sector that can alter the signal strength are fixed at
300 GeV and $10^{-5}$ GeV, respectively.
At the one-loop level, the top quark, along with the charged weak gauge bosons
and the charged triplet-like Higgs states contribute to the decay of
$h^{0}\rightarrow\gamma\gamma$. The deviation of the signal strength of this
decay channel $\mu_{\gamma\gamma}$, defined in equation 50, from 1 can become
a signature of new physics beyond the SM. We present the variation of
$\mu_{\gamma\gamma}$ for $\lambda_{1}>0$, $\lambda_{4}<0$, $M_{\Delta}=300$
GeV and $v_{t}=10^{-4}$ GeV in figure 3. The variations of $\lambda_{1}$ and
$\lambda_{4}$ are presented in the $x$-axis and $y$-axis, respectively,
whereas the variation of $\mu_{\gamma\gamma}$ is indicated by palette colors.
The figure also takes the theoretical constraints into consideration. Due to
the vacuum stability constraints discussed in section
E.1,444Particularly the inequality relations
$(\lambda_{1}+\lambda_{4})+2\sqrt{\lambda_{H}(\lambda_{2}+\lambda_{3})}\geq 0$
and
$(2\lambda_{1}+\lambda_{4})+4\sqrt{\lambda_{H}(\lambda_{2}+\frac{\lambda_{3}}{2})}\geq
0$. the white empty region of figure 3 is excluded. We note that due to a
destructive interference between the SM and BSM contributions to
$\mu_{\gamma\gamma}$, it is mostly less than 1 for $\lambda_{4}<0$. The
deviation of $\mu_{\gamma\gamma}$ from 1 increases with the increase of the
absolute values of $\lambda_{1}$ and $\lambda_{4}$. Thus, precision studies of
$\mu_{\gamma\gamma}$ at the LHC and other proposed collider experiments can
exclude a significant amount of the parameter space of interest in this paper.
We shall discuss this issue further in section 5.
### 4.2 Implications from DM searches
In contrast to the Higgs sector discussed above, in the dark sector we vary
$m_{S}$, $\lambda_{SH}$, and $\lambda_{S\Delta}$ within certain ranges to
satisfy all DM constraints. The $\lambda_{SH}$ coupling has a significant
impact on the FOPT, as it mixes the singlet $h_{s}$ field with the $h_{d}$
field (see section 2.3.1). Thus, the possibility of a strong FOPT is expected
to increase with a large $\lambda_{SH}$. However, as we have already discussed
in section 2.2, increasing the same coupling also increases $\sigma_{\rm
DD}^{\rm SI}$. Hence, large $\lambda_{SH}$ values are ruled out by DMDD
limits. We
shall discuss this issue in more detail in section 5.3. Another dark sector
parameter, $\mu_{3}$, also impacts both DM and the FOPT. This coupling not
only can produce the necessary barrier for FOPT but also contributes to DM
semi-annihilation processes. We vary this parameter in our study subject to
the constraint $\mu_{3}/m_{S}\lesssim 2\sqrt{\lambda_{S}}$ from the stability
of the global minimum, which is discussed in subsection E.1.
To summarize, the ranges of input parameters of the model we consider for the
scan are listed in table 2. The other parameters are fixed at
$\lambda_{H}=0.129$, $\lambda_{2}=0.1$, $\lambda_{3}=0$, $\lambda_{S}=0.1$.
Varying parameters | $\lambda_{1}$ | $\lambda_{4}$ | $M_{\Delta}$ (GeV) | $v_{t}$ (GeV) | $m_{S}$ (GeV) | $|\lambda_{SH}|$ | $|\lambda_{S\Delta}|$ | $\mu_{3}$ (GeV) | $|\lambda_{S}|$
---|---|---|---|---|---|---|---|---|---
Ranges | 0 – 4 | $-4$ – 0 | 100 – 800 | $10^{-6}$ – $10^{-3}$ | 0 – 1000 | $<4$ | $<6$ | $<1000$ | $<6$
Table 2: Ranges of various model parameters employed for scanning the present
model parameter space.
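Schematically, one point of such a scan could be drawn as follows (a sketch; the parameter names and the sampling measure, with $v_{t}$ taken log-uniform over its range, are our assumptions, and the $|\lambda_{S}|$ column of table 2 is sampled even though the text fixes $\lambda_{S}=0.1$):

```python
import random

def sample_point(rng=None):
    """Draw one parameter-space point from the scan ranges of table 2
    (illustrative only; not the authors' actual scanning code)."""
    rng = rng or random.Random()
    return {
        "lambda_1": rng.uniform(0.0, 4.0),
        "lambda_4": rng.uniform(-4.0, 0.0),
        "M_Delta_GeV": rng.uniform(100.0, 800.0),
        "v_t_GeV": 10.0 ** rng.uniform(-6.0, -3.0),  # log-uniform draw
        "m_S_GeV": rng.uniform(0.0, 1000.0),
        "abs_lambda_SH": rng.uniform(0.0, 4.0),
        "abs_lambda_SDelta": rng.uniform(0.0, 6.0),
        "mu_3_GeV": rng.uniform(0.0, 1000.0),
        "abs_lambda_S": rng.uniform(0.0, 6.0),
    }
```

Each sampled point would then be passed through the theoretical and experimental constraints of section 3 before the FOPT and GW analysis.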
## 5 Results
In this section, we investigate the DM phenomenology, the thermal history
regarding an FOEWPT, the associated production of stochastic GW, and the
interplay among DM, GW and collider physics in this model. In subsection 5.1,
we analyse the dependence of the DM phenomenology, in terms of both the DM
relic density and the DMDD SI cross-section, on the triplet-sector scalar
masses. We study the variation of the various dark-sector model parameters in
the estimation of the relic density and the DMDD SI cross-section. The
implications of the various model parameters for the FOEWPT are studied in two
parts. The impact of the triplet-sector parameters on the FOEWPT, the
production of GW and the interplay with the LHC are studied in subsection 5.2.
Two benchmark scenarios are presented to illustrate the effect of the
triplet-sector parameters on the FOEWPT. In subsection 5.3, we study the
possibility of an FOEWPT induced by altering the shape of the SM Higgs
potential mainly from the dark sector. The connection between the DM
observables and an FOPT is discussed, and a benchmark scenario exhibiting a
two-step phase transition in the early Universe is presented to illustrate
such a scenario.
### 5.1 DM Parameter space
In this section, we discuss the DM parameter space in terms of model
parameters consistent with the DM relic density observed by the PLANCK
experiment and the upper bound on the spin-independent DM-nucleon
cross-section provided by the XENON-1T and PandaX-4T experiments. Before going
into the details of the parameter-space scan, we examine the variation of the
DM relic density with the DM mass, $m_{S}$ $(\equiv m_{\rm DM})$, and the
other relevant parameters of the model, such as the scalar portal couplings
$\lambda_{SH}$, $\lambda_{S\Delta}$ and the trilinear coupling $\mu_{3}$. The
abundance of DM also depends significantly on the triplet scalar masses, i.e.,
$m_{H^{0}},~m_{A^{0}},~m_{H^{\pm}}$ and $m_{H^{\pm\pm}}$, which can be
expressed in terms of the independent parameters given in equation 12. It is
difficult to disentangle the significance of all the relevant parameters if
they vary simultaneously. Therefore, we fix the triplet-sector parameters (in
equation 12), which simplifies the setup and allows us to investigate the DM
phenomenology. For the DM discussion, we consider one benchmark point from the
cascade region of the triplet sector (as discussed in section 4.2), given by:
$\displaystyle\\{M_{\Delta}=367.7~{\rm GeV},~v_{t}=3.3\times 10^{-4}~{\rm GeV},~\lambda_{1}=3.92,~\lambda_{2}=0.1,~\lambda_{3}=0,~\lambda_{4}=-0.989\\}$
which corresponds to the physical masses and mixing angle, following from
equations LABEL:MCPE, 9, 10 and 11, as:
$\displaystyle\\{m_{H^{0}}=367.7,~m_{A^{0}}=367.7,~m_{H^{\pm}}=387.5,~m_{H^{\pm\pm}}=406.4~{\rm GeV},~\mu=0.00104,~\sin\theta_{t}=-1.03\times 10^{-6}\\}$ (52)
It is worth mentioning here that one can also consider a different set of
triplet-sector parameters, for which the underlying physics remains the same;
only the allowed region of DM mass changes, depending on the triplet scalar
masses. The phenomenology of DM therefore depends on the following independent
parameters of the dark sector, along with the fixed parameters above,
$\displaystyle\\{m_{S},~\mu_{3}/m_{S}=1,~\lambda_{SH},~\lambda_{S\Delta}\\}.$
(53)
Note that for the DM discussion we take $\mu_{3}/m_{S}=1$, which is
responsible for the semi-annihilation process $S~S\to S~h^{0}$. Larger values
of $\mu_{3}/m_{S}$ lead to a larger semi-annihilation contribution to the DM
abundance. However, a stable EW vacuum sets an upper bound on the trilinear
coupling, $\mu_{3}\leq 2m_{S}$, as discussed earlier.
Figure 4: Variation of the relic density as a function of the DM mass for
different ranges of $\lambda_{S\Delta}$: $0.01\leq\lambda_{S\Delta}\leq 0.04$
(red), $0.04<\lambda_{S\Delta}\leq 0.1$ (blue), $0.1<\lambda_{S\Delta}\leq
0.3$ (green) and $0.3<\lambda_{S\Delta}\leq 0.5$ (orange). We illustrate two
different choices of $\lambda_{SH}$: $\\{0.01-0.02\\}$ (left) and
$\\{0.10-0.11\\}$ (right).
In figure 4, we show the variation of the DM abundance ($\Omega_{S}h^{2}$) as
a function of $m_{S}$ for two different choices of $\lambda_{SH}$:
$0.01\leq\lambda_{SH}\leq 0.02$ (left) and $0.10\leq\lambda_{SH}\leq 0.11$
(right). Different coloured patches indicate different ranges of the DM portal
coupling with the triplet, $\lambda_{S\Delta}$: $0.01\leq\lambda_{S\Delta}\leq
0.04$ (red), $0.04<\lambda_{S\Delta}\leq 0.1$ (blue),
$0.1<\lambda_{S\Delta}\leq 0.3$ (green) and $0.3<\lambda_{S\Delta}\leq 0.5$
(orange) in each plot. The black dotted horizontal lines indicate the observed
DM relic density from the PLANCK data Planck:2018vyg , i.e.,
$\Omega_{\text{DM}}h^{2}=0.120$. As the DM mass increases, different
number-changing processes open up above their respective thresholds, each
resulting in a drop in the relic density.
$\bullet$ $m_{S}<m_{h^{0}}$: When the DM mass satisfies $m_{S}<m_{h^{0}}$, the
DM abundance is mostly controlled by the number-changing process
$S~S\to{\rm SM~SM}$, mediated by both CP-even Higgses, $h^{0}$ and $H^{0}$.
The DM density then varies as
$\displaystyle\Omega_{S}h^{2}\propto\frac{1}{\langle\sigma v\rangle_{SS\to{\rm SM~SM}}}~,$ (54)
where, in the small-$\sin\theta_{t}$ limit, $\langle\sigma v\rangle_{SS\to{\rm
SM~SM}}\propto{\lambda_{SH}^{2}}/{m_{S}^{2}}$ and is almost independent of
$\lambda_{S\Delta}$, as seen in figure 4. The annihilation contribution is
suppressed as the DM mass increases, and hence the relic density increases.
For a fixed DM mass, the relic density decreases with increasing
$\lambda_{SH}$, as shown in the right panel. A sharp drop in the DM density
occurs near $m_{S}\sim m_{h^{0}}/2$ due to the SM Higgs ($h^{0}$) resonance.
Beyond the Higgs ($h^{0}$) pole, for $m_{S}>m_{W},m_{Z}$, DM annihilates into
pairs of gauge bosons, resulting in a large enhancement in
$\langle\sigma v\rangle$ and hence a lower density.
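Under the small-$\sin\theta_{t}$ assumption above, $\langle\sigma v\rangle\propto\lambda_{SH}^{2}/m_{S}^{2}$, so the relic density scales as $\Omega_{S}h^{2}\propto m_{S}^{2}/\lambda_{SH}^{2}$. A minimal numerical sketch of this scaling (the `prefactor` normalisation is an arbitrary, hypothetical placeholder; only ratios between points are meaningful):

```python
# Illustrative scaling of the relic density in the m_S < m_h0 regime,
# assuming <sigma v> ~ lambda_SH^2 / m_S^2 (small sin(theta_t) limit).
# The prefactor is arbitrary; only ratios between points carry meaning.

def relic_density_scaling(m_s, lam_sh, prefactor=1.0):
    """Omega h^2 up to an arbitrary normalisation: proportional to m_S^2 / lambda_SH^2."""
    sigma_v = lam_sh**2 / m_s**2      # annihilation cross-section scaling
    return prefactor / sigma_v        # Omega proportional to 1 / <sigma v>

# Doubling lambda_SH at fixed m_S lowers the relic density by a factor ~4,
# consistent with comparing the left and right panels of figure 4.
ratio = relic_density_scaling(100.0, 0.02) / relic_density_scaling(100.0, 0.01)
assert abs(ratio - 0.25) < 1e-9
```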
$\bullet$ $m_{h^{0}}\leq m_{S}<m_{H^{0}}$: A new annihilation channel,
$S~S\to S~h^{0}$ (semi-annihilation), starts contributing when the DM mass
becomes heavier than the Higgs mass ($m_{S}>m_{h^{0}}$), thanks to the
trilinear coupling $\mu_{3}$ present in this scenario. The relic density of DM
in this region therefore follows
$\displaystyle\Omega_{S}h^{2}\propto\frac{1}{\langle\sigma v\rangle_{SS\to{\rm
SM~SM}}+\langle\sigma v\rangle_{SS\to Sh^{0}}}.$ (55)
The second term in the above equation depends on $\mu_{3}$ and $\lambda_{SH}$,
whereas the first term depends only on $\lambda_{SH}$. The semi-annihilation
contribution dominates over the standard annihilation to SM states for larger
values of $\mu_{3}$, which helps to evade the direct-search bound on
$\lambda_{SH}$. With increasing $\lambda_{SH}$, both the standard annihilation
and semi-annihilation contributions are significantly enhanced, and hence the
relic density decreases, as depicted in the left and right panels of figure 4.
Note that the effect of semi-annihilation is most pronounced near
$m_{S}\gtrsim m_{h^{0}}$, where the propagator suppression is minimal. There
is no variation of the relic density with $\lambda_{S\Delta}$, because both
contributions are almost insensitive to changes in $\lambda_{S\Delta}$ in the
small-$\sin\theta_{t}$ limit. There is also no drop in the relic density near
$m_{S}\sim m_{H^{0}}/2$ due to the second, heavy Higgs $H^{0}$, because the
heavy-Higgs-mediated diagrams $S~S\to{\rm SM~SM}$ are strongly suppressed by
the small $\sin\theta_{t}$ and $v_{t}$.
$\bullet$ $m_{S}\gtrsim m_{H^{0}}$: New annihilation processes start to
contribute to the DM relic density when the DM mass is larger than the masses
of the triplet states, i.e. $2m_{S}>m_{X}+m_{Y}$, and the relic density of DM
turns out to be
$\displaystyle\Omega_{S}h^{2}\propto\frac{1}{\langle\sigma v\rangle_{SS\to{\rm
SM~SM}}+\langle\sigma v\rangle_{SS\to Sh^{0}}+\langle\sigma v\rangle_{SS\to
X~Y}}~,~\\{X,Y\\}:=\\{H^{0},A^{0},H^{\pm},H^{\pm\pm}\\}.$ (56)
The Higgs-mediated s- and t-channel diagrams for the $SS\to XY$ process are
suppressed by the small mixing angle $\sin\theta_{t}$ and the small $v_{t}$,
but the four-point contact diagrams enhance the cross-section significantly as
$\lambda_{S\Delta}$ increases. Therefore, the effect of the new annihilation
channels on the DM relic density becomes more prominent with increasing
$\lambda_{S\Delta}$. Since this coupling is almost irrelevant for
$\sigma_{\rm DD}^{\rm SI}$ at small mixing angle, one can adjust it to satisfy
the observed DM density. For smaller values of $\lambda_{SH}$ and
$\mu_{3}(=m_{S})$, the most dominant contribution to the DM relic density
comes from the new annihilation channels $S~S\to X~Y$, whereas the other two
contributions, $S~S\to{\rm SM~SM}$ and $S~S\to S~h^{0}$, are sub-dominant.
With increasing $\lambda_{S\Delta}$, $\langle\sigma v\rangle_{SS\to XY}$
increases and hence the relic density decreases, as shown in the left panel of
figure 4 for the fixed choices $\lambda_{SH}:\\{0.01-0.02\\}$ and
$\mu_{3}/m_{S}=1$. In the right panel of figure 4, we consider comparatively
larger values, $\lambda_{SH}:\\{0.10-0.11\\}$, which further increase the
$S~S\to{\rm SM~SM}$ and $S~S\to S~h^{0}$ contributions and hence the total
effective annihilation cross-section. As a result, the relic density decreases
compared with the left panel. Note that a new DM semi-annihilation channel,
$SS\to SH^{0}$, is kinematically allowed here, but its contribution to the
relic density is almost negligible, since the cross-section is suppressed by
the small mixing angle $\sin\theta_{t}$ and the small $v_{t}$.
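The three mass regimes above can be summarised as a piecewise-growing effective annihilation rate. The sketch below is purely structural: the channel strengths `ann`, `semi_ann` and `new_ann` are hypothetical placeholders, not computed cross-sections, and the code only encodes which channels enter $\langle\sigma v\rangle_{\rm eff}$ in each regime.

```python
def sigma_v_eff(m_s, m_h0, m_H0, ann, semi_ann, new_ann):
    """Structural sum of the annihilation channels entering the relic density.

    ann, semi_ann and new_ann are placeholder strengths for SS -> SM SM,
    SS -> S h0 and SS -> X Y (X, Y triplet-like states), respectively."""
    total = ann                 # SS -> SM SM: open in all three regimes
    if m_s > m_h0:              # semi-annihilation opens above m_h0
        total += semi_ann
    if 2 * m_s > 2 * m_H0:      # SS -> XY opens once 2 m_S > m_X + m_Y (m_X ~ m_Y ~ m_H0)
        total += new_ann
    return total

# Each threshold crossing increases <sigma v>_eff and hence lowers Omega h^2:
light, mid, heavy = (sigma_v_eff(m, 125.0, 367.7, 1.0, 1.0, 1.0)
                     for m in (100.0, 200.0, 500.0))
assert light < mid < heavy
```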
Figure 5: [Left] Spin-independent DM-nucleon scattering cross-section
($\sigma^{\rm SI}_{\rm DD}$) for the relic-density-allowed (PLANCK) parameter
space as a function of the DM mass ($m_{S}$). For comparison, the current
upper bounds on the SI DM-nucleon cross-section from the XENON-1T
Aprile:2018dbl and PandaX-4T PandaX-4T:2021bab data are shown in the same
plane. The neutrino floor is represented by a shaded orange region. [Right]
The relic (PLANCK) $+$ DD (PandaX-4T) allowed parameter space is shown in the
$m_{S}-\lambda_{SH}$ plane for different ranges of $\lambda_{S\Delta}$, shown
by different colour patches. Note that the DM candidate $S$ is assumed here to
constitute $100\%$ ($\xi_{\rm DM}=1$) of the observed DM density.
We now move to the parameter-space scan. To obtain the allowed region of
parameter space, we perform a three-dimensional random scan over the following
parameters, keeping the other parameters fixed as discussed above:
$\displaystyle\\{m_{S}:~\\{30-1000\\}~{\rm GeV},~~\lambda_{SH}:\\{0.001-0.3\\},~~\lambda_{S\Delta}:\\{0.01-0.5\\}\\}~.$
(57)
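The scan of equation 57 can be sketched as follows. The paper does not specify the sampling measure, so a uniform draw within each quoted range is an assumption; in practice each point would then be passed to a relic-density and direct-detection calculator and kept only if it satisfies the PLANCK and PandaX-4T constraints.

```python
import random

# Scan ranges from equation 57; uniform sampling within each range is an
# assumption -- the text only specifies a "random scan".
RANGES = {
    "m_S": (30.0, 1000.0),          # GeV
    "lambda_SH": (0.001, 0.3),
    "lambda_SDelta": (0.01, 0.5),
}

def draw_point(rng=random):
    """Draw one scan point; all other model parameters stay at their fixed values."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}

points = [draw_point() for _ in range(10_000)]
assert all(RANGES[k][0] <= p[k] <= RANGES[k][1] for p in points for k in p)
```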
In the left panel of figure 5, we show the relic-density-allowed parameter
space (with $\xi_{\rm DM}=1$) emerging from the random scan, in the plane of
$\sigma_{\rm DD}^{\rm SI}$ versus the DM mass $m_{S}$. As discussed in section
2.2, in the small-mixing limit ($\sin\theta_{t}\to 0$) the DD cross-section is
proportional to $\lambda_{SH}^{2}$ and almost independent of
$\lambda_{S\Delta}$. Therefore, the DD cross-section increases with
$\lambda_{SH}$. The latest upper bounds on $\sigma_{\rm DD}^{\rm SI}$ as a
function of the DM mass from XENON-1T (purple dashed line) and PandaX-4T
(black dashed line) are shown in the same plane. The parameter space above the
dashed lines is disfavoured by the non-observation of a DM signal in the
corresponding direct-search experiments. We find that for DM masses below the
mass of the lightest triplet state ($m_{S}<m_{H^{0}}$), a comparatively large
DM-SM Higgs portal coupling $\lambda_{SH}$ is required to satisfy the observed
relic density, which corresponds to a DD cross-section lying above the
experimental upper limit. As a result, most of this region ($m_{S}<m_{H^{0}}$)
is excluded by the non-observation of a DM signal, apart from the Higgs pole,
$m_{S}\sim m_{h^{0}}/2$. Once the triplet final states open up, the DM portal
coupling with the triplet, $\lambda_{S\Delta}$, takes over the major role in
controlling the DM relic density. With the help of $\lambda_{S\Delta}$, which
is irrelevant for the DD cross-section, the pressure on $\lambda_{SH}$ to
produce the correct relic density can be relieved, so that the direct-search
bound can be evaded. This behaviour can be observed in the right panel of
figure 5. As a result, the region above $m_{S}>m_{H^{0}}$ is allowed by both
the relic and direct-search constraints for large $\lambda_{S\Delta}(\gtrsim
0.1)$.
### 5.2 Implications of the triplet sector parameters on SFOEWPT
We have already discussed that, for the observed 125 GeV Higgs mass,
electroweak symmetry breaking in the SM occurs through a smooth cross-over
transition. However, it can be an FOPT in the presence of additional scalars
in beyond-the-SM extensions. A further benefit of an FOEWPT is that it can
supply one of the necessary conditions for explaining the BAU. In this
section, we illustrate the possibility of an FOPT in the electroweak sector in
our chosen parameter region of the current model. To explain the observed
matter-antimatter asymmetry via the mechanism of EWBG, the FOPT in the
electroweak sector needs to satisfy the additional condition
$\xi_{n}=\frac{v_{n}}{T_{n}}>1,$ (58)
where $v_{n}$ is the ${\it vev}$ in the electroweak minimum at the nucleation
temperature $T_{n}$. The strength of the FOPT is quantified via $\xi_{n}$.
This criterion is required to prevent the wash-out of the generated baryon
asymmetry after the EWPT. We present four plots of the FOPT-allowed scan
results in figure 6 to illustrate the correlations among the triplet-sector
model parameters in connection with the FOPT and its strength $\xi_{n}$. Here,
we fix the dark-sector couplings to small values to ensure that their impact
on the FOPT is minimal, and we vary only $\lambda_{1}$, $\lambda_{4}$,
$v_{t}$ and $M_{\Delta}$, in the ranges given in table 2. We thus study the
impact of the triplet-sector parameters on the FOPT in the $h_{d}$-field
direction. In our scan results $v_{n}\sim v_{h}$, since the triplet
${\it vev}$ always remains very small.
Figure 6: Scatter points that exhibit an FOPT along the $h_{d}$-field
direction in the $\lambda_{1}-\lambda_{4}$ plane, with $M_{\Delta}$ (the
strength of the phase transition, $\xi_{n}$) indicated by the palette in the
top-left (bottom-right) plot. The variation of the same points in the
$m_{H^{\pm\pm}}-\lambda_{1}$ ($m_{H^{\pm\pm}}-\lambda_{4}$) plane, with the
palette indicating the strength of the phase transition $\xi_{n}$, is shown in
the top-right (bottom-left) plot.
Figure 7: Left: Scatter points in the $(\lambda_{1}+\lambda_{4})-M_{\Delta}$
plane that satisfy an FOPT along the $h_{d}$-field direction, with the palette
indicating the strength of the phase transition $\xi_{n}$. Right: The same
scatter points in the $(\lambda_{1}+\lambda_{4})-T_{n}$ plane, with the
palette showing the variation of $M_{\Delta}$.
The top-left plot of figure 6 shows the FOPT-allowed region in $\lambda_{1}$
and $\lambda_{4}$, with $M_{\Delta}$ indicated by the palette colour. It shows
that an FOPT prefers relatively larger $\lambda_{1}$. We have already
discussed that in our chosen parameter region ($\Delta m>0$), $\lambda_{4}$
has to be negative, and from the theoretical constraints (particularly the
stability constraints) discussed in section E, the absolute value of
$\lambda_{4}$ cannot be too large compared with $\lambda_{1}$. As a result,
the permitted range of $\lambda_{4}$ widens with larger $\lambda_{1}$. The
colour variation reveals that $M_{\Delta}$ needs to be on the smaller side at
relatively smaller values of $\lambda_{1}$. From equation 11,
$m_{H^{\pm\pm}}\sim M_{\Delta}$ at smaller $\lambda_{4}$. Thus, an FOPT
demands relatively light triplet-like scalars at smaller quartic coupling
$\lambda_{1}$. At relatively larger triplet-like scalar masses (larger
$M_{\Delta}$), an FOPT demands larger $\lambda_{1}$. On the other hand, for a
given $\lambda_{1}$, decreasing the absolute value of $\lambda_{4}$ demands
increasing $M_{\Delta}$ for an FOPT.
The shape of the SM Higgs potential is modified by the quartic terms
proportional to $\lambda_{1}$ and $\lambda_{4}$ in equation 2.3.1. Thus, the
phase transition pattern in the Higgs sector depends strongly on these quartic
couplings. In the top-right and bottom-left plots of figure 6, we present the
variation of $m_{H^{\pm\pm}}$ with $\lambda_{1}$ and $\lambda_{4}$,
respectively, for our scanned points that satisfy an FOPT. The palette colour
indicates the associated strength of the FOPT, $\xi_{n}$. For a fixed
$m_{H^{\pm\pm}}$, the colour variation shows that the strength of the FOPT
increases with $\lambda_{1}$. Thus, larger $\lambda_{1}$ is favoured for an
SFOEWPT, which is required for EWBG. In addition, for a fixed value of the
quartic coupling $\lambda_{1}$, lighter $m_{H^{\pm\pm}}$ is preferred for an
SFOEWPT. Thus, a larger quartic coupling $\lambda_{1}$ and lighter
triplet-like scalars increase the strength of the FOPT. The bottom-left plot
also indicates that, for a given $m_{H^{\pm\pm}}$, the strength of the FOPT
increases as the absolute value of $\lambda_{4}$ decreases. The dependence of
$\xi_{n}$ on $\lambda_{1}$ and $\lambda_{4}$ can be clearly understood from
the bottom-right plot of figure 6: increasing $\lambda_{1}$ and decreasing the
absolute value of $\lambda_{4}$ increase the strength of the phase transition.
The effective dependence of the phase transition pattern on these quartic
couplings can be quantified in terms of their sum,
$\lambda_{1}+\lambda_{4}$. We present the variation of
$\lambda_{1}+\lambda_{4}$ with $M_{\Delta}$, with $\xi_{n}$ indicated by the
palette colour, in the left plot of figure 7. The colour variation indicates
that the strength of the FOPT increases with increasing effective quartic
coupling ($\lambda_{1}+\lambda_{4}$) and decreasing $M_{\Delta}$ (or,
equivalently, triplet-like scalar masses).
Another very important quantity of the FOPT is the temperature at which the PT
starts to occur, i.e., the nucleation temperature ($T_{n}$). We discussed in
section 2.3.2 that the peak amplitude and peak frequency of the GWs produced
by an FOPT depend on $T_{n}$. Thus, studying the variation of $T_{n}$ with the
various model parameters is important for identifying the region of parameter
space with a strong GW signal from an FOPT. Keeping this in mind, we show the
variation of $T_{n}$ with respect to the effective quartic coupling
($\lambda_{1}+\lambda_{4}$) in the right plot of figure 7, with the variation
of $M_{\Delta}$ presented in the palette colour. It can be seen that at
relatively small effective quartic coupling, $T_{n}$ is relatively large; low
effective quartic coupling also demands relatively low $M_{\Delta}$. On the
other hand, at relatively larger effective quartic coupling, $T_{n}$ can go
down to below 50 GeV at moderate values of $M_{\Delta}$.
Finally, from the plots in figures 6 and 7, we conclude that a stronger FOEWPT
demands relatively larger effective quartic couplings in the potential and
relatively light triplet-like scalars in the parameter space. However, various
Higgs searches at the LHC can constrain this parameter space. In the following
subsection, we discuss the connection between the strength of the FOPT and the
production of GW signals, and the interplay between GW detection and the LHC
searches in this region of parameter space.
Figure 8: Scatter points in the $(\lambda_{1}+\lambda_{4})-M_{\Delta}$ plane
that exhibit an FOPT along the $h_{d}$-field direction. The variation of
$\alpha$ ($\log(\beta/H_{n})$) is shown in the left (right) plot via the
palette.
#### 5.2.1 Production of GW and the interplay with LHC
In section 2.3.2, we discussed the possibility of producing a stochastic GW
background from a cosmological FOPT in the early Universe, which can be
detected at various proposed future space- and ground-based GW detectors. The
important parameters that control the GW intensity are $\alpha$,
$\beta/H_{n}$, $T_{n}$ and $v_{w}$. We set $v_{w}=1$ for this work. We have
already discussed the variation of $T_{n}$ in the right plot of figure 7. In
this subsection, we discuss the variation of the other two main parameters
that control the GW spectrum, $\alpha$ and $\beta/H_{n}$.
In figure 8, we present two plots showing the variation of $\alpha$ (left) and
$\beta/H_{n}$ (right) at $T=T_{n}$, in colour, in the
$(\lambda_{1}+\lambda_{4})-M_{\Delta}$ plane. $\alpha$ ($\beta/H_{n}$)
increases (decreases) with increasing $(\lambda_{1}+\lambda_{4})$ and
decreasing $M_{\Delta}$. These variations have a direct connection with
$\xi_{n}$: a stronger FOPT corresponds to larger $\alpha$ and lower
$\beta/H_{n}$ at $T=T_{n}$. From the discussion in section 2.3.2, the
magnitude of the peak of the GW intensity is proportional to $\alpha$ and
inversely proportional to $\beta/H_{n}$ for fixed $v_{w}$ and $T_{n}$ (see
equations 43 and 47). These dependencies can be understood from the physical
definitions of the parameters: a larger $\alpha$ corresponds to more energy
transferred from the plasma into GWs, and a smaller $\beta/H_{n}$ implies a
longer phase transition. Thus, larger $\alpha$ and smaller $\beta/H_{n}$
enhance the GW intensity, and one can expect the GW intensity to increase with
$\xi_{n}$. As a result, the parameter space with larger effective quartic
coupling and lighter triplet-like scalars produces a stronger GW spectrum. In
addition, as discussed in section 2.3.2, the value of $\beta/H_{n}$ is crucial
for setting the peak frequency range (see equations 44 and 48), which is also
important for detection at the various proposed future GW experiments.
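These scalings can be sketched with the widely used sound-wave fit formulas. The coefficients below follow the standard LISA-era fits and the Espinosa et al. efficiency factor, which we assume correspond to equations 43, 44, 47 and 48 of the text; they are an assumption, not a transcription.

```python
import math

def kappa_sw(alpha):
    """Sound-wave efficiency factor in the v_w -> 1 limit (Espinosa et al. fit, assumed)."""
    return alpha / (0.73 + 0.083 * math.sqrt(alpha) + alpha)

def gw_sound_wave_peak(alpha, beta_over_H, T_n, v_w=1.0, g_star=100.0):
    """Peak amplitude Omega h^2 and peak frequency (Hz) of the sound-wave
    contribution, using standard fit formulas (assumed, not transcribed)."""
    kappa = kappa_sw(alpha)
    omega = (2.65e-6 / beta_over_H * (kappa * alpha / (1.0 + alpha)) ** 2
             * (100.0 / g_star) ** (1.0 / 3.0) * v_w)
    f_peak = (1.9e-5 * beta_over_H / v_w * (T_n / 100.0)
              * (g_star / 100.0) ** (1.0 / 6.0))
    return omega, f_peak

# Larger alpha and smaller beta/H_n enhance the peak amplitude, as in figure 8.
# Inputs below are the BP1- and BP2-like values quoted later in table 5.
strong = gw_sound_wave_peak(0.325, 33.58, 58.1)
weak = gw_sound_wave_peak(0.021, 26689.5, 114.4)
assert strong[0] > weak[0]
```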
Figure 9: Left: Parameter points of the scan in the $\log(\beta/H_{n})-\alpha$
plane, with $T_{n}$ indicated via the palette. Right: Scatter plot of the GW
peak amplitude ($\Omega_{\text{GW}}h^{2}$) versus the peak frequency
($f_{\text{peak}}$) in Hz, considering only the contribution from the
sound-wave mechanism. The variation of the signal strength
$\mu_{\gamma\gamma}$ is indicated via the palette.
In the left plot of figure 9, we therefore present the variation of $\alpha$
and $\log(\beta/H_{n})$ at $T=T_{n}$, with $T_{n}$ presented in the palette
colour. The colour variation shows that relatively lower $T_{n}$ corresponds
to higher $\alpha$. On the other hand, $\beta/H_{n}$ measures the inverse
duration of the phase transition. In some situations, the phase transition
takes longer to start, corresponding to relatively lower $T_{n}$ and a larger
gap between $T_{c}$ and $T_{n}$; these scenarios correspond to lower
$\beta/H_{n}$. On the contrary, even at relatively low temperatures, the phase
transition can happen quickly, corresponding to higher $\beta/H_{n}$. In the
plot, the purple points (lower $T_{n}$) span a wide range of $\beta/H_{n}$.
Variations of the parameters $\alpha$, $\beta/H_{n}$ and $T_{n}$ correspond to
a wide range of GW peak amplitudes and peak frequencies. We have already
discussed in section 2.3.2 that the sound-wave contribution mainly dominates
the GW peak amplitude and the peak frequency (see equations 44 and 43). In the
right plot of figure 9, we present the peak amplitude
($\Omega_{\text{GW}}h^{2}$) versus the peak frequency ($f_{\text{peak}}$) in
Hz, considering only the contribution from the sound-wave mechanism. The
proposed future GW experiments have different sensitivity regions in amplitude
and frequency. Thus, studying the variation of the peak frequency and the peak
amplitude of the produced GWs is crucial for assessing the detectability of
the spectrum in these experiments. The projected sensitivities of those
experiments are presented in figures 11 and 15.
We have already discussed that the regions of parameter space with lower
triplet-like scalar masses and correspondingly larger effective quartic
couplings can generate larger GW intensity signals. In section 4.1, we showed
the variation of $\mu_{\gamma\gamma}$ with the various parameters of the
model, such as $\lambda_{1}$ and $\lambda_{4}$. It is expected that at
relatively larger effective quartic couplings and lighter charged triplet-like
scalars, $\mu_{\gamma\gamma}$ deviates significantly from 1. Therefore, we
present the variation of $\mu_{\gamma\gamma}$ in the palette colour in the
right plot of figure 9. The plot shows that $\mu_{\gamma\gamma}$ diverges
significantly from 1 for larger GW peak amplitudes. We find that for most of
the FOPT parameter space that can be probed via the GW experiments,
$\mu_{\gamma\gamma}$ is more than $3\sigma$ away from the current ATLAS and
CMS limits, whereas a few points lie within the $3\sigma$ limit of the latest
observations. Therefore, most of the parameter space of this scenario that
lies within the sensitivity of the various GW detectors is already excluded by
the latest precision studies of the SM-like Higgs boson, particularly the
$h^{0}\rightarrow\gamma\gamma$ decay channel. The entire region that lies
within the $3\sigma$ limit is expected to be tested by precision studies of
the SM-like Higgs boson at the HL-LHC and/or future colliders. This scenario
thus features an interesting interplay between LHC physics and the various GW
detectors regarding the possibility of an FOPT in the present model: the
possibility of detecting GWs from an FOPT of the current model at the proposed
future GW detectors would be severely limited in the absence of new physics at
the HL-LHC.
In the discussion above, the phase transition always happens in the
$h_{d}$-field direction. We have so far discussed only how the model
parameters of the triplet sector affect the phase transition quantities, and
the correlations between the production of stochastic GW signals and the LHC
searches. We find that such correlations exclude most of the triplet-sector
regions that can produce GWs as a result of an FOEWPT. On the other hand, the
model parameters of the dark sector, particularly $\lambda_{SH}$ and $m_{S}$,
can play a crucial role in altering the SM Higgs potential in favour of an
FOEWPT. Before examining the influence of the dark-sector parameters on the
FOPT in the next section, we first provide two benchmark points from the above
scan results to illustrate the effect of the triplet-sector parameters on the
FOEWPT when the dark-sector couplings are small. The implications of the
dark-sector parameters for the FOPT, and the relationships between the
generated GWs and the DM experimental and observational constraints, are then
discussed in section 5.3.
Input/Observables | BP1 | BP2
---|---|---
$\lambda_{1}$ | 3.92 | 3.59
$\lambda_{4}$ | $-0.989$ | $-0.56$
$M_{\Delta}$ (GeV) | 367.7 | 366.0
$v_{t}$ (GeV) | $3.3\times 10^{-4}$ | $6.1\times 10^{-4}$
$\mu_{3}$ (GeV) | $-540.9$ | 441.1
$\lambda_{SH}$ | 0.0053 | 0.0013
$\lambda_{S\Delta}$ | 0.913 | 0.02
$m_{\rm DM}$ (GeV) | 432.2 | 420.0
$m_{H^{\pm\pm}}$ (GeV) | 406.4 | 388.3
$m_{H^{\pm}}$ (GeV) | 387.5 | 377.3
$m_{H^{0}\sim A^{0}}$ (GeV) | 367.7 | 365.9
$m_{h}$ (GeV) | 125 | 125
$\sin\theta_{t}$ | $-1.03\times 10^{-6}$ | $5.69\times 10^{-7}$
$\Omega h^{2}$ | 0.008 | 0.120
$\xi_{\text{DM}}\,\sigma_{\rm DD}^{\rm SI}$ (cm${}^{2}$) | $9.17~(9.31)\times 10^{-50}$ | $8.81~(8.99)\times 10^{-50}$
SNR (LISA) | 16.7 | $\ll 1$
SNR (LISA) | 16.7 | $<<1$
Table 3: Set-I benchmark scenarios, with relatively larger (smaller) portal
couplings between the triplet (singlet) and the SM Higgs doublet, that exhibit
an FOEWPT. Shown are the model input parameters, the relevant masses and
mixing, the DM relic density, the DMDDSI cross-section and the signal-to-noise
ratio in the LISA experiment of the GWs produced as a result of the FOEWPT.
The details of the EWPT of these benchmark scenarios are presented in tables 4
and 5.
#### 5.2.2 Benchmark scenarios (Set-I)
In this subsection, we present two benchmark points, BP1 and BP2, in table 3
to illustrate the effect of the triplet-sector parameters on the FOEWPT. In
the triplet sector, $v_{t}$ is around $10^{-4}$ GeV, $m_{H^{\pm\pm}}$ around
400 GeV, and for BP1 (BP2) $\Delta m$ is around 20 GeV (10 GeV). In the dark
sector, $m_{S}>m_{H^{0},A^{0},H^{\pm},H^{\pm\pm}}$, and the DM annihilation
cross-section was large in the early Universe as a result of the opening of
the various annihilation channels into triplet-sector states. Since
$\lambda_{S\Delta}$ is larger in BP1 than in BP2, the DM relic density is
lower in BP1. The lower value of $\lambda_{SH}$ in BP2 results in a
substantially lower $\sigma_{\rm DD}^{\rm SI}$ than in BP1; however, the
relic-density rescaling brings the effective $\xi_{\rm DM}\,\sigma_{\rm
DD}^{\rm SI}$ values into the same range. For both BP1 and BP2, $\sigma_{\rm
DD}^{\rm SI}$ is below the neutrino floor and is therefore difficult to probe
in DMDD experiments. However, a significant portion of the remaining parameter
space (corresponding to relatively larger $\lambda_{SH}$) will be probed by
future DMDD experiments.
BM No | $T_{i}$ (GeV) | ${\\{h_{d},h_{t},h_{s}\\}}_{\text{false}}$ (GeV) | $\xrightarrow[\text{type}]{\text{Transition}}$ | ${\\{h_{d},h_{t},h_{s}\\}}_{\text{true}}$ (GeV) | $\gamma_{{}_{\rm{EW}}}$
---|---|---|---|---|---
BP1 | $T_{c}=76.9$ | {0, 0, 0} | FO | {230.1, 0, 0} |
 | $T_{n}=58.1$ | {0, 0, 0} | FO | {241.3, 0, 0} | 4.15
BP2 | $T_{c}=115.0$ | {0, 0, 0} | FO | {120.9, 0, 0} |
 | $T_{n}=114.4$ | {0, 0, 0} | FO | {130.4, 0, 0} | 1.14
Table 4: Phase transition characteristics of the benchmark scenarios BP1 and
BP2 presented in table 3. Values of $T_{c}$ and $T_{n}$, the corresponding
field values in the false and true phases, and the strength of the phase
transition ($\gamma_{{}_{\rm{EW}}}$) along the $SU(2)$ direction are presented
for each benchmark scenario. 'FO' denotes a first-order phase transition.
Figure 10: Phase flows in the benchmark scenarios BP1 and BP2. Each colour
corresponds to a specific minimum of the potential (phase), while each line
indicates the evolution of that phase in the $h_{d}$-field direction as a
function of temperature. The arrows reflect the direction of the transition
from the false vacuum to the true vacuum, as calculated at $T_{c}$ and $T_{n}$
along the $h_{d}$-field direction. See the text for details.
The values of $T_{c}$ and $T_{n}$, the strength of the phase transition
$\gamma_{{}_{\rm{EW}}}$, and the field values of the phases at $T_{c}$ and
$T_{n}$ are presented in table 4. The calculation of $T_{c}$ and $T_{n}$
indicates that these EWPTs are one-step transitions. In BP1, large
supercooling is possible, since $T_{n}$ and $T_{c}$ differ significantly; this
makes the strength of the transition relatively large. For BP2, on the other
hand, $T_{n}$ and $T_{c}$ are very close to each other. Also, $T_{n}$ in BP1
is much smaller than in BP2.
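The quoted strengths follow directly from table 4, with $\gamma_{{}_{\rm{EW}}}=v_{n}/T_{n}$ evaluated from the true-phase $h_{d}$ value at the nucleation temperature; a quick check using only the tabulated numbers:

```python
# gamma_EW = v_n / T_n from the T_n rows of table 4.
bp1 = 241.3 / 58.1    # BP1: well above the EWBG criterion gamma_EW > 1
bp2 = 130.4 / 114.4   # BP2: only marginally above it

print(round(bp1, 2), round(bp2, 2))  # 4.15 1.14
```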
We present the evolution of the symmetric and broken phases with temperature
in figure 10 for these two benchmark points. Each line (red and black) shows
the field values at a particular minimum as a function of temperature: the
black line represents the symmetric phase, whereas the red line indicates the
broken phase. The green arrow indicates that an FOPT may happen in the
direction of the arrow, since at that temperature the two phases connected by
the arrow are degenerate. The blue arrow shows the transition direction at the
nucleation temperature, along which the phase transition actually starts. In
contrast to BP2, where $T_{c}$ and $T_{n}$ are very close, BP1 exhibits a
significant separation between them, resulting in a stronger phase transition
than in BP2. The evolution of the broken phase for both benchmark points shows
that the $h_{d}$ value (red line) finally evolves to 246 GeV at $T=0$. Thus,
the Universe finally evolves to the correct EW minimum, i.e.,
$v=\sqrt{v_{d}^{2}+v_{t}^{2}}=246$ GeV. Note that the contribution of the
triplet ${\it vev}$ to $v$ is minimal, as $v_{t}\sim 10^{-4}$ GeV at $T=0$.
BM No. | $T_{n}$ (GeV) | $\alpha$ | $\beta/H_{n}$
---|---|---|---
BP1 | 58.1 | $0.325$ | $33.58$
BP2 | 114.4 | 0.021 | 26689.5
Table 5: Values of the parameters $T_{n}$, $\alpha$ and $\beta/H_{n}$ (that
control the GW intensity) for the benchmark points BP1 and BP2, presented in
table 3.
Figure 11: GW energy density spectrum with respect to frequency for the
benchmark scenarios BP1 and BP2, illustrated against the experimental
sensitivity curves of the proposed future GW detectors LISA, SKA, TianQin,
Taiji, aLigo+, BBO, NANOGrav, and U-DECIGO. The solid lines indicate the
overall GW energy density produced for the specific benchmark scenario, while
the broken lines in various colours reflect the individual contributions from
sound waves and turbulence. See the text for details.
Various key parameter values ($\alpha$, $\beta$) pertaining to the GW spectra
generated from the FOPTs for these two benchmark scenarios (BP1 and BP2) are
presented in table 5. The corresponding GW (frequency) spectra are presented
in figure 11 using equations 42 to 49. For each phase transition process, the
contributions from sound waves and turbulence are shown in different colors
with broken lines. The GW peak amplitude and the peak frequency are mostly
dominated by the sound-wave contributions. These GW spectra are further
compared with the projected sensitivities of some ground- and space-based GW
detectors, viz., LISA LISA:2017pwj , ALIA Gong:2014mca , Taiji Hu:2017mde ,
TianQin TianQin:2015yph , aLigo+ Harry:2010zz , SKA Carilli:2004nx ;
Janssen:2014dka ; Weltman:2018zrl , NANOGrav McLaughlin:2013ira , Big Bang
Observer (BBO) Corbin:2005ny and Ultimate(U)-DECIGO Kudoh:2005as in Fig. 11.
Note that a significant amount of supercooling happens in BP1 and the
corresponding nucleation temperature is relatively low compared with BP2. This
also enhances the $\alpha$ value for BP1 compared with BP2. Also, the
nucleation takes a much longer time for BP1 compared with BP2: the $\beta$
parameter, which indicates the inverse of the duration of the phase
transition, is smaller in BP1 than in BP2. Thus, as the FOPT is much stronger
in BP1, the GW spectrum peak is expected to be much larger in BP1 than in BP2.
This behaviour can be observed in Fig. 11. Note that the GW intensity obtained
for BP1 lies within the sensitivities of LISA, Taiji, ALIA, BBO and UDECIGO,
while BP2 lies only within the sensitivity of UDECIGO and marginally touches
the BBO sensitivity.
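The hierarchy between the two peaks can be illustrated with the sound-wave peak-amplitude scaling $h^{2}\Omega_{\rm peak}\propto(H_{n}/\beta)\,[\kappa\alpha/(1+\alpha)]^{2}$ commonly used in the literature (e.g., Caprini:2015zlo). The sketch below drops all common prefactors; treating the efficiency factor $\kappa$ and the bubble-wall velocity as equal for both points is an assumption, since $\kappa$ itself depends on $\alpha$:

```python
def peak_scaling(alpha, beta_over_h):
    """Sound-wave peak-amplitude scaling ~ (H_n/beta) * [alpha/(1+alpha)]^2,
    dropping the common prefactors (efficiency kappa, wall velocity, g_*)."""
    return (1.0 / beta_over_h) * (alpha / (1.0 + alpha)) ** 2

# Table 5 values for the two benchmark points
bp1 = peak_scaling(alpha=0.325, beta_over_h=33.58)
bp2 = peak_scaling(alpha=0.021, beta_over_h=26689.5)

print(f"BP1/BP2 peak-amplitude ratio ~ {bp1 / bp2:.3g}")
```

With the table 5 values this ratio comes out at roughly $10^{5}$, consistent with BP1 reaching the LISA band while BP2 only touches the U-DECIGO sensitivity.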
The quantity known as the signal-to-noise ratio (SNR) is used to quantify the
detectability of the GW signal in different experiments. It is defined as
Caprini:2015zlo
$\text{SNR}=\sqrt{\delta\times\mathcal{T}\int_{f_{min}}^{f_{max}}df\bigg{[}\frac{h^{2}\Omega_{\text{GW}}(f)}{h^{2}\Omega_{\text{exp}}(f)}\bigg{]}^{2}},$
(59)
where $\delta$ corresponds to the number of independent channels for cross-
correlated detectors (needed to determine the stochastic origin of the GW); it
is 2 for BBO and U-DECIGO, and 1 for LISA. The duration of the experimental
mission in years is denoted by $\mathcal{T}$; in this work, we consider
$\mathcal{T}=5$. The effective power spectral density of the experiment’s
strain noise is indicated by $\Omega_{\text{exp}}(f)\,h^{2}$. For these
experiments to claim a detection, the SNR value needs to cross a threshold
that depends on the configuration details of the experiment. For example, the
recommended threshold is around 50 for a four-link LISA setup, whereas a
six-link design allows for a much lower value of around 10 Caprini:2015zlo .
In this work, we calculate the SNR value only for LISA for these two benchmark
points. For BP1, it is 16.7, while for BP2 the SNR value is well below 1. It
is expected that the SNR value of BP2 will be substantially lower because its
GW power spectrum does not fall within the LISA sensitivity curve. However,
other experiments like U-DECIGO can detect this type of benchmark scenario.
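Equation (59) reduces to a single numerical quadrature once the signal and sensitivity curves are tabulated on a common frequency grid. A minimal sketch, where the flat toy spectra and the year-to-seconds conversion are illustrative assumptions rather than the actual LISA curves:

```python
import numpy as np

YEAR_IN_S = 365.25 * 24 * 3600  # the mission duration enters in seconds

def snr(freqs, omega_gw_h2, omega_exp_h2, delta=1, mission_years=5.0):
    """Eq. (59): SNR = sqrt(delta * T * integral of [Omega_GW/Omega_exp]^2 df)."""
    trapezoid = getattr(np, "trapezoid", None) or np.trapz  # NumPy 1.x / 2.x
    integrand = (omega_gw_h2 / omega_exp_h2) ** 2
    return np.sqrt(delta * mission_years * YEAR_IN_S * trapezoid(integrand, freqs))

# Toy example: signal exactly at the sensitivity level over a LISA-like band
f = np.linspace(1e-4, 1e-1, 10_000)
val = snr(f, np.ones_like(f), np.ones_like(f), delta=1, mission_years=5.0)
print(val)
```

In practice the two spectra would be the computed $h^{2}\Omega_{\text{GW}}(f)$ of figure 11 and the detector's published sensitivity curve, interpolated onto the same grid.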
Note that in both benchmark scenarios, $\mu_{\gamma\gamma}$ is more than
$3\sigma$ away from the latest LHC limits. Thus, as discussed in the
description of the right plot of figure 9, the FOEWPT-preferred region of the
triplet-sector parameter space is mostly excluded by the precision study of
the SM Higgs boson di-photon decay mode. Since the FOEWPT is crucial for the
EWBG, we investigate its viability in the current model by varying the
parameters of the dark sector. In the next section, we examine such
possibilities and determine the consequences of the dark sector parameters on
the FOEWPT, as well as establish connections between the produced GW and other
DM experimental constraints.
### 5.3 FOPT from the dark sector
In the previous discussions of this article, we set the dark sector portal
couplings $\lambda_{SH}$ and $\lambda_{S\Delta}$ to smaller values, so that
the phase transition along the $h_{d}-$field direction is mostly controlled by
the triplet-sector parameters of the model, i.e., $\lambda_{1}$,
$\lambda_{4}$ and $M_{\Delta}$. However, a larger DM Higgs portal coupling,
particularly $\lambda_{SH}$ (through which the $h_{d}-$field couples with the
$h_{s}-$field of the dark sector), can alter the Higgs potential in such a way
that an FOEWPT occurs. In addition to this, another intriguing possibility is
an FOPT in the dark sector. In this case, the $h_{s}-$field changes from zero
${\it vev}$ at high temperature to non-zero ${\it vev}$ at the electroweak
temperature scale, and then evolves back to zero ${\it vev}$ at $T=0$.
Moreover, a two-step FOPT in the scalar potential is yet another intriguing
possibility. In this category, the $h_{s}-$field acquires a non-zero
${\it vev}$ at a certain temperature in the early Universe through an FOPT;
subsequently, at a relatively lower temperature, another phase transition
occurs in which the $h_{d}-$field develops a non-zero ${\it vev}$ whereas the
${\it vev}$ of the $h_{s}-$field goes from a non-zero value to zero. It is
crucial to remember that, among these three forms of phase transition, the
one-step phase transition along the $h_{s}-$direction is capable of generating
a GW signal but is ineffective for electroweak baryogenesis.
To study the phase transition along the $h_{s}-$field direction, we first fix
the triplet-sector parameters to those of benchmark scenario BP1 and vary the
dark sector parameters $m_{S}$, $\lambda_{SH}$, $\lambda_{S\Delta}$,
$\lambda_{S}$ and $\mu_{3}$. The DM phenomenology due to the variation of
these DM input parameters for the same triplet-sector input parameters has
already been discussed in section 5.1. In figure 12, we present the variation
of $m_{s}$ and $\lambda_{SH}$ ($\lambda_{S\Delta}$) in the left (right) plot
for points that exhibit an FOPT along the $h_{s}-$field direction. The
strength of the phase transition along the $h_{s}-$field direction, i.e.,
$\xi_{s}=\frac{<h_{s}>}{T_{n}}$, where $<h_{s}>$ is the ${\it vev}$ of the
$h_{s}$ field in the broken phase at $T=T_{n}$, is presented via the palette
color.
In the left plot, the variation of $m_{s}$ and $\lambda_{SH}$ reveals that,
for the phase transition along the $h_{s}-$field direction, $\lambda_{SH}$
must be large for larger $m_{s}$ and lower for smaller $m_{s}$. The region of
parameter space with $m_{s}\gtrsim 425$ GeV is almost entirely disfavoured for
an FOPT for $\lambda_{SH}\lesssim 6.0$. On the other hand, the right plot
indicates that the phase transition does not have a preferential bias over the
choice of $\lambda_{S\Delta}$. Equation 19 elucidates such a connection
between $m_{s}$ and $\lambda_{SH}$. The phase transition along the singlet
direction generally prefers a relatively low bare mass squared term
($\mu_{s}^{2}$) in the potential Ghorbani:2018yfr . Therefore, in the
parameter space that corresponds to an FOPT along the $h_{s}$ direction, the
maximum possible value of $m_{s}$ is mainly controlled by the term
$\lambda_{SH}v_{d}^{2}$ and is almost independent of $\lambda_{S\Delta}$, as
$v_{t}\ll 1$. The color variation in the left plot reveals that the strength
of the phase transition is maximal at the outer edge, corresponding to the
parameter space with very small bare mass $\mu_{s}$.
The variations of these parameters will also affect the observables of the
dark sector, such as $\Omega_{S}h^{2}$ and $\sigma_{\rm DD}^{\rm SI}$, and
make these scenarios more sensitive to the various DM experiments. It is
therefore anticipated that the production of GW as a result of this type of
FOPT in the dark sector will be connected with the DM observables. In the
following subsection, we discuss the relationship between the DM phenomenology
and an FOPT along the $h_{s}-$field direction. Note that, in the previous
discussion, we found a similar type of correlation between the FOEWPT-induced
GW intensity and the precision measurement of the signal strength
$\mu_{\gamma\gamma}$ of the $h^{0}\rightarrow\gamma\gamma$ channel at the LHC.
Figure 12: Scatter plots in the $m_{S}-\lambda_{SH}$
($m_{S}-\lambda_{S\Delta}$) plane in the left (right) plot for points that
exhibit an FOPT along the $h_{s}-$field direction. The variation of the
strength of the phase transition $\xi_{S}$ is shown via the palette.
#### 5.3.1 Correlations among the DM observables and an FOPT
In this section, we turn our attention to the link between the DM observables
and an FOPT along the $h_{s}-$field direction. The parameter space that
exhibits such a phase transition has distinct effects on the various DM
observables.
The left plot of figure 13 is otherwise similar to the left plot of figure 12,
but here the points satisfy the Planck experimental constraint on the relic
density, i.e., $\Omega_{s}h^{2}\leq 0.120$. The relic density of the singlet
scalar field is indicated by the palette color in the plot. Note that, as
described earlier, for $m_{s}<m_{h^{0}}$, $\lambda_{SH}$ mostly controls
$\Omega_{s}h^{2}$, whereas for $m_{s}>m_{h^{0}}$, the $\mu_{3}-$proportional
semi-annihilation process starts to contribute to the DM density.
$\lambda_{S\Delta}$ starts to contribute significantly when $m_{s}$ is larger
than the triplet-like scalar masses, which in this case are around $\sim$ 400
GeV.
The $CP$-even scalars $h^{0}$ and $H^{0}$ contribute to $\sigma_{\rm DD}^{\rm
SI}$. It has been discussed in section 2.2 that for smaller values of $v_{t}$
and $\sin\theta_{t}$, $\sigma_{\rm DD}^{\rm SI}$ mostly depends on the SM-like
Higgs boson mediated process and it increases with $\lambda_{SH}$. Such
enhancement of $\sigma_{\rm DD}^{\rm SI}$ with $\lambda_{SH}$ excludes a
significant amount of parameter space via the latest constraints from
XENON-1T and PANDA-4T. On the other hand, if the DM mass is larger than the
triplet-like scalars, then larger $\lambda_{S\Delta}$ decreases the relic
density without affecting $\sigma_{\rm DD}^{\rm SI}$. As a result, at
relatively larger $\lambda_{S\Delta}$ values, the DM can be underabundant.
The small scaling factor due to the low relic density then reduces the
effective $\sigma_{\rm DD}^{\rm SI}$ value. Such scenarios allow the parameter
space with larger $\lambda_{SH}$, which is otherwise excluded by the latest
constraints.
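The downward scaling invoked here can be made explicit. A minimal sketch, assuming the standard definition of the relic fraction, $\xi_{\rm DM}=\min(1,\Omega_{S}h^{2}/0.120)$, with illustrative numbers not taken from the scan:

```python
PLANCK_RELIC = 0.120  # observed DM relic density value used in the text

def effective_dd_cross_section(sigma_si, omega_h2):
    """Rescale the computed SI cross-section by the relic fraction xi_DM,
    so underabundant DM faces a proportionally weaker direct-detection limit."""
    xi_dm = min(1.0, omega_h2 / PLANCK_RELIC)
    return xi_dm * sigma_si

# An underabundant point carrying only ~10% of the observed relic density
print(effective_dd_cross_section(sigma_si=1e-45, omega_h2=0.012))
```

It is this rescaled quantity (the $\xi_{\text{DM}}\,\sigma_{\rm DD}^{\rm SI}$ of table 6) that is compared against the XENON-1T and PANDA-4T exclusion curves.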
Figure 13: Left: Scatter points that exhibit an FOPT along the $h_{s}-$field
direction and $\Omega_{S}h^{2}<0.120$ in the $m_{S}-\lambda_{SH}$ plane.
Right: Similar to the scatter points in the left plot, but also complying with
the latest DMDDSI experimental limits. The variation of $\Omega_{S}h^{2}$
($\sigma_{\rm DD}^{\rm SI}$) is shown via the palette in the left (right)
plot.
In the right plot of figure 13, we present the points of the left plot that
pass the latest $\sigma_{\rm DD}^{\rm SI}$ constraints from the XENON-1T and
PANDA-4T experiments. The effective $\sigma_{\rm DD}^{\rm SI}$ is presented by
the palette color. Note that, due to the requirement of larger $\lambda_{SH}$
for the FOPT, a significant amount of the parameter space is excluded by the
$\sigma_{\rm DD}^{\rm SI}$ constraints. Note also that for $m_{s}>m_{h^{0}}$,
the $\mu_{3}-$dependent semi-annihilation process starts to contribute, and
because of that the relic density drops significantly, which helps the
effective $\sigma_{\rm DD}^{\rm SI}$ rates remain within the exclusion limit
as a result of a significant downward scaling. Because of that, we find some
allowed parameter space for $m_{s}$ above the SM-like Higgs boson threshold.
Therefore, it is important to note that the term $\mu_{3}$ plays an important
role in both the dark sector and the FOPT sector: it generates the
semi-annihilation processes for the DM annihilation, and it can also alter the
shape of the potential and generate a barrier between the minima, which is
necessary for an FOPT. In the scenario where $m_{s}$ is larger than the
triplet-like scalars, the term proportional to $\lambda_{S\Delta}$ starts to
contribute to the DM annihilation, which can drastically reduce the relic
density. However, in this scenario, as the triplet-like scalar masses are
relatively large, $\sim 400$ GeV, this effect does not set in significantly
within the FOPT-preferred allowed parameter space, as $m_{s}$ is bounded up to
$\sim 425$ GeV for $\lambda_{SH}\lesssim 6.0$.
The discussion of this subsection points out the strong correlations between
the FOPT-favoured parameter space and the DM observables. Such correlations
rule out a significant amount of the parameter space from having an FOPT along
the $h_{s}-$field direction. In this case, as we consider the triplet-like
scalars a bit heavier, approximately $\sim$400 GeV, the contribution of the
$\lambda_{S\Delta}h_{d}^{2}h_{s}^{2}$ interaction term in the potential to the
relic density estimation becomes significant only for comparatively higher
$m_{s}$. In a scenario with relatively lighter triplet-like scalars, the same
interaction becomes significant at relatively small $m_{s}$, which can open up
many more allowed parameter points satisfying the latest DMDDSI cross-section
limits from the XENON-1T and PANDA-4T experiments due to the downward scaling
of the underabundant DM. In the next subsection, we choose a benchmark
scenario from that category, where the triplet-like scalars are relatively
light, around $\sim$200 GeV, and exhibit an FOEWPT in the early Universe, and
we discuss the correlation between the FOPT, the production of GW and the DM
observables.
#### 5.3.2 Benchmark scenario (Set-II)
Input/Observables | BP3
---|---
$\lambda_{1}$ | 0.17
$\lambda_{4}$ | $-0.30$
$M_{\Delta}$ (GeV) | 193.2
$v_{t}$ (GeV) | $4.5\times 10^{-4}$
$\mu_{3}$ (GeV) | $-100.8$
$\lambda_{SH}$ | 1.2
$\lambda_{S\Delta}$ | 6.9
$\lambda_{S}$ | 2.17
$m_{\text{DM}}$ (GeV) | 252.2
$m_{H^{++}}$ (GeV) | 215.3
$m_{H^{+}}$ (GeV) | 204.5
$m_{H^{0}\sim A^{0}}$ (GeV) | 193.2
$m_{h}$ (GeV) | 125
$\sin\theta_{t}$ | $8.0\times 10^{-6}$
$\Omega h^{2}$ | $4.3\times 10^{-5}$
$\xi_{\text{DM}}\,\sigma_{\rm DD}^{\rm SI}$ (cm$^{2}$) | $7.1\times 10^{-47}$
SNR (LISA) | $\ll 1$
Table 6: Set-II benchmark scenario with relatively smaller (larger) portal
couplings between the triplet (singlet) and the SM Higgs doublet that exhibits
a two-step EWPT in the early Universe. Similar to table 3, various model input
parameters, masses, mixing, DM relic density, DMDDSI cross-section and the
signal-to-noise ratio of the produced stochastic GW signal in the LISA
experiment are shown. The details of the phase transition are shown in tables
7 and 8.
In this section, we present a benchmark point (BP3) exhibiting a two-step FOPT
in the early Universe. We consider relatively light triplet-like scalars of
around 200 GeV and much smaller $\lambda_{1}$ and $|\lambda_{4}|$ values, such
that $\mu_{\gamma\gamma}$ is within the 1$\sigma$ (2$\sigma$) limit of the
latest ATLAS (CMS) results. Scenarios like this demand larger quartic
couplings (particularly $\lambda_{SH}$) in the dark sector and a smaller DM
mass for an FOPT in the early Universe. In the discussion in section 5.2, we
considered very low quartic couplings of the dark sector, i.e., much smaller
$\lambda_{SH}$ and $\lambda_{S\Delta}$; this can be seen for BP1 and BP2 in
table 3. Because of that choice, we did not find points in section 5.2 with
smaller quartic couplings in the triplet sector, i.e., parameter points with
smaller $\lambda_{1}$ and $|\lambda_{4}|$, that can generate an FOEWPT in the
early Universe.
Compared to the scenarios discussed in section 5.2, in BP3 $\lambda_{SH}$ is
relatively large, $\sim 1.2$, which facilitates a successful FOPT in the
scalar potential. However, $\sigma_{\rm DD}^{\rm SI}$ is expected to be much
larger as a result of such a large coupling. To comply with the latest
experimental constraints from XENON-1T and PANDA-4T, we consider $m_{s}$
larger than the triplet-like scalar masses and a much larger
$\lambda_{S\Delta}$ $\sim 6.9$, such that the $S~{}S^{*}~{}\to{\rm
X~{}~{}Y};~{}~{}~{}\\{\rm X,Y\\}=\\{H^{0},A^{0},H^{\pm},H^{\pm\pm}\\}$
annihilation processes open up and contribute significantly to the
annihilation cross-section, so that the DM becomes underabundant. Such a low
relic density corresponds to a much smaller scaling fraction
$\xi_{\text{DM}}$, which helps the computed $\sigma_{\rm DD}^{\rm SI}$ rate
comply with the latest stringent experimental limits. Note that the value of
$m_{S}$ is also crucial for the FOPT and cannot be too large. Due to the
comparatively smaller triplet-like scalar masses, $\sim$ 200 GeV, in this
benchmark point, we can keep $m_{S}$ within the range of $\sim 250$ GeV and
also take advantage of the downward relic density scaling factor with respect
to the DMDDSI experimental constraints. This keeps the DMDDSI cross-section of
BP3 below the most recent constraints from the XENON-1T and PANDA-4T
experiments.
BM No | $T_{i}$ (GeV) | ${\\{h_{d},h_{t},h_{s}\\}}_{\text{false}}$ | $\xrightarrow[\text{type}]{\text{Transition}}$ | ${\\{h_{d},h_{t},h_{s}\\}}_{\text{true}}$ (GeV) | $\gamma_{{}_{\rm{EW}}}$
---|---|---|---|---|---
BP3 | $T_{c}=163.2$ | {0, 0, 0} | FO | {0, 0, 14.8} |
| $T_{c}=145.7$ | {0, 0, 63.6} | FO | {78.9, 0, 0} |
| $T_{n}=163.1$ | {0, 0, 0} | FO | {0, 0, 15.1} |
| $T_{n}=116.3$ | {0, 0, 86.7} | FO | {197.3, 0, 0} | 1.7
Table 7: Phase transition characteristics of the benchmark point BP3 presented
in table 6. Values of $T_{c}$, $T_{n}$, the corresponding field values in the
false and true phases, and the strength of the phase transition
($\gamma_{{}_{\rm{EW}}}$) along the $SU(2)$ direction are presented. ‘FO’
denotes an FOPT.
Figure 14: Phase flows in benchmark point BP3. Each color corresponds to a
specific minimum of the potential (phase), while each line indicates the
evolution of the phase in the $h_{d}-$field direction in the left plot and the
$h_{s}-$field direction in the right plot as a function of temperature. The
arrows reflect the direction of transition from a false vacuum to a true
vacuum, as calculated at $T_{c}$ and $T_{n}$ along the $h_{d}$-field, whereas
an orange bullet indicates that the corresponding field value does not change
much during that transition. See the text for the details.
In contrast with BP1 and BP2, the phase transition pattern is quite different
in BP3. The values of $T_{c}$ and $T_{n}$ for the BP3 benchmark scenario are
presented in table 7, and they indicate that the transition is a two-step
FOPT. The evolution of the fields and the various phases with temperature is
presented in figure 14. Note that three types of phase exist. ‘Phase-I’
denotes the trivial phase, i.e., all the field values are always zero.
‘Phase-II’ corresponds to the minimum where only the $h_{s}-$field acquires a
non-zero ${\it vev}$ and the other field ${\it vevs}$ remain zero. ‘Phase-III’
corresponds to the minimum of the potential where only the $h_{d}-$field
obtains a non-zero ${\it vev}$ and all other field ${\it vevs}$ remain zero.
In the right and left plots of figure 14, the trivial minimum $\\{0,0,0\\}$ at
high temperature is denoted by the black line (phase-I), whereas the broken
phases in the $h_{s}$ and $h_{d}$ directions are presented in red (phase-II)
and magenta (phase-III). These lines correspond to the field values of a
particular minimum as a function of temperature. The green arrows correspond
to the temperature and field values at which two phases are degenerate and a
possible FOPT can happen along the direction of the arrow. The phase at the
head of the arrow has a deeper minimum than the other phase for $T<T_{c}$. The
transitions finally start to occur when a bubble of the true minimum
successfully nucleates, i.e., at $T=T_{n}$. The blue arrows represent the
direction of the FOPT at $T=T_{n}$ in field space. The small yellow circle
represents that an FOPT happens in the scalar potential at that temperature
but the corresponding field value does not change.
BM No. | $T_{n}$ (GeV) | $\alpha$ | $\beta/H_{n}$
---|---|---|---
BP3 | 163.1 | $2.3\times 10^{-4}$ | $4.8\times 10^{6}$
116.3 | $0.027$ | 312.7
Table 8: Values of the parameters $T_{n}$, $\alpha$ and $\beta/H_{n}$ (that
control the GW intensity) for the benchmark point BP3, shown in table 6.
Note that, first, a broken phase $\\{0,0,14.8\\}$ appears only along the
$h_{s}-$field direction at $T_{c}=163.2$ GeV, with the possibility of an FOPT.
Successful nucleation occurs at $T_{n}=163.1$ GeV, as shown in the right plot
of figure 14. The small difference between $T_{c}$ and $T_{n}$ makes the two
arrows overlap each other. In the left plot of figure 14, a yellow circle is
shown at $T=163.1$ GeV, as during this FOPT the $h_{d}-$field does not acquire
any non-zero ${\it vev}$, i.e., the EW symmetry remains unbroken. Thus, the
first FOPT occurs in the
$\\{h_{d}=0,h_{t}=0,h_{s}=0\\}\rightarrow\\{h_{d}=0,h_{t}=0,h_{s}\neq 0\\}$
phase direction. Subsequently, at a later stage, another minimum forms
(phase-III), designated by $\\{h_{d}\neq 0,h_{t}=0,h_{s}=0\\}$. At $T=145.2$
GeV, phase-II ($\\{0,0,63.6\\}$) and phase-III ($\\{78.9,0,0\\}$) become
degenerate, and for $T<145.2$ GeV phase-III becomes the global minimum of the
scalar potential. The corresponding nucleation occurs at $T=116.3$ GeV. During
this transition both the $h_{d}-$ and $h_{s}-$fields undergo a change, with
the former acquiring a non-zero ${\it vev}$ from zero and the latter dropping
from a non-zero ${\it vev}$ to zero. Note that, at $T=0$, the $h_{d}$ field in
phase-III evolves to the value of 246 GeV and $h_{s}$ remains at zero in this
phase. During these phase transitions, the triplet field always remains very
small. Therefore, the system finally evolves to the correct EW minimum. Note
that in this benchmark scenario the minimum phase-II is still present at
$T=0$, where $<h_{s}>\sim 100$ GeV and all other field ${\it vevs}$ are almost
zero. However, the global minimum of the potential is phase-III, not phase-II.
The values of the key parameters ($\alpha$, $\beta$) related to the
computation of the GW spectra generated from the FOPTs for the benchmark
scenario BP3 are presented in table 8. The GW frequency spectrum of BP3 is
presented in figure 15 using equations 42 to 49. Note that the first FOPT
along the $h_{s}-$field direction happens very quickly (much larger
$\beta/H_{n}$ value) and is also not strong (much lower $\alpha$ value). Thus,
the contributions of the first FOPT along the $h_{s}-$field direction to the
total GW spectrum are very small compared to those from the second FOPT, which
happens along the $h_{d}$ and $h_{s}$ field directions simultaneously.
Therefore, a spectral peak due to the first phase transition of this two-step
phase transition does not appear in the plot.
Figure 15: GW energy density spectrum with respect to frequency for the
benchmark point BP3 illustrated against the experimental sensitivity curves of
the future proposed GW detectors such as LISA, SKA, TianQin, Taiji, aLigo+,
BBO, NANOGrav, and U-DECIGO. The black line denotes the total GW energy
density whereas the red (green) broken line represents the contributions from
sound waves (turbulence). Note that in this benchmark scenario the
contribution from the first phase transition along the $h_{s}-$field direction
does not appear in these plots as its contribution is too small.
Similar to figure 11, for each phase transition process the contributions from
sound waves and turbulence are shown in different colors with broken lines.
The GW peak amplitude and the peak frequency are mostly dominated by the
sound-wave contributions. This GW spectrum is further compared with the
projected sensitivities of some ground- and space-based GW detectors, viz.,
LISA, Taiji, TianQin, aLigo+, SKA, NANOGrav, Big Bang Observer (BBO) and
Ultimate(U)-DECIGO, in Fig. 15. The GW intensity obtained for BP3 lies within
the sensitivities of ALIA, BBO and UDECIGO. As expected, the SNR for LISA is
too low for detection. However, experiments like ALIA, BBO and UDECIGO can
detect such a GW signal.
Therefore, the parameter space permitting an FOPT along the singlet direction
that we address in this section is either excluded by the most recent limits
from the various DMDD experiments or will be probed in the near future. The
parameter space that is still allowed corresponds to significantly
underabundant DM. It is projected that the HL-LHC will be unable to constrain
a substantial portion of this parameter space. Focusing on this model, the
observation of a signal at future DM and GW experiments but nothing at the
HL-LHC would indicate that DM is severely underabundant.
## 6 Summary and conclusion
Despite its many successes, the SM cannot explain the smallness of neutrino
masses or the observed baryon asymmetry of the Universe, and it does not have
a DM candidate. In this article, we examine the $Z_{3}-$symmetric singlet
scalar extended type-II seesaw model as a potential contender for
simultaneously providing solutions to these three deficiencies of the SM. The
type-II seesaw mechanism generates small neutrino masses from the ${\it vev}$
of the scalar triplet. The complex scalar field, $S$, of the model can be a
stable WIMP-like DM particle under the so-called freeze-out mechanism. In
addition, new interactions among the SM Higgs doublet, the complex triplet,
and the complex singlet can induce an FOEWPT in the early Universe, which is
necessary for the solution of the BAU problem via the EWBG mechanism. Such an
FOEWPT can generate stochastic GW that can be detected in future proposed GW
detectors. In this work, we investigate the correlation among the production
of GW, DM phenomenology, and BSM LHC searches within this model.
In this scenario, the scalar DM can annihilate mainly in three ways to freeze
out in the early Universe: DM annihilation (a) $S~{}S^{*}~{}\to{\rm
SM~{}~{}SM}$ and (b) $S~{}S^{*}~{}\to{\rm X~{}~{}Y};~{}~{}~{}\\{\rm
X,Y\\}=\\{H^{0},A^{0},H^{\pm},H^{\pm\pm}\\}$, and DM semi-annihilation (c)
$S~{}S~{}\to S~{}U;~{}~{}\\{U\\}=\\{h^{0},H^{0}\\}$. The coupling of the
scalar DM with the SM Higgs doublet, $\lambda_{SH}(S^{*}S)(H^{\dagger}H)$, and
the triplet, $\lambda_{S\Delta}(S^{*}S){\rm Tr}[\Delta^{\dagger}\Delta]$,
control the DM-annihilation processes (a) and (b), respectively. In contrast,
the DM semi-annihilation processes are initiated via the cubic term
$\mu_{3}(S^{3}+{S^{*}}^{3})$ in the scalar potential. Such semi-annihilation
processes exist because of the $Z_{3}-$discrete symmetry of the model that
settles the singlet as a DM particle. At a fixed value of $m_{s}$, the DMDDSI
cross-section mostly depends on the $\lambda_{SH}$ portal coupling, which has
an upper bound coming from the latest DMDD experiments. When
$m_{s}<m_{h^{0}}$, the same portal coupling $\lambda_{SH}$ predominantly
controls the DM relic density. Such dependencies on $\lambda_{SH}$
significantly restrict the DM parameter space for $m_{s}<m_{h^{0}}$ and only
the resonance region around $m_{h^{0}}$ is still allowed after complying with
the DM relic density and the latest DMDD constraints. The cubic interaction
based on $\mu_{3}$ expands the parameter space for $m_{s}\gtrsim m_{h^{0}}$.
In addition to this, the interaction between the DM and the triplet sector
through the portal coupling $\lambda_{S\Delta}$ opens up a wide range of DM
allowed region of parameter space if $m_{s}\gtrsim
m_{H^{0},A^{0},H^{\pm},H^{\pm\pm}}$. Therefore, the allowed region of DM
parameter space depends on the triplet-like scalar masses, which are
constrained by the most recent direct searches for heavy neutral and charged
scalars at the LHC. Such phenomena show correlations between the DM
phenomenology and the physics at the LHC.
It has been pointed out in the literature that direct searches at the LHC will
be unable to constrain the $H^{\pm\pm}$ mass even at 200 GeV with 3 ab$^{-1}$
of data for moderate $v_{t}$ and reasonably large $\Delta m$ ($>10$ GeV)
values, which correspond to $\lambda_{4}<0$. So, we focus on this part of the
parameter space in order to investigate the feasibility of an FOPT in the
scalar potential at finite temperature. We find that the strength of an
SFOEWPT along the
$h_{d}-$field direction increases with larger $(\lambda_{1}+\lambda_{4})$ and
smaller $m_{H^{\pm\pm}}$ values. For $\lambda_{1}\lesssim 4.0$,
$m_{H^{\pm\pm}}$ greater than 420 GeV is disfavored since the triplet becomes
decoupled from the SM Higgs potential, rendering an SFOEWPT impossible.
Furthermore, we have thoroughly studied the stochastic GW spectra that will
carry the imprints of FOPT from new physics beyond the SM. Relatively light
triplet-like scalars and large effective quartic couplings
$(\lambda_{1}+\lambda_{4})$ can generate GW with large enough intensity to
fall within the projected sensitivity of some ground- and space-based GW
detectors. However, we also show that the decay rate of
$h^{0}\rightarrow\gamma\gamma$ is significantly affected by the SFOEWPT-
favored region of the parameter space, deviating significantly from the SM-
predicted value for large effective quartic couplings and light triplet-like
scalars. The correlation between the produced GW spectra and the signal
strength $\mu_{\gamma\gamma}$ of that decay channel has been pointed out. The
LHC precision study of the SM Higgs di-photon decay channel has already
excluded most of the parameter space that can generate a stochastic GW signal.
Any further narrowing of the $\mu_{\gamma\gamma}$ error bars at the HL-LHC
will completely exclude the possibility of an FOEWPT induced by large
interactions between the SM Higgs doublet and the $SU(2)$ triplet.
The interaction between the complex scalar and the SM Higgs doublet can be
another source of FOPT in this model. The singlet scalar is employed in this
context to serve dual roles as a DM particle and to offer a favorable
condition for the electroweak phase transition to be strongly first-order.
Such a modification of the scalar potential can also open up the possibility
of a two-step FOPT, with the first transition happening in the singlet
direction and the second in the $h_{d}-$field direction. The scenario of an
FOPT along
the singlet direction prefers relatively large $\lambda_{SH}$. By imposing
such a criterion for the FOPT, we have shown that a significant amount of the
parameter space is already excluded by the latest constraints on the DMDDSI
cross-section from various DM experiments. Our analysis further shows that
this model can generate an FOPT in the $h_{s}-$field direction and escape the
DMDD constraints when the complex scalar DM is significantly underabundant.
Such a low relic density allows the effective $\sigma_{\rm DD}^{\rm SI}$ rates
to stay below the exclusion limit owing to a considerable scaling down. Such a
situation arises only for $m_{s}\gtrsim m_{h^{0}}$, where the
$\mu_{3}-$dependent DM semi-annihilation process can significantly contribute
to a drop in the relic density. Scaling down is also conceivable for
$m_{s}\gtrsim m_{H^{0},A^{0},H^{\pm},H^{\pm\pm}}$ by increasing
$\lambda_{S\Delta}$. On the other hand, significantly large values of $m_{s}$
are disfavoured for an FOPT. Thus, the $\mu_{3}$ term in the potential plays a
unique role, since it still permits a specific region of parameter space that
can comply with the DMDDSI rates and since it can generate a barrier in the
tree-level potential in favour of an FOPT along the $h_{s}-$field direction.
However, a more interesting case of a two-step phase transition from the
singlet-doublet interaction is when the phase transition in the $h_{d}-$field
direction is strongly first-order. This is what we need from the EWBG
perspective. We explore this prospect by analysing a representative benchmark
point where $\mu_{\gamma\gamma}$ remains very close to 1 since the various
triplet-sector couplings are fixed at tiny values. We find that such an FOPT
demands a relatively large $\lambda_{SH}$; as a consequence, future DMDD
experiments will probe the feasibility of generating GWs from an FOPT in this
scenario. Thus, focusing on this model, observing any signal
at future DMDD and GW experiments but nothing at the HL-LHC will indicate that
DM is severely underabundant. The absence of new physics signal at the HL-LHC
and various DMDD experiments in future would severely limit the prospects of
detecting GW at future GW experiments.
## Acknowledgments
PG would like to acknowledge Indian Association for the Cultivation of
Science, Kolkata for the financial support. The work of TG and SR is supported
by the funding available from the Department of Atomic Energy (DAE),
Government of India for Harish-Chandra Research Institute (HRI). SR is also
supported by the Infosys award for excellence in research. SR acknowledges the
High-Performance Scientific Computing facility at the Regional Centre for
Accelerator-based Particle Physics (RECAPP) and HRI.
## Appendix A Tree-level tadpole relations
The mass parameters $\mu_{H}^{2}$, $\mu_{\Delta}^{2}$ and $\mu$ are determined
via the minimization conditions (the tadpoles) of $V_{0}$ of equation 2.3.1
and are given by
$\displaystyle\mu_{H}^{2}=\tfrac{1}{2}(\lambda_{1}+\lambda_{4})v_{t}^{2}+\lambda_{H}v_{d}^{2}-\sqrt{2}\mu v_{t},$
$\displaystyle\mu_{\Delta}^{2}=-(\lambda_{2}+\lambda_{3})v_{t}^{2}-\tfrac{1}{2}(\lambda_{1}+\lambda_{4})v_{d}^{2}+\frac{\mu v_{d}^{2}}{\sqrt{2}v_{t}},$
$\displaystyle\mu=\frac{v_{t}}{\sqrt{2}v_{d}}\,\frac{-(\lambda_{2}+\lambda_{3})\sin 2\theta_{t}\,v_{t}^{2}-(\lambda_{1}+\lambda_{4})v_{d}v_{t}\cos 2\theta_{t}+\lambda_{H}v_{d}^{2}\sin 2\theta_{t}}{v_{d}\sin\theta_{t}\cos\theta_{t}-2v_{t}\cos^{2}\theta_{t}+2v_{t}\sin^{2}\theta_{t}}.$ (60)
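As an illustration, the first two tadpole relations of eq. (60) can be evaluated numerically. The coupling and vev values below are hypothetical benchmark inputs chosen for demonstration only; they are not the paper's scanned benchmark points.

```python
import math

# Hypothetical benchmark inputs (illustration only; not the paper's benchmarks)
lam1, lam2, lam3, lam4 = 0.10, 0.10, 0.10, 0.10  # quartic couplings
lamH = 0.13                                       # SM-like Higgs quartic
vd, vt = 246.0, 1.0                               # doublet and triplet vevs in GeV
mu = 5.0                                          # trilinear mu parameter in GeV

# Tadpole (minimization) conditions, eq. (60)
muH2 = 0.5 * (lam1 + lam4) * vt**2 + lamH * vd**2 - math.sqrt(2) * mu * vt
muDelta2 = (-(lam2 + lam3) * vt**2
            - 0.5 * (lam1 + lam4) * vd**2
            + mu * vd**2 / (math.sqrt(2) * vt))

print(f"mu_H^2     = {muH2:.1f} GeV^2")
print(f"mu_Delta^2 = {muDelta2:.1f} GeV^2")
```

For a small triplet vev the $\mu v_{d}^{2}/(\sqrt{2}v_{t})$ term dominates $\mu_{\Delta}^{2}$, which is why the triplet mass parameter comes out much larger than $\mu_{H}^{2}$ at this kind of point.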
## Appendix B Set of UV-finite counterterm coefficients
The various coefficients of the counterterm potential, defined in equation 31,
are given by
$\displaystyle\delta_{\lambda_{S\Delta}}=-\frac{1}{4v_{d}v_{t}}DV_{12}\,,$ (61)
$\displaystyle\delta_{\lambda_{H}}=-\frac{1}{8v_{d}^{3}}DV_{1}-\frac{1}{8v_{d}^{2}}DV_{11}\,,$ (62)
$\displaystyle\delta\mu_{H}^{2}=-\frac{3}{4v_{d}}DV_{1}+\frac{1}{4}DV_{11}+\frac{v_{t}}{4v_{d}}DV_{12}\,,$ (63)
$\displaystyle\delta\mu_{\Delta}^{2}=-\frac{3}{4v_{t}}DV_{2}+\frac{1}{4}DV_{22}+\frac{v_{d}}{4v_{t}}DV_{12}\,,$ (64)
$\displaystyle\delta_{\lambda_{2}}=\frac{1}{8v_{t}^{3}}DV_{2}-\frac{1}{8v_{t}^{2}}DV_{22}\,,$ (65)
$\displaystyle\delta_{\lambda_{SH}}=-\frac{1}{2v_{t}^{2}}DV_{33}\,.$ (66)
Here, $DV_{i}=\frac{\partial V_{\text{CW}}}{\partial\phi_{i}}$ and
$DV_{ij}=\frac{\partial^{2}V_{\text{CW}}}{\partial\phi_{i}\partial\phi_{j}}$,
where $\phi_{i},\phi_{j}\in\{h_{d},h_{t},h_{s}\}$. All the derivatives are
evaluated at the true EW minimum, i.e., $h_{d}=v_{d}$, $h_{t}=v_{t}$ and
$h_{s}=0$.
## Appendix C Daisy resummation improved thermal field-dependent masses
From the high-temperature expansion of the thermal one-loop potential $V_{T}$
(of equation 33 for bosons), the daisy coefficients can be found from the
following relation:
$c_{ij}=\left.\frac{1}{T^{2}}\frac{\partial^{2}V_{T}}{\partial\phi_{i}\partial\phi_{j}}\right|_{T^{2}\gg m^{2}}.$ (67)
The eigenvalues of the thermally improved mass-squared matrices for the Higgs
and the gauge bosons are obtained as $M^{2}(h_{d},h_{t},h_{s},T)=$
eigenvalues$[m^{2}(h_{d},h_{t},h_{s})+\Pi(T^{2})]$, where
$\Pi(T^{2})=c_{ij}T^{2}$ and the $c_{ij}$ are the daisy coefficients described
above. The daisy resummation is particularly significant for an FOPT because it
affects the all-important cubic term of the potential at finite temperature. The
scalars, and only the longitudinal modes of the vectors, receive thermal mass
corrections; due to gauge symmetry, thermal effects on the transverse modes are
suppressed Espinosa:1992kf . With these in mind, the various daisy coefficients
are as follows Comelli:1996vm ; Basler:2018cwe ; Carrington:1991hz :
$\displaystyle c_{H}$
$\displaystyle=\frac{1}{16}({g_{1}}^{2}+3g_{2}^{2})+\frac{1}{4}y_{t}^{2}+\frac{1}{24}(6\lambda_{1}+3\lambda_{4}+12\lambda_{H}+12\lambda_{SH}),$
(68a) $\displaystyle c_{\Delta}$
$\displaystyle=\frac{1}{4}({g_{1}}^{2}+2g_{2}^{2})+\frac{1}{12}(2\lambda_{1}+8\lambda_{2}+6\lambda_{3}+\lambda_{4}+\lambda_{S\Delta}),$
(68b) $\displaystyle c_{S}$
$\displaystyle=\tfrac{1}{12}\left(4\lambda_{S}+2\lambda_{SH}+3\lambda_{S\Delta}\right).$
(68c)
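The daisy coefficients of eq. (68) are simple algebraic combinations of the couplings and can be evaluated directly. The coupling values below are hypothetical (illustration only), and the $c_{S}$ expression uses the parenthesised reading $\tfrac{1}{12}(4\lambda_{S}+2\lambda_{SH}+3\lambda_{S\Delta})$ adopted above.

```python
# Hypothetical benchmark couplings (illustration only)
g1, g2, yt = 0.36, 0.65, 0.99          # U(1), SU(2) gauge and top Yukawa couplings
lam1, lam2, lam3, lam4 = 0.10, 0.10, 0.10, 0.10
lamH, lamS, lamSH, lamSD = 0.13, 0.10, 0.20, 0.10

# Daisy coefficients, eq. (68)
cH = (g1**2 + 3*g2**2) / 16 + yt**2 / 4 + (6*lam1 + 3*lam4 + 12*lamH + 12*lamSH) / 24
cD = (g1**2 + 2*g2**2) / 4 + (2*lam1 + 8*lam2 + 6*lam3 + lam4 + lamSD) / 12
cS = (4*lamS + 2*lamSH + 3*lamSD) / 12

print(f"c_H = {cH:.3f}, c_Delta = {cD:.3f}, c_S = {cS:.3f}")
```

All three coefficients are positive for positive couplings, so the thermal corrections $c_{ij}T^{2}$ raise the effective mass-squareds as the temperature grows, as expected for symmetry restoration.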
The field-dependent, thermally improved mass-squared matrix for the $CP$-even
scalars, a symmetric $2\times 2$ matrix ${\cal M}_{H}^{2}$ in the basis
$\{h_{d},h_{t}\}$, is given by
$\displaystyle M_{{H}_{11}}^{2}=\tfrac{1}{2}(\lambda_{1}+\lambda_{4})h_{t}^{2}+3\lambda_{H}h_{d}^{2}+\tfrac{1}{2}\lambda_{SH}h_{s}^{2}-\sqrt{2}\mu h_{t}-\mu_{H}^{2}+c_{H}T^{2},$ (69a)
$\displaystyle M_{{H}_{22}}^{2}=\tfrac{1}{2}(\lambda_{1}+\lambda_{4})h_{d}^{2}+3(\lambda_{2}+\lambda_{3})h_{t}^{2}+\tfrac{1}{2}\lambda_{S\Delta}h_{s}^{2}+\mu_{\Delta}^{2}+c_{\Delta}T^{2},$ (69b)
$\displaystyle M_{{H}_{12}}^{2}=M_{{H}_{21}}^{2}=(\lambda_{1}+\lambda_{4})h_{d}h_{t}-\sqrt{2}\mu h_{d}.$ (69c)
The field-dependent, thermally improved mass-squared matrix for the $CP$-odd
scalars, a symmetric $2\times 2$ matrix ${\cal M}_{A}^{2}$ in the basis
$\{a_{d},a_{t}\}$, is given by
$\displaystyle M_{{A}_{11}}^{2}=\tfrac{1}{2}(\lambda_{1}+\lambda_{4})h_{t}^{2}+\lambda_{H}h_{d}^{2}+\tfrac{1}{2}\lambda_{SH}h_{s}^{2}+c_{H}T^{2}+\sqrt{2}\mu h_{t}-\mu_{H}^{2},$ (70a)
$\displaystyle M_{{A}_{22}}^{2}=\tfrac{1}{2}(\lambda_{1}+\lambda_{4})h_{d}^{2}+(\lambda_{2}+\lambda_{3})h_{t}^{2}+\tfrac{1}{2}\lambda_{S\Delta}h_{s}^{2}+\mu_{\Delta}^{2}+c_{\Delta}T^{2},$ (70b)
$\displaystyle M_{{A}_{12}}^{2}=M_{{A}_{21}}^{2}=-\sqrt{2}\mu h_{d}.$ (70c)
The field-dependent, thermally improved mass-squared matrix for the dark
sector, a diagonal $2\times 2$ matrix ${\cal M}_{DM}^{2}$ in the basis
$\{h_{s},a_{s}\}$, is given by
$\displaystyle M_{{DM}_{11}}^{2}=3\lambda_{S}h_{s}^{2}+\tfrac{1}{2}\lambda_{SH}h_{d}^{2}+\tfrac{1}{2}\lambda_{S\Delta}h_{t}^{2}+\frac{1}{\sqrt{2}}\mu_{3}h_{s}+\mu_{S}^{2}+c_{S}T^{2},$ (71a)
$\displaystyle M_{{DM}_{22}}^{2}=\lambda_{S}h_{s}^{2}+\tfrac{1}{2}\lambda_{SH}h_{d}^{2}+\tfrac{1}{2}\lambda_{S\Delta}h_{t}^{2}-\frac{1}{\sqrt{2}}\mu_{3}h_{s}+\mu_{S}^{2}+c_{S}T^{2},$ (71b)
$\displaystyle M_{{DM}_{12}}^{2}=M_{{DM}_{21}}^{2}=0.$ (71c)
The field-dependent, thermally improved mass-squared matrix for the singly
charged Higgs bosons, a symmetric $2\times 2$ matrix ${\cal M}_{H^{\pm}}^{2}$
in the basis $\{h_{d},h_{t}\}$, is given by
$\displaystyle M_{{H^{\pm}}_{11}}^{2}=\tfrac{1}{2}\lambda_{1}h_{t}^{2}+\lambda_{H}h_{d}^{2}+\tfrac{1}{2}\lambda_{SH}h_{s}^{2}-\mu_{H}^{2}+c_{H}T^{2},$ (72a)
$\displaystyle M_{{H^{\pm}}_{22}}^{2}=\tfrac{1}{2}\lambda_{1}h_{d}^{2}+(\lambda_{2}+\lambda_{3})h_{t}^{2}+\frac{1}{4}\lambda_{4}h_{d}^{2}+\frac{1}{2}\lambda_{S\Delta}h_{s}^{2}+\mu_{\Delta}^{2}+c_{\Delta}T^{2},$ (72b)
$\displaystyle M_{{H^{\pm}}_{12}}^{2}=M_{{H^{\pm}}_{21}}^{2}=\frac{1}{2\sqrt{2}}\lambda_{4}h_{d}h_{t}-\mu h_{d}.$ (72c)
The thermally improved, field-dependent mass-squared of the doubly-charged
Higgs is given by
$M_{\Delta^{\pm\pm}}^{2}=\tfrac{1}{2}\lambda_{1}h_{d}^{2}+\lambda_{2}h_{t}^{2}+\tfrac{1}{2}\lambda_{S\Delta}h_{s}^{2}+\mu_{\Delta}^{2}+c_{\Delta}T^{2}\,.$ (73)
In the fermionic sector, we consider only the top quark degrees of freedom; its
field-dependent mass is given by
$m_{t}=\tfrac{1}{\sqrt{2}}y_{t}h_{d}\,.$ (74)
The field-dependent, daisy-improved thermal mass-squareds of the charged gauge
boson $W^{\pm}$ and the neutral electroweak gauge fields $W^{3}$ of $SU(2)$ and
$B$ of $U(1)$ (only the longitudinal modes receive thermal mass corrections)
are given by
$m_{W^{\pm}_{T}}^{2}=\tfrac{1}{4}g_{2}^{2}\left(h_{d}^{2}+2h_{t}^{2}\right),$
$m_{W^{\pm}_{L}}^{2}=\tfrac{1}{4}g_{2}^{2}\left(h_{d}^{2}+2h_{t}^{2}\right)+\frac{5}{2}g_{2}^{2}T^{2},$
$m_{W^{3}_{T}}^{2}=\tfrac{1}{4}g_{2}^{2}\left(h_{d}^{2}+4h_{t}^{2}\right),$
$m_{B_{T}}^{2}=\tfrac{1}{4}g_{1}^{2}\left(h_{d}^{2}+4h_{t}^{2}\right),$
$m_{W^{3}_{T}B_{T}}^{2}=-\tfrac{1}{4}g_{1}g_{2}\left(h_{d}^{2}+4h_{t}^{2}\right),$
$m_{W^{3}_{L}}^{2}=m_{W^{3}_{T}}^{2}+\frac{5}{2}g_{2}^{2}T^{2},$
$m_{B_{L}}^{2}=m_{B_{T}}^{2}+\frac{17}{6}g_{1}^{2}T^{2},$
$m_{W^{3}_{L}B_{L}}^{2}=m_{W^{3}_{T}B_{T}}^{2}.$ (75)
The physical masses of the gauge bosons can be found after diagonalizing the
above matrices. Note that the photon field also acquires non-zero mass at
finite temperatures. The zero-temperature field-dependent mass relations can
be found by putting $T=0$ in the above mass relations.
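The prescription $M^{2}=\text{eigenvalues}[m^{2}+\Pi(T^{2})]$ can be sketched numerically, here for the $CP$-even block of eq. (69). All field values, couplings, and mass parameters below are hypothetical illustration inputs, and the $-\mu_{H}^{2}$ sign convention follows this appendix.

```python
import math
import numpy as np

# Hypothetical field values, couplings, and mass parameters (illustration only)
hd, ht, hs, T = 150.0, 1.0, 0.0, 100.0      # field values and temperature in GeV
lam1, lam2, lam3, lam4 = 0.10, 0.10, 0.10, 0.10
lamH, lamSH, lamSD = 0.13, 0.20, 0.10
mu, muH2, muD2 = 5.0, 7.86e3, 2.08e5        # GeV, GeV^2, GeV^2
cH, cD = 0.53, 0.39                          # daisy coefficients of eq. (68)

# CP-even mass-squared matrix with daisy-improved thermal masses, eq. (69)
M11 = (0.5*(lam1+lam4)*ht**2 + 3*lamH*hd**2 + 0.5*lamSH*hs**2
       - math.sqrt(2)*mu*ht - muH2 + cH*T**2)
M22 = (0.5*(lam1+lam4)*hd**2 + 3*(lam2+lam3)*ht**2 + 0.5*lamSD*hs**2
       + muD2 + cD*T**2)
M12 = (lam1+lam4)*hd*ht - math.sqrt(2)*mu*hd

M2 = np.array([[M11, M12], [M12, M22]])
eigs = np.linalg.eigvalsh(M2)   # thermal mass-squared eigenvalues, ascending
print(eigs)
```

`eigvalsh` is used because the matrix is real and symmetric; the zero-temperature spectrum is recovered by setting `T = 0`, as noted in the text.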
## Appendix D Feynman diagrams for DM annihilation
Figure 16: Feynman diagrams for DM annihilation to SM particles: $S\,S^{*}\to$ SM SM.
Figure 17: Feynman diagrams for DM annihilation to scalar triplet(s): $S\,S^{*}\to W^{+}H^{-},\,ZA^{0},\,XY$, where $\{X,Y\}=\{H^{\pm\pm},H^{\pm},A^{0},H^{0}\}$.
Figure 18: Feynman diagrams for DM semi-annihilation processes: $S\,S\to S\,h^{0},\,S\,H^{0}$.
## Appendix E Theoretical constraints
In this section, we highlight some of the key theoretical constraints on our
model parameter space.
### E.1 Vacuum stability
The scalar potential of the model is required to have a stable vacuum; that is,
the potential must be bounded from below at large field values so that the
scalar fields do not run away. At large field values, the quadratic and cubic
terms in the scalar potential can be neglected in comparison to the quartic
terms, which therefore determine the stability conditions of the scalar
potential. Using the copositivity conditions of vacuum stability, as outlined
in references Kannike:2012pe ; Yang:2021buu , we obtain the following relations
for our model:
$\displaystyle\lambda_{H}\geq 0,~{}~{}\lambda_{S}\geq 0,$
$\displaystyle\lambda_{2}+\lambda_{3}\geq
0,~{}~{}\lambda_{2}+\frac{\lambda_{3}}{2}\geq 0,$
$\displaystyle\lambda_{1}+2\sqrt{\lambda_{H}(\lambda_{2}+\lambda_{3})}\geq
0,~{}~{}(\lambda_{1}+\lambda_{4})+2\sqrt{\lambda_{H}(\lambda_{2}+\lambda_{3})}\geq
0,$
$\displaystyle(2\lambda_{1}+\lambda_{4})+4\sqrt{\lambda_{H}(\lambda_{2}+\frac{\lambda_{3}}{2})}\geq
0,~{}~{}~{}(\lambda_{2}+\lambda_{3})+\sqrt{(\lambda_{2}+\lambda_{3})(\lambda_{2}+\frac{\lambda_{3}}{2})}\geq
0,$ $\displaystyle\lambda_{SH}+2\sqrt{\lambda_{S}\lambda_{H}}\geq
0,~{}~{}\lambda_{S\Delta}+2\sqrt{\lambda_{S}(\lambda_{2}+\lambda_{3})}\geq 0,$
$\displaystyle\lambda_{S\Delta}+2\sqrt{\lambda_{S}(\lambda_{2}+\frac{\lambda_{3}}{2})}\geq
0~{}.$ (76)
It is also important to note that a stable global minimum sets an upper bound
on the dark scalar cubic coupling, $\mu_{3}$ as $\mu_{3}/m_{S}\lesssim
2\sqrt{\lambda_{S}}$ Belanger:2012zr ; Hektor:2019ote . For $\lambda_{S}=1$,
this relation becomes $\mu_{3}\lesssim 2m_{S}$.
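The copositivity conditions of eq. (76) are straightforward to check pointwise during a parameter scan. The sketch below encodes them in a small helper; the two sample points are hypothetical and only illustrate a passing and a failing case.

```python
import math

def vacuum_stable(lamH, lamS, lam1, lam2, lam3, lam4, lamSH, lamSD):
    """Check the copositivity (bounded-from-below) conditions of eq. (76)."""
    if lamH < 0 or lamS < 0:
        return False
    if lam2 + lam3 < 0 or lam2 + lam3/2 < 0:
        return False
    conds = [
        lam1 + 2*math.sqrt(lamH*(lam2 + lam3)),
        (lam1 + lam4) + 2*math.sqrt(lamH*(lam2 + lam3)),
        (2*lam1 + lam4) + 4*math.sqrt(lamH*(lam2 + lam3/2)),
        (lam2 + lam3) + math.sqrt((lam2 + lam3)*(lam2 + lam3/2)),
        lamSH + 2*math.sqrt(lamS*lamH),
        lamSD + 2*math.sqrt(lamS*(lam2 + lam3)),
        lamSD + 2*math.sqrt(lamS*(lam2 + lam3/2)),
    ]
    return all(c >= 0 for c in conds)

# A hypothetical point with all quartics positive trivially passes:
print(vacuum_stable(0.13, 0.1, 0.1, 0.1, 0.1, 0.1, 0.2, 0.1))   # True
# A large negative portal coupling violates lamSH + 2*sqrt(lamS*lamH) >= 0:
print(vacuum_stable(0.13, 0.1, 0.1, 0.1, 0.1, 0.1, -1.0, 0.1))  # False
```

The square roots are only evaluated after the non-negativity of their arguments has been established by the early returns, mirroring the logical order of the conditions in eq. (76).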
### E.2 Perturbativity
To ensure the validity of standard perturbation theory, the one-loop
corrections to the couplings should be smaller than the tree-level couplings.
The quartic and Yukawa couplings of the interaction Lagrangian should therefore
obey the following upper bounds Lerner:2009xg ; Yang:2021buu :
$\displaystyle|\lambda_{H}|\lesssim\frac{2\pi}{3},$
$\displaystyle|\lambda_{1}|\lesssim 4\pi,~{}|\lambda_{2}|\lesssim
2\pi,~{}|\lambda_{3}|\lesssim 2\sqrt{2}\pi,~{}|\lambda_{4}|\lesssim 4\pi,$
$\displaystyle|\lambda_{1}+\lambda_{4}|\lesssim
4\pi,~{}|\lambda_{2}+\lambda_{3}|\lesssim\frac{2\pi}{3},~{}|\lambda_{1}+\frac{\lambda_{4}}{2}|\lesssim
4\pi,~{}|\lambda_{2}+\frac{\lambda_{3}}{2}|\lesssim\pi,$
$\displaystyle|\lambda_{S}|\lesssim\pi,~{}|\lambda_{SH}|\lesssim
4\pi,~{}|\lambda_{S\Delta}|\lesssim 4\pi$ $\displaystyle{\rm
and}~{}~{}~{}|y_{L}|\lesssim\sqrt{4\pi}~{}.$ (77)
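The perturbativity bounds of eq. (77) can likewise be collected into a single pointwise check; the sample values below are hypothetical.

```python
import math

def perturbative(lamH, lam1, lam2, lam3, lam4, lamS, lamSH, lamSD, yL):
    """Check the perturbativity upper bounds of eq. (77)."""
    bounds = [
        (abs(lamH), 2*math.pi/3),
        (abs(lam1), 4*math.pi), (abs(lam2), 2*math.pi),
        (abs(lam3), 2*math.sqrt(2)*math.pi), (abs(lam4), 4*math.pi),
        (abs(lam1 + lam4), 4*math.pi), (abs(lam2 + lam3), 2*math.pi/3),
        (abs(lam1 + lam4/2), 4*math.pi), (abs(lam2 + lam3/2), math.pi),
        (abs(lamS), math.pi), (abs(lamSH), 4*math.pi), (abs(lamSD), 4*math.pi),
        (abs(yL), math.sqrt(4*math.pi)),
    ]
    return all(val < limit for val, limit in bounds)

print(perturbative(0.13, 0.1, 0.1, 0.1, 0.1, 0.1, 0.2, 0.1, 0.01))  # True
```

Since the bounds in eq. (77) are quoted with $\lesssim$, the strict `<` used here should be read as an order-of-magnitude criterion rather than a sharp cut.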
### E.3 Partial wave unitarity
The quartic couplings of the scalar potential can also be constrained by tree-level
unitarity of the theory, derived from all possible $2\to 2$ scattering
amplitudes, which form the $S$ matrix Lee:1977eg ; Arhrib:2011uy ;
Yang:2021buu . The eigenvalues of the $S$ matrix are bounded from above as:
$\displaystyle|\lambda_{H}|\leq 4\pi,~{}~{}|\lambda_{1}|\leq
8\pi,~{}~{}|\lambda_{2}|\leq 4\pi,$ $\displaystyle|\lambda_{SH}|\leq
8\pi,~{}~{}|\lambda_{S\Delta}|\leq 8\pi,$
$\displaystyle|\lambda_{2}+\lambda_{3}|\leq
4\pi,~{}~{}|\lambda_{1}+\lambda_{4}|\leq 8\pi,$
$\displaystyle|\lambda_{1}-\frac{\lambda_{4}}{2}|\leq
8\pi,~{}~{}|\lambda_{1}+\frac{3\lambda_{4}}{2}|\leq 8\pi,$
$\displaystyle|x_{1,2,3}|\leq 8\pi,$ (78)
where $x_{1,2,3}$ are the roots of the following polynomial equation:
$\displaystyle
24\lambda_{1}^{2}\lambda_{S}+24\lambda_{1}\lambda_{4}\lambda_{S}-12\lambda_{1}\lambda_{S\Delta}\lambda_{SH}-192\lambda_{2}\lambda_{H}\lambda_{S}+16\lambda_{2}\lambda_{SH}^{2}-144\lambda_{3}\lambda_{H}\lambda_{S}+12\lambda_{3}\lambda_{SH}^{2}+6\lambda_{4}^{2}\lambda_{S}$
$\displaystyle-6\lambda_{4}\lambda_{S\Delta}\lambda_{SH}+18\lambda_{H}\lambda_{S\Delta}^{2}+x\Big{(}-12\lambda_{1}^{2}-12\lambda_{1}\lambda_{4}+96\lambda_{2}\lambda_{H}+32\lambda_{2}\lambda_{S}+72\lambda_{3}\lambda_{H}+24\lambda_{3}\lambda_{S}-3\lambda_{4}^{2}$
$\displaystyle+24\lambda_{H}\lambda_{S}-3\lambda_{S\Delta}^{2}-2\lambda_{SH}^{2}\Big{)}+x^{2}(-16\lambda_{2}-12\lambda_{3}-12\lambda_{H}-4\lambda_{S})+2x^{3}=0$
(79)
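The roots $x_{1,2,3}$ of the cubic in eq. (79) can be obtained numerically, e.g. with `numpy.roots`. The coupling values below are hypothetical illustration inputs.

```python
import math
import numpy as np

# Hypothetical coupling values (illustration only)
lam1, lam2, lam3, lam4 = 0.10, 0.10, 0.10, 0.10
lamH, lamS, lamSH, lamSD = 0.13, 0.10, 0.20, 0.10

# Coefficients of the cubic in eq. (79): a0 + a1*x + a2*x^2 + a3*x^3 = 0
a0 = (24*lam1**2*lamS + 24*lam1*lam4*lamS - 12*lam1*lamSD*lamSH
      - 192*lam2*lamH*lamS + 16*lam2*lamSH**2 - 144*lam3*lamH*lamS
      + 12*lam3*lamSH**2 + 6*lam4**2*lamS - 6*lam4*lamSD*lamSH
      + 18*lamH*lamSD**2)
a1 = (-12*lam1**2 - 12*lam1*lam4 + 96*lam2*lamH + 32*lam2*lamS
      + 72*lam3*lamH + 24*lam3*lamS - 3*lam4**2 + 24*lamH*lamS
      - 3*lamSD**2 - 2*lamSH**2)
a2 = -16*lam2 - 12*lam3 - 12*lamH - 4*lamS
a3 = 2.0

roots = np.roots([a3, a2, a1, a0])                 # x_1, x_2, x_3
unitary = all(abs(r) <= 8*math.pi for r in roots)  # |x_{1,2,3}| <= 8*pi
print(roots, unitary)
```

Note that `numpy.roots` takes the coefficients in descending powers of $x$, and that for couplings of order one the roots are far below the $8\pi$ unitarity ceiling.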
## Appendix F Phenomenological constraints
Here, we turn our attention to the phenomenological constraints on our model
parameter space.
### F.1 Neutrino Oscillation data
Explaining the smallness of neutrino masses is one of the key motivations of
the model under consideration. Hence, we briefly mention the constraints
imposed on the model parameter space coming from neutrino oscillation data.
Neutrino oscillation experiments provide bounds on neutrino sector parameters
such as mass-squared differences and the mixing angles Esteban:2020cvm . The
$3\sigma$ allowed ranges for these observables depend slightly on the ordering
of neutrino masses. Below we list the latest limits from a global fit
Esteban:2020cvm , for the normal ordering (inverted ordering) of neutrino
masses
$\displaystyle\Delta m^{2}_{21}\in[6.82-8.04]\times 10^{-5}{\text{eV}^{2}},$
$\displaystyle\Delta
m^{2}_{3\ell}\in[2.431-2.598]~{}\Big{(}[-2.583,-2.412]\Big{)}\times
10^{-3}{\text{eV}^{2}},$ $\displaystyle\sin^{2}\theta_{12}\in[0.269-0.343],$
$\displaystyle\sin^{2}\theta_{23}\in[0.407-0.618]~{}\Big{(}[0.411-0.621]\Big{)},$
$\displaystyle\sin^{2}\theta_{13}\in[0.02034-0.02430]~{}\Big{(}[0.02053-0.02436]\Big{)},$
$\displaystyle\delta_{\text{CP}}\in[107-403]^{\circ}~{}\Big{(}[192-360]^{\circ}\Big{)}.$
(80)
Only one set of $3\sigma$ allowed values is quoted above for $\Delta
m^{2}_{21}$ and $\sin^{2}\theta_{12}$, since these two observables are
insensitive to the neutrino mass ordering. In addition, cosmological
observations put an upper bound on the sum of neutrino masses,
$\sum_{i=1}^{3}m_{i}<0.12$ eV Planck:2018vyg .
### F.2 Flavor constraints
The Yukawa interactions of the Higgs triplet with the SM lepton doublets,
which generate the non-zero neutrino masses, can also induce charged lepton
flavor violating (cLFV) processes. The cLFV processes can arise either at tree
level (e.g. $\ell_{\alpha}\to\ell_{\beta}\ell_{\gamma}\ell_{\delta}$) or at loop
level (e.g. $\ell_{\alpha}\to\ell_{\beta}\gamma$), where
$\alpha,\beta,\gamma,\delta$ are flavor indices. The bounds from the cLFV
processes can be quite stringent, and the two channels that impose the
strongest limits are muon decay to $3e$ and $e\gamma$ Dev:2021axj . The
branching fractions (BF) of these two important processes are given by
Kakizaki:2003jk ; Akeroyd:2009nu ; Dinh:2012bp ; Ashanujjaman:2021txz
$\displaystyle{\rm Br}(\mu^{-}\to
e^{+}e^{-}e^{-})=\frac{|(y_{L})^{\dagger}_{ee}(y_{L})_{\mu
e}|^{2}}{4\,G_{F}^{2}m_{H^{\pm\pm}}^{4}}~{},$ $\displaystyle{\rm
Br}(\mu^{-}\to
e^{-}\gamma)=\frac{\alpha|\sum_{i=1}^{3}(y_{L})^{\dagger}_{ei}(y_{L})_{\mu
i}|^{2}}{192\pi
G_{F}^{2}}\left(\frac{1}{m_{H^{\pm}}^{2}}+\frac{8}{m_{H^{\pm\pm}}^{2}}\right)^{2}~{},$
where $(y_{L})_{\alpha\beta}=\frac{\sqrt{2}\,m_{\alpha\beta}^{\nu}}{v_{t}}$.
The experimental upper limits for ${\rm Br}(\mu^{-}\to e^{+}e^{-}e^{-})$ and
${\rm Br}(\mu^{-}\to e^{-}\gamma)$ are $1.0\times 10^{-12}$ Bellgardt:1987du
and $4.2\times 10^{-13}$ TheMEG:2016wtm , respectively. Using them we can
compute the following lower bounds on $v_{t}$ as a function of
$m_{H^{\pm\pm}}$
$v_{t}\gtrsim 0.78\text{--}1.5\,\big{(}0.69\text{--}2.3\big{)}\times
10^{-9}{\rm~{}GeV}\times\bigg{(}\frac{1~{}\rm TeV}{m_{H^{\pm\pm}}}\bigg{)}$
for the normal (inverted) ordering of neutrino masses.
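The $\mu\to 3e$ bound can be inverted for the smallest allowed triplet vev, as in the relation above. The neutrino mass-matrix entries below are hypothetical placeholders (the actual values follow from the oscillation fit and the mass ordering), so the sketch only reproduces the order of magnitude of the quoted bound, not its exact range.

```python
import math

GF = 1.1663787e-5          # Fermi constant in GeV^-2
mHpp = 1000.0              # doubly charged Higgs mass in GeV (hypothetical)
# Hypothetical neutrino mass-matrix entries in GeV (~0.01 eV each):
m_ee, m_mue = 1.0e-11, 1.0e-11

br_limit = 1.0e-12         # experimental bound on Br(mu -> 3e)

# Br(mu -> 3e) = |y_ee^dag y_mue|^2 / (4 GF^2 mHpp^4), with
# (y_L)_ab = sqrt(2) m_ab / v_t, i.e. |y_ee^dag y_mue| = 2 m_ee m_mue / v_t^2.
# Setting Br = br_limit and solving for v_t gives the lower bound:
vt_min = math.sqrt(m_ee * m_mue / (GF * mHpp**2 * math.sqrt(br_limit)))
print(f"v_t >~ {vt_min:.2e} GeV")
```

For these placeholder entries the bound lands at a few $\times 10^{-9}$ GeV for $m_{H^{\pm\pm}}=1$ TeV, in the same ballpark as the range quoted in the text.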
### F.3 Electroweak precision observables
Since the $CP$-even neutral scalar of the triplet develops an induced ${\it
vev}$ in this model, it can lead to a deviation of the oblique $T$ parameter
from the SM prediction at the tree level itself. In addition, a mass-
splitting, $\Delta m$, between the components of the triplet can further shift
the oblique parameters $S,\,T$ and $U$ at one-loop level from the SM
predictions Chun:2012jw ; Aoki:2012jj . The tree-level masses of $W$ and $Z$
bosons within this model get altered, which leads to the following expression
of the EW $\rho$ parameter
$\displaystyle\rho=\frac{m_{W}^{2}}{m_{Z}^{2}\cos^{2}\theta_{W}}=\frac{v_{d}^{2}+2v_{t}^{2}}{v_{d}^{2}+4v_{t}^{2}}\simeq
1-\frac{2v_{t}^{2}}{v_{d}^{2}},$ (81)
where $v_{d}=\sqrt{v^{2}-2v_{t}^{2}}$ with $v=246$ GeV. $\rho$ is connected to
the $T$ parameter at tree level by
$T_{\text{tree}}\simeq-2v_{t}^{2}/v_{d}^{2}/\alpha=(\rho-1)/\alpha$. Needless
to say, the above relations hold only under the assumption $v_{t}\ll v_{d}$.
This assumption is a natural choice in type-II seesaw motivated models, since
the triplet ${\it vev}$ generates the neutrino masses. A tiny $v_{t}$ allows us
to fit neutrino masses and mixings with small Yukawa couplings while satisfying
all flavor constraints. The current measurement of the EW observables
constrains the $\rho$-parameter to $\rho=1.00038\pm 0.00020$
ParticleDataGroup:2020ssz , which translates into a bound $v_{t}\leq 2.6$ GeV
at the $3\sigma$ level.
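The quoted $v_{t}$ bound follows directly from the tree-level $\rho$ relation. Since $\rho\simeq 1-2v_{t}^{2}/v_{d}^{2}$ only allows downward deviations, the $3\sigma$ lower edge of the measurement sets the limit:

```python
import math

v = 246.0                       # EW vev in GeV
rho, drho = 1.00038, 0.00020    # measured rho-parameter and its uncertainty

# rho ~ 1 - 2 v_t^2 / v^2 (using v_d ~ v for v_t << v_d); the 3-sigma lower
# edge of the measurement gives the maximal allowed triplet vev:
rho_min = rho - 3 * drho
vt_max = v * math.sqrt((1 - rho_min) / 2)
print(f"v_t <= {vt_max:.2f} GeV")   # ~2.6 GeV
```

This reproduces the $v_{t}\leq 2.6$ GeV bound quoted above; a more careful treatment using the exact ratio $(v_{d}^{2}+2v_{t}^{2})/(v_{d}^{2}+4v_{t}^{2})$ shifts the result only marginally in this regime.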
Now, assuming $v_{t}\ll v_{d}$, we incorporate the limits from the three EW
oblique parameters $S,\,T$ and $U$ on $\Delta m$. In this paper, we
exclusively consider $v_{t}\ll 1$ GeV, and in this regime the correction to the
$T$ parameter is dominated by the one-loop contributions rather than the small
tree-level correction shown above. References Chun:2012jw ; Aoki:2012jj show
that the oblique parameters constrain $\Delta m$ to be below 40 GeV for a large
range of $m_{H^{\pm\pm}}$ values.
## References
* (1) G. Aad et al. [ATLAS], Phys. Lett. B 716 (2012), 1-29 doi:10.1016/j.physletb.2012.08.020 [arXiv:1207.7214 [hep-ex]].
* (2) S. Chatrchyan et al. [CMS], Phys. Lett. B 716 (2012), 30-61 doi:10.1016/j.physletb.2012.08.021 [arXiv:1207.7235 [hep-ex]].
* (3) A. D. Sakharov, Pisma Zh. Eksp. Teor. Fiz. 5 (1967), 32-35 doi:10.1070/PU1991v034n05ABEH002497
* (4) M. Trodden, Rev. Mod. Phys. 71 (1999), 1463-1500 doi:10.1103/RevModPhys.71.1463 [arXiv:hep-ph/9803479 [hep-ph]].
* (5) G. W. Anderson and L. J. Hall, Phys. Rev. D 45 (1992), 2685-2698 doi:10.1103/PhysRevD.45.2685
* (6) P. Huet and A. E. Nelson, Phys. Rev. D 53 (1996), 4578-4597 doi:10.1103/PhysRevD.53.4578 [arXiv:hep-ph/9506477 [hep-ph]].
* (7) D. E. Morrissey and M. J. Ramsey-Musolf, New J. Phys. 14 (2012), 125003 doi:10.1088/1367-2630/14/12/125003 [arXiv:1206.2942 [hep-ph]].
* (8) W. Hu and S. Dodelson, Ann. Rev. Astron. Astrophys. 40 (2002), 171-216 doi:10.1146/annurev.astro.40.060401.093926 [arXiv:astro-ph/0110414 [astro-ph]].
* (9) D. N. Spergel et al. [WMAP], Astrophys. J. Suppl. 170 (2007), 377 doi:10.1086/513700 [arXiv:astro-ph/0603449 [astro-ph]].
* (10) N. Aghanim et al. [Planck], Astron. Astrophys. 641 (2020), A6 [erratum: Astron. Astrophys. 652 (2021), C4] doi:10.1051/0004-6361/201833910 [arXiv:1807.06209 [astro-ph.CO]].
* (11) T. Kajita, Rev. Mod. Phys. 88 (2016) no.3, 030501 doi:10.1103/RevModPhys.88.030501
* (12) A. B. McDonald, Rev. Mod. Phys. 88 (2016) no.3, 030502 doi:10.1103/RevModPhys.88.030502
* (13) P. Minkowski, Phys. Lett. B 67 (1977), 421-428 doi:10.1016/0370-2693(77)90435-X
* (14) T. Yanagida, Conf. Proc. C 7902131 (1979), 95-99 KEK-79-18-95.
* (15) R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44 (1980), 912 doi:10.1103/PhysRevLett.44.912
* (16) W. Konetschny and W. Kummer, Phys. Lett. B 70 (1977), 433-435 doi:10.1016/0370-2693(77)90407-5
* (17) M. Magg and C. Wetterich, Phys. Lett. B 94 (1980), 61-64 doi:10.1016/0370-2693(80)90825-4
* (18) G. Lazarides, Q. Shafi and C. Wetterich, Nucl. Phys. B 181 (1981), 287-300 doi:10.1016/0550-3213(81)90354-0
* (19) J. Schechter and J. W. F. Valle, Phys. Rev. D 22 (1980), 2227 doi:10.1103/PhysRevD.22.2227
* (20) T. P. Cheng and L. F. Li, Phys. Rev. D 22 (1980), 2860 doi:10.1103/PhysRevD.22.2860
* (21) S. M. Bilenky, J. Hosek and S. T. Petcov, Phys. Lett. B 94 (1980), 495-498 doi:10.1016/0370-2693(80)90927-2
* (22) J. A. Casas, D. G. Cerdeño, J. M. Moreno and J. Quilis, JHEP 05 (2017), 036 doi:10.1007/JHEP05(2017)036 [arXiv:1701.08134 [hep-ph]].
* (23) S. Bhattacharya, P. Ghosh, T. N. Maity and T. S. Ray, JHEP 10 (2017), 088 doi:10.1007/JHEP10(2017)088 [arXiv:1706.04699 [hep-ph]].
* (24) E. Aprile et al. [XENON], Phys. Rev. Lett. 121 (2018) no.11, 111302 doi:10.1103/PhysRevLett.121.111302 [arXiv:1805.12562 [astro-ph.CO]].
* (25) Y. Meng et al. [PandaX-4T], Phys. Rev. Lett. 127 (2021) no.26, 261802 doi:10.1103/PhysRevLett.127.261802 [arXiv:2107.13438 [hep-ex]].
* (26) S. Abdollahi et al. [Fermi-LAT], Phys. Rev. D 95 (2017) no.8, 082007 doi:10.1103/PhysRevD.95.082007 [arXiv:1704.07195 [astro-ph.HE]].
* (27) M. L. Ahnen et al. [MAGIC and Fermi-LAT], JCAP 02 (2016), 039 doi:10.1088/1475-7516/2016/02/039 [arXiv:1601.06590 [astro-ph.HE]].
* (28) R. Apreda, M. Maggiore, A. Nicolis and A. Riotto, Nucl. Phys. B 631 (2002), 342-368 doi:10.1016/S0550-3213(02)00264-X [arXiv:gr-qc/0107033 [gr-qc]].
* (29) C. Grojean, G. Servant and J. D. Wells, Phys. Rev. D 71 (2005), 036001 doi:10.1103/PhysRevD.71.036001 [arXiv:hep-ph/0407019 [hep-ph]].
* (30) D. J. Weir, Phil. Trans. Roy. Soc. Lond. A 376 (2018) no.2114, 20170126 doi:10.1098/rsta.2017.0126 [arXiv:1705.01783 [hep-ph]].
* (31) A. Alves, T. Ghosh, H. K. Guo and K. Sinha, JHEP 12 (2018), 070 doi:10.1007/JHEP12(2018)070 [arXiv:1808.08974 [hep-ph]].
* (32) A. Alves, T. Ghosh, H. K. Guo, K. Sinha and D. Vagie, JHEP 04 (2019), 052 doi:10.1007/JHEP04(2019)052 [arXiv:1812.09333 [hep-ph]].
* (33) A. Alves, D. Gonçalves, T. Ghosh, H. K. Guo and K. Sinha, JHEP 03 (2020), 053 doi:10.1007/JHEP03(2020)053 [arXiv:1909.05268 [hep-ph]].
* (34) A. Alves, D. Gonçalves, T. Ghosh, H. K. Guo and K. Sinha, Phys. Lett. B 818 (2021), 136377 doi:10.1016/j.physletb.2021.136377 [arXiv:2007.15654 [hep-ph]].
* (35) A. Chatterjee, A. Datta and S. Roy, JHEP 06 (2022), 108 doi:10.1007/JHEP06(2022)108 [arXiv:2202.12476 [hep-ph]].
* (36) C. Caprini, M. Chala, G. C. Dorsch, M. Hindmarsh, S. J. Huber, T. Konstandin, J. Kozaczuk, G. Nardini, J. M. No and K. Rummukainen, et al. JCAP 03 (2020), 024 doi:10.1088/1475-7516/2020/03/024 [arXiv:1910.13125 [astro-ph.CO]].
* (37) E. Witten, Phys. Rev. D 30 (1984), 272-285 doi:10.1103/PhysRevD.30.272
* (38) C. J. Hogan, Mon. Not. Roy. Astron. Soc. 218 (1986), 629-636
* (39) J. Ellis, M. Lewicki and J. M. No, JCAP 04 (2019), 003 doi:10.1088/1475-7516/2019/04/003 [arXiv:1809.08242 [hep-ph]].
* (40) T. Alanne, T. Hugle, M. Platscher and K. Schmitz, JHEP 03 (2020), 004 doi:10.1007/JHEP03(2020)004 [arXiv:1909.11356 [hep-ph]].
* (41) P. Amaro-Seoane et al. [LISA], [arXiv:1702.00786 [astro-ph.IM]].
* (42) X. Gong, Y. K. Lau, S. Xu, P. Amaro-Seoane, S. Bai, X. Bian, Z. Cao, G. Chen, X. Chen and Y. Ding, et al. J. Phys. Conf. Ser. 610 (2015) no.1, 012011 doi:10.1088/1742-6596/610/1/012011 [arXiv:1410.7296 [gr-qc]].
* (43) W. R. Hu and Y. L. Wu, Natl. Sci. Rev. 4 (2017) no.5, 685-686 doi:10.1093/nsr/nwx116
* (44) J. Luo et al. [TianQin], Class. Quant. Grav. 33 (2016) no.3, 035010 doi:10.1088/0264-9381/33/3/035010 [arXiv:1512.02076 [astro-ph.IM]].
* (45) G. M. Harry [LIGO Scientific], Class. Quant. Grav. 27 (2010), 084006 doi:10.1088/0264-9381/27/8/084006
* (46) V. Corbin and N. J. Cornish, Class. Quant. Grav. 23 (2006), 2435-2446 doi:10.1088/0264-9381/23/7/014 [arXiv:gr-qc/0512039 [gr-qc]].
* (47) H. Kudoh, A. Taruya, T. Hiramatsu and Y. Himemoto, Phys. Rev. D 73 (2006), 064006 doi:10.1103/PhysRevD.73.064006 [arXiv:gr-qc/0511145 [gr-qc]].
* (48) B. P. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. Lett. 116 (2016) no.6, 061102 doi:10.1103/PhysRevLett.116.061102 [arXiv:1602.03837 [gr-qc]].
* (49) B. P. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. Lett. 119 (2017) no.16, 161101 doi:10.1103/PhysRevLett.119.161101 [arXiv:1710.05832 [gr-qc]].
* (50) B. P. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. X 9 (2019) no.3, 031040 doi:10.1103/PhysRevX.9.031040 [arXiv:1811.12907 [astro-ph.HE]].
* (51) R. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. X 11 (2021), 021053 doi:10.1103/PhysRevX.11.021053 [arXiv:2010.14527 [gr-qc]].
* (52) G. Agazie et al. [NANOGrav], Astrophys. J. Lett. 951 (2023) no.1, L8 doi:10.3847/2041-8213/acdac6 [arXiv:2306.16213 [astro-ph.HE]].
* (53) J. Antoniadis et al. [EPTA], [arXiv:2306.16214 [astro-ph.HE]].
* (54) X. Qi and H. Sun, Phys. Rev. D 107 (2023) no.9, 095026 doi:10.1103/PhysRevD.107.095026 [arXiv:2104.01045 [hep-ph]].
* (55) A. Arhrib, R. Benbrik, M. Chabab, G. Moultaka, M. C. Peyranere, L. Rahili and J. Ramadan, Phys. Rev. D 84 (2011), 095005 doi:10.1103/PhysRevD.84.095005 [arXiv:1105.1925 [hep-ph]].
* (56) P. Fileviez Perez, T. Han, G. y. Huang, T. Li and K. Wang, Phys. Rev. D 78 (2008), 015018 doi:10.1103/PhysRevD.78.015018 [arXiv:0805.3536 [hep-ph]].
* (57) E. W. Kolb and M. S. Turner, Front. Phys. 69 (1990), 1-547 doi:10.1201/9780429492860
* (58) A. Hektor, A. Hryczuk and K. Kannike, JHEP 03 (2019), 204 doi:10.1007/JHEP03(2019)204 [arXiv:1901.08074 [hep-ph]].
* (59) J. M. Alarcon, L. S. Geng, J. Martin Camalich and J. A. Oller, Phys. Lett. B 730 (2014), 342-346 doi:10.1016/j.physletb.2014.01.065 [arXiv:1209.2870 [hep-ph]].
* (60) A. Alloul, N. D. Christensen, C. Degrande, C. Duhr and B. Fuks, Comput. Phys. Commun. 185 (2014), 2250-2300 doi:10.1016/j.cpc.2014.04.012 [arXiv:1310.1921 [hep-ph]].
* (61) G. Belanger, F. Boudjema, A. Pukhov and A. Semenov, Comput. Phys. Commun. 176 (2007), 367-382 doi:10.1016/j.cpc.2006.11.008 [arXiv:hep-ph/0607059 [hep-ph]].
* (62) V. Vaskonen, Phys. Rev. D 95 (2017) no.12, 123515 doi:10.1103/PhysRevD.95.123515 [arXiv:1611.02073 [hep-ph]].
* (63) J. Ellis, M. Lewicki, M. Merchand, J. M. No and M. Zych, JHEP 01 (2023), 093 doi:10.1007/JHEP01(2023)093 [arXiv:2210.16305 [hep-ph]].
* (64) S. R. Coleman and E. J. Weinberg, Phys. Rev. D 7 (1973), 1888-1910 doi:10.1103/PhysRevD.7.1888
* (65) S. P. Martin, Phys. Rev. D 90 (2014) no.1, 016013 doi:10.1103/PhysRevD.90.016013 [arXiv:1406.2355 [hep-ph]].
* (66) J. Elias-Miro, J. R. Espinosa and T. Konstandin, JHEP 08 (2014), 034 doi:10.1007/JHEP08(2014)034 [arXiv:1406.2652 [hep-ph]].
* (67) S. Baum, M. Carena, N. R. Shah, C. E. M. Wagner and Y. Wang, JHEP 03 (2021), 055 doi:10.1007/JHEP03(2021)055 [arXiv:2009.10743 [hep-ph]].
* (68) L. Dolan and R. Jackiw, Phys. Rev. D 9 (1974), 3320-3341 doi:10.1103/PhysRevD.9.3320
* (69) S. Weinberg, Phys. Rev. D 9 (1974), 3357-3378 doi:10.1103/PhysRevD.9.3357
* (70) D. A. Kirzhnits and A. D. Linde, Annals Phys. 101 (1976), 195-238 doi:10.1016/0003-4916(76)90279-7
* (71) J. R. Espinosa, M. Quiros and F. Zwirner, Phys. Lett. B 314 (1993), 206-216 doi:10.1016/0370-2693(93)90450-V [arXiv:hep-ph/9212248 [hep-ph]].
* (72) R. R. Parwani, Phys. Rev. D 45 (1992), 4695 [erratum: Phys. Rev. D 48 (1993), 5965] doi:10.1103/PhysRevD.45.4695 [arXiv:hep-ph/9204216 [hep-ph]].
# Efficient Mirror Detection via Multi-level Heterogeneous Learning
Ruozhen He, Jiaying Lin, Rynson W.H. Lau∗ Corresponding authors: Jiaying Lin
and Rynson W.H. Lau
###### Abstract
We present HetNet (Multi-level Heterogeneous Network), a highly efficient
mirror detection network. Current mirror detection methods focus more on
performance than efficiency, limiting real-time applications (such as on
drones). Their lack of efficiency arises from the common design of adopting
homogeneous modules at different levels, which ignores the differences between
features at different levels. In contrast, HetNet detects potential mirror
regions initially through low-level understandings (e.g., intensity contrasts)
and then combines with high-level understandings (contextual discontinuity for
instance) to finalize the predictions. To perform accurate yet efficient
mirror detection, HetNet follows an effective architecture that obtains
specific information at different stages to detect mirrors. We further propose
a multi-orientation intensity-based contrasted module (MIC) and a reflection
semantic logical module (RSL), equipped on HetNet, to predict potential mirror
regions by low-level understandings and analyze semantic logic in scenarios by
high-level understandings, respectively. Compared to the state-of-the-art
method, HetNet runs 664$\%$ faster and draws an average performance gain of
8.9$\%$ on MAE, 3.1$\%$ on IoU, and 2.0$\%$ on F-measure on two mirror
detection benchmarks. The code is available at https://github.com/Catherine-R-
He/HetNet.
## Introduction
Mirrors are common objects in our daily lives. The reflection of mirrors may
cause depth prediction errors and confusion about reality and virtuality.
Ignoring them in computer vision tasks may cause severe safety issues in
situations such as drone and robotic navigation. In addition, owing to limited
computational resources, many application scenarios depend heavily on model
efficiency, making efficient mirror detection essential for real-time
computer vision applications.
Figure 1: Comparison of our proposed method with Mirror Detection and SOD
models on F-measure, FLOPs (GMac), and FPS using the MSD dataset. Our method
achieves a new SOTA result with considerable efficiency.
Recently, Yang et al. (Yang et al. 2019) propose MirrorNet based on contextual
contrasted features. Lin et al. (Lin, Wang, and Lau 2020) propose PMD, which
considers content similarity. Guan et al. (Guan, Lin, and Lau 2022) propose
SANet, which focuses on semantic associations. Though experimental results
show their superior performances on mirror detection, these methods suffer
from huge computation costs since they apply the same modules for both low-
level features with large spatial resolutions and high-level features with
small spatial resolutions at every stage. Besides, they heavily rely on
post-processing algorithms, e.g., CRF (Krähenbühl and Koltun 2011), which
limits the usage of these methods in real-world scenarios that demand
real-time processing, as demonstrated in Figure 1. For more
comparisons, we also include some methods from a related task, salient object
detection (SOD). Although some of them (e.g., LDF (Wei et al. 2020), CPDNet
(Wu, Su, and Huang 2019)) contain few FLOPs with high FPS, they are not able
to achieve competitive performance. Thus, it is both challenging and worthwhile
to propose a method that balances accuracy and efficiency.
Figure 2: Illustrations of different network architectures used in the
existing mirror detection methods. (a) to (d) show the network architectures of
MirrorNet (Yang et al. 2019), PMD (Lin, Wang, and Lau 2020), SANet (Guan, Lin,
and Lau 2022), and HetNet, respectively. We denote different modules in
different colors. Both MirrorNet and PMD share the same high-level design,
which uses the same type of modules to learn from the backbone features at
different levels, while SANet aggregates all-level features simultaneously.
Different from these designs, we split the backbone features as low- and high-
level features, and adopt different modules to learn them. For example, in
(d), the low-level features are fed into multi-orientation intensity-based
contrasted modules (in yellow), and the high-level features are forwarded into
reflection semantic logical modules (in green). Such design can fully exploit
features at different levels effectively and efficiently.
Figure 2 compares different network architectures for mirror detection. As
shown in Figure 2(a-c), existing mirror detection network architectures use
the same module for both low-level and high-level features (Yang et al. 2019;
Lin, Wang, and Lau 2020; Guan, Lin, and Lau 2022). However, although low-level
features contain the representation of colors, shapes, and textures, learning
these features require a higher computational cost due to their larger spatial
resolutions compared with high-level features. On the other hand, high-level
features involve more semantic information, and it is more difficult for
models to directly extract precise mirror boundaries as high-level features
contain only rough spatial information. On account of the representation gap
between low-level and high-level features, it is inappropriate to adopt the
same module for both types of features in model design.
To overcome these limitations, we propose in this paper a highly efficient
HetNet (Multi-level Heterogeneous Network). We observe that both low-level and
high-level understandings assist mirror detection. We also notice that cells
on the retina convert the received light signals into electrical signals (Wald
1935), which are then processed and transmitted to the visual cortex for
further information processing, integration and abstraction (Hubel and Wiesel
1962), thereby forming vision. HetNet mimics this process in mirror detection.
Specifically, human eyes initially receive low-level information (e.g.,
intensity contrast, colors) from the scene, and after transmission and
abstraction (e.g., objects’ edges and shapes) in our brains, we confirm the
reflection semantic high-level understandings (e.g., content similarity,
contextual contrast) of objects to finally determine mirrors. Considering
these observations, we propose our new model HetNet to take advantage of the
characteristics of low-level and high-level features individually. HetNet
includes multi-orientation intensity-based contrasted (MIC) modules to learn
low-level features at shallow stages to help localize mirrors, and reflection
semantic logical (RSL) modules to help extract high-level understandings at
deep stages and then aggregate them with low-level understandings to output
the final mirror mask. In addition, we fully use the backbone network for
better learning of low-level features without a huge computational cost. With
the benefit of the disentangled learning on low-level and high-level features,
our model exploits low-level and high-level features effectively and
efficiently. Experimental results show that with the proper heterogeneous
design for features of different levels, our model outperforms the state-of-
the-art mirror detection method PMD (Lin, Wang, and Lau 2020) with 72.73$\%$
fewer FLOPs while running 664$\%$ faster.
Our main contributions are summarized as follows:
* $\bullet$
We propose the first highly efficient mirror detection model HetNet which
learns specific understandings via heterogeneous modules at different levels.
Unlike existing mirror detection methods that adopt the same modules in all
stages, our heterogeneous network reduces the computational cost via a design
tailored to the different levels of features.
* $\bullet$
We propose a novel model that consists of multi-orientation intensity-based
contrasted (MIC) modules for initial localization by low-level features in
multi-orientation, and reflection semantic logical (RSL) modules for semantic
analysis through high-level features.
* $\bullet$
Experiments demonstrate that HetNet outperforms all relevant baselines,
achieving outstanding efficiency (664$\%$ faster) and accuracy (an average
enhancement of 8.9$\%$ on MAE, 3.1$\%$ on IoU, and 2.0$\%$ on F-measure on two
benchmarks) compared with the SOTA method PMD.
## Related Works
### Mirror Detection
The initial work of automatic mirror detection was proposed by Yang et al.
(Yang et al. 2019). They localize mirrors through multi-scale contextual
contrasting features. Thus, this method has limitations if the mirror and non-
mirror contents are similar. To solve the problem, Lin et al. (Lin, Wang, and
Lau 2020) propose a method focused on the relationship between features of
mirror and non-mirror areas. Recently, two concurrent works (Tan et al. 2022)
and (Huang et al. 2023) adopt heavy structures (e.g., transformer) for mirror
detection at the cost of low efficiency. However, these methods fail when an
area merely appears to be a mirror from a contextual perspective. They are also
not efficient enough for real-time mirror detection.
To overcome these limitations, our method first localizes mirror regions
based on intensity contrast. It then integrates contextual relationships
inside and outside mirrors to refine the final predicted regions. Experimental
results show that our approach performs better on both the MSD (Yang et al.
2019) and PMD (Lin, Wang, and Lau 2020) datasets.
### Salient Object Detection
Salient object detection identifies the most salient objects in images. Early methods are mainly based
on low-level features such as color and contrast (Achanta et al. 2009) and
spectral residual (Hou and Zhang 2007). Recently, most approaches have
depended on deep learning. Qin et al. (Qin et al. 2019) propose a densely
supervised encoder-decoder with a residual refine module to generate and
refine saliency maps. Chen et al. (Chen et al. 2020) propose a global context-
aware progressive aggregation network to aggregate multi-level features. Pang
et al. (Pang et al. 2020) design an aggregate interaction module to extract
useful inter-layer features via interactive learning. Ma et al. (Ma, Xia, and
Li 2021) extract effective features and denoise through an adjacent fusion
module. Liu et al. (Liu et al. 2021) propose a transformer-based model from a
sequence-to-sequence perspective. Nevertheless, the mirror reflects a part of
a scene, including both salient and non-salient objects. Hence, salient object
detection may not detect mirrors precisely.
### Shadow Detection
Shadow detection identifies or removes shadow areas in images. Nguyen et al. (Nguyen et al.
2017) propose a conditional generative adversarial network supporting multi-
sensitivity level shadow generation. Hu et al. (Hu et al. 2018) propose a
direction-aware spatial context module based on spatial RNN to learn spatial
contexts. Zhu et al. (Zhu et al. 2018) refine context features recurrently
through a recurrent attention residual module. Zheng et al. (Zheng et al.
2019) design a distraction-aware shadow module to solve indistinguishable area
problems. Zhu et al. (Zhu et al. 2021) propose a feature decomposition and
reweighting scheme to adjust the significance of intensity and other features. Han et
al. (Han et al. 2022) further incorporate shadow detection in a blind image
decomposition setting. The main factor of shadow detection is the distinct
intensity contrast between non-shadow and shadow areas. However, there are
usually no strong intensity contrasts in the mirror detection scene but many
weak ones. Therefore, it is hard to detect mirrors by shadow detection
methods.
## Methodology
HetNet is based on two observations. We observe that humans are easily
attracted to regions with distinctive low-level features (e.g., intensity
contrast) first, and then pay attention to high-level information (e.g.,
content similarity, contextual contrast) to check for object details to detect
mirrors. These observations motivate us to learn low-level features at shallow
stages and extract high-level features at deep stages with heterogeneous
modules. Figure 3 illustrates the pipeline.
Figure 3: An overview of our proposed method. We first use ResNeXt-101 (Xie et
al. 2017) as a backbone and a global extractor (GE) to extract multi-scale
image features. We then apply multi-orientation intensity-based contrasted
(MIC) modules at the first three stages and reflection semantic logical (RSL)
modules at the remaining stages. We use edge and auxiliary output supervisions
alongside the main output during the training process.
### Overall Structure
After obtaining the multi-scale image features from the backbone network (Xie
et al. 2017), multi-orientation intensity-based contrasted (MIC) modules are
used in the first three stages, while reflection semantic logical (RSL)
modules and a global extractor (GE) are used in the last three, as shown in
Figure 3. The global extractor (GE) extracts multi-scale image features
following the pyramid pooling module (Zhao et al. 2017). Outputs after MIC,
RSL and GE are denoted as $f_{i}$, where $i$ is the stage number starting from
1 to 6. To learn intensity-based low-level understandings, $f_{1}$ and
$f_{2}$ are fused into $f_{21}$, and $f_{2}$ and $f_{3}$ into $f_{22}$.
To integrate reflection semantic understandings, we fuse $f_{6}$
with $f_{5}$ first and then with $f_{4}$ to produce $f_{23}$. Finally, after
$f_{23}$ is aggregated with $f_{22}$ to $f_{31}$, the output feature map is a
fusion of $f_{31}$ and $f_{21}$. The fusion strategy is multiplication and two
3$\times$3 convolution layers with BatchNorm, where the low-level features
$f_{low}$ and high-level features $f_{high}$ are fused in a cross aggregation
strategy (Zhao et al. 2021; Cai et al. 2020; Chen et al. 2018). First, the
interim low-level features are computed by multiplying $f_{low}$ with the
upsampled $f_{high}$. The interim high-level features are the product of $f_{high}$ and
$f_{low}$ processed by a 3$\times$3 convolution layer. The interim low-level
and high-level features are fused after applying a 3$\times$3 convolution
layer with BatchNorm and ReLU.
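As a rough illustration of this cross aggregation strategy, the following NumPy sketch mimics the data flow only: the convolution, BatchNorm, and ReLU stages are stubbed out as an identity function, nearest-neighbour upsampling is assumed, and summing the two interim maps is our assumption, since the paper does not spell out the final combination operator.

```python
import numpy as np

def upsample2x(f):
    """Nearest-neighbour 2x spatial upsampling of a (C, H, W) feature map."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

def cross_aggregate(f_low, f_high, bconv=lambda f: f):
    """Sketch of the cross aggregation fusion; `bconv` stands in for the
    3x3 convolution + BatchNorm (+ ReLU) stages, stubbed as identity here."""
    # Interim low-level features: f_low multiplied by the upsampled f_high.
    interim_low = f_low * upsample2x(f_high)
    # Interim high-level features: f_high multiplied by a conv-processed,
    # spatially matched f_low (strided subsampling stands in for the conv).
    interim_high = f_high * bconv(f_low[:, ::2, ::2])
    # Fuse the two interim maps after a 3x3 conv with BatchNorm and ReLU
    # (summation here is an assumption for illustration).
    return bconv(interim_low + upsample2x(interim_high))

f_low = np.random.rand(4, 8, 8)    # larger spatial resolution
f_high = np.random.rand(4, 4, 4)   # smaller spatial resolution
fused = cross_aggregate(f_low, f_high)
print(fused.shape)  # (4, 8, 8)
```

The fused map keeps the low-level branch's spatial resolution, which is why precise boundaries can survive the fusion.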
### The MIC Module
Only learning contextual information is insufficient, especially when
contextual information is limited or complex. Thus, utilizing an additional
strong cue to facilitate mirror detection is necessary. Gestalt psychology
(Koffka 2013) believes that most people see the whole scene first, and then
pay attention to individual elements of the scene. In addition, the whole is
not equivalent to the sum of the individual elements. Instead, it takes into
account the degree of association of these elements (e.g., shape, position,
size, color). Based on this, we believe that observing the same scene from
different orientations may yield different information. We use Intensity-based
Contrasted Feature Extractors (ICFEs) to imitate orientation-selective visual
cortex cells (Hubel and Wiesel 1962), each proficient in learning features
along one orientation. Strengthened
contrast information is acquired by combining information learned from two
single orientations. Considering low-level contrasts between mirror and non-
mirror regions, we first design a MIC module to focus on two-orientation low-
level contrasts to localize possible mirror regions. In addition, to reduce
computational costs, ICFEs process input features as two parallel 1D features
in two directions separately.
Figure 4: The architecture of the multi-orientation intensity-based contrasted
(MIC) module. We first rotate the input features to obtain the other
orientation and then use two Intensity-based Contrasted Feature Extractors
(ICFEs) to extract low-level features in two orientations. After rotating the
features back to the original orientation, we multiply them and take the
product into convolution layers to compute low-level contrasts.
A MIC module consists of two Intensity-based Contrasted Feature Extractor
(ICFE) modules, as shown in Figure 4. Given the input image features after a
1$\times$1 convolution $f^{low}_{in}$, we first extract contrast features
$f^{low}_{1}$ in one orientation with an ICFE directly, and then extract
contrast features $f^{low}_{2}$ in the other orientation after rotating the
input 90 degrees counterclockwise. To combine the two-orientation contrasts in
the original orientation, we compute $f^{low}_{3}$ as the element-wise multiplication of
$f^{low}_{1}$ and $f^{low}_{2}$ rotated back. Finally, we use a 3$\times$3
convolution and a 1$\times$1 convolution layer to extract the intensity-based
contrasted low-level features $f^{low}_{out}$. Each of the convolution layers
is followed by BatchNorm and ReLU.
$\displaystyle f^{low}_{1}=\textbf{ICFE}(f^{low}_{in}),\quad f^{low}_{2}=\textbf{ICFE}(\textit{Rot}(f^{low}_{in},1)),$ (1)
$\displaystyle f^{low}_{3}=f^{low}_{1}\odot\textit{Rot}(f^{low}_{2},-1),$ (2)
$\displaystyle f^{low}_{out}=\textbf{BConv}_{1\times 1}(\textbf{BConv}_{3\times 3}(f^{low}_{3})),$ (3)
where $\textit{Rot}(f,d)$ denotes rotating $f$ by $90^{\circ}$
counterclockwise $d$ times, $\odot$ represents element-wise multiplication,
and $\textbf{BConv}_{k\times k}(\cdot)$ refers to a $k\times k$ convolution
with BatchNorm and ReLU activation function.
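The data flow of Eqs. (1)-(3) can be sketched in a few lines of Python. The sketch below is illustrative only: the ICFE is passed in as a callable (here an identity stand-in), the final BConv layers of Eq. (3) are omitted, and `rot` plays the role of Rot(f, d).

```python
import numpy as np

def rot(f, d):
    # Rot(f, d): rotate a (C, H, W) feature map counterclockwise by 90 degrees, d times
    return np.rot90(f, k=d, axes=(1, 2))

def mic_sketch(f_in, icfe):
    f1 = icfe(f_in)          # Eq. (1): contrasts in the original orientation
    f2 = icfe(rot(f_in, 1))  # Eq. (1): contrasts in the rotated orientation
    f3 = f1 * rot(f2, -1)    # Eq. (2): rotate back, combine element-wise
    return f3                # Eq. (3)'s BConv_3x3 and BConv_1x1 are omitted

# With an identity stand-in for the ICFE, the output is simply f * f
f = np.random.rand(4, 8, 8)
out = mic_sketch(f, icfe=lambda x: x)
print(out.shape)  # (4, 8, 8)
```

With a real ICFE the two branches learn different single-orientation contrasts, so the product in Eq. (2) keeps only regions salient in both orientations.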
Inspired by the direction-aware strategy (Hou, Zhou, and Feng 2021), which
embeds spatial information with a pair of 1D feature encoders instead of 2D
global pooling, the input $f^{low}_{in}$ of the ICFE is first pooled
horizontally and vertically. The concatenated pooling results go through a
1$\times$1 convolution layer with BatchNorm and the Swish function. After
processing by two split branches, $f^{mid}_{h}\in\mathbb{R}^{C\times H\times
1}$ and $f^{mid}_{w}\in\mathbb{R}^{C\times 1\times W}$ each pass through a
1$\times$1 convolution layer and a sigmoid function before being multiplied
together with $f^{low}_{in}$. Formally, we have:
$\displaystyle f^{mid}_{1}=\textbf{SConv}_{1\times 1}(\mathcal{P}_{h}(f^{low}_{in})\copyright permute(\mathcal{P}_{v}(f^{low}_{in}))),$ (4)
$\displaystyle f^{mid}_{h},f^{mid}_{w}=split(f^{mid}_{1}),$ (5)
$\displaystyle f^{mid}_{out}=\sigma(\textbf{Conv}_{1\times 1}(f^{mid}_{h}))\odot\sigma(\textbf{Conv}_{1\times 1}(f^{mid}_{w}))\odot f^{low}_{in},$ (6)
where $\mathcal{P}_{h}(\cdot)$ and $\mathcal{P}_{v}(\cdot)$, $\copyright$, and
$\sigma$ denote horizontal and vertical average pooling, concatenation, and
the sigmoid function, respectively. $\textbf{Conv}_{k\times k}(\cdot)$
represents a $k\times k$ convolution, and $\textbf{SConv}_{k\times k}(\cdot)$
refers to a $\textbf{Conv}_{k\times k}(\cdot)$ with BatchNorm and Swish
activation function.
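The pooling-and-gating logic of Eqs. (4)-(6) can be sketched as follows. This is a simplified illustration: the 1$\times$1 convolutions, BatchNorm, and Swish of the paper are omitted, so only the shape manipulations and the two-direction sigmoid gating remain.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def icfe_sketch(f_in):
    # f_in: (C, H, W) feature map
    C, H, W = f_in.shape
    p_h = f_in.mean(axis=2, keepdims=True)  # (C, H, 1) horizontal pooling
    p_v = f_in.mean(axis=1, keepdims=True)  # (C, 1, W) vertical pooling
    # Eq. (4): permute p_v to (C, W, 1), concatenate (the 1x1 SConv is omitted)
    f_mid1 = np.concatenate([p_h, p_v.transpose(0, 2, 1)], axis=1)  # (C, H+W, 1)
    # Eq. (5): split back into the two directional encodings
    f_h = f_mid1[:, :H, :]                     # (C, H, 1)
    f_w = f_mid1[:, H:, :].transpose(0, 2, 1)  # (C, 1, W)
    # Eq. (6): gate the input with both attention maps (the 1x1 convs are omitted)
    return sigmoid(f_h) * sigmoid(f_w) * f_in

out = icfe_sketch(np.random.rand(4, 8, 10))
print(out.shape)  # (4, 8, 10)
```

Note that the two 1D encodings broadcast back to the full $(C, H, W)$ shape, which is what keeps the cost of this attention linear in $H+W$ rather than $H\times W$.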
### The RSL Module
As there are usually many low-level contrasts in a scene, detecting mirrors
from low-level understanding alone can be difficult. Owing to reflection,
regions outside a mirror are similar to the contents inside it. Moreover,
reflected contents may be distinct from the objects around the mirror, making
content discontinuity a useful clue. Based on these observations, we use an
RSL module to learn high-level understanding that, combined with the initial
localization, helps finalize mirror detection.
Figure 5: The architecture of the reflection semantic logical (RSL) module.
The RSL extracts information in four branches with different numbers of
convolution layers, kernel sizes, and dilation rates. Each branch extracts
semantic features at a particular receptive field, so integrating all branches
covers a wider receptive field. To obtain the output features $f^{s}_{i}$ of
branch $i$ from the input high-level features $f^{high}_{in}$ in our RSL, we
have:
$\displaystyle f^{s}_{1}=\mathcal{N}(\textbf{Conv}_{1\times 1}(f^{high}_{in})),$ (7)
$\displaystyle f^{s}_{2}=\mathcal{N}(\textbf{Conv}_{3\times 3}^{p,d=7}(\textbf{BConv}_{7\times 7}^{p=3}(\textbf{BConv}_{1\times 1}(f^{high}_{in})))),$ (8)
$\displaystyle f^{s}_{3}=\mathcal{N}(\textbf{Conv}_{3\times 3}^{p,d=7}(\textbf{BConv}_{7\times 7}^{p=3}(\textbf{BConv}_{7\times 7}^{p=3}(\textbf{BConv}_{1\times 1}(f^{high}_{in}))))),$ (9)
$\displaystyle f^{s}_{4}=\mathcal{N}(\textbf{Conv}_{1\times 1}(f^{high}_{in})),$ (10)
where $\mathcal{N}$ denotes BatchNorm and $\mathcal{R}$ denotes ReLU.
Superscripts $p$ and $d$ of Conv-related modules represent padding and
dilation rate, respectively; the default padding and dilation rate are 1.
We then combine the output features from all branches:
$\displaystyle f^{s}_{mid}=\mathcal{N}(\textbf{Conv}_{3\times 3}(f^{s}_{1}\copyright f^{s}_{2}\copyright f^{s}_{3})),$ (11)
$\displaystyle f^{s}_{out}=\mathcal{R}(f^{s}_{mid}+f^{s}_{4}).$ (12)
With this design, our RSL acquires rich reflection semantic logical
information to distinguish real mirrors from the potential regions predicted
by the previous modules.
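The growth of receptive field across the branches can be checked with the standard formula for stride-1 convolution stacks, where a layer with kernel $k$ and dilation $d$ adds $(k-1)d$. The numbers below follow from the layer stacks in Eqs. (7)-(10) as written; they are an illustration, not figures from the paper.

```python
def receptive_field(layers):
    # layers: sequence of (kernel_size, dilation) pairs, stride 1 throughout
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# Branch stacks of Eqs. (7)-(10): (kernel, dilation) per layer
branch1 = receptive_field([(1, 1)])                          # Eq. (7)
branch2 = receptive_field([(1, 1), (7, 1), (3, 7)])          # Eq. (8)
branch3 = receptive_field([(1, 1), (7, 1), (7, 1), (3, 7)])  # Eq. (9)
print(branch1, branch2, branch3)  # 1 21 27
```

The dilated 3$\times$3 layer alone contributes as much receptive field as two plain 7$\times$7 layers, which is why the branches cover markedly different spatial extents at similar cost.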
### Loss Function
During the training process, we apply multi-scale supervision for mirror maps
and also supervise edge extraction for initial localization. We use the pixel
position aware (PPA) loss (Wei, Wang, and Huang 2020) for multi-scale mirror
map supervision, and binary cross entropy (BCE) loss for mirror edge
supervision. The PPA loss is the sum of weighted BCE (wBCE) loss and weighted
IoU (wIoU) loss. wBCE loss concentrates more on hard pixels (e.g., holes) than
the BCE loss (De Boer et al. 2005). wIoU measures global structure and pays
more attention to important pixels than the IoU loss (Máttyus, Luo, and
Urtasun 2017). The final loss function is therefore:
$\displaystyle Loss=L_{bce}+\sum\limits^{4}_{i=0}\frac{1}{2^{i}}L_{ppa}^{i},$ (13)
where $L_{ppa}^{i}$ is the pixel position aware (PPA) loss between the $i$-th
mirror map and the ground-truth mirror map, and $L_{bce}$ is the binary
cross-entropy (BCE) loss for the mirror edge supervision.
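The weighting in Eq. (13) halves the contribution of each coarser scale. A minimal sketch of the combination, with the individual loss values assumed to be precomputed scalars:

```python
def total_loss(l_bce, l_ppa_per_scale):
    # Eq. (13): edge BCE loss plus PPA losses over scales i = 0..4,
    # each weighted by 1 / 2**i
    return l_bce + sum(l / 2**i for i, l in enumerate(l_ppa_per_scale))

# Five equal per-scale losses of 1.0 contribute 1 + 1/2 + 1/4 + 1/8 + 1/16
print(total_loss(0.5, [1.0] * 5))  # 2.4375
```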
## Experiments
### Datasets
We conduct experiments on two datasets: MSD (Yang et al. 2019) and PMD (Lin,
Wang, and Lau 2020). MSD focuses more on similar indoor scenes, while PMD
contains more diverse scenes. MSD includes 3,063 images for training and 955
for testing, while PMD has 5,096 images for training and 571 for testing. We
train our method on each training set and test it separately.
### Evaluation Metrics
We adopt three evaluation metrics to quantitatively evaluate the models: Mean
Absolute Error (MAE), Intersection over Union (IoU), and F-measure. MAE is the
average pixel-wise error between the predicted mask and the ground truth;
lower is better. F-measure ($F_{\beta}$) is a trade-off between precision and
recall, computed as
$F_{\beta}=\frac{(1+\beta^{2})\cdot Precision\cdot Recall}{\beta^{2}\cdot Precision+Recall},$
where $\beta^{2}$ is set to 0.3 to weight precision more heavily (Achanta et
al. 2009); larger $F_{\beta}$ is better.
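The formula above is straightforward to evaluate; a minimal sketch with $\beta^{2}=0.3$ (the values below are only illustrative inputs):

```python
def f_measure(precision, recall, beta_sq=0.3):
    # F_beta with beta^2 = 0.3, emphasizing precision over recall
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall)

# Because beta^2 < 1, precision is weighted more heavily:
# swapping precision and recall changes the score
print(round(f_measure(0.9, 0.8), 4))  # 0.8748
print(round(f_measure(0.8, 0.9), 4))  # 0.8211
```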
### Implementation Details
We implement our model in PyTorch and conduct experiments on a GeForce
RTX2080Ti GPU. We use ResNeXt-101 (Xie et al. 2017) pretrained on ImageNet as
our backbone network. Input images are resized to multiple scales with random
crops and horizontal flips during training. We use the stochastic gradient
descent (SGD) optimizer with a momentum of 0.9 and a weight decay of 5e-4. In
the training phase, the maximum learning rate is 1e-2, the batch size is 12,
and training runs for 150 epochs, taking around 5 hours. For inference, input
images are only resized to 352$\times$352, and final maps are predicted
directly without any post-processing.
### Comparison to the State-of-the-art Methods
To demonstrate the effectiveness and efficiency of our method, we compare it
with 10 state-of-the-art methods, including salient object detection methods
(R3Net (Deng et al. 2018), CPDNet (Wu, Su, and Huang 2019), EGNet (Zhao et al.
2019), LDF (Wei et al. 2020), MINet (Pang et al. 2020), SETR (Zheng et al.
2021), VST (Liu et al. 2021)) and mirror detection methods (MirrorNet (Yang et
al. 2019), PMD (Lin, Wang, and Lau 2020), SANet (Guan, Lin, and Lau 2022)).
Table 1 presents the quantitative comparison on the three metrics; our method
achieves the best performance on all of them. In addition, as shown in Table
2, we compare parameters, FLOPs, and FPS with relevant methods. As the
hardware environment influences FPS, we run all the experiments on the same PC
to ensure fairness. The quantitative results show that our method strikes a
balance between efficiency and accuracy.
Table 1: Quantitative comparison with the state-of-the-art methods on two
benchmarks with evaluation metrics MAE, IoU, and Fβ. The best results are
shown in red and the second best results are shown in blue.
| MSD | PMD
---|---|---
Method | MAE$\downarrow$ | IoU$\uparrow$ | F${}_{\beta}\uparrow$ | MAE$\downarrow$ | IoU$\uparrow$ | F${}_{\beta}\uparrow$
R3Net | 0.111 | 0.554 | 0.767 | 0.045 | 0.496 | 0.713
CPDNet | 0.116 | 0.576 | 0.743 | 0.041 | 0.600 | 0.734
EGNet | 0.096 | 0.630 | 0.779 | 0.088 | 0.210 | 0.590
LDF | 0.068 | 0.729 | 0.843 | 0.038 | 0.633 | 0.783
MINet | 0.088 | 0.664 | 0.817 | 0.038 | 0.608 | 0.765
SETR | 0.071 | 0.690 | 0.851 | 0.035 | 0.564 | 0.797
VST | 0.054 | 0.791 | 0.871 | 0.036 | 0.591 | 0.736
MirrorNet | 0.065 | 0.790 | 0.857 | 0.043 | 0.585 | 0.741
PMD | 0.047 | 0.815 | 0.892 | 0.032 | 0.660 | 0.794
SANet | 0.054 | 0.798 | 0.877 | 0.032 | 0.668 | 0.795
HetNet | 0.043 | 0.828 | 0.906 | 0.029 | 0.690 | 0.814
Table 2: Quantitative comparison on efficiency. We compare our model with
relevant state-of-the-art models on Parameters(M), FLOPs(GMAC), and FPS.
Method | Input Size | Para. | FLOPs | FPS
---|---|---|---|---
R3Net | 300$\times$300 | 56.16 | 47.53 | 8.30
CPDNet | 352$\times$352 | 47.85 | 17.77 | 65.83
EGNet | 256$\times$256 | 111.64 | 157.21 | 10.76
LDF | 352$\times$352 | 25.15 | 15.51 | 107.74
MINet | 320$\times$320 | 162.38 | 87.11 | 3.55
SETR | 480$\times$480 | 91.76 | 85.41 | 2.84
VST | 224$\times$224 | 44.48 | 23.18 | 6.05
MirrorNet | 384$\times$384 | 121.77 | 77.73 | 7.82
PMD | 384$\times$384 | 147.66 | 101.54 | 7.41
SANet | 384$\times$384 | 104.80 | 66.00 | 8.53
HetNet | 352$\times$352 | 49.92 | 27.69 | 49.23
Figure 6: Qualitative comparison of our model with relevant state-of-the-art
methods in challenging scenarios (columns, left to right: Input, GT, HetNet,
SANet, PMD, MirrorNet, VST, SETR, MINet, LDF, EGNet, CPDNet, R3Net).
We provide visual comparisons with state-of-the-art methods. As shown in
Figure 6, our method generates more precise segmentations than its
counterparts. It performs well in various challenging scenarios, such as
highly similar surroundings (rows 1, 2), mirrors split by objects (row 3),
ambiguous regions outside mirrors (row 4), tiny mirrors (rows 5, 6), and
partially hidden mirrors (rows 7, 8). Note that we do not use any
post-processing to generate our maps. This shows that our method is effective
and robust on complex images.
### Ablation Study
To better analyze the architecture and effectiveness of our network, we
conduct ablation studies of each component in our proposed network on the MSD
dataset.
#### Network Architecture Analysis.
Here we analyze the network structure to demonstrate the rationality and
necessity of learning specific information with heterogeneous modules at
different stages. As shown in Table 3, using MICs at all stages (Aa) performs
the worst. Models with RSLs at shallow stages and MICs at deep stages (Aba),
or with RSLs at all stages (Ab), have similar performance, and neither is as
effective as our HetNet. Low-level features focus more on colors, shapes, and
textures and contain more precise spatial information, while high-level
features involve more semantics with rough spatial information. MIC aims to
learn low-level contrasts, whereas RSL extracts high-level understanding.
Hence, it is more reasonable to roughly localize mirrors with MICs at the
shallow (1-3) stages and learn high-level information with RSLs at the deep
(4-5) stages.
Table 3: The ablation study results of network architecture. Aba, Aa, Ab
denote applying RSL at the shallow stages and MIC at the deep stages, MIC at
all stages, and RSL at all stages, respectively.
Architecture | MAE$\downarrow$ | IoU$\uparrow$ | F${}_{\beta}\uparrow$ | Para. | FLOPs | FPS
---|---|---|---|---|---|---
Aba | 0.046 | 0.821 | 0.897 | 50.13 | 60.89 | 37.73
Aa | 0.049 | 0.811 | 0.889 | 47.58 | 26.90 | 43.34
Ab | 0.046 | 0.817 | 0.897 | 52.50 | 61.67 | 39.00
HetNet | 0.043 | 0.828 | 0.906 | 49.92 | 27.69 | 49.23
#### Component Analysis.
To verify the effectiveness of each component, we conduct ablation experiments
by gradually adding components to the network. As a baseline, we run the
original backbone network (Xie et al. 2017) with the remaining HetNet
architecture but without the 6th stage (I), and then insert GE to complete the
6 stages (II). We then gradually add three alternative components: the first
adds a single ICFE instead of a MIC (III); the second includes the entire MIC
component (IV); both of these exclude RSLs. The third applies RSLs without
MICs (V).
Table 4 presents the experimental results. The ablated model (I) performs the
worst of all. We also observe that adding MICs (IV) or RSLs (V) is generally
better than the other alternative (III). Since MICs learn low-level
information in two orientations instead of a single orientation, (IV) performs
better than (III). In addition, “basic + GE + MICs” (IV) shows a slight
overall advantage over “basic + GE + RSLs” (V), but when MICs and RSLs are
combined (HetNet, or “basic + GE + MICs + RSLs”), the model outperforms all
the other ablated models. This proves that the cooperation of low- and
high-level cues is more effective than single-level understanding in mirror
detection. Figure 7 shows a visual example where MICs and RSLs play an
important role together.
Table 4: The ablation study results of components. By adding each component
gradually, our model achieves the best performance.
Ablation | Base | GE | ICFEs | MICs | RSLs | MAE$\downarrow$ | IoU$\uparrow$ | F${}_{\beta}\uparrow$ | Para. | FLOPs | FPS
---|---|---|---|---|---|---|---|---|---|---|---
I | $\surd$ | | | | | 0.056 | 0.773 | 0.872 | 47.53 | 26.61 | 60.31
II | $\surd$ | $\surd$ | | | | 0.050 | 0.805 | 0.882 | 47.53 | 26.68 | 58.05
III | $\surd$ | $\surd$ | $\surd$ | | | 0.053 | 0.781 | 0.878 | 47.55 | 26.68 | 53.87
IV | $\surd$ | $\surd$ | | $\surd$ | | 0.049 | 0.811 | 0.892 | 47.56 | 26.68 | 51.45
V | $\surd$ | $\surd$ | | | $\surd$ | 0.049 | 0.809 | 0.890 | 49.92 | 27.47 | 54.20
HetNet | $\surd$ | $\surd$ | | $\surd$ | $\surd$ | 0.043 | 0.828 | 0.906 | 49.92 | 27.69 | 49.23
Figure 7: A visual example of the ablation study (columns, left to right:
Input, GT, (I)-(V), HetNet). (I) to (V) correspond to the prediction maps from
five ablated models: “basic”, “basic + GE”, “basic + GE + ICFEs”, “basic + GE
+ MICs”, and “basic + GE + RSLs”, respectively.
#### Effectiveness of the Rotation Strategy in the MIC Module.
In Table 5, we compare five rotation strategies in the MIC and show the
effectiveness of our multi-orientation strategy. The MIC performs better than
a single ICFE (“ICFE”) or 2 parallel ICFEs (“ICFE+ICFE”), showing that
rotation helps learn more comprehensive low-level information in two
orientations. To obtain as much distinct information as possible, we divide
orientations equally. If we use lines to denote orientations on a 2D plane, we
expect intersecting lines to divide $360^{\circ}$ equally, e.g., 1 line into
2$\times$$180^{\circ}$, 2 lines into 4$\times$$90^{\circ}$, and 3 lines into
6$\times$$60^{\circ}$. However, rotating tensors by angles that are not
multiples of $90^{\circ}$ induces information loss; for example, if a tensor
is rotated by $60^{\circ}$, we need to add padding or crop it. Hence,
multiples of $90^{\circ}$ are the better options. The failure of ICFE*3 is
possibly caused by observing one orientation ($0^{\circ}$, $180^{\circ}$)
twice, which makes the information in this orientation stronger than in the
other ($90^{\circ}$). ICFE*4 performs better than ICFE*3 owing to its balanced
observation of the two orientations, but it may introduce more noise, so it is
worse than our HetNet (MIC).
Table 5: The ablation study results of MIC rotation strategies. ICFE,
ICFE+ICFE, ICFE*3, ICFE*4, and MIC denote a single ICFE, 2 same-orientation
ICFEs, 3 ICFEs with ($0^{\circ},90^{\circ},180^{\circ}$) orientations, 4 ICFEs
with ($0^{\circ},90^{\circ},180^{\circ},270^{\circ}$), and 2 ICFEs with
($0^{\circ},180^{\circ}$) orientations, respectively.
Strategy | MAE$\downarrow$ | IoU$\uparrow$ | F${}_{\beta}\uparrow$ | Para. | FLOPs | FPS
---|---|---|---|---|---|---
ICFE | 0.049 | 0.802 | 0.887 | 49.92 | 27.68 | 49.52
ICFE+ICFE | 0.048 | 0.806 | 0.891 | 49.92 | 27.69 | 45.58
ICFE*3 | 0.047 | 0.821 | 0.895 | 49.93 | 27.69 | 40.73
ICFE*4 | 0.046 | 0.820 | 0.901 | 49.93 | 27.70 | 36.08
MIC | 0.043 | 0.828 | 0.906 | 49.92 | 27.69 | 49.23
## Conclusions
In this paper, we propose a highly efficient mirror detection model, HetNet.
To balance efficiency and accuracy, we adopt heterogeneous modules at
different stages to exploit feature characteristics at both low and high
levels. Considering the low-level contrasts inside and outside mirrors, we
propose a multi-orientation intensity-based contrasted (MIC) module that
learns low-level understanding in two orientations to select likely mirror
regions. To further confirm mirrors, we propose a reflection semantic logical
(RSL) module that extracts high-level information. Overall, HetNet extracts
features both effectively and efficiently, making it robust in challenging
scenarios. Experimental results on two benchmarks show that our method
outperforms SOTA methods on three evaluation metrics, with an average
improvement of 8.9$\%$ on MAE, 3.1$\%$ on IoU, and 2.0$\%$ on F-measure, as
well as 72.73$\%$ fewer FLOPs and a 664$\%$ higher FPS than the SOTA mirror
detection method PMD (Lin, Wang, and Lau 2020).
Our method does have limitations. Since both the MSD and PMD datasets collect
mostly regular mirrors, our method may fail on mirrors with special reflection
properties. In Figure 8, the three mirrors reflect the same man with three
different appearances; in the right image, the mirrors show complex intensity
contrast. The distortion of high-level or low-level information makes such
cases even more challenging. For future work, we are considering additional
information to help detect different kinds of mirrors.
Figure 8: Failure cases. Our model may fail in scenarios with complex
reflection mirrors, like distorting mirrors.
## References
* Achanta et al. (2009) Achanta, R.; Hemami, S.; Estrada, F.; and Susstrunk, S. 2009. Frequency-tuned salient region detection. In _2009 IEEE conference on computer vision and pattern recognition_ , 1597–1604. IEEE.
* Cai et al. (2020) Cai, Y.; Wang, Z.; Luo, Z.; Yin, B.; Du, A.; Wang, H.; Zhang, X.; Zhou, X.; Zhou, E.; and Sun, J. 2020. Learning delicate local representations for multi-person pose estimation. In _European Conference on Computer Vision_ , 455–472. Springer.
* Chen et al. (2018) Chen, Y.; Wang, Z.; Peng, Y.; Zhang, Z.; Yu, G.; and Sun, J. 2018. Cascaded pyramid network for multi-person pose estimation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 7103–7112.
* Chen et al. (2020) Chen, Z.; Xu, Q.; Cong, R.; and Huang, Q. 2020. Global context-aware progressive aggregation network for salient object detection. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 34, 10599–10606.
* De Boer et al. (2005) De Boer, P.-T.; Kroese, D. P.; Mannor, S.; and Rubinstein, R. Y. 2005. A tutorial on the cross-entropy method. _Annals of operations research_ , 134(1): 19–67.
* Deng et al. (2018) Deng, Z.; Hu, X.; Zhu, L.; Xu, X.; Qin, J.; Han, G.; and Heng, P.-A. 2018. R3net: Recurrent residual refinement network for saliency detection. In _Proceedings of the 27th International Joint Conference on Artificial Intelligence_ , 684–690. AAAI Press Menlo Park, CA, USA.
* Guan, Lin, and Lau (2022) Guan, H.; Lin, J.; and Lau, R. W. 2022. Learning Semantic Associations for Mirror Detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 5941–5950.
* Han et al. (2022) Han, J.; Li, W.; Fang, P.; Sun, C.; Hong, J.; Armin, M. A.; Petersson, L.; and Li, H. 2022. Blind Image Decomposition. In _European Conference on Computer Vision (ECCV)_.
* Hou, Zhou, and Feng (2021) Hou, Q.; Zhou, D.; and Feng, J. 2021. Coordinate attention for efficient mobile network design. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 13713–13722.
* Hou and Zhang (2007) Hou, X.; and Zhang, L. 2007. Saliency detection: A spectral residual approach. In _2007 IEEE Conference on computer vision and pattern recognition_ , 1–8. IEEE.
* Hu et al. (2018) Hu, X.; Zhu, L.; Fu, C.-W.; Qin, J.; and Heng, P.-A. 2018. Direction-aware spatial context features for shadow detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 7454–7462.
* Huang et al. (2023) Huang, T.; Dong, B.; Lin, J.; Liu, X.; Lau, R. W.; and Zuo, W. 2023. Symmetry-Aware Transformer-based Mirror Detection. _AAAI_.
* Hubel and Wiesel (1962) Hubel, D. H.; and Wiesel, T. N. 1962. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. _The Journal of physiology_ , 160(1): 106.
* Koffka (2013) Koffka, K. 2013. _Principles of Gestalt psychology_. Routledge.
* Krähenbühl and Koltun (2011) Krähenbühl, P.; and Koltun, V. 2011. Efficient inference in fully connected crfs with gaussian edge potentials. _Advances in neural information processing systems_ , 24.
* Lin, Wang, and Lau (2020) Lin, J.; Wang, G.; and Lau, R. W. 2020. Progressive mirror detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 3697–3705.
* Liu et al. (2021) Liu, N.; Zhang, N.; Wan, K.; Shao, L.; and Han, J. 2021. Visual saliency transformer. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 4722–4732.
* Ma, Xia, and Li (2021) Ma, M.; Xia, C.; and Li, J. 2021. Pyramidal feature shrinking for salient object detection. In _AAAI_ , volume 35, 2311–2318.
* Máttyus, Luo, and Urtasun (2017) Máttyus, G.; Luo, W.; and Urtasun, R. 2017. Deeproadmapper: Extracting road topology from aerial images. In _Proceedings of the IEEE international conference on computer vision_ , 3438–3446.
* Nguyen et al. (2017) Nguyen, V.; Yago Vicente, T. F.; Zhao, M.; Hoai, M.; and Samaras, D. 2017. Shadow detection with conditional generative adversarial networks. In _Proceedings of the IEEE International Conference on Computer Vision_ , 4510–4518.
* Pang et al. (2020) Pang, Y.; Zhao, X.; Zhang, L.; and Lu, H. 2020. Multi-scale interactive network for salient object detection. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 9413–9422.
* Qin et al. (2019) Qin, X.; Zhang, Z.; Huang, C.; Gao, C.; Dehghan, M.; and Jagersand, M. 2019. Basnet: Boundary-aware salient object detection. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 7479–7489.
* Tan et al. (2022) Tan, X.; Lin, J.; Xu, K.; Chen, P.; Ma, L.; and Lau, R. W. 2022. Mirror Detection With the Visual Chirality Cue. _IEEE Transactions on Pattern Analysis and Machine Intelligence_.
* Wald (1935) Wald, G. 1935. Carotenoids and the visual cycle. _The Journal of general physiology_ , 19(2): 351–371.
* Wei, Wang, and Huang (2020) Wei, J.; Wang, S.; and Huang, Q. 2020. F3Net: fusion, feedback and focus for salient object detection. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, 12321–12328.
* Wei et al. (2020) Wei, J.; Wang, S.; Wu, Z.; Su, C.; Huang, Q.; and Tian, Q. 2020. Label decoupling framework for salient object detection. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 13025–13034.
* Wu, Su, and Huang (2019) Wu, Z.; Su, L.; and Huang, Q. 2019. Cascaded partial decoder for fast and accurate salient object detection. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 3907–3916.
* Xie et al. (2017) Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; and He, K. 2017. Aggregated residual transformations for deep neural networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 1492–1500.
* Yang et al. (2019) Yang, X.; Mei, H.; Xu, K.; Wei, X.; Yin, B.; and Lau, R. W. 2019. Where is my mirror? In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 8809–8818.
* Zhao et al. (2017) Zhao, H.; Shi, J.; Qi, X.; Wang, X.; and Jia, J. 2017. Pyramid scene parsing network. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2881–2890.
* Zhao et al. (2019) Zhao, J.-X.; Liu, J.-J.; Fan, D.-P.; Cao, Y.; Yang, J.; and Cheng, M.-M. 2019. EGNet: Edge guidance network for salient object detection. In _Proceedings of the IEEE/CVF international conference on computer vision_ , 8779–8788.
* Zhao et al. (2021) Zhao, Z.; Xia, C.; Xie, C.; and Li, J. 2021. Complementary Trilateral Decoder for Fast and Accurate Salient Object Detection. In _Proceedings of the 29th ACM International Conference on Multimedia_ , 4967–4975.
* Zheng et al. (2019) Zheng, Q.; Qiao, X.; Cao, Y.; and Lau, R. W. 2019. Distraction-aware shadow detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 5167–5176.
* Zheng et al. (2021) Zheng, S.; Lu, J.; Zhao, H.; Zhu, X.; Luo, Z.; Wang, Y.; Fu, Y.; Feng, J.; Xiang, T.; Torr, P. H.; et al. 2021. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 6881–6890.
* Zhu et al. (2018) Zhu, L.; Deng, Z.; Hu, X.; Fu, C.-W.; Xu, X.; Qin, J.; and Heng, P.-A. 2018. Bidirectional feature pyramid network with recurrent attention residual modules for shadow detection. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , 121–136.
* Zhu et al. (2021) Zhu, L.; Xu, K.; Ke, Z.; and Lau, R. W. 2021. Mitigating Intensity Bias in Shadow Detection via Feature Decomposition and Reweighting. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 4702–4711.
# Fast feedback control of mechanical motion using circuit optomechanics
Cheng Wang QTF Centre of Excellence, Department of Applied Physics, Aalto
University, FI-00076 Aalto, Finland Louise Banniard QTF Centre of
Excellence, Department of Applied Physics, Aalto University, FI-00076 Aalto,
Finland Laure Mercier de Lépinay QTF Centre of Excellence, Department of
Applied Physics, Aalto University, FI-00076 Aalto, Finland Mika A. Sillanpää
<EMAIL_ADDRESS>QTF Centre of Excellence, Department of Applied
Physics, Aalto University, FI-00076 Aalto, Finland
###### Abstract
Measurement-based control, utilizing an active feedback loop, is a standard
tool in technology. Feedback control is also emerging as a useful and
fundamental tool in quantum technology and in related fundamental studies,
where it can be used to prepare and stabilize pure quantum states in various
quantum systems. Feedback cooling of center-of-mass micromechanical
oscillators, which typically exhibit high thermal noise far above the quantum
regime, has been particularly actively studied and has recently been shown to
allow ground-state cooling using optical measurements. Here, we
realize measurement-based feedback operations in an electromechanical system,
cooling the mechanical thermal noise down to 3 quanta, limited by added
amplifier noise. Counter-intuitively, we also obtain significant cooling when
the system is pumped at the blue optomechanical sideband, where the system is
unstable without feedback.
## I Introduction
In cavity optomechanics, quantum control of mechanical motion can be achieved
via radiation pressure force from an optical light field on a mechanical
degree of freedom in two different ways. Coherent quantum control involves
applying a coherent pump tone to induce a strong coupling between the motion
and an effective cold bath, so that the combined system evolves to a desired
state. In measurement-based feedback control, an error signal obtained from a
measurement result is applied as a force on the mechanical oscillator through
a time-delayed and carefully filtered feedback loop to steer and control the
evolution of motional states.
Feedback control and its ability to cool massive mechanical objects have been
investigated both theoretically and experimentally. It was first demonstrated
in optics Cohadon _et al._ (1999), with active
experimental research following along the same lines Kleckner and Bouwmeester
(2006); Poggio _et al._ (2007); Vinante _et al._ (2008); Wilson _et al._
(2015); Schäfermeier _et al._ (2016); Rossi _et al._ (2017); Sudhir _et
al._ (2017); Christoph _et al._ (2018); Guo _et al._ (2019); Kumar _et al._
(2022). Recently, feedback cooling down to the ground state was achieved for
an ultrahigh quality factor SiN membrane resonator Rossi _et al._ (2018).
Feedback cooling has also made it possible to bring a 10 kg mass in the
Advanced LIGO gravitational-wave detector close to its motional ground state
Abbott _et al._ (2009); Whittle _et al._ (2021). Besides massive oscillators, levitated
nanoparticles have been successfully feedback-cooled Gieseler _et al._
(2012); Vovrosh _et al._ (2017); Li _et al._ (2011); Conangla _et al._
(2019); Iwasaki _et al._ (2019), some recent experiments even reaching the
motional ground state Delić _et al._ (2020); Tebbenjohanns _et al._ (2021).
Feedback control applied to a microwave optomechanical system Regal _et al._
(2008); Teufel _et al._ (2011) has yet to be realized. The implementation
poses experimental challenges, but also carries potential for operation deep
in the quantum regime for which electromechanical systems are generally well
suited. Typical micro-fabricated electromechanical resonators have rather high
frequencies ($>5\,\rm MHz$), which sets constraints for a digital realization
of a control system, as high processing rates are required. Furthermore, since
the electromagnetic degree of freedom has to react fast to the control, a
microwave cavity with a high external coupling is necessary. This poses
further challenges, as large external couplings are not easily combined with
mechanical elements directly integrated in superconducting on-chip cavities.
Here, we realize feedback control in an electromechanical system employing a
drum mechanical membrane, using a scheme adapted to this system: a coherent
tone carries out a strong measurement, and a modulated tone applies the
appropriate feedback force on the system.
## II Theory
In measurement-based feedback cooling of a mechanical oscillator, the motion
is continuously monitored with very high precision, which allows the
oscillator’s velocity to be derived. A force proportional to the velocity,
which therefore acts as a viscous force, is fed back to the oscillator. This
force artificially damps the motion without adding the fluctuation
counterpart usually linked to damping mechanisms. This reduces the
displacement variance, so that the oscillator is effectively cooled. In order
to cool the oscillator’s thermal occupancy near the quantum ground state, it
is critical that the measurement is close to quantum-limited sensitivity:
measurement noise above the level of the position quantum fluctuations results
in feedback force noise that limits the cooling efficiency above the quantum
ground state.
### II.1 Basic principle of feedback cooling
The principle of feedback cooling of mechanical oscillations is fairly well
known Mancini _et al._ (1998); Vitali _et al._ (2002); Hopkins _et al._
(2003); Genes _et al._ (2008); Zhang _et al._ (2017); Sommer and Genes
(2019) and is recalled here only briefly. The position $x$ (for the moment
given in meters) of an oscillator of mass $M$, frequency $\omega_{m}$, and
damping rate $\gamma$ follows the evolution equation:
$\ddot{x}(t)=-\omega_{m}^{2}x(t)-\gamma\dot{x}(t)+F_{\rm th}(t)/M,$ (1)
where $F_{\rm th}(t)$ is the Langevin force, whose spectrum in the classical
limit is $S^{th}_{F}[\omega]=2k_{B}TM\gamma$, with $T$ the temperature of the
oscillator’s bath and $k_{B}$ the Boltzmann constant. The spectrum of the
position of the free oscillator is
$S_{x}[\omega]=\frac{2k_{B}T\gamma}{M[(\omega_{m}^{2}-\omega^{2})^{2}+\gamma^{2}\omega^{2}]}\,.$ (2)
The damping rate appears both in the intensity of the coupling to the thermal
bath and as the bandwidth of the Lorentzian mechanical spectrum. Applying a
damping feedback force of $F_{\mathrm{FB}}=-gM\gamma\dot{x}$, where $g$ is the
feedback gain, broadens the spectrum $\gamma\rightarrow\gamma(1+g)$, resulting
in:
$S_{x}[\omega]=\frac{2k_{B}T\gamma}{M[(\omega_{m}^{2}-\omega^{2})^{2}+\gamma^{2}(1+g)^{2}\omega^{2}]}\,.$
(3)
This process of damping the oscillator without adding fluctuations has been
dubbed cold damping. Effectively, the resulting spectrum is that of an
oscillator of damping $\gamma(1+g)$ at a temperature $T/(1+g)$, lower than the
temperature of the bath $T$. This effect is therefore also called feedback
cooling. The process requires detection of, and reaction to, the measurement
result on a timescale much faster than the decay time of the oscillator
$1/\gamma$. The cooling efficiency is limited by the amount of noise added in
the feedback loop, which predominantly comes from the background noise in the
detection of the oscillator’s position.
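The variance reduction implied by Eq. (3) can be checked directly: integrating the cold-damped spectrum gives a displacement variance smaller by $1/(1+g)$, i.e. an effective temperature $T/(1+g)$. A minimal numerical sketch, using illustrative parameters in natural units (not the experimental values):

```python
import numpy as np

# Illustrative parameters in natural units (not the experimental values)
M, kB_T = 1.0, 1.0
wm, gamma = 1.0, 0.01   # modest quality factor so the peak is easy to resolve

def S_x(w, g):
    """Cold-damped displacement spectrum, Eq. (3)."""
    return 2*kB_T*gamma/(M*((wm**2 - w**2)**2 + gamma**2*(1 + g)**2*w**2))

# variance = integral of the two-sided spectrum over dw/(2*pi);
# by symmetry, integrate w > 0 only and divide by pi
w, dw = np.linspace(0.0, 10.0, 2_000_001, retstep=True)
variances = {g: S_x(w, g).sum()*dw/np.pi for g in (0.0, 9.0)}
print(variances)  # the g = 9 variance is ~10x smaller: effective T/(1+g)
```

Without feedback the variance recovers equipartition, $k_{B}T/(M\omega_{m}^{2})=1$ in these units.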
### II.2 Microwave optomechanical detection
We consider an archetypal optomechanical system, where a single mechanical
harmonic mode (frequency $\omega_{m}$, and damping rate $\gamma$) is coupled
to an electromagnetic cavity (frequency $\omega_{c}$, and damping rate
$\kappa$). We ignore internal losses of the cavity. The cavity is probed by a
strong coherent field (frequency $\omega_{p}$) which is detuned from
$\omega_{c}$ by the amount $\Delta=\omega_{p}-\omega_{c}$. The probing
induces an effective optomechanical coupling $G=g_{0}\sqrt{n_{c}}$, where
$n_{c}$ is the number of photons driven in the cavity by the tone, and $g_{0}$
is the vacuum optomechanical coupling.
In order to describe the feedback process, we treat the system using the
standard input-output theory of optical cavities. We describe the mechanical
oscillator by its dimensionless position $x(t)$ and momentum $p(t)$. We also
define the dimensionless quadratures of the field in the cavity, $x_{c}(t)$
and $y_{c}(t)$ in the frame rotating at the cavity frequency. The equations of
motion in the frequency domain are
$\begin{split}\chi_{c}^{-1}x_{c}&=-\Delta
y_{c}+\sqrt{\kappa}x_{\mathrm{c,in}}\,,\\\ \chi_{c}^{-1}y_{c}&=\Delta
x_{c}-2Gx+\sqrt{\kappa}y_{\mathrm{c,in}}\,,\\\ -i\omega x&=\omega_{m}p\,,\\\
\left(\gamma-i\omega\right)p&=-\omega_{m}x-2Gx_{c}+\frac{f_{\rm
th}}{\omega_{m}}+\frac{f_{\rm fb}}{\omega_{m}}\,.\\\ \end{split}$ (4)
Here, the cavity susceptibility is $\chi_{c}^{-1}=\frac{\kappa}{2}-i\omega$,
and $x_{\mathrm{c,in}}$ and $y_{\mathrm{c,in}}$ are the input noise operators
for the cavity. Finally, $f_{\rm th}$ and $f_{\rm fb}$ are scaled forces:
$f_{X}=\omega_{m}F_{X}/(Mx_{\rm zpf})$, where $F_{X}$ is a force, $M$ the
effective mass and $x_{\rm zpf}$ the zero-point fluctuations of the
oscillator.
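Since Eq. (4) is linear, the response can be obtained by solving a 4×4 system at each frequency. As a sketch (with the feedback off, a unit thermal drive, and the device parameter values quoted later in the text), the mechanical response for $\Delta=0$ peaks at the bare mechanical frequency:

```python
import numpy as np

# Device-scale parameters from the text; Delta = 0 (resonant probing)
twopi = 2*np.pi
wm, kappa, gamma = twopi*8.14e6, twopi*8.5e6, twopi*76.0
G, Delta = twopi*420e3, 0.0

def chi_xx(om):
    """Mechanical response x[w]/f_th[w] from Eq. (4), with f_fb = 0."""
    chic_inv = kappa/2 - 1j*om
    # rows: the four equations of motion, variables (x_c, y_c, x, p)
    M = np.array([[chic_inv,  Delta,    0,      0           ],
                  [-Delta,    chic_inv, 2*G,    0           ],
                  [0,         0,       -1j*om, -wm          ],
                  [2*G,       0,        wm,     gamma - 1j*om]])
    b = np.array([0, 0, 0, 1/wm])       # unit thermal drive f_th = 1
    return np.linalg.solve(M, b)[2]     # the x component

w = twopi*np.linspace(8.0e6, 8.3e6, 3001)
resp = np.array([abs(chi_xx(om)) for om in w])
print(w[resp.argmax()]/twopi)           # peak near the mechanical frequency
```

Setting $\Delta\neq 0$ in the same sketch reproduces dynamical-backaction shifts of the resonance.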
The feedback force $f_{\mathrm{fb}}$ is now present in addition to the thermal
force. If information on the measured observable, position $x$, is contained
in the feedback force, a closed feedback loop is formed.
In a generic optomechanical measurement, the output field
$\begin{split}y_{\mathrm{out}}=\sqrt{\kappa}y_{c}-y_{\mathrm{c,in}}\end{split}$
(5)
emitted from the cavity carries information about $x$. The nature of the
feedback force is set by applying a suitable processing, or filter function,
to the measured $y_{\mathrm{out}}$. In a real situation, the measurement can only
provide an approximation of $x$. This is primarily because a significant
amount of noise is added to $y_{\mathrm{out}}$ before it is converted into a
force. In microwave experiments, this noise, denoted as a random field $y_{\rm
add}(t)$, is due to transistor amplifiers. Even in the best cases, this noise
is at least an order of magnitude higher than the quantum limit. The amplifier
noise is typically characterized as the added number of noise quanta $\langle
y_{\rm add}[\omega]y_{\rm add}[-\omega]\rangle=n_{\mathrm{add}}$.
Figure 1: Feedback setup. (a) The basic frequency scheme of feedback cooling
in cavity optomechanics, with the strong probe tone set at the cavity
frequency ($\Delta=0$). (b) The probe tone set alternatively to the blue
mechanical sideband ($\Delta=\omega_{m}$). (c) Optical micrograph of a similar
circuit-electromechanical device. The aluminum drumhead oscillator of diameter
13 $\mu$m is connected to a meander inductor to form a cavity strongly coupled
to a transmission line through a large external finger capacitor. A zoom on
the area of the drumhead is indicated with red dashed lines. (d) Simplified
schematic of the microwave circuit around the electromechanical device; PM
means phase modulator.
The feedback force is obtained by applying a filter function $A[\omega]$ to
the signal, including a gain, and scaled for convenience by $\sqrt{\kappa}$:
$\begin{split}f_{\mathrm{fb}}[\omega]=\frac{A[\omega]}{\sqrt{\kappa}}\left(y_{\rm
out}[\omega]+y_{\rm add}[\omega]\right)\,.\end{split}$ (6)
To approximate a force proportional to the oscillator’s velocity, we take the
filter function $A[\omega]$ to be a phase-shifting (phase-shift
$\phi\in\mathbb{R}$) and amplifying (gain $A_{0}>0$) operation
$\begin{split}A[\omega]=A_{0}\exp\left(-i\phi\frac{\omega}{\omega_{m}}\right)\,.\end{split}$
(7)
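In the time domain, the filter of Eq. (7) is simply the signal delayed by $\phi/\omega_{m}$ and amplified by $A_{0}$; near resonance this mimics a velocity signal. A small sketch (the sampling rate, gain, and phase below are illustrative, not the experimental settings):

```python
import numpy as np

# The filter A[w] = A0*exp(-i*phi*w/wm), Eq. (7), is a pure delay tau = phi/wm
# with gain A0; at w ~ wm it shifts the phase of the signal by phi.
wm = 2*np.pi*8.14e6                 # drum frequency from the text
A0, phi = 3.0, np.pi/2              # illustrative gain and phase
fs = 20*wm/(2*np.pi)                # sampling rate: 20 samples per period
t = np.arange(2000)/fs
x = np.cos(wm*t)                    # narrowband position signal

X = np.fft.rfft(x)
w = 2*np.pi*np.fft.rfftfreq(len(x), 1/fs)
y = np.fft.irfft(A0*np.exp(-1j*phi*w/wm)*X, n=len(x))

# with phi = pi/2 the output is A0*cos(wm*t - pi/2) = A0*sin(wm*t), i.e. a
# signal in quadrature with the position, as a velocity-like drive requires
```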
#### II.2.1 Resonant probing
The unresolved sideband (bad-cavity) situation $\kappa\gg\omega_{m}$, where
the cavity follows the mechanics without delay, allows for a simple treatment
of the entire process. Our experimental parameters, where
$\kappa\approx\omega_{m}$, do not satisfy this condition well. The basic case
of resonant probing ($\Delta=0$), as shown in Fig. 1 (a), allows for some
analytical results at arbitrary sideband resolution. Here, the effective
susceptibility of the oscillator is similar to that implied by Eq. (3)
$\begin{split}\chi_{\mathrm{fb}}[\omega]&=\frac{1}{-i\omega\gamma_{\mathrm{eff}}+\omega_{\mathrm{eff}}^{2}-\omega^{2}}\,.\end{split}$
(8)
Here, for large mechanical quality factors $\omega_{m}/\gamma\gg 1$ (in the
experiment $\omega_{m}/\gamma\sim 10^{5}$), the effective mechanical frequency
and damping rate are, respectively
$\displaystyle\omega_{\mathrm{eff}}\simeq\omega_{m}+\frac{2GA_{0}\left(\kappa\cos\phi+2\omega_{m}\sin\phi\right)}{\kappa^{2}+4\omega_{m}^{2}}\,,$
(9a)
$\displaystyle\gamma_{\mathrm{eff}}\simeq\gamma+\frac{4GA_{0}(\kappa\sin\phi-2\omega_{m}\cos\phi)}{\kappa^{2}+4\omega_{m}^{2}}\,.$
(9b)
At the optimum feedback phase satisfying
$\phi_{m}=\tan^{-1}\left(-\frac{\kappa}{2\omega_{m}}\right)+\pi$ (10)
the resonant frequency is unchanged, and the damping is maximized, with the
feedback-induced damping
$\gamma_{\mathrm{fb}}=\frac{4GA_{0}}{\sqrt{\kappa^{2}+4\omega_{m}^{2}}}\,.$
(11)
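Equations (9b)-(11) can be cross-checked numerically by sweeping the phase in Eq. (9b) and comparing the maximum added damping with Eq. (11). A sketch using the device parameters quoted later in the text and the coupling and one gain value of Fig. 2:

```python
import numpy as np

# Device parameters from the text; G and A0 as quoted for Fig. 2
twopi = 2*np.pi
wm, kappa, gamma = twopi*8.14e6, twopi*8.5e6, twopi*76.0
G, A0 = twopi*420e3, twopi*6.7e3

phi_m = np.arctan(-kappa/(2*wm)) + np.pi        # optimum phase, Eq. (10)

def gamma_eff(phi):
    """Effective damping rate, Eq. (9b)."""
    return gamma + 4*G*A0*(kappa*np.sin(phi) - 2*wm*np.cos(phi))/(kappa**2 + 4*wm**2)

gamma_fb = 4*G*A0/np.sqrt(kappa**2 + 4*wm**2)   # feedback damping, Eq. (11)

# a brute-force phase sweep indeed peaks at phi_m with gamma + gamma_fb
phis = np.linspace(0.0, twopi, 100_000)
print(np.degrees(phi_m))                        # ~152 degrees for these kappa, wm
```

The optimum lies above the bad-cavity value of $90^{\circ}$, in line with the $\sim 145^{\circ}$ found experimentally in Sec. IV.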
The oscillator is assumed to couple to a bath with a thermal occupation
number $n_{m}^{T}$, which is usually much larger than one. As mentioned
earlier, the added feedback damping will induce cooling of the oscillator.
However, there are competing processes which limit the cooling effect.
The mechanical noise energy is obtained from the spectral density of the
oscillator’s displacement, where we can identify three contributions. The
thermal plus zero-point fluctuation spectrum is cooled via the enhanced
damping down to the variance
$\begin{split}n_{T}&=\frac{\gamma}{\gamma_{\mathrm{eff}}}\left(n_{m}^{T}+\frac{1}{2}\right)\,.\end{split}$
(12)
As in generic optomechanical position measurements, the quantum backaction of
the measurement tends to heat up the oscillator linearly with the
cooperativity, adding a mechanical population
$\begin{split}n_{\mathrm{qba}}=C_{\mathrm{eff}}\frac{\kappa^{2}}{\kappa^{2}+4\omega_{m}^{2}}\,.\end{split}$
(13)
The cooperativity $C_{\rm eff}$ appearing in this quantum backaction noise is
the cooperativity defined from the damped oscillator’s parameters
$\begin{split}C_{\mathrm{eff}}=\frac{4G^{2}}{\kappa\gamma_{\mathrm{eff}}}\,.\end{split}$
(14)
The increased damping of the oscillator (reduced $C_{\mathrm{eff}}$) thus
makes the oscillator less susceptible to the quantum backaction, and the
backaction contribution $n_{\mathrm{qba}}$ is reduced with increasing feedback
gain.
The background noise in the detection in typical microwave-optomechanical
systems is dominated by the microwave amplifier noise $n_{\rm add}$. Here, it
is fed back to the oscillator, leading to additional mechanical fluctuations.
Another, more fundamental contribution is due to vacuum fluctuations
associated to the measurement, which are also fed back to the mechanical
oscillator. The sum of these is
$\begin{split}n_{\rm{fb}}=\frac{A_{0}^{2}}{2\kappa\gamma_{\mathrm{eff}}}\left(n_{\mathrm{add}}+\frac{1}{2}\right)\,.\end{split}$
(15)
The total remaining mechanical occupation $n_{m}$ under feedback cooling
satisfies
$\begin{split}n_{m}+\frac{1}{2}=n_{T}+n_{\mathrm{qba}}+n_{\mathrm{fb}}\,,\end{split}$
(16)
which decreases with increasing gain $A_{0}$, then, for high gain, starts
increasing again as $n_{\rm fb}$ becomes the dominant contribution to the
occupation. The optimum cooling is reached when the oscillator is strongly
damped, $n_{T}\ll 1$, and when the contributions of backaction and noise
injection balance each other. This occurs when
$\frac{G}{A_{0}}=\frac{1}{4}\sqrt{1+2n_{\mathrm{add}}}\sqrt{\kappa^{2}+4\omega_{m}^{2}}/\kappa$,
and results in the optimum cooling
$\begin{split}n_{m,\mathrm{min}}+\frac{1}{2}=\frac{1}{2}\sqrt{1+2n_{\mathrm{add}}}\,.\end{split}$
(17)
In order to cool down to the ground state with ${n_{m}<1}$, one has to reach
$n_{\mathrm{add}}<4$. This is beyond the reach of transistor amplifiers, but is
possible with near-quantum limited Josephson parametric amplifiers.
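The occupation budget of Eqs. (12)-(16) can be evaluated directly as a function of gain. A sketch with the device parameters reported later in the text ($n_{\rm add}\simeq 13$, bath occupation $\simeq 205$, and the coupling of Fig. 3); the minimum comes out near the $n_{m}\simeq 2.9$ measured in Sec. IV:

```python
import numpy as np

# Device parameters from the text (angular units throughout)
twopi = 2*np.pi
wm, kappa, gamma = twopi*8.14e6, twopi*8.5e6, twopi*76.0
G, n_add, n_T_bath = twopi*427e3, 13.0, 205.0

def n_m(A0):
    g_eff = gamma + 4*G*A0/np.sqrt(kappa**2 + 4*wm**2)   # gamma + Eq. (11)
    C_eff = 4*G**2/(kappa*g_eff)                         # Eq. (14)
    nT   = (gamma/g_eff)*(n_T_bath + 0.5)                # Eq. (12)
    nqba = C_eff*kappa**2/(kappa**2 + 4*wm**2)           # Eq. (13)
    nfb  = A0**2/(2*kappa*g_eff)*(n_add + 0.5)           # Eq. (15)
    return nT + nqba + nfb - 0.5                         # Eq. (16)

A0 = twopi*np.logspace(2, 7, 2001)     # sweep the gain over five decades
nm = n_m(A0)
print(nm.min())                        # ~3 quanta for these parameters
```

Equation (17) alone, with $n_{\rm add}=13$, gives $n_{m,\mathrm{min}}\simeq 2.1$; the full budget with the finite bath occupation lands slightly higher.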
We now discuss the detection aspect. Under resonant cavity probing, the
quadrature of the cavity output $y_{\rm out}=i(a^{\dagger}_{\rm out}+a_{\rm
out})/\sqrt{2}$, where $a_{\rm out}$ is the output field annihilation
operator, displays a mechanical signature at the optomechanical sidebands
around the probe tone as shown by Eq. (5). In the experiment, the signal used
to establish the feedback loop is the demodulated signal coming out of the
cavity. For demonstration purposes, we also record the heterodyne spectrum
$\begin{split}S_{\mathrm{out}}[\omega]=\langle
a^{\dagger}_{\mathrm{out}}[\omega]a_{\mathrm{out}}[-\omega]\rangle+\frac{1}{2}\,.\end{split}$
(18)
Inference of the state of the mechanical oscillator based on the in-loop
spectrum is complicated by the fact that the injected and reflected noises are
out-of-phase, which leads to “noise squashing”, or destructive interference
close to the mechanical resonance. This becomes relevant at strong feedback
strength around the optimum cooling, Eq. (17). In the bad-cavity case, one can
easily identify correlations leading to the squashing, see Appendix B. In the
case of arbitrary sideband resolution, we calculate the theoretical output
spectra numerically (see Appendix A) from the solution of the equations of
motion, Eq. (4), and from the input noise correlators.
#### II.2.2 Blue-sideband probing
For optomechanical systems with $\kappa\lesssim\omega_{m}$, blue-sideband
probing leads to optomechanical antidamping. The damping rate is reduced by
$\gamma_{\mathrm{opt}}=\frac{4G^{2}}{\kappa}\frac{1}{1+\left(\frac{\kappa}{4\omega_{m}}\right)^{2}}\,.$
(19)
If this antidamping rate overcomes the intrinsic, or otherwise enhanced
damping rate of the mechanical oscillator, the latter becomes unstable.
It has been shown that blue-sideband pumping can drive a stable steady state,
but only when combined with other processes that stabilize the system.
Examples include optomechanical dissipative squeezing and entanglement
Woolley and Clerk (2013); Wollman _et al._ (2015); Lecocq _et al._ (2015a);
Pirkkalainen _et al._ (2015); Ockeloen-Korppi _et al._ (2018), which rely on
stabilizing dynamical backaction effects dominating over backaction noise in
some regimes, allowing the squeezing of a quadrature in spite of a
simultaneous increase of the effective bath temperature for that quadrature
due to backaction noise.
A similar competition of effects exists in the present case: large probe
powers provide large detection sensitivities beneficial to the feedback
process, but also large optomechanical amplifications that can result in
instability. The gain of the feedback loop can be chosen independently from
the probe power, so that a good configuration of feedback parameters is
expected to produce cooling. The question is then whether the strong cold
damping effect obtained from this efficient measurement can, in some range of
parameters, dominate optomechanical anti-damping. We find that the answer is
yes. The effective mechanical damping rate in the scheme of Fig. 1 (b) is the
sum of the rates due to dynamical backaction, and of the feedback cooling:
$\gamma_{\mathrm{eff}}=\gamma-\gamma_{\mathrm{opt}}+\frac{4GA_{0}\left[(\kappa^{2}+8\omega_{m}^{2})\sin\phi-2\kappa\omega_{m}\cos\phi\right]}{\kappa^{3}+16\kappa\omega_{m}^{2}}\,.$
(20)
Because the feedback has to first counteract the amplification induced by
dynamical backaction, a larger gain as compared to resonant probing is needed,
which in turn will inject more noise and tend to reduce the cooling
performance. Ground-state cooling is still possible, but very little added
noise can be tolerated.
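The competition between Eq. (19) and the feedback term of Eq. (20) fixes a minimum gain for stability. A sketch with the coupling used for Fig. 4 ($G/2\pi\simeq 104$ kHz) and the device parameters quoted later in the text; the resulting threshold sits just below the smallest gain explored there:

```python
import numpy as np

# Device parameters from the text; G as quoted for Fig. 4
twopi = 2*np.pi
wm, kappa, gamma = twopi*8.14e6, twopi*8.5e6, twopi*76.0
G = twopi*104e3

g_opt = (4*G**2/kappa)/(1 + (kappa/(4*wm))**2)    # antidamping, Eq. (19)

# feedback damping at the optimal phase: the bracket in Eq. (20) reaches
# amplitude sqrt((kappa^2 + 8*wm^2)^2 + (2*kappa*wm)^2)
amp = np.hypot(kappa**2 + 8*wm**2, 2*kappa*wm)
slope = 4*G*amp/(kappa**3 + 16*kappa*wm**2)       # d(gamma_eff)/dA0 at best phi

A0_threshold = (g_opt - gamma)/slope              # gain where gamma_eff crosses 0
print(A0_threshold/twopi)                         # ~1.8e5 Hz, below the smallest
# gain (189 kHz) used in Fig. 4, consistent with the stable data shown there
```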
Figure 2: Feedback control measurements. We use a strong probe tone at the
cavity center frequency ($\Delta=0$), with the effective coupling
$G/2\pi\simeq 420$ kHz. The data in each panel correspond to the feedback gain
$A_{0}/2\pi=2.8$ kHz (red), $A_{0}/2\pi=6.7$ kHz (blue), and $A_{0}/2\pi=16.7$
kHz (green). The solid lines are theoretical fits. (a) Mechanical frequency
shift, and (b) effective damping rate, as functions of feedback loop phase.
(c) The area of the mechanical peak in the heterodyne output spectrum.
### II.3 Feedback scheme
In some feedback-cooling experiments, a single laser is used both for probing
and applying the feedback force by radiation pressure Rossi _et al._ (2017);
Kumar _et al._ (2022). A situation described by Eq. (4) requires a separate
method to create the force. One possibility is to use direct mechanical
actuation Abbott _et al._ (2009). Most optomechanical experiments have
utilized another laser dedicated to applying the feedback force. We adapt this
technique in this work, and create the feedback force by suitable modulation
of another microwave tone, a relatively weak feedback tone.
The basic frequency scheme is shown in Fig. 1 (a), where the probe tone
frequency is set at the resonance of the cavity. The feedback tone is
typically detuned from the cavity by a detuning $\Delta_{f}$ larger than
several cavity linewidths. This large frequency separation between microwave
tones allows us to treat all optomechanical processes independently. Similar
reasoning holds also for the other explored alternative, the blue-sideband
probing shown in Fig. 1 (b).
At room temperature, the cavity output field is homodyne-detected by
demodulating with a local oscillator at the probe tone frequency $\omega_{p}$,
as seen in Fig. 1 (d). The phase of the local oscillator (with respect to the
phase of the driving tone) is tuned to measure the quadrature $y_{c}$ carrying
the most information on the mechanical oscillator. The demodulated quadrature
is a direct record of the position, appearing as an oscillatory signal at the
frequency $\omega_{m}$ in the laboratory frame. This signal, once phase-
shifted and amplified, is used to realize a weak phase modulation of a second
microwave tone (the feedback tone) at the frequency
$\omega_{f}=\omega_{c}+\Delta_{f}$. This effectively generates a triplet of
frequencies separated by $\omega_{m}$, centered at $\omega_{f}$. With an
adequate setting for the feedback phase-shift, the amplitude of each sideband
of the driving triplet is approximately proportional to the velocity $\dot{x}$
while the central peak has a constant, much larger amplitude. Due to the
nonlinearity of the optomechanical interaction, each sideband interferes with
the central peak to produce a feedback force linear in sideband amplitude,
that is, proportional to $\dot{x}$. Cross-products of sidebands generate a
force dependent on the mechanical energy, which is kept negligible in
comparison to the linear feedback force by keeping the amplitude of the
central peak much larger than that of the sidebands.
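The driving triplet can be illustrated numerically: a weak phase modulation at $\omega_{m}$ with depth $\beta$ yields two sidebands of relative amplitude $\simeq\beta/2$ around a much larger carrier. A sketch with illustrative parameters (the modulation depth and tone frequencies below are not the experimental values):

```python
import numpy as np

# Weak phase modulation -> triplet with sidebands at ~beta/2 of the carrier
# (small-argument limit of the Bessel expansion of exp(i*beta*sin))
beta = 0.05                                 # illustrative "weak" depth
N = 4096
n = np.arange(N)
carrier_cycles, mod_cycles = 512, 32        # integer cycles -> clean FFT bins
tone = np.exp(1j*(2*np.pi*carrier_cycles*n/N + beta*np.sin(2*np.pi*mod_cycles*n/N)))

spec = np.abs(np.fft.fft(tone))/N
carrier = spec[carrier_cycles]
sidebands = spec[carrier_cycles - mod_cycles], spec[carrier_cycles + mod_cycles]
# each sideband sits at ~beta/2 of the carrier amplitude
print(sidebands[0]/carrier, sidebands[1]/carrier)
```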
Figure 3: Feedback cooling. The parameter values are $G/2\pi\simeq 427$ kHz,
$\phi\simeq 143$ degrees. (a), (b) In-loop heterodyne output spectra at the
lower and upper sideband, respectively. The gain values were
$A_{0}/2\pi\simeq[0,10,28,76,125,206]$ kHz from top to bottom. The solid lines
are theoretical fits. (c) Damping rate extracted by fitting Lorentzian curves
to the spectra at lower gain values, together with a fit to Eq. (9b). The data
is shown in the range where the peaks are roughly Lorentzian. (d) Mechanical
occupation as a function of feedback gain. The horizontal arrow indicates the
value $n_{m}\simeq 440$ at zero gain.
## III Experimental setup
### III.1 Electromechanical device
A microwave optomechanical device is used in which an aluminum drum
oscillator is coupled to a superconducting microwave resonator (the cavity).
Photographs of the device are shown in Fig. 1 (c). The aluminum drum
oscillator is a parallel-plate capacitor with a vacuum gap, consisting of an
aluminum membrane suspended above an aluminum electrode. It oscillates at a
frequency $\omega_{m}/2\pi=8.14$ MHz and has an intrinsic damping rate
$\gamma/2\pi=76$ Hz. An LC circuit formed by this plate capacitor and a
meandering inductor sustains a resonance at a microwave frequency
$\omega_{c}/2\pi=5.35$ GHz. The microwave cavity is strongly coupled to a
transmission line thanks to a large interdigitated capacitor. The cavity is
overcoupled, with a decay rate $\kappa/2\pi=8.5$ MHz largely dominated by
external coupling.
The device is maintained at a somewhat elevated temperature of $80$ mK, where
$n_{m}^{T}\simeq 205$, in a dilution refrigerator. We chose an operating
temperature clearly higher than the base temperature, because we observed
intermittent “spiking” Zhou _et al._ (2019) of the mode temperature at lower
temperatures.
The single-photon optomechanical coupling is found to be $g_{0}/2\pi\simeq
130\,\rm Hz$. The calibration of the effective coupling $G$ of the probing
tone is realized by monitoring the sideband-cooling effect as described in
Appendix D. The values of the enhanced couplings of all tones used in the work
are inferred from this calibration and from the measurement of the cavity
susceptibility.
### III.2 Feedback setup
The cavity output signal, Eq. (5), which mainly consists of the two
optomechanical sidebands generated by the probe tone, is amplified inside the
refrigerator with an amplifier exhibiting the effective noise
$n_{\mathrm{add}}\simeq 13$ quanta, then demodulated using the probe tone as a
local oscillator to realize a homodyne detection. The demodulation result is
passed through an analog band-pass filter to retain the signal oscillating at
the mechanical frequency, while limiting unnecessary broadband noise before
digitization. The signal is then sent to a 14-bit FPGA-backed acquisition
device (Red Pitaya STEMlab 125-14) capable of 125 MHz input and output
sampling rate. The FPGA card is programmed to replay this signal, after
further digital band-pass filtering, time delaying, and amplification.
The FPGA output signal is sent to a microwave phase modulator to modulate the
feedback tone at the frequency $\omega_{f}$. This phase modulation, being very
weak, is essentially comparable to an amplitude modulation, and generates
mechanical-momentum-dependent sidebands at $\pm\omega_{m}$ around a strong
coherent peak at $\omega_{f}$. The details are given in Appendix C.
Since the feedback tone (and its modulation sidebands) sits far on the red
side of the cavity, the cavity susceptibility significantly shifts the phase
of the feedback force, by an estimated $72^{\circ}$. The tunable contribution
to the phase shift is eventually tuned to produce a total phase shift of
$\phi_{m}+2n\pi$ ($n\in{\mathbb{N}}$) between the position signal and the
feedback force. As long as the total number of additional periods $n$ by which
the force is delayed from the momentum signal remains much smaller than the
quality factor of the mechanical oscillator, the feedback quality is not
significantly affected by this additional delay.
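The delay-tolerance argument can be made quantitative: with this device's mechanical frequency and linewidth, even a microsecond-scale loop latency (the value below is an assumed, illustrative figure, not a measured one) corresponds to only a handful of mechanical periods, far below the quality factor:

```python
# Mechanical frequency and linewidth from the text
fm, gamma_hz = 8.14e6, 76.0        # Hz
Q = fm/gamma_hz                    # quality factor, ~1e5

tau_loop = 1.2e-6                  # assumed total loop latency (illustrative)
n_periods = fm*tau_loop            # delay measured in mechanical periods
print(n_periods, Q)                # ~10 periods, negligible since n << Q
```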
Each sideband of the feedback triplet, by interfering with the strong central
coherent peak of the triplet through the nonlinear optomechanical interaction,
generates a term in the feedback force with a slightly different phase shift,
as the cavity susceptibility and microwave transmission lines contribute
differently to the total phase shift for each component of the driving
triplet. These two contributions to the feedback force, containing versions of
the mechanical signal phase-shifted by different amounts, do not necessarily
add up fully constructively (see Appendix C), but one is sufficiently
attenuated by the cavity susceptibility to limit the effect of destructive
interference and allow for a relatively strong feedback force in the
experimental situation.
## IV Results
### IV.1 Probing at the cavity frequency
The probe tone is first positioned at the cavity frequency ($\Delta=0$), such
that the displacement spectrum of the oscillator is encoded in the cavity
output spectrum with high efficiency while maintaining mechanical stability.
The feedback tone is detuned by $\Delta_{f}/2\pi=-20$ MHz from the cavity
frequency. This detuning is chosen such that $|\Delta_{f}|>2\omega_{m}$ to
avoid integrating stray feedback field components into the measurement of the
position, but is kept of the order of $\kappa$ to allow for a significant
response of the cavity.
Next, we vary the feedback phase $\phi$ and record the properties of the
mechanical peaks in the heterodyne spectrum. The peaks are primarily
characterized by their frequency, Eq. (9a), and linewidth, Eq. (9b). We show
the phase dependence of these quantities in Fig. 2 (a), (b). The data is not
plotted in the regime where the system is unstable
($\gamma_{\mathrm{eff}}<0$). Aside from the effect of the feedback, the
mechanical frequency is expected to undergo a red-shift under the strong probe
driving due to the second-order optomechanical coupling. The shift is given by
$\delta f_{2}=-\frac{1}{2}g_{2}n_{c}$, where
$g_{2}=\frac{1}{2}\frac{d^{2}\omega_{c}}{dx^{2}}x_{\mathrm{zpf}}^{2}$. With
the experimental parameters, we expect $\delta f_{2}/2\pi\simeq 400$ Hz.
However, the frequency at zero feedback gain is observed to be red-shifted by
1.2 kHz with respect to its value calibrated independently. We believe that
the additional shift is due to occasional drifts of the intrinsic mechanical
frequency observed during the cooldown. For the feedback-induced frequency
shifts shown in Fig. 2 (a), we adjust the zero to correct for the uncertainty
in the intrinsic frequency.
With this adjustment, we reach a good agreement with the theoretical
predictions, leaving the unknown phase shift due to the time delay in cables
as another adjustable parameter. We also study the phase dependence of the area of
the mechanical peaks as shown in Fig. 2 (c). In the limit of weak feedback,
where the noise squashing plays little role, the peak area is a good measure
of the mechanical mode temperature. This condition is satisfied in the data
shown in Fig. 2, where $\gamma_{\mathrm{eff}}$ reaches values up to $\sim
2\pi\cdot 1$ kHz. A cooling by one order of magnitude is observed in Fig. 2
(c), where the reference is the crossing point of the curves corresponding to
zero cooling or amplification.
The maximum damping in the phase-sweep measurements is found at a
phase value around $145^{\circ}\pm 5^{\circ}$, which corresponds to a loop
delay optimized to exert a damping force proportional to velocity. Notice
that this phase differs from the optimum value $\phi_{m}=90^{\circ}$ expected
in the extreme bad-cavity limit. With this optimized phase, we proceed to vary
the feedback gain up to large values and investigate the maximum cooling we
can achieve.
Figure 4: Feedback stabilization and cooling of an intrinsically unstable
system. The probe tone is set at the blue sideband frequency
($\Delta=\omega_{m}$), and the effective coupling is $G/2\pi\simeq 104$ kHz.
The gain values are $A_{0}/2\pi\simeq[189,225,267,378]$ kHz from top to
bottom. The panels present the same quantities as in the resonant pumping
case, Fig. 3; (a), (b) Lower and upper sideband peaks around the probe tone,
respectively. (c) Effective damping in the stable range. (d) Mechanical
occupation as a function of feedback gain.
We show in Fig. 3 (a), (b), the two peaks of the in-loop heterodyne spectra.
At high gain, the peaks are not equal. The lower sideband of the probe tone
exhibits less squashing than the upper sideband. This can be interpreted as a
manifestation of the sideband asymmetry that has been studied in various
optomechanical systems Weinstein _et al._ (2014); Underwood _et al._ (2015);
Lecocq _et al._ (2015b), see Eq. (25). For the theoretical fits, we used the
bath temperature for each gain as the only adjustable parameter because we
anticipate the mechanical bath may be heated up as the gain increases. We
indeed observe a heating of the bath from $n_{m}^{T}\simeq 200$ to
$n_{m}^{T}\simeq 370$ between zero and the high feedback gain values. Overall,
the theory fits the data well. There is some discrepancy at the
highest gain values in the upper sideband, which we believe may be due to some
noise from the feedback tone starting to circulate in the system.
According to Eq. (9b), the mechanical linewidth evolves linearly with respect
to the feedback gain. We test this basic property as shown in Fig. 3 (c). At
high gain values, the peaks get distorted due to noise squashing, and these
data are excluded from the fit. The solid line in Fig. 3 (c) is a
linear fit that is the basis for creating the theoretical spectra in Fig. 3
(a), (b), according to Eqs. (45,46). Finally, in Fig. 3 (d) we show the
mechanical occupancy that can be reached in our setup. The occupancy is
calculated with Eq. (16) using the calibrated quantities, and the bath
temperature obtained from fitting the spectra. We reach an occupation of
$n_{m}\simeq 2.9\pm 0.3$ quanta, close to the ground state, and limited by the
amplifier added noise injected back into the sample.
### IV.2 Probing at the blue optomechanical sideband
We then moved the probe tone from the cavity center to the blue optomechanical
sideband frequency. Since the probe tone at its final intended power pushes
the oscillator far into the instability regime, the feedback parameters were
first optimized at a low probe power, only incrementally reaching higher
powers by exploring small parameter ranges around the stability regime. Figure
4 shows the result of gain sweeps performed at the highest probe power used in
this configuration, and at the optimal feedback phase. The investigation of
small gains is precluded by instability at low feedback efficiencies.
The lower sideband peak shown in Fig. 4 (a), which in this case is located
right at the cavity frequency, exhibits a good agreement with the numerically
computed lineshape for each gain displayed as black solid lines in the panel.
The upper sideband peak shown in Fig. 4 (b) is strongly suppressed by the
unfavorable cavity susceptibility as this signal is detuned from the cavity by
approximately twice the cavity linewidth. For the upper sideband, we
observe a discrepancy with the theoretical lineshapes also displayed in
the panel. We again suspect some additional noise circulating in the
feedback loop at frequencies near the upper sideband of the probe tone,
inducing additional squashing effects. The calibrated height of the peaks,
however, matches well with the predictions.
Similar to the resonant pumping case, we again use the mechanical bath temperature
as a free parameter in the fits to the lineshapes. Here we find a very strong
technical heating when the feedback gain increases: the mechanical bath heats
up to $n_{m}^{T}\simeq 5\cdot 10^{3}$ phonons at the maximum gain
$A_{0}/2\pi\simeq 400$ kHz.
As shown in Fig. 4 (c), a sizable damping rate $\gamma_{\rm eff}/2\pi\simeq 5$
kHz can be obtained in the blue-sideband configuration. This damping is not
only significant considering that the oscillator is anti-damped at zero gain,
but it is also nearly two orders of magnitude larger than the intrinsic
damping. Finally, in Fig. 4 (d) we display the mechanical occupation inferred
in the situation using our numerical model. We obtain a modest cooling down to
$n_{m}\simeq 38$, limited primarily by the technical heating.
## V Conclusions
In summary, we demonstrated feedback cooling in a microwave optomechanical
system. We reached a mechanical occupation of $n_{m}\simeq 3$ quanta in an 8 MHz
membrane resonator. The cooling is limited by the added noise of the microwave
amplifier. By introducing a much less noisy Josephson parametric amplifier
Macklin _et al._ (2015), ground-state cooling in the present system, with the
parameters used in this work, should be well within reach. This will open up
possibilities for feedback-based preparation of more sophisticated states,
such as squeezed states Clerk _et al._ (2008) through backaction-evading
measurements.
###### Acknowledgements.
We would like to thank Juha Muhonen and Marton Gunyho for useful discussions.
We acknowledge the facilities and technical support of Otaniemi research
infrastructure for Micro and Nanotechnologies (OtaNano). This work was
supported by the Academy of Finland (contracts 307757, 312057), and by the
European Research Council (101019712). The work was performed as part of the
Academy of Finland Centre of Excellence program (project 336810). We
acknowledge funding from the European Union’s Horizon 2020 research and
innovation program under grant agreement 824109, the European Microkelvin
Platform (EMP), and QuantERA II Programme (13352189). L. Mercier de Lépinay
acknowledges funding from the Strategic Research Council at the Academy of
Finland (Grant No. 338565).
## Appendix A Solving the closed-loop dynamics
In the bad-cavity case there are simple analytical results for the entire
system, including the output spectrum due to the probe tone (Appendix B).
Also, with arbitrary sideband resolution but restricted to $\Delta=0$, we can
recover expressions for the mechanical occupation, Eq. (16), and the preceding
equations. However, for the present parameters we always compute the output
spectrum numerically. We start from Eq. (4) and write each variable's
connection to the inputs with coefficients to be determined:
$\displaystyle x_{c}=X_{cx}\,x_{\mathrm{c,in}}\,,$ (21a)
$\displaystyle y_{c}=Y_{cy}\,y_{\mathrm{c,in}}+Y_{x}\,x_{\mathrm{c,in}}+Y_{f}\,f_{\mathrm{th}}+Y_{n}\,y_{\mathrm{add}}\,,$ (21b)
$\displaystyle x=X_{f}\,f_{\mathrm{th}}+X_{bax}\,x_{\mathrm{c,in}}+X_{\mathrm{inj}}\,y_{\mathrm{c,in}}+X_{n}\,y_{\mathrm{add}}\,,$ (21c)
$\displaystyle p=P_{f}\,f_{\mathrm{th}}+P_{bax}\,x_{\mathrm{c,in}}+P_{\mathrm{inj}}\,y_{\mathrm{c,in}}+P_{n}\,y_{\mathrm{add}}\,.$ (21d)
As an example, the term with $X_{bax}$ gives the quantum measurement
backaction on the position, $X_{\mathrm{inj}}$ describes how the quantum noise
in the feedback loop is injected back into the sample, and
$X_{n}=X_{\mathrm{inj}}$ describes the analogous injection of amplifier noise.
To continue the example, in the case $\Delta=0$,
$X_{f}=\chi_{\mathrm{fb}}\,,\qquad X_{\mathrm{inj}}=-\frac{4G\omega_{m}\sqrt{\kappa}}{\kappa-2i\omega}\,\chi_{\mathrm{fb}}\,,$ (22)
with $\chi_{\mathrm{fb}}$ given by Eq. (8).
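The numerical procedure outlined above amounts to solving, at each frequency, a linear system for the response coefficients connecting the variables to the inputs, and then summing the resulting contributions to a power spectral density. The following Python sketch illustrates the idea for a generic single-mode example; the matrix, coupling, and input PSD below are illustrative assumptions, not the paper's Eq. (4):

```python
import numpy as np

def output_psd(M, forcing_cols, input_psds):
    """Solve M(w) v = sum_j F_j * input_j for the response coefficients
    and combine them into a power spectral density, assuming the inputs
    are mutually uncorrelated (so cross terms vanish)."""
    coeffs = np.linalg.solve(M, forcing_cols)   # one column per input channel
    return np.sum(np.abs(coeffs) ** 2 * input_psds, axis=-1)

# Toy single-mode example (illustrative parameters, not the paper's):
# a damped oscillator x satisfying (w_m^2 - w^2 - i*gamma*w) x = f_th.
w, w_m, gamma = 0.8, 1.0, 0.1
M = np.array([[w_m**2 - w**2 - 1j * gamma * w]])
F = np.array([[1.0]])            # unit coupling to the thermal force
S_in = np.array([2.0])           # white thermal-force PSD (assumed)
S_x = output_psd(M, F, S_in)
```

Summing $|{\rm coefficient}|^2\times{\rm PSD}$ over input channels is valid precisely because, as stated below Eq. (24), the noise processes are uncorrelated.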
### A.1 Noise considerations
In this section, we assume that the optimal feedback condition is met, that
is, $\phi=\phi_{m}$ [Eq. (10)]. In this case,
$f_{\mathrm{fb}}[\omega]=\frac{2iGA_{0}}{\sqrt{\kappa^{2}/4+\omega_{m}^{2}}}\,x[\omega]+\frac{A_{0}\omega_{m}}{\sqrt{\kappa}}\,e^{-i\phi\frac{\omega}{\omega_{m}}}\Big((\kappa\chi_{c}[\omega]-1)\,y_{\mathrm{c,in}}[\omega]+y_{\mathrm{add}}\Big)\,.$ (23)
The first term of the feedback force is responsible for feedback damping. The
second term accounts for noise from the cavity quantum fluctuations
re-injected by the feedback loop, as well as for added noise from the
detection stage (mainly amplifier added noise), also fed back to the
oscillator. As a result, the position satisfies
$x=\chi_{\mathrm{fb}}\Big[f_{\mathrm{th}}-2\omega_{m}Gx_{c}+\frac{A_{0}\omega_{m}}{\sqrt{\kappa}}\,e^{-i\phi\frac{\omega}{\omega_{m}}}\Big((\kappa\chi_{c}-1)\,y_{\mathrm{c,in}}+y_{\mathrm{add}}\Big)\Big]$ (24)
with $x_{c}=\chi_{c}\sqrt{\kappa}x_{\mathrm{c,in}}$. In the equation above,
the first term accounts for thermal noise, the second for measurement
backaction noise, and the third for the total measurement noise fed back to
the oscillator, which is the sum of the two contributions discussed above. All
noise processes appearing in these equations are uncorrelated, so their
contributions do not interfere. Assuming (as always) that $\gamma\ll\kappa$
and that the cavity is in its ground state, the backaction noise arising from
the second term in Eq. (24) gives Eq. (13). The third term in Eq. (24) leads
to the total measurement noise fed back to the oscillator, Eq. (15).
## Appendix B Spectrum in the unresolved sideband situation
The “noise squashing” in the bad-cavity situation allows for simple analytical
results. At the optimum feedback phase in this situation, $\phi_{m}=\pi/2$,
the in-loop heterodyne spectrum can be understood as consisting of two
Lorentzians with opposite signs:
$S_{\mathrm{out,x}}[\omega]=\frac{8G^{2}}{\kappa}\,S_{x}[\omega]\,,\qquad S_{\mathrm{out},\pm}[\omega]=-\frac{GA_{0}\omega_{m}(\gamma_{\mathrm{eff}}/\kappa)\big(n_{\mathrm{add}}+\frac{1}{2}\big)}{(\omega\mp\omega_{m})^{2}+\big(\frac{\gamma_{\mathrm{eff}}}{2}\big)^{2}}\,,$ (25)
and
$S_{\mathrm{out}}[\omega]=S_{\mathrm{out,x}}[\omega]+S_{\mathrm{out,-}}[\omega]+S_{\mathrm{out,+}}[\omega]+n_{\mathrm{add}}+\frac{1}{2}$.
The term $S_{\mathrm{out,x}}[\omega]$ gives the sideband asymmetry, while the
squashing term is the same for both lower and upper sidebands.
In the case of weak feedback, $S_{\mathrm{out,\pm}}$ are negligible.
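The structure of Eq. (25) can be visualized numerically. In the sketch below the mechanical displacement PSD $S_x$ is modeled as a Lorentzian of width $\gamma_{\mathrm{eff}}$ (an assumption for illustration only, as is every parameter value), which shows the negative squashing Lorentzians pulling the in-loop spectrum below the flat noise floor $n_{\mathrm{add}}+\frac{1}{2}$ near $\omega_{m}$:

```python
import numpy as np

# Illustrative parameters (assumptions, not the experimental values)
kappa, w_m, gamma_eff = 10.0, 5.0, 0.1
G, A0, n_add = 0.01, 3.0, 1.0
n_eff = 0.5   # assumed effective mechanical occupation

def lorentzian(w, w0, width):
    return 1.0 / ((w - w0) ** 2 + (width / 2) ** 2)

w = np.linspace(w_m - 1, w_m + 1, 2001)

# Assumed Lorentzian model for the mechanical displacement PSD near +w_m
S_x = gamma_eff * (n_eff + 0.5) * lorentzian(w, w_m, gamma_eff)

S_out_x = 8 * G**2 / kappa * S_x
S_squash = -(G * A0 * w_m * gamma_eff / kappa) * (n_add + 0.5) * (
    lorentzian(w, w_m, gamma_eff) + lorentzian(w, -w_m, gamma_eff)
)
S_out = S_out_x + S_squash + n_add + 0.5
```

With these (assumed) numbers the spectrum dips below the noise floor at resonance, the "noise squashing" described in the text, while far from $\omega_{m}$ it returns to $n_{\mathrm{add}}+\frac{1}{2}$.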
## Appendix C Electromechanical forces
The intracavity field annihilation operator can be decomposed into
$a=\alpha(t)+\alpha_{f}(t)+\tilde{a}(t)$, where $\tilde{a}$ is a (quantum)
annihilation operator associated with a small fluctuation, $\alpha$ is the
classical complex amplitude of the probe tone oscillating at $\omega_{c}$ or
$\omega_{c}+\omega_{m}$, and $\alpha_{f}$ is the classical complex amplitude
of the feedback tone oscillating at $\omega_{c}+\Delta_{f}\pm\omega_{m}$. The
original non-linear evolution equation for the mechanical oscillator momentum
is:
$\dot{p}=-\gamma p-\omega_{m}x-\sqrt{2}g_{0}a^{\dagger}a+\frac{f_{\mathrm{th}}}{\omega_{m}}\,.$ (26)
The total electromechanical force $-\sqrt{2}g_{0}a^{\dagger}a$ contains the
following terms:
* $-\sqrt{2}g_{0}|\alpha_{f}|^{2}$: the feedback force;
* $-\sqrt{2}g_{0}|\alpha|^{2}$: a DC force from the probe tone;
* $-\sqrt{2}g_{0}(\alpha\tilde{a}^{\dagger}+\alpha^{*}\tilde{a})$: dynamical and quantum backaction from the probe tone;
* $-\sqrt{2}g_{0}(\alpha_{f}\tilde{a}^{\dagger}+\alpha_{f}^{*}\tilde{a})$: dynamical and quantum backaction from the feedback tone.
The force also contains cross-products of the probe and feedback tones
(resonating at $\Delta_{f}$ or $\Delta_{f}+\omega_{m}$) that are largely out
of resonance with the mechanical oscillator and whose impact is neglected.
Finally, it contains terms such as $-2g_{0}\tilde{a}^{\dagger}\tilde{a}$
coming from quantum fluctuations of the cavity field; as usual in driven
cavity optomechanics, these are neglected because they are typically much
weaker than all other forces.
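The decomposition above can be reproduced symbolically. In the sketch below the classical amplitudes are treated as real scalars for simplicity (an illustrative assumption), while the fluctuation operators are kept non-commutative; expanding $a^{\dagger}a$ recovers the static forces, the linear backaction terms from each tone, and the quadratic $\tilde{a}^{\dagger}\tilde{a}$ term that the text neglects:

```python
import sympy as sp

# Classical amplitudes as real scalars (illustrative assumption);
# the fluctuation operators are kept non-commutative.
alpha, alpha_f, g0 = sp.symbols('alpha alpha_f g_0', real=True)
at = sp.Symbol('atilde', commutative=False)       # \tilde{a}
atd = sp.Symbol('atilde_dag', commutative=False)  # \tilde{a}^\dagger

a = alpha + alpha_f + at
a_dag = alpha + alpha_f + atd
force = sp.expand(-sp.sqrt(2) * g0 * a_dag * a)

# Expected grouping: static part, linear (backaction) part, and the
# quadratic fluctuation term atd*at, with c the total classical amplitude
c = alpha + alpha_f
expected = sp.expand(-sp.sqrt(2) * g0 * (c**2 + c * (at + atd) + atd * at))
```

Note that the static part $-\sqrt{2}g_{0}(\alpha+\alpha_{f})^{2}$ contains the cross term $-2\sqrt{2}g_{0}\alpha\alpha_{f}$, which corresponds to the probe-feedback cross-products discussed (and neglected) in the text.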
### C.1 Probe tone at the cavity center
We now derive $\alpha_{f}$ to obtain the expression for the feedback force in
the situation where the probe tone is sent at the cavity center. The position
signal in frequency space is
$x[\omega]\equiv\frac{b[\omega]+b^{\dagger}[\omega]}{\sqrt{2}}\,,$ (27)
with $b$ ($b^{\dagger}$) the annihilation (creation) operator in the
laboratory frame. In the following, we will also denote phase-shifted position
signals
$x_{\varphi}[\omega]\equiv\frac{b[\omega]e^{i\varphi}+b^{\dagger}[\omega]e^{-i\varphi}}{\sqrt{2}}$
(28)
The inverse Fourier transform of $x_{\varphi}$ is, in the limit of delays
small compared to the decoherence time, the delayed position signal
$x_{\varphi}(t)\simeq x(t-\varphi/\omega_{m})$. In what follows, we assume
this small-delay condition is always satisfied. The cavity quadrature coupled
to the motion is:
$y_{c}[\omega]=\chi_{c}[\omega]\bigg{[}-\sqrt{2}G\,\Big{(}b[\omega]+b^{\dagger}[\omega]\Big{)}\,+\sqrt{\kappa}\,y_{c,\,\rm
in}[\omega]\bigg{]}$ (29)
Each operator ($b$, $b^{\dagger}$) selects a narrow frequency range
($+\omega_{m}$, $-\omega_{m}$) over which the cavity susceptibility can be
considered constant. That is, $b[\omega]$ and $b^{\dagger}[\omega]$ sample the
cavity susceptibility at different (opposite) frequencies $\pm\omega_{m}$,
such that:
$y_{c}[\omega]\simeq-\sqrt{2}G|\chi_{c}[\omega_{m}]|\,\Big(b[\omega]e^{i\phi_{0}}+b^{\dagger}[\omega]e^{-i\phi_{0}}\Big)+\chi_{c}[\omega]\sqrt{\kappa}\,y_{\mathrm{c,in}}[\omega]$ (30)
where
$\phi_{0}\equiv\arg\{\chi_{c}[\omega_{m}]\}=\arctan\Big(\frac{2\omega_{m}}{\kappa}\Big)\,,\qquad|\chi_{c}[\omega_{m}]|\equiv(\kappa^{2}/4+\omega_{m}^{2})^{-1/2}\,.$ (31)
Neglecting cavity noise, the corresponding output quadrature is:
$y_{\mathrm{out}}[\omega]=\sqrt{\kappa}\,y_{c}[\omega]-y_{\mathrm{c,in}}[\omega]=\frac{2\sqrt{\kappa}G}{\sqrt{\kappa^{2}/4+\omega_{m}^{2}}}\,x_{\phi_{0}}[\omega]+(\kappa\chi_{c}[\omega]-1)\,y_{\mathrm{c,in}}[\omega]\,.$ (32)
We neglect cavity noise in the following. The feedback loop applies a filter
which amplifies this signal by a gain $\mathcal{G}$ and delays it by $\tau$.
The equivalent phase shift for an oscillator at $\omega_{m}$ is
$\phi_{\tau}\equiv\omega_{m}\tau$. The unit of $\mathcal{G}$ is chosen such
that the result of the filtering operation is a dimensionless signal $s(t)$
whose Fourier transform is
$s[\omega]=\frac{2\mathcal{G}\sqrt{\kappa}G}{\sqrt{\kappa^{2}/4+\omega_{m}^{2}}}\,x_{\phi_{0}+\phi_{\tau}}[\omega]\,.$
(33)
This signal is sent to a phase modulator driven by a coherent pump at
$\omega_{f}$ and of amplitude $\alpha_{0}/\pi$. The result of this modulation
is an electronic signal
$\alpha_{f,\mathrm{mod}}(t)=\frac{\alpha_{0}}{\pi}\sin\big(\omega_{f}t+s(t)\big)\,.$ (34)
In the equation above, the conversion factor affecting $s(t)$ in the mixing
operation is, for convenience, also absorbed into $\mathcal{G}$ and therefore
into $s(t)$. Finally, if $|s(t)|\ll 2\pi$, using $\sin s(t)\simeq s(t)$ and
$\cos s(t)\simeq 1$, this signal corresponds to the sum of a strong coherent
tone oscillating at $\omega_{f}$ and a weaker signal at $\omega_{f}$ whose
amplitude is modulated by $s(t)$:
$\alpha_{f,\mathrm{mod}}(t)\simeq\frac{\alpha_{0}}{\pi}\Big(\sin\omega_{f}t+s(t)\cos\omega_{f}t\Big)\,.$ (35)
The corresponding complex amplitude in the frame oscillating at the cavity
resonance frequency is
$\alpha_{f,\mathrm{in}}[\omega]=i\alpha_{0}\,\delta(\omega-\Delta_{f})+\frac{2\alpha_{0}\mathcal{G}\sqrt{\kappa}G}{\sqrt{\kappa^{2}/4+\omega_{m}^{2}}}\,x_{\phi_{0}+\phi_{\tau}}[\omega-\Delta_{f}]\,.$ (36)
This signal drives the cavity to apply the feedback force. Its spectrum is a
triplet of peaks: one strong peak of amplitude $\alpha_{0}$ at
$\omega=\omega_{c}+\Delta_{f}$ and two weaker sidebands (weak per the
assumption $|s(t)|\ll 2\pi$ above) whose spectrum reproduces that of $x$,
oscillating at $\omega=\omega_{c}+\Delta_{f}\pm\omega_{m}$. Since the
separations between these three peaks are significant compared to the cavity
linewidth, they are affected differently by the cavity susceptibility; in
particular, they acquire different phase shifts, denoted $\phi_{1}$,
$\phi_{+}$, and $\phi_{-}$.
$\alpha_{f}[\omega]=\frac{\sqrt{2}\alpha_{0}\mathcal{G}\kappa G}{\sqrt{\kappa^{2}/4+\omega_{m}^{2}}}\Bigg(\frac{b[\omega-\Delta_{f}]\,e^{i(\phi_{0}+\phi_{\tau}+\phi_{+})}}{\sqrt{\kappa^{2}/4+(\Delta_{f}+\omega_{m})^{2}}}+\frac{b^{\dagger}[\omega-\Delta_{f}]\,e^{-i(\phi_{0}+\phi_{\tau}-\phi_{-})}}{\sqrt{\kappa^{2}/4+(\Delta_{f}-\omega_{m})^{2}}}\Bigg)+\alpha_{0}\sqrt{\kappa}\,\frac{\delta(\omega-\Delta_{f})\,e^{i(\phi_{1}+\pi/2)}}{\sqrt{\kappa^{2}/4+\Delta_{f}^{2}}}$ (37)
with
$\phi_{1}=\arctan\frac{2\Delta_{f}}{\kappa}\,,\qquad\phi_{+}=\arctan\frac{2(\Delta_{f}+\omega_{m})}{\kappa}\,,\qquad\phi_{-}=\arctan\frac{2(\Delta_{f}-\omega_{m})}{\kappa}\,.$ (38)
#### C.1.1 Feedback force
The linear feedback force therefore comes from a cross-product of either
sideband of the drive with the central peak of the drive (neglecting the added
noise contribution for now):
$f_{\mathrm{fb}}[\omega]=-\frac{2g_{0}\kappa^{3/2}\alpha_{0}^{2}\mathcal{G}G}{\sqrt{(\kappa^{2}/4+\omega_{m}^{2})(\kappa^{2}/4+\Delta_{f}^{2})}}\Bigg(\frac{x_{\phi_{0}+\phi_{\tau}+\phi_{+}-\phi_{1}-\pi/2}[\omega]}{\sqrt{\kappa^{2}/4+(\Delta_{f}+\omega_{m})^{2}}}+\frac{x_{\phi_{0}+\phi_{\tau}-\phi_{-}+\phi_{1}+\pi/2}[\omega]}{\sqrt{\kappa^{2}/4+(\Delta_{f}-\omega_{m})^{2}}}\Bigg)$ (39)
The force therefore has two contributions, one from each sideband of the
driving triplet interfering with the central peak. Each contribution contains
a phase-shifted version of the position, with a different phase shift for
each, due to the frequency dependence of the phase of the cavity
susceptibility. In the limit of delays small compared to the decoherence time,
the sum of two position signals with different phase shifts is itself a
phase-shifted position signal. Denoting the phase shifts
$\varphi=\phi_{\tau}+\phi_{+}-\phi_{1}-\pi/2$ and
$\varphi^{\prime}=\phi_{\tau}-\phi_{-}+\phi_{1}+\pi/2$, and the coefficients
$A=\frac{\kappa}{\sqrt{\kappa^{2}/4+(\Delta_{f}+\omega_{m})^{2}}}$ and
$B=\frac{\kappa}{\sqrt{\kappa^{2}/4+(\Delta_{f}-\omega_{m})^{2}}}$, the force
is then written (again neglecting the noise contribution)
$f_{\mathrm{fb}}[\omega]=-\frac{2G}{\sqrt{\kappa^{2}/4+\omega_{m}^{2}}}\,\frac{g_{0}\sqrt{\kappa}\alpha_{0}^{2}\mathcal{G}}{\sqrt{\kappa^{2}/4+\Delta_{f}^{2}}}\,D\,x_{\phi_{0}+\phi}[\omega]$ (40)
with $D\equiv\sqrt{A^{2}+B^{2}+2AB\cos(\varphi-\varphi^{\prime})}$ and
$\phi\equiv\arctan\frac{A\sin\varphi+B\sin\varphi^{\prime}}{A\cos\varphi+B\cos\varphi^{\prime}}$.
We can now identify the phase shift $\phi$ of the feedback filter as defined
in the main text, as well as the gain $A_{0}$,
$A_{0}=\frac{g_{0}\sqrt{\kappa}\alpha_{0}^{2}\mathcal{G}}{\sqrt{\kappa^{2}/4+\Delta_{f}^{2}}}D$
(41)
proportional to the parameter $\mathcal{G}$. We also see that, because the two
position signals contributing to the feedback force are not affected by the
same phase shift, their sum is only partially constructive; the gain $A_{0}$
is maximal when $\varphi$ and $\varphi^{\prime}$ are equal. On the other hand,
if one force contribution largely dominates the other, the interference
between them is weak. The only truly unfavorable situation, which makes
$A_{0}$ vanish, is therefore the very-bad-cavity limit, wherein the amplitudes
of the two contributions are similar and $\varphi-\varphi^{\prime}\simeq-\pi$.
In the experimental situation, not accounting for phase shifts incurred during
propagation in the transmission lines, the phase difference between the force
components, $\varphi-\varphi^{\prime}=\phi_{+}+\phi_{-}-2\phi_{1}-\pi$, is
about $-176^{\circ}$, which would be quite unfavorable. However, the sideband
at $\omega_{c}+\omega_{m}+\Delta_{f}$ of the feedback triplet is more than
twice as strong as the sideband at $\omega_{c}-\omega_{m}+\Delta_{f}$,
limiting the impact of the destructive interference. Furthermore, as they come
from signals at different frequencies, $\varphi$ and $\varphi^{\prime}$ are
also affected by different phase shifts accumulated during propagation in the
transmission lines, which are not accounted for in the above estimate. In
conclusion, even in the worst phase configuration, where the two feedback
force contributions interfere destructively, the fact that one dominates over
the other ensures that the destructive interference is incomplete and that the
feedback force remains significant.
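The statement that the sum of two phase-shifted position signals is itself a phase-shifted position signal is a phasor identity, and the expressions for $D$ and $\phi$ following Eq. (40) can be checked against a direct complex sum. The weights and phases below are arbitrary illustrative values:

```python
import numpy as np

def combined_gain_and_phase(A, B, phi1, phi2):
    """Magnitude and phase of A*e^{i phi1} + B*e^{i phi2}, to be compared
    with D = sqrt(A^2 + B^2 + 2AB cos(phi1 - phi2)) and
    phi = arctan[(A sin phi1 + B sin phi2)/(A cos phi1 + B cos phi2)]."""
    z = A * np.exp(1j * phi1) + B * np.exp(1j * phi2)
    return abs(z), np.angle(z)

A, B = 1.0, 0.4                 # illustrative sideband weights
phi, phip = 0.3, -0.9           # illustrative phase shifts
D, ph = combined_gain_and_phase(A, B, phi, phip)
D_formula = np.sqrt(A**2 + B**2 + 2 * A * B * np.cos(phi - phip))
ph_formula = np.arctan2(A * np.sin(phi) + B * np.sin(phip),
                        A * np.cos(phi) + B * np.cos(phip))
```

The sketch uses `np.arctan2` rather than a plain arctangent so that the correct quadrant is always selected.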
## Appendix D Data analysis
The effective coupling $G$ at a given generator power is obtained based on a
standard power sweep in a sideband cooling measurement, where the pump
frequency is set at the red sideband. The optomechanical damping is fitted
linearly with the generator power $P$:
$\gamma_{\mathrm{opt}}=\mathcal{P}P\,,$ (42)
where $\mathcal{P}$ is the calibration coefficient. The effective coupling
under the red-sideband probing is obtained from Eq. (19) as
$G_{\mathrm{rsb}}=\frac{1}{2}\sqrt{\mathcal{P}P\kappa\left[1+\left(\frac{\kappa}{4\omega_{m}}\right)^{2}\right]}\,.$
(43)
In determining $G$ in the feedback experiment, we have to account for the
specific probe tone detuning because the field amplitude in the cavity, at a
given generator power, depends on the cavity susceptibility:
$G=\frac{|\chi_{c}(\Delta)|}{|\chi_{c}(-\omega_{m})|}G_{\mathrm{rsb}}\,,$ (44)
where $\Delta$ is the specific detuning, which can be either $\Delta=0$, or
$\Delta=\omega_{m}$.
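The calibration chain of Eqs. (42)-(44) can be sketched on synthetic data. All numerical values below are illustrative assumptions, and the susceptibility magnitude $|\chi_{c}(\Delta)|=(\kappa^{2}/4+\Delta^{2})^{-1/2}$ is taken from Eq. (31):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sideband-cooling power sweep (illustrative numbers):
# gamma_opt = P_coeff * P plus small measurement noise, as in Eq. (42).
P = np.linspace(1e-6, 1e-5, 10)        # generator power (arbitrary units)
P_coeff_true = 2.0e5
gamma_opt = P_coeff_true * P + rng.normal(0, 1e-3, P.size)

# Least-squares linear fit through the origin
P_coeff = np.sum(P * gamma_opt) / np.sum(P * P)

# Effective coupling under red-sideband probing, Eq. (43)
kappa, w_m = 2 * np.pi * 1e6, 2 * np.pi * 10e6   # assumed cavity parameters
P_probe = 5e-6
G_rsb = 0.5 * np.sqrt(P_coeff * P_probe * kappa * (1 + (kappa / (4 * w_m))**2))

# Detuning correction of Eq. (44) for resonant probing (Delta = 0)
def abs_chi_c(delta):
    return (kappa**2 / 4 + delta**2) ** -0.5

G = abs_chi_c(0.0) / abs_chi_c(w_m) * G_rsb
```

Since $|\chi_{c}(0)|>|\chi_{c}(-\omega_{m})|$, the resonant-probing coupling $G$ always exceeds $G_{\mathrm{rsb}}$ at the same generator power.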
The gain $A_{0}$ used in the theoretical discussion is not directly accessible
in the experiment. The experimentally relevant gain $\mathcal{G}$ is simply
proportional to it, with an unknown coefficient set by the transduction.
The values of $A_{0}$ calibrated as described immediately below are used when
fitting the theoretically obtained output spectrum to the feedback data, and
to infer the mechanical occupation.
### D.1 Resonant probing
We fit the measured linewidths as follows to obtain the calibration
coefficient $\mathcal{L}$:
$\gamma_{\mathrm{fb}}=\mathcal{L}g\,.$ (45)
We combine Eq. (45) with Eq. (11):
$A_{0}=\frac{\mathcal{L}\mathcal{G}\sqrt{\kappa^{2}+4\omega_{m}^{2}}}{4G}\,.$
(46)
### D.2 Blue-sideband probing
Here, we use the damping stated in Eq. (20), and obtain at the optimum phase
$\gamma_{\mathrm{eff}}=\gamma-\gamma_{\mathrm{opt}}+\gamma_{\mathrm{fb,bsb}}\,,$
(47)
where
$\gamma_{\mathrm{fb,bsb}}=\frac{4A_{0}G\sqrt{\kappa^{4}+20\kappa^{2}\omega_{m}^{2}+64\omega_{m}^{4}}}{\kappa^{3}+16\kappa\omega_{m}^{2}}\,.$
(48)
We fit the measured linewidths as a function of gain to Eqs. (47) and (48), in
a manner similar to that described around Eq. (46) for the resonant-probing
situation.
# Stochastic Optimal Control via Local Occupation Measures
Flemming Holtorf (E-mail: <EMAIL_ADDRESS>)
Department of Chemical Engineering, Massachusetts Institute of Technology; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Christopher Rackauckas (E-mail: <EMAIL_ADDRESS>)
Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology; Julia Computing
###### Abstract
Viewing stochastic processes through the lens of occupation measures has
proven to be a powerful angle of attack for the theoretical and computational
analysis of a wide range of stochastic optimal control problems. We present a
simple modification of the traditional occupation measure framework, derived
by resolving the occupation measures locally on a partition of the state
space and control horizon. This modification bridges the gap between
discretization-based approximations to the solution of the
Hamilton-Jacobi-Bellman equations and convex-optimization-based approaches
relying on the moment-sum-of-squares hierarchy. When combined with the moment-sum-of-squares
hierarchy, the notion of local occupation measures provides fine-grained
control over the construction of highly structured semidefinite programming
relaxations for a rich class of stochastic optimal control problems with
embedded diffusion and jump processes. We show how these relaxations are
constructed, analyze their favorable properties, and demonstrate with examples
that they hold the potential to allow for the computation of tighter bounds
orders of magnitude faster than is possible via naive combination of the
moment-sum-of-squares hierarchy with the traditional occupation measure
framework.
## 1 Introduction
The optimal control of stochastic processes is arguably one of the most
fundamental problems in the context of decision-making under uncertainty.
While a wide range of decision-making problems in engineering lend themselves
to be naturally modeled as such stochastic optimal control problems, only a
small subset of them admits identification of a globally optimal control
policy in a tractable and certifiable manner. As a consequence, engineers are
often forced to resort to one of many available heuristics for the
construction of control policies in practice. And although such heuristics
often perform remarkably well, they seldom come with a simple mechanism to
quantify rigorously the degree of suboptimality they introduce, ultimately
leaving it to the engineer’s intuition to determine when the controller design
process shall be terminated.
Motivated by this undesirable situation and the need for a baseline
quantifying the best attainable control performance, the task of computing
theoretically guaranteed yet informative bounds on the optimal value of
various classes of stochastic optimal control and related problems has
received considerable attention in the past [SRS18, HRS01, Sch01, LPZ06,
Las+08, SLD09]. In particular the framework of occupation measures [FV89,
BB96, KS98] offers a versatile approach to address this task. The notion of
occupation measures allows for the translation of a rich class of stochastic
optimal control problems into generalized moment problems for which a sequence
of increasingly tight, finite semidefinite programming (SDP) relaxations can
be readily constructed via the moment-sum-of-squares hierarchy [Las01, Par00].
A key limitation of this approach, however, is its poor scalability.
Specifically, the problem size of the SDP relaxations grows combinatorially
with the hierarchy level (the problem size scales as ${n+d\choose n}$, where
$n$ and $d$ refer to the dimension of the state space and the hierarchy
level, respectively); however, large hierarchy levels are often necessary to
establish truly informative bounds. The notorious numerical ill-conditioning
of moment problems further exacerbates this limitation. With
this contribution, we set out to improve the practicality of the occupation
measure approach to stochastic optimal control by proposing a simple
modification of the traditional occupation measure framework. Concretely, we
introduce a local notion of occupation measures based on the discretization of
the state space of the process and the control horizon. The so-constructed
relaxations can then be tightened simply by refining the spatio-temporal
discretization without increasing the hierarchy level. While this does not
offer inherently better asymptotic scaling than the traditional moment-sum-of-
squares hierarchy, this “tightening-by-refinement” approach provides two major
practical advantages:
1. 1.
It avoids numerical ill-conditioning which in practice often prohibits the
solution of SDP relaxations associated with high levels of the moment-sum-of-
squares hierarchy.
2. 2.
It provides more fine-grained control over tightening the SDP relaxations than
simply traversing the traditional moment-sum-of-squares hierarchy.
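The combinatorial scaling noted above is easy to make concrete: the side length of the moment matrix at hierarchy level $d$ for an $n$-dimensional state space equals the number of monomials of degree at most $d$, i.e. ${n+d\choose n}$. A short sketch:

```python
from math import comb

def moment_matrix_size(n, d):
    """Number of monomials of degree <= d in n variables, i.e. the
    side length of the level-d moment matrix."""
    return comb(n + d, n)

# Growth with the hierarchy level d for a 6-dimensional state space
sizes = [moment_matrix_size(6, d) for d in range(1, 7)]
# sizes == [7, 28, 84, 210, 462, 924]
```

Since SDP solve times grow rapidly with the matrix side length, refining the spatio-temporal partition at a fixed (low) level $d$ trades one large moment matrix for many small, weakly coupled ones.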
As we demonstrate, the latter advantage holds the potential to yield equally
tight or even tighter relaxations that can be solved orders of magnitude
faster than those derived from combining the moment-sum-of-squares hierarchy
with the traditional occupation measure framework. Another potential
advantage, worth mentioning yet beyond the scope of this work, is that the
proposed approach is similar in spirit to a wide range of numerical
approximation techniques for the solution of partial differential equations
(PDEs); as such, the resultant SDP relaxations exhibit a benign, weakly
coupled block structure akin to that of discretized PDEs, which may be
exploited, for example, through distributed optimization techniques [SZA20,
Boy+10].
A discretization approach closely related to the proposed local occupation
measure framework has recently been proposed by [CKH21] in the context of
approximating the region of attraction for deterministic control systems by
means of sum-of-squares programming. In another related work, [HB21] have used
temporal discretization in order to improve moment bounding schemes for
trajectories of stochastic chemical systems described as jump processes. Both
works reported significant computational advantages over the traditional
moment-sum-of-squares hierarchy. This contribution unifies and extends both
works by introducing the notion of local occupation measures, which applies
beyond deterministic control problems to jump and diffusion control problems
alike.
The local occupation measure framework is independent of, and can be
complemented by, other approaches aimed at improving the tractability and
practicality of the moment-sum-of-squares hierarchy, most notably sparse
moment-sum-of-squares hierarchies [SK20, Wan+21, ZFP19] and
linear/second-order-cone programming hierarchies [AM14, ADH17, AH19, AM19].
The remainder of this article is structured as follows: In Section 2, we
formally introduce the concept of occupation measures and show how it enables
the construction of tractable convex relaxations for a large class of
stochastic optimal control problems with embedded diffusion processes. In
Sections 3 and 4, we propose the notion of local occupation measures and study
its interpretation in the context of stochastic optimal control from the
primal (moment) and dual (polynomial) perspective, respectively. Section 5 is
dedicated to highlighting the advantages of the proposed framework for the
construction of high-quality relaxations with regard to the scaling
properties and structure of the resultant optimization problems. In Section 6, we
showcase the potential of the proposed approach with an example problem from
population control. In Section 7, we discuss the extension of the described
local occupation measure framework to discounted infinite horizon control
problems as well as the control of jump processes, supported with an example
from systems biology. We conclude with some final remarks in Section 8.
## 2 Problem Description & Preliminaries
We consider a controlled, continuous-time diffusion process $x(t)$ in
$\mathbb{R}^{n}$ driven by a standard $\mathbb{R}^{m}$-Brownian motion $b(t)$,
$\displaystyle dx(t)=f(x,u)\,dt+g(x,u)\,db(t),$ (SDE)
and study the associated finite horizon optimal control problem
$\displaystyle\inf_{u(\cdot)}\quad\mathbb{E}_{\nu_{0}}\left[\int_{[0,T]}\ell(x(t),u(t))\,dt+\phi(x(T))\right]$ (OCP)
$\text{s.t.}\quad x(t)\text{ satisfies \eqref{eq:SDE} on }[0,T]\text{ with }x(0)\sim\nu_{0},\quad x(t)\in X,\quad u(t)\in U.$
Here $\mathbb{E}_{\nu_{0}}$ refers to the expectation with respect to the
probability measure $\mathbb{P}_{\nu_{0}}$ over the paths of the process
described by (SDE). The subscript $\nu_{0}$ refers to the dependence on the
initial distribution, which we assume to be known exactly. Throughout, we
further assume that all problem data is described in terms of polynomials in
the following sense.
###### Assumption 1.
The drift coefficient $f:X\times U\to\mathbb{R}^{n}$, diffusion matrix
$gg^{\top}:X\times U\to\mathbb{R}^{n\times n}$, stage cost $\ell:X\times
U\to\mathbb{R}$, and terminal cost $\phi:X\to\mathbb{R}$ are
componentwise polynomial functions jointly in both arguments. The state space
$X$ and the set of admissible control inputs $U$ are basic closed
semialgebraic sets.
It is well-known that the (extended) infinitesimal generator
$\mathcal{A}:\mathcal{C}^{1,2}([0,T]\times X)\to\mathcal{C}([0,T]\times
X\times U)$ associated with the process described by (SDE) is given by
$\displaystyle\mathcal{A}:w\mapsto\frac{\partial w}{\partial
t}(t,x)+f(x,u)^{\top}\nabla_{x}w(t,x)+\frac{1}{2}\text{Tr}\left(gg^{\top}(x,u)\nabla_{x}^{2}w(t,x)\right).$
A crucial observation for the construction of tractable relaxations of (OCP)
is the following.
###### Observation 1.
Under Assumption 1, $\mathcal{A}$ is a linear operator that maps polynomials
to polynomials.
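Observation 1 is easy to verify symbolically for concrete problem data. The following sketch (using sympy, with an illustrative one-dimensional drift $f=x-xu$ and diffusion $g=x$ chosen here purely for demonstration) applies the generator to a polynomial observable and confirms that the result is again a polynomial in $(t,x,u)$:

```python
import sympy as sp

t, x, u = sp.symbols("t x u")

# Illustrative polynomial problem data (hypothetical, one-dimensional)
f = x - x * u                 # drift f(x, u)
gg = x**2                     # g(x, u) g(x, u)^T, scalar case

def generator(w):
    """Infinitesimal generator of the scalar diffusion dx = f dt + g db:
    A w = dw/dt + f dw/dx + (1/2) g g^T d^2 w/dx^2."""
    return sp.expand(sp.diff(w, t) + f * sp.diff(w, x)
                     + sp.Rational(1, 2) * gg * sp.diff(w, x, 2))

Aw = generator(t * x**3)      # apply A to the polynomial observable w = t x^3
is_poly = Aw.is_polynomial(t, x, u)
```

The same check goes through for any polynomial drift, squared diffusion, and observable, which is exactly the content of Observation 1.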
Further recall that the infinitesimal generator encodes a deterministic
description of the expected time evolution of observables of the process state
evolving according to (SDE). This stochastic analogue of the fundamental
theorem of calculus is known as Dynkin’s formula,
$\displaystyle\mathbb{E}_{\nu_{0}}\left[w(t,x(t))\right]-\mathbb{E}_{\nu_{0}}\left[w(0,x(0))\right]=\mathbb{E}_{\nu_{0}}\left[\int_{[0,t]}\mathcal{A}w(s,x(s),u(s))\,ds\right],$
(1)
and holds for any stopping time $t$ and smooth observable
$w(t,x)\in\mathcal{C}^{1,2}([0,T]\times X)$ [OS07].
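For polynomial observables, Dynkin's formula yields closed evolution equations for moments of the process. As an illustration (the Ornstein–Uhlenbeck process $dx=-x\,dt+db$ with observable $w(x)=x^{2}$ is our choice of example, not taken from the text), differentiating (1) gives the moment ODE $\frac{d}{dt}\mathbb{E}[x(t)^{2}]=\mathbb{E}[\mathcal{A}w]=1-2\,\mathbb{E}[x(t)^{2}]$; the sketch below integrates it and compares against the closed-form solution:

```python
import math

# Moment ODE from Dynkin's formula for dx = -x dt + db with w(x) = x^2:
#   d/dt E[x(t)^2] = E[A w(x(t))] = 1 - 2 E[x(t)^2]
x0_sq, T, n = 4.0, 1.0, 100_000
dt = T / n

m = x0_sq
for _ in range(n):               # explicit Euler integration of the moment ODE
    m += dt * (1.0 - 2.0 * m)

exact = 0.5 + (x0_sq - 0.5) * math.exp(-2.0 * T)   # closed-form solution
```

No sample paths are needed: Dynkin's formula converts the stochastic evolution of the observable into a deterministic equation for its expectation.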
The key insight that enables the construction of relaxations of the stochastic
optimal control problem (OCP) is that, in light of Observation 1, Dynkin’s
formula describes a linear subspace of so-called occupation measures [FV89,
KS98, BB96]. To take advantage of this fact, we define two types of occupation
measures: the instantaneous and expected state-action occupation measure. The
instantaneous occupation measure $\nu_{t}$ associated with a time point
$t\in[0,T]$ assigns to any Borel set $B\subset X$ the probability of observing
the stochastic process in $B$ at time $t$. This is illustrated in Figure 1(a).
Formally, we define
$\displaystyle\nu_{t}(B)=\mathbb{P}_{\nu_{0}}\left[x(t)\in B\right].$
The expected state-action occupation measure $\xi$ is defined as the expected
time the stochastic process $(t,x(t),u(t))$ spends in a Borel subset
$B\subset[0,T]\times X\times U$; this is illustrated in Figure 1(b) for the
special case of an uncontrolled process. Formally, we define
$\displaystyle\xi(B_{T}\times B_{X}\times
B_{U})=\mathbb{E}_{\nu_{0}}\left[\int_{[0,T]\cap B_{T}}\mathds{1}_{B_{X}\times
B_{U}}(x(t),u(t))\,dt\right]$
for any Borel subsets $B_{T}\subset[0,T]$, $B_{X}\subset X$, $B_{U}\subset U$.
It is further critical to note that both $\nu_{t}$ and $\xi$ are finite, non-
negative measures by definition.
(a) The instantaneous occupation measure of the shaded set associated with the
time $T$ is the probability to observe the process in the set at time $T$.
(b) The expected state occupation measure of the shaded set coincides with the
expected length of the projection of the red portions of the process paths
onto the $t$ axis.
Figure 1: Illustration of occupation measures
With these definitions, the expectations in Equation (1) can be expressed in
terms of Lebesgue integrals with respect to $\nu_{0}$, $\nu_{t}$ and $\xi$:
$\displaystyle\int_{X}w(t,x)\,d\nu_{t}(x)-\int_{X}w(0,x)\,d\nu_{0}(x)=\int_{[0,T]\times
X\times U}\mathcal{A}w(s,x,u)\,d\xi(s,x,u).$ (2)
Taking Observation 1 into account, Equation (2) shows that Dynkin’s formula
not only describes an affine subspace of the occupation measures $\nu_{t}$ and
$\xi$ but more importantly, under Assumption 1, also an affine subspace of
their moments. We say that two measures $\nu_{t}$ and $\xi$ are a weak
solution to (SDE) on the interval $[0,t]$ if the above relation is satisfied
for all $w\in\mathcal{C}^{1,2}([0,t]\times X)$. This notion of weak solutions
to (SDE) motivates the following weak form of (OCP):
$\displaystyle\inf_{\nu_{T},\xi}\quad$ $\displaystyle\int_{[0,T]\times X\times
U}\ell(x,u)\,d\xi(t,x,u)+\int_{X}\phi(x)\,d\nu_{T}(x)$ (weak OCP) s.t.
$\displaystyle\int_{X}w(T,x)\,d\nu_{T}(x)-\int_{X}w(0,x)\,d\nu_{0}(x)=\int_{[0,T]\times
X\times U}\mathcal{A}w(s,x,u)\,d\xi(s,x,u),$ $\displaystyle\hskip
227.62204pt\forall w\in\mathcal{C}^{1,2}([0,T]\times X),$
$\displaystyle\nu_{T}\in\mathcal{M}_{+}(X),$
$\displaystyle\xi\in\mathcal{M}_{+}([0,T]\times X\times U),$
where $\mathcal{M}_{+}(Y)$ refers to the cone of finite, positive Borel
measures supported on the set $Y$. While (weak OCP) remains intractable as an
infinite dimensional linear program, Assumption 1 enables the construction of
a sequence of increasingly tight SDP relaxations via the moment-sum-of-squares
hierarchy [Par00, Las01]. This is achieved by relaxing (weak OCP) to
optimization over the moments of the measures $\nu_{t}$ and $\xi$ up to finite
order $d$ while imposing the affine constraints generated by Equation (2) and
the necessary positive semidefinite constraints on the moment and localizing
matrices reflecting non-negativity and support of both occupation measures;
the relaxations can be tightened by increasing the maximum moment order $d$.
The technical details for this construction are immaterial for our
contribution so we instead refer the interested reader to [Las10].
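To give a flavor of the moment and localizing matrices involved: for a genuine measure $\mu$, the matrix $M_{ij}=m_{i+j}$ built from the moments $m_{k}=\int x^{k}\,d\mu$ must be positive semidefinite, and this is the constraint the relaxation imposes. A minimal numerical sketch (the uniform measure on $[0,1]$ is our illustrative choice):

```python
import numpy as np

# Moments m_k = ∫ x^k dμ of the uniform measure on [0, 1]: m_k = 1/(k + 1)
d = 2                               # relaxation (half-)order
m = [1.0 / (k + 1) for k in range(2 * d + 1)]

# Moment matrix M[i, j] = m_{i+j}; for a genuine measure it must be PSD
M = np.array([[m[i + j] for j in range(d + 1)] for i in range(d + 1)])
eigenvalues = np.linalg.eigvalsh(M)
```

The relaxation optimizes over moment sequences subject to such semidefiniteness constraints rather than over measures directly.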
The dual to (weak OCP) has an informative interpretation that serves as
motivation for the discretization strategy presented in the next section. The
dual reads
$\displaystyle\sup_{w}\quad$ $\displaystyle\int_{X}w(0,x)\,d\nu_{0}(x)$
(subHJB) s.t. $\displaystyle\mathcal{A}w(t,x,u)+\ell(x,u)\geq
0,\quad\forall(t,x,u)\in[0,T]\times X\times U,$ (3) $\displaystyle
w(T,x)\leq\phi(x),\quad\forall x\in X,$ (4) $\displaystyle
w\in\mathcal{C}^{1,2}([0,T]\times X),$
where the decision variable $w$ can be interpreted as a smooth underestimator
of the value function associated with the control problem (OCP) [BB96].
###### Corollary 1.
Let $w$ be feasible for (subHJB). Then, $w$ underestimates the value function
$\displaystyle V(t,z)=\inf_{u(\cdot)}\quad$
$\displaystyle\mathbb{E}_{\delta_{z}}\left[\int_{t}^{T}\ell(x(s),u(s))\,ds+\phi(x(T))\right]$
(5) s.t. $\displaystyle x(s)\text{ satisfies }\eqref{eq:SDE}\text{ on
}[t,T]\text{ with }x(t)\sim\delta_{z},$ $\displaystyle x(s)\in X,$
$\displaystyle u(s)\in U.$
on $[0,T]\times X$.
###### Proof.
Let $z\in X$ and $0\leq t\leq T$ and fix any admissible control policy $u$,
i.e., a control policy such that any path of the stochastic process
$(x(s),u(s))$ remains in $X\times U$ on $[t,T]$. Then, Constraints (3) and (4)
imply that
$\displaystyle\mathbb{E}_{\delta_{z}}\left[-\int_{t}^{T}\mathcal{A}w(s,x(s),u(s))\,ds+w(T,x(T))\right]\leq\mathbb{E}_{\delta_{z}}\left[\int_{t}^{T}\ell(x(s),u(s))\,ds+\phi(x(T))\right].$
The left-hand side coincides with $w(t,z)$ by Dynkin’s formula. Since the
inequality holds for any admissible control policy, the result follows by
minimizing the right-hand-side over all admissible control policies. ∎
Analogous to the primal counterpart, the moment-sum-of-squares hierarchy gives
rise to a sequence of increasingly tight SDP restrictions of (subHJB) by
restricting $w$ to be a polynomial of degree at most $d$ and imposing the non-
negativity constraints by means of sufficient sum-of-squares conditions
[Las01, Par00]. The resultant restrictions are loosened, and the obtained
bounds thereby tightened, by increasing the degree $d$ of $w$.
In the following, we will use the dual perspective to devise a generalization
of (subHJB) which, like its predecessor, enables computation of valid bounds
for (OCP) via the moment-sum-of-squares hierarchy. From that, we will then
construct the notion of local occupation measures.
## 3 The Dual Perspective revisited: Piecewise Polynomial Approximation
In order to construct better approximations to the value function in the
spirit of (subHJB), we here propose to generalize problem (subHJB) by seeking
a piecewise smooth underapproximation of the value function over the problem
domain $[0,T]\times X$. To that end, we consider a discretization
$0=t_{0}<t_{1}<\dots<t_{n_{T}}=T$ of the time domain and a partition
$X_{1},X_{2},\dots,X_{n_{X}}$ of the state space $X$, i.e.,
$X=\cup_{k=1}^{n_{X}}X_{k}$ and $X_{i}\cap X_{j}=\emptyset$ if $i\neq j$. This
allows us to form a partition of the whole problem domain as illustrated in
Figure 2 and we can formulate the following natural generalization of
(subHJB):
$\displaystyle\sup_{w_{i,k}}\quad$
$\displaystyle\sum_{k=1}^{n_{X}}\int_{X_{k}}w_{1,k}(0,x)\,d\nu_{0}(x)$
(discretized subHJB) s.t.
$\displaystyle\mathcal{A}w_{i,k}(t,x,u)+\ell(x,u)\geq
0,\quad\forall(t,x,u)\in[t_{i-1},t_{i}]\times X_{k}\times U,$ (6)
$\displaystyle w_{i,k}(t_{i-1},x)-w_{i-1,k}(t_{i-1},x)\geq 0,\quad\forall x\in
X_{k},$ (7) $\displaystyle
w_{i,j}(t,x)-w_{i,k}(t,x)=0,\quad\forall(t,x)\in[t_{i-1},t_{i}]\times(\partial
X_{j}\cap\partial X_{k}),$ (8) $\displaystyle
w_{n_{T},k}(T,x)\leq\phi(x),\quad\forall x\in X_{k},$ (9) $\displaystyle
w_{i,k}\in\mathcal{C}^{1,2}([0,T]\times X_{k}).$ (10)
Figure 2: Discretization of space-time domain
The constraints in problem (discretized subHJB) are chosen to ensure that a
valid underestimator of the value function can be constructed from the
individual pieces $w_{i,k}$. This is formalized in the following Corollary.
###### Corollary 2.
Let $\\{w_{i,k}:1\leq i\leq n_{T},1\leq k\leq n_{X}\\}$ be feasible for
(discretized subHJB) and define
$\displaystyle w(t,x)=w_{i,k}(t,x)\text{ with
}i=\sup\\{j:t\in[t_{j-1},t_{j}]\\}\text{ and }k\text{ such that }x\in X_{k}.$
(11)
Then, $w$ is an underestimator of the value function $V$ as defined in (5).
###### Proof Sketch.
The idea is to split the paths of the process $(t,x(t),u(t))$ up into pieces
in which it is confined to a subdomain $[t_{i-1},t_{i}]\times X_{k}\times U$.
For each of those pieces the same argument as in Corollary 1 applies to show
that $w_{i,k}$ underestimates the value function on the respective subdomain.
Additionally, Constraints (7) and (8) ensure conservatism when the process
crosses between different time intervals and subdomains of the state space,
respectively. Specifically, Constraint (7) enforces that $w(t,x(t))$ can at
most increase when traced across the boundary between the intervals
$[t_{i-1},t_{i}]$ and $[t_{i},t_{i+1}]$ ensuring that $w(t,z)$ cannot cross
$V(t,z)$ at such time points. Likewise, Constraint (8) ensures that $w(t,z)$
cannot cross $V(t,z)$ when the process crosses between spatial subdomains by
imposing the stronger continuity condition. The formal argument is presented
in Appendix A. ∎
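The selector (11) can be sketched directly in code; the snippet below is a hypothetical illustration with $n_{T}=n_{X}=2$, one spatial dimension, and constant pieces, resolving time breakpoints to the later interval as the supremum in (11) prescribes:

```python
import bisect

# Hypothetical discretization: time breakpoints and state-space cell edges
t_grid = [0.0, 0.5, 1.0]            # t_0 < t_1 < t_2 = T  (n_T = 2)
x_edges = [0.0, 1.0, 2.0]           # X_1 = [0, 1), X_2 = [1, 2]  (n_X = 2)

# Constant pieces w_{i,k}, purely for illustration (any feasible pieces work)
pieces = {(1, 1): -1.0, (1, 2): -2.0, (2, 1): 0.5, (2, 2): 0.0}

def w(t, x):
    """Evaluate the piecewise underestimator per (11): at a breakpoint t_i
    the later interval's piece is chosen, i.e. i = sup{j : t in [t_{j-1}, t_j]}."""
    i = min(bisect.bisect_right(t_grid, t), len(t_grid) - 1)   # 1-based interval
    k = min(bisect.bisect_right(x_edges, x), len(x_edges) - 1)  # 1-based cell
    return pieces[(i, k)]
```

Because Constraint (7) makes the later piece at least as large at shared breakpoints, this selection rule preserves the underestimation property established in Corollary 2.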
###### Remark 1.
Contrasting the monotonicity condition (7) enforced between subsequent time
intervals, the stronger continuity requirement (8) at the boundary between
subdomains of the state space is necessary as the process may cross the
boundary in any direction due to stochastic fluctuations. In the case of a
deterministic process ($g=0$), this condition may be further relaxed, as we only
require that $w(t,x(t))$ can at most increase along any trajectory of the
system when crossing the boundary between two subdomains. [CKH21] show in a
related argument that in this case it suffices to impose that
$\displaystyle(w_{i,k}(t,x)-w_{i,j}(t,x))n_{j,k}^{\top}f(x,u)\geq
0,\quad\forall(t,x,u)\in[t_{i-1},t_{i}]\times(\partial X_{j}\cap\partial
X_{k})\times U$
where $n_{j,k}$ denotes the normal vector of the boundary between $X_{j}$ and
$X_{k}$ pointing from $X_{j}$ to $X_{k}$.
###### Remark 2.
In the discussion of this section, we ignored that some elements of the
partition $X_{1},\dots,X_{n_{X}}$ cannot be closed. We wish to emphasize,
however, that this does not complicate the construction of valid moment-sum-
of-squares restrictions of (discretized subHJB) as the non-negativity
conditions can equivalently be enforced on the closure of the respective
subdomains.
## 4 The Primal Perspective revisited: Local Occupation Measures
In this section, we briefly discuss the primal side of the construction
presented in the previous section which gives rise to the localized notion of
occupation measures. The primal problem corresponding to (discretized subHJB) reads
$\displaystyle\inf_{\nu_{t_{i},k},\xi_{i,k},\pi_{i,j,k}}\quad$
$\displaystyle\sum_{i=1}^{n_{T}}\sum_{k=1}^{n_{X}}\int_{[t_{i-1},t_{i}]\times
X_{k}\times
U}\ell(x,u)\,d\xi_{i,k}(t,x,u)+\sum_{k=1}^{n_{X}}\int_{X_{k}}\phi(x)\,d\nu_{t_{n_{T}},k}(x)$
(discretized weak OCP) s.t.
$\displaystyle\int_{X_{k}}w(t_{i},x)\,d\nu_{t_{i},k}(x)-\int_{X_{k}}w(t_{i-1},x)\,d\nu_{t_{i-1},k}(x)=$
$\displaystyle\hskip 28.45274pt\int_{[t_{i-1},t_{i}]\times X_{k}\times
U}\mathcal{A}w(t,x,u)\,d\xi_{i,k}(t,x,u)+\sum_{\stackrel{{\scriptstyle
j=1}}{{j\neq k}}}^{n_{X}}\int_{(t_{i-1},t_{i})\times(\partial
X_{k}\cap\partial X_{j})}\,w(t,x)\,d\pi_{i,j,k}(t,x),$ $\displaystyle\hskip
256.0748pt\forall w\in\mathcal{C}^{1,2}([t_{i-1},t_{i}]\times X_{k}),$
$\displaystyle\nu_{t_{i},k}\in\mathcal{M}_{+}(X_{k}),$
$\displaystyle\xi_{i,k}\in\mathcal{M}_{+}([t_{i-1},t_{i}]\times X_{k}\times
U),$
$\displaystyle\pi_{i,j,k}=-\pi_{i,k,j}\in\mathcal{M}((t_{i-1},t_{i})\times(\partial
X_{j}\cap\partial X_{k}))$
where $\mathcal{M}(Y)$ refers to the set of signed measures supported on $Y$.
This problem reveals that discretization of the problem domain translates from
this perspective into a “localized” notion of the occupation measures.
Specifically, restriction of the expected state-action occupation measures
$\xi$ introduced in Section 2 to a subdomain $[t_{i-1},t_{i}]\times
X_{k}\times U$ from the partition (cf. Figure 3) yields the local state-action
occupation measure $\xi_{i,k}$:
$\displaystyle\xi_{i,k}(B_{T}\times B_{X}\times
B_{U})=\xi((B_{T}\cap[t_{i-1},t_{i}])\times(B_{X}\cap X_{k})\times B_{U}).$
Figure 3: Support of local occupation measure $\xi_{i,k}$
Likewise, the local instantaneous occupation measures with respect to
different time points $t_{i}$ and subdomains $X_{k}$ are given by the
restriction of $\nu_{t_{i}}$ to $X_{k}$:
$\displaystyle\nu_{t_{i},k}(B)=\nu_{t_{i}}(B\cap X_{k}).$
The measures $\pi_{i,j,k}$ in the formulation of (discretized weak OCP)
account for transitions of the process from the spatial subdomain $X_{j}$
to $X_{k}$ in the time interval $[t_{i-1},t_{i}]$. Formally, $\pi_{i,j,k}$ can
be defined for any Borel subsets $B_{T}\subset(t_{i-1},t_{i})$ and
$B_{\partial X}\subset\partial X_{j}\cap\partial X_{k}$ as
$\displaystyle\pi_{i,j,k}(B_{T}\times
B_{X})=\mathbb{E}_{\nu_{0}}\left[\sum_{n\in\mathbb{N}}(-1)^{n+1}\mathds{1}_{B_{T}}(\tau^{j}_{n})\mathds{1}_{B_{\partial
X}}(x(\tau^{j}_{n}))\right]$
where the $\tau^{j}_{n}$ denote the times at which the process enters and
leaves the subdomain $X_{j}$, i.e.,
$\displaystyle\tau^{j}_{n}=\begin{cases}t_{i-1},&n=0\\\
\inf\left\\{\tau^{j}_{n-1}<t<t_{i}:x(t)\in\begin{cases}X\setminus
X_{j},&n\text{ odd}\\\ X_{j},&n\text{ even}\end{cases}\right\\},&n\geq
1\end{cases}.$
Note that this definition indeed gives rise to a measure that satisfies the
symmetry condition $\pi_{i,j,k}=-\pi_{i,k,j}$. Lastly, we wish to emphasize
that it is easily verified that the relaxations derived from (discretized weak
OCP) via the moment-sum-of-squares hierarchy are at least as tight as those
derived from its traditional counterpart (weak OCP). To see this, note that
the constraints in (weak OCP) are implied by the constraints in (discretized
weak OCP) as $\xi=\sum_{k=1}^{n_{X}}\sum_{i=1}^{n_{T}}\xi_{i,k}$,
$\sum_{k=1}^{n_{X}}\sum_{j=1,j\neq k}^{n_{X}}\pi_{i,j,k}=0$, and
$\sum_{k=1}^{n_{X}}\nu_{t_{n_{T}},k}=\nu_{T}$.
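The identity $\xi=\sum_{k=1}^{n_{X}}\sum_{i=1}^{n_{T}}\xi_{i,k}$ has a simple empirical counterpart: accumulating the time a sampled path spends in each cell of the spatiotemporal partition recovers the total mass $\xi([0,T]\times X\times U)=T$. A minimal sketch (one illustrative Brownian path and a hypothetical $2\times 2$ partition, not tied to the examples in the text):

```python
import random

random.seed(0)

T, n_steps = 1.0, 1000
dt = T / n_steps
t_break, x_break = 0.5, 0.0          # hypothetical single split in t and in x

# Empirical local occupation measures xi[(i, k)] along one sampled path
xi = {(i, k): 0.0 for i in (1, 2) for k in (1, 2)}
x = 0.0
for step in range(n_steps):
    t = step * dt
    i = 1 if t < t_break else 2      # time interval containing t
    k = 1 if x < x_break else 2      # spatial cell containing x(t)
    xi[(i, k)] += dt                 # the path spends dt in cell (i, k)
    x += dt**0.5 * random.gauss(0.0, 1.0)   # Euler step of a Brownian path

total_mass = sum(xi.values())        # equals xi([0,T] x X x U) = T
```

Restricting the global occupation measure to the partition thus loses no mass; the local measures simply re-sort it by subdomain.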
## 5 Moment-Sum-of-Squares Approximations: Structure & Scalability
As already mentioned in Section 2, the construction of tractable relaxations
of the problems (subHJB) or (weak OCP) relies on the restriction to
optimization over polynomials or the relaxation to optimization over the
finite sequence of moments up to some degree $d$, respectively. Traditionally,
increasing this approximation order $d$ has been the only mechanism used to
loosen the restriction or strengthen the relaxation, respectively, and thus
ultimately improve the obtained bounds to the desired level. The main
motivation behind the proposed discretization approach lies in circumventing
the combinatorial scaling associated with this tightening mechanism. With the
proposed notion of local occupation measures, refinement of the domain
discretization may be used as an additional bound tightening mechanism. Table
1 summarizes the scaling of the critical problem components of the moment-sum-
of-squares SDP approximations of (discretized subHJB) and (weak OCP) with
respect to the different tightening mechanisms of increasing $n_{X},n_{T}$
(refining the discretization) or $d$ (increasing the approximation order). The
linear scaling of the SDP sizes with respect to $n_{X}$ and $n_{T}$ underlines
the flexibility added by the proposed framework. While a naive refinement of
the discretization causes $n_{X}$ itself to scale combinatorially with the
state space dimension, the framework opens the door to tailored refinement
strategies with improved scaling, whose computational rewards materialize directly
in the form of smaller SDPs. This finer control over the tightening process
further enables exploitation of problem specific insights such as the
knowledge of critical parts of the state space to be resolved more finely than
others to construct tighter relaxations. This flexibility is in stark contrast
to increasing the approximation order $d$ as translating such insights into
specific moments to be constrained or monomial terms to be considered in the
value function approximator is significantly less straightforward as
highlighted by several works on sparse moment-sum-of-squares hierarchies
[SK20, Wan+21, ZFP19]. In this context, it is further worth emphasizing that
not only the linear scaling with respect to $n_{T}$ and $n_{X}$ is desirable
but in particular the invariance of the linear matrix inequality (LMI)
dimension promotes scalability [AM14, ADH17, AH19, AM19].
| | # variables | # LMIs | dimension of LMIs |
|---|---|---|---|
| $d$ | $O\left({n+d\choose d}\right)$ | $O(1)$ | $O\left({n+\left\lfloor d/2\right\rfloor\choose\left\lfloor d/2\right\rfloor}\right)$ |
| $n_{T}$ | $O(n_{T})$ | $O(n_{T})$ | $O(1)$ |
| $n_{X}$ | $O(n_{X})$ | $O(n_{X})$ | $O(1)$ |
Table 1: Scaling of moment-sum-of-squares SDP approximations with respect to
different tightening mechanisms
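The entries of Table 1 can be made concrete: an $n$-variate measure has ${n+d\choose d}$ moments up to degree $d$, and the LMI blocks have dimension ${n+\lfloor d/2\rfloor\choose\lfloor d/2\rfloor}$, both of which are untouched by refining the partition. A quick computation (with $n=3$, matching one time and two state dimensions as in the later case study):

```python
from math import comb

n = 3                                   # number of variables (t, x1, x2)

# Number of moments up to degree d: grows combinatorially in d
moments = {d: comb(n + d, d) for d in (2, 4, 6, 8, 10, 12, 14)}

# LMI block dimension comb(n + d//2, d//2): fixed per cell, so refining the
# partition only multiplies the number of such blocks linearly
lmi_dim = {d: comb(n + d // 2, d // 2) for d in (2, 4, 6)}
```

For instance, moving from $d=2$ to $d=14$ multiplies the moment count by a factor of 68, whereas doubling $n_{T}$ or $n_{X}$ merely doubles the number of fixed-size blocks.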
Aside from the favorable scaling properties, the problems (discretized subHJB)
and (weak OCP) give rise to highly structured SDPs. Specifically, all
constraints involve only variables corresponding to adjacent subdomains
($\partial X_{j}\cap\partial X_{k}\neq\emptyset$). As a consequence, the
structure of the constraints is analogous to those arising from discretized
PDEs which could potentially be exploited in tailored, distributed
optimization algorithms.
## 6 Case Study: Population Control
### 6.1 Control Problem
We study the proposed local occupation measure framework and its computational
implications with an example problem from the field of population control. The
problem is adjusted from [SLD09] where it has been studied in a discrete time,
infinite horizon setting. The objective is to control the population size of a
primary predator and its prey in an ecosystem featuring the prey species,
primary predator species as well as a secondary predator species with an
erratically changing population size. The interactions between the primary
predator and prey population are assumed to be described by a standard Lotka-
Volterra model, while the effect of presence/absence of the secondary predator
species is modeled by a Brownian motion. The population sizes are assumed to
be controlled via hunting of the primary predator species. This model gives
rise to the diffusion process,
$\displaystyle
dx(t)=\begin{bmatrix}\gamma_{1}x_{1}(t)-\gamma_{2}x_{1}(t)x_{2}(t)\\\
\gamma_{4}x_{1}(t)x_{2}(t)-\gamma_{3}x_{2}(t)-x_{2}(t)u(t)\end{bmatrix}\,dt+\begin{bmatrix}\gamma_{5}x_{1}(t)\\\
0\end{bmatrix}\,db(t),$
where $x_{1}$, $x_{2}$, and $u$ refer to the prey species, predator species
and hunting effort, respectively. The model parameters
$\gamma=(1,2,1,2,0.025)$ and initial condition $x(0)\sim\delta_{(1,0.25)}$ are
assumed to be known deterministically. Moreover, we assume that the admissible
hunting effort is confined to $U=[0,1]$. Under these assumptions, it is easily
verified that the process $x(t)$ is by construction confined to the non-
negative orthant $X=\\{x:x_{1},x_{2}\geq 0\\}$ for any admissible control
policy. For the control problem we further choose a time horizon of $T=10$ and
stage cost
$\displaystyle\ell(x,u)=(x_{1}-0.75)^{2}+\frac{(x_{2}-0.5)^{2}}{10}+\frac{(u-0.5)^{2}}{10}$
penalizing variations from the target population sizes.
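For intuition about the model (this sketch is not the paper's implementation, which uses MarkovBounds.jl; the fixed hunting effort $u=0.5$ is an assumption made here purely for illustration), a minimal Euler–Maruyama simulation accumulates the stage cost along one sample path:

```python
import random

random.seed(42)

gamma = (1.0, 2.0, 1.0, 2.0, 0.025)      # model parameters from the text
T, n_steps = 10.0, 10_000
dt = T / n_steps

x1, x2 = 1.0, 0.25                        # initial condition x(0) = (1, 0.25)
u = 0.5                                   # fixed admissible hunting effort (assumption)
cost = 0.0

for _ in range(n_steps):
    # accumulate the stage cost l(x, u) along the path
    cost += dt * ((x1 - 0.75)**2 + (x2 - 0.5)**2 / 10 + (u - 0.5)**2 / 10)
    db = random.gauss(0.0, dt**0.5)       # Brownian increment over [t, t + dt]
    dx1 = (gamma[0]*x1 - gamma[1]*x1*x2) * dt + gamma[4]*x1 * db
    dx2 = (gamma[3]*x1*x2 - gamma[2]*x2 - x2*u) * dt
    x1, x2 = x1 + dx1, x2 + dx2
```

Averaging such accumulated costs over many sample paths is exactly how the upper bound $UB$ is estimated in Section 6.3.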
### 6.2 Discretization of Problem Domain
In order to investigate the effect of different discretizations on bound
quality and computational cost, we utilize a simple grid partition of the
state space $X$ as parameterized by the number of grid cells $n_{1}$ and
$n_{2}$ in the $x_{1}$ and $x_{2}$ direction, respectively. As $X$ is the non-
negative orthant in our example, and hence semi-infinite, we choose to
discretize the compact interval box $[0,1.5]\times[0,1.5]$ with a uniform grid
of $(n_{1}-1)\times(n_{2}-1)$ cells and cover the remainder of $X$ with
appropriately chosen semi-infinite interval boxes. This choice is motivated by
the fact that the uncontrolled system resides with high probability in this
part of the state space. The resultant grid is illustrated in Figure 4. Note
further that any cell of the described grid is by construction a basic closed
semialgebraic set. As a consequence, sufficient sum-of-squares conditions for
non-negativity of a polynomial on such a grid cell are provided by standard
sum-of-squares arguments.
Figure 4: Spatial discretization of $X=\mathbb{R}^{n}_{+}$ using $n_{1}\times
n_{2}$ interval boxes. The spatial discretization is chosen as uniform on
$[0,1.5]\times[0,1.5]$.
The temporal domain is discretized uniformly into $n_{T}$ subintervals, i.e.,
$t_{i}=i\Delta t$ with $\Delta t=\frac{T}{n_{T}}$. Throughout, we refer to a
specific discretization by its associated triple
$(n_{1},n_{2},n_{T})$. The computational experiments are conducted for all
discretizations corresponding to the triples $\\{(n_{1},n_{2},n_{T}):1\leq
n_{1},n_{2}\leq 5,1\leq n_{T}\leq 10\\}$.
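A sketch of the described grid construction (a hypothetical helper assuming $n_{1},n_{2}\geq 2$; for $n_{1}=n_{2}=1$ the single cell is simply the whole orthant), with `math.inf` marking the unbounded edges of the semi-infinite boxes:

```python
import math

def grid_cells(n1, n2, box=1.5):
    """Return the n1*n2 interval-box cells partitioning the non-negative
    orthant: a uniform (n1-1) x (n2-1) grid on [0, box]^2 plus semi-infinite
    boxes covering the remainder. Assumes n1, n2 >= 2."""
    def edges(n):
        # n uniform edges on [0, box], then an unbounded final edge
        return [k * box / (n - 1) for k in range(n)] + [math.inf]
    e1, e2 = edges(n1), edges(n2)
    return [((e1[i], e1[i + 1]), (e2[j], e2[j + 1]))
            for i in range(n1) for j in range(n2)]

cells = grid_cells(4, 4)   # the (4, 4, n_T) discretization used for u*
```

Each cell is an interval box, hence a basic closed semialgebraic set, so standard sum-of-squares certificates apply cell by cell.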
### 6.3 Evaluation of Bound Quality
In order to assess the tightness of the bounds obtained with different
approximation orders and discretizations, we compare the relative optimality
gap defined as
$\displaystyle\text{rel. optimality gap }=\frac{UB-LB}{UB}$
where $LB$ refers to the lower bound furnished by a concrete instance of the
moment-sum-of-squares restriction of (discretized subHJB), and $UB$ to the
control cost associated with the best known admissible control policy. In
order to determine $UB$, we constructed an admissible control policy from the
approximate value function $w^{*}$ obtained as solution of the moment-sum-of-
squares restriction of (discretized subHJB) with approximation order $d=4$ on
the grid described by $n_{1}=n_{2}=4$ and $n_{T}=10$. Specifically, we
employed the following control law mimicking a one-step model-predictive
controller
$\displaystyle u^{*}(t)\in\operatorname*{arg\,min}_{u\in
dU}\mathcal{A}w^{*}(t,x(t),u)+\ell(x(t),u)$
where $dU=\\{0,0.05,0.1,\dots,1\\}\subset U$. The best known control cost
$\displaystyle
UB=\mathbb{E}_{\nu_{0}}\left[\int_{[0,T]}\ell(x(t),u^{*}(t))\,dt\right]$
was then estimated by the ensemble average of $10,000$ sample trajectories. A
comparison of the sample trajectories for the controlled and uncontrolled
process is shown in Figure 5.
Figure 5: Comparison between uncontrolled system and system controlled via
best-known control policy $u^{*}$
### 6.4 Computational Details
All computational experiments presented in this section were conducted on a
MacBook Pro with Apple M1 Pro with 16GB unified memory. All sum-of-squares
programs and the associated SDPs were constructed using the custom-developed
package MarkovBounds.jl (see https://github.com/FHoltorf/MarkovBounds.jl),
built on top of SumOfSquares.jl [Wei+19], and solved with Mosek v9.0.87.
### 6.5 Results
We put special emphasis on investigating the effect of refining the
discretization of the problem domain on bound quality and computational cost.
Focusing on the effect on computational cost in isolation first, Figure 6
indicates that the computational cost for the solution of moment-sum-of-
squares programs generated by the restriction of (discretized subHJB) to
polynomials of degree at most $d$ scales approximately linearly with the
number of cells $n_{1}\times n_{2}\times n_{T}$ of the spatiotemporal
partition. On the other hand, Figure 6 also shows that increasing the
approximation order $d$ results in a much more rapid increase in computational
cost. These results are perfectly in line with the discussion in Section 5.
Figure 6: Computational cost scales approximately linearly with number of grid
cells for fixed approximation order
Figure 7 further shows the trade-off between bound quality and the associated
computational cost for different approximation orders and discretization
grids. First, it is worth noting that the proposed discretization strategy
enables the computation of overall tighter bounds with an approximation order
of only up to $d=6$ when compared to the traditional formulation with an
approximation order of up to $d=14$. It is further worth emphasizing that
beyond $d=14$, numerical issues prevented an accurate solution of the SDPs
arising from the traditional formulation, so that no tighter bounds could be
obtained this way. Furthermore, the figure also indicates that across almost
the entire accuracy range, a significant speed up ($>5\times$) can be achieved
by using the proposed discretization strategy instead of only increasing the
approximation order as done traditionally. Lastly, the results clearly show
that a careful choice of discretization is crucial to achieve good
performance. Figure 7(b) indicates that for this example particularly good
performance is achieved when only the time domain is discretized; additionally
discretizing the spatial domain becomes an effective means of bound tightening
only after the time domain has already been resolved sufficiently finely.
(a) spatial & temporal discretization
(b) temporal discretization highlighted ($n_{1}=n_{2}=1$)
Figure 7: Trade-off between computational cost and bound quality for different
approximation orders $d$ and domain discretizations $(n_{1},n_{2},n_{T})$. The
red markers correspond to moment-sum-of-squares restrictions of the labeled
approximation order for the traditional formulation (subHJB) without any
discretization.
## 7 Extensions
Before we close, we briefly present two direct extensions of the described
local occupation measure framework that showcase its versatility.
### 7.1 Discounted Infinite Horizon Problems
Consider the following discounted infinite horizon stochastic control problem
with discount factor $\rho>0$:
$\displaystyle\inf_{u(\cdot)}\quad$
$\displaystyle\mathbb{E}_{\nu_{0}}\left[\int_{[0,\infty)}e^{-\rho
t}\ell(x(t),u(t))\,dt\right]$ s.t. $\displaystyle x(t)\text{ satisfies
\eqref{eq:SDE} on }[0,\infty)$ $\displaystyle x(t)\in X$ $\displaystyle
u(t)\in U.$
The construction of a weak formulation of this problem akin to (weak OCP) can be
done in full analogy to Section 2. To that end, note that the infinitesimal
generator $\mathcal{A}$ maps functions of the form $\hat{w}(t,x)=e^{-\rho
t}w(t,x)$ to functions of the same form, i.e.,
$\displaystyle\mathcal{A}\hat{w}(t,x,u)=e^{-\rho t}(\mathcal{A}w(t,x,u)-\rho
w(t,x)).$
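This identity can be confirmed symbolically; the sketch below (with an illustrative one-dimensional polynomial drift and diffusion, chosen here purely for demonstration) checks it with sympy:

```python
import sympy as sp

t, x, u, rho = sp.symbols("t x u rho")

# Illustrative scalar diffusion with polynomial data (hypothetical)
f, gg = x * u, x**2

def A(w):
    """Infinitesimal generator of the scalar diffusion dx = f dt + g db."""
    return sp.diff(w, t) + f * sp.diff(w, x) \
        + sp.Rational(1, 2) * gg * sp.diff(w, x, 2)

w = t * x**2 + x                     # arbitrary smooth observable

# Check A(e^{-rho t} w) = e^{-rho t} (A w - rho w)
lhs = A(sp.exp(-rho * t) * w)
rhs = sp.exp(-rho * t) * (A(w) - rho * w)
difference = sp.expand(lhs - rhs)
```

The identity holds for any smooth $w$, since the exponential factor only interacts with the time derivative in $\mathcal{A}$.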
By analogous arguments as in Section 2, it therefore follows that any function
$w\in C^{1,2}([0,\infty)\times X)$ that satisfies
$\displaystyle\mathcal{A}w(t,x,u)-\rho w(t,x)+\ell(x,u)\geq
0,\quad\forall(t,x,u)\in[0,\infty)\times X\times U$
furnishes a valid subsolution $\hat{w}(t,x)=e^{-\rho t}w(t,x)$ of the value
function associated with the infinite horizon problem. Since the proposed
discretization approach relies neither on boundedness of the state space
nor of the control horizon to establish valid bounds, it follows that it
readily extends to the infinite horizon setting. Moreover, additional
assumptions or insights on the structure of the value function can be easily
incorporated via appropriate constraints. One example for such additional
insights could be the invariance of the value function with respect to time,
which is well-known for many infinite horizon control problems [OS07].
### 7.2 Jump Processes with Discrete State Space
Many fields and applications ranging from chemical physics to queuing theory
call for models that describe stochastic transitions between discrete states.
In those cases, jump processes are a common modeling choice [Gil92, Bre03]. In
the following, we will show that the proposed local occupation measure
framework extends with only minor modifications to stochastic optimal control
of a large class of such jump processes. Specifically, we will consider
controlled, continuous-time jump processes driven by $m$ independent Poisson
counters $n_{i}(t)$ with associated propensities $a_{i}(x(t),u(t))$:
$\displaystyle dx(t)=\sum_{i=1}^{m}\left(h_{i}(x(t),u(t))-x(t)\right)dn_{i}(t).$ (JDE)
We will again assume that the process can be fully characterized by
polynomials, but we now additionally impose the assumption that the state
space of the process is discrete.
###### Assumption 2.
The jumps $h_{i}:X\times U\to X$, propensities $a_{i}:X\times
U\to\mathbb{R}_{+}$, stage cost $\ell:X\times U\to\mathbb{R}$, and terminal cost
$\phi:X\to\mathbb{R}$ are polynomial functions jointly in both
arguments. The state space $X$ is a discrete, countable set, and the set of
admissible control inputs $U$ is a basic closed semialgebraic set.
The local occupation measure framework outlined previously for diffusion
processes can be effectively extended for computing lower bounds on stochastic
optimal control problems with such jump processes embedded. Specifically, we
consider stochastic optimal control problems of the form
$\displaystyle\inf_{u(\cdot)}\quad$
$\displaystyle\mathbb{E}_{\nu_{0}}\left[\int_{[0,T]}\ell(x(t),u(t))\,dt+\phi(x(T))\right]$
(jump OCP) s.t. $\displaystyle x(t)\text{ satisfies \eqref{eq:JDE} on
}[0,T]\text{ with }x(0)\sim\nu_{0},$ $\displaystyle x(t)\in X,$ $\displaystyle
u(t)\in U$
where $\ell:X\times U\to\mathbb{R}$ and $\phi:X\to\mathbb{R}$ are again
polynomials. The extended infinitesimal generator
$\mathcal{A}:\mathcal{C}^{1,0}([0,T]\times X)\to\mathcal{C}([0,T]\times
X\times U)$ of a process described by (JDE) is given by
$\displaystyle\mathcal{A}:w\mapsto\frac{\partial w}{\partial
t}(t,x)+\sum_{i=1}^{m}a_{i}(x,u)\left(w(t,h_{i}(x,u))-w(t,x)\right).$
Note that under Assumption 2, $\mathcal{A}$ again maps polynomials to
polynomials providing the basis for application of the moment-sum-of-squares
hierarchy for construction of tractable relaxations of (jump OCP). The weak
form of (jump OCP) and its dual take a form essentially identical to (weak
OCP) and (subHJB) where (subHJB) again can be tied to seeking the maximal
subsolution to the value function associated with (jump OCP). For the sake of
brevity, we discuss these details in Appendix B. Unlike in Section 2, however,
$X$ is now basic closed semialgebraic if and only if it is finite. Thus,
the moment-sum-of-squares hierarchy provides finite SDP relaxations of the
weak form counterpart of (jump OCP) only in the case of a finite state space
$X$. Moreover, these relaxations may in fact not even be practically tractable
if $X$ is finite but has a large cardinality. If $X$ is infinite (or of very
large cardinality), tractable moment-sum-of-squares relaxations can only be
constructed at the price of introducing additional conservatism. From the
primal perspective, this additional conservatism is introduced by optimizing
over measures supported on a basic closed semialgebraic overapproximation of
$X$ in (weak OCP). This overapproximation is usually chosen to be a polyhedral
or box superset of $X$ due to simple computational treatment [DB18, Dow19].
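As a quick sanity check of the claim that $\mathcal{A}$ maps polynomials to polynomials under Assumption 2, the sketch below (illustrative, not from the paper: the birth-death jumps, rate constants, and the test polynomial $w$ are hypothetical choices) evaluates the extended generator $(\mathcal{A}w)(t,x,u)=\partial_{t}w+\sum_{i}a_{i}(x,u)(w(t,h_{i}(x,u))-w(t,x))$ and compares it to the hand-expanded polynomial it should equal.

```python
# Extended generator of a jump process, evaluated for a birth-death process
# with polynomial jumps and propensities (all choices illustrative).
# For w(t,x) = t x^2, h_1 = x+1 with a_1 = 2+u, h_2 = x-1 with a_2 = 0.1 x,
# the closed form is (A w)(t,x,u) = x^2 + (2+u) t (2x+1) + 0.1 x t (1-2x),
# which is again a polynomial in (t, x, u), as Assumption 2 guarantees.
def generator(w, dwdt, jumps, props):
    def Aw(t, x, u):
        return dwdt(t, x) + sum(
            a(x, u) * (w(t, h(x, u)) - w(t, x)) for h, a in zip(jumps, props)
        )
    return Aw

w = lambda t, x: t * x**2
dwdt = lambda t, x: x**2                                  # exact time derivative of w
jumps = [lambda x, u: x + 1, lambda x, u: x - 1]          # birth, death
props = [lambda x, u: 2 + u, lambda x, u: 0.1 * x]        # polynomial propensities

Aw = generator(w, dwdt, jumps, props)
closed_form = lambda t, x, u: x**2 + (2 + u) * t * (2 * x + 1) + 0.1 * x * t * (1 - 2 * x)

print(all(abs(Aw(t, x, u) - closed_form(t, x, u)) < 1e-9
          for t in (0.0, 0.5, 2.0) for x in (0, 1, 5) for u in (0.0, 1.0)))  # True
```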
The framework of local occupation measures provides a way to reduce the
conservatism introduced by such a basic closed semialgebraic overapproximation
of $X$, in particular in the case of polyhedral overapproximations. To
appreciate this fact, consider a partition $X_{1},\dots,X_{n_{X}}$ of the
state space $X$ and a discretization $t_{0}<t_{1}<\dots<t_{n_{T}}=T$ of the
control horizon. Given such a partition, the dual of the weak form stochastic
optimal control problem takes the form
$\displaystyle\sup_{w_{i,k}}\quad$
$\displaystyle\sum_{k=1}^{n_{X}}\int_{X_{k}}w_{1,k}(0,x)\,d\nu_{0}(x)$ (jump
discHJB) s.t. $\displaystyle\mathcal{A}w_{i,k}(t,x,u)+\ell(x,u)\geq
0,\quad\forall(t,x,u)\in[t_{i-1},t_{i}]\times X_{k}\times U,$ $\displaystyle
w_{i,k}(t_{i-1},x)-w_{i-1,k}(t_{i-1},x)\geq 0,\quad\forall x\in X_{k},$
$\displaystyle
w_{i,j}(t,x)-w_{i,k}(t,x)=0,\quad\forall(t,x)\in[t_{i-1},t_{i}]\times
N_{X_{k}}(X_{j}),$ $\displaystyle w_{n_{T},k}(T,x)\leq\phi(x),\quad\forall
x\in X_{k}$
where $N_{X_{k}}(X_{j})$ denotes the “neighborhood” of $X_{k}$ in $X_{j}$
defined as all states in $X_{j}$ which have a non-zero transition probability
into $X_{k}$; formally,
$\displaystyle N_{X_{k}}(X_{j})=\\{x\in X_{j}:\exists u\in U\text{ s.t.
}h_{i}(x,u)\in X_{k}\text{ for some }i\text{ and }a_{i}(x,u)>0\\}.$
In order to construct tractable relaxations for the above problem via the
moment-sum-of-squares hierarchy, it of course remains necessary to
replace any infinite (or very large) partition element by a closed basic
semialgebraic overapproximation; however, as Figure 8 illustrates, the union
of overapproximations of the individual subdomains will generally be less
conservative than an overapproximation chosen for the whole state space.
Figure 8: Overapproximation of a semi-infinite lattice by a box (a) vs.
overapproximation by the union of box-overapproximations of a three-element
partition of the same semi-infinite lattice (b).
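The neighborhood sets $N_{X_{k}}(X_{j})$ can be enumerated directly from the jumps and propensities. The sketch below (assumption-laden: the birth-death chain, the rates, the coarse control sample, and the two-element partition are all illustrative) implements the definition above for a finite piece of the state space.

```python
# Enumerate N_{X_k}(X_j): the states of X_j with a positive-propensity jump
# into X_k for some admissible control. All model choices here are illustrative.
def neighborhood(Xk, Xj, jumps, props, controls):
    """States x in Xj with h_i(x,u) in Xk and a_i(x,u) > 0 for some i, u."""
    return {
        x
        for x in Xj
        if any(h(x, u) in Xk and a(x, u) > 0
               for h, a in zip(jumps, props) for u in controls)
    }

# Birth-death jumps on Z_+ with rates a_1 = 1 (birth), a_2 = 0.1 x (death)
jumps = [lambda x, u: x + 1, lambda x, u: x - 1]
props = [lambda x, u: 1.0, lambda x, u: 0.1 * x]
U = [0.0, 1.0]  # coarse sample of the control set

X1 = set(range(0, 5))    # {0,...,4}
X2 = set(range(5, 10))   # {5,...,9}

print(neighborhood(X1, X2, jumps, props, U))  # {5}: only x=5 can jump down into X1
print(neighborhood(X2, X1, jumps, props, U))  # {4}: only x=4 can jump up into X2
```

Only these boundary states enter the coupling constraints of (jump discHJB), which is what keeps the discretized problem weakly coupled.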
### 7.3 Example: Optimal gene regulation for protein expression
In order to demonstrate the effectiveness of the proposed local occupation
measure framework for the optimal control of jump processes, we consider here
the problem of protein regulation in a cell. We assume that the protein
expression is biologically implemented via a simple biocircuit as shown in
Figure 9.
Figure 9: Simplified biocircuit model for gene regulation
The biocircuit is modeled as a jump process reflecting the stochastic nature
of chemical reactions in cellular environments due to low molecular copy
numbers [Gil92]. The resultant jump process has three states encoding the
molecular counts of protein $x_{1}$, active promoter $x_{2}$ and inactive
promoter $x_{3}$. The expression of protein can be controlled indirectly via
the activation kinetics of the promoter. The resultant jump process is
summarized by the following transitions with associated rates:
$\displaystyle h_{1}:(x_{1},x_{2},x_{3})\mapsto(x_{1}+1,x_{2},x_{3}),\quad
a_{1}(x,u)=10x_{2}$ (expression) $\displaystyle
h_{2}:(x_{1},x_{2},x_{3})\mapsto(x_{1}-1,x_{2},x_{3}),\quad
a_{2}(x,u)=0.1x_{1}$ (degradation) $\displaystyle
h_{3}:(x_{1},x_{2},x_{3})\mapsto(x_{1},x_{2}-1,x_{3}+1),\quad
a_{3}(x,u)=0.1x_{1}x_{2}$ (repression) $\displaystyle
h_{4}:(x_{1},x_{2},x_{3})\mapsto(x_{1},x_{2}+1,x_{3}-1),\quad
a_{4}(x,u)=10(1-u)x_{3}$ (activation) $\displaystyle
h_{5}:(x_{1},x_{2},x_{3})\mapsto(x_{1},x_{2}-1,x_{3}+1),\quad
a_{5}(x,u)=10ux_{2}$ (inactivation)
Admissible control actions $u$ are constrained to lie within the interval
$U=[0,1]$. Moreover, we assume a deterministic initial condition
$x(0)\sim\delta_{(0,1,0)}$ and exploit that due to the reaction invariant
$x_{2}(t)+x_{3}(t)=x_{2}(0)+x_{3}(0)$ the state space $X$ is effectively two-
dimensional, i.e., we eliminate $x_{3}(t)=1-x_{2}(t)$. It can be easily
verified that, after elimination of the reaction invariant, the state space of
the jump process is given by
$\displaystyle X=\\{x\in\mathbb{Z}_{+}^{2}:x_{2}\in\\{0,1\\}\\}$
such that Assumption 2 is satisfied. The goal of the control problem is to
stabilize the protein level in the cell at a desired value of 10 molecules. To
that end, we choose to minimize the stage cost
$\displaystyle\ell(x,u)=(x_{1}-10)^{2}+10(u-0.5)^{2}$
over the horizon $[0,10]$.
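With the model fully specified, its cost can be probed by simulation. The following sketch runs a Gillespie (SSA) rollout of the biocircuit under a constant control $u$ and accumulates the stage cost $\ell$ over $[0,10]$; the constant-control policy, the number of Monte Carlo samples, and the RNG seeds are illustrative choices, not the optimal policy computed via the SDP relaxations.

```python
# Gillespie (SSA) rollout of the gene-regulation biocircuit under a constant
# control u, accumulating l(x,u) = (x1-10)^2 + 10(u-0.5)^2 over [0,10].
# Rates and jumps match (expression)-(inactivation); the policy is illustrative.
import math
import random

def ssa_cost(u, T=10.0, seed=0):
    rng = random.Random(seed)
    x1, x2, x3 = 0, 1, 0          # deterministic initial condition delta_(0,1,0)
    t, cost = 0.0, 0.0
    jumps = [(1, 0, 0), (-1, 0, 0), (0, -1, 1), (0, 1, -1), (0, -1, 1)]
    while True:
        rates = [10 * x2, 0.1 * x1, 0.1 * x1 * x2, 10 * (1 - u) * x3, 10 * u * x2]
        total = sum(rates)
        dt = math.inf if total == 0 else rng.expovariate(total)
        dt = min(dt, T - t)
        cost += ((x1 - 10) ** 2 + 10 * (u - 0.5) ** 2) * dt  # l constant between jumps
        t += dt
        if t >= T:
            return cost
        r = rng.random() * total
        for (d1, d2, d3), a in zip(jumps, rates):
            r -= a
            if r <= 0:
                x1, x2, x3 = x1 + d1, x2 + d2, x3 + d3
                break

# Monte Carlo estimate of the expected cost for a constant control
est = sum(ssa_cost(0.5, seed=s) for s in range(200)) / 200
print(est)
```

Any such admissible policy yields an upper bound on the optimal value, complementing the lower bounds from the moment-sum-of-squares relaxations.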
In order to investigate the effect of different discretizations of the problem
domain on bound quality and computational cost, we discretize the time horizon
uniformly into $n_{T}$ intervals and partition the state space into the
singletons
$\displaystyle X_{i}=\begin{cases}\\{(i-1,0)\\},&i\leq n_{X}\\\
\\{(i-n_{X}-1,1)\\},&i>n_{X}\end{cases}\text{ for }i=1,\dots,2n_{X}$
and capture the remaining part of the state space in the last partition
element
$\displaystyle X_{2n_{X}+1}=\\{x\in\mathbb{Z}_{+}^{2}:x_{1}\geq
n_{X},x_{2}\in\\{0,1\\}\\}.$
By changing the parameters $n_{T}$ and $n_{X}$, the fineness of the partition
of the problem domain can be modulated. For this example, we explore parameter
values in the ranges $n_{T}\in\\{2,4,\dots,8,20\\}$ and
$n_{X}\in\\{0,8,\dots,32,40\\}$.
Note that the partition elements $X_{1},\dots,X_{2n_{X}}$ are already basic
closed semialgebraic such that no overapproximation is required for the
construction of valid moment-sum-of-squares restrictions of the non-negativity
constraints in (jump discHJB). In contrast, $X_{2n_{X}+1}$ is infinite, hence
not basic closed semialgebraic. We therefore strengthen the formulation of the
moment-sum-of-squares restriction of (jump discHJB) by imposing the non-
negativity conditions on the convex hull of $X_{2n_{X}+1}$, thereby recovering
tractability.
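A minimal sketch of this construction (function names and the choice $n_{X}=4$ are illustrative): the $2n_{X}$ singleton cells are basic closed semialgebraic as-is, and the infinite tail $X_{2n_{X}+1}$ is replaced by polynomial inequalities describing its convex hull, $\mathrm{conv}(X_{2n_{X}+1})=[n_{X},\infty)\times[0,1]$.

```python
# Partition used in the example: 2 n_X singleton cells plus the convex hull
# of the infinite tail X_{2n_X+1}, given as inequalities g(x) >= 0.
def partition(n_X):
    # X_i = {(i-1, 0)} for i <= n_X and {(i-n_X-1, 1)} for n_X < i <= 2 n_X
    singletons = [{(i, 0)} for i in range(n_X)] + [{(i, 1)} for i in range(n_X)]
    # conv(X_{2n_X+1}) = [n_X, inf) x [0, 1] as polynomial inequalities
    tail_hull = [lambda x: x[0] - n_X, lambda x: x[1], lambda x: 1 - x[1]]
    return singletons, tail_hull

singletons, hull = partition(4)
print(len(singletons))                       # 8
print(all(g((7, 1)) >= 0 for g in hull))     # True: (7,1) lies in the hull
print(all(g((2, 0)) >= 0 for g in hull))     # False: x1 - n_X = -2 < 0
```

Imposing the non-negativity constraints of (jump discHJB) on this hull rather than on the discrete tail set is precisely the tractable strengthening described above.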
In order to estimate the bound quality we follow a strategy analogous to that
described in Section 6.3.
Figure 10 shows a comparison of the trade-off between computational cost and
bound quality achieved by different choices for the partition of the problem
domain and approximation order. The results obtained with the classical
occupation measure framework are emphasized with enlarged red markers. As
already found for the example considered in Section 6, the results show that
by adequately partitioning the problem domain the computational cost to
compute bounds of a given quality can be substantially reduced compared to the
traditional approach. In fact, it is again observed that tighter bounds could
be computed in significantly less time than with the traditional approach.
Figure 10: Trade-off between computational cost and bound quality for
different approximation orders $d$ and domain partitions (different markers).
## 8 Conclusion
We have proposed and investigated a simple discretization strategy akin to that
used for the numerical solution of PDEs in order to improve the scalability of
moment-sum-of-squares relaxations for stochastic optimal control problems.
From the dual perspective this strategy can be interpreted as finding the
tightest piecewise polynomial underestimator of the value function. Our
analysis further revealed that the primal perspective motivates a localized
notion of occupation measures obtained by restricting the traditional notion of
occupation measures to subdomains of a partition of the state space and
control horizon. The key advantage of the proposed framework is that it offers
a flexible way to tighten the SDP relaxations at linearly increasing cost,
simply by refining the partition. This is in stark contrast to the traditional
tightening mechanism of increasing the approximation order which results in
combinatorial scaling. Two examples illustrate that the proposed strategy can
indeed improve the efficiency and practicality of the occupation measure
approach to stochastic optimal control by a moderate amount.
For future work, we seek to investigate if and how the use of distributed
optimization techniques can further improve efficiency of the proposed
framework by exploiting the weakly-coupled block structure of the SDP
relaxations.
## Acknowledgements
FH gratefully acknowledges helpful discussions with Professor Paul I. Barton.
This material is based upon work supported by the National Science Foundation
under grant no. OAC-1835443, grant no. SII-2029670, grant no. ECCS-2029670,
grant no. OAC-2103804, and grant no. PHY-2021825. We also gratefully
acknowledge the U.S. Agency for International Development through Penn State
for grant no. S002283-USAID. The information, data, or work presented herein
was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E),
U.S. Department of Energy, under Award Number DE-AR0001211 and DE-AR0001222.
The views and opinions of
authors expressed herein do not necessarily state or reflect those of the
United States Government or any agency thereof. This material was supported by
The Research Council of Norway and Equinor ASA through Research Council
project “308817 - Digital wells for optimal production and drainage”. Research
was sponsored by the United States Air Force Research Laboratory and the
United States Air Force Artificial Intelligence Accelerator and was
accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views
and conclusions contained in this document are those of the authors and should
not be interpreted as representing the official policies, either expressed or
implied, of the United States Air Force or the U.S. Government. The U.S.
Government is authorized to reproduce and distribute reprints for Government
purposes notwithstanding any copyright notation herein.
## References
* [ADH17] Amir Ali Ahmadi, Sanjeeb Dash and Georgina Hall “Optimization over structured subsets of positive semidefinite matrices via column generation” In _Discrete Optimization_ 24 Elsevier, 2017, pp. 129–151
* [AH19] Amir Ali Ahmadi and Georgina Hall “On the construction of converging hierarchies for polynomial optimization based on certificates of global positivity” In _Mathematics of Operations Research_ 44.4 INFORMS, 2019, pp. 1192–1207
* [AM14] Amir Ali Ahmadi and Anirudha Majumdar “DSOS and SDSOS optimization: LP and SOCP-based alternatives to sum of squares optimization” In _2014 48th Annual Conference on Information Sciences and Systems (CISS)_ , 2014, pp. 1–5 IEEE
* [AM19] Amir Ali Ahmadi and Anirudha Majumdar “DSOS and SDSOS optimization: more tractable alternatives to sum of squares and semidefinite optimization” In _SIAM Journal on Applied Algebra and Geometry_ 3.2 SIAM, 2019, pp. 193–230
* [BB96] Abhay G. Bhatt and Vivek S. Borkar “Occupation Measures for Controlled Markov Processes: Characterization and Optimality” In _The Annals of Probability_ 24.3, 1996, pp. 1531–1562
* [Boy+10] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato and Jonathan Eckstein “Distributed optimization and statistical learning via the alternating direction method of multipliers” In _Foundations and Trends in Machine Learning_ 3.1, 2010, pp. 1–122 DOI: 10.1561/2200000016
* [Bre03] Lothar Breuer “From Markov jump processes to spatial queues” Springer Science & Business Media, 2003
* [CKH21] Vit Cibulka, Milan Korda and Tomáš Haniš “Spatio-temporal decomposition of sum-of-squares programs for the region of attraction and reachability” In _IEEE Control Systems Letters_ 6 IEEE, 2021, pp. 812–817
* [DB18] Garrett R Dowdy and Paul I Barton “Bounds on stochastic chemical kinetic systems at steady state” In _The Journal of Chemical Physics_ 148.8 AIP Publishing, 2018, pp. 84106
* [Dow19] Garrett Ryan Dowdy “Using semidefinite programming to bound distributions in chemical engineering systems”, 2019
* [FV89] Wendell H. Fleming and Domokos Vermes “Convex duality approach to the optimal control of diffusions” In _SIAM Journal on Control and Optimization_ 27.5, 1989, pp. 1136–1155 DOI: 10.1137/0327060
* [Gil92] Daniel T Gillespie “A rigorous derivation of the chemical master equation” In _Physica A: Statistical Mechanics and its Applications_ 188.1-3 Elsevier, 1992, pp. 404–425
* [HRS01] Kurt Helmes, Stefan Röhl and Richard H Stockbridge “Computing Moments of the Exit Time Distribution for Markov Processes by Linear Programming” In _Operations Research_ 49.4 INFORMS, 2001, pp. 516–530
* [HB21] Flemming Holtorf and Paul I Barton “Tighter bounds on transient moments of stochastic chemical systems”, 2021
* [IW14] Nobuyuki Ikeda and Shinzo Watanabe “Stochastic Differential Equations and Diffusion Processes” Elsevier, 2014 DOI: 10.2307/1268562
* [KS98] Thomas G. Kurtz and Richard H. Stockbridge “Existence of Markov Controls and Characterization of Optimal Markov Controls” In _SIAM Journal on Control and Optimization_ 36.2, 1998, pp. 609–653 DOI: 10.1137/S0363012995295516
* [Las01] Jean B Lasserre “Global optimization with polynomials and the problem of moments” In _SIAM Journal on Optimization_ 11.3 SIAM, 2001, pp. 796–817
* [Las+08] Jean B Lasserre, Didier Henrion, Christophe Prieur and Emmanuel Trélat “Nonlinear Optimal Control via Occupation Measures and LMI-Relaxations” In _SIAM Journal on Control and Optimization_ 47.4 SIAM, 2008, pp. 1643–1666
* [Las10] Jean Bernard Lasserre “Moments, Positive Polynomials and Their Applications” World Scientific, 2010
* [LPZ06] Jean-Bernard Bernard Lasserre, Tomas Prieto-Rumeau and Mihail Zervos “Pricing a class of exotic options via moments and SDP relaxations” In _Mathematical Finance_ 16.3 Wiley Online Library, 2006, pp. 469–494 DOI: 10.1111/j.1467-9965.2006.00279.x
* [OS07] Bernt Oksendal and Agnès Sulem “Applied Stochastic Control of Jump Diffusions” Springer, 2007 DOI: 10.1007/978-3-540-69826-5
* [Par00] Pablo A Parrilo “Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization”, 2000
* [SLD09] Carlo Savorgnan, Jean B. Lasserre and Moritz Diehl “Discrete-time stochastic optimal control via occupation measures and moment relaxations” In _Proceedings of the IEEE Conference on Decision and Control_ IEEE, 2009, pp. 519–524 DOI: 10.1109/CDC.2009.5399899
* [SK20] Corbinian Schlosser and Milan Korda “Sparse moment-sum-of-squares relaxations for nonlinear dynamical systems with guaranteed convergence”, 2020, pp. 1–34 URL: http://arxiv.org/abs/2012.05572
* [Sch01] Elizabeth Schwerer “A linear programming approach to the steady-state analysis of reflected Brownian motion” In _Stochastic Models_ 17.3 Taylor & Francis, 2001, pp. 341–368
* [SRS18] Yuanxun Shao, Dillard Robertson and Joseph K. Scott “Convex Relaxations for Nonlinear Stochastic Optimal Control Problems” In _Proceedings of the American Control Conference_ 2018-June.2 AACC, 2018, pp. 3955–3960 DOI: 10.23919/ACC.2018.8430981
* [SZA20] Sungho Shin, Victor M. Zavala and Mihai Anitescu “Decentralized Schemes with Overlap for Solving Graph-Structured Optimization Problems” In _IEEE Transactions on Control of Network Systems_ 7.3 Institute of Electrical and Electronics Engineers Inc., 2020, pp. 1225–1236 DOI: 10.1109/TCNS.2020.2967805
* [Wan+21] Jie Wang, Corbinian Schlosser, Milan Korda and Victor Magron “Exploiting term sparsity in moment-sos hierarchy for dynamical systems” In _arXiv preprint arXiv:2111.08347_ , 2021
* [Wei+19] Tillmann Weisser, Benoı̂t Legat, Chris Coey, Lea Kapelevich and Juan Pablo Vielma “Polynomial and moment optimization in Julia and JuMP” In _JuliaCon_, 2019 URL: https://pretalx.com/juliacon2019/talk/QZBKAU
* [ZFP19] Yang Zheng, Giovanni Fantuzzi and Antonis Papachristodoulou “Sparse sum-of-squares (SOS) optimization: A bridge between DSOS/SDSOS and SOS optimization for sparse polynomials” In _Proceedings of the American Control Conference_ 2019-July American Automatic Control Council, 2019, pp. 5513–5518 DOI: 10.23919/acc.2019.8814998
## Appendix A Proof of Corollary 1
###### Proof.
Fix $z\in X$. Now consider an admissible control policy $u$ such that all
paths of the controlled process are feasible, i.e., $(x(s),u(s))\in X\times U$
for all $s\in[t,T]$, with $x(t)\sim\delta_{z}$ and $t\in[t_{n_{T}-1},T]$. Further
define $\tau_{0}=t$ and $\tau_{i}$ for $i\geq 1$ to be the minimum between $T$
and the time point at which the process crosses for the $i$th time from one
subdomain of the partition $X_{1},\dots,X_{n_{X}}$ to another. By definition,
the process is confined to some (random) subdomain $X_{k}$ in the interval
$[\tau_{i},\tau_{i+1}]$. Since $w_{n_{T},k}$ is sufficiently smooth on
$[\tau_{i},\tau_{i+1}]\times X_{k}$, Ito’s lemma applies and yields that
$w_{n_{T},k}(\tau_{i+1},x(\tau_{i+1}))=w_{n_{T},k}(\tau_{i},x(\tau_{i}))+\int_{\tau_{i}}^{\tau_{i+1}}\mathcal{A}w_{n_{T},k}(s,x(s),u(s))\,ds\\\
+\int_{\tau_{i}}^{\tau_{i+1}}\nabla_{x}w_{n_{T},k}(s,x(s))^{\top}g(x(s),u(s))\,db(s).$
Now note that by Constraint (6),
$\displaystyle\int_{\tau_{i}}^{\tau_{i+1}}\mathcal{A}w_{n_{T},k}(s,x(s),u(s))\,ds\geq-\int_{\tau_{i}}^{\tau_{i+1}}\ell(x(s),u(s))\,ds.$
Further note that
$\displaystyle\mathbb{E}_{\delta_{z}}\left[\int_{\tau_{i}}^{\tau_{i+1}}\nabla_{x}w_{n_{T},k}(s,x(s))^{\top}g(x(s),u(s))\,db(s)\right]=0$
as the integrand is square-integrable by Assumption 1 and
$\tau_{i}\leq\tau_{i+1}$ are stopping times with respect to the natural
filtration (see [IW14], Chapter 2, Proposition 1.1). Thus, after taking
expectations, we obtain
$\displaystyle\mathbb{E}_{\delta_{z}}\left[w_{n_{T},k}(\tau_{i},x(\tau_{i}))\right]\leq\mathbb{E}_{\delta_{z}}\left[\int_{\tau_{i}}^{\tau_{i+1}}\ell(x(s),u(s))\,ds+w_{n_{T},k}(\tau_{i+1},x(\tau_{i+1}))\right].$
Moreover, continuity holds at any crossings between subdomains $X_{k}$ and
$X_{j}$ due to Constraint (8) such that
$\displaystyle\mathbb{E}_{\delta_{z}}\left[w(\tau_{i},x(\tau_{i}))\right]=\mathbb{E}_{\delta_{z}}\left[w_{n_{T},k}(\tau_{i},x(\tau_{i}))\right]=\mathbb{E}_{\delta_{z}}\left[w_{n_{T},j}(\tau_{i},x(\tau_{i}))\right]$
when the process crosses from $X_{k}$ to $X_{j}$ at $\tau_{i}$. Now using that
$\mathbb{E}_{\delta_{z}}\left[w(\tau_{0},x(\tau_{0}))\right]=w(t,z)$, we
obtain by summing over the time intervals
$[\tau_{0},\tau_{1}],\dots,[\tau_{N},\tau_{N+1}]$ that
$\displaystyle
w(t,z)\leq\mathbb{E}_{\delta_{z}}\left[\sum_{i=0}^{N}\int_{\tau_{i}}^{\tau_{i+1}}\ell(x(s),u(s))\,ds+w(\tau_{N+1},x(\tau_{N+1}))\right]=\mathbb{E}_{\delta_{z}}\left[\int_{t}^{\tau_{N+1}}\ell(x(s),u(s))\,ds+w(\tau_{N+1},x(\tau_{N+1}))\right].$
Letting $N\to\infty$, it follows that
$\displaystyle
w(t,z)\leq\mathbb{E}_{\delta_{z}}\left[\int_{t}^{T}\ell(x(s),u(s))\,ds+w(T,x(T))\right]$
as $\tau_{N}\to T$ almost surely. Finally, using that $w(T,x)\leq\phi(x)$ on
$X$ due to Constraint (9) and the fact that all results hold for any
admissible control policy, we obtain the desired result $w(t,z)\leq V(t,z)$.
It remains to show that $w$ preserves the lower bounding property across the
boundaries introduced by discretization of the time domain. To that end, note
that by an analogous argument as before, we have for any $t\in[t_{i-1},t_{i})$
that
$\displaystyle
w(t,z)\leq\mathbb{E}_{\delta_{z}}\left[\int_{t}^{t_{i}}\ell(x(s),u(s))\,ds+\lim_{s\nearrow
t_{i}}w(s,x(t_{i}))\right].$
Since Constraint (7) implies that $\lim_{s\nearrow t_{i}}w(s,x)\leq
w(t_{i},x)$ on $X$, it finally follows by induction that $w(t,z)\leq V(t,z)$
for any $t\in[0,T]$ and $z\in X$. ∎
## Appendix B Weak form stochastic optimal control with embedded jump process
on a discrete state space
The weak form of stochastic optimal control of a jump process on a discrete
state space is given by
$\displaystyle\inf_{\nu_{T},\xi}\quad$ $\displaystyle\int_{[0,T]\times X\times
U}\ell(x,u)\,d\xi(t,x,u)+\int_{X}\phi(x)\,d\nu_{T}(x)$ s.t.
$\displaystyle\int_{X}w(T,x)\,d\nu_{T}(x)-\int_{X}w(0,x)\,d\nu_{0}(x)=\int_{[0,T]\times
X\times U}\mathcal{A}w(s,x,u)\,d\xi(s,x,u),$ $\displaystyle\hskip
227.62204pt\forall w\in\mathcal{C}^{1,0}([0,T]\times X),$
$\displaystyle\nu_{T}\in\mathcal{M}_{+}(X),$
$\displaystyle\xi\in\mathcal{M}_{+}([0,T]\times X\times U),$
where the occupation measures $\nu_{T}$ and $\xi$ are defined as in Section 2.
The corresponding dual is given by
$\displaystyle\sup_{w}\quad$ $\displaystyle\int_{X}w(0,x)\,d\nu_{0}(x)$ s.t.
$\displaystyle\mathcal{A}w(t,x,u)+\ell(x,u)\geq
0,\quad\forall(t,x,u)\in[0,T]\times X\times U,$ $\displaystyle
w(T,x)\leq\phi(x),\quad\forall x\in X,$ $\displaystyle
w\in\mathcal{C}^{1,0}([0,T]\times X),$
and characterizes again the maximal smooth subsolution of the value function
associated with (jump OCP).
Affiliations:
1 Earth, Planetary, and Space Sciences Department, and Institute of Geophysics and Planetary Physics, University of California, Los Angeles, Los Angeles, CA 90095, USA (email: <EMAIL_ADDRESS>)
2 Now at: University of Texas at Dallas, Richardson, TX 75080
3 CEA, DAM, DIF, Arpajon, France
4 Atmospheric and Oceanic Sciences Department, University of California, Los Angeles, CA
5 Now at: Johns Hopkins University Applied Physics Laboratory, Laurel, MD
6 Department of Astronomy and Center for Space Physics, Boston University, Boston, MA
7 Mechanical and Aerospace Engineering Department, Henry Samueli School of Engineering, University of California, Los Angeles, CA 90095
8 Now at: School of Mechanical, Industrial, and Manufacturing Engineering, Oregon State University, Corvallis, OR 97331
9 University of St. Petersburg, St. Petersburg, Russia
10 Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109
11 Space Science Institute, Boulder, CO 80301
12 Sodankylä Geophysical Observatory, University of Oulu, Sodankylä, Finland
13 Materials Science and Engineering Department, Henry Samueli School of Engineering, University of California, Los Angeles, CA 90095
14 Now at: Deloitte Consulting, New York, NY 10112
15 Computer Science Department, Henry Samueli School of Engineering, University of California, Los Angeles, CA 90095
16 Now at: Microsoft, Redmond, WA 98052
17 Physics and Astronomy Department, University of California, Los Angeles, CA 90095
18 Now at: Department of Astronomy and Astrophysics, The University of Chicago, Chicago, IL 60637
19 Now at: Raybeam, Inc., Mountain View, CA 94041
20 Now at: SpaceX, Hawthorne, CA 90250
21 Now at: Reliable Robotics Corporation, Mountain View, CA 94043
22 Now at: Los Alamos National Laboratory, Los Alamos, NM 87545
23 Mathematics Department, University of California, Los Angeles, CA 90095
24 Now at: Planet Labs, PBC, San Francisco, CA 94107
25 Now at: KSAT, Inc., Denver, CO 80231
26 Now at: Tyvak Nano-Satellite Systems, Inc., Irvine, CA 92618
27 Now at: Northrop Grumman Aerospace Systems, Redondo Beach, CA 90278
28 Now at: Apple, Cupertino, CA 95014
29 Now at: Zipline International, South San Francisco, CA 94080
30 Now at: Lucid Motors, Newark, CA 94560
31 Now at: College of Engineering and Computer Science, California State University, Fullerton, Fullerton, CA 92831
32 Now at: The Aerospace Corporation, El Segundo, CA 90245
33 Electrical and Computer Engineering Department, Henry Samueli School of Engineering, University of California, Los Angeles, CA 90095
34 Now at: Heliogen, Pasadena, CA 91103
35 Now at: Argo AI, LLC, Pittsburgh, PA 15222
36 Now at: Terran Orbital, Irvine, CA 92618
37 Now at: Millennium Space Systems, El Segundo, CA 90245
38 Now at: Department of Electrical Engineering, Stanford University, Stanford, CA 94305
39 Now at: Mercedes-Benz Research and Development North America, Long Beach, CA 90810
40 Now at: Geosyntec Consultants, Inc., Costa Mesa, CA 92626
41 Now at: Juniper Networks, Sunnyvale, CA 94089
42 Now at: Niantic Inc., San Francisco, CA 94111
43 Now at: Teledyne Scientific and Imaging, Thousand Oaks, CA 91360
44 Now at: Naval Surface Warfare Center Corona Division, Norco, CA 92860
45 Now at: Epirus Inc., Torrance, CA 90501
46 Now at: Department of Astronomy, Ohio State University, Columbus, OH 43210
47 Now at: Amazon, Seattle, WA 98109
48 Now at: Department of Radiology, University of California, San Francisco, San Francisco, CA 94143
# Energetic electron precipitation driven by electromagnetic ion cyclotron
waves from ELFIN’s low altitude perspective
V. Angelopoulos1 X.-J. Zhang1,2 A. V. Artemyev1 D. Mourenas3 E. Tsai1 C.
Wilkins1 A. Runov1 J. Liu1,4 D. L. Turner1,5 W. Li4 K. Khurana1 R. E. Wirz7,8
V. A. Sergeev9 X. Meng10 J. Wu1 M. D. Hartinger1,11 T. Raita12 Y. Shen1 X. An1
X. Shi1 M. F. Bashir1 X. Shen6 L. Gan6 M. Qin6 L. Capannolo6 Q. Ma6 C. L.
Russell1 E. V. Masongsong1 R. Caron1 I. He1,13 L. Iglesias1,14 S. Jha1,15,16
J. King1,15 S. Kumar 1,17 18 K. Le1,13 J. Mao1,15,19 A. McDermott1,7 K.
Nguyen1,7,20 A. Norris1 A. Palla1,15,21 Roosnovo1,17,22 J. Tam1,7 E.
Xie1,14,15 R. C. Yap1,23,24 S. Ye1,7 C. Young1,15,16 L. A. Adair1,17,25 C.
Shaffer1,7,26 M. Chung1,27 P. Cruce1,28 M. Lawson1 D. Leneman1 M. Allen1,7,29
M. Anderson1,23,30 M. Arreola-Zamora1,27 J. Artinger1,17,31 J. Asher1,7,32 D.
Branchevsky1,32,33 M. Cliffe1,20,33 K. Colton1,23,24 C. Costello1,15,34 D.
Depe1,33,35 B. W. Domae1,33 S. Eldin1,16,33 L. Fitzgibbon1,17,36 A.
Flemming1,7,27 D. M. Frederick1,7,37 A. Gilbert1,33,38 B. Hesford1,10,33 R.
Krieger1,13,39 K. Lian1,7,32 E. McKinney1,40 J. P. Miller1,15,41 C.
Pedersen1,7 Z. Qu1,7,42 R. Rozario1,7,20 M. Rubly1,7,43 R. Seaton1,7 A.
Subramanian1,27,33 S. R. Sundin1,7,44 A. Tan1,33,45 D. Thomlinson1,7,32 W.
Turner1,17,46 G. Wing1,15,47 C. Wong1,17,48 A. Zarifian1,7,10
###### Abstract
We review comprehensive observations of electromagnetic ion cyclotron (EMIC)
wave-driven energetic electron precipitation using data from the energetic
electron detector on the Electron Losses and Fields InvestigatioN (ELFIN)
mission, two polar-orbiting low-altitude spinning CubeSats, measuring 50-5000
keV electrons with good pitch-angle and energy resolution. EMIC wave-driven
precipitation exhibits a distinct signature in energy spectrograms of the
precipitating-to-trapped flux ratio: abrupt (bursty) peaks at $>$0.5 MeV,
lasting $\sim$17 s (or $\Delta L\sim 0.56$), with significant substructure
(occasionally down to sub-second timescales). We attribute the
bursty nature of the precipitation to the spatial extent and structuredness of
the wave field at the equator. Multiple ELFIN passes over the same MLT sector
allow us to study the spatial and temporal evolution of the EMIC wave-electron
interaction region. Case studies employing conjugate ground-based or
equatorial observations of the EMIC waves reveal that the energy of moderate
and strong precipitation at ELFIN approximately agrees with theoretical
expectations for cyclotron resonant interactions in a cold plasma. Using 2
years of ELFIN data uniformly distributed in local time, we assemble a
statistical database of $\sim$50 events of strong EMIC wave-driven
precipitation. Most reside at $L\sim 5-7$ at dusk, while a smaller subset
exists at $L\sim 8-12$ at post-midnight. The energies of the peak-precipitation
ratio and of the half-peak precipitation ratio (our proxy for
the minimum resonance energy) exhibit an $L$-shell dependence in good
agreement with theoretical estimates based on prior statistical observations
of EMIC wave power spectra. The precipitation ratio’s spectral shape for the
most intense events has an exponential falloff away from the peak (i.e., on
either side of $\sim 1.45$ MeV). It too agrees well with quasi-linear
diffusion theory based on prior statistics of wave spectra. It should be noted
though that this diffusive treatment likely includes effects from nonlinear
resonant interactions (especially at high energies) and nonresonant effects
from sharp wave packet edges (at low energies). Sub-MeV electron precipitation
observed concurrently with strong EMIC wave-driven $>$1 MeV precipitation has a
spectral shape that is consistent with efficient pitch-angle scattering down
to $\sim$ 200-300 keV by much less intense higher frequency EMIC waves at dusk
(where such waves are most frequent). At $\sim$100 keV, whistler-mode chorus
may be implicated in concurrent precipitation. These results confirm the
critical role of EMIC waves in driving relativistic electron losses. Nonlinear
effects may abound and require further investigation.
###### Keywords:
Relativistic electron precipitation Radiation Belts Magnetosphere
Electromagnetic Ion Cyclotron Waves Whistler-mode chorus Plasma waves
Journal: Space Science Reviews
## 1 Introduction
### 1.1 Earth’s Radiation Belts
Relativistic electron fluxes in Earth’s radiation belts pose a significant
hazard to satellites [Horne et al., 2013] and astronauts, especially during
magnetic storms [Gonzalez et al., 1994; Baker et al., 2018]. These fluxes wax
and wane in response to upstream solar wind variations [Baker et al., 1987;
Reeves et al., 1998, 2003], reflecting a delicate competition between
acceleration, transport, and loss within the magnetosphere in ways that still
defy accurate forecasting [Li & Hudson, 2019]. Additionally, energetic
electrons precipitate to the high-latitude mesosphere and lower thermosphere,
where they can create $NO_{x}$ and $HO_{x}$, ozone-destroying catalysts
[Jackman et al., 1980; Thorne, 1980; Randall et al., 2005]. Even though
$NO_{x}$ has a short lifetime when in sunlight, during polar winter darkness
it can last for days to weeks and can be brought by vertical winds to $30-50$
km altitude, near the stratospheric peak of the ozone layer. Under such
conditions it can catalytically destroy ozone, contributing significantly to
ozone destruction. In particular, relativistic electrons, those at energies of
hundreds of keV to several MeV, pose the most significant threat to space
assets. These electrons often have both sufficient fluxes and energy to
penetrate through spacecraft and space suits, causing deep dielectric charging
or high levels of radiation exposure. Additionally, when $>1$MeV electrons are
scattered by magnetospheric processes and precipitate to the upper
stratosphere, they can produce $NO_{x}$ directly in the regions of dominant
ozone concentration where they can be most destructive [Baker et al., 1987].
The transient nature of the high energy electron flux and precipitation
complicates forecast efforts. The trapped flux is greatly affected by local
acceleration (as opposed to transport) via electron resonant interaction with
intense electromagnetic whistler-mode waves [Chen et al., 2007; Thorne et al.,
2013; Li et al., 2014; Allison & Shprits, 2020]; by heating during injections
or during radial diffusion via intense ultra-low-frequency (ULF) waves
[Elkington et al., 2004; Hudson et al., 2012; Mann et al., 2016; Sorathia et
al., 2017]; by magnetopause shadowing at the dayside leading to rapid electron
losses [Shprits et al., 2006; Turner et al., 2012; Mann et al., 2016; Olifer
et al., 2018; Sorathia et al., 2018; Pinto et al., 2020]; by field-line
scattering at the nightside [Sergeev & Tsyganenko, 1982; Artemyev et al.,
2013]; and by wave-driven precipitation into the upper atmosphere [Thorne et
al., 2005; Blum, Li & Denton, 2015; Shprits et al., 2017]. During active
times, relativistic electrons in the radiation belts can be sourced
adiabatically by inward diffusion from outer $L$-shells where the electron
phase space density is high, due to enhanced electric fields [Dessler &
Karplus, 1961; Kim & Chan, 1997] or ULF waves [Elkington et al., 2003; Mann et
al., 2016]. They can also be sourced from even lower energy, $\sim 10$ keV,
perpendicularly anisotropic magnetospheric electrons at larger distances. This
can occur by rapid injections (transport and simultaneous acceleration) of the
source electrons to the inner magnetosphere, leading to tens to hundreds of
keV seed electrons [Turner et al., 2015; Gabrielse et al., 2017]. Waves, in
particular whistler-mode chorus, are also excited by these anisotropic, highly
unstable source electrons in the magnetosphere. These waves can cause further
rapid acceleration of the seed electrons to even higher, relativistic energies
[Jaynes et al., 2015]. As the energetic electrons are transported into closed
drift shells they further interact with waves in the ULF, ELF and VLF range to
cause both local acceleration at low L-shells [Elkington et al., 1999; Horne
et al., 2005; Ma et al., 2015; Li et al., 2016; Thorne, 2010; Thorne et al.,
2013] and scattering into the loss cone [Thorne et al., 2006; Millan & Thorne,
2007]. Drift-shell splitting and other dynamic effects such as solar wind
compression pulses can cause further wave excitation and scattering of
particles into the loss cone [Jin et al., 2022].
Acceleration and loss of radiation belt electrons can occur simultaneously,
over a wide range of temporal scales (from seconds to weeks) and spatial
scales (across different local times, $L$-shells and latitudes). It is evident
that precipitating electron fluxes also result from a dynamic competition
between acceleration, transport, and loss processes. The two main waves
responsible for such losses are whistler-mode chorus and EMIC waves. The
efficacy of such wave-driven losses must then be studied in the geophysical
context (geomagnetic activity, plasma environment and location) within which
these waves occur. Scattering of $>1$ MeV electrons by chorus waves in a
plasma of realistic (on the order of $1-10$ cm$^{-3}$) density requires cyclotron
resonance at high magnetic latitudes ($|\lambda|>30^{\circ}$; see [Thorne et
al., 2005; Shprits et al., 2006; Summers et al., 2007; Agapitov et al., 2018])
where the average whistler-mode wave intensity is observed to be weak
[Meredith et al., 2012; Agapitov et al., 2013, 2018; Wang et al., 2019]. Thus,
local (i.e., unrelated to magnetopause shadowing) and rapid losses of $>2$ MeV
electrons are typically attributed to their resonant scattering by EMIC waves
[Usanova et al., 2014; Omura & Zhao, 2012; Mourenas et al., 2016; Drozdov et
al., 2017]. This interaction is also believed to be a key process controlling
the dynamics of the relativistic (as low as hundreds of keV) electron fluxes
in Earth’s radiation belts [Thorne & Kennel, 1971]. EMIC waves can be very
effective in electron scattering both in the quasi-linear diffusion regime [Ni
et al., 2015a; Drozdov et al., 2017] and in the nonlinear resonant interaction
regime [Albert & Bortnik, 2009; Kubota et al., 2015; Grach & Demekhov, 2020;
Bortnik et al., 2022; Grach et al., 2022]. In fact, both event and statistical
studies have shown that precipitating fluxes of relativistic electrons at low
altitudes correlate well with equatorial EMIC wave bursts [Blum et al., 2015;
Capannolo et al., 2019; Zhang et al., 2021]. However, the transient nature of
EMIC wave emissions [Blum et al., 2016, 2017] and the effects of hot ions on
the EMIC wave dispersion relation [Cao et al., 2017a; Chen et al., 2019]
complicate the evaluation of the relative contribution of EMIC waves to multi-
MeV, MeV, and sub-MeV electron losses in the radiation belts. This is a
question that remains open for both observational and theoretical reasons,
which we further detail below.
### 1.2 Relative impact of EMIC waves on relativistic electron precipitation
Trapped fluxes of relativistic electrons, the population affected by both EMIC
and chorus wave-driven precipitation, decrease monotonically with energy.
EMIC waves are expected to be mainly responsible for multi-MeV electron
scattering, but exhibit a minimum energy of precipitation potentially
extending down to several hundred keV under some specific wave and plasma
conditions [Summers & Thorne, 2003; Cao et al., 2017a; Zhang et al., 2021].
All other wave modes, and in particular whistler-mode chorus waves, are more
effective at pitch-angle scattering tens of keV particles, with progressively
reduced efficacy at hundreds of keV (except under very special circumstances,
such as nonlinear scattering and ducted wave propagation away from the equator
[Horne & Thorne, 2003; Miyoshi et al., 2020; Zhang et al., 2022]). Thus, the
precipitating-to-trapped electron flux ratio, plotted as a function of energy,
should be a good indicator of the scattering mechanism. In particular it would
be ideal for separating the precipitation induced by EMIC waves from that by
other types of waves. However, due to the scarcity of low-altitude satellite
data with good pitch-angle and energy resolution in the relevant (sub-MeV to
multi-MeV) energy range, this has been difficult in the past, and the relative
contribution of the various wave modes to sub-MeV electron losses remains an
open question.
This has not been for the lack of trying, as the contribution of energy and
pitch-angle scattering by various waves to the overall flux levels and spectra
is critically important for modeling and predicting space weather. Theoretical
modeling using diffusion theory of EMIC waves, whistler-mode chorus and hiss
predicts reasonably well the high equatorial-pitch-angle flux decay as
a function of activity indices, such as $Kp$ or $Dst$, over many days to weeks
[Glauert et al., 2014; Ma et al., 2015; Drozdov et al., 2017; Mourenas et al.,
2017; Pinto et al., 2019]. Similarly, an overall agreement has been found
between diffusion theory predictions and observations, from the combination of
low-altitude POES satellites and equatorial Van Allen Probes for the flux
decay due to chorus waves at low equatorial-pitch-angles [Li et al., 2013;
Reidy et al., 2021; Mourenas et al., 2021]. However, the relative contribution
of each mode to relativistic electron acceleration, precipitation, and short-
term flux-evolution has been more difficult to pin down. This is particularly
true for the relative contribution of EMIC waves to relativistic (hundreds of
keV to MeV range) electron scattering. This relative contribution has been
surmised recently from Versatile Electron Radiation Belt modeling comparisons
with Van Allen Probes data. It was shown that EMIC waves are critical for the
flux evolution of $>1$ MeV electrons, but in order to explain these electrons’
equatorial-pitch-angle spectra over a broad range of L-shells, a combination
of EMIC waves and whistler-mode chorus and hiss waves was needed [Drozdov et
al., 2015, 2017, 2020]. Indeed it has been theoretically and observationally
shown that the efficacy of MeV electron precipitation by EMIC waves is
enhanced in the presence of whistlers even when the two wave types are
operating at different local times, because EMIC waves alone cannot cause
precipitation of the most abundant, high pitch-angle electrons [Li et al.,
2007; Shprits et al., 2009; Mourenas et al., 2016; Zhang et al., 2017].
Another example of an important unanswered question in the same area is the
origin of microburst precipitation. This is a particularly intense, short-
lived, electron precipitation phenomenon, lasting on the order of $1$s or
less. It is thought to contribute significantly to the overall energetic
electron losses at $100$s of keV to several MeV [Blake et al., 1996; O’Brien
et al., 2004; Blum, Li & Denton, 2015; Greeley et al., 2019; Hendry et al.,
2019]. While the sub-MeV energy range of this precipitation could be
consistent with whistler-mode resonant interactions with electrons, especially
since the broad spatial extent of microbursts at dawn overlaps with the
typical location of whistler-mode chorus [Douma et al., 2017; Shumko et al.,
2018; Zhang et al., 2022], theoretical studies and observations suggest that
intense EMIC waves can also drive relativistic electron microbursts,
especially longer duration ones, at dusk [Blum et al., 2015; Zhang et al.,
2016a; Kubota & Omura, 2017]. As the two wave modes (chorus and EMIC waves)
are both able to scatter electrons in the hundreds of keV to $\sim 1$ MeV
range, overlap in their spatial distribution, and both can exhibit a bursty
nature, it remains unclear which wave mode dominates relativistic microburst
precipitation.
Thus, the relative contribution of EMIC waves to scattering loss of sub-MeV to
few MeV electrons compared to the contribution by other waves is still an open
question. This question is important for accurately modeling short-term
variations of both radiation belt fluxes and the atmospheric response to
relativistic electron precipitation. Part of the difficulty in addressing this
question can be attributed to the previous lack of energy-resolved and pitch-
angle resolved spectra of precipitating electrons in the 10s of keV to a few
MeV range that would enable a quantitative validation of theoretical models of
diffusion. This situation has changed with the recent launch of the ELFIN
mission, which provides for the first time such data over a wide range of
$L$-shells and local times. ELFIN’s multi-year dataset allows us now to
accurately compare precipitating-to-trapped electron flux ratios and, thus, to
infer electron diffusion rates as a function of energy [Kennel & Petschek,
1966; Li et al., 2013]. This is especially useful for comparisons of
theoretical expectations of such diffusion rates and measurements of such
rates using ELFIN observations, especially when they are combined with
conjugate, equatorial measurements of waves and plasma parameters at THEMIS,
Van Allen Probes, Arase and MMS. This represents a significant improvement
over the otherwise massive and previously well-utilized POES dataset [Rodger
et al., 2010; Yando et al., 2011; Yahnin et al., 2016; Capannolo et al.,
2019], which can only provide limited pitch-angle and integral (high-) energy
spectra. It is also an improvement over the dataset from the 1972-076B mission
[Imhof et al., 1977], which had pitch-angle and energy resolution similar to
ELFIN's but was not accompanied by conjugate equatorial missions.
ELFIN was proposed as a focused investigation to address the question of
whether EMIC waves, the primary candidate for pitch-angle scattering of
relativistic electrons, can be definitively proven to be responsible for such
scattering using the advantages offered by its new dataset. In this paper we
aim to achieve that objective and exemplify the salient features that
accompany such scattering. We will address this objective using ELFIN together
with its numerous fortuitous conjunctions with equatorial spacecraft and
ground observations. We first review, below, the properties of EMIC waves and
their interaction with relativistic electrons. We next discuss how chorus
waves may be also implicated in the scattering and precipitation of such
electrons and how to differentiate the effects of these two wave types. Next,
we present the first comprehensive ELFIN measurements of EMIC wave-driven
electron precipitation. We discuss the observed features of the precipitating
electron fluxes that indicate nonlinear resonant interaction of EMIC waves
with electrons, compare precipitating electron energy spectra at high energy-
resolution with theoretical expectations, and provide the first statistical
distributions of EMIC wave-driven precipitation and its properties.
## 2 EMIC waves: generation and effects on relativistic electrons – present
knowledge
### 2.1 Generation
EMIC waves were first postulated to be excited by a low density, high energy
population of hot ions which achieve cyclotron resonance with the ion
cyclotron wave of a cold, dense ion background by appropriately Doppler-
shifting the wave’s frequency in their own frame through streaming along the
magnetic field [Cornwall, 1965; Cornwall et al., 1970]. Such conditions
prevail near the plasmapause, where hot ring current ions may acquire
perpendicular anisotropy through drift-shell splitting, fresh ion injections,
or magnetopause compressions of the ambient, low density hot plasma. Portions of this (hot) ion
distribution having a field-aligned streaming velocity that can thus attain
cyclotron resonance with the wave can liberate the free energy available in
their anisotropy to achieve wave growth [Kennel & Petschek, 1966; Cornwall et
al., 1970, 1971]. This resonance condition is: $\omega-
k_{\parallel}v_{i,\parallel}=n\Omega_{ci}$. Here, $n=+1$ is the relevant
harmonic resonance number corresponding to first order resonance;
$\Omega_{ci}$ and $v_{i}$ are the ion cyclotron angular frequency and ion
velocity, respectively; $\omega$ and $k$ are the wave angular frequency and
wave number, respectively; and the parallel symbol denotes components along
the ambient magnetic field. Electromagnetic waves of the background (cold,
presumed dominant) plasma population propagating opposite to the beam
($k_{\parallel}v_{i,\parallel}<0$) over a range of frequencies near, say,
$0.5\Omega_{ci}$ and with wave vectors satisfying the cold plasma dispersion
relation can thus become unstable. The dispersion relation of the cold
component for parallel propagation (assuming that ions are protons and that
the hot anisotropic component has a sufficiently low density to make a
negligible contribution to the plasma dielectric response) is:
$ck/\omega=(c/V_{A})\left(1-\omega/\Omega_{ci}\right)^{-1/2}=(c/V_{A})\left(1-x\right)^{-1/2}$,
or $ck/\Omega_{pi}=x/(1-x)^{1/2}$, with $x=\omega/\Omega_{ci}$,
$V_{A}=B/\sqrt{\mu_{0}m_{i}N_{i}}$ the Alfvén speed, $\Omega_{pi}$ the ion plasma
frequency and other symbols having their usual meaning (note that:
$c/V_{A}\equiv\Omega_{pi}/\Omega_{ci}$). This is a monotonic function of
$\omega$, approaching the Alfvén wave dispersion relation in the low
frequency, MHD limit ($\omega\to 0$). In the high frequency limit,
$\omega\to\Omega_{ci}$ as $k\to\infty$, which means that in this limit the
waves are absorbed by the plasma and cannot propagate – this is the ion
cyclotron resonance. At intermediate frequencies, though, when the cyclotron
growth provided by the hot component exceeds cyclotron damping by the cold
component, the waves can grow. At oblique propagation, the dispersion relation
near $\Omega_{ci}$ is only slightly modified, becoming:
$ck_{\parallel}/\omega=(\xi c/V_{A})\left(1-x\right)^{-1/2}$, where
$\xi=\left[(1+\cos^{2}\theta)/2\right]^{1/2}$.
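The parallel cold-plasma dispersion relation above is easy to evaluate numerically. The following sketch (our own illustrative code, not from any cited work) shows its monotonic behavior and the divergence of $k$ as $\omega\to\Omega_{ci}$:

```python
import math

def ck_over_Omega_pi(x):
    """Normalized parallel wavenumber ck/Omega_pi for the cold
    proton-electron EMIC dispersion relation, with x = omega/Omega_ci."""
    if not 0.0 < x < 1.0:
        raise ValueError("propagation requires 0 < omega/Omega_ci < 1")
    return x / math.sqrt(1.0 - x)

# Monotonic in x: k grows without bound as omega -> Omega_ci (ion
# cyclotron resonance), while ck/omega -> c/V_A in the MHD limit x -> 0.
for x in (0.1, 0.5, 0.9, 0.99):
    print(x, ck_over_Omega_pi(x))
```

Dividing the returned value by $x$ recovers $ck/\omega$ in units of $c/V_{A}$, confirming the Alfvénic limit at low frequency.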
Using this dispersion relation, the aforementioned cyclotron resonance
condition can be recast as: $v_{i,R}/c=v_{i,\parallel}/c=(V_{A}/\xi
c)(1-x)^{3/2}/x$. Maximum growth occurs for parallel propagation since at
oblique propagation the resonant velocity decreases and the ion cyclotron
damping by the cold component prevails quickly, due to that component’s high
density and low temperature. It is evident from the above resonance condition
that EMIC wave generation depends critically on the ratio
$\Omega_{pi}/\Omega_{ci}=c/V_{A}$ (or equivalently on $f_{pe}/f_{ce}$, the
ratio of electron plasma to cyclotron frequency that we use more commonly
below) and on $1-x$, the latter denoting the proximity of the wave frequency
to the ion cyclotron frequency. These parameters determine the resonance
energy and its proximity to the free energy available in the velocity-
distribution’s anisotropy. Typical EMIC wave excitation requires that this
resonance energy be low enough for the waves to resonate with anisotropic hot
magnetospheric ions in the few to 10s of keV range. Hence, the larger the
aforementioned frequency ratios are (the smaller $V_{A}$ is), the easier it is
for EMIC waves to resonate with the free energy source of hot ions typically
available. Because the Alfvén speed increases rapidly away from the equator
along a field line, conditions at the geomagnetic equator favor such wave
excitation. At high-density equatorial regions that are far enough from Earth
so the geomagnetic field is also low, such as near the plasmapause, or within
plasmaspheric plumes, $\Omega_{pi}/\Omega_{ci}$ can increase ($V_{A}$ can
decrease) sufficiently for EMIC waves to be excited if anisotropic hot ions
are also present.
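A minimal numerical sketch of the resonance condition just described, with illustrative values of $c/V_{A}$ (function names and sample numbers are ours), shows how a denser plasma lowers the resonant proton energy into the few-keV range of the hot ring current ions:

```python
import math

PROTON_REST_KEV = 938272.0  # proton rest energy in keV

def vR_over_c(x, c_over_VA, xi=1.0):
    """Resonant proton speed |v_parallel|/c for an EMIC wave at
    x = omega/Omega_ci; xi = sqrt((1 + cos^2(theta))/2), 1 if parallel."""
    return (1.0 - x) ** 1.5 / (x * xi * c_over_VA)

def resonance_energy_keV(v_over_c):
    """Non-relativistic proton kinetic energy (keV); adequate here
    since v/c << 1 for magnetospheric values of c/V_A."""
    return 0.5 * PROTON_REST_KEV * v_over_c**2

# Illustrative values only: raising c/V_A (denser, weaker-field plasma,
# e.g. near the duskside plasmapause) lowers the resonant proton energy
# toward the few-keV range of the hot anisotropic ring current ions.
for c_over_VA in (200.0, 400.0, 800.0):
    v = vR_over_c(0.5, c_over_VA)
    print(c_over_VA, round(resonance_energy_keV(v), 2))
```

Since the resonance energy scales as $(V_{A}/c)^{2}$, doubling $c/V_{A}$ quarters the energy a proton needs to reach resonance.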
And indeed, EMIC waves are often excited near the post-noon and pre-midnight
sectors where the cold, dense plasmaspheric bulge and plume [Horne & Thorne,
1993] are intersected by the drift-paths of (hot) ring current ions exhibiting
velocity space anisotropies. The cold, dense background plasma there is
critical for lowering the resonant energy into the energy range where there
exists a sufficient number flux of hot ions with high enough anisotropy. This
situation occurs in that sector, especially during storm times, according to
case studies [Kozyra et al., 1997; Jordanova et al., 1998]. However,
statistical studies have also revealed that banded, low-frequency
electromagnetic waves exist at other local times as well [Anderson et al.,
1992; Erlandson & Ukhorskiy, 2001; Fraser et al., 2010; Min et al., 2012;
Meredith et al., 2014; Allen et al., 2016; Paulson et al., 2017]. These waves
have amplitudes $0.1-10$nT, are typically left-hand polarized and field-
aligned near the equator, and can extend from the Alfvén mode at low
frequencies upwards to the local ion cyclotron frequency [Kersten et al.,
2014]. They too can be identified as EMIC waves. Further supporting this
identification is that in the presence of a multi-component plasma, typically
$H^{+}$ with a few percent of either $He^{+}$, $O^{+}$, or both, such waves
are observed to split into the classical EMIC wave distinct bands (a $H^{+}$,
$He^{+}$ and $O^{+}$ band), each between their respective gyrofrequency and
the one below it, except that the $O^{+}$ band extends continuously below the
$O^{+}$ gyrofrequency down to the Alfvén branch (e.g., [Cornwall & Schulz,
1971; Young et al., 1981; Horne & Thorne, 1993]).
While the highest amplitude waves are most frequently observed at the duskside
equator in the $He^{+}$ band with left-hand circular polarization and nearly
field-aligned propagation, lower amplitude waves are also routinely observed
at the dawnside equator, though in the $H^{+}$ band, with linear or elliptical
polarization and occasionally oblique propagation [Min et al., 2012]. They are
also seen further away from the equator, where they become oblique, likely due
to their propagation, and are eventually (at high enough latitudes) Landau
damped. Hence off-equatorial waves are seen with lower amplitudes and
occurrence rates. However, such waves can occasionally also be ducted. Then
they can propagate nearly-field-aligned and evade damping, thus reaching the
ionosphere and the ground [Kim et al., 2010] where they are detected
[Engebretson et al., 2008, 2015, 2018] as continuous magnetic pulsations of
Type 1 (Pc1, 0.2-5 Hz) or Type 2 (Pc2, 0.1-0.2 Hz). Other means of evading damping
are mode conversion to the R-mode, and tunneling near the bi-ion frequency,
just below the respective ion frequency [Thorne et al., 2006]. Substorm-
related, freshly injected, anisotropic ring current ions drifting duskward
from midnight and interacting with the plasmaspheric bulge or plume are most
often responsible for exciting the EMIC waves seen at the duskside [Cornwall &
Schulz, 1971; Chen et al., 2010; Morley et al., 2010]. However, solar wind
compressions can also cause hot ions drifting in the inner magnetosphere with
pre-existing moderate (marginally stable) anisotropy to attain (through
betatron acceleration) an enhanced anisotropy, one that exceeds the threshold
for EMIC wave growth. This excitation mechanism is often credited for EMIC
wave observations at the dayside, at pre- and post-noon [Anderson & Hamilton,
1993; Arnoldy et al., 2005; Usanova et al., 2008, 2010]. However, prolonged,
quiet time EMIC wave activity over a broad range of local times in the dayside
(pre- and post-noon), but over a narrow L-shell range, is attributed to the
large anisotropy of freshly supplied ions from the nightside plasmasheet by
injections (that can persist at low occurrence rates at large distances even
during geomagnetically quiet conditions). Such anisotropy develops at the
dayside due to differential drifts at different energies and can excite EMIC
waves near an expanded plasmasphere [Anderson et al., 1996; Engebretson et
al., 2002].
### 2.2 Interaction with electrons
Relativistic electron pitch-angle scattering due to their resonant interaction
with $H^{+}$ EMIC waves was first considered by Thorne & Kennel [1971] and
Lyons & Thorne [1972]. Horne & Thorne [1998] calculated the minimum resonance
energies for a multi-ion plasma ($H^{+}$, $He^{+}$, $O^{+}$) inside and
outside the plasmapause during storms. Summers et al. [1998] addressed
relativistic effects, showing that even with such corrections, electrons in
gyroresonance with EMIC waves undergo nearly pure pitch-angle (but not much
energy) diffusion. Summers & Thorne [2003] demonstrated that such interactions
can rarely result in scattering of electrons at or below 1 MeV. Such
conditions arise only for $\Omega_{pe}/\Omega_{ce}\geq 10$ (or equivalently
$\Omega_{pi}/\Omega_{ci}\equiv c/V_{A}\geq 4.3\times 10^{2}$) which occurs near and just
inside the dusk plasmapause, most often at storm times (where $\Omega_{ce}$ is
the unsigned electron cyclotron angular frequency). To understand why, we
discuss below the fundamental characteristics of this interaction.
Electrons can resonate with an EMIC wave by overtaking (moving in the same
direction, but faster than) the wave if they have a sufficiently high
(relativistic) speed to Doppler-shift the very low sub-ion-cyclotron wave
frequency to the very high, electron cyclotron frequency. The left-hand
circularly polarized (in time) EMIC wave electric field vector tip carves a
right-handed helical wave-train in space. From the viewpoint (in the frame) of
the guiding center of an electron able to overtake the wave’s helix crests and
valleys, the electric field vector tip rotates now (in time) in a right-handed
way, opposite to that in the (ion or plasma) rest frame. This polarization
reversal has the potential to put the electron, also gyrating in a right-
handed sense, in cyclotron resonance with the EMIC wave. The generic electron
cyclotron resonance condition is: $\omega-
k_{\parallel}v_{e,\parallel}=n\Omega_{ce}/\gamma$. Here $v_{e}$ is the
velocity of the electron, which in our case has a projection along the
magnetic field that is in the same direction as the wave’s projection along
the field ($k_{\parallel}v_{e,\parallel}>0$), $\gamma$ is the relativistic
correction factor (the Lorentz factor) and $n=-1$, for first order anomalous
cyclotron resonance. (Anomalous, because due to the aforementioned overtaking,
the sense of polarization experienced by the electron is opposite to that of
the wave in the plasma rest frame.) Since $\omega\ll\Omega_{ce}$, the
resonance condition becomes simply:
$k_{\parallel}v_{e,\parallel}\sim\Omega_{ce}/\gamma$.
The electron resonance energy obtained from the aforementioned cold plasma
dispersion relation of ion cyclotron waves and from the above electron
cyclotron resonance condition [Thorne & Kennel, 1971], simplified for a
proton-electron plasma is $E_{R}/E_{0}=\gamma-1$, where $E_{0}$ is the
electron rest mass, and (based on the solution of the above two equations)
$\gamma$ is given by:
$\sqrt{\gamma^{2}-1}=\frac{1}{\xi\cos\alpha}\,\frac{\Omega_{ce}}{\Omega_{pi}}\,\frac{\sqrt{1-x}}{x}$
The minimum resonance energy, $E_{R,\min}$, is obtained for zero pitch angle,
$\alpha=0$, for a given total energy. The most common situation of parallel
propagation ($\xi=1$) serves as a case-in-point. Moreover, for a fixed
wavenumber, $\theta=0$ also minimizes the resonance energy. We see that
$E_{R,\min}$, corresponding to
$\sqrt{\gamma^{2}-1}=\Omega_{ce}/\Omega_{pi}\sqrt{1-x}/x$, is a monotonic
function of $x=\omega/\Omega_{ci}$ and $\Omega_{ce}/\Omega_{pi}$, so the
closer $\omega$ gets to $\Omega_{ci}$ the lower the $E_{R,\min}$. This is seen
more clearly if the resonant velocity in the resonance condition above can be
simply recast as resonance energy:
$E_{R}(k)=E_{0}\left(\sqrt{1+(\Omega_{ce}/ck)^{2}}-1\right)$. This is minimum
for the maximum unstable wavenumber, which (based on the cold plasma
dispersion relation, seen earlier) corresponds to the maximum
$\omega/\Omega_{ci}$, closest to 1.
For fixed $\omega/\Omega_{ci}\sim 0.5$, $E_{R}(k)$ falls off with L-shell as a
power law in the plasmasphere, due to the magnetic field decreasing faster
than the square root of the density [Sheeley et al., 2001; Ozhogin et al.,
2012]. At the plasmapause, $E_{R,\min}$ increases abruptly with $L$-shell
(outward) by about an order of magnitude, to $\sim 10$ MeV as the density
drops by 1-2 orders of magnitude [Cornwall, 1965; Thorne & Kennel, 1971].
Therefore, $E_{R,\min}$ has a local minimum (near $\sim 1$ MeV) just at the
interior of the plasmapause. This situation remains true for EMIC resonances
with heavy ions, when those are included in the dispersion relation [Summers &
Thorne, 2003].
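The cold-plasma minimum resonance energy derived above can be evaluated directly. The sketch below (our own illustrative code, using $x=0.5$ and representative $f_{pe}/f_{ce}$ values) reproduces the $\sim 1$ MeV local minimum just inside the plasmapause and the jump toward $\sim 10$ MeV outside it:

```python
import math

E0_MEV = 0.511           # electron rest energy (MeV)
MASS_RATIO = 1836.15     # m_p / m_e

def ER_min_MeV(x, fpe_over_fce):
    """Minimum (alpha = 0, parallel propagation) electron resonance
    energy with an H+ EMIC wave at x = omega/Omega_ci, from
    sqrt(gamma^2 - 1) = (Omega_ce/Omega_pi) * sqrt(1 - x) / x."""
    Oce_over_Opi = math.sqrt(MASS_RATIO) / fpe_over_fce
    rhs = Oce_over_Opi * math.sqrt(1.0 - x) / x
    gamma = math.sqrt(1.0 + rhs**2)
    return E0_MEV * (gamma - 1.0)

# Illustrative only: at x ~ 0.5, a dense duskside plasmapause
# (fpe/fce ~ 20) gives E_R,min ~ 1 MeV, while a tenuous outer region
# (fpe/fce ~ 3) pushes it toward ~10 MeV, matching the local minimum
# just inside the plasmapause discussed in the text.
for r in (3.0, 10.0, 20.0):
    print(r, round(ER_min_MeV(0.5, r), 2))
```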
However, incorporating thermal effects in the cold plasma dispersion relation
complicates this picture [Chen et al., 2011]. When even a fraction of the low-
energy ions has a significant temperature ($10$s to $100$s of eV), as is often
observed [Lee & Angelopoulos, 2014], the dispersion relation is significantly
modified: the waves can propagate through their respective cyclotron
frequencies and the stop bands can vanish [Silin et al., 2011; Chen et al.,
2011; Lee et al., 2012]. While the dispersion relation becomes more complex,
and the wave frequency is not limited to just below, or between the ion
gyrofrequencies as the case may be, heavy cyclotron damping by the cold
species near those frequencies severely limits wave propagation away from the
source, even when the waves are, in principle, unstable due to an exceedingly
strong anisotropy of the hot ions. These conditions cause excessive wave
damping at large wavenumbers, those with $kc/\Omega_{pi}$ higher than $\approx
1$. Yet, the resonance condition, expressed above as the resonance energy as a
function of wavenumber, $E_{R}(k)$, still applies regardless of the dispersion
relation, and shows that there is still a lower limit to the minimum resonance
energy, the one for the maximum wavenumber permitted for propagation, even
with warm plasma effects accounted for. This realization simplifies the
analysis: approximating the maximum wavenumber that can be attained under the
presence of thermal effects as $kc/\Omega_{pi}\approx 1$, we obtain a similar,
monotonic dependence of the resonance energy on $\Omega_{ce}/\Omega_{pi}$ as
for the cold plasma approximation: $E_{R,\min}\sim
E_{0}\left(\sqrt{1+(\Omega_{ce}/\Omega_{pi})^{2}}-1\right)$. We will compare
this relationship with data, later in the paper.
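Under the stated approximation $kc/\Omega_{pi}\approx 1$, this warm-plasma lower bound can be tabulated as a function of $f_{pe}/f_{ce}$ (illustrative code with our own function names):

```python
import math

E0_MEV = 0.511           # electron rest energy (MeV)
MASS_RATIO = 1836.15     # m_p / m_e

def ER_min_warm_MeV(fpe_over_fce):
    """Lower bound on the electron resonance energy when thermal effects
    cap the unstable wavenumber at k c / Omega_pi ~ 1:
    E_R,min ~ E0 * (sqrt(1 + (Omega_ce/Omega_pi)^2) - 1)."""
    Oce_over_Opi = math.sqrt(MASS_RATIO) / fpe_over_fce
    return E0_MEV * (math.sqrt(1.0 + Oce_over_Opi**2) - 1.0)

# Sub-MeV resonance requires a large fpe/fce (dense plasma, weak field),
# consistent with the parametric range quoted below.
for r in (10.0, 15.0, 30.0, 100.0):
    print(r, round(ER_min_warm_MeV(r), 3))
```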
A parametric analysis of the instability for multi-species plasmas including
warm plasma effects confirms that the maximum unstable wavenumber rarely
results in $E_{R,\min}$ below 1 MeV: this only occurs for conditions of large
$\Omega_{pe}/\Omega_{ce}$ (15 to 100), and a large hot species anisotropy
$A\gtrsim 2$ [Chen et al., 2011, 2013]. In those cases, electrons of energy as
low as $\sim 500$ keV may be able to resonate with and be scattered by waves
of sufficiently high frequency. (Note that diffusion rates still peak at
energies higher than $E_{R,\min}$: they peak at the energy resonant with the
frequency of peak wave power, which is lower than the maximum observed
frequency of wave propagation, the one that sets $E_{R,\min}$.) In particular, $He^{+}$ EMIC waves which
are most easily able to resonate with $\lesssim 1$MeV electrons in cold plasma
theory are strongly suppressed by cyclotron absorption; warm plasma effects
cause the $H^{+}$ band to resonate more readily with $0.5-1$ MeV electrons
than the $He^{+}$ band [Chen et al., 2013]. Thus, even though warm plasma
effects allow EMIC wave spectra to reach closer to and even cross the
cyclotron frequency, consistent with some observations, at least in the
context of quasi-linear theory $E_{R,\min}$ still remains most often above 1
MeV except in rare cases of high density regions such as plumes at high
$L$-shells [Ross et al., 2021] or at low $L$-shells for compressed
plasmaspheric conditions during the storm main phase [Cao et al., 2017a].
Observations of precipitating electrons from POES, albeit with instruments of
limited energy and pitch-angle resolution [Zhang et al., 2021], have shown
that $>0.7$ MeV electron precipitation can indeed be observed at POES,
preferentially when equatorial spacecraft in close conjunction with POES
confirm the existence of plasma conditions favorable for $0.7-1$MeV electron
scattering by $H^{+}$ waves. Note, though, that POES does not have
differential energy channels to finely resolve the peak in precipitation as a
function of energy, hence these results should be considered suggestive, not
conclusive evidence for the operation of EMIC waves. Additionally, many
counter-examples were also found (when theoretically expected precipitation
from waves was not observed, or vice versa) suggesting that quasi-linear
theory alone may not be able to fully explain these observations. Further
supporting the latter suggestion is that on occasion, relativistic electron
precipitation ($\sim 1$ MeV) events can occur on timescales of a few seconds
or less [Imhof et al., 1992; Lorentzen et al., 2000, 2001; O’Brien et al.,
2004; Douma et al., 2017], whereas the usual timescales of quasi-linear
diffusion are on the order of many minutes to hours [Albert, 2003; Li et al.,
2007; Ukhorskiy et al., 2010; Ni et al., 2015a]. Such short-lived
precipitation can often extend down to hundreds of keV. These counter-examples
cast doubt on the ability of EMIC waves to fully explain the observations, at
least when studied in the quasi-linear regime even when hot plasma effects are
incorporated into the theory.
Nonlinear treatments of EMIC wave interaction with relativistic electrons have
also resulted in some successes in interpreting observations of rapid sub-MeV
electron precipitation. Early work initially showed that nonlinear interaction
with moderate amplitude, fixed frequency waves in a dipole field typically
leads to scattering towards large pitch angles, away from the loss cone
[Albert & Bortnik, 2009]. However, the observed departures of the EMIC
waveforms from a constant frequency and the presence of a magnetic field
gradient near the equator can cause phase trapping of resonant electrons and
result in very rapid pitch-angle scattering and precipitation of MeV and even
sometimes sub-MeV energies [Omura & Zhao, 2012; Kubota et al., 2015; Hendry et
al., 2017; Nakamura et al., 2019; Grach et al., 2021]. This effect can be
enhanced by diffusive scattering by large amplitude EMIC waves, which may
transport electrons directly into the loss cone from intermediate ($\sim
30^{\circ}$) pitch-angles [Grach et al., 2022]. For realistic EMIC waveforms
having sufficiently steep edge effects, or equivalently having a few wave
periods in a single packet, even sub-MeV nonresonant electrons can be pitch-
angle scattered, when their interaction occurs over a small number of
gyroperiods [Chen et al., 2016; An et al., 2022]. Additionally, bounce
resonance of near-equatorially mirroring, hundreds of keV energy electrons
with EMIC waves can also result in moderate pitch-angle scattering and
contribute to precipitation at those energies [Cao et al., 2017b; Blum et al.,
2019]. But since hundreds of keV electrons can also interact with chorus
waves, which may occur simultaneously with EMIC waves, either at different
local times [Zhang et al., 2017] or even at the same location when driven by
ULF pulsations [Zhang et al., 2019; Bashir et al., 2022], an unambiguous
determination of the distinct (let alone independent) EMIC wave contribution
to the precipitation can be difficult.
### 2.3 Identification in precipitation spectra
Previous studies have presented suggestive evidence of telltale signatures of
EMIC wave-driven relativistic electron precipitation. This was achieved either
using in-situ magnetospheric observations of depletion of near-field-aligned
flux (in velocity-space) concurrent with EMIC wave enhancements [Usanova et
al., 2014; Zhang et al., 2016c; Bingley et al., 2019; Adair et al., 2022], or
by identifying local minima in radial profiles of the phase-space density at
$L$-shells consistent with simultaneous ground observations of EMIC waves
[Aseev et al., 2017], or through observations of simultaneous precipitation of
$10$s of keV protons and MeV electrons [Imhof et al., 1986; Miyoshi et al.,
2008; Hendry et al., 2017; Capannolo et al., 2019]. However, chorus waves can
also scatter and cause precipitation of electrons of hundreds of keV to MeV
[Artemyev et al., 2016; Ma et al., 2016b; Miyoshi et al., 2020; Zhang et al.,
2022]. The relative contribution of chorus and EMIC waves was not addressed in
those studies (e.g., see discussions in [Zhang et al., 2021, 2017]).
Noting that typical chorus wave scattering is most effective at tens of keV
rather than at hundreds of keV, a monotonically decreasing precipitating-to-
trapped flux ratio as a function of energy would favor a chorus wave
scattering interpretation over an EMIC wave one. Conversely, a ratio that
increases with energy, particularly one peaking at 1 MeV or greater, would
favor the EMIC wave interpretation, since EMIC waves scatter electrons most
effectively at energies $>1$ MeV. However, electron spectra
of sufficiently high resolution in energy and pitch-angle to make the above
distinction were not available in prior studies, which were mostly based on
POES data [Evans & Greer, 2004; Yahnin et al., 2017; Capannolo et al., 2018,
2019; Zhang et al., 2021]. Such high resolution spectra, obtained at a low-
altitude (ionospheric) satellite, especially when combined with equatorial or
ground-based measurements of the EMIC waves, are critical for determining if
such waves are responsible for relativistic electron scattering, and for
addressing the physical mechanism of the scattering process (quasi-linear,
nonlinear, resonant or nonresonant, etc.). Such measurements are needed not
only to identify but also to quantify EMIC wave-driven precipitation and its
role in radiation belt dynamics and magnetosphere-atmosphere coupling.
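The energy-dependence criterion above can be expressed as a toy classifier. This is purely illustrative and not part of any published pipeline: the function name, the slope test, and the synthetic spectra are our own assumptions, chosen only to mirror the qualitative argument (chorus scattering peaks at tens of keV; EMIC scattering peaks at $\geq 1$ MeV).

```python
import numpy as np

def classify_driver(energy_kev, prec_to_trap):
    """Classify a precipitating-to-trapped flux ratio spectrum as
    chorus-like (ratio falls with energy) or EMIC-like (ratio rises
    with energy and peaks near/above ~1 MeV). Illustrative only."""
    e = np.asarray(energy_kev, dtype=float)
    r = np.asarray(prec_to_trap, dtype=float)
    slope = np.polyfit(np.log(e), np.log(r), 1)[0]  # power-law index of r(E)
    peak_energy = e[np.argmax(r)]
    if slope < 0:
        return "chorus-like"
    if slope > 0 and peak_energy >= 1000.0:  # ratio peaks at >= 1 MeV
        return "EMIC-like"
    return "ambiguous"

# Synthetic spectra (not real data), shaped to match the qualitative picture:
energies = np.array([50, 100, 300, 1000, 3000])   # keV
chorus = np.array([0.8, 0.5, 0.2, 0.05, 0.01])    # ratio falls with energy
emic = np.array([0.01, 0.02, 0.1, 0.7, 0.4])      # ratio peaks near 1 MeV
```

In practice the distinction would of course be made on measured, well-resolved spectra, as argued in the text; the sketch only encodes the sign of the energy dependence and the location of the peak.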
EMIC wave resonant interactions with electrons can be attributed to (and
studied as) one of two processes: quasi-linear diffusion toward the loss cone
[Kennel & Petschek, 1966; Lyons, 1974] and fast nonlinear phase trapping
transport toward the loss cone [Albert & Bortnik, 2009; Kubota et al., 2015;
Grach et al., 2021; Bortnik et al., 2022; Grach et al., 2022]. The relative
importance and occurrence rate of these two regimes of wave-particle
interaction for EMIC wave scattering has not been addressed yet, even though
there is consensus from observations that EMIC waves [Kersten et al., 2014;
Saikin et al., 2015; Zhang et al., 2016c] are often sufficiently intense to
resonate with electrons nonlinearly [Wang et al., 2017]. The strongest losses
associated with quasi-linear diffusion, those in the strong diffusion limit,
have (by definition) loss-cone fluxes comparable to the trapped fluxes just
outside the loss-cone edge [Kennel & Petschek, 1966]. However, nonlinear electron
interaction may exceed the strong diffusion limit and produce loss-cone fluxes
higher than trapped fluxes [Grach & Demekhov, 2020]. Distinguishing these two
precipitation regimes requires electron flux measurements at fine pitch-angle
resolution near and within the loss cone, which is possible with energetic
particle detectors of modest angular resolution observing from low altitudes.
The recently launched ELFIN CubeSat twins, ELFIN A and ELFIN B, provide a new
dataset of precipitating electrons that is very helpful for addressing the
above questions related to the process and efficiency of EMIC wave resonant
scattering of energetic electrons. Their energetic particle detector for
electrons (EPDE) measures the full $180^{\circ}$ pitch-angle distribution of
electron fluxes with approximately $22.5^{\circ}$ resolution, over the energy
range $50-5000$ keV sampled at 16 logarithmically-spaced energy channels of
width $\Delta E/E<40$%. Thus, they can resolve perpendicular (locally
trapped), precipitating, and backscattered fluxes with good pitch-angle and
energy resolution [Angelopoulos et al., 2020]. Due to ELFIN’s altitude,
300-450 km, the locally-trapped (perpendicular) flux measured corresponds to
particles that are most often in the drift loss cone, i.e., destined to be
lost before they complete a full circle around the Earth due to the variation
of the geomagnetic field magnitude with geographic longitude. Near the
longitude of the south-Atlantic anomaly, in the northern hemisphere the
perpendicular fluxes are still inside the bounce loss cone (they will
precipitate in the south) but even in that case, intense fluxes generated
locally in the same hemisphere above at the equator will still provide
valuable information on EMIC wave scattering at a rate faster than a quarter-
bounce period, and are therefore valuable to retain. However, at most
longitudes the measured perpendicular fluxes still correspond to electrons
outside the local maximum bounce loss cone, meaning that such electrons have
had a chance to drift in longitude for some time. In this paper we simply
refer to perpendicular fluxes as trapped, meaning at least quarter-bounce
trapped, or locally trapped.
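The quoted channel resolution can be checked arithmetically. As a sketch, we idealize the 16 channels over $50-5000$ keV as having logarithmically spaced edges (an assumption; the actual EPDE channel boundaries may differ), which yields a constant fractional width below the quoted 40% bound:

```python
import numpy as np

# Assumed (illustrative) channel definition: 16 channels spanning 50-5000 keV
# with logarithmically spaced edges, idealizing ELFIN's EPDE.
edges = np.geomspace(50.0, 5000.0, num=17)    # 17 edges -> 16 channels
widths = np.diff(edges)                       # Delta E per channel
centers = np.sqrt(edges[:-1] * edges[1:])     # geometric-mean channel centers
rel_width = widths / centers                  # Delta E / E, constant for log spacing

print(rel_width.max())  # ~0.29, i.e. below the quoted 40% bound
```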
In Section 3, below, we present ELFIN examples of EMIC wave-driven electron
precipitation. We show the salient features of that precipitation and its
difference from whistler-mode precipitation, consistent with the prior
discussion in the subsection above. In Section 4, we also incorporate in our
analysis ancillary observations from other assets, such as conjugate
measurements from ground-based stations and equatorial spacecraft. These
provide regional context for the observed ELFIN precipitation (equatorial
density and magnetic field), independent confirmation of the trapped particle
fluxes, and information on the occurrence and properties of EMIC waves that may
be responsible for the observed precipitation. Then, in Section 5, we take a
statistical approach to the study of ELFIN’s observed precipitation spectra
attributed to EMIC waves. We show that their spectral properties, such as the
peak precipitation energy and the slope of precipitating-to-trapped flux ratio
as a function of energy, as well as the spatial distribution of the inferred
EMIC wave power are all consistent with expectation from theory and equatorial
observations of these waves. We also find evidence of nonlinear interactions
that can be further explored with the new dataset at hand.
## 3 ELFIN examples of EMIC wave-driven electron precipitation
Moving along their low-Earth ($\sim 450$ km altitude), $\sim 90$ min period
orbits, the ELFIN CubeSats can, in principle, each record up to four science
zones (covering the near-Earth plasma sheet, outer radiation belt,
plasmasphere, and inner belt) during each orbit. However, power and telemetry
constraints demand judicious selection of (typically) 4-12 such zones per day
per spacecraft. Choice of science zones (planned and scheduled with a weekly
cadence) depends on conjunction availability with other missions and ground
stations, or uniformity of coverage in time, MLT and hemisphere. For several
months after ELFIN’s launch in September 2018, there were conjunctions with
the near-equatorial, dual satellite, $\sim 12$ hour period Van Allen Probes
mission [Mauk et al., 2013]. During the four years (2018-2022) of ELFIN
operations, there have been multiple conjunction periods with the equatorial
Time History of Events and Macroscale Interactions during Substorms (THEMIS)
mission (three spacecraft, roughly on a string-of-pearls orbital configuration
when in the inner magnetosphere) with a $\sim 1$ day orbital period and $\sim
13R_{E}$ apogee [Angelopoulos, 2008], as well as with the near-equatorial
Exploration of energization and Radiation in Geospace (ERG) spacecraft (also
known as ARASE; [Miyoshi et al., 2018]), and with the near-equatorial, $\sim
3$ day period, four closely-separated satellite Magnetospheric Multiscale
(MMS) mission [Burch et al., 2016]. Additionally, there have been very useful
ELFIN conjunctions with ground-based magnetometer stations providing magnetic
field measurements in the EMIC wave frequency range. Such stations often
detect equatorial EMIC waves propagating down to the ionosphere and associated
with relativistic electron precipitation [Usanova et al., 2014; Yahnin et al.,
2017].
### 3.1 EMIC wave-driven versus whistler-mode wave-driven electron
precipitation
Two ELFIN A science zone crossings in Figure 1, one at the nightside/dawn
flank (left) and the other at dayside/dusk flank (right), depict precipitation
patterns representative of the outer radiation belt and auroral zone, under
moderately active conditions. (See comprehensive plots of these crossings also
in the standard ELFIN overview plots at https://elfin.igpp.ucla.edu under
Science $\rightarrow$ Summary Plots. Navigate to the appropriate time using
the drop-down menus and then use mouse-clicks to zoom in and out in time on
daily-plots and on science zone overview plots respectively.) To orient the
reader, we first describe the magnetospheric regions ELFIN A traversed and
discuss the predominant scattering mechanisms responsible for sub-MeV
precipitation. We then explain how EMIC wave-driven MeV electron precipitation
can be recognized within this otherwise fairly typical context.
Figure 1 shows energy-time spectrograms of (locally) trapped fluxes (averaged
over near-perpendicular pitch angles, those outside the local bounce loss
cone, also referred to as “trap” in plots), precipitating fluxes (averaged
over pitch-angles inside the local loss cone, and also referred to as “prec”
in plots), as well as precipitating-to-trapped (or “prec-to-trap” in plots)
flux ratios. ELFIN A travelled on a post-midnight/dawn meridian from high to
low $L$-shells as depicted in the bottom panel of that figure and as
demarcated in the annotations at the bottom. It was in the plasma sheet prior
to $\sim$11:05:40 UT, and moved to the outer radiation belt and plasmasphere soon
thereafter.
The plasma sheet is identified by above-background trapped fluxes which do not
exceed energies $\sim 200$ keV, concurrent with a precipitating-to-trapped
flux ratio of around one. The latter ratio is expected for energetic electrons
that have been isotropized by the small field-line curvature radius and the
low equatorial field intensity of the plasma sheet [Birmingham, 1984; Büchner
& Zelenyi, 1989; Delcourt et al., 1994; Lukin et al., 2021]. At $\sim$11:05:55
UT (shortly after, but still near the transition region between the plasma
sheet and inner magnetosphere), ELFIN A observed a distinct, dispersive
feature of the precipitating-to-trapped flux ratio: the lowest energy at which
the ratio approached $\sim 1$ increased with proximity to Earth (as
$L$-shell decreased). This energy versus $L$-shell dispersion is typical of
the isotropy (magnetic latitude) boundary. For a given energy, this boundary
is the ionospheric magnetic latitude poleward of which electrons of that
energy become isotropic (due to field-line scattering) and equatorward of
which the same energy electrons are anisotropic (due to the field's strong
intensity and large radius of curvature in the inner magnetosphere, which
prevent such scattering; [Sergeev & Tsyganenko, 1982; Dubyagin et al., 2002; Sergeev
et al., 2012]). At progressively higher energy this isotropy boundary
typically resides at progressively lower latitude (corresponding to a lower
$L$-shell), because electrons of higher gyroradius can still be field-line
scattered at the stronger and less curved equatorial magnetic field at this
lower latitude. This isotropy boundary signature in the precipitating-to-
trapped ratio can be used as an additional identifier of the transition from
the plasma sheet to the outer radiation belt.
Subsequently, between 11:05:55 UT and 11:08:00 UT ELFIN A was in the outer
radiation belt: both the trapped flux magnitude at all energies increased, and
the maximum energy of the (statistically significant) trapped fluxes also
increased as the $L$-shell decreased. There, a low-level precipitation (ratio
$<0.1$) is intermittently punctuated by bursty, intense precipitation (ratio
reaching $\sim 1$). In this (post-midnight/dawn) science zone crossing, the
precipitating-to-trapped flux ratio was highest at the lowest energies. This
is expected for electron resonant interactions with whistler-mode (chorus)
waves [Kennel & Petschek, 1966]. Such spectra are typical at ELFIN’s outer
belt crossings at post-midnight or dawn, especially at times of moderate to
high geomagnetic activity.
At $\sim$11:07:40 UT, ELFIN A started to enter the plasmasphere, as evidenced
by the decrease in the trapped electron fluxes and the simultaneous weakening
of the precipitating fluxes. Shortly after that time, at $\sim$11:08:10 UT,
the trapped fluxes at $\sim 200-300$ keV decreased below instrument background
level. This is a distinct feature of the plasmasphere, where 100s of keV
electrons are efficiently scattered by plasmaspheric hiss (also whistler-mode)
waves. This scattering, occurring at $L\sim 3.5-4.3$, forms a slot region
between the outer and inner belts that is nearly devoid of energetic electrons
[Lyons & Thorne, 1972, 1973; Ma et al., 2016a, 2017; Mourenas et al., 2017].
We thus interpret the ELFIN observations as entry into the slot region at
$\sim$11:08:10 UT. The precipitating-to-trapped flux ratio in the plasmasphere
(which was seen inside of $L\sim 4.5$ in this event) is almost nil because (i)
the flux of $>200$ keV electrons near the loss-cone has decreased
considerably, and (ii) hiss and lightning-generated whistler-mode wave power
at frequencies that can attain cyclotron-resonance with $\sim 50-200$ keV
electrons is weak in the 17-24 and 00-05 MLT sectors inside the plasmasphere
[Agapitov et al., 2013, 2018; Li et al., 2015], where the much higher plasma
density compared to outside the plasmasphere further reduces the scattering
efficiency of such whistler-mode waves [Mourenas & Ripoll, 2012].
Figure 1, right column, shows ELFIN A post-noon/dusk observations acquired
about $1.5$ hours later. In this science zone crossing, ELFIN A moved from low
to high $L$-shell (see Panel (f′)) and traversed the regions discussed in the
previous crossing but in reverse order. The plasmasphere was traversed first,
and the plasmapause was encountered at $\sim$12:24:10 UT ($L\sim 4.5$) as
evidenced by the transition from low to high trapped fluxes and from low to
high precipitating-to-trapped flux ratio of $\sim 200-300$ keV electrons.
Between 12:23:40 UT and 12:26:40 UT, ELFIN A traversed the outer radiation
belt, as evidenced by the significant fluxes of $>200$ keV electrons. In that
period it observed whistler-mode (chorus) wave-driven electron scattering, as
implied by the bursty nature of the precipitation and by the precipitating-to-
trapped flux ratio being largest at the lowest energies. After 12:26:45 UT
ELFIN A was likely magnetically conjugate to the equatorial post-noon/dawn
plasma sheet (and outside the outer radiation belt) as suggested by the
reduction in the flux of trapped electrons of energy $>200$ keV. But because
in that local time sector the equatorial magnetic field is both stronger than
and not as curved as that at the nightside plasma sheet at a similar L-shell,
we do not expect plasma sheet field-line scattering to be significant,
explaining the dearth of $<200$ keV precipitation at ELFIN A at that time.
While in the outer radiation belt but near the plasmapause, ELFIN A also
observed a distinctly different type of precipitation: bursts of precipitation
at relativistic ($E\geq 1$ MeV) energies. These bursts are demarcated by black
arrows in the fifth panel (the one depicting the precipitating-to-trapped flux
ratio). The most intense bursts occurred at $\sim$12:25:05 UT and
$\sim$12:25:28 UT, but other less intense ones are also evident around the same
time period (all occurred within a $\sim$1 min interval, or
$\Delta L\approx 1.5$). The MeV range precipitating-to-trapped flux ratio
peaks are clearly separated in energy from the lower-energy peaks of chorus-
driven precipitation discussed previously. The MeV range bursts are
accompanied by increases in both trapped and precipitating fluxes from near-
or below-background levels outside the bursts to above such levels within the
bursts. We attribute this type of precipitation to EMIC wave scattering. Aside
from the fact that EMIC waves resonate with such high energy electrons and are
abundant in this region of space (at or near the post-noon and dusk
plasmapause), the spectral properties observed by ELFIN A are also consistent
with this interpretation: It is known that equatorial fluxes of relativistic
electrons are typically very anisotropic and thus have a low flux level near
the loss cone [Gannon et al., 2007; Chen et al., 2014; Shi et al., 2016; Zhao
et al., 2018]. Scattering by EMIC waves transports electrons to smaller pitch-
angles in velocity-space over a broad range of pitch-angles, increasing both
trapped and precipitating fluxes near (and on either side of) the loss cone.
Since low-altitude spacecraft, like ELFIN A, measure locally mirroring fluxes
that still map to low equatorial pitch angles, they experience EMIC wave
scattering as flux increases in both locally trapped (i.e., locally mirroring
or perpendicular) and precipitating electrons. In Section 4, we provide more
examples of such relativistic electron precipitation, using conjunctions with
ground-based and equatorial observatories measuring the relevant EMIC waves
directly, further supporting our EMIC wave-driver interpretation of these
precipitation signatures seen at ELFIN.
### 3.2 The latitudinally localized and intense (bursty) nature of EMIC wave-
driven electron precipitation
The event presented next is a prototypical observation of EMIC wave-driven
electron precipitation at ELFIN with accompanying wave measurements on a
conjugate platform. Figure 2 shows ELFIN A measurements in the same format as
Figure 1. The event occurred on 2021-04-29 in the outer radiation belt at dawn
(where EMIC waves are also observed quite often, see [Zhang et al., 2016c]).
The energy- and time-localized precipitation ratio (Panel (c)) at
$\sim$13:21:05 UT marks the relativistic precipitation burst of interest. It
lasted anywhere from 7-14 s (2.5-5 spins), the range depending on the intensity
threshold used for its definition. That ratio peaked in energy at $\sim 1$ MeV, while
both precipitating and trapped fluxes increased at the same time. These are
all indicative of EMIC wave scattering [Kersten et al., 2014; Ni et al.,
2015a; Mourenas et al., 2016]. In particular, whistler-mode chorus waves
(otherwise also abundant in this region of space) preferentially scatter lower
energy electrons (see energy distributions of EMIC and whistler-mode wave
scattering rates in [Glauert & Horne, 2005; Summers et al., 2007; Shprits &
Ni, 2009]) and cannot be responsible for these observations. Note that Panels
(d) and (e), which show pitch-angle spectrograms during the event, demonstrate
that the precipitation was evident far from the edge of the loss cone, close
to the downgoing direction (whereas upgoing electrons are interpreted as
reflected electrons from the atmosphere below). The pitch-angles that enter
the precipitating flux energy spectrograms are, by selection, centered at
$>$22.5 degrees from the edge of the loss cone, providing clean separation of
precipitating and trapped fluxes.
About 35 min prior to these ELFIN A observations at $L\sim 5$, the equatorial
THEMIS E spacecraft traversed the inner magnetosphere approximately in the
radial direction away from Earth at a similar MLT as ELFIN (Figure 3, Panel
(a)) and detected intense wave activity at 0.5 - 1 Hz frequency. The observed
waves were propagating near-parallel to the background magnetic field (Panel
(b)), had left-hand circular polarization (Panel (c)) and were seen to peak
just below the $He^{+}$ gyrofrequency (dashed lines, calculated using the
local magnetic field), allowing us to identify them as $He^{+}$-band EMIC
waves. The waves were seen in a locally measured magnetic field of
$(-66,48,227)$ nT in GSM coordinates, when THEMIS E was about 2000 km away from
the magnetic equator. They were observed near the plasmapause: a region of
density gradient (Panel (d)) exhibiting more than an order of magnitude change
throughout the event (from 500 to 15 cm$^{-3}$) and short-scale density
variations of a factor of 2 on timescales as short as tens of seconds. The
EMIC wave emission was seen from $L\sim 4.65$ to $L\sim 4.88$, i.e. close to
the $L$ shell where ELFIN detected the $\sim 1$ MeV electron precipitation.
The narrow $L$ shell range of EMIC observation at THEMIS E, $\Delta
L\approx 0.23$ (a 6 min duration $\times$ 4 km/s, THEMIS E's radial velocity),
is traversed by ELFIN A at 7.9 km/s in 9.6 s, or 3.4 spin periods. This is
similar to the duration of ELFIN A's observation of precipitation (7-14 s, as
discussed earlier), and explains the short lifetime (the bursty apparent
nature) of the electron precipitation observed from ELFIN’s ionospheric
vantage-point. The EMIC wave emission was not only localized in $L$ shell but,
in this particular case, was likely also relatively short-lived: THEMIS A and
THEMIS D which traversed the same region of space as THEMIS E about 50 and 70
minutes prior, respectively, did not observe the emission. Nor did ELFIN
A, which traversed the same region on its prior and subsequent orbits (at
02:40 and 05:40 UT), observe a similar type of precipitation. This limits the
likely duration of the EMIC wave-driven precipitation to $<$1.5 hours. As we
shall see in the following subsection, in other instances the precipitation
can last a lot longer (many hours), supporting our spatial interpretation of
the EMIC wave power variability at THEMIS E.
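The timing numbers quoted above follow from simple arithmetic. The following is a back-of-envelope sketch using only the values stated in the text (not an independent calculation of the event geometry):

```python
R_E = 6371.0  # km, Earth radius

# Values quoted in the event description:
themis_v = 4.0         # km/s, THEMIS E radial velocity near the equator
emic_duration = 6 * 60 # s, duration of the EMIC emission crossing at THEMIS E
elfin_v = 7.9          # km/s, ELFIN A along-track velocity at ~450 km altitude
elfin_cross = 9.6      # s, ELFIN A crossing time of the same L-shell range
spin_period = 2.8      # s, ELFIN nominal spin period

# Equatorial radial extent of the wave region and its L-shell width:
radial_extent = themis_v * emic_duration  # 1440 km
delta_L = radial_extent / R_E             # ~0.23, as quoted

# ELFIN crosses the much narrower ionospheric footprint of that region:
footprint = elfin_v * elfin_cross         # ~76 km at ELFIN's altitude
spins = elfin_cross / spin_period         # ~3.4 spin periods
```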
It is instructive to examine several features of the EMIC waves in space that
influence the precipitation signatures at ELFIN A. We continue with our
assumption that at least the gross features of plasmaspheric density gradients
and EMIC wave power last longer than a few minutes (their crossing time by
THEMIS E) and are organized by the plasmapause in azimuthal sheets. We also
note that the measured flow velocity and ExB velocity in the radial direction
at THEMIS E are both 4 km/s, consistent with the spacecraft velocity through the
plasma. (Corotation, about 2.3 km/s in the azimuthal direction, is ignored here
as it is both smaller than the satellite velocity and because the structures
of interest vary mostly in the radial direction.) Examining the time-series
EMIC wave data in one component on the plane of polarization (Panel (e)) we
see that the emissions consist of bursts of $<$30s duration ($<$23 wave
periods) including transient bursts (such as the one marked by an arrow) as
short as 7s (or 5 wave periods). The crossing time of such 7-30 s structures
(if spatial at the equator) by ELFIN A in the ionosphere (scaled from the
estimates in the previous paragraph) would be 0.19-0.8 s which is much smaller
than ELFIN’s nominal spin period (2.8 s), and around 1-4.5 azimuthal spin
sectors (each of 16 spin sectors lasts 0.175 s). Examination of the longer
waveforms in further detail (Panel (f)) reveals phase skips every 5-15 s, or
4-11 wave periods (marked by the arrows and their separation times), mapping
to 0.13-0.40 s, or 0.75-2.3 spin phase sectors at ELFIN. These EMIC wave
structures (jumps in coherence and amplitude) from 5-30 s, corresponding to 20
- 120km at THEMIS, are smaller than the $He^{+}$ inertial length
($d_{He+}\approx$124-264 km in this case for $N_{e}\approx$ 50-100/cc and a
$He^{+}$ number density $\sim$ $N_{e}$/16) and the thermal proton gyroradius
(100 km for 60keV $H^{+}$) and much smaller than the expected (parallel) EMIC
wavelength ($\lambda\approx 4\pi d_{He+}$). They may be due to oblique
propagation and interference of waves from different source locations in the
presence of plasma density inhomogeneity of a comparable gradient scale (Panel
(d)). As bouncing and drifting energetic electrons near cyclotron resonance
encounter these EMIC wave structures, both along and across the field, they
should experience short-lived coherent interactions with an ensemble of waves
of a range of wavelengths, amplitudes and phases that, for small amplitudes,
would appear like turbulence. (Even if wave-field temporal variations occur
and are partly responsible for the wave observations, above, the energetic
electrons stream and drift so fast across them that they are effectively
stationary in the Earth frame.) Their resultant interaction with electrons
would then be describable by quasi-linear theory. Field-aligned electrons are
organized on drift shells with equatorial cross-sections encircling Earth but
centered towards dawn, since field lines are stretched out further from Earth
at post-midnight. Conversely, waves excited due to ion cyclotron resonance
near density gradients are (like the plasmapause) arranged on distorted drift
shells with circular equatorial cross-sections having centers near Earth but
displaced towards dusk due to the plasmasphere bulging toward dusk. The
intersection of resonant electron drift-shells and plasmaspheric density
enhancements filled with EMIC waves would be sheet-like structures: longer in
azimuth and thin in radial distance. A good fraction of that layer of waves
could be also interacting with resonant drifting electrons, resulting in
precipitation over several spins. We would expect the EMIC wave-driven
electron precipitation to be organized in thin azimuthal layers of thickness
roughly consistent with the aforementioned variations in wave power at THEMIS
E over a 6 min duration, i.e., over a $\Delta L\approx 0.23$ as discussed
above. The significant variations in power on spatial scales 7-30 s at THEMIS
E could result in abrupt enhancements in near-field-aligned equatorial fluxes
on that time scale. Mapping at ELFIN A to 1-4.5 spin sectors, they can
be either inside the loss cone or outside it (both are quite close to the
equatorial loss cone), depending on ELFIN’s spin phase and look direction.
Hence the perpendicular-to-trapped flux ratio would be time-aliased at ELFIN,
exhibiting large increases (even above unity) or decreases due to abrupt
changes in equatorial wave power, even when that power is consistent with the
quasi-linear regime. Two arrows in Figure 1(e′), of the event discussed in the
previous subsection, show instances when the average flux in the loss-cone
dominates the precipitating-to-trapped flux ratio (the ratio exceeds one) due to
flux enhancements in a single sector – clearly aliased. In fact, because there
are more spin-phase sectors in the loss cone than outside (typically 6 versus
4 in each full spin) a random distribution (in time) of 1-5 sector-long flux
enhancements would result in more flux ratios exceeding one than below one,
statistically. That bias can be normalized away, however, in statistical
studies. Case studies can rely on the consistency between consecutive trapped
fluxes, twice per spin, to ensure aliasing has been minimized (e.g., Figure 1
in Zhang et al. [2022]).
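The mapping of the 7-30 s THEMIS E wave structures to ELFIN spin sectors, used repeatedly above, follows from the same event-level scaling (6 min at THEMIS E corresponding to 9.6 s at ELFIN A). A minimal sketch using only the quoted numbers:

```python
# Scale wave-burst durations at THEMIS E to apparent durations at ELFIN A,
# using the event's overall mapping: 6 min at THEMIS E <-> 9.6 s at ELFIN A.
scale = 9.6 / (6 * 60)   # ELFIN seconds per THEMIS second, ~1/37.5
sector = 0.175           # s, one of 16 azimuthal spin sectors per 2.8 s spin

for dt in (7.0, 30.0):   # s, burst durations observed at THEMIS E
    t_elfin = dt * scale           # apparent duration at ELFIN (0.19-0.8 s)
    n_sectors = t_elfin / sector   # spin sectors affected (~1-4.5)
    print(f"{dt:5.1f} s at THEMIS -> {t_elfin:.2f} s, {n_sectors:.1f} sectors")
```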
Figure 4 shows energy-spectra of the number flux of precipitating and trapped
electrons, averaged over the 6s (four ELFIN A half-spins) when the dominant
precipitation attributed to EMIC waves occurred, in the event of Figure 2 that
was discussed earlier in this subsection. The measured spectrum of the
precipitating flux (dotted thin red line) shows a peak near 1 MeV. This peak
is even more pronounced when the average of the precipitation 6s before and 6s
after the dominant precipitation interval (the trend, dashed thin red line) is
removed, revealing the net contribution to the precipitation from just the
EMIC waves (solid thick red line). (We interpret the trend as most likely due
to low-level hiss waves.) The ratio of the average measured precipitating-to-
trapped flux (not shown) also peaks near 1 MeV at a high value $\sim 0.4$,
consistent with the color spectrogram in Figure 2(c), which depicts this ratio
for individual half-spins peaking at $\sim 0.7$. After detrending the average
of the precipitating flux in Figure 4, the detrended precipitating flux has an
even more clear peak near 1 MeV (see the solid thick red line), corresponding
also to a stronger peak near 1 MeV for the detrended precipitating to un-
detrended trapped flux ratio appropriate for comparisons with quasi-linear
theory [Kennel & Petschek, 1966]. The detrended precipitating and trapped
fluxes have a very similar energy spectrum peaked near 1 MeV (compare solid
thick red and blue curves), suggesting that EMIC waves may be responsible for
both flux increases compared to the trend, possibly through nonlinear
transport from higher pitch-angles [Grach et al., 2022]. As we will see later,
in about 50% of the events in our EMIC database the un-detrended flux
ratio actually peaks and exceeds one above 1 MeV. However, given the
significant spatial variability of the EMIC wave field and its electron
interaction region in the magnetosphere, the ratio exceeding unity must be
viewed with caution, because of the temporal aliasing effects arising from
latitudinally narrow regions of precipitation lasting a few spin sectors,
which prevents the precipitating and trapped fluxes from being measured
simultaneously.
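The detrending step described above (averaging the burst spectrum and removing the mean of the equal-length windows before and after it) can be sketched as follows. The function name, array layout, and window logic are our own illustrative choices, not the authors' actual code:

```python
import numpy as np

def detrend_burst(flux, t, burst_start, burst_len=6.0):
    """Detrend a precipitating-flux time series of spectra: average the
    spectra within the burst window and subtract the mean of equal-length
    windows just before and after it (the 'trend', attributed in the text
    to low-level background scattering such as hiss).

    flux : array (n_times, n_energies); t : array of times in seconds.
    Returns (burst_mean, trend, detrended), each of shape (n_energies,)."""
    flux = np.asarray(flux, dtype=float)
    t = np.asarray(t, dtype=float)
    in_burst = (t >= burst_start) & (t < burst_start + burst_len)
    before = (t >= burst_start - burst_len) & (t < burst_start)
    after = (t >= burst_start + burst_len) & (t < burst_start + 2 * burst_len)
    burst_mean = flux[in_burst].mean(axis=0)
    trend = flux[before | after].mean(axis=0)
    return burst_mean, trend, burst_mean - trend
```

Applied to a half-spin-resolution series with a flat background, this isolates the net burst contribution at each energy channel, which is what the solid thick red curve in Figure 4 represents.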
Nevertheless, it is evident from the above discussion that the precipitating
flux during EMIC events exhibits a strong peak and a high precipitating-to-
trapped flux ratio in the $0.5-1.5$ MeV energy range, making this a hallmark
of EMIC wave-driven precipitation in spectra that are well-resolved in energy
and pitch-angle.
Assuming that there exist cases when time aliasing does not affect the
precipitating-to-trapped flux ratio, that ratio exceeding one would signify
the presence of nonlinear EMIC wave-relativistic electron interactions. This
is because a ratio $>$1 cannot be explained by quasi-linear diffusion, which
has an upper limit of precipitation, the strong diffusion limit (see [Kennel &
Petschek, 1966]), which requires that precipitating and trapped fluxes be
equal. However, nonlinear resonant interactions can indeed result in loss cone
fluxes that are greater than permitted by quasi-linear theory. This could be
due to the phase trapping effect [Kubota et al., 2015; Kubota & Omura, 2017;
Grach & Demekhov, 2020; Grach et al., 2021]. In this mechanism, very intense
EMIC waves can interact resonantly with electrons initially located well above
the loss-cone edge and transport them in phase space directly into the loss
cone. Such nonlinear trapping results in a large pitch-angle change,
$20^{\circ}-40^{\circ}$, during a single resonant interaction. Therefore,
electrons with large equatorial pitch-angles exhibiting increasingly larger
flux at fixed energy (due to the typically strong perpendicular anisotropy of
relativistic electrons [Ni et al., 2015b]) can be transported all the way into
the loss cone directly and without simultaneously enhancing the trapped flux
near the edge of the equatorial loss-cone that corresponds to the only trapped
electron population visible at ELFIN’s altitude. This has already been
demonstrated for a case study of EMIC wave-driven precipitation by Grach et
al. [2022], who used simultaneous equatorial observations of the EMIC waves
and modeling of the wave-particle interactions to show that the precipitation
ratio should exceed unity, as was indeed observed on ELFIN. In that case the
ratio exceeded unity for 3 consecutive spins (albeit at different energies)
which bolsters the case for nonlinear scattering. Similar case-by-case studies
of the details of the ELFIN particle distributions are needed, hand-in-hand
with modeling, to verify the presence of nonlinear effects and separate them
from temporal aliasing. Statistical studies of the problem can either rely on
multiple consecutive spins with similar signatures, or probabilistic analysis
of the cluster of strong precipitation events (ratio $>$1) after removal of
trend and biases. Thus, the fine energy and pitch-angle resolution of ELFIN’s
energetic electron measurements from low altitude can allow us to also
identify and study the properties of nonlinear interactions of EMIC waves and
relativistic electrons, which are likely important at times of intense EMIC
waves.
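The screening logic implied above, a precipitating-to-trapped ratio above the strong-diffusion limit over multiple consecutive spins with a quality cut on its relative error, can be sketched as follows. This is a minimal illustration with hypothetical per-spin flux arrays; the three-spin run length and 50% error cut are assumed values for the sketch, not the exact criteria of a published pipeline.

```python
import numpy as np

def flag_nonlinear_candidates(j_prec, j_trap, dj_prec, dj_trap,
                              min_consecutive=3):
    """Flag spins whose precipitating-to-trapped flux ratio exceeds unity.

    A ratio R > 1 is incompatible with quasi-linear diffusion, whose
    strong-diffusion limit [Kennel & Petschek, 1966] caps precipitating
    fluxes at the trapped level, so runs of consecutive spins with R > 1
    are candidates for nonlinear EMIC scattering.
    """
    R = j_prec / j_trap
    # Propagate relative flux errors in quadrature; keep well-measured spins
    rel_err = np.sqrt((dj_prec / j_prec) ** 2 + (dj_trap / j_trap) ** 2)
    above = (R > 1.0) & (rel_err < 0.5)
    # Require >= min_consecutive spins in a row to mitigate temporal aliasing
    flags = np.zeros_like(above)
    run = 0
    for i, a in enumerate(above):
        run = run + 1 if a else 0
        if run >= min_consecutive:
            flags[i - run + 1 : i + 1] = True
    return R, flags
```

Requiring several consecutive flagged spins follows the suggestion above of relying on multiple consecutive spins with similar signatures to separate nonlinear scattering from aliasing.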
### 3.3 Evolution of long-lasting EMIC wave-driven electron precipitation
As we saw in the previous section, relativistic electron precipitation driven
by EMIC waves is often localized in $L$-shell due to the localization of the
EMIC wave excitation and distribution in the magnetosphere [Blum et al., 2016,
2017]. From ELFIN’s low-altitude vantage point, such a spatial localization is
evidenced as temporally localized, transient or bursty precipitation (as in
Fig. 2 and in events to be discussed subsequently in Sections 4.1, 4.2).
However, EMIC waves can occasionally persist for hours, as revealed in prior
studies from combinations of ground-based and multi-spacecraft observations
[Engebretson et al., 2015; Blum et al., 2020]. From its low-altitude, $\sim$
90 min period orbit, ELFIN can traverse the same $MLT$ and $L$-shell region
repeatedly, and thus it too can identify long-lasting EMIC waves, as well as
monitor and study their gradual evolution in intensity and latitude using
their clear electron precipitation signatures. Figure 5 shows such an event
over three ELFIN B orbits (lasting $\sim 3$ hours).
At first (Figure 5, Panels (a-c)), ELFIN B observed EMIC wave-driven electron
precipitation at 02:42:50 UT, at $L$-shell$\sim 5.4$ (see Panel (j)), lasting
$\sim 8.4$ seconds ($\Delta L\approx 0.1$). This is evident based on the
precipitating-to-trapped flux ratio, showing that the most efficient
precipitation lies in the range $[0.5,2]$ MeV, where that ratio is $\sim$1 –
precipitating fluxes at these energies are comparable to trapped. An orbit
later (Figure 5, Panels (d-f)), ELFIN B passed over approximately the same MLT
(at a distance of $\Delta MLT\approx 0.1$) and observed again strong
precipitation (at 04:15:25–04:15:35 UT) at $L$-shell$\sim 5.2$, over a
somewhat broader energy range, now $[0.4,2]$ MeV, and lasting much longer,
$\sim 22.4$ seconds ($\Delta L\approx 0.25$). The third time around (Figure 5,
Panels (g-i)), EMIC wave-driven precipitating fluxes were seen again at ELFIN
B (at 05:48:20–05:48:35 UT) at $L$-shell$\sim 5.0$. They were comparable to
trapped fluxes and extended over an even greater range in energy, $[0.3,1.5]$
MeV, and L-shell ($\Delta L\approx 0.5$). There is a small $MLT$ evolution
between the three orbits (from the first to the third crossing the MLT of the
precipitation event changed by $\Delta MLT\approx 0.35$), but this is well
within the expected EMIC wave azimuthal extent of many hours in MLT in the
equatorial magnetosphere [Blum et al., 2016, 2017]. The location of the
center-time of the emissions moves closer to Earth in each encounter (from
$L\approx$ 5.4, to 5.2, to 5.0). Given their similarity, we conclude that these
observations are likely due to continuous EMIC wave activity from the roughly
same region in space, where either the EMIC wave free energy source (e.g.
drifting ions) or the density gradient that enables resonant interactions with
that source was evolving in time (moving closer to Earth and expanding in
radial extent). Therefore, using ELFIN’s repeated passes over a long-lasting
($\sim 3$ hours) relativistic electron precipitation event also allows us to
infer characteristics of the EMIC wave spatial location, intensity, extent,
and temporal evolution at timescales of an orbit period or, on occasion, even
faster (due to the occasional availability of data from two satellites or from
two science zones at the same MLT, in the north and south hemispheres, on each
orbit).
## 4 Studies of EMIC wave-driven precipitation with ELFIN and its
conjunctions with ancillary datasets
### 4.1 Confirming $He^{+}$-band EMIC wave resonance with electrons
Figure 6 shows an overview of ELFIN A observations on 2 November 2020. There
is a clear peak of electron precipitation above 1 MeV at $\sim$15:19:00 UT.
The precipitating-to-trapped flux ratio is $\sim$ 1 for energies $\sim 1-3$
MeV and decreases with decreasing energy at energies $<1$ MeV. Enhanced
precipitation is observed during 6 consecutive ELFIN half-spins ($\sim 8.5$
seconds), the temporal localization again likely being due to ELFIN A’s
crossing of flux-tubes mapping to a spatially localized region of EMIC waves
near the equator at $L$-shell$\sim 5.5$. The typical scale of such wave
emissions in the radial direction at the equator is about $0.5R_{E}$ [Blum et
al., 2016, 2017], consistent with the observed L-shell extent in this event of
$\Delta L\approx 0.4$.
ELFIN A’s trajectory projections to the north and south hemisphere, shown in
Fig. 7, demonstrate the presence of several nearby ground-based magnetometer
stations. These could potentially provide high-resolution magnetic field
measurements that can reveal EMIC waves of interest during the time of
interest. (The time of strong precipitation at ELFIN A is denoted by a thick
trace superimposed on its projections in that figure.) Thus, although there is
no equatorial spacecraft conjugate to ELFIN A at this time, it is still
possible to obtain information on the presence and properties of the EMIC
waves associated with the observed precipitation using such stations [Usanova
et al., 2014], if the EMIC waves managed to propagate to the ground. And
indeed, stations PG3, PG4, and SPA measured low frequency, banded emissions
consistent with EMIC waves at this time. We elect to work with data from SPA
(Fig. 8) which was closest to ELFIN A at the time of interest. Using the T89
magnetic field model [Tsyganenko, 1989] we plot the equatorial helium
gyrofrequency for the $L$-shells pertaining to these stations. The observed
wave emission, exhibiting an upper limit $f_{\max}$ just below the $He^{+}$
gyrofrequency, $f_{cHe}$, evidently corresponds to helium band EMIC waves.
To further confirm the resonance condition for these waves, we use an
empirical plasma density model [Sheeley et al., 2001] to estimate the
equatorial plasma frequency $f_{pe}$; its ratio to the electron cyclotron
frequency $f_{ce}$ (the latter from the aforementioned use of T89) is about
$8.5-9$. This
is typical of the plasma trough, where ELFIN A was located during the subject
precipitation (between the plasmasphere and the inner edge of the plasma
sheet, marked atop the trapped electron energy spectrogram in Fig. 7(a) using
the same criteria as explained earlier for the events of Fig. 1). For
reasonable $He^{+}$ concentrations, $\approx 5-10\%$, the above
$f_{pe}/f_{ce}$ ratio and the observed value of $f_{\max}/f_{cHe}\approx
0.97-0.99$ (Fig. 8(a)) can result in a theoretical estimate of the minimum
resonance energy of EMIC waves. To demonstrate this, we plot the theoretical
minimum resonance energy for a wide range of $f_{\max}/f_{cHe}$ (vertical
axis) and $f_{pe}/f_{ce}$ (horizontal axis) in Fig. 9. We then denote the
range of the latter two parameters expected from observations as the gray
areas between two horizontal and two vertical lines. The intersection of those
two gray areas defines the region in this two-parameter space that corresponds
to the expected range of parameters and solutions for the resonance energy as
depicted by the plot’s contours. This resonance energy estimate, between 1.5
and 2.5 MeV, is quite close to the moderate and strong electron precipitation
energies observed at ELFIN A.
To make this last point clear, we marked in yellow and red in Fig. 9 the
energies where moderate and strong precipitation was observed, respectively.
This determination was based on the ratio $R$, depicted in Panel (a), being
$0.5>R>0.3$ and $R>0.5$, for moderate (yellow) and strong (red) precipitation,
respectively. Transferring the energy range of these two categories onto the
contours of Panels (b) and (c) immediately depicts our observational
assessment of moderate and strong precipitation within the resonance energy
contours. We can thus see that the expected resonance energy based on the most
likely values of the two aforementioned parameters, $f_{\max}/f_{cHe}$ and
$f_{pe}/f_{ce}$ (the intersection of their respective grayed area bounds),
overlaps with the observed moderate and strong precipitation (yellow and red
highlighted contours), as one would expect from quasi-linear theory.
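The resonance-energy contours of Fig. 9 can be approximated from cold-plasma theory. The sketch below is our illustrative reconstruction, not the code behind the figure: it assumes a field-aligned L-mode wave in a cold electron-proton-helium plasma and evaluates the first-order cyclotron resonance with the electron momentum fully parallel; the He$^{+}$ fraction `eta_he` is a free parameter within the $5-10\%$ range quoted above.

```python
import numpy as np

ME_C2_MEV = 0.511      # electron rest energy in MeV
MP_ME = 1836.15        # proton-to-electron mass ratio

def emic_min_resonance_energy(f_over_fcHe, fpe_over_fce, eta_he=0.07):
    """Minimum cyclotron-resonance energy (MeV) of electrons interacting
    with a parallel-propagating He+-band EMIC wave.

    Inputs are normalized: wave frequency to the He+ gyrofrequency and
    plasma frequency to the electron gyrofrequency; eta_he is the He+
    number fraction (the H+ fraction is 1 - eta_he).
    """
    # Work in units where the (unsigned) electron gyrofrequency is 1
    w_ch = 1.0 / MP_ME              # H+ gyrofrequency
    w_che = w_ch / 4.0              # He+ gyrofrequency (mass 4, charge 1)
    w = f_over_fcHe * w_che         # wave frequency
    wpe2 = fpe_over_fce ** 2
    wph2 = wpe2 * (1.0 - eta_he) / MP_ME    # H+ plasma frequency squared
    wphe2 = wpe2 * eta_he / (4.0 * MP_ME)   # He+ plasma frequency squared

    # Cold-plasma L-mode dispersion: (kc/w)^2 = L
    L = (1.0
         - wpe2 / (w * (w + 1.0))        # electron term (signed gyrofreq.)
         - wph2 / (w * (w - w_ch))       # H+ term
         - wphe2 / (w * (w - w_che)))    # He+ term
    kc = np.sqrt(L) * w

    # First-order cyclotron resonance with all momentum parallel:
    # kc * sqrt(g^2 - 1) = g*w + 1, a quadratic in the Lorentz factor g
    a = kc**2 - w**2
    b = -2.0 * w
    c = -(1.0 + kc**2)
    g = (-b + np.sqrt(b**2 - 4.0 * a * c)) / (2.0 * a)
    return (g - 1.0) * ME_C2_MEV
```

For the observed $f_{\max}/f_{cHe}\approx 0.98$ and $f_{pe}/f_{ce}\approx 8.75$ this simplified estimate gives a minimum resonance energy near 2 MeV, consistent with the $1.5-2.5$ MeV range read off Fig. 9.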
### 4.2 Confirming $H^{+}$-band EMIC wave resonance with electrons
Figure 10 shows an overview of ELFIN A observations on 6 Dec 2020 exhibiting a
clear peak of electron precipitation at $E>1$ MeV around 20:20:40 UT (marked
by a left-leaning arrow), a putative EMIC wave scattering event. At that time,
the precipitating-to-trapped flux ratio peaked (at $\gtrsim 1$) near $\sim
1-2$ MeV, and decreased with decreasing energy at $<1$MeV. Strong relativistic
electron precipitation at energies $\sim 1$ MeV was also seen at four earlier
times in the same ELFIN science zone crossing (denoted by down-pointing
arrows). Each of the five instances of intense precipitation was transient,
lasting for a few (2-8) spins ($\sim 5-20$ seconds), had a peak ratio $\gtrsim
1$ at $\sim 0.5-2$ MeV and exhibited a decreasing ratio with decreasing energy
below its peak.
As in the event we examined previously, ELFIN A’s magnetic projections to the
north and south hemispheres (Fig. 11) reveal that several ground-based
stations were magnetically conjugate to ELFIN A, and might be able to provide
magnetic field data that can explore the presence and properties of EMIC waves
during this interval. And indeed, stations KEV, OUL, and SOD of the Finnish
pulsation magnetometer network do show evidence for narrow-banded waves,
likely EMIC waves, that started abruptly with a broadband burst, probably due
to a nightside injection. Here we elect to show in Fig. 12 only data from SOD
which was closest to ELFIN A during the 20:20:40 UT burst. Although the wave
emission was not well defined spectrally on the ground at the time of ELFIN
A’s science zone crossing (demarcated by the two vertical magenta lines in
that figure), it became clearly defined a few minutes after (at $\sim$20:25:00
UT): it had peak-power and maximum frequencies ($f_{peak}$ and $f_{max}$) that
rose considerably over the next 2 hours, both remaining within the range
0.2-0.7 Hz. T89 magnetic field model-based $O^{+}$, $He^{+}$ and $H^{+}$
gyrofrequencies at the equatorial location conjugate to SOD are over-plotted
in the above figure. They reveal that the observed wave intensity peaked below
the expected equatorial $H^{+}$ gyrofrequency ($\approx 0.7$ Hz) and was
suppressed as that frequency was approached from below. The lack of a local
power minimum (a stop band) near the helium gyrofrequency and just above it
leads us to surmise that there is likely negligible helium concentration in
the wave source region [Lee et al., 2012; Chen et al., 2019], and that the
observed emission is likely due to an equatorial source of $H^{+}$ EMIC waves.
We take note of the delay in appearance of the EMIC wave on the ground
relative to the broadband burst that initiated the wave activity and also
relative to the overhead passage of ELFIN A. We interpret this delay as due to
the fact that it takes some time for the duct that facilitates unimpeded EMIC
wave propagation to the ground to establish itself along the entire flux tube.
The broadband waves, in the Pi2 and Pi1 range, are (at least part of the way)
compressional and Alfvénic and arrive to the ionosphere from the equatorial
magnetosphere without the need for ducting. Thus, due to the need for duct
formation for EMIC waves to propagate far from the equator, ground EMIC wave
detection is delayed relative to the equatorial injection that likely
initiated the hot anisotropic ions, the equatorial EMIC wave emission, the
associated electron precipitation, the initial broadband waves on the ground,
and the density perturbation that eventually led to the duct.
Equatorial observations of spacecraft potential from THEMIS-A, -D and -E
[Angelopoulos, 2008], which crossed the same L-shell as ELFIN A two hours
later and a few MLT hours away (at $\sim$20 MLT), provide estimates of the
equatorial plasma density (and plasma frequency). At the times when the
locally measured magnetic field at THEMIS was $\sim 45-50$nT, consistent with
the aforementioned $H^{+}$ gyrofrequency, the inferred equatorial density from
the spacecraft potential was $2.5-3.0$ cm$^{-3}$, such that the inferred equatorial
$f_{pe}/f_{ce}$ ratio was about $10-12$. Together with the observed value of
$f_{\max}/f_{cp}\sim 0.6-0.9$ from the ground-observatories, this gives a
theoretical estimate of the wave minimum resonance energy for electrons of
$\approx$ 0.3 - 2 MeV as marked by the cross-section of the horizontal and
vertical highlighted regions in Fig. 13. This estimate is quite close to the
energies of moderate and strong electron precipitation observed by ELFIN,
depicted in the same figure with red and yellow areas within the wider
$f_{\max}/f_{cp}$ versus $f_{pe}/f_{ce}$ parameter space. Our ELFIN
observations of peak precipitation combined with ground based estimates of the
wave frequency and equatorial spacecraft estimates of plasma density are
therefore consistent with a $H^{+}$ band EMIC wave-driven interpretation in
the context of quasi-linear theory.
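The $f_{pe}/f_{ce}$ estimate above follows from two standard conversions, $f_{pe}\,[\mathrm{Hz}]\approx 8980\sqrt{n_e\,[\mathrm{cm^{-3}}]}$ and $f_{ce}\,[\mathrm{Hz}]\approx 28\,B\,[\mathrm{nT}]$; a minimal helper (function name ours) reproduces the quoted $\sim 10-12$ range:

```python
import math

def fpe_over_fce(n_e_cm3, B_nT):
    """Electron plasma-to-cyclotron frequency ratio from equatorial
    density (cm^-3) and magnetic field strength (nT)."""
    f_pe = 8980.0 * math.sqrt(n_e_cm3)   # Hz
    f_ce = 28.0 * B_nT                   # Hz
    return f_pe / f_ce
```

The quoted density range of $2.5-3.0$ cm$^{-3}$ at $B\sim 45-50$ nT gives ratios of roughly 10.1 to 12.3.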
### 4.3 EMIC wave-driven precipitation in the context of TEC maps
As discussed above, the equatorial electron density is one of the most
important parameters affecting EMIC wave generation and resonance with
relativistic electrons. When that density is large, as in the case of
plasmaspheric plumes [Fraser et al., 2005; Usanova et al., 2013; Halford et
al., 2015], the EMIC wave resonant excitation by hot anisotropic ions is
favored [Horne & Thorne, 1993; Chen et al., 2009, 2010], while the energy of
electrons resonant with EMIC waves also decreases down to energies of peak
flux in the outer radiation belt, $0.5-2$ MeV, leading to intense precipitation
of such relativistic electrons [Summers & Thorne, 2003; Summers et al., 2007].
Thus, magnetospheric density enhancements, collocated with azimuthally
drifting energetic ions, have been commonly reported as favorable sites for
EMIC wave growth and relativistic electron precipitation. Such sites are
expected around boundaries between the nominal ring current region (filled by
hot ions injected from the plasma sheet [Gkioulidou et al., 2015; Runov et
al., 2015]) and the plasmasphere [Thorne & Kennel, 1971; Horne & Thorne, 1993;
Chen et al., 2010]. It is challenging, however, to definitively establish the
connection between EMIC wave-driven electron precipitation and density
enhancements solely based on fortuitous conjunctions between low-altitude and
equatorial spacecraft. This is because density ducts (spatially limited
enhancements) can be highly-localized and the equatorial spacecraft in those
fortuitous conjunctions are not always in the optimal location to reveal the
spatial profile of the pertinent density enhancement responsible for wave
excitation. Mapping uncertainties due to imperfect magnetic field models
further complicate such conjunction-based studies.
An alternate approach to exploring the density variations at play during ELFIN
measurements of relativistic electron precipitation is to use total electron
content (TEC) measurements of the ionosphere. These are obtained using the
phase delay of radio signals transmitted from Global Navigation Satellite
Systems (GNSS) satellites (moving along circular orbits at an altitude of
$\sim 20,000$ km) to ground-based receivers. That phase delay allows estimates
of the altitude-integrated electron density [Davies, 1965; Vo & Foster, 2001]
along the line-of-sight. Data from multiple propagation rays during a finite
time-step (from tens of seconds to minutes) at a wide range of propagation
angles are assimilated through a tomographic reconstruction process to produce
TEC maps over a wide area on the ground. TEC maps reveal well the spatial and
temporal variations of plasmaspheric density [Heise et al., 2002; Belehaki et
al., 2004; Lee et al., 2013], including those at adjacent density enhancement
structures, like plasmaspheric plumes [Foster et al., 2002; Walsh et al.,
2014]. Moreover, TEC data also reveal quite well the nightside region of
enhanced hot (ring-current energy) ions, which is known to the ionospheric
community as the mid-latitude ionospheric trough (MIT; see discussion on its
formation in Aa et al. [2020]). This topside ionosphere, subauroral latitude
region is recognized in satellite data by its enhanced ionospheric electron
temperature and its prominent density reduction. The latter is also commonly
captured in TEC data (see: [Yizengaw & Moldwin, 2005; Weygand et al., 2021]).
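The single-ray quantity underlying such maps can be illustrated with the standard dual-frequency conversion. This is a sketch assuming GPS L1/L2 carriers and the first-order group-delay relation $\Delta P = 40.3\,\mathrm{TEC}/f^{2}$; actual Madrigal processing uses carrier-phase data and tomographic assimilation rather than this bare pseudorange difference.

```python
def slant_tec_tecu(P1_m, P2_m, f1_hz=1575.42e6, f2_hz=1227.60e6):
    """Slant TEC (in TEC units, 1 TECU = 1e16 el/m^2) from the
    dual-frequency pseudorange difference P2 - P1 (meters).

    The ionospheric group delay is dP = 40.3 * TEC / f^2, so the
    frequency-dependent part of the delay isolates TEC along the ray.
    """
    tec_el_m2 = ((P2_m - P1_m) * f1_hz**2 * f2_hz**2
                 / (40.3 * (f1_hz**2 - f2_hz**2)))
    return tec_el_m2 / 1e16
```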
Because ring current ions can provide free energy for EMIC wave generation in
the pre-midnight sector, a significant fraction of EMIC wave-driven
precipitation is observed in the nightside region [Capannolo et al., 2022;
Carson et al., 2013], precisely where the MIT develops. Using TEC data to
study the correlation between the TEC-derived density gradients with
relativistic electron precipitation can be quite advantageous: The magnetic
projection of the low-altitude ELFIN satellites downward onto ionospheric TEC
maps is far more accurate than their upward projection to the magnetic equator.
Moreover, the large availability and wide spatial coverage (in MLT and
$L$-shell) of TEC observations provides a large dataset with which a
correlation between precipitation events and plasma density boundaries can be
investigated. In the following, we present first results from such studies,
while also exemplifying advances that can be made in the future using a
similar approach.
Figure 14 shows two relativistic electron precipitation events on successive
orbits of ELFIN A ($\sim$ 90 min apart, at the same MLT). (Note that Panel
(e), showing MLAT and L-shell, applies to both events.) The events show clear
signatures of EMIC wave-driven scattering at around 11:30:00 UT and 13:02:30
UT, respectively. At those times, ELFIN observed strong, transient increases
in trapped and precipitating electron fluxes at an L-shell $\sim$ 6.5. The
precipitating-to-trapped flux ratio peaked at relativistic energies ($\sim
1$MeV). Both bursts were observed between the inner edge of the plasma sheet
(the earthward boundary of isotropic plasma sheet fluxes in the energy range
of $<200$ keV, at $\sim$11:29:23 UT and $\sim$13:02:05 UT in the two
crossings, respectively) and the plasmasphere (where trapped fluxes of energy
$\sim$200 keV fall below those at $>$500 keV energy due to effective
scattering of $<$500 keV electrons by plasmaspheric hiss [Ma et al., 2016a;
Mourenas et al., 2017], at and after 11:30:55 UT and 13:03:25 UT in the two
crossings, respectively). The precipitation was likely spatially localized
($\Delta L\sim 0.4$ for both events). The similarity of the EMIC-driven
precipitation signatures in two ELFIN orbits suggests here too that EMIC wave
generation persists for at least 1.5 hours (as in the cases presented earlier,
in Fig. 5, and as was previously reported by Engebretson et al. [2015]; Blum
et al. [2020]).
Figure 15 shows projections of ELFIN A orbits onto TEC maps in the northern
hemisphere (provided by MIT Haystack via the Madrigal database [Rideout &
Coster, 2006; Coster et al., 2013; Vierinen et al., 2015]). The spatial
resolution of the TEC map is 1 by 1 degree. The images are geographic
projections but the magnetic longitude is denoted by the two overlaid MLT
meridians, for $MLT=0$ and $MLT=6$ respectively. The ELFIN A orbits map
magnetically near the midnight meridian, slightly post-midnight (MLT $\approx$
0-1). This is to the west of the pre-midnight boundary of the MIT, which can
be identified as the blue regions of TEC depletion. The intense TEC values on
the dayside are due to sunlight-driven ionization. Note that during the
second ELFIN A orbit, at $\sim$13:02:00 UT, the western (pre-midnight)
boundary of the MIT is not clearly visible due to statistical uncertainties
from lack of high-latitude TEC stations in the Northern Pacific. This TEC
quality degradation occurs as the Earth rotates counterclockwise, viewed from
the north, by 22.5 degrees between the two successive orbits, and the American
sector moves eastward. The ELFIN A orbit and the sunlit-enhanced TEC at the
dayside, which are both nearly-fixed in magnetic local time coordinates,
appear to rotate clockwise (westward) by the same amount in the geographic
coordinate system of this figure.
The MIT region’s western edge evolves as the season changes between the summer
solstice and the autumnal equinox, moving from dawn to dusk, according to
previous statistical studies [Aa et al., 2020]. As expected for the season of
our event, close to autumnal equinox, the MIT is observed in Fig. 15
preferentially at post-midnight to dawn ($MLT\in[0,6]$). It has also been
previously established through statistical investigations of the MIT in
conjunction with near-equatorial spacecraft measurements that at each
longitude the minimum of TEC viewed as a function of latitude within the MIT
corresponds to the equatorial plasmapause location [Shinbori et al., 2021;
Heilig et al., 2022], whereas the pre-midnight MIT boundary (roughly along a
magnetic meridian) is magnetically conjugate to the plasmaspheric plume
[Heilig et al., 2022]. Thus, both ELFIN A orbits map magnetically to the pre-
midnight MIT boundary. This boundary should be interpreted as the overlap
between ring current ions (resulting in the MIT’s formation in the first
place, [Aa et al., 2020]) and the cold plasma density region at pre-midnight
(associated with the plasmaspheric plume [Heilig et al., 2022]). Finally, the
poleward TEC boundary of the MIT maps to the inner edge of the plasma sheet,
as it is attributed to local, ionospheric electron enhancements brought-about
by plasma sheet electron precipitation [Aa et al., 2020].
During its 6-min-long science-zone crossing in each orbit, ELFIN A was moving
equatorward and was initially in a region of localized TEC enhancement
associated with plasma sheet electron precipitation, as also evident by the
electron spectrograms in Figure 14, Panels (a) and (a′) and denoted by
horizontal black bars above those panels. It traversed the inner edge of the
plasma sheet at approximately 11:29:23 UT and 13:02:05 UT, respectively (the
times when the precipitating-to-trapped flux ratio stopped being isotropic and
trapped fluxes exceeding background levels were still below $200$ keV in Fig.
14). Observations of EMIC wave-driven precipitation, which ensued and are
demarcated along its orbit projections in Figure 15 by thick magenta lines,
appear to map into the green or blue TEC region, between the poleward and
equatorward MIT boundaries and near the west MIT boundary. The TEC values
along the satellite tracks on Figure 15 are transferred as line plots into
Figure 14 (see Panels (d) and (d′)) to facilitate comparison with the ELFIN A
precipitation signatures in the panel stacks right above them and with region
identifications depicted by the horizontal color bars at the top of each
stack. At $\sim$11:31:20 UT and $\sim$13:03:55 UT, the times of ELFIN A’s
crossings of the plasmapause, as inferred from in-situ measurements in Figure
14 and shown by the black color bars, ELFIN A was located on the poleward edge
of, but near the MIT’s TEC minima in Figure 14(d) and (d′) and also evident in
Figure 15. These minima are known to map to the equatorial plasmapause
location [Shinbori et al., 2021; Heilig et al., 2022]) and therefore are
consistent with our approximate identification of the plasmapause from in-situ
energetic electron measurements. Therefore, comparison of TEC and ELFIN
A-measured electron precipitation demonstrates that regions of EMIC wave-
driven electron precipitation indeed correlate well with the TEC minimum and
its poleward gradient within the MIT region i.e., they are located just
outside the plasmapause. Similar comparisons with TEC maps can provide two-
dimensional ionospheric and plasmaspheric context information for ELFIN’s
locally trapped and precipitating flux measurements.
## 5 Statistical results
To analyze the spatial and spectral properties of EMIC wave-driven
precipitation statistically, we examined all ELFIN A&B observations from
2019-2021. Based on the aforementioned telltale signatures of EMIC wave-driven
precipitation, we applied an operational definition for such a precipitation
event to be a peak in the precipitating-to-trapped flux ratio at $>0.5$ MeV.
We thus identified $\sim 50$ relativistic electron precipitation events
similar to those discussed in the previous sections. We specifically excluded
from the event database those which were within the expected location, or were
consistent with, the electron isotropy boundary. The latter is distinctly
evident by its location at the inner edge of the plasma sheet and the
characteristic energy dispersion exhibited by the precipitating-to-trapped
flux ratio [Imhof et al., 1977, 1979; Yahnin et al., 1997]. The putative EMIC
wave-driven precipitation events thus selected correspond to a total of $\sim
310$ spins, with an average event duration of $\sim 6$ spins or $\sim 17$ s.
They are most often located at L$\sim$6 and have typical $\Delta L\sim$ 0.56,
consistent with prior reports of the radial extent of the EMIC waves at the
equator [Blum et al., 2016, 2017]. This is in accordance with our earlier
assertion (in Section 3.2) that the bursty nature of the subject precipitation
at ELFIN is due to the radial localization of the EMIC waves and of their
electron-interaction region in the magnetosphere.
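The operational definition above can be expressed as a simple per-crossing test (array names hypothetical; the exclusion of isotropy-boundary events, which rests on the inner-plasma-sheet location and the characteristic energy dispersion, is a separate step not captured by this minimal check):

```python
import numpy as np

def is_emic_candidate(energies_mev, ratio, e_thresh_mev=0.5):
    """True if the precipitating-to-trapped flux ratio peaks above
    e_thresh_mev, the operational signature of EMIC wave-driven
    precipitation used for the statistical event selection."""
    i_peak = int(np.nanargmax(ratio))
    return bool(energies_mev[i_peak] > e_thresh_mev)
```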
### 5.1 Spatial distribution of all EMIC wave-driven events
Figure 16(a) shows that most events occurred at around $L\sim 5-7$ consistent
with their expected location within the outer radiation belt. The events
occurred predominantly at the midnight and dusk sectors, even though the ELFIN
database of science zone crossings has uniform coverage in local time. Their
probability distribution in MLT - L space, shown in Figure 16(b), reveals that
while events are indeed most probable at pre-midnight and dusk at $L\sim 5-7$,
a second class of events is present at midnight, post-midnight and dawn at
higher L-shells, $L\sim 8-12$. This bears a close resemblance to the
distribution of EMIC waves in the magnetosphere (see e.g., [Min et al.,
2012]).
All events have a large precipitating-to-trapped flux ratio, near one (Panel
(c)). In fact, for almost half of the events that ratio exceeds unity. Full-
spin resolution data have been used (thus trapped fluxes represent an average
of flux accumulations from two look directions, one before and one after the
precipitation measurement). Only points with relative error of the ratio, $R$,
$\Delta R/R<$ 50% were included herein. We thus find statistically that the
above ratio exceeds unity for a very considerable fraction of all events. This
suggests that nonlinear effects may play a considerable role in our database of
EMIC wave-driven electron scattering events. However, because transient
precipitation on a sub-spin resolution, lasting only a few spin sectors, may
be common, careful consideration and removal of aliasing needs to take place
in a statistical or event-by-event analysis of this database in a future study
to definitively identify events that are due to nonlinear scattering.
It is possible that few, some, or many of our events are due to nonlinear
scattering by spatially-localized or short-coherence-scale, high-amplitude
wave emissions. In fact, it has been known for a while that individual EMIC
wave-packets often reach sufficient amplitudes to allow faster, nonlinear
wave-particle interactions [Engebretson et al., 2008; Albert & Bortnik, 2009;
Pickett et al., 2010; Omura & Zhao, 2012; Grach et al., 2021]. However,
despite the presence of such nonlinearly scattered, bursty precipitation
events in our database, it is still quite likely that such events
statistically conform to a diffusive treatment. The argument for this is an
analogy with whistler-mode chorus waves. Chorus wave-packets having large
amplitude (thus, in the nonlinear regime) can have short wave-packet lengths
or exhibit strong, random phase jumps, or both, resulting in phase
decoherence. This allows a diffusive description of the resulting electron
phase space density evolution, despite the large wave amplitudes at play
[Zhang et al., 2018, 2020; Artemyev et al., 2021, 2022; Mourenas et al.,
2022]. And for EMIC waves too, it is likely that a prevalence of short EMIC
wave-packets exhibiting strong and random wave phase jumps across or within
them (e.g., see various examples of short packets in [Usanova et al., 2010]
and [An et al., 2022]; some examples have also been seen in Figure 3 and
discussed in Section 3.2) can permit a diffusive description of the scattering
process despite the nonlinear nature of the scattering. In this approach the
average precipitation fluxes and the average wave amplitudes incorporate the
combined effects of linear and nonlinear resonant interaction regimes. We
therefore proceed in our analysis of the precipitation energy spectra with a
diffusive paradigm in mind, and study the minimum resonance energy, peak
precipitation energy and the typical wave amplitude variation with frequency
that corresponds to the energy-spectrum of the precipitation, next.
Figure 16(d) shows that the energy of the peak precipitating-to-trapped flux
ratio $E^{*}$ lies typically in the range 1 to 3 MeV. We interpret $E^{*}$ as
the energy of electrons resonating with the most intense EMIC waves. This is
because the typical spectrum of EMIC waves on the ground or in space (as seen
for example in Figure 3(a) or Figure 8) has a peak intensity at a certain
frequency, $f_{peak}$. Then, electrons of energy $E_{peak}$ resonating with
the most intense EMIC waves at that frequency should also exhibit a peak in
their precipitating-to-trapped flux ratio. The higher frequency part of the
EMIC wave spectrum (at $f>f_{peak}$), has a lower wave intensity, but it will
still lead to electron precipitation at energies below $E_{peak}$, down to the
minimum resonance energy $E_{R,min}$ corresponding to $f_{max}$, the maximum
wave frequency of appreciable wave power.
To better approximate the minimum resonant energy for the purpose of
statistical studies of its spatial distribution, we define $E^{*}_{\min}$ as
the half-way point in precipitation intensity below its peak. $E^{*}_{\min}$
is thus the energy at half-peak of the measured precipitating-to-trapped
electron flux ratio at energies lower than that of peak precipitation
($E^{*}_{\min}<E^{*}$). (For some EMIC spectra without appreciable wave power
at frequencies higher than their peak intensity, it would be
$E^{*}_{\min}=E^{*}$, but that is a rare occurrence.) We note that warm plasma
effects may limit scattering to frequencies below $f_{max}$ (energies above
$E^{*}_{\min}$), but also that such effects may limit propagation and
suppress the average wave power below the $f_{max}$ attainable in each
individual event time series. Thus the half-way point appears as a reasonable
energy selection in precipitation data to represent the theoretical
$E_{R,\min}$ expected from a statistical average of $f_{max}$ in available
datasets.
In summary, $E^{*}$ is our measured estimate for the theoretical $E_{peak}$
corresponding to the point of maximum precipitation by EMIC waves
of peak wave power at $f_{peak}$, and $E^{*}_{min}$ is our measured proxy for
the theoretical minimum resonance energy $E_{R,\min}$ corresponding to the
maximum frequency of appreciable wave power $f_{max}$ in statistical averages
of such wave power observations. In other words, $E^{*}_{min}$ is an estimate
of the minimum resonance energy $E_{R,\min}$ corresponding to significant
wave-driven electron scattering toward the loss-cone. Figure 16(e) shows
$E^{*}_{\min}$ for each spin in our database of events. Evidently, its average
lies in the range of 0.5 to 1.5 MeV.
Both $E^{*}$ and $E^{*}_{min}$ tend to decrease, on average, as $L$ increases
(despite the large scatter, which is in part due to uncertainties in $L$-shell
determination). This is expected, due to the decrease of the minimum resonance
energy for cyclotron resonance with distance from Earth: Since the dipole
field intensity fall-off with distance ($\sim 1/L^{3}$) is faster than that of
the square root of the density [Sheeley et al., 2001], $f_{pe}/f_{ce}$
increases with distance, and the minimum resonance energy for electrons (which
is monotonically and inversely dependent on $f_{pe}/f_{ce}$, as discussed in
Section 2.2) decreases with distance. We can see this behavior even in case
studies with multiple EMIC wave-driven bursts, such as that of Figure 10: the
energy of peak precipitating-to-trapped ratio, i.e., the energy of the most
efficient precipitation, decreases at progressively larger $L$-shells.
To assess the trends revealed in the averages shown, we compare them with
estimates for $E^{*}$ and $E^{*}_{min}$ determined independently, from
published statistical averages of EMIC wave spectra providing peak-power and
maximum frequency [Zhang et al., 2016c] and from a model-based estimation of
the $f_{pe}/f_{ce}$ ratio [Summers & Thorne, 2003]. Our independent estimates
are shown as the dashed red lines in Panels (d) and (e). Specifically, our
procedure was as follows: Using an empirical model for the plasmaspheric plume
density $n_{e}\sim 1300(3/L)^{4.83}$ cm-3 at $L>4$ [Sheeley et al., 2001], as
appropriate for the dusk sector where most of our events were observed, and
assuming that the resonance is mainly with hydrogen band EMIC waves (based on
previous statistical observations [Kersten et al., 2014; Mourenas et al.,
2016, 2017; Zhang et al., 2016c, 2017, 2021]), we get for the resonance energy
of precipitating fluxes $E[{\rm MeV}]=[(1+C/L^{1.17})^{1/2}-1]/2$. The
coefficient $C$ is to be determined from EMIC wave and background-plasma
characteristics. For typical peak power parameters [Kersten et al., 2014;
Zhang et al., 2016b] $f_{\rm peak}/f_{cp}\simeq 0.4$, hydrogen band EMIC
waves, and 98% protons $+$ 2% helium ions (or $f_{\rm peak}/f_{cp}=0.43$ and
92% protons $+$ 8% helium ions, which result in similar values), we get
$C\simeq 246$. The corresponding theoretical estimate of the typical peak
resonance energy, $E_{\rm peak}$ is shown as a dashed red curve in Panel (d).
Next, we used a factor $C/2.8$ instead of $C$, to get our theoretical estimate
of the minimum resonance energy $E_{R,\min}$ corresponding to significant
wave-driven electron scattering toward the loss-cone. This is shown as a
dashed red line in Panel (e) along with the observationally determined
energies at half-max precipitating-to-trapped electron flux ratio
$E^{*}_{\min}$ and their average. The factor $C/2.8$ corresponds to electron
cyclotron resonance with hydrogen band EMIC waves of frequency $f/f_{cp}\sim
0.56$, well above the peak-power frequency $f_{\rm peak}/f_{cp}\simeq 0.4$,
and taken to be a reasonable approximation for $f_{\max}/f_{cp}$ in
statistical averages of wave power. At that frequency, wave power is still
finite albeit an order of magnitude lower than at peak-power frequency [Zhang
et al., 2016c, 2021]. This choice is consistent with the smaller
precipitating-to-trapped flux ratio at $E_{R,\min}$ (compared to that at
$E_{\rm peak}$), despite the theoretically anticipated increase in diffusion
rate with decreasing energy for otherwise constant wave power [Mourenas et
al., 2016; Ni et al., 2015a; Summers & Thorne, 2003]. As we see from the
panels under discussion, there is a reasonably good match between the
theoretical expectation for $E_{\rm peak}$ and $E_{R,\min}$ independently
derived from EMIC wave-power statistics alone (the dashed red lines), and the
averages of $E^{*}$ and $E^{*}_{\min}$ from observations of the precipitating-
to-trapped flux ratio at ELFIN (the solid black lines).
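The resonance-energy scaling above is simple enough to evaluate directly. A minimal Python sketch (the constants $C\simeq 246$ and $C/2.8$ are those derived in the text; the specific $L$ values are illustrative):

```python
import math

def resonance_energy_mev(L, C):
    """E [MeV] = [(1 + C/L^1.17)^(1/2) - 1]/2, from the empirical plume-density
    model n_e ~ 1300 (3/L)^4.83 cm^-3 and hydrogen band cyclotron resonance."""
    return (math.sqrt(1.0 + C / L**1.17) - 1.0) / 2.0

C_PEAK = 246.0        # resonance with peak-power waves, f_peak/f_cp ~ 0.4
C_MIN = C_PEAK / 2.8  # resonance near f/f_cp ~ 0.56 (proxy for f_max)

for L in (4, 5, 6, 7):
    e_peak = resonance_energy_mev(L, C_PEAK)  # estimate of E_peak (~E*)
    e_min = resonance_energy_mev(L, C_MIN)    # estimate of E_R,min (~E*_min)
    print(f"L={L}: E_peak ~ {e_peak:.2f} MeV, E_R,min ~ {e_min:.2f} MeV")
```

Both estimates decrease monotonically with $L$, reproducing the trend in Panels (d) and (e): $E_{\rm peak}$ falls from about 3 MeV at $L=4$ to about 2 MeV at $L=7$, and $E_{R,\min}$ from about 1.6 MeV to about 1.1 MeV.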
In addition, we show in Panel (e) two lower limits of $E_{R,\min}$ estimated
based on wave cyclotron damping by cold/cool ions near the proton
gyrofrequency (see Section 2.2) at $kc/\Omega_{pi}\sim 1$ (dashed blue line)
and at $kc/\Omega_{pi}\sim 2$ (dotted blue line). These two estimates fit well
the average $E^{*}_{\min}$ values (solid black line) and the lower limit of
$E^{*}_{\min}$ data points at $L=4-10$, respectively. This suggests a
significant effect of wave cyclotron damping in determining the wave spectrum
shape at high frequencies $f>0.6\,f_{cp}$ [Chen et al., 2013].
### 5.2 Properties of most intense events
#### 5.2.1 Dependence on AE, MLT and concurrence with whistlers
Let us now investigate some properties of the highly relativistic, strong
electron precipitation EMIC wave-driven events, those with
$R=j_{prec}/j_{trap}>1/2$ at $E^{*}>1$ MeV, in the predominant spatial
category of events, those with $L<7$. These represent the most efficient
subset of the previously discussed strong precipitation ($R>0.5$) EMIC wave-
driven events with peak energy $>1$ MeV at L-shell $L<7$, but are otherwise
very similar. Figure 17(a) shows that when binned as a function of $AE^{*}$
(the maximum $AE$ in the preceding 3 hours), the fraction of these most
efficient events over the total number of events within our database strongly
increases as $AE^{*}$ rises from $250$ nT to $\sim 1500$ nT, and is quite significant for
$AE^{*}>600$ nT. It is noteworthy that the increase of this fraction with
$AE^{*}$ happens despite the rapidly decreasing probability of intervals with
large $AE^{*}$, for values above $\sim 200$ nT. Thus the smaller number of
events in the highest $AE^{*}$ bin ($1750-2500$ nT) is likely due to the
decrease in occurrence rate of such extremely high $AE^{*}$ periods. We
interpret the result of Figure 17(a) as a consequence of the fact that EMIC
wave power (which determines the efficiency of electron scattering) is itself
strongly dependent on geomagnetic activity owing to activity-dependent
injections of anisotropic hot ions which provide the free energy for EMIC wave
generation [Chen et al., 2010, 2011].
Figure 17(b) shows the fraction of the most efficient EMIC wave-driven
electron precipitation events (normalized to all events within $L<7$) as a
function of MLT. It also reflects the relative occurrence probability of such
events with MLT, since ELFIN’s science zone collections are uniformly
distributed in local time. This fraction is much higher in the 18-24 MLT sector than
elsewhere. We suggest this is because it is towards this sector that highly
anisotropic hot ions drift, after being produced by nightside injections
(which are known to peak in occurrence rate around the pre-midnight sector).
The probability of efficient precipitation is much weaker at 0-16 MLT, likely
because of the reduced anisotropy of the aforementioned hot ion populations
(since these ions have had a chance to be modified by EMIC wave scattering at
dusk). Thus when the injected ions reach the 12-16 MLT range they can generate
only weaker EMIC waves, if any. Conversely, the 0-6 MLT sector can still
harbor direct injections from the magnetotail, albeit at a lower probability
than at pre-midnight, which could explain the reduced probability of both EMIC
wave generation and subsequent efficient electron precipitation in that
sector.
Figure 17(c) shows the fraction of the aforementioned most efficient EMIC
wave-driven events that have a clearly low $j_{prec}/j_{trap}$ ratio at low
energies, $0\leq R<1/3$. This condition is applied in three energy ranges:
$E\simeq 100$ keV, $E\in[100,200]$ keV, and $E\in[100,300]$ keV. The figure
reveals that 85% of events with strong precipitation ratio at $E^{*}>1$ MeV
are accompanied by a significantly weaker precipitation ratio, $<1/3$, at
$E\simeq 100$ keV. However, the complement of the third category ($E\in[100,300]$
keV) also shows that $\sim$35% of the most efficient EMIC wave-
driven events still have $j_{prec}/j_{trap}>1/3$ at some energy between $100$
and $300$ keV. Therefore, moderate to strong precipitation is still present at
low energies (up to a few hundred keV), for quite a significant fraction of
these most efficient, highly relativistic precipitation events.
We next investigate whether the precipitation at low ($100-300$ keV)
energies we observed here during the most efficient EMIC wave-driven events
could be due to electron scattering by whistler-mode chorus or hiss waves that
may occur simultaneously with EMIC waves. Since the whistler-mode pitch-angle
diffusion rate (and, thus, the precipitation efficiency) decreases with
energy, $E$, the precipitating-to-trapped flux ratio $R=j_{prec}/j_{trap}$
should first decrease and reach a minimum somewhere above $100$ keV, before
increasing again at relativistic energies due to cyclotron resonance with EMIC
waves (e.g., see [Mourenas et al., 2021, 2022]). We note that such intense
sub-MeV precipitation has been reported previously [Hendry et al., 2017, 2019;
Capannolo et al., 2021], but in the past it was not possible to separate the
effects of whistler-mode wave and EMIC wave scattering by studying this
$R-E$ relationship with sufficient energy resolution and range. We
explore this relationship in Figure 17(d). The figure shows, on the
right, the fraction of all events with $j_{prec}/j_{trap}>1/2$ at $E^{*}>1$
MeV for which $R(200$ keV)$<0.7\times R(100$ keV) OR $R(300$ keV)$<0.7\times
R(100$ keV); in other words, where the spectral slope of the ratio $R$ is
decidedly negative ($0.7$ being an arbitrary but reasonable choice). Such a
negative slope with energy at the energy range $E<300$ keV likely corresponds
to chorus or hiss wave-driven precipitation simultaneous with EMIC wave
precipitation at MeV energies. We see that the depicted fraction is $\simeq
22\%$, indicating that about one fifth of the most efficient EMIC wave events
exhibit signatures of simultaneous whistler-mode chorus precipitation below
300 keV.
The OR condition has been used above to ensure inclusive accounting of a broad
range of low energies and an increased statistical significance of the
results. However, it is still possible that sub-MeV precipitation by EMIC
waves due to the aforementioned nonresonant [Chen et al., 2016; An et al.,
2022], or bounce resonant [Cao et al., 2017b; Blum et al., 2019] interactions,
might extend down to the 300 keV range of energies. This could provide a
positive $R-E$ slope in the 300 keV range but retain a negative slope in the
200 keV range. Therefore, we examine a more stringent criterion, applying the
AND condition to the above quantities. In other words, to examine bona fide
concurrent whistler-mode and EMIC wave scattering, we investigate the fraction
of putative EMIC wave-precipitation events (those with $j_{prec}/j_{trap}>1/2$
at $E^{*}>1$ MeV) which exhibit both $R(200$ keV)$<0.7\times R(100$ keV) AND
$R(300$ keV)$<0.7\times R(100$ keV). Figure 17(d), left, depicts this
fraction. It shows that whistler-mode wave-driven precipitation may be present
at $E\leq 300$ keV for only $\sim 6.5$%, or one sixteenth of all events with
$j_{prec}/j_{trap}>1/2$ at $E^{*}>1$ MeV at $L<7$. These results suggest that
at least $\sim 78$% and possibly up to $\sim 93.5$% of all intense EMIC wave-
driven precipitation events extending down to $200$ keV are likely not due to
concurrent whistler-mode and EMIC wave scattering, and deserve further
scrutiny.
#### 5.2.2 Consistency of precipitation with diffusion theory
The statistically significant and moderately efficient precipitation observed
at $\sim$200 keV in the presence of strong precipitation at highly
relativistic energies ($E^{*}>1$ MeV) by EMIC waves, does not seem, at first
glance, to be similarly due to resonant scattering by EMIC waves: Based on Van
Allen Probes statistics [Ross et al., 2021], EMIC waves with sufficiently high
frequencies (relatively close to the ion, and in particular the proton
gyrofrequency) to provide resonant scattering and precipitation at such low
energies [Kennel & Petschek, 1966; Mourenas et al., 2022] indeed have very low
wave power (at least 10 times lower than that at peak wave power [Zhang et
al., 2016c]). In addition, hot plasma corrections to the wave dispersion
relation become necessary in the close vicinity of the ion gyrofrequency,
where they can suppress the expected resonant scattering [Chen et al., 2013;
Cao et al., 2017a]. Nonresonant scattering by EMIC waves with very sharp edges
may be implicated [Chen et al., 2016; An et al., 2022], but the actual
presence of such sharp wave-packet edges still has to be verified
experimentally and its effects have yet to be compared with theory and
simulations. Bounce resonant scattering can, in principle, provide
precipitation of sub-MeV energies [Cao et al., 2017b; Blum et al., 2019], but
the associated scattering rate is quite small for quasi-parallel EMIC waves
and, thus, further statistical investigation of the most effective oblique
EMIC waves is needed to evaluate the relative contribution of this mechanism.
Nevertheless, some useful insights into the possible origin of the moderate
sub-MeV electron precipitation that accompanies strong relativistic electron
precipitation by EMIC waves can be gained from a careful examination of the
energy spectrum characteristics of EMIC wave-driven precipitation in our
database. We do this next.
Figure 18(a) shows the average trapped electron flux (in black) and the
average precipitating-to-trapped flux ratio $\langle j_{prec}/j_{trap}\rangle$
(in red) as a function of energy for the dominant category of all highly
relativistic electron strong precipitation events (at $L<7$ with
$j_{prec}/j_{trap}>1/2$ at $E^{*}>1$ MeV). The flux ratio, denoting
precipitation efficiency, increases approximately as $\gamma^{2}$ (where
$\gamma=1+E/mc^{2}$ is the Lorentz factor) as kinetic energy $E$ increases
(blue dotted line). It starts from a low, approximately constant average value
$\langle j_{prec}/j_{trap}\rangle\sim 1/8$ at $E<200$ keV and rises to
$\langle j_{prec}/j_{trap}\rangle>1/2$ at $\gtrsim 1$ MeV. It attains a high,
approximately constant average value $\langle j_{prec}/j_{trap}\rangle\approx
0.85$ for $E>1.5$ MeV. The low values of $\langle j_{prec}/j_{trap}\rangle\sim
1/8$ at the low energies could be partly due to the simultaneous presence of
chorus wave-driven precipitation during a small fraction $\approx 6.5$% of
these events (as suggested by results in Figure 17(d)). Since such chorus
events should give a higher $j_{prec}/j_{trap}$ ratio at $E<100$ keV [Mourenas
et al., 2022], whereas all the other events without chorus waves should give a
slightly smaller $j_{prec}/j_{trap}$ in the same energy range, the resulting
average $\langle j_{prec}/j_{trap}\rangle$ can be nearly constant at $50-200$
keV. The flattening of the slope of $\langle j_{prec}/j_{trap}\rangle$ versus
energy at $E\geq 1.5$ MeV is consistent with the most common minimum cyclotron
resonance energy with intense EMIC waves being $E_{R,\min}>1$ MeV based on
wave statistics from satellites [Summers & Thorne, 2003; Kersten et al., 2014;
Cao et al., 2017a; Ross et al., 2021]. With some energies below and some above
$E_{R,\min}$ in the range $\sim 1.5$ MeV to $4$ MeV in our dataset, the
averaging of $\langle j_{prec}/j_{trap}\rangle$ over all events lowers its
value (to well below the peak ratio value of $\approx 1$) and flattens its
spectrum.
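The quoted flux-ratio values can be checked against the $\gamma^{2}$ scaling directly. A short Python sketch (the prefactor 0.065 and $\langle E^{*}\rangle\approx 1.45$ MeV are the fit values reported in Section 5.2.2; $mc^{2}=0.511$ MeV):

```python
MC2 = 0.511  # electron rest energy [MeV]

def gamma(E_mev):
    """Lorentz factor for kinetic energy E (gamma = 1 + E/mc^2)."""
    return 1.0 + E_mev / MC2

def flux_ratio_fit(E_mev, A=0.065):
    """Fitted <j_prec/j_trap> ~ A * gamma^2(E) below E*, with A = 0.065
    set by <E*> ~ 1.45 MeV (both values taken from the text)."""
    return A * gamma(E_mev) ** 2

for E in (0.2, 0.5, 1.0, 1.45):
    print(f"E = {E:.2f} MeV -> <j_prec/j_trap> ~ {flux_ratio_fit(E):.2f}")
```

The fit recovers the quoted landmarks: roughly $1/8$ at 200 keV, above $1/2$ at 1 MeV, and near unity at $E\sim\langle E^{*}\rangle$.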
Figure 18(b) shows the average ratio $\langle j_{prec}/j_{trap}\rangle$ as a
function of $E/E^{*}$. For each event, the energy $E$ has been normalized to
the event’s $E^{*}$ at maximum $j_{prec}/j_{trap}$, the approximate energy for
cyclotron resonance with the most intense EMIC waves. A least-squares fit to
the data at $E\geq E^{*}$ (red dashed line) shows that $\langle
j_{prec}/j_{trap}\rangle\sim(E/E^{*})^{-1}$. This agrees well with the
prediction of quasi-linear diffusion theory, which can be expressed as
$j_{prec}/j_{trap}\sim\sqrt{D_{\alpha\alpha}}\sim[\gamma(E)/\gamma(E^{*})]^{-5/4}\sim(E/E^{*})^{-1}$
for $E>E^{*}$ and $E^{*}\in[1,4]$ MeV, where $D_{\alpha\alpha}$ is the
electron bounce-averaged quasi-linear pitch-angle diffusion rate at the loss-
cone angle $\alpha_{LC}$ and, again, $\gamma$ is the Lorentz factor [Kennel &
Petschek, 1966; Ni et al., 2015a; Mourenas et al., 2016, 2021, 2022]. This
suggests that electron precipitation driven by EMIC waves is described well by
a quasi-linear diffusion treatment, which was developed under the assumption
of low amplitude waves exhibiting low coherence (random phases) leading to
slow phase space evolution compared to the wave period.
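The near-equality of the two scalings quoted above, $[\gamma(E)/\gamma(E^{*})]^{-5/4}$ and $(E/E^{*})^{-1}$, can be verified numerically (a Python sketch; $E^{*}=1.45$ MeV is used as a representative value, and the listed energies are illustrative):

```python
MC2 = 0.511  # electron rest energy [MeV]

def gamma(E):
    return 1.0 + E / MC2

# Compare the sqrt(D_aa) scaling [gamma(E)/gamma(E*)]^(-5/4) with the fitted
# power law (E/E*)^(-1) over the quoted relativistic range E > E*.
E_star = 1.45
for E in (1.5, 2.0, 3.0, 4.0):
    ql = (gamma(E) / gamma(E_star)) ** -1.25  # quasi-linear scaling
    pl = (E / E_star) ** -1.0                 # fitted power law
    print(f"E = {E:.1f} MeV: gamma-scaling {ql:.3f}, (E/E*)^-1 {pl:.3f}")
```

Over 1.5 to 4 MeV the two expressions agree to within a few percent, which is why the single power law $(E/E^{*})^{-1}$ fits the data so well.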
The precipitation ratio falloff with normalized energy at $E/E^{*}>1$ in
Figure 18(b) is consistent with our assertion that the energy $E^{*}$ likely
corresponds to the energy for cyclotron resonance at the frequency of the most
intense EMIC waves. This is because, we contend, at lower energies,
$E/E^{*}<1$, the observed rapid decrease of $\langle j_{prec}/j_{trap}\rangle$
with progressively decreasing energy, down to $E/E^{*}\sim 0.1$, likely
corresponds to cyclotron resonance with less intense EMIC waves at
progressively higher frequencies [Zhang et al., 2016c; Ross et al., 2021], or
to electron nonresonant scattering by the steep edge of some EMIC wave packets
[Chen et al., 2016; An et al., 2022], both of which result in a reduced
efficiency of scattering compared to that at peak wave power. Thus, viewed as
a function of decreasing energy starting from the highest energies, the
ratio first increases steeply (above $E^{*}$) due to the increase in
the diffusion rate, and then decreases just as quickly (below
$E^{*}$) due to the fast decrease in wave power; the turning point in this
behavior is consistent with $E^{*}$, the resonance energy at peak wave
power.
To check whether the decrease of $\langle j_{prec}/j_{trap}\rangle$ with
decreasing energy below $E^{*}$ is actually consistent with EMIC wave power
observations, it is useful to fit that ratio below $E^{*}$ (blue dashed line
in Figure 18(b)). This fit quantifies the ratio’s observed energy dependence
at $E<E^{*}$ as $\langle
j_{prec}/j_{trap}\rangle\sim\gamma^{2}(E)/\gamma^{2}(E^{*})\sim
0.065\,\gamma^{2}(E)$, where we have replaced $E^{*}$ with its mean value
$\langle E^{*}\rangle\approx 1.45$ MeV in our dataset ($E^{*}$ varies between
$\sim 1$ MeV and $\sim 3$ MeV). With this replacement, the best fit in Figure
18(b) is identical to the best fit in Figure 18(a). The fit value for $\langle
j_{prec}/j_{trap}\rangle\sim 1$ actually occurs at an energy $E\sim E^{*}$.
For a peak wave power nearly flat over a significant range of frequencies,
$E^{*}$ represents the lowest energy at which cyclotron resonance with the
highest power waves is achieved, because the diffusion rate
$D_{\alpha\alpha}(E)$ that characterizes the behavior on the high-energy side
decreases toward higher energy even for a flat wave power spectrum
$B_{w}^{2}(f)$ [Mourenas et al., 2016]. In that case, $E^{*}$ corresponds to
the highest frequency over the nearly flat region of peak wave power,
henceforth simply denoted $f_{peak}$, for which the diffusion rate
$D_{\alpha\alpha}(E_{R,\min})\sim B_{w}^{2}(f)\cdot f/f_{cp}$ is maximized.
Next, we investigate the simplest explanation for the $\langle
j_{prec}/j_{trap}\rangle$ dependence on $E/E^{*}$, at $E<E^{*}$: cyclotron
resonance with progressively less intense EMIC waves at higher frequencies.
This is done in two steps: First, assuming that this precipitation is due to
quasi-parallel hydrogen band EMIC waves, we use quasi-linear theory to infer
from the above fit to the ELFIN measurements the statistical EMIC wave-power
ratio $B_{w}^{2}(f)/B_{w}^{2}(f_{peak})$. Second, we compare this inferred
wave power ratio with published statistics of EMIC wave power $B_{w}^{2}(f)$
directly observed by the Van Allen Probes near the equator in 2012-2016 [Zhang
et al., 2016c]. As a reminder, nonlinear resonant scattering by an ensemble of
independent, short-range, and large amplitude wave packets that may partake in
the measured average wave spectral shape should produce average $\langle
j_{prec}/j_{trap}\rangle$ energy spectra similar to that caused by classical
diffusion by waves with the average wave power. In other words, the diffusive
formalism, even though borrowed from quasi-linear theory, also applies for
such nonlinear interactions if they participate statistically in the process.
Yet, since we focus now on the precipitation of sub-MeV electrons, at
$E<E^{*}$, which can reach cyclotron resonance only with high frequency waves,
at $f>f_{peak}$, of much lower amplitudes than at peak wave power, the
contribution from nonlinear interactions is likely much smaller than in the
case of multi-MeV electron precipitation.
Towards the first step, we start from the full quasi-linear expressions for
the precipitating to trapped flux ratio at ELFIN
$j_{prec}(\alpha)/j_{trap}(\alpha_{trap})$ ([Kennel & Petschek, 1966] and [Li
et al., 2013]). Pitch angles are referenced to the equator and $\alpha_{trap}$
is such that $\ln(\sin\alpha_{trap}/\sin\alpha_{LC})\sim 1/20$ (where
$\alpha_{LC}$ is the equatorial loss-cone angle). Averaging $j_{prec}$ over
equatorial pitch-angles $\alpha<\alpha_{LC}$, we get
$\frac{j_{prec}}{j_{trap}}\simeq\int_{0}^{1}\frac{I_{0}(z_{0}x)/I_{0}(z_{0})}{1+(z_{0}/20)I_{1}(z_{0})/I_{0}(z_{0})}dx,$
where $z_{0}=2\alpha_{LC}/\sqrt{D_{\alpha\alpha}\tau_{B}}$ [Li et al., 2013],
and $\tau_{B}\sim\gamma/(\gamma^{2}-1)^{1/2}$ is the bounce period [Schulz &
Lanzerotti, 1974]. This integral can be approximated as:
$j_{prec}/j_{trap}=0.9/z_{0}$ with less than 20% error for $z_{0}\in[0.9,8]$,
corresponding to $j_{prec}/j_{trap}\in[0.11,1]$. At low energies, $E<E^{*}$,
and for quasi-parallel left-hand-polarized hydrogen band EMIC waves [Kersten
et al., 2014] with a monotonically decreasing power $B_{w}^{2}(f)$ toward
higher frequencies $f>f_{peak}$ in Van Allen Probe statistics [Zhang et al.,
2016c], the most efficient wave-driven pitch-angle diffusion of electrons near
the loss-cone through cyclotron resonance should occur for $E(f)\sim
E_{R,\min}(f)$ at similar latitudes $\lambda_{R}$ close to the equator for all
$E$ in that range [Mourenas et al., 2016]. Bounce-averaging [Mourenas et al.,
2016] the full expression of the local pitch-angle diffusion rate in the cold
plasma approximation [Summers & Thorne, 2003; Su et al., 2012] then gives
$D_{\alpha\alpha}\approx\frac{B_{w}^{2}(f_{cp}/f)(1-f/f_{cp})^{3/2}}{(\gamma^{2}-1)^{1/2}\gamma\,(1-f/2f_{cp})}$
at an equatorial pitch-angle $\alpha\simeq\alpha_{LC}$.
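The $0.9/z_{0}$ approximation quoted above can be checked numerically. A self-contained Python sketch (series expansions for $I_{0}$, $I_{1}$ and a simple trapezoidal quadrature; the term and step counts are accuracy knobs, not values from the text):

```python
import math

def bessel_i(n, x, terms=40):
    """Modified Bessel function I_n(x) via its power series (n = 0 or 1)."""
    return sum((x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def flux_ratio(z0, steps=2000):
    """Loss-cone-averaged j_prec/j_trap of the Kennel-Petschek / Li form:
    integral_0^1 [I0(z0 x)/I0(z0)] / [1 + (z0/20) I1(z0)/I0(z0)] dx,
    evaluated with the trapezoidal rule."""
    i0, i1 = bessel_i(0, z0), bessel_i(1, z0)
    denom = i0 * (1.0 + (z0 / 20.0) * i1 / i0)
    h = 1.0 / steps
    ys = [bessel_i(0, z0 * k * h) for k in range(steps + 1)]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return integral / denom

for z0 in (0.9, 2.0, 4.0, 8.0):
    print(f"z0 = {z0}: integral {flux_ratio(z0):.3f}, 0.9/z0 {0.9 / z0:.3f}")
```

Across $z_{0}\in[0.9,8]$ the closed form $0.9/z_{0}$ tracks the full integral to within roughly 20%, confirming it as a workable approximation over the quoted range $j_{prec}/j_{trap}\in[0.11,1]$.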
Combining the theoretical scaling laws for $j_{prec}/j_{trap}$ and
$D_{\alpha\alpha}$ and the fit to ELFIN observations
$j_{prec}/j_{trap}\sim\gamma^{2}$ in Figure 18, gives us the estimate of the
wave-power spectrum consistent with these observations:
$\frac{B_{w}^{2}(E)}{B_{w}^{2}(E^{*})}=\frac{(1+2E)^{4}E(E+1)}{(1+2E^{*})^{4}E^{*}(E^{*}+1)}\cdot\frac{f(1-f_{peak}/f_{cp})^{3/2}(1-f/2f_{cp})}{f_{peak}(1-f/f_{cp})^{3/2}(1-f_{peak}/2f_{cp})}$
To obtain the mapping between energy and frequency, we utilize the cyclotron
resonance condition coupled to the cold plasma dispersion relation. Its
expression for $E<E^{*}$ and $f>f_{peak}$ is combined with its expression for
$E^{*}$ and $f_{peak}$, the (highest-)frequency of peak EMIC wave-power
($f_{peak}/f_{cp}\sim 0.37-0.41$, as seen in statistical wave observations
from the Van Allen Probes [Zhang et al., 2016c]). We also assume an ion
composition with $>$94% protons, as appropriate for when hydrogen band waves
are present [Kersten et al., 2014; Ross et al., 2022]. This yields a second
order equation for $f/f_{cp}$, with solution $f/f_{cp}\simeq
2/(1+\sqrt{1+4C})$, where
$C=(f_{cp}/f_{peak})(f_{cp}/f_{peak}-1)E(1+E)/\left(E^{*}(1+E^{*})\right)$.
The resonant frequency $f(E)/f_{cp}$, expressed as a function of the resonance
energy in the above equation, allows us, by substitution in the previous
equation, to obtain the EMIC wave-power ratio
$B_{w}^{2}(f)/B_{w}^{2}(f_{peak})=B_{w}^{2}(E)/B_{w}^{2}(E^{*})$ inferred from
ELFIN, with an error smaller than 40% for $j_{prec}/j_{trap}\in[0.11,1]$. Note
that the above normalizations of $j_{prec}/j_{trap}$ to its level at $E^{*}$
and of EMIC wave power to its level at $f_{peak}/f_{cp}$ allow us to eliminate
the dependencies of $j_{prec}/j_{trap}$ and $D_{\alpha\alpha}$ on
$f_{pe}/f_{ce}$ and on the non-normalized EMIC wave power at the equator
conjugate to ELFIN, which are unknown. Here we only had to assume that the
ensemble of $j_{prec}/j_{trap}$ measurements from ELFIN is statistically
consistent with an ensemble of EMIC wave power observations on another,
equatorial, platform in order to convert the energy spectrum of the
precipitation to a frequency spectrum of wave power that should be consistent
with that precipitation.
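The energy-to-frequency mapping and the inferred power ratio above can be evaluated directly. A Python sketch (using $f_{peak}/f_{cp}=0.37$ and $E^{*}=1.45$ MeV, the high-density-case values used in the text; energies are in MeV with $mc^{2}\approx 0.5$ MeV, so that $\gamma=1+2E$):

```python
import math

F_PEAK = 0.37   # f_peak/f_cp for the high-density case
E_STAR = 1.45   # mean E* [MeV] in the dataset

def resonant_freq(E, E_star=E_STAR, f_peak=F_PEAK):
    """f/f_cp from f/f_cp = 2/(1 + sqrt(1 + 4C)),
    with C = (f_cp/f_peak)(f_cp/f_peak - 1) E(1+E) / (E*(1+E*))."""
    r = 1.0 / f_peak
    C = r * (r - 1.0) * E * (1.0 + E) / (E_star * (1.0 + E_star))
    return 2.0 / (1.0 + math.sqrt(1.0 + 4.0 * C))

def wave_power_ratio(E, E_star=E_STAR, f_peak=F_PEAK):
    """B_w^2(f)/B_w^2(f_peak) inferred from the j_prec/j_trap ~ gamma^2 fit."""
    f = resonant_freq(E, E_star, f_peak)
    kin = ((1 + 2 * E) ** 4 * E * (E + 1)) / ((1 + 2 * E_star) ** 4 * E_star * (E_star + 1))
    spec = (f * (1 - f_peak) ** 1.5 * (1 - f / 2)) / (f_peak * (1 - f) ** 1.5 * (1 - f_peak / 2))
    return kin * spec

print(resonant_freq(E_STAR))  # recovers f_peak/f_cp at E = E*
print(resonant_freq(0.2))     # ~0.8, as quoted for 200 keV electrons
print(wave_power_ratio(0.2))  # inferred power drop at that frequency
```

The mapping recovers $f/f_{cp}\sim 0.8$ for 200 keV electrons, and the inferred power there is about two orders of magnitude below the peak, consistent with the "two decades in power" agreement discussed next.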
We proceed, now, to the second step in our consistency check between the ELFIN
observations of electron precipitation and independently collected EMIC wave-
power spectra, in order to assess the validity of quasi-linear diffusion in
describing the precipitation down to low energies. Figure 19(a) compares the
wave power ratio $B_{w}^{2}(f)/B_{w}^{2}(f_{peak})$ inferred above from ELFIN
statistics of $j_{prec}/j_{trap}$ (solid blue curve) with the measured
hydrogen band EMIC wave power ratio from Van Allen Probes statistics when
$f_{pe}/f_{ce}>15$ [Zhang et al., 2016c] in different MLT sectors (black,
green, magenta, and red curves). In the high density plasmaspheric plume or at
the plasmapause, as implied by the $f_{pe}/f_{ce}$ ratio criterion used in
this subset of wave data, the cyclotron resonance condition can indeed be
satisfied for the optimum parameters $E^{*}\sim 1.45$ MeV in Figures 18(a,b)
and $f_{peak}/f_{cp}\sim 0.37$ from the above wave statistics. Figure 19(c)
indicates that $200$ keV electrons are then in resonance with waves at
$f/f_{cp}\sim 0.8$ near the equator. We see in Figure 19(a) that the observed
statistical EMIC wave power ratios in the 12-22 MLT sector (magenta and red
curves) agree quite well with the power ratios inferred from ELFIN data (blue
curve with dashed lines indicating uncertainties) over two decades in power,
in the frequency range $f/f_{cp}\sim 0.37$ up to $f/f_{cp}\sim 0.8-0.95$.
Therefore, in high density regions (those with $f_{pe}/f_{ce}>15$), the
observed EMIC wave power ratio is consistent with that inferred from ELFIN
precipitation of $j_{prec}/j_{trap}$ from $\sim 1.45$ MeV down to $200$ keV,
sometimes even down to $100$ keV.
In regions of lower plasma density, with $5<f_{pe}/f_{ce}<15$, EMIC wave
statistics from the Van Allen Probes show that $f_{peak}/f_{cp}\sim 0.41$
[Zhang et al., 2016c], corresponding to a higher $E^{*}\sim 2.5$ MeV for
$f_{pe}/f_{ce}\sim 13-15$. For these parameters, Figure 19(b) shows that in
the 12-22 MLT sector the observed EMIC wave power ratios in the range
$f/f_{cp}\sim 0.41$ to $f/f_{cp}\sim 0.90-0.95$ [Zhang et al., 2016c] at Van
Allen Probes (magenta and red curves) are consistent with those inferred from
ELFIN statistics (blue curves). In the above $f_{pe}/f_{ce}$ range, the
frequencies where agreement prevails correspond to $j_{prec}/j_{trap}$ from
$\sim 2.5$ MeV down to $\sim 100-200$ keV.
It is evident from Figures 19(a,b) that the inferred and observed EMIC wave
power ratios agree well, and up to high frequencies (low resonance energies),
only in the 12-22 MLT sector, i.e., near dusk. This is quite consistent with
the fact that most of the EMIC wave-driven events at $L<7$ in our database
occurred near dusk (Figure 17(b)). It is also consistent with the predominance
of similar events at dusk in Firebird-II statistics [Capannolo et al., 2021].
In the 0-3 MLT sector (black lines in Figures 19(a,b)) the observed EMIC wave
power is consistent with EMIC wave-driven electron precipitation down to $\sim
300$ keV only when $f_{pe}/f_{ce}>15$. Few events exist in this sector in our
database. In the 4-12 MLT sector (green lines) the observed EMIC wave power is
too weak to drive significant electron precipitation below $\sim 1$ MeV,
consistent with the absence of precipitation events in that sector in our
database. (Note: Since EMIC wave power spectra depend weakly on $L$ in Van
Allen Probes statistics [Zhang et al., 2016c; Ross et al., 2021], the present
results should hold for $L\in[4,7]$.)
Therefore, we see that the observed, moderately efficient electron
precipitation at energies as low as $\sim 200-300$ keV could simply be due to
quasi-linear scattering by moderate intensity EMIC waves at high frequencies,
up to $f/f_{cp}\sim 0.8-0.9$, provided that such waves are present during a
majority of these events with a similar average intensity as in statistical
empirical wave models. Such waves may still be sufficiently below the proton
gyrofrequency to evade hot plasma effects, at least to first order [Chen et
al., 2011; Ross et al., 2021]. Assuming a maximum wavenumber that can be
attained in the presence of hot plasma effects given by $kc/\Omega_{pi}\sim 2$
(corresponding to a typical cold ion temperature of $\sim 10$ eV in a
plasmaspheric plume or just within the plasmasphere) for left-hand-polarized
hydrogen band waves in a plasma with more than 94% protons, the minimum
electron energy for cyclotron resonance can indeed be as low as $\sim 150$ keV
to $350$ keV for $f_{pe}/f_{ce}\sim 15$ to $30$, when a sufficient transient
hot $H^{+}$ temperature anisotropy $A>2.3$ generates such high-frequency waves
[Chen et al., 2011, 2013]. A typical precipitation event of this kind has been
analyzed in Section 4.2.
Another important point is that the measured ratios $j_{prec}/j_{trap}\sim
1/8-1$ from low energy, $E\ll E^{*}$, to $E^{*}$ actually correspond to
quasi-linear diffusion close to the strong diffusion regime with
characteristic time scales for partial or full loss-cone filling on the order
of a quarter of a bounce period $\tau_{B}/4\sim 0.25$ s [Kennel & Petschek,
1966]. Such fast diffusive time scales are therefore still consistent with the
time scales of the observed electron precipitation at ELFIN. Thus, even in the
quasi-linear regime it is possible for temporal or spatial wave power
variations encountered by drifting and bouncing electrons to explain the
short-lived (sub-spin), bursty nature of the precipitation at ELFIN.
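The quarter-bounce-period timescale invoked above is easy to check against a common dipole-field bounce-period approximation, $\tau_{b}\approx(4LR_{E}/v)\,(1.30-0.56\sin\alpha_{eq})$; the electron energy, $L$-shell, and equatorial pitch angle used below are illustrative assumptions, not values taken from the observed events.

```python
import math

R_EARTH_M = 6.371e6    # Earth radius [m]
C_M_S = 2.998e8        # speed of light [m/s]
ME_C2_KEV = 511.0      # electron rest energy [keV]

def bounce_period_s(L, energy_kev, alpha_eq_deg):
    """Dipole bounce period with the common approximation
    T(alpha_eq) ~ 1.30 - 0.56*sin(alpha_eq)."""
    gamma = 1.0 + energy_kev / ME_C2_KEV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    t_alpha = 1.30 - 0.56 * math.sin(math.radians(alpha_eq_deg))
    return 4.0 * L * R_EARTH_M / (beta * C_M_S) * t_alpha

# Illustrative: 300 keV electron near the loss cone at L = 5
tau_b = bounce_period_s(L=5.0, energy_kev=300.0, alpha_eq_deg=10.0)
print(f"tau_B ~ {tau_b:.2f} s, tau_B/4 ~ {tau_b / 4.0:.2f} s")
```

This yields $\tau_{B}/4$ of order $0.1$-$0.3$ s for outer-belt $L$-shells, consistent with the $\sim 0.25$ s loss-cone filling timescale quoted above.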
The present results agree with test particle hybrid simulations suggesting
that low-amplitude, high-frequency hydrogen band EMIC waves could be the main
cause of 0.55 MeV electron precipitation [Denton et al., 2019]. Several
previous studies have also provided hints of a prevalence of hydrogen band
waves in driving sub-MeV relativistic electron precipitation [Chen et al.,
2013; Qin et al., 2018; Zhang et al., 2021]. Even when such high-frequency
waves are not evident in apparently conjugate measurements at the equator,
this may be due to mapping uncertainties: the waves could still be present
$\gtrsim$ 0.5 Earth radii away from where the ionospheric electron
precipitation measurements actually map. Further analysis of EMIC wave data
and sub-MeV precipitation with ELFIN’s twin spacecraft, which provide higher
spatio-temporal resolution of these phenomena, will allow these points to be
studied in greater detail. In addition, one cannot totally rule out a possible role of
helium band EMIC waves in driving some sub-MeV electron precipitation, in
spite of the expected strong cyclotron damping very close to the helium
gyrofrequency [Chen et al., 2013; Cao et al., 2017a]. A study similar to that
in Figure 19, but focusing on helium band waves, could be performed to check this
point. This would require extensive numerical calculations including hot
plasma effects in the dispersion relation [Chen et al., 2013].
As a final note, we comment on the global contribution of EMIC waves to sub-
MeV electron scattering. Although bursts of EMIC waves can drive relatively
intense sub-MeV electron precipitation in a narrow MLT sector, their very weak
time- and MLT-averaged power at high frequencies should prevent them from
contributing significantly to the global loss rates of sub-MeV electrons up to
high equatorial pitch-angles $\alpha\sim 90^{\circ}$, above and beyond what
can already be done by typical chorus waves in the dawnside trough [Mourenas
et al., 2016; Boynton et al., 2017; Agapitov et al., 2018; Drozdov et al.,
2019, 2020; Miyoshi et al., 2020; Ross et al., 2021; Zhang et al., 2022]. This
is because at a fixed energy $E\sim E^{*}$, cyclotron resonance between
electrons of equatorial pitch-angle $\alpha$ and hydrogen band EMIC waves at a
frequency $f$ indeed corresponds to a scaling
$\cos^{2}\alpha/\cos^{2}\alpha_{LC}\sim(f_{cp}/f-1)/(f_{cp}/f_{peak}-1)f_{peak}/f$.
This implies that electrons of higher $\alpha$ reach resonance with waves of
higher frequency $f>f_{peak}$ (e.g. see first standalone equation in Mourenas
et al. [2016]). Therefore, the steep decrease of EMIC wave power from
$f/f_{cp}\sim 0.4$ to $\sim 0.9$, by two orders of magnitude in Figure 19,
implies a rapid decrease of the quasi-linear pitch angle diffusion rate
$D_{\alpha\alpha}(\alpha)$ away from the loss-cone [Summers & Thorne, 2003;
Mourenas et al., 2016]. This is consistent with pitch angle bite-out
signatures (stronger electron loss near the loss-cone than at higher pitch
angles) produced by the most intense EMIC waves [Usanova et al., 2014; Adair
et al., 2022] at energies of $\sim 1-4$ MeV. Such signatures demonstrate the
inefficacy of EMIC waves in depleting the relativistic electron flux at
equatorial pitch angles far from the loss cone, absent intense chorus waves
[Mourenas et al., 2016; Zhang et al., 2017]. The same arguments hold at sub-
MeV energies, except that cyclotron resonant EMIC waves have higher
frequencies than at $\sim 1-4$ MeV and, therefore, much lower amplitudes,
making them even less efficient than typical intense chorus waves in the
dawnside trough in driving the precipitation of electrons near the loss-cone.
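The pitch-angle-to-frequency mapping above can be illustrated by treating the quoted scaling as an equality and solving it numerically for $x=f/f_{cp}$; the loss-cone angle and $f_{peak}/f_{cp}=0.4$ used here are illustrative assumptions, and the function name is ours.

```python
import math

def resonant_freq_ratio(alpha_deg, alpha_lc_deg=5.0, f_peak_over_fcp=0.4):
    """Solve cos^2(a)/cos^2(a_LC) = (fcp/f - 1)/(fcp/fpeak - 1) * fpeak/f
    for x = f/fcp by bisection; the right-hand side decreases from 1
    toward 0 as x runs from f_peak_over_fcp toward 1."""
    target = (math.cos(math.radians(alpha_deg))**2
              / math.cos(math.radians(alpha_lc_deg))**2)
    xp = f_peak_over_fcp

    def g(x):
        return (1.0 / x - 1.0) / (1.0 / xp - 1.0) * (xp / x)

    lo, hi = xp, 1.0 - 1e-9
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) > target:   # g decreasing: too low a frequency
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for a in (5.0, 30.0, 60.0, 80.0):
    print(f"alpha = {a:>4.0f} deg -> f/fcp ~ {resonant_freq_ratio(a):.2f}")
```

The monotonic rise of $f/f_{cp}$ with equatorial pitch angle makes concrete why the steep fall-off of wave power toward $f/f_{cp}\sim 0.9$ implies weak diffusion away from the loss cone.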
## 6 Summary and discussion
Relativistic electrons in the near-Earth environment are an important
contributor to space weather and may play a significant role in charged
particle energy input to the atmosphere from space. The question of how such
electrons, after being accelerated in near-Earth space by waves or transported
into it by, say, radial diffusion, are lost remains open. This question is
particularly vexing for such electron loss from the outer radiation belt
($L<7$), which has the highest fluxes of such electrons and is thus most
important for space weather. Aside from magnetopause shadowing (the result of
magnetospheric compressions changing electron drift paths from trapped to
open) and field-line curvature scattering at the isotropy boundary, both of
which affect $L$-shells beyond the outer radiation belt, and whistler-mode
wave-driven precipitation, which mostly affects sub-MeV electrons, most
attention on relativistic electron loss has focused on these electrons’
interaction with EMIC waves. This is because EMIC waves can
resonate with electrons of highly relativistic energies $>1$ MeV under high-
density, low magnetic field plasma conditions that can be realized in the
outer edge of the plasmasphere, well inside the outer radiation belt at active
times. As they can act on trapped, outer radiation belt electrons, such waves
can significantly change the outer radiation belt fluxes and are therefore an
important process to include in space weather models. The recent launch of the
ELFIN mission, with a goal to determine whether EMIC waves are predominantly
responsible for this loss or whether other wave processes are implicated,
provided a unique dataset of 50 - 5000 keV electrons obtained on a polar, low-
altitude orbit, with which we can address this question comprehensively and
for the first time. In this paper, we primarily focused on the question of
whether EMIC waves can be definitively shown to be responsible for the
observed scattering of relativistic electrons, and whether whistler-mode
chorus waves (the other candidate for pitch-angle scattering of sub-MeV and up
to MeV electrons into the loss cone) simultaneously might be implicated.
After a short review of EMIC wave generation and its interaction with
energetic electrons in Section 2, we presented in Section 3 the first
comprehensive examples of typical EMIC wave-driven precipitation of such
electrons from ELFIN. The high energy and pitch-angle resolution of ELFIN
allowed us to identify and quantitatively investigate the energy range of such
precipitation and study its properties using the precipitating-to-trapped flux
ratio, $R$, measured by a single detector on a spinning platform, thus
avoiding problems with multiple-detector inter-calibration. This ratio,
increasing with energy and peaking at energies $>0.5$ MeV, is a tell-tale
signature of EMIC wave-driven precipitation. In Section 3.1 we showed how this
spectral signature is differentiated quite well from that of whistler-mode
chorus-driven precipitation, whose ratio decreases with energy from as low as
50 keV (the lower energy limit of the ELFIN detector) to $>0.5$ MeV.
(Both spectral types can be observed on occasion simultaneously, as expected,
since whistler-mode chorus and EMIC waves can co-exist – in that case, the two
spectral shapes can still be well separated.)
Next, in Section 3.2 we used a case study of typical EMIC wave-driven
emissions, accompanied by equatorial measurements of EMIC waves on THEMIS to
demonstrate that the bursty nature of the precipitation at ELFIN (lasting only
7-14 s, or 2.5-5 ELFIN spin periods) is consistent with the spatial extent of
the EMIC wave region at the equator (lasting 6 min). In this case, the
equatorial extent of the EMIC region was $\Delta L\approx$0.23 which is
consistent with published statistical averages of $\Delta L\approx$0.5 [Blum
et al., 2016, 2017]. Thus, the burstiness of EMIC wave-driven precipitation at
polar orbiting, low-altitude spacecraft like ELFIN is likely typical. We have
also shown that the burstiness can extend to sub-spin time-scales, 1-5 spin
sectors, at least in part due to the spatial variability of the wave-power at
the equator. This can cause the precipitating-to-trapped flux ratio to be, on
occasion, aliased and exceed unity. While there are reported cases when this
flux ratio on ELFIN exceeds unity due to nonlinear scattering, an assertion
supported by fortuitous measurements of the EMIC wave amplitudes at conjugate
observatories [Grach et al., 2022], conclusions on the preponderance and
significance of nonlinear effects cannot be drawn from the ratio alone without
further statistical or detailed case study analyses, both left for the future.
However, we have argued that nonlinear effects (from presumably mainly short
wave packets – e.g., see various examples of such short packets in [Usanova et
al., 2010] and [An et al., 2022]) are statistically incorporated into the
spectral shapes of precipitation at ELFIN, just as these high-amplitude EMIC
wave packets are incorporated into the statistical averages of equatorial wave
spectra, so that a direct comparison using a diffusive formalism should be
possible.
In Section 3.3 we showed that consecutive ELFIN passes over the same MLT
region can reveal the EMIC wave-driven precipitation’s spatio-temporal
evolution in location, extent (in $L$ and $\Delta L$), and intensity. This
allows us to study the EMIC wave-generation region in a geophysical context
(e.g., during substorms and storms) as well as the potential of the waves for
reducing the outer radiation belt flux.
To evaluate the consistency of the observed energy of moderate and strong
precipitation at ELFIN (ratios $R\in$[0.3,0.5] and $R>$0.5, respectively) with
the resonance energy expected for EMIC wave-driven precipitation, we utilized
in Section 4 conjunctions with ground based or equatorial assets. Using such
data-informed estimates of $f_{pe}/f_{ce}$ and of $f_{\max}/f_{ci}$, with only
occasional support from models, we examined the resonance energy that would be
consistent with these estimates. We found it to be in agreement with the
observed energies of moderate and strong precipitation seen at ELFIN. This
demonstrates that the observed precipitation as identified by the ratio $R$ is
indeed consistent with theoretical expectation from resonant interactions.
We proceeded, in Section 5, to study statistically an ensemble of EMIC wave-
driven events observed by ELFIN. These were identified in two years of ELFIN
data (2019-2021) as enhancements in the precipitating-to-trapped flux ratio
$R$, peaking at $>0.5$ MeV. The average event duration on ELFIN, 17 s (6 spins),
or $\Delta L\sim$ 0.56, is consistent with published reports of the typical
radial extent of EMIC waves in the magnetosphere, $\Delta L\sim$ 0.5 [Blum et
al., 2016, 2017], validating our assertion that the bursty nature of the
precipitation is due to the spatial localization of the EMIC wave interaction
region in the magnetosphere. The most populous category of events is those
occurring at pre-midnight and dusk, at $L\sim$5-7. A second class of events
was found at midnight, post-midnight and dawn at $L\sim$8-12. These two
categories are roughly co-located with the two main populations of EMIC waves
in the magnetosphere [Min et al., 2012], further solidifying our assertion
that our observations of relativistic electron precipitation are caused by
EMIC waves.
The peak precipitating-to-trapped flux ratio exceeds one for many of the
events studied. Considering that such a ratio, if scrutinized and confirmed,
could signify nonlinear interactions, the only means of exceeding the quasi-
linear strong diffusion limit (e.g., Kubota et al. [2015]; Grach & Demekhov
[2020]), we examined further how it may arise. We found that on many occasions
extreme burstiness of the precipitation, lasting 1-5 spin-phase sectors, was
evident in the data. (Each ELFIN spin period has 16 sectors and the loss
cone most often contains 6 such sectors.) Examination of EMIC waves at THEMIS
E on one event studied revealed that this burstiness is, in fact, to be
expected at ELFIN due to the spatial localization of the EMIC wave coherence
and amplitude arising from interference of wave packets in the presence of
extreme density gradients. This burstiness in precipitation can result in
temporal aliasing of ELFIN’s precipitating-to-trapped flux ratio, $R$,
leading to values that can exceed 1 or fall well below 1. Careful analysis of
individual events together with modeling of measured wave spectra from
ancillary datasets can, at times, confirm the nonlinear nature of the
scattering (Grach et al. [2021]). Either future extensive case-by-case
analysis or careful statistical analysis to identify and remove events subject
to aliasing is required to establish the preponderance of nonlinear effects.
However, we argued that by analogy with similar effects for whistler-mode
chorus waves, it is still quite likely that precipitation due to nonlinear
EMIC wave scattering can conform to a statistical analysis using a diffusive
treatment, because precipitation bursts contribute to the average flux (and
flux ratio) at a level commensurate to the contribution of nonlinear wave
power bursts in statistical averages of equatorial wave power. We proceeded
with such a statistical analysis even though we recognize that in small or
large part the precipitation and the associated wave amplitudes considered may
incorporate nonlinear effects.
Using this database we have shown (Section 5.1) that:
* •
The typical energy of the peak precipitating-to-trapped flux ratio, $E^{*}$, a
measured proxy for the resonance energy at the frequency of peak wave power,
$E_{peak}$, is in the range $\sim 1-3$ MeV. This is consistent with
expectation for EMIC wave-electron resonant interactions, based on the cold
plasma dispersion relation and prior statistics of wave-power near peak wave-
power, $f_{peak}$, at the equator.
* •
The above measured energy $E^{*}$ decreases with $L$-shell as $\sim L^{-0.5\pm
0.1}$. This dependence is also in good agreement with theoretical
expectations.
* •
The typical energy of the half-peak of the precipitating-to-trapped flux
ratio, $E^{*}_{\min}$, a proxy for the minimum resonance energy $E_{R,\min}$
corresponding to significant wave-driven electron scattering toward the loss-
cone, is in the range $\sim 0.5-1.5$ MeV and falls off with $L$-shell at the
same rate as $E^{*}$. This too is consistent with theoretical expectations
based on equatorial wave-power estimates at the maximum frequency below the
ion gyrofrequency where wave power remains significant. This shows that
sub-MeV electron precipitation accompanying multi-MeV electron precipitation
is also likely driven by EMIC waves.
We next examined (Section 5.2) the properties of the most intense ($R>$1/2),
highly relativistic (having $E^{*}>1$ MeV), EMIC wave-driven electron
precipitation events, of the most populous category in our database (L$<$7).
We consider these to be the most efficient EMIC wave-driven events in our
database. Such events are highly correlated with geomagnetic activity and, as
expected from the statistics of all events, are predominantly seen near dusk
(MLT $\sim$ 18), with occurrence rates dropping precipitously (though
remaining finite) at noon and post-midnight. We found that:
* •
About 35% of the time, the most efficient EMIC wave-driven events are still
associated with moderate or strong ($R>$1/3) precipitation of electrons with
energies as low as 100-300 keV. Only 6.5% of the time do such events have a
definitively negative slope of $R$ versus $E$ in that low energy range. For
most events (at least 78% and potentially as many as 93.5%) that slope is
positive. This suggests that simultaneous energetic electron scattering by
whistler-mode chorus and EMIC waves (as was presented in Figure 1, right
column, and discussed in Section 3.1) are a minority, occurring for only one
in every 16 of the highly relativistic, strong EMIC wave-driven events.
* •
The average $R$ versus $E$ spectrum of the most efficient EMIC wave-driven
events increases steeply with energy (roughly as the square of the Lorentz
factor, $R\sim\gamma^{2}$) up to its peak value $R\sim 1$ at $E\sim E^{*}$
(whose average value is $\langle E^{*}\rangle\sim 1.45$ MeV). Examined versus
energy normalized to the peak-precipitation energy, $E/E^{*}$, $R$ decays
away from unity on both sides, approximately as $R\sim(E/E^{*})^{-1}$ above
and $R\sim(E/E^{*})^{+1}$ below $E/E^{*}=1$. This decay is in agreement with
quasi-linear diffusion theory for resonant scattering by a typical equatorial
EMIC wave power spectrum that falls off away from its peak wave power, which
is consistent with observations.
* •
Based on both points above, the majority of the weaker precipitation below $1$
MeV is thus probably due to electron resonant interaction with EMIC waves at
frequencies $f>f_{peak}$, i.e., above the peak power frequency of hydrogen
band EMIC waves (as was also inferred by the $E^{*}_{\min}$ decrease as a
function of L-shell in Section 5.1 and summarized above). Even though this
wave power is much smaller than at $f_{peak}$, it is still finite [Denton et
al., 2019; Zhang et al., 2016c, 2021]. But because of the significantly higher
trapped electron flux at lower energies, even waves of such small power may
still produce finite precipitating fluxes at $\sim 200-500$ keV, as observed.
* •
Very low energy ($\sim 100$ keV) electron precipitation at times of strong,
highly relativistic precipitation, could still be predominantly due to
interactions with very low-amplitude high-frequency hydrogen band EMIC waves,
excited very near the equatorial gyrofrequency, $f_{cp}$, by $10-100$ eV
anisotropic ion populations [Teng et al., 2019; Zhang et al., 2021]. Such
waves can more easily experience strong cyclotron damping by low energy ions,
likely causing them to be transient and rendering their equatorial
observations sparse. Since $\sim 100$ keV electrons cannot reach cyclotron
resonance with the most frequent hydrogen band EMIC waves [Zhang et al.,
2016c], other precipitation mechanisms should also be examined and quantified.
Those include nonresonant scattering by EMIC wave-packets with sharp edges
[Chen et al., 2016; An et al., 2022], which can scatter electrons well below
the minimum resonance energy while still exhibiting a rising $R$ versus $E$
spectrum. However, simultaneous whistler-mode wave-driven precipitation,
which is very efficient at such low energies, clearly contributes to this
precipitation (as discussed above), even though it is likely a minority
contribution, and should also be further considered [Artemyev et al., 2016;
Ma et al., 2016b].
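The average spectral shape summarized in the bullets above can be captured by a toy parameterization using the $\pm 1$ power-law characterization of the fall-off around $E^{*}$ (the $\gamma^{2}$-type rise is an alternative description of the low-energy side); the default $E^{*}=1.45$ MeV, the unit peak, and the function name are illustrative choices, not a fit from the paper.

```python
def flux_ratio(E_mev, E_star_mev=1.45, R_peak=1.0):
    """Toy piecewise power-law model of the average precipitating-to-
    trapped flux ratio: R ~ (E/E*)^{+1} below E*, (E/E*)^{-1} above."""
    x = E_mev / E_star_mev
    return R_peak * (x if x <= 1.0 else 1.0 / x)

for e in (0.3, 0.7, 1.45, 3.0):
    print(f"E = {e:4.2f} MeV -> R ~ {flux_ratio(e):.2f}")
```

Such a shape rises through the moderate-precipitation threshold $R\sim 1/3$ well below $E^{*}$, which is one way to see why sub-MeV precipitation so often accompanies the multi-MeV peak.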
Our results confirm the crucial role played by EMIC waves in relativistic
electron losses in the Earth’s outer radiation belt (at $L$-shells between 5
and 7, where ELFIN detected most EMIC wave-driven precipitation events). We
find that the main predictions of the classical resonant scattering models of
EMIC wave-driven precipitation (e.g., typical energy of precipitation peaks,
the energy spectra and the $L$-shell dependence of minimum and peak-power
resonance energy) are consistent with ELFIN case studies and statistical
results presented herein. However, ELFIN’s observed fine structure of the
precipitation in energy, pitch-angle, and time reveals interesting details
pertaining to the nature of low-energy precipitation concurrent with strong,
highly relativistic electron scattering, and the origin of precipitation
exceeding the strong diffusion limit, that deserve further attention.
Towards that end, significant new information on EMIC wave-driven
precipitation can be obtained by further combining ELFIN electron measurements
and theoretical modelling of wave-particle interactions with several ancillary
datasets: First, existing ELFIN conjunctions with ground-based observatories
and equatorial spacecraft such as the Van Allen Probes, THEMIS, ERG, and MMS
can provide important information on the EMIC waves implicated in the scattering. The
equatorial spacecraft can also provide local density and magnetic field
information, thus constraining the $f_{pe}/f_{ce}$ ratio. The large dataset of
ELFIN (spanning four years of operation, from 2018 to 2022) provides ample
opportunities for studies using such conjugate observations. These also
present good opportunities to investigate the relationship between the fine
structure of EMIC waves at the equator (wave packet coherence [Blum et al.,
2016, 2017], frequency ranges [Kersten et al., 2014; Zhang et al., 2016c], and
hot plasma effects [Cao et al., 2017a; Chen et al., 2019]) and the energy
distribution of precipitating electrons at ELFIN. Second, ELFIN’s energetic
particle detector for ions (EPDI), which measures total ions over the same
energy range as electrons (50 - 5000 keV), has been calibrated and has
collected data for approximately three months prior to the end of the mission
on both satellites. And finally, ELFIN’s fluxgate magnetometer (FGM)
instrument, which provides field-aligned current information and has the
capability of measuring waves up to a Nyquist frequency of 5 Hz, has been
operating well and has provided data on both satellites for more than a year
towards the end of the mission (note that sub-spin resolution calibration to
reveal EMIC waves is still on-going). These datasets can provide information
on energetic ion scattering and precipitation by the same EMIC waves that
scatter energetic electrons in the inner magnetosphere (when the ion resonance
energy is sufficiently high at appropriate $f_{pe}/f_{ce}$ ratios), on the
location of the EMIC wave excitation relative to large scale magnetospheric
current system sources, and on EMIC waves potentially seen at ELFIN at the
same time as electron precipitation. Critical open questions are (i) What is
the overall contribution of nonlinear interactions and what can we learn about
the physics of such interactions from ELFIN’s precipitation energy and pitch-
angle spectrum? (ii) What effects are responsible for precipitation at sub-MeV
energies? (iii) What is the relative contribution of EMIC wave-driven
precipitation to the total loss of energetic electrons in the inner
magnetosphere and to the total atmospheric energy deposition by energetic
particles? There is already good theoretical background for further model
development of nonlinear resonant interactions [Albert & Bortnik, 2009; Kubota
et al., 2015; Kubota & Omura, 2017; Grach & Demekhov, 2020; Grach et al.,
2021] and nonresonant interactions [Chen et al., 2016; An et al., 2022]. Such
development can proceed on solid grounds only if guided by statistically-
derived properties of the observed electron precipitation and ancillary
measurements afforded by the above datasets.
###### Acknowledgements.
At UCLA, we acknowledge support by NASA awards NNX14AN68G (5/22/2014 –
7/15/2022) and 80NSSC22K1005 (7/16/2022 - present), NSF award AGS-1242918
(9/24/2014-7/31/2019) and AGS-2019950 (May 2020 – present). We are grateful to
NASA’s CubeSat Launch Initiative program for successfully launching the ELFIN
satellites in the desired orbits under ELaNa XVIII. We thank the AFOSR for
their early support of the ELFIN program under its University Nanosat Program,
UNP-8 project, contract FA9453-12-D-0285 (02/01/2013-01/31/2015). We also
thank the California Space Grant program for student support during the
project’s inception (2010-2014). The work at UCLA was also supported by NASA
contract NAS5-02099 for data analysis from the THEMIS mission. We specifically
thank: J. W. Bonnell and F. S. Mozer for the use of EFI data in determining
spacecraft potential derived density from THEMIS E, J. P. McFadden for use of
ESA data in obtaining electron temperature measurements to incorporate into
the procedure for electron density determination from the spacecraft
potential, and K. H. Glassmeier, U. Auster and W. Baumjohann for the use of
FGM data provided under the lead of the Technical University of Braunschweig
and with financial support through the German Ministry for Economy and
Technology and the German Center for Aviation and Space (DLR) under contract
50 OC 0302. Additionally, we thank the operators of the Magnetic Induction
Coil Array, in particular Dr. Marc Lessard of UNH and the operators of the
South Pole, PG3, and PG4 sites at NJIT, UNH, and Virginia Tech; and the
Sodankyla Geophysical Observatory for providing the Finnish pulsation
magnetometer network data. The induction coil magnetometer at South Pole is
operated and managed by New Jersey Institute of Technology (NJIT) under NSF
grant OPP-1643700. X.-J.Z. and A.A. acknowledge NASA awards 80NSSC20K1270,
80NSSC22K0517; X.-J.Z. also acknowledges NSF award 2021749; X.A. acknowledges
NSF grant No. 2108582; W. Li acknowledges NSF award AGS-2019950 and NASA award
80NSSC20K1270; M.D.H. was supported by NSF AGS-2027210; X. M. acknowledges
NASA contract 80NM0018D0004 to the Jet Propulsion Laboratory, California
Institute of Technology; S.R.S. thanks Dasha Gloutak of CU Boulder and Samuel
Rickert of UCLA. We acknowledge the critical contributions and talent of
numerous volunteer team members (more than 300 students contributed to this
program since its inception) who made this challenging program both successful
and fun. ELFIN would not have been possible without the advice and generous
time contributions from numerous members of the science and technology
community who served as reviewers, became unofficial mentors, or acted as a
sounding board for ideas during trade studies or tiger team deliberations.
Special thanks go to UCLA’s: Steve Joy and Joe Mafi; The Aerospace
Corporation’s: David Hinkley, Eddson Alcid, David Arndt, Jim Clemmons (now at
UNH), Chris Coffman, Joe Fennel, Michael Forney, Jerry Fuller, Brian Hardy,
Petras Karuza, Christopher Konichi, Justin Lee, Pierce Martin, Leslie
Peterson, David Ping, Dan Rumsey and Darren Rowen; NASA GSFC’s: Gary Crum,
Thomas Johnson, Nick Paschalidis, David L. Pierce, Luis Santos and Rick
Schnurr; NASA JPL’s: Matt Bennett, Olivia Dawson, Nick Emis, Justin Foley,
Travis Imken, Allen Kummer, Andrew Lamborn, Marc Lane, Neil Murphy, Keith
Novac, David R. Pierce, Sara Spangelo, and Scott Trip; AFRL’s: Travis Willett,
David Voss and Kyle Kemble; UCB’s: Peter Berg, Manfred Bester, Dan Cosgrove,
Greg Dalton, David Glaser, Jim Lewis, Michael Ludlam, Chris Smith, Rick
Sterling, and Ellen Taylor; Tyvak’s: Bill McCown, Robert Erwin, Nathan Fite,
Ehson Mosleh, Anthony Ortega, Jacob Portucalian, and Austin Williams; Cal
State University Northridge’s: James Flynn and Sharlene Katz; JHU/APL’s: Jeff
Asher and Edward Russell; Montana State University’s: Adam Gunderson;
Space-X’s: Michael Sholl; Planet Labs’: Bryan Klofas; and LoadPath’s: Jerry
Footdale.
## Conflict of Interest
The authors have no relevant financial or non-financial interests to disclose.
## References
* Aa et al. (2020) Aa, E., Zou, S., Erickson, P. J., Zhang, S.-R. & Liu, S. (2020), ‘Statistical Analysis of the Main Ionospheric Trough Using Swarm in Situ Measurements’, Journal of Geophysical Research (Space Physics) 125(3), e27583.
* Adair et al. (2022) Adair, L., Angelopoulos, V., Sibeck, D. & Zhang, X. J. (2022), ‘A Statistical Examination of EMIC Wave-Driven Electron Pitch Angle Scattering Signatures’, Journal of Geophysical Research (Space Physics) 127(2), e29790.
* Agapitov et al. (2013) Agapitov, O. V., Artemyev, A., Krasnoselskikh, V., Khotyaintsev, Y. V., Mourenas, D., Breuillard, H., Balikhin, M. & Rolland, G. (2013), ‘Statistics of whistler mode waves in the outer radiation belt: Cluster STAFF-SA measurements’, J. Geophys. Res. 118, 3407–3420.
* Agapitov et al. (2018) Agapitov, O. V., Mourenas, D., Artemyev, A. V., Mozer, F. S., Hospodarsky, G., Bonnell, J. & Krasnoselskikh, V. (2018), ‘Synthetic Empirical Chorus Wave Model From Combined Van Allen Probes and Cluster Statistics’, Journal of Geophysical Research (Space Physics) 123(1), 297–314.
* Albert (2003) Albert, J. M. (2003), ‘Evaluation of quasi-linear diffusion coefficients for EMIC waves in a multispecies plasma’, Journal of Geophysical Research (Space Physics) 108(A6), 1249.
* Albert & Bortnik (2009) Albert, J. M. & Bortnik, J. (2009), ‘Nonlinear interaction of radiation belt electrons with electromagnetic ion cyclotron waves’, Geophys. Res. Lett. 36, 12110.
* Allen et al. (2016) Allen, R. C., Zhang, J. C., Kistler, L. M., Spence, H. E., Lin, R. L., Klecker, B., Dunlop, M. W., André, M. & Jordanova, V. K. (2016), ‘A statistical study of EMIC waves observed by Cluster: 2. Associated plasma conditions’, Journal of Geophysical Research (Space Physics) 121(7), 6458–6479.
* Allison & Shprits (2020) Allison, H. J. & Shprits, Y. Y. (2020), ‘Local heating of radiation belt electrons to ultra-relativistic energies’, Nature Communications 11, 4533.
* An et al. (2022) An, X., Artemyev, A., Angelopoulos, V., Zhang, X., Mourenas, D., Bortnik, J. (2022), ‘Nonresonant scattering of relativistic electrons by electromagnetic ion cyclotron waves in Earth’s radiation belts’, Phys. Rev. Lett. 129, 135101, doi:10.1103/PhysRevLett.129.135101
* Anderson et al. (1996) Anderson, B. J., Erlandson, R. E., Engebretson, M. J., Alford, J. & Arnoldy, R. L. (1996), ‘Source region of 0.2 to 1.0 Hz geomagnetic pulsation bursts’, Geophys. Res. Lett. 23(7), 769–772.
* Anderson et al. (1992) Anderson, B. J., Erlandson, R. E. & Zanetti, L. J. (1992), ‘A statistical study of Pc 1-2 magnetic pulsations in the equatorial magnetosphere 1. Equatorial occurrence distributions’, J. Geophys. Res. 97(A3), 3075–3088.
* Anderson & Hamilton (1993) Anderson, B. J. & Hamilton, D. C. (1993), ‘Electromagnetic ion cyclotron waves stimulated by modest magnetospheric compressions’, J. Geophys. Res. 98(A7), 11369–11382.
* Angelopoulos (2008) Angelopoulos, V. (2008), ‘The THEMIS Mission’, Space Sci. Rev. 141, 5–34.
* Angelopoulos et al. (2020) Angelopoulos, V., Tsai, E., Bingley, L., Shaffer, C., Turner, D. L., Runov, A., Li, W., Liu, J., Artemyev, A. V., Zhang, X. J., Strangeway, R. J., Wirz, R. E., Shprits, Y. Y., Sergeev, V. A., Caron, R. P., Chung, M., Cruce, P., Greer, W., Grimes, E., Hector, K., Lawson, M. J., Leneman, D., Masongsong, E. V., Russell, C. L., Wilkins, C., Hinkley, D., Blake, J. B., Adair, N., Allen, M., Anderson, M., Arreola-Zamora, M., Artinger, J., Asher, J., Branchevsky, D., Capitelli, M. R., Castro, R., Chao, G., Chung, N., Cliffe, M., Colton, K., Costello, C., Depe, D., Domae, B. W., Eldin, S., Fitzgibbon, L., Flemming, A., Fox, I., Frederick, D. M., Gilbert, A., Gildemeister, A., Gonzalez, A., Hesford, B., Jha, S., Kang, N., King, J., Krieger, R., Lian, K., Mao, J., McKinney, E., Miller, J. P., Norris, A., Nuesca, M., Palla, A., Park, E. S. Y., Pedersen, C. E., Qu, Z., Rozario, R., Rye, E., Seaton, R., Subramanian, A., Sundin, S. R., Tan, A., Turner, W., Villegas, A. J., Wasden, M., Wing, G., Wong, C., Xie, E., Yamamoto, S., Yap, R., Zarifian, A. & Zhang, G. Y. (2020), ‘The ELFIN Mission’, Space Sci. Rev. 216(5), 103.
* Arnoldy et al. (2005) Arnoldy, R. L., Engebretson, M. J., Denton, R. E., Posch, J. L., Lessard, M. R., Maynard, N. C., Ober, D. M., Farrugia, C. J., Russell, C. T., Scudder, J. D., Torbert, R. B., Chen, S. H. & Moore, T. E. (2005), ‘Pc 1 waves and associated unstable distributions of magnetospheric protons observed during a solar wind pressure pulse’, Journal of Geophysical Research (Space Physics) 110(A7), A07229.
* Artemyev et al. (2016) Artemyev, A. V., Agapitov, O., Mourenas, D., Krasnoselskikh, V., Shastun, V. & Mozer, F. (2016), ‘Oblique Whistler-Mode Waves in the Earth’s Inner Magnetosphere: Energy Distribution, Origins, and Role in Radiation Belt Dynamics’, Space Sci. Rev. 200(1-4), 261–355.
* Artemyev et al. (2022) Artemyev, A. V., Mourenas, D., Zhang, X.-J. & Vainchtein, D. (2022), ‘On the incorporation of nonlinear resonant wave-particle interactions into radiation belt models’, J. Geophys. Res. 127(9), e2022JA030853.
* Artemyev et al. (2021) Artemyev, A. V., Neishtadt, A. I., Vasiliev, A. A. & Mourenas, D. (2021), ‘Transitional regime of electron resonant interaction with whistler-mode waves in inhomogeneous space plasma’, Phys. Rev. E 104(5), 055203.
* Artemyev et al. (2013) Artemyev, A. V., Orlova, K. G., Mourenas, D., Agapitov, O. V. & Krasnoselskikh, V. V. (2013), ‘Electron pitch-angle diffusion: resonant scattering by waves vs. nonadiabatic effects’, Annales Geophysicae 31, 1485–1490.
* Aseev et al. (2017) Aseev, N. A., Shprits, Y. Y., Drozdov, A. Y., Kellerman, A. C., Usanova, M. E., Wang, D. & Zhelavskaya, I. S. (2017), ‘Signatures of Ultrarelativistic Electron Loss in the Heart of the Outer Radiation Belt Measured by Van Allen Probes’, Journal of Geophysical Research (Space Physics) 122(10), 10,102–10,111.
* Baker et al. (1987) Baker, D. N., Anderson, R. C., Zwickl, R. D. & Slavin, J. A. (1987), ‘Average plasma and magnetic field variations in the distant magnetotail associated with near-earth substorm effects’, J. Geophys. Res. 92, 71–81.
* Baker et al. (2018) Baker, D. N., Erickson, P. J., Fennell, J. F., Foster, J. C., Jaynes, A. N. & Verronen, P. T. (2018), ‘Space Weather Effects in the Earth’s Radiation Belts’, Space Sci. Rev. 214(1), 17.
* Bashir et al. (2022) Bashir, M. F., Artemyev, A., Zhang, X.-J. & Angelopoulos, V. (2022), ‘Energetic Electron Precipitation Driven by the Combined Effect of ULF, EMIC, and Whistler Waves’, Journal of Geophysical Research (Space Physics) 127(1), e29871.
* Belehaki et al. (2004) Belehaki, A., Jakowski, N. & Reinisch, B. W. (2004), ‘Plasmaspheric electron content derived from GPS TEC and digisonde ionograms’, Advances in Space Research 33(6), 833–837.
* Bingley et al. (2019) Bingley, L., Angelopoulos, V., Sibeck, D., Zhang, X. & Halford, A. (2019), ‘The Evolution of a Pitch-Angle “Bite-Out” Scattering Signature Caused by EMIC Wave Activity: A Case Study’, Journal of Geophysical Research (Space Physics) 124(7), 5042–5055.
* Birmingham (1984) Birmingham, T. J. (1984), ‘Pitch angle diffusion in the Jovian magnetodisc’, J. Geophys. Res. 89, 2699–2707.
* Blake et al. (1996) Blake, J. B., Looper, M. D., Baker, D. N., Nakamura, R., Klecker, B. & Hovestadt, D. (1996), ‘New high temporal and spatial resolution measurements by SAMPEX of the precipitation of relativistic electrons’, Advances in Space Research 18(8), 171–186.
* Blum et al. (2016) Blum, L. W., Agapitov, O., Bonnell, J. W., Kletzing, C. & Wygant, J. (2016), ‘EMIC wave spatial and coherence scales as determined from multipoint Van Allen Probe measurements’, Geophys. Res. Lett. 43, 4799–4807.
* Blum et al. (2019) Blum, L. W., Artemyev, A., Agapitov, O., Mourenas, D., Boardsen, S. & Schiller, Q. (2019), ‘EMIC Wave-Driven Bounce Resonance Scattering of Energetic Electrons in the Inner Magnetosphere’, Journal of Geophysical Research (Space Physics) 124(4), 2484–2496.
* Blum et al. (2017) Blum, L. W., Bonnell, J. W., Agapitov, O., Paulson, K. & Kletzing, C. (2017), ‘EMIC wave scale size in the inner magnetosphere: Observations from the dual Van Allen Probes’, Geophys. Res. Lett. 44, 1227–1233.
* Blum et al. (2015) Blum, L. W., Halford, A., Millan, R., Bonnell, J. W., Goldstein, J., Usanova, M., Engebretson, M., Ohnsted, M., Reeves, G., Singer, H., Clilverd, M. & Li, X. (2015), ‘Observations of coincident EMIC wave activity and duskside energetic electron precipitation on 18-19 January 2013’, Geophys. Res. Lett. 42, 5727–5735.
* Blum, Li & Denton (2015) Blum, L. W., Li, X. & Denton, M. (2015), ‘Rapid MeV electron precipitation as observed by SAMPEX/HILT during high-speed stream-driven storms’, J. Geophys. Res. 120, 3783–3794.
* Blum et al. (2020) Blum, L. W., Remya, B., Denton, M. H. & Schiller, Q. (2020), ‘Persistent EMIC Wave Activity Across the Nightside Inner Magnetosphere’, Geophys. Res. Lett. 47(6), e87009.
* Bortnik et al. (2022) Bortnik, J., Albert, J. M., Artemyev, A., Li, W., Jun, C.-W., Grach, V. S. & Demekhov, A. G. (2022), ‘Amplitude dependence of nonlinear precipitation blocking of relativistic electrons by large amplitude EMIC waves’, Geophys. Res. Lett. 49, e2022GL098365.
* Boynton et al. (2017) Boynton, R. J., Mourenas, D. & Balikhin, M. A. (2017), ‘Electron Flux Dropouts at $L\sim 4.2$ From Global Positioning System Satellites: Occurrences, Magnitudes, and Main Driving Factors’, Journal of Geophysical Research (Space Physics) 122, 11.
* Büchner & Zelenyi (1989) Büchner, J. & Zelenyi, L. M. (1989), ‘Regular and chaotic charged particle motion in magnetotaillike field reversals. I - Basic theory of trapped motion’, J. Geophys. Res. 94, 11821–11842.
* Burch et al. (2016) Burch, J. L., Moore, T. E., Torbert, R. B. & Giles, B. L. (2016), ‘Magnetospheric Multiscale Overview and Science Objectives’, Space Sci. Rev. 199, 5–21.
* Cao et al. (2017a) Cao, J., Shprits, Y. Y., Ni, B. & Zhelavskaya, I. S. (2017a), ‘Scattering of Ultra-relativistic Electrons in the Van Allen Radiation Belts Accounting for Hot Plasma Effects’, Scientific Reports 7, 17719.
* Cao et al. (2017b) Cao, X., Ni, B., Summers, D., Bortnik, J., Tao, X., Shprits, Y. Y., Lou, Y., Gu, X., Fu, S., Shi, R., Xiang, Z. & Wang, Q. (2017b), ‘Bounce resonance scattering of radiation belt electrons by H+ band EMIC waves’, Journal of Geophysical Research (Space Physics) 122(2), 1702–1713.
* Capannolo et al. (2019) Capannolo, L., Li, W., Ma, Q., Shen, X. C., Zhang, X. J., Redmon, R. J., Rodriguez, J. V., Engebretson, M. J., Kletzing, C. A., Kurth, W. S., Hospodarsky, G. B., Spence, H. E., Reeves, G. D. & Raita, T. (2019), ‘Energetic Electron Precipitation: Multievent Analysis of Its Spatial Extent During EMIC Wave Activity’, Journal of Geophysical Research (Space Physics) 124(4), 2466–2483.
* Capannolo et al. (2018) Capannolo, L., Li, W., Ma, Q., Zhang, X.-J., Redmon, R. J., Rodriguez, J. V., Kletzing, C. A., Kurth, W. S., Hospodarsky, G. B., Engebretson, M. J., Spence, H. E. & Reeves, G. D. (2018), ‘Understanding the Driver of Energetic Electron Precipitation Using Coordinated Multisatellite Measurements’, Geophys. Res. Lett. 45, 6755–6765.
* Capannolo et al. (2022) Capannolo, L., Li, W., Millan, R., Smith, D., Sivadas, N., Sample, J. & Shekhar, S. (2022), ‘Relativistic Electron Precipitation Near Midnight: Drivers, Distribution, and Properties’, Journal of Geophysical Research (Space Physics) 127(1), e30111.
* Capannolo et al. (2021) Capannolo, L., Li, W., Spence, H., Johnson, A. T., Shumko, M., Sample, J. & Klumpar, D. (2021), ‘Energetic Electron Precipitation Observed by FIREBIRD-II Potentially Driven by EMIC Waves: Location, Extent, and Energy Range From a Multievent Analysis’, Geophys. Res. Lett. 48, e2020GL091564.
* Carson et al. (2013) Carson, B. R., Rodger, C. J. & Clilverd, M. A. (2013), ‘POES satellite observations of EMIC-wave driven relativistic electron precipitation during 1998-2010’, J. Geophys. Res. 118, 232.
* Chen et al. (2011) Chen, L., Thorne, R. M. & Bortnik, J. (2011), ‘The controlling effect of ion temperature on EMIC wave excitation and scattering’, Geophys. Res. Lett. 38(16), L16109.
* Chen et al. (2016) Chen, L., Thorne, R. M., Bortnik, J. & Zhang, X.-J. (2016), ‘Nonresonant interactions of electromagnetic ion cyclotron waves with relativistic electrons’, J. Geophys. Res. 121(10), 9913–9925.
* Chen et al. (2009) Chen, L., Thorne, R. M. & Horne, R. B. (2009), ‘Simulation of EMIC wave excitation in a model magnetosphere including structured high-density plumes’, Journal of Geophysical Research (Space Physics) 114(A7), A07221.
* Chen et al. (2010) Chen, L., Thorne, R. M., Jordanova, V. K., Wang, C.-P., Gkioulidou, M., Lyons, L. & Horne, R. B. (2010), ‘Global simulation of EMIC wave excitation during the 21 April 2001 storm from coupled RCM-RAM-HOTRAY modeling’, Journal of Geophysical Research (Space Physics) 115(A7), A07209.
* Chen et al. (2013) Chen, L., Thorne, R. M., Shprits, Y. & Ni, B. (2013), ‘An improved dispersion relation for parallel propagating electromagnetic waves in warm plasmas: Application to electron scattering’, Journal of Geophysical Research (Space Physics) 118(5), 2185–2195.
* Chen et al. (2019) Chen, L., Zhu, H. & Zhang, X. (2019), ‘Wavenumber Analysis of EMIC Waves’, Geophys. Res. Lett. 46(11), 5689–5697.
* Chen et al. (2014) Chen, Y., Friedel, R. H. W., Henderson, M. G., Claudepierre, S. G., Morley, S. K. & Spence, H. E. (2014), ‘REPAD: An empirical model of pitch angle distributions for energetic electrons in the Earth’s outer radiation belt’, Journal of Geophysical Research (Space Physics) 119(3), 1693–1708.
* Chen et al. (2007) Chen, Y., Reeves, G. D. & Friedel, R. H. W. (2007), ‘The energization of relativistic electrons in the outer Van Allen radiation belt’, Nature Physics 3, 614–617.
* Cornwall (1965) Cornwall, J. M. (1965), ‘Cyclotron Instabilities and Electromagnetic Emission in the Ultra Low Frequency and Very Low Frequency Ranges’, J. Geophys. Res. 70(1), 61–69.
* Cornwall et al. (1970) Cornwall, J. M., Coroniti, F. V. & Thorne, R. M. (1970), ‘Turbulent loss of ring current protons’, J. Geophys. Res. 75, 4699.
* Cornwall et al. (1971) Cornwall, J. M., Coroniti, F. V. & Thorne, R. M. (1971), ‘Unified theory of SAR arc formation at the plasmapause’, J. Geophys. Res. 76(19), 4428.
* Cornwall & Schulz (1971) Cornwall, J. M. & Schulz, M. (1971), ‘Electromagnetic ion-cyclotron instabilities in multicomponent magnetospheric plasmas’, J. Geophys. Res. 76(31), 7791–7796.
* Coster et al. (2013) Coster, A., Williams, J., Weatherwax, A., Rideout, W. & Herne, D. (2013), ‘Accuracy of GPS total electron content: GPS receiver bias temperature dependence’, Radio Science 48(2), 190–196.
* Davies (1965) Davies, K. (1965), Ionospheric Radio Propagation, Technical report, National Bureau of Standards.
* Delcourt et al. (1994) Delcourt, D. C., Martin, Jr., R. F. & Alem, F. (1994), ‘A simple model of magnetic moment scattering in a field reversal’, Geophys. Res. Lett. 21, 1543–1546.
* Denton et al. (2019) Denton, R. E., Ofman, L., Shprits, Y. Y., Bortnik, J., Millan, R. M., Rodger, C. J., da Silva, C. L., Rogers, B. N., Hudson, M. K., Liu, K., Min, K., Glocer, A. & Komar, C. (2019), ‘Pitch Angle Scattering of Sub-MeV Relativistic Electrons by Electromagnetic Ion Cyclotron Waves’, Journal of Geophysical Research (Space Physics) 124(7), 5610–5626.
* Dessler & Karplus (1961) Dessler, A. J. & Karplus, R. (1961), ‘Some Effects of Diamagnetic Ring Currents on Van Allen Radiation’, J. Geophys. Res. 66(8), 2289–2295.
* Douma et al. (2017) Douma, E., Rodger, C. J., Blum, L. W. & Clilverd, M. A. (2017), ‘Occurrence characteristics of relativistic electron microbursts from SAMPEX observations’, Journal of Geophysical Research (Space Physics) 122(8), 8096–8107.
* Drozdov et al. (2019) Drozdov, A. Y., Aseev, N., Effenberger, F., Turner, D. L., Saikin, A. & Shprits, Y. Y. (2019), ‘Storm Time Depletions of Multi-MeV Radiation Belt Electrons Observed at Different Pitch Angles’, J. Geophys. Res. 124, 8943–8953.
* Drozdov et al. (2015) Drozdov, A. Y., Shprits, Y. Y., Orlova, K. G., Kellerman, A. C., Subbotin, D. A., Baker, D. N., Spence, H. E. & Reeves, G. D. (2015), ‘Energetic, relativistic, and ultrarelativistic electrons: Comparison of long-term VERB code simulations with Van Allen Probes measurements’, J. Geophys. Res. 120, 3574–3587.
* Drozdov et al. (2017) Drozdov, A. Y., Shprits, Y. Y., Usanova, M. E., Aseev, N. A., Kellerman, A. C. & Zhu, H. (2017), ‘EMIC wave parameterization in the long-term VERB code simulation’, J. Geophys. Res. 122, 8488–8501.
* Drozdov et al. (2020) Drozdov, A. Y., Usanova, M. E., Hudson, M. K., Allison, H. J. & Shprits, Y. Y. (2020), ‘The Role of Hiss, Chorus, and EMIC Waves in the Modeling of the Dynamics of the Multi-MeV Radiation Belt Electrons’, J. Geophys. Res. 125, e2020JA028282.
* Dubyagin et al. (2002) Dubyagin, S., Sergeev, V. A. & Kubyshkina, M. V. (2002), ‘On the remote sensing of plasma sheet from low-altitude spacecraft’, Journal of Atmospheric and Solar-Terrestrial Physics 64(5-6), 567–572.
* Elkington et al. (1999) Elkington, S. R., Hudson, M. K. & Chan, A. A. (1999), ‘Acceleration of relativistic electrons via drift-resonant interaction with toroidal-mode Pc-5 ULF oscillations’, Geophys. Res. Lett. 26(21), 3273–3276.
* Elkington et al. (2003) Elkington, S. R., Hudson, M. K. & Chan, A. A. (2003), ‘Resonant acceleration and diffusion of outer zone electrons in an asymmetric geomagnetic field’, Journal of Geophysical Research (Space Physics) 108(A3), 1116.
* Elkington et al. (2004) Elkington, S. R., Wiltberger, M., Chan, A. A. & Baker, D. N. (2004), ‘Physical models of the geospace radiation environment’, Journal of Atmospheric and Solar-Terrestrial Physics 66(15-16), 1371–1387.
* Engebretson et al. (2008) Engebretson, M. J., Lessard, M. R., Bortnik, J., Green, J. C., Horne, R. B., Detrick, D. L., Weatherwax, A. T., Manninen, J., Petit, N. J., Posch, J. L. & Rose, M. C. (2008), ‘Pc1-Pc2 waves and energetic particle precipitation during and after magnetic storms: Superposed epoch analysis and case studies’, Journal of Geophysical Research (Space Physics) 113(A1), A01211.
* Engebretson et al. (2002) Engebretson, M. J., Peterson, W. K., Posch, J. L., Klatt, M. R., Anderson, B. J., Russell, C. T., Singer, H. J., Arnoldy, R. L. & Fukunishi, H. (2002), ‘Observations of two types of Pc 1-2 pulsations in the outer dayside magnetosphere’, Journal of Geophysical Research (Space Physics) 107(A12), 1451.
* Engebretson et al. (2018) Engebretson, M. J., Posch, J. L., Braun, D. J., Li, W., Ma, Q., Kellerman, A. C., Huang, C. L., Kanekal, S. G., Kletzing, C. A., Wygant, J. R., Spence, H. E., Baker, D. N., Fennell, J. F., Angelopoulos, V., Singer, H. J., Lessard, M. R., Horne, R. B., Raita, T., Shiokawa, K., Rakhmatulin, R., Dmitriev, E. & Ermakova, E. (2018), ‘EMIC Wave Events During the Four GEM QARBM Challenge Intervals’, Journal of Geophysical Research (Space Physics) 123(8), 6394–6423.
* Engebretson et al. (2015) Engebretson, M. J., Posch, J. L., Wygant, J. R., Kletzing, C. A., Lessard, M. R., Huang, C.-L., Spence, H. E., Smith, C. W., Singer, H. J., Omura, Y., Horne, R. B., Reeves, G. D., Baker, D. N., Gkioulidou, M., Oksavik, K., Mann, I. R., Raita, T. & Shiokawa, K. (2015), ‘Van Allen probes, NOAA, GOES, and ground observations of an intense EMIC wave event extending over 12 h in magnetic local time’, J. Geophys. Res. 120, 5465–5488.
* Erlandson & Ukhorskiy (2001) Erlandson, R. E. & Ukhorskiy, A. J. (2001), ‘Observations of electromagnetic ion cyclotron waves during geomagnetic storms: Wave occurrence and pitch angle scattering’, J. Geophys. Res. 106(A3), 3883–3896.
* Evans & Greer (2004) Evans, D. S. & Greer, M. S. (2004), ‘Polar orbiting environmental satellite space environment monitor-2: Instrument description and archive data documentation’, NOAA Technical Memorandum version 1.3, NOAA Space Environment Center, Boulder, Colorado.
* Foster et al. (2002) Foster, J. C., Erickson, P. J., Coster, A. J., Goldstein, J. & Rich, F. J. (2002), ‘Ionospheric signatures of plasmaspheric tails’, Geophys. Res. Lett. 29(13), 1623.
* Fraser et al. (2010) Fraser, B. J., Grew, R. S., Morley, S. K., Green, J. C., Singer, H. J., Loto’aniu, T. M. & Thomsen, M. F. (2010), ‘Storm time observations of electromagnetic ion cyclotron waves at geosynchronous orbit: GOES results’, Journal of Geophysical Research (Space Physics) 115(A5), A05208.
* Fraser et al. (2005) Fraser, B. J., Singer, H. J., Adrian, M. L., Gallagher, D. L. & Thomsen, M. F. (2005), ‘The Relationship Between Plasma Density Structure and EMIC Waves at Geosynchronous Orbit’, Geophysical Monograph Series 159, 55.
* Gabrielse et al. (2017) Gabrielse, C., Angelopoulos, V., Harris, C., Artemyev, A., Kepko, L. & Runov, A. (2017), ‘Extensive electron transport and energization via multiple, localized dipolarizing flux bundles’, J. Geophys. Res. 122, 5059–5076.
* Gannon et al. (2007) Gannon, J. L., Li, X. & Heynderickx, D. (2007), ‘Pitch angle distribution analysis of radiation belt electrons based on Combined Release and Radiation Effects Satellite Medium Electrons A data’, J. Geophys. Res. 112, 5212.
* Gkioulidou et al. (2015) Gkioulidou, M., Ohtani, S., Mitchell, D. G., Ukhorskiy, A. Y., Reeves, G. D., Turner, D. L., Gjerloev, J. W., Nose, M., Koga, K., Rodriguez, J. V. & Lanzerotti, L. J. (2015), ‘Spatial structure and temporal evolution of energetic particle injections in the inner magnetosphere during the 14 July 2013 substorm event’, J. Geophys. Res. 120(3), 1924–1938.
* Glauert & Horne (2005) Glauert, S. A. & Horne, R. B. (2005), ‘Calculation of pitch angle and energy diffusion coefficients with the PADIE code’, J. Geophys. Res. 110, 4206.
* Glauert et al. (2014) Glauert, S. A., Horne, R. B. & Meredith, N. P. (2014), ‘Three-dimensional electron radiation belt simulations using the BAS Radiation Belt Model with new diffusion models for chorus, plasmaspheric hiss, and lightning-generated whistlers’, J. Geophys. Res. 119, 268–289.
* Gonzalez et al. (1994) Gonzalez, W. D., Joselyn, J. A., Kamide, Y., Kroehl, H. W., Rostoker, G., Tsurutani, B. T. & Vasyliunas, V. M. (1994), ‘What is a geomagnetic storm?’, J. Geophys. Res. 99, 5771–5792.
* Grach & Demekhov (2020) Grach, V. S. & Demekhov, A. G. (2020), ‘Precipitation of Relativistic Electrons Under Resonant Interaction With Electromagnetic Ion Cyclotron Wave Packets’, Journal of Geophysical Research (Space Physics) 125(2), e27358.
* Grach et al. (2021) Grach, V. S., Demekhov, A. G. & Larchenko, A. V. (2021), ‘Resonant interaction of relativistic electrons with realistic electromagnetic ion-cyclotron wave packets’, Earth, Planets and Space 73(1), 129.
* Grach et al. (2022) Grach, V. S., Artemyev, A. V., Demekhov, A. G., Zhang, X.-J., Bortnik, J., Angelopoulos, V., et al. (2022), ‘Relativistic electron precipitation by EMIC waves: Importance of nonlinear resonant effects’, Geophys. Res. Lett. 49, e2022GL099994.
* Greeley et al. (2019) Greeley, A. D., Kanekal, S. G., Baker, D. N., Klecker, B. & Schiller, Q. (2019), ‘Quantifying the Contribution of Microbursts to Global Electron Loss in the Radiation Belts’, Journal of Geophysical Research (Space Physics) 124(2), 1111–1124.
* Halford et al. (2015) Halford, A. J., Fraser, B. J. & Morley, S. K. (2015), ‘EMIC waves and plasmaspheric and plume density: CRRES results’, Journal of Geophysical Research (Space Physics) 120(3), 1974–1992.
* Heilig et al. (2022) Heilig, B., Stolle, C., Kervalishvili, G., Rauberg, J., Miyoshi, Y., Tsuchiya, F., Kumamoto, A., Kasahara, Y., Shoji, M., Nakamura, S., Kitahara, M. & Shinohara, I. (2022), ‘Relation of the Plasmapause to the Midlatitude Ionospheric Trough, the Sub-Auroral Temperature Enhancement and the Distribution of Small-Scale Field Aligned Currents as Observed in the Magnetosphere by THEMIS, RBSP, and Arase, and in the Topside Ionosphere by Swarm’, Journal of Geophysical Research (Space Physics) 127(3), e29646.
* Heise et al. (2002) Heise, S., Jakowski, N., Wehrenpfennig, A., Reigber, C. & Lühr, H. (2002), ‘Sounding of the topside ionosphere/plasmasphere based on GPS measurements from CHAMP: Initial results’, Geophys. Res. Lett. 29(14), 1699.
* Hendry et al. (2017) Hendry, A. T., Rodger, C. J. & Clilverd, M. A. (2017), ‘Evidence of sub-MeV EMIC-driven electron precipitation’, Geophys. Res. Lett. 44, 1210–1218.
* Hendry et al. (2019) Hendry, A. T., Santolik, O., Kletzing, C. A., Rodger, C. J., Shiokawa, K. & Baishev, D. (2019), ‘Multi-instrument Observation of Nonlinear EMIC-Driven Electron Precipitation at sub-MeV Energies’, Geophys. Res. Lett. 46(13), 7248–7257.
* Horne et al. (2013) Horne, R. B., Glauert, S. A., Meredith, N. P., Boscher, D., Maget, V., Heynderickx, D. & Pitchford, D. (2013), ‘Space weather impacts on satellites and forecasting the Earth’s electron radiation belts with SPACECAST’, Space Weather 11, 169–186.
* Horne & Thorne (1993) Horne, R. B. & Thorne, R. M. (1993), ‘On the preferred source location for the convective amplification of ion cyclotron waves’, J. Geophys. Res. 98, 9233–9247.
* Horne & Thorne (1998) Horne, R. B. & Thorne, R. M. (1998), ‘Potential waves for relativistic electron scattering and stochastic acceleration during magnetic storms’, Geophys. Res. Lett. 25, 3011–3014.
* Horne & Thorne (2003) Horne, R. B. & Thorne, R. M. (2003), ‘Relativistic electron acceleration and precipitation during resonant interactions with whistler-mode chorus’, Geophys. Res. Lett. 30(10), 100000–1.
* Horne et al. (2005) Horne, R. B., Thorne, R. M., Shprits, Y. Y., Meredith, N. P., Glauert, S. A., Smith, A. J., Kanekal, S. G., Baker, D. N., Engebretson, M. J., Posch, J. L., Spasojevic, M., Inan, U. S., Pickett, J. S. & Decreau, P. M. E. (2005), ‘Wave acceleration of electrons in the Van Allen radiation belts’, Nature 437, 227–230.
* Hudson et al. (2012) Hudson, M., Brito, T., Elkington, S., Kress, B., Li, Z. & Wiltberger, M. (2012), ‘Radiation belt 2D and 3D simulations for CIR-driven storms during Carrington Rotation 2068’, Journal of Atmospheric and Solar-Terrestrial Physics 83, 51–62.
* Imhof et al. (1977) Imhof, W. L., Reagan, J. B. & Gaines, E. E. (1977), ‘Fine-scale spatial structure in the pitch angle distributions of energetic particles near the midnight trapping boundary’, J. Geophys. Res. 82, 5215–5221.
* Imhof et al. (1979) Imhof, W. L., Reagan, J. B. & Gaines, E. E. (1979), ‘Studies of the sharply defined L dependent energy threshold for isotropy at the midnight trapping boundary’, J. Geophys. Res. 84, 6371–6384.
* Imhof et al. (1992) Imhof, W. L., Voss, H. D., Mobilia, J., Datlowe, D. W., Gaines, E. E., McGlennon, J. P. & Inan, U. S. (1992), ‘Relativistic electron microbursts’, J. Geophys. Res. 97(A9), 13829–13837.
* Imhof et al. (1986) Imhof, W. L., Voss, H. D., Reagan, J. B., Datlowe, D. W., Gaines, E. E., Mobilia, J. & Evans, D. S. (1986), ‘Relativistic electron and energetic ion precipitation spikes near the plasmapause’, J. Geophys. Res. 91(A3), 3077–3088.
* Jackman et al. (1980) Jackman, C. H., Frederick, J. E. & Stolarski, R. S. (1980), ‘Production of odd nitrogen in the stratosphere and mesosphere - An intercomparison of source strengths’, J. Geophys. Res. 85, 7495–7505.
* Jaynes et al. (2015) Jaynes, A. N., Baker, D. N., Singer, H. J., Rodriguez, J. V., Loto’aniu, T. M., Ali, A. F., Elkington, S. R., Li, X., Kanekal, S. G., Fennell, J. F., Li, W., Thorne, R. M., Kletzing, C. A., Spence, H. E. & Reeves, G. D. (2015), ‘Source and seed populations for relativistic electrons: Their roles in radiation belt changes’, Journal of Geophysical Research (Space Physics) 120(9), 7240–7254.
* Jin et al. (2022) Jin, Y., Liu, N., Su, Z., Zheng, H., Wang, Y. & Wang, S. (2022), ‘Immediate Impact of Solar Wind Dynamic Pressure Pulses on Whistler-Mode Chorus Waves in the Inner Magnetosphere’, Geophys. Res. Lett. 49(5), e2022GL097941.
* Jordanova et al. (1998) Jordanova, V. K., Farrugia, C. J., Quinn, J. M., Thorne, R. M., Ogilvie, K. E., Lepping, R. P., Lu, G., Lazarus, A. J., Thomsen, M. F. & Belian, R. D. (1998), ‘Effect of wave-particle interactions on ring current evolution for January 10-11, 1997: Initial results’, Geophys. Res. Lett. 25(15), 2971–2974.
* Kennel & Petschek (1966) Kennel, C. F. & Petschek, H. E. (1966), ‘Limit on Stably Trapped Particle Fluxes’, J. Geophys. Res. 71, 1–28.
* Kersten et al. (2014) Kersten, T., Horne, R. B., Glauert, S. A., Meredith, N. P., Fraser, B. J. & Grew, R. S. (2014), ‘Electron losses from the radiation belts caused by EMIC waves’, J. Geophys. Res. 119, 8820–8837.
* Kim & Chan (1997) Kim, H.-J. & Chan, A. A. (1997), ‘Fully adiabatic changes in storm time relativistic electron fluxes’, J. Geophys. Res. 102, 22107–22116.
* Kim et al. (2010) Kim, K. C., Lee, D.-Y., Kim, H.-J., Lee, E. S. & Choi, C. R. (2010), ‘Numerical estimates of drift loss and Dst effect for outer radiation belt relativistic electrons with arbitrary pitch angle’, J. Geophys. Res. 115, 3208.
* Kozyra et al. (1997) Kozyra, J. U., Jordanova, V. K., Horne, R. B. & Thorne, R. M. (1997), ‘Modeling of the contribution of electromagnetic ion cyclotron (EMIC) waves to stormtime ring current erosion’, Washington DC American Geophysical Union Geophysical Monograph Series 98, 187–202.
* Kubota & Omura (2017) Kubota, Y. & Omura, Y. (2017), ‘Rapid precipitation of radiation belt electrons induced by EMIC rising tone emissions localized in longitude inside and outside the plasmapause’, Journal of Geophysical Research (Space Physics) 122(1), 293–309.
* Kubota et al. (2015) Kubota, Y., Omura, Y. & Summers, D. (2015), ‘Relativistic electron precipitation induced by EMIC-triggered emissions in a dipole magnetosphere’, Journal of Geophysical Research (Space Physics) 120(6), 4384–4399.
* Lee et al. (2013) Lee, H. B., Jee, G., Kim, Y. H. & Shim, J. S. (2013), ‘Characteristics of global plasmaspheric TEC in comparison with the ionosphere simultaneously observed by Jason-1 satellite’, Journal of Geophysical Research (Space Physics) 118(2), 935–946.
* Lee & Angelopoulos (2014) Lee, J. H. & Angelopoulos, V. (2014), ‘On the presence and properties of cold ions near Earth’s equatorial magnetosphere’, Journal of Geophysical Research (Space Physics) 119(3), 1749–1770.
* Lee et al. (2012) Lee, J. H., Chen, L., Angelopoulos, V. & Thorne, R. M. (2012), ‘THEMIS observations and modeling of multiple ion species and EMIC waves: Implications for a vanishing He+ stop band’, Journal of Geophysical Research (Space Physics) 117(A6), A06204.
* Li & Hudson (2019) Li, W. & Hudson, M. K. (2019), ‘Earth’s Van Allen Radiation Belts: From Discovery to the Van Allen Probes Era’, Journal of Geophysical Research (Space Physics) 124(11), 8319–8351.
* Li et al. (2015) Li, W., Ma, Q., Thorne, R. M., Bortnik, J., Kletzing, C. A., Kurth, W. S., Hospodarsky, G. B. & Nishimura, Y. (2015), ‘Statistical properties of plasmaspheric hiss derived from Van Allen Probes data and their effects on radiation belt electron dynamics’, J. Geophys. Res. 120, 3393–3405.
* Li et al. (2016) Li, W., Ma, Q., Thorne, R. M., Bortnik, J., Zhang, X.-J., Li, J., Baker, D. N., Reeves, G. D., Spence, H. E., Kletzing, C. A., Kurth, W. S., Hospodarsky, G. B., Blake, J. B., Fennell, J. F., Kanekal, S. G., Angelopoulos, V., Green, J. C. & Goldstein, J. (2016), ‘Radiation belt electron acceleration during the 17 March 2015 geomagnetic storm: Observations and simulations’, J. Geophys. Res. 121, 5520–5536.
* Li et al. (2013) Li, W., Ni, B., Thorne, R. M., Bortnik, J., Green, J. C., Kletzing, C. A., Kurth, W. S. & Hospodarsky, G. B. (2013), ‘Constructing the global distribution of chorus wave intensity using measurements of electrons by the POES satellites and waves by the Van Allen Probes’, Geophys. Res. Lett. 40, 4526–4532.
* Li et al. (2007) Li, W., Shprits, Y. Y. & Thorne, R. M. (2007), ‘Dynamic evolution of energetic outer zone electrons due to wave-particle interactions during storms’, J. Geophys. Res. 112, 10220.
* Li et al. (2014) Li, W., Thorne, R. M., Ma, Q., Ni, B., Bortnik, J., Baker, D. N., Spence, H. E., Reeves, G. D., Kanekal, S. G., Green, J. C., Kletzing, C. A., Kurth, W. S., Hospodarsky, G. B., Blake, J. B., Fennell, J. F. & Claudepierre, S. G. (2014), ‘Radiation belt electron acceleration by chorus waves during the 17 March 2013 storm’, J. Geophys. Res. 119, 4681–4693.
* Lorentzen et al. (2001) Lorentzen, K. R., Blake, J. B., Inan, U. S. & Bortnik, J. (2001), ‘Observations of relativistic electron microbursts in association with VLF chorus’, J. Geophys. Res. 106(A4), 6017–6028.
* Lorentzen et al. (2000) Lorentzen, K. R., McCarthy, M. P., Parks, G. K., Foat, J. E., Millan, R. M., Smith, D. M., Lin, R. P. & Treilhou, J. P. (2000), ‘Precipitation of relativistic electrons by interaction with electromagnetic ion cyclotron waves’, J. Geophys. Res. 105, 5381–5390.
* Lukin et al. (2021) Lukin, A. S., Artemyev, A. V., Petrukovich, A. A. & Zhang, X. J. (2021), ‘Charged particle scattering in dipolarized magnetotail’, Physics of Plasmas 28(10), 102901.
* Lyons (1974) Lyons, L. R. (1974), ‘Pitch angle and energy diffusion coefficients from resonant interactions with ion-cyclotron and whistler waves’, Journal of Plasma Physics 12, 417–432.
* Lyons & Thorne (1972) Lyons, L. R. & Thorne, R. M. (1972), ‘Parasitic pitch angle diffusion of radiation belt particles by ion cyclotron waves’, J. Geophys. Res. 77(28), 5608–5616.
* Lyons & Thorne (1973) Lyons, L. R. & Thorne, R. M. (1973), ‘Equilibrium structure of radiation belt electrons’, J. Geophys. Res. 78, 2142–2149.
* Ma et al. (2016a) Ma, Q., Li, W., Thorne, R. M., Bortnik, J., Reeves, G. D., Kletzing, C. A., Kurth, W. S., Hospodarsky, G. B., Spence, H. E., Baker, D. N., Blake, J. B., Fennell, J. F., Claudepierre, S. G. & Angelopoulos, V. (2016a), ‘Characteristic energy range of electron scattering due to plasmaspheric hiss’, J. Geophys. Res. 121, 11.
* Ma et al. (2017) Ma, Q., Li, W., Thorne, R. M., Bortnik, J., Reeves, G. D., Spence, H. E., Turner, D. L., Blake, J. B., Fennell, J. F., Claudepierre, S. G., Kletzing, C. A., Kurth, W. S., Hospodarsky, G. B. & Baker, D. N. (2017), ‘Diffusive Transport of Several Hundred keV Electrons in the Earth’s Slot Region’, J. Geophys. Res. 122, 10.
* Ma et al. (2015) Ma, Q., Li, W., Thorne, R. M., Ni, B., Kletzing, C. A., Kurth, W. S., Hospodarsky, G. B., Reeves, G. D., Henderson, M. G., Spence, H. E., Baker, D. N., Blake, J. B., Fennell, J. F., Claudepierre, S. G. & Angelopoulos, V. (2015), ‘Modeling inward diffusion and slow decay of energetic electrons in the Earth’s outer radiation belt’, Geophys. Res. Lett. 42, 987–995.
* Ma et al. (2016b) Ma, Q., Li, W., Thorne, R. M., Nishimura, Y., Zhang, X.-J., Reeves, G. D., Kletzing, C. A., Kurth, W. S., Hospodarsky, G. B., Henderson, M. G., Spence, H. E., Baker, D. N., Blake, J. B., Fennell, J. F. & Angelopoulos, V. (2016b), ‘Simulation of energy-dependent electron diffusion processes in the Earth’s outer radiation belt’, J. Geophys. Res. 121, 4217–4231.
* Mann et al. (2016) Mann, I. R., Ozeke, L. G., Murphy, K. R., Claudepierre, S. G., Turner, D. L., Baker, D. N., Rae, I. J., Kale, A., Milling, D. K., Boyd, A. J., Spence, H. E., Reeves, G. D., Singer, H. J., Dimitrakoudis, S., Daglis, I. A. & Honary, F. (2016), ‘Explaining the dynamics of the ultra-relativistic third Van Allen radiation belt’, Nature Physics 12, 978–983.
* Mauk et al. (2013) Mauk, B. H., Fox, N. J., Kanekal, S. G., Kessel, R. L., Sibeck, D. G. & Ukhorskiy, A. (2013), ‘Science Objectives and Rationale for the Radiation Belt Storm Probes Mission’, Space Sci. Rev. 179, 3–27.
* Meredith et al. (2014) Meredith, N. P., Horne, R. B., Kersten, T., Fraser, B. J. & Grew, R. S. (2014), ‘Global morphology and spectral properties of EMIC waves derived from CRRES observations’, J. Geophys. Res. 119, 5328–5342.
* Meredith et al. (2012) Meredith, N. P., Horne, R. B., Sicard-Piet, A., Boscher, D., Yearby, K. H., Li, W. & Thorne, R. M. (2012), ‘Global model of lower band and upper band chorus from multiple satellite observations’, J. Geophys. Res. 117, 10225.
* Millan & Thorne (2007) Millan, R. M. & Thorne, R. M. (2007), ‘Review of radiation belt relativistic electron losses’, Journal of Atmospheric and Solar-Terrestrial Physics 69, 362–377.
* Min et al. (2012) Min, K., Lee, J., Keika, K. & Li, W. (2012), ‘Global distribution of EMIC waves derived from THEMIS observations’, Journal of Geophysical Research (Space Physics) 117(A5), A05219.
* Miyoshi et al. (2020) Miyoshi, Y., Saito, S., Kurita, S., Asamura, K., Hosokawa, K., Sakanoi, T., Mitani, T., Ogawa, Y., Oyama, S., Tsuchiya, F., Jones, S. L., Jaynes, A. N. & Blake, J. B. (2020), ‘Relativistic Electron Microbursts as High-Energy Tail of Pulsating Aurora Electrons’, Geophys. Res. Lett. 47(21), e90360.
* Miyoshi et al. (2008) Miyoshi, Y., Sakaguchi, K., Shiokawa, K., Evans, D., Albert, J., Connors, M. & Jordanova, V. (2008), ‘Precipitation of radiation belt electrons by EMIC waves, observed from ground and space’, Geophys. Res. Lett. 35(23), L23101.
* Miyoshi et al. (2018) Miyoshi, Y., Shinohara, I., Takashima, T., Asamura, K., Higashio, N., Mitani, T., Kasahara, S., Yokota, S., Kazama, Y., Wang, S.-Y., Tam, S. W. Y., Ho, P. T. P., Kasahara, Y., Kasaba, Y., Yagitani, S., Matsuoka, A., Kojima, H., Katoh, Y., Shiokawa, K. & Seki, K. (2018), ‘Geospace exploration project ERG’, Earth, Planets, and Space 70(1), 101.
* Morley et al. (2010) Morley, S. K., Friedel, R. H. W., Cayton, T. E. & Noveroske, E. (2010), ‘A rapid, global and prolonged electron radiation belt dropout observed with the Global Positioning System constellation’, Geophys. Res. Lett. 37, 6102.
* Mourenas et al. (2016) Mourenas, D., Artemyev, A. V., Ma, Q., Agapitov, O. V. & Li, W. (2016), ‘Fast dropouts of multi-MeV electrons due to combined effects of EMIC and whistler mode waves’, Geophys. Res. Lett. 43(9), 4155–4163.
* Mourenas et al. (2021) Mourenas, D., Artemyev, A. V., Zhang, X. J., Angelopoulos, V., Tsai, E. & Wilkins, C. (2021), ‘Electron Lifetimes and Diffusion Rates Inferred From ELFIN Measurements at Low Altitude: First Results’, Journal of Geophysical Research (Space Physics) 126(11), e29757.
* Mourenas et al. (2017) Mourenas, D., Ma, Q., Artemyev, A. V. & Li, W. (2017), ‘Scaling laws for the inner structure of the radiation belts’, Geophys. Res. Lett. 44, 3009–3018.
* Mourenas & Ripoll (2012) Mourenas, D. & Ripoll, J.-F. (2012), ‘Analytical estimates of quasi-linear diffusion coefficients and electron lifetimes in the inner radiation belt’, J. Geophys. Res. 117, A01204.
* Mourenas et al. (2022) Mourenas, D., Zhang, X.-J., Nunn, D., Artemyev, A. V., Angelopoulos, V., Tsai, E. & Wilkins, C. (2022), ‘Short chorus wave packets: Generation within chorus elements, statistics, and consequences on energetic electron precipitation’, Journal of Geophysical Research: Space Physics 127, e2022JA030310.
* Nakamura et al. (2019) Nakamura, S., Omura, Y., Kletzing, C. & Baker, D. N. (2019), ‘Rapid Precipitation of Relativistic Electron by EMIC Rising-Tone Emissions Observed by the Van Allen Probes’, Journal of Geophysical Research (Space Physics) 124(8), 6701–6714.
* Ni et al. (2015a) Ni, B., Cao, X., Zou, Z., Zhou, C., Gu, X., Bortnik, J., Zhang, J., Fu, S., Zhao, Z., Shi, R. & Xie, L. (2015a), ‘Resonant scattering of outer zone relativistic electrons by multiband EMIC waves and resultant electron loss time scales’, J. Geophys. Res. 120, 7357–7373.
* Ni et al. (2015b) Ni, B., Zou, Z., Gu, X., Zhou, C., Thorne, R. M., Bortnik, J., Shi, R., Zhao, Z., Baker, D. N., Kanekal, S. G., Spence, H. E., Reeves, G. D. & Li, X. (2015b), ‘Variability of the pitch angle distribution of radiation belt ultrarelativistic electrons during and following intense geomagnetic storms: Van Allen Probes observations’, J. Geophys. Res. 120, 4863–4876.
* O’Brien et al. (2004) O’Brien, T. P., Looper, M. D. & Blake, J. B. (2004), ‘Quantification of relativistic electron microburst losses during the GEM storms’, Geophys. Res. Lett. 31(4), L04802.
* Olifer et al. (2018) Olifer, L., Mann, I. R., Boyd, A. J., Ozeke, L. G. & Choi, D. (2018), ‘On the Role of Last Closed Drift Shell Dynamics in Driving Fast Losses and Van Allen Radiation Belt Extinction’, J. Geophys. Res. 123, 3692–3703.
* Omura & Zhao (2012) Omura, Y. & Zhao, Q. (2012), ‘Nonlinear pitch angle scattering of relativistic electrons by EMIC waves in the inner magnetosphere’, J. Geophys. Res. 117, 8227.
* Ozhogin et al. (2012) Ozhogin, P., Tu, J., Song, P. & Reinisch, B. W. (2012), ‘Field-aligned distribution of the plasmaspheric electron density: An empirical model derived from the IMAGE RPI measurements’, J. Geophys. Res. 117, 6225.
* Paulson et al. (2017) Paulson, K. W., Smith, C. W., Lessard, M. R., Torbert, R. B., Kletzing, C. A. & Wygant, J. R. (2017), ‘In situ statistical observations of Pc1 pearl pulsations and unstructured EMIC waves by the Van Allen Probes’, Journal of Geophysical Research (Space Physics) 122(1), 105–119.
* Pickett et al. (2010) Pickett, J. S., Grison, B., Omura, Y., Engebretson, M. J., Dandouras, I., Masson, A., Adrian, M. L., Santolík, O., Décréau, P. M. E., Cornilleau-Wehrlin, N. & Constantinescu, D. (2010), ‘Cluster observations of EMIC triggered emissions in association with Pc1 waves near Earth’s plasmapause’, Geophys. Res. Lett. 37, L09104.
* Pinto et al. (2019) Pinto, V. A., Mourenas, D., Bortnik, J., Zhang, X. J., Artemyev, A. V., Moya, P. S. & Lyons, L. R. (2019), ‘Decay of Ultrarelativistic Remnant Belt Electrons Through Scattering by Plasmaspheric Hiss’, Journal of Geophysical Research (Space Physics) 124(7), 5222–5233.
* Pinto et al. (2020) Pinto, V. A., Zhang, X. J., Mourenas, D., Bortnik, J., Artemyev, A. V., Lyons, L. R. & Moya, P. S. (2020), ‘On the Confinement of Ultrarelativistic Electron Remnant Belts to Low L Shells’, Journal of Geophysical Research (Space Physics) 125(3), e27469.
* Qin et al. (2018) Qin, M., Hudson, M., Millan, R., Woodger, L. & Shekhar, S. (2018), ‘Statistical Investigation of the Efficiency of EMIC Waves in Precipitating Relativistic Electrons’, Journal of Geophysical Research (Space Physics) 123(8), 6223–6230.
* Randall et al. (2005) Randall, C. E., Harvey, V. L., Manney, G. L., Orsolini, Y., Codrescu, M., Sioris, C., Brohede, S., Haley, C. S., Gordley, L. L., Zawodny, J. M. & Russell, J. M. (2005), ‘Stratospheric effects of energetic particle precipitation in 2003-2004’, Geophys. Res. Lett. 32(5), L05802.
* Reeves et al. (1998) Reeves, G. D., Baker, D. N., Belian, R. D., Blake, J. B., Cayton, T. E., Fennell, J. F., Friedel, R. H. W., Meier, M. M., Selesnick, R. S. & Spence, H. E. (1998), ‘The global response of relativistic radiation belt electrons to the January 1997 magnetic cloud’, Geophys. Res. Lett. 25, 3265–3268.
* Reeves et al. (2003) Reeves, G. D., McAdams, K. L., Friedel, R. H. W. & O’Brien, T. P. (2003), ‘Acceleration and loss of relativistic electrons during geomagnetic storms’, Geophys. Res. Lett. 30, 1529.
* Reidy et al. (2021) Reidy, J. A., Horne, R. B., Glauert, S. A., Clilverd, M. A., Meredith, N. P., Woodfield, E. E., Ross, J. P., Allison, H. J. & Rodger, C. J. (2021), ‘Comparing Electron Precipitation Fluxes Calculated From Pitch Angle Diffusion Coefficients to LEO Satellite Observations’, J. Geophys. Res. 126, e2020JA028410.
* Rideout & Coster (2006) Rideout, W. & Coster, A. (2006), ‘Automated GPS processing for global total electron content data’, GPS Solutions 10, 219–228.
* Rodger et al. (2010) Rodger, C. J., Clilverd, M. A., Green, J. C. & Lam, M. M. (2010), ‘Use of POES SEM-2 observations to examine radiation belt dynamics and energetic electron precipitation into the atmosphere’, Journal of Geophysical Research (Space Physics) 115(A4), A04202.
* Ross et al. (2021) Ross, J. P. J., Glauert, S. A., Horne, R. B., Watt, C. & Meredith, N. P. (2021), ‘On the variability of EMIC waves and the consequences for the relativistic electron radiation belt population’, Journal of Geophysical Research: Space Physics 126, e2975426.
* Ross et al. (2022) Ross, J. P. J., Glauert, S. A., Horne, R. B. & Meredith, N. P. (2022), ‘The Importance of Ion Composition for Radiation Belt Modeling’, Journal of Geophysical Research: Space Physics 127, e2022JA030680.
* Runov et al. (2015) Runov, A., Angelopoulos, V., Gabrielse, C., Liu, J., Turner, D. L. & Zhou, X.-Z. (2015), ‘Average thermodynamic and spectral properties of plasma in and around dipolarizing flux bundles’, J. Geophys. Res. 120, 4369–4383.
* Saikin et al. (2015) Saikin, A. A., Zhang, J.-C., Allen, R. C., Smith, C. W., Kistler, L. M., Spence, H. E., Torbert, R. B., Kletzing, C. A. & Jordanova, V. K. (2015), ‘The occurrence and wave properties of H+-, He+-, and O+-band EMIC waves observed by the Van Allen Probes’, J. Geophys. Res. 120, 7477–7492.
* Schulz & Lanzerotti (1974) Schulz, M. & Lanzerotti, L. J. (1974), Particle diffusion in the radiation belts, Springer, New York.
* Sergeev et al. (2012) Sergeev, V. A., Nishimura, Y., Kubyshkina, M., Angelopoulos, V., Nakamura, R. & Singer, H. (2012), ‘Magnetospheric location of the equatorward prebreakup arc’, Journal of Geophysical Research (Space Physics) 117(A1), A01212.
* Sergeev & Tsyganenko (1982) Sergeev, V. A. & Tsyganenko, N. A. (1982), ‘Energetic particle losses and trapping boundaries as deduced from calculations with a realistic magnetic field model’, Plan. Sp. Sci. 30, 999–1006.
* Sheeley et al. (2001) Sheeley, B. W., Moldwin, M. B., Rassoul, H. K. & Anderson, R. R. (2001), ‘An empirical plasmasphere and trough density model: CRRES observations’, J. Geophys. Res. 106, 25631–25642.
* Shi et al. (2016) Shi, R., Summers, D., Ni, B., Fennell, J. F., Blake, J. B., Spence, H. E. & Reeves, G. D. (2016), ‘Survey of radiation belt energetic electron pitch angle distributions based on the Van Allen Probes MagEIS measurements’, Journal of Geophysical Research (Space Physics) 121(2), 1078–1090.
* Shinbori et al. (2021) Shinbori, A., Otsuka, Y., Tsugawa, T., Nishioka, M., Kumamoto, A., Tsuchiya, F., Matsuda, S., Kasahara, Y. & Matsuoka, A. (2021), ‘Relationship Between the Locations of the Midlatitude Trough and Plasmapause Using GNSS TEC and Arase Satellite Observation Data’, Journal of Geophysical Research (Space Physics) 126(5), e28943.
* Shprits et al. (2009) Shprits, Y. Y., Chen, L. & Thorne, R. M. (2009), ‘Simulations of pitch angle scattering of relativistic electrons with MLT-dependent diffusion coefficients’, J. Geophys. Res. 114, A03219.
* Shprits et al. (2017) Shprits, Y. Y., Kellerman, A., Aseev, N., Drozdov, A. Y. & Michaelis, I. (2017), ‘Multi-MeV electron loss in the heart of the radiation belts’, Geophys. Res. Lett. 44(3), 1204–1209.
* Shprits & Ni (2009) Shprits, Y. Y. & Ni, B. (2009), ‘Dependence of the quasi-linear scattering rates on the wave normal distribution of chorus waves’, J. Geophys. Res. 114, 11205.
* Shprits et al. (2006) Shprits, Y. Y., Thorne, R. M., Horne, R. B. & Summers, D. (2006), ‘Bounce-averaged diffusion coefficients for field-aligned chorus waves’, J. Geophys. Res. 111, 10225.
* Shumko et al. (2018) Shumko, M., Turner, D. L., O’Brien, T. P., Claudepierre, S. G., Sample, J., Hartley, D. P., Fennel, J., Blake, J. B., Gkioulidou, M. & Mitchell, D. G. (2018), ‘Evidence of Microbursts Observed Near the Equatorial Plane in the Outer Van Allen Radiation Belt’, Geophys. Res. Lett. 45(16), 8044–8053.
* Silin et al. (2011) Silin, I., Mann, I. R., Sydora, R. D., Summers, D. & Mace, R. L. (2011), ‘Warm plasma effects on electromagnetic ion cyclotron wave MeV electron interactions in the magnetosphere’, Journal of Geophysical Research (Space Physics) 116(A5), A05215.
* Sorathia et al. (2017) Sorathia, K. A., Merkin, V. G., Ukhorskiy, A. Y., Mauk, B. H. & Sibeck, D. G. (2017), ‘Energetic particle loss through the magnetopause: A combined global MHD and test-particle study’, Journal of Geophysical Research (Space Physics) 122(9), 9329–9343.
* Sorathia et al. (2018) Sorathia, K. A., Ukhorskiy, A. Y., Merkin, V. G., Fennell, J. F. & Claudepierre, S. G. (2018), ‘Modeling the Depletion and Recovery of the Outer Radiation Belt During a Geomagnetic Storm: Combined MHD and Test Particle Simulations’, Journal of Geophysical Research (Space Physics) 123(7), 5590–5609.
* Su et al. (2012) Su, Z., Zhu, H., Xiao, F., Zheng, H., Shen, C., Wang, Y. & Wang, S. (2012), ‘Bounce-averaged advection and diffusion coefficients for monochromatic electromagnetic ion cyclotron wave: Comparison between test-particle and quasi-linear models’, Journal of Geophysical Research (Space Physics) 117(A9), A09222.
* Summers et al. (2007) Summers, D., Ni, B. & Meredith, N. P. (2007), ‘Timescales for radiation belt electron acceleration and loss due to resonant wave-particle interactions: 2. Evaluation for VLF chorus, ELF hiss, and electromagnetic ion cyclotron waves’, J. Geophys. Res. 112, 4207.
* Summers & Thorne (2003) Summers, D. & Thorne, R. M. (2003), ‘Relativistic electron pitch-angle scattering by electromagnetic ion cyclotron waves during geomagnetic storms’, J. Geophys. Res. 108, 1143.
* Summers et al. (1998) Summers, D., Thorne, R. M. & Xiao, F. (1998), ‘Relativistic theory of wave-particle resonant diffusion with application to electron acceleration in the magnetosphere’, J. Geophys. Res. 103, 20487–20500.
* Teng et al. (2019) Teng, S., Li, W., Tao, X., Ma, Q. & Shen, X. (2019), ‘Characteristics and Generation of Low-Frequency Magnetosonic Waves Below the Proton Gyrofrequency’, Geophys. Res. Lett. 46(21), 11,652–11,660.
* Thorne (1980) Thorne, R. M. (1980), ‘The importance of energetic particle precipitation on the chemical composition of the middle atmosphere’, Pure and Applied Geophysics 118(1), 128–151.
* Thorne (2010) Thorne, R. M. (2010), ‘Radiation belt dynamics: The importance of wave-particle interactions’, Geophys. Res. Lett. 372, 22107.
* Thorne et al. (2006) Thorne, R. M., Horne, R. B., Jordanova, V. K., Bortnik, J. & Glauert, S. (2006), Interaction of EMIC Waves With Thermal Plasma and Radiation Belt Particles, in K. Takahashi, P. J. Chi, R. E. Denton & R. L. Lysak, eds, ‘Magnetospheric ULF Waves: Synthesis and New Directions’, Vol. 169 of Washington DC American Geophysical Union Geophysical Monograph Series, p. 213.
* Thorne & Kennel (1971) Thorne, R. M. & Kennel, C. F. (1971), ‘Relativistic electron precipitation during magnetic storm main phase’, J. Geophys. Res. 76, 4446.
* Thorne et al. (2013) Thorne, R. M., Li, W., Ni, B., Ma, Q., Bortnik, J., Chen, L., Baker, D. N., Spence, H. E., Reeves, G. D., Henderson, M. G., Kletzing, C. A., Kurth, W. S., Hospodarsky, G. B., Blake, J. B., Fennell, J. F., Claudepierre, S. G. & Kanekal, S. G. (2013), ‘Rapid local acceleration of relativistic radiation-belt electrons by magnetospheric chorus’, Nature 504, 411–414.
* Thorne et al. (2005) Thorne, R. M., O’Brien, T. P., Shprits, Y. Y., Summers, D. & Horne, R. B. (2005), ‘Timescale for MeV electron microburst loss during geomagnetic storms’, J. Geophys. Res. 110, 9202.
* Tsyganenko (1989) Tsyganenko, N. A. (1989), ‘A magnetospheric magnetic field model with a warped tail current sheet’, Plan. Sp. Sci. 37, 5–20.
* Turner et al. (2015) Turner, D. L., Claudepierre, S. G., Fennell, J. F., O’Brien, T. P., Blake, J. B., Lemon, C., Gkioulidou, M., Takahashi, K., Reeves, G. D., Thaller, S., Breneman, A., Wygant, J. R., Li, W., Runov, A. & Angelopoulos, V. (2015), ‘Energetic electron injections deep into the inner magnetosphere associated with substorm activity’, Geophys. Res. Lett. 42, 2079–2087.
* Turner et al. (2012) Turner, D. L., Shprits, Y., Hartinger, M. & Angelopoulos, V. (2012), ‘Explaining sudden losses of outer radiation belt electrons during geomagnetic storms’, Nature Physics 8, 208–212.
* Ukhorskiy et al. (2010) Ukhorskiy, A. Y., Shprits, Y. Y., Anderson, B. J., Takahashi, K. & Thorne, R. M. (2010), ‘Rapid scattering of radiation belt electrons by storm-time EMIC waves’, Geophys. Res. Lett. 37, L09101.
* Usanova et al. (2013) Usanova, M. E., Darrouzet, F., Mann, I. R. & Bortnik, J. (2013), ‘Statistical analysis of EMIC waves in plasmaspheric plumes from Cluster observations’, Journal of Geophysical Research (Space Physics) 118(8), 4946–4951.
* Usanova et al. (2014) Usanova, M. E., Drozdov, A., Orlova, K., Mann, I. R., Shprits, Y., Robertson, M. T., Turner, D. L., Milling, D. K., Kale, A., Baker, D. N., Thaller, S. A., Reeves, G. D., Spence, H. E., Kletzing, C. & Wygant, J. (2014), ‘Effect of EMIC waves on relativistic and ultrarelativistic electron populations: Ground-based and Van Allen Probes observations’, Geophys. Res. Lett. 41, 1375–1381.
* Usanova et al. (2010) Usanova, M. E., Mann, I. R., Kale, Z. C., Rae, I. J., Sydora, R. D., Sandanger, M., Søraas, F., Glassmeier, K. H., Fornacon, K. H., Matsui, H., Puhl-Quinn, P. A., Masson, A. & Vallières, X. (2010), ‘Conjugate ground and multisatellite observations of compression-related EMIC Pc1 waves and associated proton precipitation’, Journal of Geophysical Research (Space Physics) 115(A7), A07208.
* Usanova et al. (2008) Usanova, M. E., Mann, I. R., Rae, I. J., Kale, Z. C., Angelopoulos, V., Bonnell, J. W., Glassmeier, K. H., Auster, H. U. & Singer, H. J. (2008), ‘Multipoint observations of magnetospheric compression-related EMIC Pc1 waves by THEMIS and CARISMA’, Geophys. Res. Lett. 35(17), L17S25.
* Vierinen et al. (2015) Vierinen, J., Coster, A. J., Rideout, W. C., Erickson, P. J. & Norberg, J. (2015), ‘Statistical framework for estimating GNSS bias’, Atmospheric Measurement Techniques Discussions 8(9), 9373–9398.
* Vo & Foster (2001) Vo, H. B. & Foster, J. C. (2001), ‘A quantitative study of ionospheric density gradients at midlatitudes’, J. Geophys. Res. 106(A10), 21555–21564.
* Walsh et al. (2014) Walsh, B. M., Foster, J. C., Erickson, P. J. & Sibeck, D. G. (2014), ‘Simultaneous Ground- and Space-Based Observations of the Plasmaspheric Plume and Reconnection’, Science 343(6175), 1122–1125.
* Wang et al. (2019) Wang, D., Shprits, Y. Y., Zhelavskaya, I. S., Agapitov, O. V., Drozdov, A. Y. & Aseev, N. A. (2019), ‘Analytical Chorus Wave Model Derived from Van Allen Probe Observations’, Journal of Geophysical Research (Space Physics) 124(2), 1063–1084.
* Wang et al. (2017) Wang, G., Su, Z., Zheng, H., Wang, Y., Zhang, M. & Wang, S. (2017), ‘Nonlinear fundamental and harmonic cyclotron resonant scattering of radiation belt ultrarelativistic electrons by oblique monochromatic EMIC waves’, J. Geophys. Res. 122(2), 1928–1945.
* Weygand et al. (2021) Weygand, J. M., Zhelavskaya, I. & Shprits, Y. (2021), ‘A Comparison of the Location of the Mid Latitude Trough and Plasmapause Boundary’, Journal of Geophysical Research (Space Physics) 126(4), e28213.
* Yahnin et al. (1997) Yahnin, A. G., Sergeev, V. A., Gvozdevsky, B. B. & Vennerstrøm, S. (1997), ‘Magnetospheric source region of discrete auroras inferred from their relationship with isotropy boundaries of energetic particles’, Annales Geophysicae 15, 943–958.
* Yahnin et al. (2017) Yahnin, A. G., Yahnina, T. A., Raita, T. & Manninen, J. (2017), ‘Ground pulsation magnetometer observations conjugated with relativistic electron precipitation’, Journal of Geophysical Research (Space Physics) 122(9), 9169–9182.
* Yahnin et al. (2016) Yahnin, A. G., Yahnina, T. A., Semenova, N. V., Gvozdevsky, B. B. & Pashin, A. B. (2016), ‘Relativistic electron precipitation as seen by NOAA POES’, Journal of Geophysical Research (Space Physics) 121(9), 8286–8299.
* Yando et al. (2011) Yando, K., Millan, R. M., Green, J. C. & Evans, D. S. (2011), ‘A Monte Carlo simulation of the NOAA POES Medium Energy Proton and Electron Detector instrument’, Journal of Geophysical Research (Space Physics) 116(A10), A10231.
* Yizengaw & Moldwin (2005) Yizengaw, E. & Moldwin, M. B. (2005), ‘The altitude extension of the mid-latitude trough and its correlation with plasmapause position’, Geophys. Res. Lett. 32(9), L09105.
* Young et al. (1981) Young, D. T., Perraut, S., Roux, A., de Villedary, C., Gendrin, R., Korth, A., Kremser, G. & Jones, D. (1981), ‘Wave-particle interactions near $\Omega_{He+}$ observed in GEOS 1 and 2 1. Propagation of ion cyclotron waves in He+ -rich plasma’, J. Geophys. Res. 86(A8), 6755–6772.
* Zhang et al. (2016a) Zhang, J., Halford, A. J., Saikin, A. A., Huang, C.-L., Spence, H. E., Larsen, B. A., Reeves, G. D., Millan, R. M., Smith, C. W., Torbert, R. B., Kurth, W. S., Kletzing, C. A., Blake, J. B., Fennel, J. F. & Baker, D. N. (2016a), ‘EMIC waves and associated relativistic electron precipitation on 25–26 January 2013’, J. Geophys. Res. 121, 11,086–11,100.
* Zhang et al. (2020) Zhang, X. J., Agapitov, O., Artemyev, A. V., Mourenas, D., Angelopoulos, V., Kurth, W. S., Bonnell, J. W. & Hospodarsky, G. B. (2020), ‘Phase Decoherence Within Intense Chorus Wave Packets Constrains the Efficiency of Nonlinear Resonant Electron Acceleration’, Geophys. Res. Lett. 47(20), e89807.
* Zhang et al. (2022) Zhang, X.-J., Angelopoulos, V., Mourenas, D., Artemyev, A. V., Tsai, E. & Wilkins, C. (2022), ‘Characteristics of Electron Microburst Precipitation Based on High-Resolution ELFIN Measurements’, J. Geophys. Res. 127(5), e2022JA030509.
* Zhang et al. (2019) Zhang, X.-J., Chen, L., Artemyev, A. V., Angelopoulos, V. & Liu, X. (2019), ‘Periodic Excitation of Chorus and ECH Waves Modulated by Ultralow Frequency Compressions’, Journal of Geophysical Research (Space Physics) 124(11), 8535–8550.
* Zhang et al. (2016b) Zhang, X. J., Li, W., Ma, Q., Thorne, R. M., Angelopoulos, V., Bortnik, J., Chen, L., Kletzing, C. A., Kurth, W. S., Hospodarsky, G. B., Baker, D. N., Reeves, G. D., Spence, H. E., Blake, J. B. & Fennell, J. F. (2016b), ‘Direct evidence for EMIC wave scattering of relativistic electrons in space’, Journal of Geophysical Research (Space Physics) 121(7), 6620–6631.
* Zhang et al. (2016c) Zhang, X.-J., Li, W., Thorne, R. M., Angelopoulos, V., Bortnik, J., Kletzing, C. A., Kurth, W. S. & Hospodarsky, G. B. (2016c), ‘Statistical distribution of EMIC wave spectra: Observations from Van Allen Probes’, Geophys. Res. Lett. 43, 12.
* Zhang et al. (2017) Zhang, X.-J., Mourenas, D., Artemyev, A. V., Angelopoulos, V. & Thorne, R. M. (2017), ‘Contemporaneous EMIC and whistler mode waves: Observations and consequences for MeV electron loss’, Geophys. Res. Lett. 44, 8113–8121.
* Zhang et al. (2021) Zhang, X. J., Mourenas, D., Shen, X. C., Qin, M., Artemyev, A. V., Ma, Q., Li, W., Hudson, M. K. & Angelopoulos, V. (2021), ‘Dependence of Relativistic Electron Precipitation in the Ionosphere on EMIC Wave Minimum Resonant Energy at the Conjugate Equator’, Journal of Geophysical Research (Space Physics) 126(5), e29193.
* Zhang et al. (2018) Zhang, X. J., Thorne, R., Artemyev, A., Mourenas, D., Angelopoulos, V., Bortnik, J., Kletzing, C. A., Kurth, W. S. & Hospodarsky, G. B. (2018), ‘Properties of Intense Field-Aligned Lower-Band Chorus Waves: Implications for Nonlinear Wave-Particle Interactions’, Journal of Geophysical Research (Space Physics) 123(7), 5379–5393.
* Zhao et al. (2018) Zhao, H., Friedel, R. H. W., Chen, Y., Reeves, G. D., Baker, D. N., Li, X., Jaynes, A. N., Kanekal, S. G., Claudepierre, S. G., Fennell, J. F., Blake, J. B. & Spence, H. E. (2018), ‘An Empirical Model of Radiation Belt Electron Pitch Angle Distributions Based On Van Allen Probes Measurements’, Journal of Geophysical Research (Space Physics) 123(5), 3493–3511.
Figure 1: Overview of two consecutive ELFIN A science zone crossings: one at
the nightside/dawn sector (left) and the other at the dayside/dusk sector
(right, primed panel letters). From top to bottom shown are 3 energy
spectrograms (a-c), 2 pitch-angle spectrograms (d-e) and the satellite’s
L-shell and magnetic latitude (f) computed using the International Geomagnetic
Reference Field (IGRF) model. All spectrograms show products derived from the
number-flux of electrons (measured in individual sectors, in units of $1/{\rm
cm}^{2}/{\rm s}/{\rm sr}/{\rm MeV}$) averaged over the selected pitch-angle
and energy range as follows: The energy spectrograms in (a) and (a′) are for
locally trapped electrons (only pitch angles outside the loss cone and anti-
loss cone, near perpendicular to the local field line direction, were
included); those in (b) and (b′) are for precipitating electrons (with pitch angles
in the loss cone); and those in (c) and (c′) are precipitating-to-trapped flux
spectra ratios formed from the panels right above. The pitch-angle
spectrograms in (d-e) and (d′-e′) are average fluxes in two broad energy
ranges: a low energy range, 80-270 keV, and a high energy range, 0.4 - 3.4
MeV. The horizontal lines demarcate $90^{\circ}$ (vertically centered solid line),
the loss cone (the other solid line) and the anti-loss cone (the dashed line).
Horizontal color bars above Panel (a) represent magnetospheric regions
identified based on the data and discussed in the main text. Arrows in Panels
(c′) and (e′) represent spectral features also discussed in the main text.
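A quantity used throughout these captions is the precipitating-to-trapped flux ratio, formed bin by bin from two spectrograms. The following is a minimal illustrative sketch of that step, not the mission's processing code: the flux arrays are assumed to be simple (time, energy) grids, and the `min_flux` noise floor is a hypothetical parameter introduced here to suppress ratios in near-empty bins.

```python
import numpy as np

def precip_to_trapped_ratio(j_prec, j_trap, min_flux=1.0):
    """Element-wise precipitating-to-trapped flux ratio for two
    spectrograms shaped (time, energy). Bins whose trapped flux is
    below `min_flux` (an assumed noise floor) are masked with NaN."""
    j_prec = np.asarray(j_prec, dtype=float)
    j_trap = np.asarray(j_trap, dtype=float)
    ratio = np.full_like(j_prec, np.nan)   # start fully masked
    ok = j_trap > min_flux                 # bins with usable trapped flux
    ratio[ok] = j_prec[ok] / j_trap[ok]
    return ratio
```

In the terminology of the later captions, a bin where this ratio exceeds 1/2 corresponds to strong precipitation and 0.3–0.5 to moderate precipitation.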
Figure 2: Overview of ELFIN A observations for a science zone crossing,
exhibiting a typical EMIC wave-driven precipitation signature. Format is
identical to that of Figure 1.
Figure 3: Observations from THEMIS E at the
equator at an MLT and UT near those of the science zone crossing by ELFIN A
depicted in Figure 2. Panels (a-d) show about an hour of data centered around
an EMIC wave emission: power spectral density of the magnetic field measured
by the fluxgate magnetometer, FGM, instrument (a), wave normal angle (b),
ellipticity (c), electron density inferred from the spacecraft potential
computed on-board by the electric field instrument, EFI, and processed on the
ground using the measured electron temperature by the electrostatic analyzer,
ESA, instrument (d). The black dashed line is the $He^{+}$ gyrofrequency.
Panel (e) shows $\sim$2.5 min of data from a single magnetic field component
in a field-aligned-coordinate system (FAC), $B_{X,FAC}$. It is oriented
perpendicular to the average magnetic field direction (hence, near the plane
of polarization) and lies on a plane also containing the sunward direction.
Panel (f) is an expanded, $\sim$1 min long, view of the same quantity as above
it. Arrows in Panels (e) and (f) are discussed in the main text.
Figure 4:
Average spectra of precipitating (red) and trapped (blue) electrons at the
moment of the strongest precipitation (04:10:42-04:10:48 UT) at ELFIN A for
the event of Figure 2. Detrended fluxes (solid, thick lines) are measured
averages (dotted, thin lines) minus the trends (dashed, thin lines). Trends
are average fluxes from 6s immediately before and 6s immediately after the
strongest precipitation interval.
Figure 5: An example of prolonged
relativistic electron precipitation, presumably due to the long-lasting
presence of EMIC waves in the equatorial magnetosphere. Three consecutive
northern hemisphere, equatorward science zone crossings by ELFIN A at post-
noon/dusk (MLT $\sim$16.75) are depicted in three panels per crossing (a-c;
d-f; g-i), arranged in time to have a common L-shell and magnetic latitude,
shown in Panel (j). Each crossing’s three panels have the same format as the
top three panels in Figures 1 and 2, i.e., they are energy spectrograms of
trapped fluxes, precipitating fluxes and precipitating-to-trapped flux ratio.
Figure 6: Overview of ELFIN A observations during a $He^{+}$ band EMIC wave-
driven event, on 2 November 2020, in a format similar to that of Figure 1.
Figure 7: ELFIN A projections to the ionosphere in the north and south for the event
of Fig. 6. Diamonds and asterisks mark the start and end times of the
trajectories; crosses are 1 min tickmarks; thick traces denote times of
intense relativistic electron precipitation identified from Fig. 6(c) as a
putative EMIC wave-driven precipitation event.
Figure 8: Magnetic field
spectra from the ground-based station at SPA in the Antarctic. As shown in
Fig. 7, SPA is in close conjunction with ELFIN-A during this event.
Superimposed in the spectra are $He^{+}$ and $O^{+}$ equatorial
gyrofrequencies (horizontal lines) using the magnetic field at their
equatorial projection, inferred from the T89 model. The vertical magenta lines
mark the time interval of ELFIN A’s science zone crossing during this event.
Figure 9: Theoretical estimates of minimum resonance energy and comparison
with observations for the event of Fig. 6. From top to bottom: average
precipitating-to-trapped flux ratio during the time of peak precipitation
(15:18:51-15:19:03 UT); contour plot of minimum resonance energy (in MeV) as a
function of the maximum unstable frequency $f_{\max}$ (normalized to the
relevant ion cyclotron gyrofrequency $f_{cHe}$) and of the $f_{pe}/f_{ce}$
ratio for a $He^{+}$ ion concentration of 5$\%$; and same as the panel above,
but for a 10$\%$ $He^{+}$ concentration. Red and yellow colors depict the
electron energy ranges for which ELFIN A measured strong precipitation and
moderate precipitation (R$>$0.5 and 0.5$>$R$>$0.3, respectively). The plot
shows that resonance energies exhibiting moderate and strong precipitation at
ELFIN are consistent with the range of parameters $f_{\max}/f_{cHe}$,
$f_{pe}/f_{ce}$ inferred from in-situ measurements at conjugate platforms (at
the intersection of the corresponding grayed areas), for a reasonable range of
$He^{+}$ concentrations.
Figure 10: Overview of ELFIN A observations during a
$H^{+}$ band EMIC wave-driven event, on 6 December 2020, in a format similar
to that of Figure 1.
Figure 11: ELFIN A projections to the ionosphere in the
north and south for the event of Fig. 10, in a format similar to that of
Figure 7.
Figure 12: Magnetic field power spectral density (arbitrary units)
from the Finland ground-based station at SOD, located as shown in Fig. 11.
Superimposed are $H^{+}$, $He^{+}$ and $O^{+}$ equatorial gyrofrequencies
(horizontal lines) using the equatorial magnetic field conjugate to this
station. The magenta vertical lines bracket the time interval of ELFIN A’s
science zone crossing during this event.
Figure 13: Theoretical estimates of
minimum resonance energy and comparison with observations for the event of
Fig. 10. From top to bottom: average precipitating-to-trapped flux ratio
during the time of peak precipitation (20:20:33 - 20:20:48 UT); contour plot
of minimum resonance energy (in MeV) as a function of the maximum unstable
frequency $f_{\max}$ (normalized to the relevant ion cyclotron frequency,
$f_{cp}$), and of the $f_{pe}/f_{ce}$ ratio for $0\%$ helium concentration;
and same as the panel above but for a $2.5\%$ $He^{+}$ concentration. Red and
yellow colors depict the electron energy ranges for which ELFIN A measured
strong and moderate precipitation (R$>$0.5 and 0.5$>$R$>$0.3, respectively).
The plot shows that resonance energies exhibiting moderate and strong
precipitation at ELFIN are consistent with the range of parameters
$f_{\max}$/$f_{cp}$ and $f_{pe}/f_{ce}$ inferred from in-situ observations at
conjugate platforms ($f_{\max}/f_{cp}\sim$0.6-0.9 and
$f_{pe}/f_{ce}\sim$10-12) for a reasonable range of $He^{+}$ concentrations.
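Figures 9 and 13 turn measured wave and plasma parameters ($f_{\max}/f_{ci}$ and $f_{pe}/f_{ce}$) into a minimum resonance energy via the cold-plasma L-mode dispersion relation and the relativistic $n=-1$ cyclotron resonance condition. The sketch below is a simplified, hydrogen-only version of that calculation; it neglects the $He^{+}$ and $O^{+}$ terms that the captions show can matter, and the function name and bisection scheme are choices made here for illustration.

```python
import math

M_RATIO = 1836.15   # proton-to-electron mass ratio
E0_MEV = 0.511      # electron rest energy in MeV

def emic_min_resonance_energy(f_over_fcp, fpe_over_fce):
    """Minimum (field-aligned) electron energy, in MeV, for n = -1
    cyclotron resonance with a parallel-propagating L-mode EMIC wave
    in a cold electron-proton plasma. Frequencies are normalised to
    the proton gyrofrequency."""
    w = f_over_fcp                  # wave frequency / proton gyrofrequency
    wce = M_RATIO                   # |electron gyrofrequency| in the same units
    wpe2 = (fpe_over_fce * wce) ** 2
    wpp2 = wpe2 / M_RATIO           # proton plasma frequency squared
    # Cold-plasma L-mode refractive index squared (electrons + protons only)
    L = 1.0 - wpe2 / (w * (w + wce)) - wpp2 / (w * (w - 1.0))
    if L <= 0.0:
        return None                 # wave does not propagate at this frequency
    kc = w * math.sqrt(L)           # k*c in proton-gyrofrequency units
    # Resonance condition: w - k*v = -wce/gamma; solve for x = v/c.
    # g(x) is strictly decreasing on (0, 1), so bisection finds the root.
    g = lambda x: w - kc * x + wce * math.sqrt(1.0 - x * x)
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    x = 0.5 * (lo + hi)
    gamma = 1.0 / math.sqrt(1.0 - x * x)
    return (gamma - 1.0) * E0_MEV
```

Under these simplified assumptions, parameters in the range quoted above ($f_{\max}/f_{cp}\sim 0.8$, $f_{pe}/f_{ce}\sim 10$) give a minimum resonance energy of order 1 MeV, and raising $f_{pe}/f_{ce}$ lowers it, consistent with the qualitative behaviour the contour plots describe.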
Figure 14: Overview of ELFIN A observations on 10 September 2020, which is
being examined together with concurrent TEC observations. Two consecutive
northern hemisphere, equatorward science zone crossings at midnight/pre-
midnight (MLT $\sim$0.75) are depicted in three panels per crossing (a-c; and
f-g), arranged in time to have a common L-shell and magnetic latitude, shown
in Panel (e). Each crossing’s three panels have the same format as the top
three panels in Figures 1 and 2, i.e., they show energy spectrograms of
trapped fluxes, precipitating fluxes and precipitating-to-trapped flux ratio.
A fourth panel for each crossing (Panels (d) and (h), respectively) shows the
TEC values at the satellite ionospheric projection (after averaging over 3
degrees each in latitude and longitude, and over 20 min in time).
Figure 15:
Projections of ELFIN A science zone crossings of Figure 14 onto the Madrigal
TEC maps at 11:30:00 UT (top panel) and 13:00:00 UT (bottom panel) in the
northern hemisphere: start and end times of ELFIN A trajectories are marked by
a diamond and an asterisk, respectively; crosses are 1 min marks. Regions of
interest along the ELFIN trajectory are denoted with same colors as at the top
of Figure 14 (blue: plasma sheet, magenta: outer radiation belt, black:
plasmasphere). The thick magenta line sections depict the times of
relativistic electron precipitation related to EMIC wave scattering identified
from Figure 14, Panels (c) and (c′), respectively. Numbered black
traces in the TEC maps show contours of TEC.
Figure 16: Properties of
relativistic electron precipitation inferred from $\sim 50$ EMIC wave-driven
events (comprising $\sim 310$ individual spin samples) measured by ELFIN A&B.
(a) Ensemble spatial distribution versus $L$-shell. (b) Occurrence rate of
events (total event duration normalized by the ELFIN residence time in each
bin) in $(L,MLT)$ space. (c) Peak precipitating-to-trapped flux ratio at each
spin sample (circle) and average of that ratio within each $L$-shell bin
(solid curve), both shown as a function of $L$-shell. (d) Energy of the peak
precipitating-to-trapped electron flux ratio, $E^{*}$, at each spin sample as
a function of $L$. (e) Energy of half-peak (below the peak) of the
precipitating-to-trapped electron flux ratio, denoted by $E^{*}_{min}$ at each
spin sample, shown as a function of $L$. Red dashed lines in Panels (d) and
(e) depict theoretical estimates of the respective quantities based on
statistical wave spectra (derived as discussed in text). Dashed and dotted
blue lines in Panel (e) depict theoretical lower limits of the resonance
energy estimated based on wave cyclotron damping at $kc/\Omega_{pi}\sim 1$ and
$kc/\Omega_{pi}\sim 2$, respectively.

Figure 17: Properties of the most efficient EMIC wave-driven precipitation
events, those exhibiting highly relativistic ($E^{*}>1$ MeV), strong
($R=j_{prec}/j_{trap}>1/2$) electron precipitation, observed at $L<7$. (a)
Fraction of these most efficient
events in each $AE^{*}$ bin divided by total number of most efficient events
($AE^{*}$ is the maximum AE in the preceding 3 hours). (b) Same as (a) but as
a function of MLT. (c) Fraction of most efficient events that also have
$0\leq R<1/3$ in the three low-energy range categories listed in the
abscissa. The rest (those with $R>1/3$), representing concurrent low-energy
moderate or strong precipitation, can still be a significant fraction of the
most efficient EMIC wave-driven precipitation events (15 - 35%, depending on
energy range category). (d) Fraction of most efficient EMIC wave-driven
precipitation events satisfying two different criteria for
$R=j_{prec}/j_{trap}$ at lower energy as defined in annotations. Implications
are discussed in the main text.

Figure 18: Spectral properties of EMIC wave-driven electron precipitation. (a)
Average spectrum of trapped fluxes (black
curve) and of precipitating-to-trapped flux ratio (red curve) for events with
$j_{prec}/j_{trap}>1/2$ at $E^{*}>1$ MeV. The dotted blue curve shows a best
least-squares fit to the average flux ratio at low energy. (b) Average
precipitating-to-trapped flux ratio for the events of Panel (a), plotted as a
function of the normalized energy $E/E^{*}$, where $E^{*}$ is the peak
precipitating-to-trapped flux ratio energy for each event (a proxy for the
minimum resonance energy $E_{R,\min}$ with the most intense waves). Best
least-squares fits to the average flux ratio are shown below the peak,
$\langle j_{prec}/j_{trap}\rangle\approx\gamma^{2}(E)/\gamma^{2}(E^{*})\sim
0.065\,\gamma^{2}(E)$ for $E^{*}=\langle E^{*}\rangle\sim 1.45$ MeV (dashed
blue curve), and above the peak, $\langle j_{prec}/j_{trap}\rangle\approx
1.01\cdot(E/E^{*})^{-1.01}$ (dashed red curve). Note that if the last point
($E/E^{*}\approx 4.84$) with a small number of measurements was discarded, the
best fit would become $\langle j_{prec}/j_{trap}\rangle\approx
0.92\cdot(E/E^{*})^{-0.8}$ (dotted red curve).

Figure 19: (a) EMIC wave power
ratio $B_{w}^{2}(f)/B_{w}^{2}(f_{peak})$ inferred, using quasi-linear theory,
from ELFIN statistics of precipitating-to-trapped electron flux ratio
$j_{prec}/j_{trap}$ (solid blue curve), and compared with statistical EMIC
wave power ratios obtained from Van Allen Probes 2012-2016 observations in
four different MLT sectors when $f_{pe}/f_{ce}>15$ (black, green, magenta, and
red curves). Specifically, the solid blue curve was derived from the best fit
to $\langle j_{prec}/j_{trap}\rangle$ in Figure 18(a) and the dashed blue
curves show its uncertainty range, corresponding to the uncertainty in $E^{*}$
from the measurements, also depicted in Panel (c), below. (b) Same as (a) but
for $f_{pe}/f_{ce}\in[5,15]$. (c) Energy $E$ of electrons near the loss cone
in cyclotron resonance with EMIC waves at $f/f_{cp}$ in Panel (a), assuming
waves at $f_{peak}/f_{cp}\sim 0.37$ in resonance with $E^{*}\sim 1.45$ MeV
electrons for $f_{pe}/f_{ce}>15$. Dashed curves are based on the measurement
uncertainty range in $E^{*}$. (d) Same as (c) for $f_{peak}/f_{cp}\sim 0.41$
and $E^{*}\sim 2.5$ MeV for $f_{pe}/f_{ce}\in[5,15]$, corresponding to wave
power ratios in Panel (b).
|
# Connecting the Dots: Floorplan Reconstruction Using Two-Level Queries
Yuanwen Yue1 Theodora Kontogianni2 Konrad Schindler1,2 Francis Engelmann2
1Photogrammetry and Remote Sensing, ETH Zurich 2ETH AI Center, ETH Zurich
###### Abstract
We address 2D floorplan reconstruction from 3D scans. Existing approaches
typically employ heuristically designed multi-stage pipelines. Instead, we
formulate floorplan reconstruction as a single-stage structured prediction
task: find a variable-size set of polygons, which in turn are variable-length
sequences of ordered vertices. To solve it we develop a novel Transformer
architecture that generates polygons of multiple rooms in parallel, in a
holistic manner without hand-crafted intermediate stages. The model features
two-level queries for polygons and corners, and includes polygon matching to
make the network end-to-end trainable. Our method achieves a new state-of-the-
art for two challenging datasets, Structured3D and SceneCAD, along with
significantly faster inference than previous methods. Moreover, it can readily
be extended to predict additional information, i.e., semantic room types and
architectural elements like doors and windows. Our code and models are
available at: https://github.com/ywyue/RoomFormer.
## 1 Introduction
The goal of floorplan reconstruction is to turn observations of an (indoor)
scene into a 2D vector map in birds-eye view. More specifically, we aim to
abstract a 3D point cloud into a set of closed polygons corresponding to
rooms, optionally enriched with further structural and semantic elements like
doors, windows and room type labels.
Floorplans are an essential representation that enables a wide range of
applications in robotics, AR/VR, interior design, _etc._ Like prior work [30,
8, 9, 3, 2], we start from a 3D point cloud, which can easily be captured with
RGB-D cameras, laser scanners or SfM systems. Several works [22, 8, 30, 9]
have shown the effectiveness of projecting the raw 3D point data along the
gravity axis, to obtain a 2D density map that highlights the building’s
structural elements (_e.g._ , walls). We also employ this early transition to
2D image space. The resulting density maps are compact and computationally
efficient, but inherit the noise and data gaps of the underlying point clouds,
hence floorplan reconstruction remains a challenging task.
Input 3D Point Cloud | Reconstructed Floorplan
---|---
Figure 1: Semantic floorplan reconstruction. Given a point cloud of an indoor
environment, _RoomFormer_ jointly recovers multiple room polygons along with
their associated room types, as well as architectural elements such as doors
and windows.
Existing methods can be split broadly into two categories that both operate in
two stages: _Top-down methods_ [8, 30] first extract room masks from the
density map using neural networks (_e.g._ , Mask R-CNN [15]), then employ
optimization/search techniques (_e.g._ , integer programming [29], Monte-Carlo
Tree-Search [4]) to extract a polygonal floorplan. Such techniques are not
end-to-end trainable, and their success depends on how well the hand-crafted
optimization captures domain knowledge about room shape and layout.
Alternatively, _bottom-up methods_ [22, 9] first detect corners, then look for
edges between corners (_i.e._ , wall segments) and finally assemble them into
a planar floorplan graph. Both approaches are strictly sequential and
therefore depend on the quality of the initial corner or room detector,
respectively. The second stage starts from the detected entities, so
missing or spurious detections may significantly impact the reconstruction.
We address those limitations and design a model that directly maps a density
image to a set of room polygons. Our model, named _RoomFormer_ , leverages the
sequence prediction capabilities of Transformers and directly outputs a
variable-length, ordered sequence of vertices per room. RoomFormer requires
neither hand-crafted, domain-specific intermediate products nor explicit
corner, wall or room detections. Moreover, it predicts all rooms that make up
the floorplan at once, exploiting the parallel nature of the Transformer
architecture.
In more detail, we employ a standard CNN backbone to extract features from the
birds-eye view density map, followed by a Transformer encoder-decoder setup
that consumes image features (supplemented with positional encodings) and
outputs multiple ordered corner sequences, in parallel. The floorplan is
recovered by simply connecting those corners in the predicted order. Note that
the described process relies on the ability to generate hierarchically
structured output of variable and a-priori unknown size, where each floorplan
has a different number of rooms (with no natural order), and each room polygon
has a different number of (ordered) corners. We address this challenge by
introducing two-level queries with one level for the room polygons and one
level for their corners. The varying numbers of both rooms and corners are
accommodated by additionally classifying each query as valid or invalid. The
decoder iteratively refines the queries, through self-attention among queries
and cross-attention between queries and image features. To enable end-to-end
training, we propose a polygon matching strategy that establishes the
correspondence between predictions and targets, at both room and corner
levels. In this manner, we obtain an integrated model that holistically
predicts a set of polygons to best explain the evidence in the density map,
without hand-tuned intermediate rules of which corners, walls or rooms to
commit to along the way. The model is also fast at inference, since it
operates in single-stage feed-forward mode, without optimization or search and
without any post-processing steps. Moreover, it is flexible and can, with a
few straightforward modifications, predict additional semantic and structural
information such as room types, doors and windows (Fig. 1).
We evaluate our model on two challenging datasets, Structured3D [38] and
SceneCAD [2]. For both of them, RoomFormer outperforms the state of the art,
while at the same time being significantly faster than existing methods. In
summary, our contributions are:
* •
A new formulation of floorplan reconstruction, as the simultaneous generation
of multiple ordered sequences of room corners.
* •
The RoomFormer model, an end-to-end trainable, Transformer-type architecture
that implements the proposed formulation via two-level queries that predict a
set of polygons each consisting of a sequence of vertex coordinates.
* •
Improved floorplan reconstruction scores on both Structured3D [38] and
SceneCAD [2], with faster inference times.
* •
Model variants able to additionally predict semantic room type labels, doors
and windows.
## 2 Related Work
Floorplan reconstruction turns raw sensor data (_e.g._ , point clouds, density
maps, RGB images) into vectorized geometries. Early methods rely on basic
image processing techniques, _e.g._ , Hough transform or plane fitting [23,
27, 1, 5, 28, 34, 26]. Graph-based methods [14, 6, 16] cast floorplan
reconstruction as an energy minimization problem. Recent deep learning methods
replace some hand-crafted components with neural networks. Typical top-down
methods such as Floor-SP [8] rely on Mask R-CNN [15] to detect room segments and
reconstruct polygons of individual room segments by sequentially solving
shortest path problems. Similarly, MonteFloor [30] first detects room segments
and then relies on Monte-Carlo Tree-Search to select room proposals.
Alternative bottom-up methods, such as FloorNet [21], first detect room
corners, followed by an integer programming formulation to generate wall
segments. This approach, however, is limited to Manhattan scenes. Recently,
HEAT [9] proposed an end-to-end model following a typical bottom-up pipeline:
first detect corners, then classify edge candidates between corners. Although
end-to-end trainable, it cannot recover edges from undetected corners.
Instead, our method skips the heuristics-guided processes of both paradigms.
Without explicit corner, wall or room detection, it directly generates rooms
as polygons in a holistic fashion.
#### Transformers for structured reconstruction.
Transformers [32], originally proposed for sequence-to-sequence translation
tasks, have shown promising performance in many vision tasks such as object
detection [7, 39], image/video segmentation [33, 11, 10] and tracking [24].
DETR [7] reformulates object detection as a direct set prediction problem with
Transformers which is free from many hand-crafted components, _e.g._ , anchor
generation and non-maximum suppression. LETR [35] extends DETR by adopting
Transformers to predict a set of line segments. PlaneTR [31] follows a similar
paradigm for plane detection and reconstruction. These works show the
promising potential of Transformers for structured reconstruction without
heuristic designs. Our work goes beyond these initial steps and asks the
question: _Can we leverage Transformers for structured polygon generation?_
Different from predicting primitive shapes that can be represented by a fixed
number of parameters (_e.g._ , bounding boxes, lines, planes), polygons are
more challenging due to the arbitrary number of (ordered) vertices. While some
recent works [20, 18, 37] utilize Transformers for polygon generation in the
context of instance segmentation or text spotting, there are two essential
differences: (1) They assume a fixed number of polygon vertices, which is not
suitable for floorplans. This results in over-redundant vertices for simple
shapes and insufficient vertices for complex shapes. Instead, our goal is to
generate polygons that match the target shape with the correct number of
vertices. (2) They rely on bounding box detection as instance initialization,
while our single-stage method directly generates multiple polygons in
parallel.
Figure 2: Illustration of the RoomFormer model. Given a top-down-view density
map of the input point cloud, (a) the feature backbone extracts multi-scale
features, adds positional encodings, and flattens them before passing them
into the (b) Transformer encoder. (c) The Transformer decoder takes as input
our _two-level_ queries, one level for the room polygons (up to $M$) and one
level for their corners (up to $N$ per room polygon). A feed-forward network
(FFN) predicts a class $c$ for each query to accommodate varying numbers of
rooms and corners. During training, the polygon matching guarantees optimal
assignment between predicted and groundtruth polygons.
## 3 Method
### 3.1 Floorplan Representation
A suitable floorplan representation is key to an efficient floorplan
reconstruction system. Intuitively, one can decompose floorplan reconstruction
as intermediate geometric primitives detection problems (corners, walls,
rooms) and tackle them separately, as in prior works [8, 30, 22, 9]. However,
such pipelines involve heuristics-driven designs and lack holistic reasoning
capabilities.
Our core idea is to cast floorplan reconstruction as a direct set prediction
problem of polygons. Each polygon represents a room and is modeled as an
ordered sequence of vertices. The edges (_i.e._ , walls) are implicitly
encoded by the order of the vertices – two consecutive vertices are connected
– thus a separate edge prediction step is not required. Formally, the goal is
to predict _a set of sequences_ of arbitrary length, defined as
$S=\left\\{V_{m}\right\\}_{m=1}^{M^{\text{gt}}}$, where $M^{\text{gt}}$ is the
number of sequences per scene, and each sequence
$V_{m}=\left(v_{1}^{m},v_{2}^{m},...,v_{N_{m}}^{m}\right)$ represents a closed
polygon (_i.e._ , room) defined by $N_{m}$ ordered vertices.
As each polygon has an arbitrary number of vertices $N_{m}$, we model each
vertex $v_{n}^{m}$ in a polygon $V_{m}$ by two variables
$v_{n}^{m}=\left(c_{n}^{m},p_{n}^{m}\right)$, where
$c_{n}^{m}\in\left\\{0,1\right\\}$ indicates whether $v_{n}^{m}$ is a valid
vertex or not, and $p_{n}^{m}\in\mathbb{R}^{2}$ are the 2D coordinates of the
corner in the floorplan. Once the model predicts the ordered corner sequences,
we connect all valid corners to obtain the polygonal representation of all
rooms.
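The decoding step described above is simple enough to sketch in a few lines. The following is a hypothetical post-processing routine, not code from the paper: the function name, the 0.5 validity threshold, and the toy arrays are our own assumptions.

```python
import numpy as np

def decode_polygons(probs, coords, thresh=0.5):
    """Turn per-vertex validity probabilities and 2D coordinates into
    closed room polygons (hypothetical post-processing sketch).

    probs:  (M, N) validity probability for each of N vertex slots
            in each of M polygon slots.
    coords: (M, N, 2) predicted 2D vertex positions, in prediction order.
    Returns a list of (K_m, 2) arrays, one per room with >= 3 valid vertices.
    """
    rooms = []
    for m in range(probs.shape[0]):
        valid = probs[m] > thresh      # keep vertices classified as valid
        poly = coords[m][valid]        # order is preserved: edges are implicit
        if len(poly) >= 3:             # a closed polygon needs >= 3 corners
            rooms.append(poly)
    return rooms

# toy example: 2 polygon slots, 4 vertex slots each
probs = np.array([[0.9, 0.8, 0.95, 0.1],   # a triangle (3 valid corners)
                  [0.2, 0.1, 0.3, 0.2]])   # an empty slot (no room)
coords = np.zeros((2, 4, 2))
coords[0, :3] = [[0, 0], [1, 0], [0, 1]]
rooms = decode_polygons(probs, coords)
print(len(rooms))  # 1 room survives
```

Because consecutive valid vertices are implicitly connected, no separate edge prediction is needed after this step.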
### 3.2 Architecture Overview
Fig. 2 shows the model architecture. It consists of (a) a feature backbone
that extracts image features, (b) a Transformer encoder to refine the CNN
features, and (c) a Transformer decoder using _two-level_ queries for polygon
prediction. (d) During training, the polygon matching module yields optimal
assignments between predicted and groundtruth polygons, enabling end-to-end
supervision.
CNN backbone. The backbone extracts pixel-level feature maps from the density
map $\mathbf{x}_{d}\in\mathbb{R}^{H\times W}$. Since both local and global
contexts are required for accurately locating corners and capturing their
order, we utilize the $L$ multi-scale feature maps
$\left\\{I_{l}\right\\}_{l=1}^{L}$ from each layer $l$ of the convolutional
backbone, where $I_{l}\in\mathbb{R}^{C\times H_{l}\times W_{l}}$. Each feature
map is flattened to a feature sequence $I_{l}\in\mathbb{R}^{C\times
H_{l}W_{l}}$ and sine/cosine positional encodings $E_{l}\in\mathbb{R}^{C\times
H_{l}W_{l}}$ are added to each pixel location. The flattened feature maps are
concatenated and serve as multi-scale input to the Transformer encoder.
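A minimal numpy sketch of this flatten-and-encode step is given below. The exact encoding scheme (DETR-style split of channels between the y and x axes, base 10000) is an assumption on our part; the paper only states that sine/cosine positional encodings are added per pixel.

```python
import numpy as np

def sine_pos_encoding(h, w, c):
    """Sine/cosine 2D positional encoding (sketch; half the channels
    encode the y coordinate, half the x coordinate)."""
    assert c % 4 == 0
    d = c // 2
    freq = 1.0 / (10000 ** (np.arange(0, d, 2) / d))       # (d/2,)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    def enc(v):                                            # v: (h, w)
        a = v[..., None] * freq                            # (h, w, d/2)
        return np.concatenate([np.sin(a), np.cos(a)], -1)  # (h, w, d)
    pe = np.concatenate([enc(ys), enc(xs)], -1)            # (h, w, c)
    return pe.transpose(2, 0, 1)                           # (c, h, w)

def flatten_levels(feature_maps):
    """Add positional encodings and flatten each (C, H_l, W_l) map to
    (C, H_l*W_l), then concatenate along the spatial axis."""
    seqs = []
    for f in feature_maps:
        c, h, w = f.shape
        f = f + sine_pos_encoding(h, w, c)
        seqs.append(f.reshape(c, h * w))
    return np.concatenate(seqs, axis=1)

# two toy feature levels with 8 channels
maps = [np.zeros((8, 4, 4)), np.zeros((8, 2, 2))]
out = flatten_levels(maps)
print(out.shape)  # (8, 20): 16 + 4 spatial positions
```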
Multi-scale deformable attention. To avoid the computational and memory
complexities of standard Transformers [32], we adopt deformable attention from
[39]. Given a feature map, for each query element, the deformable attention
only attends to a small set $N_{s}$ of key sampling points around a reference
point, instead of looking over all $H_{l}W_{l}$ spatial locations on the
feature map, where $N_{s}\ll H_{l}W_{l}$. The multi-scale deformable attention
applies deformable attention across multi-scale feature maps and enables
encoding richer context. We use multi-scale deformable attention for the self-
and cross-attention in the encoder and decoder.
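To make the sampling idea concrete, here is a heavily simplified single-query, single-head, single-scale sketch of deformable attention: sample $N_{s}$ points around the reference point by bilinear interpolation and combine them with softmax-normalized weights. The learned projections that produce the offsets and weights are omitted; all names here are our own.

```python
import numpy as np

def bilinear(feat, x, y):
    """Bilinearly sample feat (H, W, C) at continuous location (x, y)."""
    h, w = feat.shape[:2]
    x0, y0 = max(int(np.floor(x)), 0), max(int(np.floor(y)), 0)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * feat[y0, x0] + dx * (1 - dy) * feat[y0, x1]
            + (1 - dx) * dy * feat[y1, x0] + dx * dy * feat[y1, x1])

def deformable_attention(feat, ref, offsets, weights):
    """For one query: attend only to N_s sampled points around the
    reference point `ref`, instead of all H*W locations."""
    w = np.exp(weights - weights.max())        # softmax over sampling points
    w = w / w.sum()
    samples = [bilinear(feat, ref[0] + dx, ref[1] + dy) for dx, dy in offsets]
    return sum(wi * s for wi, s in zip(w, samples))

feat = np.ones((8, 8, 4))                      # constant 4-channel feature map
out = deformable_attention(feat, ref=(3.5, 3.5),
                           offsets=[(0, 0), (1, 0), (0, 1), (-1, -1)],
                           weights=np.zeros(4))
print(out)  # [1. 1. 1. 1.]: a weighted average of samples on a constant map
```

In the real module this runs per head and across all feature scales, with offsets and weights predicted from the query embedding.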
Transformer encoder takes as input the position-encoded multi-scale feature
maps and outputs enhanced feature maps of the same resolution. Each encoder
layer consists of a multi-scale deformable self-attention module (MS-DSA) and
a feed forward network (FFN). In the MS-DSA module, both the query and key
elements are pixel features from the multi-scale feature maps. The reference
point is the coordinate of each query pixel. A learnable scale-level embedding
is added to the feature representation to identify which feature level each
query pixel lies in.
Transformer decoder is stacked with multiple layers (Fig. 3(a)). Each layer
consists of a self-attention module (SA), a multi-scale deformable cross-
attention module (MS-DCA) and an FFN. Each decoder layer takes in the enhanced
image features from the encoder and a set of polygon queries from the previous
layer. The polygon queries first interact with each other in the SA module. In
the MS-DCA, the polygon queries attend to different regions of the density
map. Finally, the output of the decoder is passed to a shared FFN to predict
binary class labels $c$ for each query indicating its validity as a corner.
### 3.3 Modeling Floorplan As Two-Level Queries
We model floorplan reconstruction as a prediction of a set of sequences. This
motivates the two-level polygon queries, one level for room polygons and one
level for their vertices. Specifically, we represent polygon queries as
$Q\in\mathbb{R}^{M\times N\times 2}$, where $M$ is the maximum number of
polygons (_i.e._ , room level), $N$ is the maximum number of vertices per
polygon (_i.e._ , corner level). Using this formulation, we can directly learn
_ordered_ corner coordinates for each room as queries, which are subsequently
refined after each layer in the decoder (Fig. 3(b)).
We illustrate the structure of one decoder layer in Fig. 3(a). The queries in
the decoder consist of two parts: _content_ queries (_i.e._ , decoder
embeddings) and _positional_ queries (generated from polygon queries). We
denote $Q^{i}=(x,y)^{i}$ as the polygon queries in decoder layer $i$ (for
simplicity, we drop the polygon and vertex indices), and
$D^{i}\in\mathbb{R}^{M\times N\times C}$ and $P^{i}\in\mathbb{R}^{M\times
N\times C}$ as the corresponding content and positional queries. Given the
polygon query $Q^{i}$, its positional query $P^{i}$ is generated as
$P^{i}=\textrm{MLP}(\textrm{PE}(Q^{i}))$, where PE (Positional Encoding)
maps the 2D coordinates to a $C$-dimensional sinusoidal positional embedding.
The decoder performs self-attention on all corner-level queries regardless of
the room they belong to. This simple design not only allows the interaction
between corners of a single room, but also enables interaction among corners
across different rooms (_e.g._ , corners on the adjacent walls of two rooms
are influencing each other), thus enabling global reasoning. In the multi-
scale deformable attention module, we directly use polygon queries as
reference points, allowing us to use explicit spatial priors to pool features
from the multi-scale feature maps around the polygon vertices. The varying
numbers of both rooms and corners are achieved by classifying each query as
valid or invalid. In Sec. 3.4, we describe the polygon matching strategy that
encourages the queries at corner level to follow a specific order (_i.e._ , a
sequence) while the queries at room level can be un-ordered (_i.e._ , a set).
The key advantage of the above approach is that the room polygons can directly
be obtained by connecting the valid vertices in the provided order, without
the need for an explicit edge detector as in prior bottom-up methods, _e.g._ ,
[9].
Iterative polygon refinement. Inspired by iterative bounding box refinement in
[39], we refine the vertices in each polygon in the decoder layer-by-layer. We
use a prediction head (MLP) to predict relative offsets $(\Delta x,\Delta y)$
from the decoder embeddings and update the polygon queries for the next layer.
Both decoder embeddings and polygon queries input to the first layer are
initialized from a normal distribution and learned as part of the model
parameters. During inference, we directly load the learned decoder embeddings
and polygon queries and update them layer-by-layer. We visualize this
iterative refinement process in Fig. 3(b). The final predicted labels are used
to select valid queries and visualize their position after each layer.
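The refinement loop can be sketched as follows. This is a deliberately reduced illustration: a single linear layer stands in for the prediction MLP, and the attention updates to the decoder embeddings inside each layer are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def offset_head(embed, W, b):
    """Stand-in for the prediction MLP: maps a decoder embedding to a
    relative (dx, dy) offset (tanh keeps offsets bounded in the sketch)."""
    return np.tanh(embed @ W + b)

# learned initializations: M rooms x N corners
M, N, C = 2, 3, 8
queries = rng.normal(size=(M, N, 2))   # polygon queries (vertex coordinates)
embeds = rng.normal(size=(M, N, C))    # decoder embeddings (content queries)
W, b = rng.normal(size=(C, 2)) * 0.1, np.zeros(2)

# layer-by-layer refinement: every decoder layer nudges every vertex
for layer in range(6):
    # (in the real model, embeds are first updated by self-/cross-attention)
    queries = queries + offset_head(embeds, W, b)

print(queries.shape)  # (2, 3, 2): refined vertex coordinates after 6 layers
```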
(a) Structure of the decoder
(b) Polygon queries evolution
Figure 3: (a) Illustration of one layer of the Transformer decoder (we omit
the FFN blocks for clarity). (b) Visualization of the evolution of the polygon
queries after each decoder layer.
(a) SD-TQ
(b) TD-TQ
(c) TD-SQ
Figure 4: Model variants for semantically-rich floorplans. SD-TQ: Single
decoder with two-level queries. TD-TQ: two decoders with two-level queries.
TD-SQ: two decoders with single-level queries in the line decoder.
### 3.4 Polygon Matching
The prediction head of the Transformer decoder outputs a fixed number $M$ of
polygons with a fixed number $N$ of vertices (including non-valid ones, mapped to
$\varnothing$) while the groundtruth contains an arbitrary number of polygons
with an arbitrary number of vertices. One of the challenges is to match the
fixed-size predictions with the variable-size groundtruth to make the
network end-to-end trainable. To this end, we introduce a strategy to handle
the matching at two levels: set and sequence level.
Let us denote
$\hat{S}=\left\\{\hat{V}_{m}=(\hat{v}_{1}^{m},\hat{v}_{2}^{m},...,\hat{v}_{N}^{m})\right\\}_{m=1}^{M}$
as a set of predicted polygon instances. Each predicted vertex is represented
as $\hat{v}_{n}^{m}=\left(\hat{c}_{n}^{m},\hat{p}_{n}^{m}\right)$, where
$\hat{c}_{n}^{m}$ indicates the probability of a valid vertex and
$\hat{p}_{n}^{m}$ is the predicted 2D coordinates in normalized space $[0,1]$.
Assume there are $M^{\text{gt}}$ polygons in the groundtruth set
$S=\left\\{V_{m}\right\\}_{m=1}^{M^{\text{gt}}}$, where $V_{m}$ has a length
of $N_{m}$. For each groundtruth polygon, we first pad it to a length of $N$
vertices so that $V_{m}=(v_{1}^{m},v_{2}^{m},...,v_{N}^{m})$, where
$v_{n}^{m}=\left(c_{n}^{m},p_{n}^{m}\right)$ and $c_{n}^{m}$ may be 0, _i.e._ ,
$\varnothing$. Then we further pad $S$ with additional $M-M^{\text{gt}}$
polygons full of $N$ $\varnothing$ vertices so that
$\forall(m\in\left[M^{\text{gt}}+1,M\right]\wedge
n\in\left[1,N\right]),c_{n}^{m}=0$. At the set level, we find a bipartite
matching between the predicted polygons and the groundtruth by searching for a
permutation $\hat{\sigma}$ with minimal cost:
$\hat{\sigma}=\underset{\sigma}{\mathrm{arg}\,\mathrm{min}}\sum_{m=1}^{M}\mathcal{D}(V_{m},\hat{V}_{\sigma(m)})$
(1)
where $\mathcal{D}$ is a function that measures the matching cost between
groundtruth polygon $V_{m}$ and a prediction with index $\sigma(m)$. Since we
view a polygon as a sequence of vertices, we calculate the matching cost at
sequence level and define $\mathcal{D}$ as $\mathds{1}_{\left\\{m\leq
M^{\text{gt}}\right\\}}\lambda_{\mathrm{cls}}\sum_{n=1}^{N}\left\|c_{n}^{m}-\hat{c}_{n}^{\sigma(m)}\right\|+\mathds{1}_{\left\\{m\leq
M^{\text{gt}}\right\\}}\lambda_{\mathrm{coord}}d(P_{m},\hat{P}_{\sigma(m)})$,
where $d$ measures the sum of pair-wise $L_{1}$ distance between groundtruth
vertex coordinates _without_ padding
$P_{m}=(p_{1}^{m},p_{2}^{m},...,p_{N_{m}}^{m})$ and the prediction sliced with
the same length
$\hat{P}_{\sigma(m)}=(\hat{p}_{1}^{\sigma(m)},\hat{p}_{2}^{\sigma(m)},...,\hat{p}_{N_{m}}^{\sigma(m)})$.
The matching cost $\mathcal{D}$ takes into account both the vertex label and
coordinates with balancing coefficients $\lambda_{\mathrm{cls}}$ and
$\lambda_{\mathrm{coord}}$. A closed polygon is a cycle so there exist
multiple equivalent parametrizations depending on the starting vertex and the
orientation. Here, we fix the groundtruth $P_{m}$ to always follow counter-
clockwise orientation, but can start from any of the vertices. We calculate
the distance between $\hat{P}_{\sigma(m)}$ and all possible permutations of
$P_{m}$ and take the minimum as the final $d$.
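The rotation-minimized distance $d$ can be sketched directly: try every cyclic starting vertex of the (counter-clockwise) groundtruth and keep the smallest $L_{1}$ sum. Function and variable names are our own.

```python
import numpy as np

def polygon_l1(gt, pred):
    """Order-aware L1 distance between a groundtruth polygon `gt` (K, 2)
    and the first K predicted vertices `pred` (K, 2), minimized over the
    K cyclic rotations of the groundtruth sequence."""
    K = len(gt)
    best = np.inf
    for s in range(K):                        # try every starting vertex
        rolled = np.roll(gt, -s, axis=0)
        best = min(best, np.abs(rolled - pred).sum())
    return best

gt = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)   # CCW unit square
pred = np.roll(gt, -2, axis=0)               # same square, different start
print(polygon_l1(gt, pred))  # 0.0 -- cyclic rotations are equivalent
```

Because the minimum is taken over rotations, the model is never penalized for predicting a correct polygon that merely starts from a different corner.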
Loss functions. After finding the optimal permutation $\hat{\sigma}$ with the
Hungarian algorithm, we can compute the loss function which consists of three
parts: a vertex label classification loss, a vertex coordinates regression
loss and a polygon rasterization loss. The vertex label classification loss is
a standard binary cross-entropy:
$\mathcal{L}_{\text{cls}}^{m}=-\frac{1}{N}\sum_{n=1}^{N}c_{n}^{m}\cdot\mathrm{log}(\hat{c}_{n}^{\hat{\sigma}(m)})-(1-c_{n}^{m})\cdot\log(1-\hat{c}_{n}^{\hat{\sigma}(m)})$
(2)
Similar to the matching cost function, the $L_{1}$ distance serves as a loss
function for vertex coordinates regression:
$\mathcal{L}_{\text{coord}}^{m}=\frac{1}{N_{m}}\mathds{1}_{\left\\{m\leq
M^{\text{gt}}\right\\}}d(P_{m},\hat{P}_{\hat{\sigma}(m)})$ (3)
We additionally compute the Dice loss [25] between rasterized polygons as
auxiliary loss:
$\mathcal{L}_{\text{ras}}^{m}=\mathds{1}_{\left\\{m\leq
M^{\text{gt}}\right\\}}\text{Dice}\big{(}R(P_{m}),R(\hat{P}_{\hat{\sigma}(m)})\big{)}$
(4)
where $R$ indicates the rasterized mask of a given polygon, using a
differentiable rasterizer [18]. We only compute
$\mathcal{L}_{\text{coord}}^{m}$ and $\mathcal{L}_{\text{ras}}^{m}$ for
predicted polygons with matched non-padded groundtruth while computing
$\mathcal{L}_{\text{cls}}^{m}$ for all predicted polygons (including
$\varnothing$). The total loss $\mathcal{L}$ is then defined as:
$\mathcal{L}=\sum_{m}^{M}(\lambda_{\text{cls}}\mathcal{L}_{\text{cls}}^{m}+\lambda_{\text{coord}}\mathcal{L}_{\text{coord}}^{m}+\lambda_{\text{ras}}\mathcal{L}_{\text{ras}}^{m})$
(5)
### 3.5 Towards Semantically-Rich Floorplans
Our method can easily be extended to classify different room types and
reconstruct additional architectural details such as doors and windows, while
using the same input.

Room types. The two-level polygon queries make it straightforward to extend
our pipeline to identify room types. We denote the
output embedding from the last layer of the Transformer decoder as
$D^{last}\in\mathbb{R}^{M\times N\times C}$. We then aggregate room-level
features by simply averaging corner-level features and obtain an aggregated
embedding $\widehat{D}^{last}\in\mathbb{R}^{M\times C}$. Finally, a simple
linear projection layer predicts the room label probabilities using a softmax
function. Since $M$ is usually larger than the actual number of rooms in a
scene, an additional empty class label is used to represent invalid rooms. We
denote $t_{m}$ as the type for polygon instance $V_{m}$. We use the same
matching permutation $\hat{\sigma}$ from Eq. 1 to find the matched prediction
$\hat{t}_{\hat{\sigma}(m)}$. The room type is supervised by a cross-entropy
loss $\mathcal{L}_{\text{room\_cls}}^{m}$.
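The room-type head amounts to mean-pooling the corner embeddings and applying a linear softmax classifier. A numpy sketch (the projection matrix and random embeddings below are hypothetical stand-ins for learned parameters):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

M, N, C, T = 2, 4, 8, 17               # T = 16 room types + 1 "empty" class
rng = np.random.default_rng(0)
D_last = rng.normal(size=(M, N, C))    # corner-level decoder embeddings
W = rng.normal(size=(C, T)) * 0.1      # hypothetical linear projection

room_feat = D_last.mean(axis=1)        # average corner features per room
room_probs = softmax(room_feat @ W)    # per-room type probabilities
print(room_probs.shape)                # (2, 17)
```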
Doors and windows. A door or a window can be regarded as a line in a 2D
floorplan. Intuitively, we can view them as a special “polygon” with 2
vertices. This way, our pipeline can be directly adapted to predict doors and
windows _without_ architecture modification. It is only required to increase
the room-level queries $M$ since more polygons need to be predicted (Fig.
4(a)). Alternatively, we could use a separate line decoder to predict doors
and windows. Since a line can be parameterized by a fixed number of 2
vertices, we can either represent them as two-level queries but with a fixed
number of corner-level queries (Fig. 4(b)), or single-level queries (Fig.
4(c)). The two-level queries variant is simply an adaptation of our polygon
decoder. For the single-level queries variant, we follow LETR [35] that
directly predicts the two endpoints from the query. The performance of each
variant is analyzed in Sec. 4.4.
| | | | | Room | Corner | Angle
---|---|---|---|---|---|---|---
Method | Venue | Fully-neural | Single-stage | t (s) | Prec. | Rec. | F1 | Prec. | Rec. | F1 | Prec. | Rec. | F1
Floor-SP [8] | ICCV19 | ✗ | ✗ | 785 | 89. | 88. | 88. | 81. | 73. | 76. | 80. | 72. | 75.
MonteFloor [30] | ICCV21 | ✗ | ✗ | 71 | 95.6 | 94.4 | 95.0 | 88.5 | 77.2 | 82.5 | 86.3 | 75.4 | 80.5
HAWP [36] | CVPR20 | ✓ | ✗ | 0.02 | 77.7 | 87.6 | 82.3 | 65.8 | 77.0 | 70.9 | 59.9 | 69.7 | 64.4
LETR [35] | CVPR21 | ✓ | ✗ | 0.04 | 94.5 | 90.0 | 92.2 | 79.7 | 78.2 | 78.9 | 72.5 | 71.3 | 71.9
HEAT [9] | CVPR22 | ✓ | ✗ | 0.11 | 96.9 | 94.0 | 95.4 | 81.7 | 83.2 | 82.5 | 77.6 | 79.0 | 78.3
RoomFormer (Ours) | - | ✓ | ✓ | 0.01 | 97.9 | 96.7 | 97.3 | 89.1 | 85.3 | 87.2 | 83.0 | 79.5 | 81.2
Table 1: Floorplan reconstruction scores on Structured3D test set [38]. Our
method offers state-of-the-art results while being significantly faster than
existing works. Runtime is averaged over the test set. Scores of prior works
are as taken from [9, 30].
## 4 Experiments
### 4.1 Datasets and Metrics
Datasets. Structured3D [38] is a large-scale photo-realistic dataset
containing 3500 houses with diverse floor plans covering both Manhattan and
non-Manhattan layouts. It contains semantically-rich annotations including
doors and windows, and 16 room types. We adhere to the pre-defined split of
3000 training samples, 250 validation samples and 250 test samples. As in [30,
9], we convert the registered multi-view RGB-D panoramas to point clouds, and
project the point clouds along the vertical axis into density images of size
256$\times$256 pixels. The density value at each pixel is the number of
points projected to that pixel, divided by the maximum per-pixel count so that
it is normalized to [0, 1].
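The projection to a density map can be sketched in a few lines of numpy. The coordinate normalization to the image extent is an assumed detail; the paper specifies only the vertical projection and the [0, 1] normalization.

```python
import numpy as np

def density_map(points, size=256):
    """Project a 3D point cloud onto a top-down density image (sketch of
    the preprocessing described above)."""
    xy = points[:, :2]                            # drop the vertical axis
    span = xy.max(0) - xy.min(0)
    xy = (xy - xy.min(0)) / (span + 1e-9) * (size - 1)
    ij = xy.astype(int)
    img = np.zeros((size, size))
    np.add.at(img, (ij[:, 1], ij[:, 0]), 1)       # count points per pixel
    return img / img.max()                        # normalize to [0, 1]

pts = np.random.default_rng(0).uniform(0, 10, size=(1000, 3))
d = density_map(pts, size=32)
print(d.shape, d.max())  # (32, 32) 1.0
```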
SceneCAD [2] contains 3D room layout annotations on real-world RGB-D scans of
ScanNet [12]. We convert the layout annotations to 2D floorplan polygons.
Annotations are only available for the ScanNet training and validation splits,
so we train on the training split and report scores on the validation split.
We use the same procedure as in Structured3D to project RGB-D scans to density
maps.
Metrics. Following [30, 9], for each groundtruth room, we loop through the
predictions and find the best-matching reconstructed room in terms of IoU. For
the matched rooms we then report precision, recall and F1 scores at three
geometry levels: rooms, corners and angles. We compute precision, recall and
F1 scores also for the semantic enrichment predictions. For the room type, the
metrics are computed like the room metric described above, with the additional
constraint that the predicted semantic label must match the groundtruth. A
window or door is considered correct if its $L_{2}$ distance to the
groundtruth element is <10 pixels.
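A minimal sketch of the room-level matching and precision/recall/F1 computation might look as follows. The IoU threshold and the one-to-one matching details are assumptions for illustration; the evaluation protocol of [30, 9] should be treated as authoritative:

```python
import numpy as np

def mask_iou(a, b):
    """IoU between two boolean room masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def room_prf(gt_masks, pred_masks, iou_thresh=0.5):
    """Greedy room matching and precision/recall/F1.

    For each ground-truth room we pick the predicted room with the highest
    IoU; the pair counts as a true positive if that IoU clears `iou_thresh`
    and the prediction is not already matched.
    """
    matched_preds, tp = set(), 0
    for gt in gt_masks:
        ious = [mask_iou(gt, p) for p in pred_masks]
        if not ious:
            continue
        best = int(np.argmax(ious))
        if ious[best] >= iou_thresh and best not in matched_preds:
            matched_preds.add(best)
            tp += 1
    prec = tp / len(pred_masks) if pred_masks else 0.0
    rec = tp / len(gt_masks) if gt_masks else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0
    return prec, rec, f1
```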
### 4.2 Implementation Details
Model settings. We use the same ResNet-50 backbone as HEAT [9]. We generate
multi-scale feature maps from the last three backbone stages without FPN. The
fourth scale feature map is obtained via a $3\times 3$ stride 2 convolution on
the final stage. All feature maps are reduced to 256 channels by a 1$\times$1
convolution. The Transformer consists of 6 encoder and 6 decoder layers with
256 channels. We use 8 heads and $N_{s}$=$4$ sampling points for the
deformable attention module. The number of room-level queries and corner-level
queries is set to $M=20$ and $N=40$.
Training. We use the Adam optimizer [17] with a weight decay of 1e-4.
Depending on the dataset size, we train the model on Structured3D for 500
epochs with an initial learning rate of 2e-4, and on SceneCAD for 400 epochs
with an initial learning rate of 5e-5. The learning rate decays by a factor of
0.1 for the last 20% of epochs. We set the coefficients for the matcher and losses to
$\lambda_{\text{cls}}$ = $2$, $\lambda_{\text{coord}}$ = $5$,
$\lambda_{\text{ras}}$ = $1$ and use a single TITAN RTX GPU with 24GB memory
for training.
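The step schedule described above amounts to a simple rule; the 0-indexing of epochs here is an assumption:

```python
def learning_rate(epoch, total_epochs, base_lr):
    """Step schedule: keep the base learning rate for the first 80% of
    training, then decay it by a factor of 0.1 for the final 20% of epochs.
    Epochs are 0-indexed."""
    if epoch >= 0.8 * total_epochs:
        return base_lr * 0.1
    return base_lr
```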
### 4.3 Comparison with State-of-the-art Methods
| Method | t(s) | Room IoU | Corner Prec. | Corner Rec. | Corner F1 | Angle Prec. | Angle Rec. | Angle F1 |
|---|---|---|---|---|---|---|---|---|
| Floor-SP [8] | 26 | 91.6 | 89.4 | 85.8 | 87.6 | 74.3 | 71.9 | 73.1 |
| HEAT [9] | 0.12 | 84.9 | 87.8 | 79.1 | 83.2 | 73.2 | 67.8 | 70.4 |
| RoomFormer (Ours) | 0.01 | 91.7 | 92.5 | 85.3 | 88.8 | 78.0 | 73.7 | 75.8 |

Table 2: Floorplan reconstruction on the SceneCAD val set [2].
Results are summarized in Tab. 1 for Structured3D and in Tab. 2 for SceneCAD.
We compare our method to a range of prior approaches that can be grouped into
two broad categories: Floor-SP [8] and MonteFloor [30] both rely on Mask R-CNN
to segment rooms, followed by learning-free optimization techniques to recover
the floorplan. HAWP [36] and LETR [35] are originally generic methods to
detect line segments and have been adapted to floorplan reconstruction in [9].
HEAT [9] is an end-to-end trainable neural model that first detects corners,
then links them via edge classification. Our RoomFormer outperforms all
previous methods on Structured3D (Tab. 1), increasing the F1 score by +1.9 for
rooms, +4.7 for corners and +2.9 for angles over the previous state of the
art, HEAT. RoomFormer is also the fastest of the tested methods, with
inference more than 10 times faster than HEAT. MonteFloor additionally employs
the Douglas-Peucker algorithm [13] as a post-processing step to simplify the
topology of its output polygons. By contrast, RoomFormer does not rely on any
post-processing.
On SceneCAD (Tab. 2), we compare with two representative methods for which
code is available, one from each of the optimization-based and fully-neural
categories: Floor-SP and HEAT. RoomFormer achieves a notable improvement over
both, especially on corner/angle precision. For more details, please see the
supplementary.

Cross-data generalization. We further evaluate the ability of our model to
generalize across datasets. To this end, we train on the Structured3D training
set and evaluate on the SceneCAD validation set (without fine-tuning on
SceneCAD). We compare with the current state-of-the-art end-to-end method,
HEAT, and report scores in Tab. 3. Our model generalizes better to unseen data
characteristics: it outperforms HEAT significantly in almost every metric,
particularly on room IoU (74.0 _vs._ 52.5). We attribute the better
generalization to the learned global reasoning capacity, rather than to the
separate corner detection and edge classification stages used in HEAT.
| Method | t(s) | Room IoU | Corner Prec. | Corner Rec. | Corner F1 | Angle Prec. | Angle Rec. | Angle F1 |
|---|---|---|---|---|---|---|---|---|
| HEAT [9] | 0.12 | 52.5 | 50.9 | 51.1 | 51.0 | 42.2 | 42.0 | 41.6 |
| RoomFormer (Ours) | 0.01 | 74.0 | 56.2 | 65.0 | 60.3 | 44.2 | 48.4 | 46.2 |
Table 3: Cross-data generalization. Models are trained on Structured3D train
set but evaluated on SceneCAD val set. Our method shows significant robustness
when the train-test domains differ.
| Method | Door/Window F1 | Room∗ F1 | Room F1 | Corner F1 | Angle F1 |
|---|---|---|---|---|---|
| SD-TQ (Fig. 4(a)) | 81.1 | 70.7 | 94.3 | 83.9 | 76.7 |
| TD-TQ (Fig. 4(b)) | 80.8 | 71.4 | 93.4 | 82.0 | 73.7 |
| TD-SQ (Fig. 4(c)) | 81.7 | 74.4 | 94.9 | 84.2 | 75.9 |
Table 4: Semantically-rich floorplan reconstruction scores on Structured3D
test set. The Room∗ metric is similar to Room, but additionally considers the
correct room type classification.
3D Scan Density Map MonteFloor [30] HEAT [9] Ours Ground Truth
Figure 5: Qualitative evaluations on Structured3D [38]. Best viewed in color
on a screen and zoomed in. Colors are assigned based on room locations,
_without_ semantic meaning.
3D Scan Density Map Floor-SP [8] HEAT [9] Ours Ground Truth
Figure 6: Qualitative evaluations on SceneCAD [2]. HEAT is affected by missing
corners and edges (last 4 rows). Floor-SP tends to produce redundant corners
due to its containment constraint (4th, 5th row). Our method is more robust in
these cases.
Qualitative results on Structured3D are shown in Fig. 5. The quality of
floorplans generated by two-stage pipelines is strongly affected by errors in
the first stage, _e.g._, missing rooms with MonteFloor (3rd, 4th row) and
missing corners/edges with HEAT (5th, 6th row). In contrast, our holistic
single-stage model produces more accurate predictions while capturing
geometric details. We observe a similar pattern in Fig. 6. HEAT suffers from
missing corners/edges when the input point cloud is sparse (last 4 rows),
while our RoomFormer handles these cases more robustly. Floor-SP forces the
generated polygon to completely contain its room segmentation mask which,
however, results in redundant corners (4th, 5th row). By contrast, RoomFormer
produces more plausible results without imposing any hard constraints.
### 4.4 Semantically-Rich Floorplans
The quantitative results on semantically-rich floorplans are summarized in
Tab. 4. We observe that separating the room and door/window decoding helps
improve room type classification, since rooms and doors/windows may have
significantly different geometric and semantic properties. However, the
second-best performing method is SD-TQ, where we use our original model with more
polygon queries to model the doors and windows. For the two variants with line
decoder, the single-level queries (TD-SQ) work better than the two-level
queries (TD-TQ), suggesting that for shapes represented by a fixed number of
parameters (_e.g._ , lines), single-level queries are sufficient. We show our
qualitative results on rich floorplan reconstruction in Fig. 7. For
visualization purposes, we use arcs to represent doors. Interestingly, our
single decoder variant (SD-TQ) incorrectly identifies a room as “bathroom”
instead of “misc” (3rd col.). However, this follows house design principles
better than the groundtruth, since each house usually has a bathroom.
Figure 7: Results of semantic-rich floorplans. Dashed lines represent windows,
arcs represent doors. The radius, number and orientation of arcs are
determined by the length and direction of the predicted lines. Best viewed in
color on a screen.
### 4.5 Analysis Experiments
Two-level _vs._ single-level queries. We model floorplans as two-level
queries. To validate this choice, we compare with single-level queries, where
a single query is responsible for predicting all ordered vertices of a room.
Tab. 5 shows that two-level queries greatly improve all metrics. The reason is
that the two-level design relaxes the role of each query to model a single
vertex rather than a sequence. Furthermore, it enables explicit interactions
between vertices of the same room, and vertices across adjacent rooms, while
single-level queries only enable room-level interactions.
Multi-scale feature maps. We leverage multi-scale feature maps to aggregate
both local and global contexts for joint vertex positioning and order
capturing. To validate this design, we conduct ablations by using only a
single-level feature map obtained from the last stage of ResNet-50. Tab. 6
shows that multi-scale feature maps significantly improve all metrics,
indicating that local and global contexts are crucial for the Transformer's
structured reasoning.
Iterative polygon refinement. We propose to directly learn vertex sequence
coordinates as queries, refine them iteratively layer-by-layer, and use the
updated positions as new reference points for deformable cross-attention. We
ablate this by removing the refinement process and keeping the reference
points static in intermediate layers while only updating the decoder
embeddings. Tab. 6 suggests that the refinement strategy significantly
improves the performance.
Loss functions. We use three loss components to supervise our network (Tab.
7). Since the vertex label classification loss is essential and cannot be
removed, we ablate for $\mathcal{L}_{\text{coord}}$ and
$\mathcal{L}_{\text{ras}}$. In the first experiment, we remove
$\mathcal{L}_{\text{coord}}$ and replace the matching cost for sequence
coordinates with a matching cost for the rasterized mask. This leads to a
significant drop in all metrics. Next, we only remove
$\mathcal{L}_{\text{ras}}$ which incurs a smaller drop in all metrics. We
conclude that the sequence coordinates regression loss is essential and the
rasterization loss serves as an auxiliary loss.
| Queries | Room Prec. | Room Rec. | Corner Prec. | Corner Rec. | Angle Prec. | Angle Rec. |
|---|---|---|---|---|---|---|
| Single-level | 74.4 | 73.4 | 65.1 | 58.9 | 61.4 | 55.6 |
| Two-level (Ours) | 96.5 | 95.3 | 91.2 | 82.8 | 88.3 | 80.3 |
Table 5: Query analysis. Comparison between two-level and single-level
queries. Scores are on Structured3D validation set.
| Multi-Scale Features | Polygon Refinement | Room Prec. | Room Rec. | Corner Prec. | Corner Rec. | Angle Prec. | Angle Rec. |
|---|---|---|---|---|---|---|---|
| - | ✓ | 93.9 | 92.9 | 87.8 | 79.6 | 83.3 | 75.6 |
| ✓ | - | 94.8 | 93.0 | 88.7 | 80.7 | 84.2 | 76.7 |
| ✓ | ✓ | 96.5 | 95.3 | 91.2 | 82.8 | 88.3 | 80.3 |
Table 6: Model analysis. Impact of multi-scale features and polygon
refinement. Scores are on Structured3D validation set.
| $\mathcal{L}_{\mathrm{cls}}$ | $\mathcal{L}_{\mathrm{coord}}$ | $\mathcal{L}_{\mathrm{ras}}$ | Room Prec. | Room Rec. | Corner Prec. | Corner Rec. | Angle Prec. | Angle Rec. |
|---|---|---|---|---|---|---|---|---|
| ✓ | - | ✓ | 84.3 | 83.2 | 75.4 | 68.8 | 70.6 | 64.5 |
| ✓ | ✓ | - | 96.0 | 94.5 | 89.9 | 81.7 | 87.0 | 79.2 |
| ✓ | ✓ | ✓ | 96.5 | 95.3 | 91.2 | 82.8 | 88.3 | 80.3 |
Table 7: Loss analysis. Impact of various losses. $\mathcal{L}_{\text{cls}}$
is always required for determining valid corners. The combination of
$\mathcal{L}_{\text{coord}}$ and $\mathcal{L}_{\text{ras}}$ yields best
results. Scores are on Structured3D val set.
## 5 Conclusion
In this work, we have introduced RoomFormer, a simple and direct model for 2D
floorplan reconstruction formulated as a polygon estimation problem. The
network learns to predict a varying number of rooms per floorplan, each room
represented as an ordered corner sequence of varying length. Our single-stage,
end-to-end trainable model shows significant improvements over prior multi-
stage and heuristics-driven methods, both in performance and speed metrics.
Moreover, it can be flexibly extended to reconstruct semantically-rich
floorplans. We hope our approach inspires more applications in polygonal
reconstruction tasks.
Acknowledgments We thank the authors of HEAT and MonteFloor for providing
results on Structured3D for better comparison. Theodora Kontogianni and
Francis Engelmann are postdoctoral research fellows at the ETH AI Center.
## References
* [1] Antonio Adan and Daniel Huber. 3D Reconstruction of Interior Wall Surfaces Under Occlusion and Clutter. In International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DPVT), 2011.
* [2] Armen Avetisyan, Tatiana Khanova, Christopher Choy, Denver Dash, Angela Dai, and Matthias Nießner. SceneCAD: Predicting Object Alignments and Layouts in RGB-D Scans. In European Conference on Computer Vision (ECCV), 2020.
* [3] Maarten Bassier and Maarten Vergauwen. Unsupervised Reconstruction of Building Information Modeling Wall Objects From Point Cloud Data. Automation in Construction, 120, 2020.
* [4] Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A Survey of Monte Carlo Tree Search Methods. IEEE Transactions on Computational Intelligence and AI in Games, 2012.
* [5] Angela Budroni and Jan Boehm. Automated 3D Reconstruction of Interiors From Point Clouds. International Journal of Architectural Computing, 8, 2010.
* [6] Ricardo Cabral and Yasutaka Furukawa. Piecewise Planar and Compact Floorplan Reconstruction From Images. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
* [7] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end Object Detection With Transformers. In European Conference on Computer Vision (ECCV), 2020.
* [8] Jiacheng Chen, Chen Liu, Jiaye Wu, and Yasutaka Furukawa. Floor-SP: Inverse CAD for Floorplans by Sequential Room-Wise Shortest Path. In International Conference on Computer Vision (ICCV), 2019.
* [9] Jiacheng Chen, Yiming Qian, and Yasutaka Furukawa. HEAT: Holistic Edge Attention Transformer for Structured Reconstruction. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
* [10] Bowen Cheng, Anwesa Choudhuri, Ishan Misra, Alexander Kirillov, Rohit Girdhar, and Alexander G Schwing. Mask2former for Video Instance Segmentation. arXiv preprint arXiv:2112.10764, 2021.
* [11] Bowen Cheng, Alex Schwing, and Alexander Kirillov. Per-pixel Classification Is Not All You Need for Semantic Segmentation. Advances in Neural Information Processing Systems (NeurIPS), 2021.
* [12] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
* [13] David H Douglas and Thomas K Peucker. Algorithms for the Reduction of the Number of Points Required to Represent A Digitized Line or Its Caricature. Cartographica: the International Journal for Geographic Information and Geovisualization, 1973.
* [14] Yasutaka Furukawa, Brian Curless, Steven M Seitz, and Richard Szeliski. Reconstructing Building Interiors From Images. In International Conference on Computer Vision (ICCV), 2009.
* [15] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In International Conference on Computer Vision (ICCV), 2017.
* [16] Satoshi Ikehata, Hang Yang, and Yasutaka Furukawa. Structured Indoor Modeling. In International Conference on Computer Vision (ICCV), 2015.
* [17] Diederik P Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations (ICLR), 2015.
* [18] Justin Lazarow, Weijian Xu, and Zhuowen Tu. Instance Segmentation With Mask-Supervised Polygonal Boundary Transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
* [19] Muxingzi Li, Florent Lafarge, and Renaud Marlet. Approximating Shapes in Images with Low-complexity Polygons. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
* [20] Justin Liang, Namdar Homayounfar, Wei-Chiu Ma, Yuwen Xiong, Rui Hu, and Raquel Urtasun. Polytransform: Deep Polygon Transformer for Instance Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
* [21] Cheng Lin, Changjian Li, and Wenping Wang. Floorplan-jigsaw: Jointly Estimating Scene Layout and Aligning Partial Scans. In International Conference on Computer Vision (ICCV), 2019.
* [22] Chen Liu, Jiaye Wu, and Yasutaka Furukawa. Floornet: A Unified Framework for Floorplan Reconstruction From 3D Scans. In European Conference on Computer Vision (ECCV), 2018.
* [23] Josep Lladós, Jaime López-Krahe, and Enric Martí. A System to Understand Hand-drawn Floor Plans Using Subgraph Isomorphism and Hough Transform. Machine Vision and Applications, 1997.
* [24] Tim Meinhardt, Alexander Kirillov, Laura Leal-Taixe, and Christoph Feichtenhofer. Trackformer: Multi-object Tracking With Transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
* [25] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. In International Conference on 3D Vision (3DV), 2016.
* [26] Aron Monszpart, Nicolas Mellado, Gabriel J Brostow, and Niloy J Mitra. RAPter: Rebuilding Man-made Scenes With Regular Arrangements of Planes. ACM Transactions on Graphics (TOG), 34, 2015.
* [27] Brian Okorn, Xuehan Xiong, Burcu Akinci, and Daniel Huber. Toward Automated Modeling of Floor Plans. In International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DPVT), 2010.
* [28] Victor Sanchez and Avideh Zakhor. Planar 3D Modeling of Building Interiors From Point Cloud Data. In International Conference on Image Processing (ICIP), 2012.
* [29] Alexander Schrijver. Theory of Linear and Integer Programming. John Wiley & Sons, 1998.
* [30] Sinisa Stekovic, Mahdi Rad, Friedrich Fraundorfer, and Vincent Lepetit. MonteFloor: Extending MCTS for Reconstructing Accurate Large-Scale Floor Plans. In International Conference on Computer Vision (ICCV), 2021.
* [31] Bin Tan, Nan Xue, Song Bai, Tianfu Wu, and Gui-Song Xia. Planetr: Structure-guided Transformers for 3D Plane Recovery. In International Conference on Computer Vision (ICCV), 2021.
* [32] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. Advances in Neural Information Processing Systems (NeurIPS), 2017.
* [33] Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. SegFormer: Simple and Efficient Design for Semantic Segmentation With Transformers. Advances in Neural Information Processing Systems (NeurIPS), 2021.
* [34] Xuehan Xiong, Antonio Adan, Burcu Akinci, and Daniel Huber. Automatic Creation of Semantically Rich 3D Building Models From Laser Scanner Data. Automation in Construction, 31, 2013.
* [35] Yifan Xu, Weijian Xu, David Cheung, and Zhuowen Tu. Line Segment Detection Using Transformers Without Edges. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
* [36] Nan Xue, Tianfu Wu, Song Bai, Fudong Wang, Gui-Song Xia, Liangpei Zhang, and Philip HS Torr. Holistically-attracted Wireframe Parsing. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
* [37] Xiang Zhang, Yongwen Su, Subarna Tripathi, and Zhuowen Tu. Text Spotting Transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
* [38] Jia Zheng, Junfei Zhang, Jing Li, Rui Tang, Shenghua Gao, and Zihan Zhou. Structured3D: A Large Photo-Realistic Dataset for Structured 3D Modeling. In European Conference on Computer Vision (ECCV), 2020.
* [39] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable DETR: Deformable Transformers for End-to-end Object Detection. In International Conference on Learning Representations (ICLR), 2021.
In the appendix, we first provide additional ablation studies on the attention
design and numbers of the two-level queries (Sec. A). Then we describe an
alternative ‘tight’ room layout used by some baselines and report the scores
of our model trained with this layout (Sec. B). We also provide further
implementation details and more results on semantically-rich floorplans (Sec.
C). Finally, we provide details for running competing methods Floor-SP [8] and
HEAT [9], as well as a learning-free baseline on the SceneCAD dataset [2]
(Sec. D).
## Appendix A Additional Ablation Studies
### A.1 Self-attention of the two-level queries
In the main paper, the Transformer decoder performs self-attention on all
vertex-level queries regardless of the polygon they belong to (Tab. 8, (2)).
Alternatively, in this experiment, we restrict the vertex-level queries to
attend only to vertices within the same polygon. Implementation-wise, we add
an attention mask that prevents attention from vertex-level queries in one
polygon to vertex-level queries of another polygon. We find that this
restricted form of attention leads to an overall reduced performance (Tab. 8,
(1)). We conclude that self-attention between vertices across all polygons
plays an important role in structured reasoning. In particular, the attention
mechanism across multiple polygons seems to help fine-tune the vertex
positions of one polygon by attending to the vertex positions of its
neighboring polygons.
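The mask for the restricted, intra-polygon setting can be built as a block-diagonal pattern. The NumPy sketch below assumes queries are laid out as contiguous per-polygon blocks and follows the common convention (e.g. PyTorch's `attn_mask`) that `True` marks a *blocked* attention pair; both are assumptions about the implementation:

```python
import numpy as np

def intra_polygon_attention_mask(num_polygons, verts_per_polygon):
    """Boolean mask restricting each vertex-level query to attend only to
    queries of the same polygon (setting (1) above). True = blocked."""
    n = num_polygons * verts_per_polygon
    mask = np.ones((n, n), dtype=bool)
    for p in range(num_polygons):
        s, e = p * verts_per_polygon, (p + 1) * verts_per_polygon
        mask[s:e, s:e] = False  # allow attention within the polygon
    return mask
```

Dropping this mask entirely recovers the unrestricted inter-polygon attention of setting (2).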
| Settings | Room Prec. | Room Rec. | Room F1 | Corner Prec. | Corner Rec. | Corner F1 | Angle Prec. | Angle Rec. | Angle F1 |
|---|---|---|---|---|---|---|---|---|---|
| (1) Intra-Poly. Attn. | 95.5 | 94.2 | 94.8 | 90.3 | 81.9 | 85.9 | 87.2 | 79.1 | 83.0 |
| (2) Inter-Poly. Attn. | 96.5 | 95.3 | 95.9 | 91.2 | 82.8 | 86.8 | 88.3 | 80.3 | 84.1 |
Table 8: Impact of self-attention design on Structured3D val. Using self-
attention only between the vertices of a single polygon leads to a drop in the
F1 score, showing the usefulness of extending the effect of vertices across
all polygons.
### A.2 Number of queries
We study the effect of different numbers of queries at each level in Tab. 9.
The number of queries is chosen based on (1) the maximum number of rooms, and
of corners per room, in the training dataset, and (2) computational
efficiency. Although the number of queries empirically has a
small impact on reconstruction quality, the results highlight the robustness
of our model towards fewer queries and justify the associated gain in runtime
(16 ms → 8 ms), especially since we aim to be much faster than prior work. We
chose $M=20$ and $N=40$ as a good compromise between model performance,
inference time, as well as training time.
| $M$ | $N$ | t (ms) | Room Prec. | Room Rec. | Corner Prec. | Corner Rec. | Angle Prec. | Angle Rec. |
|---|---|---|---|---|---|---|---|---|
| 15 | 30 | 8.4 | 95.7 | 94.5 | 90.2 | 82.2 | 86.5 | 79.0 |
| 20 | 40 | 10.8 | 96.3 | 95.0 | 90.8 | 82.7 | 87.8 | 80.0 |
| 20 | 50 | 12.8 | 96.2 | 94.6 | 90.5 | 82.1 | 87.6 | 79.5 |
| 30 | 40 | 13.9 | 96.8 | 95.5 | 91.1 | 82.7 | 88.1 | 80.1 |
| 30 | 50 | 16.8 | 96.0 | 94.8 | 90.6 | 82.4 | 87.5 | 79.7 |
Table 9: Analysis on number of queries. The reported scores are on
Structured3D validation set averaged over three runs.
## Appendix B Tight Room Layouts
| Method | Room Prec. | Room Rec. | Room F1 | Corner Prec. | Corner Rec. | Corner F1 | Angle Prec. | Angle Rec. | Angle F1 |
|---|---|---|---|---|---|---|---|---|---|
| Floor-SP [8] | 89. | 88. | 88. | 81. | 73. | 76. | 80. | 72. | 75. |
| MonteFloor [30] | 95.6 | 94.4 | 95.0 | 88.5 | 77.2 | 82.5 | 86.3 | 75.4 | 80.5 |
| HAWP [36] | 77.7 | 87.6 | 82.3 | 65.8 | 77.0 | 70.9 | 59.9 | 69.7 | 64.4 |
| LETR [35] | 94.5 | 90.0 | 92.2 | 79.7 | 78.2 | 78.9 | 72.5 | 71.3 | 71.9 |
| HEAT [9] | 96.9 | 94.0 | 95.4 | 81.7 | 83.2 | 82.5 | 77.6 | 79.0 | 78.3 |
| RoomFormer (Ours) | 97.9 | 96.7 | 97.3 | 89.1 | 85.3 | 87.2 | 83.0 | 79.5 | 81.2 |
| RoomFormer∗ (Ours) | 97.2 | 96.2 | 96.7 | 91.6 | 83.4 | 87.3 | 88.3 | 80.5 | 84.2 |
Table 10: Floorplan reconstruction scores on the Structured3D test set. For a
complete and fair comparison, we complement Tab. 1 from the main paper with
the result of our model trained on tight room layouts, indicated by ∗. Cyan
and orange mark the two top scores.
We represent floorplans as a set of closed polygons, which is consistent with
the groundtruth annotations in Structured3D [38]. The advantage of this
representation is that the thickness of inner walls is implicitly provided by
the distance between neighboring room polygons (see Fig. 8, _right column_).
Alternatively, HEAT [9] predicts floorplans as planar graphs, which means that
the walls of adjacent rooms are represented by a single shared edge in the
graph. In particular, this simpler graph representation can approximate the
true floorplan only up to the thickness of the walls. Furthermore, to train
HEAT, an additional pre-processing step is required which merges the
groundtruth edges of neighboring polygons into a single shared edge. It is
important to note that the evaluation still runs on the unmodified groundtruth
floorplans. Therefore, during evaluation, HEAT performs a post-processing step
to obtain a set of closed room polygons from the estimated planar graph.
To show that the improved performance of our model is independent of the
layout representation, we train a model on the same ‘tight’ room layout
representation as HEAT and report the scores in Tab. 10 marked with a star
(∗). Note again that all models are evaluated on the same non-processed
groundtruth annotations of Structured3D [38]. We observe that the room metrics
of the model trained on tight room layouts drop slightly compared to our
original model, while still outperforming all other methods. The drop in
scores is not surprising since the modified training data is only an
approximation of the original groundtruth used for evaluation. Furthermore,
the room metrics penalize overlap between rooms [30]. Results from tight room
layouts are more likely to overlap, which can potentially degrade the room
metrics. Interestingly, the angle metrics improve, especially when it comes to
angle precision, which already outperforms MonteFloor. In the tight room
layout, we encourage corners in adjacent walls to share the same location and
have exactly complementary angles. This implicit constraint might help the
model to reason about angle relationships, thus benefiting the angle metrics.
More qualitative results can be found in Fig. 8.
3D Scan MonteFloor [30] HEAT [9] Ours∗ Ours Ground Truth
Figure 8: More qualitative evaluations on Structured3D [38]. Colors are
assigned based on room locations, _without_ semantic meaning. Ours∗ denotes
the result of our model trained on tight room layout. (Best viewed in color on
a screen and zoomed in.)
## Appendix C Semantically-Rich Floorplan Models
| Method | Door/Window Prec. | Door/Window Rec. | Door/Window F1 | Room∗ Prec. | Room∗ Rec. | Room∗ F1 | Room Prec. | Room Rec. | Room F1 | Corner Prec. | Corner Rec. | Corner F1 | Angle Prec. | Angle Rec. | Angle F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SD-TQ | 83.4 | 79.0 | 81.1 | 71.5 | 70.0 | 70.7 | 95.3 | 93.3 | 94.3 | 86.0 | 81.8 | 83.9 | 78.6 | 74.9 | 76.7 |
| TD-TQ | 82.6 | 79.1 | 80.8 | 71.9 | 70.9 | 71.4 | 94.0 | 92.8 | 93.4 | 84.2 | 80.0 | 82.0 | 75.6 | 71.9 | 73.7 |
| TD-SQ | 85.6 | 78.2 | 81.7 | 74.8 | 74.0 | 74.4 | 95.4 | 94.4 | 94.9 | 85.8 | 82.6 | 84.2 | 77.3 | 74.5 | 75.9 |
Table 11: Detailed scores of semantically-rich floorplan reconstruction on
Structured3D test set [38].
We proposed three model variants for reconstructing semantically-rich
floorplans: (1) SD-TQ: single decoder with two-level queries. (2) TD-TQ: two
decoders with two-level queries. (3) TD-SQ: two decoders with single-level
queries in the line decoder. Here, we explain additional implementation
details along with more results. We also describe our heuristic for drawing
the door arcs based on the width of the door segments.
### C.1 Implementation details
We describe the implementation details of the three model variants that extend
our RoomFormer architecture to predict room types, doors and windows (Fig. 4,
main paper).

SD-TQ. Single decoder with two-level queries. We take our original
architecture and increase the number of room-level queries $M$ from 20 to 70,
while keeping the number of corner-level queries $N$ unchanged. In addition,
since there is no need to rasterize lines, we remove the rasterization loss
$\mathcal{L}_{\text{ras}}$. We predict all semantic types (room types, door or
window) from the aggregated room-level features of the output embedding of the
polygon decoder, as described in Sec. 3.5 of the main paper.

TD-TQ. Two decoders with two-level queries. We add a separate line decoder
with the same architecture as the polygon decoder. In the line decoder, we set
the number of line-level queries to 50 and the number of corner-level queries
to 2. We remove the rasterization loss $\mathcal{L}_{\text{ras}}$ term for the
line decoder (it is still used in the polygon decoder). Room types are
predicted from the aggregated room-level features of the output embedding of
the polygon decoder, while door/window types are predicted from the aggregated
line-level features of the output embedding of the line decoder.

TD-SQ. Two decoders with single-level queries. We add a separate line decoder
that takes single-level queries and predicts the coordinates of the two
endpoints of each line directly, similar to [35]. We set the number of
single-level queries to 50. Room types are predicted as in TD-TQ; door/window
types are predicted based on the output embedding of the line decoder.
### C.2 Plotting doors
To obtain floorplan illustrations that are closer to actual floorplans used by
architects, we follow the typical notation and represent doors as arcs. Note
that this step is only for visualization purposes and is not part of the
annotations in the training datasets. In particular, we cannot predict towards
which side a door opens and if it is a double door or a single door. The
visualization used in this paper is based on a heuristic which creates a
double door when the predicted width of the door exceeds a certain threshold.
The exact heuristic is shown in Algorithm 1.
Algorithm 1 Algorithm for plotting doors as arcs
1:a set of predicted lines $L=\{l_{i}\}_{i=1}^{N^{l}}$, where
$l_{i}=\left(\left(x_{1}^{i},y_{1}^{i}\right),\left(x_{2}^{i},y_{2}^{i}\right)\right)$
2:plotting of a set of arcs
3:calculate the median length $m$ of all the lines $L$
4:for $l_{i}$ in $L$ do
5: if $\left\|y_{2}^{i}-y_{1}^{i}\right\|>\left\|x_{2}^{i}-x_{1}^{i}\right\|$
then
6: if $y_{2}^{i}>y_{1}^{i}$ then $e_{1}^{i}=(x_{1}^{i},y_{1}^{i})$ and
$e_{2}^{i}=(x_{2}^{i},y_{2}^{i})$
7: else $e_{1}^{i}=(x_{2}^{i},y_{2}^{i})$ and
$e_{2}^{i}=(x_{1}^{i},y_{1}^{i})$
8: end if
9: if $\text{len}(l_{i})<1.5\times m$ then draw a quadrant centered at
$e_{1}^{i}$ with a radius of len($l_{i}$) from $e_{2}^{i}$ clockwise
10: else draw two opposite quadrants centered at $e_{1}^{i}$ and $e_{2}^{i}$
with a radius of $\text{len}(l_{i})/2$ on the right of
$\overrightarrow{e_{1}^{i}e_{2}^{i}}$
11: end if
12: else
13: if $x_{2}^{i}>x_{1}^{i}$ then $e_{1}^{i}=(x_{2}^{i},y_{2}^{i})$ and
$e_{2}^{i}=(x_{1}^{i},y_{1}^{i})$
14: else $e_{1}^{i}=(x_{1}^{i},y_{1}^{i})$ and
$e_{2}^{i}=(x_{2}^{i},y_{2}^{i})$
15: end if
16: if $\text{len}(l_{i})<1.5\times m$ then draw a quadrant centered at
$e_{1}^{i}$ with a radius of len($l_{i}$) from $e_{2}^{i}$ counterclockwise
17: else draw two opposite quadrants centered at $e_{1}^{i}$ and $e_{2}^{i}$
with a radius of $\text{len}(l_{i})/2$ on the left of
$\overrightarrow{e_{1}^{i}e_{2}^{i}}$
18: end if
19: end if
20:end for
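Algorithm 1 can be expressed as runnable Python. The sketch below computes arc descriptors instead of drawing them; the descriptor format (`"single"`/`"double"` tuples) is our own choice for illustration and not part of the paper's code:

```python
import math
import statistics

def door_arcs(lines):
    """Apply the door-plotting heuristic of Algorithm 1.

    Each line is ((x1, y1), (x2, y2)). Returns, per line, either
    ("single", hinge, radius, direction) for one quadrant, or
    ("double", hinge1, hinge2, radius, side) for two opposite quadrants.
    """
    m = statistics.median(math.dist(a, b) for a, b in lines)
    arcs = []
    for (x1, y1), (x2, y2) in lines:
        vertical = abs(y2 - y1) > abs(x2 - x1)
        # Pick the hinge endpoint e1 according to the line orientation.
        if vertical:
            e1, e2 = ((x1, y1), (x2, y2)) if y2 > y1 else ((x2, y2), (x1, y1))
        else:
            e1, e2 = ((x2, y2), (x1, y1)) if x2 > x1 else ((x1, y1), (x2, y2))
        length = math.dist(e1, e2)
        if length < 1.5 * m:
            # Single door: one quadrant about e1, swung from e2.
            arcs.append(("single", e1, length, "cw" if vertical else "ccw"))
        else:
            # Double door: two opposite quadrants of half radius.
            arcs.append(("double", e1, e2, length / 2,
                         "right" if vertical else "left"))
    return arcs
```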
3D Scan SD-TQ TD-TQ TD-SQ Ground Truth
Figure 9: Additional qualitative results on semantically-rich floorplans.
(Best viewed in color on a screen and zoomed in.)
### C.3 Additional results
Tab. 11 complements Tab. 4 in the main paper by providing more detailed scores
of the three model variants on semantically-rich floorplan reconstruction. By
comparing the single-decoder variant (SD-TQ) with the two-decoder variants
(TD-TQ and TD-SQ), we find that separate decoders can help improve room type
classification. Our polygon queries are designed for geometries with a varying
number of vertices. Since a line has a fixed number of 2 vertices, the single-
level query variant (TD-SQ) works better than the two-level query variant (TD-
TQ). We provide more qualitative results in Fig. 9.
## Appendix D Running Competing Approaches
For the comparison on the SceneCAD dataset [2] in the main paper, we select two representative methods, one from each of the optimization-based and fully-neural categories: Floor-SP [8] and HEAT [9], which offer state-of-the-art results in floorplan reconstruction and have public codebases. For a more complete comparison, we also report the performance of a heuristic-guided pipeline that is free of any deep learning components. Below we describe in detail how we adapted these methods to the SceneCAD dataset [2].
#### Floor-SP:
We used the official implementation with a change in the sequential room-wise shortest path module. We project the 3D scans into density images of size 256$\times$256 pixels. Unlike Structured3D, the SceneCAD dataset usually contains only one room per scene, which results in larger occupancy pixel areas for a single room. While this does not cause problems for HEAT and our approach, it can lead to a large search space for the sequential room-wise shortest path module in Floor-SP, since each room mask contains a large number of pixels. Using Floor-SP's default settings, we observe that the solver cannot find a solution for many scenes due to the computational complexity of the larger number of pixels per room. Therefore, we down-sample the density map to 64$\times$64 pixels (a size similar to that of a single room in the density map of Structured3D) to reduce the search space. Note that we train the other modules (room mask extraction and corner/edge detection) on the original density map without down-sampling.
#### HEAT:
We used the official implementation with a batch size of 10. We trained the model for 400 epochs and found that longer training did not further improve the performance. For the experiments on cross-data generalization, we directly load the released official checkpoints trained on Structured3D and evaluate them on SceneCAD.
#### Non-learned baseline:
We first project the 3D scan along the vertical axis into an occupancy map of
size 256$\times$256 pixels. A pixel is occupied if at least one point is projected to
this pixel. To mitigate the impact of missing scans, we apply dilation and
erosion to fill holes in the occupancy map. Then we employ a learning-free
polygon vectorization algorithm [19] to extract closed polygons from the
occupancy map, finally followed by a Douglas-Peucker algorithm [13] to remove
redundant corners.
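The corner-removal step can be illustrated with a minimal pure-Python Douglas-Peucker sketch. This is an illustration only; the actual pipeline uses the implementations of [19] and [13].

```python
import math

def douglas_peucker(points, eps):
    """Simplify a polyline: keep the interior point farthest from the
    chord between the endpoints if it deviates by more than eps, and
    recurse on both halves; otherwise keep only the endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # perpendicular distance of p from the chord (x1,y1)-(x2,y2)
        px, py = p
        num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
        den = math.hypot(x2 - x1, y2 - y1) or 1.0
        return num / den

    idx = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[idx]) > eps:
        left = douglas_peucker(points[:idx + 1], eps)
        right = douglas_peucker(points[idx:], eps)
        return left[:-1] + right
    return [points[0], points[-1]]
```

Nearly collinear corners within the tolerance `eps` collapse onto the chord, while genuine corners (such as an L-shaped wall junction) survive.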
Method | t(s) | Room IoU | Corner Prec. | Corner Rec. | Corner F1 | Angle Prec. | Angle Rec. | Angle F1
---|---|---|---|---|---|---|---|---
Non-learned | 0.22 | 86.0 | 66.0 | 80.8 | 72.7 | 45.0 | 55.2 | 49.6
Floor-SP [8] | 26 | 91.6 | 89.4 | 85.8 | 87.6 | 74.3 | 71.9 | 73.1
HEAT [9] | 0.12 | 84.9 | 87.8 | 79.1 | 83.2 | 73.2 | 67.8 | 70.4
RoomFormer (Ours) | 0.01 | 91.7 | 92.5 | 85.3 | 88.8 | 78.0 | 73.7 | 75.8
Table 12: Floorplan reconstruction on the SceneCAD val set [2].
We complement Tab. 2 of the main paper with the result of the non-learned
baseline (the 1st row in Tab. 12). We report the running time of the polygon
vectorization process. Surprisingly, the pipeline achieves a room IoU of 86.0
and a corner recall of 80.8. However, the angle metrics indicate that those
polygons usually fail to accurately describe the geometry of the actual room
shapes. This suggests that the non-learned baseline can only capture the rough
shape of the floorplan compared with methods that utilize deep learning.
# Directed flow of light flavor hadrons for Au+Au collisions at
$\sqrt{s_{NN}}=$ 7.7 - 200 GeV
Tribhuban Parida <EMAIL_ADDRESS> and Sandeep Chatterjee <EMAIL_ADDRESS>
Department of Physical Sciences, Indian Institute of Science Education and Research Berhampur, Transit Campus (Govt ITI), Berhampur-760010, Odisha, India
###### Abstract
We have studied the directed flow of light-flavor hadrons for Au+Au collisions at $\sqrt{s_{NN}}=$ 7.7 - 200 GeV. The initial condition is taken from a suitable Glauber model and is further evolved within the framework of relativistic hydrodynamics. Model calculations of the rapidity-odd directed flow ($v_{1}$) of identified light-flavor hadrons are compared with the available experimental data after suitably calibrating the initial condition to describe the rapidity dependence of the charged particle multiplicity and the net-proton yield. For a reasonable choice of the initial condition, we are able to describe the measured rapidity and beam energy dependence of the identified hadron $v_{1}$, including the observed $v_{1}$ splitting between baryons and anti-baryons.
## I Introduction
Heavy-ion collisions in the Beam Energy Scan (BES) program at the Relativistic Heavy Ion Collider (RHIC) create a QCD medium with a varying amount of net baryon density in the mid-rapidity region Adamczyk _et al._ (2017), which allows one to probe a large region of the QCD phase diagram. This provides a unique opportunity to understand the dynamics of conserved charges in the QCD medium and to study the nature of the phase transition from the Quark Gluon Plasma (QGP) to the hadronic medium at finite baryon chemical potential. The beam energy dependence of the rapidity-odd directed flow ($v_{1}$) of identified particles has been proposed as one of the relevant observables to probe the nature of the QCD phase transition Rischke _et al._ (1995); Rischke and Gyulassy (1996); Hung and Shuryak (1995); Steinheimer _et al._ (2014); Ivanov and Soldatov (2015, 2016).
The directed flow $v_{1}(y)$ is defined as the first harmonic coefficient in the Fourier series expansion of the azimuthal distribution of the produced particles relative to the reaction plane $\Psi_{\text{RP}}$,
$\frac{d^{2}N}{dyd\phi}=\frac{1}{2\pi}\frac{dN}{dy}\left(1+2\sum_{n=1}^{\infty}v_{n}(y)\cos{\left[n(\phi-\Psi_{\text{RP}})\right]}\right)$
(1)
where $y$ and $\phi$ are the longitudinal rapidity and azimuthal angle of a produced particle, respectively. Breaking of forward-backward symmetry generates a non-zero $v_{1}(y)$, which is an odd function of rapidity in symmetric heavy-ion collisions.
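As a concrete illustration of Eq. 1, the $n=1$ coefficient in a rapidity bin is simply the average of $\cos(\phi-\Psi_{\text{RP}})$ over particles. The following sketch (ours; the sampling setup is purely a toy) recovers a known input $v_{1}$ from rejection-sampled azimuthal angles:

```python
import numpy as np

def v1(phis, psi_rp=0.0):
    """Directed flow coefficient for one rapidity bin: the average of
    cos(phi - Psi_RP), i.e. the n = 1 term of Eq. 1."""
    phis = np.asarray(phis, dtype=float)
    return float(np.mean(np.cos(phis - psi_rp)))

# toy check: rejection-sample phi from dN/dphi ∝ 1 + 2 v1 cos(phi)
# with an input value v1 = 0.1
rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, 200_000)
w = 1.0 + 2 * 0.1 * np.cos(phi)
accepted = phi[rng.uniform(0.0, w.max(), phi.size) < w]
```

The estimator `v1(accepted)` should come out close to the input value 0.1 up to statistical fluctuations.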
The STAR collaboration has measured the energy dependence of the mid-rapidity slope of the directed flow for various identified particles Adamczyk _et al._ (2014, 2018). A minimum in the slope of the proton $v_{1}$ has been observed between $\sqrt{s_{NN}}=11.5$ GeV and $\sqrt{s_{NN}}=19.6$ GeV. This has been attributed to a softening of the QCD equation of state (EoS) Steinheimer _et al._ (2022); Nara _et al._ (2016). Transport model calculations have shown that the rapidity-odd directed flow is sensitive to the nature of the EoS Steinheimer _et al._ (2022); Nara _et al._ (2016); Konchakovski _et al._ (2014); Ivanov and Soldatov (2015); Nara and Ohnishi (2022) and provides a possible hint of a first-order phase transition of QCD matter at large baryon chemical potential Steinheimer _et al._ (2022); Nara _et al._ (2016). It has further been noticed that the reported experimental data show a splitting between the slopes of the baryon and anti-baryon directed flow which increases with decreasing collision energy. Recently, a hydrodynamic model calculation at $\sqrt{s_{\textrm{NN}}}=200$ GeV has shown that such a splitting may also arise from an initially inhomogeneous deposition of baryons in the transverse plane Bozek (2022).
The directed flow is generated as a response to the initial pressure asymmetry
and carries information about the early stage of collisions Csernai and
Rohrich (1999); Snellings _et al._ (2000); Adil and Gyulassy (2005);
Becattini _et al._ (2015); Bozek and Wyskiel (2010); Ryu _et al._ (2021);
Jiang _et al._ (2022). Therefore, model-to-data comparison of the $v_{1}$ of identified hadrons has the potential to constrain the initial profiles of both matter and baryons, which in turn serve as crucial ingredients for studying the dynamics of conserved charges.
There have been several attempts to capture the beam energy dependence of
directed flow of identified particles in transport Chen _et al._ (2010); Guo
_et al._ (2012); Nara _et al._ (2022) and hybrid models Steinheimer _et al._
(2014); Ivanov and Soldatov (2015); Shen and Alzhrani (2020); Du _et al._ .
However, a consistent approach that qualitatively captures the experimental
trends across beam energies is missing Singha _et al._ (2016).
There have been efforts to construct frameworks for the three-dimensional
evolution of the QCD conserved charges along with energy density for the RHIC
BES Karpenko _et al._ (2015); Du and Heinz (2020); Li and Shen (2018); Shen
and Alzhrani (2020); Denicol _et al._ (2018); Wu _et al._ (2022); De _et
al._ (2022); Fotakis _et al._ (2020). One of the important goals in building
such frameworks is to constrain the transport coefficients related to charge
diffusion. Such attempts are hindered mainly by the large uncertainties in the
initial matter and baryon deposition. An additional complication arises due to
the large passage time of the colliding nuclei at lower
$\sqrt{s_{\textrm{NN}}}$, which consequently increases the initial proper time
when hydrodynamics becomes applicable. There have been a few attempts to implement a dynamical initialization condition for hydrodynamic evolution Akamatsu _et al._ (2018); Du _et al._ (2019); Shen and Schenke (2022); De _et al._ (2022); Okai _et al._ (2017); Shen and Schenke (2022, 2018). There have also been attempts to propose simple geometrical ansätze that can qualitatively capture the trends in the data Denicol _et al._ (2018); Shen and Alzhrani (2020); Ryu _et al._ (2021); Bozek (2022); Du _et al._ . In this work, we follow the latter approach. We have recently proposed an initial baryon profile Parida and Chatterjee (2022) which, when coupled to a tilted matter profile Bozek and Wyskiel (2010), provides a reasonably good description of the identified hadron $v_{1}$ at $\sqrt{s_{\textrm{NN}}}=19.6$ GeV and 200 GeV. In this work, we test the performance of this newly proposed initial condition model across beam energies ranging from $\sqrt{s_{NN}}=$ 200 GeV down to 7.7 GeV. In addition, the centrality and transverse momentum dependence of $v_{1}$ are presented and compared to available measurements.
In this study, we have used a multistage hybrid framework (hydrodynamic evolution + hadronic transport) for the simulations at different $\sqrt{s_{\textrm{NN}}}$. In the next section, we describe the initial condition model which is used as an input to the hybrid framework. The transport coefficients and EoS used during the hydrodynamic evolution are presented in Sec. III. The procedure for selecting the model parameters to capture the experimental results is explained in Sec. IV. We present the results in Sec. V, and Sec. VI is devoted to summarizing the current study with some concluding remarks.
## II Initial condition
We have adopted a procedure similar to that described in Shen and Alzhrani (2020); Denicol _et al._ (2018) to prepare an event-averaged profile of the initial energy and net baryon density. The participant and binary collision sources obtained from each MC Glauber event are rotated by the second-order participant plane angle and then smeared out in the transverse plane. The smearing profile is assumed to be a Gaussian with parametric width $\sigma_{\perp}$. Profiles are prepared by averaging over 25,000 initial configurations.
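A minimal sketch of the smearing step follows (ours; the grid, widths and source positions are illustrative, and the participant-plane rotation is omitted). Each source deposits a normalised 2D Gaussian of width $\sigma_{\perp}$; summing over sources gives one event's profile, and averaging many events gives the event-averaged profile.

```python
import numpy as np

def smeared_profile(sources, xs, ys, sigma_perp):
    """Deposit one unit-normalised 2D Gaussian of width sigma_perp per
    source at (x0, y0) on the transverse grid and return their sum."""
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    rho = np.zeros_like(X, dtype=float)
    norm = 1.0 / (2.0 * np.pi * sigma_perp**2)
    for x0, y0 in sources:
        rho += norm * np.exp(-((X - x0)**2 + (Y - y0)**2)
                             / (2.0 * sigma_perp**2))
    return rho
```

By construction the grid integral of the profile equals the number of sources, which makes the overall normalization (e.g. $\epsilon_{0}$) a separate knob.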
Assuming asymmetric matter deposition by a participant along rapidity, the form of the initial energy density $\epsilon(x,y,\eta_{s};\tau_{0})$ deposited at a constant proper time $\tau_{0}$ is taken as Bozek and Wyskiel (2010),
$\displaystyle\epsilon(x,y,\eta_{s})$ $\displaystyle=$
$\displaystyle\epsilon_{0}\left[\left(N_{+}(x,y)f_{+}(\eta_{s})+N_{-}(x,y)f_{-}(\eta_{s})\right)\right.$
(2)
$\displaystyle\left.\times\left(1-\alpha\right)+N_{coll}(x,y)\epsilon_{\eta_{s}}\left(\eta_{s}\right)\alpha\right]$
where $N_{+}(x,y)$ and $N_{-}(x,y)$ are the participant densities of the forward- and backward-moving nuclei, respectively, $N_{coll}(x,y)$ is the contribution of binary collision sources at each transverse position $(x,y)$, and $\alpha$ is the hardness factor. $f_{+,-}(\eta_{s})$ are the asymmetric rapidity envelope functions for the energy density $\epsilon$,
$f_{+,-}(\eta_{s})=\epsilon_{\eta_{s}}(\eta_{s})\epsilon_{F,B}(\eta_{s})$ (3)
with
$\epsilon_{F}(\eta_{s})=\begin{cases}0,&\text{if }\eta_{s}<-\eta_{m}\\\
\frac{\eta_{s}+\eta_{m}}{2\eta_{m}},&\text{if
}-\eta_{m}\leq\eta_{s}\leq\eta_{m}\\\ 1,&\text{if
}\eta_{m}<\eta_{s}\end{cases}$ (4)
and
$\epsilon_{B}(\eta_{s})=\epsilon_{F}(-\eta_{s})$ (5)
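The envelope functions of Eqs. 4 and 5 are simple to evaluate; a short sketch assuming the piecewise form above:

```python
import numpy as np

def eps_F(eta_s, eta_m):
    """Forward ramp of Eq. 4: 0 for eta_s < -eta_m, a linear ramp on
    [-eta_m, eta_m], and 1 for eta_s > eta_m."""
    return np.clip((np.asarray(eta_s, dtype=float) + eta_m) / (2.0 * eta_m),
                   0.0, 1.0)

def eps_B(eta_s, eta_m):
    """Backward ramp of Eq. 5: the mirror image of eps_F."""
    return eps_F(-np.asarray(eta_s, dtype=float), eta_m)
```

It is this linear ramp, weighting forward- and backward-going participants differently, that tilts the energy density profile in the reaction plane.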
The form of the initial baryon profile is
$n_{B}\left(x,y,\eta_{s}\right)=N_{B}\left[W_{+}^{B}(x,y)f_{+}^{n_{B}}(\eta_{s})+W_{-}^{B}(x,y)f_{-}^{n_{B}}(\eta_{s})\right]$
(6)
where $W_{\pm}^{B}(x,y)$ are the weight factors for depositing the net baryon in the transverse plane, which have the following form:
$W_{\pm}^{B}(x,y)=\left(1-\omega\right)N_{\pm}(x,y)+\omega N_{coll}(x,y)$ (7)
The net baryon distribution in the transverse plane can be tuned by varying
the phenomenological parameter $\omega$. The net baryon density rapidity
envelope profiles are taken as Denicol _et al._ (2018); Shen and Alzhrani
(2020),
$\displaystyle f_{+}^{n_{B}}\left(\eta_{s}\right)$ $\displaystyle=$
$\displaystyle\left[\theta\left(\eta_{s}-\eta_{0}^{n_{B}}\right)\exp{-\frac{\left(\eta_{s}-\eta_{0}^{n_{B}}\right)^{2}}{2\sigma_{B,+}^{2}}}+\right.$
(8)
$\displaystyle\left.\theta\left(\eta_{0}^{n_{B}}-\eta_{s}\right)\exp{-\frac{\left(\eta_{s}-\eta_{0}^{n_{B}}\right)^{2}}{2\sigma_{B,-}^{2}}}\right]$
and
$\displaystyle f_{-}^{n_{B}}\left(\eta_{s}\right)$ $\displaystyle=$
$\displaystyle\left[\theta\left(\eta_{s}+\eta_{0}^{n_{B}}\right)\exp{-\frac{\left(\eta_{s}+\eta_{0}^{n_{B}}\right)^{2}}{2\sigma_{B,-}^{2}}}+\right.$
(9)
$\displaystyle\left.\theta\left(-\eta_{s}-\eta_{0}^{n_{B}}\right)\exp{-\frac{\left(\eta_{s}+\eta_{0}^{n_{B}}\right)^{2}}{2\sigma_{B,+}^{2}}}\right]$
The normalization factor $N_{B}$ in Eq. 6 is not a free parameter; rather, it is constrained by the initially deposited net baryon number carried by the participants,
$\int\tau_{0}n_{B}\left(x,y,\eta,\tau_{0}\right)dxdyd\eta=N_{\text{Part}}$
(10)
With the asymmetric baryon profiles given in Eqs. 8 and 9, we generate a tilted baryon profile in the reaction plane at the initial stage. The magnitude of the tilt can be controlled by changing the $\omega$ parameter.
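The asymmetric envelopes of Eqs. 8 and 9 and the normalization of Eq. 10 can be sketched as follows (ours; the function names are illustrative, and `profile_integral` stands for the numerically evaluated $\int dx\,dy\,d\eta$ of the un-normalized profile shape):

```python
import numpy as np

def f_plus_nB(eta_s, eta0, sigma_minus, sigma_plus):
    """Forward net-baryon envelope of Eq. 8: a Gaussian peaked at eta0
    with width sigma_plus on its forward side (eta_s >= eta0) and
    sigma_minus on its backward side."""
    eta_s = np.asarray(eta_s, dtype=float)
    sigma = np.where(eta_s >= eta0, sigma_plus, sigma_minus)
    return np.exp(-(eta_s - eta0)**2 / (2.0 * sigma**2))

def f_minus_nB(eta_s, eta0, sigma_minus, sigma_plus):
    """Backward envelope of Eq. 9, the mirror image of Eq. 8."""
    return f_plus_nB(-np.asarray(eta_s, dtype=float),
                     eta0, sigma_minus, sigma_plus)

def normalise_NB(profile_integral, tau0, n_part):
    """Eq. 10: fix N_B so that tau0 times the integral of the profile
    shape equals the number of participants."""
    return n_part / (tau0 * profile_integral)
```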
## III Hydrodynamic Evolution
The publicly available MUSIC Schenke _et al._ (2010); Paquet _et al._
(2016); Schenke _et al._ (2012) code has been used for the hydrodynamic
evolution of the deposited energy and the baryon density profile. The
hydrodynamic equations of the conserved quantities and the dissipative
currents in the presence of finite baryon density, which are solved in the
framework of MUSIC have been described in Denicol _et al._ (2018). Other
conserved charges, net strangeness( $n_{S}$) and net electric charge
densities($n_{Q}$) are not evolved independently and are assumed to satisfy
the following constraints locally.
$\displaystyle n_{S}$ $\displaystyle=$ $\displaystyle 0$ (11) $\displaystyle
n_{Q}$ $\displaystyle=$ $\displaystyle 0.4n_{B}$ (12)
A temperature ($T$) and baryon chemical potential ($\mu_{B}$) dependent baryon diffusion coefficient ($\kappa_{B}$), derived from the Boltzmann equation in the relaxation time approximation Denicol _et al._ (2018), has been implemented in the code. The form of $\kappa_{B}$ is as follows:
$\kappa_{B}=\frac{C_{B}}{T}n_{B}\left[\frac{1}{3}\coth{\left(\frac{\mu_{B}}{T}\right)}-\frac{n_{B}T}{\epsilon+p}\right]$
(13)
where $C_{B}$ is a free parameter which controls the baryon diffusion in the medium and has been taken as a model parameter in the simulation. In the above expression, $n_{B}$ is the net baryon density and $p$ is the local pressure of the fluid. The specific shear viscosity $C_{\eta}$ is related to the shear transport coefficient $\eta$ as follows:
$C_{\eta}=\frac{\eta T}{\epsilon+p}$ (14)
This is another model parameter and is chosen to be 0.08. In this work, we have not considered the effects of bulk viscosity.
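Eq. 13 translates directly into code; a sketch (ours; the coth is written as 1/tanh, so $\mu_{B}\neq 0$ is assumed, and any numerical values used to exercise it are illustrative):

```python
import numpy as np

def kappa_B(T, mu_B, n_B, eps, p, C_B=1.0):
    """Baryon diffusion coefficient of Eq. 13 in the relaxation-time
    approximation; valid for mu_B != 0."""
    coth = 1.0 / np.tanh(mu_B / T)
    return (C_B / T) * n_B * (coth / 3.0 - n_B * T / (eps + p))
```

Note that $\kappa_{B}$ scales linearly with the free parameter $C_{B}$, which is why varying $C_{B}$ alone is sufficient to explore the strength of baryon diffusion.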
A lattice QCD based EoS at finite baryon density, NEoS-BQS Monnai _et al._
(2019); Bazavov _et al._ (2012); Ding _et al._ (2015); Bazavov _et al._
(2017) has been used during the hydrodynamic evolution. The EoS imposes the constraints of Eqs. 11 and 12. The Cooper-Frye conversion of the fluid into particles has been performed on a hypersurface of constant energy density, $\epsilon_{f}=0.26$ GeV/fm3, using iSS Shen _et al._ (2014); htt . The sampled primary hadrons are then fed into UrQMD Bass _et al._ (1998); Bleicher _et al._ (1999) for the late-stage hadronic transport.
## IV Model parameters
We have studied Au+Au collisions at seven different energies: 7.7, 11.5, 19.6,
27, 39, 62.4 and 200 GeV. The values of the model parameters used in the study
have been summarized in Table 1. They are chosen to describe the
experimentally observed rapidity dependence of the charged particle yield,
net-proton yield and directed flow of $\pi^{+},p$ and $\bar{p}$
simultaneously. Using these model parameters, we have computed and presented the rapidity dependence of $v_{1}$ for other identified hadrons: $K^{\pm}$, $\Lambda$, $\bar{\Lambda}$ and $\phi$. In addition, we have also presented the $p_{T}$ differential $v_{1}$.
In our previous work Parida and Chatterjee (2022), we have shown that our
proposed initial condition with both zero and non-zero $C_{B}$ is able to
describe the observed splitting between baryon and anti-baryon directed flow
at $\sqrt{s_{NN}}=$19.6 GeV and 200 GeV within the experimentally measured
rapidity range. Hence, we are not able to constrain the $C_{B}$ values within the current ambit of the model and the available experimental data by a unique model-to-data comparison. The same has been observed at the other $\sqrt{s_{\textrm{NN}}}$. In this work, we present the model calculations with $C_{B}=1$ for $\sqrt{s_{\textrm{NN}}}$ above 11.5 GeV and $C_{B}=0.5$ at $\sqrt{s_{\textrm{NN}}}=11.5$ and 7.7 GeV (we could not find a suitable parameter set for $C_{B}=1$ at these two lowest energies).
The normalization parameter of the initial energy density distribution, $\epsilon_{0}$, is calibrated to match the yield of charged hadrons at mid-rapidity. At energies where the mid-rapidity measurement of charged hadrons is not available, the $\pi^{+}$ yield has been used instead to set $\epsilon_{0}$. The hardness factor $\alpha$ has been chosen to capture the centrality dependence of the charged hadron yield.
The plateau length $\eta_{0}$ and Gaussian fall off $\sigma_{\eta}$ in the
initial matter profile are adjusted to describe the rapidity dependence of the
charged particle yield. From the existing experimental data, we have observed
that the rapidity dependent charged hadron yields at different energies follow
the same distribution when the pseudo-rapidity is scaled by the respective
beam rapidity. Therefore, we have chosen $\eta_{0}$ and $\sigma_{\eta}$ to
capture that scaled distribution at the energies where the rapidity
distribution of neither the charged particle nor the identified particle has
been experimentally measured.
The rapidity distribution of the net-proton yield, which is capable of constraining $\eta_{0}^{n_{B}}$, $\sigma_{B,+}$ and $\sigma_{B,-}$, has been measured for Au+Au systems at $\sqrt{s_{NN}}=200$ and 62.4 GeV by the BRAHMS collaboration Bearden _et al._ (2004); Arsene _et al._ (2009), but is not available at the other energies considered in this work. However, mid-rapidity measurements for Au+Au systems have been performed by the STAR collaboration Adamczyk _et al._ (2017). We tuned the parameters of the initial baryon profile to match the mid-rapidity measurements at energies below 62.4 GeV.
The adopted initial condition model creates a tilted profile of both energy
density and the net baryon density in the reaction plane (spanned by beam axis
and impact parameter axis). The $\eta_{m}$ and $\omega$ parameters can be individually tuned to obtain the desired tilt in the energy and baryon profiles, respectively. However, only a proper choice of the ($\eta_{m}$, $\omega$) pair can explain the directed flow of $\pi^{+},p$ and $\bar{p}$ simultaneously after hydrodynamic evolution.
$\sqrt{s_{NN}}$ (GeV) | $C_{B}$ | $\tau_{0}$(fm) | $\epsilon_{0}$ (GeV/fm3) | $\alpha$ | $\eta_{0}$ | $\sigma_{\eta}$ | $\eta_{0}^{n_{B}}$ | $\sigma_{B,-}$ | $\sigma_{B,+}$ | $\eta_{m}$ | $\omega$
---|---|---|---|---|---|---|---|---|---|---|---
200 | 1.0 | 0.6 | 8.0 | 0.14 | 1.3 | 1.5 | 4.6 | 1.6 | 0.1 | 2.2 | 0.25
62.4 | 1.0 | 0.6 | 5.4 | 0.14 | 1.4 | 1.0 | 3.0 | 1.0 | 0.1 | 1.4 | 0.25
39 | 1.0 | 1.0 | 3.0 | 0.12 | 1.0 | 1.0 | 2.2 | 1.2 | 0.2 | 1.1 | 0.2
27 | 1.0 | 1.2 | 2.4 | 0.11 | 1.3 | 0.7 | 2.3 | 1.1 | 0.2 | 1.1 | 0.11
19.6 | 1.0 | 1.8 | 1.55 | 0.1 | 1.3 | 0.4 | 0.3 | 0.8 | 0.15 | 0.8 | 0.15
11.5 | 0.5 | 2.6 | 0.9 | 0.1 | 0.9 | 0.4 | 1.2 | 0.55 | 0.2 | 0.4 | 0.22
7.7 | 0.5 | 3.6 | 0.55 | 0.1 | 0.9 | 0.4 | 0.9 | 0.35 | 0.2 | 0.3 | 0.35
Table 1: Model parameters used in the simulations at different $\sqrt{s_{NN}}$.
## V Results
Pseudo-rapidity ($\eta$) distributions of charged hadrons are plotted in panels (A), (B) and (C) of Fig. 1 for 0-6$\%$ and 15-25$\%$ Au+Au collisions at $\sqrt{s_{NN}}=200,62.4$ and $19.6$ GeV. We are able to capture the rapidity distribution by suitably choosing $\epsilon_{0}$, $\eta_{0}$ and $\sigma_{\eta}$ for the initial space-time rapidity distribution of the energy density. The rapidity ($y$) differential $\pi^{+}$ yield is presented in panels (D), (E), (F) and (G) of Fig. 1 for Au+Au collisions at $\sqrt{s_{NN}}=39,27,11.5$ and $7.7$ GeV, where the charged particle measurement is not available. We have fixed the parameter $\epsilon_{0}$ by matching the model calculation with the mid-rapidity $\pi^{+}$ yield measurement at those energies. The rapidity distribution of the initial energy profile has been calibrated by comparing the model calculation with the scaled distribution discussed in the previous section. The chosen hardness factor $\alpha$ provides a reasonable description of the centrality dependence of the charged particle yield across all energies.
Figure 1: (Color online) Pseudo-rapidity distributions of charged hadrons in 0-6$\%$ and 15-25$\%$ Au+Au collisions at $\sqrt{s_{\textrm{NN}}}=200,62.4$ and $19.6$ GeV are shown in panels (A), (B) and (C), respectively. The model calculations (red solid lines) are compared with the measurements from the PHOBOS collaboration Back _et al._ (2003). The $\pi^{+}$ yield is plotted as a function of rapidity for $\sqrt{s_{\textrm{NN}}}=39,27,11.5$ and $7.7$ GeV in panels (D), (E), (F) and (G). Comparisons are made with the mid-rapidity measurements by the STAR collaboration Adamczyk _et al._ (2017).
It is important to look at the distribution of net protons along rapidity to constrain the longitudinal gradient of the baryon chemical potential, which plays a significant role in flow calculations. In this regard, we have plotted the proton, anti-proton and net-proton rapidity distributions for central Au+Au collisions at $\sqrt{s_{NN}}=200,62.4$, $19.6$ and $7.7$ GeV in Fig. 2. We have obtained good agreement between our model calculations and the experimental measurements. The contributions from weak decays have been included in the calculation of the proton and anti-proton yields to compare with the experimental measurements. In addition, we are able to capture the rapidity distributions of protons and anti-protons separately, which reflects the fact that the chosen freeze-out energy density represents a proper combination of the temperature and baryon chemical potential of chemical equilibration Andronic _et al._ (2018). The rapidity differential net-proton measurements for Pb+Pb collisions at $\sqrt{s_{NN}}=$ 17.3 GeV and 8.7 GeV were performed by the NA49 collaboration Anticic _et al._ (2011). We have used the experimental data for the net-proton distribution at 17.3 GeV as a proxy to constrain the model parameters of Eqs. 8 and 9 for Au+Au collisions at 19.6 GeV, whereas the net-proton data at 8.7 GeV are shown alongside the model calculation for Au+Au at $\sqrt{s_{NN}}=7.7$ GeV for reference only.
Figure 2: (Color online) Rapidity distributions of protons, anti-protons and net protons for 0-5$\%$ Au+Au collisions at $\sqrt{s_{NN}}=200,19.6,7.7$ GeV and for 0-10$\%$ Au+Au collisions at $\sqrt{s_{NN}}=62.4$ GeV. The net-proton rapidity distributions at $\sqrt{s_{NN}}=17.3$ and $8.7$ GeV for Pb+Pb systems are plotted in panels (C) and (D), respectively, for reference. The measurements are from Bearden _et al._ (2004); Arsene _et al._ (2009); Adamczyk _et al._ (2017); Anticic _et al._ (2011). The model calculations for protons (blue dash-dotted lines), anti-protons (green dashed lines) and net protons (red solid lines) are compared with the experimental measurements.
Figure 3: (Color online) Rapidity dependence of the identified particles' directed flow coefficient ($v_{1}$) for 10-40$\%$ Au+Au collisions at $\sqrt{s_{NN}}=200,62.4,39,27,19.6,11.5$ and $7.7$ GeV. Plots for a particular energy are placed in a single row of the figure. Model calculations (lines with shaded bands) are compared with the experimental measurements (different symbols) of the STAR collaboration Adamczyk _et al._ (2014, 2018). The available measurements and model calculations for $0-10\%$ centrality are plotted in grey symbols and lines.
After calibrating the model parameters of the initial energy and baryon density profiles, we now present the directed flow ($v_{1}$) of identified particles in 10-40$\%$ Au+Au collisions at different $\sqrt{s_{NN}}$ in Fig. 3. Each row in the figure contains the results for a particular collision energy, whereas the directed flow of the various particle species is placed in different columns. The top row contains the rapidity dependence of the directed flow coefficients in Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV. The results for the other energies are placed in subsequent rows in descending order. The $v_{1}$ for $\pi^{\pm}$, $K^{\pm}$, $p-\bar{p}$, $\Lambda-\bar{\Lambda}$ and $\phi$ is presented in columns 1, 2, 3, 4 and 5, respectively. In addition, the $v_{1}$ of $\pi$ and $p$ in the 0-10$\%$ centrality class is plotted for comparison at energies below $\sqrt{s_{NN}}=39$ GeV.
The relative tilt between the matter and baryon profiles determines the sign of the splitting between the proton and anti-proton directed flow Parida and Chatterjee (2022). Thus, by a suitable choice of ($\eta_{m}$, $\omega$) at each $\sqrt{s_{NN}}$, we are able to describe the rapidity dependence of $v_{1}$ for $\pi^{+},p$ and $\bar{p}$ simultaneously in the 10-40$\%$ centrality class, whereas the $v_{1}$ of the other particle species plotted in Fig. 3 are model predictions. We are able to capture the sign change in the slope of the proton directed flow at $\sqrt{s_{NN}}=7.7$ GeV. From Table 1, one can observe that the value of $\eta_{m}$ decreases monotonically with decreasing collision energy, whereas the $\omega$ values chosen in the model calibration seem to suggest a non-monotonic behaviour with collision energy. In order to confirm such a non-monotonic behaviour of $\omega$ with $\sqrt{s_{\textrm{NN}}}$, we would need to perform a more sophisticated fitting procedure to extract the model parameters. The purpose of the current study is to demonstrate the existence of a reasonable parameter space in the proposed model that captures the data trends quite well.
Figure 4: (Color online) Transverse momentum ($p_{T}$) dependence of the charged particle directed flow for 0-10$\%$ and 10-40$\%$ Au+Au collisions at $\sqrt{s_{NN}}=39,27,19.6,11.5$ and $7.7$ GeV. The model calculations for 0-10$\%$ (green dash-dotted lines) and 10-40$\%$ (black solid lines) centrality are compared with measurements from the STAR collaboration Adam _et al._ (2020).
Figure 5: (Color online) Centrality dependence of the identified particles' directed flow coefficient ($v_{1}$) for Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV. Model calculations for $\pi^{+}$ (red line), $p$ (green line) and $\bar{p}$ (blue line) are compared with experimental data from Adamczyk _et al._ (2012).
An inhomogeneous deposition and further evolution of the net baryon density in the medium creates inhomogeneities in the other conserved charges, such as strangeness and electric charge, via the constraints in Eqs. 11 and 12, which give rise to correlations between the corresponding chemical potentials: $\mu_{B}$, $\mu_{Q}$ and $\mu_{S}$ for baryon number, electric charge and strangeness, respectively. This gives rise to a splitting in the directed flow of hadrons with different quantum numbers but the same mass, similar to the $v_{1}$ splitting of $p$ and $\bar{p}$. The splitting of $v_{1}$ between $\pi^{+}$ and $\pi^{-}$ is solely due to the inhomogeneity of $\mu_{Q}$ in the medium, whereas the inhomogeneities of both $\mu_{S}$ and $\mu_{Q}$ are responsible for the difference in the directed flow coefficients of $K^{+}$ and $K^{-}$. In the present model calculation, we have not observed any significant splitting of $v_{1}$ between $\pi^{+}$ and $\pi^{-}$, which is consistent with the experimental measurements. However, the splitting between $K^{+}$ and $K^{-}$ at $\sqrt{s_{NN}}=11.5$ and $7.7$ GeV is noticeable, while it is consistent with zero in the data. The same has been observed in the case of $\Lambda$ and $\bar{\Lambda}$, since they are affected by $\mu_{S}$ along with $\mu_{B}$. In this case, the data also show a splitting. Although the model splitting is consistent with the data for $\sqrt{s_{\textrm{NN}}}>11.5$ GeV, at $\sqrt{s_{NN}}=11.5$ and $7.7$ GeV the model overpredicts the data. At these same energies, the model also fails to capture the $\phi$ $v_{1}$, implying interesting physics of the strangeness carriers that is not included in the current model. These discrepancies underline the importance of evolving all the conserved charges independently in a fluid dynamical simulation Greif _et al._ (2018); Fotakis _et al._ (2020).
The directed flow measurements of $\pi$ and $p$ at 0-10$\%$ centrality are also plotted in Fig. 3 along with the 10-40$\%$ results for ease of comparison. Our model calculations capture the centrality dependence of $v_{1}$ for $\pi$ at all the considered collision energies, but fail to do so for $p$ below $\sqrt{s_{NN}}=19.6$ GeV. This indicates that the mechanism of baryon stopping from central to peripheral collisions is different at lower energies, and that our model is unable to provide a proper gradient of the initial net baryon density across centralities. From microscopic models, we have guidance that the baryon deposition mechanism depends on the transverse position Li and Kapusta (2019, 2017); De _et al._ (2022); Shen and Schenke (2018, 2022), which needs to be explored further. In this direction, centrality dependent measurements of the proton directed flow could play a significant role.
So far, we have discussed the rapidity differential behaviour of $v_{1}$. We now present the $p_{T}$ dependence of $v_{1}$, which provides information about the initial deposition and evolution of matter in the transverse plane around the mid-rapidity region. In this regard, the $p_{T}$ differential directed flow of charged hadrons in 0-10$\%$ and 10-40$\%$ Au+Au collisions at $\sqrt{s_{NN}}=39,27,19.6,11.5$ and $7.7$ GeV is plotted in Fig. 4. The model calculations are in agreement with the experimental measurements in the low $p_{T}$ region at all energies but fail to explain the trend in the high $p_{T}$ region. Nevertheless, the $p_{T}$ integrated $v_{1}$ is not affected by these discrepancies due to the relatively higher yield of low $p_{T}$ hadrons. We are able to capture the centrality trend in the $p_{T}$ differential $v_{1}$, which in turn produces the correct centrality dependence of the rapidity differential $\pi$ directed flow in Fig. 3.
Fig. 5 shows the centrality dependence of the slope of identified-particle
directed flow in Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV. The $v_{1}$
slopes in the model at each centrality have been calculated by fitting a
straight line in the mid-rapidity region. Although the model parameter is
tuned for 10-40$\%$ centrality, we are able to describe the magnitude and
hierarchy of the directed flow slopes of $\pi$, $p$ and $\bar{p}$ at the
other centralities.
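The slope extraction amounts to a linear fit of $v_{1}(y)$ near mid-rapidity. A minimal sketch with numpy follows; the fitting window $|y|<0.8$ and the synthetic rapidity-odd curve are illustrative assumptions (the actual fit ranges follow the experimental papers):

```python
import numpy as np

def v1_slope(y, v1, y_max=0.8):
    """Fit v1(y) with a straight line in the mid-rapidity window
    |y| < y_max and return the slope dv1/dy."""
    mask = np.abs(y) < y_max
    slope, intercept = np.polyfit(y[mask], v1[mask], 1)
    return slope

# Synthetic rapidity-odd directed flow: linear plus a small cubic term
y = np.linspace(-1.5, 1.5, 31)
v1 = -0.01 * y + 0.002 * y**3
print(v1_slope(y, v1))
```

Restricting the fit to mid-rapidity suppresses the influence of the cubic term, so the fitted slope stays close to the linear coefficient.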
The slopes of the identified particles' directed flow are plotted as a
function of collision energy for 10-40$\%$ Au+Au collisions in Fig. 6. The
slopes of $\pi^{\pm},p$ and $\bar{p}$ are shown in panel (A), whereas the
slopes of $K^{\pm},\Lambda$ and $\bar{\Lambda}$ are shown in panel (B).
In the model calculation of the directed flow slope, the fitting function and
the fitting range in rapidity have been taken to be the same as in the
experimental papers Adamczyk _et al._ (2014, 2018). We observe that the
$v_{1}$ slopes of $p$ and $\Lambda$ change sign at lower energies, whereas the
slopes of their corresponding anti-particles remain negative at all energies.
The magnitudes of the slopes are in good agreement with the experimental data.
The splittings between the slopes of $K^{+}-K^{-}$ and $\Lambda-\bar{\Lambda}$
at $\sqrt{s_{NN}}=7.7$ GeV are overestimated by the model calculations, as
already seen in the plots of rapidity-differential $v_{1}$ in Fig. 3.
Figure 6: (Color online) Beam energy dependence of the identified particles'
directed flow slope ($dv_{1}/dy$) for 10-40$\%$ Au+Au collisions. The
model calculation for a particular particle species is plotted as a line
with the same color as the symbol of the experimental data. The experimental
measurements are from the STAR collaboration Adamczyk _et al._ (2014, 2018).
## VI Summary
In the present study, our goal is to describe the measured beam energy
dependence of the rapidity-odd directed flow of identified particles at RHIC
BES. To this end, we have adopted the model of initial baryon deposition
proposed in Parida and Chatterjee (2022). The model assumes asymmetric matter
and baryon deposition at forward and backward rapidity by a participating
nucleon. As a consequence, tilted matter and baryon profiles are formed in the
reaction plane, although with different tilts. Taking this model as an input
for a multi-stage hybrid framework (hydrodynamics + hadronic transport) and
calibrating the model parameters to describe the measured rapidity
distributions of charged particles and net protons, we are able to describe
the observed directed flow splitting between protons and anti-protons across
beam energies ranging from 7.7 GeV to 200 GeV. The model captures the
sign change of the proton $v_{1}$ slope at lower energies. In addition, we
have studied the directed flow of $\Lambda,\bar{\Lambda}$, $\pi^{\pm},K^{\pm}$
and $\phi$. The model calculations describe the rapidity-differential
directed flow of all the identified particles simultaneously above
$\sqrt{s_{NN}}=11.5$ GeV. However, the noticeable discrepancies in the
splitting of the directed flow between $K^{+}-K^{-}$ and $\Lambda-\bar{\Lambda}$
at $\sqrt{s_{NN}}=7.7$ GeV are attributed to the constraints imposed through
the EoS. This indicates that, at the lower RHIC energies, strangeness and
electric charge should be evolved individually in hydrodynamic simulations
rather than being constrained via the EoS. Our model is unable to capture the
centrality dependence of the proton directed flow at lower beam energies,
which shows that the baryon stopping mechanism differs considerably between
central and peripheral collisions. The analysis demonstrates the importance of
measuring the centrality dependence of the proton $v_{1}$.
In hydrodynamic calculations, the flow coefficients are known to be the
response to initial-state gradients. Hence, the initial deposition plays a
dominant role in explaining the measured directed flow of identified
particles. Nevertheless, effects due to non-trivial phenomena such as
dynamical initialization, non-zero initial longitudinal flow, event-by-event
fluctuations and the nature of the EoS, which are not considered in the
present framework, should not be ignored when studying the dynamics of
conserved charges.
## References
* Adamczyk _et al._ (2017) L. Adamczyk _et al._ (STAR), Phys. Rev. C 96, 044904 (2017), arXiv:1701.07065 [nucl-ex] .
* Steinheimer _et al._ (2014) J. Steinheimer, J. Auvinen, H. Petersen, M. Bleicher, and H. Stöcker, Phys. Rev. C 89, 054913 (2014), arXiv:1402.7236 [nucl-th] .
* Ivanov and Soldatov (2015) Y. B. Ivanov and A. A. Soldatov, Phys. Rev. C 91, 024915 (2015), arXiv:1412.1669 [nucl-th] .
* Adamczyk _et al._ (2014) L. Adamczyk _et al._ (STAR), Phys. Rev. Lett. 112, 162301 (2014), arXiv:1401.3043 [nucl-ex] .
* Adamczyk _et al._ (2018) L. Adamczyk _et al._ (STAR), Phys. Rev. Lett. 120, 062301 (2018), arXiv:1708.07132 [hep-ex] .
* Steinheimer _et al._ (2022) J. Steinheimer, A. Motornenko, A. Sorensen, Y. Nara, V. Koch, and M. Bleicher, (2022), arXiv:2208.12091 [nucl-th] .
* Nara _et al._ (2016) Y. Nara, H. Niemi, A. Ohnishi, and H. Stöcker, Phys. Rev. C 94, 034906 (2016), arXiv:1601.07692 [hep-ph] .
* Konchakovski _et al._ (2014) V. P. Konchakovski, W. Cassing, Y. B. Ivanov, and V. D. Toneev, Phys. Rev. C 90, 014903 (2014), arXiv:1404.2765 [nucl-th] .
* Nara and Ohnishi (2022) Y. Nara and A. Ohnishi, Phys. Rev. C 105, 014911 (2022), arXiv:2109.07594 [nucl-th] .
* Bozek (2022) P. Bozek, (2022), arXiv:2207.04927 [nucl-th] .
* Csernai and Rohrich (1999) L. P. Csernai and D. Rohrich, Phys. Lett. B 458, 454 (1999), arXiv:nucl-th/9908034 .
* Snellings _et al._ (2000) R. J. M. Snellings, H. Sorge, S. A. Voloshin, F. Q. Wang, and N. Xu, Phys. Rev. Lett. 84, 2803 (2000), arXiv:nucl-ex/9908001 .
* Adil and Gyulassy (2005) A. Adil and M. Gyulassy, Phys. Rev. C 72, 034907 (2005), arXiv:nucl-th/0505004 .
* Becattini _et al._ (2015) F. Becattini, G. Inghirami, V. Rolando, A. Beraudo, L. Del Zanna, A. De Pace, M. Nardi, G. Pagliara, and V. Chandra, Eur. Phys. J. C 75, 406 (2015), [Erratum: Eur.Phys.J.C 78, 354 (2018)], arXiv:1501.04468 [nucl-th] .
* Bozek and Wyskiel (2010) P. Bozek and I. Wyskiel, Phys. Rev. C 81, 054902 (2010), arXiv:1002.4999 [nucl-th] .
* Ryu _et al._ (2021) S. Ryu, V. Jupic, and C. Shen, Phys. Rev. C 104, 054908 (2021), arXiv:2106.08125 [nucl-th] .
* Jiang _et al._ (2022) Z.-F. Jiang, S. Cao, X.-Y. Wu, C. B. Yang, and B.-W. Zhang, Phys. Rev. C 105, 034901 (2022), arXiv:2112.01916 [hep-ph] .
* Chen _et al._ (2010) J. Y. Chen, J. X. Zuo, X. Z. Cai, F. Liu, Y. G. Ma, and A. H. Tang, Phys. Rev. C 81, 014904 (2010), arXiv:0910.1400 [nucl-th] .
* Guo _et al._ (2012) Y. Guo, F. Liu, and A. Tang, Phys. Rev. C 86, 044901 (2012), arXiv:1206.2246 [nucl-ex] .
* Nara _et al._ (2022) Y. Nara, A. Jinno, K. Murase, and A. Ohnishi, (2022), arXiv:2208.01297 [nucl-th] .
* Shen and Alzhrani (2020) C. Shen and S. Alzhrani, Phys. Rev. C 102, 014909 (2020), arXiv:2003.05852 [nucl-th] .
* (22) L. Du, C. Shen, C. Gale, and S. Jeon, “Probing initial baryon stopping and baryon diffusive transport with rapidity-dependent directed flow of identified particles” (Quark Matter 2022).
* Singha _et al._ (2016) S. Singha, P. Shanmuganathan, and D. Keane, Adv. High Energy Phys. 2016, 2836989 (2016), arXiv:1610.00646 [nucl-ex] .
* Karpenko _et al._ (2015) I. A. Karpenko, P. Huovinen, H. Petersen, and M. Bleicher, Phys. Rev. C 91, 064901 (2015), arXiv:1502.01978 [nucl-th] .
* Du and Heinz (2020) L. Du and U. Heinz, Comput. Phys. Commun. 251, 107090 (2020), arXiv:1906.11181 [nucl-th] .
* Li and Shen (2018) M. Li and C. Shen, Phys. Rev. C 98, 064908 (2018), arXiv:1809.04034 [nucl-th] .
* Denicol _et al._ (2018) G. S. Denicol, C. Gale, S. Jeon, A. Monnai, B. Schenke, and C. Shen, Phys. Rev. C 98, 034916 (2018), arXiv:1804.10557 [nucl-th] .
* Wu _et al._ (2022) X.-Y. Wu, G.-Y. Qin, L.-G. Pang, and X.-N. Wang, Phys. Rev. C 105, 034909 (2022), arXiv:2107.04949 [hep-ph] .
* De _et al._ (2022) A. De, J. I. Kapusta, M. Singh, and T. Welle, (2022), arXiv:2206.02655 [nucl-th] .
* Fotakis _et al._ (2020) J. A. Fotakis, M. Greif, C. Greiner, G. S. Denicol, and H. Niemi, Phys. Rev. D 101, 076007 (2020), arXiv:1912.09103 [hep-ph] .
* Akamatsu _et al._ (2018) Y. Akamatsu, M. Asakawa, T. Hirano, M. Kitazawa, K. Morita, K. Murase, Y. Nara, C. Nonaka, and A. Ohnishi, Phys. Rev. C 98, 024909 (2018), arXiv:1805.09024 [nucl-th] .
* Du _et al._ (2019) L. Du, U. Heinz, and G. Vujanovic, Nucl. Phys. A 982, 407 (2019), arXiv:1807.04721 [nucl-th] .
* Shen and Schenke (2022) C. Shen and B. Schenke, Phys. Rev. C 105, 064905 (2022), arXiv:2203.04685 [nucl-th] .
* Okai _et al._ (2017) M. Okai, K. Kawaguchi, Y. Tachibana, and T. Hirano, Phys. Rev. C 95, 054914 (2017), arXiv:1702.07541 [nucl-th] .
* Shen and Schenke (2018) C. Shen and B. Schenke, Phys. Rev. C 97, 024907 (2018), arXiv:1710.00881 [nucl-th] .
* Parida and Chatterjee (2022) T. Parida and S. Chatterjee, (2022), arXiv:xxxx.xxxxx [nucl-th] .
* Schenke _et al._ (2010) B. Schenke, S. Jeon, and C. Gale, Phys. Rev. C 82, 014903 (2010), arXiv:1004.1408 [hep-ph] .
* Paquet _et al._ (2016) J.-F. Paquet, C. Shen, G. S. Denicol, M. Luzum, B. Schenke, S. Jeon, and C. Gale, Phys. Rev. C 93, 044906 (2016), arXiv:1509.06738 [hep-ph] .
* Schenke _et al._ (2012) B. Schenke, S. Jeon, and C. Gale, Phys. Rev. C 85, 024901 (2012), arXiv:1109.6289 [hep-ph] .
* Monnai _et al._ (2019) A. Monnai, B. Schenke, and C. Shen, Phys. Rev. C 100, 024907 (2019), arXiv:1902.05095 [nucl-th] .
* Bazavov _et al._ (2012) A. Bazavov _et al._ (HotQCD), Phys. Rev. D 86, 034509 (2012), arXiv:1203.0784 [hep-lat] .
* Ding _et al._ (2015) H. T. Ding, S. Mukherjee, H. Ohno, P. Petreczky, and H. P. Schadler, Phys. Rev. D 92, 074043 (2015), arXiv:1507.06637 [hep-lat] .
* Bazavov _et al._ (2017) A. Bazavov _et al._ , Phys. Rev. D 95, 054504 (2017), arXiv:1701.04325 [hep-lat] .
* Shen _et al._ (2014) C. Shen, Z. Qiu, H. Song, J. Bernhard, S. Bass, and U. Heinz, “The iebe-vishnu code package for relativistic heavy-ion collisions,” (2014).
* (45) “The iSS code package can be downloaded from https://github.com/chunshen1987/iSS” .
* Bass _et al._ (1998) S. A. Bass _et al._ , Prog. Part. Nucl. Phys. 41, 255 (1998), arXiv:nucl-th/9803035 .
* Bleicher _et al._ (1999) M. Bleicher _et al._ , J. Phys. G 25, 1859 (1999), arXiv:hep-ph/9909407 .
* Bearden _et al._ (2004) I. G. Bearden _et al._ (BRAHMS), Phys. Rev. Lett. 93, 102301 (2004), arXiv:nucl-ex/0312023 .
* Arsene _et al._ (2009) I. C. Arsene _et al._ (BRAHMS), Phys. Lett. B 677, 267 (2009), arXiv:0901.0872 [nucl-ex] .
* Back _et al._ (2003) B. B. Back _et al._ , Phys. Rev. Lett. 91, 052303 (2003), arXiv:nucl-ex/0210015 .
* Andronic _et al._ (2018) A. Andronic, P. Braun-Munzinger, K. Redlich, and J. Stachel, Nature 561, 321 (2018), arXiv:1710.09425 [nucl-th] .
* Anticic _et al._ (2011) T. Anticic _et al._ (NA49), Phys. Rev. C 83, 014901 (2011), arXiv:1009.1747 [nucl-ex] .
* Adam _et al._ (2020) J. Adam _et al._ (STAR), Phys. Rev. C 101, 024905 (2020), arXiv:1908.03585 [nucl-ex] .
* Adamczyk _et al._ (2012) L. Adamczyk _et al._ (STAR), Phys. Rev. Lett. 108, 202301 (2012), arXiv:1112.3930 [nucl-ex] .
* Greif _et al._ (2018) M. Greif, J. A. Fotakis, G. S. Denicol, and C. Greiner, Phys. Rev. Lett. 120, 242301 (2018), arXiv:1711.08680 [hep-ph] .
* Li and Kapusta (2019) M. Li and J. I. Kapusta, Phys. Rev. C 99, 014906 (2019), arXiv:1808.05751 [nucl-th] .
* Li and Kapusta (2017) M. Li and J. I. Kapusta, Phys. Rev. C 95, 011901 (2017), arXiv:1604.08525 [nucl-th] .
# SatlasPretrain: A Large-Scale Dataset for Remote Sensing Image Understanding
Favyen Bastani Piper Wolters Ritwik Gupta Joe Ferdinando Aniruddha Kembhavi
Allen Institute for AI
###### Abstract
Remote sensing images are useful for a wide variety of planet monitoring
applications, from tracking deforestation to tackling illegal fishing. The
Earth is extremely diverse—the amount of potential tasks in remote sensing
images is massive, and the sizes of features range from several kilometers to
just tens of centimeters. However, creating generalizable computer vision
methods is a challenge in part due to the lack of a large-scale dataset that
captures these diverse features for many tasks. In this paper, we present
SatlasPretrain, a remote sensing dataset that is large in both breadth and
scale, combining Sentinel-2 and NAIP images with 302M labels under 137
categories and seven label types. We evaluate eight baselines and a proposed
method on SatlasPretrain, and find that there is substantial room for
improvement in addressing research challenges specific to remote sensing,
including processing image time series that consist of images from very
different types of sensors, and taking advantage of long-range spatial
context. Moreover, we find that pre-training on SatlasPretrain substantially
improves performance on downstream tasks, increasing average accuracy by 18%
over ImageNet and 6% over the next best baseline. The dataset, pre-trained
model weights, and code are available at https://satlas-pretrain.allen.ai/.
Figure 1: SatlasPretrain is a large-scale remote sensing dataset. Its labels
are relevant to many important planet monitoring applications, including water
resource monitoring, tracking deforestation, detecting wind turbines for
infrastructure mapping, tracking glacier loss, detecting floods, tracking
urban expansion, and detecting vessels for tackling illegal fishing.
## 1 Introduction
Satellite and aerial images provide a diverse range of information about the
physical world. In images of urban areas, we can identify unmapped roads and
buildings and incorporate them into digital map datasets, as well as monitor
urban expansion. In images of industrial areas, we can catalogue solar farms
and wind turbines to track the progress of renewable energy deployment. In
images of glaciers and forests, we can monitor slow natural changes like
glacier loss and deforestation. With the availability of global, regularly
updated, and public domain sources of remote sensing images like the EU’s
Sentinel missions [4], we can monitor the Earth for all of these applications
and more at a global-scale, on a monthly or even weekly basis.
Because the immense scale of the Earth makes global manual analysis of remote
sensing images cost-prohibitive, automatic computer vision methods are crucial
for unlocking their full potential. Previous work has proposed applying
computer vision for automatically inferring the positions of roads and
buildings [10, 33, 60, 13, 37, 61]; monitoring changes in land cover and land
use such as deforestation and urban expansion [46, 47]; predicting vessel
positions and types to help tackle illegal fishing [42]; and tracking the
progress and extent of natural disasters like floods, wildfires, and tornadoes
[8, 23, 44]. However, in practice, most deployed applications continue to rely
on manual or semi-automated rather than fully automated analysis of remote
sensing images [1] for two reasons. First, accuracy remains a barrier even in
major applications like road extraction [12], making full automation
impractical. Second, there is a long tail of remote sensing applications that
require expert annotation but have few labeled examples (e.g., a recent New
York Times study manually documented illegal airstrips in Brazil using
satellite images [9]).
We believe that the lack of a very-large-scale, multi-task remote sensing
dataset is a major impediment for progress on automated methods for remote
sensing tasks today. First, state-of-the-art architectures such as ViT [26]
and CLIP [43] require huge datasets to achieve peak performance. However,
existing remote sensing datasets for object detection, instance segmentation,
and semantic segmentation like DOTA [55], iSAID [58], and DeepGlobe [24]
contain less than 10K images each, compared to the 328K images in COCO and
millions used to train CLIP; the small size of these datasets means we cannot
fully take advantage of recent architectures. Second, existing remote sensing
benchmarks are fragmented, with individual benchmarks for categories like
roads [41], vessels [42], and crop types [28], but no benchmark spanning many
categories. The lack of a large-scale, centralized, and accessible benchmark
prevents transfer learning opportunities across tasks, and makes it difficult
for computer vision researchers to engage in this domain.
We present SatlasPretrain, a large-scale dataset for improving remote sensing
image understanding models. Our goal with SatlasPretrain is _to label
everything that is visible in a satellite image_. To this end, SatlasPretrain
combines Sentinel-2 and NAIP images with 302M distinct labels under 137
diverse categories and 7 label types: the label types are points like wind
turbines and water towers; polygons like buildings and airports; polylines
like roads and rivers; segmentation and regression labels like land cover
categories and bathymetry (water depth); properties of objects like the rotor
diameter of a wind turbine; and patch classification labels like the presence
of smoke in an image. Figure 1 demonstrates the wide range of categories in
SatlasPretrain, along with the diverse applications that they serve.
We find that the huge scale of SatlasPretrain enables pre-training to
substantially improve downstream performance. We compare SatlasPretrain pre-
training against pre-training on other datasets as well as self-supervised
learning methods, and find that it improves average performance across seven
downstream tasks by 18% over ImageNet and 6% over the next best baseline.
These results show that SatlasPretrain can readily improve accuracy on the
numerous niche remote sensing tasks that require costly expert annotation.
Additionally, we believe that SatlasPretrain will encourage work on computer
vision methods that tackle the unique research challenges in the remote
sensing domain. Compared to general-purpose computer vision methods, remote
sensing models require specialized techniques such as accounting for long-
range spatial context, synthesizing information across images over time
captured by diverse sensors like multispectral images and synthetic aperture
radar (SAR), and predicting objects that vary widely in size, from forests
spanning many km2 to street lamps. We evaluate eight computer vision baselines
on SatlasPretrain and find that no single existing method supports all the
SatlasPretrain label types; instead, each baseline can only predict a subset
of categories. Thus, inspired by recent work that integrates task-specific
output heads [21, 35, 29, 36], we develop a unified model called SatlasNet
that incorporates seven such heads so that it can learn from every category in
the dataset. Compared to training separately on each label type, we find that
jointly training SatlasNet on all categories and then fine-tuning on each
label type improves average performance by 7.1%, showing that SatlasNet is
able to leverage transfer learning opportunities between label types.
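The shared-backbone, multi-head design described above can be sketched in a few lines of plain numpy. This is a toy stand-in, not the actual SatlasNet implementation: the class names, feature dimensions, and linear "heads" are all illustrative assumptions standing in for real convolutional modules.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedBackbone:
    """Toy stand-in for the shared feature extractor."""
    def __init__(self, in_dim=16, feat_dim=32):
        self.w = rng.normal(size=(in_dim, feat_dim))
    def __call__(self, x):
        return np.maximum(x @ self.w, 0.0)  # ReLU features

class Head:
    """One output head per label type."""
    def __init__(self, feat_dim=32, out_dim=4):
        self.w = rng.normal(size=(feat_dim, out_dim))
    def __call__(self, feats):
        return feats @ self.w

backbone = SharedBackbone()
heads = {t: Head() for t in
         ["segment", "regress", "point", "polygon",
          "polyline", "property", "classify"]}

x = rng.normal(size=(2, 16))           # a batch of two "images"
feats = backbone(x)                    # computed once, shared by all heads
outputs = {t: h(feats) for t, h in heads.items()}
print({t: o.shape for t, o in outputs.items()})
```

The key design point is that the backbone runs once per image and every label-type head consumes the same features, which is what lets joint training transfer information between label types.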
In summary, our contributions are:
1. 1.
SatlasPretrain, a large-scale remote sensing dataset with 137 categories under
seven label types.
2. 2.
Demonstrating that pre-training on SatlasPretrain improves average performance
on seven downstream datasets by 6%.
3. 3.
SatlasNet, a unified model that supports predictions for all label types in
SatlasPretrain.
We have released the dataset and code at https://satlas-pretrain.allen.ai/. We
have also released model weights pre-trained on SatlasPretrain which can be
fine-tuned for downstream tasks.
## 2 Related Work
Large-Scale Remote Sensing Datasets. Several general-purpose remote sensing
computer vision datasets have been released. Many of these focus on scene and
patch classification: the UC Merced Land Use (UCM) [57] and BigEarthNet [51]
datasets involve land cover classification with 21 and 43 categories
respectively, while the AID [56], Million-AID [39], RESISC45 [19], and
Functional Map of the World (FMoW) [22] datasets additionally include
categories corresponding to manmade structures such as bridges and railway
stations, with up to 63 categories. A few datasets focus on tasks other than
scene classification. DOTA [55] involves detecting objects in 18 categories
ranging from helicopter to roundabout. iSAID [58] involves instance
segmentation for 15 categories.
| Types | Classes | Labels | Pixels | km2
---|---|---|---|---|---
SatlasPretrain | 7 | 137 | 302222K | 17003B | 21320K
UCM [57] | 1 | 21 | 2K | 1B | 1K
BigEarthNet [51] | 1 | 43 | 1750K | 9B | 850K
AID [56] | 1 | 30 | 10K | 4B | 14K
Million-AID [39] | 1 | 51 | 37K | 4B | 18K
RESISC45 [19] | 1 | 45 | 32K | 2B | 10K
FMoW [22] | 1 | 63 | 417K | 437B | 1748K
DOTA [55] | 1 | 19 | 99K | 9B | 38K
iSAID [58] | 1 | 15 | 355K | 9B | 38K
Table 1: Comparison of SatlasPretrain against existing remote sensing datasets
(K=thousands, B=billions). Types is number of label types and km2 is area
covered.
All of these datasets involve making predictions for a single label type, and
most involve doing so from a single image. Thus, they are limited in three
ways: the number of object categories, the diversity of labels, and the
opportunities for approaches to learn to synthesize features across image time
series. In contrast, SatlasPretrain incorporates 137 categories under seven
label types (see full comparison in Table 1), and provides image time series
that methods can leverage to improve prediction accuracy.
A few domain-specific datasets extend beyond these limitations. xView3 [42]
involves predicting vessel positions (object detection) and attributes of
those vessels such as vessel type and length (per-object classification and
regression) in SAR images. PASTIS-R [28] involves panoptic segmentation of
crop types in crop fields using a time series of SAR and optical satellite
images captured by the Sentinel-1 and Sentinel-2 constellations. IEEE Data
Fusion datasets incorporate various aerial and satellite images for tasks like
land cover segmentation [48].
Self-Supervised and Multi-Task Learning for Remote Sensing. Similar to our
work, these approaches share the goal of improving accuracy on downstream
applications with few labels. Several methods [40, 7, 50, 49, 54, 11]
incorporate temporal augmentations into a contrastive learning framework,
where images of the same location captured at different times are encouraged
to have closer representations than images of different locations. They show
that the model improves downstream performance by learning invariance to
transient differences between images of the same location, such as different
lighting and nadir angle conditions as well as seasonal changes. GPNA proposes
combining self-supervised learning with supervised training on diverse tasks
[45].
## 3 SatlasPretrain
Figure 2: Overview of the SatlasPretrain dataset. SatlasPretrain consists of
image time series and labels for 856K Web-Mercator tiles at zoom 13 (left).
There are two image modes on which methods are trained and evaluated
independently: high-resolution NAIP images (top) and low-resolution Sentinel-2
images (bottom). Labels may be slow-changing (corresponding to the most recent
image at a tile) or dynamic (referencing a specific image and time).
We present SatlasPretrain, a very-large-scale dataset for remote sensing image
understanding that improves on existing remote sensing datasets in three key
ways:
1. 1.
Scale: SatlasPretrain contains 40x more image pixels and 150x more labels than
the largest existing dataset.
2. 2.
Label diversity: Existing datasets in Table 1 have unimodal labels, e.g. only
classification. SatlasPretrain labels span _seven label types_ ; furthermore,
they comprise 137 categories, 2x more than the largest existing dataset.
3. 3.
Spatio-temporal images and labels: Rather than being tied to individual remote
sensing images, our labels are associated with geographic coordinates (i.e.,
longitude-latitude positions) and time ranges. This enables methods to make
predictions from multiple images across time, as well as leverage long-range
spatial context from neighboring images. These features present new research
challenges that, if solved, can greatly improve model performance.
We first provide an overview of the structure of SatlasPretrain and detail the
imagery that it contains below. We then describe the labels and how they were
collected.
### 3.1 Structure and Imagery
Figure 3: Geographic coverage of SatlasPretrain, with bright pixels indicating
locations covered by images and labels in the dataset. SatlasPretrain spans
all continents except Antarctica.
SatlasPretrain consists of 856K _tiles_. These tiles correspond to Web-
Mercator tiles at zoom level 13, i.e., the world is projected to a 2D plane
and divided into a $2^{13}\times 2^{13}$ grid, with each tile corresponding to
a grid cell. Thus, each SatlasPretrain tile covers a disjoint spatial region
spanning up to 25 km2. At each tile, SatlasPretrain includes (1) a time series
of remote sensing images of the tile; and (2) labels drawn from the 137
SatlasPretrain categories. Figure 2 summarizes the dataset, and Figure 3 shows
its global geographic coverage.
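The zoom-13 Web-Mercator grid described above follows the standard slippy-map convention, so mapping a longitude/latitude pair to its SatlasPretrain tile index is a short calculation (a sketch of the standard formula, not code from the dataset's release):

```python
import math

def lonlat_to_tile(lon, lat, zoom=13):
    """Standard Web-Mercator (slippy-map) tile index at the
    given zoom level for a longitude/latitude in degrees."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_r) + 1.0 / math.cos(lat_r))
             / math.pi) / 2.0 * n)
    return x, y

print(lonlat_to_tile(0.0, 0.0))
print(lonlat_to_tile(-122.33, 47.61))
```

At zoom 13 the grid is $8192\times 8192$, and the Mercator projection is why each tile spans "up to" 25 km²: tiles shrink in ground area away from the equator.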
Existing datasets typically use either high-resolution imagery (0.5–2 m/pixel)
[19, 39, 22, 58] or low-resolution imagery (10 m/pixel) [51, 27]. Although
high-resolution imagery enables higher prediction accuracy, low-resolution
imagery is often employed in practical applications since it is available more
frequently (weekly vs yearly) and broadly (globally vs in limited countries).
Thus, in SatlasPretrain, we incorporate both low- and high-resolution images,
which we will refer to as _image modes_. We define separate train and test
splits for each image mode, and compare methods over each mode independently.
In all 856K tiles (828K train and 28K test), we provide low-resolution 512x512
images. Specifically, we include 8–12 Sentinel-2 images captured during 2022;
this enables methods to leverage multiple spatially aligned images of a
location to improve prediction accuracy. We also include historical 2016–2021
images that are relevant for dynamic labels like floods and ship positions.
Sentinel-2 captures 10 m/pixel multispectral images; the European Space Agency
(ESA) releases these images openly. Some categories are not visible in the
low-resolution images, so for this mode we only evaluate methods on 122 of 137
categories.
In 46K tiles (45.5K train and 512 test), we provide high-resolution 8192x8192
images. We include 3–5 public domain 1 m/pixel aerial images from the US
National Agriculture Imagery Program (NAIP) between 2011–2020. These images
are only available in the US, so train and test tiles for the high-resolution
mode are restricted to the US.
We download the images from ESA and USGS, and use GDAL [6] to process the
images into Web-Mercator tiles.
The structure of SatlasPretrain enables methods to leverage both spatial and
temporal context. Methods can make use of long-range spatial context from many
neighboring tiles to improve the accuracy of predictions at a tile. Similarly,
methods can learn to synthesize features across the image time series that we
include at each tile in the dataset to improve prediction accuracy; for
example, when predicting the crop type grown at a crop field, observations of
the crop field at different stages of the agricultural cycle can provide
different clues about the type of crop grown there. In contrast, existing
datasets (including all but FMoW in Table 1) typically associate each label
with a single image, and require methods to predict the label with that one
image only.
### 3.2 Labels
SatlasPretrain labels span 137 categories, with seven label types (see
examples in Figure 2):
1. 1.
Semantic segmentation—e.g., predicting per-pixel land cover (water vs forest
vs developed vs etc.).
2. 2.
Regression—e.g., predicting per-pixel bathymetry (water depth) or percent tree
cover.
3. 3.
Points (object detection)—e.g., predicting wind turbines, oil wells, and
vessels.
4. 4.
Polygons (instance segmentation)—e.g., predicting buildings, dams, and
aquafarms.
5. 5.
Polylines—e.g., predicting roads, rivers, and railways.
6. 6.
Properties of points, polygons, and polylines—e.g., the rotor diameter of a
wind turbine.
7. 7.
Classification—e.g., whether an image exhibits negligible, low, or high
wildfire smoke density.
Most categories represent slow-changing objects like roads or wind turbines.
During dataset creation, we aim for labels under these categories to
correspond to the most recent image available at each tile. Thus, during
inference, if these objects change over the image time series available at a
tile, the model predictions should reflect the last image in the time series.
A few categories represent dynamic objects like vessels and floods. For labels
in these categories, in addition to specifying the object position, the label
specifies the timestamp of the image that it corresponds to. During inference,
for dynamic categories, the model should make a separate set of predictions
for each image in the time series.
We derive SatlasPretrain labels from seven sources: new annotation by domain
experts, new annotation by Amazon Mechanical Turk (AMT) workers, and
processing five existing datasets—OpenStreetMap [30], NOAA lidar scans,
WorldCover [53], Microsoft Buildings [3], and C2S [5].
Each category is annotated (valid) in only a subset of tiles. Thus, in some
tiles, a given category may be invalid, meaning that there is no ground truth
for the category in that tile. In other tiles, a category may be valid but
have zero labels, meaning that there are no instances of that category in the
tile. In supplementary Section A.1, for each category, we detail the number of
tiles where the category is valid, the number of tiles where the category has
at least one label, and the number of labels under that category; we also
detail the category’s label type and data source.
Labels in SatlasPretrain are relevant to numerous planet and environmental
monitoring applications, which we discuss in supplementary Section A.2.
We summarize the data collection process for each of the data sources below.
Expert Annotation. Two domain experts annotated 12 categories: off-shore wind
turbines, off-shore platforms, vessels, 6 tree cover categories (e.g. low vs
high), and 3 snow presence categories (none, partial, or full). To facilitate
this process, we built a dedicated annotation tool called Siv that is
customizable for individual categories. For example, when annotating marine
objects, we found that displaying images of the same marine location at
different times was crucial for accurately distinguishing vessels from fixed
infrastructure (generally, a vessel will only appear in one of the images,
while wind turbines and platforms appear in all images); thus, for these
categories, we ensured the domain experts could press the arrow keys in Siv to
toggle between different spatially aligned images of the same tile. Similarly,
for tree cover, we found that consulting external sources like Google Maps and
OpenStreetMap helped improve accuracy in cases where tree cover was not clear
in NAIP or Sentinel-2 images; thus, when annotating tree cover, we included
links in Siv to these external sources.
AMT. AMT workers annotated 9 categories: coastal land, coastal water, fire
retardant drops, areas burned by wildfires, airplanes, rooftop solar panels,
and 3 smoke presence categories (none, low, or high). We reused the Siv
annotation tool for AMT annotation, incorporating additional per-category
customizations as needed (which we detail in supplementary Section A.3.1).
To maximize annotation quality, for each category, we first selected AMT
workers through a qualification task: domain experts annotated between 100–400
tiles, and we asked each candidate AMT worker to annotate the same tiles; we
only asked workers whose labels corresponded closely with expert labels to
continue with further annotation. We also conducted majority voting over
multiple workers; we decided the number of workers needed per tile on a per-
category basis (see Section A.3.2), by first having one worker annotate each
tile, and then analyzing the label quality. For example, we found that
airplanes were unambiguous enough that a single worker sufficed, while we had
three workers label each tile for areas burned by wildfires.
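The per-category majority vote over multiple workers can be sketched as follows (a minimal illustration of the aggregation step, not the authors' actual tooling; the category labels are hypothetical):

```python
from collections import Counter

def majority_vote(worker_labels):
    """Return the label most workers agreed on for one tile.

    worker_labels: one label per AMT worker, e.g. for a three-worker
    burned-area category: ["burned", "burned", "not-burned"].
    """
    counts = Counter(worker_labels)
    label, _ = counts.most_common(1)[0]
    return label

# Unambiguous categories like airplanes reduce to the single-worker case.
assert majority_vote(["airplane"]) == "airplane"
assert majority_vote(["burned", "burned", "not-burned"]) == "burned"
```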
Figure 4: Model architecture of SatlasNet. A separate head is used to predict
outputs for each label type. We visualize example outputs from two such heads
(segmentation and polygon).
OpenStreetMap (OSM). OSM is a collaborative map dataset built through edits
made by contributing users. Objects in OSM span a wide range of categories,
from roads to power substations. We obtained OSM data as an OSM PBF file on 9
July 2022 from Geofabrik, and processed it using the Go osmpbf library to
extract 101 categories.
Recall is a key issue for labels derived from OSM. From initial qualitative
analysis, we consistently observed that OSM objects have high precision but
variable recall: the vast majority of objects were correct, but for some
categories, many objects were visible in satellite imagery but not mapped in
OSM. To mitigate this issue, we employed heuristics to automatically prune
tiles that most likely had low recall, based on the number of labels and
distinct categories in the tile. For instance, we found that tiles with many
roads but no buildings were likely to have missing objects in other categories
like silo or water tower. We detail these heuristics in supplementary Section
A.4.
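As a hedged sketch of this pruning step (the paper's actual rules are in supplementary Section A.4; the thresholds and category names below are hypothetical):

```python
def keep_tile(categories):
    """categories: the OSM category of each labeled object in one tile.
    Return False for tiles that likely have low recall.
    Hypothetical thresholds for illustration only."""
    n_roads = categories.count("road")
    n_buildings = categories.count("building")
    # A tile with many roads but no buildings likely has unmapped
    # objects in other categories (e.g. silos, water towers).
    if n_roads >= 10 and n_buildings == 0:
        return False
    # Very few labels or very few distinct categories is also suspect.
    if len(categories) < 3 or len(set(categories)) < 2:
        return False
    return True

assert keep_tile(["road"] * 12) is False
assert keep_tile(["road"] * 5 + ["building"] * 5) is True
```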
We found that these heuristics were sufficient to yield high-quality labels
for most categories. However, we identified 13 remaining low-recall
categories, including gas stations, helipads, and oil wells. From an analysis
of 1300 tiles, we determined that recall was still at least 80% for these
categories, which we deemed sufficient for the training set: there are methods
for learning from sparse labels, and large-scale training on noisy labels has
produced models like CLIP that deliver state-of-the-art performance. However,
we deemed that these 13 categories did not have sufficient recall for the test
set. Thus, to ensure a highly accurate test set, for each of these 13
categories, we trained an initial model on OSM labels and tuned its confidence
threshold for high-recall low-precision detection; we then hand-labeled its
predictions to add missing labels to the test set. In Section A.4, we detail
these categories and the number of missing labels identified in the test set.
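The threshold-tuning step, choosing the highest confidence threshold that still keeps recall above a target on validation detections, can be sketched as follows (a simplified stand-in for the authors' procedure):

```python
def threshold_for_recall(scores, is_true, target_recall=0.8):
    """scores: model confidences for validation detections;
    is_true: whether each detection matched a real object.
    Return the highest threshold whose recall still meets target_recall,
    yielding a high-recall, low-precision operating point."""
    n_pos = sum(is_true)
    best = 0.0
    for t in sorted(set(scores)):
        kept_pos = sum(1 for s, y in zip(scores, is_true) if y and s >= t)
        if kept_pos / n_pos >= target_recall:
            best = t
    return best

# Highest threshold keeping at least 80% of the true objects.
t = threshold_for_recall([0.2, 0.6, 0.7, 0.9], [True, True, False, True], 0.8)
```

Predictions above the chosen threshold would then be hand-labeled to recover missing test-set labels.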
NOAA Lidar Scans. NOAA coastal topobathy maps derived from lidar scans contain
elevation data for land and depth data for water. We download 5,868 such maps
from various NOAA surveys, and process them to derive per-pixel depth and
elevation labels for 5,123 SatlasPretrain tiles.
WorldCover. WorldCover [53] is a global land cover map developed by the
European Space Agency. We process the map to derive 11 land cover and land use
categories, ranging from barren land to developed areas.
Microsoft Buildings. We process 70 GeoJSON files from various Microsoft
Buildings datasets [3] to derive building polygons in SatlasPretrain. The data
is released under ODbL.
C2S. C2S [5] consists of flood and cloud labels in Sentinel-2 images, released
under CC-BY-4.0. We warp the labels to Web-Mercator and include them in
SatlasPretrain. We also download and process the Sentinel-2 images that
correspond exactly to the ones used in C2S, so that they share the same
processing as other Sentinel-2 images in SatlasPretrain.
Balancing the scale of labels with label quality was a key consideration in
managing new annotation and selecting existing data sources to process. As we
developed the dataset, we conducted iterative analyses to evaluate the
precision and recall of labels that we collected, and used this information to
improve later annotation and refine data source processing. In supplementary
Section A.5, we include an analysis of incorrect and missing labels under
every category in the final dataset; we find that 116/137 categories have >99%
precision, 15 have 95-99% precision, 4 have 90-95% precision, and 2 have
80-90% precision.
## 4 SatlasNet
Off-the-shelf computer vision models cannot handle all the label types in
SatlasPretrain, e.g., while Mask2Former [18] can simultaneously perform
semantic and instance segmentation, it is not designed to predict properties
of polygons or classify images. This prevents these models from leveraging the
full set of transfer learning opportunities present in SatlasPretrain; for
example, detecting building polygons is likely useful for segmenting images
for land cover and land use, since land use includes a human-developed
category. We develop a unified model, SatlasNet, that is capable of learning
from all seven label types.
Figure 4 shows a schematic of our model. SatlasNet is inspired by recent works
that employ task-specific output heads [21, 35, 29], as well as methods that
synthesize features across remote sensing image time series [22, 27]. It takes
as input a time series of spatially aligned images, and processes each image
(which may contain more than three bands) through a Swin Transformer [38]
backbone (Swin-Base), which outputs feature maps for each image at four
scales. We apply max temporal pooling at each scale to derive one set of
multi-scale features. We pass the features to seven output heads (one for each
label type) to compute outputs. For polylines, while specialized polyline
extraction architectures have been shown to improve accuracy [10, 33, 52], we
opt to employ the simpler segmentation approach [60] where we apply a UNet
head to segment images for polyline categories, and post-process the
segmentation probabilities with binary thresholding, morphological thinning,
and line following and simplification [20] to extract polylines.
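The max temporal pooling step can be illustrated with NumPy arrays standing in for the Swin feature maps (toy shapes; the real features have four scales plus batch and channel axes):

```python
import numpy as np

def temporal_max_pool(per_image_features):
    """Collapse a time series of per-image multi-scale features into one
    multi-scale feature set by elementwise max over the time axis.

    per_image_features: list over T images; each entry is a list of
    feature maps (one per scale), with shapes shared across images.
    """
    n_scales = len(per_image_features[0])
    return [
        np.stack([feats[s] for feats in per_image_features]).max(axis=0)
        for s in range(n_scales)
    ]

# Two images, one scale: the pooled map takes the per-pixel maximum.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 0.0], [1.0, 6.0]])
pooled = temporal_max_pool([[a], [b]])
assert pooled[0].tolist() == [[5.0, 2.0], [3.0, 6.0]]
```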
| High-Resolution NAIP Images | Low-Resolution Sentinel-2 Images
Method | Seg $\uparrow$ | Reg $\downarrow$ | Pt $\uparrow$ | Pgon $\uparrow$ | Pline $\uparrow$ | Prop $\uparrow$ | Seg $\uparrow$ | Reg $\downarrow$ | Pt $\uparrow$ | Pgon $\uparrow$ | Pline $\uparrow$ | Prop $\uparrow$ | Cls $\uparrow$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
PSPNet (ResNext-101) [59] | 77.8 | 15.0 | - | - | 53.2 | - | 62.1 | 16.2 | - | - | 30.7 | - | -
LinkNet (ResNext-101) [15] | 77.3 | 12.9 | - | - | 61.0 | - | 61.1 | 14.1 | - | - | 41.4 | - | -
DeepLabv3 (ResNext-101) [16] | 80.1 | 10.6 | - | - | 59.8 | - | 61.8 | 13.9 | - | - | 44.7 | - | -
ResNet-50 [32] | - | - | - | - | - | 87.6 | - | - | - | - | - | 70.3 | 97
ViT-Large [26] | - | - | - | - | - | 78.1 | - | - | - | - | - | 66.9 | 99
Swin-Base [38] | - | - | - | - | - | 87.1 | - | - | - | - | - | 69.4 | 99
Mask R-CNN (ResNet-50) [31] | - | - | 27.6 | 30.4 | - | - | - | - | 22.0 | 12.3 | - | - | -
Mask R-CNN (Swin-Base) [31] | - | - | 30.4 | 31.5 | - | - | - | - | 25.6 | 15.2 | - | - | -
ISTR [34] | - | - | 2.0 | 4.9 | - | - | - | - | 1.2 | 1.4 | - | - | -
SatlasNet (single-image, per-type) | 79.4 | 8.3 | 28.0 | 30.4 | 61.5 | 86.6 | 64.8 | 9.3 | 25.7 | 14.8 | 42.5 | 67.5 | 99
SatlasNet (single-image, joint) | 74.5 | 7.4 | 28.0 | 31.1 | 60.9 | 87.3 | 55.8 | 10.6 | 22.0 | 10.3 | 45.5 | 73.8 | 99
SatlasNet (single-image, fine-tuned) | 79.8 | 7.2 | 32.3 | 33.0 | 62.4 | 89.5 | 65.3 | 9.0 | 27.4 | 16.3 | 45.9 | 80.0 | 99
SatlasNet (multi-image, per-type) | 79.4 | 8.2 | 25.8 | 27.5 | 59.2 | 77.3 | 67.2 | 10.5 | 31.9 | 19.0 | 48.1 | 67.1 | 99
SatlasNet (multi-image, joint) | 79.2 | 7.8 | 31.2 | 33.8 | 53.6 | 87.8 | 66.7 | 8.5 | 31.5 | 19.5 | 41.9 | 78.8 | 99
SatlasNet (multi-image, fine-tuned) | 81.0 | 7.6 | 33.2 | 34.1 | 61.1 | 89.2 | 69.7 | 7.8 | 32.0 | 20.2 | 50.4 | 80.0 | 99
Table 2: Results on the SatlasPretrain test set for the high- and low-
resolution image modes. We break down results by label type: segmentation
(Seg), regression (Reg), points (Pt), polygons (Pgon), polylines (Pline),
properties (Prop), and classification (Cls). We show absolute error for Reg
(lower is better), and accuracy for the others (higher is better).
## 5 Evaluation
We first evaluate our method and eight classification, semantic segmentation,
and instance segmentation baselines on the SatlasPretrain test split in
Section 5.1. We then evaluate performance on seven downstream tasks in Section
5.2, comparing pre-training on SatlasPretrain to pre-training on other remote
sensing datasets, as well as self-supervised learning techniques specialized
for remote sensing.
### 5.1 Results on SatlasPretrain
Methods. We compare SatlasNet against eight baselines on SatlasPretrain. We
select baselines that are either standard models or models that provide state-
of-the-art performance for subsets of label types in SatlasPretrain. None of
the baselines are able to handle all seven SatlasPretrain label types. For
property prediction and classification, we compare ResNet [32], ViT [26], and
Swin Transformer [38]. For segmentation, regression, and polylines, we compare
PSPNet [59], LinkNet [15], and DeepLabv3 [16]. For points and polygons, we
compare Mask R-CNN [31] and ISTR [34].
We train three variants of SatlasNet:
* •
Per-type: train separately on each label type.
* •
Joint: jointly train across all categories.
* •
Fine-tuned: fine-tune the jointly trained parameters on each label type.
All baselines are fine-tuned on each label type (after joint training on the
subset of label types they can handle), which provides the highest
performance.
For each SatlasNet variant, we also evaluate in single-image and multi-image
modes. For all baselines and single-image SatlasNet, we sample training
examples by either (a) sampling a tile, and pairing the most recent image at
the tile with slow-changing labels (with dynamic and other invalid categories
masked); or (b) sampling a tile and image, and pairing the image with
corresponding dynamic labels. For multi-image SatlasNet, we provide as input a
time series of eight Sentinel-2 images for low-resolution mode or four NAIP
images for high-resolution mode; for slow-changing labels, the images are
ordered by timestamp, but for dynamic labels, we always place the sampled
image at the end of the time series input. In all cases, we sample examples
based on the maximum inverse frequency of categories appearing in the example.
We use RGB bands only here, but include results for single-image SatlasNet
with nine Sentinel-2 bands in supplementary Section C.
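The inverse-frequency sampling rule can be sketched as follows, with hypothetical category names; each example is weighted by the largest inverse frequency among the categories it contains:

```python
import random
from collections import Counter

def sampling_weights(examples):
    """examples: one set of category names per training example.
    Weight each example by the maximum inverse frequency of its
    categories, so rare categories are sampled more often."""
    freq = Counter(c for cats in examples for c in cats)
    return [max(1.0 / freq[c] for c in cats) for cats in examples]

# The rare category ("helipad") upweights the example containing it.
examples = [{"road"}, {"road"}, {"road", "helipad"}]
weights = sampling_weights(examples)
batch = random.choices(examples, weights=weights, k=2)
```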
Across all methods, we input 512x512 images during both training and
inference; for high-resolution inference, since images covering the tile are
8K by 8K, we independently process 256 512x512 windows and merge the model
outputs. We employ random cropping, horizontal and vertical flipping, and
random resizing augmentations during training. We initialize models with
ImageNet-pretrained weights. We use the Adam optimizer, and initialize the
learning rate to $10^{-4}$, decaying via halving down to $10^{-6}$ upon
plateaus in the training loss. We train with a batch size of 32 for 100K
batches.
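The windowed high-resolution inference (256 non-overlapping 512x512 windows per 8K by 8K tile) can be sketched as follows, with a per-pixel callable standing in for the model:

```python
import numpy as np

def tiled_inference(image, model, win=512):
    """Run `model` on non-overlapping win x win windows of a large
    (H, W) image and stitch the per-pixel outputs back together.
    Assumes H and W are multiples of win, as with 8192x8192 tiles."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=image.dtype)
    for y in range(0, h, win):
        for x in range(0, w, win):
            out[y:y + win, x:x + win] = model(image[y:y + win, x:x + win])
    return out

# Toy check with an identity "model" on a 1024x1024 single-band image,
# which is split into (1024/512)^2 = 4 windows.
img = np.random.rand(1024, 1024)
assert np.array_equal(tiled_inference(img, lambda patch: patch), img)
```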
Metrics. We use standard metrics for each label type: accuracy for
classification, F1 score for segmentation, mean absolute error for regression,
mAP accuracy for points and polygons, and GEO accuracy [14] for polylines. We
compute metrics per-category, and report the average across categories under
each label type.
Figure 5: Qualitative results on SatlasPretrain. Rightmost: a failure case where SatlasNet detects only 1/5 oil wells.
| UCM | RESISC45 | AID | FMoW | Mass Roads | Mass Buildings | Airbus Ships | Average
Method | 50 | All | 50 | All | 50 | All | 50 | All | 50 | All | 50 | All | 50 | All | 50 | All
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Random Initialization | 0.26 | 0.86 | 0.15 | 0.77 | 0.18 | 0.68 | 0.03 | 0.17 | 0.69 | 0.80 | 0.68 | 0.77 | 0.31 | 0.53 | 0.33 | 0.65
ImageNet [25] | 0.35 | 0.92 | 0.17 | 0.95 | 0.20 | 0.81 | 0.03 | 0.21 | 0.77 | 0.85 | 0.78 | 0.83 | 0.37 | 0.65 | 0.38 | 0.75
BigEarthNet [51] | 0.35 | 0.95 | 0.20 | 0.94 | 0.23 | 0.78 | 0.03 | 0.27 | 0.78 | 0.85 | 0.81 | 0.85 | 0.40 | 0.68 | 0.40 | 0.76
MillionAID [39] | 0.72 | 0.97 | 0.30 | 0.96 | 0.30 | 0.82 | 0.04 | 0.35 | 0.78 | 0.84 | 0.82 | 0.85 | 0.46 | 0.67 | 0.49 | 0.78
DOTA [55] | 0.56 | 0.99 | 0.28 | 0.95 | 0.33 | 0.83 | 0.03 | 0.30 | 0.82 | 0.86 | 0.84 | 0.87 | 0.62 | 0.75 | 0.50 | 0.79
iSAID [58] | 0.60 | 0.97 | 0.29 | 0.97 | 0.34 | 0.86 | 0.04 | 0.30 | 0.82 | 0.86 | 0.84 | 0.86 | 0.55 | 0.73 | 0.50 | 0.79
MoCo [17] | 0.14 | 0.14 | 0.07 | 0.09 | 0.05 | 0.12 | 0.02 | 0.03 | 0.56 | 0.69 | 0.62 | 0.63 | 0.01 | 0.21 | 0.21 | 0.27
SeCo [40] | 0.48 | 0.95 | 0.20 | 0.90 | 0.27 | 0.74 | 0.03 | 0.26 | 0.70 | 0.81 | 0.71 | 0.77 | 0.27 | 0.54 | 0.38 | 0.71
SatlasPretrain | 0.83 | 0.99 | 0.36 | 0.98 | 0.42 | 0.88 | 0.06 | 0.44 | 0.82 | 0.87 | 0.87 | 0.88 | 0.56 | 0.80 | 0.56 | 0.83
Table 3: Results on seven downstream tasks when fine-tuned with 50 examples
(50) or the entire downstream dataset (All). Accuracy is reported for UCM,
RESISC45, and AID while F1 Score is reported for FMoW, Mass Roads, Mass
Buildings, and Airbus Ships. SatlasPretrain pre-training improves average
accuracy across the tasks by 6% over the next best baseline.
Results. We show results on SatlasPretrain in Table 2. Across the seven label
types, single-image SatlasNet matches or surpasses the performance of state-
of-the-art, purpose-built baselines when trained per-type, validating its
effectiveness as a unified model that can predict diverse remote sensing
labels. Jointly training one set of SatlasNet parameters for all categories
reduces average performance on several label types, but SatlasNet remains
competitive in most cases; this training mode provides large efficiency gains
since the backbone features need only be computed once for each image during
inference, rather than once per label type. Fine-tuning SatlasNet on each
label type from the jointly trained parameters provides an average 7.1%
relative improvement across the label types and image modes over
per-type training. This supports our hypothesis that there are transfer
learning opportunities between the label types, validating the utility of a
unified model for improving performance. Multi-image SatlasNet provides
another 5.6% relative improvement in average performance, showing that it is
able to effectively synthesize information across image time series to produce
better predictions; nevertheless, we believe that there is substantial room
for further improvement in methods for processing remote sensing image time
series.
We show qualitative results in Figure 5, with additional examples in
supplementary Section E. We achieve high accuracy on several categories, such
as wind turbines and water towers. However, for oil wells, one well is
detected but several others are not. Similarly, for polyline features like
roads and railways, the model produces short noisy segments, despite ample
training data for these categories; we believe that incorporating and
improving models that are tailored for specialized output types like polylines
[33, 52] has the potential to improve accuracy.
### 5.2 Downstream Performance
We now evaluate accuracy on seven downstream tasks when pre-training on
SatlasPretrain compared to pre-training on four existing remote sensing
datasets, as well as two self-supervised learning methods. For each downstream
task, we evaluate accuracy when training on just 50 examples and when training
on the whole dataset, to focus on the challenge of improving performance on
niche remote sensing applications that require expert annotation and thus have
few labeled examples.
Methods. We compare pre-training on high-resolution images in SatlasPretrain
to pre-training on four existing remote sensing datasets: BigEarthNet [51],
Million-AID [39], DOTA [55], and iSAID [58]. We use SatlasNet in all cases,
fine-tuning the pre-trained Swin backbone on each downstream dataset.
We also compare two self-supervised learning methods, Momentum Contrast v2
(MoCo) [17] and Seasonal Contrast (SeCo) [40]. The latter is a specialized
method for remote sensing that leverages multiple image captures of the same
location to learn invariance to seasonal changes. For MoCo, we use our
SatlasNet model and train on SatlasPretrain images. For SeCo, we evaluate
their original model trained on their dataset. We fine-tune the weights
learned through self-supervision on the downstream tasks. We provide results
for additional variants in supplementary Section B.3.
We fine-tune both the pre-training and self-supervised learning methods by
first freezing the backbone and only training the prediction head for 32K
examples, and then fine-tuning the entire model. We provide additional
experiment details in supplementary Section B.1.
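This two-stage schedule (freeze the backbone, train the head, then unfreeze everything) can be sketched in PyTorch with a toy stand-in model (`TinyModel` and its layers are hypothetical, not the actual SatlasNet modules):

```python
import torch.nn as nn

class TinyModel(nn.Module):
    """Stand-in for SatlasNet: a pre-trained backbone plus a task head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 8)
        self.head = nn.Linear(8, 2)

    def forward(self, x):
        return self.head(self.backbone(x))

def set_backbone_frozen(model, frozen):
    """Freeze or unfreeze backbone parameters; the head stays trainable."""
    for p in model.backbone.parameters():
        p.requires_grad = not frozen

model = TinyModel()
# Stage 1: freeze the backbone and train only the prediction head
# (in the paper, for 32K examples).
set_backbone_frozen(model, True)
trainable = [p for p in model.parameters() if p.requires_grad]
# ... train the head here ...
# Stage 2: unfreeze and fine-tune the entire model.
set_backbone_frozen(model, False)
```

Passing only `trainable` to the optimizer in stage 1 keeps the pre-trained backbone intact while the randomly initialized head converges.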
Downstream Datasets. The downstream tasks consist of four existing large-scale
remote sensing datasets that involve classification with between 21 and 63
categories: UCM [57], AID [56], RESISC45 [19], and FMoW [22]. The other three
are the Massachusetts Buildings and Massachusetts Roads datasets [41], which
involve semantic segmentation, and the Airbus Ships [2] dataset, which
involves instance segmentation.
Results. Table 3 shows downstream performance with varying training set sizes.
SatlasPretrain consistently outperforms the baselines: when training on 50
examples, we improve average accuracy across the tasks by 18% over ImageNet
pre-training, and by 6% over the next best baseline. The state-of-the-art
performance achieved across such a wide range of downstream tasks clearly
demonstrates the generalizability of the representations derived from
SatlasPretrain pre-training, and the potential of SatlasPretrain to improve
performance on the numerous niche remote sensing applications. We include
results with more varying training examples in supplementary B.2.
## 6 Use in AI-Generated Geospatial Data
We have deployed SatlasPretrain to develop high-accuracy models for Satlas
(https://satlas.allen.ai/), a platform for global geospatial data generated by
AI from satellite imagery. Timely geospatial data, like the positions of wind
turbines and solar farms, is critical for informing decisions in emissions
reduction, disaster relief, urban planning, etc. However, high-quality global
geospatial data products can be hard to find because manual curation is often
cost-prohibitive. Satlas instead applies models fine-tuned for tasks like wind
turbine detection to automatically extract geospatial data from satellite
imagery on a monthly basis. Satlas currently consists of four geospatial data
products: wind turbines, solar farms, offshore platforms, and tree cover.
## 7 Conclusion
By improving on existing datasets in both scale and label diversity,
SatlasPretrain serves as an effective very-large-scale dataset for remote
sensing methods. Pre-training on SatlasPretrain increases average downstream
accuracy by 18% over ImageNet and 6% over existing remote sensing datasets,
indicating that it can readily be applied to the long tail of remote sensing
tasks that have few labeled examples. We have already leveraged models pre-
trained on SatlasPretrain to accurately detect wind turbines, solar farms,
offshore platforms, and tree cover in the Satlas platform at
https://satlas.allen.ai/.
## Appendix A Supplementary Material
The supplementary material can be accessed from
https://github.com/allenai/satlas/blob/main/SatlasPretrain.md.
## References
* [1] The machine vision challenge to better analyze satellite images of Earth. MIT Technology Review.
* [2] Airbus ship detection challenge. https://www.kaggle.com/c/airbus-ship-detection, 2018. Airbus.
* [3] Microsoft Building Footprints Datasets, 2021. Microsoft.
* [4] Copernicus Sentinel Missions. https://sentinel.esa.int/web/sentinel/home, 2022. European Space Agency.
* [5] A global flood events and cloud cover dataset (version 1.0), 2022. Cloud to Street, Microsoft, Radiant Earth Foundation.
* [6] GDAL, 2023. Open Source Geospatial Foundation.
* [7] Peri Akiva, Matthew Purri, and Matthew Leotta. Self-supervised Material and Texture Representation Learning for Remote Sensing Tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8203–8215, 2022.
* [8] Robert S Allison, Joshua M Johnston, Gregory Craig, and Sion Jennings. Airborne Optical and Thermal Remote Sensing for Wildfire Detection and Monitoring. Sensors, 16(8):1310, 2016.
* [9] Manuela Andreoni, Blacki Migliozzi, Pablo Robles, and Denise Lu. The Illegal Airstrips Bringing Toxic Mining to Brazil’s Indigenous Land. The New York Times.
* [10] Favyen Bastani, Songtao He, Sofiane Abbar, Mohammad Alizadeh, Hari Balakrishnan, Sanjay Chawla, Sam Madden, and David DeWitt. RoadTracer: Automatic Extraction of Road Networks from Aerial Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4720–4728, 2018.
* [11] Favyen Bastani, Songtao He, Satvat Jagwani, Mohammad Alizadeh, Hari Balakrishnan, Sanjay Chawla, Sam Madden, and Mohammad Amin Sadeghi. Updating Street Maps using Changes Detected in Satellite Imagery. In Proceedings of the 29th International Conference on Advances in Geographic Information Systems, pages 53–56, 2021.
* [12] Favyen Bastani and Samuel Madden. Beyond Road Extraction: A Dataset for Map Update using Aerial Images. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 11905–11914, 2021.
* [13] Anil Batra, Suriya Singh, Guan Pang, Saikat Basu, CV Jawahar, and Manohar Paluri. Improved Road Connectivity by Joint Learning of Orientation and Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10385–10393, 2019.
* [14] James Biagioni and Jakob Eriksson. Map Inference in the Face of Noise and Disparity. In Proceedings of the 20th International Conference on Advances in Geographic Information Systems, pages 79–88, 2012.
* [15] Abhishek Chaurasia and Eugenio Culurciello. LinkNet: Exploiting encoder representations for efficient semantic segmentation. IEEE Visual Communications and Image Processing (VCIP), pages 1–4, 2017.
* [16] Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking Atrous Convolution for Semantic Image Segmentation. ArXiv, abs/1706.05587, 2017.
* [17] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved Baselines with Momentum Contrastive Learning. arXiv preprint arXiv:2003.04297, 2020.
* [18] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention Mask Transformer for Universal Image Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
* [19] Gong Cheng, Junwei Han, and Xiaoqiang Lu. Remote Sensing Image Scene Classification: Benchmark and State of the Art. In Proceedings of the IEEE, volume 105, pages 1865–1883, 2017.
* [20] Guangliang Cheng, Ying Wang, Shibiao Xu, Hongzhen Wang, Shiming Xiang, and Chunhong Pan. Automatic Road Detection and Centerline Extraction via Cascaded End-to-end Convolutional Neural Network. IEEE Transactions on Geoscience and Remote Sensing, 55(6):3322–3337, 2017.
* [21] Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. Unifying Vision-and-Language Tasks via Text Generation. In International Conference on Machine Learning, pages 1931–1942. PMLR, 2021.
* [22] Gordon Christie, Neil Fendley, James Wilson, and Ryan Mukherjee. Functional Map of the World. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
* [23] Annarita D’Addabbo, Alberto Refice, Guido Pasquariello, Francesco P Lovergine, Domenico Capolongo, and Salvatore Manfreda. A Bayesian Network for Flood Detection Combining SAR Imagery and Ancillary Data. IEEE Transactions on Geoscience and Remote Sensing, 54(6):3612–3625, 2016.
* [24] Ilke Demir, Krzysztof Koperski, David Lindenbaum, Guan Pang, Jing Huang, Saikat Basu, Forest Hughes, Devis Tuia, and Ramesh Raskar. DeepGlobe 2018: A Challenge to Parse the Earth through Satellite Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 172–181, 2018.
* [25] J. Deng, W. Dong, R. Socher, L. J. Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
* [26] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference on Learning Representations, 2021.
* [27] Vivien Sainte Fare Garnot and Loic Landrieu. Panoptic Segmentation of Satellite Image Time Series with Convolutional Temporal Attention Networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 4872–4881, 2021.
* [28] Vivien Sainte Fare Garnot, Loic Landrieu, and Nesrine Chehata. Multi-modal Temporal Attention Models for Crop Mapping from Satellite Time Series. ISPRS Journal of Photogrammetry and Remote Sensing, pages 294–305, 2022.
* [29] Tanmay Gupta, Amita Kamath, Aniruddha Kembhavi, and Derek Hoiem. Towards General Purpose Vision Systems: An End-to-End Task-Agnostic Vision-Language Architecture. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16399–16409, 2022.
* [30] Mordechai Haklay and Patrick Weber. OpenStreetMap: User-Generated Street Maps. IEEE Pervasive computing, 7(4):12–18, 2008.
* [31] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In IEEE International Conference on Computer Vision (ICCV), pages 2980–2988, 2017.
* [32] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
* [33] Songtao He, Favyen Bastani, Satvat Jagwani, Mohammad Alizadeh, Hari Balakrishnan, Sanjay Chawla, Mohamed M Elshrif, Samuel Madden, and Mohammad Amin Sadeghi. Sat2Graph: Road Graph Extraction through Graph-Tensor Encoding. European Conference on Computer Vision, pages 51–67, 2020.
* [34] Jie Hu, Liujuan Cao, Yao Lu, Shengchuan Zhang, Yan Wang, Ke Li, Feiyue Huang, Ling Shao, and Rongrong Ji. ISTR: End-to-End Instance Segmentation with Transformers. ArXiv, abs/2105.00637, 2021.
* [35] Ronghang Hu and Amanpreet Singh. Unit: Multimodal Multitask Learning with a Unified Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 1439–1449, 2021.
* [36] Amita Kamath, Christopher Clark, Tanmay Gupta, Eric Kolve, Derek Hoiem, and Aniruddha Kembhavi. Webly Supervised Concept Expansion for General Purpose Vision Models. arXiv preprint arXiv:2202.02317, 2022.
* [37] Zuoyue Li, Jan Dirk Wegner, and Aurélien Lucchi. Topological Map Extraction from Overhead Images. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 1715–1724, 2019.
* [38] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 9992–10002, 2021.
* [39] Yang Long, Gui-Song Xia, Shengyang Li, Wen Yang, Michael Ying Yang, Xiao Xiang Zhu, Liangpei Zhang, and Deren Li. On Creating Benchmark Dataset for Aerial Image Interpretation: Reviews, Guidances, and Million-AID. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, pages 4205–4230, 2021.
* [40] Oscar Manas, Alexandre Lacoste, Xavier Giro i Nieto, David Vazquez, and Pau Rodriguez. Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.
* [41] Volodymyr Mnih. Machine Learning for Aerial Image Labeling. PhD thesis, University of Toronto, 2013.
* [42] Fernando Paolo, Tsu ting Tim Lin, Ritwik Gupta, Bryce Goodman, Nirav Patel, Daniel Kuster, David Kroodsma, and Jared Dunnmon. xView3-SAR: Detecting Dark Fishing Activity Using Synthetic Aperture Radar Imagery. Neural Information Processing Systems, 2022.
* [43] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning Transferable Visual Models from Natural Language Supervision. International Conference on Machine Learning, pages 8748–8763, 2021.
* [44] Sudha Radhika, Yukio Tamura, and Masahiro Matsui. Application of Remote Sensing Images for Natural Disaster Mitigation using Wavelet based Pattern Recognition Analysis. In 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 84–87, 2016.
* [45] Nasim Rahaman, Martin Weiss, Frederik Träuble, Francesco Locatello, Alexandre Lacoste, Yoshua Bengio, Chris Pal, Li Erran Li, and Bernhard Schölkopf. A General Purpose Neural Architecture for Geospatial Systems. HADR Workshop at NeurIPS 2022, 2022.
* [46] Caleb Robinson, Le Hou, Kolya Malkin, Rachel Soobitsky, Jacob Czawlytko, Bistra Dilkina, and Nebojsa Jojic. Large Scale High-Resolution Land Cover Mapping with Multi-Resolution Data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12726–12735, 2019.
* [47] Caleb Robinson, Anthony Ortiz, Kolya Malkin, Blake Elias, Andi Peng, Dan Morris, Bistra Dilkina, and Nebojsa Jojic. Human-Machine Collaboration for Fast Land Cover Mapping. pages 2509–2517, 2020.
* [48] Ronny Hänsch, Claudio Persello, Gemine Vivone, Javiera Castillo Navarro, Alexandre Boulch, Sebastien Lefevre, and Bertrand Le Saux. Data Fusion Contest 2022 (DFC2022), 2022.
* [49] Linus Scheibenreif, Joëlle Hanna, Michael Mommert, and Damian Borth. Self-Supervised Vision Transformers for Land-Cover Segmentation and Classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1422–1431, 2022.
* [50] Linus Scheibenreif, Michael Mommert, and Damian Borth. Contrastive Self-Supervised Data Fusion for Satellite Imagery. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 3:705–711, 2022.
* [51] Gencer Sumbul, Marcela Charfuelan, Begum Demir, and Volker Markl. BigEarthNet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding. In International Geoscience and Remote Sensing Symposium (IGARSS), 2019.
* [52] Yong-Qiang Tan, Shang-Hua Gao, Xuan-Yi Li, Ming-Ming Cheng, and Bo Ren. VecRoad: Point-based Iterative Graph Exploration for Road Graphs Extraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8910–8918, 2020.
* [53] Ruben Van De Kerchove, Daniele Zanaga, Wanda Keersmaecker, Niels Souverijns, Jan Wevers, Carsten Brockmann, Alex Grosu, Audrey Paccini, Oliver Cartus, Maurizio Santoro, et al. ESA WorldCover: Global land cover mapping at 10 m resolution for 2020 based on Sentinel-1 and 2 data. In AGU Fall Meeting Abstracts, volume 2021, pages GC45I–0915, 2021.
* [54] Yi Wang, Nassim Ait Ali Braham, Zhitong Xiong, Chenying Liu, Conrad M. Albrecht, and Xiao Xiang Zhu. SSL4EO-S12: A Large-Scale Multi-Modal, Multi-Temporal Dataset for Self-Supervised Learning in Earth Observation. ArXiv, abs/2211.07044, 2022.
* [55] Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. DOTA: A Large-scale Dataset for Object Detection in Aerial Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
* [56] Gui-Song Xia, Jingwen Hu, Fan Hu, Baoguang Shi, Xiang Bai, Yanfei Zhong, and Liangpei Zhang. AID: A Benchmark Dataset for Performance Evaluation of Aerial Scene Classification. IEEE Journal of Transactions on Geoscience and Remote Sensing, 55(7):3965–3981, 2017.
* [57] Yi Yang and Shawn Newsam. Bag-Of-Visual-Words and Spatial Extensions for Land-Use Classification. ACM Conference on Spatial Information (SIGSPATIAL), 2010.
* [58] Syed Waqas Zamir, Aditya Arora, Akshita Gupta, Salman Khan, Guolei Sun, Fahad Shahbaz Khan, Fan Zhu, Ling Shao, Gui-Song Xia, and Xiang Bai. iSAID: A Large-scale Dataset for Instance Segmentation in Aerial Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019.
* [59] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid Scene Parsing Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6230–6239, 2017.
* [60] Lichen Zhou, Chuang Zhang, and Ming Wu. D-LinkNet: LinkNet with Pretrained Encoder and Dilated Convolution for High Resolution Satellite Imagery Road Extraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 182–186, 2018.
* [61] Stefano Zorzi, Shabab Bazrafkan, Stefan Habenschuss, and Friedrich Fraundorfer. PolyWorld: Polygonal Building Extraction with Graph Neural Networks in Satellite Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1848–1857, 2022.
|
# Artificial Intelligence-based Eosinophil Counting in Gastrointestinal
Biopsies
Harsh Shah, Thomas Jacob, Amruta Parulekar, Anjali Amarapurkar, Amit Sethi
###### Abstract
Eosinophils are normally present in the gastrointestinal (GI) tract of healthy
individuals. When their number rises above the usual level, patients present
with varied symptoms, and clinicians find this condition, called eosinophilia,
difficult to diagnose. Early diagnosis can help in treating patients, and
histopathology is the gold standard for diagnosis. Because the condition is
under-diagnosed, counting eosinophils in GI tract biopsies is important. In
this study, we trained and tested a deep neural network based on UNet to
detect and count eosinophils in GI tract biopsies, using connected component
analysis to extract individual eosinophils. We studied the correlation of the
eosinophilic infiltration counted by AI with a manual count. GI tract biopsy
slides were stained with H&E, scanned using a camera attached to a microscope,
and five high-power field images were taken per slide. The Pearson correlation
coefficient between the machine-detected and manual eosinophil counts was 85%
on 300 held-out (test) images.
## I Introduction
Eosinophils are granulocytes formed inside the bone marrow. After maturation
they migrate from the bone marrow to the thymus, uterus, mammary glands, and
the gastrointestinal (GI) tract, which are their normal residing sites in the
body. [1] They actively participate in the protective mechanism against
parasites and allergens by degranulating inflammatory mediators such as
leukotrienes, vasoactive polypeptides, and interleukins. An increase in their
number is, therefore, indicative of an underlying condition, which can provide
vital context for the management of diseases associated with these organs. In
the GI tract, for instance, eosinophils are found in the lamina propria layer
of the mucosa. [2] In a GI biopsy, an eosinophil normally shows orange-coloured
coarse granular cytoplasm and a bilobed nucleus. Eosinophils are normally
present in the GI tract of healthy individuals. [4] When their number rises
above the usual level, the patient gets varied symptoms depending on the site
of involvement. Their number in the GI tract varies with factors such as
geographical variation, age group, food allergies, the type of food consumed
by the population, environmental hygiene and sanitation, parasitic
infestation, and drug allergies. Eosinophils also increase secondary to
diseases such as celiac disease, ulcerative colitis, Crohn's disease,
malignancies, eosinophilic syndromes, and drug-induced eosinophilia. [3]
Because of these varied symptoms, clinicians find it difficult to diagnose
eosinophilia, and delayed diagnosis can allow chronic disease to progress.
Histopathology is helpful in diagnosing eosinophilia, which is a benign
inflammatory disorder, and is the gold standard for this condition. Timely
diagnosis benefits the patient, allowing treatment with steroids, leukotriene
antagonists, anti-IgE, and anti-IL-5 agents. Thus, there is a need for
counting eosinophils in GI tract biopsies, as eosinophilia is a common, often
subclinical, under-diagnosed, and diagnostically challenging entity. [5]
Artificial intelligence (AI), especially in the form of deep convolutional
neural networks, can be used to detect objects in images. To detect
eosinophilia, we trained deep neural networks on carefully annotated images of
GI tract histopathology and tested the models on held-out images. [6] We hope
that, once this technology is deployed in clinics after further validation, it
will take less time than the manual procedure, outperform less experienced
pathologists, and work without fatigue.
### I-A Aims and Objectives
We summarize the objectives of the study as follows:
1. To train and test AI models for recognizing and counting eosinophils in
histological slides.
2. To study the correlation between eosinophilic infiltration counted by AI
and by the manual method.
Towards this end, we carefully annotated eosinophils in 400 cases of GI tract
biopsies, separated the images into training and testing at the patient-level,
trained models based on the UNet [8] with data augmentation, and examined
various metrics, including correlation between actual and predicted eosinophil
counts for the test images.
## II Materials and Methods
### II-A Study design
The design of the study can be summarized as follows:
1. Study description: Non-randomised prospective comparative study.
2. Place of study: Department of Pathology, Lokmanya Tilak Municipal Medical
College and Sion Hospital, Mumbai.
3. Duration of study: 14 months (January 2021 to February 2022).
4. Sample size: 400.
5. Inclusion criteria: (1) all gastro-intestinal tract (GIT) biopsy samples
received in the Department of Pathology; (2) all images of high-power fields,
with pixel intensities ranging from 0 to 255, for eosinophil prediction by AI.
6. Exclusion criteria: (1) sections thicker than three micrometres; (2)
sections smaller in size than required; (3) slides with uneven sections taken
by the microtome; (4) over-staining or under-staining with haematoxylin and
eosin (H&E); (5) slides with air bubbles trapped after mounting; (6) blurred
images of high-power fields; and (7) eosinophilic or basophilic colour
imbalance in the images.
Figure 1: Overall pipeline: raw images of H&E-stained slides of GI tract
biopsies (taken using a camera attached to a microscope), annotations of
eosinophils (using Aperio ImageScope), the ground truth map, and a sample
prediction map produced by our method.
### II-B Study material
All gastro-intestinal tract biopsy samples received in the Department of
Pathology from January 2021 to February 2022 were included in this prospective
study. The biopsy samples were taken for routine tissue processing. Paraffin
blocks were prepared, and sections of three-micrometre thickness were cut
using a microtome. Slides were stained with haematoxylin and eosin (H&E).
After diagnosis of each case, the slides were imaged using a microscope with a
camera port; for each slide, five high-power fields were selected and imaged.
### II-C Annotation and software used
For AI to recognize eosinophils, image annotation was done using Aperio
ImageScope software. Images were fed to a convolutional neural network model
for training and testing, implemented in Python using the NumPy, PyTorch, and
Matplotlib libraries. This process is illustrated in Figure 1.
### II-D Training details
For training the AI model, 100 cases were randomly selected; the remaining 300
cases were reserved for testing and statistical analysis. The image data was
standardized using color normalization, which helped in training the AI. [7]
Data was augmented using geometric image transformations (flips and 90-degree
rotations) and brightness augmentations. The annotations were converted from
.xml files to binary mask images using a Python script. The training cases
were further split into training (80%) and validation (20%) sets: the model
learns from the training data, while the validation set is used to evaluate
the model during its training phase. All images were resized to 1024x1024
resolution for standardisation, divided by 255 to map pixel intensities to the
range 0 to 1, and then fed to a UNet architecture. [8]
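The .xml-to-binary-mask conversion mentioned above can be sketched as follows. This is a minimal, dependency-light illustration that assumes Aperio ImageScope's usual Annotation/Regions/Region/Vertices/Vertex XML layout with X/Y attributes; the paper's actual script is not shown, and attribute names can differ between ImageScope versions.

```python
import xml.etree.ElementTree as ET

import numpy as np


def polygons_from_imagescope_xml(xml_text):
    """Extract annotated polygons from an Aperio ImageScope XML export.
    Assumes Region/Vertices/Vertex elements with X/Y attributes
    (check against your own export)."""
    root = ET.fromstring(xml_text)
    polys = []
    for region in root.iter("Region"):
        pts = [(float(v.get("X")), float(v.get("Y")))
               for v in region.iter("Vertex")]
        if len(pts) >= 3:
            polys.append(pts)
    return polys


def rasterize(polys, height, width):
    """Fill each polygon into a binary mask using ray casting on pixel
    centers (slow but dependency-free; OpenCV's fillPoly would do the
    same job much faster)."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for poly in polys:
        n = len(poly)
        for y in range(height):
            for x in range(width):
                px, py = x + 0.5, y + 0.5
                inside = False
                for i in range(n):
                    x1, y1 = poly[i]
                    x2, y2 = poly[(i + 1) % n]
                    # Edge crosses the horizontal ray through the center?
                    if (y1 > py) != (y2 > py):
                        if px < x1 + (py - y1) * (x2 - x1) / (y2 - y1):
                            inside = not inside
                if inside:
                    mask[y, x] = 1
    return mask
```

The ray-casting fill is chosen purely for self-containedness; on real 1024x1024 masks a vectorized or OpenCV-based fill is preferable.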
The loss function combined binary cross-entropy and Dice losses to quantify
the difference between the prediction and the ground truth. [9] We used early
stopping if there was no improvement in the validation Dice loss for 8 epochs,
and a learning rate scheduler that reduced the learning rate by a factor of
0.1 if there was no improvement in the validation Dice loss for 4 epochs. The
initial learning rate was 0.0003 with the Adam optimizer, and we used
TensorBoard to monitor the loss. The model produces a 1024x1024 single-channel
output, which is thresholded at 0.8 to decide whether a pixel belongs to an
eosinophil; this cut-off was determined using the validation set. The function
“connectedComponentsWithStats” from OpenCV was used to retrieve the count from
the output.
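The loss and counting steps above can be sketched in NumPy. This is a stand-in illustration: training actually used PyTorch tensors, and the counting used OpenCV's connectedComponentsWithStats, which a small flood-fill labeller replaces here.

```python
import numpy as np


def bce_dice_loss(pred, target, eps=1e-7):
    """Combined binary cross-entropy + Dice loss, as described in the
    text (NumPy sketch of the PyTorch objective)."""
    pred = np.clip(pred, eps, 1 - eps)
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    inter = np.sum(pred * target)
    dice = 1 - (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    return bce + dice


def count_eosinophils(prob_map, threshold=0.8):
    """Threshold the probability map at 0.8 and count 4-connected
    components (pure-NumPy stand-in for connectedComponentsWithStats)."""
    mask = prob_map >= threshold
    visited = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                count += 1                    # found a new component
                stack = [(i, j)]
                visited[i, j] = True
                while stack:                  # flood-fill its pixels
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return count
```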
### II-E Assessment of results
AI-determined eosinophil counts were compared with the manual counts of the
same cases, which had already been calculated by the co-investigator. For
statistical analysis, we used the Pearson correlation coefficient across cases.
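The per-site coefficients in Table I and the overall figure can be computed with a standard formula; a minimal sketch, equivalent to `np.corrcoef(ai, manual)[0, 1]`:

```python
import numpy as np


def pearson_r(ai_counts, manual_counts):
    """Pearson correlation coefficient between AI and manual eosinophil
    counts across cases."""
    a = np.asarray(ai_counts, dtype=float)
    m = np.asarray(manual_counts, dtype=float)
    a = a - a.mean()   # center both series
    m = m - m.mean()
    return float(np.sum(a * m) / np.sqrt(np.sum(a ** 2) * np.sum(m ** 2)))
```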
## III Results
### III-A Distribution of the cases
A distribution of the cases is described below:
1. Age distribution: Among the 300 tested samples, the majority (77, 25.6%)
belonged to the age group of 23 to 32 years; 25 samples (8.33%) were from
patients of paediatric age (<12 years). The median age was 34 years.
2. Sex distribution: 59% of patients were male and 41% were female, giving a
male-to-female ratio of 1.4:1.
3. Clinical symptoms: Abdominal pain was the most common clinical presentation
(35%), followed by chronic diarrhoea (32%). Only 1 patient (0.33%) presented
with constipation.
4. Endoscopic findings: Findings were normal in 165 (55%) patients. Among the
abnormal findings, mucosal ulceration was the most common (79, 26.33%), and
only 2 cases (0.6%) showed polyps.
5. Biopsy site: Most biopsies were taken from the duodenum (149, 49.66%),
followed by the ileum (35, 11.66%).
### III-B Eosinophil prediction
A qualitative sample of the results is shown in Figure 1. A scatter plot
showing the correlation between the AI prediction of eosinophils and the
manual eosinophil annotation is presented in Figure 2, and an analysis of the
correlation between eosinophil counts by AI and the manual method is given in
Table I.
Figure 2: Scatter plot of the correlation between the AI prediction of eosinophils and the manual eosinophil count.
TABLE I: Correlation between eosinophil counts by AI and the manual method
Biopsy site | Pearson correlation coefficient between eosinophil counts by AI and the manual method | Interpretation (correlation type)
---|---|---
Esophagus | 0.41 | Medium strong positive
Duodenum | 0.55 | Medium strong positive
Ileum | 0.84 | Very strong positive
Caecum | 0.83 | Very strong positive
Ascending colon | 0.81 | Very strong positive
Transverse colon | 0.94 | Very strong positive
Descending colon | 0.86 | Very strong positive
Sigmoid colon | 0.98 | Very strong positive
Rectum | 0.9 | Very strong positive
TABLE II: Comparison of the prevalence of duodenal eosinophilia in our study and three other studies
Study | Our study | Czyzewski T. et al. [10] | Archila L. R. et al. [11] | Catalano A. et al. [12]
---|---|---|---|---
Year of study | 2022 | 2021 | 2021 | 2022
Place | Mumbai, India | Cincinnati, USA | Minnesota, USA | Columbia, USA
Number of cases | 400 | 420 | 40 | 36
Inclusion criteria | All gastrointestinal tract biopsies received. | Esophageal biopsies | Esophageal biopsies | Esophageal biopsies
AI model architecture | U-Net architecture modification. | ResNet50 Deep convolutional neural network. | Nested individual Convoluted neural networks | U-Net architecture modification.
Statistical analysis | Pearson correlation coefficient | Sensitivity, Accuracy and positive predictive prevalence | Sensitivity and positive predictive prevalence | Accuracy with standard error
Final analysis | Strong positive Eosinophil count correlation of 85% (+/- 5%) accuracy | Approximately 85% accuracy | Very good accuracy-80-95% | 91% accuracy with difference of 8 eosinophils (+/- 1.5 SD)
## IV Discussion and Prior Work
Our study was performed for all gastrointestinal tract biopsies from the
esophagus to the rectum. For image analysis, images of five non-overlapping
high-power fields were taken for 400 cases. Annotations were performed on
eosinophils (single layer annotations). For AI model, a modified U-Net
architecture was used. Since the cases of esophagus and stomach were few, the
correlation was medium strong positive. In cases of biopsies from other sites,
the correlation was strong positive as the samples were more, hence the
correlation improved. Overall accuracy was found to be 85% (+/-5%).
Previous studies on eosinophil counting were done only on esophageal biopsies.
As AI is sensitive to the context, we believe that expanding that context to
the entire GI tract was important. We summarize the previous studies below.
Czyzewski T. et al. performed their study on esophageal biopsies, using a
total of 420 whole slide images (WSI) for eosinophil quantification. They used
a ResNet50 deep convolutional neural network for image analysis and
quantification. In this model, eosinophilic infiltration was analysed in
patches, making it easy to quantify eosinophils patch by patch: each WSI was
cropped into patches, which were downscaled to 224x224 inputs for AI training.
They used sensitivity, accuracy, and positive predictive prevalence for
statistical analysis, which showed 85% accuracy. The accuracy was likely
reduced because the cropped images were downscaled; in the lower-quality
downscaled images, many eosinophils would have lost their distinguishing
features. [10]
Archila L. R. et al. performed their study on esophageal biopsies with a
sample size of 40 whole slide images. They used nested individual CNNs for
eosinophil quantification. Apart from eosinophils, this model also
differentiated collagen, intra-cellular oedema, lymphocytes, and squamous
nuclei. This layer-wise (superficial squamous, basal layer, and lamina
propria) and cellular (eosinophils, lymphocytes, and squamous cells)
differentiation allows the output to be given as eosinophilic esophagitis
(EoE) or non-eosinophilic esophagitis (non-EoE). They used statistical tools
such as sensitivity and positive predictive prevalence. The model showed very
good accuracy of 80-95% with a performance score of 2. [11]
Catalano A. et al. also studied esophageal biopsies. Thirty-six esophageal
biopsies were used for AI training and testing. Like our study, they built
their AI model on a modified U-Net architecture. The output was evaluated
using the accuracy and standard error of the predictions; the accuracy was
91%, with a difference of 8 eosinophils (+/-1.5 S.D.). The difference between
AI predictions and manual counts could be reduced by increasing the sample
size for this AI model. [12]
Table II compares the prevalence of duodenal eosinophilia in our study and the
other three studies.
## V Conclusions
The overall Pearson correlation of the eosinophil count by the AI model in our
study was found to be 85% (+/-5%) for the entire GI tract, with the
correlation between AI predictions and manual counts ranging from moderate to
strong positive. The accuracy of this model can be improved by increasing the
number of biopsies at the sites that currently show only medium positive
correlation, turning it into a strong positive correlation. There is a need
for counting eosinophils in GI tract biopsies, as eosinophilia is a common,
often subclinical, under-diagnosed, and diagnostically challenging entity.
Medications such as steroids, leukotriene antagonists, anti-IgE, and
anti-interleukin-5 have shown significant improvement in patients with
eosinophilia.
## References
* [1] Pineton de C., Guillaume D., Pierre C., Antoine. Eosinophilic Enteritis. Digestive Diseases. 2015; 33:183–189.
* [2] Turner Kevin O., Sinkre Richa A., Neumann William L., MD and Genta Robert M.MD. Primary colonic eosinophilia and eosinophilic colitis in adults. Am J Surg Pathol. 2017; 41:225-233.
* [3] McCarthy, Aoife J., Sheahan, Kieran. Classification of eosinophilic disorders of the small and large intestine. Virchows Archiv. 2018; 472:15-28.
* [4] DeBrosse CW, Case JW, Putnam PE. Quantity and Distribution of eosinophils in the gastrointestinal tract of children. Paediatr. Dev Pathol. 2006; 9:210-218.
* [5] Yan BM, Shaffer EA (2009) Primary eosinophilic disorders of the gastrointestinal tract. Gut 58:721–732.
* [6] Davenport T., Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94-98.
* [7] Kumar N, Verma R, Anand D, et al. A Multi-Organ Nucleus Segmentation Challenge. IEEE Trans Med Imaging. 2020; 39:1380-1391.
* [8] Ronneberger O., Fischer P., Brox T., “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015; 3: 234–241
* [9] Dice L. R., “Measures of the amount of ecologic association between species,” Ecology, vol. 26, no. 3, pp. 297–302, 1945.
* [10] Czyzewski T, Daniel N, Rochman M, et al. Machine Learning Approach for Biopsy-Based Identification of Eosinophilic Esophagitis Reveals Importance of Global features. IEEE Open J Eng Med Biol. 2021; 2:218-223.
* [11] Archila LR, Smith L, Sihvo HK, et al. Development and technical validation of an artificial intelligence model for quantitative analysis of histopathologic features of eosinophilic esophagitis. J Pathol Inform. 2022; 13:100144.
* [12] Catalano A., Adorno W., Ehsan L., Shrivastava A., Barnes B. Use of machine learning and computer vision to link Eosinophilic esophagitis cellular pattern with clinical phenotypes and disease location. Gastroenterology. 2020; 158:S-184.
# Four hints and test candidates of the local cosmic expansion
Kazuhiro Agatsuma <EMAIL_ADDRESS> School of Physics and Astronomy
and Institute for Gravitational Wave Astronomy, University of Birmingham,
Edgbaston, Birmingham B15 2TT, United Kingdom
###### Abstract
The expansion of the universe on short distance scales is a new frontier for
investigating the dark energy. The excess orbital decay in binary pulsars may
be related to acceleration by the local cosmic expansion, called the cosmic
drag. Modern observations of two independent binaries (PSR B1534+12 and PSR
B1913+16) support this interpretation and result in a scale-independent
expansion with viscous uniformity, in which binary systems have a smaller
expansion rate than the Hubble constant. This paper presents two additional
evidential binaries (PSR J1012+5307 and PSR J1906+0746) supporting the cosmic
drag picture. The total anomaly with respect to the conventional model is
about $3.6\,\sigma$, including the two evidential binaries reported before. In
addition, the observable range of the cosmic drag has been calculated for
typical models of both NS-NS and NS-WD binaries. In this range, six test
candidates are listed with predictions of the excess orbital decay.
## I Introduction
The discovery of the expansion of the universe caused a paradigm shift. The
recession velocities of distant galaxies follow the Hubble–Lemaître law Hubble
; Lemaitre . Over the past two decades, progress in observations has revealed
that the cosmic expansion is accelerating Riess_1998 ; Perlmutter_1999 . The
acceleration mechanism, called the dark energy, is one of the most intriguing
mysteries of our universe. The cosmic expansion is visible in large-scale
observations but not on small scales ($\lesssim 70$ Mpc), because local
peculiar velocities exceed the recession velocity Freedman_2001 . In the
current standard interpretation, gravitationally bound systems maintain their
own size, so the cosmic expansion is separated from local dynamics.
Another interpretation, still within general relativity (GR), survives
Anderson1995 ; Cooperstock1998 ; Bonnor1999 ; Adkins2007 ; Carrera2010 ;
Price2012 ; Giulini2014 . These authors argue that the cosmic expansion can
act as an acceleration on local dynamics. The formulation was built using the
geodesic deviation equation Cooperstock1998 , in which Friedmann-Lemaître-
Robertson-Walker (FLRW) coordinates are transformed to Fermi normal
coordinates as an approximation of the locally inertial frame. The equations
of motion to lowest order are
$\frac{d^{2}x_{\mathrm{F}}^{k}}{dt^{2}}-\left(\frac{\ddot{a}}{a}\right)x_{\mathrm{F}}^{k}=0$
(1)
where $x_{\mathrm{F}}^{k}$ is the geodesic distance in the Fermi normal
coordinates, with $k$ running from 1 to 3, and $a$ is the scale factor. The
expansion term is called the cosmic drag Anderson1995 ; it represents a
stretch of space: an object’s position is not fixed in space but is dragged by
the flow of the expanding space. In this framework, the Hubble–Lemaître law is
related to the cosmic drag by
$\ddot{R}_{\mathrm{H}}=\dot{H}R+H\dot{R}=(\ddot{a}/a)R=-qH^{2}R.$ (2)
The last term employs the deceleration parameter $q$, which is defined as
$q:=-\ddot{a}a/\dot{a}^{2}=-(\ddot{a}/\dot{a})H^{-1}=-(\ddot{a}/a)H^{-2}$ (3)
using $H=\dot{a}/a$ and $\dot{H}=(\ddot{a}/a)-H^{2}$. This expression has been
applied to orbital systems Carrera2010 ; Giulini2014 . The two-body problem is
then expressed in the pseudo-Newtonian approach:
$\ddot{R}=L^{2}/R^{3}-GM_{0}/R^{2}-qH^{2}R$ (4)
with $L=R^{2}\dot{\phi}$. Here, $R$ is the radial geodesic distance, $L$ the
conserved angular momentum per unit mass in planar polar coordinates
($R,\phi$), $G$ the gravitational constant, $M_{0}$ the mass of the central
body, and the present value of $H$ is the Hubble constant $H_{0}\approx 70$ km
$\mathrm{s}^{-1}\mathrm{Mpc}^{-1}$ (this value is used in this paper). The
same expression can be applied for the scales of galaxy clusters Nandra_1 ;
Nandra_2 . A critical radius can exist at
$R_{\mathrm{c}}=\left(\frac{GM_{0}}{-q_{0}H_{0}^{2}}\right)^{1/3}$ (5)
for $q_{0}<0$, the present cosmological value of $q$; at this radius the
gravitational attraction equals the local expansion ($L=0$ and $\ddot{R}=0$).
This model is consistent with observations, because the recession velocity
asymptotically approaches the Hubble–Lemaître law ($\dot{R}=H_{0}R$) on large
scales ($R\gg R_{\mathrm{c}}$). However, we cannot test this theory by
searching for deviations from standard Keplerian orbits, because they are too
small to measure Carrera2010 ; Giulini2014 .
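As a numeric illustration of Eq. (5) (my own back-of-envelope check: $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ follows the text, but the value $q_0\approx-0.55$ is an assumption taken from standard $\Lambda$CDM, since the text does not fix $q_0$):

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
PC = 3.086e16            # parsec, m
H0 = 70e3 / 3.0857e22    # Hubble constant (70 km/s/Mpc, as in the text), s^-1
Q0 = -0.55               # present deceleration parameter (assumed, ~LambdaCDM)


def critical_radius(mass_kg):
    """Eq. (5): the radius where gravitational attraction balances the
    local expansion term -q0 H0^2 R."""
    return (G * mass_kg / (-Q0 * H0 ** 2)) ** (1.0 / 3.0)


rc_sun = critical_radius(M_SUN)
print(f"R_c for 1 solar mass: {rc_sun:.2e} m (~{rc_sun / PC:.0f} pc)")
```

For one solar mass this gives a critical radius on the order of a hundred parsecs, far outside any planetary orbit, consistent with the remark above that deviations from Keplerian orbits are too small to measure.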
A unique approach has been proposed to observe this local expansion effect
using binary pulsar systems Agatsuma_2020 . By comparing with accurate
predictions of the orbital decay by gravitational-wave (GW) emission, it was
shown that the braking effect of the local cosmic expansion can be observed by
precise measurements. The orbital decay of a binary system is given by
$\displaystyle\frac{dr}{dt}$ $\displaystyle=$ $\displaystyle
H_{\mathrm{s}}\cdot
r-\frac{64}{5}\frac{G^{3}}{c^{5}}\frac{(m_{\mathrm{1}}m_{\mathrm{2}})(m_{\mathrm{1}}+m_{\mathrm{2}})}{r^{3}}$
(6) $\displaystyle\equiv$ $\displaystyle H_{\mathrm{s}}\cdot
r-\frac{K_{\mathrm{gw}}}{r^{3}},$
including both the local expansion and GW emission Agatsuma_2020 ;
PetersMathews ; Peters . Here, $c$ is the speed of light, $r$ is the orbital
radius (separation) of the binary stars, $m_{1}$ and $m_{2}$ are the component
masses, and $H_{\mathrm{s}}$ is the Hubble parameter of this spiral-decay
system. Differentiating Eq. (6) and using Eq. (2) gives the effective force
expression:
$\ddot{r}=-q_{\mathrm{s}}H_{\mathrm{s}}^{2}r-3K_{\mathrm{gw}}^{2}r^{-7}.$ (7)
A purely Keplerian orbit is achieved at the critical radius $r_{\mathrm{c}}$,
where the local expansion equals the GW reaction. At this point, $dr/dt=0$ and
$\ddot{r}=0$ are imposed in Eq. (6) and Eq. (7), respectively. These provide
$r_{\mathrm{c}}=\left(\frac{K_{\mathrm{gw}}}{H_{\mathrm{s}}}\right)^{1/4}$ (8)
and
$r_{\mathrm{c}}=\left(\sqrt{-\frac{3}{q_{\mathrm{s}}}}\frac{K_{\mathrm{gw}}}{H_{\mathrm{s}}}\right)^{1/4}.$
(9)
Since both radii must be equal, the deceleration parameter of this system is
determined to be $q_{\mathrm{s}}=-3$. This value brings the characteristic of
viscous uniformity, corresponding to $H_{\mathrm{s}}\approx H_{0}/6$
Agatsuma_2020 .
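Explicitly, equating Eq. (8) with Eq. (9) fixes the deceleration parameter:

```latex
\left(\frac{K_{\mathrm{gw}}}{H_{\mathrm{s}}}\right)^{1/4}
=\left(\sqrt{-\frac{3}{q_{\mathrm{s}}}}\,\frac{K_{\mathrm{gw}}}{H_{\mathrm{s}}}\right)^{1/4}
\quad\Longrightarrow\quad
\sqrt{-\frac{3}{q_{\mathrm{s}}}}=1
\quad\Longrightarrow\quad
q_{\mathrm{s}}=-3 .
```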
According to GR, binary star systems whose evolution is dominated by the GW
reaction must show the slowest possible orbital decay. However, both PSR
B1534+12 Stairs2002 ; B1534+12 and PSR B1913+16 Weisberg2010 ; Weisberg2016
show slower decay than the theory predicts, called the excess $\dot{P_{b}}$,
with a total anomaly of about $2.5\,\sigma$ Agatsuma_2020 . A faster decay is
possible through other energy losses, but a slower decay is forbidden by GR.
The above model can explain this deviation from GR. The cosmic drag picture is
thus a promising model, featuring a scale-independent and uniform expansion.
This remains a hypothesis, since it is so far consistent with only a few
observations at the $2.5\,\sigma$ level. To validate it further, it is
important to scrutinize other observations. This paper presents two additional
evidential binary pulsars together with future test candidates.
## II Excess orbital decay
The orbital decay of a binary star system can be decomposed as
$\dot{P_{b}}^{\mathrm{obs}}=\dot{P_{b}}^{\mathrm{Kin}}+\dot{P_{b}}^{\mathrm{GW}}+\dot{P_{b}}^{\mathrm{X}}.$
(10)
Here, $\dot{P_{b}}^{\mathrm{obs}}$ is the observed orbital decay (time
derivative of the orbital period), $\dot{P_{b}}^{\mathrm{Kin}}$ comprises all
kinematic contributions in the galactic potential, $\dot{P_{b}}^{\mathrm{GW}}$
is the theoretical estimate from the GW reaction (intrinsic decay), and
$\dot{P_{b}}^{\mathrm{X}}$ is the excess $\dot{P_{b}}$. Other contributions,
such as mass loss, are omitted. The conventional model (no local expansion)
expects $\dot{P_{b}}^{\mathrm{X}}=0$ when GW emission is the dominant energy
loss.
In the cosmic drag picture, the local cosmic expansion brakes the orbital
decay, producing the $\dot{P_{b}}^{\mathrm{X}}$ predicted below. According to
Agatsuma_2020 , the orbital decay by GW emission is described by
$\frac{\dot{P_{b}}}{P_{b}}=\frac{3}{2}\left[H_{\mathrm{s}}-\frac{K_{\mathrm{gw}}}{r^{4}}\frac{1+(73/24)e^{2}+(37/96)e^{4}}{(1-e^{2})^{7/2}}\right],$
(11)
including the contribution from the cosmic drag. Here, $r$ is the orbital
radius (for eccentricity $e=0$) or the semimajor axis (for $e\neq 0$). Thus,
the excess $\dot{P_{b}}$ due to the cosmic drag
($\dot{P_{b}}^{\mathrm{CD}}$) is the first term:
$\dot{P_{b}}^{\mathrm{CD}}=\frac{3}{2}H_{\mathrm{s}}P_{b}.$ (12)
This is applicable for $r\leq r_{\mathrm{c}}$. Since the system can be
regarded as a Keplerian orbit near $r_{\mathrm{c}}$, the orbital period is
written as
$\displaystyle P_{b}$
$\displaystyle=2\pi\left[\frac{r^{3}}{G(m_{1}+m_{2})}\right]^{1/2}$ (13)
$\displaystyle=2\pi\left[\frac{k^{3}}{G(m_{1}+m_{2})}\right]^{1/2}\left(\frac{K_{\mathrm{gw}}}{H_{\mathrm{s}}}\right)^{3/8}$
(14)
where the radius is normalized by $r_{\mathrm{c}}$, with
$k=r/r_{\mathrm{c}}$. From the above equations, the expected excess is
$\dot{P_{b}}^{\mathrm{CD}}=3\pi
H_{\mathrm{s}}\left[\frac{k^{3}}{G(m_{1}+m_{2})}\right]^{1/2}\left(\frac{K_{\mathrm{gw}}}{H_{\mathrm{s}}}\right)^{3/8}$
(15)
with viscous uniformity ($H_{\mathrm{s}}=H_{0}/6$) Agatsuma_2020 .
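As a numeric sanity check (my own sketch; the constants and fiducial masses follow the text and the caption of Fig. 1, with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$, viscous uniformity $H_{\mathrm{s}}=H_0/6$, $e=0$, and $k=1$), Eqs. (8), (12), and (13) give:

```python
import math

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8                        # speed of light, m/s
M_SUN = 1.989e30                   # solar mass, kg
H0 = 70e3 / 3.0857e22              # Hubble constant (70 km/s/Mpc), s^-1
HS = H0 / 6                        # viscous uniformity, H_s = H_0 / 6


def excess_pbdot_at_rc(m1, m2):
    """P_b and the cosmic-drag excess Pbdot^CD at the critical radius
    (k = r / r_c = 1, e = 0), from Eqs. (8), (12), and (13)."""
    kgw = (64.0 / 5.0) * G ** 3 / C ** 5 * m1 * m2 * (m1 + m2)
    rc = (kgw / HS) ** 0.25                                   # Eq. (8)
    pb = 2 * math.pi * math.sqrt(rc ** 3 / (G * (m1 + m2)))   # Eq. (13)
    return pb, 1.5 * HS * pb                                  # Eq. (12)


pb_nsns, x_nsns = excess_pbdot_at_rc(1.4 * M_SUN, 1.4 * M_SUN)
pb_nswd, x_nswd = excess_pbdot_at_rc(1.4 * M_SUN, 0.14 * M_SUN)
print(f"NS-NS: P_b = {pb_nsns / 86400:.2f} d, excess = {x_nsns:.1e}")
print(f"NS-WD: P_b = {pb_nswd / 86400:.2f} d, excess = {x_nswd:.1e}")
```

This reproduces the periods $P_{b}\approx 0.8$ days (NS-NS) and $\approx 0.4$ days (NS-WD) at $r=r_{\mathrm{c}}$ quoted in Sec. IV, with a peak excess of a few $\times 10^{-14}$, above the quoted observational uncertainties.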
For the region $r_{\mathrm{c}}\leq r\ll R_{\mathrm{c}}$, as discussed in the
prior paper Agatsuma_2020 , the cosmic drag completely cancels the GW reaction
(orbital decay), which implies
$\dot{P_{b}}^{\mathrm{CD}}=-\dot{P_{b}}^{\mathrm{GW}}.$ (16)
Here, $\dot{P_{b}}^{\mathrm{GW}}$ is obtained from Eq. (11) with
$H_{\mathrm{s}}=0$. A fixed orbit is expected in spite of GW emission.
If the observed $\dot{P_{b}}^{\mathrm{X}}$ is consistent with the prediction
$\dot{P_{b}}^{\mathrm{CD}}$, it supports the cosmic drag picture.
## III The third and fourth evidential objects
PSR J1012+5307 has become a system in which the cosmic drag is observable,
thanks to recent refinements of its parameters Ding_2020 . J1012+5307 is a
binary system consisting of a pulsar of $1.82\,M_{\odot}$ and a white dwarf of
$0.174\,M_{\odot}$ (an NS-WD system) with a small eccentricity ($e=1.2\times
10^{-6}$). The orbital radius slightly exceeds the critical radius
($r=1.24\,r_{\mathrm{c}}$), given the orbital period $P_{b}$ of 0.6 days.
Under the cosmic drag picture, this means that the local expansion cancels the
orbital decay from GW emission, and hence the orbit should asymptote to Eq.
(4) (almost Keplerian) without orbital decay Agatsuma_2020 . When the system
follows a Keplerian orbit without orbital decay,
$\dot{P_{b}}^{\mathrm{obs}}=\dot{P_{b}}^{\mathrm{Kin}}$ should be observed in
Eq. (10). In other words,
$\dot{P_{b}}^{\mathrm{X}}=-\dot{P_{b}}^{\mathrm{GW}}$ (Eq. (16)) is expected
when $\dot{P_{b}}^{\mathrm{GW}}$ is included in the estimate. According to the
latest observation of PSR J1012+5307 (see Table 5 in Ding_2020 ), the relevant
parameters are
$\displaystyle\dot{P_{b}}^{\mathrm{GW}}$ $\displaystyle=$
$\displaystyle-13\times 10^{-15},$ (17)
$\displaystyle\dot{P_{b}}^{\mathrm{X}}$ $\displaystyle=$
$\displaystyle(10.6\pm 6.1)\times 10^{-15},$ (18)
which is consistent with the above interpretation. This system appears to have
a fixed orbit, as $\dot{P_{b}}^{\mathrm{X}}$ cancels
$\dot{P_{b}}^{\mathrm{GW}}$, which may be explained by the local cosmic
expansion. This is the first hint supporting the cosmic drag picture both in
the region $r>r_{\mathrm{c}}$ and in an NS-WD binary system.
In addition, the binary neutron star PSR J1906+0746 is marginally evidential.
The predicted $\dot{P_{b}}^{\mathrm{CD}}$ is $0.8\times 10^{-14}$, which is
consistent with the observed $\dot{P_{b}}^{\mathrm{X}}$ of $(3\pm 3)\times
10^{-14}$ (see Table 3 in VanLeeuwen_2015 ). The deviation from the
conventional model is only $1\,\sigma$.
Adding the anomalies of the above two evidential objects to the previous value
($2.5\,\sigma$), the total significance beyond the conventional model is about
$3.6\,\sigma$ (99.97%). The relevant parameters are listed in Table 1. The
observed $\dot{P_{b}}^{\mathrm{X}}$ in J1012+5307, B1534+12, and J1906+0746
are in good agreement with the cosmic drag prediction
($\dot{P_{b}}^{\mathrm{X}}\approx\dot{P_{b}}^{\mathrm{CD}}$), and B1913+16 has
a close value. All four independent binaries in Table 1 have a positive
$\dot{P_{b}}^{\mathrm{X}}$ (slower decay) with $1.0-1.8\,\sigma$ significance,
which lies in a region forbidden by GR without the cosmic drag.
## IV Test region and candidates
There is an observable range of the cosmic drag in binary pulsars. For
$r<r_{\mathrm{c}}$, the shorter the radius, the smaller the expansion effect;
for $r>r_{\mathrm{c}}$, the expected difference
($\dot{P_{b}}^{\mathrm{CD}}=-\dot{P_{b}}^{\mathrm{GW}}$) shrinks with
increasing radius. For a meaningful comparison,
$\dot{P_{b}}^{\mathrm{CD}}$ has to be larger than the observational
uncertainty, so binary pulsars with too small or too large a separation are
out of scope.
Figure 1 (top) shows $\dot{P_{b}}^{\mathrm{GW}}$ with/without
$\dot{P_{b}}^{\mathrm{CD}}$ for NS-NS binary calculated by the above equations
(11), (15), and (16). Fig. 1 (bottom) shows the expected
$\dot{P_{b}}^{\mathrm{CD}}$ for both NS-NS and NS-WD binaries. Modern observations have an uncertainty in $\dot{P_{b}}^{\mathrm{X}}$ of about $(0.4$–$1.1)\times 10^{-14}$. Therefore, the observable range requires $\dot{P_{b}}^{\mathrm{CD}}\gtrsim 1.0\times 10^{-14}$, which is roughly $0.4\,r_{\mathrm{c}}\lesssim r\lesssim 1.8\,r_{\mathrm{c}}$ for an NS-NS binary, corresponding to
$0.2\lesssim P_{b}\lesssim 2.0\,\,\mathrm{[days]}$ (19)
as $P_{b}=0.8$ days at $r=r_{\mathrm{c}}$. In the case of NS-WD binary, the
observable range is shallower ($0.6\,r_{\mathrm{c}}\lesssim r\lesssim
1.4\,r_{\mathrm{c}}$), corresponding to
$0.2\lesssim P_{b}\lesssim 0.6\,\,\mathrm{[days]}$ (20)
as $P_{b}=0.4$ days at $r=r_{\mathrm{c}}$. Note that $H_{\mathrm{s}}$ is observable only for $r<r_{\mathrm{c}}$ by Eq. (12). Increases of mass and/or eccentricity expand the observable range because the increased $r_{\mathrm{c}}$ makes the peak excess higher. He-star companions, redbacks, and black-widow pulsars would be difficult for this test because their binary evolution is not GW dominant, meaning increased uncertainties.
Figure 1: Rough estimates of $\dot{P_{b}}$ and the excess $\dot{P_{b}}$ due to
the cosmic drag ($\dot{P_{b}}^{\mathrm{CD}}$). For simplicity, the same mass
condition ($1.4\,M_{\odot}$) for neutron star (NS) and the 1/10 ratio
($0.14\,M_{\odot}$) for white dwarf (WD) are assumed with $e=0$. Viscous
uniformity ($H_{\mathrm{s}}=H_{0}/6$) is adopted for $r<r_{\mathrm{c}}$.
In addition to the four evidential binary pulsars, there are six test candidates for which $\dot{P_{b}}^{\mathrm{CD}}$ can be estimated in this observable range. Their parameters are listed in Table 2. In this list, the first two (J0509+3801 and J0751–1807) have no information on $\dot{P_{b}}^{\mathrm{X}}$, and the last two (J1829+2456 and B2127–11C) have uncertain values of $\dot{P_{b}}^{\mathrm{X}}$. Apart from B2127–11C, only J1738–0333 meets the conventional model ($\dot{P_{b}}^{\mathrm{X}}=0$) in this region. J1756–2251 has a negative $\dot{P_{b}}^{\mathrm{X}}$ (faster decay), which fits neither the conventional model nor the cosmic drag picture.
| Binary pulsar | Pulsar mass $m_{1}(M_{\odot})$ | Companion mass $m_{2}(M_{\odot})$ | Eccentricity $(e)$a | Orbital period $P_{b}$ (days)a | Separation $(r_{\mathrm{c}})$b | $\dot{P_{b}}^{\mathrm{X}}$ ($10^{-14}$) | $\dot{P_{b}}^{\mathrm{CD}}$ ($10^{-14}$)c | Reference |
|---|---|---|---|---|---|---|---|---|
| J1012+5307 | 1.817d | 0.174(11)d | $1.2\times 10^{-6}$ | 0.605 | 1.24(3) | 1.1(6) | 1.3(1) | Ding_2020 |
| B1534+12 | 1.3330(2) | 1.3455(2) | 0.274 | 0.421 | 0.57 | 2.0(1.1) | 2.1 | B1534+12 |
| J1906+0746 | 1.291(11) | 1.322(11) | 0.085 | 0.166 | 0.35 | 3(3) | 0.8 | VanLeeuwen_2015 |
| B1913+16 | 1.438(1) | 1.390(1) | 0.617 | 0.323 | 0.28 | 0.5(4), 0.8(5) | 1.6 | Weisberg2016, Weisberg2010 |
Table 1: Evidential binary pulsars.
| Binary pulsar | Pulsar mass $m_{1}(M_{\odot})$ | Companion mass $m_{2}(M_{\odot})$ | Eccentricity $(e)$a | Orbital period $P_{b}$ (days)a | Separation $(r_{\mathrm{c}})$b | $\dot{P_{b}}^{\mathrm{X}}$ ($10^{-14}$) | $\dot{P_{b}}^{\mathrm{CD}}$ ($10^{-14}$)c | Reference |
|---|---|---|---|---|---|---|---|---|
| J0509+3801 | 1.34(8) | 1.46(8) | 0.586 | 0.380 | 0.34(1) | — | 1.9 | Lynch_2018 |
| J0751–1807 | 1.64(15)e | 0.16(1)e | $3.3\times 10^{-6}$ | 0.263 | 0.74(2) | — | 1.3 | Desvignes_2016 |
| J1738–0333 | 1.46(6) | 0.181(8) | $3.4\times 10^{-7}$ | 0.355 | 0.89(1) | 0.2(4) | 1.7 | Freire_2012 |
| J1756–2251 | 1.341(7) | 1.230(7) | 0.181 | 0.320 | 0.52 | $-1.7^{+0.9}_{-0.6}$ | 1.6 | Ferdman_2014 |
| J1829+2456 | 1.306(4) | 1.299(4) | 0.139 | 1.176 | 1.25 | $-2.1(1.1)$f | 2.3 | Haniewicz_2021 |
| B2127–11C | 1.358(10) | 1.354(10) | 0.681 | 0.335 | 0.25 | $-1(13)$ | 1.6 | Jacoby_2006 |
Table 2: Test candidates.
* a: Numbers of digits are rounded for simplicity.
* b: Semimajor axis when $e\neq 0$. The uncertainty is due to mass estimation; the $\sim 2\%$ contribution from the Hubble tension is omitted.
* c: The main uncertainty is due to mass estimation for $r>r_{\mathrm{c}}$, or the Hubble tension of $\sim 7\%$ for $r<r_{\mathrm{c}}$. The latter is omitted.
* d: The pulsar mass is derived from the companion mass Antoniadis_2016 and the mass ratio of 10.44 Sanchez_2020 .
* e: For the mass estimation, $\dot{P_{b}}^{\mathrm{GW}}$ is used as the true value ($\dot{P_{b}}^{\mathrm{X}}=0$).
* f: Not a claimed value; derived from the DDH solution, but there is a discrepancy with the DDGR solution.
## V Discussion
In the observable range of the cosmic drag, there are at least ten binary pulsars. With present observations, four of them have not reached the precision of interest. Four of the remaining six are almost consistent with the cosmic drag prediction. From these results, anomalies of the conventional (no expansion) model appear in the forbidden region with a total significance of about $3.6\,\sigma$. This is worth considering, although the statistical significance has not reached the "discovery" criterion of $5\,\sigma$.
As a possible origin of $\dot{P_{b}}^{\mathrm{X}}$, the distance estimate from the dispersion measure is doubted in PSR B1534+12 B1534+12 ; Stairs2002 . In contrast, the distance uncertainty cannot be the origin of $\dot{P_{b}}^{\mathrm{X}}$ in PSR J1012+5307 Ding_2020 , because that observation employs a parallax-based distance with less uncertainty.
PSR J1738–0333 and PSR J1756–2251 show a discrepancy with the cosmic drag picture. Their results of $\dot{P_{b}}^{\mathrm{X}}<\dot{P_{b}}^{\mathrm{CD}}$ (faster decay) are possible when uncounted energy losses contribute to $\dot{P_{b}}^{\mathrm{obs}}$ (i.e., the evolution is not GW dominant). If the cosmic drag picture is correct, J1738–0333 and J1756–2251 may be in such a situation, or affected by some systematic issue. Refining their parameters, as in the case of J1012+5307 Ding_2020 , would likely make $\dot{P_{b}}^{\mathrm{CD}}$ observable.
There is another interpretation of $\dot{P_{b}}^{\mathrm{X}}$ in terms of alternative theories of gravity, with a time-varying $G$ and/or gravitational dipole radiation (e.g., tested in Lazaridis_2009 ; Ding_2020 ). The main difference between the cosmic drag and these is the limited observable range around $r_{\mathrm{c}}$, as shown in Fig. 1.
## VI Acknowledgements
I am grateful to Dr. Alberto Vecchio and Dr. Andreas Freise for their great
support to continue this study. I thank Dr. Kazuhiro Yamamoto for a useful
discussion. I appreciate Dr. Paulo Freire and his team for confirming
measurement values in J1829+2456.
## References
* (1) E. Hubble, A relation between distance and radial velocity among extra-galactic nebulae, Proc. Natl. Acad. Sci. 15 (1929) 168–173.
* (2) G. Lemaître, Un Univers homogène de masse constante et de rayon croissant rendant compte de la vitesse radiale des nébuleuses extra-galactiques, Ann. La Société Sci. Bruxelles. 47 (1927) 49–59.
* (3) A.G. Riess, et. al., Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant, Astron. J. 116 (1998) 1009–1038.
* (4) S. Perlmutter, et. al. [T.S.C. Project], Measurements of $\Omega$ and $\Lambda$ from 42 High‐Redshift Supernovae, Astrophys. J. 517 (1999) 565–586.
* (5) W.L. Freedman, et. al., Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophys. J. 553 (2001) 47–72.
* (6) J.L. Anderson, Multiparticle dynamics in an expanding universe, Phys. Rev. Lett. 75 (1995) 3602–3604.
* (7) F.I. Cooperstock, V. Faraoni, D.N. Vollick, The Influence of the Cosmological Expansion on Local Systems, Astrophys. J. 503 (1998) 61–66.
* (8) W.B. Bonnor, Size of a hydrogen atom in the expanding universe, Class. Quantum Gravity. 16 (1999) 1313–1321.
* (9) G.S. Adkins, J. McDonnell, R.N. Fell, Cosmological perturbations on local systems, Phys. Rev. D - Part. Fields, Gravit. Cosmol. 75 (2007) 64011.
* (10) M. Carrera, D. Giulini, Influence of global cosmological expansion on local dynamics and kinematics, Rev. Mod. Phys. 82 (2010) 169–208.
* (11) R.H. Price, J.D. Romano, In an expanding universe, what doesn’t expand?, Am. J. Phys. 80 (2012) 376–381.
* (12) D. Giulini, Does cosmological expansion affect local physics?, Stud. Hist. Philos. Sci. Part B - Stud. Hist. Philos. Mod. Phys. 46 (2014) 24–37.
* (13) R. Nandra, A.N. Lasenby, M.P. Hobson, The effect of a massive object on an expanding universe, Mon. Not. R. Astron. Soc. 422 (2012) 2931–2944.
* (14) R. Nandra, A.N. Lasenby, M.P. Hobson, The effect of an expanding universe on massive objects, Mon. Not. R. Astron. Soc. 422 (2012) 2945–2959.
* (15) K. Agatsuma, The expansion of the universe in binary star systems, Physics of the Dark Universe. 30 (2020) 100732.
* (16) P.C. Peters, J. Mathews, Gravitational radiation from point masses in a keplerian orbit, Phys. Rev. 131 (1963) 435–440.
* (17) P.C. Peters, Gravitational radiation and the motion of two point masses, Phys. Rev. 136 (1964) B1224–B1232.
* (18) I.H. Stairs, S.E. Thorsett, J.H. Taylor, A. Wolszczan, Studies of the Relativistic Binary Pulsar PSR B1534+12. I. Timing Analysis, Astrophys. J. 581 (2002) 501–508.
* (19) E. Fonseca, I.H. Stairs, S.E. Thorsett, A comprehensive study of relativistic gravity using PSR B1534+12, Astrophys. J. 787 (2014) 82.
* (20) J.M. Weisberg, D.J. Nice, J.H. Taylor, Timing measurements of the relativistic binary pulsar PSR B1913+16, Astrophys. J. 722 (2010) 1030-1034.
* (21) J.M. Weisberg, Y. Huang, Relativistic Measurements From Timing the Binary Pulsar Psr B1913+16, Astrophys. J. 829 (2016) 55.
* (22) H. Ding, A.T. Deller, P. Freire, D.L. Kaplan, T.J. W. Lazio, R. Shannon, B. Stappers, Very Long Baseline Astrometry of PSR J1012+5307 and its Implications on Alternative Theories of Gravity, Astrophys. J. 896 (2020) 85.
* (23) J. Van Leeuwen, L. Kasian, I.H. Stairs, D.R. Lorimer, F. Camilo, S. Chatterjee, I. Cognard, The binary companion of young, relativistic pulsar J1906+0746, Astrophys. J. 798 (2015) 118.
* (24) J. Antoniadis, T.M. Tauris, F. Ozel, E. Barr, D.J. Champion, P.C.C. Freire, The millisecond pulsar mass distribution: Evidence for bimodality and constraints on the maximum neutron star mass, (2016). http://arxiv.org/abs/1605.01665.
* (25) D.M. Sánchez, A.G. Istrate, M.H. Van Kerkwijk, R.P. Breton, D.L. Kaplan, PSR J1012+5307: A millisecond pulsar with an extremely low-mass white dwarf companion, Mon. Not. R. Astron. Soc. 494 (2020) 4031–4042.
* (26) R.S. Lynch, J.K. Swiggum, V.I. Kondratiev, D.L. Kaplan, K. Stovall, E. Fonseca, M.S.E. Roberts, L. Levin, M.E. DeCesar, B. Cui, S.B. Cenko, P. Gatkine, A.M. Archibald, S. Banaszak, C.M. Biwer, J. Boyles, P. Chawla, L.P. Dartez, D. Day, A.J. Ford, J. Flanigan, J.W.T. Hessels, J. Hinojosa, F.A. Jenet, C. Karako-Argaman, V.M. Kaspi, S. Leake, G. Lunsford, J.G. Martinez, A. Mata, M.A. McLaughlin, H. Al Noori, S.M. Ransom, M.D. Rohr, X. Siemens, R. Spiewak, I.H. Stairs, J. van Leeuwen, A.N. Walker, B.L. Wells, The Green Bank North Celestial Cap Pulsar Survey. III. 45 New Pulsar Timing Solutions, Astrophys. J. 859 (2018) 93.
* (27) G. Desvignes, R.N. Caballero, L. Lentati, J.P.W. Verbiest, D.J. Champion, B.W. Stappers, G.H. Janssen, P. Lazarus, S. Osłowski, S. Babak, C.G. Bassa, P. Brem, M. Burgay, I. Cognard, J.R. Gair, E. Graikou, L. Guillemot, J.W.T. Hessels, A. Jessner, C. Jordan, R. Karuppusamy, M. Kramer, A. Lassus, K. Lazaridis, K.J. Lee, K. Liu, A.G. Lyne, J. McKee, C.M.F. Mingarelli, D. Perrodin, A. Petiteau, A. Possenti, M.B. Purver, P.A. Rosado, S. Sanidas, A. Sesana, G. Shaifullah, R. Smits, S.R. Taylor, G. Theureau, C. Tiburzi, R. Van Haasteren, A. Vecchio, High-precision timing of 42 millisecond pulsars with the European Pulsar Timing Array, Mon. Not. R. Astron. Soc. 458 (2016) 3341–3380.
* (28) P.C.C. Freire, N. Wex, G. Esposito-Farèse, J.P.W. Verbiest, M. Bailes, B.A. Jacoby, M. Kramer, I.H. Stairs, J. Antoniadis, G.H. Janssen, The relativistic pulsar-white dwarf binary PSR J1738+0333 - II. The most stringent test of scalar-tensor gravity, Mon. Not. R. Astron. Soc. 423 (2012) 3328–3343.
* (29) R.D. Ferdman, I.H. Stairs, M. Kramer, G.H. Janssen, C.G. Bassa, B.W. Stappers, P.B. Demorest, I. Cognard, G. Desvignes, G. Theureau, M. Burgay, A.G. Lyne, R.N. Manchester, A. Possenti, PSR J1756-2251: A pulsar with a low-mass neutron star companion, Mon. Not. R. Astron. Soc. 443 (2014) 2183–2196.
* (30) H.T. Haniewicz, R.D. Ferdman, P.C.C. Freire, D.J. Champion, K.A. Bunting, D.R. Lorimer, M.A. McLaughlin, Precise mass measurements for the double neutron star system J1829+2456, Mon. Not. R. Astron. Soc. 500 (2021) 4620–4627.
* (31) B.A. Jacoby, P.B. Cameron, F.A. Jenet, S.B. Anderson, R.N. Murty, S.R. Kulkarni, Measurement of Orbital Decay in the Double Neutron Star Binary PSR B2127+11C, Astrophys. J. 644 (2006) L113–L116.
* (32) K. Lazaridis, N. Wex, A. Jessner, M. Kramer, B.W. Stappers, G.H. Janssen, G. Desvignes, M.B. Purver, I. Cognard, G. Theureau, A.G. Lyne, C.A. Jordan, J.A. Zensus, Generic tests of the existence of the gravitational dipole radiation and the variation of the gravitational constant, Mon. Not. R. Astron. Soc. 400 (2009) 805–814.
# ExpNet: A unified network for Expert-Level Classification
Junde Wu (Healthcare Group, Baidu), Huihui Fang (Healthcare Group, Baidu), Yu Zhang (Harbin Institute of Technology), Yehui Yang (Healthcare Group, Baidu), Haoyi Xiong (Big Data Lab, Baidu), Huazhu Fu (IHPC, A*STAR), Yanwu Xu (Healthcare Group, Baidu)
###### Abstract
Unlike general visual classification, some classification tasks are more challenging because they require professional categorization of the images; in this paper, we call these expert-level classification. Previous fine-grained visual classification (FGVC) work has addressed some of its specific sub-tasks, but these methods are difficult to extend to the general case, which relies on a comprehensive analysis of part-global correlations and hierarchical feature interactions. In this paper, we propose the Expert Network (ExpNet) to address the unique challenges of expert-level classification through a unified network. In ExpNet, we hierarchically decouple the part and context features and process them individually using a novel attentive mechanism called Gaze-Shift. In each stage, Gaze-Shift produces a focal-part feature for subsequent abstraction and memorizes a context-related embedding. We then fuse the final focal embedding with all memorized context-related embeddings to make the prediction. Such an architecture realizes dual-track processing of partial and global information and hierarchical feature interactions. We conduct experiments on three representative expert-level classification tasks: FGVC, disease classification, and artwork attribute classification. In these experiments, our ExpNet outperforms the state of the art across a wide range of fields, indicating its effectiveness and generalization. The code will be made publicly available.
## 1 Introduction
Figure 1: Expert-level classification is more challenging than general and
fine-grained vision classification as it faces both Label Complexity
(professional labels) and Task Complexity (various tasks). We show the unique
challenges of expert-level classification by three representative examples. In
bird species classification, the detailed parts are important, e.g., the tiny
red eye is discriminative evidence of bronzed cowbird. In glaucoma
classification, the global structure relationship is important, e.g., the
large vertical disc-cup ratio is a major biomarker for glaucoma. In the
artwork attributes classification, the interaction of different features is important, e.g., recognizing the culture of the pot requires a comprehensive analysis of the style, texture, and material of the object.
Deep neural networks (DNNs) have made great progress in computer vision tasks. As a fundamental task in this field, image classification has been studied by a large number of methods. Thanks to these research efforts, image classification has reached a very high level, even comparable to humans on some tasks [28].
In human society, some visual classification tasks are challenging even for ordinary people. These tasks can only be completed by the small portion of humankind who have undergone long-term professional training, whom we call experts. For example, only ornithologists can recognize the bird species in a wild bird photo, only ophthalmologists can accurately diagnose glaucoma from fundus images, and only archaeologists can tell the culture of an artwork from a picture of it. Since expert-level categories are semantically more complex than general ones, these tasks are also more challenging for deep learning models. Previous fine-grained visual classification (FGVC) [73] delves into some of its specific tasks, e.g., recognizing bird species, car brands, or aircraft types. These tasks share the characteristic that some parts or details are discriminative for the classification, which FGVC methods can often exploit. But not all tasks with expert-level categories share this characteristic, nor can they all be well addressed by FGVC methods. As the taxonomy in Fig. 1 shows, compared with general classification, FGVC predicts more complex labels but can only handle limited types of tasks.
This paper aims to address a class of classification problems with a larger scope than FGVC, which we call expert-level classification. We define expert-level classification as a classification task in which the target category needs to be further inferred or analyzed from the salient objects in the images. A wide range of tasks fall under this definition, such as recognizing the species of a bird, detecting diseases from lesions/tissues, discovering the culture of artworks, and recognizing the artists of paintings. Facing both task complexity and label complexity, expert-level classification is more challenging than general classification tasks and FGVC.
Specifically, the challenges of expert-level classification can be summarized in three points. First, the devil is in the details: as the bird case in Fig. 1 shows, a tiny detail of the image may decide the category of the whole image. Second, not only the local details but also the global structures matter. For example, in the fundus case in Fig. 1, a large vertical optic cup-to-disc ratio (vCDR) of the fundus image is a major biomarker indicating glaucoma. Finally, feature-level interactions are also important. In the artwork case in Fig. 1, a comprehensive analysis of various features, such as painting style, object textures, and patterns, is the key to recognizing the Corinthian culture of the pot.
To address the unique challenges of expert-level classification, we propose a novel network to decouple and individually process the focal and global information in a hierarchical manner. The basic idea is to progressively zoom in on the salient parts while memorizing the context features at each stage. In the end, we fuse the final focal feature and all the memorized context-related features to make the final decision.
implementation, we design Gaze-Shift to produce a focal-part feature map and a
Context Impression embedding in each stage. In Gaze-Shift, we first use the
proposed Neural Firing Fields (NeFiRF) to split focus parts and context parts
of the given feature map. Then we use convolution layers to extract the focal
parts, and global Cross Attention (CA)[7] to model the context correlation.
The focal-part feature will be sent to the next stage, and Context Impression
will be stripped out for the final fusion. Such an architecture models part-
global trade-offs and hierarchical feature interactions to overcome the unique
challenges of expert-level classification. We verify the effectiveness of
ExpNet on three representative tasks of expert-level classification, which are
FGVC, disease classification, and artwork attributes classification. The
experiments show that ExpNet outperforms the state of the art (SOTA) on several mainstream benchmarks with solid generalization ability. Moreover, the intermediate results produced by ExpNet are closely related to the salient object's shape and location, yielding competitive performance in weakly-supervised segmentation/localization.
The contributions of this paper can be summarized as follows. 1) We verify that a significant and challenging vision classification task, i.e., expert-level classification, can be solved by a unified deep learning framework. 2) We propose ExpNet with Gaze-Shift to individually process the focal and context features with different parameters and architectures in a hierarchical manner, which helps overcome the unique challenges of expert-level classification. 3) We propose NeFiRF in Gaze-Shift to group the features in frequency space with spatial constraints. 4) We achieve SOTA on three representative expert-level classification tasks compared with both task-specific and general classification methods. 5) We achieve competitive performance on weakly-supervised segmentation and localization.
Figure 2: An illustration of our ExpNet framework, which starts from (a) an
overview pipeline using ResNet34 backbone as the example, and continues with
zoomed-in diagrams of individual modules, including (b) Gaze-Shift, and (c)
Neural Firing Fields (NeFiRF).
## 2 Related Work
Fine-grained visual categorization (FGVC) aims to distinguish subordinate
categories within entry-level categories, which can be seen as a special case
of expert-level classification. Previous FGVC strategies include exploring
multi-level feature interaction [47], locating discriminative parts [70, 27]
and identifying the subtle difference through pair-wise learning [19, 32] or
patch-level shuffling [10, 18]. Among them, the part-based methods are most
similar to our idea. The previous CNN-based implementations[70, 27] first
predict the part localizations and then classify based on the selected
regions. Recent vision transformer (ViT) [17] based implementations [29, 75] adopt the self-attention mechanism to model the local-global interaction, but they often select the parts through the response of the features, so the selection and classification are highly coupled. In addition, they process the global and part information equally at each level, which dilutes the importance of the discriminative features.
Besides FGVC, the regional attention mechanism itself is a popular technique
that has been widely studied. A basic idea is to first localize the attractive
parts, and then classify based on the region of interest [4, 23]. The
alternative strategies include enhancing the features with self-transformation
[62, 57, 9] or using localization information [64, 65]. However, these space-based methods are likely to cause overfitting, since the learnable attention maps are strongly correlated with the adopted features. Recently, studies [51, 56, 41] have shown that aggregating/filtering the features in the frequency domain is more efficient and robust. However, such methods easily introduce high-frequency noise by completely ignoring the spatial constraint.
## 3 Method
### 3.1 Motivation and ExpNet
Based on research on the human visual system [6, 5, 44], when we recognize an image, we first shift attention to a salient object, such as the bird in the picture, while retaining an impression of the context, such as the spring season, in memory. When an expert needs to obtain more professional information from the image, he/she repeats this process more times than ordinary people [5]. For example, the expert will further observe the details of the bird, such as its wings and beak, and retain an overall impression of the bird, such as its blue color, in memory. Later, more detailed features, such as the patterns of the feathers on the wings, are observed, and so on. Finally, the expert can integrate all the memorized information and observations to make a comprehensive inference.
Inspired by this biological process, we design ExpNet to hierarchically decouple the focal-part and context information, and then process the two individually in each stage. An illustration is shown in Fig. 2 (a). Over a standard ResNet [30] backbone, we propose a module named Gaze-Shift between two convolution blocks to decompose the stage-$l$ feature map $f^{l}$ into the focal-part feature $f^{l+1}$ and a context-related embedding $e_{c}^{l}$, named Context Impression. The focal-part feature is sent to the next stage and the Context Impression is memorized. Repeating this process, the local but discriminative parts are progressively abstracted into high-level semantic features, while the Context Impressions are stripped out at their appropriate levels to reinforce/calibrate the final decision. Using ResNet as the backbone, we stack four such stages to obtain a final focal embedding and three Context Impressions. These embeddings are then fused (Section 3.4) to obtain the final decision. The whole network is trained end-to-end on image-level categories using the cross-entropy loss.
### 3.2 Gaze-Shift
Specifically, Gaze-Shift works as shown in Fig. 2 (b). Given a feature map $f\in\mathbb{R}^{H\times W\times C}$, we first patchify the feature map with a patch size $k$. Then we use NeFiRF (Section 3.3) to generate a binary patch-level map $m\in\mathbb{R}^{p\times p\times 1}$, called the saliency map, where $p=H/k$ (assuming the common case $H=W$). The saliency map spatially splits the patches of the feature map into focal patches and context patches. In the focus branch, the selected focal patches are spatially organized (Section 3.4) to keep the spatial correlations; in this process, they are downsampled and abstracted into the focal-part feature $f^{l+1}\in\mathbb{R}^{\frac{H}{2}\times\frac{W}{2}\times 2C}$. In the context branch, the focal patches and context patches interact through Cross Attention (CA) [7] to obtain $e_{c}$: the focal parts are used as the query, and the context parts are used as the key and value in the attentive mechanism (see Section 3.4 for details). In this way, we encode the discriminative embedding from the context content considering its interaction with the focal parts.
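The patch split described above can be sketched with plain array operations; a minimal numpy illustration in which the learned NeFiRF saliency map is replaced by a hand-made binary mask:

```python
import numpy as np

def split_patches(f, m, k):
    """Split a feature map into focal and context patches according to a
    binary patch-level saliency map m (1 = focus, 0 = context)."""
    H, W, C = f.shape
    p = H // k
    # reshape into a (p, p) grid of k x k x C patches
    patches = f.reshape(p, k, p, k, C).transpose(0, 2, 1, 3, 4)
    focal = patches[m.astype(bool)]       # (n_focus, k, k, C)
    context = patches[~m.astype(bool)]    # (p*p - n_focus, k, k, C)
    return focal, context

f = np.random.rand(8, 8, 4)              # toy feature map, H = W = 8, C = 4
m = np.zeros((4, 4)); m[1:3, 1:3] = 1    # focus on the four centre patches
focal, context = split_patches(f, m, k=2)
print(focal.shape, context.shape)        # (4, 2, 2, 4) (12, 2, 2, 4)
```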
### 3.3 NeFiRF
Spatially grouping and weighting features is a well-recognized strategy in classification tasks. Common practices include learning a spatial attention map [62] or measuring feature similarity [45]. However, these spatial modeling methods are likely to cause overfitting, as the feature maps and attention maps are strongly correlated. Recently, processing features in frequency space has been shown to be an efficient way to promote generalization [51, 41, 56]. However, directly learning a filter in frequency space [51] loses the spatial constraint of the feature map. Note that our grouping is supposed to be naturally spatially correlated, i.e., focal parts or context parts are likely to gather together spatially. To introduce a spatial constraint in the frequency space, we generate the saliency map with a conditional neural field (NeRF) [66], which we name Neural Firing Fields (NeFiRF). NeFiRF is a NeRF conditioned on the frequency encoding of the given feature map. As neural networks are prone to produce similar responses given similar inputs, a NeRF fed a coordinate map naturally constrains the output to be spatially smooth [66].
An illustration of NeFiRF is shown in Fig. 2 (c). Specifically, NeFiRF generates a binary patch-level map $m$ conditioned on the input feature map $f$. The map assigns each patch of the feature map to 'focus' or 'context'. The main architecture of NeFiRF is a 2-D NeRF, which predicts the value at each position from an input coordinate map. Different from the original implementation of NeRF, we apply convolution layers over the input coordinate map. We build six convolution layers in total: one in the low stage, four in the middle stage, and one in the high stage. To introduce the feature frequency, we follow SIREN [55] and insert a periodic activation function between every two convolution layers, but we control the activation through the embedding of $f$ instead of direct learning. Specifically, we encode $f$ as the parameters that control the amplitude $a$ and frequency $w$ of the sine activation function $a\sin(wx)$, where $x$ is the input feature element. In the implementation, we first apply patch-average pooling (PAP) to convert $f$ to a $p\times p\times C$ map, then encode it by one convolution into a $p\times p\times 2$ map, whose two channels respectively represent the amplitude and frequency for each patch. We then control the bandwidth of each stage to encourage coarse-to-fine generation. Following the physical concept of band-pass filters, we zero the top 20% of high-frequency activations in the low-pass filter, the top 10% of high-frequency and low-frequency activations in the middle-pass filter, and the top 20% of low-frequency activations in the high-pass filter. After that, we use global average pooling (GAP) [43] to produce two scalars $a$ and $w$ that decide a unique activation. In NeFiRF, we adopt skip connections over the middle and high stages to pass low-frequency information to the higher levels. A Sigmoid function and 0.5 thresholding are applied on the last layer to produce a binary $p\times p$ saliency map. NeFiRF does not share weights across stages.
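The modulated activation $a\sin(wx)$ can be sketched as follows; the learned convolution encoder that produces the two channels is an assumption replaced here by fixed reductions, so this is only an illustration of the PAP → encode → GAP → modulated-sine pipeline, not the trained module:

```python
import numpy as np

def modulated_sine(x, f, k):
    """SIREN-style activation a*sin(w*x), with amplitude a and frequency w
    derived from the conditioning feature map f. The learned encoding is
    replaced by fixed mean reductions for illustration."""
    H, W, C = f.shape
    p = H // k
    # patch-average pooling: (p, p, C)
    pap = f.reshape(p, k, p, k, C).mean(axis=(1, 3))
    # stand-in for the learned encoder + GAP producing two scalars
    a = pap.mean() + 1.0          # amplitude (illustrative)
    w = np.abs(pap).mean() * 10   # frequency (illustrative)
    return a * np.sin(w * x)

x = np.linspace(-1.0, 1.0, 8)    # toy input feature elements
f = np.random.rand(8, 8, 4)      # toy conditioning feature map
print(modulated_sine(x, f, k=2).shape)  # (8,)
```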
### 3.4 Details of Implementation
Spatial Organization. After NeFiRF selects the focal patches, we reorganize and abstract them into the next-stage feature map while maintaining their original spatial correlation. The process is implemented by padding, max pooling, and deformable convolution [16]. Specifically, we first arrange the selected focal patches according to their previous positions, and apply padding followed by 2-stride max pooling to downsample the feature map to $\frac{H}{2}\times\frac{W}{2}\times C$. Then, we apply a deformable convolution on the feature map to keep the same scale but double the channels. Deformable convolution learns offsets together with the convolution parameters; for each convolution kernel, the offsets help it abstract only the informative positions and ignore the blank ones. In this way, it effectively expands the selected focal features into the blank positions. More details and experiments on the spatial organization are provided in the supplementary materials.
Cross Attention. We use Cross Attention (CA) [7] to model the interaction between the focal and context features in each stage. Different from its original setting, we do not use a class token for the classification; instead, only the embeddings (flattened from the feature maps) interact through the attention mechanism, and GAP is applied to the result to obtain the final embedding. In addition, we use conditional positional encoding [14] to encode the position information: the flattened focal and context embeddings are concatenated as the condition for positional encoding learning, and the produced encoding is split into the focal and context parts based on the positions and added to the focal and context embeddings, respectively.
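The attention step can be sketched as single-head scaled dot-product cross attention, with focal embeddings as queries and context embeddings as keys/values; the learned query/key/value projections and positional encodings of the actual CA module are omitted, so this is only an assumption-laden sketch of the mechanism:

```python
import numpy as np

def cross_attention(q_focal, kv_context, d):
    """Single-head scaled dot-product cross attention: focal embeddings
    query the context embeddings (no learned projections, for brevity)."""
    scores = q_focal @ kv_context.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # each row sums to 1
    return attn @ kv_context                       # convex mix of contexts

q = np.random.rand(4, 16)    # 4 focal patch embeddings, dim 16
kv = np.random.rand(12, 16)  # 12 context patch embeddings, dim 16
out = cross_attention(q, kv, d=16)
print(out.shape)  # (4, 16)
```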
Embedding Fusion. We tried several ways to fuse the embeddings for the classification, including concatenation + multilayer perceptron (MLP), MLP + multiplication, MLP + addition, dynamic MLP [67], and CA. The detailed implementations and experiments can be found in the supplementary materials. In the experiments, we adopt CA in the large variant of the model and MLP + addition in the small variant.
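A minimal sketch of the MLP + addition variant; whether the MLP weights are shared across stages is not specified in this excerpt, so a shared single-layer ReLU MLP is assumed for illustration:

```python
import numpy as np

def fuse_mlp_add(focal, contexts, W, b):
    """MLP + addition fusion (illustrative): project each memorized Context
    Impression with a shared single-layer ReLU MLP, then add the
    projections to the final focal embedding."""
    projected = [np.maximum(c @ W + b, 0.0) for c in contexts]
    return focal + sum(projected)

rng = np.random.default_rng(0)
focal = rng.standard_normal(32)                         # final focal embedding
contexts = [rng.standard_normal(32) for _ in range(3)]  # 3 Context Impressions
W, b = 0.1 * rng.standard_normal((32, 32)), np.zeros(32)
print(fuse_mlp_add(focal, contexts, W, b).shape)  # (32,)
```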
## 4 Experiments
| Method | Param | Architecture | CUB (FGVC, Acc%) | Air (FGVC, Acc%) | REFUGE2 (Medical, AUC%) | Convid-19 (Medical, Acc%) | iMet2020 (Artworks, F2 Score%) | WikiArt (Artworks, Acc%) | Mean |
|---|---|---|---|---|---|---|---|---|---|
BCN [19] | 25M | ResNet-50 | 87.7 | 90.3 | 78.7 | 91.0 | 57.8 | 72.6 | 79.7
ACNet [32] | 48M | ResNet-50 | 88.1 | 92.4 | 80.5 | 93.2 | 59.0 | 73.8 | 81.2
PMG [18] | 25M | ResNet-50 | 89.6 | 93.4 | 78.8 | 91.0 | 58.7 | 72.2 | 80.6
API-NET [76] | 36M | ResNet-50 | 87.7 | 93.0 | 80.9 | 93.6 | 59.5 | 74.3 | 81.5
Cross-X [47] | 30M | SENet-50 | 87.5 | 92.7 | 80.5 | 93.1 | 63.5 | 78.9 | 82.7
DCL [10] | 28M | ResNet-50 | 87.8 | 93.0 | 77.9 | 90.2 | 58.1 | 70.9 | 79.7
MGE [70] | 28M | ResNet-50 | 88.5 | 90.8 | 82.3 | 93.0 | 59.8 | 74.5 | 81.5
Mix+ [42] | 25M | ResNet-50 | 88.4 | 92.0 | 82.0 | 92.8 | 60.0 | 74.8 | 81.7
TransFG[29] | 86M | ViT-B_16 | 91.7 | 93.6 | 82.6 | 94.8 | 59.2 | 78.5 | 83.4
RAMS-Trans [31] | 86M | ViT-B_16 | 91.3 | 92.7 | 81.9 | 93.2 | 60.8 | 78.2 | 83.0
DualCross [75] | 88M | ViT-B_16 | 92.0 | 93.3 | 83.8 | 94.3 | 60.5 | 78.8 | 83.8
DualStage [4] | 33M | UNet + ResNet-50 | 86.1 | 87.9 | 80.3 | 92.8 | 59.7 | - | -
DENet [23] | 30M | ResNet-50 | - | - | 84.7 | - | - | - | -
FundTrans [21] | 86M | ViT-B_16 | 90.1 | 91.8 | 85.3 | 93.5 | 61.0 | 78.1 | 83.8
SatFormer [35] | 92M | ViT-B_16 | 91.2 | 92.6 | 85.0 | 94.3 | 62.3 | 78.7 | 84.0
Convid-ViT [71] | 88M | Swin-B | - | - | - | 95.5 | - | - | -
ConvidNet [1] | 46M | ConvidNet | 83.2 | 81.0 | 78.4 | 93.3 | 55.2 | 67.5 | 76.4
PSNet [58] | 22M | WRN | 86.7 | 87.9 | 80.5 | 94.0 | 56.7 | 69.0 | 79.1
Convid-Trans [54] | 86M | ViT-B_16 | 90.1 | 91.2 | 81.1 | 95.0 | 60.1 | 77.6 | 82.5
ResGANet [12] | 27M | ResNet-50 | 86.7 | 90.0 | 82.9 | 94.0 | 58.4 | 76.5 | 78.9
SynMIC [69] | 30M | ResNet-50 | 87.1 | 91.6 | 83.6 | 94.5 | 59.8 | 74.5 | 81.9
SeATrans [64] | 96M | UNet + ViT-B_16 | 90.3 | 92.6 | 87.6 | 95.8 | - | - | -
CLIP-Art [15] | 88M | ViT-B_32 | - | - | - | - | 60.8 | - | -
MLMO [25] | 27M | ResNet-50 | 84.8 | 87.1 | 77.8 | 90.2 | 60.3 | 75.2 | 79.2
GCNBoost [20] | 21M | GCN | 85.2 | 88.9 | 79.5 | 92.6 | 62.8 | 74.7 | 80.6
Pavel [26] | 155M | SENet*2 + PNasNet-5 | - | - | - | - | 67.2 | - | -
DualPath [74] | 29M | ResNet-50 | 84.2 | 87.5 | 77.8 | 91.3 | 60.1 | 78.5 | 79.9
Two-Stage [53] | 48M | ResNet-50 | 86.7 | 90.4 | 83.9 | 91.8 | 62.7 | 79.3 | 82.5
DeepArt [49] | 45M | Vgg16 | - | - | - | - | 55.6 | 75.2 | -
RASA [40] | 23M | ResNet-34 | 81.8 | 85.2 | 75.6 | 87.0 | 58.5 | 76.3 | 77.4
CrossLayer [8] | 42M | Vgg16 | 81.1 | 84.7 | 75.8 | 86.8 | 62.7 | 77.0 | 77.7
ResNet [30] | 25M | ResNet-50 | 84.5 | 87.3 | 77.3 | 90.6 | 59.2 | 74.1 | 78.8
CvT [63] | 32M | CvT-21 | 90.6 | 93.0 | 82.1 | 94.5 | 61.5 | 79.7 | 83.6
DeiT [59] | 86M | DeiT-B | 91.1 | 93.8 | 83.3 | 95.7 | 60.7 | 80.4 | 84.2
ViT [17] | 86M | ViT-B_16 | 90.3 | 91.6 | 81.8 | 93.8 | 60.4 | 78.8 | 82.8
ExpNet-S | 16M | ResNet18 + CA | 89.4 | 92.8 | 84.8 | 95.1 | 61.8 | 78.5 | 83.7
ExpNet-M | 47M | ResNet50 +CA | 92.6 | 94.2 | 86.8 | 96.7 | 64.2 | 81.9 | 86.1
ExpNet-L | 64M | ResNet50 +CA | 92.8 | 94.6 | 87.2 | 96.9 | 65.7 | 83.1 | 86.7
Table 1: Comparison of ExpNet with SOTA classification methods from different
fields. A gray background denotes that the method was proposed for that
specific task. The metric Acc denotes accuracy, and AUC denotes area under
the ROC curve. The $1^{st}$, $2^{nd}$, and $3^{rd}$ best methods are marked
in red, blue, and green, respectively. Parameter counts are given in millions
(M).
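Table 1 mixes three metrics. Acc and AUC are standard; the F2 score used for iMet2020 is the F-beta measure with beta = 2, which weights recall four times as heavily as precision. A sketch of the computation from confusion counts (the exact iMet evaluation is per-label and not reproduced here):

```python
def f_beta(tp, fp, fn, beta=2.0):
    """F-beta score from confusion counts; beta=2 (F2) emphasises recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```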
### 4.1 Dataset
We conduct experiments on three representative expert-level classification
tasks: FGVC, medical image classification (diagnosis), and art-attribute
classification. For FGVC, two commonly used benchmarks, CUB-200-2011 (CUB)
[60] and FGVC-Aircraft (Air) [48], are used to demonstrate the performance of
our method. For medical image classification, we use the COVIDx dataset [61],
a large-scale, open-access benchmark for predicting COVID-19-positive cases
from chest X-ray images, and the REFUGE2 dataset [22], a publicly released
challenge dataset for screening glaucoma from fundus images. WikiArt [52] and
iMet [68] are used as the two art-related datasets. WikiArt consists of
paintings, and iMet mainly consists of artworks in The Metropolitan Museum,
such as sculptures and porcelain.
### 4.2 Setting
We experiment with the large, medium, and small variants of our model:
ExpNet-L, ExpNet-M, and ExpNet-S, respectively. In ExpNet-S, we use ResNet18
as the backbone and MLP + addition for the embedding fusion. In ExpNet-M and
ExpNet-L, we use ResNet50 as the backbone and CA for the embedding fusion.
ExpNet-M adopts CA with $12$ heads and a hidden size of $768$; ExpNet-L
adopts CA with $16$ heads and a hidden size of $1024$. For all variants, the
images are uniformly resized to 448$\times$448 pixels. We train the models
for 100 epochs using AdamW [39] with a batch size of 16. Detailed
configurations and training settings are provided in the supplementary
material.
Figure 3: The visualized saliency maps of ExpNet-M mapped back to the raw
image at each stage (from shallow to deep, denoted as stages 1, 2, and 3).
The focal parts are kept and the context parts are masked. For small focal
parts, we zoom in on the region in the upper-left/right corners for clarity.
From top to bottom: FGVC, medical image, and artwork classification,
respectively.
### 4.3 Main Results
We compare ExpNet with SOTA classification methods proposed in four different
fields: FGVC, medical image classification, artwork attribution recognition,
and general vision classification. The main results are shown in Table 1. For
the better analysis of ExpNet, we also show the visualized results of the
intermediate saliency maps in Fig. 3. The maps are mapping back to the raw
image space for the analysis.
#### 4.3.1 ExpNet vs. FGVC methods
In FGVC, a typical strategy is to identify the intra-class and inter-class
variances, as in BCN [19], API-NET [76], PMG [18], and DCL [10]. But without
attention to the discriminative parts, these methods do not perform well on
FGVC and medical images, where regional lesions/details are vital for correct
classification. MGE [70], Mix+ [42], TransFG [29], and DualCross [75] select
and reinforce the discriminative parts for FGVC; however, they either ignore
the context information [29, 70] or process the local and global features
equally [75, 42], which limits their performance.
As the CUB and Air cases in Fig. 3 show, the proposed ExpNet also focuses on
discriminative details, like part-based FGVC methods. The difference is that
ExpNet additionally models the Context Impression for classification. Thanks
to this individual part-context modeling, ExpNet-L outperforms the SOTA
DualCross by 0.8% and 1.3% on CUB and Air, respectively.
#### 4.3.2 ExpNet vs. Medical Imaging Methods
Medical image classification tasks need to focus on both the local lesions
and the global structural relationship of the organs/tissues. For example, in
glaucoma screening, the network needs to first locate the optic-disc region
[4] on the fundus image and then model the relationship between optic disc
and optic cup [23] for the diagnosis. For COVID-19 diagnosis, methods are
likewise designed to find and focus on airspace opacities in chest X-ray
images [1, 58, 71]. However, they often cannot model the global structure,
and thus perform worse on glaucoma classification. General medical image
diagnosis methods [12, 64, 70] are often designed with both regional
attention and feature interaction. For example, ResGANet [12] uses spatial
attention and channel attention to reinforce the discriminative features, and
SeATrans [64] combines a localization UNet and a classification network at
the feature level through a ViT.
Unlike these methods, ExpNet abstracts the discriminative region and models
the global structural relationship in a unified network. As the glaucoma case
in Fig. 3 shows, ExpNet focuses on the optic disc in stage 2 and the optic
cup in stage 3, from which we can infer that the fusion of the focal
embedding and the last Context Impression effectively models the
optic-disc/cup relationship (verified in Section 4.6). The results show that
ExpNet-L ranks first on COVID-19 and second on REFUGE2. Our performance on
glaucoma diagnosis is even competitive with the segmentation-assisted method
(SeATrans), without using any prior segmentation.
#### 4.3.3 ExpNet vs. Artwork recognition methods
On the art-related datasets, modeling the interaction of hierarchical
features is an effective strategy [8, 53]. That is because predicting the
attributes of an artwork, e.g., the artist or the culture, requires a
comprehensive interaction of various features, such as the style or stroke of
a painting or the pattern of the ornaments on pottery. Motivated by this
observation, CrossLayer [8] applies the interactions on the high-level
layers. Two-Stage [53] first splits the image into overlapping windows and
separately extracts their features, then uses another CNN to learn the
interaction of the high-level features, achieving the highest performance
among the CNN-based models.
In ExpNet, NeFiRF effectively extracts orthogonal features at each stage and
thus facilitates the final feature interaction. As the painting case in Fig.
3 shows, ExpNet individually models the background, the figures'
faces/gestures, and the ornament patterns in the three stages. Having
abstracted different features at each stage, the final interaction becomes
more effective. As shown in Table 1, ExpNet-L ranks first on WikiArt and
second on iMet. The only method ranking higher (Pavel) uses more than double
the parameters.
#### 4.3.4 ExpNet vs. General vision classification
We also compare our method with various general vision classification
architectures: ResNet50, two modified ViT architectures [63, 59], and the
vanilla ViT [17]. These methods are more general but commonly perform worse
than the SOTA on the specific tasks. The proposed ExpNet performs well on all
three tasks and shows the best generalization ability. Comparing mean scores
among general vision classification methods, ExpNet-M outperforms the SOTA
DeiT by 2% with half the parameters.
#### 4.3.5 Cross-dataset Generalization
Considering the cross-dataset generalization of the different methods, we can
see that part-based strategies often work well on both fine-grained and
medical image classification: for example, MGE, TransFG, DualCross,
SatFormer, and SeATrans perform well on both FGVC and medical tasks. In these
tasks, local parts, like bird beaks or the optic cup, are very important for
classification. Methods modeling the interactions of hierarchical features
tend to perform better on the art-related datasets; for example, Cross-X
shows strong generalization on them. We can also see that stronger backbone
architectures are more robust: considering the mean performance, the top
methods in each field are commonly based on a ViT backbone. The proposed
ExpNet combines part-context modeling, hierarchical feature interaction, and
appropriate network architectures in a unified framework, achieving top-3
performance on all tasks and the best mean score with a good
accuracy-complexity trade-off.
Figure 4: Quantitative and qualitative comparison of weakly supervised
detection and segmentation on CUB and REFUGE2, respectively. On CUB, we show
the activation score map and bounding box (Bbox) of SCM for comparison; the
ground-truth and predicted Bboxes are shown in red and green, respectively.
On REFUGE2, we show the optic disc (green) and cup (brown) segmentation of
RS+EPM for the comparison.
### 4.4 Weakly-supervised Localization/Segmentation
We can observe from Fig. 3 that ExpNet's saliency maps closely follow the
object shape and location. In this section, we further explore this
relationship through quantitative and qualitative experiments, which show
that ExpNet is also a strong model for weakly-supervised localization and
segmentation.
To demonstrate the effect of weakly-supervised learning, we use the
intermediate saliency map $m$ of ExpNet-M to produce object segmentation or
localization results. We produce the bird localization prediction from the
first-stage map $m^{1}$ on the CUB dataset, and the optic disc and cup
segmentation predictions from $m^{2}$ and $m^{3}$, respectively, on the
REFUGE2 dataset (details are in the supplementary material). On CUB, we
compare against mainstream weakly-supervised localization methods, including
I2C [72], BGC [38], PDM [50], LCTR [11], TS-CAM [24], and SCM [3]. We
evaluate the performance by the commonly used GT-Known (GT-K) and Top1/Top5
localization accuracy, as well as the stricter MaxboxAccV1 and MaxboxAccV2
[13]. On REFUGE2, we compare against various weakly-supervised segmentation
methods, including IRNet [2], OAA++ [33], LIID [46], Puzzle-CAM [36], L2G
[34], and RS+EPM [37], using IoU and Dice score. We show the quantitative
results and visual comparison in Fig. 4. On CUB, ExpNet outperforms the SOTA
SCM by 1.2% on GT-K and 1.1% on MaxboxAccV1. The visualized results show that
ExpNet predicts finer activation maps, and thus produces more accurate
bounding boxes. On REFUGE2, ExpNet outperforms the SOTA RS+EPM by 2.5% on IoU
and 2.1% on Dice. Compared with RS+EPM in the visualized results, ExpNet
predicts neater and more reasonable segmentation masks, especially on the
ambiguous optic cup.
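A common way to turn an intermediate saliency map into a localization prediction, sketched here under the assumption of a simple threshold (the exact protocol is in the supplementary material): take the tightest box around above-threshold pixels and score it by IoU against the ground truth. GT-Known counts a prediction as correct when that IoU reaches 0.5.

```python
def saliency_to_bbox(smap, thresh=0.5):
    """Tightest inclusive box (x0, y0, x1, y1) covering all pixels whose
    saliency exceeds `thresh`; None if nothing is activated."""
    ys = [y for y, row in enumerate(smap) for v in row if v > thresh]
    xs = [x for row in smap for x, v in enumerate(row) if v > thresh]
    if not ys:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

def iou(a, b):
    """Intersection-over-union of two inclusive (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0 + 1) * max(0, iy1 - iy0 + 1)

    def area(r):
        return (r[2] - r[0] + 1) * (r[3] - r[1] + 1)

    return inter / (area(a) + area(b) - inter)
```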
### 4.5 Ablation Study and Alternatives
To verify the effectiveness of the proposed NeFiRF, we compare it with
alternative feature grouping/activation techniques, including Spatial
Attention [62], Group Trans [45], and Global Filter [51]. The experimental
results on ExpNet-M are shown in Table 2. We can see that the frequency-based
method, Global Filter, generally performs better than the space-based
methods, Spatial Attention and Group Trans. The proposed NeFiRF, which
introduces a spatial constraint in the frequency space, outperforms Global
Filter by 0.6%, 0.9%, and 1.8% on the three datasets. Compared with the NeRF
baseline, NeFiRF improves significantly, by 1.2%, 2.3%, and 2.0% on the three
datasets, demonstrating the effectiveness of the proposed conditional sine
activation and band-pass filter.
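The band-pass idea can be illustrated in one dimension: transform to the frequency domain, zero out the bins outside the stage's band, and transform back. The sketch below uses a naive DFT on a 1-D signal; NeFiRF itself constrains 2-D feature spectra, so this is only a conceptual analogue with hypothetical helper names.

```python
import cmath

def dft(xs):
    """Naive discrete Fourier transform of a real 1-D signal."""
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(xs)) for k in range(n)]

def idft(cs):
    """Inverse DFT, returning the real part of the reconstruction."""
    n = len(cs)
    return [sum(c * cmath.exp(2j * cmath.pi * k * t / n)
                for k, c in enumerate(cs)).real / n for t in range(n)]

def band_pass(signal, lo, hi):
    """Keep only frequency magnitudes lo..hi (bins and their conjugate
    mirrors), the 1-D analogue of restricting a stage to one band."""
    n = len(signal)
    spec = dft(signal)
    kept = [c if lo <= min(k, n - k) <= hi else 0.0
            for k, c in enumerate(spec)]
    return idft(kept)
```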
To further verify the effectiveness of each proposed component, we perform
detailed ablation studies of both Gaze-Shift and NeFiRF on ExpNet-M, as
listed in Table 3. In Table 3, we sequentially add the proposed modules on
top of the ResNet50 baseline, and the model performance gradually improves.
First, we adopt Focal Part separation over the baseline, with Spatial
Attention (SA) and a CNN-based spatially organized (Section 3.4) context
embedding. It improves performance on the tasks in which particular parts are
discriminative (CUB and COVID-19). Applying the part-context correlated CA
Context Impression increases performance significantly, indicating that
global attention is more effective for modeling the context information. Then
we replace SA with NeFiRF using only the conditional sine activation, and the
performance again improves significantly. From the NeRF results in Table 2,
we can infer that the conditional sine activation alone improves performance
by 0.9%, 2.2%, and 2.2% on the three datasets. Further applying band-pass
filters in NeFiRF improves the model on all three datasets, which
demonstrates the general effectiveness of this simple regularization
strategy.
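The per-component gains discussed above can be read directly off Table 3 as successive row differences; a small helper (accuracy values copied from Table 3, configuration names ours) makes the bookkeeping explicit:

```python
ROWS = {  # configuration -> (CUB, COVID-19, WikiArt) accuracy, from Table 3
    "baseline":            (84.5, 90.6, 74.1),
    "+focal_part":         (87.2, 92.9, 74.8),
    "+context_impression": (91.9, 94.6, 79.5),
    "+cond_sine":          (92.3, 96.2, 81.3),
    "+band_pass":          (92.6, 96.7, 81.9),
}

def component_gains(rows):
    """Accuracy gain each added component contributes over the previous row."""
    names = list(rows)
    gains = {}
    for prev, cur in zip(names, names[1:]):
        gains[cur] = tuple(round(b - a, 1)
                           for a, b in zip(rows[prev], rows[cur]))
    return gains
```

For instance, the band-pass filter's contribution comes out as (0.3, 0.5, 0.6) points on CUB, COVID-19, and WikiArt.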
Table 2: Comparison of NeFiRF with alternative feature grouping or activation
strategies.
Dataset | Spatial Attention | Group Trans | Global Filter | NeRF | NeFiRF
---|---|---|---|---|---
CUB | 91.9 | 92.0 | 91.7 | 91.4 | 92.6
COVID-19 | 94.6 | 95.2 | 95.6 | 94.4 | 96.7
WikiArt | 79.5 | 79.9 | 80.8 | 79.1 | 81.9
Table 3: Ablation study of Gaze-Shift and NeFiRF.
Baseline | Gaze-Shift: Focal Part | Gaze-Shift: Context Impression | NeFiRF: Cond Sine Activation | NeFiRF: Band-Pass Filter | CUB | COVID-19 | WikiArt
---|---|---|---|---|---|---|---
✓ | | | | | 84.5 | 90.6 | 74.1
✓ | ✓ | CNN | SA | | 87.2 | 92.9 | 74.8
✓ | ✓ | ✓ | SA | | 91.9 | 94.6 | 79.5
✓ | ✓ | ✓ | ✓ | | 92.3 | 96.2 | 81.3
✓ | ✓ | ✓ | ✓ | ✓ | 92.6 | 96.7 | 81.9
Figure 5: The classification performance of each stage's intermediate result.
Stages 0, 1, 2, and 3 correspond to the first, second, and third Context
Impressions and the final focus embedding, respectively. Individual indicates
the performance of each stage alone; Accumulation represents the performance
after sequential fusion.
### 4.6 Analysis and Discussion
For an in-depth understanding of the function and effect of each stage in
ExpNet, we quantitatively test the performance of the Context Impression
produced at each stage. Specifically, we attach and train a fully-connected
layer on each embedding of a frozen ExpNet-M to predict the class, and use
the same procedure to test each sequential fusion of the embeddings. The
results are denoted Individual and Accumulate, respectively, in Fig. 5. We
can see that for tasks in which local parts are discriminative, like CUB,
Air, and COVID-19, the deeper embeddings often contribute more to the correct
decision. On REFUGE2, the Accumulate results show that the fusion of the last
two embeddings is the most important. That is largely because the last two
embeddings represent the optic disc and cup, respectively (as shown in Fig.
3), and their fusion can learn the key parameter, i.e., the vCDR, for the
final classification. On WikiArt, the fusion improves performance almost
linearly, from which we can infer that various features, such as image style,
painting stroke, and object patterns, are all critical for the
discrimination. Based on the above analysis, we can see that the
discriminative factors vary greatly across expert-level classification tasks.
Previous methods, such as those designed for FGVC, can hardly cover all these
factors, whereas ExpNet is designed to fit the different discrimination
manners in a unified architecture, achieving the best performance in
expert-level classification.
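The vCDR (vertical cup-to-disc ratio) mentioned above is the standard clinical parameter for glaucoma grading: the ratio of the vertical diameter of the optic cup to that of the optic disc. A toy computation on binary masks (helper names are ours, not from the paper):

```python
def vertical_diameter(mask):
    """Largest vertical extent (in rows) of a binary mask over all columns."""
    best = 0
    for x in range(len(mask[0])):
        rows = [y for y in range(len(mask)) if mask[y][x]]
        if rows:
            best = max(best, rows[-1] - rows[0] + 1)
    return best

def vcdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from segmentation masks; larger values
    indicate greater glaucoma risk."""
    return vertical_diameter(cup_mask) / vertical_diameter(disc_mask)
```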
## 5 Conclusion
In this work, we focused on a particular vision classification setting that
requires professional category knowledge, i.e., expert-level classification.
As the task is hard to address with existing solutions, we proposed to
hierarchically decouple the part and context features in our ExpNet and to
model the two individually through different architectures. This enables the
network to focus on the discriminative parts and perceive the context content
in a unified framework, as well as to integrate hierarchical features for
comprehensive predictions. Extensive experiments demonstrated the overall
superior performance of ExpNet on a wide range of expert-level classification
tasks.
## References
* [1] Sabbir Ahmed, Md Hossain, Manan Binth Taj Noor, et al. Convid-net: an enhanced convolutional neural network framework for covid-19 detection from x-ray images. In Proceedings of international conference on trends in computational and cognitive engineering, pages 671–681. Springer, 2021.
* [2] Jiwoon Ahn, Sunghyun Cho, and Suha Kwak. Weakly supervised learning of instance segmentation with inter-pixel relations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2209–2218, 2019.
* [3] Haotian Bai, Ruimao Zhang, Jiong Wang, and Xiang Wan. Weakly supervised object localization via transformer with implicit spatial calibration. In European Conference on Computer Vision, pages 612–628. Springer, 2022.
* [4] Muhammad Naseer Bajwa, Muhammad Imran Malik, Shoaib Ahmed Siddiqui, Andreas Dengel, Faisal Shafait, Wolfgang Neumeier, and Sheraz Ahmed. Two-stage framework for optic disc localization and glaucoma classification in retinal fundus images using deep learning. BMC medical informatics and decision making, 19(1):1–16, 2019.
* [5] Lauren Barghout. How global perceptual context changes local contrast processing. University of California, Berkeley, 2003.
* [6] Lauren Barghout-Stein. On differences between peripheral and foveal pattern masking. University of California, Berkeley, 1999.
* [7] Chun-Fu Richard Chen, Quanfu Fan, and Rameswar Panda. Crossvit: Cross-attention multi-scale vision transformer for image classification. In Proceedings of the IEEE/CVF international conference on computer vision, pages 357–366, 2021.
* [8] Liyi Chen and Jufeng Yang. Recognizing the style of visual arts via adaptive cross-layer correlation. In Proceedings of the 27th ACM international conference on multimedia, pages 2459–2467, 2019.
* [9] Long Chen, Hanwang Zhang, Jun Xiao, Liqiang Nie, Jian Shao, Wei Liu, and Tat-Seng Chua. Sca-cnn: Spatial and channel-wise attention in convolutional networks for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5659–5667, 2017.
* [10] Yue Chen, Yalong Bai, Wei Zhang, and Tao Mei. Destruction and construction learning for fine-grained image recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5157–5166, 2019.
* [11] Zhiwei Chen, Changan Wang, Yabiao Wang, Guannan Jiang, Yunhang Shen, Ying Tai, Chengjie Wang, Wei Zhang, and Liujuan Cao. Lctr: On awakening the local continuity of transformer for weakly supervised object localization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 410–418, 2022.
* [12] Junlong Cheng, Shengwei Tian, Long Yu, Chengrui Gao, Xiaojing Kang, Xiang Ma, Weidong Wu, Shijia Liu, and Hongchun Lu. Resganet: Residual group attention network for medical image classification and segmentation. Medical Image Analysis, 76:102313, 2022.
* [13] Junsuk Choe, Seong Joon Oh, Seungho Lee, Sanghyuk Chun, Zeynep Akata, and Hyunjung Shim. Evaluating weakly supervised object localization methods right. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3133–3142, 2020.
* [14] Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Conditional positional encodings for vision transformers. arXiv preprint arXiv:2102.10882, 2021.
* [15] Marcos V Conde and Kerem Turgutlu. Clip-art: contrastive pre-training for fine-grained art classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3956–3960, 2021.
* [16] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 764–773, 2017.
* [17] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
* [18] Ruoyi Du, Dongliang Chang, Ayan Kumar Bhunia, Jiyang Xie, Zhanyu Ma, Yi-Zhe Song, and Jun Guo. Fine-grained visual classification via progressive multi-granularity training of jigsaw patches. In European Conference on Computer Vision, pages 153–168. Springer, 2020.
* [19] Abhimanyu Dubey, Otkrist Gupta, Pei Guo, Ramesh Raskar, Ryan Farrell, and Nikhil Naik. Pairwise confusion for fine-grained visual classification. In Proceedings of the European conference on computer vision (ECCV), pages 70–86, 2018.
* [20] Cheikh Brahim El Vaigh, Noa Garcia, Benjamin Renoust, Chenhui Chu, Yuta Nakashima, and Hajime Nagahara. Gcnboost: Artwork classification by label propagation through a knowledge graph. In Proceedings of the 2021 International Conference on Multimedia Retrieval, pages 92–100, 2021.
* [21] Rui Fan, Kamran Alipour, Christopher Bowd, Mark Christopher, Nicole Brye, James A Proudfoot, Michael H Goldbaum, Akram Belghith, Christopher A Girkin, Massimo A Fazio, et al. Detecting glaucoma from fundus photographs using deep learning without convolutions: Transformer for improved generalization. Ophthalmology Science, page 100233, 2022.
* [22] Huihui Fang, Fei Li, Huazhu Fu, Xu Sun, Xingxing Cao, Jaemin Son, Shuang Yu, Menglu Zhang, Chenglang Yuan, Cheng Bian, et al. Refuge2 challenge: Treasure for multi-domain learning in glaucoma assessment. arXiv preprint arXiv:2202.08994, 2022.
* [23] Huazhu Fu, Jun Cheng, Yanwu Xu, Changqing Zhang, Damon Wing Kee Wong, Jiang Liu, and Xiaochun Cao. Disc-aware ensemble network for glaucoma screening from fundus image. IEEE transactions on medical imaging, 37(11):2493–2501, 2018.
* [24] Wei Gao, Fang Wan, Xingjia Pan, Zhiliang Peng, Qi Tian, Zhenjun Han, Bolei Zhou, and Qixiang Ye. Ts-cam: Token semantic coupled attention map for weakly supervised object localization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2886–2895, 2021.
* [25] Yang Gao, Neng Chang, and Kai Shang. Multi-layer and multi-order fine-grained feature learning for artwork attribute recognition. Computer Communications, 173:214–219, 2021.
* [26] Konstantin Gavrilchik. Pavel, 2022.
* [27] Weifeng Ge, Xiangru Lin, and Yizhou Yu. Weakly supervised complementary parts models for fine-grained image classification from the bottom up. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3034–3043, 2019.
* [28] Robert Geirhos, David HJ Janssen, Heiko H Schütt, Jonas Rauber, Matthias Bethge, and Felix A Wichmann. Comparing deep neural networks against humans: object recognition when the signal gets weaker. arXiv preprint arXiv:1706.06969, 2017.
* [29] Ju He, Jie-Neng Chen, Shuai Liu, Adam Kortylewski, Cheng Yang, Yutong Bai, and Changhu Wang. Transfg: A transformer architecture for fine-grained recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 852–860, 2022.
* [30] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [31] Yunqing Hu, Xuan Jin, Yin Zhang, Haiwen Hong, Jingfeng Zhang, Yuan He, and Hui Xue. Rams-trans: Recurrent attention multi-scale transformer for fine-grained image recognition. In Proceedings of the 29th ACM International Conference on Multimedia, pages 4239–4248, 2021.
* [32] Ruyi Ji, Longyin Wen, Libo Zhang, Dawei Du, Yanjun Wu, Chen Zhao, Xianglong Liu, and Feiyue Huang. Attention convolutional binary neural tree for fine-grained visual categorization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10468–10477, 2020.
* [33] Peng-Tao Jiang, Qibin Hou, Yang Cao, Ming-Ming Cheng, Yunchao Wei, and Hong-Kai Xiong. Integral object mining via online attention accumulation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2070–2079, 2019.
* [34] Peng-Tao Jiang, Yuqi Yang, Qibin Hou, and Yunchao Wei. L2g: A simple local-to-global knowledge transfer framework for weakly supervised semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16886–16896, 2022.
* [35] Yankai Jiang, Ke Xu, Xinyue Wang, Yuan Li, Hongguang Cui, Yubo Tao, and Hai Lin. Satformer: Saliency-guided abnormality-aware transformer for retinal disease classification in fundus image.
* [36] Sanghyun Jo and In-Jae Yu. Puzzle-cam: Improved localization via matching partial and full features. In 2021 IEEE International Conference on Image Processing (ICIP), pages 639–643. IEEE, 2021.
* [37] Sang Hyun Jo, In Jae Yu, and Kyung-Su Kim. Recurseed and edgepredictmix: Single-stage learning is sufficient for weakly-supervised semantic segmentation. arXiv preprint arXiv:2204.06754v3, 2022.
* [38] Eunji Kim, Siwon Kim, Jungbeom Lee, Hyunwoo Kim, and Sungroh Yoon. Bridging the gap between classification and localization for weakly supervised object localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14258–14267, 2022.
* [39] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [40] Adrian Lecoutre, Benjamin Negrevergne, and Florian Yger. Recognizing art style automatically in painting with deep learning. In Asian conference on machine learning, pages 327–342. PMLR, 2017.
* [41] James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, and Santiago Ontanon. Fnet: Mixing tokens with fourier transforms. arXiv preprint arXiv:2105.03824, 2021.
* [42] Hao Li, Xiaopeng Zhang, Qi Tian, and Hongkai Xiong. Attribute mix: Semantic data augmentation for fine grained recognition. In 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP), pages 243–246. IEEE, 2020.
* [43] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
* [44] Tony Lindeberg. A computational theory of visual receptive fields. Biological cybernetics, 107(6):589–635, 2013.
* [45] Kai Liu, Tianyi Wu, Cong Liu, and Guodong Guo. Dynamic group transformer: A general vision transformer backbone with dynamic group attention. arXiv preprint arXiv:2203.03937, 2022.
* [46] Yun Liu, Yu-Huan Wu, Pei-Song Wen, Yu-Jun Shi, Yu Qiu, and Ming-Ming Cheng. Leveraging instance-, image- and dataset-level information for weakly supervised instance segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
* [47] Wei Luo, Xitong Yang, Xianjie Mo, Yuheng Lu, Larry S Davis, Jun Li, Jian Yang, and Ser-Nam Lim. Cross-x learning for fine-grained visual categorization. In Proceedings of the IEEE/CVF international conference on computer vision, pages 8242–8251, 2019.
* [48] Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013.
* [49] Hui Mao, Ming Cheung, and James She. Deepart: Learning joint representations of visual arts. In Proceedings of the 25th ACM international conference on Multimedia, pages 1183–1191, 2017.
* [50] Meng Meng, Tianzhu Zhang, Wenfei Yang, Jian Zhao, Yongdong Zhang, and Feng Wu. Diverse complementary part mining for weakly supervised object localization. IEEE Transactions on Image Processing, 31:1774–1788, 2022.
* [51] Yongming Rao, Wenliang Zhao, Zheng Zhu, Jiwen Lu, and Jie Zhou. Global filter networks for image classification. Advances in Neural Information Processing Systems, 34:980–993, 2021.
* [52] Babak Saleh and Ahmed Elgammal. Large-scale classification of fine-art paintings: Learning the right metric on the right feature. arXiv preprint arXiv:1505.00855, 2015.
* [53] Catherine Sandoval, Elena Pirogova, and Margaret Lech. Two-stage deep learning approach to the classification of fine-art paintings. IEEE Access, 7:41770–41781, 2019.
* [54] Debaditya Shome, T Kar, Sachi Nandan Mohanty, Prayag Tiwari, Khan Muhammad, Abdullah AlTameem, Yazhou Zhang, and Abdul Khader Jilani Saudagar. Covid-transformer: Interpretable covid-19 detection using vision transformer for healthcare. International Journal of Environmental Research and Public Health, 18(21):11086, 2021.
* [55] Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 33:7462–7473, 2020.
* [56] Yehui Tang, Kai Han, Jianyuan Guo, Chang Xu, Yanxi Li, Chao Xu, and Yunhe Wang. An image patch is a wave: Phase-aware vision mlp. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10935–10944, 2022.
* [57] Chunwei Tian, Yong Xu, Zuoyong Li, Wangmeng Zuo, Lunke Fei, and Hong Liu. Attention-guided cnn for image denoising. Neural Networks, 124:117–129, 2020.
* [58] Zhiqiang Tian, Lizhi Liu, Zhenfeng Zhang, and Baowei Fei. Psnet: prostate segmentation on mri based on a convolutional neural network. Journal of Medical Imaging, 5(2):021208, 2018.
* [59] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, pages 10347–10357. PMLR, 2021.
* [60] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset. 2011.
* [61] Linda Wang, Zhong Qiu Lin, and Alexander Wong. Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images. Scientific Reports, 10(1):1–12, 2020.
* [62] Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. Cbam: Convolutional block attention module. In Proceedings of the European conference on computer vision (ECCV), pages 3–19, 2018.
* [63] Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, and Lei Zhang. Cvt: Introducing convolutions to vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22–31, 2021.
* [64] Junde Wu, Huihui Fang, Fangxin Shang, Dalu Yang, Zhaowei Wang, Jing Gao, Yehui Yang, and Yanwu Xu. Seatrans: Learning segmentation-assisted diagnosis model via transformer. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 677–687. Springer, 2022.
* [65] Junde Wu, Shuang Yu, Wenting Chen, Kai Ma, Rao Fu, Hanruo Liu, Xiaoguang Di, and Yefeng Zheng. Leveraging undiagnosed data for glaucoma classification with teacher-student learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 731–740. Springer, 2020.
* [66] Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, and Srinath Sridhar. Neural fields in visual computing and beyond. In Computer Graphics Forum, volume 41, pages 641–676. Wiley Online Library, 2022.
* [67] Lingfeng Yang, Xiang Li, Renjie Song, Borui Zhao, Juntian Tao, Shihao Zhou, Jiajun Liang, and Jian Yang. Dynamic mlp for fine-grained image classification by leveraging geographical and temporal information. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10945–10954, 2022.
* [68] Chenyang Zhang, Christine Kaeser-Chen, Grace Vesom, Jennie Choi, Maria Kessler, and Serge Belongie. The imet collection 2019 challenge dataset. arXiv preprint arXiv:1906.00901, 2019.
* [69] Jianpeng Zhang, Yutong Xie, Qi Wu, and Yong Xia. Medical image classification using synergic deep learning. Medical image analysis, 54:10–19, 2019.
* [70] Lianbo Zhang, Shaoli Huang, Wei Liu, and Dacheng Tao. Learning a mixture of granularity-specific experts for fine-grained categorization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8331–8340, 2019.
* [71] Lei Zhang and Yan Wen. A transformer-based framework for automatic covid19 diagnosis in chest cts. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 513–518, 2021.
* [72] Xiaolin Zhang, Yunchao Wei, and Yi Yang. Inter-image communication for weakly supervised localization. In European Conference on Computer Vision, pages 271–287. Springer, 2020.
* [73] Bo Zhao, Jiashi Feng, Xiao Wu, and Shuicheng Yan. A survey on deep learning-based fine-grained object classification and semantic segmentation. International Journal of Automation and Computing, 14(2):119–135, 2017.
* [74] Sheng-hua Zhong, Xingsheng Huang, and Zhijiao Xiao. Fine-art painting classification via two-channel dual path networks. International Journal of Machine Learning and Cybernetics, 11(1):137–152, 2020.
* [75] Haowei Zhu, Wenjing Ke, Dong Li, Ji Liu, Lu Tian, and Yi Shan. Dual cross-attention learning for fine-grained visual categorization and object re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4692–4702, 2022.
* [76] Peiqin Zhuang, Yali Wang, and Yu Qiao. Learning attentive pairwise interaction for fine-grained classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13130–13137, 2020.
|
# Quantum diffeomorphisms cannot make indefinite causal order definite
Anne-Catherine de la Hamette, Viktoria Kabel, Marios Christodoulou, and Časlav Brukner
University of Vienna, Faculty of Physics, Vienna Doctoral School in Physics, and Vienna Center for Quantum Science and Technology (VCQ), Boltzmanngasse 5, A-1090 Vienna, Austria
Institute for Quantum Optics and Quantum Information (IQOQI), Austrian Academy of Sciences, Boltzmanngasse 3, A-1090 Vienna, Austria
###### Abstract
The study of indefinite causal order has seen rapid development, both
theoretically and experimentally, in recent years. While classically the
causal order of two timelike-separated events A and B is fixed (either A
before B or B before A), this is no longer true in quantum theory. There, it
is possible to encounter superpositions of causal orders. In light of recent
work on quantum reference frames, which reveals that the superposition of
locations, momenta, and other properties can depend on the choice of reference
frame or coordinate system, the question arises whether this also holds true
for superpositions of causal orders. Here, we provide a negative answer to
this question for quantum diffeomorphisms. First, we provide an unambiguous
definition of causal order between two events in terms of worldline
coincidences and the proper time of a third particle. Then, we show that
superpositions of causal order defined as such cannot be rendered definite
even through the most general class of coordinate transformations - quantum-
controlled, independent diffeomorphisms in each branch. Finally, based on our
results, we connect the information theoretic and gravitational perspectives
on indefinite causal order.
## I Introduction
The principle of general covariance postulates that any choice of coordinates
is an equally valid description of nature: physical statements cannot depend
on the choice of coordinates. This is manifested by the invariance of the laws
of physics under general coordinate transformations. Recently, it has been
suggested that the invariance extends to quantum superpositions of coordinate
transformations, also known as quantum reference frame transformations Hardy
(2019); Giacomini _et al._ (2019); Loveridge _et al._ (2018); Vanrietvelde
_et al._ (2020); Castro-Ruiz _et al._ (2020); Höhn _et al._ (2021); de la
Hamette and Galley (2020); Krumm _et al._ (2021); Castro-Ruiz and Oreshkov
(2021); de la Hamette _et al._ (2021a, b); Cepollaro and Giacomini (2021); de
la Hamette _et al._ (2021c); Kabel _et al._ (2022). Applied to situations at
the border between quantum theory and gravity, these can offer new insights on
genuine quantum phenomena of spacetime, such as superpositions of geodesics,
time dilations, or causal orders Zych _et al._ (2011); Marletto and Vedral
(2017); Bose _et al._ (2017); Belenchia _et al._ (2018); Christodoulou and
Rovelli (2019); Castro-Ruiz _et al._ (2020); Smith and Ahmadi (2020); Paczos
_et al._ (2022); Dębski _et al._ (2022). Importantly, these effects can
already be explored without a full theory of quantum gravity by considering
simplified scenarios that combine features of quantum theory and general
relativity.
One such situation concerns the superposition of two semi-classical,
orthogonal states of the metric, each peaked around a classical solution
$g_{\mathcal{A}}$ and $g_{\mathcal{B}}$ of the Einstein equations, formally,
$\alpha\mathinner{|{g_{\mathcal{A}}}\rangle}+\beta\mathinner{|{g_{\mathcal{B}}}\rangle},\
\alpha,\beta\in\mathbb{C},$ (1)
with $|\alpha|^{2}+|\beta|^{2}=1$. Note that the above equation should be
understood as suggestive since the Hilbert space assigned to the metric has
not been properly introduced. Such a scenario might, for example, arise when
placing a gravitating object in a superposition of two locations (see e.g.
Refs. Marletto and Vedral (2017); Bose _et al._ (2017); Belenchia _et al._
(2018); Christodoulou and Rovelli (2019); Anastopoulos and Hu (2020); Zych
_et al._ (2019); de la Hamette _et al._ (2021c)). As the spacetime metric
imposes the causal structure, superpositions of the form (1) may also feature
superpositions of causal orders between events, also known as _indefinite
causal order_. (To be exact, as shown in Ref. Costa (2022), under certain
conditions this is only possible if there is a quantum control degree of
freedom, which labels the different geometries. This control degree of freedom
will be introduced explicitly shortly.) The latter has been studied
extensively both theoretically Hardy (2005); Chiribella _et al._ (2013);
Oreshkov _et al._ (2012); Zych _et al._ (2019) and experimentally Procopio
_et al._ (2015); Rubino _et al._ (2017, 2022); Goswami _et al._ (2018); Wei
_et al._ (2019); Taddei _et al._ (2021) in recent years.
Each of the classical configurations in Eq. (1) can be described in a specific
coordinate system. When considering superpositions thereof, we can in
principle choose a different coordinate system in each branch. These different
choices of coordinates can be tied together with the help of physical events
Giacomini and Brukner (2021, 2022) to furnish a quantum coordinate system.
More specifically, we use the term _quantum coordinates_ to refer to a pair of
coordinates $x^{\mu}_{\mathcal{A}},x^{\mu}_{\mathcal{B}}$ ($\mu=0,1,2,3$), one
for each branch $\mathcal{A}$ and $\mathcal{B}$. A _change_ of quantum
coordinates can then be implemented more abstractly by a quantum-controlled
pair of diffeomorphisms $\phi_{\mathcal{A}},\phi_{\mathcal{B}}$ acting
correspondingly on each branch – a _quantum diffeomorphism_. This requires a
control degree of freedom, whose orthogonal states $\mathinner{|{0}\rangle}$
and $\mathinner{|{1}\rangle}$ label the different branches of the
superposition. The state (1) should thus be extended to read
$\alpha\mathinner{|{0}\rangle}\mathinner{|{g_{\mathcal{A}}}\rangle}+\beta\mathinner{|{1}\rangle}\mathinner{|{g_{\mathcal{B}}}\rangle}.$
(2)
Given that classically there is an invariance under the choice of coordinate
system, one may expect the same to hold true for quantum superpositions
thereof. In particular, based on this intuition, it has recently been shown
that one can always find a local quantum inertial frame in which a
superposition of metrics becomes locally Minkowskian across all branches
Giacomini and Brukner (2021, 2022). Given that the metric governs the causal
relations between events, one might be tempted to expect an analogous result
for superpositions of causal orders. That is, that under the larger group of
quantum diffeomorphisms, one could choose coordinates independently in each
branch such that the overall causal order becomes definite.
When giving physical meaning to spacetime events and causal order in terms of
coincidences and proper times, causal order is diffeomorphism invariant. Here,
we extend this result and show that it remains invariant also under any
quantum coordinate transformation and can be seen as a true observable in the
general relativistic sense of the word Rovelli (1991). Our result does not
stand in contradiction with the possibility of finding a local quantum inertial
frame in which the metric is Minkowskian. While the latter is a local
property, the causal order between events is not.
Our result has significant implications for several different reasons. First,
the question as to what extent quantum reference frame changes can modify
indefinite causal order has puzzled the community for years. We show that
causal order, as defined in this article, remains invariant under generic
quantum-controlled coordinate transformations. Second, by connecting the
quantum information notion of indefinite causal order with general
relativistic concepts, we bridge the gap between two communities that have
long been trying to address similar questions from different viewpoints. We
thus hope to open up the dialogue required for advancing our understanding of
the interface between quantum theory and gravity.
## II Definition of Causal Order
We begin by defining a notion of causal order between events based on the
concepts of worldline coincidences and proper time. Following setups
considered in the study of indefinite causal order Chiribella _et al._
(2013); Oreshkov _et al._ (2012); Procopio _et al._ (2015); Rubino _et al._
(2017, 2022); Goswami _et al._ (2018); Wei _et al._ (2019); Taddei _et al._
(2021); Zych _et al._ (2019); Castro-Ruiz _et al._ (2020) in quantum
information theory, we consider two systems, each following a timelike
worldline, and a test particle interacting with each of the systems once. Each
of these interactions defines an event. As an example, one could consider two
laboratories and an electron travelling between the two, with the events
defined by the application of a unitary operation acting on the spin degree of
freedom of the electron. To model the events in a physically meaningful and
coordinate independent way in a general relativistic language, we define them
in terms of coincidences of worldlines. Consider thus three timelike curves
$\gamma_{p}$, where $p=0,1,2$. The worldline $\gamma_{0}$ coincides once with
$\gamma_{1}$ and once with $\gamma_{2}$ as depicted in Fig. 1. We denote these
coincidences by $\mathcal{E}_{1}$ and $\mathcal{E}_{2}$. For simplicity, we
assume that there are no other crossings of curves in this setup.
Figure 1: We consider a scenario of two systems and a test particle on a fixed
spacetime. The timelike worldline $\gamma_{0}$ of the test particle is
depicted in black, while the timelike curves of the systems are depicted by
the blue line $\gamma_{1}$ and the red line $\gamma_{2}$. The initial and
final points defining the curve $\gamma_{0}$ are denoted by $P_{\gamma,i}$ and
$P_{\gamma,f}$, respectively. The worldlines $\gamma_{0}$ and $\gamma_{1}$
coincide once and their crossing defines the event $\mathcal{E}_{1}$, while
${\mathcal{E}_{2}}$ is defined by the single crossing of the worldlines
$\gamma_{0}$ and $\gamma_{2}$. We can use the proper time of $\gamma_{0}$
together with fixed initial and final points to define a causal order for the
events.
Each of the curves can be parametrized by the proper time of the test
particle,
$\displaystyle\tau_{\gamma}=\frac{1}{c}\int_{P_{\gamma,i}}^{P_{\gamma,f}}\sqrt{-g_{\mu\nu}\,dx^{\mu}dx^{\nu}},$
(3)
where $P_{\gamma,i}$ and $P_{\gamma,f}$ are the initial and final points
defining the curve. We could picture $P_{\gamma,i}$ as the event at which the
test particle is released and $P_{\gamma,f}$ as the one at which a final
measurement is applied. These two events operationally fix a time orientation
along the worldline of the test particle, which we can formally characterize
by the tangent vector $V_{0}^{\mu}\equiv\frac{d\gamma^{\mu}_{0}}{d\tau}$ along
$\gamma_{0}$. The parametrizations of $\gamma_{1}$ and $\gamma_{2}$ are then
chosen such that at the respective crossing, their tangent vectors
$V_{1}^{\mu}$ and $V_{2}^{\mu}$ are future-pointing with respect to this time
orientation, i.e.
$\displaystyle g_{\mu\nu}V_{0}^{\mu}V_{1,2}^{\nu}<0.$ (4)
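As an illustrative sketch (not part of the paper itself), the proper-time integral of Eq. (3) can be evaluated numerically. In flat spacetime with signature $(-,+,+,+)$ it reduces to $\tau=\int\sqrt{1-v^{2}/c^{2}}\,dt$ for a worldline parametrized by coordinate time; the function names below are our own.

```python
import numpy as np

def proper_time(x_of_t, t, c=1.0):
    """Evaluate Eq. (3) for a timelike worldline x(t) in 1+1 Minkowski
    spacetime, where -g_{mu nu} dx^mu dx^nu = c^2 dt^2 - dx^2."""
    x = x_of_t(t)
    v = np.gradient(x, t)                    # coordinate velocity dx/dt
    integrand = np.sqrt(1.0 - (v / c) ** 2)  # d tau / dt
    # trapezoidal rule over the sampled parameter values
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

t = np.linspace(0.0, 10.0, 10001)
tau = proper_time(lambda t: 0.6 * t, t)      # uniform velocity v = 0.6 c
# time dilation: tau = 10 * sqrt(1 - 0.36) = 8.0
```

Curved backgrounds work the same way once the metric components along the curve are supplied to the integrand.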
Now we can use $\tau_{\gamma_{0}}\equiv\tau$ to order the events. Define the
crossing points
$\displaystyle p_{\mathcal{E}_{1}}\equiv\gamma_{0}(\tau_{1}),$ (5)
$\displaystyle p_{\mathcal{E}_{2}}\equiv\gamma_{0}(\tau_{2}).$ (6)
Since the events are connected by a timelike curve, ${\mathcal{E}_{2}}$ lies
either in the future or the past lightcone of ${\mathcal{E}_{1}}$. This
corresponds directly to being able to signal from ${\mathcal{E}_{1}}$ to
${\mathcal{E}_{2}}$ or vice versa, and thus to the operational notion of
causal order. With respect to our choice of time orientation,
${\mathcal{E}_{2}}$ lies in the future of ${\mathcal{E}_{1}}$ if
$\tau_{1}<\tau_{2}$. The causal order is thus characterized by the sign $s$ of
the proper time difference, defined as
$\displaystyle\Delta\tau=\tau_{2}-\tau_{1}\equiv s|\tau_{2}-\tau_{1}|.$ (7)
Now, consider two spacetimes $g_{\mathcal{A}}$ and $g_{\mathcal{B}}$ assumed
to be globally hyperbolic. Each of them is associated with a manifold
$\mathcal{M}_{\mathcal{A}}$ and $\mathcal{M}_{\mathcal{B}}$ on which points
and trajectories can be defined. We take the same setup as above on each of
the spacetimes, i.e. three timelike curves and two events,
${\mathcal{E}_{1}}^{g_{\mathcal{A}}}$, ${\mathcal{E}_{2}}^{g_{\mathcal{A}}}$
and ${\mathcal{E}_{1}}^{g_{\mathcal{B}}}$,
${\mathcal{E}_{2}}^{g_{\mathcal{B}}}$ defined by their crossings respectively.
Figure 2: The left hand side depicts a superposition of two scenarios with
different causal orders $s^{g_{\mathcal{A}}}$ and $s^{g_{\mathcal{B}}}$. In
each of the branches, we consider a manifold $\mathcal{M}_{\mathcal{A,B}}$, a
metric $g_{\mathcal{A,B}}$, and the worldlines of three systems. As before,
the black line depicts $\gamma_{0}$, the blue line $\gamma_{1}$, and the red
line $\gamma_{2}$ with different transparencies illustrating the
superposition. As the events are defined by the crossing of the test particle
with one of the other trajectories, there are two events ${\mathcal{E}_{1}}$
and ${\mathcal{E}_{2}}$. As a first step, we apply a quantum-controlled
diffeomorphism, that is, a diffeomorphism
$\phi_{\mathcal{A}}:\mathcal{M}_{\mathcal{A}}\to\mathcal{M}$ together with a
diffeomorphism $\phi_{\mathcal{B}}:\mathcal{M}_{\mathcal{B}}\to\mathcal{M}$
chosen such that each of the events is associated with a single point on
$\mathcal{M}$ while the metric and causal order may still be in a
superposition.
In a next step, we take a superposition of these two classical situations.
Since $\mathcal{E}_{i}^{g_{\mathcal{A}}}$ and
$\mathcal{E}_{i}^{g_{\mathcal{B}}}$, $i=1,2$ are defined, respectively, as the
crossing of the test particle with system $i$, they represent the _same_
physical event. We thus write
$\displaystyle\mathcal{E}_{i}^{g_{\mathcal{A}}}=\mathcal{E}_{i}^{g_{\mathcal{B}}}\equiv\mathcal{E}_{i},\
i=1,2.$ (8)
This extends the relativistic notion of coincidences to incorporate
superpositions of trajectories. In particular, the worldlines of any two
particles can cross in a superposition of two different points on the
manifolds. In this setup, there are thus _two_ distinct physical events – the
crossing of particle $0$ with system $1$ and the crossing of particle $0$ with
system $2$ – _each_ associated with a point
$p_{\mathcal{E}_{i}}^{\mathcal{A}}$ on manifold $\mathcal{M}_{\mathcal{A}}$
and a point $p_{\mathcal{E}_{i}}^{\mathcal{B}}$ on manifold
$\mathcal{M}_{\mathcal{B}}$ (see Fig. 2).
As for the causal order, there are two possibilities for each of the
spacetimes $g_{\mathcal{A}}$ and $g_{\mathcal{B}}$: either
$\tau^{g}_{1}>\tau^{g}_{2}$ or $\tau^{g}_{1}<\tau^{g}_{2}$. As a consequence,
there are also two possibilities for the sign of the product of the time
differences $\Delta\tau^{g_{\mathcal{A}}}\Delta\tau^{g_{\mathcal{B}}}$, either
$+1$ or $-1$. When
$s^{g_{\mathcal{A}}}s^{g_{\mathcal{B}}}=1,$ (9)
we say that we have _definite causal order_ between the two events
$\mathcal{E}_{1}$ and $\mathcal{E}_{2}$. When
$s^{g_{\mathcal{A}}}s^{g_{\mathcal{B}}}=-1,$ (10)
we say that we have _indefinite causal order_.
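The bookkeeping of Eqs. (7)-(10) is elementary and can be sketched in a few lines of code (the function names are ours, for illustration only):

```python
def order_sign(tau1, tau2):
    # s from Eq. (7): +1 if E2 lies to the future of E1 along gamma_0
    return 1 if tau2 > tau1 else -1

def classify_order(taus_A, taus_B):
    """taus_X = (tau_1, tau_2) measured along gamma_0 in branch X.
    Returns 'definite' when s^A s^B = +1 (Eq. 9), else 'indefinite' (Eq. 10)."""
    s_A = order_sign(*taus_A)
    s_B = order_sign(*taus_B)
    return "definite" if s_A * s_B == 1 else "indefinite"
```

For example, `classify_order((1.0, 2.0), (3.0, 0.5))` reports an indefinite order, since E1 precedes E2 in branch A but follows it in branch B.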
Note again that this corresponds to the operational notion of causal order,
i.e. the ability to signal between events. An implementation of indefinite
causal order is provided by the gravitational quantum switch Zych _et al._
(2019); Paunković and Vojinović (2020). In Ref. Zych _et al._ (2019), a
massive object in superposition of two locations sources a gravitational field
in superposition, while two agents, Alice and Bob, perform operations on a
test particle. As a result of the indefinite gravitational field, the causal
order between these operations is found to be in a superposition as well.
Note, however, that indefinite causal order in the above sense can arise not
only from a superposition of gravitational fields but also from a
superposition of worldlines. For instance, the superposition of paths of the
target system
in the optical quantum switch Chiribella _et al._ (2013); Oreshkov _et al._
(2012); Procopio _et al._ (2015); Rubino _et al._ (2017, 2022); Goswami _et
al._ (2018); Wei _et al._ (2019); Taddei _et al._ (2021) leads to indefinite
causal order on a fixed Minkowskian background. In short, the quantity $s$
depends both on the metric and the trajectories of the systems. If either of
the two is in superposition, the causal order can be indefinite as a
consequence.
To conclude this section, let us briefly discuss how one could operationally
access the quantity $s$ characterizing causal order. To this end, let us
encode the causal order in orthogonal quantum states $\mathinner{|{s=\pm
1}\rangle}$. In the Supplementary Material, we give a concrete example of how
to achieve this operationally. Once we have a state of the form
$\displaystyle\mathinner{|{\Psi}\rangle}=\alpha\mathinner{|{0}\rangle}\mathinner{|{g_{\mathcal{A}}}\rangle}\mathinner{|{s=+1}\rangle}+\beta\mathinner{|{1}\rangle}\mathinner{|{g_{\mathcal{B}}}\rangle}\mathinner{|{s=-1}\rangle},$
(11)
one possibility for determining $s$ is to measure the control, together with
the gravitational field, in the basis
$\\{\mathinner{|{\phi_{\pm}}\rangle}=\frac{1}{\sqrt{2}}(\mathinner{|{0}\rangle}\mathinner{|{g_{\mathcal{A}}}\rangle}\pm\mathinner{|{1}\rangle}\mathinner{|{g_{\mathcal{B}}}\rangle})\\}$.
By post-selecting on the outcome $\mathinner{|{\phi_{+}}\rangle}$, the test
particle is left in a superposition of states corresponding to opposite causal
orders,
$\displaystyle\mathinner{|{\psi}\rangle}=\alpha\mathinner{|{s=+1}\rangle}+\beta\mathinner{|{s=-1}\rangle}.$
(12)
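A minimal linear-algebra sketch of this post-selection step, taking state (11) to state (12), can be written by modelling the gravitational field as a qubit with $\mathinner{|{g_{\mathcal{A}}}\rangle}=\mathinner{|{0}\rangle}$ and $\mathinner{|{g_{\mathcal{B}}}\rangle}=\mathinner{|{1}\rangle}$; all names below are illustrative:

```python
import numpy as np
from functools import reduce

kron = lambda *vs: reduce(np.kron, vs)
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

alpha, beta = 0.6, 0.8                      # |alpha|^2 + |beta|^2 = 1
# |Psi> = alpha |0>|g_A>|s=+1> + beta |1>|g_B>|s=-1>,  Eq. (11)
Psi = alpha * kron(ket0, ket0, ket0) + beta * kron(ket1, ket1, ket1)

# project control + field onto |phi_+> = (|0>|g_A> + |1>|g_B>) / sqrt(2)
phi_plus = (kron(ket0, ket0) + kron(ket1, ket1)) / np.sqrt(2)
psi = phi_plus @ Psi.reshape(4, 2)          # partial inner product over control+field
psi /= np.linalg.norm(psi)                  # post-selected particle state, Eq. (12)
# psi is now alpha |s=+1> + beta |s=-1>
```

The reshape groups the control and field indices so that the projection leaves a state on the test particle alone.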
Figure 3: Bloch representation of the state space of operationally encoded
causal order. Along the $z$-axis, the causal order is (a) definite, that is in
a state $\mathinner{|{s=\pm 1}\rangle}$, or (b) in a classical mixture of such
states. (c) On the surface of the Bloch ball (except the points
$\mathinner{|{s=\pm 1}\rangle}$), the system is in a coherent superposition of
causal orders and, finally, (d) the remaining states inside the ball,
represent mixed indefinite causal orders.
In the case of definite causal order, the resulting state corresponds to
$\alpha=1,\beta=0$ or $\alpha=0,\beta=1$. In general, the test particle may be
in a mixture of states of the form (12) indicating a mixture of causal orders,
e.g. when different causal orders are realized with a certain probability.
The states $\mathinner{|{s=+1}\rangle}$ and $\mathinner{|{s=-1}\rangle}$ can
be viewed as eigenstates of a $\sigma_{z}$ operator. If one now collects the
statistical data of the measurements of $\sigma_{z}$, $\sigma_{x}$, and
$\sigma_{y}$ on the test particle, one can distinguish between (a) definite
causal orders, (b) a classical mixture of definite (opposite) causal orders,
(c) pure indefinite causal orders, and (d) mixed indefinite causal orders (see
Fig. 3). Note that the measurement of only $\sigma_{z}$ would not be
sufficient because such a measurement would not distinguish coherently
superposed causal orders from a classical mixture. Many other specific
proposals for protocols that verify indefinite causal order can be found in
the literature Araújo _et al._ (2015); Branciard _et al._ (2015); Branciard
(2016); Bavaresco _et al._ (2019); van der Lugt _et al._ (2022).
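The four cases (a)-(d) can be read off the Bloch vector reconstructed from the $\sigma_{x}$, $\sigma_{y}$, $\sigma_{z}$ statistics. A schematic sketch, with our own (hypothetical) function names and tolerance conventions:

```python
import numpy as np

PAULIS = (np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
          np.array([[0, -1j], [1j, 0]]),                 # sigma_y
          np.array([[1, 0], [0, -1]], dtype=complex))    # sigma_z

def classify_bloch(rho, tol=1e-9):
    # Bloch vector from expectation values <sigma_i> = tr(rho sigma_i)
    r = np.real([np.trace(rho @ s) for s in PAULIS])
    on_z_axis = abs(r[0]) < tol and abs(r[1]) < tol      # no coherence between s = +1, -1
    pure = abs(np.linalg.norm(r) - 1.0) < tol            # surface of the Bloch ball
    if on_z_axis:
        return "definite" if pure else "classical mixture"
    return "pure indefinite" if pure else "mixed indefinite"
```

For instance, the maximally mixed state $\mathbb{1}/2$ is classified as a classical mixture, while $\mathinner{|{+}\rangle}\mathinner{\langle{+}|}$ (an equal coherent superposition of the two orders) lands on the equator of the Bloch sphere and is classified as pure indefinite.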
## III Quantum diffeomorphisms cannot render causal order definite
Consider now a situation of indefinite causal order between events
$\mathcal{E}_{1}$ and $\mathcal{E}_{2}$ as above. In the remainder of this
section, we attempt to render this causal order definite through the
application of arbitrary coordinate changes in each branch. We will see that,
despite making use of the full diffeomorphism invariance of general relativity
in each branch separately, causal order cannot be rendered definite.
Let us start by mapping the two setups onto the same manifold $\mathcal{M}$
such that each of the events ${\mathcal{E}_{1}}$ and ${\mathcal{E}_{2}}$ is
associated with a single point on this manifold. For this, define
diffeomorphisms $\phi_{\mathcal{A}}:\mathcal{M}_{\mathcal{A}}\to\mathcal{M}$
and $\phi_{\mathcal{B}}:\mathcal{M}_{\mathcal{B}}\to\mathcal{M}$ such that
$\displaystyle\phi_{\mathcal{A}}(p_{{\mathcal{E}_{1}}}^{\mathcal{A}})$
$\displaystyle=\phi_{\mathcal{B}}(p_{{\mathcal{E}_{1}}}^{\mathcal{B}})\equiv
p_{{\mathcal{E}_{1}}},$ (13)
$\displaystyle\phi_{\mathcal{A}}(p_{{\mathcal{E}_{2}}}^{\mathcal{A}})$
$\displaystyle=\phi_{\mathcal{B}}(p_{{\mathcal{E}_{2}}}^{\mathcal{B}})\equiv
p_{{\mathcal{E}_{2}}}.$ (14)
These transformations can be seen as a composition of two diffeomorphisms, one
of which maps the setups onto the same manifold while the other deforms the
curves such that the events coincide (see Fig. 2).
In a second step, we can make the metric definite locally at both
$p_{{\mathcal{E}_{1}}}$ and $p_{{\mathcal{E}_{2}}}$. To this end, consider two
disjoint open sets $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ around
$p_{{\mathcal{E}_{1}}}$ and $p_{{\mathcal{E}_{2}}}$ and define diffeomorphisms
$\phi_{1}$ and $\phi_{2}$ that act as the identity in the complement of the
respective region, i.e.
$\displaystyle\phi_{1}|_{\bar{\mathcal{S}}_{1}}=\mathrm{Id}_{\mathcal{M}},\qquad\phi_{2}|_{\bar{\mathcal{S}}_{2}}=\mathrm{Id}_{\mathcal{M}}.$ (15)
In accordance with the equivalence principle, we know that there exist
diffeomorphisms $\phi_{1}|_{\mathcal{S}_{1}}$ and
$\phi_{2}|_{\mathcal{S}_{2}}$, acting on each of the classical metrics in
superposition so that they become locally Minkowskian. Those can be further
chosen to be differentiable at all points of the manifold and in particular
across the boundary of $\mathcal{S}_{1/2}$ and $\mathcal{\bar{S}}_{1/2}$. This
means we can define $\phi_{1}$ and $\phi_{2}$ such that
$\displaystyle(\phi_{1}\circ\phi_{\mathcal{A}})_{*}\left(g^{\mathcal{A}}\right)(p_{{\mathcal{E}_{1}}})$
$\displaystyle=(\phi_{1}\circ\phi_{\mathcal{B}})_{*}\left(g^{\mathcal{B}}\right)(p_{{\mathcal{E}_{1}}})=\eta_{\mu\nu},$
$\displaystyle(\phi_{2}\circ\phi_{\mathcal{A}})_{*}\left(g^{\mathcal{A}}\right)(p_{{\mathcal{E}_{2}}})$
$\displaystyle=(\phi_{2}\circ\phi_{\mathcal{B}})_{*}\left(g^{\mathcal{B}}\right)(p_{{\mathcal{E}_{2}}})=\eta_{\mu\nu}.$
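The point-wise flattening invoked here can be made concrete numerically: for any symmetric metric of Lorentzian signature at a point, a linear frame $e$ with $e^{T}ge=\eta$ can be built from the eigendecomposition. The following construction is our own illustrative sketch, not taken from the paper:

```python
import numpy as np

ETA = np.diag([-1.0, 1.0, 1.0, 1.0])

def local_minkowski_frame(g):
    """Given the metric components g at a point (symmetric, signature -+++),
    return a frame e whose columns satisfy e^T g e = eta."""
    lam, V = np.linalg.eigh(g)       # ascending: the single negative eigenvalue first
    return V / np.sqrt(np.abs(lam))  # scale column a by |lam_a|^(-1/2)

# example: the Minkowski metric seen in a boosted, sheared coordinate system
L = np.eye(4); L[0, 1] = 0.3; L[2, 3] = -0.7
g = L.T @ ETA @ L
e = local_minkowski_frame(g)
# e.T @ g @ e reproduces eta at this point
```

By Sylvester's law of inertia the congruence preserves the signature, so exactly one eigenvalue is negative and `eigh`'s ascending order places it first, matching the ordering of $\eta$.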
Now that the metric is locally Minkowskian, we can associate a definite
lightcone to each of the points $p_{{\mathcal{E}_{1}}}$ and
$p_{{\mathcal{E}_{2}}}$ (see Fig. 4). If one views the lightcone at a point as
defining the causal structure, then the latter is now rendered definite.
However, this notion is a manifestly local one, not to be confused with the
notion of causal _order_ between events, as defined above. The latter _global_
notion is defined via the sign of the proper time differences, which is a
diffeomorphism invariant quantity. If we now consider a superposition of
different causal orders, we can apply a quantum-controlled diffeomorphism,
that is, an independent diffeomorphism in each branch. Formally,
$\displaystyle\mathinner{|{0}\rangle}\mathinner{\langle{0}|}\otimes\mathcal{U}_{\mathcal{A}}+\mathinner{|{1}\rangle}\mathinner{\langle{1}|}\otimes\mathcal{U}_{\mathcal{B}},$
(16)
where $\mathcal{U}_{\mathcal{A},\mathcal{B}}$ are unitary representations of
the above compositions of diffeomorphisms acting on
$\mathcal{M}_{\mathcal{A},\mathcal{B}}.$ Providing the explicit form of the
unitary representations would require a well-defined Hilbert space structure
for the gravitational field and any fields defined on the spacetime manifold,
which remains an open problem. Nevertheless, we know from the action of
diffeomorphisms on classical manifolds that in each branch, the causal order
$s^{g_{\mathcal{A},\mathcal{B}}}$ remains unchanged. Thus, the overall causal
order $s^{g_{\mathcal{A}}}s^{g_{\mathcal{B}}}$ will stay invariant under any
quantum diffeomorphism. Hence, causal order cannot be rendered definite even
by the most general quantum coordinate transformations.
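Structurally, the operator in Eq. (16) is a block-diagonal (quantum-controlled) unitary. The sketch below builds it for two placeholder branch unitaries and illustrates that it acts on each branch independently, so it cannot mix the branch labels carrying $s^{g_{\mathcal{A}}}$ and $s^{g_{\mathcal{B}}}$; here `U_A` and `U_B` merely stand in for the unknown representations $\mathcal{U}_{\mathcal{A},\mathcal{B}}$:

```python
import numpy as np

def random_unitary(n, rng):
    # stand-in for the (unknown) unitary representation of a diffeomorphism
    q, r = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

rng = np.random.default_rng(42)
U_A, U_B = random_unitary(4, rng), random_unitary(4, rng)

P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # |0><0|, |1><1| on the control
U = np.kron(P0, U_A) + np.kron(P1, U_B)             # Eq. (16)
# U is unitary, and a state supported on the |0> branch stays in that branch
```

Because the controlled operation never transfers amplitude between the two blocks, any quantity defined branch-by-branch, such as the product of order signs, is untouched.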
Figure 4: In a second step, the diffeomorphisms $\phi_{1}$ and $\phi_{2}$ are
applied on disjoint open sets around $p_{{\mathcal{E}_{1}}}$ and
$p_{{\mathcal{E}_{2}}}$, respectively, such that the metric becomes locally
Minkowskian. Outside these open sets, the diffeomorphisms are equal to the
identity. We can thus define a definite lightcone at each of the two points.
Viewing the lightcone at a point as defining the local causal structure, this
means that the latter is rendered definite. However, the causal order between
the two events still remains indefinite.
As a last resort, we may also allow for reparametrizations of the proper time,
in addition to changes of coordinates. This is equivalent to shifting the
parameter value at the initial point of the worldline in each spacetime. It
allows one to match the time shown on a clock along $\gamma_{0}$ in both branches
on either of the two events, but not on both. For example, we can ensure that
the proper times at ${\mathcal{E}_{1}}$ coincide, that is
$\tau^{g_{\mathcal{A}}}_{1}=\tau^{g_{\mathcal{B}}}_{1}\equiv\tau^{*}$.
However, then the proper times at ${\mathcal{E}_{2}}$ are in a superposition
such that either
$\tau^{g_{\mathcal{A}}}_{2}<\tau^{*}<\tau^{g_{\mathcal{B}}}_{2}$, or
$\tau^{g_{\mathcal{A}}}_{2}>\tau^{*}>\tau^{g_{\mathcal{B}}}_{2}$. In general,
it has been shown that for generic setups of indefinite causal order, it is
impossible to align the proper times of all events. In particular, Refs.
Guérin and Brukner (2018); Castro-Ruiz _et al._ (2020) show that in the case
of indefinite causal order, there are no temporal or causal reference frames
such that every event is associated to a single localized point. While this is
possible for a single event, the remaining events will in general be
delocalized in the corresponding frame. In Ref. Castro-Ruiz _et al._ (2020)
this inability to align all events was even suggested as a definition of
indefinite causal order.
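The obstruction can be seen in a two-line arithmetic sketch (the proper-time values are invented for illustration): a single clock offset per branch can align the proper times at one event, but with opposite order signs the second event necessarily ends up on opposite sides of that common value.

```python
# proper times (tau_1, tau_2) of the two events along gamma_0 in each branch
tau_A = {"E1": 2.0, "E2": 5.0}    # s^A = +1
tau_B = {"E1": 4.0, "E2": 3.0}    # s^B = -1

shift = tau_A["E1"] - tau_B["E1"]                   # reparametrize branch B's clock
tau_B_aligned = {k: v + shift for k, v in tau_B.items()}

tau_star = tau_A["E1"]                              # common proper time at E1
# E2 now sits on opposite sides of tau_star in the two branches:
# tau_A["E2"] > tau_star while tau_B_aligned["E2"] < tau_star
```

Any other choice of shift merely moves `tau_star`; the opposite signs of the differences survive every reparametrization.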
## IV Discussion
Despite the generality of transformations offered by the diffeomorphism group,
indefinite causal order remains indefinite under any _quantum coordinate
transformations_. The underlying reason that diffeomorphisms cannot render
indefinite causal order definite is that this is not a locally defined
property. If we were considering only one coincidence of worldlines in
superposition, it would be possible to match all the relevant quantities –
that is, the topological points of the events as well as the local lightcone
structure – across the branches. However, when there is more than one
coincidence on a single worldline, as in the scenarios considered in this
paper, this is not the end of the story, as there is another, diffeomorphism
invariant quantity: the proper time elapsed between the two events. For what
regards causal order, the relevant observable is the sign of the proper time
difference. It thus seems misplaced to assume that there always exists a
choice of quantum coordinates for which indefinite causal order becomes
definite, as one may infer naively from a quantum version of the equivalence
principle. If that were the case, more general symmetry transformations would
be required, which go beyond arbitrary changes of coordinates independently on
each spacetime.
The point of this work is to connect different notions of indefinite causal
order – information theoretic and gravitational – and to clarify conceptually
what it means in a general relativistic scenario that exhibits quantum
features. Observables in general relativity are defined as diffeomorphism-
invariant quantities. Since an indefinite causal order requires that either
spacetimes or system paths are brought into superposition, it is reasonable to
extend the notion of observables to quantities that are invariant under
quantum diffeomorphisms. Indeed, we can conclude from our results that the
notion of indefinite causal order as presented here constitutes a physically
meaningful observable in that sense. This emphasizes the significance of
indefinite causal order as a frame independent notion even in these general
scenarios.
What remains to be done is to cast the quantum-controlled diffeomorphisms
employed here in the framework of the quantum reference frame formalism. This
requires identifying the explicit form of the unitary representations
$\mathcal{U}_{\mathcal{A},\mathcal{B}}$ and the corresponding Hilbert space
structure for the gravitational field and the quantum coordinates. A promising
route to constructing such explicit QRF transformations that is currently
being explored is to extend the frameworks of Refs. Giacomini and Brukner
(2021, 2022); de la Hamette _et al._ (2021c).
The present findings are in line with several other works on indefinite causal
order. Firstly, we can view them in the light of recent results in the process
matrix formalism. Specifically, it was shown in Ref. Castro-Ruiz _et al._
(2018) that the causal order between operations is always preserved under
continuous and reversible transformations. In fact, the only continuous and
reversible operations are local unitaries in the respective laboratories,
which do not change the causal order. This result is derived in an abstract
framework of Hilbert spaces in the absence of any spacetime structure and has
not yet been extended to a field theoretic realm. However, similarities
between the general structure of the transformations in this work and
diffeomorphisms hint at a deeper connection, which could explain why causal
order is preserved in both frameworks. Secondly, different notions of
indefinite causal order have also been analyzed and compared in Ref. Vilasini
and Renner (2022). In particular, the authors connect information theoretic
notions of indefinite causal order to relativistic causality and prove several
no-go results in the context of definite spacetime and classical reference
frames. It would be interesting to investigate whether these results extend to
the realm considered here, that is, indefinite metrics and quantum coordinate
systems.
Finally, let us stress that the present work connects scenarios with localized
events on an indefinite spacetime background with delocalized events on
definite spacetime. This is an idea that has been prevalent in the community
for years Oreshkov (2019); Castro-Ruiz _et al._ (2020); Giacomini and Brukner
(2021, 2022); de la Hamette _et al._ (2021c); Kabel _et al._ (2022);
Vilasini and Renner (2022); Paunković and Vojinović (2020) but is made
explicit here. It is based on the insight that, although superpositions of
invariant quantities cannot be eliminated, they can be shifted between the
various systems under consideration. This is reflected in the fact that
indefinite causal order can be not only due to a superposition of
gravitational fields but also due to delocalized paths of systems. This shows
that the study of indefinite causal order can be approached equally well from
a purely quantum information perspective, in which particles move in a
superposition of paths on a fixed spacetime background, and a quantum gravity
perspective, in which the indefiniteness of spacetime itself is the reason for
indefinite causal order. While there _are_ some observables, such as the Ricci
curvature scalar, that can distinguish between these scenarios, indefinite
causal order, as defined here, cannot. We are positive that emphasizing this
connection will facilitate the discussion between the general relativity and
quantum information communities and thus pave the way to a fruitful
collaboration at the interface between quantum theory and gravity.
###### Acknowledgements.
MC acknowledges several useful preliminary discussions with Giulio Chiribella
and Some Sankar Bhattacharya. We acknowledge financial support by the Austrian
Science Fund (FWF) through BeyondC (F7103-N48). This publication was made
possible through the support of the ID 61466 and ID 62312 grants from the John
Templeton Foundation, as part of The Quantum Information Structure of
Spacetime (QISS) Project (qiss.fr). The opinions expressed in this publication
are those of the authors and do not necessarily reflect the views of the John
Templeton Foundation.
## Author Contributions
All authors contributed equally.
## References
* Hardy (2019) L. Hardy, Implementation of the Quantum Equivalence Principle (2019), arXiv:1903.01289 [quant-ph] .
* Giacomini _et al._ (2019) F. Giacomini, E. Castro-Ruiz, and Č. Brukner, Quantum mechanics and the covariance of physical laws in quantum reference frames, Nature Communications 10, 494 (2019).
* Loveridge _et al._ (2018) L. Loveridge, T. Miyadera, and P. Busch, Symmetry, Reference Frames, and Relational Quantities in Quantum Mechanics, Foundations of Physics 48, 135–198 (2018).
* Vanrietvelde _et al._ (2020) A. Vanrietvelde, P. A. Höhn, F. Giacomini, and E. Castro-Ruiz, A change of perspective: switching quantum reference frames via a perspective-neutral framework, Quantum 4, 225 (2020).
* Castro-Ruiz _et al._ (2020) E. Castro-Ruiz, F. Giacomini, A. Belenchia, and Č. Brukner, Quantum clocks and the temporal localisability of events in the presence of gravitating quantum systems, Nature Communications 11, 2672 (2020).
* Höhn _et al._ (2021) P. A. Höhn, A. R. Smith, and M. P. Lock, Trinity of relational quantum dynamics, Physical Review D 104, 10.1103/physrevd.104.066001 (2021).
* de la Hamette and Galley (2020) A.-C. de la Hamette and T. D. Galley, Quantum reference frames for general symmetry groups, Quantum 4, 367 (2020).
* Krumm _et al._ (2021) M. Krumm, P. A. Höhn, and M. P. Müller, Quantum reference frame transformations as symmetries and the paradox of the third particle, Quantum 5, 530 (2021).
* Castro-Ruiz and Oreshkov (2021) E. Castro-Ruiz and O. Oreshkov, Relative subsystems and quantum reference frame transformations (2021), arXiv:2110.13199 [quant-ph] .
* de la Hamette _et al._ (2021a) A.-C. de la Hamette, T. D. Galley, P. A. Höhn, L. Loveridge, and M. P. Müller, Perspective-neutral approach to quantum frame covariance for general symmetry groups (2021a), arXiv:2110.13824 [quant-ph] .
* de la Hamette _et al._ (2021b) A.-C. de la Hamette, S. L. Ludescher, and M. P. Mueller, Entanglement/Asymmetry correspondence for internal quantum reference frames (2021b), arXiv:2112.00046 [quant-ph] .
* Cepollaro and Giacomini (2021) C. Cepollaro and F. Giacomini, Quantum generalisation of Einstein’s Equivalence Principle can be verified with entangled clocks as quantum reference frames (2021).
* de la Hamette _et al._ (2021c) A.-C. de la Hamette, V. Kabel, E. Castro-Ruiz, and Č. Brukner, Falling through masses in superposition: quantum reference frames for indefinite metrics (2021c).
* Kabel _et al._ (2022) V. Kabel, A.-C. de la Hamette, E. Castro-Ruiz, and Č. Brukner, Quantum conformal symmetries for spacetimes in superposition (2022).
* Zych _et al._ (2011) M. Zych, F. Costa, I. Pikovski, and Č. Brukner, Quantum interferometric visibility as a witness of general relativistic proper time, Nature Communications 2, 10.1038/ncomms1498 (2011).
* Marletto and Vedral (2017) C. Marletto and V. Vedral, Gravitationally induced entanglement between two massive particles is sufficient evidence of quantum effects in gravity, Physical Review Letters 119, 10.1103/physrevlett.119.240402 (2017).
* Bose _et al._ (2017) S. Bose, A. Mazumdar, G. W. Morley, H. Ulbricht, M. Toroš, M. Paternostro, A. A. Geraci, P. F. Barker, M. Kim, and G. Milburn, Spin entanglement witness for quantum gravity, Physical Review Letters 119, 10.1103/physrevlett.119.240401 (2017).
* Belenchia _et al._ (2018) A. Belenchia, R. M. Wald, F. Giacomini, E. Castro-Ruiz, Č. Brukner, and M. Aspelmeyer, Quantum superposition of massive objects and the quantization of gravity, Physical Review D 98, 10.1103/physrevd.98.126009 (2018).
* Christodoulou and Rovelli (2019) M. Christodoulou and C. Rovelli, On the possibility of laboratory evidence for quantum superposition of geometries, Physics Letters B 792, 64–68 (2019).
* Smith and Ahmadi (2020) A. R. H. Smith and M. Ahmadi, Quantum clocks observe classical and quantum time dilation, Nature Communications 11, 5360 (2020).
* Paczos _et al._ (2022) J. Paczos, K. Dębski, P. T. Grochowski, A. R. H. Smith, and A. Dragan, Quantum time dilation in a gravitational field (2022).
* Dębski _et al._ (2022) K. Dębski, P. T. Grochowski, R. Demkowicz-Dobrzański, and A. Dragan, Universality of quantum time dilation (2022).
* Anastopoulos and Hu (2020) C. Anastopoulos and B. L. Hu, Quantum superposition of two gravitational cat states, Classical and Quantum Gravity 37, 235012 (2020).
* Zych _et al._ (2019) M. Zych, F. Costa, I. Pikovski, and Č. Brukner, Bell’s theorem for temporal order, Nature Communications 10, 3772 (2019).
* Note (1) To be exact, as shown in Ref. Costa (2022), under certain conditions this is only possible if there is a quantum control degree of freedom, which labels the different geometries. This control degree of freedom will be introduced explicitly shortly.
* Hardy (2005) L. Hardy, Probability theories with dynamic causal structure: A new framework for quantum gravity (2005).
* Chiribella _et al._ (2013) G. Chiribella, G. M. D’Ariano, P. Perinotti, and B. Valiron, Quantum computations without definite causal structure, Physical Review A 88, 10.1103/physreva.88.022318 (2013).
* Oreshkov _et al._ (2012) O. Oreshkov, F. Costa, and Č. Brukner, Quantum correlations with no causal order, Nature Communications 3, 10.1038/ncomms2076 (2012).
* Procopio _et al._ (2015) L. M. Procopio, A. Moqanaki, M. Araújo, F. Costa, I. A. Calafell, E. G. Dowd, D. R. Hamel, L. A. Rozema, Č. Brukner, and P. Walther, Experimental superposition of orders of quantum gates, Nature Communications 6, 10.1038/ncomms8913 (2015).
* Rubino _et al._ (2017) G. Rubino, L. A. Rozema, A. Feix, M. Araújo, J. M. Zeuner, L. M. Procopio, Č. Brukner, and P. Walther, Experimental verification of an indefinite causal order, Science Advances 3, 10.1126/sciadv.1602589 (2017).
* Rubino _et al._ (2022) G. Rubino, L. A. Rozema, F. Massa, M. Araújo, M. Zych, Č. Brukner, and P. Walther, Experimental entanglement of temporal order, Quantum 6, 621 (2022).
* Goswami _et al._ (2018) K. Goswami, C. Giarmatzi, M. Kewming, F. Costa, C. Branciard, J. Romero, and A. White, Indefinite causal order in a quantum switch, Physical Review Letters 121, 10.1103/physrevlett.121.090503 (2018).
* Wei _et al._ (2019) K. Wei, N. Tischler, S.-R. Zhao, Y.-H. Li, J. M. Arrazola, Y. Liu, W. Zhang, H. Li, L. You, Z. Wang, Y.-A. Chen, B. C. Sanders, Q. Zhang, G. J. Pryde, F. Xu, and J.-W. Pan, Experimental quantum switching for exponentially superior quantum communication complexity, Physical Review Letters 122, 10.1103/physrevlett.122.120504 (2019).
* Taddei _et al._ (2021) M. M. Taddei, J. Cariñe, D. Martínez, T. García, N. Guerrero, A. A. Abbott, M. Araújo, C. Branciard, E. S. Gómez, S. P. Walborn, L. Aolita, and G. Lima, Computational advantage from the quantum superposition of multiple temporal orders of photonic gates, PRX Quantum 2, 10.1103/prxquantum.2.010320 (2021).
* Giacomini and Brukner (2021) F. Giacomini and Č. Brukner, Einstein’s Equivalence principle for superpositions of gravitational fields (2021), arXiv:2012.13754 [quant-ph] .
* Giacomini and Brukner (2022) F. Giacomini and Č. Brukner, Quantum superposition of spacetimes obeys Einstein’s Equivalence Principle, AVS Quantum Sci. 4, 015601 (2022), arXiv:2109.01405 [quant-ph] .
* Rovelli (1991) C. Rovelli, What is observable in classical and quantum gravity?, Classical and Quantum Gravity 8, 297–316 (1991).
* Paunković and Vojinović (2020) N. Paunković and M. Vojinović, Causal orders, quantum circuits and spacetime: distinguishing between definite and superposed causal orders, Quantum 4, 275 (2020).
* Araújo _et al._ (2015) M. Araújo, C. Branciard, F. Costa, A. Feix, C. Giarmatzi, and Č. Brukner, Witnessing causal nonseparability, New Journal of Physics 17, 102001 (2015).
* Branciard _et al._ (2015) C. Branciard, M. Araújo, A. Feix, F. Costa, and Č. Brukner, The simplest causal inequalities and their violation, New Journal of Physics 18, 013008 (2015).
* Branciard (2016) C. Branciard, Witnesses of causal nonseparability: an introduction and a few case studies, Scientific Reports 6, 10.1038/srep26018 (2016).
* Bavaresco _et al._ (2019) J. Bavaresco, M. Araújo, Č. Brukner, and M. T. Quintino, Semi-device-independent certification of indefinite causal order, Quantum 3, 176 (2019).
* van der Lugt _et al._ (2022) T. van der Lugt, J. Barrett, and G. Chiribella, Device-independent certification of indefinite causal order in the quantum switch (2022).
* Guérin and Brukner (2018) P. A. Guérin and Č. Brukner, Observer-dependent locality of quantum events, New J. Phys. 20, 103031 (2018), arXiv:1805.12429 [quant-ph] .
* Castro-Ruiz _et al._ (2018) E. Castro-Ruiz, F. Giacomini, and Č. Brukner, Dynamics of quantum causal structures, Phys. Rev. X 8, 011047 (2018), arXiv:1710.03139 [quant-ph] .
* Vilasini and Renner (2022) V. Vilasini and R. Renner, Embedding cyclic causal structures in acyclic spacetimes: no-go results for process matrices (2022).
* Oreshkov (2019) O. Oreshkov, Time-delocalized quantum subsystems and operations: on the existence of processes with indefinite causal structure in quantum mechanics, Quantum 3, 206 (2019).
* Costa (2022) F. Costa, A no-go theorem for superpositions of causal orders, Quantum 6, 663 (2022), arXiv:2008.06205 [quant-ph] .
## Appendix A Example of an operational encoding of the causal order
Here, we provide an example of a concrete setup for operationally encoding the
causal order. Consider a superposition of causal orders as depicted in Fig. 5.
Figure 5: In order to encode the causal order operationally, we consider a
particle with an internal spin degree of freedom moving along $\gamma_{0}$,
and two agents in laboratories with worldlines $\gamma_{1}$ and $\gamma_{2}$.
These are placed in a superposition of causal orders between events
$\mathcal{E}_{1}$ and $\mathcal{E}_{2}$. The setup is chosen such that the
particle always enters the first laboratory at proper time $\tau_{1}^{\ast}$
while it crosses the second laboratory at $\tau_{2}^{\ast}$. Through a careful
choice of these proper times, we ensure that the agents can perform non-
disturbing measurements of the proper time of the particle whenever it enters
their laboratory and thus encode the causal order in a memory register.
Let us assume that the particle $0$ has an internal spin degree of freedom,
which precesses around the $z$-axis according to its proper time Zych _et
al._ (2011); Smith and Ahmadi (2020); Paczos _et al._ (2022); Dębski _et
al._ (2022). It is prepared in the initial state $|b_{0}\rangle$
in both branches. In addition, the systems are considered to be laboratories
here, with $\gamma_{1}$ denoting the lab of Agent 1 and $\gamma_{2}$ the one
of Agent 2. We want to ensure that the particle crosses the laboratories at
specific proper times. Its first crossing with a laboratory occurs at
$\tau^{*}_{1}$ while the second crossing happens at proper time
$\tau^{*}_{2}$. Thus, in branch $\mathcal{A}$,
$p_{\mathcal{E}_{1}}=\gamma_{0}(\tau^{*}_{1})$ and
$p_{\mathcal{E}_{2}}=\gamma_{0}(\tau^{*}_{2})$, while in branch $\mathcal{B}$,
$p_{\mathcal{E}_{2}}=\gamma_{0}(\tau^{*}_{1})$ and
$p_{\mathcal{E}_{1}}=\gamma_{0}(\tau^{*}_{2})$.
When the spin degree of freedom enters the first laboratory at proper time
$\tau^{*}_{1}$, it has evolved into some state $|b_{1}\rangle$. The crossings
of the worldlines are now tuned such that the spin is in the orthogonal state
$|b_{2}\rangle$ when it crosses the second laboratory at time $\tau^{*}_{2}$.
This ensures that, whenever the particle enters a laboratory, the
corresponding agent can measure in the basis
$\{|b_{1}\rangle,|b_{2}\rangle\}$ without disturbing the state and its time
evolution. Upon measurement, each agent encodes the result in a memory
register with associated Hilbert space $\mathcal{H}_{1/2}\cong\mathbb{C}^{2}$,
initialized in the state $|0\rangle_{1/2}$. Thus, the overall state evolves
from the initial state
$\displaystyle|\psi_{1}\rangle=(\alpha|0\rangle|g_{\mathcal{A}}\rangle+\beta|1\rangle|g_{\mathcal{B}}\rangle)|b_{0}\rangle|0\rangle_{1}|0\rangle_{2}$ (17)
to the state after the particle enters the first laboratory,
$\displaystyle|\psi_{2}\rangle=(\alpha|0\rangle|g_{\mathcal{A}}\rangle|\tau^{*}_{1}\rangle_{1}|0\rangle_{2}+\beta|1\rangle|g_{\mathcal{B}}\rangle|0\rangle_{1}|\tau^{*}_{1}\rangle_{2})|b_{1}\rangle,$ (18)
and finally, after it has entered the second laboratory, to
$\displaystyle|\psi_{3}\rangle=(\alpha|0\rangle|g_{\mathcal{A}}\rangle|\tau^{*}_{1}\rangle_{1}|\tau^{*}_{2}\rangle_{2}+\beta|1\rangle|g_{\mathcal{B}}\rangle|\tau^{*}_{2}\rangle_{1}|\tau^{*}_{1}\rangle_{2})|b_{2}\rangle.$ (19)
During post-processing, once the spin state has evolved to $|b_{f}\rangle$, a
referee combines the two memory states and unitarily transforms the overall
state to
$\displaystyle(\alpha|0\rangle|g_{\mathcal{A}}\rangle|\tau^{*}_{2}-\tau^{*}_{1}\rangle_{1}|\tau^{*}_{1}+\tau^{*}_{2}\rangle_{2}+\beta|1\rangle|g_{\mathcal{B}}\rangle|\tau^{*}_{1}-\tau^{*}_{2}\rangle_{1}|\tau^{*}_{2}+\tau^{*}_{1}\rangle_{2})|b_{f}\rangle.$ (20)
Now, the causal order $s$ can be encoded in a quantum state by defining
$|s=\pm 1\rangle:=|\pm(\tau^{*}_{2}-\tau^{*}_{1})\rangle_{1}$. Since the state
of the second register and the spin degree of freedom factorize out, we
recover a state of the form of Eq. (11). The same procedure can be applied in
the case of a definite causal order if either the paths of the particles and
the laboratories are well defined (not in a superposition) or if they are in a
superposition but always specify the same causal order between the events. It
is easy to see that in these cases the final state becomes
$(\alpha|0\rangle|g_{\mathcal{A}}\rangle+\beta|1\rangle|g_{\mathcal{B}}\rangle)|b_{f}\rangle|s=\pm 1\rangle$.
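The bookkeeping in Eqs. (17)–(20) can be checked numerically. The sketch below is our own illustration, not part of the paper's formalism: the finite label set and the amplitudes are assumptions made for the example. It encodes the register values as one-hot vectors, applies the referee's relabelling $(r_1,r_2)\mapsto(r_2-r_1,r_1+r_2)$ as a permutation, and verifies that the second register factorizes out, leaving the causal order in register 1.

```python
import numpy as np

# Illustrative check of Eqs. (17)-(20): register values are drawn from a
# finite label set and encoded as one-hot vectors (an assumption made for
# this sketch; the paper treats proper times as continuous labels).
labels = ["0", "t1", "t2", "t2-t1", "t1-t2", "t1+t2"]
n = len(labels)
idx = {l: i for i, l in enumerate(labels)}

def ket(label):
    v = np.zeros(n)
    v[idx[label]] = 1.0
    return v

def qubit(i):  # two-level control / geometry branch
    v = np.zeros(2)
    v[i] = 1.0
    return v

alpha, beta = 1 / np.sqrt(3), np.sqrt(2 / 3)  # arbitrary amplitudes

# |psi_3> of Eq. (19): branch (x) register 1 (x) register 2, with the
# factored-out spin state |b_2> omitted.
psi3 = alpha * np.kron(qubit(0), np.kron(ket("t1"), ket("t2"))) \
     + beta * np.kron(qubit(1), np.kron(ket("t2"), ket("t1")))

# Referee's relabelling (r1, r2) -> (r2 - r1, r1 + r2), implemented as a
# permutation on the register pair (extended by swaps to stay unitary).
P = np.eye(n * n)
for (a, b), (c, d) in {("t1", "t2"): ("t2-t1", "t1+t2"),
                       ("t2", "t1"): ("t1-t2", "t1+t2")}.items():
    i, j = idx[a] * n + idx[b], idx[c] * n + idx[d]
    P[[i, j]] = P[[j, i]]
psi_out = np.kron(np.eye(2), P) @ psi3

# Schmidt rank across the (branch + register 1 | register 2) cut: register 2
# holds tau1* + tau2* in both branches, so it factorizes out as in Eq. (20).
M = psi_out.reshape(2 * n, n)
sv = np.linalg.svd(M, compute_uv=False)
schmidt_rank = int(np.sum(sv > 1e-12))  # 1 means register 2 is unentangled
```

The surviving correlation is between the branch and the first register, whose content $\pm(\tau^{*}_{2}-\tau^{*}_{1})$ is exactly the causal-order label $s=\pm 1$.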
Going beyond the protocol given above, in the case of indefinite causal order,
Agents 1 and 2 may apply more general operations than the ones described here
and thereby implement (definite or) indefinite causal order in the sense of
directions of signalling: in the latter case, a superposition of signalling
from Agent 1 to Agent 2 to the referee and from Agent 2 to Agent 1 to the
referee. This corresponds to the quantum information notion of indefinite
causal order Oreshkov _et al._ (2012).
Disturbed, diffuse, or just missing?
A global study of the HI content of Hickson compact groups
M. G. Jones1 L. Verdes-Montenegro1 J. Moldon1 A. Damas Segovia1,3 S. Borthakur4 S. Luna1,5 M. Yun6 A. del Olmo1 J. Perea1 J. Cannon7 D. Lopez Gutierrez7,8 M. Cluver9,10 J. Garrido1 S. Sanchez1
Jones et al.
The HI content of Hickson Compact Groups
Instituto de Astrofísica de Andalucía (CSIC), Glorieta de la Astronomía, 18008 Granada, Spain
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Rm. N204, Tucson, AZ 85721-0065, USA
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
School of Earth and Space Exploration, Arizona State University, 781 Terrace Mall, Tempe, AZ 85287, USA
EGI Foundation, Science Park 140, 1098 XG Amsterdam, Netherlands
Astronomy Department, University of Massachusetts, Amherst, MA 01003, USA
Department of Physics & Astronomy, Macalester College, 1600 Grand Avenue, Saint Paul, MN 55105, USA
Physics Department, Harvard University, 17 Oxford Street, Cambridge, MA 02138, USA
Centre for Astrophysics and Supercomputing, Swinburne University of Technology, John Street, Hawthorn 3122, Victoria, Australia
Department of Physics and Astronomy, University of the Western Cape, Robert Sobukwe Road, Bellville, South Africa
Hickson compact groups (HCGs) are dense configurations of four to ten galaxies, whose morphology appears to follow an evolutionary sequence of three phases, with gas initially confined to galaxies, then significant amounts spread throughout the intra-group medium, and finally with almost no gas remaining in the galaxies themselves. It has also been suggested that several groups may harbour a diffuse HI component that is resolved out by interferometric observations.
The HI deficiency of HCGs is expected to increase as the morphological phase progresses along the evolutionary sequence. If this is the case, HI deficiency would be a rough proxy for the age and evolutionary state of a HCG. We aim to test this hypothesis for the first time using a large sample of HCGs and to investigate the evidence for diffuse HI in HCGs.
We performed a uniform reduction of all publicly available VLA HI observations of HCGs (38 groups) with a purpose-built pipeline that also maximises the reproducibility of this study. The resulting data cubes were then analysed with the latest software tools to perform a manual separation of HI emission features into those belonging to galaxies and those extending into the intra-group medium. We thereby classified the morphological phase of each group as well as quantified their HI deficiency compared to galaxies in isolation.
We find little evidence that HI deficiency can be used as a proxy for the evolutionary phase of a compact group in either of the first two phases, with the distribution of deficiency being consistent in both. However, for the final phase, the distribution clearly shifts to high deficiencies, with more than 90% of the expected HI content typically missing. Across all HCGs studied, we identify a few cases where there is strong evidence for a diffuse gas component in the intra-group medium, which might be detectable with improved observations. We also classify a new sub-phase where groups contain a lone HI-bearing galaxy, but are otherwise devoid of gas.
The new morphological phase we have identified is likely the result of an evolved, gas-poor group acquiring a new, gas-rich member. The large spread of deficiencies in the first two morphological phases suggests that there is a broad range of initial HI content in HCGs, which is perhaps influenced by the large-scale environment, and that the timescale for morphological changes is, in general, considerably shorter than the timescale for the destruction or consumption of neutral gas in these systems.
§ INTRODUCTION
<cit.> catalogued 100 groups of four to ten members packed in extremely dense configurations, but located in relatively low density environments on larger scales, which are commonly referred to as Hickson compact groups (HCGs). The high number density of HCGs is coupled with their relatively low velocity dispersions <cit.>. Together, these properties make them ideal sites for strong, frequent galaxy–galaxy interactions. These interactions are visible through a myriad of neutral gas (HI) tails, bridges, and clumps <cit.>, as well as line emission from shocks <cit.>, apparent rapid morphological transformations <cit.>, and in some cases the presence of a diffuse hot intra-group medium <cit.>.
Early single dish studies of HCGs revealed them to be deficient in HI <cit.>, while the first interferometric maps began to reveal the complex morphologies of HCGs in neutral gas <cit.>. These findings led <cit.> to pose the question: where is the neutral gas in HCGs? This seminal study of 72 HCGs observed with single dish telescopes confirmed and expanded on the initial findings that HCGs were HI-deficient. A subset (16 HCGs) of the sample was followed up with Very Large Array (VLA) observations, allowing the spatial distribution and kinematics of the gas to be studied in detail.
Based on this imaging of 16 groups, <cit.> proposed an evolutionary sequence for HCGs, with their phase assigned based on their HI morphology. In Phase 1 of the sequence, HI emission is predominantly confined to the discs of the galaxies in the group. In Phase 2, significant tidal features become evident and a significant fraction of the observed HI is seen outside of galaxies. Finally, in Phase 3 almost all the gas has been removed from the galaxies or is simply undetected.[The sequence of <cit.> originally included a Phase 3b for galaxies evolving within a common HI cloud. However, there are no longer any convincing examples of this proposed phenomenon <cit.>, with the possible exception of HCG 49, and we do not consider this a valid separate phase.]
If this morphological sequence tracks the evolution of a HCG, then it would be expected that the HI deficiency of the group would also generally increase with increasing morphological phase, as gas (both atomic and molecular) is consumed or ionised and galaxies transform to earlier morphological types <cit.>. However, the validity of this proxy has never been verified for a larger sample of HCGs beyond the original sample used to formulate the proposed sequence. We note that although HI content is merely one aspect of these extremely complex systems, a viable proxy of evolutionary state based solely on total HI content, even if only approximate, would be a valuable tool for categorising compact groups identified in future surveys, regardless of whether or not they are resolved in HI.
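For concreteness, the standard HI deficiency bookkeeping can be sketched in a few lines. The function names below are ours, not the pipeline's; the observed mass uses the common relation $M_\mathrm{HI} = 2.356\times10^{5}\, D^{2}\, S_\mathrm{int}$ (in solar masses, with $D$ in Mpc and $S_\mathrm{int}$ in Jy km s$^{-1}$), while the expected mass from an isolated-galaxy scaling relation is left to the caller, since its coefficients are not quoted here.

```python
import numpy as np

# Sketch of HI deficiency bookkeeping (our own function names, not the
# paper's pipeline). Def_HI > 0 means the system has less HI than an
# equivalent set of isolated galaxies would be expected to hold.

def hi_mass(distance_mpc, flux_jy_kms):
    """Observed HI mass in solar masses from integrated line flux."""
    return 2.356e5 * distance_mpc**2 * flux_jy_kms

def hi_deficiency(m_expected, m_observed):
    """Def_HI = log10(M_expected) - log10(M_observed), in dex."""
    return np.log10(m_expected) - np.log10(m_observed)

# Example: a group at 50 Mpc with 1.0 Jy km/s of integrated HI flux,
# expected (hypothetically) to hold 10^9.5 M_sun of HI if isolated.
m_obs = hi_mass(50.0, 1.0)            # 5.89e8 M_sun
defhi = hi_deficiency(10**9.5, m_obs)  # ~0.73 dex
```

On this scale, a deficiency of 1 dex corresponds to 90% of the expected HI being missing, the level the final-phase groups typically reach.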
<cit.> expanded on the work of <cit.> by following up 22 HCGs with deep HI observations with the Green Bank Telescope (GBT). These observations recovered more flux than the previous VLA observations, somewhat reducing the estimated HI deficiency of certain groups. However, more important was the distribution of this additional gas, which in some cases suggested the existence of a diffuse component, spread over up to 1000 and not recovered by the interferometer. <cit.> performed GBT mapping for four HCGs with the richest IGrM HI content, revealing that the additional gas mostly traces the higher column density features mapped with the VLA, suggesting that its origin is also largely tidal.
In this work we return to the evolutionary sequence proposed by <cit.> in order to test the long-standing assumption that the evolutionary phase of HCGs is a proxy for their HI deficiency, and vice versa. To do this we have compiled all publicly available VLA HI observations of HCGs, 38 groups in total. These were reduced in as uniform a manner as possible, allowing revised HI deficiency measurements of 38 groups, and the morphological classification of 32. With this dataset we address the question of where the gas in HCGs has gone and discuss the pathways along which these groups likely evolve.
In the following section we outline how these data were compiled and reduced. In Section <ref> we discuss how HI emission features were manually separated in each group and present moment maps and spectra of all groups. In Section <ref> we present our results and comparisons to previous works. Finally, in Section <ref> we discuss our revised morphological classification scheme, before presenting our conclusions in Section <ref>. In addition to the scientific analysis of these data, the entire process, from acquisition of the raw data through their reduction and analysis, has been performed with the utmost attention paid to reproducibility and open science. Our efforts in this direction are discussed in detail in Appendix <ref>.
§ DATA COMPILATION AND REDUCTION
HCG sample
HCG $N_\mathrm{mem}$ RA Dec $cz_\odot$ Dist. VLA $\Delta v_\mathrm{chan}$ $\sigma_\mathrm{rms}$ Beam Size $N_\mathrm{HI}\; (4\sigma)$
$\mathrm{deg}$ $\mathrm{deg}$ $\mathrm{km\,s^{-1}}$ $\mathrm{Mpc}$ Config. $\mathrm{km\,s^{-1}}$ $\mathrm{mJy\,beam^{-1}}$ arcsec $\mathrm{cm^{-2}}$
2 3 7.87517 8.43126 4326 50 D 10.3 0.74 $69.3 \times 51.4$ $1.9 \times 10^{19}$
7 4 9.84960 0.87818 4224 50 C,D 20.6 0.3 $35.0 \times 28.0$ $4.0 \times 10^{19}$
10 4 21.53076 34.69095 4761 50 DnC 20.6 0.39 $61.6 \times 49.7$ $1.7 \times 10^{19}$
15 6 31.91244 2.13827 7042 96 DnC 20.6 0.48 $60.7 \times 47.5$ $2.2 \times 10^{19}$
16 5 32.38881 -10.16298 3977 49 C,D 20.6 0.41 $38.9 \times 31.7$ $4.3 \times 10^{19}$
19 3 40.68807 -12.41181 4253 53 C,D 10.3 0.47 $41.0 \times 27.0$ $3.9 \times 10^{19}$
22 4 45.88027 -15.67570 2626 37 CnB,D 20.6 0.56 $50.9 \times 36.8$ $3.9 \times 10^{19}$
23 5 46.77701 -9.58547 4921 65 C 20.6 0.54 $25.3 \times 20.0$ $13.9 \times 10^{19}$
25 4 50.18220 -1.05192 6343 82 DnC 20.6 0.5 $65.4 \times 56.6$ $1.7 \times 10^{19}$
26 7 50.47584 -13.64586 9618 130 C 10.3 0.63 $25.5 \times 16.8$ $13.5 \times 10^{19}$
30 4 69.11918 -2.83239 4645 61 DnC 20.6 0.51 $57.9 \times 44.8$ $2.6 \times 10^{19}$
31 5 75.40372 -4.25671 4068 53 CnB 10.3 0.65 $14.6 \times 12.1$ $33.6 \times 10^{19}$
33 4 77.69969 18.03465 7795 107 C 20.6 0.82 $17.5 \times 15.4$ $39.4 \times 10^{19}$
37 5 138.39859 30.01417 6741 97 DnC 20.6 0.74 $49.5 \times 47.4$ $4.1 \times 10^{19}$
38 3 141.91194 12.28081 8652 123 D 4.9 1.24 $85.6 \times 57.6$ $1.6 \times 10^{19}$
40 5 144.72717 -4.85196 6628 99 DnC 20.6 0.33 $58.8 \times 44.8$ $1.6 \times 10^{19}$
47 4 156.45175 13.73157 9508 136 C,D 4.9 0.49 $31.0 \times 20.0$ $5.0 \times 10^{19}$
48 2 159.44020 -27.08746 2352 34 DnC 20.6 0.78 $54.3 \times 37.4$ $5.0 \times 10^{19}$
49 5 164.15204 67.17920 9939 139 DnC 10.3 0.4 $47.4 \times 37.1$ $2.1 \times 10^{19}$
54 5$^\ast$ 172.31364 20.57852 1412 27 C 10.3 0.46 $20.8 \times 16.9$ $11.9 \times 10^{19}$
56 5 173.13297 52.94860 8110 116 C,D 4.9 0.49 $23.5 \times 19.6$ $6.8 \times 10^{19}$
57 8 174.46054 21.98504 9032 135 C 4.9 0.62 $23.7 \times 18.9$ $8.8 \times 10^{19}$
58 5 175.54908 10.31703 6138 85 DnC,D 20.6 0.25 $65.8 \times 57.0$ $0.9 \times 10^{19}$
59 4 177.10671 12.72623 4008 60 C,D 4.9 0.63 $24.0 \times 16.0$ $10.4 \times 10^{19}$
61 3 183.09972 29.17781 3956 60 C 4.9 0.35 $26.2 \times 18.8$ $4.6 \times 10^{19}$
62 4 193.28399 -9.22410 4239 60 C 4.9 0.62 $24.8 \times 14.9$ $10.8 \times 10^{19}$
68 5 208.42036 40.32794 2401 35 D 10.3 0.68 $58.3 \times 54.8$ $2.0 \times 10^{19}$
71 5 212.76906 25.48492 9199 131 C,D 4.9 0.44 $26.0 \times 20.0$ $5.4 \times 10^{19}$
79 4 239.80365 20.75163 4369 59 C 10.3 0.5 $20.8 \times 17.4$ $12.5 \times 10^{19}$
88 4 313.09503 -5.75790 6032 74 C 20.6 0.3 $22.6 \times 17.2$ $10.1 \times 10^{19}$
90 4 330.52343 -31.96680 2635 33 DnC 41.2 0.55 $49.1 \times 39.9$ $5.1 \times 10^{19}$
91 4 332.30172 -27.77593 7195 92 DnC 20.6 0.66 $51.3 \times 47.0$ $3.5 \times 10^{19}$
92 4 339.00215 33.96577 6614 87 B,C,D 20.6 0.06 $15.0 \times 15.0$ $3.7 \times 10^{19}$
93 4 348.85099 18.98311 5136 64 DnC 20.6 0.4 $59.7 \times 53.3$ $1.6 \times 10^{19}$
95 4 349.88240 9.49185 11615 153 CnB 20.6 0.32 $21.4 \times 19.3$ $10.1 \times 10^{19}$
96 4 351.99291 8.77408 8725 116 C 20.6 0.21 $26.9 \times 18.3$ $5.4 \times 10^{19}$
97 5 356.86224 -2.30542 6579 85 DnC 20.6 0.44 $63.2 \times 49.5$ $1.8 \times 10^{19}$
100 5 0.33654 13.13256 5461 67 DnC 20.6 0.47 $58.7 \times 50.5$ $2.1 \times 10^{19}$
Columns: (1) HCG ID number, (2) number of group members considered in this work, (3 & 4) group coordinates (J2000), (5) heliocentric radial velocity (re-calculated as the mean velocity of the group members), (6) group distance calculated via the Cosmicflows-3 model <cit.>, which have uncertainties of $\sim$3 Mpc, (7) VLA configurations the target group was observed with, (8) channel width, (9) rms noise, (10) synthesised beam size, (11) 4$\sigma$ column density sensitivity (over 20 ).
$^\ast$ The five `members' of HCG 54 are now thought to be clumps all associated with a single merging system.
The interferometric data used in this work were compiled using the (J)VLA[We note that in this work we endeavour to use `VLA' to refer to historical data from the Very Large Array and `JVLA' for data from the upgraded Karl G. Jansky Very Large Array.] data archive and were originally observed as part of the projects AB651, AG645, AK580, AM559, AR251, AV206, AV221, AV227, AV230, AV275, AV285, AW234, AW272, AW351, AW500, AW568, AW601, AY155, AY160, AY86, AZ125, MYUN, and 13A-387. Together these projects observed 38 of the original 100 HCGs (HCG 2, 7, 10, 15, 16, 19, 22, 23, 25, 26, 30, 31, 33, 37, 38, 40, 47, 48, 49, 54, 56, 57, 58, 59, 61, 62, 68, 71, 79, 88, 90, 91, 92, 93, 95, 96, 97, 100) from <cit.> in the H I line. The full list of target groups and a summary of their VLA observations are shown in Table <ref>.
The key goals of our methodology here are to create a data reduction process that is as uniform as possible across all these heterogeneous data sets and to make it as reproducible as possible (see Appendix <ref>). To this end we developed a pipeline for the reduction of historical VLA data (which can also be used for JVLA data), which makes up the majority of the data for HCGs. In the following paragraphs we provide an overview of this pipeline. The code can be found at [<https://github.com/AMIGA-IAA/hcg_hi_pipeline>].
§.§ Pipeline and reduction overview
The majority of the VLA data were observed with channel resolutions of approximately 49 or 98 kHz, corresponding to velocity resolutions of about 10 and 20 km s$^{-1}$, respectively. The JVLA project (13A-387) used a channel resolution of 7.8 kHz ($\sim$1.6 km s$^{-1}$); however, we chose to average these data over three channels to expedite the reduction process and to make them more directly equivalent to the VLA data, without degrading their velocity resolution beyond $\sim$5 km s$^{-1}$.
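The quoted velocity resolutions follow from the standard Doppler relation $\Delta v = c\,\Delta\nu/\nu_0$ at the 21 cm rest frequency. The short check below reproduces the approximate figures in the text; the exact VLA channel widths of 48.828125 and 97.65625 kHz (rounded to 49 and 98 kHz above) are assumed here.

```python
# Velocity resolution implied by a channel width at the 21 cm HI line.
C_KMS = 299792.458        # speed of light in km/s
F_HI_HZ = 1.42040575e9    # HI rest frequency in Hz

def channel_width_kms(delta_nu_hz):
    """Convert a frequency channel width to km/s at the HI rest frequency."""
    return C_KMS * delta_nu_hz / F_HI_HZ

print(round(channel_width_kms(48.828125e3), 1))   # ~10.3 km/s (49 kHz channels)
print(round(channel_width_kms(97.65625e3), 1))    # ~20.6 km/s (98 kHz channels)
print(round(3 * channel_width_kms(7.8125e3), 1))  # ~4.9 km/s (JVLA, averaged over 3 channels)
```

These values match the channel widths listed for the individual groups (4.9, 10.3, and 20.6).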
Our data reduction pipeline is based on <cit.> and is designed specifically to reduce spectral line data, particularly for historical VLA data sets. It provides a consistent and uniform data calibration, but allows manually tuned parameters to be adapted when needed. These are input as a parameters file with separate parameters for each section of the reduction process. The pipeline is designed so that each step can be run individually or the full pipeline run end-to-end (for a given set of observations). There is also an interactive mode where the pipeline will pause and query the user for any missing parameters rather than failing.
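As an illustration of the parameters-file structure described above, a hypothetical per-observation file might look as follows. Every section and key name here is invented for illustration and does not reflect the pipeline's actual schema.

```ini
; Hypothetical sketch of a per-observation parameters file, split by
; reduction stage. All section and key names are illustrative only.
[flagging]
shadow_tolerance = 5.0   ; metres of antenna shadowing to allow
quack_interval   = 5.0   ; seconds removed from the start of each scan

[calibration]
flux_calibrator  = FLUX_CAL_NAME   ; placeholder field names
phase_calibrator = PHASE_CAL_NAME

[continuum]
line_free_channels = 5~20;90~120
fit_order          = 1

[imaging]
robust    = 2.0
threshold = 2.5sigma
```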
The pipeline has been encoded using the <cit.> workflow management system, which allows the dependencies between the pipeline steps to be defined using decorators. If previous steps or parameters are altered then the pipeline will know that it must re-run these steps before subsequent steps can be executed. The pipeline also performs automatic logging, recording every command that executes, flagging errors automatically and halting the workflow when required. Subsequent steps cannot be executed without the successful completion of the prerequisite steps in the pipeline.
The reduction pipeline begins by reading in the data and applying any manual flags that the user has defined in a separate file. It then proceeds with standard flagging procedures, which were typically to remove antennae with more than 5.0 m of shadowing, and to remove the first 5 s of each scan (although in general these values can be modified in the pipeline parameters file). The algorithm is then typically run on all target fields, both science targets and calibrators. Again this can be, and was in a few cases, disabled in the parameters file. The pipeline then queries the user to identify the purpose of each target field (i.e. the science target, and the flux, bandpass and phase calibrators), which are not stored in the metadata of historical VLA observations. The pipeline can then proceed with standard calibration steps to calibrate the gains, phases, and absolute flux scale.
After the first round of calibration the algorithm is run to remove any remaining radio frequency interference (RFI). Although caution is advised when using this algorithm on spectral line data, typically for H I observations the line is sufficiently weak that it is not apparent in the visibility data, so this is not a concern. However, this step was disabled for all the JVLA data, as the RFI required careful manual removal in many of these observations, as well as for a selection of the VLA data. If the algorithm is run then the pipeline repeats a second round of calibration; if not, it proceeds to split the data and perform continuum subtraction. When the science targets are split from the rest of the data the pipeline automatically identifies overlapping spectral windows and merges them together (this option can be disabled).
Before proceeding with the continuum subtraction the pipeline queries the user to define a line-free range of channels. Single dish spectra in the literature, in particular <cit.>, were used whenever available to identify genuinely emission-free channels for each group. In some cases this required a 0th order fit to the continuum because line-free channels only existed on one side of the band, as the bandwidths of the historical VLA data were narrow. However, for the JVLA data the bandwidth was sufficiently broad that an identical range of channels could be used in most cases without the risk of impinging on the groups' line emission. Once the line-free channel range was defined for each group the algorithm was used to subtract the continuum. In most cases a 1st order fit was used; however, in several of the VLA data sets only a 0th order fit could be made, as very few of the channels are free from line emission, or the observation is made up of two separate spectral windows that only overlap where there is line emission. Using a 0th order fit often leads to minor continuum artefacts in the final cubes, which then must be manually excluded when generating the source masks and moment maps.
The final step of the pipeline is to image the data and generate the final cubes. The pipeline images the data from each project and target separately using the standard imaging task. In most cases automatic masking and multi-scale clean were used. However, in a few cases the sidelobes of the synthesised beam were not sufficiently suppressed for multi-scale clean to be used, while in a few other cases using a simple primary beam mask gave better results than automated masking. Typically the data were cleaned down to a threshold of 2.5$\sigma$ within the mask. In most cases the Briggs robust parameter was set to 2.0 in order to maximise the recovery of extended emission, at the expense of some angular resolution. This choice of weighting is likely non-optimal for many other science cases, but the robust parameter can be changed in the pipeline parameters files and the reduction repeated with a single command. In cases where a single target was observed in multiple projects a secondary script was used to combine the observations before following the same approach for the final imaging. The pipeline also generates moment zero maps using a simple threshold. However, these are only intended for quality control purposes and are not used in our analysis. There is also a “cleanup” function which can automatically remove unwanted files after execution in order to save disk space.
All the parameters files to re-execute or modify the steps of the pipeline on another machine, starting with the raw data from the VLA archive, are publicly available through [<https://doi.org/10.5281/zenodo.6366659>]. In addition, the final image cubes and moment zero maps are stored there.
§.§ Source masking
All source masking was performed using the package <cit.>. All parameters files for each group are provided in the repository associated with this paper, which can be referred to for the exact details of the masking of any individual group. However, in general we used the smooth and clip algorithm within SoFiA with Gaussian smoothing kernels of approximately 0.5, 1.0, and 2.0 times the beam diameter. This was combined with boxcar smoothing in the spectral direction, but the number of channels smoothed over depended on the original resolution of the data. The historical VLA data with a resolution of $\sim$10 km s$^{-1}$ were smoothed over zero, two, and three channels, whereas the data with $\sim$20 km s$^{-1}$ resolution were only smoothed over zero and two channels. The newer, much higher spectral resolution JVLA data were smoothed over zero, four, and six channels (roughly corresponding to 5.0, 20, and 30 km s$^{-1}$, after the initial three-channel averaging already performed). When constructing the masks typically a signal-to-noise ratio (S/N) threshold of 4.0 was used and a source reliability of over 95% was enforced. Sources within half a beam width and within two or three channels (six channels for the JVLA data) were merged and then the masks dilated to recover as much extended flux as possible. In some cases it was necessary to manually exclude certain regions of the cube to avoid residual continuum artefacts being falsely identified as sources.
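The smooth-and-clip principle can be illustrated in one dimension: smooth the data at several kernel widths and flag any pixel exceeding the (width-dependent) threshold in any smoothed version, so that broad faint emission is recovered while isolated noise spikes are rejected. This toy sketch ignores the 3D kernels, reliability filtering, and mask dilation of the real source finder.

```python
# Toy 1D illustration of smooth-and-clip source masking (schematic only;
# SoFiA operates on 3D cubes with Gaussian spatial kernels).
def boxcar(data, width):
    half = width // 2
    return [
        sum(data[max(0, i - half):i + half + 1])
        / len(data[max(0, i - half):i + half + 1])
        for i in range(len(data))
    ]

def smooth_and_clip(data, noise, threshold=4.0, widths=(1, 3, 5)):
    mask = [False] * len(data)
    for w in widths:
        sm = boxcar(data, w)
        # noise drops roughly as sqrt(w) after boxcar smoothing
        cut = threshold * noise / w**0.5
        mask = [m or abs(v) > cut for m, v in zip(mask, sm)]
    return mask

# A broad faint source (channels 4-9) plus one noisy spike (channel 0):
data = [3.5, 0, 0, 0, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 0, 0]
mask = smooth_and_clip(data, noise=1.0)
```

In this example the spike never exceeds the unsmoothed 4$\sigma$ cut and is diluted by smoothing, while the broad source passes the relaxed thresholds at larger kernel widths.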
The rationale behind this approach to source masking was not to separate out different features at this stage; indeed, it is preferable to blend them into single SoFiA sources to recover as much extended flux as possible. The separation of features, described in Section <ref>, was performed after the mask generation using <cit.>.
§.§ Distance estimates and uncertainties
Distances to all groups were calculated from their radial velocities and the flow model[<http://edd.ifa.hawaii.edu/CF3calculator/>] of the Cosmicflows-3 project <cit.>. Although <cit.> do not provide uncertainty estimates as part of the distance calculator, an approximate estimate can be made by assuming a typical peculiar velocity (not associated with large scale motions) of around 200 km s$^{-1}$. For a Hubble constant of 70 $\mathrm{km\,s^{-1}\,Mpc^{-1}}$, this equates to a minimum distance uncertainty of $\sim$3 Mpc. The distance estimates given in Table <ref> should be considered accurate to approximately this level.
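The quoted uncertainty floor is simple arithmetic on the stated assumptions:

```python
# Peculiar-velocity floor on flow-model distances: a typical uncorrelated
# peculiar velocity divided by H0 gives the minimum distance uncertainty.
V_PEC = 200.0   # km/s, assumed typical peculiar velocity
H0 = 70.0       # km/s/Mpc

sigma_d = V_PEC / H0
print(round(sigma_d, 1))  # ~2.9 Mpc, i.e. the ~3 Mpc quoted in the text
```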
Generally the velocity of each group member was taken from <cit.> and the median velocity of the groups re-calculated after new members were added and false members removed. However, upon inspection of the H I emission in the HCGs it became apparent that the published redshifts in <cit.> were erroneous for a number of galaxies. In these cases we adopted the redshift value given in the NASA/IPAC Extragalactic Database (NED). The galaxies that we identified with erroneous redshifts were: HCGs 19c, 22c, 48a, 48b, 57h, 71c, 91a, 95d, and 100d. After updating these redshifts we re-calculated the median redshift of each group and updated our distance estimate accordingly.
§ SEPARATION OF FEATURES
For each group with sources detected in the VLA observations we attempted to separate features into three broad classes: those associated with and likely bound to a group member galaxy, extended features likely not bound to any galaxy, and finally galaxies and features outside the core group. The separation of features associated with galaxies from extended features is usually not a straightforward process in HCGs, and consideration of the full 3-dimensional spatial and kinematic information is often required. For example, HCG 92 contains a large tidal feature that overlaps (in projection) with multiple member galaxies, but is in fact not associated with any of them <cit.>.
Each cube was displayed with the 2D and 3D visualisation tools and the source mask was manually segmented into separate features. In cases with well resolved galaxies (spatially and in velocity) detected at high S/N, this process is quite robust as regions of emission that strongly deviate from the kinematic structure of each galaxy can be readily identified and excised. However, in many cases sources are marginally resolved, blended, and/or detected at low S/N. Wherever possible we attempt to perform this separation, but we note that in these cases the process can become quite subjective. However, as the purpose of this separation is to be able to broadly classify HCGs based on their morphology, a certain level of uncertainty and subjectivity can be tolerated.
There is also the issue that some HCGs originally had false members, which have since been excluded (making some groups triplets). Triplets are included in our analysis, but are distinguished from $N \geq 4$ groups wherever possible. Other groups have new members which were of too low surface brightness (LSB) to be included in the original catalogue or were too separated from the other members (with no apparent connection in optical images), but are clearly connected in H I. The goal of this work is not to re-define or refine the HCG catalogue, hence we take the following simple approach: potential new members which are separated from the core (original) groups by more than $\sim$100 kpc will not be included as new members, unless they have a definite connection in H I, indicating that they have already begun interacting with the core group. We note that in some cases such galaxies may be within (or entering for the first time) the DM halo of the CG; however, in this work we are seeking to characterise the morphology, H I content, and evolution of the core group members and therefore these peripheral members are tangential to our goals unless they have already begun interactions with the core group. Within the core of each group, galaxy–galaxy projected separations (for nearest neighbours) are almost always less than 100 kpc. Hence this represents a reasonable cutoff separation for considering new members.
A galaxy must contain H I for it to be detected in our observations, which clearly leads to some form of bias, as gas-poor galaxies in a similar configuration may not be considered members where gas-rich ones would be. In the absence of a uniform, deep, and highly complete redshift survey covering all HCGs, the introduction of such biases on some level is unavoidable. We have thus favoured clarity and simplicity over attempting to redefine these compact groups, for which frequently there would be insufficient data.
In the following subsections we show a contour map of emission (moment 0) overlaid on an optical image (from DECaLS, SDSS, or POSS) for each group. We favour DECaLS <cit.> images wherever they exist, as these are the deepest of the three, followed by SDSS <cit.> and POSS <cit.>. We use the $r$-band (R-band for POSS) images obtained directly from the public image cutout services of each survey. In cases where we were able to separate features, the group member galaxies are shown in blue contours, extended features are shown in green, and non-members are shown in red. For cases where it was not possible to separate features, the moment zero map is shown with multi-coloured contours, starting at 4$\sigma$ (black) and each subsequent contour being twice the previous one. Dashed black contours indicate -4$\sigma$.
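The contour scheme just described (first level at 4$\sigma$, each subsequent level doubled) corresponds to:

```python
# Contour levels as described in the text: start at 4*sigma and double
# each step (a short illustrative helper, not the paper's plotting code).
def contour_levels(sigma, base=4.0, n=6):
    return [base * sigma * 2**k for k in range(n)]

levels = contour_levels(sigma=1.0)
print(levels)  # [4.0, 8.0, 16.0, 32.0, 64.0, 128.0]
```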
In addition, a spectrum is shown for each group. This spectrum includes all emission within the source mask, including non-members and features. Note that in some cases this includes emission well beyond the region shown in the optical images of each group. Where it exists a GBT spectrum <cit.> is overlaid on the VLA spectrum, along with a weighted version of the VLA spectrum that accounts for the response of the GBT beam (and any difference in the pointing centres). We model the GBT beam as a circular 2D Gaussian with an HPBW of 9.1. The primary beam-corrected VLA data are weighted (in the image plane) with a Gaussian with a width such that its convolution with the VLA synthesised beam (assumed to be circular, with a width equal to the average of the major and minor axes) gives a Gaussian of HPBW 9.1. This thereby approximately accounts for the beam response of the GBT observations, which, in cases where there is emission near the edge of (or beyond) the GBT primary beam, can be essential for a fair comparison. Finally, the moment one (velocity field) contour maps overlaid on the optical images of each group (with detected emission) are shown in Appendix <ref>.
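Because the convolution of two Gaussians is a Gaussian whose HPBW is the quadrature sum of the inputs, the width of the image-plane weighting kernel follows directly. Units here are arbitrary but must be consistent, and the example beam value is hypothetical.

```python
# HPBW of the weighting Gaussian: convolving two Gaussians adds their
# HPBWs in quadrature, so the kernel must satisfy
#   w**2 + b_vla**2 == hpbw_gbt**2.
def weighting_hpbw(hpbw_gbt, bmaj, bmin):
    b_vla = 0.5 * (bmaj + bmin)  # circularised synthesised beam
    return (hpbw_gbt**2 - b_vla**2) ** 0.5

# e.g. a (hypothetical) 0.5-unit circular VLA beam barely changes the kernel:
w = weighting_hpbw(9.1, 0.5, 0.5)
print(round(w, 3))  # ~9.086
```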
§.§ HCG 2
Integrated H I emission (moment 0) contours overlaid on a DECaLS $r$-band image. HCG members are shown with blue contours, extended features in green, and non-member galaxies in red (none in this panel). The VLA synthesised beam is shown in the bottom left as a solid black ellipse and a scale bar indicating 20 kpc is in the lower right. The contours start at 4$\sigma$ (over 20 km s$^{-1}$) and each subsequent contour is double the one before it. The minimum contour level is listed for each group in Table <ref>.
VLA spectrum of all detected emission (solid black line) within the primary beam. The light grey shading around the black line indicates the uncertainty based on the rms noise and the number of pixels included in the source mask for a given channel. The vertical dotted lines indicate the velocities of group member galaxies. Regions of the spectrum where all values are exactly zero indicate spectral ranges where there are no pixels in the source mask.
HCG 2 is a triplet (HCG 2d is a background object) of late-type galaxies at approximately 4300 km s$^{-1}$ (Figure <ref>). The global moment map and velocity field of the group clearly show that HCG 2a and c are strongly detected. HCG 2c shows minimal signs of disturbance other than the misalignment of its iso-velocity contours with its minor axis, suggesting a probable warp. HCG 2a, on the other hand, is blended with emission that appears to be from HCG 2b, as it is co-spatial with the optical source and has a separate velocity structure (though scarcely larger than a single beam) that occurs at the expected redshift. We therefore attribute this emission to HCG 2b as best as possible. There is a small, faint extension to the north of HCG 2a which we designate as an extended feature as it does not conform to the velocity structure of either HCG 2a or b. Finally, it should also be noted that one central channel of the HCG 2 cube was almost entirely flagged, which will slightly influence the flux measurements of all sources. This channel is clearly visible in the integrated spectrum (Figure <ref>).
§.§ HCG 7
As in Figure <ref>. The orange dashed circle indicates the primary beam of the GBT observation of this group.
As in Figure <ref>. The green (high velocity resolution) spectrum is from the GBT observation of this group <cit.>, and the orange spectrum is the VLA spectrum weighted to match the GBT primary beam response.
HCG 7 is a compact configuration of four (three late-type and one lenticular) galaxies at approximately 4200 km s$^{-1}$. The moment maps indicate that HCG 7a, c, and d are all strongly detected (Figure <ref>).
HCG 7b, the lenticular, does not appear to be detected.
HCG 7c is a little separated from the other three group members and has a mostly regular velocity field.
The mask already separates the emission from HCG 7a and d, so no manual intervention was required. None of the galaxies in the group appear to be strongly disturbed in H I (with the caveat that HCG 7b is undetected), and as such no separate extended features were identified. One channel of the cube is entirely flagged, as is apparent in the integrated spectrum (Figure <ref>).
§.§ HCG 10
As in Figure <ref>.
As in Figure <ref>.
HCG 10 is a quartet made up of three tightly packed spirals and one elliptical slightly further afield, all within the range 4600-5200 km s$^{-1}$. Both HCG 10a and d are strongly detected, but no others. In this case the mask generated by SoFiA was not split further, as it had already identified the two galaxies separately and there is little sign of extended features (Figures <ref> & <ref>), though in the cube the western side of HCG 10a appears to be somewhat kinematically disturbed.
§.§ HCG 15
As in Figure <ref>.
As in Figure <ref>.
HCG 15 is a collection of six galaxies in a compact, inverted `T' formation. Most members are at $\sim$7000 km s$^{-1}$, but HCG 15d and f are at $\sim$6200 km s$^{-1}$. Only HCG 15f is detected in H I, on the very edge of the group. At the (poor) resolution of the data no clear disturbance or extended features could be identified and thus no manual separation was performed (Figures <ref> & <ref>).
§.§ HCG 16
As in Figure <ref>.
As in Figure <ref>.
HCG 16 is a compact, linear group of five galaxies at $\sim$4000 km s$^{-1}$. <cit.> performed a detailed study of HCG 16 based on the same archival VLA data. Here we attempt an equivalent separation of features to that study; however, we note that the automated approach to reducing the data that is used here results in slightly more D-array data being flagged relative to C-array than in <cit.>, which in turn leads to a slightly lower column density sensitivity and slightly finer spatial resolution. In addition, in this work the source masks are clipped at 4$\sigma$ as opposed to 3.5$\sigma$ in <cit.>.
All the core members of HCG 16 (a, b, c, and d) are detected, as well as NGC 848 and PGC 8210. The former has clearly previously interacted with the core group and is at the head of a $\sim$160 kpc long tidal tail (Figure <ref>), while the latter is a dwarf galaxy that has likely never entered the core group before (we do not consider it a group member). The emission from HCG 16a and b is almost blended together, but has been separated as best as possible. There is a high column density bridge between HCG 16c and d, as well as numerous tidal features and clumps throughout the group. The integrated spectrum of the group forms one continuous (in velocity) feature (Figure <ref>).
§.§ HCG 19
As in Figure <ref>.
As in Figure <ref>.
HCG 19 is a triplet (HCG 19d is a background object) of two late-type and one early-type galaxies at 4200 km s$^{-1}$. The mask for HCG 19 was generated in a slightly different manner to most other groups owing to a troublesome continuum source artefact to the west of the group. To prevent the artefact from being included in the mask, positivity of sources was enforced, which in turn prevented the use of SoFiA's reliability estimation, and therefore required the threshold level to be raised to 5.5$\sigma$ (from 4$\sigma$) to eliminate spurious sources.
In the core of the group HCG 19b and c are clearly detected, while HCG 19a is not detected. Further afield there are also strong detections of WISEA J024313.79-122138.9 and WISEA J024206.39-122118.3, which are low surface brightness (LSB) galaxies likely falling towards the group for the first time, but are still separated from the core group by $\sim$100 kpc. We therefore do not include them in our measurement of H I content. For brevity we refer to these as HCG 19W1 and HCG 19W2.
With the exception of HCG 19c, the core galaxies do not appear to be strongly disturbed in H I. However, there are numerous incipient tidal features on the outskirts of the galaxies, including a bridge between HCG 19a and c. Although there is not much flux in these features and they are difficult to trace due to the poor noise properties of this cube, we have attempted to separate them from the emission of the galactic discs (Figures <ref> & <ref>).
§.§ HCG 22
As in Figure <ref>.
As in Figure <ref>.
HCG 22 is an `S' shaped configuration of four galaxies (two early-types and two late-types) at $\sim$2600 km s$^{-1}$. Two original members of the group (HCG 22d and e) were subsequently shown to be background galaxies. The archival observations of HCG 22 form a very broad (in RA) mosaic of the group; however, only two galaxies, HCG 22c and NGC 1231, are detected within this field (Figures <ref> & <ref>). The latter is separated from the core group by over 20 and we do not consider it as a group member at present. HCG 22c shows a clean velocity field and we do not see evidence for tidal features in the cube. We also note that the tabulated redshift for HCG 22c from <cit.> appears to be erroneous (Figure <ref>). NGC 1231 shows some minor disturbances in the outskirts of its disc which we separate from its disc emission (even though it will not be included in our analysis of the H I content of the group). This is likely a result of interactions with its neighbour, NGC 1209. We also note that we consider NGC 1188 as a member of HCG 22, as it is at the same redshift and close to the edge of the group. It is not detected in H I.
§.§ HCG 23
As in Figure <ref>.
As in Figure <ref>.
HCG 23 is a quintet of four late-type and one lenticular galaxies in a `T' configuration in the range 4500-5300 km s$^{-1}$. The narrow bands available for the historical VLA and the original observing strategy for this group, with the overlap of the two spectral windows centred at the redshift of the group's peak emission, led to imperfect continuum subtraction and poorly behaved noise for this group. This complicates the interpretation of faint features in this cube.
In the core group HCG 23a, b, and d are detected, while HCG 23c is undetected. Outside the core group there are three further detections: HCG 23-26 <cit.>, PGC 987787, and PGC 011654. We consider the first of these as a group member, but the latter two are separated from the core group by well over 100 kpc and show no signs of past interaction with the group.
HCG 23b has two small tidal tails emanating from it. The first extends SE towards HCG 23c, and the second towards the SW (Figures <ref> & <ref>). The remaining galaxies do not show any clear tidal features above the noise level. We also note that the emission from the almost edge-on HCG 23a is split into two separate sources by SoFiA, which we combine into one.
§.§ HCG 25
As in Figure <ref>.
As in Figure <ref>.
HCG 25 is a quartet of two late-type and two lenticular galaxies at $\sim$6300 km s$^{-1}$ (HCG 25c, e and g are all background objects). The moment zero map of this group shows four clear detections (Figures <ref> & <ref>): HCG 25a and b in the core group, PGC 135673 slightly to the north, and the LSB galaxy 2SLAQ J032021.10-011013.6 to the SW, subsequently classified as an ultra-diffuse galaxy by <cit.>. For brevity we refer to the latter as HCG 25S1. Both of these detections on the periphery of the group are separated from the core group by well over 100 kpc and we do not consider them as current members.
HCG 25a and b are connected by a faint bridge, which we attempt to separate from the emission of the two galaxies themselves. However, this is complicated by the fact that it is at the limit of the spatial resolution and that, although HCG 25b is a strongly detected source overall, it is low S/N in individual channels. Thus the separation is largely based on the form of the emission from HCG 25a and the optical locations of the galaxies. HCG 25a also has apparent incipient tidal tails connected to it on the NW and SW sides, which we also separate from the disc emission.
§.§ HCG 26
As in Figure <ref>.
As in Figure <ref>.
HCG 26 is a septet of galaxies between 9100-9700 km s$^{-1}$, dominated by the edge-on late-type HCG 26a (Figure <ref>). The vast majority of the emission detected in HCG 26 belongs to HCG 26a (although there appears to be an offset between the catalogued redshift and the emission, Figure <ref>). HCG 26e is also faintly detected, as well as some tidal features. The velocity structure of HCG 26a dominates almost the entire band and appears to be mostly regular; however, there are two clear tidal features, one extending towards HCG 26e and another extending in the opposite direction towards the NE. There are also numerous apparent minor clumps of emission that do not follow the velocity structure of HCG 26a. These were excised wherever possible, but the limited resolution and S/N of the features presented a challenge and some fainter features have likely not been successfully separated. HCG 26e itself appears significantly perturbed, not displaying a clear velocity structure, but given the resolution and low S/N of the detection it is difficult to draw more detailed conclusions. There also appears to be a $\sim$30% offset between the GBT and VLA spectra of this group. Given the similar forms of the spectral profiles, this is most likely the result of differing absolute calibrations or continuum subtraction. However, the root cause could not be definitively identified.
§.§ HCG 30
As in Figure <ref>.
As in Figure <ref>. Note that the emission in the VLA spectrum here is all emission beyond the core group, but still within the primary beam of the VLA. Therefore, it is not shown in Figure <ref>.
HCG 30 is a quartet of three late-type and one lenticular galaxies at $\sim$4600 km s$^{-1}$. None of the core members of HCG 30 are detected in the cube (Figures <ref> & <ref>). However, three galaxies, 6dFGS gJ043650.5-030237 (which we refer to as HCG 30dF1), NGC 1618, and NGC 1622, are strongly detected close to the edge of the VLA primary beam. None of these three detections appear particularly disturbed in H I (at the resolution of the data) and it is unlikely that they have ever interacted with the core group. Thus we do not consider them in our analysis of the group. NGC 1618 and NGC 1622 are both split into two separate sources by SoFiA, which we manually combine, but other than this no modification or separation of features was performed. The emission from these three detections accounts for the entirety of the flux in the VLA spectrum (Figure <ref>) and when the data are weighted to compare to the GBT spectrum there is no flux remaining.
In addition it should be noted that three channels (4364–4406 km s$^{-1}$) in this data set were entirely flagged and a number of (primarily negative) low-level artefacts remain in the cube. Thus, positivity was enforced when creating the mask and the threshold raised from 4$\sigma$ to 5$\sigma$ to increase reliability.
§.§ HCG 31
As in Figure <ref>.
As in Figure <ref>.
HCG 31 is a quintet of late-type galaxies at $\sim$4100 km s$^{-1}$, three of which overlap with each other on the plane of the sky and are likely in the process of merging.
The emission in HCG 31 is an extremely complicated mixture of dwarf galaxies and tidal features (Figure <ref>), studied in detail by <cit.> based on an independent reduction of the same VLA data. HCG 31a, b, c, g (IC 399), and q (WISEA J050138.33-041321.2) are all detected in H I. Most of these galaxies are highly disturbed and are embedded in an almost continuous set of tidal features which spans the entire group. HCG 31e and f are also detected in H I, but are candidate tidal dwarf galaxies embedded in an H I tail <cit.>, and we therefore consider them as tidal features rather than normal galaxies. <cit.> also noted the detection of PGC 3080767 to the south of the group; however, this is separated from the rest of the group by over 200 kpc and we do not consider it a member. There are also some blue clumps visible in the DECaLS image within the extended features around HCG 31q (to the north of the core group), which may be other cases of in situ star formation. We also note that the absolute flux scale resulting from our calibration appears considerably higher than that of <cit.>, which greatly improves how well the integrated profile of the group matches the GBT observations shown in Figure <ref> <cit.>.
Reliable separation of galaxies and tidal material in this group is extraordinarily challenging, in particular for the galaxies HCG 31a and c, which are not only blended with tidal material, but also with each other. The reported H I masses for these two galaxies and, to a lesser extent, the other galaxies in the group are undoubtedly quite uncertain (for example, the approaching side of HCG 31b is deeply embedded in the complex of emission at the centre of the group). Despite this difficulty, what is clear, and of most importance for this work, is that a substantial fraction of the total H I emission of this group originates in the IGrM.
§.§ HCG 33
As in Figure <ref>, except the background image is POSS $R$-band.
As in Figure <ref>.
HCG 33 is a quartet of three early-type galaxies and a late-type galaxy in the redshift range 7500-8000 km s$^{-1}$.
Only HCG 33c is detected in H I with the VLA (Figures <ref> & <ref>). The velocity structure of HCG 33c spans a large fraction of the observational bandwidth and is mostly regular, except for a clear incipient tidal feature extending to the NE, which we separate from the galaxy's emission.
§.§ HCG 37
As in Figure <ref>.
As in Figure <ref>.
HCG 37 is a very compact quintet containing two early-type, one lenticular, and two late-type galaxies in the redshift range 6300-7400 km s$^{-1}$.
None of the core galaxies of HCG 37 are detected in H I with the VLA (Figures <ref> & <ref>). The only feature above the source finder's threshold is an amorphous blob to the SW of the group. This feature may be spurious; however, its peak coincides with an uncatalogued LSB dwarf galaxy (at 09h13m05.95s +29d55m14.2s), which we refer to as HCG 37LSB1. We separated this feature into the densest clump that appears to be associated with an optical counterpart and the more diffuse emission. However, if associated with the LSB counterpart then this source is separated from the core group by well over 100 kpc and we do not consider it part of the compact group at present.
§.§ HCG 38
Integrated H I emission (moment 0) contours overlaid on a DECaLS $r$-band image. Contours begin at 2$\sigma$ (solid black lines) and each subsequent contour is a factor of two greater. The dashed black contours indicate -2$\sigma$. The VLA synthesised beam is shown as a black ellipse in the bottom left and a 20 kpc scale bar is in the bottom right.
As in Figure <ref>.
The single short observation of this group was hampered by RFI resulting in poor $uv$ coverage, a poorly behaved synthesised beam, and a noisy cube. We caution the reader that results for this group are less reliable than for most others.
This group is apparently a triplet of late-type galaxies at 8700 km s$^{-1}$ (the redshift of HCG 38d places it in the distant background), where two of the members, HCG 38b and c, appear to be merging. In our cube we detect H I emission in the vicinity of all of these galaxies, as is apparent in the moment map (Figure <ref>). However, given the poor angular resolution and the noisy nature of the cube, emission is extremely challenging to confidently identify in individual channels and we thus do not attempt separation of features in this group. We will consider it only in terms of its global properties in the following analyses (Figure <ref>).
§.§ HCG 40
As in Figure <ref>.
As in Figure <ref>.
HCG 40 is an extremely compact configuration of five galaxies (three late-type and two early-type) in the redshift range 6400-6900 km s$^{-1}$, all of which overlap on the plane of the sky with their nearest neighbour. In the H I cube the only well-defined velocity structures (in the vicinity of the core group) align with the major axes of HCG 40c and d. We therefore attribute most of the H I to these two galaxies. Although some emission is classified as tidal features because it clearly does not conform to a disc-like structure, this is mostly low significance emission with minimal flux. To the NW of the group another source is detected, probably associated with GALEXASC J093831.05-044853.6. For brevity we will refer to this as HCG 40GLX1. This object is separated from the core group by $\sim$150 kpc and we do not consider it as a member.
§.§ HCG 47
As in Figure <ref>.
As in Figure <ref>.
HCG 47 has the appearance of a pair of pairs, with the two larger galaxies, HCG 47a and b, in the south and the two smaller galaxies, HCG 47c and d, to the north, all of which are in the redshift range 9400-9600 km s$^{-1}$. H I emission is detected in HCG 47a and b and towards HCG 47c and d (Figures <ref> & <ref>). However, much of this is at relatively low S/N, an issue which is exacerbated by the fact that the synthesised beam has significant side lobes due to the short observations and resulting poor $uv$ coverage. In addition, five channels ($\Delta v \approx 24$ km s$^{-1}$) near the centre of the cube were entirely, or almost entirely, flagged due to RFI.
The mask contains two separate regions of emission. One, upon inspection of the channel maps, appears to mostly coincide with HCG 47a and b, while the other lies in between the two pairs. Owing to the difficulties with the data quality, we simply denote the first region as being from HCG 47a and b combined (although the majority is likely from HCG 47a), while the latter region is classified as extended emission which does not coincide with a galaxy. The reader is cautioned that the results for this group are quite uncertain.
§.§ HCG 48
As in Figure <ref>.
As in Figure <ref>.
HCG 48 is likely a false group and is really just a pair of galaxies (HCG 48a, a large elliptical, and HCG 48b, a smaller late-type galaxy). The other two original members of this group were revealed to be a background pair once redshifts were obtained (we also note that there are several conflicting redshift measurements for these galaxies in the literature). In the VLA data only the H I emission from HCG 48b is detected, as well as that of PGC 762452, approximately 165 kpc to the NW (Figures <ref> & <ref>). The cube has mostly well-behaved noise, but there is a continuum artefact NW of the group, as well as some structured noise. There are a few minor, low significance features connected to HCG 48b which we assign as tidal. In summary, as this HCG is unlikely to be a genuine group we do not consider it in the remainder of our analysis.
§.§ HCG 49
As in Figure <ref>.
As in Figure <ref>.
HCG 49 is an extremely compact configuration of three late-type galaxies and one early-type, with an additional late-type slightly to the south. All have redshifts of $\sim$10000 km s$^{-1}$. Owing to the distance of this group ($\sim$140 Mpc), the physical resolution of the VLA DnC imaging is extremely poor. There is no possibility of reliably separating emission from the four core group members, as this appears as almost one continuous cloud in the cube (Figures <ref> & <ref>). Therefore, we approximately separate the emission of SDSS J105638.63+670906.0 (HCG 49SDSS1, the galaxy slightly to the south), which does appear to be interacting with the core group (though resolution is a limiting factor), and SDSS J105454.97+670919.4 (HCG 49SDSS2), detected about 350 kpc to the west (not considered a member). However, for the remainder of this work this group will only be considered in terms of its global properties.
§.§ HCG 54
As in Figure <ref>.
As in Figure <ref>.
HCG 54 is a tight, linear arrangement of what were originally thought to be late-type dwarf galaxies <cit.>, but are likely all H II regions associated with a merger event of two or more galaxies <cit.>. The H I emission mapped with the VLA spans all the bright, star-forming components of this merger and includes a significant tidal tail which wraps around to the north of the group <cit.>. The moment zero map of the group has a mottled appearance due to significant side lobes of the synthesised beam (Figures <ref> & <ref>).
We do not attempt to separate individual features in the main region of emission, which forms a contiguous structure from one side of the merger to the other. However, we do separate the clear extended tidal features around the group. In particular, there is a main tail that wraps around the merger and ends in the galaxy A1127+2054. As this HCG mainly consists of a single merger, it cannot be considered as a group in the same manner as the other groups and we therefore do not consider it in our analysis.
§.§ HCG 56
As in Figure <ref>. Note that the extended feature in the NW corner of this image is due to a large foreground galaxy just outside the frame.
As in Figure <ref>.
HCG 56 is a compact configuration of four lenticular and one late-type galaxies in the redshift range 7900-8400 km s$^{-1}$. Only HCG 56a is detected in H I, an extremely high S/N detection with a clear, regular velocity structure (Figures <ref> & <ref>). No separation of features is required as this is the only source in the mask and there are no clear signs of tidal tails. However, there are some low significance features which did not reach the threshold required to be included within the mask; deeper mapping would be required to verify these features and reveal whether they are genuinely associated with the remaining galaxies.
§.§ HCG 57
As in Figure <ref>.
As in Figure <ref>.
HCG 57 is a group of eight, mostly late-type, galaxies in the redshift range 8700-9600 km s$^{-1}$. The three galaxies at the centre of the group (HCG 57a, c, and d) appear to be detected in H I in the VLA observations (Figures <ref> & <ref>). However, the data quality is not optimal, with a relatively poorly behaved synthesised beam and some residual structure in the noise that becomes visible when summing several channels. Given these limitations we do not attempt to separate this emission into distinct galaxies and tidal features, and will consider this group only in terms of its global properties.
In addition, we detect faint H I emission approximately 400 kpc to the SW of the group. This appears to coincide with an uncatalogued LSB galaxy. However, we note that given the low data quality this detection may be spurious and the apparent optical counterpart coincidental.
§.§ HCG 58
As in Figure <ref>.
As in Figure <ref>.
HCG 58 is a configuration of three late-type galaxies, one lenticular, and one early-type. H I emission is detected in the VLA observations throughout the structure (Figures <ref> & <ref>). The core group is also surrounded by a number of other galaxies, separated from the core group by 200-500 kpc, several of which we also detect in H I but do not consider as members of the core group. There is a continuum subtraction artefact to the south of the core group which was excluded when generating the mask.
Although H I is detected co-spatial with all five group members, there are only clear overdensities associated with HCG 58a and e. The remaining galaxies appear so disturbed that they are virtually indistinguishable from the gas that fills the IGrM. We therefore separate only the high density gas associated with HCG 58a and e, and mark all remaining gas in the group as belonging to extended features.
The VLA spectrum weighted to match the GBT beam response (orange line, Figure <ref>) is down-weighted so much in comparison to the raw spectrum that it is significantly below the GBT spectrum. Given that HCG 58e is on the edge of the GBT primary beam (Figure <ref>) and much of the extended emission is beyond the beam, this comparison will be extremely sensitive to the approximate beam model we have assumed. In addition, any minor pointing offset in the GBT observation would also lead to significant differences in the comparison. Thus, although the GBT spectrum appears to contain considerably more flux, it is difficult to be certain that this indicates emission that was missed by the VLA.
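The beam-weighted comparison described here can be sketched in a few lines; this is a minimal illustration assuming a circular Gaussian GBT beam, with the function name, the 8.8 arcmin FWHM, and the pixel-grid handling all illustrative assumptions rather than the actual pipeline.

```python
import numpy as np

def gbt_weighted_spectrum(cube, x0, y0, pix_arcmin, fwhm_arcmin=8.8):
    """Collapse a (nchan, ny, nx) H I cube into a single spectrum,
    weighting each pixel by an assumed circular Gaussian GBT beam
    response centred on pixel (x0, y0).  fwhm_arcmin=8.8 is an
    assumed L-band GBT beam width, not a measured value."""
    nchan, ny, nx = cube.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r2 = ((x - x0) ** 2 + (y - y0) ** 2) * pix_arcmin ** 2  # arcmin^2
    sigma2 = (fwhm_arcmin / 2.3548) ** 2  # FWHM -> Gaussian sigma
    weights = np.exp(-0.5 * r2 / sigma2)  # unity at the beam centre
    # emission far from the pointing centre is strongly down-weighted,
    # mimicking the falloff of single-dish sensitivity
    return (cube * weights).sum(axis=(1, 2))
```

A source at the pointing centre keeps its full flux in each channel, while emission beyond roughly one FWHM contributes almost nothing, which is why a source near the beam edge makes this comparison so sensitive to the assumed beam model and pointing.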
§.§ HCG 59
As in Figure <ref>.
As in Figure <ref>.
Unlike all other sources, the data for HCG 59 went through an additional step of self-calibration due to strong interference fringes originating from a bright, double continuum source near the edge of the primary beam. These additional steps greatly reduce the strength of artefacts in the cube; however, there is still some residual structure in the noise and a high threshold (5$\sigma$) is used in the source finding to attempt to eliminate it.
HCG 59 is a group of two spirals, one irregular, and one elliptical galaxy arranged in an L-shape at a redshift of approximately 4000 km s$^{-1}$. All four appear to be detected in H I with the VLA data, although the emission of HCG 59c is interrupted by RFI (Figures <ref> & <ref>). The emission from three of these galaxies appears to overlap in the moment map, but they are separated by slight differences in their radial velocities. The spectrum of the group also implies that the catalogued redshift for HCG 59d is offset from its H I velocity by over 200 km s$^{-1}$. A fifth detection occurs just south of the core group. There does not appear to be any optical counterpart to this source (which we label HCG 59LSB1) and it is likely spurious.
The low quality of the data makes confident separation of features challenging and caution is advised when interpreting results for this group. While in the cases of HCG 59c and d the emission appears to be well centred, for HCG 59a most of the emission occurs between HCG 59a and b. The emission associated with HCG 59b also appears to have been broken into approaching and receding parts by the source finder. We therefore attribute, as best as possible, the emission associated with HCG 59b, and label the remainder as an extended feature rather than gas bound to HCG 59a. This feature is likely an H I bridge between the two galaxies.
§.§ HCG 61
As in Figure <ref>.
As in Figure <ref>.
HCG 61 is a triplet made up of HCG 61a, c, and d (HCG 61b is a foreground galaxy). HCG 61a and d are both classified as lenticular, and HCG 61c is late-type, but edge-on. The H I map from the VLA shows several enormous extended features reminiscent of HCG 92. The moment zero map appears mottled due to significant side lobes of the synthesised beam.
The most easily discernible galaxy in the cube is HCG 61c, as a faint, point-like emission region progresses along its edge-on disc in a relatively uniform manner when stepping through the channels of the cube. However, near its central velocity the signal almost entirely disappears, and there are several apparent tidal features which we separate as best as possible. Aside from this gas, none of the remaining emission is definitively associated with either of the other galaxies. Although some of this emission does coincide with HCG 61a and d, and there is some velocity gradient across each, both are embedded in the large extended features and it is unlikely that much of this gas is still bound to the galaxies. Therefore, we classify all remaining emission in the group as tidal in nature (Figures <ref> & <ref>).
In addition to the H I in the group, we also detect PGC 4316478 about 220 kpc to the SE of the group. This source is at the very edge of the VLA primary beam and its flux is likely unreliable. We do not consider it a member of the group.
§.§ HCG 62
As in Figure <ref>, except no emission is detected within the group.
As in Figure <ref>.
HCG 62 is a group of two lenticular and two elliptical galaxies in the range 3600-4400 km s$^{-1}$. None are detected in H I in the JVLA observations of this group (Figures <ref> & <ref>). Three low significance, outlying clumps exist in the spectral line cube. However, these have no apparent optical counterparts and are almost certainly spurious. Therefore, we consider this group as entirely undetected in the JVLA observations.
§.§ HCG 68
As in Figure <ref>.
As in Figure <ref>.
HCG 68 is a group of five galaxies, two lenticulars, two ellipticals, and one late-type, the last of which (HCG 68c) is the only core group galaxy detected in H I with the VLA observations. The velocity structure of the H I detected in HCG 68c is highly regular, with only a very minor, faint extension on the NW side which we separate as a tidal feature (but which may simply be spurious), and a possible slight warp in its outer disc (Figures <ref> & <ref>).
In addition to HCG 68c, UGC 8841 is clearly detected in H I about 150 kpc to the SW of the core group, and NGC 5371 is also detected a similar projected distance to the west of the group. The receding side of NGC 5371 is not visible as it is truncated by the edge of the bandpass of the VLA observation. We do not consider either of these galaxies as members of the core group.
The comparison of the VLA and GBT spectra in Figure <ref> shows a strange phenomenon where at low velocities the GBT spectrum (green line) contains more flux, but at high velocities the VLA weighted spectrum does (orange line). As with HCG 58, this is likely the result of the main source of emission lying near the edge of the GBT beam. Even a small offset in pointing or a slight bias in the beam weighting we have used could result in more emission from the receding or approaching side of HCG 68c being included.
§.§ HCG 71
As in Figure <ref>.
As in Figure <ref>.
HCG 71 is a triangular configuration made up of the three late-type galaxies HCG 71a, b, and c (HCG 71d is a distant background galaxy) in the range 8800-9400 km s$^{-1}$. The moment map is dominated by emission associated with HCG 71a, a near face-on spiral (Figures <ref> & <ref>). HCG 71c is also detected, but HCG 71b is not. The emission from HCG 71a stretches out to form a large loop to the north that connects to two LSB galaxies, AGC 732898 and AGC 242021, both of which were detected in H I in the ALFALFA survey (though blended with other emission in the group) and which we consider group members. The moment zero map of the group has a slightly mottled appearance due to significant side lobes and elongation of the synthesised beam, and possible slight residual continuum artefacts.
HCG 71c is slightly separated from the rest of the group in velocity and thus it is straightforward to distinguish its emission. It appears to have two minor (low S/N) extensions to the south and west, which we excise as separate features. HCG 71a has two large tails emanating from the east and west. The latter stretches down to the SE and in the moment map apparently connects to HCG 71c. However, this is a projection effect, as in velocity space it stretches away from, not towards, HCG 71c. There is a dense clump of emission in the other tail that is at the same velocity as HCG 71b, but it does not quite overlap on the plane of the sky and therefore does not appear to be associated (although perhaps some of this gas originated from HCG 71b). Again, in projection, this tail appears to connect to AGC 732898 and AGC 242021, but veers away in velocity. Although only detected in a small number of channels, AGC 732898 appears to have a regular structure and no extended features were identified around it. AGC 242021, on the other hand, appears highly disturbed in H I. We separate its emission into galaxy and extended features, but note that this is quite uncertain for this object.
§.§ HCG 79
As in Figure <ref>.
As in Figure <ref>.
HCG 79 is a group of two lenticular, one elliptical, and one late-type galaxy, all of which overlap on the plane of the sky (a fifth galaxy, HCG 79e, is a background interloper) and have redshifts in the range 4100-4500 km s$^{-1}$. Most of the H I emission detected with the VLA is centred on HCG 79d, the sole late-type galaxy in the group (Figures <ref> & <ref>). There are numerous faint features on the edge of HCG 79d that appear disturbed and are separated out. However, it should be noted that these are faint features that are subject to variations in the noise.
§.§ HCG 88
As in Figure <ref>.
As in Figure <ref>.
HCG 88 is a linear arrangement of four late-type galaxies, all with redshifts of $\sim$6000 km s$^{-1}$. All four are detected in H I with the VLA, and appear mostly undisturbed, except for HCG 88a, which appears to have a truncated disc (Figures <ref> & <ref>). HCG 88b and c have overlapping H I distributions. The regular velocity structure of the two galaxies implies that this gas is still mostly bound to their respective discs, and therefore we separate the two galaxies as best as possible and assign very little emission to a bridge feature. HCG 88d also appears to have an incipient tidal tail on its SE side.
§.§ HCG 90
As in Figure <ref>, except the background image is POSS $R$-band.
As in Figure <ref>.
HCG 90 is an extremely compact configuration of three early-type galaxies and a fourth late-type galaxy about 60 kpc to the north. The VLA observations of the group used a very broad channel width (195 kHz) in order to span the full velocity range of the group and beyond. However, the broad channels cause the source finder's reliability verification algorithm to fail, as too few sources (spurious plus real) are detected. Therefore, we raised the masking threshold to 5$\sigma$ in order to eliminate any spurious detections.
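The velocity width implied by such broad channels follows from the standard radio-convention conversion at the 21 cm line; a small sketch (the function name is illustrative, the rest frequency is the standard H I value):

```python
def channel_width_kms(delta_nu_khz, rest_freq_ghz=1.420406):
    """Velocity width of a spectral channel at the 21 cm line,
    using the radio convention dv = c * (dnu / nu_rest)."""
    c_kms = 299792.458  # speed of light in km/s
    return c_kms * (delta_nu_khz * 1e3) / (rest_freq_ghz * 1e9)

# the 195 kHz channels used here correspond to ~41 km/s,
# far coarser than typical H I imaging channel widths
```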
We note that HCG 90 appears to be embedded at the centre of a much larger structure containing many tens of galaxies. There are several dwarf galaxies in the vicinity of the core group, some of which could be members <cit.>, but which we chose to disregard; instead we focus on the core group, as we have done with other HCGs. With the exception of PGC 198500, which is marginally detected (but excluded due to the higher threshold of the mask), these additional members are all undetected in H I and would minimally impact the estimated deficiency of the group, owing to their low mass relative to the core members.
The source finder splits the emission from HCG 90a (the only core member detected) into two halves, which we combine (Figures <ref> & <ref>). No other modification is made to the mask.
§.§ HCG 91
As in Figure <ref>, except the background image is POSS $R$-band.
As in Figure <ref>.
HCG 91 consists of four disc galaxies (in the redshift range 7100-7300 km s$^{-1}$) in an almost linear N-S configuration. The H I distribution of this group has been studied in detail before by <cit.>. Our analysis is based on a re-reduction of the same data. HCG 91a, b, and c are all detected in H I, but HCG 91d is not (Figures <ref> & <ref>). The main tidal feature in the group is a tail originating on the eastern side of HCG 91a and wrapping around to its north side. HCG 91c has a bright and narrow H I distribution, whereas that of HCG 91b is quite broad (in velocity), reducing its S/N in each channel. There also appears to be a low S/N extended feature between HCG 91b and c, which we separate out as best as possible. Finally, there are three additional detections in the cube separated from the core group by over 300 kpc. These are PGC 68187, PGC 68161, and a likely spurious detection to the SE for which we could not identify an optical counterpart.
§.§ HCG 92
As in Figure <ref>.
As in Figure <ref>.
HCG 92 (or Stephan's Quintet) is an extremely compact configuration of galaxies that exhibits shocks and tidal tails in both neutral and ionised gas. It has been the target of over a hundred papers, including IR, optical, UV, and X-ray imaging <cit.>, radio continuum <cit.>, and H I line emission <cit.> studies.
The core group is made up of four members, as HCG 92a is a foreground interloper. Although the morphology and kinematics of the gas in the group are extremely complex <cit.>, for our purposes it is quite straightforward. As discussed in detail by <cit.>, none of the H I emission is located in the galaxies themselves. The majority is in an enormous L-shaped tail that extends to the east of the group; there are also three separate clouds on the western side of the group (two of which project on top of each other), but these do not coincide with the galaxies (Figures <ref> & <ref>).
To the east of the group we also detect AGC 320272 in H I; however, this dwarf galaxy is $\sim$175 kpc from the core group and appears undisturbed in H I. Therefore, we do not consider it as a member. In addition, we detect AGC 321245 about 250 kpc north of the core group, and finally there is a likely spurious detection in the cube a similar distance to the NW. We also note that a few central channels of the cube were effectively lost due to inadequate continuum subtraction where the two sub-bands of the observation (just) overlap; however, this does not appear to overlap with any of the aforementioned features.
§.§ HCG 93
As in Figure <ref>.
As in Figure <ref>.
HCG 93 (projected adjacent to HCG 94, separated by only $\sim$30 arcmin on the sky, but by over 6000 km s$^{-1}$ in velocity) is made up of three late-type and one elliptical galaxy. In the core group, HCG 93b, a loosely wound spiral, is the only galaxy detected in the VLA H I map. In addition, we also detect three uncatalogued dwarf galaxies in the vicinity of the group; however, these are all separated from the core group by $\sim$200 kpc.
The H I of HCG 93b appears quite regular and we do not separate off any tidal features, which is quite surprising given its optical appearance (Figures <ref> & <ref>). This may in part be the result of the fairly low resolution of the VLA data; however, there is certainly no highly extended emission as is typical in gas-bearing galaxies experiencing strong tidal forces.
The first two of the three outlying dwarf galaxies (which we refer to as HCG 93LSB1–3) have clear optical counterparts in the DECaLS images. The third may be spurious as it is the lowest S/N of the three and has no apparent optical counterpart (although this may be hidden by its proximity to a bright star).
§.§ HCG 95
As in Figure <ref>.
As in Figure <ref>.
HCG 95 is a group of three late-type and one elliptical galaxy in the redshift range 11500-11900 km s$^{-1}$. Owing to its large distance (160 Mpc), the physical resolution of the VLA data is quite poor (even in CnB configuration). We therefore do not attempt to separate galaxies and extended features in this group. However, we note the detected H I emission is mostly co-spatial with HCG 95d, though there does appear to be an extension towards HCG 95a (Figures <ref> & <ref>).
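The link between distance and physical resolution noted here is just the small-angle relation between beam size and distance; a sketch, where the $\sim$20 arcsec beam in the example is an assumed, illustrative value rather than the measured CnB beam:

```python
import math

def beam_physical_size_kpc(fwhm_arcsec, distance_mpc):
    """Linear size subtended by the synthesised beam at the
    distance of the group, in the small-angle approximation."""
    theta_rad = fwhm_arcsec * math.pi / (180.0 * 3600.0)  # arcsec -> rad
    return distance_mpc * 1000.0 * theta_rad  # Mpc -> kpc

# an assumed 20'' beam at 160 Mpc spans ~15.5 kpc, a sizeable
# fraction of a typical galaxy disc, so galaxies and tails blend
```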
More than 125 kpc outside of the core group we detect HCG 95f and HCG 95e in H I. We note that these were not originally classified as HCG members by <cit.> and <cit.>, but they were added after deep imaging searching for LSB galaxies <cit.>. For consistency with our approach to the other groups we do not consider these as members of the group. We also note that there are slight continuum subtraction artefacts remaining in this cube.
§.§ HCG 96
As in Figure <ref>.
As in Figure <ref>.
HCG 96 is a group of three late-type galaxies and one elliptical in the redshift range 8600-9000 km s$^{-1}$, two of which (HCG 96a and c) may be in the process of merging. There is a large extended region of H I emission in the core group that appears to encompass HCG 96a, c, and d (Figures <ref> & <ref>). HCG 96b is undetected. There are also several additional detections around the core group. The closest of these is an uncatalogued compact, blue, irregular galaxy, GALEXASC J232813.78+084750.1 (which we refer to as HCG 96cl1). This object could plausibly be interacting with the core group, or could even be a TDG forming from the ongoing major interactions. We therefore consider it as part of the group, but designate it as a tidal feature. The remaining detections outside of the group are GALEXASC J232735.09+085018.2, GALEXASC J232727.99+083853.7, GALEXASC J232714.32+084125.5, PGC 71560, and three likely spurious detections near the edge of the primary beam with no apparent optical counterparts.
Although the extended emission in the core group covers all of HCG 96a, c, and d in projection, all of this emission appears to be associated with HCG 96a. HCG 96d is sufficiently separated in velocity that it would not fall within the most complex region of emission, and we can confidently conclude that it is undetected. In the case of HCG 96c there are clumps in the extended emission which might correspond to the galaxy; however, even if they do, they do not significantly stand out from the surrounding emission and are thus unlikely to be bound to the galaxy. Hence, all the emission in the core group that is not clearly consistent with the rotation of HCG 96a is classified as extended features. The H I in the core group is also complicated by the presence of a strong absorption feature in front of the nucleus of HCG 96b.
§.§ HCG 97
As in Figure <ref>.
As in Figure <ref>.
HCG 97 consists of five galaxies approximately in the redshift range 6000-7000 km s$^{-1}$: two ellipticals, two spirals, and one lenticular. Only one of these, HCG 97b, is detected in H I in the VLA observations (Figures <ref> & <ref>). However, a number of other objects are detected within the VLA primary beam (WISEA J234701.65-022033.4, LEDA 1092439, PGC 72457, PGC 3080162, UM 177, PGC 1098512, and PGC 1092963, whose emission is truncated by the edge of the band), between about 100 kpc and 500 kpc away from the core group. The nearest of these lie on the edge of the beam of the GBT observation, and likely contributed some additional flux to the group's H I mass measurement.
HCG 97b is an edge-on spiral and it appears that H I is likely only detected on one side of the galaxy, with the emission from the receding side being too low S/N to be included in the source mask. This indicates that the galaxy is likely disturbed, but it is only marginally resolved in the VLA data.
§.§ HCG 100
As in Figure <ref>.
As in Figure <ref>.
HCG 100 is made up of a chain of four late-type galaxies in the redshift range 5200-5600 km s$^{-1}$. All are detected in H I in the VLA imaging, as well as an enormous extended feature stretching over 100 kpc to the SW of the core group. At the tip of this tail there is a faint, blue optical counterpart, perhaps indicating the formation of a TDG <cit.>. In addition, we detect MRK 935 about 85 kpc east of the core group.[We note that there are conflicting redshift measurements for MRK 935; however, the H I detection is consistent with $cz_\odot = 5606$ km s$^{-1}$ <cit.>.] The H I in this galaxy appears heavily perturbed and it is likely interacting with the core group; we therefore consider it a member. Further to the SE we also detect NGC 7810 and AGC 105092 in H I.
We do not attempt to separate the four core galaxies from each other. Their optical discs almost overlap, and given that they are spread over only $\sim$400 km s$^{-1}$ it is not possible to reliably separate them at the resolution of the VLA data. However, the majority of the emission in extended features appears to be in the SW tail, which is separated as a distinct feature. Although the detection of MRK 935 is quite low S/N, we attempt to separate it into emission from the galaxy itself and extended emission. However, the faintness of this object (in H I) means that it has little bearing on the overall ratio of disc versus extended emission in the group.
§ RESULTS
In this section we present the global results for the H I content of HCGs, based on aggregating the results for individual groups from the previous section. First we compare the VLA measurements to those from the GBT, and then proceed to calculate the H I deficiency of each HCG.
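The deficiency calculation referred to here can be written explicitly. This is a sketch assuming the standard logarithmic definition, with $a$ and $b$ standing in for the coefficients of the cited $L_{B}$–$M_\mathrm{HI}$ scaling relation (not reproduced here):

```latex
\[
  \mathrm{def}_{\mathrm{HI}}
    = \log_{10} M_{\mathrm{HI,pred}} - \log_{10} M_{\mathrm{HI,obs}},
  \qquad
  M_{\mathrm{HI,pred}} = \sum_{i \in \mathrm{members}} 10^{\,a + b \log_{10} L_{B,i}}
\]
```

Under this convention a positive deficiency means the group contains less H I than the scaling relation predicts for its members.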
H I mass of HCGs measured by the GBT (x-axis) and measured by the VLA with spatial weighting to match the falloff in sensitivity corresponding to the GBT beam (y-axis). The solid line indicates equality of the two. Note that the scale is not the same on both axes.
Measured versus predicted (with the <cit.> $L_{B}$–$M_\mathrm{HI}$ scaling relation) H I masses of HCGs. All measured masses are those from the VLA data, except for HCG 30 and 37, which use the GBT measurements. Points circled with black dashed lines are triplets. The solid black line indicates equality and the dashed and dotted lines indicate the 1$\sigma$ and 2$\sigma$ scatter of the <cit.> relation (for a single galaxy). HCG 62 was undetected with the VLA (and has no GBT observation) and is marked as an upper limit.
As for Figure <ref>, except now marked with green circles are the predicted H I masses using the <cit.> $L_{B}$–$M_\mathrm{HI}$ scaling relation. In general this relation predicts significantly higher masses than the equivalent <cit.> relation.
H I deficiencies and extended emission fraction for each (resolved) group in the sample. The colour of each point indicates its morphological phase: blue for 1, green for 2, orange for 3a, and pink for 3c. The marker shape indicates the IR classification <cit.> of each group: stars for groups dominated by IR active galaxies, crosses for groups dominated by neither active nor quiescent (or transition) galaxies, squares for groups dominated by quiescent (or transition) galaxies, and rings for groups with too many members without IR classifications. Markers enclosed in a dashed black circle correspond to triplets. The lone upper limit plotted in the top-right corner corresponds to HCG 62, which was entirely undetected in H I with the VLA and has no GBT observation. The horizontal dashed lines indicate 25% and 75% extended emission. These thresholds entirely determine which groups are classified as Phase 2 or 3a. Phase 1 and 3c are distinguished from each other based on the number of galaxies detected in H I (Section <ref>). The shaded arrows indicate our proposed evolutionary path of groups through this parameter space.
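The thresholds described in this caption amount to a simple decision rule; a hedged sketch, in which the exact criterion separating Phase 1 from 3c by H I detection count is an assumption (labelled as such in the code), not the scheme as defined in the referenced section:

```python
def classify_phase(f_exfs, n_detected, n_members):
    """Sketch of the phase scheme described in the text: the 25% and
    75% extended-emission-fraction thresholds set Phases 2 and 3a;
    below 25%, the H I detection count separates Phase 1 from 3c.
    The majority-detected criterion below is an assumption."""
    if f_exfs > 0.75:
        return "3a"  # almost all H I in extended features
    if f_exfs > 0.25:
        return "2"   # substantial but not dominant extended H I
    # assumed: most members still detected in H I -> Phase 1, else 3c
    return "1" if n_detected >= n_members / 2 else "3c"
```

Against the table values this reproduces, e.g., HCG 16 ($f_\mathrm{exfs}=0.41$) as Phase 2 and HCG 31 ($f_\mathrm{exfs}=0.63$) as Phase 2.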
Histogram of the HI deficiencies of HCGs. The three phases are shown individually by differently hatched bars. The vertical black dashed lines indicate approximately the typical 1$\sigma$ uncertainty (away from zero) in the measure of HI deficiency for an individual galaxy <cit.>. Note that HCG 62 is not included as there is only a lower limit on its HI deficiency.
HI deficiencies of HCGs versus the fraction of early-type members. In this case early-types are defined as lenticular or elliptical. The markers follow the same scheme as in Figure <ref>.
HI deficiency of HCGs
HCG $\log M_\mathrm{HI,VLA}$ $\log M_\mathrm{HI,GBT}$ $\log M_\mathrm{HI,pred}$ HI-def. HI-def. $\log M_\mathrm{HI,gals}$ $\log M_\mathrm{HI,exfs}$ $f_\mathrm{exfs}$ Phase
$[\mathrm{M_\odot}]$ $[\mathrm{M_\odot}]$ $[\mathrm{M_\odot}]$ (VLA) (GBT) $[\mathrm{M_\odot}]$ $[\mathrm{M_\odot}]$
2$\ddagger$ $10.08\pm0.04$ $9.87 \pm 0.12$ $-0.21$ 10.06 8.66 0.04 1
7 $9.73\pm0.05$ 9.65 $10.22 \pm 0.16$ $0.49$ $0.57$ 9.73 0.0 1
10 $9.85\pm0.04$ 9.77 $10.24 \pm 0.12$ $0.39$ $0.47$ 9.85 0.0 1
15 $9.09\pm0.07$ 9.52 $10.37 \pm 0.09$ $1.28$ $0.85$ 9.09 0.0 3c
16 $10.34\pm0.05$ 10.08 $10.34 \pm 0.09$ $-0.0$ $0.26$ 10.11 9.95 0.41 2
19$\ddagger$ $9.39\pm0.05$ $9.7 \pm 0.13$ $0.31$ 9.28 8.72 0.22 1
22 $9.41\pm0.04$ $9.95 \pm 0.15$ $0.54$ 9.41 0.0 3c
23 $10.13\pm0.05$ 10.03 $9.84 \pm 0.12$ $-0.29$ $-0.19$ 10.05 9.37 0.17 1
25 $10.15\pm0.05$ 10.1 $10.06 \pm 0.13$ $-0.09$ $-0.04$ 10.1 9.23 0.12 1
26 $10.27\pm0.05$ 10.47 $10.04 \pm 0.11$ $-0.23$ $-0.43$ 10.19 9.49 0.17 1
30 $<7.78$ 8.77 $10.17 \pm 0.13$ $>2.39$ $1.4$ 3a
31 $10.17\pm0.05$ 10.22 $10.13 \pm 0.19$ $-0.04$ $-0.09$ 9.74 9.96 0.63 2
33 $10.07\pm0.06$ $9.88 \pm 0.12$ $-0.19$ 10.02 9.1 0.11 3c
37 $<8.35$ 9.78 $10.45 \pm 0.14$ $>2.1$ $0.67$ 3a
38$\ddagger$ $10.14\pm0.05$ $10.14 \pm 0.12$ $-0.0$
40 $9.69\pm0.07$ 9.84 $10.42 \pm 0.12$ $0.73$ $0.58$ 9.5 9.25 0.36 2
47 $9.69\pm0.05$ $10.19 \pm 0.13$ $0.5$ 9.41 9.37 0.48 2
48$\dagger$ $8.85\pm0.06$ 8.78 $9.44 \pm 0.17$ $0.59$ $0.66$
49 $10.49\pm0.04$ $9.87 \pm 0.11$ $-0.62$
54 $9.25\pm0.05$ $8.99 \pm 0.17$ $-0.26$
56 $9.89\pm0.04$ $10.22 \pm 0.11$ $0.33$ 9.89 0.0 3c
57 $9.88\pm0.05$ $10.75 \pm 0.09$ $0.87$
58 $10.19\pm0.04$ 9.83 $10.51 \pm 0.1$ $0.32$ $0.68$ 9.78 9.98 0.61 2
59 $9.69\pm0.05$ $9.77 \pm 0.13$ $0.08$ 9.61 8.92 0.17 1
61$\ddagger$ $10.2\pm0.04$ $10.21 \pm 0.13$ $0.01$ 9.13 10.16 0.91 3a
62 $<7.54$ $10.07 \pm 0.12$ $>2.53$ 3a
68 $9.95\pm0.04$ 9.83 $10.34 \pm 0.11$ $0.39$ $0.51$ 9.95 7.4 0.0 3c
71 $10.8\pm0.05$ $10.45 \pm 0.14$ $-0.35$ 10.59 10.39 0.39 2
79 $9.49\pm0.05$ 9.66 $9.91 \pm 0.12$ $0.42$ $0.25$ 9.26 9.11 0.42 2
88 $10.08\pm0.05$ 10.02 $10.42 \pm 0.11$ $0.34$ $0.4$ 10.08 8.4 0.02 1
90 $8.94\pm0.06$ 8.43 $10.13 \pm 0.1$ $1.19$ $1.7$ 8.94 0.0 3c
91 $10.34\pm0.06$ 10.3 $10.54 \pm 0.15$ $0.2$ $0.24$ 10.21 9.76 0.26 2
92 $10.13\pm0.04$ 10.23 $10.59 \pm 0.11$ $0.46$ $0.36$ 10.13 1.0 3a
93 $9.53\pm0.04$ 9.38 $10.38 \pm 0.12$ $0.85$ $1.0$ 9.53 0.0 3c
95 $9.42\pm0.09$ $10.44 \pm 0.12$ $1.02$
96 $10.45\pm0.05$ $10.43 \pm 0.14$ $-0.02$ 9.93 10.29 0.69 2
97 $8.68\pm0.1$ 9.8 $10.23 \pm 0.1$ $1.55$ $0.43$ 8.68 0.0 3c
100 $9.92\pm0.05$ 9.89 $10.01 \pm 0.11$ $0.09$ $0.12$ 9.64 9.59 0.47 2
Columns: (1) HCG ID number, (2) total HI mass from the VLA observations, with uncertainties approximated as $\sigma_{M_\mathrm{HI}} = 235600 \left( \frac{D}{\mathrm{Mpc}} \right)^2\times 3\frac{\sigma_\mathrm{rms}}{\mathrm{Jy}} \times 100\;\mathrm{km\,s^{-1}}$, combined with an assumed 10% uncertainty in the absolute flux calibration (distance uncertainties are neglected), (3) total HI mass from the GBT observations <cit.>, (4) predicted HI mass based on B-band luminosity <cit.>, (5 & 6) HI deficiency of the entire group from the VLA and GBT mass measurements, respectively, (7) combined HI mass of all member galaxies, (8) combined HI mass of all extended features in the group, (9) fraction of the total HI mass in extended features, (10) HI morphological classification. $\dagger$ Pair or false group. $\ddagger$ Triplet.
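The uncertainty quoted in column (2) can be sketched in a few lines. Only the statistical formula itself comes from the table notes; the function names and the quadrature combination with the 10% calibration term are our assumptions.

```python
import math

def mhi_mass(d_mpc, flux_jy_kms):
    # Standard HI mass in Msun from integrated flux: M = 2.356e5 * D^2 * S
    return 2.356e5 * d_mpc**2 * flux_jy_kms

def mhi_mass_uncertainty(d_mpc, sigma_rms_jy, mass_msun, cal_frac=0.10):
    # Statistical term from the table notes:
    # 235600 * (D / Mpc)^2 * 3 * (sigma_rms / Jy) * 100 km/s
    stat = 235600.0 * d_mpc**2 * 3.0 * sigma_rms_jy * 100.0
    # Combined with a 10% absolute flux-calibration term
    # (quadrature combination is an assumption here)
    cal = cal_frac * mass_msun
    return math.hypot(stat, cal)
```

For example, for a group at 10 Mpc with a 1 mJy channel rms, the statistical term alone is $\sim 7\times10^{6}\,\mathrm{M_\odot}$.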
§.§ Discrepancies between single-dish and VLA fluxes
As shown in Figure <ref> the vast majority of the VLA observations result in total fluxes (after weighting for the GBT beam response) that are just marginally below those observed with the GBT. This is expected, as in general an interferometer resolves out some extended emission and therefore does not recover the full flux seen with a single dish. However, there are a few points which require further investigation: a) the single point above the line of equality, b) the group of four points about 0.5 dex below the line, and c) HCG 30 which is about 2 dex below the line.
The first of these is formally impossible, as an interferometer cannot detect more flux than a single dish, and would therefore normally point to a calibration, pointing, or continuum-subtraction problem. This point corresponds to HCG 90. As can be seen in Figure <ref>, only HCG 90a is detected in the VLA map, and this galaxy lies outside the HPBW of the GBT observation. Therefore, the comparison between the VLA and GBT fluxes will be extremely sensitive to the exact nature of our (approximate) weighting for the GBT primary beam response. Furthermore, there is a strong absorption feature at the centre of HCG 90a, which appears to impact the GBT spectrum more strongly than the VLA spectrum (Figure <ref>).
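The (approximate) beam weighting referred to here amounts to down-weighting VLA emission by the single-dish response before summing the flux. A minimal sketch, assuming a Gaussian beam; the 9.1 arcmin HPBW is an assumed representative value for the GBT at 21 cm, not a value taken from this work.

```python
import math

def gbt_beam_weight(r_arcmin, hpbw_arcmin=9.1):
    # Gaussian beam response at radius r from the pointing centre;
    # falls to 0.5 at r = HPBW / 2 by construction.
    return math.exp(-4.0 * math.log(2.0) * (r_arcmin / hpbw_arcmin) ** 2)

def beam_weighted_flux(pixels):
    # pixels: iterable of (flux, radius_arcmin) pairs from a VLA moment map;
    # returns the total flux after applying the single-dish beam weight.
    return sum(flux * gbt_beam_weight(r) for flux, r in pixels)
```

A galaxy lying near or beyond the HPBW (as with HCG 90a) receives a weight of 0.5 or less, which is why the comparison becomes so sensitive to the assumed beam shape.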
The second case, the four groups with about 0.5 dex lower VLA fluxes than GBT fluxes, also requires explanation. These groups are HCGs 15, 37, 58, and 97.
HCG 97 is another case similar to HCG 90, where much of the emission occurs near or beyond the edge of the GBT primary beam, and therefore the comparison between the VLA and GBT spectra is likely to be unreliable (Figures <ref> & <ref>). However, the spectra do also show regions where emission is detected by the GBT, but not at all with the VLA. This may be a sign of diffuse emission that was resolved out by the VLA.
The comparison of the VLA and GBT spectra (Figure <ref>) for HCG 15 indicates that a considerable fraction of the emission in the GBT spectrum is undetected with the VLA. We also note that the VLA spectrum presented here differs strongly from that presented by <cit.>, where even HCG 15f is not detected. The high rms noise of the VLA data relative to the signals in the GBT spectrum might prevent these features from being included in the source mask. Alternatively, these could be diffuse features that were resolved out by the VLA. We also note that the two highest peaks in the GBT spectrum (aside from HCG 15f) do not correspond to the redshifts of any of the group members, which could mean that they are additional LSB members, HI-only features, or spurious in some way.
In the cases of HCG 30 and 37, there is no emission detected in the group cores by the VLA (Figures <ref> & <ref>), but in the GBT spectra there is broad ($\sim$1000 $\mathrm{km\,s^{-1}}$ wide), faint emission that might indicate the presence of a diffuse component that the VLA is unable to detect <cit.>. These two groups are the clearest examples of this phenomenon that we identified, the other possibility being HCG 97. In all other cases (except HCG 15) the spectra from the VLA and the GBT are broadly consistent.
Two other groups warrant mention for minor mismatches between their GBT and VLA spectra: HCGs 26 and 79. The spectral profile of HCG 26 in the VLA data matches closely with that from the GBT data; however, it is consistently lower (Figure <ref>). As the profile is so similar, this likely indicates a slight calibration problem with one or both of these observations, rather than the presence of diffuse emission. For HCG 79 an entire emission feature appears to be absent from the VLA spectrum when compared to that of the GBT (Figure <ref>). It seems unlikely that this feature would be too low in S/N to be detected, meaning it could be a diffuse feature resolved out by the VLA, an artefact in the GBT data, or a problem with the continuum subtraction of the VLA data; however, we were unable to identify the root cause.
Aside from the few cases discussed above, the GBT flux is well recovered by the VLA observations. Therefore, as all our targets have VLA data, but not all have GBT data, we use the HI deficiency of each group determined from the VLA data in most of our subsequent analysis. Switching to the GBT observations would make only marginal differences for all but a handful of groups, and in most of those cases would lead to a worse estimate (e.g. because of blended emission from non-members), not an improvement. The exceptions are HCGs 30 and 37, for which we use the GBT measurements, as these groups are undetected with the VLA.
§.§ HI deficiency of HCGs
<cit.> and <cit.> found that the vast majority of HCGs were highly HI-deficient, motivating them to ask the question “where is the neutral atomic gas in HCGs?” The distribution of observed versus predicted HI masses is shown in Figure <ref>. We too find that many HCGs have significantly lower HI masses than would be expected for non-interacting galaxies, while very few are significantly rich in HI. We also see that triplets (circled with black dashed lines in Figure <ref>) appear to follow the same distribution as groups with more members. However, there are just four triplets in the sample.
We define HI deficiency <cit.> to be the logarithmic decrement between the predicted and observed HI mass, that is, HI-$\mathrm{def} = \log M_\mathrm{HI,pred} - \log M_\mathrm{HI,obs}$, where the predicted HI mass is found from the B-band luminosity and the scaling relation of <cit.>, measured based on isolated galaxies in the AMIGA <cit.> sample. The AMIGA sample offers a benchmark of the HI content of galaxies in the (near) absence of interactions. It is also an optically selected sample (as are HCGs) rather than an HI-selected sample. Thus, the <cit.> relations offer a well-suited metric for HCGs, whereas other scaling relations in the literature <cit.> would be less appropriate for this particular use case and could introduce a bias in the measures of HI deficiency.
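Once a scaling relation is adopted, the deficiency calculation is essentially a one-liner. In the sketch below, the slope and intercept are placeholder values only, not the actual coefficients of the cited AMIGA $L_{B}$–$M_\mathrm{HI}$ relation.

```python
# Placeholder coefficients for a linear log-log scaling relation
# log10(M_HI) = SLOPE * log10(L_B) + INTERCEPT; the true values come
# from the cited relation and are not reproduced here.
SLOPE, INTERCEPT = 0.94, 1.0

def predicted_log_mhi(log_lb):
    # Predicted log10(M_HI / Msun) from the B-band luminosity
    return SLOPE * log_lb + INTERCEPT

def hi_deficiency(log_mhi_pred, log_mhi_obs):
    # HI-def = log M_pred - log M_obs; positive values mean HI-deficient
    return log_mhi_pred - log_mhi_obs
```

For instance, a group observed 0.5 dex below its predicted mass has an HI deficiency of 0.5.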
We note, however, that no scaling relation will function perfectly as intended in a compact group environment. The myriad of ongoing interactions, starbursts, shocks, etc. in these groups is bound to influence the B-band luminosities of the galaxies and complicate the use of this scaling relation. We chose to adopt B-band luminosity as the predictor for HI mass as the alternative, optical diameter <cit.>, can be extremely uncertain in systems where tidal effects are significant. Although also affected by tidal interactions, B-band luminosity is usually dominated by the bulge and is therefore expected to be less severely impacted in most cases. Even so, we caution the reader that the uncertainty estimates on the predicted $M_\mathrm{HI}$ values in Tables <ref> & <ref> are based on the scatter of values for isolated galaxies and likely underestimate the scale of the uncertainties for HCG galaxies.
The B-band magnitudes (and morphological types) of the member galaxies (Table <ref>) for use with this scaling relation are taken from <cit.>. In cases where we have identified new group members, these values were taken from the HyperLeda database <cit.>. To perform detailed photometry, <cit.> imaged HCG galaxies with the Canada-France-Hawaii Telescope and separated overlapping galaxies. For pairs the separation was performed by slicing the image at the nuclei of each galaxy and apportioning the light in between based on the magnitude of the non-overlapping side of each galaxy. For high-multiplicity blends the light in overlapping isophotes was divided amongst all the galaxies based on their magnitudes inside their non-overlapping isophotes. Finally, <cit.> corrected these values for internal and Galactic extinction following <cit.>.
For uniformity, we use the same relation for all galaxies with no consideration of morphological type. Although <cit.> also fit separate relations for early, late, and very late types, these relations were considerably more uncertain than the main relation, and <cit.> largely advised against their use. We note that this may lead to a slight bias towards higher HI deficiencies for groups dominated by early-types. In practice, however, these are mostly Phase 3 groups that have high deficiencies regardless of the exact relation used. In addition, as these groups are likely more evolved than those dominated by late-types, and as we are interested in the comparison of their original and current HI contents, it is unclear what morphological types should be used for them in the event that a morphology-dependent relation were to be used, further supporting the choice to ignore morphological type when estimating HI deficiency.
We find a result similar to <cit.>: the mean HI deficiency is 0.34 dex. We also note that the median deficiency is a fairer estimate of what is typical of the population, both because deficiency is a non-linear quantity and because it allows for the inclusion of upper limits from undetected groups. This value is 0.33 dex, almost identical to the mean value.
This close agreement between the average HI deficiency that we find and that found by <cit.>, 0.40 dex, disguises some fundamental differences between the results of that work and this work. The first is that the <cit.> $L_{B}$–$M_\mathrm{HI}$ scaling relation provides a less biased prediction of the HI mass of a galaxy than the equivalent relation from <cit.>. As explained in that work, this is primarily driven by the improved fitting methodology employed by <cit.> and also, to a lesser extent, by the larger and more morphologically diverse sample used. This results in predicted HI masses that are significantly lower (on average) for the same groups compared to <cit.>, as shown in Figure <ref>. However, even our predicted masses based on the <cit.> relation differ from those of <cit.>, likely a result of changes in group membership and the use of a corrected form of the <cit.> relation <cit.>. In addition, our measurements of the HI mass of HCGs (after correcting for the different distance estimates used) are typically about 0.1 dex larger than those of <cit.>. This is likely because the single-dish observations of many HCGs had primary beams that were either too small to include all the flux from the group, or the flux from the outlying regions of a group was down-weighted by the declining beam response. Serendipitously, these effects appear to have approximately cancelled each other out, resulting in a very similar average HI deficiency being calculated.
§ DISCUSSION
In this section we discuss how groups are classified based on their HI and IR morphology. In addition, we assess whether the hypothesis that HI deficiency should act as a proxy for HI morphological phase is supported by the observations that we have presented.
§.§ HI morphological classification scheme
<cit.> first proposed an evolutionary sequence for HCGs based on their HI morphology. One of the goals of this work is to compare that morphological sequence to the HI deficiencies of HCGs. With the uniform analysis of all available VLA observations of HCGs we have now greatly expanded the sample for which this comparison is possible, while the HI deficiency has been revised for all these groups based on a more self-consistent definition <cit.>, as described above. However, before proceeding with this comparison, we first make slight adjustments to the <cit.> scheme based on our expanded findings on the HI morphology of our sample of HCGs.
The original scheme split groups into three main categories: Phases 1, 2, and 3. Phase 1 groups were those where the majority (70% or more) of the detected HI was in features associated with the galaxies themselves. Phase 2 groups were those where a significant fraction (30–60%) of the detected HI was in extended features. Finally, Phase 3 groups were those that were either undetected in HI or whose HI was predominantly (60% or more) in extended features, rather than in the galaxies themselves. The prototypical examples of each phase are HCGs 88, 16, and 92, in order of increasing evolutionary phase.
The first modification is very minor: we set the thresholds for being classified as Phase 2 or 3 at 25% and 75% of the detected HI in extended features, respectively. These are a slight decrease and increase, respectively, compared to <cit.>, and a reflection of the apparent breaks between these classes in our data. In particular, the HI morphology of groups like HCGs 31 and 58 (Figures <ref> & <ref>) seems to fit best into Phase 2, as they have both significant extended features and clear HI concentrations remaining in the galactic discs. However, with the new analysis presented here and the original classification boundaries, these would have been classified as Phase 3. We note that the exact location of these thresholds is somewhat subjective, and there are cases where a particular HCG is essentially on the threshold and could be classified in one of two ways (e.g. HCG 19 or 91).
The second modification is more significant and concerns distinguishing groups which are Phase 1 from those that are Phase 3. <cit.> split the Phase 3 classification into two subcategories, 3a and 3b. In that scheme, 3a represented groups with either very high fractions of HI emission in extended features or no HI detection at all. We keep this subcategory essentially unchanged (except for the threshold mentioned above). Phase 3b was used to denote groups that appeared to be evolving within a common HI envelope.
However, improved observations and analyses <cit.> have demonstrated that no groups (with the possible exception of HCG 49) convincingly fit this scenario, and thus the classification is no longer required. In this work we add a new designation, Phase 3c, to indicate groups that would be classified as Phase 1, but where only a single galaxy is detected in HI. There are a large number of groups where only one member of the group is detected in HI, for example, HCGs 15, 22, 33, 48, 56, 68, 90, 93, and 97. The detected galaxy is always a late-type, while the remaining group members are typically dominated by early-types. In at least some, if not all, of these cases it is likely that this lone detection is a recent addition to the group, and effectively leads to a rejuvenation of the HI content of the group. Therefore, these groups are likely in a late stage of their evolution, but have recently gained a new member, hence the classification as Phase 3.
We note that the possibility of new members is not limited to Phase 3 groups. For example, <cit.> found that HCG 16 has a recent addition to the group that is partly responsible for its extreme HI morphology (Figure <ref>). However, in most Phase 1 and 2 groups it would be difficult to identify new members as these groups already typically host several members with normal HI reservoirs. Indeed, in Phase 1 it is not clear if the concept of a new member even has much meaning, as the lack of tidal features implies that the galaxies only recently entered into a compact configuration. In the case of Phase 3 groups, however, the incorporation of a new HI-bearing member is striking, as these are clear outliers both in terms of their HI content and morphology.
As noted by <cit.>, the abundance of lenticular galaxies increases with phase, with Phase 3a groups being $\sim$30% lenticulars, compared to $\sim$10% in Phase 2 groups. As has been argued previously <cit.>, this increase in the abundance of lenticulars is likely the result of spirals being stripped, and the increase in their abundance from Phase 2 to 3 is an indication that the latter phase is more evolved. To estimate the abundance of lenticulars in Phase 3c we deducted one member per group (i.e. removing the assumed recent addition) before calculating the fraction, which resulted in a value of $\sim$30%, supporting the idea that these are also evolved groups, just with one new (gas-rich) member.
The diffuse light seen in Phase 3c groups also supports the idea that these are evolved groups. HCG 90 is an extreme case where over a third of its total light is in a diffuse form <cit.>. In HCG 22 the DECaLS images show clear shells and arcs around HCG 22b, signs of past interactions, and potentially even more extended (and extremely faint) features to the NW. HCG 93a also shows an extraordinary complex of diffuse features, and signs of similar (though less impressive) diffuse features are evident in all Phase 3c groups with DECaLS imaging. We are therefore confident that these are all evolved systems that have simply gained a new, gas-rich member.
In summary, our classification scheme is as follows:
* Phase 1: $f_\mathrm{exfs} < 0.25$ and $N_\mathrm{det} > 1$
* Phase 2: $0.25 < f_\mathrm{exfs} < 0.75$
* Phase 3a: $f_\mathrm{exfs} > 0.75$ and/or $N_\mathrm{det} = 0$
* Phase 3c: $f_\mathrm{exfs} < 0.25$ and $N_\mathrm{det} = 1$
The fraction of HI emission in extended features, $f_\mathrm{exfs}$, is the fraction of the total HI emission in each group that was assigned to an extended feature (e.g. a gas tail or bridge) in Section <ref>. $N_\mathrm{det}$ is the number of galaxies in each group that were detected in resolved HI imaging.
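The scheme can be expressed as a simple decision function of $f_\mathrm{exfs}$ and $N_\mathrm{det}$. The text does not specify how values exactly on a threshold are assigned, so the boundary handling below is a choice.

```python
def classify_phase(f_exfs, n_det):
    """Morphological phase from the fraction of HI in extended features
    (f_exfs) and the number of galaxies detected in resolved HI imaging."""
    if n_det == 0 or f_exfs > 0.75:
        return "3a"  # HI-devoid, or HI predominantly in extended features
    if f_exfs > 0.25:
        return "2"   # significant fraction of HI in extended features
    # f_exfs <= 0.25: HI mostly still in the galaxies themselves
    return "3c" if n_det == 1 else "1"
```

Applied to the values in the table above, this reproduces, e.g., Phase 1 for HCG 88 ($f_\mathrm{exfs}=0.02$), Phase 2 for HCG 16 (0.41), Phase 3a for HCG 92 (1.0), and Phase 3c for HCG 90 (0.0, one detected member).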
Figure <ref> shows the early-type fraction of group members against the HI deficiency of each group. Although there is no simple correspondence between the two, we see that almost all groups classified as either Phase 3a or 3c have 50% or more of their members as early-type galaxies, and the reverse is true for Phases 1 and 2, even though no explicit reference to the morphological type of members is made in the classification scheme above. In general the Phase 1 and 2 HCGs lie towards the lower-left corner of this figure (low HI deficiencies and few early-type members). The HI deficiencies of the Phase 3 HCGs cover a broad range (extending to much higher values than those of the Phase 1 and 2 groups), but these groups are confined almost exclusively to the top half of the plot. This figure thus supports the notion that the evolutionary sequence above does in some way correspond to the temporal evolution of HCGs, under the assumption that interactions in these groups drive morphological change <cit.>.
The classification of each group is listed in Table <ref> and plotted in Figure <ref>. Figure <ref> shows the histogram of the HI deficiencies of all HCGs in our sample, split into the three phases. We see that the Phase 1 and Phase 2 groups have very similar distributions of HI deficiencies, with the typical value being just marginally deficient. The Phase 3 groups extend from this range to deficiencies of about 1.5 dex, and two groups are entirely undetected and have only limits on their deficiencies. These findings are discussed further in Section <ref>.
§.§ Comparison with IR activity
In addition to highlighting the morphological phase and deficiency of HCGs, Figure <ref> also shows the IR classification of <cit.> for each HCG. Star symbols indicate groups where more than 50% of the member galaxies are classified as IR active, crosses indicate groups with no dominant classification, squares indicate groups where more than 50% of members are classified as quiescent or `canyon' (IR transition) galaxies, and rings indicate groups where too many galaxies are missing IR classifications in <cit.> for them to be conclusively assigned to any of the three other categories.
Curiously, there is not a strong correspondence between the HI morphology and deficiency and the IR classifications. The clearest trend is that there are almost no groups dominated by quiescent galaxies in Phase 2. There is some correlation between galaxies being HI-rich and being IR active (Table <ref>), but there are also numerous cases of active galaxies that are undetected in HI, as well as quiescent or canyon galaxies that have relatively normal HI reservoirs. We note that these results are somewhat at odds with <cit.>, who find a stronger correlation between the $g-r$ colours of galaxies in compact groups and the global HI content of the groups. However, the finding that there are several galaxies that appear to strongly deviate from the expected correlation between activity and gas content is in common with that work. We also note that by defining HI content in terms of $M_\mathrm{HI}/M_\ast$, rather than deficiency, <cit.> likely incurred a considerable bias, as lower stellar mass galaxies in the field are expected to be more HI-rich (by that metric) as well as of later morphological type <cit.>.
We find that the majority of the IR-active groups are actually those classified as Phase 2 by HI morphology, not those in Phase 1 (although nearly half of the Phase 1 groups are unclassified). This may indicate that the ongoing interactions that lead to large amounts of HI being spread throughout the IGrM more or less ensure that the participating galaxies will be actively forming stars. However, it should also be noted that some of the Phase 1 groups with the richest HI contents are not classified as active, for example, HCG 23. Thus, provided a group is not strongly HI-deficient, the IR classification appears to be more closely related to the HI morphology of the group than to the exact quantity of neutral gas that is available.
There are some groups that are relatively HI-deficient (e.g. HCGs 7 and 40) yet are still IR active. Both HCG 7 and HCG 40 contain a mixture of quiescent and active galaxies (though active galaxies dominate). In all cases the quiescent galaxy(ies) are undetected in HI, immediately raising the HI deficiency of the group. However, in some cases the active galaxies are also HI-deficient themselves. Although we caution against over-interpreting the HI deficiencies of individual galaxies (which are only expected, in ideal circumstances, to be accurate to 0.2 dex), some of these groups may be in the process of losing their HI (potentially accelerated by interactions) but have not yet lost their molecular gas. Thus, interactions are still able to promote elevated levels of star formation activity. Figure 1 of <cit.> shows that IR activity and molecular gas richness are highly correlated in HCGs. Furthermore, <cit.> argue that the ongoing tidal interactions in HCGs might also enhance the efficiency of the conversion from HI to H$_2$. All of the active galaxies in HCGs 7 and 40 have significant molecular gas reservoirs, typically $M_\mathrm{H_2}/M_\ast > 0.1$, which is the threshold at which galaxies typically appear to transition from active to quiescent <cit.>. HI is generally much more loosely bound than molecular gas and is therefore lost first, meaning that HI-poor galaxies may still be quite H$_2$-rich and actively forming stars. However, this is presumably an unsustainable situation, as they will rapidly deplete or disperse their molecular gas reservoirs.
Phase 3 groups are mostly classified as quiescent (or unclassified), although there is one example of an IR-active Phase 3 group, HCG 56, which is a rather peculiar group containing mostly lenticular galaxies. The finding that Phase 3c groups are often quiescent reinforces our conclusion that most groups with a sole HI detection are evolved groups (regardless of the value of their HI deficiency).
§.§ Evidence for a diffuse HI component
<cit.> compared the HI spectra of HCGs from GBT and VLA observations and raised the possibility that, in some groups, flux missed by the VLA observations could be in the form of a diffuse component of the IGrM. In this work we have re-reduced all of the VLA observations which <cit.> compared to, and have attempted to recover as much (extended) flux as possible, using both multi-scale clean (which was not widely used when the original reduction was performed) for imaging and for masking. As discussed in Section <ref>, the vast majority of our VLA spectra agree remarkably well with the GBT spectra, with their slight discrepancies likely arising from small differences in calibration or minor (undetected) extensions of high column density features. In most groups there is thus little evidence for a significant diffuse HI component of the IGrM.
Having said this, there are some notable exceptions. As mentioned in Section <ref>, HCGs 30, 37, and 97 all show significant additional emission in their GBT spectra that does not trace the emission from the high column density HI features detected in the VLA data. In the cases of HCGs 30 and 37 this additional emission manifests as a spectrally broad and continuous feature where no emission is detected in the VLA data (Figures <ref> & <ref>).
Based on the rms noise levels in the VLA cubes, this emission should be detectable in theory (at least in the spectral range where it is brightest). However, this is only true if the emission were spatially concentrated at the scale of the VLA synthesised beam, else the effective rms noise level would be higher, decreasing the significance of the emission feature. Hence the suggestion of <cit.> that this may indicate a diffuse component. It is even possible that the emission is so spatially extended that it could be unrecoverable with the VLA. Both HCGs 30 and 37 were observed in DnC configuration, where the largest recoverable scale is expected to be $\sim$16. Given that this is larger than the maximum separation between the galaxies, it seems unlikely that this is the reason for the non-detection of this feature in the VLA data. However, if this were to be a limiting factor then the smaller minimum baselines of MeerKAT (20 m rather than 35 m) would be a means to resolve this issue. Furthermore, the increased sensitivity of MeerKAT, relative to the VLA, offers the current best possibility for detecting this `missing' emission with an interferometer and resolving its nature.
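The comparison of largest recoverable scales can be illustrated with the crude estimate that this scale goes roughly as $\lambda/b_\mathrm{min}$. This is only an order-of-magnitude sketch: the configuration-specific value quoted above comes from the detailed uv coverage, so the numbers will differ.

```python
import math

def largest_angular_scale_arcmin(b_min_m, wavelength_m=0.211):
    # Rough largest recoverable scale ~ lambda / b_min (radians),
    # converted to arcmin; 0.211 m is the 21 cm HI line wavelength.
    return math.degrees(wavelength_m / b_min_m) * 60.0
```

By this estimate, MeerKAT's 20 m minimum baselines recover scales almost a factor of two larger than the VLA's 35 m minimum baselines, consistent with the argument that MeerKAT is better placed to detect very extended, diffuse emission.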
§.§ Evolution of the HI content of HCGs
As noted above, there is little difference between the distributions of HI deficiencies for Phase 1 and Phase 2 groups (Figure <ref>). The apparent lack of a shift demonstrates that the processes that remove HI from the discs of galaxies and disperse it throughout the IGrM act on a timescale significantly shorter than the timescale on which HI is destroyed via evaporation or consumed through SF. As has been found by numerous works <cit.>, the HI content of galaxies is a function of their larger scale environment, which may act as a partial explanation of the large scatter in HI deficiencies seen across Phases 1 & 2. This scatter is sufficiently large that even some Phase 3 groups are not distinct (in terms of their HI deficiencies) from Phases 1 & 2. HCG 61 has over 90% of its HI emission in extended features, but has a negligible HI deficiency. Even the prototypical example of a Phase 3 group, HCG 92, only has an HI deficiency of 0.46 dex, which is equalled by a handful of Phase 1 and 2 groups. There is, though, a clear trend for most Phase 3 groups to be significantly more HI-deficient.
However, the increased HI deficiency of Phase 3 is not necessarily a one-way process. As indicated in Figure <ref> by the pink arrow, it appears that Phase 3c groups were likely previously Phase 3a groups, devoid of HI, but have gained a new gas-bearing member. In some cases that new member is so gas-rich that these groups attain relatively low HI deficiencies again, and would be difficult to distinguish from Phase 1 groups were it not for the fact that they are dominated by early-type galaxies and only one galaxy contains gas. However, we reiterate here that the morphological type of the galaxies is not used to define Phase 3c, which is based solely on the HI morphology of each group.
In some exceptional cases it appears these groups can undergo interactions with the newcomer galaxy, such that they effectively re-enter Phase 2 and go around the cycle again. An example is HCG 79, which is classified as Phase 2 owing to 42% of its HI being in extended features. However, as can be seen in Figure <ref>, most of this emission clearly originated from HCG 79d, the only late-type in the group. We note that some of the extended features in this case are close to the noise level of the data, and that there is a notable difference between the GBT spectrum and that of the VLA (Figure <ref>), perhaps indicating that there could be further extended features associated with HCG 79a. Regardless, it remains an evolved group that appears to have gained a new gas-rich member and now has a large fraction of its gas in extended features, analogous to a Phase 2 group.
This new step in the morphological sequence (Phase 3c) complicates the assumption that HI deficiency increases with increasing morphological phase. This hypothesis implicitly assumes that groups cannot regain neutral gas after they have become significantly evolved. However, the large number of groups dominated by early-type galaxies, but with a single gas-rich, late-type member, casts serious doubt on this assumption. Though less likely, it is even possible that some groups containing a mixture of early and late-type galaxies (e.g. HCGs 10 & 25) represent the merger of two groups, one gas-rich and one evolved and gas-poor. In our scheme, such objects would likely be classified as either Phase 1 or Phase 2. Regardless of whether this more contrived scenario occurs frequently, the finding that the HI deficiency of an individual compact group does not necessarily increase monotonically with time means that, in general, it cannot be used as a proxy for evolutionary phase. Even with the Phase 3c groups excluded, HI deficiency is not a useful proxy, as there appears to be more scatter than evolution in deficiency values across the remaining phases. Only the most HI-deficient Phase 3a groups would be distinguishable with such a proxy.
§ CONCLUSIONS
We have reduced and analysed archival VLA HI observations of 38 HCGs and determined their morphological phase based on an adaptation of the <cit.> evolutionary sequence. As this sequence is thought to represent the temporal evolution of HCGs, it was expected that HI deficiency would act as a proxy for the evolutionary phase of an HCG, as gas is consumed or destroyed as the group evolves. However, we find that HI deficiency is a very poor proxy for the morphology of HCGs, with the exception that most Phase 3 groups are significantly deficient in HI.
The reason for this poor correspondence appears to be two-fold. Firstly, there is very little difference between the distributions of HI deficiency of Phase 1 and Phase 2 groups. This suggests that the initial HI content of groups is the primary factor determining their deficiencies in these two phases, and that the enormous morphological changes that occur between Phases 1 and 2 proceed on a timescale that is too short for a significant quantity of HI to be either consumed or destroyed. Secondly, the hypothesis that HI deficiency should be a proxy for morphology relies on the assumption that gas replenishment is largely negligible. However, we find that a large fraction of the HCGs, about 25%, appear to be evolved groups dominated by early-type galaxies with a lone, late-type, gas-rich member, presumably a newcomer. In this way, once highly HI-deficient groups can have their gas content rejuvenated, confounding the use of HI deficiency as a proxy for evolutionary phase. We add a new sub-phase to the <cit.> evolutionary sequence to classify such groups.
We also find that there is no clear one-to-one correspondence between the HI content of HCGs and the IR activity of their galaxies. However, most groups that are dominated by active galaxies are in Phase 2 of the evolutionary sequence, with ongoing interactions likely driving the ubiquitous IR activity. Phase 3 groups are likely to be dominated by passive galaxies, as is expected for these gas-poor groups. However, Phase 1 groups are a mixture of active, intermediate, and even quiescent IR classifications, as well as several groups with missing classifications.
Finally, we searched for evidence of a potential diffuse HI component in HCGs, as proposed by <cit.>. While some cases appear less compelling with our revised reduction of the VLA observations, the discrepancies between the spectra of HCGs 30 and 37 in VLA versus GBT observations remain difficult to explain without the presence of a diffuse component. Deeper interferometric observations with improved surface brightness sensitivity and $uv$-coverage might be capable of revealing this component and conclusively demonstrating its reality.
We thank the anonymous referee for their helpful comments that improved this paper. MGJ thanks the NRAO helpdesk for their rapid and insightful support, in particular with some of the oldest and most problematic VLA datasets.
MGJ was supported by a Juan de la Cierva formación fellowship (FJCI-2016-29685) from the Spanish Ministerio de Ciencia, Innovación y Universidades (MCIU) during much of this work. We also acknowledge support from the grants AYA2015-65973-C3-1-R (MINECO/FEDER, UE) and RTI2018-096228-B-C31 (MCIU), and from the grant IAA4SKA (Ref. R18-RT-3082) from the Economic Transformation, Industry, Knowledge and Universities Council of the Regional Government of Andalusia and the European Regional Development Fund from the European Union. This work has been supported by the State Agency for Research of the Spanish MCIU “Centro de Excelencia Severo Ochoa” programme under grant SEV-2017-0709.
This work used the following libraries and packages: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and Anaconda (<https://anaconda.com>).
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID 2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Prop. ID 2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; Prop. ID 2016A-0453; PI: Arjun Dey). This research has made use of the NASA/IPAC Extragalactic Database, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. The original description of the VizieR service was
published in <cit.>. This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is <http://www.sdss.org/>.
§ DISCREPANCIES WITH PREVIOUSLY PUBLISHED VLA DATA
There are some other notable differences between the VLA data presented here and in <cit.>. In particular, the spectra of HCGs 15, 23, and 31 all differ significantly. HCG 15 has already been discussed in Section <ref>. HCG 23 is essentially undetected in the VLA data presented in <cit.>, and the bandwidth shown is too narrow to include most of the group's emission, whereas here we find strong detections of four group members and a close match with the GBT spectrum. It is unclear why this difference exists, as the observations for this group were completed in 1990 and thus the same data were presumably used in both cases. In the case of HCG 31 the difference is primarily in the flux scale. <cit.> found that this group was missing a significant amount of emission in its VLA spectrum; however, we find a very close match. We took no special steps in the reduction of this group and used the standard flux calibration models in CASA. There was likely an error in the calibration of these data in their original reduction, but at this point it is difficult to know where this occurred.
The final spectrum of HCG 58 is also quite different in the two works, with the <cit.> spectrum matching the GBT spectrum somewhat better. In this case it is likely that the exact form of the correction for the GBT beam response is most responsible for the difference, as much of the flux detected in the VLA cube is near or beyond the GBT beam, resulting in enormous correction factors.
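The scale of these correction factors can be illustrated with a simple sketch. The Gaussian beam shape and the adopted FWHM (9.1 arcmin, roughly the GBT beam at 21 cm) are illustrative assumptions here, not the actual correction used in either work:

```python
import math

def gaussian_beam_response(offset_arcmin, fwhm_arcmin=9.1):
    """Idealised Gaussian beam attenuation at a given angular offset."""
    return math.exp(-4.0 * math.log(2.0) * (offset_arcmin / fwhm_arcmin) ** 2)

def beam_correction(offset_arcmin, fwhm_arcmin=9.1):
    """Factor by which observed flux must be multiplied to undo beam attenuation."""
    return 1.0 / gaussian_beam_response(offset_arcmin, fwhm_arcmin)

# Emission near the beam centre needs almost no correction...
print(round(beam_correction(1.0), 2))    # 1.03
# ...but flux at or beyond the FWHM requires enormous, unstable factors.
print(round(beam_correction(9.1), 1))    # 16.0 at one FWHM
print(round(beam_correction(13.65), 1))  # 512.0 at 1.5x the FWHM
```

Small uncertainties in the assumed beam shape are amplified by these large factors, which is why the two reductions can disagree for flux far from the pointing centre.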
§ REPRODUCIBILITY
In addition to the analysis of the HI content of HCGs, we have followed best practices to support the reproducibility of the software methods used in this study. In this section we provide an overview of how we have tried to accomplish this, the barriers we met, and some lessons learnt.
§.§ Discussion of our approach
For our pilot study of HCG 16 <cit.> we constructed a full end-to-end pipeline (<https://github.com/AMIGA-IAA/hcg-16>), with software containers to preserve the exact software environment (in addition to the scripts themselves) in order to maximise the longevity of the pipeline. In the present case, because of the larger scale of the project, the increased volume of data, and the computation time it would take to reprocess all steps, we have instead aimed to create a flexible pipeline for processing the data of individual groups or VLA observing projects. This allowed the parameters of the data reduction (e.g. for flagging, continuum subtraction, and imaging) to be tuned for each data set and group, and the processing of each data set to be repeated and modified independently. This approach has clear advantages, but it also means that modifying the data reduction is a more manual process, as the pipeline must be executed separately for each VLA project or each HCG. In addition, the choice not to run the pipeline scripts in containers aids short-term simplicity of execution, but may result in poorer long-term preservation.
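A minimal sketch of how such per-group tuning might be organised (the parameter names, values, and `run_pipeline` driver below are hypothetical illustrations, not the actual pipeline interface):

```python
# Hypothetical defaults and per-group overrides: each VLA project/HCG can be
# tuned and re-processed independently of the others.
DEFAULTS = {"flag_quack_s": 10.0, "contsub_fitorder": 1, "imaging_robust": 0.5}

PER_GROUP = {
    "HCG16": {"contsub_fitorder": 2},
    "HCG31": {"flag_quack_s": 5.0, "imaging_robust": 2.0},
}

def run_pipeline(group, stages=("flagging", "continuum_subtraction", "imaging")):
    """Stand-in driver: merge defaults with per-group overrides and record each
    stage. In the real pipeline each stage would invoke the reduction software;
    here we only return the merged parameters and a log of the stages run."""
    params = {**DEFAULTS, **PER_GROUP.get(group, {})}
    log = [f"{group}: ran {stage}" for stage in stages]
    return params, log

# Each group is processed by a separate invocation, so one group's reduction
# can be repeated or modified without touching the others.
params, log = run_pipeline("HCG31")
```

The trade-off noted in the text is visible here: flexibility per group, at the cost of one manual invocation per group rather than a single end-to-end run.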
Our pipeline for processing the VLA data is available at <https://github.com/AMIGA-IAA/hcg_hi_pipeline>. The parameters and log files for our actual execution of the pipeline are stored in a separate repository, <https://github.com/AMIGA-IAA/hcg_global_hi_analysis>. Instructions for the installation and use of these resources are described in the repositories themselves. In addition, the second repository includes scripts and notebooks to recreate all the plots and figures in this paper. These are based on our reduction of the data, and we have therefore also included all the final data cubes, moment zero maps, and cubelets[It should be noted that a software bug produces a single bad pixel at the origin of each cubelet, which our scripts correct for.] of separated HI features in a repository, <https://doi.org/10.5281/zenodo.6366659>. The analysis repository also contains links to the specific versions of the software packages that were used to reduce and analyse the data.
Unfortunately, one step of the analysis process remains not fully reproducible: the separation of HI features. This separation is a necessarily manual and subjective process that is not straightforward to record fully in our pipeline. In the other steps of data reduction and analysis we have endeavoured to provide all the relevant parameter files such that another astronomer could modify and re-run these steps in a relatively approachable manner. However, for the separation of HI features we have instead opted for preservation and exact reproduction, rather than full, independent reproducibility. We therefore note that even if another astronomer were to repeat (and modify) all the steps of our pipeline and then run our notebooks, their modifications would only be carried forward for the properties of the groups that do not rely on the separated-feature cubelets. In general these are the global properties, such as the total HI mass and deficiency of each group, not those that depend on individual objects, such as the fraction of HI emission in extended features or the HI masses of individual galaxies.
All of the VLA data presented in this work are publicly available in the VLA archive. However, this archive cannot be queried automatically for data downloads, and thus the data for any group must be downloaded manually.
The logs for the reduction of each data set include the names of all files that are imported, so that the exact files can be obtained from the VLA archive, thus allowing anyone to repeat and modify our reduction (using our pipeline, or otherwise) from the beginning. In general, we anticipate that end users will only download and reprocess data for the individual groups they are interested in. Therefore, it does not make sense to re-host the entire data set <cit.> to allow automated access. In addition, creating additional, distributed copies of an archive is inefficient and impractical given the potential volume of data (for a generic VLA project). However, we point out that a more straightforward means of identifying and locating individual VLA archive files, such as a digital object identifier, would streamline the process of (re-)acquiring the raw data.
For those who do not wish to repeat the full reduction, we advise downloading the reduced data products from the repository. We have also included the relevant GBT HI spectra of HCGs from <cit.>, as the digital versions of these spectra have not previously been publicly released. Finally, we have included the optical images from DECaLS, SDSS, and POSS that we use. Although these can all be obtained from public services, because of the small volume of these data and because some services require manual interaction, we have chosen to re-host them. These data products can all be used with the analysis notebooks to reproduce or modify any of the figures or values presented in the paper. In our prior work with HCG 16 we elected to enable cloud execution of these notebooks using the Binder service (<https://mybinder.org/>). The current analysis notebooks can also be executed using Binder, and we do recommend attempting this for those simply wishing to reproduce a few figures or a table; however, we also note that given the volume of data the container may fail to build or launch, and that success or failure may depend on the end user's location as well as the current load on the infrastructure where Binder attempts to allocate resources.
When generating plots and tables, where possible we have tried to avoid using locally stored data that another user would have to locate, download, and extract manually. Instead we have relied heavily on <cit.> to query data tables stored at the Centre de Données astronomiques de Strasbourg (CDS, <https://cds.u-strasbg.fr/>). In certain cases values were not available from CDS and we instead queried the NASA/IPAC Extragalactic Database (NED, <https://ned.ipac.caltech.edu/>). In the rare cases where values were not available from either service, they were added manually from other sources, and these additions are preserved explicitly in the notebooks.
Making use of querying services such as CDS and NED is very convenient and avoids the issue of each paper needing to replicate every data table on which they rely in order to be reproducible. However, it also raises a separate issue, which is that if any of the tables we query were to be amended or corrected then the final values we present could be altered. While this may be the desired functionality in some cases, it is also valid to want to preserve the exact form of the analysis. In practice, given the age of the data tables that we query, they are extremely unlikely to be updated. However, it would be an improvement to be able to specify whether to query from a specific version of a table (e.g. corresponding to a version on a particular date) or to simply query the latest version. As far as we are aware this option does not currently exist, but it would allow for an end user to select either functionality, depending on their aims.
To allow for either functionality, we have included in the repository the final data tables that our notebooks produce (after querying CDS and NED). If the CDS/NED tables are amended, then these fixed versions of our tables can be used instead of regenerating them with the analysis notebooks, thereby preserving the original form of the analysis.
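The two modes described above (regenerating tables via live queries versus reusing the preserved copies) can be sketched as a small caching wrapper. The `query_live` stub, file layout, and column names below are illustrative assumptions, not the actual notebook code:

```python
import csv
import os

def query_live(table_id):
    """Stand-in for a live CDS/NED query (in practice e.g. via astroquery)."""
    # Illustrative rows only; a real query would return the remote table.
    return [{"name": "HCG2a", "cz": "4326"}, {"name": "HCG2b", "cz": "4366"}]

def get_table(table_id, cache_dir="tables", frozen=True):
    """Return the preserved copy of a table if present (and frozen=True);
    otherwise query live and preserve the result for future runs."""
    path = os.path.join(cache_dir, f"{table_id}.csv")
    if frozen and os.path.exists(path):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))
    rows = query_live(table_id)
    os.makedirs(cache_dir, exist_ok=True)
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
    return rows
```

With `frozen=True` the analysis reproduces the preserved tables exactly even if the upstream CDS/NED tables are later amended; with `frozen=False` it always reflects the latest remote versions.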
§.§ Outlook
Our efforts to maximise the reproducibility of this work have taught us certain lessons regarding the storage and processing of astronomical data. We do not claim to be the first to discuss any of these issues <cit.>; mostly these are lessons re-learnt or reinforced, rather than genuinely novel.
One of the least streamlined steps in our reproducibility framework is actually the first: the retrieval of the raw data from the VLA archive. This is mostly because the archive cannot be queried automatically. As noted above, this design choice is understandable given the considerable data volume that VLA observations can represent. However, the issue is only likely to get worse in the future, as new facilities such as Square Kilometre Array (SKA) precursors (and soon the SKA itself) are already greatly increasing the volume of this type of data. The solution to this problem, which has been discussed many times before, is to stop moving the data (via the internet or on hard disks) to the home institution of each end user, and instead to have users process it in a central computing facility that also hosts the data archive. In our case this would prevent the disconnect arising from the raw data being stored in a completely different system from where they are processed. It would likely also make sense for this same facility to act as an archive for the final data products that end users produce when using the computing resources to reduce and analyse their data. This is essentially a (rudimentary) description of the proposed structure of SKA Regional Centres (e.g. <https://aussrc.org/wp-content/uploads/2021/05/SRC-White-Paper-v1.0-Final.pdf>).
This does not, however, address the issue of additional external data that are required for certain analyses. The need for such data typically arises when performing multi-frequency analyses, as observations at different wavelengths are often stored in different archives and can follow different standards. In our case, such data are mostly in the form of optical images from large sky surveys (POSS, SDSS, and DECaLS), as well as data tables that we query from CDS and NED. The need for such multi-frequency comparisons and analyses means that the transfer of some data is inevitable. However, typically the external data will already be reduced and will therefore be much smaller in volume than in their raw form, and thus represent less of an issue. If the data portals at all wavelengths have automatically accessible interfaces and follow common standards (such as those described by the International Virtual Observatory Alliance), then the inclusion of external data resources would be further streamlined.
Unlike in our previous work with HCG 16 <cit.>, we have not constructed containers for the software. The reduction and analysis software is publicly available, with its version history, and we therefore consider it a well-preserved resource that will likely remain possible to build and install for several years; in particular, the National Radio Astronomy Observatory (NRAO) provides back-dated versions of CASA going back almost a decade. Aside from these software packages (and the tool used for the separation of HI features, which we return to below), our analysis was performed exclusively in Python, and we use conda to preserve and build the coding environment such that the results can be reproduced exactly. While these solutions are all adequate and suitable for our use case, software and hardware will always continue to change over time, and at present the best means of preserving software environments is likely through containers and virtual machines (VMs). However, for each project to build its own containers and VMs for all of its software is a formidable task. There are some community projects to assist with this in radio astronomy <cit.>; however, ultimately, once data reduction takes place predominantly in the cloud, it will likely become the role of computing centres to maintain such software containers and backwards compatibility. This will make achieving reproducibility (at least of the software environment) more straightforward for end users.
Finally, we return to the issue that the separation of HI features is not as fully reproducible as the other steps in our pipeline, because it requires manual (and subjective) input. This means that a new data product, created by modifying earlier data reduction steps, cannot automatically be propagated through the separation of features, and thus the results of any modifications can only be included automatically in some, but not all, of the final output of our analysis. Although in this particular scenario it is conceivable that some algorithm may in future be developed to perform an equivalent separation of features automatically, in research in general there will always be manual and subjective steps that cannot be fully included in an automated pipeline. In such cases we suggest that the best way forward is to strive for preservation and exact duplication, rather than full, independent, automated reproducibility. This means storing the data products before and after such steps, along with any logs and important intermediate data products that may be manually produced during the process. Although this does not allow another researcher to interact with and modify the original reduction and analysis process, it does allow them to reliably compare to the results of each step in the process.
In summary, in our opinion the way to maximise the reproducibility of future radio astronomy projects (and those in other domains) is for data, software, and computing resources to be provided in a central location that users can access through a unified portal. As users would develop their own analysis scripts and tools in this environment, integrating version control and standard astronomy tools and services into the platform would help to maximise its usability. Crucially, such a resource would remove much of the burden of “reinventing the wheel” that currently falls on individual teams, thus making reproducibility a much more readily achievable goal for most projects.
§ INDIVIDUAL GROUP MEMBERS
Table <ref> shows the predicted and observed HI masses of the individual galaxies in each HCG in our sample, as well as their IR classifications from <cit.>.
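The HI deficiency listed in the table below is simply the difference between the logarithmic predicted and measured HI masses (columns 9 and 10); a minimal check against the tabulated HCG 2 values:

```python
def hi_deficiency(log_mhi_pred, log_mhi_obs):
    """HI deficiency in dex: positive means less HI than predicted,
    negative means an HI excess relative to the prediction."""
    return log_mhi_pred - log_mhi_obs

# HCG2a: predicted 9.59, measured 9.93 -> deficiency -0.34 (HI excess)
assert round(hi_deficiency(9.59, 9.93), 2) == -0.34
# HCG2b: predicted 9.2, measured 8.9 -> deficiency 0.3 (HI deficient)
assert round(hi_deficiency(9.2, 8.9), 2) == 0.3
```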
HI deficiency of individual HCG members
HCG Name RA Dec Type IR class $cz_\odot$ $m_\mathrm{B,c}$ $\log M_\mathrm{HI,pred}$ $\log M_\mathrm{HI}$ HI-def
$\mathrm{deg}$ $\mathrm{deg}$ $\mathrm{km\,s^{-1}}$ $\mathrm{mag}$ $[\mathrm{M_\odot}]$ $[\mathrm{M_\odot}]$ dex
2 HCG2a 7.84955 8.46822 SBd Active 4326 $13.35 \pm 0.1$ $9.59 \pm 0.2$ 9.93 -0.34
2 HCG2b 7.82825 8.47518 cI Active 4366 $14.39 \pm 0.1$ $9.2 \pm 0.2$ 8.9 0.3
2 HCG2c 7.87232 8.40068 SBc Active 4235 $14.15 \pm 0.1$ $9.29 \pm 0.2$ 9.36 -0.07
7 HCG7a 9.8065 0.86363 Sb Active 4210 $12.98 \pm 0.1$ $9.73 \pm 0.2$ 9.05 0.68
7 HCG7b 9.82517 0.91273 SB0 Quiescent 4238 $13.74 \pm 0.2$ $9.44 \pm 0.21$
7 HCG7c 9.89604 0.85962 SBc Active 4366 $12.6 \pm 0.7$ $9.87 \pm 0.33$ 9.53 0.34
7 HCG7d 9.82911 0.89148 SBc Active 4116 $14.77 \pm 0.2$ $9.06 \pm 0.21$ 8.9 0.16
10 HCG10a 21.58943 34.70204 SBb Quiescent 5148 $12.62 \pm 0.1$ $9.86 \pm 0.2$ 9.8 0.06
10 HCG10b 21.41817 34.71284 E1 Quiescent 4862 $12.7 \pm 0.1$ $9.83 \pm 0.2$
10 HCG10c 21.57849 34.75395 Sc Active 4660 $14.07 \pm 0.1$ $9.32 \pm 0.2$
10 HCG10d 21.62865 34.67508 Scd Active 4620 $14.69 \pm 0.2$ $9.09 \pm 0.21$ 8.83 0.26
15 HCG15a 31.97094 2.16759 Sa Quiescent 6967 $14.29 \pm 0.2$ $9.77 \pm 0.21$
15 HCG15b 31.89217 2.11521 E0 7117 $14.74 \pm 0.1$ $9.6 \pm 0.2$
15 HCG15c 31.91572 2.14973 E0 Quiescent 7222 $14.37 \pm 0.1$ $9.74 \pm 0.2$
15 HCG15d 31.90634 2.18078 E2 Quiescent 6244 $14.65 \pm 0.2$ $9.63 \pm 0.21$
15 HCG15e 31.85567 2.11613 Sa Quiescent 7197 $15.56 \pm 0.1$ $9.29 \pm 0.2$
15 HCG15f 31.90778 2.19028 Sbc Active 6242 $15.74 \pm 0.2$ $9.22 \pm 0.21$ 9.09 0.13
16 NGC848 32.5735 -10.32145 SBab 3992 $13.37 \pm 0.1$ $9.57 \pm 0.2$ 9.64 -0.07
16 HCG16a 32.35312 -10.13622 SBab Active 4152 $12.76 \pm 0.15$ $9.79 \pm 0.21$ 8.97 0.82
16 HCG16b 32.3361 -10.13309 Sab Quiescent 3977 $13.27 \pm 0.15$ $9.6 \pm 0.21$ 8.66 0.94
16 HCG16c 32.41071 -10.14637 Im Active 3851 $13.1 \pm 0.08$ $9.67 \pm 0.2$ 9.46 0.21
16 HCG16d 32.42872 -10.18394 Im Active 3847 $13.42 \pm 0.09$ $9.55 \pm 0.2$ 9.64 -0.09
19 HCG19a 40.65998 -12.42128 E2 Quiescent 4279 $14.0 \pm 0.2$ $9.39 \pm 0.21$
19 HCG19b 40.67548 -12.42777 Scd Active 4210 $15.24 \pm 0.2$ $8.93 \pm 0.21$ 9.17 -0.24
19 HCG19c 40.69507 -12.39783 Sdm Canyon 4253 $14.46 \pm 0.1$ $9.22 \pm 0.2$ 8.62 0.6
22 NGC1188 45.93135 -15.48522 S0 2628 $14.32 \pm 0.39$ $8.98 \pm 0.25$
22 HCG22a 45.91056 -15.61369 E2 Quiescent 2705 $12.24 \pm 0.13$ $9.76 \pm 0.21$
22 HCG22b 45.8593 -15.66187 Sa 2625 $14.47 \pm 0.1$ $8.92 \pm 0.2$
22 HCG22c 45.85141 -15.62332 SBcd Canyon 2544 $13.9 \pm 0.7$ $9.14 \pm 0.33$ 9.41 -0.27
23 HCG23-26 46.78954 -9.48784 cI 5283 $19.26 \pm 0.5$ $7.58 \pm 0.27$ 9.22 -1.64
23 HCG23a 46.73285 -9.54406 Sab Quiescent 4798 $14.32 \pm 0.2$ $9.44 \pm 0.21$ 9.28 0.16
23 HCG23b 46.78976 -9.59332 SBc Active 4921 $14.42 \pm 0.2$ $9.4 \pm 0.21$ 9.74 -0.34
23 HCG23c 46.82692 -9.61328 S0 Quiescent 5016 $15.52 \pm 0.2$ $8.99 \pm 0.21$
23 HCG23d 46.73012 -9.62953 Sd Active 4562 $16.0 \pm 0.5$ $8.81 \pm 0.27$ 9.31 -0.5
25 HCG25a 50.17978 -1.10935 SBc Active 6285 $13.86 \pm 0.1$ $9.8 \pm 0.2$ 9.95 -0.15
25 HCG25b 50.18997 -1.04486 SBa Quiescent 6408 $14.45 \pm 0.2$ $9.58 \pm 0.21$ 9.55 0.03
25 HCG25d 50.16172 -1.0354 S0 6401 $15.92 \pm 0.1$ $9.03 \pm 0.2$
25 HCG25f 50.18977 -1.05425 S0 6279 $16.98 \pm 0.2$ $8.63 \pm 0.21$
26 HCG26a 50.4804 -13.65085 Scd Active 9678 $16.1 \pm 0.7$ $9.34 \pm 0.33$ 10.17 -0.83
26 HCG26b 50.48811 -13.64841 E0 9332 $15.61 \pm 0.2$ $9.52 \pm 0.21$
26 HCG26c 50.45648 -13.64554 S0 9618 $17.1 \pm 0.1$ $8.96 \pm 0.2$
26 HCG26d 50.48558 -13.64543 cI 9133 $15.81 \pm 0.2$ $9.44 \pm 0.21$
26 HCG26e 50.46302 -13.66473 Im Active 9623 $17.05 \pm 0.1$ $8.98 \pm 0.2$ 8.88 0.1
26 HCG26f 50.48992 -13.66392 cI 9626 $18.68 \pm 0.1$ $8.37 \pm 0.2$
26 HCG26g 50.48012 -13.64874 S0 9293 $17.4 \pm 0.7$ $8.85 \pm 0.33$
30 HCG30a 69.07744 -2.83129 SB- Quiescent 4697 $12.87 \pm 0.2$ $9.93 \pm 0.21$
30 HCG30b 69.12619 -2.86656 Sa Quiescent 4625 $13.65 \pm 0.1$ $9.64 \pm 0.2$
30 HCG30c 69.097 -2.79985 SBbc Active 4508 $15.06 \pm 0.1$ $9.11 \pm 0.2$
30 HCG30d 69.15276 -2.84302 S0 4666 $15.69 \pm 0.1$ $8.87 \pm 0.2$
31 HCG31g 75.43338 -4.28875 cI 4011 $15.11 \pm 0.5$ $8.98 \pm 0.27$ 9.0 -0.02
31 HCG31q 75.40974 -4.22245 cI 4090 $16.01 \pm 0.5$ $8.64 \pm 0.27$ 8.71 -0.07
31 HCG31a 75.41146 -4.25946 Sdm 4042 $14.83 \pm 0.2$ $9.08 \pm 0.21$ 9.22 -0.14
31 HCG31b 75.39756 -4.26401 Sm 4171 $14.31 \pm 0.2$ $9.28 \pm 0.21$ 9.13 0.15
31 HCG31c 75.40751 -4.2577 Im Active 4068 $12.5 \pm 0.5$ $9.96 \pm 0.27$ 8.96 1.0
Columns: (1) HCG ID number, (2) name of group member, (3) right ascension, (4) declination, (5) optical morphological Hubble type <cit.>, (6) infrared classification <cit.>, (7) heliocentric velocity <cit.>, (8) corrected apparent B-band magnitude <cit.>, (9) predicted HI mass based on B-band luminosity <cit.>, (10) measured HI mass from VLA observations, (11) HI deficiency.
33 HCG33a 77.69998 18.01867 E1 Quiescent 7570 $15.35 \pm 0.2$ $9.46 \pm 0.21$
33 HCG33b 77.69854 18.02957 E4 Quiescent 8006 $15.41 \pm 0.2$ $9.44 \pm 0.21$
33 HCG33c 77.68831 18.01956 Sd Active 7823 $16.4 \pm 0.7$ $9.06 \pm 0.33$ 10.02 -0.96
33 HCG33d 77.72327 18.03292 E0 7767 $16.73 \pm 0.2$ $8.94 \pm 0.21$
37 HCG37a 138.41438 29.99243 E7 Quiescent 6745 $12.97 \pm 0.2$ $10.27 \pm 0.21$
37 HCG37b 138.38589 29.99977 Sbc Canyon 6741 $14.5 \pm 0.2$ $9.7 \pm 0.21$
37 HCG37c 138.40514 29.99954 S0a 7357 $15.57 \pm 0.2$ $9.3 \pm 0.21$
37 HCG37d 138.39107 30.01442 SBdm Active 6207 $15.87 \pm 0.2$ $9.18 \pm 0.21$
37 HCG37e 138.3918 30.03975 E0 Active 6363 $16.21 \pm 0.1$ $9.06 \pm 0.2$
38 HCG38a 141.89442 12.26922 Sbc Active 8652 $15.25 \pm 0.1$ $9.61 \pm 0.2$
38 HCG38b 141.93169 12.28719 SBd Active 8635 $14.76 \pm 0.2$ $9.79 \pm 0.21$
38 HCG38c 141.93544 12.2879 Im Active 8692 $15.39 \pm 0.2$ $9.56 \pm 0.21$
40 HCG40a 144.72309 -4.84903 E3 Quiescent 6628 $13.44 \pm 0.2$ $10.11 \pm 0.21$
40 HCG40b 144.72942 -4.86608 S0 Quiescent 6842 $14.58 \pm 0.2$ $9.68 \pm 0.21$
40 HCG40c 144.72173 -4.85936 Sbc Active 6890 $15.15 \pm 0.2$ $9.47 \pm 0.21$ 9.29 0.18
40 HCG40d 144.73246 -4.83737 SBa Active 6492 $14.53 \pm 0.2$ $9.7 \pm 0.21$ 9.09 0.61
40 HCG40e 144.73111 -4.85775 Sc Active 6625 $16.69 \pm 0.2$ $8.89 \pm 0.21$
47 HCG47a 156.4431 13.71686 SBb Active 9581 $14.61 \pm 0.2$ $9.93 \pm 0.21$
47 HCG47b 156.4527 13.72793 E3 9487 $15.67 \pm 0.2$ $9.53 \pm 0.21$
47 HCG47c 156.45454 13.75306 Sc Active 9529 $16.63 \pm 0.2$ $9.17 \pm 0.21$
47 HCG47d 156.4495 13.74866 Sd Active 9471 $16.2 \pm 0.7$ $9.33 \pm 0.33$
48 HCG48a 159.44682 -27.08051 E2 Quiescent 2267 $13.21 \pm 0.2$ $9.33 \pm 0.21$
48 HCG48b 159.45641 -27.12175 Sc Active 2437 $14.63 \pm 0.2$ $8.79 \pm 0.21$ 8.77 0.02
49 HCG49SDSS1 164.16092 67.15169 cI 9950 $18.56 \pm 0.02$ $8.47 \pm 0.2$ 9.02 -0.55
49 HCG49a 164.17315 67.18515 Scd Active 9939 $15.87 \pm 0.2$ $9.48 \pm 0.21$
49 HCG49b 164.16325 67.18027 Sd Active 9930 $16.3 \pm 0.2$ $9.31 \pm 0.21$
49 HCG49c 164.15288 67.18123 Im Active 9926 $17.18 \pm 0.2$ $8.98 \pm 0.21$
49 HCG49d 164.13946 67.17808 E5 Active 10010 $16.99 \pm 0.2$ $9.06 \pm 0.21$
54 A11272054 172.36851 20.63192 cI 1397 $19.01 \pm 0.05$ $6.96 \pm 0.2$ 7.7 -0.74
54 HCG54a 172.31332 20.58358 Sdm Active 1397 $13.86 \pm 0.2$ $8.89 \pm 0.21$
54 HCG54b 172.30866 20.5815 Im Active 1412 $16.08 \pm 0.2$ $8.06 \pm 0.21$
54 HCG54c 172.31785 20.58646 Im Active 1420 $16.8 \pm 0.7$ $7.79 \pm 0.33$
54 HCG54d 172.31885 20.5886 Im 1670 $18.02 \pm 0.2$ $7.33 \pm 0.21$
56 HCG56a 173.19443 52.94092 Sc Active 8245 $15.24 \pm 0.2$ $9.57 \pm 0.21$ 9.89 -0.32
56 HCG56b 173.16862 52.95052 SB0 Active 7919 $14.5 \pm 0.2$ $9.84 \pm 0.21$
56 HCG56c 173.15296 52.94758 S0 Quiescent 8110 $15.37 \pm 0.2$ $9.52 \pm 0.21$
56 HCG56d 173.14712 52.94725 S0 Active 8346 $16.52 \pm 0.2$ $9.08 \pm 0.21$
56 HCG56e 173.13647 52.93923 S0 Active 7924 $16.23 \pm 0.1$ $9.19 \pm 0.2$
57 HCG57a 174.47409 21.98084 Sb Canyon 8727 $13.99 \pm 0.2$ $10.16 \pm 0.21$
57 HCG57b 174.43198 22.00933 SBb Quiescent 9022 $14.32 \pm 0.2$ $10.04 \pm 0.21$
57 HCG57c 174.46564 21.97384 E3 9081 $14.63 \pm 0.2$ $9.92 \pm 0.21$
57 HCG57d 174.47966 21.98564 SBc Active 8977 $14.51 \pm 0.2$ $9.96 \pm 0.21$
57 HCG57e 174.45486 22.02576 S0a Quiescent 8992 $15.37 \pm 0.1$ $9.64 \pm 0.2$
57 HCG57f 174.47535 21.93609 E4 Quiescent 9594 $15.22 \pm 0.1$ $9.7 \pm 0.2$
57 HCG57g 174.43585 22.02083 SB0 9416 $15.84 \pm 0.1$ $9.46 \pm 0.2$
57 HCG57h 174.46122 22.01187 SBb Active 9042 $16.75 \pm 0.1$ $9.12 \pm 0.2$
58 HCG58a 175.54619 10.2777 Sb Active 6138 $13.56 \pm 0.2$ $9.94 \pm 0.21$ 9.67 0.27
58 HCG58b 175.59827 10.2642 SBab Quiescent 6503 $13.4 \pm 0.2$ $10.0 \pm 0.21$
58 HCG58c 175.47156 10.30409 SB0a Quiescent 6103 $13.83 \pm 0.2$ $9.84 \pm 0.21$
58 HCG58d 175.52466 10.35087 E1 Quiescent 6270 $14.49 \pm 0.2$ $9.59 \pm 0.21$
58 HCG58e 175.52022 10.38388 Sbc Active 6052 $14.86 \pm 0.2$ $9.45 \pm 0.21$ 9.13 0.32
59 HCG59a 177.11466 12.72734 Sa Active 4109 $14.52 \pm 0.1$ $9.3 \pm 0.2$
59 HCG59b 177.08402 12.71615 E0 Canyon 3908 $15.2 \pm 0.2$ $9.04 \pm 0.21$ 9.35 -0.31
59 HCG59c 177.13534 12.70525 Sc 4347 $14.4 \pm 0.5$ $9.34 \pm 0.27$ 8.63 0.71
59 HCG59d 177.12802 12.72981 Im Active 3866 $15.8 \pm 0.2$ $8.82 \pm 0.21$ 9.16 -0.34
61 HCG61a 183.07729 29.1798 S0a Quiescent 3784 $12.82 \pm 0.1$ $9.94 \pm 0.2$
61 HCG61c 183.12894 29.16854 Sbc Active 3956 $13.53 \pm 0.1$ $9.67 \pm 0.2$ 9.13 0.54
61 HCG61d 183.11153 29.14912 S0 Quiescent 3980 $14.12 \pm 0.1$ $9.45 \pm 0.2$
62 HCG62a 193.27438 -9.20458 E3 Quiescent 4355 $13.36 \pm 0.2$ $9.73 \pm 0.21$
62 HCG62b 193.26862 -9.19889 S0 Quiescent 3651 $13.76 \pm 0.2$ $9.58 \pm 0.21$
62 HCG62c 193.29147 -9.19809 S0 Quiescent 4359 $14.57 \pm 0.2$ $9.28 \pm 0.21$
62 HCG62d 193.27827 -9.25811 E2 Canyon 4123 $15.81 \pm 0.1$ $8.81 \pm 0.2$
68 HCG68a 208.36072 40.28294 S0 Quiescent 2162 $11.84 \pm 0.08$ $9.87 \pm 0.2$
68 HCG68b 208.3611 40.30247 E2 Quiescent 2635 $12.24 \pm 0.06$ $9.72 \pm 0.2$
68 HCG68c 208.34055 40.36331 SBbc Active 2313 $11.93 \pm 0.2$ $9.83 \pm 0.21$ 9.95 -0.12
68 HCG68d 208.44021 40.33812 E3 Quiescent 2408 $13.73 \pm 0.1$ $9.16 \pm 0.2$
68 HCG68e 208.49892 40.27334 S0 Quiescent 2401 $14.22 \pm 0.1$ $8.97 \pm 0.2$
71 AGC732898 212.76914 25.55967 Sd 9083 $17.97 \pm 0.5$ $8.64 \pm 0.27$ 9.22 -0.58
71 AGC242021 212.72563 25.55402 cI 9199 $17.1 \pm 0.36$ $8.97 \pm 0.24$ 9.72 -0.75
71 HCG71a 212.73775 25.4967 SBc Canyon 9320 $13.75 \pm 0.2$ $10.23 \pm 0.21$ 10.39 -0.16
71 HCG71b 212.76055 25.51965 Sb Active 9335 $14.9 \pm 0.1$ $9.79 \pm 0.2$
71 HCG71c 212.77152 25.48257 SBc Active 8827 $15.56 \pm 0.1$ $9.54 \pm 0.2$ 9.85 -0.31
79 HCG79a 239.79777 20.75416 E0 Active 4292 $14.35 \pm 0.2$ $9.35 \pm 0.21$
79 HCG79b 239.80276 20.76312 S0 Active 4446 $13.78 \pm 0.2$ $9.56 \pm 0.21$
79 HCG79c 239.79586 20.76154 S0 Quiescent 4146 $14.72 \pm 0.2$ $9.21 \pm 0.21$
79 HCG79d 239.80026 20.74648 Sdm Active 4503 $15.87 \pm 0.2$ $8.78 \pm 0.21$ 9.26 -0.48
88 HCG88a 313.14714 -5.71068 Sb Canyon 6033 $13.18 \pm 0.1$ $9.97 \pm 0.2$ 8.82 1.15
88 HCG88b 313.12394 -5.74655 SBb Quiescent 6010 $13.24 \pm 0.1$ $9.95 \pm 0.2$ 9.33 0.62
88 HCG88c 313.10833 -5.77224 Sc Active 6083 $13.87 \pm 0.1$ $9.71 \pm 0.2$ 9.79 -0.08
88 HCG88d 313.05323 -5.79796 Sc Active 6032 $14.49 \pm 0.2$ $9.48 \pm 0.21$ 9.46 0.02
90 HCG90a 330.50889 -31.87014 Sa Active 2575 $12.36 \pm 0.07$ $9.62 \pm 0.2$ 8.94 0.68
90 HCG90b 330.53631 -31.99068 E0 Quiescent 2525 $12.57 \pm 0.13$ $9.54 \pm 0.21$
90 HCG90c 330.51423 -31.97451 E0 Quiescent 2696 $12.73 \pm 0.06$ $9.48 \pm 0.2$
90 HCG90d 330.52602 -31.99423 Im Active 2778 $12.81 \pm 0.15$ $9.45 \pm 0.21$
91 HCG91a 332.28174 -27.80984 SBc Active 7151 $12.62 \pm 0.13$ $10.36 \pm 0.21$ 9.83 0.53
91 HCG91b 332.3183 -27.73134 Sc Active 7196 $14.63 \pm 0.2$ $9.61 \pm 0.21$ 9.67 -0.06
91 HCG91c 332.30881 -27.78241 Sc Active 7319 $14.47 \pm 0.2$ $9.67 \pm 0.21$ 9.66 0.01
91 HCG91d 332.28567 -27.80086 SB0 Quiescent 7195 $14.99 \pm 0.2$ $9.47 \pm 0.21$
92 HCG92b 338.994 33.96595 Sbc Quiescent 5774 $13.18 \pm 0.13$ $10.11 \pm 0.21$
92 HCG92c 339.01604 33.97535 SBc Active 6764 $13.33 \pm 0.1$ $10.05 \pm 0.2$
92 HCG92d 338.9871 33.96555 Sc Quiescent 6630 $13.63 \pm 0.08$ $9.94 \pm 0.2$
92 HCG92e 338.96756 33.94459 E1 Quiescent 6599 $14.01 \pm 0.08$ $9.79 \pm 0.2$
93 HCG93a 348.81679 18.96147 E1 Quiescent 5140 $12.61 \pm 0.1$ $10.07 \pm 0.2$
93 HCG93b 348.82171 19.04158 SBd Active 4672 $13.18 \pm 0.1$ $9.85 \pm 0.2$ 9.53 0.32
93 HCG93c 348.76515 18.97311 SBa Quiescent 5132 $13.94 \pm 0.1$ $9.57 \pm 0.2$
93 HCG93d 348.88811 19.0479 SB0 5173 $15.27 \pm 0.1$ $9.07 \pm 0.2$
95 HCG95a 349.87475 9.50793 E3 Quiescent 11888 $14.42 \pm 0.2$ $10.1 \pm 0.21$
95 HCG95b 349.8909 9.49491 Scd Active 11637 $15.34 \pm 0.1$ $9.75 \pm 0.2$
95 HCG95c 349.86626 9.49434 Sm Active 11562 $15.2 \pm 0.2$ $9.81 \pm 0.21$
95 HCG95d 349.87951 9.50274 Sc Active 11593 $16.14 \pm 0.1$ $9.45 \pm 0.2$
96 HCG96a 351.98711 8.77821 Sc Active 8698 $13.53 \pm 0.2$ $10.21 \pm 0.21$ 9.93 0.28
96 HCG96b 352.02523 8.76841 E2 Quiescent 8616 $14.49 \pm 0.1$ $9.85 \pm 0.2$
96 HCG96c 351.99507 8.7828 Sa Active 8753 $15.69 \pm 0.2$ $9.4 \pm 0.21$
96 HCG96d 352.00084 8.76733 Im Active 8975 $16.56 \pm 0.1$ $9.07 \pm 0.2$
97 HCG97a 356.84591 -2.30096 E5 Quiescent 6910 $14.16 \pm 0.2$ $9.72 \pm 0.21$
97 HCG97b 356.90753 -2.31716 Sc Canyon 6940 $14.83 \pm 0.1$ $9.47 \pm 0.2$ 8.68 0.79
97 HCG97c 356.84884 -2.35143 Sa Quiescent 5995 $14.54 \pm 0.1$ $9.58 \pm 0.2$
97 HCG97d 356.82867 -2.31329 E1 6239 $14.45 \pm 0.1$ $9.61 \pm 0.2$
97 HCG97e 356.83253 -2.28102 S0a Canyon 6579 $16.31 \pm 0.2$ $8.91 \pm 0.21$
100 MRK935 0.43845 13.10067 cI 5606 $15.34 \pm 0.05$ $9.08 \pm 0.2$ 8.54 0.54
100 HCG100a 0.33374 13.11095 Sb 5300 $13.66 \pm 0.1$ $9.71 \pm 0.2$
100 HCG100b 0.35885 13.11278 Sm 5253 $14.9 \pm 0.1$ $9.25 \pm 0.2$
100 HCG100c 0.30629 13.14398 SBc 5461 $15.22 \pm 0.1$ $9.13 \pm 0.2$
100 HCG100d 0.31151 13.11264 Scd 5590 $15.97 \pm 0.1$ $8.84 \pm 0.2$
§ VELOCITY MAPS
Maps of iso-velocity contours overlaid on the same optical images for each group in Section <ref> are shown in Figure <ref>. They can also be generated as described in Appendix <ref>. The same source masks were used to produce these moment one contour maps, but only pixels with fluxes higher than $4.5 \sigma_\mathrm{rms} \sqrt{20\,\mathrm{km\,s^{-1}}/\Delta v_\mathrm{chan}}$ were included. The iso-velocity contours have separations of 20 $\mathrm{km\,s^{-1}}$ in all cases. HCGs 30, 37, and 62 are omitted as no HI was detected in their core groups.
Iso-velocity contours (separated by 20 $\mathrm{km\,s^{-1}}$) overlaid on DECaLS, SDSS, or POSS images depending on the group. Contours follow a rainbow colour scheme, with high velocities corresponding to redder colours and low velocities to bluer colours (note that the velocity range varies in each panel, but contours are always separated by 20 $\mathrm{km\,s^{-1}}$).
Figure <ref> continued.
# Anisotropic Photon and Electron Scattering without Ultrarelativistic Approximation

Anderson C.M. Lai (ORCID 0000-0003-2741-4556), Department of Physics, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China

Kenny C.Y. Ng (ORCID 0000-0001-8016-2170), Department of Physics, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China

(November 28, 2022)
###### Abstract
Interactions between photons and electrons are ubiquitous in astrophysics. Photons can be down-scattered (Compton scattering) or up-scattered (inverse Compton scattering) by moving electrons. Inverse Compton scattering, in particular, is an essential process for the production of astrophysical gamma rays. Computations of inverse Compton emission typically adopt an isotropic or an ultrarelativistic assumption to simplify the calculation, which prevents the resulting formulae from covering the whole phase space of source particles. In view of this, we develop a numerical scheme to compute the interactions between anisotropic photons and electrons without taking ultrarelativistic approximations. Compared to the ultrarelativistic limit, our exact results show major deviations when target photons are down-scattered or when they possess energy comparable to that of the source electrons. We also consider two test cases of high-energy inverse Compton emission to validate our results in the ultrarelativistic limit. In general, our formalism can be applied to anisotropic electron-photon scattering in various energy regimes, and to computing the polarization of the scattered photons.
## I Introduction
Interactions between electrons and photons are responsible for a variety of
astrophysical phenomena. In particular, inverse Compton (IC) scattering—the up
scattering of low-energy photons by high-energy electrons—is one of the main
mechanisms for the production of astrophysical X-rays and gamma rays. For
example, cosmic-ray electrons can IC scatter with the interstellar radiation
field Hunter _et al._ (1997); Moskalenko and Strong (1999); Strong _et al._
(2000) and produce part of the Galactic diffuse gamma-ray emission Thompson
_et al._ (1997); Strong _et al._ (2004a, b); Ackermann _et al._ (2011).
Cosmic-ray electrons can interact with solar photons and produce a gamma-ray
halo around the Sun Orlando and Strong (2007); Moskalenko _et al._ (2006);
Zhou _et al._ (2017); Orlando and Strong (2008); Orlando _et al._ (2008);
Abdo _et al._ (2011); Ng _et al._ (2016); Orlando and Strong (2021); Tang
_et al._ (2018); Linden _et al._ (2022, 2018). Gamma rays can be produced in
astrophysical sources, such as pulsars, blazars, and gamma-ray bursts, via
external IC emission or synchrotron self-Compton (SSC) emission Chiang and
Dermer (1999); Meszaros _et al._ (1994); Yuksel _et al._ (2009); Linden _et
al._ (2017); Sudoh _et al._ (2019). The up scattering of Cosmic Microwave
Background (CMB) radiation is also responsible for the Sunyaev–Zeldovich (SZ)
effect Sunyaev and Zeldovich (1970).
The IC emission formulation was studied in detail by Jones Jones (1968).
Analytic expressions were obtained by considering isotropic distributions of
electrons and photons for arbitrary electron energies. Jones also derived the
photon spectrum from IC scattering in the ultrarelativistic limit, which was
further developed by Blumenthal & Gould (BG70 hereafter) Blumenthal and Gould (1970) and by Rybicki & Lightman Rybicki and Lightman (1986), who considered electrons with a power-law energy spectrum. These results have found numerous
applications in high-energy astrophysics, such as the calculation of SSC
emission associated with relativistic jets Inoue _et al._ (2019); Banik and
Bhadra (2019) and remnants from binary neutron star mergers Takami _et al._
(2014), etc. While the ultrarelativistic results work well, the general
formalism by Jones suffers from numerical instability over a broad range of
photon and electron energies due to the subtraction of large numbers Belmont (2009).
Substantial efforts have been devoted to mitigating the numerical instability
in Jones’ expression, including reformulations of or corrections to Jones’
formula Nagirner and Poutanen (1994); Pe’er and Waxman (2005), as well as
interpolation from ultrarelativistic limit Coppi and Blandford (1990). These
improvements have been applied to numerical calculations of radiative
processes Pe’er _et al._ (2006); Vurm and Poutanen (2009) and CMB spectral
distortions Sarkar _et al._ (2019). Nevertheless, these works only considered isotropic scattering, and usually focused on specific kinematic regimes and target energy distributions (e.g., ultrarelativistic electrons or thermal photons/electrons).
For the more general case of anisotropic photon-electron scattering, Aharonian & Atoyan Aharonian and Atoyan (1981) derived the differential cross section for the scattering between anisotropic photons and isotropic electrons, which was applied in Chen and Bastian (2012); Murase _et al._ (2010). These results were later rederived by Nagirner and Poutanen (1993) and
Brunetti (2000). In addition, Poutanen & Vurm Poutanen and Vurm (2010)
considered electrons in the weak anisotropic approximation scattering with
isotropic photons. Kelner et al. Kelner _et al._ (2014) studied anisotropic electrons scattering with isotropic photons, but only in the ultrarelativistic limit. Alternatively, a Monte Carlo approach was proposed to tackle the
anisotropic scattering Molnar and Birkinshaw (1999). The scattering between anisotropic photons and isotropic electrons, in the ultrarelativistic limit,
was also considered by Moskalenko & Strong (MS hereafter) Moskalenko and
Strong (2000), which is often applied in computing the IC emission from
cosmic-ray interactions in the galaxy Strong _et al._ (2000) and in the solar
system Moskalenko _et al._ (2006); Orlando and Strong (2007). However, these analytic expressions were sometimes found to be numerically unstable Vurm and Poutanen (2009); Belmont (2009). A general formalism for anisotropic electron-photon scattering that is applicable in all kinematic regimes remains unavailable.
In this paper, we present a numerical integration approach that solves the general anisotropic IC scattering problem and calculates the resulting photon spectrum. In section II, we describe the formalism and how we handle the kinematic constraints arising from the differential cross section. In section III, we show that our results are numerically stable and correctly converge in the ultrarelativistic limit, and we discuss the cases where the exact calculation deviates from the ultrarelativistic limit. In section IV, we validate our calculations using solar IC and SSC emission as test cases. We conclude and discuss the outlook of this work in section V.
## II Formulation
### II.1 Master Equation for IC intensity
The master equation for the line-of-sight (LOS) IC spectral intensity is given by:

$\displaystyle\frac{dI}{dE_{2}}(\Omega_{o},E_{2})=\frac{c}{4\pi}\int ds\int n_{e}\,dE_{e}\int n_{ph}\,dE_{1}\left<\frac{d^{2}\sigma}{d\Omega dE_{2}}\right>,$ (1)

where $\Omega_{o}(\theta_{o},\phi_{o})$ is the LOS direction with respect to the observer, $s$ is the LOS distance, and $E_{e}$, $E_{1}$, $E_{2}$ are the electron, target photon, and scattered photon energies, respectively. $n_{e}(E_{e})$ is the differential number density of source electrons and $n_{ph}(E_{1})$ is the differential number density of the target photon field. The angle-averaged differential cross section is given by

$\displaystyle\left<\frac{d^{2}\sigma}{d\Omega dE_{2}}\right>=\int d\Omega_{ph}f_{ph}\int d\Omega_{e}f_{e}\frac{d^{2}\sigma}{d\Omega dE_{2}}(1-\beta\cos\theta_{e}),$ (2)

where $\Omega(\theta,\phi)$, $\Omega_{ph}(\theta_{ph},\phi_{ph})$, and $\Omega_{e}(\theta_{e},\phi_{e})$ are the directions of the scattered photon, the target photons, and the source electrons, respectively. $f_{ph}(E_{ph},\Omega_{ph})$ and $f_{e}(E_{e},\Omega_{e})$ are the normalized angular distributions of target photons and source electrons, with $\int f\,d\Omega=1$. $\beta$ is the speed of the incident electrons, and the factor $(1-\beta\cos\theta_{e})$ is the correction to the electron flux/photon density for non-parallel target photon and source electron. We will use the simplified expression

$\displaystyle\frac{d^{2}\sigma^{\prime}}{d\Omega dE_{2}}=\frac{d^{2}\sigma}{d\Omega dE_{2}}(1-\beta\cos\theta_{e})$ (3)

hereafter. The definitions of the variables are shown schematically in Fig. 1.
Figure 1: A schematic diagram depicting the IC scattering for the case of a point source of target photons. $\vec{k}_{1}$, $\vec{k}_{2}$, and $\vec{\beta}$ are the directions of the target photon, the scattered photon, and the source electron. See the text for a detailed description of the variables. Note that in general $\vec{k}_{1}$, $\vec{k}_{2}$, and $\vec{\beta}$ do not lie in the same plane; see Fig. 2.
### II.2 Assumptions

In general, the differential cross section in Eq. 2 depends on many variables. To simplify, we assume unidirectional target photons along the direction $\Omega^{\prime}_{ph}$ from a point source, and an isotropic distribution of the source electron flux. That is:

$\begin{split}f_{ph}&=\delta({\Omega_{ph}-\Omega^{\prime}_{ph}})\,,\\\ f_{e}&=\frac{1}{4\pi}\,.\end{split}$ (4)

Eq. 2 then becomes:

$\displaystyle\left<\frac{d^{2}\sigma}{d\Omega dE_{2}}\right>=\frac{1}{4\pi}\int d\Omega_{e}\frac{d^{2}\sigma^{\prime}}{d\Omega dE_{2}}(E_{1},E_{2},E_{e},\Omega,\Omega_{e}),$ (5)

where the $\Omega_{ph}$ dependence is removed by the delta function and $\Omega_{e}$ is now defined with respect to the direction of the target photon. The explicit form of the differential cross section in Eq. 3 is given by Jauch and Rohrlich (2012):
$\displaystyle\frac{d^{2}\sigma^{\prime}}{d\Omega dE_{2}}=\frac{r_{e}^{2}\bar{X}}{2\gamma^{2}(1-\beta\cos\theta_{e})}\frac{E_{2}^{2}}{E_{1}^{2}}\delta(E_{2}-\bar{E}_{2}),$ (6)
where $r_{e}$ is the classical electron radius, $\gamma$ is the electron Lorentz factor, and

$\displaystyle\bar{X}=\left[\frac{\kappa_{1}}{\kappa_{2}}+\frac{\kappa_{2}}{\kappa_{1}}+2m_{e}^{2}\left(\frac{1}{\kappa_{1}}-\frac{1}{\kappa_{2}}\right)+m_{e}^{4}\left(\frac{1}{\kappa_{1}}-\frac{1}{\kappa_{2}}\right)^{2}\right],$ (7)

where $\kappa_{1}=p\cdot k_{1}=p_{0}k_{0}-\vec{p}\cdot\vec{k}_{1}$ and $\kappa_{2}=p\cdot k_{2}$ are the four-momentum dot products of the incident electron with the incident and scattered photons, and the subscript 0 denotes the time component. $\bar{E}_{2}$ follows from conservation of four-momentum:

$\displaystyle\bar{E}_{2}=\frac{E_{1}(1-\beta\cos\theta_{e})}{\frac{E_{1}}{E_{e}}(1-\cos\theta)+1-\beta\cos\theta_{1}},$ (8)

where $\theta_{1}$ is the angle between the source electron and the scattered photon,

$\displaystyle\cos\theta_{1}=\cos\theta\cos\theta_{e}+\sin\theta\sin\theta_{e}\cos(\phi_{e}-\phi).$ (9)
We note that although Eq. 9 depends on the four angular variables $\phi_{e},\theta_{e},\phi,\theta$, the two azimuthal angles only enter through their difference $\phi_{e}-\phi$. This implies an azimuthal symmetry in $\phi$ once we integrate over $\phi_{e}$. In other words, Eq. 6 does not depend on $\phi$, and we choose $\phi=0$ hereafter.
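This symmetry is easy to verify numerically. The sketch below is illustrative only (the function name is our own, not from any published code): shifting both azimuthal angles by the same amount leaves Eq. 9, and hence the cross section, unchanged.

```python
import math

def cos_theta1(theta, phi, theta_e, phi_e):
    """Cosine of the angle between source electron and scattered photon (Eq. 9)."""
    return (math.cos(theta) * math.cos(theta_e)
            + math.sin(theta) * math.sin(theta_e) * math.cos(phi_e - phi))

# The azimuthal angles enter only through phi_e - phi, so a common shift
# leaves the result unchanged -- justifying the choice phi = 0.
a = cos_theta1(0.7, 0.0, 1.2, 2.0)
b = cos_theta1(0.7, 1.5, 1.2, 3.5)  # both azimuthal angles shifted by 1.5
print(abs(a - b) < 1e-12)  # True
```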
In the limit $\beta\ll 1$, Eq. 8 reduces to the Compton scattering formula, and $\kappa_{1},\kappa_{2}$ become $m_{e}E_{1}$ and $m_{e}E_{2}$. Substituting these expressions into Eq. 6 yields the Klein-Nishina (KN) differential cross section in the electron-rest frame (ERF):

$\frac{d^{2}\sigma^{\prime}_{\text{KN}}}{d\Omega dE_{2}}=\frac{r_{e}^{2}}{2}\frac{E_{2}^{2}}{E_{1}^{2}}\left(\frac{E_{2}}{E_{1}}+\frac{E_{1}}{E_{2}}-\sin^{2}\theta\right)\delta(E_{2}-\bar{E}_{2}).$ (10)
Figure 2: A diagram of the IC scattering between a photon and an electron. With the assumptions made in Eq. 4, we can align the target photon direction with the $+z$-axis and take the $x$-$z$ plane as the scattering plane.
### II.3 Reduction to MS result

As mentioned in the introduction, MS Moskalenko and Strong (2000) adopted an ultrarelativistic assumption, in particular, $\gamma\gg 1$ and $\theta_{1}=0$. The latter implies that scattered photons are unidirectional along the direction of the electron, or $\Omega=\Omega_{e}$. To illustrate the effect of such an approximation, we first rewrite the delta function in Eq. 6 as:

$\displaystyle\delta(E_{2}-\bar{E}_{2})=\frac{\delta(\Omega_{e}-\Omega_{e,sol})}{\left|\frac{d\bar{E}_{2}}{d\Omega_{e}}\right|},$ (11)

where $\Omega_{e,sol}$ is the solution to the condition $E_{2}=\bar{E}_{2}(\Omega_{e,sol})$. The unidirectional approximation is then equivalent to:

$\displaystyle\delta(\Omega_{e}-\Omega_{e,sol})=\delta(\phi_{e})~{}\delta(\cos\theta_{e}-\cos\theta)$ (12)

With this simplification, the integration in Eq. 5 amounts to removing the $\Omega_{e}$ dependence and evaluating the denominator in Eq. 11 at $\Omega_{e}=\Omega$. By restoring a general source photon distribution $f_{ph}$ and integrating over $\Omega_{ph}$, it can be shown that the whole expression reduces to Eq. (8) of MS Moskalenko and Strong (2000):

$\displaystyle\left<\frac{d\sigma}{dE_{2}}\right>=\frac{\pi r_{e}^{2}}{E_{1}(\gamma-E_{2})^{2}}\int d\Omega_{ph}f_{ph}\left[2-2\frac{E_{2}}{\gamma}\left(\frac{1}{E^{\prime}_{1}}+2\right)+\frac{E_{2}^{2}}{\gamma^{2}}\left(\frac{1}{{E^{\prime}}_{1}^{2}}+2\frac{1}{E^{\prime}_{1}}+3\right)-\frac{E_{2}^{3}}{\gamma^{3}}\right].$ (13)
### II.4 General treatment without unidirectional approximation

Instead of making the unidirectional approximation of the previous section, we look for the exact solution $\cos\theta_{e,sol}$ in terms of $E_{1},E_{2},E_{e},\theta$, and $\phi_{e}$. To begin with, we write the complete form of Eq. 5 using Eqs. 6, 7, and 11:

$\displaystyle\left<\frac{d^{2}\sigma}{d\Omega dE_{2}}\right>=\frac{r^{2}_{e}}{8\pi\gamma^{2}}\int d\Omega_{e}\frac{\bar{X}E^{2}_{2}}{(1-\beta\cos\theta_{e})E^{2}_{1}}\frac{\delta(\cos\theta_{e}-\cos\theta_{e,sol})}{\left|\frac{d\bar{E}_{2}}{d\cos\theta_{e}}\right|_{\cos\theta_{e,sol}}}=\frac{r^{2}_{e}}{8\pi\gamma^{2}}\int^{2\pi}_{0}d\phi_{e}\sum_{\cos\theta_{e,sol}}\left[\frac{\bar{X}E^{2}_{2}}{(1-\beta\cos\theta_{e})E^{2}_{1}\left|\frac{d\bar{E}_{2}}{d\cos\theta_{e}}\right|}\right]\,,$

where

$\displaystyle\frac{d\bar{E}_{2}}{d\cos\theta_{e}}=\frac{\beta\bar{E}_{2}}{1-\beta\cos\theta_{e}}\left[\frac{\bar{E}_{2}}{E_{1}}(\cos\theta-\sin\theta\cos\phi_{e}\cot\theta_{e})-1\right].$
Figure 3: The differential cross section $E_{2}^{2}\>d\sigma/dE_{2}$ against scattered photon energy $E_{2}$ in the Thomson regime $E_{1}\ll E_{e}$ (Sec. III.1.1). Left: target photon energy $E_{1}=10^{-6}$ MeV, source electron Lorentz factor $\gamma=1.5$. BG70 refers to Eq. (2.48) in Blumenthal and Gould (1970); the blue line marks the value of $E_{1}$. See the text for more details. Right: target photon energy $E_{1}=10^{-6}$ MeV, source electron Lorentz factor $\gamma=100$.
The problem of finding the differential cross section then reduces to finding and summing over the possible solutions $\cos\theta_{e,sol}$, and then numerically integrating over $\phi_{e}$. This can be achieved by recasting Eq. 8 as a quadratic equation in $\tan(\theta_{e}/2)$:

$\displaystyle(C-A)\tan^{2}\left(\frac{\theta_{e}}{2}\right)+2B\tan\left(\frac{\theta_{e}}{2}\right)+(C+A)=0,$ (16)

where $A$, $B$, and $C$ are given by:

$\displaystyle\begin{split}A&=\beta(E_{1}-E_{2}\cos\theta)\,,\\\ B&=-E_{2}\beta\sin\theta\cos\phi_{e}\,,\\\ C&=\frac{E_{2}E_{1}(1-\cos\theta)}{E_{e}}+E_{2}-E_{1}.\end{split}$ (17)

Hence, the two solutions of the equation

$\tan(\theta_{e}/2)=\frac{-B\pm\sqrt{A^{2}+B^{2}-C^{2}}}{C-A}$ (18)

correspond to two possible IC scattering geometries for a given set of $(E_{2},E_{1},E_{e},\theta,\phi_{e})$. Although both solutions lead to physical scattering geometries, only positive solutions are retained. This is because a negative $\tan(\theta_{e}/2)$ can be mapped to a positive one via $\theta_{e}\rightarrow\theta_{e}+\pi$ together with $\phi_{e}\rightarrow\phi_{e}+\pi$, and therefore represents a duplicate solution at another $\phi_{e}$. The relation between a positive discriminant $A^{2}+B^{2}-C^{2}$ and the physical limits of $E_{2}$ is explained in Appendix A.
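The root-finding step above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code; the function name, the MeV units, and the electron-mass default are our own choices.

```python
import math

def theta_e_solutions(E1, E2, Ee, theta, phi_e, m_e=0.511):
    """Solve the quadratic Eq. 16 for tan(theta_e/2) and return the
    physical polar angles theta_e via Eq. 18.  Energies in MeV."""
    gamma = Ee / m_e
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    A = beta * (E1 - E2 * math.cos(theta))
    B = -E2 * beta * math.sin(theta) * math.cos(phi_e)
    C = E2 * E1 * (1.0 - math.cos(theta)) / Ee + E2 - E1
    disc = A**2 + B**2 - C**2
    if disc < 0.0:  # kinematically forbidden E2 (see Appendix A)
        return []
    sols = []
    for sign in (+1.0, -1.0):
        t = (-B + sign * math.sqrt(disc)) / (C - A)
        if t > 0.0:  # negative roots duplicate a solution at phi_e + pi
            sols.append(2.0 * math.atan(t))
    return sols
```

A retained root can be verified by substituting it back into Eq. 8 (with Eq. 9 for $\cos\theta_{1}$) and recovering the input $E_{2}$.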
## III Results
### III.1 Scattering between isotropic photons and isotropic electrons

In this section, we compare the results obtained with our general formalism to those from BG70 Blumenthal and Gould (1970), which were obtained using the high-$\gamma$ approximation and are used frequently in the IC literature. BG70 also assumed isotropic photon and electron distributions. To match this, we evaluate the differential cross section by averaging over the scattering angle in Eq. II.4. Although Eq. II.4 is derived for isotropic source electrons and unidirectional target photons (Eq. 4), averaging over the scattering angle is equivalent to averaging over the incident photon directions, and thus corresponds to an isotropic source photon distribution.
We note that, in the context of the thermal SZ effect, Sarkar et al. Sarkar _et al._ (2019) also considered general isotropic scattering. They defined the kinematic regimes by comparing the source photon energy $E_{1}$ and the electron energy $E_{e}$. This differs from our approach, as we mainly consider energies in the electron-rest frame (ERF). Below, primed variables refer to ERF quantities (e.g., $E_{1}^{\prime}$) and unprimed variables to observer-frame quantities (e.g., $E_{1}$).
#### III.1.1 Thomson regime ($E_{1}^{\prime}\ll m_{e}$)
When the ERF target photon energy $E_{1}^{\prime}=\gamma
E_{1}(1-\beta\cos\theta_{e})$ is much smaller than the electron rest mass, the
scattering between target photons and electrons falls into the Thomson regime
regardless of the value of $\gamma$. In the Thomson limit, photon energies are
the same before and after scattering in the ERF, $E_{1}^{\prime}\approx
E_{2}^{\prime}$. In the ultrarelativistic limit ($\gamma\gg 1$), the maximum
cutoff of the IC scattered photon energy in the observer frame, $E_{2,\rm
max}$, is well approximated by $4\gamma^{2}E_{1}$, which is the limit used by
BG70.
Figure 4: Similar to Fig. 3, but for the KN regime, $E^{\prime}_{1}\gtrsim
m_{e}$, $E_{1}\lesssim E_{e}$ (Sec. III.1.2). Left: $E_{1}=0.5$ MeV and
$\gamma=1.5$, which corresponds to mildly-relativistic scattering; Right:
$E_{1}=100$ MeV and $\gamma=500$, which corresponds to ultrarelativistic
scattering.
Our exact formalism is expected to agree well with BG70 in the ultrarelativistic limit. However, in the mildly-relativistic regime, where $\gamma$ is smaller, the approximation $E_{2,\rm max}\sim 4\gamma^{2}E_{1}$ from head-on scattering between the source electron and photon no longer holds. In this case, the upscattered photon energy

$E_{2}\simeq\frac{1-\beta\cos\theta_{e}}{1-\beta\cos\theta_{1}}E_{1}$ (19)

can be obtained by taking the limit $E_{1}\ll E_{e}$ in Eq. 8, or by considering the relation $E_{1}^{\prime}=E_{2}^{\prime}$ and expressing $E_{1}^{\prime},E_{2}^{\prime}$ in observer-frame quantities. The exact $E_{2,\rm max}$ is then obtained by maximizing Eq. 19 with respect to $\theta_{e}$ and $\theta_{1}$.
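Maximizing Eq. 19 at a head-on collision ($\cos\theta_{e}=-1$) with forward scattering ($\cos\theta_{1}=+1$) gives $E_{2,\rm max}/E_{1}=(1+\beta)/(1-\beta)$, which can be compared directly with BG70's $4\gamma^{2}$. The sketch below is a minimal check under these stated assumptions (the function name is ours):

```python
import math

def e2max_ratio(gamma):
    """Exact Thomson-regime E2_max/E1 from Eq. 19, divided by BG70's 4*gamma^2."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    exact = (1.0 + beta) / (1.0 - beta)  # cos(theta_e) = -1, cos(theta_1) = +1
    return exact / (4.0 * gamma**2)

print(e2max_ratio(100.0))  # close to 1: the ultrarelativistic limit holds
print(e2max_ratio(1.5))    # about 0.76: BG70 overestimates E2_max
```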
The left panel in Fig. 3 plots the differential cross section against $E_{2}$
for the mildly-relativistic case of $\gamma=1.5$ electrons scattering with
source photons at $10^{-6}$ MeV. The scattering falls into the Thomson regime
as $E_{1}^{\prime}\approx 3\times 10^{-6}$ MeV. The blue line marks the value
of $E_{1}$ and separates the spectrum into upscattering ($E_{1}<E_{2}$) and
downscattering ($E_{1}>E_{2}$) regions. In our exact calculation, the cutoff energy is lower than BG70's, and the differential cross section shifts to the left. Our differential cross section also differs from BG70's in the down-scattering region, which we discuss in detail later in Sec. III.1.4.
The right panel of Fig. 3 shows the same plot but with $\gamma=100$ for the source electrons, depicting ultrarelativistic IC scattering. The scattering is
still in the Thomson regime as $E_{1}^{\prime}\sim 2\times 10^{-3}$ MeV $\ll
$m_{e}$. As expected, in the upscattering domain $E_{2}>E_{1}$, our result agrees well with BG70, and reproduces the same $E_{2,\rm max}=4\gamma^{2}E_{1}\approx 4\times 10^{-2}$ MeV. In addition, the total Thomson cross section $\sigma_{T}=8\pi/3~{}r_{e}^{2}$ can be recovered by integrating the area under $d\sigma/dE_{2}$ in Fig. 3 over $E_{2}$. This validates our formalism for the differential cross section in the ultrarelativistic limit.
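This consistency check is easy to reproduce: in the Thomson limit ($E_{2}\approx E_{1}$), Eq. 10 reduces to the angular distribution $d\sigma/d\Omega=(r_{e}^{2}/2)(1+\cos^{2}\theta)$, and integrating it over solid angle recovers $\sigma_{T}$. The sketch below works in units $r_{e}=1$, since only the ratio matters:

```python
import math

# Thomson-limit angular distribution from Eq. 10 with E2 ~ E1:
# dsigma/dOmega = (r_e^2 / 2) * (1 + cos^2 theta).  Units: r_e = 1.
n = 10000
integral = 0.0
for i in range(n):
    c = -1.0 + (i + 0.5) * 2.0 / n        # midpoint rule in cos(theta)
    integral += 0.5 * (1.0 + c * c) * (2.0 / n)
sigma = 2.0 * math.pi * integral          # trivial azimuthal integral

sigma_T = 8.0 * math.pi / 3.0             # Thomson cross section for r_e = 1
print(abs(sigma / sigma_T - 1.0) < 1e-6)  # True
```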
#### III.1.2 KN regime ($E_{1}\lesssim E_{e}$, $E^{\prime}_{1}\gtrsim m_{e}$)

For larger values of $E_{1}$ that approach $E_{e}$, the ERF target photon energy $E_{1}^{\prime}$ can easily surpass the electron rest mass $m_{e}$; this corresponds to the KN regime. In this regime, BG70 approximated the maximum scattered photon energy $E_{2,\rm max}$ as:

$E_{2,\rm max}\approx\frac{4\gamma^{2}E_{1}}{1+4\gamma\frac{E_{1}}{m_{e}}}\,,$ (20)

which follows from applying the large-$\gamma$, head-on scattering with the photon scattered backward approximation ($\theta_{e}=\pi,\theta_{1}=0$) to the conservation of energy, Eq. 8. When $E_{1}^{\prime}\gg m_{e}$, the term $4\gamma E_{1}/m_{e}$ in the denominator of Eq. 20 dominates and $E_{2,\rm max}\simeq E_{e}$. Using the general differential cross section, we instead find that $E_{2,\rm max}=(\gamma-1)E_{e}+E_{1}$, which is simply the case where the electron transfers all its kinetic energy to the photon, and is valid for any value of $\gamma$.
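The saturation of Eq. 20 at the electron energy is straightforward to check numerically. Using the parameters of the right panel of Fig. 4 ($\gamma=500$, $E_{1}=100$ MeV; a minimal sketch, with $m_{e}=0.511$ MeV assumed):

```python
m_e = 0.511                    # electron mass in MeV
gamma, E1 = 500.0, 100.0       # parameters of the right panel of Fig. 4
Ee = gamma * m_e

# BG70's approximate maximum scattered photon energy (Eq. 20)
E2max_bg70 = 4.0 * gamma**2 * E1 / (1.0 + 4.0 * gamma * E1 / m_e)

# In the deep KN regime (E1' >> m_e) the denominator is dominated by
# 4*gamma*E1/m_e, so Eq. 20 tends to E_e.
print(abs(E2max_bg70 / Ee - 1.0) < 1e-2)  # True
```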
The left panel in Fig. 4 shows the differential cross section for $\gamma=1.5$ electrons scattering with 0.5 MeV target photons. Since $E_{1}^{\prime}\approx 2\gamma E_{1}=1.5$ MeV, photons undergo Compton scattering in the ERF and a large portion of their energy is transferred to the electron. This corresponds to the KN regime.
The right panel in Fig. 4 shows the ultrarelativistic scattering of $\gamma=500$ electrons with 100 MeV target photons, which is in the regime $E_{1}^{\prime}\approx 2\gamma E_{1}\gg m_{e}$. As expected, our results agree with BG70 in the $E_{1}<E_{2}<E_{e}$ energy range. BG70’s formula, however, does not work above $E_{e}$, while our results extend correctly to the true maximum at $E_{2}=(\gamma-1)E_{e}+E_{1}$. We note that for large-$\gamma$ cases, the differential cross section peaks strongly at the electron energy $E_{e}$. This can be clearly seen in this plot (as well as in the right panel of Fig. 5).
Figure 5: Similar to Fig. 3, but for the trans-Compton regime, $E_{1}>E_{e}$ (Sec. III.1.3). Left: $E_{1}=1$ MeV, $\gamma=1.5$, corresponding to mildly-relativistic scattering; Right: $E_{1}=100$ MeV, $\gamma=100$, corresponding to ultrarelativistic scattering.
#### III.1.3 Trans-Compton regime ($E_{1}>E_{e}$)
When $E_{1}>E_{e}$, that is, when the target photon energy is greater than the electron energy, the scattering in the ERF falls entirely in the KN regime. In this case, however, the target photon mostly loses energy, similar to ordinary Compton scattering. Although the case of $E_{1}>E_{e}$ was not considered in BG70, we compare with their formula for completeness. Our formalism works over the full energy range, up to $E_{2,\rm max}=E_{1}+(\gamma-1)E_{e}$.

The left panel in Fig. 5 shows the differential cross section for $\gamma=1.5$ electrons scattering with 1 MeV target photons, which corresponds to mildly-relativistic scattering. The ultrarelativistic case ($\gamma=100$, $E_{1}=100$ MeV) is illustrated in the right panel of Fig. 5. The large deviation of BG70’s differential cross section from ours implies the breakdown of the ultrarelativistic approximation in this kinematic regime.
#### III.1.4 Down-scattering cases ($E_{2}<E_{1}$)
In all the cases discussed above, our general formalism includes the effect of down scattering ($E_{2}<E_{1}$), which is not included in BG70. From Figs. 3 to 5, we note that the correct differential cross sections always decline more rapidly than BG70's in the down-scattering regime. In addition, we also correctly calculate the minimum scattered photon energy $E_{2,\rm min}$ using Eq. 8, which corresponds to a photon and an electron initially travelling in the same direction with the photon scattered backward ($\theta_{e}=0,~{}\theta_{1}=\pi$). In contrast, BG70 yields no $E_{2,\rm min}$.
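Evaluating Eq. 8 at this configuration ($\theta_{e}=0$, and hence $\theta=\theta_{1}=\pi$ by Eq. 9) gives $E_{2,\rm min}=E_{1}(1-\beta)/(2E_{1}/E_{e}+1+\beta)$. The sketch below is illustrative only (the function name and MeV units are our choices):

```python
import math

def E2_min(E1, Ee, m_e=0.511):
    """Minimum scattered photon energy from Eq. 8 at theta_e = 0,
    theta = theta_1 = pi (photon and electron initially parallel,
    photon scattered backward).  Energies in MeV."""
    gamma = Ee / m_e
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return E1 * (1.0 - beta) / (2.0 * E1 / Ee + 1.0 + beta)

# The photon always loses energy in this configuration (down scattering)
E1, Ee = 1.0, 1.5 * 0.511
print(E2_min(E1, Ee) < E1)  # True
```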
## IV Case studies
We consider two simple cases of high-energy IC emission to show that our formalism correctly reproduces the results in the ultrarelativistic limit, and to identify the regimes where the ultrarelativistic assumption breaks down.
### IV.1 Solar Inverse Compton Emission
Solar IC emission is produced when cosmic-ray electrons up scatter solar photons Moskalenko _et al._ (2006); Orlando _et al._ (2008); Orlando and Strong (2007, 2008). The anisotropic IC scattering cross section of MS Moskalenko and Strong (2000) (with ultrarelativistic approximations) was adopted in the IC emission package StellarICs Orlando and Strong (2013). In this section, we compare our exact formalism with the latest StellarICs calculation of solar gamma rays in Ref. Orlando and Strong (2021).
Fig. 6 shows the LOS solar IC spectral intensity from the general electron-photon scattering formalism and from StellarICs. The computation of our spectrum follows the master equation, Eq. 1, with an observation angle $\theta_{0}=0.3^{\circ}$. We employ the same $n_{ph},f_{ph},n_{e},f_{e}$ as in StellarICs. Specifically, the differential number density of target photons follows a black-body spectrum at 5770 K with a spherical and uniform distribution. The differential number density of source electrons is inferred from the curve fitted to AMS-02 data in Fig. 3 of Orlando and Strong (2021), and the distribution is isotropic. From Fig. 6, our spectrum agrees well with StellarICs in the range $E_{2}=10^{-2}-10^{3}$ MeV. The correction from relaxing the ultrarelativistic approximation in the differential cross section cannot be seen in the figure, since electrons with $\gamma=10^{2}-10^{4}$ are responsible for scattering solar photons into the range of hard X-rays and gamma rays. The general formula thus reduces to the ultrarelativistic approximation in this regime and converges to that of StellarICs.
Figure 6: The LOS solar IC intensity at $0.3^{\circ}$ away from the centre of the Sun. The red spectrum labelled 'StellarICs' refers to the solid blue curve in Fig. 4 of Orlando and Strong (2021). The intensity spectrum in this work was computed using Eq. 1 with the same solar photon and cosmic-ray electron spectra as in Aguilar _et al._ (2014). In particular, a thermal spectrum from $10^{-7}$ to $10^{-5}$ MeV was used for the photon field, together with an electron spectrum from $10^{1}$ to $10^{5}$ MeV that follows the curve fitted to AMS-02 data in Fig. 3 of Orlando and Strong (2021).

Figure 7: A decomposition of the LOS solar IC spectrum. The decomposed spectra with $\gamma=1-1.5$, $\gamma=1.5-5$, $\gamma=5-10$, $\gamma=10-25$, and $\gamma=25-100$ are given by the magenta, blue, green, cyan, and yellow curves, respectively. Solid lines represent the StellarICs results; our results (general formalism) are shown as dashed lines. A linear vertical scale is used to highlight the mildly-relativistic correction.
For smaller values of $E_{2}$, we expect to see some discrepancies between our
exact calculation and the ultrarelativistic approximations. For illustration,
we extend the electron spectrum to lower energies, adopting an $E_{e}^{-3.2}$
power law between $\gamma=1$ and 100.
Fig. 7 shows the full energy range for the anisotropic solar IC scattering. It
is clear that the solar emission above $E_{2}\approx 10^{-3}$ MeV is indeed
dominated by electrons with $\gamma>10$. The ultrarelativistic approximation
holds in this regime and the two emission spectra coincide. When
$E_{2}<10^{-3}$ MeV, our calculation deviates from StellarICs. From the
spectrum decomposition, we see that the deviation is due to the mildly
relativistic corrections in the general formalism. The deviation is largest
in the interval $\gamma=1.5-5$. The overall correction to the total emission
is a reduced intensity for $E_{2}<5\times 10^{-3}$ MeV.
### IV.2 Synchrotron Self-Compton
Synchrotron photons are emitted when energetic electrons gyrate along strong
magnetic fields. These synchrotron photons can also undergo IC scattering with
the gyrating electrons, resulting in high-energy gamma-ray emission. This
synchrotron self-Compton (SSC) mechanism has been used to model gamma-ray
emission from various sources, such as blazars and relativistic jets Potter
and Cotter (2012); Inoue _et al._ (2019); Banik and Bhadra (2019).
In the SSC mechanism, the target photon energy is higher than the solar IC
case. We therefore consider a simplified SSC model to see the effect of
relaxing the ultrarelativistic assumption.
We consider a relativistic jet with a Lorentz factor of 500, observed at an
angle of $0.5^{\circ}$ from the jet direction. The resulting Doppler boost
from the jet frame (JF) to the observer frame is about 50. Both the
electron and photon spectra are taken to be isotropic in the JF so that we can
compare the result with BG70. In the JF (quantities denoted with the “JF”
subscript), we take the electron spectrum to be an $E_{e,\text{JF}}^{-3}$
power law between $\gamma_{\text{JF}}=1$ and $10^{4}$. The photon spectrum is
taken to be an $E_{1,\text{JF}}^{-2}$ power law between $10^{-3}$ and
$10^{3}$ MeV.
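The quoted boost of about 50 follows from the standard relativistic Doppler factor $\delta=1/[\Gamma(1-\beta\cos\theta_{\mathrm{obs}})]$. The short check below evaluates it for the adopted jet parameters; it is a sketch of the kinematics only, not part of our emission code.

```python
import numpy as np

def doppler_factor(gamma, theta_obs):
    """Relativistic Doppler factor delta = 1 / (gamma * (1 - beta * cos(theta)))
    for bulk Lorentz factor gamma and observation angle theta_obs [rad]."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta_obs)))

delta = doppler_factor(500.0, np.radians(0.5))  # close to 50 for these parameters
```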
Fig. 8 shows the SSC emission spectrum $E_{2}^{2}d\epsilon/dE_{2}$ in the
observer frame with the above configuration. The red dashed line represents
the total emission from our calculation. For comparison, results obtained from
BG70 are shown as solid lines. Our results agree well with those obtained with
BG70 at high photon energies, but large deviations appear at low energies.
Figure 8: The SSC emission spectrum at an observation angle of $0.5^{\circ}$
from the jet, with $E_{1,\text{JF}}$ from $10^{-3}$ to $10^{3}$ MeV and
$\gamma_{\text{JF}}$ from 1 to $10^{4}$. Both the target photons and the
electrons are assumed to be isotropic in the JF. The emission is plotted
against the scattered photon energy in the observer frame, $E_{2}$. The red
line gives the total emission. The individual contributions from
$\gamma_{\text{JF}}=1-10^{1},10^{1}-10^{2},10^{2}-10^{3}$, and $10^{3}-10^{4}$
are shown by the black, blue, cyan, and yellow lines, respectively. The
transition points $P_{1}$ and $P_{2}$ for the blue line are highlighted. See
the text for details about the decomposed spectra and transition points.
For $E_{2,\text{JF}}$ smaller than the minimum photon energy of $10^{-3}$ MeV
($E_{2}\approx 0.05$ MeV in Fig. 8), the emission is dominated by
down-scattered photons produced in the Thomson regime. In this case, BG70’s
formula corresponds to a flat differential cross section, and thus results in
an $E_{2}^{2}$ behaviour in the $E_{2}^{2}d\epsilon/dE_{2}$ plot. As discussed
in Sec. III.1.4, our results deviate considerably from the ultrarelativistic
approximation.
To further illustrate the physics of the IC scattering, consider the blue line
in Fig. 8, which is the spectrum produced by electrons with
$\gamma_{\text{JF}}$ between $10^{1}$ and $10^{2}$. For $E_{2}>0.05$ MeV
($E_{2,\text{JF}}>10^{-3}$ MeV), the spectrum enters a region where all
scattering geometries are accessible by Thomson scattering, thus resulting in
the $E_{2}^{2}$ behaviour. The $E_{2}^{2}$ trend continues until reaching the
point $P_{1}$, which corresponds to
$E_{2,\text{JF}}=4\gamma_{\text{JF,m}}^{2}E_{1,\text{JF,m}}$, where
$\gamma_{\text{JF,m}}$ and $E_{1,\text{JF,m}}$ are the lower cutoffs of the
source electron and photon spectra, respectively. Above $P_{1}$, the spectrum
softens, mainly due to the spectral shape of the source photons. Finally,
above the point $P_{2}$, the scattering transitions into the KN regime, and
the spectrum steepens further, mainly due to the spectral shape of the source
electrons. We note that the transition into the KN regime can be rapid for
large $\gamma$ values, as is the case for the yellow line in Fig. 8.
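As an illustration of the transition-point estimate, the following snippet evaluates $P_{1}=4\gamma_{\text{JF,m}}^{2}E_{1,\text{JF,m}}$ for the blue curve and converts it to the observer frame using the nominal boost of $\sim 50$; the numbers are indicative only.

```python
gamma_min = 10.0   # lower electron cutoff of the blue curve (gamma_JF = 10-100)
E1_min_JF = 1e-3   # lower cutoff of the target-photon spectrum [MeV]
boost = 50.0       # nominal jet-frame -> observer-frame boost

# Upper edge of the pure-Thomson E2^2 segment, in the jet frame [MeV]
P1_JF = 4.0 * gamma_min**2 * E1_min_JF
# Roughly where P1 appears on the observer-frame axis of Fig. 8 [MeV]
P1_obs = boost * P1_JF
```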
Finally, as mentioned in Sec. III.1.2, our formalism works correctly for cases
with $E_{2}>E_{e,\rm{max}}$ when there is appreciable energy in the target
photons. These contributions appear as the ’bump’ features in Fig. 8. In
these cases, however, the extra contributions are overwhelmed by those from
higher-energy electrons.
## V Conclusions and Outlook
In this work, we consider the scattering between energetic electrons and
photons, and study the differential cross section for the outgoing photon
spectrum. This process is frequently used in high-energy astrophysics for the
production of IC photon emission.
We demonstrate that for the general case (without the ultrarelativistic
approximation) of anisotropic photons scattering with isotropic electrons, the
differential cross section can be written as Eq. II.4, which can be easily
integrated numerically by finding the analytic solutions to the scattering
geometry, given by Eq. 18.
Focusing on isotropic photons scattering with isotropic electrons, we compare
our formalism with that of BG70, which adopted the ultrarelativistic
approximation. We find that the scattering can be divided into three regimes:
the Thomson regime, the KN regime, and the Trans-Compton regime. In general,
we find deviations from the BG70 formula in the downscattering limit
($E_{2}<E_{1}$), as well as when the target photon energy becomes comparable
to the electron energy.
To validate our formalism in the ultrarelativistic limit and show that it is
numerically stable, we consider two cases of high-energy IC emission. The
first is solar IC emission, produced by anisotropic solar photons scattering
with isotropic cosmic-ray electrons. We find that our results agree well with
those from StellarICs Orlando and Strong (2021). Because solar photons are
relatively low in energy, corrections from the exact formalism only appear
below keV photon energies. The second case we consider is SSC emission, where
isotropic synchrotron photons scatter with isotropic electrons in
astrophysical jets. Because the synchrotron photons are comparatively
energetic, the corrections from downscattering can be significant in the
low-energy part of the SSC spectrum. At high energies, our results agree well
with those produced by the BG70 formula.
In these case studies, although we only consider scattering between isotropic
source electrons and isotropic photons (SSC) or anisotropic photons (solar
IC), it is straightforward to generalise the calculations to include different
angular distributions for photons and electrons in Eq. 2. For example, in the
case of external Compton emission in astrophysical jets Dermer (1995); Kelner
_et al._ (2014), photon emission is produced by jet electrons scattering with
photons from the CMB, accretion disk, or dusty torus Finke (2016). In such
cases, both the photon and electron distributions can be anisotropic, and
thus a more general formalism like ours is required.
In this work, we have focused much of our discussion on the gamma-ray regime,
where IC emission is typically considered. This is in part to show that our
formalism reduces to the well-established results in the ultrarelativistic
regime. In general, we expect that a general formalism like this is required
whenever mildly relativistic electrons are involved, or when the target photon
energy is not small compared to the electron energy.
Finally, through Eq. 18, we have obtained an exact solution to the scattering
angle geometry. This allows us to obtain the photon polarization caused by
anisotropic scattering, which we will discuss in detail in a follow-up work
Lai and Ng . In the literature, results on photon polarization caused by IC
scattering are somewhat inconsistent Krawczynski (2012). This work forms the
basis for a numerical framework to obtain the polarization spectrum. This is
especially timely, given that recent and upcoming X-ray and gamma-ray
telescopes are capable of detecting photon polarization Weisskopf _et al._
(2016); Zhang _et al._ (2016); Alessandro _et al._ (2021).
## VI Acknowledgements
We thank Ming-Chung Chu for helpful discussions. This work makes use of
StellarICs, an open-source code available from Orlando and Strong (2021).
This project is supported by a grant from the Research Grants Council of the
Hong Kong Special Administrative Region, China (Project No. 24302721).
## References
* Hunter _et al._ (1997) S. D. Hunter _et al._ , “EGRET observations of the diffuse gamma-ray emission from the galactic plane,” Astrophys. J. 481, 205–240 (1997).
* Moskalenko and Strong (1999) I. V. Moskalenko and A. W. Strong, “Puzzles of Galactic continuum gamma-rays,” Astrophysical Letters and Communications 38, 445–448 (1999), arXiv:astro-ph/9811221 [astro-ph] .
* Strong _et al._ (2000) Andrew W. Strong, Igor V. Moskalenko, and Olaf Reimer, “Diffuse Continuum Gamma Rays from the Galaxy,” ApJ 537, 763–784 (2000), arXiv:astro-ph/9811296 [astro-ph] .
* Thompson _et al._ (1997) D. J. Thompson, D. L. Bertsch, D. J. Morris, and R. Mukherjee, “Energetic gamma ray experiment telescope high-energy gamma ray observations of the Moon and quiet Sun,” J. Geophys. Res. 102, 14735–14740 (1997).
* Strong _et al._ (2004a) Andrew W. Strong, Igor V. Moskalenko, and Olaf Reimer, “Diffuse galactic continuum gamma rays: A model compatible with EGRET data and cosmic-ray measurements,” The Astrophysical Journal 613, 962–976 (2004a).
* Strong _et al._ (2004b) Andrew W. Strong, Igor V. Moskalenko, and Olaf Reimer, “A new determination of the extragalactic diffuse gamma-ray background from EGRET data,” The Astrophysical Journal 613, 956–961 (2004b).
* Ackermann _et al._ (2011) M. Ackermann _et al._ , “A Cocoon of Freshly Accelerated Cosmic Rays Detected by Fermi in the Cygnus Superbubble,” Science 334, 1103–1107 (2011).
* Orlando and Strong (2007) E. Orlando and A. W. Strong, “Gamma rays from halos around stars and the sun,” Astrophysics and Space Science 309, 359–363 (2007).
* Moskalenko _et al._ (2006) Igor V. Moskalenko, Troy A. Porter, and Seth W. Digel, “Inverse Compton Scattering on Solar Photons, Heliospheric Modulation, and Neutrino Astrophysics,” ApJ 652, L65–L68 (2006), arXiv:astro-ph/0607521 [astro-ph] .
* Zhou _et al._ (2017) Bei Zhou, Kenny C. Y. Ng, John F. Beacom, and Annika H. G. Peter, “TeV solar gamma rays from cosmic-ray interactions,” Physical Review D 96 (2017), 10.1103/physrevd.96.023015.
* Orlando and Strong (2008) E. Orlando and A. W. Strong, “Gamma-ray emission from the solar halo and disk: a study with EGRET data,” A&A 480, 847–857 (2008), arXiv:0801.2178 [astro-ph] .
* Orlando _et al._ (2008) E. Orlando, D. Petry, and A. W. Strong, “Extended inverse-Compton gamma-ray emission from the Sun seen by EGRET,” in _International Cosmic Ray Conference_ , International Cosmic Ray Conference, Vol. 1 (2008) pp. 11–14.
* Abdo _et al._ (2011) A. A. Abdo _et al._ (Fermi-LAT), “Fermi-LAT Observations of Two Gamma-Ray Emission Components from the Quiescent Sun,” Astrophys. J. 734, 116 (2011), arXiv:1104.2093 [astro-ph.HE] .
* Ng _et al._ (2016) Kenny C. Y. Ng, John F. Beacom, Annika H. G. Peter, and Carsten Rott, “First observation of time variation in the solar-disk gamma-ray flux with fermi,” Physical Review D 94 (2016), 10.1103/physrevd.94.023004.
* Orlando and Strong (2021) Elena Orlando and Andrew Strong, “StellarICS: inverse Compton emission from the quiet Sun and stars from keV to TeV,” J. Cosmology Astropart. Phys 2021, 004 (2021), arXiv:2012.13126 [astro-ph.HE] .
* Tang _et al._ (2018) Qing-Wen Tang, Kenny C. Y. Ng, Tim Linden, Bei Zhou, John F. Beacom, and Annika H. G. Peter, “Unexpected dip in the solar gamma-ray spectrum,” Phys. Rev. D 98, 063019 (2018), arXiv:1804.06846 [astro-ph.HE] .
* Linden _et al._ (2022) Tim Linden, John F. Beacom, Annika H. G. Peter, Benjamin J. Buckman, Bei Zhou, and Guanying Zhu, “First observations of solar disk gamma rays over a full solar cycle,” Phys. Rev. D 105, 063013 (2022), arXiv:2012.04654 [astro-ph.HE] .
* Linden _et al._ (2018) Tim Linden, Bei Zhou, John F. Beacom, Annika H. G. Peter, Kenny C. Y. Ng, and Qing-Wen Tang, “Evidence for a New Component of High-Energy Solar Gamma-Ray Production,” Phys. Rev. Lett. 121, 131103 (2018), arXiv:1803.05436 [astro-ph.HE] .
* Chiang and Dermer (1999) J. Chiang and C. D. Dermer, “Synchrotron and ssc emission and the blast-wave model of gamma-ray bursts,” Astrophys. J. 512, 699 (1999), arXiv:astro-ph/9803339 .
* Meszaros _et al._ (1994) P. Meszaros, M. J. Rees, and H. Papathanassiou, “Spectral properties of blast wave models of gamma-ray burst sources,” Astrophys. J. 432, 181–193 (1994), arXiv:astro-ph/9311071 .
* Yuksel _et al._ (2009) Hasan Yuksel, Matthew D. Kistler, and Todor Stanev, “TeV Gamma Rays from Geminga and the Origin of the GeV Positron Excess,” Phys. Rev. Lett. 103, 051101 (2009), arXiv:0810.2784 [astro-ph] .
* Linden _et al._ (2017) Tim Linden, Katie Auchettl, Joseph Bramante, Ilias Cholis, Ke Fang, Dan Hooper, Tanvi Karwal, and Shirley Weishi Li, “Using HAWC to discover invisible pulsars,” Phys. Rev. D 96, 103016 (2017), arXiv:1703.09704 [astro-ph.HE] .
* Sudoh _et al._ (2019) Takahiro Sudoh, Tim Linden, and John F. Beacom, “TeV Halos are Everywhere: Prospects for New Discoveries,” Phys. Rev. D 100, 043016 (2019), arXiv:1902.08203 [astro-ph.HE] .
* Sunyaev and Zeldovich (1970) R. A. Sunyaev and Ya. B. Zeldovich, “Small scale fluctuations of relic radiation,” Astrophys. Space Sci. 7, 3–19 (1970).
  * Jones (1968) F. C. Jones, “Calculated spectrum of inverse-Compton-scattered photons,” Phys. Rev. 167, 1159–1169 (1968), 10.1103/PhysRev.167.1159.
  * Blumenthal and Gould (1970) George R. Blumenthal and Robert J. Gould, “Bremsstrahlung, synchrotron radiation, and Compton scattering of high-energy electrons traversing dilute gases,” Rev. Mod. Phys. 42, 237–270 (1970).
* Rybicki and Lightman (1986) George B. Rybicki and Alan P. Lightman, _Radiative Processes in Astrophysics_ (1986).
* Inoue _et al._ (2019) Yoshiyuki Inoue, Dmitry Khangulyan, Susumu Inoue, and Akihiro Doi, “On high-energy particles in accretion disk coronae of supermassive black holes: implications for MeV gamma rays and high-energy neutrinos from AGN cores,” (2019), 10.3847/1538-4357/ab2715, arXiv:1904.00554 [astro-ph.HE] .
* Banik and Bhadra (2019) Prabir Banik and Arunava Bhadra, “Describing correlated observations of neutrinos and gamma-ray flares from the blazar TXS 0506+056 with a proton blazar model,” Phys. Rev. D 99, 103006 (2019), arXiv:1908.11849 [astro-ph.HE] .
* Takami _et al._ (2014) Hajime Takami, Koutarou Kyutoku, and Kunihito Ioka, “High-Energy Radiation from Remnants of Neutron Star Binary Mergers,” Phys. Rev. D 89, 063006 (2014), arXiv:1307.6805 [astro-ph.HE] .
* Belmont (2009) R. Belmont, “Numerical computation of isotropic Compton scattering,” Astron. Astrophys. 506, 589 (2009), arXiv:0908.2705 [astro-ph.HE] .
* Nagirner and Poutanen (1994) D. I. Nagirner and J. Poutanen, _Single Compton scattering_ , Vol. 9 (1994).
* Pe’er and Waxman (2005) Asaf Pe’er and Eli Waxman, “Time dependent numerical model for the emission of radiation from relativistic plasma,” Astrophys. J. 628, 857–866 (2005), arXiv:astro-ph/0409539 .
* Coppi and Blandford (1990) P S Coppi and R D Blandford, “Reaction rates and energy distributions for elementary processes in relativistic pair plasmas,” Monthly Notices of the Royal Astronomical Society 245, 453–453 (1990), https://academic.oup.com/mnras/article-pdf/245/3/453/42466511/mnras0453.pdf .
* Pe’er _et al._ (2006) Asaf Pe’er, Peter Meszaros, and Martin J. Rees, “The observable effects of a photospheric component on grb’s and xrf’s prompt emission spectrum,” Astrophys. J. 642, 995–1003 (2006), arXiv:astro-ph/0510114 .
* Vurm and Poutanen (2009) Indrek Vurm and Juri Poutanen, “Time-dependent modelling of radiative processes in hot magnetized plasmas,” Astrophys. J. 698, 293–316 (2009), arXiv:0807.2540 [astro-ph] .
* Sarkar _et al._ (2019) Abir Sarkar, Jens Chluba, and Elizabeth Lee, “Dissecting the Compton scattering kernel I: Isotropic media,” Mon. Not. Roy. Astron. Soc. 490, 3705–3726 (2019), arXiv:1905.00868 [astro-ph.CO] .
* Aharonian and Atoyan (1981) F. A. Aharonian and A. M. Atoyan, “Compton Scattering of Relativistic Electrons in Compact X-Ray Sources,” Ap&SS 79, 321–336 (1981).
* Chen and Bastian (2012) Bin Chen and Timothy S. Bastian, “The Role of Inverse Compton Scattering in Solar Coronal Hard X-ray and Gamma-ray Sources,” Astrophys. J. 750, 35 (2012), arXiv:1108.0131 [astro-ph.SR] .
* Murase _et al._ (2010) Kohta Murase, Kenji Toma, Ryo Yamazaki, Shigehiro Nagataki, and Kunihito Ioka, “High-energy emission as a test of the prior emission model for gamma-ray burst afterglows,” MNRAS 402, L54–L58 (2010), arXiv:0910.0232 [astro-ph.HE] .
* Nagirner and Poutanen (1993) D. I. Nagirner and Yu. J. Poutanen, “Compton scattering by Maxwellian electrons - Redistribution of radiation according to frequencies and directions,” Astronomy Letters 19, 262–267 (1993).
* Brunetti (2000) G. Brunetti, “Anisotropic inverse Compton scattering from the trans-relativistic to the ultrarelativistic regime and application to the radio galaxies,” Astropart. Phys. 13, 107–125 (2000), arXiv:astro-ph/9908236 .
* Poutanen and Vurm (2010) Juri Poutanen and Indrek Vurm, “Theory of Compton scattering by anisotropic electrons,” Astrophys. J. Suppl. 189, 286–308 (2010), arXiv:1006.2397 [astro-ph.HE] .
* Kelner _et al._ (2014) S. R. Kelner, E. Lefa, F. M. Rieger, and F. A. Aharonian, “The Beaming Pattern of External Compton Emission from Relativistic Outflows: The Case of Anisotropic Distribution of Electrons,” ApJ 785, 141 (2014), arXiv:1308.5157 [astro-ph.HE] .
* Molnar and Birkinshaw (1999) S. M. Molnar and M. Birkinshaw, “Inverse Compton scattering in mildly relativistic plasma,” Astrophys. J. 523, 78 (1999), arXiv:astro-ph/9903444 .
* Moskalenko and Strong (2000) Igor V. Moskalenko and Andrew W. Strong, “Anisotropic inverse compton scattering in the galaxy,” The Astrophysical Journal 528, 357–367 (2000).
* Jauch and Rohrlich (2012) J.M. Jauch and F. Rohrlich, _The Theory of Photons and Electrons: The Relativistic Quantum Field Theory of Charged Particles with Spin One-half_, Theoretical and Mathematical Physics (Springer Berlin Heidelberg, 2012).
* Orlando and Strong (2013) Elena Orlando and Andrew Strong, “Stellarics: Stellar and solar inverse compton emission package,” (2013).
* Aguilar _et al._ (2014) M. Aguilar _et al._ (AMS), “Electron and Positron Fluxes in Primary Cosmic Rays Measured with the Alpha Magnetic Spectrometer on the International Space Station,” Phys. Rev. Lett. 113, 121102 (2014).
* Potter and Cotter (2012) William J. Potter and Garret Cotter, “Synchrotron and inverse-Compton emission from blazar jets I: a uniform conical jet model,” Mon. Not. Roy. Astron. Soc. 423, 756 (2012), arXiv:1203.3881 [astro-ph.HE] .
* Dermer (1995) Charles D. Dermer, “On the Beaming Statistics of Gamma-Ray Sources,” ApJ 446, L63 (1995).
* Finke (2016) Justin D. Finke, “External Compton Scattering in Blazar Jets and the Location of the Gamma-Ray Emitting Region,” Astrophys. J. 830, 94 (2016), [Erratum: Astrophys.J. 860, 178 (2018)], arXiv:1607.03907 [astro-ph.HE] .
  * Lai and Ng Anderson C. M. Lai and Kenny C. Y. Ng, in preparation.
* Krawczynski (2012) H. Krawczynski, “The Polarization Properties of Inverse Compton Emission and Implications for Blazar Observations with the GEMS X-Ray Polarimeter,” ApJ 744, 30 (2012), arXiv:1109.2186 [astro-ph.HE] .
* Weisskopf _et al._ (2016) Martin C. Weisskopf, Brian Ramsey, Stephen L. O’Dell, Allyn Tennant, Ronald Elsner, Paolo Soffita, Ronaldo Bellazzini, Enrico Costa, Jeffery Kolodziejczak, Victoria Kaspi, Fabio Mulieri, Herman Marshall, Giorgio Matt, and Roger Romani, “The imaging x-ray polarimetry explorer (ixpe),” Results in Physics 6, 1179–1180 (2016).
* Zhang _et al._ (2016) S. N. Zhang _et al._ (eXTP), “eXTP – enhanced X-ray Timing and Polarimetry Mission,” Proc. SPIE Int. Soc. Opt. Eng. 9905, 99051Q (2016), arXiv:1607.08823 [astro-ph.IM] .
* Alessandro _et al._ (2021) D. A. Alessandro _et al._ , “Gamma-ray Astrophysics in the MeV Range: the ASTROGAM Concept and Beyond,” arXiv e-prints , arXiv:2102.02460 (2021), arXiv:2102.02460 [astro-ph.IM] .
## Appendix A Relating the positive discriminant to the kinematic constraint on $E_{2}$
A non-negative discriminant of the quadratic equation Eq. 16,
$\Delta=A^{2}+B^{2}-C^{2}\geq 0$, ensures real solutions and sets the
kinematic constraint on the possible range of $E_{2}$. To illustrate this, we
rewrite the discriminant as a quadratic inequality in $E_{2}$:
$\displaystyle K_{1}E^{2}_{2}+K_{2}E_{2}+K_{3}\geq 0,$ (21)
with
$\displaystyle K_{1}=\beta^{2}(\cos^{2}\theta+\sin^{2}\theta\cos^{2}\phi_{e})-\left[\frac{E_{1}(1-\cos\theta)}{E_{e}}+1\right]^{2},$
$\displaystyle K_{2}=2E_{1}\left[\frac{E_{1}(1-\cos\theta)}{E_{e}}+1-\beta\cos\theta\right],$
$\displaystyle K_{3}=(\beta^{2}-1)E^{2}_{1}.$
Eq. 21 has its own discriminant, $\Delta^{\prime}=K_{2}^{2}-4K_{1}K_{3}$,
which must be non-negative for a physical solution. In the non-relativistic
KN regime, $\beta\rightarrow 0$ and $E_{1}\sim m_{e}\approx E_{e}$. We then
have $K_{1}<0$, so that $E_{2}$ is bounded by the roots of Eq. 21. We also
have $\Delta^{\prime}\approx 0$, so:
$\displaystyle\frac{E_{1}}{\frac{E_{1}}{m_{e}}(1-\cos\theta)+1}\leq E_{2}\leq\frac{E_{1}}{\frac{E_{1}}{m_{e}}(1-\cos\theta)+1}\implies E_{2}=\frac{E_{1}}{\frac{E_{1}}{m_{e}}(1-\cos\theta)+1}=E_{\text{Compton}}\,,$ (22)
as expected.
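The limiting value $E_{\text{Compton}}$ is just the standard Compton formula for scattering off an electron at rest, which can be checked directly (a self-contained sketch, with $m_{e}$ in MeV):

```python
M_E = 0.511  # electron rest energy [MeV]

def compton_energy(E1, cos_theta):
    """Scattered photon energy for a photon of energy E1 [MeV]
    Compton scattering off an electron at rest."""
    return E1 / (1.0 + (E1 / M_E) * (1.0 - cos_theta))
```

For back-scattering ($\cos\theta=-1$) of a photon with $E_{1}=m_{e}$ this gives $E_{2}=m_{e}/3$, and for $E_{1}\ll m_{e}$ it reduces to elastic (Thomson) scattering, $E_{2}\approx E_{1}$.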
In the Thomson regime, $E^{\prime}_{1}\ll m_{e}$ and $E_{2}$ is again bounded
by the roots, as $K_{1}<0$. We further consider the ultrarelativistic limit,
$\gamma\gg 1$; in this limit, both the maximum and minimum scattered photon
energies $E_{2}$ occur for $\cos\theta=-1$ scattering. Manipulating
$\Delta^{\prime}$ yields:
$\displaystyle K^{2}_{2}-4K_{1}K_{3}=(1+\beta)^{2}-(1-\beta^{2})^{2}=(1+\beta)^{2}[1-(1-\beta)^{2}]\approx(1+\beta)^{2}\left(1-\frac{1}{4\gamma^{4}}\right),$ (23)
where the relativistic approximation $\beta\approx 1-1/(2\gamma^{2})$ is used
in the last line. With Eq. 23 and some additional algebra, the condition
$\Delta\geq 0$ becomes:
$\displaystyle\frac{1-(1-1/(8\gamma^{4}))}{1-\beta}E_{1}\leq E_{2}\leq\frac{1+(1-1/(8\gamma^{4}))}{1-\beta}E_{1}\implies\frac{1}{4\gamma^{2}}E_{1}\leq E_{2}\leq 4\gamma^{2}E_{1}.$ (24)
Again, this agrees with the limits given in Rybicki and Lightman (1986).
Therefore, a real solution of Eq. 18 implies a physical solution for the
scattering.
# Relaxion Dark Matter from Stochastic Misalignment
Aleksandr Chatrchyan and Géraldine Servant
###### Abstract
Cosmological relaxation of the electroweak scale via Higgs-axion interplay,
known as the relaxion mechanism, provides a dynamical solution to the Higgs
mass hierarchy problem. In the original proposal by Graham, Kaplan and
Rajendran, the relaxion abundance today is too small to explain the dark
matter of the universe because of the strong suppression of the misalignment
angle after inflation. It was then realised by Banerjee, Kim and Perez that
reheating effects can displace the relaxion, enabling it to account for the
dark matter abundance via the misalignment mechanism. However, this scenario
is realised only in a limited region of parameter space, in order to avoid
runaway. We show that in the regime where inflationary fluctuations dominate
over the classical slow-roll, the “stochastic misalignment” of the field due
to fluctuations can be large. We study the evolution of the relaxion after
inflation, including the high-temperature scenario, in which the barriers of
the potential shrink and temporarily destabilise the local minimum. We open
new regions of parameter space where the relaxion can naturally explain the
observed dark matter density in the universe, towards larger coupling, larger
mass, larger mixing angle, smaller decay constant, as well as larger scales
of inflation.
DESY-22-177
## 1 Introduction
An approach differing from the traditional methods of addressing the Higgs
hierarchy problem has been proposed recently, relying on a cosmological
Higgs-axion interplay [1]. The axion field introduced in this context is
called the relaxion, and its role is to dynamically select a small value for
the Higgs mass in the early universe. The potential of the coupled relaxion-
Higgs system has the simple form
$V(h,\phi)=-g\Lambda^{3}\phi+\frac{1}{2}[\Lambda^{2}-g^{\prime}\Lambda\phi]h^{2}+\frac{\lambda_{h}}{4!}h^{4}+\Lambda_{b}^{4}\Bigl{[}1-\cos\Bigl{(}\frac{\phi}{f}\Bigr{)}\Bigr{]},$
(1.1)
where $\Lambda$ is the cut-off scale of the standard model (SM) Higgs
effective field theory, $g$ and $g^{\prime}$ are small dimensionless
parameters that characterize the rolling potential and the relaxion-Higgs
coupling, $\lambda_{h}$ is the quartic coupling of the Higgs, $f$ is the decay
constant of the relaxion and $\Lambda_{b}$ describes the barriers of the
relaxion potential. The height of the barriers $\Lambda_{b}$ is sensitive to
the Higgs vev and vanishes if $\langle h\rangle=0$. As a consequence, starting
from a large and positive Higgs mass squared, the slow-roll dynamics of the
relaxion eventually brings it to a local minimum corresponding to a small and
negative Higgs mass squared.
In a large part of the parameter region, the relaxion is light and stable on
cosmological time scales. Such scalars are known to be generally good
candidates for dark matter (DM) when produced via the misalignment mechanism
[2, 3, 4, 5]. In the original proposal [1], the relaxion cannot address both
the hierarchy problem and the DM. In this work, we re-visit the possibility
that the relaxion solves the hierarchy problem and, at the same time, explains
the observed DM density in the universe.
A long period of inflation is usually required for the relaxion to scan the
Higgs mass and roll down to the correct local minimum (for a status review of
alternative friction mechanisms, see [6]). The field is also subject to
quantum fluctuations which, in the superhorizon limit, are determined by the
inflationary Hubble scale $H_{I}$. These fluctuations produce a random walk
for the relaxion, preventing it from stopping exactly at the minimum. On
timescales much shorter than the total relaxation time, a (meta)equilibrium
is established in the local minimum, of the form [7]
$\rho(\phi)\propto\exp\Bigl{(}-\frac{8\pi^{2}V(\phi)}{3H_{I}^{4}}\Bigr{)},$
(1.2)
where $\rho(\phi)$ denotes the probability distribution for the field to have
an average value $\phi$ inside a Hubble patch. The local minimum is expected
to be long-lived; hence, the distribution is concentrated near the quadratic
minimum of the potential. In this limit, it is approximately Gaussian with a
variance given by
$\sigma_{\phi}^{2}=\frac{3H_{I}^{4}}{8\pi^{2}m_{\phi}^{2}}.$ (1.3)
This “stochastic” misalignment is later converted into coherent oscillations
of the field which can behave as DM. Such a stochastic window for standard QCD
axion DM was investigated in detail in [8, 9].
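Eq. (1.3) follows from Eq. (1.2) for a quadratic minimum, $V(\phi)\approx\frac{1}{2}m_{\phi}^{2}\phi^{2}$, for which the distribution is exactly Gaussian. The short numerical cross-check below (in arbitrary units; not part of any published relaxion code) recovers the variance by direct integration of Eq. (1.2):

```python
import numpy as np

def stochastic_variance(H_I, m_phi):
    """Equilibrium variance of the relaxion field, Eq. (1.3)."""
    return 3.0 * H_I**4 / (8.0 * np.pi**2 * m_phi**2)

# Cross-check: build rho(phi) of Eq. (1.2) for V = m^2 phi^2 / 2 and
# compute <phi^2> by direct integration.
H_I, m_phi = 1.0, 3.0  # arbitrary units for the check
sigma = np.sqrt(stochastic_variance(H_I, m_phi))
phi = np.linspace(-10.0 * sigma, 10.0 * sigma, 200001)
rho = np.exp(-8.0 * np.pi**2 * (0.5 * m_phi**2 * phi**2) / (3.0 * H_I**4))
var_numeric = np.trapz(phi**2 * rho, phi) / np.trapz(rho, phi)
```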
The parameter region of the relaxion can be split into two parts.
* •
$\boxed{H_{I}^{3}<g\Lambda^{3}}$
In the so-called classical-beats-quantum (CbQ) regime, the slow-roll of the
field per Hubble time dominates over its random motion, as the field rolls
down the potential. Most of the studies, including [1], considered the
mechanism in this regime. Unfortunately, the stochastic misalignment can
explain only a tiny fraction of the DM abundance in this case.
* •
$\boxed{H_{I}^{3}>g\Lambda^{3}}$
Somewhat less explored is the quantum-beats-classical (QbC) regime, where the
random walk of the relaxion dominates over its classical motion. The mechanism
in this regime was investigated in our recent work [10], as well as in [11,
12]. Using the Fokker-Planck equation, it was shown that in this case, too,
the field can roll down to the minimum with a small Higgs mass, successfully
generating the hierarchy. At the same time, larger values of the inflationary
scale $H_{I}$ in this case allow for a larger spread of the distribution
$\rho(\phi)$ in the local minimum. In contrast to the CbQ regime, here we
identify a large parameter region where the stochastic misalignment can
naturally generate the observed DM abundance. This is in a regime where the
mechanism does not require eternal inflation.
There is yet another source of misalignment for the relaxion, which was
described in [13]. Let us denote by $T_{b}$ the temperature below which the
barriers of the relaxion potential are established. Already at temperatures
comparable to the weak scale, the Higgs vev itself changes due to thermal
corrections to the Higgs potential and, in particular, at $T\approx 160$ GeV
the electroweak symmetry is expected to be restored (see e.g. [14]). This
would remove the barriers, implying that $T_{b}$ cannot exceed that
temperature, although it can be lower. If after inflation the universe reheats
to temperatures $T_{\mathrm{rh}}$ well above $T_{b}$, the local minimum in
which the relaxion is trapped will disappear for some time. Due to the slope
of the potential, the relaxion will be further displaced from its minimum.
Under certain conditions, derived in [13], the misalignment from this
“roll-on” can explain the DM abundance already in the CbQ regime.
In this work, we investigate both the low-temperature and the high-temperature
reheating scenarios, extending the analysis of [13] to the QbC regime. The
largest DM window is achieved in the case of a low reheating temperature.
However, even in the high-reheating-temperature case, we find a substantial
parameter region where the displacement after reheating is small and the
stochastic misalignment accounts for the DM. For completeness, we also compare
these windows to the roll-on window of [13] and extend the latter to the QbC
regime.
Other, less minimal scenarios in which the relaxion mechanism provides a
viable DM candidate were studied in several works. The authors of [15]
considered relaxation with two scanning scalar fields instead of one, which
makes it possible to avoid the ‘coincidence problem’ of the original proposal
($\Lambda_{b}\lesssim$ EW scale). In this setup, the second scalar field scans
the barrier height $\Lambda_{b}$ of the relaxion potential and can be the DM
of the universe. In [13, 16], a coupling of the relaxion to an additional
dark photon field was added, which allows for a large DM window in the
high-temperature reheating case. In [17], the authors considered an
alternative relaxion model where friction comes from gauge boson production.
In this case, the relaxion is produced via scattering with the thermal bath
and can be a warm DM candidate in the $\mathrm{keV}$ range. Let us also
mention another mechanism for selecting a small Higgs mass, sliding
naturalness [18, 19], which can also explain the DM.
The outline of this work is the following. In section 2, we compute the energy
density stored in the coherent oscillations due to the stochastic
misalignment. This allows us to construct the relaxion DM window in section 3,
for the $T_{\mathrm{rh}}<T_{b}$ case. We verify that in the CbQ regime the
relaxion is always underabundant, as well as demonstrate how in the QbC regime
the relaxion can explain the DM abundance in a large parameter region. In
section 4, we study the case of high reheating temperature,
$T_{\mathrm{rh}}\gg T_{b}$, taking into account the additional displacement of
the field after reheating. In section 5, we estimate the thermal production of
the relaxion and verify that in the DM window this contribution is negligible.
In section 6, we focus on the QCD relaxion model and the DM window in that
scenario. Except for that section, we always restrict ourselves to the
scenario where the relaxion mechanism does not require inflation to be
eternal. We conclude in section 7.
## 2 Axion abundance from the stochastic misalignment
In this section, we compute the energy density in the coherent oscillations of
a generic axion-like field $\phi$ with mass $m_{\phi}$, comparing this to the
observed DM abundance in the universe. We focus on the stochastic misalignment
of the field, i.e. we assume that the typical displacement from the local minimum
of its potential is set by the inflationary Hubble scale and given by Eq.
(1.3). We will now show that the energy density today scales as
$\Omega_{\phi,0}\propto H_{I}^{4}m_{\phi}^{-3/2}$.
The scale of inflation in the relaxion mechanism is typically low,
$H_{I}<v_{h}$, where $v_{h}=246$ GeV is the electroweak scale. We consider two
cases, comparing the Hubble parameter at which the field starts oscillating
around the minimum of its potential, $H_{\mathrm{osc}}\approx m_{\phi}/3$, to
the Hubble parameter at the end of reheating,
$H^{2}_{\mathrm{rh}}=\Bigl{(}\frac{1}{2t_{\mathrm{rh}}}\Bigr{)}^{2}=\frac{8\pi^{3}}{90}\frac{g(T_{\mathrm{rh}})T_{\mathrm{rh}}^{4}}{M_{Pl}^{2}},$
(2.1)
where $g(T)$ denotes the effective number of relativistic degrees of freedom
for the energy density [20]. If $H_{\mathrm{rh}}>H_{\mathrm{osc}}$, the field
enters the oscillatory regime after reheating, in the radiation-dominated era.
The condition for this can be re-written as an upper bound on the mass,
$m_{\phi}<4\times
10^{-5}\mathrm{eV}\Bigl{(}\frac{T_{\mathrm{rh}}}{100\mathrm{GeV}}\Bigr{)}^{2}\sqrt{\frac{g(T_{\mathrm{rh}})}{100}}.$
(2.2)
If $H_{\mathrm{rh}}<H_{\mathrm{osc}}$, the onset of oscillations is during
reheating. The relic abundance in this case is sensitive to the equation of
state of the universe before reheating.
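The threshold in Eq. (2.2) can be checked numerically. The following minimal sketch (the function name `m_phi_max_ev` and the value of the non-reduced Planck mass $M_{Pl}=1.22\times 10^{19}$ GeV are our choices, not taken from the text) evaluates $3H_{\mathrm{rh}}$ from Eq. (2.1):

```python
import math

M_PL = 1.22e19  # GeV; assumed (non-reduced) Planck mass
GEV_TO_EV = 1e9

def hubble_rh(T_rh_gev, g_star):
    """Hubble rate at the end of reheating, Eq. (2.1)."""
    return math.sqrt(8 * math.pi ** 3 / 90 * g_star) * T_rh_gev ** 2 / M_PL

def m_phi_max_ev(T_rh_gev, g_star):
    """Largest mass for which oscillations start after reheating,
    i.e. m_phi / 3 < H_rh, cf. Eq. (2.2)."""
    return 3 * hubble_rh(T_rh_gev, g_star) * GEV_TO_EV

print(f"{m_phi_max_ev(100.0, 100.0):.1e} eV")  # ~ 4e-5 eV, matching Eq. (2.2)
```

For $T_{\mathrm{rh}}=100$ GeV and $g=100$ this returns approximately $4\times 10^{-5}$ eV, reproducing the prefactor of Eq. (2.2).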
### 2.1 ${H_{\mathrm{rh}}>H_{\mathrm{osc}}}$
We start with the simple case $H_{\mathrm{rh}}>H_{\mathrm{osc}}$. Employing
entropy conservation, the energy density today can be expressed as,
$\rho_{\phi,0}\approx\rho_{\phi,\mathrm{osc}}\Bigl{(}\frac{a_{\mathrm{osc}}}{a_{0}}\Bigr{)}^{3}\approx\frac{m_{\phi}^{2}{\phi}^{2}}{2}\Bigl{(}\frac{T_{0}}{T_{\mathrm{osc}}}\Bigr{)}^{3}\Bigl{(}\frac{g_{s,0}}{g_{\mathrm{s,osc}}}\Bigr{)}.$
(2.3)
Inserting $T_{0}=2.73\,\mathrm{K}$ and
$T^{2}_{\mathrm{osc}}=M_{Pl}H_{\mathrm{osc}}(1.66\sqrt{g(T_{\mathrm{osc}})})^{-1}$
one arrives (taking into account that an accurate estimate for the relic density is obtained if one uses $H_{\mathrm{osc}}=m_{\phi}/A$ with $A\approx 1.6$, see e.g. [21], which is why the prefactor in Eq. (2.4) is slightly different from the expressions in [5]) at the usual expression [5]
$\rho_{\phi,\mathrm{0}}=6.5\,\frac{\mathrm{keV}}{\mathrm{cm}^{3}}\sqrt{\frac{m_{\phi}}{\mathrm{eV}}}\Bigl{(}\frac{{\phi}}{10^{12}\mathrm{GeV}}\Bigr{)}^{2}\mathcal{F}(T_{\mathrm{osc}}),$
(2.4)
where the dimensionless factor $\mathcal{F}(T_{\rm
osc})=(g_{s,0}/g_{s,\mathrm{osc}})(g_{\mathrm{osc}}/g_{0})^{3/4}$ encodes the
changing number of degrees of freedom for entropy, $g_{s}(T)$, and energy,
$g(T)$ [20]. It can take values between $0.3$ and $1$ and can thus be neglected for simplicity. In the case of stochastic misalignment, the typical energy density $\langle\rho_{\phi,\mathrm{osc}}\rangle=\frac{1}{2}m_{\phi}^{2}\sigma_{\phi}^{2}$ can be inserted into the above expression, giving
$\frac{\langle\Omega_{\phi,0}\rangle}{\Omega_{\mathrm{DM}}}\approx
5\sqrt{\frac{m_{\phi}}{\mathrm{eV}}}\Bigl{(}\frac{\sigma_{\phi}}{10^{12}\mathrm{GeV}}\Bigr{)}^{2}\mathcal{F}(T_{\mathrm{osc}})\approx
20\Bigl{(}\frac{\mathrm{eV}}{m_{\phi}}\Bigr{)}^{3/2}\Bigl{(}\frac{H_{I}}{100\mathrm{GeV}}\Bigr{)}^{4}\mathcal{F}(T_{\mathrm{osc}}),$
(2.5)
where $\Omega$ denotes the fractional energy density and
$\Omega_{\mathrm{DM}}\approx 0.24$ is the measured DM abundance in the
universe [22].
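As a consistency check, the two approximate equalities in Eq. (2.5) can be compared numerically. The sketch below assumes that Eq. (1.3) gives the standard stochastic spread $\sigma_{\phi}=\sqrt{3/(8\pi^{2})}\,H_{I}^{2}/m_{\phi}$ of a light spectator field (an assumption on our part, since Eq. (1.3) lies outside this excerpt) and sets $\mathcal{F}(T_{\mathrm{osc}})=1$:

```python
import math

def omega_first_form(m_phi_ev, H_I_gev):
    """First form in Eq. (2.5), with the ASSUMED stochastic spread
    sigma_phi = sqrt(3/(8 pi^2)) H_I^2 / m_phi (in eV)."""
    sigma_ev = math.sqrt(3 / (8 * math.pi ** 2)) * (H_I_gev * 1e9) ** 2 / m_phi_ev
    return 5 * math.sqrt(m_phi_ev) * (sigma_ev / 1e21) ** 2  # 10^12 GeV = 10^21 eV

def omega_second_form(m_phi_ev, H_I_gev):
    """Second form in Eq. (2.5)."""
    return 20 * (1 / m_phi_ev) ** 1.5 * (H_I_gev / 100) ** 4

# The two expressions agree up to the rounding of the quoted prefactors (~5%)
r = omega_first_form(1.0, 100.0) / omega_second_form(1.0, 100.0)
```

Both forms scale identically with $m_{\phi}$ and $H_{I}$; the small mismatch in the prefactor reflects the rounding of the coefficients 5 and 20 quoted in the text.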
### 2.2 ${H_{\mathrm{rh}}<H_{\mathrm{osc}}}$
We now move to the case $H_{\mathrm{rh}}<H_{\mathrm{osc}}$, which is slightly more complicated than the previous one because the field starts to oscillate during reheating, so an assumption must be made about the evolution of the universe at those times. For simplicity, we assume that the
background energy density after inflation scales as $\rho(a)\propto
H^{2}(a)\propto a^{-3(1+w)}$, where $w$ is the equation of state parameter
after inflation. One may generally expect $w\approx 0$, but we will allow for
a general equation of state parameter in our expressions. The energy density
today can be written as
$\rho_{\phi,0}\approx\rho_{\phi,\mathrm{osc}}\Bigl{(}\frac{a_{\mathrm{osc}}}{a_{\mathrm{rh}}}\Bigr{)}^{3}\Bigl{(}\frac{a_{\mathrm{rh}}}{a_{0}}\Bigr{)}^{3}\approx\frac{m_{\phi}^{2}{\phi}^{2}}{2}\Bigl{(}\frac{H_{\mathrm{rh}}}{H_{\mathrm{osc}}}\Bigr{)}^{2/(1+w)}\Bigl{(}\frac{T_{0}}{T_{\mathrm{rh}}}\Bigr{)}^{3}\Bigl{(}\frac{g_{s,0}}{g_{\mathrm{s,rh}}}\Bigr{)}.$
(2.6)
Performing the same steps as in the previous case we arrive at
$\langle\rho_{\phi,0}\rangle\approx
6.5\,\frac{\mathrm{keV}}{\mathrm{cm}^{3}}\sqrt{\frac{m_{\phi}}{\mathrm{eV}}}\Bigl{(}\frac{\sigma_{\phi}}{10^{12}\mathrm{GeV}}\Bigr{)}^{2}\Bigl{(}\frac{H_{\mathrm{rh}}}{H_{\mathrm{osc}}}\Bigr{)}^{\frac{1-3w}{2(1+w)}}=\langle\rho^{(w=1/3)}_{\phi,0}\rangle\Bigl{(}\frac{H_{\mathrm{rh}}}{H_{\mathrm{osc}}}\Bigr{)}^{\frac{1-3w}{2(1+w)}}.$
(2.7)
Here $\rho^{(w=1/3)}_{\phi,0}$ can be understood as today’s energy density in
the case of $w=1/3$, which is also the prediction for the relic density in the
previous case of $H_{\mathrm{rh}}>H_{\mathrm{osc}}$. As can be seen, $w<1/3$
leads to a suppression of the relic density.
Combining the two cases ${H_{\mathrm{rh}}<H_{\mathrm{osc}}}$ and
${H_{\mathrm{rh}}>H_{\mathrm{osc}}}$ the typical DM fraction can be expressed
as
$\boxed{\frac{\langle\Omega_{\phi,0}\rangle}{\Omega_{\mathrm{DM}}}\approx
20\,\Bigl(\frac{\mathrm{eV}}{m_{\phi}}\Bigr)^{3/2}\Bigl(\frac{H_{I}}{100\,\mathrm{GeV}}\Bigr)^{4}\>\mathrm{min}\Bigl\{1,\frac{H_{\mathrm{rh}}}{H_{\mathrm{osc}}}\Bigr\}^{\frac{1-3w}{2(1+w)}}.}$
(2.8)
For fixed values of $w$ and $T_{\mathrm{rh}}$,
$\langle\Omega_{\phi,0}\rangle/\Omega_{\mathrm{DM}}$ is determined by the
mass $m_{\phi}$ and has a strong dependence on the Hubble scale during
inflation $H_{I}$.
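Equation (2.8) is simple enough to encode directly. The sketch below (function name `omega_ratio` is ours; the $\mathcal{O}(1)$ factor $\mathcal{F}(T_{\mathrm{osc}})$ is dropped) also inverts it for the mass at which the relaxion saturates the DM abundance:

```python
def omega_ratio(m_phi_ev, H_I_gev, H_rh_over_H_osc=1.0, w=1 / 3):
    """Typical DM fraction of Eq. (2.8), with F(T_osc) set to one."""
    p = (1 - 3 * w) / (2 * (1 + w))
    return (20 * (1 / m_phi_ev) ** 1.5 * (H_I_gev / 100) ** 4
            * min(1.0, H_rh_over_H_osc) ** p)

# Setting the fraction to one at H_I = 100 GeV and w = 1/3 gives
# m_phi = 20^(2/3) eV ~ 7 eV, of the same order as the ~10 eV of Eq. (3.8)
# (the difference is absorbed by the neglected O(1) factors).
m_dm = 20 ** (2 / 3)
assert abs(omega_ratio(m_dm, 100.0) - 1.0) < 1e-9

# For w < 1/3 and H_rh < H_osc the relic density is suppressed, as stated
# below Eq. (2.7):
assert omega_ratio(1.0, 100.0, 0.01, w=0.0) < omega_ratio(1.0, 100.0, 1.0, w=0.0)
```

For $w=1/3$ the exponent of the $\mathrm{min}\{\cdot\}$ factor vanishes, so the result is insensitive to $T_{\mathrm{rh}}$, as noted in section 3.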
## 3 Relaxion dark matter for $T_{\mathrm{rh}}\lesssim T_{b}$
The stochastic misalignment of the relaxion can naturally explain the observed
DM abundance in a large parameter region if the QbC regime is included. This
is illustrated in Fig. 1, where we show the allowed parameter region for the
relaxion and the brown lines are determined from Eq. (2.8) setting the DM
fraction to one. We consider several reheating temperatures for the $w=0$
equation of state before reheating, as well as the $w=1/3$ case where the
relic density does not depend on $T_{\mathrm{rh}}$. In section 3.1 we explain
why the relaxion is always underabundant in the CbQ regime. We then construct
the DM window in section 3.2. Here we focus on the $T_{\mathrm{rh}}\lesssim
T_{b}$ scenario, for which the misalignment of the relaxion is unaffected by
the reheating of the universe. The scenario $T_{\mathrm{rh}}\gg T_{b}$ is
examined in section 4.
Before proceeding, let us summarize the constraints that are imposed on the
relaxion when constructing its parameter region. We refer to [10] for more
details. The main free parameters are $g$ (we set $g^{\prime}=g$), $\Lambda$,
$f$ and $H_{I}$. The relaxion is expected not to back-react on Hubble
expansion during inflation, implying
$H_{I}^{2}>(8\pi/3)\Lambda^{4}/M_{\mathrm{Pl}}^{2}$. The spread in the Higgs
mass due to diffusion effects $\Delta\mu_{h}^{2}\sim H_{I}^{2}$ is supposed to
be small and we impose $H_{I}<100\mathrm{GeV}$. The separation between the
local minima is also required to be less than the scanning precision,
$g^{\prime}\Lambda(2\pi f)<|\mu_{h}^{2}|=(88\mathrm{GeV})^{2}$. The decay
constant is usually assumed to be subplanckian, $f<M_{\mathrm{Pl}}$, and we
also require $f>\Lambda$ for the consistency of the effective theory. The cut-off scale $\Lambda$ is assumed to be at least a $\mathrm{TeV}$. Except for section 6, we always require that inflation is not eternal, i.e. that the
minimal number of e-folds required for the field to relax the cut-off scale,
$N_{I}\sim 3H_{I}^{2}/(g^{2}\Lambda^{2})$, does not exceed the critical number
of e-folds corresponding to eternal inflation
$N_{c}=(2\pi^{2}/3)M_{\mathrm{Pl}}^{2}/H_{I}^{2}$ [8, 23]. This condition
translates into
$g\Lambda>\frac{3H_{I}^{2}}{\sqrt{2}\pi M_{Pl}}.$ (3.1)
We stress that in the CbQ regime this condition is satisfied automatically.
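For completeness, condition (3.1) follows directly from requiring $N_{I}<N_{c}$ with the expressions quoted above:

```latex
\frac{3H_{I}^{2}}{g^{2}\Lambda^{2}} < \frac{2\pi^{2}}{3}\frac{M_{\mathrm{Pl}}^{2}}{H_{I}^{2}}
\;\Longleftrightarrow\;
g^{2}\Lambda^{2} > \frac{9\,H_{I}^{4}}{2\pi^{2}M_{\mathrm{Pl}}^{2}}
\;\Longleftrightarrow\;
g\Lambda > \frac{3H_{I}^{2}}{\sqrt{2}\,\pi M_{\mathrm{Pl}}}.
```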
The parameter $\Lambda_{b}$, by which we denote the barrier height at the
final local minimum corresponding to the correct Higgs vev, is determined from
the stopping condition. As derived in [10], inflationary fluctuations
allow for transitions of $\phi$ between neighbouring local minima and, in the
general case, the relaxion stops at a minimum for which the Hawking-Moss
transition rate to the next minimum is suppressed. We use
$B=\frac{8\pi^{2}\Delta V_{b}^{\rightarrow}}{3H_{I}^{4}}\sim 1$ (3.2)
as the stopping condition, where $\Delta V_{b}^{\rightarrow}$ denotes the
height of the barrier to the next minimum. We note that
* •
In the CbQ regime, this condition always implies $\Lambda_{b}^{4}\approx
g\Lambda^{3}f$. Here we do not consider the scenario of low Hubble friction by
requiring $H_{I}>\dot{\phi}_{\mathrm{SR}}/(2\pi f)$. This ensures that the
relaxion tracks the slow-roll velocity
$\dot{\phi}_{\mathrm{SR}}=g\Lambda^{3}/3H_{I}$ as it rolls down the potential.
If this is not the case, the field can stop in a much deeper minimum as well
as undergo fragmentation, as explained in [6].
* •
In the QbC regime we have separated a QbC I regime where the relation
$\Lambda_{b}^{4}\approx g\Lambda^{3}f$ still holds, and a QbC II regime in
which Eq. (3.2) implies a barrier height that is determined by the
inflationary Hubble scale, $\Lambda_{b}^{4}\approx(3/16\pi^{2})H_{I}^{4}$ or
$\Lambda_{b}\sim H_{I}$. The transition between the QbC I and the QbC II
regimes occurs approximately at $H_{I}^{4}=(16\pi^{2}/3)g\Lambda^{3}f$.
In principle, the stopping condition depends on the total number of e-folds $N_{I}$ of inflation: the larger $N_{I}$, the deeper the final minimum of the relaxion. This dependence is, however, only logarithmic and can hence be neglected. An upper bound
$\Lambda_{b}<\sqrt{4\pi}v_{h},$ (3.3)
is imposed to ensure that the barrier height is sensitive to the Higgs vev.
Figure 1: The relaxion parameter region in the $[m_{\phi},H_{I}]$ plane. The
brown lines show where the stochastic misalignment of the relaxion would
explain DM according to Eq. (2.8). The different lines correspond to different
reheating temperatures and values of the equation of state parameter between
the end of inflation and the end of reheating ($w=1/3$ for the solid line and
$w=0$ for the rest). The region hashed with blue vertical lines corresponds to
the CbQ regime and does not overlap with these brown lines. In contrast, the
relaxion in the QbC I (the region hashed with orange lines) and QbC II (the
region hashed with yellow lines) regimes has such an overlap, which is where
it can naturally explain DM.
Finally, it is convenient to introduce the dimensionless parameter $\delta$,
defined as
$\cos\delta=\frac{g\Lambda^{3}f}{\Lambda_{b}^{4}}.$ (3.4)
As explained in [10], the mass of the relaxion, the barrier height and the
separation between the minimum $\phi_{0}$ and the maximum of the barrier
$\phi_{b}$ are given respectively by
$m_{\phi}^{2}=\frac{\Lambda_{b}^{4}}{f^{2}}\times\sin\delta,\>\>\>\>\>\>\>\>\>\Delta
V_{b}^{\rightarrow}=2\Lambda_{b}^{4}\times[\sin\delta-\delta\cos\delta],\>\>\>\>\>\>\>\>\>\phi_{b}-\phi_{0}=2f\times\delta,$
(3.5)
where the $\delta$-dependent corrections are due to the linear slope. Near the
first local minimum $\delta\ll 1$ and, as a consequence, both the mass and the
barrier height are suppressed compared to the naive expectation (see also
[13]).
### 3.1 Insufficient dark matter in the CbQ regime
We start by demonstrating that the relic density from the stochastic relaxion
misalignment in the CbQ regime is too small to explain DM.
In the CbQ regime, $H_{I}<g^{1/3}\Lambda$ holds and the relaxion stops near
$\Lambda_{b}^{4}\approx g\Lambda^{3}f$. Assuming that $w\leq 1/3$ holds during
reheating, one can write for the relic density
$\frac{\Omega_{\phi,0}}{\Omega_{\mathrm{DM}}}\leq
20\Bigl{(}\frac{m_{\phi}}{\mathrm{eV}}\Bigr{)}^{-3/2}\Bigl{(}\frac{H_{I}}{100\mathrm{GeV}}\Bigr{)}^{4}<20\Bigl{(}\frac{m_{\phi}}{\mathrm{eV}}\Bigr{)}^{-3/2}\Bigl{(}\frac{\Lambda_{b}^{4/3}f^{-1/3}}{100\mathrm{GeV}}\Bigr{)}^{4}.$
(3.6)
Using $\sin\delta\geq\Lambda_{b}^{2}/(\Lambda\sqrt{-\mu_{h}^{2}})$ and
inserting the maximal value for the cut-off scale [10] one arrives at
$\frac{\Omega_{\phi,0}}{\Omega_{\mathrm{DM}}}<8.3\times
10^{-8}\Bigl{(}\frac{m_{\phi}}{\mathrm{eV}}\Bigr{)}^{-1/6}\Bigl{(}\frac{\Lambda_{b}}{\sqrt{4\pi}v_{h}}\Bigr{)}^{4/3}\Bigl{(}\frac{\Lambda}{4\times
10^{9}\mathrm{GeV}}\Bigr{)}^{2/3},$ (3.7)
and, even for the lightest possible fuzzy-DM masses, $m_{\phi}\sim
10^{-21}\,\mathrm{eV}$, the relic density does not exceed $\Omega_{\phi,0}\sim
10^{-4}$, including if we allow for super-Planckian decay constants. Note that
at those light masses an even stronger bound on the relic density follows from
the $\Lambda<f$ condition. To illustrate this, in figure 1 we show the
relaxion parameter region in the $H_{I}$ vs $m_{\phi}$ plane. The CbQ region,
shown in blue, indeed does not overlap with the
$\Omega_{\phi,0}={\Omega_{\mathrm{DM}}}$ lines, in contrast to the QbC I
(orange) and QbC II (yellow) regions that we discuss next.
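The numerical claim above can be checked by evaluating the bound of Eq. (3.7) at the lightest fuzzy-DM mass (function name `cbq_bound` and $\Omega_{\mathrm{DM}}\approx 0.24$ as quoted after Eq. (2.5); a minimal sketch):

```python
def cbq_bound(m_phi_ev, lam_b_frac=1.0, lam_frac=1.0):
    """Upper bound on Omega_phi / Omega_DM in the CbQ regime, Eq. (3.7).
    lam_b_frac = Lambda_b / (sqrt(4 pi) v_h), lam_frac = Lambda / (4e9 GeV)."""
    return 8.3e-8 * m_phi_ev ** (-1 / 6) * lam_b_frac ** (4 / 3) * lam_frac ** (2 / 3)

ratio = cbq_bound(1e-21)   # lightest fuzzy-DM mass, maximal Lambda_b and Lambda
omega_phi = 0.24 * ratio   # multiply by Omega_DM ~ 0.24
# omega_phi comes out around 6e-5, i.e. Omega_phi ~ 1e-4 at most, as stated above
```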
### 3.2 Dark matter from stochastic misalignment (QbC)
To obtain the bounds on the relaxion mass in the DM window we set
$\langle\Omega_{\phi,0}\rangle$ equal to $\Omega_{\mathrm{DM}}$ in Eq. (2.8).
Assuming $w=1/3$ one arrives at
$m_{\phi}\approx
10\,\mathrm{eV}\Bigl{(}\frac{H_{I}}{100\mathrm{GeV}}\Bigr{)}^{8/3}.$ (3.8)
If one instead uses $w=0$, the expression for the mass takes the form
$m_{\phi}\approx
0.4\,\mathrm{eV}\Bigl{(}\frac{H_{I}}{100\mathrm{GeV}}\Bigr{)}^{2}\Bigl{(}\frac{T_{\mathrm{rh}}}{100\mathrm{GeV}}\Bigr{)}^{1/2}\Bigl{(}\frac{g(T_{\mathrm{rh}})}{100}\Bigr{)}^{1/8}.$
(3.9)
For the upper bound we simply impose $H_{I}<100$ GeV for the inflationary
Hubble scale in the above expressions. Note that for the considered reheating
temperatures, the upper bound depends on physics before reheating, which is
consistent with the fact that the onset of oscillations for masses that do not
satisfy (2.2) is before reheating.
A lower bound on the mass of relaxion DM can be imposed by requiring (3.1) to
avoid eternal inflation. Here the stopping condition near the first minimum is
relevant and the $\delta$-dependent prefactor in the expression for the mass,
which is now expected to be small, should be included. Inserting everything
into the expression for the mass one can write
$m_{\phi}^{2}=\frac{\Lambda_{b}^{4}}{f^{2}}\sin\delta\approx\frac{\Lambda_{b}^{6}}{f^{2}\Lambda(-\mu_{h}^{2})^{1/2}}>\Bigl{(}\frac{3H_{I}^{2}}{\sqrt{2}\pi
M_{Pl}}\Bigr{)}^{3/2}\frac{\Lambda^{2}}{f^{1/2}(-\mu_{h}^{2})^{1/2}}.$ (3.10)
Rewriting this as an upper bound on $H_{I}$ and inserting into the expression
for the relic density with $\Omega_{\phi,0}\sim\Omega_{\mathrm{DM}}$ one
arrives at
$m_{\phi}>10^{-13}\mathrm{eV}\Bigl{(}\frac{\Lambda}{\mathrm{TeV}}\Bigr{)}^{\frac{16}{7}}\Bigl{(}\frac{M_{Pl}}{f}\Bigr{)}^{\frac{4}{7}}.$
(3.11)
For this lower bound the oscillations start after reheating, according to
(2.2).
To summarize, in a wide range of masses,
$\boxed{10^{-13}\Bigl{(}\frac{\Lambda}{\mathrm{TeV}}\Bigr{)}^{\frac{16}{7}}\Bigl{(}\frac{M_{Pl}}{f}\Bigr{)}^{\frac{4}{7}}<\frac{m_{\phi}}{\mathrm{eV}}<0.4\times
10^{4w}\Bigl{(}\frac{H_{I}}{100\mathrm{GeV}}\Bigr{)}^{2(1+w)}\Bigl{[}\frac{T_{\mathrm{rh}}}{100\mathrm{GeV}}\Bigl{(}\frac{g(T_{\mathrm{rh}})}{100}\Bigr{)}^{\frac{1}{4}}\Bigr{]}^{\frac{1-3w}{2}},}$
(3.12)
the relaxion can potentially constitute the DM. It is worth mentioning that
the DM window includes not only the regime of late stopping, QbC II, but also
the regime where the relaxion stops closer to the first minimum, QbC I. In the
second case, the typically small misalignment from the minimum is compensated
by the large value of $f$ (similar to the stochastic QCD axion scenario from
[8]). We note that the regions above/below the
$\langle\Omega_{\phi,0}\rangle=\Omega_{\mathrm{DM}}$ lines are not strictly
excluded: the relaxion would simply need, by chance, to sit very close to the
minimum/maximum of the potential to generate the required abundance.
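The edges of the window in Eq. (3.12) can be tabulated directly; the sketch below (our own helper names and default parameter choices) also checks that the upper edge reduces to Eqs. (3.9) and (3.8) in the $w=0$ and $w=1/3$ limits:

```python
def m_lower_ev(lam_tev=1.0, f_over_mpl=1.0):
    """Lower edge of the stochastic DM window, Eq. (3.11)."""
    return 1e-13 * lam_tev ** (16 / 7) * (1.0 / f_over_mpl) ** (4 / 7)

def m_upper_ev(H_I_gev=100.0, T_rh_gev=100.0, g_star=100.0, w=0.0):
    """Upper edge of the window, Eq. (3.12)."""
    return (0.4 * 10 ** (4 * w) * (H_I_gev / 100) ** (2 * (1 + w))
            * (T_rh_gev / 100 * (g_star / 100) ** 0.25) ** ((1 - 3 * w) / 2))

# w = 0 with reference values reproduces the 0.4 eV prefactor of Eq. (3.9);
# w = 1/3 gives 0.4 * 10^(4/3) ~ 9 eV, consistent with Eq. (3.8) up to rounding.
lo, hi = m_lower_ev(), m_upper_ev()
# For Lambda = 1 TeV, f = M_Pl and T_rh = 100 GeV the window spans
# roughly twelve decades in mass, from ~1e-13 eV to ~0.4 eV.
```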
Figure 2: The relaxion DM window in the $[m_{\phi},\sin\theta_{h\phi}]$ (top)
and $[m_{\phi},1/f]$ (bottom) planes. The brown shaded regions correspond to
the stochastic window in the QbC regime with $T_{\mathrm{rh}}<T_{b}$. Here
different lines correspond to different values of the equations of state
parameter during reheating and different values of the reheating temperature.
The grey region shows the DM window from roll-on for $T_{\mathrm{rh}}\gg
T_{b}$, which was proposed in [13] for the CbQ regime, and extended here for
the QbC case. The stochastic window in the QbC regime for $T_{\mathrm{rh}}\gg
T_{b}$ is enclosed by the black solid line. The constraints from fifth force
experiments [24] (navy), stellar cooling (purple) of the sun, RGs and WDs via
bremsstrahlung [25] and via resonant production [26] (labeled as “res”), as
well as from black hole superradiance [27] (pink) are shown for the DM window.
We plot the parameter space where it is possible for the relaxion to generate
the correct DM abundance in Figs. 2 and 3, in four different planes. In figure
2 the parameter region is shown in the $\sin\theta_{h\phi}$ vs $m_{\phi}$
plane (upper panel), as well as in the $f^{-1}$ vs $m_{\phi}$ plane (lower
panel). Here, the following definition of the mixing angle is used (see also
[28, 10]),
$\sin\theta_{h\phi}\approx-\frac{1}{m_{h}^{2}}\frac{\partial^{2}V}{\partial
h\partial\phi}\Big{|}_{v_{h},\phi_{\mathrm{0}}},$ (3.13)
where $V(h,\phi)$ is given in Eq. (1.1). The DM window is highlighted in
brown. We use the same choices of $w$ and $T_{\mathrm{rh}}$ as in figure 1.
Constraints arising from fifth force experiments, including inverse-square-law
and equivalence-principle tests [29, 30, 31, 32, 33, 34, 35], are shown in
navy, while the stellar cooling bounds for the sun, red giants (RGs) and white
dwarfs (WDs) [26, 36, 25] are shown in purple. We show both the bounds from
[26], which focuses on resonant production from the plasma effects, as well as
the limits from [25], which considers the bremsstrahlung production finding
much stronger bounds on the mixing angle. In the limits from [25] the details
of the stellar profile are taken into account for the sun, while in the case
of RGs and WDs constant profiles are used due to the larger uncertainties.
These limits are highlighted in a lighter shade. Both the fifth force
constraints and the stellar bounds are based on the mixing angle with the
Higgs. They are also projected in the remaining plots of both figures. In
other words, a point inside the DM window is marked as excluded if it is
excluded for any choice of the remaining free parameters, including the choice
of the reheating temperature for $w=0$, that can explain DM. Similarly, we
include the black hole superradiance bounds from [27], which exclude the
$10^{-12}\,\mathrm{eV}\lesssim m_{\phi}\lesssim 10^{-11}\,\mathrm{eV}$ masses
for the relaxion DM. (ALP DM with masses in the range
$10^{-15}\,\mathrm{eV}\lesssim m_{a}\lesssim 10^{-13}\,\mathrm{eV}$ may leave
unique signatures in direct detection experiments [37]; in the case of
relaxion DM, such a mass range requires either $f>M_{\mathrm{Pl}}$ or eternal
inflation.) The dashed line in the $f^{-1}$ vs $m_{\phi}$ plane of Fig. 2 corresponds
to the standard ALP DM window with $\theta\sim 1$, which can be computed using
Eq. (2.4), setting $\phi^{2}\sim f^{2}$.
In Fig. 3, the parameter region is shown in the $g$ vs $\Lambda$ (upper panel)
and the $H_{I}$ vs $f$ (lower panel) planes. As can be inferred from the upper
panel, the DM relaxion can accommodate cut-off scales as large as almost $10^{7}$
GeV. However, taking into account the constraints from fifth force experiments
and from astrophysics, the cut-off can reach up to around
$10^{5}\,\mathrm{GeV}$. Note that the decay constant for such a relaxion can be as
small as $10^{11}-10^{14}\,\mathrm{GeV}$ and the Hubble scale during inflation
at most $100$ GeV. We also find that $\Lambda_{b}$ can take values in the
range $1\mbox{ GeV}<\Lambda_{b}<\sqrt{4\pi}v_{h}$ in the DM window. Below the
dashed line in the $g$ vs $\Lambda$ plane the typical relaxion excursion
$\Delta\phi$ during inflation exceeds the Planck scale. As can be seen, such
superplanckian field excursions can be avoided in the upper region of the
parameter space for our relaxion DM window.
In the upper right region of the $H_{I}$ vs $f$ plane in figure 3, above the
DM window, the relaxion overproduces DM, hence, this region is excluded. In
contrast, there are no regions in the remaining three panels of figures 2 and
3 that are excluded by overproduction for any choice of the remaining free
parameters.
Figure 3: The relaxion DM windows (in brown and grey), in the $[\Lambda,g]$
(top) and the $[f,H_{I}]$ (bottom) planes, complementing figure 2.
Bounds on isocurvature fluctuations: The relaxion picks up Hubble-sized
isocurvature fluctuations, $\delta\phi\sim H_{I}$, during inflation. These
perturbations are uncorrelated with the adiabatic ones and modify the
temperature power spectrum of the cosmic microwave background (CMB). So far,
they have not been observed by Planck, and, therefore, are tightly
constrained. We use the bound from [38],
$\frac{H_{I}}{\mathrm{GeV}}<0.3\times
10^{7}\frac{\phi}{10^{11}\mathrm{GeV}}\Bigl{(}\frac{\Omega_{DM}}{\Omega_{\phi,0}}\Bigr{)},$
(3.14)
which approximates the potential as quadratic. This approximation is
justified in our case since the relaxion can get trapped only in a long-lived
minimum, which necessarily has $\sigma_{\phi}<f$, so that the anharmonicities
of the potential are unimportant.
The isocurvature bound, computed using Eq. (3.14), is shown in Fig. 1. As can
be seen, it is weaker than the bound from overproduction.
## 4 Relaxion dark matter for $T_{\mathrm{rh}}\gg T_{b}$
We now move to the case of high reheating temperatures, $T_{\mathrm{rh}}\gg
T_{b}$. The barriers temporarily disappear in this case and the relaxion can
roll down further along its potential during that time interval. Here one has
to make sure that the field gets trapped again once the barriers are back. The
displacement was computed in [13], and in section 4.1 we revisit the
computation generalizing it to the QbC regime. As it was seen in the previous
section, the stochastic misalignment of the relaxion cannot explain the
observed DM abundance in the universe in the CbQ regime. This is however not
true when $T_{\mathrm{rh}}\gg T_{b}$. In the latter case, the additional
displacement of the relaxion can itself generate the required misalignment to
explain DM, as found in [13]. We discuss this DM window in section 4.2,
first in the CbQ regime and then extend it to the QbC regime. Finally, in
section 4.3, we construct the stochastic DM window for the case of high
reheating temperatures, where we require that the displacement during
reheating is much smaller compared to the stochastic misalignment.
### 4.1 The displacement (roll-on) after inflation
#### CbQ and QbC I
The displacement of the relaxion after its barriers disappear was studied in
[13] for the CbQ regime. The field evolution can be studied by solving the
field equations of motion in the radiation-dominated universe,
$\ddot{\phi}+\frac{3}{2t}\dot{\phi}-g\Lambda^{3}+C(T)\frac{\Lambda_{b}^{4}}{f}\sin\Bigl{(}\frac{\phi}{f}\Bigr{)}=0.$
(4.1)
The function $C(T)$ encodes the temperature-dependence of the barriers of the
potential. For simplicity here it is taken to be a step function,
$C(T(t))=\theta(T_{b}/T(t)-1)$, implying that the barriers reappear
instantaneously at $T_{b}$.
We first discuss the displacement at $t<t_{b}$, assuming that the relaxion
starts at rest in its local minimum $\phi_{0}$ at $t=t_{\mathrm{rh}}$. For
$C=0$ the solution to the above equation is given by
$\dot{\phi}(t)=\frac{2}{5}g\Lambda^{3}t[1-(t_{\mathrm{rh}}/t)^{5/2}]$. If
$T_{\mathrm{rh}}\gg T_{b}$, one can approximate
$\dot{\phi}(t_{b})\approx\frac{2}{5}g\Lambda^{3}t_{b}$ and, therefore,
$\phi(t_{b})-\phi_{0}\approx\frac{1}{5}g\Lambda^{3}t_{b}^{2}$. It is
convenient to introduce a dimensionless parameter
$\chi=\frac{g\Lambda^{3}t_{b}^{2}}{f},$ (4.2)
which depends on the slope $g\Lambda^{3}$, the decay constant and the time
$t_{b}$ at which the barriers reappear. It characterizes the typical
displacement in units of $f$ during the time when the barriers are absent,
$\Delta\phi=\chi f/5$.
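The $C=0$ solution quoted above can be verified numerically. The following dimensionless sketch (our own construction: $g\Lambda^{3}=1$, $t_{\mathrm{rh}}=1$, plain fixed-step RK4, stdlib only) integrates Eq. (4.1) without barriers and compares with $\dot{\phi}(t)=\frac{2}{5}g\Lambda^{3}t[1-(t_{\mathrm{rh}}/t)^{5/2}]$ and $\Delta\phi\approx\frac{1}{5}g\Lambda^{3}t_{b}^{2}$:

```python
# Dimensionless check of the C = 0 roll-on solution: with g Lambda^3 = 1 and
# t_rh = 1, the equation phi'' + (3/(2t)) phi' = 1 should give
# phi'(t) = (2/5) t [1 - t^(-5/2)] and phi(t_b) ~ t_b^2 / 5 for t_b >> t_rh.
def rhs(t, phi, v):
    return v, 1.0 - 1.5 * v / t  # (phi', phi'')

def integrate(t0, t1, n=20000):
    h = (t1 - t0) / n
    t, phi, v = t0, 0.0, 0.0  # field starts at rest in its local minimum
    for _ in range(n):
        # classical fourth-order Runge-Kutta step
        k1 = rhs(t, phi, v)
        k2 = rhs(t + h / 2, phi + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = rhs(t + h / 2, phi + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = rhs(t + h, phi + h * k3[0], v + h * k3[1])
        phi += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return phi, v

phi_b, v_b = integrate(1.0, 10.0)
assert abs(v_b / (0.4 * 10.0 * (1 - 10.0 ** -2.5)) - 1) < 1e-3
assert abs(phi_b / (10.0 ** 2 / 5) - 1) < 0.05  # approaches t_b^2/5 as t_b grows
```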
Next, we consider the evolution at $t>t_{b}$, after the barriers reappear. The
average slope of the potential from the minimum to the maximum can be
estimated as
$\frac{\delta V}{\delta\phi}\approx\frac{\Delta
V_{b}^{\rightarrow}}{\phi_{b}-\phi_{0}}=\frac{2\Lambda_{b}^{4}[\sin\delta-\delta\cos\delta]}{2\delta
f}=g\Lambda^{3}\Bigl{[}\frac{\tan\delta}{\delta}-1\Bigr{]}.$ (4.3)
For $\delta\ll 1$, which always holds in the CbQ regime, the sum of the last
two terms in Eq. (4.1) is then approximately $\delta V/\delta\phi\approx
g\Lambda^{3}\delta^{2}/3$, which is much smaller compared to each of the first
two terms in Eq. (4.1) at $t=t_{b}$. The solution to the equation of motion
ignoring the potential terms has the form
$\phi(t)-\phi_{0}=g\Lambda^{3}t_{b}^{2}[1-\frac{4}{5}\sqrt{t_{b}/t}]$ or,
equivalently, $\dot{\phi}\propto a^{-3}$ for $t>t_{b}$.
Combining everything, the total displacement can be expressed as
$\Delta\phi\approx\chi
f=\frac{g\Lambda^{3}}{4H_{b}^{2}}\>\>\>\>\>\>\>\>\>\>\>\>(\text{total
displacement, }\>\delta\ll 1).$ (4.4)
In order to get trapped, the displacement of the relaxion has to be less than
the distance to the next maximum,
$\Delta\phi<2\delta f.$ (4.5)
If this is not the case, the relaxion would run away, since its acceleration
in the regions after a maximum and before the next minimum is not compensated
by the deceleration in the much narrower regions from a minimum to the next
maximum.
The onset of oscillations is at $H_{\mathrm{osc}}=m_{\phi}/3$, where
$m_{\phi}^{2}=({\Lambda_{b}^{4}}/{f^{2}})\sin\delta=({g\Lambda^{3}}/{f})\tan\delta$.
In the case when the relaxion gets re-trapped, the onset of oscillations is
much later compared to the reappearance of the barriers at $H_{b}$, which
follows directly from Eq. (4.5) for $\delta\ll 1$. The field thus remains
frozen at the displacement determined by Eq. (4.4) until eventually it starts
oscillating around the minimum.
The above discussion of the dynamics at $t>t_{b}$ assumes the relation
$\Lambda_{b}^{4}\approx g\Lambda^{3}f$ or, equivalently, $\delta\ll 1$ for the
local minimum of the relaxion. This condition is however not satisfied in the
QbC II regime. We now generalize the discussion to this case with
$\tan\delta\gg 1$, splitting it into several parts.
#### QbC II
* •
$\chi<9/(4\tan\delta)$: In this case, the displacement of the relaxion at
$t_{b}$ is much smaller compared to the distance to the next maximum. Within
the harmonic approximation the equation of motion can be written as
$\ddot{\phi}+\frac{3}{2t}\dot{\phi}+m_{\phi}^{2}\phi=0$. In this regime
$H_{\mathrm{osc}}<H_{b}$ holds and the mass term in the equation is small (at
least at $t_{b}$) compared to the other two terms. The solution is hence
similar to the one from [13] with $\dot{\phi}\propto a^{-3}$ and the total
displacement $\Delta\phi$ approximately given by Eq. (4.4). The relaxion gets
trapped in the same minimum.
* •
$9/(4\tan\delta)<\chi<10\delta$: In this case, $H_{\mathrm{osc}}>H_{b}$ and,
therefore, the oscillations start directly once the barriers are back at
$t_{b}$. The friction term is subdominant already at $t_{b}$ and the
oscillation amplitude decreases as $a^{-3/2}$. The kinetic energy at $t_{b}$
is suppressed compared to the potential energy. Consequently, the maximal
displacement can be estimated as
$\Delta\phi\approx\frac{\chi
f}{5}=\frac{g\Lambda^{3}}{20H_{b}^{2}}\>\>\>\>\>\>\>\>\>\>\>\>(\text{total
displacement, }\>\delta\gg 1).$ (4.6)
The upper bound on $\chi$ ensures that $\Delta\phi<2\delta f$ and the relaxion
again does not overshoot to the next local minimum.
* •
$10\delta<\chi<25/\cos\delta$: Here the relaxion finds itself in a local
minimum different from the original one at $t_{b}$. Importantly, it is not
guaranteed that the relaxion gets trapped in that minimum afterwards. Whether
this happens or not depends on the value of $\chi$, i.e. whether the energy
density at $t_{b}$ exceeds the barrier height. We generally expect an
$\mathcal{O}(1)$ misalignment from the local minimum with the oscillations
starting directly once the barriers are back.
* •
$\chi>25/\cos\delta$: In this case, the kinetic energy of the relaxion is
always enough to overcome the next barrier, even if it stops exactly at a
local minimum at $t_{b}$. As a result, the relaxion keeps overshooting and
rolling down.
The requirement for the relaxion to get re-trapped puts additional constraints
on the parameter space compared to the low-temperature reheating scenario. In
particular, it entirely excludes the QbC II regime. Indeed, the condition
$\chi<25/\cos\delta$ to get re-trapped can be re-formulated as a lower bound
on $H_{b}$ and, requiring non-eternal inflation according to Eq. (3.1) and
using the stopping condition of QbC II, one arrives at
$H_{b}>\frac{g\Lambda^{3}}{10\Lambda_{b}^{2}}>2\sqrt{6}\frac{\Lambda^{2}}{M_{\mathrm{Pl}}}.$
(4.7)
Even for a cut-off scale of $\Lambda=\mathrm{TeV}$ this exceeds the Hubble
scale corresponding to a temperature $T_{b}=100$ GeV. It is therefore
impossible for the relaxion to get re-trapped in this regime.
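The comparison stated above can be checked numerically (a minimal sketch; the non-reduced Planck mass $M_{Pl}=1.22\times 10^{19}$ GeV and the prefactor $1.66\sqrt{g}$ for the radiation-era Hubble rate are our assumed conventions, consistent with Eq. (2.1)):

```python
import math

M_PL = 1.22e19  # GeV; assumed (non-reduced) Planck mass

lam = 1e3  # cut-off Lambda = 1 TeV, the most favourable case
h_b_min = 2 * math.sqrt(6) * lam ** 2 / M_PL             # r.h.s. of Eq. (4.7)
h_at_100gev = 1.66 * math.sqrt(100) * 100.0 ** 2 / M_PL  # Hubble rate at T = 100 GeV, g = 100

# h_b_min exceeds the Hubble rate at T_b = 100 GeV by a factor ~30,
# so the re-trapping condition of Eq. (4.7) cannot be met in the QbC II regime
assert h_b_min > h_at_100gev
```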
In the CbQ and QbC I regimes the condition to get trapped reduces the
available parameter region as compared to the $T_{\mathrm{rh}}<T_{b}$ case.
The condition $\chi<2\delta$ can be expressed as an upper bound on $g$.
Setting $f=M_{\mathrm{Pl}}$ and assuming that the relaxion stops at the first
minimum, which implies
$\delta={\Lambda_{b}^{2}}/({\Lambda\sqrt{-\mu_{h}^{2}}})$, one arrives at
$g<64(8\pi^{3})^{2}\frac{T_{b}^{8}}{\Lambda^{5}M_{\mathrm{Pl}}\mu_{h}^{2}}\qquad\text{and}\qquad g<8(8\pi^{3})\frac{T_{b}^{6}}{\Lambda^{4}M_{\mathrm{Pl}}\mu_{h}},$
(4.8)
where $\Lambda_{b}^{4}\approx g\Lambda^{3}f$ and $\Lambda_{b}<T_{b}$ were used,
respectively.
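As an illustration, the two upper bounds in Eq. (4.8) can be evaluated at a hypothetical benchmark point. All inputs below are assumptions for the sake of the estimate (in particular $\mu_{h}\sim 100$ GeV and the Planck-mass value), not numbers quoted in the text.

```python
import math

# Illustrative evaluation of the two upper bounds on g in Eq. (4.8)
# at an assumed benchmark point: T_b = 100 GeV, Lambda = 1 TeV, mu_h ~ 100 GeV.
M_PL = 2.4e18     # reduced Planck mass in GeV (assumed)
T_b = 100.0       # barrier reappearance temperature in GeV
Lam = 1e3         # cutoff in GeV
mu_h = 100.0      # Higgs mass parameter in GeV (assumed benchmark)

g1 = 64 * (8 * math.pi**3)**2 * T_b**8 / (Lam**5 * M_PL * mu_h**2)
g2 = 8 * (8 * math.pi**3) * T_b**6 / (Lam**4 * M_PL * mu_h)

print(f"g < {g1:.1e} and g < {g2:.1e}  ->  g < {min(g1, g2):.1e}")
```

At this benchmark the second bound is the stronger one, illustrating how small the slope parameter $g$ must be for the relaxion to get re-trapped.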
Figure 4: The relaxion parameter region in the $[\Lambda,g]$ plane for the CbQ
(left) and QbC (right) regimes for small (top) and large (bottom) reheating
temperatures. Constraints from meson decays, stellar cooling (via resonant
production, labeled as “res”, and via bremsstrahlung), late decays
($1s<\tau_{\phi}<10^{26}s$), black hole superradiance and density-induced
runaway (in NSs) are incorporated. The region where the relaxion can explain
DM is inside the black dashed lines, where also the contours of
$\log_{10}(H_{I,\mathrm{max}})$ are shown. In the low-temperature reheating
scenario $w=0$ before reheating is assumed. The laboratory and the
astrophysical constraints under the additional assumption that the relaxion
explains DM are not shown here and can be found in the upper panel of Fig. 3.
The viable parameter region for the relaxion is shown in white in Fig. 4 in the
$[\Lambda,g]$ plane. The CbQ (left) and QbC (right) cases are separated, as
well as the low-temperature (top) and the high-temperature (bottom) reheating
scenarios. In the $T_{\mathrm{rh}}\gg T_{b}$ case, the red region, marked
“destabilization”, is excluded by the constraints from Eq. (4.8) for any
$T_{b}<100$ GeV. Shaded are also the regions excluded by proton beam dump and
accelerator experiments testing meson decays [39, 40, 41, 42, 43, 44] (grey),
stellar cooling bounds [26, 36, 25] (purple), cosmological bounds on late
relaxion decays via the Higgs mixing for lifetimes of
$1s<\tau_{\phi}<10^{26}s$ [24], black hole superradiance [27] (pink) and
runaway in stars induced by finite-density effects [45] (cyan). More details
about the various constraints can be found in [10]. We emphasize that a point
is marked as excluded only if it is excluded for any choice of the remaining
free parameters ($f$, $H_{I}$, $T_{b}$ and $T_{\mathrm{rh}}$) that is allowed
in a given scenario. In particular, this is the reason why the superradiance
and the density-induced runaway constraints are stronger in the
$T_{\mathrm{rh}}\gg T_{b}$ scenario, as this scenario restricts the parameter
region to smaller masses and, thus, larger decay constants where the
constraints are stronger. We do not show the projected supernova constraints
[46, 47, 25], as that region is essentially covered by stellar bounds and late
decays. Inside the black dashed lines, the relaxion can also explain the DM.
We show the contours of the maximal value of the Hubble scale during inflation
in this region. We discuss the DM window in the $T_{\mathrm{rh}}\gg T_{b}$
scenario in the next two subsections.
### 4.2 Dark matter from roll-on
As shown above, for small enough $\chi$, the relaxion gets re-trapped after
reheating. In this section, we construct the DM window for the roll-on
misalignment.
The CbQ regime: To reconstruct the window for the CbQ case, we impose the
condition from Eq. (4.5) for the relaxion to get re-trapped, as was done in
[13]. Here the displacement is determined by Eq. (4.4). We also require
$T_{b}<100\,\mathrm{GeV}$ for the barrier reappearance temperature. Since the
field enters the oscillatory phase well after the barriers reappear, i.e.
$H_{\mathrm{osc}}<H_{b}$, the energy density in the oscillations today can be
computed using the standard formula from Eq. (2.4), inserting (4.4) for the
field value.
The resulting DM window is shown in grey in the plots of Figs. 2 and 3. As
can be seen, this window covers different regions of the parameter
space as compared to the stochastic window from section 3.
It can be checked that in the parameter region where the above-described
scenario can explain DM, the relaxion always gets trapped in the first local
minimum, even for the largest available value of the Hubble scale during
inflation, $H_{I}^{3}=g\Lambda^{3}$ [13]. As a consequence, the DM window is
determined by the values of $g$, $\Lambda$ and $f$ and by physics after
inflation, while the value of $H_{I}$ is irrelevant.
The QbC regime: Larger values of inflationary Hubble scales $H_{I}$ are
available in the QbC regime. It is thus important to find the additional
parameter region for the DM window that opens up when the CbQ condition is
dropped.
For reasons explained in the previous section, we consider only the case when
the field stops at $\Lambda_{b}^{4}\approx g\Lambda^{3}f$ and $\delta\ll 1$,
i.e. the QbC I regime. Increasing $H_{I}$ also increases the stochastic
misalignment, which can be computed using Eq. (2.5). To ensure that the
stochastic DM component is subdominant, we require that the typical
misalignment from Eq. (1.3) is much smaller than the roll-on displacement
$\Delta\phi$ from Eq. (4.4). This sets a new upper bound on $H_{I}$. The
new available parameter region is shown in the lower panel of Fig. 3, as the
grey shaded region marked “QbC”. As expected, it lies above the “CbQ”
region and even has some overlap with it. We have checked that the relaxion
also gets trapped in the first minimum inside the additional QbC window. This
explains why the parameter region remains unchanged in the remaining three
plots of Figs. 2 and 3.
### 4.3 Dark matter from stochastic misalignment (QbC)
We now return to the stochastic misalignment and demonstrate that a DM window
is possible for high reheating temperatures. Here the additional requirement
is that the roll-on displacement after inflation is smaller than the
stochastic misalignment. In other words, the temperature $T_{b}$, for which
the correct relic density from roll-on would be generated, should be less than
100 GeV since, otherwise, the roll-on contribution would always dominate. Note
that this ensures that the relaxion gets re-trapped and that it does so
without changing its local minimum. We also require $T_{b}>\Lambda_{b}$.
The expression for the relic density from Eq. (2.4) should in principle be
modified to cover the case when the barriers appear after $t_{\mathrm{osc}}$
and the onset of oscillations is delayed,
$\frac{\langle\Omega_{\phi,0}\rangle}{\Omega_{\mathrm{DM}}}\approx
5\sqrt{\frac{m_{\phi}}{\mathrm{eV}}}\Bigl(\frac{\sigma_{\phi}}{10^{12}\,\mathrm{GeV}}\Bigr)^{2}\mathcal{F}(T_{\mathrm{osc}})\times\max\Bigl\{1,\Bigl(\frac{m_{\phi}}{3H_{b}}\Bigr)^{3/2}\Bigr\}.$
(4.9)
As explained in section 4.1, requiring non-eternal inflation implies that the
relaxion can get re-trapped only in the QbC I regime, where $\delta\ll 1$
holds. In that case, the barriers necessarily re-appear before the
oscillations start, so their onset is not delayed.
With all formulas at hand, the new DM window can be constructed. It is shown
in figures 2 and 3 using black solid lines. Compared to the
$T_{\mathrm{rh}}<T_{b}$ case, the parameter region has shrunk. In particular,
to derive the new upper bound on the mass for high reheating temperatures, the
additional requirement of getting re-trapped should be imposed. The condition
(4.5) can be re-expressed as $m_{\phi}<2\sqrt{2}H_{b}\delta$. The upper bound
on $\delta$ can be obtained from the stopping condition from Eq. (3.2),
$\delta^{3}\approx(9H_{I}^{4})/{(16\pi^{2}\Lambda_{b}^{4})}$, inserting
$\Lambda_{b}^{4}=g\Lambda^{3}f$, the upper bound on $H_{I}$ from (3.1) for
non-eternal inflation and the condition from (4.5). One then arrives at the
following mass range for relaxion DM
$\boxed{10^{-13}\Bigl(\frac{\Lambda}{\mathrm{TeV}}\Bigr)^{\frac{16}{7}}\Bigl(\frac{M_{\mathrm{Pl}}}{f}\Bigr)^{\frac{4}{7}}<\frac{m_{\phi}}{\mathrm{eV}}<10^{-5}\Bigl(\frac{g(T_{b})}{100}\Bigr)\Bigl(\frac{T_{b}}{100\,\mathrm{GeV}}\Bigr)^{4}\Bigl(\frac{\mathrm{TeV}}{\Lambda}\Bigr)^{2},}$
(4.10)
in the regime of high reheating temperatures. This new range of masses is
still wider than in the case of roll-on misalignment, as can be seen
in Fig. 2. The cut-off scale can be raised to at most $\Lambda\approx
80\mathrm{TeV}$. The DM windows for the different scenarios are summarized in
Fig. 4, where they are shown in black dashed lines.
## 5 Thermal production of the relaxion
In addition to the misalignment mechanism, relaxions (as well as general
ALPs) can be produced in thermal equilibrium by various processes in the
early universe. Here, we consider processes that involve only the couplings of
the relaxion due to the Higgs mixing [15, 24]. It turns out that, in the
regime where the stochastic misalignment of the relaxion can explain DM, the
thermal production is smaller. It is thus only relevant for heavier relaxions.
We sketch the relevant formulas below, following [24].
At temperatures below the electroweak scale, relaxions can be produced via the
Primakoff process, $q(g)+g\rightarrow q(g)+\phi$, and the Compton
photoproduction process, $q+g\rightarrow q+\phi$. Here, the first process
involves the effective coupling of the Higgs to gluons, while the second one
involves the Higgs coupling to quarks. The corresponding production rates are
given by [48, 49]
$\Gamma_{\mathrm{Primakoff}}=0.3\frac{\alpha_{s}^{3}\sin^{2}\theta_{h\phi}T^{3}}{\pi^{2}v_{h}^{2}},\qquad\Gamma_{\mathrm{Compton}}=\frac{\alpha_{s}\sin^{2}\theta_{h\phi}T\sum_{f}m^{2}_{f}}{\pi^{2}v_{h}^{2}},$
(5.1)
where in the second term the dominant contribution comes from the $c$ and $b$
quarks.
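The relative size of the two rates in Eq. (5.1) is easy to estimate: the mixing angle cancels in their ratio, which then depends only on $\alpha_{s}$, $T$ and the quark masses. The values of $\alpha_{s}$ and the quark masses below are assumed benchmarks, not inputs quoted in the text.

```python
import math

# Compare the two production rates in Eq. (5.1); sin^2(theta) cancels
# in the ratio.  alpha_s and quark masses are assumed benchmark values.
ALPHA_S = 0.1                    # strong coupling at the relevant scale (assumed)
M_C, M_B = 1.27, 4.18            # charm and bottom masses in GeV (PDG-like)
V_H = 246.0                      # Higgs vev in GeV

def gamma_primakoff(T, s2):      # s2 = sin^2(theta_h_phi)
    return 0.3 * ALPHA_S**3 * s2 * T**3 / (math.pi**2 * V_H**2)

def gamma_compton(T, s2):
    return ALPHA_S * s2 * T * (M_C**2 + M_B**2) / (math.pi**2 * V_H**2)

for T in (10.0, 100.0):
    r = gamma_primakoff(T, 1.0) / gamma_compton(T, 1.0)
    print(f"T = {T:5.0f} GeV: Gamma_Primakoff / Gamma_Compton = {r:.2f}")
```

With these assumed inputs, Compton photoproduction dominates at temperatures of a few tens of GeV, while the Primakoff rate takes over close to the electroweak scale.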
With the knowledge of the total rate,
$\Gamma\approx\Gamma_{\mathrm{Primakoff}}+\Gamma_{\mathrm{Compton}}$, the
abundance of thermally produced relaxions can be computed by solving the
Boltzmann equation. The solution is [24]
$Y_{\mathrm{thermal}}\approx 0.003\Bigl[1-\exp(-9\times
10^{11}\sin^{2}\theta_{h\phi})\Bigr],$ (5.2)
which is approximately $Y_{\mathrm{thermal}}\approx 0.003$ for
$\sin\theta_{h\phi}\gtrsim 10^{-6}$ and $Y_{\mathrm{thermal}}\approx 2.9\times
10^{9}\sin^{2}\theta_{h\phi}$ for $\sin\theta_{h\phi}\lesssim 10^{-6}$. Here
$Y$ denotes the yield, or comoving number density.
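The two limiting regimes of Eq. (5.2) can be checked directly; in this sketch the small-mixing slope evaluates to $\approx 2.7\times 10^{9}$, in line with the quoted $\approx 2.9\times 10^{9}$ up to rounding of the prefactors.

```python
import math

# Check the two limiting regimes of the thermal yield, Eq. (5.2).
def y_thermal(sin_theta):
    return 0.003 * (1.0 - math.exp(-9e11 * sin_theta**2))

# Large mixing: the exponential saturates and Y -> 0.003.
y_large = y_thermal(1e-4)

# Small mixing: Y is proportional to sin^2(theta),
# with slope 0.003 * 9e11 ~ 2.7e9.
s = 1e-9
slope = y_thermal(s) / s**2

print(f"Y(large mixing) = {y_large:.4f}, small-mixing slope = {slope:.2e}")
```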
From the yield, the relic density can be computed using the following formula,
$\rho_{\phi,0}=Ym_{\phi}s(T_{0})=0.14g_{s}(T_{0})T_{0}^{3}Ym_{\phi},$ (5.3)
and the fractional energy density has the form
$\frac{\Omega_{\phi,0}}{\Omega_{\mathrm{DM}}}=0.75\Bigl(\frac{m_{\phi}}{\mathrm{eV}}\Bigr)Y_{\mathrm{thermal}}=2.4\times
10^{-3}\Bigl(\frac{m_{\phi}}{\mathrm{eV}}\Bigr)\Bigl[1-\exp(-9\times
10^{11}\sin^{2}\theta_{h\phi})\Bigr].$ (5.4)
Comparing this thermal production with the one from the stochastic
misalignment, we find that in the stochastic relaxion DM window, the thermal
population is always negligible. Moreover, there is no DM window from the
thermal production itself (as also mentioned in [24]) in the region which is
not excluded by astrophysical or laboratory probes.
Thermal production is still relevant to constrain the relaxion parameter space
in the region where the relaxion is cosmologically unstable. Depending on the
mass of the relaxion, its main decay channels via the Higgs mixing are into a
pair of photons, leptons or mesons. The lifetime $\tau_{\phi}$ can be computed
as in [50, 10]. In the region with $1s<\tau_{\phi}<10^{26}s$, which is where
the thermal production typically dominates, a number of cosmological
constraints apply to the relaxion [15, 24]. These include the bounds on the
baryon-to-photon ratio and on the effective number of neutrino species due to
entropy injection, constraints from big bang nucleosynthesis, and distortions
of the CMB and of the extragalactic background light. In Fig. 4 these
constraints, taken from [24], are projected into the parameter region of the
relaxion in the $[\Lambda,g]$ plane.
## 6 The case of the QCD relaxion
The stochastic misalignment of the relaxion in the QbC regime can explain the
DM abundance in the universe. Importantly, this does not require eternal
inflation and thus avoids the associated measure problems [12]. This is not
true, however, if the relaxion is identified with the QCD axion, since that
case requires eternal inflation, as shown in [10]. Nevertheless, we still
show in this section how the QCD relaxion can be the DM, overlooking the
eternal inflation issue. In the original proposal [1], the QCD axion can be
the relaxion in the CbQ regime if a change of the slope of the potential after
inflation can be engineered, but the cutoff scale is limited to ${\cal O}$(30)
TeV. The corresponding region of parameter space is shown in the upper left
plot of figure 5 (see also [10]). In the following, we show that the QCD axion
can be the relaxion up to large cutoff scales and constitute DM. We review
both the low and high reheating temperature cases. The DM discussion is
essentially the same in the CbQ and QbC regimes, only the corresponding
regions of parameter space are different.
Figure 5: Upper row: the QCD relaxion DM window in the $[\Lambda,g]$ (left)
and in the $[1/f,m_{\phi}]$ (right) planes. In the left plot, the contours of
the minimal value of $\theta_{\mathrm{QCD}}$ are shown inside the region where
the relaxion can constitute the totality of DM. In the right plot, the current
and projected sensitivities of haloscope experiments are shown, assuming a
KSVZ axion model. Different benchmark cases are displayed along the QCD line.
Lower row: a schematic illustration of the different value of
$\theta_{\mathrm{QCD}}$, determined by the linear slope, in the QCD relaxion
model (red), compared to the standard QCD axion case (black) which predicts
$\theta_{\mathrm{QCD}}\lesssim 10^{-17}$ (see e.g. [51]). The first panel
depicts the decaying oscillations of $\theta_{\mathrm{QCD}}$ while the
remaining two panels illustrate the potential energy for both cases.
### 6.1 QCD relaxion dark matter for $T_{\mathrm{rh}}<T_{b}$
The energy density due to the stochastic misalignment can be estimated using
the formulas from section 2 where, for the QCD relaxion, we require that the
reheating temperature does not exceed
$T_{b}=\Lambda_{\mathrm{QCD}}\approx 150$ MeV. The field is typically misaligned by $\Delta\phi\sim f$ from its
local minimum, which follows directly from the modified stopping condition
$H_{I}\sim\Lambda_{b}$ [10]. This is also true for the model of [1] where,
although $H_{I}$ is much smaller, the change of the slope of the potential
after inflation results in $\Delta\phi\approx(\pi/2)f$. The final slope in
both cases should generate only a small CP violating $\theta$-angle,
$\sin(\theta_{QCD})=\frac{g\Lambda^{3}f}{\Lambda_{b}^{4}}<10^{-10},$ (6.1)
where $\Lambda_{b}\approx 75$ MeV. For known values of $T_{\mathrm{rh}}$ and
the equation of state of the universe $w$ between inflation and reheating, the
relaxion energy density today is determined only by the mass (or the decay
constant). The parameter region where the relaxion can explain DM is shown in
the upper part of Fig. 5, as the brown shaded region in the $g$ vs $\Lambda$
plane and as the colored points in the $f^{-1}$ vs $m_{\phi}$ plane. In the
second plot, we consider two benchmark cases with $w=0$ (the red point) and
$w=1/3$ (the green points). Similarly to the standard QCD axion, the relaxion
lies on the $m_{\phi}f=\Lambda_{b}^{2}$ line, shown in blue. Indeed, the expression
for the relaxion mass
$m^{2}_{\phi}=\frac{\Lambda_{b}^{4}}{f^{2}}\sin\delta\approx\frac{\Lambda_{b}^{4}}{f^{2}}\cos(\theta_{QCD}),$
(6.2)
is very close to the standard expression given that $\theta_{QCD}<10^{-10}$.
As can be seen, the cut-off scale for such a DM relaxion can be raised up
to $10^{9}\,\mathrm{GeV}$.
### 6.2 QCD relaxion dark matter for $T_{\mathrm{rh}}\gg T_{b}$
Assuming that the universe reheats to temperatures well above
$\Lambda_{\mathrm{QCD}}$, the barriers of the potential shrink and the
relaxion can roll down. The displacement can be computed using the expression
from section 4.1, with the only difference being the nontrivial temperature
dependence of the barriers for a QCD axion,
$\Lambda_{b}^{4}(T,h)\approx\frac{\Lambda_{b}^{4}(0,h)}{1+(T/\Lambda_{\mathrm{QCD}})^{m}},$
(6.3)
with $m\approx 8.16$ [52]. We note that
* •
At temperatures $T>\Lambda_{\mathrm{QCD}}\theta_{\mathrm{QCD}}^{-1/m}$ the
wiggles are essentially negligible, $\Lambda_{b}^{4}(T)<g\Lambda^{3}f$, and
the solution for the linear potential can be used,
$\dot{\phi}(t)=\frac{2}{5}g\Lambda^{3}t[1-(t_{\mathrm{rh}}/t)^{5/2}]$.
* •
For $T_{\mathrm{osc}}<T<\Lambda_{\mathrm{QCD}}\theta_{\mathrm{QCD}}^{-1/m}$,
where $\Lambda_{b}^{2}(T_{\mathrm{osc}})=3H(T_{\mathrm{osc}})f$, the field
evolves in a potential with a small slope and its velocity decreases
approximately as $a^{-3}$.
* •
At temperatures below $T_{\mathrm{osc}}$ the relaxion enters the oscillatory
phase. Its mass still increases with time until $T\approx\Lambda_{QCD}$.
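The first bullet can be verified numerically: using Eq. (6.1), the crossover temperature $T_{*}=\Lambda_{\mathrm{QCD}}\theta_{\mathrm{QCD}}^{-1/m}$ is precisely where the diluted barrier height of Eq. (6.3) drops to the size of the linear slope. The benchmark numbers below are illustrative placeholders.

```python
# Consistency check of the first bullet: at T_* = Lambda_QCD * theta^(-1/m),
# Eq. (6.3) gives Lambda_b^4(T_*) ~ theta * Lambda_b^4(0) = g*Lambda^3*f,
# using the definition of theta_QCD in Eq. (6.1).
M_EXP = 8.16                    # temperature exponent from [52]
LAM_QCD = 0.15                  # Lambda_QCD in GeV
LB4_0 = 0.075**4                # zero-temperature barrier height in GeV^4
theta = 1e-10                   # theta_QCD benchmark saturating Eq. (6.1)

def lb4(T):
    """Temperature-diluted barrier height, Eq. (6.3)."""
    return LB4_0 / (1.0 + (T / LAM_QCD)**M_EXP)

T_star = LAM_QCD * theta**(-1.0 / M_EXP)
ratio = lb4(T_star) / (theta * LB4_0)   # expected to be ~ 1/(1+theta) ~ 1
print(f"T_* = {T_star:.2f} GeV, Lambda_b^4(T_*)/(theta*Lambda_b^4(0)) = {ratio:.3f}")
```

For $\theta_{\mathrm{QCD}}=10^{-10}$ the crossover happens at a few GeV, well above $\Lambda_{\mathrm{QCD}}$.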
The displacement from roll-on at $T_{\mathrm{osc}}$ can be estimated using the
above approximations and compared to the stochastic misalignment. As it was
already pointed out in [1], for $f>10^{10}$ GeV, the first contribution is
always very small and, therefore, the stochastic misalignment is unaffected by
reheating. The relic density can be computed using Eq. (2.4), multiplied by a
factor $\sqrt{m_{\phi}(T_{\mathrm{osc}})/m_{\phi}}$ due to the temperature
dependence of the mass (see e.g. [5]). Note that this coincides with the
expression for standard QCD axion DM. The resulting DM window is shown in the
upper part of Fig. 5.
The main difference compared to the standard QCD axion DM is the possibility
to have a large $\theta_{\mathrm{QCD}}$ angle at the minimum of the
relaxion potential. The latter is a consequence of the linear term, which
shifts the minimum from the CP-conserving value according to Eq. (6.1). This
is illustrated in the bottom part of the figure, where the standard QCD
potential is shown in black and the relaxion potential is shown in red. The
third bottom plot on the left illustrates how the $\theta_{\mathrm{QCD}}$
parameter is expected to oscillate around the minimum of the potential with a
decreasing amplitude due to Hubble expansion. We stress that even in the case
of the standard QCD axion, CP violation in the SM displaces the minimum of the
axion potential from zero. In [51], the authors estimated
$\theta_{\mathrm{QCD}}\sim 10^{-17}$ in the SM (see also [53, 54]). The
average value of the oscillation amplitude of $\theta_{\mathrm{QCD}}$ today is
smaller and approximately
$\bar{\theta}_{\mathrm{QCD}}\sim\Bigl(\frac{2\bar{\rho}_{\mathrm{DM}}}{m_{\phi}^{2}f^{2}}\Bigr)^{1/2}\approx\Bigl(\frac{2\bar{\rho}_{\mathrm{DM}}}{\Lambda_{b}^{4}}\Bigr)^{1/2}\sim
10^{-21}.$ (6.4)
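The $10^{-21}$ estimate follows from treating $\bar{\theta}_{\mathrm{QCD}}$ as an oscillation amplitude, $\rho=\frac{1}{2}m^{2}\phi^{2}$ with $\theta=\phi/f$, evaluated at the average cosmological DM density. The cosmological inputs in this sketch ($h$, $\Omega_{\mathrm{DM}}$) are assumed round numbers, not values from the text.

```python
import math

# Order-of-magnitude check of Eq. (6.4): residual theta_QCD amplitude today
# from the *average* cosmological DM density, theta ~ sqrt(2 rho / Lambda_b^4).
# Cosmological inputs are assumed round numbers.
HBARC_INV_CM = 1.9733e-14                          # GeV per inverse cm
RHO_CRIT = 1.054e-5 * 0.674**2 * HBARC_INV_CM**3   # critical density in GeV^4
RHO_DM = 0.26 * RHO_CRIT                           # average DM density in GeV^4
LB = 0.075                                         # Lambda_b in GeV

theta_bar = math.sqrt(2 * RHO_DM / LB**4)
print(f"bar-theta ~ {theta_bar:.1e}")
```

The result lands at a few $\times 10^{-22}$, i.e. of order $10^{-21}$ as quoted.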
In contrast, in the QCD relaxion model, $\theta_{\mathrm{QCD}}$ oscillates
around some value determined by the slope of the potential. In the $g$ vs
$\Lambda$ parameter region of Fig. 5 the contours of the minimal values of
this angle are shown.
The value of the decay constant $f$ determines the strength of the pseudo-
scalar couplings of the axion, and, in particular, the axion-photon coupling.
Assuming the KSVZ model where
$g_{\phi\gamma\gamma}=\frac{\alpha}{2\pi}\frac{1.92}{f}$, we show the current
and projected future sensitivities of haloscopes, taken from [55], in the
$[m_{\phi},1/f]$ plane of Fig. 5.
## 7 Summary
In this work, we have identified a novel scenario in which the original
relaxion [1] naturally constitutes the DM in our universe. It is remarkable
that the minimal Higgs-axion Lagrangian with the simple potential (1.1) can
address both the Higgs mass hierarchy and the dark matter puzzles. The
misalignment from the local minimum is generated during the long phase of
inflationary dynamics, which is accompanied by a random walk due to
fluctuations. The only requirement here is dropping the CbQ condition for the
relaxion dynamics during inflation, which was discussed in detail in our
earlier work [10].
This DM scenario from stochastic misalignment is complementary to the one from
[13], where the misalignment of the relaxion originates instead from its
evolution after inflation. Compared to this latter case, the stochastic DM
window covers a wider range of masses and, in particular, allows the relaxion
to have larger couplings and smaller decay constants. This is possible both
for reheating temperatures not exceeding $T_{b}<v_{h}$ and in the case
of high reheating temperatures. In the first case, the mass range for relaxion
DM is given by (3.12) while in the second case it is given by (4.10). The
parameter region available for relaxion DM is illustrated in figures 1, 2, 3
and 4. Among others, figures 2 and 3 show the regions that are excluded by
fifth-force experiments, as well as constraints from stellar cooling and black
hole superradiance. The parameter region can be further extended and include
smaller masses if one allows for the possibility of eternal inflation.
The mixing with the Higgs enables unique search strategies for relaxion DM. In
the presence of a coherently oscillating relaxion background, the Higgs mass,
hence, most of the fundamental constants of the SM, become oscillatory in
time, as explained in [13, 56]. Such time-variations may potentially be probed
by table-top experiments, including atomic, molecular or nuclear clocks [57,
58, 59, 60, 61]. The strength of oscillations depends on the local DM density
and, ignoring the substructure of DM, is beyond the currently projected
sensitivity of nuclear clocks [28]. The sensitivity can be enhanced if a
significant fraction of DM is contained in dense localized objects, such as
relaxion stars or miniclusters [56, 62], which motivates further studies in
this direction. We also mention the interesting possibility discussed in [56,
62], of a DM overdensity forming around the sun or the earth, which would
further enhance the signal. In addition to the Higgs coupling, the relaxion is
expected to have pseudo-scalar couplings to the SM and, in that case, can be
probed by e.g. haloscopes (we refer to [55, 63] for a detailed list of
experiments).
We have also considered the QCD relaxion model from [10] and the possibility
of such a relaxion constituting the DM via its stochastic misalignment. In
contrast to the non-QCD models, eternal inflation is required for setting a
small value for $\theta_{\mathrm{QCD}}$. The mixing with the Higgs is small
and the main interaction channel with the SM is through the pseudo-scalar
coupling, which can be probed in haloscope searches. The larger CP violation
compared to the standard axion DM scenario, illustrated in Fig. 5, can be
probed by future neutron EDM searches [64, 65].
## Acknowledgments
We are grateful to Hyungjin Kim for many insights as well as collaboration on
related work. We also thank Marco Gorghetto, Alessandro Lenoci and Enrico
Morgante for useful discussions. This work is supported by the Deutsche
Forschungsgemeinschaft under Germany Excellence Strategy - EXC 2121 “Quantum
Universe” - 390833306.
## References
* [1] P.W. Graham, D.E. Kaplan and S. Rajendran, _Cosmological Relaxation of the Electroweak Scale_ , _Phys. Rev. Lett._ 115 (2015) 221801 [1504.07551].
* [2] J. Preskill, M.B. Wise and F. Wilczek, _Cosmology of the Invisible Axion_ , _Phys. Lett._ 120B (1983) 127.
* [3] L.F. Abbott and P. Sikivie, _A Cosmological Bound on the Invisible Axion_ , _Phys. Lett._ 120B (1983) 133.
* [4] M. Dine and W. Fischler, _The Not So Harmless Axion_ , _Phys. Lett._ 120B (1983) 137.
* [5] P. Arias, D. Cadamuro, M. Goodsell, J. Jaeckel, J. Redondo and A. Ringwald, _WISPy Cold Dark Matter_ , _JCAP_ 1206 (2012) 013 [1201.5902].
* [6] N. Fonseca, E. Morgante, R. Sato and G. Servant, _Relaxion Fluctuations (Self-stopping Relaxion) and Overview of Relaxion Stopping Mechanisms_ , _JHEP_ 05 (2020) 080 [1911.08473].
* [7] A.A. Starobinsky and J. Yokoyama, _Equilibrium state of a selfinteracting scalar field in the De Sitter background_ , _Phys. Rev. D_ 50 (1994) 6357 [astro-ph/9407016].
* [8] P.W. Graham and A. Scherlis, _Stochastic axion scenario_ , _Phys. Rev. D_ 98 (2018) 035017 [1805.07362].
* [9] F. Takahashi, W. Yin and A.H. Guth, _QCD axion window and low-scale inflation_ , _Phys. Rev. D_ 98 (2018) 015042 [1805.08763].
* [10] A. Chatrchyan and G. Servant, _The Stochastic Relaxion_ , 2210.01148.
* [11] A. Nelson and C. Prescod-Weinstein, _Relaxion: A Landscape Without Anthropics_ , _Phys. Rev. D_ 96 (2017) 113007 [1708.00010].
* [12] R.S. Gupta, _Relaxion measure problem_ , _Phys. Rev. D_ 98 (2018) 055023 [1805.09316].
* [13] A. Banerjee, H. Kim and G. Perez, _Coherent relaxion dark matter_ , _Phys. Rev. D_ 100 (2019) 115026 [1810.01889].
* [14] O. Matsedonskyi and G. Servant, _High-Temperature Electroweak Symmetry Non-Restoration from New Fermions and Implications for Baryogenesis_ , _JHEP_ 09 (2020) 012 [2002.05174].
* [15] J.R. Espinosa, C. Grojean, G. Panico, A. Pomarol, O. Pujolàs and G. Servant, _Cosmological Higgs-Axion Interplay for a Naturally Small Electroweak Scale_ , _Phys. Rev. Lett._ 115 (2015) 251803 [1506.09217].
* [16] A. Banerjee, E. Madge, G. Perez, W. Ratzinger and P. Schwaller, _Gravitational wave echo of relaxion trapping_ , _Phys. Rev. D_ 104 (2021) 055026 [2105.12135].
* [17] N. Fonseca and E. Morgante, _Relaxion Dark Matter_ , _Phys. Rev. D_ 100 (2019) 055010 [1809.04534].
* [18] R. Tito D’Agnolo and D. Teresi, _Sliding Naturalness: New Solution to the Strong- $CP$ and Electroweak-Hierarchy Problems_, _Phys. Rev. Lett._ 128 (2022) 021803 [2106.04591].
* [19] R. Tito D’Agnolo and D. Teresi, _Sliding naturalness: cosmological selection of the weak scale_ , _JHEP_ 02 (2022) 023 [2109.13249].
* [20] L. Husdal, _On Effective Degrees of Freedom in the Early Universe_ , _Galaxies_ 4 (2016) 78 [1609.04979].
* [21] D.J.E. Marsh, _Axion Cosmology_ , _Phys. Rept._ 643 (2016) 1 [1510.07633].
* [22] Planck collaboration, _Planck 2018 results. VI. Cosmological parameters_ , 1807.06209.
* [23] S. Dubovsky, L. Senatore and G. Villadoro, _Universality of the Volume Bound in Slow-Roll Eternal Inflation_ , _JHEP_ 05 (2012) 035 [1111.1725].
* [24] T. Flacke, C. Frugiuele, E. Fuchs, R.S. Gupta and G. Perez, _Phenomenology of relaxion-Higgs mixing_ , _JHEP_ 06 (2017) 050 [1610.02025].
* [25] S. Balaji, P.S.B. Dev, J. Silk and Y. Zhang, _Improved stellar limits on a light CP-even scalar_ , 2205.01669.
* [26] E. Hardy and R. Lasenby, _Stellar cooling bounds on new light particles: plasma mixing effects_ , _JHEP_ 02 (2017) 033 [1611.05852].
* [27] M. Baryakhtar, M. Galanis, R. Lasenby and O. Simon, _Black hole superradiance of self-interacting scalar fields_ , _Phys. Rev. D_ 103 (2021) 095019 [2011.11646].
* [28] A. Banerjee, H. Kim, O. Matsedonskyi, G. Perez and M.S. Safronova, _Probing the Relaxed Relaxion at the Luminosity and Precision Frontiers_ , _JHEP_ 07 (2020) 153 [2004.02899].
* [29] J.K. Hoskins, R.D. Newman, R. Spero and J. Schultz, _Experimental tests of the gravitational inverse square law for mass separations from 2-cm to 105-cm_ , _Phys. Rev. D_ 32 (1985) 3084.
* [30] D.J. Kapner, T.S. Cook, E.G. Adelberger, J.H. Gundlach, B.R. Heckel, C.D. Hoyle et al., _Tests of the gravitational inverse-square law below the dark-energy length scale_ , _Phys. Rev. Lett._ 98 (2007) 021101 [hep-ph/0611184].
* [31] S. Schlamminger, K.Y. Choi, T.A. Wagner, J.H. Gundlach and E.G. Adelberger, _Test of the equivalence principle using a rotating torsion balance_ , _Phys. Rev. Lett._ 100 (2008) 041101 [0712.0607].
* [32] M. Bordag, G.L. Klimchitskaya, U. Mohideen and V.M. Mostepanenko, _Advances in the Casimir effect_ , vol. 145, Oxford University Press (2009).
* [33] Y.J. Chen, W.K. Tham, D.E. Krause, D. Lopez, E. Fischbach and R.S. Decca, _Stronger Limits on Hypothetical Yukawa Interactions in the 30–8000 nm Range_ , _Phys. Rev. Lett._ 116 (2016) 221102 [1410.7267].
* [34] J. Bergé, P. Brax, G. Métris, M. Pernot-Borràs, P. Touboul and J.-P. Uzan, _MICROSCOPE Mission: First Constraints on the Violation of the Weak Equivalence Principle by a Light Scalar Dilaton_ , _Phys. Rev. Lett._ 120 (2018) 141101 [1712.00483].
* [35] W.-H. Tan, A.-B. Du, W.-C. Dong, S.-Q. Yang, C.-G. Shao, S.-G. Guan et al., _Improvement for testing the gravitational inverse-square law at the submillimeter range_ , _Phys. Rev. Lett._ 124 (2020) 051301.
* [36] P.S.B. Dev, R.N. Mohapatra and Y. Zhang, _Stellar limits on light CP-even scalar_ , _JCAP_ 05 (2021) 014 [2010.01124].
* [37] H. Kim and A. Lenoci, _Gravitational focusing of wave dark matter_ , _Phys. Rev. D_ 105 (2022) 063032 [2112.05718].
* [38] L. Di Luzio, M. Giannotti, E. Nardi and L. Visinelli, _The landscape of QCD axion models_ , _Phys. Rept._ 870 (2020) 1 [2003.01100].
* [39] BNL-E949 collaboration, _Study of the decay $K^{+}\to\pi^{+}\nu\bar{\nu}$ in the momentum region $140<P_{\pi}<199$ MeV/c_, _Phys. Rev. D_ 79 (2009) 092004 [0903.0030].
* [40] Belle collaboration, _Measurement of the Differential Branching Fraction and Forward-Backward Asymmetry for $B\to K^{(*)}\ell^{+}\ell^{-}$_, _Phys. Rev. Lett._ 103 (2009) 171801 [0904.0770].
* [41] BaBar collaboration, _Search for $B\to K^{(*)}\nu\overline{\nu}$ and invisible quarkonium decays_, _Phys. Rev. D_ 87 (2013) 112005 [1303.7465].
* [42] LHCb collaboration, _Search for hidden-sector bosons in $B^{0}\\!\to K^{*0}\mu^{+}\mu^{-}$ decays_, _Phys. Rev. Lett._ 115 (2015) 161802 [1508.04094].
* [43] LHCb collaboration, _Search for long-lived scalar particles in $B^{+}\to K^{+}\chi(\mu^{+}\mu^{-})$ decays_, _Phys. Rev. D_ 95 (2017) 071101 [1612.07818].
* [44] CHARM collaboration, _Search for Axion Like Particle Production in 400-GeV Proton - Copper Interactions_ , _Phys. Lett. B_ 157 (1985) 458.
* [45] R. Balkin, J. Serra, K. Springmann, S. Stelzl and A. Weiler, _Runaway relaxion from finite density_ , _JHEP_ 06 (2022) 023 [2106.11320].
* [46] G. Krnjaic, _Probing Light Thermal Dark-Matter With a Higgs Portal Mediator_ , _Phys. Rev. D_ 94 (2016) 073009 [1512.04119].
# Fake Z
Anatoly Dymarsky
Department of Physics and Astronomy, University of Kentucky, Lexington, KY 40506, USA

Rohit R. Kalloor
Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot 7610001, Israel
###### Abstract
Recently introduced connections between quantum codes and Narain CFTs provide
a simple ansatz to express a modular-invariant function $Z(\tau,\bar{\tau})$
in terms of a multivariate polynomial satisfying certain additional
properties. These properties include algebraic identities, which ensure
modular invariance of $Z(\tau,\bar{\tau})$, and positivity and integrality of
coefficients, which imply positivity and integrality of the
$\mathfrak{u}(1)^{n}\times\mathfrak{u}(1)^{n}$ character expansion of
$Z(\tau,\bar{\tau})$. Such polynomials come naturally from codes, in the sense
that each code of a certain type gives rise to the so-called enumerator
polynomial, which automatically satisfies all necessary properties, while the
resulting $Z(\tau,\bar{\tau})$ is the partition function of the code CFT – the
Narain theory unambiguously constructed from the code. Yet there are also
“fake” polynomials satisfying all necessary properties, that are not
associated with any code. They lead to $Z(\tau,\bar{\tau})$ satisfying all
modular bootstrap constraints (modular invariance and positivity and
integrality of character expansion), but whether they are partition functions
of any actual CFT is unclear. We consider the six simplest fake
polynomials and denounce the corresponding $Z$’s as fake: we show that none of
them is the torus partition function of any Narain theory. Moreover, four of
them are not partition functions of any unitary 2d CFT; our analysis for the
other two is inconclusive. Our findings point to an obvious limitation of the
modular bootstrap approach: not every solution of the full set of torus
modular bootstrap constraints is due to an actual CFT. In the paper we
consider six simple examples, keeping in mind that thousands more can be
constructed.
###### Contents
1 Introduction
2 “It’s not Narain”
  2.1 Notations and preliminaries
  2.2 Elementary proof for $Z_{3}$
  2.3 Proof by constructing all lattices explicitly
    2.3.1 Taxonomy of integral lattices
    2.3.2 The Lorentzian step
3 Excluding unitary CFTs
  3.1 Polynomials 5 and 6
  3.2 Polynomials 3 and 4
  3.3 Polynomials 1 and 2
  3.4 A genuine CFT example
4 Conclusions
5 Acknowledgements
A Elementary proof for other $Z_{i}$
  A.1 $Z_{1}$
  A.2 $Z_{2}$
  A.3 $Z_{5}$
  A.4 $Z_{4}$
  A.5 $Z_{6}$
B Constructing theta functions of sublattices
C Continuous family of impostor $Z$
## 1 Introduction
Conformal modular bootstrap [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
15, 16, 17] leverages consistency conditions of 2d CFT torus partition
functions $Z(\tau,\bar{\tau})$, and has proved to be a powerful tool yielding
many new interesting results about 2d theories. Similar to its higher-
dimensional cousin, the conformal bootstrap [18, 19, 20], the modular
bootstrap is merely a set of necessary conditions which any 2d CFT must
satisfy. Thus solving the modular bootstrap constraints, in the sense of finding
an appropriate $Z(\tau,\bar{\tau})$, does not guarantee one is dealing with an
actual CFT. The same of course applies to any version of the bootstrap. So far
this has not been a problem, partially because the focus was on exclusion
plots, charting the space of CFT data not compatible with bootstrap
constraints. Besides, we have been lucky (though the reason for the “luck” is
unclear) in the sense that points sitting at the edge of exclusion plots
(solutions of certain bootstrap constraints) happen to be actual CFTs, as is
the case for the 3d Ising model [21]. Yet it would be naive to take for granted
that a solution to modular bootstrap constraints is always an actual bona fide
CFT. To drive the point home, in this paper we consider several explicit
examples of would-be torus partition functions $Z(\tau,\bar{\tau})$ satisfying
all apparent properties, such as modular invariance and decomposition into
characters with positive-integer coefficients, yet we show these are not
partition functions of any 2d CFT.
These examples come from the relation between Narain CFTs and quantum codes
and were first formulated in [22, 23]. Each code ${\mathcal{C}}$ (quantum
stabilizer code of length $n$, and of the self-dual, real type) gives rise to
a refined enumerator polynomial $W_{\mathcal{C}}(x,y,z)$, a homogeneous
polynomial of degree $n$ satisfying several additional properties: i) all
coefficients are positive integers; ii) $W_{\mathcal{C}}(1,0,0)=1$; and iii)
$\displaystyle W_{\mathcal{C}}(x,y,z)$ $\displaystyle=$ $\displaystyle
W_{\mathcal{C}}(x,-y,z),$ (1) $\displaystyle W_{\mathcal{C}}(x,y,z)$
$\displaystyle=$
$\displaystyle\tfrac{1}{2^{n}}W_{\mathcal{C}}(x+y+2z,x+y-2z,x-y).$ (2)
Starting from ${\mathcal{C}}$ one can construct a Narain CFT with central
charge $n$ and torus partition function given by (extension to higher-genus
partition functions was recently discussed in [24, 25])
$\displaystyle Z_{\mathcal{C}}\left(\tau,\bar{\tau}\right)$
$\displaystyle=\frac{W_{\mathcal{C}}\left(\theta_{3}\bar{\theta}_{3}+\theta_{4}\bar{\theta}_{4},\theta_{3}\bar{\theta}_{3}-\theta_{4}\bar{\theta}_{4},\theta_{2}\bar{\theta}_{2}\right)}{\left(2\eta\left(\tau\right)\bar{\eta}\left(\bar{\tau}\right)\right)^{n}},$
(3)
where $\theta_{2},\theta_{3},\theta_{4}(\tau)$ are the standard Jacobi theta
functions. It is straightforward to see that (1) and (2) ensure invariance of
$Z_{\mathcal{C}}$ under $\tau\rightarrow\tau+1$ and $\tau\rightarrow-1/\tau$
respectively; condition ii) ensures $Z_{\mathcal{C}}$ has a unique vacuum state,
and i) guarantees $Z_{\mathcal{C}}$ can be decomposed into
$\mathfrak{u}(1)^{n}\times\mathfrak{u}(1)^{n}$ characters with positive-
integer coefficients. In other words, connection with codes provides an ansatz
to solve all torus modular bootstrap constraints in terms of a polynomial
satisfying certain algebraic identities. This approach was subsequently
generalized to include codes and Narain CFTs of the more general type [26, 27,
28], while some of the main features remain the same: the partition function
$Z_{\mathcal{C}}(\tau,\bar{\tau})$ is written in terms of a multivariate
polynomial satisfying certain algebraic identities. This form implies $Z$ is a
sum of a finite, although potentially large, number of “characters”, though
this description is schematic because the relation with codes includes non-
rational CFTs [29, 28]. The algebraic approach stemming from codes is
complementary to the approach to construct modular invariant $Z$’s by
combining only a few characters [30, 31, 32, 33].
While each code gives rise to a $W_{\mathcal{C}}$, there could be polynomials
satisfying all the necessary conditions, yet not associated with any code.
This is a completely general situation for codes of any type, e.g. classical,
quantum, etc. For small code length $n$ it is straightforward to classify
which polynomials are “genuine” (code-related), and which ones are “fake” (not
associated with any code) because all codes and all polynomials can be found
explicitly. For large $n$ this problem is highly non-trivial.111This is
related to the problem of understanding for which $n$ optimal codes are
extremal. For the polynomials of the type introduced above, all polynomials
with $n=1,2$ are genuine, but starting from $n\geq 3$ there is a rapidly
growing number of fake polynomials. For $n=3$, there are six fake polynomials
[23],
$\displaystyle W_{1}\left(x,y,z\right)$
$\displaystyle=x^{3}+2x^{2}z+y^{2}z+3xz^{2}+z^{3}$ $\displaystyle
W_{2}\left(x,y,z\right)$ $\displaystyle=x^{3}+x^{2}z+2y^{2}z+3xz^{2}+z^{3}$
$\displaystyle W_{3}\left(x,y,z\right)$
$\displaystyle=x^{3}+xy^{2}+2x^{2}z+2xz^{2}+2z^{3}$ $\displaystyle
W_{4}\left(x,y,z\right)$ $\displaystyle=x^{3}+xy^{2}+2y^{2}z+2xz^{2}+2z^{3}$
$\displaystyle W_{5}\left(x,y,z\right)$
$\displaystyle=x^{3}+2xy^{2}+x^{2}z+xz^{2}+3z^{3}$ $\displaystyle
W_{6}\left(x,y,z\right)$ $\displaystyle=x^{3}+2xy^{2}+y^{2}z+xz^{2}+3z^{3}$
(4)
leading to six “impostor” partition functions $Z_{i}$ via (3). The “impostor”
status here indicates that they are not directly associated with any 2d CFT,
but whether such a CFT could potentially exist is a priori unclear. We analyze
the $Z_{i}$ to show that none of them can be the partition function of a
Narain theory, and confirm that the last four, with $3\leq i\leq 6$, are fake – they
are not partition functions of any unitary 2d CFT. Our analysis, based on the
peculiarities of chiral states, is inconclusive for $i=1,2$.
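The algebraic conditions on the polynomials (4) are easy to verify by computer algebra. The following sketch (using sympy purely as a convenience; the choice of tool is ours, not the authors') checks properties i), ii) and the identities (1), (2) for all six $W_{i}$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
n = 3
# the six fake enumerator polynomials of eq. (4)
W = [
    x**3 + 2*x**2*z + y**2*z + 3*x*z**2 + z**3,
    x**3 + x**2*z + 2*y**2*z + 3*x*z**2 + z**3,
    x**3 + x*y**2 + 2*x**2*z + 2*x*z**2 + 2*z**3,
    x**3 + x*y**2 + 2*y**2*z + 2*x*z**2 + 2*z**3,
    x**3 + 2*x*y**2 + x**2*z + x*z**2 + 3*z**3,
    x**3 + 2*x*y**2 + y**2*z + x*z**2 + 3*z**3,
]
for Wi in W:
    assert Wi.subs({x: 1, y: 0, z: 0}) == 1       # ii) unique vacuum
    assert sp.expand(Wi - Wi.subs(y, -y)) == 0    # identity (1)
    # identity (2): the MacWilliams-type transform, divided by 2^n
    transformed = Wi.subs({x: x + y + 2*z, y: x + y - 2*z, z: x - y},
                          simultaneous=True) / 2**n
    assert sp.expand(Wi - transformed) == 0
print("all six W_i satisfy i), ii) and the identities (1)-(2)")
```

All coefficients are manifestly positive integers (property i)), so each $W_{i}$ indeed passes every condition required of a genuine enumerator polynomial.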
Our analysis consists of two logically independent parts. The structure of an
impostor partition function given by (3) is consistent with the
$\mathfrak{u}(1)^{n}\times\mathfrak{u}(1)^{n}$ chiral algebra, hence one may
suspect $Z$ is the partition function of a Narain theory. Our first step is to
rule that out. We notice that (3) with the polynomials (4) yields spectra
inconsistent with the linear structure of the OPE coefficients. Namely, all
six would-be partition functions $Z_{i}$, interpreted as the partition
functions of Narain theories, include
$\mathfrak{u}(1)^{n}\times\mathfrak{u}(1)^{n}$ primary states with the
dimensions $\Delta\leq 1$ and zero spin $\ell=0$. They also include states
with $\Delta=7/4$ which all have spin $\ell=\pm 2$. In a Narain CFT, each
primary is a lattice vector and hence their linear combination with integer
coefficients must be a primary as well. Yet, we show a particular combination
of $\ell=0$ states must have zero spin and $\Delta=7/4$, leading to a
contradiction. This argument can be put on a more general footing, suitable
for generalizations. Any $Z$ obtained via (3) yields the dimensions of
$\mathfrak{u}(1)^{n}\times\mathfrak{u}(1)^{n}$ primaries which are quantized
in units of $1/4$, namely $\Delta=k/4$ for some integer $k$. In terms of
corresponding Narain lattices, this means all lattice vectors are located
half-integer distance-squared away from the origin, i.e.,
$r^{2}=k/2,\,k\in\mathbbm{Z}$. All Euclidean low-dimensional lattices of this
kind can be constructed explicitly. Apparently there are no lattices with the
theta series (spectrum) matching $Z_{1},Z_{2}$ and unique lattices matching
$Z_{i}$ for $i=3,4,5,6$. Here we rendered $Z_{i}$ “Euclidean” by considering
pure imaginary $\tau$. To extend the comparison for arbitrary $\tau$ we need
to equip Euclidean lattices with a Lorentzian metric. Apparently no metric
exists to reproduce $Z_{i}$ for $i=3,4,5,6$ with arbitrary $\tau$. This simply
means that no $Z_{i}$ is the partition function of a genuine Narain theory.
The second argument is based on the analysis of chiral states. Assuming that a
given $Z$ is the partition function of a genuine CFT, we analyze chiral states
with $\Delta=\ell$ and list possible chiral algebras consistent with this
spectrum. Then, for each such scenario we find that particular chiral algebras
require additional states in the spectrum, beyond those present in $Z$. Using
contradictions of this kind, we rule out all possible scenarios one by one for
$Z_{i}$ for $i=3,4,5,6$. For $i=1,2$ our analysis is inconclusive, but in
principle can be extended. The logic above rules out unitary 2d CFTs of any
kind, thus rendering the first argument unnecessary, at least for $i=3,4,5,6$. We
nevertheless present both arguments because they are completely independent
and rely on two different techniques. Besides, our second approach falls short
of yielding a conclusive result for $i=1,2$.
The paper is organized as follows. Section 2.1 recalls some general facts
about Narain CFTs and also sets the notation. In Section 2.2, we give a proof
in elementary terms that $Z_{3}$ is not a partition function of a Narain
theory. Similar proofs for the other $Z_{i}$ are delegated to Appendix A. A more
systematic treatment of the Narain case is given in section 2.3, where we
construct the list of all Euclidean lattices which can potentially give rise
to $Z_{i}$, and show that $Z_{1,2}$ do not come from Narain CFTs. To rule out
other $Z_{i}$, we consider equipping these Euclidean lattices with the
Lorentzian structure, which is discussed in section 2.3.2. This completes the
case of ruling out Narain theories. We give a general argument based on
peculiarities of chiral algebra for $Z_{3,4,5,6}$ in section 3. We conclude in
section 4.
## 2 “It’s not Narain”
In this section, we show that none of the impostor functions
$Z_{i}(\tau,\bar{\tau})$, defined by (3) with help of (4), originate from the
CFTs of the Narain type.
### 2.1 Notations and preliminaries
A Narain CFT is a theory of non-interacting bosons that move on an
$n$-dimensional torus, and are coupled to a background $B$-field:
$\displaystyle S$
$\displaystyle=\frac{1}{4\pi\alpha^{\prime}}\int\text{d}^{2}x\left(\partial_{\xi}X_{i}\partial^{\xi}X^{i}+\epsilon^{\xi\zeta}B_{ij}\partial_{\xi}X^{i}\partial_{\zeta}X^{j}\right)$
(5)
where in what follows we set $\alpha^{\prime}=2$. Here $B_{ij}$ is an
antisymmetric $n\times n$ matrix and the $X^{i}$ (with $i=1,...,n$) are
periodic: $X^{i}\sim X^{i}+f^{i}$, for any element $\overrightarrow{f}$ of an
$n$-dimensional lattice $\Gamma$. The lattice $\Gamma$ and the $B$-field
characterise the space of Narain CFTs.222This parametrisation is rife with
degeneracies, with multiple $(\Gamma,B)$’s mapping to the same theory.
Any such theory has central charge $c=\bar{c}=n$, and admits a
$\mathfrak{u}(1)^{n}\times\mathfrak{u}(1)^{n}$ current algebra (throughout the
rest of this section, we’ll just be calling it the $\mathfrak{u}(1)$ current
algebra) generated by:
$\displaystyle\quad j^{a}(z)={i\mkern 1.0mu}\partial
X^{a},\quad\bar{j}^{a}(\bar{z})={i\mkern 1.0mu}\bar{\partial}X^{a},\qquad
a=1,...,n$ (6)
In special cases (which will be relevant to us later), this current algebra is
enhanced.
The operator spectrum of the theory may then be organised into
$\mathfrak{u}(1)$ primaries and their descendants. The former are vertex
operators of the form:
$\displaystyle\mathcal{V}_{\alpha,\beta}=e^{\frac{{i\mkern
1.0mu}}{\sqrt{2}}(\alpha_{j}X^{j}+\beta_{j}\bar{X}^{j})}$ (7)
where
$\displaystyle X=X_{L}+X_{R},\qquad\bar{X}=X_{L}-X_{R},$
and the momenta $(\alpha,\beta)$ form a lattice ${\sf\Lambda}$ generated by:
$\displaystyle\Lambda$
$\displaystyle=\frac{1}{\sqrt{2}}\begin{pmatrix}2\gamma^{*}&B\gamma\\\
0&\gamma\end{pmatrix}$ (8)
where $\gamma$ generates $\Gamma$. It is easy to verify that
$\displaystyle\Lambda^{t}\eta\Lambda$
$\displaystyle=\eta,\qquad\eta=\begin{pmatrix}0&\mathbbm{1}_{n}\\\
\mathbbm{1}_{n}&0\end{pmatrix},$ (9)
and therefore the set of $\mathfrak{u}(1)$ primaries has the structure of a
$(n,n)$-dimensional Lorentzian lattice that is even and self-dual with respect
to the metric $\eta$. In other words, ${\sf\Lambda}$ is a Narain lattice. The
mapping between the set of primaries and ${\sf\Lambda}$ is natural:
$\displaystyle\mathcal{V}_{\alpha_{1},\beta_{1}}\times\mathcal{V}_{\alpha_{2},\beta_{2}}\sim\mathcal{V}_{\alpha_{1}+\alpha_{2},\beta_{1}+\beta_{2}}$
(10)
where the right-hand side is the leading contribution to the OPE.
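As a quick numerical consistency check of (8) and (9), one can verify that $\Lambda^{t}\eta\Lambda=\eta$ for a random $\gamma$ and antisymmetric $B$. The sketch below assumes $\gamma^{*}=(\gamma^{-1})^{t}$ generates the dual lattice $\Gamma^{*}$ (the standard convention; the code itself is ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# gamma generates the torus lattice Gamma; gamma* = (gamma^{-1})^T its dual
gamma = rng.integers(-3, 4, size=(n, n)) + np.eye(n)
while abs(np.linalg.det(gamma)) < 1e-9:   # ensure gamma is invertible
    gamma = rng.integers(-3, 4, size=(n, n)) + np.eye(n)
gamma_dual = np.linalg.inv(gamma).T

A = rng.standard_normal((n, n))
B = A - A.T                               # antisymmetric B-field

# eq. (8): Lambda = (1/sqrt 2) [[2 gamma*, B gamma], [0, gamma]]
Lam = np.block([[2 * gamma_dual, B @ gamma],
                [np.zeros((n, n)), gamma]]) / np.sqrt(2)
eta = np.block([[np.zeros((n, n)), np.eye(n)],
                [np.eye(n), np.zeros((n, n))]])

# eq. (9): Lambda^T eta Lambda = eta, for any gamma and antisymmetric B
assert np.allclose(Lam.T @ eta @ Lam, eta)

# consequently every lattice vector v = Lam m has even Lorentzian norm:
# v^T eta v = m^T eta m = 2 m_1 . m_2
m = rng.integers(-5, 6, size=2 * n)
v = Lam @ m
print(v @ eta @ v)  # an even integer, up to floating-point error
```

Note that $\langle v,v\rangle=m^{t}\eta m$ is automatically an even integer for any integer $m$, independently of $\gamma$ and $B$: this is the evenness of the Narain lattice used repeatedly below.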
The torus CFT partition function of the theory can also be organised into
contributions from $\mathfrak{u}(1)$ towers:
$\displaystyle Z$
$\displaystyle=\frac{1}{|\eta|^{2n}}\sum_{(\alpha,\beta)\in{\sf\Lambda}}q^{(\alpha+\beta)^{2}/4}{\bar{q}}^{(\alpha-\beta)^{2}/4}$
(11)
where
$\displaystyle\quad q=e^{2\pi i\tau}=\varrho\ t,\quad\bar{q}=e^{-2\pi
i\bar{\tau}}=\varrho\ t^{-1}.$ (12)
The vertex partition function $\tilde{Z}\equiv Z\,|\eta|^{2n}$ in the
variables $(\varrho,t)$ gives the combined Siegel theta function of
${\sf\Lambda}$,
$\displaystyle{\tilde{Z}}$ $\displaystyle=1+\sum_{(\Delta,\ell)\neq
0}\,C_{\Delta,\ell}\varrho^{\Delta}t^{\ell}=\sum_{v=(\alpha,\beta)\in{\sf\Lambda}}\varrho^{\frac{1}{2}v^{t}v}t^{\frac{1}{2}v^{t}\eta
v}$ (13)
where $C_{\Delta,\ell}\in\mathbbm{Z}_{\geq 0}$ counts the number of primaries
with dimension $\Delta$ and (signed) spin $\ell$. Hence, for a lattice point
$v=(\alpha,\beta)\in{\sf\Lambda}$, the Euclidean and Lorentzian norms
(respectively) give the dimension and spin of the corresponding vertex
primary:
$\displaystyle\Delta_{v}$
$\displaystyle=\frac{1}{2}v^{t}v=\frac{1}{2}(\alpha^{2}+\beta^{2})$
$\displaystyle\ell_{v}$ $\displaystyle=\frac{1}{2}v^{t}\eta v=\alpha.\beta$
(14)
To summarize, the Narain theory is defined by a Narain lattice $\Lambda$ with
the $\mathfrak{u}(1)$ primaries being in one to one correspondence with the
lattice vectors.
In what follows, we will discuss lattices without explicitly specifying
generating matrices or even the form of the Lorentzian metric $\eta$, which could
differ from (9) by an $O(2n)$ orthogonal transformation. We introduce the
notion of Euclidean and Lorentzian norms defined for any vector
$v\in{\sf\Lambda}\subset\mathbb{R}^{2n}$,
$\displaystyle v^{t}v$ $\displaystyle=v.v=||v||^{2},$ $\displaystyle v^{t}\eta
v$ $\displaystyle=<v,v>=\langle\langle v\rangle\rangle^{2}.$ (15)
In what follows, “length” will refer exclusively to the Euclidean norm, and
“sq-length” to its square.
### 2.2 Elementary proof for $Z_{3}$
A simple observation, which generalizes to all $Z$’s obtained via (3), is that
the rescaling $\varrho\rightarrow\varrho^{8},t\rightarrow t^{8}$ renders $\tilde{Z}$ a
sum of even-integer powers of $\varrho$ and $t$,
$\displaystyle\tilde{Z}_{3}((\varrho t)^{8},(\varrho t^{-1})^{8})$
$\displaystyle=\sum_{w\in
2{\sf\Lambda}_{3}}\varrho^{||w||^{2}}t^{\langle\langle w\rangle\rangle^{2}}$
$\displaystyle=1+4\varrho^{2}+8\varrho^{4}+16\varrho^{6}+\dots+48\left(t^{8}+t^{-8}\right)\varrho^{14}+\dots$
(16)
This means that if $\tilde{Z}_{3}$ is a Siegel theta function of some
“progenitor” Narain lattice ${\sf\Lambda}_{3}$, then the rescaled lattice
$2{\sf\Lambda}_{3}$ would be an even (and hence integer) lattice in the
Euclidean sense,
$||w||^{2}\in 2\mathbbm{Z},\quad\forall w\in 2{\sf\Lambda}_{3}.$ (17)
The rescaling is not a crucial step and is done to simplify the following
presentation. The mapping between lattice points of $2{\sf\Lambda}_{3}$ and
${\mathfrak{u}(1)}$ primaries is now as follows,
$\displaystyle\Delta_{w}=\frac{1}{8}||w||^{2},\qquad\ell_{w}=\frac{1}{8}\langle\langle
w\rangle\rangle^{2}.$ (18)
Our goal will be to force a contradiction between the given Euclidean and
Lorentzian structures. We do this by showing that the sublattice generated by
the scalar operators with $\Delta<1$ must contain a scalar of Euclidean sq-
length (norm squared) 14, or $\Delta=\frac{7}{4}$, which is absent in (16). We
start by observing there are four vectors of sq-length 2. At least two of
these (say $a_{1},a_{2}$) must be linearly independent, the remaining two are
then $-a_{1},-a_{2}$. Now, the following equation is called the polarisation
identity:
$\displaystyle||a_{1}+a_{2}||^{2}+||a_{1}-a_{2}||^{2}$
$\displaystyle=2(||a_{1}||^{2}+||a_{2}||^{2})$ $\displaystyle=8$
$\displaystyle=4+4$ (19)
which is to say
$\displaystyle||a_{1}+a_{2}||^{2}=||a_{1}-a_{2}||^{2}=4$ (20)
This is the only split possible since all sq-lengths must be even integers,
and only $\pm\\{a_{1},a_{2}\\}$ can have sq-length 2. This translates to
$a_{1}.a_{2}=0$.
We now turn our attention to the sq-length 4 vectors; the only (integral)
linear combinations of $a_{1,2}$ with sq-length 4 are $\pm a_{1}\pm a_{2}$.
Hence the remaining sq-length 4 vectors are linearly independent – let’s pick
one of these and call it $b_{1}$. Let’s see how $b_{1}$ plays with $a_{1,2}$
by applying the polarisation identity:
$\displaystyle||a_{1,2}+b_{1}||^{2}+||a_{1,2}-b_{1}||^{2}$
$\displaystyle=2(||a_{1,2}||^{2}+||b_{1}||^{2})$ $\displaystyle=12$
$\displaystyle=4+8=6+6=8+4$ (21)
These options lead to $a_{1,2}.b_{1}=-1,0,1$ respectively. Let’s say one of
these inner products is non-zero; i.e., (without loss of generality)
$a_{1}.b_{1}=\pm 1$. Subsequently, $||a_{1}\mp 2b_{1}||^{2}=14$, and must
correspond to an (unsigned) spin-1 operator; i.e., $\langle\langle a_{1}\mp
2b_{1}\rangle\rangle^{2}=\mp 4\langle a_{1},b_{1}\rangle=\pm 8$, so that
$\langle a_{1},b_{1}\rangle\neq 0$ (see (16) and
(18)).333Note that we’ve used $\langle\langle
a_{1}\rangle\rangle^{2}=\langle\langle b_{1}\rangle\rangle^{2}=0$ – throughout
this proof, we’ll be making sure that all named vectors ($a,b,c$, etc.) have
zero Lorentzian norm; their linear combinations may not. But we also have
$||a_{1}\mp b_{1}||^{2}=4$, which we know is a scalar: $\langle\langle
a_{1}\mp b_{1}\rangle\rangle^{2}=\mp 2\langle a_{1},b_{1}\rangle=0$. We summarise
these points in the following box:
$\displaystyle\boxed{\begin{array}[]{rl}||a_{1}\mp
b_{1}||^{2}=4&\Rightarrow\langle a_{1},b_{1}\rangle=0\\\ ||a_{1}\mp
2b_{1}||^{2}=14&\Rightarrow\langle a_{1},b_{1}\rangle\neq 0\end{array}}$ (24)
We will use these boxed equations to denote possible contradictions. In the
present case, we say that we will run into (24) if we choose $a_{1}.b_{1}=\pm
1$; and a similar contradiction if we go with $a_{2}.b_{1}=\pm 1$. Hence, we
are forced into the choice $a_{1}.b_{1}=a_{2}.b_{1}=0$, in which case we can
construct another contradiction:
$\displaystyle\boxed{\begin{array}[]{rl}||a_{1}+b_{1}||^{2}=6&\Rightarrow\langle
a_{1},b_{1}\rangle=0\\\ ||a_{2}+b_{1}||^{2}=6&\Rightarrow\langle
a_{2},b_{1}\rangle=0\\\ ||a_{1}+a_{2}||^{2}=4&\Rightarrow\langle
a_{1},a_{2}\rangle=0\\\ ||a_{1}+2a_{2}+b_{1}||^{2}=14&\Rightarrow\langle
a_{1},a_{2}\rangle+\langle a_{1},b_{1}\rangle+\langle a_{2},b_{1}\rangle\neq
0\\\ \end{array}}$ (29)
The last equation states that none of the operators with $\Delta=\frac{7}{4}$
are scalars. In conclusion, all choices lead to contradictions and hence the
assumption of a Narain progenitor must be false.
Figure 1: The outline of the argument in the case of $\tilde{Z}_{3}$ (and also
$\tilde{Z}_{1}$, see appendix A): starting from $||a_{1}||^{2}=||a_{2}||^{2}=2$,
$a_{1}.a_{2}=0$, and $||b_{1}||^{2}=4$, the branches $a_{1}.b_{1}=\pm 1$ and
$a_{2}.b_{1}=\pm 1$ run into the contradiction (24), while $a_{1,2}.b_{1}=0$
runs into (29). The colored boxes indicate contradictions.
The proof above in terms of elementary operations surprisingly generalizes to
all six $Z_{i}$, although the number of steps could be significant, see
Appendix A.
### 2.3 Proof by constructing all lattices explicitly
In this section, we give an alternative and more general proof that none of the
impostor functions $Z(\tau,\bar{\tau})$ obtained from (4) via (3) originate from
CFTs of the Narain type.
We saw in the previous section that the would-be Euclidean lattice
$2{\sf\Lambda}_{3}$ had to be even. This situation is completely general for
all $Z_{i}$. Consider any “partition function” $\tilde{Z}$ obtained via (3).
It is a polynomial in
$\displaystyle\theta_{3}\bar{\theta}_{3}+\theta_{4}\bar{\theta}_{4}$
$\displaystyle=1+4\varrho+\varrho^{2}\left(2t^{2}+\frac{2}{t^{2}}\right)+4{\varrho}^{4}+{\varrho}^{5}\left(4{t}^{4}+\frac{4}{{t}^{4}}\right)+\dots$
$\displaystyle\theta_{3}\bar{\theta}_{3}-\theta_{4}\bar{\theta}_{4}$
$\displaystyle=\frac{2\sqrt{{\varrho}}({t}+1)}{\sqrt{{t}}}+\frac{4{\varrho}^{5/2}\left({t}^{3}+1\right)}{{t}^{3/2}}+\frac{2{\varrho}^{9/2}\left({t}^{9}+1\right)}{{t}^{9/2}}+\dots$
$\displaystyle\theta_{2}\bar{\theta}_{2}$
$\displaystyle=2\sqrt[4]{{\varrho}}+{\varrho}^{5/4}\left(2{t}+\frac{2}{{t}}\right)+2{\varrho}^{9/4}+{\varrho}^{13/4}\left(2{t}^{3}+\frac{2}{{t}^{3}}\right)+{\varrho}^{17/4}\left(2{t}^{2}+\frac{2}{{t}^{2}}\right)+\dots$
and hence the series expansion of $\tilde{Z}$ only contains non-negative integer
powers of $\varrho^{1/4}$. Assuming that $\tilde{Z}$ is genuine, meaning it is
a Siegel theta function of some Narain lattice ${\sf\Lambda}$, the sq-lengths
of all vectors of ${\sf\Lambda}$ are half-integers. Therefore, $2{\sf\Lambda}$
is an even lattice with respect to the Euclidean inner product and in
particular, all Euclidean norms are even integers.444Since ${\sf\Lambda}$ is
already even and self-dual with respect to $\eta$; $2{\sf\Lambda}$ is also
integral (but no longer self-dual) with respect to $\eta$. However, we will
not use this fact directly.
Since ${\sf\Lambda}$ is self-dual with respect to $\eta$, its determinant is
$1$. Therefore $2\,{\sf\Lambda}$, understood as a Euclidean lattice, is even
and has determinant $2^{2n}$. All such lattices – for small $n$ – can be
constructed explicitly and their Euclidean theta series can be compared with
$\tilde{Z}({\varrho}^{4},{\varrho}^{4})$. This prompts the following strategy:
construct all $2n=6$-dimensional even lattices with determinant $2^{6}$ and
compute their theta series up to some order. We will see that there are no
lattice theta series matching $\tilde{Z}_{i}({\varrho}^{4},{\varrho}^{4})$ for
$i=1,2$, thus ruling them out as torus partition functions of Narain theories.
Yet we find lattices reproducing $\tilde{Z}_{i}({\varrho}^{4},{\varrho}^{4})$
for $i=3,4,5,6$, for which the argument has to be extended. Namely, for each
Euclidean lattice yielding a particular
$\tilde{Z}_{i}({\varrho}^{4},{\varrho}^{4})$, we would like to equip it with a
Lorentzian inner product $\eta$ such that Siegel theta function matches
$\tilde{Z}_{i}({\varrho}^{4}t^{4},{\varrho}^{4}t^{-4})$. This leads to a
number of linear equations for $\eta$ written in some particular basis, which
in all cases have no solutions. Here again, the absence of zero-spin states with
$\Delta=7/4$ plays a crucial role.
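The Euclidean step of this comparison only needs the theta series of candidate lattices. A brute-force sketch of such a computation from an integer Gram matrix (positive definite assumed; the function and its bounds are our own, for illustration):

```python
import itertools
import math
import numpy as np

def theta_series(G, qmax):
    """Count lattice vectors m with m^T G m = k for each k <= qmax,
    for a positive-definite integer Gram matrix G (brute force)."""
    G = np.asarray(G, dtype=float)
    d = G.shape[0]
    # bound the search box using m^T G m >= lambda_min |m|^2
    lam_min = float(min(np.linalg.eigvalsh(G)))
    R = math.isqrt(int(qmax / lam_min)) + 1
    counts = {}
    for m in itertools.product(range(-R, R + 1), repeat=d):
        v = np.array(m)
        k = int(round(v @ G @ v))
        if k <= qmax:
            counts[k] = counts.get(k, 0) + 1
    return counts

# example: a c=1 compact-scalar lattice with Gram matrix diag(2, 8)
# (the rescaled lattice 2 Lambda_{R=1} appearing in the next subsection)
print(dict(sorted(theta_series([[2, 0], [0, 8]], qmax=8).items())))
# {0: 1, 2: 2, 8: 4}
```

For $d=6$ the search box grows quickly; in practice a recursive construction along the lines of the sublattice trick of Appendix B is what makes the full scan feasible.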
We will illustrate these points below, using an $n=1$ enumerator polynomial as
a working example. However, in this case the corresponding $Z$ comes from an
actual Narain CFT, and we will end up rediscovering this fact.
#### 2.3.1 Taxonomy of integral lattices
In (8), the lattice generating matrix was written in a particular form, e.g.
satisfying (9). More generally a Euclidean $2n$-dimensional lattice can be
defined as an equivalence class of lattice generating matrices $\Lambda$,
where $\Lambda\sim\Lambda^{\prime}$ if they are related by a rotation and
renumeration of lattice points,
$\Lambda^{\prime}=O\Lambda S,\qquad O\in O(2n,\mathbb{R}),\qquad S\in
GL(2n,\mathbb{Z}).$ (30)
Alternatively, a lattice is an equivalence class of quadratic forms
$G=\Lambda^{t}\Lambda$ up to $S\in GL(2n,\mathbb{Z})$ transformations $G\sim
S^{T}GS$.
To construct all even (and hence integral) Euclidean lattices in
$\mathbb{R}^{2n}$ with the determinant $2^{2n}$ we use the following result
(our statement is a fusion of two lemmas in [34]):
###### Lemma 2.1.
Any integral Euclidean lattice of determinant $2\mathfrak{D}$ (with
$\mathfrak{D}^{2}\in\mathbbm{Z}$) can be obtained from an integral Euclidean
lattice of determinant $\mathfrak{D}$ via
$\Lambda_{2\mathfrak{D}}=\Lambda_{\mathfrak{D}}B,$ (31)
where $B$ is some integer-valued matrix with ${\rm det}\,B=2$.
It is obvious that if $\Lambda_{\mathfrak{D}}$ generates an integral lattice
of determinant $\mathfrak{D}$, then $\Lambda_{\mathfrak{D}}B$ generates an
integral lattice of determinant $2\mathfrak{D}$. The non-trivial part here is
that all integral lattices of determinant $2\mathfrak{D}$ can be obtained this
way.
Let us now restrict our attention to the case at hand – we have $n=3$. It
follows from the above that all integral lattices in $\mathbb{R}^{6}$ of
determinant $2^{6}$ are given by
$\Lambda_{i_{1}\dots i_{6}}=B_{i_{1}}\dots B_{i_{6}}$ (32)
where $B_{i}$ exhaust all possible integer-valued matrices of determinant 2.
Here we used the fact that the only unimodular (integral and of determinant
one) lattice in $\mathbb{R}^{6}$ is $\mathbb{Z}^{6}$ and its generating matrix
can be chosen to be the identity matrix.555This statement is true in any
dimension less than $8$. In $\mathbb{R}^{8}$, besides $\mathbb{Z}^{8}$, there
is also a unimodular root lattice $E_{8}$. Using the $GL(6,\mathbb{Z})$ action
from the right we can bring all $B_{i_{k}}$ to Hermite normal form, namely
lower triangular with non-negative integer matrix elements, with the diagonal
matrix elements (pivots) being the largest in each row. There are $2^{6}-1$ such
matrices. Using permutations of rows, which are elements of $O(6,\mathbb{R})$,
one can bring $B_{i_{1}}$ to one of $6$ possible forms (the last pivot is two, all
others are one, and there are $0\leq k\leq 5$ ones in the last row, located
arbitrarily), hence bringing the total number of combinations in (32) to
$6(2^{6}-1)^{5}$. We should stress that not all combinations yield distinct
lattices, yet possible equivalence relations between different $B_{i_{1}}\dots
B_{i_{6}}$ combinations are difficult to identify. We illustrate the
construction with a 2d example below.
#### All integral determinant-$4$ lattices in $\mathbb{R}^{2}$
To illustrate our method, we compute all two-dimensional integral lattices of
determinant $4$. There are $2^{2}-1=3$ possible matrices $B_{i}$ in the
Hermite form
$\displaystyle D_{1,1}=\left(\begin{array}[]{cc}2&0\\\
0&1\end{array}\right),\quad D_{1,2}=\left(\begin{array}[]{cc}1&0\\\
0&2\end{array}\right),\quad D_{2,2}=\left(\begin{array}[]{cc}1&0\\\ 1&2\\\
\end{array}\right).$ (39)
Naively, there are $9$ different combinations. Using permutations of rows we
can reduce $B_{i_{1}}$ to be either $D_{1,2}$ or $D_{2,2}$. There are then 6
possible combinations, which generate 4 distinct lattices/quadratic
forms/theta series.
As explained before, this list must contain the (appropriately rescaled)
lattices of all $c=1$ Narain theories with vertex operators of quarter-integer
dimensions. There are three such theories – the compact scalars with
$R=1,2,\sqrt{2}$. The first two are code theories, and being T-dual to each
other, their lattices coincide. The generating matrices for the corresponding
Narain lattices are:
$\displaystyle 2\Lambda_{R=1}=\sqrt{2}\begin{pmatrix}1&0\\\
0&2\end{pmatrix}\sim D_{2,2}D_{1,2}\sim D_{2,2}D_{2,2},$ $\displaystyle
2\Lambda_{R=\sqrt{2}}=\begin{pmatrix}2&0\\\ 0&2\end{pmatrix}\sim
D_{2,2}D_{1,1}\sim D_{1,2}D_{1,1}.$ (40)
In the two-dimensional case, the equivalence of lattice-generating matrices is
easy to establish explicitly; a more practical approach would be first to
compare the corresponding theta series calculated to a sufficient order.
The other two combinations $D_{1,2}D_{1,2}$ and $D_{1,2}D_{2,2}$ do not
generate even integral lattices, and hence cannot correspond to Narain
theories with quarter-integer spectra.
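The two-dimensional example is small enough to check by brute force. The sketch below (our own check, assuming the columns of a generating matrix are the lattice basis vectors, consistent with the right $GL$ action in (30)) enumerates the six products and counts distinct theta series:

```python
from itertools import product

# the three 2x2 Hermite-normal-form matrices of determinant 2, eq. (39)
D11 = ((2, 0), (0, 1))
D12 = ((1, 0), (0, 2))
D22 = ((1, 0), (1, 2))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def theta(L, rmax=40, box=30):
    """Multiplicities of the squared lengths of the vectors m c1 + n c2,
    where c1, c2 are the columns of L (truncated at rmax)."""
    counts = {}
    for m, n in product(range(-box, box + 1), repeat=2):
        v0 = L[0][0] * m + L[0][1] * n
        v1 = L[1][0] * m + L[1][1] * n
        r2 = v0 * v0 + v1 * v1
        if r2 <= rmax:
            counts[r2] = counts.get(r2, 0) + 1
    return tuple(sorted(counts.items()))

# B_{i_1} reduced to D12 or D22 by row permutations; B_{i_2} arbitrary
thetas = {theta(matmul(B1, B2))
          for B1, B2 in product((D12, D22), (D11, D12, D22))}
print(len(thetas))  # 4 distinct theta series

# the even ones are the lattices of the R=1 and R=sqrt(2) compact scalars
even = [th for th in thetas if all(r2 % 2 == 0 for r2, _ in th)]
print(len(even))  # 2
```

The six products collapse to four distinct theta series, two of which are even, reproducing the counting of (40) and the two discarded odd combinations.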
#### The full list of $d=6$ theta series and the result of Euclidean analysis
A naive attempt to write down all $6(2^{6}-1)^{5}$ combinations
$B_{i_{1}}\dots B_{i_{6}}$ is challenging even using computer algebra. In
fact, we do not need the lattices themselves but only their theta series, to
compare with $\tilde{Z}_{i}({\varrho}^{4},{\varrho}^{4})$. There is a very
useful trick666We thank Ohad Mamroud for a valuable discussion on this point.,
described in Appendix B, which allows for efficient construction of the Siegel
theta series of $B_{i_{1}}\dots B_{i_{r-1}}B_{i_{r}}$ from the generating
matrix of $B_{i_{1}}\dots B_{i_{r-1}}$, thus reducing the number of necessary
matrix multiplications to $6(2^{6}-1)^{4}$. While this is still a large
number, this task can be performed on a computer cluster, yielding no
combinations with theta series matching $\tilde{Z}_{1}$ and $\tilde{Z}_{2}$,
and many different combinations matching $\tilde{Z}_{i}$ for $i=3-6$. These
combinations will be used for further analysis in Section 2.3.2. We also
conclude at this point that the partition functions $Z_{1,2}(q,\bar{q})$
cannot come from any Narain CFTs.
We close this section with a trick that could help to drastically reduce the
computation time. After each $B_{i_{k}}$ is brought to the Hermite normal form,
the matrix product (32) is of the form:
$\displaystyle D=\left(\begin{array}[]{cccc}2^{p_{1}}&0&0&\dots\\\
*&2^{p_{2}}&0&\dots\\\ *&*&2^{p_{3}}&\dots\\\
\dots&\dots&\dots&\dots\end{array}\right)$ (45)
The sum $p_{1}+\dots+p_{d}=r$ is fixed by the value of the determinant. We are
interested in the case $r=d=6$. To simplify things further, we use the
equivalence under row permutations acting from the left to impose the
additional condition $p_{1}\leq p_{2}\leq\dots\leq p_{d}$.\footnote{We do not
prove that this is always possible, but have checked it explicitly for all
cases with $r=d\leq 5$.} The diagonals of such matrices are labelled by Young
diagrams with $r$ boxes.
The lattices we are interested in are not merely integral but even. Hence we
can demand that the squared-sum of elements within each row of $D$ be even.
Finally, using reflections along each of the $d$ axes (which are lattice
equivalences) we can ensure that in each column, moving downwards starting
from below the diagonal, the first non-zero element is less than or equal to
the corresponding pivot divided by two. So if the first non-zero element in
the column numbered $d-1$ is $D_{d,d-1}$, we must have $D_{d,d-1}\leq
2^{p_{d}-1}$.
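The enumeration of admissible diagonals $(2^{p_{1}},\dots,2^{p_{d}})$ can be sketched as follows (our illustration, not the authors' code): generate all non-decreasing non-negative integer tuples with $p_{1}+\dots+p_{d}=r$. For $r=d=6$ these are exactly the $p(6)=11$ Young diagrams with six boxes mentioned above.

```python
# Sketch: enumerate gauge-fixed diagonals, i.e. non-decreasing tuples
# (p_1, ..., p_d) of non-negative integers with p_1 + ... + p_d = r.
def diagonals(d, r, lo=0):
    """Yield non-decreasing d-tuples summing to r, entries >= lo."""
    if d == 1:
        if r >= lo:
            yield (r,)
        return
    for p in range(lo, r + 1):
        for rest in diagonals(d - 1, r - p, p):
            yield (p,) + rest

# Young diagrams with 6 boxes <-> partitions of 6: there are 11 of them.
print(len(list(diagonals(6, 6))))  # 11
```

Each diagonal then seeds the enumeration of the lower-triangular entries subject to the evenness and reflection conditions, which is how one arrives at the $7{,}725{,}064$ matrices quoted below.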
With these “gauge fixing” conditions, in the case of $d=r=6$ there are only
$7,725,064$ distinct matrices $D$. All of them can be easily generated on a
laptop in a matter of minutes. A brute force calculation of the corresponding
theta series then shows that none of them match
$\tilde{Z}_{1}(\varrho^{4},\varrho^{4})$ or
$\tilde{Z}_{2}(\varrho^{4},\varrho^{4})$, in agreement with the discussion
above. At first glance, one finds many thousands of $D$ that yield
$\tilde{Z}_{i}(\varrho^{4},\varrho^{4})$ for each $i=3-6$, but upon further
analysis they all turn out to be equivalent. In conclusion, for each $i=3-6$,
we find exactly one lattice with the theta series matching
$\tilde{Z}_{i}(\varrho^{4},\varrho^{4})$, with the generators given by:
$\displaystyle D_{3}=\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\
0&1&0&0&0&0\\\ 0&0&2&0&0&0\\\ 0&0&0&2&0&0\\\ 0&1&0&0&4&0\\\ 1&0&0&0&0&4\\\
\end{array}\right),\qquad D_{4}=\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\
0&1&0&0&0&0\\\ 0&0&2&0&0&0\\\ 0&1&0&2&0&0\\\ 1&0&0&2&4&0\\\ 2&2&2&0&0&4\\\
\end{array}\right),$ (58) $\displaystyle
D_{5}=\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\ 0&1&0&0&0&0\\\
0&1&2&0&0&0\\\ 0&1&0&2&0&0\\\ 0&1&0&0&4&0\\\ 1&2&0&0&0&4\\\
\end{array}\right),\qquad D_{6}=\left(\begin{array}[]{cccccc}1&0&0&0&0&0\\\
0&1&0&0&0&0\\\ 0&1&2&0&0&0\\\ 0&1&0&2&0&0\\\ 1&2&0&0&4&0\\\ 2&1&0&0&0&4\\\
\end{array}\right),$ (71)
The corresponding theta series read:
$\displaystyle\Theta_{3}$
$\displaystyle=\tilde{Z}_{3}(\varrho^{4},\varrho^{4})=1+4\varrho+8\varrho^{2}+16\varrho^{3}+28\varrho^{4}+40\varrho^{5}+64\varrho^{6}+96\varrho^{7}+124\varrho^{8}+\dots,$
$\displaystyle\Theta_{4}$
$\displaystyle=\tilde{Z}_{4}(\varrho^{4},\varrho^{4})=1+8\varrho^{2}+16\varrho^{3}+28\varrho^{4}+64\varrho^{5}+64\varrho^{6}+96\varrho^{7}+124\varrho^{8}+\dots,$
$\displaystyle\Theta_{5}$
$\displaystyle=\tilde{Z}_{5}(\varrho^{4},\varrho^{4})=1+2\varrho+4\varrho^{2}+24\varrho^{3}+44\varrho^{4}+20\varrho^{5}+32\varrho^{6}+144\varrho^{7}+188\varrho^{8}+\dots,$
$\displaystyle\Theta_{6}$
$\displaystyle=\tilde{Z}_{6}(\varrho^{4},\varrho^{4})=1+4\varrho^{2}+24\varrho^{3}+44\varrho^{4}+32\varrho^{5}+32\varrho^{6}+144\varrho^{7}+188\varrho^{8}+\dots$
#### 2.3.2 The Lorentzian step
Now we proceed to prove that $Z_{i}\left(q,\bar{q}\right)$ for $i=3-6$ cannot
arise from any Narain CFT. That is, for $i=3-6$, we show that no lattice which
yields the theta series
$\Theta_{\Lambda/2}(\varrho)=\tilde{Z}_{i}\left(\varrho,\varrho\right)$ also
satisfies
$\Theta_{\Lambda/2,\eta}\left(q,\bar{q}\right)=\tilde{Z}_{i}\left(q,\bar{q}\right)$
at the level of the Siegel theta series, with some Lorentzian inner product
$\eta$.
We start with some particular $Z_{i}$; for example, with $i=3$. As we
explained above, we have constructed an exhaustive list of candidate lattice-
generating matrices such that corresponding theta series satisfy
$\Theta_{\Lambda}(\varrho)=\tilde{Z}_{3}(\varrho^{4},\varrho^{4})$. While we
expect that all these matrices are equivalent, i.e. correspond to the same
Euclidean lattice, we cannot rigorously prove it (see footnote 7) and
therefore deal with them one by one. We would like to know if there is an appropriate
$\eta$, a symmetric $6\times 6$ matrix with the signature $(+,+,+,-,-,-)$ such
that
$\displaystyle\tilde{Z}_{3}(\varrho^{4}t,\varrho^{4}t^{-1})$
$\displaystyle=\sum_{\begin{subarray}{c}v=\Lambda m\\\
m\in\mathbbm{Z}^{2n}\end{subarray}}\varrho^{\frac{1}{2}v^{t}v}\,t^{\frac{1}{8}v^{t}\eta
v}$
$\displaystyle=1+4\varrho+8\varrho^{2}+16\varrho^{3}+4\left(5+t+t^{-1}\right)\varrho^{4}+4\left(8+t+t^{-1}\right)\varrho^{5}$
$\displaystyle\qquad\qquad+16\left(2+t+t^{-1}\right)\varrho^{6}+48\left(t+t^{-1}\right)\varrho^{7}+\dots$
(72)
We start by noting that all lattice vectors $v$ of sq-length $2,4,6$ have zero
norm under $\eta$, while those of sq-length $14$ have $\eta$ norm-squared
$\pm 8$. For each $\Lambda$ we can easily identify all the short vectors
explicitly, leading to a system of linear equalities and non-equalities:
$\displaystyle v^{t}\,\eta\,v=0,$ $\displaystyle\qquad\text{for}\
||v||^{2}=2,4,6$ $\displaystyle v^{t}\eta v\neq 0,$
$\displaystyle\qquad\text{for}\ ||v||^{2}=14.$ (73)
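As a toy illustration of how the feasibility of such a system can be tested (this is our sketch, not the cluster code used for (73)), note that each condition $v^{t}\eta v=0$ is linear in the independent entries of $\eta$: if the equality constraints already force the quadratic form to vanish identically, no longer vector can acquire a non-zero $\eta$-norm.

```python
# Toy 2x2 version: write eta = [[a, b], [b, c]]; each v^T eta v = 0 is a
# linear equation in (a, b, c).
import numpy as np

def quad_row(v):
    """Coefficients of v^T eta v in the unknowns (a, b, c)."""
    x, y = v
    return [x * x, 2 * x * y, y * y]

# Demand that (1,0), (0,1) and (1,1) all be eta-null:
A = np.array([quad_row(v) for v in [(1, 0), (0, 1), (1, 1)]], dtype=float)

# Full rank (3 = number of unknowns) means eta = 0 is the only solution,
# so any extra requirement v^T eta v != 0 can never be satisfied: the
# analogue of system (73) is infeasible.
print(np.linalg.matrix_rank(A))  # 3
```

In the actual computation the same logic runs over the symmetric $6\times 6$ parametrisation of $\eta$ and the lists of short vectors of each candidate $\Lambda$.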
However, in all cases, this system has no solutions, which we checked on a
cluster using the list of all suitable $\Lambda=B_{i_{1}}\dots B_{i_{6}}$.
The same applies to all the other cases:
$\displaystyle\tilde{Z}_{4}(\varrho^{4}t,\varrho^{4}t^{-1})$
$\displaystyle=1+8\varrho^{2}+16\varrho^{3}+4(5+t+t^{-1})\varrho^{4}+16(2+t+t^{-1})\varrho^{5}$
$\displaystyle\qquad\qquad+16(2+t+t^{-1})\varrho^{6}+48(t+t^{-1})\varrho^{7}+\dots$
$\displaystyle\tilde{Z}_{5}(\varrho^{4}t,\varrho^{4}t^{-1})$
$\displaystyle=1+2\varrho+4\varrho^{2}+24\varrho^{3}+(28+8t+8t^{-1})\varrho^{4}+2(8+t+t^{-1})\varrho^{5}$
$\displaystyle\qquad\qquad+8(2+t+t^{-1})\varrho^{6}+72(t+t^{-1})\varrho^{7}+\dots$
$\displaystyle\tilde{Z}_{6}(\varrho^{4}t,\varrho^{4}t^{-1})$
$\displaystyle=1+4\varrho^{2}+24\varrho^{3}+(28+8t+8t^{-1})\varrho^{4}+8(2+t+t^{-1})\varrho^{5}$
$\displaystyle\qquad\qquad+8(2+t+t^{-1})\varrho^{6}+72(t+t^{-1})\varrho^{7}+\dots$
In each case the $\Delta=7/4$ states have non-zero spins, while the states
corresponding to vectors of sq-length $2,4,6$ (if any) are scalars; and in all
cases, i.e. for all suitable lattices $\Lambda=B_{i_{1}}\dots B_{i_{6}}$, the
system (73) has no solutions.
For the four representatives (58), (71), we can see explicitly why this
happens: some (but not all) vectors $v_{14}$ of sq-length $14$ can be
expressed through vectors of sq-length $2,4,6$ in the sense that:
$\displaystyle v_{14}v^{t}_{14}=\sum_{i}a_{i}v_{i}v_{i}^{t},$ (74)
with some coefficients $a_{i}$, and $i$ running through all scalar vectors of
sq-length less than or equal to $6$. This immediately rules out
$v_{14}^{t}\eta v_{14}\neq 0$. This concludes the proof that none of the
$Z_{i}$ is a torus partition function of any Narain theory.
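The span criterion (74) is straightforward to test numerically. The sketch below (with toy two-dimensional vectors of our choosing, not the paper's lattice data) checks by least squares whether $vv^{t}$ lies in the span of the $v_{i}v_{i}^{t}$; if it does, $v^{t}\eta v=0$ follows for any $\eta$ annihilating the scalar vectors.

```python
# Sketch: is v v^T a linear combination of the matrices v_i v_i^T?
import numpy as np

def in_span(v, scalars, tol=1e-9):
    """Least-squares membership test for v v^T in span{v_i v_i^T}."""
    target = np.outer(v, v).ravel()
    basis = np.stack([np.outer(s, s).ravel() for s in scalars], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return np.linalg.norm(basis @ coeffs - target) < tol

# Toy "scalar" vectors: the standard basis of R^2, spanning diagonal matrices.
scalars = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(in_span(np.array([1.0, 0.0]), scalars))  # True
print(in_span(np.array([1.0, 1.0]), scalars))  # False: off-diagonal part remains
```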
In conclusion, it is interesting to note that the elementary proof of Section
2.2 and the more general proof of this section both rely on “representing”
$\Delta=7/4,\ell=\pm 1$ vectors starting from the shorter scalar ones.
#### Example: the $c=1$ case
Let’s illustrate the procedure in the $c=1$ case – for the enumerator
polynomial $W(x,y,z)=x+z$. The vertex partition function in this case is given
by:
$\displaystyle\tilde{Z}_{W}\left(\varrho^{4}t,\varrho^{4}t^{-1}\right)$
$\displaystyle=1+2\varrho+4\varrho^{4}+2\varrho^{5}\left(t+t^{-1}\right)+2\varrho^{8}\left(t^{2}+t^{-2}\right)+2\varrho^{9}+2\varrho^{13}\left(t^{3}+t^{-3}\right)+\dots$
(75)
This is the partition function of the $R=1$ Narain theory, but we do not
assume this knowledge yet. In Section 2.3.1, we identified a Euclidean lattice
with the theta function $\tilde{Z}_{W}$,
$\displaystyle\tilde{Z}_{W}\left(\varrho^{4},\varrho^{4}\right)$
$\displaystyle=\sum_{r\in\mathbbm{Z}^{2}}\varrho^{\frac{1}{2}r^{t}\Lambda^{t}\Lambda
r},\qquad\Lambda=D_{2,2}D_{1,2}.$ (76)
We would like to find a Lorentzian inner product matrix $\eta$ such that
$\displaystyle\tilde{Z}_{W}\left(\varrho t,\varrho t^{-1}\right)$
$\displaystyle=\sum_{r\in\mathbbm{Z}^{2}}\varrho^{\frac{1}{8}r^{t}\Lambda^{t}\Lambda
r}t^{\frac{1}{8}r^{t}\Lambda^{t}\eta\Lambda r},$ (77)
or show that it does not exist. First, we identify all vectors of sq-length
two, $v_{2}=\pm(1,1)$ and four $v_{4}=(\pm 2,\pm 2)$. Demanding that these
vectors be scalars restricts $\eta$ to be of the form
$\eta=\begin{pmatrix}\eta_{11}&0\\\ 0&-\eta_{11}\end{pmatrix}.$ (78)
Then we consider a vector of sq-length five, $v_{5}=(-3,1)$. Demanding that
this vector has (unsigned) spin one, i.e., $\tfrac{1}{8}v_{5}^{t}\eta
v_{5}=1$, fixes $\eta_{11}=1$. This fixes $\eta$, and one can check, at least
perturbatively, that (77) is satisfied. Clearly (78) is related to (9) by an
orthogonal $O(2,\mathbb{R})$ transformation, which finishes the proof –
starting from the polynomial $W$, we have constructed a Narain lattice (of the
$c=1$ compact scalar at radius $R=1$) that yields the same Siegel theta series
as $\tilde{Z}_{W}$.
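The steps above can be rerun numerically. The following sketch (ours, using the vector components quoted in the text) parametrises $\eta=\begin{pmatrix}a&b\\b&c\end{pmatrix}$ and imposes one representative constraint from each family of short vectors:

```python
# Numerical rerun of the c = 1 example: three linear conditions on
# eta = [[a, b], [b, c]] pin it down to diag(1, -1), as in (78).
import numpy as np

def quad_row(v):
    """Coefficients of v^T eta v in the unknowns (a, b, c)."""
    x, y = v
    return [x * x, 2 * x * y, y * y]

A = np.array([quad_row((1, 1)),      # v2 scalar:   a + 2b + c = 0
              quad_row((2, -2)),     # v4 scalar:  4a - 8b + 4c = 0
              quad_row((-3, 1))],    # spin one:   (9a - 6b + c)/8 = 1
             dtype=float)
rhs = np.array([0.0, 0.0, 8.0])
a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)  # eta = diag(1, -1)
```

The first two conditions enforce the diagonal form (78), and the spin-one condition on $v_{5}$ then fixes the overall scale, exactly as in the text.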
## 3 Excluding unitary CFTs
In this section, we show that $Z_{3-6}$ cannot come from any unitary CFT with
a local stress tensor. In particular, this also proves that they cannot come
from the Narain CFTs and constitutes an independent proof of this claim. We do
this by first recalling that the algebra formed by spin-1 currents in a
unitary CFT with a local stress tensor must be a direct sum of affine Lie
algebras and Heisenberg algebras.\footnote{By which we mean the chiral algebra
of a non-compact free boson.} Then we show that the spectra of $Z_{3-6}$ cannot be
organised into representations of any such algebra. Our analysis will prove
inconclusive for $Z_{1,2}$.
Consider the full set of holomorphic, spin-1 operators in a unitary CFT with a
unique ground state. Since we are dealing with a unitary theory, we can assume
that this set has a basis of Hermitian operators – say $j^{a}(z)$ –
orthonormal under the two-point function:
$\displaystyle\langle j^{a}(z)j^{b}(0)\rangle=\frac{1}{z^{2}}\delta^{ab}$ (79)
The products of these operators have to be of the form:
$\displaystyle
j^{a}(z)j^{b}(0)\sim\frac{1}{z^{2}}\delta^{ab}+\frac{1}{z}{i\mkern
1.0mu}\tilde{f}^{ab}_{c}j^{c}(0)$ (80)
It then follows from crossing symmetry that the structure constants
$\tilde{f}^{ab}_{c}$ satisfy the Jacobi identity. This means that the zero
modes of $j^{a}(z)$ form a (real) Lie algebra $\mathfrak{g}$. The two-point
function is a positive-definite bi-invariant (i.e., invariant under both left
and right action) metric on this Lie algebra, and the three-point functions
$\tilde{f}^{abc}$ are completely antisymmetric thanks to Bose symmetry. A
positive-definite metric and completely antisymmetric structure constants mean
that $\mathfrak{g}$ is constrained to be a direct sum of simple algebras and
$\mathfrak{u}(1)$’s, see for instance [35]. Subsequently, the $j^{a}(z)$ form
a direct sum of affine Lie algebras and Heisenberg algebras.
In what follows, we will constrain the list of algebras that can constitute
$\mathfrak{g}$ for each of the $\tilde{Z}_{i}$’s and for $i=3-6$ conclude that
this list is empty. Our tools are going to be crossing symmetry (i.e., the
conformal bootstrap) and some representation theory. For $Z_{1,2}$ we will
find that the spectrum falls into representations of the
$\left(\mathfrak{u}(1)_{4}\right)^{3}$ chiral algebra, thus leading to no
apparent contradiction. This does not necessarily mean $Z_{1,2}$ are genuine,
but this is a possibility. To illustrate this point, toward the end of this
section, we consider a $c=2$ code theory with the spectrum falling into
representations of $\mathfrak{su}(2)_{1}\oplus\mathfrak{su}(2)_{1}$.
### 3.1 Polynomials 5 and 6
Since $\mathfrak{g}=\bigoplus_{i=1}^{s}\mathfrak{g}_{i}$ is a direct sum of
simple and Abelian Lie algebras, we will rescale the $j^{a}$’s from each
individual simple $\mathfrak{g}_{i}$ (we keep the Abelian generators as they
are) to their conventional normalisation:
$\displaystyle\mathfrak{k}^{ab}=f^{ap}_{q}f^{bq}_{p}=2g_{i}^{\vee}\delta^{ab},\
\text{for }a,b\in\mathfrak{g}_{i}$ (81)
where $g_{i}^{\vee}$ is the dual Coxeter number of $\mathfrak{g}_{i}$ (see
Table 1). The OPE is then:
$\displaystyle
j^{a}(z)j^{b}(0)\sim\frac{1}{z^{2}}\kappa^{ab}+\frac{1}{z}{i\mkern
1.0mu}f^{ab}_{c}j^{c}(0)$ (82)
Here $\kappa^{ab}$ is a direct sum of matrices of the form $k_{i}\mathbbm{1}$,
which tell us the levels of each of the simple factors
($k=1$ for the Abelian factors as per our normalisation). The levels of the
simple factors are positive integers thanks to unitarity. Moreover, the
stress-energy tensor breaks up into two commuting factors:
$\displaystyle T$ $\displaystyle=T_{\text{sug}}+T_{\text{rest}}$ (83)
where $T_{\text{sug}}$ is given by the Sugawara construction. This also
means that (again, by unitarity):
$\displaystyle c$
$\displaystyle=c_{\text{rest}}+\sum_{i=1}^{s}c_{i}\geq\sum_{i=1}^{s}c_{i}$
$\displaystyle c_{i}$
$\displaystyle=c_{sug}(\mathfrak{g}_{i},k_{i})=\frac{k_{i}\text{ dim}\
\mathfrak{g}_{i}}{k_{i}+g_{i}^{\vee}}$ (84)
Now consider:
$\displaystyle Z_{5}\left(\tau,\bar{\tau}\right)$
$\displaystyle=\frac{1}{\left(\eta\left(q\right)\bar{\eta}\left(\bar{q}\right)\right)^{3}}\left(1+2\left(q\bar{q}\right)^{\frac{1}{8}}+4\left(q\bar{q}\right)^{\frac{1}{4}}+24\left(q\bar{q}\right)^{\frac{3}{8}}+8q+8\bar{q}+...\right)$
(85)
We can see that there are eight extra conserved currents, on top of $3$ ($+\
\bar{3}$) that come from the $\eta$ terms in the denominator. Together, they
must give rise to an affine algebra $\mathfrak{g}_{k}$ with:
$\displaystyle\sum_{i=1}^{s}\frac{k_{i}\text{ dim}\
\mathfrak{g}_{i}}{k_{i}+g_{i}^{\vee}}\leq 3$ (86)
and dim $\mathfrak{g}=11$. We will now use the inequality (86) to narrow down
possible options for $\mathfrak{g}_{i}$.
$\mathfrak{g}$ | dim $\mathfrak{g}$ | rank $\mathfrak{g}$ | $g^{\vee}$
---|---|---|---
$\mathfrak{u}(1)$ | $1$ | $1$ | $0$
$\mathfrak{su}(N)$ | $N^{2}-1$ | $N-1$ | $N$
$\mathfrak{so}(2N+1)$ | $2N^{2}+N$ | $N$ | $2N-1$
$\mathfrak{sp}(N)$ | $2N^{2}+N$ | $N$ | $N+1$
$\mathfrak{so}(2N)$ | $2N^{2}-N$ | $N$ | $2N-2$
$E_{6}$ | $78$ | $6$ | $12$
$E_{7}$ | $133$ | $7$ | $18$
$E_{8}$ | $248$ | $8$ | $30$
$F_{4}$ | $52$ | $4$ | $9$
$G_{2}$ | $14$ | $2$ | $4$
Table 1: A list of various simple algebras and their properties (borrowed from
[36]).
For each of the factors:
$\displaystyle\text{rank }\mathfrak{g}_{i}$ $\displaystyle\leq
c_{i}\leq\text{dim }\mathfrak{g}_{i}\leq 11$ (87)
A quick look at Table 1 then tells us that each simple factor must be one of
the following: $\mathfrak{su}(2)\simeq\mathfrak{sp}(1)\simeq\mathfrak{so}(3)$,
$\mathfrak{su}(3)$, $\mathfrak{sp}(2)\simeq\mathfrak{so}(5)$, and
$\mathfrak{so}(4)$. We have not yet constrained the possible values of the
levels, and list the corresponding central charges as a function of $k$ in Table 2.
$\mathfrak{g}$ | dim $\mathfrak{g}$ | $c_{sug}(\mathfrak{g},k)$
---|---|---
$\mathfrak{u}(1)$ | $1$ | $1$
$\mathfrak{su}(2)_{k}$ | $3$ | $\frac{3k}{k+2}$
$\mathfrak{su}(3)_{k}$ | $8$ | $\frac{8k}{k+3}$
$\mathfrak{so}(4)_{k}$ | $6$ | $\frac{6k}{k+2}$
$\mathfrak{so}(5)_{k}$ | $10$ | $\frac{10k}{k+3}$
Table 2: The central charges of the relevant simple algebras (and
$\mathfrak{u}(1)$) as a function of the level.
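The search over the algebras of Table 2 can be automated. The sketch below (our illustration, not the authors' procedure) scans direct sums with eleven generators in total and Sugawara central charge at most $3$; levels are scanned up to $k=5$, which is safe because $c$ grows with $k$.

```python
# Brute-force search for direct sums with dim g = 11 and c_sug <= 3.
# (name, dimension, central charge as a function of the level k)
ALGEBRAS = [("u1", 1, lambda k: 1.0),
            ("su2", 3, lambda k: 3 * k / (k + 2)),
            ("su3", 8, lambda k: 8 * k / (k + 3)),
            ("so4", 6, lambda k: 6 * k / (k + 2)),
            ("so5", 10, lambda k: 10 * k / (k + 3))]

# u(1) carries no level, so keep a single copy of it.
candidates = [(name, k, dim, c(k)) for name, dim, c in ALGEBRAS
              for k in range(1, 6) if not (name == "u1" and k > 1)]

def search(dim_left, c_left, start=0, chosen=()):
    """Recursively assemble direct sums with the required total dimension."""
    if dim_left == 0:
        yield tuple(sorted(chosen))
        return
    for i in range(start, len(candidates)):
        name, k, dim, c = candidates[i]
        if dim <= dim_left and c <= c_left + 1e-9:
            yield from search(dim_left - dim, c_left - c, i,
                              chosen + ((name, k),))

solutions = sorted(set(search(11, 3.0)))
print(solutions)  # [(('su2', 1), ('su3', 1))] -- the unique option
```

This reproduces the conclusion stated next: $\mathfrak{su}(3)_{1}\oplus\mathfrak{su}(2)_{1}$ is the only combination, and it saturates the central charge bound.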
We must now find a combination with eleven generators and total central charge
$\leq 3$ – the only possible choice (up to isomorphisms) is
$\mathfrak{su}(3)_{1}\oplus\mathfrak{su}(2)_{1}$. It also saturates the
central charge bound and gives
$\displaystyle T$ $\displaystyle=T_{\text{sug}}$ (88)
This means that the group action (i.e., the global charges) completely fixes
the conformal dimensions. Also, the partition function must be of the form:
$\displaystyle Z_{5}\left(\tau,\bar{\tau}\right)$
$\displaystyle=\left|\chi^{\mathfrak{su}(3)_{1}}_{0}\chi^{\mathfrak{su}(2)_{1}}_{0}\right|^{2}+...$
(89)
The first term accounts for the spin-1 currents and their descendants; however,
$\displaystyle\chi^{\mathfrak{su}(3)_{1}}_{0}\chi^{\mathfrak{su}(2)_{1}}_{0}$
$\displaystyle=\frac{1}{\left(\eta\left(q\right)\right)^{3}}\left(1+8q+12q^{2}+...\right),$
$\displaystyle Z_{5}\left(\tau,-{i\mkern 1.0mu}\infty\right)$
$\displaystyle=\frac{1}{\left(\eta\left(q\right)\right)^{3}}\left(1+8q+6q^{2}+...\right),$
(90)
meaning that there are not enough chiral operators in $Z_{5}$ to fill out
relevant representation of $\mathfrak{su}(3)_{1}\oplus\mathfrak{su}(2)_{1}$.
We thus have our contradiction.
The chiral spectrum of $Z_{6}$ is the same as that of $Z_{5}$ and hence the
argument to rule it out is the same.
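The contradiction above boils down to comparing two $q$-expansions coefficient by coefficient; a minimal helper (our sketch, with the coefficients copied from (90)) makes the mismatch explicit:

```python
# Locate the first order at which two q-expansions disagree.
def first_mismatch(xs, ys):
    """Return the first index where the coefficient lists differ, else None."""
    for n, (x, y) in enumerate(zip(xs, ys)):
        if x != y:
            return n
    return None

vacuum_character = [1, 8, 12]   # su(3)_1 x su(2)_1 vacuum module, from (90)
chiral_Z5        = [1, 8, 6]    # chiral limit of Z_5, from (90)
print(first_mismatch(vacuum_character, chiral_Z5))  # 2
```

The deficit at order $q^{2}$ is exactly the "not enough chiral operators" statement used in the proof.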
### 3.2 Polynomials 3 and 4
We start by writing the impostor partition functions:
$\displaystyle Z_{3}\left(\tau,\bar{\tau}\right)$
$\displaystyle=\frac{1}{\left(\eta\left(q\right)\bar{\eta}\left(\bar{q}\right)\right)^{3}}\left(1+4\left(q\bar{q}\right)^{\frac{1}{8}}+8\left(q\bar{q}\right)^{\frac{1}{4}}+16\left(q\bar{q}\right)^{\frac{3}{8}}+4q+4\bar{q}+...\right)$
$\displaystyle Z_{4}\left(\tau,\bar{\tau}\right)$
$\displaystyle=\frac{1}{\left(\eta\left(q\right)\bar{\eta}\left(\bar{q}\right)\right)^{3}}\left(1+8\left(q\bar{q}\right)^{\frac{1}{4}}+16q^{\frac{9}{8}}\bar{q}^{\frac{1}{8}}+16q^{\frac{1}{8}}\bar{q}^{\frac{9}{8}}+4q+4\bar{q}+...\right)$
(91)
It may be verified that the chiral spectra of these functions are identical.
In both cases, there are seven spin-$1$ currents, and we can use the arguments
of the previous subsection to conclude that the relevant Kac-Moody algebra is
either $\mathfrak{su}(2)_{1}\oplus\mathfrak{su}(2)_{1}\oplus\mathfrak{u}(1)$
or $\mathfrak{so}(4)_{1}\oplus\mathfrak{u}(1)$ – we can restrict our attention
to the first case. This time though, the chiral spectrum has more than enough
states to support the relevant representations of the algebra:
$\displaystyle Z_{3,4}\left(\tau,-{i\mkern
1.0mu}\infty\right)-\frac{1}{\eta\left(q\right)}\left(\chi^{\mathfrak{su}(2)_{1}}_{0}\right)^{2}=\frac{1}{\eta\left(q\right)}\left(2q^{2}+2q^{8}+2q^{18}+...\right)\left(\chi^{\mathfrak{su}(2)_{1}}_{0}\right)^{2}.$
(92)
As in the previous section, $c_{sug}=c=3$ and we get
$\displaystyle T$ $\displaystyle=T_{\text{sug}}.$ (93)
This means that the conformal dimensions are fixed by the global symmetry
charges. The representations of
$\mathfrak{su}(2)\oplus\mathfrak{su}(2)\oplus\mathfrak{u}(1)$ are
characterized by two spins and a charge, and we have:
$\displaystyle h_{(l_{1},l_{2},\mathcal{Q})}$
$\displaystyle=\frac{1}{3}\left(l_{1}(l_{1}+1)+l_{2}(l_{2}+1)\right)+2\gamma\mathcal{Q}^{2}$
(94)
where $\gamma$ is defined by
$T_{\text{sug}}^{\mathfrak{u}(1)}=2\gamma\,{:}jj{:}$, and absorbs the
ambiguity from the normalisation of the $\mathfrak{u}(1)$ charge. We set
${\gamma}\rightarrow 1$ and normalise the charges accordingly.
We now return to the non-chiral part of the spectrum (see (91)). For $Z_{3}$,
this is led by an operator with dimensions
$\left(\frac{1}{8},\frac{1}{8}\right)$. The relevant representation of the
$\mathfrak{su}(2)_{1}\oplus\mathfrak{su}(2)_{1}\oplus\mathfrak{u}(1)$ has
$\left(l_{1},l_{2},\mathcal{Q}\right)=\left(\overline{l}_{1},\overline{l}_{2},\overline{\mathcal{Q}}\right)=\left(0,0,\frac{1}{4}\right)$.
It is easy to verify that $Z_{3}$ doesn’t have enough states to fill in this
representation.
The argument in the case of $Z_{4}$ is similar. The massive vector with
dimensions $\left(\tfrac{9}{8},\tfrac{1}{8}\right)$ has the lowest right-spin;
hence, this operator must be a primary. This state can occur at the head of
representations with
$\left(\overline{l}_{1},\overline{l}_{2},\overline{\mathcal{Q}}\right)=\left(0,0,\tfrac{1}{4}\right)$
and $\left(l_{1},l_{2},\mathcal{Q}\right)=\left(0,0,\tfrac{3}{4}\right)$,
$\left(0,\tfrac{1}{2},\tfrac{\sqrt{7}}{4}\right)$, or
$\left(\tfrac{1}{2},\tfrac{1}{2},\tfrac{\sqrt{5}}{4}\right)$. It is easy to
verify that $Z_{4}$ doesn’t have enough states to fill out any of these
representations.
### 3.3 Polynomials 1 and 2
$Z_{1,2}$ come with extra spin-2 currents. The chiral spectrum is the same as
that of the $\left(\mathfrak{u}(1)_{4}\right)^{3}$ chiral algebra. But this
time, we find that the rest of the spectrum also falls neatly into
$\left(\mathfrak{u}(1)_{4}\right)^{3}$ representations:
$\displaystyle Z_{1}$
$\displaystyle=\left(\left|\chi_{0}\right|^{2}+\left|\chi_{2}\right|^{2}+2\left|\chi_{1}\right|^{2}\right)^{3}-2\left|\chi_{1}\right|^{2}\left(\chi_{0}^{2}-\chi_{2}^{2}\right)\overline{\left(\chi_{0}^{2}-\chi_{2}^{2}\right)}$
$\displaystyle Z_{2}$
$\displaystyle=\left(\left|\chi_{0}\right|^{2}+\left|\chi_{2}\right|^{2}+2\left|\chi_{1}\right|^{2}\right)^{3}-4\left|\chi_{1}\right|^{2}\left(\chi_{0}^{2}-\chi_{2}^{2}\right)\overline{\left(\chi_{0}^{2}-\chi_{2}^{2}\right)}$
(95)
where
$\displaystyle\chi_{t}(\tau)$
$\displaystyle=\frac{1}{\eta(\tau)}\sum_{r\in\mathbbm{Z}}q^{2\left(r+\tfrac{t}{4}\right)^{2}}$
(96)
are $\mathfrak{u}(1)_{4}$ characters, and are related to the Jacobi theta
functions:
$\displaystyle\chi_{0}(q)$
$\displaystyle=\frac{1}{2\eta(\tau)}(\theta_{3}(q)+\theta_{4}(q))$
$\displaystyle\chi_{1}(q)=\chi_{3}(q)$
$\displaystyle=\frac{1}{2\eta(\tau)}\theta_{2}(q)$ $\displaystyle\chi_{2}(q)$
$\displaystyle=\frac{1}{2\eta(\tau)}(\theta_{3}(q)-\theta_{4}(q))$ (97)
Note that expanding the brackets in (95) produces a sum of
$\mathfrak{u}(1)_{4}$ characters with positive-integer coefficients. Hence,
the methods from before do not work in this case and our analysis is
inconclusive. But the results of the previous section mean that if $Z_{1,2}$
are indeed CFTs with the $(\mathfrak{u}(1)_{4})^{3}$ chiral algebra, they must
be non-Narain theories. This would be interesting since it is widely assumed
that compact CFTs with the $\mathfrak{u}(1)^{n}\times\mathfrak{u}(1)^{n}$
symmetry are Narain.
### 3.4 A genuine CFT example
Finally, we consider an actual $c=2$ code CFT to illustrate how it passes the
consistency checks discussed in this section. Using the construction of [22],
up to T-duality there is a unique indecomposable $c=2$ code with the
polynomial:
$\displaystyle W$ $\displaystyle=x^{2}+y^{2}+2z^{2}$
leading to the code CFT with the partition function:
$\displaystyle\tilde{Z}$
$\displaystyle=1+4q+4\bar{q}+8(q\bar{q})^{\tfrac{1}{4}}+16(q\bar{q})^{\tfrac{1}{2}}+...$
(98)
Following the line of arguments outlined previously, the chiral algebra must
have $6$ holomorphic spin-$1$ currents with $c\leq 2$ – we have a perfect fit
with $\mathfrak{su}(2)_{1}\oplus\mathfrak{su}(2)_{1}$, with the rest of the
spectrum also fitting into representations of this algebra:
$\displaystyle\tilde{Z}=\left(\left|\chi^{\mathfrak{su}(2)_{1}}_{0}\right|^{2}+\left|\chi^{\mathfrak{su}(2)_{1}}_{1/2}\right|^{2}\right)^{2}$
(99)
where $\chi^{\mathfrak{su}(2)_{1}}_{0,1/2}$ are the characters of the
$\mathfrak{su}(2)_{1}$ representations with highest-weight states of spins $0$
and $1/2$ respectively. Now, we see this is the partition function of two
$R=\sqrt{2}$ scalars – even though the code is indecomposable, the CFT
factorises (see [22]).
## 4 Conclusions
In this paper, we considered six impostor partition functions $Z_{i}$, for
$i=1-6$, obtained from the “fake” enumerator polynomials (4) – polynomials
satisfying all the properties of refined enumerator polynomials of real self-
dual quantum stabilizer codes, yet not being enumerator polynomials of any
code. The $Z_{i}$ are then obtained using the explicit relation between the
enumerator polynomial and the code CFT torus partition function (3), which in
our case serves as an ansatz. We investigated these six $Z_{i}$ and exposed
four of them as fake – they are not torus partition functions of any unitary
2d CFT. Our analysis is inconclusive for the remaining two. We also ruled out
the possibility of any of the six being Narain theories.
We came to these conclusions by analysing the chiral algebra of the underlying
CFTs, assuming that the $Z_{i}$ were genuine. The smallness of the central
charge $c=3$, together with the presence of conserved currents, imposed strict
limitations on the possible chiral algebras, and in each case we could reduce
the choice to a handful of scenarios. Then, for each such scenario, we noticed
that the number of states of a particular dimension does not conform to the
size of the representations of the would-be chiral algebra, leading to a
contradiction. Our analysis is based on the specifics of each case.
Ruling out $Z_{i}$ as the partition functions of Narain theories can be done
separately, using the linearity of OPE coefficients. In this case, all the
primary states of $\mathfrak{u}(1)^{n}\times\mathfrak{u}(1)^{n}$ algebra form
a Narain lattice, hence a sum of two states (in the sense of the
$\mathfrak{u}(1)^{n}\times\mathfrak{u}(1)^{n}$ charges) must be a primary
state as well. Starting from the zero-spin states of small dimension, we can
construct a zero-spin state of higher dimension, only to notice that such a
state is not present in the spectrum of the would-be CFT. The analysis of the
Narain case can be made more systematic. One can find all Euclidean lattices
which reproduce the “Euclidean” spectrum of $Z_{i}$ with the purely imaginary
$\tau$. This list is short: there are no such lattices for $Z_{1},Z_{2}$,
immediately ruling them out, and there is a unique lattice for each $Z_{i}$
with $i=\overline{3,6}$. In the latter case, an analysis of the would-be
Lorentzian structure is necessary to show that these Euclidean lattices
equipped with the Lorentzian inner product of the most general form cannot
reproduce $Z_{i}(\tau,\bar{\tau})$ for complex $\tau$.
Our analysis for $Z_{1,2}$ is incomplete; we could not rule them out as CFT
partition functions or find appropriate theories matching the spectrum. It
would be interesting to complete this task as a first step towards answering a
more general question – whether a $Z$ obtained from a fake polynomial can ever
be genuine, i.e. be a partition function of an actual CFT. CFTs are vastly
richer and more complicated objects than codes, and hence there are many more
self-consistency conditions for $Z$. Provided fake $W$'s always lead to fake
$Z$'s, there is therefore hope that these conditions can be used to analyse
enumerator polynomials and identify the fake ones. The latter task is an open
problem in coding theory, and is necessary to answer the question of whether
an optimal code of a particular length is also extremal. An attempt to make a
step in this direction and to use the CFT consistency conditions coming from
the higher genus partition functions was recently undertaken in [24, 25].
Our results confirm that not all solutions of the torus modular bootstrap
constraints are genuine CFTs. These are not the very first examples of this
kind: there are known constructions built out of chiral models [37],
essentially providing examples of fake $Z$'s with one character. There is also
an infinite class of candidate partition functions for rational CFTs with two
characters [33], many of which are expected to be fake. Our examples involve a
handful of characters and also belong to an infinite discrete family, yielding
millions of possibly fake $Z$'s already for small central charge $c\leq 10$
[23]. In fact, the construction based on fake polynomials can be generalized
to produce continuous families of impostor $Z$'s; we construct the simplest
examples with $c=3$ in Appendix C. Thus, our results emphasize the point that
fake $Z$'s which solve the modular bootstrap constraints – yet are not
partition functions of any CFT – can be complicated, involve many characters,
and require an extremely elaborate analysis to rule out as fake. This picture
is important to keep in mind because it contrasts with the success of the
modular bootstrap in studying particular theories of interest. Perhaps, to
understand why no fake $Z$ has been found numerically by solving a particular
set of bootstrap constraints, one should address the following question – can
a $Z$, understood as a solution of the modular bootstrap sitting at the
boundary of an exclusion plot, ever be fake? Our experience so far suggests
that the answer is negative, but the underlying reason is unclear.
## 5 Acknowledgements
We thank Ofer Aharony for collaboration at the early stages of this project,
and for comments on the draft. We also thank Felix Jonas and the Naama Barkai
lab for help with computing, and Hiromi Ebisu, Masataka Watanabe, Ohad
Mamroud, Adam Schwimmer, Shaul Zemel, Micha Berkooz, Erez Urbach, Adar Sharon,
and Shai Chester for useful discussions. AD is grateful to Weizmann Institute
of Science for hospitality and acknowledges sabbatical support of the
Schwartz/Reisman Institute for Theoretical Physics. AD was supported by the
National Science Foundation under Grant No. PHY-2013812. The work of RRK was
supported in part by an Israel Science Foundation (ISF) center for excellence
grant (grant number 2289/18), by ISF grant no. 2159/22, by Simons Foundation
grant 994296 (Simons Collaboration on Confinement and QCD Strings), by grant
no. 2018068 from the United States-Israel Binational Science Foundation (BSF),
by the Minerva foundation with funding from the Federal German Ministry for
Education and Research, by the German Research Foundation through a German-
Israeli Project Cooperation (DIP) grant “Holography and the Swampland”, and by
a research grant from Martin Eisenstein.
## Appendix A Elementary proof for other $Z_{i}$
### A.1 $Z_{1}$
$\displaystyle\tilde{Z}_{1}((\varrho t)^{8},(\varrho t^{-1})^{8})$
$\displaystyle=1+4\varrho^{2}+12\varrho^{4}+8\varrho^{6}+\dots+24\left(t^{8}+t^{-8}\right)\varrho^{14}+\dots$
(100)
We can proceed exactly as in the $\tilde{Z}_{3}$ case. We need at least two
basis elements of sq-length 2, $a_{1,2}$; these satisfy $a_{1}.a_{2}=0$
(see (19)). This means that $\pm(a_{1}\pm a_{2})$ are the only linear
combinations of $a_{1,2}$ with sq-length 4, and the remaining 8 such vectors
are linearly independent of $a_{1,2}$. We select one of these as a basis
element and call it $b_{1}$. (21) tells us that $a_{1,2}.b_{1}\in\\{0,\pm
1\\}$, and we find that either choice leads us to a contradiction (see Fig. 1).
### A.2 $Z_{2}$
$\displaystyle\tilde{Z}_{2}((\varrho t)^{8},(\varrho t^{-1})^{8})$
$\displaystyle=1+2\varrho^{2}+12\varrho^{4}+8\varrho^{6}+12\varrho^{8}+\dots+24\left(t^{8}+t^{-8}\right)\varrho^{14}+\dots$
(101)
We select one of the sq-length 2 vectors, $a_{1}$, as a basis element. All
sq-length 4 vectors with the exception of $\pm 2a_{1}$ are independent of
$a_{1}$ – we can therefore pick a new basis element $b_{1}$ of sq-length 4;
(21) gives us $a_{1}.b_{1}\in\\{0,\pm 1\\}$ and we must choose $a_{1}.b_{1}=0$
to avoid (24). Since no linear combination of $a_{1}$ and $b_{1}$ has sq-
length 4, we must have another independent $b_{2}$ (with $a_{1}.b_{2}=0$). We
run the polarisation identity again:
$\displaystyle||b_{1}+b_{2}||^{2}+||b_{1}-b_{2}||^{2}$
$\displaystyle=2(||b_{1}||^{2}+||b_{2}||^{2})$ $\displaystyle=16$
$\displaystyle=4+12=6+10=8+8=10+6=12+4$ (102)
to conclude that $b_{1}.b_{2}\in\\{0,\pm 1,\pm 2\\}$. If $b_{1}.b_{2}=\pm 2$,
$\displaystyle\boxed{\begin{array}[]{rl}||a_{1}+b_{1,2}||^{2}=6&\Rightarrow\langle
a_{1},b_{1,2}\rangle=0\\\ ||b_{1}\mp b_{2}||^{2}=4&\Rightarrow\langle
b_{1},b_{2}\rangle=0\\\ ||a_{1}+2b_{1}\mp b_{2}||^{2}=14&\Rightarrow\mp
2\langle b_{1},b_{2}\rangle+2\langle a_{1},b_{1}\rangle\mp\langle
a_{1},b_{2}\rangle\neq 0\end{array}}$ (106)
If $b_{1}.b_{2}=\pm 1$,
$\displaystyle\boxed{\begin{array}[]{rl}||a_{1}+b_{1,2}||^{2}=6&\Rightarrow\langle
a_{1},b_{1,2}\rangle=0\\\ ||b_{1}\mp b_{2}||^{2}=6&\Rightarrow\langle
b_{1},b_{2}\rangle=0\\\ ||2a_{1}+b_{1}\mp
b_{2}||^{2}=14&\Rightarrow\mp\langle b_{1},b_{2}\rangle+2\langle
a_{1},b_{1}\rangle\mp 2\langle a_{1},b_{2}\rangle\neq 0\end{array}}$ (110)
So $b_{1}.b_{2}=0$. Since $(a_{1},b_{1},b_{2})$ are orthogonal, there are no
non-trivial linear combinations with sq-length $\leq 4$ and we can find yet
another independent $b_{3}$ ($a_{1}.b_{3}=0$, of course). We can recycle the
arguments above to conclude that $(a_{1},b_{1},b_{2},b_{3})$ are orthogonal.
But then there must be at least 12 vectors with sq-length 6 ($\pm a_{1}\pm
b_{1,2,3}$); this conflicts with the information presented in (101).
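The bookkeeping behind steps like (102) can be automated: given the sq-lengths of two vectors and the set of sq-lengths the lattice actually realises, the polarisation identity fixes the possible inner products. A minimal sketch (the function name and interface are ours, not from the text):

```python
def allowed_inner_products(n1, n2, realised_sq_lengths):
    """Inner products x = u.v compatible with the polarisation identity
    ||u+v||^2 + ||u-v||^2 = 2(n1 + n2), given that ||u+v||^2 and
    ||u-v||^2 must both be sq-lengths realised by the lattice."""
    s = n1 + n2  # the two norms are s + 2x and s - 2x
    return {x for x in range(-s, s + 1)
            if s + 2 * x in realised_sq_lengths
            and s - 2 * x in realised_sq_lengths}

# the case of (102): ||b_1||^2 = ||b_2||^2 = 4, with sums and
# differences forced into the shells 4, 6, 8, 10, 12
print(sorted(allowed_inner_products(4, 4, {4, 6, 8, 10, 12})))
# → [-2, -1, 0, 1, 2]
```

The same helper reproduces, e.g., $a_{1}.c_{1}\in\\{0,\pm 1\\}$ from (112) by calling it with norms 2 and 6 and the shells $\\{6,8,10\\}$.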
### A.3 $Z_{5}$
$\displaystyle\tilde{Z}_{5}((\varrho t)^{8},(\varrho t^{-1})^{8})$
$\displaystyle=1+2\varrho^{2}+4\varrho^{4}+24\varrho^{6}+\left(8\left(t^{8}+t^{-8}\right)+28\right)\varrho^{8}+\dots+72\left(t^{8}+t^{-8}\right)\varrho^{14}+\dots$
$\displaystyle\qquad+(96(t^{24}+t^{-24})+216(t^{8}+t^{-8}))\varrho^{30}+\dots$
(111)
Once again, the only vectors with sq-length 2 are $\pm a_{1}$. At sq-length 4,
we have to bring in a new basis element $b_{1}$. We have to have
$a_{1}.b_{1}=0$ to avoid (24); this tells us that the other sq-length 4
vectors are independent of $a_{1},b_{1}$ – let’s call them $\pm b_{2}$. Again,
$a_{1}.b_{2}=0$, and running the polarisation identity on $b_{1,2}$ tells us
that $b_{1}.b_{2}\in\\{0,\pm 1\\}$ (this time, $b_{1}.b_{2}\neq\pm 2$ because
we are out of sq-length 4 vectors). If we pick the latter, we run into
(110).
Thus, $(a_{1},b_{1},b_{2})$ are orthogonal. We then have an independent
$c_{1}$ with sq-length 6, and:
$\displaystyle||a_{1}+c_{1}||^{2}+||a_{1}-c_{1}||^{2}$
$\displaystyle=2(||a_{1}||^{2}+||c_{1}||^{2})$ $\displaystyle=16$
$\displaystyle=6+10=8+8=10+6$ (112)
These correspond to $a_{1}.c_{1}\in\\{0,\pm 1\\}$. Let’s try out the second
option:
$\displaystyle\boxed{\begin{array}[]{rl}||a_{1}\mp
c_{1}||^{2}=6&\Rightarrow\langle a_{1},c_{1}\rangle=0\\\ ||a_{1}\pm
2c_{1}||^{2}=30&\Rightarrow\langle a_{1},c_{1}\rangle\neq 0\end{array}}$ (115)
Hence, we must have $a_{1}.c_{1}=0$. But then
$\displaystyle\boxed{\begin{array}[]{rl}||2a_{1}+c_{1}||^{2}=14&\Rightarrow
2\langle a_{1},c_{1}\rangle=\pm 4\\\ ||a_{1}+c_{1}||^{2}=8&\Rightarrow
2\langle a_{1},c_{1}\rangle\in\\{0,\pm 8\\}\end{array}}$ (118)
Figure 2: The outline of the argument in the case of $\tilde{Z}_{5}$. The
coloured boxes indicate contradictions.
### A.4 $Z_{4}$
$\displaystyle\tilde{Z}_{4}((\varrho t)^{8},(\varrho t^{-1})^{8})$
$\displaystyle=1+8\varrho^{4}+16\varrho^{6}+\left(4(t^{8}+t^{-8})+20\right)\varrho^{8}+\left(16(t^{8}+t^{-8})+32\right)\varrho^{10}+\dots$
$\displaystyle\qquad+48\left(t^{8}+t^{-8}\right)\varrho^{14}+\dots+\left(48\left(t^{40}+t^{-40}\right)+192\left(t^{8}+t^{-8}\right)+288\left(t^{24}+t^{-24}\right)\right)\varrho^{46}$
$\displaystyle\qquad+\dots+\left(144\left(t^{24}+t^{-24}\right)+144\left(t^{56}+t^{-56}\right)+288\left(t^{40}+t^{-40}\right)+384\left(t^{8}+t^{-8}\right)\right)\varrho^{62}$
$\displaystyle\qquad\qquad\qquad+\dots$ (119)
Our first step is to show that the sublattice generated by sq-length 4 vectors
is $\mathcal{M}_{4}\simeq(2\mathbbm{Z})^{4}$ ($\simeq$ in the Euclidean
sense). We start by running the polarisation identity on two linearly
independent sq-length 4 vectors $b,b^{\prime}$:
$\displaystyle||b+b^{\prime}||^{2}+||b-b^{\prime}||^{2}$
$\displaystyle=2(||b||^{2}+||b^{\prime}||^{2})$ $\displaystyle=16$
$\displaystyle=4+12=6+10=8+8=10+6=12+4$ (120)
so $b.b^{\prime}\in\\{0,\pm 1,\pm 2\\}$. If $b.b^{\prime}=\pm 1$, we run into:
$\displaystyle\boxed{\begin{array}[]{rl}||b\mp
b^{\prime}||^{2}=6&\Rightarrow\langle b,b^{\prime}\rangle=0\\\ ||3b\mp
b^{\prime}||^{2}=46&\Rightarrow\langle b,b^{\prime}\rangle\neq 0\end{array}}$
(123)
and so $b.b^{\prime}\in\\{0,\pm 2\\}$; this means that
$\frac{1}{\sqrt{2}}\mathcal{M}_{4}$ is an integral lattice generated by sq-
length 2 vectors, and hence must be a direct sum of the root lattices –
$A_{m}$, $D_{m}$, $E_{m}$, and $\mathbbm{Z}^{m}$. Since there are no $E_{m}$
lattices at dimensions below six, we can disregard them immediately; the
$D_{m}$’s have too many vectors with sq-length 2 (aka the kissing number); and
the $\mathbbm{Z}^{m}$’s have norm 1 vectors. Therefore, we can restrict our
attention to the $A_{m}$’s. Kissing numbers add under $\oplus$ and hence the
only option with the correct kissing number and dimension is
$A_{1}^{4}\simeq(\sqrt{2}\mathbbm{Z})^{4}$ – thus,
$\mathcal{M}_{4}\simeq(2\mathbbm{Z})^{4}$.
Lattice | Kissing number | min. sq-length
---|---|---
$A_{m}$ | $m(m+1)$ | 2
$D_{m\geq 3}$ | $2m(m-1)$ | 2
$\mathbbm{Z}^{m}$ | $2m$ | 1
Table 3: A list of root lattices and their relevant properties. The $m$ that
appears in the name of the lattice indicates the dimension. ‘min. sq-length’
gives the sq-length of the shortest basis vector. The number of shortest
vectors is the kissing number.
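The identification $\mathcal{M}_{4}\simeq(2\mathbbm{Z})^{4}$ can be checked against the shell counts in (119) by brute-force enumeration, assuming, as elsewhere in this appendix, that the coefficient of $\varrho^{k}$ counts vectors of sq-length $k$. A small sketch (function name and truncation box are ours):

```python
from itertools import product

def shell_count(scale, dim, sq_length, box=5):
    """Number of vectors of the lattice (scale * Z)^dim with the given
    sq-length, found by enumerating points inside a finite box."""
    return sum(1 for r in product(range(-box, box + 1), repeat=dim)
               if sum((scale * ri) ** 2 for ri in r) == sq_length)

# (2Z)^4: eight vectors of sq-length 4 (the vectors +/- 2 e_i), matching
# the 8 rho^4 term of (119); and no vectors of sq-length 6, since every
# norm in (2Z)^4 is a multiple of 4 -- so sq-length 6 vectors must be new.
print(shell_count(2, 4, 4), shell_count(2, 4, 6))  # → 8 0
```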
Next we show that any vector with sq-length 6 is orthogonal to
$\mathcal{M}_{4}$. Let’s run the polarisation argument between two vectors $b$
and $c$ of sq-lengths 4 and 6 respectively (note that $||b\pm c||^{2}\neq 4$,
because $b\pm c\in\mathcal{M}_{4}\Rightarrow c\in\mathcal{M}_{4}$, and this
would mean that $(2\mathbbm{Z})^{4}$ has a vector of sq-length 6):
$\displaystyle||b+c||^{2}+||b-c||^{2}$ $\displaystyle=2(||b||^{2}+||c||^{2})$
$\displaystyle=20$ $\displaystyle=6+14=8+12=10+10=12+8=14+6$ $\displaystyle
b.c$ $\displaystyle\in\\{0,\pm 1,\pm 2\\}$ (124)
$b.c=\pm 2$ is ruled out via:
$\displaystyle\boxed{\begin{array}[]{rl}||b\mp c||^{2}=6&\Rightarrow\langle
b,c\rangle=0\\\ ||2b\mp c||^{2}=14&\Rightarrow\langle b,c\rangle\neq 0\\\
\end{array}}$ (127)
$b.c=\pm 1$ may be ruled out too, but with a bit more labour. Firstly, this
means that $b\mp c$ is of sq-length 8, but cannot be one of the scalars since:
$\displaystyle\boxed{\begin{array}[]{rl}\langle\langle b\mp
c\rangle\rangle^{2}=0&\Rightarrow\langle b,c\rangle=0\\\ ||4b\mp
c||^{2}=62&\Rightarrow\langle b,c\rangle\neq 0\end{array}}$ (130)
Thus, we must have $2\langle b,c\rangle=\pm 8$ (again, since all the spinning
$\Delta=1$ operators are spin-1). But then,
$\displaystyle\boxed{\begin{array}[]{rl}||4b\mp c||^{2}&=62\\\ \langle\langle
4b\mp c\rangle\rangle^{2}&=\pm 32\\\ \end{array}}$ (133)
which is a contradiction since there are no spin-4 operators with
$\Delta=\frac{31}{4}$. Hence, $b.c=0$, and we have our result.
Since there are 16 vectors of sq-length 6, we must have two basis elements of
sq-length 6 (we can’t have more since we already have 4 basis elements with
sq-length 4 and the lattice is six dimensional). This means that the
sublattice generated by vectors of sq-length 6 is two dimensional and has
kissing number 16. The known bound in two dimensions is 6 (and is attained by
the $A_{2}$ lattice; see [38]), and hence this is a contradiction.
Alternatively, $||b+c||^{2}=10$ for any $b,c$ of sq-lengths 4 and 6
respectively, and this gives us a minimum of 128 vectors of sq-length 10 – a
quick look at (119) tells us that we only have 64.
### A.5 $Z_{6}$
$\displaystyle\tilde{Z}_{6}((\varrho t)^{8},(\varrho t^{-1})^{8})$
$\displaystyle=1+4\varrho^{4}+24\varrho^{6}+\left(8(t^{8}+t^{-8})+28\right)\varrho^{8}+\left(8(t^{8}+t^{-8})+16\right)\varrho^{10}+\dots+$
$\displaystyle\qquad
72\left(t^{8}+t^{-8}\right)\varrho^{14}+\dots+\left(72\left(t^{40}+t^{-40}\right)+288\left(t^{8}+t^{-8}\right)+432\left(t^{24}+t^{-24}\right)\right)\varrho^{46}$
$\displaystyle\qquad+\dots+\left(216\left(t^{24}+t^{-24}\right)+216\left(t^{56}+t^{-56}\right)+432\left(t^{40}+t^{-40}\right)+576\left(t^{8}+t^{-8}\right)\right)\varrho^{62}$
$\displaystyle\qquad\qquad\qquad+\dots$ (134)
It is easy to bootstrap the sublattice generated by vectors of sq-length 4
(say $\mathcal{M}_{4}$). There must at least be two independent vectors with
sq-length 4 (let’s call these $b_{1},b_{2}$). The polarisation identity (see
(120)) gives $b_{1}.b_{2}\in\\{0,\pm 1\pm 2\\}$. Now, $b_{1}.b_{2}=\pm 1$ may
be ruled out via (123) and $b_{1}.b_{2}=\pm 2$ produces too many vectors of
sq-length 4; hence, $b_{1}.b_{2}=0$ and
$\mathcal{M}_{4}\simeq(2\mathbbm{Z})^{2}$.
Now, we prove that any vector of sq-length 6 is orthogonal to
$\mathcal{M}_{4}$. For any $b,c$ of sq-length 4 and 6 respectively, (124)
works the same way, and $b.c\neq\pm 2$ thanks to (127). Say $b.c=\pm 1$; this
means that $b\mp c$ is of sq-length 8. We also need $2\langle b,c\rangle=\pm
8$ to avoid (130). As in the $\tilde{Z}_{4}$ case, we can now construct a
spin-4 operator at $\Delta=\frac{31}{4}$:
$\displaystyle\boxed{\begin{array}[]{rl}||4b\mp c||^{2}&=62\\\ \langle\langle
4b\mp c\rangle\rangle^{2}&=\pm 32\\\ \end{array}}$ (137)
This leads us to the conclusion that $b.c=0$; subsequently $||b+c||^{2}=10$,
and we must have at least $24\times 4=96$ vectors of sq-length 10; (134)
immediately cries out in protest.
## Appendix B Constructing theta functions of sublattices
In this appendix, we discuss a trick that lets us compute the (Euclidean)
theta functions of strings of $B$ matrices (as in (32)) exactly. This will
save a lot of time, since computing these by brute force can be very time-
consuming.
We start by noting that given a lattice generated by $M$, the lattice $MB$
where $B$ is an integer-valued matrix with determinant 2 forms an index-2
sublattice of $M$. Furthermore, if $M$ is integral, we have the following
result:
###### Lemma B.1.
Any index-2 sublattice $\mathcal{M}^{\prime}$ of an integral lattice
$\mathcal{M}$ is of the form:
$\displaystyle\mathcal{M}^{\prime}=\partial_{v}\mathcal{M}$
$\displaystyle:=\left\\{l\in\mathcal{M},\ v(l)\in 2\mathbbm{Z}\right\\}$ (138)
for some $v\in\mathcal{M}^{*}/2\mathcal{M}^{*}$; where $\mathcal{M}^{*}$ is
the dual lattice of $\mathcal{M}$. Conversely, $\partial_{v}\mathcal{M}$ is an
index-2 sublattice of $\mathcal{M}$ for any such $v\neq 0$.
In particular, $\mathcal{M}_{k}=\partial_{v_{k}}\mathcal{M}_{k-1}$ (the
lattice generated by $B_{i_{1}}...B_{i_{k}}$) is an index-2 sublattice of
$\mathcal{M}_{k-1}$ (generated by $B_{i_{1}}...B_{i_{k-1}}$). The special
tool that aids us in our quest is a generalised theta function:
$\displaystyle\Theta_{\mathcal{M}}\left(\xi_{1}^{v_{1}}\times\xi_{2}^{v_{2}}\times...\times\xi_{m}^{v_{m}},\varrho\right)$
$\displaystyle=\sum_{r\in\mathcal{M}}\xi_{1}^{r.v_{1}}\xi_{2}^{r.v_{2}}...\xi_{m}^{r.v_{m}}\varrho^{||r||^{2}/2}$
(139)
The theta function of $\partial_{v}\mathcal{M}$ is given by:
$\displaystyle\Theta_{\partial_{v}\mathcal{M}}\left(\xi_{1}^{w_{1}}\times\xi_{2}^{w_{2}}\times...\times\xi_{m-1}^{w_{m-1}},\varrho\right)$
$\displaystyle=\frac{1}{2}\left(\Theta_{\mathcal{M}}\left(\xi_{1}^{w_{1}}\times\xi_{2}^{w_{2}}\times...\times\xi_{m-1}^{w_{m-1}}\times
1^{v},\varrho\right)\right.$
$\displaystyle\qquad\qquad+\left.\Theta_{\mathcal{M}}\left(\xi_{1}^{w_{1}}\times\xi_{2}^{w_{2}}\times...\times\xi_{m-1}^{w_{m-1}}\times(-1)^{v},\varrho\right)\right)$
$\displaystyle=\sum_{\begin{subarray}{c}r\in\mathcal{M}\\\ r.v\in
2\mathbbm{Z}\end{subarray}}\xi_{1}^{r.v_{1}}\xi_{2}^{r.v_{2}}...\xi_{m-1}^{r.v_{m-1}}\varrho^{||r||^{2}/2}$
(140)
All we needed was the generalised theta function of $\mathcal{M}$ and the dual
vector $v$. Now, $\mathcal{M}_{k}$ itself is of the form
$\partial_{v_{k}}...\partial_{v_{1}}\mathbbm{Z}^{n}$. This means that all
we’re missing is the generalised theta function of $\mathbbm{Z}^{n}$:
$\displaystyle\Theta_{\mathbbm{Z}^{n}}\left(\xi_{1}^{w_{1}}\times\xi_{2}^{w_{2}}\times...\times\xi_{m}^{w_{m}},\varrho\right)$
$\displaystyle=\prod_{i=1}^{n}\theta_{3}\left(\left(\prod_{j=1}^{m}\xi_{j}^{\left(w_{j}\right)_{i}}\right),\varrho\right)$
(141)
Recall that $\theta_{3}$ is a Jacobi theta function.
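As a concrete illustration of (139) and (140), one can track, for each lattice point, its pairings with the dual vectors; setting $\xi=\pm 1$ and averaging then simply discards the points with odd pairing $r.v$. A brute-force sketch for (sub)lattices of $\mathbbm{Z}^{n}$, with a truncation box and function names of our own choosing (we track $||r||^{2}$ itself rather than the $\varrho$-exponent $||r||^{2}/2$, to stay with integers):

```python
from itertools import product
from collections import Counter

def theta(points, dual_vectors):
    """Generalised theta function (139), as a Counter keyed by
    (r.v_1, ..., r.v_m, ||r||^2)."""
    th = Counter()
    for r in points:
        pairings = tuple(sum(ri * vi for ri, vi in zip(r, v))
                         for v in dual_vectors)
        th[pairings + (sum(ri * ri for ri in r),)] += 1
    return th

def halve(th):
    """Apply (140): averaging over xi = +/-1 in the last slot keeps
    only points whose pairing with the last dual vector is even."""
    out = Counter()
    for key, c in th.items():
        *ws, rv, norm = key
        if rv % 2 == 0:
            out[tuple(ws) + (norm,)] += c
    return out

box = list(product(range(-3, 4), repeat=2))   # Z^2, truncated
d2 = halve(theta(box, [(1, 1)]))              # sublattice with r.(1,1) even
print(d2[(0,)], d2[(1,)], d2[(2,)])           # → 1 0 4
```

Here `halve(theta(box, [(1, 1)]))` yields the theta counts of the checkerboard sublattice $\partial_{(1,1)}\mathbbm{Z}^{2}$: the origin, no sq-length 1 vectors, and the four vectors $(\pm 1,\pm 1)$ of sq-length 2.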
## Appendix C Continuous family of impostor $Z$
The construction of [22, 23], which maps quantum codes to particular Narain
theories with the fixed lattice $\Gamma$, see section 2.1, was generalized in
[28] to include lattices $\Gamma$ of varying size. More precisely, starting
e.g. from a real self-dual stabilizer code over $F_{2}$, one constructs a
full enumerator polynomial $W(t,x,y,z)$, which gives rise to the CFT
partition function via
$\displaystyle
Z=W(\psi_{00},\psi_{10},\psi_{11},\psi_{01}),\quad\psi_{ab}=\frac{1}{|\eta|^{2}}\sum_{n,m}e^{i\pi\tau(r(n+a/2)+r^{-1}(m+b/2))^{2}-i\pi\bar{\tau}(r^{-1}(n+a/2)-r(m+b/2))^{2}},$
(142)
where $r$ is an arbitrary parameter. When $r=1$, $\psi_{10}=\psi_{01}$ and we
return to (3) upon a substitution $t\rightarrow x,\,x,z\rightarrow
z,\,y\rightarrow y$. As in the case of refined enumerator polynomials, there
are fake full enumerator polynomials, which satisfy necessary algebraic
identities – invariance under $y\rightarrow-y$ and under
$t\rightarrow\frac{t+x+y+z}{2},\,\,x\rightarrow\frac{t+x-y-z}{2},\,\,y\rightarrow\frac{t-x+y-z}{2},\,\,z\rightarrow\frac{t-x-y+z}{2},$
(143)
but not associated with any codes. There are no such polynomials for $c=1,2$
but already eighteen distinct fake polynomials for $c=3$,
$\displaystyle t^{3}+t^{2}x+tx^{2}+xy^{2}+2txz+x^{2}z+y^{2}z,$ (144)
$\displaystyle t^{3}+2t^{2}x+2tx^{2}+x^{3}+xy^{2}+tz^{2},$ (145)
$\displaystyle t^{3}+t^{2}x+xy^{2}+t^{2}z+2txz+x^{2}z+tz^{2},$ (146)
$\displaystyle t^{3}+t^{2}x+tx^{2}+x^{3}+2xy^{2}+2tz^{2},$ (147)
$\displaystyle t^{3}+2t^{2}x+2tx^{2}+x^{3}+ty^{2}+xz^{2},$ (148)
$\displaystyle t^{3}+t^{2}x+ty^{2}+t^{2}z+2txz+x^{2}z+xz^{2},$ (149)
$\displaystyle t^{3}+t^{2}x+tx^{2}+t^{2}z+2txz+y^{2}z+xz^{2},$ (150)
$\displaystyle t^{3}+ty^{2}+xy^{2}+2txz+x^{2}z+y^{2}z+xz^{2},$ (151)
$\displaystyle t^{3}+xy^{2}+t^{2}z+2txz+y^{2}z+tz^{2}+xz^{2},$ (152)
$\displaystyle t^{3}+x^{3}+ty^{2}+2xy^{2}+2tz^{2}+xz^{2},$ (153)
$\displaystyle t^{3}+t^{2}x+tx^{2}+x^{3}+2ty^{2}+2xz^{2},$ (154)
$\displaystyle t^{3}+x^{3}+2ty^{2}+xy^{2}+tz^{2}+2xz^{2},$ (155)
$\displaystyle t^{3}+tx^{2}+2ty^{2}+2x^{2}z+y^{2}z+z^{3},$ (156)
$\displaystyle t^{3}+2tx^{2}+ty^{2}+x^{2}z+2y^{2}z+z^{3},$ (157)
$\displaystyle t^{3}+2ty^{2}+t^{2}z+2x^{2}z+tz^{2}+z^{3},$ (158)
$\displaystyle t^{3}+2tx^{2}+t^{2}z+2y^{2}z+tz^{2}+z^{3},$ (159)
$\displaystyle t^{3}+ty^{2}+2t^{2}z+x^{2}z+2tz^{2}+z^{3},$ (160)
$\displaystyle t^{3}+tx^{2}+2t^{2}z+y^{2}z+2tz^{2}+z^{3}.$ (161)
They give rise to eighteen continuous families of impostor $Z$, parameterized
by $r$. For $r=1$ they reduce to six polynomials/fake $Z$ discussed in this
paper.
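Both defining identities can be verified mechanically: invariance under $y\rightarrow-y$ holds because every monomial above carries an even power of $y$, and invariance under (143) can be checked by exact expansion with rational coefficients. A sketch for the first polynomial, (144) (the dict encoding and helper names are ours):

```python
from fractions import Fraction

# (144): t^3 + t^2 x + t x^2 + x y^2 + 2 t x z + x^2 z + y^2 z,
# encoded as {(pow_t, pow_x, pow_y, pow_z): coefficient}
P144 = {(3, 0, 0, 0): 1, (2, 1, 0, 0): 1, (1, 2, 0, 0): 1,
        (0, 1, 2, 0): 1, (1, 1, 0, 1): 2, (0, 2, 0, 1): 1,
        (0, 0, 2, 1): 1}

def pmul(p, q):
    # product of two polynomials in dict form
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

def ppow(p, n):
    out = {(0, 0, 0, 0): Fraction(1)}
    for _ in range(n):
        out = pmul(out, p)
    return out

# images of (t, x, y, z) under the transformation (143)
h = Fraction(1, 2)
imgs = [{(1, 0, 0, 0): h, (0, 1, 0, 0): h, (0, 0, 1, 0): h, (0, 0, 0, 1): h},
        {(1, 0, 0, 0): h, (0, 1, 0, 0): h, (0, 0, 1, 0): -h, (0, 0, 0, 1): -h},
        {(1, 0, 0, 0): h, (0, 1, 0, 0): -h, (0, 0, 1, 0): h, (0, 0, 0, 1): -h},
        {(1, 0, 0, 0): h, (0, 1, 0, 0): -h, (0, 0, 1, 0): -h, (0, 0, 0, 1): h}]

def transform(p):
    # substitute (143) into p and expand, with exact rational arithmetic
    out = {}
    for exps, coeff in p.items():
        term = {(0, 0, 0, 0): Fraction(coeff)}
        for img, e in zip(imgs, exps):
            term = pmul(term, ppow(img, e))
        for mono, c in term.items():
            out[mono] = out.get(mono, 0) + c
    return {m: c for m, c in out.items() if c != 0}

# invariance under y -> -y: every monomial has an even y-power
print(all(c % 2 == 0 for (_, _, c, _) in P144))  # → True
# invariance under (143)
print(transform(P144) == P144)                   # → True
```

The same check applies verbatim to the remaining seventeen polynomials once they are encoded in the same dict form.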
## References
* [1] S. Hellerman, _A universal inequality for cft and quantum gravity_ , _Journal of High Energy Physics_ 2011 (2011) 130.
* [2] S. Hellerman and C. Schmidt-Colinet, _Bounds for state degeneracies in 2d conformal field theory_ , _Journal of High Energy Physics_ 2011 (2011) 127.
* [3] C. A. Keller and H. Ooguri, _Modular constraints on calabi-yau compactifications_ , _Communications in Mathematical Physics_ 324 (2013) 107.
* [4] D. Friedan and C. A. Keller, _Constraints on 2d cft partition functions_ , _Journal of High Energy Physics_ 2013 (2013) 180.
* [5] J. D. Qualls and A. D. Shapere, _Bounds on operator dimensions in 2d conformal field theories_ , _Journal of High Energy Physics_ 2014 (2014) 91.
* [6] T. Hartman, C. A. Keller and B. Stoica, _Universal spectrum of 2d conformal field theory in the large c limit_ , _Journal of High Energy Physics_ 2014 (2014) 118.
* [7] J. D. Qualls, _Universal bounds on operator dimensions in general 2d conformal field theories_ , _arXiv preprint arXiv:1508.00548_ (2015) .
* [8] H. Kim, P. Kravchuk and H. Ooguri, _Reflections on conformal spectra_ , _Journal of High Energy Physics_ 2016 (2016) 184.
* [9] Y.-H. Lin, S.-H. Shao, Y. Wang and X. Yin, _(2, 2) superconformal bootstrap in two dimensions_ , _Journal of High Energy Physics_ 2017 (2017) 112.
* [10] T. Anous, R. Mahajan and E. Shaghoulian, _Parity and the modular bootstrap_ , _SciPost Phys_ 5 (2018) 022.
* [11] S. Collier, Y.-H. Lin and X. Yin, _Modular bootstrap revisited_ , _Journal of High Energy Physics_ 2018 (2018) 61.
* [12] N. Afkhami-Jeddi, T. Hartman and A. Tajdini, _Fast conformal bootstrap and constraints on 3d gravity_ , _Journal of High Energy Physics_ 2019 (2019) 87.
* [13] M. Cho, S. Collier and X. Yin, _Genus two modular bootstrap_ , _Journal of High Energy Physics_ 2019 (2019) 22.
* [14] T. Hartman, D. Mazáč and L. Rastelli, _Sphere packing and quantum gravity_ , _Journal of High Energy Physics_ 2019 (2019) 48.
* [15] N. Afkhami-Jeddi, H. Cohn, T. Hartman, D. de Laat and A. Tajdini, _High-dimensional sphere packing and the modular bootstrap_ , _arXiv preprint arXiv:2006.02560_ (2020) .
* [16] N. Afkhami-Jeddi, H. Cohn, T. Hartman and A. Tajdini, _Free partition functions and an averaged holographic duality_ , _Journal of High Energy Physics_ 2021 (2021) 1.
* [17] F. Gliozzi, _Modular bootstrap, elliptic points, and quantum gravity_ , _Physical Review Research_ 2 (2020) 013327.
* [18] R. Rattazzi, V. S. Rychkov, E. Tonni and A. Vichi, _Bounding scalar operator dimensions in 4d cft_ , _Journal of High Energy Physics_ 2008 (2008) 031.
* [19] D. Poland and D. Simmons-Duffin, _The conformal bootstrap_ , _Nature Physics_ 12 (2016) 535.
* [20] D. Poland, S. Rychkov and A. Vichi, _The conformal bootstrap: Theory, numerical techniques, and applications_ , _Reviews of Modern Physics_ 91 (2019) 015002.
* [21] S. El-Showk, M. F. Paulos, D. Poland, S. Rychkov, D. Simmons-Duffin and A. Vichi, _Solving the 3d ising model with the conformal bootstrap_ , _Physical Review D_ 86 (2012) 025022.
* [22] A. Dymarsky and A. Shapere, _Quantum stabilizer codes, lattices, and cfts_ , _Journal of High Energy Physics_ 2021 (2021) 1.
  * [23] A. Dymarsky and A. Shapere, _Solutions of modular bootstrap constraints from quantum codes_ , _Physical Review Letters_ 126 (2021) 161602.
* [24] J. Henriksson, A. Kakkar and B. McPeak, _Codes, lattices, and cfts at higher genus_ , _arXiv preprint arXiv:2112.05168_ (2021) .
* [25] J. Henriksson, A. Kakkar and B. McPeak, _Narain cfts and quantum codes at higher genus_ , _arXiv preprint arXiv:2205.00025_ (2022) .
* [26] M. Buican, A. Dymarsky and R. Radhakrishnan, _Quantum codes, cfts, and defects_ , _arXiv preprint arXiv:2112.12162_ (2021) .
* [27] S. Yahagi, _Narain CFTs and error-correcting codes on finite fields_ , _Journal of High Energy Physics_ 2022 (2022) .
* [28] N. Angelinos, D. Chakraborty and A. Dymarsky, _Optimal narain cfts from codes_ , _arXiv preprint arXiv:2206.14825_ (2022) .
* [29] A. Dymarsky and A. Sharon, _Non-rational narain cfts from codes over f4_ , _Journal of High Energy Physics_ 2021 (2021) 1.
* [30] M. R. Gaberdiel, H. R. Hampapura and S. Mukhi, _Cosets of meromorphic CFTs and modular differential equations_ , _Journal of High Energy Physics_ 2016 (2016) 1.
* [31] A. R. Chandra and S. Mukhi, _Towards a classification of two-character rational conformal field theories_ , _Journal of High Energy Physics_ 2019 (2019) .
* [32] A. R. Chandra and S. Mukhi, _Curiosities above c = 24_ , _SciPost Physics_ 6 (2019) .
* [33] S. Mukhi, _Classification of rcft from holomorphic modular bootstrap: A status report_ , _arXiv preprint arXiv:1910.02973_ (2019) .
  * [34] J. H. Conway and N. J. A. Sloane, _Low-dimensional lattices. i. quadratic forms of small determinant_ , _Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences_ 418 (1988) 17.
* [35] J. Milnor, _Curvatures of left invariant metrics on lie groups_ , _Advances in Mathematics_ 21 (1976) 293.
* [36] J. Fuchs and C. Schweigert, _Symmetries, Lie algebras and representations: A graduate course for physicists_. Cambridge University Press, 10, 2003.
* [37] A. N. Schellekens, _Meromorphic c = 24 conformal field theories,_ , _Commun. Math. Phys._ 153 (1993) 159.
* [38] J. H. Conway and N. J. A. Sloane, _Sphere packings, lattices and groups_ , vol. 290. Springer Science & Business Media, 2013.
Revisiting the Universal Texture Zero of Flavour:
a Markov Chain Monte Carlo Analysis
Jordan Bernigauda,b, Ivo de Medeiros Varzielasc, Miguel Levyc, Jim Talbertd
a Institute for Astroparticle Physics (IAP), Karlsruhe Institute of
Technology, Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen,
Germany,
b Institute for Theoretical Particle Physics (TTP), Karlsruhe Institute of
Technology, Engesserstrasse 7, D-76128 Karlsruhe, Germany
c CFTP, Departamento de Física, Instituto Superior Técnico, Universidade de
Lisboa, Avenida Rovisco Pais 1, 1049 Lisboa, Portugal,
d DAMTP, University of Cambridge, Wilberforce Rd., Cambridge, CB3 0WA, United
Kingdom
E-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
_This work is dedicated to the memory of Prof. Graham Garland Ross, FRS:
1944-2021._
We revisit the phenomenological predictions of the Universal Texture Zero
(UTZ) model of flavour originally presented in [1], and update them in light
of both improved experimental constraints and numerical analysis techniques.
In particular, we have developed an in-house Markov Chain Monte Carlo (MCMC)
algorithm to exhaustively explore the UTZ’s viable parameter space,
considering both leading- and next-to-leading contributions in the model’s
effective operator product expansion. We also extract — for the first time —
reliable UTZ predictions for the (poorly constrained) leptonic CP-violating
phases, and ratio observables that characterize neutrino masses probed by
(e.g.) oscillation, $\beta$-decay, and cosmological processes. We therefore
dramatically improve on the proof-in-principle phenomenological analysis
originally presented in [1], and ultimately show that the UTZ remains a
minimal, viable, and appealing theory of flavour. Our results also further
demonstrate the potential of robustly examining multi-parameter flavour models
with MCMC routines.
###### Contents
1 Introduction
2 The Universal Texture Zero Model
  2.1 The Leading-Order Effective Yukawa Lagrangian
  2.2 Higher-Order Contributions
  2.3 Complete Effective Mass Matrices in the Ultraviolet
3 Experimental Constraints
  3.1 Renormalization Group Evolution Uncertainties
4 An MCMC Scan of Parameter Space
  4.1 The Generic MCMC Algorithm
  4.2 UTZ Specifics
  4.3 Results and Analysis
  4.4 Summary Comments
5 Summary and Outlook
## 1 Introduction
The bulk of the free, unexplained parameters in the Standard Model (SM) of
particle physics originate in its flavour sector, thanks to the replication of
SM fermion generations with distinct masses and quantum mixings. These
parameters are technically natural, in that sending them to zero recovers a
global U(3)$^{5}$ flavour symmetry of the Lagrangian [2, 3]. However, the Yukawa
couplings of SM fermions to the Higgs boson break this symmetry in a deeply
flavour-non-universal manner, with a mass ratio of $\sim\mathcal{O}(10^{12})$
between (e.g.) neutrinos and the top quark. Furthermore, the
Cabibbo–Kobayashi–Maskawa (CKM) quark mixing matrix exhibits a hierarchical,
approximately unit-matrix structure [4], while the Pontecorvo–Maki–Nakagawa–Sakata
(PMNS) leptonic mixing matrix is extremely non-hierarchical, with large
mixings amongst generations [5]. These highly disparate patterns of fermionic
mass and mixing strongly hint that the origins of flavour in the SM may be
dynamical, as opposed to a random, soft deviation from an accidental symmetric
limit.
The _flavour puzzle_ therefore remains a compelling motivation to search for
physics Beyond-the-SM (BSM), as it can be solved dynamically via the breakdown
of an ultraviolet (UV), BSM symmetry in specific directions of flavour space.
This symmetry breaking typically occurs when exotic scalar _familons_ develop
special vacuum expectation values (vev) as determined by a family-symmetric
scalar potential, although other alignment mechanisms are conceivable. When
familons couple to SM fermions and the Higgs boson, their flavoured vevs shape
the otherwise free Yukawa matrices of the SM, and therefore control their
associated mass eigenvalues and mixing angles after electroweak symmetry
breaking. These predictions can be compared to global flavour data sets to
falsify the model, serving as an indirect probe of the new physics proposed.
While the predictions of flavour models — derived from either top-down or
bottom-up considerations — are rich, they are also becoming increasingly
difficult to falsify, given that experiment is rapidly resolving all SM
flavour parameters to a high degree of precision, such that the models’
predictions should actually be considered _postdictions_. Indeed, virtually
all quark masses and CKM mixings are measured with exceptional accuracy, while
only the PMNS angle $\theta_{23}^{l}$ (in what follows we use the label $l$
for leptons, $q$ for quarks, and $u,d,e,\nu$ for individual families of
either; we also include neutrino mass and mixing when we reference ‘SM’
flavour parameters in the text, despite these being fundamentally BSM objects),
the Dirac CP-violating phase $\delta^{l}$, absolute neutrino mass eigenvalues,
and Majorana CP-violating phases (if relevant) are poorly constrained in the
leptonic sector. While physical observables that depend on non-trivial
combinations of these parameters, e.g. neutrinoless-double-$\beta$ decay rates
($0\nu\beta\beta$), single $\beta$-decay rates, and the sum of neutrino mass
eigenvalues (as constrained by cosmology), offer additional independent probes
of flavour models, it is conceivable that a believable BSM theory will also
make falsifiable predictions for a subset of the aforementioned, unresolved
constituent flavour parameters. Complicating matters further, many (most) BSM
flavour models introduce a number of UV theory parameters that are difficult
to numerically sample in a fully generic manner, and so extracting concrete
predictions from said models is challenging in its own right.
In light of this experimental situation, and in response to the need for more
robust analysis routines for exploring viable model parameter spaces, we will
re-examine the Universal Texture Zero (UTZ) Model originally presented in [1].
The UTZ is an effective theory (EFT) valid at mass scales above those
characteristic of the SM, but below those of hypothetical (and potentially
unfalsifiable), renormalizable UV completions, e.g. those incorporating ultra-
heavy fermionic messenger fields $\mathcal{V}$:
$\Lambda_{\mathcal{V}}>\Lambda_{\text{UTZ}}>\Lambda_{\text{SM}}$. Its Yukawa
sector is therefore generated only at the non-renormalizable level, with EFT
expansion parameters in inverse powers of the messenger masses $M_{i}$. The
UTZ Lagrangian is symmetric under a
$\Delta(27)\simeq\left(\mathbb{Z}_{3}\times\mathbb{Z}_{3}\right)\rtimes\mathbb{Z}_{3}$
[6, 7, 8, 9, 10] non-Abelian discrete family symmetry and a further
$\mathbb{Z}_{N}$ discrete shaping symmetry, and is consistent with an
underlying stage of SO(10) grand unification as all fermions and their
conjugates — including right-handed (RH) gauge-singlet neutrinos — are
assigned as triplets ${\bf{3}}$ under $\Delta(27)$. Critically, the additional
scalars introduced are charged such that a $\Delta(27)$-invariant scalar
potential exists that drives family-symmetry breaking as mentioned above,
yielding symmetric mass matrices with a characteristic texture zero in the
(1,1) position for _all_ family sectors. As shown in [1], this UTZ structure
is capable of explaining quark and lepton flavour data with as few as nine
infrared (IR) theory parameters, and therefore amounts to an appealing and
predictive theory for the origin of SM flavour patterns. The UTZ stands as a
continuation of similar solutions employing texture zeroes, explored already
in e.g. [11, 12].
However, the numerical exploration of the UTZ parameter space presented in [1]
only achieved a ‘proof-in-principle’ fit demonstrating the model’s
phenomenological viability. It did not exhaustively explore the predictions of
the UTZ Lagrangian at leading order (LO) in its EFT expansion parameters, nor
did it consider the complete set of corrections generated by operators present
at next-to-leading order (NLO) in $1/M_{i}$. Most importantly, the analysis in
[1] did not present robust _predictions_ for the aforementioned unresolved
leptonic flavour parameters nor any other observables (e.g. $\beta$-decay
rates) that depend on them, and hence it did not provide a reliable means of
falsifying the UTZ model space as data continues to improve. In this paper we
aim to remedy these shortcomings by applying a Markov Chain Monte Carlo (MCMC)
fitting algorithm to the UTZ. Inspired by similar analyses [13, 14], the MCMC
technology we employ allows for a robust exploration of multi-parameter models
and their associated likelihoods. It also allows one to simultaneously extract
predictions for poorly-constrained observables which are controlled (in part
and in different combinations) by the same parameters that control
exceptionally well-constrained observables, thereby accounting for the
intricate correlations between UTZ theory parameters and their associated
phenomenology. In this way we are capable of presenting predictions in
experimentally-preferred regions of the UTZ parameter space, for both the LO
and NLO UTZ Lagrangian. As we will show, the theory is phenomenologically
viable at _both_ orders in its operator product expansion, with the latter NLO
terms yielding only minor corrections to the dominant LO predictions. The UTZ
is therefore a stable, predictive, and minimal theory of flavour.
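As an illustration of the kind of sampler involved, a generic random-walk Metropolis step can be sketched in a few lines. This is a textbook toy, not the in-house UTZ implementation; the Gaussian proposal width and the one-dimensional toy target are arbitrary choices of ours:

```python
import math
import random

def metropolis(log_likelihood, theta0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis: propose a Gaussian step in parameter
    space and accept it with probability min(1, L'/L)."""
    rng = random.Random(seed)
    theta = list(theta0)
    logl = log_likelihood(theta)
    chain = []
    for _ in range(n_steps):
        prop = [t + rng.gauss(0.0, step) for t in theta]
        logl_prop = log_likelihood(prop)
        if math.log(rng.random()) < logl_prop - logl:
            theta, logl = prop, logl_prop
        chain.append(list(theta))
    return chain

# toy target: a one-dimensional standard-normal "likelihood"
chain = metropolis(lambda th: -0.5 * th[0] ** 2, [3.0], 20000)
samples = [th[0] for th in chain[5000:]]   # drop burn-in
mean = sum(samples) / len(samples)
```

For the toy Gaussian target the post-burn-in mean and variance approach 0 and 1, which is the kind of sanity check one performs before trusting a sampler on a multi-parameter likelihood.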
The remainder of the paper develops as follows: in Section 2 we review the UTZ
model as conceived in [1], including the field and symmetry content composing
the (N)LO contributions to its operator product expansion, as well as its
qualitative predictions in the quark and lepton sectors. Then in Section 3 we
review the most up-to-date experimental data constraining its predictions in
the Yukawa sector, and also discuss the uncertainties associated with
renormalization group evolution (RGE) from the UV to the IR. In Section 4 we
discuss the MCMC algorithm we have developed to explore the UTZ parameter
space, and also present the results and analysis following from our scans. We
conclude in Section 5.
## 2 The Universal Texture Zero Model
Fields | $\psi_{q,e,\nu}$ | $\psi^{c}_{q,e,\nu}$ | $H$ | $\Sigma$ | $S$ | $\theta_{3}$ | $\theta_{23}$ | $\theta_{123}$ | $\theta$ | $\theta_{X}$
---|---|---|---|---|---|---|---|---|---|---
$\Delta(27)$ | 3 | 3 | $1_{00}$ | $1_{00}$ | $1_{00}$ | $\bar{3}$ | $\bar{3}$ | $\bar{3}$ | $\bar{3}$ | $3$
$\mathbb{Z}_{N}$ | 0 | 0 | 0 | 2 | -1 | 0 | -1 | 2 | 0 | $x$
Table 1: The fields and $\Delta(27)\times\mathbb{Z}_{N}$ family symmetry
content of the UTZ flavour model. Note that $\theta_{X}$ only appears in the
scalar potential, and hence the only restriction on its $\mathbb{Z}_{N}$
charge is that it does not contribute significantly to the fermionic mass
matrices. Its $\mathbb{Z}_{N}$ charge can therefore be left generic, as shown.
The field and $\Delta(27)\times\mathbb{Z}_{N}$ symmetry content of the UTZ [1]
is given in Table 1. There one observes that all SM fermions $\psi_{a}$ are
assigned as triplets ${\bf{3}}$ under the family symmetry, as are additional
gauge singlet ‘sterile’ neutrinos that participate in a seesaw mechanism.
Besides the fermionic content, we also have a set of BSM scalar familons
$\theta_{i}$, charged as $\Delta(27)$ anti-triplets ${\bf{\bar{3}}}$, a
lepton-number-violating (LNV) anti-triplet familon $\theta$ necessary for
describing the Majorana neutrino mass sector, and finally a triplet familon
$\theta_{X}$ necessary for successful vacuum alignment. All such familons are
SM gauge singlets. There is also a $\Delta(27)$ singlet sector composed of the
$\Sigma$ and Higgs $H$ scalars, both associated to an underlying stage of
grand unification consistent with the following symmetry-breaking chain:
$\text{SO(10)}\rightarrow\text{SU(4)}\times\text{SU(2)}_{L}\times\text{SU(2)}_{R}\rightarrow\text{SU(3)}\times\text{SU(2)}\times\text{U(1)}\,,$
(1)
where the SO(10) breaking proceeds via an $H$ vev and where
$\langle\Sigma\rangle\propto B-L+\kappa\,T_{3}^{R}$ is associated to Pati-
Salam breaking. As seen below, this latter $\Sigma$ field selects unique Dirac
textures for distinct fermion families out of an otherwise universal mass
matrix structure. Finally, the $\Delta(27)$ singlet scalar $S$ is a shaping
field that, along with the $\mathbb{Z}_{N}$ shaping symmetry, restricts the
class of operators that appear in the UTZ EFT. Its main role is to indirectly
forbid terms $\propto\theta_{123}\theta_{123}$ in the UV Majorana Lagrangian
presented in (18), which would destroy the desirable UTZ texture. We note that
this field and symmetry content is explicitly free of discrete gauge anomalies
at the relevant scale of our EFT; see (e.g.) [15, 16, 17, 18, 19, 10, 20, 21,
22].
Besides the Yukawa Lagrangian to be discussed in upcoming Sections, the
familons $\theta_{i}$, $\theta_{X}$ and $\theta$ also compose an associated
scalar potential $V=V_{A}+V_{B}$,
$\displaystyle V_{A}$
$\displaystyle=\sum\limits_{i=3,123}{\left({{V_{1}}({\theta_{i}})+{V_{2}}({\theta_{i}})}\right)}+{V_{3}}+{V_{4}}+{V_{5}}\,,$
$\displaystyle V_{B}$ $\displaystyle=V_{1}(\theta)+V_{2}(\theta)+V_{6}\,.$ (2)
While we leave the complete description of the vacuum alignment mechanism to
[1], we recall that the individual components $V_{i}$ of $V$ are given by
$V_{1}(\theta_{i})=m_{i}^{2}|\theta_{i}|^{2}\,,\quad V_{2}(\theta_{i})=h_{i}\left(\theta_{i}\right)^{2}\left(\theta^{{\dagger}i}\right)^{2}\,,\quad V_{3}=k_{1}\,\theta_{X,i}\theta_{123}^{{\dagger}i}\,\theta_{123,j}\theta_{X}^{{\dagger}j}\quad(k_{1}>0)\,,\quad V_{4}=k_{2}\,m_{0}\,\theta_{X}^{1}\theta_{X}^{2}\theta_{X}^{3}\,,$
$V_{5}=k_{3}\,\theta_{23,i}\theta_{X}^{i}\,\theta_{23}^{{\dagger}j}\theta_{X}^{{\dagger}j}+k_{4}\,\theta_{23,i}\theta_{3}^{{\dagger}i}\,\theta_{3,j}\theta_{23}^{{\dagger}j}\quad(k_{3}>0\ \text{and}\ k_{4}<0)\,,\quad V_{6}=k_{5}\,\theta_{3,i}\theta^{{\dagger}i}\,\theta_{j}\theta_{3}^{{\dagger}j}\quad(k_{5}<0)\,.$ (3)
The first term $V_{1}$ sets the scale of the scalar familon fields, and is
sufficient to break the family symmetry spontaneously upon $m_{i}^{2}$ being
driven to negative values, perhaps via radiative corrections in the manner of
[23]. Then the second term $V_{2}$ aligns the $\theta_{3,123}$ vevs in flavour
space according to the sign of $h_{i}$: $h_{123}>0$ while $h_{3}<0$. The terms
$V_{3,4,5,6}$ account for the final alignment
of the $\theta_{23,X}$ and $\theta$ vevs, with $V_{3}$ sourcing the dominant
coupling of $\theta_{X}$, $V_{4}$ selecting
$\langle\theta_{X}\rangle\propto\left(2,-1,1\right)$ out of the two degenerate
vacua $V_{3}$ allows, and $V_{5}$ and $V_{6}$ respectively driving the final
$\theta_{23}$ and $\theta$ orientations upon minimization. (Note that in (2)
we have only included terms consistent with a spontaneously broken,
supersymmetric (SUSY) underlying theory with triplet mediators; additional
quartic terms may appear, but must be suppressed in order to preserve (2).)
All other aspects of the tree-level phenomenology of the UTZ model can be
studied without reference to hypothetical UV completions, and we adopt this
agnosticism to be as generic as possible in what follows.
All subtleties considered, the potential in (3) aligns the scalar familon
fields in special directions in flavour-space,
$\left\langle\theta_{(3)}\right\rangle={\rm{v}}_{\theta(3)}\begin{pmatrix}0\\0\\1\end{pmatrix},\quad\left\langle\theta_{123}\right\rangle=\frac{{\rm{v}}_{123}}{\sqrt{3}}\begin{pmatrix}e^{i\beta}\\e^{i\alpha}\\-1\end{pmatrix},\quad\left\langle\theta_{23}\right\rangle=\frac{{\rm{v}}_{23}}{\sqrt{2}}\begin{pmatrix}0\\e^{i\alpha}\\1\end{pmatrix},\quad\left\langle\theta_{X}^{{\dagger}}\right\rangle=\frac{{\rm{v}}_{X}}{\sqrt{6}}\begin{pmatrix}2e^{i\beta}\\-e^{i\alpha}\\1\end{pmatrix}\,,$
where the parentheses on the first term indicate that both $\theta_{3}$ and
$\theta$ are aligned in the third-family direction, and where we have included
the generic phases $\alpha$, $\beta$ for completeness, although we will
eventually set these to zero following the discussion in [24]. We note that
many of the above potential terms are not invariant under SU(3)$_{F}$; the use
of $\Delta(27)$, a non-Abelian discrete subgroup of SU(3)$_{F}$, was therefore
instrumental in the above discussion.
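As an illustration of this alignment, the following minimal numerical sketch (our own, assuming the phases $\alpha=\beta=0$ and unit-normalized vevs; none of these inputs is a result of this work) checks that the quoted vevs make the $k_{1}>0$ term $V_{3}$ vanish:

```python
import numpy as np

# Illustrative check (assumption: alpha = beta = 0, unit-normalized vevs).
v3 = np.array([0, 0, 1])                  # <theta_3> and <theta> direction
v123 = np.array([1, 1, -1]) / np.sqrt(3)  # <theta_123>
v23 = np.array([0, 1, 1]) / np.sqrt(2)    # <theta_23>
vX = np.array([2, -1, 1]) / np.sqrt(6)    # <theta_X^dagger>

# V_3 ~ k_1 |theta_X . theta_123|^2 with k_1 > 0 is minimized at zero:
print(abs(np.vdot(v123, vX)))             # vanishes (orthogonal in flavour space)
# whereas <theta_23> retains overlap with the third-family direction,
# as favoured by the k_4 < 0 term of V_5:
print(abs(np.vdot(v3, v23)) ** 2)         # -> 0.5 (up to rounding)
```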
### 2.1 The Leading-Order Effective Yukawa Lagrangian
Having demonstrated that a successful vacuum alignment is plausible after
family-symmetry breaking, a meaningful BSM Yukawa sector can subsequently be
formed from the field and symmetry content of Table 1. This leads to the
following LO UTZ effective Lagrangian in the Dirac sector of the theory:
$\mathcal{L}^{LO}_{D,f}=\psi_{i}\left({c^{(6)}_{3}\over\,M_{3,f}^{2}}\theta_{3}^{i}\theta_{3}^{j}+{c^{(7)}_{23}\over
M_{23,f}^{3}}\,\theta_{23}^{i}\theta_{23}^{j}\Sigma+{c^{(7)}_{123}\over
M_{123,f}^{3}}\,(\theta_{123}^{i}\theta_{23}^{j}+\theta_{23}^{i}\theta_{123}^{j})S\right)\psi_{j}^{c}H\,,$
(4)
where $f\in\{u,\,d,\,e,\,\nu\}$. Here $c^{(n)}_{i}$ are free Wilson
coefficients whose superscript denotes the mass dimension $n$ of the operator,
while $M_{i,f}$ represent the mass scales associated to heavy messenger fields
that have been integrated out of the spectrum in forming the EFT, à la the
Froggatt-Nielsen mechanism [25]. These messenger fields are associated to
distinct UV completions and are typically taken to be vector-like fermions,
although we do not wish to commit ourselves to any particular scenario. In
what follows we will simply point out the implications and constraints on said
UV messengers coming from the (falsifiable) IR spectrum associated to (4).
To that end, one quickly notices that a natural hierarchy for the third-family
fermions is realized, thanks to the power suppression (assuming only mild
hierarchies amongst messenger masses) of the second and third terms with
respect to the first, which only contributes to the (3,3) entry of the Dirac
mass matrices. While this helps realize an approximate SU(2)F symmetry of the
quark mass matrices and associated CKM mixing matrix, it also implies that the
ratio $\theta_{3}/M_{3,f}$ is large [26], at least in the up sector. This is
acceptable if $\theta_{3}$ is the dominant contributor to the messenger mass,
which we assume for all charged fermion sectors. For an alternative solution
to this issue involving Higgs mediators, see [27].
Besides (4), the field and symmetry content of Table 1 also permits a Majorana
mass Lagrangian, which at leading order in the OPE is of the following form:
$\mathcal{L}^{\nu}_{\mathcal{M}}=\psi_{i}^{c}\left({c^{(5)}_{M}\over
M}\,\theta^{i}\theta^{j}+{1\over
M^{4}}[c^{(8)}_{M,1}\,\theta_{23}^{i}\theta_{23}^{j}(\theta^{k}\theta^{k}\theta_{123}^{k})+c^{(8)}_{M,2}\,(\theta_{23}^{i}\theta_{123}^{j}+\theta_{123}^{i}\theta_{23}^{j})(\theta^{k}\theta^{k}\theta_{23}^{k})]\right)\psi^{c}_{j}\,.$
(5)
Here one notices that there are two insertions of the LNV scalar $\theta$ in
each operator, as is consistent with our underlying
SO(10)$\,\rightarrow\,$SU(4)$\times$SU(2)$_{L}\times$SU(2)$_{R}$ GUT
embedding, and also that the
leading contributions in this effective Lagrangian are at dimension five and
eight in the $1/M$ expansion of the EFT, as opposed to six and seven in the
case of the Dirac Lagrangian given in (4). This results in an extremely
dominant third-family hierarchy that has important phenomenological
implications in the neutrino sector upon applying the seesaw, as mentioned
below. Further discussion regarding the relative power suppression between
Dirac and Majorana sectors will be given in Section 2.2.
#### Qualitative Charged Fermion Masses and Sum Rules
While the SM’s quark and charged lepton flavour sector is exceptionally well-
measured and therefore offers little opportunity for novel predictions, we do
note that the UTZ Lagrangian above has been designed to realize successful
charged fermion mass ratios, as well as two long-standing and successful
phenomenological ansätze: the Georgi-Jarlskog mechanism and the Gatto-Sartori-
Tonin sum rule. This is due to the UTZ structure of the Dirac mass matrices,
given qualitatively by
$M_{f}^{D}\approx m_{3}\begin{pmatrix}0&\varepsilon_{f}^{3}&\varepsilon_{f}^{3}\\\varepsilon_{f}^{3}&r_{f}\varepsilon_{f}^{2}&r_{f}\varepsilon_{f}^{2}\\\varepsilon_{f}^{3}&r_{f}\varepsilon_{f}^{2}&1\end{pmatrix},\quad r_{u,d}=1/3,\quad r_{e}=-1\,,$
(6)
with $f$ again indicating the family sector, $f\in\{u,d,e\}$, and
$\epsilon_{f}$ the associated small parameters. Phenomenologically viable
values are given by $\epsilon_{u}\approx 0.05$ and $\epsilon_{d,e}\approx
0.15$. This
family splitting is accommodated via the UTZ relations
$r_{f}\,\epsilon_{f}^{2}\equiv\frac{\langle\theta_{23}\rangle^{2}\langle\Sigma\rangle}{M_{23,f}^{3}}\cdot\frac{M_{3,f}^{2}}{\langle\theta_{3}\rangle^{2}}\,,\,\,\,\,\,\,\,\,\,\,\,\,\,\epsilon_{f}^{3}\equiv\frac{\langle\theta_{23}\rangle\langle\theta_{123}\rangle\langle
S\rangle}{M_{123,f}^{3}}\cdot\frac{M_{3,f}^{2}}{\langle\theta_{3}\rangle^{2}}\,,$
(7)
which hold up to $\mathcal{O}(1)$ coefficients and signs. Here one sees that
$\epsilon_{u}<\epsilon_{d}$ is realized if
$M_{3,u}/M_{23,u}<M_{3,d}/M_{23,d}$, while $\epsilon_{e}$ and $\epsilon_{d}$
can be equal given the symmetry breaking in (1) and the fact that both are
$T_{R,3}=-1/2$ states which (in SUSY models) acquire their mass from the same
Higgs boson ($H_{d}$). Note that we assume the messenger masses carry both
lepton and quark quantum numbers; it is therefore important for
$\epsilon_{u}<\epsilon_{d}$ that the RH messengers associated to SU(2)$_{R}$
breaking in (1) dominate, as opposed to the LH messengers associated to
SU(2)$_{L}$, whose up- and down-type masses are equal by SU(2)$_{L}$
invariance.
Also associated to this pattern of family suppression are the $r_{f}$
coefficients in (6), which are sourced from the $\Sigma$ vev and therefore
implement the Georgi-Jarlskog mechanism [28], resulting in
$m_{\tau}=m_{b}\,,\,\,\,\,\,\,\,\,m_{\mu}=3\,m_{s}\,,\,\,\,\,\,\,\,\,\,\,\,\,\,m_{e}=\frac{1}{3}\,m_{d}$
(8)
at the scale of grand unification, as is consistent with RGE and threshold
corrections [12].
In addition to these characteristic mass relations, the quark sector
realizations of (6) also implement the Gatto-Sartori-Tonin mixing sum rule
[29],
$\sin{\theta_{c}}=\left|\sqrt{\frac{{{m_{d}}}}{{{m_{s}}}}}-{e^{i\delta}}\sqrt{\frac{{{m_{u}}}}{{{m_{c}}}}}\right|\,,$
(9)
relating the Cabibbo angle $\theta_{c}\simeq\theta_{12}^{q}$ to mass ratios
from the first- and second-generation quarks of both the up and down families.
Setting $\delta\approx\pi/2$ and again accounting for RGE and threshold
corrections [12] (cf. Section 3.1), both (8) and (9) remain successful
predictions in our UTZ framework. (These predictions are a consequence of the
texture zeroes in the charged fermion structures and are therefore not unique
to the UTZ; see e.g. [11, 30, 26].)
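As a rough numerical illustration of (9) (a sketch of our own, using central values read off the IR rows of Table 3 as inputs, with the phase choice $\delta=\pi/2$ made in the text; none of this is the paper's fit output):

```python
import numpy as np

# Assumed central values from the IR rows of Table 3 (illustrative only).
md_over_ms = 1.15e-3 / 2.30e-2   # (m_d/m_b) / (m_s/m_b)
mu_over_mc = 1.32e-5 / 7.36e-3   # (m_u/m_t) / (m_c/m_t)

delta = np.pi / 2                # phase choice quoted below (9)
sin_theta_c = abs(np.sqrt(md_over_ms) - np.exp(1j * delta) * np.sqrt(mu_over_mc))
print(sin_theta_c)               # ~ 0.23, close to the measured sin(theta_12^q)
```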
#### Qualitative Neutrino Masses and Sum Rules
As discussed above, the bulk of the parameters left to be constrained
experimentally are in the neutrino sector, and so it is worthwhile to discuss
the qualitative, analytic predictions of the UTZ construction in this area. As
with other flavour models, we employ the Type-I seesaw mechanism [31, 32, 33,
34]. In this framework the right-handed (RH) Majorana mass terms generated by
(5) are parametrically heavier than the Dirac neutrino masses coming from (4).
Integrating the RH neutrino fields out of the spectrum generates a left-handed
(LH) Majorana neutrino mass term,
$M_{\nu}=\frac{1}{2}\,M^{D}_{\nu}\cdot M_{M}^{-1}\cdot M^{D,T}_{\nu}\,,$ (10)
which is of course naturally light due to the heavy $M_{M}$ suppression. In
the presence of a sequentially dominant [35, 36, 37, 38] RH neutrino spectrum
$M_{M,3}\gg M_{M,2}\geq M_{M,1}\,,$ (11)
which is naturally realized thanks to the hierarchical suppression of the
second term in (18) with respect to the dominant (third-family) first term,
the seesaw contribution coming from $\nu_{3}^{c}$ exchange is negligible.
This results in the lightest active neutrino having a parametrically smaller
mass compared to the two heaviest active neutrinos. This spectrum is described
by an effective 2$\times$2 neutrino mass structure in the IR, which can be
analyzed analytically. In particular, after application of the Type-I seesaw
mechanism, one can extract sum rules for the PMNS mixing angles as a function
of neutrino mass eigenvalues,
$\sin\theta^{\nu}_{13}\approx\sqrt{{m_{2}\over
3m_{3}}}\,,\quad\quad\sin\theta^{\nu}_{23}\approx|{1\over\sqrt{2}}-e^{i\eta}\sin\theta^{\nu}_{13}|\,,\quad\quad\sin\theta^{\nu}_{12}\approx\frac{1}{\sqrt{3}}\,,$
(12)
where the phase $\eta$ is defined from the predicted ratio of the heavy
neutrino masses $m_{2}/m_{3}$ (see the discussion in [1]). One notes that the
relationships in (12) are similar to the renowned ‘Tri-Bimaximal’ (TBM)
texture [39] ($\sin\theta_{13}^{TBM}=0$, $\sin\theta_{23}^{TBM}=1/\sqrt{2}$,
$\sin\theta_{12}^{TBM}=1/\sqrt{3}$) that is often a starting point for
neutrino mass model building. However, the salient difference with respect to
prior models of this type — see e.g. [30, 26] or the more recent [40, 41, 42]
— is that the (1,1) texture zero of the mass matrix remains _after_
application of the seesaw, such that our UTZ setup leads to a non-negligible
departure from the TBM texture and naturally allows for a large(r) reactor
mixing angle $\theta_{13}^{l}$ (which also receives corrections from the
charged-lepton sector), in accord with data. Finally, we note that our
$\Delta(27)$ family-symmetry breaking realizes the
$\mathbb{Z}_{2}\times\mathbb{Z}_{2}$ [43] residual symmetry of the IR neutrino
mass term only ‘indirectly,’ following the classification system of [44], in
that it is not a subgroup of $\Delta(27)$ and appears only accidentally thanks
to (2). (For a pedagogical algorithm and exhaustive discussion regarding the
reconstruction of effective Lagrangians analogous to (4)-(5) in the
alternative ‘direct’ symmetry-breaking scenario, see [45].)
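The scales involved, and the leading-order sum rules of (12), can be sketched numerically (our own illustration: the splittings are rough central values consistent with (23), and the Dirac and Majorana mass scales are purely illustrative choices, not outputs of this work):

```python
import numpy as np

# Sequentially dominant, normal-ordering limit: m_1 << m_2 < m_3.
d_sol, d_atm = 7.42e-5, 2.51e-3       # eV^2, assumed central values
m2 = np.sqrt(d_sol)                   # ~ 8.6e-3 eV
m3 = np.sqrt(d_atm + d_sol)           # ~ 5.1e-2 eV

# Type-I seesaw scale estimate, m_nu ~ m_D^2 / (2 M), cf. (10):
m_D, M = 100.0, 1.0e14                # GeV (illustrative only)
m_nu = 0.5 * m_D**2 / M * 1e9         # converted from GeV to eV
print(m_nu)                           # ~ 0.05 eV, the right ballpark

# Leading-order sum rules of (12), before charged-lepton corrections:
s13 = np.sqrt(m2 / (3 * m3))          # ~ 0.24, a sizeable departure from TBM
s12 = 1 / np.sqrt(3)
print(s13, s12)
```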
### 2.2 Higher-Order Contributions
The Lagrangians in (4)-(5) represent the LO contributions to the UTZ operator
product expansion. Higher-order terms in this series are suppressed by further
powers of the relevant mediator masses, and should therefore represent small
corrections to the qualitative structures and predictions discussed above.
However, these corrections can a priori be non-negligible as noted in [1], and
so it is important that we consider them robustly as we revisit the UTZ.
In the Dirac sector, the NLO $\Delta(27)\times\mathbb{Z}_{N}$ invariant terms
composed of the same field content as in Table 1 arise at mass-dimension
eight, i.e. with four powers of mediator mass suppression,
$\mathcal{L}^{HO}_{D,f}=\psi_{i}\left({c^{(8)}_{23}\over
M_{23,f}^{4}}(\theta_{23}^{i}\theta_{3}^{j}+\theta_{3}^{i}\theta_{23}^{j})\Sigma
S+{c^{(8)}_{123}\over
M_{123,f}^{4}}(\theta_{123}^{i}\theta_{3}^{j}+\theta_{3}^{i}\theta_{123}^{j})S^{2}\right)\psi_{j}^{c}H\,.$
(13)
While these terms contribute at the same order in the EFT’s power counting, we
have already identified in the discussion below (7) that the LO Dirac mass
contribution $\propto\langle\Sigma\rangle$ is parametrically larger than that
$\propto S$. If one assumes roughly universal messenger masses, one finds that
$\frac{\langle\theta_{23}\rangle\langle\theta_{23}\rangle\langle\Sigma\rangle}{M_{23}^{3}}\sim\mathcal{O}(\epsilon^{2}),\,\,\,\,\,\,\,\frac{\langle\theta_{23}\rangle\langle\theta_{123}\rangle\langle
S\rangle}{M_{123}^{3}}\sim\mathcal{O}(\epsilon^{3})\,\,\,\Longrightarrow\,\,\,\frac{\langle\theta_{23}\rangle\langle\Sigma\rangle}{\langle\theta_{123}\rangle\langle
S\rangle}\sim\mathcal{O}\left(\frac{1}{\epsilon}\right)\,,$ (14)
from which one can readily conclude that the HO contributions $\propto S^{2}$
in (13) are also parametrically smaller than those $\propto\Sigma S$:
$\frac{\langle\theta_{3}\rangle\langle\theta_{23}\rangle\langle\Sigma\rangle\langle
S\rangle}{M^{4}}\sim\frac{1}{\epsilon}\frac{\langle\theta_{3}\rangle\langle\theta_{123}\rangle\langle
S\rangle^{2}}{M^{4}}\,.$ (15)
In [1] we used (15) to justify ignoring the $S^{2}$ contribution to the Dirac
mass matrix entirely. However, we will now include both terms in (13) for
completeness.
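The scaling argument of (14)-(15) can be checked with toy numbers (all values below are our own illustrative choices, in units of a common messenger mass $M=1$, not fit outputs):

```python
# Fix the vevs so that the LO terms of (4) scale as eps^2 and eps^3,
# as in (14), then compare the two HO structures of (13).
eps, v23, v123, v3 = 0.15, 0.3, 0.3, 0.5   # illustrative values, M = 1

Sigma = eps**2 / v23**2          # enforces <theta_23>^2 <Sigma> / M^3 = eps^2
S = eps**3 / (v23 * v123)        # enforces <theta_23><theta_123><S> / M^3 = eps^3

ho_sigma_s = v3 * v23 * Sigma * S    # Sigma*S structure in (13)
ho_s2 = v3 * v123 * S**2             # S^2 structure in (13)
ratio = ho_sigma_s / ho_s2
print(ratio)                         # -> 1/eps ~ 6.7, as in (15)
```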
The UTZ’s operator product expansion is of course infinite-dimensional in the
absence of an explicit UV completion, so further, next-to-next-to-leading-order
contributions can also be written down. However, these operators carry at
least three additional insertions of $\Delta(27)$ triplets and are therefore
highly suppressed; we neglect them as a result. We also note that the NLO
contributions to the Dirac Lagrangian given in (13) enter at the same
mass-dimension as the LO contributions to the Majorana Lagrangian given in
(5),
$\mathcal{O}^{\text{HO}}_{D}\sim\mathcal{O}^{\text{LO}}_{\mathcal{M}}\sim\mathcal{O}(1/M^{4})\,,$
(16)
and so, to remain consistent in our power counting, we do not consider any
corrections to (5).
### 2.3 Complete Effective Mass Matrices in the Ultraviolet
The discussions in the Subsections above lead to the LO and NLO Lagrangians of
(4), (5) and (13). After family- and electroweak-symmetry breaking, these
Lagrangians generate the following Dirac and Majorana fermion UTZ mass
matrices:
$\mathcal{M}_{f}^{\mathcal{D}}\simeq\begin{pmatrix}0&a\,e^{i(\alpha+\beta+\gamma)}&a\,e^{i(\beta+\gamma)}+c\,e^{i(\beta+\zeta)}\\a\,e^{i(\alpha+\beta+\gamma)}&(b\,e^{-i\gamma}+2a\,e^{-i\delta})\,e^{i(2\alpha+\gamma+\delta)}&b\,e^{i(\alpha+\delta)}+c\,e^{i(\alpha+\zeta)}+d\,e^{i(\alpha+\psi)}\\a\,e^{i(\beta+\gamma)}+c\,e^{i(\beta+\zeta)}&b\,e^{i(\alpha+\delta)}+c\,e^{i(\alpha+\zeta)}+d\,e^{i(\alpha+\psi)}&1-2a\,e^{i\gamma}+b\,e^{i\delta}-2c\,e^{i\zeta}+2d\,e^{i\psi}\end{pmatrix}\,,$
(17)
$\mathcal{M}^{\mathcal{M}}\simeq\begin{pmatrix}0&y\,e^{i(\alpha+\beta+\rho)}&y\,e^{i(\beta+\rho)}\\y\,e^{i(\alpha+\beta+\rho)}&(x\,e^{-i\rho}+2y\,e^{-i\phi})\,e^{i(2\alpha+\rho+\phi)}&x\,e^{i(\alpha+\phi)}\\y\,e^{i(\beta+\rho)}&x\,e^{i(\alpha+\phi)}&1-2y\,e^{i\rho}+x\,e^{i\phi}\end{pmatrix}\,.$
(18)
Here the matrices have been renormalized such that
$\mathcal{M}_{f}^{\mathcal{D}}\equiv M_{f}^{\mathcal{D}}/s_{f}$ and
$\mathcal{M}^{\mathcal{M}}\equiv M^{\mathcal{M}}/M_{\theta}$, where
$M_{\theta}$ and $s_{f}$ are the overall scale-setting parameters of (17)-(18)
which, along with the relative-scale-setting parameters $\{a,b,c,d,x,y\}$,
are defined in terms of scalar vevs and other coefficients:
$\displaystyle a^{\prime}_{f}=\frac{c_{123}^{(7)}\,{\rm{v}}_{123}{\rm{v}}_{23}\langle S\rangle}{\sqrt{6}\,M_{123,f}^{3}},\quad b^{\prime}_{f}=\frac{c_{23}^{(7)}\,r_{f}\,{\rm{v}}_{23}^{2}\langle\Sigma\rangle}{2M_{23,f}^{3}},\quad c^{\prime}_{f}=\frac{c_{123}^{(8)}\,{\rm{v}}_{123}{\rm{v}}_{3}\langle S\rangle^{2}}{\sqrt{3}\,M_{123,f}^{4}},\quad d^{\prime}_{f}=\frac{c_{23}^{(8)}\,r_{f}\,{\rm{v}}_{23}{\rm{v}}_{3}\langle\Sigma\rangle\langle S\rangle}{\sqrt{2}\,M_{23,f}^{4}},$
$\displaystyle s_{f}=\frac{c_{3}^{(6)}\,{\rm{v}}_{3}^{2}}{M_{3,f}^{2}},\quad x^{\prime}=\frac{c_{M,1}^{(8)}\,{\rm{v}}_{23}^{2}\,\langle\Theta_{123}\rangle}{2M^{4}},\quad y^{\prime}=\frac{c_{M,2}^{(8)}\,{\rm{v}}_{23}{\rm{v}}_{123}\,\langle\Theta_{23}\rangle}{\sqrt{6}\,M^{4}},\quad M_{\theta}=\frac{c_{M}^{(5)}\,{\rm{v}}_{\theta}^{2}}{M}\,,$ (19)
with $r_{u,d,e,\nu}=(1,1,-3,-3)/3$ and
$\langle\Theta_{23,123}\rangle\equiv\langle\theta^{k}\theta^{k}\theta^{k}_{23,123}\rangle$,
i.e. the vevs of the singlet contractions carrying the $k$ index in (5). The
relationship between primed and unprimed parameters, along with the associated
complex phases, is then given by
$\displaystyle\frac{a^{\prime}}{s}$
$\displaystyle=\bigg{|}\frac{a^{\prime}}{s}\bigg{|}\,e^{i\gamma}\equiv
a\,e^{i\gamma},\,\,\,\,\,\,\,\,\,\frac{b^{\prime}}{s}=\bigg{|}\frac{b^{\prime}}{s}\bigg{|}\,e^{i\delta}\equiv
b\,e^{i\delta},\,\,\,\,\,\,\,\,\,\frac{c^{\prime}}{s}=\bigg{|}\frac{c^{\prime}}{s}\bigg{|}\,e^{i\zeta}\equiv
c\,e^{i\zeta},\,\,\,\,\,\,\,\,\,\frac{d^{\prime}}{s}=\bigg{|}\frac{d^{\prime}}{s}\bigg{|}\,e^{i\psi}\equiv
d\,e^{i\psi}\,,$ $\displaystyle\frac{x^{\prime}}{M_{\theta}}$
$\displaystyle=\bigg{|}\frac{x^{\prime}}{M_{\theta}}\bigg{|}\,e^{i\phi}\equiv
x\,e^{i\phi},\,\,\,\,\,\,\,\,\,\frac{y^{\prime}}{M_{\theta}}=\bigg{|}\frac{y^{\prime}}{M_{\theta}}\bigg{|}\,e^{i\rho}\equiv
y\,e^{i\rho}\,,$ (20)
and it is clear that $c$ and $d$ account for the HO Dirac corrections in (13).
Given (17)-(18), the values of the ‘physical’ fermionic mass, mixing, and CP-
violating parameters can be extracted numerically as described in [1] or
analytically, using flavour-invariant theory as described in (e.g.) [46, 47,
48]. Then, given (19)-(20), one can compare the number of IR theory parameters
vs. IR physical parameters as a measure of the predictivity of the UTZ. At LO,
there are a priori two coefficients ($a$, $b$) and two phases ($\gamma$,
$\delta$) for each charged fermion sector, plus the additional two
family-universal phases ($\alpha$, $\beta$) from vacuum alignment. However,
following [24], we
can set all but two of these phases to zero without loss of generality.
Assuming the GUT embedding discussed above to relate the down quarks to
charged leptons taking into account the Georgi-Jarlskog factors, one then has
$\left(2+2\right)\cdot 3+2-4-4=6$ UTZ model parameters (including two phases)
to describe three CKM mixing angles, one CKM Dirac phase, four quark mass
ratios and two charged lepton mass ratios, totalling 10 physical parameters.
The neutrino sector’s predictivity is even more striking in the sequentially
dominant limit underlying (12). There, only three parameters, including a
phase and an overall mass scale, are necessary to reproduce the neutrino mass
differences, which when combined with the aforementioned charged-lepton
parameters also generate the PMNS angles and phases. In total, we see that
only nine theory parameters are required to reproduce 18 physical parameters
at LO in the UTZ OPE.
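For concreteness, this bookkeeping can be written out explicitly (pure arithmetic; the groupings in the comments follow the text above, not an independent derivation):

```python
# Parameter counting at LO in the UTZ OPE (charged fermion sector).
coeffs_and_phases_per_sector = 2 + 2   # (a, b) and (gamma, delta)
sectors = 3                            # up, down, charged leptons
universal_phases = 2                   # (alpha, beta) from vacuum alignment
removed = 4 + 4                        # phases set to zero and GUT relations

charged = coeffs_and_phases_per_sector * sectors + universal_phases - removed
neutrino = 3                           # coefficient ratio, phase, overall scale
print(charged, charged + neutrino)     # -> 6 9, vs. 10 + 8 = 18 observables
```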
## 3 Experimental Constraints
The core experimental constraints on the UTZ model presented in Section 2 are
of course the fermionic mass eigenvalues and CKM/PMNS mixings extracted from a
host of low- and high-energy flavour experiments. Regarding the charged
fermion sector, this information is regularly collated in the PDG review [4],
which reports bounds on fermion masses and mixing angles. We have reported
these IR bounds for the mass sector in Table 3, translating the uncertainties
on individual masses into uncertainties on mass ratios, given that the UTZ
only predicts the charged fermion mass spectrum up to a common scale. On the
other hand, uncertainties on mixing angles and the Dirac CP phase can be
extracted from global fits to the CKM matrix and Jarlskog invariant given by
[4]
$|V_{CKM}|\equiv|U_{u}^{\dagger}U_{d}|\in\begin{pmatrix}(0.97419,\,0.97451)&(0.22433,\,0.22567)&(0.00358,\,0.00388)\\(0.22419,\,0.22553)&(0.97333,\,0.97365)&(0.04108,\,0.04267)\\(0.00839,\,0.00877)&(0.04038,\,0.04193)&(0.999082,\,0.999149)\end{pmatrix}\,,\quad\mathcal{J}^{CKM}\in(2.95,\,3.23)\cdot 10^{-5}\,,$ (21)
where the left equality defines the CKM as the overlap of the matrices
$U_{u,d}$ diagonalizing the up / down Yukawa couplings. The translation of
these bounds to the $\theta_{ij}^{q}$ and $\delta_{q}$ basis is given in Table
2.
Leptonic mass and mixing constraints are of course deeply sensitive to ongoing
neutrino oscillation, cosmology, and $\beta$-decay experiments. The authors of
[5] have compiled a global fit to the available oscillation data, finding
(e.g.)
$|V_{PMNS}|\equiv|U_{l}^{\dagger}U_{\nu}|\in\begin{pmatrix}(0.801,\,0.845)&(0.513,\,0.579)&(0.144,\,0.156)\\(0.244,\,0.499)&(0.505,\,0.693)&(0.631,\,0.768)\\(0.272,\,0.518)&(0.471,\,0.669)&(0.623,\,0.761)\end{pmatrix}\,,$
(22)
where the LHS again gives the standard definition of the PMNS matrix as it
appears in the charged-current interactions in terms of constituent charged-
lepton and neutrino mixing matrices $U_{l,\nu}$, and the $3\sigma$ confidence
bounds on the RHS further assume a unitary $V_{PMNS}$ and include Super-
Kamiokande atmospheric data — see [5] for details. As seen in [5] and also in
Table 2, current $3\sigma$ oscillation constraints do not yet fully determine
the quadrant of the atmospheric mixing angle $\theta_{23}^{l}$ and, at least
in the normal ordering scenario, have only excluded $\sim 43\%$ of the
available domain of the leptonic CP-violating phase $\delta^{l}$, i.e.
$\delta^{l}$ is only constrained within a $\sim 200^{\circ}$ arc. This
exclusion is reduced to only $\sim 20\%$ of the phase domain when SK data are
not included.
Uncertainties on Fermionic Mixing Parameters
---
CKM Parameters | $\sin\theta_{12}^{q}$ | $\sin\theta_{23}^{q}$ | $\sin\theta_{13}^{q}$ | $\sin\delta_{CP}^{q}$
($\mu=M_{IR}$) | $(0.224,\,0.226)$ | $(0.411,\,0.427)\cdot 10^{-1}$ | $(0.358,\,0.380)\cdot 10^{-2}$ | $(0.899,\,0.921)$
($\mu=M_{UV}$) | $(0.224,\,0.226)$ | $(0.219,\,0.463)\cdot 10^{-1}$ | $(0.184,\,0.409)\cdot 10^{-2}$ | $(0.194,\,1.000)$
PMNS Parameters | $\sin\theta_{12}^{l}$ | $\sin\theta_{23}^{l}$ | $\sin\theta_{13}^{l}$ | $\sin\delta_{CP}^{l}$
($\mu=M_{IR,UV}$) | $(0.519,\,0.586)$ | $(0.639,\,0.776)$ | $(0.144,\,0.156)$ | $(-1.000,\,0.588)$
Table 2: Uncertainty estimates for fermionic mixing parameters. In the CKM
sector, the (experimental) IR bounds are given in [4], while the UV bounds are
estimated by considering various input RGE/threshold correction parameter
choices from [12], and accounting for the propagated IR experimental
uncertainties. In the PMNS sector we take 3$\sigma$ global bounds from NuFit,
in the normal ordering scenario and incorporating Super-Kamiokande atmospheric
data.
Uncertainties on Fermionic Masses
---
Quarks | $m_{u}/m_{t}$ | $m_{c}/m_{t}$ | $m_{d}/m_{b}$ | $m_{s}/m_{b}$
($\mu=M_{IR}$) | $(1.097,\,1.543)\cdot 10^{-5}$ | $(7.217,\,7.509)\cdot 10^{-3}$ | $(1.069,\,1.238)\cdot 10^{-3}$ | $(2.138,\,2.452)\cdot 10^{-2}$
($\mu=M_{UV}$) | $(1.592,\,5.807)\cdot 10^{-6}$ | $(0.911,\,2.576)\cdot 10^{-3}$ | $(3.570,\,8.303)\cdot 10^{-4}$ | $(0.880,\,1.729)\cdot 10^{-2}$
Leptons | $m_{e}/m_{\tau}$ | $m_{\mu}/m_{\tau}$ | $\xi\equiv\Delta m_{sol}^{2}/\Delta m^{2}_{atm}$ |
($\mu=M_{IR}$) | $(2.876,\,2.876)\cdot 10^{-4}$ | $(5.946,\,5.947)\cdot 10^{-2}$ | $(2.63,\,3.31)\cdot 10^{-2}$ |
($\mu=M_{UV}$) | $(2.343,\,2.875)\cdot 10^{-4}$ | $(5.096,\,5.753)\cdot 10^{-2}$ | $(2.63,\,3.31)\cdot 10^{-2}$ | $(0.124,\,3.421)\cdot 10^{-1}$
Neutrinos | $m_{\beta}$ [GeV] | $\langle m_{\beta\beta}\rangle$ [GeV] | $m_{\Sigma}$ [GeV] | (see text for UV estimates)
($\mu=M_{IR}$) | $8\cdot 10^{-10}$ | $6\cdot 10^{-12}$ | $1.2\cdot 10^{-10}$ |
($\mu=M_{UV}$) | $1.12\cdot 10^{-9}$ | $8.4\cdot 10^{-12}$ | $1.68\cdot 10^{-10}$ |
Table 3: The same as Table 2, but for fermion masses. We estimate the neutrino
mass squared difference in the UV from [49], and recall that the $\xi$ ratio
only differs from the IR when $\tan\beta$ is large and/or the neutrino mass
spectrum is partially degenerate (see text), hence the two UV bounds for $\xi$
in the last row, with the left (right) cell corresponding to the low (high)
$\tan\beta$ scenario. IR bounds for $m_{\beta(\beta)}$ and $m_{\Sigma}$ are
taken from [50, 51, 52], and their corresponding UV bounds are given by
conservatively setting $\mathbb{s}=1.4$ (see text).
The authors of [5] have also obtained global constraints on the differences of
squared neutrino mass eigenvalues, finding at the $3\sigma$ confidence level
$\Delta m^{2}_{sol}\equiv m_{2}^{2}-m_{1}^{2}\in(6.82,\,8.04)\cdot 10^{-5}\,\text{eV}^{2}\,,\quad\Delta m^{2}_{atm}\equiv m_{3}^{2}-m_{2}^{2}\in(2.430,\,2.593)\cdot 10^{-3}\,\text{eV}^{2}\,,$ (23)
in the normal-mass-ordering scenario relevant to the UTZ construction, and
again including Super-Kamiokande atmospheric data. (Note that, by definition,
the mass eigenvalues are labeled according to their relative magnitudes, i.e.
$m_{3}>m_{2}>m_{1}$ in the normal-ordering scenario.) We have translated this
to a bound on the ratio $\xi\equiv\Delta m^{2}_{sol}/\Delta m^{2}_{atm}$ in
Table 3.
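The translation is simple arithmetic; a minimal check using only the bounds quoted in (23) reproduces the $\xi$ entry of Table 3:

```python
# 3-sigma range of xi = dm2_sol / dm2_atm from the oscillation bounds in (23).
sol_lo, sol_hi = 6.82e-5, 8.04e-5     # eV^2
atm_lo, atm_hi = 2.430e-3, 2.593e-3   # eV^2

xi_lo = sol_lo / atm_hi               # smallest allowed ratio
xi_hi = sol_hi / atm_lo               # largest allowed ratio
print(f"{xi_lo:.2e}, {xi_hi:.2e}")    # -> 2.63e-02, 3.31e-02, matching Table 3
```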
A second class of neutrino mass constraints comes directly from cosmological
probes. For example, assuming the $\Lambda$CDM model and using data from the
Cosmic Microwave Background’s (CMB) angular spectra, the Planck experiment has
put an upper bound on the sum of cosmologically stable neutrino masses
$m_{\Sigma}$ of [50]
$m_{\Sigma}\equiv\sum_{i}\,m_{\nu_{i}}<0.26\,\,\,\,\text{eV}$ (24)
at the 95% confidence level. When data from Baryon Acoustic Oscillations are
also included this bound tightens to $m_{\Sigma}<0.12$ eV [50], and it can be
further reduced to $m_{\Sigma}<0.09$ eV when including Type Ia supernova
luminosity distances and growth-rate parameter determinations [53]; see [54]
for a recent review.
Finally, additional constraints in the neutrino mass and mixing sector
originate in the effective mass terms controlling electron-neutrino
0$\nu\beta\beta$ decay and single $\beta$ decays,
$\displaystyle\langle m_{\beta\beta}\rangle$
$\displaystyle\equiv\big{|}\sum_{i}V_{ei}^{2}\,m_{i}\big{|}\,<\,(61-165)\cdot
10^{-3}\,\,\text{eV}\,,$ (25) $\displaystyle m_{\beta}$
$\displaystyle\equiv\sqrt{\sum_{i}|V_{ei}|^{2}\,m_{i}^{2}}\,<\,0.8\,\,\text{eV}\,,$
(26)
where $V_{ei}$ is the matrix element of the first row and $i$-th column of the
PMNS matrix defined in (22), and $m_{i}$ is the corresponding neutrino mass
eigenvalue. Robust bounds for these quantities are provided by dedicated
terrestrial experiments. In (25) we have cited the KamLAND-Zen collaboration
[51], while the limit in (26) is the 90% confidence-level bound from KATRIN
[52]. KATRIN’s future sensitivity is expected to reach $m_{\beta}<0.2$ eV
[55]. (See e.g. [56, 57] and references therein for a recent summary review of
these independent flavour constraints and the ability to use them to probe
neutrino mass models.)
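As an order-of-magnitude illustration of (25)-(26) (our own sketch: $m_{1}\simeq 0$ in the normal ordering, with PMNS moduli chosen inside the ranges of (22); none of these inputs is a fit output of this work):

```python
import numpy as np

# Sequentially dominant, normal-ordering toy spectrum (assumed values).
m = np.array([0.0, 8.6e-3, 5.0e-2])   # eV
Ve = np.array([0.82, 0.55, 0.15])     # |V_e1|, |V_e2|, |V_e3|, within (22)

m_beta = np.sqrt(np.sum(Ve**2 * m**2))
print(m_beta)                         # ~ 9e-3 eV, far below KATRIN's 0.8 eV bound

# |<m_bb>| depends on unknown Majorana phases; ignoring them gives an
# upper-end estimate, below the KamLAND-Zen window quoted in (25):
m_bb_max = np.sum(Ve**2 * m)
print(m_bb_max)                       # ~ 4e-3 eV
```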
At this point we should clarify that, while in practice we must specify a
numerical value for the scale $M_{\theta}$ when applying the seesaw formula
(10), and must therefore count this as a relevant IR model parameter in our
theory (unlike the charged fermion case), this choice completely determines
the overall neutrino mass scale. (Specifying $M_{\theta}$ amounts to
specifying _both_ the Dirac and Majorana neutrino mass scales, and hence their
relative hierarchy, as we have no way of differentiating $M_{\theta}$ from
$\tilde{M}_{\theta}\equiv s_{\nu}^{2}/M_{\theta}$ in the seesaw formula.) As
this is a free UTZ model parameter, we can vary it between sensible UV seesaw
scales to accommodate (e.g.) (23)-(26), and probe said variation’s effect on
the relative-scale-setting parameters $\{a_{\nu},b_{\nu},x,y\}$. We have done
so for $M_{\theta}\in 10^{10-12}$ GeV, and observe in Figure 1 that, at least
qualitatively, larger values of $M_{\theta}$ are preferred.
Regardless, as with the charged fermion spectrum, we only consider UTZ
predictions for _ratios_ of observables that depend on the overall neutrino
mass scale, where $M_{\theta}$ cancels, as truly meaningful. For this reason
we will present $\xi$, $\langle m_{\beta(\beta)}\rangle/m_{\Sigma}$, and
$\langle m_{\beta\beta}\rangle/m_{\beta}$ as predictions in Section 4, but not
their individual constituent parameters $m_{\nu_{i}}$ or $m_{\beta(\beta)}$,
although we can report these based on the various $M_{\theta}$’s identified in
the MCMC evolution, and indeed $m_{\beta(\beta)}$, $\Delta m^{2}_{sol,atm}$,
and $m_{\Sigma}$ should be seen as giving reliable _constraints_ on the MCMC
algorithm, in that once a value for $M_{\theta}$ has been settled on, their
associated values must still be consistent with observation.
### 3.1 Renormalization Group Evolution Uncertainties
A critical uncertainty for any prediction of the UTZ model comes from the fact
that (17)-(18) are the textures associated to the _UV_ theory. Any comparison
with data must be made at the scale where said experimental constraints are
obtained, which in the case of the UTZ model is orders of magnitude below
where (17)-(18) hold. Thankfully the Renormalization Group (RG) evolution
required to account for this scale separation is well-studied in the context
of a background SM or minimally-SUSY SM (MSSM) spectrum (consistent with our
vacuum alignment mechanism discussed above), for both the charged fermion [12,
58, 59] and neutrino sectors [60, 49, 61, 62, 23, 63]. For example, assuming a
large flavour-breaking scale and a background SUSY spectrum allowing for high-
scale gauge-coupling unification, the phenomenological charged-fermion
structures discussed between (6)-(9) are already consistent with RGE and
threshold corrections from the IR to the UV (see e.g. [12]), up to
uncertainties regarding (e.g.) the underlying SUSY breaking scale and
parameter space (in particular the ratio of Higgs vevs, $\tan\beta$), which
can drive some mass and mixing-angle splittings. As we do not specify
$\tan\beta$ or other parameters
and/or fields beyond those of the Yukawa sector of the EFT presented in
Section 2.1, we have used [12] to estimate the overall uncertainty associated
to UV quark mass and mixing parameters, accounting for the broad range of
possible theory parameters studied therein, and of course propagating updated
IR experimental uncertainties from [4] to the UV. These estimates are reported
in Tables 2-3.
Moving to the neutrino sector and assuming a Type-I seesaw mechanism, detailed
RGE and threshold correction analyses can be found in [49, 60] and references
therein. There one concludes that, in the absence of a conspiracy between
special alignments of phases, large $\tan\beta$, and/or a
(partially-)degenerate light neutrino spectrum (here a ‘degenerate’ spectrum
corresponds to $\Delta m^{2}_{atm}\ll m^{2}_{3}\sim m^{2}_{2}\sim m^{2}_{1}$,
while a ‘partially degenerate’ spectrum corresponds to $\Delta m^{2}_{sol}\ll
m_{1}^{2}\lesssim\Delta m^{2}_{atm}$; hierarchical neutrinos satisfy
$m_{1}^{2}\ll\Delta m^{2}_{sol}$ in the normal-ordering scenario), radiative
corrections to PMNS mixing angles and phases between disparate scales are
generally minimal,
$\displaystyle\Delta\theta_{ij}^{l}\equiv\theta_{ij}^{l}(\Lambda_{M_{1}})-\theta_{ij}^{l}(\Lambda_{M_{Z}})$
$\displaystyle\sim\frac{1}{2}\cdot\ln\frac{M_{1}}{M_{Z}}\cdot
10^{-6}\cdot\left(1+\tan^{2}\beta\right)\cdot\Gamma_{\text{enh}}\,,$ (27)
for the lowest seesaw scale of $M_{1}$, and with
$\Gamma_{\text{enh}}=\\{1,\sqrt{\xi},1,\sqrt{\xi}/\theta_{13}^{l},\sqrt{\xi}\\}$
for
$\\{\theta_{12}^{l},\theta_{13}^{l},\theta_{23}^{l},\delta^{l},\phi_{i}\\}$,
respectively. Taking rough order-of-magnitude estimates for
$\Gamma_{\text{enh}}$ and allowing for $M_{1}$ as large as $M_{\text{GUT}}\sim
10^{16}$ GeV and $\tan\beta$ as large as 50, one sees that typically
$\Delta\theta_{ij}^{l}\lesssim\mathcal{O}(10^{-2})$, which is largely
insignificant in comparison to the experimental uncertainties on mixing
parameters given in Table 2, except for possible corrections to
$\theta_{13}^{l}$. Given that we predict a normally-ordered, hierarchical mass
spectrum as a result of the sequential dominance condition of (11) (note that
the presence of a dominant scale $M_{3}$ in the RH neutrino mass matrix also
minimizes the _inter_-threshold radiative effects between the $M_{i}$ [60]),
we can take the $3\sigma$ bounds of (22) to hold in the UV, implying that the
experimental uncertainties are large in comparison to radiative effects.
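As a rough numerical check of (27), one can evaluate the shifts for the most pessimistic inputs quoted above; the values of $\xi$ and $\theta_{13}^{l}$ used below are representative assumptions:

```python
import math

# Order-of-magnitude evaluation of the running estimate (27), using the
# most pessimistic inputs quoted in the text (all assumptions):
# M_1 = M_GUT ~ 1e16 GeV, M_Z ~ 91.2 GeV, tan(beta) = 50, and
# representative values for xi ~ dm2_sol/dm2_atm and theta_13^l.
M1, MZ, tan_beta = 1e16, 91.2, 50.0
xi, theta13 = 0.03, 0.15

base = 0.5 * math.log(M1 / MZ) * 1e-6 * (1 + tan_beta**2)
gammas = {"theta12": 1.0, "theta13": math.sqrt(xi), "theta23": 1.0,
          "delta": math.sqrt(xi) / theta13, "phases": math.sqrt(xi)}
shifts = {name: base * g for name, g in gammas.items()}
for name, shift in shifts.items():
    print(f"{name}: {shift:.1e} rad")
```

Even in this extreme corner the shifts are at most a few times $10^{-2}$, consistent with the $\mathcal{O}(10^{-2})$ estimate quoted above.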
On the other hand, the light neutrino mass eigenvalues are far more sensitive
to RGE than are the PMNS parameters, even in a hierarchical system. Assuming
small $\tan\beta$, neutrino mass eigenvalues generally evolve with a common
scaling, $m_{\nu,i}(\mu)\approx\mathbb{s}(\mu,\mu_{0})\,m_{\nu,i}(\mu_{0})$
with (e.g.) $\mathbb{s}\approx 1.1-1.2$ for $\tan\beta\approx 10$ or
$\mathbb{s}\approx 1.35-1.4$ for SM-like running. This obviously leads to a UV
enhancement of the neutrino mass differences in (23) $\propto\mathbb{s}^{2}$,
but this effect cancels in the ratio $\xi$. On the other hand, large
$\tan\beta$ can drive UV flavour splittings amongst the neutrino mass
eigenvalues, evolving both $\Delta m_{sol,atm}^{2}$ and $\xi$, an effect which
is especially enhanced in the case of a (partially-)degenerate spectrum, and
which is considerably uncertain when allowing for generic phase
configurations. We have used [49] to estimate the effect on $\xi$ in this
regime in Table 2, where one sees that an uncertainty greater than an order of
magnitude in principle exists, although this is quite conservative given the
neutrino mass domain considered in [49], and the fact that we can constrain
our MCMC scan to prefer a hierarchical mass spectrum, i.e. $\Delta
m^{2}_{sol}/m_{\nu_{1}}^{2}\gg 1$. (While sequential dominance (11) naturally
generates a hierarchical spectrum, variations of the relative-scale-setting
neutrino coefficients $\\{a_{\nu},b_{\nu},x,y\\}$ can in principle edge the
spectrum towards partial degeneracy. We have applied a likelihood of 1 to any
value of $\Delta m^{2}_{sol}/m_{\nu_{1}}^{2}>10$ found in our MCMC scans, and
a smoothing, Gaussian-like corrective factor to assign likelihoods for values
close to this threshold.) Finally, we note that the RGE effects discussed
above also impact the UV values of (24)-(26), which serve as constraints on
the MCMC system. In Table 3 we have estimated these in the (conservative)
SM-like scenario, with $\mathbb{s}=1.4$ for all neutrino species.
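Because the small-$\tan\beta$ running described above acts as a common rescaling of all light eigenvalues, its effect drops out of $\xi$; a two-line check with illustrative mass values makes this explicit:

```python
# Minimal check that a common RGE rescaling m_i -> s * m_i (the small
# tan(beta) regime described above) cancels in xi = dm2_sol / dm2_atm,
# even though each mass-squared difference is enhanced by s^2.
# The mass values are illustrative; s = 1.4 is the SM-like factor above.
def xi_of(m1, m2, m3):
    dm2_sol = m2**2 - m1**2
    dm2_atm = m3**2 - m1**2
    return dm2_sol / dm2_atm

m1, m2, m3 = 1.0e-3, 8.7e-3, 5.0e-2    # eV, hierarchical (assumption)
s = 1.4
xi_ir = xi_of(m1, m2, m3)
xi_uv = xi_of(s * m1, s * m2, s * m3)
print(xi_ir, xi_uv)  # equal up to floating-point rounding
```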
In summary, we will apply the UV bounds in Tables 2-3 to account for a rather
generic class of RGE and threshold corrections to fermionic mass and mixing in
the UTZ. They will allow us to robustly explore the UTZ’s predictions without
introducing unnecessary assumptions about the background field content and/or
non-flavour parameter spaces that are irrelevant to the EFT construction at
hand, which is designed to be as model-independent as possible.
## 4 An MCMC Scan of Parameter Space
A proof-of-principle numerical analysis of the UTZ predictions derived from
(17)-(18) was originally performed in [1], in order to show that the model was
consistent with available mass and mixing data at the time. This semi-analytic
study, while successful, relied on a largely heuristic contour analysis to
identify a viable region of the UTZ parameter space. However, the analysis was
incomplete in many ways, in that it did not
1. exhaustively explore the available UTZ model space, robustly accounting for
all theory correlations amongst its Lagrangian parameters, and therefore
conclusively determine whether the LO UTZ effective Lagrangian adequately
describes nature;
2. explore the complete set of corrections coming from NLO effective operators
as discussed in Section 2.2; only the largest corrections identified in the
Dirac Lagrangian were briefly considered in [1], and only in the down-quark
sector (the corrections parameterized by $d_{d}$ and $\psi_{d}$);
3. identify sufficiently generic _predictions_ for (e.g.) the CP-violating
phases $\delta^{l}$ and $\phi_{1,2}$ or the PMNS atmospheric angle
$\theta_{23}^{l}$, when all other (well-measured) flavour parameters were
simultaneously resolved by the UTZ;
4. consider in any way the experimental constraints from, or predictions for,
neutrino-sector observables like $0\nu\beta\beta$, single $\beta$-decay rates,
or the sum of neutrino masses $m_{\Sigma}$.
Furthermore, the experimental datasets available for theory comparison have
been updated since the original publication of [1]. All of these
considerations motivate us to revisit our phenomenological analysis of the UTZ
in order to better determine its viability and identify means of falsifying
it. However, given the number of free parameters introduced by (4), (5), and
even (13), numerical techniques more sophisticated than those applied in [1]
will be necessary to achieve 1-4. To that end, in this work we consider a
Markov Chain Monte Carlo (MCMC) algorithm for exploring the UTZ.
### 4.1 The Generic MCMC Algorithm
Our numerical analysis will rely on a Metropolis-Hastings MCMC algorithm. The
purpose of this approach is to find the posterior distribution of the model
after applying relevant experimental constraints, thereby obtaining viable,
high-likelihood UTZ parameter regions. The MCMC technique has proven to be
very powerful when applied to the exploration of high-dimensional parameter
spaces, with physics applications originating in cosmology [64] and extending
to (e.g.) phenomenological studies of SUSY extensions of the SM [65, 66] and
the determination of parton distribution functions [67]. More recently, two
publications have used this approach to study the viability of a flavoured
SUSY SU(5) model [13] and a scotogenic model for loop-induced neutrino masses
[14], and we largely follow their methodology.
The algorithm is based on an iterative process where every newly proposed
parameter point is selected in a region near the previous one, and its
estimated viability drives its acceptance in the chain. To be more explicit,
every Markov chain starts on a randomly selected point within the parameter
interval ranges. Then, on every iteration, a new point with parameters
$\vec{\theta}^{n+1}$ is proposed in the vicinity of the previous point with
parameters $\vec{\theta}^{n}$. In our study, the new proposed parameter value
is computed according to
$\theta_{i}^{n+1}=\Pi\left\\{\theta_{i}^{n},\,\kappa\left(\theta_{i}^{\text{max}}-\theta_{i}^{\text{min}}\right)\right\\}\,,$
(28)
where $\Pi\left\\{a,b\right\\}$ is a Gaussian distribution with mean value $a$
and standard deviation $b$. The parameter $\kappa$ parametrizes the allowed
jump length between two iterations, and its value is chosen empirically in
order to maximize the efficiency of the algorithm. If the calculated value
exceeds the limits of the defined intervals for the model parameters, the
point is rejected.
We then compute the global likelihood associated with the proposal point
$\mathcal{L}^{n+1}$, which is accepted with a probability
$p=\text{min}\left(1,\,\mathcal{L}^{n+1}/\mathcal{L}^{n}\right)\,,$ (29)
which enforces the acceptance of points with higher likelihood and conditions
the acceptance of the proposal point on its viability relative to the
previously accepted one. For simplicity, we assume here that our experimental
constraints are uncorrelated and that the global likelihood is simply the
product of the individual likelihoods, i.e.
$\mathcal{L}^{n}~{}\equiv~{}\mathcal{L}(\vec{\theta}^{n},\vec{O})~{}=~{}\prod_{i}\mathcal{L}_{i}^{n}(\vec{\theta}^{n},O^{i})\,,$
(30)
where $\vec{O}$ is the set of experimental observables used as constraints.
Furthermore, we assume a Gaussian shape for all constraint likelihoods whose
uncertainties are given in Tables 2-3, except for constraints that only
correspond to upper or lower bounds. In these latter cases we apply a step
function whose likelihood is 1 if the bound is satisfied, and which otherwise
employs a Gaussian ‘corrective’ factor that diminishes the likelihood assigned
to the phase-space point as a function of the extent to which the bound is
violated. Within this numerical setup, the chain will converge to
high-likelihood domains whose area represents the viability of the model
according to the applied uncertainties on the constraints.
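A minimal sketch of the two likelihood shapes just described, assuming a Gaussian fall-off for the one-sided ‘corrective’ factor; the widths and example values below are illustrative assumptions:

```python
import math

# Sketch of the two constraint likelihood shapes described above: a
# Gaussian for two-sided measurements, and a one-sided 'smoothed step'
# for upper bounds, equal to 1 when the bound is satisfied and suppressed
# by a Gaussian tail beyond it. The widths are assumptions.
def gaussian_like(value, mean, sigma):
    return math.exp(-0.5 * ((value - mean) / sigma) ** 2)

def upper_bound_like(value, bound, width):
    if value <= bound:
        return 1.0
    return math.exp(-0.5 * ((value - bound) / width) ** 2)

# e.g. the cosmological bound m_Sigma < 0.12 eV with an assumed 10% width:
print(upper_bound_like(0.10, 0.12, 0.012))   # 1.0: bound satisfied
print(upper_bound_like(0.15, 0.12, 0.012))   # strongly suppressed
```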
Additionally, in order to speed up the convergence process, we modify our jump
parameter $\kappa$ to include a memory of proposal tries
$\kappa(t)=(1-\epsilon)^{t}\kappa_{0}\,,$ (31)
where $t$ is the number of tries before accepting a new point in the chain.
This becomes extremely helpful as the chain converges since some parameters
might have a very thin data-compatible range. As soon as one point is
accepted, $t$ is set to 0 again, maintaining the chain’s ability to jump to
another parameter region.
Finally, we focus solely on points within a chain for which convergence has
already occurred (i.e. where likelihood maxima are reached). Therefore, we set
up a ‘burn-in length’ parameter which automatically removes the first
$N_{\text{burn}}$ points of each chain. This parameter is once again chosen
empirically during the pre-runs by studying multiple likelihood evolution
plots.
To summarize, the MCMC hyper-parameters used in our setup are:
$N_{\text{burn}}$, $\kappa_{0}$, $\epsilon$, the number of chains launched
$N_{\text{chains}}$, and the length of the chains $L_{\text{chain}}$, which
determines the target number of accepted points for every chain. All of these
parameters are chosen empirically, depending on the model and the final
statistics desired for the distributions. As a final comment, we note that it
is usually better to allow for more chains, rather than longer chains, as this
ensures a more reliable exploration of parameter space.
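The full recipe above (proposal rule (28), acceptance rule (29), adaptive jump scale (31), and burn-in removal) can be sketched as follows; the two-parameter toy likelihood, ranges, and seed are placeholders, not the UTZ likelihood of (30):

```python
import math
import random

random.seed(7)  # for reproducibility of this illustration

def run_chain(log_like, lo, hi, length, kappa0=0.01, eps=5e-5, n_burn=40):
    """Metropolis-Hastings chain: Gaussian proposal (28), acceptance (29),
    shrinking jump scale (31), and burn-in removal."""
    dim = len(lo)
    theta = [random.uniform(lo[i], hi[i]) for i in range(dim)]  # random start
    ll = log_like(theta)
    chain, tries = [], 0
    while len(chain) < length:
        kappa = (1 - eps) ** tries * kappa0                     # eq. (31)
        prop = [random.gauss(theta[i], kappa * (hi[i] - lo[i]))
                for i in range(dim)]                            # eq. (28)
        tries += 1
        if any(not (lo[i] <= prop[i] <= hi[i]) for i in range(dim)):
            continue                                            # out of range: reject
        ll_new = log_like(prop)
        # eq. (29): accept with probability min(1, L_new / L_old)
        if random.random() < math.exp(min(0.0, ll_new - ll)):
            theta, ll = prop, ll_new
            chain.append((list(theta), ll))
            tries = 0          # reset the memory of proposal tries
    return chain[n_burn:]      # drop the first N_burn accepted points

# Toy stand-in for the product likelihood (30): one Gaussian "constraint"
# per parameter, centred on a hypothetical best-fit point.
toy_log_like = lambda th: -0.5 * ((th[0] - 0.3)**2 + (th[1] + 0.2)**2) / 0.01
points = run_chain(toy_log_like, lo=[-1.0, -1.0], hi=[1.0, 1.0], length=500)
print(len(points))  # 460 accepted points survive the burn-in
```

Note that, as in the text, the chain length counts accepted points, so rejected and out-of-range proposals only increase the try counter that shrinks $\kappa$.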
### 4.2 UTZ Specifics
LO UTZ Model Parameter MCMC Ranges & Global Best Fits
---
| $(a,b)_{d}\cdot 10^{3}$ | $(a,b)_{u}\cdot 10^{5}$ | $(a,b)_{\nu}\cdot 10^{1}$ | $(x,y)\cdot 10^{3}$
Range | $\left(\left[2,6\right],\left[10,20\right]\right)$ | $\left(\mp 30,\mp 800\right)$ | $\mp 5$ | $\mp 5$
LO | $\left(3.579,15.924\right)$ | $\left(6.720,-192.922\right)$ | $\left(-1.166,1.818\right)$ | $\left(-0.146,-4.641\right)$
HO | $\left(3.415,15.416\right)$ | $\left(7.604,-200.279\right)$ | $\left(-1.819,2.440\right)$ | $\left(3.728,3.501\right)$
| $(\gamma,\delta)_{d}$ | $(\gamma,\delta)_{\nu}$ | $(\rho,\phi)$ | $M_{\theta}\cdot 10^{-11}$ [GeV]
Range | $\left[0,2\pi\right]$ | $\left[0,2\pi\right]$ | $\left[0,2\pi\right]$ | $\left[0.1,10\right]$
LO | $\left(3.910,5.782\right)$ | $\left(3.163,4.553\right)$ | $\left(2.964,4.784\right)$ | $3.084$
HO | $\left(4.228,6.134\right)$ | $\left(0.464,2.293\right)$ | $\left(3.636,3.976\right)$ | $9.918$
HO UTZ Model Parameter MCMC Ranges & $1\sigma$-Preferred Fits
---
| $(c,d)_{d}\cdot 10^{5}$ | $(c,d)_{u}\cdot 10^{6}$ | $(c,d)_{\nu}\cdot 10^{3}$
Range | $\left(\mp 5,\mp 50\right)$ | $\left(\mp 5,\mp 50\right)$ | $\left(\mp 5,\mp 50\right)$
HO | $\left(0.640,10.811\right)$ | $\left(0.916,-37.298\right)$ | $\left(-0.896,-1.565\right)$
Table 4: The scan ranges of UTZ model parameters, along with the value of the
model parameter in the global best-fit dataset, for both LO and HO fits.
Recall that only two charged-fermion phase parameters are non-redundant at LO
[24], and so we have chosen $\\{\gamma_{d},\delta_{d}\\}$, as in [1].
Graphical representations of the MCMC evolution of these parameters are given
in Figure 1. Also recall that there are no relevant HO Majorana corrections,
that we have kept the HO corrections real, and that the global best-fit values
identified for the phases are not terribly meaningful, as we do not observe
very strong MCMC preferences for any phase values in our scans (they are all
relatively evenly distributed across $\left[0,2\pi\right]$).
Figure 1: Histograms demonstrating the distribution of MCMC iterations for
the Dirac (Majorana) scale-setting UTZ parameters $\\{a,b,c,d\\}_{d,u,\nu}$
($\\{x,y,M\\}$), in the LO (blue) and HO (red) scans. We have distributed our
results across 100 horizontal bins, while the sum of all vertical histogram
values in a given plot is equal to $N_{\Theta}$. By and large, phases, like
the $\\{c,d\\}_{f}$ shown, do not exhibit strong preferential values.
Following the algorithm above, we now specify the constraints that will guide
our likelihood evolution in the MCMC, as well as the hyper- and model-parameter
choices that control our statistics. Regarding the former, we have identified
and implemented the following set of MCMC _constraints_ and _predictions_:
$\displaystyle\text{{\bf{Constraints}}}:$
$\displaystyle\,\,\,\,\,\\{R_{f_{i}f_{3}}\,(f\in
u,d,e),\,\sin\theta_{ij}^{q,l}\,,\sin\delta^{q,l}\,,\Delta
m^{2}_{sol,atm},\,m_{\beta(\beta)},\,m_{\Sigma},\,\xi,\,\text{n.h.}\\}$
$\displaystyle\text{{\bf{Predictions}}}:$
$\displaystyle\,\,\,\,\,\\{R_{\nu_{i}\nu_{3}},\,m_{\beta\beta}/m_{\Sigma},\,m_{\beta}/m_{\Sigma},\,m_{\beta\beta}/m_{\beta},\,\sin\phi_{1},\,\sin\phi_{2}\\}$
$\displaystyle\text{{\bf{Quasi-Predictions}}}:$
$\displaystyle\,\,\,\,\,\\{\sin\delta^{q,l}\,,\,\xi\\}$ (32)
where $R_{f_{i}f_{3}}$ corresponds to the ratio of the $i$th-generation mass
over the third-generation mass ($R_{i3}\equiv m_{f_{i}}/m_{f_{3}}$) for the corresponding
family $f$, and where ‘n.h.’ corresponds to the constraint $\Delta
m^{2}_{sol}/m_{\nu_{1}}^{2}\gg 1$, which enforces a strictly-hierarchical
normal-ordered light neutrino spectrum. The associated numerical constraints
correspond to the UV bounds from Tables 2-3. Hence there are
$N_{\text{cons}}=21$ constraints to guide the MCMC likelihood evolution, and
$N_{\text{obs}}=7$ additional predictions that depend on correlated theory
parameters, but which do not impact MCMC likelihoods. Observe that
$\sin\delta^{q,l}$ and $\xi$ are listed as _quasi-predictions_ because, as
discussed above, the UV bounds associated to them are extremely large, either
due to IR experimental uncertainties ($\sin\delta^{l}$) or due to theory
uncertainties associated to radiative corrections ($\sin\delta^{q}$, $\xi$).
We will therefore use Tables 2-3 as (weak) MCMC constraints, but will also
present these results as novel predictions of the UTZ framework, along with
those already listed as such in (32).
Given (32), we then set the values of the MCMC hyper-parameters we have
employed to
$N_{\text{chains}}=2500,\,\,\,\,\,\,\,\,L_{\text{chains}}=500,\,\,\,\,\,\,\,\,N_{\text{burn}}=40,\,\,\,\,\,\,\,\,\kappa_{0}=0.01,\,\,\,\,\,\,\,\,\epsilon=0.00005\,,$
(33)
while Table 4 gives the ranges scanned over for the actual UTZ model
parameters outlined in Section 2. (Note that we have performed a preliminary
scan with the same statistics, but without applying the constraints of (32),
i.e. with all model parameter configurations assigned a likelihood of 1, in
order to confirm that the UTZ does not exhibit built-in preferred regions.
Taking the generic scan range $-5\cdot
10^{-1}\leq\\{a_{f},b_{f},x,y\\}\leq 5\cdot 10^{-1}$, we find that all
parameter distributions are flat; in other words, the shapes of the
distributions presented in Fig. 1 are truly driven by (32).) The ranges listed
for both were identified from successful preliminary MCMC runs with broader
model-parameter ranges, coarser hyper-parameter specifications and, most
importantly, general physics considerations from Section 2, which we now
discuss.
Considering the Majorana sector, we heuristically observe that establishing
the sequential dominance condition in (11) with $M_{3}/M_{2}\sim 10^{n}$
requires $\text{max}(x,y)\sim\mathcal{O}(10^{-n})$, and this is largely
independent of the scale $M_{\theta}$. We have required $n\geq 3$, to truly
establish the third-family Majorana dominance implied by (5). Requiring
$M_{2}\gg M_{1}$ of course requires further suppression between $x$ and $y$,
such that $M_{2}/M_{1}\sim 10^{n}$ (roughly) corresponds to
$\text{min}(x,y)\sim\text{max}(x,y)\cdot 10^{-n/2}$, and we recall that the
qualitative physics leading to (12) does indeed imply said additional
hierarchy. However, we also notice from (5) that the two coefficients are
sourced from Lagrangian terms that enter at the same power counting
(suppressed by $M^{4}$). While the combinations of vevs and coefficients can
lead to additional suppression, in Table 4 we have kept the scan range for $y$
on the same generic order of magnitude as $x$, which of course allows for the
additional suppression, but does not enforce it.
Scan ranges for the LO Dirac parameters $\\{a,b\\}_{f}$ are determined by
observing that the LO Dirac Lagrangian in (4) only exhibits one order of
messenger mass suppression w.r.t. the leading third-generation scale-setting
terms. Allowing for a broad range of Wilson coefficients and flavon vevs, we
consider $-5\cdot 10^{-1}<\\{a,b\\}_{f}<5\cdot 10^{-1}$ as a reasonable first
constraint on preliminary MCMC scans, which we then iteratively refine given
observed preferential domains. Following this procedure, we have noticed that
the up-family parameters prefer to be (roughly) symmetrically distributed
around zero, and extend to $\pm\mathcal{O}(10^{-4})$ ($\mathcal{O}(10^{-3})$)
for the $a_{u}$ ($b_{u}$) terms. The down-quark and charged-lepton parameters
are also symmetric about zero, but with centers around $\mathcal{O}(10^{-3})$
($\mathcal{O}(10^{-2})$) for $a_{d}$ ($b_{d}$). In Table 4 and Figure 1 we
have only considered the positive branch of these parameters. Finally, the
Dirac neutrino parameters are also distributed in a roughly symmetric way
about zero, with both $a_{\nu}$ and $b_{\nu}$ peaked around
$\mathcal{O}(10^{-1})$.
Upon identifying the final LO scan ranges as above, we then consider the HO
Dirac parameters $\\{c,d\\}_{f}$, which we recall from (13) contribute at
$1/M_{i,f}^{4}$ in the UTZ OPE, i.e. at one order higher than the leading
terms. Consistent with our power-counting philosophy at LO, we require these
terms be at least one order of magnitude smaller than their LO counterparts.
We then consider the analytic hierarchy between the HO operators identified in
(15) suggesting the $c_{f}$ correction $\propto S^{2}$ be yet further
suppressed w.r.t. $d_{f}$. To this end, if we have identified a scan range of
$|\text{min}\\{a,b\\}_{f}|<\mathcal{O}(10^{-n})$, we require
$|d_{f}|<\mathcal{O}(10^{-n-1})$ and $|c_{f}|<\mathcal{O}(10^{-n-2})$. While
this of course does not forbid the possibility that $|d_{f}|\sim|c_{f}|$ (as
is also in principle allowed given slightly non-universal messenger masses
and/or hierarchical Wilson coefficients), it is sufficiently generic for our
purposes and, in any event, we observe that these HO corrections do not
converge on highly-preferential domains regardless, which is clearly visible
in the last two rows of Figure 1. Note that, for simplicity, we have kept
these HO corrections real.
Phase Combos | $\\{\gamma_{d},\delta_{d}\\}$ | $\\{\gamma_{d},\gamma_{u}\\}$ | $\\{\gamma_{d},\delta_{u}\\}$ | $\\{\gamma_{u},\delta_{d}\\}$ | $\\{\delta_{u},\delta_{d}\\}$ | $\\{\gamma_{u},\delta_{u}\\}$
---|---|---|---|---|---|---
LO Max Likelihood $\times 100$ | 3.22 | 3.57 | 4.35 | 1.17 | 1.30 | 2.35 $\cdot 10^{-3}$
Table 5: The maximum likelihoods returned from preliminary LO MCMC scans with
$N_{\text{chains}}=500$, $L_{\text{chain}}=250$, and $N_{\text{burn}}=20$,
upon choosing different charged fermion phase configurations. Note that we
have shown the results for the $\\{\gamma_{d},\delta_{d}\\}$ configuration in
Figures 2-6, to readily compare with [1].
Finally, we note that in all of the above considerations we have allowed for
generic LO phase configurations in the neutrino sector (although recall that
in the sequentially-dominant IR limit only one neutrino phase, formed from a
combination of said UV phases, dominates the phenomenology), but have chosen
the two non-redundant LO phases in the charged fermion sector as in [1], i.e.
we allow for non-zero $\gamma_{d}$ and $\delta_{d}$. This allows us to readily
compare the physical conclusions of our analysis with those of [1].
In Table 5 we show that this choice is amongst the higher-likelihood
configurations given the six possible pairings, having considered other
configurations in preliminary MCMC scans with limited statistics. Up to this
choice, we have otherwise allowed for arbitrary phases in our MCMC scans; all
are constrained to $\left[0,2\pi\right]$, and we observe that there is
typically no strong MCMC preference for said phase domains. For this reason we
do not show their MCMC histogram distributions in Fig. 1.
Global Best-Fit UV Fermionic Mass Parameters
---
Quark Masses | $m_{u}/m_{t}$ | $m_{c}/m_{t}$ | $m_{d}/m_{b}$ | $m_{s}/m_{b}$
LO Fit | $2.518\cdot 10^{-6}$ | $1.805\cdot 10^{-3}$ | $7.738\cdot 10^{-4}$ | $1.563\cdot 10^{-2}$
HO Fit | $3.126\cdot 10^{-6}$ | $1.862\cdot 10^{-3}$ | $7.397\cdot 10^{-4}$ | $1.490\cdot 10^{-2}$
Lepton Masses | $m_{e}/m_{\tau}$ | $m_{\mu}/m_{\tau}$ | $m_{\nu_{1}}/m_{\nu_{3}}$ | $m_{\nu_{2}}/m_{\nu_{3}}$
LO Fit | $2.621\cdot 10^{-4}$ | $5.423\cdot 10^{-2}$ | $8.450\cdot 10^{-3}$ | $1.149\cdot 10^{-1}$
HO Fit | $2.475\cdot 10^{-4}$ | $5.341\cdot 10^{-2}$ | $2.540\cdot 10^{-3}$ | $1.298\cdot 10^{-1}$
Global Best-Fit UV Fermionic Mixing Parameters
---
CKM Parameters | $\sin\theta_{12}^{q}$ | $\sin\theta_{23}^{q}$ | $\sin\theta_{13}^{q}$ | $\sin\delta_{CP}^{q}$
LO Fit | $0.225$ | $1.762\cdot 10^{-2}$ | $3.429\cdot 10^{-3}$ | $0.485$
HO Fit | $0.225$ | $1.752\cdot 10^{-2}$ | $3.247\cdot 10^{-3}$ | $0.446$
PMNS Parameters | $\sin\theta_{12}^{l}$ | $\sin\theta_{23}^{l}$ | $\sin\theta_{13}^{l}$ | $\sin\delta_{CP}^{l}$
LO Fit | $0.550$ | $0.704$ | $0.146$ | $-0.845$
HO Fit | $0.547$ | $0.714$ | $0.150$ | $-0.975$
Table 6: The UTZ global best-fits for fermionic mass (top) and mixing (bottom)
parameters in the UV, as determined from the MCMC scan described in Section 4.
The upper lines correspond to the fit allowing for only LO UTZ Lagrangian
parameters, while the lower lines also account for HO parameters, both of
whose MCMC distributions are given in Table 4. Figures 2-3 show the total
spread of MCMC predictions in this sector, and also highlight the LO global
fits presented in this table with a black target marker.
### 4.3 Results and Analysis
We implement the MCMC scan as described above on a computing cluster. The
output is a data-set composed of UTZ model parameters, associated values for
the constraints and predictions from (32), and the corresponding likelihood of
said predictions for _each_ saved MCMC iteration. We denote the corresponding
data-set $\Theta_{i}^{j}$, where the index $i$ labels the data-set and the
index $j$ runs over its entries, i.e. the model / physical / likelihood
parameter(s). Hence $i\in\\{1,2,...,N_{\Theta}\\}$, where
$N_{\Theta}=N_{\text{chains}}\cdot\left(L_{\text{chains}}-N_{\text{burn}}\right)\,,$
(34)
i.e. the overall number of data-sets, each of which has
$j\in\\{1,2,...,L_{\Theta}\\}$ constituents, where
$L_{\Theta}=N_{\text{model}}+N_{\text{cons}}+N_{\text{obs}}+1\,,$ (35)
with the additional unit in $L_{\Theta}$’s counting coming from the standard
likelihood function $\mathcal{L}$ for a set of model predictions compared to
experiment. Given (33), $N_{\Theta}=1.15\cdot 10^{6}$, and we now examine the
physical and model parameters embedded therein.
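The counting above can be verified directly from (33)-(35); the value of $N_{\text{model}}$ below is an assumed count read off from Table 4 (15 LO parameters plus 6 HO ones), not a number quoted in the text:

```python
# Bookkeeping check of (34)-(35) given the hyper-parameters in (33).
# N_cons = 21 and N_obs = 7 are quoted in Section 4.2; N_model = 21 is
# an assumed count of the Table 4 entries (15 LO plus 6 HO parameters).
N_chains, L_chains, N_burn = 2500, 500, 40
N_Theta = N_chains * (L_chains - N_burn)    # eq. (34)
print(N_Theta)                              # 1150000, i.e. 1.15e6

N_model, N_cons, N_obs = 21, 21, 7
L_Theta = N_model + N_cons + N_obs + 1      # eq. (35): +1 for the likelihood
print(L_Theta)                              # 50
```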
#### Fermion Flavour Mass Ratios
Figure 2: MCMC density plots for UTZ quark and lepton flavoured mass ratio
predictions. Plots are generated with the hyper-parameter choices in (33) with
model-parameter variations as given in Table 4. The blue (red) regions
correspond to the LO (HO) MCMC scan results, with darker regions corresponding
to places of higher density. Gray regions represent the UV bounds for the mass
ratios as presented in Table 3, and the black target markers correspond to the
global best-fit values shown numerically in Table 6.
We first examine the MCMC results for the UTZ’s predictions in the fermion
mass sector. Figure 2 illustrates these for mass ratios in the up quark, down
quark, charged lepton, and neutrino families. Both Figure 2 (and upcoming
figures) and Table 6 give results for the LO and, when indicated, HO MCMC
scans, with the former given in blue and the latter in red. Note that these
figures represent density plots, in that darker regions correspond to
parameter domains where more Markov chains evolved. Also, the black ‘target’
marker in Figures 2-6 corresponds to the location of the _overall_ (global)
best-fit data-set $\Theta_{i}$, which is also given numerically in Table 6.
The gray bands correspond to the global data available from the PDG (NuFit)
collaborations for the charged fermion (neutrino) masses, corrected to the UV
according to the discussion in Section 3.1. Comparing these to the blue and
red regions, we see that the UTZ is capable of successfully resolving the
entire charged fermion mass spectrum, for both quarks and leptons, up to the
RGE and threshold correction uncertainties. Furthermore, the UTZ predictions
for (currently unmeasured) neutrino mass ratios are shown in the bottom-right
panel; given the model parameter ranges explored in Table 4, the ratio
$R_{i3}\equiv m_{\nu_{i}}/m_{\nu_{3}}$ is densely populated within $2.5\cdot
10^{-2}<R_{23}<2\cdot 10^{-1}$ for the heavier generations while the smaller
mass ratio is densely populated between $1.5\cdot 10^{-3}<R_{13}<2\cdot
10^{-2}$. However, we see that, albeit less frequently, much larger neutrino
mass hierarchies are also resolved, with $R_{13}\,(R_{23})$ falling below
$10^{-6}\,(5\cdot 10^{-3})$.
Finally, we notice from the up-family plot that the inclusion of the red HO
corrections sourced from (13) does not qualitatively change the physics
conclusions of the blue LO regions. We have in fact observed this quite
generically across family and observable sectors, and hence for visual clarity
we only display the dominant LO results in what follows, unless otherwise
specified.
Figure 3: The same as Figure 2, but for the CKM and PMNS mixing angles and
Dirac CP-violating phases.
#### Fermion Mixings and CP-Violation
In Figure 3 we have presented the MCMC UTZ predictions for the CKM (top two
plots) and PMNS (bottom two plots) mixing angles $\theta_{ij}^{q,l}$ as well
as the associated Dirac CP-violating phases $\delta^{q,l}$. Here we again
compare to (radiatively corrected) data from the PDG and NuFit given in gray,
and notice that the blue (red) LO (HO) UTZ Lagrangian is again highly
successful at resolving these parameters. Indeed, while we observe that the
overlap with PMNS uncertainty bands is perhaps qualitatively more successful
than that of the quarks, the regions overlap with the UV bounds for all
fermion families.
Figure 4: The same as Figure 3, but now comparing the quark and lepton Dirac
CP-violating phases, and presenting the novel predictions for the Majorana CP-
violating phases $\phi_{1,2}$.
Figure 5: The same as Figure 2, but now comparing the neutrino mass ratio
observables predicted by the UTZ.
Note that this conclusion differs from the naive analysis in [1], which found
elements in the third row and column of the CKM to be outside of the UV
uncertainty bands considering only the LO UTZ Lagrangian, a deviation sourced
by the $\theta_{23}^{q}$ mixing angle. While we observe that the bulk of the
MCMC sample points for $\sin\theta_{23}^{q}$ are indeed lower than the allowed
uncertainty region, a significant number of LO points do overlap successfully.
Studying [12], one concludes that lower values of $\theta_{23}^{q}$ tend to
correspond to higher $\tan\beta$ RGE scenarios. Hence independent evidence
that a background spectrum imitating this UV MSSM structure (assuming a
certain threshold correction structure and SUSY breaking scale, of course) is
not physical would in principle also disfavor the UTZ theory of flavor, up to
the extent that the bounds on $\sin\theta_{23}^{q}$ drive our current MCMC
likelihoods.
In addition to $\theta_{23}^{q}$, the bottom-right panel of Figure 3 suggests
that resolving values of $\sin\delta^{l}\sim 0$ simultaneously with
$\theta_{23}^{l}$ in the UTZ is disfavored compared to larger
$|\sin\delta^{l}|$ in the UV. Hence the $\{\theta_{23}^{l},\delta^{l}\}$
sector of the PMNS represents an exceptional opportunity to constrain
significant portions of the UTZ parameter space, as information on
$\delta^{l}$ from neutrino oscillations continues to improve.
To fully present the CP-violating sector of the UTZ, we have also presented
our MCMC results for $\sin\delta^{q,l}$ side-by-side in Figure 4, along with
results for the Majorana CP-violating phases $\phi_{1,2}$ of the PMNS matrix.
Reliable experimental constraints on $\phi_{1,2}$ are presently non-existent,
and so they also represent opportunities to falsify / further constrain our
model space. However, one observes that a broad range of Majorana phases are
predicted in the UTZ. We have contextualized this observation by including the
MCMC histograms for these phases, analogous to the model parameters presented
in Fig. 1, in the bottom two panels of Fig. 4. These histograms reveal that,
while it is true that virtually all values of $\sin\phi_{1,2}$ are acceptable,
a huge number of Markov chains evolved to $|\sin\phi_{1,2}|\sim 1$. Hence
improved data (and therefore more rigid constraints in (32)) could allow us
to resolve more precise predictions for these Majorana phases.
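The histogram diagnostic described above can be sketched as follows; the chain array here is a random placeholder, not the actual UTZ MCMC output, and the binning choices are purely illustrative.

```python
import numpy as np

# Placeholder for MCMC output: rows are samples, columns are the two
# Majorana phases phi_1, phi_2 (radians). The real chains come from the
# paper's fitting machinery, which is not reproduced here.
rng = np.random.default_rng(0)
chain = rng.uniform(-np.pi, np.pi, size=(100_000, 2))

sin_phi = np.sin(chain)

# Histogram sin(phi_1) over its full range, as in the bottom panels of Fig. 4.
counts, edges = np.histogram(sin_phi[:, 0], bins=40, range=(-1.0, 1.0))

# Fraction of samples near |sin(phi)| ~ 1 -- the clustering the text
# reports for the actual UTZ chains.
near_one = np.mean(np.abs(sin_phi[:, 0]) > 0.9)
```

The same two lines (histogram plus a tail fraction) applied per phase reproduce the kind of summary shown in Fig. 4.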
Finally, as with the masses presented in Figure 2, we have noticed that HO
corrections (shown in red in the top left panel of Fig. 3, and in the
histograms of Fig. 4) do not qualitatively alter the LO physical conclusions
we present above. This is because we have been conservative in Table 4
regarding the relative magnitude of said HO corrections with respect to the LO
parameters, which is of course motivated by the relative suppression of these
Lagrangian terms. (We did not enforce this suppression on $d_{d}$ in [1], and
hence allowed it to be comparatively large.)
Figure 6: MCMC scatter results for $\beta$-decay and absolute neutrino mass
scale observables. Gray regions correspond to IR bounds from KamLAND-Zen
($m_{\beta\beta}$), KATRIN ($m_{\beta}$), and Planck ($m_{\Sigma}$), corrected
to the UV (if stated) with the conservative $\mathbb{s}$ evolution factor for
the heaviest generation. As noted in the text, these results are presented as
consistency tests of our approach.
#### $\beta$-Decay and Cosmological Probes
We now focus on the sector of observables sensitive to the absolute neutrino
mass scale and the Majorana (vs. Dirac) nature of the neutrino field, i.e.
$m_{\Sigma}$ and $m_{\beta(\beta)}$. As mentioned above, we only consider
ratios of these observables as highly meaningful predictions in the UTZ, and
we present their regions in Fig. 5, where it is clear that, at least for the
parameter-space we have explored, the UTZ largely prefers values for the
ratios $m_{\beta(\beta)}/m_{\Sigma}$ and $m_{\beta\beta}/m_{\beta}$ given by
$5.9\cdot 10^{-1}\lesssim\frac{m_{\beta\beta}}{m_{\Sigma}}\lesssim 7.3\cdot 10^{-1}\,,\qquad 7.0\cdot 10^{-1}\lesssim\frac{m_{\beta}}{m_{\Sigma}}\lesssim 8.4\cdot 10^{-1}\,,\qquad 7.8\cdot 10^{-1}\lesssim\frac{m_{\beta\beta}}{m_{\beta}}\lesssim 9.2\cdot 10^{-1}\,.$ (36)
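As a quick arithmetic sanity check (ours, not the paper's), the third range in (36) should be compatible with the interval obtained by dividing the first range by the second, since $m_{\beta\beta}/m_{\beta}=(m_{\beta\beta}/m_{\Sigma})/(m_{\beta}/m_{\Sigma})$:

```python
# Quoted ranges from (36)
bb_over_sum = (0.59, 0.73)   # m_bb / m_Sigma
b_over_sum = (0.70, 0.84)    # m_b  / m_Sigma
bb_over_b = (0.78, 0.92)     # m_bb / m_b

# Widest interval implied by dividing the first two ranges endpoint-wise
implied = (bb_over_sum[0] / b_over_sum[1], bb_over_sum[1] / b_over_sum[0])

# The quoted range must sit inside the implied one. It does: the implied
# interval is roughly (0.70, 1.04); the quoted range is tighter because the
# two constituent ratios are correlated sample-by-sample in the MCMC.
assert implied[0] <= bb_over_b[0] and bb_over_b[1] <= implied[1]
```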
In the event that a positive signal for $m_{\beta(\beta)}$ is ever observed,
(36) will serve as an excellent probe of the UTZ construction. One also
notices in the right panel of Fig. 5 that the MCMC has evolved such that
relatively small values of the neutrino-mass-squared difference ratio $\xi$
are preferred, with respect to the possible UV upper bound in Table 3.
However, the observed region is still consistent with both the low-$\tan\beta$
/ SM-like RGE and high-$\tan\beta$-like RGE scenarios discussed in Section
3.1. (For consistency with the quark sector we have trained our MCMC on the
more uncertain UV scenario for $\xi$, which allows for the possibility of
high-$\tan\beta$-like RGE. If we instead train on the low-$\tan\beta$ /
SM-like RGE scenario, the 1$\sigma$ regions in Fig. 6 shift upwards to center
on the darker, smaller UV/IR uncertainty band.)
Of course, as mentioned, we can also report the actual values of the
constituent functions $m_{\Sigma}$ and $m_{\beta(\beta)}$, despite them being
less meaningful due to their sensitivity to $M_{\theta}$. For completeness we
do so in Figure 6. As expected, given their use as constraints in (32), we
observe that the UTZ readily evades available bounds from (e.g.) KATRIN,
Planck, and KamLAND-Zen. However, we emphasize that this statement effectively
amounts to a consistency check on the MCMC framework implemented.
### 4.4 Summary Comments
Before concluding, we summarize the results in the above Sections:
* •
The LO UTZ Lagrangian in (4) and (5) is sufficient to describe all available
data on fermionic mass and mixing, as well as data constraining the overall
scale of neutrino masses. This result is novel, and represents a substantial
improvement on the phenomenological findings of [1], which found that HO
corrections were necessary to describe the third row and column of the CKM
matrix (due to $\theta_{23}^{q}$). This illustrates the power of our MCMC
algorithm to robustly explore viable UTZ parameter spaces, in comparison to
less sophisticated methods. However, $\theta_{23}^{q}$ still represents an
excellent parameter to exclude UTZ parameter spaces in the future.
* •
The MCMC algorithm also allows us to present robust predictions for
observables that are not well constrained by data — e.g. leptonic CP-violating
phases $\sin\{\delta^{l},\phi_{1},\phi_{2}\}$ and neutrino mass ratios
$m_{\nu_{i}}/m_{\nu_{3}}$, $\xi$, $m_{\beta(\beta)}/m_{\Sigma}$, and
$m_{\beta\beta}/m_{\beta}$ — despite the fact that said observables depend
sensitively on theory parameters that are highly-correlated to other, well-
constrained observable sectors. These findings provide excellent opportunities
for the falsification / exclusion of UTZ parameter spaces. We have presented
these predictions using the (N)LO UTZ Lagrangian in Figs. 2-5.
* •
The HO corrections generated by the operators in (13) do not qualitatively
change the physics conclusions driven by the dominant operators in (4)-(5).
This is due to our (natural) assumption that said HO parameters are suppressed
with respect to LO parameters, a constraint that we did not impose as
rigorously in [1]. As a result, the UTZ’s predictions are dominated by as few
as nine IR theory parameters. Hence the UTZ is realized as a _well-defined_,
_stable_, and _predictive_ effective theory of flavour.
* •
The results we have presented are of course sensitive to the hyper- and model-
parameter ranges we have explored, which are presented in (33) and Table 4.
While we have taken care in identifying these ranges, and have indeed
demonstrated that they are successful, they are not necessarily unique.
Exploring alternative parameter spaces, possibly with even more statistics
than implied by (33), will be especially motivated in the event data fully
excludes the predictions presented in Figures 2-5, and/or a specific
renormalizable completion (with exact RGE / threshold behavior) of the UTZ is
identified.
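The bullets above lean on the MCMC scan's ability to robustly explore the parameter space. As a minimal illustration of the random-walk Metropolis step underlying such scans (a generic sketch on a toy Gaussian target, not the authors' implementation or likelihood):

```python
import math
import random

def metropolis(log_like, theta0, step, n_steps, seed=1):
    """Random-walk Metropolis sampler (illustrative sketch)."""
    rng = random.Random(seed)
    theta, ll = list(theta0), log_like(theta0)
    chain = []
    for _ in range(n_steps):
        proposal = [t + rng.gauss(0.0, step) for t in theta]
        ll_prop = log_like(proposal)
        # Accept with probability min(1, exp(ll_prop - ll))
        if ll_prop >= ll or rng.random() < math.exp(ll_prop - ll):
            theta, ll = proposal, ll_prop
        chain.append(list(theta))
    return chain

# Toy Gaussian "likelihood" centered on a fake best-fit point.
best_fit = [0.5, -1.0]
log_like = lambda th: -sum((t - b) ** 2 for t, b in zip(th, best_fit))

chain = metropolis(log_like, [0.0, 0.0], step=0.3, n_steps=20_000)
means = [sum(s[i] for s in chain) / len(chain) for i in range(2)]
# means approaches best_fit as the chain equilibrates
```

A production scan would replace the toy `log_like` with one built from the observable constraints (e.g. (32)) and would monitor burn-in and autocorrelation before quoting regions.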
## 5 Summary and Outlook
We have re-examined the Universal Texture Zero (UTZ) model of flavour
presented in [1] in light of updated experimental constraints and in the
context of a novel Markov Chain Monte Carlo (MCMC) analysis routine. We have
considered the UTZ’s predictions at both leading- and next-to-leading orders
in its effective theory operator product expansion, and the associated
phenomenological pre- and post-dictions are given in Figures 2-5. There we
observe that the UTZ is capable of fully resolving the fermionic mass and
mixing spectrum as constrained by global data sets, for both the quark and
lepton sectors, up to uncertainties regarding radiative corrections to/from
the ultraviolet. We have also presented a host of novel, robust predictions
for poorly-constrained leptonic observables, in particular the PMNS CP-
violating phases $\delta^{l}$, $\phi_{1,2}$ and neutrino mass-sector ratio
observables $m_{\beta(\beta)}/m_{\Sigma}$ and $m_{\beta\beta}/m_{\beta}$.
These latter results offer a route to UTZ falsification and/or parameter-space
exclusions.
Our analysis therefore greatly improves on the proof-in-principle
phenomenology pursued in the original UTZ publication [1], which was incapable
of yielding a robust prediction for even (e.g.) $\delta^{l}$, and which did
not consider observables like $m_{\beta(\beta)}$ and $m_{\Sigma}$. Indeed, our
updated MCMC analysis revises the claim from [1] that the leading UTZ
Lagrangian is insufficient to account for all fermionic data, before
considering next-to-leading corrections. However, as discussed at the end of
Section 4, there is still room for improvement, as a yet more exhaustive scan
of the UTZ parameter space is in principle possible. We also note that, while
our MCMC algorithm fully accounts for theory correlations amongst UTZ model
parameters, we have not accounted for _experimental_ correlations, beyond what
is already accounted for in the global fits presented in Section 3. While we
do not expect such correlations to qualitatively change our conclusions,
pursuing such an analysis in the future could be interesting.
Besides these future technical/phenomenological improvements, we also note
that significant progress has recently been made in rigorously connecting
theories of flavour controlled by non-Abelian discrete symmetries (and
additional shaping symmetries) to string theories with toroidal orbifold
compactifications — see e.g. [68, 69, 70, 71]. It would be interesting to
determine whether the UTZ (or a close cousin) could be formally embedded into
one of these structures, thereby providing a UV origin for the field and
symmetry content of Table 1, and an unambiguous background spectrum that would
minimize the radiative uncertainties that we have considered agnostically in
our effective field theory setup. After all, the absence of $\Delta(27)$
contractions with non-trivial singlets in (4), (5), and (13) is already
consistent with the stringy models examined in [72].
We leave these questions to future work, and simply conclude that Figures 2-6
indicate that the UTZ represents an appealing, minimal, and phenomenologically
viable model of flavour physics, and therefore provides some support for the
idea that observed flavour patterns are the result of as-yet-undiscovered
Beyond-the-Standard-Model dynamics, rather than (e.g.) random chance.
## Acknowledgements
JT and IdMV are indebted to Prof. Graham Garland Ross (1944-2021), whose
supervision and collaboration greatly inspired our research interests,
including the Universal Texture Zero theory we originally co-authored in [1]
and further studied in this paper. We thank him for his mentorship, both
personal and professional, and for the time we shared together over the years.
Our collaboration also thanks Ben Risebrow for his helpful contributions over
the course of his summer research project.
JB is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation) under grant 396021762 – TRR 257. IdMV acknowledges funding from
Fundação para a Ciência e a Tecnologia (FCT) through the contract
UID/FIS/00777/2020 and was supported in part by FCT through projects CFTP-FCT
Unit 777 (UID/FIS/00777/2019), PTDC/FIS-PAR/29436/2017, CERN/FIS-PAR/0004/2019
and CERN/FIS-PAR/0008/2019 which are partially funded through POCTI (FEDER),
COMPETE, QREN and EU. The work of ML is funded by FCT Grant
No.PD/BD/150488/2019, in the framework of the Doctoral Programme IDPASC-PT. JT
gratefully acknowledges funding from the European Union’s Horizon 2020
research and innovation programme under the Marie Skłodowska-Curie grant
agreement No. 101022203.
## References
* [1] Ivo de Medeiros Varzielas, Graham G. Ross, and Jim Talbert. A Unified Model of Quarks and Leptons with a Universal Texture Zero. JHEP, 03:007, 2018.
* [2] J. M. Gerard. FERMION MASS SPECTRUM IN SU(2)-L x U(1). Z. Phys. C, 18:145, 1983.
* [3] R. Sekhar Chivukula and Howard Georgi. Composite Technicolor Standard Model. Phys. Lett. B, 188:99–104, 1987.
* [4] R. L. Workman. Review of Particle Physics. PTEP, 2022:083C01, 2022.
* [5] Ivan Esteban, M. C. Gonzalez-Garcia, Michele Maltoni, Thomas Schwetz, and Albert Zhou. The fate of hints: updated global analysis of three-flavor neutrino oscillations. JHEP, 09:178, 2020.
* [6] I. de Medeiros Varzielas, S. F. King, and G. G. Ross. Neutrino tri-bi-maximal mixing from a non-Abelian discrete family symmetry. Phys. Lett. B, 648:201–206, 2007.
* [7] Ernest Ma. Neutrino Mass Matrix from Delta(27) Symmetry. Mod. Phys. Lett. A, 21:1917–1921, 2006.
* [8] Christoph Luhn, Salah Nasri, and Pierre Ramond. The Flavor group Delta(3n**2). J. Math. Phys., 48:073501, 2007.
* [9] Ivo de Medeiros Varzielas. $\Delta(27)$ family symmetry and neutrino mixing. JHEP, 08:157, 2015.
* [10] Hajime Ishimori, Tatsuo Kobayashi, Hiroshi Ohki, Yusuke Shimizu, Hiroshi Okada, and Morimitsu Tanimoto. Non-Abelian Discrete Symmetries in Particle Physics. Prog. Theor. Phys. Suppl., 183:1–163, 2010.
* [11] Pierre Ramond, R. G. Roberts, and Graham G. Ross. Stitching the Yukawa quilt. Nucl. Phys. B, 406:19–42, 1993.
* [12] Graham Ross and Mario Serna. Unification and fermion mass structure. Phys. Lett. B, 664:97–102, 2008.
* [13] Jordan Bernigaud, Adam K. Forster, Björn Herrmann, Stephen F. King, Werner Porod, and Samuel J. Rowley. Data-driven analysis of a SUSY GUT of flavour. 11 2021.
* [14] Maud Sarazin, Jordan Bernigaud, and Björn Herrmann. Dark matter and lepton flavour phenomenology in a singlet-doublet scotogenic model. JHEP, 12:116, 2021.
* [15] Luis E. Ibanez and Graham G. Ross. Discrete gauge symmetry anomalies. Phys. Lett. B, 260:291–295, 1991.
* [16] Luis E. Ibanez and Graham G. Ross. Should discrete symmetries be anomaly free? 1 1991.
* [17] Tom Banks and Michael Dine. Note on discrete gauge anomalies. Phys. Rev. D, 45:1424–1427, 1992.
* [18] Takeshi Araki. Anomaly of Discrete Symmetries and Gauge Coupling Unification. Prog. Theor. Phys., 117:1119–1138, 2007.
* [19] Takeshi Araki, Tatsuo Kobayashi, Jisuke Kubo, Saul Ramos-Sanchez, Michael Ratz, and Patrick K. S. Vaudrevange. (Non-)Abelian discrete anomalies. Nucl. Phys. B, 805:124–147, 2008.
* [20] Jim Talbert. Pocket Formulae for Non-Abelian Discrete Anomaly Freedom. Phys. Lett. B, 786:426–431, 2018.
* [21] Ben Gripaios. Gauge anomalies of finite groups. Phys. Rev. D, 105(10):105008, 2022.
* [22] Joe Davighi, Ben Gripaios, and Nakarin Lohitsiri. Anomalies of non-Abelian finite groups via cobordism. JHEP, 09:147, 2022.
* [23] J. A. Casas, J. R. Espinosa, A. Ibarra, and I. Navarro. Nearly degenerate neutrinos, supersymmetry and radiative corrections. Nucl. Phys. B, 569:82–106, 2000.
* [24] R. G. Roberts, A. Romanino, Graham G. Ross, and L. Velasco-Sevilla. Precision Test of a Fermion Mass Texture. Nucl. Phys. B, 615:358–384, 2001.
* [25] C. D. Froggatt and Holger Bech Nielsen. Hierarchy of Quark Masses, Cabibbo Angles and CP Violation. Nucl. Phys. B, 147:277–298, 1979.
* [26] Ivo de Medeiros Varzielas and Graham G. Ross. SU(3) family symmetry and neutrino bi-tri-maximal mixing. Nucl. Phys. B, 733:31–47, 2006.
* [27] Ivo de Medeiros Varzielas and Graham G. Ross. Discrete family symmetry, Higgs mediators and $\theta_{13}$. JHEP, 12:041, 2012.
* [28] Howard Georgi and C. Jarlskog. A New Lepton - Quark Mass Relation in a Unified Theory. Phys. Lett. B, 86:297–300, 1979.
* [29] Raoul Gatto, G. Sartori, and M. Tonin. Weak Selfmasses, Cabibbo Angle, and Broken SU(2) x SU(2). Phys. Lett. B, 28:128–130, 1968.
* [30] S. F. King and Graham G. Ross. Fermion masses and mixing angles from SU (3) family symmetry and unification. Phys. Lett. B, 574:239–252, 2003.
* [31] Peter Minkowski. $\mu\to e\gamma$ at a Rate of One Out of $10^{9}$ Muon Decays? Phys. Lett. B, 67:421–428, 1977.
* [32] Murray Gell-Mann, Pierre Ramond, and Richard Slansky. Complex Spinors and Unified Theories. Conf. Proc. C, 790927:315–321, 1979.
* [33] Rabindra N. Mohapatra and Goran Senjanovic. Neutrino Mass and Spontaneous Parity Nonconservation. Phys. Rev. Lett., 44:912, 1980.
* [34] Tsutomu Yanagida. Horizontal Symmetry and Masses of Neutrinos. Prog. Theor. Phys., 64:1103, 1980.
* [35] S. F. King. Atmospheric and solar neutrinos with a heavy singlet. Phys. Lett. B, 439:350–356, 1998.
* [36] S. F. King. Atmospheric and solar neutrinos from single right-handed neutrino dominance and U(1) family symmetry. Nucl. Phys. B, 562:57–77, 1999.
* [37] S. F. King. Large mixing angle MSW and atmospheric neutrinos from single right-handed neutrino dominance and U(1) family symmetry. Nucl. Phys. B, 576:85–105, 2000.
* [38] S. F. King. Constructing the large mixing angle MNS matrix in seesaw models with right-handed neutrino dominance. JHEP, 09:011, 2002.
* [39] P. F. Harrison, D. H. Perkins, and W. G. Scott. Tri-bimaximal mixing and the neutrino oscillation data. Phys. Lett. B, 530:167, 2002.
* [40] Fredrik Björkeroth, Francisco J. de Anda, Ivo de Medeiros Varzielas, and Stephen F. King. Towards a complete A${}_{4}\times$ SU(5) SUSY GUT. JHEP, 06:141, 2015.
* [41] Fredrik Björkeroth, Francisco J. de Anda, Ivo de Medeiros Varzielas, and Stephen F. King. Towards a complete $\Delta(27)\times SO(10)$ SUSY GUT. Phys. Rev. D, 94(1):016006, 2016.
* [42] Fredrik Björkeroth, Francisco J. de Anda, Stephen F. King, and Elena Perdomo. A natural S4 × SO(10) model of flavour. JHEP, 10:148, 2017.
* [43] C. S. Lam. Symmetry of Lepton Mixing. Phys. Lett. B, 656:193–198, 2007.
* [44] Stephen F. King and Christoph Luhn. Neutrino Mass and Mixing with Discrete Symmetry. Rept. Prog. Phys., 76:056201, 2013.
* [45] Jordan Bernigaud, Ivo de Medeiros Varzielas, and Jim Talbert. Reconstructing Effective Lagrangians Embedding Residual Family Symmetries. Eur. Phys. J. C, 81(1):65, 2021.
* [46] Elizabeth Ellen Jenkins and Aneesh V. Manohar. Algebraic Structure of Lepton and Quark Flavor Invariants and CP Violation. JHEP, 10:094, 2009.
* [47] Jim Talbert and Michael Trott. Dirac masses and mixings in the (geo)SM(EFT) and beyond. JHEP, 11:009, 2021.
* [48] Yilin Wang, Bingrong Yu, and Shun Zhou. Flavor invariants and renormalization-group equations in the leptonic sector with massive Majorana neutrinos. JHEP, 09:053, 2021.
* [49] Stefan Antusch, Jörn Kersten, Manfred Lindner, and Michael Ratz. Running neutrino masses, mixings and CP phases: Analytical results and phenomenological consequences. Nucl. Phys. B, 674:401–433, 2003.
* [50] N. Aghanim et al. Planck 2018 results. VI. Cosmological parameters. Astron. Astrophys., 641:A6, 2020. [Erratum: Astron.Astrophys. 652, C4 (2021)].
* [51] A. Gando et al. Search for Majorana Neutrinos near the Inverted Mass Hierarchy Region with KamLAND-Zen. Phys. Rev. Lett., 117(8):082503, 2016. [Addendum: Phys.Rev.Lett. 117, 109903 (2016)].
* [52] M. Aker et al. Direct neutrino-mass measurement with sub-electronvolt sensitivity. Nature Phys., 18(2):160–166, 2022.
* [53] Eleonora Di Valentino, Stefano Gariazzo, and Olga Mena. Most constraining cosmological neutrino mass bounds. Phys. Rev. D, 104(8):083504, 2021.
* [54] Eleonora Di Valentino and Alessandro Melchiorri. Neutrino Mass Bounds in the Era of Tension Cosmology. Astrophys. J. Lett., 931(2):L18, 2022.
* [55] M. Aker et al. Improved Upper Limit on the Neutrino Mass from a Direct Kinematic Method by KATRIN. Phys. Rev. Lett., 123(22):221802, 2019.
* [56] Matteo Agostini, Giovanni Benato, Jason A. Detwiler, Javier Menéndez, and Francesco Vissani. Toward the discovery of matter creation with neutrinoless double-beta decay. 2 2022.
* [57] Vincenzo Cirigliano et al. Neutrinoless Double-Beta Decay: A Roadmap for Matching Theory to Experiment. 3 2022.
* [58] Marek Olechowski and Stefan Pokorski. Heavy top quark and scale dependence of quark mixing. Phys. Lett. B, 257:388–392, 1991.
* [59] S. H. Chiu and T. K. Kuo. Renormalization of the quark mass matrix. Phys. Rev. D, 93(9):093006, 2016.
* [60] Stefan Antusch, Jörn Kersten, Manfred Lindner, Michael Ratz, and Michael Andreas Schmidt. Running neutrino mass parameters in see-saw scenarios. JHEP, 03:024, 2005.
* [61] Piotr H. Chankowski, Wojciech Krolikowski, and Stefan Pokorski. Fixed points in the evolution of neutrino mixings. Phys. Lett. B, 473:109–117, 2000.
* [62] J. A. Casas, J. R. Espinosa, A. Ibarra, and I. Navarro. General RG equations for physical neutrino parameters and their phenomenological implications. Nucl. Phys. B, 573:652–684, 2000.
* [63] Shivani Gupta, Sin Kyu Kang, and C. S. Kim. Renormalization Group Evolution of Neutrino Parameters in Presence of Seesaw Threshold Effects and Majorana Phases. Nucl. Phys. B, 893:89–106, 2015.
* [64] Roberto Trotta. Bayes in the sky: Bayesian inference and model selection in cosmology. Contemp. Phys., 49:71–104, 2008.
* [65] Howard Baer, Sabine Kraml, Sezen Sekmen, and Heaya Summy. Dark matter allowed scenarios for Yukawa-unified SO(10) SUSY GUTs. JHEP, 03:056, 2008.
* [66] Karen De Causmaecker, Benjamin Fuks, Bjoern Herrmann, Farvah Mahmoudi, Ben O’Leary, Werner Porod, Sezen Sekmen, and Nadja Strobbe. General squark flavour mixing: constraints, phenomenology and benchmarks. JHEP, 11:125, 2015.
* [67] Mariane Mangin-Brinet and Yémalin Gabin Gbedo. Markov Chain Monte Carlo techniques applied to Parton Distribution Functions determination: proof of concept. PoS, DIS2017:213, 2018.
* [68] Alexander Baur, Hans Peter Nilles, Andreas Trautner, and Patrick K. S. Vaudrevange. Unification of Flavor, CP, and Modular Symmetries. Phys. Lett. B, 795:7–14, 2019.
* [69] Alexander Baur, Hans Peter Nilles, Andreas Trautner, and Patrick K. S. Vaudrevange. A String Theory of Flavor and $CP$. Nucl. Phys. B, 947:114737, 2019.
* [70] Alexander Baur, Hans Peter Nilles, Saul Ramos-Sanchez, Andreas Trautner, and Patrick K. S. Vaudrevange. Top-down anatomy of flavor symmetry breakdown. Phys. Rev. D, 105(5):055018, 2022.
* [71] Alexander Baur, Hans Peter Nilles, Saul Ramos-Sanchez, Andreas Trautner, and Patrick K. S. Vaudrevange. The first string-derived eclectic flavor model with realistic phenomenology. JHEP, 09:224, 2022.
* [72] Hans Peter Nilles, Michael Ratz, and Patrick K. S. Vaudrevange. Origin of Family Symmetries. Fortsch. Phys., 61:493–506, 2013.
# Inferencing Progenitor and Explosion Properties of Evolving Core-collapse Supernovae from Zwicky Transient Facility Light Curves
Bhagya M. Subrayan (Purdue University, Department of Physics and Astronomy,
525 Northwestern Ave, West Lafayette, IN 47907), Dan Milisavljevic (Purdue
University, Department of Physics and Astronomy; Integrative Data Science
Initiative, Purdue University, West Lafayette, IN 47907, USA), Takashi J.
Moriya (National Astronomical Observatory of Japan, National Institutes of
Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan; School of
Physics and Astronomy, Faculty of Science, Monash University, Clayton,
Victoria 3800, Australia), Kathryn E. Weil, Geoffery Lentner, Mark Linvill,
John Banovetz, Braden Garretson, and Jack Reynolds (all Purdue University,
Department of Physics and Astronomy, 525 Northwestern Ave, West Lafayette, IN
47907), Niharika Sravan (California Institute of Technology, Pasadena, USA),
Ryan Chornock and Raffaella Margutti (Department of Astronomy, University of
California, Berkeley, CA 94720-3411, USA)
###### Abstract
We analyze a sample of 45 Type II supernovae from the Zwicky Transient
Facility (ZTF) public survey using a grid of hydrodynamical models in order to
assess whether theoretically-driven forecasts can intelligently guide follow
up observations supporting all-sky survey alert streams. We estimate several
progenitor properties and explosion physics parameters including zero-age-
main-sequence (ZAMS) mass, mass-loss rate, kinetic energy, 56Ni mass
synthesized, host extinction, and the time of explosion. Using complete light
curves we obtain confident characterizations for 34 events in our sample, with
the inferences of the remaining 11 events limited either by poorly
constraining data or the boundaries of our model grid. We also simulate real-
time characterization of alert stream data by comparing our model grid to
various stages of incomplete light curves ($\Delta t<$ 25 days, $\Delta t<50$
days, all data), and find that some parameters are more reliable indicators of
true values at early epochs than others. Specifically, ZAMS mass, time of
explosion, steepness parameter $\beta$, and host extinction are reasonably
constrained with incomplete light curve data, whereas mass-loss rate, kinetic
energy and 56Ni mass estimates generally require complete light curves
spanning $>100$ days. We conclude that real-time modeling of transients,
supported by multi-band synthetic light curves tailored to survey passbands,
can be used as a powerful tool to identify critical epochs of follow up
observations. Our findings are relevant to identify, prioritize, and
coordinate efficient follow up of transients discovered by Vera C. Rubin
Observatory.
Supernovae, Type II supernovae, Surveys, Hydrodynamical simulations
Software: KEPLER (Weaver et al., 1978), STELLA (Blinnikov et al., 1998, 2000,
2006; Moriya et al., 2017, 2018; Ricks & Dwarkadas, 2019), astropy (Astropy
Collaboration et al., 2013, 2018), dynesty (Skilling, 2004)
## 1 Introduction
The upcoming Legacy Survey of Space and Time (LSST) to be conducted by the
Vera C. Rubin Observatory is highly anticipated to revolutionize time domain
astronomy (LSST Science Collaboration et al., 2009). Its sensitivity ($\sim$
24 mag), six broadband filters ($u$-$g$-$r$-$i$-$z$-$y$), regular southern-sky
patrolling (cadences anticipated between hourly and every few days; LSST
Science Collaboration et al. 2017), and prompt reporting of transient activity
(latency of $\approx 60$ seconds from exposure read-out to alert distribution)
will provide opportunities to discover and investigate millions of supernovae
(SNe) over its planned ten year lifetime (Ivezić et al., 2019).
However, managing the massive data sets associated with LSST will be
demanding. It will produce $\sim$ 20 TB of raw images every single night,
which will be processed rapidly via template subtraction to send out real-time
alerts of residual source variability (approximately ten million alerts
nightly). Moreover, because LSST photometry alone will generally be
insufficient to adequately investigate the transients it will discover (Alves
et al., 2022), the survey’s success will be partially dependent on other
telescopes for supporting observations (see, e.g., Najita et al. 2016), with
electromagnetic and multi-messenger facilities (Huerta et al., 2019).
The LSST Corporation (LSSTC) architects along with emerging alert stream data
brokers are developing the processes, cyberinfrastructure, and software needed
to confront this challenge and help manage upcoming LSST discoveries (Borne,
2008; Narayan et al., 2018). Data brokers utilize transient classification
methods that often employ machine learning (Möller & de Boissière, 2020;
Förster et al., 2021; Sooknunan et al., 2021; García-Jara et al., 2022).
Current data brokers include ALeRCE (Automatic Learning for the Rapid
Classification of Events, http://alerce.science/; Sánchez-Sáez et al., 2021),
ANTARES (Arizona–NOIRLab Temporal Analysis and Response to Events System,
https://antares.noirlab.edu/; Matheson et al., 2021), Lasair
(https://lasair.roe.ac.uk/; Smith, 2019), MARS (Make Alerts Really Simple,
https://mars.lco.global/), FINK (https://fink-broker.org/; Möller et al.,
2021), Babamul, and PITT-Google. These data brokers are taking on
different responsibilities to promptly process, value-add, cross-reference,
and classify survey alert streams, which in turn permits users to filter and
prioritize targets.
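Such downstream filtering can be as simple as a predicate applied to each value-added alert; the dictionary fields below are hypothetical, since each broker exposes its own schema.

```python
# Hypothetical value-added alerts as a broker might deliver them; real
# brokers (ALeRCE, ANTARES, Fink, ...) each define their own fields.
alerts = [
    {"id": "ZTF21aaaaaaa", "class": "SN II", "prob": 0.91, "mag": 18.7},
    {"id": "ZTF21bbbbbbb", "class": "AGN", "prob": 0.85, "mag": 17.2},
    {"id": "ZTF21ccccccc", "class": "SN II", "prob": 0.42, "mag": 19.9},
]

def wanted(alert, min_prob=0.8, max_mag=20.0):
    """Keep confidently classified Type II SNe bright enough to follow up."""
    return (alert["class"] == "SN II"
            and alert["prob"] >= min_prob
            and alert["mag"] <= max_mag)

targets = [a["id"] for a in alerts if wanted(a)]
```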
Working downstream of these data brokers are additional services to coordinate
follow up observations, including Target and Observation Managers (TOMs) that
permit observers to sort through broker alert streams to plan and trigger
follow up (Street et al., 2018). TOMs have some level of automation, but
generally rely largely on humans to make decisions about target prioritization
and coordination. The Recommender Engine For Intelligent Transient Tracking
(REFITT; Sravan et al. 2020) is an attempt to completely automate transient
follow up as an Object Recommender for Augmentation and Coordinating Liaison
Engine (ORACLE). REFITT uses data ingested from surveys to predict the
light-curve evolution of transients, prioritizes events based on confidence in
its predictions, and finally makes recommendations to observers on targets
that need follow up, specific to their observing facility and coordinated
among all observing agents.
To date, decisions about which transients to prioritize for follow up
observations with supporting facilities are generally data-driven; i.e., based
on comparisons to data sets of previously observed events. In this paper, we
explore the feasibility of guiding follow-up of core–collapse supernovae
(CCSNe) using parameters of theoretically-driven forecasts. Our expectation is
that prioritizing the underlying physics of transients will make it possible
to 1) rapidly recognize transients of desired physical parameter spaces, and
2) identify information-rich epochs in transient evolution for efficient
follow-up with limited facilities.
To this end, in this paper we characterize a sample of 45 light curves of Type
II supernovae using publicly available data from the Zwicky Transient Facility
(ZTF) (Bellm et al., 2019) survey with a grid of theoretical hydrodynamical
models spanning various progenitor properties (zero-age-main-sequence (ZAMS)
mass, mass-loss history, etc.) and explosion physics (e.g., kinetic energy,
56Ni synthesized). We compare results between model fits with both complete
and incomplete light curves in order to assess whether theoretically-driven
forecasts can intelligently guide follow up observations supporting all-sky
survey alert streams. ZTF data is used because, among currently operating
all-sky surveys, which include ATLAS (Asteroid Terrestrial-impact Last Alert
System; Tonry et al., 2018) and ASAS–SN (All-Sky Automated Survey for
Supernovae; Shappee et al., 2014), ZTF’s functioning alert stream best mimics
LSST data flow, but at a more manageable scale (Masci et al., 2019). Although
ZTF’s cadence, depth, and filters differ from those of LSST, it serves as an
excellent testing ground for developing the infrastructure and software that
LSST will require as it begins operating.
The paper is organized as follows: Section 2 describes the treatment of ZTF
data in multiple passbands with forced photometry and extinction. In Section
3, we describe our hydrodynamical models with circumstellar material (CSM)
structure constructed using the stellar evolutionary code KEPLER and the
radiative transfer code STELLA. Section 4 outlines the results from our
fitting method and trends in the parameters derived from the fits, and Section
5 describes our real-time fitting analysis for ZTF events and an assessment of
how model parameters evolve as a function of time. The implications and
utility of this work against the backdrop of current and upcoming all–sky
surveys, including LSST, are discussed in Section 6.
## 2 Survey Data from ZTF
We used data from the public ZTF alert stream, available as photometry in the
ztf–$g$ and ztf–$r$ passbands. We selected 45 ZTF events from Garretson et al.
(2021) that are spectroscopically classified as Type II or IIP (Table 1). For
the events that were classified as Type II, we confirmed that the light curves
clearly demonstrated some portion of the plateau phase through visual
inspection, differentiating them from Type IIL supernovae. The events span the
first four years of the ZTF survey, from 2018 to 2021. All events
in this sample have a minimum of 5 detections, as defined below, in each of the
ztf–$g$ and ztf–$r$ passbands. We treat this as a representative test sample
because the light curves cover a variety of evolutionary phases, reasonably
span the expected redshifts of the survey ($0.010<z<0.055$), and span peak
apparent magnitudes of approximately 18–20 mag.
Table 1: ZTF events in this paper
ZTF ID | TNS Classification | TNS Name | RA (J2000) | Dec (J2000) | Redshift ($z$) | No. $g$-band detections | No. $r$-band detections
---|---|---|---|---|---|---|---
ZTF18abcpmwh$*$ | IIP | SN 2018cur | 12:59:09.12 | +37:19:00.19 | 0.015 | 8(12) | 8(40)
ZTF18acbvhit | II | SN 2018hle | 3:39:28.11 | $-$13:07:02.50 | 0.014 | 16 | 15
ZTF18acbwasc$\dagger$ | IIP | SN 2018hfc | 11:01:58.61 | +45:13:39.26 | 0.020 | 51 | 65
ZTF18acrtvmm$*$ | II | SN 2018jfp | 3:17:56.27 | $-$0:10:10.82 | 0.023 | 17 | 20(2)
ZTF18acuqskr | II | SN 2018jrb | 8:09:33.69 | +15:31:10.55 | 0.045 | 7 | 10
ZTF19aakiyed | II | SN 2019awk | 15:07:02.58 | +61:13:42.37 | 0.044 | 17 | 23
ZTF19aaqdkrm | II | SN 2019dod | 13:25:49.97 | +34:29:43.58 | 0.034 | 27 | 30
ZTF19aauqwna | IIP | SN 2019fem | 19:44:46.13 | +44:42:49.13 | 0.041 | 20 | 47
ZTF19aavbkly | IIP | SN 2019fmv | 12:29:33.80 | +35:46:12.15 | 0.041 | 31 | 29
ZTF19aavhblr | II | SN 2019fuo | 15:31:43.38 | +16:42:49.30 | 0.050 | 19 | 27
ZTF19aavkptg | IIP | SN 2019gqs | 11:36:15.78 | +49:09:12.55 | 0.038 | 30 | 22
ZTF19abguqsi | II | SN 2019lsh | 22:52:58.52 | +0:26:50.53 | 0.052 | 14 | 20
ZTF19abhduuo$\dagger$ | II | SN 2019lre | 1:58:50.79 | $-$9:35:05.65 | 0.018 | 14 | 16
ZTF19abiahko | IIP | SN 2019lsj | 19:36:59.13 | $-$11:57:13.63 | 0.023 | 8 | 11
ZTF19abqyouo$\dagger$ | IIP | SN 2019pbk | 7:46:23.89 | +64:13:23.79 | 0.045 | 11 | 13
ZTF19abvbrve | IIP | SN 2019puv | 19:12:36.67 | $-$19:25:01.67 | 0.020 | 14 | 42
ZTF19acbvisk$*$ | II | SN 2019rms | 9:00:45.39 | +19:44:42.32 | 0.037 | 19 | 37(1)
ZTF19ackjvtl$\dagger$ | IIP | SN 2019uwd | 13:16:20.75 | +30:40:48.72 | 0.019 | 53 | 99
ZTF19acmwfli$\dagger$ | II | SN 2019tza | 13:14:04.68 | +59:15:04.91 | 0.028 | 39 | 59
ZTF19acszmgx | II | SN 2019vew | 5:27:49.43 | $-$5:21:39.88 | 0.042 | 22 | 22
ZTF20aahqbun | II | SN 2020alg | 14:27:10.55 | +35:55:20.22 | 0.028 | 26 | 41
ZTF20aamlmec | II | SN 2020chv | 7:46:27.78 | +1:57:34.73 | 0.034 | 8 | 9
ZTF20aamxuwl | II | SN 2020ckv | 11:03:26.48 | $-$1:32:27.04 | 0.037 | 13 | 12
ZTF20aatqgeo | II | SN 2020fcx | 13:40:10.01 | +23:20:29.56 | 0.032 | 25 | 16
ZTF20aatqidk | II | SN 2020fbj | 12:47:47.03 | +22:17:10.44 | 0.034 | 26 | 29
ZTF20aaullwz | II | SN 2020fch | 11:33:24.03 | $-$9:20:55.94 | 0.027 | 12 | 10
ZTF20aausahr | II | SN 2020hgm | 8:31:22.09 | +49:13:35.43 | 0.043 | 9 | 10
ZTF20aazcnrv | II | SN 2020jjj | 14:31:19.63 | $-$25:39:31.04 | 0.023 | 12 | 13
ZTF20aazpphd | II | SN 2020jww | 16:10:51.58 | +27:09:42.02 | 0.046 | 36 | 43
ZTF20abekbzp | II | SN 2020meu | 15:34:44.68 | +6:38:53.35 | 0.041 | 9 | 11
ZTF20abuqali | II | SN 2020rht | 2:30:17.30 | +28:36:02.64 | 0.040 | 13 | 12
ZTF20abwdaeo$\dagger$ | II | SN 2020rvn | 21:06:36.34 | +17:59:34.86 | 0.021 | 28 | 29
ZTF20abyosmd$\dagger$ | II | SN 2020toc | 8:28:30.14 | +17:28:08.52 | 0.021 | 20 | 26
ZTF20acjqksf$\dagger$ | IIP | SN 2020tfb | 6:08:52.14 | $-$26:24:46.02 | 0.048 | 17 | 18
ZTF20acnvtxy$\dagger$ | IIP | SN 2020zkx | 11:18:31.84 | +6:44:28.84 | 0.030 | 10 | 13
ZTF20acptgfl | IIP | SN 2020zjk | 5:22:00.02 | $-$7:11:20.81 | 0.037 | 22 | 27
ZTF21aabygea$*$ | II | SN 2021os | 12:02:54.08 | +5:36:53.15 | 0.019 | 31 | 49(2)
ZTF21aaevrjl | II | SN 2021arg | 4:31:18.78 | $-$10:23:46.95 | 0.031 | 14 | 15
ZTF21aafkktu | II | SN 2021avg | 11:39:59.00 | +14:31:40.65 | 0.031 | 31 | 32
ZTF21aafkwtk | II | SN 2021apg | 13:41:19.24 | +24:29:43.88 | 0.027 | 26 | 37
ZTF21aagtqpn$\dagger$$*$ | II | SN 2021bkq | 18:20:34.83 | +40:56:36.28 | 0.036 | 33(4) | 44(6)
ZTF21aaigdly$\dagger$ | II | SN 2021cdw | 14:05:31.80 | $-$25:21:54.51 | 0.040 | 17 | 28
ZTF21aaluqkp | II | SN 2021dhx | 11:05:10.38 | $-$15:21:10.13 | 0.025 | 34 | 25
ZTF21aamzuxi | II | SN 2021dvl | 7:49:56.10 | +71:15:42.11 | 0.034 | 27 | 29
ZTF21acchbmn | II | SN 2021zaa | 23:46:41.12 | +26:44:45.11 | 0.032 | 22 | 19
$*$$*$footnotetext: The reported numbers are measurements in ztf–$g$ and ztf–$r$ obtained within 150 days of first detection, after running forced photometry for each ZTF event. Additional detections beyond 150 days not included in the analysis are quoted in parentheses for each band.
$\dagger$$\dagger$footnotetext: No upper–limit constraints before first detection are available for informed priors on explosion date.
We utilize point spread function (PSF)–fit photometry measurements based on
difference imaging, as provided by the ZTF forced-photometry service (IRSA,
2022). Complete light curves were constructed from the differential flux
measurements (forcediffimflux and forcediffimfluxunc) that the service returns
in each band, along with upper limits. A signal-to-noise threshold (SNT) of $3$
and an upper-limit signal-to-noise ratio (SNU) of $5$ were used to classify
each measurement as either a detection or an upper limit (Masci et al., 2019).
The photometric zero-points in each passband were used to calculate the
differential magnitudes.
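The detection versus upper-limit logic above can be sketched as follows. This is an illustrative implementation, not the paper's code: the column names (forcediffimflux, forcediffimfluxunc) and zero-point usage follow the ZTF forced-photometry conventions described in the text, and the SNT $=3$ and SNU $=5$ thresholds are the quoted values.

```python
import numpy as np

# Thresholds quoted in the text (Masci et al. 2019 conventions)
SNT, SNU = 3.0, 5.0

def calibrate(flux, flux_unc, zp):
    """Classify each forced-photometry point as detection or upper limit
    and convert differential fluxes to magnitudes using the zero-point."""
    flux, flux_unc, zp = map(np.asarray, (flux, flux_unc, zp))
    snr = flux / flux_unc
    is_detection = snr > SNT
    # Detections: magnitude from the differential flux
    mag = np.where(is_detection,
                   zp - 2.5 * np.log10(np.where(is_detection, flux, 1.0)),
                   np.nan)
    mag_err = np.where(is_detection, 1.0857 / snr, np.nan)  # 2.5/ln(10)
    # Non-detections: quote an SNU-sigma upper limit instead
    upper_limit = np.where(is_detection, np.nan,
                           zp - 2.5 * np.log10(SNU * flux_unc))
    return is_detection, mag, mag_err, upper_limit
```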
We use the redshift reported by the Transient Name Server (TNS) (Gal-Yam,
2021) to calculate the distance modulus for every ZTF event using astropy
(Astropy Collaboration et al., 2013, 2018), assuming a standard flat
$\Lambda$CDM cosmology with
$H_{0}=70\>\text{km}\>\text{s}^{-1}\>\text{Mpc}^{-1}$
and $\Omega_{0}=0.3$. The measurements are corrected for Milky Way extinction
in each passband using the dust maps of Schlegel et al. (1998), assuming
$R_{V}=3.1$. In our analysis, for each ZTF event, we only use
measurements up to 150 days in the rest frame from the first detection. We do
not account for cosmological K-corrections in this work since we use a sample
of low-redshift events. However, because LSST is expected to discover events
at higher redshifts, K-corrections would be non-negligible in a similar
analysis of LSST events.
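The distance-modulus calculation can be reproduced numerically. The paper uses astropy's cosmology module; the sketch below reimplements the same quantity with plain numpy (flat $\Lambda$CDM, $H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m=0.3$) for illustration.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s

def distance_modulus(z, H0=70.0, Om0=0.3, n=10_000):
    """Distance modulus mu = 5 log10(D_L / 10 pc) for a flat LCDM cosmology."""
    zs = np.linspace(0.0, z, n)
    # Flat universe: Omega_Lambda = 1 - Omega_m
    E = np.sqrt(Om0 * (1 + zs) ** 3 + (1 - Om0))
    integrand = 1.0 / E
    # Trapezoidal rule for the comoving-distance integral, in Mpc
    D_C = (C_KM_S / H0) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zs))
    D_L = (1 + z) * D_C               # luminosity distance, Mpc
    return 5 * np.log10(D_L) + 25     # +25 because D_L is in Mpc
```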
## 3 Model Fitting
### 3.1 Model grid using STELLA
The hydrodynamic models used in this work are specific to Type IIP,
constructed using the multi-group radiation hydrodynamics code STELLA
(Blinnikov et al., 1998, 2000, 2006; Moriya et al., 2017, 2018; Ricks &
Dwarkadas, 2019). In this work, our models have the following parameters: ZAMS
mass, kinetic energy of explosion ($E_{k}$), mass-loss rate ($\dot{M}$),
steepness of velocity law ($\beta$) associated with the stellar wind, and 56Ni
mass synthesized. The model parameters along with their corresponding values
in the grid are described in Table 2. Red supergiant (RSG) pre-supernova
progenitors from Sukhbold et al. (2016) were used, which were calculated using
the KEPLER code (Weaver et al., 1978), with physics previously discussed
(e.g., Woosley et al., 2002). A neutron star remnant mass of 1.4
$\rm{M}_{\odot}$ is assumed, and the SN explosions are triggered by depositing
thermal energy above the mass cut. 56Ni is assumed to be uniformly mixed up to
half of the hydrogen-rich envelope in the mass coordinate.
Table 2: Prior distributions on the physical parameters used in our sampling
method, along with the parameter values in our hydrodynamical model grid.
Parameter | Hydrodynamical Model Values | In steps | Prior Distribution | Units
---|---|---|---|---
$t_{\text{exp}}$ | – | – | $U(0,t_{\text{upper-limit}})$ | day
ZAMS | 12 – 16 | 2 | $N(14,3)\in(12,16)$ | $M_{\odot}$
$E_{k}$ | 0.5 – 5.0 | 0.5 | $N(1,1)\in(0.5,5)$ | $10^{51}$ erg
56Ni | 0.01 – 0.1 | 0.01 (0.001,0.2,0.3) | $N(0.05,0.01)\in(0.001,0.3)$ | $M_{\odot}$
$\beta$ | 1 – 5 | 1 | $N(3,2)\in(1,5)$ | –
$-\text{log}_{10}\dot{M}$ | 1 – 5 | 0.5 | $U(1,5)$ | $M_{\odot}$ $\text{yr}^{-1}$
$A_{V}$ | – | – | $N(\text{ln}(0.05),2)\in(10^{-4},2)$ | mag
Note. — The “In steps” values in parentheses for 56Ni are the additional
values for the parameters present in the model grid.
Our hydrodynamical model grid also incorporates an associated circumstellar
material (CSM) density structure attached to these RSG progenitors. The CSM
density is given by
$\rho_{CSM}(r)=\frac{\dot{M}}{4\pi v_{wind}r^{2}},$ (1)
where $\dot{M}$ is the mass-loss rate and $v_{wind}$ is the velocity structure
associated with the stellar wind. The radial dependency of $v_{wind}$ is given
by the velocity law
$v_{wind}(r)=v_{0}+\left(v_{\infty}-v_{0}\right)\left(1-\frac{R_{0}}{r}\right)^{\beta},$
(2)
where $v_{0}$ is the initial wind velocity with a value $<0.01$ km s$^{-1}$,
$v_{\infty}$ is the terminal wind velocity, set to $10$ km s$^{-1}$, $R_{0}$ is
the wind launching radius set to the photosphere of the star, and $\beta$ is the
steepness parameter that gives a measure of wind acceleration. RSG progenitors
typically have $\beta>2$, owing to slower acceleration of the stellar winds
(Mauron & Josselin, 2011). A fixed CSM radius of $10^{15}$ cm is assumed in
our models.
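Equations (1) and (2) can be evaluated directly. The sketch below is illustrative only; units are cgs for radius and density, km s$^{-1}$ for velocities, and $M_{\odot}$ yr$^{-1}$ for the mass-loss rate, with default parameter values mirroring the text ($v_0<0.01$ km s$^{-1}$, $v_\infty=10$ km s$^{-1}$).

```python
import numpy as np

MSUN_G = 1.989e33   # solar mass in grams
YR_S = 3.156e7      # seconds per year

def v_wind(r, R0, v0=0.005, v_inf=10.0, beta=3.0):
    """Wind velocity law, Eq. (2), in km/s."""
    return v0 + (v_inf - v0) * (1.0 - R0 / r) ** beta

def rho_csm(r, mdot_msun_yr, R0, **kw):
    """CSM density, Eq. (1), in g/cm^3 (velocity converted to cm/s)."""
    mdot = mdot_msun_yr * MSUN_G / YR_S     # mass-loss rate in g/s
    v = v_wind(r, R0, **kw) * 1e5           # km/s -> cm/s
    return mdot / (4.0 * np.pi * v * r ** 2)
```

The density falls faster than $r^{-2}$ near the wind-launching radius, where the wind is still accelerating, and approaches the steady $r^{-2}$ profile once $v_{wind}\to v_\infty$.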
The RSG progenitors with associated CSM density structures were then exploded
as thermal bombs with energies ranging from $0.5$ – $5.0\times 10^{51}$ erg
using STELLA. The code calculates the resulting light curve of the explosion
following the evolution of spectral energy distributions (SEDs) with time at
every epoch. The light curves in the various passbands are obtained by
convolving the ZTF filter transmission functions with the numerical SEDs. Our
final grid of varying progenitor, explosion, and CSM properties comprises 4206
unique models. A model grid similar to the one used in this work can be found
in Förster et al. (2018). The full details of the numerical model grid
will be presented in a separate paper (T.J. Moriya, in prep.).
### 3.2 Fast Interpolation Method
Using Bayesian Inference methods requires the models to be finely sampled
within the parameter space. However, because our model grid is neither
complete nor uniform, a scale–independent fast interpolation process that can
use a non–uniform grid of models was incorporated into our Monte Carlo
sampling method following Förster et al. (2018) and Martinez et al. (2020).
This allowed us to quickly interpolate between the models in the grid to
sample any combination of values in the parameter space. For a given parameter
vector $\vec{\theta}$, the method finds the closest models
$\vec{\theta}_{close}$ and weighs them appropriately using
$m(t,\vec{\theta})=\sum_{\vec{\theta}_{\it{i}}\in\vec{\theta}_{\textit{close}}}\hat{w}(\vec{\theta},\vec{\theta}_{\it{i}})m(t,\vec{\theta}_{\it{i}}),$
(3)
where $m(t,\vec{\theta})$ is the magnitude for a given
$\vec{\theta}$ at time $t$, and the normalized weights are given by
$\hat{w}(\vec{\theta},\vec{\theta}_{\it{i}})=\frac{w(\vec{\theta},\vec{\theta}_{\it{i}})}{\sum_{\vec{\theta}_{\it{j}}\in\vec{\theta}_{\text{close}}}w(\vec{\theta},\vec{\theta}_{\it{j}})},$
(4)
where
$w(\vec{\theta},\vec{\theta}_{\it{i}})=\prod_{\it{j}}\Big{(}\big{|}\theta^{\it{j}}-\theta_{\it{i}}^{\it{j}}\big{|}+\delta^{\it{j}}\Big{)}^{-1}.$
(5)
Equation 5 uses a very small vector $\vec{\delta}$, with the same units as
$\vec{\theta}$, which ensures that the weights do not diverge when a given
parameter combination exactly matches a model in the grid. This fast
interpolation method can be used to calculate light curves for any parameter
vector $\vec{\theta}$ within the limits of our model grid.
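Equations (3)–(5) can be sketched as an inverse-distance weighted average over the closest grid models. This is an illustrative implementation, not the paper's code: the number of neighbours `n_close` and the default $\vec{\delta}$ are assumptions.

```python
import numpy as np

def interpolate_lightcurve(theta, grid_thetas, grid_lightcurves,
                           n_close=8, delta=None):
    """Interpolate a model light curve at parameter vector theta from a
    (possibly non-uniform) grid of models, following Eqs. (3)-(5)."""
    theta = np.asarray(theta, float)
    grid_thetas = np.asarray(grid_thetas, float)
    if delta is None:
        # small per-parameter offset keeping weights finite at exact matches
        delta = 1e-6 * np.ones_like(theta)
    # Eq. (5): per-model weight from per-parameter distances
    w = 1.0 / np.prod(np.abs(grid_thetas - theta) + delta, axis=1)
    # restrict to the n_close nearest models (largest weights)
    idx = np.argsort(w)[-n_close:]
    w_hat = w[idx] / w[idx].sum()            # Eq. (4): normalize
    # Eq. (3): weighted sum of the selected model light curves
    return w_hat @ np.asarray(grid_lightcurves, float)[idx]
```

When $\vec{\theta}$ coincides with a grid model, that model's weight dominates and the interpolated light curve reduces to the grid light curve, as the $\vec{\delta}$ construction intends.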
### 3.3 Explosion date and host extinction
Along with fitting for the parameters of our models, we also fit for the
explosion date and host extinction. The time of explosion is calculated as the
number of days before the first photometric measurement in each passband. We
defined an informed prior distribution for the time of explosion leveraging
the constraints given by the upper limits in each passband for each event. The
priors are more constraining when upper limits preceding the first detection
are well defined. To calculate the approximate bounds of the time-of-explosion
prior, the deepest upper limit available before the first detection is
identified. If no upper limits are available for an event, we allow a wider
prior distribution.
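The construction of the prior bounds above can be sketched as follows. The function name, arguments, and the fallback window width are hypothetical, not from the paper; the logic shown is simply a uniform prior in days before first detection, bounded by the epoch of the deepest (faintest) preceding upper limit.

```python
import numpy as np

def t_exp_prior_bounds(t_first, t_limits, mag_limits, fallback=60.0):
    """Return (lo, hi) bounds, in days before first detection, for a
    uniform explosion-date prior. `fallback` is an assumed wide window
    used when no upper limits precede the first detection."""
    t_limits = np.asarray(t_limits, float)
    mag_limits = np.asarray(mag_limits, float)
    before = t_limits < t_first
    if not before.any():
        return 0.0, fallback  # no constraints: allow a wider prior
    # deepest (faintest) upper limit before first detection
    deepest = np.argmax(np.where(before, mag_limits, -np.inf))
    return 0.0, t_first - t_limits[deepest]
```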
Observed Type II SN light curves can be significantly affected by host
extinction (Kasen & Woosley, 2009; Mattila et al., 2012; Kochanek et al.,
2012). We fit for total extinction in the visual band assuming a prior
distribution listed in Table 2. We do this by simultaneously fitting ztf–$g$
and ztf–$r$ passbands to infer host extinction. The derived extinction is used
to calculate $E(B-V)$ for the host of each event assuming $R_{V}$ = 3.1.
### 3.4 Nested Sampling Methods for Parameter Inference
We derive posterior distributions of the parameters using the Python-based,
MIT-licensed dynamic nested sampling package dynesty, which implements the
nested sampling algorithm (Skilling, 2004) to estimate the Bayesian target
distributions. We assigned a combination of
uniform and Gaussian distributions as our priors for different parameters. The
prior distributions considered in this work are listed in Table 2. The
likelihood function incorporates the fast interpolation method and evaluates
how close the observations are to each sampled model. Using this framework,
multi-band ZTF observations in the ztf–$g$ and ztf–$r$ passbands, in absolute
magnitudes, are fit to the hydrodynamical models.
Nested sampling methods require all samples to be independently and
identically distributed (i.i.d.) random variables drawn from the prior
distribution. We use the uniform sampling method in the dynesty package, which
is widely used for dimensions $<10$ (Skilling, 2006). In the Bayesian
framework, all of the inference is contained in the final multi-dimensional
posterior, which can be marginalised over each parameter to obtain
constraints. The Bayesian evidence is given by the overall normalisation of
this posterior. The nested sampling algorithm iterates, accepting or rejecting
samples drawn from the prior distribution, until the evidence has been
estimated to a desired accuracy.
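In nested sampling, priors enter through a transform that maps draws from the unit hypercube onto the parameter space. The sketch below is illustrative, not the paper's code: it shows the standard inverse-CDF construction for two representative Table 2 priors (the truncated normals on ZAMS mass and $E_k$), using only scipy's normal CDF/quantile functions.

```python
import numpy as np
from scipy.special import ndtr, ndtri  # standard normal CDF and its inverse

def trunc_norm_ppf(u, mu, sigma, lo, hi):
    """Inverse CDF of a normal N(mu, sigma) truncated to (lo, hi)."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return mu + sigma * ndtri(ndtr(a) + u * (ndtr(b) - ndtr(a)))

def prior_transform(u):
    """Map a unit-cube sample u to (ZAMS, E_k) using Table 2 priors:
    N(14,3) truncated to (12,16) and N(1,1) truncated to (0.5,5)."""
    zams = trunc_norm_ppf(u[0], 14.0, 3.0, 12.0, 16.0)
    e_k = trunc_norm_ppf(u[1], 1.0, 1.0, 0.5, 5.0)
    return np.array([zams, e_k])
```

A function of this form is what dynesty's sampler calls for each prior draw; the remaining Table 2 parameters would be added analogously (uniform priors map linearly from $u$).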
Table 3: Median values from the posterior distribution with $1\sigma$ uncertainties when fitted with all data from the events.
ZTF ID | ZAMS | $E_{k}$ | $-$log${}_{10}\dot{M}$ | MNi56 | $t_{exp}$ | $\beta$ | $A_{V}$
---|---|---|---|---|---|---|---
 | ($M_{\odot}$) | ($10^{51}$ erg) | ($M_{\odot}\,\text{yr}^{-1}$) | ($M_{\odot}$) | (days) | | (mag)
ZTF18abcpmwh | $15.77_{-1.39}^{+0.20}$ | $1.67_{-0.29}^{+0.35}$ | $4.09_{-0.40}^{+0.38}$ | $0.07_{-0.02}^{+0.02}$ | $14.81_{-1.58}^{+1.88}$ | $2.56_{-0.27}^{+1.20}$ | $0.23_{-0.10}^{+0.11}$
ZTF18acbvhit$\S$ | $13.35_{-0.91}^{+1.68}$ | $0.51_{-0.01}^{+0.02}$ | $4.00_{-0.09}^{+0.09}$ | $0.04_{-0.01}^{+0.01}$ | $25.54_{-4.04}^{+3.93}$ | $3.98_{-0.10}^{+0.09}$ | $0.31_{-0.09}^{+0.09}$
ZTF18acbwasc$\S$ | $12.47_{-0.31}^{+0.40}$ | $0.52_{-0.01}^{+0.01}$ | $4.22_{-0.36}^{+0.35}$ | $0.10_{-0.02}^{+0.005}$ | $48.48_{-1.79}^{+1.06}$ | $3.06_{-0.20}^{+0.17}$ | $0.02_{-0.01}^{+0.02}$
ZTF18acrtvmm | $12.86_{-0.50}^{+0.80}$ | $0.99_{-0.05}^{+0.04}$ | $2.43_{-0.25}^{+0.15}$ | $0.10_{-0.03}^{+0.04}$ | $8.81_{-1.00}^{+0.75}$ | $3.64_{-1.22}^{+0.14}$ | $0.03_{-0.02}^{+0.03}$
ZTF18acuqskr | $13.89_{-0.98}^{+1.02}$ | $2.90_{-0.39}^{+0.30}$ | $3.91_{-0.45}^{+0.47}$ | $0.04_{-0.02}^{+0.02}$ | $16.72_{-2.38}^{+2.40}$ | $3.01_{-0.45}^{+0.46}$ | $0.06_{-0.04}^{+0.07}$
ZTF19aakiyed | $14.85_{-1.74}^{+1.04}$ | $1.02_{-0.20}^{+0.21}$ | $3.75_{-0.54}^{+0.52}$ | $0.06_{-0.02}^{+0.02}$ | $17.29_{-3.09}^{+3.22}$ | $2.93_{-0.53}^{+0.82}$ | $0.10_{-0.07}^{+0.10}$
ZTF19aaqdkrm | $13.14_{-0.81}^{+2.52}$ | $1.08_{-0.28}^{+0.25}$ | $1.67_{-0.21}^{+0.25}$ | $0.05_{-0.01}^{+0.02}$ | $16.68_{-2.88}^{+1.95}$ | $3.06_{-1.20}^{+0.89}$ | $0.12_{-0.08}^{+0.11}$
ZTF19aauqwna | $13.71_{-1.01}^{+1.17}$ | $1.19_{-0.31}^{+0.30}$ | $1.32_{-0.20}^{+0.26}$ | $0.08_{-0.02}^{+0.02}$ | $16.29_{-3.80}^{+6.28}$ | $3.03_{-0.50}^{+0.55}$ | $0.05_{-0.03}^{+0.07}$
ZTF19aavbkly | $13.07_{-0.75}^{+2.09}$ | $1.06_{-0.21}^{+0.43}$ | $2.43_{-0.25}^{+0.18}$ | $0.02_{-0.01}^{+0.01}$ | $7.27_{-0.82}^{+0.51}$ | $3.00_{-0.10}^{+0.10}$ | $0.18_{-0.10}^{+0.25}$
ZTF19aavhblr | $13.46_{-0.97}^{+1.36}$ | $0.86_{-0.19}^{+0.28}$ | $1.09_{-0.04}^{+0.41}$ | $0.07_{-0.01}^{+0.02}$ | $24.13_{-10.53}^{+4.25}$ | $2.98_{-0.47}^{+0.49}$ | $0.05_{-0.04}^{+0.09}$
ZTF19aavkptg | $13.14_{-0.80}^{+1.97}$ | $1.06_{-0.26}^{+0.45}$ | $2.22_{-0.20}^{+0.30}$ | $0.03_{-0.02}^{+0.02}$ | $8.21_{-1.40}^{+1.09}$ | $2.95_{-0.50}^{+0.71}$ | $0.38_{-0.17}^{+0.25}$
ZTF19abguqsi | $12.57_{-0.40}^{+1.02}$ | $1.24_{-0.45}^{+0.73}$ | $1.51_{-0.04}^{+0.06}$ | $0.01_{-0.01}^{+0.01}$ | $13.71_{-1.34}^{+0.88}$ | $3.00_{-0.44}^{+0.45}$ | $0.17_{-0.10}^{+0.19}$
ZTF19abhduuo$\S$ | $12.95_{-0.59}^{+0.85}$ | $0.52_{-0.02}^{+0.05}$ | $4.00_{-0.08}^{+0.08}$ | $0.03_{-0.01}^{+0.01}$ | $51.02_{-6.87}^{+5.20}$ | $3.97_{-0.36}^{+0.47}$ | $0.79_{-0.11}^{+0.17}$
ZTF19abiahko | $13.11_{-0.71}^{+0.92}$ | $0.75_{-0.21}^{+0.32}$ | $3.99_{-0.10}^{+0.11}$ | $0.02_{-0.01}^{+0.01}$ | $25.15_{-5.27}^{+7.76}$ | $3.99_{-0.10}^{+0.10}$ | $0.54_{-0.36}^{+0.23}$
ZTF19abqyouo$\S$ | $12.98_{-0.70}^{+1.36}$ | $1.39_{-0.23}^{+0.15}$ | $3.95_{-0.70}^{+0.58}$ | $0.06_{-0.02}^{+0.02}$ | $31.37_{-3.42}^{+3.16}$ | $3.00_{-0.84}^{+0.85}$ | $0.09_{-0.06}^{+0.09}$
ZTF19abvbrve${\ddagger}$ | $12.85_{-0.59}^{+1.59}$ | $0.51_{-0.01}^{+0.02}$ | $3.56_{-0.25}^{+0.57}$ | $0.06_{-0.02}^{+0.02}$ | $8.27_{-1.16}^{+1.47}$ | $2.82_{-0.43}^{+0.89}$ | $0.13_{-0.07}^{+0.09}$
ZTF19acbvisk | $13.82_{-0.86}^{+0.77}$ | $0.54_{-0.03}^{+0.07}$ | $1.63_{-0.41}^{+0.41}$ | $0.10_{-0.02}^{+0.02}$ | $20.02_{-6.41}^{+9.31}$ | $3.11_{-0.49}^{+0.48}$ | $0.02_{-0.02}^{+0.03}$
ZTF19ackjvtl$\S$ | $14.43_{-1.26}^{+1.05}$ | $0.51_{-0.01}^{+0.02}$ | $3.88_{-0.80}^{+0.71}$ | $0.06_{-0.00}^{+0.01}$ | $57.60_{-1.99}^{+2.22}$ | $2.47_{-0.83}^{+1.37}$ | $0.40_{-0.07}^{+0.07}$
ZTF19acmwfli | $13.34_{-0.97}^{+2.19}$ | $1.01_{-0.04}^{+0.06}$ | $3.82_{-0.79}^{+0.71}$ | $0.04_{-0.01}^{+0.01}$ | $24.75_{-4.51}^{+2.12}$ | $2.69_{-0.21}^{+1.07}$ | $0.21_{-0.08}^{+0.10}$
ZTF19acszmgx | $13.21_{-0.86}^{+1.40}$ | $1.39_{-0.40}^{+0.42}$ | $1.03_{-0.03}^{+0.07}$ | $0.08_{-0.02}^{+0.02}$ | $19.37_{-0.88}^{+0.45}$ | $2.98_{-0.47}^{+0.50}$ | $0.43_{-0.36}^{+0.25}$
ZTF20aahqbun | $15.39_{-2.27}^{+0.51}$ | $0.58_{-0.04}^{+0.05}$ | $3.01_{-0.48}^{+1.11}$ | $0.06_{-0.01}^{+0.01}$ | $27.73_{-2.11}^{+1.51}$ | $2.55_{-0.18}^{+1.20}$ | $0.17_{-0.07}^{+0.07}$
ZTF20aamlmec | $13.53_{-1.01}^{+1.48}$ | $1.00_{-0.26}^{+0.38}$ | $2.37_{-0.40}^{+0.72}$ | $0.03_{-0.02}^{+0.02}$ | $22.35_{-3.56}^{+3.76}$ | $2.99_{-0.49}^{+0.53}$ | $0.25_{-0.16}^{+0.20}$
ZTF20aamxuwl | $13.48_{-1.00}^{+1.49}$ | $1.03_{-0.28}^{+0.37}$ | $1.78_{-0.39}^{+0.46}$ | $0.06_{-0.01}^{+0.02}$ | $14.41_{-3.18}^{+4.21}$ | $3.05_{-0.54}^{+0.57}$ | $0.08_{-0.06}^{+0.11}$
ZTF20aatqgeo | $12.60_{-0.43}^{+1.67}$ | $0.92_{-0.25}^{+0.15}$ | $2.02_{-0.06}^{+0.10}$ | $0.06_{-0.02}^{+0.02}$ | $18.60_{-1.54}^{+0.97}$ | $2.76_{-0.40}^{+0.94}$ | $0.29_{-0.13}^{+0.09}$
ZTF20aatqidk | $13.81_{-1.04}^{+1.21}$ | $0.61_{-0.04}^{+0.05}$ | $2.67_{-0.26}^{+0.37}$ | $0.10_{-0.04}^{+0.04}$ | $16.88_{-2.26}^{+2.47}$ | $3.52_{-1.11}^{+0.26}$ | $0.03_{-0.02}^{+0.04}$
ZTF20aaullwz | $14.01_{-0.69}^{+0.73}$ | $1.03_{-0.08}^{+0.38}$ | $2.73_{-0.28}^{+0.33}$ | $0.08_{-0.02}^{+0.01}$ | $10.65_{-1.94}^{+1.76}$ | $3.02_{-0.35}^{+0.37}$ | $0.04_{-0.03}^{+0.10}$
ZTF20aausahr | $13.74_{-1.02}^{+1.18}$ | $2.38_{-0.34}^{+0.46}$ | $3.69_{-1.01}^{+0.74}$ | $0.03_{-0.02}^{+0.02}$ | $27.38_{-3.29}^{+3.29}$ | $3.00_{-0.47}^{+0.48}$ | $0.12_{-0.08}^{+0.12}$
ZTF20aazcnrv | $14.80_{-1.61}^{+1.02}$ | $1.26_{-0.39}^{+0.51}$ | $3.78_{-0.52}^{+0.51}$ | $0.02_{-0.01}^{+0.01}$ | $19.69_{-2.43}^{+2.65}$ | $3.94_{-0.27}^{+0.57}$ | $0.35_{-0.20}^{+0.21}$
ZTF20aazpphd | $13.73_{-0.90}^{+0.82}$ | $1.00_{-0.02}^{+0.03}$ | $1.86_{-0.15}^{+0.33}$ | $0.10_{-0.02}^{+0.03}$ | $13.94_{-1.60}^{+1.67}$ | $3.08_{-0.55}^{+0.45}$ | $0.02_{-0.01}^{+0.03}$
ZTF20abekbzp | $13.62_{-1.05}^{+1.38}$ | $1.54_{-0.46}^{+0.75}$ | $3.19_{-0.78}^{+1.08}$ | $0.02_{-0.02}^{+0.02}$ | $15.27_{-3.34}^{+2.84}$ | $2.99_{-0.49}^{+0.51}$ | $0.28_{-0.18}^{+0.27}$
ZTF20abuqali | $13.81_{-1.22}^{+1.62}$ | $1.01_{-0.15}^{+0.18}$ | $2.41_{-0.33}^{+0.31}$ | $0.08_{-0.02}^{+0.02}$ | $17.38_{-2.96}^{+3.00}$ | $3.00_{-0.56}^{+0.75}$ | $0.07_{-0.05}^{+0.08}$
ZTF20abwdaeo | $15.22_{-1.95}^{+0.71}$ | $0.57_{-0.03}^{+0.03}$ | $2.55_{-0.27}^{+0.26}$ | $0.09_{-0.02}^{+0.01}$ | $20.09_{-2.49}^{+2.75}$ | $3.50_{-1.08}^{+0.28}$ | $0.03_{-0.02}^{+0.03}$
ZTF20abyosmd$\S$ | $12.72_{-0.51}^{+1.61}$ | $0.97_{-0.25}^{+0.08}$ | $4.00_{-0.44}^{+0.44}$ | $0.06_{-0.01}^{+0.01}$ | $47.15_{-3.01}^{+1.89}$ | $2.97_{-0.49}^{+0.55}$ | $1.03_{-0.11}^{+0.09}$
ZTF20acjqksf$\S$ | $12.83_{-0.59}^{+1.62}$ | $1.59_{-0.45}^{+0.63}$ | $4.05_{-0.63}^{+0.58}$ | $0.03_{-0.02}^{+0.02}$ | $28.34_{-1.99}^{+1.16}$ | $2.98_{-0.47}^{+0.49}$ | $0.40_{-0.22}^{+0.24}$
ZTF20acnvtxy$\S$ | $13.32_{-0.89}^{+1.49}$ | $1.38_{-0.49}^{+0.59}$ | $3.89_{-0.76}^{+0.65}$ | $0.03_{-0.02}^{+0.02}$ | $54.43_{-5.08}^{+3.58}$ | $3.00_{-0.48}^{+0.44}$ | $0.98_{-0.26}^{+0.28}$
ZTF20acptgfl${\ddagger}$ | $12.94_{-0.41}^{+0.42}$ | $0.52_{-0.01}^{+0.02}$ | $3.97_{-0.11}^{+0.15}$ | $0.05_{-0.01}^{+0.01}$ | $19.41_{-2.55}^{+2.76}$ | $3.00_{-0.10}^{+0.10}$ | $0.08_{-0.05}^{+0.07}$
ZTF21aabygea | $15.96_{-0.19}^{+0.03}$ | $0.57_{-0.02}^{+0.02}$ | $3.76_{-0.14}^{+0.18}$ | $0.05_{-0.02}^{+0.02}$ | $9.84_{-0.98}^{+0.87}$ | $3.75_{-0.05}^{+0.05}$ | $0.02_{-0.01}^{+0.02}$
ZTF21aaevrjl | $13.72_{-1.05}^{+1.19}$ | $1.42_{-0.60}^{+0.76}$ | $4.00_{-0.10}^{+0.10}$ | $0.02_{-0.01}^{+0.02}$ | $14.82_{-2.54}^{+3.13}$ | $3.00_{-0.09}^{+0.10}$ | $0.81_{-0.32}^{+0.30}$
ZTF21aafkktu | $14.35_{-1.29}^{+1.21}$ | $0.56_{-0.05}^{+0.09}$ | $2.11_{-0.40}^{+0.47}$ | $0.09_{-0.01}^{+0.01}$ | $8.56_{-2.31}^{+3.54}$ | $2.66_{-0.33}^{+1.09}$ | $0.04_{-0.03}^{+0.05}$
ZTF21aafkwtk${\ddagger}$ | $14.36_{-0.85}^{+1.12}$ | $0.51_{-0.00}^{+0.01}$ | $2.75_{-0.20}^{+0.35}$ | $0.04_{-0.01}^{+0.01}$ | $13.52_{-2.11}^{+1.84}$ | $2.78_{-0.40}^{+0.74}$ | $0.05_{-0.03}^{+0.04}$
ZTF21aagtqpn | $12.11_{-0.08}^{+0.17}$ | $1.44_{-0.10}^{+0.10}$ | $4.04_{-0.45}^{+0.45}$ | $0.10_{-0.002}^{+0.002}$ | $19.70_{-1.40}^{+1.21}$ | $2.50_{-0.30}^{+1.23}$ | $0.04_{-0.03}^{+0.04}$
ZTF21aaigdly | $14.64_{-1.76}^{+1.24}$ | $0.67_{-0.07}^{+0.13}$ | $2.79_{-0.26}^{+0.43}$ | $0.09_{-0.06}^{+0.07}$ | $10.60_{-2.14}^{+2.38}$ | $3.71_{-1.24}^{+0.08}$ | $0.05_{-0.04}^{+0.06}$
ZTF21aaluqkp | $15.14_{-2.06}^{+0.77}$ | $0.54_{-0.02}^{+0.04}$ | $3.70_{-0.42}^{+0.49}$ | $0.09_{-0.03}^{+0.01}$ | $16.11_{-2.75}^{+2.74}$ | $2.60_{-0.19}^{+1.16}$ | $0.07_{-0.04}^{+0.06}$
ZTF21aamzuxi | $14.48_{-1.36}^{+1.25}$ | $1.49_{-0.09}^{+0.08}$ | $3.72_{-0.51}^{+0.49}$ | $0.09_{-0.06}^{+0.05}$ | $7.85_{-1.31}^{+1.41}$ | $2.56_{-0.23}^{+1.20}$ | $0.05_{-0.04}^{+0.05}$
ZTF21acchbmn | $14.14_{-1.35}^{+1.62}$ | $1.51_{-0.08}^{+0.13}$ | $2.17_{-0.24}^{+0.32}$ | $0.09_{-0.02}^{+0.01}$ | $8.96_{-0.89}^{+0.66}$ | $3.32_{-0.92}^{+0.46}$ | $0.05_{-0.03}^{+0.05}$
$\S$$\S$footnotetext: Events with relatively less confident inferences due to poor data quality (including missing phases of the light curve along with no constraints on upper limits).
${\ddagger}$${\ddagger}$footnotetext: Fits to explosion energies are very close to the model grid parameter boundaries.
Figure 1: The observed ZTF light curves and our model fits, plotted with
respect to the derived time of explosion. The family of light curves in each
panel represents 150 models randomly sampled from the derived posterior
probability distribution in each band. The dashed curve
represents the best-fit model for each ZTF event. The upper limits before
first detection that are used to constrain the explosion date are also
plotted. The median values and their $1\sigma$ uncertainties for the
parameters are listed in Table 3. ZTF events with no upper limits and poorly
constraining rise data are flagged with $\S$. ZTF events whose fits approached
the model grid boundaries are flagged with ${\ddagger}$. Additional plots for
the remaining ZTF events listed in Table 1 are provided in the Appendix.
## 4 Parameter Inference and Trends of Complete Light Curves
Using our Bayesian inference procedure to fit all ZTF events with our model
grid, we derived the posterior probability distribution for all parameters of
each Type II supernova. Our models were fit confidently to 34 events. The fits
for the remaining 11 events were limited by poor upper limits prior to first
detection (flagged with $\S$) and the hydrodynamical model grid boundaries
(flagged with ${\ddagger}$). Figure 1 shows samples of the posterior
probability distributions for the first 15 ZTF events. The light curve fits
for the remaining events are plotted in an extended version of Figure 1 in the
Appendix. The observed data points are shifted with respect to the inferred
explosion date and corrected for the host extinction derived from the fits.
The uncertainty in host extinction and explosion date is not represented in
Figure 1. The dashed line in each passband represents the best-fit model.
Table 3 lists the inferred median values for the seven parameters with their
16th and 84th percentile confidence regions. We note that these $1\sigma$
uncertainties of the parameters only reflect how well the model fits the
observed data and do not take into account any uncertainties related to the
assumptions used to create the model grid itself. Representative corner plots
of posterior probability distributions are shown in the Appendix.
Figure 2: Top panel: ZAMS mass plotted against 56Ni mass, mass–loss rate, and
kinetic energy. The blue and grey circles are values found for SNe II in
Förster et al. (2018) and Martinez et al. (2020). Bottom panel: Kinetic
energy plotted against time of explosion with respect to first detection, 56Ni
mass, and host extinction ($A_{V}$). The grey dotted lines represent the
parameter bounds of our model grid. The purple circle in the bottom left panel
demarcates events that required a narrower prior distribution for the
explosion date, constrained either from upper limits or rise-time data. The
green circles represent flagged ZTF events with low-confidence model fits.
ZTF21aabygea (SN 2021os) and ZTF18abcpmwh (SN 2018cur) yielded the highest
ZAMS masses ($15.96^{+0.03}_{-0.19}$ M⊙ and $15.77^{+0.20}_{-1.39}$ M⊙,
respectively), while ZTF21aagtqpn (SN 2021bkq) has the lowest
($12.11^{+0.17}_{-0.08}$ M⊙). The highest explosion energies are seen for
ZTF18acuqskr (SN 2018jrb) and ZTF20aausahr (SN 2020hgm)
($2.90^{+0.30}_{-0.39}\times 10^{51}$ erg and $2.38^{+0.46}_{-0.38}\times
10^{51}$ erg, respectively). As shown in Figure 1, these two events exhibit
steeper, earlier declines in their light curves compared to less energetic
events ($\sim 0.5\times 10^{51}$ erg) such as ZTF19ackjvtl (SN 2019uwd) and
ZTF19acbvisk (SN 2019rms). These observed short-lived plateaus and fast
declines are in agreement with previous results for other high energy CCSNe
(Valenti et al., 2016; Rubin & Gal-Yam, 2017; de Jaeger et al., 2019; Barker
et al., 2021). The more energetic events in our sample also show increased
peak luminosity in both ztf–$g$ and ztf–$r$ bands (Galbany et al., 2016;
Valenti et al., 2016; Sanders et al., 2015).
For seven events, our estimates of kinetic energy favor the minimum parameter
value of our grid (0.5 $\times 10^{51}$ erg). We flag these events as ones for
which our model fits are less confident, since the actual kinetic energy may
be significantly lower.
Most events (37 out of 45) in our sample favor mass–loss rates between
$10^{-4.0}$ and $10^{-2.0}\,\text{M}_{\odot}\,\text{yr}^{-1}$. However, a
non-negligible number of events (8 out of 45) yield higher mass-loss rate
estimates, $\geq 10^{-2.0}\,\text{M}_{\odot}\,\text{yr}^{-1}$. This finding is
consistent with
Förster et al. (2018), who reported higher mass loss rates to be correlated
with early and steep rises in the light curves of many Type II SNe from the
High cadence Transient Survey (HiTS) (Förster et al., 2016), possibly due to
shock-breakout in dense CSM (Moriya et al., 2011; Morozova et al., 2015;
Moriya et al., 2017; Morozova et al., 2018; Moriya et al., 2018; Bruch et al.,
2021; Haynie & Piro, 2021). We find that the fits for the steepness parameter
$\beta$ are most likely prior dominated and favour values closer to $\beta\sim
3$, consistent with slowly accelerating winds found in red supergiants (Baade
et al., 1996).
ZTF19abguqsi (SN 2019lsh) produced the least 56Ni mass of $0.01\pm 0.01$
$\text{M}_{\odot}$. Other ZTF events in the sample have estimates of 56Ni mass
ranging from 0.02–0.1 $\text{M}_{\odot}$. ZTF19aabvkly (SN 2019fmv),
ZTF19abiahko (SN 2019lsj) and ZTF20aazcnrv (SN 2020jjj) have relatively short-
lived plateau regions in their light curves and are associated with lower
estimates of synthesized 56Ni mass (see Figure 1). Events with higher
estimates of 56Ni mass have longer-lived plateaus compared to the events
with lower 56Ni mass estimates and faster declines in their light curves.
These results are in agreement with previous analyses of SNe Type II that
consider how 56Ni mass affects light curve evolution (Eastman et al., 1994;
Bersten, 2013; Faran et al., 2014; Kozyreva et al., 2019).
ZTF20abyosmd (SN 2020toc) has the highest host extinction
($A_{V}=1.03^{+0.19}_{-0.11}$ mag), followed by ZTF21aaevrjl (SN 2021arg)
($A_{V}=0.81^{+0.30}_{-0.32}$ mag). ZTF18acbwasc (SN 2018hfc) and ZTF21aabygea
(SN 2021os) have negligible host extinction values (both
$A_{V}=0.02^{+0.02}_{-0.01}$ mag). All ZTF events in our sample show host
extinction values ranging from $A_{V}=0.01-1.1$ mag, typical for CCSNe hosts
(Pastorello et al., 2006; Maguire et al., 2010; Faran et al., 2014).
### 4.1 Correlations between physical parameters
We plot our inferred values in Figure 2, alongside comparisons to similar
works in the literature (Förster et al., 2018; Martinez et al., 2020). Förster
et al. (2018) apply hydrodynamical models that include estimates of the
circumstellar environment to HiTS Type II supernovae, while Martinez et al.
(2020) do not include CSM structure in their models. Figure 2 shows the
bounds of our model grid with grey dotted lines for each parameter in the
plot. The ZTF events whose energy value fits approached the bounds of the
model grid are marked in Table 3. Events with relatively narrow prior
distributions for the explosion date are shown inside the purple circle in
Figure 2. These events had either constraining upper limits before first
detection or enough rise-time data to inform the upper bound of the prior
distribution for the explosion date.
Generally, our parameter estimates are less confident for ZTF events with
poor data quality, such as those with missing light-curve phases and no
upper-limit constraints, and for events whose fitted values approached the
parameter boundaries of the model grid (see Table 3). These events are
represented by green circles in Figure 2 as flagged events. Seven of these 11
events can be grouped together as having high inferred times of explosion
($>$ 25 days): ZTF18acbwasc (SN 2018hfc), ZTF19abhduuo (SN 2019lre),
ZTF19abqyouo (SN 2019pbk), ZTF19ackjvtl (SN 2019uwd), ZTF20abyosmd (SN
2020toc), ZTF20acjqksf (SN 2020tfb), and ZTF20acnvtxy (SN 2020zkx). This could be
attributed to the fact that the prior distribution was too broad as there were
no upper limits for these events as provided by the ZTF survey. The remaining
4 ZTF events whose fits approached model boundaries are ZTF18acbvhit (SN
2018hle), ZTF19abvbrve (SN 2019puv), ZTF20acptgfl (SN 2020zjk), and
ZTF21aafkwtk (SN 2021apg).
Moreover, we performed a Pearson correlation analysis to check for
correlations between the physical parameters. However, we were unable to find
any significant correlations ($r>+0.5$ or $r<-0.5$) within the sample
studied. The correlation matrix is shown in Figure 3. The modest mass range in
our model grid (12–16 M⊙) limits our ability to make strong claims on any
potential ZAMS dependencies. For example, in Figure 2, the correlations in
Martinez et al. (2020) become clear only when the ZAMS range is extended to
10 M⊙. Our hydrodynamical grid explores new parameter spaces that include
mass-loss properties, and this can possibly introduce additional degeneracies.
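The correlation check described above can be sketched as follows; the parameter names, ranges, and values here are illustrative placeholders, not the actual fitted sample.

```python
import numpy as np

# Toy stand-in for the fitted sample: one row per physical parameter,
# one column per event (hypothetical values, not the real fits).
rng = np.random.default_rng(0)
sample = np.vstack([
    rng.uniform(12, 16, 45),      # ZAMS mass [Msun]
    rng.uniform(0.5, 3.0, 45),    # kinetic energy [1e51 erg]
    rng.uniform(-4.0, -2.0, 45),  # log10 mass-loss rate [Msun/yr]
])

corr = np.corrcoef(sample)  # Pearson correlation matrix

# Apply the significance criterion used in the text: |r| > 0.5 off-diagonal.
off_diag = ~np.eye(corr.shape[0], dtype=bool)
print("any |r| > 0.5:", bool((np.abs(corr[off_diag]) > 0.5).any()))
```

A matrix built this way is what a plot like Figure 3 visualizes, one cell per parameter pair.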
Figure 3: Pearson correlation matrix showing correlation coefficients that are
color-coded for different physical parameters in the analysis. No significant
correlations between physical parameters were found within the sample of Type
II SNe used in this study.
## 5 Real-time Parameter Evolution
We analyzed how each of the parameter fits evolved as a function of time,
which we term real-time characterization. Our aim was to quantify how well
the model parameter fits and their uncertainties at fractional light-curve
stages anticipated the values found from complete light curves. We
compared our model grid to three regimes of incomplete light curves with
respect to the first detection: 1) $\Delta t\leq 25$ days; 2) $\Delta t\leq
50$ days; 3) all available data.
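The three cumulative fitting regimes amount to a simple selection on days elapsed since first detection; a minimal sketch (the helper function and light-curve values are ours, for illustration only):

```python
import numpy as np

def epoch_slices(mjd, flux, cutoffs=(25.0, 50.0)):
    """Split a light curve into the cumulative regimes described above:
    data within each cutoff in days since first detection, plus all data."""
    dt = mjd - mjd.min()  # elapsed days since first detection
    out = [(f"dt<={c:g}d", mjd[dt <= c], flux[dt <= c]) for c in cutoffs]
    out.append(("all", mjd, flux))
    return out

# Hypothetical single-band light curve.
mjd = 58600.0 + np.array([0.0, 5.0, 20.0, 40.0, 80.0])
flux = np.array([1.0, 3.0, 2.8, 2.5, 0.6])
for label, t, f in epoch_slices(mjd, flux):
    print(label, len(t))  # number of points entering the fit at each stage
```

Each slice would then be fed to the same model-grid fit, yielding the per-epoch estimates compared in Figure 4.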
Figure 4: Multi-epoch real-time characterization for ZTF19aaqdkrm (SN
2019dod). The left panel shows model fits using only data within the first 25
days of detection in each individual band. The middle panel shows model fits using
data within the first 50 days of detection. The right panel uses all data. The
light curves are plotted with respect to the derived time of explosion and
host extinction from model fits at each epoch. The table below shows parameter
estimates derived from fits at each epoch.
Figure 4 shows a detailed evolving characterization for the event ZTF19aaqdkrm
(SN 2019dod). As expected, the fits and estimates of the parameters change
with time as the supernova evolves and additional measurements are
incorporated into the fitting process. Within $\Delta t\leq 25$ days and
$\Delta t\leq 50$ days, the fits favor lower 56Ni masses and higher energies.
As data in both bands accumulate, the fits favor higher 56Ni masses and lower
energies. At $\Delta t\geq 50$ days, the portion of the event’s light curve
powered by hydrogen recombination starts to fall off, giving better estimates
of the ZAMS and 56Ni masses.
We performed this same analysis on all 45 ZTF events, where similar trends are
seen. Figure 5 shows the difference in parameter values of our fits with
respect to the final epoch with all the data as the event unfolds. Our
analysis shows that the explosion energies and mass–loss rates for the events
are initially overestimated and tend toward lower final values when all data
are included in the fit. The opposite holds for the 56Ni mass, which is
underestimated with only a few measurements and consistently tends toward
higher values for many events during later stages of light-curve evolution as
the recombination drop-off starts to unfold.
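The quantity summarized in Figure 5 can be sketched, for a single parameter, as the epoch-wise change relative to the final all-data fit, averaged over events; the array values below are invented for illustration.

```python
import numpy as np

# fits[i, j]: estimate of one parameter for event i at epoch j, with the
# last column being the fit using all available data (invented numbers).
fits = np.array([
    [2.9, 2.4, 2.0],
    [1.8, 1.5, 1.4],
    [3.1, 2.6, 2.2],
])

# Change at each epoch relative to the final all-data epoch, averaged
# over events -- one such curve per parameter in Figure 5.
delta = (fits - fits[:, -1:]).mean(axis=0)
print(delta)  # positive entries mean the parameter was overestimated early
```

By construction the final entry is zero, and a positive early entry corresponds to the overestimation of energy and mass-loss rate described above.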
The most confident estimate of 56Ni mass is inferred when the hydrogen
recombination phase ends and the radioactive decay phase starts. This results
in epochs with all the data yielding higher estimates of 56Ni mass. For the
kinetic energy, the peak luminosity and decline rate of the light curve play
a significant role in estimating the energetics of the event. As a result, as
more data become available at later epochs, more reliable estimates of the
kinetic energy are obtained. The inferred values of the ZAMS mass, host
extinction, explosion date, and $\beta$ remain nearly constant as the light
curve evolves.
Figure 5: Real-time parameter evolution for all the listed ZTF events. The
average change in parameter values (averaged over all 45 ZTF events) at each
epoch with respect to the final epoch with all data are represented with the
blue squares from top left to bottom right. The order of the parameters is as
follows: ZAMS mass, kinetic energy of explosion ($E_{k}$), mass-loss rate
($\dot{M}$), steepness of the velocity law ($\beta$) associated with the
stellar wind, 56Ni mass synthesized, explosion date with respect to first
detection, and host extinction ($\text{A}_{V}$).
## 6 Discussion
### 6.1 Parameter Space Degeneracy
A major challenge encountered when modelling only two passbands provided by
public ZTF light curves was model degeneracy. Specifically, the possibility
that different combinations of explosion and progenitor parameters can
lead to nearly identical light curves posed difficulties in converging to a
unique solution in our fitting method. As noted in previous works, the
hydrodynamical modelling approach has led to larger estimates of progenitor
properties, especially ZAMS estimates, when compared to other approaches like
pre-SN explosion imaging (Utrobin & Chugai, 2008, 2009; Maguire et al., 2010;
Sanders et al., 2015). In the Appendix, the posterior distributions of three
ZTF events at various epochs are examined using kernel density estimation
(KDE) analysis, along with a calculation of the number of modes. In a similar
analysis for all 45 events, we find that all the ZTF events have multi-modal
posteriors and that the temporal evolution of the modes cannot be generalized.
Goldberg et al. (2019) recognize the challenges involved in breaking the
degeneracies between ejecta mass, explosion energy and progenitor radius, and
argue that doing so requires an independent measurement of one of these
parameters. The scaling relationships used in their work yield families of
explosions with varied parameters that can reproduce similar light curves.
Hillier & Dessart (2019) also highlight calculations of similar photospheric
phases for well-sampled Type II supernovae in their multi-band and
spectroscopic modelling.
Martinez et al. (2020) attempt to partially lift this degeneracy by fitting
photospheric velocity information from their models to velocity measurements
of SNe obtained during the plateau phase. However, Goldberg et
al. (2019) argue that only the ejecta velocities measured during the initial
shock cooling phase can be useful to break these degeneracies seen in the
parameters. Our analysis focused on photometry only, and future work can
investigate whether use of kinematic information from spectroscopy can better
constrain parameter selection.
### 6.2 Bolometric vs Pre-computed Multi-band Inference
Our analysis adopts the approach of fitting observations to synthetic multi-
band photometry derived from theoretical hydrodynamical models. Similar works
like Nicholl et al. (2017) and Guillochon et al. (2018) use semi-analytical,
black body spectral energy distribution (SED) models to fit multi-band
photometry for transients. These procedures contrast with typical methods
that first construct bolometric light curves from multi-filter and/or
multi-wavelength observations, which are in turn compared to model bolometric
light curves. Each method has associated uncertainties. In the case of
synthetic model photometry, uncertainties arise from assumptions in opacity
treatment at different frequencies in STELLA. In the case of creating
bolometric light curves from interpolated observed data sets, the
uncertainties stem from potential gaps in photometry cadence, limited
passbands, and potentially few data points overall to fit against.
For real-time characterization of events, we found that Bayesian inference
of explosion parameters for large numbers of supernovae is most efficiently
conducted with pre-computed grids of models. Computing models in real time
for each event would lead to duplicative effort and incur computational time
costs. With our method, the fitting is faster and more flexible, and the
associated uncertainties are less significant.
### 6.3 Intelligent Augmentation
Our analysis only uses ZTF public data in ztf–$g$ and ztf–$r$ passbands to
make inferences about Type II explosion parameters. Our parameter fits could
be further constrained with observations at other passbands, and it is
worthwhile to consider which passbands at which epochs are most constraining.
To this end, real-time inference on transients is needed to make on-the-fly
decisions about optimal follow-up in complementary passbands (Carbone &
Corsi, 2020; Sravan et al., 2021). Generally, observations at early and late
phases of light-curve evolution are the most constraining for inferring Type
II properties. At early phases, UV observations best sample shock
breakout and circumstellar interaction (Gezari et al., 2015; Ganot et al.,
2016; Soumagnac et al., 2020; Haynie & Piro, 2021; Jacobson-Galán et al.,
2022). At late phases, near- and mid-infrared passbands provide diagnostics
that best follow ejecta cooling and dust formation (Szalai & Vinkó, 2013;
Bianco et al., 2014; Tinyanont et al., 2016). Optimally augmenting all-sky
survey photometry in real-time in this way can enhance opportunities to
generate large samples of core-collapse supernovae sufficiently observed to
perform population and host-environment studies (D’Andrea et al., 2010;
Anderson et al., 2014; Sanders et al., 2015; Schulze et al., 2021).
Figure 6: Spectroscopically classified Type II supernovae with anomalous light
curves identified in this work. Only the upper limits prior to the first
detection used for deriving the fits are plotted.
### 6.4 Anomaly Detection
Our work uncovered two examples of anomalous Type II supernovae, which are not
included in our sample of 45 events. We present the unusual light curves of
ZTF18acgvgiq (SN 2018fru) and ZTF20acwxrgp (SN 2020acjg) in Figure 6. The fits
to the entire light curves converged to model solutions with very poor
likelihood scores, resulting in inaccurate inferences. Consequently, we were
unable to characterize these events with our current model grid. The anomalous
nature of these events could be identified via poor model fits as early as
$\Delta t<25$ days. The cumulative log-evidence (logZ) values inferred for
these two events were above -200, indicating very poor likelihood estimates
when comparing the models with the data. For the events with good fits, the
logZ estimates were under -20. These logZ values indicate that the models were
unable to converge to a target distribution of the parameters for the two
anomalous events.
This experience shows that real-time inferencing can be used as a way to
identify targets that deviate from normal theoretical predictions. Such real-
time analyses for detecting anomalies (Pruzhinskaya et al., 2019; Soraisam et
al., 2020; Villar et al., 2020; Ishida et al., 2021; Villar et al., 2021;
Martínez-Galarza et al., 2021) can be automated into an ORACLE such as REFITT
to motivate rapid spectroscopic follow–up of non-traditional CCSNe.
## 7 Conclusions
In this paper we have characterized 45 Type II supernovae using only products
from the public ZTF survey (i.e., in ztf-$g$ and ztf-$r$ passbands) using a
grid of theoretical hydrodynamical models. Our grid parameters span multiple
supernova progenitor and explosion properties, as well as the time of
explosion with respect to the first detection and host extinction. We compare
results between complete and fractional light curves to determine which
parameters are most robust to incomplete photometric data sets. This effort
aims to assess whether opportunities exist for theoretically-driven forecasts
to inform when follow-up observations are needed to support all-sky survey
alert streams. We draw the following conclusions:
* •
We obtain confident characterizations for 34 SNe II in our sample. Inferences
of the remaining 11 events are limited either by poorly constraining data or
the boundaries of our model grid. The properties of these well-fitted events
broadly follow those reported in previous analyses of SNe II.
* •
When fitted parameters derived from complete and incomplete data sets are
compared, some parameters are more reliably determined at early epochs than
others. The explosion energy, host extinction, and mass-loss rate
parameters are overestimated during initial phases of evolution, while the
56Ni mass is underestimated. The ZAMS mass and $\beta$ estimates do not change
significantly at different phases.
* •
The explosion date is a very sensitive parameter that requires
well-constrained pre-explosion upper limits from the survey for confident
inferences. Generally, we found parameter estimates to be less reliable for
ZTF events with poor data quality, such as missing phases of the light curve
along with poor upper-limit constraints.
* •
Real-time Bayesian inferencing of progenitor and explosion parameters for
large numbers of CCSNe from all-sky surveys demands a pre-computed grid of
models. Creating synthetic model light curves in respective all-sky survey
passbands catalyzes real-time characterization of evolving transients by
avoiding challenges associated with constructing bolometric light curves with
sparse and incomplete photometry.
Our work has demonstrated that hydrodynamical model grids for CCSNe along with
statistical analyses can provide opportunities to enhance scientific return
from all-sky surveys that provide live alert streams. Theoretically-driven
predictions can be leveraged to efficiently coordinate worldwide observing
facilities to conduct follow up observations that augment survey light curves
to optimally achieve scientific objectives (Bianco et al., 2014; Modjaz et
al., 2019; Kennamer et al., 2020; Sravan et al., 2020; Anand et al., 2021).
For example, real-time characterization can identify and prioritize transients
that fall within certain parameter spaces of interest, including the extreme
high and low ends of kinetic energy or 56Ni mass. Likewise, theoretical
forecasts can identify and prioritize follow up photometry at critical phases
of transient evolution, including monitoring the plateau drop-off of SNe II
light curves that provides information needed to improve estimates of kinetic
energy and ZAMS and 56Ni masses. Ideally, predicting transient evolution using
the underlying physics of transients can be incorporated into a TOM or ORACLE
that can efficiently recommend targets for follow-up at information-rich
epochs (Djorgovski et al., 2016; Street et al., 2018; Kasliwal et al., 2019;
Sravan et al., 2020; Agayeva et al., 2021).
Our future work relies on an expanded grid of hydrodynamical models exploring
larger parameter ranges, including varying degrees of 56Ni mixing within the
inner layers of the progenitors, and information on photospheric velocity that
can be used to potentially break degeneracies between parameters. It will also
expand synthetic photometry to all six passbands of LSST. Although our work
focuses on Type II CCSNe, our methods can be easily applied to identify,
prioritize, and coordinate follow-up of other transients discovered by the
Vera C. Rubin Observatory.
## Acknowledgements
The authors would like to thank the anonymous referee for helpful comments
that have significantly improved this paper. We also acknowledge helpful
discussions with Thomas Matheson, Mariana Orellana, and Melina Bersten. The
ZTF forced-photometry service was funded under the Heising-Simons Foundation
grant #12540303 (PI: Graham). Numerical computations were carried out in part
on the PC cluster at the Center for Computational Astrophysics (CfCA),
National Astronomical Observatory of Japan. D. M. acknowledges NSF support
from grants PHY-1914448, PHY-2209451, AST-2037297, and AST-2206532.
Figure 7: Continued from Figure 1.
Figure 8: Continued from Figure 1.
Figure 9: Continued from Figure 4 for event ZTF20abwdaeo (SN 2020rvn).
Figure 10: Continued from Figure 4 for event ZTF21aabygea (SN 2021os).
Figure 11: Corner plot showing the posterior probability distribution of
various parameters for the event ZTF20acptgfl (SN 2020zjk).
Figure 12: Corner plot showing the posterior probability distribution of
various parameters for the event ZTF20aausahr (SN 2020hgm).
Figure 13: Corner plot showing the posterior probability distribution of
various parameters for the event ZTF19abqyouo (SN 2019pbk).
## Appendix A Multi-epoch Evolution of Posterior Distribution of Parameters
We performed a kernel density estimation (KDE) analysis in order to find the
modality of the posterior distributions at various epochs. The samples in the
posterior distribution were collected and smoothed using Silverman’s bandwidth
with a Gaussian kernel. The KDE-approximated distribution was then used to
calculate the number of modes at every epoch. The modes were found by
identifying local maxima of the distribution, i.e., positions where the first
derivative changes sign from positive to negative.
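A minimal sketch of this mode-counting procedure for a one-dimensional posterior sample; the helper name, grid size, and toy draws are our assumptions, not the paper's implementation.

```python
import numpy as np

def count_modes(samples, grid_size=512):
    """Gaussian KDE with Silverman's rule-of-thumb bandwidth, then count
    modes as local maxima of the density (first derivative + -> -)."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    h = 0.9 * min(x.std(ddof=1), iqr / 1.34) * n ** (-0.2)  # Silverman
    grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, grid_size)
    # Gaussian KDE evaluated on the grid.
    dens = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2).sum(axis=1)
    dens /= n * h * np.sqrt(2.0 * np.pi)
    d1 = np.diff(dens)
    # A sign change from positive to non-positive marks a local maximum.
    return int(np.sum((d1[:-1] > 0) & (d1[1:] <= 0)))

# A clearly bimodal toy "posterior": two well-separated Gaussians.
rng = np.random.default_rng(1)
draws = np.concatenate([rng.normal(0.0, 0.5, 2000),
                        rng.normal(10.0, 0.5, 2000)])
print(count_modes(draws))  # expect 2 modes
```

Applied per parameter and per epoch, a count greater than one flags the multi-modal posteriors discussed in the text.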
We found that all the objects have multi-modal posteriors for at least one
parameter in our analysis. From this analysis, we conclude that the change in
the posteriors for physical parameters over different epochs cannot be
generalized for all the events. Figure 14 shows examples of multi-modal
posteriors for the ZAMS mass, kinetic energy, and 56Ni mass for three ZTF
events. The red circles represent the different modes found in the
distribution via the first-derivative analysis. We note that the modes of the
kinetic-energy distribution shift from higher to lower values as time
proceeds in Figure 14 for each event, as discussed in the paper. The trend in
56Ni with time is reflected in the modes, with earlier epochs favoring lower
values compared with final epochs. The degeneracies in parameter space
discussed in Section 6.1 are clearly reflected in these distributions through
their multi-modality.
Figure 14: Kernel Density Estimates (KDE) of physical parameters along with
modes represented by red circles obtained at three epochs for three ZTF
events. The order of physical parameters from top to bottom row is as
follows: (a) ZAMS mass, (b) kinetic energy, and (c) 56Ni mass.
## References
* Agayeva et al. (2021) Agayeva, S., Alishov, S., Antier, S., et al. 2021, in Revista Mexicana de Astronomia y Astrofisica Conference Series, Vol. 53, Revista Mexicana de Astronomia y Astrofisica Conference Series, 198–205, doi: 10.22201/ia.14052059p.2021.53.39
* Alves et al. (2022) Alves, C. S., Peiris, H. V., Lochner, M., et al. 2022, ApJS, 258, 23, doi: 10.3847/1538-4365/ac3479
* Anand et al. (2021) Anand, S., Coughlin, M. W., Kasliwal, M. M., et al. 2021, Nature Astronomy, 5, 46, doi: 10.1038/s41550-020-1183-3
* Anderson et al. (2014) Anderson, J. P., González-Gaitán, S., Hamuy, M., et al. 2014, ApJ, 786, 67, doi: 10.1088/0004-637X/786/1/67
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
* Baade et al. (1996) Baade, R., Kirsch, T., Reimers, D., et al. 1996, ApJ, 466, 979, doi: 10.1086/177569
* Barker et al. (2021) Barker, B. L., Harris, C. E., Warren, M. L., O’Connor, E. P., & Couch, S. M. 2021, arXiv e-prints, arXiv:2102.01118. https://arxiv.org/abs/2102.01118
* Bellm et al. (2019) Bellm, E. C., Kulkarni, S. R., Graham, M. J., et al. 2019, PASP, 131, 018002, doi: 10.1088/1538-3873/aaecbe
* Bersten (2013) Bersten, M. C. 2013, arXiv e-prints, arXiv:1303.0639. https://arxiv.org/abs/1303.0639
* Bianco et al. (2014) Bianco, F. B., Modjaz, M., Hicken, M., et al. 2014, ApJS, 213, 19, doi: 10.1088/0067-0049/213/2/19
* Blinnikov et al. (2000) Blinnikov, S., Lundqvist, P., Bartunov, O., Nomoto, K., & Iwamoto, K. 2000, ApJ, 532, 1132, doi: 10.1086/308588
* Blinnikov et al. (1998) Blinnikov, S. I., Eastman, R., Bartunov, O. S., Popolitov, V. A., & Woosley, S. E. 1998, ApJ, 496, 454, doi: 10.1086/305375
* Blinnikov et al. (2006) Blinnikov, S. I., Röpke, F. K., Sorokina, E. I., et al. 2006, A&A, 453, 229, doi: 10.1051/0004-6361:20054594
* Borne (2008) Borne, K. D. 2008, Astronomische Nachrichten, 329, 255, doi: 10.1002/asna.200710946
* Bruch et al. (2021) Bruch, R. J., Gal-Yam, A., Schulze, S., et al. 2021, ApJ, 912, 46, doi: 10.3847/1538-4357/abef05
* Carbone & Corsi (2020) Carbone, D., & Corsi, A. 2020, ApJ, 889, 36, doi: 10.3847/1538-4357/ab6227
* D’Andrea et al. (2010) D’Andrea, C. B., Sako, M., Dilday, B., et al. 2010, ApJ, 708, 661, doi: 10.1088/0004-637X/708/1/661
* de Jaeger et al. (2019) de Jaeger, T., Zheng, W., Stahl, B. E., et al. 2019, MNRAS, 490, 2799, doi: 10.1093/mnras/stz2714
* Djorgovski et al. (2016) Djorgovski, S. G., Graham, M. J., Donalek, C., et al. 2016, arXiv e-prints, arXiv:1601.04385. https://arxiv.org/abs/1601.04385
* Eastman et al. (1994) Eastman, R. G., Woosley, S. E., Weaver, T. A., & Pinto, P. A. 1994, ApJ, 430, 300, doi: 10.1086/174404
* Faran et al. (2014) Faran, T., Poznanski, D., Filippenko, A. V., et al. 2014, MNRAS, 442, 844, doi: 10.1093/mnras/stu955
* Förster et al. (2016) Förster, F., Maureira, J. C., San Martín, J., et al. 2016, ApJ, 832, 155, doi: 10.3847/0004-637X/832/2/155
* Förster et al. (2018) Förster, F., Moriya, T. J., Maureira, J. C., et al. 2018, Nature Astronomy, 2, 808, doi: 10.1038/s41550-018-0563-4
* Förster et al. (2021) Förster, F., Cabrera-Vives, G., Castillo-Navarrete, E., et al. 2021, AJ, 161, 242, doi: 10.3847/1538-3881/abe9bc
* Gal-Yam (2021) Gal-Yam, A. 2021, in American Astronomical Society Meeting Abstracts, Vol. 53, American Astronomical Society Meeting Abstracts, 423.05
* Galbany et al. (2016) Galbany, L., Hamuy, M., Phillips, M. M., et al. 2016, The Astronomical Journal, 151, 33, doi: 10.3847/0004-6256/151/2/33
* Ganot et al. (2016) Ganot, N., Gal-Yam, A., Ofek, E. O., et al. 2016, ApJ, 820, 57, doi: 10.3847/0004-637X/820/1/57
* García-Jara et al. (2022) García-Jara, G., Protopapas, P., & Estévez, P. A. 2022, arXiv e-prints, arXiv:2205.06758. https://arxiv.org/abs/2205.06758
* Garretson et al. (2021) Garretson, B., Milisavljevic, D., Reynolds, J., et al. 2021, Research Notes of the American Astronomical Society, 5, 283, doi: 10.3847/2515-5172/ac416e
* Gezari et al. (2015) Gezari, S., Jones, D. O., Sanders, N. E., et al. 2015, ApJ, 804, 28, doi: 10.1088/0004-637X/804/1/28
* Goldberg et al. (2019) Goldberg, J. A., Bildsten, L., & Paxton, B. 2019, ApJ, 879, 3, doi: 10.3847/1538-4357/ab22b6
* Guillochon et al. (2018) Guillochon, J., Nicholl, M., Villar, V. A., et al. 2018, ApJS, 236, 6, doi: 10.3847/1538-4365/aab761
* Haynie & Piro (2021) Haynie, A., & Piro, A. L. 2021, ApJ, 910, 128, doi: 10.3847/1538-4357/abe938
* Hillier & Dessart (2019) Hillier, D. J., & Dessart, L. 2019, A&A, 631, A8, doi: 10.1051/0004-6361/201935100
* Huerta et al. (2019) Huerta, E. A., Allen, G., Andreoni, I., et al. 2019, Nature Reviews Physics, 1, 600, doi: 10.1038/s42254-019-0097-4
* IRSA (2022) IRSA. 2022, Zwicky Transient Facility Image Service, IPAC, doi: 10.26131/IRSA539
* Ishida et al. (2021) Ishida, E. E. O., Kornilov, M. V., Malanchev, K. L., et al. 2021, A&A, 650, A195, doi: 10.1051/0004-6361/202037709
* Ivezić et al. (2019) Ivezić, Ž., Kahn, S. M., Tyson, J. A., et al. 2019, ApJ, 873, 111, doi: 10.3847/1538-4357/ab042c
* Jacobson-Galán et al. (2022) Jacobson-Galán, W. V., Dessart, L., Jones, D. O., et al. 2022, ApJ, 924, 15, doi: 10.3847/1538-4357/ac3f3a
* Kasen & Woosley (2009) Kasen, D., & Woosley, S. E. 2009, ApJ, 703, 2205, doi: 10.1088/0004-637X/703/2/2205
* Kasliwal et al. (2019) Kasliwal, M. M., Cannella, C., Bagdasaryan, A., et al. 2019, PASP, 131, 038003, doi: 10.1088/1538-3873/aafbc2
* Kennamer et al. (2020) Kennamer, N., Ishida, E. E. O., Gonzalez-Gaitan, S., et al. 2020, arXiv e-prints, arXiv:2010.05941. https://arxiv.org/abs/2010.05941
* Kochanek et al. (2012) Kochanek, C. S., Khan, R., & Dai, X. 2012, ApJ, 759, 20, doi: 10.1088/0004-637X/759/1/20
* Kozyreva et al. (2019) Kozyreva, A., Nakar, E., & Waldman, R. 2019, MNRAS, 483, 1211, doi: 10.1093/mnras/sty3185
* LSST Science Collaboration et al. (2009) LSST Science Collaboration, Abell, P. A., Allison, J., et al. 2009, arXiv e-prints, arXiv:0912.0201. https://arxiv.org/abs/0912.0201
* LSST Science Collaboration et al. (2017) LSST Science Collaboration, Marshall, P., Anguita, T., et al. 2017, arXiv e-prints, arXiv:1708.04058. https://arxiv.org/abs/1708.04058
* Maguire et al. (2010) Maguire, K., Di Carlo, E., Smartt, S. J., et al. 2010, MNRAS, 404, 981, doi: 10.1111/j.1365-2966.2010.16332.x
* Martinez et al. (2020) Martinez, L., Bersten, M. C., Anderson, J. P., et al. 2020, A&A, 642, A143, doi: 10.1051/0004-6361/202038393
* Martínez-Galarza et al. (2021) Martínez-Galarza, J. R., Bianco, F. B., Crake, D., et al. 2021, MNRAS, 508, 5734, doi: 10.1093/mnras/stab2588
* Masci et al. (2019) Masci, F. J., Laher, R. R., Rusholme, B., et al. 2019, PASP, 131, 018003, doi: 10.1088/1538-3873/aae8ac
* Matheson et al. (2021) Matheson, T., Stubens, C., Wolf, N., et al. 2021, AJ, 161, 107, doi: 10.3847/1538-3881/abd703
* Mattila et al. (2012) Mattila, S., Dahlen, T., Efstathiou, A., et al. 2012, ApJ, 756, 111, doi: 10.1088/0004-637X/756/2/111
* Mauron & Josselin (2011) Mauron, N., & Josselin, E. 2011, A&A, 526, A156, doi: 10.1051/0004-6361/201013993
* Modjaz et al. (2019) Modjaz, M., Gutiérrez, C. P., & Arcavi, I. 2019, Nature Astronomy, 3, 717, doi: 10.1038/s41550-019-0856-2
* Möller & de Boissière (2020) Möller, A., & de Boissière, T. 2020, MNRAS, 491, 4277, doi: 10.1093/mnras/stz3312
* Möller et al. (2021) Möller, A., Peloton, J., Ishida, E. E. O., et al. 2021, MNRAS, 501, 3272, doi: 10.1093/mnras/staa3602
* Moriya et al. (2011) Moriya, T., Tominaga, N., Blinnikov, S. I., Baklanov, P. V., & Sorokina, E. I. 2011, MNRAS, 415, 199, doi: 10.1111/j.1365-2966.2011.18689.x
* Moriya et al. (2018) Moriya, T. J., Förster, F., Yoon, S.-C., Gräfener, G., & Blinnikov, S. I. 2018, MNRAS, 476, 2840, doi: 10.1093/mnras/sty475
* Moriya et al. (2017) Moriya, T. J., Yoon, S.-C., Gräfener, G., & Blinnikov, S. I. 2017, MNRAS, 469, L108, doi: 10.1093/mnrasl/slx056
* Morozova et al. (2015) Morozova, V., Piro, A. L., Renzo, M., et al. 2015, ApJ, 814, 63, doi: 10.1088/0004-637X/814/1/63
* Morozova et al. (2018) Morozova, V., Piro, A. L., & Valenti, S. 2018, ApJ, 858, 15, doi: 10.3847/1538-4357/aab9a6
* Najita et al. (2016) Najita, J., Willman, B., Finkbeiner, D. P., et al. 2016, arXiv e-prints, arXiv:1610.01661. https://arxiv.org/abs/1610.01661
* Narayan et al. (2018) Narayan, G., Zaidi, T., Soraisam, M. D., et al. 2018, ApJS, 236, 9, doi: 10.3847/1538-4365/aab781
* Nicholl et al. (2017) Nicholl, M., Guillochon, J., & Berger, E. 2017, ApJ, 850, 55, doi: 10.3847/1538-4357/aa9334
* Pastorello et al. (2006) Pastorello, A., Sauer, D., Taubenberger, S., et al. 2006, MNRAS, 370, 1752, doi: 10.1111/j.1365-2966.2006.10587.x
# Note on Heilbronn’s triangle problem in $\mathbb{R}^{d}$
Dmitrii Zakharov Department of Mathematics, Massachusetts Institute of
Technology, Cambridge, MA 02139, USA, Email<EMAIL_ADDRESS>
###### Abstract
We show that for fixed $d\geqslant 1$, any subset of $[0,1]^{d}$ of size $n$
contains $d+1$ points which span a simplex of volume at most $C_{d}n^{-\log
d+6}$. For small $d$ we provide stronger estimates on the exponent of $n$
which improve known upper bounds for all $d\geqslant 3$.
## 1 Introduction
Heilbronn’s triangle problem asks for the smallest number
$\Delta=\Delta(n)$ such that among any $n$ points in the unit square
$[0,1]^{2}$ one can find 3 which form a triangle of area at most $\Delta$.
After a series of developments by Roth [6], [7], [8] and Schmidt [9], the best
known lower and upper bounds were obtained by Komlos–Pintz–Szemerédi [2], [3]
and state that:
$n^{-2}\log n\ll\Delta(n)\ll n^{-\frac{8}{7}+\varepsilon}$ (1)
where $\varepsilon>0$ can be taken arbitrarily small.
Consider a high dimensional analogue of this question, namely, given
$0\leqslant k\leqslant d$ define $\Delta_{k,d}(n)$ to be the minimum $\Delta$
such that among any $n$ points in $[0,1]^{d}$ there are $k+1$ points which
form a simplex with $k$-dimensional volume at most $\Delta$. Setting $d=k=2$
recovers the original Heilbronn’s triangle problem. One can also similarly
define $\Delta_{k,d}$ for $k>d$ but we are not going to address this case in
this note. We refer to [5] for an account of known results.
Simple probabilistic and packing arguments imply that
$n^{-\frac{k}{d-k+1}}\ll\Delta_{k,d}(n)\ll n^{-\frac{k}{d}},$ (2)
for any fixed $0\leqslant k\leqslant d$. The lower bound can be improved by a
logarithmic factor using results on independence numbers of sparse
hypergraphs, see [4] for details. Lefmann [4] improved the upper bound by a
polynomial factor for all odd $k\geqslant 1$:
$\Delta_{k,d}(n)\ll n^{-\frac{k}{d}-\frac{k-1}{2d(d-1)}}.$ (3)
Brass [1] proved this in the special case $k=d$. For $d\geqslant 3$ and even
$k$, no improvements over (2) are known and the first interesting special case
is $d=3,k=2$.
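For orientation, the packing upper bound in (2) can be recovered by a standard pigeonhole argument; we sketch it here as an aside (a routine reconstruction, not quoted from the cited sources):

```latex
% Partition [0,1]^d into roughly n/(k+1) congruent boxes of volume (k+1)/n,
% i.e. of side length s = ((k+1)/n)^{1/d}. By pigeonhole, some box contains
% k+1 of the n points. A k-simplex with vertices in a box of diameter
% s\sqrt{d} has k-dimensional volume at most
\[
  \operatorname{Vol}_{k}\;\leqslant\;\frac{1}{k!}\bigl(s\sqrt{d}\bigr)^{k}
  \;\ll_{k,d}\; n^{-\frac{k}{d}},
\]
% which is precisely the upper bound \Delta_{k,d}(n) \ll n^{-k/d} in (2).
```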
In this note we establish the following inequality between functions
$\Delta_{k,d}$:
###### Theorem 1.1
Fix $1\leqslant k\leqslant d$ and $0\leqslant l<k$. Then for some constant
$C=C(d)$ and all $n^{\prime}>Cn$ we have:
$\Delta_{k,d}(n^{\prime})\leqslant C\Delta_{l,d}(n)\Delta_{k-l-1,d-l-1}(n).$
(4)
Combining (4) with known upper bounds on $\Delta_{k,d}$ leads to new
inequalities. The most notable improvement appears in the case $k=d$.
Recall that known estimates give $n^{-d}\lessapprox\Delta_{d,d}(n)\ll n^{-1}$
for all $d$ and $\Delta_{d,d}(n)\ll n^{-1-\frac{1}{2d}}$ when $d$ is odd.
###### Corollary 1.1
We have the following bounds:
$\displaystyle\Delta_{3,3}(n)\ll n^{-\frac{4}{3}},~{}~{}\Delta_{4,4}(n)\ll
n^{-\frac{3}{2}},~{}~{}\Delta_{5,5}(n)\ll n^{-\frac{33}{20}},$
$\displaystyle\Delta_{6,6}(n)\ll n^{-1.676},~{}~{}\Delta_{7,7}(n)\ll
n^{-1.792},~{}~{}\Delta_{8,8}(n)\ll n^{-\frac{19}{10}}.$
###### Corollary 1.2
For any $d\geqslant 3$ we have $\Delta_{d,d}(n)\ll n^{-\log d+6}$.
We did not attempt to optimize the constant $6$ in the exponent. It can be
done by a more careful analysis of the recursion from Theorem 1.1 but it is
likely that the result will be far from optimal anyway. For each fixed $d$ it
is more efficient to apply Theorem 1.1 directly to obtain the best possible
bound. Theorem 1.1 also improves the upper bound in (2) for all $k\gg
d^{1/2}$, though the precise bounds are not particularly enlightening so we
omit them.
In Section 2 we prove Theorem 1.1. It would be interesting to combine our
argument with more advanced methods from [3] and [8] based on Bombieri-type
inequalities. Note however that there is a natural limit to those techniques
applied to the $\Delta_{d,d}$ problem at the exponent
$n^{-1-\frac{1}{2}-\frac{1}{3}-\ldots-\frac{1}{d}}=n^{-\log d+O(1)}$. So it is
possible that even though those ideas generalize well to all $d\geqslant 3$,
the resulting improvement might still only affect the constant term in
Corollary 1.2. On the other hand, our argument seems to be rather wasteful for
small $d$ so perhaps the bound can be improved significantly there.
### Acknowledgements.
I thank Cosmin Pohoata for many fruitful discussions.
## 2 Proof of Theorem 1.1
For our argument it will be convenient to restate the problem in the
following, essentially equivalent form. Fix $1\leqslant k\leqslant d$. For a
collection of vectors $v_{1},\ldots,v_{k}\in\mathbb{R}^{d}$ let
$\operatorname{Vol}_{k}(v_{1},\ldots,v_{k})$ be the $k$-dimensional volume of
the parallelepiped spanned by the vectors $v_{1},\ldots,v_{k}$. For brevity, we
call $\operatorname{Vol}_{k}(v_{1},\ldots,v_{k})$ the determinant of the
collection $v_{1},\ldots,v_{k}$. Note that
$\operatorname{Vol}_{k}(v_{1},\ldots,v_{k})$ equals the square root of the
determinant of the Gram matrix $(\langle v_{i},v_{j}\rangle)_{i,j=1}^{k}$ of
the vectors $v_{1},\ldots,v_{k}$.
We will use the following basic formula to estimate
$\operatorname{Vol}_{k}(v_{1},\ldots,v_{k})$. We include a proof for
completeness.
###### Proposition 2.1
Let $x_{1},\ldots,x_{k}\in\mathbb{R}^{d}$ be arbitrary vectors, then for any
$1\leqslant l\leqslant k$ we have
$\operatorname{Vol}_{k}(x_{1},\ldots,x_{k})=\operatorname{Vol}_{l}(x_{1},\ldots,x_{l})\operatorname{Vol}_{k-l}(x_{l+1}^{\prime},\ldots,x_{k}^{\prime}),$
(5)
where for $j=l+1,\ldots,k$, we denote by $x_{j}^{\prime}$ the projection of
$x_{j}$ on the orthogonal complement to the space
$V=\operatorname{span}(x_{1},\ldots,x_{l})$.
Proof: View $x_{1},\ldots,x_{k}$ as $d\times 1$ column vectors and consider
the matrices $A=(x_{1},\ldots,x_{k})$ and
$A^{\prime}=(x_{1},\ldots,x_{l},x_{l+1}^{\prime},\ldots,x_{k}^{\prime})$. Note
that $A^{\prime}=AU$ for some upper triangular $k\times k$ matrix $U$ with
unit diagonal, so that $\det U=1$. The Gram matrices of the collections
$x_{1},\ldots,x_{k}$ and
$x_{1},\ldots,x_{l},x_{l+1}^{\prime},\ldots,x_{k}^{\prime}$ are given by
$G=A^{T}A$ and $G^{\prime}=A^{\prime T}A^{\prime}$, respectively. It follows
that $\det G^{\prime}=\det(U^{T}GU)=\det G$. The Gram matrix $G^{\prime}$ is
block-diagonal with an $l\times l$ block and a $(k-l)\times(k-l)$ block, which
are the Gram matrices of the collections $x_{1},\ldots,x_{l}$ and
$x^{\prime}_{l+1},\ldots,x^{\prime}_{k}$, respectively. This implies (5).
$\Box$
Now for any $1\leqslant k\leqslant d$ define $\Delta^{\prime}_{k,d}(n)$ to be
the minimum $\Delta^{\prime}$ such that any collection $X$ of $n$ unit vectors
in $\mathbb{R}^{d+1}$ contains $k+1$ vectors with determinant at most
$\Delta^{\prime}$.
###### Proposition 2.2
For any $1\leqslant k\leqslant d$ there are constants $c,c^{\prime},C$ such
that for all $n>k$:
$c\Delta_{k,d}(n)\leqslant\Delta^{\prime}_{k,d}(n)\leqslant
C\Delta_{k,d}(c^{\prime}n).$
Proof: Let $X\subset[0,1]^{d}$ be an $n$-element set of points. Recall that
for any points $x_{1},\ldots,x_{k+1}\in\mathbb{R}^{d}$ the volume of the
simplex spanned by them is equal to
$\frac{1}{k!}\operatorname{Vol}_{k+1}((x_{1},1),\ldots,(x_{k+1},1))$. Define a
set $Y\subset S^{d}\subset\mathbb{R}^{d+1}$ as follows:
$Y=\left\\{\frac{(x,1)}{\sqrt{|x|^{2}+1}},~{}x\in X\right\\}.$
Note that the volumes of $k$-dimensional simplices in $X$ now precisely
correspond to determinants of $k+1$ tuples of vectors $(x,1)$, $x\in X$. Since
for any $x\in[0,1]^{d}$ we have $\sqrt{|x|^{2}+1}\in[1,\sqrt{d+1}]$,
renormalization by these factors changes the determinants only by a constant
factor. Thus, if $X$ does not contain $k+1$ points with volume at most
$\Delta$ then $Y$ does not contain $k+1$ points with determinant at most
$c\Delta$ for some $c>0$ depending only on $d$ and $k$. This shows the first
inequality.
Similarly, suppose that $Y\subset S^{d}$ has size $n$ and any $k+1$ points of
$Y$ have determinant at least $\Delta^{\prime}$. Consider the central
projection of the sphere $S^{d}$ on the hyperplane
$W=\\{x~{}|~{}x_{d+1}=1\\}$. Then for some $c^{\prime}>0$ and a random
rotation of $Y$ we get that at least $c^{\prime}n$ points of $Y$ are projected
in the cube $[0,1]^{d}\times\\{1\\}$. Define $X$ to be the set of these points
and note that $X$ does not contain $k$-simplices of volume less than
$\Delta^{\prime}/C$.
This completes the proof of the second inequality. $\Box$
In light of Proposition 2.2, Theorem 1.1 follows from an analogous statement
about $\Delta^{\prime}$.
###### Lemma 2.3
For any $1\leqslant k\leqslant d$, $0\leqslant l<k$ and $n>k$ we have
$\Delta^{\prime}_{k,d}(n)\leqslant\Delta^{\prime}_{l,d}(n)\Delta^{\prime}_{k-l-1,d-l-1}(n-l-1).$
Proof: Let $X\subset S^{d}$ be an $n$-element set; we need to show that $X$
contains $k+1$ points with determinant at most
$\Delta^{\prime}_{l,d}(n)\Delta^{\prime}_{k-l-1,d-l-1}(n-l-1)$. A small
perturbation of the points changes the determinants by only a small amount, so
we may assume that the points of $X\subset S^{d}$ are in general position.
By the definition of $\Delta^{\prime}_{l,d}$ there are points
$x_{1},\ldots,x_{l+1}\in X$ such that
$\operatorname{Vol}_{l+1}(x_{1},\ldots,x_{l+1})\leqslant\Delta^{\prime}_{l,d}(n)$.
Let $V=\operatorname{span}(x_{1},\ldots,x_{l+1})$ and let $\pi$ be the
projection on the orthogonal complement of $V$. Consider the set
$X^{\prime}=\pi(X\setminus\\{x_{1},\ldots,x_{l+1}\\})$ and define
$Y=\\{x^{\prime}/|x^{\prime}|,~{}x^{\prime}\in X^{\prime}\\}$. By the general
position assumption, $Y$ is a well-defined $(n-l-1)$-element set of points
lying on the $(d-l-1)$-dimensional unit sphere $S^{d-l-1}\subset V^{\perp}$.
So, by the definition of $\Delta^{\prime}_{k-l-1,d-l-1}(n-l-1)$, there exists
a collection of points $y_{1},\ldots,y_{k-l}\in Y$ such that
$\operatorname{Vol}_{k-l}(y_{1},\ldots,y_{k-l})\leqslant\Delta^{\prime}_{k-l-1,d-l-1}(n-l-1)$.
Consider the preimages $z_{1},\ldots,z_{k-l}\in X$ of these points. By
Proposition 2.1 we have
$\operatorname{Vol}_{k+1}(x_{1},\ldots,x_{l+1},z_{1},\ldots,z_{k-l})=\operatorname{Vol}_{l+1}(x_{1},\ldots,x_{l+1})\operatorname{Vol}_{k-l}(\pi(z_{1}),\ldots,\pi(z_{k-l}))\leqslant$
$\leqslant\operatorname{Vol}_{l+1}(x_{1},\ldots,x_{l+1})\operatorname{Vol}_{k-l}(y_{1},\ldots,y_{k-l})\leqslant\Delta^{\prime}_{l,d}(n)\Delta^{\prime}_{k-l-1,d-l-1}(n-l-1),$
where in the first inequality we replaced vectors $\pi(z_{j})$ with longer
vectors $y_{j}=\pi(z_{j})/|\pi(z_{j})|$ which increases the determinant. This
completes the proof of the lemma, and by Proposition 2.2, of Theorem 1.1.
$\Box$
## 3 Proofs of corollaries
For $1\leqslant k\leqslant d$ let us denote by $\delta_{k,d}$ the largest
exponent such that $\Delta_{k,d}(n)\ll n^{-\delta_{k,d}}$ holds for large $n$.
We ignore the technical issues regarding the existence of this maximum, as
they do not affect the validity of our computations.
The bounds (2) and (3) imply that $\delta_{k,d}\geqslant\frac{k}{d}$ for all
$1\leqslant k\leqslant d$ and
$\delta_{k,d}\geqslant\frac{k}{d}+\frac{k-1}{2d(d-1)}$ for odd $k$. Theorem
1.1 now states that
$\delta_{k,d}\geqslant\delta_{l,d}+\delta_{k-l-1,d-l-1},$ (6)
for all $1\leqslant k\leqslant d$ and $0\leqslant l<k$, where we put
$\delta_{0,d}=0$ for convenience. Finally, the result of
Komlos–Pintz–Szemerédi implies that
$\delta_{2,2}\geqslant\frac{8}{7}-\varepsilon$ for any fixed $\varepsilon>0$.
We are now ready to perform the calculations for small $d$:
* •
$\delta_{3,3}\geqslant\delta_{1,3}+\delta_{1,1}\geqslant\frac{1}{3}+1=\frac{4}{3}$,
* •
$\delta_{4,4}\geqslant\delta_{2,4}+\delta_{1,1}\geqslant\frac{2}{4}+1=\frac{3}{2}$,
* •
$\delta_{5,5}\geqslant\delta_{3,5}+\delta_{1,1}\geqslant(\frac{3}{5}+\frac{1}{20})+1=\frac{33}{20}$,
* •
$\delta_{6,6}\geqslant\delta_{3,6}+\delta_{2,2}\geqslant(\frac{3}{6}+\frac{1}{30})+\frac{8}{7}-\varepsilon\geqslant\frac{176}{105}-\varepsilon>1.676$.
Note that all other ways to apply (6) lead to a slightly weaker exponent
$5/3$:
$\delta_{6,6}\geqslant\delta_{1,6}+\delta_{4,4}\geqslant\frac{1}{6}+\frac{3}{2}=\frac{5}{3}$,
$\delta_{6,6}\geqslant\delta_{2,6}+\delta_{3,3}\geqslant\frac{5}{3}$, and
$\delta_{6,6}\geqslant\delta_{4,6}+\delta_{1,1}\geqslant\frac{5}{3}$,
* •
$\delta_{7,7}\geqslant\delta_{1,7}+\delta_{5,5}\geqslant\frac{1}{7}+\frac{33}{20}>1.792$,
* •
$\delta_{8,8}\geqslant\delta_{2,8}+\delta_{5,5}\geqslant\frac{2}{8}+\frac{33}{20}=\frac{19}{10}$.
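The small-$d$ case analysis above can be checked mechanically. The following sketch (our own verification code, using exact rational arithmetic and treating the $\varepsilon$ in $\delta_{2,2}\geqslant\frac{8}{7}-\varepsilon$ as zero) runs the recursion (6) over all admissible $l$ starting from the base bounds (2) and (3):

```python
from fractions import Fraction as F

def base(k, d):
    """Base exponents: the trivial bound (2) and Lefmann's bound (3) for odd k."""
    b = F(k, d)
    if k % 2 == 1 and k >= 3:          # (3); for k = 1 the extra term vanishes
        b += F(k - 1, 2 * d * (d - 1))
    if (k, d) == (2, 2):               # Komlos-Pintz-Szemeredi, epsilon -> 0
        b = F(8, 7)
    return b

def deltas(d_max):
    """Best exponents delta_{k,d} obtainable from the base bounds and (6)."""
    delta = {}
    for d in range(1, d_max + 1):
        for k in range(1, d + 1):
            best = base(k, d)
            for l in range(k):          # recursion (6), with delta_{0,*} = 0
                left = delta[(l, d)] if l else F(0)
                right = delta[(k - l - 1, d - l - 1)] if k - l - 1 else F(0)
                best = max(best, left + right)
            delta[(k, d)] = best
    return delta
```

With these base bounds, `deltas(8)` reproduces the exponents of Corollary 1.1, e.g. $\delta_{6,6}\geqslant 176/105\approx 1.676$ and $\delta_{8,8}\geqslant 19/10$.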
This proves Corollary 1.1. Using induction, we are going to show that for all
$d\geqslant 3$ we have
$\delta_{d,d}\geqslant\log d-6+10d^{-\frac{1}{2}}.$ (7)
This can be verified directly for $d\in\\{3,\ldots,8\\}$. The relation (6)
applied with $l=0$ implies that the sequence $\delta_{d,d}$ is non-decreasing
and so, in particular, $\delta_{d,d}\geqslant 1.9$ holds for all $d\geqslant
8$. This implies (7) for all $3\leqslant d\leqslant 100$.
Now let $d>100$ be arbitrary and suppose that (7) is true for all $3\leqslant
d^{\prime}<d$. Let $1\leqslant t\leqslant d/2$ be an integer parameter, to be
optimized later, and apply (6) with $l=t-1$. Then by the induction
assumption we get
$\delta_{d,d}\geqslant\delta_{t-1,d}+\delta_{d-t,d-t}\geqslant\frac{t-1}{d}+\log(d-t)-6+10(d-t)^{-\frac{1}{2}}.$
For any $x\in[0,0.5]$ we have $\log(1-x)\geqslant-x-x^{2}$ and
$(1-x)^{-\frac{1}{2}}\geqslant 1+x/2$, so that we get
$\log(d-t)\geqslant\log d-\frac{t}{d}-\frac{t^{2}}{d^{2}},$
$(d-t)^{-\frac{1}{2}}=d^{-\frac{1}{2}}\left(1-\frac{t}{d}\right)^{-\frac{1}{2}}\geqslant
d^{-\frac{1}{2}}+\frac{1}{2}td^{-\frac{3}{2}}$
and so we conclude that
$\delta_{d,d}\geqslant\frac{t-1}{d}+\log
d-\frac{t}{d}-\frac{t^{2}}{d^{2}}-6+10d^{-\frac{1}{2}}+5td^{-\frac{3}{2}}=\log
d-6+10d^{-\frac{1}{2}}-\frac{1}{d}-\frac{t^{2}}{d^{2}}+5td^{-\frac{3}{2}}.$
Note that for $d>100$ we have $2\sqrt{d}<d/2$, so we may let $t$ be an
arbitrary integer in the interval $[\sqrt{d},2\sqrt{d}]$. Then we get
$-\frac{1}{d}-\frac{t^{2}}{d^{2}}+5td^{-\frac{3}{2}}\geqslant 0$ and so
$\delta_{d,d}\geqslant\log d-6+10d^{-\frac{1}{2}}$. This completes the
induction step and so (7) is true for all $d\geqslant 3$. Dismissing the
additional term $10d^{-\frac{1}{2}}$ we conclude that
$\delta_{d,d}\geqslant\log d-6$ for all $d\geqslant 3$.
## References
* [1] Brass, Peter. ”An upper bound for the d-dimensional analogue of Heilbronn’s triangle problem.” SIAM Journal on Discrete Mathematics 19.1 (2005): 192-195.
* [2] Komlós, János, János Pintz, and Endre Szemerédi. ”On Heilbronn’s triangle problem.” Journal of the London Mathematical Society 2.3 (1981): 385-396.
* [3] Komlós, János, János Pintz, and Endre Szemerédi. ”A lower bound for Heilbronn’s problem.” Journal of the London Mathematical Society 2.1 (1982): 13-24.
* [4] Lefmann, Hanno. ”Distributions of points in d dimensions and large k-point simplices.” Discrete & Computational Geometry 40.3 (2008): 401-413.
* [5] Lefmann, Hanno. ”Distributions of points in the unit square and large k-gons.” European Journal of Combinatorics 29.4 (2008): 946-965.
* [6] Roth, Klaus F. ”On a problem of Heilbronn.” Journal of the London Mathematical Society 1.3 (1951): 198-204.
* [7] Roth, K. F. ”On a problem of Heilbronn, II.” Proceedings of the London Mathematical Society 3.2 (1972): 193-212.
* [8] Roth, K. F. ”On a problem of Heilbronn, III.” Proceedings of the London Mathematical Society 3.3 (1972): 543-549.
* [9] Schmidt, Wolfgang M. ”On a problem of Heilbronn.” Journal of the London Mathematical Society 2.3 (1972): 545-550.
§ INTRODUCTION
For liver surgery, minimally invasive techniques such as laparoscopy have become as relevant as open surgery [1]. Among other benefits, laparoscopy has been shown to yield higher quality of life, shorter recovery times, less patient trauma, and reduced blood loss, with comparable long-term oncological outcomes [1]. Overcoming challenges ranging from a limited field of view to restricted manoeuvrability and a small workspace is the foundation of laparoscopy's success. Image-guided navigation platforms aim to ease the burden on the surgeon by bringing better visualisation techniques to the operating room [2, 3]. Image-to-patient and image-to-image registration techniques (hereafter image registration) are at the core of such platforms, providing clinically valuable visualisation tools. Image registration refers to the alignment of at least two images, matching the location of corresponding features across images in order to express them in a common space. In medicine, image registration is essential for fusing clinically relevant information across images, and it lays the groundwork for enabling image-guided navigation during laparoscopic interventions [4, 5]. Additionally, laparoscopic preoperative surgical planning benefits from abdominal computed tomography (CT) to magnetic resonance imaging (MRI) registration to better identify risk areas in a patient's anatomy [6].
During laparoscopic liver surgery, intraoperative imaging (e.g., video and ultrasound) is routinely used to assist the surgeon in navigating the liver and identifying the location of landmarks. In parenchyma-sparing liver resection (i.e., wedge resection) for colorectal liver metastases, a minimal safety margin around the lesions is defined to reduce the risk of recurrence while sparing healthy tissue [7]. When dealing with narrow margins and close proximity to critical structures, high accuracy in the registration method employed is paramount to ensure the best patient outcome. Patient-specific cross-modality registration between images of different natures (e.g., CT to MRI) is practised [8], though it is more complex than mono-modal registration.
The alignment of images can be evaluated through different metrics, based either on intensity information from the voxels, shape information from segmentation masks, or spatial information from landmark locations or relative distances. The most common intensity-based similarity metrics are the normalised cross-correlation (NCC), the structural similarity index measure (SSIM), and related variations [9, 10]. Among segmentation-based metrics, the most widely used are the Dice similarity coefficient (DSC) and the Hausdorff distance (HD) [11]. However, the target registration error (TRE) is the gold-standard metric for practitioners, providing a quantitative error measure based on the target lesion location across images [12].
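As an illustration, NCC and DSC can be computed from raw arrays along the following lines (a minimal numpy sketch, not the implementation used in this work):

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two intensity volumes."""
    a = (a - a.mean()) / a.std()  # assumes non-constant inputs
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def dsc(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    mask_a, mask_b = mask_a > 0, mask_b > 0
    overlap = np.logical_and(mask_a, mask_b).sum()
    return float(2 * overlap / (mask_a.sum() + mask_b.sum()))
```

NCC is 1 for identical (up to affine intensity changes) images, while DSC is 1 for perfectly overlapping masks and 0 for disjoint ones.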
Research on the use of convolutional neural networks (CNNs) for image registration has gained momentum in recent years, motivated by the improvements in hardware and software.
One early application of deep learning-based image registration (hereafter deep image registration) was performed by Wu et al. [13]. They proposed a network built with two convolutional layers, coupled with principal component analysis as a dimensionality reduction step, to align brain MR scans. Expanding upon the concept, Jaderberg et al. [14] introduced the spatial transformer network, including a sampling step for data interpolation, allowing for gradients to be backpropagated.
Publications on CNNs for image registration show a preference for encoder-decoder architectures like U-Net [15], followed by a spatial transformer network, as seen in Quicksilver [16], VoxelMorph [9], and other studies [17]. Mok et al. [18] proposed a Laplacian pyramid network for multi-resolution MRI registration, enforcing the non-rigid transformation to be diffeomorphic.
The development of weakly-supervised training strategies [19, 20] enabled model training by combining intensity information with other data types (e.g., segmentation masks). Intensity-based unsupervised training for non-rigid registration was explored for abdominal and lung CT [21, 22]. Building cross-modality image registration models through reinforcement learning has also been explored [23].
However, semi-supervised training of convolutional encoder-decoder architectures has been favoured for training registration models and producing the displacement map [24].
In our study, we focus on improving the training scheme of deep neural networks for image registration to better cater to use cases with limited data. We narrowed the scope to mono-modal registration and the investigation of transfer learning across image modalities and anatomies. Our main contributions are:
* an augmentation layer for on-the-fly data augmentation and ground truth generation of samples for non-rigid image registration, based on thin plate splines (TPS), removing the need to store augmented copies on disk,
* an uncertainty weighting loss layer to enable adaptive multi-task learning in a weakly-supervised learning approach,
* and the validation of a cross-anatomy and cross-modality transfer learning approach for image registration with scarce data.
§ MATERIALS AND METHODS
§.§ Dataset
In this study, two datasets were selected for conducting the experiments: the Information eXtraction from Images (IXI) dataset[<https://brain-development.org/ixi-dataset/>] and Laparoscopic Versus Open Resection for Colorectal Liver Metastases: The OSLO-COMET Randomized Controlled Trial dataset (OSLO-COMET) [1, 25].
The IXI dataset contains $578$ T1-weighted head MR scans from healthy subjects, collected at three different hospitals in London. Only T1-weighted MRIs were used in this study, but other MRI sequences, such as T2 and proton density, are also available. Using the advanced normalization tools (ANTs) [26], the T1 images were registered to the symmetric Montreal Neurological Institute ICBM2009a atlas, from which segmentation masks of $29$ different brain regions were subsequently obtained. Ultimately, left and right parcels were merged, resulting in a collection of $17$ labels (see the online resource Table S1). The data was then stratified into three cohorts: training (n=$407$), validation (n=$87$), and test (n=$88$) sets.
The OSLO-COMET trial dataset, compiled by the Intervention Centre, Oslo University Hospital (Norway), contains 60 contrast-enhanced CTs with manually delineated liver segmentation masks. An approximate segmentation of the vascular structures was obtained using the segmentation model available in the public livermask tool [27]. The data was then stratified into three cohorts: training (n=$41$), validation (n=$8$), and test (n=$11$) sets.
§.§ Preprocessing
Before the training phase, both CT and MR images, as well as the segmentation masks, were resampled to an isotropic resolution of $1$ mm$^3$ and resized to $128\times 128\times128$ voxels. Additionally, the CT images were cropped around the liver mask before the resampling step. The segmentation masks were stored as 8-bit unsigned integer single-channel images, to enable rapid batch generation during training.
To overcome the scarcity of image registration datasets for algorithm development, we propose an augmentation layer, implemented in TensorFlow [28], to generate artificial moving images during training.
The augmentation layer allows for data augmentation and preprocessing. The layer features gamma (0.5 to 2) and brightness ($\pm20\%$) augmentation, rotation, and rigid and non-rigid transformations to generate the moving images. Data preprocessing includes resizing and intensity normalisation to the range $[0, 1]$. The maximum rigid and non-rigid displacements can be constrained to mimic real-case scenarios; in our case, $30$ mm and $6$ mm, respectively. Rotation was limited to $10^{\circ}$ around any of the three coordinate axes.
The non-rigid deformation was achieved using TPS applied on an $8\times 8\times 8$ control grid, with a configurable maximum displacement. Rigid transformations include rotation and translation.
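The control-grid construction can be sketched with SciPy's RBF interpolator using its thin-plate-spline kernel (our own illustrative stand-in for the TensorFlow layer; function and parameter names are hypothetical, and small sizes are used for speed):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def random_tps_field(shape, grid=4, max_disp=6.0, rng=None):
    """Dense random displacement field: draw random displacements on a coarse
    control grid, then TPS-interpolate them to every voxel of `shape`."""
    rng = np.random.default_rng(rng)
    axes = [np.linspace(0, s - 1, grid) for s in shape]
    ctrl = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    ctrl_disp = rng.uniform(-max_disp, max_disp, ctrl.shape)
    query = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                 indexing="ij"), axis=-1).reshape(-1, 3)
    field = RBFInterpolator(ctrl, ctrl_disp,
                            kernel="thin_plate_spline")(query)
    # keep the dense field within the configured maximum displacement
    return np.clip(field.reshape(*shape, 3), -max_disp, max_disp)
```

In the paper's setting the grid would be $8\times 8\times 8$ with a 6 mm cap on the non-rigid displacement.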
§.§ Model architecture
The baseline architecture consists of a modified VoxelMorph model [9], based on a U-Net [29] variant. The model was used to predict the displacement map, as depicted in <ref>. After the augmentation step (see <ref>), the fixed ($I_f$) and the generated moving ($I_m$) images were concatenated into a two-channel volumetric image and fed to the VoxelMorph model. The model returns the displacement map ($\Phi$), i.e., a volumetric image with three channels, which describes the relative displacement of each voxel along each of the three coordinate axes. Finally, the predicted fixed image ($I_p$) is reconstructed by interpolating voxels of the moving image at the locations defined by the displacement map. This way, the model can be trained by comparing the predicted image with the original fixed image.
Figure: Proposed pipeline generating artificial moving images on-the-fly, predicting the displacement map using a modified U-Net, and finding the optimal loss weighting automatically using uncertainty weighting.
When provided, the segmentations ($S_m$) are likewise updated using the same displacement map. The symmetric U-Net architecture was designed with six max pooling blocks, each featuring 32, 64, 128, 256, 512, and 1024 convolution filters, respectively. Each encoder block consisted of a convolution with kernel size $3\times 3\times 3$ and a LeakyReLU activation function, followed by max pooling with stride 2. The decoder blocks consisted of a convolution and a LeakyReLU activation function, followed by a nearest neighbour interpolation upsampling layer. The output convolutional layer, named Head in <ref>, was set to two consecutive convolutions of 16 filters with LeakyReLU activation function. A convolution layer with three filters was used as the output layer. This produces a displacement map with the same size as the input images and three channels, one for each displacement dimension.
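The final resampling step — reconstructing $I_p$ by interpolating $I_m$ at the displaced voxel locations — can be sketched with scipy (our illustrative stand-in for the TensorFlow resampler; names are ours):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, disp):
    """Trilinearly resample `moving` at (voxel index + displacement).

    moving: (X, Y, Z) image; disp: (X, Y, Z, 3) displacement map in voxels."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in moving.shape],
                                indexing="ij"), axis=-1)
    coords = np.moveaxis(grid + disp, -1, 0)  # -> (3, X, Y, Z) sample points
    return map_coordinates(moving, coords, order=1, mode="nearest")
```

A zero displacement map returns the moving image unchanged, and segmentation masks can be warped with the same call (using nearest-neighbour interpolation, `order=0`, to keep labels integral).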
§.§ Model training
The registration model was trained in a weakly-supervised manner, as proposed by Hu et al. [19]. Instead of evaluating the displacement map directly, as in traditional supervised training, only the final registration results were assessed during training.
Due to the complexity of the task at hand, a single loss function would provide limited insight into the registration result; therefore, a combination of well-known loss functions was deemed necessary. Balancing the contributions of these operators can be challenging, time-consuming, and prone to errors. We therefore used uncertainty weighting (UW) [30], which combines losses as a weighted sum and enables the loss weights to be tuned dynamically during backpropagation. Our loss function $\mathcal{L}$ was implemented as a custom layer, and consists of a weighted sum of loss functions $L$ and regularisers $\mathcal{R}$:
\begin{equation}
\mathcal{L}\left(\textbf{y}_t, \textbf{y}_p\right) = \sum_{i=1}^{N}\omega_{i}L_{i}\left(\textbf{y}_t, \textbf{y}_p\right) + \sum_{j=1}^{M}\lambda_{j}\mathcal{R}_{j}
\label{eq:multi_task_loss}
\end{equation}
such that $\sum \omega_i = \sum \lambda_i = 1$. By default, the weights $\omega_i$ and $\lambda_i$ were initialised to contribute equally to the weighted sum, but they can be set manually from a priori knowledge of initial loss and regularisation values. In our experiments, the default initialisation was used for the loss weights, while the regularisation weight was initialised to $5\times10^{-3}$.
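One simple way to keep the weights positive and summing to one, as Eq. (1) requires, is to parameterise them as a softmax over learnable logits; the forward pass is sketched below in numpy (our simplification — the paper's custom layer is a TensorFlow implementation, and this particular parameterisation is an assumption on our part):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax; output is positive and sums to one."""
    e = np.exp(z - z.max())
    return e / e.sum()

def multi_task_loss(losses, regs, loss_logits, reg_logits):
    """Weighted sum of Eq. (1): the weights omega_i and lambda_j are softmax
    outputs of learnable logits, so they stay on the simplex while remaining
    free to drift during backpropagation."""
    w = softmax(np.asarray(loss_logits, float))
    lam = softmax(np.asarray(reg_logits, float))
    return float(w @ np.asarray(losses, float) + lam @ np.asarray(regs, float))
```

With equal logits this reduces to the default equal-contribution initialisation described above.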
For training the neural networks we used the Adam optimiser. Gradient accumulation was performed to overcome memory constraints and enable larger effective batch sizes. The batch size was set to one, but artificially increased by accumulating eight mini-batch gradients. The learning rate was set to $10^{-3}$, with a scheduler decreasing it by a factor of 10 whenever the validation loss plateaued, with a patience of 10. The models were trained using a custom training pipeline. Training curves can be found in the online resources Fig. S2 to S5. The training was limited to $10^5$ epochs, and manually stopped if the model stopped converging. The model with the lowest validation loss was saved.
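The gradient-accumulation trick can be illustrated on a least-squares toy problem with analytic gradients (a hedged numpy sketch; names are ours, not the framework's):

```python
import numpy as np

def accumulated_step(w, batches, lr=1e-3):
    """One weight update built from several mini-batches: average the
    mini-batch gradients of the loss 0.5*||X w - y||^2 before stepping,
    emulating a larger batch without holding it in memory at once."""
    grad = np.zeros_like(w)
    for X, y in batches:
        grad += X.T @ (X @ w - y) / len(y)  # mean gradient of this mini-batch
    grad /= len(batches)                    # average over accumulated batches
    return w - lr * grad
```

With equally sized mini-batches, the accumulated update coincides with the update from the corresponding full batch.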
§.§ Experiments
The experiments were conducted on an Ubuntu 18.04 Linux desktop computer with an Intel® Xeon® Silver 4110 CPU with 16 cores, 64 GB of RAM, a dedicated NVIDIA Quadro P5000 GPU (16 GB VRAM), and an SSD. Our framework, DDMR, used to conduct the experiments, was implemented in Python 3.6 using TensorFlow v1.14. To accelerate research within the field, the source code is openly available on GitHub[<https://github.com/jpdefrutos/DDMR>].
As stated above, our aim was to improve the training phase of CNN-based image registration models. To that end, four experiments were carried out:
(i) Ablation study
Different training strategies and loss function combinations were evaluated to identify the key components in deep image registration.
Three different training strategies were considered, all using weakly-supervised learning: 1) the baseline (BL), using only intensity information; 2) the baseline with added segmentation guidance (SG); and 3) the segmentation-guided approach with added uncertainty weighting (UW). For all experiments, the input size and CNN backbone were tuned beforehand and kept fixed. All designs are described in <ref> and evaluated on both the IXI and OSLO-COMET datasets. Six loss weighting schemes were tested, using different combinations of intensity- and segmentation-based loss functions. For the second experiment (transfer learning), the entire model was finetuned either directly or in two steps, i.e., by first finetuning the decoder while keeping the encoder frozen, and then finetuning the full model. A learning rate of $10^{-4}$ was used when performing transfer learning.
Configurations trained on both the IXI and OSLO-COMET datasets.
Design Model Loss function
BL-N Baseline NCC
BL-NS Baseline NCC, SSIM
SG-ND Segmentation-guided NCC, DSC
SG-NSD Segmentation-guided NCC, SSIM, DSC
UW-NSD Uncertainty weighting NCC, SSIM, DSC
UW-NSDH Uncertainty weighting NCC, SSIM, DSC, HD
BL: baseline, SG: segmentation-guided, UW: uncertainty weighting, N: normalized cross correlation, S: structural similarity index measure, D: Dice similarity coefficient, H: Hausdorff distance.
(ii) Transfer learning
To assess the benefit of finetuning deep image registration models for applications with a small number of available samples, e.g., abdominal CT registration.
(iii) Baseline comparison
The trained models were evaluated against a traditional registration framework (ANTs) to better understand the potential of deep image registration.
This experiment was performed only on the OSLO-COMET dataset, as ANTs was used to generate the segmentations of the IXI dataset. Two different configurations were tested: symmetric normalisation (SyN) with mutual information as the optimisation metric, and SyN with cross-correlation as the metric (SyNCC).
(iv) Training runtime
The last experiment was conducted to assess the impact of the augmentation layer (see the online resource Fig. S1). The GPU resources were monitored during training. Only the second epoch was considered, as the first served as a warm-up.
§.§ Evaluation metrics
The evaluation was done on the test sets of the IXI and OSLO-COMET datasets, for which the fixed-moving image pairs were generated in advance, such that the same image pairs were used across methods during evaluation. After inference, the displacement maps were resampled back to isotropic resolution. The final predictions were then evaluated using four sets of metrics covering complementary aspects of registration quality. Image similarity was assessed by computing the NCC and SSIM metrics. Segmentations were evaluated using DSC, HD, and HD95 (the 95th percentile of HD). Registration accuracy was assessed by estimating the TRE from the centroids of the segmentation masks of the fixed and predicted images. Lastly, the methods were compared in terms of inference runtime, measuring only the prediction and application of the displacement map, as all other operations were identical between the methods.
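For concreteness, two of the segmentation-derived metrics, DSC and the centroid-based TRE described above, can be sketched as follows on toy binary masks; the exact implementations used in the study may differ.

```python
import numpy as np
from scipy.ndimage import center_of_mass

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def tre(fixed_mask, moved_mask, spacing=(1.0, 1.0, 1.0)):
    """Target registration error as the distance between mask centroids."""
    cf = np.array(center_of_mass(fixed_mask))
    cm = np.array(center_of_mass(moved_mask))
    return float(np.linalg.norm((cf - cm) * np.asarray(spacing)))

# Toy example: two 8x8x8 cubes shifted by 2 voxels along the first axis.
a = np.zeros((16, 16, 16), dtype=bool); a[4:12, 4:12, 4:12] = True
b = np.zeros_like(a); b[6:14, 4:12, 4:12] = True
```

For this pair the overlap gives a DSC of 0.75 and a centroid distance (TRE) of exactly 2 voxels.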
§ RESULTS
In <ref>, the best performing methods in terms of the individual performance metrics, i.e., best mean and lowest standard deviation, are highlighted in bold.
See the online resources for additional tables and figures not presented in this manuscript.
On the IXI dataset, fusing NCC and SSIM improved performance in terms of intensity-based metrics for the baseline model, whereas segmentation metrics were degraded (see <ref>). Adding segmentation-guiding drastically increased performance across all metrics compared to the baseline. Minor improvement was observed using uncertainty weighting, whereas adding the Hausdorff loss was not beneficial.
On the OSLO-COMET dataset, a similar trend as for the IXI dataset was observed (see <ref>). However, in this case, the baseline model was more competitive, especially in terms of intensity-based metrics. Nonetheless, segmentation-guiding was still better overall, but no benefit of uncertainty weighting was observed.
Finetuning the entire model trained on the IXI dataset to the OSLO-COMET dataset (see <ref>) yielded similar intensity-based metrics overall, but drastically improved the uncertainty weighted models in terms of segmentation metrics. The best performing models overall used uncertainty weighting. When finetuning the model in two steps, the uncertainty weighted designs were improved somewhat further (see <ref>).
The traditional methods, SyN and SyNCC, performed the poorest on the OSLO-COMET dataset. Both methods performed similarly, but SyNCC was considerably slower. All deep learning models had similar inference runtimes of less than one second, which was expected as the final inference architectures were identical. On average, the CNN-based methods were $\sim 12\times$ and $\sim 324\times$ faster than SyN and SyNCC, respectively.
The deep learning models struggled with image reconstruction, unlike ANTs (see Fig S5). For instance, anatomical structures outside the segmentation masks were poorly reconstructed in the predicted image, e.g., the spine of the patient.
The use of the augmentation layer resulted in a negligible increase in training runtime of $7.7$% per epoch and a $0.47$% ($\sim 74$ MB of $16$ GB) increase in GPU memory usage (see the online resource Fig. S1).
Evaluation of the models trained on the IXI dataset.
Model SSIM NCC DSC HD HD95 TRE Runtime
BL-N 0.20±0.15 0.51±0.11 0.08±0.01 55.80±12.47 81.43 29.93±8.51 0.77±0.76
BL-NS 0.25±0.16 0.53±0.12 0.07±0.01 138.60±21.07 165.40 29.84±8.86 0.78±0.62
SG-ND 0.45±0.24 0.46±0.10 0.64±0.06 5.90±1.20 8.43 1.75±0.46 0.79±1.15
SG-NSD 0.46±0.24 0.46±0.11 0.63±0.05 5.86±1.16 8.13 1.74±0.45 0.79±1.19
UW-NSD 0.47±0.24 0.46±0.11 0.64±0.05 5.74±1.14 8.02 1.66±0.42 0.80±1.14
UW-NSDH 0.47±0.24 0.46±0.11 0.63±0.05 5.91±1.25 8.26 1.73±0.43 0.78±1.13
The best performing methods for each metric are highlighted in bold.
Evaluation of the models trained on the OSLO-COMET dataset.
Model SSIM NCC DSC HD HD95 TRE Runtime
BL-N 0.52±0.10 0.20±0.07 0.46±0.06 45.59±5.48 53.74 16.26±5.97 0.78±1.49
BL-NS 0.62±0.13 0.17±0.07 0.50±0.05 33.98±6.28 44.58 13.31±3.45 0.78±1.48
SG-ND 0.55±0.15 0.16±0.06 0.57±0.10 20.66±7.62 31.87 9.00±3.17 0.79±1.49
SG-NSD 0.58±0.13 0.12±0.07 0.56±0.05 24.64±6.99 35.07 9.23±2.70 0.78±1.47
UW-NSD 0.54±0.13 0.11±0.06 0.49±0.05 24.41±6.04 34.27 9.46±3.32 0.79±1.49
UW-NSDH 0.59±0.14 0.14±0.06 0.55±0.08 23.27±8.00 31.84 9.39±3.33 0.78±1.49
The best performing methods for each metric are highlighted in bold.
§ DISCUSSION
Development of CNNs for image registration is challenging, especially when data are scarce. We therefore developed a framework called DDMR for training deep registration models, which we evaluated through an ablation study. By pretraining a model on a larger dataset, we found that performance can be greatly improved through transfer learning, even if the source domain is of a different image modality or anatomical origin. Through the development of novel augmentation and loss weighting layers, training was simplified by generating artificial moving images on-the-fly, removing the need to store augmented samples on disk, while simultaneously learning to weigh the losses dynamically. Furthermore, guiding registration with automatically generated segmentations and adaptive loss weighting enhanced registration performance. In addition, only a negligible increase in runtime and GPU memory usage was observed. The added value of our method lies in its use of generic concepts, which can therefore be applied to most deep learning-based registration designs.
From <ref>, segmentation guidance boosts image registration performance for both the SG and UW models, further confirmed by the online resources Fig. S14 to S16. Introducing landmarks to guide the training, in the form of the boundaries of the different segmentation masks, allows for a better understanding of the regions occupied by each anatomical structure. This observation is supported by the improvement of the segmentation-based metrics for the finetuned models (see <ref>).
Evaluation of models trained on the OSLO-COMET dataset from finetuning the entire architecture.
Model SSIM NCC DSC HD HD95 TRE Runtime
BL-N 0.52±0.08 0.17±0.07 0.46±0.05 48.42±4.23 53.22 21.70±5.98 0.79±1.48
BL-NS 0.61±0.09 0.16±0.07 0.31±0.04 67.01±4.10 73.29 34.04±8.20 0.80±1.49
SG-ND 0.56±0.13 0.14±0.07 0.61±0.06 15.47±4.82 22.51 7.66±2.76 0.77±1.47
SG-NSD 0.58±0.13 0.14±0.07 0.61±0.07 15.83±5.51 22.90 7.72±2.79 0.79±1.49
UW-NSD 0.58±0.12 0.14±0.06 0.64±0.08 14.92±4.78 21.61 6.27±1.97 0.78±1.47
UW-NSDH 0.59±0.12 0.14±0.06 0.64±0.07 14.63±4.58 21.46 6.36±2.00 0.78±1.46
The best performing methods for each metric are highlighted in bold.
Evaluation of the models trained on the OSLO-COMET dataset from finetuning in two steps.
Model SSIM NCC DSC HD HD95 TRE Runtime
BL-N 0.52±0.07 0.19±0.07 0.47±0.04 50.38±19.53 80.63 19.71±7.13 0.78±1.51
BL-NS 0.62±0.10 0.17±0.07 0.30±0.03 68.73±3.93 74.42 35.30±8.33 0.78±1.51
SG-ND 0.56±0.12 0.14±0.07 0.62±0.06 15.81±4.66 22.80 7.10±2.13 0.78±1.49
SG-NSD 0.58±0.12 0.15±0.07 0.61±0.06 16.27±5.52 23.66 7.31±2.08 0.78±1.47
UW-NSD 0.60±0.11 0.15±0.06 0.68±0.09 14.28±4.33 20.13 6.12±1.64 0.78±1.49
UW-NSDH 0.60±0.12 0.15±0.06 0.66±0.08 14.17±4.64 20.45 6.25±2.17 0.78±1.49
SyN 0.61±0.13 0.20±0.07 0.29±0.02 66.32±4.19 71.45 22.25±5.12 9.23±1.01
SyNCC 0.63±0.13 0.20±0.07 0.29±0.01 66.05±4.37 71.34 22.23±5.20 259.11±25.75
The best performing methods for each metric are highlighted in bold.
Although a marginal increase in performance can be observed using uncertainty weighting, the improvement was not significant, and a larger dataset would be required for further statistical assessment. Surprisingly, adding HD to the set of losses did not improve performance either. We believe this is because HD is sensitive to outliers and minor annotation errors, which are likely given that the annotations used in this study were automatically generated. Furthermore, NCC proved to be a well-suited intensity-based loss function, with no real benefit from adding an additional intensity-based loss function such as SSIM.
From studying the evolution of the loss weights during training (see online resources Fig. S7 to S13), the influence and benefit of each loss component on the network can be interpreted. Evidently, SSIM was favoured over NCC during training, even though SSIM was deemed less useful for image registration than NCC. A possible rationale is that SSIM, being a perception-based loss, is easier to optimise. Interestingly, the loss weight curves all seemed to follow the same pattern: upweighted losses increased linearly until reaching a plateau, while downweighted losses showed the opposite behaviour. This may indicate that uncertainty weighting lacks the capacity for task prioritisation, which could have been helpful at a later stage of training. Such an approach has been proposed in the literature [31], although not for similar image registration tasks. Hence, a comparison with other multi-task learning designs might be worth investigating in future work.
A sizeable downside of training CNNs for image registration remains the long training runtime. Access to pretrained models for transfer learning alleviates this issue, but the substantial amount of training data required, in our use case annotated data, remains another major drawback.
Once deployed, such registration models often fail to generalise to other anatomies, imaging modalities, and data shifts in general, resulting in ad hoc solutions. As part of future work investigations, developing more generic deep image registration models would be of interest, tackling both training and deployment shortcomings.
In this study, only synthetic moving images and mostly algorithm-based annotations were used for evaluation. To verify the clinical relevance of the proposed models, a dataset with manual delineations of structures both for the fixed and moving images, and with clinically relevant movements, is required. However, obtaining such a dataset is extremely challenging and was thus deemed outside the scope of this study. Nevertheless, such investigation is of definite value and should be part of future works, additionally including human qualitative evaluation of the clinical relevance.
The sole focus on mono-modal registration can be considered a limitation of our work, especially regarding the selection of loss functions. For instance, in multi-modal registration it is common to use mutual information. Hence, investigating the translation between mono- and multi-modal designs would be valuable to assess applicability across various registration tasks. The recent introduction of the new Learn2Reg challenge dataset [24] represents a suitable avenue for further investigation of this aspect. While the U-Net architecture used in this study is not recent, a substantial number of publications have favoured it for image registration, as it has been shown to outperform vision transformers on smaller datasets [32]. Alternatively, generative adversarial models should be tested, as these networks have been shown to produce more realistic-looking images [33]. Self-attention [34] for encoding anatomical information, or graph-based neural networks [35] for improved vascular segmentation-guided registration, are concepts that should also be considered in future work.
§ CONCLUSION
In the presented study, we demonstrated that registration models can be improved through transfer learning and adaptive loss weighting, even with minimal data and without manual annotations. The proposed framework, DDMR, also enables on-the-fly generation of artificial moving images, without the need to store copies on disk. In future work, DDMR should be validated on data of other anatomies and imaging modalities to further assess its benefit.
Supplementary information
This article has accompanying online resources including additional figures and tables.
§ DECLARATIONS
§.§ Funding
This study was supported by the H2020-MSCA-ITN Project No. 722068 HiPerNav; Norwegian National Advisory Unit for Ultrasound and Image-Guided Therapy (St. Olavs hospital, NTNU, SINTEF); SINTEF; St. Olavs hospital; and the Norwegian University of Science and Technology (NTNU).
§.§ Competing interests
The authors declare that they have no conflict of interest.
§.§ Ethics approval
The trial protocol for the OSLO-COMET dataset was approved by the Regional Ethical Committee of South Eastern Norway (REK Sør-Øst B 2011/1285) and the Data Protection Officer of Oslo University Hospital (Clinicaltrials.org identifier NCT01516710).
§.§ Informed consent
The IXI dataset is made available under the Creative Commons CC BY-SA 3.0 license. Informed consent was obtained from all participants included in the OSLO-COMET study [1, 25].
§.§ Data availability
The IXI dataset used in this study is publicly available (<https://brain-development.org/ixi-dataset/>), whereas the OSLO-COMET dataset can be made available per reasonable request.
§.§ Code availability
The source code used in this study is openly available at <https://github.com/jpdefrutos/DDMR>.
§.§ Authors' contributions
Conceptualization - JP; Methodology - JP, AP; Formal analysis and investigation - JP, AP; Software - JP, AP, SS; Writing - original draft preparation - JP, AP, DB, EP; Writing - review and editing - TL, FL, SS, OE; Funding acquisition - TL, OE; Resources - TL; Supervision - TL, FL, OE; All authors read and approved the final manuscript.
[1] Fretland, Å.A., Dagenborg, V.J., Bjørnelv, G.M.W., Kazaryan, A.M., Kristiansen, R., Fagerland, M.W., Hausken, J., Tønnessen, T.I., Abildgaard, A., Barkhatov, L., Yaqub, S., Røsok, B.I., Bjørnbeth, B.A., Andersen, M.H., Flatmark, K., Aas, E., Edwin, B.: Laparoscopic versus open resection for colorectal liver metastases. Annals of Surgery
[2] Alam, F., Rahman, S.U., Ullah, S., Gulati, K.: Medical image registration in image guided surgery: Issues, challenges and research opportunities
[3] Cash, D.M., Miga, M.I., Glasgow, S.C., Dawant, B.M., Clements, L.W., Cao, Z., Galloway, R.L., Chapman, W.C.: Concepts and preliminary data toward the realization of image-guided liver surgery. Journal of Gastrointestinal Surgery
[4] Pelanis, E., Teatini, A., Eigl, B., Regensburger, A., Alzaga, A., Kumar, R.P., Rudolph, T., Aghayan, D.L., Riediger, C., Kvarnström, N., Elle, O.J., Edwin, B.: Evaluation of a novel navigation platform for laparoscopic liver surgery with organ deformation compensation using injected fiducials. Medical Image Analysis
[5] Prevost, G.A., Eigl, B., Paolucci, I., Rudolph, T., Peterhans, M., Weber, S., Beldi, G., Candinas, D., Lachenmayer, A.: Efficiency, accuracy and clinical applicability of a new image-guided surgery system in 3D laparoscopic liver surgery. Journal of Gastrointestinal Surgery
[6] Baisa, N.L., Bricq, S., Lalande, A.: MRI-PET Registration with Automated Algorithm in Pre-clinical Studies.
[7] Martínez-Cecilia, D., Wicherts, D.A., Cipriani, F., Berardi, G., Barkhatov, L., Lainas, P., D’Hondt, M., Rotellar, F., Dagher, I., Aldrighetti, L., Troisi, R.I., Edwin, B., Abu Hilal, M.: Impact of resection margins for colorectal liver metastases in laparoscopic and open liver resection: a propensity score analysis. Surgical Endoscopy
[8] Fusaglia, M., Tinguely, P., Banz, V., Weber, S., Lu, H.: A novel ultrasound-based registration for image-guided laparoscopic liver ablation. Surgical Innovation
[9] Balakrishnan, G., Zhao, A., Sabuncu, M.R., Guttag, J., Dalca, A.V.: VoxelMorph: A Learning Framework for Deformable Medical Image Registration. IEEE Transactions on Medical Imaging
[10] Dosovitskiy, A., Brox, T.: Generating images with perceptual similarity metrics based on deep networks. In: Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 29
[11] Survarachakan, S., Prasad, P.J.R., Naseem, R., Pérez de Frutos, J., Kumar, R.P., Langø, T., Alaya Cheikh, F., Elle, O.J., Lindseth, F.: Deep learning for image-based liver analysis — a comprehensive review focusing on malignant lesions. Artificial Intelligence in Medicine
[12] Maurer, C.R., Fitzpatrick, J.M., Wang, M.Y., Galloway, R.L., Maciunas, R.J., Allen, G.S.: Registration of head volume images using implantable fiducial markers. IEEE Transactions on Medical Imaging
[13] Wu, G., Kim, M., Wang, Q., Gao, Y., Liao, S., Shen, D.: Unsupervised Deep Feature Learning for Deformable Registration of MR Brain Images. In: Medical Image Computing and Computer Assisted Intervention, vol. 16, pp. 649–656
[14] Jaderberg, M., Simonyan, K., Zisserman, A., Kavukcuoglu, K.: Spatial transformer networks. In: Advances in Neural Information Processing Systems, vol. 28
[15] Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) Medical Image Computing and Computer-Assisted Intervention – MICCAI, pp. 234–241
[16] Yang, X., Kwitt, R., Styner, M., Niethammer, M.: Quicksilver: Fast predictive image registration – A deep learning approach.
[17] Rohé, M.-M., Datar, M., Heimann, T., Sermesant, M., Pennec, X.: SVF-Net: Learning Deformable Image Registration Using Shape Matching. vol. 2878, pp. 266–274
[18] Mok, T.C.W., Chung, A.C.S.: Large Deformation Diffeomorphic Image Registration with Laplacian Pyramid Networks. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 12263, pp. 211–221
[19] Hu, Y., Modat, M., Gibson, E., Li, W., Ghavami, N., Bonmati, E., Wang, G., Bandula, S., Moore, C.M., Emberton, M., Ourselin, S., Noble, J.A., Barratt, D.C., Vercauteren, T.: Weakly-supervised convolutional neural networks for multimodal image registration. Medical Image Analysis
[20] Li, H., Fan, Y.: Non-rigid image registration using self-supervised fully convolutional networks without training data. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI), pp. 1075–1078
[21] Fu, Y., Lei, Y., Wang, T., Higgins, K., Bradley, J.D., Curran, W.J., Liu, T., Yang, X.: LungRegNet: An unsupervised deformable image registration method for 4D-CT lung. Medical Physics
[22] Lei, Y., Fu, Y., Harms, J., Wang, T., Curran, W.J., Liu, T., Higgins, K., Yang, X.: 4D-CT Deformable Image Registration Using an Unsupervised Deep Convolutional Neural Network. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
[23] Hu, J., Luo, Z., Wang, X., Sun, S., Yin, Y., Cao, K., Song, Q., Lyu, S., Wu, X.: End-to-end multimodal image registration via reinforcement learning. Medical Image Analysis
[24] Hering, A., Hansen, L., Mok, T.C.W., Chung, A.C.S., Siebert, H., Häger, S., Lange, A., Kuckertz, S., Heldmann, S., Shao, W., Vesal, S., Rusu, M., Sonn, G., Estienne, T., Vakalopoulou, M., Han, L., Huang, Y., Yap, P.-T., Brudfors, M., Balbastre, Y., Joutard, S., Modat, M., Lifshitz, G., Raviv, D., Lv, J., Li, Q., Jaouen, V., Visvikis, D., Fourcade, C., Rubeaux, M., Pan, W., Xu, Z., Jian, B., De Benetti, F., Wodzinski, M., Gunnarsson, N., Sjölund, J., Grzech, D., Qiu, H., Li, Z., Thorley, A., Duan, J., Großbröhmer, C., Hoopes, A., Reinertsen, I., Xiao, Y., Landman, B., Huo, Y., Murphy, K., Lessmann, N., Van Ginneken, B., Dalca, A.V., Heinrich, M.P.: Learn2Reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning. IEEE Transactions on Medical Imaging
[25] Fretland, Å.A., Kazaryan, A.M., Bjørnbeth, B.A., Flatmark, K., Andersen, M.H., Tønnessen, T.I., Bjørnelv, G.M.W., Fagerland, M.W., Kristiansen, R., Øyri, K., Edwin, B.: Open versus laparoscopic liver resection for colorectal liver metastases (the Oslo-CoMet study): Study protocol for a randomized controlled trial
[26] Avants, B.B., Tustison, N.J., Song, G., Cook, P.A., Klein, A., Gee, J.C.: A reproducible evaluation of ANTs similarity metric performance in brain image registration.
[27] Pedersen, A.: andreped/livermask: v1.3.1.
[28] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., Zheng, X.: TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
[29] Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T., Ronneberger, O.: 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation, pp. 424–432
[30] Cipolla, R., Gal, Y., Kendall, A.: Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7482–7491
[31] Guo, M., Haque, A., Huang, D.-A., Yeung, S., Fei-Fei, L.: Dynamic task prioritization for multitask learning. In: Proceedings of the European Conference on Computer Vision (ECCV)
[32] Jia, X., Bartlett, J., Zhang, T., Lu, W., Qiu, Z., Duan, J.: U-Net vs Transformer: Is U-Net Outdated in Medical Image Registration?
[33] Bhadra, S., Zhou, W., Anastasio, M.A.: Medical image reconstruction with image-adaptive priors learned by use of generative adversarial networks.
[34] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Advances in Neural Information Processing Systems, vol. 30
[35] Montaña-Brown, N., Ramalhinho, J., Allam, M., Davidson, B., Hu, Y., Clarkson, M.J.: Vessel segmentation for automatic registration of untracked laparoscopic ultrasound to CT of the liver. International Journal of Computer Assisted Radiology and Surgery
# Spin superfluidity
E. B. Sonin Racah Institute of Physics, Hebrew University of Jerusalem, Givat
Ram, Jerusalem 9190401, Israel
###### Abstract
The phenomenon of superfluidity (superconductivity) is the possibility of
transporting mass (charge) over macroscopic distances without essential
dissipation. In magnetically ordered media with easy-plane topology of the
order parameter space, superfluid transport of spin is also possible,
despite the absence of a strict conservation law for spin. The article
addresses three key issues in the theory of spin superfluidity: the topology of
the order parameter space, the Landau criterion for superfluidity, and the decay of
superfluid currents via phase slip events, in which magnetic vortices cross
current streamlines. Experiments on the detection of spin superfluidity are also
surveyed.
coherent spin precession, Landau criterion, magnetic vortex, order parameter,
phase slip, skyrmion, spin supercurrent, spin superfluidity, spin wave,
topology
Key Points/Objective
* •
Spin superfluidity is the possibility of transporting spin over long
distances with essentially suppressed dissipation.
* •
Spin superfluidity is possible if the magnetic order parameter space has the
topology of a circumference.
* •
The necessary topology is provided by easy-plane anisotropy in ferromagnets or
by magnetic field in antiferromagnets.
* •
Metastability of spin superfluid current states is restricted by the Landau
criterion.
* •
Decay of spin superfluid currents is realized via phase slips, in which
magnetic vortices cross current streamlines.
###### Contents
1. 1 Introduction
2. 2 Concept of superfluidity
3. 3 Spin superfluidity in ferromagnets
4. 4 Spin superfluidity in antiferromagnets
5. 5 Superfluid spin transport without spin conservation law
6. 6 Long-distance superfluid spin transport
7. 7 Experiments on detection of spin superfluidity
8. 8 Discussion and conclusions
## 1 Introduction
The phenomenon of superfluidity (superconductivity in the case of charged
fluids) has been known for more than a hundred years. Its analog for spin (spin
superfluidity) has occupied the minds of condensed matter physicists since the
1970s. The term superfluidity is used in the literature to cover a
broad range of phenomena in superfluid 4He and 3He, Bose-Einstein condensates
of cold atoms, and solids. In this article superfluidity means only the
possibility to transport a physical quantity (mass, charge, spin, …) without
dissipation (more accurately, with essentially suppressed dissipation). This
corresponds to the original, hundred-year-old meaning of the term from the
times of Kamerlingh Onnes and Kapitza.
Superfluidity is conditioned by a special topology of the order
parameter space (vacuum manifold), namely, the topology of a
circumference in a plane. The angle of rotation around this circumference is
the order parameter phase describing all degenerate ground states. In
superfluids this is the phase of the macroscopic wave function. In
magnetically ordered systems (ferro- and antiferromagnets) the necessary
topology is provided by easy-plane magnetic anisotropy, and the phase is the
angle of rotation around the axis (henceforth the axis $z$) normal to the easy
plane. Currents of mass (charge) or spin are proportional to phase gradients
and are called supercurrents.
In early discussions the spin supercurrent was considered as a
counterflow of superfluid currents of particles with different spins in
superfluid 3He [44], i.e., spin was transported by itinerant spin carriers.
Later it was demonstrated that spin superfluidity is a universal phenomenon,
which does not require mobile spin carriers and is possible in magnetic
insulators [27, 29]. It can be described within the framework of the standard
Landau–Lifshitz–Gilbert (LLG) theory. Nonetheless, publications conditioning
spin superfluid transport on the presence of mobile spin carriers have continued
to appear in the literature [3, 25]. According to Shi _et al._ [25], it is a
critical flaw of a spin-current definition if it predicts spin currents in
insulators.
Strictly speaking, the analogy of spin superfluidity with superfluids is
complete only if there is invariance with respect to arbitrary rotations in the
spin space around the axis $z$. Then, according to Noether’s theorem, the spin
component along the axis $z$ is conserved. But there is always some magnetic
anisotropy, which breaks the rotational invariance. Correspondingly, there is
no strict conservation law for spin, whereas in superfluids the gauge invariance
is exact and the conservation law of mass (charge) is exact as well.
In the past there were arguments about whether superfluidity is possible in
the absence of a conservation law. This dispute started with the discussion of
superfluidity of electron-hole pairs, or excitons. The number of electron-hole
pairs can vary due to interband transitions, and the degeneracy with respect
to the phase of the pair condensate is lifted. Guseinov and Keldysh [9] called
this effect “fixation of phase”. They demonstrated that spatially
_homogeneous_ stationary current states are impossible, and concluded that
there is no analogy with superfluidity. However, it was later demonstrated
that phase fixation does not rule out the existence of weakly _inhomogeneous_
stationary current states analogous to superfluid current states [14, 26, 20].
This analysis was extended to spin superfluidity [28, 27, 29]. In magnetism,
violation of the spin conservation law is usually rather weak, because it is
related to relativistically small (inversely proportional to the speed of
light) processes of spin-orbit interaction. In fact, the LLG theory itself is
based on the assumption of weak spin-orbit interaction [17].
Above we discussed supercurrents generated in the equilibrium (ground) state
of a magnetically ordered medium with easy-plane topology. But in
magnetically ordered systems this topology is also possible in non-equilibrium
coherent precession states, in which spin pumping supports spin precession with
a fixed spin component along the magnetic field. Such non-equilibrium coherent
precession states, nowadays called magnon BEC, were experimentally
investigated in the $B$ phase of superfluid 3He and in yttrium-iron-garnet
(YIG) films [3, 6].
Spin superfluid transport is possible as long as the spin phase gradient does
not exceed the critical value determined by the Landau criterion. The Landau
criterion checks the stability of supercurrent states with respect to elementary
excitations of all collective modes. It determines a
threshold for the current state instability, but it tells nothing about how
the instability develops. The decay of the supercurrent is possible only via
phase slips. In a phase slip event a magnetic vortex crosses current
streamlines, decreasing the phase difference along the streamlines. Below the
critical value of the supercurrent, phase slips are suppressed by energy
barriers. The critical value of the supercurrent at which the barriers vanish is
of the same order as that estimated from the Landau criterion. This leads to the
conclusion that the instability predicted by the Landau criterion is a
precursor of an avalanche of phase slips no longer suppressed by any activation
barrier.
Superfluid spin transport over macroscopic distances is possible only if
the spin phase performs a large number of full $2\pi$ rotations along current
streamlines (a large winding number) and phase slips are suppressed by
energy barriers. On the other hand, small phase variations of less than $2\pi$
are ubiquitous in magnetism. They emerge in any spin wave, in any domain wall, or
due to disorder. Their existence is confirmed by numerous experiments in the
past. Spin currents generated by these small phase differences transport spin
only over small distances and oscillate in space, in time, or both. Their
existence is not a manifestation of spin superfluidity.
In recent decades, interest in spin superfluidity has revived [30, 4, 41, 40,
38, 5, 32, 11, 43, 23, 35, 7] in connection with the emergence of spintronics.
The present article reviews the three essentials of the spin superfluidity
concept: topology, the Landau criterion, and phase slips. The article focuses on
a qualitative analysis, avoiding details of calculations, which can be found
in the original papers. After the theoretical analysis, the experiments searching
for spin superfluidity are briefly discussed.
## 2 Concept of superfluidity
Figure 1: Phase (in-plane rotation angle) variation in space in the presence
of mass (spin) currents [From Sonin [30]]. (a) Oscillating currents in a sound
(spin) wave. (b) Stationary mass (spin) supercurrent.
Let us recall the concept of superfluidity for the transport of mass (charge).
In superfluid hydrodynamics there are the Hamilton equations for the pair of
the canonically conjugate variables “phase - particle density”:
$\displaystyle\hbar{d\varphi\over dt}=-{\delta{\cal H}\over\delta
n},~{}~{}{dn\over dt}={\delta{\cal H}\over\hbar\delta\varphi}.$ (1)
Here
${\delta{\cal H}\over\delta n}={\partial{\cal H}\over\partial
n}-\bm{\nabla}\cdot{\partial{\cal
H}\over\partial\bm{\nabla}n},~{}~{}{\delta{\cal
H}\over\delta\varphi}={\partial{\cal
H}\over\partial\varphi}-\bm{\nabla}\cdot{\partial{\cal
H}\over\partial\bm{\nabla}\varphi}$ (2)
are functional derivatives of the Hamiltonian
${\cal H}={\hbar^{2}n\over 2m}(\nabla\varphi)^{2}+E_{0}(n),$ (3)
$\varphi$ is the phase of the wave function describing the Bose-Einstein
condensate (BEC) in Bose liquids or the Cooper pair condensate in Fermi
liquids, and $E_{0}(n)$ is the energy of the superfluid at rest, which depends
only on the particle density $n$. Taking into account the gauge invariance
[$U(1)$ symmetry] $\partial{\cal H}/\partial\varphi=0$, the Hamilton equations
are reduced to the equations of hydrodynamics for an ideal liquid:
$\displaystyle m{d\bm{v}\over dt}=-\bm{\nabla}\mu,$ (4) $\displaystyle{dn\over
dt}=-\bm{\nabla}\cdot\bm{j}.$ (5)
In these expressions
$\mu={\partial E_{0}\over\partial n}+{\hbar^{2}\over 2m}(\bm{\nabla}\varphi)^{2}$ (6)
is the chemical potential, and
$\bm{j}=n\bm{v}={\partial{\cal H}\over\hbar\partial\bm{\nabla}\varphi}$ (7)
is the particle current. We consider the zero-temperature limit, when the
superfluid velocity coincides with the center-of-mass velocity of the whole
liquid:
$\bm{v}={\hbar\over m}\bm{\nabla}\varphi.$ (8)
The continuity equation (5) expresses the conservation law of mass (charge), which follows from the gauge invariance.
A collective mode of the ideal liquid is a plane sound wave $\propto
e^{i\bm{k}\cdot\bm{r}-i\omega t}$ with the wave vector $\bm{k}$, the frequency
$\omega$, and the linear spectrum $\omega=u_{s}k$. The sound velocity is
$u_{s}=\sqrt{{n\over m}{\partial^{2}E_{0}\over\partial n^{2}}}.$ (9)
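As a quick numerical illustration of Eq. (9), the following sketch (in reduced units, assuming the weakly interacting Bose-gas model $E_{0}(n)=gn^{2}/2$, where $g$ is a hypothetical coupling constant) evaluates the second derivative of $E_{0}$ by finite differences and recovers the analytic result $u_{s}=\sqrt{gn/m}$:

```python
import math

# Sketch of Eq. (9) in reduced units, assuming the weakly interacting
# Bose-gas model E0(n) = g n^2 / 2 (g is an illustrative coupling constant).
def E0(n, g=1.0):
    return 0.5 * g * n * n

def sound_velocity(n, m, g=1.0, h=1e-4):
    # u_s = sqrt((n/m) d^2 E0/dn^2), second derivative by central differences
    d2 = (E0(n + h, g) - 2.0 * E0(n, g) + E0(n - h, g)) / h**2
    return math.sqrt(n / m * d2)

n, m, g = 2.0, 1.0, 1.0
u_numeric = sound_velocity(n, m, g)
u_exact = math.sqrt(g * n / m)   # analytic value for this model
print(u_numeric, u_exact)
```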
In the sound wave the phase varies in space, i.e., the wave is accompanied by mass currents [Fig. 1(a)]. The amplitude of the phase variation is small, and the currents transport mass over distances of the order of the wavelength. Genuinely superfluid transport over macroscopic distances is possible in current states, which are stationary solutions of the hydrodynamic equations with finite constant currents, i.e., with constant nonzero phase gradients. In the current state the phase rotates through a large number of full 2$\pi$-rotations along the streamlines of the current [Fig. 1(b)]. These are supercurrents, or persistent currents.
The crucial point of superfluidity theory is why the supercurrent in Fig. 1(b) is a metastable state and does not decay for a long time. The first explanation of supercurrent metastability was the well-known Landau criterion [16]. According to this criterion, the current state is stable as long as _any quasiparticle_ in a moving liquid has a positive energy in the laboratory frame, so that its creation requires an energy input. Let us suppose that elementary quasiparticles of the liquid at rest have an energy
$\varepsilon(\bm{p})=\hbar\omega(\bm{k})$, where $\bm{p}=\hbar\bm{k}$ is the
quasiparticle momentum. If the liquid moves with the velocity $\bm{v}$ the
quasiparticle energy in the laboratory frame becomes
$\tilde{\varepsilon}(\bm{p})=\varepsilon(\bm{p})+\bm{p}\cdot\bm{v}$. This is
the Doppler effect in the Galilean invariant fluid. The current state is
stable if the energy $\tilde{\varepsilon}$ is never negative. This condition
is the Landau criterion:
$v<v_{L}=\mbox{min}{\varepsilon(\bm{p})\over p}=\mbox{min}{\omega(\bm{k})\over
k}.$ (10)
If quasiparticles are phonons (quanta of sound waves) the Landau critical
velocity $v_{L}$ is the sound velocity $u_{s}$. In superfluid 4He the Landau
critical velocity $v_{L}$ is determined by the roton part of the spectrum. It
is a few times less than the sound velocity.
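The minimization in Eq. (10) can be sketched numerically. The following toy model (not real 4He data; the phonon slope $c$, roton gap $\Delta$, momentum $p_{0}$, and effective mass $\mu$ are illustrative dimensionless numbers) shows how a roton-like dip pushes the Landau critical velocity well below the sound velocity:

```python
# Toy illustration (assumed model, not real He-4 data): the Landau critical
# velocity v_L = min over p of eps(p)/p for a phonon-roton-like spectrum.
import math

c = 1.0                          # phonon (sound) velocity, reduced units
Delta, p0, mu = 0.6, 2.0, 0.2    # toy roton gap, momentum, effective mass

def eps(p):
    # crude single curve: phonon branch at small p, roton branch near p0
    phonon = c * p
    roton = Delta + (p - p0) ** 2 / (2 * mu)
    return min(phonon, roton)

ps = [0.01 * i for i in range(1, 401)]        # momentum grid
v_L = min(eps(p) / p for p in ps)
print(v_L, c)   # v_L comes out a few times smaller than the sound velocity
```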
The Landau criterion checks the stability with respect to elementary microscopic perturbations of the current state, but does not provide information on how the instability develops. The actual process of supercurrent decay is connected with the generation of macroscopic perturbations.
These perturbations are quantum vortices. If the vortex axis (vortex line)
coincides with the $z$ axis, the phase gradient around the vortex line is
given by
$\displaystyle\bm{v}_{v}={\hbar\over
m}\bm{\nabla}\varphi_{v}=\frac{\kappa[\hat{z}\times\bm{r}]}{2\pi r^{2}},$ (11)
where $\bm{r}$ is the position vector in the $xy$ plane and $\kappa=h/m$ is
the velocity circulation quantum.
Creation of the vortex requires some energy. The vortex energy per unit length
(line tension) is determined mostly by the kinetic (gradient) energy in the
area not very close to the vortex axis where the particle density does not
differ essentially from its equilibrium value $n_{0}$ (the London region):
$\displaystyle\varepsilon_{v}=n_{0}\int
d^{2}\bm{r}{\hbar^{2}(\bm{\nabla}\varphi_{v})^{2}\over
2m}={\pi\hbar^{2}n_{0}\over m}\ln{r_{m}\over r_{c}}.$ (12)
The upper cut-off $r_{m}$ of the logarithm is determined by geometry. For the
vortex shown in Fig. 2(a) it is the distance of the vortex line from a sample
border. The lower cut-off $r_{c}$ is the vortex-core radius: the distance $r$ at which the phase gradient becomes so high that the density $n$ starts to decrease below its equilibrium value $n_{0}$.
The energy $E_{0}(n)$ of the resting superfluid at small $n-n_{0}$ is
determined by the fluid compressibility:
$E_{0}(n)=E_{0}(n_{0})+{m(n_{0}-n)^{2}u_{s}^{2}\over 2n_{0}}.$ (13)
At the distance $r_{c}$ from the vortex axis the kinetic energy density $mnv_{v}^{2}/2$ becomes of the order of the compressibility energy at $n_{0}-n\sim n_{0}$. This happens when the velocity $v_{v}(r)$ induced by the vortex becomes of the order of the sound velocity $u_{s}$. This yields $r_{c}\sim\kappa/u_{s}$. Inside the core the density vanishes at the vortex axis, eliminating the singularity in the kinetic energy. For the weakly nonideal Bose gas, $r_{c}$ is on the order of the coherence length.
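An order-of-magnitude check of the estimate $r_{c}\sim\kappa/u_{s}$, using standard textbook values for superfluid 4He, indeed gives a core radius of a few angstroms:

```python
# Order-of-magnitude estimate of the vortex-core radius r_c ~ kappa/u_s
# for superfluid 4He (standard textbook values, illustration only).
h = 6.626e-34        # Planck constant, J*s
m4 = 6.646e-27       # mass of a 4He atom, kg
u_s = 238.0          # first-sound velocity in He II, m/s

kappa = h / m4       # velocity circulation quantum kappa = h/m, ~1e-7 m^2/s
r_c = kappa / u_s    # core radius estimate, m
print(f"kappa = {kappa:.3e} m^2/s, r_c ~ {r_c:.1e} m")   # a few angstroms
```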
Figure 2: Mass and magnetic vortices [From Sonin [30]]. (a) Vortex in a superfluid, or magnetic vortex in an easy-plane ferromagnet without in-plane anisotropy. (b) Magnetic vortex at small average spin currents ($\langle\nabla\varphi\rangle\ll 1/l$) for four-fold in-plane symmetry. The vortex line is a confluence of four $90^{\circ}$ domain walls (solid lines).
Phase slips are impeded by energy barriers determined by the topology of the order parameter space (vacuum manifold). The order parameter of a superfluid is a complex wave function $\psi=\psi_{0}e^{i\varphi}$, where the modulus $\psi_{0}$ of the wave function is a positive constant determined by minimization of the energy and the phase $\varphi$ is a degeneracy parameter. Any degenerate ground state in a closed annular channel (torus) with some constant phase $\varphi$ maps onto a point on the circumference $|\psi|=\psi_{0}$ in the complex $\psi$ plane, while a current state with the phase change $2\pi N$ around the torus maps onto a path [Fig. 3(a)] winding around the circumference $N$ times. The integer winding number $N$ is a topological charge. The current can relax only if it is possible to change the topological charge.
A change of the topological charge from $N$ to $N-1$ is possible, if a vortex
generated at one border of a channel with a moving superfluid moves across
current streamlines “cutting” the channel cross-section, and annihilates at
another border as shown in Fig. 2(a). This is a phase slip. In the phase slip
event the distance $r_{m}$ of the vortex from a border varies from zero to the
width $W$ of the channel. The energy of the vortex in a moving superfluid is
determined by a sum of the constant gradient $\bm{\nabla}\varphi_{0}$, which
determines the supercurrent, and the phase gradient $\bm{\nabla}\varphi_{v}$
induced by the vortex. The vortex energy consists of the energy of the vortex
in a resting fluid given by Eq. (12) and of the energy from the cross terms of
the two gradient fields $\bm{\nabla}\varphi_{0}$ and $\bm{\nabla}\varphi_{v}$:
$\displaystyle\tilde{\varepsilon}_{v}={\pi\hbar^{2}n_{0}\over
m}L\ln{r_{m}\over r_{c}}-{2\pi\hbar^{2}n_{0}\over m}S\nabla\varphi_{0},$ (14)
where $L$ is the length of the vortex line and $S$ is the area of the cut, at
which the phase jumps by $2\pi$. For the 2D case shown in Fig. 2(a) (a
straight vortex in a slab of thickness $L$ normal to the picture plane)
$S=Lr_{m}$. The vortex motion across the channel (growth of $r_{m}$) is impeded by the barrier determined by the maximum of the energy $\tilde{\varepsilon}_{v}$ as a function of $r_{m}$. The height of the barrier is the vortex energy at $r_{m}=1/(2\nabla\varphi_{0})$:
$\displaystyle\varepsilon_{b}\approx{\pi\hbar^{2}n\over m}L\ln{2\over
r_{c}\nabla\varphi_{0}}.$ (15)
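The position and height of this barrier can be sketched numerically. In units of $\pi\hbar^{2}n_{0}L/m$ (an assumed normalization), the energy of Eq. (14) with $S=Lr_{m}$ reduces to $\ln(r_{m}/r_{c})-2Kr_{m}$, whose maximum sits at $r_{m}=1/(2K)$ and agrees with Eq. (15) to logarithmic accuracy:

```python
import math

# Sketch: vortex energy per unit length in a moving superfluid (2D slab),
# Eq. (14) with S = L*r_m, in assumed units of pi*hbar^2*n_0*L/m:
#   e(r_m) = ln(r_m/r_c) - 2*K*r_m,   K = grad(phi_0)
r_c, K = 1.0, 0.01

def e(r_m):
    return math.log(r_m / r_c) - 2 * K * r_m

# scan r_m; the maximum (the phase-slip barrier) sits at r_m = 1/(2K)
grid = [r_c * (1.0 + 0.05 * i) for i in range(4000)]
r_max = max(grid, key=e)
print(r_max, 1 / (2 * K))         # barrier position, analytic value 50.0
print(e(r_max), math.log(1 / (2 * K * r_c)) - 1)   # barrier height
```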
The barrier disappears at gradients $\nabla\varphi_{0}\sim 1/r_{c}$, which are
of the same order as the critical gradient determined from the Landau
criterion. On the other hand, in the limit of small velocity
$v\propto\nabla\varphi_{0}$ the barrier height grows and at very small
velocity $v\sim\hbar/mW$ reaches the value
$\displaystyle\varepsilon_{m}\approx{\pi\hbar^{2}n\over m}L\ln{W\over r_{c}}.$ (16)
Thus, in the thermodynamic (macroscopic) limit $W\to\infty$ the barrier height becomes infinite. Since the phase slip probability decreases exponentially with the barrier height (whether the barrier is overcome by thermal fluctuations or via quantum tunneling), the lifetime of the current state in conventional superfluidity diverges as the velocity (phase gradient) decreases. This justifies calling superfluidity a macroscopic quantum phenomenon.
In the 3D geometry the phase slip is realized via expansion of vortex rings. For a ring of radius $R$ the vortex length and the area of the cut are $L=2\pi R$ and $S=\pi R^{2}$ respectively, and the barrier disappears at the same critical gradient $\sim 1/r_{c}$ as in the 2D case.
The state with the vortex in a moving superfluid maps on the full circle
$|\psi|\leq\psi_{0}$ [Fig. 3(b)]. The area outside the vortex core maps on the
circumference $|\psi|=\psi_{0}$, while the core maps on the interior of the
circle. In a weakly interacting Bose gas the structure of the core is determined by solution of the Gross-Pitaevskii equation [22, 31]. The details of the core structure are not important for the content of the present article.
Figure 3: Topology of the uniform mass current and the vortex states [From Sonin [30]]. (a) The current state in a torus maps onto the circumference $|\psi|=\psi_{0}$ in the complex $\psi$ plane, where $\psi_{0}$ is the modulus of the equilibrium order parameter wave function in the uniform state. (b) The current state with a vortex maps onto the circle $|\psi|\leq\psi_{0}$.
Figure 4: Mechanical analogue of a persistent current: A twisted elastic rod
bent into a closed ring. There is a persistent angular-momentum flux around
the ring [From Sonin [30]].
For a better understanding of the superfluidity phenomenon it is useful to consider a mechanical analogue of the superfluid current [29]. Let us twist a long elastic rod so that the twist angle between its two ends reaches values many times $2\pi$. Bending the rod into a ring and connecting the ends rigidly, one obtains a ring with a circulating persistent angular-momentum flux (Fig. 4). The flux is proportional to the gradient of the twist angle, which plays the role of the phase gradient in the supercurrent. The deformed state of the ring is not its ground state, but it cannot relax to the ground state via any elastic process, because it is topologically stable. The only way to relieve the strain inside the rod is via plastic displacements, i.e., dislocations start to move across rod cross-sections. The role of dislocations in the twisted rod is similar to the role of vortices in the superfluid.
## 3 Spin superfluidity in ferromagnets
For a ferromagnet with magnetization density $\bm{M}$ the Landau-Lifshitz-Gilbert (LLG) equation is [17]
$\displaystyle{\partial\bm{M}\over\partial t}=\gamma\left[\bm{H}_{eff}\times\bm{M}\right],$ (17)
where $\gamma$ is the gyromagnetic ratio between the magnetic and mechanical
moment. The effective magnetic field is determined by the functional
derivative of the Hamiltonian $\cal H$:
$\displaystyle\bm{H}_{eff}=-{\delta{\cal H}\over\delta\bm{M}}=-{\partial{\cal
H}\over\partial\bm{M}}+\nabla_{i}{\partial{\cal
H}\over\partial\nabla_{i}\bm{M}}.$ (18)
According to the LLG equation, the absolute value $M$ of the magnetization
does not vary. The evolution of $\bm{M}$ is a precession around the effective
magnetic field $\bm{H}_{eff}$.
If spin-rotational invariance is broken and there is uniaxial crystal magnetic
anisotropy the phenomenological Hamiltonian is
$\displaystyle{\cal H}={A\over
2}\nabla_{i}\bm{M}\cdot\nabla_{i}\bm{M}+{GM_{z}^{2}\over
2M^{2}}-\bm{H}\cdot\bm{M}.$ (19)
The parameter $A$ is the stiffness of the spin system, determined by the exchange interaction. If the anisotropy parameter $G$ is positive, the anisotropy is of the “easy plane” type, which keeps the magnetization in the $xy$ plane. If the external magnetic field $\bm{H}$ is directed along the $z$ axis, the $z$ component of spin is conserved because of invariance with respect to rotations around the $z$ axis. For the sake of simplicity we ignore the magnetostatic energy, which depends on the sample shape.
Since the absolute value $M$ of magnetization is fixed, the magnetization
vector $\bm{M}$ is fully determined by the $z$ magnetization component $M_{z}$
and the angle $\varphi$ showing the direction of $\bm{M}$ in the easy plane
$xy$:
$M_{x}=M_{\perp}\cos\varphi,~{}~{}M_{y}=M_{\perp}\sin\varphi,~{}~{}M_{\perp}=\sqrt{M^{2}-M_{z}^{2}}.$
(20)
In the new variables the Hamiltonian is
$\displaystyle{\cal H}={AM_{\perp}^{2}(\bm{\nabla}\varphi)^{2}\over
2}+{M_{z}^{2}\over 2\chi}-HM_{z}.$ (21)
Here we neglected gradients of $M_{z}$. The magnetic susceptibility
$\chi=M^{2}/G$ along the $z$ axis is determined by the easy-plane anisotropy
parameter $G$. The LLG equation reduces to the Hamilton equations for a pair
of canonically conjugate continuous variables “angle–moment”:
$\displaystyle{1\over\gamma}{d\varphi\over dt}=-{\delta{\cal H}\over\delta
M_{z}}=-{\partial{\cal H}\over\partial M_{z}},$ (22)
$\displaystyle{1\over\gamma}{dM_{z}\over dt}={\delta{\cal
H}\over\delta\varphi}=-\bm{\nabla}\cdot{\partial{\cal
H}\over\partial\bm{\nabla}\varphi},$ (23)
where functional derivatives are taken from the Hamiltonian Eq. (21). Using
the expressions for functional derivatives the Hamilton equations are
$\displaystyle{1\over\gamma}{d\varphi\over
dt}=AM_{z}(\bm{\nabla}\varphi)^{2}-{M_{z}-\chi H\over\chi},$ (24)
$\displaystyle{1\over\gamma}{dM_{z}\over dt}+\bm{\nabla}\cdot\bm{J}_{s}=0,$
(25)
where
$\displaystyle\bm{J}_{s}=-{\partial{\cal
H}\over\partial\bm{\nabla}\varphi}=-AM_{\perp}^{2}\bm{\nabla}\varphi$ (26)
is the spin current. Although our equations are written for the magnetization rather than for the spin density, $\bm{J}_{s}$ is defined as a current of spin with the spin density $M_{z}/\gamma$.
Figure 5: Mapping of spin current states onto the order parameter space of the ferromagnet [From Sonin [36]]. (a) Spin current in an isotropic ferromagnet. The current state in a torus maps onto an equatorial circumference on the sphere of radius $M$ (top). A continuous shift of the mapping over the surface of the sphere (middle) reduces it to a point at the north pole (bottom), which corresponds to the ground state without currents. (b) Spin current in an easy-plane ferromagnet at $M_{z}=0$. The easy-plane anisotropy reduces the order parameter space to an equatorial circumference in the $xy$ plane, topologically equivalent to the order parameter space in superfluids. (c) Spin current state with a magnetic vortex in an easy-plane ferromagnet at $M_{z}=0$. The states map onto the surface of the upper or the lower hemisphere. (d) Spin current in an easy-plane ferromagnet at $M_{z}\neq 0$. Spin is confined to a plane parallel to the $xy$ plane but shifted closer to the north pole. A nonzero $M_{z}$ appears either in equilibrium, due to a magnetic field parallel to the $z$ axis, or due to spin pumping. (e) Spin current state with a magnetic vortex in an easy-plane ferromagnet at $M_{z}\neq 0$. The state maps onto the upper or the lower part of the spherical surface. The two options of mapping are not degenerate, and the phase slip with the magnetic vortex of smaller energy (smaller mapping area) is more probable.
There is an evident analogy of Eqs. (24) and (25) with the hydrodynamic
equations (4) and (5) for the superfluid. This analogy of magnetodynamics with
hydrodynamics was pointed out by Halperin and Hohenberg [10] for spin waves in
antiferromagnets. The analogy is also important for spin superfluidity.
There are linear solutions of Eqs. (24) and (25) describing the plane spin-
wave mode $\propto e^{i\bm{k}\cdot\bm{r}-i\omega t}$ with the sound-like
spectrum:
$\omega=c_{sw}k,$ (27)
where
$c_{sw}=\gamma M_{\perp}\sqrt{A\over\chi}$ (28)
is the spin-wave velocity in the ground state. The variation of the phase in space is the same as in the sound mode propagating in the superfluid, shown in Fig. 1(a).
However, like the mass current in a sound wave, the small oscillating spin current in the spin wave does not provide the long-distance superfluid spin transport that this article addresses. Superfluid spin transport over long distances is realized in current states with spin performing a large number of full 2$\pi$-rotations in the easy plane, as shown in Fig. 1(b). In the current state with a constant gradient of the spin phase $\bm{K}=\bm{\nabla}\varphi$, there is a constant magnetization component along the magnetic field (the $z$ axis):
$M_{z}={\chi H\over 1-\chi AK^{2}}.$ (29)
As in superfluids, the stability of current states is connected with the topology of the order parameter space. In isotropic ferromagnets ($G=0$) the order parameter space is a spherical surface of radius equal to the absolute value of the magnetization vector $\bm{M}$ [Fig. 5(a)]. All points on this surface correspond to the same energy of the ground state. Suppose we created a spin current state with monotonically varying phase $\varphi$ in a torus. This state maps onto the equatorial circumference in the order parameter space. The topology allows one to continuously shift the circumference and reduce it to a point (the north or the south pole). During this process, shown in Fig. 5(a), the path remains in the order parameter space all the time, and therefore no energy barrier resists the transformation. Thus, in isotropic ferromagnets metastable current states are not expected.
In a ferromagnet with easy-plane anisotropy ($G>0$) the order parameter space reduces from the spherical surface to a circumference parallel to the $xy$ plane. It is shown in Fig. 5(b) for zero magnetic field, when the circumference is the equator. This order parameter space is topologically equivalent to that in superfluids. Now the transformation of the circumference to a point costs the anisotropy energy. One can therefore expect metastable spin currents (supercurrents). A magnetic field along the anisotropy axis $z$ shifts the easy plane away from the equator, either up [Fig. 5(d)] or down.
In order to check the Landau criterion, one should know the spectrum of spin
waves in the current state with the constant value of the spin phase gradient
$\bm{K}=\bm{\nabla}\varphi$ and with the longitudinal (along the magnetic
field) magnetization given by Eq. (29). The spectrum is determined by solving
the Hamilton equations Eqs. (24) and (25) linearized with respect to weak
perturbations of the current state. We skip the standard algebra given
elsewhere [35]. Finally one obtains [11, 35] the spectrum of plane spin waves:
$\omega+\bm{w}\cdot\bm{k}=\tilde{c}_{sw}k.$ (30)
Here
$\tilde{c}_{sw}=\sqrt{1-\chi AK^{2}}c_{sw}$ (31)
is the spin-wave velocity in the current state, and the spin wave velocity
$c_{sw}$ in the ground state is given by Eq. (28). The velocity
$\bm{w}=2\gamma M_{z}A\bm{K}$ (32)
can be called the Doppler velocity because its effect on the frequency is similar to the Doppler effect in a Galilean-invariant fluid [see the text before Eq. (10)]. However, our system is not Galilean invariant [11], so this is a pseudo-Doppler effect. Because of it, the gradient $K$, which is proportional to $w$, is also present on the right-hand side of the dispersion relation Eq. (30).
We have obtained the gapless Goldstone mode with the sound-like spectrum linear in $\bm{k}$. The current state becomes unstable when, at $\bm{k}$ antiparallel to $\bm{w}$, the frequency $\omega$ becomes negative. This happens at the gradient $K$ equal to the Landau critical gradient
$K_{L}={M_{\perp}\over\sqrt{4M^{2}-3M_{\perp}^{2}}}{1\over\sqrt{\chi A}}.$
(33)
In the limit of weak magnetic fields when $M_{z}\ll M$ the Landau critical
gradient is
$K_{L}={1\over\sqrt{\chi A}}={\gamma M\over\chi c_{sw}}.$ (34)
In this limit the pseudo-Doppler effect is not important, and at the Landau
critical gradient $K_{L}$ the spin-wave velocity $\tilde{c}_{sw}$ in the
current state vanishes.
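The algebra behind Eq. (33) can be checked numerically: the instability threshold of the spectrum (30), where the spin-wave velocity $\tilde{c}_{sw}$ equals the Doppler velocity $w$, reproduces the analytic $K_{L}$. In this sketch (dimensionless illustrative parameters) $M_{z}$ is treated as the fixed magnetization of the current state:

```python
import math

# Numerical check (illustrative, dimensionless units) that the instability
# threshold of the spectrum (30), where c_tilde of Eq. (31) equals the
# Doppler velocity w of Eq. (32), reproduces the Landau critical gradient
# of Eq. (33). M_z is the fixed magnetization of the current state.
chi, A, M, Mz, gamma = 0.5, 2.0, 1.0, 0.3, 1.0
Mp = math.sqrt(M**2 - Mz**2)                 # in-plane magnetization

def excess(K):
    # spin-wave velocity in the current state minus the Doppler velocity w
    c_tilde = math.sqrt(1 - chi * A * K**2) * gamma * Mp * math.sqrt(A / chi)
    w = 2 * gamma * Mz * A * K
    return c_tilde - w

# bisection for the gradient where the mode frequency first turns negative
lo, hi = 0.0, 1.0 / math.sqrt(chi * A)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)

K_numeric = 0.5 * (lo + hi)
K_analytic = Mp / math.sqrt(4 * M**2 - 3 * Mp**2) / math.sqrt(chi * A)
print(K_numeric, K_analytic)   # the two values coincide
```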
In the opposite limit $M_{z}\to M$ ($M_{\perp}\to 0$) the Landau critical
gradient,
$K_{L}={M_{\perp}\over 2M}{1\over\sqrt{\chi A}},$ (35)
decreases, and the spin superfluidity becomes impossible at the phase
transition to the easy-axis anisotropy ($M_{\perp}=0$).
Figure 6: Skyrmion cores of magnetic vortices. Variation of magnetization
vectors ($\bm{M}$ in a ferromagnet, $\bm{M}_{1}$ and $\bm{M}_{2}$ in an
antiferromagnet) in the vortex core as a function of the distance $r$ from the
vortex axis is shown schematically. Horizontal directions of magnetizations
correspond to the direction of the $z$ axis in the spin space [From Sonin
[34]]. (a) The magnetic vortex in the ferromagnet corresponding to the gapless
Goldstone spin wave mode with the coherence length $\xi_{0}$ given by Eq.
(36). (b) The magnetic vortex in the antiferromagnet corresponding to the
Goldstone spin wave mode with the coherence length $\xi_{0}$ given by Eq.
(59). (c) The magnetic vortex in the antiferromagnet corresponding to the
gapped spin wave mode with the coherence length $\xi$ given by Eq. (61).
In deriving the sound-like spectrum of the spin wave we neglected terms in the Hamiltonian that depend on the gradients $\bm{\nabla}M_{z}$. These terms must be taken into account at wavelengths on the order of the coherence length
$\xi_{0}={M\over M_{\perp}}\sqrt{\chi A}={M^{2}\over M_{\perp}}\sqrt{A\over
G}.$ (36)
The Landau critical gradient $K_{L}$ is on the order of the inverse coherence
length $1/\xi_{0}$.
The current states relax to the ground state via phase slip events, in which magnetic vortices cross the spin current streamlines. At $M_{z}=0$ the current state with a magnetic vortex maps onto the surface of a hemisphere of radius $M$, either above or below the equator [21], as shown in Fig. 5(c). The vortex core has the structure of a skyrmion. A skyrmion mapping onto a hemisphere is called a meron. At $M_{z}=0$ two magnetic vortices with opposite spin polarizations have the same energy, and both can participate in phase slips. But at $M_{z}\neq 0$ the magnetic vortex with the smaller mapping area [Fig. 5(e)] has a smaller energy, and phase slips with its participation are more frequent.
Since inside the core the gradients $\bm{\nabla}M_{z}$ cannot be ignored, the
core radius is on the order of the coherence length $\xi_{0}$. Variation of
the magnetization direction in space inside the skyrmion core is schematically
shown in Fig. 6(a). One can find more details and numerical calculations of
the structure of skyrmion cores of magnetic vortices in ferromagnets elsewhere
[33].
The estimation of barriers for phase slips in spin-superfluid ferromagnets is
similar to that in the case of mass superfluidity. The spin phase gradient in
the current state with a straight magnetic vortex parallel to the axis $z$ is
$\bm{\nabla}\varphi={[\hat{z}\times\bm{r}]\over r^{2}}+\bm{K}.$ (37)
We consider a 2D problem of the straight magnetic vortex at the distance
$r_{m}$ from the plane border. The gradient $\bm{K}$ is parallel to the
border. Substituting the phase gradient field Eq. (37) into the kinetic energy
and integrating the energy over the London region, where $M_{\perp}$ is close
to its value in the ground state, one obtains the energy of the magnetic
vortex per unit length:
$\displaystyle\tilde{\varepsilon}_{v}=\pi AM_{\perp}^{2}\left(\ln{r_{m}\over
r_{c}}-2Kr_{m}\right).$ (38)
The magnetic vortex energy has a maximum at $r_{m}=1/(2K)$. The energy at the maximum is a barrier preventing phase slips:
$\varepsilon_{b}=\pi AM_{\perp}^{2}\ln{1\over 2Kr_{c}}.$ (39)
The barrier vanishes when the gradient $K$ becomes of the order of the inverse vortex-core radius $1/r_{c}$. This gradient is on the order of the Landau critical gradient $K_{L}$.
Considering the mapping of current states with nonzero magnetization $M_{z}$ [Figs. 5(d) and (e)], we had in mind the equilibrium magnetization $M_{z}=\chi H$ produced by an external magnetic field $\bm{H}$. In equilibrium there is no precession of the magnetization $\bm{M}$ around the $z$ axis. However, non-equilibrium states with a non-equilibrium magnetization $M_{z}$, which makes the magnetization (spin) precess coherently, are also possible. One can create them by pumping magnons, which bring spin and energy into the system. Spin pumping compensates the inevitable losses of spin due to spin relaxation. However, the process of spin relaxation is usually weak, and one may treat the coherent precession state as a quasi-equilibrium state at fixed $M_{z}$. The coherent precession state does not require the crystal easy-plane anisotropy for its existence. The easy-plane topology of Fig. 5(d) is provided dynamically, and, as a result, metastable spin current states are also possible.
Spin superfluidity in the quasi-equilibrium coherent precession state was investigated theoretically and experimentally in the $B$ phase of superfluid 3He [3]. Later the coherent precession state in 3He was rebranded as magnon BEC [4]. The coherent spin precession state (also under the name of magnon BEC) was also revealed in YIG magnetic films [6]. Spin superfluidity in YIG was discussed by Sun _et al._ [38, 39] and Sonin [32]. In the quasi-equilibrium coherent precession state, demonstration of long-distance superfluid spin transport is problematic (see Sec. 7). The semantic dilemma “coherent precession state, or magnon BEC” was discussed by Sonin [30].
## 4 Spin superfluidity in antiferromagnets
The dynamics of a bipartite antiferromagnet can be described by the LLG
equations for two spin sublattices coupled via exchange interaction [12]:
${d\bm{M}_{i}\over dt}=\gamma\left[\bm{H}_{i}\times\bm{M}_{i}\right].$ (40)
Here the subscript $i=1,2$ indicates to which sublattice the magnetization
$\bm{M}_{i}$ belongs, and
$\bm{H}_{i}=-{\delta{\cal H}\over\delta\bm{M}_{i}}=-{\partial{\cal
H}\over\partial\bm{M}_{i}}+\nabla_{j}{\partial{\cal
H}\over\partial\nabla_{j}\bm{M}_{i}}$ (41)
is the effective field for the $i$th sublattice determined by the functional
derivative of the Hamiltonian $\cal H$. For an isotropic antiferromagnet the
Hamiltonian is
$\displaystyle{\cal
H}={\bm{M}_{1}\cdot\bm{M}_{2}\over\chi}+{A(\nabla_{i}\bm{M}_{1}\cdot\nabla_{i}\bm{M}_{1}+\nabla_{i}\bm{M}_{2}\cdot\nabla_{i}\bm{M}_{2})\over
2}$
$\displaystyle+A_{12}\nabla_{j}\bm{M}_{1}\cdot\nabla_{j}\bm{M}_{2}-\bm{H}\cdot\bm{m}.~{}$
(42)
The total magnetization is
$\bm{m}=\bm{M}_{1}+\bm{M}_{2},$ (43)
and the staggered magnetization
$\bm{L}=\bm{M}_{1}-\bm{M}_{2}$ (44)
is normal to $\bm{m}$.
In the LLG theory the absolute values of the sublattice magnetizations $\bm{M}_{1}$ and $\bm{M}_{2}$ are equal to a constant $M$, and in the uniform state without gradients the Hamiltonian is
${\cal H}=-{M^{2}\over\chi}+{m^{2}\over 2\chi}-Hm_{z}.$ (45)
Here and later on we assume that the magnetic field is applied along the $z$
axis. The constant term $M^{2}/\chi$ can be ignored. In the ground state the
total magnetization $\bm{m}$ is directed along the magnetic field (the $z$
axis), and the staggered magnetization $\bm{L}$ is confined to the $xy$ plane.
The order parameter for an antiferromagnet is the unit Néel vector
$\bm{l}=\bm{L}/L$. The order parameter space for the isotropic antiferromagnet
in the absence of the external magnetic field is a surface of a sphere, as for
isotropic ferromagnets. But in ferromagnets the magnetic field produces an
easy axis for the magnetization, while in the antiferromagnet the magnetic
field produces the easy plane for the order parameter vector $\bm{l}$. Thus,
the easy-plane topology necessary for the spin superfluidity in
antiferromagnets does not require the crystal easy-plane anisotropy.
In analogy to the ferromagnetic case, one can describe the vectors of the sublattice magnetizations $\bm{M}_{i}$ with constant absolute value $M$ by the two pairs of conjugate variables $(M_{iz},\varphi_{i})$, which are determined by the two pairs of Hamilton equations:
$\displaystyle{1\over\gamma}{d\varphi_{i}\over dt}=-{\delta{\cal H}\over\delta
M_{iz}}=-{\partial{\cal H}\over\partial M_{iz}},$ (46)
$\displaystyle{1\over\gamma}{dM_{iz}\over dt}={\delta{\cal
H}\over\delta\varphi_{i}}={\partial{\cal
H}\over\partial\varphi_{i}}-\bm{\nabla}\cdot{\partial{\cal
H}\over\partial\bm{\nabla}\varphi_{i}}.$ (47)
Let us consider the magnetization distribution with axial symmetry around the
axis $z$:
$\displaystyle M_{1z}=M_{2z}={m_{z}\over 2}=M\sin\theta_{0},~{}~{}m_{x}=m_{y}=0,$ $\displaystyle M_{1x}=-M_{2x}={L\over 2}\cos\varphi,$ $\displaystyle M_{1y}=-M_{2y}={L\over 2}\sin\varphi,~{}~{}L_{z}=0.$ (48)
Here $\varphi=\varphi_{1}=\pi-\varphi_{2}$ is the angle of rotation of
$\bm{L}$ around the $z$ axis, $L=\sqrt{4M^{2}-m_{z}^{2}}=2M\cos\theta_{0}$,
and $\theta_{0}$ is the canting angle. The Hamiltonian Eq. (42) for the
axisymmetric case becomes the Hamiltonian
${\cal H}={m_{z}^{2}\over 2\chi}-Hm_{z}+{A_{-}L^{2}(\nabla\varphi)^{2}\over
4}$ (49)
for the pair of the canonically conjugate variables $(m_{z},\varphi)$. The
Hamilton equations for this pair are
$\displaystyle{1\over\gamma}{d\varphi\over
dt}={A_{-}m_{z}(\bm{\nabla}\varphi)^{2}\over 2}-{m_{z}-\chi H\over\chi},$ (50)
$\displaystyle{1\over\gamma}{dm_{z}\over dt}+\bm{\nabla}\cdot\bm{J}_{s}=0.$
(51)
Here $A_{-}=A-A_{12}$, and
$\bm{J}_{s}=-{A_{-}L^{2}\over 2}\bm{\nabla}\varphi$ (52)
is the superfluid spin current. These equations are identical to Eqs.
(24)-(26) for the ferromagnet after replacing the spontaneous magnetization
component $M_{z}$ by the total magnetization component $m_{z}$, $A$ by
$A_{-}/2$, and $M_{\perp}$ by $L$.
In the stationary spin current state there is a constant gradient
$\bm{K}=\bm{\nabla}\varphi$ of the spin phase and a constant total
magnetization
$m_{z}={\chi H\over 1-\chi A_{-}K^{2}/2}.$ (53)
To check the Landau criterion we must know the spectrum of all collective modes. Solving Eqs. (50) and (51), linearized with respect to plane-wave perturbations $m^{\prime}\propto e^{i\bm{k}\cdot\bm{r}-i\omega t}$ and $\varphi^{\prime}\propto e^{i\bm{k}\cdot\bm{r}-i\omega t}$ of the stationary spin current state, yields the spectrum of the gapless Goldstone mode:
$\displaystyle\omega+\bm{w}\cdot\bm{k}=\tilde{c}_{sw}k.$ (54)
Here the spin-wave velocity $\tilde{c}_{sw}$ in the current state and the
Doppler velocity $\bm{w}$ are
$\tilde{c}_{sw}=c_{sw}\sqrt{1-{\chi A_{-}K^{2}\over 2}},~{}~{}\bm{w}=\gamma
m_{z}A_{-}\bm{K},$ (55)
while
$c_{sw}=\gamma L\sqrt{A_{-}\over 2\chi}=2\gamma
M\cos\theta_{0}\sqrt{A_{-}\over 2\chi}$ (56)
is the spin-wave velocity in the ground state without spin currents. The difference between the gapless modes in a ferromagnet and an antiferromagnet is that in the former the total magnetization precesses around the $z$ axis, while in the latter it is the staggered magnetization that precesses. Oscillations of the sublattice magnetizations $\bm{M}_{1}$ and $\bm{M}_{2}$ in the gapless mode are illustrated in Fig. 7(a).
Figure 7: The schematic picture of the two spin wave modes in the bipartite
antiferromagnet in the plane $xz$ [From Sonin [36]]. (a) The gapless Goldstone
mode. There are oscillations of the canting angle determining the
magnetization component $m_{z}$ and rotational oscillations around the axis
$z$. (b) The gapped mode. There are rotational oscillations of the two
magnetizations around the axis $y$.
In antiferromagnets there is another mode in which the total magnetization
$\bm{m}$ and the staggered magnetization $\bm{L}$ perform rotational
oscillations around the axis normal to the axis $z$. Without spin
supercurrents this axis does not vary in space, and one can choose it to be
the axis $y$. The oscillations of the sublattice magnetizations are
illustrated in Fig. 7(b). In the spin current state $\bm{L}$ rotates around
the axis, which itself rotates in the easy-plane $xy$ along the current
streamlines. The magnetic field breaks invariance with respect to rotations
around axes normal to the field, and the mode spectrum is
$\displaystyle\omega+\bm{w}\cdot\bm{k}=\sqrt{\omega_{0}^{2}+c_{sw}^{2}k^{2}},$
(57)
where the gap is given by
$\omega_{0}=\sqrt{{\gamma^{2}m_{z}^{2}\over\chi^{2}}-c_{sw}^{2}K^{2}}.$ (58)
More details of the derivation are given by Sonin [35].
Applying the Landau criterion to the gapless mode at small canting angles
$\theta_{0}$ (weak magnetic fields), one obtains the critical gradient $K_{L}$
and the correlation length $\xi_{0}$,
$K_{L}={1\over\xi_{0}},~{}~{}\xi_{0}=\sqrt{\chi A_{-}\over 2},$ (59)
similar to those obtained for a ferromagnet [Eqs. (35) and (36)]. However, in
contrast to a ferromagnet where the susceptibility $\chi$ is connected with
weak anisotropy energy, in an antiferromagnet the susceptibility $\chi$ is
determined by a much larger exchange energy and is rather small. As a result,
in the antiferromagnet the gapped mode loses its stability at much lower
values of $K$ than the gapless mode. This happens at the Landau critical
gradient
$K_{L}={1\over\xi}$ (60)
when the gap given by Eq. (58) vanishes and the mode frequency becomes
complex. Here we introduced a new correlation length
$\xi={c_{sw}\over\gamma H}={\chi c_{sw}\over\gamma m_{z}}.$ (61)
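The closing of the gap at the Landau critical gradient can be illustrated with a short numeric sketch (ours, with arbitrary illustrative values of $\gamma$, $m_{z}$, $\chi$ and $c_{sw}$): the squared gap of Eq. (58) turns negative, and hence the frequency becomes complex, precisely when $K$ exceeds $1/\xi$ of Eq. (61):

```python
# Assumed illustrative parameter values (arbitrary units).
gamma, m_z, chi, c_sw = 2.0, 0.125, 0.25, 1.0

xi = chi * c_sw / (gamma * m_z)   # correlation length of the gapped mode, Eq. (61)

def gap_squared(K):
    """omega_0^2 of Eq. (58); the mode is unstable when this is negative."""
    return (gamma * m_z / chi) ** 2 - (c_sw * K) ** 2

print(gap_squared(0.5 / xi) > 0)   # True: subcritical gradient, real gap
print(gap_squared(1.0 / xi))       # 0.0: gap closes at K_L = 1/xi, Eq. (60)
print(gap_squared(2.0 / xi) < 0)   # True: supercritical, complex frequency
```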
As usual, the instability with respect to phase slips with magnetic vortices
starts at the gradients of the same order as the Landau critical gradient. Two
modes in antiferromagnets have different correlation lengths, and,
correspondingly, there are two types of magnetic vortices with different
structure and size of the skyrmion core. Figure 6(b) shows schematically
spatial variation of the sublattice magnetizations in the skyrmion core of a
magnetic vortex connected with the Goldstone gapless mode. The core radius is
of the order of the correlation length $\xi_{0}$ given by Eq. (59). Inside the
core the canting angle $\theta_{0}$ grows and reaches $\pi/2$ at the vortex
axis. This transforms an antiferromagnet to a ferromagnet with the
magnetization $2M$. The transformation increases the exchange energy, and at
weak magnetic fields creation of magnetic vortices connected with the gapped
mode starts earlier. Its core size is determined by the larger correlation
length $\xi$ determined by the Zeeman energy and given by Eq. (61). The
skyrmion core connected with the gapped mode is illustrated in Fig. 6(c).
## 5 Superfluid spin transport without spin conservation law
Though processes violating the spin conservation law are relativistically
weak, their effect is of principal importance and cannot be ignored in
general. Here we consider the effect of broken rotational symmetry in the easy
plane in ferromagnets. Its extension to antiferromagnets requires only minor
modifications. One should add the $s$-fold in-plane anisotropy
energy $\propto G_{in}$ to the Hamiltonian (21), which becomes
${\cal H}={M_{z}^{2}\over 2\chi}-\gamma
M_{z}H+{AM_{\perp}^{2}(\bm{\nabla}\varphi)^{2}\over
2}+G_{in}[1-\cos(s\varphi)].$ (62)
Then the spin continuity equation (25) becomes
${dM_{z}\over
dt}=-\bm{\nabla}\cdot\bm{J}_{s}-sG_{in}\sin(s\varphi)=AM_{\perp}^{2}\left[\nabla^{2}\varphi-{\sin(s\varphi)\over
l^{2}}\right],$ (63)
where
$\displaystyle l=\sqrt{AM_{\perp}^{2}\over sG_{in}}$ (64)
is the thickness of the wall separating domains with $s$ equivalent easiest
directions in the easy plane. In the stationary spin current states
$dM_{z}/dt=0$, and the phase $\varphi$ is a periodic solution of the sine-
Gordon equation parametrized by the average phase gradients
$\langle\nabla\varphi\rangle$. At small $\langle\nabla\varphi\rangle\ll 1/l$
the spin structure constitutes the chain of domains with the period
$2\pi/s\langle\nabla\varphi\rangle$. Spin currents (gradients) inside domains
are negligible but there are spin currents inside domain walls. The spin
current has a maximum in the center of the domain wall equal to
$J_{l}=\sqrt{2\over s}{A\over l}.$ (65)
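The stationary limit of Eq. (63), $\nabla^{2}\varphi=\sin(s\varphi)/l^{2}$, is a sine-Gordon equation; its standard single-soliton (domain-wall) profile is $\varphi(x)=(4/s)\arctan e^{\sqrt{s}\,x/l}$. The following sketch (the profile and the finite-difference check are ours, not taken from the article, and the values of $s$ and $l$ are arbitrary) verifies that this profile satisfies the stationary equation; the maximal current $J_{l}$ of Eq. (65) is not re-derived here.

```python
import math

# Assumed illustrative values: s-fold in-plane anisotropy, wall scale l.
s, l = 4, 1.0
k = math.sqrt(s) / l   # inverse width of the sine-Gordon soliton

def phi(x):
    """Standard single domain-wall solution of phi'' = sin(s*phi)/l^2."""
    return (4 / s) * math.atan(math.exp(k * x))

h = 1e-4  # finite-difference step
for x in (-1.0, 0.0, 0.7, 2.0):
    lhs = (phi(x + h) - 2 * phi(x) + phi(x - h)) / h**2   # phi''(x)
    rhs = math.sin(s * phi(x)) / l**2
    assert abs(lhs - rhs) < 1e-5
print("soliton profile satisfies the stationary sine-Gordon equation")
```

Far from the wall the phase saturates at $\varphi\to 2\pi/s$, i.e. at the next of the $s$ equivalent easiest directions in the easy plane.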
Figure 8: The nonuniform spin-current states with
$\langle\nabla\varphi\rangle\ll 1/l$ and $\langle\nabla\varphi\rangle\gg 1/l$
[From Sonin [30]].
The spin transport in the case $\langle\nabla\varphi\rangle\ll 1/l$ hardly
resembles superfluid transport on macroscopic scales: spin is transported
over distances on the order of the domain-wall thickness $l$. With increasing
$\langle\nabla\varphi\rangle$ the density of domain walls grows, and at
$\langle\nabla\varphi\rangle\gg 1/l$ the domains coalesce. Deviations of the
gradient $\nabla\varphi$ from the constant average gradient
$\langle\nabla\varphi\rangle$ become negligible. This restores the analogy
with the superfluid transport in superfluids [28, 27, 29], and spin non-
conservation can be ignored. The transformation of the domain wall chain into
a weakly inhomogeneous current state at growing $\langle\nabla\varphi\rangle$
is illustrated in Fig. 8. According to the Landau criterion, spin current
states become unstable at $\nabla\varphi\sim 1/\xi_{0}$, where the correlation
length $\xi_{0}$ is given by Eq. (36). Thus, one can expect metastable nearly
uniform current states at $1/l\ll\langle\nabla\varphi\rangle\ll 1/\xi_{0}$.
This is possible if the easy plane anisotropy energy $G$ essentially exceeds
the in-plane anisotropy energy $G_{in}$.
In the limit $\nabla\varphi\ll 1/l$ of strongly nonuniform current states the
decay of the current is also possible only via phase slips, but the structure
of magnetic vortices is essentially different from that in the opposite limit
$\nabla\varphi\gg 1/l$. The magnetic vortex is a line defect at which $s$
domain walls end. The phase slip with the magnetic vortex crossing the
streamlines in the channel leads to annihilation of $s$ domain walls. This
process is illustrated in Fig. 2(b) for the four-fold in-plane symmetry
($s=4$).
An important difference with conventional mass superfluidity is that while the
existence of conventional superfluidity is restricted only from above by the
Landau critical gradients, the spin superfluidity is restricted also from
below: gradients should not be less than the value $1/l$. Thus, the gradient
$K=\langle\nabla\varphi\rangle$ in the expression Eq. (39) for the barrier height
cannot be less than $1/l$, and the height of the barrier cannot exceed
$\varepsilon_{m}=\pi AM_{\perp}^{2}\ln{l\over r_{c}}.$ (66)
In contrast to the maximal phase slip barrier Eq. (16) in mass superfluidity,
in spin superfluidity the maximal phase slip barrier does not become infinite
in the macroscopic limit $W\to\infty$ [28, 27, 29, 13].
## 6 Long-distance superfluid spin transport
Figure 9: Long distance spin transport [From Sonin [30]]. (a) Injection of
the spin current $J_{0}$ into a spin-non-superfluid medium. (b) Injection of
the strong spin current $J_{0}\gg J_{l}$ into a spin-superfluid medium. (c)
Injection of the weak spin current $J_{0}<J_{l}$ into a spin-superfluid
medium.
The absence of the strict spin conservation law also leads to a dissipative
process that is impossible in mass superfluidity but is very important for
long-distance superfluid spin transport [27, 30, 41, 40]: longitudinal spin
relaxation characterized in the magnetism theory by the Bloch time $T_{1}$.
Taking the Bloch relaxation into account, the equation for the non-equilibrium
magnetization $M^{\prime}_{z}=M_{z}-\chi H$ becomes
${1\over\gamma}{dM^{\prime}_{z}\over
dt}=-\bm{\nabla}\cdot\bm{J}-{M^{\prime}_{z}\over\gamma T_{1}}.$ (67)
Here the total current $\bm{J}=\bm{J}_{s}+\bm{J}_{d}$ includes not only the
spin supercurrent $\bm{J}_{s}$ given by Eq. (26), but also the spin diffusion
current
$\bm{J}_{d}=-{D\over\gamma}\bm{\nabla}M_{z}.$ (68)
In the absence of spin superfluidity ($\bm{J}_{s}=0$) Eq. (67) describes pure
spin diffusion [Fig. 9(a)]. Its solution, with the boundary condition that the
spin current $J_{0}$ is injected at the interface $x=0$, is
$J_{d}=J_{0}e^{-x/L_{d}},~{}~{}M^{\prime}_{z}=\gamma J_{0}\sqrt{T_{1}\over
D}e^{-x/L_{d}},$ (69)
where
$L_{d}=\sqrt{DT_{1}}$ (70)
is the spin-diffusion length. Thus, the effect of spin injection exponentially
decays at the scale of the spin-diffusion length, and the density of spin
accumulated at the other border of the medium decreases exponentially with
growing distance $d$.
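As a consistency sketch (ours, with arbitrary illustrative values of $D$, $T_{1}$, $J_{0}$ and $\gamma$), one can verify numerically that the exponential profile of Eq. (69) satisfies both the spin-diffusion relation Eq. (68) and the stationary limit of Eq. (67) with $\bm{J}_{s}=0$:

```python
import math

# Assumed illustrative values (arbitrary units).
D, T1, J0, gamma = 3.0, 2.0, 1.5, 2.0
L_d = math.sqrt(D * T1)   # spin-diffusion length, Eq. (70)

def Mz(x):   # non-equilibrium magnetization profile, Eq. (69)
    return gamma * J0 * math.sqrt(T1 / D) * math.exp(-x / L_d)

def Jd(x):   # spin diffusion current profile, Eq. (69)
    return J0 * math.exp(-x / L_d)

h = 1e-6  # central-difference step
for x in (0.0, 1.0, 5.0):
    # Eq. (68): J_d = -(D/gamma) * dM_z/dx
    dMz = (Mz(x + h) - Mz(x - h)) / (2 * h)
    assert abs(Jd(x) + (D / gamma) * dMz) < 1e-6
    # Stationary Eq. (67) with J_s = 0: dJ_d/dx + M_z'/(gamma*T1) = 0
    dJd = (Jd(x + h) - Jd(x - h)) / (2 * h)
    assert abs(dJd + Mz(x) / (gamma * T1)) < 1e-6
print("diffusion profile satisfies Eqs. (67)-(68)")
```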
However, if spin superfluidity is possible, the spin precession equation,
Eq. (24), becomes relevant. According to this equation, in a stationary state the
magnetization $M^{\prime}_{z}$ cannot vary in space [Fig. 9(b)] since the
gradient $\bm{\nabla}M^{\prime}_{z}$ leads to a linear-in-time growth of the
gradient $\bm{\nabla}\varphi$. The right-hand side of Eq. (24) is an analog of
the chemical potential, and the requirement of a spatially constant
magnetization $M_{z}$ is similar to the requirement of a spatially constant
chemical potential in superfluids, or of the electrochemical potential in
superconductors. As a
consequence of this requirement, spin diffusion current is impossible in the
bulk since it is simply “short-circuited” by the superfluid spin current. The
bulk spin diffusion current can appear only in AC processes.
If the spin superfluidity is possible, the spin current can reach the spin
detector at the plane $x=d$ opposite to the border where spin is injected. As
a boundary condition at $x=d$, one can use a phenomenological relation
$J_{s}(d)=M^{\prime}_{z}(d)v_{d}$ connecting the spin current with the non-
equilibrium magnetization at the border with a non-magnetic medium. Here
$v_{d}$ is a phenomenological constant. This boundary condition [27] was
confirmed by the microscopic theory of Takei and Tserkovnyak [41]. Together
with the boundary condition $J_{s}(0)=J_{0}$ at $x=0$ this yields the solution
of Eqs. (24) and (67):
$M^{\prime}_{z}={T_{1}\over d+v_{d}T_{1}}\gamma
J_{0},~{}~{}J_{s}(x)=J_{0}\left(1-{x\over d+v_{d}T_{1}}\right).$ (71)
Thus, the spin accumulated at large distance $d$ from the spin injector slowly
decreases with $d$ as $1/(d+C)$ [Fig. 9(b)], in contrast to the exponential
decay $\propto e^{-d/L_{d}}$ in the spin diffusion transport [Fig. 9(a)]. The
constant $C$ is determined by the boundary condition at $x=d$.
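The solution Eq. (71) can likewise be checked directly (the sketch below is ours, with arbitrary illustrative values of $T_{1}$, $J_{0}$, $\gamma$, $v_{d}$ and $d$): the spatially constant $M^{\prime}_{z}$ and the linear supercurrent profile satisfy the stationary limit of Eq. (67) and the injection boundary condition, and the accumulated spin falls off only algebraically as $1/(d+C)$:

```python
# Assumed illustrative values (arbitrary units).
T1, J0, gamma, v_d, d = 2.0, 1.5, 2.0, 0.5, 10.0

C = v_d * T1                          # constant set by the detector boundary
Mz_prime = T1 * gamma * J0 / (d + C)  # spatially constant, Eq. (71)

def Js(x):
    """Linear supercurrent profile of Eq. (71)."""
    return J0 * (1 - x / (d + C))

# Stationary Eq. (67): dJ_s/dx + M_z'/(gamma*T1) = 0, with dJ_s/dx = -J0/(d+C)
dJs_dx = -J0 / (d + C)
assert abs(dJs_dx + Mz_prime / (gamma * T1)) < 1e-12
assert Js(0.0) == J0                  # injection boundary condition at x = 0
# Algebraic, not exponential, decay of the accumulated spin with distance d:
assert Mz_prime * (d + C) == T1 * gamma * J0
print("Eq. (71) satisfies the stationary transport equation")
```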
The long-distance superfluid spin transport is possible only if the injected
spin current is not too small. If the injection spin current $J_{0}$ is less
than the current $J_{l}$ determined by Eq. (65) the superfluid spin current
penetrates into the medium only over distances no longer than the width $l$ of
a domain wall in a non-uniform spin current state at a very small average
gradient $\langle\nabla\varphi\rangle\ll 1/l$ [Fig. 9(c)]. This threshold for
the long-distance spin superfluid transport is connected with the absence of
the strict conservation law for spin (Sec. 5).
## 7 Experiments on detection of spin superfluidity
Experimental detection of spin superfluidity does not reduce to experimental
evidence of the existence of spin supercurrents proportional to gradients of
spin phase. As pointed out in the Introduction (Sec. 1), such supercurrents
produced by spin phase difference smaller than $2\pi$ emerge in any non-
uniform spin structure. Numerous observations of spin waves and domain
structures during the more than half-a-century history of modern magnetism
cannot be explained without these microscopic spin currents. Only the detection
of macroscopic spin supercurrents produced by a phase difference many times
larger than $2\pi$ is evidence of spin superfluidity.
The experimental evidence of macroscopic spin supercurrents was obtained in
the past in the B phase of superfluid 3He [2]. A spin current was generated in
a long channel connecting two cells filled by 3He-B. The quasi-equilibrium
state of the coherent spin precession was supported by spin pumping. The
magnetic fields applied to the two cells were slightly different, and
therefore, the spins in the two cells precessed with different frequencies. A
small difference in the frequencies leads to a linear growth of the difference
of the precession phases in the cells, and hence to a phase gradient in the channel. When
the gradient reached the critical value, $2\pi$ phase slips were detected.
This was evidence of non-trivial spin supercurrents.
This experiment was done in the dynamical state of coherent spin precession
(non-equilibrium magnon BEC). Such states require pumping of spin in the whole
bulk for their existence. In the geometry of the experiment on long-distance
spin transport (Sec. 6) this would mean that spin is permanently pumped not
only by a distant injector but also all the way up to the place where its
accumulation is probed. Thus, the spin detector measures not only spin coming
from the distant injector but also spin pumped close to the detector.
Therefore, the experiment cannot demonstrate the existence of long-distance
superfluid spin transport, but can provide, nevertheless, indirect evidence
that long-distance superfluid spin transport is possible in principle.
The experiment on detection of long-distance superfluid spin transport (Sec.
6) was recently done by Yuan _et al._ [45] in antiferromagnetic Cr2O3. The
spin was injected from a Pt injector by heating [the Seebeck effect [24]] on
one side of the Cr2O3 film and spin accumulation was probed on another side of
the film by the Pt detector via the inverse spin Hall effect (Fig. 10). In
agreement with the theoretical prediction, they observed spin accumulation
inversely proportional to the distance from the interface where spin was
injected.
Figure 10: Long distance spin transport in the experiment by Yuan _et al._
[45]. Spin is injected from the left Pt wire and flows along the Cr2O3 film to
the right Pt wire, which serves as a detector. The arrowed dashed line shows a
spin-current streamline [From Sonin [36]].
In the experiment of Yuan _et al._ [45] spin injection required heating of
the Pt injector, and the spin current to the detector was inevitably
accompanied by a heat flow. Lebrun _et al._ [18] argued that Yuan _et al._
[45] detected a signal not from spin coming from the distant injector but from
spin generated by the Seebeck effect at the interface between the heated
antiferromagnet and the Pt detector. If true, Yuan _et al._ [45] observed not
long-distance spin transport but long-distance heat transport. A resolution of
this controversy requires further experimental and theoretical investigations.
In particular, one could check the long-distance heat transport scenario by
replacing the Pt spin injector in the experiment of Yuan _et al._ [45] by a
heater, which produces heat but no spin.
Observation of the long-distance superfluid spin transport was also reported
by Stepanov _et al._ [37] in a graphene quantum Hall antiferromagnet.
However, the discussion of this report requires an extensive theoretical
analysis of the $\nu=0$ quantum Hall state of graphene, which goes beyond the
scope of the present article. A reader can find this analysis in Takei _et
al._ [42].
## 8 Discussion and conclusions
The article addressed the basics of the spin superfluidity concept: topology,
Landau criterion, and phase slips. Metastable (persistent) superfluid current
states are possible if the order parameter space (vacuum manifold) has the
topology of a circumference like in conventional superfluids. In ferromagnets
it is the circumference on the spherical surface of the spontaneous
magnetizations $\bm{M}$, and in antiferromagnets it is the spherical surface
of the unit Néel vector $\bm{L}/L$, where $\bm{L}$ is the staggered
magnetization. The topology necessary for spin superfluidity requires the
magnetic easy-plane anisotropy in ferromagnets, while in antiferromagnets this
anisotropy is provided by the Zeeman energy, which confines the Néel vector in
the plane normal to the magnetic field.
The Landau criterion was checked for the spectrum of elementary excitations,
which are spin waves in our case. In ferromagnets there is only one gapless
Goldstone spin wave mode. In bipartite antiferromagnets there are two modes:
the Goldstone mode in which spins perform rotational oscillations around the
symmetry axis and the gapped mode with rotational oscillations around the axis
normal to the symmetry axis. At weak magnetic fields the Landau instability
starts not in the Goldstone mode, but in the gapped mode. In contrast to
superfluid mass currents in conventional superfluids, metastable spin
superfluid currents are restricted not only by the Landau criterion from above
but also from below. The restriction from below is related to the absence of
the strict conservation law for spin.
The Landau instability with respect to elementary excitations is a precursor
for the instability with respect to phase slips. The latter instability starts
when the spin phase gradient reaches the value of the inverse vortex core
radius. This value is on the same order of magnitude as the Landau critical
gradient. Magnetic vortices participating in phase slips have skyrmion cores,
which map on the upper or lower part of the spherical surface in the space of
spontaneous magnetizations in ferromagnets, or in the space of the unit Néel
vectors in antiferromagnets.
It is worthwhile to note that in reality it is not easy to reach the critical
gradients discussed in the present article experimentally. The decay of
superfluid spin currents is possible also at subcritical spin phase gradients
since the barriers for phase slips can be overcome by thermal activation or
macroscopic quantum tunneling. This makes the very definition of the real
critical gradient rather ambiguous and dependent on the duration of observation of
persistent currents. Calculation of real critical gradients requires a
detailed dynamical analysis of processes of thermal activation or macroscopic
quantum tunneling through phase slip barriers, which is beyond the scope of
the present article. One can find examples of such analysis for conventional
superfluids with mass supercurrents in Sonin [31].
Experimental evidence of the existence of metastable superfluid spin currents
in the $B$ phase of superfluid 3He was reported long ago [2]. The experiment
was done in the non-equilibrium state of coherent spin precession, which
requires permanent spin pumping in the whole bulk for its existence. This does not
allow one to check true long-distance superfluid spin transport without any
additional spin injection on the way from an injector to a detector of spin.
The experiment demonstrating the long-distance transport of spin in the solid
antiferromagnet was reported recently [45]. But the interpretation of this
experiment in terms of spin superfluidity was challenged [18], and an
experimental verification of long-distance superfluid spin transport in
magnetically ordered solids that convinces many (if not all) in the community
is still needed.
The mechanical analogy of mass superfluidity discussed at the end of Sec. 2 is
also valid for spin superfluidity. The “superfluid” flux of the angular
momentum in a twisted elastic rod is similar to the superfluid spin current in
magnetically ordered solids. Of course, it is not obligatory to discuss the
twisted rod in terms of angular-momentum flux. It is more usual to discuss it
in terms of the elasticity theory: deformations, stresses, and elastic
stiffness. On the same grounds, one can avoid using the terms “spin current”
and “spin superfluidity” and consider the spin current states as metastable
helicoidal spin structures determined by “phase stiffness”. This stance was
quite popular in early disputes about spin superfluidity. Nowadays, in the era
of spintronics, the terms “spin supercurrents” and “spin superfluidity” are
widely accepted.
In this article the essentials of spin superfluidity were discussed for the
simpler cases of a ferromagnet and of a bipartite antiferromagnet at zero
temperature. Spin superfluidity was also investigated in antiferromagnets with
a more complicated magnetic structure [19]. At finite temperatures the presence
of the gas of incoherent magnons was taken into account in a two-fluid
theory [8] similar to the two-fluid theory of mass superfluidity.
The present article focused on spin superfluidity in magnetically ordered
solids. In superfluid 3He spin superfluidity coexists with mass superfluidity.
Recently investigations of spin superfluidity were extended to spin-1 BEC,
where spin and mass superfluidity also coexist and interplay [15, 1, 33, 34].
This interplay leads to a number of new nontrivial features of the phenomenon
of superfluidity. Both types of superfluidity are restricted by the Landau
criterion for the softer collective modes, which are usually the spin-wave
modes. As a result, the presence of spin superfluidity diminishes the
possibility of the conventional mass superfluidity. Another consequence of the
coexistence of spin and mass superfluidity is phase slips with bicirculation
vortices characterized by two topological charges (winding numbers) [34].
## References
* Armaitis and Duine [2017] Armaitis, J, and R. A. Duine (2017), “Superfluidity and spin superfluidity in spinor Bose gases,” Phys. Rev. A 95, 053607.
* Borovik-Romanov _et al._ [1987] Borovik-Romanov, A S, Yu. M. Bunkov, V. V. Dmitriev, and Yu. M. Mukharskii (1987), “Observation of phase slippage during the flow of a superfluid spin current in 3He-B,” Pis’ma Zh. Eksp. Teor. Fiz. 45, 98–101, [JETP Lett. 45, 124–128 (1987)].
* Bunkov [1995] Bunkov, Yu M (1995), “Spin supercurrent and novel properties of nmr in 3He,” in _Progress of Low Temperature Physics_ , Vol. 14, edited by W. P. Halperin (Elsevier) p. 68.
* Bunkov and Volovik [2013] Bunkov, Yu M, and G. E. Volovik (2013), “Spin superfluidity and magnon Bose-Einstein condensation,” in _Novel Superfluids_ , International Series of Monographs on Physics, Vol. 1, edited by K. H. Bennemann and J. B. Ketterson, Chap. IV (Oxford University Press) pp. 253–311.
* Chen and MacDonald [2017] Chen, Hua, and Allan H. MacDonald (2017), “Spin–superfluidity and spin–current mediated nonlocal transport,” in _Universal themes of Bose–Einstein condensation_ , edited by N.P. Proukakis, D.W. Snoke, and P.B. Littlewood, Chap. 27 (Cambridge University Press) pp. 525–548, arXiv:1604.02429.
* Demokritov _et al._ [2006] Demokritov, S O, V. E. Demidov, O. Dzyapko, G. A. Melkov, A. A. Serga, B. Hillebrands, and A. N. Slavin (2006), “Bose–Einstein condensation of quasi-equilibrium magnons at room temperature under pumping,” Nature 443, 430–433.
* Evers and Nowak [2020] Evers, Martin, and Ulrich Nowak (2020), “Transport properties of spin superfluids: Comparing easy-plane ferromagnets and antiferromagnets,” Phys. Rev. B 101, 184415.
* Flebus _et al._ [2016] Flebus, B, S. A. Bender, Y. Tserkovnyak, and R. A. Duine (2016), “Two-fluid theory for spin superfluidity in magnetic insulators,” Phys. Rev. Lett. 116, 117201.
* Guseinov and Keldysh [1972] Guseinov, R R, and L. V. Keldysh (1972), “Nature of the phase transition under the conditions of an “excitonic” instability in the electronic spectrum of a crystal,” Zh. Eksp. Teor. Fiz. 63, 2255, [Sov. Phys.–JETP 36, 1193 (1972)].
* Halperin and Hohenberg [1969] Halperin, B I, and P. C. Hohenberg (1969), “Hydrodynamic theory of spin waves,” Phys. Rev. 188, 898–918.
* Iacocca _et al._ [2017] Iacocca, Ezio, T. J. Silva, and Mark A. Hoefer (2017), “Breaking of Galilean invariance in the hydrodynamic formulation of ferromagnetic thin films,” Phys. Rev. Lett. 118, 017203.
* Keffer and Kittel [1951] Keffer, F, and C. Kittel (1951), “Theory of antiferromagnetic resonance,” Phys. Rev. 85, 329–337.
* König _et al._ [2001] König, J, M. C. Bønsager, and A. H. MacDonald (2001), “Dissipationless spin transport in thin film ferromagnets,” Phys. Rev. Lett. 87, 187202.
* Kulik and Shevchenko [1976] Kulik, I O, and S. I. Shevchenko (1976), “Exciton pairing and superconductivity in layered systems,” Fiz. Nizk. Temp. 2, 1405–1426, [Sov. J. Low Temp. Phys. 2, 687 (1976)].
* Lamacraft [2017] Lamacraft, Austen (2017), “Persistent currents in ferromagnetic condensates,” Phys. Rev. B 95, 224512.
* Landau [1941] Landau, L D (1941), “Theory of superfluidity of helium II,” J. Phys. (USSR) 5, 71.
* Landau and Lifshitz [1980] Landau, L D, and E. M. Lifshitz (1980), _Statistical physics. Part II_ (Pergamon Press).
* Lebrun _et al._ [2018] Lebrun, R, A. Ross, S. A. Bender, A. Qaiumzadeh, L. Baldrati, J. Cramer, A. Brataas, R. A. Duine, and M. Kläui (2018), “Tunable long-distance spin transport in a crystalline antiferromagnetic iron oxide,” Nature 561, 222–225.
* Li and Kovalev [2021] Li, Bo, and Alexey A. Kovalev (2021), “Spin superfluidity in noncollinear antiferromagnets,” Phys. Rev. B 103, L060406.
* Lozovik and Yudson [1977] Lozovik, Yu E, and V. I. Yudson (1977), “Interband transitions and the possibility of current states in systems with electron-hole pairing,” Pis’ma Zh. Eksp. Teor. Fiz. 25, 18–21, [JETP Lett. 25, 14–17 (1977)].
* Nikiforov and Sonin [1983] Nikiforov, A V, and E. B. Sonin (1983), “Dynamics of magnetic vortices in a planar ferromagnet,” Zh. Eksp. Teor. Fiz. 85, 642–651, [Sov. Phys.–JETP 58, 373 (1983)].
* Pitaevskii and Stringari [2003] Pitaevskii, L, and S. Stringari (2003), _Bose–Einstein condensation_ (Oxford University Press).
* Qaiumzadeh _et al._ [2017] Qaiumzadeh, Alireza, Hans Skarsvåg, Cecilia Holmqvist, and Arne Brataas (2017), “Spin superfluidity in biaxial antiferromagnetic insulators,” Phys. Rev. Lett. 118, 137201.
* Seki _et al._ [2015] Seki, S, T. Ideue, M. Kubota, Y. Kozuka, R. Takagi, M. Nakamura, Y. Kaneko, M. Kawasaki, and Y. Tokura (2015), “Thermal generation of spin current in an antiferromagnet,” Phys. Rev. Lett. 115, 266601.
* Shi _et al._ [2006] Shi, Junren, Ping Zhang, Di Xiao, and Qian Niu (2006), “Proper definition of spin current in spin-orbit coupled systems,” Phys. Rev. Lett. 96, 076604.
* Sonin [1977] Sonin, E B (1977), “On superfluidity of Bose condensate of electron-hole pairs,” Pis’ma Zh. Eksp. Teor. Fiz. 25, 95–98, [JETP Lett. 25, 84–87 (1977)].
* Sonin [1978a] Sonin, E B (1978a), “Analogs of superfluid currents for spins and electron-hole pairs,” Zh. Eksp. Teor. Fiz. 74, 2097–2111, [Sov. Phys.–JETP, 47, 1091–1099 (1978)].
* Sonin [1978b] Sonin, E B (1978b), “Phase fixation, excitonic and spin superfluidity of electron-hole pairs and antiferromagnetic chromium,” Solid State Commun. 25, 253–255.
* Sonin [1982] Sonin, E B (1982), “Superflows and superfluidity,” Usp. Fiz. Nauk 137, 267, [Sov. Phys.–Usp., 25, 409 (1982)].
* Sonin [2010] Sonin, E B (2010), “Spin currents and spin superfluidity,” Adv. Phys. 59, 181–255.
* Sonin [2016] Sonin, E B (2016), _Dynamics of quantised vortices in superfluids_ (Cambridge University Press).
* Sonin [2017] Sonin, E B (2017), “Spin superfluidity and spin waves in YIG films,” Phys. Rev. B 95, 144432.
* Sonin [2018] Sonin, E B (2018), “Spin and mass superfluidity in ferromagnetic spin-1 BEC,” Phys. Rev. B 97, 224517, Although the subject of the paper is the BEC of cold atoms with spin 1, Sec. VII of the paper investigates vortices in a solid ferromagnet in order to compare them with vortices in the spin-1 BEC.
* Sonin [2019a] Sonin, E B (2019a), “Interplay of spin and mass superfluidity in antiferromagnetic spin-1 Bose–Einstein condensates and bicirculation vortices,” Phys. Rev. Research 1, 033103.
* Sonin [2019b] Sonin, E B (2019b), “Superfluid spin transport in ferro- and antiferromagnets,” Phys. Rev. B 99, 104423.
* Sonin [2020] Sonin, E B (2020), “Superfluid spin transport in magnetically ordered solids (review article),” Fiz. Nizk. Temp. 46, 523–535, [Low Temp. Phys. 46, 436 (2020)].
* Stepanov _et al._ [2018] Stepanov, Petr, Shi Che, Dmitry Shcherbakov, Jiawei Yang, Ruoyu Chen, Kevin Thilahar, Greyson Voigt, Marc W. Bockrath, Dmitry Smirnov, Kenji Watanabe, Takashi Taniguchi, Roger K. Lake, Yafis Barlas, Allan H. MacDonald, and Chun Ning Lau (2018), “Long-distance spin transport through a graphene quantum Hall antiferromagnet,” Nat. Phys. 14 (9), 907–911.
* Sun _et al._ [2016] Sun, Chen, Thomas Nattermann, and Valery L. Pokrovsky (2016), “Unconventional superfluidity in yttrium iron garnet films,” Phys. Rev. Lett. 116, 257205.
* Sun _et al._ [2017] Sun, Chen, Thomas Nattermann, and Valery L Pokrovsky (2017), “Bose–Einstein condensation and superfluidity of magnons in yttrium iron garnet films,” J. Phys. D: Appl. Phys. 50 (14), 143002.
* Takei _et al._ [2014] Takei, So, Bertrand I. Halperin, Amir Yacoby, and Yaroslav Tserkovnyak (2014), “Superfluid spin transport through antiferromagnetic insulators,” Phys. Rev. B 90, 094408.
* Takei and Tserkovnyak [2014] Takei, So, and Yaroslav Tserkovnyak (2014), “Superfluid spin transport through easy-plane ferromagnetic insulators,” Phys. Rev. Lett. 112, 227201.
* Takei _et al._ [2016] Takei, So, Amir Yacoby, Bertrand I. Halperin, and Yaroslav Tserkovnyak (2016), “Spin superfluidity in the $\nu=0$ quantum Hall state of graphene,” Phys. Rev. Lett. 116, 216801.
* Tserkovnyak and Kläui [2017] Tserkovnyak, Yaroslav, and Mathias Kläui (2017), “Exploiting coherence in nonlinear spin-superfluid transport,” Phys. Rev. Lett. 119, 187705.
* Vuorio [1974] Vuorio, M (1974), “Condensate spin currents in helium-3,” J. Phys. C: Solid State Phys. 7 (1), L5–L8.
* Yuan _et al._ [2018] Yuan, Wei, Qiong Zhu, Tang Su, Yunyan Yao, Wenyu Xing, Yangyang Chen, Yang Ma, Xi Lin, Jing Shi, Ryuichi Shindou, X. C. Xie, and Wei Han (2018), “Experimental signatures of spin superfluid ground state in canted antiferromagnet Cr2O3 via nonlocal spin transport,” Sci. Adv. 4 (4), eaat1098.
# Totally real points in the Mandelbrot set
Xavier Buff, Institut de Mathématiques de Toulouse,
Université Paul Sabatier,
118, route de Narbonne,
31062 Toulouse Cedex,
France
and Sarah Koch, Department of Mathematics,
530 Church Street,
East Hall,
University of Michigan,
Ann Arbor MI 48109,
United States
###### Abstract.
Recently, Noytaptim and Petsche proved that the only totally real parameters
$c\in{\overline{\mathbb{Q}}}$ for which $f_{c}(z):=z^{2}+c$ is postcritically
finite are $0$, $-1$ and $-2$ [NP]. In this note, we show that the only
totally real parameters $c\in{\overline{\mathbb{Q}}}$ for which $f_{c}$ has a
parabolic cycle are $\frac{1}{4}$, $-\frac{3}{4}$, $-\frac{5}{4}$ and
$-\frac{7}{4}$.
The research of the second author was supported in part by the NSF
## Introduction
Consider the family of quadratic polynomials
$f_{c}:{\mathbb{C}}\to{\mathbb{C}}$ defined by
$f_{c}(z):=z^{2}+c,\quad c\in{\mathbb{C}}.$
The Mandelbrot set $M$ is the set of parameters $c\in{\mathbb{C}}$ for which
the orbit of the critical point $0$ under iteration of $f_{c}$ remains
bounded:
$M:=\bigl\{c\in{\mathbb{C}}~{}|~{}\forall n\geq 1,~{}f_{c}^{\circ
n}(0)\in\overline{D}(0,2)\bigr\}.$
###### Definition 1.
A parameter $c\in{\mathbb{C}}$ is postcritically finite if the orbit of $0$
under iteration of $f_{c}$ is finite.
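For illustration (this sketch is ours, not part of the original note), the critical orbits at the three totally real postcritically finite parameters of Proposition 1 can be computed directly:

```python
def critical_orbit(c, n=8):
    """First n points of the orbit of the critical point 0 under z -> z^2 + c."""
    z, orbit = 0.0, []
    for _ in range(n):
        orbit.append(z)
        z = z * z + c
    return orbit

print(critical_orbit(0))    # fixed point:  0, 0, 0, ...
print(critical_orbit(-1))   # 2-cycle:      0, -1, 0, -1, ...
print(critical_orbit(-2))   # preperiodic:  0, -2, 2, 2, 2, ...
```

In each case the orbit of $0$ is finite, so these parameters are postcritically finite in the sense of Definition 1.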
###### Definition 2.
A parameter $c\in{\mathbb{C}}$ is parabolic if $f_{c}$ has a periodic cycle
with multiplier a root of unity.
Postcritically finite parameters and parabolic parameters are algebraic
numbers contained in $M$. More precisely, $c\in{\mathbb{C}}$ is a
postcritically finite parameter if and only if $c$ is an algebraic integer
whose Galois conjugates all belong to $M$ (see [M] and [Bu]). In addition, if
$c\in{\mathbb{C}}$ is a parabolic parameter, then $4c$ is an algebraic integer
(see [Bo]); moreover, $4c$ is an algebraic unit in
$\overline{\mathbb{Z}}/2\overline{\mathbb{Z}}$ (see [M, Remark 3.2]).
###### Definition 3.
An algebraic number $c\in{\overline{\mathbb{Q}}}$ is totally real if its
Galois conjugates are all in ${\overline{\mathbb{Q}}}\cap{\mathbb{R}}$.
Recently, Noytaptim and Petsche [NP] completely determined the totally real
postcritically finite parameters.
###### Proposition 1 (Noytaptim-Petsche).
The only totally real parameters $c\in{\overline{\mathbb{Q}}}$ for which
$z\mapsto z^{2}+c$ is postcritically finite are $-2$, $-1$ and $0$.
Their proof relies on the fact that the Galois conjugates of a postcritically
finite parameter are also postcritically finite parameters, thus contained in
$M$, and on the fact that $M\cap{\mathbb{R}}=[-2,\frac{1}{4}]$ has small
arithmetic capacity. In this note, we revisit their proof. We then determine
the totally real parabolic parameters.
###### Proposition 2.
The only totally real parameters $c\in{\overline{\mathbb{Q}}}$ for which
$z\mapsto z^{2}+c$ has a parabolic cycle are $\frac{1}{4}$, $-\frac{3}{4}$,
$-\frac{5}{4}$ and $-\frac{7}{4}$.
Acknowledgments. We thank Valentin Huguin for useful discussions, and Curtis
McMullen for introducing us to these questions.
## 1\. Postcritically finite parameters
We first revisit the proof of Noytaptim and Petsche [NP].
###### Proof of Proposition 1.
Assume that $c$ is a totally real postcritically finite parameter. Then, $c$
and all of its Galois conjugates are real postcritically finite parameters,
thus lie in the interval $[-2,0]$. Indeed,
* •
for $c\in(0,\frac{1}{4})$, the orbit of $0$ under iteration of $f_{c}$ is
infinite, converging to an attracting fixed point of $f_{c}$;
* •
for $c=\frac{1}{4}$, the orbit of $0$ under iteration of $f_{c}$ is infinite,
converging to a parabolic fixed point of $f_{c}$ at $z=\frac{1}{2}$;
* •
for $c\in(-\infty,-2)\cup(\frac{1}{4},+\infty)$, the orbit of $0$ under
iteration of $f_{c}$ is infinite, converging to $\infty$.
Let $a$ be a solution of $a+1/a=c$; that is, $a^{2}-ca+1=0$. Since $c$ and
its Galois conjugates lie in $[-2,0]\subset[-2,2]$, the two roots of each
such quadratic are complex conjugates on the unit circle with real part
$c/2\leq 0$. Thus $a$ is an algebraic integer of modulus $1$ with nonpositive
real part, and all of its Galois conjugates also have modulus $1$ and
nonpositive real part.
By Kronecker’s theorem, $a$ is a root of unity. And since the Galois
conjugates of $a$ all have nonpositive real part, the only possibilities are
the following:
* •
$a=-1$, which is mapped to $c=-2$;
* •
$a={\rm e}^{\pm{\rm i}2\pi/3}$, which is mapped to $c=-1$;
* •
$a=\pm{\rm i}$, which is mapped to $c=0$.
Therefore, the only postcritically finite parameters that are totally real are
$-2$, $-1$, and $0$. ∎
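The case analysis can be checked numerically; a small companion sketch (ours, not from the paper) confirms that the map $a\mapsto a+1/a$ sends the admissible roots of unity to the three parameters:

```python
import cmath

# Numerical companion (ours) to the case analysis in the proof: the map
# a -> a + 1/a sends each admissible root of unity (modulus 1, nonpositive
# real part) to one of the totally real postcritically finite parameters.
def c_from_a(a):
    return a + 1 / a

cases = [(-1 + 0j, -2.0), (cmath.exp(2j * cmath.pi / 3), -1.0), (1j, 0.0)]
for a, expected_c in cases:
    assert abs(abs(a) - 1.0) < 1e-12 and a.real <= 1e-12  # |a| = 1, Re(a) <= 0
    assert abs(c_from_a(a) - expected_c) < 1e-12
```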
## 2\. Parabolic parameters
We now present the proof of Proposition 2. Note that $c=\frac{1}{4}$,
$c=-\frac{3}{4}$, $c=-\frac{5}{4}$ and $c=-\frac{7}{4}$ are indeed parabolic
parameters. Indeed,
* •
$f_{\frac{1}{4}}$ has a fixed point with multiplier $1$ at $z=\frac{1}{2}$;
* •
$f_{-\frac{3}{4}}$ has a fixed point with multiplier $-1$ at $z=-\frac{1}{2}$;
* •
$f_{-\frac{5}{4}}$ has a cycle of period $2$ with multiplier $-1$ consisting
of the two roots of $4z^{2}+4z-1$;
* •
$f_{-\frac{7}{4}}$ has a cycle of period $3$ with multiplier $1$ consisting of
the three roots of $8z^{3}+4z^{2}-18z-1$.
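The four multipliers above can be verified with elementary arithmetic (our sketch, not from the paper): along a period-$n$ cycle of $f_{c}(z)=z^{2}+c$ the multiplier is $\prod 2z_{i}=2^{n}\prod z_{i}$, and the products of the cycle points follow from Vieta's formulas applied to the cycle polynomials quoted above.

```python
import math

# Check (ours) of the four parabolic examples: the multiplier of a period-n
# cycle of f_c(z) = z^2 + c is 2^n times the product of the cycle points,
# which Vieta's formulas read off from the quoted cycle polynomials.
def cycle_multiplier(prod_of_cycle, n):
    return 2 ** n * prod_of_cycle

assert cycle_multiplier(0.5, 1) == 1.0     # c = 1/4, fixed point 1/2
assert cycle_multiplier(-0.5, 1) == -1.0   # c = -3/4, fixed point -1/2
assert cycle_multiplier(-0.25, 2) == -1.0  # c = -5/4, roots of 4z^2+4z-1
assert cycle_multiplier(0.125, 3) == 1.0   # c = -7/4, roots of 8z^3+4z^2-18z-1

# Sanity check that the period-2 points really form a cycle of f_{-5/4}:
z1 = (-1 + math.sqrt(2)) / 2               # a root of 4z^2 + 4z - 1
z2 = z1 * z1 - 1.25                        # f_{-5/4}(z1)
assert abs(4 * z2 * z2 + 4 * z2 - 1) < 1e-12   # f(z1) is again a root
assert abs((z2 * z2 - 1.25) - z1) < 1e-12      # and f(z2) returns to z1
```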
###### Proof of Proposition 2.
Assume that $c$ is a totally real parabolic parameter. Then, the Galois
conjugates of $c$ also are parabolic parameters. Either $c=\frac{1}{4}$,
$c=-\frac{3}{4}$, $c=-\frac{5}{4}$, or $c$ and all of its Galois conjugates
lie in the interval $[-2,-\frac{5}{4})$. Indeed, a parabolic cycle must
attract the orbit of $0$ under iteration of $f_{c}$. However,
* •
for $c\in(-\frac{3}{4},\frac{1}{4})$, the orbit of $0$ under iteration of
$f_{c}$ converges to an attracting fixed point of $f_{c}$;
* •
for $c\in(-\frac{5}{4},-\frac{3}{4})$, the orbit of $0$ under iteration of
$f_{c}$ converges to an attracting cycle of period $2$ of $f_{c}$;
* •
for $c\in(-\infty,-2)\cup(\frac{1}{4},+\infty)$, the orbit of $0$ under
iteration of $f_{c}$ converges to $\infty$.
Let us assume that $c\in[-2,-\frac{5}{4})$. Then, $b:=4c+6$ and all of its
Galois conjugates lie in the interval $[-2,1)$. Let $a$ be a solution of
$a+1/a=b$; that is, $a^{2}-ba+1=0$. Then $a$ is an algebraic integer of
modulus $1$ with real part less than $\frac{1}{2}$, and all of its Galois
conjugates also have modulus $1$ and real part less than $\frac{1}{2}$.
By Kronecker’s theorem, $a$ is a root of unity. And since the Galois
conjugates of $a$ all have real part less than $\frac{1}{2}$, the only
possibilities are the following:
* •
$a=-1$, $b=-2$ and $c=-2$; this is not a parabolic parameter;
* •
$a={\rm e}^{\pm{\rm i}2\pi/3}$, $b=-1$ and $c=-\frac{7}{4}$; this is indeed a
parabolic parameter;
* •
$a=\pm{\rm i}$, $b=0$ and $c=-\frac{3}{2}$; in that case $4c=-6$ is not an
algebraic unit in $\overline{\mathbb{Z}}/2\overline{\mathbb{Z}}$ and so, $c$
is not a parabolic parameter;
* •
$a={\rm e}^{\pm{\rm i}2\pi/5}$, $b=2\cos(\frac{2\pi}{5})$ and
$c=\frac{\sqrt{5}-13}{8}$; in this case, $f_{c}$ has an attracting cycle of
period $4$ and so, $c$ is not a parabolic parameter;
* •
$a={\rm e}^{\pm{\rm i}4\pi/5}$, $b=2\cos(\frac{4\pi}{5})$ and
$c=\frac{-\sqrt{5}-13}{8}$; then the Galois conjugate $\frac{\sqrt{5}-13}{8}$
is not a parabolic parameter and so, $c$ is not a parabolic parameter.
This completes the proof of the proposition. ∎
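The penultimate case can also be confirmed numerically; the following sketch (ours, not from the paper) iterates the critical orbit for $c=\frac{\sqrt{5}-13}{8}\approx-1.3455$ and checks that it lands on an attracting cycle of exact period $4$:

```python
import math

# Numerical check (ours) that c = (sqrt(5) - 13)/8 ~ -1.3455 carries an
# attracting cycle of exact period 4, hence is not a parabolic parameter.
c = (math.sqrt(5) - 13) / 8
z = 0.0                          # the critical orbit finds the attracting cycle
for _ in range(10000):           # settle onto the cycle
    z = z * z + c
cycle = []
for _ in range(4):
    cycle.append(z)
    z = z * z + c
assert abs(z - cycle[0]) < 1e-9                          # period divides 4
assert all(abs(cycle[0] - w) > 1e-3 for w in cycle[1:])  # exact period 4
mult = 1.0
for w in cycle:
    mult *= 2.0 * w              # multiplier = product of f'(z_i) = 2 z_i
assert abs(mult) < 1.0           # strictly attracting, so not parabolic
```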
Remark: the following proof that $-\frac{3}{2}$ is not a parabolic parameter
was explained to us by Valentin Huguin. It follows from [Bo] that for all
$n\geq 1$,
${\rm discriminant}\bigl(f_{c}^{\circ
n}(z)-z,z\bigr)=P_{n}(4c)\quad\text{with}\quad
P_{n}(b)\in{\mathbb{Z}}[b]\quad\text{and}\quad\pm P_{n}\text{ monic}.$
As an example,
$P_{1}(b)=-b+1,\quad P_{2}(b)=(b-1)(b+3)^{3},\quad
P_{3}(b)=(b-1)(b+7)^{3}(b^{2}+b+7)^{4},$
and
$P_{4}(b)=(b-1)(b+3)^{3}(b+5)^{6}(b^{3}+9b^{2}+27b+135)^{4}(b^{2}-2b+5)^{5}.$
Note that this yields an alternate proof that $c=\frac{1}{4}$,
$c=-\frac{3}{4}$, $c=-\frac{5}{4}$ and $c=-\frac{7}{4}$ are parabolic
parameters. In addition,
$P_{n}(0)={\rm discriminant}\bigl(z^{2^{n}}-z,z\bigr)\equiv 1{\ ({\rm
mod}\ 2)}.$
As a consequence, since $P_{n}\in{\mathbb{Z}}[b]$ and $-6\equiv 0{\ ({\rm mod}\ 2)}$,
$P_{n}(-6)\equiv P_{n}(0)\equiv 1{\ ({\rm mod}\ 2)}.$
Thus, for all $n\geq 1$, the roots of $f_{-\frac{3}{2}}^{\circ n}(z)-z$ are
simple, which shows that $f_{-\frac{3}{2}}$ has no parabolic cycle.
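The parity claim is easy to check directly for the polynomials listed above (our sketch, not part of the paper): evaluating each $P_{n}$ at $b=-6$ and $b=0$ must give odd integers.

```python
# Parity check (ours) of the listed discriminant polynomials at b = -6
# (i.e. 4c = -6, c = -3/2) and at b = 0: each value is odd, consistent with
# P_n(-6) = P_n(0) = 1 (mod 2) for n = 1, ..., 4, so P_n(-6) != 0.
P = {
    1: lambda b: -b + 1,
    2: lambda b: (b - 1) * (b + 3) ** 3,
    3: lambda b: (b - 1) * (b + 7) ** 3 * (b ** 2 + b + 7) ** 4,
    4: lambda b: (b - 1) * (b + 3) ** 3 * (b + 5) ** 6
       * (b ** 3 + 9 * b ** 2 + 27 * b + 135) ** 4
       * (b ** 2 - 2 * b + 5) ** 5,
}
for n in P:
    assert P[n](-6) % 2 == 1 and P[n](0) % 2 == 1
```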
## References
* [Bo] T. Bousch, Les racines des composantes hyperboliques de $M$ sont des quarts d’entiers algébriques, Frontiers in Complex Dynamics: a volume in honor of John Milnor’s 80th birthday, A. Bonifant, M. Lyubich, S. Sutherland, editors. Princeton University Press 2014, 25–26.
* [Bu] X. Buff, On Postcritically Finite Unicritical Polynomials, New York J. Math. 24 (2018) 1111–1122.
* [M] J. Milnor, Arithmetic of unicritical polynomial maps, Frontiers in Complex Dynamics: a volume in honor of John Milnor’s 80th birthday, A. Bonifant, M. Lyubich, S. Sutherland, editors. Princeton University Press 2014, 15–24.
* [NP] C. Noytaptim & C. Petsche, Totally real algebraic integers in short intervals, Jacobi polynomials, and unicritical families in arithmetic dynamics, Preprint (2022).
# PBH-infused seesaw origin of matter and unique gravitational waves
Debasish Borah<EMAIL_ADDRESS>
Department of Physics, Indian Institute of Technology Guwahati, Assam 781039, India
Suruj Jyoti Das<EMAIL_ADDRESS>
Department of Physics, Indian Institute of Technology Guwahati, Assam 781039, India
Rome Samanta<EMAIL_ADDRESS>
CEICO, Institute of Physics of the Czech Academy of Sciences, Na Slovance 1999/2, 182 21 Prague 8, Czech Republic
Federico R. Urban<EMAIL_ADDRESS>
CEICO, Institute of Physics of the Czech Academy of Sciences, Na Slovance 1999/2, 182 21 Prague 8, Czech Republic
###### Abstract
The Standard Model, extended with three right-handed (RH) neutrinos, is the
simplest model that can explain light neutrino masses, the baryon asymmetry of
the Universe, and dark matter (DM). Models in which RH neutrinos are light are
generally easier to test in experiments. In this work, we show that, even if
the RH neutrinos are super-heavy ($M_{i=1,2,3}>10^{9}$ GeV)—close to the Grand
Unification scale—the model can be tested thanks to its distinct features on
the stochastic Gravitational Wave (GW) background. We consider an early
Universe filled with ultralight primordial black holes (PBH) that produce a
super-heavy RH neutrino DM via Hawking radiation. The other pair of RH
neutrinos generates the baryon asymmetry via thermal leptogenesis, much before
the PBHs evaporate. GW interferometers can test this novel spectrum of masses
thanks to the GWs induced by the PBH density fluctuations. In a more refined
version, wherein a $U(1)$ gauge symmetry breaking dynamically generates the
seesaw scale, the PBHs also cause observable spectral distortions on the GWs
from the $U(1)$-breaking cosmic strings. Thence, a low-frequency GW feature
related to DM genesis and detectable with a pulsar-timing array must
correspond to a mid- or high-frequency GW signature related to baryogenesis at
interferometer scales.
## I Introduction
Neutrino oscillation data [1] suggests that the active neutrino masses are
tiny, $m_{i}\sim\mathcal{O}(0.01)$ eV. Type-I seesaw [2, 3, 4, 5] is the
simplest and the most elegant mechanism, wherein, owing to the hierarchy
between the Electroweak scale and the scale of Grand Unification (GUT) [6, 7,
8], such small masses can be understood. Quantitatively, the Type-I seesaw
mechanism indicates that, if the Yukawa couplings are not strongly fine-tuned,
then $m_{i}\simeq\Lambda_{\rm EW}^{2}/\Lambda_{\rm GUT}$, where $\Lambda_{\rm
EW}\simeq 100$ GeV and $\Lambda_{\rm GUT}\simeq 10^{15}$ GeV. In a
renormalizable seesaw Lagrangian, these two scales are introduced with the
right-handed (RH) neutrino sterile fields ($N_{R}$), and via two mass terms:
$\mathcal{L}_{\rm mass}\sim\Lambda_{\rm EW}\overline{L}N_{R}+\Lambda_{\rm
GUT}\overline{N^{c}_{R}}N_{R}$, where $L$ is the Standard Model (SM) lepton
doublet. The heavy fields, $N_{R}$, besides generating light neutrino masses,
also decay CP-asymmetrically to produce the Baryon Asymmetry of the Universe
(BAU) via leptogenesis [9, 10, 11, 12]. While a two-RH-neutrino extension of
the SM suffices to address the generation of light neutrino masses and the
BAU, it is natural to complete the sterile fermion family with a third RH
neutrino, analogously to the other SM leptons. In this case, if cosmologically
stable, one of the RH neutrinos could be a Dark Matter (DM) candidate.
Therefore, a three-RH-neutrino extension of the SM model provides an elegant
and unified explanation of three “beyond the SM” problems. Many efforts have
been made towards such unifications, see, e.g., [13, 14, 15, 16, 17, 18, 19,
20, 21, 22]. Because the energy reachable with colliders is so far only up to
a few TeV, developing a testable unification model usually leads one to
consider light sterile fermions with masses much below $\Lambda_{\rm GUT}$.
In this article, we show that a testable unification model is possible even in
keeping with the original implementation of the seesaw, i.e., considering all
three RH fermions to be super-heavy, close to $\Lambda_{\rm GUT}$. The catch
is that, contrary to the previous models, here we propose to search for the
signatures of such unification in Gravitational Waves (GW) experiments – an
idea also entertained in Refs. [23, 24, 25, 26, 27, 28, 29, 30, 31, 32,
33], which aim to find GW signatures of the high-scale seesaw within various
cosmological contexts.
Suppose exotic objects such as ultralight primordial black holes (PBH) with
initial monochromatic mass $M_{\rm BH}$ exist prior to the Big Bang
Nucleosynthesis (BBN) ($T\gtrsim$ 5 MeV) [34, 35, 36]. They could be abundant
enough to dominate the Universe’s energy budget before evaporating via Hawking
radiation. A PBH-dominated Universe is associated with two significant
consequences. First, while evaporating, the PBHs produce entropy that dilutes
any pre-existing relics. Second, if the PBHs dominate, strong and sharply
peaked GWs are induced due to PBH density fluctuations [37, 38, 39, 40]. For a
fixed value of $M_{\rm BH}$, both the amount of produced entropy and the
amplitude of the GWs increase with the duration of the PBH-domination epoch.
Furthermore, since the Hawking radiation from a black hole is a gravitational
phenomenon, along with the SM particles, the PBHs must produce the DM,
independently of its non-gravitational interaction. In the case where the PBHs
dominate at relatively low temperatures close to the BBN, they produce super-
heavy DM ($M_{DM}\gtrsim 10^{10}$ GeV), constituting the observed DM abundance
[41, 42, 43, 44].
Consider now that the DM produced by the PBHs is one of the RH neutrinos
($N_{3}\equiv N_{\rm DM}$) in the seesaw. For a scale of leptogenesis, $T_{\rm
lepto}\sim M_{1}\gtrsim 10^{9}$ GeV [45], irrespective of whether the masses
of $N_{1,2}$ are hierarchical or quasi-degenerate [46], the BAU is
overproduced for a region in the Yukawa parameter space. The PBHs, which
evaporate at much lower temperatures, bring the overproduced asymmetry down to
the observed value via entropy dilution. Because a large $T_{\rm lepto}$
requires substantial entropy dilution, hence a longer duration of PBH-
domination, the amplitude of the induced GWs increases with $T_{\rm lepto}\sim
M_{1}$. In this scenario the DM mass is related to $M_{\rm BH}$ and it
determines the peak frequency of the induced GWs. As the RH neutrino masses
approach $\Lambda_{\rm GUT}$, the mechanism naturally predicts strong GWs with
amplitudes within reach of LIGO [47, 48], ET [49], and CE [50]. Therefore, a
PBH-dominated Universe provides a unique opportunity to test the origin of
ordinary matter and the DM in the seesaw, where the amplitude and the peak
frequency of the predicted GWs are determined by super-heavy RH neutrino
masses.
In the following sections, we present the general framework of the model,
discuss the model’s technicalities and results, and model extensions, before
summarising and concluding.
## II General framework and assumptions
We consider non-rotating PBHs with monochromatic initial mass $M_{\rm
BH}<10^{9}~{}{\rm g}$, produced in a radiation-dominated Universe. If the PBHs
dominate the Universe’s energy density, they produce only super-heavy DM with
masses, e.g., $M_{\rm DM}\simeq 10^{10}\,{\rm GeV}-10^{15}$ GeV at
temperatures $T_{\rm eva}^{\rm DM}\simeq T_{\rm BBN}-100~{}{\rm GeV}$ [41, 42,
43, 44]. Such super-heavy DM is produced because, as the PBHs evaporate, the
Hawking temperature $T_{\rm BH}=(8\pi GM_{\rm BH})^{-1}$ continues to increase
and finally becomes larger than the $M_{\rm DM}$. If the $\rm PBH\rightarrow
DM$ process makes up all the DM that we observe today [51], the following
relation holds:
$\displaystyle M_{\rm DM}\simeq 4.5\times 10^{3}\left(\frac{M_{\rm BH}}{M_{\rm
Pl}}\right)^{-5/2}M_{\rm Pl}^{2}~{}~{}{\rm GeV^{-1}},$ (1)
where $M_{\rm Pl}=1.22\times 10^{19}$ GeV is the Planck mass.
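Eq.(1) is straightforward to evaluate numerically; the sketch below (ours) uses an assumed Planck-mass conversion of $M_{\rm Pl}\simeq 2.18\times 10^{-5}$ g to express the PBH mass in the dimensionless ratio the formula needs.

```python
# Illustrative evaluation (ours) of Eq. (1). The Planck-mass conversion to
# grams, M_Pl ~ 2.18e-5 g, is our assumption for the unit handling.
M_PL_GEV = 1.22e19
M_PL_GRAM = 2.18e-5

def m_dm_gev(m_bh_gram):
    x = m_bh_gram / M_PL_GRAM                      # dimensionless M_BH / M_Pl
    return 4.5e3 * x ** (-5 / 2) * M_PL_GEV ** 2   # Eq. (1), result in GeV

# A 5e6 g PBH (the benchmark mass used in Fig. 2) yields super-heavy dark
# matter within the quoted 1e10 - 1e15 GeV window.
assert 1e10 < m_dm_gev(5e6) < 1e15
```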
First, we shall consider the simplest scenario where the DM ($N_{i=3,DM}$) is
strictly stable and does not interact with any other particle. We then assume
that $[T_{\rm Bf},M_{i=1,2}]<T_{\rm RH}$, where $T_{\rm RH}$ and $T_{\rm Bf}$
are the reheating and PBH formation temperatures respectively. This condition
ensures that, once the Universe reheats after inflation, the PBHs form during
radiation domination. Thermal scatterings mediated by Yukawa interactions
populate the $N_{i}$s, which seed baryogenesis via thermal leptogenesis [9].
Furthermore, even though the DM does not have any interaction, the PBHs
produce $N_{\rm DM}$ via Hawking radiation. (The PBHs also produce $N_{1,2}$;
nonetheless, the PBH evaporation temperature corresponding to the correct dark
matter relic is lower than the sphaleron freeze-out temperature, so the
lepton asymmetry produced by $N_{1,2}$ will not be processed into a baryon
asymmetry.) We shall neglect any pre-existing DM relic [52, 53, 54], but the
extension to that case is straightforward. A possible timeline of the proposed
mechanism is shown in Fig.1.
Thus, in this paper, we consider non-thermal DM production from PBH
evaporation and matter generation via thermal leptogenesis. In principle, the
mechanism could also work if leptogenesis proceeds non-thermally while thermal
scatterings populate the DM density. However, the PBH mass window for non-
thermal leptogenesis, $M_{\rm BH}\in\left[0.1~{}{\rm g},10~{}{\rm g}\right]$ [55],
would produce very high-frequency induced GWs—see Eq.(11) below—which would be
beyond the reach of current and planned GW interferometers, making this
scenario less predictive and testable.
Figure 1: A possible timeline for the proposed scenario.
## III Black hole signatures for the origin of matter in the seesaw
The energy density of the black holes ($\rho_{\rm BH}$) and radiation
($\rho_{R}$) evolve according to the following Friedmann equations [56, 55]:
$\displaystyle\frac{d\rho_{R}}{dz}+\frac{4}{z}\rho_{R}=0,$ (2)
$\displaystyle\frac{d\rho_{\rm BH}}{dz}+\frac{3}{z}\frac{H}{\tilde{H}}\rho_{\rm BH}-\frac{\dot{M}_{\rm BH}}{M_{\rm BH}}\frac{1}{z\tilde{H}}\rho_{\rm BH}=0,$ (3)
where $H$ is the Hubble parameter and $z=T_{\rm Bf}/T$. The quantity
$\tilde{H}$ and the scale factor $a$ evolve as
$\displaystyle\tilde{H}=\left(H+\mathcal{K}\right),~{}\frac{da}{dz}=\left(1-\frac{\mathcal{K}}{\tilde{H}}\right)\frac{a}{z},$
(4)
where $\mathcal{K}=\frac{\dot{M}_{\rm BH}}{M_{\rm BH}}\frac{\rho_{\rm
BH}}{4\rho_{R}}$. To derive Eq.(2)-Eq.(4), we assume the entropy ($g_{*s}$) as
well as the energy ($g_{*\rho}$) degrees of freedom are equal and constant.
For a given value of $\beta\equiv\frac{\rho_{\rm BH}(T_{\rm Bf})}{\rho_{\rm
R}(T_{\rm Bf})}$, the above equations can be solved to determine the duration
of PBH domination and resulting entropy production
$\Delta=\tilde{S}_{2}/\tilde{S}_{1}$, where $\tilde{S}_{1,2}\propto
a_{1,2}^{3}/z_{1,2}^{3}$ is the total entropy before (after) the PBH
evaporation. In Fig.2, we show the evolution of the normalised energy
densities, $\Omega_{\rm BH,rad}=\rho_{\rm BH,rad}/(\rho_{\rm BH}+\rho_{\rm
rad})$, and the consequent entropy production with $M_{\rm BH}=5\times 10^{6}$ g
for three benchmark values of $\beta$. From Fig.2, it is evident that a larger
value of $\beta$ corresponds to a longer period of PBH domination and larger
entropy production. We find an analytical expression for $\Delta$ (the
entropy production factor can be well approximated as
$\Delta\simeq\frac{3}{2}\frac{T_{\rm dom}}{T_{\rm eva}}$, where $T_{\rm
dom}\simeq\beta T_{\rm Bf}$ is the temperature at which the PBHs start to
dominate, and $3/2$ is a numerical fitting factor that takes into account the
finite duration of PBH evaporation):
$\displaystyle\Delta\simeq 233~{}\beta\left(\frac{M_{\rm BH}}{M_{\rm
Pl}}\right)\left(\frac{\gamma}{g_{*B}(T_{\rm BH})\mathcal{G}}\right)^{1/2},$
(5)
where $\gamma\simeq 0.2$ is the PBH formation efficiency, $g_{*B}\simeq 100$
is the number of relativistic degrees of freedom below $T_{\rm BH}$, and
$\mathcal{G}\simeq 3.8$ is the greybody factor. Eq.(5) matches the numerical solutions with very
good accuracy, as shown in Fig.2 with the dashed horizontal lines.
Figure 2: Normalised energy densities as a function of inverse temperature
(normalised to $T_{\rm Bf}$). Because only ratios of the total entropy
$\tilde{S}_{i}\propto a_{i}^{3}/z_{i}^{3}$ are relevant, we have set
$\tilde{S}(z=1)=1$.
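Eq.(5) can be sketched numerically with the parameter values quoted above; the $\beta$ benchmark and the Planck-mass conversion ($2.18\times 10^{-5}$ g) below are our assumptions for illustration.

```python
# Numerical sketch (ours) of the entropy-production formula, Eq. (5).
# The beta benchmark and the Planck-mass conversion are our assumptions.
M_PL_GRAM = 2.18e-5    # Planck mass in grams (assumed conversion)
GAMMA = 0.2            # PBH formation efficiency
G_STAR_B = 100.0       # relativistic degrees of freedom below T_BH
G_GREY = 3.8           # greybody factor

def entropy_dilution(beta, m_bh_gram):
    x = m_bh_gram / M_PL_GRAM          # M_BH / M_Pl
    return 233.0 * beta * x * (GAMMA / (G_STAR_B * G_GREY)) ** 0.5  # Eq. (5)

# For M_BH = 5e6 g and beta = 1e-8, pre-existing relics are diluted by a
# large factor, illustrating why a long PBH domination suppresses them.
assert entropy_dilution(1e-8, 5e6) > 1e3
```

As Eq.(5) makes explicit, $\Delta$ grows linearly with both $\beta$ and $M_{\rm BH}$.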
Like any other pre-existing relics, the produced baryon asymmetry from $N_{1}$
decays will be diluted due to the entropy production. Therefore, once the PBHs
evaporate, we have the baryon-to-photon ratio:
$\displaystyle\eta_{B}\simeq 10^{-2}N_{B-L}^{\rm
final}=10^{-2}\Delta^{-1}\varepsilon_{1}\kappa_{1},$ (6)
where $\varepsilon_{1}$ is the CP asymmetry parameter, $\kappa_{1}$ is the
efficiency of lepton asymmetry production, $N_{B-L}^{\rm final}$ is the final
$B-L$ number, and the factor $10^{-2}$ accounts for the combined effects of
sphaleron conversion of the leptons to baryons plus the photon dilution [11].
We shall neglect the flavour effects [57, 58, 59] in the leptogenesis
computation, use the maximum value of the CP asymmetry parameter, and work in
the strong-washout regime of leptogenesis [11, 12]. Therefore, we consider the
CP asymmetry parameters and the efficiency factor as
$\displaystyle\varepsilon_{1}=\frac{3M_{1}\sqrt{|\Delta m_{\rm
atm}|^{2}}}{8\pi v^{2}},~{}~{}\kappa_{1}\simeq 10^{-2},$ (7)
where $v=174$ GeV is the vacuum expectation value of the SM Higgs, and
$|\Delta m_{\rm atm}|^{2}\simeq 2.4\times 10^{-3}$ eV$^{2}$ [1] is the active
neutrino atmospheric mass-squared difference. Using Eq.(5), Eq.(7), and the
observed value of $\eta_{B}\simeq 6.3\times 10^{-10}$ in Eq.(6), we obtain a
simple analytic relation between $\beta$ and $M_{1}$ as
$\displaystyle\beta=5.7\times 10^{-12}\left(\frac{M_{\rm Pl}}{M_{\rm
BH}}\right)M_{1}~{}{\rm GeV^{-1}},$ (8)
which we shall use in the computation of GWs from PBHs.
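Eq.(8) can be cross-checked by inserting it back into Eqs.(5)-(7) (our sketch, with an assumed Planck-mass conversion): the result is the observed $\eta_{B}$, independently of $M_{\rm BH}$, because the dilution factor depends on the product $\beta M_{\rm BH}$ only.

```python
import math

# Closure check (ours) of Eqs. (5)-(8): inserting beta from Eq. (8) into the
# asymmetry chain returns eta_B ~ 6.3e-10 for any M_BH. The Planck-mass
# conversion (2.18e-5 g) is our assumption.
M_PL_GRAM = 2.18e-5
V_EW = 174.0                            # GeV, SM Higgs vev
SQRT_DM_ATM = math.sqrt(2.4e-3) * 1e-9  # sqrt(|Delta m^2_atm|) in GeV

def eta_b(m1_gev, m_bh_gram):
    x = m_bh_gram / M_PL_GRAM                                  # M_BH / M_Pl
    beta = 5.7e-12 * m1_gev / x                                # Eq. (8)
    delta = 233.0 * beta * x * math.sqrt(0.2 / (100.0 * 3.8))  # Eq. (5)
    eps1 = 3.0 * m1_gev * SQRT_DM_ATM / (8.0 * math.pi * V_EW ** 2)  # Eq. (7)
    return 1e-2 * eps1 * 1e-2 / delta    # Eq. (6) with kappa_1 = 1e-2

assert 5e-10 < eta_b(1e13, 5e6) < 8e-10  # ~6.3e-10, as observed
```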
There are several ways in which ultralight PBHs are involved in the production
of GWs. For instance, the primordial curvature perturbations that produce PBHs
also induce GWs (see, e.g., [60, 61, 62]), PBHs radiate gravitons that
constitute high-frequency GWs [63], PBHs form mergers that emit GWs [64], and
finally, the inhomogeneous distribution of PBHs, leading to density
fluctuations, induces GWs [37, 38, 39]. We shall focus on the last mechanism
in this work.
It has been recently pointed out in Ref. [37] and further developed in Refs.
[38, 39], that right after formation, PBHs are randomly distributed in space
according to Poisson statistics. Therefore, even though the PBH gas behaves as
pressure-less dust on average, the spatially inhomogeneous distribution leads
to density fluctuations, which are isocurvature in nature. Once the PBHs
dominate the Universe’s energy density, the isocurvature component gets
converted to curvature perturbations that result in secondary GWs. Because the
density fluctuations are large at small scales (comparable to the mean
separation of PBHs at $T_{\rm Bf}$), sizeable GWs are induced, which are
further enhanced due to the almost instantaneous (see Fig.2) evaporation of
PBHs [38, 40]. The present-day amplitude of such induced GWs is given
by333Notice that the amplitude of the induced GWs is highly sensitive to the
PBH mass spectrum [38, 40]. Therefore, the results presented in this paper can
vary significantly for an extended mass function instead of the monochromatic
mass spectrum that we use. Moreover, we consider only non-rotating
Schwarzschild BHs. However, the evaporation temperature of a BH with spin (a
Kerr BH) changes only slightly compared to a non-spinning BH [65, 66], so that
our results would remain essentially unchanged for spinning PBHs.
$\displaystyle\Omega_{\rm GW}(t_{0},f)\simeq\Omega_{\rm GW}^{\rm
peak}\left(\frac{f}{f^{\rm peak}}\right)^{11/3}\Theta\left(f^{\rm
peak}-f\right),$ (9)
where
$\displaystyle\Omega_{\rm GW}^{\rm peak}\simeq 2\times
10^{-6}\left(\frac{\beta}{10^{-8}}\right)^{16/3}\left(\frac{M_{\rm
BH}}{10^{7}\rm g}\right)^{34/9},$ (10)
and
$\displaystyle f^{\rm peak}\simeq 1.7\times 10^{3}{\rm Hz}\left(\frac{M_{\rm
BH}}{10^{4}\rm g}\right)^{-5/6}.$ (11)
The $\Theta$-function in Eq.(9) stipulates that the Poisson spectrum of
density fluctuations is subject to an ultraviolet cut-off $f_{\rm UV}\simeq
f^{\rm peak}$, which is comparable to the frequency corresponding to the
comoving scale representing the mean separation of PBHs at the time of
formation.
Figure 3: Contours of $\Omega_{GW}^{\rm peak}$ on the $M_{1}-M_{DM}$ plane.
The red-shaded region is excluded because otherwise the induced GWs would
saturate the BBN bound on the effective number of neutrino species. The
sensitivity limits of various experiments are shown with the coloured lines.
The white line represents the PBH formation temperature, $T_{\rm Bf}\sim
f\left(M_{\rm BH}\right)$, expressed as a function of DM mass (cf. Eq.(1)).
Intriguingly, since $\Omega_{\rm GW}^{\rm peak}$ depends on $\beta$ and
$M_{\rm BH}$, it can be expressed in this model as a function of $M_{1}$ and
$M_{\rm DM}$ (cf. Eq.(1) and Eq.(8)). We derive the master equation relating
the DM mass and the scale of leptogenesis as
$\displaystyle\Omega_{\rm GW}^{\rm peak}\simeq\Omega_{\rm GW}^{\rm
0}\left(\frac{M_{1}}{10^{14}~{}\rm GeV}\right)^{16/3}\left(\frac{M_{\rm
DM}}{10^{14}~{}\rm GeV}\right)^{28/45},$ (12)
where $\Omega_{\rm GW}^{\rm 0}\simeq 2\times 10^{-10}$. Similarly, $f_{\rm
peak}$ can be expressed in terms of DM mass as
$\displaystyle f^{\rm peak}\simeq 15~{}\left(\frac{M_{\rm DM}}{10^{14}~{}\rm
GeV}\right)^{1/3}{\rm Hz}.$ (13)
Although the amplitude depends on both $M_{1}$ and $M_{\rm DM}$, the peak
frequency is determined only by $M_{\rm DM}$. Before we present the numerical
results relevant to the GW experiments, let us point out a constraint that
must be considered. Depending on $M_{\rm BH}$ and $\beta$, the induced GWs
could be strong enough (Eq.(10)) to saturate the BBN constraint on the
effective number of neutrino species which is bounded from above [67]. Ref.
[38] derives an upper bound on $\beta$ as a function of $M_{\rm BH}$, which,
using Eq.(8) and Eq.(1), we recast as
$\displaystyle M_{1,\rm max}^{\rm BBN}\simeq 4.6\times
10^{14}\left(\frac{M_{\rm DM}}{10^{14}~{}\rm GeV}\right)^{-7/60}~{}{\rm GeV}.$
(14)
In Fig.3, we show the contours of $\Omega_{\rm GW}^{\rm peak}h^{2}$ on the
$M_{\rm DM}-M_{1}$ plane. The BBN bound derived in Eq.(14) excludes the red-
shaded region. From this figure, we extract the following key points: 1) In a
PBH-dominated early Universe, Fig.3 represents the most general testable
parameter space in the seesaw with three super-heavy RH neutrinos, one of
them being the DM. The hierarchy in the masses could be in any order, i.e.,
$M_{\rm 3,DM}>M_{i=1,2}$ as well as $M_{\rm 3,DM}<M_{i=1,2}$; for the former
(latter) case, one expects a detectable GW signal at higher (lower)
frequencies. 2) Detectors such as CE, ET, and DECIGO can probe the model if
the scale of leptogenesis is sufficiently high, $M_{1}\gtrsim 5\times 10^{13}$
GeV, and, at the same time, if $f^{\rm peak}\gtrsim 0.6~{}{\rm Hz}$. The
absolute lower bound $f_{\rm min}^{\rm peak}\simeq 0.6~{}\rm Hz$ corresponds
to the lowest value of the allowed DM mass ($M_{\rm DM}\simeq 10^{10}$ GeV) in
the $\rm PBH\rightarrow DM$ mechanism.
The scenario becomes extremely predictive if the DM mass is known. In this
model, the DM being super-heavy, we do not expect any effects on conventional
DM searches [68], even if we switch on DM interactions. (An indirect way to
measure the super-heavy DM mass is to find its signatures in the cosmic rays
[69, 70, 71].) Recently, Ref. [72] pointed out that if the DM acquires mass by a
$U(1)$ gauge symmetry breaking, which also produces cosmic strings, a PBH-
dominated Universe offers a unique way to determine the super-heavy DM mass
from the spectral features of cosmic string-radiated GWs. An elegant UV
completion of the seesaw consists in promoting the difference between the
lepton ($L$) and the baryon ($B$) number to a new gauge symmetry $U(1)_{B-L}$,
which can be naturally embedded in GUT [73, 74, 75, 76] – one of our primary
motivations as outlined in the introduction. In addition, a three right-handed
neutrino extension of the SM makes $G_{\rm SM}\times U(1)_{B-L}$ anomaly free
($G_{\rm SM}$ is the SM gauge group). In the next section, we discuss the
$U(1)_{B-L}$ version of the seesaw in the presence of ultralight PBHs, namely
the $U(1)_{B-L}^{\rm PBH}$ model.
## IV Gravitational waves signatures in the $U(1)_{B-L}^{\rm PBH}$ model
Cosmic strings [77, 78] appear as topological defects once the $U(1)_{B-L}$
breaks and the RH neutrinos become massive: $M_{i=1,2,3}=f_{i}v_{\Phi}$, where
$f_{i}$ and $v_{\Phi}$ are the Yukawa coupling and the VEV of the symmetry
breaking scalar $\Phi_{B-L}$ respectively. After their formation, the strings
organise into closed loops and a network of horizon-size long strings [79];
when two segments of the long strings cross, they chop off further loops.
Long strings are
characterized by a correlation length $L=\sqrt{\mu/\rho_{\infty}}$, where
$\rho_{\infty}$ is the long-string energy density and $\mu$ is the string
tension. For a gauge coupling $g$ and a scalar self-interaction coupling
$\lambda$, the string tension $\mu$ is defined as $\mu=\pi
v_{\Phi}^{2}\mathcal{H}\left(\frac{\lambda}{2{g}^{2}}\right)$. The quantity
$\mathcal{H}$ varies slowly [80] with respect to its argument:
$\mathcal{H}\left(\frac{\lambda}{2{g}^{2}}\right)\simeq 1$ for
$\lambda=2{g}^{2}$. As in the previous section, let us consider $N_{3}\equiv
N_{\rm DM}$, i.e., $f_{i=3}\equiv f_{\rm DM}$, so that $M_{\rm DM}=f_{\rm
DM}v_{\Phi}$. Therefore, we can express the string tension in terms of DM mass
as $\mu=\pi M_{\rm DM}^{2}f_{\rm
DM}^{-2}\mathcal{H}\left(\frac{\lambda}{2{g}^{2}}\right)$.
Generally, owing to the strong interaction with the thermal plasma [81], the
motion of a string network gets damped. Once the damping becomes inefficient,
the network oscillates to enter a scaling evolution phase, characterized by
the fragmentation of the long strings into loops and stretching of the
correlation length due to the cosmic expansion. These loops oscillate
independently and produce GWs [82].
Figure 4: Left: GW signatures of $U(1)_{B-L}$-seesaw in presence of ultralight
PBHs. The dashed red line shows GWs from cosmic strings for $M_{DM}=4\times
10^{14}$ GeV. The slanted vertical bars show GWs from PBH density fluctuation
corresponding to $M_{1}=M_{DM}/x$, where
$x=4~{}{\rm(red)},~{}6~{}{\rm(blue)}~{}{\rm and}~{}8~{}{\rm(green)}$. The
coloured regions represent the nominal sensitivities of several current and
planned GW detectors as well as the NANOGrav result. Right: contours of
$\Omega_{GW}^{\rm peak}$ on the $M_{1}-M_{DM}$ plane, containing the benchmark
cases shown on the left panel. See the text for the explanation of the
exclusion regions.
Cosmic string network simulations find evidence of scaling solutions (see
Refs. [83, 84, 85, 86]), which motivates us to consider the network in the
scaling regime in our computation. The size of a radiating loop at a cosmic
time $t$ is given by $l(t)=\alpha t_{i}-\Gamma G\mu(t-t_{i})$, where
$l_{i}=\alpha t_{i}$ is the initial size of the loop, $\Gamma\simeq 50$ [87],
and $\alpha\simeq 0.1$ [88, 89]. The energy loss from a loop is decomposed
into a set of normal-mode oscillations with frequencies
$f_{k}=2k/l_{k}=a(t_{0})/a(t)\,f$, where $k=1,2,3,\dots$. The $k$th mode GW
density parameter is obtained as [88]
$\displaystyle\Omega_{GW}^{(k)}(f)=\frac{2kG\mu^{2}\Gamma_{k}}{f\rho_{c}}\int_{t_{osc}}^{t_{0}}\left[\frac{a(t)}{a(t_{0})}\right]^{5}n\left(t,l_{k}\right)dt,$
(15)
where $n\left(t,l_{k}\right)$ is the loop number density, and we calculate it
using the Velocity-dependent-One-Scale (VOS) model [90, 91]. The quantity
$\Gamma_{j}$ is given by $\Gamma_{j}=\frac{\Gamma
j^{-\delta}}{\zeta(\delta)}$, with $\delta=4/3$ for loops containing cusps
[92]. Eq.(15) is valid only for $t_{i}>l_{\rm crit}/\alpha$, with $l_{\rm
crit}$ the critical length above which GW emission dominates over massive
particle radiation [93, 94], and $t_{i}>t_{\rm osc}={\rm
Max}\,\left[t_{F},t_{\text{fric}}\right]$, where $t_{F}$ ($t_{\rm fric}$) is
the network formation time (the time at which damping ends).
Detectable GWs from the cosmic string loops come from the most recent cosmic
epochs. While the overall GW amplitude grows with $\mu$ [88], in the presence
of a matter era before the most recent radiation epoch at $T=T_{\Delta}$, the GW
spectrum at higher frequencies can be described as $\Omega_{\rm
GW}^{(1)}(f\lesssim f_{\Delta})\sim f^{0}=\text{const}$ and $\Omega_{\rm
GW}^{(1)}(f\gtrsim f_{\Delta})\sim f^{-1}$, with $f_{\Delta}$ the frequency of
the spectral-break (see, e.g., Refs.[95, 96, 97, 98, 99, 100] for the
importance of $f_{\Delta}$ to probe various cosmological models).
We now consider a simple scenario where $T_{\rm RH}\lesssim M_{\rm DM}$, so
that the thermal bath does not have sufficient energy to produce the dark
matter, despite the latter having gauge interaction, e.g., $g\simeq 1$. In
this case, the $U(1)_{B-L}$ breaking must occur after the cosmic inflation
ends. Otherwise, the inflated string network re-enters the horizon at late
times, which may lead to other spectral breaks that can obfuscate the feature
at $f_{\Delta}$ [101].
Using the PBH evaporation temperature in the case where they dominate [56, 55],
along with Eq. (1), we obtain $T_{\Delta}\equiv T_{\rm eva}^{\rm DM}=2.1\times
10^{-8}\left(M_{\rm DM}/{\rm GeV}\right)^{3/5}~{}{\rm GeV}$. Consequently, an
approximate analytical expression for $f_{\Delta}$ [95, 72] in our case reads
$\displaystyle f_{\Delta}\simeq f_{\Delta}^{0}\sqrt{\frac{50}{z_{\rm
eq}\alpha\Gamma G\mu}}\left(\frac{\mu f_{\rm
DM}^{2}}{\pi\mathcal{H}}\right)^{3/10}T_{0}^{-1}t_{0}^{-1},$ (16)
where $f_{\Delta}^{0}\simeq 2.1\times 10^{-8}$, $t_{0}\simeq 4\times 10^{17}$
s, $T_{0}=2.7$ K, and $z_{\rm eq}\simeq 3387$. Because Eq. (16) depends only
weakly on $\mathcal{H}$, and $f_{\rm DM}\lesssim\sqrt{4\pi}$, a GW measurement
at low frequencies remarkably constrains $\mu$ while also robustly predicting
an approximate upper bound on $f_{\Delta}$.
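To make these scalings concrete, a quick numerical evaluation (pure Python) of $T_{\Delta}$ for the PTA-motivated benchmark $M_{\rm DM}=4\times 10^{14}$ GeV used below, together with the $\mathcal{H}^{-3/10}$ sensitivity of Eq. (16):

```python
# T_Delta = 2.1e-8 * (M_DM / GeV)**(3/5) GeV, from the PBH evaporation
# temperature combined with Eq. (1).
M_DM = 4e14                      # GeV, benchmark adopted later in the text
T_Delta = 2.1e-8 * M_DM ** 0.6
print(f"T_Delta ~ {T_Delta:.1f} GeV")   # ~ 12 GeV

# f_Delta scales as H**(-3/10), so even a factor-100 change in the
# model factor H shifts f_Delta by only 100**0.3 ~ 4:
print(f"factor-100 change in H -> f_Delta shifts by {100 ** 0.3:.1f}x")
```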
In this context, a significant experimental result at low frequencies is the
recent finding of the NANOGrav pulsar-timing array, which reports strong
evidence of a stochastic common-spectrum process across 47 millisecond pulsars
in 12.5 years of data [102]. However, the data set does not show the required
evidence of quadrupolar spatial correlations described by the Hellings–Downs
curve [103]. Interestingly, such a common-spectrum process has recently been
confirmed by the PPTA [104] and EPTA [105] collaborations. The signal, if
genuine, can be well explained in this model with $M_{\rm DM}^{\rm
PTA}\simeq 3\times 10^{13}~{}{\rm GeV}-6\times 10^{14}$ GeV, assuming the DM
is maximally coupled to $\Phi_{B-L}$ and $\mathcal{H}\simeq 1$. Therefore, if
the pulsar-timing-array results are due to the existence of a super-heavy DM
with $M_{\rm DM}^{\rm PTA}\simeq\mathcal{O}(10^{14})$ GeV in the $U(1)^{\rm
PBH}_{B-L}$ model, we forecast that DECIGO will observe a spectral break
($f_{\Delta}$) in GWs, as shown in Fig. 4 (left) with the dashed red curve.
This signature is distinct both from the gravitational waves produced by
cosmic strings evolving entirely in radiation domination [106, 107, 24] and
from cosmic string models with other intermediate matter-domination epochs,
e.g., Refs. [95, 96, 97, 98, 99, 100]. In the former case, no spectral break
is expected at higher frequencies, whereas in the latter, $f_{\Delta}$ is
generally not bounded.
Moving forward with a benchmark value $M_{\rm DM}^{\rm PTA}\simeq 4\times
10^{14}$ GeV, we now look for the signature of leptogenesis by substituting
Eqs. (12) and (13) into Eq. (9). The results are shown with the coloured solid
lines for $M_{1}\simeq M_{\rm DM}^{\rm PTA}/x$, with $x=4$ (red), $x=6$ (blue),
and $x=8$ (green) in Fig. 4 (left). Note that the amplitude of the induced GWs
increases with $M_{1}$. The reason is that a larger $M_{1}$ produces more
baryon asymmetry, which requires stronger entropy dilution, and hence a larger
$\beta$, to be consistent with the observed value. Consequently, the induced
GWs are enhanced according to Eq. (10). If $M_{i=1,2}$ are quasi-degenerate,
the amount of asymmetry is greatly enhanced by the resonant leptogenesis
mechanism [108].
In that case, even if $M_{1}\ll\Lambda_{\rm GUT}$, one can have induced GWs
at the level of LIGO-5. Thus, in this mechanism, any measurement of GWs at low
frequencies not only forecasts a spectral break in the string-radiated GWs but
also robustly fixes the location of the peak of the induced GWs, whose
amplitude is determined by the scale of leptogenesis. Therefore, if the timing-array
results are correct, in this model we expect a signature of high-scale
leptogenesis at mid-frequency interferometer scales, with $f^{\rm peak}\simeq
23.8$ Hz. In Fig. 4 (right), we have reproduced the contours of $\Omega_{\rm
GW}^{\rm peak}$ as in Fig. 3, but now with three more exclusion regions and
with the benchmarks shown in the left panel. The green region is excluded
because we consider $M_{\rm DM}>T_{\rm RH}>M_{1,2}$. The blue region is
excluded because otherwise $T_{\rm Bf}>T_{\rm RH}$. In the less stringent
purple region, particle production dominates over gravitational radiation,
where we have assumed that the string loops contain only cusp-like
structures [93] and $\mu\sim M_{\rm DM}^{2}$.
A similar analysis for the $T_{\rm RH}>M_{\rm DM}$ case can be done
straightforwardly. Nonetheless, in this case, one needs to identify the domain
of $g$ for which dark matter production via gauge-boson-mediated interactions
is suppressed. Otherwise, Eq. (16), which accounts only for the $\rm
PBH\rightarrow DM$ channel, will not be valid. For large values of $g$, which
numerical simulations of cosmic string networks take into account [93],
super-heavy dark matter would overclose the Universe via the thermal
freeze-out mechanism [109]. Therefore, a suitable domain of $g$ in this case
is the freeze-in regime with small values of $g$ [110]. In this regime,
although the dark matter never thermalizes, it can be produced via $B-L$
gauge-boson-mediated scatterings: $f\overline{f}\leftrightarrow N_{\rm
DM}N_{\rm DM}$, where $f$ is an SM fermion. In Fig. 5, we show the evolution
of the DM yields for two benchmark masses. We find that for $g\lesssim
10^{-6}$, the production channels in the freeze-in regime remain subdominant
compared to $\rm PBH\rightarrow DM$ for the entire range of dark matter masses
of interest. Therefore, for $T_{\rm RH}>M_{\rm DM}$, a predictive cosmic
string phenomenology requires $g\lesssim 10^{-6}$.
Figure 5: Evolution of DM yield (normalised to entropy density) produced from
freeze-in for $M_{\rm DM}=10^{13}$ GeV (left panel) and $M_{\rm DM}=10^{10}$
GeV (right panel). The red (blue) dashed lines are for $g=10^{-5}$
($g=10^{-6}$). The orange contour denotes the yield required to get the
observed DM abundance. We have taken $T_{\rm RH}=10^{15}$ GeV and the symmetry
breaking scale $v_{\Phi}=10^{13}$ GeV.
Let us outline the following important characteristics of this case. I) The
green- and blue-shaded exclusion regions are less important because $T_{\rm
RH}$ is the highest scale in the theory. II) One can explore three limiting
cases: $\lambda=2g^{2}$, $\lambda\gg 2g^{2}$ and $\lambda\ll 2g^{2}$. Taking
the results of numerical simulations with large coupling constants at face
value, the first choice would provide phenomenology similar to the $M_{\rm
DM}>T_{\rm RH}$ case because $\mathcal{H}\simeq 1$. For the other two cases,
one has $\mathcal{H}\simeq\ln\left(\frac{\lambda}{2g^{2}}\right)$ and
$\mathcal{H}\simeq\left(\ln\frac{2g^{2}}{\lambda}\right)^{-1}$, respectively
[80]. Although $\mathcal{H}$ depends only logarithmically on the parameters,
these two cases may be less appealing because $\mathcal{H}$ then appears in
Eq. (16) as a free parameter. III) For small values of $\lambda$, the string
width $\delta_{\omega}\sim 1/(\sqrt{\lambda}\,v_{\Phi})$ might constitute a
considerable fraction of the horizon $H(T_{F})^{-1}$ at early times, assuming
$v_{\Phi}$ is of the order of the network formation temperature, $T_{F}$. In
this case, it is questionable to treat the strings as Nambu–Goto strings
($\delta_{\omega}\ll H(T)^{-1}$) consistently throughout the evolution of the
Universe. Notice that we consider the effect of PBHs on the cosmic string
network only at the level of the Universe’s expansion. We have ignored
possible interactions of PBHs and strings [111] that might cause further
spectral distortions; these require numerical simulations of a PBH-string
network, which is beyond the scope of this work.
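Point III can be quantified with a rough estimate. Assuming the standard radiation-era Hubble rate $H=1.66\sqrt{g_{*}}\,T^{2}/M_{\rm Pl}$ and $T_{F}\sim v_{\Phi}$ (both assumptions made here purely for illustration, with Standard Model $g_{*}$), the ratio of string width to horizon at formation is:

```python
import math

# Compare string width delta_w ~ 1/(sqrt(lambda) v_Phi) with the horizon
# H(T_F)^-1 at network formation, taking T_F ~ v_Phi (illustrative assumption).
M_Pl = 1.22e19    # GeV, Planck mass
g_star = 106.75   # SM relativistic degrees of freedom (assumption)
v_Phi = 1e13      # GeV, symmetry-breaking scale used in Fig. 5

def width_over_horizon(lam):
    H = 1.66 * math.sqrt(g_star) * v_Phi**2 / M_Pl   # radiation-era Hubble rate
    delta_w = 1.0 / (math.sqrt(lam) * v_Phi)
    return delta_w * H

for lam in (1.0, 1e-6, 1e-12):
    print(f"lambda = {lam:.0e}: delta_w * H(T_F) = {width_over_horizon(lam):.1e}")
```

At this $v_{\Phi}$, the Nambu–Goto condition $\delta_{\omega}\ll H^{-1}$ only begins to fail for extremely small $\lambda\lesssim 10^{-10}$.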
## V Conclusion
We have shown that the presence of ultralight black holes ($M_{\rm BH}<10^{9}$
g) in the early Universe may open up new avenues to study the cosmological
implications of seesaw models. In most unified scenarios of dark matter and
baryogenesis via leptogenesis, the right-handed neutrinos are light, so that
the theory is predictive for particle physics experiments. In this work, we
showed that super-heavy right-handed neutrinos can be just as predictive; in
this case, however, the search strategy for signatures is different. The
presence of black holes and of a super-high-scale phase transition that
generates right-handed neutrino masses offers unique gravitational wave
signatures that are testable with upcoming gravitational wave detectors. In
our model, ultralight black holes generate super-heavy right-handed neutrino
dark matter via Hawking radiation, whereas baryogenesis is realized via
thermal leptogenesis through the remaining pair of right-handed neutrinos. The
amplitude of gravitational waves from PBH density fluctuations correlates with
the super-heavy neutrino masses. The detectability of such gravitational waves
improves as the right-handed neutrino masses approach the grand unification
scale, $\Lambda_{\rm GUT}$. In a realistic version of the model, motivated by
grand unified theories, the presence of cosmic strings makes the gravitational
wave signatures distinct from other realisations of this idea, therefore
offering a unique avenue to test its properties.
## Acknowledgements
We thank Guillem Domènech for very useful comments and suggestions. The work
of RS is supported by the project International mobility MSCA-IF IV FZU -
CZ.02.2.69/0.0/0.0/20_079/0017754 and the Czech Science Foundation (GACR),
grant number 20-16531Y. RS also acknowledges the European Structural and
Investment Fund and the Czech Ministry of Education, Youth, and Sports. FU is
supported by the European Regional Development Fund (ESIF/ERDF) and the Czech
Ministry of Education, Youth and Sports (MEYS) through Project CoGraDS -
CZ.02.1.01/0.0/0.0/15_003/0000437.
## References
* [1] I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, T. Schwetz and A. Zhou, JHEP 09, 178 (2020) doi:10.1007/JHEP09(2020)178 [arXiv:2007.14792 [hep-ph]].
* [2] P. Minkowski, Phys. Lett. B 67, 421-428 (1977) doi:10.1016/0370-2693(77)90435-X
* [3] M. Gell-Mann, P. Ramond and R. Slansky, Conf. Proc. C 790927, 315-321 (1979) [arXiv:1306.4669 [hep-th]].
* [4] T. Yanagida, Prog. Theor. Phys. 64, 1103 (1980) doi:10.1143/PTP.64.1103
* [5] R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44, 912 (1980) doi:10.1103/PhysRevLett.44.912
* [6] H. Georgi and S. L. Glashow, Phys. Rev. Lett. 32, 438-441 (1974) doi:10.1103/PhysRevLett.32.438
* [7] H. Fritzsch and P. Minkowski, Annals Phys. 93, 193-266 (1975) doi:10.1016/0003-4916(75)90211-0
* [8] K. S. Babu and R. N. Mohapatra, Phys. Rev. Lett. 70, 2845-2848 (1993) doi:10.1103/PhysRevLett.70.2845 [arXiv:hep-ph/9209215 [hep-ph]].
* [9] M. Fukugita and T. Yanagida, Phys. Lett. B 174, 45-47 (1986) doi:10.1016/0370-2693(86)91126-3
* [10] A. Riotto and M. Trodden, Ann. Rev. Nucl. Part. Sci. 49, 35-75 (1999) doi:10.1146/annurev.nucl.49.1.35 [arXiv:hep-ph/9901362 [hep-ph]].
* [11] W. Buchmuller, P. Di Bari and M. Plumacher, Annals Phys. 315, 305-351 (2005) doi:10.1016/j.aop.2004.02.003 [arXiv:hep-ph/0401240 [hep-ph]].
* [12] S. Davidson, E. Nardi and Y. Nir, Phys. Rept. 466, 105-177 (2008) doi:10.1016/j.physrep.2008.06.002 [arXiv:0802.2962 [hep-ph]].
* [13] T. Asaka and M. Shaposhnikov, Phys. Lett. B 620, 17-26 (2005) doi:10.1016/j.physletb.2005.06.020 [arXiv:hep-ph/0505013 [hep-ph]].
* [14] T. Asaka, S. Blanchet and M. Shaposhnikov, Phys. Lett. B 631, 151-156 (2005) doi:10.1016/j.physletb.2005.09.070 [arXiv:hep-ph/0503065 [hep-ph]].
* [15] S. Dodelson and L. M. Widrow, Phys. Rev. Lett. 72, 17-20 (1994) doi:10.1103/PhysRevLett.72.17 [arXiv:hep-ph/9303287 [hep-ph]].
* [16] E. K. Akhmedov, V. A. Rubakov and A. Y. Smirnov, Phys. Rev. Lett. 81, 1359-1362 (1998) doi:10.1103/PhysRevLett.81.1359 [arXiv:hep-ph/9803255 [hep-ph]].
* [17] X. D. Shi and G. M. Fuller, Phys. Rev. Lett. 82, 2832-2835 (1999) doi:10.1103/PhysRevLett.82.2832 [arXiv:astro-ph/9810076 [astro-ph]].
* [18] L. Canetti, M. Drewes and M. Shaposhnikov, Phys. Rev. Lett. 110, no.6, 061801 (2013) doi:10.1103/PhysRevLett.110.061801 [arXiv:1204.3902 [hep-ph]].
* [19] A. Anisimov and P. Di Bari, Phys. Rev. D 80, 073017 (2009) doi:10.1103/PhysRevD.80.073017 [arXiv:0812.5085 [hep-ph]].
* [20] P. Di Bari, P. O. Ludl and S. Palomares-Ruiz, JCAP 11, 044 (2016) doi:10.1088/1475-7516/2016/11/044 [arXiv:1606.06238 [hep-ph]].
* [21] P. Di Bari, K. Farrag, R. Samanta and Y. L. Zhou, JCAP 10, 029 (2020) doi:10.1088/1475-7516/2020/10/029 [arXiv:1908.00521 [hep-ph]].
* [22] A. Datta, R. Roshan and A. Sil, Phys. Rev. Lett. 127, no.23, 231801 (2021) doi:10.1103/PhysRevLett.127.231801 [arXiv:2104.02030 [hep-ph]].
* [23] J. A. Dror, T. Hiramatsu, K. Kohri, H. Murayama and G. White, Phys. Rev. Lett. 124, no.4, 041804 (2020) doi:10.1103/PhysRevLett.124.041804 [arXiv:1908.03227 [hep-ph]].
* [24] R. Samanta and S. Datta, JHEP 05, 211 (2021) doi:10.1007/JHEP05(2021)211 [arXiv:2009.13452 [hep-ph]].
* [25] R. Samanta and S. Datta, JHEP 11, 017 (2021) doi:10.1007/JHEP11(2021)017 [arXiv:2108.08359 [hep-ph]].
* [26] B. Barman, D. Borah, A. Dasgupta and A. Ghoshal, Phys. Rev. D 106, no.1, 015007 (2022) doi:10.1103/PhysRevD.106.015007 [arXiv:2205.03422 [hep-ph]].
* [27] Y. F. Perez-Gonzalez and J. Turner, Phys. Rev. D 104, no.10, 103021 (2021) doi:10.1103/PhysRevD.104.103021 [arXiv:2010.03565 [hep-ph]].
* [28] N. Bhaumik, A. Ghoshal and M. Lewicki, JHEP 07, 130 (2022) doi:10.1007/JHEP07(2022)130 [arXiv:2205.06260 [astro-ph.CO]].
* [29] L. Bian, X. Liu and K. P. Xie, JHEP 11, 175 (2021) doi:10.1007/JHEP11(2021)175 [arXiv:2107.13112 [hep-ph]].
* [30] A. Dasgupta, P. S. B. Dev, A. Ghoshal and A. Mazumdar, Phys. Rev. D 106, no.7, 075027 (2022) doi:10.1103/PhysRevD.106.075027 [arXiv:2206.07032 [hep-ph]].
* [31] P. Huang and K. P. Xie, JHEP 09, 052 (2022) doi:10.1007/JHEP09(2022)052 [arXiv:2206.04691 [hep-ph]].
* [32] D. Borah, A. Dasgupta and I. Saha, [arXiv:2207.14226 [hep-ph]].
* [33] A. Ghoshal, R. Samanta and G. White, [arXiv:2211.10433 [hep-ph]].
* [34] R. H. Cyburt, B. D. Fields, K. A. Olive and T. H. Yeh, Rev. Mod. Phys. 88, 015004 (2016) doi:10.1103/RevModPhys.88.015004 [arXiv:1505.01076 [astro-ph.CO]].
* [35] M. Kawasaki, K. Kohri and N. Sugiyama, Phys. Rev. Lett. 82, 4168 (1999) doi:10.1103/PhysRevLett.82.4168 [arXiv:astro-ph/9811437 [astro-ph]].
* [36] T. Hasegawa, N. Hiroshima, K. Kohri, R. S. L. Hansen, T. Tram and S. Hannestad, JCAP 12, 012 (2019) doi:10.1088/1475-7516/2019/12/012 [arXiv:1908.10189 [hep-ph]].
* [37] T. Papanikolaou, V. Vennin and D. Langlois, JCAP 03, 053 (2021) doi:10.1088/1475-7516/2021/03/053 [arXiv:2010.11573 [astro-ph.CO]].
* [38] G. Domènech, C. Lin and M. Sasaki, JCAP 04, 062 (2021) [erratum: JCAP 11, E01 (2021)] doi:10.1088/1475-7516/2021/11/E01 [arXiv:2012.08151 [gr-qc]].
* [39] G. Domènech, V. Takhistov and M. Sasaki, Phys. Lett. B 823, 136722 (2021) doi:10.1016/j.physletb.2021.136722 [arXiv:2105.06816 [astro-ph.CO]].
* [40] G. Domènech, Universe 7, no.11, 398 (2021) doi:10.3390/Universe7110398 [arXiv:2109.01398 [gr-qc]].
* [41] T. Fujita, M. Kawasaki, K. Harigaya and R. Matsuda, Phys. Rev. D 89, no.10, 103501 (2014) doi:10.1103/PhysRevD.89.103501 [arXiv:1401.1909 [astro-ph.CO]].
* [42] O. Lennon, J. March-Russell, R. Petrossian-Byrne and H. Tillim, JCAP 04, 009 (2018) doi:10.1088/1475-7516/2018/04/009 [arXiv:1712.07664 [hep-ph]].
* [43] L. Morrison, S. Profumo and Y. Yu, JCAP 05, 005 (2019) doi:10.1088/1475-7516/2019/05/005 [arXiv:1812.10606 [astro-ph.CO]].
* [44] D. Hooper, G. Krnjaic and S. D. McDermott, JHEP 08, 001 (2019) doi:10.1007/JHEP08(2019)001 [arXiv:1905.01301 [hep-ph]].
* [45] S. Davidson and A. Ibarra, Phys. Lett. B 535, 25-32 (2002) doi:10.1016/S0370-2693(02)01735-5 [arXiv:hep-ph/0202239 [hep-ph]].
* [46] A. Pilaftsis and T. E. J. Underwood, Nucl. Phys. B 692, 303-345 (2004) doi:10.1016/j.nuclphysb.2004.05.029 [arXiv:hep-ph/0309342 [hep-ph]].
* [47] B. P. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. D 100, no.6, 061101 (2019) doi:10.1103/PhysRevD.100.061101 [arXiv:1903.02886 [gr-qc]].
* [48] J. Aasi et al. [LIGO Scientific and VIRGO], Class. Quant. Grav. 32, no.11, 115012 (2015) doi:10.1088/0264-9381/32/11/115012 [arXiv:1410.7764 [gr-qc]].
* [49] B. Sathyaprakash, M. Abernathy, F. Acernese, P. Ajith, B. Allen, P. Amaro-Seoane, N. Andersson, S. Aoudia, K. Arun and P. Astone, et al. Class. Quant. Grav. 29, 124013 (2012) [erratum: Class. Quant. Grav. 30, 079501 (2013)] doi:10.1088/0264-9381/29/12/124013 [arXiv:1206.0331 [gr-qc]].
* [50] B. P. Abbott et al. [LIGO Scientific], Class. Quant. Grav. 34, no.4, 044001 (2017) doi:10.1088/1361-6382/aa51f4 [arXiv:1607.08697 [astro-ph.IM]].
* [51] P. A. R. Ade et al. [Planck], Astron. Astrophys. 594, A13 (2016) doi:10.1051/0004-6361/201525830 [arXiv:1502.01589 [astro-ph.CO]].
* [52] D. J. H. Chung, E. W. Kolb and A. Riotto, Phys. Rev. D 59, 023501 (1998) doi:10.1103/PhysRevD.59.023501 [arXiv:hep-ph/9802238 [hep-ph]].
* [53] V. Kuzmin and I. Tkachev, JETP Lett. 68, 271-275 (1998) doi:10.1134/1.567858 [arXiv:hep-ph/9802304 [hep-ph]].
* [54] E. W. Kolb, D. J. H. Chung and A. Riotto, AIP Conf. Proc. 484, no.1, 91-105 (1999) doi:10.1063/1.59655 [arXiv:hep-ph/9810361 [hep-ph]].
* [55] S. Datta, A. Ghosal and R. Samanta, JCAP 08, 021 (2021) doi:10.1088/1475-7516/2021/08/021 [arXiv:2012.14981 [hep-ph]].
* [56] I. Masina, Eur. Phys. J. Plus 135, no.7, 552 (2020) doi:10.1140/epjp/s13360-020-00564-9 [arXiv:2004.04740 [hep-ph]].
* [57] A. Abada, S. Davidson, A. Ibarra, F. X. Josse-Michaux, M. Losada and A. Riotto, JHEP 09, 010 (2006) doi:10.1088/1126-6708/2006/09/010 [arXiv:hep-ph/0605281 [hep-ph]].
* [58] E. Nardi, Y. Nir, E. Roulet and J. Racker, JHEP 01, 164 (2006) doi:10.1088/1126-6708/2006/01/164 [arXiv:hep-ph/0601084 [hep-ph]].
* [59] S. Blanchet and P. Di Bari, JCAP 03, 018 (2007) doi:10.1088/1475-7516/2007/03/018 [arXiv:hep-ph/0607330 [hep-ph]].
* [60] R. Saito and J. Yokoyama, Phys. Rev. Lett. 102, 161101 (2009) [erratum: Phys. Rev. Lett. 107, 069901 (2011)] doi:10.1103/PhysRevLett.102.161101 [arXiv:0812.4339 [astro-ph]].
* [61] R. g. Cai, S. Pi and M. Sasaki, Phys. Rev. Lett. 122, no.20, 201101 (2019) doi:10.1103/PhysRevLett.122.201101 [arXiv:1810.11000 [astro-ph.CO]].
* [62] K. Inomata, M. Kawasaki, K. Mukaida, T. Terada and T. T. Yanagida, Phys. Rev. D 101, no.12, 123533 (2020) doi:10.1103/PhysRevD.101.123533 [arXiv:2003.10455 [astro-ph.CO]].
* [63] R. Anantua, R. Easther and J. T. Giblin, Phys. Rev. Lett. 103, 111303 (2009) doi:10.1103/PhysRevLett.103.111303 [arXiv:0812.0825 [astro-ph]].
* [64] D. Hooper, G. Krnjaic, J. March-Russell, S. D. McDermott and R. Petrossian-Byrne, [arXiv:2004.00618 [astro-ph.CO]].
* [65] R. Dong, W. H. Kinney and D. Stojkovic, JCAP 10, 034 (2016) doi:10.1088/1475-7516/2016/10/034 [arXiv:1511.05642 [astro-ph.CO]].
* [66] A. Arbey, J. Auffinger and J. Silk, Mon. Not. Roy. Astron. Soc. 494, no.1, 1257-1262 (2020) doi:10.1093/mnras/staa765 [arXiv:1906.04196 [astro-ph.CO]].
* [67] A. Peimbert, M. Peimbert and V. Luridiana, Rev. Mex. Astron. Astrofis. 52, no.2, 419-424 (2016) [arXiv:1608.02062 [astro-ph.CO]].
* [68] M. Schumann, J. Phys. G 46, no.10, 103003 (2019) doi:10.1088/1361-6471/ab2ea5 [arXiv:1903.03026 [astro-ph.CO]].
* [69] O. K. Kalashev and M. Y. Kuznetsov, Phys. Rev. D 94, no.6, 063535 (2016) doi:10.1103/PhysRevD.94.063535 [arXiv:1606.07354 [astro-ph.HE]].
* [70] L. Marzola and F. R. Urban, Astropart. Phys. 93, 56-69 (2017) doi:10.1016/j.astropartphys.2017.04.005 [arXiv:1611.07180 [astro-ph.HE]].
* [71] E. Alcantara, L. A. Anchordoqui and J. F. Soriano, Phys. Rev. D 99, no.10, 103016 (2019) doi:10.1103/PhysRevD.99.103016 [arXiv:1903.05429 [hep-ph]].
* [72] R. Samanta and F. R. Urban, JCAP 06, no.06, 017 (2022) doi:10.1088/1475-7516/2022/06/017 [arXiv:2112.04836 [hep-ph]].
* [73] A. Davidson, Phys. Rev. D 20, 776 (1979) doi:10.1103/PhysRevD.20.776
* [74] R. E. Marshak and R. N. Mohapatra, Phys. Lett. B 91, 222-224 (1980) doi:10.1016/0370-2693(80)90436-0
* [75] R. N. Mohapatra and R. E. Marshak, Phys. Rev. Lett. 44, 1316-1319 (1980) [erratum: Phys. Rev. Lett. 44, 1643 (1980)] doi:10.1103/PhysRevLett.44.1316
* [76] W. Buchmüller, V. Domcke, K. Kamada and K. Schmitz, JCAP 10, 003 (2013) doi:10.1088/1475-7516/2013/10/003 [arXiv:1305.3392 [hep-ph]].
* [77] T. W. B. Kibble, J. Phys. A 9, 1387-1398 (1976) doi:10.1088/0305-4470/9/8/029
* [78] H. B. Nielsen and P. Olesen, Nucl. Phys. B 61, 45-61 (1973) doi:10.1016/0550-3213(73)90350-7
* [79] M. B. Hindmarsh and T. W. B. Kibble, Rept. Prog. Phys. 58, 477-562 (1995) doi:10.1088/0034-4885/58/5/001 [arXiv:hep-ph/9411342 [hep-ph]].
* [80] A. Vilenkin and E. P. S. Shellard, Cosmic Strings and Other Topological Defects (Cambridge University Press, 2000).
* [81] A. Vilenkin, Phys. Rev. D 43, 1060-1062 (1991) doi:10.1103/PhysRevD.43.1060
* [82] T. Vachaspati and A. Vilenkin, Phys. Rev. D 31, 3052 (1985) doi:10.1103/PhysRevD.31.3052
* [83] D. P. Bennett and F. R. Bouchet, Phys. Rev. Lett. 60, 257 (1988) doi:10.1103/PhysRevLett.60.257
* [84] D. P. Bennett and F. R. Bouchet, Phys. Rev. Lett. 63, 2776 (1989) doi:10.1103/PhysRevLett.63.2776
* [85] C. Ringeval, M. Sakellariadou and F. Bouchet, JCAP 02, 023 (2007) doi:10.1088/1475-7516/2007/02/023 [arXiv:astro-ph/0511646 [astro-ph]].
* [86] J. J. Blanco-Pillado, K. D. Olum and B. Shlaer, Phys. Rev. D 83, 083514 (2011) doi:10.1103/PhysRevD.83.083514 [arXiv:1101.5173 [astro-ph.CO]].
* [87] A. Vilenkin, Phys. Lett. B 107, 47-50 (1981) doi:10.1016/0370-2693(81)91144-8
* [88] J. J. Blanco-Pillado, K. D. Olum and B. Shlaer, Phys. Rev. D 89, no.2, 023512 (2014) doi:10.1103/PhysRevD.89.023512 [arXiv:1309.6637 [astro-ph.CO]].
* [89] J. J. Blanco-Pillado and K. D. Olum, Phys. Rev. D 96, no.10, 104046 (2017) doi:10.1103/PhysRevD.96.104046 [arXiv:1709.02693 [astro-ph.CO]].
* [90] C. J. A. P. Martins and E. P. S. Shellard, Phys. Rev. D 54, 2535-2556 (1996) doi:10.1103/PhysRevD.54.2535 [arXiv:hep-ph/9602271 [hep-ph]].
* [91] P. Auclair, J. J. Blanco-Pillado, D. G. Figueroa, A. C. Jenkins, M. Lewicki, M. Sakellariadou, S. Sanidas, L. Sousa, D. A. Steer and J. M. Wachter, et al. JCAP 04, 034 (2020) doi:10.1088/1475-7516/2020/04/034 [arXiv:1909.00819 [astro-ph.CO]].
* [92] T. Damour and A. Vilenkin, Phys. Rev. D 64, 064008 (2001) doi:10.1103/PhysRevD.64.064008 [arXiv:gr-qc/0104026 [gr-qc]].
* [93] D. Matsunami, L. Pogosian, A. Saurabh and T. Vachaspati, Phys. Rev. Lett. 122, no.20, 201301 (2019) doi:10.1103/PhysRevLett.122.201301 [arXiv:1903.05102 [hep-ph]].
* [94] P. Auclair, D. A. Steer and T. Vachaspati, Phys. Rev. D 101, no.8, 083511 (2020) doi:10.1103/PhysRevD.101.083511 [arXiv:1911.12066 [hep-ph]].
* [95] Y. Cui, M. Lewicki, D. E. Morrissey and J. D. Wells, JHEP 01, 081 (2019) doi:10.1007/JHEP01(2019)081 [arXiv:1808.08968 [hep-ph]].
* [96] S. Blasi, V. Brdar and K. Schmitz, Phys. Rev. Res. 2, no.4, 043321 (2020) doi:10.1103/PhysRevResearch.2.043321 [arXiv:2004.02889 [hep-ph]].
* [97] Y. Gouttenoire, G. Servant and P. Simakachorn, JCAP 07, 032 (2020) doi:10.1088/1475-7516/2020/07/032 [arXiv:1912.02569 [hep-ph]].
* [98] Y. Gouttenoire, G. Servant and P. Simakachorn, JCAP 07, 016 (2020) doi:10.1088/1475-7516/2020/07/016 [arXiv:1912.03245 [hep-ph]].
* [99] D. Borah, S. Jyoti Das, A. K. Saha and R. Samanta, Phys. Rev. D 106, no.1, L011701 (2022) doi:10.1103/PhysRevD.106.L011701 [arXiv:2202.10474 [hep-ph]].
* [100] D. Borah, S. Jyoti Das and R. Roshan, [arXiv:2208.04965 [hep-ph]].
* [101] G. S. F. Guedes, P. P. Avelino and L. Sousa, Phys. Rev. D 98, no.12, 123505 (2018) doi:10.1103/PhysRevD.98.123505 [arXiv:1809.10802 [astro-ph.CO]].
* [102] Z. Arzoumanian et al. [NANOGrav], Astrophys. J. Lett. 905, no.2, L34 (2020) doi:10.3847/2041-8213/abd401 [arXiv:2009.04496 [astro-ph.HE]].
* [103] R. w. Hellings and G. s. Downs, Astrophys. J. Lett. 265, L39-L42 (1983) doi:10.1086/183954
* [104] B. Goncharov, R. M. Shannon, D. J. Reardon, G. Hobbs, A. Zic, M. Bailes, M. Curylo, S. Dai, M. Kerr and M. E. Lower, et al. Astrophys. J. Lett. 917, no.2, L19 (2021) doi:10.3847/2041-8213/ac17f4 [arXiv:2107.12112 [astro-ph.HE]].
* [105] S. Chen, R. N. Caballero, Y. J. Guo, A. Chalumeau, K. Liu, G. Shaifullah, K. J. Lee, S. Babak, G. Desvignes and A. Parthasarathy, et al. Mon. Not. Roy. Astron. Soc. 508, no.4, 4970-4993 (2021) doi:10.1093/mnras/stab2833 [arXiv:2110.13184 [astro-ph.HE]].
* [106] S. Blasi, V. Brdar and K. Schmitz, Phys. Rev. Lett. 126, no.4, 041305 (2021) doi:10.1103/PhysRevLett.126.041305 [arXiv:2009.06607 [astro-ph.CO]].
* [107] J. Ellis and M. Lewicki, Phys. Rev. Lett. 126, no.4, 041304 (2021) doi:10.1103/PhysRevLett.126.041304 [arXiv:2009.06555 [astro-ph.CO]].
* [108] A. Pilaftsis and T. E. J. Underwood, Nucl. Phys. B 692, 303-345 (2004) doi:10.1016/j.nuclphysb.2004.05.029 [arXiv:hep-ph/0309342 [hep-ph]].
* [109] K. Griest and M. Kamionkowski, Phys. Rev. Lett. 64, 615 (1990) doi:10.1103/PhysRevLett.64.615
* [110] L. J. Hall, K. Jedamzik, J. March-Russell and S. M. West, JHEP 03, 080 (2010) doi:10.1007/JHEP03(2010)080 [arXiv:0911.1120 [hep-ph]].
* [111] A. Vilenkin, Y. Levin and A. Gruzinov, JCAP 11, 008 (2018) doi:10.1088/1475-7516/2018/11/008 [arXiv:1808.00670 [astro-ph.CO]].
# The statistical properties of stars at redshift $z=5$, compared with the
present epoch
Matthew R. Bate1
1 Department of Physics and Astronomy, University of Exeter, Stocker Road,
Exeter EX4 4QL, United Kingdom E-mail<EMAIL_ADDRESS>
(Accepted by MNRAS)
###### Abstract
We report the statistical properties of stars and brown dwarfs obtained from
three radiation hydrodynamical simulations of star cluster formation with
metallicities of 1, 1/10 and 1/100 of the solar value. The star-forming clouds
are subjected to cosmic microwave background radiation that is appropriate for
star formation at a redshift $z=5$. The results from the three calculations
are compared to each other, and to similar previously published calculations
that had levels of background radiation appropriate for present-day ($z=0$)
star formation. Each of the calculations treats dust and gas temperatures
separately and includes a thermochemical model of the diffuse interstellar
medium. We find that whereas the stellar mass distribution is insensitive to
the metallicity for present-day star formation, at $z=5$ the characteristic
stellar mass increases with increasing metallicity and the mass distribution
has a deficit of brown dwarfs and low-mass stars at solar metallicity compared
to the Galactic initial mass function. We also find that the multiplicity of
M-dwarfs decreases with increasing metallicity at $z=5$. These effects result
from metal-rich gas being unable to cool to temperatures as low at $z=5$ as at
$z=0$, due to the hotter cosmic microwave background radiation, which inhibits
fragmentation at high densities.
###### keywords:
binaries: general – hydrodynamics – radiative transfer – stars: abundances –
stars: formation – stars: luminosity function, mass function.
## 1 Introduction
Around a decade ago, numerical studies of present-day star formation advanced
to the point that individual radiation hydrodynamical calculations could
produce stellar groups or clusters containing more than a hundred stars and
brown dwarfs. With such numbers of stars and brown dwarfs, the statistical
properties of the stellar populations can be meaningfully compared with those
of observed Galactic stellar populations. Bate (2012) published the first
radiation hydrodynamical calculation to reproduce a wide variety of observed
statistical properties of Galactic stellar systems. The calculation produced
in excess of 180 stars and brown dwarfs, including 40 multiple systems, while
also resolving young discs down to radii of $\approx 10$ au. The stellar mass
function was found to be in good agreement with the observed Galactic initial
mass function (IMF), the stellar multiplicity as a function of primary mass
was in agreement with the results from field star surveys, and many of the
observed characteristics of multiple stellar systems were reproduced.
Moreover, Bate (2018) analysed the mass and radius distributions of the young
discs that were produced by the Bate (2012) calculation and found that the
distribution of disc radii is in good agreement with the observed radii of
discs in Galactic star-forming regions, and the mass distributions are similar
to the estimated mass distributions of very young (Class 0) protostellar
systems (see Tychoniec et al., 2018; Tobin et al., 2020).
In addition to comparing the results of star cluster formation calculations to
observed Galactic stellar populations, the ability to perform numerical
simulations that produce clusters of stars gives us the ability to directly
predict how the statistical properties of stellar systems depend on
environment and initial conditions, and to investigate the roles of various
physical processes. Thus far, hydrodynamical calculations that include
radiative transfer and/or radiative heating from protostars have been used to
explore the dependence of stellar properties on metallicity (Myers et al.
2011; Bate 2014, 2019; He, Ricotti & Geen 2019; Guszejnov et al. 2022),
molecular cloud density (Bate, 2009b; Jones & Bate, 2018b; Tanvir et al.,
2022; Guszejnov et al., 2022), protostellar outflows (Krumholz, Klein & McKee
2012; Mathew & Federrath 2021; Tanvir, Krumholz & Federrath 2022; Grudić et
al. 2021; Grudić et al. 2022; Guszejnov et al. 2021; Guszejnov et al. 2022),
variations in turbulence (Lomax, Whitworth & Hubber 2015; Nam, Federrath &
Krumholz 2021; Mathew, Federrath & Seta 2022; Guszejnov et al. 2022), and
magnetic fields (Myers et al. 2013, 2014; Krumholz et al. 2016; Cunningham et
al. 2018; Grudić et al. 2021; Guszejnov et al. 2022). Some of the most recent
calculations have even begun exploring how the non-ideal magnetohydrodynamic
(MHD) effects of ambipolar diffusion, Hall conduction, and Ohmic resistivity
affect the formation of stellar groups, albeit only for small numbers of stars
as yet (Wurster et al., 2019).
The study of the dependence of star formation on metallicity of Bate (2019)
employed the method of Bate & Keto (2015) to model the thermodynamics. This
method combines radiative transfer with a thermochemical model of the diffuse
interstellar medium (ISM), with gas and dust temperatures treated separately.
The thermochemical model includes simple chemical models for the atomic and
molecular forms of hydrogen and whether carbon is in the form of C+, neutral
carbon, or CO. The ISM model is similar to that used by Glover & Clark (2012)
to study molecular cloud evolution, although the chemical model employed by
Bate & Keto is greatly simplified. Bate (2019) found that the stellar
properties did not vary greatly for present-day star formation if the
metallicities ranged from 1/100 to 3 times the solar value. For example, the
stellar mass functions were statistically indistinguishable. However, the
greater cooling rates at high gas densities due to the lower opacities at low
metallicities were found to increase fragmentation on small spatial scales,
which resulted in an anti-correlation between the close binary fraction of
low-mass stars and metallicity similar to that which is observed (Badenes et
al. 2018; El-Badry & Rix 2019; Moe, Kratter & Badenes 2019). The fraction of
protostellar mergers at low metallicities also increased due to the higher
prevalence of small-scale fragmentation. An indication was found that at lower
metallicity close binaries may have lower mass ratios and the relative
abundance of brown dwarfs to stars may increase slightly, but these effects
were quite weak. Elsender & Bate (2021) also studied the discs produced by the
calculations and found that the characteristic radii of protostellar discs
decrease with decreasing metallicity and that the discs and orbits of pairs of
protostars tend to be less well aligned at lower metallicity.
In this paper, we report results from three new radiation hydrodynamical
calculations of stellar cluster formation that are identical to those reported
by Bate (2019) except that the background interstellar radiation field (ISRF)
is modified so as to include a cosmic microwave background radiation component
that is appropriate for a redshift $z=5$ rather than $z=0$. We follow the
collapse of each of the molecular clouds to form a cluster of stars and then
compare the properties of the stars and brown dwarfs, both between each of the
new calculations, and to the results from the present-day ($z=0$) calculations
of Bate (2019). In Section 2, we summarise the method and initial conditions.
The results from the calculations are presented in Section 3. In Section 4, we
discuss observational evidence for variation of the stellar mass function in
the context of the results presented in this paper. Section 5 contains the
conclusions.
## 2 Method
The calculations were performed using the smoothed particle hydrodynamics
(SPH) code, sphNG, based on the original version from Benz (1990; Benz et al.
1990), but substantially modified using the methods described in Bate et al.
(1995), Price & Monaghan (2007), Whitehouse, Bate & Monaghan (2005),
Whitehouse & Bate (2006), Bate & Keto (2015) and parallelised using both MPI
and OpenMP.
Gravitational forces and particle nearest neighbours were calculated using a
binary tree. The smoothing length of each particle was set to
$h=1.2(m/\rho)^{1/3}$, where $m$ and $\rho$ are the SPH particle's mass and
density, respectively (see Price & Monaghan, 2007, for further details). A
second-order Runge-Kutta-Fehlberg integrator (Fehlberg, 1969) was employed,
with individual time steps for each particle (Bate, Bonnell & Price, 1995).
The Morris & Monaghan (1997) artificial viscosity was used to reduce numerical
shear viscosity, with $\alpha_{\rm v}$ varying between 0.1 and 1 and
$\beta_{\rm v}=2\alpha_{\rm v}$ (see also Price & Monaghan, 2005).
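The smoothing-length relation above can be sketched as follows. This is a minimal illustration only, not the sphNG implementation: in practice $h$ and $\rho$ are iterated to mutual consistency as described by Price & Monaghan (2007), and the particle values below are simply the quoted cloud mass divided by the quoted particle number.

```python
def smoothing_length(m, rho, eta=1.2):
    """SPH smoothing length h = eta * (m / rho)**(1/3) for particle mass m
    and density rho (any consistent unit system)."""
    return eta * (m / rho) ** (1.0 / 3.0)

# Illustrative particle values in cgs: cloud mass / particle number, at the
# initial cloud density quoted later in the paper.
m_particle = 500.0 * 1.989e33 / 3.5e7   # [g]
rho_initial = 1.2e-19                   # [g cm^-3]
h = smoothing_length(m_particle, rho_initial)  # smoothing length [cm]
```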
### 2.1 Radiative transfer and the diffuse ISM model
The calculations used the combined radiative transfer and diffuse ISM
thermochemical model developed by Bate & Keto (2015). A brief summary of the
main elements of the method is provided by Bate (2019); the reader is directed
to those two papers for the details of the method since the calculations
performed for the present paper are identical to the calculations presented by
Bate (2019) except for the form of the interstellar radiation field.
Briefly, the equations of radiation hydrodynamics are solved in a frame
co-moving with the fluid, assuming local thermal equilibrium (LTE), and can be
written
$\frac{{\rm D}\rho}{{\rm D}t}+\rho\nabla\cdot\boldsymbol{v}=0,$ (1)
$\rho\frac{{\rm D}\boldsymbol{v}}{{\rm D}t}=-\nabla p+\frac{\chi\rho}{c}\boldsymbol{F},$ (2)
$\rho\frac{{\rm D}}{{\rm D}t}\left(\frac{E}{\rho}\right)=-\nabla\cdot\boldsymbol{F}-\nabla\boldsymbol{v}:{\bf P}-ac\kappa_{\rm g}\rho\left(T_{\rm r}^{4}-T_{\rm g}^{4}\right)-ac\kappa_{\rm d}\rho\left(T_{\rm r}^{4}-T_{\rm d}^{4}\right),$ (3)
$\rho\frac{{\rm D}u}{{\rm D}t}=-p\nabla\cdot\boldsymbol{v}+ac\kappa_{\rm g}\rho\left(T_{\rm r}^{4}-T_{\rm g}^{4}\right)-\Lambda_{\rm gd}-\Lambda_{\rm line}+\Gamma_{\rm cr}+\Gamma_{\rm pe}+\Gamma_{\rm H2,g},$ (4)
(Bate & Keto, 2015), where ${\rm D}/{\rm D}t\equiv\partial/\partial
t+\boldsymbol{v}\cdot\nabla$ is the convective derivative. The symbols
$\rho$, $\boldsymbol{v}$, and $p$ represent the material mass density, velocity, and scalar
isotropic pressure, respectively, and $c$ is the speed of light. The total
frequency-integrated radiation density, momentum density (flux), and pressure
tensor are represented by $E$, $\boldsymbol{F}$, and ${\bf P}$, respectively. The flux-limited
diffusion approximation is employed, such that
$\boldsymbol{F}=-c\lambda\nabla E/(\chi\rho)$ and ${\bf P}={\bf f}E$, in
which the flux limiter, $\lambda$, of Levermore & Pomraning (1981) is used and
the Eddington tensor, ${\bf f}$, is computed using the gradient of $E$.
Equations 3–4 are used to evolve the continuum radiation field and the
specific internal energy of the gas, $u$, separately from the dust. The gas,
dust, and continuum radiation fields all have their own associated
temperatures: $T_{\rm g}$, $T_{\rm d}$, and $T_{\rm r}$, respectively. The
dust temperature is set by assuming that the dust is in thermal equilibrium
with the combined ISRF and continuum radiation field, $E$, taking into account
the transfer of energy between the gas and the dust:
$\rho\Lambda_{\rm ISR}+ac\kappa_{\rm d}\rho(T_{\rm r}^{4}-T_{\rm
d}^{4})+\Lambda_{\rm gd}=0,$ (5)
where $\Lambda_{\rm ISR}$ is the heating rate per unit mass of the dust due to
the ISRF. The terms at the end of equation 4 are: $\Lambda_{\rm gd}$, which
accounts for the thermal interaction due to collisions between the gas and the
dust; $\Lambda_{\rm line}$, the cooling rate per unit volume due to atomic and
molecular line emission; $\Gamma_{\rm cr}$, the heating rate per unit volume
due to cosmic rays; $\Gamma_{\rm pe}$, the heating rate per unit volume due to
the photoelectric effect; and $\Gamma_{\rm H2,g}$, the heating rate per unit
volume due to the formation of molecular hydrogen on dust grains. We also
define the radiation constant
$a=4\sigma_{\rm B}/c$, where $\sigma_{\rm B}$ is the Stefan-Boltzmann
constant. The Rosseland mean gas opacity $\kappa_{\rm g}$ is only important
above the dust sublimation temperature, and the mean dust opacity is
$\kappa_{\rm d}$. The total opacity, $\chi$, in equation 2 and the equation
for $F$ is set to the sum of the gas and dust opacities and ignores
scattering. It may be noted that the radiation pressure term (the last term in
equation 2) and the second term on the right-hand side of equation 3 are both
negligible in all of the calculations discussed in this paper.
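For illustration, the flux limiter can be sketched as below. This is a hedged sketch: it shows the widely used rational approximation to the Levermore & Pomraning (1981) limiter, $\lambda(R)=(2+R)/(6+3R+R^{2})$ with $R=|\nabla E|/(\chi\rho E)$; whether sphNG uses this exact form or the original $\lambda=(\coth R-1/R)/R$ is not specified in the text.

```python
def flux_limiter(R):
    """Rational approximation to the Levermore & Pomraning flux limiter,
    lambda(R) = (2 + R) / (6 + 3R + R**2), where R = |grad E| / (chi * rho * E)
    is the dimensionless ratio measuring how sharply E varies over a photon
    mean free path."""
    return (2.0 + R) / (6.0 + 3.0 * R + R * R)

# Limiting behaviour:
#   optically thick (R -> 0):   lambda -> 1/3, recovering the diffusion limit
#   optically thin (R -> inf):  lambda -> 1/R, so |F| -> c E (free streaming)
thick = flux_limiter(0.0)          # = 1/3
thin = flux_limiter(1e8) * 1e8     # ~ 1
```

The limiter thus interpolates smoothly between the diffusion and free-streaming regimes, which is the entire point of the flux-limited diffusion approximation.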
Figure 1: A comparison of the form of the interstellar radiation field (ISRF)
assumed at redshift $z=5$ with that assumed at $z=0$. The only difference is
assumed to be due to the cosmic microwave background radiation (CMBR), which
has a greater temperature at $z=5$ (the contributions of the CMBR to each ISRF
are shown separately).
The hydrogen and helium mass fractions were taken to be $X=0.70$ and $Y=0.28$,
respectively, with the solar metallicity abundance set to be ${\rm
Z}_{\odot}=0.02$. We employed simple chemical models to treat the evolution of
hydrogen and carbon. The abundances of C$^{+}$, neutral carbon, CO, and the
depletion of CO on to dust grains were computed using the model of Keto &
Caselli (2008). For hydrogen, we evolved the atomic and molecular hydrogen
abundances using the same molecular hydrogen formation and dissociation rates
as those used by Glover et al. (2010).
The clouds were assumed to be bathed in an ISRF. This contributes to both the
heating rate of dust grains and photoelectric heating of the gas. The ISRF is
attenuated due to dust extinction inside a molecular cloud, with opacities set
in the same way as in Bate (2014, 2019). To describe the ISRF at the present
epoch (redshift $z=0$) the analytic form of Zucconi, Walmsley & Galli (2001)
is used, but modified to include the ‘standard’ UV component from Draine
(1978) in the energy range $h\nu=5-13.6$ eV. For the calculations presented in
this paper, we modified the functional form of the component of the ISRF that
is due to the cosmic microwave background radiation. Specifically, we take the
temperature of the cosmic microwave background radiation (CMBR) to scale as
$T_{\rm CMBR}(z)=(1+z)T_{\rm CMBR}(0),$ (6)
where $T_{\rm CMBR}(0)=2.73$ K, so that at $z=5$, $T_{\rm CMBR}(z)=16.4$ K. In
Figure 1, we plot the forms of the ISRF that Bate (2019) used for the present
day (redshift $z=0$) and the form that we use for redshift $z=5$. The
contributions of the CMBR to each of the ISRFs are also plotted in the figure.
Note that we assume that the other contributions to the ISRF do not change
with redshift. Essentially we are assuming that the star-forming clouds are in
similar radiative environments, except for the contribution from the CMBR.
This need not be the case – for example, if a star-forming cloud was in a
galactic starburst environment there may also be a stronger high-frequency
component to the ISRF. However, making such additional changes to the ISRF
would add more free parameters to the models. Therefore, for the present paper
we only consider the effect of the hotter/stronger CMBR.
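Equation 6 amounts to a one-line scaling; a quick numerical check (illustrative only, not part of the simulation code) reproduces the quoted $z=5$ temperature:

```python
def t_cmbr(z, t0=2.73):
    """CMBR temperature at redshift z (equation 6): T(z) = (1 + z) * T(0), in K."""
    return (1.0 + z) * t0

t5 = t_cmbr(5.0)   # ~16.38 K, quoted in the text as 16.4 K
t0 = t_cmbr(0.0)   # 2.73 K, the present-day value
```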
### 2.2 Sink particles
As in Bate (2019), the calculations followed the hydrodynamic collapse of each
protostar through the first core phase and into the second collapse phase
(which begins at densities of $\sim 10^{-7}$ g cm$^{-3}$) due to molecular
hydrogen dissociation (Larson, 1969). Sink particles (Bate et al., 1995) were
inserted when the density exceeded $10^{-5}$ g cm$^{-3}$. This density is only
a factor of one hundred lower than the density at which the stellar core
begins to form ($\sim 10^{-3}$ g cm$^{-3}$), and the free-fall time at this
density is only one week.
The sink particle accretion radius was set to $r_{\rm acc}=0.5$ au. Sink
particles interact with the gas only via gravity and accretion, and the
gravitational interaction between sink particles is not softened. Sink
particle trajectories are integrated using the same Runge-Kutta-Fehlberg
integrator that is used for the SPH particles, but with a much lower error
tolerance so that even the closest orbits (semi-major axes less than 1 au) are
maintained for the period of the simulations (i.e. $\sim 10^{5}$ yrs). The
sink particles themselves do not contribute radiative feedback (see Bate,
2012; Jones & Bate, 2018a, for discussion of this limitation). Sink particles
are merged together if they pass within 0.03 au (i.e., $\approx 6$ R⊙).
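The characteristic scales quoted above (a free-fall time of about a week at the sink insertion density, and a merger radius of 0.03 au $\approx 6$ R⊙) can be verified with a short script; the cgs constants are rounded and the script is illustrative only:

```python
import math

G = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
AU = 1.496e13      # astronomical unit [cm]
RSUN = 6.957e10    # solar radius [cm]
DAY = 86400.0      # seconds per day

def free_fall_time(rho):
    """Free-fall time t_ff = sqrt(3 * pi / (32 * G * rho)) for density rho [g cm^-3]."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho))

t_sink = free_fall_time(1e-5) / DAY   # ~7.7 days at the sink insertion density
r_merge = 0.03 * AU / RSUN            # ~6.45 solar radii, quoted as "about 6"
```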
| Redshift | Initial Gas Mass (M⊙) | Metallicity (Z⊙) | No. Stars Formed | No. Brown Dwarfs | Mass of Stars & Brown Dwarfs (M⊙) | Mean Mass (M⊙) | Mean Log-Mass (M⊙) | Median Mass (M⊙) | Stellar Mergers |
|---|---|---|---|---|---|---|---|---|---|
| $z=5$ | 500 | 0.01 | $\geq 89$ | $\leq 43$ | 52.4 | $0.40\pm 0.05$ | $0.16\pm 0.02$ | 0.15 | 17 |
| $z=5$ | 500 | 0.1 | $\geq 81$ | $\leq 33$ | 56.5 | $0.50\pm 0.07$ | $0.20\pm 0.03$ | 0.19 | 7 |
| $z=5$ | 500 | 1.0 | $\geq 65$ | $\leq 2$ | 67.4 | $1.01\pm 0.17$ | $0.49\pm 0.08$ | 0.40 | 1 |
| $z=0$ | 500 | 0.01 | $\geq 90$ | $\leq 52$ | 49.8 | $0.35\pm 0.04$ | $0.15\pm 0.02$ | 0.14 | 17 |
| $z=0$ | 500 | 0.1 | $\geq 125$ | $\leq 49$ | 73.4 | $0.42\pm 0.05$ | $0.17\pm 0.02$ | 0.15 | 11 |
| $z=0$ | 500 | 1.0 | $\geq 171$ | $\leq 84$ | 90.1 | $0.35\pm 0.04$ | $0.16\pm 0.01$ | 0.15 | 14 |
| $z=0$ | 500 | 3.0 | $\geq 193$ | $\leq 65$ | 92.0 | $0.36\pm 0.03$ | $0.18\pm 0.01$ | 0.17 | 6 |
Table 1: The parameters and overall statistical results for each of the three
new radiation hydrodynamical calculations performed for this paper, and the
four calculations from Bate (2019). All calculations were run to 1.20 $t_{\rm
ff}$. Brown dwarfs are defined as having final masses less than 0.075 M⊙ [Note
that the equivalent table in Bate (2019) mistakenly gave incorrect numbers of
stars and brown dwarfs, although the sums of these numbers (the total numbers
of objects) were correct.]. The numbers of stars (brown dwarfs) are lower
(upper) limits because some brown dwarfs were still accreting when the
calculations were stopped. At $z=0$, different metallicities result in no
significant difference in the mean and median masses of the stellar
populations, but with $z=5$ the solar-metallicity calculation produces a much
heavier population of stars. At both redshifts, less gas is converted into
stars in calculations with lower metallicities and the average number of
stellar mergers per star consistently increases with decreasing metallicity.
This table and caption are comparable to Table 1 in Bate (2019).

Figure 2: The
star formation rates obtained for each of the three radiation hydrodynamical
calculations. We plot: the total stellar mass (i.e., the mass contained in
sink particles) versus time (left panel), the number of stars and brown dwarfs
(i.e., the number of sink particles) versus time (centre panel), and the
number of stars and brown dwarfs versus the total stellar mass (right panel).
The different line types are for metallicities of 1/100 (magenta, dot-dashed),
1/10 (red, dashed), and 1 (black, solid) times solar. Time is measured from
the beginning of the calculation in terms of the free-fall time of the initial
cloud (bottom) or years (top). The star formation is delayed in the two low-
metallicity calculations, due to the warmer gas temperatures, but after
$t\approx 1.08$ $t_{\rm ff}$ the star formation rates in terms of the amount
of gas converted to stars are very similar for all of the calculations. There
is also a clear progression such that more objects are produced from a given
amount of gas for lower metallicities (right panel). This figure and caption
are comparable to Fig. 1, Bate (2019).
### 2.3 Initial conditions and resolution
The initial density and velocity structure for each calculation were identical
to those used by Bate (2012, 2014, 2019) – this allows close comparison of the
results with those of Bate (2019) because only one or two parameters are
changed at a time (i.e. the metallicity and/or the redshift). We only provide
a brief description of the initial conditions here – see Bate (2012) for a
more complete description. In short, for each calculation the initial
conditions consisted of a uniform-density, spherical cloud containing 500 M⊙
of molecular gas, with a radius of 0.404 pc (83300 au). This gives an initial
density of $1.2\times 10^{-19}$ g cm-3 ($n_{\rm H}\approx 6\times 10^{4}$
cm-3) and an initial free-fall time of $t_{\rm ff}=6.0\times 10^{12}$ s or
$1.90\times 10^{5}$ years. Although each cloud was uniform in density, we
imposed an initial supersonic ‘turbulent’ velocity field in the same manner as
Ostriker, Stone & Gammie (2001) and Bate, Bonnell & Bromm (2003). The power
spectrum of the divergence-free random Gaussian velocity field was
$P(k)\propto k^{-4}$, where $k$ is the wavenumber. The velocities were set on
a $128^{3}$ uniform grid and the SPH particle velocities were interpolated
from the grid. The velocity field was normalised so that the kinetic energy of
the turbulence was equal to the magnitude of the gravitational potential
energy of the cloud (this results in the initial root-mean-square Mach number
of the turbulence being ${\cal M}=11.2$ at 15 K). The same velocity field was
used for each calculation.
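As a consistency check on the numbers above, the initial density, free-fall time, and Mach number can be recomputed from the quoted cloud mass, radius, and temperature. This is a rough sketch with rounded cgs constants; the mean molecular weight $\mu\approx 2.38$ and the uniform-sphere potential energy $(3/5)GM^{2}/R$ are assumptions consistent with, but not stated in, the text.

```python
import math

G, MSUN, PC = 6.674e-8, 1.989e33, 3.086e18   # cgs constants (rounded)
KB, MH = 1.381e-16, 1.674e-24                # Boltzmann constant, H mass

M = 500.0 * MSUN                 # cloud mass [g]
R = 0.404 * PC                   # cloud radius [cm]

rho = M / (4.0 / 3.0 * math.pi * R ** 3)            # ~1.2e-19 g cm^-3
t_ff = math.sqrt(3.0 * math.pi / (32.0 * G * rho))  # ~6.0e12 s (~1.9e5 yr)

# Turbulent kinetic energy set to |E_grav|, assuming a uniform sphere:
#   0.5 * M * v_rms**2 = (3/5) * G * M**2 / R  =>  v_rms = sqrt(6 G M / (5 R))
v_rms = math.sqrt(6.0 * G * M / (5.0 * R))
c_s = math.sqrt(KB * 15.0 / (2.38 * MH))   # isothermal sound speed at 15 K
mach = v_rms / c_s                          # ~11, cf. the quoted M = 11.2
```

The small difference from the quoted ${\cal M}=11.2$ is consistent with rounding in the constants and in the assumed mean molecular weight.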
As in Bate (2019), the initial gas and dust temperatures were set such that
the dust was in thermal equilibrium with the local interstellar radiation
field. The gas was in thermal equilibrium such that heating from the ISRF and
cosmic rays was balanced by cooling from atomic and molecular line emission
and collisional coupling with the dust. This produces clouds with dust
temperatures that are warmest on the outside and coldest at the centre. For
the solar metallicity cloud, the initial dust temperature ranges from $T_{\rm
dust}=16.3-17.9$ K. For $Z=0.1~{}{\rm Z}_{\odot}$, $T_{\rm dust}=16.8-19.1$ K
and for the lowest metallicity, $T_{\rm dust}=18.6-19.9$ K. The mean initial
gas temperatures for the three calculations were $T_{\rm gas}\approx
16.5,17.6,14.3$ K for metallicities $Z=1,0.1,0.01~{}{\rm Z}_{\odot}$,
respectively.
The calculations used $3.5\times 10^{7}$ SPH particles to model the cloud (the
same resolution as that used by Bate 2019). This resolution is sufficient to
resolve the local Jeans mass throughout the calculation, which is necessary to
model fragmentation of collapsing molecular clouds correctly (Bate & Burkert
1997; Truelove et al. 1997; Whitworth 1998; Boss et al. 2000; Hubber, Goodwin
& Whitworth 2006). Currently there is no consensus as to the resolution that
is necessary and sufficient to capture disc fragmentation (see Section 2.3 of
Bate 2019 for discussion of this point and references for further reading).
The fact that the criteria for disc fragmentation are not well understood
should be kept in mind as a caveat for the results presented below.
Figure 3: Column density and temperature snapshots at three different times
($t=1.00,1.10,1.20$ $t_{\rm ff}$) for the $z=5$ calculation with a
metallicity of 1/100 times solar. From top to bottom, the rows give column
density, and the mass-weighted gas, dust, and protostellar radiation
temperatures. The colour scales are logarithmic. The column density scale
covers $0.03-30$ g cm$^{-2}$, and the temperature scales cover $5-100$ K. The
stars and brown dwarfs are plotted using white circles. Gas temperatures in
the dense parts of the cloud are much hotter than the dust temperature due to
the low metallicity and poor gas-dust coupling. This figure and caption are
comparable to Fig. 2, Bate (2019). Animations of the evolution shown in this
figure are provided in the online Additional Supporting Information that
accompanies this paper.

Figure 4: Column density and temperature snapshots at three different times
($t=1.00,1.10,1.20$ $t_{\rm ff}$) for the $z=5$ calculation with a
metallicity of 1/10 times solar. From top to bottom, the rows give column
density, and the mass-weighted gas, dust, and protostellar radiation
temperatures. The colour scales are logarithmic. The column density scale
covers $0.03-30$ g cm$^{-2}$, and the temperature scales cover $5-100$ K. The
stars and brown dwarfs are plotted using white circles. The gas temperatures
in the dense parts of the cloud tend to be hotter than the dust temperature,
but less so than in the 1/100 metallicity case. This figure and caption are
comparable to Fig. 3, Bate (2019). Animations of the evolution shown in this
figure are provided in the online Additional Supporting Information that
accompanies this paper.

Figure 5: Column density and temperature snapshots at three different times
($t=1.00,1.10,1.20$ $t_{\rm ff}$) for the $z=5$ calculation with solar
metallicity. From top to bottom, the rows give column density, and the
mass-weighted gas, dust, and protostellar radiation temperatures. The colour
scales are logarithmic. The column density scale covers $0.03-30$ g
cm$^{-2}$, and the temperature scales cover $5-100$ K. The stars and brown
dwarfs are plotted using white circles. The gas temperatures are generally
lower than in the low-metallicity calculations. This figure and caption are
comparable to Fig. 4, Bate (2019). Animations of the evolution shown in this
figure are provided in the online Additional Supporting Information that
accompanies this paper.
## 3 Results
### 3.1 Cloud evolution
Each of the three radiation hydrodynamical calculations was evolved to
$t=1.20~{}t_{\rm ff}$ (228,300 yrs). During this period they produced
different numbers of stars and brown dwarfs (modelled using sink particles
with accretion radii of 0.5 au). The initial conditions and the statistical
properties of the stars and brown dwarfs produced by each $z=5$ calculation
are summarised in Table 1, along with the properties of the $z=0$ calculations
that were previously published by Bate (2019).
At both redshifts there is a clear trend that lower metallicity clouds have
converted less gas into stars at the same time. For the present-day
calculations, Bate (2019) showed that this was due to a delay in the star
formation that was caused by more metal-poor gas at intermediate densities
being warmer (i.e. greater thermal pressure support). Fig. 2 shows how the
star formation rates evolve with time for the $z=5$ calculations, both in
terms of the number of stars and brown dwarfs and their total mass. As was the
case in the $z=0$ calculations, it is clear from the left panel of these
graphs that the trend of lower total stellar mass with lower metallicity at
the end of the calculations ($t=1.20~{}t_{\rm ff}$) arises primarily because
the star formation is delayed with lower metallicity. After $t=1.08~{}t_{\rm
ff}$ (205,500 yrs), the star formation rates (the slopes of the lines in the
left panel) are almost identical. But there is a delay in the star formation
getting started that increases for lower metallicities. At the end of the
calculations, almost 1/7 of the initial mass has been transformed into stars
in the solar metallicity calculation, but in the lowest metallicity
calculation only 1/10 of the mass has been converted to stars. Note that for
the same metallicity, the $z=5$ calculations have generally converted less gas
into stars than the $z=0$ calculations after the same amount of time – the
amount of mass is similar for $Z=0.01~{\rm Z}_{\odot}$ (actually 5 per cent
higher), but substantially less for $Z\geq 0.1~{\rm Z}_{\odot}$. As we will
see below, this is because the gas temperatures at the same metallicity tend
to be higher for $z=5$ than for $z=0$ due to the enhanced CMBR.
However, while the trend of the total stellar mass with metallicity is the
same for the $z=0$ and $z=5$ calculations, the dependence of the number of
objects with metallicity actually reverses for $z=5$ compared to $z=0$.
Whereas the present-day star formation calculations of Bate (2019) produced
more objects at higher metallicities at $t=1.20~{}t_{\rm ff}$, in the $z=5$
calculations higher metallicity results in fewer objects being formed. The
greater number of objects at higher metallicity in the present-day star
formation calculations is due to lower gas temperatures, caused by the
combination of greater extinction shielding the inner parts of the cloud from
the ISRF and enhanced dust cooling. However, the latter effect starts to
saturate at very high metallicity ($Z\approx 1-3~{}Z_{\odot}$) when the cloud
is highly optically thick and begins to trap its own infrared radiation, so
that at $z=0$ the calculations with $Z=Z_{\odot}$ and $Z=3~{}Z_{\odot}$
produced similar numbers of objects: 255 and 258, respectively (Table 1; Bate
2019). By contrast, at redshift $z=5$ the ability of the gas to cool is
diminished by the higher CMBR temperature. Moreover, at higher temperatures
the clouds are more optically thick to their own radiation (i.e. the Rosseland
mean opacity increases with temperature). These effects combine to reverse the
present-day metallicity trend. The $Z=0.01~{}{\rm Z}_{\odot}$ and
$Z=0.1~{}{\rm Z}_{\odot}$ calculations produce reasonably similar numbers of
objects (132 and 114, respectively), but the $Z={\rm Z}_{\odot}$ calculation
produces only 67 objects.
Figure 6: Phase diagrams of temperature vs gas density at the end of each of
the $z=5$ calculations ($t=1.20$ $t_{\rm ff}$). The upper row of panels gives
the gas temperature, while the lower row gives the dust temperature. The
colour scale in the upper panels gives the mean abundance of gas-phase CO
relative to hydrogen at that temperature and density. Note that the
calculations include a prescription for the freeze out of CO onto dust grains,
but they do not treat thermal desorption of CO from dust grains at $T_{\rm
dust}\gtrsim 20$ K. In the dust temperature plots, the grey scale is
proportional to the logarithm of the number of fluid elements (i.e., darker
regions contain more SPH particles). The low-density gas ($n_{\rm
H}\lesssim 10^{8}~{\rm cm}^{-3}$) tends to be warmer at low metallicity than
at high metallicity due to the poor cooling, but the high-density gas
($n_{\rm H}\gtrsim 10^{10}~{\rm cm}^{-3}$) tends to be cooler at low
metallicity due to the reduced optical depths. The dust is almost all warmer
than 10 K, as opposed to the present-day star formation calculations of Bate
(2019) in which dust temperatures as low as $5-6$ K were reached with
metallicity $Z\gtrsim{\rm Z}_{\odot}$. Since dust cooling is crucial for gas
cooling at high densities and metallicities, this leads to significantly
greater gas temperatures for $Z\gtrsim 0.1~{\rm Z}_{\odot}$ at $z=5$ than at
$z=0$. This figure and caption are comparable to Fig. 7, Bate (2019).
The mean and median masses of the objects that are produced in each of the
clouds are provided in columns 7–9 of Table 1. Bate (2019) found that these
statistical properties of the stars/brown dwarfs produced by the calculations
at $z=0$ were statistically indistinguishable over the full range of
metallicities from $Z=0.01-3~{}{\rm Z}_{\odot}$. Although at a given time more
mass was converted into stars at higher metallicity, more objects were
produced from that gas leading to a protostellar mass function that was
independent of the metallicity. However, at $z=5$, although at a given time
more mass is again converted into stars at higher metallicity (left panel of
Fig. 2), the number of objects produced by that gas consistently decreases with
increasing metallicity (centre and right panels of Fig. 2). The result is that
the typical stellar mass increases with increasing metallicity (Table 1).
There is only a small difference between the stellar populations produced by
the $Z=0.01~{}{\rm Z}_{\odot}$ and $Z=0.1~{}{\rm Z}_{\odot}$ calculations, but
the characteristic mass doubles (either the mean or median mass) between the
$Z=0.1~{}{\rm Z}_{\odot}$ and $Z={\rm Z}_{\odot}$ calculations.
In all of the calculations there are some protostellar mergers (see the last
column of Table 1). These occur when two sink particles pass within $6~{}{\rm
R}_{\odot}$ of each other. This merger radius was chosen because low-mass
protostars that accrete at high rates ($\dot{M}_{*}\sim 10^{-6}-10^{-5}$ M⊙
yr-1) are believed to have radii of $2-3~{}{\rm R}_{\odot}$ (e.g. Larson,
1969; Hosokawa & Omukai, 2009). At both redshifts the average number of
stellar mergers per star consistently increases with decreasing metallicity.
This is due to the lower opacities at lower metallicity that allow faster
cooling of very high-density gas and, therefore, more small-scale
fragmentation (c.f. Bate, 2019).
In the right panel of Fig. 2, we plot the number of stars and brown dwarfs
versus their total mass. There appears to be a change in the nature of the
star formation in the solar-metallicity calculation once $\approx 45$ M⊙ of
gas has been converted into stars: beyond this point, fewer new objects are
created per unit mass of gas. This is reminiscent of the calculations of
Krumholz, Klein & McKee (2011) in which the star-forming clouds were so dense
and the protostars so close together that the radiative feedback from the
young protostars heating the high-density gas inhibited the formation of new
protostars, resulting in a top-heavy mass function. The same effect seems to
be happening in the solar-metallicity calculation here, but for different
reasons: the combination of the inherently hotter clouds due to the enhanced
CMBR and more metal-rich high-density gas trapping radiation and being less
able to cool is inhibiting the formation of new protostars. Thus, the
collapsing gas primarily gets accreted by existing protostars, boosting their
mass, rather than producing new objects.
The reason for the delay of the star formation at low metallicities is that
the intermediate-density, metal-poor gas is hotter. In Figs. 3 to 5 we provide
images of the column densities and mass-weighted temperatures from the three
$z=5$ calculations. The times ($t=1.00,1.10,1.20$ $t_{\rm ff}$) have been
chosen to cover the main periods of star formation. In each figure, the
second, third, and fourth rows give the separate gas, dust, and radiation
temperature distributions, respectively. The radiation temperature is that of
the radiation field from the collapsing gas and dust, modelled using flux-
limited diffusion; the ISRF that also heats the dust is treated separately.
For $z=0$, Bate (2019) found a clear progression that both gas and dust
temperatures were hotter at lower metallicities for $Z\leq{\rm Z}_{\odot}$.
For example, in the $z=0$ calculations at higher metallicities the inner parts
of the clouds were more shielded from the ISRF, resulting in typical dust
temperatures on large spatial scales that decreased with increasing
metallicity. This is not the case at $z=5$. The dust temperatures are much
more uniform due to the heating from the hotter CMBR (which is long-wavelength
radiation that is much more able to penetrate the clouds than the
high-frequency components of the ISRF). This is further illustrated in the
lower panels of Fig. 6, which show phase diagrams of the gas and dust
temperatures as functions of density that can be directly compared to those in
Bate (2019).
The upper panels of Fig. 6 provide the gas-phase CO abundance relative to
hydrogen using a colour scale. Note that the chemical model treats the freeze
out of CO on to dust grains and desorption of CO by cosmic rays, but it does
not treat the thermal desorption that occurs at dust temperatures above 20 K
(e.g. in protostellar discs). This is neglected because the only role of the
chemistry in the calculations is to provide realistic gas temperatures and the
primary coolant at the densities above which CO freeze out becomes important
is usually the dust rather than the CO. In the lower panels of Fig. 6 the grey
scale is proportional to the logarithm of the amount of dust at each
temperature and density. In the $z=5$ calculations, the dust temperatures at
low and intermediate densities are $T_{\rm dust}\approx 15-20$ K and even in
the solar metallicity calculation the coldest dust barely reaches down to
$T_{\rm dust}\approx 10$ K. By contrast, at $z=0$ the dust was able to cool to
much lower temperatures of $T_{\rm dust}\approx 10$ K at $Z=0.01~{}{\rm
Z}_{\odot}$, $T_{\rm dust}\approx 8$ K at $Z=0.1~{}{\rm Z}_{\odot}$, $T_{\rm
dust}\approx 6$ K at $Z\geq{\rm Z}_{\odot}$ (Bate, 2019).
The inability of the dust to cool in the $z=5$ calculations means that the gas
is also unable to cool as it did in the $z=0$ calculations. At low gas
densities ($n_{\rm H}\lesssim 10^{3}$ cm$^{-3}$) the spread of gas
temperatures (Fig. 6) is reasonably similar in the $z=0$ and $z=5$
calculations (c.f. Fig. 7 of Bate, 2019). At both redshifts, at intermediate
gas densities ($n_{\rm H}\approx 10^{4}-10^{7}$ cm$^{-3}$) the gas tends to be
somewhat hotter at lower metallicity (see Fig. 6) because metal-poor gas is
less able to cool than metal-rich gas due to the reduction of the atomic and
molecular abundances (which dominate the cooling at low densities). When
shocks and other compressive motions in the clouds increase the thermal
energy, the low-metallicity gas cannot radiate this heat away as easily and,
thus, the gas is significantly hotter. This hotter gas gives greater pressure
support against gravity to the metal-poor clouds and delays the collapse of
the low-metallicity clouds, as seen in Fig. 2.
Comparing the temperature ranges of gas at intermediate densities at the same
metallicity but different redshift, the hottest gas is very similar, but the
coldest gas is colder in the $z=0$ calculations than in the $z=5$
calculations.
At higher densities this difference in the lower range of gas temperatures
between calculations with the same metallicity but at different redshifts
becomes even more significant, particularly at high metallicity. The cooling
of high-density molecular gas with $Z\gtrsim 0.01~{}Z_{\odot}$ is primarily
dominated by dust continuum cooling; gas and dust are thermally coupled by
collisions. Dust and gas temperatures become similar above number densities
$n_{\rm H}\approx 10^{6}\left(\frac{Z}{Z_{\odot}}\right)^{-1}~{\rm cm}^{-3},$ (7)
as can be seen in Fig. 6 (see also Omukai, 2000; Glover & Clark, 2012; Bate,
2019). In the calculations of present-day star formation ($z=0$), the cooler
dust temperatures at higher metallicities produced substantially lower gas
temperatures at higher metallicity. This trend with increasing metallicity is
almost completely absent in the $z=5$ calculations. In Fig. 6 at high
densities ($n_{\rm H}\geq 10^{7}$ cm$^{-3}$) there is very little difference in the
spread in $T_{\rm gas}$-$n_{\rm H}$ for different metallicities because the dust
temperatures are very similar and the much warmer dust is not as effective at
cooling the gas.
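The scaling in equation (7) is simple enough to evaluate directly. A minimal sketch (the function name is ours, not from the paper) of the density above which gas and dust temperatures become thermally coupled, for the three metallicities studied here:

```python
# Dust-gas thermal-coupling threshold of equation (7):
# n_H ~ 1e6 * (Z / Z_sun)^-1 cm^-3. Above this number density,
# collisions couple the gas temperature to the dust temperature.

def coupling_density(z_over_zsun):
    """Approximate n_H [cm^-3] above which T_gas tracks T_dust."""
    return 1.0e6 / z_over_zsun

for z in (0.01, 0.1, 1.0):
    print(f"Z = {z:5.2f} Z_sun -> n_H ~ {coupling_density(z):.1e} cm^-3")
```

At $Z=0.01~Z_{\odot}$ this pushes the coupling density up to $\sim 10^{8}$ cm$^{-3}$, which is why dust cooling only takes over at much higher densities in the metal-poor clouds.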
It is this comparatively ineffective dust cooling that is fundamentally
responsible for the differences in the stellar properties between the $z=0$
and $z=5$ calculations. For the same metallicity ($Z\geq 0.1~{}{\rm
Z}_{\odot}$), the dust embedded in the high-density gas is substantially
warmer in the $z=5$ calculations than in the $z=0$ calculations. Furthermore,
the magnitude of this difference increases with increasing metallicity because
at $z=0$ the high-density gas becomes colder and colder at higher metallicity
(see Fig. 7 of Bate, 2019). Thus, the gas is less prone to fragment into
protostars in the $z=5$ calculations, producing fewer protostars for the same
amount of gas that is converted into stars and, therefore, the characteristic
stellar masses are higher in the $z=5$ calculations than in the $z=0$
calculations with the same metallicity. Since the $z=0$ calculations produced
protostars with a characteristic mass that was independent of metallicity,
this change in the thermodynamics means that the $z=5$ calculations instead
produce a characteristic stellar mass that increases with increasing
metallicity.
As the star formation proceeds, the radiation fields generated by the gas
collapsing into the gravitational potential wells of the protostars become
stronger (bottom rows of Figs. 3 to 5). These primarily heat the gas and dust
with the highest densities. The effect of this protostellar feedback is much
less obvious on the gas and dust temperatures in Figs. 3 to 5 than it was in
the equivalent figures of Bate (2019) at $z=0$ because of the generally higher
dust and gas temperatures due to the hotter CMBR (i.e. the protostellar
radiation fields tend to be swamped on scales $\gtrsim 0.02$ pc by the greater overall level of radiation from
the CMBR).
Figure 7: Snapshots of the mass-weighted abundances of atomic hydrogen,
ionised carbon, neutral carbon, and CO at the end of each of the three
calculations with differing metallicity ($t=1.20$ $t_{\rm ff}$). There are
three groups of panels, one for each metallicity of 1/100, 1/10, and 1 times
solar. For each group, the top left panel gives the relative abundance HI/H,
the top-right panel the relative abundance C+/C, the lower-left panel the
relative abundance C0/C, and the lower-right panel the relative abundance of
gas phase CO to the total carbon abundance. The stars and brown dwarfs are
plotted using white dots. The colour scale of relative abundances is
logarithmic ranging from $10^{-3}$ to unity. Generally, the outer parts of the
clouds contain atomic hydrogen and C+. The inner parts of the cloud are
primarily composed of H2 and carbon is in the form of CO. At the highest
densities, CO freezes out onto grains (hence the drop in gas-phase abundance).
At lower metallicities, there is a greater fraction of atomic hydrogen and
more of the carbon is in C+ because there is less extinction of the ISRF by
dust and more self-shielding of H2. This figure and caption are comparable to
Fig. 6, Bate (2019).
Figure 8: Histograms giving the distributions of the masses of the stars and
brown dwarfs produced by the three radiation hydrodynamical calculations, each
at $t=1.20t_{\rm ff}$. The dark blue histograms are used to denote those
objects that have stopped accreting (defined as accreting at a rate of less
than $10^{-7}$ M⊙ yr$^{-1}$), while those objects that were still accreting when
the calculations were stopped are plotted using light blue. The Kroupa (2001)
and Chabrier (2005) parameterisations of the IMF are also plotted. The
vertical dashed line marks the stellar/brown dwarf boundary. The two low-
metallicity calculations produce mass functions that are in reasonable
agreement with the Chabrier (2005) fit to the observed IMF for individual
objects, but the solar metallicity calculation is noticeably deficient in
brown dwarfs and low-mass stars. This figure and caption are comparable to
Fig. 8, Bate (2019).
### 3.2 Chemistry
The present-day star formation calculations of Bate (2019) were the first to
combine a model of the diffuse ISM with radiative transfer to treat the
radiation produced by the collapsing gas. The calculations keep track of
whether the hydrogen is in atomic or molecular form, and whether carbon is in
the form of ionised carbon, C+, neutral carbon, C0, or molecular in the form
of CO. There is also a model for the freezing out of CO onto dust grains (see
Bate & Keto, 2015). Modelling the chemistry is important because the formation
of molecular hydrogen is a potential source of heating, while carbon atoms and
molecules are important coolants.
Following Bate (2019), in Fig. 7 we provide snapshots of the chemical make up
of the clouds at the end of each of the three $z=5$ calculations. The
chemistry varies most in the low-density gas and in the outer parts of the
clouds, so we show the large scales. The carbon abundances are scaled relative
to the total carbon abundance in each calculation (i.e. the absolute carbon
abundance is 10 times lower in the $Z=0.1~{}{\rm Z}_{\odot}$ calculation than
in the calculation at solar metallicity).
If the panels in Fig. 7 are compared with the corresponding panels in Figure 6
of Bate (2019), very little difference is seen for the same metallicity. This
is because it is the higher energy part of the ISRF that dictates the
chemistry of the low-density gas, not the CMBR at $z=0-5$. In both sets of
calculations, the hydrogen deep within the clouds is almost entirely
molecular. The atomic hydrogen abundance is only high in the outer parts of
the clouds where there is low extinction of the ISRF by dust and less self-
shielding provided by the molecular hydrogen. Although the atomic hydrogen
abundance within the cloud is a little higher at low metallicity, even with
$Z=0.01~{}{\rm Z}_{\odot}$ the hydrogen is 97.8 percent molecular at the end
of the calculation.
Similarly, carbon is almost entirely in the form of C+ in the outer parts of
the clouds, because it is exposed to the ISRF, while deep within the clouds
the main gas phase form of carbon is CO. Neutral atomic carbon is found at
intermediate depths, but it is never very abundant. The C+ fraction has a
greater dependence on metallicity than the fraction of atomic hydrogen.
### 3.3 The statistical properties of the stellar populations
In this section, we compare the statistical properties of stars and brown
dwarfs produced by the three calculations. We also compare our results with
those from the present-day star formation calculations of Bate (2019). As
in Bate (2019), we study the stellar mass distributions, stellar
multiplicities, the distributions of separations of multiple stellar systems,
and the mass ratio distributions of binary systems. In earlier papers
performing calculations of a similar scale (e.g. Bate, 2009a, 2012, 2014),
other statistics such as the orbital eccentricity distributions of multiple
systems, triple system orbits, stellar accretion histories and kinematics, and
stellar closest encounters were also examined. We do not present data on these
properties here, however, because we find no evidence that they vary
significantly with metallicity or redshift between $z=0$ and $z=5$.
#### 3.3.1 The initial mass function
In Fig. 8, we compare the differential mass functions at the end of each of
the three radiation hydrodynamical calculations with different metallicities.
We compare them with parameterisations of the observed Galactic IMF, given by
Chabrier (2005), and Kroupa (2001). There is a clear difference between the
mass distributions, with the solar metallicity case producing stars with a
higher typical mass than the two sub-solar metallicity calculations and very
few brown dwarfs.
Figure 9: The cumulative stellar/brown dwarf mass distributions produced by the three radiation hydrodynamical calculations with metallicities of $Z=0.01~{}{\rm Z}_{\odot}$ (magenta dot-dashed line), $Z=0.1~{}{\rm Z}_{\odot}$ (red dashed line), and $Z={\rm Z}_{\odot}$ (black solid line). We also plot the Chabrier (2005) IMF (black dotted line). The vertical dashed line marks the stellar/brown dwarf boundary. The form of the stellar mass distribution is similar for all three calculations, but the median mass increases with metallicity, particularly for the solar metallicity calculation which produces no low-mass ($M_{*}\lesssim 0.05$ M⊙) brown dwarfs at all. This figure and caption are comparable to Fig. 9, Bate (2019).
Mass Range [M⊙] | Single | Binary | Triple | Quadruple
---|---|---|---|---
Redshift $z=5$, Metallicity $Z=0.01~{}{\rm Z}_{\odot}$
$M<0.03$ | 11 | 0 | 0 | 0
$0.03\leq M<0.07$ | 20 | 1 | 0 | 0
$0.07\leq M<0.10$ | 6 | 0 | 0 | 0
$0.10\leq M<0.20$ | 11 | 2 | 1 | 0
$0.20\leq M<0.50$ | 9 | 2 | 1 | 4
$0.50\leq M<0.80$ | 4 | 0 | 3 | 1
$0.80\leq M<1.2$ | 2 | 1 | 1 | 0
$M>1.2$ | 2 | 3 | 1 | 2
Redshift $z=5$, Metallicity $Z=0.1~{}{\rm Z}_{\odot}$
$M<0.03$ | 9 | 0 | 0 | 0
$0.03\leq M<0.07$ | 16 | 0 | 0 | 0
$0.07\leq M<0.10$ | 6 | 0 | 0 | 0
$0.10\leq M<0.20$ | 11 | 2 | 1 | 0
$0.20\leq M<0.50$ | 9 | 0 | 2 | 1
$0.50\leq M<0.80$ | 1 | 1 | 0 | 2
$0.80\leq M<1.2$ | 1 | 1 | 0 | 1
$M>1.2$ | 0 | 1 | 6 | 2
Redshift $z=5$, Metallicity $Z={\rm Z}_{\odot}$
$M<0.03$ | 0 | 0 | 0 | 0
$0.03\leq M<0.07$ | 2 | 0 | 0 | 0
$0.07\leq M<0.10$ | 5 | 0 | 0 | 0
$0.10\leq M<0.20$ | 10 | 1 | 0 | 0
$0.20\leq M<0.50$ | 7 | 2 | 1 | 0
$0.50\leq M<0.80$ | 2 | 1 | 0 | 0
$0.80\leq M<1.2$ | 3 | 0 | 1 | 0
$M>1.2$ | 3 | 2 | 3 | 2
All masses, 3 calculations | 150 | 20 | 21 | 15
Table 2: The numbers of single and multiple systems for different primary mass ranges at the end of the three $z=5$ radiation hydrodynamical calculations with different metallicities. This table and caption are comparable to Table 2, Bate (2019).
Object Number | Mass | $t_{\rm form}$ | Accretion Rate
---|---|---|---
| [M⊙] | [$t_{\rm ff}$] | [M⊙ yr$^{-1}$]
1 | 2.4253 | 0.8606 | $4.00\times 10^{-5}$
2 | 7.1796 | 0.8616 | $9.83\times 10^{-5}$
3 | 7.1896 | 0.8699 | $9.88\times 10^{-5}$
4 | 2.1061 | 0.8801 | $8.23\times 10^{-6}$
5 | 3.6424 | 0.9140 | $1.28\times 10^{-4}$
Table 3: For each of the three calculations, we provide online tables of the
stars and brown dwarfs that were formed, numbered by their order of formation,
listing the mass of the object at the end of the calculation, the time (in
units of the initial cloud free-fall time) at which it began to form (i.e.
when a sink particle was inserted), and the accretion rate of the object at
the end of the calculation (precision $\approx 10^{-7}$ M⊙ yr-1). The first
five lines of the table for the $z=5$ solar metallicity calculation are
provided above. This table and caption are comparable to Table 3, Bate (2019).
In Fig. 9, we compare the cumulative mass functions at the end of the three
calculations, along with the parameterisation of the observed Galactic IMF of
Chabrier (2005). There is a consistent trend of higher stellar masses being
produced at higher metallicity, although there is only a small difference
between the mass distributions with $Z=0.01~{}{\rm Z}_{\odot}$ and
$Z=0.1~{}{\rm Z}_{\odot}$. In the most metal-poor calculation, approximately
1/3 of the objects are brown dwarfs, while in the most metal-rich calculation
only 2 out of 67 objects (3%) are brown dwarfs, and one of these is still
accreting when the calculation is stopped (we take brown dwarfs to have masses
$M<0.075~{}{\rm M}_{\odot}$).
Performing Kolmogorov-Smirnov tests on each pair of distributions shows that
although both the mean and median masses are slightly higher in the
$Z=0.1~{}{\rm Z}_{\odot}$ calculation than in the $Z=0.01~{}{\rm Z}_{\odot}$
calculation (Table 1), formally, the two lowest metallicity distributions are
consistent with random sampling from a single underlying distribution
(probability 38%). However, the solar metallicity calculation is significantly
different, having probabilities of $4\times 10^{-4}$ and $6\times 10^{-5}$ of
being randomly sampled from the same distribution as the $Z=0.1~{}{\rm
Z}_{\odot}$ and $Z=0.01~{}{\rm Z}_{\odot}$ calculations, respectively.
The two low-metallicity distributions are both in reasonable agreement with
the Chabrier (2005) IMF. Kolmogorov-Smirnov tests that compare the numerical
distributions to Chabrier’s parameterisation give probabilities of 2.6%
($Z=0.01~{}{\rm Z}_{\odot}$) and 20% ($Z=0.1~{}{\rm Z}_{\odot}$) of the
numerical distributions being randomly drawn from the Chabrier (2005) IMF.
However, the solar metallicity distribution is inconsistent with being drawn
from the Chabrier (2005) IMF, with a Kolmogorov-Smirnov probability of only
$1\times 10^{-4}$. The differences essentially arise from the fact that the
median mass of the $Z=0.1~{}{\rm Z}_{\odot}$ distribution is in agreement with
that of the Chabrier (2005) IMF (0.2 M⊙), while the median mass for the
$Z=0.01~{}{\rm Z}_{\odot}$ distribution is a little lower and the median of
the $Z={\rm Z}_{\odot}$ distribution is much greater (see Table 1). Of course,
these probabilities take no account of the observational uncertainty in the
Galactic median mass.
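The two-sample Kolmogorov-Smirnov comparisons described above can be reproduced from the online mass tables. A self-contained sketch in pure Python (rather than any particular statistics library); the asymptotic p-value approximation is only a rough guide for samples as small as these:

```python
import math

def ks_statistic(a, b):
    """Maximum distance between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        x = min(a[i], b[j])
        # Advance both samples past every value <= x so that ties are
        # handled consistently before the CDFs are compared.
        while i < na and a[i] <= x:
            i += 1
        while j < nb and b[j] <= x:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d

def ks_pvalue(d, na, nb):
    """Asymptotic two-sided p-value with a small-sample correction
    (Numerical-Recipes style); only approximate at these sample sizes."""
    en = math.sqrt(na * nb / (na + nb))
    lam = (en + 0.12 + 0.11 / en) * d
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * (k * lam) ** 2)
                  for k in range(1, 101))
    return min(1.0, max(0.0, p))
```

Fed with the stellar masses from two of the calculations, `ks_pvalue(ks_statistic(m1, m2), len(m1), len(m2))` gives the probability of the two samples being drawn from the same underlying distribution.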
Finally, as noted by Bate (2019), the calculations produce protostellar mass
functions (PMFs) rather than IMFs (Fletcher & Stahler, 1994a, b; McKee &
Offner, 2010) because many of the objects are still accreting when the
calculations are stopped and the star formation has not finished (Fig. 8).
This is particularly true of the solar metallicity calculation. Bate (2012)
showed that the form of the distribution of stellar masses in a calculation
similar to those performed here (but not treating the diffuse interstellar
medium or having separate gas and dust temperatures) did not change
significantly with time during his calculation. Although the maximum stellar
mass and the total number of stars both increased with time, the
characteristic (median) mass and the form of the mass function remained
similar due to the production of new protostars. This approximate invariance
with time is also true of the calculations of Bate (2019) and the calculations
presented here, even in the $z=5$ solar metallicity case.
Figure 10: Multiplicity fraction as a function of primary mass at the end of each of the calculations with different metallicities. The thick solid lines give the continuous multiplicity fractions from the calculations computed using a sliding log-normal average and the shaded areas give the approximate $1\sigma$ confidence intervals around the solid line. The blue squares with error bars and/or upper/lower limits give the observed multiplicity fractions from the surveys of Close et al. (2003), Basri & Reiners (2006), Fischer & Marcy (1992), Raghavan et al. (2010), Duquennoy & Mayor (1991), Kouwenhoven et al. (2007), Rizzuto et al. (2013), Preibisch et al. (1999) and Mason et al. (1998), from left to right. Note that the error bars of the Duquennoy & Mayor (1991) results have been plotted using light blue since this survey has been superseded by Raghavan et al. (2010). The observed trend of increasing multiplicity with primary mass is reproduced by all calculations. There is no strong difference between the multiplicities obtained with different metallicities, but there seems to be a consistent trend that low-mass stars ($0.1-0.5$ M⊙) have somewhat higher multiplicities with decreasing metallicity. This figure and caption are comparable to Fig. 10, Bate (2019).
Object Numbers | No. of | No. in | $M_{\rm max}$ | $M_{\rm min}$ | $M_{1}$ | $M_{2}$ | $q$ | $a$ | P | $e$ | Relative Spin | Spin1 | Spin2
---|---|---|---|---|---|---|---|---|---|---|---|---|---
| Objects | System | | | | | | | | | or Orbit | -Orbit | -Orbit
| | | | | | | | | | | Angle | Angle | Angle
| | | [M⊙] | [M⊙] | [M⊙] | [M⊙] | | [au] | [yr] | | [deg] | [deg] | [deg]
22, 30 | 2 | 4 | 1.240 | 0.870 | 1.240 | 0.870 | 0.702 | 0.70 | 0.40 | 0.029 | 19 | 63 | 48
36, 42 | 2 | 3 | 0.876 | 0.614 | 0.876 | 0.614 | 0.701 | 3.23 | 4.76 | 0.195 | 54 | 59 | 18
( 36, 42), 43 | 3 | 3 | 0.876 | 0.393 | 1.490 | 0.393 | 0.264 | 17.15 | 51.74 | 0.071 | 20 | – | –
( 24, 34), ( 22, 30) | 4 | 4 | 1.400 | 0.870 | 2.738 | 2.110 | 0.771 | 417.97 | 3880 | 0.576 | – | – | –
Table 4: For each of the three calculations, we provide online tables of the
properties of the multiple systems at the end of each calculation. The
structure of each system is described using a binary hierarchy. For each
‘binary’ we give the masses of the most massive star $M_{\rm max}$ in the
system, the least massive star $M_{\rm min}$ in the system, the masses of the
primary $M_{1}$ and secondary $M_{2}$, the mass ratio $q=M_{2}/M_{1}$, the
semi-major axis $a$, the period $P$, the eccentricity $e$. For binaries, we
also give the relative spin angle, and the angles between orbit and each of
the primary’s and secondary’s spins. For triples, we give the relative angle
between the inner and outer orbital planes. For binaries, $M_{\rm max}=M_{1}$
and $M_{\rm min}=M_{2}$. However, for higher-order systems $M_{1}$ gives the
combined mass of the most massive sub-system (which may be a star, binary, or
a triple) and $M_{2}$ gives the combined mass of the least massive sub-system
(which also may be a star, a binary, or a triple). Multiple systems of the
same order are listed in order of increasing semi-major axis. As examples, we
provide selected lines from the table from the $z=5$ solar metallicity
calculation. This table and caption are comparable to Table 4, Bate (2019).
#### 3.3.2 Multiplicity as a function of primary mass
The formation mechanisms of multiple systems and the evolution of their
properties (e.g. separations) have been discussed in some detail by Bate (2012)
and Bate (2018) and will not be repeated here. Our primary purpose is to
determine whether or not variation of the metallicity and/or redshift alters
stellar properties significantly.
As in Bate (2009a, 2012), and subsequent papers, to quantify the fraction of
stars and brown dwarfs that are in multiple systems, we use the multiplicity
fraction, $mf$, defined as a function of stellar mass as
$mf=\frac{B+T+Q}{S+B+T+Q},$ (8)
where $S$ is the number of single stars within a given mass range and $B$,
$T$, and $Q$ are the numbers of binary, triple, and quadruple systems,
respectively, for which the primary has a mass in the same mass range. As
discussed by Hubber & Whitworth (2005) and Bate (2009a), this measure of
multiplicity is relatively insensitive to both observational incompleteness
(e.g., if a binary is found to be a triple it is unchanged) and further
dynamical evolution (e.g., if an unstable quadruple system decays the
numerator only changes if it decays into two binaries). We also use the same
method for identifying multiple systems as that used by Bate (2009a) and Bate
(2012). We identify binary, triple, and quadruple stellar systems, but we
ignore higher-order multiples (e.g. a quintuple system consisting of a triple
and a binary orbiting one another is counted as one triple and one binary). We
choose to stop at quadruple systems since it is likely that many higher order
systems would be unstable and would eventually decay.
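Equation (8) is straightforward to apply to the counts in Table 2. A minimal sketch using the all-masses totals summed over the three calculations ($S=150$, $B=20$, $T=21$, $Q=15$):

```python
# Multiplicity fraction of equation (8): mf = (B+T+Q)/(S+B+T+Q),
# where S, B, T, Q count single, binary, triple, and quadruple systems
# whose primary falls in the chosen mass range.

def multiplicity_fraction(s, b, t, q):
    return (b + t + q) / (s + b + t + q)

# All-masses totals from the last row of Table 2.
print(f"overall mf = {multiplicity_fraction(150, 20, 21, 15):.2f}")
```

The result, $\approx 0.27$, is consistent with the per-calculation overall multiplicities of 26-29 per cent quoted in the text.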
Table 2 provides the numbers of single and multiple star and brown dwarf
systems produced by each calculation. Bate (2019) provided electronic ASCII
tables of the properties of each of the stars, brown dwarfs, and multiple
systems produced by the calculations discussed in that paper, and we do the
same here. We present tables of the masses, formation times, and final
accretion rates of the stars and brown dwarfs (see Table 3 for an example).
The tables are given file names such as Table3_Stars_z5_Metal01.txt for the
$Z=0.1~{}{\rm Z}_{\odot}$ calculation. We also produce tables listing the
properties of each multiple system (see Table 4 for an example). These tables
are given file names such as Table4_Multiples_z5_Metal1.txt for the $Z={\rm
Z}_{\odot}$ calculation.
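The online ASCII tables can be read back with a few lines of code. This reader is hypothetical: the whitespace-separated layout and field order are assumed from the columns described in the Table 3 caption (object number, final mass, formation time, final accretion rate), not taken from the actual files:

```python
# Hypothetical reader for the online star tables (file names such as
# Table3_Stars_z5_Metal01.txt). Columns assumed: object number,
# final mass [Msun], formation time [t_ff], accretion rate [Msun/yr].

def parse_star_table(lines):
    stars = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and any comment lines
        num, mass, t_form, mdot = line.split()
        stars.append({"number": int(num), "mass": float(mass),
                      "t_form": float(t_form), "mdot": float(mdot)})
    return stars

# Sample rows taken from the Table 3 excerpt above.
sample = ["1  2.4253  0.8606  4.00e-05",
          "2  7.1796  0.8616  9.83e-05"]
print(parse_star_table(sample)[0]["mass"])  # prints 2.4253
```

In practice one would pass `open("Table3_Stars_z5_Metal01.txt")` directly, since a file object iterates over its lines.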
The overall multiplicities for stars and brown dwarfs of all masses from each
of the calculations are 26, 27, and 29 per cent for the calculations with
metallicities of 1/100, 1/10, and 1 times solar, respectively. The typical
$1\sigma$ uncertainties are $\pm 5$ per cent. Therefore, there is no evidence
for a significant dependence of the overall multiplicity on metallicity.
However, because the mass distributions vary between the calculations and
multiplicity varies with stellar mass, this comparison can be misleading.
In Fig. 10, for each of the three simulations we compare the multiplicity
fraction of the stars and brown dwarfs as functions of the stellar mass of the
primary with the values obtained from various Galactic observational surveys
(see the figure caption). The results from each of the calculations have been
plotted using a thick solid line that gives the continuous multiplicity
fraction computed using a sliding log-normal-weighted average. The width of
the log-normal average is half a decade in stellar mass. The shaded region
gives the approximate $1\sigma$ (68%) uncertainty on the sliding log-normal
average.
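The continuous curves in Fig. 10 come from a sliding log-normal-weighted average. A sketch of one way to implement such an average; the Gaussian width `sigma_dex` is our assumption, since the text states only that the window is half a decade wide in stellar mass:

```python
import math

def sliding_mf(target_mass, primary_masses, is_multiple, sigma_dex=0.25):
    """Multiplicity fraction around target_mass, with each system
    weighted by a Gaussian in log10(mass) centred on target_mass.
    is_multiple[i] is 1 for a binary/triple/quadruple, 0 for a single."""
    wsum = msum = 0.0
    for m, flag in zip(primary_masses, is_multiple):
        w = math.exp(-0.5 * (math.log10(m / target_mass) / sigma_dex) ** 2)
        wsum += w
        msum += w * flag
    return msum / wsum
```

Evaluating `sliding_mf` over a grid of target masses traces out a smooth multiplicity-mass curve from a modest number of discrete systems, which is why this estimator is preferred to binned fractions for samples of this size.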
Figure 11: The distributions of separations (semi-major axes) of multiple
systems with stellar primaries ($M_{*}>0.1$ M⊙) produced by the calculations
with different metallicities. The dark, medium, and light histograms give the
orbital separations of binaries, triples, and quadruples, respectively (each
triple contributes two separations, each quadruple contributes three
separations). The solid curve gives the M-dwarf separation distribution
(scaled to match the same area as each of the histograms) from the M-dwarf
survey of Janson et al. (2012), and the dashed curve gives the separation
distribution for solar-type primaries of Raghavan et al. (2010). Note that
since most of the simulated systems are low-mass, the distributions are
expected to match the Janson et al. distribution better than that of Raghavan
et al. The vertical dotted line gives the resolution limit of the calculations
as set by the accretion radii of the sink particles (0.5 au). This figure and
caption are comparable to Fig. 11, Bate (2019).
All of the calculations produce multiplicity fractions that increase strongly
with increasing primary mass, in qualitative agreement with observed stellar
systems. The actual values of the multiplicities are in good agreement with
the observed multiplicities of Galactic field stars for the lowest metallicity
calculation, and they are also similar to the multiplicities obtained for all
of the present-day star calculations of Bate (2019) with metallicities ranging
from $Z=0.01-3$ Z⊙. However, unlike the present-day star formation
calculations of Bate (2019), the multiplicities of the three calculations at
redshift $z=5$ do seem to depend on metallicity. There is no consistent trend
in the multiplicities for solar-mass and more massive stars, but for primaries
with masses $M_{1}\lesssim 0.1-0.5$ M⊙ (i.e. M-dwarfs) the multiplicities
consistently increase with decreasing metallicity such that the multiplicities
at metallicity $Z=0.01$ Z⊙ are approximately twice as high as for $Z={\rm
Z}_{\odot}$.
Only a single very-low-mass (VLM; $M_{1}<0.1~{}{\rm M}_{\odot}$) multiple
system is produced across all three simulations (see Table 2), and this is in
the lowest metallicity calculation. Fig. 10 shows that the VLM multiplicity
for metal-poor ($Z=0.01~{}{\rm Z}_{\odot}$) star formation at redshift $z=5$
is similar to the Galactic value. However, at higher metallicities we find
that the VLM multiplicity produced by star formation at redshift $z=5$ is
lower than both the Galactic field value and that obtained from the present-day
star formation calculations of Bate (2019).
Figure 12: The cumulative semi-major axis (separation) distributions of the
multiple systems produced by the three $z=5$ calculations with different
metallicities. All orbits are included in the plot (i.e. two separations for
triple systems, and three separations for quadruple systems), and the lines
for different metallicities include both stars and brown dwarfs. The vertical
dashed line marks the resolution limit of the calculations as determined by
the accretion radii of the sink particles. Performing Kolmogorov-Smirnov tests
on pairs of distributions shows that the distributions for the different
metallicities are statistically indistinguishable. This figure and caption are
comparable to Fig. 12, Bate (2019).
#### 3.3.3 Separation distributions of multiple systems
In Fig. 11, we present the separation (semi-major axis) distributions of the
stellar ($M_{1}>0.1$ M⊙) multiples (there is only one VLM multiple, the binary
in the $Z=0.01~{}{\rm Z}_{\odot}$ calculation). The distributions are compared
with the log-normal distributions from the surveys of M-dwarfs by Janson et
al. (2012) and solar-type stars by Raghavan et al. (2010). Although binaries
as close as 0.03 au (6 R⊙) can be modelled without being merged, gas is not
modelled within the sink particle accretion radii of 0.5 au. The lack of gas
to provide dissipation on small scales inhibits the formation of very close
systems (Bate, Bonnell & Bromm, 2002).
There is no obvious dependence of the shape of the distributions on
metallicity (in all cases the peak occurs in the $10-100$ au range of semi-
major axis). The absolute numbers are lower with increasing metallicity due to
the smaller number of stars produced at higher metallicity (Table 1). The peak
occurs at a similar separation to the peak in the Galactic population of
stellar multiples. The spread appears slightly wider than for Galactic M-dwarf
systems and narrower than Galactic solar-type stars, consistent with a mixture
of primary stellar masses that is dominated by low-mass stars.
In Fig. 12, we provide the cumulative separation distributions for each of the
calculations (this time including the VLM binary from the $Z=0.01~{}{\rm
Z}_{\odot}$ calculation). Kolmogorov-Smirnov tests performed on each pair of
distributions show that they are statistically indistinguishable. They are
also each in good agreement with the separation distribution of M-dwarfs as
determined by Janson et al. (2012), and they tend to have slightly closer
separations than the solar-type stars (Raghavan et al., 2010). In the
$Z=0.01~{}{\rm Z}_{\odot}$ calculation, most of the multiple systems have
primary masses $M_{1}=0.1-0.8~{}{\rm M}_{\odot}$, while in each of the other
two calculations roughly half of the multiples are low-mass and half have
primary masses $M_{1}>0.8~{}{\rm M}_{\odot}$.
For present-day star formation, Bate (2019) found a weak possible trend for
multiple systems to be preferentially tighter at lower metallicities, though
the small numbers of systems limited the confidence in the result. There is no
sign of such a trend for the systems formed at redshift $z=5$.
Figure 13: The cumulative mass ratio distributions of pairs of objects
produced by the three $z=5$ calculations with different metallicities. Pairs
include both binaries and bound pairs that are components of triple or
quadruple systems. We also plot the observed mass ratio distribution of solar-
type stars from Raghavan et al. (2010). There is a clear and consistent trend
for the more metal-rich systems to have pairs consisting of more equal-mass
components. However, performing Kolmogorov-Smirnov tests on each pair of the
simulated distributions, only the $Z=0.01~{}{\rm Z}_{\odot}$ and $Z={\rm
Z}_{\odot}$ distributions are inconsistent with being drawn from the same
underlying distribution (Kolmogorov-Smirnov probability of 1.3 percent) due
to the relatively small numbers of pairs that are produced by the
calculations. This figure and caption are comparable to Fig. 14, Bate (2019).
#### 3.3.4 Mass ratio distributions of pairs
In Fig. 13 we plot the cumulative mass ratio distributions for each
calculation (including all primary masses, and including the single VLM binary
from the $Z=0.01~{}{\rm Z}_{\odot}$ calculation). These distributions include
binary systems, and pairs that are the inner components of triple and
quadruple systems. A triple system composed of a binary with a wider companion
contributes the mass ratio from the closest pair, as does a quadruple composed
of a triple with a wider companion. A quadruple composed of two pairs orbiting
each other contributes two mass ratios – one from each of the pairs. We also
plot the cumulative observed mass-ratio distributions for M-dwarfs from Janson
et al. (2012), and for solar-type primaries from Raghavan et al. (2010).
There is a clear progression in the mass ratio distributions from the $z=5$
calculations such that the more metal-rich calculations tend to have pairs
consisting of more equal-mass components. The mass ratio distribution of the
$Z=0.1~{}{\rm Z}_{\odot}$ is most similar to the mass ratio distribution of
M-dwarfs from Janson et al. (2012). The $Z=0.01~{}{\rm Z}_{\odot}$ calculation
has a greater fraction of $q<0.5$ pairs, while the $Z={\rm Z}_{\odot}$
calculation has almost no pairs with $q<0.5$. Despite the clear progression
with metallicity, formally only the $Z=0.01~{}{\rm Z}_{\odot}$ and $Z={\rm
Z}_{\odot}$ distributions are inconsistent with being drawn from the same
underlying distribution (Kolmogorov-Smirnov probability of 1.3 percent) due to
the relatively small numbers of pairs that are produced by the calculations.
For present-day ($z=0$) star formation, Bate (2019) found that all of the mass
ratio distributions from the four calculations ranging in metallicity from
$Z=0.01-3~{}{\rm Z}_{\odot}$ were statistically indistinguishable, although
again the lowest metallicity calculation had the greatest fraction of low
mass-ratio systems. All of the distributions were formally consistent with
being drawn from the Raghavan et al. (2010) distribution, except for the
highest metallicity calculation ($Z=3~{}{\rm Z}_{\odot}$) which had a deficit
of low mass ratio pairs. Thus, the general trends found (a greater fraction of
low mass ratio pairs in lower metallicity calculations) are the same for both
present-day star formation and for redshift $z=5$. We have also performed
Kolmogorov-Smirnov tests on the mass ratio distributions for the $z=0$ and
$z=5$ distributions that were obtained using the same metallicity (i.e.
$Z=0.01,0.1,1~{}{\rm Z}_{\odot}$). In each case the mass ratio distributions
are indistinguishable at the two redshifts, although again the sensitivity of
this test is limited by the relatively small numbers of pairs that are
produced by the $z=5$ simulations.
Figure 14: The frequencies of close companions (semi-major axes $a<10$ au)
for low-mass stars with masses $M_{*}=0.1-1.5$ M⊙ that are produced by the
three $z=5$ calculations with different metallicities (magenta filled circles
and errorbars). We compare the results with the equivalent frequencies from
the numerical simulations of present-day ($z=0$) star formation by Bate (2019)
(blue open squares with errorbars) and Bate (2014) (red triangles and dashed
errorbars), and with the observed values for stellar masses $M_{*}=0.6-1.5$ M⊙
from Moe et al. (2019) (black filled squares and errorbars). The error bars
for the numerical simulations give 95 percent confidence intervals. The values
of the metallicity have been slightly offset horizontally for the Bate (2014,
2019) results for clarity. The results from the simulations are consistent
with the observed anti-correlation between the frequency of close companions
and metallicity, and the numerical values from the present-day calculations
are in reasonable agreement with the observed values. For the redshift $z=5$
calculations, the frequencies are on average approximately a factor of two
lower and the anti-correlation is less clear. This figure and caption are
comparable to Fig. 15, Bate (2019).
#### 3.3.5 Close binaries
Figure 15: The mass ratio distributions of stellar pairs with primary masses
$M_{1}=0.1-1.5$ M⊙ that are produced by the four calculations with different
metallicities. In the left panel, we give the distributions for pairs with
semi-major axes $a<10$ au, while in the right panel we give the distributions
for pairs with $a>10$ au. Due to the small numbers of objects, the probability
that any two of these distributions are drawn from the same underlying
distribution never falls below 9 percent in the left plot. However, the close
pairs (left panel) do display a consistent trend of more low mass ratio
systems as the metallicity is reduced, whereas there is no clear trend for the
wider pairs (right panel). This figure and caption are comparable to Fig. 16,
Bate (2019).
Bate (2019) found that although the stellar multiplicities for present-day
star formation with metallicities ranging from $Z=0.01-3~{}{\rm Z}_{\odot}$
were indistinguishable when considering all separations, if the frequency of
close pairs with semi-major axes less than 10 au were considered then the
binary frequency was anti-correlated with metallicity. Bate (2019)
investigated this particular separation range because three observational
papers that had been published during the preceding year had found that the
close binary fractions for solar-type stars were anti-correlated with
metallicity (Badenes et al., 2018; El-Badry & Rix, 2019; Moe et al., 2019).
This type of anti-correlation had been claimed before (Grether & Lineweaver,
2007; Raghavan et al., 2010), but the small numbers of stars involved and
potential observational biases had limited the confidence in the results.
In addition, Bate (2019) found that close pairs (semi-major axes, $a<10$ au)
displayed a consistent trend such that there were more low mass-ratio systems
for lower metallicities, but that there was no such trend for wider pairs.
In the redshift $z=5$ calculations analysed in this paper, we have already
seen that the multiplicities of low-mass stars (M-dwarfs) appear to be lower
at higher metallicity (Section 10) even considering all separation ranges.
Here we examine just the close pairs ($a<10$ au). We also restrict our study
to systems with primaries with masses $M_{1}\approx 0.1-1.5$ M⊙, which is larger
than the range of solar-type primaries, $M_{1}\approx 0.6-1.5$ M⊙, that was
considered by Moe et al. (2019). We use a larger range because we have a
comparatively small number of systems (this larger range was also used by Bate
2019).
In Fig. 14, we present the frequencies of close pairs (semi-major axes $<10$
au) in systems with primary masses $M_{1}=0.1-1.5$ M⊙ for each of our
simulations. We compare these to the fractions that were presented in the
equivalent figure by Bate (2019) for the present-day star formation
simulations analysed by that paper, the earlier calculations of Bate (2014),
and also to the observational results of Moe et al. (2019). The results from
the $z=5$ simulations are consistent with an anti-correlation between close
binary frequency and metallicity, but the results are less convincing than for
the Bate (2014, 2019) simulations. The close companion fractions are generally
lower for the $z=5$ calculations than for the present-day star formation
simulations. In other words, star formation at $z=5$ results in a frequency of
close pairs that is lower than for present-day star formation by about a
factor of two at the same metallicity. On top of this, there is evidence for
an anti-correlation with metallicity, since the frequency in the
solar-metallicity calculation is about 1/3 of that obtained from the
$Z=0.1~{}{\rm Z}_{\odot}$ calculation.
However, the frequencies are similar for the two lowest metallicity
calculations. Overall, the results are consistent with an anti-correlation
between close binary frequency and metallicity for star formation at both
redshift $z=0$ and $z=5$, but larger numbers of systems are required to be
certain.
Following Bate (2019), in Fig. 15, we give the mass ratio distributions of
pairs with low-mass stellar primaries ($M_{1}=0.1-1.5$ M⊙) for close pairs
(semi-major axes $a<10$ au; left panel) and wider pairs (right panel). For the
close pairs, the lowest metallicity calculation produces more low-mass ratio
systems than the intermediate metallicity calculation, whereas there is no
clear difference for the wider systems. This is the same trend that was found
by Bate (2019) for present-day star formation. However, performing
Kolmogorov-Smirnov tests on all pairs of cumulative distributions for the
$z=5$ calculations shows that, in each case, the two distributions are
consistent with being drawn from the same underlying distribution, owing to
the small numbers of systems.
As discussed by Bate (2019), the fact that the anti-correlation between close pair
frequency and metallicity was found in both the Bate (2014) and Bate (2019)
sets of calculations implies that the physical mechanism originates from the
metallicity dependence of the opacity used in the radiative transfer (which
applied in both sets of calculations), rather than from the separate treatment
of gas and dust temperatures or the model of the diffuse interstellar medium
that was employed in the calculation of Bate (2019) and in this paper. Bate
(2019) found that two different effects of the dust opacities combined to
produce the anti-correlation. First, at higher metallicities the higher dust
opacities gave less small-scale fragmentation because dense gas is more
optically thick and less able to cool quickly. Second, at higher metallicities
the higher dust opacities result in slower cooling rates of first hydrostatic
cores (FHSCs) and, thus, longer lifetimes. Since FHSCs typically have radii of
$\approx 5$ au (Larson, 1969), FHSCs need to undergo collapse to form stellar
cores before close binaries ($a<10$ au) can be formed. Longer lifetimes mean
that even if small scale fragmentation produces multiple FHSCs, those with
high-metallicity have longer to merge into a single FHSC, thereby producing a
single protostar rather than a close binary. Conversely, low-metallicity FHSCs
can quickly collapse to produce stellar cores, potentially avoiding merging
with another nearby FHSC and producing a close pair. It is also worth noting
that Bate (2019) found that most close pairs were formed by cloud or filament
fragmentation, not by disc fragmentation.
In the redshift $z=5$ calculations, there is still evidence for an anti-
correlation between close binary frequency and metallicity, but it may be
weaker than for present-day star formation and the close binary frequency may
be lower (by approximately a factor of two) at a given metallicity at $z=5$
than at $z=0$. This may be because the first of the two mechanisms leading to
the anti-correlation is weakened at higher redshift. In other words, the
slower cooling rates of the FHSCs at higher metallicity leading to more FHSC
mergers at high metallicity would still operate (giving fewer close pairs at
higher metallicity). But with the higher CMBR temperature there is less small-
scale fragmentation of high-density gas because the dense gas is less able to
cool, particularly at high metallicity.
## 4 Discussion
Bate (2019) performed radiation hydrodynamical calculations of present-day
star formation in molecular clouds whose metallicity was varied from
$Z=0.01-3~{}{\rm Z}_{\odot}$ and found that the statistical properties of the
stars were largely independent of the metallicity (see also Myers et al.,
2011; Bate, 2014). The resulting stellar mass distributions, overall stellar
multiplicity, and mass ratio distributions of bound pairs were statistically
indistinguishable. Furthermore, the stellar mass distributions were all very
similar to the parameterisation of the Galactic IMF of Chabrier (2005). The
only clear differences that were found were an anti-correlation between the
frequency of close ($a<10$ au) protostellar pairs and metallicity, in
qualitative agreement with observations of Galactic field stars, a preference
for a greater fraction of unequal-mass close pairs with decreasing
metallicity, and an increase in the number of protostellar mergers per star
with decreasing metallicity.
By contrast, in this paper we have presented results from simulations that are
identical to those of Bate (2019) except that the cosmic microwave background
radiation was hotter because they were performed at a redshift of $z=5$. With
the hotter CMBR we find that the statistical properties of the stars do vary
with metallicity. In particular, at $Z=0.01~{}{\rm Z}_{\odot}$ the stellar
mass distributions obtained at redshifts $z=0$ and $z=5$ are indistinguishable
(and both are similar to the observed Galactic IMF), but at $z=5$ increasing
the metallicity results in an increase in the characteristic stellar mass to
the point that brown dwarfs are very rare at solar metallicities. Furthermore,
the multiplicities of M-dwarfs are approximately a factor of two lower at
solar metallicity than at $Z=0.01~{}{\rm Z}_{\odot}$. In agreement with the
present-day star formation calculations, the mass ratios of stellar pairs are
more unequal and there are more stellar mergers per star at lower metallicity.
The anti-correlation between the frequencies of close pairs and metallicity
still seems to be present, but it is less clear than in the $z=0$
calculations.
Perhaps the most surprising result from the two papers is that the different
level of CMBR qualitatively changes the dependence of the stellar mass
function on metallicity: the characteristic stellar mass is predicted to be
independent of metallicity at $z=0$, but to increase with metallicity at $z=5$.
Or, alternately, at low metallicity ($Z=0.01~{}{\rm Z}_{\odot}$) the stellar
mass function doesn’t vary with redshift, but at solar-metallicity the
characteristic stellar mass increases with increasing redshift due to the
hotter CMBR.
In the following sections, we examine the predictions of past theoretical
studies for the dependence of the IMF on redshift, we discuss the implications
of these results for how the IMF may vary continuously with redshift and
metallicity, and we examine the observational evidence for variations in the
IMF and how these results may aid in their interpretation.
### 4.1 Comparison with previous theoretical models
It is common in the literature to postulate that the characteristic stellar
mass is proportional to the typical Jeans mass in a molecular cloud. The Jeans
mass depends on temperature: for example, it can be expressed as varying with
temperature and density as $\propto T^{3/2}\rho^{-1/2}$, or with temperature
and pressure as $\propto T^{2}P^{-1/2}$, or with temperature and surface
density as $\propto T^{2}\mu^{-1}$ (Larson, 1998). Given the strong dependence
of the Jeans mass on temperature, if the characteristic stellar mass scales
with the Jeans mass it is natural to postulate that the characteristic mass of
the IMF should increase with redshift, because the cosmic microwave
background radiation temperature scales with redshift as $(1+z)$ (equation 6)
and this exceeds the lowest temperatures found in Galactic molecular clouds
($\sim 6-8$ K) by
redshift $z=2-3$. Similarly, because metal-poor gas is less able to cool than
metal-rich gas when it is optically thin, one can make the argument that
metal-poor gas is generally expected to be hotter and, therefore, the
characteristic stellar mass would also increase with decreasing metallicity.
Indeed, both these arguments are made by Larson (1998) when he discusses the
possibility of a time-varying IMF, and assuming that metallicity generally
decreases with increasing redshift, both of these effects act together to
suggest that the characteristic stellar mass should increase with increasing
redshift (see also Baugh et al. 2005; Narayanan & Davé 2012).
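The crossover redshift quoted above follows from simple arithmetic. A minimal sketch (the temperatures are the scalings quoted in the text; the factor computed at the end is purely illustrative, not a simulation result):

```python
# Arithmetic behind the argument above: the CMBR temperature
# T_CMBR = (2.73 K)(1+z) exceeds the coldest Galactic molecular-cloud
# gas (~6-8 K) by z ~ 2-3, and the Jeans mass scales as T^{3/2} at
# fixed density.
T0 = 2.73  # present-day CMBR temperature, in K

def t_cmbr(z):
    """CMBR temperature at redshift z."""
    return T0 * (1.0 + z)

def z_crossover(t_gas):
    """Redshift at which the CMBR first exceeds a given gas temperature."""
    return t_gas / T0 - 1.0

print(z_crossover(6.0), z_crossover(8.0))  # ~1.2 and ~1.9

# Illustrative only: if the characteristic mass simply tracked the Jeans
# mass at fixed density, raising the temperature floor from 8 K to
# T_CMBR(z=5) = 16.4 K would increase it by (16.4/8)^{3/2} ~ 2.9.
print((t_cmbr(5.0) / 8.0) ** 1.5)
```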
However, as we have seen from the results in this paper, the dependence of
the characteristic stellar mass is apparently more complicated than this. For
one thing, although metal-poor gas is poor at cooling compared to metal-rich
gas at low densities, at high densities, when the cooling of the gas becomes dominated
by collisions with dust grains, metal-poor gas remains optically thin to
higher densities and, therefore, is able to cool more quickly, leading to
lower gas temperatures and, potentially, more fragmentation than metal-rich
gas. Bate (2014) demonstrated this by performing simple spherically-symmetric
calculations of collapsing 1-M⊙ Bonnor-Ebert spheres with radiative transfer
and varying opacities, showing that gas temperatures reduce by factors of 5–7
at densities ranging from $10^{-13}$ to $10^{-9}$ g cm-3 when the opacity is
reduced from 3 to 0.01 times the opacity of solar-metallicity gas (see Fig. 23
of Bate, 2014).
These opposing effects of metallicity (i.e. typically hotter low-density gas,
but cooler high-density gas) lead to a more complicated dependence of the
characteristic mass on redshift and metallicity than that postulated by Larson
(1998).
Very recently, Sharda & Krumholz (2022) have investigated analytically how the
characteristic stellar mass may vary with metallicity, decreasing from a
comparatively high value at low metallicity
($Z\lesssim 10^{-4}~{}{\rm Z}_{\odot}$) to a much lower value once dust
dominates the cooling ($Z\gtrsim 10^{-2}~{}{\rm Z}_{\odot}$). They argue that
this may
help to explain the bottom-heavy IMFs apparently observed in massive
elliptical galaxies (see Section 4.3.1 below). However, they do not consider
at all how a change in redshift (i.e. the level of CMBR) may also alter the
characteristic mass and its metallicity dependence. The results from this
paper show that both redshift and metallicity need to be taken into account.
In another recent paper, Guszejnov et al. (2022) studied how environment,
initial conditions, and feedback may alter the stellar IMF. Among other
results, they find that the characteristic stellar mass varies with
metallicity and with the level of the ISRF. They find that with a typical
Galactic ISRF, lower metallicity tends to raise the gas temperature, and this
in turn produces an increased characteristic stellar mass. This result is at
odds with the results of Bate (2019), who found that under present-day
conditions the characteristic stellar mass is quite independent of the
metallicity (ranging from 1/100 to 3 times the solar value). This may be due
to the limited resolution of the Guszejnov et al. (2022) study – as mentioned
above, Bate (2019) found that at low metallicity the effects of intermediate-
density gas being hotter were offset by enhanced cooling at high densities
that led to enhanced small-scale fragmentation. On the other hand, Guszejnov
et al. (2022) found that subjecting solar-metallicity clouds to a stronger
ISRF increased the characteristic stellar mass, which is in qualitative
agreement with the results we present here.
### 4.2 Extrapolation of the results to different redshifts
Fundamentally, the numerical results from this paper and Bate (2019) point to
an IMF that is relatively independent of metallicity (ranging from
$Z=0.01-3~{}{\rm Z}_{\odot}$) for present-day star formation ($z=0$), to one
where the characteristic stellar mass increases with metallicity at high
redshift (assuming all other parameters remain the same). The main cause of
this different qualitative behaviour is that for present-day star formation
increasing the metallicity results in higher extinction of the ISRF and colder
temperatures deep within star-forming clumps. By contrast, at redshift $z=5$
the CMBR component of the ISRF that is able to penetrate into the densest
regions of molecular cloud cores is hotter. There is essentially no difference
in the range of temperatures of high-density gas
($n_{\rm H}\gtrsim 10^{8}$ cm-3) at low metallicity
($Z=0.01~{}{\rm Z}_{\odot}$) between the $z=0$ and $z=5$ calculations. In the
$z=0$ calculations the vast majority of the gas is much warmer than the
temperature of the CMBR at $z=5$ ($T_{\rm CMBR}=(2.73~{}{\rm K})(1+z)=16$ K),
ranging between $T\approx 10-200$ K for densities
$n_{\rm H}\gtrsim 10^{8}-10^{11}$ cm-3. Therefore,
the hotter CMBR at $z=5$ has little effect and so the low-metallicity IMFs at
$z=0$ and $z=5$ are almost identical. However, at high metallicity (e.g.
$Z={\rm Z}_{\odot}$), for $z=0$ a substantial fraction of the high-density gas
has temperatures below 15 K and temperatures drop as low as $T\approx 6$ K.
Therefore, when otherwise identical calculations are carried out using the
hotter CMBR at $z=5$ high-density gas is kept substantially hotter and
fragmentation is reduced. This leads to fewer protostars being formed and
higher characteristic stellar masses because the protostars that do form are
able to accrete more mass.
Based on these results, we can interpolate to determine how the IMF is likely
to vary with metallicity and redshift between $z=0$ and $z=5$, and extrapolate
for $z>5$. Since the CMBR temperature scales as $T_{\rm CMBR}=(2.73~{}{\rm
K})(1+z)$ we expect the IMF to be essentially identical to the $z=0$ cases up
to $z=1$, since the CMBR temperature is lower than the minimum gas
temperatures found in the $z=0$ calculations ($T_{\min}\approx 5$ K even for
$Z=3~{}{\rm Z}_{\odot}$). There would probably also be no significant change
for $z=2$ either, because although a small amount of gas may otherwise be
colder than $T_{\rm CMBR}$ it would be a negligible amount. We base this on
the fact that the minimum gas temperature for high-density gas is raised by
about a factor of two between $z=0$ and $z=5$ for $Z=0.1~{}{\rm Z}_{\odot}$
but this results in a very small increase in the characteristic stellar mass
(Table 1). For $z=3$, we would expect a very slight increase in the
characteristic stellar mass (i.e. from $\approx 0.15$ to $\approx 0.20$ M⊙ in
terms of the mean of the logarithm of the stellar masses) for
$Z\gtrsim 3~{}{\rm Z}_{\odot}$, similar to that which is seen between
$Z=0.01~{}{\rm Z}_{\odot}$ and $Z=0.1~{}{\rm Z}_{\odot}$ at $z=5$, because the
minimum high-density gas temperature would be raised from $T\approx 5$ K to
$T\approx 11$ K. By redshift $z=4$ this magnitude of shift in the
characteristic stellar mass should be apparent for solar metallicity, while
for $Z=3~{}{\rm Z}_{\odot}$ the characteristic mass is likely to have doubled
to $\approx 0.3$ M⊙.
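The interpolation above rests on comparing $T_{\rm CMBR}(z)=(2.73~{}{\rm K})(1+z)$ against the minimum gas temperatures quoted in the text. A short sketch of that comparison (the gas temperatures in the comments are the values quoted above, not computed here):

```python
# CMBR temperature as a function of redshift, for comparison with the
# minimum high-density gas temperatures quoted in the text.
T0 = 2.73  # present-day CMBR temperature, in K

for z in range(6):
    print(z, f"{T0 * (1 + z):.2f} K")
# z=1: 5.46 K  -- comparable to the z=0 minimum of ~5 K, so little change
# z=3: 10.92 K -- close to the ~11 K floor quoted for Z >~ 3 Z_sun
# z=5: 16.38 K -- well above the ~6-15 K reached by metal-rich gas at z=0
```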
Beyond $z=5$ we would expect the characteristic stellar masses of the IMFs for
all metallicities $Z>0.01~{}{\rm Z}_{\odot}$ to increase with redshift. As
with the $z=5$ calculations, the greatest effect would initially be found in
the highest metallicity clouds. However, in the limit that $T_{\rm CMBR}$
becomes very large, say $\gtrsim 50$ K, which is roughly the median
temperature of the high-density ($n_{\rm H}\gtrsim 10^{8}-10^{11}$ cm-3) gas
in all of the calculations (Fig. 6), all the IMFs are likely to become
similar (and all with characteristic stellar masses significantly larger than
the Galactic value). Since the Jeans mass scales $\propto T^{3/2}\rho^{-1/2}$,
it would be reasonable to expect that the characteristic mass (for clouds
similar to those modelled here) would increase $\propto(1+z)^{3/2}$ in the
limit of high redshift.
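The $(1+z)^{3/2}$ limit suggested above can be written down directly. A minimal sketch (the reference redshifts used here are arbitrary illustrations, not values from the calculations):

```python
# High-redshift limit suggested above: if the characteristic mass tracks
# the Jeans mass M_J ~ T^{3/2} rho^{-1/2}, with T -> T_CMBR ~ (1+z) at
# fixed density, then the characteristic mass grows as (1+z)^{3/2}.
def mass_ratio(z, z_ref):
    """Ratio of characteristic masses between redshifts z and z_ref
    in this limiting regime."""
    return ((1.0 + z) / (1.0 + z_ref)) ** 1.5

# Illustrative: between z=10 and z=20 the characteristic mass would grow
# by a factor of ~2.6 in this limit.
print(round(mass_ratio(20, 10), 2))
```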
Finally, a few remarks of caution. First, all the calculations discussed here
are performed in the regime where dust cooling dominates the cooling of high-
density molecular gas. We have intentionally limited our studies to
metallicities $Z\geq 0.01~{}{\rm Z}_{\odot}$ for this reason. In the limit of
no metals, i.e., Population III stars, gas cooling is very different (e.g.
Bromm et al., 1999; Abel et al., 2000). To model the very-low-metallicity
regime (e.g. $Z=10^{-4}-10^{-2}~{}{\rm Z}_{\odot}$) accurately probably
requires both metal line cooling of the gas and dust cooling to be modelled
accurately (e.g. Sharda & Krumholz, 2022). Second, to date we have performed
such calculations using only a single type of initial condition – relatively
dense initial clouds ($n_{\rm H}\approx 6\times 10^{4}$ cm-3) with decaying
turbulence and an initial root-mean-square Mach number of ${\cal M}=11.2$ at
15 K. Jones & Bate (2018b) studied how the IMF may vary with cloud density for
present-day star formation and found that the characteristic stellar mass
varied relatively weakly with cloud density $\propto\rho^{-0.2}$. However,
other effects may come into play that we do not include, such as the effects
of feedback from protostellar outflows (e.g. Mathew & Federrath, 2021; Tanvir,
Krumholz & Federrath, 2022). All in all, there is still a great deal of work
to be done in mapping out how the IMF may vary over a wide range of initial
conditions.
### 4.3 Observational evidence for a varying IMF
#### 4.3.1 Galactic stellar populations
Although there is little direct evidence for variation of the IMF (Bastian,
Covey & Meyer, 2010), there are a number of observations that provide indirect
evidence in favour of variable IMFs. Larson (1998) lists several points
including: no metal-free stars have ever been found (implying that few, if
any, of the first stars had masses $M\lesssim 0.8~{}{\rm M}_{\odot}$); the
‘G-dwarf problem’ in which
the numbers of metal-poor stars in the solar neighbourhood are deficient
compared to the predictions of simple chemical models (van den Bergh, 1962;
Schmidt, 1963); the large total mass of heavy elements in large clusters of
galaxies (Elbaz et al., 1995; Loewenstein & Mushotzky, 1996; Gibson, 1997;
Gibson & Matteucci, 1997); and the observed increases of the metallicities
(Faber 1973; Worthey, Faber & Gonzalez 1992; Vazdekis et al. 1996, 1997;
Trager et al. 2000; Gallazzi et al. 2006; Graves, Faber & Schiavon 2009a, b;
Kuntschner et al. 2010; McDermid et al. 2015) and mass-to-light ratios
(Guzman, Lucey & Bower 1993; Jorgensen, Franx & Kjaergaard 1996; Burstein et
al. 1997; Thomas et al. 2011; Cappellari et al. 2012, 2013; Dutton, Mendel &
Simard 2012; Wegner et al. 2012; Dutton et al. 2013; Tortora, Romanowsky &
Napolitano 2013; McDermid et al. 2014; Davis & McDermid 2017) of the central
regions of early-type galaxies with increasing central velocity dispersion
(i.e. mass). Recently, gravitational lensing has also been used to measure the
total projected mass within the Einstein radius to obtain mass-to-light ratios
(Ferreras, Saha & Williams 2005, 2008; Auger et al. 2010; Ferreras et al.
2010; Treu et al. 2010; Barnabè et al. 2011; Spiniello et al. 2015; Newman et
al. 2017). Although the central regions of early-type galaxies are observed to
have metallicities and mass-to-light ratios that increase with the central
mass, observations of radial gradients show that the outskirts of such galaxies are
generally composed of metal-poor stars (e.g. Greene et al., 2015).
Historically, these observations and, in particular, the high values of
[$\alpha$/Fe] observed in massive early-type galaxies, have been used to argue
for a top-heavy IMF (e.g. Larson, 1998). Gunawardhana et al. (2011) and
Weidner et al. (2013) have proposed that the IMF becomes more top-heavy with
increasing star-formation rate. However, Larson (1998) argued instead that
many of these observations can potentially be explained by IMFs that have the
same (Salpeter-type) slope for high-mass stars, but a varying (characteristic)
mass below which the IMF flattens or turns over. For example, mass-to-light
ratios of old stellar populations (e.g. those in elliptical galaxies) depend
sensitively on the behaviour of the IMF in the vicinity of a solar mass, since
stars above $\approx 0.8$ M⊙ have evolved into dark stellar remnants, while
the light is generated by stars below this mass. In general, mass-to-light
ratios can be increased either by an excess of very low-mass stars (i.e.
bottom-heavy), or an excess of stellar remnants (i.e. top heavy).
In principle, spectral analysis of stellar populations can be used to
constrain directly the fraction of low-mass stars in early-type galaxies
(Spinrad 1962; Cohen 1978; Faber & French 1980; Carter, Visvanathan & Pickles
1986; Hardy & Couture 1988; Delisle & Hardy 1992; Cenarro et al. 2003; Falcón-
Barroso et al. 2003; van Dokkum & Conroy 2010, 2011; Conroy & van Dokkum
2012a, b; Smith, Lucey & Carter 2012; Spiniello et al. 2012, 2014; Ferreras et
al. 2013; Ferreras et al. 2015; Ferreras, La Barbera & Vazdekis 2015; La
Barbera et al. 2013, 2017; Martín-Navarro et al. 2015; van Dokkum et al.
2017). Recent spectral analyses of the centres of early-type galaxies show a
substantial population of low-mass stars (e.g. van Dokkum & Conroy, 2010) and
point to a bottom-heavy stellar population, although not in all early-type
galaxies (Smith & Lucey 2013; Smith, Lucey & Conroy 2015; Leier et al. 2016).
Rosani et al. (2018) find that the need for a bottom-heavy IMF does not depend on
the environment of the early-type galaxy (i.e. galaxy hierarchy or host halo
mass). To produce both the high metallicity and enhanced [$\alpha$/Fe] of
early-type galaxies and a bottom-heavy IMF, a time-dependent IMF may be
required that changes from top- to bottom-heavy during the early formation
process (Vazdekis et al., 1997; Weidner et al., 2013; Ferreras et al., 2015).
Recently, Guszejnov, Hopkins & Graus (2019) examined whether the near
universal Milky Way IMF could be reconciled with proposed extragalactic IMF
variations. They found that it was very difficult to match the proposed
extragalactic IMF variations without simultaneously predicting too much
variation of the IMF in extreme Galactic environments.
Using the calculations presented in this paper, we cannot comment on
variations in the slope of the IMF at the high-mass end since the calculations
are only large enough to produce low- and intermediate-mass stars. However, we
can use them to predict the dependence of the characteristic stellar mass on
both metallicity and redshift (Section 4.2). Unfortunately, it is not
immediately clear how the predictions of this paper can be used to explain the
above observations. A characteristic stellar mass that increases for
decreasing redshift and increasing metallicity would fit well with the picture
presented by Larson (1998) of a deficit of low-mass stars being able to
explain many observations. However, it does not appear to fit well with the
recent spectral analysis results that point to an abundance of low-mass stars
and a bottom-heavy IMF. It may be that several different processes are
contributing to the unusual stellar populations in the centres of early-type
galaxies (e.g. high levels of turbulence or intense radiation fields from
starbursts), rather than only changes due to the varying CMBR. For example,
the recent calculations of Tanvir et al. (2022) show that at very high
molecular cloud densities, feedback from protostellar outflows may result in
an IMF with a lower characteristic stellar mass (and a narrower overall mass
range) than the Galactic IMF. Alternately, the results obtained in this paper
do provide a mechanism for metal-rich gas to produce time-dependent IMFs that
change from top-heavy to a standard IMF as redshift decreases (similar to that
postulated by Vazdekis et al., 1997; Weidner et al., 2013; Ferreras et al.,
2015).
#### 4.3.2 Globular clusters
Globular clusters are much simpler objects than the central regions of
galaxies. They are (largely) thought to have formed from single star formation
events, and they are not thought to contain significant dark matter. When
interpreting their stellar mass functions, care does need to be taken with the
effects of dynamical evolution – the lowest-mass stars are lost
preferentially. But if this can be taken into account, they can in principle
be used to study the form of the low-mass ($M\lesssim 0.8$ M⊙) IMF at high
redshift.
In particular, a study of old globular clusters in M31 has found that more
metal-rich globular clusters become increasingly bottom-light (i.e. have a
deficit of low-mass stars). Strader, Caldwell & Seth (2011) studied more than
a hundred globular clusters in M31, the vast majority of which were very old
(i.e. formed at high redshift). They divided their sample into several
metallicity bins. Their most metal-rich sample had metallicities
[Fe/H]$~{}>-0.4$. Crucially, Strader et al. (2011) were able to detect that
those clusters that were thought to be more dynamically-evolved were more
bottom-light as expected due to dynamical evolution. However, due to their
large sample, they were also able to show that even accounting for the effects
of dynamical evolution, globular clusters that were more metal-rich were
increasingly bottom-light. There are questions over this result, however.
Shanahan & Gieles (2015) claim that the masses of the M31 globular clusters
were systematically underestimated and that the bias is more important at high
metallicities.
Nevertheless, four of the M31 metal-rich globular clusters were also studied
using the spectral analysis method of Conroy & van Dokkum (2012a) by Conroy &
van Dokkum (2012b) to determine the distributions of low-mass stars. In
agreement with Strader et al. (2011) they found that they had a significant
deficit of low-mass stars relative to a Galactic Kroupa IMF.
This observed trend of the stellar populations of more metal-rich globular
clusters being more bottom-light is the opposite of the trends seen in
spectral studies of the centres of early-type galaxies. However, it is exactly
the sort of trend that is expected from the numerical results presented in
this paper. Assuming that the old globular clusters were formed at redshifts
of $z\gtrsim 5$, the results presented here can
be used to predict that the characteristic stellar masses of old globular
clusters should indeed have been higher for greater metallicity. Similarly,
the most metal-rich old globular clusters should have been born with deficits
of low-mass stars compared to the present-day Galactic IMF.
## 5 Conclusions
We have presented results from three radiation hydrodynamical simulations of
star cluster formation that are subjected to the cosmic microwave background
radiation at redshift $z=5$. The three calculations have identical initial
conditions except for their metallicity which ranges from 1/100 to 1 times the
solar value. The calculations resolve the opacity limit for fragmentation,
protoplanetary discs (radii $\gtrsim 1$ au),
and multiple stellar systems. The calculations treat gas and dust temperatures
separately and include a thermochemical model of the diffuse ISM that is
important for setting the temperatures at low densities. We compare the
results presented in this paper with the results of present-day ($z=0$) star
cluster formation calculations already published by Bate (2019).
We draw the following conclusions:
1. At redshift $z=5$ we find that the stellar mass functions produced by the
calculations become increasingly bottom-light with increasing metallicity.
In other words, the characteristic (median) stellar mass increases with
increasing metallicity. This behaviour is qualitatively different to that
found for present-day ($z=0$) star formation in which the stellar mass
function is found to be independent of metallicity (in the range
$Z=0.01-3~{}{\rm Z}_{\odot}$). At redshift $z=5$, the distribution of stellar
masses at low metallicity ($Z=0.01~{}{\rm Z}_{\odot}$) is similar to the
parametrisation of the observed Galactic IMF given by Chabrier (2005). But at
$z=5$ with solar metallicity the characteristic stellar mass is approximately
3 times higher.
2.
Stellar multiplicity strongly increases with primary mass, similar to observed
Galactic stellar systems. But whereas at $z=0$ the multiplicity for a given
primary mass is found to be similar for all metallicities ($Z=0.01-3~{}{\rm
Z}_{\odot}$), at $z=5$ the multiplicity of M-dwarfs is found to decrease with
increasing metallicity, such that their multiplicity is a factor of two lower
at $Z={\rm Z}_{\odot}$ compared to $Z=0.01~{}{\rm Z}_{\odot}$.
3.
At both $z=0$ and $z=5$ there is an anti-correlation between metallicity and
the frequencies of close (semi-major axes $a<10$ au) bound protostellar pairs
with primary masses $M_{1}=0.1-1.5~{}{\rm M}_{\odot}$. However, the trend is
not as strong at $z=5$ as at $z=0$, and the frequencies of close pairs are
lower at $z=5$ than at $z=0$.
4.
We also find that close (semi-major axes $a<10$ au) bound protostellar pairs
have greater fractions of unequal-mass systems with lower metallicity at both
$z=0$ and $z=5$.
5.
The above differences between star formation at $z=5$ and $z=0$ are a result
of metal-rich gas being unable to cool to temperatures as low at $z=5$ as at
$z=0$, due to the hotter cosmic microwave background radiation. This inhibits
fragmentation at high densities and high metallicity, meaning that the
protostars that do form tend to accrete more gas (i.e. they have a greater
characteristic stellar mass) and they have lower companion frequencies.
6.
Comparing our results to observational evidence for a varying IMF, we find
that the observed trend of old globular clusters in M31 being more bottom-
light with increasing metallicity (Strader et al., 2011; Conroy & van Dokkum,
2012b) is consistent with the dependencies of the IMF on redshift and
metallicity that we find from the numerical simulations. The results may also
help to explain some other observations that point to the need for varying
IMFs, although they do not provide an explanation for bottom-heavy IMFs that
are apparently observed in the centres of many massive elliptical galaxies.
The calculations discussed in this paper begin with idealised initial
conditions (i.e. uniform-density, spherical clouds initialised with a
particular ‘turbulent’ velocity field). This is done to allow careful
comparison between calculations to determine the effects that varying the CMBR
and/or metallicity have. More realistic initial conditions (e.g. self-
consistent turbulence and different cloud structures, such as turbulence
driven by external supernovae; Seifried et al. 2018) or different initial
clouds (e.g. mean cloud densities; Jones & Bate 2018b; Tanvir et al. 2022) may
lead to somewhat different stellar properties. Similarly, these calculations
do not include magnetic fields or protostellar outflows. Nevertheless, we
expect the general trends for how stellar properties depend on redshift and
metallicity that have been identified in this paper to be similar even if more
complex initial conditions were employed or additional physical processes were
to be included.
## Acknowledgements
MRB thanks Charlie Conroy for reading and commenting on a draft version of the
manuscript.
This work was supported by the European Research Council under the European
Commission’s Seventh Framework Programme (FP7/2007-2013 Grant Agreement No.
339248). The calculations discussed in this paper were performed on the
University of Exeter Supercomputer, Isca, and on the DiRAC Complexity system,
operated by the University of Leicester IT Services, which forms part of the
STFC DiRAC HPC Facility (www.dirac.ac.uk). The latter equipment is funded by
BIS National E-Infrastructure capital grant ST/K000373/1 and STFC DiRAC
Operations grant ST/K0003259/1. DiRAC is part of the National
E-Infrastructure. This research was supported in part by the National Science
Foundation under Grant No. NSF PHY-1748958. Some of the figures were produced
using SPLASH (Price, 2007), an SPH visualization tool publicly available at
http://users.monash.edu.au/$\sim$dprice/splash.
## Data availability
Data that can be used to produce Table 2 and Figs. 8 to 15, and to calculate
many of the values in Table 1 are provided as Additional Supporting
Information (see below). Input and output files from the three calculations
that were performed for this paper are available from the University of
Exeter’s Open Research Exeter (ORE) repository (Bate, 2022). This dataset
includes the initial conditions and input files for the three SPH
calculations, the SPH dump files that were used to create Figs. 3–7, and the
output files that were used to produce Fig. 2 and Tables 1, 3, and 4.
## Supporting Information
Additional Supporting Information may be found in the online version of this
article:
Data files for protostars. We provide text files of Tables 3 and 4 that give
the properties of the protostars and multiple systems for each of the three
calculations. These files contain the data necessary to construct Figs. 8 to
15, and to produce Table 2. Their file names are of the format
Table3_Stars_z5_MetalX.txt and Table4_Multiples_z5_MetalX.txt where ‘X’ gives
the metallicity (‘001’ for 0.01, ‘01’ for 0.1, or ‘1’).
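As a small illustration, the file-name pattern described above can be enumerated programmatically; this is a throwaway sketch whose names follow directly from the description:

```python
# Enumerate the six expected supplementary table files from the naming
# pattern: TableN_Kind_z5_MetalX.txt, with metallicity code 'X' in
# {'001' -> 0.01, '01' -> 0.1, '1' -> 1.0} solar.
metal_codes = {"001": 0.01, "01": 0.1, "1": 1.0}
files = [f"Table{n}_{kind}_z5_Metal{code}.txt"
         for n, kind in ((3, "Stars"), (4, "Multiples"))
         for code in metal_codes]
print(files)  # six file names
```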
Animations. We provide animations of the evolution of the column density, and
the gas, dust, and radiation temperatures for each of the three calculations
(i.e. 12 animations). Sink particles are represented by white circles. These
animations are provided to support the snapshots provided in Figs. 3 to 5.
Their file names are of the format Bate2022_z5_MetalX_Y.txt where ‘X’ gives
the metallicity and ‘Y’ gives the variable (‘Density’, ‘Tgas’, ‘Tdust’, or
‘Trad’).
## References
* Abel et al. (2000) Abel T., Bryan G. L., Norman M. L., 2000, ApJ, 540, 39
* Auger et al. (2010) Auger M. W., Treu T., Bolton A. S., Gavazzi R., Koopmans L. V. E., Marshall P. J., Moustakas L. A., Burles S., 2010, ApJ, 724, 511
* Badenes et al. (2018) Badenes C., Mazzola C., Thompson T. A., Covey K., Freeman P. E., Walker M. G., Moe M., Troup N., Nidever D., Allende Prieto C., et al. 2018, ApJ, 854, 147
* Barnabè et al. (2011) Barnabè M., Czoske O., Koopmans L. V. E., Treu T., Bolton A. S., 2011, MNRAS, 415, 2215
* Basri & Reiners (2006) Basri G., Reiners A., 2006, AJ, 132, 663
* Bastian et al. (2010) Bastian N., Covey K. R., Meyer M. R., 2010, ARA&A, 48, 339
* Bate (2009a) Bate M. R., 2009a, MNRAS, 392, 590
* Bate (2009b) Bate M. R., 2009b, MNRAS, 392, 1363
* Bate (2012) Bate M. R., 2012, MNRAS, 419, 3115
* Bate (2014) Bate M. R., 2014, MNRAS, 442, 285
* Bate (2018) Bate M. R., 2018, MNRAS, 475, 5618
* Bate (2019) Bate M. R., 2019, MNRAS, 484, 2341
  * Bate (2022) Bate M. R., 2022, The statistical properties of stars and their dependence on redshift (Dataset)
* Bate et al. (2002) Bate M. R., Bonnell I. A., Bromm V., 2002, MNRAS, 336, 705
* Bate et al. (2003) Bate M. R., Bonnell I. A., Bromm V., 2003, MNRAS, 339, 577
* Bate et al. (1995) Bate M. R., Bonnell I. A., Price N. M., 1995, MNRAS, 277, 362
* Bate & Burkert (1997) Bate M. R., Burkert A., 1997, MNRAS, 288, 1060
* Bate & Keto (2015) Bate M. R., Keto E. R., 2015, MNRAS, 449, 2643
* Baugh et al. (2005) Baugh C. M., Lacey C. G., Frenk C. S., Granato G. L., Silva L., Bressan A., Benson A. J., Cole S., 2005, MNRAS, 356, 1191
* Benz (1990) Benz W., 1990, in Buchler J. R., ed., Numerical Modelling of Nonlinear Stellar Pulsations Problems and Prospects. Kluwer, Dordrecht, p. 269
* Benz et al. (1990) Benz W., Cameron A. G. W., Press W. H., Bowers R. L., 1990, ApJ, 348, 647
* Boss et al. (2000) Boss A. P., Fisher R. T., Klein R. I., McKee C. F., 2000, ApJ, 528, 325
* Bromm et al. (1999) Bromm V., Coppi P. S., Larson R. B., 1999, ApJ, 527, L5
* Burstein et al. (1997) Burstein D., Bender R., Faber S., Nolthenius R., 1997, AJ, 114, 1365
* Cappellari et al. (2012) Cappellari M., McDermid R. M., Alatalo K., Blitz L., Bois M., Bournaud F., Bureau M., Crocker A. F., Davies R. L., et al. 2012, Nature, 484, 485
* Cappellari et al. (2013) Cappellari M., McDermid R. M., Alatalo K., Blitz L., Bois M., Bournaud F., Bureau M., Crocker A. F., Davies R. L., et al. 2013, MNRAS, 432, 1862
* Carter et al. (1986) Carter D., Visvanathan N., Pickles A. J., 1986, ApJ, 311, 637
* Cenarro et al. (2003) Cenarro A. J., Gorgas J., Vazdekis A., Cardiel N., Peletier R. F., 2003, MNRAS, 339, L12
* Chabrier (2005) Chabrier G., 2005, in E. Corbelli, F. Palla, & H. Zinnecker ed., The Initial Mass Function 50 Years Later Vol. 327 of Astrophysics and Space Science Library, The Initial Mass Function: from Salpeter 1955 to 2005. Springer, Dordrecht, pp 41–50
* Close et al. (2003) Close L. M., Siegler N., Freed M., Biller B., 2003, ApJ, 587, 407
* Cohen (1978) Cohen J. G., 1978, ApJ, 221, 788
* Conroy & van Dokkum (2012a) Conroy C., van Dokkum P., 2012a, ApJ, 747, 69
* Conroy & van Dokkum (2012b) Conroy C., van Dokkum P. G., 2012b, ApJ, 760, 71
* Cunningham et al. (2018) Cunningham A. J., Krumholz M. R., McKee C. F., Klein R. I., 2018, MNRAS, 476, 771
* Davis & McDermid (2017) Davis T. A., McDermid R. M., 2017, MNRAS, 464, 453
* Delisle & Hardy (1992) Delisle S., Hardy E., 1992, AJ, 103, 711
* Draine (1978) Draine B. T., 1978, ApJS, 36, 595
* Duquennoy & Mayor (1991) Duquennoy A., Mayor M., 1991, A&A, 248, 485
* Dutton et al. (2013) Dutton A. A., Macciò A. V., Mendel J. T., Simard L., 2013, MNRAS, 432, 2496
* Dutton et al. (2012) Dutton A. A., Mendel J. T., Simard L., 2012, MNRAS, 422, L33
* El-Badry & Rix (2019) El-Badry K., Rix H.-W., 2019, MNRAS, 482, L139
* Elbaz et al. (1995) Elbaz D., Arnaud M., Vangioni-Flam E., 1995, A&A, 303, 345
* Elsender & Bate (2021) Elsender D., Bate M. R., 2021, MNRAS, 508, 5279
* Faber (1973) Faber S. M., 1973, ApJ, 179, 731
* Faber & French (1980) Faber S. M., French H. B., 1980, ApJ, 235, 405
* Falcón-Barroso et al. (2003) Falcón-Barroso J., Peletier R. F., Vazdekis A., Balcells M., 2003, ApJ, 588, L17
* Fehlberg (1969) Fehlberg E., 1969, NASA Technical Report R-315, Low-Order Classical Runge-Kutta Formulas with Step Size Control and Their Application to Some Heat Transfer Problems. Washington, USA
* Ferreras et al. (2013) Ferreras I., La Barbera F., de La Rosa I. G., Vazdekis A., de Carvalho R. R., Falcon-Barroso J., Ricciardelli E., 2013, MNRAS, 429, L15
* Ferreras et al. (2015) Ferreras I., La Barbera F., Vazdekis A., 2015, in Highlights of Spanish Astrophysics VIII Constraining the Initial Mass Function of unresolved stellar populations. pp 102–110
* Ferreras et al. (2008) Ferreras I., Saha P., Burles S., 2008, MNRAS, 383, 857
* Ferreras et al. (2010) Ferreras I., Saha P., Leier D., Courbin F., Falco E. E., 2010, MNRAS, 409, L30
* Ferreras et al. (2005) Ferreras I., Saha P., Williams L. L. R., 2005, ApJ, 623, L5
* Ferreras et al. (2015) Ferreras I., Weidner C., Vazdekis A., La Barbera F., 2015, MNRAS, 448, L82
* Fischer & Marcy (1992) Fischer D. A., Marcy G. W., 1992, ApJ, 396, 178
* Fletcher & Stahler (1994a) Fletcher A. B., Stahler S. W., 1994a, ApJ, 435, 313
* Fletcher & Stahler (1994b) Fletcher A. B., Stahler S. W., 1994b, ApJ, 435, 329
* Gallazzi et al. (2006) Gallazzi A., Charlot S., Brinchmann J., White S. D. M., 2006, MNRAS, 370, 1106
* Gibson (1997) Gibson B. K., 1997, MNRAS, 290, 471
* Gibson & Matteucci (1997) Gibson B. K., Matteucci F., 1997, MNRAS, 291, L8
* Glover & Clark (2012) Glover S. C. O., Clark P. C., 2012, MNRAS, 426, 377
* Glover et al. (2010) Glover S. C. O., Federrath C., Mac Low M.-M., Klessen R. S., 2010, MNRAS, 404, 2
* Graves et al. (2009a) Graves G. J., Faber S. M., Schiavon R. P., 2009a, ApJ, 693, 486
* Graves et al. (2009b) Graves G. J., Faber S. M., Schiavon R. P., 2009b, ApJ, 698, 1590
* Greene et al. (2015) Greene J. E., Janish R., Ma C.-P., McConnell N. J., Blakeslee J. P., Thomas J., Murphy J. D., 2015, ApJ, 807, 11
* Grether & Lineweaver (2007) Grether D., Lineweaver C. H., 2007, ApJ, 669, 1220
* Grudić et al. (2021) Grudić M. Y., Guszejnov D., Hopkins P. F., Offner S. S. R., Faucher-Giguère C.-A., 2021, MNRAS, 506, 2199
* Grudić et al. (2022) Grudić M. Y., Guszejnov D., Offner S. S. R., Rosen A. L., Raju A. N., Faucher-Giguère C.-A., Hopkins P. F., 2022, MNRAS, 512, 216
* Gunawardhana et al. (2011) Gunawardhana M. L. P., Hopkins A. M., Sharp R. G., Brough S., Taylor E., Bland-Hawthorn J., Maraston C., Tuffs R. J., Popescu C. C., et al. 2011, MNRAS, 415, 1647
* Guszejnov et al. (2021) Guszejnov D., Grudić M. Y., Hopkins P. F., Offner S. S. R., Faucher-Giguère C.-A., 2021, MNRAS, 502, 3646
* Guszejnov et al. (2022) Guszejnov D., Grudić M. Y., Offner S. S. R., Faucher-Giguère C.-A., Hopkins P. F., Rosen A. L., 2022, MNRAS, 515, 4929
* Guszejnov et al. (2019) Guszejnov D., Hopkins P. F., Graus A. S., 2019, MNRAS, 485, 4852
* Guzman et al. (1993) Guzman R., Lucey J. R., Bower R. G., 1993, MNRAS, 265, 731
* Hardy & Couture (1988) Hardy E., Couture J., 1988, ApJ, 325, L29
* He et al. (2019) He C.-C., Ricotti M., Geen S., 2019, MNRAS, 489, 1880
* Hosokawa & Omukai (2009) Hosokawa T., Omukai K., 2009, ApJ, 691, 823
* Hubber et al. (2006) Hubber D. A., Goodwin S. P., Whitworth A. P., 2006, A&A, 450, 881
* Hubber & Whitworth (2005) Hubber D. A., Whitworth A. P., 2005, A&A, 437, 113
* Janson et al. (2012) Janson M., Hormuth F., Bergfors C., Brandner W., Hippler S., Daemgen S., Kudryavtseva N., Schmalzl E., Schnupp C., Henning T., 2012, ApJ, 754, 44
* Jones & Bate (2018a) Jones M. O., Bate M. R., 2018a, MNRAS, 480, 2562
* Jones & Bate (2018b) Jones M. O., Bate M. R., 2018b, MNRAS, 478, 2650
* Jorgensen et al. (1996) Jorgensen I., Franx M., Kjaergaard P., 1996, MNRAS, 280, 167
* Keto & Caselli (2008) Keto E., Caselli P., 2008, ApJ, 683, 238
* Kouwenhoven et al. (2007) Kouwenhoven M. B. N., Brown A. G. A., Portegies Zwart S. F., Kaper L., 2007, A&A, 474, 77
* Kroupa (2001) Kroupa P., 2001, MNRAS, 322, 231
* Krumholz et al. (2011) Krumholz M. R., Klein R. I., McKee C. F., 2011, ApJ, 740, 74
* Krumholz et al. (2012) Krumholz M. R., Klein R. I., McKee C. F., 2012, ApJ, 754, 71
* Krumholz et al. (2016) Krumholz M. R., Myers A. T., Klein R. I., McKee C. F., 2016, MNRAS, 460, 3272
* Kuntschner et al. (2010) Kuntschner H., Emsellem E., Bacon R., Cappellari M., Davies R. L., de Zeeuw P. T., Falcón-Barroso J., Krajnović D., McDermid R. M., Peletier R. F., Sarzi M., Shapiro K. L., van den Bosch R. C. E., van de Ven G., 2010, MNRAS, 408, 97
* La Barbera et al. (2013) La Barbera F., Ferreras I., Vazdekis A., de la Rosa I. G., de Carvalho R. R., Trevisan M., Falcón-Barroso J., Ricciardelli E., 2013, MNRAS, 433, 3017
* La Barbera et al. (2017) La Barbera F., Vazdekis A., Ferreras I., Pasquali A., Allende Prieto C., Röck B., Aguado D. S., Peletier R. F., 2017, MNRAS, 464, 3597
* Larson (1969) Larson R. B., 1969, MNRAS, 145, 271
* Larson (1998) Larson R. B., 1998, MNRAS, 301, 569
* Leier et al. (2016) Leier D., Ferreras I., Saha P., Charlot S., Bruzual G., La Barbera F., 2016, MNRAS, 459, 3677
* Levermore & Pomraning (1981) Levermore C. D., Pomraning G. C., 1981, ApJ, 248, 321
* Loewenstein & Mushotzky (1996) Loewenstein M., Mushotzky R. F., 1996, ApJ, 466, 695
* Lomax et al. (2015) Lomax O., Whitworth A. P., Hubber D. A., 2015, MNRAS, 449, 662
* Martín-Navarro et al. (2015) Martín-Navarro I., Vazdekis A., La Barbera F., Falcón-Barroso J., Lyubenova M., van de Ven G., Ferreras I., Sánchez S. F., Trager S. C., et al. 2015, ApJ, 806, L31
* Mason et al. (1998) Mason B. D., Gies D. R., Hartkopf W. I., Bagnuolo Jr. W. G., ten Brummelaar T., McAlister H. A., 1998, AJ, 115, 821
* Mathew & Federrath (2021) Mathew S. S., Federrath C., 2021, MNRAS, 507, 2448
* Mathew et al. (2022) Mathew S. S., Federrath C., Seta A., 2022, arXiv e-prints, p. arXiv:2208.08802
* McDermid et al. (2015) McDermid R. M., Alatalo K., Blitz L., Bournaud F., Bureau M., Cappellari M., Crocker A. F., Davies R. L., Davis T. A., et al. 2015, MNRAS, 448, 3484
* McDermid et al. (2014) McDermid R. M., Cappellari M., Alatalo K., Bayet E., Blitz L., Bois M., Bournaud F., Bureau M., Crocker A. F., et al. 2014, ApJ, 792, L37
* McKee & Offner (2010) McKee C. F., Offner S. S. R., 2010, ApJ, 716, 167
* Moe et al. (2019) Moe M., Kratter K. M., Badenes C., 2019, ApJ, 875, 61
* Morris & Monaghan (1997) Morris J. P., Monaghan J. J., 1997, J. Comp. Phys., 136, 41
* Myers et al. (2014) Myers A. T., Klein R. I., Krumholz M. R., McKee C. F., 2014, MNRAS, 439, 3420
* Myers et al. (2011) Myers A. T., Krumholz M. R., Klein R. I., McKee C. F., 2011, ApJ, 735, 49
* Myers et al. (2013) Myers A. T., McKee C. F., Cunningham A. J., Klein R. I., Krumholz M. R., 2013, ApJ, 766, 97
* Nam et al. (2021) Nam D. G., Federrath C., Krumholz M. R., 2021, MNRAS, 503, 1138
* Narayanan & Davé (2012) Narayanan D., Davé R., 2012, MNRAS, 423, 3601
* Newman et al. (2017) Newman A. B., Smith R. J., Conroy C., Villaume A., van Dokkum P., 2017, ApJ, 845, 157
* Omukai (2000) Omukai K., 2000, ApJ, 534, 809
* Ostriker et al. (2001) Ostriker E. C., Stone J. M., Gammie C. F., 2001, ApJ, 546, 980
* Preibisch et al. (1999) Preibisch T., Balega Y., Hofmann K.-H., Weigelt G., Zinnecker H., 1999, New Astronomy, 4, 531
* Price (2007) Price D. J., 2007, Publ. Astron. Soc. Australia, 24, 159
* Price & Monaghan (2005) Price D. J., Monaghan J. J., 2005, MNRAS, 364, 384
* Price & Monaghan (2007) Price D. J., Monaghan J. J., 2007, MNRAS, 374, 1347
* Raghavan et al. (2010) Raghavan D., McAlister H. A., Henry T. J., Latham D. W., Marcy G. W., Mason B. D., Gies D. R., White R. J., ten Brummelaar T. A., 2010, ApJS, 190, 1
* Rizzuto et al. (2013) Rizzuto A. C., Ireland M. J., Robertson J. G., Kok Y., Tuthill P. G., Warrington B. A., Haubois X., Tango W. J., Norris B., ten Brummelaar T., Kraus A. L., Jacob A., Laliberte-Houdeville C., 2013, ArXiv e-prints
* Rosani et al. (2018) Rosani G., Pasquali A., La Barbera F., Ferreras I., Vazdekis A., 2018, MNRAS, 476, 5233
* Schmidt (1963) Schmidt M., 1963, ApJ, 137, 758
* Seifried et al. (2018) Seifried D., Walch S., Haid S., Girichidis P., Naab T., 2018, ApJ, 855, 81
* Shanahan & Gieles (2015) Shanahan R. L., Gieles M., 2015, MNRAS, 448, L94
* Sharda & Krumholz (2022) Sharda P., Krumholz M. R., 2022, MNRAS, 509, 1959
* Smith & Lucey (2013) Smith R. J., Lucey J. R., 2013, MNRAS, 434, 1964
* Smith et al. (2012) Smith R. J., Lucey J. R., Carter D., 2012, MNRAS, 426, 2994
* Smith et al. (2015) Smith R. J., Lucey J. R., Conroy C., 2015, MNRAS, 449, 3441
* Spiniello et al. (2015) Spiniello C., Koopmans L. V. E., Trager S. C., Barnabè M., Treu T., Czoske O., Vegetti S., Bolton A., 2015, MNRAS, 452, 2434
* Spiniello et al. (2014) Spiniello C., Trager S., Koopmans L. V. E., Conroy C., 2014, MNRAS, 438, 1483
* Spiniello et al. (2012) Spiniello C., Trager S. C., Koopmans L. V. E., Chen Y. P., 2012, ApJ, 753, L32
* Spinrad (1962) Spinrad H., 1962, ApJ, 135, 715
* Strader et al. (2011) Strader J., Caldwell N., Seth A. C., 2011, AJ, 142, 8
* Tanvir et al. (2022) Tanvir T. S., Krumholz M. R., Federrath C., 2022, arXiv e-prints, p. arXiv:2206.04999
* Thomas et al. (2011) Thomas J., Saglia R. P., Bender R., Thomas D., Gebhardt K., Magorrian J., Corsini E. M., Wegner G., Seitz S., 2011, MNRAS, 415, 545
* Tobin et al. (2020) Tobin J. J., Sheehan P. D., Megeath S. T., Díaz-Rodríguez A. K., Offner S. S. R., Murillo N. M., van ’t Hoff M. L. R., van Dishoeck E. F., Osorio M., et al. 2020, ApJ, 890, 130
* Tortora et al. (2013) Tortora C., Romanowsky A. J., Napolitano N. R., 2013, ApJ, 765, 8
* Trager et al. (2000) Trager S. C., Faber S. M., Worthey G., González J. J., 2000, AJ, 120, 165
* Treu et al. (2010) Treu T., Auger M. W., Koopmans L. V. E., Gavazzi R., Marshall P. J., Bolton A. S., 2010, ApJ, 709, 1195
* Truelove et al. (1997) Truelove J. K., Klein R. I., McKee C. F., Holliman II J. H., Howell L. H., Greenough J. A., 1997, ApJ, 489, L179
* Tychoniec et al. (2018) Tychoniec Ł., Tobin J. J., Karska A., Chandler C., Dunham M. M., Harris R. J., Kratter K. M., Li Z.-Y., Looney L. W., Melis C., Pérez L. M., Sadavoy S. I., Segura-Cox D., van Dishoeck E. F., 2018, ArXiv e-prints
* van den Bergh (1962) van den Bergh S., 1962, AJ, 67, 486
* van Dokkum et al. (2017) van Dokkum P., Conroy C., Villaume A., Brodie J., Romanowsky A. J., 2017, ApJ, 841, 68
* van Dokkum & Conroy (2010) van Dokkum P. G., Conroy C., 2010, Nature, 468, 940
* van Dokkum & Conroy (2011) van Dokkum P. G., Conroy C., 2011, ApJ, 735, L13
* Vazdekis et al. (1996) Vazdekis A., Casuso E., Peletier R. F., Beckman J. E., 1996, ApJS, 106, 307
* Vazdekis et al. (1997) Vazdekis A., Peletier R. F., Beckman J. E., Casuso E., 1997, ApJS, 111, 203
* Wegner et al. (2012) Wegner G. A., Corsini E. M., Thomas J., Saglia R. P., Bender R., Pu S. B., 2012, AJ, 144, 78
* Weidner et al. (2013) Weidner C., Ferreras I., Vazdekis A., La Barbera F., 2013, MNRAS, 435, 2274
* Whitehouse & Bate (2006) Whitehouse S. C., Bate M. R., 2006, MNRAS, 367, 32
* Whitehouse et al. (2005) Whitehouse S. C., Bate M. R., Monaghan J. J., 2005, MNRAS, 364, 1367
* Whitworth (1998) Whitworth A. P., 1998, MNRAS, 296, 442
* Worthey et al. (1992) Worthey G., Faber S. M., Gonzalez J. J., 1992, ApJ, 398, 69
* Wurster et al. (2019) Wurster J., Bate M. R., Price D. J., 2019, MNRAS, 489, 1719
* Zucconi et al. (2001) Zucconi A., Walmsley C. M., Galli D., 2001, A&A, 376, 650
# Directed flow in a baryonic fireball
Tribhuban Parida <EMAIL_ADDRESS>
Sandeep Chatterjee <EMAIL_ADDRESS>
Department of Physical Sciences, Indian Institute of Science Education and
Research Berhampur, Transit Campus (Govt ITI), Berhampur-760010, Odisha, India
###### Abstract
Directed flow of identified hadrons in a baryon-rich fireball is an
interesting observable, as it is expected to probe several physics aspects:
the initial three-dimensional baryon profile in the thermalised fireball that
serves as an input for the hydrodynamic evolution, the nature of the baryon
dissipation current and the baryon transport coefficients, the QCD equation of
state at finite baryon densities, as well as the nature of the phase
transition between the quark-gluon and hadronic phases. In particular, the
mid-rapidity slope of the rapidity dependence of the directed flow of protons
has been proposed as an observable sensitive to several of these physics
aspects, while a consistent description of the splitting in directed flow
between a baryon and its anti-particle has remained a challenge. In this work,
we propose a suitable ansatz for the initial condition for baryon deposition.
When this baryon deposition ansatz is coupled to a tilted fireball, we find a
parameter space that can describe the directed flow of identified hadrons,
including the elusive baryon-antibaryon splitting of directed flow. Further,
we demonstrate that future measurements of baryon and antibaryon directed
flow at larger rapidities have the potential to constrain the baryon diffusion
coefficient.
## I Introduction
A large amount of energy, as well as baryon and electric charge, is deposited
in a relativistic heavy-ion collision Adamczyk _et al._ (2017). The framework
of relativistic hydrodynamics has been very successful in evolving these
conserved quantities, with a few unknown parameters that characterise both the
initial thermalised distributions of the conserved quantities and their
transport coefficients in the QCD medium. While the evolution of a baryon-free
fireball within this paradigm has been well studied and compared to
appropriate observables for about two decades, the hydrodynamic evolution of a
baryonic fireball is relatively new and poses several questions that are still
being addressed. In this work we focus on a suitable thermalised distribution
of the baryon density that can be evolved hydrodynamically, and its
consequences for observables that can be measured and tested. The directed
flow of identified hadrons, and particularly the splitting of the directed
flow of baryons and anti-baryons, has been well studied in this regard.
## II Model
### II.1 Initial condition
The following ansatz has been taken for the initial energy density
$\epsilon(x,y,\eta_{s};\tau_{0})$ at a constant proper time $\tau_{0}$ Bozek
and Wyskiel (2010):
$\epsilon(x,y,\eta_{s})=\epsilon_{0}\left[\left(N_{+}(x,y)f_{+}(\eta_{s})+N_{-}(x,y)f_{-}(\eta_{s})\right)\left(1-\alpha\right)+N_{coll}(x,y)\epsilon_{\eta_{s}}\left(\eta_{s}\right)\alpha\right]$ (1)
$N_{+}(x,y)$ and $N_{-}(x,y)$ are the participants from the forward and
backward going nuclei respectively. $N_{coll}(x,y)$ is the contribution from
binary collision sources at the transverse position $(x,y)$. $\alpha$ is the
hardness factor. The rapidity-odd component is introduced into $\epsilon$
through $f_{+,-}(\eta_{s})$:
$f_{+,-}(\eta_{s})=\epsilon_{\eta_{s}}(\eta_{s})\epsilon_{F,B}(\eta_{s})$ (2)
where
$\epsilon_{F}(\eta_{s})=\begin{cases}0,&\text{if }\eta_{s}<-\eta_{m}\\ \frac{\eta_{s}+\eta_{m}}{2\eta_{m}},&\text{if }-\eta_{m}\leq\eta_{s}\leq\eta_{m}\\ 1,&\text{if }\eta_{m}<\eta_{s}\end{cases}$ (3)
and
$\epsilon_{B}(\eta_{s})=\epsilon_{F}(-\eta_{s})$ (4)
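The tilt envelopes of Eqs. (3) and (4) are simple enough to sketch directly. A minimal numerical version follows; the value $\eta_{m}=0.8$ is taken from the reference contour used later in the paper, but is otherwise arbitrary here:

```python
import numpy as np

def eps_F(eta_s, eta_m):
    """Forward envelope of Eq. (3): 0 below -eta_m, linear ramp
    inside |eta_s| <= eta_m, and 1 above +eta_m."""
    return np.clip((eta_s + eta_m) / (2.0 * eta_m), 0.0, 1.0)

def eps_B(eta_s, eta_m):
    """Backward envelope, Eq. (4): the mirror image of eps_F."""
    return eps_F(-eta_s, eta_m)

eta = np.linspace(-3.0, 3.0, 121)
eta_m = 0.8
# The two envelopes sum to 1 everywhere, so the tilt redistributes
# (rather than creates) energy between forward and backward sources.
assert np.allclose(eps_F(eta, eta_m) + eps_B(eta, eta_m), 1.0)
```

This makes explicit that varying $\eta_{m}$ only changes how steeply the weight shifts between the two sources around mid-rapidity.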
The initial baryon profile is modelled as
$n_{B}\left(x,y,\eta_{s}\right)=N_{B}\left[W_{+}^{B}(x,y)f_{+}^{n_{B}}(\eta_{s})+W_{-}^{B}(x,y)f_{-}^{n_{B}}(\eta_{s})\right]$ (5)
where $N_{B}$ is a normalisation constant determined from the condition that
the total baryon number deposited should equal the total number of
participants, $N_{part}=N_{+}+N_{-}$:
$\int\tau_{0}\,dx\,dy\,d\eta_{s}\,n_{B}\left(x,y,\eta_{s},\tau_{0}\right)=N_{part}$ (6)
$W_{\pm}^{B}(x,y)$ are the weight factors for depositing baryons in the
transverse plane and are taken to have the two-component form
$W_{\pm}^{B}(x,y)=\left(1-\omega\right)N_{\pm}(x,y)+\omega N_{coll}(x,y)$ (7)
This ansatz is quite different from the usual practice where the baryon
transverse profile is taken to be proportional to $N_{\pm}$ and contribution
from binary collision sources is not considered Shen and Alzhrani (2020). Here
we are motivated by microscopic, dynamical models like LEXUS Jeon and Kapusta
(1997); De _et al._ (2022) where the baryon deposition in the initial state
depends on the number of binary collisions. We keep $\omega$ as a free
parameter that can be tuned by comparing to data.
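To make Eqs. (5)-(7) concrete, here is a toy numerical sketch of the normalisation. The Gaussian transverse profiles, and the values $\omega=0.3$ and $\tau_{0}=1$ fm, are placeholders standing in for the Glauber sources and the tuned parameters:

```python
import numpy as np

# Toy transverse profiles; in practice these come from the
# ensemble-averaged Monte-Carlo Glauber sources.
x = np.linspace(-10, 10, 101); y = np.linspace(-10, 10, 101)
eta = np.linspace(-8, 8, 161)
dx, dy, deta = x[1]-x[0], y[1]-y[0], eta[1]-eta[0]
X, Y = np.meshgrid(x, y, indexing="ij")
N_plus  = 50.0*np.exp(-(((X-1)**2 + Y**2)/8))  # forward participants
N_minus = 50.0*np.exp(-(((X+1)**2 + Y**2)/8))  # backward participants
N_coll  = 80.0*np.exp(-((X**2 + Y**2)/6))      # binary-collision sources

omega, tau0 = 0.3, 1.0  # placeholder values

def W_B(N_part_side, N_coll, omega):
    # Eq. (7): interpolate between participant and collision weights
    return (1.0 - omega)*N_part_side + omega*N_coll

# Placeholder symmetric rapidity profiles standing in for f_+/-^{n_B}
f_plus  = np.exp(-(eta - 1.5)**2/2.0)
f_minus = np.exp(-(eta + 1.5)**2/2.0)

# Eq. (6): the integral factorises into transverse and rapidity parts
unnorm = (np.sum(W_B(N_plus,  N_coll, omega))*np.sum(f_plus)
        + np.sum(W_B(N_minus, N_coll, omega))*np.sum(f_minus))*dx*dy*deta
N_part = np.sum(N_plus + N_minus)*dx*dy
N_B = N_part/(tau0*unnorm)   # fixes the normalisation constant
assert np.isclose(tau0*N_B*unnorm, N_part)
```

Note that because $N_{B}$ is fixed by the total participant number, changing $\omega$ reshapes where the baryons sit in the transverse plane without changing how many are deposited in total, which is exactly the behaviour exploited later in the paper.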
The net baryon density rapidity profiles $f_{+}^{n_{B}}$, $f_{-}^{n_{B}}$ are
taken as Shen and Alzhrani (2020); Denicol _et al._ (2018),
$f_{+}^{n_{B}}\left(\eta_{s}\right)=\theta\left(\eta_{s}-\eta_{0}^{n_{B}}\right)\exp\left[-\frac{\left(\eta_{s}-\eta_{0}^{n_{B}}\right)^{2}}{2\sigma_{B,+}^{2}}\right]+\theta\left(\eta_{0}^{n_{B}}-\eta_{s}\right)\exp\left[-\frac{\left(\eta_{s}-\eta_{0}^{n_{B}}\right)^{2}}{2\sigma_{B,-}^{2}}\right]$ (8)
and
$f_{-}^{n_{B}}\left(\eta_{s}\right)=\theta\left(\eta_{s}+\eta_{0}^{n_{B}}\right)\exp\left[-\frac{\left(\eta_{s}+\eta_{0}^{n_{B}}\right)^{2}}{2\sigma_{B,-}^{2}}\right]+\theta\left(-\eta_{s}-\eta_{0}^{n_{B}}\right)\exp\left[-\frac{\left(\eta_{s}+\eta_{0}^{n_{B}}\right)^{2}}{2\sigma_{B,+}^{2}}\right]$ (9)
The above profiles are constrained by comparing to the rapidity dependence of
the net proton yield Bearden _et al._ (2004); Arsene _et al._ (2009);
Adamczyk _et al._ (2017); Anticic _et al._ (2011). In this paper, we work
with an ensemble-averaged initial profile of energy and baryon density,
obtained by smearing the participant and binary collision sources from over
25,000 event-by-event Monte-Carlo Glauber initial configurations in a given
centrality class. In each event, all the sources are first rotated by the
second-order participant plane angle and then smeared around their new
transverse positions with a Gaussian profile of constant transverse width
0.4 fm Shen and Alzhrani (2020). The ensemble-averaged profile is then
hydrodynamically evolved.
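The asymmetric Gaussian profiles of Eqs. (8) and (9) can be written compactly in code. The parameter values below ($\eta_{0}^{n_{B}}$, $\sigma_{B,\pm}$) are illustrative rather than the fitted ones:

```python
import numpy as np

def f_nB_plus(eta_s, eta0=1.5, sig_plus=0.6, sig_minus=1.0):
    """Eq. (8): asymmetric Gaussian peaked at +eta0, with outward
    width sig_plus and inward width sig_minus (illustrative values)."""
    return np.where(eta_s >= eta0,
                    np.exp(-(eta_s - eta0)**2/(2*sig_plus**2)),
                    np.exp(-(eta_s - eta0)**2/(2*sig_minus**2)))

def f_nB_minus(eta_s, **kw):
    """Eq. (9) is the mirror image of Eq. (8): f_-(eta) = f_+(-eta)."""
    return f_nB_plus(-eta_s, **kw)

eta = np.linspace(-6, 6, 241)
# Forward/backward symmetry of the pair of profiles
assert np.allclose(f_nB_minus(eta), f_nB_plus(-eta))
```

The two widths $\sigma_{B,+}\neq\sigma_{B,-}$ are what let the profile fall off more steeply toward beam rapidity than toward mid-rapidity, which is the freedom the net proton data constrain.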
### II.2 Hydrodynamic Evolution
We evolve the initial energy and baryon densities from Eqs. 1 and 5 using the
publicly available code MUSIC Schenke _et al._ (2010); Paquet _et al._
(2016); Schenke _et al._ (2012). The code solves the following conservation
equations
$\partial_{\mu}T^{\mu\nu}=0$ (10)
$\partial_{\mu}J^{\mu}_{B}=0$ (11)
The energy momentum tensor ($T^{\mu\nu}$) and baryon current ($J^{\mu}_{B}$)
are defined as
$T^{\mu\nu}=\epsilon u^{\mu}u^{\nu}-(p+\Pi)\Delta^{\mu\nu}+\pi^{\mu\nu}$
(12)
$J^{\mu}_{B}=n_{B}u^{\mu}+q^{\mu}$ (13)
Here $\Delta^{\mu\nu}$ is the spatial projection tensor defined as
$\Delta^{\mu\nu}=g^{\mu\nu}-u^{\mu}u^{\nu}$, where $u^{\mu}$ is the fluid four
velocity and $g^{\mu\nu}=diag(1,-1,-1,-1)$ is the metric tensor in Minkowski
space. $\epsilon$ and $p$ are the local energy density and pressure in the
fluid. $T^{\mu\nu}$ and $J^{\mu}_{B}$ consist of three dissipative currents,
the bulk viscous pressure $\Pi$, the shear stress tensor $\pi^{\mu\nu}$ and
net baryon diffusion current $q^{\mu}$ among which we do not consider the
effect of $\Pi$ in this work.
Like $\pi^{\mu\nu}$, the baryon diffusion current evolves according to an
Israel-Stewart-type relaxation equation:
$\Delta^{\mu\nu}Dq_{\nu}=-\frac{1}{\tau_{q}}\left(q^{\mu}-\kappa_{B}\nabla^{\mu}\frac{\mu_{B}}{T}\right)-\frac{\delta_{qq}}{\tau_{q}}q^{\mu}\theta-\frac{\lambda_{qq}}{\tau_{q}}q_{\nu}\sigma^{\mu\nu}$
(14)
The above equation is a relaxation type equation where
$D=u^{\alpha}\partial_{\alpha}$ is the comoving time derivative. $\tau_{q}$ is
the time scale for the baryon diffusion current to relax to its Navier-Stokes
limit chosen to be inversely proportional to the temperature $T$ as in a
conformal system. $\delta_{qq}$ and $\lambda_{qq}$ are the second order
transport coefficients present in the coupling terms with velocity shear
tensor $\sigma^{\mu\nu}$ and system expansion rate $\theta$.
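Ignoring the second-order coupling terms, Eq. (14) reduces schematically to a relaxation equation. This scalar toy model (not the full tensor evolution solved by MUSIC) illustrates the exponential approach to the Navier-Stokes limit on the timescale $\tau_{q}$:

```python
import numpy as np

def relax_q(q0, q_ns, tau_q, t):
    """Toy scalar analogue of Eq. (14):
       dq/dt = -(q - q_NS)/tau_q
    i.e. exponential relaxation of q toward its Navier-Stokes
    value q_NS on the timescale tau_q."""
    return q_ns + (q0 - q_ns)*np.exp(-t/tau_q)

# After a few relaxation times, q has essentially reached q_NS.
assert abs(relax_q(1.0, 0.2, 0.5, 5.0) - 0.2) < 1e-3
```

The second-order terms ($\delta_{qq}$, $\lambda_{qq}$) couple this relaxation to the expansion rate and velocity shear, so in the full evolution the target value itself changes along the flow.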
MUSIC uses a temperature ($T$) and baryon chemical potential ($\mu_{B}$)
dependent baryon transport coefficient, derived from the Boltzmann equation
in the relaxation time approximation Denicol _et al._ (2018):
$\kappa_{B}=\frac{C_{B}}{T}n_{B}\left[\frac{1}{3}\coth{\left(\frac{\mu_{B}}{T}\right)}-\frac{n_{B}T}{\epsilon+p}\right]$
(15)
$C_{B}$ is a free parameter that controls the strength of baryon diffusion in
the medium.
A lattice QCD based EoS at finite baryon density, NEoS-BQS Monnai _et al._
(2019); Bazavov _et al._ (2012); Ding _et al._ (2015); Bazavov _et al._
(2017) has been used during the hydrodynamic evolution. The EoS imposes
strangeness neutrality and fixed electric charge to baryon density ratio:
$n_{Q}=0.4n_{B}$. We have taken the specific shear viscosity
($C_{\eta}=\frac{\eta T}{\epsilon+p}$) to be 0.08 in the simulation.
The Cooper-Frye conversion of fluid into particles is performed on a
hypersurface of constant energy density, $\epsilon_{f}=0.26$ GeV/fm$^{3}$,
using iSS Shen _et al._ (2014). The sampled primary hadrons are then fed into
UrQMD Bass _et al._ (1998); Bleicher _et al._ (1999) for hadronic transport.
## III Tilted matter and baryon
The introduction of a two-component transverse baryon profile in Eq. 7 allows
us to tune the relative tilt between the matter and baryon profiles in the
initial condition by varying $\omega$. This is demonstrated in Fig. 1.
Contours of constant baryon density for different $\omega$ are plotted. The
contour of constant energy density profile for $\eta_{m}=0.8$ has been plotted
for reference. From Eqs. 7 and 5 one can deduce that the rapidity profile of
baryon deposition due to the $N_{coll}$ term is forward-backward symmetric in
rapidity. On the other hand, the baryon deposited by the participant sources
are asymmetric in rapidity as characterised by $f_{+}^{n_{B}}$ and
$f_{-}^{n_{B}}$. Now $\omega$ controls the relative weight between the
participant and binary collision sources. Thus, changing $\omega$ amounts to
changing the initial baryon tilt independent of the matter tilt.
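The interplay described above can be sketched schematically. The following is our illustration only: Eq. 7 is not reproduced here, so the envelope functions, normalisations, and the convention of which term $\omega$ multiplies are assumptions made for the sketch:

```python
import numpy as np

def baryon_profile(eta_s, omega, f_plus, f_minus, n_part_p, n_part_m, n_coll):
    """Schematic two-component baryon deposition (illustrative, not Eq. 7).

    eta_s           : space-time rapidity grid
    omega           : relative weight of the binary-collision source
    f_plus, f_minus : forward/backward participant envelopes on eta_s
                      (asymmetric in eta_s, cf. f_+^{n_B}, f_-^{n_B})
    n_part_p/m      : forward/backward-going participant densities
    n_coll          : binary-collision density
    """
    # The N_coll envelope is forward-backward symmetric in eta_s (assumed Gaussian).
    f_sym = np.exp(-0.5 * eta_s**2)
    participant = n_part_p * f_plus + n_part_m * f_minus   # asymmetric part
    binary = n_coll * f_sym                                # symmetric part
    return (1.0 - omega) * participant + omega * binary
```

At $\omega=0$ only the (tilted) participant term survives; at $\omega=1$ only the symmetric $N_{coll}$ term does, so dialling $\omega$ interpolates the baryon tilt.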
In Figure 2 we show the effect of varying $\omega$ on the initial rapidity
profile of the baryon density (panel (a)) and on the rapidity dependence of
the baryon dipole asymmetry with respect to the centre of the initial energy
density (panel (b)). $\eta_{m}$ is adjusted to ensure a similar final-state
pion $v_{1}$. While the transverse-coordinate-integrated baryon rapidity
profile shows no dependence on $\omega$, the dipole asymmetry
$\varepsilon_{1}$, which characterises the first-order harmonic in the Fourier
expansion of the transverse distribution of baryon density, varies strongly
with $\omega$. The effect on the final-state net-baryon observables is shown
in the lower row. In panel (c) we find that the net-proton rapidity profile is
independent of $\omega$, while in panel (d) we see a large variation in the
$v_{1}$ of $p$ and $\bar{p}$, affecting their mid-rapidity slopes as well as
their splitting.
In a baryon-free fireball, charged-particle directed flow originates from the
initial tilted distribution of energy or entropy density in the reaction plane
Bozek and Wyskiel (2010). In a baryonic fireball, however, it is possible to
generate a non-zero rapidity-odd pion $v_{1}$ even with a forward-backward
symmetric initial energy deposition, provided the baryon profile is tilted. In
panel (a) of Figure 3 we demonstrate how the directed flow of pions originates
from a tilted baryon profile even when the energy density profile is symmetric
in $\eta_{s}$. In this case the origin of the pion $v_{1}$ is the dipole
asymmetry of the pressure ($p=p(\epsilon,n_{B})$) in the transverse plane at
non-zero rapidity, which arises from the tilted baryon profile through the
EoS. This effect can be strong enough to generate sufficient pion $v_{1}$ to
explain the data from the initial-state pressure anisotropy alone. Panel (b)
shows the effect of resonance decays and hadronic transport on
$v_{1}(\pi^{+})$. We observe that the major contribution to $v_{1}(\pi^{+})$
comes from the EoS, while the contribution from resonance decays of heavier
baryons is very small. In the same plot, we also present the effect of
hadronic transport on the rapidity-dependent directed flow.
Figure 1: (Color online) Contour plot of the baryon profile for different
$\omega$. Along each contour the baryon density is fixed at 0.55 fm$^{-3}$.
Figure 2: (Color online) Effect of $\omega$ on the proton and anti-proton
directed-flow splitting and the net-proton rapidity distribution in
10-40$\%$ Au+Au collisions at $\sqrt{s_{\textrm{NN}}}=19.6$ GeV. The
experimental data on pion directed flow are measured by the STAR collaboration
Adamczyk _et al._ (2014).
Figure 3: (Color online) Effect of $\omega$ on the pion directed flow in
10-40$\%$ Au+Au collisions at $\sqrt{s_{\textrm{NN}}}=19.6$ GeV. The
experimental data on pion directed flow are measured by the STAR collaboration
Adamczyk _et al._ (2014).
## IV Results
$\sqrt{s_{NN}}$ (GeV) | $\tau_{0}$ (fm) | $\epsilon_{0}$ (GeV/fm$^{3}$) | $\eta_{0}$ | $\sigma_{\eta}$ | $\eta_{0}^{n_{B}}$ | $\sigma_{B,-}$ | $\sigma_{B,+}$ | $\omega$ | $\eta_{m}$
---|---|---|---|---|---|---|---|---|---
$C_{B}=0$
200 | 0.6 | 8.0 | 1.3 | 1.5 | 4.4 | 2.0 | 0.1 | 0.3 | 2.0
19.6 | 1.8 | 1.55 | 1.3 | 0.4 | 1.5 | 0.9 | 0.3 | 0.13 | 0.8
$C_{B}=1$
200 | 0.6 | 8.0 | 1.3 | 1.5 | 4.6 | 1.6 | 0.1 | 0.25 | 2.2
19.6 | 1.8 | 1.55 | 1.3 | 0.4 | 1.8 | 0.8 | 0.3 | 0.15 | 0.8
Table 1: Parameters used during simulations with $C_{B}=0$ and $C_{B}=1$.
We have studied Au+Au collisions at 19.6 GeV and 200 GeV. The simulations have
been performed for both $C_{B}=0$ and $1$ to understand the effect of baryon
diffusion on the presented observables. We have tuned our parameters to
simultaneously describe the available experimental data on the
pseudo-rapidity dependence of the charged-particle multiplicity, the
centrality dependence of the charged-particle multiplicity, the rapidity
dependence of the net-proton yield, and the rapidity dependence of the
directed flow of identified hadrons.
The centrality class has been determined from the initial state by assuming
that the produced charged-particle multiplicity in the final state is
proportional to the two-component ansatz for the initial energy deposition in
the transverse plane given in Eq. 1. The hardness factor $\alpha$ has been
chosen to capture the centrality dependence of the charged-particle yield at
mid-rapidity. We find that $\alpha=0.1$ and $0.14$ are suitable for Au+Au
collisions at 19.6 GeV and 200 GeV, respectively.
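The two-component weighting referred to above can be sketched in the standard Glauber form; since Eq. 1 itself is not reproduced in this excerpt, the exact normalisation below is an assumption, but the role of the hardness factor $\alpha$ is as stated:

```python
def transverse_energy_weight(n_part, n_coll, alpha):
    """Two-component Glauber weight for the initial energy deposition
    (standard ansatz, assumed to correspond to Eq. 1 of the text).

    n_part : number of participant nucleons at a transverse point
    n_coll : number of binary collisions at that point
    alpha  : hardness factor (0.1 at 19.6 GeV, 0.14 at 200 GeV)
    """
    # alpha interpolates between pure participant ("soft") scaling
    # and pure binary-collision ("hard") scaling.
    return (1.0 - alpha) * n_part / 2.0 + alpha * n_coll
```

Because $N_{coll}$ grows faster with centrality than $N_{part}$, a larger $\alpha$ steepens the centrality dependence of the charged-particle yield.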
We observe that the pseudo-rapidity dependence of the charged-particle
multiplicity is unaffected by the $\omega$ parameter, although it increases
mildly with $C_{B}$. The rapidity distribution of the net-proton yield,
however, is strongly affected by $C_{B}$ through the differing amounts of
baryon diffusion. Hence we first tune $\epsilon_{0}$, $\eta_{0}$ and
$\sigma_{\eta}$, keeping them the same for both values of $C_{B}$, to describe
the experimental data on the pseudo-rapidity-dependent charged-particle
yield. In a second step, we calibrate the $\eta_{0}^{n_{B}}$, $\sigma_{B,-}$
and $\sigma_{B,+}$ parameters independently for the two $C_{B}$ values to
reproduce the rapidity distribution of the net-proton yield. Note that
weak-decay contributions are taken into account in this calibration.
The tilt parameter $\eta_{m}$ controls the tilt of the energy density profile,
whereas $\omega$ controls that of the baryon density profile. We observe that
the presence of baryon density strongly affects the evolution of the energy
density inside the medium through the EoS. Hence $\eta_{m}$ and $\omega$
cannot be fixed independently from the pion ($\pi$) and proton ($p$) directed
flow. We have therefore chosen a set of $(\omega,\eta_{m})$ for each $C_{B}$
in order to describe the directed flow of $\pi$, $p$ and $\bar{p}$
simultaneously.
Figure 4: (Color online) Pseudo-rapidity distribution of produced charged
particles for the 0-6$\%$ and 15-25$\%$ centrality classes in Au+Au collisions
at $\sqrt{s_{\textrm{NN}}}=19.6$ GeV. The model expectations for both
$C_{B}=0$ and $1$ are compared to measurements from the PHOBOS collaboration
Back _et al._ (2003). Figure 5: (Color online) Rapidity distribution of the
net-proton yield for the 0-5$\%$ and 10-20$\%$ centrality classes in Au+Au
collisions at $\sqrt{s_{\textrm{NN}}}=19.6$ GeV. The model expectations for
both $C_{B}=0$ and $1$ are compared to experimental data from the NA49
Anticic _et al._ (2011) and STAR Adamczyk _et al._ (2017) collaborations. The
NA49 data are for Pb+Pb collisions at $\sqrt{s_{\textrm{NN}}}=17.3$ GeV,
0-5$\%$ centrality. Figure 6: (Color online) Phase-space dependence of the
rapidity-odd directed flow of identified particles in 10-40$\%$ Au+Au
collisions at $\sqrt{s_{\textrm{NN}}}=19.6$ GeV. The model results for both
$C_{B}=0$ and $1$ are compared with measurements from the STAR collaboration
Adamczyk _et al._ (2014, 2018).
The pseudo-rapidity distribution of produced charged particles for 0-6$\%$
and 15-25$\%$ Au+Au collisions at $\sqrt{s_{NN}}=19.6$ GeV is plotted in
Fig. 4 for both $C_{B}=0$ and 1. There is a small increase in the
charged-particle yield for $C_{B}=1$. The centrality dependence is reproduced
with the chosen hardness factor $\alpha$. The rapidity distributions of the
net-proton yield for 0-5$\%$ and 10-20$\%$ centrality in Au+Au collisions at
$\sqrt{s_{NN}}=19.6$ GeV are plotted in Fig. 5; weak-decay contributions have
been included in the model calculation.
After constraining the model parameters of the initial matter and baryon
profiles, we study the phase-space dependence of the $v_{1}$ of identified
particles. The rapidity dependence of the directed flow of identified
particles in 10-40$\%$ Au+Au collisions at $\sqrt{s_{NN}}=19.6$ GeV is plotted
in Fig. 6. We have chosen a set of $\eta_{m}$ and $\omega$ to capture the
rapidity dependence of $v_{1}$ for $\pi^{+}$, $p$ and $\bar{p}$
simultaneously, whereas the $v_{1}$ of the other particle species are model
predictions. From the model calculation we observe that both $C_{B}=0$ and
$C_{B}=1$ are able to describe the baryon and anti-baryon $v_{1}$ splitting
over the measured rapidity range. Within the current scope of experimental
measurements we are therefore unable to constrain $C_{B}$.
Figure 7: (Color online) Pseudo-rapidity distribution of produced charged
particles for the 0-6$\%$ and 15-25$\%$ centrality classes in Au+Au collisions
at $\sqrt{s_{\textrm{NN}}}=200$ GeV. The model expectations for both $C_{B}=0$
and $1$ are compared to measurements from the PHOBOS collaboration Back _et
al._ (2003). Figure 8: (Color online) Rapidity distribution of the net-proton
yield for the 0-5$\%$ centrality class in Au+Au collisions at
$\sqrt{s_{\textrm{NN}}}=200$ GeV. The model expectations for both $C_{B}=0$
and $1$ are compared to measurements from the BRAHMS collaboration Bearden
_et al._ (2004). Figure 9: (Color online) Phase-space dependence of the
rapidity-odd directed flow of identified particles in 10-40$\%$ Au+Au
collisions at $\sqrt{s_{\textrm{NN}}}=200$ GeV. The model results for both
$C_{B}=0$ and $1$ are compared with measurements from the STAR collaboration
Adamczyk _et al._ (2014, 2018).
The pseudo-rapidity distribution of produced charged particles for 0-6$\%$
and 15-25$\%$ Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV is plotted in Fig. 7
for both $C_{B}=0$ and 1. The rapidity distribution of the net-proton yield
for the 0-5$\%$ centrality class in Au+Au collisions at
$\sqrt{s_{\textrm{NN}}}=200$ GeV is plotted in Fig. 8. We are able to capture
the proton, anti-proton and net-proton rapidity distributions, which shows
that the chosen freeze-out energy density gives a proper combination of $T$
and $\mu_{B}$ for chemical equilibrium.
The rapidity dependence of the directed flow of identified particles in
10-40$\%$ Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV is plotted in Fig. 9.
The splitting between baryons and anti-baryons is comparatively small at
$\sqrt{s_{NN}}=200$ GeV, as very little net baryon charge is deposited at
mid-rapidity.
Figure 10: (Color online) Phase-space dependence of the rapidity-odd directed
flow of $p$, $\Lambda$ and their anti-particles in 10-40$\%$ Au+Au collisions
at $\sqrt{s_{\textrm{NN}}}=19.6$ GeV. The model results for both $C_{B}=0$ and
$1$ are compared with measurements from the STAR collaboration Adamczyk
_et al._ (2014, 2018).
We now revisit the issue of constraining $C_{B}$ with $v_{1}$. We have seen so
far that for rapidities up to 1 or 1.5 the model predictions for the different
$C_{B}$ lie almost on top of each other, and hence the data cannot
discriminate between the two values of $C_{B}$. In Fig. 10 we plot $v_{1}$
over a larger rapidity range. We find a noticeable difference between the
model calculations at large rapidity, which could be used to constrain
$C_{B}$ once such experimental data become available.
## V Summary
We have studied the effect of baryon density on the directed flow of
identified particles. We have proposed a suitable initial transverse profile
for the deposited baryon charge that allows us to study the interplay of the
matter and baryon tilts in the initial state, and how the subsequent
hydrodynamic evolution of these conserved charges can help us understand
identified-particle directed flow at Beam Energy Scan energies. Further, we
have demonstrated that $v_{1}$ measurements at intermediate rapidities can
provide strong constraints on the baryon diffusion coefficient.
## References
* Adamczyk _et al._ (2017) L. Adamczyk _et al._ (STAR), Phys. Rev. C 96, 044904 (2017), arXiv:1701.07065 [nucl-ex] .
* Bozek and Wyskiel (2010) P. Bozek and I. Wyskiel, Phys. Rev. C 81, 054902 (2010), arXiv:1002.4999 [nucl-th] .
* Shen and Alzhrani (2020) C. Shen and S. Alzhrani, Phys. Rev. C 102, 014909 (2020), arXiv:2003.05852 [nucl-th] .
* Jeon and Kapusta (1997) S. Jeon and J. I. Kapusta, Phys. Rev. C 56, 468 (1997), arXiv:nucl-th/9703033 .
* De _et al._ (2022) A. De, J. I. Kapusta, M. Singh, and T. Welle, Phys. Rev. C 106, 054906 (2022), arXiv:2206.02655 [nucl-th] .
* Denicol _et al._ (2018) G. S. Denicol, C. Gale, S. Jeon, A. Monnai, B. Schenke, and C. Shen, Phys. Rev. C 98, 034916 (2018), arXiv:1804.10557 [nucl-th] .
* Bearden _et al._ (2004) I. G. Bearden _et al._ (BRAHMS), Phys. Rev. Lett. 93, 102301 (2004), arXiv:nucl-ex/0312023 .
* Arsene _et al._ (2009) I. C. Arsene _et al._ (BRAHMS), Phys. Lett. B 677, 267 (2009), arXiv:0901.0872 [nucl-ex] .
* Anticic _et al._ (2011) T. Anticic _et al._ (NA49), Phys. Rev. C 83, 014901 (2011), arXiv:1009.1747 [nucl-ex] .
* Schenke _et al._ (2010) B. Schenke, S. Jeon, and C. Gale, Phys. Rev. C 82, 014903 (2010), arXiv:1004.1408 [hep-ph] .
* Paquet _et al._ (2016) J.-F. Paquet, C. Shen, G. S. Denicol, M. Luzum, B. Schenke, S. Jeon, and C. Gale, Phys. Rev. C 93, 044906 (2016), arXiv:1509.06738 [hep-ph] .
* Schenke _et al._ (2012) B. Schenke, S. Jeon, and C. Gale, Phys. Rev. C 85, 024901 (2012), arXiv:1109.6289 [hep-ph] .
* Monnai _et al._ (2019) A. Monnai, B. Schenke, and C. Shen, Phys. Rev. C 100, 024907 (2019), arXiv:1902.05095 [nucl-th] .
* Bazavov _et al._ (2012) A. Bazavov _et al._ (HotQCD), Phys. Rev. D 86, 034509 (2012), arXiv:1203.0784 [hep-lat] .
* Ding _et al._ (2015) H. T. Ding, S. Mukherjee, H. Ohno, P. Petreczky, and H. P. Schadler, Phys. Rev. D 92, 074043 (2015), arXiv:1507.06637 [hep-lat] .
* Bazavov _et al._ (2017) A. Bazavov _et al._ , Phys. Rev. D 95, 054504 (2017), arXiv:1701.04325 [hep-lat] .
* Shen _et al._ (2014) C. Shen, Z. Qiu, H. Song, J. Bernhard, S. Bass, and U. Heinz, “The iEBE-VISHNU code package for relativistic heavy-ion collisions,” (2014).
* (18) “The iSS code package can be downloaded from https://github.com/chunshen1987/iSS,” .
* Bass _et al._ (1998) S. A. Bass _et al._ , Prog. Part. Nucl. Phys. 41, 255 (1998), arXiv:nucl-th/9803035 .
* Bleicher _et al._ (1999) M. Bleicher _et al._ , J. Phys. G 25, 1859 (1999), arXiv:hep-ph/9909407 .
* Adamczyk _et al._ (2014) L. Adamczyk _et al._ (STAR), Phys. Rev. Lett. 112, 162301 (2014), arXiv:1401.3043 [nucl-ex] .
* Back _et al._ (2003) B. B. Back _et al._ , Phys. Rev. Lett. 91, 052303 (2003), arXiv:nucl-ex/0210015 .
* Adamczyk _et al._ (2018) L. Adamczyk _et al._ (STAR), Phys. Rev. Lett. 120, 062301 (2018), arXiv:1708.07132 [hep-ex] .
# “Stochastic Inverse Problems” and Changes-of-Variables ††thanks: This work
was supported in part by the NNSA Office of Defense Nuclear Nonproliferation
Research and Development, NA-22 F2019 Nuclear Forensics Venture
(LA19-V-FY2019-NDD3Ac).
Peter W. Marcy, Los Alamos National Laboratory, Los Alamos, NM <EMAIL_ADDRESS>
Rebecca E. Morrison, University of Colorado Boulder, Boulder, CO <EMAIL_ADDRESS>
###### Abstract
Over the last decade, a series of applied mathematics papers have explored a
type of inverse problem—called by a variety of names including “inverse
sensitivity”, “pushforward based inference”, “consistent Bayesian inference”,
or “data-consistent inversion”—wherein a solution is a probability density
whose pushforward takes a given form. The formulation of such a _stochastic_
inverse problem can be unexpected or confusing to those familiar with
traditional Bayesian or otherwise statistical inference. To date, two classes
of solutions have been proposed, and these have only been justified through
applications of measure theory and its disintegration theorem. In this work we
show that, under mild assumptions, the formulation of and solution to _all_
stochastic inverse problems can be more clearly understood using basic
probability theory: a stochastic inverse problem is simply a change-of-
variables or approximation thereof. For the two existing classes of solutions,
we derive the relationship to change(s)-of-variables and illustrate using
analytic examples where none had previously existed. Our derivations use
neither Bayes’ theorem nor the disintegration theorem explicitly. Our final
contribution is a careful comparison of changes-of-variables to more
traditional statistical inference. While taking stochastic inverse problems at
face value for the majority of the paper, our final comparative discussion
gives a critique of the framework.
Keywords: Jacobian, reparameterization, uncertainty quantification,
statistical inference, Bayesian analysis
## 1 Introduction
Scientific models often relate unknown parameters or functions to observable
quantities. Any choice of unknown input to the model will produce an
observable that can be compared with real measurements. In the absence of
noise or measurement error, this _forward_ problem can be thought of as a
well-defined operator from parameter/function-space to observable-space. To
solve the _inverse_ problem (IP) is to estimate the unknown inputs from a
finite collection of observations. In the more formal language of Tikhonov et
al. (1995), if $\mathcal{F}$ is an operator between metric (or Banach, or
Hilbert) spaces representing parameters and data,
$\mathcal{F}:\mathcal{P}\mapsto\mathcal{Q}$, the solution to the IP is defined
by the operator equation
$\displaystyle\boldsymbol{\theta}\quad\text{such that}\quad\mathcal{F}(\boldsymbol{\theta})=\boldsymbol{y}\qquad\big{(}\boldsymbol{\theta}\in\mathcal{P},\ \boldsymbol{y}\in\mathcal{Q}\big{)}\ .$ (1)
This operator formulation is typically associated with function spaces, where
solving the IP means solving equations (often integral equations) given a
finite number of potentially noisy observations (Groetsch, 1993; Stuart, 2010;
Kirsch, 2011; Aster et al., 2019; Lesnic, 2021). Equality within (1) may not
be possible, and a solution might instead be required to minimize some
functional involving $\mathcal{F}(\boldsymbol{\theta})$ and $\boldsymbol{y}$,
as in the case of weighted or regularized least-squares estimation.
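As a concrete instance of the minimisation view, a ridge-regularised linear least-squares solution can be sketched in a few lines; the operator, data, and regularisation weight below are our illustrative choices, not from the text:

```python
import numpy as np

# Linear forward operator F(theta) = A @ theta, observed with small noise.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))                    # illustrative operator
theta_true = np.array([1.0, -2.0, 0.5])
y = A @ theta_true + 0.01 * rng.normal(size=20)

# Regularised least squares: minimise ||A theta - y||^2 + lam ||theta||^2.
# The minimiser solves the normal equations (A^T A + lam I) theta = A^T y.
lam = 1e-3
theta_hat = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ y)
```

With a well-conditioned operator and small noise, `theta_hat` recovers the underlying parameters; the regulariser matters when $A^{T}A$ is ill-conditioned or singular.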
When $\mathcal{P}$ and $\mathcal{Q}$ are subsets of $\mathbb{R}^{p}$ and
$\mathbb{R}^{q}$ (respectively), the forward operator is a Euclidean map
$\boldsymbol{g}$, and the noise is typically written into the IP explicitly.
The solution is
$\displaystyle\boldsymbol{\theta}\quad\text{such that}\quad\boldsymbol{g}(\boldsymbol{\theta})+\boldsymbol{\epsilon}=\boldsymbol{y}\qquad\big{(}\boldsymbol{\theta}\in\mathcal{P},\ \boldsymbol{y}\in\mathcal{Q},\ \boldsymbol{\epsilon}\sim F_{\boldsymbol{E}}\big{)}\ .$ (2)
The term $\boldsymbol{\epsilon}$ is a realization of a random variable
$\boldsymbol{E}$ corresponding to the error process with cumulative
distribution function $F_{\boldsymbol{E}}$. (The use of the cumulative
function allows for continuous as well as discrete and mixed random variables.
Also, the statement within (2) assumes additive error, but is trivially
modified for multiplicative or other error structures.) Together,
$\boldsymbol{g}(\boldsymbol{\theta})$ and
$F_{\boldsymbol{E}}(\boldsymbol{\epsilon})$ determine a model whose likelihood
function forms the basis for _statistical_ IPs (Pawitan, 2001; Davison, 2003;
Kaipio and Somersalo, 2005; Tarantola, 2005; Stuart, 2010; Reid, 2010;
Chiachio-Ruano et al., 2022). As stated in (2), the solution to the
statistical IP is a point estimate derived in some fashion from the
likelihood. In certain cases, the maximum likelihood estimate will be a
weighted or regularized least-squares estimate. In the Bayesian (Bernardo and
Smith, 1994; Robert, 2007; Gelman et al., 2014) and Fiducial (Hannig, 2009;
Hannig et al., 2016) inferential contexts, the solution to a statistical IP is
an entire probability distribution from which point estimates may be obtained.
Over the past century, the field of inverse problems has extended into nearly
every domain of science, engineering, and technology, cementing its status as
fundamental to applied mathematics and statistics. One particular, more
recently developed class of so-called _stochastic_ inverse problems (SIPs)
identifies a solution as a probability density whose pushforward takes a given
form (Butler et al., 2014, 2015a, 2015b; Mattis et al., 2015; Butler et al.,
2018a; Mattis and Butler, 2019; Uy and Grigoriu, 2019; Butler et al., 2020a;
Bruder et al., 2020; Butler et al., 2020b; Tran and Wildey, 2021). These SIPs
and their solutions have also gone under the names of “(stochastic) inverse
sensitivity problems” (Breidt et al., 2011; Butler et al., 2012, 2014; Butler
and Estep, 2013; Graham et al., 2017); “measure-theoretic inverse problems”
(Butler et al., 2017; Presho et al., 2017); “consistent Bayesian” or
“pushforward-based inference” (Butler et al., 2018a, b; Walsh et al., 2018);
“data/observation-consistent inversion” (Butler et al., 2018b, 2020c; Butler
and Hakula, 2020; Mattis et al., 2022); or “random parameter models” (Swigon
et al., 2019).
The formulation and solution of these SIPs can look peculiar to those familiar
with more traditional inverse problems. For example, each observable quantity
is an entire probability density, not of a fixed realization of a random
variable modelled conditionally. Also, there must be at least as many unknown
parameters as observables. Moreover, derivations of the two different classes
of solutions that have been proposed—those of Breidt et al. (2011) and Butler
et al. (2018a) which we call “BBE” (short for “Breidt, Butler, Estep”) and
“BJW” (short for “Butler, Jakeman, Wildey”), respectively—rely heavily upon
measure theory, specifically the disintegration theorem.
The goal of this paper is to explore the formulation and solution to SIPs
using only introductory probability and mathematical statistics while also
giving a careful examination of the existing literature. While measure theory
is often the language of choice, we feel that in this context it can obscure
concepts that are quite uncomplicated. To begin to explore some of the SIP
peculiarities, we first address the case when $p=q$. For $p\geq q$, we then
offer a class of “intuitive” solutions and show how these relate to a change-
of-variables (CoV) from observable- to parameter-space (Section 3). We give a
simple algorithm to obtain samples from intuitive solutions, and the
underlying reasoning proves useful for later results. The BBE and BJW
solutions are investigated through theoretical derivation and analytic
examples (Sections 4, 5). Analytic results have heretofore never been given.
We show that under mild assumptions, the BBE and BJW solutions can be derived
from CoVs. Some discussion of related work is provided in Section 6. We then
show that _any_ solution to an SIP must be related to a CoV (Section 7). For
$p>q$, the results rely on _auxiliary variables_ to augment the
underdetermined system. A comparative critique of SIPs/CoVs versus inference
is given in Section 8 along with two illustrative examples. Our main
conclusion is that SIPs are significantly different from statistical inverse
problems and should be handled with greater care.
In the remainder of this section, we give a formal definition of the SIP and a
statement of the traditional CoV theorem.
### 1.1 Stochastic Inverse Problem Formulation and Assumptions
In an SIP, a
$q$-dimensional vector of observable quantities or data is taken to be a
random variable $\boldsymbol{Y}$ with given probability density function (pdf)
$f_{\boldsymbol{Y}}(\boldsymbol{y})$. Further, there is also a function or
“forward-map” from parameter- to data-space
$\boldsymbol{g}:\big{(}\mathcal{P}\subseteq\mathbb{R}^{p}\big{)}\longmapsto\big{(}\mathcal{Q}\subseteq\mathbb{R}^{q}\big{)},\qquad\left[\begin{array}[]{l}\Theta_{1}\\ \vdots\\ \Theta_{p}\end{array}\right]\longmapsto\left[\begin{array}[]{lcl}Y_{1}&=&g_{1}(\boldsymbol{\Theta})\\ \vdots&&\\ Y_{q}&=&g_{q}(\boldsymbol{\Theta})\end{array}\right],$
with $p\geq q$, which is either known analytically or which can be evaluated
as a “black-box”, such as a computer model that solves a set of differential
equations. The goal is to obtain a density for random variables in the pre-
image, $\boldsymbol{\Theta}\in\mathcal{P}$, that will transform (exactly or in
some approximate sense) to $f_{\boldsymbol{Y}}(\boldsymbol{y})$ under the
forward-map $\boldsymbol{g}$. In other words, given $f_{\boldsymbol{Y}}$ and
$\boldsymbol{g}$, the solution is
$\displaystyle f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})\quad\text{such that}\quad\boldsymbol{\Gamma}\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{g}(\boldsymbol{\Theta})\sim f_{\boldsymbol{Y}}$ (3)
i.e., a density that pushes forward or propagates “correctly”. _Throughout
this paper we assume that a solution exists_ , though in applications one may
need to be careful and check that the range $\boldsymbol{g}(\mathcal{P})$
contains $\mathcal{Q}$, the support of the given
$f_{\boldsymbol{Y}}(\boldsymbol{y})$.
If, within the classical IP definition (1), we take $\mathcal{P}$ and
$\mathcal{Q}$ to be subsets of Euclidean space and replace $\boldsymbol{y}$
and $\boldsymbol{\theta}$ with continuous random variables $\boldsymbol{Y}$
and $\boldsymbol{\Theta}$, then the SIP defined by (3) appears to be a natural
variant of the more traditional (1). The simplest example is when the operator
$\mathcal{F}$ is a linear map $\boldsymbol{g}$ between Euclidean spaces and
represented by the invertible matrix $\boldsymbol{A}$. The solution to the
classical linear IP is of course
$\boldsymbol{\theta}=\boldsymbol{A}^{-1}\boldsymbol{y}$; the solution to the
linear SIP is the density of the random variable
$\boldsymbol{A}^{-1}\boldsymbol{Y}$.
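The linear case is easy to check numerically. A minimal sketch (the matrix and Gaussian density are our illustrative choices): draw samples of $\boldsymbol{Y}$, map them through $\boldsymbol{A}^{-1}$, and confirm that pushing the resulting samples back through $\boldsymbol{g}$ recovers $f_{\boldsymbol{Y}}$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                  # illustrative invertible forward map
A_inv = np.linalg.inv(A)

# Given f_Y = N(mu, Sigma), the SIP solution is the density of A^{-1} Y,
# i.e. N(A^{-1} mu, A^{-1} Sigma A^{-T}).
mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.2],
                  [0.2, 0.5]])

y = rng.multivariate_normal(mu, Sigma, size=50_000)
theta = y @ A_inv.T                         # samples from the SIP solution

# Pushing the samples back through g(theta) = A theta recovers the y samples
# exactly, so their distribution is f_Y.
gamma = theta @ A.T
```

This sample-based check is exactly the notion of approximate solution formalised in Assumption (A0) below.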
In practice, a random sample from
$f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$ constitutes a solution as well,
albeit an _approximate_ solution whose quality increases with sample size. Let
us explicitly state this as a non-controversial assumption.
_ Assumption (A0): The SIP defined by (3) is approximately solved when a
random sample $\boldsymbol{\theta}^{(1)},\ldots,\boldsymbol{\theta}^{(M)}$ is
obtained such that
$\boldsymbol{g}\big{(}\boldsymbol{\theta}^{(1)}\big{)},\ldots,\boldsymbol{g}\big{(}\boldsymbol{\theta}^{(M)}\big{)}\stackrel{{\scriptstyle\text{iid}}}{{\sim}}f_{\boldsymbol{Y}}$.
_
The main assumption that we use in this paper is the following.
_ Assumption (A1): The maps $g_{1},\ldots,g_{q}$ are in $C^{1}(\mathcal{P})$
(i.e. they are continuously differentiable on the domain) and the Jacobian
$\big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}}\big{|}$ has
full row rank except on a set $\mathcal{P}_{0}$ of measure zero. Without
loss of generality, the left $q\times q$ block of the Jacobian,
$\big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}_{1:q}}\big{|}$,
is invertible on all of $\mathcal{P}\setminus\mathcal{P}_{0}$. _
First, it is not unreasonable to assume that the Jacobian has linearly
independent rows almost everywhere (“a.e.”, meaning for all but a set of
measure zero) in $\mathcal{P}$. If the rows were dependent a.e., then one
could consider only the independent outputs and instead solve the
corresponding reduced SIP. Second, (A1) allows the domain to be partitioned
into a subdomain $\mathcal{P}_{0}$ where the Jacobian determinant vanishes and
disjoint sets $\mathcal{P}_{1},\ldots,\mathcal{P}_{m}$ on which the functions
$g_{1},\ldots,g_{q}$ produce a full-row-rank Jacobian.
In the first paragraph of this section it was taken as given that:
_ Assumption (A2): The random variables $\boldsymbol{Y}$ and
$\boldsymbol{\Theta}$ are absolutely continuous and thus admit probability
densities with respect to Lebesgue measure. _
This assumption is not strictly necessary but is, nonetheless, the usual
context for the SIPs. The results of this paper carry over to discrete random
variables as well, but we shall from here on only speak in terms of
probability densities.
### 1.2 Changes-of-Variables
For completeness we state the change-of-variables (CoV) theorem, which can be
found in standard textbooks on probability and mathematical statistics. The
following is taken from Shao (2003), pg. 23 and is also similar to Casella and
Berger (2002), pg. 185.
###### Theorem (Change-of-Variables).
Let $\boldsymbol{U}$ be a random $p$-vector with a Lebesgue pdf
$f_{\boldsymbol{U}}(\boldsymbol{u})$ and let
$\boldsymbol{V}\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{t}(\boldsymbol{U})$,
where $\boldsymbol{t}$ is a Borel function from
$(\mathbb{R}^{p},\mathcal{B}^{p})$ to $(\mathbb{R}^{p},\mathcal{B}^{p})$. Let
$A_{1},\ldots,A_{d}$ be disjoint sets in $\mathcal{B}^{p}$ such that
$\mathbb{R}^{p}-(A_{1}\cup\cdots\cup A_{d})$ has Lebesgue measure 0 and
$\boldsymbol{t}$ on $A_{i}$ is one-to-one with a nonvanishing Jacobian, i.e.,
$|\frac{\partial\boldsymbol{t}}{\partial\boldsymbol{u}}|\neq 0$ on $A_{i}$ for
$i=1,\ldots,d$. Then $\boldsymbol{V}$ has the following Lebesgue pdf
$\displaystyle f_{\boldsymbol{V}}(\boldsymbol{v})$
$\displaystyle=\sum_{i=1}^{d}f_{\boldsymbol{U}}\big{(}\boldsymbol{u}=\boldsymbol{t}^{-1}_{i}(\boldsymbol{v})\big{)}\Big{|}\frac{\partial\boldsymbol{t}^{-1}_{i}}{\partial\boldsymbol{v}}\Big{|}\
,$ (4)
where $\boldsymbol{t}^{-1}_{i}$ is the inverse function of $\boldsymbol{t}$ on
$A_{i}$ (i=1, …, d).
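As a quick numerical illustration of the theorem (our example, not from the cited texts): take $U\sim N(0,1)$ and $t(u)=u^{2}$, which is two-to-one on $\mathbb{R}\setminus\{0\}$ with branches $t_{1,2}^{-1}(v)=\pm\sqrt{v}$; Eq. (4) then yields the $\chi^{2}_{1}$ density:

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.normal(size=200_000)           # U ~ N(0, 1)
v = u**2                               # V = t(U) with t(u) = u^2

def f_V(v):
    """Eq. (4) with d = 2 branches t_i^{-1}(v) = +/- sqrt(v):
    each branch contributes f_U(+/-sqrt(v)) * 1/(2 sqrt(v))."""
    root = np.sqrt(v)
    f_U = np.exp(-0.5 * v) / np.sqrt(2.0 * np.pi)
    return 2.0 * f_U / (2.0 * root)    # the two branches contribute equally

# Empirical density estimate of V (normalised by the full sample size,
# since the grid covers only part of the support).
edges = np.linspace(0.2, 2.0, 19)
counts, _ = np.histogram(v, bins=edges)
hist = counts / (v.size * np.diff(edges))
mid = 0.5 * (edges[:-1] + edges[1:])
```

The histogram of the sampled $V$ agrees with the two-branch formula, which is of course the standard $\chi^{2}_{1}$ density.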
Since one of the goals of the paper is to keep the exposition at the level of
basic probability and away from measure theory, we note that the theorem above
is the only place in which Borel sigma algebras are mentioned, and the last
time we need to refer to Lebesgue measure.
## 2 CoV Solutions to the SIP When $p=q$: Immediate Issues and Insights
Taking $\boldsymbol{t}=\boldsymbol{g}$ and
$\boldsymbol{U}=\boldsymbol{\Theta}$ in the CoV Theorem, the pushforward of
$f_{\boldsymbol{\Theta}}$ through $\boldsymbol{g}$ has density
$f_{\boldsymbol{\Gamma}}(\boldsymbol{\gamma})=\sum_{i=1}^{d}f_{\boldsymbol{\Theta}}\big{(}\boldsymbol{g}_{i}^{-1}(\boldsymbol{\gamma})\big{)}\big{|}\partial\boldsymbol{g}_{i}^{-1}/\partial\boldsymbol{\gamma}\big{|}$.
If $\boldsymbol{g}$ is a one-to-one function of $\mathcal{P}$ onto
$\mathcal{Q}$, then $d=1$, and no $i$ subscript is necessary. In this case one
can go the other direction and pull $f_{\boldsymbol{Y}}$ back through
$\boldsymbol{h}\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{g}^{-1}$ to
get
$f_{\boldsymbol{H}}(\boldsymbol{\eta})=f_{\boldsymbol{Y}}\big{(}\boldsymbol{g}(\boldsymbol{\eta})\big{)}\big{|}\partial\boldsymbol{g}/\partial\boldsymbol{\eta}\big{|}$.
Thus, by taking
$\boldsymbol{\Theta}\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{H}$,
the SIP is solved uniquely (a.e.) by the density
$\displaystyle f^{\text{CoV}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})=f_{\boldsymbol{Y}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}\Big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}}\Big{|}\ .$ (5)
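Eq. (5) can be verified directly for a simple one-to-one map; the following sketch uses our illustrative choices $g(\theta)=e^{\theta}$ and $Y\sim\mathrm{Exp}(1)$:

```python
import numpy as np

rng = np.random.default_rng(3)

# One-to-one SIP sketch: g(theta) = exp(theta), f_Y the Exp(1) density.
# Eq. (5) gives the unique (a.e.) solution f_Y(g(t)) |dg/dt|.
y = rng.exponential(size=200_000)      # samples from f_Y
theta = np.log(y)                      # Theta = g^{-1}(Y) solves the SIP

def f_theta_cov(t):
    """Eq. (5): f_Y(g(t)) * |dg/dt| = exp(-exp(t)) * exp(t)."""
    return np.exp(-np.exp(t)) * np.exp(t)

# Empirical density of theta, normalised by the full sample size
# since the grid covers only part of the support.
edges = np.linspace(-2.0, 1.0, 16)
counts, _ = np.histogram(theta, bins=edges)
hist = counts / (theta.size * np.diff(edges))
mid = 0.5 * (edges[:-1] + edges[1:])
```

The histogram of $\log Y$ matches the pullback density, confirming that pulling $f_{\boldsymbol{Y}}$ back through $\boldsymbol{g}^{-1}$ solves the SIP in the invertible case.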
If $\boldsymbol{g}$ is not a uniquely invertible function, its many-to-one
nature allows for multiple solutions to the SIP. We show in the next
proposition that there are in fact _infinitely_ many solutions. This property
does not appear to have been observed within the existing literature. For
simplicity and ease of exposition we assume that $\boldsymbol{g}$ is exactly
$m$-to-1 onto all of $\mathcal{Q}$. The general case can be proved similarly,
but involves keeping track of all the distinct sets where the function is two-
to-one, three-to-one, etc. within the $d$-partition of the domain; as such the
indexing is more laborious and no further insight is added.
###### Proposition 1.
Assume (A1) and let $\boldsymbol{g}$ be an $m$-to-1 ($m>1$) function of
$\mathcal{P}$ onto $\mathcal{Q}$. Then there exists a continuously-indexed,
infinite family of solutions to the SIP.
###### Proof.
Let $\mathbbm{1}\\{\boldsymbol{\theta}\in S\\}$ denote an indicator function
that is 1 when $\boldsymbol{\theta}$ is in the set $S$, and 0 otherwise.
Consider the density
$\displaystyle
f^{\text{CoV}}_{\boldsymbol{\Theta},\boldsymbol{w}}(\boldsymbol{\theta})$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}f_{\boldsymbol{Y}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}\Big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}}\Big{|}\sum_{j=1}^{m}w_{j}\
\mathbbm{1}\\{\boldsymbol{\theta}\in\mathcal{P}_{j}\\}$ (6)
continuously indexed by mixture weights summing to unity:
$\sum_{j=1}^{m}w_{j}=1$. By the CoV Theorem, the pushforward density for
$\boldsymbol{\Gamma}\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{g}(\boldsymbol{\Theta})$
is
$\displaystyle f_{\boldsymbol{\Gamma}}(\boldsymbol{\gamma})$
$\displaystyle=\sum_{i=1}^{m}f^{\text{CoV}}_{\boldsymbol{\Theta},\boldsymbol{w}}\big{(}\boldsymbol{\theta}=\boldsymbol{g}_{i}^{-1}(\boldsymbol{\gamma})\big{)}\
\Big{|}\frac{\partial\boldsymbol{g}_{i}^{-1}}{\partial\boldsymbol{\gamma}}\Big{|}$
$\displaystyle=\sum_{i=1}^{m}\left(f_{\boldsymbol{Y}}\Big{(}\boldsymbol{g}\big{(}\boldsymbol{g}_{i}^{-1}(\boldsymbol{\gamma})\big{)}\Big{)}\Big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}}\Big{|}_{\boldsymbol{\theta}=\boldsymbol{g}_{i}^{-1}(\boldsymbol{\gamma})}\sum_{j=1}^{m}w_{j}\
\mathbbm{1}\\{\boldsymbol{g}_{i}^{-1}(\boldsymbol{\gamma})\in\mathcal{P}_{j}\\}\right)\Big{|}\frac{\partial\boldsymbol{g}_{i}^{-1}}{\partial\boldsymbol{\gamma}}\Big{|}$
$\displaystyle=f_{\boldsymbol{Y}}(\boldsymbol{\gamma})\sum_{i=1}^{m}\left(\sum_{j=1}^{m}w_{j}\
\mathbbm{1}\\{\boldsymbol{g}_{i}^{-1}(\boldsymbol{\gamma})\in\mathcal{P}_{j}\\}\right)$
$\displaystyle=f_{\boldsymbol{Y}}(\boldsymbol{\gamma})\sum_{i=1}^{m}w_{i}\
\mathbbm{1}\\{\boldsymbol{g}_{i}^{-1}(\boldsymbol{\gamma})\in\mathcal{P}_{i}\\}\quad=f_{\boldsymbol{Y}}(\boldsymbol{\gamma})$
as desired. In the penultimate line, each $i\neq j$ term vanishes because
$\boldsymbol{g}_{i}^{-1}(\boldsymbol{\gamma})\in\mathcal{P}_{i}$ by construction
while the sets $\mathcal{P}_{j}$ are pairwise disjoint; each surviving $i=j$
indicator equals 1. ∎
Section A of the appendix gives a concrete demonstration of the method used in
the proof above. In the general case, any sets in the domain that get mapped
to the same range subset can be given their own set of weights summing to
unity. In summary, when $p=q$, the existence of a neighborhood within
$\mathcal{P}$ where $\boldsymbol{g}$ is not one-to-one implies an infinite
number of solutions to the SIP. The following related result states that there
are no other solutions to the SIP beyond those found by CoVs.
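The construction in the proof of Proposition 1 can be checked numerically. A minimal sketch (our own toy example): take the two-to-one map $g(\theta)=\theta^{2}$ on $[-1,1]\setminus\{0\}$ with $Y\sim\text{Beta}(2,5)$; sampling the branch $\pm\sqrt{y}$ with probabilities $(w_{1},w_{2})$ realizes the weighted density (6), and every choice of weights pushes forward to the same $f_{\boldsymbol{Y}}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-to-one map g(theta) = theta^2 from [-1, 1] \ {0} onto (0, 1], with branch
# inverses +sqrt on P1 = (0, 1] and -sqrt on P2 = [-1, 0).
def sample_weighted_cov(w1, n):
    """Sample from the weighted CoV density (6) with weights (w1, 1 - w1)."""
    y = rng.beta(2.0, 5.0, size=n)                    # Y ~ Beta(2, 5)
    branch = np.where(rng.random(n) < w1, 1.0, -1.0)  # pick g_i^{-1} with prob w_i
    return branch * np.sqrt(y)

# Two distinct members of the infinite family of SIP solutions
theta_a = sample_weighted_cov(w1=0.5, n=100_000)
theta_b = sample_weighted_cov(w1=0.9, n=100_000)

# Both push forward to the same data density, g(theta) = theta^2 ~ Beta(2, 5),
# even though the solutions themselves clearly differ.
push_a, push_b = theta_a**2, theta_b**2
print(push_a.mean(), push_b.mean())    # both near E[Beta(2,5)] = 2/7
print(theta_a.mean(), theta_b.mean())  # near 0 vs. strictly positive
```

The pushforwards agree with $f_{Y}$ regardless of the weights, while the two solution densities have visibly different means.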
###### Proposition 2.
Suppose that $p=q$ and assumption (A1) holds for the forward map
$\boldsymbol{g}$. If $f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$ solves the
SIP (3), then it is (a.e.) derivable from changes-of-variables.
###### Proof.
By (A1), the domain $\mathcal{P}\setminus\mathcal{P}_{0}$ can be partitioned
into $\mathcal{P}_{1},\ldots,\mathcal{P}_{d}$ such that $\boldsymbol{g}$ has
$C^{1}(\mathcal{Q})$ inverses
$\boldsymbol{g}_{1}^{-1},\ldots,\boldsymbol{g}_{d}^{-1}$ defined on these
sets. The density $f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$ solves the
SIP, so then by the CoV Theorem, its pushforward under $\boldsymbol{g}$ is
$f_{\boldsymbol{Y}}(\boldsymbol{y})=\sum_{i=1}^{d}f_{\boldsymbol{\Theta}}\big{(}\boldsymbol{g}_{i}^{-1}(\boldsymbol{y})\big{)}\
\big{|}\frac{\partial\boldsymbol{g}_{i}^{-1}}{\partial\boldsymbol{y}}\big{|}$.
Let the range sets be denoted
$\mathcal{Q}_{i}\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{g}(\mathcal{P}_{i})$
(while not a proper partition, it still holds that
$\mathcal{Q}=\bigcup_{i=1}^{d}\mathcal{Q}_{i}$). Taking
$\displaystyle f_{\boldsymbol{Y}_{i}}(\boldsymbol{y})$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}w_{i}^{-1}f_{\boldsymbol{\Theta}}\big{(}\boldsymbol{g}_{i}^{-1}(\boldsymbol{y})\big{)}\Big{|}\frac{\partial\boldsymbol{g}_{i}^{-1}}{\partial\boldsymbol{y}}\Big{|}\ \mathbbm{1}\\{\boldsymbol{y}\in\mathcal{Q}_{i}\\}$
$\displaystyle w_{i}$
$\displaystyle=\int_{\mathcal{Q}_{i}}f_{\boldsymbol{\Theta}}\big{(}\boldsymbol{g}_{i}^{-1}(\boldsymbol{y})\big{)}\Big{|}\frac{\partial\boldsymbol{g}_{i}^{-1}}{\partial\boldsymbol{y}}\Big{|}d\boldsymbol{y}=\int_{\mathcal{P}_{i}}f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})d\boldsymbol{\theta}\ ,$
a CoV from each $\mathcal{Q}_{i}$ to $\mathcal{P}_{i}$ under
$\boldsymbol{g}_{i}^{-1}$ yields densities
$f^{\text{CoV}}_{\boldsymbol{\Theta}_{i}}(\boldsymbol{\theta})\stackrel{{\scriptstyle\text{def}}}{{=}}w_{i}^{-1}f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})\mathbbm{1}\\{\boldsymbol{\theta}\in\mathcal{P}_{i}\\}$.
The weighted combination $\sum_{i=1}^{d}w_{i}\
f^{\text{CoV}}_{\boldsymbol{\Theta}_{i}}(\boldsymbol{\theta})$ of the $d$
range subset CoVs is then the original given solution to the SIP, up to a set
of exceptional points having measure zero. ∎
Regardless of whether $\boldsymbol{g}$ is invertible everywhere, if the
pushforward of $f_{\boldsymbol{\Theta}}$ is $f_{\boldsymbol{Y}}$, then the
original density can be written concisely as a CoV from $\mathcal{Q}$ to
$\mathcal{P}\setminus\mathcal{P}_{0}$. To see this note that by the Inverse
Function Theorem,
$\big{|}\frac{\partial\boldsymbol{g}_{i}^{-1}}{\partial\boldsymbol{y}}\big{|}_{\boldsymbol{y}=\boldsymbol{g}(\boldsymbol{\theta})}=\big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}}\big{|}^{-1}$
for all $i$ so that
$\displaystyle f_{\boldsymbol{Y}}(\boldsymbol{y})$
$\displaystyle=\sum_{i=1}^{d}f_{\boldsymbol{\Theta}}\big{(}\boldsymbol{g}_{i}^{-1}(\boldsymbol{y})\big{)}\
\Big{|}\frac{\partial\boldsymbol{g}_{i}^{-1}}{\partial\boldsymbol{y}}\Big{|}$
$\displaystyle\Rightarrow\quad
f_{\boldsymbol{Y}}(\boldsymbol{g}(\boldsymbol{\theta}))$
$\displaystyle=\Big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}}\Big{|}^{-1}\
\sum_{i=1}^{d}f_{\boldsymbol{\Theta}}\big{(}\boldsymbol{g}_{i}^{-1}(\boldsymbol{g}(\boldsymbol{\theta}))\big{)}$
$\displaystyle=\Big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}}\Big{|}^{-1}\
\sum_{i=1}^{d}f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})\
\mathbbm{1}\\{\boldsymbol{\theta}\in\mathcal{P}_{i}\\}$
$\displaystyle\Rightarrow\quad f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$
$\displaystyle=f_{\boldsymbol{Y}}(\boldsymbol{g}(\boldsymbol{\theta}))\
\Big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}}\Big{|}\ .$
(7)
This is a general statement that a density can be written in terms of its
pushforward, and will be used in Proposition 4.
## 3 Intuitive Solutions to SIPs
It was demonstrated in the last section that when $p=q$, the SIP (3) is solved
by at least one density obtained from the CoV Theorem. For the more general
case of $p>q$, there is also a family of densities that solve the SIP (3),
which we call “intuitive solutions”.
Let the parameter space be partitioned as
$\boldsymbol{\Theta}^{\top}=\big{(}\boldsymbol{\Theta}_{1:q}^{\top},\boldsymbol{\Theta}_{(q+1):p}^{\top}\big{)}$.
Temporarily fix
$\boldsymbol{\Theta}_{(q+1):p}=\boldsymbol{\theta}^{*}_{(q+1):p}$ and consider
the distribution that comes from solving the “square” SIP:
$\displaystyle f_{\boldsymbol{\Theta}_{1:q}|(\boldsymbol{\Theta}_{(q+1):p}=\boldsymbol{\theta}^{*}_{(q+1):p})}^{\text{CoV}}\quad\text{such that}\quad\boldsymbol{g}\big{(}\boldsymbol{\Theta}_{1:q},\boldsymbol{\theta}^{*}_{(q+1):p}\big{)}\sim f_{\boldsymbol{Y}}$ (8)
via a $q$-dimensional change-of-variables. This implies that the intuitive
solution $f_{\boldsymbol{\Theta}}^{\text{Int}}$ given by
$\displaystyle f_{\boldsymbol{\Theta}}^{\text{Int}}$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}f_{\boldsymbol{\Theta}_{1:q}|\boldsymbol{\Theta}_{(q+1):p}}^{\text{CoV}}\cdot
f_{\boldsymbol{\Theta}_{(q+1):p}}$ (9)
solves the SIP for any choice of $f_{\boldsymbol{\Theta}_{(q+1):p}}$.
Sampling from this density constitutes a solution by (A0), and this can be
performed using a simple and intuitive (hence the name) Monte Carlo routine
within the algorithm below.
Algorithm 1 Intuitive solutions to the stochastic inverse problem (SIP)
Given densities $f_{\boldsymbol{Y}}$ and $f_{\boldsymbol{\Theta}_{(q+1):p}}$:
1. Sample $\boldsymbol{y}^{*}\sim f_{\boldsymbol{Y}}$ and $\boldsymbol{\theta}_{(q+1):p}^{*}\sim f_{\boldsymbol{\Theta}_{(q+1):p}}$.
2. Solve the (likely nonlinear) equation
$\displaystyle\boldsymbol{y}^{*}-\boldsymbol{g}\big{(}\boldsymbol{\theta}_{1:q}^{*},\boldsymbol{\theta}_{(q+1):p}^{*}\big{)}=\boldsymbol{0}$ (10)
for $\boldsymbol{\theta}_{1:q}^{*}$ to form the random sample $\big{(}\boldsymbol{\theta}_{1:q}^{*\top},\boldsymbol{\theta}_{(q+1):p}^{*\top}\big{)}^{\top}$, a realization of $\boldsymbol{\Theta}\sim f^{\text{Int}}_{\boldsymbol{\Theta}}$.
3. Repeat steps 1–2.
Algorithm 1 requires no sophisticated sampling techniques, but it does require
a nonlinear solver. Typical solvers benefit from the ability to evaluate and
use the Jacobian, but this is not always required. This algorithm was
mentioned in Swigon et al. (2019) for the simplest case of $p=q$, though the
authors ultimately avoided the use of solvers and instead employed a
Metropolis-Hastings algorithm based upon an approximated Jacobian term.
Intuitive solutions as given above were not considered as an alternative
within the work stemming from the two major approaches of BBE (Breidt et al.,
2011) and BJW (Butler et al., 2018a).
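Algorithm 1 can be sketched in a few lines of Python. The forward map $g(\theta_{1},\theta_{2})=\theta_{1}^{3}+\theta_{2}$ and the densities below are our own illustrative choices, and SciPy's `brentq` stands in for the generic nonlinear solver:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)

# Toy forward map with p = 2 inputs and q = 1 output (illustrative choice)
def g(theta1, theta2):
    return theta1**3 + theta2

n = 5_000
# Step 1: draw y* ~ f_Y and theta_2* ~ f_{Theta_2} independently
y_star = rng.normal(0.0, 1.0, size=n)          # f_Y = N(0, 1)
theta2_star = rng.uniform(-0.5, 0.5, size=n)   # chosen density for Theta_2

# Step 2: solve y* - g(theta_1*, theta_2*) = 0 for theta_1* with a root finder;
# the bracket [-3, 3] is wide enough to contain the root for these draws
theta1_star = np.array([
    brentq(lambda t1: y - g(t1, t2), -3.0, 3.0)
    for y, t2 in zip(y_star, theta2_star)
])

# The pairs (theta_1*, theta_2*) are draws from f^Int; by construction their
# pushforward reproduces the sampled y* up to solver tolerance
push = g(theta1_star, theta2_star)
print(np.max(np.abs(push - y_star)))  # essentially zero
```

Note that the pushforward and $\theta_{2}^{*}$ are uncorrelated here, exactly the independence property discussed below.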
When the Jacobian of the function $\boldsymbol{g}$ vanishes within the domain
$\mathcal{P}$, the input space must be partitioned into $d>1$ disjoint sets
according to the CoV Theorem. This will lead to the same infinite class of
solutions as in Proposition 1 of the previous section. When using
deterministic equation solvers, the distribution of starting values for
$\boldsymbol{\theta}^{*}_{1:q}$ will determine the particular density out of
the infinite family.
The main characteristic of intuitive solutions is made explicit by Algorithm 1.
In the first step, $\boldsymbol{y}^{*}\sim f_{\boldsymbol{Y}}$ and
$\boldsymbol{\theta}_{(q+1):p}^{*}\sim f_{\boldsymbol{\Theta}_{(q+1):p}}$
are drawn _independently_, not from some joint density. The solution therefore
has the feature that the response and $(p-q)$ of the parameters do not covary:
the output is independent of these inputs. Thus, when forming an intuitive
solution to the SIP, one might specify a density for the $(p-q)$ least
important input parameters, as determined by an initial sensitivity analysis.
As a final note about intuitive solutions, the relationship between the
density (9) and the sampling steps of Algorithm 1 will prove useful in later
sections (specifically, 4.1, 5.4, and 7).
## 4 Breidt et al. 2011 (BBE) and Related Work
The IP under current consideration was first (to the best of our knowledge)
described in Breidt et al. (2011) under the name “inverse sensitivity problem”
in the first of three papers (Parts I, II, III). The authors give an algorithm
to compute an approximate pdf of the solution, which we denote as
$\widehat{f}^{\text{BBE}}_{\boldsymbol{\Theta}}$ for a single output ($q=1$).
However, the authors do not specify the exact solution
$f^{\text{BBE}}_{\boldsymbol{\Theta}}$ that should be approximated by
$\widehat{f}^{\text{BBE}}_{\boldsymbol{\Theta}}$. This approximate solution
relies upon (A1), (A2), and the presence of derivative information, obtained
either analytically or estimated via adjoint techniques. The algorithm and
hence the approximate solution
$\widehat{f}^{\text{BBE}}_{\boldsymbol{\Theta}}$ depends heavily upon
discretization and as such is restricted to very low dimension $p$.
Part II (Butler et al., 2012) gives a rigorous error analysis of Breidt et al.
(2011), treating both statistical error due to sampling of the data density
$f_{\boldsymbol{Y}}$, and numerical error due to solving differential
equations during the likelihood calculation. Part III (Butler et al., 2014)
considers the multiresponse scenario, i.e., $q>1$. The Part III approach
deviates from Parts I and II in that the authors discretize events in the
parameter space as opposed to manifolds in the observable space. This
framework leads to the first mention of the disintegration theorem of measure
theory.
The last methodological developments within the BBE framework are Butler et
al. (2017) and Mattis and Butler (2019). Butler et al. (2017) discusses the
necessary event approximations and considers an adaptive sampling algorithm to
solve the SIP. Mattis and Butler (2019) focuses on computational estimates of
events from samples, using adjoints to enhance low-order piecewise-defined
surrogate response surfaces.
Several subsequent works have explored the use of
$\widehat{f}^{\text{BBE}}_{\boldsymbol{\Theta}}$ in various contexts. Butler
and Estep (2013) illustrates the BBE approach using the Brusselator
reaction–diffusion model. Butler et al. (2015b) compares the multiresponse
methodology of Butler et al. (2014) to other UQ methods in the context of
material damage from vibrational data. Mattis et al. (2015) applies the
multiresponse BBE to a groundwater contamination problem. Eight parameters
(including source location coordinates, the contaminant source flux, etc.) are
inferred from seven data points (the concentrations of the contaminant in
seven wells). Similarly, Butler et al. (2015a) describes the SIP for an
application in hydrodynamic models. The authors infer two parameters—the two-
dimensional vector of the Manning’s $n$ parameter—given maximum water
elevations at two of twelve possible observation stations. Note that, although
data is available from all twelve stations, only one or two may be used in any
given SIP, due to the constraint that $p\geq q$. To address this peculiarity
of the method, the authors introduce the notion of “geometrically distinct”
data (or quantities of interest, “QoI”): since $q\leq p$, each data point
should contain as much information as possible (analogous to linear
independence). At the same time (and as pointed out by the authors), two
different sets of geometrically distinct data may lead to very different
solutions for an otherwise identical SIP. Graham et al. (2017) also estimates
probable Manning’s $n$ fields using a storm surge model and the BBE SIP
framework. Finally, Presho et al. (2017) uses the BBE framework together with
the generalized multiscale finite element method for uncertainty
quantification within two-phase flow problems.
More recently, Uy and Grigoriu (2019) provides useful examples and commentary
on SIPs, with an emphasis on Breidt et al. (2011); we comment further on this
paper in Section 6.
### 4.1 The BBE Solution to the SIP is a Change-of-Variables
In their solutions to the univariate and multivariate inverse sensitivity
problem, Breidt et al. (2011) and Butler et al. (2014) provide algorithms to
discretize a probability density that is never actually given in closed form.
We begin our reanalysis by deriving what this density must be. The derivation
will clarify and extend arguments of Uy and Grigoriu (2019) (Sec. 2) without
the explicit use of the disintegration theorem.
Some notation and concepts from the BBE approach are needed. In the BBE
setup there is a transverse parameterization
$\boldsymbol{t}(\boldsymbol{\theta})$ of dimension $q$, and a
$(p-q)$-dimensional contour parameterization
$\boldsymbol{c}(\boldsymbol{\theta})$ of the contours of the forward map
$\boldsymbol{g}$. Furthermore, there exists a one-to-one
function $\boldsymbol{g}^{t}$ between $\boldsymbol{t}(\boldsymbol{\theta})$
and $\mathcal{Q}$ induced by $\boldsymbol{g}$, although in practice this
function is typically unknown. The function $\boldsymbol{g}^{t}$ simply
transforms the transverse coordinates uniquely to the observable space
$\mathcal{Q}$ and thus operates as a $q$-dimensional forward map.
It is easiest to derive the density of the BBE solution by first thinking of
how to sample from it, and this can be done akin to Algorithm 1 for the
intuitive solutions. Given a random sample $\boldsymbol{y}^{*}\sim
f_{\boldsymbol{Y}}$, the solution to
$\boldsymbol{y}^{*}-\boldsymbol{g}^{t}(\boldsymbol{t}^{*})=\boldsymbol{0}$ is
a random draw from the density
$f_{\boldsymbol{T}}(\boldsymbol{t})=f_{\boldsymbol{Y}}\big{(}\boldsymbol{g}^{t}(\boldsymbol{t})\big{)}\big{|}\frac{\partial\boldsymbol{g}^{t}}{\partial\boldsymbol{t}}\big{|}$.
Next, a uniformly sampled point along the contour indexed by
$\boldsymbol{t}^{*}$ is a realization $\boldsymbol{c}^{*}\sim
f_{\boldsymbol{C}|\boldsymbol{T}}(\boldsymbol{c}\ |\
\boldsymbol{T}=\boldsymbol{t}^{*})$ for the special case of a uniform
distribution. Thus the pair $(\boldsymbol{t}^{*},\boldsymbol{c}^{*})$ is a
random draw from the joint distribution
$f_{\boldsymbol{T},\boldsymbol{C}}=f_{\boldsymbol{T}}\cdot
f_{\boldsymbol{C}|\boldsymbol{T}}$. Putting this back in
$\boldsymbol{\theta}$-coordinates requires one more change-of-variables and
assumes that the map
$(\boldsymbol{t},\boldsymbol{c})\mapsto\boldsymbol{\theta}$ has nonvanishing
Jacobian a.e. The resulting density is thus given by the following iterated
change-of-variables:
$\displaystyle f^{\text{BBE}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$
$\displaystyle=f_{\boldsymbol{T},\boldsymbol{C}}\Big{(}\boldsymbol{t}(\boldsymbol{\theta}),\boldsymbol{c}(\boldsymbol{\theta})\Big{)}\Big{|}\frac{\partial(\boldsymbol{t},\boldsymbol{c})}{\partial\boldsymbol{\theta}}\Big{|}$
$\displaystyle=f_{\boldsymbol{T}}(\boldsymbol{t}(\boldsymbol{\theta}))f_{\boldsymbol{C}|\boldsymbol{T}}\Big{(}\boldsymbol{c}(\boldsymbol{\theta})\
|\
\boldsymbol{t}(\boldsymbol{\theta})\Big{)}\Big{|}\frac{\partial(\boldsymbol{t},\boldsymbol{c})}{\partial\boldsymbol{\theta}}\Big{|}$
$\displaystyle=\left(f_{\boldsymbol{Y}}\Big{(}\boldsymbol{g}^{t}\big{(}\boldsymbol{t}(\boldsymbol{\theta})\big{)}\Big{)}\Big{|}\frac{\partial\boldsymbol{g}^{t}}{\partial\boldsymbol{t}}\Big{|}_{\boldsymbol{t}(\boldsymbol{\theta})}\
f_{\boldsymbol{C}|\boldsymbol{T}}\Big{(}\boldsymbol{c}(\boldsymbol{\theta})\
|\
\boldsymbol{t}(\boldsymbol{\theta})\Big{)}\right)\left|\frac{\partial(\boldsymbol{t},\boldsymbol{c})}{\partial\boldsymbol{\theta}}\right|$
(11)
for $f_{\boldsymbol{C}|\boldsymbol{T}}$ uniform. We have just sketched a proof
of the following result.
###### Proposition 3.
Suppose that $\boldsymbol{g}^{t}(\boldsymbol{t})$ (induced by the full forward
map $\boldsymbol{g}$) is a continuously differentiable, invertible map from
the $q$-dimensional image of $\mathcal{P}$ under
$\boldsymbol{t}(\boldsymbol{\theta})$ to $\mathcal{Q}$. Also suppose that
together the transverse and contour parameterizations
$\boldsymbol{t}(\boldsymbol{\theta})$ and
$\boldsymbol{c}(\boldsymbol{\theta})$ jointly map to the parameter space
$\mathcal{P}$ with nonvanishing Jacobian according to (A1). Then the exact BBE
solution is equivalent to an iterated change-of-variables given by (11).
The main difficulty with directly using the CoV form of the BBE solution above
is the lack of closed-form transverse and contour parameterizations. Hence,
examples in which the exact density $f^{\text{BBE}}(\boldsymbol{\theta})$ is
available are hard to come by. The following two examples do however admit
closed-form solutions and illustrate the concepts above.
### 4.2 Example: Linear Map
Here we find the exact solution density corresponding to Section 3.1 of Butler
et al. (2014). Let
$\boldsymbol{g}(\boldsymbol{\theta})\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{A}\boldsymbol{\theta}$
so that $\boldsymbol{Y}=\boldsymbol{A}\boldsymbol{\Theta}$. Suppose that
$\boldsymbol{A}^{\perp}$ is a $(p-q)\times p$ matrix whose rows span the null
space of $\boldsymbol{A}$ (the orthogonal complement of the row space of
$\boldsymbol{A}$).
Consider the transverse parameterization that follows the Jacobian of the
forward map, namely,
$\boldsymbol{t}(\boldsymbol{\theta})\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{A}\boldsymbol{\theta}$.
The connection between the data-space and transverse coordinates is the
trivial one:
$\boldsymbol{y}=\boldsymbol{g}^{t}(\boldsymbol{t})\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{I}_{q}\boldsymbol{t}$.
Thus we have
$f_{\boldsymbol{T}}(\boldsymbol{t})=f_{\boldsymbol{Y}}(\boldsymbol{t})$.
Contours of $\boldsymbol{g}$ are described by
$\boldsymbol{c}(\boldsymbol{\theta};\boldsymbol{t})\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{A}^{\perp}\boldsymbol{\theta}$.
The choice of uniformly distributed contours implies that
$f_{\boldsymbol{C}|\boldsymbol{T}}(\boldsymbol{c}|\boldsymbol{t})\propto\mathbbm{1}\big{\\{}\boldsymbol{l}(\boldsymbol{t})\leq\boldsymbol{c}\leq\boldsymbol{u}(\boldsymbol{t})\big{\\}}$,
where $\boldsymbol{l}$ and $\boldsymbol{u}$ are given lower and upper bounds
(in order to define a valid probability distribution). If the pre-specified
bounds are aligned with $\boldsymbol{A},\boldsymbol{A}^{\perp}$, then
$\boldsymbol{l}$ and $\boldsymbol{u}$ will not depend on $\boldsymbol{t}$.
Transforming $(\boldsymbol{T},\boldsymbol{C})$ to $\boldsymbol{\Theta}$ comes
from the invertible augmented system
$\displaystyle\begin{bmatrix}\boldsymbol{t}\\\ \boldsymbol{c}\end{bmatrix}$
$\displaystyle=\left(\boldsymbol{A}_{\text{aug}}\stackrel{{\scriptstyle\text{def}}}{{=}}\begin{bmatrix}\boldsymbol{A}\\\
\boldsymbol{A}^{\perp}\end{bmatrix}\right)\boldsymbol{\theta}$
having constant Jacobian $\left|\boldsymbol{A}_{\text{aug}}\right|$, and this
implies the result
$\displaystyle f^{\text{BBE}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$
$\displaystyle\propto f_{\boldsymbol{Y}}(\boldsymbol{A}\boldsymbol{\theta})\
\mathbbm{1}\big{\\{}\boldsymbol{l}(\boldsymbol{A}\boldsymbol{\theta})\leq\boldsymbol{A}^{\perp}\boldsymbol{\theta}\leq\boldsymbol{u}(\boldsymbol{A}\boldsymbol{\theta})\big{\\}}\
.$
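This density can be sampled and checked numerically. In the sketch below (our own implementation; the contour bounds $\boldsymbol{l}=0$, $\boldsymbol{u}=1$ are an illustrative assumption), we draw $(t,c)$, map back through $\boldsymbol{A}_{\text{aug}}^{-1}$, and confirm the pushforward:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)

A = np.array([[-1/3, 4/3]])       # forward map g(theta) = A theta (q = 1, p = 2)
A_perp = np.array([[4/3, 1/3]])   # rows span the null space of A
A_aug = np.vstack([A, A_perp])    # invertible augmented system

n = 50_000
# Transverse coordinate: t ~ f_Y, a N(1/2, (1/4)^2) truncated to (0, 1)
t = truncnorm.rvs(-2.0, 2.0, loc=0.5, scale=0.25, size=n, random_state=rng)
# Contour coordinate: uniform along each contour (bounds chosen for illustration)
c = rng.uniform(0.0, 1.0, size=n)

# Map (t, c) back to theta-coordinates through the inverse augmented system
theta = np.linalg.solve(A_aug, np.vstack([t, c]))  # shape (2, n)

# Pushforward check: A theta recovers the transverse samples exactly
push = (A @ theta).ravel()
print(np.max(np.abs(push - t)))  # numerically zero
print(push.mean())               # near 1/2 by the symmetry of f_Y
```

Because $\boldsymbol{A}\boldsymbol{\theta}=t$ by construction, the pushforward matches the truncated-normal data density exactly.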
The top row of Figure 1 shows samples of the BBE solution overlaid upon
contours of the forward map
$g(\boldsymbol{\theta})\stackrel{{\scriptstyle\text{def}}}{{=}}[-1/3,4/3]\boldsymbol{\theta}$
(left). The observable density $f_{Y}(y)$ is a $N\big{(}1/2,(1/4)^{2}\big{)}$
truncated to (0,1); this is plotted over the pushforward histogram of the BBE
solution samples (right), confirming the solution.
Figure 1: Left: Samples of $\boldsymbol{\Theta}\sim
f^{\text{BBE}}_{\boldsymbol{\Theta}}$ and heatmap of
$f^{\text{BBE}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$ with contours of
the function $g(\boldsymbol{\theta})$ overlaid; Right: The pushforward of the
BBE solution samples ($g(\boldsymbol{\Theta})$, histogram) compared to the
given density $f_{Y}(y)$. Top row: Linear map in Section 4.2; Bottom row:
Nonlinear map in Section 4.3.
### 4.3 Example: Nonlinear Map
Now we work through Example 2 of Butler et al. (2014). Let
$Y=g(\boldsymbol{\Theta})$ with
$g(\theta_{1},\theta_{2})\stackrel{{\scriptstyle\text{def}}}{{=}}\tfrac{1}{2}\big{(}\theta_{1}^{2}+\theta_{2}^{2}\big{)}$
with $0<\theta_{1},\theta_{2}\leq 1$. (Note we have slightly modified the
forward map by introducing the factor of $1/2$, simply to clean up the
results.) Suppose $f_{Y}(y)$ is a pdf defined on $(0,1)$, such as a
$Beta(a,b)$ distribution.
The symmetry of $g(\boldsymbol{\theta})$ allows for tidy transverse and
contour parameterizations via polar coordinates. Consider the transverse
parameter defined by the distance to the origin
$r(\boldsymbol{\theta})\stackrel{{\scriptstyle\text{def}}}{{=}}\sqrt{\theta_{1}^{2}+\theta_{2}^{2}}$
(i.e., the radius), and the contour parameter defined by the polar angle to
the positive $\theta_{1}$-axis,
$\phi(\boldsymbol{\theta})\stackrel{{\scriptstyle\text{def}}}{{=}}\text{atan2}(y=\theta_{2},x=\theta_{1})$.
Observe that the polar angle is increasingly restricted as $r$ goes from 1 to
$\sqrt{2}$.
The connection between the data-space and transverse coordinates is
$y=g^{t}(r)\stackrel{{\scriptstyle\text{def}}}{{=}}\frac{1}{2}r^{2}$. Thus we
have $f_{R}(r)=f_{Y}(\frac{1}{2}r^{2})\cdot r$. Here, the choice of uniformly
distributed contours implies that
$\displaystyle f_{\Phi|R}(\phi|r)$
$\displaystyle=\left\\{\begin{array}[]{ll}\frac{2}{\pi}\ \mathbbm{1}\big{\\{}0<\phi<\tfrac{\pi}{2}\big{\\}}&\quad 0<r\leq 1\\\ \frac{1}{\phi_{2}-\phi_{1}}\ \mathbbm{1}\big{\\{}\phi_{1}<\phi<\phi_{2}\big{\\}}&\quad 1<r\leq\sqrt{2}\end{array}\right.$
$\displaystyle\phi_{1}$
$\displaystyle=\text{atan2}\left(y=\sqrt{r^{2}-1},x=1\right)$
$\displaystyle\phi_{2}$
$\displaystyle=\text{atan2}\left(y=1,x=\sqrt{r^{2}-1}\right).$
Transforming $(R,\Phi)$ to $\boldsymbol{\Theta}$ comes with Jacobian
$\frac{1}{r}$ evaluated at $r=\sqrt{\theta_{1}^{2}+\theta_{2}^{2}}$, and
therefore
$\displaystyle f^{\text{BBE}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$
$\displaystyle=f_{Y}\Big{(}\tfrac{1}{2}(\theta_{1}^{2}+\theta_{2}^{2})\Big{)}\
f_{\Phi|R}\Big{(}\text{atan2}(\theta_{2},\theta_{1})\ |\
\sqrt{\theta_{1}^{2}+\theta_{2}^{2}}\Big{)}\ .$
The bottom row of Figure 1 shows the solution with contours of the nonlinear
forward map (left). The observable density $f_{Y}(y)$ is a $Beta(8,12)$, and
this is plotted over the pushforward histogram of the BBE solution samples
(right).
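The polar construction above can be sampled directly. The following sketch (our own implementation of the parameterization in the text) draws $(r,\phi)$ and confirms that the pushforward reproduces $f_{Y}$ exactly while the samples stay in $(0,1]^{2}$:

```python
import numpy as np

rng = np.random.default_rng(4)

n = 50_000
# Transverse coordinate: y ~ Beta(8, 12), so r = (g^t)^{-1}(y) = sqrt(2y)
y = rng.beta(8.0, 12.0, size=n)
r = np.sqrt(2.0 * y)

# Contour coordinate: polar angle, uniform on the arc allowed by (0, 1]^2.
# For r <= 1 this gives (0, pi/2); for r > 1 it gives the clipped (phi_1, phi_2).
s = np.sqrt(np.maximum(r**2 - 1.0, 0.0))
phi_lo = np.arctan2(s, 1.0)       # = 0 when r <= 1
phi_hi = np.arctan2(1.0, s)       # = pi/2 when r <= 1
phi = rng.uniform(phi_lo, phi_hi)

# Back to theta-coordinates; the pushforward reproduces y exactly
theta1, theta2 = r * np.cos(phi), r * np.sin(phi)
push = 0.5 * (theta1**2 + theta2**2)
print(np.max(np.abs(push - y)))   # numerically zero
```

Since $\tfrac{1}{2}(r\cos\phi)^{2}+\tfrac{1}{2}(r\sin\phi)^{2}=\tfrac{1}{2}r^{2}=y$, the pushforward is the given $Beta(8,12)$ density by construction.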
## 5 Butler et al. 2018 (BJW) and Related Work
Butler, Jakeman, and Wildey (“BJW”) (Butler et al., 2018a) consider the SIP
(3) as above and call the use of their solution “consistent Bayesian
inference” or “pushforward based inference”. Here the use of the term
“consistent” does not refer to the statistical limiting sense, but rather to
the fact that the solution pushes forward to a given density. (We prefer
neither of these terms since we will show that this BJW solution is neither
Bayesian [Sections 5.2, 5.4] nor inference [Section 8]). The exact solution to
the SIP proposed by Butler et al. (2018a) is
$\displaystyle
f_{\boldsymbol{\Theta}}^{\text{BJW}}\big{(}\boldsymbol{\theta}\big{)}$
$\displaystyle=\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}\big{(}\boldsymbol{\theta}\big{)}\frac{f_{\boldsymbol{Y}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}}{\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}}\
,$ (12)
where $\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}$ is a given density and
$\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}}$ is its pushforward through
$\boldsymbol{g}$. The approximate solution
$\widehat{f}_{\boldsymbol{\Theta}}^{\text{BJW}}\big{(}\boldsymbol{\theta}\big{)}$
is what gets used in practice, and this has the same form except the density
in the denominator is replaced by an approximate density
$\widehat{\underaccent{\wtilde}{f}}_{\boldsymbol{\Gamma}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}$.
The BJW solution to the SIP was at least initially called Bayesian because
Bayes’ Rule was used in the derivation of the solution, and because the form
of the solution explicitly features a given initial density
$\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$
(potentially playing a role similar to a prior distribution) times a weighting
function (akin to a likelihood function). Indeed, one must specify this
initial $p$-dimensional density
$\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$, similar
to the choice of a prior distribution during Bayesian inference. Unlike
standard Bayesian methods, however, the derivation also explicitly invokes the
disintegration theorem of measure theory, and the applied solution relies on
kernel density estimation for the denominator term. The practical reliance
upon density estimation restricts the application of BJW to very few
observable QoI (small $q$).
Subsequent works explore numerical aspects, applications, and extensions of
Butler et al. (2018a). Butler et al. (2018b) studies the convergence of kernel
density approximate solutions to the analytic BJW-derived densities. Walsh et
al. (2018) and Butler et al. (2020a) propose algorithms for optimal
experimental design which maximize the expected information gain between
initial and updated densities. Butler and Hakula (2020) applies the SIP
framework to a drum manufacturing process. The QoI are two (of twenty
possible, observable) eigenmodes of the drum vibration, and the parameters are
two diffusion parameters. Butler et al. (2020b) generalizes the BJW solution
to “stochastic” forward maps; we will discuss this generalization further in
Section 5.3 after an example in which (12) has closed form. Bruder et al.
(2020) uses multi-fidelity methods and Gaussian process regression models to
efficiently solve the SIP. Tran and Wildey (2021) focuses on a materials
science application, and also augments the BJW approach with a regression
model based on Gaussian processes to decrease the computational expense.
Finally, focusing on problems yielding large amounts of time-series data,
Mattis et al. (2022) learn the parameter-to-observable (QoI) map from data,
thus allowing for specification of the observable probability distribution.
Learning this map may rely on clustering the data and creating a partition on
the parameter-space. Once this is completed through the “Learning Uncertain
Quantities” framework, then the SIP can be solved as in previous works above.
### 5.1 Example: Linear Transformation of a Multivariate Gaussian Vector
Here we provide an analytic solution to a problem that was partly solved by
Butler et al. (2020a). Let
$\boldsymbol{g}(\boldsymbol{\theta})\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{A}\boldsymbol{\theta}$
so that $\boldsymbol{Y}=\boldsymbol{A}\boldsymbol{\Theta}$. Suppose that
$\boldsymbol{Y}\sim N_{q}(\boldsymbol{\mu}_{y},\boldsymbol{\Sigma}_{y})$ and
an initial density for $\boldsymbol{\Theta}$ is also multivariate Gaussian:
$\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})\sim
N_{p}(\boldsymbol{\mu}_{\theta},\boldsymbol{\Sigma}_{\theta})$. The
pushforward of $\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}$ under
$\boldsymbol{g}$ has distribution $\boldsymbol{\Gamma}\sim
N_{q}(\boldsymbol{A}\boldsymbol{\mu}_{\theta},\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{\top})$.
Using these three given densities, the BJW solution (12) is
$\displaystyle f^{\text{BJW}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$
$\displaystyle\propto\exp\left\\{-(\boldsymbol{\theta}-\boldsymbol{\mu}_{\theta})^{\top}\boldsymbol{\Sigma}_{\theta}^{-1}(\boldsymbol{\theta}-\boldsymbol{\mu}_{\theta})/2\right\\}$
$\displaystyle\ \ \cdot\
\frac{\exp\left\\{-(\boldsymbol{A}\boldsymbol{\theta}-\boldsymbol{\mu}_{y})^{\top}\boldsymbol{\Sigma}_{y}^{-1}(\boldsymbol{A}\boldsymbol{\theta}-\boldsymbol{\mu}_{y})/2\right\\}}{\exp\left\\{-(\boldsymbol{A}\boldsymbol{\theta}-\boldsymbol{A}\boldsymbol{\mu}_{\theta})^{\top}(\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{\top})^{-1}(\boldsymbol{A}\boldsymbol{\theta}-\boldsymbol{A}\boldsymbol{\mu}_{\theta})/2\right\\}}$
After expanding, combining terms, and completing the quadratic form, it is
seen that the answer must be
$\displaystyle f^{\text{BJW}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$
$\displaystyle=N_{p}(\wtilde{\boldsymbol{\mu}},\wtilde{\boldsymbol{\Sigma}})$
$\displaystyle\wtilde{\boldsymbol{\mu}}$
$\displaystyle=\wtilde{\boldsymbol{\Sigma}}\left(\boldsymbol{A}^{\top}\boldsymbol{\Sigma}_{y}^{-1}\boldsymbol{\mu}_{y}-\boldsymbol{A}^{\top}(\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{\top})^{-1}\boldsymbol{A}\boldsymbol{\mu}_{\theta}+\boldsymbol{\Sigma}_{\theta}^{-1}\boldsymbol{\mu}_{\theta}\right)$ (13)
$\displaystyle=\wtilde{\boldsymbol{\Sigma}}\boldsymbol{A}^{\top}\boldsymbol{\Sigma}_{y}^{-1}(\boldsymbol{\mu}_{y}-\boldsymbol{A}\boldsymbol{\mu}_{\theta})+\boldsymbol{\mu}_{\theta}$ (14)
$\displaystyle\wtilde{\boldsymbol{\Sigma}}$
$\displaystyle=\left(\boldsymbol{A}^{\top}\boldsymbol{\Sigma}_{y}^{-1}\boldsymbol{A}-\boldsymbol{A}^{\top}(\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{\top})^{-1}\boldsymbol{A}+\boldsymbol{\Sigma}_{\theta}^{-1}\right)^{-1}$ (15)
$\displaystyle=\boldsymbol{\Sigma}_{\theta}-\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{\top}\left\\{\left(\boldsymbol{\Sigma}_{y}^{-1}-(\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{\top})^{-1}\right)^{-1}+\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{\top}\right\\}^{-1}\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}$ (16)
Note that when $\boldsymbol{A}$ is a square invertible matrix,
$\wtilde{\boldsymbol{\Sigma}}=\boldsymbol{A}^{-1}\boldsymbol{\Sigma}_{y}\boldsymbol{A}^{-\top}$,
and $\wtilde{\boldsymbol{\mu}}=\boldsymbol{A}^{-1}\boldsymbol{\mu}_{y}$.
Otherwise, as shown in the appendix (Lemma 1),
$\boldsymbol{A}\wtilde{\boldsymbol{\mu}}=\boldsymbol{\mu}_{y}$ and
$\boldsymbol{A}\wtilde{\boldsymbol{\Sigma}}\boldsymbol{A}^{\top}=\boldsymbol{\Sigma}_{y}$,
confirming that the pushforward indeed has the distribution
$\boldsymbol{A}\boldsymbol{\Theta}\sim
N_{q}(\boldsymbol{\mu}_{y},\boldsymbol{\Sigma}_{y})$.
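As a quick numerical sanity check of the moments (13) and (15) and of the pushforward identities just stated, consider the following sketch. It is our own illustration (the helper name `bjw_gaussian` is not from any reference implementation): it evaluates the update for a random full-row-rank $\boldsymbol{A}$ and confirms that $\boldsymbol{A}\wtilde{\boldsymbol{\mu}}=\boldsymbol{\mu}_{y}$ and $\boldsymbol{A}\wtilde{\boldsymbol{\Sigma}}\boldsymbol{A}^{\top}=\boldsymbol{\Sigma}_{y}$.

```python
import numpy as np

def bjw_gaussian(A, mu0, S0, mu_y, S_y):
    """BJW update for the linear-Gaussian SIP Y = A @ Theta.

    Implements the mean (13) and covariance (15) for an initial
    distribution N(mu0, S0) and observed distribution N(mu_y, S_y).
    """
    P = A @ S0 @ A.T  # pushforward covariance A Sigma_theta A^T
    prec = (A.T @ np.linalg.solve(S_y, A)
            - A.T @ np.linalg.solve(P, A)
            + np.linalg.inv(S0))
    S_t = np.linalg.inv(prec)
    mu_t = S_t @ (A.T @ np.linalg.solve(S_y, mu_y)
                  - A.T @ np.linalg.solve(P, A @ mu0)
                  + np.linalg.solve(S0, mu0))
    return mu_t, S_t

rng = np.random.default_rng(1)
p, q = 4, 2
A = rng.normal(size=(q, p))          # full row rank almost surely
mu0, S0 = rng.normal(size=p), np.eye(p)
mu_y, S_y = rng.normal(size=q), 0.5 * np.eye(q)

mu_t, S_t = bjw_gaussian(A, mu0, S0, mu_y, S_y)
# Pushforward identities of Lemma 1: A mu_t = mu_y, A S_t A^T = S_y
print(np.allclose(A @ mu_t, mu_y), np.allclose(A @ S_t @ A.T, S_y))
```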
### 5.2 Sequential Updating(?)
With its apparent ability to update an initial distribution through (12), the
BJW solution has been compared to (and presented as an alternative to)
Bayesian inference. We show, however, that such comparisons are misleading.
Suppose that an analyst wants to compare two $q$-vectors
$\boldsymbol{Y}_{1},\boldsymbol{Y}_{2}$ of data to a model $\boldsymbol{g}$
that produces $q$ comparable outputs. The model takes $p$ inputs through the
vector $\boldsymbol{\theta}$, and the goal is to estimate these unknown
quantities. Because $\boldsymbol{g}$ is a well-defined function, it cannot
simultaneously map $\boldsymbol{\theta}$ to both $\boldsymbol{Y}_{1}$ and
$\boldsymbol{Y}_{2}$. The analyst therefore decides to use the BJW method to
update an initial distribution
$\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}$ before updating it once more
so that both pieces of information can be used. This procedure yields a final
answer
$\displaystyle\underaccent{\wtilde}{\ut
f}_{\boldsymbol{\Theta}}\big{(}\boldsymbol{\theta}\big{)}\frac{f_{\boldsymbol{Y}_{2}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}}{\underaccent{\wtilde}{\ut
f}_{\boldsymbol{\Gamma}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}}$
$\displaystyle=\left(\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}\big{(}\boldsymbol{\theta}\big{)}\frac{f_{\boldsymbol{Y}_{1}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}}{\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}}\right)\frac{f_{\boldsymbol{Y}_{2}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}}{\underaccent{\wtilde}{\ut
f}_{\boldsymbol{\Gamma}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}}$
$\displaystyle=\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}\big{(}\boldsymbol{\theta}\big{)}\frac{f_{\boldsymbol{Y}_{2}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}}{\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}}$
since $\underaccent{\wtilde}{\ut f}_{\boldsymbol{\Gamma}}\equiv
f_{\boldsymbol{Y}_{1}}$ (as $\underaccent{\wtilde}{\ut
f}_{\boldsymbol{\Theta}}$ solved the initial SIP). This is the BJW solution
using $\boldsymbol{Y}_{2}$, and as such, the final answer does not depend on
$\boldsymbol{Y}_{1}$! If done in reverse order, the final answer would not
depend upon $\boldsymbol{Y}_{2}$. In a Bayesian analysis, both observations
would be used. (Thankfully, after 2018, the terminology in the literature
related to BJW moved away from the name “consistent Bayes” and the analogy to
Bayesian inference.)
Thus sequential updating is not a defensible way to deal with _replicate_
measurements within an SIP. The next section explores a recent method that
allows for replicates.
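In the linear-Gaussian setting of Section 5.1 this collapse can be observed directly. The sketch below is our own illustration: `bjw_gaussian` evaluates the moments (13) and (15), and the observed moments assigned to $\boldsymbol{Y}_1$ and $\boldsymbol{Y}_2$ are arbitrary assumed values. Two sequential BJW updates produce exactly the same answer as a single update with $\boldsymbol{Y}_2$ alone, so the information in $\boldsymbol{Y}_1$ is forgotten.

```python
import numpy as np

def bjw_gaussian(A, mu0, S0, mu_y, S_y):
    # Linear-Gaussian BJW update, following (13) and (15)
    P = A @ S0 @ A.T
    prec = (A.T @ np.linalg.solve(S_y, A)
            - A.T @ np.linalg.solve(P, A)
            + np.linalg.inv(S0))
    S_t = np.linalg.inv(prec)
    mu_t = S_t @ (A.T @ np.linalg.solve(S_y, mu_y)
                  - A.T @ np.linalg.solve(P, A @ mu0)
                  + np.linalg.solve(S0, mu0))
    return mu_t, S_t

rng = np.random.default_rng(2)
p, q = 3, 2
A = rng.normal(size=(q, p))
mu0, S0 = rng.normal(size=p), np.eye(p)
mu1, S1y = rng.normal(size=q), 0.5 * np.eye(q)    # moments of "Y_1"
mu2, S2y = rng.normal(size=q), 0.25 * np.eye(q)   # moments of "Y_2"

# Update with Y_1, then update the result with Y_2 ...
m_a, S_a = bjw_gaussian(A, mu0, S0, mu1, S1y)
m_ab, S_ab = bjw_gaussian(A, m_a, S_a, mu2, S2y)
# ... versus a single update with Y_2 alone: identical
m_b, S_b = bjw_gaussian(A, mu0, S0, mu2, S2y)
print(np.allclose(m_ab, m_b), np.allclose(S_ab, S_b))
```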
### 5.3 The Extension of BJW to “Stochastic Maps” is Actually an Application
of BJW
In Butler et al. (2020c), the authors extend the BJW framework to so-called
“stochastic maps”—forward maps that include either embedded or additive
parameters meant to capture irreducible aleatoric uncertainty. In their terms,
if $\boldsymbol{g}(\boldsymbol{\theta})$ is a deterministic map, then the
corresponding stochastic map
$\hat{\boldsymbol{g}}(\boldsymbol{\theta},\boldsymbol{\epsilon})$ incorporates
either embedded or additive noise. In the additive case, we then have the
following system defining the SIP:
$\displaystyle\boldsymbol{Y}$
$\displaystyle=\hat{\boldsymbol{g}}(\boldsymbol{\Theta},\boldsymbol{E})\quad\quad\hat{\boldsymbol{g}}(\boldsymbol{\theta},\boldsymbol{\epsilon})=\boldsymbol{g}(\boldsymbol{\theta})+\boldsymbol{\epsilon}$
(17)
But this is just a special case of BJW. That is, there is nothing inherently
stochastic in how the new $\boldsymbol{\epsilon}$ term interacts with
$\boldsymbol{g}$; if $\boldsymbol{g}(\cdot)$ is deterministic, then so is
$\hat{\boldsymbol{g}}(\boldsymbol{\theta},\boldsymbol{\epsilon})$. It is when
the parameters are turned into random variables
($\boldsymbol{\theta}\rightarrow\boldsymbol{\Theta}$ and
$\boldsymbol{\epsilon}\rightarrow\boldsymbol{E}$) that the response becomes
stochastic, and this has nothing to do with $\hat{\boldsymbol{g}}$.
The addition of these $\boldsymbol{\epsilon}$ ($\boldsymbol{E}$) parameters
simply accommodates the case when there are more observables than original
parameters of interest $\boldsymbol{\theta}$. As such, this redefined forward
map allows the BJW solution to be used when there are replicate measurements
on the same observable. The BJW solution for the system above gives the
“discrepancy” tradeoff between the $p+q$ random parameters
$\boldsymbol{\Theta}$ and $\boldsymbol{E}$ such that the marginal propagated
density $f_{\boldsymbol{Y}}$ is unaffected.
#### 5.3.1 Gaussian Mean Estimation as “Stochastic Map” Inversion
To see the difference between the “stochastic map” BJW solution and a
statistical solution, consider the simplest possible scenario where the goal
is to estimate a single parameter of interest $\mu$ ($q=1$) from $n$
independent replicate samples. The statistical model is
$(Y_{i}|\mu)=\mu+E_{i}$ for
$E_{i}\stackrel{{\scriptstyle\text{iid}}}{{\sim}}N(0,\sigma^{2}_{\epsilon})$
($i=1,\ldots,n$), or equivalently, $\boldsymbol{Y}\sim
N_{n}(\mu\boldsymbol{1}_{n},\sigma^{2}_{\epsilon}I_{n})$ for
$\sigma^{2}_{\epsilon}$ given. Within the BJW framework, the map is
$\hat{\boldsymbol{g}}(\mu,\boldsymbol{\epsilon})=\mu\boldsymbol{1}_{n}+\boldsymbol{\epsilon}$
so that $\boldsymbol{Y}=M\boldsymbol{1}_{n}+\boldsymbol{E}$ (where $M$ is the
random variable form of $\mu$).
The stochastic BJW solution comes from a system of $n$ equations in $n+1$
unknowns. Note that if any $E_{i}$ is fixed at a constant value or given a
known distribution, the system becomes square and $M$ is then determined by a
single observable. For example, if $E_{1}$ is set, then $Y_{1}$ completely
determines the density for $M=Y_{1}-E_{1}$; adding more observables
$Y_{n+1},\ldots$ only provides information on $\epsilon_{n+1},\ldots$. In
other words, the information on $M$ is contained within a single, arbitrary
$Y_{1}$ and does not increase as more information is gathered. However, if no
$E_{i}$ is set, then the BJW solution for the $n+1$ parameters instead
produces a distribution for $M$ that converges to a point mass.
The forward map is
$\boldsymbol{g}(\boldsymbol{\theta})\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{A}\boldsymbol{\theta}$
where
$\boldsymbol{A}\stackrel{{\scriptstyle\text{def}}}{{=}}[\boldsymbol{1}_{n}\vdots\boldsymbol{I}_{n}]$
and
$\boldsymbol{\theta}_{n}\stackrel{{\scriptstyle\text{def}}}{{=}}(\mu,\epsilon_{1},\ldots,\epsilon_{n})^{\top}$.
The observable distribution is $\boldsymbol{Y}\sim
N_{n}(\boldsymbol{\mu}_{y},\boldsymbol{\Sigma}_{y})$ with
$\boldsymbol{\Sigma}_{y}=\sigma^{2}_{y}\boldsymbol{I}_{n}$. When the initial
distribution is $\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}\equiv
N_{n+1}(\boldsymbol{\mu}_{\theta},\boldsymbol{\Sigma}_{\theta})$ with
$\boldsymbol{\mu}_{\theta}^{\top}=(\mu_{0},\boldsymbol{0}_{n}^{\top})$ and
$\boldsymbol{\Sigma}_{\theta}=\text{diag}(\sigma_{0}^{2},\sigma^{2}_{\epsilon},\ldots,\sigma^{2}_{\epsilon})$,
the BJW solution is
$N_{n+1}(\wtilde{\boldsymbol{\mu}},\wtilde{\boldsymbol{\Sigma}})$ with mean
and covariance given by (13, 15). We derive the closed-form moments in Section
B.2 of the appendix. The matrix algebra is tedious but reveals some
interesting features of the BJW solution, namely that the mean and variance
take the form
$\displaystyle\wtilde{\boldsymbol{\mu}}$
$\displaystyle=\begin{bmatrix}\overline{\mu}_{y}+\mathcal{O}(n^{-1})\\\
(\boldsymbol{\mu}_{y}-\overline{\mu}_{y}\boldsymbol{1}_{n})+\mathcal{O}(n^{-1})\boldsymbol{1}_{n}\end{bmatrix}\quad\quad\wtilde{\boldsymbol{\Sigma}}=\begin{bmatrix}\mathcal{O}(n^{-1})&-\mathcal{O}(n^{-1})\boldsymbol{1}_{n}^{\top}\\\
-\mathcal{O}(n^{-1})\boldsymbol{1}_{n}&\sigma^{2}_{y}\boldsymbol{I}_{n}-\mathcal{O}(n^{-1})\boldsymbol{J}_{n}\end{bmatrix}\
.$ (18)
The marginal variance of $M$ decreases like $\mathcal{O}(n^{-1})$, implying
that the distribution of $M$ concentrates on $\overline{\mu}_{y}$, a simple
average of the means of $\boldsymbol{Y}$. Marginally, $\boldsymbol{E}$
converges to a distribution with covariance $\sigma^{2}_{y}\boldsymbol{I}_{n}$
that is centered on a residual vector. Thus all the irreducible uncertainty is
contained within $\boldsymbol{E}$, but not within the parameter of interest,
contradicting the intended purpose of the BJW approach.
For any $n$, $\wtilde{\boldsymbol{\Sigma}}$ is a dense matrix, meaning that
$M$ and $\boldsymbol{E}$ covary and all elements of $\boldsymbol{E}$ covary
with one another. Furthermore, the mean of $M$ is a linear combination of
$\overline{\mu}_{y}$ and $\mu_{0}$. While this is reminiscent of Bayesian
inference, it should be noted that the terms in the BJW solution are
considerably more complicated and thus harder to interpret. In addition, a
statistical solution involves simple scalar arithmetic while the BJW solution
comes from inverting and multiplying potentially large matrices.
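The qualitative claims of (18) are easy to reproduce numerically. The sketch below is our own illustration, with $\mu_{0}=0$, $\sigma_{0}^{2}=\sigma_{\epsilon}^{2}=1$, and $\sigma_{y}^{2}=0.25$ taken as assumed values: it builds $\boldsymbol{A}=[\boldsymbol{1}_{n}\vdots\boldsymbol{I}_{n}]$, evaluates the BJW moments (13) and (15), and shows the marginal variance of $M$ shrinking like $n^{-1}$ while the marginal mean tracks $\overline{\mu}_{y}$.

```python
import numpy as np

def bjw_gaussian(A, mu0, S0, mu_y, S_y):
    # Linear-Gaussian BJW update, following (13) and (15)
    P = A @ S0 @ A.T
    prec = (A.T @ np.linalg.solve(S_y, A)
            - A.T @ np.linalg.solve(P, A)
            + np.linalg.inv(S0))
    S_t = np.linalg.inv(prec)
    mu_t = S_t @ (A.T @ np.linalg.solve(S_y, mu_y)
                  - A.T @ np.linalg.solve(P, A @ mu0)
                  + np.linalg.solve(S0, mu0))
    return mu_t, S_t

rng = np.random.default_rng(3)
var_M = {}
for n in (5, 50):
    A = np.hstack([np.ones((n, 1)), np.eye(n)])  # [1_n | I_n]
    m0 = np.zeros(n + 1)                         # (mu_0, 0_n) with mu_0 = 0
    S0 = np.eye(n + 1)                           # diag(sig_0^2, sig_eps^2 I), all ones
    mu_y = rng.normal(size=n)
    m, S = bjw_gaussian(A, m0, S0, mu_y, 0.25 * np.eye(n))
    var_M[n] = S[0, 0]
    # Marginal mean of M tracks the simple average of mu_y, per (18)
    print(n, m[0], mu_y.mean(), S[0, 0])
```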
### 5.4 The BJW Solution to the SIP is a Change-of-Variables
Owing to its generality, the use of measure theory in describing and solving
problems is often preferable. However, when one restricts attention in
practice to absolutely continuous random variables (A2), the language of
measure theory can obscure concepts that are quite simple in nature. In the
next proposition we show that the BJW density can be derived in a
straightforward manner under Assumption (A1). The second half of its proof
relies upon the notion of _auxiliary variables_. For some choice (but
typically, infinitely many choices) of $p-q$ auxiliary variables
$\boldsymbol{Y}^{c}$ via maps
$\boldsymbol{g}^{c}(\cdot)\stackrel{{\scriptstyle\text{def}}}{{=}}\left(g_{q+1}(\cdot),\ldots,g_{p}(\cdot)\right)$,
the following augmented system is locally invertible:
$\displaystyle\boldsymbol{\Theta}=\left[\begin{array}[]{l}\Theta_{1}\\\
\vdots\\\ \Theta_{q}\\\\[3.87498pt] \Theta_{q+1}\\\\[3.87498pt] \vdots\\\
\Theta_{p}\end{array}\right]\quad\begin{array}[]{c}\boldsymbol{g}_{\text{aug}}\\\
\longmapsto\end{array}\quad\left[\begin{array}[]{lcl}Y_{1}&=&g_{1}(\boldsymbol{\Theta})\\\
\vdots&&\\\ Y_{q}&=&g_{q}(\boldsymbol{\Theta})\\\\[3.87498pt] \hline\cr
Y_{q+1}&\stackrel{{\scriptstyle\text{def}}}{{=}}&g_{q+1}(\boldsymbol{\Theta})\\\
\vdots&&\\\
Y_{p}&\stackrel{{\scriptstyle\text{def}}}{{=}}&g_{p}(\boldsymbol{\Theta})\end{array}\right]=\begin{bmatrix}\boldsymbol{Y}\\\
\boldsymbol{Y}^{c}\end{bmatrix}\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{Y}_{\text{aug}}\
.$ (33)
For the time being we shall take the existence of
$\boldsymbol{g}_{\text{aug}}$ defining the locally invertible system as given,
but more details will be given in Section 7.
###### Proposition 4.
Under assumption (A1) the exact BJW solution can be derived from a CoV
solution for $p\geq q$.
###### Proof.
Fix some initial $p$-dimensional density
$\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$.
First consider the case that $p=q$. Using the same reasoning leading to (7),
the initial density can be written in terms of its pushforward on
$\mathcal{P}\setminus\mathcal{P}_{0}$, regardless of the global invertibility
of $\boldsymbol{g}$. Equivalently, the Jacobian determinant is
$\displaystyle\Big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}}\Big{|}$
$\displaystyle=\frac{\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})}{\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}}(\boldsymbol{g}(\boldsymbol{\theta}))}$
(34)
wherever $\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}>0$ (and hence
$\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}}>0$) on
$\mathcal{P}\setminus\mathcal{P}_{0}$. Now we have
$\displaystyle
f_{\boldsymbol{\Theta}}^{\text{BJW}}\big{(}\boldsymbol{\theta}\big{)}$
$\displaystyle=f_{\boldsymbol{Y}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}\frac{\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}\big{(}\boldsymbol{\theta}\big{)}}{\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}}=f_{\boldsymbol{Y}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}\
\Big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}}\Big{|}$
which is the density corresponding to a CoV, as desired. The explicit mixture
of CoVs from $\mathcal{Q}$ to $\mathcal{P}$ can be constructed exactly as in
the proof of Proposition 2. Note that in this $p=q$ case,
$\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}$ did not affect the solution
to the SIP since
$\big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}}\big{|}$ is
invariant to this choice.
The case of $p>q$ follows exactly the same reasoning as above but relies on
auxiliary variables, as per (33). Additionally, it is slightly more
transparent to start with $f^{\text{CoV}}$, and show that for a certain
choice, it becomes the exact BJW solution. Replacing “$\boldsymbol{g}$”,
“$\boldsymbol{Y}$”, and “$\boldsymbol{\Gamma}$” above with
“$\boldsymbol{g}_{\text{aug}}$”, “$\boldsymbol{Y}_{\text{aug}}$”, and
“$\boldsymbol{\Gamma}_{\text{aug}}$”, we have by previous reasoning that
$\displaystyle\Big{|}\frac{\partial\boldsymbol{g}_{\text{aug}}}{\partial\boldsymbol{\theta}}\Big{|}$
$\displaystyle=\frac{\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})}{\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}_{\text{aug}}}(\boldsymbol{g}_{\text{aug}}(\boldsymbol{\theta}))}$
$\displaystyle f^{\text{CoV}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$
$\displaystyle=f_{\boldsymbol{Y}_{\text{aug}}}\big{(}\boldsymbol{g}_{\text{aug}}(\boldsymbol{\theta})\big{)}\
\Big{|}\frac{\partial\boldsymbol{g}_{\text{aug}}}{\partial\boldsymbol{\theta}}\Big{|}$
$\displaystyle=\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})\
\frac{f_{\boldsymbol{Y}_{\text{aug}}}\big{(}\boldsymbol{g}_{\text{aug}}(\boldsymbol{\theta})\big{)}}{\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}_{\text{aug}}}\big{(}\boldsymbol{g}_{\text{aug}}(\boldsymbol{\theta})\big{)}}$
$\displaystyle=\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})\frac{f_{\boldsymbol{Y}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}}{\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}}\big{(}\boldsymbol{g}(\boldsymbol{\theta})\big{)}}\
\frac{f_{\boldsymbol{Y}^{c}|\boldsymbol{Y}}\big{(}\boldsymbol{g}^{c}(\boldsymbol{\theta})\
|\
\boldsymbol{y}=\boldsymbol{g}(\boldsymbol{\theta})\big{)}}{\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}^{c}|\boldsymbol{\Gamma}}\big{(}\boldsymbol{g}^{c}(\boldsymbol{\theta})\
|\ \boldsymbol{\gamma}=\boldsymbol{g}(\boldsymbol{\theta})\big{)}}\ ,$
where
$\boldsymbol{\Gamma}_{\text{aug}}=\\{\boldsymbol{\Gamma},\boldsymbol{\Gamma}^{c}\\}$
and $\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}_{\text{aug}}}$ is the
pushforward of $\underaccent{\wtilde}{f}$ through
$\boldsymbol{g}_{\text{aug}}$. Setting
$f_{\boldsymbol{Y}^{c}|\boldsymbol{Y}}\equiv\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}^{c}|\boldsymbol{\Gamma}}$
leads to cancellation that establishes that
$f^{\text{BJW}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$ is indeed
derivable from a CoV solution. ∎
In the proof of the proposition above, Bayes’ Theorem was not invoked and the
disintegration theorem did not need to be called upon explicitly. All that was
required was that the joint density of unobserved and observed variables
$\boldsymbol{Y}_{\text{aug}}=\\{\boldsymbol{Y},\boldsymbol{Y}^{c}\\}$ could be
factored as a conditional times a marginal density. Of course this fact is
directly related to the disintegration theorem, but this formulation will
nonetheless be easier for most audiences to digest.
The statement at the end of the proof,
$f_{\boldsymbol{Y}^{c}|\boldsymbol{Y}}\stackrel{{\scriptstyle\text{set}}}{{=}}\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}^{c}|\boldsymbol{\Gamma}}$,
may seem mysterious, but actually provides the density to assign to the set of
auxiliary variables within a CoV exercise to match the BJW solution. First
choose a complementary map $\boldsymbol{g}^{c}(\cdot)$ that yields a full-rank
augmented system (a.e.). Under the augmented map
$\boldsymbol{g}_{\text{aug}}$, the initial density
$\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}$ pushes forward to the
$p$-dimensional joint density
$\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma},\boldsymbol{\Gamma}^{c}}=\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}^{c}|\boldsymbol{\Gamma}}\cdot\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}}$.
The BJW solution to the SIP is thus a density obtained through a linear
combination of CoVs from $\mathcal{Q}$ to $\mathcal{P}$ with
$\boldsymbol{Y}_{\text{aug}}\sim
f_{\boldsymbol{Y}}(\boldsymbol{y})\cdot\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}^{c}|\boldsymbol{\Gamma}}(\boldsymbol{y}^{c}|\boldsymbol{y})$.
In theory, one could sample from
$f^{\text{BJW}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$ using Monte Carlo
reasoning akin to Algorithm 1. One would first generate
$\boldsymbol{y}^{*}\sim f_{\boldsymbol{Y}}$, then use this value to obtain a
conditional draw
$\boldsymbol{y}^{c*}\sim\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}^{c}|\boldsymbol{\Gamma}=\boldsymbol{y}^{*}}$
to form
$\boldsymbol{y}_{\text{aug}}^{*}\stackrel{{\scriptstyle\text{def}}}{{=}}(\boldsymbol{y}^{*},\boldsymbol{y}^{c*})$.
Finally, the solution to
$\boldsymbol{y}_{\text{aug}}^{*}-\boldsymbol{g}_{\text{aug}}\big{(}\boldsymbol{\theta}^{*}\big{)}=\boldsymbol{0}$
would give a realization $\boldsymbol{\theta}^{*}$ of the BJW solution.
However, this is hypothetical since in practice one does not have the ability
to simulate from the unknown conditional density
$\underaccent{\wtilde}{f}_{\boldsymbol{\Gamma}^{c}|\boldsymbol{\Gamma}}$.
## 6 Other Related Work
In order to complete our literature review of SIPs, we need to discuss two
more recent works: Swigon et al. (2019) and Uy and Grigoriu (2019).
Swigon et al. (2019) demonstrates the critical role of the Jacobian
determinant in two types of parameter estimation problems. In the first, the
data is given by a nonlinear transformation of the parameters, i.e.,
$\boldsymbol{Y}=\boldsymbol{g}(\boldsymbol{\Theta})$, and the authors call
this situation (i.e., our SIP) a “random parameter model”. Here the answer is
simply given by the CoV, similar to our conclusion for $p=q$, and as such
depends on the Jacobian determinant. In the second type of parameter
estimation problem, the data is corrupted by error, i.e.,
$\boldsymbol{Y}=\boldsymbol{g}(\boldsymbol{\theta})+\boldsymbol{E}$, for some
fixed $\boldsymbol{\theta}$, which the authors called a “random measurement
error model” (and we call a statistical IP). In this context the Jacobian
determinant appears in the construction of a default prior for
$\boldsymbol{\Theta}$. The authors take both these modeling approaches as
worthy of consideration, and while offering some discussion akin to our
Section 8, do not tie their results into the work stemming from BBE and BJW.
Uy and Grigoriu (2019) restricts attention to the case that $p>q$ and approaches
SIPs from a different angle: given that the problem is underdetermined, what
additional knowledge about $\boldsymbol{\Theta}$ is required to pose and solve
SIPs? (By contrast, traditional IPs can use “extra” information to regularize
solutions in overdetermined systems, but here it represents necessary missing
information for the underdetermined system.) In short, one can either specify
moments of $\boldsymbol{\Theta}$ or its distributional family, similar to
methods based on the principle of maximum entropy. Interestingly, if
additional information is provided on the family of distributions to which
$\boldsymbol{\Theta}$ belongs, then the new inverse problem can be solved
using standard Bayesian methods (see Section 3.2.2). For each option, the
authors explore two possibilities about the data: (1) pdf information about
$\boldsymbol{Y}=\boldsymbol{g}(\boldsymbol{\Theta})$ is known, or (2) samples
from $\boldsymbol{Y}$ are given. While mainly focusing on the BBE solutions,
Uy and Grigoriu (2019) stresses the fundamental underdetermined nature of SIPs
(previously noted in Breidt et al. (2011)). We will take this a little further
in Section 8 (IV).
Uy and Grigoriu (2019) shows that the uniform contour ansatz can lead to
arbitrarily bad approximations of the true density, and in some cases cannot
recover the true density at all. Moreover, the authors specify this additional
information in order to find the true distribution of $\boldsymbol{\Theta}$,
so that it can be queried in other contexts, and not just to match
$\boldsymbol{g}(\boldsymbol{\Theta})$. This goal causes a philosophical breach
from Breidt et al. (2011) and Butler et al. (2018a), in which different data
points may lead to different solutions, even within the same problem.
## 7 All Solutions to SIPs are Changes-of-Variables
Our main proposition follows from the CoV theorem and contains elements of
Proposition 4. We give the proof here because it is both constructive and
instructive.
###### Proposition 5.
Let $f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$ be any exact solution to
the SIP of (3), and suppose the forward map $\boldsymbol{g}$ is as described
in Assumption (A1). Then $f$ is derivable from a CoV (a.e.) for at least one
choice of auxiliary variables.
###### Proof.
The $p=q$ case is covered in Proposition 2, so we now assume $p>q$. After
constructing auxiliary variables, the proof follows in much the same fashion.
Under (A1) we have that the component functions of $\boldsymbol{g}$ (i.e.,
$g_{1},\ldots,g_{q}$) produce a Jacobian whose left $q\times q$ block
$\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}_{1:q}}$
is invertible on $\mathcal{P}_{0}$. Now observe that there exist maps
$g_{q+1},\ldots,g_{p}$ from $\left(\mathcal{P}\cap\mathbb{R}^{p-q}\right)$ to
$\mathbb{R}^{p-q}$ such that the corresponding lower-right $(p-q)\times(p-q)$
sub-Jacobian is invertible; again, let $\boldsymbol{g}_{\text{aug}}$ denote
the full $p$-dimensional function. Any choice of these $(p-q)$ functions
results in the definition of auxiliary variables
$\boldsymbol{Y}^{c}\stackrel{{\scriptstyle\text{def}}}{{=}}(Y_{q+1},\ldots,Y_{p})^{\top}$.
The simplest augmented system is one that uses the identity maps to define the
components of $\boldsymbol{Y}^{c}$
$\displaystyle\left[\begin{array}[]{l}\Theta_{1}\\\ \vdots\\\
\Theta_{q}\\\\[3.87498pt] \Theta_{q+1}\\\\[3.87498pt] \vdots\\\
\Theta_{p}\end{array}\right]\begin{array}[]{c}\boldsymbol{g}_{\text{aug}}\\\
\longmapsto\end{array}\left[\begin{array}[]{lcl}Y_{1}&=&g_{1}(\boldsymbol{\Theta})\\\
\vdots&&\\\ Y_{q}&=&g_{q}(\boldsymbol{\Theta})\\\\[3.87498pt] \hline\cr
Y_{q+1}&\stackrel{{\scriptstyle\text{def}}}{{=}}&g_{q+1}(\boldsymbol{\Theta})\stackrel{{\scriptstyle\text{def}}}{{=}}\Theta_{q+1}\\\
\vdots&&\\\
Y_{p}&\stackrel{{\scriptstyle\text{def}}}{{=}}&g_{p}(\boldsymbol{\Theta})\hskip
11.0pt\stackrel{{\scriptstyle\text{def}}}{{=}}\Theta_{p}\end{array}\right]$
(49)
which produces Jacobian determinant
$\displaystyle\left|\frac{\partial\boldsymbol{g}_{\text{aug}}}{\partial\boldsymbol{\theta}}\right|\quad$
$\displaystyle=\quad\text{det}\begin{bmatrix}\frac{\partial
g_{1}}{\partial\theta_{1}}&\cdots&\frac{\partial
g_{1}}{\partial\theta_{q}}&\frac{\partial
g_{1}}{\partial\theta_{q+1}}&\cdots&\frac{\partial
g_{1}}{\partial\theta_{p}}\\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\\
\frac{\partial g_{q}}{\partial\theta_{1}}&\cdots&\frac{\partial
g_{q}}{\partial\theta_{q}}&\frac{\partial
g_{q}}{\partial\theta_{q+1}}&\cdots&\frac{\partial
g_{q}}{\partial\theta_{p}}\\\\[3.87498pt] \hline\cr
0&\cdots&0&1&\boldsymbol{0}^{\top}&0\\\
\vdots&\vdots&\vdots&\boldsymbol{0}&\ddots&\boldsymbol{0}\\\
0&\cdots&0&0&\boldsymbol{0}^{\top}&1\\\
\end{bmatrix}\quad=\quad\left|\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}_{1:q}}\right|\
.$ (50)
This augmented system is invertible due to the fact that the upper-left
$(q\times q)$ block
$\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}_{1:q}}$ is itself
invertible. The pushforward of $f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$
is thus possible by the CoV theorem. Furthermore, because
$f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$ is assumed to solve the SIP,
its pushforward is a $p$-dimensional distribution
$f_{\boldsymbol{Y}_{\text{aug}}}$ whose marginal is
$f_{\boldsymbol{Y}}(\boldsymbol{y})$ for the first $q$ entries. In fact,
$f_{\boldsymbol{Y}_{\text{aug}}}=f_{\boldsymbol{Y}}\cdot
f_{\boldsymbol{Y}^{c}|\boldsymbol{Y}}$ with
$\displaystyle
f_{\boldsymbol{Y}^{c}|\boldsymbol{Y}}(\boldsymbol{y}^{c}|\boldsymbol{y})\stackrel{{\scriptstyle\text{def}}}{{=}}f_{\boldsymbol{\Theta}_{(q+1):p}|\boldsymbol{\Theta}_{1:q}}(\boldsymbol{y}^{c}|\boldsymbol{y})$
under the identity maps.
Going the other direction, the reasoning in the proof of Proposition 2 can
again be used. Specifically, Assumption (A1) guarantees local inverses
$\boldsymbol{h}_{1},\ldots,\boldsymbol{h}_{d}$ of $\boldsymbol{g}$ defined on
the appropriate sets, and we have that the posited solution can be written as
a particular $d$-combination of CoV solutions from the augmented range to
$\mathcal{P}\setminus\mathcal{P}_{0}$ except for on a set of measure zero. The
presumed solution was then actually a CoV using a particular choice of
auxiliary variables $\boldsymbol{Y}^{c}$.
Thus we have shown that no matter how one has obtained an exact solution to
the SIP, it could have actually been derived from CoV solutions for any choice
of auxiliary variables to make an augmented forward map
$\boldsymbol{g}_{\text{aug}}$ locally invertible. ∎
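The block-triangular determinant identity (50) is easy to check numerically. The sketch below is our own toy illustration: $\boldsymbol{g}$ is an arbitrary smooth map from $\mathbb{R}^{3}$ to $\mathbb{R}^{2}$ (not taken from the text), augmented with an identity map as in (49), and a finite-difference Jacobian confirms that $\big|\frac{\partial\boldsymbol{g}_{\text{aug}}}{\partial\boldsymbol{\theta}}\big|$ equals the determinant of the upper-left block $\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}_{1:q}}$.

```python
import numpy as np

# Toy smooth forward map g: R^3 -> R^2 (our own choice, p = 3, q = 2)
def g(th):
    return np.array([th[0] * th[1] + th[2],
                     np.sin(th[0]) + th[2] ** 2])

# Identity-augmented map as in (49): the auxiliary output is Y_3 := theta_3
def g_aug(th):
    return np.concatenate([g(th), th[2:]])

def jacobian(f, th, h=1e-6):
    # Central finite-difference approximation to the Jacobian of f at th
    th = np.asarray(th, dtype=float)
    cols = []
    for i in range(th.size):
        e = np.zeros_like(th)
        e[i] = h
        cols.append((f(th + e) - f(th - e)) / (2 * h))
    return np.column_stack(cols)

th = np.array([0.3, -1.2, 0.7])
J_aug = jacobian(g_aug, th)        # full 3x3 Jacobian of the augmented map
J_left = jacobian(g, th)[:, :2]    # upper-left 2x2 block, dg/d(theta_{1:2})
print(np.linalg.det(J_aug), np.linalg.det(J_left))  # equal, per (50)
```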
Some readers will recognize the augmented system within the proof as that used
in the standard constructive proof of the Implicit Function Theorem (given the
Inverse Function Theorem); see, e.g., Lee (2013), pg. 661. This auxiliary
variable strategy is exactly what is taught in a first course on mathematical
statistics. For example, to find the distribution of the product $Y$ of two
random variables $\Theta_{1}$ and $\Theta_{2}$, one may augment the one-
dimensional system
$Y\stackrel{{\scriptstyle\text{def}}}{{=}}\Theta_{1}\Theta_{2}$ with the
auxiliary variable $Y^{c}\stackrel{{\scriptstyle\text{def}}}{{=}}\Theta_{2}$
(for example), obtain $f_{Y,Y^{c}}$ from the two-dimensional CoV, and
integrate the joint density to get $f_{Y}$.
The proposition above shows that the answer will be completely determined
(a.e.) from the choice of $p-q$ dimensional density
$f_{\boldsymbol{Y}^{c}|\boldsymbol{Y}}$. Intuitive solutions (9) can be viewed
as the simplest cases that use $(p-q)$ identity maps (as in the proof above),
together with $f_{\boldsymbol{Y}^{c}|\boldsymbol{Y}}\equiv
f_{\boldsymbol{\Theta}_{(q+1):p}}$ chosen by the analyst.
In order to sample from a CoV density
$f^{\text{CoV}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$, it may appear
that the Jacobian determinant
$\big{|}\frac{\partial\boldsymbol{g}_{\text{aug}}}{\partial\boldsymbol{\theta}}\big{|}$
(or at the very least
$\big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}_{1:q}}\big{|}$)
is readily available (Swigon et al., 2019). However, this is not the case.
Instead of using the explicit density to obtain random draws, one could use a
simple Monte Carlo routine similar to Algorithm 1. One would first generate
$\boldsymbol{y}^{*}\sim f_{\boldsymbol{Y}}$, then use this value to obtain a
conditional draw $\boldsymbol{y}^{c*}\sim
f_{\boldsymbol{Y}^{c}|\boldsymbol{Y}=\boldsymbol{y}^{*}}$. The vector
$\boldsymbol{y}_{\text{aug}}^{*}\stackrel{{\scriptstyle\text{def}}}{{=}}(\boldsymbol{y}^{*},\boldsymbol{y}^{c*})$
is a random draw from $f_{\boldsymbol{Y}_{\text{aug}}}$. Finally, the solution
to
$\boldsymbol{y}_{\text{aug}}^{*}-\boldsymbol{g}_{\text{aug}}\big{(}\boldsymbol{\theta}^{*}\big{)}=\boldsymbol{0}$
would give a realization $\boldsymbol{\theta}^{*}$ of the CoV solution.
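For the product map $Y\stackrel{{\scriptstyle\text{def}}}{{=}}\Theta_{1}\Theta_{2}$ with auxiliary variable $Y^{c}\stackrel{{\scriptstyle\text{def}}}{{=}}\Theta_{2}$ discussed above, this routine amounts to only a few lines. The sketch below is our own illustration: the observed density $f_{Y}$ and the analyst-chosen conditional $f_{Y^{c}|Y}$ are assumed lognormal purely for convenience (positivity avoids division by zero), and the root-finding step $\boldsymbol{y}_{\text{aug}}^{*}-\boldsymbol{g}_{\text{aug}}(\boldsymbol{\theta}^{*})=\boldsymbol{0}$ is solved in closed form.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy SIP: Y = Theta_1 * Theta_2 (p = 2, q = 1), auxiliary Y^c := Theta_2.
# f_Y and f_{Y^c|Y} below are illustrative assumptions, not from the text;
# here the conditional is chosen not to depend on y.
n = 10_000
y = rng.lognormal(mean=0.0, sigma=0.5, size=n)    # y* ~ f_Y
yc = rng.lognormal(mean=0.0, sigma=0.3, size=n)   # y^c* ~ f_{Y^c|Y}

# Invert the augmented system: theta_2 = y^c, theta_1 = y / y^c
th2 = yc
th1 = y / yc

# By construction, the pushforward of the sampled solution reproduces f_Y
print(np.allclose(th1 * th2, y))
```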
### 7.1 Example: Linear Transformation of a Multivariate Gaussian Vector
Let us return to the example of Section 5.1. Again, let
$\boldsymbol{g}(\boldsymbol{\theta})\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{A}\boldsymbol{\theta}$
so that $\boldsymbol{Y}=\boldsymbol{A}\boldsymbol{\Theta}$; the dimension of
$\boldsymbol{A}$ is $q\times p$, and this matrix is assumed to have full row
rank. Suppose that the observation distribution is $\boldsymbol{Y}\sim
N_{q}(\boldsymbol{\mu}_{y},\boldsymbol{\Sigma}_{y})$.
When $p=q$, $\boldsymbol{A}$ is invertible so the SIP is solved by
$\boldsymbol{\Theta}=\boldsymbol{A}^{-1}\boldsymbol{Y}$, and this has density
$\displaystyle
f^{\text{CoV}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})=N_{p}\Big{(}\boldsymbol{A}^{-1}\boldsymbol{\mu}_{y}\
,\ \boldsymbol{A}^{-1}\boldsymbol{\Sigma}_{y}\boldsymbol{A}^{-\top}\Big{)}\ .$
(51)
When $p>q$, $\boldsymbol{A}$ can be augmented into an invertible matrix in an
infinite number of ways. Suppose that $\boldsymbol{A}^{\perp}$ is a
$(p-q)\times p$ representation of the null space of row($\boldsymbol{A}$). The
augmented matrix
$\boldsymbol{A}_{\text{aug}}\stackrel{{\scriptstyle\text{def}}}{{=}}\begin{bmatrix}\boldsymbol{A}\\\
\boldsymbol{A}^{\perp}\end{bmatrix}$ is then invertible. Furthermore
$\boldsymbol{A}^{\perp}$ defines $p-q$ auxiliary variables
$\boldsymbol{Y}^{c}$. If the joint distribution of
$(\boldsymbol{Y},\boldsymbol{Y}^{c})$ is
$N_{p}(\boldsymbol{\mu}_{+},\boldsymbol{\Sigma}_{+})$ (making
$\boldsymbol{\mu}_{y}$ the first $q$ entries of $\boldsymbol{\mu}_{+}$ and
$\boldsymbol{\Sigma}_{y}$ the upper left $q\times q$ block of
$\boldsymbol{\Sigma}_{+}$), then the CoV solution to the SIP is
$\displaystyle f^{\text{CoV}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$
$\displaystyle=N_{p}\Big{(}\boldsymbol{A}_{\text{aug}}^{-1}\boldsymbol{\mu}_{+}\
,\
\boldsymbol{A}_{\text{aug}}^{-1}\boldsymbol{\Sigma}_{+}\boldsymbol{A}_{\text{aug}}^{-\top}\Big{)}\
.$
The density above raises the question: how does one choose a density for
_unobservable_ quantities in order to augment and close the system? We return
to this fundamental question in the next section. The issue of
underdeterminedness was observed back in Breidt et al. (2011) (pg. 1839) for
the bivariate Gaussian case ($p=2,q=1$) by considering moment conditions
(instead of auxiliary variables), but this did not curtail the investigation
or application of SIPs.
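A minimal numerical sketch of this augmentation (our own illustration; the null-space basis comes from the SVD, and the distribution placed on $\boldsymbol{Y}^{c}$ is an arbitrary analyst choice) forms $\boldsymbol{A}_{\text{aug}}$, evaluates the CoV moments, and confirms that the pushforward through $\boldsymbol{A}$ recovers $(\boldsymbol{\mu}_{y},\boldsymbol{\Sigma}_{y})$, so the CoV density indeed solves the SIP.

```python
import numpy as np

rng = np.random.default_rng(5)
p, q = 4, 2
A = rng.normal(size=(q, p))      # full row rank almost surely

# A_perp: orthonormal basis for the null space of row(A), via the SVD
_, _, Vt = np.linalg.svd(A)
A_perp = Vt[q:]                  # (p - q) x p
A_aug = np.vstack([A, A_perp])   # invertible augmented matrix

# Joint N(mu_plus, S_plus) for (Y, Y^c); the Y^c block is an illustrative
# analyst choice, taken independent of Y here
mu_y, S_y = rng.normal(size=q), 0.5 * np.eye(q)
mu_c, S_c = np.zeros(p - q), np.eye(p - q)
mu_plus = np.concatenate([mu_y, mu_c])
S_plus = np.block([[S_y, np.zeros((q, p - q))],
                   [np.zeros((p - q, q)), S_c]])

A_inv = np.linalg.inv(A_aug)
mean_cov = A_inv @ mu_plus            # CoV mean
cov_cov = A_inv @ S_plus @ A_inv.T    # CoV covariance

# Pushing the CoV solution forward through A recovers (mu_y, S_y)
print(np.allclose(A @ mean_cov, mu_y), np.allclose(A @ cov_cov @ A.T, S_y))
```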
## 8 Changes-of-Variables and Inference
In the previous section we showed that any solution to an SIP (3) can be
viewed as a CoV. The solution to the IP given in (2) is typically a matter of
statistical _inference_. What is the connection between these two solutions?
In this section we explore this question and more broadly examine the
interface of CoVs and inference. While multiple works provide some discussion
to distinguish SIP solutions from more traditional statistical
inference—Section 2 in Breidt et al. (2011); Section 7 in Butler et al.
(2018a); Section 4.3 in Butler et al. (2014); Remark 2.3 in Butler et al.
(2020c); and throughout Uy and Grigoriu (2019), especially Sections 1 and
4.2—there is still much to address, and we do so here.
In any inferential context, a change of the likelihood function $\propto
f_{\boldsymbol{Y}}(\boldsymbol{y};\boldsymbol{\theta})$ with respect to
$\boldsymbol{y}$ is obviously a CoV. In the Bayesian or Fiducial paradigms, a
change of the target distribution $f(\boldsymbol{\theta}|\boldsymbol{y})$ with
respect to $\boldsymbol{\theta}$ is a reparameterization that is also quite
conspicuously a CoV. A less obvious connection between inference and CoVs is
the very concept of Bayesian inference itself. If
$f_{\boldsymbol{\Theta}}^{\text{pri}}(\boldsymbol{\theta})$ is a proper prior
distribution, and
$f_{\boldsymbol{\Theta}}^{\text{post}}(\boldsymbol{\theta})\propto
f(\boldsymbol{y}|\boldsymbol{\theta})\cdot
f_{\boldsymbol{\Theta}}^{\text{pri}}(\boldsymbol{\theta})$ denotes the
posterior distribution, there is some “forward map” $\boldsymbol{g}_{T}$ that
pushes prior forward to posterior. In fact this is the motivation behind
_transport maps_ where, in an inference setting, the goal is to learn exactly
that function $\boldsymbol{g}_{T}$. This CoV is however entirely distinct from
the SIP solutions discussed here: with transport maps, the function
$\boldsymbol{g}_{T}$ maps _from parameter to parameter_ space, not from
parameter to observable space (Marzouk et al., 2016; Baptista et al., 2021).
The solutions to (2) and (3) can sometimes agree, but as we will argue, this
does not mean that statistical IPs and SIPs are analogous notions. We will
first give examples where Bayesian posterior and Generalized Fiducial (GF)
distributions are identical to a CoV before giving five (interrelated) ways in
which SIPs are fundamentally different from inference. We close by providing
two illustrative examples.
Bayesian and GF solutions can appear to be CoVs when $p=q$. Consider again the
linear map to a Gaussian observable (Section 7.1) having known covariance
$\boldsymbol{\Sigma}_{y}$. When $p=q$ and an improper prior is chosen
$f_{\boldsymbol{\Theta}}^{\text{pri}}(\boldsymbol{\theta})\propto 1$ for
$\boldsymbol{\Theta}$, the posterior is
$f_{\boldsymbol{\Theta}}^{\text{post}}(\boldsymbol{\theta})\propto
f_{\boldsymbol{Y}}(\boldsymbol{y};\boldsymbol{A}\boldsymbol{\theta})\cdot 1$
which agrees with the CoV solution (51) after a simple rearrangement. A
similar result is also obtained in the appendix of Swigon et al. (2019). The
density above is also the uncertainty distribution in the GF paradigm (Hannig
et al., 2016, Ex. 3). In fact, when $p=q$ and $\boldsymbol{g}$ is an
invertible nonlinear map, the GF solution is
$f_{\boldsymbol{\Theta}}^{\text{GF}}(\boldsymbol{\theta})\propto
f_{\boldsymbol{Y}}(\boldsymbol{y};\boldsymbol{g}(\boldsymbol{\theta}))\big{|}\frac{\partial\boldsymbol{g}}{\partial\boldsymbol{\theta}}\big{|}$
(Hannig et al., 2016, Eqns 3,4) which is the CoV solution (5). The agreement
in these special cases is superficial, as a closer look in the first point
below will show a sharp contrast in the interpretation of the terms involved.
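This agreement is easy to check numerically. The sketch below is our illustration, not part of the original text; the map $\boldsymbol{A}$, covariance, and test points are arbitrary. It confirms that the unnormalized flat-prior posterior and the CoV density differ only by the constant $-\log|\det\boldsymbol{A}|$, the Jacobian factor that normalizes the posterior:

```python
import numpy as np

def mvn_logpdf(x, mean, cov):
    """Log-density of a multivariate normal, in plain NumPy."""
    d = x - mean
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(x) * np.log(2 * np.pi) + logdet + d @ np.linalg.solve(cov, d))

rng = np.random.default_rng(0)
p = q = 3
A = rng.normal(size=(q, p))                       # invertible with probability 1
Sigma_y = 0.5 * np.eye(q) + 0.1 * np.ones((q, q))  # positive definite
y_obs = rng.normal(size=q)

# CoV solution: Theta = A^{-1} Y with Y ~ N(y_obs, Sigma_y)
Ainv = np.linalg.inv(A)
mean_cov, cov_cov = Ainv @ y_obs, Ainv @ Sigma_y @ Ainv.T

# Flat-prior posterior (unnormalized): f_Y(y_obs; A theta)
thetas = rng.normal(size=(5, p))
log_post = np.array([mvn_logpdf(y_obs, A @ t, Sigma_y) for t in thetas])
log_cov = np.array([mvn_logpdf(t, mean_cov, cov_cov) for t in thetas])

diff = log_post - log_cov                          # constant = -log|det A|
assert np.allclose(diff, diff[0])
assert np.isclose(diff[0], -np.log(abs(np.linalg.det(A))))
```

The constant offset arises because the Gaussian density is symmetric in its argument and its mean, which is exactly the superficial coincidence discussed above.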
(I) In SIPs, observables are _populations_ with given parameters. It is in the
application of the CoV methodology that one can clearly see the difference
between SIPs and any inferential framework based upon a likelihood function.
In an inferential context, after the data is observed, $\boldsymbol{Y}$
becomes $\boldsymbol{y}_{\text{obs}}$ and the data pdf
$f_{\boldsymbol{Y}}(\boldsymbol{y};\boldsymbol{\theta})$ will yield the
likelihood $L(\boldsymbol{\theta};\boldsymbol{y}_{\text{obs}})$. The
likelihood is a function of $\boldsymbol{\theta}$ and indexed by the data
$\boldsymbol{y}_{\text{obs}}$. In the Gaussian examples above, the _unknown_
mean of $\boldsymbol{Y}$ features the parameters:
$\boldsymbol{A}\boldsymbol{\theta}$ or $\boldsymbol{g}(\boldsymbol{\theta})$.
On the other hand, the CoV solution is based upon the density for the
observables $f_{\boldsymbol{Y}}(\boldsymbol{y})$ which is actually
$f_{\boldsymbol{Y}}(\boldsymbol{y};\boldsymbol{y}_{\text{obs}})$ where the
observed “data” actually operate as _known_ population parameters! This
density has both $\boldsymbol{y}$ and $\boldsymbol{y}_{\text{obs}}$ as
arguments, and the solution will have the $\boldsymbol{y}$ argument replaced
with the forward map:
$f_{\boldsymbol{Y}}(\boldsymbol{y}=\boldsymbol{g}(\boldsymbol{\theta});\
\boldsymbol{y}_{\text{obs}})$. The agreement of the Bayesian/GF and CoV
solutions for the $p=q$ cases above is due to the symmetry of the Gaussian
density with respect to its $\boldsymbol{y}$ and mean arguments. Moreover,
SIPs require all such observable population parameters to be given: as seen
above, SIPs involving Gaussian observables require the covariance
$\boldsymbol{\Sigma}_{y}$ to be given. This is quite different from
statistical IPs where quite often covariances are parameters to estimate.
SIP solutions are not inference because they essentially treat everything on a
“population” level to begin with: the observables _are not a realization_ from
an unknown population density, they _are the population_ with a known density.
In the case of inference, the data model
$f_{\boldsymbol{Y}}(\boldsymbol{y};\boldsymbol{g}(\boldsymbol{\theta}))$
features the forward map and population parameters $\boldsymbol{\theta}$ to
infer. The likelihood function gives desirable large-sample properties such as
the consistency and asymptotic normality of the maximum likelihood estimator
(Reid, 2010). In the case of SIP solutions, the density for the observables
$f_{\boldsymbol{Y}}(\boldsymbol{y};\boldsymbol{y}_{\text{obs}})$ is a
population-level description with known parameters and depends upon neither
$\boldsymbol{g}$ nor $\boldsymbol{\theta}$. As such,
$f_{\boldsymbol{Y}}(\boldsymbol{y})$ does not feature any
$\boldsymbol{g}(\boldsymbol{\theta})$ in it, so can neither be interpreted as
a conditional density nor a likelihood given the map
$\boldsymbol{g}(\boldsymbol{\theta})$. Hence there is no likelihood function
or the corresponding theory.
In fact, the phrases “increasing sample size” and “collecting more data” are
not really even coherent phrases in the world of SIPs; adding observables is
simply changing the population to invert. Despite the fact that the BJW
solution was first presented as “consistent Bayesian inference”, it too
suffers from this changing-populations issue because none of the
$f_{\boldsymbol{Y}}$’s for the new data explicitly contain any
$\boldsymbol{g}(\boldsymbol{\theta})$ terms. In contrast, within any
statistical inference procedure, each new piece of data will contribute
knowledge of $\boldsymbol{\theta}$ via $\boldsymbol{g}(\boldsymbol{\theta})$
through a joint likelihood function.
(II) SIPs require $p\geq q$. Perhaps the most obvious sign that an SIP is
different from a statistical IP is the hard stipulation that $p\geq q$, i.e.,
at least as many parameters as observables or data. On the other hand, in any
likelihood-based inference setting (Bayesian, GF, frequentist, etc.), an
analyst takes great care to avoid unregularized or saturated models and the
inherent risk of “over-fitting”. This is tied to the problem of _prediction_.
Along with obtaining an uncertainty distribution for $\boldsymbol{\Theta}$,
another common goal is prediction: inferring a new value given those values
already observed. In the Bayesian/GF frameworks, the (posterior) predictive
density is $f_{\boldsymbol{Y}^{*}|\boldsymbol{Y}}=\int
f_{\boldsymbol{Y}^{*}|\boldsymbol{\Theta},\boldsymbol{Y}}\
f_{\boldsymbol{\Theta}|\boldsymbol{Y}}d\boldsymbol{\theta}$. This density is
trustworthy when the models $f_{\boldsymbol{Y}|\boldsymbol{\Theta}}$ and
$f_{\boldsymbol{Y}^{*}|\boldsymbol{\Theta},\boldsymbol{Y}}$ are adequate.
Adequacy can be assessed by analyzing residuals and out-of-sample predictions.
When $p\geq q$, a non-Bayesian introduces regularization or smoothness
penalties (such as in LASSO) to better constrain the problem, and a Bayesian
codifies complexity penalization into the prior specification. In SIPs there
is no way to accommodate such regularization: all solutions embody over-
fitting. Prediction, however, is still possible within the CoV framework,
through some new forward map $\boldsymbol{g}^{*}$. After obtaining
$\boldsymbol{\Theta}\sim f_{\boldsymbol{\Theta}}^{\text{CoV}}$, a predictive
distribution is simply that of $\boldsymbol{g}^{*}(\boldsymbol{\Theta})$. However, as
$\boldsymbol{Y}\rightarrow\boldsymbol{\Theta}\rightarrow\boldsymbol{g}^{*}(\boldsymbol{\Theta})$
is a sequence of population-level reparameterizations (I), there are no
falsifiable models that can be checked, and hence no way to defensibly justify
prediction.
The SIP framework has no extension into the realm of $p<q$; instead, one has
to force a problem into becoming an SIP and out of the realm of inference.
Approaches to ensure that $p\geq q$ include the following. First, the analyst
can pick a new subset of $p$ or fewer observables, even though each subset
will lead to a different answer. This is the approach followed in Butler et
al. (2015a), in which 2 of 12 possible water levels are selected, and in
Butler and Hakula (2020), in which 2 of 20 possible eigenmodes of a drum are
selected. A related approach is to combine the $q$ original observables into
$p$ or fewer, such as by taking averages and pooling variances. Per the LUQ
framework developed by Mattis et al. (2022), large quantities of time series
data are transformed into samples of a small number of QoIs (via PCA-based
feature extraction). Alternatively, the analyst can add parameters; the
original forward map is effectively augmented to make $p\geq q$. One way to do
this is to add a new parameter for each new observable, as suggested in the
stochastic map framework of Butler et al. (2020c). In this case, the number of
unknowns is inflated from $p\mapsto p+q>q$.
(III) SIPs accommodate replicates in an awkward fashion. This is closely related
to (II) above but can be an issue regardless of $p$’s relation to $q$. Suppose
for example that a computer model has $p$ parameters ($\boldsymbol{\theta}$)
as inputs and $q$ outputs. If there were $n$ vectors of measurements that can
be related to the computer model output, one could not immediately treat the
estimation of $\boldsymbol{\theta}$ as an SIP. Either the data vectors would
have to be combined into a single $q$-vector (to be treated like a population,
I), or following Butler et al. (2020c), $n$ additional $q$-vectors of unknowns
$\boldsymbol{\epsilon}_{i}$ would be added ($p\mapsto p+nq$).
This awkward, forced adjustment contrasts with the world of inference, where
replicate measurements are a virtual cornerstone of statistical design and
analysis of experiments.
Figure 2: A demonstration of the fundamental underdetermined nature of the
SIP for $p=2$ and $q=1$. The only way to recover the true
$f_{\boldsymbol{\Theta}}$ distributions (left) given only $f_{Y}$ (right)
is to use unknown (and unknowable) auxiliary variables whose joint
distribution could be arbitrarily complex.
(IV) The underdeterminedness of an SIP may preclude the recovery of any
“true” solution. Any solution to (3) solves the SIP. If one is faced with a
problem where there is supposed to be a single, true distribution to be
recovered, then one will almost certainly not be able to recover it for the
following reasons. As we showed in this work, an infinite number of solutions
will exist when $p>q$, as there are infinitely many choices for auxiliary
variables. Any true distribution comes from the joint density of hypothetical
observables with the given observables, and this distribution could be very
complicated. Figure 2 shows three potential “true” solutions that come from
complicated joint distributions for augmented observables
$\boldsymbol{Y}_{\text{aug}}$. Is one choice of auxiliary variables and
densities more valid or preferable than others? In general, no.
Even without direct mention of auxiliary variables, there is no _a priori_
reason to prefer BBE, BJW, or any of the possible intuitive solutions. This is
quite different from Bayesian inference where the choice of prior distribution
becomes irrelevant as more data is collected. Moreover, even in the case that
$p=q$, there is the risk of infinitely many solutions if the forward map is
not uniquely invertible on a given domain, as shown in Section A of the
appendix.
(V) In practice, SIP solutions may not exist. Throughout this work we have
taken it as given that the SIP solution exists, i.e., that the range
$\boldsymbol{g}(\mathcal{P})$ contains $\mathcal{Q}$, and that the system of
equations has a solution. Underdetermined systems can, in reality, be
“inconsistent” in that they have none. Therefore, the existence of a solution
needs to be checked and not assumed. Moore (1977), for example, gives a test
to check for solutions to nonlinear equations within given bounds.
### 8.1 Example: CoV Versus Simple Linear Regression
Suppose that two measurements of a response variable are given
$\boldsymbol{y}_{\text{obs}}=(-1,1)^{\top}$ together with an uncertainty
matrix of $\boldsymbol{\Sigma}_{y}=\sigma^{2}\boldsymbol{I}_{2}$. An analyst wants to
relate these to a predictor variable $x$, having values of $(-1,1)$, through a
forward map which is linear in its parameters:
$g_{x}(\boldsymbol{\theta})=\theta_{1}+\theta_{2}x=[1\ x]\boldsymbol{\theta}$.
To treat this as an SIP, the analyst might take $\boldsymbol{Y}\sim
N(\boldsymbol{y}_{\text{obs}},\boldsymbol{\Sigma}_{y})$; that is, the given
measurement values are treated as population parameters of a Gaussian
distribution (I). The linear forward map is indexed by $x$, but for the
problem at hand reduces to $\boldsymbol{Y}=\boldsymbol{X}\boldsymbol{\Theta}$
with $\boldsymbol{X}=\begin{bmatrix}1&-1\\\ 1&1\end{bmatrix}$. The SIP is
solved by
$f^{\text{CoV}}_{\boldsymbol{\Theta}}(\boldsymbol{\theta})=N_{2}\big{(}\boldsymbol{X}^{-1}\boldsymbol{y}_{\text{obs}},\
\boldsymbol{X}^{-1}\boldsymbol{\Sigma}_{y}\boldsymbol{X}^{-\top}\big{)}$ which
simplifies to
$\displaystyle\boldsymbol{\Theta}$ $\displaystyle\sim
N_{2}\left(\begin{bmatrix}0\\\ 1\end{bmatrix},\
\frac{\sigma^{2}}{2}\begin{bmatrix}1&0\\\ 0&1\end{bmatrix}\right)$
When interpolating and extrapolating at some new covariate value $x^{*}$, the
“predictive” distribution is a univariate Gaussian derived from a second CoV
(II):
$[1\ x^{*}]\boldsymbol{\Theta}\sim
N\left(x^{*},\frac{\sigma^{2}}{2}(1+x^{*2})\right)\ .$
The analyst is thus able to use two datapoints to get the distribution on
$\boldsymbol{\Theta}$ and predict (with quantified uncertainty) at any $x^{*}$
without any checks on model assumptions (because there is no falsifiable model
being used!). If more measurements were obtained for new $x$ values (resulting
in $n$ total), the analyst would have to modify their approach to ensure that
the number of parameters was at least as large. This could be done by (1)
reducing the $n$ measurements to $q=1$ or 2 observables; (2) expanding the
$\boldsymbol{\Theta}$ vector and making the rows of the matrix
$\boldsymbol{X}$ correspond to a higher-order polynomial in $x$, ensuring that
a new forward map will overfit; or (3) augmenting the original forward map to
include $n$ new parameters $\epsilon_{1},\ldots,\epsilon_{n}$, as per Butler
et al. (2018a).
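As a numerical sanity check (ours, with an arbitrary $\sigma^{2}$ and $x^{*}$), the closed-form CoV solution and the "predictive" distribution above can be reproduced in a few lines:

```python
import numpy as np

# Worked numbers from the example: y_obs = (-1, 1), X = [[1, -1], [1, 1]]
sigma2 = 0.25                                    # arbitrary sigma^2
X = np.array([[1.0, -1.0], [1.0, 1.0]])
y_obs = np.array([-1.0, 1.0])

Xinv = np.linalg.inv(X)
mean_theta = Xinv @ y_obs                        # CoV mean: X^{-1} y_obs = (0, 1)
cov_theta = sigma2 * Xinv @ Xinv.T               # CoV covariance: (sigma2/2) I_2

assert np.allclose(mean_theta, [0.0, 1.0])
assert np.allclose(cov_theta, (sigma2 / 2) * np.eye(2))

# "Predictive" distribution at a new covariate x*: second CoV [1, x*] Theta
x_star = 2.0                                     # arbitrary new covariate
a = np.array([1.0, x_star])
pred_mean = a @ mean_theta                       # = x*
pred_var = a @ cov_theta @ a                     # = (sigma2/2)(1 + x*^2)
assert np.isclose(pred_mean, x_star)
assert np.isclose(pred_var, (sigma2 / 2) * (1 + x_star**2))
```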
A statistical analyst would model any number of measurements as
$(Y_{i}|\boldsymbol{\theta})=\theta_{1}+\theta_{2}x_{i}+E_{i}$
($i=1,\ldots,n$), _conditional upon_ the covariate values $x_{i}$ and unknown
parameters $\boldsymbol{\theta}$; a marginal, unconditional distribution would
_not_ be specified for $\boldsymbol{Y}$ as in an SIP. The left side of the
model equation above is a random variable (uppercase $Y_{i}$) due to the fact
that the $E_{i}$ term (and only this term) on the right side has a
distribution, say $N(0,\sigma^{2})$. Each given measurement is a _realization_
of a random variable (lowercase $y_{i}$). All these data values will be
present in the likelihood function, but the argument is $\boldsymbol{\theta}$,
not $\boldsymbol{y}$. A Bayesian might specify a flat prior for
$\boldsymbol{\Theta}$ and possibly even take $\sigma^{2}$ as given to complete
the analysis. In this case it is true that if $n=2$, the posterior would be
the same as the CoV solution, but no reasonable Bayesian would be comfortable
with this answer, especially if prediction were required. Additional
measurements beyond $n=2$ pose no philosophical problem for the statistician,
and indeed, the only dimensional consideration on the statistician’s mind is
ensuring $n\gg p=2$.
### 8.2 Example: Calibration of Nuclear Reaction Code
Here we give an overview of a real type of analysis that can appear to be
either a stochastic or statistical IP and is thus useful for pedagogical
purposes.
Consider a nuclear reaction model in which a collection of ingoing actinide
isotopes are subjected to a fission and/or fusion environment controlled by
physical fluence parameters and a set of nuclear cross-sections. The
conversion of nuclides within the reactions is governed by a set of
differential equations, and the code output is a vector of the resulting
actinides at the end of the reaction series. The total number of fissions is
also tabulated by the code. In order to compare the outputs to real
measurements, per-fission ratios are formed for each resultant actinide (=
number of atoms / total fissions, so that scaling factors cancel). The goal is
to estimate the physics parameters (and in the case of the true forensics
scenario, ratios of ingoing isotopes) using the observed per-fission
quantities.
Figure 3: Schematic illustrating the forward problem for a simple scenario of
nuclear reactions followed by radioactive decay.
There are multiple actinide per-fission measurements with uncertainties, but
these correspond to radiochemical analysis performed some time after the
nuclear event. To account for this time differential, either the Bateman
equations (describing radioactive decay) must augment the burn code, or the
samples must be decay-corrected back to the end-of-event time ($t=0$) and then
used for the calibration. That is, the forward map for this analysis can
either be thought of as the composition of two maps between physics parameters
and observables
$\boldsymbol{\theta}\stackrel{{\scriptstyle\text{code}}}{{\longrightarrow}}\boldsymbol{Y}_{0}\stackrel{{\scriptstyle\text{decay}}}{{\longrightarrow}}\boldsymbol{Y}_{t}$,
or solely the code. Figure 3 shows this graphically for
$\boldsymbol{Y}^{\top}=({}^{237}\text{U},{}^{237}\text{Np},{}^{241}\text{Pu},{}^{241}\text{Am})$.
Inverting the physics of radioactive decay for a measurement vector is
unequivocally a CoV/SIP. It is therefore tempting to consider the entire
calibration process as an SIP. It is especially tempting to do so when, as in
many historic radiochemical reports, the original collection of independent
decay-corrected samples $\boldsymbol{y}_{0,1},\ldots,\boldsymbol{y}_{0,n}$ has
been reduced to a single vector with uncertainty.
The most immediate reasons for not treating this nuclear calibration scenario
as an SIP are practical. First, there are often fewer parameters than
observables. This is almost always true when ingoing isotopics are known, and
the goal is to estimate a few fluence parameters. Treating known masses as
unknown to ensure $p\geq q$ is not appealing. Second, an SIP solution does not
typically exist because the code cannot simultaneously fit all the responses.
After adding discrepancy terms, one is still faced with the possibility of
infinitely many solutions since it is not known where the augmented map is
one-to-one.
The fundamental reason for not treating this IP as an SIP comes from thinking
about what the data represent. The radiochemical measurements are samples from
a population of such quantities, and any reduction to summary values—such as
in the historic reports—does not change this fact. There are unknown
population parameters $\boldsymbol{\theta}$, and any collection of samples
could have been generated by a single $\boldsymbol{\theta}$. Within the SIP
framework, nature’s distribution of $\boldsymbol{\Theta}$ would have to change
between collections of samples in order to produce their respective
variability. Because this is not how the data is generated, an SIP is not
appropriate to infer $\boldsymbol{\theta}$.
## 9 Conclusion
This paper explored so-called “stochastic” inverse problems (SIPs) in great
depth. For the majority of the paper these problems were taken at face value
and various solutions explored. First we provided intuitive solutions derived
from a change-of-variables (CoV) wherein the user explicitly controls $p-q$
degrees of freedom. The two existing types of solutions in the literature (BBE
and BJW) were then shown to be derivable from CoVs. We then showed that any
solution to an SIP must be directly related to a CoV. Finally, setting aside
this face-value treatment, we gave a lengthy discussion pointing out a
number of fundamental issues inherent to SIPs. This work thus demonstrates
that anyone wanting to treat an estimation or prediction problem as an SIP (as
opposed to one of statistical inference) must answer the following questions:
* •
Was the data generated in a way consistent with the SIP framework? How will
any possible replicates be treated?
* •
Does the stipulation $p\geq q$ make sense for this problem?
* •
Does an SIP solution exist?
* •
If the answer to the point above is “yes,” then: Given that an infinite number
of SIP solutions are possible, why is one preferable to any other? On the
other hand, given that the SIP solution is almost certainly not the true
distribution, will this be problematic for future tasks, e.g., testing,
filtering, smoothing, and prediction?
## Appendix A Example: SIP Involving a Two-to-One Map
As a simple demonstration of the method used in the proof of Proposition
1, consider the function $g(\theta)=\theta^{2}$ and a random variable $Y$ on
$\mathcal{Q}=(0,1)$. If the domain is chosen to be
$\mathcal{P}\stackrel{{\scriptstyle\text{def}}}{{=}}(-1,1)$, then we can take
$\mathcal{P}_{1}\stackrel{{\scriptstyle\text{def}}}{{=}}(-1,0)$ and
$\mathcal{P}_{2}\stackrel{{\scriptstyle\text{def}}}{{=}}(0,1)$. The pullback
of $f_{Y}$ through
$g_{1}^{-1}(y)\stackrel{{\scriptstyle\text{def}}}{{=}}-\sqrt{y}$ and
$g_{2}^{-1}(y)\stackrel{{\scriptstyle\text{def}}}{{=}}\sqrt{y}$ yields
densities $-2\theta
f_{Y}(\theta^{2})\mathbbm{1}\\{\theta\in\mathcal{P}_{1}\\}$ and $2\theta
f_{Y}(\theta^{2})\mathbbm{1}\\{\theta\in\mathcal{P}_{2}\\}$. Let $0<w<1$ and
consider the continuous mixture of the two pullback densities:
$\displaystyle f^{\text{CoV}}_{\Theta,w}(\theta)$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}2|\theta|f_{Y}(\theta^{2})\Big{(}w\mathbbm{1}\\{\theta\in\mathcal{P}_{1}\\}+(1-w)\mathbbm{1}\\{\theta\in\mathcal{P}_{2}\\}\Big{)}\
.$
By the CoV Theorem, the distribution of
$\Gamma\stackrel{{\scriptstyle\text{def}}}{{=}}\Theta^{2}$ is
$\displaystyle f_{\Gamma}(\gamma)$
$\displaystyle=f^{\text{CoV}}_{\Theta,w}\big{(}g_{1}^{-1}(\gamma)\big{)}\Big{|}\frac{\partial
g_{1}^{-1}}{\partial\gamma}\Big{|}+f^{\text{CoV}}_{\Theta,w}\big{(}g_{2}^{-1}(\gamma)\big{)}\Big{|}\frac{\partial
g_{2}^{-1}}{\partial\gamma}\Big{|}$
$\displaystyle=\frac{1}{2\sqrt{\gamma}}\left(f^{\text{CoV}}_{\Theta,w}\big{(}-\sqrt{\gamma}\big{)}+f^{\text{CoV}}_{\Theta,w}\big{(}\sqrt{\gamma}\big{)}\right)$
$\displaystyle=f_{Y}(\gamma)\Big{(}w\mathbbm{1}\\{-\sqrt{\gamma}\in\mathcal{P}_{1}\\}+(1-w)\mathbbm{1}\\{-\sqrt{\gamma}\in\mathcal{P}_{2}\\}\Big{)}$
$\displaystyle\ \
+f_{Y}(\gamma)\Big{(}w\mathbbm{1}\\{\sqrt{\gamma}\in\mathcal{P}_{1}\\}+(1-w)\mathbbm{1}\\{\sqrt{\gamma}\in\mathcal{P}_{2}\\}\Big{)}$
$\displaystyle=f_{Y}(\gamma)\Big{(}w\mathbbm{1}\\{-\sqrt{\gamma}\in\mathcal{P}_{1}\\}+(1-w)\mathbbm{1}\\{\sqrt{\gamma}\in\mathcal{P}_{2}\\}\Big{)}\quad=f_{Y}(\gamma)\
.$
To get a glimpse into the more general case, if the domain had been chosen to
be $\mathcal{P}\stackrel{{\scriptstyle\text{def}}}{{=}}(-\epsilon,1)$ for
$0<\epsilon<1$, the domain sets would be
$\mathcal{P}_{1}\stackrel{{\scriptstyle\text{def}}}{{=}}(-\epsilon,0)$ and
$\mathcal{P}_{2}\stackrel{{\scriptstyle\text{def}}}{{=}}(0,\epsilon)$, but now
$\mathcal{P}_{3}$ would be the set $(\epsilon,1)$ where the function is one-
to-one. The infinite set of solutions would take the form
$\displaystyle f^{\text{CoV}}_{\Theta,w}(\theta)$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{\propto}}2|\theta|f_{Y}(\theta^{2})\cdot\left\\{\begin{array}[]{cc}w\mathbbm{1}\\{\theta<0\\}+(1-w)\mathbbm{1}\\{\theta>0\\}&-\epsilon<\theta<\epsilon\\\
1&\phantom{-}\epsilon<\theta<1\end{array}\right.\ .$
Using $Y\sim\text{Unif}(0,1)$ with $\epsilon=0.5$, two solutions are shown in Figure 4.
Figure 4: Two of the infinitely many solutions when $Y\sim$Unif$(0,1)$ and
$g(\theta)=\theta^{2}$ on $(-0.5,1)$ determined by the choice of $w$.
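A Monte Carlo check (ours, not from the original text) of the first construction above: sampling $\Theta$ from $f^{\text{CoV}}_{\Theta,w}$ on $(-1,1)$ and pushing it through $g(\theta)=\theta^{2}$ recovers $Y\sim\text{Unif}(0,1)$ for any weight $w$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
w = 0.3                     # any weight in (0, 1) works

# Sample Theta from the mixture pullback: sign is -1 w.p. w (branch P_1) and
# +1 w.p. 1-w (branch P_2); magnitude is sqrt(U) with U ~ Unif(0,1), which has
# density 2|theta| on each branch -- exactly f^CoV_{Theta,w}
u = rng.uniform(size=n)
sign = np.where(rng.uniform(size=n) < w, -1.0, 1.0)
theta = sign * np.sqrt(u)

gamma = theta**2            # push forward through g(theta) = theta^2
# gamma should be Unif(0,1) regardless of w: check first two moments and the split
assert abs(gamma.mean() - 0.5) < 0.01
assert abs(gamma.var() - 1/12) < 0.01
assert abs((theta < 0).mean() - w) < 0.01
```

The weight $w$ only redistributes mass between the two branches of the preimage; the pushforward is unchanged, which is exactly the source of the infinitely many solutions.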
## Appendix B Additional Derivations
### B.1 BJW Solution: Linear Transformation of a Multivariate Gaussian Vector
Here we verify the claim in Section 5.1 that the proposed Gaussian
distribution is indeed the BJW solution under the linear map
$\boldsymbol{g}(\boldsymbol{\theta})=\boldsymbol{A}\boldsymbol{\theta}$.
###### Lemma 1.
For $\boldsymbol{\Theta}\sim f^{\text{BJW}}\equiv
N_{p}(\wtilde{\boldsymbol{\mu}},\wtilde{\boldsymbol{\Sigma}})$ with moments
defined by (13) and (15), $\boldsymbol{A}\boldsymbol{\Theta}\sim
N_{q}(\boldsymbol{\mu}_{y},\boldsymbol{\Sigma}_{y})$, confirming that the
proposed BJW solution indeed solves the SIP (3).
###### Proof.
We need to show that
$\boldsymbol{A}\wtilde{\boldsymbol{\mu}}=\boldsymbol{\mu}_{y}$ and
$\boldsymbol{A}\wtilde{\boldsymbol{\Sigma}}\boldsymbol{A}^{\top}=\boldsymbol{\Sigma}_{y}$.
We will show the second claim about covariances and then use it to show the
first claim about the means.
Starting from (15),
$\wtilde{\boldsymbol{\Sigma}}^{-1}=\boldsymbol{\Sigma}_{\theta}^{-1}+\boldsymbol{A}^{T}\big{\\{}\boldsymbol{\Sigma}_{y}^{-1}-(\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{T})^{-1}\big{\\}}\boldsymbol{A}\
.$
Let $\boldsymbol{D}=\boldsymbol{\Sigma}_{\theta}^{-1}$,
$\boldsymbol{U}=\boldsymbol{A}^{T}$,
$\boldsymbol{E}=\boldsymbol{\Sigma}_{y}^{-1}-(\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{T})^{-1}$,
and $\boldsymbol{V}=\boldsymbol{A}$. The Woodbury formula states
$(\boldsymbol{D}+\boldsymbol{U}\boldsymbol{E}\boldsymbol{V})^{-1}=\boldsymbol{D}^{-1}-\boldsymbol{D}^{-1}\boldsymbol{U}(\boldsymbol{E}^{-1}+\boldsymbol{V}\boldsymbol{D}^{-1}\boldsymbol{U})^{-1}\boldsymbol{V}\boldsymbol{D}^{-1}$,
so that
$\displaystyle\wtilde{\boldsymbol{\Sigma}}=\boldsymbol{\Sigma}_{\theta}-\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{T}\left\\{(\boldsymbol{\Sigma}_{y}^{-1}-(\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{T})^{-1})^{-1}+\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{T}\right\\}^{-1}\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\
.$
Now consider $\boldsymbol{A}\wtilde{\boldsymbol{\Sigma}}\boldsymbol{A}^{T}$,
and let
$\boldsymbol{B}=\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{T}$.
Then,
$\boldsymbol{A}\wtilde{\boldsymbol{\Sigma}}\boldsymbol{A}^{T}=\boldsymbol{B}-\boldsymbol{B}\left\\{(\boldsymbol{\Sigma}_{y}^{-1}-\boldsymbol{B}^{-1})^{-1}+\boldsymbol{B}\right\\}^{-1}\boldsymbol{B}\
.$
Now redefine
$\boldsymbol{D}=(\boldsymbol{\Sigma}_{y}^{-1}-\boldsymbol{B}^{-1})^{-1}$ and
$\boldsymbol{E}=\boldsymbol{B}$ (with
$\boldsymbol{U}=\boldsymbol{V}=\boldsymbol{I}$), and again apply the Woodbury
formula. Then
$(\boldsymbol{D}+\boldsymbol{E})^{-1}=\boldsymbol{B}^{-1}-\boldsymbol{B}^{-1}\boldsymbol{\Sigma}_{y}\boldsymbol{B}^{-1}$.
Thus,
$\displaystyle\boldsymbol{A}\wtilde{\boldsymbol{\Sigma}}\boldsymbol{A}^{T}$
$\displaystyle=\boldsymbol{B}-\boldsymbol{B}(\boldsymbol{B}^{-1}-\boldsymbol{B}^{-1}\boldsymbol{\Sigma}_{y}\boldsymbol{B}^{-1})\boldsymbol{B}$
$\displaystyle=\boldsymbol{\Sigma}_{y}\ .$
Next, to see the first claim about the mean vectors, start from (13) and add and
subtract the term
$\boldsymbol{A}^{T}\boldsymbol{\Sigma}_{y}^{-1}\boldsymbol{A}\boldsymbol{\mu}_{\theta}$
within the parentheses:
$\displaystyle\wtilde{\boldsymbol{\mu}}$
$\displaystyle=\wtilde{\boldsymbol{\Sigma}}\left(\boldsymbol{A}^{T}\boldsymbol{\Sigma}_{y}^{-1}\boldsymbol{\mu}_{y}-\boldsymbol{A}^{T}\boldsymbol{\Sigma}_{y}^{-1}\boldsymbol{A}\boldsymbol{\mu}_{\theta}+\big{\\{}\boldsymbol{A}^{T}\boldsymbol{\Sigma}_{y}^{-1}\boldsymbol{A}\boldsymbol{-}\boldsymbol{A}^{T}(\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{T})^{-1}\boldsymbol{A}\big{\\}}\boldsymbol{\mu}_{\theta}+\boldsymbol{\Sigma}^{-1}_{\theta}\boldsymbol{\mu}_{\theta}\right)$
$\displaystyle=\wtilde{\boldsymbol{\Sigma}}\left(\boldsymbol{A}^{T}\boldsymbol{\Sigma}_{y}^{-1}\boldsymbol{\mu}_{y}-\boldsymbol{A}^{T}\boldsymbol{\Sigma}_{y}^{-1}\boldsymbol{A}\boldsymbol{\mu}_{\theta}+\big{\\{}\wtilde{\boldsymbol{\Sigma}}^{-1}-\boldsymbol{\Sigma}^{-1}_{\theta}\big{\\}}\boldsymbol{\mu}_{\theta}+\boldsymbol{\Sigma}^{-1}_{\theta}\boldsymbol{\mu}_{\theta}\right)$
$\displaystyle=\wtilde{\boldsymbol{\Sigma}}\left(\boldsymbol{A}^{T}\boldsymbol{\Sigma}_{y}^{-1}\boldsymbol{\mu}_{y}-\boldsymbol{A}^{T}\boldsymbol{\Sigma}_{y}^{-1}\boldsymbol{A}\boldsymbol{\mu}_{\theta}\right)+\boldsymbol{\mu}_{\theta}$
$\displaystyle=\wtilde{\boldsymbol{\Sigma}}\boldsymbol{A}^{T}\boldsymbol{\Sigma}_{y}^{-1}(\boldsymbol{\mu}_{y}-\boldsymbol{A}\boldsymbol{\mu}_{\theta})+\boldsymbol{\mu}_{\theta}\
.$
Now consider $\boldsymbol{A}\wtilde{\boldsymbol{\mu}}$ and use the fact above
that
$\boldsymbol{A}\wtilde{\boldsymbol{\Sigma}}\boldsymbol{A}^{T}=\boldsymbol{\Sigma}_{y}$
so that
$\displaystyle\boldsymbol{A}\wtilde{\boldsymbol{\mu}}$
$\displaystyle=\big{(}\boldsymbol{A}\wtilde{\boldsymbol{\Sigma}}\boldsymbol{A}^{T}\big{)}\boldsymbol{\Sigma}_{y}^{-1}(\boldsymbol{\mu}_{y}-\boldsymbol{A}\boldsymbol{\mu}_{\theta})+\boldsymbol{A}\boldsymbol{\mu}_{\theta}$
$\displaystyle=\boldsymbol{\mu}_{y}\ .$
∎
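Lemma 1 can also be verified numerically by assembling $\wtilde{\boldsymbol{\mu}}$ and $\wtilde{\boldsymbol{\Sigma}}$ from the precision-form versions of (13) and (15) written out in Section B.2. This sketch is ours; the dimensions and moments are arbitrary, and a diffuse initial covariance is chosen to keep $\wtilde{\boldsymbol{\Sigma}}$ positive definite:

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 4, 2
A = rng.normal(size=(q, p))               # full-rank forward map, a.s.
Sigma_y = np.eye(q)                       # observable covariance
Sigma_th = 10.0 * np.eye(p)               # diffuse initial covariance
mu_y = rng.normal(size=q)
mu_th = rng.normal(size=p)

B = A @ Sigma_th @ A.T                    # A Sigma_theta A^T
prec = (A.T @ np.linalg.inv(Sigma_y) @ A
        - A.T @ np.linalg.inv(B) @ A
        + np.linalg.inv(Sigma_th))        # (15): Sigma_tilde^{-1}
rhs = (A.T @ np.linalg.inv(Sigma_y) @ mu_y
       - A.T @ np.linalg.inv(B) @ A @ mu_th
       + np.linalg.inv(Sigma_th) @ mu_th) # (13): Sigma_tilde^{-1} mu_tilde
Sigma_t = np.linalg.inv(prec)
mu_t = Sigma_t @ rhs

# Lemma 1: the BJW Gaussian pushes forward to the observable law
assert np.allclose(A @ Sigma_t @ A.T, Sigma_y)
assert np.allclose(A @ mu_t, mu_y)
```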
### B.2 BJW Solution: Gaussian Mean Estimation as “Stochastic Map” Inversion
The forward map is
$\boldsymbol{g}(\boldsymbol{\theta})\stackrel{{\scriptstyle\text{def}}}{{=}}\boldsymbol{A}\boldsymbol{\theta}$
where
$\boldsymbol{A}\stackrel{{\scriptstyle\text{def}}}{{=}}[\boldsymbol{1}_{n}\vdots\boldsymbol{I}_{n}]$
and
$\boldsymbol{\theta}_{n}\stackrel{{\scriptstyle\text{def}}}{{=}}(\mu,\epsilon_{1},\ldots,\epsilon_{n})^{\top}$.
The observable distribution is
$N_{n}(\boldsymbol{\mu}_{y},\boldsymbol{\Sigma}_{y})$ with
$\boldsymbol{\Sigma}_{y}=\sigma^{2}_{y}\boldsymbol{I}_{n}$. The initial
distribution is $\underaccent{\wtilde}{f}_{\boldsymbol{\Theta}}\equiv
N_{n+1}(\boldsymbol{\mu}_{\theta},\boldsymbol{\Sigma}_{\theta})$ with
$\boldsymbol{\mu}_{\theta}^{\top}=(\mu_{0},\boldsymbol{0}_{n}^{\top})$ and
$\boldsymbol{\Sigma}_{\theta}=\text{diag}(\sigma_{0}^{2},\sigma^{2}_{\epsilon},\ldots,\sigma^{2}_{\epsilon})$.
The BJW solution in this case is
$N_{n+1}(\wtilde{\boldsymbol{\mu}},\wtilde{\boldsymbol{\Sigma}})$ with mean
and covariance given by (13) and (15), or equivalently,
$\displaystyle\wtilde{\boldsymbol{\Sigma}}^{-1}\wtilde{\boldsymbol{\mu}}$
$\displaystyle=\boldsymbol{A}^{\top}\boldsymbol{\Sigma}_{y}^{-1}\boldsymbol{\mu}_{y}-\boldsymbol{A}^{\top}(\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{\top})^{-1}\boldsymbol{A}\boldsymbol{\mu}_{\theta}+\boldsymbol{\Sigma}_{\theta}^{-1}\boldsymbol{\mu}_{\theta}$
$\displaystyle\wtilde{\boldsymbol{\Sigma}}^{-1}$
$\displaystyle=\boldsymbol{A}^{\top}\boldsymbol{\Sigma}_{y}^{-1}\boldsymbol{A}-\boldsymbol{A}^{\top}(\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{\top})^{-1}\boldsymbol{A}+\boldsymbol{\Sigma}_{\theta}^{-1}\
.$
For any $\sigma^{2}$ term, we will take $\tau=1/\sigma^{2}$ to be the
corresponding precision. Also define
$\boldsymbol{J}_{n}=\boldsymbol{1}_{n}\boldsymbol{1}_{n}^{\top}$. The most
complicated term in the equations above is
$\displaystyle\boldsymbol{A}^{\top}(\boldsymbol{A}\boldsymbol{\Sigma}_{\theta}\boldsymbol{A}^{\top})^{-1}\boldsymbol{A}$
$\displaystyle=\begin{bmatrix}nc_{2}&c_{2}\boldsymbol{1}_{n}^{\top}\\\
c_{2}\boldsymbol{1}_{n}&\tau_{\epsilon}\boldsymbol{I}_{n}-c_{1}\boldsymbol{J}_{n}\end{bmatrix}$
$\displaystyle c_{1}$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}\frac{\tau^{2}_{\epsilon}}{n\tau_{\epsilon}+\tau_{0}}\quad\quad
c_{2}\stackrel{{\scriptstyle\text{def}}}{{=}}\tau_{\epsilon}-nc_{1}=\frac{\tau_{\epsilon}\tau_{0}}{n\tau_{\epsilon}+\tau_{0}}\
.$
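This block expression can be checked numerically for a small $n$ (an illustrative sketch of ours, with arbitrary variances):

```python
import numpy as np

n = 5
sigma0_sq, sigma_eps_sq = 2.0, 0.5          # arbitrary variances
tau0, tau_eps = 1 / sigma0_sq, 1 / sigma_eps_sq

ones = np.ones((n, 1))
A = np.hstack([ones, np.eye(n)])            # A = [1_n : I_n]
Sigma_th = np.diag([sigma0_sq] + [sigma_eps_sq] * n)

# The "most complicated term": A^T (A Sigma_theta A^T)^{-1} A
M = A.T @ np.linalg.inv(A @ Sigma_th @ A.T) @ A

c1 = tau_eps**2 / (n * tau_eps + tau0)
c2 = tau_eps - n * c1                        # = tau_eps * tau0 / (n tau_eps + tau0)
J = np.ones((n, n))
expected = np.block([[np.array([[n * c2]]), c2 * ones.T],
                     [c2 * ones, tau_eps * np.eye(n) - c1 * J]])
assert np.allclose(M, expected)
```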
Now the precision matrix is
$\displaystyle\wtilde{\boldsymbol{\Sigma}}^{-1}$
$\displaystyle=\begin{bmatrix}n\tau_{y}&\tau_{y}\boldsymbol{1}_{n}^{\top}\\\
\tau_{y}\boldsymbol{1}_{n}&\tau_{y}\boldsymbol{I}_{n}\end{bmatrix}-\begin{bmatrix}nc_{2}&c_{2}\boldsymbol{1}_{n}^{\top}\\\
c_{2}\boldsymbol{1}_{n}&\tau_{\epsilon}\boldsymbol{I}_{n}-c_{1}\boldsymbol{J}_{n}\end{bmatrix}+\begin{bmatrix}\tau_{0}&\boldsymbol{0}_{n}^{\top}\\\
\boldsymbol{0}_{n}&\tau_{\epsilon}\boldsymbol{I}_{n}\end{bmatrix}$
$\displaystyle=\begin{bmatrix}nc_{3}+\tau_{0}&c_{3}\boldsymbol{1}_{n}^{\top}\\\
c_{3}\boldsymbol{1}_{n}&\tau_{y}\boldsymbol{I}_{n}+c_{1}\boldsymbol{J}_{n}\end{bmatrix}\quad\quad
c_{3}\stackrel{{\scriptstyle\text{def}}}{{=}}\tau_{y}-c_{2}\ .$
The covariance is
$\displaystyle\widetilde{\boldsymbol{\Sigma}}$
$\displaystyle=\begin{bmatrix}a&\boldsymbol{b}^{\top}\\\
\boldsymbol{b}&\boldsymbol{C}\end{bmatrix}^{-1}=\begin{bmatrix}\frac{1}{a}+\frac{1}{a^{2}}\boldsymbol{b}^{\top}(2,2)\boldsymbol{b}&-\frac{1}{a}\boldsymbol{b}^{\top}(2,2)\\\
-\frac{1}{a}(2,2)\boldsymbol{b}&(2,2)\end{bmatrix}$ $\displaystyle(2,2)^{-1}$
$\displaystyle=\boldsymbol{C}-\frac{1}{a}\boldsymbol{b}\boldsymbol{b}^{\top}=\tau_{y}\boldsymbol{I}_{n}+c_{4}\boldsymbol{J}_{n}$
$\displaystyle(2,2)$
$\displaystyle=\sigma^{2}_{y}\boldsymbol{I}_{n}-c_{5}\boldsymbol{J}_{n}$
$\displaystyle c_{4}$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}\frac{\tau_{\epsilon}^{2}}{n\tau_{\epsilon}+\tau_{0}}-\frac{c_{3}^{2}}{nc_{3}+\tau_{0}}\quad\quad
c_{5}\stackrel{{\scriptstyle\text{def}}}{{=}}\frac{\sigma^{4}_{y}c_{4}}{n\sigma^{2}_{y}c_{4}+1}\
,$
and hence
$\displaystyle\widetilde{\boldsymbol{\Sigma}}$
$\displaystyle=\begin{bmatrix}c_{7}&c_{6}\boldsymbol{1}_{n}^{\top}\\\
c_{6}\boldsymbol{1}_{n}&\sigma^{2}_{y}\boldsymbol{I}_{n}-c_{5}\boldsymbol{J}_{n}\end{bmatrix}\quad=\begin{bmatrix}\mathcal{O}(n^{-1})&-\mathcal{O}(n^{-1})\boldsymbol{1}_{n}^{\top}\\\
-\mathcal{O}(n^{-1})\boldsymbol{1}_{n}&\sigma^{2}_{y}\boldsymbol{I}_{n}-\mathcal{O}(n^{-1})\boldsymbol{J}_{n}\end{bmatrix}$
(52) $\displaystyle c_{6}$
$\displaystyle\stackrel{{\scriptstyle\text{def}}}{{=}}-\frac{c_{3}(\sigma^{2}_{y}-nc_{5})}{nc_{3}+\tau_{0}}\quad\quad
c_{7}\stackrel{{\scriptstyle\text{def}}}{{=}}\frac{1}{nc_{3}+\tau_{0}}+\frac{nc_{3}^{2}(\sigma^{2}_{y}-nc_{5})}{(nc_{3}+\tau_{0})^{2}}\
.$
Returning to the mean, if
$\overline{\mu}_{y}\stackrel{{\scriptstyle\text{def}}}{{=}}n^{-1}\sum_{i=1}^{n}\mu_{i}$,
then
$\displaystyle\widetilde{\boldsymbol{\Sigma}}^{-1}\widetilde{\boldsymbol{\mu}}$
$\displaystyle=\begin{bmatrix}\tau_{y}\boldsymbol{1}_{n}^{\top}\boldsymbol{\mu}_{y}-nc_{2}\mu_{0}+\tau_{0}\mu_{0}\\\
\tau_{y}\boldsymbol{\mu}_{y}-c_{2}\mu_{0}\boldsymbol{1}_{n}\end{bmatrix}$
$\displaystyle\widetilde{\boldsymbol{\mu}}$
$\displaystyle=\begin{bmatrix}c_{7}&c_{6}\boldsymbol{1}_{n}^{\top}\\\
c_{6}\boldsymbol{1}_{n}&\sigma^{2}_{y}\boldsymbol{I}_{n}-c_{5}\boldsymbol{1}_{n}\boldsymbol{1}_{n}^{\top}\end{bmatrix}\begin{bmatrix}n\tau_{y}\overline{\mu}_{y}+(\tau_{0}-nc_{2})\mu_{0}\\\
\tau_{y}\boldsymbol{\mu}_{y}-c_{2}\mu_{0}\boldsymbol{1}_{n}\end{bmatrix}$
$\displaystyle=\begin{bmatrix}(nc_{6}+nc_{7})\tau_{y}\overline{\mu}_{y}+\left(-nc_{2}c_{6}+c_{7}(\tau_{0}-nc_{2})\right)\mu_{0}\\\
\boldsymbol{\mu}_{y}+(nc_{6}-nc_{5})\tau_{y}\overline{\mu}_{y}\boldsymbol{1}_{n}+\left(-c_{2}\sigma^{2}_{y}+nc_{2}c_{5}+c_{6}(\tau_{0}-nc_{2})\right)\mu_{0}\boldsymbol{1}_{n}\end{bmatrix}$
$\displaystyle=\begin{bmatrix}\left(\sigma^{2}_{y}-\mathcal{O}(n^{-1})\right)\tau_{y}\overline{\mu}_{y}+\mathcal{O}(n^{-1})\mu_{0}\\\
\boldsymbol{\mu}_{y}+\left(-\sigma^{2}_{y}+\mathcal{O}(n^{-1})\right)\tau_{y}\overline{\mu}_{y}\boldsymbol{1}_{n}+\mathcal{O}(n^{-1})\mu_{0}\boldsymbol{1}_{n}\end{bmatrix}\
,$
and hence
$\displaystyle\widetilde{\boldsymbol{\mu}}$
$\displaystyle=\begin{bmatrix}\overline{\mu}_{y}+\mathcal{O}(n^{-1})\\\
(\boldsymbol{\mu}_{y}-\overline{\mu}_{y}\boldsymbol{1}_{n})+\mathcal{O}(n^{-1})\boldsymbol{1}_{n}\end{bmatrix}\
.$ (53)
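The mean identity can be checked the same way, comparing the defining expression for $\widetilde{\boldsymbol{\Sigma}}^{-1}\widetilde{\boldsymbol{\mu}}$ with its closed form (same assumption $\boldsymbol{A}=[\boldsymbol{1}_{n}\ \boldsymbol{I}_{n}]$ as above):

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)
sigma_y_sq, sigma_0_sq, sigma_e_sq = 1.3, 0.5, 2.0
tau_y, tau_0, tau_e = 1 / sigma_y_sq, 1 / sigma_0_sq, 1 / sigma_e_sq
mu_0 = 0.7
mu_y = rng.normal(size=n)

A = np.hstack([np.ones((n, 1)), np.eye(n)])       # assumed forward map
Sigma_theta = np.diag([sigma_0_sq] + [sigma_e_sq] * n)
mu_theta = np.concatenate([[mu_0], np.zeros(n)])

# Defining expression for Sigma~^{-1} mu~.
direct = (A.T @ (tau_y * np.eye(n)) @ mu_y
          - A.T @ np.linalg.inv(A @ Sigma_theta @ A.T) @ A @ mu_theta
          + np.linalg.inv(Sigma_theta) @ mu_theta)

# Closed form in terms of c2.
c1 = tau_e**2 / (n * tau_e + tau_0)
c2 = tau_e - n * c1
closed = np.concatenate([[tau_y * mu_y.sum() - n * c2 * mu_0 + tau_0 * mu_0],
                         tau_y * mu_y - c2 * mu_0 * np.ones(n)])
assert np.allclose(direct, closed)
```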
## References
* Aster et al. (2019) Aster, R. C., Borchers, B., and Thurber, C. H. (2019), Parameter Estimation and Inverse Problems, Amsterdam: Elsevier, 3rd ed.
* Baptista et al. (2021) Baptista, R., Marzouk, Y., Morrison, R. E., and Zahm, O. (2021), “Learning non-Gaussian graphical models via Hessian scores and triangular transport,” arXiv preprint arXiv:2101.03093.
* Bernardo and Smith (1994) Bernardo, J. and Smith, A. F. M. (1994), Bayesian Theory, Chichester: John Wiley & Sons.
* Breidt et al. (2011) Breidt, J., Butler, T., and Estep, D. (2011), “A measure-theoretic computational method for inverse sensitivity problems I: Method and analysis,” SIAM Journal on Numerical Analysis, 49, 1836–1859.
* Bruder et al. (2020) Bruder, L., Gee, M. W., and Wildey, T. (2020), “Data-consistent solutions to stochastic inverse problems using a probabilistic multi-fidelity method based on conditional densities,” International Journal for Uncertainty Quantification, 10, 399–424.
* Butler and Estep (2013) Butler, T. and Estep, D. (2013), “A numerical method for solving a stochastic inverse problem for parameters,” Annals of Nuclear Energy, 52, 86–94.
* Butler et al. (2012) Butler, T., Estep, D., and Sandelin, J. (2012), “A computational measure theoretic approach to inverse sensitivity problems II: A posteriori error analysis,” SIAM Journal on Numerical Analysis, 50, 22–45.
* Butler et al. (2014) Butler, T., Estep, D., Tavener, S., Dawson, C., and Westerink, J. J. (2014), “A measure-theoretic computational method for inverse sensitivity problems III: Multiple quantities of interest,” SIAM/ASA Journal on Uncertainty Quantification, 2, 174–202.
* Butler et al. (2015a) Butler, T., Graham, L., Estep, D., Dawson, C., and Westerink, J. J. (2015a), “Definition and solution of a stochastic inverse problem for the Manning’s $n$ parameter field in hydrodynamic models,” Advances in Water Resources, 78, 60–79.
* Butler et al. (2017) Butler, T., Graham, L., Mattis, S., and Walsh, S. (2017), “A measure-theoretic interpretation of sample based numerical integration with applications to inverse and prediction problems under uncertainty,” SIAM Journal on Scientific Computing, 39, A2072–2098.
* Butler and Hakula (2020) Butler, T. and Hakula, H. (2020), “What do we hear from a drum? A data-consistent approach to quantifying irreducible uncertainty on model inputs by extracting information from correlated model output data,” Computer Methods in Applied Mechanics and Engineering, 370, 113228.
* Butler et al. (2015b) Butler, T., Huhtala, A., and Juntunen, M. (2015b), “Quantifying uncertainty in material damage from vibrational data,” Journal of Computational Physics, 283, 414–435.
* Butler et al. (2018a) Butler, T., Jakeman, J., and Wildey, T. (2018a), “Combining push-forward measures and Bayes’ Rule to construct consistent solutions to stochastic inverse problems,” SIAM Journal on Scientific Computing, 40, A984–A1011.
* Butler et al. (2018b) — (2018b), “Convergence of probability densities using approximate models for forward and inverse problems in uncertainty quantification,” SIAM Journal on Scientific Computing, 40, A3523–A3548.
* Butler et al. (2020a) — (2020a), “Optimal experimental design for prediction based on push-forward probability measures,” Journal of Computational Physics, 416, 109518.
* Butler et al. (2020b) Butler, T., Pilosov, M., and Walsh, S. (2020b), “Simulation-based optimal experimental design: A measure-theoretic perspective,” Tech. rep., University of Colorado.
* Butler et al. (2020c) Butler, T., Wildey, T., and Yen, T. Y. (2020c), “Data-consistent inversion for stochastic input-to-output maps,” Inverse Problems, 36, 085015.
* Casella and Berger (2002) Casella, G. and Berger, R. L. (2002), Statistical Inference, Thomas Learning: Duxbury, 2nd ed.
* Chiachio-Ruano et al. (2022) Chiachio-Ruano, J., Chiachio-Ruano, M., and Sankararaman, S. (eds.) (2022), Bayesian Inverse Problems: Fundamentals and Engineering Applications, Boca Raton: CRC Press.
* Davison (2003) Davison, A. C. (2003), Statistical Models, Cambridge University Press.
* Gelman et al. (2014) Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2014), Bayesian Data Analysis, Boca Raton: CRC Press, 3rd ed.
* Graham et al. (2017) Graham, L., Butler, T., Walsh, S., Dawson, C., and Westerink, J. J. (2017), “A measure-theoretic algorithm for estimating bottom friction in a coastal inlet: Case study of Bay St. Louis during Hurricane Gustav (2008),” Monthly Weather Review, 145, 929–954.
* Groetsch (1993) Groetsch, C. W. (1993), Inverse Problems in the Mathematical Sciences, Springer Fachmedien Wiesbaden.
* Hannig (2009) Hannig, J. (2009), “On generalized fiducial inference,” Statistica Sinica, 19, 491–544.
* Hannig et al. (2016) Hannig, J., Iyer, H., Lai, R. C. S., and Lee, T. C. M. (2016), “Generalized fiducial inference: A review and new results,” Journal of the American Statistical Association, 111, 1346–1361.
* Kaipio and Somersalo (2005) Kaipio, J. P. and Somersalo, E. (2005), Statistical and Computational Inverse Problems, New York: Springer.
* Kirsch (2011) Kirsch, A. (2011), An Introduction to the Mathematical Theory of Inverse Problems, New York: Springer, 3rd ed.
* Lee (2013) Lee, J. M. (2013), Introduction to Smooth Manifolds, New York: Springer, 2nd ed.
* Lesnic (2021) Lesnic, D. (2021), Inverse Problems with Applications in Science and Engineering, Boca Raton: CRC Press.
* Marzouk et al. (2016) Marzouk, Y., Moselhy, T., Parno, M., and Spantini, A. (2016), “Sampling via measure transport: An introduction,” Handbook of Uncertainty Quantification, 1–41.
* Mattis and Butler (2019) Mattis, S. A. and Butler, T. (2019), “Enhancing piecewise-defined surrogate response surfaces with adjoints on sets of unstructured samples to solve stochastic inverse problems,” International Journal for Numerical Methods in Engineering, 119, 923–940.
* Mattis et al. (2015) Mattis, S. A., Butler, T. D., Dawson, C. N., Estep, D., and Vesselinov, V. V. (2015), “Parameter estimation and prediction for groundwater contamination based on measure theory,” Water Resources Research, 51, 7608–7629.
* Mattis et al. (2022) Mattis, S. A., Steffen, K. R., Butler, T., Dawson, C. N., and Estep, D. (2022), “Learning quantities of interest from dynamical systems for observation-consistent inversion,” Computer Methods in Applied Mechanics and Engineering, 388, 114230.
* Moore (1977) Moore, R. E. (1977), “A test for existence of solutions to nonlinear systems,” SIAM Journal on Numerical Analysis, 14, 611–615.
* Pawitan (2001) Pawitan, Y. (2001), In All Likelihood: Statistical Modelling and Inference Using Likelihood, Oxford University Press.
* Presho et al. (2017) Presho, M., Mattis, S., and Dawson, C. (2017), “Uncertainty quantification of two-phase flow problems via measure theory and the generalized multiscale finite element method,” Computational Geosciences, 21, 187–204.
* Reid (2010) Reid, N. (2010), “Likelihood inference,” Wiley Interdisciplinary Reviews: Computational Statistics, 2, 517–525.
* Robert (2007) Robert, C. P. (2007), The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation, New York: Springer, 2nd ed.
* Shao (2003) Shao, J. (2003), Mathematical Statistics, New York: Springer, 2nd ed.
* Stuart (2010) Stuart, A. M. (2010), “Inverse problems: A Bayesian perspective,” Acta Numerica, 19, 451–559.
* Swigon et al. (2019) Swigon, D., Stanhope, S. R., Zenker, S., and Rubin, J. E. (2019), “On the importance of the Jacobian determinant in parameter inference for random parameter and random measurement error models,” SIAM/ASA Journal on Uncertainty Quantification, 7, 975–1006.
* Tarantola (2005) Tarantola, A. (2005), Inverse Problem Theory and Methods for Model Parameter Estimation, Philadelphia: SIAM.
* Tikhonov et al. (1995) Tikhonov, A. N., Goncharsky, A. V., Stepanov, V. V., and Yagola, A. G. (1995), Numerical Methods for the Solution of Ill-Posed Problems, Dordrecht: Springer Science+Business Media.
* Tran and Wildey (2021) Tran, A. and Wildey, T. (2021), “Solving stochastic inverse problems for property-structure linkages using data-consistent inversion and machine learning,” JOM: The Journal of The Minerals, Metals & Materials Society, 73, 72–89.
* Uy and Grigoriu (2019) Uy, W. I. T. and Grigoriu, M. (2019), “Specification of additional information for solving stochastic inverse problems,” SIAM Journal on Scientific Computing, 41, A2880–A2910.
* Walsh et al. (2018) Walsh, S. N., Wildey, T. M., and Jakeman, J. D. (2018), “Optimal experimental design using a consistent Bayesian approach,” ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, 4, 1–19.
# Cache Me If You Can: Accuracy-Aware Inference Engine for Differentially
Private Data Exploration
Miti Mazmudar, Thomas Humphries, Jiaxiang Liu, Matthew Rafuse, and Xi He
(University of Waterloo)
###### Abstract.
Differential privacy (DP) allows data analysts to query databases that contain
users’ sensitive information while providing a quantifiable privacy guarantee
to users. Recent interactive DP systems such as APEx provide accuracy
guarantees over the query responses, but fail to support a large number of
queries with a limited total privacy budget, as they process incoming queries
independently from past queries. We present an interactive, accuracy-aware DP
query engine, CacheDP, which utilizes a differentially private cache of past
responses, to answer the current workload at a lower privacy budget, while
meeting strict accuracy guarantees. We integrate complex DP mechanisms with
our structured cache, through novel cache-aware DP cost optimization. Our
thorough evaluation illustrates that CacheDP can accurately answer various
workload sequences, while lowering the privacy loss as compared to related
work.
PVLDB Artifact Availability:
The source code, data, and/or other artifacts have been made available at
https://gitlab.uwaterloo.ca/m2mazmud/cachedp-public.
## 1\. Introduction
Organizations often collect large datasets that contain users’ sensitive data
and permit data analysts to query these datasets for aggregate statistics.
However, a curious data analyst may use these query responses to infer a
user’s record. Differential Privacy (DP) (Dwork et al., 2006; Dwork and Roth,
2014) allows organizations to provide a guarantee to their users that the
presence or absence of their record in the dataset will only change the
distribution of the query response by a small factor, given by the privacy
budget. This guarantee is typically achieved by perturbing the query response
with noise that is inversely proportional to the privacy budget. Thus, DP
systems face an accuracy-privacy trade-off: they should provide accurate query
responses, while reducing the privacy budget spent. DP has been deployed at
the US Census Bureau (Machanavajjhala et al., 2008), Google (Wilson et al.,
2020) and Microsoft (Ding et al., 2017).
Existing DP deployments (Machanavajjhala et al., 2008; Bittau et al., 2017;
Ding et al., 2017; Kotsogiannis et al., 2019) mainly consider a non-
interactive setting, where the analyst provides all queries in advance.
In contrast, interactive DP systems (McSherry, 2010; Johnson et al., 2018;
Wilson et al., 2020; Gaboardi et al., 2019) let data analysts supply queries
one at a time. These systems have been difficult to deploy as they often assume an
analyst has DP expertise. First, data analysts need to choose an appropriate
privacy budget per query. Second, data analysts require each DP noisy query
response to meet a specific accuracy criterion, whereas DP systems only seek
to minimize the expected error over multiple queries. Ge et al.’s APEx (Ge et
al., 2019) eliminates these two drawbacks, as data analysts need only specify
accuracy bounds in the form of an error rate $\alpha$ and a probability of
failure $\beta$. APEx chooses an appropriate DP mechanism and calibrates the
privacy budget spent on each workload, to fulfill the accuracy requirements.
However, interactive DP systems may run out of privacy budget for a large
number of queries.
We observe that we can further save privacy budget on a given query by
exploiting past, related noisy responses, and thereby answer a larger
number of queries interactively. The DP post-processing theorem allows
arbitrary computations on noisy responses without affecting the DP guarantee.
Hay et al. (Hay et al., 2010) have applied this theorem to enforce consistency
constraints among noisy responses to related range queries, thereby improving
their accuracy, through constrained inference. Peng et al. have proposed
caching noisy responses and reusing them to answer future queries in Pioneer
(Peng et al., 2013). However, their cache is unstructured and only operates
with simple DP mechanisms such as the Laplace mechanism.
We design a usable interactive DP query engine, CacheDP, with a built-in
differentially private cache, to support data analysts in answering data
exploration workloads accurately, without requiring them to have any knowledge
of DP. Our system is built on top of an existing non-private DBMS and
interacts with it through standard SQL queries. CacheDP meets the analysts’
$(\alpha,\beta)$ accuracy requirements on each workload, while minimizing the
privacy budget spent per workload. We note that a similar reduction in privacy
budget could be obtained if an expert analyst planned their queries; however,
our system removes the need for such planning.
Our contributions address four main challenges in the design of our engine.
_First_ , we structure our cache to maximize the possible reuse of noisy
responses by DP mechanisms (Section 3). Our cache design fully harnesses the
post-processing theorem in the interactive setting, for cached noisy
responses. _Second_ , we integrate existing DP mechanisms with our cache,
namely Li et al.’s Matrix Mechanism (Li et al., 2015) (Section 4), and
Koufogiannis et al.’s Relax Privacy mechanism (Koufogiannis et al., 2016)
(Section 6). In doing so, we address technical challenges that arise due to
the need to maintain accuracy requirements over cached responses while
minimizing the privacy budget, and thus, we provide a novel privacy budget
cost estimation algorithm.
_Third_ , we extend our cache-aware DP mechanisms with two modules, which
further reduce the privacy budget (Section 5). Specifically, we apply DP
sensitivity analysis to proactively fill our cache, and we apply constrained
inference to increase cache reuse. We note that CacheDP internally chooses the
DP module with the lowest privacy cost per workload, removing cognitive burden
on data analysts. _Fourth_ , we develop the design of our cache to handle
queries with multiple attributes efficiently (Section 7).
_Finally_ , we conduct a thorough evaluation of our CacheDP against related
work (APEx, Pioneer), in terms of privacy budget consumption and performance
overheads (Section 8). We find that it consistently spends lower privacy
budget as compared to related work, for a variety of workload sequences, while
incurring modest performance overheads. Through an ablation study, we deduce
that our standard configuration with all DP modules turned on, is optimal for
the evaluated workload sequences. Thus, researchers implementing our system
need not tinker with our module configurations. This paper contains several
theorems and lemmas; their proofs can be found in the extended version of the
paper (dpcacheextended).
## 2\. Background
We consider a single-table relational schema $\mathcal{R}$ across $d$
attributes: $\mathcal{R}(\mathcal{A}_{1},\ldots\mathcal{A}_{d})$. The domain
of an attribute $\mathcal{A}_{i}$ is given by $dom(\mathcal{A}_{i})$ and the
full domain of $\mathcal{R}$ is
$dom(\mathcal{R})=dom(\mathcal{A}_{1})\times\cdots\times
dom(\mathcal{A}_{d})$. Each attribute $\mathcal{A}_{i}$ has a finite domain
size $|dom(\mathcal{A}_{i})|=n_{i}$. The full domain has a size of
$n=\prod_{i}n_{i}$. A database instance $D$ of relation $\mathcal{R}$ is a
multiset whose elements are values in $dom(\mathcal{R})$.
A predicate $\phi:dom(\mathcal{R})\rightarrow\\{0,1\\}$ is an indicator
function specifying which database rows we are interested in (corresponds to
the WHERE clause in SQL). A _linear or row counting query (RCQ)_ takes a
predicate $\phi$ and returns the number of tuples in $D$ that satisfy $\phi$,
i.e., $\phi(D)=\sum_{t\in D}\phi(t)$. This corresponds to querying SELECT
COUNT(*) FROM $D$ WHERE $\phi$ in SQL. We focus on RCQs for this work as they
are primitives that can be used to express histograms, multi-attribute range
queries, marginals, and data cubes.
In this work, we express RCQs as a matrix. Consider $dom(\mathcal{R})$ to be
an ordered list. We represent a database instance $D$ by a data (column)
vector $\mathbb{x}$ of length $n$, where $\mathbb{x}[i]$ is the count of $i$th
value from $dom(\mathcal{R})$ in $D$. After constructing $\mathbb{x}$, we
represent any RCQ as a length-$n$ vector $\mathbb{w}$ with
$\mathbb{w}[i]\in\\{0,1\\}$ for $i=1,\ldots,n$. To obtain the ground truth
response for a RCQ $\mathbb{w}$, we can simply compute
$\mathbb{w}\cdot\mathbb{x}$. Hence, we can represent a workload of $\ell$ RCQs
as an $\ell\times n$ matrix $\mathbb{W}$ and answer this workload by matrix
multiplication, as $\mathbb{W}\mathbb{x}$.
When we partition the full domain $dom(\mathcal{R})$ into a set of
$n^{\prime}$ disjoint buckets, the data vector $\mathbb{x}$ and the workload
matrix $\mathbb{W}$ over the full domain $dom(\mathcal{R})$ can be mapped to a
vector $\mathbf{x}$ of size $n^{\prime}$ and a matrix $\mathbf{W}$ of size
$\ell\times n^{\prime}$, respectively. We also consider a workload matrix
$\mathbf{W}$ as a _set_ of RCQs, and hence applying a set operator over a
workload matrix is equivalent to applying this operator over a set of RCQs.
For example, $\mathbf{W}^{\prime}\subseteq\mathbf{W}$ means the set of RCQs in
$\mathbf{W}^{\prime}$ is a subset of the RCQs in $\mathbf{W}$. We follow a
differential privacy model with a trusted data curator.
###### Definition 2.1 ($\epsilon$-Differential Privacy (DP) (Dwork et al.,
2006)).
A randomized mechanism $M:\mathcal{D}\rightarrow\mathcal{O}$ satisfies
$\epsilon$-DP if for any output sets $O\subseteq\mathcal{O}$, and any
_neighboring_ database pairs $(D,D^{\prime})$, i.e., $|D\backslash
D^{\prime}\cup D^{\prime}\backslash D|=1$,
(1) $\Pr[M(D)\in O]\leq e^{\epsilon}\Pr[M(D^{\prime})\in O].$
The privacy parameter $\epsilon$ is also known as privacy budget. A classic
mechanism to achieve DP is the Laplace mechanism. We present the matrix form
of Laplace mechanism here.
###### Theorem 2.1 (Laplace mechanism (Dwork et al., 2006; Li et al., 2015)).
Given an $l\times n$ workload matrix $\mathbf{W}$ and a data vector
$\mathbf{x}$, the Laplace Mechanism $\mathcal{L}_{b}$ outputs
$\mathcal{L}_{b}(\mathbf{W},\mathbf{x})=\mathbf{W}\mathbf{x}+Lap(b)^{l}$ where
$Lap(b)^{l}$ is a vector of $l$ i.i.d. samples from a Laplace distribution
with scale $b$. If $b\geq\frac{\|\mathbf{W}\|_{1}}{\epsilon}$, where
$\|\mathbf{W}\|_{1}$ denotes the $L_{1}$ norm of $\mathbf{W}$, then
$\mathcal{L}_{b}(\mathbf{W},\mathbf{x})$ satisfies $\epsilon$-DP.
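A minimal NumPy sketch of the matrix form of the Laplace mechanism (an illustration of Theorem 2.1, not the paper's implementation; the function names are chosen here):

```python
import numpy as np

def l1_norm(W):
    """||W||_1: the maximum L1 norm over the columns of W."""
    return np.abs(W).sum(axis=0).max()

def laplace_mechanism(W, x, epsilon, rng=None):
    """Matrix-form Laplace mechanism: epsilon-DP answers to workload W on x."""
    rng = rng or np.random.default_rng()
    b = l1_norm(W) / epsilon              # smallest scale permitted by Theorem 2.1
    return W @ x + rng.laplace(scale=b, size=W.shape[0])
```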
Table 1. Notation

Notation | Description
---|---
$\mathbb{x},\mathbb{w},\mathbb{W},\mathbb{A}$ | raw data vector, query vector, query workload matrix, strategy matrix over full domain $dom(\mathcal{R})$
$\mathbf{x},\mathbf{w},\mathbf{W},\mathbf{A}$ | mapped data vector, query vector, query workload matrix, strategy matrix over a partition of $dom(\mathcal{R})$
$\alpha,\beta$ | accuracy parameters for $\mathbb{W}$
$\mathcal{B},B_{c},\epsilon$ | total budget, consumed budget, workload budget
$\mathbb{A}^{*},\mathcal{C}_{\mathbb{A}^{*}}$ | global strategy matrix, its cache over $dom(\mathcal{R})$
$b,\tilde{y}$ | a scalar noise parameter, a scalar noisy response
$\mathbf{b}$ | a vector of noise parameters
$\tilde{\mathbf{y}},\tilde{\mathbf{z}}$ | a vector of noisy responses to the strategy $\mathbf{A}$ or $\mathbf{W}$.
($\mathbb{a}$, $b$, $\tilde{y}$, $t$) | a cache entry for a strategy query $\mathbb{a}\in{\mathbb{A}^{*}}$ stored at timestamp $t$. See Definition 3.1.
$\mathbf{F},\mathbf{P}$ | free strategy matrix, paid strategy matrix
Li et al. (Li et al., 2015) present the matrix mechanism, which first applies
a DP mechanism, $M$, on a new strategy matrix $\mathbf{A}$, and then post-
processes the noisy answers to the queries in $\mathbf{A}$ to estimate the
queries in $\mathbf{W}$. This mechanism aims to achieve a smaller error than
directly applying the mechanism $M$ on $\mathbf{W}$. We will use the Laplace
mechanism $\mathcal{L}_{b}$ to illustrate the matrix mechanism.
###### Definition 2.2 (Matrix Mechanism (MM) (Li et al., 2015)).
Given an $l\times n$ workload matrix $\mathbf{W}$, a $p\times n$ strategy
matrix $\mathbf{A}$, and the Laplace mechanism
$\mathcal{L}_{b}(\mathbf{A},\mathbf{x})$ that answers $\mathbf{A}$ on
$\mathbf{x}$, the matrix mechanism $\mathcal{M}_{\mathbf{A},\mathcal{L}_{b}}$
outputs the following answer:
$\mathcal{M}_{\mathbf{A},\mathcal{L}_{b}}(\mathbf{W},\mathbf{x})=\mathbf{W}\mathbf{A}^{+}\mathcal{L}_{b}(\mathbf{A},\mathbf{x}),$
where $\mathbf{A}^{+}$ is the Moore-Penrose pseudoinverse of $\mathbf{A}$.
Intuitively, each workload query in $\mathbf{W}$ can be represented as a
linear combination of strategy queries in $\mathbf{A}$, i.e.,
$\mathbf{W}\mathbf{x}=\mathbf{W}\mathbf{A}^{+}(\mathbf{A}\mathbf{x})$. We
denote $\mathcal{L}_{b}(\mathbf{A},\mathbf{x})$ by $\tilde{\mathbf{y}}$ and
$\mathcal{M}_{\mathbf{A},\mathcal{L}_{b}}(\mathbf{W},\mathbf{x})$ by $\tilde{\mathbf{z}}$. As the
matrix mechanism post-processes the output of a DP mechanism (Dwork and Roth,
2014), it also satisfies the same level of privacy guarantee.
###### Proposition 2.1 ((Li et al., 2015)).
If $b\geq\frac{\|\mathbf{A}\|_{1}}{\epsilon}$, then
$\mathcal{M}_{\mathbf{A},\mathcal{L}_{b}}$ satisfies $\epsilon$-DP.
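The matrix mechanism with Laplace noise can be sketched as follows (illustrative only; `matrix_mechanism` is a name chosen here, not from the paper):

```python
import numpy as np

def matrix_mechanism(W, A, x, epsilon, rng=None):
    """Sketch of the matrix mechanism: answer strategy A under epsilon-DP
    Laplace noise, then post-process via the pseudoinverse to estimate W."""
    rng = rng or np.random.default_rng()
    b = np.abs(A).sum(axis=0).max() / epsilon      # ||A||_1 / epsilon (Prop. 2.1)
    y_noisy = A @ x + rng.laplace(scale=b, size=A.shape[0])
    return W @ np.linalg.pinv(A) @ y_noisy         # z~ = W A^+ y~
```

In the noiseless limit this reproduces $\mathbf{W}\mathbf{x}$ exactly whenever the rows of $\mathbf{W}$ lie in the row space of $\mathbf{A}$, since then $\mathbf{W}\mathbf{A}^{+}\mathbf{A}\mathbf{x}=\mathbf{W}\mathbf{x}$.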
For data analysts who may not be able to choose an appropriate budget for a DP
mechanism, we would like to allow them to specify their accuracy requirements
for their queries. We consider two popular types of error specification for DP
mechanisms.
###### Definition 2.3.
Given an $l\times n$ workload matrix $\mathbf{W}$ and a DP mechanism $M$, (i)
the $\alpha^{2}$-expected total squared error bound (Li et al., 2015) is
(2)
$\mathbb{E}[\|\mathbf{W}\mathbf{x}-M(\mathbf{W},\mathbf{x})\|_{2}^{2}]\leq\alpha^{2}$
and (ii) the $(\alpha,\beta)$-worst error bound (Ge et al., 2019) is defined
as
(3)
$\Pr[\|\mathbf{W}\mathbf{x}-M(\mathbf{W},\mathbf{x})\|_{\infty}\geq\alpha]\leq\beta.$
The error for the matrix mechanism is
$\|\mathbf{W}\mathbf{A}^{+}Lap(b)^{l}\|$, which is independent of the data.
This allows a direct estimation of the error bound without running the
algorithm on the data. For example, Ge et al. (Ge et al., 2019) provide a
loose bound for the noise parameter in the matrix mechanism to achieve an
$(\alpha,\beta)$-worst error bound.
###### Theorem 2.2 ((Ge et al., 2019)).
The matrix mechanism $\mathcal{M}_{\mathbf{A},\mathcal{L}_{b}}$ satisfies the
$(\alpha,\beta)$-worst error bound, if
(4) $b\leq
b_{L}=\frac{\alpha\sqrt{\beta/2}}{\|\mathbf{W}\mathbf{A}^{+}\|_{F}}$
where $\|\cdot\|_{F}$ is the Frobenius norm.
When we set $b$ to this loose bound $b_{L}$, the privacy budget consumed by
this mechanism is $\frac{\|\mathbf{A}\|_{1}}{b_{L}}$. To minimize the privacy
cost, Ge et al. (Ge et al., 2019) conduct a continuous binary search over
noise parameters larger than $b_{L}$. The filtering condition for this search
is the output of a Monte Carlo (MC) simulation for the error term
$\|\mathbf{W}\mathbf{A}^{+}Lap(b)^{l}\|_{\infty}$ (i.e., if the sampled error
exceeds $\alpha$ with a probability $\leq\beta$).
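The loose bound of Theorem 2.2 and the Monte Carlo filter can be sketched as follows (an illustrative re-implementation of the stated formulas, not APEx's actual code; the function names are chosen here):

```python
import numpy as np

def loose_noise_bound(W, A, alpha, beta):
    """b_L from Theorem 2.2: any b <= b_L meets the (alpha, beta)-worst error bound."""
    WA_pinv = W @ np.linalg.pinv(A)
    return alpha * np.sqrt(beta / 2) / np.linalg.norm(WA_pinv, 'fro')

def mc_meets_bound(W, A, b, alpha, beta, trials=20000, rng=None):
    """Monte Carlo estimate of whether Pr[worst-case error >= alpha] <= beta
    for the matrix mechanism with Laplace noise of scale b."""
    rng = rng or np.random.default_rng()
    WA_pinv = W @ np.linalg.pinv(A)
    noise = rng.laplace(scale=b, size=(trials, A.shape[0]))
    worst = np.abs(noise @ WA_pinv.T).max(axis=1)   # ||W A^+ Lap(b)^p||_inf per trial
    return (worst >= alpha).mean() <= beta
```

A binary search over noise scales $b>b_L$, filtered by `mc_meets_bound`, then yields a larger admissible scale and hence a smaller privacy cost $\|\mathbf{A}\|_{1}/b$.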
## 3\. System Design
Figure 1. System diagram.
We design an interactive inference engine with a built-in cache, CacheDP, that
supports data analysts in answering data exploration queries with sufficient
accuracy, without requiring them to have any differential privacy knowledge.
The system architecture is given in Figure 1. The data owner instantiates an
unmodified relational DBMS such as MySQL, with a database that includes
sensitive data. To complete the setup stage, the data owner also provides a
total privacy budget $\mathcal{B}$ to our system. At runtime, the data analyst
inputs a workload query $\mathbb{W}$, and an $(\alpha,\beta)$ accuracy
requirement that the query should satisfy, to CacheDP. Our system interacts
with the DBMS, via an SQL interface, and a cache $\mathcal{C}$, to return a
differentially private workload response $\tilde{\mathbf{z}}$, which satisfies
this accuracy requirement, to the analyst. Each workload response consumes a
privacy budget $\epsilon$, out of $\mathcal{B}$, and the goal of CacheDP is to
reduce $\epsilon$ by using our cache, which stores historical noisy responses.
We provide an overview of our system design in this section, while motivating
our description through design challenges. Our system follows a _modular
design_ , in order to enable DP experts to develop new cache-aware, problem-
specific modules in the future.
### 3.1. Cache Structure Overview
Our cache stores previously released noisy DP responses and related
parameters; it does not store any sensitive ground truth data. Moreover, the
cache does not interact directly with the DBMS at all. Therefore, the cache
design evolves independently of the DBMS or other alternative data storage
systems. We consider two design questions: (i) which queries and their noisy
responses should be stored in the cache; and (ii) what other parameters are
needed?
A naive cache design simply stores all historical workloads, their accuracy
requirements and noisy responses
$[(\mathbb{W}_{1},\alpha_{1},\beta_{1},\tilde{\mathbf{z}}_{1}),$
$\ldots,(\mathbb{W}_{t},\alpha_{t},\beta_{t},\tilde{\mathbf{z}}_{t})]$. When a
new workload $(\mathbb{W}_{t+1},\alpha_{t+1},\beta_{t+1})$ comes in, the
system first infers a response $\tilde{\mathbf{z}}^{\prime}_{t+1}$ from the
cache and its error bound $\alpha^{\prime}_{t+1}$. If its error bound is worse
than the accuracy requirement, i.e., $\alpha^{\prime}_{t+1}\geq\alpha_{t+1}$,
then additional privacy budget $\epsilon_{t+1}$ needs to be spent to improve
$\tilde{\mathbf{z}}^{\prime}_{t+1}$ to $\tilde{\mathbf{z}}_{t+1}$. This
additional privacy cost $\epsilon_{t+1}$ should be smaller than a DP mechanism
that does not use historical query answers.
This cache design is used in Pioneer (Peng et al., 2013), but it has several
drawbacks. First, this design results in a cache size that linearly increases
with the number of workload queries. Second, we will not be able to _compose
and reuse_ cached past responses to overlapping workloads
($\mathbb{W}_{t-k}\cap\mathbb{W}_{t}\neq\emptyset$). Simply put, this design
works with only simple DP mechanisms, which answer the data analyst-supplied
workloads directly with noisy responses. For instance, Pioneer (Peng et al.,
2013) considers only single query workloads and the Laplace mechanism. We seek
to design a reusable cache that can work with complex DP mechanisms, and in
particular, the matrix mechanism. Thus, we need to _structure_ our cache such
that cached queries and their noisy responses can be reused efficiently, in
terms of the additional privacy cost and run time, while limiting the cache
size.
Our key insight is that the strategy matrices in the Matrix Mechanism (MM) of
Definition 2.2 can be chosen from a structured set. So, we store noisy responses to the
matrix that the mechanism answers directly (the strategy matrix), instead of
storing noisy responses that are post-processed and returned to the data
analyst (the workload matrix). If all the strategy matrices share a similar
structure, in other words, many similar queries, then we need to only track a
limited set of queries in our cache. Relatedly, since the $(\alpha,\beta)$
accuracy requirements for different workload matrices can only be composed
through a loose union bound, we instead track the noise parameters that are
used to answer the associated strategy matrices. Thus, in our cache, we store
the strategy queries, the noisy strategy query responses and the noise
parameter.
This cache design motivates us to consider a global strategy matrix
$\mathbb{A}^{*}$ for the cache that can support all possible workloads.
Importantly, for a given workload matrix $\mathbb{W}$, we present a strategy
transformer (ST) module to generate an instant strategy matrix, denoted by
$\mathbb{A}$, such that each instant strategy matrix is contained in the
global strategy matrix, i.e., $\mathbb{A}\subseteq\mathbb{A}^{*}$. In this
design, the cache tracks each strategy entry $\mathbb{a}\in\mathbb{A}^{*}$,
with its noisy response, its noise parameter, and the timestamp.
###### Definition 3.1 (Cache Structure).
Given a global strategy matrix $\mathbb{A}^{*}$ over the full domain
$dom(\mathcal{R})$, a cache for differentially private counting queries is
defined as
(5)
$\mathcal{C}_{\mathbb{A}^{*}}=\\{\ldots,(\mathbb{a},b,\tilde{y},t),\ldots|\mathbb{a}\in\mathbb{A}^{*}\\},$
where $b$ and $\tilde{y}$ are the latest noise parameter and noisy response
for the strategy query $\mathbb{a}$, and $t$ is the time stamp for the latest
update of $\mathbb{a}$. At the beginning, all entries are initialized as
$(\mathbb{a},-,-,0)$, where ‘$-$’ denotes invalid values. We use $\mathcal{C}$
to represent the set of entries with valid noisy responses and $t>0$.
In this work, we consider a hierarchical structure, or $k$-ary tree, for
$\mathbb{A}^{*}$, which is a popular and effective strategy matrix for MM (Li
et al., 2015) with an expected worst error of $O(\log^{3}n)$, where $n$ is the
domain size. Figure 3 shows the global strategy matrix as a binary tree
decomposition of a small integer domain $[0,8)$.
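Concretely, the cache of Definition 3.1 over a binary-tree global strategy can be sketched as a small Python dictionary keyed by the node intervals. All names here are illustrative, not taken from the paper's implementation:

```python
def tree_nodes(lo, hi):
    """Enumerate the dyadic intervals of a binary tree over [lo, hi)."""
    nodes = [(lo, hi)]
    if hi - lo > 1:
        mid = (lo + hi) // 2
        nodes += tree_nodes(lo, mid) + tree_nodes(mid, hi)
    return nodes

def empty_cache(lo=0, hi=8):
    # One entry (a, b, y~, t) per strategy query of Definition 3.1;
    # None stands for the invalid marker '-' and t=0 for "never updated".
    return {node: {"b": None, "y": None, "t": 0} for node in tree_nodes(lo, hi)}

cache = empty_cache()
assert len(cache) == 15  # 2^3 + 2^2 + 2^1 + 1 nodes, as in Example 3.1
```

An entry becomes part of the valid set $\mathcal{C}$ once a mechanism writes a noise parameter, noisy response, and positive timestamp into it.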
### 3.2. Strategy Transformer (ST) Overview
We outline the Strategy Transformer (ST) module, which is commonly used by all
of our cache-aware DP modules. The ST module consists of two components: a
Strategy Generator (SG) and a Full-Rank Transform (FRT). Prior work (Li et
al., 2015) uses the global strategy $\mathbb{A}^{*}$, which has a high
$\|\mathbb{A}^{*}\|_{1}$. Given an input $\mathbb{W}$, the SG selects a basic
instant strategy $\mathbb{A}\subseteq\mathbb{A}^{*}$ with a low
$\|\mathbb{A}\|_{1}$, among other criteria. Though the cache
$\mathcal{C}_{\mathbb{A}^{*}}$ is structured based on $\mathbb{A}^{*}$, the
cache is not searched while generating $\mathbb{A}$. We present two example
workloads and the instant strategies generated for these workloads next.
Figure 2. A global strategy $\mathbb{A}^{*}$ in a binary tree decomposition
for an integer domain $[0,8)$. Workload queries include
$\mathbb{W}_{1}=\\{[0,7)\\}$, $\mathbb{W}_{2}=\\{[2,6),[3,7)\\}$. Nodes
present only in strategy $\mathbb{A}_{1}$ are shown in blue text, nodes only
in $\mathbb{A}_{2}$ are in magenta text, nodes in both $\mathbb{A}_{1}$ and
$\mathbb{A}_{2}$ are shown in purple text. The dashed nodes are output by the
Proactive Querying (PQ) module (Section 5.3), for $\mathbb{A}_{2}$; the value
annotations for $r$ and $s$ refer to Algorithm 6.
Figure 3. A global strategy $\mathbb{A}^{*}$ in a binary tree decomposition
for an integer domain $[0,8)$. Workloads include $\mathbb{W}_{1}=\\{[0,7)\\}$,
$\mathbb{W}_{2}=\\{[2,6),[3,7)\\}$. Strategy nodes unique to $\mathbb{A}_{1}$
are in blue text, those unique to $\mathbb{A}_{2}$ are in magenta text,
whereas those in both $\mathbb{A}_{1}$ and $\mathbb{A}_{2}$ are in purple
text. The dashed nodes are output by the PQ module (Section 5.3), for
$\mathbb{A}_{2}$.
###### Example 3.1.
In Figure 3, for an integer domain $[0,8)$, we show a binary tree
decomposition for its global strategy $\mathbb{A}^{*}$. This strategy consists
of $(2^{3}+2^{2}+2^{1}+1)$ row counting queries (RCQs), where each RCQ
corresponds to the counting query with the predicate range indicated by a node
in the tree. We use $\mathbb{A}^{*}_{[a,b)}$ to denote the RCQ with a range
$[a,b)$ in the global strategy matrix.
The first workload $\mathbb{W}_{1}$ consists of a single query with a range
predicate $[0,7)$. Its answer can be composed by summing over noisy responses
to three RCQs in the global strategy matrix, ($\mathbb{A}^{*}_{[0,4)}$,
$\mathbb{A}^{*}_{[4,6)}$, $\mathbb{A}^{*}_{[6,7)}$). The second workload
$\mathbb{W}_{2}$ has two queries with range predicates ($[2,6),[3,7)$). It can
be answered using $\mathbb{A}_{2}$ = ($\mathbb{A}^{*}_{[2,4)}$,
$\mathbb{A}^{*}_{[4,6)}$, $\mathbb{A}^{*}_{[3,4)}$, $\mathbb{A}^{*}_{[6,7)}$).
We detail the strategy generation in Example 5.1.
We observe that the RCQs $\mathbb{A}^{*}_{[4,6)}$ and $\mathbb{A}^{*}_{[6,7)}$
are common to both $\mathbb{A}_{1}$ and $\mathbb{A}_{2}$, thus our cache-aware
DP mechanisms can potentially reuse their noisy responses to answer
$\mathbb{A}_{2}$. ∎
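The composition of workload answers from tree nodes in this example follows the standard canonical range decomposition; a hedged sketch (the recursion is illustrative, not the paper's SG algorithm):

```python
def decompose(q_lo, q_hi, lo=0, hi=8):
    """Greedily cover the range [q_lo, q_hi) with maximal tree nodes
    of the binary-tree global strategy over [lo, hi)."""
    if q_lo >= hi or q_hi <= lo:
        return []                      # no overlap with this node
    if q_lo <= lo and hi <= q_hi:
        return [(lo, hi)]              # node fully covered: use its RCQ
    mid = (lo + hi) // 2
    return decompose(q_lo, q_hi, lo, mid) + decompose(q_lo, q_hi, mid, hi)

# Workloads from Example 3.1 over the tree of Figure 3:
assert decompose(0, 7) == [(0, 4), (4, 6), (6, 7)]
assert decompose(2, 6) == [(2, 4), (4, 6)]
assert decompose(3, 7) == [(3, 4), (4, 6), (6, 7)]
```

The overlap of the decompositions for $\mathbb{W}_{1}$ and $\mathbb{W}_{2}$ (the nodes $[4,6)$ and $[6,7)$) is exactly what the cache can reuse.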
The accuracy analysis of the matrix mechanism only holds over full-rank
strategy matrices; however, the instant strategy $\mathbb{A}$ may be a very
sparse matrix over the full domain, and thus may not be full rank. We address
this challenge in the FRT module by mapping the instant strategy
$\mathbb{A}$, workload $\mathbb{W}$, and data vector $\mathbb{x}$ to a
compact, full-rank, efficient representation, resulting in $\mathbf{A}$,
$\mathbf{W}$, and $\mathbf{x}$, respectively. Thus, for an input
$\mathbb{W},\mathbb{x}$, the ST module outputs
$(\mathbf{A},\mathbf{W},\mathbf{x})$. Since the cache
entries should be uniquely addressable, the raw data vector $\mathbb{x}$ and
strategy $\mathbb{A}$ are used to index the cache.
### 3.3. Cache-aware DP Modules
Our system supports two novel classes of cache-aware DP mechanisms: _Modified
Matrix Mechanism_ (MMM) and the _Relax Privacy_ Mechanism (RP). These two
cache-aware DP mechanisms commonly use the ST module to transform an input
$\mathbb{W},\mathbb{x}$ to $(\mathbf{W},\mathbf{A},\mathbf{x})$. Each cache-
aware DP mechanism implements two interfaces (similar to APEx (Ge et al.,
2019)) using the mapped representations $\mathbf{A},\mathbf{W},\mathbf{x}$, as
well as the cache $\mathcal{C}_{\mathbb{A}^{*}}$:
* The _answerWorkload_ interface answers a workload $\mathbf{W}$ using the cache
$\mathcal{C}_{\mathbb{A}^{*}}$ and an instant strategy $\mathbf{A}$ to derive
fresh noisy strategy responses, using the ground truth from the DB. Each
implementation of this interface also updates the cache
$\mathcal{C}_{\mathbb{A}^{*}}$.
* The _estimatePrivacyBudget_ interface estimates the minimum privacy budget
$\epsilon$ required by the answerWorkload interface to achieve the
$(\alpha,\beta)$ accuracy requirement.
For the first cache-aware DP mechanism, MMM, we have two additional optional
modules, namely _Strategy Expander_ (SE) and _Proactive Querying_ (PQ), which
modify the instant strategy $\mathbb{A}$ output by the basic ST module, for
different purposes. The SE module expands the basic $\mathbb{A}$ with related,
cached, accurate strategy rows in $\mathcal{C}_{\mathbb{A}^{*}}$ to exploit
constrained inference as discussed by Hay et al. (Hay et al., 2010). The goal
of this module is to further reduce the privacy cost of the basic instant
strategy to answer the given workload $\mathbb{W}$. On the other hand, the PQ
module is designed to fill the cache proactively, for later use by the MMM,
MMM+SE, and RP mechanisms. It expands $\mathbb{A}$ with strategy queries that
are absent from $\mathcal{C}_{\mathbb{A}^{*}}$, without incurring any
additional privacy budget over the MMM module. Therefore, it reduces the
privacy cost of future workload queries.
Algorithm 1 CacheDP Overview
1:Dataset $D$, Total privacy budget $\mathcal{B}$.
2:Initialize privacy loss $B_{c}=0$, cache
$\mathcal{C}_{\mathbb{A}^{*}}=\\{(\mathbb{a},-,-,0)|\mathbb{a}\in\mathbb{A}^{*}\\}$
3:repeat
4: Receive $(Q,\alpha,\beta)$ from analyst
5: $\mathbb{W}$ $\leftarrow$ getMatrixForm($Q,\mathbf{x}$)
6: $\mathbb{A},\mathbf{A},\mathbf{W}$ $\leftarrow$
generateStrategy($\mathbb{W},\mathbb{A}^{*}$)
7: $(b,\epsilon_{1})\leftarrow$
MMM.estimatePrivacyBudget($\mathcal{C},\mathbf{A},\mathbf{W},\alpha,\beta$)
8: $\epsilon_{2}\leftarrow$
RP.estimatePrivacyBudget($\mathcal{C},\mathbf{A},\mathbf{W},\alpha,\beta$)
9: $\mathbb{A}_{e},\mathbf{A}_{e}\leftarrow$
SE.generateExpandedStrategy($\mathbb{A},\mathcal{C},b$)
10: $\epsilon_{3}\leftarrow$
MMM.estimatePrivacyBudget($\mathcal{C},\mathbf{A}_{e},\mathbf{W},\alpha,\beta$)
11: Pick $(\hat{M},\hat{\mathbf{A}})$ from (MMM/RP,
$\mathbf{A}/\mathbf{A}_{e}$) that has smallest $\epsilon_{i}$
12: if $\epsilon_{i}+B_{c}\geq\mathcal{B}$ then
13: Answering $Q$ satisfying ($\alpha,\beta$) will exceed $\mathcal{B}$.
Reject $Q$.
14: $z\leftarrow$
$\hat{M}$.answerWorkload($\mathcal{C},\hat{\mathbf{A}},\mathbf{W},\epsilon_{i},\mathbf{x}$)
15: return $z$ to data analyst.
16: $B_{c}\leftarrow B_{c}+\epsilon_{i}$
17:until no more $Q$ from the analysts
Putting it all together, we state the end-to-end algorithm in Algorithm 1.
For an input workload $(\mathbf{W},\alpha,\beta)$, our system first uses the
ST module to generate a full-rank instant strategy matrix $\mathbf{A}$ (line
6), and then executes the estimatePrivacyBudget interface, with the input
tuple $(\mathbf{W},\mathbf{A},\alpha,\beta)$, for the MMM, MMM+SE, and RP
mechanisms (lines 7-10). We choose the mechanism that returns
the lowest privacy cost $\epsilon_{i}$ (line 11). If the sum of this privacy
cost with the consumed privacy budget is smaller than the total privacy
budget, then the system executes the answerWorkload interface for the chosen
mechanism, with the input tuple $(\mathbf{W},\hat{\mathbf{A}},\epsilon_{i})$
(line 14). The consumed privacy budget will increase by $\epsilon_{i}$ (line
16). (The PQ module does not impact the cost estimation for MMM, it only
extends the strategy matrix $\mathbf{A}$ to be answered.) We present the MMM
in Section 4, the common ST module and the MMM optional modules (SE, PQ) in
Section 5, and the RP mechanism in Section 6.
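The control flow of Algorithm 1 can be sketched as follows, with stub mechanism objects standing in for MMM, MMM+SE, and RP; all names are illustrative, not the paper's implementation:

```python
def run_cachedp(queries, mechanisms, total_budget):
    """Sketch of Algorithm 1: per workload, pick the cheapest mechanism
    and reject queries that would exceed the total budget B."""
    spent, answers = 0.0, []
    for (W, alpha, beta) in queries:
        # estimatePrivacyBudget for each candidate mechanism (lines 7-10)
        costs = [(m.estimate_privacy_budget(W, alpha, beta), m) for m in mechanisms]
        eps, best = min(costs, key=lambda c: c[0])          # line 11
        if spent + eps >= total_budget:                     # lines 12-13
            answers.append(None)                            # reject Q
            continue
        answers.append(best.answer_workload(W, eps))        # line 14
        spent += eps                                        # line 16
    return answers, spent

class StubMechanism:
    """Stand-in for MMM/RP: fixed cost, echoes the workload."""
    def __init__(self, eps):
        self.eps = eps
    def estimate_privacy_budget(self, W, alpha, beta):
        return self.eps
    def answer_workload(self, W, eps):
        return (W, eps)

ans, spent = run_cachedp([("W", 1.0, 0.05)] * 3,
                         [StubMechanism(0.4), StubMechanism(0.6)], 1.0)
assert ans[2] is None and abs(spent - 0.8) < 1e-12  # third query rejected
```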
###### Theorem 3.2.
CacheDP, as defined in Algorithm 1, satisfies $\mathcal{B}$-DP.
## 4\. Modified Matrix Mechanism (MMM)
In this section, we focus on our core cache-aware DP mechanism, namely the
Modified Matrix Mechanism (MMM). We would like to answer a workload
$\mathbf{W}$ with an $(\alpha,\beta)$-accuracy requirement using a given cache
$\mathcal{C}_{\mathbf{A}^{*}}$ and an instant strategy
$\mathbf{A}\subseteq\mathbf{A}^{*}$, while minimizing the privacy cost. We
will first provide intuition for the design of this mechanism. Then, we will
describe the first interface answerWorkload that answers a workload
$\mathbf{W}$ using the instant strategy $\mathbf{A}$ with the best set of
parameters derived from the second interface estimatePrivacyBudget. We then
present how the second interface arrives at an optimal privacy budget.
### 4.1. MMM Overview
The cacheless matrix mechanism (Definition 2.2) perturbs the ground truth
response to the strategy, that is $\mathbf{A}\mathbf{x}$, with the noise
vector freshly drawn from $Lap(b)^{|\mathbf{A}|}$ to obtain
$\tilde{\mathbf{y}}=\mathbf{A}\mathbf{x}+Lap(b)^{|\mathbf{A}|}$. An input
workload is then answered using $\mathbf{W}\mathbf{A}^{+}\tilde{\mathbf{y}}$.
As we discussed in the background, in an accuracy-aware DP system such as APEx
(Ge et al., 2019), the noise parameter $b$ is calibrated, first through a
loose bound $b_{L}$ and then to a tighter noise parameter $b_{T}$, such that
the workload response above meets the $(\alpha,\beta)$-accuracy requirement.
This spends a privacy budget $\frac{\|\mathbf{A}\|_{1}}{b_{T}}$ (Proposition
2.1).
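For reference, a minimal numpy sketch of the cacheless matrix mechanism described above (the toy workload and names are illustrative):

```python
import numpy as np

def matrix_mechanism(W, A, x, b, rng=None):
    """Cacheless MM sketch: perturb Ax with Laplace(b) noise, then
    post-process with W A^+; spends ||A||_1 / b budget (Proposition 2.1)."""
    rng = rng or np.random.default_rng(0)
    y_tilde = A @ x + rng.laplace(scale=b, size=A.shape[0])  # noisy strategy answers
    answer = W @ np.linalg.pinv(A) @ y_tilde                 # W A^+ y~
    eps = np.linalg.norm(A, 1) / b   # ||A||_1 is the max absolute column sum
    return answer, eps

W = np.array([[1.0, 1.0, 1.0]])
A = np.eye(3)                        # identity strategy over 3 buckets
x = np.array([5.0, 2.0, 1.0])
ans, eps = matrix_mechanism(W, A, x, b=10.0)
assert np.isclose(eps, 0.1)          # ||I||_1 / b = 1 / 10
```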
In MMM, we seek to reduce the privacy budget spent by using the cache
$\mathcal{C}$. Given an instant strategy matrix
$\mathbf{A}\subseteq\mathbf{A}^{*}$, we first look up the cache for any rows in
the strategy matrix $\mathbf{A}$. Note that not all rows in $\mathbf{A}$ have
their noisy responses in the cache. The cache may contain noisy responses for
some rows of $\mathbf{A}$, given by $\mathcal{C}\cap\mathbf{A}$, whereas other
rows in $\mathbf{A}$ may not have cached responses. A preliminary approach
would be to simply reuse all cached strategy responses, and obtain noisy
responses for non-cached strategy rows by expending some privacy budget
through naive MM. However, some cached responses may be too noisy and thus
including them will lead to a higher privacy cost than the cacheless MM.
Our key insight is that by reusing noisy responses for _accurately cached_
strategy rows, MMM can ultimately use a smaller privacy budget for all other
strategy rows as compared to MM without cache while satisfying the accuracy
requirements. Thus, out of all cached strategy rows
$\mathcal{C}\cap\mathbf{A}$, MMM identifies a subset of accurately cached
strategy rows $\mathbf{F}\subseteq\mathcal{C}\cap\mathbf{A}$ that can be
directly answered using their cached noisy responses, without spending any
privacy budget. MMM only spends privacy budget on the remaining strategy rows,
namely on $\mathbf{P}=\mathbf{A}-\mathbf{F}$. We refer to $\mathbf{F}$ and
$\mathbf{P}$ as the _free strategy matrix_ and the _paid strategy matrix_
respectively. MMM consists of two interfaces as indicated by Algorithm 2: (i)
answerWorkload and (ii) estimatePrivacyBudget. The second interface seeks the
best pair of free and paid strategy matrices $(\mathbf{F},\mathbf{P})$ that
use the smallest privacy budget $\epsilon$ to achieve
$(\alpha,\beta)$-accuracy requirement. The first interface will make use of
this parameter configuration $(\mathbf{F},\mathbf{P},\epsilon)$ to generate
noisy responses to the workload.
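The free/paid split can be sketched as a simple filter over the cache; this is illustrative, and how the threshold itself is chosen is the subject of Section 4.3:

```python
def split_strategy(A_rows, cache, b_threshold):
    """Partition strategy rows into a free matrix F (cached with noise
    parameter <= threshold, cf. Eq. (8)) and a paid matrix P = A - F."""
    F = [a for a in A_rows
         if a in cache and cache[a]["b"] is not None
         and cache[a]["b"] <= b_threshold]
    P = [a for a in A_rows if a not in F]
    return F, P

cache = {(0, 4): {"b": 15.0, "y": 101.2, "t": 1}}  # one cached RCQ
F, P = split_strategy([(0, 4), (4, 6), (6, 7)], cache, b_threshold=20.0)
assert F == [(0, 4)] and P == [(4, 6), (6, 7)]
```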
Algorithm 2 MMM main interfaces and supporting functions
1:function answerWorkload(
$\mathcal{C},\mathbf{A},\mathbf{W},\epsilon,\mathbf{x}$)
2: $(\mathbf{F},\mathbf{P},b_{\mathbf{P}},\epsilon)$ from pre-run
EstimatePrivacyBudget($\mathcal{C},\mathbf{A},\mathbf{W},\alpha,\beta$)
3: (Optional) Expand $\mathbf{P}$ with PQ module (Section 5.3)
4:
$\tilde{\mathbf{y}}_{\mathbf{P}}\leftarrow\mathbf{P}\mathbf{x}+Lap(b_{\mathbf{P}})^{|\mathbf{P}|}$
$\triangleright$ we have $b_{\mathbf{P}}=\frac{\|\mathbf{P}\|_{1}}{\epsilon}$
5: Update cache $\mathcal{C}_{\mathbb{A}^{*}}$ with
$(\mathbf{P},b_{\mathbf{P}},\tilde{\mathbf{y}}_{\mathbf{P}},t=\text{current
time})$
6:
$\tilde{\mathbf{y}}_{\mathbf{F}}\leftarrow[(\mathbf{w},b,\tilde{y},t)\in\mathcal{C}|\mathbf{w}\in\mathbf{F}]$
$\triangleright$ free cached responses for $\mathbf{F}$
7:
$\tilde{\mathbf{y}}\leftarrow\tilde{\mathbf{y}}_{\mathbf{F}}\|\tilde{\mathbf{y}}_{\mathbf{P}}$
$\triangleright$ concatenate noisy responses for $\mathbf{A}$.
8: return $\mathbf{W}\mathbf{A}^{+}\tilde{\mathbf{y}}$, $\epsilon$
9:
10:function
EstimatePrivacyBudget($\mathcal{C},\mathbf{A},\mathbf{W},\alpha,\beta$)
11: Set upper bound $b_{\top}=\frac{\|\mathbf{A}\|_{1}}{\epsilon_{\bot}}$
$\triangleright$ $\epsilon_{\bot}$ is the budget precision
12: Set loose bound
$b_{L}=\frac{\alpha\sqrt{\beta/2}}{\|\mathbf{W}\mathbf{A}^{+}\|_{F}}$
$\triangleright$ Theorem 2.2 (without cache)
13:
$\mathbf{b}\leftarrow[(\mathbf{w},b,\tilde{y},t)\in\mathcal{C}~{}|~{}\mathbf{w}\in\mathbf{A}\cap\mathcal{C},b>b_{L}]\cup[b_{L}]$
14: $b_{D}\leftarrow$ binarySearch(sort($\mathbf{b}$),
checkAccuracy($\cdot,\mathcal{C},\mathbf{A},\mathbf{W},\alpha,\beta$))
$\triangleright$ Search $b_{D}$ in the discrete space
15:
$\mathbf{F}\leftarrow[c.\mathbf{a}\in\mathcal{C}~{}|~{}c.\mathbf{a}\in\mathbf{A}\cap\mathcal{C},c.b\leq b_{D}]$
and $\mathbf{P}\leftarrow\mathbf{A}-\mathbf{F}$
16: $b_{\mathbf{P}}\leftarrow$ binarySearch($[b_{D},b_{\top}]$,
checkAccuracy($\cdot,\mathcal{C},\mathbf{A},\mathbf{W},\alpha,\beta$))
$\triangleright$ Search $b_{\mathbf{P}}$ in a continuous space
17: return (
$\mathbf{F},\mathbf{P},b_{\mathbf{P}},\frac{\|\mathbf{P}\|_{1}}{b_{\mathbf{P}}}$)
18:
19:function
checkAccuracy($b_{\mathbf{P}},\mathcal{C},\mathbf{A},\mathbf{W},\alpha,\beta$)
20:
$(\mathbf{F},\mathbf{b}_{\mathbf{F}})\leftarrow[(\mathbf{w},b,\tilde{y},t)\in\mathcal{C}~{}|~{}\mathbf{w}\in\mathbf{A}\cap\mathcal{C},b<b_{\mathbf{P}}]$
and $\mathbf{P}\leftarrow\mathbf{A}-\mathbf{F}$
21: Sample size $N=10000$ and failure counter $n_{f}=0$
22: for $i=1,\ldots,N$ do
23: $n_{f}$++ if
$\|\mathbf{W}\mathbf{A}^{+}Lap(\mathbf{b}_{\mathbf{F}}||\mathbf{b}_{\mathbf{P}})\|_{\infty}>\alpha$
24: $\beta_{e}=n_{f}/N$, $p=\beta/100$
25: $\delta\beta=z_{1-p/2}\sqrt{\beta_{e}(1-\beta_{e})/N}$
26: return $(\beta_{e}+\delta\beta+p/2)<\beta$
### 4.2. Answer Workload Interface
We present the first interface answerWorkload for the MMM. We recall that this
interface is always called after the estimatePrivacyBudget interface which
computes the best combination of free and paid strategy matrices and their
corresponding privacy budget
$(\mathbf{F},\mathbf{P},b_{\mathbf{P}},\epsilon)$. As shown in Algorithm 2,
the answerWorkload interface first calls the proactive module (Section 5.3).
If this module is turned on, $\mathbf{P}$ will be expanded for the remaining
operations. Then this interface answers the paid strategy matrix
$\mathbf{P}$ using the Laplace mechanism with the noise parameter
$b_{\mathbf{P}}$. We have
$b_{\mathbf{P}}=\frac{\|\mathbf{P}\|_{1}}{\epsilon}$, to ensure $\epsilon$-DP
(Line 4). Then, it updates the corresponding entries in the cache
$\mathcal{C}_{\mathbb{A}^{*}}$ (Line 5). In particular, for each query
$\mathbf{w}\in\mathbf{P}$, we update its corresponding noisy parameter, noisy
response, and timestamp in $\mathcal{C}_{\mathbb{A}^{*}}$ to
$b_{\mathbf{P}},\tilde{y}$, and the current time. After obtaining the fresh
noisy responses $\tilde{\mathbf{y}}_{\mathbf{P}}$ for the paid strategy
matrix, this interface pulls the cached responses
$\tilde{\mathbf{y}}_{\mathbf{F}}$ for the free strategy matrix from the cache
and concatenates them into $\tilde{\mathbf{y}}$ according to their order in the
instant strategy $\mathbf{A}$ (Lines 6-7). Finally, this interface returns a
noisy response to the workload $\mathbf{W}\mathbf{A}^{+}\tilde{\mathbf{y}}$,
and its privacy cost $\epsilon$.
###### Proposition 4.1.
The AnswerWorkload interface of MMM (Algorithm 2) satisfies $\epsilon$-DP,
where $\epsilon$ is the output of this interface.
As the final noisy response vector $\tilde{\mathbf{y}}$ to the strategy
$\mathbf{A}$ is concatenated from $\tilde{\mathbf{y}}_{\mathbf{F}}$ and
$\tilde{y}_{\mathbf{P}}$, its distribution is equivalent to a response vector
perturbed by a vector of Laplace noise with parameters:
$\mathbf{b}=\mathbf{b}_{\mathbf{F}}||\mathbf{b}_{\mathbf{P}}$, where
$\mathbf{b}_{\mathbf{F}}$ is a vector of noise parameters for the cached
entries in $\mathbf{F}$ with length $|\mathbf{F}|$ and
$\mathbf{b}_{\mathbf{P}}$ is a vector of the same value $b_{\mathbf{P}}$ with
length $|\mathbf{P}|$. This differs from the standard matrix mechanism with a
single scalar noise parameter. We derive its error term next.
###### Proposition 4.2.
Given an instant strategy $\mathbf{A}=(\mathbf{F}||\mathbf{P})$ with a vector
of $k$ noise parameters
$\mathbf{b}=\mathbf{b}_{\mathbf{F}}||\mathbf{b}_{\mathbf{P}}$, the error to a
workload $\mathbf{W}$ using the AnswerWorkload interface of MMM (Algorithm 2)
is
(6) $\|\mathbf{W}\mathbf{A}^{+}Lap(\mathbf{b})\|$
where $Lap(\mathbf{b})$ draws independent noise from $Lap(\mathbf{b}[1])$,
$\ldots,Lap(\mathbf{b}[k])$ respectively. We can simplify its expected total
square error as
(7) $\|\mathbf{W}\mathbf{A}^{+}diag(\mathbf{b})\|_{F}^{2}$
where $diag(\mathbf{b})$ is a diagonal matrix with
$diag(\mathbf{b})[i,i]=\mathbf{b}[i]$.
### 4.3. Estimate Privacy Budget Interface
The second interface EstimatePrivacyBudget chooses the free and paid strategy
matrices and the privacy budget to run the first interface for MMM. This
corresponds to the following questions:
* (1) Which cached strategy rows out of $\mathcal{C}\cap\mathbf{A}$ should be
included in the free strategy matrix $\mathbf{F}$? The choice of $\mathbf{F}$
directly determines the paid strategy matrix $\mathbf{P}$ as
$\mathbf{A}-\mathbf{F}$.
* (2) Given $\mathbf{P}$ and $b_{\mathbf{P}}$, the privacy budget paid by MMM is
given by
$\epsilon=\|\mathbf{P}\|_{1}/b_{\mathbf{P}}=\|\mathbf{A}-\mathbf{F}\|_{1}/b_{\mathbf{P}}$.
To minimize this privacy budget, what is the maximum noise parameter value
$b_{\mathbf{P}}$ that can be used to answer $\mathbf{P}$ while meeting the
accuracy requirement?
A baseline approach to the first question is to simply set
$\mathbf{F}=\mathcal{C}\cap\mathbf{A}$, that is, we reuse all cached strategy
responses. This approach may reuse inaccurate cached responses with large
noise parameters, which results in a larger $\epsilon$ (or a smaller
$b_{\mathbf{P}}$) to achieve the given accuracy requirement than answering the
entire $\mathbf{A}$ by resampling new noisy responses without using the cache.
###### Example 4.1.
Continuing with Example 3.1, we have an instant strategy $\mathbf{A}$ for the
workload $\mathbb{W}_{1}$ with range predicate $[0,7)$ mapped to a partitioned
domain $\\{[0,4),[4,6),[6,7)\\}$. The mapped workload and instant strategy are
shown in Figure 4. For simplicity, we use the expected square error to
illustrate the drawback of the baseline approach, but the same reasoning
applies to $(\alpha,\beta)$-worst error bound. Without using the cache, when
we set $\mathbf{b}=[10,10,10]$, we achieve an expected error
$\|\mathbf{W}\mathbf{A}^{+}diag(\mathbf{b})\|^{2}_{F}$ = 300 for the workload
$\mathbf{W}$. Suppose the cache has an entry for the first RCQ $[0,4)$ of the
strategy and a noise parameter $b_{c}=15$. Using this cached entry, the noise
vector becomes $\mathbf{b}=[15,b_{\mathbf{P}},b_{\mathbf{P}}]$, and the
expected square error is
$\|\mathbf{W}\mathbf{A}^{+}diag(\mathbf{b})\|^{2}_{F}=15^{2}+2b_{\mathbf{P}}^{2}$.
To achieve the same or a smaller error than the cacheless MM, we need to set
$b_{\mathbf{P}}\leq\sqrt{(300-15^{2})/2}\approx 6.12$ for the remaining
entries in the strategy. This tighter noise parameter $b_{\mathbf{P}}$
corresponds to a larger privacy budget. ∎
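The arithmetic in this example can be checked directly against the expected-square-error formula of Equation (7); a numpy sketch:

```python
import numpy as np

W = np.array([[1.0, 1.0, 1.0]])   # mapped workload for [0,7)
A = np.eye(3)                     # instant strategy {[0,4), [4,6), [6,7)}

def expected_sq_error(W, A, b):
    # || W A^+ diag(b) ||_F^2, the expected total square error of Eq. (7)
    return np.linalg.norm(W @ np.linalg.pinv(A) @ np.diag(b), "fro") ** 2

assert np.isclose(expected_sq_error(W, A, [10, 10, 10]), 300)   # cacheless MM
# Reusing the cached [0,4) entry with b_c = 15 forces a tighter b_P:
b_p = np.sqrt((300 - 15**2) / 2)            # ~6.12, as in the example
assert np.isclose(expected_sq_error(W, A, [15, b_p, b_p]), 300)
```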
$\mathbf{W}=\begin{bmatrix}1&1&1\end{bmatrix},\quad\mathbf{A}=\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{bmatrix},\quad\mathbf{b}=\begin{bmatrix}b_{c}\\ b\\ b\end{bmatrix},\quad\mathbf{x}_{1}=\begin{bmatrix}\mathbb{x}[0,4)\\ \mathbb{x}[4,6)\\ \mathbb{x}[6,7)\end{bmatrix}$
Figure 4. Consider $\mathbb{W}_{1}=\\{[0,7)\\}$ with its corresponding mapped
workload matrix, instant strategy, noise vector, and data vector (the first
row of $\mathbf{A}$ is the cached one). Reusing a cached response for the
first row with noise parameter $b_{c}$ requires a smaller noise parameter $b$
(and hence a bigger privacy budget) for the other rows than the cacheless MM
to achieve the same accuracy level.
#### 4.3.1. Privacy Cost Optimizer
We formalize the two aforementioned questions as an optimization problem,
subject to the accuracy requirements, as follows.
Cost estimation (CE) problem: Given a cache $\mathcal{C}$ and an instant
strategy matrix $\mathbf{A}$, determine
$\mathbf{F}\subseteq(\mathbf{A}\cap\mathcal{C})$ (and
$\mathbf{P}=\mathbf{A}-\mathbf{F}$) and $b_{\mathbf{P}}\in[b_{L},b_{\top}]$
that minimizes the paid privacy budget
$\epsilon=\frac{\|\mathbf{P}\|_{1}}{b_{\mathbf{P}}}$ subject to accuracy
requirement:
$\|\mathbf{W}\mathbf{A}^{+}diag(\mathbf{b}_{\mathbf{F}}||\mathbf{b}_{\mathbf{P}})\|_{F}^{2}\leq\alpha^{2}$
or
$\Pr[\|\mathbf{W}\mathbf{A}^{+}Lap(\mathbf{b}_{\mathbf{F}}||\mathbf{b}_{\mathbf{P}})\|_{\infty}\geq\alpha]\leq\beta$.
In this optimization problem, the lower bound for $b_{\mathbf{P}}$ is the
loose bound for the cacheless MM (Equation (4)), and the upper bound
$b_{\top}$ is $\frac{\|\mathbf{A}\|_{1}}{\epsilon_{\bot}}$, where $\epsilon_{\bot}$ is
the smallest possible privacy budget.
In a brute-force solution to this problem, we can search over all possible
pairs of $\mathbf{F}\subseteq(\mathbf{A}\cap\mathcal{C})$ and
$b_{\mathbf{P}}\in[b_{L},b_{\top}]$, and check whether every possible pair of
$(\mathbf{F},b_{\mathbf{P}})$ can lead to an accurate response. In this
solution, the search space for $\mathbf{F}$ will be
$O(2^{|\mathbf{A}\cap\mathcal{C}|})$ and thus the total search space will be
$O\left(2^{|\mathbf{A}\cap\mathcal{C}|}\cdot\log_{2}(|[b_{L},b_{\top}]|)\right)$
if we apply binary search within $[b_{L},b_{\top}]$. Hence, we need another
way to efficiently determine optimal values for $(\mathbf{F},b_{\mathbf{P}})$.
#### 4.3.2. Simplified Privacy Cost Optimizer
We present a simplification to arrive at a much smaller search space for
($\mathbf{F}$, $b_{\mathbf{P}}$), while ensuring that $b_{\mathbf{P}}$
improves over the noise parameter of the cacheless MM. We observe that, if we
perturb the paid strategy matrix with noise parameter $b_{\mathbf{P}}$ and
choose cached entries with noise parameters smaller than $b_{\mathbf{P}}$, we
will have a smaller error than a cacheless MM with a noise parameter
$b=b_{\mathbf{P}}$ for all the queries in the strategy matrix. This motivates
us to consider the following search space for $\mathbf{F}$. When given
$b_{\mathbf{P}}$, we choose a free strategy matrix fully determined by this
noise parameter:
(8)
$\mathbf{F}_{b_{\mathbf{P}}}=\\{c.\mathbf{a}\in\mathcal{C}~{}|~{}c.\mathbf{a}\in\mathcal{C}\cap\mathbf{A},c.b\leq
b_{\mathbf{P}}\\},$
and formalize a simplified optimization problem.
Simplified CE problem: Given a cache $\mathcal{C}$ and an instant strategy
matrix $\mathbf{A}$, determine $b_{\mathbf{P}}\in[b_{L},b_{\top}]$ (and
$\mathbf{F}=\mathbf{F}_{b_{\mathbf{P}}}$, $\mathbf{P}=\mathbf{A}-\mathbf{F}$)
that minimizes the paid privacy budget
$\epsilon=\frac{\|\mathbf{P}\|_{1}}{b_{\mathbf{P}}}$ subject to:
$\|\mathbf{W}\mathbf{A}^{+}diag(\mathbf{b}_{\mathbf{F}}||\mathbf{b}_{\mathbf{P}})\|_{F}^{2}\leq\alpha^{2}$
or
$\Pr[\|\mathbf{W}\mathbf{A}^{+}Lap(\mathbf{b}_{\mathbf{F}}||\mathbf{b}_{\mathbf{P}})\|_{\infty}\geq\alpha]\leq\beta$.
###### Theorem 4.1.
The optimal solution to simplified CE problem incurs a smaller privacy cost
$\epsilon$ than the privacy cost $\epsilon_{\mathbf{F}=\emptyset}$ of the
matrix mechanism without cache, i.e., MMM with $\mathbf{F}=\emptyset$.
Figure 5. An example of cached noise parameters
$c.b\in\mathbf{A}\cap\mathcal{C}$ (dots) and the discrete (dashed lines) and
continuous (full lines) binary searches through these parameters. Parameters
in green are accurate enough. The discrete search scans over $c.b\geq b_{L}$
and outputs $b_{D}$; the free matrix includes all cache entries in green and
yellow. The continuous search scans over the interval $[b_{D},b_{D+1}]$ and
identifies an optimal $b_{\mathbf{P}}$.
#### 4.3.3. Algorithm for Simplified CE Problem
We present our search algorithm to find the best solution to the simplified CE
problem, shown in the estimatePrivacyBudget function of Algorithm 2. We
visualize our searches through the cached noise parameters in Figure 5.
First, we set up the upper and lower bounds for the noise parameter
$b_{\mathbf{P}}$ for the simplified CE problem (Lines 11-12).
Step 1: Discrete search for $b_{\mathbf{P}}$. We first search $b_{\mathbf{P}}$
from the existing noise parameters in the cached strategy rows
$\mathbf{A}\cap\mathcal{C}$ that are greater than $b_{L}$ (Line 13). We also
include $b_{L}$ in this noise parameter list $\mathbf{b}$. Next, we sort the
noise parameter list $\mathbf{b}$ and conduct a binary search in this sorted
list to find the largest possible $b_{D}\in\mathbf{b}$ that meets the accuracy
requirement (Line 14). During this binary search, to check if a given
$b_{\mathbf{P}}$ achieves $(\alpha,\beta)$-accuracy requirement, we run the
function checkAccuracy, defined in Algorithm 2. This function first places
all the cached entries with noise parameter smaller than $b_{\mathbf{P}}$ into
$\mathbf{F}$ and the remaining entries of the strategy into $\mathbf{P}$ (Line
20). Then it runs an MC simulation (Lines 21-26) of the error
$\mathbf{W}\mathbf{A}^{+}Lap(\mathbf{b}_{\mathbf{F}}||\mathbf{b}_{\mathbf{P}})$
(Proposition 4.2). If sufficiently few of the simulated error vectors have a
norm larger than $\alpha$ (Line 26), then this paid noise parameter
$b_{\mathbf{P}}$ achieves the $(\alpha,\beta)$-accuracy guarantee. This MC
simulation differs from
a traditional one (Ge et al., 2019) which makes no use of the cache and has
only a single scalar noise value for all entries of the strategy. On the other
hand, if the accuracy requirement is $\alpha^{2}$-expected total square error,
we simply check if
$\|\mathbf{W}\mathbf{A}^{+}diag(\mathbf{b}_{\mathbf{F}}||\mathbf{b}_{\mathbf{P}})\|_{F}^{2}\leq\alpha^{2}$.
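Under these definitions, checkAccuracy can be sketched as the following Monte Carlo test; the confidence margin mirrors Lines 24-26 of Algorithm 2, and all names are illustrative:

```python
import numpy as np
from statistics import NormalDist

def check_accuracy(b_vec, W, A, alpha, beta, N=10_000, rng=None):
    """MC test of Pr[||W A^+ Lap(b)||_inf > alpha] <= beta (Algorithm 2)."""
    rng = rng or np.random.default_rng(0)
    WAp = W @ np.linalg.pinv(A)
    # Sample N error vectors W A^+ Lap(b) and count accuracy failures.
    noise = rng.laplace(scale=b_vec, size=(N, len(b_vec)))
    n_f = np.sum(np.max(np.abs(noise @ WAp.T), axis=1) > alpha)
    beta_e, p = n_f / N, beta / 100
    delta = NormalDist().inv_cdf(1 - p / 2) * np.sqrt(beta_e * (1 - beta_e) / N)
    return (beta_e + delta + p / 2) < beta

W, A = np.array([[1.0]]), np.array([[1.0]])
assert check_accuracy([0.01], W, A, alpha=1.0, beta=0.05)       # tiny noise passes
assert not check_accuracy([100.0], W, A, alpha=0.1, beta=0.05)  # huge noise fails
```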
Step 2: Refining $b_{\mathbf{P}}$ in a continuous space. We observe that we
may further increase $b_{\mathbf{P}}$, by examining the interval between
$b_{D}$, which is the output from the discrete search, and the next largest
cached noise parameter, denoted by $\top_{C}=b_{D+1}$. If $\top_{C}$ does not
exist, then we set $\top_{C}=b_{\top}$. We conduct a binary search in a
continuous domain $[b_{D},\top_{C}]$ (Line 16). This continuous search does
not impact the free strategy matrix $\mathbf{F}$ obtained from the discrete
search, as the chosen noise parameter will be strictly smaller than $b_{D+1}$.
The continuous search is depicted through full lines in Figure 5. This search
outputs a noise parameter $b_{\mathbf{P}}$. Finally, this function returns
$b_{\mathbf{P}}$, the privacy budget
$\epsilon=\frac{\|\mathbf{P}\|_{1}}{b_{\mathbf{P}}}$, as well as the free and
paid strategy matrices output by the discrete search.
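Both search steps reduce to finding the largest noise parameter for which a monotone accuracy predicate still holds; a hedged sketch, with a synthetic predicate standing in for checkAccuracy:

```python
def discrete_search(candidates, ok):
    """Largest b in a sorted candidate list for which ok(b) holds (Step 1)."""
    lo, hi, best = 0, len(candidates) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if ok(candidates[mid]):
            best, lo = candidates[mid], mid + 1
        else:
            hi = mid - 1
    return best

def continuous_search(lo, hi, ok, tol=1e-6):
    """Refine b_P within [b_D, T_C) by bisection (Step 2)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if ok(mid) else (lo, mid)
    return lo

# Synthetic accuracy predicate ok(b) := b <= 7.3 over cached parameters:
b_D = discrete_search([2.0, 5.0, 9.0, 12.0], lambda b: b <= 7.3)
b_P = continuous_search(b_D, 9.0, lambda b: b <= 7.3)
assert b_D == 5.0 and abs(b_P - 7.3) < 1e-5
```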
The search space for this simplified CE problem is $O(\log_{2}(|[b_{L},$
$b_{\top}]|))$. We only need to sort the cached matrix once, which costs
$O(n_{c}\cdot\log(n_{c}))$, where $n_{c}=|\mathbf{A}\cap\mathcal{C}|$. Hence,
this approach significantly improves the brute-force search solution for the
CE problem.
## 5\. Strategy Modules
In this section, we first present the strategy transformer (ST), which is used
by all of our cache-aware DP mechanisms. We then present two optional modules
for MMM: the Strategy Expander (SE) and Proactive Querying (PQ). Due to space
constraints, all detailed algorithms for this section are included in the full
paper (dpcacheextended).
### 5.1. Strategy Transformer
The ST module selects an instant strategy $\mathbb{A}\subseteq\mathbb{A}^{*}$
from the given global strategy, based on the workload $\mathbb{W}$. Since
our cache-aware MMM and RP modules build on the matrix mechanism, we require a
few basic properties for this instant strategy $\mathbb{A}$ to run the former
mechanisms, with good utility. First, the strategy $\mathbb{A}$ should be _a
support_ to the workload $\mathbb{W}$ (Li et al., 2015), that is, it must be
possible to represent each query in $\mathbb{W}$ as a linear combination of
strategy queries in $\mathbb{A}$. In other words, there exists a solution
matrix $\mathbb{X}$ to the linear system $\mathbb{W}=\mathbb{X}\mathbb{A}$.
Second, $\mathbb{A}$ should have a _low $l_{1}$ norm_, such that the privacy
cost $\epsilon=\frac{\|\mathbb{A}\|_{1}}{b}$ for running MM is small, for a
given noise parameter $b$ (Proposition 2.1). Third, using noisy responses to
$\mathbb{A}$ to answer $\mathbb{W}$ should incur minimal _noise compounding_
(Hay et al., 2010). We thus present the strategy generator (SG) component, to
address all of these requirements. The strategy generator only uses the global
strategy $\mathbb{A}^{*}$, and does not use the cached responses, to generate
an instant strategy $\mathbb{A}$ for the workload $\mathbb{W}$.
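The support property can be checked numerically by solving the linear system $\mathbb{W}=\mathbb{X}\mathbb{A}$ in the least-squares sense and testing the residual; an illustrative numpy sketch:

```python
import numpy as np

def supports(A, W, tol=1e-9):
    """True iff every workload row is a linear combination of strategy
    rows, i.e. W = X A has an exact solution X."""
    Xt, *_ = np.linalg.lstsq(A.T, W.T, rcond=None)  # solve A^T X^T = W^T
    return np.allclose(Xt.T @ A, W, atol=tol)

A = np.array([[1.0, 1.0, 0.0],    # strategy rows: ranges [0,2) and [2,3)
              [0.0, 0.0, 1.0]])
assert supports(A, np.array([[1.0, 1.0, 1.0]]))      # [0,3) = [0,2) + [2,3)
assert not supports(A, np.array([[1.0, 0.0, 0.0]]))  # [0,1) not expressible
```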
Last, we require that $\mathbb{A}$ must be mapped to _a full rank_ matrix
$\mathbf{A}$, such that $\mathbf{A}^{+}\tilde{\mathbf{y}}$ is the estimate of
the mapped data vector $\mathbf{x}$ that minimizes the total squared error
given the noisy observations $\tilde{\mathbf{y}}$ of the strategy queries
$\mathbf{A}$ (Li et al., 2015, Section 4). We present a full-rank transform
(FRT) component to address this last requirement. Thus the ST module consists
of two components: the strategy generator, and the full-rank transform, run
sequentially.
#### 5.1.1. Strategy Generator.
Our global strategy $\mathbb{A}^{*}$ is a $k$-ary tree over the full domain
$dom(\mathcal{R})$, hence, it supports all possible counting queries on the
full domain. A baseline instant strategy $\mathbb{A}$ just uses the full
global strategy matrix ($\mathbb{A}=\mathbb{A}^{*}$), thus satisfying the
first requirement. To answer the first workload, we obtain the noisy strategy
responses for all nodes on the tree, thereby fully populating the cache and
reusing the cached noisy responses for future workloads. However, this instant
strategy has a very high $L_{1}$ norm $\|\mathbb{A}^{*}\|_{1}$, equal to the tree height
$\log_{k}(n)+1$, where $n$ is the full domain size. Thus, answering the first
workload would require spending a high upfront privacy budget. Moreover, this
high upfront cost may not be amortized across future workload queries, for
example, if the future queries do not require many nodes on this tree. Future
workload queries may also have higher accuracy requirements, and we would thus
need to re-sample noisy responses to the entire tree again, with a lower noise
parameter.
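To make the norm claim concrete, the following sketch (our own illustration, not the paper’s code) builds the interval rows of a $k$-ary global strategy tree and checks that its $L_{1}$ norm equals the tree height $\log_{k}(n)+1$:

```python
def global_strategy_rows(n, k=2):
    # One interval [lo, hi) per node of the k-ary tree over the domain [0, n).
    rows, frontier = [], [(0, n)]
    while frontier:
        lo, hi = frontier.pop(0)
        rows.append((lo, hi))
        if hi - lo > 1:
            w = (hi - lo) // k
            frontier += [(lo + i * w, lo + (i + 1) * w) for i in range(k)]
    return rows

def l1_norm(rows, n):
    # L1 norm of a 0/1 interval strategy = max number of intervals covering a point.
    return max(sum(lo <= x < hi for lo, hi in rows) for x in range(n))

rows = global_strategy_rows(8)   # domain [0, 8), k = 2
assert len(rows) == 15           # 8 leaves + 4 + 2 + 1 internal nodes
assert l1_norm(rows, 8) == 4     # tree height log2(8) + 1
```

Every domain point lies on exactly one root-to-leaf path, so it is covered once per tree level, which is why the $L_{1}$ norm equals the height.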
To obtain a low norm strategy matrix, we only choose those strategy queries
from $\mathbb{A}^{*}$ that support the workload $\mathbb{W}$. Intuitively, we
wish to fill the cache with noisy responses to as many strategy queries as
possible, thus we should bias our strategy generation algorithm towards the
leaf nodes of the strategy tree. However, the DP noisy responses for the
strategy nodes would be added up to answer the workload, and summing up
responses to a large number of strategy leaf nodes compounds the DP noise in
the workload response (Hay et al., 2010). Thus, for each query in the workload
$\mathbb{W}$, we apply a top-down tree traversal to fetch the _minimum_ number
of nodes in the strategy tree (and the corresponding queries in
$\mathbb{A}^{*}$) required to answer this workload query. Then we include all
these queries into the instant strategy $\mathbb{A}$ for this workload
$\mathbb{W}$. The $L_{1}$ norm of the output strategy matrix is then simply
the maximum number of nodes in any path of the strategy tree, and it is upper-
bounded by the tree height. We present an example strategy generation below.
###### Example 5.1.
We continue with Example 3.1 shown in Figure 3, for an integer domain $[0,8)$.
For the single workload query $\mathbb{W}_{1}=\mathbb{w}=[0,7)$, the first
iteration of our SG workload decomposition algorithm computes the overlap of
$\mathbb{w}$ with its left child $c_{1}=\mathbb{A}^{*}_{[0,4)}$ as
$\mathbb{w}_{c1}=[0,4)$ and the overlap with its right child
$c_{2}=\mathbb{A}^{*}_{[4,8)}$ as $\mathbb{w}_{c2}=[4,7)$. The function recurses only once for the left child $c_{1}$, directly outputting that child’s range $\mathbb{A}^{*}_{[0,4)}$, as the base condition is satisfied (Line 2). In the recursive call for the right child $c_{2}$, the overlaps with both of its children are non-null ($[4,6)$ with $\mathbb{A}^{*}_{[4,6)}$ and $[6,7)$ with $\mathbb{A}^{*}_{[6,8)}$), and the corresponding strategy nodes are returned in subsequent recursive calls. Since the intervals in $\mathbb{A}_{1}$ do not overlap,
$\|\mathbb{A}_{1}\|_{1}=1<\|\mathbb{A}^{*}\|_{1}$.
The second workload $\mathbb{W}_{2}$ has two queries with range predicates
($[2,6),[3,7)$). The first workload query predicate requires the strategy
nodes $\mathbb{A}^{*}_{[2,4)}$ and $\mathbb{A}^{*}_{[4,6)}$, whereas the
second query requires the following three nodes:
$\mathbb{A}^{*}_{[3,4)}$,$\mathbb{A}^{*}_{[4,6)}$ and
$\mathbb{A}^{*}_{[6,7)}$. Hence, the second instant strategy $\mathbb{A}_{2}$
is a set of all of these strategy nodes.
The global strategy has an $L_{1}$ norm $\|\mathbb{A}^{*}\|_{1}=4$. The matrix
forms of $\mathbb{A}_{1}$ and $\mathbb{A}_{2}$ can be generated as shown in
Example 5.2. Both strategy matrices improve over the global strategy in terms
of their $L_{1}$ norms: $\|\mathbb{A}_{1}\|_{1}=1<\|\mathbb{A}^{*}\|_{1}$ and $\|\mathbb{A}_{2}\|_{1}=2<\|\mathbb{A}^{*}\|_{1}$. We observe that though
$\mathbb{A}^{*}$ is full-rank, due to the removal of strategy queries that do
not support the workloads, both $\mathbb{A}_{1}$ and $\mathbb{A}_{2}$ are not
full rank. ∎
We formalize our strategy generation algorithm in the recursive function
decomposeWorkload given in Algorithm 3. This function takes as input a single
workload query range interval $\mathbb{w}$ and a node $v$ on the tree
$\mathcal{T}$. It first checks if the input predicate matches the range
interval for the node $v$. If it does, it returns that range interval (Line
2). Otherwise, for each child $c$ of node $v$, it computes the _overlap_
$\mathbb{w}_{c}$ of the range interval of that child with the interval
$\mathbb{w}$ (Line 6). For instance, the overlap of the range interval $[2,6)$
with $[0,4)$ is given by
$[\max\{0,2\},\min\{4,6\})=[2,4)$. For each child with a
non-null range interval overlap, the function is called recursively with that
overlap $\mathbb{w}_{c}$ (Line 8). This function is called with the root of
the tree $\mathcal{T}$, as the second argument, and returns with the
decomposition of $\mathbb{w}$ over all child nodes in the tree ($\mathbb{a}$).
It is run for each workload RCQ $\mathbb{w}\in\mathbb{W}$, and $\mathbb{A}$
simply includes the union of each workload decomposition.
Algorithm 3 Strategy Generator (SG) (Section 5.1.1)
1:function decomposeWorkload($\mathbb{w}$, node $v$)
2: if $v$.query == $\mathbb{w}$ then return $v$
3: $\mathbb{a}\leftarrow\emptyset$
4: if $v$ has children then
5: for child $c$ of node $v$ do
6: $\mathbb{w}_{c}$ $\leftarrow$ OverlappingRCQ($\mathbb{w}$, node $c$.query)
7: if $\mathbb{w}_{c}\neq\emptyset$ then
8: $\mathbb{a}\leftarrow\mathbb{a}$ $\cup$ decomposeWorkload($\mathbb{w}_{c}$,
node $c$)
9: return $\mathbb{a}$
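As a cross-check of the traversal above, here is a runnable sketch of decomposeWorkload for a binary tree over an integer domain, with intervals as `(lo, hi)` pairs (our own illustration of the pseudocode, not the authors’ implementation):

```python
def decompose_workload(w, node, k=2):
    """Decompose workload interval w = (a, b) over the k-ary tree node [lo, hi)."""
    lo, hi = node
    if w == node:                    # base condition (Line 2): exact match
        return [node]
    out, width = [], (hi - lo) // k
    for i in range(k):               # visit each child interval
        c = (lo + i * width, lo + (i + 1) * width)
        wc = (max(w[0], c[0]), min(w[1], c[1]))  # overlap of w with child c
        if wc[0] < wc[1]:            # recurse on non-empty overlaps only
            out += decompose_workload(wc, c, k)
    return out

# The running example: w = [0, 7) over the tree on [0, 8) yields three nodes.
assert decompose_workload((0, 7), (0, 8)) == [(0, 4), (4, 6), (6, 7)]
# The second workload query [3, 7) decomposes into [3,4), [4,6), [6,7).
assert decompose_workload((3, 7), (0, 8)) == [(3, 4), (4, 6), (6, 7)]
```

The returned intervals are pairwise disjoint, so the resulting strategy has $L_{1}$ norm 1, as claimed in the example.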
#### 5.1.2. Full Rank Transformer (FRT)
We transform an instant strategy matrix $\mathbb{A}$ to a full rank matrix
$\mathbf{A}$ by mapping the full domain $dom(\mathcal{R})$ of size $n$ to a
new partition of the full domain of $n^{\prime}\leq n$ non-overlapping
counting queries or buckets. The resulting partition should still support all
the queries in the instant raw strategy $\mathbb{A}$ output by our SG. For
efficiency, the partition should have the smallest possible number of buckets
such that the transformed strategy $\mathbf{A}$ will be full rank. First, we
define a domain transformation matrix $\mathbb{T}$ of size $n^{\prime}\times
n$ that transforms the data vector $\mathbb{x}$ over the full domain to the
partitioned data vector $\mathbf{x}$, such that
$\mathbf{x}=\mathbb{T}\mathbb{x}$. Using $\mathbb{T}$, we can then transform a
raw $\mathbb{A}$ to a full-rank $\mathbf{A}$.
###### Definition 5.1 (Transformation Matrix).
Given a partition of $n^{\prime}$ non-overlapping buckets over the full domain
$dom(\mathcal{R})$, if the $i$th value in $dom(\mathcal{R})$ is in the $j$th
bucket, $\mathbb{T}[j,i]=1$; else, $\mathbb{T}[j,i]=0$.
We now describe how we create a transformation matrix $\mathbb{T}$ to support strategy $\mathbb{A}$, as presented in getTransformationMatrix in
Algorithm 4. To create $\mathbb{T}$, getTransformationMatrix starts with the
first row of $\mathbb{A}$ (line 2), then iterates through the remaining rows
(line 3) updating the transformation matrix $\mathbb{T}$ as needed. If row $i$
is disjoint from all buckets, we simply add this row as a new bucket (line 4).
Otherwise, we construct a new bucket matrix $\mathbb{T}^{\prime}$. To do this,
we first copy all rows of $\mathbb{T}$ that do not intersect with the current
row of $\mathbb{A}$ (line 7). Then, for the buckets that do intersect, we
remove the intersection from that bucket and add a new bucket containing the
intersection (line 9). Finally, if the row of $\mathbb{A}$ is a super-set of
some buckets, we add the part of the row that is not covered by the buckets as
a new bucket (line 10).
The transformStrategy function in Algorithm 4 transforms $\mathbb{A}$ to
$\mathbf{A}$, using the transformation matrix $\mathbb{T}$ to determine which
buckets are used for each query. We first initialize $\mathbf{A}$ to be the
zero matrix (line 16). For row $i$ of $\mathbb{A}$ and row $j$ of
$\mathbb{T}$, we compute whether bucket $j$ is needed to answer row $i$ by
checking if $\mathbb{T}[j]\subseteq\mathbb{A}[i]$ (line 18). If bucket $j$ is
needed, we set the corresponding entry of $\mathbf{A}$ to $1$ (line 19).
Algorithm 4 Full-rank transformer (Section 5.1.2)
1:function getTransformationMatrix($\mathbb{A}$)
2: $\mathbb{T}=\\{\mathbb{A}[0]\\}$
3: for $i$ in range(1,$|\mathbb{A}|$) and $\mathbb{A}[i]\notin\mathbb{T}$ do
4: if $\mathbb{t}\cdot\mathbb{A}[i]=0$ for all $\mathbb{t}\in\mathbb{T}$ then
5: $\mathbb{T}\leftarrow\mathbb{T}\cup\\{\mathbb{A}[i]\\}$ $\triangleright$
Add a disjoint bucket $\mathbb{A}[i]$
6: else
7:
$\mathbb{T}^{\prime}\leftarrow\\{\mathbb{t}\in\mathbb{T}~{}|~{}\mathbb{t}\cdot\mathbb{A}[i]=0\\}$
8: for $\mathbb{t}$ in $\mathbb{T}$ and $\mathbb{t}\cdot\mathbb{A}[i]\neq 0$
do
9:
$\mathbb{T}^{\prime}\leftarrow\mathbb{T}^{\prime}\cup(\mathbb{t}\cap\mathbb{A}[i])\cup(\mathbb{t}-\mathbb{A}[i])$
10:
$\mathbb{T}^{\prime}\leftarrow\mathbb{T}^{\prime}\cup(\mathbb{A}[i]-\sum_{\mathbb{t}\in\mathbb{T}^{\prime}\wedge\mathbb{t}\cdot\mathbb{A}[i]\neq
0}\mathbb{t})$
11: $\mathbb{T}=\mathbb{T}^{\prime}$
12: return $\mathbb{T}$
13:
14:function transformStrategy($\mathbb{A}$)
15: $\mathbb{T}\leftarrow$ getTransformationMatrix($\mathbb{A}$)
16: Initialize $\mathbf{A}$ as a $|\mathbb{A}|\times|\mathbb{T}|$ zero-valued
matrix
17: for $i$ in range($|\mathbb{A}|$) and $j$ in range($|\mathbb{T}|$) do
18: if $\mathbb{T}[j]\subseteq\mathbb{A}[i]$ then
19: Set $\mathbf{A}[i,j]=1$ $\triangleright$ Bucket $j$ is contained in query $i$
20: return $\mathbf{A},\mathbb{T}$
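Both FRT functions can be sketched compactly by representing each query and bucket as the set of domain points it covers (our own rendering of Algorithm 4; the bucket ordering below is one valid choice):

```python
def get_transformation_matrix(A):
    """Partition the domain into buckets that support every row of A."""
    T = [set(A[0])]
    for row in map(set, A[1:]):
        if row in T:
            continue                          # skip rows already in T
        if all(not (row & t) for t in T):
            T.append(row)                     # disjoint row becomes a new bucket
            continue
        Tp = [t for t in T if not (row & t)]  # keep buckets untouched by row
        covered = set()
        for t in T:
            if row & t:                       # split each intersecting bucket
                if t - row:
                    Tp.append(t - row)
                Tp.append(t & row)
                covered |= t & row
        if row - covered:                     # leftover part of the row
            Tp.append(row - covered)
        T = Tp
    return T

def transform_strategy(A):
    """Rewrite A over the buckets; a bucket is fully inside or outside each query."""
    T = get_transformation_matrix(A)
    full_rank = [[1 if t <= set(row) else 0 for t in T] for row in A]
    return full_rank, T

# Queries as point sets over [0, 8), matching the raw matrix of the example.
A2 = [{2, 3}, {4, 5}, {5}, {6}]
M, T = transform_strategy(A2)
assert T == [{2, 3}, {4}, {5}, {6}]
assert M == [[1, 0, 0, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```

The buckets reproduce the transformation matrix $\mathbb{T}$ of the example below, and the transformed matrix is full rank: every query is the disjoint union of the buckets marked in its row.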
###### Example 5.2.
We consider $\mathbb{A}_{2}$ from Example 3.1. The domain vector $\mathbb{x}$
consists of the leaves of the tree depicted in Figure 3. We get the following
raw matrix form for $\mathbb{A}_{2}$.
$\mathbb{A}_{2}=\begin{bmatrix}0&0&1&1&0&0&0&0\\\ 0&0&0&0&1&1&0&0\\\
0&0&0&0&0&1&0&0\\\
0&0&0&0&0&0&1&0\end{bmatrix},\mathbf{A}_{2}=\begin{bmatrix}1&0&0&0\\\
0&1&1&0\\\ 0&0&1&0\\\ 0&0&0&1\end{bmatrix}$
We generate the full-rank form $\mathbf{A}_{2}$ above using $\mathbb{T}$:
$\mathbb{T}=\begin{bmatrix}0&0&1&1&0&0&0&0\\\ 0&0&0&0&1&0&0&0\\\
0&0&0&0&0&1&0&0\\\ 0&0&0&0&0&0&1&0\end{bmatrix}$
###### Theorem 5.1.
Given a global strategy $\mathbb{A}^{*}$ in a $k$-ary tree structure, and an
instant strategy $\mathbb{A}\subseteq\mathbb{A}^{*}$, transformStrategy
outputs a strategy $\mathbf{A}$ that is full rank and supports $\mathbb{A}$.
The ST module finally outputs $\mathbb{A}$, $\mathbf{A}$, as well as the transformation matrix $\mathbb{T}$, as it can be used to transform $\mathbb{W}$. We use the full-rank versions $\mathbf{W}$,
$\mathbf{A}$ for all invocations of the matrix mechanism (i.e. computing
$\mathbf{W}\mathbf{A}^{+}$).
### 5.2. Strategy Expander
We recall that our goal with CacheDP is to use cached strategy responses, in
order to save privacy budget on new strategy queries. Section 4 shows that MMM
achieves this goal by directly reusing accurate strategy responses from the
cache for the basic instant matrix, i.e., by selecting
$\mathbb{F}\subseteq\mathcal{C}\cap\mathbb{A}$. In this strategy expander (SE)
module, we provide efficient heuristics to include _additional_ cached
strategy entries out of $\mathcal{C}-\mathbb{A}$, to $\mathbb{A}$ to save more
privacy budget.
###### Example 5.3.
Consider the cache structure $\mathcal{C}_{\mathbb{A}^{*}}$ in Figure 3 and a
new workload $\mathbb{W}_{1}=\\{[0,1),[0,2),[2,4)\\}$ and so
$\mathbb{A}_{1}=\\{\mathbb{A}^{*}_{[0,1)}$ $,\mathbb{A}^{*}_{[1,2)},$
$\mathbb{A}^{*}_{[2,4)}\\}$. The cache includes entries for
$\mathbb{A}^{*}_{[0,1)},\mathbb{A}^{*}_{[1,2)}$ at noise parameter $b$, as
well as $\mathbb{A}^{*}_{[0,4)}$ at $4b$. The MMM module decides to reuse the
first two cache entries, and pay for $\mathbb{A}^{*}_{[2,4)}$ at $5b$,
resulting in the noise parameter vector $\mathbf{b}_{1}$, as depicted in
Figure 6. The SE problem is deciding which cached responses (such as
$\mathbb{A}^{*}_{[0,4)}$) can be added to the strategy to reduce its cost.
Consider a strawman solution to choosing cache entries: we simply add all
strategy queries from $\mathcal{C}-\mathbb{A}$ to $\mathbb{A}$, in order to
obtain an expanded strategy $\mathbb{A}_{e}$. Prior work by Li et al. (Li et
al., 2015, Theorem 6) suggests that adding more queries to $\mathbf{A}$ always
reduces the error of the matrix mechanism. However, their result hinges on the
assumption that all strategy queries are answered using i.i.d draws from the
_same_ Laplace distribution (Li et al., 2015). Our cached strategy noisy
responses can be drawn at different noise parameters in the past, and thus Li
et al.’s result does not hold. In our case, the error term for the expanded
strategy is given by:
$\mathbf{W}\mathbf{A}_{e}^{+}\text{diag}(\mathbf{b}_{e})$ (Proposition 4.2).
In Figure 6, we present a counterexample for Li et al.’s result.
###### Example 5.4.
Continuing Example 5.3, we expand $\mathbf{A}_{1}$ to $\mathbf{A}_{1e}$ by
adding a row $\mathbb{A}^{*}_{[0,4)}$. In Figure 6, we compute the
$\alpha^{2}$-expected error using both $\mathbf{A}_{1}$ and $\mathbf{A}_{1e}$
and find that $\mathbf{A}_{1e}$ has a larger error term.
We can see that the strawman solution can lead to a strategy with an increased
error term. Importantly, this figure shows that adding a strategy query
results in changed coefficients in $\mathbf{W}\mathbf{A}_{e}^{+}$, that is,
this added query changes the weight with which noisy responses to the original
strategy queries are used to form the workload response. The added strategy
query response must also be accurate, since adding a large, cached noise
parameter to $\mathbf{b}_{e}$ will also likely increase the magnitude of the
error term (recall the example in Figure 4).
In an optimal solution to this problem, one would have to consider adding each combination of cache entries from $\mathcal{C}-\mathbb{A}$, and then, for each candidate $\mathbb{A}_{e}$, evaluate this error term and compare it to the error for the original strategy. This induces an exponentially large search space of $O(2^{|\mathcal{C}|})$ possible solutions for $\mathbb{A}_{e}$. We provide an example in Figure 6.
Instead of navigating this large search space for $\mathbb{A}_{e}$, we propose
a series of efficient heuristics to obtain a greedy solution to this problem,
as presented in Algorithm 5. In designing our algorithm, we have three goals:
* (1)
Search space: Reduce the search space from $O(2^{|\mathcal{C}|})$ to
$O(|\mathcal{C}|)$.
* (2)
Efficiency: Ensure that the additional strategy rows do not significantly
increase the run-time of CacheDP.
* (3)
Greediness: Select strategy rows that are most likely to reduce the privacy
budget from that for MMM ($\epsilon_{\mathbf{P}}$).
We achieve the first goal above by conducting a single lookup over cache
entries in $\mathcal{C}-\mathbb{A}$ (Line 3), which would only incur a worst-
case complexity of $O(|\mathcal{C}|)$. We limit the number of selected cached
strategy rows to $\lambda$ (Line 4), thereby achieving our second goal of
efficiency. Our greediness heuristics to select a strategy query are based on
the two aforementioned factors that impact the workload error term, namely,
the _accuracy_ of its cached noisy response, and how the noisy response is
_related_ to noisy responses to the original strategy.
First, we must ensure that the strategy queries selected from
$\mathcal{C}-\mathbb{A}$ are accurate enough. Before conducting our cache
lookup, we sort our cache entries in increasing order of the noise parameter,
therefore our algorithm greedily prefers more accurate cache entries. Recall
that the MMM.estimatePrivacyBudget interface outputs the noise parameter
$b_{\mathbf{P}}$. Just as we used $b_{\mathbf{P}}$ to compute $\mathbf{F}$, we
can also use it to select cache entries for $\mathbb{A}_{e}$ that are at least
as accurate as other entries in $\mathbf{F}$. These accurate cached responses
will likely improve the accuracy of the workload response. Thus, out of cache
entries in $\mathcal{C}-\mathbb{A}$, we only consider cache entries whose
noise parameter is lower than $b_{\mathbf{P}}$ (Line 4).
$\mathbf{W}_{1}=\begin{bmatrix}1&0&0\\\ 1&1&0\\\ 0&0&1\end{bmatrix},\
\mathbf{A}_{1}=I_{3}=\begin{bmatrix}1&0&0\\\ 0&1&0\\\ 0&0&1\end{bmatrix},\
\mathbf{b}_{1}=\begin{bmatrix}b\\\ b\\\ 5b\end{bmatrix}$
$\mathbf{A}_{1e}=\left[\begin{array}[]{ccc}1&0&0\\\ 0&1&0\\\ 0&0&1\\\
\hline\cr 1&1&1\end{array}\right],\
\mathbf{b}_{1e}=\left[\begin{array}[]{c}b\\\ b\\\ 5b\\\ \hline\cr
4b\end{array}\right]$
$\mathbf{W}_{1}\mathbf{A}_{1}^{+}diag(\mathbf{b}_{1})=\left[\begin{array}[]{rrr}1.00\,b&0&0\\\
1.00\,b&1.00\,b&0\\\ 0&0&5.00\,b\end{array}\right]$
$\mathbf{W}_{1}\mathbf{A}_{1e}^{+}diag(\mathbf{b}_{1e})=\left[\begin{array}[]{rrrr}0.750\,b&-0.250\,b&-1.25\,b&1.00\,b\\\
0.500\,b&0.500\,b&-2.50\,b&2.00\,b\\\
-0.250\,b&-0.250\,b&3.75\,b&1.00\,b\end{array}\right]$
$\|\mathbf{W}_{1}\mathbf{A}_{1}^{+}diag(\mathbf{b}_{1})\|=28b^{2},\
\|\mathbf{W}_{1}\mathbf{A}_{1e}^{+}diag(\mathbf{b}_{1e})\|=29.1b^{2}$
Figure 6. An input instant strategy $\mathbf{A}_{1}$ is expanded to a
strategy $\mathbf{A}_{1e}$. Using their noise parameter vectors
($\mathbf{b}_{1},\mathbf{b}_{1e}$), we can see that $\mathbf{A}_{1e}$ has an
error term of larger magnitude.
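The two error terms in Figure 6 can be reproduced numerically. A minimal sketch using NumPy’s pseudoinverse, taking $b=1$ (the reported quantity is the squared Frobenius norm of the error matrix):

```python
import numpy as np

W1 = np.array([[1.0, 0, 0], [1, 1, 0], [0, 0, 1]])
A1 = np.eye(3)
b1 = np.array([1.0, 1, 5])                # noise parameters, in units of b
A1e = np.vstack([np.eye(3), [1, 1, 1]])   # expanded strategy: one extra row
b1e = np.array([1.0, 1, 5, 4])

err1 = np.sum((W1 @ np.linalg.pinv(A1) @ np.diag(b1)) ** 2)
err1e = np.sum((W1 @ np.linalg.pinv(A1e) @ np.diag(b1e)) ** 2)
assert round(err1, 6) == 28.0             # the 28 b^2 reported in Figure 6
assert abs(err1e - 29.125) < 1e-6         # the reported 29.1 b^2, to 3 digits
```

The expansion raises the error here because the extra row changes the coefficients in $\mathbf{W}\mathbf{A}_{e}^{+}$ and brings in a response at the larger noise parameter $4b$.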
Second, our hierarchical global strategy $\mathbb{A}^{*}$ structures cache
entries, and induces relations between the cached noisy responses. The
constrained inference problem focuses on minimizing the error term for
multiple noisy responses, while following consistency constraints among them,
as described by Hay et al. (Hay et al., 2010). For example, if we add the
strategy queries corresponding to the siblings and parent nodes of an existing
query in $\mathbb{A}$, we obtain an additional consistency constraint which
tends to reduce error. However, if we only added the sibling node, we would
not have seen as significant (if any) improvement. Thus, our second greedy
heuristic, in line 6, ensures that each query $\mathbb{a}$ added to
$\mathbb{A}_{e}$ is a parent or a child of an existing query
$\mathbb{a}^{\prime}\in\mathbb{A}$.
The SE algorithm generates an expanded strategy $\mathbb{A}_{e}$ and
transforms it to its full-rank form $\mathbf{A}_{e}$ (Line 8). The privacy
budget for $\mathbf{A}_{e}$ is estimated using the MMM.estimatePrivacyBudget
interface. We encapsulate SE as a module rather than integrate it with MMM,
since our heuristics might fail and $\mathbf{A}_{e}$ might cost a higher
privacy budget than the $\mathbf{A}$ used by MMM (as illustrated in Example
5.4). Since Algorithm 1 chooses to run the answerWorkload interface for the
module and strategy with the lowest privacy cost, in the above case,
$\mathbf{A}_{e}$ is simply not used. We evaluate the success of our heuristics
both experimentally and theoretically in Appendix D.
Algorithm 5 Strategy Expander (SE) (Section 5.2)
1:function generateExpandedStrategy($\mathbb{A}$, $\mathcal{C}$,
$b_{\mathbf{P}}$)
2: $\mathbb{A}_{e}\leftarrow\mathbb{A}$
3: for $(\mathbb{a},b,\tilde{y},t)\in(\mathcal{C}-\mathbb{A})$ in an ascending
order of $b$ do
4: if $|\mathbb{A}_{e}|\geq|\mathbb{A}|+\lambda$ or $b>b_{\mathbf{P}}$ then
5: Break
6: if $\mathbb{a}^{\prime}\cdot\mathbb{a}\neq 0$ for some
$\mathbb{a}^{\prime}\in\mathbb{A}$ then
7: $\mathbb{A}_{e}\leftarrow\mathbb{A}_{e}\cup\\{\mathbb{a}\\}$
8: $(\mathbf{A}_{e},\mathbb{T}_{e})\leftarrow$
ST.transformStrategy($\mathbb{A}_{e}$) $\triangleright$ Section 5.1
9: return $\mathbb{A}_{e},\mathbf{A}_{e},\mathbb{T}_{e}$
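The greedy selection in Algorithm 5 can be sketched as follows (our own illustration; queries are point sets, cache entries are (query, noise) pairs, and $\lambda$ bounds the number of added rows):

```python
def generate_expanded_strategy(A, cache, b_P, lam):
    """Greedily add accurate, related cached queries to the instant strategy A."""
    Ae = list(A)
    candidates = [(q, b) for q, b in cache if q not in A]
    for q, b in sorted(candidates, key=lambda e: e[1]):  # most accurate first
        if len(Ae) >= len(A) + lam or b > b_P:
            break                        # hit the lam bound, or entry too noisy
        if any(q & a for a in A):        # keep only queries related to A
            Ae.append(q)
    return Ae

# A covers [0,1), [1,2), [2,4); the cache holds [0,4) at noise 4 and [4,8) at noise 2.
A = [frozenset({0}), frozenset({1}), frozenset({2, 3})]
cache = [(frozenset(range(0, 4)), 4.0), (frozenset(range(4, 8)), 2.0)]
Ae = generate_expanded_strategy(A, cache, b_P=5.0, lam=2)
assert frozenset(range(0, 4)) in Ae      # accurate parent of existing queries: added
assert frozenset(range(4, 8)) not in Ae  # disjoint from A: rejected
```

Note that $[4,8)$ is rejected despite being the most accurate entry, because it overlaps no query in $\mathbb{A}$ and so contributes no consistency constraint.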
### 5.3. Proactive Querying
The proactive querying (PQ) module is an optional module for MMM. The MMM
obtains fresh noisy responses only for the paid strategy matrix $\mathbf{P}$,
and inserts them into the cache. The goal of the PQ module is to proactively
populate the cache with noisy responses to a subset $\Delta\mathbb{P}$ out of
the remaining, non-cached strategy queries of the global strategy
$(\mathbb{A}^{*}-\mathcal{C}-\mathbb{P})$, where $\mathbb{P}$ corresponds to
the raw, non-full rank form of $\mathbf{P}$. Thus, we run the PQ module in the
function MMM.answerWorkload($\cdot$) after obtaining the paid strategy matrix
$\mathbf{P}$. Our cache-aware modules, including MMM, RP and SE, can use the
cached noisy responses to $\Delta\mathbb{P}$ to answer future instant strategy
queries. We wish to satisfy this goal without consuming any additional privacy
budget over the MMM.
We first motivate key constraints for the PQ algorithm. First, we do not
assume any knowledge of future workload query sequences. However, all future
workload queries will be transformed into instant strategy matrices, and our
cache-aware mechanisms will look up the cache for cached strategy rows. Second,
we also do not know the accuracy requirements for future workload queries.
Future workloads may be asked at different accuracy requirements than the
current workload. Thus, we choose to obtain responses to $\Delta\mathbb{P}$ at
the highest possible accuracy requirements without spending any additional
privacy budget over that required for $\mathbf{P}$ by MMM, which is
$\epsilon=\frac{\|\mathbf{P}\|_{1}}{b_{\mathbf{P}}}$. Our key insight is to
generate $\Delta\mathbb{P}\subseteq(\mathbb{A}^{*}-\mathcal{C}-\mathbb{P})$
such that $\|\mathbb{P}\cup\Delta\mathbb{P}\|_{1}=\|\mathbb{P}\|_{1}$.
Therefore, answering both instant strategies ($\mathbb{P}$ and
$\Delta\mathbb{P}$) with the Laplace mechanism using $b_{\mathbf{P}}$ costs no
more privacy budget than simply answering $\mathbb{P}$ at $b_{\mathbf{P}}$.
We present the function searchProactiveNodes for generating $\Delta\mathbb{P}$
in Algorithm 6. We formulate this algorithm in terms of the $k$-ary tree
representation of the global strategy $\mathbb{A}^{*}$, denoted by
$\mathcal{T}$. (We assume this is a directed tree with directed edges from the
root to leaves and all paths refer to paths from a node to its leaves.) For a
node $v\in\mathcal{T}$, we define a binary function
$\mathcal{M}_{\mathbb{P}}(v)$ to indicate whether its corresponding query $v$.query is in $\mathbb{P}$.
###### Definition 5.6.
We define the subtree norm of a node $v$ as the maximum number of marked nodes
across all paths $p$ from node $v$-to-leaf in the subtree of $v$, i.e.,
(9) $\mathcal{S}_{\mathbb{P}}(v)=\max_{p\in\text{subtree}(v)}\sum_{u\in p}\mathcal{M}_{\mathbb{P}}(u)$
The subtree norm of a node can be computed recursively as the sum of the mark
function for that node and the maximum of the subtree norms of its children, if any. The proactive module first recursively computes the subtree
norm of each node before generating $\Delta\mathbb{P}$. This step requires a
single top-down traversal of the strategy decomposition tree. Given that the
children of each node have non-overlapping ranges, the subtree norm enables us
to define the $L_{1}$ norm of $\mathbb{P}$:
###### Lemma 5.7.
The $L_{1}$ norm of the $\mathbb{P}$ matrix is equal to the subtree norm of
the root of the tree with marked nodes corresponding to $\mathbb{P}$:
(10) $\mathcal{S}_{\mathbb{P}}(\mathcal{T}\text{.root})=\|\mathbb{P}\|_{1}$
Our proactive strategy generation function, searchProactiveNodes, is
presented in Algorithm 6. This function conducts a recursive top-down
traversal of the strategy decomposition tree (line 9), and outputs a list of
nodes to be fetched proactively into $\Delta\mathbb{P}$, such that the sum of
the number of marked nodes and proactively fetched nodes for each path in the
tree is at most $\|\mathbb{P}\|_{1}$.
###### Lemma 5.8.
The proactive strategy $\Delta\mathbb{P}$ generated by
searchProactiveNodes for an input $\mathbb{P}$ satisfies the condition:
(11) $\forall\text{ paths }p\in\mathcal{T},\sum_{v\in
p}\mathcal{M}_{\mathbb{P}\cup\Delta\mathbb{P}}(v)\leq\mathcal{S}_{\mathbb{P}}(\mathcal{T}\text{.root})=\|\mathbb{P}\|_{1}$
The second argument $r$ in the function searchProactiveNodes represents the
number of remaining nodes that can be fetched proactively for the subtree
originating at node $v$. Thus in the first call, we pass the root node for the
first argument, and the second argument is initially set to
$\|\mathbb{P}\|_{1}$. We decrement $r$ whenever we encounter a marked node
(line 2) or when we add a node to the proactive output list (line 5). In the
latter case, we require that the node is not cached and that $r$ is greater
than the subtree norm of the node, $\mathcal{S}_{\mathbb{P}}(v)$ or $s$, as
seen in line 4. This condition ensures that we can safely add node $v$ to the
proactive list, while achieving Equation (11). (We prove all PQ module lemmas
and theorems in Appendix A.4.)
###### Theorem 5.3.
Given a paid strategy matrix $\mathbb{P}$ Algorithm 6 outputs
$\Delta\mathbb{P}$ such that
$\|\mathbb{P}\cup\Delta\mathbb{P}\|_{1}=\|\mathbb{P}\|_{1}$.
Algorithm 6 Proactive Querying (PQ) (Section 5.3)
1:function searchProactiveNodes(node v, $r$, $\mathcal{C}$,
$\Delta\mathbb{P}$)
2: if $v$.query $\in\mathbb{P}$ then
3: $r\leftarrow r-1$
4: else if $v$.query $\notin\mathcal{C}$ and $v.s<r$ then
5: $\Delta\mathbb{P}\leftarrow$ $\Delta\mathbb{P}\cup\\{v.\text{query}\\}$
6: $r\leftarrow r-1$
7: if $r>0$ and $v$ has children then
8: for child $c$ of node $v$ do
9: searchProactiveNodes(node c, $r$, $\mathcal{C}$, $\Delta\mathbb{P}$)
10: return
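For concreteness, the recursion can be sketched in Python as follows; the Node encoding, string range labels, and function names are illustrative, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    query: str                      # range label, e.g. "[0,2)"
    s: int                          # subtree norm S_P(v): max marked nodes on any root-leaf path below v
    children: list = field(default_factory=list)

def search_proactive_nodes(v, r, marked, cached, delta_p):
    """Sketch of searchProactiveNodes (Algorithm 6). r is the remaining
    per-path budget; since it is passed by value, each root-leaf path is
    budgeted independently, which preserves the path-sum bound of the lemma."""
    if v.query in marked:                         # line 2: a marked node uses one unit
        r -= 1
    elif v.query not in cached and v.s < r:       # line 4: safe to fetch proactively
        delta_p.add(v.query)                      # line 5
        r -= 1                                    # line 6
    if r > 0:                                     # line 7: descend only while budget remains
        for c in v.children:                      # lines 8-9
            search_proactive_nodes(c, r, marked, cached, delta_p)
```

For instance, on a four-element domain with $\mathbb{P}=\\{[2,4),[3,4)\\}$ (so $\|\mathbb{P}\|_{1}=2$) and an empty cache, the call adds $[0,2)$, $[0,1)$, $[1,2)$ and $[2,3)$, after which every root-leaf path carries exactly two marked-or-proactive nodes.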
###### Example 5.0.
In Figure 3, we apply our proactive strategy generation function to
$\mathbb{P}_{2}=\\{\mathbb{A}^{*}_{[2,4)},\mathbb{A}^{*}_{[3,4)}\\}$ for
$\mathbb{W}_{2}$ in our example sequence. We annotate each node with the
values of $r$ and $v.s$ from line 4 in function searchProactiveNodes. The tree
nodes $\mathbb{A}^{*}_{[4,8)}$, $\mathbb{A}^{*}_{[0,2)}$,
$\mathbb{A}^{*}_{[0,1)}$, $\mathbb{A}^{*}_{[1,2)}$, $\mathbb{A}^{*}_{[2,3)}$
and $\mathbb{A}^{*}_{[7,8)}$ satisfy the condition $r>v.s$. All nodes other
than $\mathbb{A}^{*}_{[7,8)}$ are output into $\Delta\mathbb{P}_{2}$; the
latter node is excluded since it is cached from $\mathbb{A}_{1}$ for
$\mathbb{W}_{1}$. Note that $\Delta\mathbb{P}_{2}$ does not only consist of
disjoint query predicates. For example, $\mathbb{A}^{*}_{[0,2)}$ and
$\mathbb{A}^{*}_{[0,1)}$ overlap. However,
$\mathcal{S}_{\mathbb{P}_{2}}(\mathcal{T})=\mathcal{S}_{\mathbb{P}_{2}\cup\Delta\mathbb{P}_{2}}(\mathcal{T})=2$.
#### 5.3.1. Integration.
The MMM answerWorkload function perturbs $\Delta\mathbb{P}$ with the same
noise parameter as for $\mathbb{P}$, namely $b_{\mathbf{P}}$, to obtain the
noisy responses $\tilde{\mathbf{y}}_{\Delta\mathbb{P}{}}$. It then updates the
cache $\mathcal{C}$ with
$\\{(\mathbb{p^{\prime}},b_{\mathbf{P}},\tilde{y},t)|$
$\mathbb{p^{\prime}}\in\Delta\mathbb{P},\tilde{y}\in\tilde{\mathbf{y}}_{\Delta\mathbb{P}{}}\\}$.
We observe that we do not answer the analyst’s workload query $\mathbf{W}$
using $\tilde{\mathbf{y}}_{\Delta\mathbb{P}{}}$. Importantly, this is the
reason why we do not incorporate $\Delta\mathbb{P}$ in estimating
$b_{\mathbf{P}}$ in our MMM cache-aware cost estimation function. The PQ
module can also be used while the SE module is turned on. Algorithm 6 can also
be applied to multi-attribute strategies, as we discuss in Section 7.
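A minimal sketch of this cache update, with the entry layout $(\mathbb{p^{\prime}},b_{\mathbf{P}},\tilde{y},t)$ described above (the dictionary-based cache and function name are illustrative assumptions):

```python
import time

def update_cache_with_proactive(cache, delta_p, b_p, noisy_responses):
    """Store one entry (noise parameter, noisy response, timestamp) per
    proactively fetched strategy query, as in Section 5.3.1. All entries are
    perturbed with the same noise parameter b_P as the paid strategy."""
    t = time.time()
    for query, y in zip(delta_p, noisy_responses):
        cache[query] = (b_p, y, t)
    return cache
```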
## 6\. Relax Privacy Mechanism
Algorithm 7 Relax Privacy (RP) (Section 6)
1:function answerWorkload($\mathcal{C},\mathbf{A},\mathbf{W},\mathbb{x}$)
2: $\bm{\eta}_{o}\leftarrow\tilde{\mathbf{y}}_{o}-\mathbb{A}_{o}\mathbb{x}$
$\triangleright$ Old noise vector for $\mathbb{A}_{o}$.
3: $\bm{\eta}\leftarrow$ lapNoiseDown($\bm{\eta}_{o},b_{o},b$)
$\triangleright$ Koufogiannis et al. (Koufogiannis et al., 2016)
4: $\tilde{\mathbf{y}}\leftarrow\mathbb{A}_{o}\mathbb{x}+\bm{\eta}$
$\triangleright$ New noisy responses to $\mathbb{A}_{o}$
5: Update cache $\mathcal{C}_{\mathbb{A}^{*}}$ with
($\mathbb{A}_{o},b,\tilde{\mathbf{y}},t$=current time)
6: $\tilde{\mathbf{y}}^{\prime}\leftarrow\tilde{\mathbf{y}}$ for
$\mathbb{A}\subseteq\mathbb{A}_{o}$ $\triangleright$ New noisy responses to
$\mathbb{A}$
7: return $\mathbf{W}\mathbf{A}^{+}\tilde{y}^{\prime}$
8:
9:function
estimatePrivacyBudget($\mathcal{C},\mathbf{A},\mathbf{W},\alpha,\beta$)
10: $b\leftarrow$ MMM.estimatePrivacyBudget( $\mathcal{C}=\emptyset$,
$\mathbf{A}$, $\mathbf{W}$, $\alpha,\beta$)
11: $\mathcal{C}\leftarrow\\{\cdots(\mathbb{A}_{t}$, $\tilde{\mathbf{y}}_{t}$,
$b_{t})\\}$ $\triangleright$ Group queries in $\mathcal{C}$ by timestamp.
12:
$S_{RP}\leftarrow\\{\mathbb{A}_{j}\in\mathcal{C}_{t=j}\mid\mathbb{A}_{j}\supseteq\mathbb{A}\\}$
$\triangleright$ Keep only those $\mathbb{A}_{t}$ that contain $\mathbb{A}$
13: if $S_{RP}=\emptyset$ then
14: return “RP cannot run for this input $\mathbb{A}$.”
15:
$o=\underset{j}{\arg\min}\quad\epsilon_{RP,j}=\frac{\|\mathbb{A}_{j}\|_{1}}{b}-\frac{\|\mathbb{A}_{j}\|_{1}}{b_{j}}$
$\triangleright$ $\mathbb{A}_{o}$ has the lowest RP cost
16: return $b_{o},\quad\epsilon_{RP,o}$ $\triangleright$ Cached noise
parameter, RP cost for $\mathbb{A}_{o}$
When exploring a database, a data analyst may first ask a series of workloads
at a low accuracy (spending $\epsilon_{1}$), and then re-query the most
interesting workloads at a higher accuracy (spending
$\epsilon_{2}>\epsilon_{1}$). (The repeated workload can also be asked by a
different analyst.) The cumulative privacy budget spent by the MMM will be
$\epsilon_{1}+\epsilon_{2}$ due to sequential composition. The goal of the
Relax Privacy module is to spend less privacy budget than MMM on such repeated
workloads with higher accuracy requirements.
Koufogiannis et al. (Koufogiannis et al., 2016) refine a noisy response at a
smaller $\epsilon_{1}$, to a more accurate response at a larger
$\epsilon_{2}$, using only a privacy cost of $\epsilon_{2}-\epsilon_{1}$
(Koufogiannis et al., 2016; Peng et al., 2013). However, their framework only
operates with the simple Laplace mechanism. Thus, we achieve the
aforementioned goal by closely integrating their framework (Koufogiannis et
al., 2016) with the matrix mechanism and our DP cache. This design of the RP
module meets a secondary goal, namely, our RP module handles not only repeated
analyst-supplied workloads, but also different workloads that result in
identical strategy matrices. We exploit the fact that other modules, such as
the PQ module, also operate over the matrix mechanism and DP cache. Therefore,
our RP module can be seen as generalizing these frameworks to operate over
more workload sequences.
### 6.1. Estimate Privacy Budget Interface
We first describe the estimatePrivacyBudget interface for RP; as with the
MMM, it estimates the privacy budget required by the RP mechanism. The privacy
budget required for the RP mechanism is defined as the difference between the
new or _target_ privacy budget for the output strategy noisy responses
$\tilde{\mathbf{y}}$ to meet the accuracy guarantees, and the old or _cached_
privacy budget ($\epsilon_{\mathcal{C}}$) that cached responses to
$\mathbb{A}$ were obtained at. The target noise parameter is the noise
parameter required _by the cacheless MM_ to achieve an
$(\alpha,\beta)$-accuracy guarantee for $\mathbf{W},\mathbf{A}$. It can be
obtained by running the estimatePrivacyBudget of MMM with an empty cache (line
10). Then the main challenge of this interface is to choose which past
strategy entries should be relaxed by the RP mechanism, based on the smallest
RP cost as defined above.
Each strategy query $\mathbb{a}\in\mathbb{A}$ may be cached at a different
timestamp. Relaxing each such set of cache entries across different
timestamps, through sequential composition, requires summing over the RP cost
for each set, and can thus be very costly. For simplicity, we design the RP
mechanism to relax the entirety of a past strategy matrix, rather than picking
and choosing strategy entries across different timestamps. Our RP cache lookup
condition groups cache entries by their timestamps to form cached strategy
matrices (Line 11), and identifies all candidate matrices that include the
entire input strategy (Line 12). The inclusion condition (instead of an
equality) allows proactively fetched strategy entries to be relaxed, at no
additional cost to relaxing $\mathbb{A}_{j}$. If answering $\mathbb{A}$ using
the cache requires: (1) composing cache entries spanning multiple timestamps,
or (2) composing cache entries at one timestamp and paid (freshly noised)
strategy queries at the current timestamp, then the RP cost estimation
interface simply returns nothing (Line 13) and CacheDP will instead use
another module.
###### Example 6.0.
Suppose that the workloads shown in Figure 3 have been asked in the past at
$\alpha_{1}$, and have been answered through MMM, as discussed in Example 5.9.
Now $\mathbb{W}_{3}=\\{[3,8)\\}$ is asked at $\alpha_{3}<\alpha_{1}$. We have
$\mathbb{A}_{3}=\\{[3,4),[4,8)\\}$. Thus
$\mathbb{A}_{3}\subset\mathbb{P}_{2}\cup\Delta\mathbb{P}_{2}$, and the RP
module relaxes all of
$\mathbb{A}_{3,RP}=\mathbb{P}_{2}\cup\Delta\mathbb{P}_{2}$ from $\alpha_{1}$
to $\alpha_{3}$.
For each candidate cached strategy matrix $\mathbb{A}_{j}$, we compute the RP
cost to relax its cached noisy response vector $\tilde{\mathbf{y}}_{j}$ from
${b_{j}}$ to the new target $b$ as $\epsilon_{RP,j}$. Lastly, the RP module
chooses to relax the candidate past strategy $\mathbb{A}_{j}$ with the minimum
RP cost (Line 15). Importantly, we only seek to obtain accuracy guarantees
over $\mathbf{W}\mathbf{A}^{+}$, and not over $\mathbf{W}\mathbf{A}_{o}^{+}$,
and thus we compute the target noise parameter $b$ based on the input strategy
matrix $\mathbb{A}$ (Line 10), rather than the optimal cached candidate
$\mathbb{A}_{o}$. For the chosen cached strategy matrix $\mathbb{A}_{o}$, we
return the cached noise parameter $b_{o}$ and the RP cost $\epsilon_{RP,o}$.
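The candidate selection in lines 11-15 can be sketched as below; the timestamp-keyed dictionary and the row-list representation of strategy matrices are illustrative assumptions, not the paper's data structures.

```python
import numpy as np

def rp_cost(A_j, b_target, b_j):
    """RP cost of relaxing cached responses to A_j from b_j to target b
    (Line 15): eps_RP,j = ||A_j||_1 / b - ||A_j||_1 / b_j."""
    l1 = np.abs(np.array(A_j, dtype=float)).sum(axis=0).max()  # L1 norm: max column abs sum
    return l1 / b_target - l1 / b_j

def choose_relax_candidate(cached_by_timestamp, A_rows, b_target):
    """Keep only cached strategies that contain every row of A (Line 12),
    then return (b_o, eps_RP,o) for the minimum-cost candidate, or None if
    no single timestamp covers A (Line 13)."""
    def contains(A_j):
        return all(any(np.array_equal(r, row) for r in A_j) for row in A_rows)
    candidates = {t: v for t, v in cached_by_timestamp.items() if contains(v[0])}
    if not candidates:
        return None
    t_o = min(candidates,
              key=lambda t: rp_cost(candidates[t][0], b_target, candidates[t][1]))
    A_o, b_o = candidates[t_o]
    return b_o, rp_cost(A_o, b_target, b_o)
```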
### 6.2. Answer Workload Interface
The RP answerWorkload interface is a straightforward application of
Koufogiannis et al.’s noise down module. The estimatePrivacyBudget interface
records the following parameters: the optimal cached or old strategy to relax
($\mathbb{A}_{o}$), its cached noisy response ($\tilde{y}_{o}$), the cached
noise parameter ($b_{o}$) and the target noise parameter ($b$). We first
compute the Laplace noise vector used in the past $\bm{\eta}_{o}$, by
subtracting the ground truth for the cached old strategy
$\mathbb{A}_{o}\mathbb{x}$ from the cached noisy response $\tilde{y}_{o}$
(line 2). We can now supply Koufogiannis et al.’s noise down algorithm with
the old noise vector $\bm{\eta}_{o}$, the cached noise parameter $b_{o}$, and
the target noise parameter $b$. This algorithm draws noise from a correlated
noise distribution, and outputs a new, more accurate noise vector at noise
parameter $b$ (line 3) (Koufogiannis et al., 2016, Algorithm 1). We can simply
compute the new noisy response vector to $\tilde{y}_{o}$ using the ground
truth and the new noise vector (line 4). We then update the cache with the
new, more accurate noisy responses, which can be used to answer future
strategy queries (line 5). Finally, we do not need the noisy strategy
responses to $\mathbb{A}_{o}-\mathbb{A}$ to answer the data analyst’s
workload, and so we filter them out to simply obtain new noisy responses
$\tilde{y}^{\prime}$ to $\mathbb{A}$ (line 6). We use $\tilde{y}^{\prime}$ to
compute the workload response and return it to the analyst (line 7).
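The steps above can be sketched as follows. Note that lap_noise_down is a simplified stand-in: Koufogiannis et al.'s Markov kernel keeps the old noise with probability roughly $(b/b_{o})^{2}$ and otherwise samples from a correlated distribution, whereas this sketch redraws fresh Laplace noise in that branch, preserving the marginal noise scale but not the exact coupling (and hence not the privacy accounting).

```python
import numpy as np

rng = np.random.default_rng(0)

def lap_noise_down(eta_old, b_old, b_new):
    """Simplified stand-in for the noise-down step (b_new <= b_old).
    Illustrative only: the resample branch should draw from the correlated
    distribution of Koufogiannis et al., not fresh Laplace noise."""
    keep = rng.random(eta_old.shape) < (b_new / b_old) ** 2
    fresh = rng.laplace(scale=b_new, size=eta_old.shape)
    return np.where(keep, eta_old, fresh)

def rp_answer_workload(W, A_o_rows, A_rows, x, y_tilde_old, b_old, b_new):
    """Lines 2-7 of Algorithm 7: recover the old noise vector, relax it,
    rebuild the noisy responses, and answer W over the sub-strategy A."""
    A_o = np.array(A_o_rows, dtype=float)
    eta_old = y_tilde_old - A_o @ x                      # line 2
    eta = lap_noise_down(eta_old, b_old, b_new)          # line 3
    y_tilde = A_o @ x + eta                              # line 4
    keep = [i for i, r in enumerate(A_o_rows)            # line 6: rows of A_o that are in A
            if any(np.array_equal(r, a) for a in A_rows)]
    A = np.array([A_o_rows[i] for i in keep], dtype=float)
    return np.array(W, dtype=float) @ np.linalg.pinv(A) @ y_tilde[keep]  # line 7
```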
## 7\. Multiple attribute workloads
We extend CacheDP to work over queries with multiple attributes. We define a
single data vector $\mathbb{x}$ over $\text{dom}(\mathcal{R})$ as the cross
product of $d$ single-attribute domain vectors. It represents the frequency of
records for each value of a marginal over all attributes. However,
$|\mathbb{x}|$ and thus $|\mathcal{C}|$ could be very large due to the cross
product.
We observe that not all attributes may be referenced by analysts in their
workloads. Suppose that each workload includes marginals over a set of
attributes $S_{\mathcal{A}}\subseteq\mathcal{R}$. That is, each marginal
$\mathbb{w}\in\mathbb{W}$ includes $|S_{\mathcal{A}}|=k\leq d$ RCQs, with one
RCQ over each attribute ($\mathbb{w}=\prod_{j}^{k}\mathbb{w}_{j}$ ). These
workloads would share a common, smaller domain and hence a single data vector
$\mathbb{x}_{S_{\mathcal{A}}}=\bigotimes_{i=1}^{k}\mathbb{x}_{\mathcal{A}_{i}}$.
Similarly, instead of creating a large cache,
we create a set of smaller caches, with one cache
$\mathcal{C}_{S_{\mathcal{A}}}$ for each _unique_ combination of attributes
$S_{\mathcal{A}}$ encountered in a workload _sequence_. Cache entries can thus
be reused across workloads that span the same set of attributes. The entries
of each smaller cache $\mathcal{C}_{S_{\mathcal{A}}}$ are indexed by its
associated domain vector $\mathbb{x}_{S_{\mathcal{A}}}$.
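A sketch of constructing such a joint data vector and the per-combination caches (function and variable names are ours, for illustration):

```python
import numpy as np

def joint_data_vector(records, dom_sizes):
    """Joint frequency vector x_{S_A} over dom(A_1) x ... x dom(A_k).
    Entry order follows the Kronecker (row-major) convention of the
    tensor-product notation: value combination (i_1, ..., i_k) sits at
    index ravel_multi_index((i_1, ..., i_k), dom_sizes)."""
    x = np.zeros(int(np.prod(dom_sizes)), dtype=int)
    for rec in records:                  # rec: one value per attribute in S_A
        x[np.ravel_multi_index(rec, dom_sizes)] += 1
    return x

# One small cache per unique attribute combination encountered in the sequence:
caches = {}
def cache_for(attr_names):
    return caches.setdefault(frozenset(attr_names), {})
```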
###### Example 7.1.
Consider three attributes with $dom(\mathcal{A}_{1})=[0,4)$,
$dom(\mathcal{A}_{2})=[0,8)$, and $dom(\mathcal{A}_{3})=[0,2)$. The analyst
supplied a workload sequence of RCQ over different sets of attributes:
$\mathbb{W}_{1}$ over $S_{1}=\\{\mathcal{A}_{1},\mathcal{A}_{2}\\}$,
$\mathbb{W}_{2}$ over $S_{2}=\\{\mathcal{A}_{2},\mathcal{A}_{3}\\}$ and then
$\mathbb{W}_{3}$ over $S_{3}=S_{1}=\\{\mathcal{A}_{1},\mathcal{A}_{2}\\}$.
Then $\mathbb{W}_{1}$ and $\mathbb{W}_{3}$ are answered using the same domain
vector
$\mathbb{x}_{S_{1}}=\mathbb{x}_{\mathcal{A}_{1}}\otimes\mathbb{x}_{\mathcal{A}_{2}}$
and a cache indexed over it, $\mathcal{C}_{S_{1}}$. However, $\mathbb{W}_{2}$
is answered using a different domain vector
$\mathbb{x}_{S_{2}}=\mathbb{x}_{\mathcal{A}_{2}}\otimes\mathbb{x}_{\mathcal{A}_{3}}$
and the corresponding cache $\mathcal{C}_{S_{2}}$.
Our cache-aware MMM, SE and RP modules can be extended trivially to the multi-
attribute case, since these modules would simply operate on the larger domain
vector. However, in order to generate $\mathbb{A}$, the ST module relies on a
$k$-ary strategy tree, corresponding to $\mathbb{A}^{*}$ for the single-
attribute case. Thus, we extend the ST and PQ modules by defining this global
strategy tree using marginals over multiple attributes. Our extended ST and PQ
modules serve as a proof-of-concept that other modules can be extended for
other problem domains. We detail the multi-attribute strategy tree generation
in our full paper (dpcacheextended). We also discuss handling complex SQL
queries, such as joins, in its future work section.
For a given workload $\mathbb{W}$ that spans a set of attributes
$S_{\mathcal{A}}$, we can use the single-attribute strategy tree
$\mathcal{T}_{i}$ for each attribute $\mathcal{A}_{i}\in S_{\mathcal{A}}$ to
construct a multi-attribute strategy tree $\mathcal{T}$ for $\mathbb{W}$, as
follows. Intuitively, $\mathbb{x}$ consists of marginals that are constructed
by taking each unit value in $dom(\mathcal{A}_{i})$, and forming a cross
product with each unit value in the $dom(\mathcal{A}_{i+1})$, and so on. Unit
values in $dom(\mathcal{A}_{i})$ are represented by leaf nodes on its strategy
tree $\mathcal{T}_{i}$, as illustrated in Figure 3. Thus, we form a two-
attribute strategy tree $\mathcal{T}$ over $\mathcal{A}_{1}$ and
$\mathcal{A}_{2}$, by attaching the tree $\mathcal{T}_{2}$ to each _leaf_ node
for the tree $\mathcal{T}_{1}$. (We can then simply extend this definition to
$k=\|S_{\mathcal{A}}\|$ attributes in $\mathbb{W}$.) Nodes on $\mathcal{T}$
either represent marginals over these attributes or sums of marginals. In
particular, the leaf nodes on $\mathcal{T}$ represent the unit value strategy
marginals over $\mathcal{A}_{1}$ and $\mathcal{A}_{2}$.
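Structurally, the construction can be sketched as a recursion over (label, children) tuples; this encoding is an illustrative assumption, not the paper's representation.

```python
def attach(t1, t2):
    """Two-attribute strategy tree: hang a copy of T2 beneath every leaf of
    T1. Trees are (label, children) tuples; a node's pair of labels along
    its root path identifies the marginal it represents."""
    label, children = t1
    if not children:                                  # leaf of T1
        return (label, [t2])
    return (label, [attach(c, t2) for c in children])
```

Extending to $k$ attributes is a fold: attach(attach(T1, T2), T3) hangs T3 beneath the leaves of the combined tree, since those are exactly the leaves of the T2 copies.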
We can thus decompose each workload marginal $\mathbb{w}$ in terms of nodes on
this tree $\mathcal{T}$. In the first step, we decompose each single-attribute
marginal $\mathbb{w}_{i}$ following our single-attribute decomposeWorkload
function (Algorithm 4) to get a set of single-attribute strategy nodes
$\mathbb{a}_{i}$. The second step is run for all but the last ($k$th)
attribute: in this step, we _further_ decompose $\mathbb{a}_{i}$ in terms of
the leaf nodes on tree $\mathcal{T}_{i}$, to obtain
$\mathbb{a}_{\text{leaf},i}$. In the final step, we compute the cross product
of the single-attribute strategy nodes $\mathbb{a}_{\text{leaf},i}$, over all
$i<k$, and a final cross product with the decomposition of the $k$th
attribute, $\mathbb{a}_{k}$, to form the strategy marginal $\mathbb{a}$ for
the workload marginal $\mathbb{w}$, as follows:
$\mathbb{a}=\left(\bigotimes_{i=1}^{k-1}\mathbb{a}_{\text{leaf},i}\right)\otimes\mathbb{a}_{k}$.
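The final cross product is a direct application of an iterated Cartesian product; a sketch with strategy nodes represented by their range labels (an illustrative simplification):

```python
from itertools import product

def cross_strategy(per_attribute_decomps):
    """Strategy marginal for a workload marginal: the cross product of the
    per-attribute strategy-node lists (the final step above)."""
    return [tuple(combo) for combo in product(*per_attribute_decomps)]
```

For two attributes decomposed into 2 and 3 nodes respectively, this yields the 6 strategy-marginal pairs.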
To determine the order in which to decompose the attributes, we first compute
the minimum granularity needed for each attribute to be able to answer the
workload. We then sort the attributes in ascending order of granularity. The
intuition is that this should result in the least amount of noise compounding
for the lowest attribute on the tree. Each decomposition is greedy in that we
only go down to the minimum granularity for the given workload and not to the
leaf nodes. This further reduces the noise compounding of our approach.
We design the above strategy transformer such that our PQ module can easily be
applied to the structure. The only difference is that the
generateProactiveStrategy algorithm is run over the strategy tree for the
combination of attributes in the workload ($S_{\mathcal{A}}$) rather than the
global tree structure. The PQ module can thus add queries for any subset of
attributes in $S_{\mathcal{A}}$, prioritizing nodes higher up in the tree,
which correspond to fewer attributes. In practice, this is
preferable as it focuses on expanding the areas an analyst has already shown
interest in rather than completely unrelated attributes.
###### Example 7.2.
Consider the first workload in the sequence in Example 7.1. It consists of one
marginal tuple over attributes $\mathcal{A}_{1}$ and $\mathcal{A}_{2}$:
$\mathbb{W}=([0,3)_{\mathcal{A}_{1}},[1,6)_{\mathcal{A}_{2}})$. We denote the
global strategy matrix for attribute $i$ as $\mathbb{A}^{*}_{i}$. We decompose
the RCQ for each attribute in the marginal, using our single-attribute
strategy generator. Thus:
$\mathbb{A}_{\mathcal{A}_{1}}=\\{\mathbb{A}^{*}_{1,[0,2)},\mathbb{A}^{*}_{1,[2,3)}\\}$
and
$\mathbb{A}_{\mathcal{A}_{2}}=\\{\mathbb{A}^{*}_{2,[1,2)},\mathbb{A}^{*}_{2,[2,4)},\mathbb{A}^{*}_{2,[4,6)}\\}$.
We then construct the two-attribute strategy marginals as the cross product of
these RCQs:
$\mathbb{A}=\mathbb{A}_{\mathcal{A}_{1}}\otimes\mathbb{A}_{\mathcal{A}_{2}}$
$=\\{(\mathbb{A}^{*}_{1,[0,2)},\mathbb{A}^{*}_{2,[1,2)}),$
$(\mathbb{A}^{*}_{1,[0,2)},\mathbb{A}^{*}_{2,[2,4)}),$
$(\mathbb{A}^{*}_{1,[0,2)},\mathbb{A}^{*}_{2,[4,6)}),$
$(\mathbb{A}^{*}_{1,[2,3)},\mathbb{A}^{*}_{2,[1,2)}),$
$(\mathbb{A}^{*}_{1,[2,3)},\mathbb{A}^{*}_{2,[2,4)}),$
$(\mathbb{A}^{*}_{1,[2,3)},\mathbb{A}^{*}_{2,[4,6)})\\}$
## 8\. Evaluation
Figure 7. Average cumulative privacy budget comparison between CacheDP and
baselines (APEx, APEx with cache, Pioneer). Panels: (a) BFS - Adult (Age
attribute); (b) DFS - Adult (Country attribute); (c) RRQ - Synthetic (1D);
(d) BFS - Taxi (Lat, Long attributes); (e) DFS - Taxi (Lat, Long attributes);
(f) IDEBench - Planes (multiple attributes).
We conduct a thorough experimental evaluation of CacheDP. We focus on our
primary goal, namely, reducing the cumulative privacy budget of interactive
workload sequences over baseline solutions (Section 8.2.1), while still
meeting the accuracy requirements (Section 8.2.2) and incurring low overheads
(Section 8.2.3). We assess how often each module is used, and quantify its
impact on the privacy budget, through our ablation study in Section 8.3.
### 8.1. Experimental Setup
#### 8.1.1. Baseline Solutions
We consider a number of baseline, accuracy-aware solutions from the literature
to compare with CacheDP.
APEx (Ge et al., 2019): APEx is a state-of-the-art accuracy-aware interactive
DP query engine. APEx consumes accuracy requirements in the form of an
$(\alpha,\beta)$ bound (see Definition 2.2). APEx treats all workload queries
separately and has no cache of previous responses.
APEx with cache: We simulate APEx with a naive cache of all past workloads and
their responses. If a client repeats a workload asked in the past by any
client, with the same or a lower accuracy requirement, we do not count its
privacy budget towards the cumulative budget spent by APEx with cache.
Pioneer (Peng et al., 2013): Pioneer is a DP query engine that incorporates a
cache of previous noisy responses to save the privacy budget on future
queries. Pioneer expects the accuracy over its responses in the form of a
target variance. Since Pioneer can only answer single range queries, we
decompose all workloads into single queries for our evaluation, and let
Pioneer answer them sequentially.
Dataset | Adult (Kohavi, 1996) | Taxi (Commission, 2022) | Planes (of Transportation, 2022; Eichmann et al., 2020)
---|---|---|---
Size | $48842\times 14$ | $1028527\times 19$ | $500,000\times 12$
Tasks | BFS (Age) | BFS (Lat, Long) | IDEBench
(Attributes) | DFS (Country) | DFS (Lat, Long) | (8 out of 12)
Table 2. Datasets, their sizes, and associated tasks, with the attributes or
number of attributes used in each task.
#### 8.1.2. Datasets and Tasks
In Table 2, we outline the datasets used and the tasks that each dataset is
used in. A common data exploration task involves traversing a decomposition
tree over the domain (Zhang et al., 2016). We construct our workloads through
either a breadth-first search (BFS) or a depth-first search (DFS); both of
these tasks are executed over a single attribute or a pair of correlated
attributes (Latitude and Longitude from the Taxi dataset). We also replicate
the evaluation of Pioneer (Peng et al., 2013) through randomized single range
queries (RRQ) over a single attribute of a synthetic dataset. We use Eichmann
et al.’s benchmarking tool, namely IDEBench (Eichmann et al., 2020), to
construct a sequence of interactive multi-attribute workloads.
#### 8.1.3. Datasets and Interactive Exploration Tasks
We use three datasets: the 1994 US Census data in the Adult dataset (Kohavi,
1996) ($48842$ rows $\times 14$ attributes), a log of NYC yellow taxi trip
records from 2015 in the Taxi dataset (Commission, 2022) ($1028527\times 19$),
and US domestic flight records in the Planes dataset (of Transportation, 2022;
Eichmann et al., 2020) ($500,000\times 12$). The Taxi dataset is used to model
a single strategy tree over a pair of correlated (Lat, Long) attributes. Each
node on the tree (or a range query) represents a rectangle of area on a map,
and each level on the tree splits each node’s rectangle into its quarters.
BFS and DFS tasks: A common data exploration task involves traversing a
decomposition tree over the domain (Zhang et al., 2016), by progressively
asking more fine-grained queries over a subset of the domain. In each
iteration, the analyst decomposes each query in the past workload whose noisy
response satisfies a certain criterion, into $k$ new children queries on the
attribute decomposition tree. In the BFS task, the analyst explores only past
queries with a sufficiently high noisy count. The BFS task thus returns the
smallest subsets of a domain that are sufficiently populated.
In a DFS task, by contrast, the analyst focuses on past queries with a sufficiently
low non-zero count (i.e. underrepresented subgroups). A DFS task terminates if
a query’s noisy count is non-zero and falls within the low DFS threshold
range. In the DFS task, when the analyst reaches a leaf node without finding a
node with a sufficiently low count, they backtrack a random number of steps up
the tree, and resume the search with the second smallest node at that level.
We group the range queries that satisfy the BFS or DFS criteria, into a single
workload per level of the tree.
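An illustrative sketch of the BFS task's workload construction, under the simplifying assumption that noisy counts come from a callback rather than the DP engine:

```python
def bfs_workloads(tree, threshold, noisy_count):
    """BFS task sketch: each level's workload holds the children of every
    query whose noisy count met the threshold. Trees are
    (query-label, children) tuples; noisy_count stands in for the engine's
    noisy response to each range query."""
    workloads, frontier = [], [tree]
    while frontier:
        nxt = [c for node in frontier
                 if noisy_count(node[0]) >= threshold
                 for c in node[1]]
        if nxt:
            workloads.append([n[0] for n in nxt])
        frontier = nxt
    return workloads
```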
Randomized range queries (RRQ) over synthetic data: We replicate the
evaluation of Pioneer (Peng et al., 2013) through $50,000$ randomized range
queries over a synthetic dataset. We fix a domain size of $m=1,000$ such that
each range query is contained in $(0,m)$. Each query is in the form of
$(s,s+\ell)$ where $s$ and $\ell$ are both selected from a normal distribution
with the following mean ($\mu$) and standard deviation ($\sigma$):
$\mu_{s}=500$, $\sigma_{s}=10$, $\mu_{\ell}=320$, and $\sigma_{\ell}=10$. The
accuracy requirement is supplied as an expected square error, which is also
selected from a normal distribution with $\mu=250000$,$\sigma=25000$.
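The RRQ generator can be sketched as below; clipping the range to $(0,m)$ and integer rounding are our assumptions, as the paper only specifies the distributions and containment.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 1_000  # domain size m, as in the task description

def random_range_query():
    """One RRQ (s, s + l) with s ~ N(500, 10) and l ~ N(320, 10), plus an
    accuracy requirement as an expected squared error ~ N(250000, 25000)."""
    s = int(rng.normal(500, 10))
    l = int(rng.normal(320, 10))
    lo, hi = max(0, s), min(M, s + l)
    err = float(rng.normal(250_000, 25_000))
    return (lo, hi), err
```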
IDEBench for Multi-Attribute queries: Eichmann et al. develop a benchmarking
tool to evaluate interactive data exploration systems (Eichmann et al., 2020).
We use this tool to construct a sequence of exploration workloads for a multi-
attribute case study over the Planes dataset. Specifically, we extract the SQL
workloads for their $1:N$ case where each query triggers an additional $N$
dependent queries (Eichmann et al., 2020). We simplify these queries for easy
integration with our prototype.
#### 8.1.4. Client Modelling
We model multiple data analysts querying the aforementioned systems, as
clients. We schedule the clients’ interactions with each system by randomly
sampling clients, with replacement, from the set of clients until no queries
remain. A client chooses the accuracy requirements and task parameters for
each run of the experiment independently and at random.
#### 8.1.5. Experiment setup.
We run the BFS and DFS tasks with $c=25$ clients and the IDEBench task with
$c=10$ clients. (We only run the RRQ task with a single client, to precisely
replicate the Pioneer paper. Since Pioneer can only work with a single
attribute, we do not run it for tasks over the Taxi or Planes datasets.) We
detail the accuracy requirements and task parameters in the full paper. The
BFS and DFS tasks are conducted over the Age and Country attributes of the
Adult dataset respectively. Both tasks are also run over the Latitude (Lat)
and Longitude (Long) attributes of the Taxi dataset. Each client randomly
draws the minimum threshold for their BFS and the maximum threshold for their
DFS. The BFS, DFS and IDEBench tasks consume $(\alpha,\beta)$ accuracy
requirements, with fixed $\beta=0.05$ across all clients. Each client randomly
selects $\alpha=\alpha_{s}|D|$ for $\alpha_{s}\in[0.01,0.16]$, with step size
0.05.
### 8.2. End-to-end Comparison
#### 8.2.1. Privacy Budget Comparison
We repeat each interactive exploration task $N=100$ times, and we compute the
average cumulative privacy budget for our solution CacheDP and baselines
(APEx, APEx with cache, Pioneer) over all $N$ experiment runs. We plot the
mean and 95% confidence intervals in Figure 7. We have two hypotheses:
1. H1
The baselines arranged in order of decreasing cumulative privacy budget should
be: APEx, APEx with cache, Pioneer.
1. H1.1
Pioneer should outperform APEx with cache, since Pioneer saves privacy budget
over any related workloads, whereas APEx with cache only saves privacy budget
over repeated workloads.
2. H1.2
Baselines with a cache (APEx with cache, Pioneer) should outperform the
baseline without a cache (APEx).
2. H2
CacheDP should outperform all baselines.
First, we observe that hypothesis H1 holds for all tasks, other than the
single-attribute BFS and DFS tasks (Figures 7a, 7b). Since we decompose each
workload into multiple single range queries for Pioneer, this sequential
composition causes it to perform worse than APEx without a cache in the BFS
task, and thus hypothesis H1.1 is violated. For the same reason, APEx with
cache outperforms Pioneer for the single-attribute DFS task, and so,
hypothesis H1.2 is violated. We note, though, that hypothesis H1.2 holds
for the RRQ task (Figure 7c). Our Pioneer implementation replicates a similar
privacy budget trendline to the original paper (Peng et al., 2013, Figure 15).
Hypothesis H2 holds for all tasks, and the cumulative privacy budget spent by
CacheDP scales slower per query, by at least a constant factor, over all
graphs. We note that in the RRQ task (Figure 7c), CacheDP spends more privacy
budget upfront than the other systems, since these systems use the simpler
Laplace Mechanism, which is optimal for single range queries, unlike our
underlying Matrix Mechanism. However, any upfront privacy budget spent by
CacheDP is used to fill the cache, which yields budget savings over a large
number of workloads, as CacheDP requires an order of magnitude less cumulative
privacy budget than the best baseline (Pioneer). We observe that even in the
computationally intensive tasks due to larger data vectors for two attributes
(Figures 7d, 7e) and multiple attributes (Figure 7f), CacheDP outperforms the
best baseline (APEx with cache), by at least a factor of 1.5 for Figure 7f.
For both DFS tasks (Figures 7b, 7e), since each experiment can terminate after
a different number of workloads have been run, we observe large confidence
intervals for higher workload indices for each system. CacheDP simply returns
cached responses to a workload if they meet the accuracy requirements, whereas
our simulation for APEx with cache resamples noisy workload responses and may
traverse the tree again in a possibly different path. Relaxed accuracy
requirements from different clients can lead to frequent re-use of our cache
(Section 8.1.5), and thus, we find that in Figure 7e, CacheDP ends the DFS
exploration faster than APEx with cache.
#### 8.2.2. Accuracy Evaluation
We measured the empirical error of the noisy responses returned by all systems
and found that they meet the clients’ $(\alpha,\beta)$ accuracy
requirements. Cached responses used by CacheDP commonly exceed the accuracy
requirements. Specifically, when all strategy responses are free, CacheDP will
always return the most accurate cached response for each strategy query, even
if the current workload has a poorer $\alpha$.
#### 8.2.3. Overhead Evaluation
We compute the following storage and computation overheads for all systems,
averaged over all $N$ experiment runs: (1) cache size in terms of total number
of cache entries at the end of a run, and (2) workload runtime, averaged over
all workloads in a run. In Table 5, we present these overheads for
representative tasks. (Since our simulation for APEx with cache only differs
from APEx by a recalculation of the privacy budget (Section 8.1), the latter
has the same runtime as the former.) Our cache size is limited by the number
of nodes on our strategy tree, and so for the RRQ task, which has $50k$
workloads, CacheDP has a smaller cache size than the baselines. Whereas, in
tasks with fewer workloads, such as the IDEBench task, our PQ module inserts
more strategy query nodes into the cache, and thus, significantly increases
our cache size over the baselines. Nevertheless, since our cache entries only
consist of 4 floating-point values (32 B), even a cache with $\approx 25k$
entries would be reasonably small in size ($\approx 800$ kB).
In terms of runtime, CacheDP only takes a few seconds per workload for single-
attribute tasks, such as the DFS task, thereby matching other cached
baselines. As the number of attributes increases in the IDEBench task, the
cumulative cache size and runtime of CacheDP scale linearly. Specifically,
IDEBench workloads require computations over a larger data vector that spans
many attributes. (We include a graph for these variables and discuss
performance optimizations in the extended paper.) Yet, non-optimized CacheDP
only takes around 6 minutes per workload for the IDEBench task, and performs
slightly better than APEx with cache.
Figure 8. Average cumulative cache size and runtime of CacheDP, versus the
number of attributes, for the IDEBench task.
We illustrate how the cumulative cache size and runtime of CacheDP scale with
the number of attributes for the IDEBench task in Figure 8. We note that the
x-axis corresponds to a total of 27 IDEBench workloads. We consider multiple
clients in Table 5, whereas we only consider one client in Figure 8. (Since
this single client will not experience fully cached, accurate workload
responses, we find runtime overheads an order of magnitude worse than those
obtained by simply multiplying the average in Table 5 by the number of
IDEBench workloads; this plot thus shows worst-case runtimes for the IDEBench
task.) We also note
that increasing the number of records in the database only impacts the size of
the data vector in the pre-processing stage, and does not impact the size of
the cache.
We can see that both observed variables scale approximately linearly with the
number of attributes. The cache size can be limited by identifying inaccurate
cache entries that can be replaced, while inserting new noisy responses, in
each module. The Monte Carlo (MC) simulation forms the runtime bottleneck when the
number of attributes remains small, whereas the matrix multiplication becomes
the main bottleneck as the number of attributes increases. The MC simulation
can be avoided entirely, by expressing the desired accuracy through expected
variances instead of the $(\alpha,\beta)$ requirements. Our implementation can
be optimized with fast matrix multiplication algorithms to achieve smaller
runtimes.
System | Cache entries (RRQ) | Cache entries (IDEBench) | Runtime, DFS - Adult (s) | Runtime, IDEBench (s)
---|---|---|---|---
APEx (cache) | 3118$\pm$4 | 6540$\pm$0 | 2.71$\pm$0.02 | 456$\pm$70
Pioneer | 3118$\pm$4 | - | 1.6$\pm$0.2 ms | -
CacheDP | 1998$\pm$0 | 23666$\pm$1000 | 2.82$\pm$0.2 | 338$\pm$20
Table 3. Cache size and workload runtime comparison.
System | BFS - Adult (Age) | DFS - Adult (Country) | RRQ - Synthetic (1D) | BFS - Taxi (Lat,Long) | DFS - Taxi (Lat,Long) | IDEBench
---|---|---|---|---|---|---
APEx | - / 3.7$\pm$0.2 | - / 2.71$\pm$0.02 | - / 0.05$\pm$0 | - / 111$\pm$8 | - / 25$\pm$1 | - / 456$\pm$70
APEx Cache | 123$\pm$1 / 3.7$\pm$0.2 | 81$\pm$0 / 2.71$\pm$0.02 | 3118$\pm$4 / 0.05$\pm$0 | 668$\pm$14 / 111$\pm$8 | 2515$\pm$0 / 25$\pm$1 | 6540$\pm$0 / 456$\pm$70
Pioneer | 117$\pm$1 / 4.8$\pm$0.2 ms | 80$\pm$0 / 1.6$\pm$0.2 ms | 3118$\pm$4 / 0.01$\pm$0 | - | - | -
CacheDP | 145$\pm$3 / 7.5$\pm$0.4 | 81$\pm$0 / 2.8$\pm$0.2 | 1998$\pm$0 / 0.05$\pm$0 | 3669$\pm$0 / 23$\pm$1 | 3669$\pm$0 / 15.3$\pm$0.5 | 23666$\pm$1000 / 338$\pm$20
Table 5. Cache size and workload runtime comparison; each cell shows cache entries / runtime in seconds (unless marked ms).
### 8.3. Ablation study
Module | BFS | DFS | RRQ | IDEBench
---|---|---|---|---
Free | $164.8\pm 0.9$ | $591\pm 6$ | $49801\pm 3$ | $216\pm 1$
MMM | $2.1\pm 0.1$ | $2.25\pm 0.09$ | $\mathbf{137\pm 3}$ | $15.1\pm 0.2$
RP | $14.7\pm 0.5$ | $21.1\pm 0.5$ | $1.4\pm 0.1$ | $\mathbf{31\pm 1}$
SE | $\mathbf{18.5\pm 0.6}$ | $\mathbf{42\pm 1}$ | $60\pm 4$ | $7.8\pm 0.2$
Table 6. Average number of times each module was chosen for each task; most
frequently chosen modules are in bold.
Each of our modules contributes differently to the success of our system across
different workloads. We conduct an ablation study in two parts analyzing our
modules. First, we analyze the frequencies at which each module is selected to
answer a workload, and second, we run a study to quantify the impact of each
module on the cumulative privacy budget. We begin with our frequency analysis,
noting that a module is chosen to answer a given workload if it is estimated
to cost the lowest privacy budget. We only include the MMM, RP, and SE modules
in this analysis, since the PQ module is not involved in the cost estimation
stage. We present the number of times each module is chosen to answer a
workload in each of the BFS, DFS, RRQ and IDEBench tasks, averaged over
$N=100$ runs, in Table 6. If MMM reports $\epsilon=0$ for a workload, CacheDP
simply uses MMM to answer the workload using cached responses, and it does not
run RP or SE modules. We thus separately record the number of free workloads
per task in the first row of Table 6. First, we observe that most workloads
for each task are free, indicating that using solely the cached strategy
responses, CacheDP can successfully answer most workloads for these tasks.
Second, considering all non-free workloads, each of the modules is used the
most frequently for at least one task. SE is chosen most frequently for the
BFS and DFS tasks, answering $52\%$ and $64\%$ of non-free workloads
respectively. Furthermore, for many workloads in these tasks, we observed that
MMM had $\epsilon>0$ cost, but under SE, these workloads became free
($\epsilon=0$). RP is chosen most frequently for the IDEBench task ($57\%$),
whereas MMM is used most frequently for the RRQ task ($69\%$). Thus, we can
see that each module plays a role in CacheDP’s performance in one or more
tasks.
We also run a study to quantify savings in the cumulative privacy budget due
to each module. We rerun our single-attribute tasks (BFS, DFS, RRQ) while
disabling each of our modules (MMM, SE, RP, PQ) one at a time, and present the
cumulative privacy budget consumed by each such configuration in Figure 9. The
standard configuration consists of all modules turned on. (Turning an
effective module off should lead to an increase in the cumulative privacy
budget, in comparison to the standard configuration.) First, we observe that
the standard configuration performs the best in all three tasks, while
considering CI overlaps. Therefore, data analysts need not pick which modules
should be turned on in order to answer a workload sequence with the lowest
privacy budget. Second, the PQ module significantly lowers the cost for the
BFS and RRQ task, proactively fetching all ($\approx 12$) queries in a BFS
workload at the cost of one strategy node. Third, the RP module also lowers
the cost for BFS, when the same workload is repeated by other clients.
Fourth, turning the SE module off only contributes to minor differences in the
cumulative privacy budget ($B_{c}$). However, the reader may expect that
turning the SE module off would lead to a higher $B_{c}$, since based on the
frequency analysis, the SE module is most frequently chosen to answer non-zero
workloads for the BFS and DFS tasks. We reconcile this discrepancy as
follows. When the SE module is chosen, constrained inference reduces the
privacy cost for paid strategy queries. As a result, the cache entries for
these queries remain at a lower accuracy than if MMM or RP had been used to
answer them, and they may not be reusable for later workloads. CacheDP may
therefore need to obtain noisy responses for these queries later, at a higher
accuracy. The SE module thus provides savings on earlier workloads at the
cost of a less accurate cache for answering future workloads. In summary,
different tasks exploit different modules, and
the standard configuration incurs the least privacy budget, and thus data
analysts need not turn off any modules.
Figure 9. Ablation study over PQ, RP, SE modules of CacheDP
## 9\. Related work
Constrained inference techniques have been applied in the non-interactive DP
setting to improve the accuracy of noisy query responses (Zhang et al., 2016;
Qardaji et al., 2013) and in synthetic data generators (Tantipongpipat et al.,
2021; Ge et al., 2021; McKenna et al., [n.d.]) to infer consistent answers
from a data model built through noisy measurement queries. However, these
systems do not provide any accuracy guarantee on the inferred responses. If
the analyst desires a more accurate response than the synthetic data can
offer, no privacy budget remains to improve the query answer (Tao et al.,
2021). Our work applies DP constrained inference in an interactive setting so
that we can spend the privacy budget on queries that the analyst is interested
in and meet their accuracy requirements. On the other hand, existing accuracy-
aware DP systems for data exploration (Ge et al., 2019; Mohan et al., 2012),
releasing data (Gaboardi et al., 2016; Nanayakkara et al., 2022), or
programming frameworks (Vesga et al., 2019; Xiao et al., 2021) do not exploit
historical query answers to save privacy budget on a given query. We design a
cache structure and inference engine extending one of these accuracy-aware
systems, APEx (Ge et al., 2019).
Peng et al.’s Pioneer (Peng et al., 2013) is the most relevant work that uses
historical query answers to obtain accurate responses to upcoming single range
queries. CacheDP improves on it in several ways. First, it can handle
workloads with multiple queries. Second, it supports multiple, complex DP
mechanisms and chooses the mechanism that
uses the least privacy budget for each new workload. Third, our PQ module
(Section 5.3) proactively fetches certain query responses that can be used
later at no additional cost. Finally, CacheDP can answer multi-attribute
queries through our extended ST module (Section 7).
Our key modules are built on top of prior work (e.g., Li et al.’s Matrix
Mechanism (Li et al., 2015), Koufogiannis et al.’s Relax Privacy Mechanism
(Koufogiannis et al., 2016)), such that existing interactive DP systems that
make use of these mechanisms (e.g. PrivateSQL (Kotsogiannis et al., 2019),
APEx (Ge et al., 2019)) do not have to make significant changes; these systems
can include a relatively light-weight cache structure and cache-aware version
of the DP mechanisms. Integrating a structured, reusable cache with these
mechanisms has its own technical challenges, such as the Cost Estimation
problem (Section 4.3.2), Full Rank Transformation problem (Section 5.1.2), as
well as optimally reusing the cache (Section 5.2) and filling it (Section
5.3).
## 10\. Future work
CacheDP can be extended to answer top-$k$ or iceberg counting queries studied
in APEx, as well as simple aggregates such as means, by integrating the query
processing engine of APEx to transform these queries to raw counting queries.
Beyond counting queries, providing differential privacy over complex SQL
queries, such as joins and group by operators, is a challenging problem, as
studied in prior work (Kotsogiannis et al., 2019; Dong and Yi, 2021; Wilson et
al., 2020). The global sensitivity of SQL queries involving joins is
unbounded. To tackle this challenge, existing well-performing DP mechanisms
(Kotsogiannis et al., 2019; Dong and Yi, 2021; Wilson et al., 2020) for these
queries require a data-dependent transformation (e.g., truncation of the data)
to bound the sensitivity of the query. The accuracy of these mechanisms
therefore depends on the data, and searching for the minimum privacy budget
that achieves the desired accuracy bound is non-trivial; this remains an
important research direction.
## 11\. Conclusion
We build a usable interactive DP query engine, CacheDP, that uses a structured
DP cache to achieve privacy budget savings commonly seen in the non-
interactive model. CacheDP supports data analysts in answering data
exploration workloads accurately, without requiring them to have any DP
knowledge. Our work provides researchers with a methodology to address common
challenges in integrating DP mechanisms with a DP cache, such as cache-aware
privacy budget estimation (MMM), filling the cache at a low privacy
budget (PQ), and maximizing cache reuse (SE).
## 12\. Acknowledgements
We thank NSERC for funding our work through the Postgraduate Scholarship-
Doctoral program, grant CRDPJ-534381, and a Discovery Grant. We also thank the
Royal Bank of Canada for supporting our work. This work benefited from the use
of the CrySP RIPPLE Facility at the University of Waterloo.
## References
* Bittau et al. (2017) Andrea Bittau, Úlfar Erlingsson, Petros Maniatis, Ilya Mironov, Ananth Raghunathan, David Lie, Mitch Rudominer, Ushasree Kode, Julien Tinnes, and Bernhard Seefeld. 2017\. Prochlo: Strong Privacy for Analytics in the Crowd. In _Proceedings of the 26th Symposium on Operating Systems Principles_ (Shanghai, China) _(SOSP ’17)_. Association for Computing Machinery, New York, NY, USA, 441–459. https://doi.org/10.1145/3132747.3132769
* Commission (2022) NYC Taxi & Limousine Commission. 2022\. _TLC Trip Record Data_. https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page
* Ding et al. (2017) Bolin Ding, Janardhan Kulkarni, and Sergey Yekhanin. 2017\. Collecting Telemetry Data Privately. In _Proceedings of the 31st International Conference on Neural Information Processing Systems_ _(NIPS’17)_. 3574–3583. https://doi.org/10.5555/3294996.3295115
* Dong and Yi (2021) Wei Dong and Ke Yi. 2021\. Residual Sensitivity for Differentially Private Multi-Way Joins. In _Proceedings of the 2021 International Conference on Management of Data_ (Virtual Event, China) _(SIGMOD ’21)_. Association for Computing Machinery, New York, NY, USA, 432–444. https://doi.org/10.1145/3448016.3452813
* Dwork et al. (2006) Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. Calibrating Noise to Sensitivity in Private Data Analysis. In _Theory of Cryptography_ , Shai Halevi and Tal Rabin (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 265–284.
* Dwork and Roth (2014) Cynthia Dwork and Aaron Roth. 2014. The Algorithmic Foundations of Differential Privacy. _Foundations and Trends in Theoretical Computer Science_ 9, 3–4 (2014), 211–407. https://doi.org/10.1561/0400000042
* Eichmann et al. (2020) Philipp Eichmann, Emanuel Zgraggen, Carsten Binnig, and Tim Kraska. 2020. IDEBench: A Benchmark for Interactive Data Exploration. In _Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data_. 1555–1569. https://doi.org/10.1145/3318464.3380574
* Gaboardi et al. (2019) Marco Gaboardi, Michael Hay, and Salil Vadhan. 2019. _A Programming Framework for OpenDP_. https://projects.iq.harvard.edu/opendp
* Gaboardi et al. (2016) Marco Gaboardi, James Honaker, Gary King, Jack Murtagh, Kobbi Nissim, Jonathan Ullman, and Salil Vadhan. 2016. Psi: a private data sharing interface. (2016). http://arxiv.org/abs/1609.04340
* Ge et al. (2019) Chang Ge, Xi He, Ihab F. Ilyas, and Ashwin Machanavajjhala. 2019. APEx: Accuracy-Aware Differentially Private Data Exploration. In _Proceedings of the 2019 International Conference on Management of Data_ (Amsterdam, Netherlands). Association for Computing Machinery, New York, NY, USA, 177–194. https://doi.org/10.1145/3299869.3300092
* Ge et al. (2021) Chang Ge, Shubhankar Mohapatra, Xi He, and Ihab F. Ilyas. 2021\. Kamino: Constraint-Aware Differentially Private Data Synthesis. _VLDB_ (2021).
* Hay et al. (2010) Michael Hay, Vibhor Rastogi, Gerome Miklau, and Dan Suciu. 2010. Boosting the Accuracy of Differentially Private Histograms through Consistency. _Proceedings of the VLDB Endowment_ 3, 1–2 (Sept. 2010), 1021–1032. https://doi.org/10.14778/1920841.1920970
* Johnson et al. (2018) Noah Johnson, Joseph P. Near, and Dawn Song. 2018. Towards Practical Differential Privacy for SQL Queries. _Proceedings of the VLDB Endowment_ 11, 5 (Jan. 2018), 526–539. https://doi.org/10.1145/3177732.3177733
* Kohavi (1996) Ron Kohavi. 1996\. Scaling up the accuracy of naive-bayes classifiers: A decision-tree hybrid. https://archive.ics.uci.edu/ml/datasets/adult. In _KDD_ , Vol. 96. 202–207.
* Kotsogiannis et al. (2019) Ios Kotsogiannis, Yuchao Tao, Xi He, Maryam Fanaeepour, Ashwin Machanavajjhala, Michael Hay, and Gerome Miklau. 2019. PrivateSQL: A Differentially Private SQL Query Engine. _Proceedings of the VLDB Endowment_ 12, 11 (July 2019), 1371–1384. https://doi.org/10.14778/3342263.3342274
* Koufogiannis et al. (2016) Fragkiskos Koufogiannis, Shuo Han, and George J Pappas. 2016\. Gradual Release of Sensitive Data under Differential Privacy. _Journal of Privacy and Confidentiality_ 7, 2 (2016), 23–52. https://doi.org/10.29012/jpc.v7i2.649
* Li et al. (2015) Chao Li, Gerome Miklau, Michael Hay, Andrew Mcgregor, and Vibhor Rastogi. 2015. The Matrix Mechanism: Optimizing Linear Counting Queries under Differential Privacy. _The VLDB Journal_ 24, 6 (Dec. 2015), 757–781. https://doi.org/10.1007/s00778-015-0398-x
* Machanavajjhala et al. (2008) Ashwin Machanavajjhala, Daniel Kifer, John Abowd, Johannes Gehrke, and Lars Vilhuber. 2008\. Privacy: Theory meets practice on the map. In _2008 IEEE 24th international conference on data engineering_. IEEE, IEEE, Cancun, Mexico, 277–286. https://doi.org/10.1109/ICDE.2008.4497436
* Mazmudar et al. (2022) Miti Mazmudar, Thomas Humphries, Jiaxiang Liu, Matthew Rafuse, and Xi He. 2022. _CacheDP artifact_. https://gitlab.uwaterloo.ca/m2mazmud/cachedp-public
* Mazmudar et al. (2023) Miti Mazmudar, Thomas Humphries, Jiaxiang Liu, Matthew Rafuse, and Xi He. 2023. Cache Me If You Can: Accuracy-Aware Inference Engine for Differentially Private Data Exploration. _Proceedings of the VLDB Endowment_.
* McKenna et al. ([n.d.]) Ryan McKenna, Daniel Sheldon, and Gerome Miklau. [n.d.]. Graphical-model based estimation and inference for differential privacy. arXiv:1901.09136
* McSherry (2010) Frank McSherry. 2010\. Privacy Integrated Queries: An Extensible Platform for Privacy-Preserving Data Analysis. _Commun. ACM_ 53, 9 (2010), 89–97. https://doi.org/10.1145/1810891.1810916
* Mohan et al. (2012) Prashanth Mohan, Abhradeep Thakurta, Elaine Shi, Dawn Song, and David Culler. 2012. GUPT: Privacy Preserving Data Analysis Made Easy. In _Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data_. Association for Computing Machinery, 349–360. https://doi.org/10.1145/2213836.2213876
* Nanayakkara et al. (2022) Priyanka Nanayakkara, Johes Bater, Xi He, Jessica Hullman, and Jennie Rogers. 2022\. Visualizing Privacy-Utility Trade-Offs in Differentially Private Data Releases. _CoRR_ (2022). arXiv:2201.05964
* United States Department of Transportation (2022). 2022\. _Bureau of Transportation Statistics_. https://transtats.bts.gov
* Peng et al. (2013) S. Peng, Y. Yang, Z. Zhang, M. Winslett, and Y. Yu. 2013. Query optimization for differentially private data management systems. In _2013 IEEE 29th International Conference on Data Engineering (ICDE)_. IEEE, Brisbane, QLD, Australia, 1093–1104. https://doi.org/10.1109/ICDE.2013.6544900
* Qardaji et al. (2013) Wahbeh Qardaji, Weining Yang, and Ninghui Li. 2013. Understanding hierarchical methods for differentially private histograms. In _Proceedings of the VLDB Endowment_ , Vol. 6. 1954–1965. http://www.vldb.org/pvldb/vol6/p1954-qardaji.pdf
* Tantipongpipat et al. (2021) Uthaipon Tantipongpipat, Chris Waites, Digvijay Boob, Amaresh Siva, and Rachel Cummings. 2021\. Differentially private synthetic mixed-type data generation for unsupervised learning. _Intelligent Decision Technologies_ (2021).
* Tao et al. (2021) Yuchao Tao, Ryan McKenna, Michael Hay, Ashwin Machanavajjhala, and Gerome Miklau. 2021. Benchmarking Differentially Private Synthetic Data Generation Algorithms. _TPDP_ (2021). https://tpdp.journalprivacyconfidentiality.org/2021/papers/NingUQKH21.pdf
* Vesga et al. (2019) Elisabet Lobo Vesga, Alejandro Russo, and Marco Gaboardi. 2019\. A Programming Framework for Differential Privacy with Accuracy Concentration Bounds. _CoRR_ (2019). arXiv:1909.07918
* Wilson et al. (2020) Royce J Wilson, Celia Yuxin Zhang, William Lam, Damien Desfontaines, Daniel Simmons-Marengo, and Bryant Gipson. 2020\. Differentially Private SQL with Bounded User Contribution. _Proceedings on Privacy Enhancing Technologies_ 2020 (2020), 230–250. Issue 2. https://doi.org/10.2478/popets-2020-0025
* Xiao et al. (2021) Yingtai Xiao, Zeyu Ding, Yuxin Wang, Danfeng Zhang, and Daniel Kifer. 2021. Optimizing Fitness-for-Use of Differentially Private Linear Queries. _Proceedings of the VLDB Endowment_ 14, 10 (2021), 1730–1742. https://doi.org/10.14778/3467861.3467864
* Zhang et al. (2016) Jun Zhang, Xiaokui Xiao, and Xing Xie. 2016. PrivTree: A Differentially Private Algorithm for Hierarchical Decompositions. In _Proceedings of the 2016 International Conference on Management of Data_ _(SIGMOD ’16)_. ACM, 155–170. https://doi.org/10.1145/2882903.2882928
## Appendix A Proofs
### A.1. End-to-end Privacy Proof
#### A.1.1. Proof of Theorem 3.2
Recall the theorem states that CacheDP, as defined in Algorithm 1, satisfies
$\mathcal{B}$-DP.
###### Proof.
(sketch) We begin by addressing the cost estimation phase (lines 4-10). The
cost estimation phase is independent of the data. In addition, each DP
mechanism (MMM or RP) with its corresponding chosen strategy ensures
$\epsilon_{i}$-DP (Proposition 4.1, (Koufogiannis et al., 2016, Theorem 1A)).
At line 11, we check if running the chosen DP mechanism (MMM or RP with the
corresponding chosen strategy) will exceed the total privacy budget
$\mathcal{B}$ by sequential composition (Dwork and Roth, 2014). We only run
the DP mechanism if the total budget is sufficient. This ensures the overall
Algorithm 1 satisfies $\mathcal{B}$-DP.
∎
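The budget check at line 11 of Algorithm 1 can be sketched as follows; the class and method names here are our own, illustrating the sequential-composition gate rather than reproducing the paper's implementation:

```python
class BudgetManager:
    """Tracks cumulative privacy spend under sequential composition.

    Hypothetical helper mirroring the check described above: a mechanism's
    estimated cost eps_i is only spent if the running total stays within
    the overall budget B.
    """
    def __init__(self, total_budget: float):
        self.total = total_budget
        self.spent = 0.0

    def try_spend(self, eps_i: float) -> bool:
        # Sequential composition: per-mechanism costs add up.
        if self.spent + eps_i > self.total:
            return False          # refuse to run the DP mechanism
        self.spent += eps_i
        return True

bm = BudgetManager(total_budget=1.0)
print(bm.try_spend(0.4), bm.try_spend(0.4), bm.try_spend(0.4))  # -> True True False
```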
### A.2. MMM Module Proofs
#### A.2.1. Proof of Proposition 4.1
Recall the proposition states that the AnswerWorkload interface of MMM
(Algorithm 2) satisfies $\epsilon$-DP, where $\epsilon$ is the output of this
interface.
###### Proof.
(sketch) When the cache is empty, the privacy of MMM (with optional SE)
follows from the matrix mechanism's privacy guarantee in Proposition 2.1. When
there are entries in the cache, we split the strategy into the free and paid
matrices. The paid matrix is private by the same reasoning as above. The
privacy of the free matrix responses, and of the final concatenation of free
and paid responses, follows from the post-processing lemma of DP (Dwork and
Roth, 2014, Proposition 2.1). ∎
#### A.2.2. Proof of Proposition 4.2
Given an instant strategy $\mathbf{A}=(\mathbf{F}||\mathbf{P})$ with a vector
of $k$ noise parameters
$\mathbf{b}=\mathbf{b}_{\mathbf{F}}||\mathbf{b}_{\mathbf{P}}$, the error to a
workload $\mathbf{W}$ using the AnswerWorkload interface of MMM (Algorithm 2)
is $\|\mathbf{W}\mathbf{A}^{+}Lap(\mathbf{b})\|$, where $Lap(\mathbf{b})$
draws independent noise from $Lap(\mathbf{b}[1])$, $\ldots,Lap(\mathbf{b}[k])$
respectively. We can simplify its expected total square error as
$\|\mathbf{W}\mathbf{A}^{+}diag(\mathbf{b})\|_{F}^{2}$ where
$diag(\mathbf{b})$ is a diagonal matrix with
$diag(\mathbf{b})[i,i]=\mathbf{b}[i]$.
###### Proof.
The error to the MMM is
$\|\mathbf{W}\mathbf{A}^{+}(\mathbf{A}\mathbf{x}+Lap(\mathbf{b}))-\mathbf{W}\mathbf{x}\|=\|\mathbf{W}\mathbf{A}^{+}Lap(\mathbf{b})\|$.
The expected total square error
$\mathbb{E}[\|\mathbf{W}\mathbf{A}^{+}Lap(\mathbf{b})\|^{2}_{2}]$ can be
expanded to
$\sum_{i=1}^{l}\mathbb{E}[\sum_{j=1}^{k}(\mathbf{W}\mathbf{A}^{+}[i,j]Lap(\mathbf{b}[j]))^{2}]$.
As the $k$ noise variables are independent and have zero mean, the error
expression equals
$\sum_{i=1}^{l}\sum_{j=1}^{k}(\mathbf{W}\mathbf{A}^{+}[i,j])^{2}\mathbb{E}[Lap(\mathbf{b}[j]))^{2}]=\sum_{i=1}^{l}\sum_{j=1}^{k}(\mathbf{W}\mathbf{A}^{+}[i,j])^{2}\mathbf{b}[j]^{2}$
which is equivalent to $\|\mathbf{W}\mathbf{A}^{+}diag(\mathbf{b})\|_{F}^{2}$.
∎
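The final equivalence in this proof can be checked numerically. The sketch below uses small arbitrary $\mathbf{W}$, $\mathbf{A}$, and $\mathbf{b}$ (our own example, not values from the paper):

```python
import numpy as np

# Numerical check of the last step above: the double sum
# sum_i sum_j (W A^+[i,j])^2 b[j]^2 equals ||W A^+ diag(b)||_F^2.
W = np.array([[1., 1., 0.], [0., 1., 1.]])          # workload, l x n
A = np.array([[1., 0., 0.], [0., 1., 0.],
              [0., 0., 1.], [1., 1., 1.]])          # strategy, k x n
b = np.array([1.0, 2.0, 1.5, 0.5])                  # per-row noise scales

WAp = W @ np.linalg.pinv(A)                         # W A^+, shape l x k
double_sum = float(((WAp ** 2) * (b ** 2)).sum())   # sum_{i,j}(W A^+[i,j])^2 b[j]^2
frobenius = float(np.linalg.norm(WAp @ np.diag(b), 'fro') ** 2)
print(abs(double_sum - frobenius) < 1e-9)           # -> True
```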
#### A.2.3. Proof of Theorem 4.1
The optimal solution to the simplified CE problem incurs a smaller privacy
cost $\epsilon$ than the privacy cost $\epsilon_{\mathbf{F}=\emptyset}$ of the
matrix mechanism without cache, i.e., MMM with $\mathbf{F}=\emptyset$.
###### Proof.
(sketch) Let $b^{*}$ be the noise parameter for $\mathbf{A}$ in the matrix
mechanism without cache to achieve the desired accuracy requirement. We can
show that $b^{*}$ is a valid solution to the simplified CE problem: setting
$\mathbf{b}_{\mathbf{F}}=[c.b\in\mathcal{C}~{}|~{}c.\mathbf{a}\in\mathcal{C}\cap\mathbf{A},c.b\leq
b^{*}]$ and $\mathbf{b}_{\mathbf{P}}=[b^{*}~{}|~{}\mathbf{a}\in\mathbf{P}]$
satisfies the accuracy requirement. As
$\|\mathbf{P}\|_{1}\leq\|\mathbf{A}\|_{1}$, the privacy cost
$\epsilon=\frac{\|P\|_{1}}{b^{*}}$ for $b_{\mathbf{P}}=b^{*}$ is smaller than
$\epsilon_{\mathbf{F}=\emptyset}=\frac{\|\mathbf{A}\|_{1}}{b^{*}}$. The
optimal solution to the simplified CE problem has a privacy cost no larger
than that of the valid solution $b_{\mathbf{P}}=b^{*}$. ∎
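The inequality $\|\mathbf{P}\|_{1}\leq\|\mathbf{A}\|_{1}$ that drives this proof can be illustrated numerically; the strategy matrix and its free/paid row split below are hypothetical:

```python
import numpy as np

# Toy illustration of the inequality above: splitting the strategy A into
# free rows F (answerable from the cache) and paid rows P can only shrink
# the matrix L1 norm (max absolute column sum), so the paid-only cost
# epsilon = ||P||_1 / b* never exceeds ||A||_1 / b*.
def l1_norm(M):
    return np.abs(M).sum(axis=0).max()

A = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [1., 1., 0., 0.],
              [0., 0., 1., 1.]])
F, P = A[:2], A[2:]            # hypothetical free/paid split of A's rows
b_star = 2.0

eps_paid = l1_norm(P) / b_star     # cost when the cache covers F
eps_full = l1_norm(A) / b_star     # cost of the cache-free mechanism
print(eps_paid, eps_full)          # -> 0.5 1.0
```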
### A.3. Full-rank Transformer (FRT) Proof
Recall Theorem 5.1 states that, given a global strategy $\mathbb{A}^{*}$ in a
$k$-ary tree structure, and an instant strategy
$\mathbb{A}\subseteq\mathbb{A}^{*}$, transformStrategy outputs a strategy
$\mathbf{A}$ that is full rank and supports $\mathbb{A}$. We begin by proving
the following lemma.
###### Lemma A.0.
Given a global strategy $\mathbb{A}^{*}$ in a $k$-ary tree structure, and an
instant strategy $\mathbb{A}\subseteq\mathbb{A}^{*}$, running
getTransformationMatrix($\mathbb{A}$) in Algorithm 4 increases the number of
non-empty buckets in $\mathbb{T}$ by at most 1, for each row
$\mathbb{A}[i]\in\mathbb{A}$.
###### Proof.
If the condition in line 4 is met the result follows trivially. We consider
the remaining case where $\mathbb{A}[i]$ intersects with at least one bucket.
We recall that all entries in $\mathbb{A}$ represent nodes in a $k$-ary tree.
Since adding and subtracting nodes on a $k$-ary tree always results in a
combination of one or more nodes on a $k$-ary tree, we can conclude that at
the end of each loop, all of the buckets are disjoint. For each new row
$\mathbb{A}[i]$, if this row intersects with the buckets in $\mathbb{T}$, then
$\mathbb{A}[i]$ is either (i) a descendant node of one bucket in $\mathbb{T}$,
or (ii) an ancestor node of one or more buckets in $\mathbb{T}$. For the first
case, assume $\mathbb{A}[i]$ is a descendant node of
$\mathbb{t}\in\mathbb{T}$. It cannot be the descendant of other buckets, as
the other buckets are disjoint with $\mathbb{t}$. Then $\mathbb{t}$ will be
replaced by $\mathbb{t}\cap\mathbb{A}[i]$ and $\mathbb{t}-\mathbb{A}[i]$ (line
9), and hence the size of $\mathbb{T}$ increases by 1. For the second case,
assume $\mathbb{A}[i]$ is the ancestor node of multiple buckets in
$\mathbb{T}$. We denote these buckets as
$\\{\mathbb{T}[j_{1}],\ldots,\mathbb{T}[j_{k}]\\}$, then all these buckets
will remain the same (line 9 just adds $\mathbb{t}$), and at most one
additional bucket
$\mathbb{A}[i]-\sum_{\mathbb{t}\in\mathbb{T}^{\prime}\wedge\mathbb{t}\cdot\mathbb{A}[i]\neq
0}\mathbb{t}$ is added in line 10. ∎
We now prove the main result (Theorem 5.1) that transformStrategy ensures full
rank matrices.
###### Proof.
We begin by discussing how $\mathbf{A}$ represents the queries in
$\mathbb{A}$. First, we note that $\mathbf{A}$ and $\mathbb{A}$ have the same
number of rows. The only difference is that $\mathbf{A}$ represents the
queries on the data vector $\mathbf{x}$, whereas $\mathbb{A}$ uses
$\mathbb{x}$. By construction, we have
$\mathbb{A}\mathbb{x}=\mathbf{A}\mathbb{T}\mathbb{x}=\mathbf{A}\mathbf{x}$.
Since $\mathbf{A}$ is a representation of $\mathbb{A}$ using only the
buckets generated in getTransformationMatrix, the number of columns in
$\mathbf{A}$ is the same as the number of buckets, $|\mathbb{T}|$. To show
that $\mathbf{A}$ is always full rank, we will first show that
$|row(\mathbf{A})|\geq|col(\mathbf{A})|$, where $|row(\mathbf{A})|$ and
$|col(\mathbf{A})|$ represent the number of rows and the number of columns in
$\mathbf{A}$. Or equivalently, $|row(\mathbf{A})|\geq|\mathbb{T}|$. This
follows by inductively applying Lemma A.1.
Next we show that $rank(\mathbf{A})=|col(\mathbf{A})|$. We once again show
this by induction on the number of rows in $\mathbb{A}$. In the base case
($|\mathbb{A}|=1$) applying transformStrategy, we get $\mathbf{A}=[1]$ and
thus $rank(\mathbf{A})=1$. Now we assume that
$rank(\mathbf{A})=|col(\mathbf{A})|$ for some $\mathbf{A}$ obtained from
$\mathbb{A}$. We consider adding a new row $r_{new}$ to $\mathbb{A}$ and
define $\bar{\mathbb{A}}=\mathbb{A}\cup r_{new}$. Let $\bar{\mathbf{A}}$
represent the result of applying transformStrategy to $\bar{\mathbb{A}}$.
When adding this row there are two possible cases: First, consider the case
where the newly added row, $r_{new}$, can be represented as a linear
combination of $\mathbb{T}$. That is the buckets created in
getTransformationMatrix are the same for $\mathbb{A}$ and $\bar{\mathbb{A}}$.
In this case, $\bar{\mathbf{A}}=\mathbf{A}\cup r^{\prime}_{new}$. Thus
$|col(\bar{\mathbf{A}})|=rank(\bar{\mathbf{A}})$ since the number of linearly
independent columns (or buckets) did not increase or decrease by adding a row.
On the other hand, if the newly added row, $r_{new}$, cannot be represented as
a linear combination of $\mathbb{T}$, then $r_{new}$ must be linearly
independent of $\mathbb{A}$. Thus, $\bar{\mathbf{A}}$ has one additional
linearly independent row and thus $rank(\bar{\mathbf{A}})=rank(\mathbf{A})+1$.
Furthermore by Lemma A.1, we know that adding a row can add at most one new
bucket. Since we assume $r_{new}$ cannot be represented as a linear
combination of $\mathbb{T}$, this means $|col(\bar{A})|=|col(A)|+1$. Thus we
have that $|col(\bar{A})|=rank(\bar{\mathbf{A}})$. Combining the fact that
$|row(\mathbf{A})|\geq|col(\mathbf{A})|$ and
$rank(\mathbf{A})=|col(\mathbf{A})|$, it follows that $\mathbf{A}$ is full
rank. ∎
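The bucket construction behind getTransformationMatrix and the full-rank property can be sketched in a few lines; the function and variable names here are ours, and the generic set-splitting below only matches Lemma A.1's "at most one new bucket" guarantee when the rows are tree-structured, as the theorem assumes:

```python
import numpy as np

# Minimal sketch (our naming) of the bucket construction: strategy rows are
# sets of leaf indices of a k-ary tree; buckets are kept pairwise disjoint,
# and for tree-structured rows each new row adds at most one non-empty
# bucket (Lemma A.1).
def build_buckets(rows):
    buckets = []
    for r in rows:
        r = set(r)
        if not any(b & r for b in buckets):   # row disjoint from all buckets
            buckets.append(r)
            continue
        new, covered = [], set()
        for b in buckets:
            if b & r:                          # split an intersecting bucket
                new.append(b & r)
                if b - r:
                    new.append(b - r)
                covered |= b & r
            else:
                new.append(b)
        if r - covered:                        # at most one leftover bucket
            new.append(r - covered)
        buckets = new
    return buckets

rows = [{0, 1, 2, 3}, {0, 1}, {0}]   # nodes of a binary tree over 4 leaves
buckets = build_buckets(rows)
# Express each row over the disjoint buckets and check full rank.
A = np.array([[float(b <= r) for b in buckets] for r in rows])
print(len(buckets), np.linalg.matrix_rank(A))   # -> 3 3
```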
### A.4. PQ Module Proofs
#### A.4.1. Proof of Lemma 5.7
Recall the lemma states that the $L_{1}$ norm of the $\mathbb{P}$ matrix is
equal to the subtree norm of the root of the tree with marked nodes
corresponding to $\mathbb{P}$:
(12) $\mathcal{S}_{\mathbb{P}}(\mathcal{T}\text{.root})=\|\mathbb{P}\|_{1}$
###### Proof.
Given that we form $\mathbb{P}$ as shown in Example 5.2, $\|\mathbb{P}\|_{1}$
simply represents the maximum number of overlapping RCQs in $\mathbb{P}$.
Overlapping RCQs on the tree $\mathcal{T}$ must occur on the same path, that
is, they form an ancestor-descendant relationship, since the children of each
node $n$ have non-overlapping ranges. Thus, the maximum number of overlapping
RCQs across all tree paths is equal to $\|\mathbb{P}\|_{1}$. ∎
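This correspondence can be checked on a toy instance. The range set below is our own example over 8 leaves, with rows of $\mathbb{P}$ as 0/1 range indicators:

```python
import numpy as np

# Toy check of the argument above: rows of P are 0/1 indicators of
# tree-node ranges over 8 leaves, so the matrix L1 norm (max absolute
# column sum) counts the deepest stack of nested RCQs, i.e. the marked
# nodes on the longest root-to-leaf path.
ranges = [(0, 7), (0, 3), (0, 1), (4, 7), (6, 7)]   # (lo, hi), inclusive
n = 8
P = np.zeros((len(ranges), n))
for i, (lo, hi) in enumerate(ranges):
    P[i, lo:hi + 1] = 1.0

l1_norm = int(np.abs(P).sum(axis=0).max())          # ||P||_1

# Marked nodes on a root-to-leaf path = ranges containing that leaf.
path_max = max(sum(lo <= x <= hi for lo, hi in ranges) for x in range(n))
print(l1_norm, path_max)                            # -> 3 3
```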
#### A.4.2. Proof of Lemma 5.8
The lemma states that the proactive strategy $\Delta\mathbb{P}$ generated by
generateProactiveStrategy for an input $\mathbb{P}$ satisfies the condition:
(13) $\forall\text{ paths }p\in\mathcal{T},\sum_{v\in
p}\mathcal{M}_{\mathbb{P}\cup\Delta\mathbb{P}}(v)\leq\mathcal{S}_{\mathbb{P}}(\mathcal{T}\text{.root})=\|\mathbb{P}\|_{1}$
###### Proof.
This inequality holds for $\Delta\mathbb{P}=\emptyset$ by applying Lemma 5.7
and Definition 5.6. generateProactiveStrategy only adds a node to
$\Delta\mathbb{P}$ if it satisfies the condition on line 4, namely that the
remaining path has length less than $r$. At the root, this bound is set to
$\|\mathbb{P}\|_{1}$. ∎
#### A.4.3. Proof of Theorem 5.3
The theorem states that, given a paid strategy matrix $\mathbb{P}$, Algorithm
6 outputs $\Delta\mathbb{P}$ such that
$\|\mathbb{P}\cup\Delta\mathbb{P}\|_{1}=\|\mathbb{P}\|_{1}$.
###### Proof.
Applying Lemma 5.7 and Definition 5.6 for the matrix
$\mathbb{P}\cup\Delta\mathbb{P}$:
(14)
$\|\mathbb{P}\cup\Delta\mathbb{P}\|_{1}=\mathcal{S}_{\mathbb{P}\cup\Delta\mathbb{P}}(\mathcal{T}\text{.root})=\max_{p\in\mathcal{T}}\sum_{v\in
p}\mathcal{M}_{\mathbb{P}\cup\Delta\mathbb{P}}(v)$
Rephrasing Lemma 5.8:
(15) $\max_{p\in\mathcal{T}}\sum_{v\in
p}\mathcal{M}_{\mathbb{P}\cup\Delta\mathbb{P}}(v)=\|\mathbb{P}\|_{1}$
Using equations 14 and 15, we get:
(16) $\|\mathbb{P}\cup\Delta\mathbb{P}\|_{1}=\|\mathbb{P}\|_{1}$
∎
## Appendix B MMM Cache-aware tight bound
Given the cached noise parameters, we propose a theoretical upper bound
$b_{T}$ for the candidate $b_{P0}$ required to satisfy the
$(\alpha,\beta)$-accuracy guarantee.
(17)
$b_{T}\leq\frac{\sqrt{\alpha^{2}\beta/2-\|\mathbf{W}\mathbf{A}^{+}Diag(\vec{b_{\mathbf{F}}})\|_{F}^{2}}}{\|\mathbf{W}\mathbf{A}^{+}Diag(I_{|\mathbf{P}|})\|_{F}}$
Here, $\vec{b_{\mathbf{F}}}=[b_{1},\ldots,b_{|\mathbf{F}|}]$. In doing so, we
generalize Ge et al.’s tight bound $b_{T}$ for $\mathcal{C}=\phi$ (equation 4)
to consider cached noise parameters.
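Equation (17) can be evaluated directly. The sketch below encodes our reading that $Diag(\vec{b_{\mathbf{F}}})$ places the cached noise scales in the free rows (zeros elsewhere) and $Diag(I_{|\mathbf{P}|})$ places ones in the paid rows; $\mathbf{W}$, $\mathbf{A}$, and all parameters are illustrative, not taken from the paper's experiments:

```python
import numpy as np

# Sketch of the bound in equation (17), assuming free rows come first in
# the strategy A. Inputs are illustrative only.
def tight_bound(W, A, b_free, n_free, alpha, beta):
    k = A.shape[0]
    WAp = W @ np.linalg.pinv(A)                      # W A^+
    d_free = np.zeros(k); d_free[:n_free] = b_free   # cached noise scales
    d_paid = np.zeros(k); d_paid[n_free:] = 1.0      # indicator of paid rows
    num = alpha ** 2 * beta / 2 \
        - np.linalg.norm(WAp @ np.diag(d_free), 'fro') ** 2
    if num < 0:
        return 0.0     # cached noise alone already exceeds the error budget
    return float(np.sqrt(num) / np.linalg.norm(WAp @ np.diag(d_paid), 'fro'))

W = np.array([[1., 1.], [0., 1.]])
A = np.array([[1., 0.], [0., 1.], [1., 1.]])         # one free row, two paid
b_T = tight_bound(W, A, b_free=[0.5], n_free=1, alpha=5.0, beta=0.1)
print(b_T > 0)                                       # -> True
```

Note that noisier cached entries (larger `b_free`) shrink the numerator and hence the bound, as expected.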
## Appendix C Implementation Details
We implement an initial prototype of CacheDP in Python. We use the source
code provided by Ge et al. to evaluate APEx
(https://github.com/cgebest/APEx). Since the authors of Pioneer did not
publish their code, we implement it from scratch in Python following the
paper. We implement a simple composite plan using all hyperparameters as
described in the paper. We show in Section 8.2.1 that our implementation
reproduces performance similar to the results reported in the original paper
(Peng et al., 2013, Section 7.2.1). We make our full evaluation, including
our implementations of related work, publicly available
(https://git.uwaterloo.ca/m2mazmud/cachedp-public.git).
We note that Pioneer uses variance-based accuracy, whereas APEx uses $(\alpha,\beta)$-accuracy requirements. This is not a problem for CacheDP, as we accept both types of accuracy requirement. To run Pioneer on workloads with an $(\alpha,\beta)$ requirement, we use a Monte Carlo simulation to search for the variance that satisfies the accuracy requirement. To run APEx on workloads with a variance-based accuracy requirement, we utilize a tail bound on the Laplace distribution (since we find empirically that APEx uses the Laplace Mechanism for all single range queries) to derive the $(\alpha,\beta)$ parameters.
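The variance-to-$(\alpha,\beta)$ conversion can be sketched as follows, assuming pure Laplace noise with scale $b$, so $\mathrm{Var}=2b^{2}$ and the tail bound is $\Pr[|X|>\alpha]=e^{-\alpha/b}$ (the function names are ours):

```python
import math

def beta_from_variance(variance, alpha):
    """Laplace noise with Var = 2 b^2 has scale b = sqrt(Var / 2);
    the tail bound then gives beta = P(|X| > alpha) = exp(-alpha / b)."""
    b = math.sqrt(variance / 2.0)
    return math.exp(-alpha / b)

def variance_from_alpha_beta(alpha, beta):
    """Inverse direction: the smallest Laplace scale satisfying the
    (alpha, beta)-requirement is b = alpha / ln(1 / beta)."""
    b = alpha / math.log(1.0 / beta)
    return 2.0 * b ** 2
```

The two directions are mutually inverse, so a workload stated in either accuracy form can be run against either system.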
## Appendix D Evaluation of the SE heuristics
In this section, we reason about the effectiveness of the strategy expander
heuristics, both experimentally and theoretically.
##### Experiments:
We first quantify the probability of SE being selected over MMM using the
results of our frequency analysis, from Table 6. We refer the reader to
Section 8 for all experimental details. Out of all paid workloads for which
MMM and SE were selected (second and fourth rows of the table), we consider
the probability for SE to be selected. SE is overwhelmingly more likely to be
chosen over MMM on both BFS and DFS tasks, with the probability of selection
being $90\%$ for BFS and $95\%$ for DFS. On the other hand, MMM is around
twice as likely to be chosen over SE on both RRQ and IDEBench tasks, as the SE
selection probability is $31\%$ for RRQ and $34\%$ for IDEBench. We conclude
that the likelihood of the SE module being chosen, and thus the success of our
heuristics, depends on the workload sequence since it influences the contents
of our cache. Averaging across all four tasks, the likelihood of the SE module being chosen over MMM is around $62\%$.
##### Theoretical Analysis:
We analyze the conditions under which our heuristics result in the SE module being selected. The SE module is only selected if it has a lower cost than the original strategy in MMM. The privacy cost for our mechanisms is inversely proportional to the error term, for a given $(\alpha,\beta)$ or $\alpha^{2}$-expected total square error. Thus, our heuristics are only successful if they make the error term for the SE module smaller than the error term for MMM.
Recall that Figure 6 included an example showing that expanding the strategy can increase the error term. We analyze why such situations can arise and describe conditions under which our noise parameter-based heuristic can reduce the error. Our heuristic of choosing $b_{\ell+1}<b_{\mathbf{P}}$ strictly improves the error under the following condition:
###### Theorem D.1.
Given a workload $\mathbf{W}$, a strategy $\mathbf{A}$ of $\ell$ rows, and a
noise vector $\mathbf{b}$, adding a new row to $\mathbf{A}$ to form
$\mathbf{A}_{e}$ reduces the error, i.e.,
(18)
$\|\mathbf{W}\mathbf{A}^{+}Diag(\mathbf{b})\|_{F}^{2}\geq\|\mathbf{W}\mathbf{A}_{e}^{+}Diag(\mathbf{b}||b_{\ell+1})\|_{F}^{2}$
if all entries in $\mathbf{b}$ equal $b^{*}$ and
$b_{\ell+1}<b_{\mathbf{P}}\leq b^{*}$, for any $b^{*}$.
###### Proof.
We recall the following theorem proved by Li et al. (Li et al., 2015) in their
MM paper.
(19)
$\|\mathbf{W}\mathbf{A}^{+}\|_{F}^{2}\geq\|\mathbf{W}\mathbf{A}_{e}^{+}\|_{F}^{2}$
For $\mathbf{b}=[b^{*}\cdots b^{*}]$, we transform Equation 18 to:
(20) $\displaystyle\|\mathbf{W}\mathbf{A}^{+}Diag(\mathbf{b})\|_{F}^{2}$
$\displaystyle=$
$\displaystyle\|\mathbf{W}\mathbf{A}^{+}(b^{*}I)\|_{F}^{2}=(b^{*})^{2}\|\mathbf{W}\mathbf{A}^{+}\|_{F}^{2}$
(21) $\displaystyle\geq$
$\displaystyle(b^{*})^{2}\|\mathbf{W}\mathbf{A}_{e}^{+}\|_{F}^{2}$ (22)
$\displaystyle=$
$\displaystyle\|\mathbf{W}\mathbf{A}_{e}^{+}Diag(\mathbf{b}||b^{*})\|_{F}^{2}$
(23) $\displaystyle\geq$
$\displaystyle\|\mathbf{W}\mathbf{A}_{e}^{+}Diag(\mathbf{b}||b_{\ell+1})\|_{F}^{2}$
where the first inequality comes from applying Li et al.’s result (Equation
19) and the final inequality by applying the condition: $b_{\ell+1}\leq
b^{*}$. ∎
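Theorem D.1 can also be sanity-checked numerically; the small instance below (our own illustrative matrices, not one from the paper) verifies the inequality with equal cached noise $b^{*}$ and a cheaper expanded row $b_{\ell+1}<b^{*}$:

```python
import numpy as np

def err(W, A, noise):
    """The squared error term ||W A^+ Diag(noise)||_F^2."""
    return np.linalg.norm(W @ np.linalg.pinv(A) @ np.diag(noise), "fro") ** 2

W = np.array([[1.0, 1.0]])          # workload: one range query (illustrative)
A = np.eye(2)                       # original strategy with l = 2 rows
A_e = np.vstack([A, [1.0, 1.0]])    # strategy expanded by one row

b_star, b_new = 1.0, 0.5            # equal noise, cheaper new row
b = np.full(2, b_star)
b_e = np.append(b, b_new)           # b || b_{l+1}

assert err(W, A_e, b_e) <= err(W, A, b)   # inequality of Equation (18)
```

Here the expanded strategy answers the range query directly with the cheaper new row, so the error drops as the theorem predicts.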
When the noise parameters are not all equal, the sufficiency conditions become
more complicated. For example, if we modify the above counterexample to have a
slightly more accurate expanded row ($b_{\ell+1}=3b<4b$), we get a lower error
than MMM:
$\|\mathbf{W}_{1}\mathbf{A}_{1}^{+}diag(\mathbf{b}_{1})\|=28b^{2},\
\|\mathbf{W}_{1}\mathbf{A}_{1e}^{+}diag(\mathbf{b}_{1e})\|=26.5b^{2}$
Thus our greedy heuristic, which does not simply add entries less than $b_{\mathbf{P}}$ but prioritizes the most accurate rows first, would be effective in this scenario. We also observe that the distribution of the existing noise parameters in $\mathbf{b}$ is important. If the noise parameters are all close to each other, strategy expansion is likely to be more beneficial. For example, changing the second entry of $\mathbf{b}_{1}$ from $b$ to $2b$ (i.e., $\mathbf{b}_{1e}=[b,2b,5b,4b]$) results in a smaller error than MMM:
$\|\mathbf{W}_{1}\mathbf{A}_{1}^{+}diag(\mathbf{b}_{1})\|=31.0b^{2},\
\|\mathbf{W}_{1}\mathbf{A}_{1e}^{+}diag(\mathbf{b}_{1e})\|=30.2b^{2}$
It is evident that both the distribution of the noise parameters in $\mathbf{b}$ and the new noise parameter $b_{\ell+1}$ influence the success of our accuracy heuristic. Additionally, we observe that a newly added row
could satisfy our final heuristic by being a parent or a child of _any_ of the
existing rows in $\mathbf{A}_{1}$. We find that the structure of the expanded
strategy $\mathbf{A}_{1e}$ also significantly influences the error term. For
example, changing the final row in $\mathbf{A}_{1e}$ from $(1,1,1)$ to
$(1,0,1)$ or $(0,1,1)$ also reduces the error for $\mathbf{A}_{1e}$ to
$26.5b^{2}$ and $20b^{2}$ respectively, under the exact same noise parameters
as our counterexample. However, changing the final row to $(1,1,0)$ increases
the error to $34.7b^{2}$.
##### Conclusion:
We observe that whether an additional row selected by our noise parameter heuristics will succeed in decreasing the error for SE over MMM depends on how the new row changes elements in $\mathbf{W}\mathbf{A}_{e}^{+}$. However, we can guarantee that when the noise parameters in $\mathbf{b}$ are sufficiently similar, the strategy expander will reduce the error regardless of
the structure. Our structure-based heuristic only allows strategy queries
related through a parent-child relationship to an existing strategy query in
$\mathbf{A}$. Future research may model the success of this heuristic, by
analyzing the relation between $\mathbf{W}\mathbf{A}^{+}$ and
$\mathbf{W}\mathbf{A}_{e}^{+}$ for $\mathbf{A}$ and $\mathbf{A}_{e}$ that
differ by a parent or child row.
# Cooperate or not Cooperate: Transfer Learning with Multi-Armed Bandit for
Spatial Reuse in Wi-Fi
Pedro Enrique Iturria-Rivera1, Marcel Chenier2, Bernard Herscovici2, Burak Kantarci1, and Melike Erol-Kantarci1
1School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Canada
2NetExperience Inc., Ottawa, Canada
Emails: {pitur008, burak.kantarci<EMAIL_ADDRESS>{marcel, <EMAIL_ADDRESS>
###### Abstract
The exponential increase in wireless devices with highly demanding services such as streaming video, gaming and others has imposed several challenges on Wireless Local Area Networks (WLANs). In the context of Wi-Fi, IEEE 802.11ax brings high data rates in dense user deployments. Additionally, it comes with new flexible features in the physical layer, such as a dynamic Clear Channel Assessment (CCA) threshold, with the goal of improving spatial reuse (SR) in response to radio spectrum scarcity in dense scenarios. In this paper, we formulate the Transmission Power (TP) and CCA configuration problem with the objective of maximizing fairness and minimizing station starvation. We present four main contributions to distributed SR optimization using Multi-Agent Multi-Armed Bandits (MA-MABs). First, we propose to reduce the action space, given the large cardinality of combinations of TP and CCA threshold values per Access Point (AP). Second, we present two deep Multi-Agent Contextual MABs (MA-CMABs), named Sample Average Uncertainty (SAU)-Coop and SAU-NonCoop, as cooperative and non-cooperative versions to improve SR. In addition, we present an analysis of whether cooperation is beneficial using MA-MAB solutions based on the $\epsilon$-greedy, Upper Confidence Bound (UCB) and Thompson sampling techniques. Finally, we propose a deep reinforcement transfer learning technique to improve adaptability in dynamic environments. Simulation results show that cooperation via the SAU-Coop algorithm contributes a 14.7% improvement in cumulative throughput and a 32.5% improvement in PLR when compared with no-cooperation approaches. Finally, under dynamic scenarios, transfer learning contributes to mitigating service drops for at least 60% of users.
Index Terms — Wi-Fi, 802.11ax, Multi-Agent Multi-Armed Bandits, spatial reuse,
deep transfer reinforcement learning.
## I Introduction
Wireless connectivity has become an irreplaceable commodity in our modern society. The exponential trend expected in wireless technology usage has led analysts to predict that by 2023, $71\%$ of the global population will enjoy some kind of wireless service. In the group of Wireless Local Area Networks (WLANs), Wireless Fidelity (Wi-Fi) technology presents growth of up to 4-fold over the 5-year period from 2018 to 2023 [1]. The newest Wi-Fi standard, IEEE 802.11ax [2], also known as Wi-Fi 6, is expected to grow 4-fold by 2023, becoming $11\%$ of all public Wi-Fi hotspots [3].
Spatial reuse (SR) has been of interest for more than 20 years in the wireless community, since it contributes to reducing collisions among stations and determining channel access rights [4]. As the number of dense WLAN deployments increases, SR becomes more challenging in the context of the Carrier Sense Multiple Access (CSMA) technology used in Wi-Fi [5]. Wi-Fi 6 aims to address diverse challenges such as the increasing number of Wi-Fi users, dense hotspot deployments and highly demanding services such as Augmented, Mixed and Virtual Reality.
Figure 1: Typical operational scenario: APs adjust their Transmission Power
and CCA threshold towards an efficient spatial reuse.
Moreover, 802.11ax includes additional features such as dynamic adjustment of the Clear Channel Assessment (CCA) threshold and Transmission Power (TP). A static CCA threshold may not be representative of diverse network topologies, and can cause inefficient channel utilization or concurrent transmissions [6]. Additionally, adjusting the TP reduces the interference among the APs and consequently maximizes network performance [7]. Thus, SR and network performance can be improved by adjusting CCA and TP. Yet, the complex interactions between CCA and TP call for intelligent configuration of both.
At the same time, data scarcity and data access are key concerns for any Machine Learning (ML) method [8]. Recently, AI-based wireless networks have been of remarkable interest among researchers in both the Wi-Fi [9] and 5G [10] domains; however, the proposed solutions usually require complete availability of the data. In reality, data access is not always feasible due to privacy restrictions.
Recent wireless network architectures have started to shift to a more open and flexible design. Both 5G networks and the O-RAN Alliance architecture support the utilization of artificial intelligence to orchestrate main network functions [11]. In the context of Wi-Fi, a novel project named OpenWiFi [12], released by the Telecom Infra Project, intends to disaggregate the Wi-Fi technology stack by utilizing open-source software for the cloud controller and the AP firmware operating system. These paradigm changes allow many applications in the area of ML, and more specifically Reinforcement Learning (RL) applications, to become reality.
In this paper (the present work has been submitted to IEEE), we intend to optimize the TP and CCA threshold to improve SR and overall network KPIs. To do so, we formulate the TP and CCA configuration problem with the objective of maximizing product network fairness and minimizing station starvation. We model the SR problem as a distributed multi-agent decision-making problem and use a Multi-Agent Multi-Armed Bandit (MA-MAB) approach to solve it. The contributions of this work, different from the ones found in the literature, can be summarized in the following points:
1. 1.
We propose a solution for reducing the inherently large action space given the possible combinations of TP and CCA threshold values per AP. We derive our solution via a worst-case interference analysis.
2. 2.
We analyse the performance of the network KPIs of well-known distributed MA-MAB implementations such as $\epsilon$-greedy, UCB and Thompson sampling on the selection of the TP and CCA values in cooperative and non-cooperative settings.
3. 3.
We introduce a contextual MA-MAB (MA-CMAB) named Sample Average Uncertainty-
Sampling (SAU) in cooperative and non-cooperative settings. SAU-MAB is based
on a deep Contextual MAB.
4. 4.
We propose for the first time, to the best of our knowledge, a deep transfer
learning solution to adapt efficiently TP and CCA parameters in dynamic
scenarios.
With these contributions, our simulation results show that the $\epsilon$-greedy MAB solution improves the throughput by at least 44.4%, and provides improvements of 12.2% in terms of fairness and 94.5% in terms of Packet Loss Ratio (PLR) over typical configurations when a reduced set of actions is known. Additionally, we show that the SAU-Coop algorithm improves the throughput by 14.7% and the PLR by 32.5% when compared with non-cooperative approaches with the full set of actions. Moreover, our proposed transfer learning-based approach reduces service drops by at least 60%.
The rest of the paper is organized as follows. Section II presents a summary of recent work that uses Machine Learning to improve SR in Wi-Fi. Section III covers the basics of Multi-Armed Bandits, including deep contextual bandits and deep transfer reinforcement learning. Section IV presents our system model together with an analysis to reduce the action space via worst-case interference. Section V presents the proposed schemes, and the results are discussed in Section VI. Finally, Section VII concludes the paper.
## II Related work
Reinforcement learning-based spatial reuse has been of interest in recent
literature. The studies have focused on distributed solutions with no
cooperation or centralized schemes of multi-armed bandits. These studies are
summarized below.
In [13], the authors present a comparison among well-known MABs such as $\epsilon$-greedy, UCB, Exp3 and Thompson sampling in the context of decentralized SR via Dynamic Channel Allocation (DCA) and Transmission Power Control (TPC) in WLANs. The results showed that “selfish learning” in a sequential manner performs better than “concurrent learning” among the agents. Additionally, [14] presents a centralized MAB consisting of an
optimizer based on a modified Thompson Sampling (TS) algorithm and a sampler
based on Gaussian Mixture (GM) algorithm to improve SR in 802.11ax Wi-Fi. More
specifically, the authors propose to deal with the large action space
comprised by TP and Overlapping BSS/Preamble-Detection (OBSS/PD) threshold by
utilizing a MAB variant called Infinitely Many-Armed Bandit (IMAB).
Furthermore, a distributed solution based on Bayesian optimizations of
Gaussian processes to improve SR is proposed in [15].
Other solutions not based on reinforcement learning can also be found in the literature with the aim of improving SR in WLANs. For instance, in [16] the authors propose a distributed algorithm where the APs decide their Transmission Power based on their RSSI. Moreover, in [17] the authors present an algorithm to improve SR by utilizing diverse metrics such as SINR, proximity information, RSSI and BSS color, and compare it with the existing legacy algorithms. The ultimate goal of the previous algorithm is the selection of the channel state (IDLE or BUSY) at the moment of an incoming frame, given the previous metrics. Finally, the authors in [18] present a supervised federated learning approach for SR optimization.
In all the above works, the authors employ either centralized or decentralized schemes with no cooperation to address SR optimization in Wi-Fi. In this work, we propose to address this via a coordination-based MA-MAB. In addition, we tackle some of the issues previously encountered in other works, such as the size of the action space due to the set of possible TP and CCA values. Finally, to the best of our knowledge, we propose for the first time to address SR adaptation in dynamic environments utilizing deep transfer learning.
## III Background
In this section, we present a background on Multi-Armed Bandits, including $\epsilon$-greedy, Upper Confidence Bound and Thompson sampling bandits, and an introduction to contextual MABs with a focus on a neural network-based contextual bandit. Additionally, we extend MABs to the multi-agent setting, and we finish with a background on deep transfer reinforcement learning.
Multi-Armed Bandits (MABs) are a widely used RL approach that tackles the exploration-exploitation trade-off problem. Their implementation is usually simpler when compared with full RL off-policy or on-policy algorithms. However, simplicity often comes at the cost of obtaining suboptimal solutions [19]. The basic model of MABs corresponds to the stochastic bandit, where the agent has $K$ possible actions to choose from, called arms, and receives a certain reward $R$ as a consequence of pulling the $j^{th}$ arm over $T$ environment steps. The rewards can be modeled as independent and identically distributed (i.i.d.), adversarial, constrained-adversary or random-process rewards [20]. Of the four models previously mentioned, two are more commonly found in the literature: the i.i.d. and the adversarial models. In the i.i.d. model, each pulled arm’s reward is drawn independently from a fixed but unknown distribution $D_{j}$ with an unknown mean $\mu_{j}^{*}$. On the other hand, in the adversarial model each pulled arm’s reward is randomly sampled by an adversary or alien to the agent (such as the environment) and not necessarily sampled from any distribution [21]. The performance of MABs is measured in terms of the cumulative regret $R_{T}$, or total expected regret over the $T$ steps, defined as:
$R_{T}=\sum_{t=1}^{T}\mathbb{E}\left[\max_{j}\mu_{j}^{*}-\mu_{j_{t}}^{*}\right],$
where $j_{t}$ is the arm pulled at step $t$. The utmost goal of the agent is to minimize $R_{T}$ over the $T$ steps such that $\lim_{T\to\infty}R_{T}/T=0$, which means the agent will identify the action with the highest reward in such a limit.
### III-A $\epsilon$-greedy, Upper-Confidence-Bound and Thompson Sampling MAB
The $\epsilon$-greedy MAB is one of the simplest MABs and, as the name suggests, is based on the $\epsilon$-greedy policy. In this method, the agent greedily selects the best arm most of the time and, once in a while, with a predefined small probability ($\epsilon$), selects a random arm [22]. The UCB MAB tackles some of the disadvantages of the $\epsilon$-greedy policy when selecting non-greedy arms. Instead of drawing an arm at random, the UCB policy measures how close promising non-greedy arms are to optimal. In addition, it takes the rewards’ uncertainty into consideration in the selection process. The selected arm is obtained by drawing the action from $\text{{argmax}}_{a}\left[Q_{t}(a)+c\sqrt{\text{ln }{t}/N_{t}(a)}\right]$, where $N_{t}(a)$ corresponds to the number of times that action $a$, via the $j^{th}$ arm, has been chosen and $Q_{t}(a)$ is the Q-value of action $a$ [22, 23]. Finally, the Thompson Sampling MAB’s action selection is based on the Thompson Sampling algorithm, as the name indicates. Thompson sampling, or posterior sampling, is a Bayesian algorithm that constantly constructs and updates the distribution of the observed rewards given a previously selected action. This allows the MAB to select arms based on the probability of how optimal the chosen arm is. The parameters of the distribution are updated depending on the selection of the distribution class [24].
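A minimal i.i.d. Bernoulli bandit sketch contrasting the three selection rules above (the arm means, horizon and hyperparameters are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.2, 0.5, 0.8])      # unknown arm means (illustrative)
T, eps, c = 5000, 0.1, 2.0

def run(policy):
    n = np.zeros(len(mu))           # pull counts N_t(a)
    q = np.zeros(len(mu))           # empirical means Q_t(a)
    regret = 0.0
    for t in range(1, T + 1):
        if policy == "eps" and rng.random() < eps:
            a = int(rng.integers(len(mu)))          # occasional random arm
        elif policy == "ucb":
            bonus = c * np.sqrt(np.log(t) / np.maximum(n, 1e-12))
            a = int(np.argmax(q + bonus))           # optimism bonus
        elif policy == "ts":
            theta = rng.beta(q * n + 1.0, (1.0 - q) * n + 1.0)
            a = int(np.argmax(theta))               # Beta posterior sample
        else:
            a = int(np.argmax(q))                   # greedy default
        r = float(rng.random() < mu[a])             # Bernoulli reward
        n[a] += 1.0
        q[a] += (r - q[a]) / n[a]                   # incremental mean update
        regret += mu.max() - mu[a]                  # per-step expected regret
    return regret

regrets = {p: run(p) for p in ("eps", "ucb", "ts")}
```

Over a long horizon, the $\epsilon$-greedy regret keeps growing linearly because of its constant exploration rate, while the UCB and Thompson variants concentrate their pulls on the best arm.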
### III-B Deep Contextual Multi-Armed Bandits
Contextual Multi-Armed Bandits (CMABs) are a variant of MABs that, before selecting an arm, observe a series of features commonly named the context [19]. Different from the stateless MAB, a CMAB is expected to relate the observed context to the feedback or reward gathered from the environment in $T$ episodes and consequently predict the best arm given the received features [21]. Diverse CMABs have been proposed throughout the literature, such as LinUCB, Neural Bandit, Contextual Thompson Sampling and Active Thompson Sampling [19]. More recently, a deep neural contextual bandit named SAU-Sampling has been presented in [25], where the context is related to the rewards using neural networks. The details of SAU-Sampling will be discussed in the following sections.
### III-C Multi-Agent Multi-Armed Bandits (MA-MABs)
Multi-Agent Multi-Armed Bandits (MA-MABs) are the multi-agent variant of MABs in which $N$ agents pull their $j^{th}$ arm and each $m^{th}$ agent receives a reward drawn from its distribution $D_{m,j}$ with an unknown mean $\mu_{m,j}^{*}$ [26]. MA-MABs can be modeled as centralized or distributed. In centralized settings the agents’ actions are taken by a centralized controller, and in distributed settings each agent independently chooses its own actions. Distributed decision-making settings scale more effectively [27] and naturally deal with large sets of $K$ arms, whereas centralized settings suffer from an explosion in the cardinality of the arms. Finally, the total regret can be defined as:
$R_{T}=\sum_{t=1}^{T}\sum_{m=1}^{N}\mathbb{E}\left[\max_{j}\mu_{m,j}^{*}-\mu_{m,j_{t}}^{*}\right],$
where $j_{t}$ is the arm pulled by agent $m$ at step $t$. In this work, we consider two main approaches: distributed non-cooperative and cooperative MA-MABs with adversarial rewards.
### III-D Deep Transfer Reinforcement Learning
Transfer learning, or knowledge transfer, techniques improve learning time efficiency by utilizing prior knowledge. Typically, this is done by extracting knowledge from one or several source tasks and then applying such knowledge to a target task [28]. If the tasks are related in nature and the target task benefits from the knowledge acquired from the source, this is called inductive transfer learning [29]. This type of learning is not uncommon and is used by the human brain on a daily basis. However, a phenomenon called negative transfer can occur if, after knowledge transfer, the target task’s performance is negatively affected [30].
Within the realm of transfer learning we find Deep Transfer Learning (DTL). DTL is a subset of transfer learning that studies how to utilize knowledge in deep neural networks. In the context of classification/prediction tasks, a large amount of data is required to properly train the model of interest [31]. In many practical applications where training time is essential to respond to new domains [32], retraining using large amounts of data is not always feasible and is possibly catastrophic in terms of performance. “What to transfer” corresponds to one of the main research topics in transfer learning. Specifically, in the case of deep transfer learning, four categories have been broadly identified: instance-based transfer, where data instances from a source task are utilized; mapping-based transfer, where a mapping of two tasks is used on a new target task; network-based transfer, where the pre-trained network model is transferred to the target task; and adversarial-based transfer, where an adversarial model is employed to find which features from diverse source tasks can be transferred to the target task [33].
In this work, we utilize the form of DTL called network-based transfer learning to efficiently adapt the TP and CCA parameters in dynamic scenarios. An example of the network-based transfer learning technique is presented in Fig. 2. This technique is utilized in deep transfer reinforcement learning as part of a transfer learning type called policy transfer [34]. In particular, policy transfer takes a set of source policies $\pi_{S_{1}},...,\pi_{S_{K}}$ that are trained on a set of source tasks and uses them in a target policy $\pi_{T}$ in a way that is able to leverage the knowledge from the source policies to learn its own. More specifically, the weights and biases that comprise each of the hidden layers of the source policies are the elements transferred to the target policies. Note that in practice the policies are modeled as neural networks.
Figure 2: Network-based transfer learning: the neural network source task’s
hidden layers are reutilized in the target network
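The transfer step in Fig. 2 can be sketched as follows, with policies represented as plain lists of weight matrices (the layer sizes and initialization are illustrative, not those of the paper's networks):

```python
import numpy as np

rng = np.random.default_rng(1)

def init_policy(sizes):
    """A policy as a list of (W, b) layers; the last entry is the
    task-specific output head."""
    return [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def policy_transfer(source, target):
    """Network-based transfer: reuse every hidden layer of the source
    policy and keep the target's freshly initialized output head."""
    return [(W.copy(), b.copy()) for W, b in source[:-1]] + [target[-1]]

source = init_policy([8, 32, 32, 4])   # trained on the source task
target = init_policy([8, 32, 32, 6])   # target task, different action set
target = policy_transfer(source, target)
```

Only the output head is retrained from scratch; the shared hidden layers start from the source task's representation, which is what speeds up adaptation.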
In this paper, we take advantage of the design of the contextual multi-armed bandit presented in [25] and apply policy transfer to improve the agent’s SR adaptability in dynamic environments. The results and observations of applying DTRL are discussed in Section VI-E. In the next section, we discuss the details of the system model and present an analysis on reducing the cardinality of the action space in the proposed SR problem formulation.
## IV System model and Problem Formulation
TABLE I: Notations

Notation | Definition
---|---
$s$ and $\mathcal{S}$ | Index and set of stations,
$m$ and $\mathcal{M}$ | Index and set of APs,
$x^{|\mathcal{S}|}$ and $c^{|\mathcal{M}|}$ | Stations’ positions and APs’ positions,
$P_{cs}^{m}$ | CCA threshold of the $m^{th}$ AP,
$P_{tx}^{m}$ | Transmission Power of the $m^{th}$ AP,
$R_{s}^{m}$ | Throughput of the $s^{th}$ STA of the $m^{th}$ AP,
$R_{s,A}^{m}$ | Achievable throughput of the $s^{th}$ STA of the $m^{th}$ AP,
$D_{s}^{m}$ | Adaptive link data rate of the $s^{th}$ STA of the $m^{th}$ AP,
$P_{IDLE}^{m}$ | Probability that an STA is idle in a BSS,
$P_{SUCC,s}^{m}$ | Probability of a successful transmission by the $s^{th}$ STA to the $m^{th}$ AP,
$\phi_{s}^{m}$ | Probability that the $s^{th}$ STA is transmitting to the $m^{th}$ AP,
$\xi_{CCA}$ | Binary function, $\xi_{CCA}=1$ if the signal is below the CCA threshold $P_{cs}$,
$\xi_{ED}$ | Binary function, $\xi_{ED}=1$ if the signal is below the Energy Detection (ED) threshold $P_{ed}$,
$\xi_{STA}$ | Binary function indicating whether a station’s throughput exceeds a fraction $\omega$ of its achievable throughput,
$E(T_{g,s}^{m})$ and $E(I_{g,s}^{m})$ | Expected length of a general time slot and expected information transmitted by the $s^{th}$ STA of the $m^{th}$ AP,
$T_{TXOP}$ and $T_{EDCA}$ | Packet transmission duration and time required for a successful Enhanced Distributed Channel Access (EDCA) transmission,
$\bar{P}^{fair}$ and $\bar{U}$ | Average linear product-based network fairness and average station starvation,
$\omega$, $g_{s}^{m}$ and $\sigma^{2}$ | Fraction of $R_{s,A}^{m}$ below which STAs are considered in starvation, the channel power gain and the noise power,
$P_{tx}^{m}$ and $P_{rx}^{r}$ | The transmission power at the $m^{th}$ transmitter (AP) and the received signal strength at the $r^{th}$ receiver,
$d_{m,r}$ and $\theta$ | Distance between the $m^{th}$ transmitter and the $r^{th}$ receiver, and the path loss exponent,
$\mathcal{F}_{m}^{+}$ and $\mathcal{F}_{m}^{-}$ | Subsets of AP interferers and non-AP interferers,
$\gamma_{m,r}$, $C_{m,r}$ and $C_{T}$ | Worst-case SINR and Shannon’s maximum capacity between the $m^{th}$ transmitter and the $r^{th}$ receiver, and cumulative maximum network capacity.
In this work, we consider an infrastructure-mode Wi-Fi 802.11ax network $\mathcal{N}$ with $N=|\mathcal{S}|+|\mathcal{M}|$ nodes, where $\mathcal{S}$ is the set of stations with positions $\{\bm{x}^{1},\bm{x}^{2},...,\bm{x}^{|\mathcal{S}|}\}\in\mathbb{R}^{2}$ and $\mathcal{M}$ is the set of APs with positions $\{\bm{c}^{1},\bm{c}^{2},...,\bm{c}^{|\mathcal{M}|}\}\in\mathbb{R}^{2}$. We assume that the $|\mathcal{M}|$ AP positions correspond to cluster centers and that the stations attach to their closest AP. The list of notations utilized in this work can be found in Table I.
In this paper, we improve SR via maximization of the linear product-based
fairness and minimization of the number of stations under starvation by
configuring TP and CCA parameters.
Max $\displaystyle\begin{pmatrix}\text{fairness}\\\ \text{avg. station starvation complement}\end{pmatrix}$ (2a)
s.t. Throughput (2b)
var. Transmission power and CCA threshold selection (2c)
Let’s define the probability of an STA being idle in a BSS as:
$\displaystyle P_{IDLE}^{m}=\prod_{s\in\mathcal{S}}\phi_{s}^{m^{\prime}}$
$\displaystyle\forall m\in\mathcal{M}.$ (3)
where $\phi_{s}^{m}\in[0,1]$ is the probability of an STA transmitting to the $m^{th}$ AP. In addition, we define the probability with which an STA successfully transmits a packet as:
$\displaystyle
P_{SUCC,s}^{m}=\phi_{s}^{m}\xi_{CCA}^{m}(\cdot)\xi_{ED}^{m}(\cdot)\prod_{s^{\prime}\in\mathcal{S},s^{\prime}\neq
s}^{\mathcal{S}}\phi_{s}^{m^{\prime}}$ $\displaystyle\forall m\in\mathcal{M}.$
(4)
where $\xi_{CCA}(\cdot)=1$ if the sensed signal of a packet sent by the $s^{th}$ STA is below the CCA threshold ($P_{cs}$), and zero otherwise. Similarly, $\xi_{ED}(\cdot)=1$ if the sensed signal of a packet sent by the $s^{th}$ STA is below the Energy Detection (ED) threshold ($P_{ed}$), and zero otherwise. Additionally, we consider $P_{cs}=P_{ed}$ to simplify our analysis. As indicated by [35], the expected length of the general time slot $\mathbb{E}(T_{g})$ and the expected information transmitted by the $s^{th}$ STA to the $m^{th}$ AP $\mathbb{E}(I_{g})$ can be expressed as:
$\displaystyle E(T_{g,s}^{m})=\delta P_{IDLE}^{m}+P_{IDLE}^{m^{\prime}}T_{EDCA}$ $\displaystyle\forall m\in\mathcal{M}.$ (5)
$\displaystyle E(I_{g,s}^{m})=P_{SUCC,s}^{m}D_{s}^{m}T_{TXOP}$ $\displaystyle\forall m\in\mathcal{M},s\in\mathcal{S}.$ (6)
where $D_{s}^{m}$ corresponds to the link data rate, $T_{EDCA}$ corresponds to the time required for a successful Enhanced Distributed Channel Access (EDCA) transmission, $T_{TXOP}$ is the transmission duration and $\delta$ is the duration of an idle time slot. The link data rate adaptively depends on the SNR [36] and is mapped based on SNR/BER curves [37]. The received SNR can be defined as $P_{tx}^{m}g_{s}^{m}/\sigma^{2}$, where $P_{tx}$ is the transmission power, $g_{s}^{m}$ the channel power gain and $\sigma^{2}$ the noise power.
Finally, the throughput of the $s^{th}$ station attached to the $m^{th}$ AP
can be defined as:
$\displaystyle
R_{s}^{m}=\frac{E(I_{g,s}^{m})}{E(T_{g,s}^{m})}=\frac{P_{SUCC,s}^{m}D_{s}^{m}T_{TXOP}}{P_{IDLE}^{m}\delta+P_{IDLE}^{m^{\prime}}T_{EDCA}},$
(7)
Additionally, let’s define the average linear product-based network fairness and the average station starvation in a distributed setting:
$\displaystyle\bar{P}^{fair}(t)=\frac{1}{|\mathcal{M}|}\sum_{m\in\mathcal{M}}\prod_{s\in\mathcal{S}}\frac{R_{s}^{m}}{R_{s,A}^{m}},$ (8)
$\displaystyle\bar{U}(t)=\frac{1}{|\mathcal{M}|}\sum_{m\in\mathcal{M}}\frac{1}{|\mathcal{S}|}\sum_{s\in\mathcal{S}}\xi_{STA}(R_{s}^{m}>\omega R_{s,A}^{m}),$ (9)
where $R_{s,A}^{m}$ is the achievable throughput of the $s^{th}$ station attached to the $m^{th}$ AP. Additionally, $\xi_{STA}=1$ if the $s^{th}$ station’s throughput is greater than a fraction $\omega\in(0,1]$ of the achievable throughput, and zero otherwise, in which case the station is considered in starvation. The considered problem is a multi-objective problem and can be addressed with the weighted-sum approach. Thus, in each time step, the problem can be formulated as follows:
###### Problem 1
$\displaystyle\underset{\mathbf{P_{tx}},\mathbf{P_{cs}}}{\operatorname{max}}\;A_{1}\bar{P}^{fair}(t)+A_{2}(1-\bar{U}(t))$ (10)
s.t. $\displaystyle\text{(7)},$ (11)
$\displaystyle P_{tx}^{m}\in[P_{tx}^{min},P_{tx}^{max}],P_{cs}^{m}\in[P_{cs}^{min},P_{cs}^{max}]$ $\displaystyle\forall m\in\mathcal{M}$ (12)
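A sketch of how the weighted-sum reward of Problem 1 could be evaluated from per-AP throughput measurements; here we interpret a station as starved when its throughput falls below $\omega$ times its achievable throughput (our reading of $\bar{U}$), and the weights and threshold are illustrative:

```python
import numpy as np

def objective(rates, achievable, omega=0.8, A1=0.5, A2=0.5):
    """Weighted-sum objective A1 * P_fair + A2 * (1 - U_bar).

    `rates` and `achievable` hold one array per AP with the measured
    and achievable throughput of each attached station.
    """
    fair, starved = [], []
    for R, R_A in zip(rates, achievable):
        fair.append(np.prod(R / R_A))              # per-AP product fairness
        starved.append(np.mean(R < omega * R_A))   # fraction of starved STAs
    return A1 * np.mean(fair) + A2 * (1.0 - np.mean(starved))
```

The product form of the fairness term is what makes the objective reward balanced throughputs: a single badly served station drives the whole AP's fairness factor toward zero.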
Due to the dynamic nature of the scenario, the transmission probabilities of the STAs, $\phi_{s}^{m}$, are not directly controllable and require an additional step to map them to EDCA parameters [35]. Instead, we simplify our analysis by utilizing a network simulator to model such dynamics and propose to solve the previous linear programming (LP) problem using a MA-MAB solution, as described in Section V.
### IV-A Optimal action set via worst-case interference
Typical Wi-Fi scenarios consist of APs and stations distributed non-uniformly.
Contrary to the analysis presented in [38], we aim at obtaining an optimal
subset of TP and CCA threshold values to further reduce the action space size
in SR problems. In this analysis, we only consider the Carrier Sense (CS)
threshold term as the form of the CCA threshold.
First, let’s consider the worst-case interference scenario in a $N>2$
arrangement. For the sake of simplicity we use the path-loss radio propagation
model: $P_{rx}^{r}=\frac{P_{tx}^{m}}{d_{m,r}^{\theta}}$, where $P_{tx}^{m}$
and $P_{rx}^{r}$ are the TP at the $m^{th}$ transmitter (AP) and the received
signal strength at the $r^{th}$ receiver, respectively. In addition,
$d_{m,r}$ is the distance between the transmitter and receiver. Finally,
$\theta\in[2,4]$ corresponds to the path loss exponent. Thus, from the
perspective of the $m^{th}$ AP the worst-case interference $I_{m}$ is defined
as:
$I_{m}=\sum_{v\in\mathcal{F}_{m}^{+}}\frac{P_{tx}^{v}}{\left(X^{(m,v)}\right)^{\theta}}+P_{tx}^{sta}\sum_{w\in\mathcal{F}_{m}^{-}}\frac{1}{\left(X^{(m,w)}\right)^{\theta}},$
where $\mathcal{F}_{m}^{+}$ is the subset of AP interferers, with
$|\mathcal{F}_{m}^{+}|=|\mathcal{M}|-1$, corresponding to the APs interfering
with the $m^{th}$ AP, and $\mathcal{F}_{m}^{-}$ the subset of non-AP
interferers, with $|\mathcal{F}_{m}^{-}|=|\mathcal{S}|$, corresponding to the
stations interfering with the $m^{th}$ AP. Furthermore, $P_{tx}^{v}$ is the
TP of the $v^{th}$ interferer and $P_{tx}^{sta}$ is a constant corresponding
to the fixed power assigned to all the stations, based on the fact that
stations are typically not capable of modifying their TP. Additionally,
$X^{(m,v)}$ and $X^{(m,w)}$ correspond to the distance from the $m^{th}$ AP to
the $v^{th}$ AP interferer and from the $m^{th}$ AP to the $w^{th}$ station
interferer,
respectively. $X^{(m,.)}$ is calculated as follows:
$X^{(m,.)}=\sqrt{(D_{m}+x_{m,.})^{2}+d_{m,r}^{2}-2(D_{m}+x_{m,.})d_{m,r}\cos\varsigma_{r,.}},$
where $(.)$ refers either to the AP or non-AP interferer, $D_{m}$ is the CCA
threshold range of the $m^{th}$ AP, $\varsigma_{r,.}$ is the angle between
the receiver and the interferer $(.)$, and $x_{m,.}$ corresponds to the
distance between any $(.)$ interferer and $D_{m}$.
The corresponding worst-case SINR $\gamma_{m,r}$ at the receiver is defined
as:
$\gamma_{m,r}=\frac{P_{tx}^{m}}{{d_{m,r}}^{\theta}(I_{m}+N_{0})},$ (13)
Let’s assume that $N_{0}\ll I_{m}$; thus the equation reduces to:
$\gamma_{m,r}=\frac{P_{tx}^{m}}{{d_{m,r}}^{\theta}I_{m}},$ (14)
Substituting the interference expression $I_{m}$ and the distance
$X^{(m,.)}$ into (14), we obtain equation (15):
$\displaystyle\gamma_{m,r}=\frac{\frac{P_{tx}^{m}}{{d_{m,r}}^{\theta}}}{\sum_{v\in\mathcal{F}_{m}^{+}}\frac{P_{tx}^{m}}{({\sqrt{(D_{m}+x_{m,v})^{2}+{d_{m,r}}^{2}-2(D_{m}+x_{m,v})d_{m,r}\cos\varsigma_{r,.}}})^{\theta}}+P_{tx}^{sta}\sum_{w\in\mathcal{F}_{m}^{-}}\frac{1}{({\sqrt{(D_{m}+x_{m,w})^{2}+d_{m,r}^{2}-2(D_{m}+x_{m,w})d_{m,r}\cos\varsigma_{r,w}}})^{\theta}}}$
(15)
The aforementioned equation describes $\gamma_{m,r}$ as a function of $D_{m}$
and $d_{m,r}$. Additionally, we substitute
$D_{m}=\left(P_{tx}^{m}/T_{cs}^{m}\right)^{1/\theta}$ in equation (15),
obtaining:
$\gamma_{m,r}=\frac{\frac{P_{tx}^{m}}{d_{m,r}^{\theta}}}{\sum_{v\in\mathcal{F}_{m}^{+}}\frac{P_{tx}^{m}}{\Gamma^{m}}+P_{tx}^{sta}\sum_{w\in\mathcal{F}_{m}^{-}}\iota^{(m,w)}},$
where,
$\scalebox{0.7}{$\Gamma^{m}=\left({\sqrt{\left[\left(\frac{P_{tx}^{m}}{T_{cs}^{m}}\right)^{\frac{1}{\theta}}+x_{m,v}\right]^{2}+d_{m,r}^{2}-2\left[\left(\frac{P_{tx}^{m}}{T_{cs}^{m}}\right)^{\frac{1}{\theta}}+x_{m,v}\right]d_{m,r}\cos\varsigma_{r,v}}}\right)^{\theta}$},$
$\iota^{(m,w)}=\frac{1}{\left(\sqrt{(\Omega_{sta}+x_{m,w})^{2}+d_{m,r}^{2}-2(\Omega_{sta}+x_{m,w})d_{m,r}\cos\varsigma_{r,w}}\right)^{\theta}}$
and
$\Omega_{sta}=\left(\frac{P_{tx}^{sta}}{T_{cs}^{sta}}\right)^{\frac{1}{\theta}}$.
Now, we proceed to define the maximum channel capacity in terms of TP and
Carrier Sense (CS) threshold ($T_{cs}$). Given a certain value of SINR, the
Shannon maximum capacity is expressed as: $C_{m,r}=W\log_{2}(1+\gamma_{m,r}),$
where $W$ is the channel bandwidth in Hz. Then, the cumulative maximum
network capacity can be calculated as:
$C_{T}=\sum_{m=1}^{|\mathcal{M}|-1}\sum_{r=1}^{N}C_{m,r}.$
Figure 3: Network capacity as a function of TP and CS threshold.
Figure 3 shows the maximum network capacity as a function of TP and CS
threshold. As observed, the network capacity achieves its highest values when
a combination of high TP and low CS threshold is utilized. Note that prior
knowledge of the locations is required.
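This capacity trend can be reproduced with a rough numerical sketch (toy geometry, AP interferers only and all transmitting at $P_{tx}^{m}$ as in Eq. (15); all function and parameter names here are hypothetical):

```python
import numpy as np

def worst_case_sinr(ptx, tcs, d_mr, x_int, angles, theta=3.0):
    """Worst-case SINR (Eq. 14) for one AP-receiver link, toy version.
    ptx, tcs in linear units; x_int: interferer offsets beyond the CS range
    D_m; angles: receiver-interferer angles varsigma."""
    D = (ptx / tcs) ** (1.0 / theta)          # D_m = (Ptx / Tcs)^(1/theta)
    # law-of-cosines distance from each interferer to the receiver
    X = np.sqrt((D + x_int) ** 2 + d_mr ** 2
                - 2.0 * (D + x_int) * d_mr * np.cos(angles))
    I = np.sum(ptx / X ** theta)              # worst-case interference
    return (ptx / d_mr ** theta) / I

def capacity(ptx, tcs, W=80e6, **geom):
    """Shannon capacity C = W log2(1 + SINR) for one (TP, CS threshold) pair."""
    return W * np.log2(1.0 + worst_case_sinr(ptx, tcs, **geom))
```

Sweeping `capacity` over a grid of `ptx` and `tcs` values reproduces the qualitative behavior of figure 3: a lower CS threshold enlarges $D_{m}$, pushes interferers away, and raises the achievable capacity.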
## V Proposed Multi-Agent Multi-armed bandit algorithms
In this section, we present the action space, context definition and reward
function for the MA-MAB algorithms utilized in this work.
### V-A Action space
The action space corresponds to the number of combinations of $P_{cs}$ and
$P_{tx}$ which in the context of MABs translates to the number of arms for
each MAB agent. The action space is defined as:
$A_{cs}=\left\{P_{cs}^{min},\,P_{cs}^{min}+\frac{P_{cs}^{max}-P_{cs}^{min}}{L_{cs}-1},\,\ldots,\,P_{cs}^{max}\right\}$
and
$A_{tx}=\left\{P_{tx}^{min},\,P_{tx}^{min}+\frac{P_{tx}^{max}-P_{tx}^{min}}{L_{tx}-1},\,\ldots,\,P_{tx}^{max}\right\}$,
where $P_{cs}^{min}$, $P_{cs}^{max}$ and $P_{tx}^{min}$, $P_{tx}^{max}$ are
the minimum and maximum values of the CCA threshold and TP, respectively.
$L_{cs}$ and $L_{tx}$ correspond to the number of levels into which the CCA
threshold and TP values are discretized, respectively. Finally, the number of
arms corresponding to the action space for the $m^{th}$ agent is
$K_{m}^{AP}=|A_{cs}^{m}|\cdot|A_{tx}^{m}|$.
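The discretization above can be sketched as follows (hypothetical helper; the bounds match Table III, and 21 levels per parameter give the $K=21^{2}$ arms discussed later):

```python
def action_space(p_min, p_max, levels):
    """Uniformly discretize [p_min, p_max] into `levels` values,
    with step (p_max - p_min) / (levels - 1), as in Section V-A."""
    step = (p_max - p_min) / (levels - 1)
    return [p_min + i * step for i in range(levels)]

# Arms for one AP agent: all (CCA threshold, TP) combinations.
A_cs = action_space(-82.0, -62.0, 21)   # CCA threshold levels, dBm
A_tx = action_space(1.0, 21.0, 21)      # TP levels, dBm
arms = [(cs, tx) for cs in A_cs for tx in A_tx]   # K = |A_cs| * |A_tx|
```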
### V-B Reward function in distributed non-cooperative settings
The reward is defined following optimization problem 1. It resembles the
reward presented in [14], which includes a linear product-based fairness and
a station starvation term [17, 14], but defined in a distributed manner. A
station is considered to be in starvation when its performance is below a
predefined percentage of its theoretical achievable throughput. The reward is
defined as:
$\leavevmode\resizebox{390.25534pt}{}{$r_{m}^{AP}=\frac{|\Psi_{m}^{AP}|\prod_{j\in\Psi_{m}^{AP}}\frac{R_{m}^{s}}{\omega
R_{m,A}^{s}}+|N_{m}^{AP}\setminus\Psi_{m}^{AP}|(N_{m}^{AP}+\prod_{j\in
N_{m}^{AP}\setminus\Psi_{m}^{AP}}\frac{R_{m}^{s}}{R_{m,A}^{s}})}{N_{m}^{AP}(N_{m}^{AP}+1)}$},$
(16)
where $\Psi_{m}^{AP}$ is the set of starving stations attached to the
$m^{th}$ AP and $N_{m}^{AP}$ the set of stations attached to the $m^{th}$ AP.
We can also observe that $r_{m}^{AP}\propto C_{m,r}$, the capacity defined in
Section IV-A.
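A direct transcription of Eq. (16) reads as follows (hypothetical helper, assuming a station is starving when $R_{m}^{s}\leq\omega R_{m,A}^{s}$):

```python
def ap_reward(R, R_ach, omega=0.5):
    """Per-AP reward of Eq. (16).
    R, R_ach: per-station achieved / achievable throughputs of one AP."""
    N = len(R)
    starving = [j for j in range(N) if R[j] <= omega * R_ach[j]]
    healthy = [j for j in range(N) if R[j] > omega * R_ach[j]]
    prod_s = 1.0
    for j in starving:
        prod_s *= R[j] / (omega * R_ach[j])   # fairness among starving stations
    prod_h = 1.0
    for j in healthy:
        prod_h *= R[j] / R_ach[j]             # fairness among healthy stations
    return (len(starving) * prod_s
            + len(healthy) * (N + prod_h)) / (N * (N + 1))
```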
In the next subsection, we present the definition of the context considered in
our MA-CMAB solution.
### V-C Distributed Sample Average Uncertainty-Sampling MA-CMAB
In [25], the authors present an efficient contextual multi-armed bandit based
on a “frequentist approach” to compute the uncertainty, instead of using
Bayesian solutions such as Thompson Sampling. The frequentist approach
consists of measuring the uncertainty of the action-values based on the
sample average of the rewards just computed, instead of relying on the
posterior distribution given the past rewards. In this work, we present
multi-agent cooperative and non-cooperative variants of this RL algorithm.
In our problem, the context comprises only the APs’ local observations:
1. Number of starving stations $|\Psi_{m}^{AP}|$ of the $m^{th}$ AP, i.e.,
stations under a fraction $\omega$ of their attainable throughput during
episode $t$.
2. Average RSSI $\overline{S}_{m}^{AP}$ of the $m^{th}$ AP during episode
$t$.
3. Average noise $\overline{\Upsilon}_{m}^{AP}$ of the $m^{th}$ AP during
episode $t$.
Additionally, the context is normalized as follows:
$\psi_{m}^{AP}=|\Psi_{m}^{AP}|/N_{m}^{AP}$,
$s_{m}^{AP}=\begin{cases}0,&-60\text{ dBm}\leq\overline{S}_{m}^{AP}\leq-50\text{ dBm},\\ 0.25,&-70\text{ dBm}\leq\overline{S}_{m}^{AP}<-60\text{ dBm},\\ 0.5,&-80\text{ dBm}\leq\overline{S}_{m}^{AP}<-70\text{ dBm},\\ 0.75,&-90\text{ dBm}\leq\overline{S}_{m}^{AP}<-80\text{ dBm},\\ 1,&\overline{S}_{m}^{AP}<-90\text{ dBm},\end{cases}$
and $\hat{\Upsilon}_{m}^{AP}=\overline{\Upsilon}_{m}^{AP}/100$.
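The normalization above can be sketched as below (hypothetical helper; the RSSI bins follow the five-level mapping, and the noise average is scaled by 100 as written):

```python
def normalize_context(n_starving, n_stations, avg_rssi_dbm, avg_noise_dbm):
    """Normalized context vector x_m = [psi, s, Upsilon_hat] for one AP agent."""
    psi = n_starving / n_stations            # fraction of starving stations
    # RSSI binned into 5 levels (stronger signal -> lower value)
    if avg_rssi_dbm >= -60:
        s = 0.0
    elif avg_rssi_dbm >= -70:
        s = 0.25
    elif avg_rssi_dbm >= -80:
        s = 0.5
    elif avg_rssi_dbm >= -90:
        s = 0.75
    else:
        s = 1.0
    upsilon = avg_noise_dbm / 100.0          # noise average scaled by 100
    return [psi, s, upsilon]
```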
The multi-agent SAU-Sampling algorithm in its non-cooperative version (SAU-
NonCoop) is described in Algorithm 1. The algorithm starts by initializing
the action-value functions $\mu(\bm{x}_{m}|\bm{\hat{\theta}}_{m})$ as deep
neural networks and the exploration parameters $J_{m,a}^{2}$ and $n_{m,a}$
for each $m^{th}$ AP. $n_{m,a}$ corresponds to the number of times action $a$
was selected in the $m^{th}$ AP and $J_{m,a}^{2}$ is defined as an
exploration bonus. In each environment step (Algorithm 1, line 2), each agent
observes its local context and computes the selected arm given the reward
prediction. In Algorithm 1, line 11, each CMAB agent updates
$\bm{\hat{\theta}}_{m,a}$ using stochastic gradient descent on the loss
between the predicted and the observed reward. Finally, the exploration
parameters are updated accordingly, given the prediction error, as depicted
in Algorithm 1, line 12.
1
2Initialize network $\bm{\hat{\theta}}_{m,a}$, exploration parameters
$J_{m,a}^{2}(t=0)=1$ and $n_{m,a}(t=0)=0$ for all actions $a\in K_{m}$.
3for _environment step $t\leftarrow 1$ to $T$_ do
4 for _agent $m$_ do
5 Observe context
${\bm{x}_{m}(t)}=[\psi_{m}^{AP}(t),s_{m}^{AP}(t),\hat{\Upsilon}_{m}^{AP}(t)]$
6 for _$a=1,...,K_{m}$_ do
7 Calculate reward prediction
$\hat{\mu}_{m,a}(t)=\mu(\bm{x}_{m}|\bm{\hat{\theta}}_{m})$ and
$\tau_{m,a}^{2}(t)=J_{m,a}^{2}/n_{m,a}$
8
$\tilde{\mu}_{m,a}\sim\mathcal{N}(\hat{\mu}_{m,a},n_{m,a}^{-1}\tau_{m,a}^{2})$
9 end for
10 Compute $a_{m}(t)=\text{{argmax}}_{a}(\{\tilde{\mu}_{m,a}(t)\}_{a\in
K_{m}})$ if $t>K_{m}$, otherwise $a_{m}(t)\sim\mathcal{U}(0,K_{m})$;
11 Select action $a_{m}(t)$, observe reward $r_{m}^{AP}$;
12 Update $\bm{\hat{\theta}}_{m,a}$ using SGD with gradients $\partial
l_{m}/\partial\theta$, where $l_{m}=0.5(r_{m}^{AP}-\hat{\mu}_{m,a}(t))^{2}$ ;
13 Update $J_{m,a}^{2}\leftarrow J_{m,a}^{2}+e_{m}^{2}$ using prediction error
$e_{m}=r_{m}^{AP}(t)-\hat{\mu}_{m,a}(t)$ and $n_{m,a}\leftarrow n_{m,a}+1$;
14 end for
15
16 end for
17
Algorithm 1 SAU-Sampling MA-CMAB
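A minimal sketch of Algorithm 1 is given below (hypothetical class; a per-arm linear reward model stands in for the paper's two-hidden-layer network, and unpulled arms are guarded with a count of one):

```python
import numpy as np

class SAUSamplingAgent:
    """Per-AP SAU-Sampling CMAB agent (Algorithm 1), simplified."""

    def __init__(self, n_arms, ctx_dim, lr=0.1, seed=0):
        self.K = n_arms
        self.theta = np.zeros((n_arms, ctx_dim + 1))  # linear weights + bias
        self.J2 = np.ones(n_arms)                     # exploration bonus J^2
        self.n = np.zeros(n_arms)                     # per-arm pull counts
        self.t = 0
        self.lr = lr
        self.rng = np.random.default_rng(seed)

    def _mu(self, x):
        z = np.append(x, 1.0)                         # append bias input
        return self.theta @ z, z

    def select(self, x):
        """Sample mu~ ~ N(mu_hat, n^-1 tau^2) per arm, pick the argmax."""
        self.t += 1
        if self.t <= self.K:                          # uniform warm-up phase
            return int(self.rng.integers(self.K))
        mu, _ = self._mu(x)
        n = np.maximum(self.n, 1.0)                   # guard unpulled arms
        tau2 = self.J2 / n                            # tau^2 = J^2 / n
        mu_tilde = self.rng.normal(mu, np.sqrt(tau2 / n))
        return int(np.argmax(mu_tilde))

    def update(self, x, a, r):
        """SGD on l = 0.5 (r - mu_hat)^2, then update J^2 and n."""
        mu, z = self._mu(x)
        e = r - mu[a]                                 # prediction error
        self.theta[a] += self.lr * e * z              # gradient step
        self.J2[a] += e ** 2                          # exploration bonus
        self.n[a] += 1
```

Each AP runs one such agent on its own context; the cooperative variant of Section V-D only changes the reward passed to `update`.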
### V-D Cooperative Sample Average Uncertainty-Sampling MA-CMAB
In this section we present a cooperative version of SAU-Sampling, named
SAU-Coop. Different from the non-cooperative version, the total reward
$r_{m}^{C}$ considers the network Jain’s fairness index in addition to the
local reward $r_{m}^{AP}$:
$r_{m}^{C}=r_{m}^{AP}+r_{\mathcal{J}},$
where $r_{\mathcal{J}}$, the overall network Jain’s fairness index, is
defined as:
$r_{\mathcal{J}}=\mathcal{J}(R_{1},\ldots,R_{|\mathcal{M}|})=\frac{\left(\sum_{m=1}^{|\mathcal{M}|}R_{m}\right)^{2}}{|\mathcal{M}|\cdot\sum_{m=1}^{|\mathcal{M}|}R_{m}^{2}},$
where $R_{m}=\sum_{s=1}^{|\mathcal{S}_{m}|}R_{s}^{m}$ is the total throughput
of all the $S_{m}$ stations of the $m^{th}$ AP.
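For reference, Jain's index and the cooperative reward can be sketched as (hypothetical helpers):

```python
def jain_fairness(throughputs):
    """Jain's fairness index J(R_1, ..., R_|M|) over per-AP throughputs."""
    s = sum(throughputs)
    s2 = sum(r * r for r in throughputs)
    return s * s / (len(throughputs) * s2) if s2 > 0 else 0.0

def cooperative_reward(r_ap, ap_throughputs):
    """Cooperative reward r_m^C = r_m^AP + r_J of the SAU-Coop variant."""
    return r_ap + jain_fairness(ap_throughputs)
```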
### V-E Reward-cooperative $\epsilon$-greedy MA-MAB
In addition to the previous cooperative algorithm, we propose a cooperative
approach based on the classical $\epsilon$-greedy strategy [22] that
incorporates into the action’s reward update a fraction of the average reward
of the other agents. This algorithm is described in Algorithm 2.
1 Initialize $\epsilon_{m}(t=0)=\epsilon_{0}$, $Q_{m,a}(t=0)\leftarrow 0$,
$N_{m,a}(t=0)\leftarrow 0$ and $\beta$.
2for _environment step $t\leftarrow 1$ to $T$_ do
3 for _agent $m$_ do
4 Execute action $a_{m}(t)$:
$a_{m}(t)=\begin{cases}\text{{argmax}}_{k=1,...,K}Q_{m,k}(t)&\text{with
probability }1-\epsilon_{m}(t)\\\ k\sim\mathcal{U}(0,K)&\text{o.w.}\end{cases}$
5 Calculate reward $r_{m}^{AP}(t)$ based on feedback of the environment
6 Update
$Q_{m,a}(t+1)=Q_{m,a}(t)+\frac{1}{N_{m}(t)}\left[\left(r_{m}^{AP}+\beta\cdot\frac{1}{|\mathcal{M}|-1}\sum_{m^{\prime}\neq m}r_{m^{\prime}}^{AP}\right)-Q_{m,a}(t)\right]$
7 Update $N_{m}\leftarrow N_{m}(t)+1$;
8 Update $\epsilon_{m}\leftarrow\frac{\epsilon_{m}(t)}{\sqrt{t}}$
9 end for
10
11 end for
12
Algorithm 2 Reward-cooperative $\epsilon$-greedy MA-MAB
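The core steps of Algorithm 2 can be sketched as follows (hypothetical helpers; `Q_m` and `N_m` are one agent's per-arm tables):

```python
import math
import random

def coop_eps_greedy_select(Q_m, eps, rng=random):
    """Pick the argmax arm w.p. 1 - eps, otherwise a uniform arm."""
    if rng.random() < eps:
        return rng.randrange(len(Q_m))
    return max(range(len(Q_m)), key=lambda k: Q_m[k])

def coop_eps_greedy_update(Q_m, N_m, a, r_local, r_others, beta):
    """Blend the local reward with a beta-weighted average of the other
    agents' rewards, then do an incremental-mean update of Q (Algorithm 2)."""
    r = r_local + beta * sum(r_others) / len(r_others)
    N_m[a] += 1
    Q_m[a] += (r - Q_m[a]) / N_m[a]

def annealed_eps(eps0, t):
    """Annealed exploration rate eps_m(t) = eps0 / sqrt(t)."""
    return eps0 / math.sqrt(t)
```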
Finally, in the next subsection we present the details of the DTRL scheme to
improve SR adaptation in dynamic environments.
### V-F Sample Average Uncertainty-Sampling MA-CMAB based Deep Transfer
Reinforcement Learning
Typically, RL agents learn their best policy based on the feedback received
from the environment over a horizon $T$. However, in real-world scenarios the
environment conditions can change at $T+1$, and thus adapting to the updated
environment is necessary [39]. In such cases, the “outdated” agent’s policy
might not address the new conditions efficiently. For instance, a
modification of the stations’ distribution over the APs can cause the
SR-related parameters chosen by the “outdated” agents’ policy to degrade the
network performance.
Function Detect_Singularity($\mathcal{K}$) ;
// returns True if anomaly is detected in network KPIs data $\mathcal{K}$ at
time $t$, and False otherwise.
1 Let $\mathcal{L}=\{l\,|\,l\in\mathbb{N},l>0\}$ be the set of layers of model
$\bm{\hat{\theta}}_{m,a}^{l}$ and $\mathcal{M}\subset\mathcal{L}$ the subset
of layers to be transferred. Run algorithm SAU-Sampling MA-CMAB (Algorithm 1)
while _environment step $t<T$_ do
2
3 if _$\neg$ Detect_Singularity_ then
4 continue;
5 else
6 Reset exploration parameters $J_{m,a}^{2},n_{m,a}$;
7
8 Reinitialize weights $w$ and biases $b$ of the $l^{th}$ layer of
$\bm{\hat{\theta}}_{m,a}^{l\not\in\mathcal{M}}$ via:
9
$\nu_{l}=\left(\sqrt{|\bm{\hat{\theta}}_{m,a}^{l\not\in\mathcal{M}}|}\right)^{-1}$
;
10
11 $\bm{\hat{\theta}}_{m,a}^{l\not\in\mathcal{M}}(w,b)\rightarrow
w_{l}\sim\mathcal{U}(-\nu_{l},\nu_{l}),b_{l}\sim\mathcal{U}(-\nu_{l},\nu_{l})$;
12 Transfer weights and biases via:
13
$\bm{\hat{\theta}}_{m,a}^{l\in\mathcal{M}}(w,b)\rightarrow\bm{\hat{\theta}}_{m,a}^{l\in\mathcal{M}^{\prime}}(w,b)$;
14
15 end if
16
17 end while
18
19
20
Algorithm 3 SAU-Sampling MA-MAB Transfer Learning
To address the previous situation we propose two main solutions: 1. if the
agent detects a change in the environment, indicated by a singularity, it
corrects its configuration by forgetting the policy already learnt (forget),
or 2. it adapts its policy to the new conditions via a transfer learning
technique. A singularity is defined as an anomalous behavior of the KPIs of
interest after the policy of the MAB agent has converged. In this work, we do
not delve into how to detect a singularity; rather, we assume the existence
of an anomaly detector in our system [40]. In Algorithm 3, we present the
transfer learning algorithm depicting the second proposed solution. At $t=0$
each SAU-Sampling agent resets its weights and biases and starts learning as
part of Algorithm 1. At $t=S1$, where $S1$ corresponds to the time when an
anomaly is detected, the transfer procedure is activated (Algorithm 3, line
7). In our setup we transfer $l=2$ and reset $l=1$ (Algorithm 3, line 11),
where $l$ corresponds to the layer of the neural network utilized in the
SAU-Sampling agent. However, as indicated in Algorithm 3, line 13, the
transfer is not constrained to one layer but applies more generally to a set
of layers. The set of transferred layers is considered a hyperparameter to be
tuned. The partial transfer of a model avoids negative transfer by giving the
agent room to adapt to the new context, since it mitigates model overfitting.
## VI Performance Evaluation
### VI-A Simulation Setting
We consider two scenarios in our simulations. The first one considers
stationary users, while the second considers mobile users to model dynamic
scenarios (see Section VI-E). In addition, stations and APs are two-antenna
devices supporting up to two spatial streams in transmission and reception.
In this work, we assume a frequency of 5 GHz with an 80 MHz channel bandwidth
in a Line of Sight (LOS) setting. The propagation loss model is the Log
Distance propagation loss model with a constant-speed propagation delay. In
addition, an adaptive data rate mode is considered with UDP downlink traffic.
We implement our proposed solutions using ns-3, and we use OpenAI Gym to
interface between ns-3 and the MA-MAB solution [41]. In Table II and Table
III we present the learning hyperparameters and the network settings,
respectively.
TABLE II: Learning hyperparameters
Parameter | Value
---|---
$\epsilon$-greedy MAB | Annealing $\epsilon$: $\sqrt{T}$
Thompson Sampling MAB | Prior distribution: Beta
Upper Confidence Bound MAB | Level of exploration, $c=1$
SAU-Sampling | Number of hidden layers, $N_{h}=2$
| Number of neurons per hidden layer, $n_{h}=100$
| Number of inputs, $N_{m}=3$ and number of outputs, $N_{o}=K$
| Batch size, $B_{s}=64$
| Optimizer : RMSProp (8e-3)
| Weight decay : 5e-4
| Activation function : ReLU
Gym environment step time | 0.05 s
TABLE III: Network settings
Parameter | Value
---|---
Number of APs | 6
Number of Stations | 15
Number of antennas (AP) | 2
Max Supported Tx Spatial Streams | 2
Channel Number | 1 (all APs are configured to use 1 channel out of the available 11; a practical selection to create dense deployment scenarios)
Propagation Loss Model | Log Distance Propagation Loss Model
Wi-Fi standard | 802.11 ax
Frequency | 5 GHz
Channel Bandwidth | 80 MHz
Traffic Model - UDP application | $\{0.011,0.056,0.11,0.16\}$ Gbps
Maximum $\&$ minimum Transmission Power | $P_{tx}^{max}=21.0$ dBm $\&$ $P_{tx}^{min}=1.0$ dBm
Maximum $\&$ minimum CCA threshold | $P_{cs}^{max}=-62.0$ dBm $\&$ $P_{cs}^{min}=-82.0$ dBm
| $K_{cs}=1$ and $K_{tx}=1$
### VI-B Reduced set of actions vs. all actions
In subsection IV-A we presented a mathematical analysis to obtain a reduced
set of optimal actions with the goal of decreasing exploration time and
consequently improving convergence time. As concluded in figure 3, high TP and
low CCA threshold values maximize the network capacity in the simulation
scenario under study. Therefore, we selected a fixed value of CCA threshold
($P_{cs}=-82.0$ dBm) and a reduced set of TP
$P_{tx}\in\\{15,16,17,18,19,20,21\\}$ dBm and observed the performance against
the full set of possible actions described in V-A.
Figure 4: Convergence performance of $\epsilon$-greedy, UCB and Thompson
Sampling MA-MABs under a non-cooperative and distributed regime. The subscript
“all” indicates the usage of the full set of actions.
In figure 4, we present the convergence performance of three MA-MAB
algorithms under UDP traffic of 0.056 Gbps in non-cooperative and cooperative
settings (indicated with the subscripts “non-coop” and “coop”,
respectively). The algorithms correspond to the $\epsilon$-greedy
($MAB^{eg}$), UCB ($MAB^{ucb}$) and Thompson Sampling ($MAB^{thom}$) MA-MABs.
For each algorithm, we plot three convergence graphs in terms of fairness,
cumulative throughput and station starvation, representing the behavior when
the reduced set of actions and the full action set (indicated with the
subscript “all”) are used, respectively.
For the case of the set of optimal actions, we can observe that the
performance is similar, with a slight improvement when utilizing
MAB-Thompson Sampling. On the other hand, when utilizing the full action set,
the MAB $\epsilon$-greedy algorithm shows a noticeable improvement with
respect to the others. In [43], the authors study the behavior of greedy
algorithms when $K$ is sufficiently large. They concluded that when $K$
increases above 27 arms, intelligent algorithms are greatly affected by the
exploration stage. Those results support ours, given that
$K=|A_{cs}|\cdot|A_{tx}|=21^{2}$. Finally, the impact of utilizing the
reduced set of optimal actions on convergence time and KPI maximization can
be noted. The set of optimal actions reduces station starvation by an average
of two starving users when compared with the best performer,
$MAB_{nocoop_{all}}^{eg}$. However, obtaining such a set requires prior
knowledge of the stations’ and APs’ geographical locations. In the following
section we compare the results of the $\epsilon$-greedy MA-MAB against a
typical default configuration without machine learning.
Figure 5: Performance results: $\epsilon$-greedy MAB w/ optimal set vs.
default configuration with $P_{cs}\in\\{-62.0,-82.0\\}$ dBm.
### VI-C Distributed $\epsilon$-greedy MA-MAB vs. default configuration
performance results
In this subsection, we present the comparative results and advantages of
utilizing a distributed intelligent solution such as MAB $\epsilon$-greedy
over the default CCA threshold and TP configuration with no ML. In figure 5,
we show the performance under four different UDP data traffic regimes:
$\{0.011,0.056,0.11,0.16\}$ Gbps. We considered two typical configurations
of the CCA threshold: $-82.0$ dBm and $-62.0$ dBm. In both cases, the AP’s TP
is $16.0$ dBm. It can be observed that MAB $\epsilon$-greedy achieves a
significant improvement over the default configuration ($P_{cs}=-82.0$ dBm),
with an average gain over all the considered traffic loads of $44.4\%$ in
cumulative throughput, $70.9\%$ in station starvation, $12.2\%$ in fairness,
$138.0\%$ in latency and $94.5\%$ in packet loss ratio (PLR). Additionally, a
gain over the default configuration ($P_{cs}=-62.0$ dBm) is shown, with an
average gain over all the considered traffic loads of $53.9\%$ in cumulative
throughput, $138.4\%$ in station starvation, $43.0\%$ in fairness, $84.0\%$
in latency and $105.4\%$ in PLR.
Figure 6: Performance results of cooperative algorithms: $\epsilon$-greedy MA-
MAB (Rew-Coop), SAU-Sampling MA-CMAB (SAU-Coop) and non-cooperative versions
of the previous algorithms SAU-NonCoop and Eg-NonCoop under full-set of
actions.
### VI-D Cooperation vs. non-cooperation performance results
In the two previous subsections we showed results considering the set of
optimal actions. In this subsection we assume that station and AP location
information is not available, and thus we must rely on the full set of
actions. In consequence, we investigate whether cooperation can improve the
KPIs of interest by utilizing the cooperative proposal of the MAB
$\epsilon$-greedy algorithm (Rew-Coop) and the contextual SAU-Sampling
algorithm (SAU-Coop). Additionally, we present two non-cooperative
algorithms: SAU-NonCoop, which corresponds to the non-cooperative version of
SAU-Sampling, and Eg-NonCoop, which refers to the MAB $\epsilon$-greedy
algorithm utilized in the previous section. As observed in figure 6,
simulations show that SAU-Coop improves over Eg-NonCoop across all the data
traffic loads, with an average gain of $14.7\%$ in cumulative throughput,
$21.3\%$ in station starvation, $4.64\%$ in network fairness, $36.7\%$ in
latency and $32.5\%$ in PLR. Similarly, the distributed version of
SAU-Sampling presents a better performance than Eg-NonCoop, indicating that
context is beneficial for solving the current optimization problem.
Additionally, SAU-Coop presents a better performance than its
non-cooperative version, especially when the data rate increases up to 0.16
Gbps, where a gain of $14.1\%$ in cumulative throughput, $32.1\%$ in station
starvation, $18.2\%$ in network fairness, $16.5\%$ in latency and $4\%$ in
PLR is observed. To sum up, cooperative approaches contribute positively to
the improvement of SR in WiFi over non-cooperative approaches. In addition,
in cases where cooperation is not possible, it is advisable to utilize
contextual multi-armed bandits over stateless multi-armed bandits.
### VI-E Deep Transfer Learning in Adaptive SR in Dynamic scenarios results
In order to model a dynamic scenario, we design a simulation where the users
move across the simulation area and attach to the AP that offers the best
signal quality. Consequently, the user load on each AP changes, and with it
the dynamics of the environment. We model this scenario with 3 APs and 15
users, where the load changes twice throughout the simulation. As depicted in
Table IV, the user load of the $m^{th}$ AP, denoted as $C_{m}$, changes at
two instances in time: 3 and 6 minutes, respectively.
TABLE IV: Dynamic scenario load distribution | $\bm{t=0}$ min | $\bm{t=3}$ min | $\bm{t=6}$ min
---|---|---|---
$C_{1}$ | 8 | 5 | 2
$C_{2}$ | 5 | 5 | 2
$C_{3}$ | 2 | 5 | 11
Figure 7: Network response in terms of fairness and station starvation when
utilizing the forget, full transfer and transfer strategies.
In figure 7 we present the network behavior in terms of fairness and station
starvation under the scenario depicted by Table IV. In addition to the two
methods previously mentioned, forget and transfer, we present the performance
of a third approach called full transfer, where the whole model is
transferred. During the first interval ($0-3$ min) the performance of the
three methods is similar, as expected. However, after the two changes in the
network load, two singularities are visible in the fairness and starvation
graphs. More specifically, the forget method experiences the worst behavior,
with a $54.3\%$ and $11.7\%$ decrease in station starvation and fairness,
respectively, when compared with the transfer method. The forget method shows
peaks at the moment of the singularities, representing $60\%$ of the users
experiencing a service drop; this behavior is inherently related to the
agents starting to learn again and cannot be avoided. From the quality of
service perspective, such a disturbance is highly undesirable. Meanwhile, the
full transfer method underperforms the transfer method, with an $18.7\%$ and
$6\%$ decrease in the previously mentioned KPIs. Interestingly, in the second
interval under study ($3-6$ min) the forget method is able to outperform the
full transfer method by the end of the period. This is due to negative
transfer as a result of transferring the whole model. As observed, partial
transfer learning not only considerably reduces the performance peaks of the
forget method, but also achieves better adaptation than the full transfer
method. In all methods the cumulative throughput is similar; however, as
observed in figure 7, station starvation and, consequently, fairness are
affected.
## VII Conclusion
In this paper, we propose Machine Learning (ML)-based solutions to the
Spatial Reuse (SR) problem in distributed Wi-Fi 802.11ax scenarios. We
presented a solution to reduce the huge action space given the possible
values of Transmission Power (TP) and Clear-Channel-Assessment (CCA)
threshold per Access Point (AP), and analysed its impact on several
well-known distributed Multi-Agent Multi-Armed Bandit (MA-MAB)
implementations. In distributed scenarios, we showed that $\epsilon$-greedy
MA-MAB significantly improves the performance over typical configurations
when the optimal actions are known. Moreover, the Contextual Multi-Agent
Multi-Armed Bandit (MA-CMAB) named SAU-Sampling, in the cooperative setting,
contributes positively to an increase in throughput and fairness and a
reduction of PLR when compared with non-cooperative approaches. Under dynamic
scenarios, transfer learning helps the SAU-Sampling algorithm overcome the
service drops affecting at least $60\%$ of the users observed with the forget
method. Additionally, we found that partial transfer learning offers better
results than the full transfer method. To conclude, the utilization of the
cooperative version of the MA-CMAB to improve SR in WiFi scenarios is
preferable, since it outperforms the other presented ML-based solutions and
prevents service drops in dynamic environments via transfer learning.
## VIII Acknowledgment
This research is supported by Mitacs Accelerate Program and NetExperience Inc.
## References
* [1] Cisco Systems Inc., “Cisco Annual Internet Report (2018–2023),” _Whitepaper, Cisco public_ , 2020.
* [2] “IEEE Standard for Information Technology–Telecommunications and Information Exchange between Systems Local and Metropolitan Area Networks–Specific Requirements Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 1: Enhancements for High-Efficiency WLAN,” _IEEE Std 802.11ax-2021 (Amendment to IEEE Std 802.11-2020)_ , pp. 1–767, 2021.
* [3] Cisco Systems Inc., “IEEE 802.11ax : The Sixth Generation of Wi-Fi,” _Whitepaper, Cisco public_ , 2020.
* [4] F. Ye, S. Yi, and B. Sikdar, “Improving Spatial Reuse of IEEE 802.11 Based Ad Hoc Networks,” in _GLOBECOM - IEEE Global Telecommunications Conference_ , 2003.
* [5] F. Wilhelmi, S. Barrachina-Muñoz, C. Cano, I. Selinis, and B. Bellalta, “Spatial Reuse in IEEE 802.11ax WLANs,” _Computer Communications_ , 2021.
* [6] C. Thorpe and L. Murphy, “A survey of adaptive carrier sensing mechanisms for IEEE 802.11 wireless networks,” _IEEE Communications Surveys and Tutorials_ , 2014.
* [7] T. Huehn and C. Sengul, “Practical power and rate control for WiFi,” _2012 21st International Conference on Computer Communications and Networks_ , 2012.
* [8] C. Khosla and B. S. Saini, “Enhancing Performance of Deep Learning Models with different Data Augmentation Techniques: A Survey,” in _Proceedings of International Conference on Intelligent Engineering and Management, ICIEM 2020_ , 2020.
* [9] S. Szott, K. Kosek-Szott, P. Gawłowicz, J. T. Gómez, B. Bellalta, A. Zubow, and F. Dressler, “WiFi Meets ML: A Survey on Improving IEEE 802.11 Performance with Machine Learning,” _arXiv preprint arXiv:2109.04786_ , 2021.
* [10] M. Elsayed and M. Erol-Kantarci, “AI-Enabled Future Wireless Networks: Challenges, Opportunities, and Open Issues,” _IEEE Vehicular Technology Magazine_ , vol. 14, no. 3, pp. 70–77, 2019.
* [11] P. E. Iturria-Rivera, H. Zhang, H. Zhou, S. Mollahasani, and M. Erol-Kantarci, “Multi-Agent Team Learning in Virtualized Open Radio Access Networks (O-RAN),” _Sensors_ , vol. 22, no. 14, 2022.
* [12] TIP, “OpenWiFi Release 2.4 GA,” 2022. [Online]. Available: https://openwifi.tip.build/
* [13] F. Wilhelmi, C. Cano, G. Neu, B. Bellalta, A. Jonsson, and S. Barrachina-Muñoz, “Collaborative Spatial Reuse in wireless networks via selfish Multi-Armed Bandits,” _Ad Hoc Networks_ , 2019.
* [14] A. Bardou, T. Begin, and A. Busson, “Improving the Spatial Reuse in IEEE 802.11ax WLANs,” _ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM’21)_ , 2021.
* [15] A. Bardou and T. Begin, “Inspire: Distributed bayesian optimization for improving spatial reuse in dense wlans,” _arXiv preprint arXiv:2204.10184_ , 2022.
* [16] H. Lee, H.-S. Kim, and S. Bahk, “Lsr: Link-aware spatial reuse in ieee 802.11ax wlans,” in _2021 IEEE Wireless Communications and Networking Conference (WCNC)_ , 2021, pp. 1–6.
* [17] H. Kim and J. So, “Improving Spatial Reuse of Wireless LAN Uplink Using BSS Color and Proximity Information,” _Applied Sciences_ , vol. 11, no. 22, 2021.
* [18] F. Wilhelmi, J. Hribar, S. F. Yilmaz, E. Ozfatura, K. Ozfatura, O. Yildiz, D. Gündüz, H. Chen, X. Ye, L. You _et al._ , “Federated spatial reuse optimization in next-generation decentralized ieee 802.11 wlans,” _arXiv preprint arXiv:2203.10472_ , 2022.
* [19] D. Bouneffouf, I. Rish, and C. Aggarwal, “Survey on Applications of Multi-Armed and Contextual Bandits,” in _2020 IEEE Congress on Evolutionary Computation, CEC 2020 - Conference Proceedings_ , 2020.
* [20] A. Slivkins, “Introduction to multi-armed bandits,” _Foundations and Trends in Machine Learning_ , 2019.
* [21] L. Zhou, “A survey on contextual multi-armed bandits,” _arXiv preprint arXiv:1508.03326_ , 2015.
* [22] R. S. Sutton and A. G. Barto, _Reinforcement learning: An introduction_ , 2nd ed. MIT press, 2018.
* [23] R. Agrawal, “Sample mean based index policies by $O(\log n)$ regret for the multi-armed bandit problem,” _Advances in Applied Probability_ , 1995.
* [24] D. J. Russo, B. Van Roy, A. Kazerouni, I. Osband, and Z. Wen, “A Tutorial on Thompson Sampling,” 2018.
* [25] R. Zhu and M. Rigotti, “Deep bandits show-off: Simple and efficient exploration with deep networks,” _Advances in Neural Information Processing Systems_ , vol. 34, 2021.
* [26] S. Hossain, E. Micha, and N. Shah, “Fair Algorithms for Multi-Agent Multi-Armed Bandits,” in _Advances in Neural Information Processing Systems_ , vol. 34, 2021, pp. 24 005–24 017.
* [27] P. C. Landgren, “Distributed Multi-Agent Multi-Armed Bandits,” Ph.D. dissertation, 2019.
* [28] S. J. Pan and Q. Yang, “A survey on transfer learning,” _IEEE Trans. Knowl. Data Eng._ , vol. 22, no. 10, pp. 1345–1359, 2010.
* [29] T. Scott, K. Ridgeway, and M. C. Mozer, “Adapted Deep Embeddings: A Synthesis of Methods for k-Shot Inductive Transfer Learning,” in _Advances in Neural Information Processing Systems_ , vol. 31, 2018.
* [30] F. Zhuang, Z. Qi, K. Duan, D. Xi, Y. Zhu, H. Zhu, H. Xiong, and Q. He, “A comprehensive survey on transfer learning,” _Proc. IEEE_ , vol. 109, no. 1, pp. 43–76, 2021.
* [31] L. Vu, Q. U. Nguyen, D. N. Nguyen, D. T. Hoang, and E. Dutkiewicz, “Deep transfer learning for iot attack detection,” _IEEE Access_ , vol. 8, pp. 107 335–107 344, 2020.
* [32] M. Elsayed, M. Erol-Kantarci, and H. Yanikomeroglu, “Transfer Reinforcement Learning for 5G New Radio mmWave Networks,” _IEEE Trans. Wirel. Commun._ , 2021.
* [33] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu, “A survey on deep transfer learning,” in _Artificial Neural Networks and Machine Learning – ICANN 2018_ , 2018, pp. 270–279.
* [34] Z. Zhu, K. Lin, and J. Zhou, “Transfer learning in deep reinforcement learning: A survey,” _arXiv preprint arXiv:2009.07888_ , 2020.
* [35] M. Derakhshani, X. Wang, D. Tweed, T. Le-Ngoc, and A. Leon-Garcia, “AP-STA association control for throughput maximization in virtualized WiFi networks,” _IEEE Access_ , 2018.
* [36] G. Holland, N. Vaidya, and P. Bahl, “A rate-adaptive MAC protocol for multi-hop wireless networks,” in _Proceedings of the Annual International Conference on Mobile Computing and Networking, MOBICOM_ , 2001.
* [37] G. F. Riley and T. R. Henderson, _The ns-3 Network Simulator_. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, pp. 15–34. [Online]. Available: https://doi.org/10.1007/978-3-642-12331-3_2
* [38] T. S. Kim, H. Lim, and J. C. Hou, “Improving spatial reuse through tuning transmit power, carrier sense threshold, and data rate in multihop wireless networks,” in _Proceedings of the Annual International Conference on Mobile Computing and Networking, MOBICOM_ , 2006.
* [39] S. Padakandla, “A Survey of Reinforcement Learning Algorithms for Dynamically Varying Environments,” _ACM Comput. Surv._ , vol. 54, no. 6, pp. 1–25, 2021\.
* [40] A. Blázquez-García, A. Conde, U. Mori, and J. A. Lozano, “A Review on Outlier/Anomaly Detection in Time Series Data,” _ACM Comput. Surv._ , vol. 54, no. 3, pp. 1–33, 2021.
* [41] P. Gawłowicz and A. Zubow, “Ns-3 meets OpenAI Gym: The playground for machine learning in networking research,” in _MSWiM 2019 - Proceedings of the 22nd International ACM Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems_ , 2019.
* [42] F. Wilhelmi, S. Barrachina-Muñoz, B. Bellalta, C. Cano, A. Jonsson, and G. Neu, “Potential and pitfalls of Multi-Armed Bandits for decentralized Spatial Reuse in WLANs,” _Journal of Network and Computer Applications_ , vol. 127, pp. 26–42, 2019. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1084804518303655
* [43] M. Bayati, N. Hamidi, R. Johari, and K. Khosravi, “Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms,” in _Advances in Neural Information Processing Systems_ , vol. 33, 2020, pp. 1713–1723.
|
# Towards Reliable Item Sampling for Recommendation Evaluation
Dong Li,1 Ruoming Jin,1 Zhenming Liu,2 Bin Ren,2 Jing Gao,3 Zhi Liu3
###### Abstract
Since Rendle and Krichene argued that commonly used sampling-based evaluation
metrics are “inconsistent” with respect to the global metrics (even in
expectation), there have been a few studies on sampling-based recommender
system evaluation. Existing methods try either mapping the sampling-based
metrics to their global counterparts or more generally, learning the empirical
rank distribution to estimate the top-$K$ metrics. However, despite existing
efforts, there is still a lack of rigorous theoretical understanding of the
proposed metric estimators, and the basic item sampling also suffers from the
“blind spot” issue, i.e., the estimation error in recovering the top-$K$ metrics
when $K$ is small can still be rather substantial. In this paper, we provide
an in-depth investigation into these problems and make two innovative
contributions. First, we propose a new item-sampling estimator that explicitly
optimizes the error with respect to the ground truth, and theoretically
highlight its subtle difference against prior work. Second, we propose a new
adaptive sampling method which aims to deal with the “blind spot” problem and
also demonstrate the expectation-maximization (EM) algorithm can be
generalized for such a setting. Our experimental results confirm our
statistical analysis and the superiority of the proposed methods. This study
helps lay the theoretical foundation for adopting item sampling metrics for
recommendation evaluation, and provides strong evidence towards making item
sampling a powerful and reliable tool for recommendation evaluation.
## 1 Introduction
As personalization and recommendation continue to play an integral role in the
emerging AI-driven economy (Fayyaz et al. 2020; Zhao et al. 2021; Peng,
Sugiyama, and Mine 2022; Jin et al. 2021a), proper and rigorous evaluation of
recommendation models becomes increasingly important in recent years for both
academic researchers and industry practitioners (Gruson et al. 2019; Cremonesi
et al. 2011; Dacrema et al. 2019; Rendle 2019). In particular, ever since
Krichene and Rendle (2020, 2022) pointed out the “inconsistency” issue of item-
sampling-based evaluation for commonly used top-$K$ evaluation metrics, such
as Recall (Hit-Ratio)/Precision, Average Precision (AP), and Normalized
Discounted Cumulative Gain (NDCG) (all except AUC), it has emerged as a major
controversy hotly debated in the recommendation community.
Specifically, the item-sampling strategy calculates the top-$K$ evaluation
metrics using only a small set of item samples (Koren 2008; Cremonesi, Koren,
and Turrin 2010; He et al. 2017; Ebesu, Shen, and Fang 2018; Hu et al. 2018;
Krichene et al. 2019; Wang et al. 2019; Yang et al. 2018a, b). Krichene and
Rendle (2020, 2022) show that the top-$K$ metrics based on the samples differ
from the global metrics using all the items. They suggested a cautionary use
(avoiding if possible) of the sampled metrics for recommendation evaluation.
Due to the ubiquity of the sampling methodology, it is not only of theoretical
importance, but also of practical interest, to understand item-sampling
evaluation. Indeed, since the number of items in any real-world recommendation
system is typically quite large (easily on the order of tens of millions),
efficient model evaluation based on item sampling can be very useful for
recommendation researchers and practitioners.
To address the discrepancy between item sampling results and the exact top-$K$
recommendation evaluation, Krichene and Rendle (2020) have proposed a few
estimators in recovering global top-$K$ metrics from sampling. Concurrently,
Li et al. (2020) showed that for the top-$K$ Hit-Ratios (Recalls) metric,
there is an (approximately) linear relationship between the item-sampling
top-$k$ and the global top-$K$ metrics ($K=f(k)$, where $f$ is approximately
linear). In another recent study, Jin et al. (2021b) developed solutions based
on MLE (Maximal Likelihood Estimation) and ME (Maximal Entropy) to learn the
empirical rank distribution, which is then used to estimate global top-$K$
metrics.
Despite these latest works on item-sampling estimation (Li et al. 2020;
Krichene and Rendle 2020; Jin et al. 2021b), there remain some major gaps in
making item-sampling reliable and accurate for top-$K$ metrics. Specifically,
the following important problems remain open:
_(i)_ What is the optimal estimator given the basic item sampling? None of the
earlier estimation methods establish any optimality results with respect to
the estimation errors (Krichene and Rendle 2020; Jin et al. 2021b).
_(ii)_ What can we do for the problem of the basic item sampling, which
appears to have a fundamental limitation that prevents us from recovering the
global rank distributions accurately? For the offline recommendation
evaluation, we typically are interested in the top ranked items and top-$K$
metrics, when $K$ is relatively small, say less than $50$. However, the
current item-sampling seems to have a “blind spot” for the top rank
distribution. For example, when there are $n=100$ samples and $N=10k$, the
estimation granularity is only at around 1% ($1/n$) level (Krichene and Rendle
2020; Li et al. 2020). We can only infer that the top items in the samples are
in the top 1% (top 100) of the global rank; we cannot further tell whether the
top items in the sample set are in, say, the top 50 without increasing the
sample size. Given this, even with the best estimator for item sampling, we
may still not be able to provide accurate results for the top-$K$ metrics.
A remedy is increasing sampling size, but it can significantly increase the
estimation cost too, limiting the benefits of item sampling. Can we sample the
items in a more intelligent manner to circumvent the “blind spot” while
keeping estimation cost low (and the sample size small)?
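The granularity limit can be illustrated with a quick Monte Carlo sketch (our own toy simulation, using the $N=10k$, $n=100$ numbers from the text): among target items that rank first in their sample, typically well over half sit beyond global rank 50.

```python
import random

random.seed(0)
N, n, trials = 10_000, 100, 50_000

global_ranks_when_top = []
for _ in range(trials):
    R = random.randint(1, N)            # true global rank of the target item
    # each of the n-1 sampled items beats the target with prob. (R-1)/(N-1)
    beats = sum(random.random() < (R - 1) / (N - 1) for _ in range(n - 1))
    if beats == 0:                      # sampled rank r_u = 1
        global_ranks_when_top.append(R)

share = sum(R > 50 for R in global_ranks_when_top) / len(global_ranks_when_top)
print(f"top-1 in sample but global rank > 50: {share:.0%}")
```

Items that look best within the sample can thus sit almost anywhere in roughly the top $N/n$ global positions, which is exactly the blind spot discussed above.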
To address the above open questions, we make the following contributions in this
paper:
* •
We derive an optimal item-sampling estimator and highlight subtle differences
from the BV estimators derived by Krichene and Rendle (2020), pointing out a
potential issue with the BV estimator: it fails to link the user population
size with the estimation variance. To the best of our knowledge, this is the
first estimator that directly optimizes the estimation errors.
* •
We address the limitation of the current item sampling approaches by proposing
a new adaptive sampling method. This provides a simple and effective remedy
which helps avoid the sampling “blind spot”, and significantly improves the
accuracy of the estimated metrics with low sample complexity.
* •
We perform a thorough experimental evaluation on the proposed item-sampling
estimator and the new adaptive sampling method. The experimental results
further confirm the statistical analysis and the superiority of newly proposed
estimators.
Our results help lay the theoretical foundation for adopting item sampling
metrics for recommendation evaluation, and offer a simple yet effective new
adaptive sampling approach to help recommendation practitioners and
researchers apply item-sampling-based approaches to speed up offline
evaluation.
The rest of the paper is organized as follows: Section 2 introduces the item
sampling-based top-$K$ evaluation framework and reviews the related work (in
Appendix A); Section 3 introduces the new optimal estimator minimizing its
mean squared error with respect to the ground-truth; Section 4 presents the
new adaptive sampling and estimation method; Section 5 discusses the
experimental results; and finally Section 6 concludes the paper.
## 2 Overview
Table 4 (in Appendix B) highlights the key notations for evaluating
recommendation algorithms used throughout the paper. We are given $M$ users and
$N$ items. To evaluate the quality of a recommender model, each testing user
$u$ hides an already-clicked (so-called target) item $i_{u}$; the model
compares it with the rest of the items and derives a rank $R_{u}$. The
recommendation model is considered effective if it ranks $i_{u}$ near the top
(small $R_{u}$).
Formally, given a recommendation model, a metric function (denoted as a metric
$\mathcal{M}$) maps each rank $R_{u}$ to a real-valued score, and then
averages over all the users in the test set:
$T=\frac{1}{M}\sum_{u=1}^{M}{\mathcal{M}}(R_{u})$ (1)
The corresponding top-$K$ evaluation metric is:
$T=\frac{1}{M}\sum_{u=1}^{M}{\bf 1}_{R_{u}\leq K}\cdot{\mathcal{M}}(R_{u})$
(2)
where $\mathbf{1}_{x}=1$ if $x$ is True, 0 otherwise.
Commonly used metric functions ${\mathcal{M}}$ (Krichene and Rendle 2020)
include Recall, Precision, AUC, NDCG, and AP. For example:
$\begin{split}&Recall@K=\frac{1}{M}\sum_{u=1}^{M}{\bf 1}_{R_{u}\leq
K}\end{split}$ (3)
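The global metrics in Eqs. 2–3 are straightforward to compute once the full ranks $R_{u}$ are known; a minimal sketch (function names and toy ranks are ours):

```python
import math

def recall_at_k(ranks, K):
    """Eq. 3: fraction of test users whose target item ranks in the top K."""
    return sum(R <= K for R in ranks) / len(ranks)

def ndcg_at_k(ranks, K):
    """Top-K NDCG with one hidden target item per user:
    M(R) = 1/log2(R+1) if R <= K, else 0."""
    return sum(1.0 / math.log2(R + 1) for R in ranks if R <= K) / len(ranks)

ranks = [1, 3, 12, 60, 2]        # toy global ranks R_u for M = 5 test users
print(recall_at_k(ranks, 10))    # 3 of 5 users in the top 10 -> 0.6
print(round(ndcg_at_k(ranks, 10), 4))
```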
### 2.1 Item-Sampling Top-K Evaluation
In the item-sampling based top-K evaluation scenario, for a given user $u$ and
his/her relevant item $i_{u}$, another $n-1$ items from the entire item set
$I$ are sampled. The union of sampled items and $i_{u}$ is $I_{u}$ ($i_{u}\in
I_{u}$, $|I_{u}|=n$). The recommendation model then returns the rank of
$i_{u}$ among $I_{u}$, denoted as $r_{u}$ (again, $R_{u}$ is the rank against
the entire set of items $I$).
Given this, a list of studies (Koren 2008; He et al. 2017) simply replaces
$R_{u}$ with $r_{u}$ for (top-$K$) evaluation. The sampled evaluation metric is
denoted as:
$T_{S}\triangleq\frac{1}{M}\sum_{u=1}^{M}{\bf 1}_{r_{u}\leq
K}\cdot{\mathcal{M}}(r_{u})$ (4)
Clearly, $r_{u}$ and $R_{u}$ differ substantially; for example,
$r_{u}\in[1,n]$ whereas $R_{u}\in[1,N]$. Therefore, for the same $K$, the
item-sampling top-$K$ metrics and the global top-$K$ metrics correspond to
distinct measures (with no direct relationship): $T\neq T_{S}$ ($Recall@K\neq
Recall_{S}@K$). This problem is highlighted in (Krichene and Rendle 2020;
Rendle 2019), which refer to these two metrics as inconsistent. From the
perspective of statistical inference, the basic sampling-based top-$K$ metric
$T_{S}@K$ is not a reasonable or good estimator (Lehmann and Casella 2006) of
$T@K$.
Li et al. (2020) showed that for some of the most commonly used metrics, the
top-$K$ Recall/Hit-Ratio, there is a mapping function $f$ (approximately
linear) such that $Recall@f(k)\approx{Recall}_{S}@k$. Thus, they give an
intuitive interpretation of a sampled top-$k$ metric (for Recall), linking it
to the same global metric at a different rank/location.
Two recent works (see the related work in Appendix A) study the general metric
estimation problem based on item-sampling metrics. Specifically, given the
sampled rank results in the test set, $\{r_{u}\}_{u=1}^{M}$, how can we
infer/approximate $T$ from Eq. 1 or, more commonly, Eq. 2, without knowledge
of $\{R_{u}\}_{u=1}^{M}$?
Two Problems: We consider two fundamental open questions for item-sampling
estimation: 1) What is the optimal estimator in the setting of (Krichene and
Rendle 2020)? (Their methods do not directly target minimizing the estimation
errors.) 2) Combining the solution to the first problem with the MLE method
from (Jin et al. 2021b), we observe that the best effort using basic item
sampling still fails to accurately recover the global top-$K$ metrics when $K$
is small, as well as the rank distribution for the top spots. These “blind
spots” seem to stem from an inherent (information) limitation of item-sampling
methods, not from the estimators. How can we effectively address such item-
sampling limitations? Next, we introduce methods to address these two
problems.
## 3 New Estimator for Item-Sampling
In this section, we introduce a new estimator which aims to directly minimize
the expected errors between the item-sampling based top-$K$ metrics and the
global top-$K$ metrics. Here, we consider a strategy similar to that of
(Krichene and Rendle 2020), though our objective function is different and
aims to explicitly minimize the expected error. We aim to search for a sampled metric
$\widehat{\mathcal{M}}(r)$ to approach $\widehat{T}\approx T$:
$\begin{split}\widehat{T}&=\sum_{r=1}^{n}\tilde{P}(r)\cdot\widehat{\mathcal{M}}(r)=\frac{1}{M}\sum_{u=1}^{M}\widehat{\mathcal{M}}(r_{u})\\\
&\approx\frac{1}{M}\sum_{u=1}^{M}{\mathcal{M}}(R_{u})=\sum_{R=1}^{N}\tilde{P}(R)\cdot{\mathcal{M}}(R)=T\end{split}$
where $\tilde{P}(r)=\frac{1}{M}\sum\limits_{u=1}^{M}\mathbf{1}_{r_{u}=r}$ is
the empirical sampled rank distribution and $\widehat{\mathcal{M}}(r)$ is the
adjusted discrete metric function. An immediate observation is:
$\begin{split}\operatorname*{\mathbb{E}}\widehat{T}&=\sum_{r=1}^{n}\operatorname*{\mathbb{E}}[\tilde{P}(r)]\cdot\widehat{\mathcal{M}}(r)=\sum_{r=1}^{n}{P}(r)\cdot\widehat{\mathcal{M}}(r)\end{split}$
(5)
Following classical statistical inference (Casella and Berger 2002), the
optimality of an estimator is measured by its Mean Squared Error (more
derivation in Appendix C):
$\operatorname*{\mathbb{E}}[\widehat{T}-\sum_{R=1}^{N}P(R)\mathcal{M}(R)]^{2}\\\ $ (6)
$\begin{split}=&\Big{(}\sum_{r=1}^{n}\sum_{R=1}^{N}P(r|R)P(R)\widehat{\mathcal{M}}(r)-\sum_{R=1}^{N}P(R)\mathcal{M}(R)\Big{)}^{2}\\\
+&\operatorname*{\mathbb{E}}[\sum_{r=1}^{n}\sum_{R=1}^{N}\tilde{P}(r|R)P(R)\widehat{\mathcal{M}}(r)-\sum_{r=1}^{n}\sum_{R=1}^{N}P(r|R)P(R)\widehat{\mathcal{M}}(r)]^{2}\end{split}$
Remark that $\tilde{P}(r|R)$ is the empirical conditional sampling rank
distribution given a global rank $R$. We next use Jensen’s inequality to
bound the first term in (Eq. 6). Specifically, we may treat
$\sum_{r=1}^{n}P(r|R)\widehat{\mathcal{M}}(r)-\mathcal{M}(R)$ as a random
variable and use
$(\operatorname*{\mathbb{E}}X)^{2}\leq\operatorname*{\mathbb{E}}X^{2}$ to
obtain
$\begin{split}&\Big{(}\sum_{r=1}^{n}\sum_{R=1}^{N}P(r|R)P(R)\widehat{\mathcal{M}}(r)-\sum_{R=1}^{N}P(R)\mathcal{M}(R)\Big{)}^{2}\\\
&\leq\sum_{R=1}^{N}P(R)\Big{(}\sum_{r=1}^{n}P(r|R)\widehat{\mathcal{M}}(r)-\mathcal{M}(R)\Big{)}^{2}\end{split}$
Therefore, we have
$\begin{split}&\operatorname*{\mathbb{E}}[\widehat{T}-\sum_{R=1}^{N}P(R)\mathcal{M}(R)]^{2}\\\
&\leq\underbrace{\sum_{R=1}^{N}P(R)\Big{\\{}\Big{(}\sum_{r=1}^{n}P(r|R)\widehat{\mathcal{M}}(r)-\mathcal{M}(R)\Big{)}^{2}}_{\mathcal{L}_{1}}\\\
&+\underbrace{\operatorname*{\mathbb{E}}[\sum_{r=1}^{n}\tilde{P}(r|R)\widehat{\mathcal{M}}(r)-\sum_{r=1}^{n}P(r|R)\widehat{\mathcal{M}}(r)]^{2}\Big{\\}}}_{\mathcal{L}_{2}}.\end{split}$
Let $\mathcal{L}=\mathcal{L}_{1}+\mathcal{L}_{2}$, which gives an upper bound
on the expected MSE. Therefore, our goal is to find $\widehat{\mathcal{M}}(r)$
to minimize $\mathcal{L}$. We remark that a seemingly innocent application of
Jensen’s inequality results in an optimization objective that possesses a
range of interesting properties:
1. Statistical structure. The objective has a variance-bias trade-off
interpretation, i.e.,
$\mathcal{L}_{1}=\sum\limits_{R=1}^{N}{P(R)\Big{(}{\operatorname*{\mathbb{E}}(\widehat{\mathcal{M}}(r)|R)-M(R)}\Big{)}^{2}}$
(7)
$\mathcal{L}_{2}=\sum\limits_{R=1}^{N}{\frac{1}{M}Var(\widehat{\mathcal{M}}(r)|R)}$
(8)
where $\mathcal{L}_{1}$ can be interpreted as a bias term and
$\mathcal{L}_{2}$ can be interpreted as a variance term. Note that while
Krichene and Rendle (2020) also introduce a variance-bias tradeoff objective,
their objective is constructed from heuristics and contains a hyper-parameter
(that determines the relative weight between bias and variance) that needs to
be tuned in an ad-hoc manner. Here, because our objective is constructed from
direct optimization of the MSE, it is more principled and also removes
dependencies on hyper-parameters. See Section 3.1 for a proof of Eq. 8 (Eq. 7
is trivial) and Section 3.2 for further comparison against the estimators proposed in
(Krichene and Rendle 2020).
2. Algorithmic structure. While the objective is not convex, we show that it
can be expressed compactly using matrices and that the optimal solution can be
found in a fairly straightforward manner. In other words,
Jensen’s inequality substantially simplifies the computation at the cost of
having a looser upper bound. See Section 3.2.
3. Practical performance. Our experiments also confirm that the new estimator
is effective (Section 5), which suggests that Jensen’s inequality has only a
moderate, inconsequential impact on the estimator’s quality.
### 3.1 Analysis of ${\mathcal{L}}_{2}$
To analyze ${\mathcal{L}}_{2}$, let us take a close look at $\tilde{P}(r|R)$.
Formally, let $X_{r}$ be the random variable representing the number of items
at rank $r$ in the item-sampling data whose original rank in the entire item
set is $R$. Then, we rewrite $\tilde{P}(r|R)=\frac{X_{r}}{M\cdot P(R)}$.
Furthermore, it is easy to observe that $(X_{1},\cdots,X_{n})$ follows the
multinomial distribution $Multi(P(1|R),\cdots,P(n|R))$ (See $P(r|R)$ defined
in Table 4). We also have:
$\begin{split}&\operatorname*{\mathbb{E}}[X_{r}]=M\cdot P(R)\cdot P(r|R)\\\
&Var[X_{r}]=M\cdot P(R)\cdot P(r|R)(1-P(r|R))\end{split}$ (9)
Next, let us define a new random variable
$\mathcal{B}\triangleq\sum\limits_{r}^{n}{\widehat{\mathcal{M}}(r)X_{r}}$,
which is the weighted sum of random variables under a multinomial
distribution. According to Appendix D, its variance is given by:
$\begin{split}&Var[\mathcal{B}]=\operatorname*{\mathbb{E}}[\sum_{r=1}^{n}{X_{r}}\widehat{\mathcal{M}}(r)-\sum_{r=1}^{n}{\operatorname*{\mathbb{E}}[X_{r}]}\widehat{\mathcal{M}}(r)]^{2}\\\
&=M\cdot
P(R)\Big{(}\sum\limits_{r}^{n}{\widehat{\mathcal{M}}^{2}(r)P(r|R)}-\big{(}\sum\limits_{r}^{n}{\widehat{\mathcal{M}}(r)P(r|R)}\big{)}^{2}\Big{)}\end{split}$
${\mathcal{L}}_{2}$ can then be rewritten (see Appendix E) as:
$\mathcal{L}_{2}=\sum\limits_{R=1}^{N}{\frac{1}{M}Var(\widehat{\mathcal{M}}(r)|R)}$
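The closed-form variance of the weighted multinomial sum $\mathcal{B}$ above can be sanity-checked against simulation; the toy values of $P(r|R)$, $\widehat{\mathcal{M}}(r)$, and $M\cdot P(R)$ below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 200_000
p = np.array([0.1, 0.2, 0.3, 0.4])      # toy P(r|R) for one fixed global rank R
w = np.array([1.0, 0.5, 0.25, 0.0])     # toy adjusted metric values M_hat(r)
count = 50                              # plays the role of M * P(R)

X = rng.multinomial(count, p, size=trials)       # draws of (X_1, ..., X_n)
B = X @ w                                        # weighted sums B = sum_r M_hat(r) X_r
analytic = count * (np.sum(w**2 * p) - np.sum(w * p) ** 2)
print(np.isclose(B.var(), analytic, rtol=0.02))  # empirical vs. closed form
```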
### 3.2 Closed Form Solution and its Relationship to Bias-Variance Estimator
We can rewrite $\mathcal{L}$ in matrix form (see Appendix F); optimizing it
corresponds to a constrained least-squares problem, whose solution is:
$\begin{split}\mathcal{L}=||\sqrt{D}A\mathbf{x}-\sqrt{D}\mathbf{b}||^{2}_{F}+\frac{1}{M}||\sqrt{\Lambda}_{1}\mathbf{x}||^{2}_{F}-\frac{1}{M}||A\mathbf{x}||^{2}_{F}\end{split}$
(10)
$\begin{split}\mathbf{x}=\Big{(}A^{T}DA-\frac{1}{M}A^{T}A+\frac{1}{M}\Lambda_{1}\Big{)}^{-1}A^{T}D\mathbf{b}\end{split}$
(11)
where $M$ is the number of users, $\mathcal{M}$ is the metric function, and
$diagM(\cdot)$ denotes a diagonal matrix:
$\begin{split}\mathbf{x}&=\begin{bmatrix}\widehat{\mathcal{M}}(r=1)\\\
\vdots\\\
\widehat{\mathcal{M}}(r=n)\end{bmatrix}\quad\mathbf{b}=\begin{bmatrix}\mathcal{M}(R=1)\\\
\vdots\\\ \mathcal{M}(R=N)\end{bmatrix}\in\mathbb{R}^{N}\\\
A_{R,r}&=P(r|R)\in\mathbb{R}^{N\times n}\quad
D=diagM\big{(}P(R)\big{)}\in\mathbb{R}^{N\times N}\\\
\Lambda_{1}&=diagM\big{(}\sum\limits_{R=1}^{N}P(r|R)\big{)}\in\mathbb{R}^{n\times
n}\\\ \end{split}$
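As a sketch of how the closed-form solution in Eq. 11 can be evaluated, assuming the binomial sampling distribution $P(r|R)=Bin(r-1;\,n-1,\,(R-1)/(N-1))$ and, purely for illustration, a uniform prior $P(R)$ (all names are ours):

```python
import numpy as np
from math import comb

def optimal_estimator(metric_b, N, n, M, P_R):
    """Solve Eq. 11: x = (A^T D A - A^T A / M + Lambda_1 / M)^{-1} A^T D b."""
    R = np.arange(1, N + 1)[:, None]       # global ranks, as a column
    k = np.arange(n)[None, :]              # sampled rank r minus 1
    theta = (R - 1) / (N - 1)
    binom_coef = np.array([comb(n - 1, kk) for kk in range(n)])
    A = binom_coef * theta**k * (1 - theta)**(n - 1 - k)  # A[R-1, r-1] = P(r|R)
    D = np.diag(P_R)
    Lam1 = np.diag(A.sum(axis=0))
    lhs = A.T @ D @ A - (A.T @ A) / M + Lam1 / M
    return np.linalg.solve(lhs, A.T @ D @ metric_b)

N, n, M = 1000, 20, 5000
P_R = np.full(N, 1 / N)                               # uniform prior P(R)
b = (np.arange(1, N + 1) <= 50).astype(float)         # M(R) for Recall@50
x = optimal_estimator(b, N, n, M, P_R)                # adjusted metric M_hat(r)
print(x.shape)
```

The entire estimator is thus one dense linear solve of size $n\times n$, cheap for typical sample sizes.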
Relationship to the BV Estimator: The bias-variance trade-off is given by
(Krichene and Rendle 2020):
$\begin{split}\mathcal{L}_{BV}&=\underbrace{\sum\limits_{R=1}^{N}P(R)(\operatorname*{\mathbb{E}}[\widehat{\mathcal{M}}(r)|R]-M(R))^{2}}_{{\mathcal{L}}_{1}}\\\
&+\underbrace{\sum\limits_{R=1}^{N}P(R)\gamma\cdot
Var[\widehat{\mathcal{M}}(r)|R]}_{{\mathcal{L}}_{2}}\end{split}$
We observe that the main difference between the BV estimator and our new
estimator lies in the ${\mathcal{L}}_{2}$ (variance) component: for our
estimator, each $Var[\widehat{\mathcal{M}}(r)|R]$ is regularized by $1/M$
($M$ is the number of testing users), whereas in BV this term is regularized
by $P(R)\gamma$. Our estimator reveals that as the number of users increases,
the variance in the ${\mathcal{L}}_{2}$ component continues to decrease,
whereas the BV estimator does not account for this factor. Thus, as the user
population grows, the BV estimator still needs to deal with
${\mathcal{L}}_{2}$ or has to manually adjust $\gamma$.
Finally, both BV and the new estimator rely on the prior distribution $P(R)$,
which is unknown. In (Krichene and Rendle 2020), the uniform distribution is
used for estimation. In this paper, we propose to leverage the latest
approaches in (Jin et al. 2021b), which provide a more accurate estimate of
$P(R)$ for this purpose. The experimental results in Section 5 confirm the
validity of using such distribution estimates.
## 4 Adaptive Item-Sampling Estimator
Figure 1: Top: the distribution of $r_{u}$ with sample set size $n=100$; rank
$r=1$ is highlighted. Bottom: the relative error of the MLE estimator for
different top-$K$. Results are obtained with the EASE model (Steck 2019) on
the ml-20m dataset.
### 4.1 Blind Spot and Adaptive Sampling
In recommendation, top-ranked items are vital; thus it is especially important
to obtain accurate estimates for these top items. However, current sampling
approaches treat all items equally and, in particular, have difficulty
recovering the global top-$K$ metrics when $K$ is small. In the top panel of
Fig. 1, we plot the distribution of the target items’ ranks in the sample set
and observe that most target items rank at position 1 (highlighted in red).
This leads to the “blind spot” problem: as $K$ gets smaller, the estimates of
the basic estimators become more inaccurate (see the bottom panel of Fig. 1).
Intuitively, $r_{u}=1$ does not mean the global rank $R_{u}$ is $1$; instead,
the expected global rank may be around $100$ (assuming $N=10k$ and sample set
size $n=100$), and the estimation granularity is only at around the 1%
($1/n$) level. This blind-spot effect is a major drawback of current
estimators.
Based on the above discussion, we propose an adaptive sampling strategy, which
increases the test sample size for users whose target item ranks at the top
(say $r_{u}=1$) in the sampled data. When $r_{u}=1$, we continue doubling the
sample size until $r_{u}\neq 1$ or until the sample size reaches a
predetermined ceiling. See Algorithm 1 (and the detailed explanation in
Appendix H). The benefits of this adaptive strategy are twofold: (i) high
granularity: with more items sampled, the number of $r_{u}=1$ cases is
reduced, which further improves estimation accuracy; (ii) efficiency: we
iteratively sample more items only for users with $r_{u}=1$, and the empirical
experiments (Table 2) confirm that a small average adaptive sample size
(compared to a uniform sample size) achieves significantly better performance.
Algorithm 1 Adaptive Sampling Process
INPUT: Recommender Model $RS$, test user set $\mathcal{U}$, initial size
$n_{0}$, terminal size $n_{max}$
OUTPUT: $\{(u,r_{u},n_{u})\}$
1:for all $u\in\mathcal{U}$ do
2: sample $n_{0}-1$ items to form the sample set $I^{s}_{u}$
3: $n_{u}=n_{0}$, $r_{u}=RS(i_{u},I^{s}_{u})$
4: while $r_{u}=1$ and $n_{u}\neq n_{max}$ do
5: sample $n_{u}$ extra items to form the new set $I^{s}_{u}$
6: $n_{u}=2n_{u}$, $r_{u}=RS(i_{u},I^{s}_{u})$
7: end while
8: record $n_{u},r_{u}$ for user $u$
9:end for
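Algorithm 1 translates almost line for line into code; the `score` callable and all other names below are our own placeholders, not the paper's implementation:

```python
import random

def adaptive_sample(score, user, target, items, n0, n_max, rng=random):
    """Algorithm 1 for one test user: keep doubling the sample size while the
    target item still ranks first, up to the terminal size n_max."""
    pool = [i for i in items if i != target]
    sample = rng.sample(pool, n0 - 1)
    n_u = n0
    rank = 1 + sum(score(user, j) > score(user, target) for j in sample)
    while rank == 1 and n_u != n_max:
        sample += rng.sample(pool, n_u)   # draw n_u extra items (size doubles)
        n_u *= 2
        rank = 1 + sum(score(user, j) > score(user, target) for j in sample)
    return rank, n_u

# toy model: lower item id means higher score, so item 0 is globally best
score = lambda u, i: -i
rank, n_u = adaptive_sample(score, user=0, target=0, items=range(1000),
                            n0=8, n_max=32, rng=random.Random(0))
print(rank, n_u)   # the globally best item stays top-1, so sampling hits n_max
```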
### 4.2 Maximum Likelihood Estimation by EM
To utilize adaptive item sampling for estimating the global top-$K$ metrics,
we review two routes: 1) the approaches from (Krichene and Rendle 2020) and
our aforementioned new estimator in this paper; 2) methods based on MLE and EM
(Jin et al. 2021b). Since every user has a different number of item samples,
we found the first route hard to extend (it requires an equal sample size per
user); fortunately, the second route is much more flexible and can easily be
generalized to this situation.
To begin with, we note that for any user $u$, whose test item ranks $r_{u}$ in
the sample set of size $n_{u}$ and $R_{u}$ (unknown) globally, the rank
$r_{u}$ follows a binomial distribution:
$\begin{split}P(r=r_{u}|R=R_{u};n_{u})=Bin(r_{u}-1;n_{u}-1,\theta_{R_{u}})\\\
\end{split}$ (12)
Given this, let $\boldsymbol{\Pi}=(\pi_{1},\dots,\pi_{R},\dots,\pi_{N})^{T}$
be the parameters of the mixture of binomial distributions, where $\pi_{R}$ is
the probability that a user's target item ranks at position $R$ globally. We
then have $p(r_{u}|\boldsymbol{\Pi})=\sum_{R=1}^{N}\pi_{R}\cdot
p(r_{u}|\theta_{R};n_{u})$, where
$p(r_{u}|\theta_{R};n_{u})=Bin(r_{u}-1;n_{u}-1,\theta_{R})$. We can apply
maximum likelihood estimation (MLE) to learn the parameters of this mixture of
binomial distributions ($MB$), which naturally generalizes the EM procedure
(see Appendix G for details) used in (Jin et al. 2021b), where each user has
the same $n$ samples:
$\begin{split}\phi(R_{uk})&=P(R_{u}=k|r_{u};\bm{\pi}^{old})\\\
\pi^{new}_{k}&=\frac{1}{M}\sum\limits_{u=1}^{M}\phi(R_{uk})\end{split}$
When the process converges, we obtain $\boldsymbol{\Pi}^{*}$ and use it to
estimate $\boldsymbol{P}$, i.e., $\widehat{P}(R)=\pi^{*}_{R}$. Then, we can
use $\widehat{P}(R)$ in Eq. 17 to estimate the desired metric $metric@K$. The
overall time complexity is linear in the total sample size, $O(t\sum{n_{u}})$,
where $t$ is the number of iterations.
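The generalized EM update with per-user sample sizes $n_{u}$ can be sketched as follows, assuming $\theta_{R}=(R-1)/(N-1)$ as in the binomial sampling model (function names are ours):

```python
import numpy as np
from math import comb

def em_rank_distribution(r, n, N, iters=50):
    """MLE of the global rank distribution pi by EM, where user u's target
    item ranks r[u] among n[u] sampled items (mixture of binomials)."""
    r, n = np.asarray(r), np.asarray(n)
    theta = (np.arange(1, N + 1) - 1) / (N - 1)          # theta_R per global rank
    # likelihoods L[u, R] = Bin(r_u - 1; n_u - 1, theta_R)
    L = np.array([[comb(nu - 1, ru - 1) * t**(ru - 1) * (1 - t)**(nu - ru)
                   for t in theta] for ru, nu in zip(r, n)])
    pi = np.full(N, 1 / N)                               # uniform initialization
    for _ in range(iters):
        phi = pi * L                                     # E-step (unnormalized posteriors)
        phi /= phi.sum(axis=1, keepdims=True)
        pi = phi.mean(axis=0)                            # M-step
    return pi

# all users' targets rank last among their 10 samples -> mass concentrates at R = N
pi = em_rank_distribution(r=[10] * 100, n=[10] * 100, N=50)
print(pi.argmax() + 1)   # the estimated modal global rank is N = 50
```

Because the binomial likelihood is computed per user, nothing in the update requires the $n_{u}$ to be equal, which is exactly what the adaptive sampling produces.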
### 4.3 Sampling Size Upper Bound
Figure 2: Sampling efficiency w.r.t. the terminal size. The illustrated result
is obtained with the EASE model (Steck 2019) on the yelp dataset; the same
pattern is consistently observed on the other datasets.
Now, we consider how to determine the terminal size $n_{max}$. We perform a
post-hoc analysis over different terminal sizes and investigate the average
sampling cost, which introduces the concept of sampling efficiency; see
Fig. 2. Formally, we first select a large number $n_{max}\approx N$ and run
the aforementioned adaptive sampling process. Each user's sample set size is
then one of $\{n_{0},n_{1}=2n_{0},n_{2}=4n_{0},\dots,n_{t}=n_{max}\}$, and
there are $m_{j}$ users whose sample set size is $n_{j}$ $(j=0,1,\dots,t)$.
The average sampling cost for each size $n_{j}$ can be defined heuristically
as:
$\begin{split}C_{j}&=\frac{(M-\sum\limits_{p=0}^{j-1}{m_{p}})\times(n_{j}-n_{j-1})}{m_{j}}\quad
j\neq 0,t\\\ C_{0}&=\frac{M\times n_{0}}{m_{0}}\end{split}$ (13)
The intuition behind Eq. 13 is: at the $j$-th iteration, we independently
sample $n_{j}-n_{j-1}$ items for a total of
$M-\sum\limits_{p=0}^{j-1}{m_{p}}$ users, and there are $m_{j}$ users whose
rank becomes $r_{u}>1$. $C_{j}$ is thus the average number of items that must
be sampled to obtain one user with $r_{u}>1$, which reflects the sampling
efficiency.
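Eq. 13 is simple to compute from the per-size user counts; the numbers below are an illustrative toy run, not results from the paper:

```python
def sampling_costs(M, sizes, m):
    """Average sampling cost C_j (Eq. 13) per terminal size.
    sizes = [n_0, n_1, ...] with n_j = 2 * n_{j-1}; m[j] = #users stopping at n_j."""
    costs = [M * sizes[0] / m[0]]                 # C_0 = M * n_0 / m_0
    remaining = M - m[0]                          # users still being resampled
    for j in range(1, len(sizes)):
        costs.append(remaining * (sizes[j] - sizes[j - 1]) / m[j])
        remaining -= m[j]
    return costs

# toy run: 1000 users, doubling from 100; most users stop early
print(sampling_costs(1000, [100, 200, 400, 800], [700, 200, 80, 20]))
```

As in Fig. 2, the cost $C_j$ grows sharply once only a few hard users remain, which is the signal used to pick $n_{max}$.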
In Fig. 2, we can see that once the sample size reaches $3200$, the sampling
efficiency drops quickly (the average cost $C_{j}$ increases fast). Such
post-hoc analysis provides insight into how to balance the sample size against
the sampling efficiency. In this case, we observe that $n_{max}=3200$ is a
reasonable choice. Even though different datasets may suggest different
thresholds, we found in practice that $3200$ serves as a good default choice
and achieves quite good estimation accuracy.
## 5 Experiments
Table 1: Average relative errors between the estimated $Recall@K$ ($K$ from
$1$ to $50$) and the true values, in $\%$. MES, MLE, and BV are baselines;
BV_MES, BV_MLE, MN_MES, and MN_MLE are proposed in this paper. In each row,
the two smallest results are highlighted in bold, indicating the most accurate
estimators. Sample set size $n=100$.
dataset | Models | MES | MLE | BV | BV_MES | BV_MLE | MN_MES | MN_MLE
---|---|---|---|---|---|---|---|---
pinterest-20 | EASE | 5.86$\scriptstyle\pm$2.26 | 5.54$\scriptstyle\pm$1.85 | 8.11$\scriptstyle\pm$2.00 | 5.05$\scriptstyle\pm$1.46 | 5.14$\scriptstyle\pm$1.46 | 5.00$\scriptstyle\pm$1.39 | 5.10$\scriptstyle\pm$1.34
MultiVAE | 4.17$\scriptstyle\pm$2.91 | 3.34$\scriptstyle\pm$2.07 | 2.75$\scriptstyle\pm$1.61 | 2.89$\scriptstyle\pm$1.74 | 2.88$\scriptstyle\pm$1.74 | 2.75$\scriptstyle\pm$1.66 | 2.75$\scriptstyle\pm$1.68
NeuMF | 5.17$\scriptstyle\pm$2.74 | 4.28$\scriptstyle\pm$1.95 | 4.23$\scriptstyle\pm$1.79 | 3.83$\scriptstyle\pm$1.59 | 3.84$\scriptstyle\pm$1.72 | 3.60$\scriptstyle\pm$1.50 | 3.76$\scriptstyle\pm$1.44
itemKNN | 5.90$\scriptstyle\pm$2.20 | 5.80$\scriptstyle\pm$1.60 | 8.93$\scriptstyle\pm$1.70 | 5.11$\scriptstyle\pm$1.22 | 5.31$\scriptstyle\pm$1.25 | 5.09$\scriptstyle\pm$1.15 | 5.26$\scriptstyle\pm$1.14
ALS | 4.19$\scriptstyle\pm$2.37 | 3.44$\scriptstyle\pm$1.68 | 3.17$\scriptstyle\pm$1.34 | 3.05$\scriptstyle\pm$1.39 | 3.07$\scriptstyle\pm$1.42 | 2.86$\scriptstyle\pm$1.27 | 2.90$\scriptstyle\pm$1.28
yelp | EASE | 8.08$\scriptstyle\pm$4.94 | 7.89$\scriptstyle\pm$4.70 | 18.60$\scriptstyle\pm$2.78 | 6.10$\scriptstyle\pm$3.74 | 6.56$\scriptstyle\pm$3.90 | 4.84$\scriptstyle\pm$2.17 | 5.61$\scriptstyle\pm$2.30
MultiVAE | 9.33$\scriptstyle\pm$6.61 | 7.67$\scriptstyle\pm$4.94 | 9.70$\scriptstyle\pm$3.22 | 6.84$\scriptstyle\pm$4.10 | 6.80$\scriptstyle\pm$4.04 | 4.30$\scriptstyle\pm$1.27 | 4.35$\scriptstyle\pm$1.31
NeuMF | 15.09$\scriptstyle\pm$6.24 | 15.47$\scriptstyle\pm$5.55 | 22.40$\scriptstyle\pm$3.17 | 13.14$\scriptstyle\pm$4.55 | 13.92$\scriptstyle\pm$4.70 | 13.46$\scriptstyle\pm$2.43 | 14.50$\scriptstyle\pm$2.45
itemKNN | 9.25$\scriptstyle\pm$4.87 | 9.62$\scriptstyle\pm$4.88 | 23.24$\scriptstyle\pm$2.16 | 7.69$\scriptstyle\pm$4.09 | 8.15$\scriptstyle\pm$4.17 | 7.74$\scriptstyle\pm$2.08 | 8.75$\scriptstyle\pm$2.08
ALS | 14.31$\scriptstyle\pm$3.96 | 13.68$\scriptstyle\pm$3.51 | 15.14$\scriptstyle\pm$1.86 | 13.43$\scriptstyle\pm$3.16 | 13.26$\scriptstyle\pm$3.08 | 11.68$\scriptstyle\pm$0.88 | 11.57$\scriptstyle\pm$0.83
In this section, we evaluate the performance of the newly proposed estimators in Eq. 11 against the baselines from (Krichene and Rendle 2020; Jin et al. 2021b), and validate the effectiveness and efficiency of adaptive-sampling-based MLE (adaptive MLE). Specifically, we aim to answer three questions:

Question 1. How do the new estimators in Section 3 perform compared to estimators based on learning empirical distributions (i.e., $MLE$ and $MES$ in (Jin et al. 2021b)) and to the $BV$ approach in (Krichene and Rendle 2020)?

Question 2. How effective and efficient is the adaptive item-sampling evaluation method, adaptive MLE, compared with the best estimators for the basic (non-adaptive) item-sampling methods in Section 3?

Question 3. How accurately can these estimators identify the best model (in terms of the global top-K metric) among a list of recommendation models?
#### Experimental Setup
We use three datasets that are widely used in recommendation-system research: $pinterest-20$, $yelp$, and $ml-20m$ (see Appendix I for dataset statistics). Following (Jin et al. 2021b), we adopt five popular recommendation algorithms: three non-deep-learning methods (itemKNN (Deshpande and Karypis 2004), ALS (Hu, Koren, and Volinsky 2008), and EASE (Steck 2019)) and two deep learning ones (NeuMF (He et al. 2017) and MultiVAE (Liang et al. 2018)). The three most popular top-K metrics, $Recall$, $NDCG$, and $AP$, are used to evaluate the recommendation models.
#### Estimating Procedure
There are $M$ users and $N$ items, and each user $u$ is associated with a target item $i_{u}$. A learned recommendation algorithm/model $A$ computes the ranks $\\{R_{u}\\}_{u=1}^{M}$ among all items, called the global ranks, and the ranks $\\{r_{u}\\}_{u=1}^{M}$ among the sampled items, called the sampled ranks. Without knowledge of $\\{R_{u}\\}_{u=1}^{M}$, an estimator tries to estimate the global metric defined in Eq. 2 based on $\\{r_{u}\\}_{u=1}^{M}$. We repeat each experiment $100$ times, deriving $100$ distinct $\\{r_{u}\\}_{u=1}^{M}$ sets; all experimental results reported below are averaged over these $100$ repeats.
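Since the $n-1$ items are drawn uniformly without replacement from the $N-1$ non-target items, the sampled rank $r_{u}$ can be simulated directly from the global rank $R_{u}$: the number of sampled items that outrank the target follows a hypergeometric distribution. A minimal sketch of one repeat (toy sizes; the function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ranks(global_ranks, N, n):
    """Simulate sampled ranks r_u from global ranks R_u.

    n - 1 items are drawn uniformly without replacement from the
    N - 1 non-target items; the number of drawn items that outrank
    the target (out of R_u - 1 candidates) is hypergeometric, so
    r_u = 1 + Hypergeometric(ngood=R_u - 1, nbad=N - R_u, nsample=n - 1).
    """
    R = np.asarray(global_ranks)
    return 1 + rng.hypergeometric(R - 1, N - R, n - 1)

# one repeat: 5000 users, N = 2000 items, sample set size n = 100
R_u = rng.integers(1, 2001, size=5000)   # toy global ranks
r_u = sample_ranks(R_u, N=2000, n=100)
```

Repeating this draw $100$ times yields the $100$ distinct sampled-rank sets described above.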
Due to space limitations, we report representative experimental results below and defer the baselines, dataset statistics, and additional results to Appendix I.
### Q1. Accuracy of the Estimators for Basic Item-Sampling
Here we answer Question 1 by comparing how the new estimators in Section 3 perform against the state-of-the-art methods from (Krichene and Rendle 2020; Jin et al. 2021b) under the basic item-sampling scheme. Unlike (Jin et al. 2021b), where the authors list all the estimated and true results for $metric@10$ and draw conclusions from them, we quantify the accuracy of each estimator in terms of relative error, leading to a more rigorous and reliable comparison. Specifically, we compute the true global $metric@k$ ($k$ from $1$ to $50$), then average the absolute relative error between the estimated $metric@k$ of each estimator and the true value.
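This averaged relative-error measure is straightforward to compute. A minimal sketch with illustrative numbers (not values from the paper):

```python
import numpy as np

def avg_relative_error(est, true):
    """Average absolute relative error between estimated and true
    metric@k curves for k = 1..K (arrays of equal length)."""
    est, true = np.asarray(est, float), np.asarray(true, float)
    return float(np.mean(np.abs(est - true) / true))

true_recall = np.linspace(0.01, 0.50, 50)  # toy Recall@1 .. Recall@50
est_recall = true_recall * 1.05            # a uniform 5% overestimate
err = avg_relative_error(est_recall, true_recall)  # ≈ 0.05
```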
The estimators include $BV$ (with tradeoff parameter $\gamma=0.01$) from (Krichene and Rendle 2020), $MLE$ (Maximum Likelihood Estimation) and $MES$ (Maximum Entropy with Squared distribution distance, with $\eta=0.001$) from (Jin et al. 2021b), and the newly proposed estimators of Eq. 11. Table 1 (see Table 6 in the Appendix for the complete version) presents the average relative error of the estimators in terms of $Recall@K$ ($K$ from $1$ to $50$); the results for $NDCG@K$ and $AP@K$ are in Appendix I. We highlight the most and second-most accurate estimators. For instance, for model $EASE$ on dataset $pinterest-20$ (row $1$ of Table 1), the estimator $MN\\_MES$ is the most accurate, with a $5.00\%$ average relative error against the global $Recall@K$ ($K$ from $1$ to $50$).
Overall, we observe from Table 1 that $MN\\_MES$ and $MN\\_MLE$ are among the most, or second-most, accurate estimators, and in most cases they outperform the others significantly. Meanwhile, they have smaller deviations than their predecessors $MES$ and $MLE$. In addition, we notice that the estimators equipped with a reasonable prior distribution ($BV\\_MES$, $MN\\_MES$, $BV\\_MLE$, $MN\\_MLE$) achieve more accurate results than the others, which indicates that such priors help the estimated distribution converge.
### Q2. Performance of Adaptive Sampling
Here we answer Question 2 by comparing the adaptive item-sampling method, adaptive MLE, against the best estimators for the basic (non-adaptive) item-sampling methods, namely $BV\\_MES$, $MN\\_MES$, $BV\\_MLE$, and $MN\\_MLE$.
Table 2: The average relative errors between the estimated $NDCG@K$ ($K$ from $1$ to $50$) and the true values. Unit is $\%$. The smallest relative error in each row indicates the most accurate result. The fixed-sample estimators use sample set size $n=500$; the last two columns report the average sample size and the error of adaptive MLE.

| Dataset | Model | BV_MES | BV_MLE | MN_MES | MN_MLE | Avg. sample size | Adaptive MLE |
|---|---|---|---|---|---|---|---|
| pinterest-20 | EASE | 3.87 ± 2.13 | 4.33 ± 2.23 | 4.17 ± 2.45 | 4.33 ± 2.50 | 307.74 ± 1.41 | 1.46 ± 0.63 |
| | MultiVAE | 2.66 ± 1.75 | 2.58 ± 1.75 | 3.26 ± 2.14 | 3.07 ± 2.09 | 286.46 ± 1.48 | 1.67 ± 0.70 |
| | NeuMF | 2.82 ± 1.98 | 2.79 ± 1.96 | 3.24 ± 2.30 | 3.22 ± 2.27 | 259.77 ± 1.28 | 1.73 ± 0.83 |
| | itemKNN | 3.99 ± 2.22 | 4.18 ± 2.24 | 4.23 ± 2.47 | 4.06 ± 2.44 | 309.56 ± 1.31 | 1.42 ± 0.67 |
| | ALS | 3.35 ± 1.89 | 3.24 ± 1.87 | 3.90 ± 2.24 | 3.80 ± 2.23 | 270.75 ± 1.22 | 1.84 ± 1.07 |
| yelp | EASE | 5.63 ± 3.73 | 5.48 ± 3.66 | 4.03 ± 2.53 | 4.04 ± 2.53 | 340.79 ± 2.03 | 3.55 ± 2.00 |
| | MultiVAE | 7.68 ± 5.86 | 7.48 ± 5.74 | 5.77 ± 3.87 | 5.64 ± 3.82 | 288.70 ± 2.24 | 5.09 ± 2.60 |
| | NeuMF | 9.34 ± 5.50 | 9.55 ± 5.50 | 8.43 ± 4.07 | 8.91 ± 4.13 | 290.62 ± 2.11 | 4.43 ± 2.55 |
| | itemKNN | 5.01 ± 2.99 | 4.99 ± 2.96 | 3.65 ± 2.28 | 3.71 ± 2.29 | 369.16 ± 2.51 | 3.67 ± 2.73 |
| | ALS | 13.39 ± 7.34 | 13.94 ± 7.61 | 12.57 ± 5.46 | 13.67 ± 5.80 | 297.07 ± 2.29 | 5.48 ± 3.34 |
Table 3: Accuracy of predicting the winner model for different datasets. Values are the number of correct predictions over $100$ repeats; the larger the number, the better the estimator. Fixed-sample estimators use sample set size $n=500$; adaptive MLE uses sample set sizes of $260\sim 310$.

| Dataset | Top-K | Metric | BV_MES | BV_MLE | MN_MES | MN_MLE | Adaptive MLE |
|---|---|---|---|---|---|---|---|
| pinterest-20 | 10 | RECALL | 69 | 73 | 67 | 69 | 78 |
| | | NDCG | 58 | 59 | 58 | 60 | 84 |
| | | AP | 54 | 57 | 52 | 52 | 68 |
| | 20 | RECALL | 69 | 73 | 70 | 74 | 81 |
| | | NDCG | 69 | 73 | 68 | 73 | 79 |
| | | AP | 57 | 60 | 54 | 56 | 69 |
Table 2 (complete version in Appendix Table 7) presents the average relative error of the estimators in terms of $NDCG@K$ ($K$ from $1$ to $50$); we highlight the most accurate estimator. For basic item sampling, we choose a sample size of $500$ for datasets $pinterest-20$ and $yelp$, and $1000$ for dataset $ml-20m$. The upper-bound threshold $n_{max}$ is set to $3200$.

We observe that adaptive sampling uses a much smaller sample size (typically $200$-$300$ vs. $500$ on the $pinterest-20$ and $yelp$ datasets, and $700$-$800$ vs. $1000$ on $ml-20m$). Moreover, the relative error of adaptive sampling is significantly smaller than that of the basic sampling methods; on the first ($pinterest-20$) and third ($ml-20m$) datasets, the relative errors are reduced to less than $2\%$. In other words, the adaptive method is much more effective (in terms of accuracy) and efficient (in terms of sample size). This also confirms its benefit in addressing the “blind spot” issue, providing higher resolution for recovering global top-$K$ metrics when $K$ is small ($K\leq 50$ here).
### Q3. Estimating the Winner
Table 3 (complete version in Appendix Table 8) reports how many times, among the $100$ repeats, an estimator identifies the best recommendation algorithm for a given $metric@K$. More concretely, for a global $metric@K$ (computed from Eq. 2), the recommendation algorithm/model that performs best is called the winner. Each estimator also estimates $metric@K$ for every algorithm based on its $\\{r_{u}\\}_{u=1}^{M}$, and among these estimates one can likewise identify a best model. Intuitively, if an estimator is accurate enough, it finds the same winner as the ground truth; we therefore count the number of successes as another measure of estimation accuracy. From Table 3, we observe that the newly proposed adaptive estimator achieves competitive or even better results than the baselines at a much smaller average sample cost.
## 6 Conclusion
In this paper, we propose the first item-sampling estimators that explicitly optimize their mean squared error with respect to the ground truth, assuming a prior rank distribution is given. We then highlight the subtle difference between the estimators of (Krichene and Rendle 2020) and ours, and point out a potential issue of the former: failing to link the user size with the variance. Furthermore, we address a limitation of current item-sampling approaches, which typically lack sufficient granularity to recover the global top-$K$ metrics when $K$ is small, by proposing an effective adaptive item-sampling method. The experimental evaluation demonstrates that the new adaptive item sampling significantly improves both sampling efficiency and estimation accuracy. Our results provide a solid step towards making item sampling a reliable tool for recommendation research and practice. In the future, we would like to investigate how to combine item sampling with user sampling to further speed up offline evaluation, and to study other aspects of the evaluation criteria such as fairness and robustness.
## References
* Casella and Berger (2002) Casella, G.; and Berger, R. 2002. _Statistical Inference_. Thomson Learning.
* Cover and Thomas (2006) Cover, T. M.; and Thomas, J. A. 2006. _Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing)_. USA: Wiley-Interscience. ISBN 0471241954.
* Cremonesi et al. (2011) Cremonesi, P.; Garzotto, F.; Negro, S.; Papadopoulos, A.; and Turrin, R. 2011. Comparative evaluation of recommender system quality. In _CHI’11 Extended Abstracts on Human Factors in Computing Systems_ , 1927–1932.
* Cremonesi, Koren, and Turrin (2010) Cremonesi, P.; Koren, Y.; and Turrin, R. 2010. Performance of Recommender Algorithms on Top-n Recommendation Tasks. In _RecSys’10_.
* Dacrema et al. (2019) Dacrema, M. F.; Boglio, S.; Cremonesi, P.; and Jannach, D. 2019. A Troubling Analysis of Reproducibility and Progress in Recommender Systems Research. arXiv:1911.07698.
* Deshpande and Karypis (2004) Deshpande, M.; and Karypis, G. 2004. Item-Based Top-N Recommendation Algorithms. _ACM Trans. Inf. Syst._
* Ebesu, Shen, and Fang (2018) Ebesu, T.; Shen, B.; and Fang, Y. 2018. Collaborative Memory Network for Recommendation Systems. In _SIGIR’18_.
* Fayyaz et al. (2020) Fayyaz, Z.; Ebrahimian, M.; Nawara, D.; Ibrahim, A.; and Kashef, R. 2020. Recommendation Systems: Algorithms, Challenges, Metrics, and Business Opportunities. _Applied Sciences_ , 10(21): 7748.
* Gruson et al. (2019) Gruson, A.; Chandar, P.; Charbuillet, C.; McInerney, J.; Hansen, S.; Tardieu, D.; and Carterette, B. 2019. Offline Evaluation to Make Decisions About Playlist Recommendation Algorithms. In _Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining_ , WSDM ’19, 420–428. New York, NY, USA: Association for Computing Machinery. ISBN 9781450359405.
* He et al. (2017) He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; and Chua, T.-S. 2017. Neural Collaborative Filtering. WWW ’17.
* Hu et al. (2018) Hu, B.; Shi, C.; Zhao, W. X.; and Yu, P. S. 2018. Leveraging Meta-Path Based Context for Top- N Recommendation with A Neural Co-Attention Model. In _KDD’18_.
* Hu, Koren, and Volinsky (2008) Hu, Y.; Koren, Y.; and Volinsky, C. 2008. Collaborative filtering for implicit feedback datasets. In _ICDM’08_.
* Jin et al. (2021a) Jin, R.; Li, D.; Gao, J.; Liu, Z.; Chen, L.; and Zhou, Y. 2021a. Towards a better understanding of linear models for recommendation. In _Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining_, 776–785.
* Jin et al. (2021b) Jin, R.; Li, D.; Mudrak, B.; Gao, J.; and Liu, Z. 2021b. On Estimating Recommendation Evaluation Metrics under Sampling. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 4147–4154.
* Koren (2008) Koren, Y. 2008. Factorization Meets the Neighborhood: A Multifaceted Collaborative Filtering Model. In _KDD’08_.
* Krichene et al. (2019) Krichene, W.; Mayoraz, N.; Rendle, S.; Zhang, L.; Yi, X.; Hong, L.; Chi, E. H.; and Anderson, J. R. 2019. Efficient Training on Very Large Corpora via Gramian Estimation. In _ICLR’2019_.
* Krichene and Rendle (2020) Krichene, W.; and Rendle, S. 2020. On Sampled Metrics for Item Recommendation. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, KDD ’20.
* Krichene and Rendle (2022) Krichene, W.; and Rendle, S. 2022. On sampled metrics for item recommendation. _Commun. ACM_ , 65(7): 75–83.
* Lehmann and Casella (2006) Lehmann, E. L.; and Casella, G. 2006. _Theory of point estimation_. Springer Science & Business Media.
* Li et al. (2020) Li, D.; Jin, R.; Gao, J.; and Liu, Z. 2020. On Sampling Top-K Recommendation Evaluation. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, KDD ’20.
* Liang et al. (2018) Liang, D.; Krishnan, R. G.; Hoffman, M. D.; and Jebara, T. 2018. Variational Autoencoders for Collaborative Filtering. In _WWW’18_.
* Peng, Sugiyama, and Mine (2022) Peng, S.; Sugiyama, K.; and Mine, T. 2022. Less is More: Reweighting Important Spectral Graph Features for Recommendation. In _Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval_ , SIGIR ’22, 1273–1282. New York, NY, USA: Association for Computing Machinery. ISBN 9781450387323.
* Rendle (2019) Rendle, S. 2019. Evaluation metrics for item recommendation under sampling. _arXiv preprint arXiv:1912.02263_.
* Steck (2019) Steck, H. 2019. Embarrassingly Shallow Autoencoders for Sparse Data. _WWW’19_.
* Wang et al. (2019) Wang, X.; Wang, D.; Xu, C.; He, X.; Cao, Y.; and Chua, T. 2019. Explainable Reasoning over Knowledge Graphs for Recommendation. In _The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI2019_.
* Yang et al. (2018a) Yang, L.; Bagdasaryan, E.; Gruenstein, J.; Hsieh, C.-K.; and Estrin, D. 2018a. OpenRec: A Modular Framework for Extensible and Adaptable Recommendation Algorithms. In _WSDM’18_.
* Yang et al. (2018b) Yang, L.; Cui, Y.; Xuan, Y.; Wang, C.; Belongie, S. J.; and Estrin, D. 2018b. Unbiased Offline Recommender Evaluation for Missing-Not-at-Random Implicit Feedback. In _RecSys’18_.
* Zhao et al. (2021) Zhao, W. X.; Mu, S.; Hou, Y.; Lin, Z.; Chen, Y.; Pan, X.; Li, K.; Lu, Y.; Wang, H.; Tian, C.; Min, Y.; Feng, Z.; Fan, X.; Chen, X.; Wang, P.; Ji, W.; Li, Y.; Wang, X.; and Wen, J.-R. 2021. RecBole: Towards a Unified, Comprehensive and Efficient Framework for Recommendation Algorithms. In _In Proceedings of Conference on Information and Knowledge Management (CIKM ’21)_.
## Appendix A Related Work: Efforts in Metric Estimation
Two recent works study the general metric-estimation problem based on item-sampling metrics. Specifically, given the sampled rank results on the test set, $\\{r_{u}\\}^{M}_{u=1}$, how can one infer/approximate $T$ from Eq. 1, or more commonly Eq. 2, without knowledge of $\\{R_{u}\\}_{u=1}^{M}$?
Krichene and Rendle’s approach: (Krichene and Rendle 2020) develops a discrete function $\widehat{\mathcal{M}}(r)$ such that:
$\begin{split}T&=\frac{1}{M}\sum_{u=1}^{M}\mathbf{1}_{R_{u}\leq
K}\cdot{\mathcal{M}}(R_{u})\\\
&\approx\frac{1}{M}\sum_{u=1}^{M}\widehat{\mathcal{M}}(r_{u})=\sum_{r=1}^{n}\tilde{P}(r)\widehat{\mathcal{M}}(r)=\widehat{T}\end{split}$
(14)
where $\tilde{P}(r)$ is the empirical rank distribution on the sampled data (Table 4). They have proposed several estimators based on this idea, including ones that use unbiased rank estimators, minimize bias under a monotonicity constraint ($CLS$), and utilize a bias-variance ($BV$) tradeoff. Their study shows that only the last ($BV$) is competitive; its solution, derived from the bias-variance tradeoff, is:
$\begin{split}&\widehat{\mathcal{M}}=\Big{(}(1.0-\gamma)A^{T}A+\gamma\text{diag}(\boldsymbol{c})\Big{)}^{-1}A^{T}\boldsymbol{b}\\\
&A\in\mathbb{R}^{N\times n},\quad A_{R,r}=\sqrt{P(R)}P(r|R)\\\
&\boldsymbol{b}\in\mathbb{R}^{N},\quad b_{R}=\sqrt{P(R)}\mathcal{M}(R)\\\
&\boldsymbol{c}\in\mathbb{R}^{n},\quad
c_{r}=\sum\limits_{R}^{N}{P(R)P(r|R)}\end{split}$ (15)
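Eq. 15 is a regularized least-squares solve and fits in a few lines. A sketch, assuming for illustration a uniform prior $P(R)$ and the binomial conditional rank model $P(r|R)=\mathrm{Binom}(r-1;\,n-1,\,(R-1)/(N-1))$ (both are our demo assumptions, not choices prescribed by Eq. 15):

```python
import numpy as np
from scipy.stats import binom

def bv_estimator(P_R, metric, N, n, gamma=0.01):
    """Solve Eq. 15 for the corrective weights M_hat(r), r = 1..n.

    P_R    : length-N prior rank distribution P(R)
    metric : length-N array M(R), e.g. 1{R <= K} for Recall@K
    Assumes P(r|R) = Binom(r - 1; n - 1, (R - 1) / (N - 1)).
    """
    R = np.arange(1, N + 1)
    r = np.arange(1, n + 1)
    P_r_given_R = binom.pmf(r[None, :] - 1, n - 1,
                            (R[:, None] - 1) / (N - 1))   # shape (N, n)
    A = np.sqrt(P_R)[:, None] * P_r_given_R               # A_{R,r}
    b = np.sqrt(P_R) * metric                             # b_R
    c = P_R @ P_r_given_R                                 # c_r
    lhs = (1.0 - gamma) * (A.T @ A) + gamma * np.diag(c)
    return np.linalg.solve(lhs, A.T @ b)

# toy example: uniform prior, Recall@10, N = 200 items, n = 20 samples
N, n, K = 200, 20, 10
P_R = np.full(N, 1.0 / N)
M_hat = bv_estimator(P_R, (np.arange(1, N + 1) <= K).astype(float), N, n)
```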
Jin et al.’s Method: (Jin et al. 2021b) proposed different types of estimators
based on learning the empirical rank distribution $(P(R))_{R=1}^{N}$:
$T=\frac{1}{M}\sum_{u=1}^{M}\mathbf{1}_{R_{u}\leq
K}\cdot{\mathcal{M}}(R_{u})=\sum_{R=1}^{K}P(R)\cdot{\mathcal{M}}(R)$ (16)
Thus, if $P(R)$ can be estimated (by $\widehat{P}(R)$), then the metric Eq. 16
can be estimated by:
$\widehat{T}=\sum_{R=1}^{K}\widehat{P}(R)\cdot{\mathcal{M}}(R)$ (17)
They further proposed two approaches to learn the empirical rank distribution $(P(R))_{R=1}^{N}$. The first learns the parameters of a mixture of binomial distributions ($MB$) given $\\{r_{u}\\}_{u=1}^{M}$ via maximum likelihood estimation (MLE); the other estimates the (discrete) probability distribution based on the principle of maximum entropy (Cover and Thomas 2006).
## Appendix B Notation
Table 4: Definition of notations

| Symbol | Definition |
|---|---|
| $M$ | # of users in the test set |
| $N$ | # of total items |
| $I$ | the set of all items; $\lvert I\rvert=N$ |
| $i_{u}$ | target item for user $u$ in the test set |
| $R_{u}$ | rank of item $i_{u}$ among $I$ for user $u$ |
| $n$ | sample set size ($1$ target item, $n-1$ sampled items) |
| $I_{u}$ | sample set for user $u$; $I_{u}\backslash i_{u}$ consists of the $n-1$ sampled items |
| $r_{u}$ | rank of item $i_{u}$ among $I_{u}$ |
| $T$ | evaluation metric, e.g. $Recall$ |
| $T_{S}$ | sampled evaluation metric, e.g. $Recall_{S}$ |
| $\widehat{T}$ | estimated evaluation metric |
## Appendix C Rewriting of Eq. 6
$\begin{split}&\operatorname*{\mathbb{E}}[T-\sum_{R=1}^{N}P(R)\mathcal{M}(R)]^{2}\\\
=&\operatorname*{\mathbb{E}}[\operatorname*{\mathbb{E}}T-\sum_{R=1}^{N}P(R)\mathcal{M}(R)+T-\operatorname*{\mathbb{E}}T]^{2}\\\
=&\Big{(}\operatorname*{\mathbb{E}}T-\sum_{R=1}^{N}P(R)\mathcal{M}(R)\Big{)}^{2}+\operatorname*{\mathbb{E}}[T-\operatorname*{\mathbb{E}}T]^{2}\\\
=&\Big{(}\sum_{r=1}^{n}P(r)\widehat{\mathcal{M}}(r)-\sum_{R=1}^{N}P(R)\mathcal{M}(R)\Big{)}^{2}\\\
+&\operatorname*{\mathbb{E}}[\sum_{r=1}^{n}\tilde{P}(r)\widehat{\mathcal{M}}(r)-\sum_{r=1}^{n}P(r)\widehat{\mathcal{M}}(r)]^{2}\\\
=&\Big{(}\sum_{r=1}^{n}\sum_{R=1}^{N}P(r|R)P(R)\widehat{\mathcal{M}}(r)-\sum_{R=1}^{N}P(R)\mathcal{M}(R)\Big{)}^{2}\\\
+&\operatorname*{\mathbb{E}}[\sum_{r=1}^{n}\sum_{R=1}^{N}\tilde{P}(r|R)P(R)\widehat{\mathcal{M}}(r)-\sum_{r=1}^{n}\sum_{R=1}^{N}P(r|R)P(R)\widehat{\mathcal{M}}(r)]^{2}\end{split}$
## Appendix D Linear Combination of Coefficients
Consider $X=(X_{1},\dots,X_{n})$, the random vector of a multinomial distribution with $M$ trials and $n$ cell probabilities $(\theta_{1},\dots,\theta_{n})$. We have $\frac{X_{i}}{M}\rightarrow\theta_{i}$ as $M\rightarrow\infty$, and:
$\begin{split}&\operatorname*{\mathbb{E}}[{X_{i}}]=M\theta_{i}\quad
Var[X_{i}]=M\theta_{i}(1-\theta_{i})\\\
&Cov(X_{i},X_{j})=-M\theta_{i}\theta_{j}\end{split}$ (18)
Consider a new random variable given by the linear combination $\mathcal{A}=\sum\limits_{i=1}^{n}{w_{i}X_{i}}$, where the $w_{i}$ are constant coefficients. Then:
$\begin{split}\operatorname*{\mathbb{E}}[\mathcal{A}]&=M\cdot\sum\limits_{i=1}^{n}{w_{i}\theta_{i}}\\\
Var[\mathcal{A}]&=\operatorname*{\mathbb{E}}[\mathcal{A}^{2}]-(\operatorname*{\mathbb{E}}[\mathcal{A}])^{2}=\sum\limits_{i}^{n}{w^{2}_{i}[M\theta_{i}-M\theta_{i}^{2}]}-2\sum\limits_{i<j}{w_{i}w_{j}[M\theta_{i}\theta_{j}]}\\\
&=M\sum\limits_{i}^{n}{w^{2}_{i}\theta_{i}}-M\cdot\Big{(}\sum\limits_{i}^{n}{w^{2}_{i}\theta^{2}_{i}}+2\sum\limits_{i<j}w_{i}w_{j}\theta_{i}\theta_{j}\Big{)}\\\
&=M\cdot\Big{(}\sum\limits_{i}^{n}{w^{2}_{i}\theta_{i}}-\big{(}\sum\limits_{i}^{n}{w_{i}\theta_{i}}\big{)}^{2}\Big{)}\end{split}$
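The closed form just derived can be sanity-checked against Monte Carlo simulation. A small numerical sketch (the values of $\theta$, $w$, and $M$ are toy choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)

theta = np.array([0.2, 0.3, 0.5])   # cell probabilities
w = np.array([1.0, -2.0, 0.5])      # linear-combination coefficients
M = 1000                            # number of multinomial trials

# closed form above: Var[A] = M * (sum w_i^2 theta_i - (sum w_i theta_i)^2)
var_closed = M * (np.sum(w**2 * theta) - np.sum(w * theta) ** 2)

# Monte Carlo estimate over many draws of A = sum_i w_i X_i
X = rng.multinomial(M, theta, size=200_000)
A = X @ w
var_mc = A.var()
```

With this many draws the two values agree closely, up to sampling noise.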
## Appendix E Rewriting of $L_{2}$
$\begin{split}\mathcal{L}_{2}&=\sum\limits_{R=1}^{N}{P(R)\cdot\operatorname*{\mathbb{E}}[\sum_{r=1}^{n}\tilde{P}(r|R)\widehat{\mathcal{M}}(r)-\sum_{r=1}^{n}P(r|R)\widehat{\mathcal{M}}(r)]^{2}}\\\
&=\sum\limits_{R=1}^{N}{P(R)\cdot\operatorname*{\mathbb{E}}[\sum_{r=1}^{n}\frac{X_{r}}{M\cdot
P(R)}\widehat{\mathcal{M}}(r)-\sum_{r=1}^{n}\frac{\operatorname*{\mathbb{E}}[X_{r}]}{M\cdot
P(R)}\widehat{\mathcal{M}}(r)]^{2}}\\\
&=\sum\limits_{R=1}^{N}{\frac{1}{M^{2}\cdot
P(R)}\cdot\operatorname*{\mathbb{E}}[\sum_{r=1}^{n}{X_{r}}\widehat{\mathcal{M}}(r)-\sum_{r=1}^{n}{\operatorname*{\mathbb{E}}[X_{r}]}\widehat{\mathcal{M}}(r)]^{2}}\\\
&=\sum\limits_{R=1}^{N}{\frac{1}{M}\cdot\Big{(}\sum\limits_{r}^{n}{\widehat{\mathcal{M}}^{2}(r)P(r|R)}-\big{(}\sum\limits_{r}^{n}{\widehat{\mathcal{M}}(r)P(r|R)}\big{)}^{2}\Big{)}}\\\
&=\sum\limits_{R=1}^{N}{\frac{1}{M}Var(\widehat{\mathcal{M}}(r)|R)}\end{split}$
## Appendix F Least Square Formulation
This section uses the conclusion from Appendix D to derive the quadratic form of Equation 10:
$\begin{split}\mathcal{L}_{1}&=\sum\limits_{R=1}^{N}{P(R)\Big{(}\sum\limits_{r=1}^{n}{P(r|R)\widehat{\mathcal{M}}(r)-\mathcal{M}(R)}\Big{)}^{2}}\\\
&=||\sqrt{D}A\mathbf{x}-\sqrt{D}\mathbf{b}||^{2}_{F}\\\
\mathcal{L}_{2}&=\sum\limits_{R=1}^{N}{P(R)\cdot\operatorname*{\mathbb{E}}[\sum_{r=1}^{n}\tilde{P}(r|R)\widehat{\mathcal{M}}(r)-\sum_{r=1}^{n}P(r|R)\widehat{\mathcal{M}}(r)]^{2}}\\\
&=\sum\limits_{R=1}^{N}{P(R)\cdot\operatorname*{\mathbb{E}}[\sum_{r=1}^{n}\frac{X_{r}}{M\cdot
P(R)}\widehat{\mathcal{M}}(r)-\sum_{r=1}^{n}\frac{\operatorname*{\mathbb{E}}[X_{r}]}{M\cdot
P(R)}\widehat{\mathcal{M}}(r)]^{2}}\\\
&=\sum\limits_{R=1}^{N}{\frac{1}{M^{2}\cdot
P(R)}\cdot\operatorname*{\mathbb{E}}[\sum_{r=1}^{n}{X_{r}}\widehat{\mathcal{M}}(r)-\sum_{r=1}^{n}{\operatorname*{\mathbb{E}}[X_{r}]}\widehat{\mathcal{M}}(r)]^{2}}\\\
&=\sum\limits_{R=1}^{N}{\frac{1}{M}\cdot\Big{(}\sum\limits_{r}^{n}{\widehat{\mathcal{M}}^{2}(r)P(r|R)}-\big{(}\sum\limits_{r}^{n}{\widehat{\mathcal{M}}(r)P(r|R)}\big{)}^{2}\Big{)}}\\\
&=\frac{1}{M}\mathbf{x}^{T}\Lambda_{1}\mathbf{x}-\frac{1}{M}||A\mathbf{x}||^{2}_{F}=\frac{1}{M}||\sqrt{\Lambda_{1}}\mathbf{x}||^{2}_{F}-\frac{1}{M}||A\mathbf{x}||^{2}_{F}\end{split}$
(19)
## Appendix G EM steps
E-step
$\begin{split}\log\mathcal{L}&=\sum\limits_{u=1}^{M}\log\sum\limits_{k=1}^{N}{P(r_{u},R_{uk};\bm{\pi})}\\\
&=\sum\limits_{u=1}^{M}\log\sum\limits_{k=1}^{N}{\phi(R_{uk})\cdot\frac{P(r_{u},R_{uk};\bm{\pi})}{\phi(R_{uk})}}\\\
&\geq\sum\limits_{u=1}^{M}\sum\limits_{k=1}^{N}{\phi(R_{uk})\cdot\log
P(r_{u},R_{uk};\bm{\pi})}+constant\\\
&\triangleq\sum\limits_{u=1}^{M}Q_{u}(\bm{\pi},\bm{\pi}^{old})=Q(\bm{\pi},\bm{\pi}^{old})\end{split}$
(20)
where
$\begin{split}\phi(R_{uk})&=P(R_{u}=k|r_{u};\bm{\pi}^{old})\\\
&=\frac{\pi^{old}_{k}\cdot
P(r_{u}|R_{u}=k;n_{u})}{\sum\limits_{j=1}^{N}\pi^{old}_{j}\cdot
P(r_{u}|R_{u}=j;n_{u})}\end{split}$ (21)
where $\phi$ is the posterior, $\bm{\pi}^{old}$ is the current (known) probability distribution, and $\bm{\pi}$ is the parameter to be optimized.

M-step: derived via Lagrange-multiplier maximization:
$\begin{split}\pi^{new}_{k}=\frac{1}{M}\sum\limits_{u=1}^{M}\phi(R_{uk})\end{split}$
(22)
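Eqs. 20-22 are the standard EM updates for a mixture over latent global ranks. A sketch, assuming for illustration a common sample size $n$ for all users and a binomial conditional model $P(r|R;n)$ (our demo assumptions; the paper allows a per-user $n_{u}$):

```python
import numpy as np
from scipy.stats import binom

def em_rank_distribution(r_u, N, n, iters=50):
    """Estimate pi = (P(R))_{R=1}^N from sampled ranks r_u via EM
    (Eqs. 20-22), under P(r|R; n) = Binom(r - 1; n - 1, (R - 1)/(N - 1))."""
    R = np.arange(1, N + 1)
    # likelihood table L[u, k] = P(r_u | R = k), shape (num_users, N)
    L = binom.pmf(np.asarray(r_u)[:, None] - 1, n - 1,
                  (R[None, :] - 1) / (N - 1))
    pi = np.full(N, 1.0 / N)               # uniform initialisation
    for _ in range(iters):
        phi = pi * L                       # E-step (Eq. 21), unnormalised
        phi /= phi.sum(axis=1, keepdims=True)
        pi = phi.mean(axis=0)              # M-step (Eq. 22)
    return pi

# toy demo: sampled ranks generated from the same binomial model
rng = np.random.default_rng(2)
N, n = 50, 10
true_R = rng.integers(1, N + 1, size=2000)
r = 1 + rng.binomial(n - 1, (true_R - 1) / (N - 1))
pi = em_rank_distribution(r, N, n)
```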
## Appendix H Adaptive Sampling Explanation
This section gives a detailed explanation of Algorithm 1. We start from an initial sample-set-size parameter $n_{0}$: we sample $n_{0}-1$ items and compute the rank $r_{u}$ for every user. For users with $r_{u}>1$, we record the sample set size $n_{u}=n_{0}$. For users with $r_{u}=1$, we double the sample set size to $n_{1}=2n_{0}$; in other words, we sample another $n_{0}$ items (since we have already sampled $n_{0}-1$). We then re-check the rank $r_{u}$ and repeat the process until $r_{u}\neq 1$ or the sample set size reaches $n_{max}$. How to determine $n_{max}$ is discussed in Section 4.3.
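The doubling procedure above can be sketched for a single user: drawing the extra items without replacement is equivalent to a sequence of hypergeometric draws over the not-yet-sampled pool. A sketch under the assumption $n_{max}\ll N$ (function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def adaptive_sample_rank(R_u, N, n0=100, n_max=3200):
    """Adaptive sampling for one user, following the doubling scheme
    described above: keep doubling the sample set size while the target
    item ranks first, up to n_max. Returns (n_u, r_u)."""
    n = n0
    # initial draw of n0 - 1 items; `hits` counts drawn items that
    # outrank the target (so r_u = 1 + hits)
    hits = int(rng.hypergeometric(R_u - 1, N - R_u, n - 1))
    while hits == 0 and n < n_max:
        # no outranking item sampled yet, so all R_u - 1 of them are
        # still in the unsampled pool of N - n items; drawing n more
        # items without replacement is another hypergeometric draw
        hits = int(rng.hypergeometric(R_u - 1, (N - n) - (R_u - 1), n))
        n *= 2
    return n, 1 + hits
```

A user whose target item is globally top-ranked ($R_{u}=1$) always stops at $n_{max}$ with $r_{u}=1$, which is exactly the capped case handled by the threshold.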
## Appendix I More Experimental Results
### I.1 Item-sampling based estimators
The estimators evaluated in this section are: $\mathbf{BV}$ (Bias-Variance estimator) (Krichene and Rendle 2020); $\mathbf{MLE}$ (Maximum Likelihood Estimation) (Jin et al. 2021b); $\mathbf{MES}$ (Maximum Entropy with Squared distribution distance) (Jin et al. 2021b); $\mathbf{BV\\_MLE}$ and $\mathbf{BV\\_MES}$ (Eq. 15 with $P(R)$ obtained from $MLE$ and $MES$ respectively; essentially, combining the approaches of $BV$ (Krichene and Rendle 2020) and $MLE$/$MES$ (Jin et al. 2021b)); and the new multinomial-distribution-based estimators with different priors, $\mathbf{MN\\_MLE}$ and $\mathbf{MN\\_MES}$ (Eq. 11 with prior $P(R)$ obtained from the $MLE$ and $MES$ estimators).
### I.2 Reproducibility
We release code at https://github.com/dli12/AAAI23-Towards-Reliable-Item-Sampling-for-Recommendation-Evaluation.
### I.3 Dataset Statistics
Table 5 shows the statistics of the three datasets used in this paper.

Table 5: Dataset statistics

| Dataset | Interactions | Users | Items | Sparsity |
|---|---|---|---|---|
| pinterest-20 | 1,463,581 | 55,187 | 9,916 | 99.73% |
| yelp | 696,865 | 25,677 | 25,815 | 99.89% |
| ml-20m | 9,990,682 | 136,677 | 20,720 | 99.65% |
### I.4 Complete Table
Due to space limitations, the complete versions of Tables 1, 2, and 3 are given below in Tables 6, 7, and 8, respectively.
Table 6: The average relative errors between the estimated $Recall@K$ ($K$ from $1$ to $50$) and the true values (complete version of Table 1). Unit is $\%$. The smallest two results in each row indicate the most accurate estimators. Sample set size $n=100$.

| Dataset | Model | MES | MLE | BV | BV_MES | BV_MLE | MN_MES | MN_MLE |
|---|---|---|---|---|---|---|---|---|
| pinterest-20 | EASE | 5.86 ± 2.26 | 5.54 ± 1.85 | 8.11 ± 2.00 | 5.05 ± 1.46 | 5.14 ± 1.46 | 5.00 ± 1.39 | 5.10 ± 1.34 |
| | MultiVAE | 4.17 ± 2.91 | 3.34 ± 2.07 | 2.75 ± 1.61 | 2.89 ± 1.74 | 2.88 ± 1.74 | 2.75 ± 1.66 | 2.75 ± 1.68 |
| | NeuMF | 5.17 ± 2.74 | 4.28 ± 1.95 | 4.23 ± 1.79 | 3.83 ± 1.59 | 3.84 ± 1.72 | 3.60 ± 1.50 | 3.76 ± 1.44 |
| | itemKNN | 5.90 ± 2.20 | 5.80 ± 1.60 | 8.93 ± 1.70 | 5.11 ± 1.22 | 5.31 ± 1.25 | 5.09 ± 1.15 | 5.26 ± 1.14 |
| | ALS | 4.19 ± 2.37 | 3.44 ± 1.68 | 3.17 ± 1.34 | 3.05 ± 1.39 | 3.07 ± 1.42 | 2.86 ± 1.27 | 2.90 ± 1.28 |
| yelp | EASE | 8.08 ± 4.94 | 7.89 ± 4.70 | 18.60 ± 2.78 | 6.10 ± 3.74 | 6.56 ± 3.90 | 4.84 ± 2.17 | 5.61 ± 2.30 |
| | MultiVAE | 9.33 ± 6.61 | 7.67 ± 4.94 | 9.70 ± 3.22 | 6.84 ± 4.10 | 6.80 ± 4.04 | 4.30 ± 1.27 | 4.35 ± 1.31 |
| | NeuMF | 15.09 ± 6.24 | 15.47 ± 5.55 | 22.40 ± 3.17 | 13.14 ± 4.55 | 13.92 ± 4.70 | 13.46 ± 2.43 | 14.50 ± 2.45 |
| | itemKNN | 9.25 ± 4.87 | 9.62 ± 4.88 | 23.24 ± 2.16 | 7.69 ± 4.09 | 8.15 ± 4.17 | 7.74 ± 2.08 | 8.75 ± 2.08 |
| | ALS | 14.31 ± 3.96 | 13.68 ± 3.51 | 15.14 ± 1.86 | 13.43 ± 3.16 | 13.26 ± 3.08 | 11.68 ± 0.88 | 11.57 ± 0.83 |
| ml-20m | EASE | 10.45 ± 1.03 | 11.52 ± 1.03 | 36.59 ± 0.31 | 8.99 ± 0.74 | 9.86 ± 0.77 | 9.07 ± 0.61 | 10.09 ± 0.69 |
| | MultiVAE | 9.93 ± 0.38 | 9.48 ± 0.22 | 22.24 ± 0.37 | 9.85 ± 0.36 | 9.50 ± 0.22 | 9.82 ± 0.28 | 9.53 ± 0.14 |
| | NeuMF | 4.35 ± 1.50 | 6.05 ± 1.35 | 28.27 ± 0.42 | 3.67 ± 1.14 | 4.81 ± 1.14 | 3.64 ± 1.05 | 4.79 ± 1.08 |
| | itemKNN | 15.31 ± 1.18 | 17.19 ± 1.15 | 36.63 ± 0.42 | 14.02 ± 0.75 | 15.24 ± 0.83 | 14.16 ± 0.68 | 15.41 ± 0.77 |
| | ALS | 36.17 ± 0.83 | 35.21 ± 0.64 | 36.39 ± 0.21 | 36.50 ± 0.74 | 35.75 ± 0.62 | 36.32 ± 0.56 | 35.60 ± 0.48 |
Table 7: The average relative errors between the estimated $NDCG@K$ ($K$ from $1$ to $50$) and the true values (complete version of Table 2). Unit is $\%$. The smallest relative error in each row indicates the most accurate result. The fixed-sample estimators use sample set size $n=500$ for $pinterest-20$ and $yelp$, and $n=1000$ for $ml-20m$; the last two columns report the average sample size and the error of adaptive MLE.

| Dataset | Model | BV_MES | BV_MLE | MN_MES | MN_MLE | Avg. sample size | Adaptive MLE |
|---|---|---|---|---|---|---|---|
| pinterest-20 | EASE | 3.87 ± 2.13 | 4.33 ± 2.23 | 4.17 ± 2.45 | 4.33 ± 2.50 | 307.74 ± 1.41 | 1.46 ± 0.63 |
| | MultiVAE | 2.66 ± 1.75 | 2.58 ± 1.75 | 3.26 ± 2.14 | 3.07 ± 2.09 | 286.46 ± 1.48 | 1.67 ± 0.70 |
| | NeuMF | 2.82 ± 1.98 | 2.79 ± 1.96 | 3.24 ± 2.30 | 3.22 ± 2.27 | 259.77 ± 1.28 | 1.73 ± 0.83 |
| | itemKNN | 3.99 ± 2.22 | 4.18 ± 2.24 | 4.23 ± 2.47 | 4.06 ± 2.44 | 309.56 ± 1.31 | 1.42 ± 0.67 |
| | ALS | 3.35 ± 1.89 | 3.24 ± 1.87 | 3.90 ± 2.24 | 3.80 ± 2.23 | 270.75 ± 1.22 | 1.84 ± 1.07 |
| yelp | EASE | 5.63 ± 3.73 | 5.48 ± 3.66 | 4.03 ± 2.53 | 4.04 ± 2.53 | 340.79 ± 2.03 | 3.55 ± 2.00 |
| | MultiVAE | 7.68 ± 5.86 | 7.48 ± 5.74 | 5.77 ± 3.87 | 5.64 ± 3.82 | 288.70 ± 2.24 | 5.09 ± 2.60 |
| | NeuMF | 9.34 ± 5.50 | 9.55 ± 5.50 | 8.43 ± 4.07 | 8.91 ± 4.13 | 290.62 ± 2.11 | 4.43 ± 2.55 |
| | itemKNN | 5.01 ± 2.99 | 4.99 ± 2.96 | 3.65 ± 2.28 | 3.71 ± 2.29 | 369.16 ± 2.51 | 3.67 ± 2.73 |
| | ALS | 13.39 ± 7.34 | 13.94 ± 7.61 | 12.57 ± 5.46 | 13.67 ± 5.80 | 297.07 ± 2.29 | 5.48 ± 3.34 |
| ml-20m | EASE | 3.45 ± 0.83 | 4.16 ± 0.81 | 3.74 ± 1.51 | 3.85 ± 1.53 | 899.89 ± 1.90 | 2.01 ± 0.56 |
| | MultiVAE | 3.20 ± 1.44 | 4.37 ± 1.58 | 4.76 ± 2.68 | 3.87 ± 2.58 | 771.26 ± 1.84 | 1.21 ± 0.60 |
| | NeuMF | 1.02 ± 0.65 | 1.14 ± 0.74 | 1.94 ± 1.35 | 1.90 ± 1.23 | 758.45 ± 1.61 | 0.92 ± 0.51 |
| | itemKNN | 4.44 ± 1.07 | 5.36 ± 1.05 | 3.52 ± 1.67 | 4.13 ± 1.68 | 725.72 ± 1.49 | 2.02 ± 0.63 |
| | ALS | 12.28 ± 2.07 | 17.12 ± 2.38 | 13.42 ± 4.44 | 13.48 ± 4.90 | 705.76 ± 1.56 | 5.27 ± 1.39 |
Table 8: Accuracy of predicting the winner models for different datasets.
Values in the table are the number of correct predictions over 100 repeats;
the larger the number, the better the estimator.
Dataset | Top-K | Metrics | Fix Sample | Adaptive Sample
---|---|---|---|---
BV_MES | BV_MLE | MN_MES | MN_MLE | adaptive MLE
sample set size n = 500 | sample set size $260\sim 310$
pinterest-20 | 10 | RECALL | 69 | 73 | 67 | 69 | 78
NDCG | 58 | 59 | 58 | 60 | 84
AP | 54 | 57 | 52 | 52 | 68
20 | RECALL | 69 | 73 | 70 | 74 | 81
NDCG | 69 | 73 | 68 | 73 | 79
AP | 57 | 60 | 54 | 56 | 69
| sample set size n = 500 | sample set size $280\sim 370$
yelp | 10 | RECALL | 98 | 97 | 100 | 100 | 100
NDCG | 96 | 96 | 100 | 100 | 100
AP | 95 | 95 | 100 | 100 | 94
20 | RECALL | 100 | 100 | 100 | 100 | 100
NDCG | 100 | 100 | 100 | 100 | 100
AP | 97 | 96 | 100 | 100 | 98
| sample set size n = 1000 | sample set size $700\sim 900$
ml-20m | 10 | RECALL | 100 | 100 | 100 | 100 | 100
NDCG | 100 | 100 | 100 | 100 | 100
AP | 100 | 100 | 100 | 100 | 100
20 | RECALL | 100 | 100 | 100 | 100 | 100
NDCG | 100 | 100 | 100 | 100 | 100
AP | 100 | 100 | 100 | 100 | 100
### I.5 More Results
Due to space limitations, we present complementary experimental results for
Tables 1 and 2.
Table 9 shows the average relative errors between the estimated $AP@K$
($K$ from $1$ to $50$) and the true ones. The proposed estimators, especially
$MN\\_MES$, perform best in general, which is consistent with the observations
in Section 5.
Table 10 shows the average relative errors between the estimated $NDCG@K$
($K$ from $1$ to $50$) and the true ones. In Table 1 the sample set size is
$n=100$, while here it is $500$. The results consistently confirm the
superiority of the estimators proposed in this paper. Specifically, from
Table 10 we can see that the $MN\\_MES$ estimator performs best or second best
(highlighted) in most cases.
Table 11 presents the average relative errors in terms of the $Recall$ metric.
There are four advanced estimators with a fixed sample set size ($n=500$ or
$1000$) and the proposed adaptive estimator. We follow the adaptive sampling
strategy of Section 4 and report the average sample set size for each dataset
and model. For example, the average sample set size is $307$ for the EASE
model on the pinterest-20 dataset, compared to the $n=500$ sample set size of
the fixed-size estimators. Nevertheless, the adaptive estimator achieves much
better performance, e.g. a $1.69\%$ relative error compared to $2.78\%$ for
the MN_MES estimator. All these results consistently support the superiority
of the adaptive sampling strategy and the adaptive estimator.
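The relative-error computation used throughout these tables can be sketched in a few lines; the function name and interface below are illustrative, not taken from the paper's code:

```python
def avg_relative_error(estimated, true):
    """Average relative error (in %) between an estimated metric curve,
    e.g. Recall@K or NDCG@K for K = 1..50, and the true curve.  The name
    and interface are illustrative, not from the paper's code."""
    assert len(estimated) == len(true) and all(t > 0 for t in true)
    return 100.0 * sum(abs(e - t) / t for e, t in zip(estimated, true)) / len(true)
```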
Table 9: The average relative errors between estimated $AP@K$ ($K$ from $1$ to
$50$) and the true ones. Unit is $\%$. In each row, the smallest two results
are highlighted in bold, indicating the most accurate results. Sample set size
$n=100$.
Dataset | Models | sample set size n = 100
---|---|---
baseline | this paper
MES | MLE | BV | BV_MES | BV_MLE | MN_MES | MN_MLE
pinterest-20 | EASE | 17.45$\scriptstyle\pm$6.14 | 17.85$\scriptstyle\pm$4.50 | 23.50$\scriptstyle\pm$2.25 | 16.88$\scriptstyle\pm$3.91 | 17.19$\scriptstyle\pm$3.87 | 17.06$\scriptstyle\pm$3.54 | 17.30$\scriptstyle\pm$3.52
MultiVAE | 6.45$\scriptstyle\pm$5.05 | 4.68$\scriptstyle\pm$3.40 | 3.58$\scriptstyle\pm$2.29 | 4.04$\scriptstyle\pm$2.94 | 4.03$\scriptstyle\pm$2.93 | 3.78$\scriptstyle\pm$2.66 | 3.76$\scriptstyle\pm$2.66
NeuMF | 8.62$\scriptstyle\pm$6.12 | 7.28$\scriptstyle\pm$4.77 | 9.43$\scriptstyle\pm$3.33 | 6.39$\scriptstyle\pm$4.19 | 7.04$\scriptstyle\pm$4.39 | 6.19$\scriptstyle\pm$3.89 | 6.58$\scriptstyle\pm$3.99
itemKNN | 17.46$\scriptstyle\pm$6.07 | 18.17$\scriptstyle\pm$4.12 | 23.90$\scriptstyle\pm$1.97 | 16.81$\scriptstyle\pm$3.53 | 17.41$\scriptstyle\pm$3.49 | 17.02$\scriptstyle\pm$3.17 | 17.46$\scriptstyle\pm$3.15
ALS | 7.78$\scriptstyle\pm$5.56 | 7.03$\scriptstyle\pm$4.30 | 3.39$\scriptstyle\pm$1.79 | 6.31$\scriptstyle\pm$3.70 | 6.48$\scriptstyle\pm$3.76 | 5.80$\scriptstyle\pm$3.30 | 6.07$\scriptstyle\pm$3.39
yelp | EASE | 9.87$\scriptstyle\pm$6.09 | 9.72$\scriptstyle\pm$5.78 | 22.16$\scriptstyle\pm$2.90 | 7.44$\scriptstyle\pm$4.68 | 8.04$\scriptstyle\pm$4.87 | 6.14$\scriptstyle\pm$2.64 | 7.22$\scriptstyle\pm$2.72
MultiVAE | 12.07$\scriptstyle\pm$10.12 | 9.66$\scriptstyle\pm$7.90 | 6.00$\scriptstyle\pm$3.04 | 9.53$\scriptstyle\pm$7.03 | 9.11$\scriptstyle\pm$6.82 | 6.85$\scriptstyle\pm$3.24 | 6.20$\scriptstyle\pm$3.08
NeuMF | 33.32$\scriptstyle\pm$7.85 | 34.41$\scriptstyle\pm$6.23 | 41.36$\scriptstyle\pm$2.64 | 32.00$\scriptstyle\pm$5.34 | 33.04$\scriptstyle\pm$5.23 | 33.05$\scriptstyle\pm$2.14 | 34.15$\scriptstyle\pm$2.12
itemKNN | 14.97$\scriptstyle\pm$6.82 | 15.94$\scriptstyle\pm$6.24 | 31.18$\scriptstyle\pm$2.11 | 13.43$\scriptstyle\pm$5.48 | 14.17$\scriptstyle\pm$5.43 | 14.21$\scriptstyle\pm$2.20 | 15.45$\scriptstyle\pm$2.17
ALS | 31.31$\scriptstyle\pm$13.11 | 31.06$\scriptstyle\pm$11.63 | 13.59$\scriptstyle\pm$4.14 | 32.91$\scriptstyle\pm$10.21 | 32.38$\scriptstyle\pm$10.10 | 32.00$\scriptstyle\pm$4.18 | 31.29$\scriptstyle\pm$4.17
ml-20m | EASE | 31.30$\scriptstyle\pm$1.29 | 32.83$\scriptstyle\pm$1.23 | 57.14$\scriptstyle\pm$0.23 | 29.29$\scriptstyle\pm$1.10 | 30.66$\scriptstyle\pm$1.05 | 29.48$\scriptstyle\pm$0.91 | 31.04$\scriptstyle\pm$0.90
MultiVAE | 29.01$\scriptstyle\pm$2.90 | 25.61$\scriptstyle\pm$2.54 | 12.75$\scriptstyle\pm$0.42 | 30.07$\scriptstyle\pm$2.25 | 27.17$\scriptstyle\pm$2.18 | 30.33$\scriptstyle\pm$2.00 | 26.74$\scriptstyle\pm$1.97
NeuMF | 9.10$\scriptstyle\pm$2.05 | 11.64$\scriptstyle\pm$1.74 | 37.40$\scriptstyle\pm$0.40 | 8.12$\scriptstyle\pm$1.55 | 9.86$\scriptstyle\pm$1.50 | 8.29$\scriptstyle\pm$1.40 | 9.99$\scriptstyle\pm$1.37
itemKNN | 43.81$\scriptstyle\pm$1.44 | 46.09$\scriptstyle\pm$1.17 | 62.55$\scriptstyle\pm$0.28 | 42.14$\scriptstyle\pm$1.04 | 43.82$\scriptstyle\pm$1.00 | 42.36$\scriptstyle\pm$0.94 | 44.12$\scriptstyle\pm$0.92
ALS | 88.83$\scriptstyle\pm$4.86 | 84.37$\scriptstyle\pm$4.40 | 28.12$\scriptstyle\pm$0.97 | 91.43$\scriptstyle\pm$3.89 | 87.73$\scriptstyle\pm$3.77 | 90.49$\scriptstyle\pm$3.43 | 87.04$\scriptstyle\pm$3.38
Table 10: The average relative errors between estimated $NDCG@K$ ($K$ from $1$
to $50$) and the true ones. Unit is $\%$. In each row, the smallest two
results are highlighted in bold, indicating the most accurate results. Sample
set size $n=500$.
Dataset | Models | sample set size n = 500
---|---|---
baseline | this paper
MES | MLE | BV | BV_MES | BV_MLE | MN_MES | MN_MLE
pinterest-20 | EASE | 5.11$\scriptstyle\pm$3.09 | 4.52$\scriptstyle\pm$2.42 | 5.53$\scriptstyle\pm$2.12 | 3.87$\scriptstyle\pm$2.13 | 4.33$\scriptstyle\pm$2.23 | 4.17$\scriptstyle\pm$2.45 | 4.33$\scriptstyle\pm$2.50
MultiVAE | 4.01$\scriptstyle\pm$2.60 | 2.89$\scriptstyle\pm$2.05 | 2.29$\scriptstyle\pm$1.58 | 2.66$\scriptstyle\pm$1.75 | 2.58$\scriptstyle\pm$1.75 | 3.26$\scriptstyle\pm$2.14 | 3.07$\scriptstyle\pm$2.09
NeuMF | 4.27$\scriptstyle\pm$2.85 | 3.20$\scriptstyle\pm$2.22 | 2.75$\scriptstyle\pm$1.83 | 2.82$\scriptstyle\pm$1.98 | 2.79$\scriptstyle\pm$1.96 | 3.24$\scriptstyle\pm$2.30 | 3.22$\scriptstyle\pm$2.27
itemKNN | 5.13$\scriptstyle\pm$2.99 | 4.36$\scriptstyle\pm$2.46 | 5.23$\scriptstyle\pm$2.15 | 3.99$\scriptstyle\pm$2.22 | 4.18$\scriptstyle\pm$2.24 | 4.23$\scriptstyle\pm$2.47 | 4.06$\scriptstyle\pm$2.44
ALS | 4.53$\scriptstyle\pm$2.86 | 3.65$\scriptstyle\pm$2.13 | 3.16$\scriptstyle\pm$1.77 | 3.35$\scriptstyle\pm$1.89 | 3.24$\scriptstyle\pm$1.87 | 3.90$\scriptstyle\pm$2.24 | 3.80$\scriptstyle\pm$2.23
yelp | EASE | 8.59$\scriptstyle\pm$5.62 | 6.32$\scriptstyle\pm$4.28 | 4.85$\scriptstyle\pm$3.16 | 5.63$\scriptstyle\pm$3.73 | 5.48$\scriptstyle\pm$3.66 | 4.03$\scriptstyle\pm$2.53 | 4.04$\scriptstyle\pm$2.53
MultiVAE | 10.83$\scriptstyle\pm$8.83 | 8.57$\scriptstyle\pm$6.91 | 6.48$\scriptstyle\pm$4.79 | 7.68$\scriptstyle\pm$5.86 | 7.48$\scriptstyle\pm$5.74 | 5.77$\scriptstyle\pm$3.87 | 5.64$\scriptstyle\pm$3.82
NeuMF | 11.52$\scriptstyle\pm$7.27 | 10.82$\scriptstyle\pm$6.16 | 12.82$\scriptstyle\pm$5.17 | 9.34$\scriptstyle\pm$5.50 | 9.55$\scriptstyle\pm$5.50 | 8.43$\scriptstyle\pm$4.07 | 8.91$\scriptstyle\pm$4.13
itemKNN | 7.72$\scriptstyle\pm$5.18 | 5.87$\scriptstyle\pm$3.38 | 5.31$\scriptstyle\pm$3.26 | 5.01$\scriptstyle\pm$2.99 | 4.99$\scriptstyle\pm$2.96 | 3.65$\scriptstyle\pm$2.28 | 3.71$\scriptstyle\pm$2.29
ALS | 16.30$\scriptstyle\pm$10.72 | 15.65$\scriptstyle\pm$9.30 | 15.26$\scriptstyle\pm$7.00 | 13.39$\scriptstyle\pm$7.34 | 13.94$\scriptstyle\pm$7.61 | 12.57$\scriptstyle\pm$5.46 | 13.67$\scriptstyle\pm$5.80
ml-20m | EASE | 5.93$\scriptstyle\pm$2.02 | 7.44$\scriptstyle\pm$1.08 | 14.35$\scriptstyle\pm$0.56 | 5.78$\scriptstyle\pm$0.95 | 6.78$\scriptstyle\pm$0.93 | 5.59$\scriptstyle\pm$1.49 | 6.14$\scriptstyle\pm$1.47
MultiVAE | 7.53$\scriptstyle\pm$2.95 | 10.27$\scriptstyle\pm$1.68 | 10.41$\scriptstyle\pm$1.08 | 7.22$\scriptstyle\pm$1.42 | 9.35$\scriptstyle\pm$1.46 | 7.01$\scriptstyle\pm$2.18 | 8.08$\scriptstyle\pm$2.26
NeuMF | 2.82$\scriptstyle\pm$1.92 | 1.71$\scriptstyle\pm$1.00 | 4.41$\scriptstyle\pm$0.91 | 1.29$\scriptstyle\pm$0.71 | 1.46$\scriptstyle\pm$0.83 | 2.10$\scriptstyle\pm$1.31 | 2.11$\scriptstyle\pm$1.32
itemKNN | 8.45$\scriptstyle\pm$2.12 | 10.50$\scriptstyle\pm$1.27 | 18.98$\scriptstyle\pm$0.66 | 8.24$\scriptstyle\pm$1.12 | 9.46$\scriptstyle\pm$1.10 | 7.40$\scriptstyle\pm$1.60 | 8.96$\scriptstyle\pm$1.56
ALS | 28.45$\scriptstyle\pm$5.33 | 37.22$\scriptstyle\pm$2.88 | 40.95$\scriptstyle\pm$1.72 | 27.89$\scriptstyle\pm$2.30 | 33.53$\scriptstyle\pm$2.47 | 26.31$\scriptstyle\pm$3.81 | 30.61$\scriptstyle\pm$4.08
Table 11: The average relative errors between estimated $Recall@K$ ($K$ from
$1$ to $50$) and the true ones. Unit is $\%$. In each row, the smallest
relative error is highlighted, indicating the most accurate result.
Dataset | Models | Fix Sample | Adaptive Sample
---|---|---|---
BV_MES | BV_MLE | MN_MES | MN_MLE | average size | adaptive MLE
| sample set size n = 500 |
pinterest-20 | EASE | 2.54$\scriptstyle\pm$0.85 | 2.68$\scriptstyle\pm$0.87 | 2.78$\scriptstyle\pm$1.05 | 2.83$\scriptstyle\pm$1.06 | 307.74$\scriptstyle\pm$1.41 | 1.69$\scriptstyle\pm$0.60
MultiVAE | 2.17$\scriptstyle\pm$1.08 | 2.13$\scriptstyle\pm$1.09 | 2.60$\scriptstyle\pm$1.30 | 2.55$\scriptstyle\pm$1.35 | 286.46$\scriptstyle\pm$1.48 | 1.95$\scriptstyle\pm$0.65
NeuMF | 2.45$\scriptstyle\pm$1.15 | 2.44$\scriptstyle\pm$1.15 | 2.76$\scriptstyle\pm$1.37 | 2.80$\scriptstyle\pm$1.38 | 259.77$\scriptstyle\pm$1.28 | 2.00$\scriptstyle\pm$0.81
itemKNN | 2.49$\scriptstyle\pm$0.97 | 2.59$\scriptstyle\pm$0.94 | 2.79$\scriptstyle\pm$1.12 | 2.79$\scriptstyle\pm$1.20 | 309.56$\scriptstyle\pm$1.31 | 1.63$\scriptstyle\pm$0.51
ALS | 2.65$\scriptstyle\pm$1.04 | 2.63$\scriptstyle\pm$1.06 | 3.02$\scriptstyle\pm$1.32 | 2.98$\scriptstyle\pm$1.33 | 270.75$\scriptstyle\pm$1.22 | 2.00$\scriptstyle\pm$0.73
| sample set size n = 500 |
yelp | EASE | 4.68$\scriptstyle\pm$2.43 | 4.56$\scriptstyle\pm$2.35 | 3.47$\scriptstyle\pm$1.79 | 3.49$\scriptstyle\pm$1.78 | 340.79$\scriptstyle\pm$2.03 | 3.48$\scriptstyle\pm$1.40
MultiVAE | 6.14$\scriptstyle\pm$3.48 | 6.07$\scriptstyle\pm$3.46 | 4.68$\scriptstyle\pm$2.27 | 4.67$\scriptstyle\pm$2.28 | 288.70$\scriptstyle\pm$2.24 | 5.08$\scriptstyle\pm$2.14
NeuMF | 6.59$\scriptstyle\pm$2.38 | 6.73$\scriptstyle\pm$2.35 | 5.48$\scriptstyle\pm$1.43 | 5.68$\scriptstyle\pm$1.42 | 290.62$\scriptstyle\pm$2.11 | 4.01$\scriptstyle\pm$1.51
itemKNN | 3.94$\scriptstyle\pm$1.94 | 3.95$\scriptstyle\pm$1.92 | 2.92$\scriptstyle\pm$1.60 | 2.96$\scriptstyle\pm$1.57 | 369.16$\scriptstyle\pm$2.51 | 3.25$\scriptstyle\pm$1.59
ALS | 10.00$\scriptstyle\pm$3.47 | 10.31$\scriptstyle\pm$3.65 | 9.29$\scriptstyle\pm$2.03 | 9.80$\scriptstyle\pm$2.23 | 297.07$\scriptstyle\pm$2.29 | 5.25$\scriptstyle\pm$2.38
| sample set size n = 1000 |
ml-20m | EASE | 1.39$\scriptstyle\pm$0.21 | 1.69$\scriptstyle\pm$0.28 | 1.81$\scriptstyle\pm$0.46 | 1.73$\scriptstyle\pm$0.46 | 899.89$\scriptstyle\pm$1.90 | 1.07$\scriptstyle\pm$0.24
MultiVAE | 2.23$\scriptstyle\pm$0.58 | 2.91$\scriptstyle\pm$0.72 | 3.55$\scriptstyle\pm$1.23 | 2.98$\scriptstyle\pm$1.50 | 771.26$\scriptstyle\pm$1.84 | 1.10$\scriptstyle\pm$0.39
NeuMF | 0.82$\scriptstyle\pm$0.30 | 0.85$\scriptstyle\pm$0.28 | 1.51$\scriptstyle\pm$0.66 | 1.69$\scriptstyle\pm$0.70 | 758.45$\scriptstyle\pm$1.61 | 0.78$\scriptstyle\pm$0.27
itemKNN | 1.84$\scriptstyle\pm$0.24 | 2.13$\scriptstyle\pm$0.27 | 1.97$\scriptstyle\pm$0.42 | 2.17$\scriptstyle\pm$0.49 | 725.72$\scriptstyle\pm$1.49 | 1.17$\scriptstyle\pm$0.28
ALS | 9.41$\scriptstyle\pm$0.97 | 12.83$\scriptstyle\pm$1.27 | 10.63$\scriptstyle\pm$2.53 | 10.57$\scriptstyle\pm$3.18 | 705.76$\scriptstyle\pm$1.56 | 4.29$\scriptstyle\pm$1.05
# Sketch-and-solve approaches to $k$-means clustering
by semidefinite programming
Charles Clum (Department of Mathematics, The Ohio State University, Columbus,
Ohio, USA), Dustin G. Mixon (Department of Mathematics and Translational Data
Analytics Institute, The Ohio State University, Columbus, Ohio, USA), Soledad
Villar (Department of Applied Mathematics & Statistics and Mathematical
Institute for Data Science, Johns Hopkins University, Baltimore, Maryland,
USA), and Kaiying Xie (Department of Electrical and Computer Engineering, The
Ohio State University, Columbus, Ohio, USA)
###### Abstract
We introduce a sketch-and-solve approach to speed up the Peng–Wei semidefinite
relaxation of $k$-means clustering. When the data is appropriately separated,
we identify the $k$-means optimal clustering. Otherwise, our approach provides
a high-confidence lower bound on the optimal $k$-means value. This lower bound
is _data-driven_; it makes no assumptions on the data or how it is generated.
We provide code and an extensive set of numerical experiments in which we use
this approach to certify approximate optimality of clustering solutions
obtained by $k$-means++.
## 1 Introduction
One of the most fundamental data processing tasks is clustering. Here, one is
given a collection of objects, a notion of similarity between those objects,
and a clustering objective that scores any given partition according to how
well it clusters similar objects together. The goal is then to partition the
objects in a way that optimizes this clustering objective. For example, to
partition the vertices of a simple graph into two clusters, one might take the
clustering objective to be edge cut in the graph complement. This clustering
problem is equivalent to MAX-CUT, which is known to be NP-hard [21]. To
(approximately) solve MAX-CUT, one may pass to the Goemans–Williamson
semidefinite relaxation [15] and randomly round to a nearby partition. For
certain random graph models, it is known that this semidefinite relaxation is
tight with high probability, meaning the clustering problem is exactly solved
before the rounding step [1, 7].
The above discussion suggests a workflow to solve clustering problems: relax
to a semidefinite program (SDP) and round the solution to a partition if
necessary. Since SDPs can be solved to machine precision in polynomial time
[34], this has been a worthwhile pursuit for several clustering settings [1,
4, 31, 19, 32, 2]. However, the polynomial runtime of semidefinite programming
is notoriously slow in practice, making it infeasible to cluster thousands of
objects (say). As an alternative, one may pass to a random subset of the
objects, cluster the subset by semidefinite programming, and then infer a
partition of the full data set based on proximity to the small clusters. This
sketch-and-solve approach was studied in [32, 2] in the context of clustering
vertices of a graph into two clusters. By passing to a small subset of
objects, the SDP step is no longer burdensome, and one may prove performance
guarantees in terms of planted structure in the data.
In this paper, we develop analogous sketch-and-solve approaches to cluster
data in Euclidean space. Here, the standard clustering objective is the
$k$-means objective. Much like MAX-CUT, the $k$-means problem is NP-hard [3,
5], but one may relax to the Peng–Wei semidefinite relaxation [35] and obtain
performance guarantees [4, 19, 31, 27]. By sketching to a random subset of
data points, one may solve this relaxation quickly and then cluster the
remaining data points according to which cluster mean is closest. Taking
inspiration from [32], this approach was studied in [46] in the setting of
Gaussian mixture models, where exact recovery is possible provided the planted
clusters are sufficiently separated and the size of the sketch is
appropriately large. In Section 3, we consider data that are not necessarily
drawn from a random model. As we will see, the sketch-and-solve approach
exactly recovers planted clusters provided they are sufficiently separated and
the size of the sketch is appropriately large relative to the shape of the
clusters.
In practice, it is popular to solve the $k$-means problem using Lloyd’s method
[29] with the random initialization afforded by $k$-means++ [41]. This
algorithm produces a partition that is within an $O(\log k)$ factor of the
optimal partition in expectation. In principle, the semidefinite relaxation
could deliver an a posteriori approximation certificate for this partition,
potentially establishing that the partition is even closer to optimal than
guaranteed by the $k$-means++ theory. In Section 4, we describe sketch-and-
solve approaches to this process that certify constant-factor approximations
for data drawn from Gaussian mixture models whenever the dimension of the
space is much larger than the number of clusters (even when the clusters are
not separated!). In addition, as established in Section 5, these algorithms
give improved bounds for most of the real-world data sets considered in the
original $k$-means++ paper [41].
### 1.1 Summary of contributions
What follows is a brief description of this paper’s contributions:
* •
A sketch-and-solve algorithm for $k$-means clustering that consists of
subsampling the input dataset, solving an SDP on the samples, and
extrapolating the solution on the samples to the entire dataset. We provide
theoretical guarantees when the data points are drawn from separated clusters.
We phrase the conditions of optimality in terms of _proximity conditions_, a
classical concept within the computer science literature [24]. These results
make use of work from [27] that connects the clustering SDP with proximity
conditions. (See Section 3.)
* •
An algorithm that takes a dataset as input and outputs a lower bound on the
$k$-means optimal value. This algorithm leverages the sketch-and-solve scheme
and can be used to certify approximate optimality of clusters obtained with
fast, greedy methods such as $k$-means++. We provide bounds on the tightness
of this lower bound when data is sampled from spherical Gaussians. (See
Section 4.)
* •
Open source code and an extensive set of numerical experiments showing how
tight the lower bounds are on several real-world datasets. Our code is
available at https://github.com/Kkylie/Sketch-and-solve_kmeans.git.
### 1.2 Related work
The classical semidefinite programming relaxation for $k$-means clustering was
proposed by Peng and Wei in [35]. A few years later, [4] proved that the
Peng–Wei relaxation is tight (i.e., it recovers the solution to the original
NP-hard problem) if the clusters are sampled from the so-called stochastic
ball model [33] with sufficiently separated balls. The proof is based on the
construction of a dual certificate. These results were significantly improved
by [19] and [27], and applied to spectral methods in [28]. In particular, [27]
shows a connection between the tightness of the SDP and the proximity
conditions from theoretical computer science [24]. The actual threshold at
which the SDP becomes tight is currently unknown; an open conjecture was posed
in [27].
The Peng–Wei SDP relaxation has also been studied in the context of clustering
mixtures of Gaussians, a classical problem in theoretical computer science.
Introduced in [9], this problem has been approached with many methods,
including spectral-like methods [20], methods of moments [18, 44], and integer
programming [10], and it has been studied from an information-theoretic point
of view [12] under many different settings and assumptions. The performance of
the SDP relaxation for clustering Gaussian mixtures was first studied in [31],
using proof techniques from [17]. It was later shown that the SDP error decays
exponentially in the separation of the Gaussians [13, 14]. Other conic
relaxations of $k$-means clustering have been recently studied, for instance
[37, 36]. A (quite loose) linear programming relaxation of $k$-means was
proposed in [4], with mostly negative results. A significantly better LP
relaxation was recently introduced and analyzed in [11].
Another related line of work concerns efficiently certifying optimality of
solutions to data problems via convex relaxations and dual certificates. In
[6], Bandeira proposes leveraging a dual certificate to efficiently certify
optimality of solutions obtained with other (more efficient) methods. The goal
is to combine the theoretical guarantee from (slow) convex relaxations with
solutions provided by (fast) algorithms that may not have theoretical
guarantees. This idea has been used to provide a posteriori optimality
certificates in data science problems such as point cloud registration [45],
$k$-means clustering [19], and synchronization [38]. However, in order for
this method to succeed, the relaxation must be tight.
Sketch-and-solve methods provide a looser guarantee (not optimality, but
approximate optimality) that can work in broader contexts, including ones
where no convex relaxation is involved (for instance, numerical linear algebra
algorithms [43]). Here, we consider a setting where a data problem is relaxed
via convex relaxation. First, the original dataset is subsampled to a sketch,
then the convex problem is solved in the sketch, and finally a solution is
inferred for the entire dataset. The approximation guarantees from the convex
relaxation combined with regularity assumptions on the data (and how well the
sample can represent it) can be used to derive approximation guarantees for
the general approach. These ideas were used in [32, 2] in the context of graph
clustering, and in [46] in the context of clustering mixtures of Gaussians.
Here, we provide approximation guarantees for a sketch-and-solve approach for
general $k$-means clustering, but we also show that the approach provides an
efficient algorithm that computes a high-probability lower bound on the
$k$-means objective for any dataset, with no assumptions on the data or how
it is generated. A similar observation was made by two of the authors in a
preprint [30].
### 1.3 Roadmap
The following section contains some preliminaries and introduces notation for
the remainder of the paper. Next, Section 3 introduces a sketch-and-solve
algorithm that determines the optimal clustering provided the clusters are
appropriately separated. Section 4 then introduces sketch-and-solve algorithms
to compute lower bounds on the optimal $k$-means value of a given dataset, and
we prove that these lower bounds are nearly sharp for data drawn from Gaussian
mixtures. In Section 5, we illustrate the quality of these bounds on real-
world datasets. We discuss opportunities for future work in Section 6, and our
main results are proved in Sections 7 and 8.
## 2 Preliminaries and notation
We are interested in clustering $n$ points in $\mathbb{R}^{d}$ into $k$
clusters. Denoting the index set $[n]:=\\{1,\ldots,n\\}$, let $\Pi(n,k)$ be
the set of partitions of $[n]$ into $k$ nonempty sets, i.e., if
$\Gamma\in\Pi(n,k)$, then $|\Gamma|=k$, $\bigsqcup_{S\in\Gamma}S=[n]$, and
$|S|\geq 1$ for each $S\in\Gamma$. Given a tuple $X:=\\{x_{i}\\}_{i\in[n]}$ of
points in $\mathbb{R}^{d}$ and a nonempty set $S\subseteq[n]$ of indices, we
denote the corresponding centroid by
$c_{S}:=\frac{1}{|S|}\sum_{i\in S}x_{i}.$
Note that we suppress the dependence of $c_{S}$ on $X$ for simplicity. With
this notation, the (normalized) $k$-means problem is given by
$\text{minimize}\qquad\frac{1}{n}\sum_{S\in\Gamma}\sum_{i\in
S}\|x_{i}-c_{S}\|^{2}\qquad\text{subject to}\qquad\Gamma\in\Pi(n,k),$
and we denote the value of this program by $\operatorname{IP}(X,k)$. This
problem is trivial when $k=1$, and so we assume $k\geq 2$ in the sequel.
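As a concrete illustration, the normalized objective above can be evaluated directly for a candidate partition; `kmeans_objective` is a hypothetical helper name, not taken from the paper's released code:

```python
def kmeans_objective(X, partition):
    """Normalized k-means objective: the mean squared distance from each
    point to the centroid of its assigned cluster."""
    n = sum(len(S) for S in partition)
    d = len(X[0])
    total = 0.0
    for S in partition:
        # centroid c_S = (1/|S|) sum_{i in S} x_i
        c = [sum(X[i][t] for i in S) / len(S) for t in range(d)]
        total += sum(sum((X[i][t] - c[t]) ** 2 for t in range(d)) for i in S)
    return total / n
```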
Lloyd’s algorithm is a popular approach to solve the $k$-means problem in
practice. This algorithm alternates between computing cluster centroids and
re-partitioning the data points according to the nearest centroid. These
iterations are inexpensive, costing only $O(kdn)$ operations each, and the
algorithm eventually converges to a (possibly sub-optimal) fixed point. The
value $V^{(0)}$ of the $k$-means++ random initialization enjoys the following
guarantee of approximate optimality (see Theorem 3.1 in [41]):
$\operatorname{IP}(X,k)\geq\mathbb{E}L,\qquad L:=\frac{V^{(0)}}{8(\log k+2)}.$
(1)
Since the $k$-means objective monotonically decreases with each iteration of
Lloyd’s algorithm, the $k$-means++ initialization ensures an $O(\log
k)$-competitive solution to the $k$-means problem on average.
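The lower bound in (1) can be sketched as follows. Note that a single run's $L$ only lower-bounds $\operatorname{IP}(X,k)$ in expectation; the function below is an illustrative sketch whose names are ours, not from [41]:

```python
import math
import random

def kmeanspp_lower_bound(X, k, rng=None):
    """One run of k-means++ seeding, returning L = V0 / (8 (log k + 2)),
    where V0 is the normalized cost of the initial centers.  Per (1),
    E[L] lower-bounds IP(X, k); a single run is only indicative."""
    rng = rng if rng is not None else random.Random(0)
    n = len(X)
    sq = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    centers = [X[rng.randrange(n)]]              # first center: uniform
    d2 = [sq(x, centers[0]) for x in X]          # D(x)^2 to nearest center
    for _ in range(k - 1):
        r = rng.random() * sum(d2)               # sample prop. to D(x)^2
        acc, idx = 0.0, 0
        for i, w in enumerate(d2):
            acc += w
            if acc >= r:
                idx = i
                break
        centers.append(X[idx])
        d2 = [min(old, sq(x, X[idx])) for old, x in zip(d2, X)]
    v0 = sum(d2) / n                             # normalized potential V^(0)
    return v0 / (8 * (math.log(k) + 2))
```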
As a theory-friendly alternative, one may instead solve the $k$-means problem
by relaxing to the Peng–Wei semidefinite program [35]. Let
$1_{S}\in\mathbb{R}^{n}$ denote the indicator vector of $S\subseteq[n]$, and
encode $\Gamma\in\Pi(n,k)$ with the matrix
$Z_{\Gamma}:=\sum_{S\in\Gamma}\frac{1}{|S|}1_{S}1_{S}^{\top}\in\mathbb{R}^{n\times
n}.$
Define $D_{X}\in\mathbb{R}^{n\times n}$ by
$(D_{X})_{ij}:=\|x_{i}-x_{j}\|^{2}$. A straightforward manipulation gives
$\sum_{S\in\Gamma}\sum_{i\in
S}\|x_{i}-c_{S}\|^{2}=\frac{1}{2}\sum_{S\in\Gamma}\frac{1}{|S|}\sum_{i\in
S}\sum_{j\in
S}\|x_{i}-x_{j}\|^{2}=\frac{1}{2}\operatorname{tr}(D_{X}Z_{\Gamma}).$
Thus, the $k$-means problem is equivalently given by
$\text{minimize}\qquad\frac{1}{2n}\operatorname{tr}(D_{X}Z_{\Gamma})\qquad\text{subject
to}\qquad\Gamma\in\Pi(n,k).$
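The identity behind this reformulation is easy to verify numerically; the following sketch (with hypothetical names) compares the within-cluster sum of squares against $\frac{1}{2}\operatorname{tr}(D_{X}Z_{\Gamma})$ on a toy partition:

```python
def trace_identity_check(X, partition):
    """Compare the within-cluster sum of squares with (1/2) tr(D_X Z_Gamma),
    where (Z_Gamma)_{ij} = 1/|S| if i and j share the cluster S, else 0."""
    n, d = len(X), len(X[0])
    lhs = 0.0                          # sum_S sum_{i in S} ||x_i - c_S||^2
    for S in partition:
        c = [sum(X[i][t] for i in S) / len(S) for t in range(d)]
        lhs += sum(sum((X[i][t] - c[t]) ** 2 for t in range(d)) for i in S)
    D = [[sum((X[i][t] - X[j][t]) ** 2 for t in range(d)) for j in range(n)]
         for i in range(n)]
    # tr(D_X Z_Gamma) = sum_S (1/|S|) sum_{i,j in S} D_ij
    rhs = 0.5 * sum(sum(D[i][j] for i in S for j in S) / len(S)
                    for S in partition)
    return lhs, rhs
```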
Considering the containment
$\Big{\\{}Z_{\Gamma}:\Gamma\in\Pi(n,k)\Big{\\}}\subseteq\mathcal{Z}(n,k):=\Big{\\{}Z\in\mathbb{R}^{n\times
n}:Z1=1,~{}\operatorname{tr}Z=k,~{}Z\geq 0,~{}Z\succeq 0\Big{\\}},$
we obtain the (normalized) Peng–Wei semidefinite relaxation [35]:
$\text{minimize}\qquad\frac{1}{2n}\operatorname{tr}(D_{X}Z)\qquad\text{subject
to}\qquad Z\in\mathcal{Z}(n,k).$
We denote the value of this program by $\operatorname{SDP}(X,k)$. When the
relaxation is tight, the minimizer recovers the optimal $k$-means partition,
and otherwise, the value delivers a lower bound on the optimal $k$-means
value. For many real-world clustering instances, this SDP is too
time-consuming to solve in practice. To resolve this issue, we introduce a few
sketch-and-solve approaches that work similarly well with far less runtime.
## 3 Sketch-and-solve clustering
In this section, we introduce a sketch-and-solve approach to SDP-based
$k$-means clustering that works well provided the clusters are sufficiently
separated. Suppose we run the Peng–Wei SDP on a random subset of the data. If
the original clusters are well separated, then this random subset satisfies a
proximity condition from [27] with high probability, which in turn implies
that the Peng–Wei SDP recovers the desired clusters in this subset. Then we
can partition the full data set according to which of these cluster centroids
is closest. This approach is summarized in Algorithm 1. The main result of
this section (Theorem 2) is a theoretical guarantee for this algorithm.
We start by introducing the necessary notation to enunciate the proximity
condition from [27]. For $X\in(\mathbb{R}^{d})^{n}$ and $\Gamma\in\Pi(n,k)$,
and for each $S,T\in\Gamma$ with $S\neq T$, we define
$\alpha_{ST}:=\min_{i\in
S}\Big{\langle}x_{i}-\frac{c_{S}+c_{T}}{2},\frac{c_{S}-c_{T}}{\|c_{S}-c_{T}\|}\Big{\rangle},\quad\beta_{ST}:=\frac{1}{2}\Big{(}\Big{(}\frac{1}{|S|}+\frac{1}{|T|}\Big{)}\sum_{R\in\Gamma}\|X_{R}\|_{2\to
2}^{2}\Big{)}^{1/2}.$
Here, $X_{R}$ denotes the $d\times n$ matrix whose $i$th column equals zero
unless $i\in R$, in which case the column equals $x_{i}-c_{R}$. Notice that
$\alpha_{ST}$ captures how close $S$ is to the hyperplane that bisects the
centroids $c_{S}$ and $c_{T}$, while $\beta_{ST}$ scales some measure of
variance of the entire dataset by the sizes of $S$ and $T$. Intuitively,
$k$-means clustering is easier when clusters are well separated, e.g., when
the following quantity is positive:
$\operatorname{prox}(X,\Gamma):=\min_{\begin{subarray}{c}S,T\in\Gamma\\\ S\neq
T\end{subarray}}\Big{(}\alpha_{ST}-\beta_{ST}\Big{)}.$
###### Proposition 1 (Theorem 2 in [27]).
For each $X\in(\mathbb{R}^{d})^{n}$, there exists at most one
$\Gamma\in\Pi(n,k)$ for which $\operatorname{prox}(X,\Gamma)>0$. Furthermore,
if such $\Gamma$ exists, then $Z_{\Gamma}$ is the unique minimizer of the
Peng–Wei semidefinite relaxation for $X$, and therefore $\Gamma$ is the unique
minimizer of the $k$-means problem.
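For intuition, $\operatorname{prox}(X,\Gamma)$ can be evaluated directly on small examples. The sketch below follows the definitions of $\alpha_{ST}$ and $\beta_{ST}$ above; clusters are passed as index tuples, and the function name is ours rather than from any released code:

```python
import numpy as np

def prox(X, partition):
    """Evaluate prox(X, Gamma) = min_{S != T} (alpha_ST - beta_ST) following
    the definitions above.  By Proposition 1, a positive value certifies
    Gamma as the unique k-means optimum recovered by the Peng-Wei SDP."""
    X = np.asarray(X, dtype=float)
    cents = {S: X[list(S)].mean(axis=0) for S in partition}
    # sum_R ||X_R||_{2->2}^2: squared spectral norms of the centered blocks
    energy = sum(np.linalg.norm(X[list(R)] - cents[R], 2) ** 2
                 for R in partition)
    best = np.inf
    for S in partition:
        for T in partition:
            if S is T:
                continue
            cS, cT = cents[S], cents[T]
            u = (cS - cT) / np.linalg.norm(cS - cT)
            # alpha_ST: margin of S beyond the bisecting hyperplane
            alpha = min(float((X[i] - (cS + cT) / 2) @ u) for i in S)
            beta = 0.5 * ((1 / len(S) + 1 / len(T)) * energy) ** 0.5
            best = min(best, alpha - beta)
    return float(best)
```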
Next, we introduce notation necessary to enunciate the main result in this
section. Given $X\in(\mathbb{R}^{d})^{n}$ and $\Gamma\in\Pi(n,k)$, consider
the quantities
$\Delta:=\min_{\begin{subarray}{c}S,T\in\Gamma\\\ S\neq
T\end{subarray}}\|c_{S}-c_{T}\|,\qquad r:=\max_{S\in\Gamma}\max_{i\in
S}\|x_{i}-c_{S}\|.$
We will consider a shape parameter of $(X,\Gamma)$ that is determined by the
following quantities:
$d,\qquad\frac{\Delta}{r},\qquad\frac{\operatorname{prox}(X,\Gamma)}{r},\qquad
k,\qquad\frac{1}{n}\min_{S\in\Gamma}|S|,\qquad\frac{1}{n}\max_{S\in\Gamma}|S|.$
Notice that the first term above is completely determined by $X$, the next two
terms are determined by $X$ and $\Gamma$, and the last three terms are
completely determined by $\Gamma$. Since its dependence on $X$ factors through
$d$, $\frac{\Delta}{r}$, and $\frac{\operatorname{prox}(X,\Gamma)}{r}$, the
shape parameter is invariant to rotation, translation, and dilation.
Data: Points $X=\\{x_{i}\\}_{i\in[n]}\in(\mathbb{R}^{d})^{n}$, number of
clusters $k$, Bernoulli rate $p$
Result: Optimal $k$-means clustering $\Gamma\in\Pi(n,k)$
Draw $W\subseteq[n]$ according to a Bernoulli process with rate $p$ and put
$X^{\prime}:=\\{x_{i}\\}_{i\in W}$
Define $D_{X^{\prime}}\in\mathbb{R}^{W\times W}$ by
$(D_{X^{\prime}})_{ij}:=\|x_{i}-x_{j}\|^{2}$ for $i,j\in W$
Solve Peng–Wei semidefinite relaxation to find optimal
$\Gamma^{\prime}\in\Pi(|W|,k)$ for $X^{\prime}$
Compute centroids $\\{c_{S^{\prime}}\\}_{S^{\prime}\in\Gamma^{\prime}}$
Output $\Gamma\in\Pi(n,k)$ that partitions $X$ according to closest
$c_{S^{\prime}}$
Algorithm 1 Sketch-and-solve algorithm for Peng–Wei semidefinite relaxation
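For concreteness, the following is a minimal Python sketch of Algorithm 1 (all names are illustrative, not from any released code). To keep the example short, the Peng–Wei SDP step is replaced by brute-force exact $k$-means over the (small) sketch; by Proposition 1, the two coincide whenever the sketched data satisfies the proximity condition.

```python
import itertools
import math
import random

def kmeans_cost(points, clusters):
    """Normalized k-means objective: mean squared distance to cluster centroids."""
    d = len(points[0])
    total = 0.0
    for idxs in clusters:
        c = [sum(points[i][t] for i in idxs) / len(idxs) for t in range(d)]
        total += sum((points[i][t] - c[t]) ** 2 for i in idxs for t in range(d))
    return total / len(points)

def brute_force_kmeans(points, k):
    """Exact k-means on a small point set by enumerating all assignments.
    Stands in for the Peng-Wei SDP step, with which it agrees whenever the
    relaxation is tight (Proposition 1)."""
    best_cost, best_clusters = math.inf, None
    for labels in itertools.product(range(k), repeat=len(points)):
        clusters = [[i for i, a in enumerate(labels) if a == j] for j in range(k)]
        if any(not c for c in clusters):
            continue  # every cluster must be nonempty
        cost = kmeans_cost(points, clusters)
        if cost < best_cost:
            best_cost, best_clusters = cost, clusters
    return best_clusters

def sketch_and_solve(points, k, p, rng=random):
    """Algorithm 1: Bernoulli-subsample, cluster the sketch, then extend the
    clustering to all points by nearest sketch centroid."""
    d = len(points[0])
    W = [i for i in range(len(points)) if rng.random() < p]
    sketch = [points[i] for i in W]
    clusters = brute_force_kmeans(sketch, k)
    centroids = [[sum(sketch[i][t] for i in c) / len(c) for t in range(d)]
                 for c in clusters]
    nearest = lambda x: min(range(k), key=lambda j: sum(
        (x[t] - centroids[j][t]) ** 2 for t in range(d)))
    return [nearest(x) for x in points]
```

In practice one would of course call an SDP solver on the sketch instead of the exponential-time enumeration; the point is that only the sketch, never the full dataset, enters the expensive step.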
###### Theorem 2.
There exists an explicit shape parameter
$C\colon(\mathbb{R}^{d})^{n}\times\Pi(n,k)\to[0,\infty]$ for which the
following holds:
* (a)
Suppose $X\in(\mathbb{R}^{d})^{n}$ and $\Gamma\in\Pi(n,k)$ satisfy
$\operatorname{prox}(X,\Gamma)>0$ and $r\leq\frac{\Delta}{2}$. Then
$C(X,\Gamma)<\infty$, and for $\epsilon\in(0,\frac{1}{2})$, Algorithm 1
exactly recovers $\Gamma$ from $X$ with probability $1-\epsilon$ provided
$\mathbb{E}|W|>C(X,\Gamma)\cdot\log(1/\epsilon).$
(Here, the randomness is in the Bernoulli process in Algorithm 1.)
* (b)
$C(X,\Gamma)$ is directly related to $d$, $k$, and
$\frac{1}{n}\max_{S\in\Gamma}|S|$, and inversely related to
$\frac{\Delta}{r}$, $\frac{\operatorname{prox}(X,\Gamma)}{r}$, and
$\frac{1}{n}\min_{S\in\Gamma}|S|$.
The proof idea for Theorem 2 is simple: identify conditions under which the
sketched data satisfy the proximity condition with probability $1-\epsilon$.
Notably, the complexity of the SDP step of Algorithm 1 depends on $|W|$, which
in turn scales according to the shape of the data rather than its size.
Indeed, when the clusters are more separated, the shape parameter is smaller,
and so the sketch size can be taken to be smaller by Theorem 2.
For the sake of illustration, we test the performance of Algorithm 1 on data
drawn according to the stochastic ball model. Fix $\\{\mu_{a}\\}_{a\in[k]}$ in
$\mathbb{R}^{d}$, and for each $a$, draw $\\{g_{a,i}\\}_{i\in[m]}$
independently according to some rotation-invariant probability distribution
$\mathcal{D}$ supported on the origin-centered unit ball in $\mathbb{R}^{d}$.
Then we write $X\sim\mathsf{SBM}(\mathcal{D},\\{\mu_{a}\\}_{a\in[k]},m)$ to
denote the data $X=\\{\mu_{a}+g_{a,i}\\}_{a\in[k],i\in[m]}$. In words, we draw
$m$ points from each of $k$ unit balls centered at the $\mu_{a}$’s, meaning
$n=km$. We will focus on the case where $d=2$, $k=2$, and $\mathcal{D}$ is the
uniform distribution on the unit ball.
First, we consider the behavior of our sketch-and-solve approach as
$n\to\infty$. (Indeed, we have the luxury of considering such behavior since
the performance of our approach does not depend on $n$, but rather on the
shape of the data.) In the limit, one may show that
$\Delta=\|\mu_{1}-\mu_{2}\|,\qquad
r=1,\qquad\operatorname{prox}(X,\Gamma)=\frac{\Delta-3}{2},\qquad\frac{1}{n}\min_{S\in\Gamma}|S|=\frac{1}{n}\max_{S\in\Gamma}|S|=\frac{1}{2}.$
By Theorem 2(b), it follows that the shape parameter $C(X,\Gamma)$ is
inversely related to $\Delta$, as one might expect. Figure 1(left) illustrates
that Algorithm 1 exactly recovers the planted clustering of the entire balls
provided they are appropriately separated and the sketch size $|W|$ isn’t too
small. Overall, the behavior reported in Figure 1(left) qualitatively matches
the prediction in Theorem 2.
Figure 1: (left) For each $\Delta\in\\{2,2.1,\ldots,4\\}$ and
$|W|\in\\{2,4,\ldots,30\\}$, perform the following experiment $500$ times:
Consider two unit balls in $\mathbb{R}^{2}$ whose centers are separated by
$\Delta$. Draw $|W|$ points uniformly from the union of the balls, solve the
Peng–Wei SDP for this sketch in CVX [16], compute the cluster centroids, and
partition the union of balls according to the nearest centroid. Plot the
proportion of these $500$ trials for which the planted clustering was exactly
recovered. (right) For each $n\in\\{2^{3},2^{4},\ldots,2^{25}\\}$, perform the
following experiment $10$ times: Draw $n$ points in $\mathbb{R}^{2}$ according
to the uniformly distributed stochastic ball model with separation $\Delta=3$.
Run Algorithm 1 with $k=2$ and $p=\min\\{\frac{10}{n},1\\}$ so that
$\mathbb{E}|W|=\min\\{10,n\\}$. For comparison, run the Peng–Wei SDP in CVX
and MATLAB’s built-in $k$-means++ algorithm (both with $k=2$), and plot the
average runtimes.
Next, we compare the runtime of our method to traditional (non-sketching)
methods in Figure 1(right). For this, we consider the same stochastic ball
model with separation $\Delta=3$. As mentioned earlier, one cannot run the
Peng–Wei SDP directly on a large dataset in a reasonable amount of time. Also,
each iteration of $k$-means++ takes linear runtime. Meanwhile, the bulk of our
sketch-and-solve approach has a runtime that scales with the shape of the data
(i.e., it’s independent of the size of the data). Of course, the final
clustering step of our approach requires a single pass of the data, which
explains the slight increase in runtime for larger datasets.
## 4 High-confidence lower bounds
The previous section focused on a sketch-and-solve approach that recovers the
optimal $k$-means clustering whenever the data exhibits a sufficiently nice
shape (in some quantifiable sense). However, the Peng–Wei semidefinite
relaxation fails to be tight when the planted clusters are not well separated,
so we should not expect exact recovery in general. Regardless, the SDP always
delivers a lower bound on the optimal $k$-means value, which can be useful in
practice (e.g., when deciding whether to re-initialize Lloyd’s algorithm in
search of a better clustering).
In this section, we introduce sketch-and-solve approaches for computing such
bounds in a reasonable amount of time. Our approaches are inspired by the
following simple result:
###### Lemma 3.
Consider any sequence $X:=\\{x_{i}\\}_{i\in[n]}$ in $\mathbb{R}^{d}$, draw
indices $\\{i_{j}\\}_{j\in[s]}$ uniformly from $[n]$ with (or without)
replacement, and put $Y:=\\{x_{i_{j}}\\}_{j\in[s]}$. Then
$\mathbb{E}\operatorname{SDP}(Y,k)\leq\mathbb{E}\operatorname{IP}(Y,k)\leq\operatorname{IP}(X,k).$
###### Proof.
The first inequality follows from relaxation. For the second inequality,
select $\Gamma\in\arg\operatorname{IP}(X,k)$, consider the set-valued function
$\sigma\colon[n]\to\Gamma$ that satisfies $i\in\sigma(i)$ for every $i\in[n]$,
and for each $j\in[s]$, denote the random variable
$E_{j}:=\bigg{\|}x_{i_{j}}-\frac{1}{|\sigma(i_{j})|}\sum_{i\in\sigma(i_{j})}x_{i}\bigg{\|}^{2}.$
The random variables $\\{E_{j}\\}_{j\in[s]}$ have a common distribution with
expectation $\operatorname{IP}(X,k)$, though they are dependent if the indices
$\\{i_{j}\\}_{j\in[s]}$ are drawn without replacement. Considering the random
function $f\colon[s]\to[n]$ defined by $f(j):=i_{j}$, we have
$\operatorname{IP}(Y,k)\leq\frac{1}{s}\sum_{S\in\Gamma}\sum_{j\in
f^{-1}(S)}\bigg{\|}x_{i_{j}}-\frac{1}{|f^{-1}(S)|}\sum_{j^{\prime}\in
f^{-1}(S)}x_{i_{j^{\prime}}}\bigg{\|}^{2}\leq\frac{1}{s}\sum_{j\in[s]}E_{j},$
where the second inequality follows from the fact that the centroid of a tuple
of points minimizes the sum of squared distances from those points. The result
follows by taking the expectation of both sides. ∎
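The inequality $\mathbb{E}\operatorname{IP}(Y,k)\leq\operatorname{IP}(X,k)$ from Lemma 3 can be checked empirically on a tiny instance. The helper below brute-forces the normalized $k$-means value, so it is only a sanity check, not a practical routine; the function names are illustrative.

```python
import itertools
import math
import random

def ip_value(points, k):
    """Normalized k-means value IP(points, k) by enumerating all cluster
    assignments (feasible only for tiny point sets)."""
    n, d = len(points), len(points[0])
    best = math.inf
    for labels in itertools.product(range(k), repeat=n):
        clusters = [[i for i, a in enumerate(labels) if a == j] for j in range(k)]
        if any(not c for c in clusters):
            continue  # every cluster must be nonempty
        cost = 0.0
        for c in clusters:
            cen = [sum(points[i][t] for i in c) / len(c) for t in range(d)]
            cost += sum((points[i][t] - cen[t]) ** 2 for i in c for t in range(d))
        best = min(best, cost / n)
    return best

def avg_subsample_ip(points, k, s, trials, rng):
    """Monte Carlo estimate of E IP(Y, k) for a uniform with-replacement
    subsample Y of size s (the quantity bounded in Lemma 3)."""
    n = len(points)
    return sum(ip_value([points[rng.randrange(n)] for _ in range(s)], k)
               for _ in range(trials)) / trials
```

On random data, the subsample average sits strictly below the full $k$-means value, as the lemma guarantees in expectation.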
By Lemma 3, we can lower bound the $k$-means value $\operatorname{IP}(X,k)$ by
estimating the expected Peng–Wei value $\mathbb{E}\operatorname{SDP}(Y,k)$ of
a random sketch $Y$. To this end, given an error rate $\epsilon>0$ and
$\ell\in\mathbb{N}$ draws of the random sketch, we may leverage concentration
inequalities to compute a random variable $B$ that is smaller than
$\mathbb{E}\operatorname{SDP}(Y,k)$ with probability $\geq 1-\epsilon$. We
provide two such random variables in Algorithms 3 and 4. Notably, the random
variable $B_{H}$ computed by Algorithm 3 is consistent in the sense that
$B_{H}$ converges in probability to $\mathbb{E}\operatorname{SDP}(Y,k)$ as
$\ell\to\infty$. Meanwhile, the random variable $B_{M}$ computed by Algorithm
4 is not consistent, but as we show in Section 5, it empirically outperforms
$B_{H}$ when $\ell$ is small. Theorems 4(a) and 5(a) give that these random
variables indeed act as lower bounds with probability $\geq 1-\epsilon$.
Of course, we would like these lower bounds to be as sharp as possible. To
evaluate how close they are to the desired $k$-means value, we consider the
Gaussian mixture model. Given means $\mu_{1},\ldots,\mu_{k}\in\mathbb{R}^{d}$,
covariances $\Sigma_{1},\ldots,\Sigma_{k}\in\mathbb{R}^{d\times d}$, and a
probability distribution $p$ over the index set $[k]$, the random vector
$x\sim\mathsf{GMM}(\\{(\mu_{t},\Sigma_{t},p(t))\\}_{t\in[k]})$ is obtained by
first drawing $T$ from $[k]$ with distribution $p$, and then drawing $x$ from
the Gaussian $\mathsf{N}(\mu_{T},\Sigma_{T})$. The Gaussian mixture model can
be thought of as a “noisy” version of the stochastic ball model. By part (b)
of the following results, our random lower bounds are nearly sharp provided
$d\gg k$, even when there is no separation between the Gaussian means. See
Section 8 for the proofs of these parts.
Data: Points $X=\\{x_{i}\\}_{i\in[n]}\in(\mathbb{R}^{d})^{n}$, number of
clusters $k$
Result: Indices $\\{i_{j}\\}_{j\in[k]}\in[n]^{k}$ of well-separated points
Put $i_{1}:=1$, and iteratively select
$i_{t+1}\in\arg\max_{i\in[n]}\min_{j\in[t]}\|x_{i}-x_{i_{j}}\|$
Algorithm 2 Deterministic $k$-means$++$ initialization
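Algorithm 2 is the classical farthest-point traversal (Gonzalez's heuristic for the $k$-center problem). A direct Python transcription (0-indexed, whereas the paper puts $i_{1}:=1$; names illustrative) is:

```python
def det_kmeanspp_init(points, k):
    """Algorithm 2: start from the first point, then greedily add the point
    farthest from the current set (farthest-point traversal)."""
    dist2 = lambda a, b: sum((a[t] - b[t]) ** 2 for t in range(len(a)))
    chosen = [0]
    # min squared distance from each point to the chosen set
    d2 = [dist2(x, points[0]) for x in points]
    while len(chosen) < k:
        nxt = max(range(len(points)), key=lambda i: d2[i])
        chosen.append(nxt)
        d2 = [min(d2[i], dist2(points[i], points[nxt]))
              for i in range(len(points))]
    return chosen
```

Maintaining the running minimum distances `d2` makes the whole traversal run in $O(nk)$ distance evaluations.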
Data: Points $X=\\{x_{i}\\}_{i\in[n]}\in(\mathbb{R}^{d})^{n}$, number of
clusters $k$, sketch size $s$, number of trials $\ell$, error rate $\epsilon$
Result: Random variable $B_{H}$ such that $\operatorname{IP}(X,k)\geq B_{H}$
with probability $\geq 1-\epsilon$
Run deterministic $k$-means$++$ initialization and put
$b:=\max_{i\in[n]}\min_{j\in[k]}\|x_{i}-x_{i_{j}}\|^{2}$
Draw $\\{Y_{i}\\}_{i\in[\ell]}$ independently at random, with each $Y_{i}$
denoting $s$ points drawn uniformly from $X$ with replacement
Output
$B_{H}:=\frac{1}{\ell}\sum_{i\in[\ell]}\operatorname{SDP}(Y_{i},k)-(\frac{b^{2}}{2\ell}\log(\frac{1}{\epsilon}))^{1/2}$
Algorithm 3 Hoeffding Monte Carlo $k$-means lower bound
Data: Points $X=\\{x_{i}\\}_{i\in[n]}\in(\mathbb{R}^{d})^{n}$, number of
clusters $k$, sketch size $s$, number of trials $\ell$, error rate $\epsilon$
Result: Random variable $B_{M}$ such that $\operatorname{IP}(X,k)\geq B_{M}$
with probability $\geq 1-\epsilon$
Draw $\\{Y_{i}\\}_{i\in[\ell]}$ independently at random, with each $Y_{i}$
denoting $s$ points drawn uniformly from $X$ without replacement
Output $B_{M}:=\epsilon^{1/\ell}\min_{i\in[\ell]}\operatorname{SDP}(Y_{i},k)$
Algorithm 4 Markov Monte Carlo $k$-means lower bound
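Given the values $\operatorname{SDP}(Y_{i},k)$ of the $\ell$ sketches (and, for $B_{H}$, the almost-sure upper bound $b$ from Algorithm 2), the two estimators reduce to one-line formulas. A Python sketch with the SDP solver abstracted away as precomputed values (names illustrative):

```python
import math

def hoeffding_lower_bound(sketch_values, b, eps):
    """Algorithm 3 (B_H): average the sketched SDP values and subtract a
    Hoeffding correction; b is an almost-sure upper bound on each value."""
    ell = len(sketch_values)
    return (sum(sketch_values) / ell
            - math.sqrt(b ** 2 / (2 * ell) * math.log(1 / eps)))

def markov_lower_bound(sketch_values, eps):
    """Algorithm 4 (B_M): scale the minimum sketched SDP value by eps^(1/ell)."""
    ell = len(sketch_values)
    return eps ** (1 / ell) * min(sketch_values)
```

Note that if the sketched SDP values are replaced by the exact sketched $k$-means values $\operatorname{IP}(Y_{i},k)$, both quantities remain valid lower bounds by the first inequality of Lemma 3; the SDP is used because it is the tractable surrogate.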
###### Theorem 4 (Performance guarantee for Algorithm 3).
* (a)
Consider any $X:=\\{x_{i}\\}_{i\in[n]}$ in $\mathbb{R}^{d}$, any
$k,s,\ell\in\mathbb{N}$, and $\epsilon>0$, and compute the random variable
$B_{H}$ in Algorithm 3. Then
$\operatorname{IP}(X,k)\geq B_{H}$
with probability $\geq 1-\epsilon$. (Here, the probability is on Algorithm 3.)
* (b)
Consider any $\mu_{1},\ldots,\mu_{k}\in\mathbb{R}^{d}$, draw the points
$X:=\\{x_{i}\\}_{i\in[n]}$ independently with distribution
$\mathsf{GMM}(\\{(\mu_{t},I_{d},\frac{1}{k})\\}_{t\in[k]})$, take any
$s,\ell\in\mathbb{N}$, and $\epsilon>0$, and compute the random variable
$B_{H}$ in Algorithm 3. Then
$B_{H}\geq\frac{d-6k-2}{d+1}\cdot\operatorname{IP}(X,k)$
with probability $\geq 1-\frac{1}{n}-e^{-\Omega(n/(d+\log n)^{2})}-\epsilon$
provided
$s\geq 15d\log d,\qquad\ell\geq 128(d+3\log n)^{2}\log(1/\epsilon).$
(Here, the probability is on both $X$ and Algorithm 3.)
###### Proof of Theorem 4(a).
Take $\\{x_{i_{j}}\\}_{j\in[k]}$ from Algorithm 2, consider any tuple
$\\{i^{\prime}_{t}\\}_{t\in[s]}$ of indices in $[n]$, and put
$Y:=\\{x_{i^{\prime}_{t}}\\}_{t\in[s]}$. Then
$\displaystyle\operatorname{SDP}(Y,k)$
$\displaystyle\leq\operatorname{IP}(Y,k)=\min_{\mu_{1},\ldots,\mu_{k}\in\mathbb{R}^{d}}\frac{1}{s}\sum_{t\in[s]}\min_{j\in[k]}\|x_{i^{\prime}_{t}}-\mu_{j}\|^{2}$
$\displaystyle\leq\frac{1}{s}\sum_{t\in[s]}\min_{j\in[k]}\|x_{i^{\prime}_{t}}-x_{i_{j}}\|^{2}\leq\max_{t\in[s]}\min_{j\in[k]}\|x_{i^{\prime}_{t}}-x_{i_{j}}\|^{2}\leq\max_{i\in[n]}\min_{j\in[k]}\|x_{i}-x_{i_{j}}\|^{2}.$
It follows that $\operatorname{SDP}(Y_{i},k)\leq b$ almost surely for each
$i\in[\ell]$. The result then follows from Lemma 3 and Hoeffding’s inequality:
$\mathbb{P}\\{B_{H}>\operatorname{IP}(X,k)\\}\leq\mathbb{P}\bigg{\\{}\frac{1}{\ell}\sum_{i\in[\ell]}\operatorname{SDP}(Y_{i},k)-\Big{(}\tfrac{b^{2}}{2\ell}\log(\tfrac{1}{\epsilon})\Big{)}^{1/2}>\mathbb{E}\operatorname{SDP}(Y_{1},k)\bigg{\\}}\leq\epsilon.\qed$
###### Theorem 5 (Performance guarantee for Algorithm 4).
* (a)
Consider any $X:=\\{x_{i}\\}_{i\in[n]}$ in $\mathbb{R}^{d}$, any
$k,s,\ell\in\mathbb{N}$, and $\epsilon>0$, and compute the random variable
$B_{M}$ in Algorithm 4. Then
$\operatorname{IP}(X,k)\geq B_{M}$
with probability $\geq 1-\epsilon$. (Here, the probability is on Algorithm 4.)
* (b)
Consider any $\mu_{1},\ldots,\mu_{k}\in\mathbb{R}^{d}$, draw the points
$X:=\\{x_{i}\\}_{i\in[n]}$ independently with distribution
$\mathsf{GMM}(\\{(\mu_{t},I_{d},\frac{1}{k})\\}_{t\in[k]})$, take any
$s,\ell\in\mathbb{N}$, and $\epsilon>0$, and compute the random variable
$B_{M}$ in Algorithm 4. Then
$B_{M}\geq\frac{d-3k-2}{d+1}\cdot\operatorname{IP}(X,k)$
with probability $\geq 1-e^{-s/(8d)}-2e^{-s/54}-e^{-n/(16d)}$ provided
$s\geq 54d\log\ell,\qquad\ell\geq d\log(1/\epsilon),\qquad\ell\geq 3.$
(Here, the probability is on both $X$ and Algorithm 4.)
###### Proof of Theorem 5(a).
By Lemma 3, we have
$\operatorname{IP}(X,k)\geq\mathbb{E}\operatorname{SDP}(Y_{1},k)$. It follows
that
$\mathbb{P}\\{B_{M}>\operatorname{IP}(X,k)\\}\leq\mathbb{P}\\{B_{M}>\mathbb{E}\operatorname{SDP}(Y_{1},k)\\}=\mathbb{P}\\{\epsilon^{1/\ell}\operatorname{SDP}(Y_{1},k)>\mathbb{E}\operatorname{SDP}(Y_{1},k)\\}^{\ell},$
where the last step applies the fact that
$\\{\operatorname{SDP}(Y_{i},k)\\}_{i\in[\ell]}$ are independent and
identically distributed. Markov’s inequality further bounds this upper bound by
$\epsilon$. ∎
The proof of Theorem 4(b) makes use of the following approximation ratio
afforded by deterministic $k$-means++ initialization (Algorithm 2), which may
be of independent interest.
###### Lemma 6.
Given points $X:=\\{x_{i}\\}_{i\in[n]}$ in $\mathbb{R}^{d}$, consider the
problem of minimizing the function $f\colon(\mathbb{R}^{d})^{k}\to\mathbb{R}$
defined by
$f(\\{z_{j}\\}_{j\in[k]}):=\max_{i\in[n]}\min_{j\in[k]}\|x_{i}-z_{j}\|.$
The output $\\{i_{j}\\}_{j\in[k]}$ of deterministic $k$-means$++$
initialization (Algorithm 2) indexes an approximate solution to this
optimization problem with approximation ratio $2$:
$f(\\{x_{i_{j}}\\}_{j\in[k]})\leq 2\inf(f).$
###### Proof.
First, we establish that $f$ has a global minimizer. Taking $z_{j}=0$ for
every $j$ gives $f(\\{z_{j}\\}_{j\in[k]})=\max_{i\in[n]}\|x_{i}\|=:R$. Let
$w\in\mathbb{R}^{d}$ denote a point for which $\min_{i\in[n]}\|x_{i}-w\|=R$.
For any input $\\{z_{j}\\}_{j\in[k]}$ such that $f(\\{z_{j}\\}_{j\in[k]})\leq
R$, for each $j$, either $z_{j}$ is within $R$ of the nearest $x_{i}$, or
the function value is not increased by sending $z_{j}$ to $w$. Thus, we may
restrict the input space to the set of $\\{z_{j}\\}_{j\in[k]}$ for which every
$z_{j}$ is within $R$ of the nearest $x_{i}$. Since this set is compact and
$f$ is continuous, the extreme value theorem guarantees a global minimizer.
Let $\\{z_{j}^{\star}\\}_{j\in[k]}$ denote a global minimizer of $f$, and put
$r:=\min(f)$. Then every $x_{i}$ is within $r$ of some $z_{j}^{\star}$. Run
deterministic $k$-means$++$ initialization, and consider the quantities
$r_{t}:=\max_{i\in[n]}\min_{j\in[t]}\|x_{i}-x_{i_{j}}\|$
for $t\in[k]$. Observe that $r_{1}\geq\cdots\geq
r_{k}=f(\\{x_{i_{j}}\\}_{j\in[k]})$. We wish to demonstrate $r_{k}\leq 2r$. If
$r_{k-1}\leq 2r$, then we are done. Otherwise, we have $r_{t}>2r$ for every
$t<k$, and so the pairwise distances between $\\{x_{i_{j}}\\}_{j\in[k]}$ are
all strictly greater than $2r$. Since each $x_{i_{j}}$ is within $r$ of a
point in $\\{z_{j}^{\star}\\}_{j\in[k]}$, the triangle inequality gives that
no two $x_{i_{j}}$’s are within $r$ of the same point in
$\\{z_{j}^{\star}\\}_{j\in[k]}$. Since every $x_{i}$ is within $r$ of some
$z_{j}^{\star}$, which in turn is within $r$ of some $x_{i_{j}}$, the triangle
inequality further gives that every $x_{i}$ is within $2r$ of some
$x_{i_{j}}$. This establishes $r_{k}\leq 2r$, as desired. ∎
## 5 Numerical experiments
In this section, we use both synthetic and real-world data to test the
performance of our high-confidence lower bounds $B_{H}$ (Algorithm 3) and
$B_{M}$ (Algorithm 4).
### 5.1 Implementation details
When implementing our algorithms, we discovered a few modifications that
delivered substantial improvements in practice.
Correcting the numerical dual certificate. SDPNAL+ [39] is a fast SDP solver
specifically designed for SDPs with entrywise nonnegativity constraints, such
as the Peng–Wei SDP. For this reason, we use SDPNAL+ to solve our sketched
SDP; however, the dual certificate it returns is not exactly dual feasible. To
be explicit, recall the normalized Peng–Wei SDP for a sketch $Y$ of size $s$:
$\text{minimize}\qquad\frac{1}{2s}\operatorname{tr}(D_{Y}Z)\qquad\text{subject
to}\qquad Z\in\mathbb{R}^{s\times s},~{}\mathcal{A}(Z)=b,~{}Z\geq
0,~{}Z\succeq 0,$
where $\mathcal{A}\colon\mathbb{R}^{s\times s}\to\mathbb{R}^{s+1}$ and
$b\in\mathbb{R}^{s+1}$ are defined by
$\displaystyle\mathcal{A}(Z):=\left[\begin{matrix}\operatorname{tr}(Z)\\\
Z1\end{matrix}\right],\qquad b:=\left[\begin{matrix}k\\\
1\end{matrix}\right].$
The corresponding dual program is
$\text{maximize}\quad\frac{1}{2s}b^{\top}y\quad\text{subject to}\quad
y\in\mathbb{R}^{s+1},~{}P,S\in\mathbb{R}^{s\times
s},~{}\mathcal{A}^{*}(y)+P+S=D_{Y},~{}P\geq 0,~{}S\succeq 0.$
In practice, SDPNAL+ returns $(y_{0},P_{0},S_{0})$ such that
$\mathcal{A}^{*}(y_{0})+P_{0}+S_{0}\neq D_{Y}$. To correct this, we fix
$S=S_{0}$ in the
dual program, thereby restricting to an easy-to-solve linear program, which we
then solve using CVX. The result is a point $(y_{1},P_{1},S_{0})$ that is
feasible in the original dual program, and so weak duality implies that
$\frac{1}{2s}b^{\top}y_{1}$ is a lower bound, as desired.
Truncating the distance matrix. For some datasets (e.g., INTRUSION), the
entries of $D_{X}$ might vary widely, and we observe that SDP solvers fail to
converge in such cases. This can be resolved by truncating. Specifically, we
apply the function $t\mapsto\min\\{t,10^{8}\\}$ to each entry of $D_{X}$
before solving the SDP. Not only does this make the problem solvable in
practice, but since $Z\geq 0$ entrywise, the optimal value of the truncated
problem is also a lower bound on the original optimal value, and so we can use
the result.
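The reason truncation is safe is that the primal variable satisfies $Z\geq 0$ entrywise, so decreasing entries of $D_{X}$ can only decrease $\operatorname{tr}(D_{X}Z)$. A minimal numeric illustration (names illustrative):

```python
def truncate(D, cap=1e8):
    """Entrywise truncation t -> min(t, cap), as applied to D_X before solving."""
    return [[min(t, cap) for t in row] for row in D]

def frob_inner(A, B):
    """tr(AB) for symmetric matrices, i.e. the entrywise inner product."""
    return sum(a * b for row_a, row_b in zip(A, B) for a, b in zip(row_a, row_b))
```

For any entrywise nonnegative $Z$, the truncated objective value is never larger than the original one, so the truncated SDP value remains a valid lower bound.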
Truncating the sketched SDP value. When we run Algorithm 3 in practice, we
find that $b$ is often so large that $B_{H}$ is negative (and therefore
useless). Recall that $b$ plays the role of an almost sure upper bound on
$\operatorname{SDP}(Y,k)$. We can replicate this behavior by selecting a
threshold $u>0$ and replacing $\operatorname{SDP}(Y,k)$ with the truncated
version $\min\\{\operatorname{SDP}(Y,k),u\\}$. This leads to the following
modification:
$B_{H}=\frac{1}{\ell}\sum_{i\in[\ell]}\min\Big{\\{}\operatorname{SDP}(Y_{i},k),u\Big{\\}}-\Big{(}\tfrac{u^{2}}{2\ell}\log(\tfrac{1}{\epsilon})\Big{)}^{1/2}.$
Following the proof of Theorem 4(a), we have that
$\displaystyle\mathbb{P}\\{B_{H}>\operatorname{IP}(X,k)\\}$
$\displaystyle\quad\leq\mathbb{P}\bigg{\\{}\frac{1}{\ell}\sum_{i\in[\ell]}\min\Big{\\{}\operatorname{SDP}(Y_{i},k),u\Big{\\}}-\Big{(}\tfrac{u^{2}}{2\ell}\log(\tfrac{1}{\epsilon})\Big{)}^{1/2}>\mathbb{E}\min\Big{\\{}\operatorname{SDP}(Y_{1},k),u\Big{\\}}\bigg{\\}}\leq\epsilon,$
and so it still delivers a high-confidence lower bound. In practice, we take
$u$ to be the $k$-means value of the clustering given by $k$-means++, and we
observe that the truncation usually occurs for only a few of the $\ell$
sketches.
Other implementation details. When solving our SDPs with SDPNAL+, we warm
start with a block diagonal primal matrix $Z$ that we obtain by running
$k$-means++ over the sketched data. Also, we solve SDPs to low precision by
default, but in cases where this only takes a few iterations, we solve the SDP
again to a higher precision.
### 5.2 Datasets and parameters
We test our algorithms on five datasets. MNIST, consisting of $60000$ points
in $784$ dimensions, is the training set images of the MNIST database of
handwritten digits [26]. NORM-10, consisting of $10000$ points in $5$
dimensions, is a synthetic dataset drawn according to a mixture of $10$
Gaussians $\mathsf{GMM}(\\{(\mu_{t},I_{5},\frac{1}{10})\\}_{t\in[10]})$ with
the centers $\\{\mu_{t}\\}_{t\in[10]}$ drawn uniformly in a $5$-dimensional
hypercube of side length $500$. Similarly, NORM-25, consisting of $10000$
points in $15$ dimensions, is a synthetic dataset drawn according to a mixture
of $25$ Gaussians $\mathsf{GMM}(\\{(\mu_{t},I_{15},\frac{1}{25})\\}_{t\in[25]})$
with the centers $\\{\mu_{t}\\}_{t\in[25]}$ drawn uniformly in a
$15$-dimensional hypercube of side length $500$. CLOUD, consisting of $1024$
points in $10$ dimensions, is Philippe Collard’s cloud cover data [8] in the
UC Irvine Machine Learning Repository. INTRUSION, consisting of $494021$
points in $34$ dimensions, is a $10\%$ subset of the continuous features (with
symbolic features removed) for the network intrusion detection task of the KDD
Cup 1999 competition [22], also from the UC Irvine Machine Learning
Repository. Notice that NORM-25, CLOUD, and INTRUSION are the same datasets
used in [41] to demonstrate the performance of the $k$-means++ algorithm.
For each dataset, we run our algorithms in various settings. First, we run our
algorithms with a small number of trials, namely, $\ell=30$. We test $k=10$
for MNIST and $k\in\\{10,25,50\\}$ for each of the other four datasets. We
expect the Hoeffding lower bound $B_{H}$ to perform better than the Markov
lower bound $B_{M}$ when $\ell$ is larger, so we also consider the setting
where $\ell=1000$. In this setting, we take $k=25$ for NORM-25 and $k=10$ for
the other four datasets. For all of these experiments, we use sketch size
$s=300$ and error rate $\epsilon=0.01$.
### 5.3 Results
Before running our algorithms, we run the $k$-means++ algorithm $\ell$ times
on the full dataset and record the smallest $k$-means value, denoted by $\min
v_{i}$. (This number of trials equals the number of random sketch trials for
simplicity.) We take this $k$-means value to be truncation level $u$ in our
SDP truncation technique for $B_{H}$. Note that $\min v_{i}$ is an upper bound
on the optimal $k$-means objective.
For comparison, we use (1) to get high-confidence lower bounds similar to
$B_{H}$ (with value truncation) and $B_{M}$:
$L_{H}:=\frac{1}{\ell}\sum_{i\in[\ell]}\min\\{L_{i},u\\}-\Big{(}\tfrac{u^{2}}{2\ell}\log(\tfrac{1}{\epsilon})\Big{)}^{1/2},\qquad
L_{M}:=\epsilon^{1/\ell}\min_{i\in[\ell]}L_{i},$
where $\ell$, $u$, and $\epsilon$ are the same as for $B_{H}$ and $B_{M}$, and
each $L_{i}$ is an independent draw of $L$. By the proofs of Theorems 4(a) and
5(a), the random variables $L_{H}$ and $L_{M}$ are indeed lower bounds on
$\operatorname{IP}(X,k)$ with probability $1-\epsilon$. In Tables 1 and 2, we
include the average of $\\{L_{i}\\}_{i\in[\ell]}$ for comparison (denoted by
$\operatorname{avg}L_{i}$), which can be seen as a lower bound without a
probability guarantee.
Our results with $\ell=30$ are in Table 1, while our results with $\ell=1000$
are in Table 2. The bold numbers give the best lower bound among $L_{H}$,
$L_{M}$, $B_{H}$, and $B_{M}$. In all datasets except INTRUSION, our lower
bound (the best between $B_{H}$ and $B_{M}$) is at least $10$ times better
than $L_{H}$ and $L_{M}$ (and is still a significant improvement over
$\operatorname{avg}L_{i}$). However, our lower bound is worse than the
$k$-means++ lower bound for the INTRUSION dataset. This is because INTRUSION
is quite unbalanced: most of the data is concentrated at the same point, and
there are a few very distant outliers. Indeed, when we uniformly sketch down
to a small subset, most of the outliers may not be selected, and so even the
optimal $k$-means value of the subset $\operatorname{IP}(Y,k)$ is much smaller
than $\operatorname{IP}(X,k)$. (This can be demonstrated by running
$k$-means++ on $Y$ for INTRUSION.) As such, the bad performance on INTRUSION
data is due to sketching rather than relaxing. As expected, $B_{M}$ is
slightly better when $\ell$ is small, while $B_{H}$ is slightly better when
$\ell$ is large. In addition, we observe that $L_{H}$ is consistently negative
(as is $B_{H}$ for INTRUSION).
Tables 1 and 2 also display three types of runtime. $T_{\mathrm{init}}$ is the
time for $\ell$ repeated initializations of $k$-means++, which is used to get
$\\{L_{i}\\}_{i\in[\ell]}$. $T_{\mathrm{k++}}$ is the time to compute $\min
v_{i}$ (i.e., the threshold $u$). $T_{\mathrm{SDP}}$ is the time to compute
$\ell$ randomly sketched SDPs. Based on the definitions of these lower bounds,
the runtime for $L_{H}$ is $T_{\mathrm{init}}+T_{\mathrm{k++}}$, the runtime
for $L_{M}$ is $T_{\mathrm{init}}$, the runtime for $B_{H}$ is
$T_{\mathrm{SDP}}+T_{\mathrm{k++}}$, and the runtime for $B_{M}$ is
$T_{\mathrm{SDP}}$. While the SDP approach takes longer than $k$-means++ in
these examples, the SDP approach usually delivers a much better lower bound.
Table 1: High-confidence $k$-means lower bounds with $\ell=30$.

Dataset | $k$ | $\min v_{i}$ | $\operatorname{avg}L_{i}$ | $L_{H}$ | $L_{M}$ | $B_{H}$ | $B_{M}$ | $T_{\mathrm{init}}$ | $T_{\mathrm{k++}}$ | $T_{\mathrm{SDP}}$
---|---|---|---|---|---|---|---|---|---|---
MNIST | 10 | 3.92e1 | 1.26e0 | -9.59e0 | 1.06e0 | 2.56e1 | 2.96e1 | 1.71e1 | 4.18e2 | 2.47e2
NORM-10 | 10 | 4.97e0 | 1.10e0 | -1.07e0 | 1.24e-1 | 3.42e0 | 3.72e0 | 2.24e-1 | 1.48e-1 | 2.92e1
NORM-10 | 25 | 4.05e0 | 1.02e-1 | -1.01e0 | 8.63e-2 | 1.43e0 | 2.11e0 | 4.00e-1 | 1.97e0 | 3.98e2
NORM-10 | 50 | 3.18e0 | 7.58e-2 | -8.05e-1 | 6.33e-2 | 5.24e-1 | 1.12e0 | 6.30e-1 | 3.71e0 | 1.93e2
NORM-25 | 10 | 1.18e5 | 3.94e3 | -2.90e4 | 3.03e3 | 6.99e4 | 8.08e4 | 2.58e-1 | 1.95e-1 | 1.67e2
NORM-25 | 25 | 1.50e1 | 6.37e0 | -3.31e0 | 3.08e-1 | 9.61e0 | 1.15e1 | 4.42e-1 | 3.63e-1 | 3.44e1
NORM-25 | 50 | 1.41e1 | 3.04e-1 | -3.61e0 | 2.60e-1 | 5.23e0 | 7.48e0 | 6.88e-1 | 2.50e0 | 4.64e1
CLOUD | 10 | 5.62e3 | 2.41e2 | -1.31e3 | 1.62e2 | 2.70e3 | 3.06e3 | 1.01e-1 | 1.75e-1 | 1.46e2
CLOUD | 25 | 1.94e3 | 6.31e1 | -4.75e2 | 4.91e1 | 8.24e2 | 9.43e2 | 1.54e-1 | 1.90e-1 | 3.48e2
CLOUD | 50 | 1.09e3 | 2.99e1 | -2.72e2 | 2.33e1 | 2.57e2 | 4.54e2 | 2.19e-1 | 3.67e-1 | 4.99e2
INTRUSION | 10 | 2.36e7 | 1.00e6 | -5.55e6 | 6.03e5 | -6.43e6 | 2.93e4 | 9.46e0 | 3.76e1 | 3.43e2
INTRUSION | 25 | 2.19e6 | 7.64e4 | -5.31e5 | 5.31e4 | -6.03e5 | 1.67e3 | 1.86e1 | 1.04e2 | 4.58e2
INTRUSION | 50 | 4.52e5 | 1.36e4 | -1.11e5 | 9.27e3 | -1.25e5 | 4.56e1 | 3.43e1 | 1.51e2 | 2.16e3
Table 2: High-confidence $k$-means lower bounds with $\ell=1000$.

Dataset | $k$ | $\min v_{i}$ | $\operatorname{avg}L_{i}$ | $L_{H}$ | $L_{M}$ | $B_{H}$ | $B_{M}$ | $T_{\mathrm{init}}$ | $T_{\mathrm{k++}}$ | $T_{\mathrm{SDP}}$
---|---|---|---|---|---|---|---|---|---|---
MNIST | 10 | 3.92e1 | 1.26e0 | -6.11e-1 | 1.21e0 | 3.44e1 | 3.39e1 | 5.79e2 | 1.45e4 | 8.08e3
NORM-10 | 10 | 5.02e0 | 4.05e-1 | -8.05e-2 | 1.45e-1 | 4.59e0 | 4.18e0 | 8.81e0 | 5.02e0 | 1.14e3
NORM-25 | 25 | 1.49e1 | 1.73e0 | -1.84e-1 | 3.56e-1 | 1.29e1 | 1.27e1 | 1.56e1 | 1.14e1 | 1.26e3
CLOUD | 10 | 5.62e3 | 2.31e2 | -3.89e1 | 1.72e2 | 4.06e3 | 3.14e3 | 4.05e0 | 6.10e0 | 6.27e3
INTRUSION | 10 | 2.34e7 | 9.58e5 | -1.66e5 | 6.80e5 | -1.00e6 | 1.81e4 | 3.15e2 | 2.44e3 | 2.65e4
## 6 Discussion
In this paper, we introduced sketch-and-solve approaches to $k$-means
clustering with semidefinite programming. In particular, we exactly recover
the optimal clustering with a single sketch provided the data is well
separated, and we compute a high-confidence lower bound on the $k$-means value
from multiple sketches when the data is not well separated. For future work,
one might attempt to use multiple sketches to find the optimal clustering when
the data only exhibits some separation. Next, our high-confidence lower bounds
perform poorly for unbalanced data like INTRUSION, and we suspect this stems
from our uniform sampling approach. Presumably, an importance sampling–based
alternative would perform better in such settings. Finally, the main idea of
our high-confidence lower bounds is that
$\mathbb{E}\operatorname{IP}(Y,k)\leq\operatorname{IP}(X,k)$ (see Lemma 3). It
would be interesting to show a similar relationship for SDP, namely,
$\mathbb{E}\operatorname{SDP}(Y,k)\leq\operatorname{SDP}(X,k)$. While this
bound holds empirically, we do not have a proof. Such a bound might allow one
to extend our sketch-and-solve approach to more general SDPs.
## 7 Proof of Theorem 2
For each $S\in\Gamma$, denote $S^{\prime}:=S\cap W$. To prove Theorem 2, we
will first find sufficient conditions for the following approximations to
hold:
$\alpha_{S^{\prime}T^{\prime}}\gtrapprox\alpha_{ST},\qquad\beta_{S^{\prime}T^{\prime}}\approx\beta_{ST},\qquad
c_{S^{\prime}}\approx c_{S}.$
The first two approximations combined with Proposition 1 ensure that the SDP
step of Algorithm 1 produces the clustering
$\Gamma^{\prime}=\\{S^{\prime}:S\in\Gamma\\}$. Meanwhile, the approximation
$c_{S^{\prime}}\approx c_{S}$ ensures that the final step of Algorithm 1
produces the desired clustering $\Gamma$.
###### Lemma 7.
For $X\sim\mathsf{Binomial}(n,p)$, it holds that
$\mathbb{P}\\{X\leq\tfrac{1}{2}pn\\}\leq\operatorname{exp}(-\tfrac{3}{28}pn),\qquad\mathbb{P}\\{X\geq\tfrac{3}{2}pn\\}\leq\operatorname{exp}(-\tfrac{3}{28}pn).$
###### Proof.
This is an immediate consequence of Bernstein’s inequality:
$\mathbb{P}\\{X\leq\tfrac{1}{2}pn\\}=\mathbb{P}\\{pn-X\geq\tfrac{1}{2}pn\\}\leq\operatorname{exp}\bigg{(}-\frac{\frac{1}{2}(\frac{1}{2}pn)^{2}}{np(1-p)+\frac{1}{3}(\frac{1}{2}pn)}\bigg{)}\leq\operatorname{exp}(-\tfrac{3}{28}pn).$
The other bound follows from a similar proof. ∎
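Lemma 7's constants are loose enough to check directly against the exact binomial tail; for instance (helper name illustrative):

```python
import math

def binom_cdf(n, p, x):
    """P{Binomial(n, p) <= x}, summed exactly from the pmf."""
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(int(math.floor(x)) + 1))
```

With $n=200$ and $p=0.3$, both tails of Lemma 7 hold with room to spare.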
###### Proposition 8 (Matrix Bernstein, see Theorem 6.6.1 in [40]).
Consider independent, random, mean-zero, real symmetric $d\times d$ matrices
$\\{X_{i}\\}_{i\in[n]}$ such that
$\lambda_{\mathrm{max}}(X_{i})\leq L~{}~{}~{}\forall
i\in[n],\qquad\qquad\bigg{\|}\sum_{i\in[n]}\mathbb{E}X_{i}^{2}\bigg{\|}_{2\to
2}\leq v.$
Then for every $t\geq 0$, it holds that
$\mathbb{P}\bigg{\\{}\lambda_{\mathrm{max}}\bigg{(}\sum_{i\in[n]}X_{i}\bigg{)}\geq
t\bigg{\\}}\leq
d\cdot\operatorname{exp}\Big{(}-\tfrac{\frac{1}{2}t^{2}}{v+\frac{1}{3}Lt}\Big{)}\leq
d\cdot\operatorname{exp}(-\tfrac{1}{4}\min\\{\tfrac{t^{2}}{v},\tfrac{3t}{L}\\}).$
###### Lemma 9.
Given independent random vectors $\\{X_{i}\\}_{i\in[n]}$ satisfying
$\mathbb{E}X_{i}=0$ and $\|X_{i}\|\leq r$ almost surely for each $i\in[n]$,
put $v:=\max_{i\in[n]}\mathbb{E}\|X_{i}\|^{2}$. Then for $t\geq 0$, it holds
that
$\mathbb{P}\bigg{\\{}\bigg{\|}\sum_{i\in[n]}X_{i}\bigg{\|}\geq
t\bigg{\\}}\leq(d+1)\operatorname{exp}\Big{(}-\tfrac{\frac{1}{2}t^{2}}{nv+\frac{1}{3}rt}\Big{)}\leq(d+1)\operatorname{exp}(-\tfrac{1}{4}\min\\{\tfrac{t^{2}}{nv},\tfrac{3t}{r}\\}).$
###### Proof.
Apply matrix Bernstein to the random matrices
$\left[\begin{smallmatrix}0&X_{i}^{\top}\\\ X_{i}&0\end{smallmatrix}\right]$.
∎
###### Lemma 10.
Given a tuple $\\{x_{i}\\}_{i\in[n]}$ of points in $\mathbb{R}^{d}$, consider
their centroid and radius:
$c:=\frac{1}{n}\sum_{i\in[n]}x_{i},\qquad r:=\max_{i\in[n]}\|x_{i}-c\|.$
Let $\\{b_{i}\\}_{i\in[n]}$ denote independent Bernoulli random variables with
success probability $p$, and consider the random variable
$n^{\prime}:=\sum_{i\in[n]}b_{i}$ and the random vector
$c^{\prime}:=\left\\{\begin{array}[]{cl}\frac{1}{n^{\prime}}\sum_{i\in[n]}b_{i}x_{i}&\text{if
}n^{\prime}>0\\\ \text{undefined}&\text{if }n^{\prime}=0.\end{array}\right.$
Provided $pn\geq 16\log(\frac{d+2}{\epsilon_{10}})$, it holds that
$\|c^{\prime}-c\|^{2}<\frac{16r^{2}}{pn}\log(\frac{d+2}{\epsilon_{10}})$
with probability $\geq 1-\epsilon_{10}$.
###### Proof.
In the event $\\{n^{\prime}>0\\}$, it holds that
$\|c^{\prime}-c\|=\bigg{\|}\frac{1}{n^{\prime}}\sum_{i\in[n]}b_{i}x_{i}-c\bigg{\|}=\frac{1}{n^{\prime}}\bigg{\|}\sum_{i\in[n]}b_{i}(x_{i}-c)\bigg{\|}=\frac{1}{n^{\prime}}\bigg{\|}\sum_{i\in[n]}(b_{i}-p)(x_{i}-c)\bigg{\|},$
where the last step applies the fact that $\sum_{i\in[n]}(x_{i}-c)=0$. Then
$\displaystyle\mathbb{P}\\{\|c^{\prime}-c\|\geq t\\}$
$\displaystyle\leq\mathbb{P}(\\{\|c^{\prime}-c\|\geq
t\\}\cap\\{n^{\prime}>\tfrac{1}{2}pn\\})+\mathbb{P}\\{n^{\prime}\leq\tfrac{1}{2}pn\\}$
$\displaystyle\leq\mathbb{P}\bigg{\\{}\frac{2}{pn}\bigg{\|}\sum_{i\in[n]}(b_{i}-p)(x_{i}-c)\bigg{\|}\geq
t\bigg{\\}}+\mathbb{P}\\{n^{\prime}\leq\tfrac{1}{2}pn\\}.$ (2)
We apply Lemma 9 to the first term above and Lemma 7 to the second term.
Specifically, put $X_{i}:=(b_{i}-p)(x_{i}-c)$. Then $\mathbb{E}X_{i}=0$ and
$\|X_{i}\|\leq r$ almost surely. Furthermore,
$\mathbb{E}\|X_{i}\|^{2}=\|x_{i}-c\|^{2}\cdot\mathbb{E}(b_{i}-p)^{2}\leq
pr^{2},$
and so $v\leq pr^{2}$. With this, we continue to bound (2):
$\displaystyle\mathbb{P}\\{\|c^{\prime}-c\|\geq t\\}$
$\displaystyle\leq(d+1)\operatorname{exp}\Big{(}-\tfrac{1}{4}\min\Big{\\{}\tfrac{(\frac{pnt}{2})^{2}}{npr^{2}},\tfrac{3(\frac{pnt}{2})}{r}\Big{\\}}\Big{)}+\operatorname{exp}(-\tfrac{3}{28}pn)$
$\displaystyle\leq(d+2)\operatorname{exp}(-\tfrac{pn}{4}\min\\{\tfrac{t^{2}}{4r^{2}},\tfrac{3t}{2r},\tfrac{3}{7}\\})=(d+2)\operatorname{exp}(-\tfrac{pnt^{2}}{16r^{2}}),$
(3)
where the last step holds provided $t\leq r$. The result follows by taking
$t^{2}:=\frac{16r^{2}}{pn}\log(\frac{d+2}{\epsilon_{\ref{lem.centroid}}})$,
since then our assumption $pn\geq
16\log(\frac{d+2}{\epsilon_{\ref{lem.centroid}}})$ implies $t\leq r$. ∎
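Lemma 10 can likewise be checked by simulation. The sketch below (NumPy; illustrative parameters) repeatedly draws a $\mathsf{Bernoulli}(p)$ subsample, recomputes its centroid, and records how often the stated deviation bound holds; the success rate should be at least $1-\epsilon$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, p, eps, trials = 500, 3, 0.3, 0.1, 2000

x = rng.standard_normal((n, d))
c = x.mean(axis=0)                          # centroid
r = np.linalg.norm(x - c, axis=1).max()     # radius

# Hypothesis of the lemma: pn >= 16 log((d+2)/eps).
assert p * n >= 16 * np.log((d + 2) / eps)
thresh = (16 * r**2 / (p * n)) * np.log((d + 2) / eps)

successes = 0
for _ in range(trials):
    b = rng.random(n) < p                   # Bernoulli(p) subsample
    if b.sum() == 0:
        continue                            # c' is undefined in this event
    c_prime = x[b].mean(axis=0)
    successes += np.linalg.norm(c_prime - c)**2 < thresh
rate = successes / trials                   # should be at least 1 - eps
```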
Here and throughout, we denote
$n_{\mathrm{min}}:=\min_{S\in\Gamma}|S|,\qquad
n_{\mathrm{max}}:=\max_{S\in\Gamma}|S|.$
###### Lemma 11.
Fix $S,T\in\Gamma$ with $S\neq T$ and suppose $pn_{\mathrm{min}}\geq
16\log(\frac{2(d+2)}{\epsilon_{\ref{lem.lower bound alpha}}})$. Then
$\alpha_{S^{\prime}T^{\prime}}>\alpha_{ST}-(\tfrac{2r}{\Delta}+\tfrac{3}{2})\cdot
8r\cdot\sqrt{\tfrac{\log(2(d+2)/\epsilon_{\ref{lem.lower bound
alpha}})}{pn_{\mathrm{min}}}}$
with probability $\geq 1-\epsilon_{\ref{lem.lower bound alpha}}$.
###### Proof.
Denote $m_{ST}:=\tfrac{c_{S}+c_{T}}{2}$ and
$w_{ST}:=\tfrac{c_{S}-c_{T}}{\|c_{S}-c_{T}\|}$ so that $\alpha_{ST}=\min_{i\in
S}\langle x_{i}-m_{ST},w_{ST}\rangle$, and let $j$ denote any minimizer of
$\langle x_{i}-m_{S^{\prime}T^{\prime}},w_{S^{\prime}T^{\prime}}\rangle$ over
$i\in S^{\prime}$. Then
$\displaystyle\alpha_{ST}-\alpha_{S^{\prime}T^{\prime}}$
$\displaystyle\leq\langle x_{j}-m_{ST},w_{ST}\rangle-\langle
x_{j}-m_{S^{\prime}T^{\prime}},w_{S^{\prime}T^{\prime}}\rangle$
$\displaystyle=\langle
x_{j}-m_{ST},w_{ST}-w_{S^{\prime}T^{\prime}}\rangle+\langle
m_{S^{\prime}T^{\prime}}-m_{ST},w_{S^{\prime}T^{\prime}}\rangle$
$\displaystyle\leq\|x_{j}-m_{ST}\|\|w_{S^{\prime}T^{\prime}}-w_{ST}\|+\|m_{S^{\prime}T^{\prime}}-m_{ST}\|.$
(4)
To continue, we bound each of the terms above. First, the triangle inequality
gives
$\|x_{j}-m_{ST}\|=\|x_{j}-c_{S}+c_{S}-m_{ST}\|\leq\|x_{j}-c_{S}\|+\|c_{S}-m_{ST}\|\leq
r+\tfrac{1}{2}\|c_{S}-c_{T}\|,$
$\|m_{S^{\prime}T^{\prime}}-m_{ST}\|=\tfrac{1}{2}\|(c_{S^{\prime}}+c_{T^{\prime}})-(c_{S}+c_{T})\|\leq\tfrac{1}{2}\big{(}\|c_{S^{\prime}}-c_{S}\|+\|c_{T^{\prime}}-c_{T}\|\big{)}.$
Next, we apply the triangle inequality multiple times to get
$\displaystyle\|w_{S^{\prime}T^{\prime}}-w_{ST}\|$
$\displaystyle=\|\tfrac{c_{S^{\prime}}-c_{T^{\prime}}}{\|c_{S^{\prime}}-c_{T^{\prime}}\|}-\tfrac{c_{S^{\prime}}-c_{T^{\prime}}}{\|c_{S}-c_{T}\|}+\tfrac{c_{S^{\prime}}-c_{T^{\prime}}}{\|c_{S}-c_{T}\|}-\tfrac{c_{S}-c_{T}}{\|c_{S}-c_{T}\|}\|$
$\displaystyle\leq\|\tfrac{c_{S^{\prime}}-c_{T^{\prime}}}{\|c_{S^{\prime}}-c_{T^{\prime}}\|}-\tfrac{c_{S^{\prime}}-c_{T^{\prime}}}{\|c_{S}-c_{T}\|}\|+\|\tfrac{c_{S^{\prime}}-c_{T^{\prime}}}{\|c_{S}-c_{T}\|}-\tfrac{c_{S}-c_{T}}{\|c_{S}-c_{T}\|}\|$
$\displaystyle=\tfrac{|\|c_{S}-c_{T}\|-\|c_{S^{\prime}}-c_{T^{\prime}}\||}{\|c_{S}-c_{T}\|}+\tfrac{\|c_{S^{\prime}}-c_{T^{\prime}}-c_{S}+c_{T}\|}{\|c_{S}-c_{T}\|}\leq
2\cdot\tfrac{\|c_{S^{\prime}}-c_{S}\|+\|c_{T^{\prime}}-c_{T}\|}{\|c_{S}-c_{T}\|}.$
With this, we continue (4):
$\displaystyle\alpha_{ST}-\alpha_{S^{\prime}T^{\prime}}$
$\displaystyle\leq\big{(}r+\tfrac{1}{2}\|c_{S}-c_{T}\|\big{)}\cdot
2\cdot\tfrac{\|c_{S^{\prime}}-c_{S}\|+\|c_{T^{\prime}}-c_{T}\|}{\|c_{S}-c_{T}\|}+\tfrac{1}{2}\big{(}\|c_{S^{\prime}}-c_{S}\|+\|c_{T^{\prime}}-c_{T}\|\big{)}$
$\displaystyle=\big{(}\tfrac{2r}{\|c_{S}-c_{T}\|}+\tfrac{3}{2}\big{)}\big{(}\|c_{S^{\prime}}-c_{S}\|+\|c_{T^{\prime}}-c_{T}\|\big{)}$
$\displaystyle\leq\big{(}\tfrac{2r}{\Delta}+\tfrac{3}{2}\big{)}\big{(}\|c_{S^{\prime}}-c_{S}\|+\|c_{T^{\prime}}-c_{T}\|\big{)}.$
It follows that
$\displaystyle\mathbb{P}\\{\alpha_{ST}-\alpha_{S^{\prime}T^{\prime}}\geq t\\}$
$\displaystyle\leq\mathbb{P}\\{\big{(}\tfrac{2r}{\Delta}+\tfrac{3}{2}\big{)}\big{(}\|c_{S^{\prime}}-c_{S}\|+\|c_{T^{\prime}}-c_{T}\|\big{)}\geq
t\\}$
$\displaystyle\leq\mathbb{P}\\{\big{(}\tfrac{2r}{\Delta}+\tfrac{3}{2}\big{)}\|c_{S^{\prime}}-c_{S}\|\geq\tfrac{t}{2}\\}+\mathbb{P}\\{\big{(}\tfrac{2r}{\Delta}+\tfrac{3}{2}\big{)}\|c_{T^{\prime}}-c_{T}\|\geq\tfrac{t}{2}\\}$
$\displaystyle\leq
2\max_{\begin{subarray}{c}R\in\Gamma\end{subarray}}\mathbb{P}\\{\big{(}\tfrac{2r}{\Delta}+\tfrac{3}{2}\big{)}\|c_{R^{\prime}}-c_{R}\|\geq\tfrac{t}{2}\\}.$
The result then follows from Lemma 10 by taking
$\epsilon_{\ref{lem.centroid}}:=\epsilon_{\ref{lem.lower bound alpha}}/2$. ∎
###### Lemma 12.
Fix $S,T\in\Gamma$ with $S\neq T$ and suppose
$pn_{\mathrm{min}}\geq\frac{104}{3}(\frac{n_{\mathrm{max}}}{n_{\mathrm{min}}})^{2}\log(\frac{6}{\epsilon_{\ref{lem.deviation
in sum of reciprocals}}})$. Then
$|p(\tfrac{1}{|S^{\prime}|}+\tfrac{1}{|T^{\prime}|})-(\tfrac{1}{|S|}+\tfrac{1}{|T|})|<\sqrt{\tfrac{104}{3}\tfrac{\log(6/\epsilon_{\ref{lem.deviation
in sum of reciprocals}})}{pn_{\mathrm{min}}^{3}}}$
with probability $\geq 1-\epsilon_{\ref{lem.deviation in sum of
reciprocals}}$.
###### Proof.
First, the triangle inequality gives
$|p(\tfrac{1}{|S^{\prime}|}+\tfrac{1}{|T^{\prime}|})-(\tfrac{1}{|S|}+\tfrac{1}{|T|})|\leq|\tfrac{p}{|S^{\prime}|}-\tfrac{1}{|S|}|+|\tfrac{p}{|T^{\prime}|}-\tfrac{1}{|T|}|.$
As such, it suffices to bound terms of the form
$|\tfrac{p}{|S^{\prime}|}-\tfrac{1}{|S|}|=\tfrac{|p|S|-|S^{\prime}||}{|S^{\prime}||S|}\leq\tfrac{|p|S|-|S^{\prime}||}{\frac{p}{2}|S|^{2}},$
where the last step holds in the event $\\{|S^{\prime}|\geq\frac{p}{2}|S|\\}$.
For every $t\in[0,\frac{1}{|S|}]$, Bernstein’s inequality and Lemma 7 together
give
$\displaystyle\mathbb{P}\big{\\{}|\tfrac{p}{|S^{\prime}|}-\tfrac{1}{|S|}|\geq\tfrac{t}{2}\big{\\}}$
$\displaystyle\leq\mathbb{P}\big{\\{}\big{|}|S^{\prime}|-p|S|\big{|}\geq\tfrac{p}{4}|S|^{2}t\big{\\}}+\mathbb{P}\big{\\{}|S^{\prime}|\leq\tfrac{p}{2}|S|\big{\\}}$
$\displaystyle\leq
2\operatorname{exp}\Big{(}-\tfrac{\frac{1}{2}(\frac{p}{4}|S|^{2}t)^{2}}{|S|p(1-p)+\frac{1}{3}(\frac{p}{4}|S|^{2}t)}\Big{)}+\operatorname{exp}(-\tfrac{3}{28}p|S|)$
$\displaystyle\leq
2\operatorname{exp}\Big{(}-\tfrac{\frac{1}{2}(\frac{p}{4}|S|^{2}t)^{2}}{\frac{13}{12}p|S|}\Big{)}+\operatorname{exp}(-\tfrac{3}{28}p|S|)$
$\displaystyle=2\operatorname{exp}(-\tfrac{3}{104}p|S|^{3}t^{2})+\operatorname{exp}(-\tfrac{3}{28}p|S|)\leq
3\operatorname{exp}(-\tfrac{3}{104}p|S|^{3}t^{2}).$
Finally, we combine our estimates to obtain
$\displaystyle\mathbb{P}\big{\\{}|p(\tfrac{1}{|S^{\prime}|}+\tfrac{1}{|T^{\prime}|})-(\tfrac{1}{|S|}+\tfrac{1}{|T|})|\geq
t\big{\\}}$
$\displaystyle\leq\mathbb{P}\big{\\{}|\tfrac{p}{|S^{\prime}|}-\tfrac{1}{|S|}|\geq\tfrac{t}{2}\big{\\}}+\mathbb{P}\big{\\{}|\tfrac{p}{|T^{\prime}|}-\tfrac{1}{|T|}|\geq\tfrac{t}{2}\big{\\}}$
$\displaystyle\leq
3\operatorname{exp}(-\tfrac{3}{104}p|S|^{3}t^{2})+3\operatorname{exp}(-\tfrac{3}{104}p|T|^{3}t^{2})$
$\displaystyle\leq
6\operatorname{exp}(-\tfrac{3}{104}pn_{\mathrm{min}}^{3}t^{2}).$
The result follows by taking
$t:=\sqrt{\tfrac{104}{3}\tfrac{\log(6/\epsilon_{\ref{lem.deviation in sum of
reciprocals}})}{pn_{\mathrm{min}}^{3}}}$, which is at most
$\frac{1}{n_{\mathrm{max}}}$ by assumption. ∎
###### Lemma 13.
Fix $R\in\Gamma$. Then for every $t\in[0,p|R|r^{2}]$, it holds that
$\mathbb{P}\Big{\\{}\big{|}\|X_{R^{\prime}}\|_{2\to 2}^{2}-p\|X_{R}\|_{2\to
2}^{2}\big{|}\geq
t\Big{\\}}\leq(3d+3)\cdot\operatorname{exp}(-\tfrac{t^{2}}{48p|R|r^{4}}).$
###### Proof.
Fix $R\in\Gamma$. For each $i\in R$, let $b_{i}\sim\mathsf{Bernoulli}(p)$
indicate whether $i\in R^{\prime}$, and consider the matrices
$X_{R}:=\sum_{i\in R}(x_{i}-c_{R})e_{i}^{\top},\qquad
X_{R^{\prime}}:=\sum_{i\in R}b_{i}(x_{i}-c_{R^{\prime}})e_{i}^{\top},\qquad
W_{R}:=\sum_{i\in R}b_{i}(x_{i}-c_{R})e_{i}^{\top}.$
(Here and throughout, we note that $c_{R^{\prime}}$ and any quantity defined
in terms of $c_{R^{\prime}}$ are undefined in the event that $R^{\prime}$ is
empty.) We will bound the deviation of $\|X_{R^{\prime}}\|_{2\to 2}^{2}$ from
$p\|X_{R}\|_{2\to 2}^{2}$ by applying the triangle inequality through
$\|W_{R}\|_{2\to 2}^{2}$. To facilitate this analysis, put
$e_{R}:=c_{R^{\prime}}-c_{R}$. Then
$X_{R^{\prime}}=W_{R}-e_{R}1_{R^{\prime}}^{\top}$ and
$W_{R}1_{R^{\prime}}=|R^{\prime}|e_{R}$, from which it follows that
$X_{R^{\prime}}X_{R^{\prime}}^{\top}-W_{R}W_{R}^{\top}=-|R^{\prime}|e_{R}e_{R}^{\top}.$
As such, the triangle and reverse triangle inequalities together give
$\displaystyle\big{|}\|X_{R^{\prime}}\|_{2\to 2}^{2}-p\|X_{R}\|_{2\to
2}^{2}\big{|}$ $\displaystyle\leq\big{|}\|X_{R^{\prime}}\|_{2\to
2}^{2}-\|W_{R}\|_{2\to 2}^{2}\big{|}+\big{|}\|W_{R}\|_{2\to
2}^{2}-p\|X_{R}\|_{2\to 2}^{2}\big{|}$
$\displaystyle=\big{|}\|X_{R^{\prime}}X_{R^{\prime}}^{\top}\|_{2\to
2}-\|W_{R}W_{R}^{\top}\|_{2\to 2}\big{|}+\big{|}\|W_{R}W_{R}^{\top}\|_{2\to
2}-\|pX_{R}X_{R}^{\top}\|_{2\to 2}\big{|}$
$\displaystyle\leq\|X_{R^{\prime}}X_{R^{\prime}}^{\top}-W_{R}W_{R}^{\top}\|_{2\to
2}+\|W_{R}W_{R}^{\top}-pX_{R}X_{R}^{\top}\|_{2\to 2}$
$\displaystyle=|R^{\prime}|\|e_{R}\|^{2}+\|W_{R}W_{R}^{\top}-pX_{R}X_{R}^{\top}\|_{2\to
2}.$
We will use Lemma 7 to bound $|R^{\prime}|$, Lemma 10 to bound
$\|e_{R}\|^{2}$, and Proposition 8 to bound
$\|W_{R}W_{R}^{\top}-pX_{R}X_{R}^{\top}\|_{2\to 2}$. For this third bound, we
put $X_{i}:=(b_{i}-p)(x_{i}-c_{R})(x_{i}-c_{R})^{\top}$ and observe that
$\|X_{i}\|_{2\to 2}\leq\|x_{i}-c_{R}\|^{2}\leq r^{2}=:L,$ $\bigg{\|}\sum_{i\in
R}\mathbb{E}X_{i}^{2}\bigg{\|}_{2\to 2}\leq\operatorname{tr}\bigg{(}\sum_{i\in
R}\mathbb{E}X_{i}^{2}\bigg{)}=p(1-p)\sum_{i\in R}\|x_{i}-c_{R}\|^{4}\leq
p|R|r^{4}=:v.$
As such, Lemma 7, the bound (3) (which similarly holds with our current choice
of $r$), and Proposition 8 together give
$\displaystyle\mathbb{P}\Big{\\{}\big{|}\|X_{R^{\prime}}\|_{2\to
2}^{2}-p\|X_{R}\|_{2\to 2}^{2}\big{|}\geq t\Big{\\}}$
$\displaystyle\qquad\leq\mathbb{P}\Big{\\{}|R^{\prime}|\geq\tfrac{3}{2}p|R|\Big{\\}}+\mathbb{P}\Big{\\{}\tfrac{3}{2}p|R|\cdot\|e_{R}\|^{2}\geq\tfrac{t}{2}\Big{\\}}+\mathbb{P}\bigg{\\{}\Big{\|}\sum_{i\in
R}X_{i}\Big{\|}_{2\to 2}\geq\tfrac{t}{2}\bigg{\\}}$
$\displaystyle\qquad\leq\operatorname{exp}(-\tfrac{3}{28}p|R|)+(d+2)\operatorname{exp}\Big{(}-\tfrac{p|R|(\frac{t}{3p|R|})}{16r^{2}}\Big{)}+2d\cdot\operatorname{exp}\Big{(}-\tfrac{1}{4}\min\Big{\\{}\tfrac{(\frac{t}{2})^{2}}{|R|pr^{4}},\tfrac{3(\frac{t}{2})}{r^{2}}\Big{\\}}\Big{)}$
$\displaystyle\qquad\leq(3d+3)\cdot\operatorname{exp}\Big{(}-\min\Big{\\{}\tfrac{3p|R|}{28},\tfrac{t}{48r^{2}},\tfrac{t^{2}}{16p|R|r^{4}},\tfrac{3t}{8r^{2}}\Big{\\}}\Big{)}\leq(3d+3)\cdot\operatorname{exp}(-\tfrac{t^{2}}{48p|R|r^{4}}),$
where the last step holds provided $t\in[0,p|R|r^{2}]$. ∎
###### Lemma 14.
Fix $S,T\in\Gamma$ with $S\neq T$ and suppose $pn_{\mathrm{min}}\geq
48(\frac{n_{\mathrm{max}}}{n_{\mathrm{min}}})^{2}\log(\frac{9k(d+1)}{\epsilon_{\ref{lem.deviation
in beta}}})$. Then
$|\beta_{S^{\prime}T^{\prime}}-\beta_{ST}|\leq\tfrac{1}{2}\bigg{(}\Big{(}16\sqrt{3}+\sqrt{\tfrac{104}{3}}\Big{)}\cdot\tfrac{k}{\sqrt{pn_{\mathrm{min}}}}\cdot\tfrac{n_{\mathrm{max}}}{n_{\mathrm{min}}}\cdot
r^{2}\cdot\log^{1/2}(\tfrac{9k(d+1)}{\epsilon_{\ref{lem.deviation in
beta}}})\bigg{)}^{1/2}$
with probability $\geq 1-\epsilon_{\ref{lem.deviation in beta}}$.
###### Proof.
Put $U:=p(\tfrac{1}{|S^{\prime}|}+\tfrac{1}{|T^{\prime}|})$,
$V:=\frac{1}{p}\sum_{R\in\Gamma}\|X_{R^{\prime}}\|_{2\to 2}^{2}$,
$u:=\frac{1}{|S|}+\frac{1}{|T|}$, and $v:=\sum_{R\in\Gamma}\|X_{R}\|_{2\to
2}^{2}$. Then
$\displaystyle|\beta_{S^{\prime}T^{\prime}}-\beta_{ST}|$
$\displaystyle=|\tfrac{1}{2}(UV)^{1/2}-\tfrac{1}{2}(uv)^{1/2}|$
$\displaystyle\leq\tfrac{1}{2}|UV-
uv|^{1/2}=\tfrac{1}{2}|U(V-v)+v(U-u)|^{1/2}\leq\tfrac{1}{2}(U|V-v|+v|U-u|)^{1/2},$
where the first inequality follows from the fact that $x,y\geq 0$ implies
$|x-y|^{2}=x^{2}-2xy+y^{2}\leq
x^{2}-2\min\\{x^{2},y^{2}\\}+y^{2}=|x^{2}-y^{2}|,$
while the second inequality follows from the triangle inequality. Thus, it
suffices to bound $U$, $|V-v|$, $v$, and $|U-u|$ in a high-probability event.
First, Lemma 7 gives that $|S^{\prime}|>\frac{1}{2}p|S|$ with probability
$\geq 1-\exp(-\tfrac{3}{28}p|S|)$, and similarly,
$|T^{\prime}|>\frac{1}{2}p|T|$ with probability $\geq
1-\exp(-\tfrac{3}{28}p|T|)$. A union bound therefore gives
$U=p(\tfrac{1}{|S^{\prime}|}+\tfrac{1}{|T^{\prime}|})<p(\tfrac{2}{p|S|}+\tfrac{2}{p|T|})\leq\tfrac{4}{n_{\mathrm{min}}}$
with probability $\geq 1-2\exp(-\tfrac{3}{28}pn_{\mathrm{min}})$. Next, we
apply the triangle inequality, union bound, and Lemma 13 to get
$|V-v|=\bigg{|}\frac{1}{p}\sum_{R\in\Gamma}\|X_{R^{\prime}}\|_{2\to
2}^{2}-\sum_{R\in\Gamma}\|X_{R}\|_{2\to
2}^{2}\bigg{|}\leq\frac{1}{p}\sum_{R\in\Gamma}\Big{|}\|X_{R^{\prime}}\|_{2\to
2}^{2}-p\|X_{R}\|_{2\to 2}^{2}\Big{|}<\frac{kt}{p}$
with probability $\geq
1-3k(d+1)\exp(-\frac{t^{2}}{48pn_{\mathrm{max}}r^{4}})$, provided
$t\in[0,pn_{\mathrm{min}}r^{2}]$. For $v$, we pass to the Frobenius norm:
$v=\sum_{R\in\Gamma}\|X_{R}\|_{2\to
2}^{2}\leq\sum_{R\in\Gamma}\|X_{R}\|_{F}^{2}\leq nr^{2}\leq
kn_{\mathrm{max}}r^{2}.$
Finally, we apply Lemma 12 to obtain
$|U-u|=|p(\tfrac{1}{|S^{\prime}|}+\tfrac{1}{|T^{\prime}|})-(\tfrac{1}{|S|}+\tfrac{1}{|T|})|<\sqrt{\tfrac{104}{3}\tfrac{\log(6/\epsilon_{\ref{lem.deviation
in sum of reciprocals}})}{pn_{\mathrm{min}}^{3}}}$
with probability $\geq 1-\epsilon_{\ref{lem.deviation in sum of
reciprocals}}$, provided
$pn_{\mathrm{min}}\geq\frac{104}{3}(\frac{n_{\mathrm{max}}}{n_{\mathrm{min}}})^{2}\log(\frac{6}{\epsilon_{\ref{lem.deviation
in sum of reciprocals}}})$. We will combine these bounds using a union bound.
To do so, we will bound the failure probabilities corresponding to $U$,
$|V-v|$, and $|U-u|$ by $\tfrac{\epsilon_{\ref{lem.deviation in beta}}}{3}$:
$\tfrac{\epsilon_{\ref{lem.deviation in beta}}}{3}\geq
2\exp(-\tfrac{3}{28}pn_{\mathrm{min}}),\qquad\tfrac{\epsilon_{\ref{lem.deviation
in beta}}}{3}\geq
3k(d+1)\exp(-\tfrac{t^{2}}{48pn_{\mathrm{max}}r^{4}}),\qquad\tfrac{\epsilon_{\ref{lem.deviation
in beta}}}{3}\geq\epsilon_{\ref{lem.deviation in sum of reciprocals}}.$ (5)
Rearranging the first bound in (5) gives
$pn_{\mathrm{min}}\geq\tfrac{28}{3}\log(\frac{6}{\epsilon_{\ref{lem.deviation
in beta}}})$, which is implied by our hypothesis. We change the second bound
in (5) to an equality that defines $t$ as
$t:=\sqrt{48pn_{\mathrm{max}}r^{4}\log(\tfrac{9k(d+1)}{\epsilon_{\ref{lem.deviation
in beta}}})}.$
This choice satisfies the requirement that $t\in[0,pn_{\mathrm{min}}r^{2}]$
precisely when
$pn_{\mathrm{min}}\geq
48\cdot\tfrac{n_{\mathrm{max}}}{n_{\mathrm{min}}}\cdot\log(\tfrac{9k(d+1)}{\epsilon_{\ref{lem.deviation
in beta}}}),$
which is implied by our hypothesis. Finally, we change the third bound in (5)
to an equality that defines $\epsilon_{\ref{lem.deviation in sum of
reciprocals}}:=\tfrac{\epsilon_{\ref{lem.deviation in beta}}}{3}$, which
satisfies the requirement
$pn_{\mathrm{min}}\geq\frac{104}{3}(\frac{n_{\mathrm{max}}}{n_{\mathrm{min}}})^{2}\log(\frac{6}{\epsilon_{\ref{lem.deviation
in sum of reciprocals}}})$ by our hypothesis. Putting everything together, we
have
$\displaystyle|\beta_{S^{\prime}T^{\prime}}-\beta_{ST}|$
$\displaystyle\leq\tfrac{1}{2}(U|V-v|+v|U-u|)^{1/2}$
$\displaystyle<\tfrac{1}{2}\bigg{(}\tfrac{4}{n_{\mathrm{min}}}\cdot\tfrac{k}{p}\sqrt{48pn_{\mathrm{max}}r^{4}\log(\tfrac{9k(d+1)}{\epsilon_{\ref{lem.deviation
in
beta}}})}+kn_{\mathrm{max}}r^{2}\cdot\sqrt{\tfrac{104}{3}\tfrac{\log(18/\epsilon_{\ref{lem.deviation
in beta}})}{pn_{\mathrm{min}}^{3}}}\bigg{)}^{1/2}$
$\displaystyle\leq\tfrac{1}{2}\bigg{(}\Big{(}16\sqrt{3}+\sqrt{\tfrac{104}{3}}\Big{)}\cdot\tfrac{k}{\sqrt{pn_{\mathrm{min}}}}\cdot\tfrac{n_{\mathrm{max}}}{n_{\mathrm{min}}}\cdot
r^{2}\cdot\log^{1/2}(\tfrac{9k(d+1)}{\epsilon_{\ref{lem.deviation in
beta}}})\bigg{)}^{1/2},$
where $16\sqrt{3}$ comes from the first term and $\sqrt{\tfrac{104}{3}}$ comes
from the second term. (To be clear, we used the fact that
$n_{\mathrm{max}}\geq n_{\mathrm{min}}$ to bound the first term, and the fact
that $9k(d+1)\geq 18$ to bound the second term.) ∎
###### Lemma 15.
Suppose $pn_{\mathrm{min}}\geq
48(\frac{n_{\mathrm{max}}}{n_{\mathrm{min}}})^{2}\log(\frac{18k^{3}(d+1)}{\epsilon_{\ref{lem.approx
prox condition}}})$ and
$\displaystyle\operatorname{prox}(X,\Gamma)$
$\displaystyle>(\tfrac{2r}{\Delta}+\tfrac{3}{2})\cdot
8r\cdot\sqrt{\tfrac{\log(4k^{2}(d+2)/\epsilon_{\ref{lem.approx prox
condition}})}{pn_{\mathrm{min}}}}$
$\displaystyle\qquad+\tfrac{1}{2}\bigg{(}\Big{(}16\sqrt{3}+\sqrt{\tfrac{104}{3}}\Big{)}\cdot\tfrac{k}{\sqrt{pn_{\mathrm{min}}}}\cdot\tfrac{n_{\mathrm{max}}}{n_{\mathrm{min}}}\cdot
r^{2}\cdot\log^{1/2}(\tfrac{18k^{3}(d+1)}{\epsilon_{\ref{lem.approx prox
condition}}})\bigg{)}^{1/2}.$
Then $\operatorname{prox}(X^{\prime},\Gamma^{\prime})>0$ with probability
$\geq 1-\epsilon_{\ref{lem.approx prox condition}}$.
###### Proof.
We apply Lemmas 11 and 14 with $\epsilon_{\ref{lem.lower bound
alpha}}=\epsilon_{\ref{lem.deviation in
beta}}:=\frac{\epsilon_{\ref{lem.approx prox condition}}}{2k^{2}}$. By taking a
union bound over $S,T\in\Gamma$ with $S\neq T$, the random variable
$\operatorname{prox}(X,\Gamma)-\operatorname{prox}(X^{\prime},\Gamma^{\prime})$
is at most the right-hand side of the displayed inequality with probability
$\geq 1-\epsilon_{\ref{lem.approx prox condition}}$, and the result follows. ∎
###### Lemma 16.
Suppose $r\leq\frac{\Delta}{2}$ and $pn_{\mathrm{min}}\geq
16\max\\{1,(\frac{r}{\Delta/2-r})^{2}\\}\log(\tfrac{k(d+2)}{\epsilon_{\ref{lem.rounding
after sketch and solve}}})$. Then with probability $\geq
1-\epsilon_{\ref{lem.rounding after sketch and solve}}$, it simultaneously
holds that
$\|x_{i}-c_{S^{\prime}}\|<\|x_{i}-c_{T^{\prime}}\|$
for every $S,T\in\Gamma$ with $S\neq T$ and every $i\in S$.
###### Proof.
We will show that a stronger condition holds, namely that
$\max_{\begin{subarray}{c}S,T\in\Gamma\\\ S\neq T\end{subarray}}\max_{i\in
S}\|x_{i}-c_{S^{\prime}}\|<\min_{\begin{subarray}{c}S,T\in\Gamma\\\ S\neq
T\end{subarray}}\min_{i\in S}\|x_{i}-c_{T^{\prime}}\|$ (6)
with probability $\geq 1-\epsilon_{\ref{lem.rounding after sketch and
solve}}$. Denote the random variable
$E:=\max_{S\in\Gamma}\|c_{S^{\prime}}-c_{S}\|$. Then
$\displaystyle\|x_{i}-c_{S^{\prime}}\|$
$\displaystyle\leq\|x_{i}-c_{S}\|+\|c_{S^{\prime}}-c_{S}\|\leq r+E,$
$\displaystyle\|x_{i}-c_{T^{\prime}}\|$
$\displaystyle\geq\|c_{S}-c_{T}\|-\|c_{T^{\prime}}-c_{T}\|-\|x_{i}-c_{S}\|\geq\Delta-
E-r.$
Thus, the desired inequality (6) holds whenever $E<\frac{\Delta}{2}-r$. The
result then follows from Lemma 10 by taking
$\epsilon_{\ref{lem.centroid}}:=\frac{\epsilon_{\ref{lem.rounding after sketch
and solve}}}{k}$ and applying a union bound over $S\in\Gamma$. ∎
###### Proof of Theorem 2.
Lemmas 15 and 16 with $\epsilon_{\ref{lem.approx prox
condition}}=\epsilon_{\ref{lem.rounding after sketch and
solve}}:=\frac{\epsilon}{2}$ together imply that Algorithm 1 exactly recovers
$\Gamma$ from $X$ with probability $1-\epsilon$ provided both of the following
hold:
$\displaystyle pn_{\mathrm{min}}$
$\displaystyle\geq\max\Big{\\{}48(\tfrac{n_{\mathrm{max}}}{n_{\mathrm{min}}})^{2}\log(\tfrac{36k^{3}(d+1)}{\epsilon}),16\max\\{1,(\tfrac{r}{\Delta/2-r})^{2}\\}\log(\tfrac{2k(d+2)}{\epsilon})\Big{\\}},$
$\displaystyle\operatorname{prox}(X,\Gamma)$
$\displaystyle>(\tfrac{2r}{\Delta}+\tfrac{3}{2})\cdot
8r\cdot\sqrt{\tfrac{\log(8k^{2}(d+2)/\epsilon)}{pn_{\mathrm{min}}}}$
$\displaystyle\qquad+\tfrac{1}{2}\bigg{(}\Big{(}16\sqrt{3}+\sqrt{\tfrac{104}{3}}\Big{)}\cdot\tfrac{k}{\sqrt{pn_{\mathrm{min}}}}\cdot\tfrac{n_{\mathrm{max}}}{n_{\mathrm{min}}}\cdot
r^{2}\cdot\log^{1/2}(\tfrac{36k^{3}(d+1)}{\epsilon})\bigg{)}^{1/2}.$
As we now discuss, there exists an explicit shape parameter $C(X,\Gamma)>0$
such that the inequality $\mathbb{E}|W|\geq C(X,\Gamma)\cdot\log(1/\epsilon)$
implies the above conditions. Denote
$\pi_{\mathrm{min}}:=\frac{1}{n}\min_{S\in\Gamma}|S|,\qquad\pi_{\mathrm{max}}:=\frac{1}{n}\max_{S\in\Gamma}|S|,$
and observe that $pn_{\mathrm{min}}=\mathbb{E}|W|\cdot\pi_{\mathrm{min}}$.
This explains the appearance of $\mathbb{E}|W|$ in our desired inequality. To
isolate $\log(1/\epsilon)$, we will use the general observation that
$\alpha,\beta\geq\gamma>1$ implies
$\log(\alpha\beta)\leq\frac{2\log(\alpha)\log(\beta)}{\log(\gamma)}.$
Indeed,
$\frac{\log(\alpha\beta)}{\log(\alpha)\log(\beta)}=\frac{1}{\log(\alpha)}+\frac{1}{\log(\beta)}\leq\frac{2}{\log(\gamma)}$.
We apply this bound several times with $\beta=1/\epsilon$ and $\gamma=2$ so
that the following conditions imply the above conditions:
$\displaystyle\mathbb{E}|W|\cdot\pi_{\mathrm{min}}$
$\displaystyle\geq\tfrac{2}{\log(2)}\cdot\log(\tfrac{1}{\epsilon})\cdot\max\Big{\\{}48(\tfrac{\pi_{\mathrm{max}}}{\pi_{\mathrm{min}}})^{2}\log(36k^{3}(d+1)),$
$\displaystyle\hskip
195.12877pt16\max\\{1,(\tfrac{\Delta}{2r}-1)^{-2}\\}\log(2k(d+2))\Big{\\}},$
$\displaystyle\operatorname{prox}(X,\Gamma)>(\tfrac{2r}{\Delta}+\tfrac{3}{2})\cdot
8r\cdot\sqrt{\tfrac{\log(8k^{2}(d+2))}{\mathbb{E}|W|\cdot\pi_{\mathrm{min}}}\cdot\tfrac{2}{\log(2)}\cdot\log(\tfrac{1}{\epsilon})}$
$\displaystyle\qquad+\tfrac{1}{2}\bigg{(}\Big{(}16\sqrt{3}+\sqrt{\tfrac{104}{3}}\Big{)}\cdot\tfrac{k}{\sqrt{\mathbb{E}|W|\cdot\pi_{\mathrm{min}}}}\cdot\tfrac{\pi_{\mathrm{max}}}{\pi_{\mathrm{min}}}\cdot
r^{2}\cdot\log^{1/2}(36k^{3}(d+1))\cdot(\tfrac{2}{\log(2)}\cdot\log(\tfrac{1}{\epsilon}))^{1/2}\bigg{)}^{1/2}.$
Notice that we may express these conditions in terms of
$c:=\mathbb{E}|W|/\log(\tfrac{1}{\epsilon})$:
$\displaystyle c\cdot\pi_{\mathrm{min}}$
$\displaystyle\geq\tfrac{2}{\log(2)}\cdot\max\Big{\\{}48(\tfrac{\pi_{\mathrm{max}}}{\pi_{\mathrm{min}}})^{2}\log(36k^{3}(d+1)),16\max\\{1,(\tfrac{\Delta}{2r}-1)^{-2}\\}\log(2k(d+2))\Big{\\}},$
$\displaystyle\tfrac{\operatorname{prox}(X,\Gamma)}{r}$
$\displaystyle>(\tfrac{2r}{\Delta}+\tfrac{3}{2})\cdot
8\cdot\sqrt{\tfrac{\log(8k^{2}(d+2))}{c\cdot\pi_{\mathrm{min}}}\cdot\tfrac{2}{\log(2)}}$
$\displaystyle\quad+\tfrac{1}{2}\bigg{(}\Big{(}16\sqrt{3}+\sqrt{\tfrac{104}{3}}\Big{)}\cdot\tfrac{k}{\sqrt{c\cdot\pi_{\mathrm{min}}}}\cdot\tfrac{\pi_{\mathrm{max}}}{\pi_{\mathrm{min}}}\cdot\log^{1/2}(36k^{3}(d+1))\cdot(\tfrac{2}{\log(2)})^{1/2}\bigg{)}^{1/2}.$
The set of $c$ for which the first inequality holds is an interval of the form
$[c_{1},\infty)$, while the set of $c$ for which the second inequality holds
takes the form $(c_{2},\infty)$. Then $C(X,\Gamma):=\max\\{c_{1},c_{2}\\}$ is
an explicit function of $d$, $\frac{\Delta}{r}$,
$\tfrac{\operatorname{prox}(X,\Gamma)}{r}$, $k$, $\pi_{\mathrm{min}}$, and
$\pi_{\mathrm{max}}$, as desired. ∎
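Since Algorithm 1 is not reproduced in this excerpt, the following is only a hedged sketch of the subsample-cluster-round pipeline that Theorem 2 analyzes: keep each point with probability $p$, cluster the subsample (here with a naive farthest-first stand-in solver, which suffices for well-separated planted clusters), and round by assigning every original point to its nearest subsample centroid. All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, per, p, delta = 2, 3, 400, 0.2, 20.0

# Planted partition: k unit-radius Gaussian clusters with separation >> r.
centers = delta * np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
labels = np.repeat(np.arange(k), per)
x = centers[labels] + rng.standard_normal((k * per, d))

# Sketch: keep each point independently with probability p.
keep = rng.random(len(x)) < p
xs = x[keep]

# Solve on the subsample: farthest-first seeding plus one nearest-seed pass
# (a stand-in solver that succeeds when clusters are well separated).
seeds = [xs[0]]
for _ in range(k - 1):
    dists = np.min([np.linalg.norm(xs - s, axis=1) for s in seeds], axis=0)
    seeds.append(xs[np.argmax(dists)])
assign = np.argmin([np.linalg.norm(xs - s, axis=1) for s in seeds], axis=0)
cents = np.stack([xs[assign == j].mean(axis=0) for j in range(k)])

# Round: every original point goes to its nearest subsample centroid.
full = np.argmin([np.linalg.norm(x - c, axis=1) for c in cents], axis=0)

# Exact recovery: the induced partition coincides with the planted one.
recovered = (all(len(set(full[labels == j])) == 1 for j in range(k))
             and len({int(full[labels == j][0]) for j in range(k)}) == k)
```

With real data one would replace the farthest-first stand-in with whatever solver Algorithm 1 prescribes for the subsampled instance.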
## 8 Proof of Theorems 4(b) and 5(b)
###### Lemma 17 (cf. Lemma 10 in [31]).
Given a symmetric matrix $M\in\mathbb{R}^{n\times n}$, it holds that
$\max_{Z\in\mathcal{Z}(n,k)}|\operatorname{tr}(MZ)|\leq\min\Big{\\{}\|M\|_{*},k\|M\|_{2\to
2}\Big{\\}}.$
###### Proof.
Select any $Z\in\mathcal{Z}(n,k)$, and let
$\alpha_{1}\geq\cdots\geq\alpha_{n}$ and $\beta_{1}\geq\cdots\geq\beta_{n}$
denote the singular values of $M$ and $Z$, respectively. Then Von Neumann’s
trace inequality gives that
$|\operatorname{tr}(MZ)|\leq\sum_{i\in[n]}\alpha_{i}\beta_{i}.$
Since $Z$ is stochastic, the Gershgorin circle theorem gives that every
eigenvalue of $Z$ has modulus at most $1$. Since $Z$ is symmetric, it follows
that $\beta_{1}\leq 1$, and so
$|\operatorname{tr}(MZ)|\leq\sum_{i\in[n]}\alpha_{i}\beta_{i}\leq\beta_{1}\sum_{i\in[n]}\alpha_{i}\leq\|M\|_{*}.$
We also have $\sum_{i\in[n]}\beta_{i}=\operatorname{tr}Z=k$, and so
$|\operatorname{tr}(MZ)|\leq\sum_{i\in[n]}\alpha_{i}\beta_{i}\leq\alpha_{1}\sum_{i\in[n]}\beta_{i}=k\|M\|_{2\to
2}.\qed$
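Lemma 17 can be verified directly on a concrete member of $\mathcal{Z}(n,k)$. The definition of $\mathcal{Z}(n,k)$ is not restated in this excerpt; the sketch below (NumPy) only uses the properties invoked in the proof, namely that a cluster matrix $Z=\sum_{S}\frac{1}{|S|}1_{S}1_{S}^{\top}$ is symmetric, stochastic, and has trace $k$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 12, 3

# Cluster matrix Z = sum_S (1/|S|) 1_S 1_S^T for the partition {3, 4, 5}:
# symmetric, stochastic (rows sum to 1), and trace(Z) = k.
Z = np.zeros((n, n))
start = 0
for s in (3, 4, 5):
    Z[start:start + s, start:start + s] = 1.0 / s
    start += s

M = rng.standard_normal((n, n))
M = (M + M.T) / 2                              # random symmetric matrix

eigs = np.linalg.eigvalsh(M)
lhs = abs(np.trace(M @ Z))
nuclear = np.abs(eigs).sum()                   # ||M||_* (M symmetric)
spectral = np.abs(eigs).max()                  # ||M||_{2->2}
ok = lhs <= min(nuclear, k * spectral) + 1e-9  # the claimed inequality
```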
###### Lemma 18.
Given any tuple $X:=\\{x_{i}\\}_{i\in[n]}$ of points in $\mathbb{R}^{d}$ and
any orthogonal projection matrix $P\in\mathbb{R}^{d\times d}$, it holds that
$\min_{Z\in\mathcal{Z}(n,k)}\frac{1}{2}\operatorname{tr}(D_{X}Z)\geq\|PX\|_{F}^{2}-k\|PX\|_{2\to
2}^{2}.$
(Here, we abuse notation by identifying $X$ with a member of
$\mathbb{R}^{d\times n}$.)
###### Proof.
Define $\nu\in\mathbb{R}^{n}$ to have $i$th coordinate $\|x_{i}\|^{2}$. Then
we have
$D_{X}=\nu 1^{\top}-2X^{\top}X+1\nu^{\top},\qquad\|X\|_{F}^{2}=1^{\top}\nu.$
Fix $Z\in\mathcal{Z}(n,k)$. Then $Z^{\top}=Z$ and $Z1=1$, and so
$\frac{1}{2}\operatorname{tr}(D_{X}Z)=\frac{1}{2}\operatorname{tr}((\nu
1^{\top}-2X^{\top}X+1\nu^{\top})Z)=\|X\|_{F}^{2}-\operatorname{tr}(X^{\top}XZ).$
(7)
We apply Lemma 17 to get
$\displaystyle\operatorname{tr}(X^{\top}XZ)$
$\displaystyle=\operatorname{tr}(X^{\top}PXZ)+\operatorname{tr}(X^{\top}(I-P)XZ)$
$\displaystyle\leq k\|X^{\top}PX\|_{2\to
2}+\|X^{\top}(I-P)X\|_{*}=k\|PX\|_{2\to 2}^{2}+\|(I-P)X\|_{F}^{2},$
and combining with (7) gives
$\frac{1}{2}\operatorname{tr}(D_{X}Z)\geq\|X\|_{F}^{2}-\Big{(}k\|PX\|_{2\to
2}^{2}+\|(I-P)X\|_{F}^{2}\Big{)}=\|PX\|_{F}^{2}-k\|PX\|_{2\to 2}^{2}.\qed$
###### Proposition 19 (Matrix Chernoff inequality, see Theorem 5.1.1 and
(5.1.8) in [40]).
Consider a finite sequence $\\{X_{k}\\}$ of independent, random, Hermitian
matrices with common dimension $d$, and assume that
$0\leq\lambda_{\mathrm{min}}(X_{k})\leq\lambda_{\mathrm{max}}(X_{k})\leq L$
almost surely for each $k$. Then
$\mathbb{E}\lambda_{\mathrm{max}}\bigg{(}\sum_{k}X_{k}\bigg{)}\leq
1.72\cdot\lambda_{\mathrm{max}}\bigg{(}\sum_{k}\mathbb{E}X_{k}\bigg{)}+L\log
d.$
###### Proposition 20 (Dvoretzky–Kiefer–Wolfowitz inequality, Theorem 11.5 in
[23]).
Consider a sequence $\\{X_{k}\\}_{k\in[n]}$ of real-valued independent random
variables with common cumulative distribution function
$F\colon\mathbb{R}\to[0,1]$, and let $F_{n}\colon\mathbb{R}\to[0,1]$ denote
the random empirical distribution function defined by
$F_{n}(x):=\frac{1}{n}\sum_{k\in[n]}1_{\\{X_{k}\leq x\\}}$. Then for every
$\epsilon>0$, it holds that
$\mathbb{P}\Big{\\{}\sup_{x\in\mathbb{R}}|F_{n}(x)-F(x)|>\epsilon\Big{\\}}\leq
2e^{-2n\epsilon^{2}}.$
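The Dvoretzky–Kiefer–Wolfowitz bound is essentially sharp for continuous distributions, so a simulation should produce an empirical failure rate of the same order as $2e^{-2n\epsilon^{2}}$. The sketch below (NumPy; illustrative parameters) uses $\mathsf{Unif}(0,1)$ samples, for which $F(x)=x$ and the supremum is attained at the sample points.

```python
import numpy as np

rng = np.random.default_rng(4)
n, eps, trials = 1000, 0.05, 500
bound = 2 * np.exp(-2 * n * eps**2)    # DKW failure probability, ~0.013

fails = 0
for _ in range(trials):
    u = np.sort(rng.random(n))         # Unif(0,1): F(x) = x on [0, 1]
    grid = np.arange(1, n + 1) / n
    # sup_x |F_n(x) - F(x)|, evaluated at the sorted sample points
    ks = max(np.max(grid - u), np.max(u - (grid - 1 / n)))
    fails += ks > eps
rate = fails / trials                  # of the same order as `bound`
```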
###### Proposition 21 (Lemma 1 in [25]).
Suppose $X$ has chi-squared distribution with $d$ degrees of freedom. Then for
each $t>0$, it holds that
$\mathbb{P}\\{X\geq d+2\sqrt{dt}+2t\\}\leq e^{-t},\qquad\mathbb{P}\\{X\leq
d-2\sqrt{dt}\\}\leq e^{-t}.$
It is sometimes more convenient to work with a simpler (though weaker) version
of the upper chi-squared tail estimate:
###### Corollary 22.
Suppose $X$ has chi-squared distribution with $d$ degrees of freedom. Then
$\mathbb{P}\\{X\geq x\\}\leq e^{-x/5}$ for every $x\geq 5d$.
###### Proof.
Put $t:=x/5\geq d$, and so $d+2\sqrt{dt}+2t\leq t+2t+2t=5t$, and Proposition
21 gives
$\mathbb{P}\\{X\geq x\\}=\mathbb{P}\\{X\geq 5t\\}\leq\mathbb{P}\\{X\geq
d+2\sqrt{dt}+2t\\}\leq e^{-t}=e^{-x/5}.\qed$
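Corollary 22 is also easy to check by Monte Carlo. The sketch below (NumPy; illustrative parameters) compares the empirical chi-squared tail against $e^{-x/5}$ at several thresholds $x\geq 5d$.

```python
import numpy as np

rng = np.random.default_rng(5)
d, trials = 4, 200_000
samples = rng.chisquare(d, size=trials)

# Compare empirical tails with e^{-x/5} at thresholds x >= 5d.
checks = [(x, float(np.mean(samples >= x)), float(np.exp(-x / 5)))
          for x in (5 * d, 6 * d, 8 * d)]
```

The empirical tails fall one to three orders of magnitude below the bound, consistent with the slack introduced in the proof.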
###### Corollary 23.
Suppose $\\{X_{k}\\}_{k\in[n]}$ are independent random variables with chi-
squared distribution with $d$ degrees of freedom. Then
$\max_{k\in[n]}X_{k}\leq 2d+6\log n$ with probability $\geq 1-\frac{1}{n}$.
###### Proof.
By the arithmetic mean–geometric mean inequality and Proposition 21, we have
$\mathbb{P}\\{X_{k}\geq 2d+3t\\}\leq\mathbb{P}\\{X_{k}\geq
d+2\sqrt{dt}+2t\\}\leq e^{-t}.$
Take $t:=2\log n$ and apply a union bound to get the result. ∎
###### Proposition 24 (Corollary 5.35 in [42]).
Let $G$ be an $m\times n$ matrix with independent standard Gaussian entries.
Then for every $t\geq 0$, it holds that $\|G\|_{2\to
2}\leq\sqrt{m}+\sqrt{n}+t$ with probability $\geq 1-2e^{-t^{2}/2}$.
Throughout, we let $\mathbb{E}_{G}$ denote expectation conditioned on $G$, and
we assume $n\geq s\geq d\geq k$ without mention. (These inequalities are
typically implied by the hypotheses of our results, but even so, it is helpful
to keep these bounds in mind when interpreting the analysis.)
###### Lemma 25.
Let $G:=\\{g_{i}\\}_{i\in[n]}$ denote a tuple of independent random vectors
$g_{i}\sim\mathsf{N}(0,I_{d})$, let $S:=\\{i_{j}\\}_{j\in[s]}$ denote a tuple
of independent random indices $i_{j}\sim\mathsf{Unif}([n])$, and let
$H=H(G,S)\in\mathbb{R}^{d\times s}$ denote the random matrix whose $j$th
column is $g_{i_{j}}$. Assuming $s\geq 15d\log d$, it holds that
$\mathbb{E}_{G}\|H\|_{2\to 2}^{2}\leq 5s$ with probability $\geq
1-\frac{1}{n}-e^{-\Omega(n/(d+\log n)^{2})}$.
###### Proof.
Fix a threshold $\tau>0$ to be selected later, and for any vector
$x\in\mathbb{R}^{d}$, denote
$x^{-}:=\left\\{\begin{array}[]{cl}x&\text{if }\|x\|^{2}\leq\tau\\\
0&\text{otherwise}\end{array}\right\\},\qquad
x^{+}:=\left\\{\begin{array}[]{cl}x&\text{if }\|x\|^{2}>\tau\\\
0&\text{otherwise}\end{array}\right\\}.$
Letting $h_{j}$ denote the $j$th column of $H$, we are interested in the
quantity
$\displaystyle\|H\|_{2\to 2}^{2}$
$\displaystyle=\bigg{\|}\sum_{j\in[s]}h_{j}h_{j}^{\top}\bigg{\|}_{2\to
2}\leq\bigg{\|}\sum_{j\in[s]}h_{j}^{-}{h_{j}^{-}}^{\top}\bigg{\|}_{2\to
2}+\bigg{\|}\sum_{j\in[s]}h_{j}^{+}{h_{j}^{+}}^{\top}\bigg{\|}_{2\to 2}.$
We will bound the expectation of the first term using the Matrix Chernoff
inequality (Proposition 19) and the expectation of the second term using the
Dvoretzky–Kiefer–Wolfowitz inequality (Proposition 20). First,
$\mathbb{E}_{G}[h_{j}^{-}{h_{j}^{-}}^{\top}]=\frac{1}{n}\sum_{i\in[n]}g_{i}^{-}{g_{i}^{-}}^{\top}$
for each $j\in[s]$, and so
$\lambda_{\mathrm{max}}\bigg{(}\sum_{j\in[s]}\mathbb{E}_{G}\Big{[}h_{j}^{-}{h_{j}^{-}}^{\top}\Big{]}\bigg{)}=\lambda_{\mathrm{max}}\bigg{(}\frac{s}{n}\sum_{i\in[n]}g_{i}^{-}{g_{i}^{-}}^{\top}\bigg{)}\leq\lambda_{\mathrm{max}}\bigg{(}\frac{s}{n}\sum_{i\in[n]}g_{i}g_{i}^{\top}\bigg{)}=\frac{s}{n}\|G\|_{2\to
2}^{2},$
where the inequality uses the fact that
$\frac{s}{n}\sum_{i\in[n]}g_{i}^{+}{g_{i}^{+}}^{\top}\succeq 0$. Thus, Matrix
Chernoff gives
$\mathbb{E}_{G}\bigg{\|}\sum_{j\in[s]}h_{j}^{-}{h_{j}^{-}}^{\top}\bigg{\|}_{2\to
2}\leq 1.72\cdot\frac{s}{n}\|G\|_{2\to 2}^{2}+\tau\log d.$
Next, we bound the expectation of
$\bigg{\|}\sum_{j\in[s]}h_{j}^{+}{h_{j}^{+}}^{\top}\bigg{\|}_{2\to
2}\leq\sum_{j\in[s]}\|h_{j}^{+}{h_{j}^{+}}^{\top}\|_{2\to
2}=\sum_{j\in[s]}\|h_{j}^{+}\|^{2}.$
Let $F_{G}^{+}$ denote the empirical distribution function of
$\\{\|g_{i}^{+}\|^{2}\\}_{i\in[n]}$. Then
$\frac{1}{s}\mathbb{E}_{G}\bigg{\|}\sum_{j\in[s]}h_{j}^{+}{h_{j}^{+}}^{\top}\bigg{\|}_{2\to
2}\leq\mathbb{E}_{G}\|h_{1}^{+}\|^{2}=\int_{0}^{\infty}(1-F_{G}^{+}(x))dx.$
(8)
Let $F_{G}$ denote the empirical distribution function of
$\\{\|g_{i}\|^{2}\\}_{i\in[n]}$, put $\|G\|_{1\to
2}^{2}:=\max_{i\in[n]}\|g_{i}\|^{2}$, and observe that
$F_{G}^{+}(x)=\left\\{\begin{array}[]{cl}F_{G}(\tau)&\text{if }x\leq\tau\\\
F_{G}(x)&\text{if }x>\tau\end{array}\right\\}\quad\text{and}\quad
F_{G}(x)=1\quad\forall x>\|G\|_{1\to 2}^{2}.$
In particular, we may assume $\|G\|_{1\to 2}^{2}\geq\tau$ without loss of
generality, since otherwise the integral in (8) equals zero. We will estimate
this integral in pieces:
$\int_{0}^{\infty}=\int_{0}^{\tau}+\int_{\tau}^{\|G\|_{1\to
2}^{2}}+\int_{\|G\|_{1\to 2}^{2}}^{\infty}.$
We have $\int_{\|G\|_{1\to 2}^{2}}^{\infty}=0$, and we will estimate the other
two integrals in terms of the quantity
$E(G):=\sup_{x\in\mathbb{R}}|F_{G}(x)-F(x)|,$
where $F$ denotes the cumulative distribution function of the chi-squared
distribution with $d$ degrees of freedom. (Later, we will apply the
Dvoretzky–Kiefer–Wolfowitz inequality to bound $E(G)$ with high probability on
$G$.) First, we have
$\int_{0}^{\tau}(1-F_{G}^{+}(x))dx=\tau(1-F_{G}(\tau))\leq\tau(1-F(\tau)+E(G))\leq\tau(e^{-\tau/5}+E(G)),$
where the last step applies Corollary 22 under the assumption that $\tau\geq
5d$. Next, for each $x\in(\tau,\|G\|_{1\to 2}^{2})$, we have
$1-F_{G}^{+}(x)=1-F_{G}(x)\leq 1-F(x)+E(G),$
and so integrating gives
$\int_{\tau}^{\|G\|_{1\to
2}^{2}}(1-F_{G}^{+}(x))dx\leq\int_{\tau}^{\infty}(1-F(x))dx+\|G\|_{1\to
2}^{2}\cdot E(G).$
We estimate the first term using Corollary 22:
$\int_{\tau}^{\infty}(1-F(x))dx\leq\int_{\tau}^{\infty}e^{-x/5}dx=5e^{-\tau/5}.$
All together, we have
$\mathbb{E}_{G}\|H\|_{2\to 2}^{2}\leq 1.72\cdot\frac{s}{n}\|G\|_{2\to
2}^{2}+\tau\log d+s\Big{(}\tau(e^{-\tau/5}+E(G))+5e^{-\tau/5}+\|G\|_{1\to
2}^{2}\cdot E(G)\Big{)}.$
It remains to bound this random variable in a high-probability event.
Proposition 24 gives $\|G\|_{2\to 2}^{2}=O(n)$ with high probability when
$n\geq d$. This means the first term in our bound will be $O(s)$, and so we
select $\tau:=s/\log d$ so that the second term has the same order. (Note that
our assumption $\tau\geq 5d$ then requires $s\geq 5d\log d$.) The remaining
term is $s$ times
$\tau(e^{-\tau/5}+E(G))+5e^{-\tau/5}+\|G\|_{1\to 2}^{2}\cdot
E(G)\leq(\tau+5)e^{-\tau/5}+2\|G\|_{1\to 2}^{2}\cdot E(G),$ (9)
where the inequality follows from the bound $\tau\leq\|G\|_{1\to 2}^{2}$. We
want (9) to be $O(1)$. The first term in (9) is smaller than $1$ provided
$\tau\geq 15$. For the second term in (9), we recall from Corollary 23 that
$\|G\|_{1\to 2}^{2}=O(d+\log n)$ with high probability. This suggests that we
restrict to an event in which $E(G)=O(1/(d+\log n))$, which we can estimate
using the Dvoretzky–Kiefer–Wolfowitz inequality. For this bound, the failure
probability will be $2e^{-2n\epsilon^{2}}=\exp(-\Omega(n/(d+\log n)^{2}))$,
and this informs how sharply we can bound $\|G\|_{2\to 2}^{2}$. Overall, we
restrict to an event in which three things occur simultaneously:
$\|G\|_{2\to 2}^{2}\leq 1.1n,\qquad\|G\|_{1\to 2}^{2}\leq 2d+6\log n,\qquad
E(G)\leq\frac{1}{4d+12\log n}.$
A union bound gives that this event has probability $\geq
1-\frac{1}{n}-e^{-\Omega(n/(d+\log n)^{2})}$, and over this event, it holds
that $\mathbb{E}_{G}\|H\|_{2\to 2}^{2}\leq 1.72\cdot 1.1s+s+s(1+1)\leq 5s$. ∎
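As a numeric sanity check of the constants assembled at the end of this proof, the sketch below uses hypothetical values of $d$, $n$, and $s$ (our choice, satisfying $s\geq 15d\log d$, with $\tau:=s/\log d$), and takes the event bounds on $\|G\|_{1\to 2}^{2}$ and $E(G)$ from the display above; it confirms the final estimate stays below $5s$:

```python
import math

# Hypothetical sample values (our choice, not from the paper).
d, n = 50, 10 ** 6
s = 20 * d * math.ceil(math.log(d))  # satisfies s >= 15 d log d
tau = s / math.log(d)                # the choice tau := s / log d

# First term of the bound labelled (9): (tau + 5) e^{-tau/5} < 1 once tau >= 15.
term1 = (tau + 5) * math.exp(-tau / 5)

# Second term of (9) under the event ||G||_{1->2}^2 <= 2d + 6 log n and
# E(G) <= 1/(4d + 12 log n): it equals exactly 1.
term2 = 2 * (2 * d + 6 * math.log(n)) / (4 * d + 12 * math.log(n))

# Assembled bound: 1.72 * 1.1 * s + s + s * (term1 + term2) <= 5s.
total = 1.72 * 1.1 * s + s + s * (term1 + term2)
```

With these numbers $\tau\approx 10^{3}$, so the exponential term is negligible and the total is roughly $3.9s$.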
###### Lemma 26.
Draw $X:=\\{x_{i}\\}_{i\in[n]}$ in $\mathbb{R}^{d}$ from a mixture of $k$
gaussians with equal weights and identity covariance. Explicitly, take any
$\mu_{1},\ldots,\mu_{k}\in\mathbb{R}^{d}$, draw $\\{t_{i}\\}_{i\in[n]}$
independently with distribution $\mathsf{Unif}([k])$, draw
$\\{g_{i}\\}_{i\in[n]}$ independently with distribution $\mathsf{N}(0,I_{d})$,
and take $x_{i}:=\mu_{t_{i}}+g_{i}$. Next, draw $\\{i_{j}\\}_{j\in[s]}$
independently with distribution $\mathsf{Unif}([n])$ and define the random
tuple $Y:=\\{y_{j}\\}_{j\in[s]}$ by $y_{j}:=x_{i_{j}}$. Then provided $s\geq
15d\log d$, it holds that
$d-6k-1\leq\mathbb{E}_{X}\operatorname{SDP}(Y,k)\leq\operatorname{IP}(X,k)\leq
d+1$
with probability $\geq 1-\frac{1}{n}-e^{-\Omega(n/(d+\log n)^{2})}$.
###### Proof.
Lemma 3 gives the middle inequality in our claim. Next, we obtain an upper
bound on $\operatorname{IP}(X,k)$ by passing to the partition
$\Gamma\in\Pi(n,k)$ defined by the level sets of the planted assignment
$i\mapsto t_{i}$:
$\operatorname{IP}(X,k)\leq\frac{1}{n}\sum_{S\in\Gamma}\sum_{i\in
S}\bigg{\|}x_{i}-\frac{1}{|S|}\sum_{j\in
S}x_{j}\bigg{\|}^{2}\leq\frac{1}{n}\sum_{S\in\Gamma}\sum_{i\in
S}\|x_{i}-\mu_{t_{i}}\|^{2}=\frac{1}{n}\sum_{i\in[n]}\|g_{i}\|^{2},$
where the second inequality follows from the fact that the centroid of a tuple
of points minimizes the sum of squared distances from those points. Next,
$\sum_{i\in[n]}\|g_{i}\|^{2}$ has chi-squared distribution with $dn$ degrees
of freedom, and so an application of Proposition 21 gives that
$\operatorname{IP}(X,k)\leq d+1$ with probability $\geq 1-e^{-n/(16d)}$. It
remains to find a lower bound on $\mathbb{E}_{X}\operatorname{SDP}(Y,k)$. For
this, we will apply Lemma 18 with $P$ representing the orthogonal projection
map onto $(\operatorname{span}\\{\mu_{t}\\}_{t\in[k]})^{\perp}$. Letting
$H:=\\{h_{j}\\}_{j\in[s]}$ denote the random tuple defined by
$h_{j}:=g_{i_{j}}$, which satisfies $Ph_{j}=Pg_{i_{j}}=Px_{i_{j}}=Py_{j}$, we
have
$\operatorname{SDP}(Y,k)\geq\frac{1}{s}\Big{(}\|PY\|_{F}^{2}-k\|PY\|_{2\to
2}^{2}\Big{)}\geq\frac{1}{s}\Big{(}\|PH\|_{F}^{2}-k\|H\|_{2\to 2}^{2}\Big{)}.$
Letting $G\in\mathbb{R}^{d\times n}$ denote the matrix whose $i$th column is
$g_{i}$, we take expectations of both sides to get
$\mathbb{E}_{X}\operatorname{SDP}(Y,k)\geq\frac{1}{s}\mathbb{E}_{G}\|PH\|_{F}^{2}-\frac{k}{s}\mathbb{E}_{G}\|H\|_{2\to
2}^{2}.$ (10)
The first term can be rewritten as
$\frac{1}{s}\mathbb{E}_{G}\|PH\|_{F}^{2}=\frac{1}{s}\mathbb{E}_{G}\sum_{j\in[s]}\|Ph_{j}\|^{2}=\frac{1}{s}\sum_{j\in[s]}\mathbb{E}_{G}\|Ph_{j}\|^{2}=\frac{1}{n}\sum_{i\in[n]}\|Pg_{i}\|^{2}=\frac{1}{n}\|PG\|_{F}^{2}.$
Furthermore, $\|PG\|_{F}^{2}$ has chi-squared distribution with
$(d-k^{\prime})n$ degrees of freedom, where $k^{\prime}$ denotes the dimension
of $\operatorname{span}\\{\mu_{t}\\}_{t\in[k]}$. Thus, Proposition 21 gives
$\frac{1}{n}\|PG\|_{F}^{2}\geq d-k^{\prime}-1$ with probability $\geq
1-e^{-n/(4d)}$. The second term in (10) is bounded by Lemma 25, and the result
follows from a union bound. ∎
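The upper bound in this proof hinges on the fact that the centroid of a tuple of points minimizes the sum of squared distances from those points; a minimal numeric illustration, with arbitrary points of our choosing:

```python
# The centroid minimizes the sum of squared distances to a point set.
pts = [(0.0, 1.0), (2.0, 3.0), (4.0, -1.0), (1.0, 0.0)]
centroid = tuple(sum(c) / len(pts) for c in zip(*pts))

def sum_sq(center):
    # Sum of squared Euclidean distances from each point to `center`.
    return sum((x - center[0]) ** 2 + (y - center[1]) ** 2 for x, y in pts)

# Any competing center (e.g. the first data point, or a shifted centroid)
# does at least as badly as the centroid itself.
best = sum_sq(centroid)
```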
###### Proof of Theorem 4(b).
Recall that Lemma 26 gives
$d-6k-1\leq\mathbb{E}_{X}\operatorname{SDP}(Y,k)\leq\operatorname{IP}(X,k)\leq
d+1$
with probability $\geq 1-\frac{1}{n}-e^{-\Omega(n/(d+\log n)^{2})}$. We will
use Hoeffding’s inequality to show that
$B_{H}\geq\mathbb{E}_{X}\operatorname{SDP}(Y,k)-1$
with probability $\geq 1-\epsilon$, from which the result follows by a union
bound. Our use of Hoeffding’s inequality requires a bound on $b$. To this end,
Lemma 6 gives that
$b\leq 4\operatorname{inf}(f)^{2}\leq 4f(\\{\mu_{t}\\}_{t\in[k]})^{2},$
which in turn is at most $4$ times the maximum of $n$ independent chi-squared
random variables with $d$ degrees of freedom. Corollary 23 bounds this
quantity by $4(2d+6\log n)$, and this bound holds in the event considered in
Lemma 26. Combining the bounds $b\leq 4(2d+6\log n)$ and $\ell\geq 128(d+3\log
n)^{2}\log(1/\epsilon)$ then gives
$t:=(\tfrac{b^{2}}{2\ell}\log(\tfrac{1}{\epsilon}))^{1/2}\leq\frac{1}{2}$, and
so $t\leq 1-t$. Hoeffding’s inequality then gives
$\displaystyle\mathbb{P}\\{B_{H}<\mathbb{E}_{X}\operatorname{SDP}(Y,k)-1\\}$
$\displaystyle=\mathbb{P}\bigg{\\{}\frac{1}{\ell}\sum_{i\in[\ell]}\operatorname{SDP}(Y_{i},k)-t<\mathbb{E}_{X}\operatorname{SDP}(Y,k)-1\bigg{\\}}$
$\displaystyle\leq\exp(-\tfrac{2\ell(1-t)^{2}}{b^{2}})\leq\exp(-\tfrac{2\ell
t^{2}}{b^{2}})=\epsilon,$
where the last step applies the definition of $t$. ∎
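The arithmetic in the Hoeffding step can be checked numerically. The sketch below uses hypothetical values of $d$, $n$, and $\epsilon$ (our choice) and takes the bounds on $b$ and $\ell$ with equality; at these extreme values $t$ comes out as exactly $1/2$, and the failure probability $\exp(-2\ell t^{2}/b^{2})$ recovers $\epsilon$:

```python
import math

# Hypothetical sample values (illustration only, not from the paper).
d, n, eps = 30, 10 ** 5, 0.01

# Bounds from the proof: b <= 4(2d + 6 log n) = 8(d + 3 log n), and
# ell >= 128 (d + 3 log n)^2 log(1/eps); take both with equality.
b = 8 * (d + 3 * math.log(n))
ell = 128 * (d + 3 * math.log(n)) ** 2 * math.log(1 / eps)

# t := sqrt(b^2/(2 ell) * log(1/eps)); here b^2/(2 ell) = 1/(4 log(1/eps)),
# so t = 1/2 and hence t <= 1 - t as required.
t = math.sqrt(b ** 2 / (2 * ell) * math.log(1 / eps))

# The Hoeffding failure probability exp(-2 ell t^2 / b^2) equals eps.
failure = math.exp(-2 * ell * t ** 2 / b ** 2)
```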
###### Proof of Theorem 5(b).
We have from the proof of the upper bound in Lemma 26 that
$\operatorname{IP}(X,k)\leq d+1$ with probability $\geq 1-e^{-n/(16d)}$. Thus,
it suffices to show that $B_{M}\geq d-3k-2$ with probability $\geq
1-e^{-s/(8d)}-2e^{-s/54}$. To this end, we first observe that
$d-3k-2\leq(1-\tfrac{1}{d})(d-3k-1)\qquad\text{and}\qquad\epsilon^{1/\ell}\geq
1-\tfrac{1}{d}.$ (11)
Indeed, the first inequality can be seen by expanding, while the second
inequality follows from the assumption $\ell\geq d\log(1/\epsilon)$. In
particular, $d\geq 1$ implies $\frac{1}{d}\leq\log(\frac{1}{1-1/d})$, and so
$\ell\geq
d\log(1/\epsilon)\geq\frac{\log(1/\epsilon)}{\log(\frac{1}{1-1/d})},$
and rearranging gives the desired inequality. We apply (11) with a union bound
to get
$\displaystyle\mathbb{P}\Big{\\{}B_{M}<d-3k-2\Big{\\}}$
$\displaystyle\leq\mathbb{P}\Big{\\{}\epsilon^{1/\ell}\min_{i\in[\ell]}\operatorname{SDP}(Y_{i},k)<(1-\tfrac{1}{d})(d-3k-1)\Big{\\}}$
$\displaystyle\leq\mathbb{P}\Big{\\{}\min_{i\in[\ell]}\operatorname{SDP}(Y_{i},k)<d-3k-1\Big{\\}}$
$\displaystyle\leq\ell\cdot\mathbb{P}\Big{\\{}\operatorname{SDP}(Y,k)<d-3k-1\Big{\\}},$
where $Y$ is a random matrix with the same distribution as each $Y_{i}$. In
particular, the columns of $Y$ are drawn uniformly without replacement from
$X$. Let $S:=\\{i_{j}\\}_{j\in[s]}$ denote the random indices such that the
$j$th column of $Y$ is $y_{j}=x_{i_{j}}$. By the law of total probability, it
suffices to bound the conditional probability
$\mathbb{P}_{S}\Big{\\{}\operatorname{SDP}(Y,k)<d-3k-1\Big{\\}}$
uniformly over all possible realizations of $S$. Recall that
$x_{i}=\mu_{t_{i}}+g_{i}$, where $t_{i}\sim\mathsf{Unif}([k])$ and
$g_{i}\sim\mathsf{N}(0,I_{d})$. Let $H\in\mathbb{R}^{d\times s}$ denote the
random matrix whose $j$th column is $h_{j}:=g_{i_{j}}$. Similar to the proof
of the lower bound in Lemma 26, we take $P$ to be the orthogonal projection
onto the $(d-k^{\prime})$-dimensional subspace
$(\operatorname{span}\\{\mu_{t}\\}_{t\in[k]})^{\perp}$ and then apply Lemma 18
to get
$\operatorname{SDP}(Y,k)\geq\frac{1}{s}\Big{(}\|PH\|_{F}^{2}-k\|H\|_{2\to
2}^{2}\Big{)}.$
This implies
$\displaystyle\mathbb{P}_{S}\Big{\\{}\operatorname{SDP}(Y,k)<d-3k-1\Big{\\}}$
$\displaystyle\leq\mathbb{P}_{S}\Big{\\{}\tfrac{1}{s}\big{(}\|PH\|_{F}^{2}-k\|H\|_{2\to
2}^{2}\big{)}<d-3k-1\Big{\\}}$
$\displaystyle\leq\mathbb{P}_{S}\Big{\\{}\tfrac{1}{s}\|PH\|_{F}^{2}<d-k-1\Big{\\}}+\mathbb{P}_{S}\Big{\\{}\tfrac{k}{s}\|H\|_{2\to
2}^{2}>2k\Big{\\}}.$ (12)
Conditioned on $S$, the entries of $H$ are independent with standard gaussian
distribution. As such, $\|PH\|_{F}^{2}$ has chi-squared distribution with
$(d-k^{\prime})s$ degrees of freedom, and so we apply the second part of
Proposition 21 with $t=s/(4(d-k^{\prime}))$ to bound the first term in (12) by
$e^{-s/(4d)}$. Next, we apply Proposition 24 with
$t=(\sqrt{2}-\sqrt{d/s}-1)\sqrt{s}$ to bound the second term in (12) by
$2e^{-t^{2}/2}$, which in turn is at most $2e^{-s/27}$ since $s\geq
54d\log\ell\geq 54d$ by assumption. Overall, we have
$\displaystyle\mathbb{P}\Big{\\{}B_{M}<d-3k-2\Big{\\}}$
$\displaystyle\leq\ell\cdot\mathbb{P}\Big{\\{}\operatorname{SDP}(Y,k)<d-3k-1\Big{\\}}$
$\displaystyle=\ell\cdot\mathbb{E}\Big{[}\mathbb{P}_{S}\Big{\\{}\operatorname{SDP}(Y,k)<d-3k-1\Big{\\}}\Big{]}$
$\displaystyle\leq e^{s/(54d)}\cdot(e^{-s/(4d)}+2e^{-s/27})\leq
e^{-s/(8d)}+2e^{-s/54},$
as desired. ∎
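The exponent bookkeeping in the final chain of inequalities can also be verified numerically; the sketch below uses hypothetical values of $d$ and $\ell$ (our choice), with $s$ chosen to satisfy the assumption $s\geq 54d\log\ell$:

```python
import math

# Hypothetical values (our choice): d >= 1, ell, and s >= 54 d log(ell).
d, ell = 10, 100
s = 2 * 54 * d * math.log(ell)  # satisfies s >= 54 d log(ell)

# From s >= 54 d log(ell) it follows that ell <= exp(s/(54 d)).
bound_ell = math.exp(s / (54 * d))

# ell * (e^{-s/(4d)} + 2 e^{-s/27}) <= e^{s/(54d)} * (e^{-s/(4d)} + 2 e^{-s/27})
#                                   <= e^{-s/(8d)} + 2 e^{-s/54}.
lhs = ell * (math.exp(-s / (4 * d)) + 2 * math.exp(-s / 27))
rhs = math.exp(-s / (8 * d)) + 2 * math.exp(-s / 54)
```

The termwise comparison works because $1/(54d)-1/(4d)\leq -1/(8d)$ and, for $d\geq 1$, $1/(54d)-1/27\leq -1/54$.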
## Acknowledgments
Part of this research was conducted while SV was a Research Fellow at the
Simons Institute for the Theory of Computing, University of California, Berkeley. DGM was
partially supported by NSF DMS 1829955, AFOSR FA9550-18-1-0107, and an AFOSR
Young Investigator Research Program award. SV is partially supported by ONR
N00014-22-1-2126, NSF CISE 2212457, an AI2AI Amazon research award, and the
NSF–Simons Research Collaboration on the Mathematical and Scientific
Foundations of Deep Learning (MoDL) (NSF DMS 2031985).
# Precision Studies of QCD in the Low Energy Domain of the EIC
V.D. Burkert Thomas Jefferson National Accelerator Facility, Newport News,
Virginia 23606, USA L. Elouadrhiri Thomas Jefferson National Accelerator
Facility, Newport News, Virginia 23606, USA A. Afanasev The George
Washington University, Department of Physics, Washington, DC 20052 J.
Arrington Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA M.
Contalbrigo INFN Ferrara, 44122 Ferrara, Italy W. Cosyn Florida
International University, Department of Physics, Miami, FL 33199, USA Ghent
University, Department of Physics and Astronomy, B9000 Ghent, Belgium A.
Deshpande Stony Brook University, Stony Brook, NY 11794, USA D.I. Glazier
SUPA, School of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ
UK. X. Ji University of Maryland, College Park, MD 20742 Center for Nuclear
Femtography, 1201 New York Avenue, Washington, DC 20005 S. Liuti University
of Virginia, Physics Department, 382 McCormick Rd, Charlottesville, VA 22904
Y. Oh Department of Physics, Kyungpook National University, Daegu 41566,
Korea Asia Pacific Center for Theoretical Physics, Pohang, Gyeongbuk 37673,
Korea D. Richards Thomas Jefferson National Accelerator Facility, Newport
News, Virginia 23606, USA T. Satogata Thomas Jefferson National Accelerator
Facility, Newport News, Virginia 23606, USA A. Vossen Duke University,
Durham, NC 27708, USA Thomas Jefferson National Accelerator Facility, Newport
News, Virginia 23606, USA H. Abdolmaleki School of Particles and
Accelerators, Institute for Research in Fundamental Sciences (IPM),
P.O.Box 19395-5531, Tehran, Iran A. Albataineh Yarmouk University, Irbid,
Jordan 21163. C.A. Aidala Department of Physics, University of Michigan, Ann
Arbor, Michigan 48109, USA C. Alexandrou University of Cyprus, Department of
Physics, CY-1678 Nicosia; The Cyprus Institute, CY-1645 Nicosia H. Avagyan
Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606,
USA A. Bacchetta Dipartimento di Fisica, Università di Pavia, Pavia, Italy
M. Baker Thomas Jefferson National Accelerator Facility, Newport News,
Virginia 23606, USA F. Benmokhtar Duquesne University, Pittsburgh, PA 15282,
USA J.C. Bernauer Stony Brook University, Stony Brook, NY 11794, USA RIKEN
BNL Research Center, Brookhaven National Laboratory, Upton, New York
11973-5000, USA C. Bissolotti Dipartimento di Fisica, Università di Pavia,
Pavia, Italy W. Briscoe The George Washington University, Department of
Physics, Washington, DC 20052 D. Byers Duke University, Durham, NC 27708, USA
Xu Cao Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou
730000, China C.E. Carlson Department of Physics and Astronomy, College of
William and Mary, Williamsburg, Virginia, USA K. Cichy Adam Mickiewicz
University, ul. Uniwersytetu Poznanskiego 2, 61-614 Poznan, Poland. I.C.
Cloet Argonne National Laboratory, Lemont, Illinois 60439, USA C. Cocuzza
Temple University, Philadelphia, Pennsylvania 19122, USA P.L. Cole
Department of Physics Lamar University, TX, USA M. Constantinou Temple
University, Philadelphia, Pennsylvania 19122, USA A. Courtoy Instituto de
Física, Universidad Nacional Autónoma de México, Apartado Postal 20-364,
01000 Ciudad de México, Mexico H. Dahiyah Department of Physics, Dr. B.R.
Ambedkar National Institute of Technology, Jalandhar 144027 India K. Dehmelt
Stony Brook University, Stony Brook, NY 11794, USA S. Diehl University of
Connecticut, Department of Physics 196A Auditorium Road, Storrs, CT 06269-3046
Justus Liebig University, Physikalisches Institut, Giessen, Germany C. Dilks
Duke University, Durham, NC 27708, USA C. Djalali Ohio University,
Department of Physics and Astronomy, Athens OH 45701, USA R. Dupré
Laboratoire de Physique Joliot-Curie, CNRS-IN2P3, Université Paris-Saclay
S.C. Dusa Thomas Jefferson National Accelerator Facility, Newport News,
Virginia 23606, USA B. El-Bennich Universidade Cidade de São Paulo, Rua
Galvão Bueno 868, São Paulo, 01506-000, SP, Brazil. L. El Fassi Mississippi
State University, Mississippi State, MS 39762-5167, USA T. Frederico
Instituto Tecnológico de Aeronáutica 12.228-900, São José dos Campos, Brazil
A. Freese University of Washington, Department of Physics B464, Seattle, USA
B.R. Gamage Thomas Jefferson National Accelerator Facility, Newport News,
Virginia 23606, USA L. Gamberg Science Division, Penn State University
Berks, Reading, Pennsylvania 19610, USA R.R. Ghoshal Thomas Jefferson
National Accelerator Facility, Newport News, Virginia 23606, USA F.X. Girod
Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606,
USA V.P. Goncalves Institut für Theoretische Physik, Westfälische Wilhelms-
Universität Münster, Wilhelm-Klemm-Straße 9, D-48149 Münster, Germany
Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000,
China Institute of Physics and Mathematics, Federal University of Pelotas,
Postal Code 354, 96010-900, Pelotas, RS, Brazil Y. Gotra Thomas Jefferson
National Accelerator Facility, Newport News, Virginia 23606, USA F.K. Guo
CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics,
Chinese Academy of Sciences, Beijing 100190, China School of Physical
Sciences, University of Chinese Academy of Sciences, Beijing 100049, China X.
Guo University of Maryland, College Park, MD 20742 M. Hattawy Old Dominion
University, Department of Physics, 4600 Elkhorn Ave. Norfolk, VA 23529 Y.
Hatta Department of Physics, Brookhaven National Laboratory, Upton, NY 11973,
USA T. Hayward University of Connecticut, Department of Physics 196A
Auditorium Road, Storrs, CT 06269-3046 O. Hen Massachusetts Institute of
Technology, Cambridge, Massachusetts 02139, USA G. M. Huber University of
Regina, Regina, SK S4S 0A2 Canada C. Hyde Old Dominion University,
Department of Physics, 4600 Elkhorn Ave. Norfolk, VA 23529 E.L. Isupov
Michigan State University, East Lansing, 48824, Michigan, USA B. Jacak
Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA W. Jacobs
CEEM, Indiana University, Bloomington, IN 47408, USA A. Jentsch Department
of Physics, Brookhaven National Laboratory, Upton, NY 11973, USA C.R Ji
North Carolina State University, Raleigh, North Carolina. S. Joosten Argonne
National Laboratory, Lemont, Illinois 60439, USA N. Kalantarians Virginia
Union University, Department of Natural Sciences Z. Kang Department of
Physics and Astronomy, University of California, Los Angeles, USA Mani L.
Bhaumik Institute for Theoretical Physics, University of California, Los
Angeles, USA Center for Frontiers in Nuclear Science, Stony Brook University,
Stony Brook, USA A. Kim University of Connecticut, Department of Physics
196A Auditorium Road, Storrs, CT 06269-3046 Thomas Jefferson National
Accelerator Facility, Newport News, Virginia 23606, USA S. Klein Lawrence
Berkeley National Laboratory, Berkeley, CA 94720, USA B. Kriesten Center for
Nuclear Femtography, 1201 New York Avenue, Washington, DC 20005 S. Kumano
The High Energy Accelerator Research Organization, KEK, Japan A. Kumar Dr
Rammanohar Lohia Avadh University, Ayodhya-224001, U.P., INDIA K. Kumericki
University of Zagreb, Faculty of Science, Department of Physics, 10000 Zagreb,
Croatia M. Kuchera Department of Physics Department of Mathematics and
Computer Science Davidson College Box 7133 Davidson, NC 28035 W.K. Lai
Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum
Matter, South China Normal University, Guangzhou 510006, China Guangdong-Hong
Kong Joint Laboratory of Quantum Matter, Southern Nuclear Science Computing
Center, South China Normal University. Department of Physics and Astronomy,
University of California, Los Angeles, USA Jin Li Department of Physics,
Nanjing Normal University, Nanjing 210023, China. Shujie Li Lawrence
Berkeley National Laboratory, Berkeley, CA 94720, USA W. Li The College of
William and Mary, Williamsburg, Virginia 23185, USA X. Li Los Alamos
National Laboratory, Los Alamos, NM, USA H.-W. Lin Michigan State
University, East Lansing, 48824, Michigan, USA K.F. Liu University of
Kentucky, College of Arts & Science, Physics & Astronomy Xiaohui Liu Center
of Advanced Quantum Studies, Department of Physics, Beijing Normal University,
Beijing 100875, China Center for High Energy Physics, Peking University,
Beijing 100871, China P. Markowitz Florida International University,
Department of Physics, Miami, FL 33199, USA V. Mathieu Departament de Física
Quàntica i Astrofísica and Institut de Ciències del Cosmos, Universitat de
Barcelona, E08028, Spain Departamento de Física Teórica, Universidad
Complutense de Madrid and IPARCOS, E-28040 Madrid, Spain M. McEneaney Duke
University, Durham, NC 27708, USA A. Mekki Department of Physics, Faculty of
Science, University of Khartoum, P.O. Box 321, Khartoum 11115, Sudan J.P. B.
C. de Melo Laboratory of Theoretical and Computational Physics-LFTC, Cruzeiro
do Sul University / São Paulo City University, 015060-000, São Paulo, SP,
Brazil Z.E. Meziani Argonne National Laboratory, Lemont, Illinois 60439, USA
R. Milner Massachusetts Institute of Technology, Cambridge, Massachusetts
02139, USA H. Mkrtchyan A.I. Alikhanyan National Science Laboratory, Yerevan
0036, Armenia. V. Mochalov NRC Kurchatov Institute – IHEP, Protvino, 142281,
Russia National Research Nuclear University MEPhI, Moscow, 115409, Russia V.
Mokeev Thomas Jefferson National Accelerator Facility, Newport News, Virginia
23606, USA V. Morozov Oak Ridge National Laboratory, Oak Ridge, Tennessee
37831 USA H. Moutarde IRFU CEA-Saclay, 91191 Gif sur Yvette, France C.
Munos Université Paris-Sud 11, Orsay, France M. Murray Department of
Physics and Astronomy, Malott Hall, 1251 Wescoe Hall Dr., Lawrence, KS 66045 S.
Mtingwa U.S. Nuclear Regulatory Commission, Triangle Science, 138 W.
Hatterleigh Avenue Hillsborough, NC 27278 P. Nadel-Turonski Center for
Frontiers in Nuclear Science, Stony Brook University, Stony Brook, USA V.A.
Okorokov National Research Nuclear University MEPhI, Moscow, 115409, Russia
E. Onyie Thomas Jefferson National Accelerator Facility, Newport News,
Virginia 23606, USA L.L. Pappalardo INFN Ferrara, 44122 Ferrara, Italy
Università degli Studi di Ferrara, I-44122 Ferrara, Italy Z. Papandreou
Department of Physics, University of Regina, 3737 Wascana Parkway Regina,
CANADA C. Pecar Duke University, Durham, NC 27708, USA A. Pilloni INFN
Sezione di Catania, I-95123 Catania, Italy Dipartimento di Scienze
Matematiche e Informatiche, Scienze Fisiche e Scienze della Terra,
Università degli Studi di Messina, I-98122 Messina, Italy B. Pire CPHT,
CNRS, Ecole polytechnique, I.P. Paris, 91128 Palaiseau, France N. Polys
Virginia Tech University, Blacksburg, VA 24061 A. Prokudin Science Division,
Penn State University Berks, Reading, Pennsylvania 19610, USA Thomas
Jefferson National Accelerator Facility, Newport News, Virginia 23606, USA M.
Przybycien AGH University of Science and Technology, FPACS, Cracow 30-059,
Poland J-W. Qiu Thomas Jefferson National Accelerator Facility, Newport
News, Virginia 23606, USA M. Radici INFN, Sezione di Pavia, Pavia, Italy R.
Reed Physics Department, Lehigh University 16 Memorial Drive East Office 406,
Bethlehem, PA 18015 F. Ringer Thomas Jefferson National Accelerator
Facility, Newport News, Virginia 23606, USA Old Dominion University,
Department of Physics, 4600 Elkhorn Ave. Norfolk, VA 23529 B.J. Roy Nuclear
Physics Division, Bhabha Atomic Research Centre, Mumbai 400 085, India. N.
Sato Thomas Jefferson National Accelerator Facility, Newport News, Virginia
23606, USA A. Schäfer Institut für Theoretische Physik, Universität
Regensburg, D-93040 Regensburg, Germany B. Schmookler University of
California, Riverside, Department of Physics and Astronomy G. Schnell DESY,
Hamburg, Germany P. Schweitzer University of Connecticut, Department of
Physics 196A Auditorium Road, Storrs, CT 06269-3046 R. Seidl RIKEN Nishina
Center for Accelerator-Based Science, Wako, Saitama 351-0198, Japan RIKEN BNL
Research Center, Brookhaven National Laboratory, Upton, New York 11973-5000,
USA K.M. Semenov-Tian-Shansky Department of Physics, Kyungpook National
University, Daegu 41566, Korea National Research Centre Kurchatov Institute:
Petersburg Nuclear Physics Institute, 188300 Gatchina, Russia Higher School
of Economics, National Research University, 194100 St. Petersburg, Russia F.
Serna Departamento de Física, Universidad de Sucre, Carrera 28 No. 5-267,
Barrio Puerta Roja, Sincelejo, Colombia Laboratório de Física Teórica e
Computacional, Universidade Cidade de São Paulo, Rua Galvão Bueno 868,
01506-000 São Paulo, SP, Brazil F. Shaban Faculty of Science, Cairo
University, Cairo, Egypt M.H. Shabestari University of West Florida,
Pensacola, FL 23514, USA K. Shiells Center for Nuclear Femtography, 1201 New
York Avenue, Washington, DC 20005 A. Signori Physics Department, University
of Turin, via P. Giuria 1, I-10125 Turin, Italy INFN, Turin section, via P.
Giuria 1, I-10125 Turin, Italy H. Spiesberger Institut für Physik, Institut
für Kernphysik, Johannes-Gutenberg-Universität, D-55099 Mainz, Germany I.
Strakovsky The George Washington University, Department of Physics,
Washington, DC 20052 R.S. Sufian Department of Physics and Astronomy,
College of William and Mary, Williamsburg, Virginia, USA Thomas Jefferson
National Accelerator Facility, Newport News, Virginia 23606, USA A.
Szczepaniak Indiana University, Bloomington, Indiana Thomas Jefferson
National Accelerator Facility, Newport News, Virginia 23606, USA L.
Teodorescu Brunel University London Uxbridge, Middlesex UB8 3PH, UK J. Terry
Department of Physics and Astronomy, University of California, Los Angeles,
USA Mani L. Bhaumik Institute for Theoretical Physics, University of
California, Los Angeles, USA O. Teryaev Joint Institute for Nuclear
Research, Bogoliubov Laboratory of Theoretical Physics. F. Tessarotto INFN,
Sezione di Trieste, Trieste, Italy C. Timmer Thomas Jefferson National
Accelerator Facility, Newport News, Virginia 23606, USA Abdel Nasser Tawfik
Street 90, Fifth Settlement, 11835 New Cairo, Egypt L. Valenzuela Cazares
Iowa State University, Iowa City, IA, USA. A. Vladimirov Institut für
Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany
Dpto. de Física Teórica & IPARCOS, Universidad Complutense de Madrid, E-28040
Madrid, Spain E. Voutier Laboratoire de Physique Joliot-Curie, CNRS-IN2P3,
Université Paris-Saclay D. Watts University of York, School of Physics,
Engineering, and Technology, Heslington YO10 5DD, UK D. Wilson DAMTP,
University of Cambridge, UK D. Winney Guangdong Provincial Key Laboratory of
Nuclear Science, Institute of Quantum Matter, South China Normal University,
Guangzhou 510006, China Guangdong-Hong Kong Joint Laboratory of Quantum
Matter, Southern Nuclear Science Computing Center, South China Normal
University, Guangzhou 510006, China B. Xiao School of Science and
Engineering, The Chinese University of Hong Kong, Shenzhen 518172, China Z.
Ye University of Illinois at Chicago, Chicago, Illinois 60607 Zh. Ye
Physics Department, Tsinghua University, 30 Shuangqing Road, Haidian District,
Beijing 100084 F. Yuan Lawrence Berkeley National Laboratory, Berkeley, CA
94720, USA N. Zachariou University of York, School of Physics, Engineering,
and Technology, Heslington YO10 5DD, UK I. Zahed Stony Brook University,
Stony Brook, NY 11794, USA J.L. Zhang Department of Physics, Nanjing Normal
University, Nanjing 210023, China. Y. Zhang Thomas Jefferson National
Accelerator Facility, Newport News, Virginia 23606, USA J. Zhou Key
Laboratory of Particle Physics and Particle Irradiation (MOE), Institute of
Frontier and Interdisciplinary Science, Shandong University (QingDao), 266237,
China
Editorial Board
A. Afanasev | George Washington University
---|---
J. Arrington | Lawrence Berkeley National Laboratory
V. Burkert∗ | Jefferson Laboratory
M. Contalbrigo | INFN Ferrara
W. Cosyn | Florida International University
A. Deshpande | Stony Brook University
L. Elouadrhiri∗ | Jefferson Laboratory & Center for Nuclear Femtography
D. Glazier | Glasgow University
X. Ji | University of Maryland
S. Liuti | University of Virginia
Y. Oh | Kyungpook National University
D. Richards | Jefferson Laboratory
T. Satogata | Jefferson Laboratory
A. Vossen | Duke University
∗ Principal Investigator
∗∗ Corresponding Author<EMAIL_ADDRESS>
FOREWORD
The Electron-Ion Collider (EIC) is a powerful new facility to be built at
the U.S. Department of Energy’s Brookhaven National Laboratory in partnership
with the Thomas Jefferson National Accelerator Facility. Its main focus is to
explore the most fundamental building blocks of the visible matter in the
universe and reveal the properties of the strong force of nature.
The initiative to develop this white paper followed DOE’s approval of “mission
need” (known as CD-0) in December 2019. Since then the EIC has achieved
Critical Decision 1 (CD-1) approval on July 6, 2021. This milestone marks the
start of the project execution phase for a next-generation nuclear physics
facility, making the present initiative timely.
The EIC is designed to have two interaction regions that are suitable for the
installation of large-scale detector systems for high priority nuclear physics
experiments. The goal of this initiative was to take a fresh look at the
changing landscape of the science underlying the need of a complementary
approach towards the overall optimization and the execution of the EIC science
program, and include, where appropriate, recent scientific advancements and
challenges that go beyond the original motivation for the EIC. Several of the
highly rated science programs proposed for the EIC were selected, as well as
recent developments that have opened up new directions in nuclear science. It
also included discussions on the machine requirements and performance of
detection systems for the successful and efficient execution of the EIC
science program.
The organizing team held a preparatory coordination meeting on December 15–16,
2020 IR2 (9794), bringing in experts in the field to discuss the science of the
EIC second interaction region, its instrumentation, and explore ways of its
implementation in order to maximize the scientific impact of the EIC. The goal
of this meeting was also to define the scientific program and the agenda for
subsequent workshops.
The first workshop took place remotely on March 17-19, 2021, and was co-hosted
by Argonne National Laboratory and the CFNS. Over 400 members of the
international nuclear science community registered as participants IR2 (0677).
This first workshop highlighted the science that will benefit the most from a
second EIC interaction region, including the science of deep inelastic
exclusive and semi-inclusive processes, the physics with jets, heavy flavor
production, spectroscopy of exotic hadrons, and processes with light and heavy
ions. This workshop was very timely as Brookhaven National Laboratory and
Jefferson Laboratory had just announced the “Call for Collaboration Proposals
for Detectors to be located at the EIC” in two interaction regions. Detector 2
could complement the project detector 1 and may focus on optimizing particular
science topics or addressing topics beyond the requirements defined in
previous published EIC documents. It also refers to possible optimization of
the second interaction region towards such aims.
The second workshop PSQ (1669) Precision Studies of QCD at EIC, co-hosted by
Asia Pacific Center for Theoretical Physics (APCTP) and the CFNS, took place
on July 19-23, 2021. This workshop examined the science requiring high
luminosity at low to medium center-of-mass energies (20 to 60
GeV). The goal of this workshop was to motivate the study of high impact
science in the context of the overall machine design, EIC operation, and
detector performance, focusing on science highlights, detector concepts, and
science documentation. As a result of this workshop technical working groups
were formed to develop this white paper. It identifies part of the science
program in the precision studies of QCD that require or greatly benefit from
the high luminosity and low to medium center-of-mass energies, and it
documents the scientific underpinnings in support of such a program. The
objective of this document is to help define the path towards the realization
of the second interaction region.
###### Contents
1. I GPDs - 3D Imaging and mechanical properties of the nucleon
1. I.1 Introduction & background
2. I.2 Generalized Parton Distributions and Nucleon Tomography
1. I.2.1 Deeply Virtual Exclusive Processes, GPDs and Compton Form Factors
2. I.2.2 Analysis methods
3. I.3 D-term form factor, and mechanical properties of the nucleon - beyond tomography
4. I.4 Backward hard exclusive reactions and probing TDAs with high luminosity EIC
5. I.5 Outlook - Beyond the EIC initial complement
2. II Mass and spin of the nucleon
1. II.1 Nucleon mass
1. II.1.1 Masses in dynamical energy sources
2. II.1.2 Mass radius and “confining” scalar density
2. II.2 Nucleon Spin Structure
1. II.2.1 Longitudinal-Spin Sum Rules
2. II.2.2 Transverse-Spin Sum Rules
3. II.3 D-term and strong forces in the interior of the nucleon
1. II.3.1 Stress tensor
2. II.3.2 Mechanical stability - connection to neutron stars
3. II.3.3 Charge and mechanical radius of proton and of neutron
4. II.3.4 D-term and long range forces
3. III Accessing the Momentum Dependent Structure of the nucleon in Semi-Inclusive Deep Inelastic Scattering
1. III.1 Overview
2. III.2 Accessing Quark-Gluon Correlations at sub-leading Twist
3. III.3 Measurements of TMDs
1. III.3.1 Impact on the understanding of TMD factorization and applicability to fixed target data
2. III.3.2 Impact on TMD PDF extraction
3. III.3.3 The impact study on the unpolarized TMDs
4. III.3.4 The impact study on the Sivers functions
5. III.3.5 TMDs in nuclei
4. III.4 Jet Hadronization Studies
4. IV Exotic meson spectroscopy
1. IV.1 Motivations for an exotic spectroscopy program at the EIC
1. IV.1.1 Photoproduction with the EIC
2. IV.1.2 States of interest
2. IV.2 Estimates for the EIC
1. IV.2.1 JPAC Photoproduction Amplitudes
2. IV.2.2 Electroproduction
3. IV.2.3 Other Models
4. IV.2.4 Estimates
5. IV.2.5 Detection of final states
3. IV.3 Outlook
5. V Science highlights of light and heavy nuclei
1. V.1 Introduction
2. V.2 Inclusive measurements
3. V.3 Semi-inclusive and tagged spectator measurements
4. V.4 Exclusive measurements
5. V.5 Charm-flavored hypernuclei
6. VI Precision studies of Lattice QCD in the EIC era
1. VI.1 Three-dimensional Imaging of the Nucleon
1. VI.1.1 Parton distribution functions
2. VI.1.2 Generalized parton distributions
3. VI.1.3 Transverse momentum dependent distributions
4. VI.1.4 Gluon and flavor-singlet structure
2. VI.2 LQCD and Spectroscopy
3. VI.3 Outlook
7. VII Science of far forward particle detection
1. VII.1 Far-forward detection overview
2. VII.2 Detection of recoil baryons and light ions
3. VII.3 Spectator detection
4. VII.4 Tagging of active nucleons - high spectator momenta
5. VII.5 Vetoing of breakup
6. VII.6 Backward (u-channel) photoproduction
7. VII.7 Rare isotopes (including photons for spectroscopy)
8. VIII Radiative effects and corrections
1. VIII.1 Introduction
2. VIII.2 Monte Carlo generators for radiative events
3. VIII.3 Opportunities to reduce model dependences
9. IX Artificial Intelligence applications
1. IX.1 Accelerate Simulations with AI
2. IX.2 Nuclear Femtography and AI
3. IX.3 Inverse problems of quarks and gluons with AI
10. X The EIC interaction regions for a high impact science program with discovery potential
1. X.1 Introduction
2. X.2 Primary IR design parameters
3. X.3 Second IR design and downstream tagging acceptance
4. X.4 Technical design of an optimized low energy and high luminosity interaction region
1. X.4.1 Design constraints
2. X.4.2 Effect of horizontal crabbing in secondary focus
5. X.5 Operations with Two IRs
11. XI Acknowledgments
Executive Summary
The fundamental building blocks of ordinary matter in the universe, the proton
and the neutron, together known as nucleons, were discovered during the early
part of the twentieth century Rutherford (1911); Chadwick (1932). For over half
a century we have known that these nucleons are further composed of quarks and
gluons. We also know that the global properties of nucleons and nuclei, such as
their mass, spin, and interactions, are consequences of the underlying
physics of quarks and gluons, governed by the theory of the strong interaction,
Quantum Chromodynamics (QCD), whose fiftieth anniversary we celebrate in 2022.
Yet we still do not understand how the properties of nucleons emerge from the
fundamental interaction. This has resulted in the development of a new science
of emergent phenomena in the nuclear medium and the 3D nuclear structure:
nuclear femtography. A significant part of the science program currently at
the Jefferson Laboratory 12 GeV CEBAF facility is aimed at this new science in
the range where valence quarks dominate the internal structure and dynamics;
the US Electron Ion Collider (EIC) in its low-to-medium center-of-mass energy
range is well suited for studying the region of $x_{B}$ from 0.01 to 0.1, where
nontrivial flavor and quark-antiquark differences are expected from Chiral
Symmetry Breaking.
These capabilities will open the door to the exploration of the three-
dimensional distributions in coordinate space and in momentum space of the
quarks and gluons over an unprecedented kinematic range that connects to the
range currently explored at lower energies in fixed-target scattering
experiments. The combined result will be an unparalleled exploration of how
the mass, the spin, and the mechanical properties of nuclear physics emerge
from the fundamental interactions of the partons, and how these properties are
distributed in the confined space inside nucleons and nuclei.
The EIC in its full range of 20 to 140 GeV center-of-mass energy and featuring
high luminosity operation will be a powerful facility for the exploration of
the most intricate secrets of the strong interaction, and the potential
discovery of phenomena not observed before. Much of the compelling science
program has been described in previous documents Accardi _et al._ (2016); Pro
(2020); Abdul Khalek _et al._ (2021).
Figure 1: The EIC concept at Brookhaven National Laboratory EIC (2021). The
electron and the ion beams are clearly identified. There are several beam
intersection points; two of them, at the 6 o’clock (IP6) and 8 o’clock (IP8)
locations, are suitable for the installation and operation of large-scale
detector systems. Interaction point IP8 may be most suitable for high-
luminosity optimization at low to intermediate center-of-mass energies, as
well as for the installation of a secondary focus for forward processes
requiring high momentum resolution. The electron beam energy ranges from 2.5
GeV to 18 GeV, while the ion beam allows selected proton energies between 41
GeV and 275 GeV, covering a collision center-of-mass energy from 20 GeV to 140
GeV. The ion beam circulates counterclockwise, while the electrons in the new
electron ring circulate clockwise. Both beams will be highly polarized, with
both electron and proton polarizations greater than 70%. The EIC will benefit
from two existing large detector halls at IP6 and IP8, both fully equipped
with infrastructure.

Figure 2: Estimated luminosity versus center-of-mass energy for the operation
of one (thick lines) or two (thin lines) interaction regions. The blue lines
show the baseline performance. The green lines show high-luminosity operation
with improved beam optics and cooling. The strong drop in luminosity from a CM
energy of 44.7 GeV to 28.6 GeV is caused by increased beam-beam interactions
as the proton beam energy is reduced from 100 GeV to 41 GeV while keeping the
electron energy at 5 GeV. This problem is still being studied by machine
experts. One option might be to keep the proton energy at 100 GeV, thus
avoiding an increase in beam-beam interactions, and lower the electron beam
energy from 5 GeV to 2.5 GeV, resulting in a CM energy of 31.6 GeV.
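The center-of-mass energies quoted in these captions follow from the beam energies via the usual collider relation $\sqrt{s}\approx 2\sqrt{E_{e}E_{p}}$ (neglecting beam masses and crossing angle). A minimal Python sketch, included purely as an arithmetic check of the numbers above, reproduces the quoted settings:

```python
import math

def cm_energy(e_electron_gev, e_proton_gev):
    """Collider center-of-mass energy sqrt(s) ~ 2*sqrt(Ee*Ep),
    neglecting beam masses and the crossing angle."""
    return 2.0 * math.sqrt(e_electron_gev * e_proton_gev)

# Corner points of the EIC range quoted in the text:
print(round(cm_energy(2.5, 41), 1))    # lowest setting, ~20 GeV
print(round(cm_energy(18, 275), 1))    # highest setting, ~140 GeV

# Settings discussed for the luminosity drop in Fig. 2:
print(round(cm_energy(5, 100), 1))     # ~44.7 GeV
print(round(cm_energy(5, 41), 1))      # ~28.6 GeV
print(round(cm_energy(2.5, 100), 1))   # ~31.6 GeV
```

The same formula explains why lowering the proton energy at fixed electron energy (44.7 GeV to 28.6 GeV) can instead be traded for a lower electron energy at fixed proton energy (31.6 GeV).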
The EIC project scope includes the development of an interaction region (IR)
and day-one detector at IP6, and the baseline of an interaction region design
for a second detector at IP8. A second EIC detector located at IP8 would
include a second focus approximately 50 m downstream of the
collision point at a location with a large dispersion. Such an innovative
design would enable a high-impact and highly complementary physics program to
the day-one detector. The second focus thus makes it possible to move tracking
detectors very close to the beam at a location where scattered particles
separate from the beam envelope, thereby providing exceptional near-beam
detection. This in turn creates unique possibilities for detecting all
fragments from breakup of nuclei, for measuring light nuclei from coherent
processes down to very low $p_{T}$, and greatly improves the acceptance for
protons in exclusive reactions - in particular at low $x$. As such, a second
detector at IP8 will significantly enhance the capabilities of the EIC for
diffractive physics and open up new opportunities for physics with nuclear
targets.
With this document we highlight the science benefiting from an optimized
operation at instantaneous luminosities from $0.5\times 10^{34}$cm-2s-1 up to
$1.0\times 10^{34}$ cm-2s-1, achievable in the center-of-mass range of 45 to
100 GeV, with significantly lower luminosity at 28 and 140 GeV. Furthermore,
with a projected $10^{7}$ s of operation (100% equivalent) annually, the
maximal integrated luminosity is 100 fb-1.
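The quoted annual integrated luminosity is simple unit arithmetic: $10^{34}\,$cm$^{-2}$s$^{-1}\times 10^{7}\,$s $=10^{41}\,$cm$^{-2}$, and with 1 fb $=10^{-39}\,$cm$^{2}$ this is 100 fb$^{-1}$. A short Python check of this conversion (an illustrative sketch, nothing more):

```python
FB_IN_CM2 = 1e-39  # one femtobarn expressed in cm^2

def integrated_lumi_fb(inst_lumi_cm2s, seconds):
    """Integrated luminosity in fb^-1, given the instantaneous
    luminosity (cm^-2 s^-1) and the effective running time (s)."""
    return inst_lumi_cm2s * seconds * FB_IN_CM2

# 10^7 s of 100%-equivalent running at 1.0e34 cm^-2 s^-1:
print(integrated_lumi_fb(1.0e34, 1e7))  # ~100 fb^-1
```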
This White Paper aims at highlighting the important benefits in the science
reach of the EIC. High luminosity operation is generally desirable, as it
enables producing and harvesting scientific results in a shorter time period.
It becomes crucial for programs that would require many months or even years
of operation at lower luminosity.
We also aim at providing the justification for the development of either or
both EIC detectors with characteristics that will provide support for an
exciting science program at low to medium-high center-of-mass electron-ion
collisions that address many of the high impact physics topics. In particular,
the 3D-imaging of the nucleon requires a large amount of data in order to
fill the multi-dimensional kinematic space with high statistics,
including combinations of spin-polarized electrons and longitudinally and
transversely spin-polarized protons. We also emphasize the importance of, in the
future, including positrons for processes that can be isolated through the
measurement of electrical charge differences in electron and positron induced
processes. Furthermore, the availability of high spin polarization for both
the electron and proton beam, in the longitudinal and in the transverse spin
orientation, is critically important for the measurement of the quark angular
momentum distribution in the proton.
Generalized Parton Distributions: The discovery of the Generalized Parton
Distributions (GPDs) and the identification of processes that are accessible
in high energy scattering experiments, has opened up an area of research with
the promise to turn experimentally measured quantities into objects with
3-dimensional physical sizes at the femtometer scale. It requires precision
measurements of exclusive processes, such as deeply virtual Compton scattering
(DVCS) and deeply virtual meson production (DVMP). The tunable energy of the
EIC combined with an instantaneous luminosity of up to $L=10^{34}$cm-2s-1 and
high spin polarization of electrons, protons, and light nuclei, makes the EIC a
formidable instrument to advance nuclear science from the one-dimensional
imaging of the past to the 3-dimensional imaging of the quark and gluon
structure of particles. For the quark structure this is shown in Fig. 7. This
science is one of the cornerstones of the EIC experimental program and is
complemented by theoretical advances as a result of precise computations on
the QCD lattice and through QCD-inspired pictures of the nucleon. To fully
capitalize on these experimental and theoretical efforts demands operation of
the EIC with high luminosity at low to medium center of mass energies. This
will enable connecting the valence quark region, which is well probed in fixed
target experiments, to sea quarks and gluon dominated regions at medium and
small values of the quark longitudinal momentum fraction $x$ correlating the
quarks spatial distribution with its momentum. The great potential of the EIC
for imaging is illustrated in Fig. 7 with the extraction of Compton Form
Factor $\cal{H}$ covering a large $x$ range.
Gravitational Form Factors: Knowledge of the GPDs facilitated the development
of a novel technique to employ the correspondence of the GPDs to the
gravitational form factors (GFFs) through the moments of the GPDs. The GFFs
are form factors of the nucleon matrix element of the energy-momentum tensor
and are related to the mechanical properties of the proton. The Fourier
transform over their t-dependence can be related to the distribution of
forces, of mass, and of angular momentum. The femto-scale images obtained will
provide an intuitive understanding of the fundamental properties of the
proton, and how they arise from the underlying quarks and gluon degrees of
freedom as described by the QCD theory of spin-1/2 quarks and spin-1 gluons.
This is one of the most important goals in nuclear physics. The feasibility of
this program has been demonstrated at experiments at lower energy, and
expected results at the EIC have been simulated.
Mechanical Properties of Particles: In QCD studies it has been realized
that these matrix elements, the quark and gluon GFFs, measured through the DIS
momentum sum rule and also serving as the source of the gravitational field of
the nucleon, play important roles in understanding the spin and mass
decomposition. The
interpretation of the GFF $D(t)$ in terms of mechanical properties has most
recently generated much interest as its relations to deeply virtual Compton
scattering (DVCS) and deeply virtual meson production (DVMP) have been
established. Moreover, the gluon GFF are directly accessible through near-
threshold heavy-quarkonium production as well. Furthermore, the beam charge
asymmetry in DVCS with a future positron beam will have an important impact on
directly accessing the $D(t)$ form factor Accardi _et al._ (2021). Figure 13
shows examples of estimated normal and shear force distributions inside the
proton that will become accessible with the EIC.
Nuclear Structure in Momentum Space: As the GPDs relate to imaging in
transverse Euclidean and longitudinal momentum space, the nucleon’s
3-dimensional momentum structure may be accessed through measurements of
transverse momentum dependent parton distribution functions employing semi-
inclusive deep-inelastic scattering as a central part of the scientific
mission of the EIC. This program focuses on an unprecedented investigation of
the parton dynamics and correlations at the confinement scale and will benefit
substantially by an increased luminosity at medium energies. Structure
functions appearing at sub-leading twist are suppressed by a kinematic factor
$1/Q$, which makes data at relatively low and medium $Q^{2}$ the natural
domain for their measurement. Similarly, effects from the intrinsic transverse
momentum dependence are suppressed at high $Q^{2}$, when most of the observed
transverse momenta are generated perturbatively. As a consequence, the signal
of TMDs is naturally diluted at the highest energies. At the same time $Q^{2}$
has to be high enough for the applicability of factorization theorems.
Dedicated running of the EIC at low to medium CM energy would therefore occupy
kinematics where non-perturbative and subleading effects are sizeable and
current knowledge allows the application of factorization to extract the
relevant quantities Grewal _et al._ (2020). The strong impact of a high
luminosity EIC on the determination of the structure function $g_{T}$ is
demonstrated in Figure 21 in comparison with the existing data.
Exotic Mesons in Heavy Quark Spectroscopy: The spectroscopy of excited mesons
and baryons has played an essential role in the development of the quark model
and its underlying symmetries, which led to the decoding of what was then
called the “Particle Zoo” of hundreds of excited states. Modern electro/photo-
production facilities, such as those operating in Jefferson Lab, have
demonstrated the effectiveness of photons as probes of the hadron spectrum.
However the energy ranges of these facilities are such that most states with
open or hidden heavy flavor are out of reach. Still, there is significant
discovery potential for photoproduction in this sector. Already electron
scattering experiments at HERA observed low-lying charmonia, demonstrating the
viability of charmonium spectroscopy in electro-production at high energies,
but these were limited by luminosity. Now the EIC, with orders of magnitude higher
luminosity, will provide a suitable facility for a dedicated photoproduction
spectroscopy program (by post-tagging the near $0^{\circ}$ scattered electron)
extended to the heavy flavor sectors. In particular, the study of heavy-
quarkonia and quarkonium-like states in photon-induced reactions while
complementary to the spectroscopy programs employing other production modes
will provide unique clues to the underlying non-perturbative QCD dynamics.
Unique science with nuclei: For the first time in a collider geometry, the EIC
will enable deep inelastic scattering of its polarized electron beam off all
nuclei. The lightest nuclei, such as the deuteron and helium, would serve as
surrogates for neutrons to study flavor-dependent parton distributions in
kinematic regions that remain unexplored to date. EIC’s high luminosity and
unique far-forward detection capabilities will enable detailed measurements of
nuclear breakup, spectator tagging, and – in the case of light ions – coherent
scattering reactions, far beyond what was possible at past fixed-target
facilities. Such measurements would provide additional valuable experimental
controls and promise to clarify reaction mechanisms and to probe
nuclear configurations that are believed to play a crucial role in the
scattering process. Coherent scattering measurements in exclusive reactions
enable 3D tomography of light ions in their quark-gluon degrees of freedom.
Nuclei can be used to study the influence of nuclear interactions on non-
perturbative properties of the nucleon (nuclear medium modifications).
Precision measurements of the $Q^{2}$ dependence of the EMC effect will pin
down the influence of higher twist contributions on the medium modifications
of partonic distributions. The broad Bjorken-$x$ range covered by the EIC
makes it an ideal machine to study the gluon EMC effect.
Paper organization: The WP is organized in 10 sections, with section I through
section V outlining an experimental science program. Section VI is dedicated
to the increasing role Lattice QCD will play in supporting the high level
experimental analysis, as well as opening up avenues of research that require
information not (yet) available from prior experiments for the interpretation.
Section VII discusses aspects of the science requiring special instrumentation
in the far forward region of the hadron beam, and for the second interaction
region at IP8 the option of implementing a high-resolution forward ion
spectrometer. Radiative effects, which all experimental analyses have to deal
with and which may present special challenges in parts of the nearly complete
phase space covered by the EIC detection system, are discussed in section
VIII. Section IX outlines some of the experimental
and analysis aspects that offer significant benefits from developing and
employing artificial intelligence (AI) procedures in controlling hardware and
in guiding analysis strategies, much of which can be developed before the EIC
begins operation. Section X discusses the two interaction regions that can
house dedicated detector systems, with emphasis on their complementarity in
performance at different center-of-mass energies and optics parameters.
## I GPDs - 3D Imaging and mechanical properties of the nucleon
### I.1 Introduction & background
The discovery of GPDs Müller _et al._ (1994); Ji (1997a); Radyushkin (1997)
has opened a window on three-dimensional imaging of the nucleon, going far
beyond the one dimensional longitudinal structure probed in deeply inelastic
scattering (DIS) and the transverse structure encoded in the different form
factors. This discovery facilitated the development of a novel technique that
employs the remarkable correspondence of the GFF and the second x-moments of
the generalized parton distribution functions, and relate them to the shear
stress and pressure in the proton and the distribution of orbital angular
momentum. These femto-scale images (or femtography) will provide an intuitive
understanding of how the fundamental properties of the nucleon, such as its
mass and spin, arise from the underlying quark and gluon degrees of freedom.
Then, for the first time, we will have access to the forces and pressure
distributions inside the nucleon. This science is one of the cornerstones of
the EIC experimental program and is complemented by theoretical advances as a
result of lattice QCD calculations and through QCD-inspired pictures of the
nucleon. To fully capitalize on these experimental and theoretical efforts
demands operation of the EIC with high luminosity at low to medium center of
mass energies.
The standard approach of imaging is through diffractive scattering. The deeply
virtual exclusive processes allow probing entirely new structural information
of the nucleon through QCD factorization (see Fig.3).
Figure 3: Deeply virtual exclusive processes in electron scattering, as hard
scattering events to probe the 3D quark distribution (left) and gluon
distribution (right).
The golden process to study the quark GPDs is DVCS, where a virtual photon
interacts with a single quark deep inside the hadron, and the quark returns
the hadron to its initial ground state by emitting a high-energy photon in the
final state. Experimental observables in DVCS are parameterized by Compton Form
Factors (CFFs) Belitsky and Mueller (2010). From the analysis of data from
DESY, as well as the results of new dedicated experiments at JLab, and at
CERN, early experimental constraints on CFFs have been obtained from global
extraction fits Kumericki _et al._ (2011); Guidal _et al._ (2013); Kumericki
_et al._ (2016); Moutarde _et al._ (2019). However, data covering a
sufficiently large kinematic range, and the different required polarization
observables, have not been systematically available. The future EIC with high
luminosity at large range in CM energies will provide comprehensive
information on these hard diffractive processes, entering the precision era
for GPD studies.
In what follows, after a brief review of the formalism in Section I.2.1, we
describe state of the art analysis methods in Section I.2.2, and the study of
the extraction of GFF performed at Jefferson Lab Hall B (Section I.3).
Additional processes sensitive to GPDs complementing the main EIC focus, as
well as an outlook are presented in I.5.
### I.2 Generalized Parton Distributions and Nucleon Tomography
GPDs, their theoretical properties, as well as phenomenological aspects
related to their extraction from deeply virtual exclusive processes, have been
the object of several review papers Ji and Bakker (2013); Diehl (2003);
Belitsky and Radyushkin (2005); Kumericki _et al._ (2016); Goeke _et al._
(2001); Ji (2004); Ji _et al._ (2021a) as well as of reports supporting the
design of the upcoming EIC Accardi _et al._ (2016); Abdul Khalek _et al._
(2021). The main properties of GPDs are outlined below while reminding the
reader that many open questions concerning constraints on GPD models, such as
the application of positivity bounds Pobylitsa (2002); Pire _et al._ (1999),
dispersion relations Teryaev (2005); Anikin and Teryaev (2007); Diehl and
Ivanov (2007); Goldstein and Liuti (2009); Pasquini _et al._ (2014), flavor
dependence Kriesten _et al._ (2021), NLO perturbative evolution, as well as
the separation of twist-2 and twist-3 contributions in the deeply virtual
exclusive cross sections, are still intensely debated. The ultimate answer to
many of these questions will be found in the outcome of carefully designed
experiments at the EIC. It is therefore mandatory to define analysis
frameworks to extract GPDs from data. Various approaches, listed in Section
I.2.2, have been developed which represent a new step towards realizing the
goal of nucleon tomographic imaging.
#### I.2.1 Deeply Virtual Exclusive Processes, GPDs and Compton Form Factors
The non-perturbative part of the handbag diagram in Fig. 3(left) is
parameterized by GPDs
$\displaystyle\frac{P^{+}}{2\pi}\int\text{d}y^{-}\,\text{e}^{ixP^{+}y^{-}}\langle p^{\prime}|\bar{\psi}_{q}(0)\gamma^{+}(1+\gamma^{5})\psi_{q}(y)|p\rangle=\bar{U}(p^{\prime},\Lambda^{\prime})\left[H^{q}(x,\xi,t)\,\gamma^{+}+E^{q}(x,\xi,t)\,i\sigma^{+\nu}\frac{\Delta_{\nu}}{2M}+\widetilde{H}^{q}(x,\xi,t)\,\gamma^{+}\gamma^{5}+\widetilde{E}^{q}(x,\xi,t)\,\gamma^{5}\frac{\Delta^{+}}{2M}\right]U(p,\Lambda)\quad(1)$
where the index $q$ refers to the quark flavor;
$P=\frac{1}{2}\left(p+p^{\prime}\right)$ is the average proton 4-momentum,
while $\Delta=p^{\prime}-p$ is the 4-momentum transfer to the proton,
$t=\Delta^{2}$. The Fourier transform is performed along the light-cone (LC)
with $y^{+}=\vec{y}_{\perp}=0$ (Fig.4).
$\displaystyle\begin{cases}b_{\perp}=\dfrac{y_{in\,\perp}+y_{out\,\perp}}{2}\\ \Delta=k_{in}-k_{out}=p-p^{\prime}\end{cases}\qquad\begin{cases}y=y_{in}-y_{out}\\ k=\dfrac{k_{in}+k_{out}}{2}\end{cases}$
Figure 4: Correlation function for the GPDs defined in Eq.(1), highlighting
both momentum and Fourier conjugate spatial coordinates.
The active quark carries light cone momentum fractions $x+\xi$ and $x-\xi$,
respectively, in the initial and final states, so that the average quark LC
momentum is, $k^{+}=xP^{+}$ and the LC momentum difference is,
$\Delta^{+}=p^{\prime+}-p^{+}=-2\xi P^{+}$.
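These light-cone kinematics can be checked with a few lines of Python (a toy numerical illustration, not part of any analysis framework; the values of $x$, $\xi$, and $P^{+}$ are arbitrary):

```python
def quark_lc_momenta(x, xi, p_plus):
    """Light-cone momenta of the active quark before and after the
    scattering, for average momentum fraction x, skewness xi, and
    average proton light-cone momentum P+."""
    k_in = (x + xi) * p_plus    # initial-state quark carries x + xi
    k_out = (x - xi) * p_plus   # final-state quark carries x - xi
    return k_in, k_out

k_in, k_out = quark_lc_momenta(0.3, 0.1, 1.0)
# average quark LC momentum k+ = x P+ :
assert abs((k_in + k_out) / 2 - 0.3 * 1.0) < 1e-12
# LC momentum difference Delta+ = p'+ - p+ = -2 xi P+ :
assert abs((k_out - k_in) - (-2 * 0.1 * 1.0)) < 1e-12
```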
Ordinary parton distribution functions (PDFs) can be recovered from GPDs at
$\xi=0,t=0$ as,
$\frac{1}{4\pi}\int\text{d}y^{-}\,\text{e}^{ixp^{+}y^{-}}\langle p|\bar{\psi}_{q}(0)\gamma^{+}\psi_{q}(y)|p\rangle=H^{q}(x,\xi=0,t=0)=q(x)\quad(2)$
and similarly $\widetilde{H}^{q}(x,\xi=0,t=0)=\Delta q(x)$. Furthermore, like ordinary parton distributions, all of the expressions considered here depend on the hard scale of the scattering process, $Q^{2}$, which is omitted in the expressions for ease of presentation. Because of Lorentz covariance, the $n$-th Mellin moment of a GPD is a polynomial in $\xi$ of order $n+1$ Ji (1998a). Because of parity and time-reversal invariance, these polynomials are even in $\xi$ for the GPDs of spin-1/2 targets such as the proton. The coefficients of each power of $\xi$ are functions of $t$, which constitute generalized form factors. For $n=0$ in particular, the moments are independent of $\xi$ and give the familiar elastic form factors. In Section I.3 we will use the 2nd Mellin moments of GPD $H$ and GPD $E$ when discussing the GFFs of the proton.
$\displaystyle\int_{-1}^{1}\text{d}x\,H^{q}(x,\xi,t)=F_{1}^{q}(t),\qquad$
$\displaystyle\qquad\int_{-1}^{1}\text{d}x\,E^{q}(x,\xi,t)=F_{2}^{q}(t)$ (3a)
$\displaystyle\int_{-1}^{1}\text{d}x\,\widetilde{H}^{q}(x,\xi,t)=G_{A}^{q}(t),\qquad$
$\displaystyle\qquad\int_{-1}^{1}\text{d}x\,\widetilde{E}^{q}(x,\xi,t)=G_{P}^{q}(t)$
(3b)
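The polynomiality and evenness properties stated above can be checked numerically with a toy double-distribution model, in which a GPD is generated as $H(x,\xi)=\int d\beta\,d\alpha\,\delta(x-\beta-\alpha\xi)\,f(\beta,\alpha)$. The forward density and profile function below are assumptions chosen only to have the right support and symmetry, not a physical model:

```python
import numpy as np

def f_DD(beta, alpha):
    # assumed double distribution: forward density beta^(-1/2) (1 - beta)^3
    # times a profile even in alpha, supported on |alpha| <= 1 - beta
    h = 1.0 - beta
    q = beta ** -0.5 * h ** 3
    prof = 0.75 * (1.0 - (alpha / h) ** 2) / h   # normalized to 1 in alpha
    return q * prof

def x_moment(n, xi, n_grid=400):
    # \int dx x^n H(x, xi) = \int dbeta dalpha (beta + alpha*xi)^n f(beta, alpha),
    # using the delta function to set x = beta + alpha*xi
    total = 0.0
    betas = np.linspace(1e-4, 1.0 - 1e-4, n_grid)
    db = betas[1] - betas[0]
    for b in betas:
        alphas = np.linspace(-(1.0 - b), 1.0 - b, n_grid)
        da = alphas[1] - alphas[0]
        total += np.sum((b + xi * alphas) ** n * f_DD(b, alphas)) * da * db
    return total
```

The $x^{1}$ moment comes out independent of $\xi$, while the $x^{2}$ moment acquires an even, quadratic $\xi$ dependence; a $\xi$-odd piece would require a profile odd in $\alpha$, which time-reversal invariance forbids.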
GPDs also encode information on the joint distributions of partons as
functions of both the longitudinal momentum fraction $x$ and the transverse
impact parameter $\vec{b}_{\perp}$. For a nucleon polarized along the
transverse $X$ direction they are given by Guo _et al._ (2021a),
$q^{In}_{X}(x,{\bf b}_{\perp})=\int\frac{\text{d}^{2}{\bf\Delta}_{\perp}}{(2\pi)^{2}}\exp[i{\bf b}_{\perp}\cdot{\bf\Delta}_{\perp}]\left[H_{q}(x,0,-\Delta^{2})+i\frac{\Delta_{y}}{2M}\left(H_{q}(x,0,-\Delta^{2})+E_{q}(x,0,-\Delta^{2})\right)\right]$ (4)
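To make the content of Eq. (4) concrete, the following sketch performs the two-dimensional Fourier transform numerically for a toy model. The dipole forms for $H_{q}$ and $E_{q}$, the normalization 1.7, and the momentum cutoff are assumptions chosen purely for illustration, not fits to data:

```python
import numpy as np

M = 0.938     # nucleon mass in GeV
LAM2 = 0.71   # assumed dipole mass^2 in GeV^2

def H_q(t):
    # toy H_q(x = fixed, xi = 0, t): dipole fall-off in t (assumption)
    return 1.0 / (1.0 - t / LAM2) ** 2

def E_q(t):
    # toy E_q with a larger normalization (assumed), same dipole shape
    return 1.7 / (1.0 - t / LAM2) ** 2

def density(bx, by, n=200, cutoff=5.0):
    """Numerically evaluate the Fourier transform of Eq. (4) at one
    impact-parameter point (bx, by), in GeV^-1."""
    d = np.linspace(-cutoff, cutoff, n)       # Delta_x, Delta_y grid in GeV
    dx, dy = np.meshgrid(d, d, indexing="ij")
    t = -(dx ** 2 + dy ** 2)                  # t = -Delta_perp^2 at xi = 0
    integrand = np.exp(1j * (bx * dx + by * dy)) * (
        H_q(t) + 1j * dy / (2.0 * M) * (H_q(t) + E_q(t))
    )
    step = d[1] - d[0]
    return integrand.sum() * step ** 2 / (2.0 * np.pi) ** 2

# The density comes out real (up to discretization error), and the
# polarization term shifts it along b_y, as in the Fig. 5 caption.
rho_up = density(0.0, 0.5)
rho_dn = density(0.0, -0.5)
```

The $i\Delta_{y}$ term is odd under $\Delta_{y}\to-\Delta_{y}$, so it cancels from the imaginary part of the density and instead produces the up-down asymmetry in $b_{y}$ described in the Fig. 5 caption.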
Figure 5 shows one of the projected results for the 2-dimensional images of the CFFs $\mathcal{E}(\xi,t)$ and $\mathcal{H}(\xi,t)$ Fourier transformed into impact parameter space $(b_{x},b_{y})$. The image was
extracted from simulated CLAS12 measurements of different polarization
asymmetries and cross sections with the proton transversely polarized.
Figure 5: Left: Image of the 2-dimensional distribution of $\cal{H}+\cal{E}$
in the valence region for a spin-polarized proton with the polarization axis
parallel to $b_{x}$. The polarization causes a vertical shift of the center.
Right: Same as on the left, but showing the distribution of GPD $\cal{E}$
separately, with the effect of the polarization more dramatically seen as a
clear spatial separation of electrical charges, i.e. u- and d-quarks in
$b_{y}$ space, generating a flavor-dipole. Note that the color codes on the
left and right panels have different scales to account for the much smaller
amplitude of the $\cal{E}$ CFF. Figure 6: Exclusive photon electroproduction
through DVCS (left) and BH processes (middle and right).
In the following we focus on the DVCS process shown in Fig. 6 (left). DVCS can be considered the prototype for all deeply virtual exclusive scattering (DVES) experiments and as such it has been the most studied process. The DVCS matrix elements are accessed through exclusive photon electroproduction,
$ep\rightarrow e^{\prime}p^{\prime}\gamma$
where the final photon is produced at the proton vertex. A competing
background process given by the Bethe-Heitler (BH) reaction is also present,
where the photon is emitted from the electron and the matrix elements measure
the proton elastic form factors, Fig. 6 (right). The cross section is a function of four independent kinematic variables besides the electron-proton center-of-mass energy $\sqrt{s}$: the scale $Q^{2}$; the skewness $\xi$, related to Bjorken $x_{B}$ as $\xi\approx x_{B}/(2-x_{B})$; $t$; and the angle between the lepton and hadron planes, $\phi$.
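For orientation, the leading-order kinematic map between Bjorken $x_{B}$ and the skewness can be coded in two lines (function names are illustrative):

```python
def skewness(x_B):
    """Leading-order skewness, xi ≈ x_B / (2 - x_B)."""
    return x_B / (2.0 - x_B)

def x_bjorken(xi):
    """Inverse of the relation above: x_B = 2 xi / (1 + xi)."""
    return 2.0 * xi / (1.0 + xi)
```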
The CFFs are complex quantities which at leading order in perturbative QCD,
are defined through the convolution integral,
$\displaystyle{\mathcal{F}}(\xi,t;Q^{2})=\int_{-1}^{1}\text{d}x\left[\frac{1}{\xi-
x-i\epsilon}\pm\frac{1}{\xi+x-i\epsilon}\right]F(x,\xi,t;Q^{2})$ (5)
where $\mathcal{F}=\mathcal{H},\mathcal{E},\widetilde{\mathcal{H}},\widetilde{\mathcal{E}}$, and $\pm$ indicates helicity-independent ($-$) or helicity-dependent ($+$) GPDs.
Figure 7 displays estimates of $x_{B}\text{Re}{\mathcal{H}}$ and
$x_{B}\text{Im}{\mathcal{H}}$ at a fixed value of $t$.
It is however important to keep in mind that a study of various processes is
necessary to access GPDs in a controllable way. Firstly, the time-like
counterpart of DVCS, named time-like Compton scattering (TCS) Berger _et al._
(2002); Pire _et al._ (2011), accessed through the nearly forward
photoproduction of a lepton pair $\gamma N\to\gamma^{*}N^{\prime}$ is crucial
to test the universality and the analytical properties (in $Q^{2}$) of the
factorized scattering amplitude Mueller _et al._ (2012). The deeply virtual meson production (DVMP) amplitude has also been proven to factorize, but current data seem to delay the onset of the scaling regime, which makes the study of the process ($\gamma^{*}N\to{\cal{M}}N^{\prime}$) an important laboratory for the study of next-to-leading-twist effects. Secondly, a new class of factorized
amplitudes has emerged Qiu and Yu (2022) where the hard scattering process is
a $2\to 3$ process. The case of the process $\gamma N\to\gamma\gamma
N^{\prime}$ with a large invariant mass of the diphoton Pedrak _et al._
(2020); Grocholski _et al._ (2022) and a quasi-real or virtual initial photon
is particularly interesting since it probes the charge-conjugation odd part of
the quark GPDs in contradistinction with the DVCS/TCS probe. Other processes
where a meson-meson Ivanov _et al._ (2002) or photon-meson pair (with a large
invariant mass) is produced have been studied Boussarie _et al._ (2017);
Duplančić _et al._ (2018); when a transversely polarized $\rho$ meson enters
the final state, they should give access to the elusive transversity quark GPDs.
The electroweak production of a single charmed meson has also been proposed Pire _et al._ (2021a) as a new way to access these transversity quark GPDs. Reconstructing the final-state $D$ or $D^{*}$ meson is however an experimental challenge.
All these new reactions have quite small cross sections and would greatly benefit from a high-luminosity option in the low-energy range of the EIC. More detailed feasibility studies need to be performed, but first order-of-magnitude estimates show that they need quite large photon-detection coverage, which seems in line with current detector designs.
Figure 7: Compton form factors Im$\cal{H}$ and Re$\cal{H}$ extracted at local
$x_{B}$ values from simulated DVCS events at different CM beam energies,
$\sqrt{s}=31.6$ GeV (LOW) and $\sqrt{s}\geq 100$ GeV (HIGH). The dark shaded
bands represent the reach and the uncertainties at the lower CM-energy. The
lighter shaded bands represent the higher CM-energy. The $x_{B}$ regions
labeled LOW can only be covered at the low CM-energy with reasonable
uncertainties. The $x_{B}$ region labeled HIGH can only be reached with the
high CM-energy. The widths of the bands indicate the estimated uncertainties
due to overall reconstruction effects, statistics and systematic
uncertainties. For each of the two CM-energies a combined integrated
luminosity of 200 fb-1 equally split between longitudinally polarized and
transversely polarized proton runs is assumed. At $x_{B}>0.1$ smaller
uncertainties can be achieved at the low CM-energy, which provides overlapping
$x_{B}$ kinematics with the JLab 12 GeV experiments (not shown). The region
$x_{B}<2\times 10^{-3}$ can only be reached at the high CM-energy. Note that the CFFs $\cal{E}$ and $\widetilde{\cal{H}}$ are determined simultaneously. Here we have used the same integrated luminosity for the two CM energies. The
results are statistics limited and may be scaled for different assumptions.
Regarding the luminosity assumptions at the low CM energy see comments in the
caption of Fig. 2.
#### I.2.2 Analysis methods
GPDs are projections of Wigner distributions that give access to the unknown
mechanical properties of the nucleon involving both space and momentum
correlations. Among these are the quark and gluon angular momentum, along with
spin directed $qgq$ interactions Diehl (2003); Belitsky and Radyushkin (2005);
Kumericki _et al._ (2016); Goeke _et al._ (2001); Ji (2004); Ji _et al._
(2021a). An accurate knowledge of GPDs would unveil an unprecedented amount of
information on nucleon structure and on the working of the strong
interactions. Nevertheless, after two decades of experimental and
phenomenological efforts, it has been, so far, impossible to extract these
important quantities directly from experiment. The problem lies at the core of
their connection with observables: the cleanest probe to observe GPDs is from
the matrix elements for deeply virtual Compton scattering (DVCS) (Fig.6, and
Sec.I.2). In a nutshell, GPDs are multi-variable functions depending on the
kinematic set of variables $x,\xi,t,Q^{2}$ (see Eq. (1)), which enter the DVCS
cross section in the form of convolutions with complex kernels, calculable in
perturbative QCD, known as Compton Form Factors (CFFs). Furthermore, because
GPDs are defined at the amplitude level, they appear in bilinear forms, in all
observables, including various types of asymmetries. An additional consequence
is that all four GPDs, $H$, $E$, $\widetilde{H}$, $\widetilde{E}$, enter
simultaneously any given beam/target spin configuration. It is therefore
necessary to consider simultaneously a large array of different observables in
order to extract the contribution of each individual GPD, even before
addressing the issues of their flavor composition, and of the sensitivity of
observables to quark/antiquark components (for a detailed analysis of the DVCS
cross section we refer the reader to Braun _et al._ (2014); Kriesten _et
al._ (2020); Guo _et al._ (2021b)).
For high precision femtography, which is required to obtain proton structure
images, the hadron physics community has been developing sophisticated
analyses. The success of Machine Learning (ML) methodologies in modeling complex phenomena makes this a prime choice for GPD extraction.
Three main frameworks using ML are currently being pursued, aimed at the extraction of GPDs from data, which differ in the techniques, methodologies, and types of constraints derived from theory. In this respect, it has become clear that the use of lattice QCD results will be indispensable in
GPD analyses Lin _et al._ (2018a); Constantinou _et al._ (2021) and efforts
in this direction are under way.
The Zagreb group Čuić _et al._ (2020); Kumericki _et al._ (2011); Kumerički
_et al._ (2014) addresses the extraction of CFF from experimental data on
various DVCS observables for different beam and target polarizations based on
a neural network (NN) architecture, or a multilayer perceptron. The recent
analysis in Ref.Čuić _et al._ (2020) introduces variable network
configurations depending on whether the model is for an unflavored or flavored
quark. The use of theoretical constraints is explored, in this case given by
the assumption that the CFFs obey a dispersion relation Anikin and Teryaev
(2007); Diehl and Ivanov (2007); Goldstein and Liuti (2009). Results of the
fit highlight the existence of hidden correlations among CFFs arising from
different harmonics in $\phi$ appearing in the cross section formulation of
Refs.Belitsky and Mueller (2003, 2010). Comparisons with previous,
unconstrained results, and with a standard least-squares model fit to the same
data show large uncertainties and often an inversion of the trend of data as a
function of $\xi$ and $t$.
The PARTONS group addresses two different stages of the analysis, namely, the
extraction of CFF from data Moutarde _et al._ (2019, 2018), and, most
recently, the determination of GPDs Dutrieux _et al._ (2021a). CFFs are
extracted in Refs.Moutarde _et al._ (2019, 2018) from global fits of all
available DVCS data using a standard NN augmented by a genetic algorithm. This work’s purpose is to help benchmark the group’s future NN-based analyses.
The GPD effort is centered around the concept of “shadow GPDs” Dutrieux _et
al._ (2021b), which broadly define the set of all local minima generated by
regression analysis using given functional parametrizations. Shadow GPDs
offer a practical pathway to solving the inverse problem of extracting GPDs from CFFs. The practicality of the concept remains to be demonstrated.
More recently, the UVA group developed an analysis initially focused on the
DVCS cross section Grigsby _et al._ (2021). The framework devised in
Ref.Grigsby _et al._ (2021) serves as a first step towards the broader scope
of developing a complete analysis for the extraction of CFFs and GPDs from
experimental data. Industry standard ML techniques are used to fit a cross
section model based on currently available DVCS experimental data, allowing
for efficient and accurate predictions interpolating between experimental data
points across a wide kinematic range. Estimating model uncertainty allows one
to make informed decisions about predictions well outside of the region
defined by data, extrapolating to unexplored kinematic regimes. While the
results of this analysis show that, for instance, the network can effectively
generalize in $t$, even in regions with no data, the study also points out several practical challenges of fitting an NN to sparse data with significant experimental uncertainties, as defined by current DVCS data availability. Another important aspect of this study is the handling of the uncertainties on experimental data, which is ubiquitous in physics analyses but less commonly considered in building ML models.
Standard least-squares-based model fits are also currently being performed at this stage to provide a baseline for new, more exploratory approaches. The results of one of these studies are presented in Fig. 7 and in Section I.3. The latter are equivalent to local fits where CFFs are independently determined from measurements in different kinematic bins. In a more recent
development, the free coefficients of a given CFF parameterization are matched
to experimental data and the kinematic bins are no longer treated
independently, allowing for interpolation between measurements of the same
observable on neighboring kinematic bins. This method also makes it possible to extrapolate outside the experimental data, paving the way for impact studies.
However, a systematic uncertainty is introduced by the functional choice of a
parameterization, which could potentially impact the predictivity of the
approach. Furthermore, while ML based approaches provide solutions to overcome
the occurrence of local minima, standard fits are not flexible in this
respect. This approach can be most useful in the earlier phase of an
experimental program when insufficient data are available, preventing use of
more flexible alternatives.
All of the studies mentioned above are not only beneficial to the physics community but also provide an interesting overlay of objectives for the physics,
applied math, computer science and data science communities. A future
investment of resources to bring together all communities will allow for a
precise extraction of the 3D structure of the nucleon by using a wide range of
new methodologies: from including the simulation uncertainties directly in the
training procedure, to developing unsupervised (or weakly supervised)
procedures, improving the calibration of simulations, developing new inference
techniques to improve the efficiency in using simulations, and many more
ongoing developments.
In the next section we describe a CFF extraction method based on dispersion
relations Anikin and Teryaev (2007); Diehl and Ivanov (2007). The foremost
advantage of this approach is that it reduces the number of unknown parameters to be extracted, by calculating the real part of the amplitude from the corresponding imaginary part plus a subtraction constant. The key observation here is that the same subtraction constant (with a flipped sign) enters the dispersion relations for the CFFs $\mathcal{H}$ and $\mathcal{E}$, while the subtraction constants for the CFFs $\tilde{\mathcal{H}}$ and $\tilde{\mathcal{E}}$ vanish. These global fits must be performed with analytical parameterizations of the CFF dependences, since one needs to extrapolate beyond the available data to perform the full dispersion integral.
Furthermore, it is known that dispersion relations are affected by a kinematic, $t$-dependent threshold which partially hampers a direct connection to GPDs and affects the extraction of the subtraction term
Goldstein and Liuti (2009). Although the precision of present data does not
allow for a full evaluation of these systematic uncertainties, a dedicated
study will be possible in the wider kinematic range of the EIC.
### I.3 D-term form factor, and mechanical properties of the nucleon - beyond
tomography
In section I.2 tomographic spatial imaging was discussed through access to
GPDs employing the DVCS process. This section discusses how to obtain
information about gravitational/mechanical properties of the proton.
Mechanical properties that relate to gravitational coupling, such as the
internal mass distributions, the quark pressure, and the angular momentum
distribution inside the proton, are largely unknown. These properties are
encoded in the proton’s matrix element of the Energy Momentum Tensor (EMT)
Kobzarev and Okun (1962); Pagels (1966) and are expressed through the GFF Ji
(1997a).
$\displaystyle\langle p_{2}|\hat{T}^{q,g}_{\mu\nu}|p_{1}\rangle=\bar{u}(p_{2})\left[A^{q,g}(t)\frac{P_{\mu}P_{\nu}}{M}+B^{q,g}(t)\frac{i(P_{\mu}\sigma_{\nu\rho}+P_{\nu}\sigma_{\mu\rho})\Delta^{\rho}}{2M}+D^{q,g}(t)\frac{\Delta_{\mu}\Delta_{\nu}-g_{\mu\nu}\Delta^{2}}{4M}+M\bar{c}^{q,g}(t)g_{\mu\nu}\right]u(p_{1})$ (6)
The form factors $A^{q,g}(t)$, $B^{q,g}(t)$, $\bar{c}^{q,g}(t)$, $D^{q,g}(t)$
encode information on the distributions of energy density, angular momentum,
and internal forces in the interior of the proton as described in detail in
Sec. II.3. By virtue of energy-momentum conservation, the terms $\bar{c}^{q,g}(t)$ contribute to the quark and to the gluon part with the same magnitude but opposite signs, so that $\sum_{q}\bar{c}^{q}(t)+\bar{c}^{g}(t)=0$. Experimental information on the gluon contribution may come from trace-anomaly measurements in $J/\Psi$ production at threshold, or possibly with help from LQCD.
The superscripts $q,g$ indicate that the breakdown is valid for both quarks
$q$ and gluons $g$. Most of the discussion in this section is related to the
quark contributions, and we will omit the reference to the gluon part for the
remainder of this subsection. The GFFs of quarks and gluons also depend on the
renormalization scale $\mu^{2}$ (associated with the hard scale $Q^{2}$ of the
process) that we omit in the formalism for simplicity. The total GFFs, $A(t)=\sum_{q}A^{q}(t)+A^{g}(t)$ and analogously for $B(t)$ and $D(t)$, are renormalization-scale independent.
The GFFs are the entry point to the mechanical and other properties of the proton. However, there is no practical, direct way to measure these form factors, as it would require measurements employing the graviton-proton interaction, a highly impractical proposition due to the extreme weakness of the gravitational interaction Kobzarev and Okun (1962); Pagels (1966). More recent theoretical developments showed that the GFFs may be indirectly probed in
deeply virtual Compton scattering (DVCS) Ji (1997b). DVCS allows probing the
proton’s quark structure expressed in the GPDs, as the basis for the
exploration of its mechanical or gravitational properties Polyakov (2003).
The handbag diagram for the DVCS amplitude contains contributions from non-local operators with collinear twist 2, 3, and 4, where the latter two can be
neglected at large $Q^{2}$. These operators can be expanded through the
operator product expansion in terms of local operators with an infinite tower
of $J^{PC}$ quantum numbers. This includes operators with the quantum numbers
of the graviton, so information about how the target would interact with a
graviton is encoded within this tower. The GPDs $H^{q}$ and $E^{q}$ are mapped
to the GFF $D^{q}(t)$, $A^{q}(t)$, and
$J^{q}(t)=\frac{1}{2}\,A^{q}(t)+\frac{1}{2}\,B^{q}(t)$ in the Ji sum rule Ji
(1997b), involving the second Mellin moment of the GPD $H^{q}$ and $E^{q}$ as
$\displaystyle\int\mathrm{d}x\,x[H^{q}(x,\xi,t)+E^{q}(x,\xi,t)]=2J^{q}(t),$
(7) $\displaystyle\int\mathrm{d}x\,xH^{q}(x,\xi,t)=A^{q}(t)+\xi^{2}D^{q}(t).$
(8)
In the following we focus on the term $D^{q}(t)$ that encodes information
about mechanical properties, see Sec. II.3.
This new direction of nucleon structure research has recently resulted in the
first estimate of the pressure distribution inside the proton based on
experimental data Burkert _et al._ (2018), employing CLAS DVCS-BH beam-spin
asymmetry data Girod _et al._ (2008) and differential cross sections Jo _et
al._ (2015), and constraints from parameterized data covering the full phase
space.
With the EIC, a high-luminosity machine with a large energy reach, these properties can be accessed covering a large range in $x_{B}$, $Q^{2}$ and $-t$ in the exclusive DVCS process. As shown in Figure 8, the lower EIC CM energy
range of $3\times 10^{-3}<x_{B}<0.1$ will cover the valence quark and sea-
quark domains, while at the high CM energies the gluon contributions will be
accessible at $10^{-4}<x_{B}<10^{-2}$.
Figure 8: Accessible ranges in $x_{B}$ vs $Q^{2}$ (left), and $t$ vs azimuth
angle $\phi$ (right) for the DVCS process at a center-of-mass energy
$\sqrt{s}=28$ GeV. The color code indicates the number of events per pixel for
a given luminosity.
Ideally, one would determine the integrals in Eqs.(7) and (8) by measuring GPD
$H$ and $E$ in the entire $x$ and $\xi$ space and in a large range of $t$. For
the DVCS experiments, such an approach is impractical as the GPDs are not
directly accessible in the full $x,\xi$-space, but only at the constrained
kinematics $x=\pm\xi$. The GPDs also do not directly appear in the
experimental observables. Instead, GPDs appear inside the Compton form factors defined in Eq. (5), which depend only on the two variables $\xi$ and $t$: one has traded the real function of 3 variables $H(x,\xi,t)$ for the complex functions of 2 variables $\text{Re}\,{\mathcal{H}}(\xi,t)$ and $\text{Im}\,{\mathcal{H}}(\xi,t)$ that can be related more directly to experimentally accessible observables. The CFFs appear in experimental cross
sections and in polarization observables. CFF $\mathcal{H}(\xi,t)$ as well as
$\mathcal{E}(\xi,t)$ are thus accessible through a careful analysis of
differential cross sections and the responses to spin polarization of the
electron and the proton beam.
As discussed in section I.2.2, the extraction of the Im${\mathcal{H}(\xi,t)}$
and Re$\mathcal{H}(\xi,t)$ CFF has been pursued by employing global
parameterizations for the $\xi$ and $t$ dependencies Burkert _et al._ (2018)
and using machine learning (ML) and artificial neural network approaches Moutarde _et al._ (2019); Kumericki _et al._ (2016); Grigsby _et al._ (2021).
In order to determine the $D^{q}(t)$ form factor we can employ a subtracted
fixed-$t$ dispersion relation that relates the real and imaginary parts of the
CFF $\mathcal{H}$ to a subtraction term $\Delta^{q}(t)$ whose determination
requires additional experimental information. The dispersion relation and its
relationship to the subtraction term $\Delta^{q}(t)$ is given as
$\displaystyle{\rm Re}{\cal H}^{q}(\xi,t)=\Delta^{q}(t)+\frac{1}{\pi}{\cal
P}\int_{0}^{1}\text{d}x\ \left[\frac{1}{\xi-x}-\frac{1}{\xi+x}\right]{\rm
Im}{\cal H}^{q}(x,t),$ (9)
where $\mathcal{P}$ is the principal value of the Cauchy integral, for
simplicity written without threshold effects Anikin and Teryaev (2007);
Goldstein and Liuti (2009).
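The machinery of Eq. (9) is easy to prototype numerically. The sketch below regularizes the principal value by subtracting the integrand at the pole; the toy input $\text{Im}\,{\cal H}(x)=x$ and all function names are illustrative assumptions, used only because the integral can then be checked analytically:

```python
import numpy as np

def _trap(y, x):
    # trapezoidal rule (kept explicit to avoid version-dependent numpy helpers)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) * 0.5)

def re_cff(im_H, xi, subtraction, n=20001):
    """Subtracted dispersion relation of Eq. (9):
    Re H(xi) = Delta + (1/pi) PV int_0^1 dx [1/(xi-x) - 1/(xi+x)] Im H(x).
    im_H must accept numpy arrays; 0 < xi < 1."""
    x = np.linspace(0.0, 1.0, n)
    g, g_xi = im_H(x), im_H(xi)
    denom = xi - x
    ratio = np.empty_like(x)
    mask = np.abs(denom) > 1e-12
    ratio[mask] = (g[mask] - g_xi) / denom[mask]
    if not mask.all():
        # a grid point sits exactly on the pole: fill with the limit -Im H'(xi)
        eps = 1e-6
        ratio[~mask] = -(im_H(xi + eps) - im_H(xi - eps)) / (2.0 * eps)
    # PV int g/(xi - x) dx, with the analytic log from the subtracted pole
    pv = _trap(ratio, x) + g_xi * np.log(xi / (1.0 - xi))
    # the 1/(xi + x) piece is regular on [0, 1] for xi > 0
    regular = _trap(g / (xi + x), x)
    return subtraction + (pv - regular) / np.pi
```

For $\text{Im}\,{\cal H}(x)=x$ the kernel integral evaluates analytically to $-2+\xi\ln\frac{1+\xi}{1-\xi}$, which the numerical principal value reproduces to high accuracy.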
The subtraction term $\Delta^{q}(t)$ was shown to be related to the D-term
Anikin and Teryaev (2007); Diehl and Ivanov (2007) through the series of
Gegenbauer polynomials. When only the first term in the series is retained and
we assume $D^{u}(t)\approx D^{d}(t)$ based on large-$N_{c}$ predictions Goeke
_et al._ (2001) and neglect strange and heavier quark contributions which at
JLab energies is a good approximation (recall that in DVCS the contributions
of different quark flavors enter weighted by squares of the fractional quark
charge factors), then we obtain:
$\displaystyle
D^{Q}(t)=\sum_{q}D^{q}(t)\approx\frac{18}{25}\sum_{q}e_{q}^{2}\Delta^{q}(t)$
(10)
This truncation of the Gegenbauer expansion causes a model dependence, as the higher-order terms cannot be isolated with DVCS measurements alone and must currently be computed in models. The chiral Quark Soliton Model Kivel _et
al._ (2001) predicts a 30% contribution due to the next term in the Gegenbauer
expansion. Computations of the next to leading term may in future become
possible from LQCD (see also section VI.1.2 for more detailed discussion on
LQCD contributions to GPDs and 3D imaging).
It is important to remark that the different terms in the Gegenbauer expansion
of $\Delta^{q}(t)$ have different renormalization scale dependencies. The
broader $Q^{2}$-coverage at EIC may therefore provide the leverage to
discriminate between the different terms and help to isolate the leading term
related to $D^{q}(t)$. In the limit of the renormalization scale going to
infinity, all higher Gegenbauer terms vanish and asymptotically
$\Delta^{q}(t)\to 5\,D^{q}(t)$ Goeke _et al._ (2001). We note that in the same limit $\sum_{q}D^{q}(t)\to D(t)\,N_{f}/(N_{f}+4C_{F})$ and $D^{g}(t)\to D(t)\,4C_{F}/(N_{f}+4C_{F})$,
where $D(t)$ is the total GFF, $N_{f}$ is the number of flavors and
$C_{F}=(N_{c}^{2}-1)/(2N_{c})$ Polyakov and Schweitzer (2018a).
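The asymptotic quark/gluon sharing quoted above is fixed by group-theory factors alone and is easy to check; for $N_{c}=3$ and $N_{f}=3$ it gives $9/25$ for the quarks and $16/25$ for the gluons:

```python
from fractions import Fraction

# asymptotic sharing of the total D-term between quarks and gluons
N_c, N_f = 3, 3
C_F = Fraction(N_c ** 2 - 1, 2 * N_c)          # C_F = 4/3
quark_share = Fraction(N_f) / (N_f + 4 * C_F)  # sum_q D^q / D
gluon_share = 4 * C_F / (N_f + 4 * C_F)        # D^g / D
```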
### I.4 Backward hard exclusive reactions and probing TDAs with high
luminosity EIC
A natural and promising extension of the EIC experimental program for hard
exclusive processes is the study of hard exclusive electroproduction and
photoproduction reactions in the near-backward region Gayoso _et al._ (2021).
These measurements will allow further exploration of hadronic structure in
terms of baryon-to-meson and baryon-to-photon Transition Distribution
Amplitudes Pire _et al._ (2021b) which extend both the concepts of
Generalized Parton Distributions (GPDs) and baryon Distribution Amplitudes
(DAs).
Baryon-to-meson (and baryon-to-photon) TDAs arise within the collinear factorization framework for hard exclusive reactions in a kinematic regime that is complementary to the usual near-forward kinematics, in which the familiar GPD-based description applies for hard exclusive meson electroproduction reactions and DVCS. Technically, TDAs are defined as transition matrix elements between a baryon state and a meson (or photon) state of the same non-local three-quark operator on the light-cone occurring in the definition of
baryon DAs. In Fig. 9 we sketch the collinear factorization reaction mechanism
involving TDAs (and nucleon DAs) for hard exclusive near-backward
electroproduction of a meson off a nucleon target $\gamma^{*}N\to
N^{\prime}{\cal M}$ Lansberg _et al._ (2012) and of hard exclusive near-
backward photoproduction of a lepton pair off a nucleon target (backward
Timelike Compton Scattering (TCS)) $\gamma
N\to\gamma^{*}N^{\prime}\to\ell^{+}\ell^{-}N^{\prime}$ Pire _et al._ (2022).
Figure 9: Left: Collinear factorization mechanism for hard exclusive
electroproduction of mesons ($\gamma^{*}N\to N^{\prime}{\cal M}$) in the near-
backward kinematic regime (large $Q^{2}$, $W^{2}$; fixed $x_{B}$; $|u|\sim
0$). Right: Collinear factorization of TCS ($\gamma N\to\gamma^{*}N^{\prime}$)
in the near-backward kinematic regime (large $Q^{\prime 2}$, $W^{2}$; fixed
$\tau\equiv\frac{Q^{\prime 2}}{2p_{N}\cdot q}$; $|u|\sim 0$); ${\cal M}N$
($N\gamma$) TDA stands for the transition distribution amplitudes from a
nucleon-to-a-meson (photon-to-a-nucleon); $N$ DA stands for the nucleon
distribution amplitude; $CF$ and $CF^{\prime}$ denote the corresponding hard
subprocess amplitudes (coefficient functions).
The physical content of baryon-to-meson and baryon-to-photon TDAs is conceptually similar to that of GPDs and baryon DAs. Since the non-local QCD operator defining TDAs carries the quantum numbers of a baryon, it provides access to the momentum distribution of baryonic number inside hadrons. It also
enables the study of non-minimal Fock components of hadronic light-front wave
functions. Similarly to GPDs, by switching to the impact parameter space, one
can address the distribution of the baryonic charge inside hadrons in the
transverse plane. This also enables the study of the mesonic and electromagnetic
clouds surrounding hadrons and provides new tools for “femtophotography” of
hadrons. Testing the validity of the collinear factorized description in terms
of TDAs for hard backward electroproduction and photoproduction reactions
requires a detailed experimental analysis. The very first experimental
indications of the relevance of the TDA-based description for hard
electroproduction of backward mesons off nucleons were recently obtained at
JLab in the studies of backward pseudoscalar meson electroproduction
$ep\to e^{\prime}n\pi^{+}$
by the CLAS collaboration and in Hall A Park _et al._ (2018); Diehl _et al._
(2020), and backward vector meson electroproduction
$ep\to e^{\prime}p^{\prime}\omega$
by Hall C Li _et al._ (2019). This latter analysis enabled checking one of
the crucial predictions of the TDA-based formalism, the dominance of the
transverse cross section $\sigma_{T}$. A dedicated study of backward neutral
pseudoscalar meson production with a complete Rosenbluth separation of the cross section, to challenge the $\sigma_{T}\gg\sigma_{L}$ condition, is currently being prepared by Hall C Li _et al._ (2020).
The hard exclusive backward reactions to be studied with the EIC include the
hard exclusive backward electroproduction of the light unflavored pseudoscalar mesons $\pi$ and $\eta$, the strange mesons $K$, and the vector mesons $\rho$, $\omega$, $\phi$, as well as backward DVCS. Another option can be the study of hard
exclusive backward photoproduction of lepton pairs (backward TCS) and of heavy
quarkonium. The peculiar EIC kinematics, as compared to fixed target
experiments, allows, in principle, a thorough analysis of the backward region
pertinent to TDA studies. Higher $Q^{2}$, providing a larger lever arm to test the characteristic scaling behavior, would be accessible in a domain of moderate $\gamma^{*}N$ energies, i.e. rather small values of the usual $y$ variable and not too small values of $x_{B}$. It is worth mentioning that, since TDA-related cross sections are usually small, high luminosity is definitely needed to scan a sufficiently wide $Q^{2}$ range. This will allow the new domain of backward hard exclusive reaction physics to be further explored.
The detection of $u$-channel exclusive electroproduction:
$e+p\to e^{\prime}+p^{\prime}+\pi^{0}$
seems easily feasible thanks to the $4\pi$ coverage of the EIC detector package. A preliminary study documented in Abdul Khalek _et al._ (2021) shows the feasibility of detecting exclusive $\pi^{0}$ production at $u\sim u_{0}$. The scattered electrons are well within the standard detection specification. The two photons (from the decaying $\pi^{0}$) project a ring pattern at the zero-degree calorimeter (the tagging detector along the incident proton beam) close to the effective acceptance, while the recoiling proton enters the forward EM calorimeter at high pseudorapidity. The optimization of the detector and of the efficiency for detecting this process is currently under way.
Also, rough vector-meson-dominance-model-based estimates of the backward TCS cross section for the EIC kinematical conditions, presented in Pire _et al._ (2022), suggest a considerable number of events within the high-luminosity regime to study photon-to-nucleon TDAs.
More prospective phenomenological studies and further theoretical developments are needed to establish a sound experimental program focusing on TDAs at the EIC.
### I.5 Outlook - Beyond the EIC initial complement
Spin polarized electron and proton beams lead to single-spin dependent cross
sections that are proportional to the imaginary part of the DVCS-BH
interference amplitude. Double-spin-dependent cross sections provide access to the real part of the interference amplitude, but suffer from strong to dominant contributions of the BH amplitude, which make the experimental determination of the real part from this observable difficult and inaccurate. An indisputable and precise determination of this quantity is
required to unravel the mechanical properties of the nucleon.
Accessing the real part of the interference amplitude is significantly more
challenging than the imaginary part. It appears in the unpolarized cross
sections, for which either the BH contribution is dominant or all three terms
(pure BH, pure DVCS, and DVCS-BH interference amplitudes) are comparable. The
DVCS and interference terms can be separated in the unpolarized cross sections
by exploiting their dependence on the incident beam energy, a generalized
Rosenbluth separation. This is an elaborate experimental procedure, which
requires theoretical hypotheses and still yields an ambiguous extraction of
the physics content Defurne _et al._ (2017); Kriesten and Liuti (2020);
Georges _et al._ (2022). Time-like Compton scattering (TCS), $\gamma p\to
l^{+}l^{-}p$, is another process which can, in principle, provide direct,
though luminosity-demanding, access to Re$\mathcal{H}(\xi,t)$ in a
back-to-back configuration Chatagnon _et al._ (2021), as displayed in Fig. 10.
TCS requires zero-degree electron scattering, generating $l^{+}l^{-}$ pairs in
quasi-real photo-production over a continuous mass range above resonance
production.
Figure 10: Left: Handbag diagram of the TCS process. Middle: Diagram of the BH
processes. Right: Relevant angles for the TCS kinematics in CMS to isolate the
Re$\mathcal{H}$ contribution in the interference term.
The feasibility of measuring TCS, and its strong sensitivity to the D-term,
has already been established at CLAS12 Chatagnon _et al._ (2021).
More convenient access to the real part of the interference amplitude is
obtained by comparing unpolarized electron and positron beams
Voutier (2014). Indeed, at leading twist, the electron-positron unpolarized
DVCS cross section difference is a pure interference signal, linearly
dependent on the real part of the DVCS-BH interference term. As such, it
provides the cleanest access to this crucial observable, without the need for
additional theoretical assumptions in the CFF extraction procedure Burkert
_et al._ (2021a). Implementation of a positron source, both polarized and
unpolarized Abbott _et al._ (2016), at the EIC would thus significantly
enhance its capabilities in the high-impact 3D imaging science program, for
instance in the extraction of the CFF Re$\mathcal{H}(\xi,t)$ and of the
gravitational form factor $D^{q}(t)$.
## II Mass and spin of the nucleon
The most fundamental physical properties of the nucleon, as of other hadrons,
are its mass and spin. Understanding how they arise from QCD, the theory of
light spin-1/2 quarks and massless spin-1 gluons, is one of the most important
goals in nuclear physics report (2018). The experimental study of the proton
spin structure began in the 1980s and has continuously driven the field of
hadronic physics for the last thirty years Ashman _et al._ (1989). Despite
much effort, a complete picture of the proton spin structure is still missing
Ji _et al._ (2021a). The origin of the proton mass has mostly been of
theoretical interest, pursued in QCD-motivated models or effective approaches
such as chiral perturbation theory; its understanding in a QCD-based framework,
together with related experimental tests, has gained attention only recently
Ji (2021).
Gaining insight into the emergence of hadron mass from experimental results on
the pion/kaon electromagnetic form factors and PDFs, analyzed within the
continuum Schwinger method (CSM), represents an important aspect of the
efforts in experiments of the 12 GeV era at JLab Roberts _et al._ (2021) and
those foreseen at the EIC in the US Arrington _et al._ (2021) and at the EicC
in China Anderle _et al._ (2021). A successful description of the
electroexcitation amplitudes of the $\Delta(1232)3/2^{+}$, $N(1440)1/2^{+}$,
and $\Delta(1600)3/2^{+}$ resonances, which have different structure Mokeev
and Carman (2022), has been achieved within the CSM Segovia _et al._ (2015,
2014), employing the same momentum-dependent dressed-quark mass evaluated from
the QCD Lagrangian Roberts (2020) and supported by the experimental results on
the structure of the pion/kaon and the ground-state nucleon. This success
demonstrates a promising opportunity to address the challenging and still open
problem of the emergence of hadron mass in the Standard Model, by confronting
the predictions of QCD-rooted approaches for a broad array of hadron structure
observables with the results of experiments with electromagnetic probes, both
those already available and those foreseen at intermediate-energy facilities
at the luminosity frontier.
In QCD studies, it has been realized that the matrix elements/form factors of
the quark and gluon energy-momentum tensor (EMT), measured through the DIS
momentum sum rule and serving as the source of the nucleon's gravitational
field, play important roles in its spin and mass Ji (1995, 1997a). Moreover,
the interpretation of the GFF $C(Q^{2})$ in terms of mechanical properties has
generated much interest Polyakov and Schweitzer (2018a). Experimentally, the
form factors of the EMT can be accessed through the second-order moments of
quark and gluon GPDs, which can be probed through DVCS and DVMP as discussed
in earlier sections Ji (1997a). The EIC is particularly important for probing
the GPDs of gluons, which are a crucial part of the nucleon Accardi _et al._
(2016). It has been suggested recently that the gluon EMT form factors might
be directly accessible through near-threshold heavy-quarkonium production Guo
_et al._ (2021c).
### II.1 Nucleon mass
Unlike non-relativistic systems, in which masses mostly arise from the
fundamental constituents, the masses of relativistic systems arise
predominantly through interactions. Indeed, without the strong interactions,
the three current quarks making up the nucleon weigh only about 10 MeV in
total (at $\mu_{\rm\overline{MS}}\sim 2$ GeV), about 1% of the bound-state
mass Particle Data Group (2020). This small mass presumably originates from
electroweak symmetry breaking, although strictly speaking there is no way of
knowing what the quark masses would be in the absence of the strong
interactions. Schematically, we can write the nucleon mass in terms of the
quark masses and the strong interaction scale $\Lambda_{\rm QCD}$,
$M_{N}=\sum_{i}\alpha_{i}m_{i}+\eta\Lambda_{\rm QCD}\ ,$ (11)
where $\alpha_{i}$ and $\eta$ are dimensionless coefficients determined by the
strong interaction dynamics. Note that $\Lambda_{\rm QCD}$ is a free parameter
of QCD, which in principle can take any value; the nucleon mass could
therefore be 10 TeV or 100 MeV, independent of the details of strong
interaction physics. One cannot hope, then, to explain from QCD itself why the
nucleon mass is 940 MeV and not some other value, without invoking more
fundamental theories, such as grand unification, which may explain why
$\Lambda_{\rm QCD}$ takes the value that we measure Georgi and Glashow (1974).
In nucleon models, the $\Lambda_{\rm QCD}$ scale has generally been replaced
by parameters with more direct physical interpretations. For instance, in
models emphasizing chiral symmetry breaking, $\Lambda_{\rm QCD}$ is superseded
by the chiral symmetry breaking scale and the constituent quark and/or gluon
masses Manohar and Georgi (1984). On the other hand, in models such as the MIT
bag model, which stress color confinement, $\Lambda_{\rm QCD}$ has been
associated with the energy density of the false vacuum inside a bag Chodos
_et al._ (1974). In instanton liquid models, $\Lambda_{\rm QCD}$ is reflected
in the typical instanton size and density Schäfer and Shuryak (1998).
Unfortunately, the effective degrees of freedom in such models
cannot be studied directly in experiments, and therefore the pictures cannot
be directly verified without additional assumptions. In lattice QCD
calculations, $\Lambda_{\rm QCD}$ is tied to the lattice spacing $a$, an
ultraviolet momentum cut-off, and to the strong coupling associated with that
cut-off. As we shall discuss below, a model-independent way to introduce this
scale might be through the gluonic composite scalar field which breaks the
scale symmetry, a Higgs-like scale-generation mechanism Ji _et al._ (2021b).
What, then, are the meaningful questions one can ask about the nucleon mass,
and can they be answered through experiments at the EIC? Most discussions in
the literature so far concern the distribution of the mass among different
dynamical sources and its spatial distribution inside the nucleon. For
example, what would the proton mass be if all quark masses were zero? This
question was studied in chiral perturbation theory in the 1980s Gasser and
Leutwyler (1982).
Through a Lorentz symmetry relation, it has been found that the quark and
gluon kinetic energy contributions to the nucleon mass can be studied through
deep-inelastic scattering Ji (1995). Moreover, it has been suggested that the
trace anomaly contribution to the nucleon mass can be measured directly as
well Kharzeev (1996). All of these studies are based on an understanding of
the energy sources in the strong interaction Hamiltonian, $H_{\rm QCD}$.
Experimental measurements and theoretical calculations of these mass
contributions constitute important tests of this aspect of our understanding
of the nucleon mass.
The spatial distributions of mass/energy densities are an important concept in
gravitational theories, as they are the sources of gravitational potentials.
In the limit where quantum mechanical fluctuations can be neglected or the
mass is considered heavy, the proton can have a fixed center-of-mass position
with spatial profiles of mass and other densities. These profiles can be
studied through the GFFs, just as one has learned about the spatial
distributions of electric charge and current Polyakov and Schweitzer (2018a).
Moreover, the trace anomaly contribution is related to the scalar form factor,
which maps out the dynamical “bag constant” Ji _et al._ (2021b).
#### II.1.1 Masses in dynamical energy sources
A complete picture of the distribution of the mass among different sources
starts from the QCD Hamiltonian Ji (1995). In relativistic theories, the
Hamiltonian is the spatial integral of the (00)-component of the second-rank
EMT $T^{\mu\nu}$. Although field theories are full of UV divergences, the full
EMT is conserved and hence finite. This second-rank tensor can be uniquely
decomposed into a trace term proportional to the metric tensor $g^{\mu\nu}$
and a traceless term $\bar{T}^{\mu\nu}$; the two are separately finite due to
Lorentz symmetry. Thus the QCD Hamiltonian contains two finite pieces, the
scalar and (second-order) tensor terms,
$H=H_{S}+H_{T}\ .$ (12)
A general feature of a Lorentz-symmetric QFT in (3+1)D is that $H_{S}$
contributes 1/4 of a bound-state mass and the tensor term $H_{T}$ contributes
3/4 Ji (1995), namely
$E_{S,T}=\langle P|H_{S,T}|P\rangle;~{}~{}~{}~{}~{}E_{T}=3E_{S}=\frac{3}{4}M\ ,$ (13)
where the expectation value is taken in a static hadron (nucleon) state
$|\vec{P}=0\rangle$. Again, this is independent of any other specifics of an
underlying theory.
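The 1/4–3/4 split can be made explicit in a few lines. The following is a
minimal sketch, assuming the standard covariant normalization
$\langle P|T^{\mu\nu}|P\rangle=2P^{\mu}P^{\nu}$ for a spin-averaged state:

```latex
% Sketch of E_S = M/4, assuming <P|T^{mu nu}|P> = 2 P^mu P^nu
T^{\mu\nu} = \bar{T}^{\mu\nu} + \frac{g^{\mu\nu}}{4}\,T^{\alpha}{}_{\alpha},
\qquad
\langle P|T^{\alpha}{}_{\alpha}|P\rangle = 2M^{2},
\\[4pt]
E_{S} \;=\; \frac{1}{2M}\,
\Big\langle P\Big|\,\frac{g^{00}}{4}\,T^{\alpha}{}_{\alpha}\,\Big|P\Big\rangle
\Big|_{\vec{P}=0}
\;=\; \frac{1}{2M}\cdot\frac{2M^{2}}{4} \;=\; \frac{M}{4},
\qquad
E_{T} \;=\; M - E_{S} \;=\; \frac{3M}{4}.
```

The counting uses only the trace decomposition and Lorentz symmetry, in line
with the statement above that it is independent of the specifics of the
underlying theory.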
A further decomposition of the tensor part of the Hamiltonian (energy) can be
done through quark and gluon contributions,
$E_{T}=E_{Tq}(\mu)+E_{Tg}(\mu)\ .$ (14)
These energy sources can be probed through the matrix elements of the
corresponding parts in the EMT in terms of the momentum fractions of the
parton distributions, $E_{Tq,g}(\mu)=(3/4)M_{N}\langle x\rangle_{q,g}(\mu)$,
where the quark and gluon $\langle x\rangle_{q,g}(\mu)$ can be obtained from
the phenomenological PDFs Ji (1995). Therefore, a major part of the proton
mass can be understood in terms of quark and gluon kinetic energy
contributions, although this separation depends on the scheme and scale, as
indicated by the argument $\mu$.
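As a simple numerical illustration of this bookkeeping (not taken from any
cited analysis), one can combine $E_{Tq,g}(\mu)=(3/4)M_{N}\langle
x\rangle_{q,g}(\mu)$ with the frame-independent $E_{S}=M_{N}/4$; the momentum
fractions below are assumed round numbers of the typical size at $\mu\sim 2$
GeV, not values from any particular PDF fit:

```python
# Illustrative decomposition of the nucleon mass into tensor (quark/gluon
# kinetic) and scalar energies: E_{Tq,g} = (3/4) M <x>_{q,g}, E_S = M/4.
# The momentum fractions are assumed round numbers, not a PDF fit.
M_N = 0.938              # proton mass in GeV
x_q, x_g = 0.58, 0.42    # assumed quark/gluon momentum fractions, x_q + x_g = 1

E_Tq = 0.75 * M_N * x_q  # quark part of the tensor energy
E_Tg = 0.75 * M_N * x_g  # gluon part of the tensor energy
E_S = 0.25 * M_N         # scalar energy, 1/4 of the mass by Eq. (13)

# The three pieces reassemble the full mass by construction
assert abs((E_Tq + E_Tg + E_S) - M_N) < 1e-12
print(f"E_Tq = {E_Tq:.3f} GeV, E_Tg = {E_Tg:.3f} GeV, E_S = {E_S:.3f} GeV")
```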
The scalar energy, which contributes 1/4 of the proton mass, comes from the
following matrix element,
$E_{S}=\frac{1}{8M}\langle
P|(1+\gamma_{m})m\bar{\psi}\psi+\frac{\beta(g)}{2g}F^{2}|P\rangle\ ,$ (15)
where $\gamma_{m}$ and $\beta$ are the perturbative anomalous dimension and
the (appropriately normalized) QCD beta function, respectively. The operator
is twist-four in high-energy scattering and its matrix element is difficult to
measure directly. However, the up and down quark mass contribution has
historically been related to the so-called $\pi$-N $\sigma$-term, which can be
extracted from experimental data Gasser _et al._ (1991). The strange quark
mass contribution is related to the baryon-octet mass spectrum through chiral
perturbation theory Gasser and Leutwyler (1985). A lattice QCD calculation of
various contributions to the proton mass is shown on the left panel in Fig. 11
Yang _et al._ (2018); Alexandrou _et al._ (2017a).
The most interesting and surprising contribution is that of the gluon
trace-anomaly term $F^{2}$, which sets the scale for the other contributions.
To understand the physics of this contribution, one can consider the composite
scalar field $\phi\sim F^{2}$, which acquires a vacuum expectation value
through the gluon condensate. Inside the nucleon, however, the $\phi$ field
takes a different value: it receives a contribution through its static
response to the valence quarks inside the nucleon, with physics similar to the
MIT bag model constant $B$, shown as the dots and shaded area in the middle
panel of Fig. 11. This response can also be calculated dynamically as the
exchange of a series of $0^{++}$ scalar particles. If this is dominated by a
single scalar particle, like the $\sigma$ meson, the mechanism of mass
generation is identical to the Higgs mechanism.
It has been suggested that this matrix element can be measured through
threshold heavy-quarkonium production in photon or electron scattering on a
proton target Kharzeev (1996); Wang _et al._ (2020). However, due to the large
difference between the initial and final nucleon momenta, the interpretation
was initially formulated in the vector dominance model (VDM). A better
phenomenological description might be through AdS/CFT models Hatta and Yang
(2018); Mamo and Zahed (2022). At the EIC, one may consider deeply-virtual
$J/\Psi$ production to measure gluon matrix elements directly. In the large
$Q^{2}$ and skewness-$\xi$ limit, the twist-2 gluon GFF and twist-4 $F^{2}$
matrix elements (the latter enhanced by $1/\alpha_{s}$) may dominate. Shown in
the right panel of Fig. 11 is the sensitivity of the cross section to the
anomaly matrix element Boussarie and Hatta (2020).
An indirect approach to accessing the scalar matrix element is to use
momentum-current conservation, $\partial_{\mu}T^{\mu\nu}=0$, through which the
form factors of the tensor part are related to those of the scalar part. The
GFFs were defined in Eq. (6), reproduced here for reference:
$\displaystyle\langle
p_{2}|\hat{T}^{q,g}_{\mu\nu}|p_{1}\rangle=\bar{u}(p_{2})\left[A^{q,g}(t)\frac{P_{\mu}P_{\nu}}{M}+B^{q,g}(t)\frac{i(P_{\mu}\sigma_{\nu\rho}+P_{\nu}\sigma_{\mu\rho})\Delta^{\rho}}{2M}+D^{q,g}(t)\frac{\Delta_{\mu}\Delta_{\nu}-g_{\mu\nu}\Delta^{2}}{4M}+M\bar{c}^{q,g}(t)g_{\mu\nu}\right]u(p_{1})$
One of the combinations yields the (twist-four) scalar form factor Ji (2021)
$G_{s}(t)=MA\left(t\right)+B(t)\frac{t}{4M}-D(t)\frac{3t}{4M}\ ,$ (16)
which contains only the twist-two contributions from the tensor part due to
the conservation law. Thus, to get the contribution of the trace anomaly term,
either in experiments or from lattice QCD simulations, one needs to measure
the form factors $A$, $B$ and $D$ from combined quark and gluon contributions.
The Fourier transformation of the $G_{s}(t)$ from lattice QCD Hagler _et al._
(2008); Shanahan and Detmold (2019a) is shown as the dotted line in the middle
panel on Fig. 11. Shown also as dots in the same panel is the anomaly
contribution from lattice QCD He _et al._ (2021).
Figure 11: Left: the proton mass decomposition into different sources,
calculated from lattice QCD, including the quark mass ($H_{m}$), quark and
gluon kinetic and potential energy ($H_{g}$, $H_{E}$), and quantum anomalous
energy contributions ($H_{a}$) Yang _et al._ (2018); Alexandrou _et al._
(2017a). Middle: the scalar density distribution in space, which can be
constructed from the GFFs He _et al._ (2021); Hagler _et al._ (2008); Shanahan
and Detmold (2019a). Right: differential cross section $d\sigma/dt$ in units
of nb/GeV$^{2}$ for exclusive threshold $J/\Psi$ production at the EIC as a
function of $|t|$ at $W=4.4$ GeV, $Q^{2}=64$ GeV$^{2}$. The dashed curves are
for $D^{g}=0$ and the solid curves for nonzero $D^{g}$ (from LQCD). The split
between the two solid curves, or the two dashed curves, is caused by the
variation of the gluon scalar matrix element over $0<b<1$ Boussarie and Hatta
(2020).
#### II.1.2 Mass radius and “confining” scalar density
The energy density profile in space requires study of the elastic form factors
of the EMT as in the case of electric charge distribution. The relevant
mass/energy ($T^{00}$) form factor in the Breit frame is
$G_{m}(t)=MA\left(t\right)+B(t)\frac{t}{4M}-D(t)\frac{t}{4M}\ .$ (17)
As discussed extensively in the literature, when a particle has a finite mass,
the spatial resolution of a coordinate-space distribution is limited by its
Compton wavelength. In the case of the nucleon, this is about 0.2 fm. Since
the nucleon charge diameter is around 1.7 fm, one can talk about an
approximate coordinate-space profile. Thus, one can define the spatial
distribution of energy as the Fourier transformation of the mass form factor
Polyakov and Schweitzer (2018a)
$\rho_{m}(r)=\int\frac{{\text{d}}^{3}\bf q}{(2\pi)^{3}}e^{i\bf{q}{\cdot}{\bf
r}}G_{m}(t)\ .$ (18)
The alternative is to interpret the nucleon form factors in the infinite
momentum frame, which yields a 2D profile Freese and Miller (2022).
From the spatial energy distribution, one can define the Sachs-type mass
radius as
$\langle r^{2}\rangle_{m}=6\left.\frac{{\text{d}}G_{m}(t)/M}{{\text{d}}t}\right|_{t=0}=\left.6\frac{{\text{d}}A(t)}{{\text{d}}t}\right|_{t=0}-3\frac{D(0)}{2M^{2}}\ .$ (19)
The recent data on $J/\psi$ production at threshold have motivated extractions
of the proton's mass radius using either VDM or AdS/CFT-type interpretations
Kharzeev (2021); Mamo and Zahed (2021a). A QCD factorization study indicates
that a connection with the gluon contribution can be established, while the
quark contribution can be obtained through a similar form factor. Both
contributions have been computed in lattice QCD Hagler _et al._ (2008);
Shanahan and Detmold (2019a), from which one can extract a mass radius of
0.74 fm Ji (2021).
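To see how the two terms of Eq. (19) combine numerically, one can evaluate it
for an assumed dipole ansatz $A(t)=(1-t/\Lambda^{2})^{-2}$ (so
$A^{\prime}(0)=2/\Lambda^{2}$) together with an assumed $D(0)$; the values of
$\Lambda$ and $D(0)$ below are illustrative round numbers, not fits to data:

```python
import math

# Sachs-type mass radius from Eq. (19): <r^2>_m = 6 A'(0) - 3 D(0)/(2 M^2),
# with an assumed dipole ansatz A(t) = 1/(1 - t/Lambda^2)^2 and an assumed
# D-term; neither number comes from experiment or lattice data.
HBARC = 0.19733   # GeV*fm, to convert GeV^-2 to fm^2
M = 0.938         # nucleon mass, GeV
LAMBDA = 1.0      # assumed dipole mass, GeV
D0 = -2.0         # assumed D-term, negative as physically expected

r2_GeV = 6.0 * (2.0 / LAMBDA**2) - 3.0 * D0 / (2.0 * M**2)  # in GeV^-2
r2_fm = r2_GeV * HBARC**2                                   # in fm^2
print(f"<r^2>_m = {r2_fm:.3f} fm^2, r_m = {math.sqrt(r2_fm):.2f} fm")
```

With these inputs the radius comes out in the same ballpark as the 0.74 fm
quoted above; note that a negative $D(0)$ *increases* the mass radius relative
to the $A^{\prime}(0)$ term alone.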
Another interesting quantity is the scalar density,
$\rho_{s}(r)=\int\frac{{\text{d}^{3}}{\bf q}}{(2\pi)^{3}}e^{i{\bf
q}{\cdot}{\bf r}}G_{s}(t)\ ,$ (20)
defining a scalar field distribution inside the nucleon. $G_{s}(t)$ can be
deduced either directly from the trace part of the EMT or indirectly through
the form factors of the twist-2 tensor, as discussed above. This scalar field
is the analogue of the MIT bag constant $B$, which is constant inside the
nucleon but zero outside, and may be considered a confining scalar field. A
plot of an LQCD calculation of the scalar density Shanahan and Detmold (2019a)
is shown in the middle panel of Fig. 11.
One can define the scalar, or confining, radius as
$\langle r^{2}\rangle_{s}=6\left.\frac{{\text{d}}G_{s}(t)/M}{{\text{d}}t}\right|_{t=0}=\left.6\frac{{\text{d}}A(t)}{{\text{d}}t}\right|_{t=0}-9\frac{D(0)}{2M^{2}}\ ,$ (21)
which can be compared with the bag radius. The difference between the
confining and mass radii is
$\langle r^{2}\rangle_{s}-\langle r^{2}\rangle_{m}=-6\frac{D(0)}{2M^{2}}\ .$
(22)
Therefore, a consistent physical picture that the confining radius is larger
than the mass radius requires the $D$-term $D(0)<0$ Ji (2021).
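Eq. (22) can be put in numbers with the same assumed $D(0)$ as above (an
illustrative value, not a measurement):

```python
# Radii difference from Eq. (22): <r^2>_s - <r^2>_m = -6 D(0)/(2 M^2).
# D0 is an assumed illustrative value; only its sign is the physical
# expectation D(0) < 0.
M = 0.938            # nucleon mass, GeV
D0 = -2.0            # assumed D-term
HBARC2 = 0.19733**2  # (GeV*fm)^2, converts GeV^-2 to fm^2

dr2 = -6.0 * D0 / (2.0 * M**2) * HBARC2  # Eq. (22), in fm^2
assert dr2 > 0  # D(0) < 0 implies the confining radius exceeds the mass radius
print(f"<r^2>_s - <r^2>_m = {dr2:.3f} fm^2")
```

The sign of the difference, not its precise size, is the model-independent
content: a negative $D$-term makes the confining radius larger than the mass
radius, as stated above.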
### II.2 Nucleon Spin Structure
The spin structure of the nucleon has been one of the most important driving
forces in hadronic physics research over the last thirty years.
Non-relativistic quark models make simple predictions about the spin
structure, which have been shown to be incorrect by dedicated deep-inelastic
scattering studies Ashman _et al._ (1989). On the other hand, this is not
unexpected, because the QCD quarks probed in high-energy scattering are
different from the constituent quarks used in simple quark models, and a
connection between them is difficult to establish.
Figure 12: Proton spin structure calculated from lattice QCD. (Left panel) The
covariant spin decomposition Alexandrou _et al._ (2020a). (Middle panel) The
gluon helicity contribution $\Delta G$ calculated from large momentum
effective theory Yang _et al._ (2017); $p_{3}$ is the absolute value of the
3-momentum ${\bf p}=(0,0,p_{3})$. (Right panel) Integrated quark transverse
angular momentum density versus quark momentum fraction, $j_{q}(x)$, of the
proton from LQCD, which can be measured through the twist-2 GPD $E(x)$.
#### II.2.1 Longitudinal-Spin Sum Rules
The most common approach to studying the proton spin is to understand the
longitudinal polarization in the infinite momentum frame, in which quasi-free
quarks and gluons are probed in high-energy scattering Jaffe and Manohar
(1990). In particular, quark and gluon helicity contributions can be measured
by summing over parton helicities, $\Delta\Sigma=\int dx\sum_{i}\Delta
q^{+}(x)$ and $\Delta G=\int dx\,\Delta g(x)$, which appear in leading-twist
scattering observables, where $+$ indicates summing over quarks and
antiquarks. The EIC planned at BNL will make an important study of $\Delta G$
through $Q^{2}$ evolution and two-jet production Accardi _et al._ (2016). A
complete spin sum rule also requires measurement of the partonic orbital
contributions $l_{q,g}=\int dx\,l_{q,g}(x)$, where $l_{q,g}(x)$ are the
orbital angular momenta carried by quarks and gluons with momentum fraction
$x$ Hagler and Schafer (1998), such that
$\frac{1}{2}\Delta\Sigma+\Delta G+l_{q}+l_{g}=\hbar/2\ .$ (23)
This spin sum rule was derived from the QCD angular momentum operator by Jaffe
and Manohar Jaffe and Manohar (1990). Since the proton helicity does not grow
with the momentum of the proton, it is a twist-three quantity in high-energy
scattering. Thus, a measurement of the partonic $l_{q}(x)$ and $l_{g}(x)$
requires experimental data on twist-three generalized parton distributions
Hatta (2012); Ji _et al._ (2013); Hatta and Yoshida (2012), which will be
challenging at the EIC Guo _et al._ (2022); Bhattacharya _et al._ (2022).
Therefore, it appears that the longitudinal spin structure is not simple to
measure and interpret in the IMF. This, however, is not the case if one
instead considers a gauge-invariant sum rule Ji (1997a),
$\frac{1}{2}\Delta\Sigma+L_{q}+J_{g}=\hbar/2\ ,$ (24)
which is not based on partons; here $L_{q}$ and $J_{g}$ are related to the
GFFs through $J_{g}=(A_{g}(0)+B_{g}(0))/2$ and
$J_{q}=\Delta\Sigma/2+L_{q}=(A_{q}(0)+B_{q}(0))/2$. This sum rule is
frame-independent, but does not have a simple partonic interpretation when
going to the IMF. On the other hand, $J_{q}$ and $J_{g}$ can be extracted from
twist-2 GPDs,
$J_{q,g}=\frac{1}{2}\int{\text{d}}x\,x(E_{q,g}(x,\xi,t=0)+H_{q,g}(x,\xi,t=0))\ .$ (25)
In the IMF, the twist-2 $L_{q}$ contains both the twist-three parton orbital
angular momentum $l_{q}$ and a contribution from the potential orbital angular
momentum. This connection between twist-2 and twist-3 observables is a
reflection of Lorentz symmetry, through which one can construct the
frame-independent longitudinal spin sum rule by measuring the twist-two GPDs
Ji (1998b).
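The bookkeeping of Eq. (24) is easy to make concrete. The inputs below are
assumed round numbers of the size typical of lattice determinations, not
quoted from any specific calculation:

```python
# Frame-independent spin sum rule, Eq. (24):
# (1/2) DeltaSigma + L_q + J_g = 1/2   (in units of hbar).
# The inputs are assumed round numbers, not results of any one calculation.
delta_sigma = 0.40  # quark helicity contribution (DeltaSigma)
J_g = 0.27          # gluon total angular momentum

# The sum rule then fixes the quark orbital angular momentum
L_q = 0.5 - 0.5 * delta_sigma - J_g

assert abs(0.5 * delta_sigma + L_q + J_g - 0.5) < 1e-12
print(f"L_q = {L_q:.2f} hbar")
```

With such inputs the quark orbital piece comes out small, illustrating how the
sum rule turns measurements of $\Delta\Sigma$ and $J_{g}$ into a determination
of $L_{q}$.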
Lattice QCD calculations of the angular momentum structure of the nucleon have
been performed by a number of groups (see the review in Ji _et al._ (2021a)).
In particular, the frame-independent longitudinal spin sum rule has been
explored with gauge-invariant operators on the lattice. Shown in the left
panel of Fig. 12 is a calculation of the spin sum rule by the ETMC
collaboration Alexandrou _et al._ (2020a). A more recent result from the
$\chi$QCD collaboration can be found in Wang _et al._ (2021). The gluon
helicity contribution $\Delta G$ has been extracted from polarized RHIC
experiments and calculated in large momentum effective theory Yang _et al._
(2017), shown in the middle panel of the same figure.
#### II.2.2 Transverse-Spin Sum Rules
The spin structure of a transversely polarized proton has been less studied,
both theoretically and experimentally. It is not widely known, however, that
the transverse spin in the IMF is simpler to understand than the longitudinal
one Ji _et al._ (2012). This is because the transverse angular momentum
$J_{\perp}$ grows with the momentum of the nucleon,
$J_{\perp}\sim\gamma\to\infty\ ,$ (26)
where $\gamma$ is the Lorentz boost factor Ji and Yuan (2020). $J_{\perp}$ is
then a leading-twist quantity and has a simple twist-2 partonic
interpretation.
Introducing the parton transverse angular momentum distributions $j_{q}(x)$
for quarks and $j_{g}(x)$ for gluons, one has
$j_{q,g}(x)=\frac{1}{2}x\Big{(}E_{q,g}(x,t=0)+\\{q,g\\}(x)\Big{)}\ .$ (27)
Physically, $j_{q,g}(x)$ is the transverse angular momentum density of the
quarks and gluons carrying longitudinal momentum fraction $x$ Ji _et al._
(2012). These densities represent total angular momentum contributions which
cannot be separated into spin and orbital parts, as the former is sub-leading
for transverse polarization. Using the above, one has the simple twist-2
partonic sum rule for the transverse spin,
$\int^{1}_{0}{\text{d}}x\left(\sum_{q}j_{q}(x)+j_{g}(x)\right)=\hbar/2\ ,$ (28)
which is the analogue of the well-known momentum sum rule. Physically,
experimental measurements of $E_{q,g}(x,t)$ are best performed with
transversely polarized targets and leading-twist observables. An example of
$j_{u,d}(x)$ is shown in the right panel of Fig. 12, obtained from a lattice
calculation of $E_{q}(x)$ and the phenomenological $q(x)$.
There is another transverse spin sum rule at the twist-3 level, which is the
rotated version of the Jaffe-Manohar sum rule for longitudinal spin Guo _et
al._ (2021a),
$\frac{1}{2}\Delta\Sigma_{T}+\Delta G_{T}+l_{qT}+l_{gT}=\hbar/2\ .$ (29)
The numerical values of these quantities are the same as those without the $T$
subscript. However, they are integrals of twist-3 parton densities, e.g.,
$\Delta\Sigma_{T}=\sum_{q}\int{\text{d}}x~{}(\Delta q^{+}(x)+g_{2}^{q}(x))$,
where $g_{2}$ is a well-known transverse-spin distribution which integrates to
zero, and similarly for the others. As in the Jaffe-Manohar sum rule, the
twist-3 parton densities are very challenging to measure experimentally.
### II.3 D-term and strong forces in the interior of the nucleon
The gravitational form factors $A^{q,g}(t)$, $B^{q,g}(t)$, $\bar{c}^{q,g}(t)$,
$D^{q,g}(t)$ defined in Eq. (6) contain information on the spatial
distributions of the energy density, angular momentum, and internal forces.
The interpretation in the Breit frame, where
$P^{\mu}=\frac{1}{2}(p^{\prime}+p)^{\mu}=(E,0,0,0)$ and
$\Delta^{\mu}=(p^{\prime}-p)^{\mu}=(0,\vec{\Delta})$, is done by introducing
the static EMT by means of a 3D Fourier transform as Polyakov (2003)
$\displaystyle
T_{\mu\nu}(\vec{r})=\int\frac{d^{3}\Delta}{2E(2\pi)^{3}}\,e^{-i\vec{\Delta}\cdot\vec{r}}\langle
p_{2}|\hat{T}_{\mu\nu}|p_{1}\rangle\,.$ (30)
The interpretation can also be performed in frames other than the Breit frame
Lorcé _et al._ (2019) or in terms of 2D densities Lorcé _et al._ (2019);
Freese and Miller (2021a, 2022), with Abel transformations allowing one to
switch back and forth between the 2D and 3D interpretations Panteleeva and
Polyakov (2021). The consideration of 2D densities for a nucleon state boosted
to the infinite momentum frame is of particular advantage, as then the
transverse center of mass of the nucleon is well-defined Burkardt (2000). In
other frames and in the 3D case this is not possible, which prevents the 3D
spatial EMT distributions from being exact probabilistic parton densities.
The reservations are similar to those concerning the interpretation of the
electric form factor $G_{E}(t)$ in terms of a 3D electrostatic charge
distribution and the definition of a charge radius (which, despite all
caveats, gives us an idea of the proton size). The 3D formalism is
nevertheless mathematically rigorous Polyakov and Schweitzer (2018a), and the
3D interpretation is valid from a phase-space point of view Lorcé (2020),
becoming exact for the nucleon in the limit of a large number of colors
$N_{c}$ Goeke _et al._ (2007); Polyakov and Schweitzer (2018a); Lorcé _et al._
(2022).
In Eq. (30) we quote the total static EMT,
$T_{\mu\nu}={T}^{q}_{\mu\nu}+{T}^{g}_{\mu\nu}$, but one can also define
separate quark and gluon static EMTs Polyakov (2003). The meaning of the
different components of the static EMT is intuitively clear, with
$T_{00}(\vec{r})$ denoting the energy density, which yields the nucleon mass
when integrated over space, and $T_{0k}(\vec{r})$ being related to the spatial
distribution of the angular momentum, which upon integration over space yields
the nucleon spin $\frac{1}{2}$. The distributions of energy density and
angular momentum are unknown, but in both cases we at least know their
integrals very well, namely the total nucleon mass and total spin
$\frac{1}{2}$.
Arguably the most interesting components of the static EMT are
$T_{ij}(\vec{r})$, for two reasons. First, they describe the stress tensor and
the distribution of internal forces Polyakov (2003) and are related to the
$D$-term, a property on the same footing as mass, spin, and other fundamental
characteristics of the proton Polyakov and Weiss (1999), which was completely
unknown until recently. It is worth pointing out that a free non-interacting
fermion has a mass and spin but no $D$-term Hudson and Schweitzer (2018),
which hence emerges as a particle property generated by the dynamics and
interactions of a theory. Second, in order to access the quark and gluon
distributions of energy density and angular momentum, knowledge of all GFFs is
needed; these are encoded in GPDs via Eqs. (7, 8), which in turn are encoded
in the Compton form factors in Eq. (LABEL:CFF), the actual observables in
DVCS. In comparison, information on the GFF $D^{q}(t)$ can be inferred much
more directly from measurements of the Compton form factors via the fixed-$t$
dispersion relation in Eq. (9).
#### II.3.1 Stress tensor
The key to investigating the mechanical properties of the proton is the stress
tensor $T_{ij}(\vec{r})$, which is symmetric and can be decomposed into a
traceless part and a trace as
$T^{ij}(\vec{r})=\biggl{(}e_{r}^{i}\,e_{r}^{j}-\frac{1}{3}\,\delta^{ij}\biggr{)}\,s(r)+\delta^{ij}\,p(r)\,$
(31)
with $s(r)$ known as the distribution of shear forces and $p(r)$ as the
distribution of pressure forces, while $e_{r}^{i}$ are the components of the
radial unit vector $\vec{e}_{r}=\vec{r}/|\vec{r}|$. The distributions $s(r)$
and $p(r)$ are not independent of each other but are related by the
differential equation
$\frac{2}{3}\,s^{\prime}(r)+\frac{2}{r}\,s(r)+p^{\prime}(r)=0$, which
originates from energy-momentum conservation, $\nabla^{i}T^{ij}(\vec{r})=0$.
At this point it is worth stressing that the distributions of energy density
and angular momentum can be equally well discussed in the 2D interpretation,
but pressure, i.e. a force acting on a surface element, is intrinsically a 3D
concept. (One can introduce the notion of a 2D pressure Lorcé _et al._ (2019);
Freese and Miller (2021a, 2022), but in that case one loses the connection to
the familiar meaning of pressure in physics and in daily life.)
If the form factor $D(t)$ is known, the distributions $s(r)$ and $p(r)$ can be
determined via the relations Polyakov and Schweitzer (2018a)
$\displaystyle s(r)$ $\displaystyle=$
$\displaystyle-\frac{1}{4M}r\frac{d}{dr}\frac{1}{r}\frac{d}{dr}\widetilde{D}(r)\,,$
(32) $\displaystyle p(r)$ $\displaystyle=$
$\displaystyle\frac{1}{6M}\frac{1}{r^{2}}\frac{d}{dr}r^{2}\frac{d}{dr}\widetilde{D}(r)\,,$
(33) $\displaystyle{\rm with}~{}~{}~{}\widetilde{D}(r)$ $\displaystyle=$
$\displaystyle\int\frac{d^{3}{\bf\Delta}}{(2\pi)^{3}}\,e^{-i{\bf\Delta}\cdot{\bf
r}}\,D(-{\bf\Delta}^{2})\,.$
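The relations above lend themselves to a quick numerical sanity check. The sketch below (Python, schematic units; the Gaussian profile and all numbers are illustrative assumptions, not fits) builds $s(r)$ and $p(r)$ from a toy $\widetilde{D}(r)$ and verifies both the conservation law $\frac{2}{3}s^{\prime}(r)+\frac{2}{r}s(r)+p^{\prime}(r)=0$ and the von Laue condition; note that consistency with the conservation law requires the $1/(4M)$ normalization of $s(r)$ used in Polyakov and Schweitzer (2018a):

```python
import numpy as np

M = 0.938                        # toy proton mass in GeV (schematic units)
r = np.linspace(0.05, 10.0, 4000)

# Illustrative assumption: Gaussian profile with a negative D-term
D0 = -2.0
Dtilde = D0 * np.exp(-r**2)

def d(f):
    # second-order finite-difference derivative on the radial grid
    return np.gradient(f, r)

def trapz(y):
    return np.sum(0.5*(y[1:] + y[:-1])*np.diff(r))

s = -1.0/(4*M) * r * d(d(Dtilde)/r)          # shear force distribution, Eq. (32)
p = 1.0/(6*M) / r**2 * d(r**2 * d(Dtilde))   # pressure distribution, Eq. (33)

# energy-momentum conservation: (2/3) s' + (2/r) s + p' = 0
residual = (2.0/3.0)*d(s) + (2.0/r)*s + d(p)

# von Laue condition: the r^2-weighted integral of p(r) vanishes
von_laue = trapz(r**2 * p)

print(np.max(np.abs(residual[5:-5])), von_laue)  # both ≈ 0
```

Any other smooth, decaying profile works equally well; the residuals stay at the level of the finite-difference error.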
If the separate $D^{q}(t)$ and $D^{g}(t)$ form factors are known, one can
analogously define “partial” quark and gluon shear forces $s^{q}(r)$ and
$s^{g}(r)$. Also “partial” pressures $p^{q}(r)$ and $p^{g}(r)$ can be defined,
but for that, besides $D^{q}(t)$ and $D^{g}(t)$ respectively, one also needs
the form factor $\bar{c}^{q}(t)=-\bar{c}^{g}(t)$, which is responsible for the
“reshuffling” of forces between the gluon and quark subsystems inside the
proton Polyakov and Son (2018). The instanton vacuum model predicts
$\bar{c}^{q}(t)$ to be very small Polyakov and Son (2018) which would allow
one to define partial quark pressures $p^{q}(r)$ in terms of $D^{q}(t)$ alone.
The form factor $\bar{c}^{q}(t)$ is difficult to access experimentally but it
can be computed in lattice QCD.
An equivalent, compact way to express the relation between $s^{q}(r)$,
$p^{q}(r)$ and the form factor $D^{q}(t)$ is given by (for gluons analogously)
$\displaystyle D^{q}(t)$ $\displaystyle=$ $\displaystyle 4M\int{d^{3}{\bf
r}}\,{\frac{j_{2}(r\sqrt{-t})}{t}}\,s^{q}(r)\,,$ (34) $\displaystyle D^{q}(t)$
$\displaystyle=$ $\displaystyle 12M\int{d^{3}{\bf
r}}\,{\frac{j_{0}(r\sqrt{-t})}{2t}}\,p^{q}(r)\,,$ (35)
where $M$ is the proton mass and $j_{0}$ and $j_{2}$ are the spherical Bessel
functions of zeroth and second order, respectively. Taking the limit $t\to 0$
in Eqs. (34, 35) one obtains two equivalent expressions for the $D$-term
$D=D(0)$ given by
$\displaystyle
D=-\frac{4}{15}\,M\int{d^{3}r}\;r^{2}s(r)=M\int{d^{3}r}\;r^{2}p(r)\,.$ (36)
The derivation of (36) requires the use of the von Laue condition
$\int_{0}^{\infty}dr\,r^{2}p(r)=0$ von Laue (1911), a necessary but not
sufficient condition for stability which follows from energy-momentum
conservation.
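Equations (34)–(36) can be checked end to end with a toy ansatz. Assuming, purely for illustration, a Gaussian $D(t)=D_{0}\,e^{t/\Lambda^{2}}$ (whose Fourier partner $\widetilde{D}(r)$ is known in closed form), the sketch below reconstructs $D(t)$ from $s(r)$ via Eq. (34) and from $p(r)$ via Eq. (35), and recovers the two expressions for the $D$-term in Eq. (36); all parameter values are arbitrary:

```python
import numpy as np

M, D0, Lam = 0.938, -2.0, 1.0        # arbitrary toy values (schematic GeV units)
r = np.linspace(0.01, 20.0, 8000)

# Fourier partner of D(t) = D0*exp(t/Lam^2):
# D-tilde(r) = D0 * (Lam/(2 sqrt(pi)))^3 * exp(-Lam^2 r^2 / 4)
Dtilde = D0 * (Lam/(2*np.sqrt(np.pi)))**3 * np.exp(-(Lam*r)**2/4)

def d(f): return np.gradient(f, r)
def trapz(y): return np.sum(0.5*(y[1:] + y[:-1])*np.diff(r))

s = -1.0/(4*M) * r * d(d(Dtilde)/r)           # Eq. (32), 1/(4M) normalization
p = 1.0/(6*M) / r**2 * d(r**2 * d(Dtilde))    # Eq. (33)

def j0(x): return np.sin(x)/x
def j2(x): return (3/x**3 - 1/x)*np.sin(x) - 3*np.cos(x)/x**2

t = -0.5                                      # toy spacelike momentum transfer
q = np.sqrt(-t)
D_from_s = 4*M  * trapz(4*np.pi*r**2 * j2(q*r)/t     * s)   # Eq. (34)
D_from_p = 12*M * trapz(4*np.pi*r**2 * j0(q*r)/(2*t) * p)   # Eq. (35)

D_term_s = -(4.0/15.0)*M * trapz(4*np.pi*r**4 * s)  # Eq. (36), shear form
D_term_p =             M * trapz(4*np.pi*r**4 * p)  # Eq. (36), pressure form

print(D_from_s, D_from_p, D0*np.exp(t/Lam**2))  # all three agree
print(D_term_s, D_term_p, D0)                   # both reproduce D0
```

The agreement of Eq. (35) with the input relies on the numerical von Laue condition, exactly as in the analytic derivation.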
The stress tensor $T^{ij}(\vec{r})$ is a $3\times 3$ matrix which can be
diagonalized. One eigenvalue is the normal force per unit area,
$p_{n}(r)=\frac{2}{3}\,s(r)+p(r)$, with the pertinent eigenvector
$\vec{e}_{r}$. The other two eigenvalues, which are degenerate in the spin-0
and spin-$\frac{1}{2}$ cases (the degeneracy is lifted only for higher spins),
are referred to as the tangential forces per unit area and are given by
$p_{t}(r)=-\,\frac{1}{3}\,s(r)+p(r)$; their eigenvectors can be chosen to be
unit vectors in the $\vartheta$- and $\varphi$-directions in spherical
coordinates Polyakov and Schweitzer (2018a).
#### II.3.2 Mechanical stability - connection to neutron stars
The normal force makes an appearance if we consider the force
$F^{i}=T^{ij}dS^{j}=[\frac{2}{3}\,s(r)+p(r)]\,dS\,e_{r}^{i}$ within the proton
acting on an area element $dS^{j}=dS\,e_{r}^{j}$. Mechanical stability
requires this force to be directed towards the outside; otherwise the system
would implode. This implies that the normal force per unit area must be
positive definite Perevalova _et al._ (2016):
$\frac{2}{3}\,s(r)+p(r)>0\;.$ (37)
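The eigenstructure can be made concrete with an illustrative Gaussian ansatz $\widetilde{D}(r)=D_{0}e^{-r^{2}}$ with $D_{0}<0$ (a toy profile, not a model prediction). Eqs. (32), (33), with the $1/(4M)$ normalization of $s(r)$, then give closed forms for which $p_{n}(r)$ is positive everywhere while $p_{t}(r)$ changes sign once:

```python
import numpy as np

M, D0 = 0.938, -2.0                 # toy values, schematic units
r = np.linspace(0.01, 6.0, 2000)

# Closed forms for D-tilde(r) = D0*exp(-r^2), from Eqs. (32) and (33)
s = -(D0/M) * r**2 * np.exp(-r**2)              # shear: positive for D0 < 0
p = (D0/(6*M)) * (4*r**2 - 6) * np.exp(-r**2)   # pressure: node at r = sqrt(3/2)

p_n = (2.0/3.0)*s + p    # normal force per unit area, Eq. (37): > 0 everywhere
p_t = -(1.0/3.0)*s + p   # tangential force per unit area: changes sign once

print(p_n.min() > 0, p_t[0] > 0, p_t[-1] < 0)  # True True True
```

For this particular profile $p_{n}(r)=-(D_{0}/M)\,e^{-r^{2}}$, manifestly positive, while $p_{t}(r)$ flips sign at $r=1$; the qualitative pattern (positive normal force, a node in the tangential force) is what general stability arguments lead one to expect.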
At this point it is instructive to notice that this is exactly the condition
which is imposed when calculating the radius of a neutron star. Neutron stars
are basically macroscopic hadronic systems (“giant nuclei”) in which gravity
and general relativity effects cannot be neglected. Based on a chosen model
for the equation of state of nuclear matter, one solves the
Tolman-Oppenheimer-Volkoff equation, which yields the radial pressure inside
the neutron star as a function of the distance $r$ from the center of the
neutron star. In our notation, the radial pressure corresponds to
$\frac{2}{3}\,s(r)+p(r)$. The solution of the Tolman-Oppenheimer-Volkoff
equation yields a radial pressure which is positive in the center and
decreases monotonically until it drops to zero at some $r=R_{\ast}$ and would
become negative for $r>R_{\ast}$. This would correspond to a mechanical
instability and is avoided by defining the point $r=R_{\ast}$ to be the radius
of the neutron star, see for instance Prakash _et al._ (2001). In this way,
within the neutron star the mechanical stability condition (37) is always
valid, and the point where the normal force per unit area drops to zero
coincides with the “edge” of the system.
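The procedure just described fits in a few lines of code. The sketch below is schematic: it uses a simple polytrope $p=K\rho^{\Gamma}$ with the commonly used toy parameters $K=100$, $\Gamma=2$ in units $G=c=M_{\odot}=1$ (not a realistic nuclear equation of state), integrates the Tolman-Oppenheimer-Volkoff equation outwards, and defines the stellar radius $R_{\ast}$ as the point where the pressure drops to zero:

```python
import numpy as np

K, Gamma = 100.0, 2.0     # toy polytropic EoS, units G = c = M_sun = 1
rho_c = 1.28e-3           # illustrative central rest-mass density

def rhs(r, y):
    """Tolman-Oppenheimer-Volkoff right-hand side for y = (p, m)."""
    p, m = y
    rho = (max(p, 0.0)/K)**(1.0/Gamma)
    eps = rho + p/(Gamma - 1.0)               # total energy density
    dpdr = -(eps + p)*(m + 4*np.pi*r**3*p)/(r*(r - 2*m))
    dmdr = 4*np.pi*r**2*eps
    return np.array([dpdr, dmdr])

# fourth-order Runge-Kutta, outwards from the center until p -> 0
dr, r = 1e-3, 1e-6
p_c = K*rho_c**Gamma
y = np.array([p_c, (4.0/3.0)*np.pi*r**3*rho_c])
while y[0] > 1e-10*p_c:
    k1 = rhs(r, y)
    k2 = rhs(r + dr/2, y + dr/2*k1)
    k3 = rhs(r + dr/2, y + dr/2*k2)
    k4 = rhs(r + dr, y + dr*k3)
    y = y + dr/6*(k1 + 2*k2 + 2*k3 + k4)
    r += dr

R_star, M_star = r, y[1]
print(R_star, M_star)  # stellar radius and mass in these units
```

With these toy parameters the integration yields a star of roughly solar-mass scale with $R_{\ast}$ of order ten in these units; the point is the logic, not the numbers: the radius is defined by the zero of the (radial) pressure, exactly as in Eq. (37) for the proton.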
The proton, of course, has no sharp “edge”, being “surrounded” by a “pion
cloud” due to which the normal force does not drop literally to zero but
exhibits a Yukawa-tail-type suppression at large $r$ which becomes
proportional to $\frac{1}{r^{6}}$ in the chiral limit Goeke _et al._ (2007).
In the less realistic but nevertheless very instructive and inspiring bag
model, cf. Sec. II.1, one does have an “edge”, namely at the bag boundary,
where the normal force drops to zero Neubelt _et al._ (2020). However, in
contrast to the neutron star, one does not determine the “edge” of the bag
model in this way. Rather, the normal force drops “automatically” to zero at
the bag radius, reflecting the fact that from the very beginning the bag model
was thoughtfully constructed as a simple but mechanically stable model of
hadrons Chodos _et al._ (1974).
#### II.3.3 Charge and mechanical radius of proton and of neutron
The normal force per unit area $\frac{2}{3}\,s(r)+p(r)$ is an ideal quantity
to define the size of the system, thanks to the positivity in Eq. (37)
guaranteed by mechanical stability. A quantity like the electric charge
distribution can be used to define an electric charge radius for the
positively charged proton, which is a meaningful proxy for the “proton size”.
For an electrically neutral hadron, however, this is not possible. One can
still define an electric mean square charge radius $r_{\rm
ch}^{2}=6\,G^{\prime}_{E}(0)$ in terms of the derivative of the electric form
factor $G_{E}(t)$ at $t=0$. But for the neutron $r_{\rm ch}^{2}<0$, which
gives insight into the distribution of the electric charge inside the neutron
but does not tell us anything about its size. This is ultimately due to the
neutron’s charge distribution not being positive definite.
The positive-definite normal force per unit area $\frac{2}{3}\,s(r)+p(r)$,
Eq. (37), allows us to define the mechanical radius as follows Polyakov and
Schweitzer (2018b, a)
$r_{\rm mech}^{2}=\frac{\int
d^{3}r\,r^{2}\,\biggl{(}\frac{2}{3}\,s(r)+p(r)\biggr{)}}{\int
d^{3}r\,\biggl{(}\frac{2}{3}\,s(r)+p(r)\biggr{)}}=\frac{6\,D(0)}{\int_{-\infty}^{0}dt\,D(t)}\,.$
(38)
Interestingly, this is an “anti-derivative” of a form factor as opposed to the
electric mean square charge radius defined in terms of the derivative of the
electric form factor at $t=0$. With this definition the proton and neutron
have the same radius (up to small isospin violating effects). Another
advantage is that the (isovector component of the) electric mean square charge
radius diverges in the chiral limit which makes it an inadequate proxy for the
proton size in the chiral limit, while the mechanical radius in Eq. (38)
remains finite in the chiral limit Polyakov and Schweitzer (2018a). The
mechanical radius of the proton is predicted to be somewhat smaller than its
charge radius in soliton models at the physical value of the pion mass Goeke
_et al._ (2007); Cebulla _et al._ (2007). In quark models both radii become
equal when one takes the non-relativistic limit Neubelt _et al._ (2020);
Lorcé _et al._ (2022).
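The “derivative versus anti-derivative” contrast can be illustrated with a toy dipole ansatz (all numbers illustrative; the same dipole mass is used in both form factors purely for comparison). For $G_{E}(t)=(1-t/m^{2})^{-2}$ one has $r_{\rm ch}^{2}=12/m^{2}$, while Eq. (38) applied to $D(t)=D_{0}(1-t/m^{2})^{-2}$ gives $r_{\rm mech}^{2}=6/m^{2}$, i.e. half the charge radius squared:

```python
import numpy as np

m2 = 0.71          # toy dipole mass squared in GeV^2 (illustrative assumption)
hbarc = 0.19733    # GeV fm

GE = lambda t: (1 - t/m2)**-2        # toy electric form factor
D  = lambda t: -2.0*(1 - t/m2)**-2   # toy D(t) with the same shape, D(0) < 0

# charge radius: derivative of G_E at t = 0
h = 1e-6
r2_ch = 6*(GE(h) - GE(-h))/(2*h)     # analytic value: 12/m2

# mechanical radius, Eq. (38): "anti-derivative" of D(t)
t = np.linspace(-500.0, 0.0, 2_000_000)   # cutoff approximates -infinity
y = D(t)
integral = np.sum(0.5*(y[1:] + y[:-1])*np.diff(t))
r2_mech = 6*D(0.0)/integral          # analytic value: 6/m2

print(np.sqrt(r2_ch)*hbarc, np.sqrt(r2_mech)*hbarc)  # radii in fm
```

The mechanical radius comes out smaller than the charge radius for this shared-shape toy, in line with the soliton-model expectation mentioned above; for realistic comparisons the two form factors of course have independent $t$ dependences.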
An immediate consequence of the positive-definite nature of the normal force
per unit area $\frac{2}{3}\,s(r)+p(r)$ in Eq. (37), is that the $D$-term
$D=D(0)$ is negative Perevalova _et al._ (2016). This has been confirmed in
model and lattice QCD calculations, see e.g. Goeke _et al._ (2007); Cebulla
_et al._ (2007); Kim _et al._ (2012); Jung _et al._ (2014); Neubelt _et
al._ (2020); Perevalova _et al._ (2016); Neubelt _et al._ (2020); Lorcé _et
al._ (2022) and the review Polyakov and Schweitzer (2018a). The behavior of
the EMT spatial distributions at large $r$ is dictated by the behavior of the
GFFs at small $t$, which can be studied in chiral perturbation theory Alharazin
_et al._ (2020). This allows one to derive a model-independent bound
formulated in terms of a low-energy constant. According to this bound the
$D$-term of the nucleon is negative and $D\leq-0.20\pm 0.02$ Gegelia and
Polyakov (2021).
#### II.3.4 D-term and long-range forces
Among the open questions in theory is the issue of how to define the $D$-term
in the presence of long-range forces such as the electromagnetic interaction.
It was shown in a classical model that the $D(t)$ of the proton diverges for
$t\to 0$ like $1/\sqrt{-t}$ when QED effects are included Varma and Schweitzer
(2020). The form factor $D(t)$ exhibits a divergence for $t\to 0$ due to QED
effects also for charged pions Kubis and Meissner (2000). Similar behavior was
observed for $D(t)$ of the electron in 1-loop QED calculations Metz _et al._
(2021). For the hydrogen atom, a bound state of the electromagnetic
interaction, one also finds conflicting results Ji and Liu (2021, 2022).
These findings are not entirely surprising, as the presence of a massless
state (the photon) in a theory may have profound consequences. Notice that
$D(t)$ is the only GFF which exhibits a divergence for $t\to 0$ when QED
effects are included. This, too, is not surprising given the relation of
$D(t)$ to the forces acting in a system. The behavior of
$D(t)\propto 1/\sqrt{-t}$ at small $t$ is relevant
only in the unmeasurable region of very small $|t|<10^{-3}\rm GeV^{2}$ such
that this is of no practical concern for experiments Varma and Schweitzer
(2020). However, a satisfactory theoretical definition of the $D$-term may
require not only the inclusion of electromagnetic forces but also
gravitational forces which, no matter how weak, are present in every system
and are also long-range forces Hudson and Schweitzer (2017). Notice that
despite the divergence of $D(t)$ due to QED effects, the accompanying
prefactor $(\Delta_{\mu}\Delta_{\nu}-g_{\mu\nu}\Delta^{2})$ ensures that the
matrix element $\langle p_{2}|\hat{T}^{q,g}_{\mu\nu}|p_{1}\rangle$ remains
well behaved in the $t\to 0$ forward limit.
Figure 13: Left: Spatial distribution of the radial force, which has a
positive sign everywhere. Right: Distribution of the tangential force, which
exhibits a node near a distance $r\approx 0.45$ fm from the center, where it
also reverses sign as indicated by the direction of the arrows. The lines
represent the magnitude of the force acting along the orientation of the
surface. Note that pressure acts equally on both sides of a hypothetical
pressure gauge immersed in the system. A positive magnitude of pressure means
that an element of the proton is being pushed on from both directions, i.e.
it is being “squeezed”, while a negative magnitude means it is being pulled
on from both directions, i.e. it is being “stretched”. Lorcé _et al._ (2019);
Freese and Miller (2021b).
The first experimental information from Jefferson Lab experiments allows one
to present a first visualization of the pressure inside the proton. Using the
expression for $D^{q}(t)$ in (10) and the parameterization of $\Delta(t)$ in
Burkert _et al._ (2021b), the Fourier transforms (34) and (35) can be
inverted to determine $s^{q}(r)$, also referred to as the pressure
anisotropy, and $p^{q}(r)$, also referred to as the isotropic pressure.
Figure 13 shows an example of a tangential pressure distribution inside the
proton using parameterizations of $\mathcal{H}(\xi,t)$ and $\Delta(t)$. We
stress that these results have been obtained with parameterizations
extrapolated in the kinematic variables $\xi$ and $t$ into unmeasured
physical territory. The extension of these measurements to higher energies,
including into the EIC kinematic domain, and the availability of transversely
polarized protons will enable experiments with strong sensitivity to the CFFs
$\mathcal{E}(\xi,t)$ and $\mathcal{H}(\xi,t)$ and unprecedented kinematic
coverage.
## III Accessing the Momentum Dependent Structure of the Nucleon in Semi-Inclusive Deep Inelastic Scattering
### III.1 Overview
Accessing the spin dependent and spin averaged nucleon structure encoded in
Transverse Momentum Dependent parton distribution functions (TMD PDFs, or
simply TMDs), as well as subleading-twist parton distribution functions
(twist-3 PDFs), in semi-inclusive deep-inelastic scattering Anselmino _et
al._ (2020) is a central part of the scientific mission of the EIC Abdul
Khalek _et al._ (2021). This program focuses on an unprecedented
investigation of parton dynamics and correlations at the confinement scale
and will benefit substantially from an increased luminosity at medium
energies for the following reasons.
* •
Structure functions appearing at sub-leading twist are suppressed by a
kinematic factor $1/Q$, which makes data at relatively low and medium $Q^{2}$
the natural domain for their measurement. Similarly, effects from the
intrinsic transverse momentum dependence are suppressed at high $Q^{2}$, when
most of the observed transverse momenta are generated perturbatively. As a
consequence, the signal of TMDs is naturally diluted at the highest energies.
However, at the same time $Q^{2}$ has to be high enough for the applicability
of factorization theorems, which makes most fixed target data already
challenging. Running the EIC at low- to medium-CM energies might therefore
occupy a sweet spot at which non-perturbative and subleading effects are
sizeable and current knowledge allows the application of factorization to
extract the relevant quantities Grewal _et al._ (2020). The Sivers asymmetry,
related to one of the most intriguing parton dynamics which will be discussed
below, is shown in Fig. 18 for different EIC energy options, illustrating the
rapid fall of the expected TMD signal as higher and higher $Q^{2}$ is
accessed.
* •
At fixed $Q^{2}$ but lower $\sqrt{s}$, the fractional energy transfer of the
virtual photon $y$ is higher, which is helpful for the extraction of TMDs due
to the more advantageous kinematic factors for asymmetries sensitive to the
helicity of the electron beam and the higher resolution of the reconstruction
of kinematic variables as will be described further below.
The kinematic factor of relevance here is commonly known as the
depolarization factor. It exhibits a strong $y$ dependence and is small for
electron beam helicity dependent asymmetries in phase space with low $y$
Bacchetta _et al._ (2007). Figures 14 and 15 show the magnitude of this
factor for the relevant target and beam spin asymmetries, as well as the
polarization independent cross-section, vs $x$ and $Q^{2}$; see the caption
of Fig. 14 for a more detailed explanation. Figure 14 shows the factor for
the $5\times 41$ beam energy combination and Fig. 15 for the $18\times 275$
combination. The combinations $C/A$ and $W/A$ are suppressed at low $y$,
which has a significant impact at larger $\sqrt{s}$. The former factor
appears in front of the worm-gear PDF in the $A_{LT}$ asymmetry and the
latter in front of the $e$ and $g_{T}$ twist-3 asymmetries in $A_{LU}$ and
$A_{LT}$, where the subscripts indicate beam and target polarizations as
customary. Figures 16 and 17 show the impact of the depolarization factors on
the expected statistical uncertainties vs. $x$ and $Q^{2}$.
Furthermore, at low $y$ the reconstruction of the relevant kinematics in the
Breit frame suffers from low resolution. These issues have been shown to be
significantly improved by using the hadronic final state as input to ML/AI
methods Pecar and Vossen (2022) or by translating the kinematics into the lab
frame Gao _et al._ (2022). However, even with these improvements, larger $y$
still offers advantages in the resolution that can be reached.
* •
To map out the structure of the nucleon encoded in TMDs and twist-3 PDFs, high
precision, multi-dimensional measurements are needed, which requires very high
statistics. For our understanding of the evolution and proper domain of these
objects, it is essential to cover an extended kinematic phase space region
connecting the future collider to the ongoing fixed-target precision
measurements, e.g. by the JLab experiments. Figure 19 shows the estimated
phase space covered by the existing JLab12 program compared to the lowest and
highest EIC energy options.
* •
Finally, intermediate energies have an advantage for a SIDIS program, as its
foremost detector requirements are excellent tracking and particle
identification. The most significant signals are expected for particles that
carry a large momentum fraction $z$ of the fragmenting quark, as these
particles are most closely connected to the original quark properties. As
illustrated in Fig. 20, at intermediate EIC energies, all particles that are
detected at mid-rapidity are within the momentum acceptance range of the
reference detectors. This is not necessarily true for the highest energies,
when particle identification within the typical EIC detector dimensions
becomes challenging.
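The $y$ dependence driving these considerations can be made explicit with the approximate depolarization factors quoted in the caption of Fig. 14 (valid for $\gamma=2Mx/Q\ll 1$); the ratios $C/A$ and $W/A$, which multiply the beam-helicity dependent asymmetries, vanish as $y\to 0$:

```python
import numpy as np

# approximate depolarization factors (small gamma = 2Mx/Q)
A = lambda y: 1 - y + 0.5*y**2
B = lambda y: 1 - y
C = lambda y: y*(1 - 0.5*y)
V = lambda y: (2 - y)*np.sqrt(1 - y)
W = lambda y: y*np.sqrt(1 - y)

for y in (0.05, 0.5, 0.95):
    print(f"y={y:4.2f}  B/A={B(y)/A(y):5.2f}  V/A={V(y)/A(y):5.2f}  "
          f"C/A={C(y)/A(y):5.2f}  W/A={W(y)/A(y):5.2f}")
```

At $y=0.05$ the beam-helicity combinations are suppressed by a factor of roughly twenty relative to $B/A$, which is the quantitative content of the acceptance argument above.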
Figure 14: Relative kinematic factors entering beam and target spin
asymmetries and polarization independent cross-section for $5\times 41$ beam
energy. These so-called depolarization factors are dependent on $y$ and
$\epsilon=\nicefrac{{(1-y-1/4\gamma^{2}y^{2})}}{{(1-y+1/2y^{2}+1/4\gamma^{2}y^{2})}}$
where $\gamma=\nicefrac{{2Mx}}{{Q}}$ Bacchetta _et al._ (2007). The
nomenclature using $A,B,C,V,W$ is taken from Gliske _et al._ (2014). They can
be approximated by $A\approx(1-y+\frac{1}{2}y^{2})$, $B\approx(1-y)$,
$C\approx y(1-\frac{1}{2}y)$, $V\approx(2-y)\sqrt{1-y}$ and $W\approx
y\sqrt{1-y}$. The rows indicate the different beam and target polarization
combinations while the first two columns relate to twist-2 quantities and the
third column to twist-3 quantities. For the spin independent cross-section,
the factor $A$ impacts the transverse momentum independent part, $B$ the
asymmetry relating to the Boer-Mulders $h_{1}^{\perp}$ function and $V$ the
asymmetry relating to the twist-3 FF $D_{1T}$. For the target-spin asymmetries
$UL$ and $UT$, the factor $B/A$ impacts the extraction of the transversity,
pretzelosity $h_{1T}^{\perp}$ and worm-gear $h_{1L}^{\perp}$ asymmetries,
whereas $V/A$ impacts the extraction of $h_{L}$ from $UL$ asymmetries. While
the factors described so far become small only for large $y$, the factors
entering asymmetries with respect to the beam helicity, $LU$, $LL$ and $LT$
shown in the third row become small for medium and small $y$. Here the $C/A$
factor enters the extraction of the wormgear (LT) and helicity dependent FFs,
whereas $W/A$ enters the extraction of the twist-3 PDFs $g_{T}$ and $e$. As
illustrated by the figures, beam helicity dependent asymmetries are
significantly suppressed at low values of $y$. This restricts the minimal
$Q^{2}$ value that can be accessed and limits the statistical precision of the
measurement. Figure 15: Like Fig. 14 but for $18\times 275$ beam energies.
Due to the higher $\sqrt{s}$, the accessible $Q^{2}$ range for TMDs extracted
from beam-helicity dependent asymmetries is higher. At large $\sqrt{s}$ a
large fraction of the data is at low $y$, making these measurements even more
challenging. Figure 16: Quantity $(\sqrt{N_{i}}/d)^{-1}$ for the $5\times 41$
configuration, where $N_{i}$ is the normalized count rate in a bin and $d$ the
depolarization factor. The quantity is proportional to the relative
statistical uncertainty in the respective bin with a proportionality factor of
$N^{-1}_{\textrm{total}}$. This illustrates the relative statistical
uncertainties one can reach for TMDs dependent on different polarization
factors. Figure 17: Same as Fig. 14, but for the $18\times 275$ beam energy
configuration. The relative impact of the depolarization factor on asymmetries
dependent on the electron beam helicity is increased due to the phase space
distribution of the data.
The remainder of this section is organized as follows. Section III.2 will
discuss the physics case for twist-3 observables, and Sec. III.3 will give a
short overview of the TMD framework and impact studies for the unpolarized
and Sivers TMDs, which were identified as golden channels in the Yellow
Report. This section will also briefly discuss TMDs in medium. Finally, Sec.
III.4 will introduce the case for jet physics at intermediate energies and
high luminosity. Radiative corrections might complicate the picture, as the
impact on cross-sections and asymmetries can be sizable, depending on the
kinematic regime. The interplay between radiative corrections and TMD
extraction is still very much under investigation, with recent studies
Akushevich and Ilyichev (2019a); Liu _et al._ (2021) showing potentially
significant effects on the angular reconstruction for TMDs in certain parts
of the phase space. However, as those studies are still in their initial
stages, these effects are not considered for the studies shown in this
section.
Figure 18: Left: Projected Sivers asymmetry for various EIC run settings.
(Example for ATHENA pseudodata), 2% point-to-point systematic uncertainties
assumed. Right: projected Sivers asymmetries for 100 days of data taking at
each CM setting with the baseline luminosity vs. $Q^{2}$ for $0.25<x<0.35$ and
$0.4<z<0.6$ at the luminosity optimized EIC, JLab12 and the proposed JLab24.
For the JLab projections, the acceptance of the CLAS detector is used. The
proposed SoLID experiment will be able to run at higher luminosity values and
is expected to improve on these projections Chen _et al._ (2014); Sol . The
drop of the amplitude with $Q^{2}$ is evident. At the same time the projected
uncertainties rise, as the valence quark region is harder to access at high
$Q^{2}$. A constraint of $y>0.05$ is used for this figure. Figure 19:
Estimated coverage of JLab12, HERMES and EIC data for different energy
configurations. The need to deliver high luminosity for the low and medium
energy configurations to fill in the phase space between fixed target
experiments and the higher EIC options is obvious. The data are constrained to
$y>0.05$. Figure 20: Acceptance of an exemplary EIC detector (here: ATHENA) in
laboratory frame $\eta/p$ for various energy configurations and $x,Q^{2}$
regions. PID limits, exemplary for the ATHENA proposal, are indicated with red
lines. At the highest energies a significant fraction of high-$z$ particles is
outside the PID range. The horizontal axes are momenta from 0.1 to 100 GeV,
and the vertical axes are pseudo-rapidity from -4 to +4.
### III.2 Accessing Quark-Gluon Correlations at sub-leading Twist
The interest in contributions that are suppressed by factors of $(M/Q)^{t-2}$
has recently grown with the possibility to access them in low-energy
experiments, such as HERMES and CLAS. Moderate $Q^{2}$ values at the EIC will
offer unique opportunities for precision analyses of higher-twist
distribution functions. Such PDFs are often associated with multi-parton
correlations as, to some extent, the operator that defines such objects is
made of quark and gluon fields. Such operators are almost unexplored by
phenomenology Efremov
_et al._ (2003); Accardi _et al._ (2009); Sato _et al._ (2016); Aschenauer
_et al._ (2016); Boglione and Prokudin (2016); Anselmino _et al._ (2020). As
argued below, the physics of twist-$3$ distributions is broader than the
already important quark-gluon-quark interaction, whose third Mellin moments
receive an interpretation in terms of forces Burkardt (2013).
A well-known example of higher-twist objects is the twist-$3$ contribution to
the axial-vector matrix element, $g_{T}$. The latter can be expressed in terms
of a leading-twist distribution through the Wandzura-Wilczek relation, and a
genuine twist-$3$ contribution. Data have shown that the genuine term is not
necessarily small Accardi _et al._ (2009); Sato _et al._ (2016). In the
Yellow Report for the EIC, access to $g_{T}$ through the double-spin
asymmetry $A_{LT}$ in inclusive DIS has been proposed as the golden channel
towards the
study of multi-parton correlations. It was shown that the impact on the
uncertainty, based on the previous JAM analyses, is expected to be
significant. Figure 21 shows the impact of the EIC data with high luminosity
at low and medium energies on $g_{T}$ extraction.
Figure 21: Impact of EIC data with high luminosity at low/medium energies on
the $g_{T}$ extraction. The improvement at high $x$ is moderate (but not
zero) due to pre-existing data. This extraction uses data at $18\times 275$,
$10\times 100$, $5\times 100$ and $5\times 41$, assuming an integrated
luminosity of $10$ fb${}^{-1}$ at $18\times 275$ and the other energies
scaled according to their relative instantaneous luminosities.
The scalar PDF, $e(x)$, is preeminent in that it relates to diverse aspects of
non-perturbative dynamics, such as the scalar charge of the nucleons and an
explicit quark-mass term, in addition to the quark-gluon correlations. The
scalar charge is particularly interesting in view of the mass decomposition of
the proton as it constitutes a unique avenue towards the phenomenological
extraction of the scalar condensate Efremov and Schweitzer (2003). While there
exist semi-phenomenological approaches to the determination of the pion-
nucleon sigma-term, e.g. Alarcon _et al._ (2012); Hoferichter _et al._
(2016), the twist-$3$ $e(x)$ can provide a determination that is minimally
biased by the underlying theoretical assumptions. Some model dependence is,
based on our current understanding, inevitable, since the extraction of the
sigma-term requires knowledge of $e(x)$ in particular down to $x=0$, which is
not experimentally accessible. Access to the scalar PDF through longitudinal
beam-spin asymmetries in (dihadron) SIDIS Bacchetta and Radici (2004) was
proposed as a silver channel in the Yellow Report. To date, the scalar PDF
has been accessed at JLab, in CLAS Mirazita _et al._ (2021) and CLAS12
Hayward _et al._ (2021), for low values of $Q^{2}$ and $x$ ranging from
$0.1$ to $0.5$, leading to the first point-by-point phenomenological extraction
Courtoy _et al._ (2022). While the parameterization of $e(x)$ is still a work
in progress, the impact from the EIC was shown to be significant thanks to the
broad kinematical reach. The $x$ range will be extended towards small-$x$
values, in the region relevant for the evaluation of the sum rules – such as
the relation to the scalar charge. The $Q^{2}$ range, spanning a broad window
of mid-$Q^{2}$ values, will allow analyses that account for QCD evolution
effects on each contribution. EIC thus represents a unique opportunity to
expand the current exploratory studies towards global QCD analyses of the rich
phenomenology of higher-twist distribution functions.
In Fig. 22 the theoretical predictions are shown for the contribution of
$e^{a}(x)$ to the beam spin asymmetry in semi-inclusive di-hadron production
in the collinear framework for two different center of mass energies, showing
larger projected asymmetries for lower energies as expected. This asymmetry
receives a contribution not only from $e^{a}(x)$ but also from a term
involving a twist-3 di-hadron fragmentation function together with
$f_{1}^{a}(x)$ Bacchetta and Radici (2004). The latter has not been considered
here Courtoy _et al._ (2022). The uncertainties in Fig. 22 come from the
envelope of the uncertainties on the interference fragmentation function
Radici _et al._ (2015) and two models for $e^{a}(x)$: the light-front
constituent quark model Pasquini and Rodini (2019) and a model of the
mass-term contribution to $e^{a}(x)$ with an assumed constituent quark mass
of $\sim 300$ MeV and the unpolarized PDF from MSTW08LO. All PDFs and
fragmentation functions are taken at $Q^{2}=1$ GeV$^2$ and the projected
uncertainties for the EIC are shown only for $Q^{2}$ values smaller than $10$
GeV$^2$.
Figure 22: Beam spin asymmetry in semi-inclusive di-hadron production.
Predictions corresponding to $Q^{2}=1$ GeV$^2$ based on the di-hadron
fragmentation functions of Ref. Radici _et al._ (2015), low-energy models for
the twist-$3$ PDF $e(x)$ (see text) and MSTW08 for the unpolarized PDF at LO.
Figure is taken from the Yellow Report Abdul Khalek _et al._ (2021). The
twist-3 fragmentation is neglected. The upper and lower panels show two
different energy configurations; the left (blue) and right (green) plots
correspond, respectively, to the fragmentation kinematics ($0.2<z<0.3$,
$0.7<M_{h}<0.8$ GeV) and ($0.6<z<0.7$, $0.9<M_{h}<1.2$ GeV). The bands give
the envelope of the model projections discussed in the text folded with the
uncertainty of the interference fragmentation function. The projected
statistical uncertainties are plotted at zero and correspond to 10
fb${}^{-1}$ at each CM setting. This illustrates that the data at lower
$\sqrt{s}$ will have a larger impact on constraining $e(x)$. Furthermore, the
$Q^{2}<10$ GeV$^2$ data, where the signal is still expected to be sizable,
are restricted to low $x$ for large $\sqrt{s}$, where in turn $e(x)$ is
expected to be small.
Like the leading-twist analyses addressed further below, all higher-twist
analyses will rely on the possibility of separating the contributions of the
various flavors from different observables, and mostly from different
targets. In particular, deuteron and ${}^{3}$He nuclei will provide effective
neutron targets to complement the proton data.
The phenomenological efforts can be paired with the progress made on the
lattice Lin _et al._ (2018a); Constantinou _et al._ (2021). Moments of
higher-twist distributions have been determined on the lattice Bhattacharya
_et al._ (2021a), and frameworks for quasi-PDFs are being studied as well
Braun _et al._ (2021).
Beyond the collinear twist-$3$ mentioned above, there is a plethora of higher-
twist TMDs that could be studied at the EIC. Moreover, the second IR will
grant us the opportunity to explore the relations between twist-$3$ collinear
PDFs and twist-$2$ TMDs, the understanding of which is key for the
interpretation of low-energy dynamics.
### III.3 Measurements of TMDs
The lepton-hadron semi-inclusive deep inelastic scattering (SIDIS) at the EIC
will provide excellent opportunities to probe the confined motion of quarks
and gluons inside the colliding hadron, which are encoded in the transverse
momentum dependent parton distribution functions (TMD PDFs, or simply, TMDs).
With the scattered lepton and an observed hadron (or jet) with sensitivity to
transverse momentum in the final-state, SIDIS provides not only a hard scale
$Q\gg\Lambda_{\rm QCD}$ from the virtuality of the exchanged virtual photon to
localize an active quark or gluon inside the colliding hadron, but also a
natural “soft” scale from the momentum imbalance between the observed lepton
and hadron in the final-state, which is sensitive to the transverse momentum
of the active quark or gluon.
With the one-photon approximation, the “soft” scale is the transverse momentum
of the observed hadron in the photon-hadron (or the Breit) frame, ${\bf
P}_{h_{T}}\gtrsim\Lambda_{\rm QCD}$. When $Q\gg|{\bf P}_{h_{T}}|$, the
unpolarized SIDIS cross section can be factorized as Bacchetta _et al._
(2007),
$\frac{d\sigma^{\rm SIDIS}}{dx_{B}dQ^{2}d^{2}{\bf P}_{h_{T}}}\propto
x\sum_{i}e_{i}^{2}\int d^{2}{\bf p}_{T}\,d^{2}{\bf k}_{T}\,\delta^{(2)}({\bf
p}_{T}-{\bf k}_{T}-{\bf P}_{h_{T}}/z)\,\omega_{i}({\bf p}_{T},{\bf
k}_{T})f_{i}(x,p_{T}^{2})D_{h/i}(z,k_{T}^{2})\equiv{\cal C}\left[\omega
fD\right]\,,$ (39)
which provides direct access to the TMD PDFs $f_{i}(x,p_{T}^{2})$ of flavor
$i$ and transverse momentum $p_{T}^{2}\equiv{\bf p}_{T}^{2}$, and the TMD
fragmentation functions (FFs) $D_{h/i}(z,k_{T}^{2})$ for a parton of flavor
$i$ and transverse momentum $k_{T}^{2}\equiv{\bf k}_{T}^{2}$ to evolve into
the observed hadron $h$ of transverse momentum $P_{h_{T}}$ in this
photon-hadron frame. In Eq. (39), $\omega_{i}({\bf p}_{T},{\bf k}_{T})$ is a
known function depending on the kinematics, the type of TMDs and the
corresponding angles between the parton transverse momenta.
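A standard way to make Eq. (39) concrete is the Gaussian ansatz for the transverse-momentum dependence. With the delta function exactly as written above, ${\bf P}_{h_T}=z({\bf p}_T-{\bf k}_T)$, so the widths add as $\langle P_{hT}^{2}\rangle=z^{2}(\langle p_{T}^{2}\rangle+\langle k_{T}^{2}\rangle)$; note that conventions differ, and the fragmentation width is often quoted for the hadron transverse momentum relative to the fragmenting quark, which changes the $z$ scaling. A Monte Carlo sketch with arbitrary toy widths:

```python
import numpy as np

rng = np.random.default_rng(0)
z, pT2, kT2 = 0.5, 0.3, 0.2      # toy values: <p_T^2>, <k_T^2> in GeV^2
N = 1_000_000

# 2D Gaussians with <|v|^2> = width  =>  per-component variance width/2
p = rng.normal(0.0, np.sqrt(pT2/2), (N, 2))
k = rng.normal(0.0, np.sqrt(kT2/2), (N, 2))

# transverse-momentum delta function of Eq. (39): P_hT = z (p_T - k_T)
PhT2 = np.sum((z*(p - k))**2, axis=1)

print(PhT2.mean(), z**2*(pT2 + kT2))  # Monte Carlo vs analytic width
```

The sampled $\langle P_{hT}^{2}\rangle$ matches the analytic Gaussian-ansatz result; in a realistic extraction the widths are of course $x$-, $z$- and scale-dependent fit parameters, not constants.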
With many more TMDs than collinear PDFs, it will be possible to learn much
more about the QCD dynamics that hold the quarks and gluons together to form
the bound hadron, despite it being harder to extract and separate these TMDs
from experimental data. On the other hand, with a good detector able to cover
the angular distribution between two well-defined planes, the leptonic plane
determined by the colliding and scattered leptons and the hadronic plane
defined by the colliding and observed hadrons, SIDIS measurements at the EIC
will allow the extraction of various TMDs by evaluating independent angular
modulations of the angle between the two planes, as well as the distribution
between the hadron spin vector and one of the planes.
#### III.3.1 Impact on the understanding of TMD factorization and
applicability to fixed target data
The TMD factorization formula, Eq. (39), receives corrections which enter as powers of $\delta\sim P_{hT}/(zQ)$. Identifying the domain of applicability of TMD factorization is not trivial Boglione _et al._ (2017). Recent analyses usually adopt the choice $\delta<0.25$, at least at high $Q$ Bacchetta _et al._ (2017); Scimemi and Vladimirov (2018, 2020); Bacchetta _et al._ (2020a). These restrictions exclude a large fraction of existing measurements, in particular a majority of the data from existing fixed-target experiments. Figure 23 illustrates this issue by showing the results of Ref. Boglione _et al._ (2022), where the regions of pion production in SIDIS at the EIC are studied using the results of Ref. Boglione _et al._ (2019). The so-called affinity to the TMD factorization region, i.e. the probability that the data in a given bin can be described by TMD factorization, is calculated for each bin of the EIC measurements; it spans from 0% to 100% and is indicated by color and symbol size in the figure. One can see from Fig. 23 that only at relatively high $z$ and $P_{hT}$ (and relatively large $x$ and $Q^{2}$) are corrections to the TMD factorization description expected to be negligible. The reach of the EIC data into other regions will be important for studying the connections to other types of factorization, for instance collinear factorization, or the region accessed by fixed-target experiments, where sizable corrections to the current TMD formalism are expected. Comparing this figure with the reach of the different energy options shown in Fig. 20, it can be seen that intermediate beam-energy options such as $10\times 100$ GeV$^{2}$ operate largely in a region where TMD factorization holds, but also cover phase space in the transition region towards other QCD regimes. The flexibility to go from one regime of factorization to the other will be a crucial ingredient in our understanding of QCD, and in the interpretation of the vast amount of fixed-target data, which has a low TMD affinity.
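The power-counting criterion described above is straightforward to apply in practice. A minimal sketch (with hypothetical bin values, not actual EIC binning) that flags whether a SIDIS bin passes a $\delta=P_{hT}/(zQ)<0.25$ cut:

```python
def in_tmd_region(Q, z, PhT, delta_max=0.25):
    """Power-counting proxy for TMD-factorization applicability:
    delta = P_hT / (z * Q) must be small (here < 0.25, as in recent analyses)."""
    return PhT / (z * Q) < delta_max

# Hypothetical SIDIS bins (Q and P_hT in GeV, z dimensionless), for illustration only
bins = [(2.0, 0.3, 0.30),    # low Q, moderate P_hT: delta = 0.50, fails the cut
        (2.0, 0.3, 0.10),    # low Q, small P_hT:    delta = 0.17, passes
        (10.0, 0.5, 0.60)]   # high Q:               delta = 0.12, passes

for Q, z, PhT in bins:
    print(Q, z, PhT, in_tmd_region(Q, z, PhT))
```

The first bin illustrates why fixed-target data at low $Q$ are so strongly affected: even modest $P_{hT}$ drives $\delta$ above the cut.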
Figure 23: TMD affinity for EIC kinematics. Bin centers are located at the points corresponding to the bin-averaged values of $x_{B}$ and $Q^{2}$, and in each of these bins various values of $z_{h}$ and $q_{T}/Q$ can be measured. In each bin of fixed $z_{h}$ and $q_{T}/Q$, the affinity is indicated by a dot with size proportional to the corresponding affinity value. The affinity is color coded according to the scheme on the right of the panels: red (and smaller) symbols correspond to low TMD affinity, while dark blue (and larger) symbols correspond to high TMD affinity. The plot is from Ref. Boglione _et al._ (2022).
Figure 24: Left: Impact on unpolarized TMD measurements integrated within $0<q_{T}/Q<1.0$, $z>0.2$; figure from the ATHENA proposal. Fit based on PV17. The color code shows the datasets with the highest impact at a given $(x,Q^{2})$ point. The assumed point-to-point systematic uncertainty of 2% dominates. However, the extraction of a specific point in $b$ is sensitive to the collected statistics, as shown in the right plot. Right: Impact of the EIC data on the extraction of the CS kernel as a function of $b$ (GeV$^{-1}$) at $\mu=2$ GeV using SV19 as a baseline, compared to several other global extractions not using EIC data. Figure from the Yellow Report Abdul Khalek _et al._ (2021).
#### III.3.2 Impact on TMD PDF extraction
The theoretical description of TMDs has been extensively studied in coordinate space, labeled by $b$, the conjugate variable of transverse momentum. In the large-$b$ region (small $q_{T}\approx p_{T}/z$), TMDs are non-perturbative and encode intrinsic properties of hadrons, while at small $b$, TMDs are dominated by QCD radiation, which is calculable in perturbative QCD. In the latter regime, TMDs can be connected to their corresponding collinear counterparts, such as PDFs and fragmentation functions, offering a new avenue to constrain collinear distributions using TMD observables. While the experimental data are sensitive to all regions in coordinate space, as discussed above, the relative contribution of each region to the physical observables depends on the kinematics of the final-state particles accessible at a given collision energy. Because of this, different collision energies, from low to high, at high luminosity are needed at the EIC in order to systematically probe TMDs in different regions of coordinate space. In the sections below, we concentrate on the impact on the unpolarized TMD PDFs as well as the Sivers TMD PDF as exemplary cases that would profit from increased precision at moderate energies.
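The relation between the two spaces is a two-dimensional Fourier (Hankel) transform, $\tilde f(b)=2\pi\int_{0}^{\infty}dk_{T}\,k_{T}\,J_{0}(b\,k_{T})\,f(k_{T}^{2})$. A minimal numerical sketch (with a purely illustrative Gaussian width, not a fitted TMD) showing that a Gaussian in $k_{T}$ maps onto a Gaussian in $b$, so that large $b$ indeed probes small $k_{T}$:

```python
import numpy as np
from scipy.special import j0  # Bessel function J_0

a = 0.4  # illustrative Gaussian width <k_T^2> in GeV^2 (not a fitted value)

def f_kT(kT):
    """Gaussian TMD in momentum space, normalized to unity."""
    return np.exp(-kT**2 / a) / (np.pi * a)

def f_b(b):
    """Hankel transform to b-space: 2*pi * int dk k J0(b k) f(k^2)."""
    k = np.linspace(0.0, 20.0, 20001)
    dk = k[1] - k[0]
    return 2 * np.pi * np.sum(k * j0(b * k) * f_kT(k)) * dk

for b in (0.5, 1.0, 3.0):                  # b in GeV^-1
    print(b, f_b(b), np.exp(-a * b**2 / 4))  # numeric vs analytic exp(-a b^2 / 4)
```

The analytic transform of the Gaussian is $\exp(-a\,b^{2}/4)$: a narrow $k_{T}$ distribution (small $a$) extends far in $b$, which is why the poorly constrained $b>1$ GeV$^{-1}$ region discussed below corresponds to the small-$k_{T}$, non-perturbative regime.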
#### III.3.3 The impact study on the unpolarized TMDs
The unpolarized TMD distributions and fragmentation functions have been
extracted in Refs. Scimemi and Vladimirov (2018); Bacchetta _et al._ (2017);
Scimemi and Vladimirov (2020); Bacchetta _et al._ (2020a, 2022) (SV17, PV17,
SV19, PV19, MAPTMD22) with high perturbative accuracy up to NNLO and up to
N3LL of TMD logarithmic resummation. The data used in these global analyses
includes Drell-Yan and SIDIS processes measured at fixed target experiments
Derrick _et al._ (1996); Adloff _et al._ (1997); Asaturyan _et al._ (2012);
Airapetian _et al._ (2013); Adolph _et al._ (2013); Aghasyan _et al._
(2018a); Ito _et al._ (1981); Moreno _et al._ (1991); McGaughey _et al._
(1994) at relatively low energies, and the collider measurements at higher
energy scales Aidala _et al._ (2019); Affolder _et al._ (2000); Aaltonen
_et al._ (2012); Abbott _et al._ (2000); Abazov _et al._ (2008, 2010); Aad
_et al._ (2014, 2016); Chatrchyan _et al._ (2012); Khachatryan _et al._
(2017); Aaij _et al._ (2015, 2016a, 2016b). The span in the resolution scale
$Q$ and in observed transverse momentum $q_{T}$ allows for an extraction of
the non-perturbative Collins-Soper kernel (CS-kernel) and the unpolarized
TMDs. These extractions demonstrate agreement between theory and the experimental measurements.
The extremely precise LHC measurements at $Q\simeq M_{Z}$ provide very stringent constraints on the CS-kernel and TMDs in the region of small values of $b$. However, the uncertainty of the extractions grows in the region of $b>1$ GeV$^{-1}$ due to the lack of precise low-$q_{T}$ data. The large-$b$ region is important for understanding the non-perturbative nature of TMDs and the primordial shapes of the TMDs and the CS-kernel. In particular, for the $Q$ range accessed at intermediate energies, $Q\geq 5-10$ GeV, TMDs are only very poorly constrained. Low and intermediate energies at the EIC will naturally provide precision data in this kinematic regime, as shown below. Predictions from various groups differ in this region, see Ref. Vladimirov (2020), and also disagree with the lattice measurements Zhang _et al._ (2020); Shanahan _et al._ (2020); Schlemmer _et al._ (2021). This disagreement is problematic since it points to a limited understanding of the TMD evolution encoded in the CS-kernel, which dictates the evolution properties of all TMDs and describes properties of the QCD vacuum Vladimirov (2020). The measurements from the EIC will fill in the gap between the low-energy and high-energy experiments, and will pin down these functions at higher values of $b$, corresponding to lower values of $k_{T}$. Ultimately, this will help to unravel the 3D nucleon structure in a very wide kinematic region.
Figure 25: Comparison of relative uncertainty bands for the unpolarized u-quark TMD PDF at different values of $b$ as a function of $x$. The light blue band shows the impact of the $18\times 275$ data; the light brown band shows the impact of the $5\times 41$ EIC pseudo-data. The dataset used for these projections is the same as used for the Yellow Report Abdul Khalek _et al._ (2021). In particular, all energy options use the same integrated luminosity.
The unpolarized structure function is the leading contribution to the differential SIDIS cross-section and also serves as the weight for polarized asymmetries. As discussed above, mapping the unpolarized TMD over the full phase space is also necessary to probe TMD evolution effects, which partially cancel in the extraction of spin asymmetries. Therefore, the knowledge of unpolarized TMDs is of paramount importance for the whole momentum tomography program.
To demonstrate the impact, in particular of medium- and low-energy data, we consider the PV17 and SV19 fits. The left panel of Fig. 24 shows the relative impact of the different energy options on the extraction of the PV17-based TMD fit. It is evident that low and medium energies dominate over a wide range of phase space, in particular at intermediate $x$-$Q^{2}$. This is even more impressive considering that the impact plot is based on the baseline luminosities.
The estimation of the impact on the non-perturbative parts of the CS-kernel and the unpolarized TMDs has been done using the SV19 fit as the baseline. The analysis was performed with the inclusion of EIC pseudo-data (in the $5\times 41$, $5\times 100$, $10\times 100$, $18\times 100$ and $18\times 275$ beam-energy configurations). The pseudo-data, generated by Pythia Sjostrand _et al._ (2006), include expected statistical and estimated systematic uncertainties, for a handbook detector design with moderate particle-identification capability. The estimated improvement in the uncertainties of the extraction of the unpolarized TMDs is shown in the right panel of Fig. 24, with $f_{1T}^{u}$ as an example. In general, the main impact in the unpolarized sector occurs for the CS-kernel, whose uncertainty reduces by a factor of $\sim 10$. This is only possible with precise and homogeneous coverage of the $(Q,x,z)$ domain, which can efficiently de-correlate the effects of soft gluon evolution and internal transverse motion.
Fig. 25 shows the impact of the same integrated luminosity with the highest, $18\times 275$, energy configuration and the lowest, $5\times 41$, energy configuration on the extraction of the unpolarized u-quark TMD PDFs at different values of $b$ as a function of $x$. As expected, the lower-energy data have a significant impact in constraining the PDF in the valence-quark region for all $b$, and over the majority of the $x$ range at higher values of $b$. This is thanks to the sensitivity to smaller values of $p_{T}$. Notice that the high-energy option has little impact in the valence region, as large $x$ values can only be accessed at large $Q^{2}$. The combination of low- and high-energy measurements will have the most homogeneous coverage of the kinematics required for the study of TMDs.
#### III.3.4 The impact study on the Sivers functions
Figure 26: Expected impact on the u-quark Sivers function as a function of $x$, as obtained from semi-inclusive pion and kaon EIC pseudo-data for the $10\times 100$ and $18\times 275$ beam-energy configurations, and the combined impact. The fit uses pseudo-data from the EIC reference detector described in the Yellow Report Abdul Khalek _et al._ (2021) and the SV19 fit. Left: impact of equal-time data taking with the base configuration; right: impact of the proposed luminosity increase at low and mid energies.
The non-vanishing Sivers asymmetry triggered a lot of interest in the physics
community and many groups have performed extractions of the Sivers functions
from the available experimental data Anselmino _et al._ (2005); Collins _et
al._ (2006); Vogelsang and Yuan (2005); Anselmino _et al._ (2009); Bacchetta
and Radici (2011); Sun and Yuan (2013); Echevarria _et al._ (2014); Boglione
_et al._ (2018); Luo and Sun (2020); Cammarota _et al._ (2020); Bacchetta
_et al._ (2020b); Echevarria _et al._ (2021); Bury _et al._ (2021a, b).
However, currently the global pool of Sivers asymmetry measurements offers a
relatively small number of data points that could be consistently analysed
using the TMD factorization approach. The future measurements by the EIC will
provide a significant amount of new data in a wide and unexplored kinematic
region, and thus have a decisive impact in the determination of the Sivers
functions.
To determine the impact of EIC measurements on the Sivers function, pseudo-data generated by Pythia-6 Sjostrand _et al._ (2006) were used, with a successive reweighting by a phenomenological model for the Sivers and unpolarized structure functions from Ref. Anselmino _et al._ (2009). The pseudo-data for $\pi^{\pm}$ and $K^{\pm}$ production in $e+p$ and $e+^{3}He$ collisions at the highest ($18\times 275$) and the lowest ($5\times 41$) beam-energy configurations were analyzed. The resulting pseudo-data set is about two orders of magnitude larger than the current data. Performing the fit of the new pseudo-data with the initial set of Sivers functions taken from the global analysis made in Refs. Bury _et al._ (2021a, b), based on the current SIDIS Airapetian _et al._ (2009); Adolph _et al._ (2012, 2017); Alekseev _et al._ (2009); Qian _et al._ (2011) and Drell-Yan Aghasyan _et al._ (2017); Adamczyk _et al._ (2016) measurements, a substantial reduction of uncertainties is obtained. The uncertainty bands are reduced by an order of magnitude for all flavors.
Fig. 26 shows the impact on the uncertainty of the u-quark Sivers function at $b=0$ GeV$^{-1}$ as a function of $x$. The distribution of the impact between the $5\times 41$ and $18\times 275$ beam-energy configurations is similar to the unpolarized case. Namely, the $5\times 41$ configuration constrains mainly the large-$x$ region, while the $18\times 275$ configuration constrains the low-$x$ region. The combined set of data gives the most homogeneous error reduction. In turn, it significantly reduces the uncertainties of integral characteristics. For example, the integral of the Qiu-Sterman function has about $3\%$ uncertainty (in the combined case) versus $6\%$ (for the $18\times 275$ case) or $12\%$ (for the $5\times 41$ case). Figure 18 shows the projected experimental uncertainties compared to projections based on the extraction in Ref. Bacchetta _et al._ (2020b) for more energy options and versus $Q^{2}$. Intermediate energies are most advantageous, since the expected asymmetries are large while still enough statistics for a multi-dimensional analysis are collected. This is particularly evident when plotting the asymmetries versus $Q^{2}$, where the drop of the expected asymmetries at high $Q^{2}$ can be observed, as well as the drop of statistics expected from the EIC in the valence region at high $Q^{2}$.
#### III.3.5 TMDs in nuclei
QCD multiple scattering in the nuclear medium has been demonstrated to be
responsible for the difference between TMDs in bound and free nucleons within
a generalized high-twist factorization formalism Liang _et al._ (2008) and
the dipole model Mueller _et al._ (2016, 2017). In these models, the scale of
the power corrections which modify the relevant distribution for the process
is proportional at leading order to $\alpha_{s}(Q)$, which becomes small at
large $Q$, see for instance Qiu and Vitev (2004, 2006). Thus, while the EIC will be capable of performing $e$-$A$ collisions for a wide range of nuclear targets, a low center-of-mass energy is optimal for probing nuclear-medium modifications to TMDs.
From a phenomenological standpoint, extractions of nuclear modifications to collinear PDFs have been performed in Refs. Eskola _et al._ (1999); de Florian and Sassot (2004); Hirai _et al._ (2007); Eskola _et al._ (2007); Schienbein _et al._ (2009); Atashbar Tehrani (2012); Khanpour and Atashbar Tehrani (2016); Eskola _et al._ (2017); Walt _et al._ (2019); Kovarik _et al._ (2016); Abdul Khalek _et al._ (2019a, 2020), and for the collinear fragmentation functions in Refs. Sassot _et al._ (2010); Zurita (2021).
modifications to the distributions enter into the non-perturbative
parameterizations. In the TMD description, the QCD multiple scattering
naturally leads to a broadening of the transverse momentum distributions.
Recently, the first extraction of unpolarized nuclear-modified TMDs has been performed in Ref. Alrashed _et al._ (2021). The authors of this paper
performed a global analysis at NLO+NNLL to the world set of experimental data
from hadron multiplicity production ratio at HERMES Airapetian _et al._
(2007), Drell-Yan reactions at Fermilab Alde _et al._ (1990); Vasilev _et
al._ (1999) and RHIC Leung (2018), as well as $\gamma^{*}/Z$ production at the
LHC Khachatryan _et al._ (2016); Aad _et al._ (2015). In analogy to the work
that has been done in the past, this analysis took the medium modifications to
enter into the non-perturbative parameterization of the collinear
distributions as well as the parameterization for the non-perturbative Sudakov
factor, which controls the broadening of the transverse momentum distribution.
Despite the success of the work in Alrashed _et al._ (2021) in describing the world set of experimental data, there are currently few data points which can be used to constrain the TMD FFs. While the HERMES measurement of the hadron multiplicity ratio probed a relatively wide kinematic region, the stringent kinematic cuts applied to ensure the data are within the proper TMD region vastly reduce the total number of useful experimental points. Since semi-inclusive DIS is sensitive to both the TMD PDFs and the TMD FFs, experimental measurements within the broad kinematic reach of the EIC at small and medium $Q$ represent the optimal process for probing nuclear modifications to TMDs.
### III.4 Jet Hadronization Studies
Jets are collimated sprays of particles, which are observed in collider
experiments. They exhibit a close connection to energetic quarks and gluons
that can be produced in hard-scattering processes at the EIC Hinderer _et
al._ (2015); Abelof _et al._ (2016); Boughezal _et al._ (2018); Borsa _et
al._ (2020); Arratia _et al._ (2020a); Page _et al._ (2020). Besides event-
wide jet measurements, significant progress has been made in recent years to
better understand jet substructure observables, see Refs. Larkoski _et al._
(2020); Kogler _et al._ (2019); Marzani _et al._ (2019) for recent reviews.
Jet substructure observables can be constructed to be infrared and collinear safe, making them less sensitive to experimental resolution effects. Nevertheless, hadronization corrections can be sizable for these observables.
For several jet substructure observables it is possible to connect the
relevant hadronization correction to universal functions. The scaling of these
functions can be predicted from first principles which can be tested
experimentally by studying jets at different energies and by varying
parameters of specific observables. EIC jets at different center of mass
energies have different quark/gluon fractions and a different quark flavor
decomposition. Therefore, the measurement of jets at high luminosity and low
center of mass energies can provide important complementary information to
better disentangle the flavor decomposition of the hadronization corrections
of jets and also to study their correlation with different initial state PDFs.
Several jet observables that are particularly sensitive to the quark flavor and to quark/gluon differences have been studied in the literature.
Examples include jet angularities Lee and Sterman (2007); Ellis _et al._
(2010); Aschenauer _et al._ (2020); Caletti _et al._ (2021), the jet charge
Waalewijn (2012); Kang _et al._ (2020), angles between jet axes Cal _et al._
(2020), groomed jet substructure Hoang _et al._ (2019), flavor correlations
Chien _et al._ (2021), energy-energy correlators Dixon _et al._ (2019); Chen
_et al._ (2021); Li _et al._ (2021a), jets at threshold Dasgupta _et al._
(2008); Arratia _et al._ (2021), and T-odd jets Liu and Xing (2021); Lai _et
al._ (2022a). The EIC provides a clean environment with a minimal background
contamination from the underlying event/multi-parton interactions making it an
ideal place to study low-energy aspects of jets. In addition, the measurements
of jets for multiple jet radii at different energies may help to explore in
detail the connection of hadron and jet cross sections. Recently, it was
demonstrated that inclusive hadron cross sections can be obtained from
inclusive jet calculations by taking the limit of a vanishing jet radius Neill
and Ringer (2020); Neill (2021).
An important aspect of jet observables is their sensitivity to TMD PDFs and
FFs. For example, lepton-jet cross sections in the laboratory frame Liu _et
al._ (2019a); Kang _et al._ (2021); Hatta _et al._ (2021) and the Breit
frame Gutierrez-Reyes _et al._ (2018, 2019a, 2019b) give access to (spin-
dependent) quark TMD PDFs where the final state radiation can be calculated
perturbatively. Similarly, di-jet production can be used to study gluon TMD
PDFs Kang _et al._ (2019); del Castillo _et al._ (2020). Moreover, the
transverse momentum of hadrons inside the jet relative to the jet axis can
provide access to TMD FFs, which is independent of initial state TMD PDFs
Arratia _et al._ (2020b). Here the choice of the jet axis is important and
different physics can be probed Neill _et al._ (2017). In particular, owing to the separation of initial- and final-state TMD PDFs and FFs, jet observables can provide important complementary information to semi-inclusive deep inelastic scattering.
benefit greatly from measurements over a wide kinematic range. In particular,
high luminosity at the EIC will allow for a unique quark flavor decomposition.
A measurement that is particularly luminosity-hungry is the detection of diffractive di-jet events. This observable is sensitive to the elusive generalized TMDs (GTMDs) Hatta _et al._ (2016, 2020) of gluons. Lower collision energies provide constraints for the moderate-$x$ range of the gluon distribution, while higher energies are sensitive to the small-$x$ gluon distribution. If, as typically assumed, the gluon spin (helicity and orbital angular momentum) is sizable at moderate $x$, it is critical to have very high luminosity at lower/intermediate collision energies at the EIC.
## IV Exotic meson spectroscopy
### IV.1 Motivations for an exotic spectroscopy program at the EIC
Modern electro-/photoproduction facilities, such as those operating at Jefferson Lab, have demonstrated the effectiveness of photons as probes of the hadron spectrum. However, the energy ranges of these facilities are such that
most states with open or hidden heavy flavor are out of reach. This is
unfortunate as there remains significant discovery potential for
photoproduction in this sector. Electron scattering experiments at HERA Chekanov _et al._ (2002); Aktas _et al._ (2006) already observed low-lying charmonia, demonstrating the viability of charmonium spectroscopy in electroproduction at high energies, but were limited by luminosity. Now the
proposed EIC, with high luminosity, will provide a suitable facility for a
dedicated photoproduction spectroscopy program extended to the heavy flavor
sectors. In particular, the study of heavy-quarkonia and quarkonium-like
states in photon-induced reactions will not only be complementary to the
spectroscopy programs employing other production modes but may give unique
clues to the underlying non-perturbative QCD dynamics.
One of the most striking features of quarkonium spectra is the wealth of
observed experimental signals which seem to indicate an exotic QCD structure
beyond conventional $Q\bar{Q}$ mesons. Starting with the observation of the
narrow $\chi_{c1}(3872)$ in the $J/\Psi\,\pi^{+}\pi^{-}$ invariant mass
spectrum by the BELLE Collaboration in 2003 Choi _et al._ (2003), these
states, collectively denoted the $XYZ$’s, now number in the dozens. The
dramatic change in landscape from 2003 up to 2021 is illustrated in figure 27
where new states beyond quark model charmonium are highlighted. These states
exhibit properties which are not consistent with expectations for conventional QCD bound states, for example: large isospin violation in the case of the $\chi_{c1}(3872)$; isovector quarkonium-like character for the $Z$’s; and the supernumerary vector $Y$ states. We refer to reviews such as Brambilla
_et al._ (2020); Olsen _et al._ (2018) for more detailed discussion. The
underlying dynamics governing their nature is not unambiguously known. The
experimental signals of these states, usually in the form of sharp peaks in
invariant mass spectra or broader enhancements that are required to describe
distributions in a more complex amplitude analysis, allow multiple
interpretations of their structure, e.g. multi-quark states, hadron-hadron
molecules, kinematic cusps or triangle singularities. Disentangling these
possibilities is one of the foremost missions of exotic spectroscopy and would
further our understanding of the non-perturbative nature of QCD in heavy
sectors.
One challenge in this endeavor is that, with few exceptions, the $XYZ$ signals
have only been observed in single production modes, usually $e^{+}e^{-}$
annihilation or $B$ meson decays. Observation of any of these states at the
EIC through photoproduction would thus provide independent and complementary
verification of their existence. Further, a ubiquitous feature of $XYZ$ signals is their proximity to open thresholds and the presence of additional particles in the reconstructed final state. This complicates the interpretation of experimental peaks, since kinematic topologies involving nearby open channels may modify or mimic a resonant signal. Here photoproduction provides a unique opportunity to produce $XYZ$ states in isolated final states, thus alleviating the role of kinematic singularities. In this
way a null result may be equally important towards uncovering the spectrum of
genuine bound-states. Additionally the polarized electron and proton beam
setups enable the determination of spin-parity assignments of states for which
these are not yet known. The EIC would also have real discovery potential for
exotic heavy flavor mesons.
A dedicated spectroscopy effort can make meaningful contributions to several aspects of non-exotic quarkonium physics. The theoretical understanding of photoproduction processes conventionally relies on Regge theory and exchange phenomenology, which have been tested extensively in the light sector Nys _et al._ (2018). Measurement of quarkonium photoproduction cross-sections serves as a testing ground for scattering phenomenology in heavy sectors, where perturbative QCD inputs may also be used. In particular, the microscopic structure of the $\gamma Q\bar{Q}$ interaction and assumptions such as Vector Meson Dominance (VMD) may be tested Mamo and Zahed (2021b); Xu _et al._ (2021).
Beyond the charmonium sector, the energy reach of the EIC will also allow the study of near-threshold bottomonium photoproduction, which may be sensitive to the trace-anomaly contribution to the nucleon mass and would be complementary to studies of $J/\Psi$ photoproduction currently underway at Jefferson Lab Ali _et al._ (2019); Meziani _et al._ (2016). Further, this mass range is predicted to also exhibit a rich landscape of pentaquark-like structures Cao _et al._ (2020); Paryev (2020), the as-yet-unobserved hidden-bottom partners of the $P_{c}$ signals observed in the $J/\Psi p$ mass spectra in $\Lambda_{b}$ decays.
Figure 27: Experimentally measured charmonia, XYZ and pentaquark spectra from
Collaboration _et al._ (2021). A ’?’ refers to unknown spin or parity.
#### IV.1.1 Photoproduction with the EIC
Given the many physics opportunities around photoproduction of heavy
quarkonia, new measurements at the EIC will be essential for understanding
both exotic and conventional quarkonium spectra. Photoproduction provides a
flexible production mode, able to produce the full spectrum of hadrons of any
quantum number. This gives such measurements significant discovery potential
and allows mapping out of patterns within the observed spectrum. The trade-off, however, is that the cross sections for photoproducing heavy mesons are small, only up to $\mathcal{O}(1\,\mathrm{nb})$, meaning a dedicated spectroscopy program will require high luminosity at sufficiently large centre-of-mass energies to make a meaningful contribution. The proposed EIC, maintaining high luminosity at its lower centre-of-mass energies, would be well placed to meet these conditions. In particular, even at the lower centre-of-mass settings of 29 and 45 GeV there is sufficient energy to directly produce many exotic states of interest in the charmonium sector, without the kinematic bounds imposed by parent masses in decay processes. Kinematic generation of peaks through final-state interactions, such as triangle diagrams, will also be suppressed over the entire $W$ range.
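The centre-of-mass settings quoted above follow directly from the beam energies via $s\simeq 4E_{e}E_{p}$ (neglecting the electron and proton masses). A quick sketch for the beam-energy configurations discussed in this section:

```python
import math

def sqrt_s(Ee, Ep):
    """Approximate ep centre-of-mass energy for head-on collisions,
    neglecting the electron and proton masses: s = 4 * E_e * E_p."""
    return math.sqrt(4.0 * Ee * Ep)

# Beam-energy configurations (electron x proton, in GeV) mentioned in the text
for Ee, Ep in [(5, 41), (10, 100), (18, 275)]:
    print(f"{Ee} x {Ep} GeV: sqrt(s) = {sqrt_s(Ee, Ep):.1f} GeV")
# e.g. the 5 x 41 configuration gives ~28.6 GeV, consistent with the
# "29 GeV" lower centre-of-mass setting quoted above
```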
When combined with complete measurement of the final state, the polarized
electron and proton beams offer means for detailed partial wave analysis to
disentangle overlapping states, deduce the quantum numbers of resonant states
and study production mechanisms. This is of particular importance for many of
the excited $XYZ$ states which have intrinsically greater decay widths and
contribute to more complicated final states. The use of partial-wave analysis
through polarized photoproduction set-ups for exotic searches is currently
being pursued in the light-quark sector at the GlueX experiment and much of
the expertise will be readily applicable to the EIC setup. This includes the
possibility to measure polarized cross-sections, spin density matrix elements,
and asymmetries.
The variable beam setups of the EIC allow exploration of Primakoff production
of axial vector charmonium Albaladejo _et al._ (2020) and simultaneous
measurement of charged charmonium-like isospin multiplets with deuteron beams.
Additionally, the electroproduction mode of the EIC allows measurement of
$Q^{2}$ dependence and photocouplings, a detailed study of which may be a
reliable probe of the microscopic nature of exotic hadrons Kawamura and Kumano
(2014); Anikin _et al._ (2004). Electroproduction studies are of particular
importance for the $\chi_{c1}(3872)$ and the closely related $\tilde{X}(3872)$
candidate claimed in muoproduction by the COMPASS experiment in the
$J/\Psi\pi^{+}\pi^{-}$ mass spectrum Aghasyan _et al._ (2018b). Although this new state closely resembles the $\chi_{c1}(3872)$ in mass and width, its dipion mass distribution was suggestive of a scalar wave instead of the usual $\rho J/\Psi$ decay mode of the $\chi_{c1}(3872)$, implying a different C-parity. Further, this state was observed in production with an additional pion in the final state but not in exclusive production, raising further
questions as to the nature of the muoproduced peak. Detailed study of the
$J/\Psi\pi\pi$ mass spectra in virtual photoproduction would help to
understand the COMPASS result.
#### IV.1.2 States of interest
The first goal of an exotic spectroscopy program will be to identify the production of the most established states: the $\chi_{c1}(3872)$, $Y(4260)$ and $Z_{c}(3900)$. The decay of these states to a $J/\Psi$ and pions will provide a clean and well-studied final state, and we discuss the prospects for measuring this with the EIC in Section IV.2.5. Beyond that, there are many open questions in XYZ physics, particularly with respect to the nature of peaks in invariant mass distributions, which we hope to address. Here we consider a few examples with decays which should be readily measurable, and make rate estimates for these in Section IV.2.4.
A recent publication from LHCb shows structure in the $J/\Psi K^{+}$ mass
spectrum, which can be reproduced with the addition of two new resonances
with strangeness and hidden charm, the $Z_{cs}(4000)$ and $Z_{cs}(4220)$,
with widths around 100-200 MeV Aaij _et al._ (2021). A similar, narrower
state, the $Z_{cs}(3985)$, has also been seen in $K^{+}D\bar{D}^{*}$ by
BESIII Ablikim _et al._ (2021).
The X(6900) or $T_{c\bar{c}c\bar{c}}(6900)$ tetraquark candidate has been seen
through its decay to 2$J/\Psi$ Aaij _et al._ (2020). Analogous Z states have
been seen in the b-quark sector by Belle, with the $\Upsilon$ or $h_{b}$
mesons in combination with a charged pion Bondar _et al._ (2012). Production
of these states is also well within EIC centre-of-mass energies. In addition,
spectroscopy at the EIC will be able to search in a variety of other final
states, replacing pions with other mesons such as vectors. We can also look
for charm quarks by reconstructing D mesons, whose most accessible decay mode
is $K^{-}\pi^{+}$ with a branching ratio of around 4%, while the branching of
XYZ states into final states with D mesons is likely to be quite high. As
seen later, XYZ decay products populate the detector region relatively
uniformly, giving good potential for reconstructing events including pairs of
D mesons. This would be particularly useful for investigating the molecular
picture of these states.
### IV.2 Estimates for the EIC
#### IV.2.1 JPAC Photoproduction Amplitudes
In order to estimate the feasibility of quasi-real photoproduction for states
of interest at EIC energies, we followed the approach of a recent JPAC
Collaboration study Albaladejo _et al._ (2020). Here, general principles are
used to construct exclusive photoproduction amplitudes of the charmonium
states of interest on a per-helicity-amplitude basis. In this way, the full
kinematic dependence is retained and the production may be propagated along
decay chains to reconstructed final states.
In general, the amplitude for producing a meson $\mathcal{Q}$ via the exchange
of a particle $\mathcal{E}$ with spin $j$ takes the form:
$\langle\lambda_{\gamma}\,\lambda_{N}|T_{\mathcal{E}}|\lambda_{\mathcal{Q}},\lambda_{N^{\prime}}\rangle=\mathcal{T}^{\mu}_{\lambda_{\gamma}\,\lambda_{\mathcal{Q}}}\;\mathcal{P}_{\mu\nu}^{(\mathcal{E})}\;\mathcal{B}^{\nu}_{\lambda_{N}\,\lambda_{N^{\prime}}}$
(40)
where $\mathcal{T}$ and $\mathcal{B}$ are rank-$j$ Lorentz tensors given by
effective interaction Lagrangians, which provide an economical way to satisfy
the kinematic dependencies and discrete symmetries of the reaction. Such
methods have been widely used to motivate searches for exotic hadrons through
photoproduction Wang _et al._ (2015a); Cao _et al._ (2021); Winney _et al._
(2019); Wang _et al._ (2019); Karliner and Rosner (2017); Hiller Blin _et
al._ (2016); Wang _et al._ (2015b); Galata (2011); Lin _et al._ (2013). The
form of the exchange propagator, $\mathcal{P}$, provides a means to consider
production in two kinematic regions of interest: near threshold and at high
energies, where production is expected to proceed through exchanges of
definite-spin and Reggeized particles, respectively. The center-of-mass range
available at the EIC provides wide coverage in energy; thus, for first
estimates, we used a simple linear interpolation between the low- and high-
energy models provided in Albaladejo _et al._ (2020).
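The interpolation step between the two model regimes can be sketched as follows. The crossover window and the two model functions below are illustrative placeholders, not the actual JPAC parameterizations:

```python
def blend_models(W, sigma_low, sigma_high, W_low=9.0, W_high=15.0):
    """Linear interpolation in the centre-of-mass energy W between a
    low-energy (fixed-spin exchange) and a high-energy (Reggeized)
    cross-section model. The window [W_low, W_high] in GeV is an
    illustrative assumption; sigma_low/sigma_high are callables
    returning a cross section (e.g. in nb)."""
    if W <= W_low:
        return sigma_low(W)
    if W >= W_high:
        return sigma_high(W)
    t = (W - W_low) / (W_high - W_low)  # blending fraction in [0, 1]
    return (1.0 - t) * sigma_low(W) + t * sigma_high(W)
```

At the window edges the blend reduces to the respective model, so the interpolation only modifies predictions in the intermediate-energy region.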
#### IV.2.2 Electroproduction
We generalized the aforementioned (real) photoproduction to consider exclusive
electroproduction with low-$Q^{2}$ quasi-real virtual photons via a factorized
model whereby the amplitude for producing a virtual photon beam is followed by
the t-channel photoproduction of the meson. The produced meson subsequently
decays to specific final states which can be measured in the EIC detector:
$\frac{d^{4}\sigma}{ds\,dQ^{2}\,dt\,d\phi}=\Gamma(s,Q^{2},E_{e})\;\frac{d^{2}\sigma_{\gamma^{*}+p\rightarrow V+p}(s,Q^{2})}{dt\,d\phi}$ (41)
$\Gamma(s,Q^{2},E_{e})$ is the virtual photon flux and
$\frac{d^{2}\sigma_{\gamma^{*}+p\rightarrow V+p}(s,Q^{2})}{dt\,d\phi}$ is the
two-body photoproduction cross section calculated from the model of
Albaladejo _et al._ (2020), modified by an additional $Q^{2}$ dependence
taken from Adloff _et al._ (2000). Eqn. (41) was integrated numerically to
give the total cross section for determining event rates. Note that the
virtual photon flux integration leads to a factor of around 0.2 for
$\chi_{c1}(3872)$ production relative to real photoproduction for the 5x41
GeV beams.
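As a rough illustration of the Eq. (41) integration, the sketch below folds the standard leading-order equivalent-photon flux with a toy photoproduction cross section (a flat 10 nb above threshold with a dipole $Q^{2}$ fall-off). The normalisation, dipole scale, and grid sizes are illustrative assumptions, not the JPAC model:

```python
import math

ALPHA = 1.0 / 137.036      # fine-structure constant
ME2 = 0.000511 ** 2        # electron mass squared (GeV^2)
MP, MX = 0.938, 3.872      # proton and chi_c1(3872) masses (GeV)

def photon_flux(y, Q2):
    """Leading-order equivalent-photon flux dN/(dy dQ2)."""
    return ALPHA / (2.0 * math.pi) * (
        (1.0 + (1.0 - y) ** 2) / (y * Q2) - 2.0 * ME2 * y / Q2 ** 2)

def sigma_gamma_p(W, Q2):
    """Toy photoproduction cross section in nb: flat above threshold,
    with a dipole Q2 suppression standing in for an Adloff-style fit."""
    return 0.0 if W < MP + MX else 10.0 / (1.0 + Q2) ** 2

def sigma_ep(E_e=5.0, E_p=41.0, Q2_max=1.0, n=400):
    """Midpoint-rule integration of Eq. (41) over y and log(Q2)."""
    s_ep = 4.0 * E_e * E_p             # ep centre-of-mass energy squared
    total = 0.0
    for i in range(n):
        y = (i + 0.5) / n              # photon energy fraction
        W = math.sqrt(y * s_ep)        # photon-proton mass (masses neglected)
        Q2_min = ME2 * y * y / (1.0 - y)
        if Q2_min >= Q2_max:
            continue
        lo, hi = math.log(Q2_min), math.log(Q2_max)
        dy, dlnQ2 = 1.0 / n, (hi - lo) / n
        for j in range(n):
            Q2 = math.exp(lo + (j + 0.5) * dlnQ2)
            # factor Q2 is the Jacobian from integrating in ln(Q2)
            total += photon_flux(y, Q2) * sigma_gamma_p(W, Q2) * Q2 * dlnQ2 * dy
    return total                       # nb
```

With these toy inputs, the effective flux suppression (the ratio of the integrated ep cross section to the 10 nb photoproduction normalisation) comes out of order 0.1 for 5x41 GeV beams, in the same ballpark as the ~0.2 factor quoted above.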
#### IV.2.3 Other Models
To estimate how reliable our production rates may be, we compared them to
other recently published approaches.
In Yang and Guo (2021) a semi-inclusive production mechanism for hadron
molecules was investigated. Here the molecular constituents were first
photoproduced via Pythia and then allowed to interact to form given $X$ and
$Z$ states. Cross sections for semi-inclusive production were given at the
highest proposed EIC centre-of-mass energy for the $\chi_{c1}(3872)$,
$Z_{c}$, and $Z_{cs}$, and are compared to our estimates for exclusive
production in Table 1. While their estimate for the $\chi_{c1}(3872)$ is an
order of magnitude lower than this work, their $Z_{c}$ cross section is an
order of magnitude higher. We note that the calculations of Yang and Guo
(2021) should be valid at larger $Q^{2}$, in the central region (large
$p_{T}$), while those from Albaladejo _et al._ (2020) should be valid at
$Q^{2}<1~{}\text{GeV}^{2}$ and peak in the peripheral region (small
$p_{T}$), where we expect the bulk of events to be produced.
Using the same method, Ref. Shi _et al._ (2022a) estimates the semi-inclusive
production rates of more exotic hadrons and finds that copious $P_{cs}$
pentaquarks and $\Lambda_{c}\bar{\Lambda}_{c}$ dibaryons can be produced at
the EIC. It is also promising to search for double-charm tetraquarks at the
EIC. In addition, Ref.[2208.02639] also suggests that the possible 24 GeV
upgrade of CEBAF [proper ref.] can play an important role in the search for
hidden-charm tetraquarks and pentaquarks.
A very similar approach to the current work is taken in Xie _et al._ (2021),
where the models of Albaladejo _et al._ (2020) were coupled to a virtual
photon produced from electron-proton scattering interactions. Their results
are compared to ours in Table 1, where our estimates are just over a factor
of 2 lower for the low-energy setting and more comparable for the high-energy
setting. The differences are likely due to our interpolation of the low- and
high-energy models, or the handling of phase space and virtual photon flux
factors when performing the integration. The threshold
$Q^{2}>0.01~{}\text{GeV}^{2}$ is applied in this comparison but not in our
later results, where we integrate over the full allowable $Q^{2}$ range.
In general we can expect integrated cross sections for electroproduction of up
to order 1 nb for production of mesons with charm quarks.
Table 1: Model comparisons. Note that in the Lanzhou calculations cuts are applied to $Q^{2}$ and $W$, as indicated in the column headings, with units in GeV. The same cuts are applied to our calculation when comparing to Lanzhou, but not in the comparisons with Yang. The cut $W>20~{}GeV/c^{2}$ has a very large effect on the calculated electroproduction cross sections, as the photoproduction cross section for X and Z of Albaladejo _et al._ (2020) falls rapidly. Each entry gives our (JPAC) estimate followed by the comparison value.

Meson | 3.5x20, $Q^{2}>0.01$, $W<16$ (JPAC / Lanzhou Xie _et al._ (2021)) | 18x275, $Q^{2}>0.01$, $20<W<60$ (JPAC / Lanzhou Xie _et al._ (2021)) | 18x275, $Q^{2}>0$ (JPAC / Yang Yang and Guo (2021))
---|---|---|---
$\chi_{c1}(3872)$ | 0.47 nb / 1.2 nb | 0.00014 nb / 0.00021 nb | 3.5 nb / 0.216-0.914 nb
$Y(4260)$ | 0.06 nb / 0.2 nb | 1.5 nb / 2.0 nb | 14 nb / -
$Z_{c}^{+}(3900)$ | 0.06 nb / 0.16 nb | 0.00018 nb / 0.00048 nb | 0.41 nb / 3.8-14 nb
#### IV.2.4 Estimates
In table 2 we give estimates for the production of a variety of exotic states
with the EIC. These are based on the models and parameters detailed in
Albaladejo _et al._ (2020), with the addition of the $Z_{cs}(4000)$
production using kaon exchange; and the modification of the X(6900) model to
use a higher branching ratio to $\Psi\omega$ of 3%, which was previously taken
as 1%. These estimates assume a luminosity of $10^{34}$ cm-2s-1. The
additional branching ratios, used to calculate events per day, of
$J/\Psi\rightarrow e^{+}e^{-}$ was taken as 6% and $\Upsilon(2S)\rightarrow
e^{+}e^{-}$ as 1.98%.
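The rate arithmetic behind these estimates can be checked directly. The sketch below uses the $10^{34}$ cm-2s-1 luminosity quoted in the text and, as an example, the $\chi_{c1}(3872)$ numbers; the helper function and its name are ours:

```python
LUMI = 1.0e34             # cm-2 s-1, as quoted in the text
NB_TO_CM2 = 1.0e-33       # 1 nb = 1e-33 cm^2
SEC_PER_DAY = 86400.0

def daily_rates(sigma_nb, br_decay, br_lepton=0.06):
    """Return (productions/day, tagged events/day) for a total cross
    section in nb, the decay-branch fraction, and the dilepton
    branching ratio used for tagging (default J/Psi -> e+ e-, 6%)."""
    production = sigma_nb * NB_TO_CM2 * LUMI * SEC_PER_DAY
    return production, production * br_decay * br_lepton

# chi_c1(3872): 2.3 nb total, BR(J/Psi pi+ pi-) = 5%
prod, events = daily_rates(2.3, 0.05)
```

These inputs reproduce a production rate close to the 2.0 M/day and a tagged-event rate close to the ~6 k/day quoted in Table 2.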
Current measurements of X and Y states contain up to order 10 thousand and 1
thousand events, respectively. This is similar to the daily production rate
in our estimates, so with an overall detector acceptance of order 10% the
EIC would be able to make significant contributions to our understanding of
these states.
We note that a previous investigation of charged final states in
electroproduction at an electron-ion collider Klein and Xie (2019), using a
Regge exchange mechanism, found similar production rates for the
$Z_{c}(4430)$, approximately a factor of 2 lower than our estimates for the
$Z_{c}(3900)$. They also conclude that the final-state rapidity depends on
the beam energy: at lower center-of-mass energies production shifts toward
mid-rapidity, where the final state may be reconstructed in a central
detector.
Table 2: Summary of results for the production of some states of interest at the EIC, for electron and proton beam momenta of $5\times 100~{}(GeV/c)$ (electron x proton). Columns show: the meson name; our estimate of the total cross section; the production rate per day, assuming a luminosity of $6.1\times 10^{33}$ cm-2s-1; the decay branch to a particular measurable final state; its branching ratio; and the rate per day of the meson decaying to the given final state.

Meson | Cross Section (nb) | Production rate (per day) | Decay Branch | Branch Ratio (%) | Events (per day)
---|---|---|---|---|---
$\chi_{c1}(3872)$ | 2.3 | 2.0 M | $J/\Psi\;\pi^{+}\pi^{-}$ | 5 | 6.1 k
$Y(4260)$ | 2.3 | 2.0 M | $J/\Psi\;\pi^{+}\pi^{-}$ | 1 | 1.2 k
$Z_{c}(3900)$ | 0.3 | 0.26 M | $J/\Psi\;\pi^{+}$ | 10 | 1.6 k
$X(6900)$ | 0.015 | 0.013 M | $J/\Psi\;J/\Psi$ | 100 | 46
$Z_{cs}(4000)$ | 0.23 | 0.20 M | $J/\Psi\;K^{+}$ | 10 | 1.2 k
$Z_{b}(10610)$ | 0.04 | 0.034 M | $\Upsilon(2S)\;\pi^{+}$ | 3.6 | 24
#### IV.2.5 Detection of final states
Meson photoproduction at the EIC will require a detector with full
hermeticity. Quasi-real photoproduction results in the scattered electron
being very close to the incident beam line, and t-channel production
provides very little transverse momentum to the recoiling baryon, which will
likewise be scattered within a degree or so of the beam. On the other hand,
the meson itself will be produced relatively centrally at the lower
centre-of-mass settings, making for excellent detection of its decay
products.
The individual particle momentum distributions for the 5x100 centre-of-mass
setting are shown in Fig. 28. Also shown are the distributions expected when
reconstructed with the EIC Yellow Report matrix detector via the eic-smear
package Abdul Khalek _et al._ (2021). It is clear that the meson decay
products are almost entirely directed at the high-acceptance central
detector region.
Protons pass to the far-forward detector region, while there is some electron
detection in the backward electron region.
For final states including a $J/\Psi$, which are mostly under consideration
here, excellent electron/pion separation will allow a clean tag of $J/\Psi$
events through its narrow width in the $e^{+}e^{-}$ invariant mass. Coupled
with a very high detection efficiency this should allow for full
identification of the meson decay products and provide a means for peak
hunting in many final states including a final $J/\Psi$.
Supplementing the meson detection with far-forward and far-backward detector
systems will enhance the spectroscopy program by allowing measurements of the
full production process, that is, measurement of the reaction variables $W$
from the $e^{-}$ and $t$ from the recoil baryon. Detecting the scattered electron
also allows determination of the longitudinal and transverse polarisation
components of the virtual photon, providing further information on the
production processes through access to the meson spin density matrix elements.
This can be done with the backward detector around 5-10% of the time when the
electron beam momentum is lowest (5 GeV), due to the transverse kick
imparted to the electron by the Lorentz boost from the more energetic proton
beam. A dedicated far-backward electron detector such as the proposed
low-$Q^{2}$ tagger could increase the electron detection rate significantly.
Detection of both the electron and the baryon can also allow for superior
background rejection in exclusive event reconstruction.
Figure 28: Momentum and angle distributions for X production. Left(right)
columns are for beam configuration 5x41(5x100). Rows, from top to bottom, show
$J/\Psi$ decay $e^{+}$; X decay $\pi^{+}$; the scattered proton; and the
scattered electron. Red lines show the true generated distributions while blue
are the detected particles as expected with the EIC Yellow Report matrix
detector.
### IV.3 Outlook
We have briefly examined the case for producing exotic mesons through quasi-
real photoproduction at the EIC. Although it is difficult to make strong
statements about what we might expect, this is precisely because of the
uncertainty around the nature and structure of the new states seen at other
laboratories. We have shown that, if real, many of these exotic states
should have sufficiently high cross sections to be measurable. The low
centre-of-mass configurations are particularly suited to mesons produced
through fixed-spin exchanges of light mesons, which have a high cross
section close to threshold. Coupled with a high luminosity, this would
provide a very high production rate, while the kinematics and hermetic
detector systems are ideal for reconstructing the mesons we wish to study,
allowing us to exploit the EIC’s discovery potential in exotic heavy flavor
spectroscopy.
## V Science highlights of light and heavy nuclei
### V.1 Introduction
Lepton-induced high-energy scattering with nuclei will be measured at fixed
target facilities such as Jefferson Lab 12 GeV. These facilities have a rich
experimental program that will yield interesting results for years to come. To
complement these programs, the EIC will be the first high-energy facility that
has the ability to collide electrons and nuclei, which means it comes with
unique capabilities:
* •
The EIC has a wide kinematic range in $Q^{2}$ and Bjorken $x$, enabling high-
energy nuclear measurements in unexplored kinematics.
* •
The EIC can have beams of polarized light ions (3He, deuteron, etc. Bai _et
al._ (2011)), enabling studies of the polarized nuclear (neutron) structure,
the polarized EMC effect, and nuclear spin-orbit phenomena. The deuteron,
being spin-1, offers possibilities of spin studies beyond that of the nucleon.
* •
Measurements on nuclei inherently have to deal with nuclear effects such as
the Fermi motion, nuclear binding and correlation effects, and possible non-
nucleonic components of the nuclear wave function Frankfurt and Strikman
(1988). In inclusive measurements these nuclear effects form one of the
dominant sources of systematic uncertainties. With its extensive far-forward
detection apparatus in both interaction regions, detecting particles
originating from the breakup of the nuclear target (nuclear target
fragmentation region) is possible and can help to eliminate or control these
nuclear effects. (See Fig. 29 for a schematic diagram.) As a consequence,
these more exclusive measurements will push the capabilities of the EIC as a
precision machine for high-energy nuclear physics.
Figure 29: Schematic diagram of a nuclear breakup process. The virtual photon
$q$ interacts with a constituent of the nucleus $A$ and particles originating
from the breakup of the nucleus can be detected in the far forward region of
the EIC detector.
Measuring nuclear breakup reactions at the EIC has several advantages. In
collider kinematics nuclear fragments are still moving forward with a certain
fraction of the initial beam momentum and in non-coherent scattering they have
a different rigidity from the beam particles. This makes their detection more
straightforward than in fixed target experiments where they typically have low
momenta in the laboratory frame (10s of MeV/$c$). The detection of these
fragments enables additional control over the initial nuclear state in the
high-energy scattering event. It can be used to probe effective targets, for
instance, free neutron structure in tagged spectator DIS Frankfurt and
Strikman (1981); Sargsian and Strikman (2006), and pion and kaon structure in
the Sullivan process Arrington _et al._ (2021). Nuclear breakup measurements
also determine which nuclear configurations (densities, virtualities, initial
nucleon momentum) play a role in the process, important for instance in a
multivariate disentanglement of nuclear medium modification effects such as
the EMC effect. A special case of detecting fragments is coherent nuclear
scattering in hard exclusive reactions, where the initial nucleus receives a
momentum kick but stays intact (no breakup). Measurements of these coherent
reactions allow us to perform tomography of light nuclei in quark and gluon
degrees of freedom as for the nucleon (Sec. I) and to study coherent nuclear
effects in these systems.
For all these reactions, having high event rates is of high importance
(multidimensional cross sections measured with sufficient precision, probing
rare nuclear configurations). To obtain these high event rates one needs both
high luminosity for a wide kinematic range and high acceptance for the
detection of final-state particles. In both interaction regions, the EIC will
have a dedicated set of far-forward detectors that enable the detection of
nuclear fragments with high acceptance. Due to the intricate engineering
challenges (magnets, beam pipe, crossing angle of the beam), each interaction
region will have some holes in the acceptance. Having these holes in
different regions of the kinematic phase space would reinforce the
complementarity between the two interaction regions. Having a secondary
focus would also increase the acceptance of detected fragments down to lower
$p_{T}$ values. This is especially important for coherent scattering of
light nuclei, where the $p_{T}$ values are much lower than for the free
proton (see Section VII).
In the remainder of the section we offer a brief overview of nuclear reactions
that can be studied at the EIC and the physics motivation behind them. These
can all benefit from the complementarity offered by having a second IR. We
discuss these according to the nature of the measurements, starting with
inclusive measurement, then semi-inclusive and tagged reactions, and we
conclude with a discussion on exclusive nuclear channels and charm-flavored
hypernuclei.
### V.2 Inclusive measurements
The EIC can measure inclusive DIS on a wide range of nuclei, from the
lightest to the heaviest, and in a wide range of Bjorken $x$ and $Q^{2}$.
This can shed
light on the dynamics of nuclear modifications of partonic distributions
functions: shadowing and anti-shadowing at low values of $x$ and the so-called
EMC effect at high $x$. These high-$x$ measurements benefit from lower center
of mass energies and, with the $Q^{2}$ range that can be explored at the EIC,
the $Q^{2}$ dependence of the EMC effect could be further explored. This would
enable the disentanglement of leading and higher-twist effects in the medium
modifications. QCD evolution applied to the wide $Q^{2}$-range offers a way of
getting access to the gluon EMC effect at high $x$. In addition, for polarized
light nuclei the polarized EMC effect Cloet _et al._ (2005) could be further
explored, which is so far an unknown quantity that will be explored in an
upcoming JLab experiment Brooks and Kuhn (2014).
### V.3 Semi-inclusive and tagged spectator measurements
The use of semi-inclusive reactions on nuclei for nuclear TMD studies was
highlighted earlier in Section III.3.5. Here, we focus on so-called tagged
spectator measurements, where one or more nuclear fragments from the nuclear
breakup are detected. This helps, as previously outlined, to control the
nuclear configurations playing a role in the hard scattering processes. One
example is the use of deuteron or 3He as effective neutron targets by tagging
one (resp. two) spectator protons Frankfurt and Strikman (1981); Sargsian and
Strikman (2006); Kondratyuk and Strikman (1984); Del Dotto _et al._ (2017);
Cosyn and Weiss (2019, 2020); Friscic _et al._ (2021); Jentsch _et al._
(2021). These neutron data are an essential ingredient in the quark flavor
separation of the partonic distribution functions. In the tagged spectator
reactions, an effective free neutron target can be probed by performing a so-
called on-shell extrapolation of the measured cross sections or asymmetries
Sargsian and Strikman (2006); Jentsch _et al._ (2021). The presence of
polarized light ion beams enables the extraction of polarized neutron
structure in this manner Kondratyuk and Strikman (1984); Cosyn and Weiss
(2019); Del Dotto _et al._ (2017); Friscic _et al._ (2021).
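The on-shell extrapolation invoked above can be illustrated schematically: a pole-removed observable, measured at several small values of the off-shellness variable, is fit with a low-order polynomial and evaluated at the on-shell point. The data below are synthetic, and the variable names and quadratic fit order are our assumptions:

```python
import numpy as np

def on_shell_extrapolate(t_prime, observable, deg=2):
    """Fit observable(t') with a degree-`deg` polynomial in the
    off-shellness variable t' and return its value at the on-shell
    point t' = 0."""
    coeffs = np.polyfit(t_prime, observable, deg)
    return float(np.polyval(coeffs, 0.0))

# Synthetic pseudo-data: a "true" free-neutron value of 1.0, distorted
# off shell by linear and quadratic terms.
t_prime = np.array([0.02, 0.04, 0.06, 0.08, 0.10])  # GeV^2, illustrative
observable = 1.0 + 0.8 * t_prime - 2.0 * t_prime ** 2
```

In practice the measured tagged cross sections carry a spectator pole factor that is divided out before extrapolating, and the fit order is chosen from the data; the point here is only the extrapolation mechanics.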
Measuring tagged spectator reactions at larger nucleon momenta (several
hundred MeV/$c$ in the ion rest frame) is of interest for several
outstanding questions in nuclear physics and how they are interconnected.
What is the QCD
nature of the short-range, hard core part of the nucleon-nucleon force Boeglin
and Sargsian (2015); Miller _et al._ (2016); Tu _et al._ (2020)? How do
nuclear medium modifications of partonic properties manifest themselves and
what nuclear configurations play a role in these Hen _et al._ (2017)? In
these kinematics, however, the influence of final-state interactions between
products of the hard scattering and the spectator(s) and between the
spectators has to be accounted for Cosyn and Sargsian (2017); Strikman and
Weiss (2018) in order to disentangle them from the QCD phenomenon of interest.
These final-state interactions are moreover little explored in high-energy
scattering and are an interesting topic that can teach us about the space-time
evolution of hadronization dynamics.
While technically not a nuclear process, the Sullivan process $e+p\to
e^{\prime}+X+(N\;\text{or}\;Y)$ shares characteristics with the previously
discussed processes. The physics interest of the Sullivan process lies in the
extraction of pion and kaon structure Aguilar _et al._ (2019); Arrington _et
al._ (2021). The pion being the pseudo-Goldstone boson of dynamical chiral
symmetry breaking, this can shed light on the mechanism of emergent hadronic
mass (EHM) within QCD. For the kaon, the presence of the heavier strange quark
opens up the study of the interplay between EHM and the Higgs mechanism. In
the Sullivan process, a nucleon or hyperon is tagged in the far-forward region
at low four momentum transfer squared $-t$. In this manner, the process is
dominated by meson exchange in the $t$-channel and, by extrapolating the
observables to the on-shell pole of the exchanged meson, one can extract pion
(nucleon tagging) or kaon (hyperon tagging) structure. Compared with the
earlier HERA extractions, the high luminosity and wide kinematic range of the
EIC would result in an order-of-magnitude decrease in the statistical errors
on the extracted pion PDFs. These measurements require high luminosity
($>10^{33}$ cm-2s-1) in order to compensate for the few-times-$10^{-3}$
fraction of the proton wave function related to the pion (kaon) pole.
Additionally, for kaon structure lower center of mass energies are preferable
so that sufficient $\Lambda$ decays happen in the far forward region, see Sec.
VII.
Nuclear properties beyond that of the mean-field shell model can be studied
using $A(e,e^{\prime}NN)$ two-nucleon knockout reactions. These can especially
shed light on the nature of the nuclear short-range correlations (SRCs) and
their potential relation to nucleon medium modifications Hen _et al._ (2017).
The EIC will enable measurements of these processes up to $Q^{2}$ values a
factor of 3-4 higher than has been achieved so far in fixed target setups
Hauenstein _et al._ (2022). In these two-nucleon knockout reactions, in
selected kinematics one leading nucleon originates from the interaction with
the photon, while the other is the recoil partner originating from the SRC
pair. As with the previously discussed processes, the detection of recoil
nucleons happens in the far-forward detector apparatus, due to the boost of
the collider lab frame relative to the ion rest frame. Additionally,
detecting the nuclear fragments ($A-2$), and/or vetoing their breakup, could
be possible, improving control over the reaction mechanism in these
reactions Patsyuk _et al._ (2021).
Measurements of single-nucleon knockout reactions in mean-field kinematics are
possible at EIC up to $Q^{2}\approx 20~{}\text{GeV}^{2}$ Hauenstein _et al._
(2022). These would help to constrain the onset of the nuclear color
transparency phenomenon Dutta _et al._ (2013), which has not been observed
for proton knockout up to $Q^{2}=14~{}\text{GeV}^{2}$ Bhetuwal _et al._
(2021). Color transparency could also be explored in other kinematics and
reaction mechanisms. One example that was recently explored is meson
electroproduction on nuclei in backward kinematics Huber _et al._ (2022), see
also Sec. I.4.
Concerning the detection capabilities of the EIC for these 2N knockout
reactions, for the leading nucleon the detection region depends on the ion
beam energy. With 41 GeV/A beams, the majority of the leading nucleons are
detected in the central detector, while for 110 GeV/A they are detected in
the far-forward region; see Fig. 3 of Ref. Hauenstein _et al._ (2022).
Moreover, at 110 GeV/A higher acceptance for recoil nucleons is also
achieved. For leading neutrons, however, at 110 GeV/A the neutrons are
outside the angular coverage of the ZDC, and these channels have to be
measured at the lower ion beam energy.
### V.4 Exclusive measurements
Hard exclusive reactions on light nuclei can be measured in both the coherent
and incoherent (nuclear breakup) channels Dupré and Scopetta (2016). The
coherent channel, similarly to the case of the nucleon discussed in Secs. I
and II, would give access to 3D tomography of light nuclei in quark and gluon
degrees of freedom and the extraction of mechanical properties of light
nuclei. It could also potentially shed light on the size of non-nucleonic
components of the nuclear wave function. The incoherent channel, on the other
hand, can be used to study medium modifications of nucleon tomography Fucini
_et al._ (2020a, b) and to probe neutron 3D structure Rinaldi and Scopetta
(2013). Three of the lightest nuclei (d,3He,4He) have the interesting feature
that they have different spin and binding energies Cano and Pire (2004); Guzey
and Strikman (2003); Scopetta (2004); Cosyn and Pire (2018); Fucini _et al._
(2018). 4He being spin-0 has the advantage that it has only one leading twist
GPD in the chiral even sector. 3He is a spin-1/2 nucleus, meaning that hard
exclusive observables can be similarly defined to those of the free nucleon.
Lastly, the spin-1 deuteron has a richer structure of GPDs beyond that of the
nucleon (associated with its tensor polarization modes), meaning that new
spin-orbit phenomena can be studied. In terms of binding energy, the deuteron
is very loosely bound, 4He is very tightly bound, and 3He falls somewhat in
between. This gives us access to different degrees of nuclear
effects that can be studied in these systems. Additionally, the availability
of high-precision ab initio nuclear wave functions for these light nuclei
results in a high degree of theoretical control in calculations. The
challenges of detecting these exclusive reactions are covered in more detail
in Sec. VII. There, the influence of a secondary focus on the lower limit of
the measurable $t$-range for the exclusive channel especially deserves
highlighting.
### V.5 Charm-flavored hypernuclei
Hypernuclear physics has been one of the crucial tools for studying the
interactions between nucleons and strange hyperons. Most experimental studies
on hypernuclei have been focused on $\Lambda$ hypernuclei and many precise
measurements have been performed as reviewed in Ref. Feliciello and Nagae
(2015). Recently, these efforts have been extended to hypernuclei with
multi-strangeness, such as $\Xi$ hypernuclei.
Recently, there has been interest in charm hypernuclei, whose existence was
predicted almost 45 years ago Tyapkin (1975); Dover and Kahana (1977), right
after the discovery of the charm quark. Just as the structure of strange
hypernuclei depends heavily on the $\Lambda$-nucleon interactions, the
stability of charm hypernuclei depends on the $\Lambda_{c}$-nucleon
interactions. Following the seminal works of the 1980s, there have been many
theoretical model calculations on various states of $\Lambda_{c}$ hypernuclei.
The calculated spectra of charm hypernuclei are found to be sensitive to the
$\Lambda_{c}$-nucleon interactions. (See, for example, Refs. Krein _et al._
(2018); Wu _et al._ (2020) for a review.) As there is no empirical
information on the $\Lambda_{c}N$ interactions, various ideas were adopted for
modeling the potential between $\Lambda_{c}$ and the nucleon. In recent
calculations, lattice simulation results were used to model this potential.
However, depending on the approach to the physical point from the unphysical
quark masses used in lattice calculations, the extrapolated potentials lead
to very different results for the $\Lambda_{c}N$ interactions Haidenbauer
and Krein (2021). Figure 30 shows predictions for the $\Lambda_{c}N$
${}^{3}D_{1}$ phase shift extrapolated from the same lattice calculations
but with different extrapolation methods; the results differ completely
depending on the extrapolation approach. Therefore, experimental
measurements of charm hypernuclei are strongly required to improve our
understanding of the $\Lambda_{c}N$ interactions.
Figure 30: Predictions for the $\Lambda_{c}N$ ${}^{3}D_{1}$ phase shift. (a)
Results from covariant $\chi$EFT taken from Ref. Song _et al._ (2020). (b)
Results based on the $\Lambda_{c}N$ potential from Ref. Haidenbauer and Krein
(2018). Red (black), green (dark grey), and blue (light grey) bands correspond
to $m_{\pi}=138$, 410, and 570 MeV, respectively. The width of the bands
represents cutoff variations/uncertainties. Lattice results of the HAL QCD
Collaboration corresponding to $m_{\pi}=410$ MeV (filled circles) and 570 MeV
(open circles) are taken from Ref. Miyamoto (2019). The figure is from Ref.
Haidenbauer and Krein (2021).
Experimentally, earlier efforts to find charm hypernuclei started right after
the seminal work of Ref. Tyapkin (1975), and a few positive reports on the
existence of charm hypernuclei (called supernuclei at that time) were
claimed Lyukov (1989). However, no serious follow-up research was reported
and, in practice, there is no experimental information on charm hypernuclei.
Experimental investigation of this topic would be possible at future hadron
beam facilities Barucca _et al._ (2021). The experimental instrumentation of
the EIC allows for precise measurements and would offer a chance to study
charm hypernuclei. So far, $\Lambda$ hypernuclei have been studied
extensively with high-intensity meson beams as well as electron beams.
Electroproduction of $\Lambda$ hypernuclei was studied with the
${}^{A}Z(e,e^{\prime}K^{+})^{A}_{\Lambda}(Z-1)$ reaction, and the similar
reaction ${}^{A}Z(e,e^{\prime}D^{-})^{A}_{\Lambda_{c}^{+}}Z$ would produce
charm hypernuclei by converting a neutron into a $\Lambda_{c}^{+}$ and a
$D^{-}$. Through observation of the produced $D^{-}$ and the scattered
electron, the missing-mass technique can be applied to the spectroscopic
study of charm hypernuclei. Therefore, studying charm hypernuclei with an
electron-ion collider would open a new way to study heavy-flavored nuclei
ahead of the future hadron beam facilities.
This investigation can also be extended to the bottom sector Feliciello
(2012), which is simpler than the charm sector as there is no Coulomb
interaction between $\Lambda_{b}$ and nucleons. Therefore, comparing the
properties of bottom hypernuclei and strange hypernuclei would give a clear
clue on the mass dependence of the strong interactions. The designed energy
range of EIC would allow further investigations.
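The missing-mass reconstruction described above can be sketched with toy four-vectors; the kinematic values below are illustrative placeholders, not EIC beam parameters.

```python
import numpy as np

# Toy sketch of the missing-mass technique (illustrative four-vectors only):
# with the beam and detected four-momenta known, the unobserved hypernuclear
# system X has M_X^2 = (p_e + p_target - p_e' - p_D)^2 in the (+,-,-,-) metric.

def minkowski_sq(p):
    """Minkowski square E^2 - |p|^2 of a four-vector (E, px, py, pz)."""
    return p[0] ** 2 - np.dot(p[1:], p[1:])

def missing_mass(p_beam_e, p_target, p_scattered_e, p_detected):
    """Invariant mass of the unobserved system, in natural units."""
    p_X = p_beam_e + p_target - p_scattered_e - p_detected
    return np.sqrt(minkowski_sq(p_X))

# Closure check with made-up kinematics: a massless beam electron, a target
# at rest with mass 2, a massless scattered electron, and one detected hadron.
p_e = np.array([10.0, 0.0, 0.0, 10.0])
p_t = np.array([2.0, 0.0, 0.0, 0.0])
p_e_out = np.array([4.0, 0.0, 0.0, 4.0])
p_D = np.array([3.0, 1.0, 0.0, 2.0])
m_X = missing_mass(p_e, p_t, p_e_out, p_D)  # unobserved system (5, -1, 0, 4)
```

The point of the technique is that `m_X` is obtained without detecting the hypernucleus itself, only the scattered electron and the emitted meson.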
## VI Precision studies of Lattice QCD in the EIC era
Lattice QCD enables the first-principles solution of QCD in the strong-
coupling regime, and thereby facilitates calculations that can both guide the
analysis of key physics quantities to be determined at the EIC, and provide
complementary calculations that can further the physics potential of the EIC.
The calculation of the internal structure of the nucleon, pion and other
hadrons in terms of the fundamental quarks and gluons of QCD has been a key
effort of lattice calculations since the inception of lattice QCD. Notably,
there have been first-principles calculations of the electromagnetic form
factors, of the low moments of the unpolarized and polarized parton
distribution functions, and of the generalized form factors. Similarly, the
low-lying spectrum of QCD has been a benchmark calculation that now includes
the electroweak splittings. Nevertheless, the formulation of lattice QCD in
Euclidean space imposes important restrictions. Firstly, time-dependent
quantities, and in particular those related to matrix elements of operators
separated along the light cone, could not be calculated, thereby precluding
the computation of quantities such as the $x$-dependent parton distribution
functions. Further, scattering amplitudes, and thereby information about
resonances in QCD, eluded direct computation. In both the fields of three-
dimensional imaging and spectroscopy, key theoretical advances have
circumvented these restrictions and transformed our ability to address key
questions of QCD in the strong-coupling regime.
### VI.1 Three-dimensional Imaging of the Nucleon
The electromagnetic form factors, and the generalized form factors
corresponding to the moments with respect to $x$ of the GPDs, can be expressed
as the matrix elements of time-independent, local operators amenable to
computation in lattice QCD on a Euclidean grid. In particular, there has been
a progression of calculations of the lowest moments of the isovector
generalized form factors Hagler _et al._ (2003, 2008); Bratt _et al._ (2010)
that have already provided important insight into three-dimensional imaging of
the nucleon, notably in discerning the role of orbital angular momentum.
The realization that $x$-dependent distributions, including the one-dimensional
parton distribution functions, the quark distribution amplitudes, and the
three-dimensional GPDs, could be computed from the matrix elements of operators
at Euclidean separations, with its genesis in Large-Momentum Effective Theory
(LaMET) Ji (2013), or quasi-PDF approach, has spurred a renewal in the first-
principles calculation of hadronic and nuclear structure. For the isovector
distributions, the basic matrix elements are those of spatially separated
quark and anti-quark fields, joined by a Wilson line so as to ensure gauge
invariance; an alternative approach to relating the resulting lattice matrix
elements to the familiar PDFs is the pseudo-PDF framework Radyushkin (2017a).
While both the quasi- and pseudo-PDF methods share the same matrix elements,
the former matches the lattice data to the light-cone PDFs using a large
momentum expansion, while the latter is based on a short distance expansion. A
further framework that encompasses both the quasi-PDF and pseudo-PDF
approaches is that of the so-called “Good Lattice Cross Sections” method that
admits spatially separated gauge-invariant operators thereby simplifying the
lattice renormalization at the expense of computational cost Ma and Qiu
(2018). Characteristic of any of these approaches is the need to attain high
spatial momentum on the lattice in order to obtain a controlled description of
the $x$-dependent PDF. For the most easily accessible isovector nucleon PDFs,
there are now several calculations at the physical light- and strange-quark
masses. Recent reviews can be found in Refs. Lin _et al._ (2018a); Cichy and
Constantinou (2019); Detmold _et al._ (2019); Ji _et al._ (2021c); Lin
(2020); Constantinou (2021); Constantinou _et al._ (2021); Cichy (2021);
Constantinou _et al._ (2022).
Each of the approaches introduced above admits the calculation of the GPDs,
and both the incoming and outgoing hadrons now have to be boosted to high but
distinct spatial momenta to introduce a non-zero momentum transfer $-t$.
#### VI.1.1 Parton distribution functions
The direct calculation of distribution functions is not possible in lattice
QCD as the latter is formulated with a Euclidean metric, while the former have
a light-cone nature. The last decade has been instrumental in attaining the
$x$-dependence of PDFs through a number of approaches, such as the hadronic
tensor Liu and Dong (1994), auxiliary quark field Detmold and Lin (2006);
Braun and Mueller (2008), the quasi-PDFs Ji (2013), pseudo-PDFs Radyushkin
(2017b), current-current correlators Ma and Qiu (2018), and with an OPE
Chambers _et al._ (2017). The most intensively-studied methods are the quasi-
and pseudo-PDFs, which rely on the calculation of matrix elements of non-local
operators that are coupled to hadronic states that carry non-zero momentum.
The non-local operators contain a straight Wilson line with a varying length
in the same spatial direction as the momentum boost. Naturally, the
corresponding matrix elements are defined in coordinate space, and can be
transformed to the desired momentum space, $x$, with a Fourier transform. A
factorization process relates the quasi and pseudo distributions to the light-
cone PDFs, with the matching kernel calculated in perturbation theory. Both
methods have been used for lattice calculations using ensembles of gauge
configurations at physical quark masses Lin _et al._ (2018b); Alexandrou _et
al._ (2018a); Chen _et al._ (2018); Alexandrou _et al._ (2018b); Liu _et
al._ (2018); Lin _et al._ (2018c); Joó _et al._ (2020); Bhat _et al._
(2021); Lin _et al._ (2020). Such studies correspond to different lattice
discretizations (actions) and parameters, and a comparison may reveal
systematic effects related to the methodology employed, as well as
discretization and finite-volume effects.
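The Fourier step described above can be sketched schematically; the coordinate-space "data" below is a synthetic Gaussian stand-in, not a lattice matrix element, and all numbers are toy values.

```python
import numpy as np

# Toy sketch (synthetic data, not a lattice analysis): coordinate-space matrix
# elements h(z, P) are Fourier-transformed to momentum-fraction (x) space,
# q(x) ~ (P / 2pi) * Integral dz exp(-i x P z) h(z, P).
# Since h(z) here is a Gaussian, q(x) is a Gaussian peaked at x = 0.

P = 2.0                             # hadron momentum boost (assumed toy units)
z = np.linspace(-20.0, 20.0, 801)   # signed Wilson-line lengths
dz = z[1] - z[0]
h = np.exp(-0.5 * (0.3 * P * z) ** 2)  # synthetic coordinate-space data

def quasi_dist(x):
    """Discrete Fourier transform of h(z) at momentum fraction x."""
    return (P / (2.0 * np.pi)) * (np.sum(np.exp(-1j * x * P * z) * h) * dz).real

xs = np.linspace(-2.0, 2.0, 81)
q = np.array([quasi_dist(x) for x in xs])
```

The normalization check $\int dx\,q(x) = h(z{=}0) = 1$ is exactly the zeroth-moment sum rule the text alludes to; in a real analysis the inverse problem of a limited, noisy range of $z$ makes this reconstruction far less straightforward.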
Figure 31: Upper left: A selection of lattice-QCD results on the unpolarized
PDF using the quasi-PDFs method Alexandrou _et al._ (2019a) (red band) and
pseudo-ITDs from Ref. Joó _et al._ (2020) (gray band) and Ref. Bhat _et al._
(2021) (blue band). A comparison of unpolarized isovector nucleon PDFs from
lattice QCD (upper right), helicity (lower left) and transversity (lower
right) at or near the physical pion mass Joó _et al._ (2020); Chen _et al._
(2018); Lin _et al._ (2018c); Liu _et al._ (2018); Alexandrou _et al._
(2018a, b, 2019a); Bhat _et al._ (2021) with global fits. Plots taken from
Ref. Constantinou _et al._ (2022). All results are given in the
$\overline{\text{MS}}$ scheme at a renormalization scale of 2 GeV.
In Fig. 31 we show results for the unpolarized isovector valence PDF for the
proton. The results indicated by HadStruc’20 Joó _et al._ (2020) and ETMC ’20
Bhat _et al._ (2021) have been obtained using the pseudo-PDFs method, while
ETMC’18 Alexandrou _et al._ (2018a) uses the quasi-PDFs approach. The results
are very encouraging, exhibiting agreement for a wide range of values for $x$.
The small tension at large $x$ is due to systematic effects such as higher-
twist contamination and the ill-defined inverse problem in the reconstruction
of the $x$ dependence of the PDFs. In fact, Refs. Alexandrou _et al._
(2018a); Bhat _et al._ (2021) analyze the same raw data, and they differ in
the analysis (quasi-PDFs versus pseudo-PDFs). This corroborates that the
large-$x$ region has contamination from the aforementioned systematic effects.
A similar tension is also present in the comparison of the lattice data, e.g.,
of Ref. Joó _et al._ (2020) with the global analyses of experimental data
sets shown in the right panel of Fig. 31. When predicting spin-dependent PDFs,
lattice calculations may already provide predictions comparable to
phenomenological global analyses. The lower panel of Fig. 31 summarizes the
lattice predictions for helicity and transversity nucleon isovector PDFs at
physical pion mass Alexandrou _et al._ (2018a); Lin _et al._ (2018c);
Alexandrou _et al._ (2018b); Liu _et al._ (2018). The helicity lattice
results are compared to two phenomenological fits, NNPDFpol1.1 Nocera _et
al._ (2014) and JAM17 Ethier _et al._ (2017), exhibiting nice agreement. The
lattice results for the transversity PDFs have better nominal precision than
the global analyses by PV18 and LMPSS17 Lin _et al._ (2018d). The success in
extracting the $x$ dependence of PDFs is a significant achievement for lattice
QCD, and has the potential to help constrain PDFs in kinematic regions where
experimental data are not available. The synergy of lattice QCD results and
global analysis is currently under study and some results can be found in
Refs. Cichy _et al._ (2019); Bringewatt _et al._ (2021); Del Debbio _et
al._ (2021).
#### VI.1.2 Generalized parton distributions
Information on GPDs from lattice QCD is mostly extracted from their Mellin
moments, that is the form factors (FFs) and generalized form factors (GFFs).
This line of research has been very successful within lattice QCD, and several
results for the form factors using ensembles with physical quark masses
appeared in the last five years. Furthermore, the flavor decomposition for
both the vector and axial form factors has been performed, giving the individual up,
down, strange and charm contributions to these quantities Sufian _et al._
(2017a, b); Alexandrou _et al._ (2018c, 2019b, 2020b, 2021a). A summary of
state-of-the-art calculations can be found in Ref. Constantinou _et al._
(2021). In the left panel of Fig. 32 we show results on the axial form factor
at physical quark masses from various lattice groups employing different
lattice discretization and analysis methods. Its forward limit is the axial
charge, $g_{A}\equiv G_{A}(0)$, which is a benchmark quantity for lattice QCD,
and is related to the intrinsic spin carried by the quarks in the proton. As
can be seen, the results are in very good agreement, despite the fact that not
all sources of systematic uncertainties have been fully quantified. The level
of agreement indicates that remaining systematic effects are small. Further,
$g_{A}$ is found to be in agreement with the world average of experimental
data Märkisch _et al._ (2019). This is a breakthrough for lattice QCD
calculations, as it demonstrates that agreement with experiment is achieved
once systematic uncertainties are eliminated.
Figure 32: Summary of lattice calculations of $G_{A}(-t)$ (left) and
$A_{20}(-t)$ (right) using ensembles at or near physical quark masses. The
labels of the $G_{A}$ results correspond to: ETMC ’20 Alexandrou _et al._ (2021b),
RQCD ’20 Bali _et al._ (2020), PNDME ’19 Jang _et al._ (2020), PACS ’18
Shintani _et al._ (2019), RQCD ’18 Bali _et al._ (2019a), ETMC ’17
Alexandrou _et al._ (2017b), LHPC ’17 Hasan _et al._ (2018) and MSULat’21
Lin (2022). The corresponding results for $A_{20}$ are: ETMC ’19 Alexandrou
_et al._ (2020c) (larger-volume results are plotted for the ETMC’19 2f
calculation), RQCD ’18 Bali _et al._ (2019b), and MSULat’20 Lin (2021).
More recently, lattice results on the GFFs associated with the sub-leading
Mellin moments of GPDs (one-derivative operators) became available at the
physical pion mass. In the right panel of Fig. 32 we show results on $A_{20}$,
which appears in the decomposition of the energy momentum tensor. Its forward
limit is the quark momentum fraction, $\langle x\rangle\equiv A_{20}$, which
enters the spin decomposition Ji (1997a). Extracting GFFs is more challenging
for a number of reasons. First, the introduction of covariant derivatives
increases the gauge noise, as well as the uncertainties due to cutoff effects.
Second, in general the number of GFFs increases, requiring independent matrix
elements to disentangle the GFFs. Third, beyond the NNNLO Mellin moments,
there is unavoidable mixing under renormalization. The introduction of matrix
elements with greater than three covariant derivatives introduces power-
divergent mixing with matrix elements with few derivatives, thereby precluding
the calculation of the higher Mellin moments. Consequently, there are
limitations in mapping the three-dimensional structure of the nucleon from the
FFs and GFFs.
Methods such as large momentum factorization (quasi-distributions) and short
distance factorization (pseudo-distributions) are very promising in extracting
the $x$-dependence of GPDs Ji _et al._ (2015a); Liu _et al._ (2019b);
Radyushkin (2019); Hou _et al._ (2022), while avoiding the challenges associated
with renormalization that are present in the calculation of GFFs mentioned
above. However, the calculations are very taxing because, unlike FFs and GFFs,
GPDs are frame-dependent objects and are defined in the symmetric (Breit) frame.
This increases significantly the computational cost, as a separate calculation
is needed for each value of $t$. The $x$-dependence of nucleon GPDs has
already been explored, in the Breit frame, for the unpolarized ($H,\,E$),
helicity ($\widetilde{H},\,\widetilde{E}$) and transversity
($H_{T},\,E_{T},\,\widetilde{H}_{T},\,\widetilde{E}_{T}$) GPDs Alexandrou _et
al._ (2020d, 2021c). Such calculations are very timely, since the EIC will
measure the DVCS process with polarized electrons and longitudinal and
transverse polarized protons to extract the CFFs of $H,\,E$ and
$\widetilde{H}$. It should be noted that, to date, lattice calculations of
GPDs are exploratory and are available for only a few values of $t$ for zero
and nonzero skewness, $\xi$. Nevertheless, lattice results are useful for a
qualitative understanding of GPDs. For instance, one can find characteristics
of the $t$ dependence for each operator under study. As an example, the
lattice results of Fig. 33 indicate that the decay of the GPD with $t$ is
fastest in $H$, followed by $H_{T}$, and then $\widetilde{H}$. Also, one can
compare the hierarchy of GPDs at each value of $t$. On this aspect, it is
found that at $t=0$, $f_{1}\equiv H(t=0)$ is dominant, followed by
$h_{1}\equiv H_{T}(t=0)$ and $g_{1}\equiv\widetilde{H}(t=0)$. As $-t$
increases, $H$ remains dominant, while the hierarchy of $H_{T}$, and then
$\widetilde{H}$ interchanges. Finally, lattice results can be used to check
sum rules. For more details we refer the reader to Refs. Bhattacharya _et
al._ (2020); Alexandrou _et al._ (2021c). We emphasize that lattice
calculations on GPDs are at the proof-of-concept stage, but results are
promising. Once the lattice data can access a wide range of $t$, their
$t$-dependence can be parameterized. This is very useful because the
parameterizations can be used to extract the GPDs in the impact-parameter
space as done in Refs. Lin (2021, 2022) at physical pion mass. The green bands
in Fig. 32 show the moments of lattice $x$-dependent GPD results at zero
skewness; they are in nice agreement with the traditional local-operator
methods, which points to a promising future for lattice-QCD contributions to
GPD tomography. Figure 34 shows the first LQCD results of
impact-parameter–dependent 2D distributions at $x=0.3$, 0.5 and 0.7 Lin
(2021). Similar tomography results for helicity GPD,
$\tilde{H}(x,\xi=0,Q^{2})$ can be found in Ref. Lin (2022).
The progress in the field of $x$-dependent GPDs from lattice QCD is also being
extended to twist-3 GPDs Bhattacharya _et al._ (2021b). We anticipate that,
in the near future, lattice results will be incorporated in phenomenological
analysis of GPDs at both the twist-2 and twist-3 level. Lattice-computed
twist-3 GPDs can have advantages with regard to extracting twist-2 GPDs at
kinematics where twist-3 contributions are not negligible. In fact, this may
even be a required step before one attempts to extract twist-2 GPDs from DVEP
data.
Figure 33: The unpolarized $H$ and $E$, helicity $\widetilde{H}$ and
transversity $H_{T}$ GPDs at $\\{t,|\xi|\\}=\\{0,0\\}$, $\\{-0.69\text{
GeV}^{2},0\\}$, $\\{-1.02\text{ GeV}^{2},1/3\\}$ extracted from the 260-MeV
pion mass lattice calculations of Ref. Alexandrou _et al._ (2020d, 2021c).
Figure 34: (left) Nucleon tomography: three-dimensional impact
parameter–dependent parton distribution as a function of $x$ and $b$ using
lattice $H$ at physical pion mass. (right) Two-dimensional impact-
parameter–dependent isovector nucleon GPDs for $x=0.3$, 0.5 and 0.7 from the
lattice at physical pion mass. Source: Ref. Lin (2021).
#### VI.1.3 Transverse momentum dependent distributions
In contrast to GPDs, TMDs describe the three-dimensional structure in terms of
the longitudinal momentum-fraction $x$, and the transverse momentum of the
partons. One of the additional challenges that arise in TMD calculations is
the presence of the rapidity divergences that need an additional regulator.
Such divergences can be factorized into the so-called soft function, which can
be separated into a rapidity-independent and a rapidity-dependent part. The
latter defines the Collins-Soper (CS) kernel, which depicts the rapidity
evolution. One of the challenges is that the soft function is non-perturbative
for small transverse momenta.
The TMDs involve the matrix elements of staple-like Wilson lines that extend
along the light cone, imposing analogous restrictions on their calculation
within lattice QCD as encountered for the case of PDFs and GPDs described
above. The first efforts at overcoming these restrictions employed space-like-
separated staples that approached the light-cone as the length of the staple
increased Musch _et al._ (2011), in particular focusing on the time-odd Boer-
Mulders and Sivers functions Musch _et al._ (2012); Yoon _et al._ (2017) and
their relation to the corresponding processes in Drell-Yan and SIDIS,
including calculations for the pion Engelhardt _et al._ (2016).
More recently, there has been extensive work on exploring TMDs within the
quasi-PDF approach Ji _et al._ (2015b, 2019); Ebert _et al._ (2019), as well
as the soft function Ji _et al._ (2020a, b). The Collins-Soper kernel has
been studied by a few collaborations Shanahan _et al._ (2020); Zhang _et
al._ (2020); Schlemmer _et al._ (2021); Li _et al._ (2021b); Shanahan _et
al._ (2021) and a comparison is shown in Fig. 35. Presently, such a comparison
is qualitative, as systematic uncertainties are not fully quantified.
Nevertheless, the agreement is very good and encouraging.
Figure 35: The Collins-Soper kernel as a function of $b_{T}$ as extracted from
various lattice QCD calculations. We show results from SWZ Shanahan _et al._
(2020, 2021), LPC Zhang _et al._ (2020), Regensburg/NMSU Schlemmer _et al._
(2021), and ETMC/PKU Li _et al._ (2021b). Open and filled symbols of the same
shape and color correspond to results from the same lattice group. Source:
Ref. Shanahan _et al._ (2021).
#### VI.1.4 Gluon and flavor-singlet structure
The calculation of the flavor-singlet structure of hadrons is considerably
more challenging than those for the flavor-non-singlet quantities that have
been the focus of the most precise studies. The challenges are primarily
related to the degrading signal-to-noise ratio that impacts calculations both
of the gluon distributions, and of the flavor-singlet quark distributions with
which they mix. Recently, the first calculations of the unpolarized
$x$-dependent gluon distributions in the nucleon have been performed using
quasi-PDF Fan _et al._ (2018) and pseudo-PDF Fan _et al._ (2021); Khan _et
al._ (2021); Fan and Lin (2021) methods, as well as the first lattice gluon
helicity study Egerer _et al._ (2022). Within the present statistical
precision and through a qualitative comparison with global analyses of the
gluon helicity distribution, the lattice calculation hinted at a positive
gluon polarization contribution to the nucleon spin budget.
Figure 36: Left: lattice results on the unpolarized nucleon gluon PDF using a
two-parameter parametrization $xg(x)=Nx^{\alpha}(1-x)^{\beta}$ by MSULat’20
Fan _et al._ (2021) and HadStruc’21 Khan _et al._ (2021). Also shown are the
unpolarized gluon PDFs extracted from global fits to experimental data: CT18
Hou _et al._ (2021), NNPDF3.1 Ball _et al._ (2017), and JAM20 Moffat _et
al._ (2021). Right: the gluon nucleon GFF in a lattice calculation
corresponding to $M_{\pi}=450(5)~{}{\rm MeV}$; the bands show a multipole fit
with $n=3$ (green), and a model-independent $z$ expansion (blue). Source: Ref.
Pefkou _et al._ (2021).
A comparison of the calculation with phenomenological parametrizations is
shown as the left-hand panel in Fig. 36. While this calculation is at
unphysically large pion masses, with limited understanding of the systematic
uncertainties, it demonstrates the potential of lattice QCD to complement and
augment insights into hadron structure from experiment, notably at large $x$.
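As a small illustration of how the two-parameter form $xg(x)=Nx^{\alpha}(1-x)^{\beta}$ quoted in the caption of Fig. 36 is used, its first moment follows in closed form from the Euler beta function, $\langle x\rangle_{g}=\int_{0}^{1}xg(x)\,dx=N\,B(\alpha+1,\beta+1)$; the parameter values in this sketch are placeholders, not fitted lattice results.

```python
from math import gamma

# First Mellin moment of the two-parameter gluon form xg(x) = N x^a (1-x)^b:
# <x>_g = N * B(a+1, b+1).  The values of a, b below are placeholders.

def beta_fn(p, q):
    """Euler beta function B(p, q) via gamma functions."""
    return gamma(p) * gamma(q) / gamma(p + q)

def gluon_momentum_fraction(N, a, b):
    """Integral of N x^a (1-x)^b over x in [0, 1]."""
    return N * beta_fn(a + 1.0, b + 1.0)

# Fix the normalization N so the gluons carry an assumed 40% of the momentum:
a, b = -0.5, 3.0
N = 0.4 / beta_fn(a + 1.0, b + 1.0)
```

In a fit the roles are reversed: $N$, $\alpha$, $\beta$ are adjusted to the lattice data, and the moment then comes out as a derived quantity.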
The calculation of the gluon contributions to the three-dimensional structure
of hadrons proceeds as in the case of the valence quarks described above.
In particular, the gluonic contribution to the GFF has been computed Shanahan
and Detmold (2019a, b); Pefkou _et al._ (2021), thereby enabling, when
combined with the corresponding quark contributions, the pressure and shear
forces within a nucleon to be determined, as shown in the right-hand panel of
Fig. 36.
### VI.2 LQCD and Spectroscopy
The ability to study multi-hadron states and resonances from lattice QCD
calculations was transformed by the realization that, for the case of two-body
elastic scattering, infinite-volume, momentum-dependent phase shifts could be
related to energy shifts at finite volume on a Euclidean lattice Luscher
(1986a, b, 1991). The formalism for elastic scattering has now been extended
to coupled-channel scattering, and to multi-hadron final states facilitating a
range of calculations that impact our understanding of the spectroscopy of
QCD. Notably, there are now calculations of coupled-channel scattering
describing the nature of the isoscalar $a_{0},f_{0}~{}{\rm and}~{}f_{2}$
resonances Briceno _et al._ (2018), and the first calculation of the decays
of the exotic $1^{-+}$ meson Woss _et al._ (2021).
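The essence of the finite-volume method can be conveyed with a one-dimensional toy version of the quantization condition (the genuine three-dimensional Lüscher formalism is considerably more involved, and the numbers below are illustrative):

```python
import numpy as np

# One-dimensional toy of the Luscher idea (a sketch, not the full 3D formula):
# for two particles in a periodic box of length L, the quantization condition
# p*L + 2*delta(p) = 2*pi*n ties the finite-volume momentum p of an energy
# level to the infinite-volume scattering phase shift delta(p).

def phase_shift_from_level(p, L, n):
    """Invert the 1D quantization condition for delta(p)."""
    return 0.5 * (2.0 * np.pi * n - p * L)

# Non-interacting particles (delta = 0) sit exactly at p = 2*pi*n / L; an
# interacting level pushed below the free one signals attraction (delta > 0).
L = 24.0
p_free = 2.0 * np.pi / L
delta_free = phase_shift_from_level(p_free, L, n=1)
delta_attractive = phase_shift_from_level(0.95 * p_free, L, n=1)
```

Measuring several finite-volume levels (and several box sizes) maps out $\delta(p)$, which is exactly how the momentum-dependent phase shifts mentioned above are extracted in practice.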
Figure 37: The upper panel shows the $\pi^{+}\gamma\rightarrow\pi^{+}\pi^{0}$
cross section as a function of the $\pi\pi$ center-of-mass energy in a
calculation with a pion mass $m_{\pi}\simeq 400~{}{\rm MeV}$. The lower panel
shows the $l=1$ elastic $\pi\pi$ scattering cross section, with the $\rho$
resonance visible in both cases. Source: Ref. Briceño _et al._ (2016).
Beyond the challenge of computing the spectrum of resonances and their decays,
an important development has been that of a formalism for the photo- and
electro-production of two-hadron final states, an example of the so-called
$1+{\cal J}\rightarrow 2$ processes Briceño _et al._ (2015); Briceño and
Hansen (2015). The formalism has been applied to the case of
$\pi^{+}\gamma\rightarrow\pi^{+}\pi^{0}$, shown in Figure 37. Recently,
this has been extended to the case of coupled-channel, multi-hadron final
states Briceño _et al._ (2021a) thereby providing an essential framework
underpinning the spectroscopy opportunities through photoproduction at the
EIC.
The calculation of the spectrum of the exotic charmonium and bottomonium
states anticipated at the EIC poses several additional challenges beyond those
encountered in the light-quark sector. Firstly, a precise understanding of
light- and strange-quark spectroscopy is a precursor to precision calculations
in the heavy-quark sector since the $c\bar{c}$ can mix with such states in
many of the most interesting channels. Secondly, with increasing mass of the
quark constituents, the splitting between the different energies on the
lattice is compressed, with many $J^{P}$ states at similar energies requiring
additional constraints to identify the states from the lattice data. Finally,
there are many open channels that must be included. The work so far is
largely exploratory Chen _et al._ (2019); Prelovsek _et al._ (2021), with
the inclusion of only a limited number of coupled channels. However,
controlled calculations of many of the exotic states anticipated at the EIC
are now computationally feasible, with studies both of the $\chi_{c1}(3872)$
and the $X(6900)$ most easily attainable.
### VI.3 Outlook
Many of the “no-go” theorems that until recently have imposed limitations on
the range of quantities accessible to first-principles calculation in lattice
QCD have now been circumvented through a progression of theoretical advances,
with demonstrations of the ability of lattice QCD calculations to add to our
understanding of the internal structure and spectroscopy of hadrons. The
advent of the era of exascale computing will enable the precision calculations
needed to exploit the opportunities afforded by the EIC Detmold _et al._
(2019); Joó _et al._ (2019). Notably, in addition to the emerging precision
computations of the isovector quantities, such calculations will be extended
to the isoscalar sector. Precise computations within lattice QCD of the three-
dimensional measures of hadron structure, combined with the two-dimensional
Generalized Form Factors accessible through exclusive processes at the EIC,
will constrain the model dependence in global analysis of experimental data,
and will facilitate a more precise three-dimensional imaging of hadrons than
either experiment or first-principles calculation can achieve alone.
Despite these advances, there remain physical processes that elude current
lattice QCD calculations, notably the direct calculation of real-time
scattering cross sections, fragmentation functions, and nuclear response
functions. The rapid advance of Quantum Information Science, and its role as a
high-priority research area, will play an increasingly important role in
addressing many of these key problems, as recognized in the report of the NSAC
subcommittee NSA (2019). Thus far, the investigation of quantum field theory
on quantum computers has been restricted to far simpler systems than QCD, but
the role of QIS in advancing lattice gauge theory is reviewed in
Ref. Bañuls _et al._ (2020). Further, strategies for exploiting quantum
computing to directly address processes relevant to the EIC, such as Compton
Scattering, are now being formulated Briceño _et al._ (2021b).
## VII Science of far forward particle detection
### VII.1 Far-forward detection overview
In contrast to colliders that are mainly built to study particles produced at
central rapidities, much of the EIC physics critically relies on excellent
detection of the target and target fragments moving along, and often within,
the outgoing ion beam. Consequently, EIC detectors are from the outset
designed with an elaborate far-forward detection system that is closely
integrated with the interaction region of the accelerator. The forward
detection has several stages: the endcap of the central detector, trackers
within a large-bore dipole magnet in front of the accelerator quadrupole
(quad) magnets, two sets of Roman pots (one for charged particles at lower
rigidities, so-called “off-momentum detectors”; the other for tagging protons
or light ions near the beam momentum) after a larger dipole behind the quads,
as seen in Fig. 38, which shows the layout of IP6 at the time of the Yellow
Report (largely unchanged since). Additionally, a zero-degree calorimeter is
employed for tagging neutrons and photons at very small ($<$5 mrad) polar
angles.
This arrangement allows for high-$p_{T}$ cutoffs to be determined by the
magnet apertures, as is the case for the neutron/photon cone going toward
the zero-degree calorimeter (which must traverse the full hadron lattice), and
for charged particles and photons being tagged in the first, large-bore dipole
magnet after the IP, which contains a detector for far-forward particles at
polar angles roughly between 5.5 and 20 mrad. The bore of the first dipole
(called B0pf in IP6) has a radius of 20 cm (while the pre-conceptual design
for IP8 has an equivalent dipole magnet with a slightly larger radius), which
in principle allows for larger acceptance than 20 mrad, but support structure
and services for the detectors will limit how much of the bore can be filled
with active detector material. As designs progress, it may be possible to
achieve a larger acceptance in the dipole spectrometer at both IP6 and IP8.
On the other hand, for lower-energy proton beams, unavoidable inefficiencies
will occur in the transition regions. There is a low-$p_{T}$ cutoff due to the
beam itself, which is most severe for the detection of recoil protons from
mid- to high-energy beams (which provide the highest luminosity), for light
ions at all energies, and for heavy ion fragments with A/Z close to that of
the original beam. For ions, where the $p_{T}$ per nucleon is usually small,
acceptance at very low-$p_{T}$ is extremely important. With a traditional IR
layout, low-$p_{T}$ acceptance can be improved by reducing the angular spread
of the beam via reduced beam focusing. However, this has the drawback that it
also reduces luminosity and still does not make it possible to reach
$p_{T}$=0.
Figure 38: Layout of the IP6 Far-Forward region generated with the EICROOT
simulation package EIC (6 16) including the dipole magnets (rectangular
boxes), quadrupole magnets (cylinders), and the four detector subsystems
currently proposed to cover the geometric acceptance.
The kinematics of the EIC are uniquely suited to a more sophisticated forward
detection concept than previous colliders. In DIS, the typical longitudinal
momentum loss $dp/p\sim x$. At the same time, the intrinsic momentum spread of
the particles in the beam is a few $\times 10^{-4}$. With a 10$\sigma$ margin,
all recoil protons with $x>0.01$ will thus separate out from the beam even at
$p_{T}$=0, and at much lower $x$ for non-zero $p_{T}$. Since this method only
relies on a fractional longitudinal momentum loss (magnetic rigidity), it is
independent of the beam energy. For heavy ions, which typically only
experience small changes in momentum, rigidity ($\sim A/Z$) can change through
emission of nucleons. In particular, emission of a single neutron from an
$A\sim 100$ nucleus corresponds to a change in rigidity at the 1% level, which
in principle also allows the EIC to detect most nuclear fragments.
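The 1% figure can be checked with a one-line estimate; the charge $Z$ below is an assumed, illustrative value, not a specific beam species.

```python
# Back-of-the-envelope check (illustrative values, not a beam design):
# magnetic rigidity of an ion scales as A/Z at fixed momentum per nucleon,
# so emitting one neutron from an A ~ 100 nucleus shifts the rigidity by ~1/A.

def rigidity_shift(A, Z, dn=1):
    """Fractional rigidity change after emitting dn neutrons (A/Z scaling)."""
    before = A / Z
    after = (A - dn) / Z
    return after / before - 1.0

shift = rigidity_shift(A=100, Z=44)  # Z = 44 is an assumed example charge
```

Note that the charge cancels in the ratio, so single-neutron emission from any $A\sim 100$ beam gives a rigidity change at the 1% level, large enough to separate the fragment from the parent beam.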
To take full advantage of the EIC kinematics, the forward detection requires
two elements: dispersion and focusing. The former is generated by dipole
magnets and translates a momentum (rigidity) change into a transverse position
offset: $dr=D\,dp/p$ (e.g., with $D=40$ cm, the transverse displacement for a
particle with $dp/p=0.01$ and $p_{T}=0$ will be 4 mm). This value has to be
compared with the (10$\sigma$) beam size at the detection point (Roman pot).
Without focusing, this is typically a few cm, but with a secondary focus it
can be reduced to 2-3 mm (depending on the beam momentum spread). The beam
size on the Roman pot does not, in principle, depend on the focusing of the beam
at the collision point ($\beta^{*}$), but in a practical implementation the
same magnets are used to generate both the primary and secondary focus.
However, in contrast to the unfocused case, this means that with a secondary
focus the best low-$p_{T}$ acceptance is achieved at the highest luminosity. A
secondary focus could in principle be used at either IP6 or IP8 of the EIC.
However, while the current IP6 layout has some dispersion (17 cm), it does not
have a secondary focus. In contrast, IP8 is designed for a much larger
dispersion and incorporates a secondary focus – making it complementary to IP6
and opening up unique physics capabilities, as can be seen in Fig. 39.
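The dispersion arithmetic in this paragraph can be sketched directly, using the example numbers from the text ($D=40$ cm, $dp/p=0.01$) together with illustrative beam sizes (a few cm unfocused, 2-3 mm with a secondary focus):

```python
# Transverse offset from dispersion: dr = D * dp/p (numbers from the text).
D = 0.40            # dispersion in metres (the 40 cm example)
dp_over_p = 0.01    # fractional momentum loss (x = 0.01, p_T = 0)

dr = D * dp_over_p
print(f"offset dr = {dr * 1e3:.1f} mm")  # 4.0 mm

# Compare with the (10-sigma) beam size at the Roman pot; values are
# illustrative of "a few cm" and "2-3 mm", not actual optics output.
beam_size_unfocused = 30e-3   # metres, without a secondary focus
beam_size_focused = 2.5e-3    # metres, with a secondary focus

print("clears beam without secondary focus:", dr > beam_size_unfocused)  # False
print("clears beam with secondary focus:   ", dr > beam_size_focused)    # True
```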
Figure 39: Two-dimensional plots of proton acceptance in transverse momentum, $p_{T}$, versus the nucleon momentum fraction. The acceptance is shown for three
configurations: accepted protons in the IP6 Roman pots with the CDR high
divergence optics (left), for accepted protons in the IP6 Roman pots with the
CDR high acceptance optics (middle), and for accepted protons in the IP8 Roman
pots at the secondary focus with the pre-conceptual optics configuration
(right). All samples were generated for 18 GeV electrons on 275 GeV protons with $x_{L}>0.8$ and with $0<\theta<5$ mrad; the cutoff at the top of the plots is
due to the event generation region, while the acceptance in the bottom right
varies with different configurations.
### VII.2 Detection of recoil baryons and light ions
As discussed in sections 7.2.2 and 7.3.8 of the Yellow Report Abdul Khalek
_et al._ (2021) and earlier in this paper, exclusive reactions on the proton
and light nuclei form an essential part of the EIC physics program. The wide
kinematic reach of the EIC makes it ideal for probing different parts of the
nuclear wave function, revealing how the internal landscape of nucleons and
nuclei changes with $x$. Measurements of exclusive processes require high
luminosity, a range of collision energies, and excellent far-forward
detection. Key issues are detector acceptance for the recoil proton or light
ion and optimized reconstruction resolution of the momentum transfer, $t$.
##### Proton detection:
Detecting the recoiling nucleons is important to cleanly establish the
exclusivity of the reaction. It also makes it possible to reconstruct $t$
directly from the proton. Since the EIC reaches its highest luminosity with
the most asymmetric beam energies (i.e., 5-10 GeV electrons colliding with
hadrons at maximum energy), it is essential that the far-forward detection
works optimally for high-energy protons. Here, the greatest challenge is to
detect low-$p_{T}$ protons which stay within the beam envelope. This
capability can be improved by using a secondary focus, which essentially
provides full acceptance for $x>10^{-2}$, and significantly improves the
low-$p_{T}$ acceptance for lower $x$. For lower proton beam energies, a
secondary focus is still useful, although less crucial. However, at lower
energies, high-$p_{T}$ protons will start experiencing losses in the apertures
of the accelerator quadrupole magnets, leading to a reduced acceptance for
detectors downstream of these magnets. Embedding a tracking detector, such as
is envisioned with the B0 tracker in IP6, provides increased coverage of
high-$p_{T}$ protons at the lower beam energies. This issue can be alleviated
by using magnet technologies that allow for higher peak fields, which makes it
possible to increase the apertures, but there are other technical constraints
that could make this challenging, especially at IP6, and more study is needed
to determine what level of improvement is possible. In conjunction with a
secondary focus, this would further enhance the capabilities of the EIC to do
transverse proton imaging.
##### Determination of transverse momentum in exclusive reactions:
Another important consideration for exclusive reactions is reconstruction of
$t$. In principle it can be done either by using the recoiling system detected
in the far-forward detectors, or from the scattered electron and produced
particle (charged meson or DVCS photon) detected in the central detector.
There are advantages to both methods. For example, the former method is very
straightforward, but requires a good understanding of the beam effects (e.g.
angular divergence). Ideally one would want to be able to apply both, but this
requires that the central detector can provide a sufficiently good
$p_{T}$-resolution. This is a challenge for a tracker, but even more so for
the EM calorimetry if one wants to be able to determine $t$ ($\Delta_{\perp}$)
from the DVCS photon (or the photons from $\pi^{0}$ production). However,
while such a dual capability is useful for protons, it becomes essential for
ions, where the ability to determine $t$ from the ion is more limited and
vanishes entirely when the ion is not detected (high A and low $p_{T}$). Being
able to determine $t$ from the DVCS photon would thus greatly enhance the
ability to do transverse imaging of ions.
##### Light ion detection:
Coherent exclusive scattering on light ions differs from protons in two ways.
First, the available ion beam energies are restricted to a range between 100 GeV/A and $275\times Z/A$ GeV/A, plus a discrete energy at 41 GeV/A.
Second, scattered ions travel much closer to the beam, making low-$p_{T}$
acceptance very challenging (and conversely, the high-$p_{T}$ acceptance much
less so, even for the high-$t$ tails). This is the combined result of two
effects: cross sections for ions peak at lower $t$, and a given $t$
corresponds to a lower $p_{T}$ per nucleon. The former means that in contrast
to the proton, clean imaging of light ions requires an acceptance down to
$p_{T}\sim 0$, and the latter that implementing such an acceptance is
particularly difficult. A secondary focus is thus essential for high-quality
measurements of coherent scattering on light ions. However, if the central
detector has the ability to reconstruct the $p_{T}$ from the produced photon
or meson as discussed above, a secondary focus would allow for a hybrid method
where ions with higher $p_{T}$ are detected (the incoherent backgrounds become
more challenging as one moves towards the first diffraction minimum), while
the low-$p_{T}$ part is reconstructed by vetoing the breakup (which is
generally easier to do with light than heavy ions). A hybrid measurement would
not be as clean as one where all recoiling ions are detected, but it would
make it possible to reach lower $x$ and higher A, extending the discovery
potential of the EIC.
### VII.3 Spectator detection
Detection of nuclear breakup is essential for a broad range of EIC physics
topics. From a detection perspective, these broadly fall into two categories:
spectator nucleons and nuclear fragments. In the first case, the spectator
nucleon typically experiences a very small change in momentum, but its
magnetic rigidity (A/Z) is very different from that of the original beam. A
proton spectator will thus initially continue moving with the beam, but will
separate quickly from it after passing the first dipole magnet. The detection
challenge here thus lies primarily in providing adequate magnet apertures. A key example of spectator proton tagging is the measurement of neutron structure in deuterium and ${}^{3}He$.
Nuclear fragments may be detected either as a way of vetoing breakup or as part of the direct measurement. The former case was discussed above (Sec. VII.2) in the context of light and medium nuclei, but coherent processes
on heavy ions are different in that even with a secondary focus, the
high-$p_{T}$ tails cannot be measured directly as the ion always stays inside
the beam envelope. A secondary focus can, however, make it possible to detect
residual ions that have lost a single nucleon (A-1 tagging). Adding such a
capability will significantly improve the efficiency for vetoing the large
incoherent backgrounds, making a reasonably clean measurement possible.
Finally, there are several measurements that rely on detection of the
spectator nucleons, the residual nucleus, or nuclear fragments in the final
state. One example is the case when the struck nucleon and its partner are in
a short-range correlation with a high relative momentum. In this case, the
spectator nucleon will not only have a different A/Z compared with the
original ion, but also a large $p_{T}$. The breakup kinematics can then be
best constrained if the residual A-2 nucleus can be detected, which is
facilitated by a forward spectrometer with a secondary focus. A related topic
is detection of rare isotopes produced in the interaction, which is discussed
in Sec. VII.7. Additional detail, including discussion of the theoretical framework for several of the tagged measurements, can be found in Refs. Strikman and Weiss (2018); Cosyn and Weiss (2020).
##### Neutron structure through spectator tagging
Light ion beams can be used as an effective free neutron target via spectator
tagging. Deuterium is the simplest system, while ${}^{3}He$ can be polarized
(70%) and thus give access to the neutron spin structure. Spectator tagging
can be applied to any primary measurement ($F_{2}$, DVCS, etc), but a key
common challenge is to account for final-state interactions (FSI). However,
recent studies Jentsch _et al._ (2021) have shown that free neutron structure
can be extracted by on-shell extrapolation to the non-physical pole, where the
neutron is by definition free and unaffected by FSI. In contrast to the pion,
this approach is much more robust for the heavier nucleon where the
extrapolation takes place over a shorter interval. The extrapolation is done
by fitting the measured $t$ distribution, but focuses on the low-to-modest
values of $t$ part, where the extrapolation has minimal model dependence.
Experimentally, this measurement relies on a high-resolution determination of
the $p_{T}$ distribution, and of having sufficiently large magnet apertures to
tag a spectator proton with low $p_{T}$ Jentsch _et al._ (2021). As a cross
check, it is also possible to apply the same method to the bound proton by
tagging the neutron from deuterium in the ZDC.
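The fit-and-extrapolate step described above can be illustrated with a toy script (all numbers are synthetic and the pole is placed at $t'=0$ for convenience; this only sketches the logic, not the actual analysis of Jentsch _et al._):

```python
# Toy version of on-shell extrapolation: fit the tagged observable with a
# low-order polynomial in t' = t - t_pole over the measured low-|t'| region,
# then evaluate the fit at t' = 0 (the non-physical pole, where the neutron
# is by definition free).  Synthetic data, illustrative noise level.
import numpy as np

rng = np.random.default_rng(0)
t_prime = np.linspace(0.005, 0.05, 20)            # GeV^2, measured region
truth = 1.0 - 3.0 * t_prime + 10.0 * t_prime**2   # synthetic 'reduced' observable
data = truth + rng.normal(0.0, 0.002, t_prime.size)

coeffs = np.polyfit(t_prime, data, deg=2)   # fit the measured t' distribution
at_pole = np.polyval(coeffs, 0.0)           # extrapolate to the pole
print(f"extrapolated value at the pole: {at_pole:.3f} (true value: 1.000)")
```

Restricting the fit to low-to-modest $|t'|$, as the text notes, is what keeps the model dependence of the extrapolation small.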
##### Proton and neutron spectators from deuteron beams
Deuteron beams can be used as an effective free neutron target via spectator tagging, where the undisturbed proton is measured to isolate scattering from the neutron. To isolate nearly on-shell neutrons, the goal is to tag protons
which had low initial momenta (corresponding to low $-t$) in the deuteron rest
frame. Measurements will be made over a range of $t$, so that the
extrapolation to the on-shell neutron can be performed over different ranges
of $t$ to ensure stability of the extrapolation. As noted above, detection of
these protons in the Off-Momentum Detector and Roman Pots is relatively
straightforward and the key issue is minimizing the loss of acceptance in the
apertures of the accelerator magnets.
Similar studies of proton structure can be performed with neutron tagging used to isolate scattering from a low-momentum
proton. In this case, the results can be compared to the known proton
structure, and these studies can be used to study the $t$-dependence and test
the extrapolation to the on-shell proton. For the low $t$ values required for
these measurements, the neutrons have $x_{L}$ near unity and small $P_{T}$ and
are detected in the ZDC.
##### Double tagging from ${}^{3}He$ breakup
While the deuteron is the most common target used to study unpolarized neutron structure, polarized ${}^{3}He$ serves as the most effective target for measuring neutron spin structure, as the neutron carries most of the spin in ${}^{3}He$. Inclusive measurements can provide some information, with the protons acting mainly as a dilution to the asymmetries associated with scattering from the polarized neutron. But double tagging of the two spectator protons in ${}^{3}He$ can be used to isolate scattering from the neutron without dilution or corrections for the proton contributions Friscic _et al._ (2021). In this case, the goal is to measure spectators with low momenta in the ${}^{3}He$ rest frame which have
momenta close to the beam momentum per nucleon and small $P_{T}$, but lower
mass and therefore roughly 2/3 of the rigidity of the ${}^{3}He$ beam. One can also
examine events with one large-momentum proton to identify high-momentum
neutrons in the initial state to look at the spin structure as a function of
initial neutron momentum, which is relevant for understanding the spin EMC
effect.
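The "roughly 2/3 of the rigidity" statement is quick to verify (assuming every nucleon carries the beam momentum per nucleon, and rigidity $R = p/Z$):

```python
# Rigidity of a spectator proton relative to a 3He beam, with R = p/Z and
# each nucleon carrying the same momentum p_N (arbitrary units).
p_N = 1.0
R_he3 = (3 * p_N) / 2      # 3He: A = 3, Z = 2
R_proton = (1 * p_N) / 1   # proton: A = 1, Z = 1

ratio = R_proton / R_he3
print(f"spectator-proton rigidity / beam rigidity = {ratio:.4f}")  # 0.6667
```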
##### Tagged Pion structure - nucleon spectators from proton beams
Measurement of the $\pi^{+}$ electromagnetic form factor can be accomplished
at the EIC by the detection of the neutron spectator in coincidence with the
scattered electron and $\pi^{+}$, i.e. an exclusive reaction with
$e^{\prime}-\pi^{+}-n$ triple coincidence. The neutron is emitted with 80-98%
of the proton beam momentum, and is detected in the ZDC. The pion form factor
measurement only requires $-t$ measurements up to about 0.4 GeV$^{2}$, so a
moderate acceptance ZDC is sufficient to catch the events of interest. Very
good ZDC angular resolution is required for two reasons. First, to separate
the small exclusive $\pi^{+}$ cross section from dominant inclusive
backgrounds, a cut may be placed on the detected neutron angle in comparison
to the reconstructed neutron angle (from $e^{\prime}$ and $\pi^{+}$ using
momentum conservation). Second, a $t$ reconstruction resolution better than
$\sim$0.02 GeV$^{2}$ is necessary for a quality form factor measurement, and such
resolution is only possible when reconstructed from the initial proton and
final neutron momenta. The ZDC is thus of crucial importance to the
feasibility of a pion form factor measurement at the EIC.
##### Tagged Kaon structure - hyperon spectators from proton beam
Measurements of the $K^{+}$ electromagnetic form factor at high $Q^{2}$ via
the Sullivan process would yield valuable information on nonperturbative
DCSB–Higgs-boson interference effects in hard exclusive processes. The
reaction of interest is $e+p\rightarrow e^{\prime}+K^{+}+\Lambda$, where the
$\Lambda$ is emitted with $>70$% of the proton beam momentum. We expect that
lower beam energies are optimal, to ensure a high $\Lambda$ decay fraction, as
non-decayed $\Lambda$ will be impossible to distinguish from neutron hits.
The $\Lambda$ needs to be identified from its decay products to ensure the
clean identification of the exclusive events from inclusive backgrounds, and
to reconstruct $t=(p_{p}-p_{\Lambda})^{2}$ with sufficiently high resolution.
One complication is that the $\pi^{-}$ from the dominant $\Lambda\rightarrow
p\pi^{-}$ decay channel cannot be detected in the far forward detectors for
decays occurring at or after the B0 magnet. Such measurements would require
dedicated detectors for negative particles or be limited to decays occurring
sufficiently before B0. The neutral $\Lambda\rightarrow n\pi^{0}\rightarrow
n\gamma\gamma$ decay seems a better choice. For the measurement to be
feasible, three hit events need to be reliably identified in the ZDC with
sufficiently good energy and angle resolution for $t$ reconstruction. Even
more challenging is confirming that the Sullivan process dominates at low
$-t$, which requires a measurement of the $\Lambda/\Sigma^{0}$ ratio. This
entails the reliable detection of four neutral hits in the ZDC, from
$\Sigma^{0}\rightarrow\Lambda\gamma\rightarrow n\pi^{0}\gamma$. Thus, this is
a measurement that is significantly more challenging than that of the pion
form factor, although if it is feasible, it would be an important addition to
the EIC scientific program. The acceptance for neutral decay products could
potentially be increased significantly if calorimetry were included in the B0
magnet. This option was mentioned as a possibility in the Yellow Report, but
including both tracking and calorimetry is technically challenging due to
spatial constraints inside the magnet, and further design work is needed to determine what is possible.
### VII.4 Tagging of active nucleons - high spectator momenta
While the previous sections focused on tagging of relatively low-momentum
spectators, other key studies are focused on isolating high-momentum nucleons
and/or mapping out tagged nucleon structure over a wide range of initial
virtualities. Studies of Short-Range Correlations between pairs of bound
nucleons require tagging of final state nucleons at both high and low values
of $p_{T}$ to fully exploit the measurement capability. This provides a unique
challenge for the detector acceptances, as multiple far-forward subsystems
play a role in covering the phase space. In general, the active nucleon in a
reaction will be scattered with relatively large polar angles ($\theta>5$
mrad), while the recoil nucleons and spectator nuclear fragments (for A $>$ 2)
are usually at smaller values. Additionally, in the case of the recoil
protons, there is a magnetic rigidity change with respect to the ion beam
which further complicates detection. It is in principle also possible to tag an A-2 spectator nucleus in the final state, but this is uniquely challenging due to the small scattering angles of the spectator nucleus and its small rigidity change, which depends on the struck SRC pair. Tagging of A-2 nuclei can
be enhanced with Roman Pots at a secondary focus.
In cases where both final-state nucleons from an SRC pair are measured, the
spectator nucleon is detected in the far-forward region while the active
(struck) nucleon is measured in the main or far-forward detectors. At higher
energies, the acceptance is more complete when measuring a spectator neutron
and active proton, since the polar angle coverage for struck protons is
extended to $\sim 20$ mrad in the B0 tracking detector, while the neutron
acceptance is limited to $\sim 5$ mrad by the magnet aperture. For active
neutrons, the lower beam energy configurations (e.g. 5x41 GeV/n) are more
beneficial since the larger active neutron scattering angle can place them in
the acceptance of the main detector endcap hadronic calorimeter (i.e.
$\theta\gtrsim 30$ mrad). Additionally, if more of the open bore space in the
dipole spectrometer can be used for active detector material, it would further
enhance the capabilities for active proton tagging beyond the current 20 mrad
assumption.
Having some capability for tagging in the higher-$p_{T}$ regime allows
simultaneous study of both free nucleon structure and nuclear modifications
with the same experimental setup. Studies of Short-Range Correlations and
nuclear modifications enable the EIC to provide insight into the EMC effect
and other physics at higher-x.
### VII.5 Vetoing of breakup
Separation of coherent and incoherent photoproduction of photons (Deeply
Virtual Compton Scattering) and vector mesons is critical to many aspects of
the EIC physics program. In the Good-Walker paradigm, one can relate the
coherent cross-section to the average nuclear configuration, while the
incoherent cross-section is sensitive to event-by-event fluctuations of the
nuclear configuration, including gluonic hot-spots Klein and Mäntysaari
(2019). One can do a two-dimensional Fourier transform of $d\sigma_{\rm
coherent}/dt$ to determine the transverse distribution of gluons in the
nuclear target - the nuclear equivalent of the GPD. By studying different
mesons with different masses, and using photons with different $Q^{2}$, one
can map out nuclear shadowing as a function of position within the nucleus.
The challenge in these measurements is in adequately separating coherent and
incoherent production, by detecting the products of nuclear breakup Chang _et al._ (2021).
To determine the transverse gluon distributions, it is necessary to measure
$d\sigma_{\rm coherent}/dt$ out to the third diffractive minimum Abdul Khalek
_et al._ (2021), to avoid windowing artifacts in the Fourier transform. At
this minimum, a rejection factor of 500:1 is needed to adequately remove the
incoherent background.
In most cases, nuclear dissociation leads to neutron (or, less frequently,
proton) emission from the target. These are relatively straightforward to
detect, although very high efficiency is required. However, some soft
excitations will produce excited nuclear states that decay by photon emission.
These photons typically have energies of a few MeV (or less) in the nuclear
rest frame. Gold (planned as the main EIC heavy nuclear target) is particularly problematic. It has a 77 keV excited state with a 1.9 ns lifetime; because of this lifetime, the state is almost impossible to observe in an EIC detector. Its next states have energies of 269 and 279 keV, respectively. The
lab-frame energies depend on the EIC beam energies, but for 110 GeV/n gold
beams, the maximum energy is 65 MeV. For photon emission away from the far-
forward direction, the energy will be lower. This is likely beyond the reach
of the planned EIC detectors, but could be accessible in an upgrade. Because
the energy transfer to the target (and hence the energy spectrum of the
excitations) depends on $t$, it is critical to be able to detect emission of
protons, neutrons, and soft photons over the full phase space. As noted
earlier, the addition of calorimetry in the B0 magnet would improve the
acceptance, but is technically challenging.
Since the knockout of a single neutron (and possibly evaporation of another)
is an important contribution to the incoherent background, the ability to tag
and veto on A-1 nuclei (e.g., Zr-89 from a Zr-90 beam) is also very important
for a clean measurement. High-resolution photon detection is also synergistic
with a potential rare isotopes program discussed below.
It is also possible to mistake coherent production for incoherent, if a second
collision in the same beam crossing dissociates a nucleus Klein (2014). This
could affect the measurement of the incoherent cross-section at small $|t|$.
Although the background rate can be subtracted, statistical uncertainties will
remain. However, most of these events can be removed if the ZDC has very good
timing.
### VII.6 Backward (u-channel) photoproduction
In backward (u-channel) photoproduction, the produced meson takes most of the
energy of the incident proton, and so goes in the forward direction, while the
proton is shifted many units of rapidity, and, at the EIC, is visible in the
central detector Gayoso _et al._ (2021). Instead of having small Mandelstam
$t$, as in conventional photoproduction, $t$ is large (near the kinematic
maximum) and $u$ is small. This process may be modelled using Regge
trajectories involving baryons, but it is not easy to see how such simple
reactions can lead to nucleons being shifted many units of rapidity; there may
be connections with baryon stopping in heavy-ion collisions. A systematic
exploration of production of different mesons at higher energies is needed to
fully characterize this reaction, and test the Regge trajectory approach.
Reconstruction of these events requires a forward detector that is able to
reconstruct multi-particle final states. For the full 18 $\times$ 275 GeV beam energy, the decay products of light mesons ($\omega$, $\rho$ or $\pi^{0}$) mostly end up with $\eta>6.2$, in the zero degree calorimeter (ZDC). At lower beam energies, or with heavier mesons, the products are at smaller pseudorapidity. This requires a forward detector with as full an acceptance as possible (i.e., with no holes in the acceptance) for both charged and neutral particles.
### VII.7 Rare isotopes (including photons for spectroscopy)
As discussed in the recent EIC Yellow Report, simulation studies suggest that
the EIC has the potential to produce and detect rare isotopes along with their gamma-decay photons, allowing this new machine to complement the results from dedicated rare isotope facilities.
Direct detection of the rare isotopes will use the Roman Pot (RP) detectors.
At first approximation, the produced rare isotopes will have the same
momentum-per-nucleon as the ion beam and no angle relative to the beam. Under
this approximation, the rigidity (R = p/Z) of the isotope is directly related
to its A/Z as
$R_{Rel}=(R-R_{beam})/R_{beam}=\Delta p/p=\left(\frac{A}{Z}\right)/\left(\frac{A_{beam}}{Z_{beam}}\right)-1~{}.$ (42)
Under the above assumption, the isotope’s hit position in the dispersive
direction at the RP gives a measurement of A/Z. Figure 40 shows the expected
hit positions for known and predicted isotopes both at the first RP for the
primary IR and at the first RP located near the secondary focus in the second
IR, assuming a 238U beam. Isotopes with the same Z and different A values are
shown at the same vertical position in the plots. In addition, using the beam
parameters from table 3.5 of the 2021 EIC CDR EIC (2021) for heavy nuclei at
110 GeV/A on electrons at 18 GeV, the 10$\sigma$ beam exclusion area is shown
by the gray box. As can be clearly seen, none of the heavy rare isotopes can
be detected in the primary IR, while the second IR has the potential to
detect the majority of the isotopes. At the RP in the second IR, isotopes
with the same Z that differ by a single neutron are expected to be separated
by 1.5 mm for Z = 100 and 5 mm for Z = 25.
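A minimal sketch of the A/Z-to-position mapping, using the fractional rigidity offset $R_{Rel}=(A/Z)/(A_{beam}/Z_{beam})-1$ and a hypothetical effective dispersion (`D_eff` is chosen purely for illustration, not taken from the actual IR optics):

```python
# Hit position of a rare isotope at a Roman pot, x ~ D_eff * R_rel, where
# R_rel is the fractional rigidity offset relative to the beam (R = p/Z,
# equal momentum per nucleon assumed).
def relative_rigidity(A, Z, A_beam=238, Z_beam=92):
    return (A / Z) / (A_beam / Z_beam) - 1.0

D_eff = 0.5  # metres; hypothetical effective dispersion, illustration only

# A 238U beam fragment that lost one neutron (A-1 tagging): R_rel = -1/238,
# i.e. about -0.4%, giving a few mm of offset for decimetre-scale dispersion.
r = relative_rigidity(237, 92)
print(f"R_rel(A-1) = {r:.2%}, hit offset = {D_eff * r * 1e3:.2f} mm")
```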
Figure 40: Left: Isotope Z vs. hit position in the first RP for the primary
IR; right: Isotope Z vs. hit position in the first RP at the secondary focus
for the second IR. The gray box on each plot shows the 10$\sigma$ beam
exclusion area. The plots are made assuming a 238U beam.
For uniquely determining the isotope, a direct measurement of Z is needed. The
simplest way to do this is by placing a Cherenkov detector behind the RPs at
the secondary focus. The number of Cherenkov photons produced by the isotope
will be proportional to $Z^{2}$.
Measuring gamma decay photons is also important as the level transitions
reveal the structure of the final isotope. The photons are produced
isotropically in the isotope’s rest frame but can be Lorentz up-shifted
significantly in the lab frame. This shift, as well as the requirement that
these photons be detected in coincidence with an isotope, means that photon
background will be small. LYSO crystals that do not require cryogenics can
therefore be used for this measurement. In addition, while spectroscopy would
benefit from a good photon acceptance, it would not be a critical requirement.
## VIII Radiative effects and corrections
### VIII.1 Introduction
QED radiative corrections (RC) are an integral part of the hadronic-structure
studies with electron (or muon) scattering. In experiment, they can reach tens
of per cent for unpolarized cross sections and several per cent for
polarization asymmetries, while also altering dependence of observables on all
kinamatic variables of DIS ($x,y,Q^{2}$) as well as altering dependence on
azimuthal angles both in SIDIS and deep-exclusive reactions. Thus, they can
become a significant source of systematics in a program of hadronic studies
with EIC.
The significance of electromagnetic RC for the analysis of scattering data should not be underestimated, as was clearly demonstrated by the different outcomes of the
Rosenbluth and polarization methods for measurements of the proton electric
form factor, see Afanasev _et al._ (2017) for an overview. Current and
planned experiments probing 3D hadronic structure require precise measurements
of GPD and TMD contributions to cross sections and spin asymmetries that may
be possibly obscured or altered by radiative effects. For this reason, proper
inclusion of RC is one of the priority tasks in experiment planning and data
analysis.
Historically, the approach developed by Mo and Tsai in the 1960s Mo and Tsai (1969) was successfully applied for both DIS and elastic electron scattering on protons and nuclei. In the 1970s, Bardin and Shumeiko developed a covariant
approach to the infra-red problem in RC Bardin and Shumeiko (1977) that was
later applied to inclusive, semi-exclusive and exclusive reactions with
polarized particles.
Emission of multiple soft photons is conventionally included via
exponentiation Yennie _et al._ (1961). A different approach for including
higher-order corrections Kuraev and Fadin (1985) uses a method of electron
structure functions based on Drell-Yan representation that allows RC
resummation in all orders of QED.
For high transferred momenta, such as at HERA or the EIC, electroweak corrections have to be included. The corresponding formalism was developed for HERA
Kwiatkowski _et al._ (1992); Charchula _et al._ (1994), while the codes
presently used for JLab would have to be updated to include weak boson
exchanges.
Higher precision of modern experiments presents new demands on the accuracy of
RC. It is common to divide RC, in a gauge invariant way, into two categories,
namely, model-independent and model-dependent. For model-independent RC, QED
corrections do not involve extra photon coupling to a target hadron. Still,
kinematics shifts due to extra photon emission require knowledge of hadronic
response in off-set kinematics that can be handled either by iterative
procedures, or existing data on the same reaction from other experiments, or
input from theoretical models. On the other hand, model-dependent corrections
correspond to extra photon exchange or emission by a target hadron. They
require knowledge of hadronic structure beyond what can be learned in a
considered experiment from a given reaction.
### VIII.2 Monte Carlo generators for radiative events
Classically, radiative corrections are applied to measured data post-hoc, i.e.
a correction factor is calculated using analytical formulas and then
multiplied onto the measured result, effectively mapping the measured
radiative rate to an ideal Born-level rate (e.g. Mo and Tsai (1969); Maximon
and Tjon (2000)). On the other hand, to calculate a cross section, the
detector acceptance is also required, and either calculated analytically from
geometry, or integrated numerically using Monte Carlo methods.
This post-hoc application of a—typically analytically integrated—correction
has limited precision, as it must necessarily make simplifying assumptions
about the detector acceptance, more so since radiative processes beyond a
peaking approximation can radically shift the event kinematics.
Therefore, the Monte Carlo algorithms, classically used to calculate the
acceptance, were extended to include full cross section and reaction models
including radiative corrections. The MC result, together with the luminosity,
is then not a calculation of the acceptance, but of the expected count rate,
and results of experiments are often presented as the ratio of the observed to
predicted count rates. A proper implementation of this approach includes
automatically all interactions between radiative corrections and other
detector effects like bin-migration and detector acceptance, possibly even as
a function of time. Such codes were developed for example for the HERA
experiments H1 and ZEUS Kwiatkowski _et al._ (1992); Charchula _et al._
(1994).
Efficient MC simulations require a small variance of event weights. Radiative
generators must overcome the fact that the radiative cross section varies by
many orders of magnitude, with possibly multiple, unconnected regions of
phase-space with high cross-section, for example for nearly collinear emission
of photons along electron trajectories.
In these cases, naive rejection sampling methods show poor performance as only
very few events are accepted. Automatic volume reweighting approaches like
foams can in principle be effective, but suffer from the high derivatives near
peaks. Efficient approaches therefore exploit the analytical structure of the
underlying cross section to generate events efficiently.
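As a toy illustration of exploiting the analytic structure of a peaked density rather than relying on naive rejection sampling (the $1/(x+\epsilon)$ shape below is a stand-in for a collinear radiative peak, not a real cross section):

```python
import math
import random

# Toy 'radiative' density f(x) ~ 1/(x + eps) on [0, 1]: sharply peaked as
# x -> 0, like nearly collinear photon emission.  Naive rejection against a
# uniform proposal would accept almost no events; inverting the analytic
# CDF instead produces every event with unit weight.
eps = 1e-6
norm = math.log((1 + eps) / eps)  # normalisation of the density

def sample_exact(u):
    # Invert the CDF F(x) = log((x + eps)/eps) / norm:
    #   x = eps * (exp(u * norm) - 1), for u uniform on [0, 1).
    return eps * (math.exp(u * norm) - 1.0)

random.seed(1)
xs = [sample_exact(random.random()) for _ in range(100_000)]

# Analytically, F(1e-3) = log((1e-3 + eps)/eps) / norm, which is ~0.5:
# half of all events sit below x = 1e-3, deep in the peak.
frac_peak = sum(x < 1e-3 for x in xs) / len(xs)
print(f"fraction of events with x < 1e-3: {frac_peak:.2f}")
```

A foam-style volume reweighting would have to resolve that peak adaptively; here the closed-form inverse CDF does it exactly, which is the point the text makes about exploiting the analytic structure.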
For fixed target electron scattering experiments, many suitable codes for QED
radiative corrections exist, however mostly limited to first-order
approximations, sometimes improved by approximate higher-level corrections
(see e.g. Yennie _et al._ (1961); Akushevich _et al._ (1999, 2012); Gramolin
_et al._ (2014); Henderson _et al._ (2017)). Recently, true higher-order MC
generators became available Banerjee _et al._ (2020, 2020). The validity of
such generators has been tested deep into the radiative tail, recently for
lower energies in Mihovilovič _et al._ (2017).
The translation of these generators to collider kinematics is straightforward,
with the caveat that numerical-precision problems may arise.
Beyond DIS reactions, the mapping of the radiative process back to the Born-
level base process becomes tedious. The QED radiative Feynman graphs resemble
QCD higher-order graphs, opening the door to a unified approach that can
handle both QCD and QED radiative effects, and corresponding algorithms are
currently being implemented in HEP generators Buckley _et al._ (2011). Using
the factorization theorem, the resummed leading logarithmic higher-order
corrections can be described with distribution and fragmentation functions Liu
_et al._ (2020, 2021). Higher-order corrections are resummed in the form of
parton showers, treating partons and photons on equal footing Hoeche _et al._
(2010). The approach has to be extended to include non-logarithmic higher-order
corrections.
### VIII.3 Opportunities to reduce model dependences
While QED radiative corrections seem straightforward to calculate, they often
require external input and make model assumptions, for example about hadronic
contributions. Recent experimental results on two-photon exchange, i.e., the
next order of corrections for elastic scattering, are not particularly well
predicted by current calculations (for an overview, see
Afanasev _et al._ (2017)), and are an open research topic in theory and
experiment.
Whether semi-analytical or Monte Carlo approaches are chosen for RC
calculations, it is important that integration over the phase space of the
radiated photon is done with a realistic hadronic tensor, as pointed out in
Ref. Akushevich and Ilyichev (2019b). In particular, radiative tails from
exclusive meson production can contribute to SIDIS, and baryon-resonance
contributions may be enhanced due to kinematic shifts from the radiated
photons. Uncertainties in the large-$x$ behavior of PDFs may also affect RC
calculations. In order to address these problems, the hadronic physics
community needs to maintain a comprehensive database of exclusive and semi-
inclusive reactions, whereby JLab/EIC data at lower energies and momentum
transfers would be used for RC calculations at the highest EIC energies.
Artificial Intelligence approaches may also be instrumental in developing
multi-dimensional iterative procedures, especially for SIDIS. In particular,
SIDIS measurements at the lower-energy interaction point of the EIC may be
used as input for RC calculations at higher energies of the same machine,
thereby providing the necessary energy coverage for self-consistent RC
approaches.
Extension of conventional PDF analysis to large Bjorken $x$ values and studies
of its impact on RC also have to be planned.
With the exception of elastic $ep$ scattering Afanasev _et al._ (2017), most
approaches to exclusive electron scattering have considered model-independent
RC that include only the coupling of the extra photon to lepton lines; see,
e.g., Refs. Vanderhaeghen _et al._ (2000); Akushevich and Ilyichev (2018) for
VCS and Ref. Afanasev _et al._ (2002) for exclusive pion production. The
importance of model-dependent RC, still unaccounted for, is indicated by both
experiment and theory. The JLab experiment Katich _et al._ (2014) measured DIS
with a transversely polarized $^{3}$He target and revealed a few-per-cent spin
asymmetry that appears only beyond the Born approximation and is similar in
magnitude to single-spin asymmetries due to T-odd effects arising from
hadronic structure. Effects at the level of several per cent due to two-photon
exchange were also predicted theoretically for exclusive electroproduction of
pions Afanasev _et al._ (2013); Cao and Zhou (2020).
A collaborative effort between the development of advanced models of hadronic
structure, experimental data analyses, and RC implementation will aim to
minimize experimental systematics on the one hand and, on the other, provide
access to hadronic PDFs, TMDs, and GPDs in kinematics otherwise not accessible
in direct measurements. In this respect, dedicated workshops (e.g. Afanasev
_et al._ (2020)) help bring together experts across several fields and
facilitate such collaborations.
## IX Artificial Intelligence applications
Artificial Intelligence (AI) is defined as a “machine-based system that can,
for a given set of human-defined objectives, make predictions, recommendations
or decisions influencing real or virtual environments” United States House of
Representatives Committee - Science, Space, and Technology (2020). Among the
topics that are grouped under the term AI, machine learning and autonomous
systems are of particular importance for the EIC:
* •
Machine Learning (ML) represents the next generation of methods to build
models from data and to use these models, alone or in conjunction with
simulation and scalable computing, to advance research in nuclear physics. It
describes how to learn and make predictions from data, and enables the
extraction of key information about nuclear physics from large data sets. ML
techniques have a long history in particle physics Carleo _et al._ (2019);
Deiana _et al._ (2021). With the advent of modern deep-learning (DL) networks,
their use has expanded widely and is now ubiquitous in nuclear physics, having
been found promising for many different purposes, such as anomaly detection,
event classification, simulations, and the design and operation of large-scale
accelerator facilities and experiments Bedaque _et al._ (2021); Boehnlein
_et al._ (2021).
* •
Autonomous systems are of interest for monitoring and optimizing the
performance of accelerator and detector systems without human control or
intervention. This can include responsive systems that adjust their settings
to background conditions as well as self-calibrating accelerator and detector
systems. An ambitious goal is the usage of real-time simulations and AI over
operational parameters to tune the accelerator for high luminosity and high
degrees of polarization.
The EIC community has started to incorporate AI into the work on the physics
case, the resulting detector requirements, and the evolving detector concepts.
Initiatives such as AI4EIC and the related AI working group in the EIC User
Group will work with the community to systematically leverage these
methodologies during all phases of the project. AI4EIC aims at identifying
problems where AI can have an impact and at finding solutions that can be
cross-cutting for the EIC community. The initiative will create a database
with benchmark datasets and challenges to allow testing of new AI approaches
and methods and comparison with previous ones. An overarching research theme of the EIC
community is the work towards an autonomous experiment with intelligent
decisions in the data processing from detector readout and control to
analysis.
AI will advance precision studies of QCD in both theory and experiment. A
prominent example is the application of AI to the inverse problem of using
measured observations to extract quantum correlation functions, e.g., with
variational autoencoders (VAEs) that utilize a latent-space principal
component analysis to replicate lost information in the reconstruction of the
posterior distribution Almaeen _et al._ (2021). Other examples are AI methods
to accelerate simulations for the design of experiments and for nuclear
femtography to image quarks and gluons in nucleons and nuclei.
### IX.1 Accelerate Simulations with AI
Physics and detector simulations are being used to develop the physics case,
the resulting detector requirements, and the evolving detector concepts for
the experimental program at the EIC. The high-precision measurements
envisioned for the EIC require simulations with high precision and high
accuracy. Achieving the statistical accuracy needed is often computationally
intensive, with the simulation of shower evolution in calorimeters or of the
optical physics in Cherenkov detectors being prime examples. Fast simulations
with parameterizations of the detector response, or other computationally
efficient approximations pursued as alternatives, lack the accuracy required
for high-precision measurements. Here, AI provides a promising alternative via
fast generative models, e.g., generative adversarial networks (GANs) or VAEs.
A promising approach is AI-driven detector design, where the parameters of the
detector and its cost are tuned using Bayesian optimization. AI-driven
detector design has been used for detector components Cisbani _et al._ (2020)
and recently for detector concepts Fanelli (2022).
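As a minimal illustration of the Bayesian-optimization idea (not any published AI-driven design tool), the sketch below tunes one hypothetical design knob against a synthetic cost function, using a one-dimensional Gaussian-process surrogate with an RBF kernel and a lower-confidence-bound acquisition; the objective, kernel length scale, and all other parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(r):
    """Toy objective: resolution plus cost penalty vs. a single design knob r.
    Entirely illustrative -- a stand-in for running a full detector simulation."""
    return (r - 0.6) ** 2 + 0.05 * np.sin(12 * r)

def rbf(a, b, ell=0.15):
    """Squared-exponential kernel with unit prior variance."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

grid = np.linspace(0.0, 1.0, 201)   # candidate design values
X = list(rng.uniform(0, 1, 3))      # initial random designs
y = [cost(r) for r in X]

for _ in range(12):                 # BO loop: fit GP, minimise acquisition
    Xa, ya = np.array(X), np.array(y)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))
    Ks = rbf(grid, Xa)
    mu = Ks @ np.linalg.solve(K, ya)                       # GP posterior mean
    var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
    # Lower-confidence bound: sample where the mean is low or uncertainty high.
    lcb = mu - 2.0 * np.sqrt(np.clip(var, 0.0, None))
    r_next = grid[np.argmin(lcb)]
    X.append(r_next)
    y.append(cost(r_next))

best = X[int(np.argmin(y))]
print(f"best design value found: {best:.3f}")
```

Each loop iteration spends one (expensive, in a real application) objective evaluation where the surrogate predicts the best trade-off between exploitation and exploration, which is what makes the approach attractive when every evaluation is a full detector simulation.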
### IX.2 Nuclear Femtography and AI
Tomographic images of the nucleon, referred to as nuclear femtography, are
critical for understanding the origin of the mechanical properties of the
nucleon such as mass and orbital angular momentum decompositions into
contributions from quark and gluon dynamics. The development of the new
imaging methodology, deeply-virtual exclusive processes in electron
scattering, and their dedicated exploration through the future EIC’s beam and
detector technology, will make nuclear femtography a reality for the first
time.
Efficiently constructing the images from future large, complex experimental
data sets, along with first-principles constraints from large-scale numerical
lattice-QCD calculations, requires the exploration of an ensemble of advanced
AI and ML techniques. In the case of GPD studies, the data-analytic strategy
goes from precisely understanding the performance of detectors in searching
for high-energy diffractive events, through accurately extracting the Compton
Form Factors as the key link between experimental data and the input for
image construction, to generating the images through complex neural-network
numerical regression that takes into account various physical constraints,
including direct lattice-QCD results. To accomplish this, it is
essential to assemble an interdisciplinary group of nuclear theorists and
experimenters, along with computer scientists and applied mathematicians, to
build the first AI/ML-based platform for state-of-the-art nuclear sub-
femtoscale imaging. The physical quantities connecting images and experimental
data are the Compton Form Factors (CFFs). Extracting CFFs from data is
complicated because several CFF combinations, corresponding to various
quark-proton polarization configurations, appear simultaneously in the
cross-section terms for each beam and target polarization configuration. A
neural-network (NN) approach,
exploiting dispersion-relation constraints, was recently adopted to obtain the
flavor-separated CFFs Čuić _et al._ (2020).
Generally considered to offer the most robust and flexible method for
multidimensional probability density estimation, Artificial Neural Networks
(ANNs) represent a new paradigm to tackle this complex problem. Initial ANN
applications to CFF extraction were reported in Moutarde _et al._ (2019);
Kumerički (2020); Grigsby _et al._ (2021) using standard supervised NN
architectures. The systematic application of AI to the extraction of
multidimensional structure functions is currently in its initial stages.
Possibly the most crucial aspect of these methods is the treatment of
uncertainties and their propagation from direct experimental observables (such
as cross sections and asymmetries) to the densities of physics interest (such
as the distributions of electric charge or forces). With emerging JLab $12$
GeV data and beyond from various experimental sources, a suite of ML
technologies will need to be explored to identify the optimal deep-neural-
network architectures, with proper treatment of uncertainty through robust UQ
techniques. This ML strategy can also be systematically extended to extract
the subleading CFFs once the leading-twist CFFs have been extracted with
controlled uncertainties. A statistically rigorous analysis of the NN
performance with respect to various choices, such as the architecture (depth
and width of the network), prior assumptions on the local variation of the
CFFs with respect to the kinematic variables, and prior assumptions on the
number and type of contributing CFFs, will need to be performed to fully
quantify any systematic errors from using ANNs. With the
goal of extracting all of the leading CFFs, one needs to develop eight
independent ANNs, each taking experimental cross-section data (e.g. DVCS
asymmetries) as input and predicting one CFF with minimal bias.
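The supervised-regression core of such an ANN can be sketched in a few lines. Below, a single toy "CFF" is fit from synthetic noisy pseudodata with a one-hidden-layer network written directly in NumPy; the functional form, noise level, network size, and training settings are all illustrative assumptions, not a DVCS analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pseudodata: a smooth toy "CFF" as a function of one kinematic variable t,
# observed through noisy measurements (purely synthetic, for illustration).
t = np.linspace(-1.0, 0.0, 64)[:, None]
cff_true = np.exp(2.0 * t) * (1.0 + 0.5 * t)
data = cff_true + rng.normal(0.0, 0.02, cff_true.shape)

# One-hidden-layer network trained by full-batch gradient descent.
H = 16
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.3, (H, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(5000):
    h = np.tanh(t @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    err = pred - data
    # Backpropagation of the mean-squared-error loss.
    g_pred = 2.0 * err / len(t)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)
    gW1 = t.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(t @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(np.mean((pred - cff_true) ** 2)))
print(f"RMSE of extracted toy CFF vs. truth: {rmse:.3f}")
```

In a realistic analysis each of the eight CFFs would get its own such network, with cross-section data as input and an uncertainty-quantification layer (e.g. ensembles over replicas) on top, as discussed above.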
### IX.3 Inverse problems of quarks and gluons with AI
Since quarks and gluons are not directly observable states of nature due to
confinement, understanding their emergent phenomena such as hadron structure
and hadronization from experimental data is unavoidably an inverse problem.
Traditionally, ML techniques have been mostly applied in the form of
_regression_, capitalizing on the model expressivity offered by ANNs Abdul
Khalek _et al._ (2019b); Ball _et al._ (2017). In recent years, however, a
number of machine learning applications have been developed to tackle similar
problems in nuclear physics, such as the reconstruction of neutron star
equations of state from the observational astrophysical data Fujimoto _et
al._ (2021, 2020, 2018); Soma _et al._ (2022), the deconvolution problem of
the Källén-Lehmann equation Shi _et al._ (2022b), inverse Schrödinger
equation solvers Shi _et al._ (2022c), inference on nuclear energy density
functionals Lasseri _et al._ (2020); Scamps _et al._ (2021), and quantum
many-body calculations Molchanov _et al._ (2022) (see the recent review in
Ref. Boehnlein _et al._ (2021)). The emerging features of these applications
include ML-theory emulators that mitigate the large computational cost of
parameter searches Lasseri _et al._ (2020); Scamps _et al._ (2021),
generative models to improve Markovian sampling in lattice QCD Albergo _et
al._ (2019), and the design of explainable ML architectures for parton
showers Lai _et al._ (2022b), to mention a few. Many of these applications are
likely to cross-pollinate the field of hadronic physics, and they will have a
transformational impact on the scientific discoveries at the EIC.
## X The EIC interaction regions for a high impact science program with
discovery potential
### X.1 Introduction
The compelling science program of the EIC focusing on the low to medium CM
energies has been described in this document. Here we describe the two
interaction regions (IRs) dedicated to the experimental programs, and some of
the important differences between them. The overall layout of the EIC is shown
in Fig. 41.
Figure 41: The EIC layout at Brookhaven National Laboratory. The electron and
ion beam directions are indicated in the upper left. There are several beam
interaction points (IPs); the 6 o’clock (IP6) and 8 o’clock (IP8) locations
are suitable for the installation and operation of large-scale detector
systems, with appropriate existing infrastructure. IP8 may be most suitable
for high-luminosity optimization at low to intermediate CM energies as well as
for the installation of a secondary focus for forward processes requiring high
momentum resolution. Both beams will be highly polarized, with proton and
electron beam polarizations over 70%.
One of the EIC design requirements is the capability of having two IRs. The
EIC configuration therefore includes two IRs where collisions will occur, and
where substantial near-full-acceptance detectors may be installed. The two IRs
are IR6 (for the primary IR at 6 o’clock) and IR8 (for the second IR at 8
o’clock). Here the RHIC clock-location nomenclature is used, in which the
STAR detector is located in IR6 and the PHENIX/sPHENIX detector in IR8.
IR6 and IR8 are not identical, nor are their existing experimental halls. RHIC
and EIC bring beams together horizontally for collisions; in the arcs there is
one “inner” beamline (closer to the arc center of curvature) and one “outer”
beamline (further from the arc center of curvature). For the EIC, the IR6
crossing geometry is such that both beams cross from inner to outer beamlines
(illustrated in Figure 42), while the IR8 crossing geometry is from outer to
inner beamlines. Hence the primary IR6 layout requires less bending than the
second IR layout at IR8. Other spatial layout and RHIC experimental hall
structural design differences exist that are inherited by the EIC project.
Figure 42: Schematic top view of the EIC IR6 primary IR, in the high
divergence configuration from the Conceptual Design Report EIC (2021) Figure
3.3. The y-axis positive direction points outward from the ring curvature;
both beams cross from inner (negative y-axis) to outer (positive y-axis)
beamlines.
The physical layout differences between IR6 and IR8, and their separate
implementation timelines, permit them to be developed to enhance the overall
facility science impact and discovery potential. For example, IR6 might
deliver the highest luminosities at highest CM energies, while IR8 may be
designed to provide higher luminosities at mid-range CM energies. The former
would emphasize discovery potential such as gluon saturation, while the latter
would emphasize rare exclusive processes for 3D nuclear imaging and mechanical
properties.
This section first briefly describes the primary IR design, as defined in the
EIC Conceptual Design Report (CDR) EIC (2021). This section then outlines the
present implementation of the second EIC IR at IR8, consistent with nuclear
physics, accelerator, and engineering requirements. The second IR may also
provide a different acceptance coverage than the first IR. We include
discussion of the operation of both IRs over the entire energy range of
$\sim$20–140 GeV center of mass, and include consideration of different modes
of two-IR EIC operations and their anticipated beam dynamics constraints.
### X.2 Primary IR design parameters
The luminosity and design of the reference first EIC interaction region are
optimized to emphasize the discovery potential of the EIC, providing the
highest luminosity near the upper end of the CM energy range, from
$\sim$80–120 GeV, while covering the entire range of parameters required by
the Nuclear Physics Long Range Plan. The parameter set and design are based on
1160 colliding bunches in each beam as described in the CDR EIC (2021):
* •
Peak luminosity of $L=10^{34}$ cm-2s-1 at a CM energy of 105 GeV;
* •
Crossing angle $\theta_{c}=25$ mrad;
* •
Maximum $\beta$-functions in the low-$\beta$ quadrupole magnets, $\beta_{\rm
max}\leq$1800 m (for protons in the vertical direction) and acceptable
nonlinear chromaticity resulting in sufficient dynamic aperture;
* •
IBS growth times in horizontal and longitudinal directions of $\tau_{\rm
IBS}>$2 hours.
The design and layout of IR6 are reasonably mature, as illustrated in Figure
42.
### X.3 Second IR design and downstream tagging acceptance
The EIC requirements include sufficient flexibility to permit alternative
optimizations of the two experimental IRs. For example, the IRs may be
optimized for highest luminosities at different CM energies. Moreover, the two
IRs and corresponding detectors may have acceptances and capabilities
optimized for different parts of the physics program as described in this
white paper.
To first order, the luminosity at the IP is inversely proportional to the
distance between the last upstream and first downstream final focus
quadrupoles (FFQs). The statistical uncertainty of measurements in the central
detector scales as this distance. However, the closer the beam elements are to
the IP, the more they obstruct the acceptance at shallow angles with respect
to the beam axis and restrict the acceptance for forward particles. The
solenoidal field used in the central detector region to measure the high
$p_{T}$ particles in the central detector is not effective in determining the
momenta of particles moving parallel to the beam direction, and additional
fields are needed.
From kinematics, the reaction products are biased towards small angles around
the original ion beam. In particular, the detection of small-angle products
requires acceptance for the recoiling target baryon (3D structure of the
nucleon), for hadrons produced in its breakup (target fragmentation), and for
all possible remnants produced when using nuclear targets (including the
tagging of spectator protons in polarized deuterium). The detection should be done
over a wide range of momenta and charge-to-mass ratios with respect to the
original ion beam. The second IR design should address these measurement
difficulties posed by the beam transport elements.
From machine design and luminosity considerations, it is not desirable to
leave a very large detector space free of beam focusing elements to allow the
small-angle products to accumulate sufficient transverse separation from the
incident beams. The solution is to let the small-angle particles pass through
the nearest elements of the machine final-focusing system, which
simultaneously perform the function of angle and momentum analyzer for the
small angle reaction products. Ideally, this forward detection system must be
capable of accepting all reaction products that have not been captured by the
central detector. In particular, similarly to the IR6 detector, this implies
sufficiently large apertures of the forward ion final focusing quadrupoles to
accommodate particle scattering angles from zero all the way up to the minimum
acceptance angle of the central detector. Of course, detection of zero angle
particles requires that they are outside of the beam stay-clear region in
another dimension, namely, in the rigidity offset. The IR8 design is
particularly optimized for separation of such particles from the beam and
their detection as described below. A significant challenge of this approach
is to balance often contradictory detector and machine optics requirements.
For example, the choice of the apertures of the forward ion final focusing
quadrupoles, and therefore the forward angular acceptance, are a balance of
the detection requirements and engineering constraints. One would like to make
the apertures sufficiently large without exceeding the technical limits on the
maximum aperture-edge fields.
Figure 43 illustrates $x_{L}-p_{T}$ acceptance with two successive
improvements to second IR acceptance. Without forward spectrometry (left), the
detection of low-angle scattered particles is limited by the beam divergence
at the IP. By introducing forward spectrometry (center), this limit can be
lowered, but particles with high rigidity ($x_{L}=1$) still escape
detection. Adding a secondary focus point with flat dispersion (right)
improves the $x_{L}$ acceptance gap further.
Figure 43: Illustration of forward spectrometry and secondary focus effects on
detector acceptance (shaded) in the $\textit{x}_{L}-\textit{p}_{T}$ space for
275 GeV protons.
The maximum detectable $x_{L}$ at a point in the beam-line can be calculated
to first order using,
$x_{L}<1-10\frac{\sqrt{\beta_{x}^{2nd}\epsilon_{x}+D_{x}^{2}\sigma_{\delta}^{2}}}{D_{x}},$
(43)
where $\beta_{x}^{2nd}$ is the Twiss $\beta$-function at the second focus,
$\epsilon_{x}$ is the horizontal beam emittance, $D_{x}$ is the horizontal
dispersion at the second focus, and $\sigma_{\delta}$ is the beam momentum
spread. At a point in the lattice with low $\beta$ function and high
dispersion $D_{x}$, one can reach the fundamental limit for the maximum
$x_{L}$ given by
$x_{L}<1-10\sigma_{\delta}\;.$ (44)
The present EIC second IR secondary focus design is very close to this
theoretical limit. Further improvements are quite limited by space
availability in the experimental hall and magnetic field constraints.
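Plugging illustrative numbers into Eqs. (43) and (44) shows how close a well-chosen secondary focus can come to the fundamental limit; the beam parameters below are assumed round numbers for illustration, not the official EIC lattice values:

```python
import math

# Illustrative beam parameters at the secondary focus (assumed values).
beta_x_2nd = 0.5     # Twiss beta-function at the secondary focus [m]
eps_x = 1.0e-8       # horizontal beam emittance [m rad]
D_x = 0.4            # horizontal dispersion at the secondary focus [m]
sigma_d = 6.6e-4     # relative beam momentum spread

# Eq. (43): rigidity acceptance limit at a point with beta and dispersion D_x.
x_L_max = 1.0 - 10.0 * math.sqrt(beta_x_2nd * eps_x
                                 + D_x ** 2 * sigma_d ** 2) / D_x

# Eq. (44): fundamental limit, reached when beta*eps << (D_x*sigma_d)^2.
x_L_fundamental = 1.0 - 10.0 * sigma_d

print(f"x_L acceptance limit at secondary focus: {x_L_max:.4f}")
print(f"fundamental limit:                       {x_L_fundamental:.4f}")
```

With these assumed numbers the betatron term $\beta_{x}^{2nd}\epsilon_{x}$ is subdominant to the dispersive term, so the computed limit sits just below the fundamental one, illustrating the statement that the secondary-focus design operates very close to the theoretical limit.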
The selection of crossing angle is an important design choice for the second
IR. This crossing angle must not be too large ($>\sim$50 mrad) for various
reasons:
* •
Constraints from the existing experimental hall geometry.
* •
The IP must be shifted towards the ring center to permit the RCS to bypass the
detector.
* •
Large crossing angle requires more aggressive crabbing, which in turn is
limited by cost, impedance, and beam dynamics issues.
* •
Detector acceptance becomes unacceptably small at larger crossing angles.
* •
A large crossing angle limits how close the final-focus quadrupoles can be placed to the IP, and hence the overall IR luminosity.
The crossing angle must also not be too small ($<\sim$25 mrad), since the
existing hall geometry requires spectrometer dipoles to bend towards the
electron beam. Bending away as in the primary IR is not possible because of
the second IR collision geometry. This pushes the second IR crossing angle
away from the 25 mrad used in the primary IR. The second IR design choice of
crossing angle is presently 35 mrad.
Figure 44 shows the layout of the second IR with the proposed detector
component placements. The ancillary detectors on the downstream hadron-beam
side have been integrated, while space is available for a luminosity monitor,
a low-$Q^{2}$ tagger, and local hadron polarimetry.
Figure 44: Layout of the second IR with a 35 mrad crossing angle, indicating
the locations of the main forward and auxiliary detector components. The
color-shaded areas show the $\pm$ 5 mrad $p_{T}$ acceptance for particles,
with yellow representing neutrons, while orange and blue represent protons
with $x_{L}=1$ and $0.5$, respectively.
### X.4 Technical design of an optimized low energy and high luminosity
interaction region
The above detection requirements make the detector and machine designs
intertwined and closely integrated. There is no longer a clear separation
between the detector and machine components. Several detection parameters
directly impact the design choices for the second IR and vice versa. The major
parameters critical to both detector and machine aspects of the design are
summarized in Table 3. This table also provides a comparison of primary and
second IR parameters. One of the important design differences is the inclusion
of a secondary focus in the second IR to provide improved downstream tagging
resolution as described in Section X.3.
Table 3: Summary of second IR design requirements and their comparison to the first IR.
# | Parameter | EIC IR #1 | EIC IR #2 | Impact
---|---|---|---|---
1 | Energy range: electrons [GeV] | 5–18 | 5–18 | Facility operation
| Energy range: protons [GeV] | 41, 100–275 | 41, 100–275 |
2 | Crossing angle [mrad] | 25 | 35 | $p_{T}$ resolution, acceptance, geometry
3 | Detector space symmetry [m] | -4.5/+4.5 | -5/+5.5 | Forward/rear acceptance balance
4 | Forward angular acceptance [mrad] | 20 | 20–30 | Spectrometer dipole aperture
5 | Far-forward angular acceptance [mrad] | 4.5 | 5 | Neutron cone, max. $p_{T}$
6 | Minimum $\Delta(B\rho)/(B\rho)$ allowing for detection of $p_{T}=0$ fragments | 0.1 | 0.003–0.01 | Beam focus with dispersion; reach in $x_{L}$ and $p_{T}$ resolution; reach in $x_{B}$ for exclusive proc.
7 | RMS angular beam divergence at IP, h/v [mrad] | 0.1/0.2 | $<$0.2 | Min. $p_{T}$, $p_{T}$ resolution
8 | Low $Q^{2}$ electron acceptance | $<$0.1 | $<$0.1 | Not a hard requirement
#### X.4.1 Design constraints
The design constraints for the second IR include:
* •
The second IR must transport both beams over their entire energy ranges with
required path lengths. All second IR dipole magnets must have sufficient field
integrals to provide the necessary bending angles keeping the IR footprint
fixed from the lowest to the highest energy, while respecting geometric
constraints of the existing infrastructure. The quadrupoles must also provide
sufficient focusing to properly transport the beams over the entire energy
range. The use of NbTi superconducting magnets implies that none of the second
IR magnets can have aperture-edge fields higher than 4.6 T at the highest beam
energies; more complicated magnets, such as the B0 spectrometer, may be
limited to significantly lower fields EIC (2021). For collisions, the second
IR magnets must have sufficient strengths to focus the beams at the IP while
having sufficiently large apertures to meet the detection requirements
discussed below. Simultaneous operation of the two IRs is also subject to the
beam dynamics constraints discussed later.
* •
Consistent with the two-detector complementarity approach, the second IR could
be designed to provide a near flat luminosity above $\approx$45 GeV. This
supports leveling of the EIC luminosity curve at a higher level over a wider
energy range, as can be seen in Fig. 45. The second IR may also be designed to
provide a different acceptance coverage than the first IR.
Figure 45: Estimated luminosity versus CM energies for the operation of one
(thick lines) or two (thin lines) interaction regions. The blue lines show
estimates of the reference luminosity. The green lines show the high
luminosity operation with potentially improved beam optics and cooling at
lower CM energies. (As shown in Gamage (2021))
* •
The ion and electron beams cross at a relatively large angle of 35 mrad at the
IP. High luminosity is preserved through the use of crab cavities. This angle
moves the ion beam away from the electron beam elements and makes room for
dipoles located just downstream of the central detector area. The dipoles
serve two purposes. First, they shape the beam orbits providing their
geometric match, making the IR footprint fit in the available detector hall
and tunnel space, and creating room for detectors. Second, the dipole systems
allow momentum analysis of the particles with small transverse momentum with
respect to the beams. Particles with large transverse momenta are analyzed
using the solenoidal field and the B0 magnet in the central detector.
Figure 46: Apparent horizontal broadening of the beam spot size at the IP due
to the crab tilt. Left (blue): the RMS hadron bunch length of $\sim 10$ cm;
middle (red): the beam spot viewed along the beam with no crabbing; right
(red): the $\sim$1.25 mm spot seen by the RP.
#### X.4.2 Effect of horizontal crabbing in secondary focus
Since the secondary focus is within the region where the hadron beam is
crabbed, hadron crabbing effectively broadens the horizontal beam spot size
seen by the Roman Pot (RP) detectors in the secondary focus, as illustrated in
Figure 46. This beam spot size is one of the sources of uncertainty in a
$p_{T}$ measurement. Ignoring for the moment other sources such as the beam
angular spread at the IP, the transverse position of a scattered particle at
an RP $x_{RP}$ is related to $p_{T}$ as
$x_{RP}=M_{11}x_{IP}+M_{12}p_{T}/p,$ (45)
where $x_{IP}$ is the scattered particle’s transverse position at the IP and
$p$ is the beam momentum. $M_{11}$ and $M_{12}$ are elements of the linear
beam transfer matrix from the IP to the RP known from the magnetic optics
design:
$\displaystyle M_{11}$ $\displaystyle=$
$\displaystyle\sqrt{\beta_{RP}/\beta_{IP}}\cos\Delta\Psi,$ (46) $\displaystyle
M_{12}$ $\displaystyle=$
$\displaystyle\sqrt{\beta_{RP}\beta_{IP}}\sin\Delta\Psi,$
where $\beta_{RP}$ and $\beta_{IP}$ are the Twiss $\beta$-functions at the RP
and IP, respectively, and $\Delta\Psi$ is the betatron phase advance from the
IP to the RP. The measured $p_{T}$ can be expressed as
$p_{T}=p\frac{x_{RP}}{\sqrt{\beta_{RP}\beta_{IP}}\sin\Delta\Psi}-p\frac{1}{\beta_{IP}}\frac{\cos\Delta\Psi}{\sin\Delta\Psi}x_{IP}.$
(47)
Since it is challenging to measure $x_{IP}$ precisely, the second term on the
right-hand side of the above equation represents a measurement uncertainty
$\Delta
p_{T}=\left|p\frac{1}{\beta_{IP}}\frac{\cos\Delta\Psi}{\sin\Delta\Psi}x_{IP}\right|.$
(48)
$x_{IP}$ consists of a random betatron component $x_{\beta}$ and a
longitudinal-position-correlated component $z\,\theta/2$:
$x_{IP}=x_{\beta}+z\,\theta/2,$ (49)
where $z$ is the particle's longitudinal position relative to the center of the bunch
and $\theta$ is the total beam crossing angle.
The second term in Equation 49 describes the beam spot size smear. It is
typically much greater than the first term. Therefore, the uncertainty term in
Equation 48 can be greatly reduced by measuring the event’s $z$ position. It
has been suggested that, with a feasible RP timing of $\sim 35$ ps, the $z$
position can be resolved down to $\sim 1$ cm.
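The quoted $\sim 1$ cm follows from treating the timing resolution as a longitudinal vertex resolution via $\sigma_{z}\approx c\,\sigma_{t}$; this is a rough sketch that neglects order-one geometric factors from the two-bunch collision:

```python
c = 299_792_458.0      # speed of light, m/s
sigma_t = 35e-12       # assumed RP timing resolution, s
sigma_z = c * sigma_t  # naive longitudinal vertex resolution, m
print(f"sigma_z ≈ {sigma_z * 100:.2f} cm")
```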
Another factor in the uncertainty term of Equation 48 is $\cos\Delta\Psi$. By
placing the RP at a position where $\Delta\Psi$ is close to $\pi/2$, so that
$\cos\Delta\Psi$ vanishes, $\Delta p_{T}$ in Equation 48 can in principle be
made arbitrarily small. There may be
practical considerations limiting the available choice of $\Delta\Psi$ such as
the requirement of placing the RP before the crab cavities, which have small
apertures and kick the particles. In the presented design of the second IR,
$\Delta\Psi$ is adjusted as close to $\pi/2$ at the RP as allowed by other
constraints to minimize $\Delta p_{T}$.
Physics simulations set a requirement on the contribution of the crabbing tilt
to $\Delta p_{T}$ of
$\Delta p_{T}<20\,{\rm MeV}.$ (50)
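The interplay of Equation 48 with the timing measurement can be illustrated numerically. All parameter values below are illustrative assumptions; the 25 mrad total crossing angle is inferred from the 10 cm and 1.25 mm numbers quoted in Figure 46:

```python
import math

def dpt(p, beta_ip, dpsi, x_ip):
    """Eq. (48): p_T uncertainty from the unmeasured IP position (GeV/c)."""
    return abs(p / beta_ip * (math.cos(dpsi) / math.sin(dpsi)) * x_ip)

p, beta_ip = 275.0, 1.0    # GeV/c, m (illustrative)
dpsi = math.radians(85.0)  # phase advance close to pi/2
theta = 25e-3              # total crossing angle (rad), inferred from Fig. 46

# Beam-spot smear x_IP = z * theta / 2, with and without the ~1 cm
# z-resolution from the ~35 ps timing measurement:
for z, label in [(0.10, "no timing (RMS bunch length)"),
                 (0.01, "with ~35 ps timing")]:
    print(f"{label}: dp_T ≈ {dpt(p, beta_ip, dpsi, z * theta / 2) * 1e3:.0f} MeV/c")
```

With these numbers the crabbing smear alone exceeds the 20 MeV requirement, while the timing measurement brings it well below it; this is only a sketch of the scaling, not a design calculation.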
Figure 47: Gap in the electron rapidity coverage due to the crossing angle and
the ion beam pipe. The blue and red circles represent the ion and electron
beam pipes at the EM calorimeter location. The black dashed circle outlines
the solid angle without full azimuthal detector acceptance.
Another issue with the size of the crossing angle is that it contributes to
the gap in the electron rapidity coverage in the rear direction as illustrated
in Figure 47. There is no full azimuthal coverage within an angle defined by
the crossing angle and the size of the ion beam pipe. Assuming 5 cm for the
radius of the ion beam pipe at a 2.5 m distance in the rear direction from the
IP, the total polar angle of the gap in the rapidity coverage is about 20
mrad$+\theta_{cr}$. There is also a subtle point worth mentioning regarding
the impact of the crabbing on the RP resolution, and the advantage of
measuring both the vertex $z$-coordinate and the time coordinate. The
$z$-coordinate will be measured by the MAPS Si vertex tracker. However, a
measured vertex position of, e.g., $z=+5$ cm does not determine whether the
collision happened at the leading edges of the two bunches (with the mean $x$
displaced in the negative direction) or at their trailing edges (with the mean
$x$ displaced in the positive direction). This is where the time measurement
comes in: it determines where in the longitudinal profile of the crabbed bunch
the event happened.
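The acceptance-gap geometry can be checked with a short calculation. Only the 5 cm pipe radius and 2.5 m distance come from the text; the 35 mrad crossing angle is an assumed placeholder value:

```python
import math

pipe_r, dist = 0.05, 2.5  # ion beam-pipe radius and distance from IP (m)
theta_cr = 35e-3          # assumed crossing angle (rad), not from the text

pipe_angle = math.atan(pipe_r / dist)  # half-angle subtended by the pipe
gap = pipe_angle + theta_cr            # total polar angle of the gap
eta = -math.log(math.tan(gap / 2.0))   # pseudorapidity of the gap edge
print(f"pipe angle ≈ {pipe_angle * 1e3:.0f} mrad, "
      f"gap ≈ {gap * 1e3:.0f} mrad (|eta| ≈ {eta:.1f})")
```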
### X.5 Operations with Two IRs
During the EIC project period, the second IR (IR8) will have no detector
and no capability of being tuned for collisions. All operations will focus on
beam parameter, luminosity, and polarization optimization for the single IR
and detector that are part of the project scope.
Later operation of the EIC with two IRs involves multiple scenarios, each
with beam dynamics and design constraints that trade off available
luminosity, operations time, and mode switching. The beam-beam force is the
local nonlinear electromagnetic force colliding beams exert on each other;
this force creates a nonlinear beam-beam tune shift that is a known limitation
of many collider operations. This beam-beam tune shift is already optimized in
the single-IR EIC design. Thus the two IRs cannot both operate simultaneously
with the full parameters necessary for maximum luminosity, as this would
exceed the acceptable beam-beam tune shift limit. Adding an IR therefore
cannot increase the net luminosity available to the experiments under
optimized collider conditions, where the beam-beam tune shift limits the
integrated luminosity.
There are two alternative modes of EIC operation with two IRs: EIC luminosity can
be maximized separately for each detector in dedicated runs where only one IR
is tuned for collisions; or EIC luminosity can be shared and optimized as much
as possible between the two IRs in runs where both IRs are set up to share
total facility luminosity.
The separate luminosity scenario is technically straightforward. The non-
luminosity IR would be detuned to reduce chromatic effects, and beams would be
steered to avoid collisions at that IR. For each run, the overall facility
would then be optimized to maximize the operational parameters needed by the
science program for the given run time at the operating IR.
The shared luminosity scenario is technically more complicated. Section 4.6.4
of the EIC CDR EIC (2021) includes a section titled “Beam-beam Effects with
Two Experiments” (pages 431–3) that describes one possibility for luminosity
sharing. This involves design choices in the facility, and placement of the
second IR and experiment in IR8, to enable an operating configuration that
collides half the bunches at each of IR6 and IR8. Each individual bunch
experiences only one collision per turn, so the total beam-beam tune shift
limit for each bunch is respected. This CDR section also indicates that long-
range beam-beam effects (present when beam timing is adjusted to share
luminosity) may further limit the total luminosity available at both IRs.
The shared luminosity scenario may have other beam dynamics limitations (such
as limitations of global chromatic correction) that would further limit the
total available combined luminosity to both experiments. These beam dynamics
considerations are being studied in the context of EIC second IR design and
overall EIC lattice design optimization. Figure 45 shows this best-case
scenario as the “fair-share” curves, representing a 50% sharing of total
luminosity between the two IRs.
## XI Acknowledgments
The PIs would like to thank the Center for Frontiers in Nuclear Science
(CFNS) and the Center for Nuclear Femtography (CNF) for their support of this
initiative and the dedicated discussions they devoted to its realization.
Special thanks go to R. McKeown for the continuous support he gave to the
initiative of developing this White Paper.
This work is supported in part by the U.S. Department of Energy, Office of
Science, Office of Nuclear Physics, under contract numbers DE-AC02-05CH11231,
DE-FG02-89ER40531, DE-SC0019230, DE-AC05-06OR23177. A. Afanasev is supported
by National Science Foundation under Grant No. PHY-2111063. J. Bernauer is
supported by National Science Foundation under Grant No. PHY-2012114. A.
Courtoy is supported by UNAM Grant No. DGAPA-PAPIIT IN111222 and CONACyT
Ciencia de Frontera 2019 No. 51244 (FORDECYT-PRONACES). H.-W. Lin is partly
supported by the US National Science Foundation under grant PHY 1653405
“CAREER: Constraining Parton Distribution Functions for New-Physics Searches”
and PHY 2209424, and by the Research Corporation for Science Advancement
through the Cottrell Scholar Award. K. Kumerički is supported by Croatian
Science Foundation project IP-2019-04-9709. Y. Oh was supported by the
National Research Foundation under grants No. NRF-2020R1A2C1007597 and No.
NRF-2018R1A6A1A06024970 (Basic Science Research Program). D. Glazier is
supported by the UK Science and Technology Facilities Council under grants
ST/P004458/1 and ST/V00106X/1. F. Ringer is supported by the Simons Foundation
815892, NSF 1915093. A. Signori acknowledges support from the European
Commission through the Marie Skłodowska-Curie Action SQuHadron (grant
agreement ID: 795475). D. Winney is supported by National Natural Science
Foundation of China Grant No. 12035007 and the NSFC and the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) through the funds
provided to the Sino-German Collaborative Research Center TRR110 “Symmetries
and the Emergence of Structure in QCD” (NSFC Grant No. 12070131001, DFG
Project-ID 196253076-TRR 110). A. Vladimirov is supported by the Deutsche
Forschungsgemeinschaft (DFG) through the Research Unit FOR 2926, “Next
Generation pQCD for Hadron Structure: Preparing for the EIC”, project number
430824754. The work of Kirill M. Semenov-Tian-Shansky is supported by the
Foundation for the Advancement of Theoretical Physics and Mathematics “BASIS”.
Krzysztof Cichy acknowledges support by the National Science Centre (Poland)
grant SONATA BIS no. 2016/22/E/ST2/00013. A. Vladimirov is funded by the
Atracción de Talento Investigador program of the Comunidad de Madrid (Spain)
No. 2020-T1/TIC-20204. This work was partially supported by DFG FOR 2926 “Next
Generation pQCD for Hadron Structure: Preparing for the EIC”, project number
430824754. C.R. Ji is supported by DOE Contract No. DE-FG02-03ER41260. F.-K.
Guo is supported by the Chinese Academy of Sciences under Grant No.
XDB34030000, and by the National Natural Science Foundation of China under
Grants No. 12125507, No. 11835015, No. 12047503, No. 11961141012 and No.
12070131001. Xiaohui Liu acknowledges support by the National Natural Science
Foundation of China under Grant No. 12175016.
## References
* IR2 (9794) (https://indico.bnl.gov/event/9794).
* IR2 (0677) (https://indico.bnl.gov/event/10677).
* PSQ (1669) (https://indico.bnl.gov/event/11669).
* Rutherford (1911) E. Rutherford, Phil. Mag. Ser. 6 21, 669 (1911).
* Chadwick (1932) J. Chadwick, Nature 129, 312 (1932).
* Accardi _et al._ (2016) A. Accardi _et al._ , Eur. Phys. J. A 52, 268 (2016), arXiv:1212.1701 [nucl-ex] .
* Pro (2020) _Proceedings, Probing Nucleons and Nuclei in High Energy Collisions_ (WSP, 2020) arXiv:2002.12333 [hep-ph] .
* Abdul Khalek _et al._ (2021) R. Abdul Khalek _et al._ , arXiv:2103.05419 (2021).
* EIC (2021) “Electron-Ion Collider Conceptual Design Report,” https://www.bnl.gov/eic/science.php (2021).
* Accardi _et al._ (2021) A. Accardi _et al._ , Eur. Phys. J. A 57, 261 (2021), arXiv:2007.15081 [nucl-ex] .
* Grewal _et al._ (2020) M. Grewal, Z.-B. Kang, J.-W. Qiu, and A. Signori, Phys. Rev. D 101, 114023 (2020), arXiv:2003.07453 [hep-ph] .
* Müller _et al._ (1994) D. Müller, D. Robaschik, B. Geyer, F. M. Dittes, and J. Hořejši, Fortsch. Phys. 42, 101 (1994), arXiv:hep-ph/9812448 .
* Ji (1997a) X.-D. Ji, Phys. Rev. Lett. 78, 610 (1997a), arXiv:hep-ph/9603249 .
* Radyushkin (1997) A. V. Radyushkin, Phys. Rev. D 56, 5524 (1997), arXiv:hep-ph/9704207 .
* Belitsky and Mueller (2010) A. V. Belitsky and D. Mueller, Phys. Rev. D 82, 074010 (2010), arXiv:1005.5209 [hep-ph] .
* Kumericki _et al._ (2011) K. Kumericki, D. Mueller, and A. Schafer, JHEP 07, 073 (2011), arXiv:1106.2808 [hep-ph] .
* Guidal _et al._ (2013) M. Guidal, H. Moutarde, and M. Vanderhaeghen, Rept. Prog. Phys. 76, 066202 (2013), arXiv:1303.6600 [hep-ph] .
* Kumericki _et al._ (2016) K. Kumericki, S. Liuti, and H. Moutarde, arXiv:1602.02763 (2016).
* Moutarde _et al._ (2019) H. Moutarde, P. Sznajder, and J. Wagner, Eur. Phys. J. C 79, 614 (2019), arXiv:1905.02089 [hep-ph] .
* Ji and Bakker (2013) C.-R. Ji and B. L. G. Bakker, Int. J. Mod. Phys. E 22, 1330002 (2013).
* Diehl (2003) M. Diehl, Phys. Rept. 388, 41 (2003), arXiv:hep-ph/0307382 [hep-ph] .
* Belitsky and Radyushkin (2005) A. Belitsky and A. Radyushkin, Phys.Rept. 418, 1 (2005), arXiv:hep-ph/0504030 [hep-ph] .
* Goeke _et al._ (2001) K. Goeke, M. V. Polyakov, and M. Vanderhaeghen, Prog. Part. Nucl. Phys. 47, 401 (2001), arXiv:hep-ph/0106012 .
* Ji (2004) X. Ji, Ann. Rev. Nucl. Part. Sci. 54, 413 (2004).
* Ji _et al._ (2021a) X. Ji, F. Yuan, and Y. Zhao, Nature Rev. Phys. 3, 27 (2021a), arXiv:2009.01291 [hep-ph] .
* Pobylitsa (2002) P. V. Pobylitsa, Phys. Rev. D 66, 094002 (2002), arXiv:hep-ph/0204337 .
* Pire _et al._ (1999) B. Pire, J. Soffer, and O. Teryaev, Eur. Phys. J. C 8, 103 (1999), arXiv:hep-ph/9804284 .
* Teryaev (2005) O. V. Teryaev, in _11th International Conference on Elastic and Diffractive Scattering: Towards High Energy Frontiers: The 20th Anniversary of the Blois Workshops, 17th Rencontre de Blois_ (2005) arXiv:hep-ph/0510031 .
* Anikin and Teryaev (2007) I. V. Anikin and O. V. Teryaev, Phys. Rev. D 76, 056007 (2007), arXiv:0704.2185 [hep-ph] .
* Diehl and Ivanov (2007) M. Diehl and D. Y. Ivanov, Eur. Phys. J. C 52, 919 (2007), arXiv:0707.0351 [hep-ph] .
* Goldstein and Liuti (2009) G. R. Goldstein and S. Liuti, Phys. Rev. D 80, 071501 (2009), arXiv:0905.4753 [hep-ph] .
* Pasquini _et al._ (2014) B. Pasquini, M. V. Polyakov, and M. Vanderhaeghen, Phys. Lett. B 739, 133 (2014), arXiv:1407.5960 [hep-ph] .
* Kriesten _et al._ (2021) B. Kriesten, P. Velie, E. Yeats, F. Y. Lopez, and S. Liuti, arXiv:2101.01826 (2021).
* Ji (1998a) X.-D. Ji, J. Phys. G 24, 1181 (1998a), arXiv:hep-ph/9807358 .
* Guo _et al._ (2021a) Y. Guo, X. Ji, and K. Shiells, Nucl. Phys. B 969, 115440 (2021a), arXiv:2101.05243 [hep-ph] .
* Berger _et al._ (2002) E. R. Berger, M. Diehl, and B. Pire, Eur. Phys. J. C 23, 675 (2002), arXiv:hep-ph/0110062 .
* Pire _et al._ (2011) B. Pire, L. Szymanowski, and J. Wagner, Phys. Rev. D 83, 034009 (2011), arXiv:1101.0555 [hep-ph] .
* Mueller _et al._ (2012) D. Mueller, B. Pire, L. Szymanowski, and J. Wagner, Phys. Rev. D 86, 031502 (2012), arXiv:1203.4392 [hep-ph] .
* Qiu and Yu (2022) J.-W. Qiu and Z. Yu, JHEP 08, 103 (2022), arXiv:2205.07846 [hep-ph] .
* Pedrak _et al._ (2020) A. Pedrak, B. Pire, L. Szymanowski, and J. Wagner, Phys. Rev. D 101, 114027 (2020), arXiv:2003.03263 [hep-ph] .
* Grocholski _et al._ (2022) O. Grocholski, B. Pire, P. Sznajder, L. Szymanowski, and J. Wagner, Phys. Rev. D 105, 094025 (2022), arXiv:2204.00396 [hep-ph] .
* Ivanov _et al._ (2002) D. Y. Ivanov, B. Pire, L. Szymanowski, and O. V. Teryaev, Phys. Lett. B 550, 65 (2002), arXiv:hep-ph/0209300 .
* Boussarie _et al._ (2017) R. Boussarie, B. Pire, L. Szymanowski, and S. Wallon, JHEP 02, 054 (2017), [Erratum: JHEP 10, 029 (2018)], arXiv:1609.03830 [hep-ph] .
* Duplančić _et al._ (2018) G. Duplančić, K. Passek-Kumerički, B. Pire, L. Szymanowski, and S. Wallon, JHEP 11, 179 (2018), arXiv:1809.08104 [hep-ph] .
* Pire _et al._ (2021a) B. Pire, L. Szymanowski, and J. Wagner, Phys. Rev. D 104, 094002 (2021a), arXiv:2104.04944 [hep-ph] .
* Braun _et al._ (2014) V. M. Braun, A. N. Manashov, D. Müller, and B. M. Pirnay, Phys. Rev. D 89, 074022 (2014), arXiv:1401.7621 [hep-ph] .
* Kriesten _et al._ (2020) B. Kriesten, S. Liuti, L. Calero-Diaz, D. Keller, A. Meyer, G. R. Goldstein, and J. Osvaldo Gonzalez-Hernandez, Phys. Rev. D 101, 054021 (2020), arXiv:1903.05742 [hep-ph] .
* Guo _et al._ (2021b) Y. Guo, X. Ji, and K. Shiells, JHEP 12, 103 (2021b), arXiv:2109.10373 [hep-ph] .
* Lin _et al._ (2018a) H.-W. Lin _et al._ , Prog. Part. Nucl. Phys. 100, 107 (2018a), arXiv:1711.07916 [hep-ph] .
* Constantinou _et al._ (2021) M. Constantinou _et al._ , Prog. Part. Nucl. Phys. 121, 103908 (2021), arXiv:2006.08636 [hep-ph] .
* Čuić _et al._ (2020) M. Čuić, K. Kumerički, and A. Schäfer, Phys. Rev. Lett. 125, 232005 (2020), arXiv:2007.00029 [hep-ph] .
* Kumerički _et al._ (2014) K. Kumerički, D. Müller, and M. Murray, Phys. Part. Nucl. 45, 723 (2014), arXiv:1301.1230 [hep-ph] .
* Belitsky and Mueller (2003) A. V. Belitsky and D. Mueller, Phys. Rev. Lett. 90, 022001 (2003), arXiv:hep-ph/0210313 .
* Moutarde _et al._ (2018) H. Moutarde, P. Sznajder, and J. Wagner, Eur. Phys. J. C 78, 890 (2018), arXiv:1807.07620 [hep-ph] .
* Dutrieux _et al._ (2021a) H. Dutrieux, O. Grocholski, H. Moutarde, and P. Sznajder, arXiv:2112.10528 (2021a).
* Dutrieux _et al._ (2021b) H. Dutrieux, C. Lorcé, H. Moutarde, P. Sznajder, A. Trawiński, and J. Wagner, Eur. Phys. J. C 81, 300 (2021b), arXiv:2101.03855 [hep-ph] .
* Grigsby _et al._ (2021) J. Grigsby, B. Kriesten, J. Hoskins, S. Liuti, P. Alonzi, and M. Burkardt, Phys. Rev. D 104, 016001 (2021), arXiv:2012.04801 [hep-ph] .
* Kobzarev and Okun (1962) I. Y. Kobzarev and L. B. Okun, Zh. Eksp. Teor. Fiz. 43, 1904 (1962).
* Pagels (1966) H. Pagels, Phys. Rev. 144, 1250 (1966).
* Ji (1997b) X.-D. Ji, Phys. Rev. D 55, 7114 (1997b), arXiv:hep-ph/9609381 .
* Polyakov (2003) M. V. Polyakov, Phys. Lett. B 555, 57 (2003), arXiv:hep-ph/0210165 .
* Burkert _et al._ (2018) V. D. Burkert, L. Elouadrhiri, and F. X. Girod, Nature 557, 396 (2018).
* Girod _et al._ (2008) F. X. Girod _et al._ (CLAS), Phys. Rev. Lett. 100, 162002 (2008), arXiv:0711.4805 [hep-ex] .
* Jo _et al._ (2015) H. S. Jo _et al._ (CLAS), Phys. Rev. Lett. 115, 212003 (2015), arXiv:1504.02009 [hep-ex] .
* Kivel _et al._ (2001) N. Kivel, M. V. Polyakov, and M. Vanderhaeghen, Phys. Rev. D 63, 114014 (2001), arXiv:hep-ph/0012136 .
* Polyakov and Schweitzer (2018a) M. V. Polyakov and P. Schweitzer, Int. J. Mod. Phys. A 33, 1830025 (2018a), arXiv:1805.06596 [hep-ph] .
* Gayoso _et al._ (2021) C. A. Gayoso _et al._ , Eur. Phys. J. A 57, 342 (2021), arXiv:2107.06748 [hep-ph] .
* Pire _et al._ (2021b) B. Pire, K. Semenov-Tian-Shansky, and L. Szymanowski, Phys. Rept. 940, 1 (2021b), arXiv:2103.01079 [hep-ph] .
* Lansberg _et al._ (2012) J. P. Lansberg, B. Pire, K. Semenov-Tian-Shansky, and L. Szymanowski, Phys. Rev. D 85, 054021 (2012), arXiv:1112.3570 [hep-ph] .
* Pire _et al._ (2022) B. Pire, K. M. Semenov-Tian-Shansky, A. A. Shaikhutdinova, and L. Szymanowski, Eur. Phys. J. C 82, 656 (2022), arXiv:2201.12853 [hep-ph] .
* Park _et al._ (2018) K. Park _et al._ (CLAS), Phys. Lett. B 780, 340 (2018), arXiv:1711.08486 [nucl-ex] .
* Diehl _et al._ (2020) S. Diehl _et al._ (CLAS), Phys. Rev. Lett. 125, 182001 (2020), arXiv:2007.15677 [nucl-ex] .
* Li _et al._ (2019) W. B. Li _et al._ (Jefferson Lab F$\pi$), Phys. Rev. Lett. 123, 182501 (2019), arXiv:1910.00464 [nucl-ex] .
* Li _et al._ (2020) W. B. Li _et al._ , (2020), arXiv:2008.10768 [nucl-ex] .
* Defurne _et al._ (2017) M. Defurne _et al._ , Nature Commun. 8, 1408 (2017), arXiv:1703.09442 [hep-ex] .
* Kriesten and Liuti (2020) B. Kriesten and S. Liuti, arXiv:2011.04484 (2020).
* Georges _et al._ (2022) F. Georges _et al._ (Jefferson Lab Hall A), Phys. Rev. Lett. 128, 252002 (2022), arXiv:2201.03714 [hep-ph] .
* Chatagnon _et al._ (2021) P. Chatagnon _et al._ (CLAS), Phys. Rev. Lett. 127, 262501 (2021), arXiv:2108.11746 [hep-ex] .
* Voutier (2014) E. Voutier, Nucl. Theor. 33, 142 (2014), arXiv:1412.1249 [nucl-ex] .
* Burkert _et al._ (2021a) V. Burkert, L. Elouadrhiri, F.-X. Girod, S. Niccolai, E. Voutier, _et al._ (CLAS), Eur. Phys. J. A 57, 186 (2021a), arXiv:2103.12651 [nucl-ex] .
* Abbott _et al._ (2016) D. Abbott _et al._ (PEPPo), Phys. Rev. Lett. 116, 214801 (2016), arXiv:1606.08877 [physics.acc-ph] .
* report (2018) National Academies of Sciences, Engineering, and Medicine, _An Assessment of U.S.-Based Electron-Ion Collider Science_ (The National Academies Press, 2018).
* Ashman _et al._ (1989) J. Ashman _et al._ (European Muon), Nucl. Phys. B 328, 1 (1989).
* Ji (2021) X. Ji, Front. Phys. (Beijing) 16, 64601 (2021), arXiv:2102.07830 [hep-ph] .
* Roberts _et al._ (2021) C. D. Roberts, D. G. Richards, T. Horn, and L. Chang, Prog. Part. Nucl. Phys. 120, 103883 (2021), arXiv:2102.01765 [hep-ph] .
* Arrington _et al._ (2021) J. Arrington _et al._ , J. Phys. G 48, 075106 (2021), arXiv:2102.11788 [nucl-ex] .
* Anderle _et al._ (2021) D. P. Anderle _et al._ , Front. Phys. (Beijing) 16, 64701 (2021), arXiv:2102.09222 [nucl-ex] .
* Mokeev and Carman (2022) V. I. Mokeev and D. S. Carman (CLAS), Few Body Syst. 63, 59 (2022), arXiv:2202.04180 [nucl-ex] .
* Segovia _et al._ (2015) J. Segovia, B. El-Bennich, E. Rojas, I. C. Cloet, C. D. Roberts, S.-S. Xu, and H.-S. Zong, Phys. Rev. Lett. 115, 171801 (2015), arXiv:1504.04386 [nucl-th] .
* Segovia _et al._ (2014) J. Segovia, I. C. Cloet, C. D. Roberts, and S. M. Schmidt, Few Body Syst. 55, 1185 (2014), arXiv:1408.2919 [nucl-th] .
* Roberts (2020) C. D. Roberts, Symmetry 12, 1468 (2020), arXiv:2009.04011 [hep-ph] .
* Ji (1995) X.-D. Ji, Phys. Rev. Lett. 74, 1071 (1995), arXiv:hep-ph/9410274 .
* Guo _et al._ (2021c) Y. Guo, X. Ji, and Y. Liu, Phys. Rev. D 103, 096010 (2021c), arXiv:2103.11506 [hep-ph] .
* Particle Data Group (2020) P. A. Zyla _et al._ (Particle Data Group), Prog. Theor. Exp. Phys. 2020, 083C01 (2020).
* Georgi and Glashow (1974) H. Georgi and S. L. Glashow, Phys. Rev. Lett. 32, 438 (1974).
* Manohar and Georgi (1984) A. Manohar and H. Georgi, Nucl. Phys. B 234, 189 (1984).
* Chodos _et al._ (1974) A. Chodos, R. L. Jaffe, K. Johnson, C. B. Thorn, and V. F. Weisskopf, Phys. Rev. D 9, 3471 (1974).
* Schäfer and Shuryak (1998) T. Schäfer and E. V. Shuryak, Rev. Mod. Phys. 70, 323 (1998), arXiv:hep-ph/9610451 .
* Ji _et al._ (2021b) X. Ji, Y. Liu, and A. Schäfer, Nucl. Phys. B 971, 115537 (2021b), arXiv:2105.03974 [hep-ph] .
* Gasser and Leutwyler (1982) J. Gasser and H. Leutwyler, Phys. Rept. 87, 77 (1982).
* Kharzeev (1996) D. Kharzeev, Proc. Int. Sch. Phys. Fermi 130, 105 (1996), arXiv:nucl-th/9601029 .
* Gasser _et al._ (1991) J. Gasser, H. Leutwyler, and M. E. Sainio, Phys. Lett. B 253, 252 (1991).
* Gasser and Leutwyler (1985) J. Gasser and H. Leutwyler, Nucl. Phys. B 250, 465 (1985).
* Yang _et al._ (2018) Y.-B. Yang, J. Liang, Y.-J. Bi, Y. Chen, T. Draper, K.-F. Liu, and Z. Liu, Phys. Rev. Lett. 121, 212001 (2018), arXiv:1808.08677 [hep-lat] .
* Alexandrou _et al._ (2017a) C. Alexandrou, M. Constantinou, K. Hadjiyiannakou, K. Jansen, C. Kallidonis, G. Koutsou, A. Vaquero Avilés-Casco, and C. Wiese, Phys. Rev. Lett. 119, 142002 (2017a), arXiv:1706.02973 [hep-lat] .
* Wang _et al._ (2020) R. Wang, J. Evslin, and X. Chen, Eur. Phys. J. C 80, 507 (2020), arXiv:1912.12040 [hep-ph] .
* Hatta and Yang (2018) Y. Hatta and D.-L. Yang, Phys. Rev. D 98, 074003 (2018), arXiv:1808.02163 [hep-ph] .
* Mamo and Zahed (2022) K. A. Mamo and I. Zahed, arXiv:2204.08857 (2022).
* Boussarie and Hatta (2020) R. Boussarie and Y. Hatta, Phys. Rev. D 101, 114004 (2020), arXiv:2004.12715 [hep-ph] .
* Hagler _et al._ (2008) P. Hagler _et al._ (LHPC), Phys. Rev. D 77, 094502 (2008), arXiv:0705.4295 [hep-lat] .
* Shanahan and Detmold (2019a) P. E. Shanahan and W. Detmold, Phys. Rev. D 99, 014511 (2019a), arXiv:1810.04626 [hep-lat] .
* He _et al._ (2021) F. He, P. Sun, and Y.-B. Yang ($\chi$QCD), Phys. Rev. D 104, 074507 (2021), arXiv:2101.04942 [hep-lat] .
* Freese and Miller (2022) A. Freese and G. A. Miller, Phys. Rev. D 105, 014003 (2022), arXiv:2108.03301 [hep-ph] .
* Kharzeev (2021) D. E. Kharzeev, Phys. Rev. D 104, 054015 (2021), arXiv:2102.00110 [hep-ph] .
* Mamo and Zahed (2021a) K. A. Mamo and I. Zahed, Phys. Rev. D 103, 094010 (2021a), arXiv:2103.03186 [hep-ph] .
* Alexandrou _et al._ (2020a) C. Alexandrou, S. Bacchio, M. Constantinou, J. Finkenrath, K. Hadjiyiannakou, K. Jansen, G. Koutsou, H. Panagopoulos, and G. Spanoudes, Phys. Rev. D 101, 094513 (2020a), arXiv:2003.08486 [hep-lat] .
* Yang _et al._ (2017) Y.-B. Yang, R. S. Sufian, A. Alexandru, T. Draper, M. J. Glatzmaier, K.-F. Liu, and Y. Zhao, Phys. Rev. Lett. 118, 102001 (2017), arXiv:1609.05937 [hep-ph] .
* Jaffe and Manohar (1990) R. L. Jaffe and A. Manohar, Nucl. Phys. B 337, 509 (1990).
* Hagler and Schafer (1998) P. Hagler and A. Schafer, Phys. Lett. B 430, 179 (1998), arXiv:hep-ph/9802362 .
* Hatta (2012) Y. Hatta, Phys. Lett. B 708, 186 (2012), arXiv:1111.3547 [hep-ph] .
* Ji _et al._ (2013) X. Ji, X. Xiong, and F. Yuan, Phys. Rev. D 88, 014041 (2013), arXiv:1207.5221 [hep-ph] .
* Hatta and Yoshida (2012) Y. Hatta and S. Yoshida, JHEP 10, 080 (2012), arXiv:1207.5332 [hep-ph] .
* Guo _et al._ (2022) Y. Guo, X. Ji, B. Kriesten, and K. Shiells, arXiv:2202.11114 (2022).
* Bhattacharya _et al._ (2022) S. Bhattacharya, R. Boussarie, and Y. Hatta, Phys. Rev. Lett. 128, 182002 (2022), arXiv:2201.08709 [hep-ph] .
* Ji (1998b) X.-D. Ji, Phys. Rev. D 58, 056003 (1998b), arXiv:hep-ph/9710290 .
* Wang _et al._ (2021) G. Wang, Y.-B. Yang, J. Liang, T. Draper, and K.-F. Liu (chiQCD), arXiv:2111.09329 (2021).
* Ji _et al._ (2012) X. Ji, X. Xiong, and F. Yuan, Phys. Lett. B 717, 214 (2012), arXiv:1209.3246 [hep-ph] .
* Ji and Yuan (2020) X. Ji and F. Yuan, Phys. Lett. B 810, 135786 (2020), arXiv:2008.04349 [hep-ph] .
* Lorcé _et al._ (2019) C. Lorcé, H. Moutarde, and A. P. Trawiński, Eur. Phys. J. C 79, 89 (2019), arXiv:1810.09837 [hep-ph] .
* Freese and Miller (2021a) A. Freese and G. A. Miller, Phys. Rev. D 103, 094023 (2021a), arXiv:2102.01683 [hep-ph] .
* Panteleeva and Polyakov (2021) J. Y. Panteleeva and M. V. Polyakov, Phys. Rev. D 104, 014008 (2021), arXiv:2102.10902 [hep-ph] .
* Burkardt (2000) M. Burkardt, Phys. Rev. D 62, 071503 (2000), [Erratum: Phys.Rev.D 66, 119903 (2002)], arXiv:hep-ph/0005108 .
* Lorcé (2020) C. Lorcé, Phys. Rev. Lett. 125, 232002 (2020), arXiv:2007.05318 [hep-ph] .
* Goeke _et al._ (2007) K. Goeke, J. Grabis, J. Ossmann, M. V. Polyakov, P. Schweitzer, A. Silva, and D. Urbano, Phys. Rev. D 75, 094021 (2007), arXiv:hep-ph/0702030 .
* Lorcé _et al._ (2022) C. Lorcé, P. Schweitzer, and K. Tezgin, Phys. Rev. D 106, 014012 (2022), arXiv:2202.01192 [hep-ph] .
* Polyakov and Weiss (1999) M. V. Polyakov and C. Weiss, Phys. Rev. D 60, 114017 (1999), arXiv:hep-ph/9902451 .
* Hudson and Schweitzer (2018) J. Hudson and P. Schweitzer, Phys. Rev. D 97, 056003 (2018), arXiv:1712.05317 [hep-ph] .
* Polyakov and Son (2018) M. V. Polyakov and H.-D. Son, JHEP 09, 156 (2018), arXiv:1808.00155 [hep-ph] .
* von Laue (1911) M. von Laue, Annalen Phys. 35, 524 (1911).
* Perevalova _et al._ (2016) I. A. Perevalova, M. V. Polyakov, and P. Schweitzer, Phys. Rev. D 94, 054024 (2016), arXiv:1607.07008 [hep-ph] .
* Prakash _et al._ (2001) M. Prakash, J. M. Lattimer, J. A. Pons, A. W. Steiner, and S. Reddy, Lect. Notes Phys. 578, 364 (2001), arXiv:astro-ph/0012136 .
* Neubelt _et al._ (2020) M. J. Neubelt, A. Sampino, J. Hudson, K. Tezgin, and P. Schweitzer, Phys. Rev. D 101, 034013 (2020), arXiv:1911.08906 [hep-ph] .
* Polyakov and Schweitzer (2018b) M. V. Polyakov and P. Schweitzer, (2018b), arXiv:1801.05858 [hep-ph] .
* Cebulla _et al._ (2007) C. Cebulla, K. Goeke, J. Ossmann, and P. Schweitzer, Nucl. Phys. A 794, 87 (2007), arXiv:hep-ph/0703025 .
* Kim _et al._ (2012) H.-C. Kim, P. Schweitzer, and U. Yakhshiev, Phys. Lett. B 718, 625 (2012), arXiv:1205.5228 [hep-ph] .
* Jung _et al._ (2014) J.-H. Jung, U. Yakhshiev, H.-C. Kim, and P. Schweitzer, Phys. Rev. D 89, 114021 (2014), arXiv:1402.0161 [hep-ph] .
* Alharazin _et al._ (2020) H. Alharazin, D. Djukanovic, J. Gegelia, and M. V. Polyakov, Phys. Rev. D 102, 076023 (2020), arXiv:2006.05890 [hep-ph] .
* Gegelia and Polyakov (2021) J. Gegelia and M. V. Polyakov, Phys. Lett. B 820, 136572 (2021), arXiv:2104.13954 [hep-ph] .
* Varma and Schweitzer (2020) M. Varma and P. Schweitzer, Phys. Rev. D 102, 014047 (2020), arXiv:2006.06602 [hep-ph] .
* Kubis and Meissner (2000) B. Kubis and U.-G. Meissner, Nucl. Phys. A 671, 332 (2000), [Erratum: Nucl.Phys.A 692, 647–648 (2001)], arXiv:hep-ph/9908261 .
* Metz _et al._ (2021) A. Metz, B. Pasquini, and S. Rodini, Phys. Lett. B 820, 136501 (2021), arXiv:2104.04207 [hep-ph] .
* Ji and Liu (2021) X. Ji and Y. Liu, arXiv:2110.14781 (2021).
* Ji and Liu (2022) X. Ji and Y. Liu, (2022), arXiv:2208.05029 [hep-ph] .
* Hudson and Schweitzer (2017) J. Hudson and P. Schweitzer, Phys. Rev. D 96, 114013 (2017), arXiv:1712.05316 [hep-ph] .
* Freese and Miller (2021b) A. Freese and G. A. Miller, Phys. Rev. D 104, 014024 (2021b), arXiv:2104.03213 [hep-ph] .
* Burkert _et al._ (2021b) V. D. Burkert, L. Elouadrhiri, and F. X. Girod, arXiv:2104.02031 (2021b).
* Anselmino _et al._ (2020) M. Anselmino, A. Mukherjee, and A. Vossen, Prog. Part. Nucl. Phys. 114, 103806 (2020), arXiv:2001.05415 [hep-ph] .
* Bacchetta _et al._ (2007) A. Bacchetta, M. Diehl, K. Goeke, A. Metz, P. J. Mulders, and M. Schlegel, JHEP 02, 093 (2007), arXiv:hep-ph/0611265 .
* Pecar and Vossen (2022) C. Pecar and A. Vossen, (2022), arXiv:2209.14489 [hep-ph] .
* Gao _et al._ (2022) A. Gao, J. K. L. Michel, I. W. Stewart, and Z. Sun, (2022), arXiv:2209.11211 [hep-ph] .
* Gliske _et al._ (2014) S. Gliske, A. Bacchetta, and M. Radici, Phys. Rev. D 90, 114027 (2014), [Erratum: Phys.Rev.D 91, 019902 (2015)], arXiv:1408.5721 [hep-ph] .
* Akushevich and Ilyichev (2019a) I. Akushevich and A. Ilyichev, Phys. Rev. D 100, 033005 (2019a), arXiv:1905.09232 [hep-ph] .
* Liu _et al._ (2021) T. Liu, W. Melnitchouk, J.-W. Qiu, and N. Sato, JHEP 11, 157 (2021), arXiv:2108.13371 [hep-ph] .
* Chen _et al._ (2014) J. P. Chen, H. Gao, T. K. Hemmick, Z. E. Meziani, and P. A. Souder (SoLID), (2014), arXiv:1409.7741 [nucl-ex] .
* (165) SoLID (Solenoidal Large Intensity Device) Updated Preliminary Conceptual Design Report, available at https://solid.jlab.org/DocDB/0002/000282/001/solid-precdr-2019Nov.pdf.
* Efremov _et al._ (2003) A. V. Efremov, K. Goeke, and P. Schweitzer, Phys. Rev. D 67, 114014 (2003), arXiv:hep-ph/0208124 .
* Accardi _et al._ (2009) A. Accardi, A. Bacchetta, W. Melnitchouk, and M. Schlegel, JHEP 11, 093 (2009), arXiv:0907.2942 [hep-ph] .
* Sato _et al._ (2016) N. Sato, W. Melnitchouk, S. E. Kuhn, J. J. Ethier, and A. Accardi (Jefferson Lab Angular Momentum), Phys. Rev. D 93, 074005 (2016), arXiv:1601.07782 [hep-ph] .
* Aschenauer _et al._ (2016) E. C. Aschenauer, U. D’Alesio, and F. Murgia, Eur. Phys. J. A 52, 156 (2016), arXiv:1512.05379 [hep-ph] .
* Boglione and Prokudin (2016) M. Boglione and A. Prokudin, Eur. Phys. J. A 52, 154 (2016), arXiv:1511.06924 [hep-ph] .
* Burkardt (2013) M. Burkardt, Phys. Rev. D 88, 114502 (2013), arXiv:0810.3589 [hep-ph] .
* Efremov and Schweitzer (2003) A. V. Efremov and P. Schweitzer, JHEP 08, 006 (2003), arXiv:hep-ph/0212044 .
* Alarcon _et al._ (2012) J. M. Alarcon, J. Martin Camalich, and J. A. Oller, Phys. Rev. D 85, 051503 (2012), arXiv:1110.3797 [hep-ph] .
* Hoferichter _et al._ (2016) M. Hoferichter, J. Ruiz de Elvira, B. Kubis, and U.-G. Meißner, Phys. Lett. B 760, 74 (2016), arXiv:1602.07688 [hep-lat] .
* Bacchetta and Radici (2004) A. Bacchetta and M. Radici, Phys. Rev. D 69, 074026 (2004), arXiv:hep-ph/0311173 .
* Mirazita _et al._ (2021) M. Mirazita _et al._ (CLAS), Phys. Rev. Lett. 126, 062002 (2021), arXiv:2010.09544 [hep-ex] .
* Hayward _et al._ (2021) T. B. Hayward _et al._ , Phys. Rev. Lett. 126, 152501 (2021), arXiv:2101.04842 [hep-ex] .
* Egerer _et al._ (2022) C. Egerer _et al._ (HadStruc), (2022), arXiv:2207.08733 [hep-lat] .
* Hou _et al._ (2021) T.-J. Hou _et al._ , Phys. Rev. D 103, 014013 (2021), arXiv:1912.10053 [hep-ph] .
* Ball _et al._ (2017) R. D. Ball _et al._ (NNPDF), Eur. Phys. J. C 77, 663 (2017), arXiv:1706.00428 [hep-ph] .
* Moffat _et al._ (2021) E. Moffat, W. Melnitchouk, T. C. Rogers, and N. Sato (Jefferson Lab Angular Momentum (JAM)), Phys. Rev. D 104, 016015 (2021), arXiv:2101.04664 [hep-ph] .
* Pefkou _et al._ (2021) D. A. Pefkou, D. C. Hackett, and P. E. Shanahan, arXiv:2107.10368 (2021).
* Shanahan and Detmold (2019b) P. E. Shanahan and W. Detmold, Phys. Rev. Lett. 122, 072003 (2019b), arXiv:1810.07589 [nucl-th] .
* Luscher (1986a) M. Luscher, Commun. Math. Phys. 104, 177 (1986a).
* Luscher (1986b) M. Luscher, Commun. Math. Phys. 105, 153 (1986b).
* Luscher (1991) M. Luscher, Nucl. Phys. B 354, 531 (1991).
* Briceno _et al._ (2018) R. A. Briceno, J. J. Dudek, R. G. Edwards, and D. J. Wilson, Phys. Rev. D 97, 054513 (2018), arXiv:1708.06667 [hep-lat] .
* Woss _et al._ (2021) A. J. Woss, J. J. Dudek, R. G. Edwards, C. E. Thomas, and D. J. Wilson (Hadron Spectrum), Phys. Rev. D 103, 054502 (2021), arXiv:2009.10034 [hep-lat] .
* Briceño _et al._ (2016) R. A. Briceño, J. J. Dudek, R. G. Edwards, C. J. Shultz, C. E. Thomas, and D. J. Wilson, Phys. Rev. D 93, 114508 (2016), arXiv:1604.03530 [hep-ph] .
* Briceño _et al._ (2015) R. A. Briceño, M. T. Hansen, and A. Walker-Loud, Phys. Rev. D 91, 034501 (2015), arXiv:1406.5965 [hep-lat] .
* Briceño and Hansen (2015) R. A. Briceño and M. T. Hansen, Phys. Rev. D 92, 074509 (2015), arXiv:1502.04314 [hep-lat] .
* Briceño _et al._ (2021a) R. A. Briceño, J. J. Dudek, and L. Leskovec, Phys. Rev. D 104, 054509 (2021a), arXiv:2105.02017 [hep-lat] .
* Chen _et al._ (2019) T. Chen, Y. Chen, M. Gong, C. Liu, L. Liu, Y.-B. Liu, Z. Liu, J.-P. Ma, M. Werner, and J.-B. Zhang (CLQCD), Chin. Phys. C 43, 103103 (2019), arXiv:1907.03371 [hep-lat] .
* Prelovsek _et al._ (2021) S. Prelovsek, S. Collins, D. Mohler, M. Padmanath, and S. Piemonte, JHEP 06, 035 (2021), arXiv:2011.02542 [hep-lat] .
* Joó _et al._ (2019) B. Joó, C. Jung, N. H. Christ, W. Detmold, R. Edwards, M. Savage, and P. Shanahan (USQCD), Eur. Phys. J. A 55, 199 (2019), arXiv:1904.09725 [hep-lat] .
* NSA (2019) “Nuclear Physics and Quantum Information Science,” https://science.osti.gov/-/media/np/pdf/Reports/NSAC_QIS_Report.pdf (2019).
* Bañuls _et al._ (2020) M. C. Bañuls _et al._ , Eur. Phys. J. D 74, 165 (2020), arXiv:1911.00003 [quant-ph] .
* Briceño _et al._ (2021b) R. A. Briceño, J. V. Guerrero, M. T. Hansen, and A. M. Sturzu, Phys. Rev. D 103, 014506 (2021b), arXiv:2007.01155 [hep-lat] .
* EIC (6 16) “EICroot,” https://wiki.bnl.gov/eic/index.php/Eicroot (Accessed: 2020-06-16).
* Klein and Mäntysaari (2019) S. R. Klein and H. Mäntysaari, Nature Rev. Phys. 1, 662 (2019), arXiv:1910.10858 [hep-ex] .
* Chang, Wan and Aschenauer, Elke-Caroline and Baker, Mark D. and Jentsch, Alexander and Lee, Jeong-Hun and Tu, Zhoudunming and Yin, Zhongbao and Zheng, Liang (2021) Chang, Wan and Aschenauer, Elke-Caroline and Baker, Mark D. and Jentsch, Alexander and Lee, Jeong-Hun and Tu, Zhoudunming and Yin, Zhongbao and Zheng, Liang, arXiv:2108.01694 (2021).
* Klein (2014) S. R. Klein, Phys. Rev. ST Accel. Beams 17, 121003 (2014), arXiv:1409.5379 [physics.acc-ph] .
* Afanasev _et al._ (2017) A. Afanasev, P. Blunden, D. Hasell, and B. Raue, Prog. Part. Nucl. Phys. 95, 245 (2017), arXiv:1703.03874 [nucl-ex] .
* Mo and Tsai (1969) L. W. Mo and Y.-S. Tsai, Rev. Mod. Phys. 41, 205 (1969).
* Bardin and Shumeiko (1977) D. Y. Bardin and N. Shumeiko, Nuclear Physics B 127, 242 (1977).
* Yennie _et al._ (1961) D. Yennie, S. C. Frautschi, and H. Suura, Annals Phys. 13, 379 (1961).
* Kuraev and Fadin (1985) E. Kuraev and V. S. Fadin, Sov. J. Nucl. Phys. 41, 466 (1985).
* Kwiatkowski _et al._ (1992) A. Kwiatkowski, H. Spiesberger, and H. Mohring, Comput. Phys. Commun. 69, 155 (1992).
* Charchula _et al._ (1994) K. Charchula, G. Schuler, and H. Spiesberger, Comput. Phys. Commun. 81, 381 (1994).
* Maximon and Tjon (2000) L. C. Maximon and J. A. Tjon, Phys. Rev. C 62, 054320 (2000), arXiv:nucl-th/0002058 .
* Akushevich _et al._ (1999) I. Akushevich, H. Böttcher, and D. Ryckbosch, arXiv preprint hep-ph/9906408 (1999).
* Akushevich _et al._ (2012) I. Akushevich, O. Filoti, A. Ilyichev, and N. Shumeiko, Comput. Phys. Commun. 183, 1448 (2012), arXiv:1104.0039 [hep-ph] .
* Gramolin _et al._ (2014) A. Gramolin, V. Fadin, A. Feldman, R. Gerasimov, D. Nikolenko, I. Rachek, and D. Toporkov, J. Phys. G 41, 115001 (2014), arXiv:1401.2959 [nucl-ex] .
* Henderson _et al._ (2017) B. Henderson _et al._ (OLYMPUS), Phys. Rev. Lett. 118, 092501 (2017), arXiv:1611.04685 [nucl-ex] .
* Banerjee _et al._ (2020) P. Banerjee, T. Engel, A. Signer, and Y. Ulrich, SciPost Phys. 9, 027 (2020), arXiv:2007.01654 [hep-ph] .
* Mihovilovič _et al._ (2017) M. Mihovilovič _et al._ , Phys. Lett. B 771, 194 (2017), arXiv:1612.06707 [nucl-ex] .
* Buckley _et al._ (2011) A. Buckley _et al._ , Phys. Rept. 504, 145 (2011), arXiv:1101.2599 [hep-ph] .
* Liu _et al._ (2020) T. Liu, W. Melnitchouk, J.-W. Qiu, and N. Sato, arXiv:2008.02895 (2020).
* Hoeche _et al._ (2010) S. Hoeche, S. Schumann, and F. Siegert, Phys. Rev. D 81, 034026 (2010), arXiv:0912.3501 [hep-ph] .
* Akushevich and Ilyichev (2019b) I. Akushevich and A. Ilyichev, Phys. Rev. D 100, 033005 (2019b).
* Vanderhaeghen _et al._ (2000) M. Vanderhaeghen, J. M. Friedrich, D. Lhuillier, D. Marchand, L. Van Hoorebeke, and J. Van de Wiele, Phys. Rev. C 62, 025501 (2000).
* Akushevich and Ilyichev (2018) I. Akushevich and A. Ilyichev, Phys. Rev. D 98, 013005 (2018).
* Afanasev _et al._ (2002) A. Afanasev, I. Akushevich, V. Burkert, and K. Joo, Phys. Rev. D 66, 074004 (2002).
* Katich _et al._ (2014) J. Katich _et al._ , Phys. Rev. Lett. 113, 022502 (2014).
* Afanasev _et al._ (2013) A. Afanasev, A. Aleksejevs, and S. Barkanova, Phys. Rev. D 88, 053008 (2013).
* Cao and Zhou (2020) H.-Y. Cao and H.-Q. Zhou, Phys. Rev. C 101, 055201 (2020).
* Afanasev _et al._ (2020) A. Afanasev _et al._ , arXiv:2012.09970 (2020).
* United States House of Representatives Committee - Science, Space, and Technology (2020) United States House of Representatives Committee - Science, Space, and Technology, “National artificial intelligence initiative act of 2020,” https://www.congress.gov/116/crpt/hrpt617/CRPT-116hrpt617.pdf#page=1210 (2020).
* Carleo _et al._ (2019) G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto, and L. Zdeborová, Rev. Mod. Phys. 91, 045002 (2019).
* Deiana _et al._ (2021) A. M. Deiana _et al._ , arXiv:2110.13041 (2021).
* Bedaque _et al._ (2021) P. Bedaque _et al._ , Eur. Phys. J. A 57, 100 (2021).
* Boehnlein _et al._ (2021) A. Boehnlein _et al._ , arXiv:2112.02309 (2021).
* Almaeen _et al._ (2021) M. Almaeen, Y. Alanazi, N. Sato, W. Melnitchouk, M. P. Kuchera, and Y. Li, in _2021 International Joint Conference on Neural Networks (IJCNN)_ (2021) pp. 1–8.
* Cisbani _et al._ (2020) E. Cisbani _et al._ , JINST 15, P05009 (2020), arXiv:1911.05797 [physics.ins-det] .
* Fanelli (2022) C. Fanelli (2022) arXiv:2203.04530 [physics.ins-det] .
* Kumerički (2020) K. Kumerički, in _Probing Nucleons and Nuclei in High Energy Collisions: Dedicated to the Physics of the Electron Ion Collider_ (2020) pp. 25–29, arXiv:1910.04806 [hep-ph] .
* Abdul Khalek _et al._ (2019b) R. Abdul Khalek _et al._ (NNPDF), Eur. Phys. J. C 79, 931 (2019b), arXiv:1906.10698 [hep-ph] .
* Fujimoto _et al._ (2021) Y. Fujimoto, K. Fukushima, and K. Murase, JHEP 03, 273 (2021), arXiv:2101.08156 [nucl-th] .
* Fujimoto _et al._ (2020) Y. Fujimoto, K. Fukushima, and K. Murase, Phys. Rev. D 101, 054016 (2020), arXiv:1903.03400 [nucl-th] .
* Fujimoto _et al._ (2018) Y. Fujimoto, K. Fukushima, and K. Murase, Phys. Rev. D 98, 023019 (2018), arXiv:1711.06748 [nucl-th] .
* Soma _et al._ (2022) S. Soma, L. Wang, S. Shi, H. Stöcker, and K. Zhou, arXiv:2201.01756 (2022).
* Shi _et al._ (2022b) S. Shi, L. Wang, and K. Zhou, arXiv:2201.02564 (2022b).
* Shi _et al._ (2022c) S. Shi, K. Zhou, J. Zhao, S. Mukherjee, and P. Zhuang, Phys. Rev. D 105, 014017 (2022c), arXiv:2105.07862 [hep-ph] .
* Lasseri _et al._ (2020) R.-D. Lasseri, D. Regnier, J.-P. Ebran, and A. Penon, Phys. Rev. Lett. 124, 162502 (2020), arXiv:1910.04132 [nucl-th] .
* Scamps _et al._ (2021) G. Scamps, S. Goriely, E. Olsen, M. Bender, and W. Ryssens, Eur. Phys. J. A 57, 333 (2021), arXiv:2011.07904 [nucl-th] .
* Molchanov _et al._ (2022) O. M. Molchanov, K. D. Launey, A. Mercenne, G. H. Sargsyan, T. Dytrych, and J. P. Draayer, Phys. Rev. C 105, 034306 (2022), arXiv:2107.01498 [nucl-th] .
* Albergo _et al._ (2019) M. S. Albergo, G. Kanwar, and P. E. Shanahan, Phys. Rev. D 100, 034515 (2019), arXiv:1904.12072 [hep-lat] .
* Lai _et al._ (2022b) Y. S. Lai, D. Neill, M. Płoskoń, and F. Ringer, Phys. Lett. B 829, 137055 (2022b), arXiv:2012.06582 [hep-ph] .
* Gamage (2021) R. Gamage, https://indico.bnl.gov/event/11463/contributions/52053/ (3 Aug 2021).
|
[a,b]Rainer Sommer
# Log-enhanced discretization errors in integrated correlation functions
Leonardo Chimirri, Nikolai Husung
###### Abstract
Integrated time-slice correlation functions $G(t)$ with weights $K(t)$ appear,
e.g., in the moments method to determine $\alpha_{s}$ from heavy quark
correlators, in the muon g-2 determination or in the determination of smoothed
spectral functions.
For the (leading-order-)normalised moment $R_{4}$ of the pseudo-scalar
correlator we have non-perturbative results down to $a=10^{-2}$ fm and for
masses, $m$, of the order of the charm mass in the quenched approximation. A
significant bending of $R_{4}$ as a function of $a^{2}$ is observed at small
lattice spacings.
Starting from the Symanzik expansion of the integrand we derive the asymptotic
convergence of the integral at small lattice spacing in the free theory and
prove that the short distance part of the integral leads to $\log(a)$-enhanced
discretisation errors when $G(t)K(t)\stackrel{t\to 0}{\sim}t$ for small $t$. In the
interacting theory an unknown function $K(a\Lambda)$ appears.
For the $R_{4}$-case, we modify the observable to improve the short distance
behavior and demonstrate that it results in a very smooth continuum limit. The
strong coupling and the $\Lambda$-parameter can then be extracted. In general,
and in particular for $g-2$, the short distance part of the integral should be
determined by perturbation theory. The (dominating) rest can then be obtained
by the controlled continuum limit of the lattice computation.
## 1 Introduction
We consider a $\bf p=0$ (spatial momentum zero) correlator
$G(t,M,a)=a^{3}\sum_{\bf x}\langle
P^{\mathrm{RGI}}(x)\overline{P}^{\mathrm{RGI}}(0)\rangle=G(t,M,0)+\Delta
G(t,M,a)\,,$ (1)
of a renormalization group invariant (RGI) local field $P^{\mathrm{RGI}}$ of
dimension three with non-trivial quantum numbers such that the vacuum does not
contribute as intermediate state. The RGI mass of the theory (or the set of
masses) is denoted by $M$ and $\Delta O$ denotes the lattice artefact of an
observable $O$. Weighted integrals $\int G(t,M,0)\,K(t)\,\mathrm{d}t$, such
as moments, need a weight $K(t)\stackrel{t\to 0}{\sim}t^{n}\,,n>2$ to ensure
convergence at small $t$. (Fields of other dimensions or integrals of the
type $\int\langle
P^{\mathrm{RGI}}(x)\overline{P}^{\mathrm{RGI}}(0)\rangle\,\tilde{K}(x)\,\mathrm{d}^{4}x$
lead to trivial changes of our discussion.) Specializing to moments with
$n>2$, one can then also consider
$\mathcal{M}_{n}(M,a)=a\sum_{t}t^{n}\,G(t,M,a)=\mathcal{M}_{n}(M,0)+\Delta\mathcal{M}_{n}(M,a)\,,$
(2)
with a finite continuum limit $\mathcal{M}_{n}(M,0)$. The case $n=4$ will be
discussed in detail since it is of particular interest for computing
$\alpha_{s}$, when $P$ is a heavy-quark bilinear [1] and furthermore the
hadronic vacuum polarization contribution to $g-2$ of the muon has the form
above in the time-momentum representation [2] with a
$K(t)\stackrel{t\to 0}{\sim}t^{4}$. We will comment on other moments as we go along. In the
following we assume mass-degenerate quarks to simplify the notation.
Note that in the heavy quarks moments method for determining $\alpha_{s}$ one
typically considers the dimensionless
$\overline{\mathcal{M}}_{4}(M,a)=M^{2}\mathcal{M}_{4}(M,a)\,,$ (3)
with $M$ the RGI-mass, such that also $\overline{\mathcal{M}}_{4}$ is scale
invariant. Specifically one chooses
$P^{\mathrm{RGI}}=Z^{\mathrm{RGI}}P^{\mathrm{bare}},\;P^{\mathrm{bare}}=\bar{c}\gamma_{5}c^{\prime}$
and for discretizations with enough chiral symmetry the renormalization factor
$Z^{\mathrm{RGI}}$ is not needed due to
$MP^{\mathrm{RGI}}=m_{\mathrm{bare}}P^{\mathrm{bare}}$. The correlator $G$,
eq. 1, is even under time-reflections, $G(t,M,0)=G(-t,M,0)$. Thus moments for
odd $n$ vanish and only moments with $n\geq 4$ are finite.
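The role of the power $n$ in the small-$t$ convergence can be seen in a toy model: with the massless free-theory small-$t$ behaviour $G(t)\sim 1/t^{3}$, the weighted integrand is $t^{n-3}$, which is integrable at $t=0$ only for $n>2$. A minimal sketch (the correlator model and cutoff values are illustrative, not taken from the paper):

```python
import math

def moment_integral(n, eps, T):
    """Integral of t^n * G(t) from eps to T for the toy small-t correlator
    G(t) = 1/t^3, i.e. the integral of t^(n-3)."""
    p = n - 3
    if p == -1:
        return math.log(T / eps)          # n = 2: logarithmically divergent
    return (T**(p + 1) - eps**(p + 1)) / (p + 1)

# n = 4 converges as the lower cutoff eps -> 0:
m4 = moment_integral(4, 1e-8, 1.0)        # -> 1/2 up to O(eps^2)
# n = 2 diverges like log(1/eps):
m2 = moment_integral(2, 1e-8, 1.0)        # -> log(1e8) ~ 18.4
```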
In an $\mathrm{O}(a)$-improved theory, the Symanzik effective theory
prediction (SymEFT) [3, 4, 5] is
$\Delta G\stackrel{a\to 0}{\sim}a^{2}[\alpha_{s}(1/a)]^{\hat{\gamma}_{\mathrm{lead}}}\,.$ (4)
Naively one may expect that this also leads to
$\Delta\mathcal{M}_{n}\stackrel{a\to 0}{\sim}a^{2}[\alpha_{s}(1/a)]^{\hat{\gamma}_{\mathrm{lead}}}\,.$ Here
we discuss that this is not the case and show that a safe continuum limit
cannot even be taken with lattice spacings down to $a=10^{-2}\,\mathrm{fm}$ (section 2).
We derive that already in the free theory an $a^{2}\log(aM)$ term is present
(section 3) and sketch what changes in the SymEFT prediction in the
interacting theory (section 4). Since the general conclusion is that integrals
such as the one defining $\mathcal{M}_{4}$ cannot be computed reliably on the
lattice, we then propose a modification for $\mathcal{M}_{4}$ (section 6.1)
and demonstrate that it works very well. Finally we also make a simple and
practical proposal which solves the issue for the HVP contribution to the muon
$g-2$ (section 6.3).
## 2 Demonstration of the deviations from simple $a^{2}$ scaling
We computed $\overline{\mathcal{M}}_{4}$ (and other moments [6]) in the
quenched approximation on ensembles sft7 - sft4 [7] and q_b649 - q_b616 [6]
with lattice spacings $a={0.01\,\mathrm{fm}}\times 2^{n/2},n=0\ldots 6$, i.e.
$0.01\,\mathrm{fm}\leq a\leq 0.08\,\mathrm{fm}$. The property
$MP^{\mathrm{RGI}}=m_{\mathrm{bare}}P^{\mathrm{bare}}$ is guaranteed by using
the twisted mass formulation at maximal twist and double insertions of the
Pauli term in SymEFT are avoided by including the Sheikholeslami-Wohlert term
[8] with non-perturbative improvement coefficient [9]. Further details are
given in [10].
In fig. 1 we show the lattice spacing dependence of
$R_{4}=\frac{\overline{\mathcal{M}}_{4}}{[\overline{\mathcal{M}}_{4}]_{g=0,a>0}}\,.$
(5)
The normalization by the lattice leading perturbative order (finite $a>0$) is
crucial as seen by the points with continuum norm,
$\overline{\mathcal{M}}_{4}/[\overline{\mathcal{M}}_{4}]_{g=0,a=0}$. Again we
refer to [6, 10] for more details. However, despite the strong reduction of
discretisation errors by the lattice norm, a continuum extrapolation with data
in the range $a\in[0.02,0.04]$ fm (linear fit in fig. 1) where $R_{4}$
seemingly scales with $a^{2}$ corrections, clearly leads to a wrong result.
This is seen by the $a=0.01$ fm data point and corroborated by our method
sketched in section 6.1. Such a behavior is the nightmare of numerical
analysis. Note that the mass $M\approx M_{\mathrm{charm}}$ is not that high.
Figure 1: Lattice dependence of
$R_{4}=\overline{\mathcal{M}}_{4}/[\overline{\mathcal{M}}_{4}]_{g=0}$, where
the normalization is performed with $[\overline{\mathcal{M}}_{4}]_{g=0}$ at
finite $a$ (“lattice norm”) or at $a=0$ (“continuum norm”). The quark mass is
around the charm quark mass.
## 3 Derivation of the $a^{2}\log(aM)$ term in the free theory
In this and the following section we study the small $t$ behavior where mass-
effects can be neglected and we first consider the contribution to
$\mathcal{M}_{4}$ from a range $t_{1}\leq t\leq t_{2}\ll 1/M$,
$\displaystyle\Delta I(t_{1},t_{2})$ $\displaystyle=$ $\displaystyle
2a\sum_{t=t_{1}}^{t_{2}}w_{\mathrm{T}}(t)\,t^{4}\,G(t,M,a)-I_{\mathrm{cont}}(t_{1},t_{2})\,,\quad
t_{1}M\ll 1,\;t_{2}M\ll 1\,,$ (7) $\displaystyle
I_{\mathrm{cont}}(t_{1},t_{2})=2\int_{t_{1}}^{t_{2}}\mathrm{d}t\;t^{4}\,G(t,M,0)\,,\quad
t_{1}M\ll 1,\;t_{2}M\ll 1\,.$
The weight $w_{\mathrm{T}}(t)$ implements the trapezoidal rule: it is $1/2$ at
the boundaries and $1$ otherwise.
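For a smooth integrand away from $t=0$ the trapezoidal weights alone contribute only a benign, plain $\mathrm{O}(a^{2})$ error; the log-enhancement discussed below comes from the integrand, not from the rule. A quick check of the rule's scaling (with an arbitrary smooth test function, not a lattice correlator):

```python
import math
import numpy as np

def trapezoid_sum(f, t1, t2, a):
    """a * sum_{t=t1}^{t2} w_T(t) f(t), with w_T = 1/2 at the two endpoints
    and 1 in between: the trapezoidal rule on a grid of spacing a."""
    t = np.arange(t1, t2 + a / 2, a)
    w = np.ones_like(t)
    w[0] = w[-1] = 0.5
    return a * np.sum(w * f(t))

exact = math.exp(2) - math.exp(1)               # integral of e^t over [1, 2]
err = lambda a: abs(trapezoid_sum(np.exp, 1.0, 2.0, a) - exact)
# halving a reduces the error by ~4, confirming the O(a^2) scaling of the rule:
ratio = err(0.01) / err(0.005)                  # ~ 4
```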
In order to gain understanding, we start with the free theory, $g=0$. This
case is illuminating and at the same time we can get the relevant result by
dimensional reasoning alone.
We split
$\displaystyle\Delta I(0,t)$ $\displaystyle=$ $\displaystyle\Delta
I(0,t_{1})+\Delta I(t_{1},t)\,,$ (8)
discuss the second term and then add the first one. The SymEFT prediction for
the cutoff effects of $G$ are
$\displaystyle\Delta G$ $\displaystyle=$ $\displaystyle
k_{\mathrm{L}}\,\frac{a^{2}}{t^{5}}+\mathrm{O}(a^{4})+\mathrm{O}(M^{2}t^{2})\,,$
(9)
with a constant $k_{\mathrm{L}}$ which depends on the fermion
discretization. (This form is simply due to dimensional counting: $G(t)$ has
mass dimension $-3$ and therefore behaves like $\sim t^{-3}$ for small $t$ in
the free theory. In the interacting theory there are log-corrections to this
functional form due to anomalous dimensions of $P$ and the SymEFT operators.
Relative cutoff effects are $\sim a^{2}/t^{2}$, again because for $tM\ll 1$
the only dimensionful parameter apart from $a$ is $t$.) Performing an explicit
leading order computation, expanded in $a/t$ in the Wilson regularization we
find $k_{\mathrm{L}}=1$. Since mass-effects are irrelevant, $k_{\mathrm{L}}=1$
holds irrespective of whether we choose a twisted mass term or a standard one.
Not indicating the higher order corrections in $a$ and $M$ any further we get
$\displaystyle\Delta I(t_{1},t)$ $\displaystyle\overset{a\ll t_{1}}{\sim}$
$\displaystyle\;k_{\mathrm{L}}\,a^{2}\
\,\int_{t_{1}}^{t}\,\mathrm{d}s\,s^{-1}+\Delta I_{\mathrm{T}}(t_{1},t)\,$ (10)
$\displaystyle=$ $\displaystyle k_{\mathrm{L}}a^{2}\,\,\log(t/t_{1})+\Delta
I_{\mathrm{T}}(t_{1},t)=k_{\mathrm{L}}a^{2}\,[\log(t/a)-\log(t_{1}/a)]+\Delta
I_{\mathrm{T}}(t_{1},t)\,.$ (11)
Here, $\Delta I_{\mathrm{T}}\sim a^{2}$ is the error in using the trapezoidal
rule for the integral. We drop it because it does not play a role in the
following; it is regular as $t\to 0$ and does not introduce a log. We then
obtain
$\displaystyle\Delta I(0,t)$ $\displaystyle=$ $\displaystyle\underbrace{\Delta
I(0,t_{1})-k_{\mathrm{L}}a^{2}\log(t_{1}/a)}_{=k\,a^{2}}+k_{\mathrm{L}}\,a^{2}\,\log(t/a)=a^{2}\,[k+k_{\mathrm{L}}\log(t/a)]\,,$
(12)
with another dimensionless constant $k$ depending on the regularization. The
first term, $ka^{2}$, has this form because it neither depends on $t_{1}$ nor
on $t$ and $a$ is the only dimensionful parameter.
For the full moment, $t$ gets replaced by the only physics scale of the
integral, namely $1/M$. We thus arrive at
$\displaystyle\frac{\Delta\mathcal{M}_{4}(M,a)}{\mathcal{M}_{4}(M,0)}$
$\displaystyle=$ $\displaystyle
a^{2}M^{2}\,[k^{\prime}-k_{\mathrm{L}}\log(M\,a)]+\mathrm{O}(M^{4}a^{4})\,\,.$
(13)
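The mechanism behind this result can be mimicked numerically. Taking the free-theory values of the derivation ($k_{\mathrm{L}}=1$, per-site error $a^{2}/t$ for the integrand $t^{4}G\sim t$), the accumulated error divided by $a^{2}$ is a harmonic sum and grows like $\log(T/a)$ rather than approaching a constant. A toy check, with $T$ playing the role of $1/M$:

```python
import math

def delta_I(a, T):
    """Accumulated artefact a * sum_{t=a,2a,...,T} (a^2 / t): the lattice sum
    of the k_L = 1 small-t error a^2/t of the integrand t^4 G(t) ~ t."""
    N = round(T / a)
    return a * sum(a**2 / (n * a) for n in range(1, N + 1))   # = a^2 * H_N

# delta_I/a^2 = H_{T/a} = log(T/a) + gamma + O(a/T): the coefficient of a^2 is
# log-enhanced as a -> 0 instead of approaching a constant.
c1 = delta_I(1e-2, 1.0) / 1e-4 - math.log(1e2)   # ~ Euler gamma ~ 0.577
c2 = delta_I(1e-3, 1.0) / 1e-6 - math.log(1e3)   # ~ Euler gamma ~ 0.577
```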
We note that [11] have argued for the presence of a $\log(t/a)$ term in the
same discretised integral (in the context of HVP). In contrast to their
argumentation, we never work with divergent integrals or with the Symanzik
expansion for $a/t=\mathrm{O}(1)$.
It is instructive to add the higher order terms
$k_{d}\,(a^{2}/t^{5})\,(a/t)^{d-2}$ with $d>2$ in the SymEFT for $\Delta
G$. They yield
$\displaystyle\Delta I(t_{1},t)$ $\displaystyle\overset{a\ll t_{1}}{\sim}$
$\displaystyle\;k_{\mathrm{L}}a^{2}\,[\log(t/a)-\log(t_{1}/a)]+a^{2}\sum_{d>2}\frac{k_{d}}{d-2}\,[\,(a/t_{1})^{d-2}-(a/t)^{d-2}\,]\,.$
(14)
and
$\displaystyle\Delta I(0,t)$ $\displaystyle\sim$
$\displaystyle\;a^{2}k^{\prime\prime}\,+\,k_{\mathrm{L}}a^{2}\,\log(t/a)-a^{2}\sum_{d>2}\frac{k_{d}}{d-2}(a/t)^{d-2}\,,$
(15)
where $k^{\prime\prime}$ now receives contributions also from the $d>2$ terms
in $\Delta G$. Note that the reasoning for the constant term $a^{2}k^{\prime\prime}$ is
unchanged: it is fixed by the dimension of $\Delta I$ inherited from that of
$I$ and by the independence of $t_{1}$. This means that Symanzik improvement does
not hold for the integral: we could improve $\Delta G$ such that all $\sim
a^{2}$ terms are removed, but $\Delta\mathcal{M}_{4}$ would remain of order
$a^{2}$ due to the $d>2$ terms in (14). "Only" the log-term at order $a^{2}$
disappears by improvement of the integrand.
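The survival of the $a^{2}$ term can also be seen schematically. Suppose the leading $a^{2}/t^{5}$ artefact of $\Delta G$ has been removed but a $d=4$ term $\sim a^{4}/t^{7}$ remains; its contribution to the $n=4$ moment per site is $t^{4}\cdot a^{4}/t^{7}=a^{4}/t^{3}$, and summing from $t\sim a$ upward still produces an $\mathrm{O}(a^{2})$ effect. A dimensional-counting sketch (as in the text, the region $a/t=\mathrm{O}(1)$ is outside the validity of the expansion; only the scaling matters):

```python
def delta_I_d4(a, T):
    """a * sum_{t=a,...,T} a^4 / t^3: the n=4 moment error fed by a d=4
    Symanzik term after the leading a^2/t^5 artefact has been improved away."""
    N = round(T / a)
    return a * sum(a**4 / (n * a) ** 3 for n in range(1, N + 1))

# a * sum_n a^4/(n a)^3 = a^2 * sum_n 1/n^3 -> a^2 * zeta(3) ~ 1.202 * a^2:
# an O(a^2) error survives even though the O(a^2) term of Delta G was removed.
coeff = delta_I_d4(1e-3, 1.0) / 1e-6              # ~ zeta(3) ~ 1.202
```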
Consider for a moment the moment
$N_{3}=a\sum_{t\geq 0}t^{3}\,G(t,M,a)\,.$ (16)
In this case, we obtain $\mathrm{O}(a)$ effects, irrespective of how the
theory was improved. It is relevant to investigate whether such terms appear
in some (sub-)integrals in representations of light-by-light scattering
evaluated on the lattice [12].
## 4 SymEFT analysis beyond the free theory
It is not difficult to follow the above steps for the interacting theory. One
has to write $\Delta G$ as in eq. 4 and also the short distance behavior
changes due to anomalous dimension effects. These modifications introduce
powers of $\alpha_{s}(1/a)$ and $\alpha_{s}(1/t)$, respectively, but are not
of prime relevance. More important is that the step analogous to eq. 12 is
modified to
$ka^{2}\to K(a\Lambda)\,a^{2}\,,$ (17)
with a function $K(a\Lambda)$ which is not restricted by simple arguments.
Without knowing the behavior of $K$ at the origin, nothing can be concluded
about $M$-independent $a$-effects of the integral. The structure of external
scale dependent cutoff effects will be discussed in a publication [13]. The
basic reason for the difficulty is of course that the interacting theory has a
dynamical scale, $\Lambda$, which makes the dimensional analysis much less
restrictive.
## 5 Higher moments $\mathcal{M}_{n},\;n>4$
With $n>4$, the $a^{2}\log(aM)$ term is absent in the free theory. Still,
$\log(aM)$ dependences are present, but they are pushed to a higher order in
$a$,
$\mathcal{M}_{n}=\ldots+\mathrm{const.}\times a^{n-2}\log(aM)+\ldots\,.$ (18)
## 6 Solutions
Our discussion shows that integrals of the considered type cannot be computed
on the lattice in the straightforward way. The best solution to this problem
is to avoid integrands which have a behavior $\sim t^{k}\,,k<2$. First we
describe a specific solution for $\overline{\mathcal{M}}_{4}$ for which we
have a complete numerical demonstration. Then we propose a general solution,
which in particular will be useful for HVP.
### 6.1 A practical solution for $\overline{\mathcal{M}}_{4}$
Our simple solution for the moment $\overline{\mathcal{M}}_{4}$ uses two
different masses in the form (dropping the $a$-dependence)
$\displaystyle\rho(M_{1},M_{2})$
$\displaystyle=\frac{2\pi^{2}}{3}\,(1-r^{2})^{-1}\,[\overline{\mathcal{M}}_{4}(M_{1})-r^{2}\overline{\mathcal{M}}_{4}(M_{2})]$
(19)
$\displaystyle=\frac{2\pi^{2}}{3}\,(1-r^{2})^{-1}M_{1}^{2}[\mathcal{M}_{4}(M_{1})-\mathcal{M}_{4}(M_{2})]\,,\quad
r=M_{1}/M_{2}>1.$ (20)
The second line shows that the small $t$ asymptotics of the integrand is
improved via
$\displaystyle t^{4}\,[G(t,M_{1})-G(t,M_{2})]\sim
t^{4}\,(t^{2}M_{2}^{2}-t^{2}M_{1}^{2})+\mathrm{O}(t^{8})\,.$ (21)
There are log-corrections to this equation in the interacting theory, which
are not relevant here. Due to the extra two powers of $t$, which come with the
mass-effects, the quantity $\rho(M_{1},M_{2})$ has no log-enhanced $a^{2}$
effects (they will appear only at the level $a^{4}$).
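A toy correlator makes the cancellation explicit. Taking the hypothetical model $G(t,M)=e^{-M^{2}t^{2}}/t^{3}$ (chosen only for its correct massless small-$t$ limit $1/t^{3}$, not a model used in the paper), the single-mass integrand starts at $\mathrm{O}(t)$ while the mass-difference combination starts at $\mathrm{O}(t^{3})$:

```python
import math

def integrand(t, M):
    """t^4 G(t, M) for the toy correlator G = exp(-M^2 t^2) / t^3."""
    return t * math.exp(-((M * t) ** 2))

M1, M2, t = 1.5, 1.0, 1e-3
single = integrand(t, M1)                      # ~ t: the dangerous O(t) behaviour
diff = integrand(t, M1) - integrand(t, M2)     # ~ (M2^2 - M1^2) * t^3
# The coefficient of t^3 matches the mass difference entering the expansion:
coeff = diff / t**3                            # ~ M2**2 - M1**2 = -1.25
```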
For the purpose of extracting $\alpha_{s}$ it is now relevant to choose
$M_{1}$ and $M_{2}$ not too different. Then the perturbative expansion, which
is given in terms of the one of $\overline{\mathcal{M}}_{4}$, does not contain
large logs of $M_{2}/M_{1}$. We write the perturbative expansion in terms of
$\alpha_{s}(m_{2\star})$ with $M_{1}>M_{2}$. (We implicitly define
$m_{\star}=\overline{m}_{\rm\overline{MS}}(m_{\star})$. In practice, to
evaluate $\overline{\mathcal{M}}_{4}(M_{2})$ we choose
$\alpha_{\rm\overline{MS}}(m_{2\star})$ as expansion variable and use the
5-loop running of the coupling and quark mass to relate $m_{2\star}$ to
$M_{2}$. One could also obtain expansion coefficients which depend on $r$.) The
$\frac{2\pi^{2}}{3}\,(1-r^{2})^{-1}$ normalization in eq. 19 ensures
$\rho(M_{1},M_{2})=1+c_{1}\alpha_{\rm\overline{MS}}(m_{2\star})+\ldots\,,$
(22)
where $c_{1}=0.74272...$ is the same expansion coefficient as the one of
$R_{4}=\frac{2\pi^{2}}{3}\overline{\mathcal{M}}_{4}$ and higher order ones are
easily obtained. We expand in $\alpha_{\rm\overline{MS}}(m_{2\star})$ because
the difference is dominated somewhat more by long distances and $M_{2}$ is the
smaller of the masses.
Figure 2: Continuum limit extrapolations of $\rho$ and its TL improved
version, $\rho(M_{1},M_{2})^{\mathrm{Latnorm}}$. Masses, specified in units
$z_{i}=M_{i}\sqrt{8t_{0}}$, are $z_{1}=4.5,\;z_{2}=3$ (left) and
$z_{1}=13.5,\;z_{2}=9$ (right).
We show continuum limit extrapolations in fig. 2. They are almost straight in
$a^{2}$ at small $a$ which makes them quite easy to do. They can be further
improved by dividing $\rho$ by the same function evaluated at leading order,
i.e. $g=0$. There is a choice of which masses to insert into the leading order
formula; a good choice is again $m_{\star}$. Precisely, we define
$R_{4}^{\mathrm{TL}}(a\mu)=R_{4}|_{g=0}$ (23)
with $\mu$ the twisted mass and then
$\displaystyle\rho^{\mathrm{Latnorm}}(M_{1},M_{2})$ $\displaystyle=$
$\displaystyle\frac{3}{2\pi^{2}}\,(1-r_{\star}^{2})\,\frac{\rho(M_{1},M_{2})}{R_{4}^{\mathrm{TL}}(am_{\star
1})-r_{\star}^{2}R_{4}^{\mathrm{TL}}(am_{\star 2})}$ (24)
with
$r_{\star}=\frac{m_{\star 1}}{m_{\star 2}}\,.$ (25)
In principle it is important that $r_{\star}$ is given by the ratio of the
masses that appear in $R_{4}^{\mathrm{TL}}$ for the log-term to cancel. But
numerically, replacing $r_{\star}\to r$ makes only a small difference.
Examples for how the discretization errors are reduced can be seen in fig. 2.
For all our values of $M_{1},M_{2}$, the leading order improved
$\rho(M_{1},M_{2})^{\mathrm{Latnorm}}$ has a rather convincing continuum
extrapolation.
After the continuum extrapolation, one straightforwardly extracts the
effective $\Lambda$-parameter and arrives at the red circles in fig. 3. These
values are computed from three-loop perturbation theory (i.e. including
$\alpha^{3}$ in $R_{4}$) at finite $\alpha(m_{\star})$. They then have a
residual dependence
$\Lambda_{\mathrm{eff}}=\Lambda+\mathrm{O}(\alpha^{2}(m_{\star}))\,,$ (26)
on $m_{\star}$ and we call them “effective”. The comparison to the Dalla Brida
and Ramos value [14], extracted at $\alpha^{2}<0.01$ with the help of a finite
size step scaling method, shows that $\Lambda$ computed from $\rho$ has at
most small (on the scale of our uncertainties) corrections at the largest
mass. That mass is given by $z=9$ or $m_{\star}\approx 2.7\,\mathrm{GeV}$.
Figure 3: $\Lambda_{\rm\overline{MS}}$ computed from
$\alpha_{\rm\overline{MS}}(m_{2\star})$, where the latter is obtained from the
non-perturbative $\rho$. The dotted line is a fit to all points including the
Dalla Brida / Ramos one [14]. The reconstructed data points are described in
the text.
### 6.2 Reconstruction of
$R_{4}=\frac{2\pi^{2}}{3}\overline{\mathcal{M}}_{4}(M)$ from $\rho$.
From the definition eq. 19 of $\rho$ it is clear that given
$\rho(M_{1},M_{2})$ and one of the two moments one can determine the other. This
can be exploited by using $\rho$ to go from
$R_{4}(M_{\mathrm{ref}}\gg\Lambda)$, where perturbative uncertainties are
suppressed the most, to smaller masses. (In the opposite direction all
uncertainties in $\rho$ get enhanced, quickly leading to uncontrolled results.)
We insert the known [14] $\Lambda$-parameter into the three-loop (i.e.
including $\alpha^{3}$) perturbative expression for $R_{4}$ at our highest
mass, $z_{\mathrm{ref}}=13.5$ and obtain
$\displaystyle R_{4}^{\mathrm{reconstructed}}(M)$ $\displaystyle=$
$\displaystyle(1-r^{-2})\,\rho(M,M_{\mathrm{ref}})+r^{-2}\,R_{4}^{\mathrm{3-loop}}(M_{\mathrm{ref}})\,,\;$
(28) $\displaystyle
r=M_{\mathrm{ref}}/M,\;z_{\mathrm{ref}}=\sqrt{8t_{0}}M_{\mathrm{ref}}=13.5\,.$
Note that perturbative errors are small in $R_{4}(M_{\mathrm{ref}})$ as seen
in the analysis of $\rho$. They get further suppressed by a factor
$r^{-2}\approx 1/20$ when we go to $z=3$. This means that we obtain the non-
perturbative dependence of $\Lambda_{\mathrm{eff}}$ (now computed from
$R_{4}$ and therefore with somewhat different $\mathrm{O}(\alpha^{2})$ terms)
on $\alpha$. We remind the reader that a direct computation of $R_{4}$ was
impossible due to the $a^{2}\log(aM)$ effects.
### 6.3 Proposal for the HVP contribution to the muon $g-2$
The discussion in the previous section is easily transferred to the case of
the muon $g-2$, working with differences of the HVP integral for different
(artificial) muon masses. Additionally, we would like to advocate a very
simple solution for this and similar cases, where the short distance
contribution to the integral is subdominant. In contrast to the
$\mathcal{M}_{4}$-case the goal is not to determine $\alpha_{s}$ or other
short-distance parameters.
It is then advisable to split the integral into a short-distance part
evaluated by continuum perturbation theory and a long-distance one to be
computed on the lattice:
$\int_{0}^{\infty}\mathrm{d}t\,F(t)=\underbrace{\int_{0}^{\infty}\mathrm{d}t\,[1-\chi(t)]\,F(t)}_{\text{continuum
PT}}\,+\,\underbrace{a\sum_{t=0}^{\infty}\,\chi(t)\,F(t)}_{\text{continuum
limit of lattice
results}}\,,\quad\chi(t)\,\sim\,\begin{cases}\mathrm{O}(t^{2})&t\Lambda_{\mathrm{\overline{MS}}}\ll
1\\ 1&t\Lambda_{\mathrm{\overline{MS}}}\gg 1\end{cases}\,.$ (29)
For example, the function $\chi$ can be taken as
$\chi(t)=\frac{(M_{\mathrm{cut}}t)^{k}}{(M_{\mathrm{cut}}t)^{k}+1}\,,\;M_{\mathrm{cut}}\gg\Lambda_{\mathrm{\overline{MS}}}$
(30)
or as a step function, $\chi(t)=\theta(tM_{\mathrm{cut}}-1)$. The smooth
version seems advantageous for perturbation theory as well as for the lattice
discretization of the integral. The use of perturbation theory for the
small-$t$ part of the integral has already been anticipated in [2]. Our
discussion adds further motivation and understanding, and it suggests a smooth
function $\chi$ such as eq. (30).
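As a quick numerical illustration of eq. (30), the sketch below evaluates the smooth window for assumed values of $M_{\mathrm{cut}}$ and $k$ (both illustrative choices of ours, not fixed by the text) and checks the two limiting behaviours required by eq. (29):

```python
# Illustrative sketch of the smooth window chi(t) of eq. (30).
# M_cut and k are hypothetical values chosen for demonstration only.
def chi(t, m_cut=1.0, k=2):
    x = (m_cut * t) ** k
    return x / (x + 1.0)

# Short distances, t*M_cut << 1: chi ~ O(t^2), so this region is left
# to continuum perturbation theory.
assert chi(0.05) < 0.01

# Long distances, t*M_cut >> 1: chi -> 1, so this region comes from the
# continuum limit of the lattice results.
assert chi(20.0) > 0.99

# The two weights always reassemble the full integrand: [1 - chi] + chi = 1.
assert abs((1.0 - chi(0.7)) + chi(0.7) - 1.0) < 1e-12
```

A step function $\chi(t)=\theta(tM_{\mathrm{cut}}-1)$ would satisfy the same bookkeeping, but the smooth version avoids a discontinuity in both the perturbative and the lattice pieces.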
Acknowledgements. We thank the organizers for a very pleasant and successful
conference. Discussions with H. Meyer, S. Kuberski and A. Risch on HVP are
gratefully acknowledged. We also thank C. Lehner for an email exchange on that
subject. LC and RS acknowledge funding from the European Union’s Horizon 2020
research and innovation programme under the Marie Skłodowska-Curie grant
agreement No. 813942.
## References
* [1] HPQCD collaboration, _High-Precision Charm-Quark Mass and QCD coupling from Current-Current Correlators in Lattice and Continuum QCD_ , _Phys. Rev. D_ 78 (2008) 054513 [0805.2999].
* [2] D. Bernecker and H.B. Meyer, _Vector Correlators in Lattice QCD: Methods and applications_ , _Eur. Phys. J. A_ 47 (2011) 148 [1107.4388].
* [3] K. Symanzik, _Continuum limit and improved action in lattice theories:(i). principles and $\varphi$4 theory_, _Nuclear Physics B_ 226 (1983) 187.
* [4] N. Husung, P. Marquard and R. Sommer, _Asymptotic behavior of cutoff effects in Yang–Mills theory and in Wilson’s lattice QCD_ , _Eur. Phys. J. C_ 80 (2020) 200 [1912.08498].
* [5] N. Husung, _Logarithmic corrections to O( $a$) and O($a^{2}$) effects in lattice QCD with Wilson or Ginsparg-Wilson quarks_, 2206.03536.
* [6] L. Chimirri and R. Sommer, _Investigation of the Perturbative Expansion of Moments of Heavy Quark Correlators for $N_{f}=0$_, _PoS_ LATTICE2021 (2022) 354 [2203.07936].
* [7] N. Husung, M. Koren, P. Krah and R. Sommer, _SU(3) Yang Mills theory at small distances and fine lattices_ , _EPJ Web Conf._ 175 (2018) 14024 [1711.01860].
* [8] B. Sheikholeslami and R. Wohlert, _Improved Continuum Limit Lattice Action for QCD with Wilson Fermions_ , _Nucl. Phys. B_ 259 (1985) 572.
* [9] M. Lüscher, S. Sint, R. Sommer, P. Weisz and U. Wolff, _Nonperturbative O(a) improvement of lattice QCD_ , _Nucl. Phys. B_ 491 (1997) 323 [hep-lat/9609035].
* [10] L. Chimirri and R. Sommer, _A quenched exploration of heavy quark moments and their perturbative expansion_ , 2023.
* [11] T. Harris, M. Cè, H.B. Meyer, A. Toniato and C. Török, _Vacuum correlators at short distances from lattice QCD_ , _PoS_ LATTICE2021 (2021) 572 [2111.07948].
* [12] E.-H. Chao, R.J. Hudspith, A. Gérardin, J.R. Green, H.B. Meyer and K. Ottnad, _Hadronic light-by-light contribution to $(g-2)_{\mu}$ from lattice QCD: a complete calculation_, _Eur. Phys. J. C_ 81 (2021) 651 [2104.02632].
* [13] L. Chimirri, N. Husung and R. Sommer, _in preparation_ , 2023.
* [14] M. Dalla Brida and A. Ramos, _The gradient flow coupling at high-energy and the scale of SU(3) Yang–Mills theory_ , _Eur. Phys. J. C_ 79 (2019) 720 [1905.05147].
|
# Hierarchical Control Strategy for Moving A Robot Manipulator Between Small
Containers
Paolo Torrado1, Boling Yang2, and Joshua R. Smith1,2
This work was supported by Amazon Inc. through the UW+Amazon Science Hub.
1 Electrical and Computer Engineering Department, University of Washington
2 Paul G. Allen School of Computer Science & Engineering, University of Washington
###### Abstract
In this paper, we study the implementation of a model predictive controller
(MPC) for the task of object manipulation in a highly uncertain environment
(e.g., picking objects from a semi-flexible array of densely packed bins). As
a real-time perception-driven feedback controller, MPC is robust to the
uncertainties in this environment. However, our experiment shows that, due to
its myopic nature, MPC cannot control a robot to complete a sequence of
motions in a heavily occluded environment. It benefits from the addition of a
high-level policy that adaptively adjusts the optimization problem for MPC.
## I Introduction
Transferring objects between small containers is a popular robotic
manipulation task, and it is particularly common in warehouse manipulation
settings. It is a difficult control problem for an autonomous robot, requiring
precise control and robust collision avoidance. The task complexity is further
increased when the array of small containers is large because it heavily
occludes the task space and limits the robot’s maneuverability.
In this paper, we compare the performance of an MPC controller and an
MPC-based hierarchical controller in a warehouse manipulation task. This task
requires the robot to transfer objects between multiple containers and a
shipping tote. MPC is an online control strategy that works well in stochastic
environments. Yet, its sampling process and parameter-sensitive nature
introduce uncertainty into its performance in complex environments. To
overcome this weakness, we propose an MPC-based hierarchical controller that
uses a policy that adaptively generates high-level strategies for the MPC
algorithm to
solve. Our experiment shows the hierarchical controller significantly
outperformed the MPC controller in terms of success rate in task completion,
control quality, and computational cost. We found that the hierarchical
control strategy is effective in solving our particular manipulation task. In
future research, we propose to improve the approach via reinforcement learning
and curriculum learning.
## II Sampling-Based Model Predictive Control
MPC performs well in unstructured and dynamic environments requiring online
adaptation [1]. It finds a locally optimal policy from the construction of
multiple possible trajectories based on an approximate dynamics model [2].
Trajectories are discretized into particles representing possible future
states of the robot, computed a fixed number of time steps into the future
called the horizon. The optimal policy is obtained through the use of a loss
function defined as follows:
$\displaystyle\hat{c}(x_{t},u_{t})=\alpha_{p}\hat{c}_{pose}+\alpha_{s}\hat{c}_{stop}+\alpha_{j}\hat{c}_{joint}+\alpha_{m}\hat{c}_{manip}+\alpha_{c}(\hat{c}_{self-coll}+\hat{c}_{env-coll})$
where $\hat{c}_{pose}$ penalizes distance to the target pose, $\hat{c}_{stop}$
is the cost to stop for contingencies, $\hat{c}_{joint}$ is the joint-limit
avoidance cost, $\hat{c}_{manip}$ is the manipulability cost,
$\hat{c}_{self-coll}$ is the self-collision avoidance cost, and
$\hat{c}_{env-coll}$ is the environment collision cost. All the $\alpha$ terms
are weight factors.
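The sampling loop described above can be sketched in miniature. The example below is an illustrative MPPI-style controller for a one-dimensional point mass, not the controller used in this work: the dynamics, cost (a bare pose term plus a small control penalty), horizon, particle count, and weights are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(x, v, controls, target, dt=0.1):
    # roll an approximate dynamics model forward and accumulate the loss
    cost = 0.0
    for u in controls:
        v += u * dt
        x += v * dt
        cost += (x - target) ** 2 + 1e-3 * u ** 2  # "pose" term + control penalty
    return cost

def mpc_step(x, v, target, horizon=15, particles=200, sigma=2.0, lam=1.0):
    # particles: sampled control sequences over the horizon
    U = rng.normal(0.0, sigma, size=(particles, horizon))
    costs = np.array([rollout_cost(x, v, u_seq, target) for u_seq in U])
    # exponentially weight low-cost particles and blend their first actions
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return float(w @ U[:, 0])

# receding-horizon loop: re-plan from the measured state at every step
x, v, target, dt = 0.0, 0.0, 1.0, 0.1
for _ in range(60):
    u = mpc_step(x, v, target)
    v += u * dt
    x += v * dt

assert abs(x - target) < 0.3  # the point mass settles near the target
```

Because only the first action of the weighted plan is executed before re-planning, the controller is inherently myopic: it optimizes well within the horizon but cannot reason past it, which is the weakness the hierarchical layer addresses.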
Figure 1: Universal Robot UR16e depicted picking items from the array of bins
in the real world, and in simulation reaching a target. Qualitative results
(videos) available at this link.
Figure 2: Experimental results based on 10 trials for each run. (a) Average
success rate per 10 trials. (b) End-effector traversed distance. (c) Time
elapsed from start of simulation to target being reached.
## III Benchmarking Controller Performance
We implemented a MPC controller on a Universal Robot UR16e and in simulation
with IsaacGym [3]. The design of the experiment is based on experience
manipulating objects with the real robot. Benchmarking was done on the
simulation setup. The trajectory designed for the experiment represents
successful and unsuccessful picks.
The experiment consists of the robot reaching five waypoints. In waypoints one
and two the robot reaches into a bin to simulate the grasping of an object and
retrieving it to drop it in a shipping tote. Waypoints three, four and five
involve reaching into a bin, failing to grasp an object and moving onto the
next bin.
The experiment was performed with three settings of the horizon parameter in
the MPC controller; the number of particles for the trajectory calculation
remained unchanged. Fig. 2(a) shows that all MPC controller settings performed
well in reaching the first two waypoints. The success rate starts declining at
the third waypoint: the controller cannot get past target three with a horizon
of twenty, or past target four with a horizon of thirty. With a horizon of
forty, all targets are reached, but with a low success rate.
We proceeded with hierarchical control by adding a function to dictate global
optimization. We chose to use a heuristic function as the high-level policy in
this experiment; planning- and machine-learning-based approaches are also
feasible policy alternatives.
We repeated the experiment with hierarchical control. Fig. 2(a) shows an
improved target success rate. With a horizon of twenty, the controller is able
to reach targets that were previously inaccessible. Comparing the runs with a
horizon of forty shows that the success rate increased by twenty percent for
the last two targets. As presented in Fig. 2(b) and (c), the hierarchical
controller with a reduced horizon is more efficient in time and traversed
distance than the MPC controller with a longer horizon, although Fig. 2(b) and
(c) also show the hierarchical controller taking a longer time and a higher
traversed distance in the trials with a horizon of forty.
MPC is a good low-level controller for solving local optimization problems,
but it has difficulty finding global optima. The last three targets of the
experiment illustrate this point: the controller is unable to reason about
navigating around the wall separating adjacent bins. A larger horizon
increases the success rate of reaching waypoints between adjacent bins, but at
a higher computational cost. The introduction of hierarchical control helps
the robot reach a better global optimum while decreasing computational cost.
In conclusion, with the use of hierarchical control the success rate of the
robot is increased for all three settings of the horizon.
The heuristic function we use for hierarchical control adds additional
waypoints to the trajectory of the robot. Naturally, a greater number of
waypoints increases the time and traversed distance required to reach targets.
This reduction in performance is counterbalanced by the greater number of
original targets reached. The controller temporarily sacrifices local
optimality while improving the global solution.
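A minimal sketch of a heuristic of this kind, for a hypothetical one-row bin layout. The lift-above-the-wall rule, the bin width, and the wall height are all our illustrative assumptions, not the exact heuristic used in the experiment:

```python
WALL_TOP = 0.40   # assumed wall height (m); hypothetical value

def bin_of(wp, bin_width=0.30):
    # index of the bin containing a waypoint's lateral coordinate (assumed layout)
    return int(wp[0] // bin_width)

def add_clearance_waypoints(targets, clearance=0.05):
    # Insert lift/traverse waypoints whenever consecutive targets sit in
    # different bins, so the myopic MPC never faces the dividing wall.
    plan = []
    for prev, nxt in zip(targets, targets[1:]):
        plan.append(prev)
        if bin_of(prev) != bin_of(nxt):
            lift = WALL_TOP + clearance
            plan.append((prev[0], max(prev[1], lift)))  # rise out of the current bin
            plan.append((nxt[0], max(nxt[1], lift)))    # traverse above the next bin
    plan.append(targets[-1])
    return plan

# two reach-into-bin targets located in adjacent bins
plan = add_clearance_waypoints([(0.15, 0.10), (0.45, 0.10)])
```

The extra waypoints lengthen the nominal path, which matches the observed trade-off: more time and traversed distance per target, but more targets reached overall.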
## IV Hierarchical Control
While the heuristic policy effectively improved the robot’s performance in our
experiment, it will not scale well to the more diverse problem set the robot
will encounter in realistic settings. The increase in complexity associated
with densely packed container arrays will require more sophisticated
manipulation skills. For example, other objects may occlude the target object,
requiring the robot to rearrange the container. Such nonlinear dynamics will
increase the uncertainty in the performance of our hierarchical controller.
To extend the functionality and generalizability of the hierarchical
controller, we propose to use a reinforcement learning agent as the high-level
policy and train the agent with curriculum learning strategies [4]. Curriculum
learning is a training strategy that trains a machine learning model from
easier data to harder data, which imitates the meaningful learning order in
human curricula. We will be experimenting with two curriculum generation
methods. The first method requires human experts to design oracles that
generate training tasks. The second method is an autocurriculum approach [5,
6], where another RL agent is responsible for creating the curriculum. The
robot manipulator and task generator are trained simultaneously to ensure that
the generated tasks are reasonably challenging to the current robot policy.
The robot and task generator are constantly improving their policies until
reaching equilibrium.
## V Conclusions
We implemented an MPC controller on a UR16e robot and in a simulation
environment to study its feasibility as a low-level controller for a
hierarchical control strategy. The experiment consisted of having the robot
traverse a series of waypoints representing the picking of a list of items.
The MPC controller showed mixed performance in this task and was remarkably
improved by the use of hierarchical control. The improved results give us
confidence that a hierarchical control strategy combining the MPC controller
and RL can be successful in reducing controller performance uncertainty.
## References
* [1] G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou, “Information theoretic mpc for model-based reinforcement learning,” in _2017 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2017, pp. 1714–1721.
* [2] M. Bhardwaj, B. Sundaralingam, A. Mousavian, N. D. Ratliff, D. Fox, F. Ramos, and B. Boots, “Storm: An integrated framework for fast joint-space model-predictive control for reactive manipulation,” in _Conference on Robot Learning_. PMLR, 2022, pp. 750–759.
* [3] J. Liang, V. Makoviychuk, A. Handa, N. Chentanez, M. Macklin, and D. Fox, “Gpu-accelerated robotic simulation for distributed reinforcement learning,” in _Conference on Robot Learning_. PMLR, 2018, pp. 270–282.
* [4] P. Soviany, R. T. Ionescu, P. Rota, and N. Sebe, “Curriculum learning: A survey,” _CoRR_ , vol. abs/2101.10382, 2021. [Online]. Available: https://arxiv.org/abs/2101.10382
* [5] B. Yang, G. Habibi, P. Lancaster, B. Boots, and J. Smith, “Motivating physical activity via competitive human-robot interaction,” in _Conference on Robot Learning_. PMLR, 2022, pp. 839–849.
* [6] B. Yang, L. Zheng, L. J. Ratliff, B. Boots, and J. R. Smith, “Stackelberg maddpg: Learning emergent behaviors via information asymmetry in competitive games,” 2022.
|
In this work we consider the following problem: given a network of spies, all distributed in different locations in space, and assuming that each spy possesses a small, but incomplete by itself part of a big secret, is it possible to securely transmit all these partial secrets to the spymaster, so that they can be combined together in order to reveal the big secret? We refer to it as the Quantum Secret Aggregation problem, and we propose a protocol, in the form of a quantum game, with Alice taking over the role of the spymaster, that addresses this problem in complete generality. Our protocol relies on the use of maximally entangled GHZ tuples, which are symmetrically distributed among Alice and all her spies. It is the power of entanglement that makes possible the secure transmission of the small partial secrets from the agents to the spymaster. As an additional bonus, entanglement guarantees the security of the protocol, by making it statistically improbable for the notorious eavesdropper Eve to steal the big secret.
Keywords: Quantum entanglement, GHZ states, quantum cryptography, quantum secret sharing, quantum secret aggregation.
§ INTRODUCTION
Our rapidly growing dependence on, and the continuous development of, many prominent network-based technologies, such as the internet of things or cloud-based computing, have resulted in an ever-growing need for more reliable and robust security protocols that can protect our current infrastructure from malicious individuals or parties. Even though our current security protocols, which base their security upon a set of computationally difficult mathematical problems like the factorization problem, have been proven reliable for the time being, they have also been proven vulnerable against more sophisticated attacks that incorporate the use of quantum algorithms and quantum computers. Despite the fact that most of these quantum algorithms were theoretically developed a couple of decades ago, like the two famous algorithms developed by Peter Shor and Lov Grover [1, 2], for many years there was no immediate threat of such attacks. That was simply because the technology of quantum computation was not mature enough to produce a quantum computer capable of surpassing the $100$ qubit barrier, let alone one with the qubit capacity required to actually break these encryption protocols.
However, after the monumental breakthrough of IBM's new quantum computers, which managed to surpass the $100$ qubit barrier [3] last year, followed a year later by their most recent $433$ qubit quantum processor named Osprey [4], which quadrupled the previous processor's qubit capacity, the landscape has changed dramatically. It is now clear that we are much closer to a viable, fully working quantum computer than originally anticipated. Thus, the need has arisen to immediately upgrade our security protocols, before their vulnerability becomes a critical threat to our communication infrastructure. This inherent vulnerability of the current protocols has led to a plethora of initiatives from various countries and organizations, all aiming at establishing new and novel approaches for solving the ever-more critical problem of secure communication [5]. Among the various attempts to provide a viable solution for this problem, two new scientific fields emerged, namely the field of post-quantum or quantum-resistant cryptography and the field of quantum cryptography. Despite the confusing similarities in their names, these fields attempt to solve the problem by implementing radically different strategies. Specifically, the field of quantum-resistant cryptography tries to maintain the philosophy of the previous era, by still relying on the use of mathematical problems, albeit of a more complex nature, such as supersingular elliptic curve isogenies and supersingular isogeny graphs, systems of multivariate equations, and lattice-based cryptography. On the other hand, the field of quantum cryptography tries to establish security by relying on the fundamental principles of quantum mechanics, such as the monogamy of entangled particles, the no-cloning theorem and nonlocality.
For the time being, due to the inherent restrictions of our current technology, the more prominent of the aforementioned fields is post-quantum cryptography [6, 7, 8, 9], on account of the fact that the successful implementation of such protocols does not require any changes to our current infrastructure. In spite of all that, the field of quantum cryptography remains a crucial research topic, as it is widely regarded as the long-term future of cryptography. This is due to the overwhelming advantages of the fundamental properties of quantum mechanics, which not only allow us to protect our information, but also to efficiently transmit it using entangled states, as first proposed by Artur Ekert [10]. In his E91 quantum key distribution (QKD for short) protocol, Ekert proved that key distribution is possible with the use of EPR pairs. After this landmark discovery by Ekert, the field of quantum cryptography witnessed rapid growth in the development of entanglement-based QKD protocols [11, 12, 13, 14, 15, 16], proving the technique's importance, while simultaneously prompting the research community to expand the field by experimenting with other cryptographic primitives like quantum secret sharing.
The cryptographic primitive of secret sharing, or secret splitting, in its more elementary form can be described as a game between two spatially separated groups. The first group consists of a single player, who wants to share a secret message with the other group. The second group consists of the rest of the players, who will receive the secret message split into multiple pieces. By itself, each piece does not contain any valuable information, but, if all the players of the second group were to combine their pieces, the secret message would be revealed. Understandably, one might mistakenly regard this cryptographic primitive as nothing more than a key distribution protocol scaled up to accommodate more than two people. However, the step of dividing the secret message into multiple pieces actually offers a crucial advantage: it provides security against malicious individuals who have managed to infiltrate the second group with the goal of covertly acquiring the secret message, by forcing every player, honest or dishonest, to participate in the process that unlocks the secret message (see the recent [17] for more details).
In the real world, secret sharing schemes are vital for providing security to new and emerging technologies, such as the fields of cloud computing, cloud storage [18, 19] or blockchain technologies [20]. These technologies require multiple parties to communicate with each other, accommodating the possibility that one or more of them might be malicious users, who want to take advantage of the system. Therefore, the research on quantum secret sharing has come a long way from the simple proof of concept by Hillery et al. [21], and Cleve et al. [22], who pioneered this field. All this progress has led to numerous research proposals and schemes that are continuously expanding the field to this day [23, 24, 25, 26, 27, 28]. At the same time, multiple experimental demonstrations involving real-world scenarios have been attempted by the researchers in [29, 30, 31, 32], and even schemes for non-binary quantum secret sharing protocols that rely on the use of qudits instead of qubits [33, 34, 35, 36] have been proposed.
This work tackles a problem that could be considered the inverse of quantum secret sharing. In our setting, there is a network of agents, all distributed in different locations, and all in possession of a small secret. All these small secrets must be combined together if one is to reveal the big secret. So, the agents want to transmit their secrets to the spymaster, who is located elsewhere. Our agents operate on a need-to-know basis; that is, they avoid any communication among themselves and only report directly to the spymaster. Their task is complicated by the need to fulfill their mission securely, as adversaries might try to intercept any message and discover the big secret. Thus, going quantum seems the way to go. We refer to this problem as Quantum Secret Aggregation, and we give a protocol that solves this task in the form of a game. The use of games does not diminish the seriousness or importance of the problem, but, at least we hope so, makes its presentation more entertaining and memorable. Certainly this is not the first time games, coin tossing, etc. have been used in quantum cryptography (see [37] and recently [16, 17]). Quantum games have captured the interest of many researchers since their inception in 1999 [38], [39]. In many situations, quantum strategies seem to outperform classical ones [40, 41, 42]. This holds not only for iconic classical games like the Prisoners' Dilemma [39], [43], but also for abstract quantum games [44]. As a matter of fact, there is no inherent restriction on the type of classical system that can be transformed into a quantum analogue, as even political institutions may be amenable to this process [45]. In closing this short overview, it is perhaps noteworthy that many games have been studied via unconventional means outside the quantum realm. The realization that nature computes has also been applied to bio-molecular processes (see for instance [46, 47, 48]).
It should therefore come as no surprise that, in order to improve classical strategies in many famous games, including the Prisoners' Dilemma, tools and techniques from the biological domain have been utilized [49, 50, 51, 52].
Contribution. This paper poses and solves a problem in the general context of quantum cryptographic protocols. We refer to it as the Quantum Secret Aggregation problem because it involves aggregating many small secrets in order to reveal a big secret. The underlying setting visualizes a completely distributed network of agents, each in possession of a small secret, aiming to send their secret to the spatially separated Alice, who is our famous spymaster. The operation must be completed in the most secure way possible, as there are eavesdroppers eager to intercept their communications and steal the big secret. To address this problem, we present the Quantum Secret Aggregation protocol as a game. The solution outlined is completely general, as the number of players can be scaled arbitrarily as needed and all $n$ players are assumed to reside in different positions in space. Obviously, the solution still holds even if a subset of the players are located in the same place. Security is enforced through the integral role of entanglement in the protocol. The use of maximally entangled GHZ tuples, symmetrically distributed among Alice and all her spies, not only makes possible the secure transmission of the small partial secrets from the agents to Alice, but also guarantees the security of the protocol, by making it statistically improbable for the notorious eavesdropper Eve to obtain the big secret.
§.§ Organization
The structure of this paper is the following. Section <ref> provides an introduction to the subject along with some relevant references. Section <ref> is a brief exposition on GHZ states and the phenomenon of entanglement. Section <ref> rigorously defines the problem at hand, while Section <ref> explains in detail the Quantum Secret Aggregation protocol. Section <ref> presents a small example of the protocol executed using Qiskit. Section <ref> is devoted to the security analysis of a number of possible attacks from Eve, and, finally, Section <ref> contains a summary and a brief discussion of some of the finer points of this protocol.
§ A BRIEF REMINDER ABOUT GHZ STATES
Nowadays, most quantum protocols designed to securely transmit keys, secrets, and information in general rely on the power of entanglement. Entanglement is a hallmark property of the quantum world. As this phenomenon is absent from the everyday world, it is considered counterintuitive by some. However, from the point of view of quantum cryptography and quantum computation, this strange behavior is seen as a precious resource, which is the key to achieving quantum teleportation and unconditionally secure information transmission.
Thus, it comes as no surprise that this work too utilizes quantum entanglement in a critical manner, in order to implement the proposed protocol of quantum secret aggregation. Specifically, our protocol relies on maximally entangled $n$-tuples of qubits, i.e., qubits that are in what the literature refers to as the $\ket{ GHZ_{n} }$ state. Present-day quantum computers can produce arbitrary GHZ states using various quantum circuits. A methodology for constructing such efficient circuits is given in [53]. The resulting quantum circuits are efficient in the sense that they require $\lg n$ steps to generate the $\ket{ GHZ_{n} }$ state. One such circuit that generates the $\ket{ GHZ_{5} }$ state using the IBM Quantum Composer [54] is shown in Figure <ref>. The dotted lines are a helpful visualization that allows us to distinguish “time slices” within which the CNOT gates are applied in parallel. Figure <ref>, which is also from the IBM Quantum Composer, shows the state vector description of the $\ket{ GHZ_{5} }$ state.
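To make the $\lg n$ doubling pattern concrete, the following toy statevector bookkeeping (plain Python dictionaries rather than Qiskit, so it runs anywhere) builds $\ket{ GHZ_n }$ with one Hadamard followed by CNOT layers that double the entangled block at each time slice; the specific gate schedule is our illustrative reconstruction, not necessarily the exact circuit of [53].

```python
from math import sqrt

def apply_h(state, q):
    # Hadamard on qubit q of a {bitstring: amplitude} statevector
    out = {}
    for bits, amp in state.items():
        for new_bit in "01":
            sign = -1.0 if bits[q] == "1" and new_bit == "1" else 1.0
            key = bits[:q] + new_bit + bits[q + 1:]
            out[key] = out.get(key, 0.0) + sign * amp / sqrt(2)
    return {b: a for b, a in out.items() if abs(a) > 1e-12}

def apply_cnot(state, ctrl, tgt):
    # flip the target bit wherever the control bit is 1
    out = {}
    for bits, amp in state.items():
        if bits[ctrl] == "1":
            bits = bits[:tgt] + ("1" if bits[tgt] == "0" else "0") + bits[tgt + 1:]
        out[bits] = out.get(bits, 0.0) + amp
    return out

def ghz(n):
    # H on qubit 0, then CNOT layers that double the entangled block:
    # 1 + ceil(lg n) time slices in total
    state = apply_h({"0" * n: 1.0}, 0)
    done = 1
    while done < n:
        for c in range(min(done, n - done)):   # these CNOTs act in parallel
            state = apply_cnot(state, c, done + c)
        done = min(2 * done, n)
    return state

state = ghz(5)
assert sorted(state) == ["00000", "11111"]
assert abs(state["11111"] - 1 / sqrt(2)) < 1e-12
```

For $n=5$ this uses three CNOT layers, matching the $\lceil\lg 5\rceil$ "time slices" visible between the dotted lines of the Quantum Composer circuit.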
Let us assume that we are given a composite quantum system made up of $n$ individual subsystems, where each subsystem contains just a single qubit. As explained above, it is possible to entangle all $n$ qubits of the composite system in the $\ket{ GHZ_n }$ state. In such a case, the mathematical description of the state of the composite system is the following:
\begin{align} \label{eq:Extended General GHZ_n State}
\ket{ GHZ_{n} }
=
\frac{ 1 }{ \sqrt{2} }
\left( \ket{0}_{ n - 1 } \ket{0}_{ n - 2 } \dots \ket{0}_{ 0 } + \ket{1}_{ n - 1 } \ket{1}_{ n - 2 } \dots \ket{1}_{ 0 } \right)
\ ,
\end{align}
where the subscript $i, \ 0 \leq i \leq n - 1,$ is used to indicate the qubit belonging to subsystem $i$.
Figure: The above (efficient) quantum circuit in Qiskit can entangle 5 qubits in the $\ket{ GHZ_5 } = \frac{ \ket{0} \ket{0} \ket{0} \ket{0} \ket{0} + \ket{1} \ket{1} \ket{1} \ket{1} \ket{1} }{\sqrt{2}}$ state. Following the same pattern, we can construct efficient quantum circuits that entangle $n$ qubits in the $\ket{ GHZ_n }$ state.
Figure: This figure depicts the state vector description of 5 qubits that are entangled in the $\ket{ GHZ_5 }$ state.
It is expedient and necessary to generalize the above setting to the case where each individual subsystem is made of a quantum register and not just a single qubit. In this more general situation, each of the $n$ subsystems is a quantum register $r_i$, where $0 \leq i \leq n - 1,$ that has $m$ qubits and the corresponding qubits of all the $n$ registers are entangled in the $\ket{ GHZ_{n} }$ state. This means that all the qubits in position $j, \ 0 \leq j \leq m - 1,$ of the registers $r_0, r_1 \dots, r_{ n - 1 }$ are entangled in the $\ket{ GHZ_{n} }$ state. Figure <ref> provides a visual depiction of this situation, where the corresponding qubits comprising the $\ket{ GHZ_{n} }$ $n$-tuple are drawn with the same color. As expected, the state of the composite system is designated by $\ket{ GHZ_{n} }^{\otimes m}$ and its mathematical description is
\begin{align} \label{eq:m Extended General GHZ_n States}
\ket{ GHZ_{n} }^{\otimes m}
=
\frac{1}{ \sqrt{2^m} }
\sum_{\mathbf{x} \in \{ 0, 1 \}^m}
\ket{\mathbf{x}}_{ n - 1 } \dots \ket{\mathbf{x}}_{ 0 }
\ ,
\end{align}
where ${\mathbf{x} \in \{ 0, 1 \}^m}$ ranges through all the $2^{m}$ basis kets.
\begin{align} \label{eq: m+1 GHZ_n states}
\ket{GHZ_{n}}^{\otimes m + 1}
&=
\ket{GHZ_{n}}^{\otimes m}
\otimes
\ket{GHZ_{n}}
\nonumber \\
&=
\frac{1}{ \sqrt{2^m} }
\sum_{\mathbf{x} \in \{ 0, 1 \}^m}
\ket{\mathbf{x}}_{ n - 1 } \dots \ket{\mathbf{x}}_{ 0 }
\otimes
\frac{ 1 }{ \sqrt{2} }
\left( \ket{0}_{ n - 1 } \ket{0}_{ n - 2 } \dots \ket{0}_{ 0 } + \ket{1}_{ n - 1 } \ket{1}_{ n - 2 } \dots \ket{1}_{ 0 } \right)
\nonumber \\
&=
\frac{1}{ \sqrt{2^{m + 1}} }
\sum_{\mathbf{x} \in \{ 0, 1 \}^m}
\left(
\ket{ \mathbf{x} 0 }_{ n - 1 } \dots \ket{ \mathbf{x} 0 }_{ 0 }
+
\ket{ \mathbf{x} 1 }_{ n - 1 } \dots \ket{ \mathbf{x} 1 }_{ 0 }
\right)
\nonumber \\
&=
\frac{1}{ \sqrt{2^{m + 1}} }
\sum_{\mathbf{x} \in \{ 0, 1 \}^{m + 1}}
\ket{ \mathbf{x} }_{ n - 1 } \dots \ket{ \mathbf{x} }_{ 0 }
\ .
\end{align}
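The closed form for $\ket{ GHZ_{n} }^{\otimes m}$ can be checked numerically for small $n$ and $m$. The sketch below is a toy bookkeeping of ours (amplitudes indexed by the tuple of register contents) that builds the state one entangled slice at a time and verifies eq. (2):

```python
from math import sqrt, isclose

def ghz_registers(n, m):
    # state as {(r_{n-1}, ..., r_0): amplitude}; start with n empty registers
    state = {("",) * n: 1.0}
    for _ in range(m):
        # each GHZ_n slice appends the same bit to all n registers, in
        # superposition over its two branches |0...0> and |1...1>
        nxt = {}
        for regs, amp in state.items():
            for bit in "01":
                key = tuple(r + bit for r in regs)
                nxt[key] = nxt.get(key, 0.0) + amp / sqrt(2)
        state = nxt
    return state

state = ghz_registers(n=3, m=2)
# eq. (2): a uniform superposition over x in {0,1}^m with every register equal to x
assert len(state) == 2 ** 2
assert all(len(set(regs)) == 1 for regs in state)        # all registers agree
assert all(isclose(a, 1 / 2) for a in state.values())    # amplitude 2^{-m/2} = 1/2
```

Measuring any one register in the computational basis therefore collapses all $n$ registers to the same string $\mathbf{x}$, which is the correlation the protocol exploits.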
This figure visualizes the situation where each of the $n$ subsystems is a quantum register $r_i, \ 0 \leq i \leq n - 1,$ that has $m$ qubits, and the corresponding qubits in all the registers, drawn above with the same color, are entangled in the $\ket{ GHZ_{n} }$ state.
Equation (<ref>) can be proved by an easy induction on $m$. For $m = 1$, equation (<ref>) reduces to (<ref>), and trivially holds. Let us assume that, according to the induction hypothesis, (<ref>) holds for $m$. We shall prove that (<ref>) also holds for $m + 1$. Indeed, by invoking (<ref>) and (<ref>), the computation shown below completes the proof by induction.
§ THE PROBLEM OF QUANTUM SECRET AGGREGATION
In the current section we rigorously define the problem of Quantum Secret Aggregation, simply referred to as QSA from now on. To the best of our knowledge, this is the first time that this problem is posed and solved in the relevant literature. Informally, QSA can be considered as the inverse of Quantum Secret Sharing (QSS for short). The latter focuses on how a single entity (usually called Alice) can securely transmit a secret to a group of two or more agents. Typically in QSS Alice is in a different location from her agents; however the agents are assumed to be in the same location, which implies that they can readily exchange information. In contrast, in QSA we assume that Alice and her agents are all in different locations, and this time it is the agents that want to securely transmit a part of the secret to Alice. Each agent has only a small part of the secret, and no two agents possess secrets with common fragments. Alice requires all the parts in order to decipher the secret.
Let us assume that the following hold.
* There are $n - 1$ spatially separated agents Agent$_0$, …, Agent$_{n - 2}$. Each agent possesses a partial secret key $\mathbf{p}_i, \ 0 \leq i \leq n - 2$.
* Every partial secret key is unique and is known only to the corresponding agent. Furthermore, there is no information redundancy among the partial secret keys, i.e., none of them can be inferred from the rest.
* Every agent wants to securely send her secret key to the spymaster Alice, who is also in an entirely different location.
* Alice wants to discover the complete secret key, denoted by $\mathbf{s}$. This can only be done by combining all the partial secret keys $\mathbf{p}_0, \dots, \mathbf{p}_{n - 2}$.
* The length of the complete secret key, denoted by $m$, is the sum of the lengths of all the partial secret keys: $m = | \mathbf{p}_0 | + \dots + | \mathbf{p}_{n - 2} |$.
* The whole operation must be executed with utmost secrecy, due to the presence of the eavesdropper Eve.
The Quantum Secret Aggregation problem asks how to establish a protocol that will guarantee that Alice and her agents achieve their goal.
In view of the fact that Agent$_i$ possesses the partial key $\mathbf{p}_i, \ 0 \leq i \leq n - 2$, we can make the following observations.
* Implicit in the definition of the problem is the assumption that Alice has assigned a specific ordering to her ring of agents and all her agents are aware of this ordering. This simply means that not only Alice but also all the agents know who Agent$_0$, …, Agent$_{n - 2}$ are.
* Definition <ref> is general enough to allow for partial secret keys of different length, which is more realistic.
* Although neither Alice nor her agents know the partial secret keys (except their own), they all know their lengths $| \mathbf{p}_0 |, \dots, | \mathbf{p}_{n - 2} |$. This does not compromise the secrecy factor because knowing the length of a secret key does not reveal its contents.
From an algorithmic perspective it is convenient to have a standard length for all partial secret keys. This prompts the following definition.
Each Agent$_i, \ 0 \leq i \leq n - 2,$ constructs from her partial secret key $\mathbf{p}_i$ her extended partial secret key $\mathbf{s}_i$, which is defined as
\begin{align} \label{eq:Extended Partial Key}
\mathbf{s}_i
=
\underbrace{ 0 \ \cdots \ 0 }_{k \rm\ times}
\ \mathbf{p}_i \
\underbrace{ 0 \ \cdots \ 0 }_{l \rm\ times}
\ ,
\end{align}
where $k = | \mathbf{p}_{n - 2} | + \dots + | \mathbf{p}_{i + 1} |$ and $l = | \mathbf{p}_{i - 1} | + \dots + | \mathbf{p}_0 |$.
This simple construction enforces uniformity among the agents, since they all end up having extended keys of length $m$, even though their partial keys will in general be of different length, and greatly simplifies the construction of the quantum circuit. Additionally, it enables us to derive the next simple and elegant formula connecting the complete secret key $\mathbf{s}$ with the
extended partial secret keys $\mathbf{s}_{0}, \dots, \mathbf{s}_{n - 2}$:
\begin{align} \label{eq:Complete Secret as Mod $2$ Sum of Partial Keys}
\mathbf{s}
=
\mathbf{s}_{0} \oplus \dots \oplus \mathbf{s}_{n - 2}
\ .
\end{align}
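To illustrate the two formulas above, the short Python sketch below (with hypothetical partial keys; the code is not from the paper) builds the extended partial secret keys of equation (Extended Partial Key) and checks that their modulo-2 sum reconstructs the complete secret, which here is the concatenation $\mathbf{p}_{n - 2} \dots \mathbf{p}_0$.

```python
from functools import reduce

def extend(partial_keys, i):
    """Zero-pad p_i: k zeros on the left, l zeros on the right."""
    k = sum(len(p) for p in partial_keys[i + 1:])  # |p_{n-2}| + ... + |p_{i+1}|
    l = sum(len(p) for p in partial_keys[:i])      # |p_{i-1}| + ... + |p_0|
    return "0" * k + partial_keys[i] + "0" * l

def xor_bits(a, b):
    return "".join(str(int(u) ^ int(v)) for u, v in zip(a, b))

# Hypothetical partial keys of Agent_0, Agent_1, Agent_2 (different lengths).
p = ["101", "0110", "11"]
s_ext = [extend(p, i) for i in range(len(p))]   # all of length m = 9
s = reduce(xor_bits, s_ext)
assert s == p[2] + p[1] + p[0]                  # s = "110110101"
```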
§ THE QUANTUM SECRET AGGREGATION PROTOCOL
We now present the proposed QSA protocol as a game, aptly named the QSA game. In this game, there are $n, n \geq 3,$ players, who can be conceptually divided into two groups. Alice alone constitutes the first group, which is the recipient of the secret information from distant sources. These sources are the $n - 1$ agents in the spy ring that constitute the second group. The proposed protocol is general enough to accommodate an arbitrary number of agents. To thoroughly describe the QSA game, we carefully distinguish the phases of its progression.
The above figure depicts the situation where Alice herself initiates the protocol by creating and sending through the quantum channel to each of the $n - 1$ spatially distributed agents in her spy network $m$ qubits entangled in the $\ket{ GHZ_{n} }$ state.
§.§ Initialization phase through the quantum channel
This game utilizes entanglement. As a matter of fact, its successful completion relies on the use of entanglement. So, it is necessary, before the main part of the protocol commences, to create the required number, which is denoted by $m$, of $n$-tuples of qubits entangled in the $\ket{ GHZ_{n} }$ state. Such entangled tuples can be produced by a contemporary quantum computer, for instance, using a quantum circuit like the one shown in Figure <ref>. These $\ket{ GHZ_{n}}$ tuples can be produced by Alice or by another trusted source, which can even be a satellite [55]. Figure <ref> depicts the former situation. We note however that our protocol does not depend on which source actually creates the entangled tuples. The crucial requirement is that they are produced and sent through the quantum channel, so that they may populate the Input Registers of Alice and all her agents.
§.§ Input phase in the local quantum circuits
The purpose of the QSA game from Alice's point of view is to aggregate all the partial secret keys $\mathbf{p}_0, \dots, \mathbf{p}_{n - 2}$ from her $n - 1$ agents, in order to reveal the complete secret key $\mathbf{s}$. All the $n - 1$ partial keys are absolutely necessary for this, as they are distinct and nonoverlapping, i.e., there is no information redundancy among them. From the perspective of the individual agents, the operation is strictly on a need-to-know basis, which means that, after the completion of the protocol, they gain no additional information that they did not already know.
The above figure shows the quantum circuits employed by Alice and her agents. We point out that these circuits are spatially separated but, due to entanglement, strongly correlated, forming a composite system. The state vectors $\ket{\psi_{0}}$, $\ket{\psi_{1}}$, $\ket{\psi_{2}}$, $\ket{\psi_{3}}$ and $\ket{\psi_{4}}$ describe the evolution of the composite system.
The QSA protocol successfully accomplishes this feat by employing the quantum circuit shown in Figure <ref>. There, we show the individual quantum circuits employed by Alice and her $n - 1$ agents Agent$_0$, …, Agent$_{n - 2}$. Table <ref> explains the abbreviations that are used in the quantum circuit depicted in Figure <ref>. It is important to emphasize that this is a distributed quantum circuit made up of $n$ individual, spatially separated and private circuits. It is the phenomenon of entanglement that strongly correlates the individual subcircuits, forming in effect a composite distributed circuit. The state vectors $\ket{\psi_{0}}$, $\ket{\psi_{1}}$, $\ket{\psi_{2}}$, $\ket{\psi_{3}}$ and $\ket{\psi_{4}}$ describe the evolution of the composite system. The $n$ individual subcircuits have obvious similarities, and some important differences, as summarized in Table <ref>. Let us also clarify that for consistency we follow the Qiskit [56] convention in the ordering of qubits, by placing the least significant at the top and the most significant at the bottom.
In our subsequent analytical mathematical description of the QSA game, we use the typical convention of writing the contents of quantum registers in boldface, e.g., $\ket{ \mathbf{x} } = \ket{ x_{ m - 1 } } \dots \ket{ x_{0} }$, for some $m \geq 1$. Moreover, apart from equation (<ref>), we will make use of the two other well-known formulas given below (see any standard textbook, such as [57] or [58]).
\begin{align}
H \ket{1}
&=
\frac{1}{\sqrt{2}}
\left( \ket{0} - \ket{1} \right)
=
\ket{-}
\ ,
\label{eq:Ket - Definition}
\\
H^{\otimes m} \ket{ \mathbf{x} }
&=
\frac{1}{\sqrt{2^m}}
\sum_{ \mathbf{z} \in \{ 0, 1 \}^m }
(-1)^{ \mathbf{z \cdot x} } \ket{ \mathbf{z} }
\ ,
\label{eq:Hadamard m-Fold Ket x}
\end{align}
where $\ket{ \mathbf{z} } = \ket{ z_{ m - 1 } } \dots \ket{ z_{0} }$ and $\mathbf{z \cdot x}$ is the inner product modulo $2$, defined as
\begin{align} \label{eq:Inner Product Modulo $2$}
\mathbf{z \cdot x} = z_{ m - 1 } x_{ m - 1 } \oplus \dots \oplus z_{0} x_{0}
\ .
\end{align}
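The inner product modulo $2$ can be sketched in a few lines of Python (an illustration, not code from the paper):

```python
def dot_mod2(z, x):
    """Inner product modulo 2: z_{m-1} x_{m-1} XOR ... XOR z_0 x_0."""
    assert len(z) == len(x)
    acc = 0
    for zb, xb in zip(z, x):
        acc ^= zb & xb
    return acc

# (1,0,1,1) . (1,1,0,1) = 1 XOR 0 XOR 0 XOR 1 = 0
assert dot_mod2((1, 0, 1, 1), (1, 1, 0, 1)) == 0
assert dot_mod2((1, 0, 0, 0), (1, 1, 0, 1)) == 1
```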
This table contains the notations and abbreviations that are used in Figure <ref>.
Notations and Abbreviations

* $n$: Number of players (Alice plus her $n - 1$ agents)
* $m$: Length of the secret key $\mathbf{s}$, equal to the number of qubits in the Input Registers of Alice and every one of her agents
* AIR: Alice's $m$-qubit Input Register
* IR$_{ i }$: The $m$-qubit Input Register of Agent$_i, \ 0 \leq i \leq n - 2$
* OR$_{ i }$: The single-qubit Output Register of Agent$_i, \ 0 \leq i \leq n - 2$
Differences and similarities among the $n$ subcircuits depicted in Figure <ref>.
Differences and Similarities

Differences:

* Alice's circuit lacks an Output Register
* Alice does not apply any function
* Every agent applies a different function $f_{i}$

Similarities:

* All circuits contain an $m$-qubit Input Register
* All agents' circuits contain an Output Register
* All Output Registers are initialized to $\ket{1}$
* All circuits apply the $m$-fold Hadamard transform on their Input Register prior to measurement
The initial state $\ket{ \psi_0 }$ of the circuit shown in Figure <ref> is given by
\begin{align} \label{eq:SQA Phase 0}
\ket{ \psi_0 }
=
\frac{1}{ \sqrt{2^m} }
\sum_{ \mathbf{x} \in \{ 0, 1 \}^m }
\ket{\mathbf{x}}_{ A }
\ket{1}_{ n - 2 }
\ket{\mathbf{x}}_{ n - 2 }
\dots
\ket{1}_{ 0 }
\ket{\mathbf{x}}_{ 0 }
\ .
\end{align}
In equation (<ref>), $\ket{\mathbf{x}}_{A}$ designates the contents of Alice's Input Register, $\ket{1}_{i}, \ 0 \leq i \leq n - 2,$ is the state of the agents' Output Registers, and $\ket{\mathbf{x}}_{i}, 0 \leq i \leq n - 2,$ denotes the contents of the Input Registers of the $n - 1$ agents. In what follows, the subscripts $A$ and $0, 1, \dots, n - 2$ are utilized in an effort to distinguish between the local registers of Alice and Agent$_0$, …, Agent$_{ n - 2 }$, respectively.
The first phase of the protocol begins when all the agents apply the Hadamard transform to their respective Output Register, driving the system to the next state $\ket{ \psi_1 }$
\begin{align} \label{eq:SQA Phase 1}
\ket{\psi_1}
=
\frac{1}{ \sqrt{2^m} }
\sum_{ \mathbf{x} \in \{ 0, 1 \}^m }
\ket{\mathbf{x}}_{ A }
\ket{-}_{ n - 2 }
\ket{\mathbf{x}}_{ n - 2 }
\dots
\ket{-}_{ 0 }
\ket{\mathbf{x}}_{ 0 }
\ .
\end{align}
At this point each of the $n - 1$ agents transmits her secret. Since this is the most important part of the protocol, we explain in detail how this task is implemented. Agent$_{i}, \ 0 \leq i \leq n - 2,$ defines a function that is based on her extended partial secret key $\mathbf{s}_i$, namely
\begin{align} \label{eq:Agent's i Function}
f_{ i } ( \mathbf{x} )
=
\mathbf{s}_i \cdot \mathbf{x} \ , \ 0 \leq i \leq n - 2
\ .
\end{align}
Agent$_{ i }, \ 0 \leq i \leq n - 2,$ uses function $f_{ i }$ to construct the unitary transform $U_{ f_{ i } }$, which, as is typical of many quantum algorithms, acts on both Output and Input Registers, producing the following output:
\begin{align} \label{eq:Agent's i Oracle - I}
U_{ f_{ i } } :
\ket{ y } \ket{ \mathbf{x} }
\rightarrow
\ket{ y \oplus f_{ i }( \mathbf{x} ) } \ket{ \mathbf{x} }
\ .
\end{align}
Taking into account (<ref>), which asserts that for every agent the state of the Output Register is $\ket{-}$, and (<ref>), formula (<ref>) becomes
\begin{align} \label{eq:Agent's i Oracle - II}
U_{ f_{ i } } :
\ket{ - } \ket{ \mathbf{x} }
\rightarrow
( - 1 )^{ \mathbf{s}_{ i } \cdot \mathbf{x} }
\ket{ - } \ket{ \mathbf{x} }
\ .
\end{align}
Hence, the cumulative action of the unitary transforms $U_{f_{i}}, \ 0 \leq i \leq n - 2,$ sends the quantum circuit to the next state:
\begin{align} \label{eq:SQA Phase 2}
\ket{\psi_2}
&=
\frac{ 1 }{ \sqrt{ 2^m } }
\sum_{ \mathbf{x} \in \{ 0, 1 \}^m }
\ket{\mathbf{x}}_{ A }
( - 1 )^{ \mathbf{s}_{ n - 2 } \cdot \mathbf{x} }
\ket{-}_{ n - 2 }
\ket{\mathbf{x}}_{ n - 2 }
\dots
( - 1 )^{ \mathbf{s}_{ 0 } \cdot \mathbf{x} }
\ket{-}_{ 0 }
\ket{\mathbf{x}}_{ 0 }
\nonumber \\
&=
\frac{ 1 }{ \sqrt{ 2^m } }
\sum_{ \mathbf{x} \in \{ 0, 1 \}^m }
( - 1 )^{ ( \mathbf{s}_{ n - 2 } \oplus \dots \oplus \mathbf{s}_{ 0 } ) \cdot \mathbf{x} }
\ket{\mathbf{x}}_{ A }
\ket{-}_{ n - 2 }
\ket{\mathbf{x}}_{ n - 2 }
\dots
\ket{-}_{ 0 }
\ket{\mathbf{x}}_{ 0 }
\nonumber \\
&\overset{(\ref{eq:Complete Secret as Mod $2$ Sum of Partial Keys})}{=}
\frac{ 1 }{ \sqrt{ 2^m } }
\sum_{ \mathbf{x} \in \{ 0, 1 \}^m }
( - 1 )^{ \mathbf{s} \cdot \mathbf{x} }
\ket{\mathbf{x}}_{ A }
\ket{-}_{ n - 2 }
\ket{\mathbf{x}}_{ n - 2 }
\dots
\ket{-}_{ 0 }
\ket{\mathbf{x}}_{ 0 }
\ .
\end{align}
At this point, the complete secret key is implicitly encoded in the state of the circuit. It remains to be deciphered by Alice, as explained in the next subsection.
§.§ Retrieval phase
Subsequently, Alice and all her spies apply the $m$-fold Hadamard transformation to their Input Registers. The next state of the circuit is shown below. Please note that henceforth, and in order to make the remaining formulas more readable and understandable, we have chosen to omit the Output Registers; they have served their intended purpose and will no longer be of any use.
\begin{align} \label{eq:SQA Phase 3 - I}
\ket{\psi_3}
&=
\frac{ 1 }{ \sqrt{ 2^m } }
\sum_{ \mathbf{x} \in \{ 0, 1 \}^m }
( - 1 )^{ \mathbf{s} \cdot \mathbf{x} }
H^{ \otimes m } \ket{ \mathbf{x} }_{A}
H^{ \otimes m } \ket{\mathbf{x}}_{ n - 2 }
\dots
H^{ \otimes m } \ket{\mathbf{x}}_{0}
\nonumber \\
&\overset{(\ref{eq:Hadamard m-Fold Ket x})}{=}
\frac{ 1 }{ \sqrt{ 2^m } }
\sum_{ \mathbf{x} \in \{ 0, 1 \}^m }
( - 1 )^{ \mathbf{s} \cdot \mathbf{x} }
\left(
\frac{ 1 }{ \sqrt{ 2^m } }
\sum_{ \mathbf{a} \in \{ 0, 1 \}^m }
( - 1 )^{ \mathbf{a} \cdot \mathbf{x} }
\ket{ \mathbf{a} }_{A}
\right)
\left(
\frac{ 1 }{ \sqrt{ 2^m } }
\sum_{ \mathbf{y}_{ n - 2 } \in \{ 0, 1 \}^m }
( - 1 )^{ \mathbf{y}_{ n - 2 } \cdot \mathbf{x} }
\ket{ \mathbf{y}_{ n - 2 } }_{ n - 2 }
\right)
\dots
\left(
\frac{ 1 }{ \sqrt{ 2^m } }
\sum_{ \mathbf{y}_{ 0 } \in \{ 0, 1 \}^m }
( - 1 )^{ \mathbf{y}_{ 0 } \cdot \mathbf{x} }
\ket{ \mathbf{y}_{ 0 } }_{0}
\right)
\nonumber \\
&=
\frac{ 1 }{ ( \sqrt{ 2^m } )^{ n + 1 } }
\sum_{ \mathbf{x} \in \{ 0, 1 \}^m }
\sum_{ \mathbf{a} \in \{ 0, 1 \}^m }
\sum_{ \mathbf{y}_{ n - 2 } \in \{ 0, 1 \}^m }
\dots
\sum_{ \mathbf{y}_{ 0 } \in \{ 0, 1 \}^m }
( - 1 )^{ ( \mathbf{s} \oplus \mathbf{a} \oplus \mathbf{y}_{ n - 2 } \oplus \cdots \oplus \mathbf{y}_{ 0 } ) \cdot \mathbf{x} }
\ket{ \mathbf{a} }_{A}
\ket{ \mathbf{y}_{ n - 2 } }_{ n - 2 }
\dots
\ket{ \mathbf{y}_{ 0 } }_{0}
\ .
\end{align}
The above formula looks complicated but it can be simplified by invoking an important property of the inner product modulo $2$ operation. If $\ket{ \mathbf{c} } = \ket{ c_{ m - 1 } } \dots \ket{ c_{0} } \neq \ket{ 0 }^{\otimes m}$ is a fixed basis ket, then for precisely half of the basis kets $\ket{ \mathbf{x} }$, $\mathbf{c} \cdot \mathbf{x}$ will be $0$ and for the remaining half, $\mathbf{c} \cdot \mathbf{x}$ will be $1$. In the special case where $\ket{ \mathbf{c} } = \ket{ 0 }^{\otimes m}$, then for every basis ket $\ket{ \mathbf{x} }$, $\ \mathbf{c} \cdot \mathbf{x} = 0$. Applying this property to equation (<ref>), we conclude that if
\begin{align} \label{eq:Application of the Inner Product Modulo 2 Property - I}
\mathbf{a} \oplus \mathbf{y}_{ n - 2 } \oplus \cdots \oplus \mathbf{y}_{ 0 } = \mathbf{s} \ ,
\end{align}
then, for each $\mathbf{x} \in \{ 0, 1 \}^m$, the expression $( - 1 )^{ ( \mathbf{s} \oplus \mathbf{a} \oplus \mathbf{y}_{ n - 2 } \oplus \cdots \oplus \mathbf{y}_{ 0 } ) \cdot \mathbf{x} }$ becomes $( - 1 )^{0} = 1$. Therefore, the sum $\sum_{ \mathbf{x} \in \{ 0, 1 \}^m } ( - 1 )^{ ( \mathbf{s} \oplus \mathbf{a} \oplus \mathbf{y}_{ n - 2 } \oplus \cdots \oplus \mathbf{y}_{ 0 } ) \cdot \mathbf{x} }$ equals $2^{m}$. In contrast, when $\mathbf{a} \oplus \mathbf{y}_{ n - 2 } \oplus \cdots \oplus \mathbf{y}_{ 0 } \neq \mathbf{s}$, the sum reduces to $0$. This is typically written in a compact way as
\begin{align} \label{eq:General Inner Product Modulo 2 Property - II}
\sum_{ \mathbf{x} \in \{ 0, 1 \}^m }
( - 1 )^{ ( \mathbf{s} \oplus \mathbf{a} \oplus \mathbf{y}_{ n - 2 } \oplus \cdots \oplus \mathbf{y}_{ 0 } ) \cdot \mathbf{x} }
=
2^{m} \delta_{ \mathbf{s}, \mathbf{a} \oplus \mathbf{y}_{ n - 2 } \oplus \cdots \oplus \mathbf{y}_{ 0 } }
\ .
\end{align}
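This orthogonality-type identity is easy to verify exhaustively for a small register size. The following Python sketch (illustrative only; $m = 4$ is an arbitrary choice) checks $\sum_{\mathbf{x}} (-1)^{\mathbf{c} \cdot \mathbf{x}} = 2^m \, \delta_{\mathbf{c}, \mathbf{0}}$ for every $\mathbf{c}$:

```python
from itertools import product

def dot_mod2(z, x):
    """Inner product modulo 2 of two equal-length bit tuples."""
    return sum(zb & xb for zb, xb in zip(z, x)) % 2

# Exhaustively verify sum_x (-1)^(c . x) = 2^m * delta_{c,0} for m = 4.
m = 4
for c in product((0, 1), repeat=m):
    total = sum((-1) ** dot_mod2(c, x) for x in product((0, 1), repeat=m))
    assert total == (2 ** m if c == (0,) * m else 0)
```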
In view of (<ref>), we may express state $\ket{\psi_3}$ more succinctly as
\begin{align} \label{eq:SQA Phase 3 - II}
\ket{\psi_3}
=
\frac{ 1 }{ ( \sqrt{ 2^m } )^{ n - 1 } }
\sum_{ \mathbf{a} \in \{ 0, 1 \}^m }
\sum_{ \mathbf{y}_{ n - 2 } \in \{ 0, 1 \}^m }
\dots
\sum_{ \mathbf{y}_{ 0 } \in \{ 0, 1 \}^m }
\delta_{ \mathbf{s}, \mathbf{a} \oplus \mathbf{y}_{ n - 2 } \oplus \cdots \oplus \mathbf{y}_{ 0 } }
\ket{ \mathbf{a} }_{A}
\ket{ \mathbf{y}_{ n - 2 } }_{ n - 2 }
\dots
\ket{ \mathbf{y}_{ 0 } }_{0}
\ .
\end{align}
The fundamental property of the QSA protocol, as encoded in equations (<ref>) and (<ref>), states that the contents of the Input Registers of Alice and all her $n - 1$ agents cannot vary completely freely and independently. The presence of tuples entangled in the $\ket{ GHZ_{n} }$ state during the initialization of the quantum circuit has manifested itself in state $\ket{\psi_3}$ in what we call the Fundamental Correlation Property. This property asserts that in each term of the linear combination described by $\ket{\psi_3}$, the states $\ket{ \mathbf{a} }_{A}, \ket{ \mathbf{y}_{ n - 2 } }_{ n - 2 }, \dots, \ket{ \mathbf{y}_{ 0 } }_{0}$ of the $n$ players' Input Registers are correlated by the following constraint:
\begin{align} \label{eq:QSA Fundamental Correlation Property}
\mathbf{a}
\oplus
\mathbf{y}_{ n - 2 }
\oplus \cdots \oplus
\mathbf{y}_{ 0 } = \mathbf{s}
\ .
\end{align}
The quantum part of the QSA protocol is completed when all players, i.e., Alice and her secret agents Agent$_0$, …, Agent$_{ n - 2 }$ measure their Input Registers, which results in the final state $\ket{\psi_4}$ of the quantum circuit.
\begin{align}
\label{eq:QSA Final Measurement}
\ket{\psi_4}
=
\ket{ \mathbf{a} }_{A}
\ket{ \mathbf{y}_{ n - 2 } }_{ n - 2 }
\dots
\ket{ \mathbf{y}_{ 0 } }_{0}
\ , \quad \text{for some} \quad
\mathbf{a}, \mathbf{y}_{ 0 }, \dots, \mathbf{y}_{ n - 2 } \in \{ 0, 1 \}^m
\ ,
\end{align}
where $\mathbf{a}, \mathbf{y}_{ 0 }, \dots, \mathbf{y}_{ n - 2 }$ are correlated via (<ref>). The unique advantage of entanglement has led to this situation: although the contents of each of the $n$ Input Registers may deceptively seem completely random to each player, in fact they are not. The distributed quantum circuit of Figure <ref>, considered as a composite system, ensures that the final contents of the Input Registers satisfy the Fundamental Correlation Property, as expressed by (<ref>).
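The evolution from $\ket{\psi_0}$ to $\ket{\psi_3}$ can be checked with a small brute-force classical simulation. The Python sketch below (not the paper's implementation; the sizes $m = 2$, two agents, and the extended keys are hypothetical) represents the state as amplitudes over tuples of register values, applies the agents' phases in the kicked-back form $(-1)^{\mathbf{s}_i \cdot \mathbf{x}}$, applies $H^{\otimes m}$ to every register, and verifies that every outcome with nonzero amplitude satisfies the Fundamental Correlation Property.

```python
from itertools import product
from functools import reduce

# Toy sizes and hypothetical extended partial keys (not from the paper).
m = 2
s_ext = [(1, 0), (0, 0)]                 # keys of Agent_0, Agent_1
n_agents = len(s_ext)                    # n = n_agents + 1 = 3 players
s = reduce(lambda u, v: tuple(a ^ b for a, b in zip(u, v)), s_ext)

def dot(z, x):
    return sum(a & b for a, b in zip(z, x)) % 2

bits = list(product((0, 1), repeat=m))

# |psi_0>: GHZ correlations force every register to hold the same x.
# Basis tuple = (Alice's register, Agent_0's register, ..., Agent_{n-2}'s).
amp = {(x,) * (n_agents + 1): (2 ** m) ** -0.5 for x in bits}

# U_{f_i} with the Output Registers in |->: phase kickback (-1)^(s_i . x_i).
amp = {b: a * (-1) ** sum(dot(s_ext[i], b[1 + i]) for i in range(n_agents))
       for b, a in amp.items()}

# m-fold Hadamard on every register: <out|H^m|in> = (-1)^(out . in)/sqrt(2^m).
psi3 = {}
for out in product(bits, repeat=n_agents + 1):
    total = sum(a * reduce(lambda h, pair: h * (-1) ** dot(*pair) / (2 ** m) ** 0.5,
                           zip(out, b), 1.0)
                for b, a in amp.items())
    if abs(total) > 1e-9:
        psi3[out] = total

# Every surviving outcome obeys a XOR y_{n-2} XOR ... XOR y_0 = s.
for a_reg, *ys in psi3:
    combined = reduce(lambda u, v: tuple(p ^ q for p, q in zip(u, v)), [a_reg, *ys])
    assert combined == s
assert len(psi3) == 2 ** (m * n_agents)  # equally weighted correlated outcomes
```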
The above figure visualizes the conclusion of the QSA protocol when the $n - 1$ spatially distributed agents in the spy network send to Alice through the classical channel the final measurements $\mathbf{y}_{ 0 }, \dots, \mathbf{y}_{ n - 2 }$ of their Input Registers.
One final step remains. Agent$_0$, …, Agent$_{ n - 2 }$ must all send the contents of their Input Registers $\mathbf{y}_{ 0 }, \dots, \mathbf{y}_{ n - 2 }$, respectively, to Alice, so as to allow Alice to uncover the big secret $\mathbf{s}$. This can be achieved by communicating through the classical channel. Figure <ref> gives a mnemonic visualization of the conclusion of the QSA protocol.
The use of a public channel by the agents to broadcast their measurements will not compromise the security of the protocol for two reasons. First, the transmitted information $\mathbf{y}_{ i }, \ 0 \leq i \leq n - 2,$ is completely unrelated to the extended partial secret $\mathbf{s}_{ i }$. The latter cannot be recovered from the former. Secondly, in the general case, even if Eve combines all the measurements $\mathbf{y}_{ 0 }, \dots, \mathbf{y}_{ n - 2 },$ she still needs $\mathbf{a}$ in order to discover the secret message $\mathbf{s}$. There is of course the special case where $\mathbf{a} = \mathbf{0}$. In such a case Eve has all the information she needs to find the secret message $\mathbf{s}$, although she might not know it, i.e., she might have no way to know that Alice's measurement is $\mathbf{0}$. Thus, to secure our protocol against this eventuality, we dictate that Alice should request the repetition of the whole process in the event that the contents of her Input Registers are all zero after the final measurement.
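The recovery step and the role of Alice's measurement $\mathbf{a}$ can be illustrated with a short classical sketch. This is only an illustration: it assumes, hypothetically, that the correlation property takes a simple XOR form, $\mathbf{s} = \mathbf{a} \oplus \mathbf{y}_0 \oplus \dots \oplus \mathbf{y}_{n-2}$, and the register width and number of agents below are arbitrary toy values, not taken from the protocol specification.

```python
import secrets
from functools import reduce

m = 4            # register width in bytes (toy value)
n_agents = 3     # hypothetical number of agents

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Pretend these came out of the final measurements; the correlation
# property (assumed here in XOR form) ties them to the secret s.
s = secrets.token_bytes(m)                     # the secret to aggregate
a = secrets.token_bytes(m)                     # Alice's own measurement
ys = [secrets.token_bytes(m) for _ in range(n_agents - 1)]
# Force the last agent's string to satisfy a ⊕ y_0 ⊕ … ⊕ y_{n-2} = s.
ys.append(reduce(xor, ys + [a, s]))

# Alice, holding a, recovers s from the broadcast measurements:
recovered = reduce(xor, ys + [a])
assert recovered == s
# Eve, combining only the broadcast y_i, obtains s ⊕ a, which is
# uniformly random whenever a is: the broadcasts alone reveal nothing.
eve_view = reduce(xor, ys)
assert eve_view == xor(s, a)
```

This also makes the special case $\mathbf{a} = \mathbf{0}$ visible: there the broadcasts alone would equal $\mathbf{s}$, which is exactly why the protocol restarts in that event.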
[Figure: A toy scale quantum circuit simulating the QSA protocol, as applied to the spymaster Alice and her two agents Bob and Charlie.]
[Figure: Some of the possible measurements and their corresponding probabilities for the circuit of Figure <ref>.]
§ A TOY SCALE EXAMPLE DEMONSTRATING THE QSA PROTOCOL
In this section we present a toy scale example that should be viewed as a proof of concept for the viability of the QSA protocol. The resulting quantum circuit is illustrated in Figure <ref>. It was designed and simulated using IBM's open-source Qiskit SDK [56] and, in particular, the Aer provider with its high-performance qasm simulator [59]. The measurements, along with their corresponding probabilities, were obtained by running the qasm simulator for 4096 shots; only a small portion of them is shown in Figure <ref>, as their sheer number makes complete visualization impractical.
In the current example, Alice's network consists of just two agents, none other than Bob and Charlie. All of them are in different locations. Bob's partial secret key is $\mathbf{p}_{ B } = 10$ and Charlie's partial secret key is $\mathbf{p}_{ C } = 01$. Hence, their extended partial secret keys are $\mathbf{s}_{ B } = 1000$ and $\mathbf{s}_{ C } = 0001$, and the complete secret key that Alice must uncover is $\mathbf{s} = 1001$. As we clarified above, the local quantum circuit of Figure <ref> is best considered a proof of concept. This is because, at present, we are unable to simulate in Qiskit the fact that Alice, Bob, and Charlie are spatially separated. An actual implementation of the QSA protocol would result in a distributed quantum circuit and not a local one as shown in Figure <ref>. Furthermore, we are also unable to directly specify a trusted third-party source that generates the entangled GHZ triples, although Qiskit provides the ability to initialize the quantum circuit in a specific initial state. In any case, we have opted for the circuit itself to create the GHZ triples. Hence, these assumptions cannot be accurately reflected in the quantum circuit of Figure <ref>, and this example should not be considered a fully faithful representation of a real-life scenario.
With all the above observations duly noted, we may verify that this simulation is indeed a localized version of the blueprint for the QSA protocol, as shown in Figure <ref>. The final measurements by Alice, Bob and Charlie will produce one of the $2^{8} = 256$ equiprobable outcomes. Showing all these outcomes would result in an unintelligible figure, so we have opted to depict only some of them in Figure <ref>. This figure also shows the corresponding probabilities for each outcome; it should not come as a surprise that they are not exactly equiprobable, as theory predicts, since the figure resulted from a simulation run of 4096 shots. The important thing, though, is that every possible outcome satisfies the Fundamental Correlation Property and verifies equation (<ref>). Therefore, ignoring the unlikely case that Alice measures $\mathbf{a} = 0000$ in her Input Register, Bob and Charlie, after measuring their Input Registers and obtaining $\mathbf{y}_{ B }$ and $\mathbf{y}_{ C }$, respectively, only have to send their measurements to Alice so that she can uncover the secret key.
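The entanglement core of the toy example, a GHZ triple followed by a Hadamard on each qubit, can be checked with a few lines of linear algebra. The sketch below is not the Qiskit circuit of Figure <ref>: it omits the secret-encoding steps and only verifies, by direct statevector computation, that every observable outcome has even parity, which is the kind of correlation the protocol exploits.

```python
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard

# GHZ triple: (|000> + |111>) / sqrt(2)
ghz = np.zeros(8)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)

# Apply a Hadamard to all three qubits and read off the probabilities.
H3 = np.kron(np.kron(H, H), H)
probs = np.abs(H3 @ ghz) ** 2

outcomes = [bits for k, bits in enumerate(product([0, 1], repeat=3))
            if probs[k] > 1e-12]
# Every observable outcome has even parity, i.e. the three measured
# bits XOR to zero; the four such outcomes are equiprobable.
assert all(sum(bits) % 2 == 0 for bits in outcomes)
assert len(outcomes) == 4 and np.allclose(probs[probs > 1e-12], 0.25)
```

In the full circuit the registers are $4$-qubit wide and carry the encoded partial secrets, so this parity check is only the smallest building block of the simulation.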
§ SECURITY ANALYSIS OF THE QSA PROTOCOL
§.§ Assumptions
In this section, we shall focus on analyzing several different attack strategies that a malicious individual, namely Eve, can employ against our protocol, with the goal of acquiring a piece of the secret message or, in the worst-case scenario, the complete message. This will allow us to establish the security of our protocol and its viability in practical applications. However, before we start with our analysis, it is crucial to first clarify two fundamental assumptions that we take for granted and that serve as the basis of our security claims.
We begin by stating the first and most basic assumption, namely that quantum theory is correct and that we can use quantum mechanics to make accurate predictions about measurement outcomes. The reasoning behind this assumption is quite obvious: if the underlying theory were false in one way or another, certain features of quantum mechanics, such as the no-cloning theorem [60], the monogamy of entanglement [61] or nonlocality [62], which are vital for any quantum cryptographic protocol, would not apply, and it would have been impossible to create a secure protocol.
The second assumption that we adopt is that quantum theory is complete and that there are no special properties or phenomena of quantum mechanics of which we are unaware. This means that Eve's actions are restricted by the laws of physics and she cannot go beyond what is possible within quantum mechanics in order to acquire more information from her targets. This assumption is by its very nature not perfect, as the question regarding the completeness of quantum mechanics is still unresolved. However, the correctness of quantum mechanics, combined with the requirement that free randomness exists, implies that any future extension of quantum theory will not improve the predictive abilities of any player [63].
§.§ Intercept and resend attack
We start our security analysis by inspecting the first attack strategy, which is the most basic and intuitive type of individual attack, known as the intercept-and-resend (I&R) attack. The main idea of this strategy is for Eve to get hold of each photon coming from Alice, or whoever is responsible for the distribution of the GHZ tuples to the rest of the players, at the beginning of the protocol. Eve then measures it in some predefined basis and, based on the result, prepares a new photon and sends it to the intended recipient. It is rather obvious that, in any of the possible scenarios in which our protocol can be used, the GHZ tuples distributed at the beginning of the protocol do not carry any information regarding the nature of the secret message. Thus, our QSA protocol is secure against this attack strategy.
§.§ PNS attack
The next attack strategy, known as the Photon Number Splitting (PNS) attack, was first introduced by Huttner et al. [64] and further discussed and analyzed by Lütkenhaus and Brassard et al. in [65, 66]. Today, it is considered one of the most effective attack strategies that Eve can use against any protocol. This is because it exploits the fact that our current detectors are not $100\%$ efficient and our photon sources do not always emit single-photon signals, meaning that a photon source may produce multiple identical photons instead of only one. Therefore, in a realistic scenario, Eve can intercept these pulses coming from the player or the source responsible for the distribution of the GHZ tuples, take one photon from a multi-photon pulse, and send the remaining photon(s) to their legitimate recipient undisturbed. In this scenario, Eve once again will not be able to acquire any information regarding the secret message or the random binary strings that will be used to unlock the secret key. This can be explained by the inherent nature of the QSA protocol, which leads to the creation of seemingly random binary strings during the final phase, when all players apply the final $m$-fold Hadamard transform to their corresponding Input Registers. This means that, if we assume that a tuple in the $\ket{GHZ_{ n + 1 }}$ state is created instead of a tuple in the $\ket{GHZ_{ n }}$ state, this $(n + 1)$-tuple will correspond to the $n$ players plus Eve. Accordingly, during the measurement phase the results would be
\begin{align}
\label{eq:QSA$_n+1$ Final Measurement PNS attack}
\ket{\psi_4}
=
\ket{ \mathbf{a} }_{A}
\otimes
\ket{ \mathbf{y}_{ n - 1 } }_{ E }
\otimes
\ket{ \mathbf{y}_{ n - 2 } }_{ n - 2 }
\otimes
\dots
\otimes
\ket{ \mathbf{y}_{ 0 } }_{0}
\ , \quad \text{for some} \quad
\mathbf{a}, \mathbf{y}_{ 0 }, \dots, \mathbf{y}_{ n - 1 } \in \{ 0, 1 \}^m
\ ,
\end{align}
instead of the anticipated
\begin{align}
\label{eq:QSA$_n$ Final Measurement PNS attack}
\ket{\psi_4}
=
\ket{ \mathbf{a} }_{A}
\otimes
\ket{ \mathbf{y}_{ n - 2 } }_{ n - 2 }
\otimes
\dots
\otimes
\ket{ \mathbf{y}_{ 0 } }_{0}
\ , \quad \text{for some} \quad
\mathbf{a}, \mathbf{y}_{ 0 }, \dots, \mathbf{y}_{ n - 2 } \in \{ 0, 1 \}^m
\ .
\end{align}
In such a situation, Eve can be considered an extra player and, thus, her ability to acquire any extra information about the other players' measurements is, like that of every other player, nonexistent.
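The "Eve as an extra player" argument can be illustrated numerically for single-qubit registers: appending one more leg to the GHZ state leaves the post-Hadamard outcomes supported on even-parity strings, while Eve's own bit remains marginally uniform. The sketch below is a toy statevector computation, not part of the protocol itself.

```python
import numpy as np

n_plus_1 = 4                                  # 3 legitimate parties + Eve
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
dim = 2 ** n_plus_1

# GHZ_{n+1}: (|00...0> + |11...1>) / sqrt(2), Eve holding one leg
ghz = np.zeros(dim)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

# Hadamard on every qubit, including Eve's
Hn = H
for _ in range(n_plus_1 - 1):
    Hn = np.kron(Hn, H)
probs = np.abs(Hn @ ghz) ** 2

parity = np.array([bin(k).count("1") % 2 for k in range(dim)])
# Only even-parity outcomes occur: Eve's string is just one more share
# of the same parity constraint every player already satisfies.
assert np.allclose(probs[parity == 1], 0.0)
eve_bit = np.array([k & 1 for k in range(dim)])   # say Eve holds qubit 0
assert np.isclose(probs[eve_bit == 1].sum(), 0.5)  # her bit is uniform
```

The same computation with $m$-qubit registers scales exponentially, which is why only the single-qubit case is shown here.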
§.§ Blinding attack
Finally, we conclude our security analysis with the blinding attack. During this attack strategy, Eve, instead of trying to intercept the GHZ tuples, blocks and destroys them entirely before they reach the intended players. She then creates her own set of GHZ tuples, with a proper ancilla state in each tuple, and distributes them to the players. From this description, it is obvious that for this particular type of attack to work, the entity responsible for the creation and distribution of the GHZ tuples must be a third-party source and not a player. During this attack Eve will therefore have a full set of tuples in the $\ket{GHZ_{ n + 1 }}$ state, instead of the smaller number of $\ket{GHZ_{ n + 1 }}$ tuples acquired by exploiting the inefficiency of our current photon sources during the PNS attack. However, the scenario is once again similar to the PNS attack, meaning that Eve is considered an extra player, and in that case she will again be unable to acquire any information regarding the secret message.
§ DISCUSSION AND CONCLUSIONS
In this article, we have introduced a new problem in the literature of cryptographic protocols, which we call the Quantum Secret Aggregation problem. We have given a solution to this problem that is based on the use of maximally entangled GHZ tuples. These are uniformly distributed among the players, which include the spymaster Alice and her network of agents, all of them being in different locations. We conducted a detailed analysis of the proposed protocol and, subsequently, illustrated its use with a toy scale example involving Alice and her two agents Bob and Charlie. Our presentation has been completely general in the sense that the number of players can increase as needed, and the players are assumed to be spatially separated. It is clear that the same protocol can immediately accommodate groups of players that are in the same region of space.
In closing, we point out that the security of our protocol is attributed to its entanglement-based nature. For instance, Entanglement Monogamy precludes the entanglement of a maximally entangled tuple with any other qubit. This nullifies Eve's attempts at gaining information by trying to entangle a qubit with the GHZ tuples used in our protocol during their transmission to the players.
[1]
P. Shor, “Algorithms for quantum computation: discrete logarithms and
factoring,” in Proceedings 35th Annual Symposium on Foundations of
Computer Science, IEEE Comput. Soc. Press, 1994.
[2]
L. Grover, “A fast quantum mechanical algorithm for database search,” in Proc. of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing,
1996, 1996.
[3]
J. Chow, O. Dial, and J. Gambetta, “IBM Quantum breaks the 100-qubit
processor barrier.”
<https://research.ibm.com/blog/127-qubit-quantum-processor-eagle>, 2021.
Accessed: 2022-04-03.
[4]
IBM Newsroom, “IBM unveils 400 qubit-plus quantum processor.”
Accessed: 2022-11-18.
[5]
V. Chamola, A. Jolfaei, V. Chanana, P. Parashari, and V. Hassija, “Information
security in the post quantum era for 5g and beyond networks: Threats to
existing cryptography, and post-quantum cryptography,” Computer
Communications, vol. 176, pp. 99–118, 2021.
[6]
L. Chen, L. Chen, S. Jordan, Y.-K. Liu, D. Moody, R. Peralta, R. Perlner, and
D. Smith-Tone, Report on post-quantum cryptography, vol. 12.
US Department of Commerce, National Institute of Standards and
Technology, 2016.
[7]
G. Alagic, G. Alagic, J. Alperin-Sheriff, D. Apon, D. Cooper, Q. Dang, Y.-K.
Liu, C. Miller, D. Moody, R. Peralta, et al., Status report on the
first round of the NIST post-quantum cryptography standardization process.
US Department of Commerce, National Institute of Standards and
Technology …, 2019.
[8]
G. Alagic, J. Alperin-Sheriff, D. Apon, D. Cooper, Q. Dang, J. Kelsey, Y.-K.
Liu, C. Miller, D. Moody, R. Peralta, et al., “Status report on the
second round of the NIST post-quantum cryptography standardization process,”
US Department of Commerce, NIST, 2020.
[9]
G. Alagic, D. Apon, D. Cooper, Q. Dang, T. Dang, J. Kelsey, J. Lichtinger,
C. Miller, D. Moody, R. Peralta, et al., “Status report on the third
round of the NIST post-quantum cryptography standardization process,” National Institute of Standards and Technology, Gaithersburg, 2022.
[10]
A. K. Ekert, “Quantum cryptography based on Bell's theorem,” Physical
Review Letters, vol. 67, no. 6, pp. 661–663, 1991.
[11]
C. H. Bennett, G. Brassard, and N. D. Mermin, “Quantum cryptography without
Bell's theorem,” Physical Review Letters, vol. 68, no. 5,
pp. 557–559, 1992.
[12]
N. Gisin, G. Ribordy, H. Zbinden, D. Stucki, N. Brunner, and V. Scarani,
“Towards practical and fast quantum cryptography,” arXiv preprint
quant-ph/0411022, 2004.
[13]
K. Inoue, E. Waks, and Y. Yamamoto, “Differential phase shift quantum key
distribution,” Physical review letters, vol. 89, no. 3, p. 037902,
[14]
J.-Y. Guan, Z. Cao, Y. Liu, G.-L. Shen-Tu, J. S. Pelc, M. Fejer, C.-Z. Peng,
X. Ma, Q. Zhang, and J.-W. Pan, “Experimental passive round-robin
differential phase-shift quantum key distribution,” Physical review
letters, vol. 114, no. 18, p. 180502, 2015.
[15]
E. Waks, H. Takesue, and Y. Yamamoto, “Security of differential-phase-shift
quantum key distribution against individual attacks,” Physical Review
A, vol. 73, no. 1, p. 012344, 2006.
[16]
M. Ampatzis and T. Andronikos, “QKD based on symmetric entangled
Bernstein-Vazirani,” Entropy, vol. 23, no. 7, p. 870, 2021.
[17]
M. Ampatzis and T. Andronikos, “A symmetric extensible protocol for quantum
secret sharing,” Symmetry, vol. 14, no. 8, p. 1692, 2022.
[18]
V. Attasena, J. Darmont, and N. Harbi, “Secret sharing for cloud data
security: a survey,” The VLDB Journal, vol. 26, no. 5, pp. 657–681,
[19]
T. Ermakova and B. Fabian, “Secret sharing for health data in multi-provider
clouds,” in 2013 IEEE 15th conference on business informatics,
pp. 93–100, IEEE, 2013.
[20]
J. Cha, S. K. Singh, T. W. Kim, and J. H. Park, “Blockchain-empowered cloud
architecture based on secret sharing for smart city,” Journal of
Information Security and Applications, vol. 57, p. 102686, 2021.
[21]
M. Hillery, V. Bužek, and A. Berthiaume, “Quantum secret sharing,” Physical Review A, vol. 59, no. 3, p. 1829, 1999.
[22]
R. Cleve, D. Gottesman, and H.-K. Lo, “How to share a quantum secret,” Physical Review Letters, vol. 83, no. 3, p. 648, 1999.
[23]
A. Karlsson, M. Koashi, and N. Imoto, “Quantum entanglement for secret sharing
and secret splitting,” Physical Review A, vol. 59, no. 1, p. 162,
[24]
A. D. Smith, “Quantum secret sharing for general access structures,” arXiv preprint quant-ph/0001087, 2000.
[25]
D. Gottesman, “Theory of quantum secret sharing,” Physical Review A,
vol. 61, no. 4, p. 042311, 2000.
[26]
B. Fortescue and G. Gour, “Reducing the quantum communication cost of quantum
secret sharing,” IEEE transactions on information theory, vol. 58,
no. 10, pp. 6659–6666, 2012.
[27]
H. Qin, W. K. Tang, and R. Tso, “Hierarchical quantum secret sharing based on
special high-dimensional entangled state,” IEEE Journal of Selected
Topics in Quantum Electronics, vol. 26, no. 3, pp. 1–6, 2020.
[28]
K. Senthoor and P. K. Sarvepalli, “Theory of communication efficient quantum
secret sharing,” IEEE Transactions on Information Theory, 2022.
[29]
Y. Fu, H.-L. Yin, T.-Y. Chen, and Z.-B. Chen, “Long-distance
measurement-device-independent multiparty quantum communication,” Physical review letters, vol. 114, no. 9, p. 090501, 2015.
[30]
X. Wu, Y. Wang, and D. Huang, “Passive continuous-variable quantum secret
sharing using a thermal source,” Physical Review A, vol. 101, no. 2,
p. 022301, 2020.
[31]
W. P. Grice and B. Qi, “Quantum secret sharing using weak coherent states,”
Physical Review A, vol. 100, no. 2, p. 022339, 2019.
[32]
J. Gu, Y.-M. Xie, W.-B. Liu, Y. Fu, H.-L. Yin, and Z.-B. Chen, “Secure quantum
secret sharing without signal disturbance monitoring,” Optics Express,
vol. 29, no. 20, pp. 32244–32255, 2021.
[33]
A. Keet, B. Fortescue, D. Markham, and B. C. Sanders, “Quantum secret sharing
with qudit graph states,” Physical Review A, vol. 82, no. 6,
p. 062315, 2010.
[34]
W. Helwig, W. Cui, J. I. Latorre, A. Riera, and H.-K. Lo, “Absolute maximal
entanglement and quantum secret sharing,” Physical Review A, vol. 86,
no. 5, p. 052335, 2012.
[35]
C.-J. Liu, Z.-H. Li, C.-M. Bai, and M.-M. Si, “Quantum-secret-sharing scheme
based on local distinguishability of orthogonal seven-qudit entangled
states,” International Journal of Theoretical Physics, vol. 57, no. 2,
pp. 428–442, 2018.
[36]
M. Mansour and Z. Dahbi, “Quantum secret sharing protocol using maximally
entangled multi-qudit states,” International Journal of Theoretical
Physics, vol. 59, no. 12, pp. 3876–3887, 2020.
[37]
C. H. Bennett and G. Brassard, “Quantum cryptography: Public key distribution
and coin tossing,” in Proceedings of the IEEE International Conference
on Computers, Systems, and Signal Processing, pp. 175–179, IEEE Computer
Society Press, 1984.
[38]
D. A. Meyer, “Quantum strategies,” Physical Review Letters, vol. 82,
no. 5, p. 1052, 1999.
[39]
J. Eisert, M. Wilkens, and M. Lewenstein, “Quantum games and quantum
strategies,” Physical Review Letters, vol. 83, no. 15, p. 3077, 1999.
[40]
T. Andronikos, A. Sirokofskich, K. Kastampolidou, M. Varvouzou, K. Giannakis,
and A. Singh, “Finite automata capturing winning sequences for all possible
variants of the PQ penny flip game,” Mathematics, vol. 6, p. 20, Feb
[41]
T. Andronikos and A. Sirokofskich, “The connection between the PQ penny flip
game and the dihedral groups,” Mathematics, vol. 9, no. 10, p. 1115,
[42]
T. Andronikos, “Conditions that enable a player to surely win in sequential
quantum games,” Quantum Information Processing, vol. 21, no. 7, 2022.
[43]
K. Giannakis, G. Theocharopoulou, C. Papalitsas, S. Fanarioti, and
T. Andronikos, “Quantum conditional strategies and automata for prisoners'
dilemmata under the EWL scheme,” Applied Sciences, vol. 9, p. 2635,
Jun 2019.
[44]
K. Giannakis, C. Papalitsas, K. Kastampolidou, A. Singh, and T. Andronikos,
“Dominant strategies of quantum games on quantum periodic automata,” Computation, vol. 3, pp. 586–599, nov 2015.
[45]
T. Andronikos and M. Stefanidakis, “A two-party quantum parliament,” Algorithms, vol. 15, no. 2, p. 62, 2022.
[46]
G. Theocharopoulou, K. Giannakis, C. Papalitsas, S. Fanarioti, and
T. Andronikos, “Elements of game theory in a bio-inspired model of
computation,” in 2019 10th International Conference on Information,
Intelligence, Systems and Applications (IISA), pp. 1–4, IEEE, jul 2019.
[47]
K. Kastampolidou, M. N. Nikiforos, and T. Andronikos, “A brief survey of the
prisoners' dilemma game and its potential use in biology,” in Advances
in Experimental Medicine and Biology, pp. 315–322, Springer International
Publishing, 2020.
[48]
D. Kostadimas, K. Kastampolidou, and T. Andronikos, “Correlation of biological
and computer viruses through evolutionary game theory,” in 2021 16th
International Workshop on Semantic and Social Media Adaptation &
Personalization (SMAP), IEEE, 2021.
[49]
K. Kastampolidou and T. Andronikos, “A survey of evolutionary games in
biology,” in Advances in Experimental Medicine and Biology,
pp. 253–261, Springer International Publishing, 2020.
[50]
K. Kastampolidou and T. Andronikos, “Microbes and the games they play,” in
GeNeDis 2020, pp. 265–271, Springer International Publishing, 2021.
[51]
K. Kastampolidou and T. Andronikos, “Game theory and other unconventional
approaches to biological systems,” in Handbook of Computational
Neurodegeneration, pp. 1–18, Springer International Publishing, 2021.
[52]
C. Papalitsas, K. Kastampolidou, and T. Andronikos, “Nature and
quantum-inspired procedures – a short literature review,” in
GeNeDis 2020, pp. 129–133, Springer International Publishing, 2021.
[53]
D. Cruz, R. Fournier, F. Gremion, A. Jeannerot, K. Komagata, T. Tosic,
J. Thiesbrummel, C. L. Chan, N. Macris, M.-A. Dupertuis, and
C. Javerzac-Galy, “Efficient quantum algorithms for GHZ and w states, and
implementation on the IBM quantum computer,” Advanced Quantum
Technologies, vol. 2, no. 5-6, p. 1900015, 2019.
[54]
IBM, “IBM Quantum Composer.”
Accessed: 2022-11-18.
[55]
M. Aspelmeyer, T. Jennewein, M. Pfennigbauer, W. R. Leeb, and A. Zeilinger,
“Long-distance quantum communication with entangled photons using
satellites,” IEEE Journal of Selected Topics in Quantum Electronics,
vol. 9, no. 6, pp. 1541–1551, 2003.
[56]
Qiskit, “Qiskit open-source quantum development.” <https://qiskit.org>.
Accessed: 2022-11-18.
[57]
M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information.
Cambridge University Press, 2010.
[58]
N. Mermin, Quantum Computer Science: An Introduction.
Cambridge University Press, 2007.
[59]
Qasm, “The qasm simulator.”
Accessed: 2022-04-03.
[60]
W. K. Wootters and W. H. Zurek, “A single quantum cannot be cloned,” Nature, vol. 299, no. 5886, pp. 802–803, 1982.
[61]
V. Coffman, J. Kundu, and W. K. Wootters, “Distributed entanglement,” Physical Review A, vol. 61, no. 5, p. 052306, 2000.
[62]
N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, “Bell
nonlocality,” Reviews of Modern Physics, vol. 86, no. 2, p. 419, 2014.
[63]
R. Colbeck and R. Renner, “No extension of quantum theory can have improved
predictive power,” Nature communications, vol. 2, no. 1, pp. 1–5,
[64]
B. Huttner, N. Imoto, N. Gisin, and T. Mor, “Quantum cryptography with
coherent states,” Physical Review A, vol. 51, no. 3, p. 1863, 1995.
[65]
N. Lütkenhaus, “Security against individual attacks for realistic quantum
key distribution,” Physical Review A, vol. 61, no. 5, p. 052304, 2000.
[66]
G. Brassard, N. Lütkenhaus, T. Mor, and B. C. Sanders, “Limitations on
practical quantum cryptography,” Physical review letters, vol. 85,
no. 6, p. 1330, 2000.
# Macroscopic Wave Propagation for 2D Lattice with Random Masses
Joshua A. McGinnis
###### Abstract
We consider a simple 2D harmonic lattice with random, independent and
identically distributed masses. Using the methods of stochastic
homogenization, we show that solutions with long wave initial data converge in
an appropriate sense to solutions of an effective wave equation. The
convergence is strong and almost sure. In addition, the role of the lattice’s
dimension in the rate of convergence is discussed. The technique combines
energy estimates with powerful classical results about sub-Gaussian random
variables.
## 1 Introduction
We prove an almost sure convergence result for solutions of the following
two-dimensional, spatially discrete, linear harmonic lattice with random
masses in the long-wave limit, i.e., as the wavelength becomes longer:
$m({\bf{j}})\ddot{u}({\bf{j}},t)=\Delta u({\bf{j}},t)$ (1.1)
where ${\bf{j}}=(j_{1},j_{2})$. The discrete Laplacian is defined as
$\Delta
u({\bf{j}},t)\coloneqq-4u({\bf{j}},t)+\sum_{i=1}^{2}\left[u({\bf{j}}+{\bf{e}}_{i},t)+u({\bf{j}}-{\bf{e}}_{i},t)\right],$
(1.2)
where ${\bf{e}}_{1}=(1,0)$ and ${\bf{e}}_{2}=(0,1).$
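For concreteness, the discrete Laplacian (1.2) can be realized on a finite grid and checked against the lattice eigenvalue relation $\Delta\,e^{i\mathbf{k}\cdot\mathbf{j}} = (-4 + 2\cos k_1 + 2\cos k_2)\,e^{i\mathbf{k}\cdot\mathbf{j}}$. The sketch below is illustrative only; the periodic boundary conditions are an implementation choice, not something imposed in the text.

```python
import numpy as np

def discrete_laplacian(u):
    """Eq. (1.2) on a periodic N x N grid: -4u(j) plus the four neighbours."""
    return (-4 * u
            + np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
            + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1))

# Sanity check: a grid-commensurate plane wave cos(k1 j1 + k2 j2) is an
# eigenfunction with eigenvalue -4 + 2 cos k1 + 2 cos k2.
N = 32
j1, j2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
k1, k2 = 2 * np.pi * 3 / N, 2 * np.pi * 5 / N
u = np.cos(k1 * j1 + k2 * j2)
lam = -4 + 2 * np.cos(k1) + 2 * np.cos(k2)
assert np.allclose(discrete_laplacian(u), lam * u)
```

Note that the eigenvalues lie in $[-8, 0]$, consistent with $-\Delta$ being a bounded nonnegative operator on $\ell^2({\bf{Z}}^2)$.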
Here, $(j_{1},j_{2})\in{\bf{Z}}^{2},t\in{\bf{R}},$ and $\Delta$ is the usual
discrete Laplacian on the integer lattice ${\bf{Z}}^{2}$. The $m(j_{1},j_{2})$
are independent and identically distributed random variables (i.i.d.)
contained almost surely in some interval $[a,b]\subset{\bf{R}}^{+}$. There is a
long history of studying such lattices, especially when the masses or
constants of elasticity vary periodically [1], since they can provide
simplified models for investigating physical phenomena arising in more complex
systems. We are interested in understanding to what degree approximations to
the wave equation are still valid, since random coefficients could in
principle be used to model impurities. The system is well understood when the
masses are either constant or periodic with respect to $(j_{1},j_{2})$ [12],
but for a 2D lattice most of what is known for the random problem is for weak
randomness [10] or is numerical [11]. Much has already been said about the
continuous versions of (1.1). For an extensive resource, see [8]. Note that
rates of convergence are rarely addressed. Furthermore, almost sure convergence
results in the discrete setting require different techniques. In the
continuous setting convergence can be achieved more directly through the law
of large numbers. In our setting, one must define what one means by
convergence, and this is why we prove convergence through “coarse-graining”,
which is used in [12] to prove convergence in the periodic problem. To achieve
a rigorous rate of convergence, one must have bounds on the stochastic error
terms. In [14], the law of the iterated logarithm was used. Here we use the
theory of sub-Gaussian random variables.
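As a quick illustration of why sub-Gaussianity is useful here: bounded i.i.d. variables, such as masses confined to $[a,b]$, obey Hoeffding-type tail bounds, which is the kind of control one wants on stochastic error terms. The sketch below is a simple empirical check; the numerical values are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, n, trials, t = 0.0, 1.0, 500, 20000, 0.05

# Sample means of n i.i.d. Uniform[a, b] variables (bounded, hence
# sub-Gaussian), and compare the empirical large-deviation frequency
# with Hoeffding's bound 2 exp(-2 n t^2 / (b - a)^2).
means = rng.uniform(a, b, size=(trials, n)).mean(axis=1)
empirical = np.mean(np.abs(means - (a + b) / 2) >= t)
hoeffding = 2 * np.exp(-2 * n * t**2 / (b - a) ** 2)
assert empirical <= hoeffding
```

The actual argument in the paper concerns tails of whole sequences of such variables, uniformly in the lattice site, but the exponential tail decay shown here is the basic ingredient.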
For initial conditions whose wavelength and amplitude are $O(\epsilon^{-1})$
with $\epsilon$ a small positive number, we prove that the $\ell^{2}$ norm of
the differences between true solutions and appropriately scaled solutions to
the wave equation is almost surely $O(\epsilon^{-1-\sigma})$, where $\sigma$
is any small positive number. While such an absolute error diverges as
$\epsilon\to 0^{+},$ it happens that this is enough to establish an almost
sure convergence of the macroscopic dynamics within the coarse-graining
setting.
The article [14] studies a similar problem on a 1D lattice. There the
constants of elasticity also vary randomly and the system is studied in the
relative coordinates. The so called multiscale method of homogeniziation, a
by-now classical tool with a long history in PDE for deriving effective
equations, see [2], is employed. Our results should be contrasted with those
in 1D. In [14], it is shown that the coarse-graining converges at a rate
of $\sqrt{\epsilon\log\log(\epsilon^{-1})}$, and this is thought to be sharp
based on numerical evidence. Below, we show that in 2D, the rate of
convergence is no slower than $\epsilon^{1-\sigma}$ when the masses are i.i.d.
However the rate of convergence is different for layered masses i.e. where
masses are only random with respect to $j_{1}$ and constant with respect to
$j_{2}$. We explore this in Section 7.2 and in the numerical experiments in
Section 9. In such a case we show that the convergence rate is comparable to
the rate in the 1D lattice. We do not believe our estimates are sharp like
those in the 1D setting because the analysis requires the use of a cut-off
function, which demands slightly greater regularity of the initial conditions.
Still, numerical evidence suggests they are probably close to being sharp and
likely off by only a logarithmic term.
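A minimal sketch of the kind of numerical experiment just mentioned can be set up in a few lines. This is not the paper's code; the grid size, mass distribution, time step, and initial data below are arbitrary choices. It evolves $m({\bf{j}})\ddot{u} = \Delta u$ with i.i.d. masses by velocity Verlet and uses conservation of the lattice energy as a sanity check.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, steps = 64, 0.05, 2000
m = rng.uniform(0.5, 1.5, size=(N, N))       # i.i.d. masses in [a, b]

def lap(u):
    # periodic discrete Laplacian, eq. (1.2)
    return (-4 * u + np.roll(u, 1, 0) + np.roll(u, -1, 0)
                   + np.roll(u, 1, 1) + np.roll(u, -1, 1))

def energy(u, v):
    # kinetic + nearest-neighbour elastic energy, conserved by (1.1)
    dx, dy = np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u
    return 0.5 * np.sum(m * v**2) + 0.5 * np.sum(dx**2 + dy**2)

j1 = np.arange(N)[:, None]
u = np.cos(2 * np.pi * j1 / N) * np.ones((N, N))   # long-wave initial data
v = np.zeros_like(u)

E0 = energy(u, v)
for _ in range(steps):
    # velocity Verlet for m u'' = lap(u)
    v += 0.5 * dt * lap(u) / m
    u += dt * v
    v += 0.5 * dt * lap(u) / m
assert abs(energy(u, v) - E0) / E0 < 1e-2
```

Replacing `m` by masses that are random in $j_1$ only, e.g. `np.tile(rng.uniform(0.5, 1.5, (N, 1)), (1, N))`, gives the layered case discussed above.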
Although large parts of the formal derivation are the same in 2D as they are
in 1D, difficulty arises in the fact that, as far as we can tell, in 2D no
known explicit formulation of a solution for (2.19) exists. It can however be
solved on a finite domain. Therefore a cut-off function involving $\epsilon$
is introduced and although it eventually disappears from the final Theorems
7.1 and 8.1, several new arguments need to be made, most important of which
are almost sure estimates for solutions of (3.5) involving tails of sequences
of sub-Gaussian random variables. In Section 2, we derive the effective
equations. The core probability theory is in Section 3. A common energy
estimate, akin to those found in [3], [9], and [16] is given in Section 4.
These help bound the error in terms of the “residuals” defined in (2.1).
Elementary theory of the energy of the effective wave, such as that in [7], is
given in Section 5. This theory is used to derive estimates of various norms
needed in the following Section 6. Finally, Section 8 contains the main
convergence result. The appendix contains technical proofs of several
statements that are believable enough to be skipped on a first read-through.
## 2 Deriving the Effective Equations
### 2.1 Expansions
Define the “residual” to be
$\operatorname{Res}\widetilde{u}({\bf{j}},t)=m({\bf{j}})\ddot{\widetilde{u}}-\Delta\widetilde{u}({\bf{j}},t).$
(2.1)
We hope to find an approximate solution to (1.1) whose residual is as small as
possible. We look for approximate solutions of the form
$\widetilde{u}({\bf{j}},t)=\epsilon^{-1}U_{0}({\bf{j}},\epsilon{\bf{j}},\epsilon
t)+\epsilon U_{2}({\bf{j}},\epsilon{\bf{j}},\epsilon t).$ (2.2)
The amplitude of the ansatz is $O(\epsilon^{-1})$ so that the relative
displacement of the masses is $O(1)$. The relative displacement of the masses
is given by
$r_{i}({\bf{j}},t)\coloneqq u({\bf{j}}+{\bf{e}}_{i},t)-u({\bf{j}},t).$ (2.3)
These are the same initial conditions used in [12]. Here
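The claim that this amplitude scaling keeps the relative displacements $O(1)$ is easy to check numerically for a concrete profile; the choice $U_0 = \sin$ below is purely illustrative, not taken from the text.

```python
import numpy as np

def max_rel_disp(eps, N=2000):
    # u(j) = eps^{-1} U0(eps * j) with the hypothetical profile U0 = sin
    j = np.arange(N)
    u = (1 / eps) * np.sin(eps * j)
    # r_1(j) = u(j + 1) - u(j), eq. (2.3) in one direction
    return np.max(np.abs(np.diff(u)))

vals = [max_rel_disp(eps) for eps in (0.1, 0.01, 0.001)]
# The maxima stay near max|U0'| = 1, independent of eps: the relative
# displacements are O(1) even though the amplitude is O(eps^{-1}).
assert all(abs(v - 1.0) < 0.1 for v in vals)
```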
$U_{i}:{\bf{Z}}^{2}\times{\bf{R}}^{2}\times{\bf{R}}\to{\bf{R}},$ (2.4)
and we keep track of its arguments by letting ${\bf{X}}=\epsilon{\bf{j}}$ and
$\tau=\epsilon t,$ in which case ${\bf{X}}=(X_{1},X_{2})\in{\bf{R}}^{2}.$ In
order to plug (2.2) into (1.1), we need to understand how $\Delta$ acts on
functions of this form. Note that
$\Delta
U({\bf{j}},{\bf{X}})=\sum_{i=1}^{2}\left[U({\bf{j}}+{\bf{e}}_{i},\epsilon{\bf{j}}+\epsilon{\bf{e}}_{i})+U({\bf{j}}-{\bf{e}}_{i},\epsilon{\bf{j}}-\epsilon{\bf{e}}_{i})-2U({\bf{j}},\epsilon{\bf{j}})\right].$
The addends,
$U({\bf{j}}+{\bf{e}}_{i},\epsilon{\bf{j}}+\epsilon{\bf{e}}_{i})+U({\bf{j}}-{\bf{e}}_{i},\epsilon{\bf{j}}-\epsilon{\bf{e}}_{i})$
(2.6)
can be Taylor expanded in powers of $\epsilon$. Using this expansion in (2.5),
we have
$\Delta U=\check{\Delta}_{0}U+\epsilon\check{\Delta}_{1}U+\epsilon^{2}\check{\Delta}_{2}U,$
(2.7)
where
$\check{\Delta}_{0}U({\bf{j}},{\bf{X}})\coloneqq\sum_{i=1}^{2}\left[U({\bf{j}}+{\bf{e}}_{i},{\bf{X}})+U({\bf{j}}-{\bf{e}}_{i},{\bf{X}})-2U({\bf{j}},{\bf{X}})\right],$
(2.8) $\check{\Delta}_{1}U({\bf{j}},{\bf{X}})\coloneqq\sum_{i=1}^{2}\partial_{X_{i}}U({\bf{j}}+{\bf{e}}_{i},{\bf{X}})-\partial_{X_{i}}U({\bf{j}}-{\bf{e}}_{i},{\bf{X}}),$
(2.9)
and
$\check{\Delta}_{2}U({\bf{j}},{\bf{X}})\coloneqq\frac{1}{2}\sum_{i=1}^{2}\partial_{X_{i}X_{i}}U({\bf{j}}+{\bf{e}}_{i},{\bf{X}})+\partial_{X_{i}X_{i}}U({\bf{j}}-{\bf{e}}_{i},{\bf{X}}).$
(2.10)
Everything remaining in the expansion is $O(\epsilon^{3})$ so we neglect it
for now. Now we calculate $\operatorname{Res}{\widetilde{u}}$ using (2.7).
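The expansion (2.7) with the operators (2.8)-(2.10) can be sanity-checked numerically. The sketch below uses a hypothetical separable test profile $U({\bf{j}},{\bf{X}})=\cos(a\cdot{\bf{j}})\,e^{-|{\bf{X}}|^{2}}$ (the profile, the vector $a$, and the site $j$ are illustrative choices, not from the paper) and verifies that the remainder shrinks like $\epsilon^{3}$:

```python
import numpy as np

# Illustrative check that Delta U - (D0 + eps*D1 + eps^2*D2)U = O(eps^3)
# for U(j, X) = c(j) g(X), with the macroscopic derivatives of g analytic.
a = np.array([0.7, 0.3])
c = lambda j: np.cos(a @ j)
g = lambda X: np.exp(-(X[0] ** 2 + X[1] ** 2))
gX = lambda X, i: -2.0 * X[i] * g(X)               # d g / d X_i
gXX = lambda X, i: (4.0 * X[i] ** 2 - 2.0) * g(X)  # d^2 g / d X_i^2

E = [np.array([1, 0]), np.array([0, 1])]

def residual(j, eps):
    X = eps * j
    U = lambda j, X: c(j) * g(X)
    # exact left-hand side: the lattice Laplacian of j -> U(j, eps*j)
    lhs = sum(U(j + e, eps * (j + e)) + U(j - e, eps * (j - e)) - 2 * U(j, X)
              for e in E)
    d0 = sum((c(j + e) + c(j - e) - 2 * c(j)) * g(X) for e in E)
    d1 = sum((c(j + e) - c(j - e)) * gX(X, i) for i, e in enumerate(E))
    d2 = 0.5 * sum((c(j + e) + c(j - e)) * gXX(X, i) for i, e in enumerate(E))
    return abs(lhs - (d0 + eps * d1 + eps ** 2 * d2))

j = np.array([3, 2])
e1, e2 = residual(j, 0.1), residual(j, 0.05)
print(e1, e2, e1 / e2)  # ratio should be near 8 for an O(eps^3) remainder
```

Halving $\epsilon$ divides the remainder by roughly $2^{3}=8$, consistent with the neglected terms being $O(\epsilon^{3})$.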
Again grouping terms by powers of $\epsilon$ yields
$\displaystyle\operatorname{Res}\widetilde{u}({\bf{j}},{\bf{X}},\tau)=$
$\displaystyle-\epsilon^{-1}\check{\Delta}_{0}U_{0}({\bf{j}},{\bf{X}},\tau)$
(2.11) $\displaystyle-\check{\Delta}_{1}U_{0}({\bf{j}}-{\bf{e}}_{i},{\bf{X}},\tau)$
$\displaystyle+\epsilon
m({\bf{j}})\partial_{\tau\tau}U_{0}({\bf{j}},{\bf{X}},\tau)-\epsilon\sum_{i=0}^{1}\check{\Delta}_{2i}U_{2-2i}({\bf{j}},{\bf{X}},\tau)$
$\displaystyle+O(\epsilon^{2}).$
Since we want $\operatorname{Res}\widetilde{u}$ to be small, we solve for
$U_{0}$ and $U_{2}$ so that formally the residual is $O(\epsilon^{2}).$
Assuming we seek bounded solutions, the $O(\epsilon^{-1})$ term implies
$\check{\Delta}_{0}U_{0}({\bf{j}},{\bf{X}},\tau)=0.$
(2.12)
We can meet (2.12) by putting
$U_{0}({\bf{j}},{\bf{X}},\tau)=\bar{U}_{0}({\bf{X}},\tau)$ (2.13)
i.e. $U_{0}$ does not depend on the microscopic component. From (2.13), it is
seen that
$\check{\Delta}_{1}U_{0}\equiv 0$ (2.14)
and that
$\check{\Delta}_{2}U_{0}=\Delta_{\bf{X}}U_{0}\coloneqq\sum_{i=1}^{2}\partial_{X_{i}X_{i}}U_{0}.$
(2.15)
With $U_{0}$ constant in ${\bf{j}}$, the residual in (2.11) simplifies to
$\displaystyle\operatorname{Res}\widetilde{u}({\bf{j}},{\bf{X}},\tau)=$
$\displaystyle\epsilon
m({\bf{j}})\partial_{\tau\tau}U_{0}({\bf{X}},\tau)-\epsilon\Delta_{{\bf{X}}}U_{0}({\bf{X}},\tau)-\epsilon\check{\Delta}_{0}U_{2}({\bf{j}},{\bf{X}},\tau)+O(\epsilon^{2}).$
(2.16)
To make the $O(\epsilon)$ terms vanish, we need
$m({\bf{j}})\partial_{\tau\tau}U_{0}({\bf{X}},\tau)-\Delta_{\bf{X}}U_{0}({\bf{X}},\tau)=\check{\Delta}_{0}U_{2}({\bf{j}},{\bf{X}},\tau).$
(2.17)
Let $z({\bf{j}})\coloneqq m({\bf{j}})-\bar{m}$, and
$\bar{m}\coloneqq{\mathbb{E}}[m({\bf{j}})]$ where ${\mathbb{E}}$ is the
expected value with respect to the probability measure of the i.i.d. lattice
of masses. One way to solve (2.17) is to write it as
$\bar{m}\partial_{\tau\tau}U_{0}({\bf{X}},\tau)-\Delta_{\bf{X}}U_{0}({\bf{X}},\tau)=\check{\Delta}_{0}U_{2}({\bf{j}},{\bf{X}},\tau)-z({\bf{j}})\partial_{\tau\tau}U_{0}({\bf{X}},\tau),$
(2.18)
and then pick $U_{0}$ and $U_{2}$ to force the left hand side and the right
hand side to vanish independently. From the left hand side, we find that
$U_{0}$ solves an effective wave equation. In this approach, the effective
mass $\bar{m}$, and hence the wave speed, is guessed to be the correct one.
For a more probabilistic derivation, see Subsection 2.2 below.
Solving the right hand side by picking $U_{2}$ would be easy if only we had
$\chi:{\bf{Z}}^{2}\to{\bf{R}}$ s.t.
$\Delta\chi({\bf{j}})=z({\bf{j}}).$ (2.19)
With such a solution we have
$U_{2}({\bf{j}},{\bf{X}},\tau)=\chi({\bf{j}})\partial_{\tau\tau}U_{0}({\bf{X}},\tau).$
(2.20)
From this, the approximate solution would be
$\widetilde{u}({\bf{j}},{\bf{X}},\tau)=\epsilon^{-1}U_{0}({\bf{X}},\tau)+\epsilon\chi({\bf{j}})\partial_{\tau\tau}U_{0}({\bf{X}},\tau).$
(2.21)
In order to control the magnitude of the residual, and to prove that
$\epsilon^{-1}U_{0}$ is the dominant term in the approximation, we need
asymptotic bounds on $\chi$. However, we do not even know how to solve for
such a $\chi$. To circumvent this issue, we instead solve (2.19) with the
support of $z$ restricted to a finite domain dependent upon $\epsilon$. We
call this restricted solution $\chi_{r}$. The main novelty of this paper is
justifying that
$\widetilde{u}({\bf{j}},{\bf{X}},\tau)=\epsilon^{-1}U_{0}({\bf{X}},\tau)+\epsilon\chi_{r}({\bf{j}})\partial_{\tau\tau}U_{0}({\bf{X}},\tau),$
(2.22)
where $U_{0}$ solves the wave equation, is a valid approximation; we do so by
finding almost sure asymptotic bounds on $\chi_{r}$ in $\epsilon$.
### 2.2 Averaging
Another way to get at (2.17) is by averaging. We make another ad hoc
assumption, this time on $U_{2}$. We require that
$\lim_{|{\bf{j}}|_{\infty}\to\infty}\dfrac{U_{2}({\bf{j}},{\bf{X}},\tau)}{\lvert{\bf{j}}\rvert_{\infty}}=0$
(2.23)
where
$\lvert{\bf{j}}\rvert_{\infty}\coloneqq\max\\{|j_{1}|,|j_{2}|\\}\ \text{for}\
{\bf{j}}=(j_{1},j_{2}).$ (2.24)
We have introduced the $\infty$ norm here for technical reasons. Henceforth
${\bf{0}}$ is the vector $(0,0).$ We define disks
$D({\bf{j}}_{0},r)\coloneqq\\{{\bf{j}}\in{\bf{Z}}^{2}\ |\
|{\bf{j}}-{\bf{j}}_{0}|_{\infty}\leq r\\}.$ (2.25)
The boundary of these disks is given by
$\delta D({\bf{j}}_{0},r)=\\{{\bf{j}}\ |\
|{\bf{j}}-{\bf{j}}_{0}|_{\infty}=r\\}.$ (2.26)
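In the sup norm these disks are squares of lattice points, so $|D({\bf{0}},r)|=(2r+1)^{2}$ and $|\delta D({\bf{0}},r)|=8r$; the cardinality claims are not stated in the paper but follow directly from (2.25)-(2.26). A minimal sketch (the helper names are illustrative):

```python
# Enumerate the sup-norm disk D(0, r) and its boundary from the definitions
# (2.25)-(2.26), and check the counts (2r+1)^2 and 8r.
def disk(r):
    return {(j1, j2) for j1 in range(-r, r + 1) for j2 in range(-r, r + 1)}

def boundary(r):
    return {j for j in disk(r) if max(abs(j[0]), abs(j[1])) == r}

for r in (1, 2, 5):
    assert len(disk(r)) == (2 * r + 1) ** 2
    assert len(boundary(r)) == 8 * r
print(len(disk(5)), len(boundary(5)))  # 121, 40
```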
For any set $U\subset{\bf{Z}}^{2}$, $\lvert U\rvert$ means the number of
elements in the set. With the assumption (2.23), one may argue that for
$U_{2}$ satisfying (2.17) to exist for all ${\bf{X}}$ and $\tau$, it must be
that its spatial average is $0$ i.e.
$\lim_{r\to\infty}\frac{1}{|D({\bf{0}},r)|}\sum_{{\bf{j}}\in
D({\bf{0}},r)}m({\bf{j}})\partial_{\tau\tau}U_{0}({\bf{X}},\tau)-\Delta_{{\bf{X}}}U_{0}({\bf{X}},\tau)=0,$
(2.27)
but by the law of large numbers
$\displaystyle\lim_{r\to\infty}\frac{1}{|D({\bf{0}},r)|}\sum_{{\bf{j}}\in
D({\bf{0}},r)}m({\bf{j}})\partial_{\tau\tau}U_{0}({\bf{X}},\tau)-\Delta_{{\bf{X}}}U_{0}({\bf{X}},\tau)$
(2.28) $\displaystyle=$
$\displaystyle\bar{m}\partial_{\tau\tau}U_{0}({\bf{X}},\tau)-\Delta_{\bf{X}}U_{0}({\bf{X}},\tau).$
Combining (2.27) with (2.28) yields the same effective wave equation as seen
on the left hand side of (2.18),
$\bar{m}\partial_{\tau\tau}U_{0}({\bf{X}},\tau)-\Delta_{\bf{X}}U_{0}({\bf{X}},\tau)=0.$
(2.29)
We now subtract (2.29) from (2.17) yielding
$z({\bf{j}})\partial_{\tau\tau}U_{0}({\bf{X}},\tau)=\check{\Delta}_{0}U_{2}({\bf{j}},{\bf{X}},\tau)$
(2.30)
which is the right hand side of (2.18).
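The law-of-large-numbers step (2.28) is easy to see numerically: the average of i.i.d. masses over $D({\bf{0}},r)$ approaches $\bar{m}$ as $r$ grows. The sketch below assumes a uniform distribution on $[a,b]$ (an illustrative choice; the paper only requires bounded i.i.d. masses):

```python
import numpy as np

# Empirical check of the averaging step: mean of i.i.d. masses over the
# sup-norm disk D(0, r) converges to m_bar = E[m] as r -> infinity.
rng = np.random.default_rng(0)
a, b = 1.0, 3.0
m_bar = (a + b) / 2.0

def disk_average(r):
    side = 2 * r + 1  # D(0, r) in the sup norm is a (2r+1) x (2r+1) square
    masses = rng.uniform(a, b, size=(side, side))
    return masses.mean()

errs = [abs(disk_average(r) - m_bar) for r in (10, 100, 1000)]
print(errs)  # errors shrink roughly at the CLT rate 1/(2r+1)
```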
## 3 Probabilistic Estimates
The intuition for the next step is that we need only solve (2.19) on a domain
which is accessible to the wave: since we only care about $|t|\leq
T\epsilon^{-1}$, the approximation works without having a solution to (2.19)
outside a radius proportional to $T\epsilon^{-1}$. Given enough regularity of
the initial conditions, only a negligible amount of the wave will have
traveled outside this radius. Therefore we can use a spatial cutoff and
well-known results on sub-Gaussian random variables to obtain bounds on
$\chi$. The form of the estimates in this section is motivated by the error
estimates in the following sections, specifically Section 6.
### 3.1 Poisson Problem with Random Source
Let $I:{\bf{Z}}^{2}\to\\{0,1\\}$ s.t.
$I({\bf{j}})=\begin{cases}1&{\bf{j}}={\bf{0}}\\\ 0&{\bf{j}}\neq{\bf{0}}\end{cases}$ (3.1)
and $\varphi:{\bf{Z}}^{2}\to{\bf{R}}$ s.t.
$\Delta\varphi({\bf{j}})=I({\bf{j}}).$ (3.2)
Then $\varphi$ is the fundamental solution and it is known that
$\varphi({\bf{0}})=0$, and for ${\bf{j}}\neq 0$
$\varphi({\bf{j}})=\frac{1}{2\pi}\log|{\bf{j}}|+C_{0}+O(|{\bf{j}}|^{-2}),\
|{\bf{j}}|\to\infty$ (3.3)
where $|{\bf{j}}|\coloneqq\sqrt{j_{1}^{2}+j_{2}^{2}}.$ The proof of the
existence and uniqueness of $\varphi$ can be found in [5]. In (3.3), $\log$
may be replaced with $\log^{+}$ (where $\log^{+}(x)=\max\\{0,\log(x)\\}$ and
$\log^{+}(0)=0$), so all logs in the sequel should be thought of as
$\log^{+}$, even if we neglect the plus sign.
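The fundamental solution can be evaluated numerically via the standard Fourier representation $\varphi({\bf{j}})=\int_{[-\pi,\pi]^{2}}\frac{1-\cos({\bf{j}}\cdot\theta)}{2(2-\cos\theta_{1}-\cos\theta_{2})}\,\frac{d\theta}{(2\pi)^{2}}$; this formula is an assumption of the sketch (a textbook identity for the lattice Green's function), not taken from the paper. Midpoint quadrature avoids the singularity at $\theta={\bf{0}}$ and makes the check $\Delta\varphi=I$ exact (up to rounding) well inside an $M\times M$ torus:

```python
import numpy as np

# Numerical lattice Green's function via midpoint quadrature of its Fourier
# representation (an assumed standard formula), checking (3.2) and the log
# growth (3.3).
M = 256
theta = -np.pi + (np.arange(M) + 0.5) * (2 * np.pi / M)
T1, T2 = np.meshgrid(theta, theta, indexing="ij")
denom = 2.0 * (2.0 - np.cos(T1) - np.cos(T2))  # symbol of -Delta, never 0 here

def phi(j):
    num = 1.0 - np.cos(j[0] * T1 + j[1] * T2)
    return np.sum(num / denom) / M ** 2

def lap_phi(j):
    j = np.asarray(j)
    return (phi(j + [1, 0]) + phi(j - [1, 0])
            + phi(j + [0, 1]) + phi(j - [0, 1]) - 4.0 * phi(j))

print(lap_phi([0, 0]), lap_phi([3, 1]))           # approx 1 and 0, as in (3.2)
print(phi([20, 0]) - np.log(20.0) / (2 * np.pi))  # roughly the constant C_0
```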
Define
$\chi_{r}({\bf{j}})\coloneqq\sum_{{\bf{k}}\in
D({\bf{0}},r)}\varphi({\bf{j}}-{\bf{k}})z({\bf{k}})$ (3.4)
where the $z({\bf{k}})$ and $D({\bf{0}},r)$ are defined in Subsection 2.1. The
following is true
$\Delta\chi_{r}({\bf{j}})=\begin{cases}z({\bf{j}})&{\bf{j}}\in
D({\bf{0}},r)\\\ 0&{\bf{j}}\in{\bf{Z}}^{2}\setminus D({\bf{0}},r)\end{cases}.$ (3.5)
For any generic linear operator $\mathcal{L}$ acting on functions of
${\bf{Z}}^{2}$, we have
$\mathcal{L}(\chi_{r})({\bf{j}})=\sum_{{\bf{k}}\in
D({\bf{0}},r)}\mathcal{L}(\varphi)({\bf{j}}-{\bf{k}})z({\bf{k}}).$ (3.6)
### 3.2 Estimates on the Solution
Recall that the $z({\bf{j}})\in[a-\bar{m},b-\bar{m}]$ almost surely and have
mean $0$. Therefore Hoeffding’s Lemma [13] tells us that for all $s\in{\bf{R}}$
${\mathbb{E}}\left[\exp({sz({\bf{j}})})\right]\leq\exp\left({\frac{s^{2}(a-b)^{2}}{8}}\right).$
(3.7)
Moreover
${\mathbb{E}}\left[\exp\left({s\mathcal{L}(\varphi)({\bf{j}}-{\bf{k}})z({\bf{k}})}\right)\right]\leq\exp\left({\frac{s^{2}(\mathcal{L}(\varphi)({\bf{j}}-{\bf{k}}))^{2}(a-b)^{2}}{8}}\right).$
(3.8)
By the independence of the $z({\bf{k}})$
${\mathbb{E}}\left[\exp\left(s\mathcal{L}(\chi_{r})({\bf{j}})\right)\right]\leq\exp\left(\frac{s^{2}\left\|\mathcal{L}\varphi\right\|_{D({\bf{j}},r)}^{2}(a-b)^{2}}{8}\right)$
(3.9)
where
$\left\|\mathcal{L}\varphi\right\|^{2}_{D({\bf{j}},r)}\coloneqq\sum_{{\bf{k}}\in
D({\bf{j}},r)}(\mathcal{L}(\varphi)({\bf{k}}))^{2}=\sum_{{\bf{k}}\in
D(0,r)}(\mathcal{L}(\varphi)({\bf{j}}-{\bf{k}}))^{2}.$ (3.10)
From (3.9), it follows that the $\mathcal{L}(\chi_{r})({\bf{j}})$ are
sub-Gaussian, with proxy variance given by
$\frac{(a-b)^{2}}{4}\left\|\mathcal{L}\varphi\right\|^{2}_{D({\bf{j}},r)}.$
Letting $P$ be the probability measure of the random masses, a common estimate
on sub-Gaussian random variables, also found in [13], says
$P(|\mathcal{L}(\chi_{r})({\bf{j}})|\geq t)\leq
2\exp\left(\frac{-2t^{2}}{(a-b)^{2}\left\|\mathcal{L}\varphi\right\|^{2}_{D({\bf{j}},r)}}\right).$
(3.11)
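The tail bound (3.11) can be observed in simulation. The sketch below uses illustrative weights standing in for $\mathcal{L}(\varphi)({\bf{j}}-{\bf{k}})$ and $z$ uniform on $[-1,1]$ (both assumptions of this sketch), and compares the empirical tail of the weighted sum to the Hoeffding-type bound:

```python
import numpy as np

# Monte Carlo check of the sub-Gaussian tail bound (3.11) for a weighted sum
# of bounded, mean-zero i.i.d. variables. Weights and distribution are
# illustrative stand-ins, not quantities from the paper.
rng = np.random.default_rng(1)
w = 1.0 / (1.0 + np.arange(20.0))  # stand-in for L(phi)(j - k)
width = 2.0                        # b - a, with z uniform on [-1, 1]
n_samples = 200_000
z = rng.uniform(-1.0, 1.0, size=(n_samples, w.size))
S = z @ w                          # samples of L(chi_r)(j)

t = 2.0
empirical = np.mean(np.abs(S) >= t)
bound = 2.0 * np.exp(-2.0 * t * t / (width ** 2 * np.sum(w ** 2)))
print(empirical, bound)  # empirical tail sits well below the bound
```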
To make use of (3.11), we order the $\mathcal{L}(\chi_{r})({\bf{j}})$ into a
sequence. First, restrict $r$ to take on only values in
${\bf{N}}^{+}\setminus\\{1\\}$. We map $r$ and ${\bf{j}}$ to a set of indices
using a bijection $I:{\bf{Z}}^{2}\times{\bf{N}}^{+}\setminus\\{1\\}\to{\bf{N}}^{+}$
which satisfies the inequality
$I({\bf{j}},r)\leq C\max\\{r^{3},|{\bf{j}}|^{3}\\}.$ (3.12)
Such a bijection is possible by Lemma 11.1. We let
$\mathcal{L}(\chi)(n)\coloneqq\mathcal{L}(\chi_{r})({\bf{j}})\ \text{and}\
\left\|\mathcal{L}\varphi\right\|^{2}_{D(n)}\coloneqq\left\|\mathcal{L}\varphi\right\|^{2}_{D({\bf{j}},r)}$
(3.13)
when $I({\bf{j}},r)=n$. One sees we still have (3.11) as
$P(|\mathcal{L}(\chi)(n)|\geq t)\leq
2\exp\left(\frac{-2t^{2}}{\left\|\mathcal{L}\varphi\right\|^{2}_{D(n)}(a-b)^{2}}\right).$
(3.14)
Using the Borel-Cantelli Lemma [6], we have that for all but finitely many
$n$
$|\mathcal{L}(\chi)(n)|\leq\sqrt{(a-b)^{2}\left\|\mathcal{L}\varphi\right\|^{2}_{D(n)}\log(n)}$
(3.15)
almost surely. Then (3.15) and (3.12) imply that there almost surely exists
$C_{\omega}$ s.t.
$|\mathcal{L}(\chi_{r})({\bf{j}})|\leq
C_{\omega}\sqrt{\left\|\mathcal{L}\varphi\right\|^{2}_{D({\bf{j}},r)}\left(\log(|{\bf{j}}|)+\log(r)\right)}.$
(3.16)
For $\varphi$ given by (3.3) we find that for some constant $C$
$\left\|\varphi\right\|^{2}_{D({\bf{j}},r)}\leq Cr^{2}\log(|{\bf{j}}|+r)^{2}.$
(3.17)
A quartet of $\mathcal{L}$ operators we need to consider is
$S_{i}^{\pm}(\xi)({\bf{j}})\coloneqq\xi({\bf{j}}\pm{\bf{e}}_{i}).$ (3.18)
Notice that for $S_{i}^{\pm}$, there exists a $C$ s.t.
$\left\|S^{\pm}_{i}\varphi\right\|^{2}_{D({\bf{j}},r)}\leq
Cr^{2}\log(|{\bf{j}}|+r)^{2}.$ (3.19)
Another pair of $\mathcal{L}$ operators we need to consider is $\delta_{i}$
which we define as
$\delta_{i}(\varphi)({\bf{j}})\coloneqq\varphi({\bf{j}}+{\bf{e}}_{i})-\varphi({\bf{j}}-{\bf{e}}_{i}).$
(3.20)
From Lemma 11.3, there exists a $C$ s.t.
$|\delta_{i}(\varphi)({\bf{j}})|\leq\frac{C}{|{\bf{j}}|+1}.$ (3.21)
From Lemma 11.4 there exists another constant $C$ s.t.
$\left\|\delta_{i}\varphi\right\|^{2}_{D({\bf{j}},r)}\leq
C\log(|{\bf{j}}|+r).$ (3.22)
From (3.16), (3.17), and (3.22) we get the following theorem.
###### Theorem 3.1.
Let $\\{z({\bf{j}})\\}_{{\bf{j}}\in{\bf{Z}}^{2}}$ be independent, mean zero
random variables with $z({\bf{j}})\in[a-\bar{m},b-\bar{m}]\subset{\bf{R}}$ a.s. Let
$r\geq 2$ be any real number, and $\chi_{r}({\bf{j}})$ be as defined in (3.4)
with $\varphi$ defined by (3.2). Then there almost surely exists a constant
$C_{\omega}$ s.t.
$|\chi_{r}({\bf{j}})|\leq
C_{\omega}r\left(\log(|{\bf{j}}|)^{\frac{3}{2}}+\log(|r|)^{\frac{3}{2}}\right),$
(3.23) $|S_{i}^{\pm}(\chi_{r})({\bf{j}})|\leq
C_{\omega}r\left(\log(|{\bf{j}}|)^{\frac{3}{2}}+\log(|r|)^{\frac{3}{2}}\right)$
(3.24)
and
$|\delta_{i}(\chi_{r})({\bf{j}})|\leq
C_{\omega}\left(\log(|{\bf{j}}|)+\log(|r|)\right).$ (3.25)
###### Remark 1.
Note that the definition in (3.4) implies $\chi_{r}=\chi_{\lfloor r\rfloor}$,
and thus the theorem holds for all real $r\geq 2$, even though $r$ was
originally an integer in the bijection in (3.12).
###### Remark 2.
The $\omega$ indicates that the constant depends on the realization of the
masses.
###### Proof.
Insert (3.17), (3.19), and (3.22) into (3.16). After sweeping any constants into
$C_{\omega}$, one has
$|\chi_{r}({\bf{j}})|\leq
C_{\omega}r\log(|{\bf{j}}|+r)\sqrt{\left(\log(|{\bf{j}}|)+\log(r)\right)}$
(3.26)
and
$|\delta_{i}(\chi_{r})({\bf{j}})|\leq
C_{\omega}\sqrt{\log(|{\bf{j}}|+r)\left(\log(|{\bf{j}}|)+\log(r)\right)}.$
(3.27)
An elementary argument in Lemma 11.2 yields
$|\chi_{r}({\bf{j}})|\leq
C_{\omega}r\left(\log(|{\bf{j}}|)+\log(r)\right)^{\frac{3}{2}}$ (3.28)
and
$|\delta_{i}(\chi_{r})({\bf{j}})|\leq
C_{\omega}\left(\log(|{\bf{j}}|)+\log(r)\right).$ (3.29)
A typical argument using convexity of $(\cdot)^{\frac{3}{2}}$ and sweeping
constants into $C_{\omega}$ now gives the result. ∎
## 4 Lattice Energy Argument
### 4.1 Error Estimate
We want the approximate solution to be good for $t$ s.t.
$|t|\leq\frac{T}{\epsilon}$. Recall that the wave travels at an effective
speed given by
$c\coloneqq(\bar{m})^{-\frac{1}{2}}$ (4.1)
according to (2.29). Therefore, let
$R_{\epsilon}\coloneqq\frac{cT+\epsilon^{-\sigma}}{\epsilon}+1$ (4.2)
where $\sigma$ is any small positive real number. Recall that $\tau=\epsilon
t$ and ${\bf{X}}=\epsilon{\bf{j}}$. Then the approximate solution has the form
of (2.22) and is
$\widetilde{u}({\bf{j}},t)=\epsilon^{-1}U({\bf{X}},\tau)+\epsilon\chi_{R_{\epsilon}}({\bf{j}})\partial_{\tau\tau}U({\bf{X}},\tau),$
(4.3)
where $U$ satisfies (2.29) and $\chi_{R_{\epsilon}}$ is defined by (3.4) so
that $\chi_{R_{\epsilon}}$ solves (3.5). Define the “error”
$\xi({\bf{j}},t)\coloneqq\widetilde{u}({\bf{j}},t)-u({\bf{j}},t).$ (4.4)
Using (1.1) we find
$m({\bf{j}})\ddot{\xi}({\bf{j}},t)=\Delta\xi({\bf{j}},t)+\operatorname{Res}{\widetilde{u}}({\bf{j}},t)$
(4.5)
where $\operatorname{Res}{\widetilde{u}}$ is given by (2.1). Before
going further it is useful to define some other operators similar to
$\delta_{i}$ given by (3.20). We also have the two partial difference
operators
$\delta_{i}^{\pm}(\xi)({\bf{j}})\coloneqq\pm(\xi({\bf{j}}\pm{\bf{e}}_{i})-\xi({\bf{j}})).$
(4.6)
The energy of the error is given by
$H(t)\coloneqq\frac{1}{2}\sum_{{\bf{j}}\in{\bf{Z}}^{2}}m({\bf{j}})\dot{\xi}({\bf{j}},t)^{2}+(\delta^{+}_{1}\xi({\bf{j}},t))^{2}+(\delta^{+}_{2}\xi({\bf{j}},t))^{2}.$
(4.7)
Differentiating gives
$\dot{H}(t)=\sum_{{\bf{j}}\in{\bf{Z}}^{2}}\left(m({\bf{j}})\dot{\xi}({\bf{j}},t)\ddot{\xi}({\bf{j}},t)+\sum_{i=1}^{2}\delta^{+}_{i}\xi({\bf{j}},t)\delta^{+}_{i}\dot{\xi}({\bf{j}},t)\right).$
(4.8)
We insert (4.5)
$\dot{H}(t)=\sum_{{\bf{j}}\in{\bf{Z}}^{2}}\left(\dot{\xi}({\bf{j}},t)(\Delta\xi({\bf{j}},t)+\operatorname{Res}{\widetilde{u}}({\bf{j}},t))+\sum_{i=1}^{2}\delta^{+}_{i}\xi({\bf{j}},t)\delta^{+}_{i}\dot{\xi}({\bf{j}},t)\right).$
(4.9)
Via summation by parts
$\sum_{{\bf{j}}\in{\bf{Z}}^{2}}\left(\dot{\xi}({\bf{j}},t)\Delta\xi({\bf{j}},t)+\sum_{i=1}^{2}\delta^{+}_{i}\xi({\bf{j}},t)\delta^{+}_{i}\dot{\xi}({\bf{j}},t)\right)=0.$
(4.10)
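The summation-by-parts identity (4.10) is exact on the lattice and can be verified directly. The sketch below uses random, compactly supported stand-ins for $\xi$ and $\dot{\xi}$ on a finite grid (so no boundary terms appear):

```python
import numpy as np

# Direct check of the discrete summation-by-parts identity (4.10) on a finite
# grid, with compactly supported random fields standing in for xi, xi_dot.
rng = np.random.default_rng(2)
N = 32
xi = np.zeros((N, N))
xi_dot = np.zeros((N, N))
xi[8:24, 8:24] = rng.standard_normal((16, 16))      # support away from edges
xi_dot[8:24, 8:24] = rng.standard_normal((16, 16))

def lap(f):  # 5-point lattice Laplacian
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def dplus(f, axis):  # forward difference delta_i^+
    return np.roll(f, -1, axis) - f

total = np.sum(xi_dot * lap(xi)) + sum(np.sum(dplus(xi, i) * dplus(xi_dot, i))
                                       for i in (0, 1))
print(total)  # vanishes up to floating-point rounding
```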
Hence (4.9) with Cauchy-Schwarz becomes
$\dot{H}(t)=\sum_{{\bf{j}}\in{\bf{Z}}^{2}}\dot{\xi}({\bf{j}},t)\operatorname{Res}{\widetilde{u}}({\bf{j}},t)\leq\left\|\dot{\xi}(\cdot,t)\right\|_{\ell^{2}}\left\|\operatorname{Res}{\widetilde{u}}(\cdot,t)\right\|_{\ell^{2}}$
(4.11)
The squared $\ell^{2}$ norm
$\left\|\dot{\xi},\delta^{+}_{1}\xi,\delta^{+}_{2}\xi\right\|_{\ell^{2}}^{2}$
and the energy $H$ are equivalent, and so there exists a constant $C$
depending on $a$ and $b$ from (1.1) s.t. (4.11) becomes
$\dot{H}(t)(H(t))^{-\frac{1}{2}}\leq
C\left\|\operatorname{Res}{\widetilde{u}}(\cdot,t)\right\|_{\ell^{2}}.$ (4.12)
Integrating from $0$ to $t$ for any $|t|\leq\frac{T}{\epsilon}$ gives
$\left(H(t)\right)^{\frac{1}{2}}\leq\left(H(0)\right)^{\frac{1}{2}}+\epsilon^{-1}CT\sup_{|t|\leq\epsilon^{-1}T}\left\|\operatorname{Res}{\widetilde{u}}(\cdot,t)\right\|_{\ell^{2}}.$
(4.13)
By the equivalence of norms and (4.13)
$\left\|\dot{\xi}(\cdot,t),\delta_{1}^{+}\xi(\cdot,t),\delta_{2}^{+}\xi(\cdot,t)\right\|_{\ell^{2}}\leq
C\left\|\dot{\xi}(\cdot,0),\delta_{1}^{+}\xi(\cdot,0),\delta_{2}^{+}\xi(\cdot,0)\right\|_{\ell^{2}}+\epsilon^{-1}CT\sup_{|t|\leq\epsilon^{-1}T}\left\|\operatorname{Res}{\widetilde{u}}(\cdot,t)\right\|_{\ell^{2}}$
(4.14)
The bound (4.14) is valid for all $t$ s.t. $|t|\leq\frac{T}{\epsilon}$. Let us
suppose that
$\left\|\dot{\xi}(\cdot,0),\delta_{1}^{+}\xi(\cdot,0),\delta_{2}^{+}\xi(\cdot,0)\right\|_{\ell^{2}}\leq\epsilon^{-1}CT\sup_{|t|\leq\epsilon^{-1}T}\left\|\operatorname{Res}{\widetilde{u}}(\cdot,t)\right\|_{\ell^{2}}.$
(4.15)
Furthermore let
$p({\bf{j}},t)\coloneqq\dot{u}({\bf{j}},t)\ \text{and}\
r_{i}({\bf{j}},t)\coloneqq\delta_{i}^{+}u({\bf{j}},t)$ (4.16)
with the analogous definitions of $\widetilde{p}$ and $\widetilde{r}_{i}$.
These are the velocities and relative displacements of the masses. Then, for
some constant $C$ dependent on $T$, (4.14) becomes
$\sup_{|t|\leq\epsilon^{-1}T}\left\|p-\widetilde{p},r_{1}-\widetilde{r}_{1},r_{2}-\widetilde{r}_{2}\right\|_{\ell^{2}}\leq\epsilon^{-1}C\left(\sup_{|t|\leq\epsilon^{-1}T}\left\|\operatorname{Res}{\widetilde{u}}\right\|_{\ell^{2}}\right).$
(4.17)
From (4.17) and (4.16), integrating in time also gives
$\sup_{|t|\leq\epsilon^{-1}T}\left\|u-\widetilde{u}\right\|_{\ell^{2}}\leq\epsilon^{-2}C\left(\sup_{|t|\leq\epsilon^{-1}T}\left\|\operatorname{Res}{\widetilde{u}}\right\|_{\ell^{2}}\right).$
(4.18)
Controlling the residual is essential to proving our main theorem, so the
next several sections are devoted to proving the following proposition.
###### Proposition 4.1.
Suppose the $m(j)$ are i.i.d. random variables contained in some interval
$[a,b]\subset{\bf{R}}^{+}$ almost surely. Then almost surely there exists a
constant $C_{\omega}(T,a,b,\bar{m})$ s.t.
$\sup_{|t|\leq\epsilon^{-1}T}\left\|\operatorname{Res}\widetilde{u}\right\|_{\ell^{2}}\leq\epsilon^{1-\sigma}C_{\omega}\log(\epsilon^{-1-\sigma})^{\frac{3}{2}}\left\|\Delta\phi,\psi\right\|_{H^{5}_{\sigma}}.$
(4.19)
The norm $\left\|\Delta\phi,\psi\right\|_{H^{5}_{\sigma}}$ is a norm on the
initial conditions of $U({\bf{X}},\tau)$ given in Subsection 5.1. The norm is
defined in Subsection 5.4. Crucially, it does not depend on $\epsilon$.
### 4.2 Calculating the Residual
Now we calculate (2.1) for our approximate solution given by (4.3). We do not
write the arguments of functions when we do not need to, but recall that
${\bf{X}}=\epsilon j$ and $\tau=\epsilon t.$ Recall the operators defined in
(3.18), (3.20), and (4.6). We also have
$\Delta_{i}u\coloneqq S_{i}^{+}u-2u+S_{i}^{-}u$ (4.20)
and note that
$\Delta u=\Delta_{1}u+\Delta_{2}u.$ (4.21)
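As a quick concrete check of these discrete operators (a Python sketch; the periodic wrap-around of `np.roll` stands in for the infinite lattice ${\bf{Z}}^{2}$ and is an assumption of the snippet, not part of the analysis):

```python
import numpy as np

def shift(u, i, s):
    """(S_i^{+} u)(j) = u(j + e_i) for s = +1, and S_i^{-} for s = -1.
    np.roll's periodic wrap stands in for the infinite lattice."""
    return np.roll(u, -s, axis=i)

def lap_i(u, i):
    """Delta_i u = S_i^+ u - 2 u + S_i^- u, as in (4.20)."""
    return shift(u, i, +1) - 2 * u + shift(u, i, -1)

rng = np.random.default_rng(0)
u = rng.standard_normal((16, 16))

# (4.21): Delta u = Delta_1 u + Delta_2 u equals the usual 5-point stencil.
full = lap_i(u, 0) + lap_i(u, 1)
stencil = (np.roll(u, -1, 0) + np.roll(u, 1, 0)
           + np.roll(u, -1, 1) + np.roll(u, 1, 1) - 4 * u)
assert np.allclose(full, stencil)
```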
Next calculate using Lemma 11.5 that
$\Delta\widetilde{u}=\epsilon^{-1}\Delta
U+\epsilon\Delta(\chi_{R_{\epsilon}})U_{\tau\tau}+\epsilon\sum_{i=1}^{2}\left[\delta_{i}(\chi_{R_{\epsilon}})\delta_{i}^{-}(U_{\tau\tau})+S_{i}^{+}(\chi_{R_{\epsilon}})\Delta_{i}(U_{\tau\tau})\right].$
(4.22)
For $D({\bf{0}},R_{\epsilon})\subset{\bf{Z}}^{2}$, we denote the indicator of
$D({\bf{0}},R_{\epsilon})$ as $\mathbb{I}_{D({\bf{0}},R_{\epsilon})}$. The
complement of $D({\bf{0}},R_{\epsilon})$ is $D({\bf{0}},R_{\epsilon})^{c}.$
Using (3.5) in (4.22)
$\Delta\widetilde{u}=\epsilon^{-1}\Delta U+\epsilon
z\mathbb{I}_{D({\bf{0}},R_{\epsilon})}U_{\tau\tau}+\epsilon\sum_{i=1}^{2}\left[\delta_{i}(\chi_{R_{\epsilon}})\delta_{i}^{-}(U_{\tau\tau})+S_{i}^{+}(\chi_{R_{\epsilon}})\Delta_{i}(U_{\tau\tau})\right].$
(4.23)
Recall that ${\bf{z}}({\bf{j}})=m({\bf{j}})-\bar{m}$ and that $U$ solves
(2.29). Then (4.23) becomes
$\Delta\widetilde{u}=\epsilon^{-1}\Delta
U-\epsilon\Delta_{\bf{X}}U+\epsilon\left(mU_{\tau\tau}-z\mathbb{I}_{D({\bf{0}},R_{\epsilon})^{c}}U_{\tau\tau}+\sum_{i=1}^{2}\left[\delta_{i}(\chi_{R_{\epsilon}})\delta_{i}^{-}(U_{\tau\tau})+S_{i}^{+}(\chi_{R_{\epsilon}})\Delta_{i}(U_{\tau\tau})\right]\right)$
(4.24)
where, as above, $D({\bf{0}},R_{\epsilon})^{c}$ denotes the complement of
$D({\bf{0}},R_{\epsilon})$. Plugging (4.24) into (2.1) we have
$\displaystyle\operatorname{Res}{\widetilde{u}}=$
$\displaystyle\epsilon^{-1}(\epsilon^{2}\Delta_{\bf{X}}U-\Delta
U)+\epsilon\left(z\mathbb{I}_{D({\bf{0}},R_{\epsilon})^{c}}U_{\tau\tau}-\sum_{i=1}^{2}\left[\delta_{i}(\chi_{R_{\epsilon}})\delta_{i}^{-}(U_{\tau\tau})+S_{i}^{+}(\chi_{R_{\epsilon}})\Delta_{i}(U_{\tau\tau})\right]\right)$
(4.25) $\displaystyle+\epsilon^{3}m\chi_{R_{\epsilon}}U_{\tau\tau\tau\tau}.$
## 5 The Effective Wave
### 5.1 Initial Conditions
We specify the initial conditions for (1.1). For smooth enough functions
$\phi,\psi:{\bf{R}}^{2}\to{\bf{R}}$, we let
$u({\bf{j}},0)=\epsilon^{-1}\phi(\epsilon{\bf{j}})\ \text{and}\
\dot{u}({\bf{j}},0)=\psi(\epsilon{\bf{j}}).$ (5.1)
With these initial conditions, the initial relative displacements and
velocities as defined in (4.16) are roughly $\epsilon^{-1}$ in $\ell^{2}$. If the initial
conditions for $\widetilde{u}$ are defined analogously to (5.1) with
$\widetilde{\phi}$ and $\widetilde{\psi},$ then in order to satisfy (4.15), we
need informally that
$\left\|\phi-\widetilde{\phi},\psi-\widetilde{\psi}\right\|_{\ell^{2}}\leq\epsilon
C.$ (5.2)
Let us suppose that (4.15) is satisfied so that we do not need to worry about
the disparity of the initial conditions. Therefore, to save on writing, $\phi$
and $\psi$ can simply refer to the initial conditions of either the true or
approximate solution for now.
Suppose we have the following
$U({\bf{X}},0)=\phi({\bf{X}})\ \text{and}\
\partial_{\tau}U({\bf{X}},0)=\psi({\bf{X}}).$ (5.3)
Then $\epsilon^{-1}U({\bf{X}},\tau)$ satisfies (2.29), and so, according to
(4.3), excluding the higher order term (which we show later is small enough to
ignore), we have
$\widetilde{u}({\bf{j}},t)=\epsilon^{-1}U({\bf{X}},\tau).$ (5.4)
This is the correct approximate solution in that it satisfies (2.29) and
(5.1).
### 5.2 The Energy
Recall that $c$ is the effective wave speed defined in (4.1). In order to
bound the terms in (4.25), we need to control $L^{2}$ norms of derivatives of
$U$ in terms of the initial conditions. This can be done using arguments
involving the energy of $U$. For
$V({\bf{X}},\tau):{\bf{R}}^{2}\times{\bf{R}}\to{\bf{R}}^{2^{k}}$, let $DV$ be
the total derivative of $V$ with respect to its $2$ spatial components. In
particular $D$ is the gradient, $\nabla$, of scalar valued functions. For each
$j\in\\{0,1\dots,2^{k}\\},$ define
$E(V_{j})(\tau)\coloneqq\frac{1}{2}\int_{{\bf{R}}^{2}}\left(\partial_{\tau}V_{j}\right)^{2}+c^{2}\left|\nabla\left(V_{j}\right)\right|^{2}d{\bf{X}}.$
(5.5)
Then $E(V)(\tau)$ represents a $2^{k}$ dimensional vector of energies. For $U$
satisfying (2.29) with initial conditions (5.3), it is well known (see for
instance [4] or [7]) that for all $\tau$
$\left|E\left(D^{k}\frac{\partial^{i}U}{\partial\tau^{i}}\right)(\tau)\right|\leq
C\left(\left\|\nabla\phi\right\|_{H^{i+k}}^{2}+\left\|\psi\right\|_{H^{i+k}}^{2}\right).$
(5.6)
The proof is also provided in Lemma 11.6 in the Appendix. The energy of
derivatives of $U$ (5.5) is equivalent to $L^{2}$ norms of derivatives of $U$.
$\sum_{j=1}^{k}\left\|D^{j}\frac{\partial^{i}U}{\partial\tau^{i}}\right\|^{2}_{L^{2}}\leq
C\sum_{j=0}^{k-1}\left|E\left(D^{j}\frac{\partial^{i}U}{\partial\tau^{i}}\right)(\tau)\right|\leq
C\left(\left\|\nabla\phi\right\|^{2}_{H^{i+k-1}}+\left\|\psi\right\|^{2}_{H^{i+k-1}}\right).$
(5.7)
The constant depends on $\bar{m}$, and we require that $\nabla\phi\in H^{i+k-1}$
and $\psi\in H^{i+k-1}.$
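The role of (5.5)-(5.7) is easy to see numerically. Assuming (2.29) is the constant-coefficient wave equation $U_{\tau\tau}=c^{2}\Delta_{\bf{X}}U$, a spectral solution conserves each modal energy exactly, so the total energy (5.5) is constant in $\tau$. A minimal Python sketch on a periodic box (the box, grid size, $c=1$, zero initial velocity, and the sech profile are all illustrative assumptions):

```python
import numpy as np

# Illustrative setup: periodic box [-10, 10)^2, 64^2 grid, wave speed c = 1.
n, Lbox, c = 64, 10.0, 1.0
h = 2 * Lbox / n
x = -Lbox + h * np.arange(n)
X1, X2 = np.meshgrid(x, x, indexing="ij")
phi = 1 / np.cosh(X1 ** 2 + X2 ** 2)      # smooth, rapidly decaying data

k = 2 * np.pi * np.fft.fftfreq(n, d=h)
K1, K2 = np.meshgrid(k, k, indexing="ij")
K = np.sqrt(K1 ** 2 + K2 ** 2)
ph = np.fft.fft2(phi)

def energy(tau):
    """Discrete version of (5.5) for the spectral solution of
    U_tt = c^2 Lap U with U(0) = phi and U_t(0) = 0."""
    U = ph * np.cos(c * K * tau)
    Ut = -c * K * ph * np.sin(c * K * tau)
    dens = np.abs(Ut) ** 2 + c ** 2 * (K1 ** 2 + K2 ** 2) * np.abs(U) ** 2
    # Parseval: the box integral equals (h/n)^2 times the sum over modes.
    return 0.5 * (h / n) ** 2 * np.sum(dens)

print(energy(0.0), energy(2.5))
```

Mode by mode, $|\widehat{U}_{\tau}|^{2}+c^{2}|{\bf k}|^{2}|\widehat{U}|^{2}=c^{2}|{\bf k}|^{2}|\widehat{\phi}|^{2}$ is independent of $\tau$, which is why the two printed values agree to rounding error.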
### 5.3 Weighted Energy
In (4.25), there are terms where $U$ is “weighted” by some version of $\chi$.
By (3.24), the spatially varying aspect of these can be bounded by functions
involving logarithms. We need a way to deal with these weights in a continuous
setting. We can clearly find a smooth function
$w:{\bf{R}}^{2}\to{\bf{R}}$ s.t.
$w({\bf{X}})=\log(|{\bf{X}}|+1)^{\frac{3}{2}}+1\ \forall\ |{\bf{X}}|\geq 1\
\text{and}\ w({\bf{X}})\geq 1,$ (5.8)
and for all $n\in{\bf{N}}^{+}$ there exists a constant $B$ s.t.
$\left|D^{n}w\right|\leq B\quad\text{and}\quad\left|D(w^{2})\right|\leq B.$
(5.9)
In the same setting as (5.5), the possibly vector-valued “weighted energy” is
given by
$E_{w}(V_{j})(\tau)\coloneqq\frac{1}{2}\int_{{\bf{R}}^{2}}w^{2}\left(\partial_{\tau}V_{j}\right)^{2}+c^{2}w^{2}\left|\nabla\left(V_{j}\right)\right|^{2}d{\bf{X}}.$
(5.10)
According to Lemma 11.7, for all $\tau$
$\left|E_{w}\left(D^{k}\frac{\partial^{i}U}{\partial\tau^{i}}\right)(\tau)\right|\leq|\tau|C\left(\left\|\nabla\phi\right\|^{2}_{H_{w}^{i+k}}+\left\|\psi\right\|^{2}_{H_{w}^{i+k}}\right),$
(5.11)
where from (5.8)
$\left\|\psi\right\|_{H^{k}}\leq\left\|\psi\right\|_{H^{k}_{w}}\coloneqq\sum_{j=0}^{k}\left\|wD^{j}\psi\right\|_{L^{2}}.$
(5.12)
The constant depends on $\bar{m}$ as well as $B$. Assumptions (5.8) and (5.9)
on $w$ also give us
$\left\|D^{k}\left(w\frac{\partial^{i}U}{\partial\tau^{i}}\right)\right\|_{L^{2}}\leq
C\sum_{j=0}^{k}\left\|wD^{j}\frac{\partial^{i}U}{\partial\tau^{i}}\right\|_{L^{2}}\leq
C\sum_{j=0}^{k}\sqrt{\left|E_{w}\left(D^{j}\frac{\partial^{i-1}U}{\partial\tau^{i-1}}\right)\right|}.$
(5.13)
One sees from (5.11) and (5.13) that for $i\geq 1$
$\left\|w\frac{\partial^{i}U}{\partial\tau^{i}}\right\|_{H^{k}}\leq
C\sqrt{|\tau|}\left(\left\|\nabla\phi\right\|_{H_{w}^{i+k-1}}+\left\|\psi\right\|_{H_{w}^{i+k-1}}\right)$
(5.14)
which holds for all $\tau.$
### 5.4 Tail Energy
We have already defined disks on the lattice, so to be very clear, we let
$B(r)=\\{{\bf{X}}\in{\bf{R}}^{2}\ |\ |{\bf{X}}|<r\\}.$ (5.15)
Recall that $U$ satisfies (2.29) and travels with speed $c$. In the same
setting as (5.5), the energy at the tails of the function is given by
$\widetilde{E}(V_{j})(\tau)\coloneqq\int_{B(c|\tau|+\epsilon^{-\sigma})^{c}}\left(\partial_{\tau}V_{j}\right)^{2}+c^{2}\left|\nabla\left(V_{j}\right)\right|^{2}d{\bf{X}}.$
(5.16)
Lemma 11.8 shows that for $i\geq 1$
$\left|\widetilde{E}\left(D^{k}\frac{\partial^{i}U}{\partial\tau^{i}}\right)(\tau)\right|\leq
C\left(\left\|\nabla\phi\right\|^{2}_{H^{i+k}(B(\epsilon^{-\sigma})^{c})}+\left\|\psi\right\|^{2}_{H^{i+k}(B(\epsilon^{-\sigma})^{c})}\right)$
(5.17)
where $C$ depends only on $\bar{m}.$ We have
$\left\|\frac{\partial^{i}}{\partial\tau^{i}}U\right\|_{H^{k}(B(c|\tau|+\epsilon^{-\sigma})^{c})}\leq
C\left(\left\|\nabla\phi\right\|_{H^{i+k-1}(B(\epsilon^{-\sigma})^{c})}+\left\|\psi\right\|_{H^{i+k-1}(B(\epsilon^{-\sigma})^{c})}\right).$
(5.18)
It follows from Lemma 11.9 that for any $\sigma>0$
$\left\|\psi\right\|_{H^{i+k}(B(\epsilon^{-\sigma})^{c})}\leq\epsilon\left\|\psi\right\|_{H^{i+k}_{\sigma}}\coloneqq\epsilon\sum_{j=0}^{i+k}\left\|\left(1+\left|\
\cdot\ \right|\right)^{\sigma^{-1}}\left|D^{j}\psi\right|\right\|_{L^{2}}.$
(5.19)
Therefore as long as $i\geq 1$, (5.18) becomes
$\left\|\frac{\partial^{i}}{\partial\tau^{i}}U\right\|_{H^{k}(B(c|\tau|+\epsilon^{-\sigma})^{c})}\leq\epsilon
C\left(\left\|\nabla\phi\right\|_{H^{i+k-1}_{\sigma}}+\left\|\psi\right\|_{H^{i+k-1}_{\sigma}}\right),$
(5.20)
which holds for all $\tau$.
## 6 Residual Estimates
We bound the terms in (4.25), from left to right, in terms of the initial
conditions given by (5.1).
###### Lemma 6.1.
For $U$ satisfying (2.29) with sufficiently smooth initial conditions given by
(5.1), there exists a constant $C$ depending only on $\bar{m}$ s.t.
$\sup_{|t|\leq\epsilon^{-1}T}\epsilon^{-1}\left\|\Delta
U(\epsilon\cdot,\epsilon
t)-\epsilon^{2}\Delta_{\bf{X}}U(\epsilon\cdot,\epsilon
t)\right\|_{\ell^{2}}\leq\epsilon^{2}4C\left(\left\|\nabla\phi\right\|_{H^{5}}+\left\|\psi\right\|_{H^{5}}\right)$
(6.1)
###### Proof.
Recall the definition of $\Delta_{\bf{X}}U$ in (2.15). For each $i=1,2$, by
Taylor’s Theorem with remainder, there exists a $\widehat{{\bf{j}}}$ with
$\widehat{j}_{i}\in[j_{i}-1,j_{i}+1]$ s.t
$\Delta_{i}U(\epsilon{\bf{j}})-\epsilon^{2}\partial_{X_{i}X_{i}}(U)(\epsilon{\bf{j}})=\epsilon^{4}\partial_{X_{i}X_{i}X_{i}X_{i}}(U)(\epsilon\widehat{{\bf{j}}}).$
(6.2)
By convexity of $(\cdot)^{2}$,
$\left(\Delta
U(\epsilon{\bf{j}})-\epsilon^{2}\Delta_{\bf{X}}(U)(\epsilon{\bf{j}})\right)^{2}\leq
2\sum_{i=1}^{2}\left(\Delta_{i}U(\epsilon{\bf{j}})-\epsilon^{2}\partial_{X_{i}X_{i}}(U)(\epsilon{\bf{j}})\right)^{2}.$
(6.3)
Combining (6.2) and (6.3) yields
$\left(\Delta
U(\epsilon{\bf{j}})-\epsilon^{2}\Delta_{\bf{X}}(U)(\epsilon{\bf{j}})\right)^{2}\leq\epsilon^{8}2\sum_{i=1,2}\left(\partial_{X_{i}X_{i}X_{i}X_{i}}(U)(\epsilon\widehat{{\bf{j}}})\right)^{2}.$
(6.4)
Summing over ${\bf{j}}$ and using Corollary 11.11 we find
$\left\|\Delta U(\epsilon\cdot,\epsilon
t)-\epsilon^{2}\Delta_{\bf{X}}U(\epsilon\cdot,\epsilon
t)\right\|_{\ell^{2}}\leq\epsilon^{3}4\left\|D^{4}U\right\|_{H^{2}}.$ (6.5)
Now we apply (5.7) to get a constant $C$ which depends only on $\bar{m}$ s.t.
$\left\|\Delta U(\epsilon\cdot,\epsilon
t)-\epsilon^{2}\Delta_{\bf{X}}U(\epsilon\cdot,\epsilon
t)\right\|_{\ell^{2}}\leq\epsilon^{3}4C\left(\left\|\nabla\phi\right\|_{H^{5}}+\left\|\psi\right\|_{H^{5}}\right).$
(6.6)
∎
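The fourth-order Taylor remainder in (6.2) can also be observed numerically: halving $\epsilon$ should shrink the pointwise error $\Delta_{i}U(\epsilon{\bf{j}})-\epsilon^{2}\partial_{X_{i}X_{i}}U(\epsilon{\bf{j}})$ by roughly $2^{4}$. A sketch with a hypothetical smooth profile ($U=\sin(X_{1})\cos(X_{2})$ is only an illustration of the Taylor scaling, not a solution of anything in this paper):

```python
import numpy as np

def remainder(eps):
    """sup_j |Delta_1 U(eps j) - eps^2 (d^2/dX1^2) U(eps j)| for the
    illustrative smooth profile U(X) = sin(X1) cos(X2)."""
    j = np.arange(-100, 101)
    X1, X2 = np.meshgrid(eps * j, eps * j, indexing="ij")
    U = np.sin(X1) * np.cos(X2)
    delta1 = U[2:, :] - 2 * U[1:-1, :] + U[:-2, :]   # (4.20) along X1
    Uxx = -U[1:-1, :]                                # exact d^2 U / dX1^2
    return np.max(np.abs(delta1 - eps ** 2 * Uxx))

r1, r2 = remainder(0.1), remainder(0.05)
print(r2 / r1)   # should be close to (1/2)^4 = 1/16, as (6.2) predicts
```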
###### Lemma 6.2.
For $U$ satisfying (2.29) with sufficiently smooth initial conditions given by
(5.1), there exists a constant $C$ depending on $a,b,$ and $\bar{m}$ s.t.
$\sup_{|t|\leq\epsilon^{-1}T}\epsilon\left\|z\mathbb{I}_{D({\bf{0}},R_{\epsilon})^{c}}U_{\tau\tau}\right\|_{\ell^{2}}\leq\epsilon
C\left(\left\|\nabla\phi\right\|_{H^{3}_{\sigma}}+\left\|\psi\right\|_{H^{3}_{\sigma}}\right)$
(6.7)
###### Proof.
First recall that $z({\bf{j}})\in[a-\bar{m},b-\bar{m}]$. There exists a $C$
that depends on $a,b,$ and $\bar{m}$ s.t.
$\left\|z\mathbb{I}_{D({\bf{0}},R_{\epsilon})^{c}}U_{\tau\tau}\right\|_{\ell^{2}}\leq
C\left\|\mathbb{I}_{D({\bf{0}},R_{\epsilon})^{c}}U_{\tau\tau}\right\|_{\ell^{2}}.$
(6.8)
Recall the definition of $R_{\epsilon}$ in (4.2). Note that $R_{\epsilon}$ has
the form $\epsilon^{-1}\widetilde{R}(T)+1$ where $\widetilde{R}(\tau)\coloneqq
c\tau+\epsilon^{-\sigma}$. According to Lemma 11.12
$\left\|\mathbb{I}_{D({\bf{0}},R_{\epsilon})^{c}}U_{\tau\tau}\right\|_{\ell^{2}}\leq
2\epsilon^{-1}\left\|U_{\tau\tau}\right\|_{H^{2}(B(\widetilde{R}(T))^{c})}.$
(6.9)
Since $|\tau|\leq T,$
$\left\|U_{\tau\tau}\right\|_{H^{2}(B(\widetilde{R}(T))^{c})}\leq\left\|U_{\tau\tau}\right\|_{H^{2}(B(\widetilde{R}(\tau))^{c})}.$
(6.10)
It follows from (5.20) that
$\left\|U_{\tau\tau}\right\|_{H^{2}(B(\widetilde{R}(\tau))^{c})}\leq\epsilon
C\left(\left\|\nabla\phi\right\|_{H^{3}_{\sigma}}+\left\|\psi\right\|_{H^{3}_{\sigma}}\right).$
(6.11)
Stringing these inequalities together and taking $\sup$ yields the result. ∎
###### Lemma 6.3.
Let $U$ satisfy (2.29) with sufficiently smooth initial conditions given by
(5.1). For each $\omega$ in a set of probability one, there exists a constant
$C_{\omega}$ s.t.
$\sup_{|t|\leq\epsilon^{-1}T}\epsilon\left\|\delta_{i}(\chi_{R_{\epsilon}})\delta_{i}^{-}(U_{\tau\tau})\right\|_{\ell^{2}}\leq\epsilon
C_{\omega}\left(\left(\sqrt{T}+\log(R_{\epsilon})\right)\left(\left\|\nabla\phi\right\|_{H_{w}^{4}}+\left\|\psi\right\|_{H_{w}^{4}}\right)\right)$
(6.12)
###### Proof.
Recall the bound given by (3.25). We therefore have that
$\left\|\delta_{i}(\chi_{R_{\epsilon}})\delta_{i}^{-}(U_{\tau\tau})\right\|_{\ell^{2}}\leq
C_{\omega}\left(\left\|\log^{+}(\cdot)\delta_{i}^{-}\left(U_{\tau\tau}\right)\right\|_{\ell^{2}}+\log(R_{\epsilon})\left\|\delta_{i}^{-}(U_{\tau\tau})\right\|_{\ell^{2}}\right).$
(6.13)
Recalling the definition of $w$ in (5.8)
$\log^{+}(\cdot)\leq\log(\epsilon^{-1})\log(\epsilon\cdot+1)\leq\log(R_{\epsilon})w_{\epsilon}(\cdot),$
(6.14)
where
$w_{\epsilon}(\cdot)\coloneqq w(\epsilon\cdot).$ (6.15)
Then
$\left\|\delta_{i}^{-}(U_{\tau\tau})\right\|_{\ell^{2}}\leq\left\|w_{\epsilon}\delta_{i}^{-}(U_{\tau\tau})\right\|_{\ell^{2}}\
\text{and}\
\left\|\log^{+}(\cdot)\delta_{i}^{-}\left(U_{\tau\tau}\right)\right\|_{\ell^{2}}\leq\log(R_{\epsilon})\left\|w_{\epsilon}\delta_{i}^{-}(U_{\tau\tau})\right\|_{\ell^{2}}.$
(6.16)
Applying Corollary 11.10 we have
$\left\|w_{\epsilon}\delta_{i}^{-}\left(U_{\tau\tau}\right)\right\|_{\ell^{2}}\leq\epsilon^{-1}\left\|w(\cdot)\left(U_{\tau\tau}(\cdot)-U_{\tau\tau}(\cdot-\epsilon{\bf{e}}_{i})\right)\right\|_{H^{2}}.$
(6.17)
Note that $\delta_{\epsilon i}^{-}(U)(\cdot,\tau)\coloneqq
U(\cdot,\tau)-U(\cdot-\epsilon{\bf{e}}_{i},\tau)$ solves the same wave
equation $U$ does with initial conditions
$\delta_{\epsilon i}^{-}U({\bf{X}},0)=\delta_{\epsilon i}^{-}(\phi)({\bf{X}})\
\text{and}\ \delta_{\epsilon i}^{-}U_{\tau}({\bf{X}},0)=\delta_{\epsilon
i}^{-}(\psi)({\bf{X}}).$ (6.18)
Hence we can apply (5.14)
$\left\|w(\cdot)\left(U_{\tau\tau}(\cdot)-U_{\tau\tau}(\cdot-\epsilon{\bf{e}}_{i})\right)\right\|_{H^{2}}\leq
C\sqrt{|\tau|}\left(\left\|\delta^{-}_{\epsilon
i}(\nabla\phi)\right\|_{H^{3}_{w}}+\left\|\delta_{\epsilon
i}^{-}(\psi)\right\|_{H^{3}_{w}}\right).$ (6.19)
By Lemma 11.13
$\left\|w(\cdot)\left(U_{\tau\tau}(\cdot)-U_{\tau\tau}(\cdot-\epsilon{\bf{e}}_{i})\right)\right\|_{H^{2}}\leq\epsilon
C\sqrt{|\tau|}\left(\left\|\nabla\phi\right\|_{H^{4}_{w}}+\left\|\psi\right\|_{H^{4}_{w}}\right).$
(6.20)
Stringing the inequalities together and taking $\sup$ gives us the result. ∎
###### Lemma 6.4.
Let $U$ satisfy (2.29) with sufficiently smooth initial conditions given by
(5.1). For each $\omega$ in a set of probability one, there exists a constant
$C_{\omega}$ s.t.
$\sup_{|t|\leq\epsilon^{-1}T}\epsilon\left\|S_{i}^{+}(\chi_{R_{\epsilon}})\Delta_{i}(U_{\tau\tau})\right\|_{\ell^{2}}\leq\epsilon^{2}R_{\epsilon}C_{\omega}\left(\left(\sqrt{T}+\log(R_{\epsilon})^{\frac{3}{2}}\right)\left(\left\|\nabla\phi\right\|_{H_{w}^{5}}+\left\|\psi\right\|_{H_{w}^{5}}\right)\right)$
(6.21)
###### Proof.
The proof is essentially the same as the proof of the previous lemma, but now
we use (3.24) instead of (3.25) and Corollary 11.14 instead of Lemma 11.13. ∎
###### Lemma 6.5.
Let $U$ satisfy (2.29) with sufficiently smooth initial conditions given by
(5.1). For each $\omega$ in a set of probability one, there exists a constant
$C_{\omega}$ s.t.
$\sup_{|t|\leq\epsilon^{-1}T}\epsilon^{3}\left\|m\chi_{R_{\epsilon}}U_{\tau\tau\tau\tau}\right\|_{\ell^{2}}\leq\epsilon^{2}R_{\epsilon}C_{\omega}\left(\left(\sqrt{T}+\log(R_{\epsilon})^{\frac{3}{2}}\right)\left(\left\|\nabla\phi\right\|_{H_{w}^{5}}+\left\|\psi\right\|_{H_{w}^{5}}\right)\right).$
(6.22)
###### Proof.
Since $m({\bf{j}})\in[a,b]$, there exists a constant $C$ (either $a$ or $b$) s.t.
$\left\|m\chi_{R_{\epsilon}}U_{\tau\tau\tau\tau}\right\|_{\ell^{2}}\leq
C\left\|\chi_{R_{\epsilon}}U_{\tau\tau\tau\tau}\right\|_{\ell^{2}}.$ (6.23)
The steps are now very similar to those found in Lemma 6.3, but with (3.23)
instead of (3.25). We then use Corollary 11.10 and (5.14). ∎
## 7 Residual Bound and Discussion
### 7.1 Proof of Proposition 4.1
###### Proof.
Recall the calculation for $\operatorname{Res}\widetilde{u}$ in (4.25). We
have bounded each of the terms, in order, using Lemmas 6.1, 6.2, 6.3, 6.4 and
6.5. We obtain Proposition 4.1 by using the largest parts of each of the
bounds. ∎
Lemma 6.1 bounds a completely deterministic term that would appear no matter
how the masses are chosen. Lemma 6.2 bounds a term that arises due to our use
of a cut-off function. Recall we need to use a cut-off in order to work around
solving (2.19). In the case where this can be solved, say for example where
the masses vary periodically, then this term would not appear. The norm,
$H^{3}_{\sigma}$, in this bound is the largest norm. As $\sigma$ becomes
smaller, this norm becomes larger. The term in Lemma 6.3 is the first term
that must be bounded using probabilistic arguments. The constant $C_{\omega}$
exists almost surely but depends on the actual realization of masses. It
therefore could be arbitrarily large, since there is always a small
probability that (3.15) holds only for $n$ extremely large. Thus $C_{\omega}$
may be worthy of statistical quantification in a follow-up work. Recalling the
definition of $R_{\epsilon}$ in (4.2), the bound for the term in Lemma 6.4
is the dominant one in $\epsilon$. It is
$O(\epsilon^{1-\sigma}\log(\epsilon^{-1-\sigma})^{3/2})$. This term also
requires the most smoothness of the initial condition. The final term in Lemma
6.5 is also $O(\epsilon^{1-\sigma}\log(\epsilon^{-1-\sigma})^{3/2})$ for
similar reasons. It may be conjectured that $\sigma$ here is artificial, as it
arises from our inability to solve (2.19). We could analyse further; for
example, we could find the dependence on $T,\bar{m},a,$ and $b$, but what we
are most interested in is tracking $\epsilon$.
### 7.2 Other Masses
The method we have employed is robust enough to consider other ways of
realizing the masses. For example, we may consider periodic masses by which we
mean there exists a positive integer $k$ s.t.
$m(j_{1},j_{2})=m(j_{1}+k,j_{2}+k)$. We also may consider masses which are all
identical. In such a case, the only term which appears in the residual is the
one bounded in Lemma 6.1. This improves the accuracy of approximate solution
substantially as the residual would be $O(\epsilon^{2})$. Another
generalization we can make is that masses need not be identically distributed,
so long as they all fall into some interval $[a,b]\in{\bf{R}}^{+}$ and have
the same expected value. Since our methods did not use any other features of
the masses being identically distributed, e.g. equal variances or
probabilities, our result extends to this case without modification.
Another important example is layered media. Suppose that
$\\{m(j_{1},j_{2})\\}_{j_{1}=-\infty}^{\infty}$ (7.1)
is an i.i.d. random sequence of masses and that for all $j_{1}$
$\cdots=m(j_{1},-1)=m(j_{1},0)=m(j_{1},1)=\cdots.$ (7.2)
In this case, it is actually possible to solve (2.19) and thereby not use a
cutoff, but we proceed using the same tools we have developed, since such
tools can be utilized in higher dimensions. In this case, (3.9) does not
immediately hold since we do not have complete independence. Reconsider (3.6)
$\displaystyle\mathcal{L}(\chi_{R_{\epsilon}})(j_{1},j_{2})$
$\displaystyle=\sum_{|k_{1}|\leq r}\sum_{|k_{2}|\leq
r}\mathcal{L}(\varphi)(j_{1}-k_{1},j_{2}-k_{2})z(k_{1},k_{2})$ (7.3)
$\displaystyle=\sum_{k_{1}=j_{1}-r}^{j_{1}+r}\sum_{k_{2}=j_{2}-r}^{j_{2}+r}\mathcal{L}(\varphi)(k_{1},k_{2})z(j_{1}-k_{1},j_{2}-k_{2}).$
Let $z(k_{1})\coloneqq z(k_{1},\cdot).$ This is well defined because of (7.2).
Thus
$\mathcal{L}(\chi_{r})(j_{1},j_{2})=\sum_{k_{1}=j_{1}-r}^{j_{1}+r}z(j_{1}-k_{1})\sum_{k_{2}=j_{2}-r}^{j_{2}+r}\mathcal{L}(\varphi)(k_{1},k_{2}).$
(7.4)
Let
$\mathcal{L}(\varPhi_{r})(j_{2},k_{1})\coloneqq\sum_{k_{2}=j_{2}-r}^{j_{2}+r}\mathcal{L}(\varphi)(k_{1},k_{2})$
(7.5)
so
$\mathcal{L}(\chi_{R_{\epsilon}})(j_{1},j_{2})=\sum_{k_{1}=j_{1}-r}^{j_{1}+r}z(j_{1}-k_{1})\mathcal{L}(\varPhi_{r})(j_{2},k_{1}).$
(7.6)
This is a sum involving independent random variables so we get a version of
(3.9). Let
$\left\|\mathcal{L}(\varPhi_{r})(j_{1},j_{2})\right\|^{2}\coloneqq\sum_{k_{1}=j_{1}-r}^{j_{1}+r}\mathcal{L}(\varPhi_{r})(j_{2},k_{1})^{2}.$
(7.7)
Then we have
$\mathbb{E}\left[\exp(s\mathcal{L}(\chi_{r})(j_{1},j_{2}))\right]\leq\exp\left(\frac{s^{2}\left\|\mathcal{L}(\varPhi_{r})(j_{1},j_{2})\right\|^{2}(a-b)^{2}}{8}\right).$
(7.8)
Now the same argument as in Section 3.2 works, but the relevant
quantity to calculate is the square root of (7.7). Take $\mathcal{L}$ to be
any of the operators in Section 3.2. Then
$\left\|\mathcal{L}(\varPhi_{r})(j_{1},j_{2})\right\|^{2}=\sum_{k_{1}=j_{1}-r}^{j_{1}+r}\left(\sum_{k_{2}=j_{2}-r}^{j_{2}+r}\mathcal{L}(\varphi)(k_{1},k_{2})\right)^{2}.$
(7.9)
It is possible to use Jensen’s inequality to obtain
$\left\|\mathcal{L}(\varPhi_{r})(j_{1},j_{2})\right\|^{2}\leq(2r+1)\sum_{k_{1}=j_{1}-r}^{j_{1}+r}\sum_{k_{2}=j_{2}-r}^{j_{2}+r}\mathcal{L}(\varphi)(k_{1},k_{2})^{2}=(2r+1)\left\|\mathcal{L}\varphi\right\|^{2}_{D({\bf{j}},r)}.$
(7.10)
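(7.10) is the elementary bound $(\sum_{k=1}^{N}a_{k})^{2}\leq N\sum_{k=1}^{N}a_{k}^{2}$ (Jensen, or Cauchy-Schwarz against the constant vector) applied to each inner $k_{2}$-sum with $N=2r+1$. A quick numerical sanity check on hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 3
# Hypothetical values of L(phi) on the (2r+1) x (2r+1) block D(j, r).
a = rng.standard_normal((2 * r + 1, 2 * r + 1))

lhs = np.sum(np.sum(a, axis=1) ** 2)   # sum_{k1} (sum_{k2} a)^2, as in (7.9)
rhs = (2 * r + 1) * np.sum(a ** 2)     # (2r+1) * squared Frobenius norm, (7.10)
assert lhs <= rhs
```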
One notices that $\left\|\mathcal{L}\varphi\right\|^{2}_{D({\bf{j}},r)}$ are
norms we have already computed in Section 3.2; see (3.17) for example. The
factor $2r+1$ out front provides an extra half power of $\epsilon^{-1-\sigma}$
when one considers (4.2) and then takes square roots as in (3.16). Therefore, in
contrast with (4.19), we have
$\sup_{|t|\leq\epsilon^{-1}T}\left\|\operatorname{Res}\widetilde{u}\right\|_{\ell^{2}}\leq\epsilon^{\frac{1}{2}-\frac{3}{2}\sigma}C_{\omega}\log(\epsilon^{-1-\sigma})^{\frac{3}{2}}\left\|\phi,\psi\right\|_{H^{5}_{\sigma}}.$
(7.11)
This example sheds some light on the complexity introduced when considering
random masses. In contrast, for periodic masses, even if (7.2) holds, then
(2.19) is solvable and the solution is periodic so most importantly bounded.
Therefore the residual is always $O(\epsilon).$
### 7.3 Main Estimate Result
###### Theorem 7.1.
Suppose $\psi,\phi\in H_{\sigma}^{5}$. Let $u$ satisfy (4.10) with
$u({\bf{j}},0)=\epsilon^{-1}\phi(\epsilon{\bf{j}})\ \text{and}\
\dot{u}({\bf{j}},0)=\psi(\epsilon{\bf{j}}),$ (7.12)
and let $U$ satisfy (2.29) with
$U({\bf{X}},0)=\phi({\bf{X}})\ \text{and}\
\partial_{\tau}U({\bf{X}},0)=\psi({\bf{X}}).$ (7.13)
Define
$\widehat{u}({\bf{j}},t)=\epsilon^{-1}U(\epsilon{\bf{j}},\epsilon t),$ (7.14)
and let $\chi_{R_{\epsilon}}$ be given by (3.4) and $R_{\epsilon}$ by (4.2).
Then there exists a $C^{*}_{\omega}(a,b,\bar{m},T)$ a.s. s.t.
$\sup_{|t|\leq\epsilon^{-1}T}\left\|u-\widehat{u}\right\|_{\ell^{2}}\leq\epsilon^{-1-\sigma}C^{*}_{\omega}\log(\epsilon^{-1-\sigma})^{\frac{3}{2}}\left\|\phi,\psi\right\|_{H^{5}_{\sigma}}.$
(7.15)
and
$\sup_{|t|\leq\epsilon^{-1}T}\left\|\dot{u}-\dot{\widehat{u}}\right\|_{\ell^{2}}\leq\epsilon^{-\sigma}C^{*}_{\omega}\log(\epsilon^{-1-\sigma})^{\frac{3}{2}}\left\|\phi,\psi\right\|_{H^{5}_{\sigma}}.$
(7.16)
###### Remark 3.
Although the right hand sides of (7.15) and (7.16) appear to grow as
$\epsilon\to 0$, the sizes of $u$ and $\dot{u}$ in $\ell^{2}$ are roughly
$\epsilon^{-2}$ and $\epsilon^{-1}$, so the error is relatively small.
###### Proof.
Let
$\widetilde{u}({\bf{j}},t)=\epsilon^{-1}U(\epsilon{\bf{j}},\epsilon
t)+\epsilon\chi_{R_{\epsilon}}({\bf{j}})U_{\tau\tau}(\epsilon{\bf{j}},\epsilon
t).$ (7.17)
This is what has typically been our $\widetilde{u}$ as defined in (4.3). Since
we have taken the initial conditions for $u$ and $\widehat{u}$ to be
identical, (4.15) holds as long as we have
$\epsilon\left\|\epsilon\chi_{R_{\epsilon}}(\cdot)U_{\tau\tau\tau}(\cdot,0),\delta_{1}^{+}\left(\chi_{R_{\epsilon}}U_{\tau\tau}\right)(\cdot,0),\delta_{2}^{+}\left(\chi_{R_{\epsilon}}U_{\tau\tau}\right)(\cdot,0)\right\|_{\ell^{2}}\leq\epsilon^{-1}C\sup_{|t|\leq\epsilon^{-1}T}\left\|\operatorname{Res}{\widetilde{u}}\right\|_{\ell^{2}}$
(7.18)
which can be proven using the same kind of arguments given in Section 6.
Then we have by (4.17) and (4.18) that there is a constant depending
on $a,b$ and $T$ s.t.
$\sup_{|t|\leq\epsilon^{-1}T}\left\|\dot{u}-\dot{\widetilde{u}}\right\|_{\ell^{2}}\leq\epsilon^{-1}C\sup_{|t|\leq\epsilon^{-1}T}\left\|\operatorname{Res}{\widetilde{u}}\right\|_{\ell^{2}}$
(7.19)
and
$\sup_{|t|\leq\epsilon^{-1}T}\left\|u-\widetilde{u}\right\|_{\ell^{2}}\leq\epsilon^{-2}C\sup_{|t|\leq\epsilon^{-1}T}\left\|\operatorname{Res}{\widetilde{u}}\right\|_{\ell^{2}}.$
(7.20)
Now we use (4.19) to obtain the correct right hand side in (7.15) and (7.16),
but for $\widetilde{u}$ instead of $\widehat{u}.$ Thus it remains to analyse
the difference between the two, which satisfies
$\sup_{|t|\leq\epsilon^{-1}T}\left\|\dot{\widehat{u}}-\dot{\widetilde{u}}\right\|_{\ell^{2}}\leq\sup_{|t|\leq\epsilon^{-1}T}\left\|\epsilon^{2}\chi_{R_{\epsilon}}(\cdot)U_{\tau\tau\tau}(\epsilon\cdot)\right\|_{\ell^{2}}$
(7.21)
and
$\sup_{|t|\leq\epsilon^{-1}T}\left\|\widehat{u}-\widetilde{u}\right\|_{\ell^{2}}\leq\sup_{|t|\leq\epsilon^{-1}T}\left\|\epsilon\chi_{R_{\epsilon}}(\cdot)U_{\tau\tau}(\epsilon\cdot)\right\|_{\ell^{2}}.$
(7.22)
Both of these can be bounded using steps similar to those in Lemmas 6.3 and
6.4. One finds
$\sup_{|t|\leq\epsilon^{-1}T}\left\|\dot{\widehat{u}}-\dot{\widetilde{u}}\right\|_{\ell^{2}}\leq\epsilon
C_{\omega}R_{\epsilon}\left((\sqrt{T}+\log(R_{\epsilon})^{\frac{3}{2}})(\left\|\phi\right\|_{H_{w}^{3}}+\epsilon\left\|\psi\right\|_{H_{w}^{2}})\right)$
(7.23)
and
$\sup_{|t|\leq\epsilon^{-1}T}\left\|\widehat{u}-\widetilde{u}\right\|_{\ell^{2}}\leq
C_{\omega}R_{\epsilon}\left((\sqrt{T}+\log(R_{\epsilon})^{\frac{3}{2}})(\left\|\phi\right\|_{H_{w}^{2}}+\epsilon\left\|\psi\right\|_{H_{w}^{1}})\right).$
(7.24)
Recalling the definition of $R_{\epsilon}$ in (4.2), these bounds have the
correct power of $\epsilon$. Thus, with the appropriate constant
$C^{*}_{\omega}$, everything can be dominated by the right hand sides of
(7.15) and (7.16). ∎
## 8 Coarse Graining
Theorem 7.1 says that the macroscopic behavior of the system is wavelike,
i.e. it evolves according to an effective wave equation. We formalize this
notion by proving a convergence result in the macroscopic setting. We have a
number of operators to introduce. The discrete Fourier transform is given by
$F[f]({\bf{y}})=\frac{1}{(2\pi)^{2}}\sum_{{\bf{j}}\in{\bf{Z}}^{2}}\exp\left(-i{\bf{j}}\cdot{\bf{y}}\right)f({\bf{j}})$
(8.1)
Here ${\bf{y}}\in{\bf{R}}^{2}$. Its inverse is
$F^{-1}[g]({\bf{j}})=\int_{[-\pi,\pi]^{2}}\exp\left(i{\bf{y}}\cdot{\bf{j}}\right)g({\bf{y}})d{\bf{y}}.$
(8.2)
Let $\mathcal{F}$ and $\mathcal{F}^{-1}$ be the Fourier Transform and its
inverse, defined analogously to the discrete versions (i.e. replacing the sum
with an integral). The sampling operator is
$\mathcal{S}(u)({\bf{j}})=u({\bf{j}}).$ (8.3)
A cutoff operator is
$\theta_{\phi}({\bf{y}})=\begin{cases}1&{\bf{y}}\in[-\phi,\phi]^{2}\\\
0&\text{else}.\end{cases}$ (8.4)
Finally, we define a low pass interpolator. For a continuous variable
${\bf{x}}\in{\bf{R}}^{2}$
$\mathcal{L}[f]({\bf{x}})=\mathcal{F}^{-1}[\theta_{\pi}(\cdot)F[f](\cdot)]({\bf{x}}).$
(8.5)
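Unwinding the definitions, (8.5) is the classical bandlimited (Whittaker-Shannon) interpolant: $\mathcal{L}[f]({\bf{x}})=\sum_{{\bf{j}}}f({\bf{j}})\operatorname{sinc}(x_{1}-j_{1})\operatorname{sinc}(x_{2}-j_{2})$ with $\operatorname{sinc}(s)=\sin(\pi s)/(\pi s)$; in particular $\mathcal{L}[f]$ agrees with $f$ on the lattice. A Python sketch on a hypothetical finitely supported lattice function:

```python
import numpy as np

def lowpass_interp(f, x1, x2):
    """Evaluate L[f](x1, x2) for a lattice function f supported on the
    sites {0,...,n1-1} x {0,...,n2-1}; np.sinc(s) = sin(pi s)/(pi s)."""
    n1, n2 = f.shape
    s1 = np.sinc(x1 - np.arange(n1))
    s2 = np.sinc(x2 - np.arange(n2))
    return float(np.einsum("ab,a,b->", f, s1, s2))

# A hypothetical lattice field supported on an 8 x 8 block of sites.
rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))

# L reproduces f at lattice sites: sinc(0) = 1 and sinc vanishes at all
# other integers.
print(lowpass_interp(f, 3, 5), f[3, 5])
```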
The following theorem says that the same abstract diagram found in [12] holds
in this setting almost surely.
### 8.1 Main Convergence Result
###### Theorem 8.1.
Let $U$ solve (2.29) with initial conditions given by (5.3). Let $u$ solve
(1.1) with initial conditions given by (5.1). Put
$U_{\epsilon}({\bf{X}},\tau)=\epsilon\mathcal{L}(u(\cdot,\tau/\epsilon))({\bf{X}}/\epsilon)$
(8.6)
so that
$\partial_{\tau}U_{\epsilon}({\bf{X}},\tau)=\mathcal{L}(\dot{u}(\cdot,\tau/\epsilon))({\bf{X}}/\epsilon).$
(8.7)
Then
$\lim_{\epsilon\to 0}\sup_{|\tau|\leq
T}\left\|U_{\epsilon}-U,\partial_{\tau}U_{\epsilon}-\partial_{\tau}U\right\|_{L^{2}}=0$
(8.8)
almost surely.
###### Remark 4.
The rate of convergence is
$\epsilon^{1-\sigma}\log(\epsilon^{-1-\sigma})^{\frac{3}{2}}$ as seen in the
proof.
###### Proof.
$\left\|U_{\epsilon}-U\right\|_{L^{2}}\leq\left\|\epsilon\mathcal{L}(u(\cdot,\tau/\epsilon))(\cdot/\epsilon)-\mathcal{LS}(U(\epsilon\cdot,\tau))(\cdot/\epsilon)\right\|_{L^{2}}+\left\|\mathcal{LS}(U(\epsilon\cdot,\tau))(\cdot/\epsilon)-U(\cdot,\tau)\right\|_{L^{2}}.$
(8.9)
According to Lemma 11.15, the second term goes to $0$ uniformly in $\tau$ at a
rate of $\epsilon$. For the first term, we have
$\left\|\epsilon\mathcal{L}(u(\cdot,\tau/\epsilon))(\cdot/\epsilon)-\mathcal{LS}U(\epsilon\cdot,\tau)(\cdot/\epsilon)\right\|_{L^{2}}=\epsilon\left\|\epsilon\mathcal{L}(u(\cdot,\tau/\epsilon))(\cdot)-\mathcal{LS}U(\epsilon\cdot,\tau)(\cdot)\right\|_{L^{2}}.$
(8.10)
By Plancherel
$\epsilon\left\|\epsilon\mathcal{L}(u(\cdot,\tau/\epsilon))(\cdot)-\mathcal{LS}U(\epsilon\cdot,\tau)(\cdot)\right\|_{L^{2}}\leq\epsilon
C\left\|\epsilon
u(\cdot,\tau/\epsilon)-\mathcal{S}U(\epsilon\cdot,\tau)\right\|_{\ell^{2}}.$
(8.11)
Now we use that $U(\epsilon\cdot,\tau)=\epsilon\widehat{u}(\cdot,t)$ with
$\tau=\epsilon t$, as in Theorem 7.1, and the result follows from that same
theorem. Similarly
$\epsilon\left\|\mathcal{L}(\dot{u}(\cdot,\tau/\epsilon))({\bf{X}})-\mathcal{LS}U_{\tau}(\epsilon\cdot,\tau)(\cdot)\right\|_{L^{2}}\leq\epsilon\left\|\dot{u}(\cdot,\tau/\epsilon)-\mathcal{S}U_{\tau}(\epsilon\cdot,\tau)\right\|_{\ell^{2}}.$
(8.12)
Again we use Theorem 7.1 where
$U_{\tau}(\epsilon\cdot,\tau)=\dot{\widehat{u}}(\cdot,t).$ ∎
## 9 Numerical Results
Our numerical results focus on confirming the upper bounds found in Theorem
7.1. We refer to the left hand side of (7.15) as the absolute error of the
displacement (a.e.d.) and the left hand side of (7.16) as the absolute error
of the velocity (a.e.v.). For the next two experiments, we have chosen
$\phi(X_{1},X_{2})=\operatorname{sech}\left(\frac{1}{2}(X_{1}-1)^{2}+(X_{2}-1)^{2}\right),\
\psi(X_{1},X_{2})=\operatorname{sech}\left((X_{1}+1)^{2}+\frac{1}{2}(X_{2}+1)^{2}\right)$
(9.1)
and let $\epsilon$ range over $\\{1/2,1/4,1/8,1/16,1/32\\}.$ Every $m(j)$ is
sampled from $\\{1/2,3/2\\}$, and for the first experiment the masses are
chosen to be i.i.d. As $\epsilon$ varies, this grid of randomly chosen masses
remains fixed.
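A compressed version of this experiment can be sketched in a few dozen lines. Since (1.1) is not reproduced in this excerpt, the snippet assumes the mass-spring form $m({\bf{j}})\ddot{u}({\bf{j}})=\Delta u({\bf{j}})$ with $\bar{m}=1$ and an assumed effective speed $c=1$; the box size, time step, horizon, and periodic boundary are likewise illustrative choices. It integrates the lattice with velocity Verlet, solves the assumed effective wave equation spectrally, and reports a relative version of the a.e.d.:

```python
import numpy as np

def run(eps, T=1.0, L=10.0):
    """Relative l^2 error between the lattice solution u and the
    macroscopic approximation U(eps j, eps t)/eps at t = T/eps."""
    rng = np.random.default_rng(1)
    n = int(round(2 * L / eps))                  # lattice sites per axis
    j = np.arange(n)
    X1, X2 = np.meshgrid(eps * j - L, eps * j - L, indexing="ij")
    phi = 1 / np.cosh(0.5 * (X1 - 1) ** 2 + (X2 - 1) ** 2)   # as in (9.1)
    psi = 1 / np.cosh((X1 + 1) ** 2 + 0.5 * (X2 + 1) ** 2)
    m = rng.choice([0.5, 1.5], size=(n, n))      # i.i.d. masses, mean 1

    # Microscopic lattice m u'' = Delta u (assumed form of (1.1)),
    # integrated by velocity Verlet on a periodic n x n grid.
    def lap(w):
        return (np.roll(w, 1, 0) + np.roll(w, -1, 0)
                + np.roll(w, 1, 1) + np.roll(w, -1, 1) - 4 * w)
    u, v, dt = phi / eps, psi.copy(), 0.05       # (5.1) initial data
    steps = int(round(T / (eps * dt)))
    a = lap(u) / m
    for _ in range(steps):
        v += 0.5 * dt * a
        u += dt * v
        a = lap(u) / m
        v += 0.5 * dt * a

    # Effective wave U_tt = Lap U (c = 1), solved spectrally in X.
    k = 2 * np.pi * np.fft.fftfreq(n, d=eps)
    K = np.sqrt(k[:, None] ** 2 + k[None, :] ** 2)
    tau = eps * steps * dt
    ph, ps = np.fft.fft2(phi), np.fft.fft2(psi)
    sinc = np.where(K > 0, np.sin(K * tau) / np.where(K > 0, K, 1), tau)
    U = np.real(np.fft.ifft2(ph * np.cos(K * tau) + ps * sinc))
    return np.linalg.norm(u - U / eps) / np.linalg.norm(u)

print([round(run(e), 3) for e in (0.5, 0.25)])
```

The relative error shrinks as $\epsilon$ decreases, mirroring the slopes reported in Figure 1 (the relative counterpart of (7.15) is $O(\epsilon^{1-\sigma})$ up to logarithms).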
Figure 1: The left (right) panel shows a $\log$-$\log$ graph of a.e.d. (a.e.v)
versus $\epsilon$. Since the masses are chosen randomly, ten realizations of
masses were tested. The distribution of the error is plotted with a box and
whisker plot at each value of $\epsilon$. The slope of the line of best fit
numerically approximates the power of $\epsilon$ in (7.15) ((7.16)).
The upper bound for the a.e.v. obtained in Theorem 7.1 is
$\epsilon^{-\sigma}\log(\epsilon^{-1-\sigma})^{\frac{3}{2}}$. (Recall that
$\sigma$ is an arbitrarily small positive number.) The slopes seen in Figure 1
are in agreement with the bounds obtained in the theorem, i.e. the slopes
reflect the power of $\epsilon$ on which the error depends. In fact the
estimate is close to sharp. We can thus think of such bounds as giving an
analytic prediction of the size of the error in many cases.
For the second experiment, we use the setup for the masses given by (7.1) and
(7.2) i.e. the masses are layered. According to (7.11), we expect the slopes
to be no more than a half power less than those seen in Figure 1. This is
indeed what we see in Figure 2. Again, the numerical error is close to the
predicted error.
Figure 2: These panels reflect the same measurements as those in Figure 1, but
now the masses have been chosen according to (7.1) and (7.2).
Finally we compare these results to what happens in three deterministic cases.
In the first case we choose the masses to be constant, in which case we expect
the slopes for the a.e.v. and the a.e.d. to be no worse than $1$ and $0$
respectively. In the second, the masses are periodically layered, i.e. they
vary periodically with period $2$ along one of the coordinate axes while (7.2)
holds along the other. In the third case, the masses are chosen to vary
periodically along both coordinate axes with period $2$. In both periodic
cases we expect the slopes of the a.e.v. and a.e.d. to be no worse than $0$
and $-1$ respectively. In both cases the upper bound holds; however, unlike in
all the previous experiments, the numerically computed a.e.d. seems to be
substantially better than the analytic upper bound, since the slope for both
periodic cases is closer to $0$ than to $-1$.
Figure 3: Masses have been chosen deterministically in three different ways.
One important observation is that when the randomly chosen masses are layered,
the approximation to the wave equation does not converge as quickly as it does
when they are not layered. The main physical explanation we propose for the
difference in the observed slope between Figures 1 and 2 is that in the second
experiment, reflections caused by the random masses manifest as long ripples
transverse to the direction in which the masses are randomly changing. This is
in contrast to the first experiment where reflections manifest as localized
disturbances. Figure 4 gives some empirical evidence for this phenomenon.
One possible heuristic explanation for why we do not see this improvement for
periodic masses is that there is always a direction in ${\bf{R}}^{2}$ along
which the averages of the masses in lines transverse to that direction are
varying (unless the masses are all constant). This produces a kind of layering
that cannot occur if all the masses are chosen to be i.i.d., since the masses
along any line average to the same value. Hence, in this sense the
homogenization occurs faster, albeit the amplitudes of localized fluctuations
are larger in the i.i.d. case.
Figure 4: The left panel shows a wave traveling through masses which are
i.i.d. and the right shows a wave traveling through layered random masses. One
can spot the long transverse ripples in the right panel, whereas in the left
panel, deviations appear more localized.
Finally, we have performed a number of similar experiments in 1D, the results
of which are summarized in Table 1. The predicted values can all be proven
using essentially the same method as demonstrated or discussed in Section 7.2
and throughout the rest of the paper. Even though our predicted
values are upper bounds, we see that in most cases our estimates are close to
sharp. The exception is when the masses are chosen periodically, where the
predicted a.e.d. often overshoots by close to a full power of $\epsilon$. This
indicates that integrating $p$, and then applying a triangle inequality to
obtain an upper bound on $u$ is not efficient in the periodic case.
For both the constant coefficient and periodic cases, a half power of
$\epsilon$ is lost with each increase in dimension. This is due to the length
scaling and is negated when one considers the coarse-graining limit. We see no
reason this trend should not continue into higher dimensions. On the other
hand, there is not a decrease in the power of $\epsilon$ in the i.i.d. case.
Heuristically, one can see this by comparing the variance of $\chi$ in 1D and
in 2D. The fundamental solution $\varphi$ of $\Delta$ plays an important role
in the growth rate. In 1D, this fundamental solution is given by
$\varphi(\cdot)=\frac{1}{2}|\cdot|.$ Taking the expectation of $\chi_{r}^{2}$
where $\chi_{r}$ is defined by (3.4) yields that $\chi_{r}^{2}$ is
approximately the size of $r^{3}$ in $\ell^{\infty}$. The same procedure in 2D
yields $r^{2}\log(r)^{2}$. This accounts for an additional half power of
$\epsilon$ after taking square roots. The argument using the ideas of sub-
Gaussian random variables is one way to formalize this intuition and obtain an
almost sure estimate. Ultimately, this half power is negated by the increase
in dimension, and since the random term in the residual is still the dominant
one, we find that the size of the absolute error (as dependent upon
$\epsilon$) roughly does not change from 1D to 2D. Therefore the coarse-
graining limit converges faster in 2D.
Rates for: | a.e.v. | a.e.d.
---|---|---
Mass/Dim. | Predicted | Simulated | Predicted | Simulated
1D Const. Coeff. | $1.5$ | $1.5$ | $0.5$ | $0.5$
1D Period. Coeff. | $0.5$ | $0.7$ | $-0.5$ | $0.5$
1D Random Coeff. | $-\sigma$ | $0.2$ | $-1-\sigma$ | $-0.8$
2D Const. Coeff. | $1$ | $1.1$ | $0$ | $0.0$
2D Period. Coeff. | $0$ | $0.1$ | $-1$ | $0.0$
2D Random Coeff. | $-\sigma$ | $0.1$ | $-1-\sigma$ | $-1.0$
2D Layered Coeff. | $-0.5-\sigma$ | $-0.4$ | $-1.5-\sigma$ | $-1.5$
Table 1: This table summarizes the $\epsilon$-dependence of the absolute
errors. The numbers indicate the power of $\epsilon$; $\sigma$ is an
arbitrarily small positive constant.
## 10 Conclusion
We have rigorously justified the wave equation as a descriptor for the
macroscopic dynamics of a 2D square lattice composed of i.i.d. masses. We have
given analytic as well as numerical evidence that this description is more
accurate in 2D than it is in 1D. We conjecture that in 3D, the a.e.d. and
a.e.v. are roughly the same as they are in 2D for the i.i.d. case. This means
they would be roughly only a half power of $\epsilon$ larger than in the
constant case. We think such a result is modest evidence that waves propagate
more easily in a disordered lattice in higher dimensions. An important
exception to this was seen in the case of layered random masses, where the
error became larger from 1D to 2D in the same way it did for periodic or
constant masses. Although there are results in the continuous setting that are
similar to ours [8], as far as we can tell, this is the first result that
provides rigorous almost sure bounds on the rate of convergence for lattices,
and we think that these rates of convergence provide insight into the effects
of the dimensionality of the disorder on wave propagation. Finally, the
techniques introduced, especially the use of sub-Gaussian random variables,
can probably be used to obtain similar results for various other discrete
systems with different kinds of disorder.
## 11 Appendix
### 11.1 Probabilistic Estimates A
###### Lemma 11.1.
There exists a bijection
$I:{\bf{Z}}^{2}\times{\bf{N}}^{+}\setminus\{1\}\to{\bf{N}}^{+}$ (11.1)
such that
$I({\bf{j}},r)\leq C\max\{|{\bf{j}}|^{3},r^{3}\}.$ (11.2)
###### Proof.
Notice the set
$B(n)\coloneqq\{({\bf{j}},r)\ |\ |{\bf{j}}|\leq n\ ,\ r\leq n\}$ (11.3)
is a subset of the top half of a ball in ${\bf{Z}}^{3}.$ If we require
$I({\bf{j}}_{1},r_{1})<I({\bf{j}}_{2},r_{2})$ (11.4)
whenever there exists an integer $n$ s.t.
$({\bf{j}}_{1},r_{1})\in B(n)\ \ \text{but}\ \ ({\bf{j}}_{2},r_{2})\notin
B(n)$ (11.5)
we have at worst that
$I({\bf{j}},r)\leq|B(\max\{|{\bf{j}}|,r\})|\leq
C\max\{|{\bf{j}}|^{3},r^{3}\}.$ (11.6)
∎
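The shell-by-shell ordering in the proof can be made concrete with a short enumeration check. This is an illustrative sketch only; the constant `10` and the cutoff `n_max` are choices made here, not values from the proof.

```python
def shell_index(n_max):
    """Enumerate Z^2 x {2, 3, ...} shell by shell: every pair in
    B(n) = {(j, r) : |j| <= n, r <= n} receives its index before any
    pair outside B(n). Returns a dict mapping (j, r) -> index in N^+."""
    index = {}
    k = 1
    for n in range(2, n_max + 1):
        shell = []
        for jx in range(-n, n + 1):
            for jy in range(-n, n + 1):
                if jx * jx + jy * jy <= n * n:  # |j| <= n (Euclidean)
                    for r in range(2, n + 1):
                        if ((jx, jy), r) not in index:
                            shell.append(((jx, jy), r))
        for pair in sorted(shell):  # any order within a shell is fine
            index[pair] = k
            k += 1
    return index
```

On the enumerated range the map is injective with consecutive indices, and the cubic bound $I({\bf{j}},r)\leq C\max\{|{\bf{j}}|^{3},r^{3}\}$ holds comfortably with $C=10$.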
###### Lemma 11.2.
For $r\geq 2$, and ${\bf{j}}\in{\bf{Z}}^{2},$ there exists a constant $C$ s.t.
$\log^{+}(|{\bf{j}}|+r)\leq C(\log^{+}(|{\bf{j}}|)+\log^{+}(r))$ (11.7)
###### Proof.
The inequality is trivial for $|{\bf{j}}|=0$. For $|{\bf{j}}|\neq 0$, it
suffices to prove the inequality for $\log$. By the concavity of $\log$, the
function $\log(\cdot+1)$ is sub-additive, so
$\log(|{\bf{j}}|+r-1+1)\leq\log(|{\bf{j}}|+1)+\log(r)\leq
C(\log(|{\bf{j}}|)+\log(r)).$ (11.8)
∎
###### Lemma 11.3.
For $\varphi$ given by (3.3) and $\delta_{i}$ defined in (3.20) we have for
some $C$
$|\delta_{i}(\varphi)({\bf{j}})|\leq\frac{C}{|{\bf{j}}|+1}$ (11.9)
###### Proof.
By (3.3), we have
$\delta_{i}(\varphi)({\bf{j}})=\frac{1}{2\pi}\log^{+}|{\bf{j}}+{\bf{e}}_{i}|-\frac{1}{2\pi}\log^{+}|{\bf{j}}-{\bf{e}}_{i}|+O(|{\bf{j}}|^{-2})$
(11.10)
Note that when $|{\bf{j}}|=0$, we are left with only the small
$O(|{\bf{j}}|^{-2})$ terms. For $|{\bf{j}}|=1$ we have
$\left|\log^{+}|{\bf{j}}+{\bf{e}}_{i}|-\log^{+}|{\bf{j}}-{\bf{e}}_{i}|\right|\leq\log(2).$
(11.11)
For $|{\bf{j}}|>1$, we have
$\displaystyle\left|\log|{\bf{j}}+{\bf{e}}_{i}|-\log|{\bf{j}}-{\bf{e}}_{i}|\right|$
$\displaystyle\leq\left|\log\left(\frac{|{\bf{j}}|+|{\bf{e}}_{i}|}{|{\bf{j}}|-|{\bf{e}}_{i}|}\right)\right|$
(11.12)
$\displaystyle=\left|\log\left(\frac{|{\bf{j}}|-1+2}{|{\bf{j}}|-1}\right)\right|$
$\displaystyle\leq\left|\log\left(1+\frac{2}{|{\bf{j}}|-1}\right)\right|$
$\displaystyle\leq\left|\log\left(1+\frac{2}{|{\bf{j}}|+1}\frac{|{\bf{j}}|+1}{|{\bf{j}}|-1}\right)\right|$
$\displaystyle\leq\left|\log\left(1+\frac{4}{|{\bf{j}}|+1}\right)\right|$
$\displaystyle\leq\frac{4}{|{\bf{j}}|+1}.$
Thus we obtain the result. ∎
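The final inequality of this proof, $\left|\log|{\bf{j}}+{\bf{e}}_{1}|-\log|{\bf{j}}-{\bf{e}}_{1}|\right|\leq 4/(|{\bf{j}}|+1)$ for $|{\bf{j}}|>1$, is easy to check numerically; the following sketch does so for $i=1$ over a finite box (the box size in the check is an arbitrary choice):

```python
import math

def log_diff(jx, jy):
    """|log|j + e_1| - log|j - e_1|| for j = (jx, jy) in Z^2, |j| > 1."""
    plus = math.hypot(jx + 1, jy)   # |j + e_1|
    minus = math.hypot(jx - 1, jy)  # |j - e_1|
    return abs(math.log(plus) - math.log(minus))

def proof_bound(jx, jy):
    """The bound 4 / (|j| + 1) from the proof of Lemma 11.3."""
    return 4.0 / (math.hypot(jx, jy) + 1.0)
```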
###### Lemma 11.4.
For $\varphi$ given by (3.3) and $\delta_{i}$ defined in (3.20), we have for
some $C$
$\left\|\delta_{i}\varphi\right\|^{2}_{D({\bf{j}},r)}\leq C\log(|{\bf{j}}|+r)$
(11.13)
###### Proof.
By Lemma 11.3
$\left\|\delta_{i}\varphi\right\|^{2}_{D({\bf{j}},r)}\leq C\sum_{{\bf{k}}\in
D({\bf{j}},r)}\left(\frac{1}{|{\bf{k}}|+1}\right)^{2}.$ (11.14)
The largest magnitude in $D({\bf{j}},r)$ is at most $|{\bf{j}}|+r$. Also note
that there is some constant $C$ s.t. the number of elements of a given
magnitude $r^{\prime}$ is less than $C(r^{\prime}+1)$. Thus there exists a
constant $C$ s.t.
$\displaystyle\sum_{{\bf{k}}\in
D({\bf{j}},r)}\left(\frac{1}{|{\bf{k}}|+1}\right)^{2}$ $\displaystyle\leq
C\sum_{r^{\prime}=0}^{|{\bf{j}}|+r}(r^{\prime}+1)\left(\frac{1}{r^{\prime}+1}\right)^{2}=C\sum_{r^{\prime}=0}^{|{\bf{j}}|+r}\left(\frac{1}{r^{\prime}+1}\right)$
(11.15)
A common bound on the harmonic series yields
$\sum_{{\bf{k}}\in D({\bf{j}},r)}\left(\frac{1}{|{\bf{k}}|+1}\right)^{2}\leq
C\log(|{\bf{j}}|+r+2),$ (11.16)
which yields the result. ∎
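The lattice sum estimate in this proof can also be probed numerically. The sketch below sums $1/(|{\bf{k}}|+1)^{2}$ over $D({\bf{j}},r)$ (taken here to be the lattice points within distance $r$ of ${\bf{j}}$, an assumption about the notation) and compares with the logarithmic bound, using an illustrative constant $C=10$:

```python
import math

def disc_sum(jx, jy, r):
    """Sum of 1 / (|k| + 1)^2 over lattice points k within distance r of j."""
    total = 0.0
    for kx in range(jx - r, jx + r + 1):
        for ky in range(jy - r, jy + r + 1):
            if (kx - jx) ** 2 + (ky - jy) ** 2 <= r * r:
                total += 1.0 / (math.hypot(kx, ky) + 1.0) ** 2
    return total
```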
### 11.2 Lattice Energy Argument A
###### Lemma 11.5.
Let $f,g:{\bf{Z}}^{2}\to{\bf{R}}$. Recall the definition of the operators in
(3.20), (4.6), (3.18), and (4.20). Then
$\Delta_{i}(fg)=\Delta_{i}(f)g+S_{i}^{+}(f)\Delta_{i}(g)+\delta_{i}(f)\delta_{i}^{-}(g).$
(11.17)
###### Proof.
Starting from the definition in (4.20) we have
$\displaystyle\Delta_{i}(fg)=$ $\displaystyle S_{i}^{+}(fg)-2fg+S_{i}^{-}(fg)$
(11.18) $\displaystyle=$ $\displaystyle
S_{i}^{+}(f)g-2fg+S_{i}^{-}(f)g+S_{i}^{+}(fg)+S_{i}^{-}(fg)-S_{i}^{+}(f)g-S_{i}^{-}(f)g$
$\displaystyle=$
$\displaystyle\Delta_{i}(f)g+S_{i}^{+}(fg)-S_{i}^{+}(f)g+S_{i}^{-}(fg)-S_{i}^{-}(f)g$
$\displaystyle=$
$\displaystyle\Delta_{i}(f)g+S_{i}^{+}(fg)-2S_{i}^{+}(f)g+S_{i}^{+}(f)S_{i}^{-}(g)$
$\displaystyle+S_{i}^{+}(f)g-S_{i}^{+}(f)S_{i}^{-}(g)+S_{i}^{-}(fg)-S_{i}^{-}(f)g$
$\displaystyle=$
$\displaystyle\Delta_{i}(f)g+S_{i}^{+}(f)\Delta_{i}(g)+S_{i}^{+}(f)(g-S_{i}^{-}(g))-S_{i}^{-}(f)(g-S_{i}^{-}(g))$
$\displaystyle=$
$\displaystyle\Delta_{i}(f)g+S_{i}^{+}(f)\Delta_{i}(g)+\delta_{i}(f)\delta^{-}_{i}(g).$
∎
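The discrete product rule can be verified numerically. In the sketch below (an illustration, not the paper's code) the operators are realized on a periodic grid, with $\delta_{i}=S_{i}^{+}-S_{i}^{-}$ the centered difference and $\delta_{i}^{-}=1-S_{i}^{-}$ the backward difference, which are the conventions under which the identity in the proof closes:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal((7, 9))  # arbitrary fields on a periodic grid
g = rng.standard_normal((7, 9))

def shift(h, i, s):
    """S_i^{+} h for s = +1 and S_i^{-} h for s = -1 (periodic for the demo)."""
    return np.roll(h, -s, axis=i)

def lap(h, i):
    """Delta_i h = S_i^{+} h - 2 h + S_i^{-} h."""
    return shift(h, i, +1) - 2.0 * h + shift(h, i, -1)

def delta(h, i):
    """delta_i h = S_i^{+} h - S_i^{-} h (centered difference)."""
    return shift(h, i, +1) - shift(h, i, -1)

def delta_minus(h, i):
    """delta_i^{-} h = h - S_i^{-} h (backward difference)."""
    return h - shift(h, i, -1)
```

With these conventions, $\Delta_{i}(fg)=\Delta_{i}(f)g+S_{i}^{+}(f)\Delta_{i}(g)+\delta_{i}(f)\delta_{i}^{-}(g)$ holds pointwise on the grid.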
### 11.3 The Effective Wave A
###### Lemma 11.6.
Let $U$ satisfy (2.29) with sufficiently smooth initial conditions given by
(5.3). Then
$\left|E\left(D^{k}\frac{\partial^{i}U}{\partial\tau^{i}}\right)\right|\leq
C\left(\left\|\nabla\phi\right\|_{H^{i+k}}^{2}+\left\|\psi\right\|_{H^{i+k}}^{2}\right)$
(11.19)
where $E\left(D^{k}\frac{\partial^{i}U}{\partial\tau^{i}}\right)$ is defined
by (5.5). Here the constant $C$ depends on the wave speed $c$ which in turn is
entirely dependent on $\bar{m}.$
###### Proof.
Note that
$D^{k}U({\bf{X}},\tau):{\bf{R}}^{2}\times{\bf{R}}\to{\bf{R}}^{2^{k}}.$
Consider the $j^{\text{th}}$ component of the $2^{k}$ dimensional vector of
$D^{k}U$ denoted by $D^{k}U_{j}$. Consider a ball of radius $r$ in
${\bf{R}}^{d}$ about the origin denoted $B(r)$.
$\dot{E}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)=\lim_{r\to\infty}\int_{B(r)}\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)\left(\frac{d^{i+2}D^{k}U_{j}}{d\tau^{i+2}}\right)+c^{2}\nabla\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)\cdot\nabla\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)d{\bf{X}}$
(11.20)
Using integration by parts we find
$\displaystyle\dot{E}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)=$
$\displaystyle\lim_{r\to\infty}\int_{B(r)}\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)\left(\frac{d^{i+2}D^{k}U_{j}}{d\tau^{i+2}}\right)-c^{2}\Delta\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)d{\bf{X}}$
(11.21) $\displaystyle+\lim_{r\to\infty}\int_{\partial
B(r)}\left(\frac{\partial\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}}{\partial\nu}\right)\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)dS.$
Since $U$ solves (2.29), swapping the order of partials yields,
$\left(\frac{d^{i+2}D^{k}U_{j}}{d\tau^{i+2}}\right)-c^{2}\Delta\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)=0.$
(11.22)
We are left with
$\dot{E}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)=\lim_{r\to\infty}\int_{\partial
B(r)}\left(\frac{\partial\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}}{\partial\nu}\right)\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)dS,$
(11.23)
which is $0$ for all $\tau$ due to the finite propagation speed of the wave.
Note that at time zero, when $i$ is even,
$\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}=c^{i}D^{k}\Delta_{\bf{X}}^{\frac{i}{2}}U_{j}=c^{i}D^{k}\Delta_{\bf{X}}^{\frac{i}{2}}\phi_{j}$
(11.24)
while, when $i$ is odd,
$\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}=c^{i-1}D^{k}\Delta_{\bf{X}}^{\frac{i-1}{2}}\frac{dU_{j}}{d\tau}=c^{i-1}D^{k}\Delta_{\bf{X}}^{\frac{i-1}{2}}\psi_{j}.$
(11.25)
Since (11.21) is $0$ and holds for all $j\in\{1,2,\dots,2^{k}\}$, we have,
using either (11.24) or (11.25) in (5.5), that
$\left|E\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)\right|=\left|E\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(0)\right|\leq
C\left(\left\|\nabla\phi\right\|_{H^{i+k}}^{2}+\left\|\psi\right\|_{H^{i+k}}^{2}\right).$
(11.26)
∎
###### Lemma 11.7.
Let $U$ satisfy (2.29) with initial conditions given by (5.3). Suppose that
$w$ satisfies (5.8) and (5.9). Then
$\left|E_{w}\left(D^{k}\frac{\partial^{i}U}{\partial\tau^{i}}\right)(\tau)\right|\leq|\tau|C\left(\left\|\phi\right\|^{2}_{H_{w}^{i+k+1}}+\left\|\psi\right\|^{2}_{H_{w}^{i+k}}\right).$
(11.27)
The constant $C$ depends upon $\bar{m}$ as well as $B$ found in (5.9).
###### Proof.
Note that
$D^{k}U({\bf{X}},\tau):{\bf{R}}^{2}\times{\bf{R}}\to{\bf{R}}^{2^{k}}.$
Consider the $j^{\text{th}}$ component of the $2^{k}$ dimensional vector of
$D^{k}U$ denoted by $D^{k}U_{j}$. Consider a ball of radius $r$ in
${\bf{R}}^{d}$ about the origin denoted $B(r)$.
$\dot{E}_{w}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)=\lim_{r\to\infty}\int_{B(r)}w^{2}\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)\left(\frac{d^{i+2}D^{k}U_{j}}{d\tau^{i+2}}\right)+c^{2}w^{2}\nabla\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)\cdot\nabla\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)d{\bf{X}}$
(11.28)
Using integration by parts we find
$\displaystyle\dot{E}_{w}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)=$
$\displaystyle\lim_{r\to\infty}\int_{B(r)}w^{2}\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)\left(\frac{d^{i+2}D^{k}U_{j}}{d\tau^{i+2}}\right)-c^{2}\nabla\cdot\left(w^{2}\nabla\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)\right)\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)d{\bf{X}}$
(11.29) $\displaystyle+\lim_{r\to\infty}\int_{\partial
B(r)}w^{2}c^{2}\left(\frac{\partial\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}}{\partial\nu}\right)\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)dS.$
The boundary term vanishes in the limit due to finite propagation speed of the
wave. We need to calculate
$\nabla\cdot\left(w^{2}\nabla\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)\right)=\left(\nabla
w^{2}\right)\cdot\nabla\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)+w^{2}\Delta\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right).$
(11.30)
Since $D^{k}U_{j}$ satisfies (2.29), we are left with
$\dot{E}_{w}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)=-c^{2}\lim_{r\to\infty}\int_{B(r)}\nabla
w^{2}\cdot\nabla\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)d{\bf{X}}.$
(11.31)
Using the assumption on $w$ in (5.9) and Cauchy-Schwarz, we have
$\left|\nabla
w^{2}\cdot\nabla\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)\right|\leq
C\left|\nabla\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)\right|.$ (11.32)
Therefore
$\dot{E}_{w}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)\leq\lim_{r\to\infty}C\int_{B(r)}\left|\nabla\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)\right|\left|\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right|d{\bf{X}}.$
(11.33)
Using Cauchy-Schwarz once and swapping derivatives
$\dot{E}_{w}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)\leq
C\left\|D^{k+1}\left(\frac{d^{i}U_{j}}{d\tau^{i}}\right)\right\|_{L^{2}}\left\|\frac{D^{k}d^{i+1}U_{j}}{d\tau^{i+1}}\right\|_{L^{2}}.$
(11.34)
From (5.7), we know how to bound both factors:
$\dot{E}_{w}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)\leq
C\left(\left\|\nabla\phi\right\|^{2}_{H^{i+k}}+\left\|\psi\right\|^{2}_{H^{i+k}}\right).$
(11.35)
Integrating yields
$E_{w}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)\leq|\tau|C\left(\left\|\phi\right\|^{2}_{H^{i+k+1}}+\left\|\psi\right\|^{2}_{H^{i+k}}\right)+E_{w}\left(D^{k}\frac{\partial^{i}U}{\partial\tau^{i}}\right)(0).$
(11.36)
Using (11.24) or (11.25) and the definition (5.12)
$E_{w}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(0)\leq
C\left(\left\|\nabla\phi\right\|_{H^{i+k}_{w}}^{2}+\left\|\psi\right\|_{H^{i+k}_{w}}^{2}\right).$
(11.37)
This holds for all $j\in\{1,2,\dots,2^{k}\}.$ Thus
$\left|E_{w}\left(D^{k}\frac{\partial^{i}U}{\partial\tau^{i}}\right)(\tau)\right|\leq|\tau|C\left(\left\|\phi\right\|^{2}_{H_{w}^{i+k+1}}+\left\|\psi\right\|^{2}_{H_{w}^{i+k}}\right).$
(11.38)
∎
###### Lemma 11.8.
Let $U$ satisfy (2.29) with initial conditions given by (5.3)
$\left|\widetilde{E}\left(D^{k}\frac{\partial^{i}U}{\partial\tau^{i}}\right)(\tau)\right|\leq
C\left(\left\|\nabla\phi\right\|^{2}_{H^{i+k}(B(\epsilon^{-\sigma})^{c})}+\left\|\psi\right\|^{2}_{H^{i+k}(B(\epsilon^{-\sigma})^{c})}\right).$
(11.39)
The constant $C$ depends only on $\bar{m}.$
###### Proof.
Note that
$D^{k}U({\bf{X}},\tau):{\bf{R}}^{2}\times{\bf{R}}\to{\bf{R}}^{2^{k}}.$
Consider the $j^{\text{th}}$ component of the $2^{k}$ dimensional vector of
$D^{k}U$ denoted by $D^{k}U_{j}$. We apply Leibniz’s rule
$\displaystyle\dot{\widetilde{E}}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)=$
$\displaystyle\int_{B(c|\tau|+\epsilon^{-\sigma})^{c}}\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)\left(\frac{d^{i+2}D^{k}U_{j}}{d\tau^{i+2}}\right)+c^{2}D\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)\cdot\left(D\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)d{\bf{X}}$
(11.40) $\displaystyle-\operatorname{sgn}(\tau)\frac{1}{2}\int_{\partial
B(c\tau+\epsilon^{-\sigma})}c\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)^{2}+c^{3}\left|D\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)\right|^{2}dS.$
As we have seen two times already, since $D^{k}U_{j}$ satisfies (2.29),
integration by parts yields
$\displaystyle\dot{\widetilde{E}}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)=$
$\displaystyle-\int_{\partial
B(c\tau+\epsilon^{-\sigma})}c^{2}\left(\frac{\partial\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}}{\partial\nu}\right)\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)$
(11.41)
$\displaystyle+\operatorname{sgn}(\tau)\left(\frac{1}{2}c\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)^{2}+\frac{1}{2}c^{3}\left|D\left(\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right)\right|^{2}\right)dS.$
The first term in the integrand is bounded:
$c^{2}\left|\left(\frac{\partial\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}}{\partial\nu}\right)\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)\right|\leq
c^{2}\left|D\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right|\left|\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right|\leq\frac{1}{2}c\left(\frac{d^{i+1}D^{k}U_{j}}{d\tau^{i+1}}\right)^{2}+\frac{1}{2}c^{3}\left|D\frac{d^{i}D^{k}U_{j}}{d\tau^{i}}\right|^{2}$
(11.42)
Therefore
$\dot{\widetilde{E}}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)\leq
0$ (11.43)
and thus
$\widetilde{E}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)\leq\widetilde{E}\left(D^{k}\frac{\partial^{i}U}{\partial\tau^{i}}\right)(0).$
(11.44)
Using (11.24) or (11.25)
$\widetilde{E}\left(\frac{\partial^{i}D^{k}U_{j}}{\partial\tau^{i}}\right)(\tau)\leq
C\left(\left\|\nabla\phi\right\|^{2}_{H^{i+k}(B(\epsilon^{-\sigma})^{c})}+\left\|\psi\right\|^{2}_{H^{i+k}(B(\epsilon^{-\sigma})^{c})}\right).$
(11.45)
This holds for all $j\in\{1,\dots,2^{k}\}$, therefore yielding the result. ∎
###### Lemma 11.9.
For $\sigma>0$
$\left\|\phi\right\|_{H^{k}(B(\epsilon^{-\sigma})^{c})}\leq\epsilon\left\|\phi\right\|_{H^{k}_{\sigma}}.$
(11.46)
###### Proof.
Consider
$\left\|D^{j}\phi\right\|^{2}_{L^{2}(B(\epsilon^{-\sigma})^{c})}=\int_{B(\epsilon^{-\sigma})^{c}}\left|D^{j}\phi({\bf{X}})\right|^{2}d{\bf{X}}=\int_{B(1)^{c}}\left|D^{j}\phi(\epsilon^{-\sigma}{\bf{X}})\right|^{2}\epsilon^{-2\sigma}d{\bf{X}}.$
(11.47)
Therefore
$\left\|D^{j}\phi\right\|^{2}_{L^{2}(B(\epsilon^{-\sigma})^{c})}=\int_{B(1)^{c}}\frac{\left|\epsilon^{\sigma}{\bf{X}}\right|^{2\sigma^{-1}}}{\left|\epsilon^{\sigma}{\bf{X}}\right|^{2\sigma^{-1}}}\left|D^{j}\phi(\epsilon^{-\sigma}{\bf{X}})\right|^{2}\epsilon^{-2\sigma}d{\bf{X}}\leq\epsilon^{2}\int_{B(\epsilon^{-\sigma})^{c}}\left|{\bf{X}}\right|^{2\sigma^{-1}}\left|D^{j}\phi({\bf{X}})\right|^{2}d{\bf{X}}.$
(11.48)
Thus
$\left\|D^{j}\phi\right\|^{2}_{L^{2}(B(\epsilon^{-\sigma})^{c})}\leq\epsilon^{2}\left\|\left(1+\left|\
\cdot\ \right|\right)^{\sigma^{-1}}D^{j}\phi(\cdot)\right\|^{2}_{L^{2}}.$
(11.49)
Taking square roots, summing over $j$ and comparing with the definition of
$H^{k}_{w}$ in (5.19) yields the result. ∎
### 11.4 Residual Estimates A
###### Lemma 11.10.
Let $f({\bf{x}}):{\bf{R}}^{2}\to{\bf{R}}$ be in $H^{2}$ and let
${\bf{x}}^{({\bf{j}})}\in\prod_{i=1}^{2}[j_{i},j_{i}+1]$ where
${\bf{j}}\in\mathcal{Q}\subset{\bf{Z}}^{2}$. Then
$\sum_{{\bf{j}}\in\mathcal{Q}}f({\bf{x}}^{({\bf{j}})})^{2}\leq\sum_{{\bf{j}}\in\mathcal{Q}}4\left\|f\right\|^{2}_{H^{2}(\prod_{i=1}^{2}[{\bf{j}},{\bf{j}}+{\bf{e}}_{i}])}.$
(11.50)
###### Proof.
Note that $f$ is continuous by Sobolev embedding. Let
$\underline{{\bf{x}}}^{({\bf{j}})}$ denote the minimizer of $f$ over
$[j_{1},j_{1}+1]\times\{x^{({\bf{j}})}_{2}\}$. We can write
$f({\bf{x}}^{({\bf{j}})})^{2}=f(\underline{{\bf{x}}}^{({\bf{j}})})^{2}+\int_{\underline{x}^{({\bf{j}})}_{1}}^{x^{({\bf{j}})}_{1}}\frac{d}{dx_{1}}f(x_{1},x^{({\bf{j}})}_{2})^{2}dx_{1}.$
(11.51)
We can apply Cauchy-Schwarz to get
$f({\bf{x}}^{({\bf{j}})})^{2}\leq
f(\underline{{\bf{x}}}^{({\bf{j}})})^{2}+2\left|\int_{\underline{x}^{({\bf{j}})}_{1}}^{x^{({\bf{j}})}_{1}}f(x_{1},x^{({\bf{j}})}_{2})^{2}dx_{1}\right|^{\frac{1}{2}}\left|\int_{\underline{x}^{({\bf{j}})}_{1}}^{x^{({\bf{j}})}_{1}}\left(\frac{d}{dx_{1}}f(x_{1},x^{({\bf{j}})}_{2})\right)^{2}dx_{1}\right|^{\frac{1}{2}},$
(11.52)
which becomes
$f({\bf{x}}^{({\bf{j}})})^{2}\leq
f(\underline{{\bf{x}}}^{({\bf{j}})})^{2}+\left|\int_{\underline{x}^{({\bf{j}})}_{1}}^{x^{({\bf{j}})}_{1}}f(x_{1},x^{({\bf{j}})}_{2})^{2}dx_{1}\right|+\left|\int_{\underline{x}^{({\bf{j}})}_{1}}^{x^{({\bf{j}})}_{1}}\left(\frac{d}{dx_{1}}f(x_{1},x^{({\bf{j}})}_{2})\right)^{2}dx_{1}\right|.$
(11.53)
Since the integrands are non-negative
$f({\bf{x}}^{({\bf{j}})})^{2}\leq
f(\underline{{\bf{x}}}^{({\bf{j}})})^{2}+\int_{j_{1}}^{j_{1}+1}f(x_{1},x^{({\bf{j}})}_{2})^{2}dx_{1}+\int_{j_{1}}^{j_{1}+1}\left(\frac{d}{dx_{1}}f(x_{1},x^{({\bf{j}})}_{2})\right)^{2}dx_{1}.$
(11.54)
Since $\underline{{\bf{x}}}^{({\bf{j}})}$ is a minimizer, the first term on the right hand side is bounded by the first integral.
$f({\bf{x}}^{({\bf{j}})})^{2}\leq
2\int_{j_{1}}^{j_{1}+1}f(x_{1},x^{({\bf{j}})}_{2})^{2}dx_{1}+\int_{j_{1}}^{j_{1}+1}\left(\frac{d}{dx_{1}}f(x_{1},x^{({\bf{j}})}_{2})\right)^{2}dx_{1}.$
(11.55)
Let
$F(x_{2})^{2}\coloneqq
2\int_{j_{1}}^{j_{1}+1}f(x_{1},x_{2})^{2}dx_{1}+\int_{j_{1}}^{j_{1}+1}\left(\frac{d}{dx_{1}}f(x_{1},x_{2})\right)^{2}dx_{1}.$
(11.56)
Reusing notation, we now let $\underline{x}^{({\bf{j}})}_{2}$ denote the
minimizer of $F$ over $[j_{2},j_{2}+1].$ Similar to before, we get that
$F(x_{2}^{({\bf{j}})})^{2}=F(\underline{x}^{({\bf{j}})}_{2})^{2}+\int_{\underline{x}^{({\bf{j}})}_{2}}^{x_{2}^{({\bf{j}})}}\frac{d}{dx_{2}}F(x_{2})^{2}dx_{2}.$
(11.57)
Now
$\frac{d}{dx_{2}}F(x_{2})^{2}=\int_{j_{1}}^{j_{1}+1}4f(x_{1},x_{2})\frac{\partial}{\partial
x_{2}}f(x_{1},x_{2})+2\frac{\partial}{\partial
x_{1}}f(x_{1},x_{2})\frac{\partial^{2}}{\partial
x_{2}x_{1}}f(x_{1},x_{2})dx_{1}.$ (11.58)
One application of Cauchy-Schwarz to the integrand gives
$\left|\frac{d}{dx_{2}}F(x_{2})^{2}\right|\leq
4\int_{j_{1}}^{j_{1}+1}\left(f^{2}+\left(\frac{\partial}{\partial
x_{1}}f\right)^{2}\right)^{\frac{1}{2}}\left(\left(\frac{\partial}{\partial
x_{2}}f\right)^{2}+\left(\frac{\partial^{2}}{\partial
x_{2}x_{1}}f\right)^{2}\right)^{\frac{1}{2}}dx_{1},$ (11.59)
which becomes
$\left|\frac{d}{dx_{2}}F(x_{2})^{2}\right|\leq
2\int_{j_{1}}^{j_{1}+1}f^{2}+\left(\frac{\partial}{\partial
x_{1}}f\right)^{2}+\left(\frac{\partial}{\partial
x_{2}}f\right)^{2}+\left(\frac{\partial^{2}}{\partial
x_{2}x_{1}}f\right)^{2}dx_{1}.$ (11.60)
Therefore
$\int_{\underline{x}^{({\bf{j}})}_{2}}^{x_{2}^{({\bf{j}})}}\left|\frac{d}{dx_{2}}F^{2}\right|dx_{2}\leq
2\int_{j_{2}}^{j_{2}+1}\int_{j_{1}}^{j_{1}+1}f^{2}+\left(\frac{\partial}{\partial
x_{1}}f\right)^{2}+\left(\frac{\partial}{\partial
x_{2}}f\right)^{2}+\left(\frac{\partial^{2}}{\partial
x_{2}x_{1}}f\right)^{2}dx_{1}dx_{2}.$ (11.61)
Also
$F(\underline{x}_{2}^{({\bf{j}})})^{2}\leq\int_{j_{2}}^{j_{2}+1}F(x_{2})^{2}dx_{2}.$
(11.62)
By (11.56)
$F(\underline{x}_{2}^{({\bf{j}})})^{2}\leq
2\int_{j_{2}}^{j_{2}+1}\int_{j_{1}}^{j_{1}+1}f(x_{1},x_{2})^{2}+\left(\frac{d}{dx_{1}}f(x_{1},x_{2})\right)^{2}dx_{1}dx_{2}.$
(11.63)
Using (11.61) and (11.63) in (11.57), we have
$F(x_{2}^{({\bf{j}})})^{2}\leq\int_{j_{2}}^{j_{2}+1}\int_{j_{1}}^{j_{1}+1}4f^{2}+4\left(\frac{\partial}{\partial
x_{1}}f\right)^{2}+2\left(\frac{\partial}{\partial
x_{2}}f\right)^{2}+2\left(\frac{\partial^{2}}{\partial
x_{2}x_{1}}f\right)^{2}dx_{1}dx_{2}.$ (11.64)
From (11.55), we have
$f({\bf{x}}^{({\bf{j}})})^{2}\leq\int_{j_{2}}^{j_{2}+1}\int_{j_{1}}^{j_{1}+1}4f^{2}+4\left(\frac{\partial}{\partial
x_{1}}f\right)^{2}+2\left(\frac{\partial}{\partial
x_{2}}f\right)^{2}+2\left(\frac{\partial^{2}}{\partial
x_{2}x_{1}}f\right)^{2}dx_{1}dx_{2}.$ (11.65)
Summing over ${\bf{j}}\in\mathcal{Q}$ we get
$\sum_{{\bf{j}}\in\mathcal{Q}}f({\bf{x}}^{({\bf{j}})})^{2}\leq
4\sum_{{\bf{j}}\in\mathcal{Q}}\left\|f\right\|^{2}_{H^{2}(\prod_{i=1}^{2}[{\bf{j}},{\bf{j}}+{\bf{e}}_{i}])}.$
(11.66)
∎
###### Corollary 11.11.
In the same context as Lemma 11.10 with $\mathcal{Q}={\bf{Z}}^{2},$ we have
$\sum_{{\bf{j}}\in{\bf{Z}}^{2}}f(\epsilon{\bf{x}}^{({\bf{j}})})^{2}\leq\epsilon^{-2}4\left\|f\right\|^{2}_{H^{2}}.$
(11.67)
###### Lemma 11.12.
In the same context as Lemma 11.10 with
$\mathcal{Q}=D({\bf{0}},\epsilon^{-1}R+1)^{c},$ we have
$\sum_{{\bf{j}}\in\mathcal{Q}}f(\epsilon{\bf{x}}^{({\bf{j}})})^{2}\leq\epsilon^{-2}4\left\|f\right\|^{2}_{H^{2}(B(R)^{c})}.$
(11.68)
###### Proof.
From Lemma 11.10
$\sum_{{\bf{j}}\in
D({\bf{0}},\epsilon^{-1}R+1)^{c}}f(\epsilon{\bf{x}}^{({\bf{j}})})^{2}\leq
4\sum_{{\bf{j}}\in
D({\bf{0}},\epsilon^{-1}R+1)^{c}}\left\|f(\epsilon\cdot)\right\|^{2}_{H^{2}(\prod_{i=1}^{2}[{\bf{j}},{\bf{j}}+{\bf{e}}_{i}])}.$
(11.69)
Note that
$\bigcup_{{\bf{j}}\in
D({\bf{0}},\epsilon^{-1}R+1)^{c}}\prod_{i=1}^{2}[{\bf{j}},{\bf{j}}+{\bf{e}}_{i}]\subset
B(\epsilon^{-1}R)^{c}.$ (11.70)
Hence
$\sum_{{\bf{j}}\in
D({\bf{0}},\epsilon^{-1}R+1)^{c}}\left\|f(\epsilon\cdot)\right\|^{2}_{H^{2}(\prod_{i=1}^{2}[{\bf{j}},{\bf{j}}+{\bf{e}}_{i}])}\leq\left\|f(\epsilon\cdot)\right\|^{2}_{H^{2}(B(\epsilon^{-1}R)^{c})}.$
(11.71)
Written in polar coordinates
$\displaystyle\left\|f(\epsilon\cdot)\right\|^{2}_{H^{2}(B(\epsilon^{-1}R)^{c})}=$
$\displaystyle\int_{0}^{2\pi}\int_{\epsilon^{-1}R}^{\infty}f(\theta,\epsilon
r)^{2}rdrd\theta$ (11.72) $\displaystyle=$
$\displaystyle\epsilon^{-2}\int_{0}^{2\pi}\int_{R}^{\infty}f(\theta,s)^{2}sdsd\theta$
$\displaystyle=$
$\displaystyle\epsilon^{-2}\left\|f\right\|^{2}_{H^{2}(B(R)^{c})}.$
∎
###### Lemma 11.13.
Let $\delta^{\pm}_{\epsilon
i}\phi({\bf{X}})\coloneqq\pm(\phi({\bf{X}}\pm\epsilon{\bf{e}}_{i})-\phi({\bf{X}})).$
Let $w$ be as in assumptions (5.8) and (5.9). Then
$\left\|\delta^{\pm}_{\epsilon i}(\phi)\right\|_{H^{k}_{w}}\leq\epsilon
C\left\|\phi\right\|_{H^{k+1}_{w}}.$ (11.73)
###### Proof.
We do the proof for $i=1$. Let $\partial$ be some generic partial derivative
of order $k$. By the Fundamental Theorem of Calculus
$\partial\delta^{+}_{\epsilon i}\phi({\bf{X}})=\delta^{+}_{\epsilon
i}\partial\phi({\bf{X}})=\int_{X_{1}}^{X_{1}+\epsilon}\partial_{X^{\prime}_{1}}\partial\phi(X_{1}^{\prime},X_{2})dX_{1}^{\prime}.$
(11.74)
Now we calculate
$\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}w(X_{1},X_{2})^{2}\left(\partial\delta^{+}_{\epsilon
i}\phi(X_{1},X_{2})\right)^{2}dX_{1}dX_{2}$ (11.75)
$\displaystyle=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}w(X_{1},X_{2})^{2}\left(\int_{X_{1}}^{X_{1}+\epsilon}\partial_{X^{\prime}_{1}}\partial\phi(X_{1}^{\prime},X_{2})dX_{1}^{\prime}\right)^{2}dX_{1}dX_{2}$
An application of Jensen’s Inequality yields
$\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}w(X_{1},X_{2})^{2}\left(\int_{X_{1}}^{X_{1}+\epsilon}\partial_{X^{\prime}_{1}}\partial\phi(X_{1}^{\prime},X_{2})dX_{1}^{\prime}\right)^{2}dX_{1}dX_{2}$
(11.76)
$\displaystyle\leq\epsilon\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}w(X_{1},X_{2})^{2}\int_{X_{1}}^{X_{1}+\epsilon}\left(\partial_{X^{\prime}_{1}}\partial\phi(X_{1}^{\prime},X_{2})\right)^{2}dX_{1}^{\prime}dX_{1}dX_{2}.$
Now we swap the order of integration in $X_{1}$ and $X_{1}^{\prime}$:
$\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}w(X_{1},X_{2})^{2}\int_{X_{1}}^{X_{1}+\epsilon}\left(\partial_{X^{\prime}_{1}}\partial\phi(X_{1}^{\prime},X_{2})\right)^{2}dX_{1}^{\prime}dX_{1}dX_{2}$
(11.77)
$\displaystyle=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left(\partial_{X^{\prime}_{1}}\partial\phi(X_{1}^{\prime},X_{2})\right)^{2}\int_{X_{1}^{\prime}-\epsilon}^{X_{1}^{\prime}}w(X_{1},X_{2})^{2}dX_{1}dX_{1}^{\prime}dX_{2}$
From our assumption on $w$ in (5.8),
$\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left(\partial_{X^{\prime}_{1}}\partial\phi(X_{1}^{\prime},X_{2})\right)^{2}\int_{X_{1}^{\prime}-\epsilon}^{X_{1}^{\prime}}w(X_{1},X_{2})^{2}dX_{1}dX_{1}^{\prime}dX_{2}$
(11.78)
$\displaystyle\leq\epsilon
C\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left(\partial_{X^{\prime}_{1}}\partial\phi(X_{1}^{\prime},X_{2})\right)^{2}w(X_{1}^{\prime},X_{2})^{2}dX_{1}^{\prime}dX_{2}.$
Stringing these inequalities together, we find
$\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}w(X_{1},X_{2})^{2}\left(\partial\delta^{+}_{\epsilon
i}\phi(X_{1},X_{2})\right)^{2}dX_{1}dX_{2}$ (11.79)
$\displaystyle\leq\epsilon^{2}C\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left(\partial_{X^{\prime}_{1}}\partial\phi(X_{1}^{\prime},X_{2})\right)^{2}w(X_{1}^{\prime},X_{2})^{2}dX_{1}^{\prime}dX_{2},$
which implies
$\left\|\delta^{+}_{\epsilon
i}\phi\right\|_{H^{k}_{w}}\leq\epsilon C\left\|\phi\right\|_{H^{k+1}_{w}}.$
(11.80)
Now let $S^{-}_{\epsilon
i}\phi({\bf{X}})\coloneqq\phi({\bf{X}}-\epsilon{\bf{e}}_{i})$. Thus
$\delta_{\epsilon i}^{-}\phi=S_{\epsilon i}^{-}\delta^{+}_{\epsilon i}\phi$.
We now show that $S_{\epsilon i}^{-}$ is a bounded operator. Consider any
$f\in L^{2}$, then
$\int_{{\bf{R}}^{2}}w({\bf{X}})^{2}f({\bf{X}}-\epsilon{\bf{e}}_{i})^{2}d{\bf{X}}=\int_{{\bf{R}}^{2}}w({\bf{X}}+\epsilon{\bf{e}}_{i})^{2}f({\bf{X}})^{2}d{\bf{X}}.$
(11.81)
From the definition of the weight in (5.8), there exists a constant $C$ s.t.
$w({\bf{X}}+\epsilon{\bf{e}}_{i})^{2}\leq C^{2}w({\bf{X}})^{2}.$ (11.82)
Therefore the operator $S_{\epsilon i}^{-}$ is bounded by this constant $C.$
Thus
$\left\|\delta^{-}_{\epsilon i}\phi\right\|_{H^{k}_{w}}\leq\epsilon
C\left\|\phi\right\|_{H^{k+1}_{w}}.$ (11.83)
###### Corollary 11.14.
Let
$\Delta_{\epsilon}\phi({\bf{X}})\coloneqq\phi({\bf{X}}+\epsilon{\bf{e}}_{i})-2\phi({\bf{X}})+\phi({\bf{X}}-\epsilon{\bf{e}}_{i})$
and $w$ as in the previous lemma. Then
$\left\|\Delta_{\epsilon}\phi\right\|_{H_{w}^{k}}\leq\epsilon^{2}C\left\|\phi\right\|_{H^{k+2}_{w}}.$
(11.84)
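The $\epsilon^{2}$ contraction in (11.84) can be sanity-checked numerically: for a smooth function, $\Delta_{\epsilon}\phi$ matches $\epsilon^{2}\phi''$ up to an $O(\epsilon^{4})$ residual. The following 1D sketch (illustrative only, not part of the argument) verifies this for $\phi=\sin$:

```python
import math

def second_difference(phi, x, eps):
    """Delta_eps phi(x) = phi(x+eps) - 2*phi(x) + phi(x-eps)."""
    return phi(x + eps) - 2.0 * phi(x) + phi(x - eps)

eps = 0.01
xs = [i * 0.1 for i in range(-30, 31)]
# For phi = sin: Delta_eps phi = 2*(cos(eps)-1)*sin(x), so the residual
# against eps^2 * phi''(x) = -eps^2 * sin(x) is of order eps^4 / 12.
residual = max(abs(second_difference(math.sin, x, eps) + eps**2 * math.sin(x))
               for x in xs)
```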
### 11.5 Coarse Graining A
###### Lemma 11.15.
Let $U:{\bf{R}}^{2}\to{\bf{R}}^{2}$ be in $H^{s}$ with $s>1$. Put
$U_{\epsilon}({\bf{x}})\coloneqq U(\epsilon{\bf{x}}).$ Then
$\lim_{\epsilon\to
0^{+}}\left\|\mathcal{LS}[U_{\epsilon}](\cdot/\epsilon)-U\right\|_{L^{2}}=0$
(11.85)
where $\mathcal{L}$ and $\mathcal{S}$ are defined by (8.5) and (8.3).
###### Proof.
From their definitions we have
$\mathcal{LS}[U_{\epsilon}]({\bf{x}})=\frac{1}{4\pi^{2}}\int_{[-\pi,\pi]^{2}}\left(\sum_{{\bf{j}}\in{\bf{Z}}^{2}}e^{-i{\bf{y}}\cdot{\bf{j}}}U(\epsilon{\bf{j}})\right)e^{i{\bf{y}}\cdot{\bf{x}}}d{\bf{y}}.$
(11.86)
Changing variables from ${\bf{y}}\to\epsilon{\bf{y}}$ gives
$\mathcal{LS}[U_{\epsilon}]({\bf{x}})=\frac{\epsilon^{2}}{4\pi^{2}}\int_{[-\pi/\epsilon,\pi/\epsilon]^{2}}\left(\sum_{{\bf{j}}\in{\bf{Z}}^{2}}e^{-i\epsilon{\bf{y}}\cdot{\bf{j}}}U(\epsilon{\bf{j}})\right)e^{i\epsilon{\bf{y}}\cdot{\bf{x}}}d{\bf{y}}.$
(11.87)
Now swap the integral and the sum and integrate to get
$\mathcal{LS}[U_{\epsilon}]({\bf{x}})=\sum_{{\bf{j}}\in{\bf{Z}}^{2}}U(\epsilon{\bf{j}})\operatorname{sinc}(x_{1}-j_{1})\operatorname{sinc}(x_{2}-j_{2}),$
(11.88)
where $\operatorname{sinc}$ is the normalized $\operatorname{sinc}$ function,
$\frac{\sin(\pi x)}{\pi x}.$ Substituting ${\bf{x}}\to{\bf{X}}/\epsilon$ yields
$\mathcal{LS}[U_{\epsilon}]({\bf{X}}/\epsilon)=\sum_{{\bf{j}}\in{\bf{Z}}^{2}}U(\epsilon{\bf{j}})\operatorname{sinc}(X_{1}/\epsilon-
j_{1})\operatorname{sinc}(X_{2}/\epsilon-j_{2}).$ (11.89)
This series is exactly equal to the band-limited approximation of
$U_{\epsilon}$ given by
$\mathcal{LS}[U_{\epsilon}]({\bf{X}}/\epsilon)=\widetilde{U}_{\epsilon}({\bf{X}})\coloneqq\mathcal{F}^{-1}[\theta_{\pi/\epsilon}\mathcal{F}[U]]({\bf{X}}).$
(11.90)
See [15] for details regarding band-limited functions. Now we only need to
compare $U$ and $\widetilde{U}_{\epsilon}$, but by use of Plancherel’s
Theorem, this is equivalent to looking at
$\displaystyle\left\|\theta_{\pi/\epsilon}\mathcal{F}[U]-\mathcal{F}[U]\right\|^{2}_{L^{2}}$
$\displaystyle=\int_{|{\bf{y}}|_{\infty}>\pi/\epsilon}\widehat{U}({\bf{y}})^{2}d{\bf{y}}$
(11.91)
$\displaystyle\leq\sup_{|{\bf{y}}|_{\infty}\geq\pi/\epsilon}\left(\frac{1}{|{\bf{y}}|^{2s}}\right)\int_{|{\bf{y}}|_{\infty}>\pi/\epsilon}|{\bf{y}}|^{2s}\widehat{U}({\bf{y}})^{2}d{\bf{y}}$
$\displaystyle\leq C\epsilon^{2s}\left\|U\right\|^{2}_{H^{s}}.$
Since we have taken $s>1$, the result is proved, and the rate of convergence is
faster than $\epsilon.$
∎
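Lemma 11.15 can also be illustrated numerically. The sketch below is a 1D analogue of the 2D statement, with an illustrative Gaussian standing in for $U$: it evaluates the truncated Whittaker series of (11.88)-(11.89) and checks that the sup-norm error on a fixed grid shrinks as $\epsilon$ decreases:

```python
import math

def sinc(t):
    """Normalized sinc, sin(pi t)/(pi t)."""
    return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)

def interpolate(U, eps, X, half_width=8.0):
    """Truncated Whittaker series: sum_j U(eps j) sinc(X/eps - j), |eps j| <= half_width."""
    N = int(half_width / eps)
    return sum(U(eps * j) * sinc(X / eps - j) for j in range(-N, N + 1))

U = lambda X: math.exp(-X * X)              # smooth, rapidly decaying test function
grid = [i * 0.05 for i in range(-40, 41)]   # comparison points in [-2, 2]

def sup_error(eps):
    return max(abs(interpolate(U, eps, X) - U(X)) for X in grid)

err_coarse, err_fine = sup_error(0.5), sup_error(0.25)
```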
## References
* [1] L. Brillouin, _Wave propagation in periodic structures_ , Dover Books on Engineering and Engineering Physics, Dover Publications, Inc., New York, New York, 1953
* [2] D. Cioranescu and P. Donato, _An Introduction to Homogenization_ , Oxford Lecture Ser. Math. Appl. 17, The Clarendon Press, Oxford University Press, New York, New York, 1999
* [3] M. Chirilus-Bruckner, C. Chong, O. Prill, and G. Schneider, _Rigorous description of macroscopic wave packets in infinite periodic chains of coupled oscillators by modulation equations_ , Discrete Contin. Dyn. Syst. Ser. S, 5 (2012), pp. 879–901.
* [4] W. Craig, _A Course on Partial Differential Equations_ , Graduate Studies in Math. 197 American Mathematical Society, Providence, Rhode Island, 2018, pp.122-123
* [5] R.J. Duffin, _Basic Properties of Discrete Analytic Functions_ , Duke Math. J., 23 (1956), pp. 335 - 363
* [6] R. Durrett, _Probability, Theory and Examples_ , Cambridge Ser. in Stat. and Prob. Math., Cambridge University Press, New York, 2010, pp. 65
* [7] L. C. Evans, _Partial Differential Equations, Second Edition_ , Graduate Studies in Math. Vol 19, American Mathematical Society, Providence, Rhode Island, 2010
* [8] J.-P. Fouque, J. Garnier, G. Papanicolaou, K. Sølna, _Wave Propagation and Time Reversal in Randomly Layered Media_ , Stochastic Modeling and Applied Probability 56, Springer Science+Business Media, New York, New York, 2007
* [9] J. Gaison, S. Moskow, J. D. Wright, and Q. Zhang, _Approximation of Polyatomic FPU Lattices by KdV Equations_ , Mult. Scale Model. Simul., 12 (2014), pp. 953-995
* [10] J. Lukkarinen and H. Spohn, _Kinetic limit for wave propagation in a random media_ , Arch. Rational Mech. Anal. 183 (2007) 93–162
* [11] M. J. Martínez, P. G. Kevrekidis, and M. A. Porter, _Superdiffusive transport and energy localization in disordered granular crystals_ , Phys. Rev. E 93 022902 (2016).
* [12] A. Mielke, _Macroscopic Behavior of Microscopic Oscillations in Harmonic Lattices via Wigner-Husimi Transforms_ , Arch. Rational Mech. Anal., 181 (2006), pp. 401–448
* [13] P. Massart, _Concentration Inequalities and Model Selection_ , Ecole d’Eté de Probabilités de Saint-Flour XXXIII-2003, Springer Verlag, Berlin, 2007, pp. 21-23
* [14] J. A. McGinnis and J. D. Wright, _Using Random Walks to Establish Wavelike Behavior in an FPUT System with Random Coefficients_ , Discrete Contin. Dyn. Syst. Ser. S, 5 (2021)
* [15] J. McNamee, F. Stenger and E.L. Whitney, _Whittaker’s Cardinal Function in Retrospect_ , Mathematics of Computation, 25 (1971). pp 141-154
* [16] G. Schneider and C. E. Wayne, _Counter-propagating waves on fluid surfaces and the continuum limit of the Fermi-Pasta-Ulam model_ , International Conference on Differential Equations, Vols. 1, 2 (Berlin, 1999), World Scientific, River Edge, NJ, 2000, pp. 390–404.
# Compact optical grating compressor
Vladyslav V. Ivanov <EMAIL_ADDRESS> Physical Sciences Inc., 20 New England Bus Center Dr, Andover, MA 01810 USA
###### Abstract
A novel design of a grating-based optical pulse compressor is proposed. The
proposed compressor provides a large group delay dispersion while keeping the
linear size of the compressor small. The design is based on a traditional
Treacy compressor with a straightforward modification: two lenses are inserted
between the compressor's gratings. This simple alteration substantially
increases the group delay dispersion of the compressor or, alternatively,
decreases the compressor size while maintaining its group delay dispersion. A
theoretical description of the enhanced compressor has been developed in the
paraxial approximation. A detailed numerical model has been built to calculate
the compressor parameters more accurately. These theoretical studies show that
the enhanced optical compressor provides a significant increase in the group
delay dispersion compared to a standard Treacy compressor.
## I Introduction
In laser amplifiers for ultra-short pulses, the peak optical powers become
very high, so nonlinear pulse distortion or destruction of some optical
components might occur. Chirped pulse amplification (CPA) is a common
technique to circumvent this problem. CPA for lasers was invented in the
mid-1980s Strickland85 and was awarded the Nobel Prize in Physics in 2018. CPA
is routinely used in high peak power lasers. Ultrafast high power lasers have
numerous applications, such as high-precision material machining including
drilling, cutting, and surface processing Orazi21 ; Zhang19 ; Lei20 ;
Theodosiou19 ; Xu19 , medical applications, in particular ophthalmic surgery
Lubatschowski03 ; Li22 , and military applications such as countermeasures or
directed-energy weapons LaserWeapon21 ; SBIR2017 ; SBIR2020 .
Peak powers of 10 TW and above have been reported Tavella07 ; Rouyers93 .
Currently, laser systems delivering terawatt-level peak power can operate
only in the laboratory environment due to their size, weight, and frequent
need for maintenance Matsuoka06 ; Kessler14 ; Ekspla ; Lightcon . However,
high peak power laser systems would be of interest for a number of applications
outside laboratory environments, such as manufacturing and defense Ekspla ;
Breitkopf14 ; Chvykov17 ; SBIR2020 ; devcom2021 .
In the CPA technique, low-power short pulses are passed through a stretcher,
i.e. an optical element with a positive spectral dispersion, so that the pulses
are temporally stretched prior to amplification. The stretched pulses can be
safely amplified without damaging the amplifier material because the peak power
is significantly reduced. After the amplification, the original pulse width is
recovered by passing the pulses through the compressor, which cancels the
positive dispersion of the stretcher by providing a dispersion equal in
amplitude but opposite in sign. The pulse compressor is based on a concept
introduced by Treacy in 1969 Treacy69 . Treacy's concept relies on a grating
pair that provides large values of negative dispersion.
Currently, available peak powers are limited by the optical damage of the
amplifier material. The optical damage can be alleviated by stretching the
pulses to a longer duration. Significant difficulties are posed by the
available diffraction gratings. First, it is difficult to fabricate large
gratings that are uniformly ruled over their whole area. This imposes a
limit on the peak powers and pulse durations achievable with
conventional compressors. Second, large-dispersion compressors imply a large
distance between the compressor's elements. This increases the size of the
laser system and makes its use impractical outside of labs, especially on
moving vehicles.
The linear size of the optical compressor can be significantly reduced by
folding the beam path using retro-reflecting prisms Lai93 or by integrating
diffraction gratings onto opposite surfaces of a solid fused silica block
Yang16 . While the design of Yang and Towe Yang16 offers an elegant idea, it
may be difficult to manufacture. Further, their design implies propagation of
the laser pulse inside the medium for a significant distance, which might cause
spectral distortions or optical damage. The design proposed by Lai Lai93 is
easily achievable and can be further improved using grisms Chauhan10 .
However, the design presented by Lai doesn't address the problem of a long
optical path. In a way, the design described in Lai93 trades the linear size of
the compressor for several additional reflective optical elements; thus such an
optical compressor remains bulky and sensitive to vibrations.
The present work focuses on reducing the required optical path by enhancing
the dispersion of the compressor. In this work, the group delay dispersion of
the compressor is substantially enhanced by two cylindrical lenses placed
between the compressor's gratings. The theoretical study below demonstrates a
notable advantage of such a compressor compared to the more standard design.
## II Description and Theory of the Treacy Grating Compressor
The basic sketch of the traditional Treacy grating compressor Treacy69 is
shown in Fig. 1(a). It comprises two parallel diffraction gratings, with their
working surfaces facing one another and their lines parallel to each other.
When a beam is incident on the first grating of the compressor (A), its
spectral components are diffracted at different angles. In a sense, the grating
acts as a convex mirror, or as a negative lens in the case of a transmission
grating. After the reflection, the different spectral components of the beam
become spatially separated. After the reflection from the second grating (B),
the beam becomes collimated again. At the output of the grating pair, the beam
is spatially chirped. This can be solved by a second identical grating pair or,
more practically, by retro-reflecting the light back into the original grating
pair with a mirror (C). Additionally, that doubles the amount of negative
dispersion. Ultimately, the different spectral components have different,
frequency-dependent optical paths. This path difference creates the negative
dispersion needed for the re-compression of previously stretched pulses. The
group delay dispersion (GDD) of the Treacy compressor is given in Eq. 1.
Figure 1: (a) the Treacy grating compressor, (b) the enhanced grating
compressor.
$\varphi^{\prime\prime}=-\frac{G\lambda^{3}}{2\pi c^{2}d^{2}\cos^{3}\beta},$
(1)
here $G$ is the slant (perpendicular) distance between the gratings, $\beta$
is the reflection angle from the grating, $d$ is the grating constant, and $c$
is the speed of light. The reflection angle $\beta$ is given by Eq. 2:
$\cos\beta=\sqrt{1-(\sin\alpha-\lambda/d)^{2}},$ (2)
here $\alpha$ is the incidence angle of the incoming beam on the 1st grating.
The dispersion $\varphi^{\prime\prime}$ provided by the traditional Treacy
compressor is a function of the grating period $d$, the beam incidence angle
$\alpha$, and the grating separation $G$. Unfortunately, increasing the
compressor's dispersion using the first two parameters is problematic. The
grating period cannot be arbitrarily small because of manufacturing
constraints, and the incidence angle is usually chosen to maximize power
efficiency. Larger negative dispersion in grating compressors can be achieved
by increasing the distance between the gratings. For moderate pulse durations
of several hundred picoseconds, a typical compressor length is of the order of
tens of centimeters. However, if one has to stretch the pulse to a duration of
1 nsec or above, propagation distances beyond a meter are required. As a simple
yet practical example, we consider a pulse with a Fourier-limited width of 500
fsec (FWHM), which corresponds to a wavelength spread of $\Delta\lambda$ = 3.1
nm (FWHM) at the central wavelength of 1030 nm. The wavelength of 1030 nm is
chosen because it is the operating wavelength of Yb:YAG lasers, which are a
popular choice for high-power ultrafast lasers. The pulse is stretched to a
temporal width of 1 nsec using a chirped fiber Bragg grating (CFBG) with a
second-order group velocity dispersion of 180 $ps^{2}$. Non-linear effects are
neglected, assuming strong pulse stretching keeps the B-integral values small.
In order to compress the pulse, the grating compressor has to have a dispersion
of the opposite sign and equal amplitude. If, for example, one uses the
commercially available transmission grating T-1702-1030 from Lightsmyth, with a
groove density of 1702.13 mm$^{-1}$ and an incidence angle of $61.2^{\circ}$,
the required grating separation $G$ would be 1.8 meters in a double-pass
grating pair compressor in the Treacy configuration. This is an impractically
long optical path, which makes the compressor, and hence the laser system,
heavy, bulky, sensitive to vibrations, and overall unusable.
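The 1.8 m figure follows directly from Eqs. 1 and 2. The short sketch below reproduces it, assuming (as in the text) that the double pass contributes a factor of two, so each pass must supply half of the 180 $ps^{2}$ CFBG dispersion:

```python
import math

c = 299_792_458.0            # speed of light, m/s
lam = 1030e-9                # central wavelength, m
d = 1e-3 / 1702.13           # grating period, m (1702.13 grooves per mm)
alpha = math.radians(61.2)   # incidence angle

# Eq. 2: cosine of the reflection angle for the central wavelength
cos_beta = math.sqrt(1.0 - (math.sin(alpha) - lam / d) ** 2)

# Eq. 1 solved for G; a single pass supplies half of the 180 ps^2 dispersion
gdd_single_pass = 180e-24 / 2.0   # s^2
G = gdd_single_pass * (2 * math.pi * c**2 * d**2 * cos_beta**3) / lam**3
```

With these parameters the script gives $G\approx1.78$ m, matching the quoted 1.8 meters.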
## III Description of the Compact Grating Compressor
The path differences for different spectral components can be significantly
increased by modifying the traditional Treacy compressor design as shown in
Fig. 1(b). The spectral chirp of the beam reflected by the 1st grating (A) can
be further enhanced by a negative cylindrical lens (L1) placed between the 1st
and 2nd gratings. L1 is placed at a position where the beam is already
spatially chirped; thus, it selectively addresses different spectral
components of the beam, enhancing the propagation path for the 'red' spectral
components while decreasing it for the 'blue' ones. Further, the effect of the
lens L1 on the beam divergence is compensated by a positive lens (L2) that
recovers the beam divergence to its value prior to the lens L1. The lenses L1
and L2 are positioned so that the beam peak intensity, and hence the central
wavelength, passes through the center of both of them.
Analytical estimations have been performed assuming a small angular divergence
of the beam reflected from the 1st grating, i.e. in the paraxial approximation.
Fig. 2 provides a more detailed scheme of the proposed grating compressor.
Transmission gratings are used for the practical example because of their high
diffraction efficiencies. The propagation of two spectral components is
considered; they are denoted as the trial and the reference beam. The reference
beam propagates through the center of the lenses L1 and L2; hence its
propagation path is not altered. For the practical reason of avoiding optical
aberrations, the reference beam should correspond to the spectral component at
the central wavelength of the compressed beam. The trial spectral component
propagates off the center of L1 and L2; hence its propagation direction
changes, and so does its optical path. In Fig. 2, $x_{1}$, $x_{2}$ and $x_{3}$
are the distances between the 1st diffraction grating and the lens L1, between
the lenses L1 and L2, and between the lens L2 and the 2nd diffraction grating,
correspondingly ($x_{1}=AH_{1}$, $x_{2}=H_{1}F_{1}$ and $x_{3}=F_{1}B$).
$G_{1}$, $G_{2}$ and $G_{3}$ are the projections of the distances $x_{1}$,
$x_{2}$ and $x_{3}$ on the axis perpendicular to the 1st and 2nd gratings. The
three projections combined equal the total slant distance between the
gratings, $G_{1}+G_{2}+G_{3}=G$, where $G=DB_{1}$.
Figure 2: Propagation of the reference and trial beams. The reference spectral
component reflects from the 1st grating at the angle $\beta_{0}$ and
propagates through the points $AH_{1}F_{1}B_{1}C_{1}$ through the center of
both lenses. The trial spectral component reflects from the 1st grating at the
angle $\beta$ and propagates through the points $AH_{2}F_{2}B_{2}C_{2}$.
The original Treacy paper Treacy69 describes a straightforward and intuitive
path for calculating the GDD, which has been directly applied to the enhanced
compressor. As shown there, the GDD can be written as in Eq. 3.
$\varphi^{\prime\prime}=\frac{d^{2}\varphi}{d\omega^{2}}=\frac{1}{c}\frac{dp}{d\beta}\frac{d\beta}{d\omega},$
(3)
where $p$ is the optical path, and $\omega$ is the frequency. The factor
$d\beta/d\omega$ reads as Eq. 4.
$\frac{d\beta}{d\omega}=\frac{2\pi c}{\omega^{2}d\cos\beta}.$ (4)
To find $dp/d\beta$, the optical path $AB_{2}C_{2}$ was split into three parts:
before L1 ($G_{1}$), between L1 and L2 ($G_{2}$), and after L2 ($G_{3}$).
Assuming that L2 recovers the initial propagation angle before L1, i.e.
$\delta=\beta$, the problem is identical to the standard Treacy compressor for
the first and third parts. Their path $p$ can be written as
$p=(G_{1}+G_{3})\cdot[1+\cos(\alpha+\beta)]/\cos\beta$; one straightforwardly
gets $\frac{dp_{1}}{d\beta}$ as shown in Eq. 5.
$\frac{dp_{1}}{d\beta}=-\frac{(G_{1}+G_{3})\lambda}{d\cos^{2}\beta}.$ (5)
The 2nd part of the optical path (i.e. after L1, before L2) is affected by
the negative lens L1. The angle between the grating and the propagation
direction of a spectral component changes; the new angle is denoted $\gamma$.
The variation of the optical path as a function of the angles $\gamma$ and
$\beta$, and hence of the wavelength $\lambda$, leads to a modification of the
GDD of the compressor. Because the optical path after the lens L1 doesn't
depend directly on the angle $\beta$, but only indirectly through the angle
$\gamma$, which is a function of $\beta$, one can write Eq. 6.
$\frac{dp_{2}}{d\beta}=\frac{dp_{2}}{d\gamma}\frac{d\gamma}{d\beta},$ (6)
here $p_{2}$ is the optical path between the lenses L1 and L2 ($H_{2}F_{2}$).
The calculation of the derivative $\frac{dp_{2}}{d\gamma}$ is analogous to the
calculation of $\frac{dp_{1}}{d\beta}$.
The offset of the trial spectral component on the lens L1 is
$H_{1}H_{2}=x_{1}\tan(\beta-\beta_{0})$, where $\beta$ and $\beta_{0}$ are the
reflection angles from the 1st grating for the trial and reference beams,
correspondingly. Using ray transfer matrix analysis (the ABCD matrix
formalism) ABCD , the expression for the angle $\gamma$ in Eq. 7 has been
obtained, where $\gamma$ is the angle between the spectral component and the
gratings.
$\gamma=\beta+\frac{G_{1}\tan(\beta-\beta_{0})}{f_{1}\cos\beta_{0}},$ (7)
where $f_{1}$ is the focal distance of the lens L1. Note that the angle
$\beta$ is a function of the spectral component's wavelength. Eq. 7 is an
approximation assuming small angles between the spectral components and the
lens axis.
Assembling Eqs. (5)-(7) and evaluating the derivative
$\frac{d\gamma}{d\beta}$, the expression for the derivative
$\frac{dp_{2}}{d\beta}$ is derived.
$\frac{dp_{2}}{d\beta}=-\frac{G_{2}(\sin\alpha-\sin\gamma)}{\cos^{2}\gamma}\left(1+\frac{G_{1}\sec^{2}(\beta-\beta_{0})}{f_{1}\cos\beta_{0}}\right),$
(8)
The overall expression for the GDD of the modified compressor is written in Eq. 9.
$\varphi^{\prime\prime}=-\frac{\lambda^{2}}{2\pi
c^{2}d\cos\beta}\left[\frac{(G_{1}+G_{3})\lambda}{d\cos^{2}\beta}+\frac{G_{2}(\sin\alpha-\sin\gamma)}{\cos^{2}\gamma}\left(1+\frac{G_{1}\sec^{2}(\beta-\beta_{0})}{f_{1}\cos\beta_{0}}\right)\right],$
(9)
here the angle $\gamma$ is a function of the angles $\beta$ and $\beta_{0}$ as
shown in Eq. 7, while the angles $\beta$ and $\beta_{0}$ are functions of the
incidence angle $\alpha$ (Eq. 2). Eq. 9 provides a theoretical description of
the GDD based on input parameters such as the incidence angle, the wavelength,
the grating period, the geometrical distances, and the focal distance of the
lens L1. Note that Eq. 9 presents the single-pass GDD; in the case of a 2nd
pass provided by retro-reflection, the result should be multiplied by 2. The
expression above is rather complicated, but one can get some intuitive
understanding by assuming $\beta\approx\beta_{0}$. In this case, the optical
path isn't altered significantly by the lens L1; however, this doesn't mean
that its derivative vs. $\omega$, and hence the GDD, isn't altered
significantly. If $\beta\approx\beta_{0}$, the second term in Eq. 7 is small
compared to the first one; in this case, $\gamma\approx\beta$. Based on these
assumptions, Eq. 9 can be simplified to Eq. 10.
$\varphi^{\prime\prime}=-\frac{\lambda^{3}}{2\pi
c^{2}d^{2}\cos^{3}\beta}\left[G_{1}+G_{3}+G_{2}\left(1+\frac{G_{1}}{f_{1}\cos\beta}\right)\right].$
(10)
Based on Eq. 10, one can draw some preliminary conclusions and estimate the
parameters giving the highest GDD. $G_{3}$ should be kept as small as
physically possible, since it doesn't contribute to the enhancement of the
GDD, while placing L1 in the middle between the 1st grating and L2
($G_{1}=G_{2}$) provides the maximum dispersion. This is a compromise between
the necessity to have the beam spatially chirped in the direction
perpendicular to the beam propagation and having enough propagation length to
benefit from this enhanced chirp. The focal length $f_{1}$ should be as short
as possible, limited mostly by manufacturing considerations and geometrical
constraints. The dependence of the GDD on the other parameters, such as $d$,
$\lambda$, and $\beta$, is similar to the standard Treacy compressor.
I estimated the performance of the enhanced compressor based on Eq. 10. The
considered parameters are identical to the parameters in Section 2:
$\lambda$=1030 nm, a wavelength spread of $\Delta\lambda$ =3.1 nm (FWHM), a
grating period $1/d=1702$ per mm, and $\alpha=61.2^{\circ}$. The angular
spread of the reflected beam is $0.6^{\circ}$. I assume that L1 is positioned
in the middle between the gratings and that the distance $G_{3}$ is small
(i.e. $G_{3}\ll G_{1},G_{2}$). Assuming the slant distance between the
gratings $G$ equals $1$ m and the focal distance $f_{1}$ of L1 equals $-0.1$
m, one calculates that the pair of lenses L1 and L2 leads to a significant
increase in the GDD of the compressor, by a factor of 6.2. Such an increase in
the GDD allows the achievement of higher peak powers by alleviating the risk
of optical damage.
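The factor of 6.2 can be reproduced from the bracket of Eq. 10 divided by $G$. In the sketch below, the focal length of the diverging lens L1 enters through its magnitude; this sign convention is an assumption, chosen so that the bracket in Eq. 10 grows, consistent with the quoted enhancement:

```python
import math

# Grating geometry from the text
alpha = math.radians(61.2)
lam, d = 1030e-9, 1e-3 / 1702.13
cos_beta = math.sqrt(1.0 - (math.sin(alpha) - lam / d) ** 2)   # Eq. 2

G = 1.0        # total slant distance between gratings, m
G1 = G2 = 0.5  # L1 halfway between the 1st grating and L2
G3 = 0.0       # G3 << G1, G2, as assumed for the estimate
f1 = 0.1       # |f1| of the diverging lens L1, m

# Ratio of the Eq. 10 bracket to G: enhancement over a standard Treacy compressor
enhancement = (G1 + G3 + G2 * (1.0 + G1 / (f1 * cos_beta))) / G
```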
## IV Numerical simulations
To go beyond the assumptions of the previous section, a numerical model was
developed in MATLAB. The model calculates the optical path as a function of
the wavelength, and then calculates the change of the optical path for a small
variation of the angle $\beta$, essentially evaluating the derivative
$dp/d\beta$ numerically. Such calculations directly provide the dependence of
the GDD on the wavelength. These calculations are performed for 500
wavelengths in the range between 1025 and 1035 nm, thus yielding a GDD versus
wavelength curve that can be compared with the corresponding curve for a
Treacy compressor with similar parameters.
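The finite-difference scheme described here can be illustrated on the plain Treacy geometry, where the analytic derivative $dp/d\beta=-G\lambda/(d\cos^{2}\beta)$ is known. The sketch below (an illustrative reimplementation, not the author's MATLAB model) checks a central difference of the path $p(\beta)=G[1+\cos(\alpha+\beta)]/\cos\beta$ against it:

```python
import math

alpha = math.radians(61.2)
lam, d, G = 1030e-9, 1e-3 / 1702.13, 1.0

def path(beta):
    """Single-pass optical path of a Treacy grating pair."""
    return G * (1.0 + math.cos(alpha + beta)) / math.cos(beta)

beta = math.asin(math.sin(alpha) - lam / d)   # grating equation, Eq. 2
h = 1e-6                                      # small angular step
dp_numeric = (path(beta + h) - path(beta - h)) / (2 * h)  # central difference
dp_analytic = -G * lam / (d * math.cos(beta) ** 2)
```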
The key task addressed by the numerical model is the calculation of the angles
$\gamma$ and $\delta$ beyond the ABCD matrix expression used for Eq. 7. In the
general case, the angles $\gamma$ and $\delta$ depend on the specific shape of
the lens. I assume that a pair of fused-silica plano-concave and plano-convex
lenses is employed for L1 and L2. The lenses L1 and L2 are positioned so that
the 1030 nm spectral component passes through the center of both lenses at the
proper incidence angle, thus experiencing no angle change. After passing
through the plano-concave lens, a beam propagation angle $\varphi_{in}$
changes to an angle $\varphi_{out}$ that can be written as shown in Eq. 11.
$\varphi_{out}=\arcsin\left\\{n\times\sin\left\langle\arcsin\left[\frac{\varphi_{in}+\arcsin(y/R)}{n}\right]-\arcsin(y/R)\right\rangle\right\\}$
(11)
here $R$ is the curvature radius of the lens, $n$ is the refractive index of
the lens material, and $y$ is the displacement from the lens center at the
entrance face. Further, the numerical model accounts for the finite (non-zero)
thickness of the lenses L1 and L2.
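As a consistency check, in the paraxial limit Eq. 11 should reduce to the familiar single-surface deflection $\varphi_{out}\approx\varphi_{in}-(n-1)y/R$, i.e. a thin-lens deflection $-y/f$ with $f=R/(n-1)$. The sketch below verifies this limit; the values of $n$, $R$, $y$, and $\varphi_{in}$ are illustrative:

```python
import math

def refracted_angle(phi_in, y, R, n):
    """Eq. 11: exit angle after the curved surface of radius R and index n."""
    inner = math.asin((phi_in + math.asin(y / R)) / n) - math.asin(y / R)
    return math.asin(n * math.sin(inner))

n, R = 1.45, 0.05        # fused-silica-like index, 50 mm curvature radius
y, phi_in = 1e-4, 1e-3   # small displacement and propagation angle
phi_out = refracted_angle(phi_in, y, R, n)
paraxial = phi_in - (n - 1.0) * y / R   # thin-lens, small-angle prediction
```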
Figure 3: The angles for the spectral components are presented as a function
of the wavelength. The black line shows the angle $\beta$ after the reflection
from the 1st grating. The magenta and blue curves show the angle $\gamma$
after the lens L1, calculated based on Eq. 7 and on numerical simulations,
correspondingly. The red curve shows the angle $\delta$ after the lens L2
based on the numerical model.
First, I calculate the angles $\beta$, $\gamma$, and $\delta$ as functions of
the wavelength, shown in Fig. 3. The angles $\gamma$ are calculated based on
the theory (Eq. 7) and on the numerical model. All curves are calculated for
the same parameters as before: $\lambda$=1030 nm, $\Delta\lambda$ =3.1 nm,
$1/d$=1702 per mm, $\alpha=61.2^{\circ}$, G=1 m, $G_{1}$=0.5 m, $G_{2}$=0.4 m
and $G_{3}$=0.1 m. The L1 and L2 lenses are centered on the 1030 nm spectral
component and have focal distances of $f_{1}=$-0.1 m and $f_{2}=$0.5 m,
correspondingly. As one can see, Eq. 7 provides a good approximation in the
wavelength range close to the central $\lambda_{0}=1030$ nm, because there the
spectral components of the beam pass near the centers of the lenses L1 and
L2. The output angle $\delta$ should theoretically coincide with the original
angle $\beta$, assuming that the focal distance $f_{2}$ is chosen correctly;
however, one can see a noticeable deviation at the wavelengths 1025 nm and
1035 nm. Such deviations occur because Eq. 7 is not valid for a significant
distance between a spectral component and the lens center. The beam diameter
increases substantially before reaching L2 compared to L1, so the paraxial
approximation is rougher for the angles $\delta$ than for the angles $\gamma$.
Figure 4: GDD is presented as a function of the wavelength for a number of
scenarios. The red line is GDD for the traditional Treacy compressor for the
same grating parameters and gratings separation ($G$, $\alpha$, and $1/d$ ).
All other plotted curves are calculated for the enhanced compressor. The
black dashed curve is the calculation based on the simplified Eq. 10, and the
magenta curve is calculated based on Eq. 9. Finally, the blue curve is the
result of the numerical simulations described above.
I calculate the GDD as a function of the wavelength using the numerical model
and compare it with the traditional Treacy compressor as well as with the
analytical solutions, the full Eq. 9 and the simplified Eq. 10. The parameters
of the standard Treacy and enhanced compressors are identical, aside from the
lenses L1 and L2 used in the enhanced compressor. The results are presented in
Fig. 4. Figure 4 is the main output of this paper. First, one can see the
significant enhancement of the GDD compared to a standard Treacy compressor
(red curve). Overall, the approximation based on Eq. 9 (magenta curve) does a
decent job, while the further simplified Eq. 10 (black dashed curve) predicts
the expected GDD only at the central wavelength and in its vicinity. The
deviation between the numerical model and the theoretical expression of Eq. 9
is within 5 $\%$ in the chosen wavelength range. Near the central wavelength
of 1030 nm, the deviation is negligible. For these conservative, realistic
parameters, I calculate that the lenses L1 and L2 increase the GDD of the
compressor by more than a factor of 5 at the wavelength of 1030 nm. The
discrepancy between the enhancement factor of 5 in Figure 4 and the factor of
6.2 roughly estimated at the end of the previous section is caused mostly by
setting $G_{3}$ to 0 for the rough estimate.
The increased curvature of the GDD vs. wavelength curve indicates larger
values of the higher-order dispersion that have to be compensated by the CFBG.
Figure 5: The TOD as a function of the wavelength is presented for the
traditional Treacy compressor and for the enhanced compressor (the same
parameters $G$, $\alpha$ and $1/d$ for both compressors).
The third-order dispersion (TOD) of the enhanced compressor becomes important
for broad-band pulses. Theoretically, it can be pre-compensated by the CFBG;
however, in order to do so, the TOD value of the later stages needs to be
known. Although it is possible to calculate the TOD analytically by taking the
derivative of Eq. 9 over $\lambda$, the usefulness of this step is
questionable because of the length and complexity of the result. Instead, the
TOD can be calculated straightforwardly using the numerical model. In Fig. 5,
I show the TOD as a function of the wavelength for a standard Treacy
compressor and for an enhanced compressor with the same parameters as above. I
observe a TOD increase by a factor of 40 at the wavelength of 1030 nm. Such a
large TOD has to be pre-compensated by the design of the CFBG or by employing
a tunable FBG Min18 .
## V Conclusion
I have proposed an enhanced optical grating compressor that is suitable for
compressing strongly stretched laser pulses. This novel compressor employs
cylindrical lenses to enhance the spectral chirp of the beam inside the
compressor, which allows the achievement of a significantly larger GDD
compared to currently used Treacy compressors. This drastically reduces the
required linear distances, which makes the compressor compact and more
suitable for out-of-the-lab applications. The enhanced compressor makes it
possible to greatly reduce the size of high-peak-power lasers, thus making
them more applicable outside of scientific laboratories, in particular on
moving vehicles.
## VI Acknowledgements
The author thanks Kevin Wall and Bhabana Pati for their careful reading of the
manuscript and insightful suggestions.
## References
* (1) Donna Strickland and Gerard Mourou “Compression of amplified chirped optical pulses,” Optics Communications 56(3), 219-221 (1985).
* (2) L. Orazi, L. Romoli, M. Schmidt and L. Li, “Ultrafast laser manufacturing: from physics to industrial applications,” CIRP Annals 70(2), 543-566 (2021).
* (3) Jie Zhang, Sha Tao, Brian Wang, and Jay Zhao “Development of industrial scale laser micro-processing solution for mobile devices,” SPIE LASE 10906, 109061A (2019).
* (4) Shuting Lei, Xin Zhao, Xiaoming Yu, Anming Hu, Sinisa Vukelic, Martin B. G. Jun, Hang-Eun Joe, Y. Lawrence Yao, Yung C. Shin, “Ultrafast Laser Applications in Manufacturing Processes: A State of the Art Review,” Journal of Manufacturing Science and Engineering 142(3), 1-43 (2020).
* (5) Antreas Theodosiou, Rui Min, Arnaldo G. Leal-Junior, Andreas Ioannou, Anselmo Frizera, Maria Jose Pontes, Carlos Marques, and Kyriacos Kalli, “Long period grating in a multimode cyclic transparent optical polymer fiber inscribed using a femtosecond laser,” Optics Letters 44(21), 5346-5349 (2019).
* (6) K. Xu, Y. Chen, T. A. Okhai, and L. W. Snyman, “Micro optical sensors based on avalanching silicon light-emitting devices monolithically integrated on chips,” Optical Materials Express 9(10), 3985-3997 (2019).
* (7) Holger Lubatschowski, Alexander Heisterkamp, Fabian Will, Ajoy I. Singh, Jesper Serbin, Andreas Ostendorf, Omid Kermani, Ralf Heermann, Herbert Welling, and Wolfgang Ertmer, “Medical applications for ultrafast laser pulses,” Proceedings of SPIE - The International Society for Optical Engineering 50(50), (2003).
* (8) C. L. Li, C. J. Fisher, R. Burke, and S. Andersson-Engels, “Orthopedics-Related Applications of Ultrafast Laser and Its Recent Advances,” Appl. Sci. 12, 3957 (2022).
* (9) www.militaryaerospace.com/power/article/14206874/laser-weapons-ultrashortpulse-fiber
* (10) www.sbir.gov/node/1207829
* (11) www.sbir.gov/node/1654485
* (12) Franz Tavella, Yutaka Nomura, Laszlo Veisz, Vladimir Pervak, Andrius Marcinkevičius, and Ferenc Krausz, “Dispersion management for a sub-10-fs, 10 TW optical parametric chirped-pulse amplifier,” Opt. Lett. 32(15), 2227 (2007).
* (13) C. Rouyer, É. Mazataud, I. Allais, A. Pierre, S. Seznec, C. Sauteret, G. Mourou, and A. Migus, “Generation of 50-TW femtosecond pulses in a Ti:sapphire/Nd:glass chain,” Opt. Lett. 18, 214 (1993).
* (14) Shinichi Matsuoka, Takehiro Yoshii, Masatoshi Sato, Fumihiko Nakano, Yoshimori Tamaoki, You Wang, Yoshinori Kato, Koichi Iyama, Minoru Nishihata, Hirofumi Kan, and Sadao Nakai, “1TW, 10 Hz and 0.1TW, 1 kHz All-Solid-State Femtosecond Lasers,” The Review of Laser Engineering 34(9), 214 (2006).
* (15) Alexander Kessler, Marco Hornung, Sebastian Keppler, Frank Schorcht, Marco Hellwing, Hartmut Liebetrau, Jörg Körner, Alexander Sävert, Mathias Siebold, Matthias Schnepp, Joachim Hein, and Malte C. Kaluza, “16.6 J chirped femtosecond laser pulses from a diode-pumped Yb:CaF2 amplifier,” Opt. Lett. 39, 1333 (2014).
* (16) www.ekspla.com/product/sylos-2a/
* (17) www.lightcon.com/product/high-energy-opcpa/
* (18) Sven Breitkopf, Tino Eidam, Arno Klenke, Lorenz von Grafenstein, Henning Carstens, Simon Holzberger, Ernst Fill, Thomas Schreiber, Ferenc Krausz, Andreas Tünnermann, Ioachim Pupeza, and Jens Limpert, “A concept for multiterawatt fibre lasers based on coherent pulse stacking in passive cavities,” Light: Science and Applications 3(e211), 1 (2014).
* (19) V. Chvykov, “New Generation of Ultra-High Peak and Average Power Laser Systems,” IntechOpen 70720 (2017).
* (20) apps.dtic.mil/sti/pdfs/AD1125446.pdf
* (21) M. Lai, S. T. Lai, and C. Swinger, “Single-grating laser pulse stretcher and compressor,” Applied Optics 33(30), 6985 (1993).
* (22) C. Yang, E. Towe, “Ultra-compact grating-based monolithic optical pulse compressor for laser amplifier systems,” Journal of the Optical Society of America B 33(10), 2135-2143 (2016).
* (23) V. Chauhan, P. Bowlan, J. Cohen, and R. Trebino, “Single-diffraction-grating and grism pulse compressors,” Journal of the Optical Society of America B 27(4), 619-624 (2010).
* (24) E. B. Treacy, “Optical pulse compression with diffraction gratings,” IEEE J. Quantum Electron. 5(9), 454 (1969).
* (25) www.rp-photonics.com/abcdmatrix.html
* (26) Rui Min, Sanzhar Korganbayev, Carlo Molardi, Christian Broadway, Xuehao Hu, Christophe Caucheteur, Ole Bang, Paulo Antunes, Daniele Tosi, Carlos Marques, and Beatriz Ortega, “Largely tunable dispersion chirped polymer FBG,” Opt. Lett. 43(20), 5106 (2018).
# Approximate Gibbs Sampler for Efficient Inference of Hierarchical Bayesian
Models for Grouped Count Data
Jin-Zhu Yu$^{a}$ and Hiba Baroud$^{b}$
CONTACT: Jin-Zhu Yu. Email:<EMAIL_ADDRESS>
$^{a}$Department of Civil Engineering, University of Texas at Arlington, Nedderman Hall 417, 416 Yates Street, Arlington, TX 76010, USA
$^{b}$Department of Civil and Environmental Engineering, Vanderbilt University, Nashville, TN, USA
###### Abstract
Hierarchical Bayesian Poisson regression models (HBPRMs) provide a flexible
approach to modeling the relationship between predictors and count response
variables. The applications of HBPRMs to large-scale datasets require
efficient inference algorithms due to the high computational cost of inferring
many model parameters based on random sampling. Although Markov Chain Monte
Carlo (MCMC) algorithms have been widely used for Bayesian inference, sampling
using this class of algorithms is time-consuming for applications with large-
scale data and time-sensitive decision-making, partially due to the non-
conjugacy of many models. To overcome this limitation, this research develops
an approximate Gibbs sampler (AGS) to efficiently learn the HBPRMs while
maintaining inference accuracy. In the proposed sampler, the data
likelihood is approximated with a Gaussian distribution such that the
conditional posterior of the coefficients has a closed-form solution.
Numerical experiments using real and synthetic datasets with small and large
counts demonstrate the superior performance of AGS in comparison to the state-
of-the-art sampling algorithm, especially for large datasets.
###### keywords:
Conditional conjugacy; Approximate MCMC; Gaussian approximation; Intractable
likelihood
Total Words: 6040
## 1 Introduction
Count data are frequently encountered in a wide range of applications, such as
finance, epidemiology, sociology, and operations, among others [1]. For
example, in epidemiological studies, the occurrences of a disease are often
recorded as counts on a regular basis [2]. Death counts, classified by various
demographic variables, are regularly recorded by government agencies [3]. In
customer service centers, the service level is often measured based on the
number of customers served during a given period of time. More recently, data-
driven disaster management approaches have used count data to analyze the
impact of disasters (e.g., number of power outages [4] and pipe breaks [5])
and the recovery process (e.g., recovery rate [6]). Understanding the features
that can influence the occurrence of such events is critical to inform future
decisions and policies. Therefore, statistical models have been developed to
accommodate the complexity of count data, among which are Hierarchical
Bayesian Poisson regression models (HBPRMs) that have been widely employed to
analyze count data under uncertainty [7, 8, 9, 10]. The wide applicability of
this class of models is due to the fact that the hierarchical Bayesian
approach offers the flexibility to capture the complex hierarchical structure
of count data and predictors by estimating different parameters for different
data groups, thereby improving the estimation accuracy of parameters for each
group. The data can be grouped based on geographical areas, types of
experiments in clinical studies, or different hazard types and intensities in
disaster studies. The hierarchical structure assumes that the parameters of
the prior distribution are uncertain and characterized by their own
probability distribution with corresponding parameters referred to as
hyperparameters. Therefore, this class of models can account for the
individual- and group-level variations in estimating the parameters of
interest and the uncertainty around the estimation of hyperparameters [11].
The flexibility of hierarchical models in capturing the complex interactions
in the data comes with a high computational expense since all the model
parameters need to be estimated jointly [12]. Furthermore, large-scale data
may be structured in many levels or groups [13], resulting in a large number
of parameters to learn for a hierarchical model, further increasing the
computational load. Given that many of the applications involving count data
have recently benefited from technological advances in data collection and
storage, there is a critical need to ensure the applicability of HBPRMs. As a
result, efficient inference algorithms are needed to support the use of
statistical learning models such as HBPRMs in risk-based decision-making,
especially for time-sensitive applications such as resource allocation during
emergency response and disaster recovery.
The most popular algorithms for parameter inference in hierarchical Bayesian
models (and generally for Bayesian inference) are Markov Chain Monte Carlo
(MCMC) algorithms. MCMC algorithms obtain samples from a target distribution
by constructing a Markov chain (irreducible and aperiodic) in the parameter
space that has precisely the target distribution as its stationary
distribution [14]. This class of algorithms provides a powerful tool to obtain
posterior samples and then estimate the parameters of interest when the exact
full posterior distributions are only known up to a constant and direct
sampling is not possible [14]. However, a major drawback of standard MCMC
algorithms, such as the Metropolis-Hastings algorithm (MH), is that they
suffer from slow mixing, requiring numerous Monte Carlo samples that grow with
the dimension and complexity of the dataset [15, 16]. In some applications of
Bayesian approaches (e.g., emergency response), decisions relying on outcomes
of the model cannot afford to wait days for running MCMC chains to collect a
sufficiently large number of posterior samples. As such, the application of
standard MCMC algorithms to learn Bayesian models such as HBPRMs or other
hierarchical Bayesian models for large datasets is significantly limited and a
fast approximate MCMC is needed.
The key idea of approximate MCMC is to replace complex distributions that lead
to a computational bottleneck with an approximation that is simpler or faster
to sample from than the original [17, 18]. Several studies have applied
analytical approximation techniques by exploiting conjugacy to accelerate
MCMC-based inference in hierarchical Bayesian models [19, 20, 21, 22]. More
specifically, an approximate Gibbs sampling algorithm is used to enable the
inference of the rate parameter in the hierarchical Poisson regression model
in [19]. The conditional posterior of the rate parameter, which does not have
a closed-form expression due to non-conjugacy between Poisson likelihood and
log-normal prior distribution, is approximated as a mixture of Gaussian and
Gamma distributions using the moment matching method. The exact conditional
moments are obtained by minimizing the Kullback–Leibler divergence between the
original and the approximate conditional posterior distributions. Conjugacy is
also employed to improve inference efficiency in large and more complex
hierarchical models in [21]. It is shown that the approximation using
conjugacy can be utilized even though the original hierarchical model is not
fully conjugate [21]. As an example in their study, the approximate full
conditional distributions are derived when the likelihood function follows a
gamma distribution while the priors for the parameters are assumed to be
multivariate normal and inverse-Wishart distributions. In [22], a Gaussian
approximation to the conditional distribution of the normal random effects in
the hierarchical Bayesian binomial model (HBBM) is derived using Taylor series
expansion, such that Gibbs sampling can be applied to infer the HBBM more
efficiently. A similar approach that approximates the data likelihood with a
Gaussian distribution to allow for faster inference of parameters is used for
parameter inference in a Bayesian Poisson model [20]. With regard to count
data, a fast approximate Bayesian inference method is proposed to infer a
negative binomial model (NB) in [23]. The non-conjugacy of the NB likelihood
is addressed by the Pólya-Gamma data augmentation. This technique is first
developed in [24] and is employed to approximate the likelihood as a Gaussian
distribution. Consequently, the conditional posteriors of all but one
parameter have a closed-form solution, and a Metropolis-within-Gibbs algorithm
is thus developed for the posterior inference.
While approximate MCMC algorithms have been developed for hierarchical and
non-hierarchical Poisson models as well as hierarchical Bayesian binomial and
negative binomial models, the development of approximate MCMC algorithms for
efficient inference of HBPRMs for grouped count data is still lacking. In this
paper, we propose an approximate Gibbs sampler to address this problem. To
deal with the non-conjugacy between the likelihood and the prior, we
approximate the conditional likelihood as a Gaussian distribution, leading to
closed-form conditional posteriors for all model parameters. The contribution
lies in the derivation of a closed-form approximation to the complex
conditional posterior of the parameters and the development of the Approximate
Gibbs sampling (AGS) algorithm. The proposed algorithm allows for an efficient
inference of the general HBPRM using the approximate Markov chain without
compromising the inference accuracy, enabling the use of HBPRMs in
applications with large-scale data and time-sensitive decision-making. To
demonstrate the performance of the proposed AGS algorithm, we conduct multiple
numerical experiments and compare the inference accuracy and computational
load to state-of-the-art sampling algorithms.
The rest of this paper is organized as follows. In Sec. 2, a general
hierarchical Bayesian Poisson model for grouped count data is presented, and
the closed-form solution to the approximate conditional posterior distribution
of each regression coefficient is derived, followed by a description of the
proposed AGS algorithm. Sec. 3 introduces the datasets used in the numerical
experiments along with the comparison of the performance of sampling
algorithms. Conclusions and future work are provided in Sec. 4.
## 2 Methodology
Table 1: Notations
Symbol | Description
---|---
$i$ | Index of a count data point in a group
$j,k$ | Index of a group and index of a covariate
$a,b$ | Shape and scale parameters of the inverse-gamma distribution
$n_{j}$ | Number of data points in group $j$
$w_{jk}$ | Coefficient for covariate $k$ in group $j$
$x_{ijk}$ | Value of covariate $k$ for count data point $i$ in group $j$
$x_{\cdot j}$ | Value of group level covariate for group $j$
$y_{ij}$ | Count data point $i$ in group $j$
$v$ | Log of the gamma random variable $z$
$z$ | Gamma random variable
$E_{s}$ | Sampling efficiency
$\mathcal{D}$ | Dataset including the covariates and the counts
$J,K$ | Number of groups, and number of covariates
$N_{0},N_{1}$ | Number of samples as warm-ups and number of desired samples to keep
$N_{d}$ | Total number of data points in a dataset
$PCT_{0}$ | Percentages of zero counts in a dataset
$PCT_{1,5}$ | Percentages of counts in [1, 5] in a dataset
$RG$ | Range of counts in a dataset
$T_{s}$ | Sampling time per 1000 iterations in seconds
$\alpha,\beta$ | Shape and scale parameters of the gamma distribution
$\gamma$ | Euler-Mascheroni constant
$\ell$ | Index of iterations
$\mu,\sigma^{2}$ | Mean and variance of the Gaussian distribution
$\lambda_{ij}$ | Mean for count data point $i$ in group $j$
$\Theta$ | Set of parameters
$m,\tau^{2}$ | Mean and variance for $\mu$
### 2.1 Hierarchical Bayesian Poisson Regression Model
This section presents the Hierarchical Bayesian Poisson Regression Model
(HBPRM) for count data. Without loss of generality, we consider a general
HBPRM, the hierarchical version of Poisson log-normal model [19, 25, 26, 27]
for grouped count data, in which the coefficient for each covariate varies
across groups (Eq. (1) to Eq. (5)). This model can be applied to count
datasets in which the counts can be divided into multiple groups based on the
covariates. Let $\mathcal{D}=\\{x,y\\}$ be the dataset where $x$ represents
the covariates and $y$ represents the dependent positive counts. This HBPRM
assumes that each count, $y_{ij}$, follows a Poisson distribution. The log of
the mean in the Poisson distribution is a linear function of the covariates.
In the hierarchical Bayesian paradigm, each of the parameters (regression
coefficients) in the linear function follows a prior distribution with
hyperparameter(s) which are in turn specified by a hyperprior distribution.
Note that the hyperprior is shared among the parameters of the same covariate
for all groups, thereby resulting in shrinkage of the parameters towards the
group-mean [11]. When the variance of the hyperprior is decreased to zero, the
hierarchical model is reduced to a non-hierarchical model. The mathematical
formulation of the HBPRM is provided in Eq. (1) to Eq. (5):
$\displaystyle
y_{ij}|\lambda_{ij}\sim\textrm{Pois}(\lambda_{ij}),\;\forall\;i=1,\dots,n_{j},\;j=1,\dots,J,$
(1)
$\displaystyle\ln\lambda_{ij}=\sum\limits_{k=1}^{K}{w_{jk}x_{ijk}},\;\forall\;i=1,\dots,n_{j},\;j=1,\dots,J,\;k=1,\dots,K,$
(2) $\displaystyle
w_{jk}|\mu_{k},\sigma_{k}^{2}\sim\mathcal{N}(\mu_{k},\sigma_{k}^{2}),\;\forall\;j=1,\dots,J,\;k=1,\dots,K,$
(3)
$\displaystyle\mu_{k}|m,\tau^{2}\sim\mathcal{N}(m,\tau^{2}),\;\forall\;k=1,\dots,K,$
(4)
$\displaystyle\sigma_{k}^{2}|a,b\sim\mathcal{IG}\left(\tfrac{a}{2},\tfrac{b}{2}\right),\;\forall\;k=1,\dots,K.$
(5)
In the HBPRM formulation, $y_{ij}$ is the $i$-th count within group $j$ with
an estimated mean of $\lambda_{ij}$, $n_{j}$ is the number of data points in
group $j$, $w_{jk}$ is the regression coefficient of covariate $k$, and
$x_{ijk}$ is the $i$-th value in group $j$ of covariate $k$. The prior for the
coefficient of each covariate, $\mu_{k}$, is assumed to be a Gaussian
distribution ($\mathcal{N}$) while the prior for the variance,
$\sigma^{2}_{k}$, is assumed to be an inverse-gamma distribution
($\mathcal{IG}$). The Gaussian and inverse-gamma distributions are specified
such that we can exploit conditional conjugacy for analytical and
computational convenience. Alternative distributions (such as half-Cauchy and
uniform distributions) for the prior of group-level variance $\sigma_{k}^{2}$
do not have this benefit, which will significantly increase the computational
load. According to Ref. [28], when the group-level variance is close to zero,
$\epsilon$ must be set to a reasonable value. For our model, the estimated
group variance is always much larger than zero because the mean of the
estimated variance is approximately $\frac{\epsilon+J}{2}$ (Eq. (11)), where
$J$ is the number of groups. As such, $\epsilon$ can be set to a sufficiently
large value, such as 2.
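To make the generative structure of Eqs. (1) to (5) concrete, the model can be simulated forward; the sketch below uses illustrative sizes and hyperparameter values (our choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative sizes and hyperparameters (not from the paper)
J, K, n_j = 5, 3, 50
m, tau2, a, b = 0.0, 0.25, 4.0, 2.0

mu = rng.normal(m, np.sqrt(tau2), size=K)         # Eq. (4): mu_k ~ N(m, tau^2)
sigma2 = b / rng.gamma(a, size=K)                 # Eq. (5): inverse-gamma draw as scale / Gamma(shape)
w = rng.normal(mu, np.sqrt(sigma2), size=(J, K))  # Eq. (3): w_jk ~ N(mu_k, sigma_k^2)
x = rng.uniform(0.1, 1.0, size=(J, n_j, K))       # covariates
lam = np.exp(np.einsum('jik,jk->ji', x, w))       # Eq. (2): ln(lambda_ij) linear in x
y = rng.poisson(lam)                              # Eq. (1): y_ij ~ Pois(lambda_ij)
print(y.shape, float(y.mean()))
```

Because the hyperparameters $\mu_{k}$ and $\sigma_{k}^{2}$ are shared across groups, the group coefficients drawn this way exhibit exactly the shrinkage toward the group mean described above.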
### 2.2 Inference
Given an observed count dataset structured using multiple groups,
$\mathcal{D}$, fitting an HBPRM entails the estimation of the joint posterior
density distribution of all the parameters, which is only known up to a
constant. If we denote the parameters by
${\Theta}=\left\\{w_{11},\dots,w_{jk},\dots,w_{JK};\mu_{1},\dots,\mu_{K};\sigma_{1}^{2},\dots,\sigma_{K}^{2}\right\\}$,
then the joint posterior factorizes as
$\displaystyle p\left({\Theta}|y,x\right)\propto$
$\displaystyle\prod\limits_{j=1}^{J}\prod\limits_{i=1}^{{n_{j}}}{\rm{Pois}}\left({{y_{ij}}\mid\exp\left(\sum\limits_{k=1}^{K}{{w_{jk}}{x_{ijk}}}\right)}\right)\times$
$\displaystyle\prod\limits_{k=1}^{K}{\mathcal{N}\left({{w_{jk}}|{\mu_{{k}}},\sigma_{{k}}^{2}}\right)}\mathcal{N}\left({{\mu_{{k}}}|m,{\tau^{2}}}\right)\mathcal{IG}\left({\sigma_{{k}}^{2}|a,b}\right).$
(6)
Sampling from the joint posterior becomes a challenging task as it does not
admit a closed-form expression. While MCMC algorithms (e.g., the MH) can be
used, the need to judiciously tune the step size to achieve the desired
acceptance rate often deters users from using it [29, 30]. In comparison, the
Gibbs sampler is more efficient and does not require any tuning of the
proposal distribution, therefore it has been used for Bayesian inference in a
wide range of applications [31, 32]. Classical Gibbs sampling requires that
one can directly sample from the conditional posterior distribution of each
parameter (or block of parameters), such as conditional conjugate posterior
distributions. The full conditional posteriors for implementing the Gibbs
sampler are
$\displaystyle p\left({{w_{jk}}|-}\right)$
$\displaystyle\propto{\prod\limits_{i=1}^{{n_{j}}}{{\rm{Pois}}\left({{y_{ij}}\mid\exp\left(\sum\limits_{k=1}^{K}{{w_{jk}}{x_{ijk}}}\right)}\right)\mathcal{N}\left({{w_{jk}}|{\mu_{{k}}},\sigma_{{k}}^{2}}\right)}},$
(7) $\displaystyle p\left({{\mu_{{k}}}|-}\right)$
$\displaystyle\propto{\mathcal{N}\left({{w_{1k},\dots,w_{Jk}}|{\mu_{{k}}},\sigma_{{k}}^{2}}\right)\mathcal{N}\left({{\mu_{{k}}}|m,{\tau^{2}}}\right)},$
(8) $\displaystyle p\left({\sigma_{{k}}^{2}|-}\right)$
$\displaystyle\propto{\mathcal{N}\left({{w_{1k},\dots,w_{Jk}}|{\mu_{{k}}},\sigma_{{k}}^{2}}\right)\mathcal{IG}\left({\sigma_{{k}}^{2}|\frac{a}{2},\frac{b}{2}}\right)},$
(9)
where $p\left(\cdot|-\right)$ represents the conditional posterior of a
parameter of interest given the remaining parameters and the data. Due to the
Gaussian-Gaussian and Gaussian-inverse-gamma conjugacy, Eq. (8) and Eq. (9)
can be expressed in an analytical form [33]
$\displaystyle p\left({{\mu_{{k}}}|-}\right)$
$\displaystyle\propto\mathcal{N}{\left({{\mu_{{k}}}\left|{\frac{1}{{\frac{1}{{{\tau^{2}}}}+\frac{J}{{\sigma_{{k}}^{2}}}}}\left({\frac{m}{{{\tau^{2}}}}+\frac{{\sum\limits_{j=1}^{J}{{w_{jk}}}}}{{\sigma_{{k}}^{2}}}}\right)}\right.,\frac{1}{{\frac{1}{{{\tau^{2}}}}+\frac{J}{{\sigma_{{k}}^{2}}}}}}\right)},$
(10) $\displaystyle p\left({\sigma_{{k}}^{2}|-}\right)$
$\displaystyle\propto\mathcal{IG}\left(\sigma_{{k}}^{2}\left|\frac{a+J}{2}\right.,\frac{b+\sum\limits_{j=1}^{J}{\left(w_{jk}-\mu_{k}\right)^{2}}}{2}\right).$
(11)
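Both conditionals are standard conjugate updates and translate directly into code; a minimal NumPy sketch (function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mu_k(w_k, sigma2_k, m, tau2):
    """Draw mu_k from its Gaussian full conditional, Eq. (10): a
    precision-weighted combination of the hyperprior N(m, tau^2)
    and the J group coefficients w_k = (w_1k, ..., w_Jk)."""
    J = len(w_k)
    prec = 1.0 / tau2 + J / sigma2_k
    mean = (m / tau2 + np.sum(w_k) / sigma2_k) / prec
    return rng.normal(mean, np.sqrt(1.0 / prec))

def sample_sigma2_k(w_k, mu_k, a, b):
    """Draw sigma2_k from its inverse-gamma full conditional, Eq. (11);
    an IG(shape, scale) draw is the scale divided by a Gamma(shape, 1) draw."""
    J = len(w_k)
    shape = (a + J) / 2.0
    scale = (b + np.sum((w_k - mu_k) ** 2)) / 2.0
    return scale / rng.gamma(shape)
```

In Algorithm 1 these two draws are executed once per covariate $k$ in every Gibbs iteration, before the coefficients $w_{jk}$ are updated.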
However, Eq. (7) does not admit an analytical solution because the Poisson
likelihood is not conjugate to the Gaussian prior. Consequently, it is
challenging to sample directly from the conditional posterior to enable the
Gibbs sampler. In this case, other algorithms can be used to obtain
$p\left({{w_{jk}}|-}\right)$, such as adaptive rejection sampling [34] and the
Metropolis-within-Gibbs algorithm [35]. However, these algorithms introduce an
additional computational cost due to the need to evaluate the complex
conditional distribution. Therefore, we propose to use a Gaussian
approximation to the Poisson likelihood given in Eq. (7) to obtain a closed-
form solution to the conditional posterior of coefficients. With the closed-
form solution, the complex inference of regression coefficients can be
simplified to save computational resources. Reducing the computational cost of
sampling from $p\left({{w_{jk}}|-}\right)$ is critical for datasets with a
large number of groups because the number of regression coefficients, $J\times
K$, can be significantly larger than the number of prior parameters, $2K$.
### 2.3 Gaussian Approximation to Log-gamma Distribution
This section introduces the Gaussian approximation to the log-gamma
distribution that is used to obtain the closed-form approximate conditional
posterior distribution in Section 2.4. Consider a gamma random variable $z$
with probability density function (pdf) given by
$p\left(z|\alpha,\beta\right)=\frac{z^{\alpha-1}e^{-\frac{z}{\beta}}}{\Gamma\left(\alpha\right)\beta^{\alpha}},\;\alpha>0,\;\beta>0,$
(12)
where $\Gamma\left(\cdot\right)$ is the gamma function, and $\alpha$ and
$\beta$ are the shape parameter and the scale parameter, respectively. The
random variable, $\ln z\in\mathbb{R}$, follows a log-gamma distribution. The
mean $\mu_{z}$ and variance $\sigma^{2}_{z}$ of log-gamma distribution are
calculated using Eq. (13) and Eq. (14), respectively [36].
$\displaystyle\mu_{z}$ $\displaystyle={\psi_{0}}\left(\alpha\right)+\ln\beta$
(13a)
$\displaystyle=-\gamma+\sum\limits_{n=1}^{\infty}{\left({\frac{1}{n}-\frac{1}{{n+\alpha-1}}}\right)}+\ln\beta$
(13b)
$\displaystyle=-\gamma+\sum\limits_{n=1}^{\alpha-1}{\frac{1}{n}}+\ln\beta$
(13c)
$\displaystyle\sigma^{2}_{z}$ $\displaystyle={\psi_{1}}\left(\alpha\right)$
(14a)
$\displaystyle=\sum\limits_{n=0}^{\infty}{\frac{1}{{{{\left({\alpha+n}\right)}^{2}}}}}$
(14b)
$\displaystyle=\frac{{{\pi^{2}}}}{6}-\sum\limits_{n=1}^{\alpha-1}{\frac{1}{{{n^{2}}}}}$
(14c)
In Eq. (13a) and Eq. (14a), $\psi_{0}(\cdot)$ and $\psi_{1}(\cdot)$ are the
zeroth- and first-order polygamma functions [37]. In Eq. (13b) and Eq.
(13c), $\gamma$ is the Euler-Mascheroni constant [38]; the finite sums in
Eq. (13c) and Eq. (14c) hold when $\alpha$ is a positive integer.
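For the positive-integer arguments used later ($\alpha=y$), these identities are easy to verify numerically; a small pure-Python sketch (function names are ours):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def psi0_int(alpha):
    """Digamma at a positive integer via the finite sum of Eq. (13c)."""
    return -EULER_GAMMA + sum(1.0 / n for n in range(1, alpha))

def psi1_int(alpha):
    """Trigamma at a positive integer via the finite sum of Eq. (14c)."""
    return math.pi**2 / 6 - sum(1.0 / n**2 for n in range(1, alpha))

def psi1_series(alpha, terms=200000):
    """Trigamma via the truncated infinite series of Eq. (14b)."""
    return sum(1.0 / (alpha + n) ** 2 for n in range(terms))

print(psi0_int(1), psi1_int(5), psi1_series(5))
```

The finite sums are what make the approximate sampler cheap: evaluating $\psi_{0}(y)$ and $\psi_{1}(y)$ for integer counts requires no special-function library.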
For large values of $\alpha$, the pdf of log-gamma distribution can be
approximated by that of a Gaussian distribution [20, 39], shown in Eq. (15).
$\textrm{Log-gamma}(\ln z|\alpha,\beta)\approx\mathcal{N}\left(\ln
z|{\psi_{0}}\left(\alpha\right)+\ln\beta,\,{\psi_{1}}\left(\alpha\right)\right)$
(15)
To apply the approximation in the conditional posterior (Eq. (20) to Eq.
(21)), we need to include $y$ in the pdf of log-gamma distribution. Therefore,
we let $\alpha=y$ and $\beta=1$, and Eq. (15) becomes
$\textrm{Log-gamma}(\ln z|y,1)\approx\mathcal{N}\left(\ln
z|{\psi_{0}}\left(y\right),\,{\psi_{1}}\left(y\right)\right).$ (16)
Note that because $\alpha>0$ and $\alpha$ is replaced by count data $y$, the
approximation can only be applied to positive counts.
Similarly, plugging in $\alpha=y$, $\beta=1$, and $\Gamma(n)=(n-1)!$ where
$n\in\mathbb{Z}^{+}$, Eq. (12) becomes
$p\left(z|y,1\right)=\frac{z^{y-1}e^{-z}}{\left(y-1\right)!}.$ (17)
Next, we need to relate Eq. (16) and Eq. (17). First, using the “change of
variable” method and substituting $\ln z$ with $v$ [20] in Eq. (17), we obtain
the pdf of $v$
$\displaystyle p\left(v|y,1\right)$
$\displaystyle=p\left(z=e^{v}|y,1\right)\frac{\partial e^{v}}{{\partial v}}$
(18a) $\displaystyle=\frac{1}{(y-1)!}e^{vy}e^{-e^{v}}.$ (18b)
Then, using Eq. (16) yields
$\frac{1}{(y-1)!}e^{vy}e^{-e^{v}}\approx\frac{1}{\sqrt{2\pi\psi_{1}(y)}}e^{\frac{(v-\psi_{0}(y))^{2}}{-2\psi_{1}(y)}}.$
(19)
Figure 1: Quality of the Gaussian approximation (dashed blue line) to the true
distribution (solid red line) for different values of $y$: (a) $y$=1; (b)
$y$=3; (c) $y$=5; (d) $y$=10; (e) $y$=20. (f) Values of KS distance (solid red
line) between the approximate distribution and true distribution and values of
the absolute error in the mean (dashed blue line) of the approximate
distribution as the value of $y$ increases.
The comparison between the true and approximate Gaussian distribution, i.e.,
left and right-hand sides of Eq. (19) respectively, is shown in Fig. 1. When
the counts are small, such as $y\leq 3$ ((a) and (b)), the approximation is not
very close to the true distribution. We can also see that the
Kolmogorov–Smirnov (KS) distance (f), defined as the largest absolute
difference between the cumulative distribution functions of the approximate
distribution and true distribution, is relatively large. As the value of $y$
increases, the approximate Gaussian distribution is increasingly closer to the
true distribution. Also, the absolute error in the mean value of the Gaussian
approximation is relatively small when $y$ is greater than 3. Notice again
that the approximation given by Eq. (19) is not directly applicable to zero
counts. However, when zero counts are present in the dataset, such as in
epidemiology studies, one can shift each count upward by a positive constant
(e.g., 5). This linear transformation circumvents the problem posed by zero
counts in the dependent variable without affecting the model accuracy, since
such a transformation does not change the distribution of the error and
preserves the relation between the dependent and independent variables.
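The degradation of the approximation at small counts can be reproduced with a short numerical check; the sketch below (pure Python; the integration grid bounds are our choice) evaluates the KS distance between grid-based CDFs of the two sides of Eq. (19):

```python
import math

EULER_GAMMA = 0.5772156649015329

def psi0(n):
    # digamma at a positive integer, Eq. (13c)
    return -EULER_GAMMA + sum(1.0 / k for k in range(1, n))

def psi1(n):
    # trigamma at a positive integer, Eq. (14c)
    return math.pi**2 / 6 - sum(1.0 / k**2 for k in range(1, n))

def loggamma_pdf(v, y):
    """Exact density of v = ln z for z ~ Gamma(y, 1), Eq. (18b)."""
    return math.exp(v * y - math.exp(v)) / math.factorial(y - 1)

def gauss_pdf(v, y):
    """Gaussian approximation N(psi0(y), psi1(y)), right side of Eq. (19)."""
    var = psi1(y)
    return math.exp(-(v - psi0(y)) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def ks_distance(y, lo=-12.0, hi=8.0, n=40001):
    """Largest absolute difference between the two grid-based CDFs."""
    dv = (hi - lo) / (n - 1)
    c_true = c_appr = best = 0.0
    for i in range(n):
        v = lo + i * dv
        c_true += loggamma_pdf(v, y) * dv
        c_appr += gauss_pdf(v, y) * dv
        best = max(best, abs(c_true - c_appr))
    return best

for y in (1, 3, 10, 20):
    print(y, ks_distance(y))
```

As in Fig. 1(f), the KS distance shrinks monotonically as $y$ grows, which is why the Gaussian surrogate is most reliable for moderate-to-large counts.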
### 2.4 Closed-form Approximate Conditional Posterior Distribution
In the conditional posterior of coefficient $w_{jk}$ given by Eq. (7), the
likelihood function is
${\prod\limits_{i=1}^{{n_{j}}}{{\rm{Pois}}\left({{y_{ij}}\mid
e^{\sum_{k=1}^{K}{{w_{jk}}{x_{ijk}}}}}\right)}}={\prod\limits_{i=1}^{{n_{j}}}{{\frac{1}{y_{ij}(y_{ij}-1)!}e^{\sum_{k=1}^{K}{w_{jk}x_{ijk}}y_{ij}}e^{-e^{\sum_{k=1}^{K}{w_{jk}x_{ijk}}}}}}}.$
(20)
Applying the approximation given by Eq. (19) yields
${\prod\limits_{i=1}^{{n_{j}}}{{\rm{Pois}}\left({{y_{ij}}\mid
e^{\sum_{k=1}^{K}{{w_{jk}}{x_{ijk}}}}}\right)}}\approx{\prod\limits_{i=1}^{n_{j}}{{\frac{1}{y_{ij}}}\frac{1}{\sqrt{2\pi\psi_{1}(y_{ij})}}e^{\frac{\left({\sum_{k=1}^{K}{w_{jk}x_{ijk}}}-\psi_{0}(y_{ij})\right)^{2}}{-2\psi_{1}(y_{ij})}}}}.$
(21)
Plugging Eq. (21) into Eq. (7) we get
$\displaystyle p\left({{w_{jk}}|-}\right)$
$\displaystyle\propto\exp\left[{\frac{{{{\left({{w_{jk}}-{\mu_{{k}}}}\right)}^{2}}}}{{-2\sigma_{{k}}^{2}}}}\right]\prod\limits_{i=1}^{{n_{j}}}{\exp\left\\{{\frac{{{{\left[{\sum\limits_{k=1}^{K}{{w_{jk}}{x_{ijk}}}-{\psi_{0}}\left({{y_{ij}}}\right)}\right]}^{2}}}}{{-2{\psi_{1}}\left({{y_{ij}}}\right)}}}\right\\}}$
(22)
$\displaystyle=\exp\left\\{{\frac{{{{\left({{w_{jk}}-{\mu_{{k}}}}\right)}^{2}}}}{{-2\sigma_{{k}}^{2}}}+\sum\limits_{i=1}^{{n_{j}}}{\frac{{{{\left[{\sum\limits_{k=1}^{K}{{w_{jk}}{x_{ijk}}}-{\psi_{0}}\left({{y_{ij}}}\right)}\right]}^{2}}}}{{-2{\psi_{1}}\left({{y_{ij}}}\right)}}}}\right\\}.$
(23)
As the product of two Gaussians is still Gaussian, the posterior can also be
written as
$p\left({w_{jk}|-}\right)\propto\exp\left[{\frac{{{{\left({w_{jk}-{{\widehat{\mu}}_{{{k}}}}}\right)}^{2}}}}{{-2\widehat{\sigma}_{{{k}}}^{2}}}}\right],$
(24)
where ${\widehat{\mu}}_{{{k}}}$ and ${\widehat{\sigma}_{{{k}}}^{2}}$ are the
mean and variance of the approximate Gaussian posterior. Completing the
squares (see Appendix A for more details) we get
$\displaystyle{\widehat{\mu}}_{{{k}}}$
$\displaystyle=\frac{{{\mu_{{k}}}+\sigma_{{k}}^{2}\sum\limits_{i=1}^{{n_{j}}}{\frac{{{x_{ijk}}}}{{{\psi_{1}}\left({{y_{ij}}}\right)}}}\left[{{\psi_{0}}\left({{y_{ij}}}\right)-\sum\limits_{h=1,h\neq
k}^{K}{{x_{ijh}}{w_{jh}}}}\right]}}{{\sigma_{{k}}^{2}\sum\limits_{i=1}^{{n_{j}}}{\frac{{x_{ijk}^{2}}}{{{\psi_{1}}\left({{y_{ij}}}\right)}}}+1}},\,\forall
j=1,\dots,J,$ (25) $\displaystyle{\widehat{\sigma}_{{{{k}}}}^{2}}$
$\displaystyle=\frac{{\sigma_{{k}}^{2}}}{{\sigma_{{k}}^{2}\sum\limits_{i=1}^{{n_{j}}}{\frac{{x_{ijk}^{2}}}{{{\psi_{1}}\left({{y_{ij}}}\right)}}}+1}},\,\forall
j=1,\dots,J.$ (26)
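The update of Eqs. (25) and (26) for a single coefficient can be sketched in pure Python (function and variable names are ours; `psi0`/`psi1` use the integer-argument formulas of Eq. (13c) and Eq. (14c)):

```python
import math

EULER_GAMMA = 0.5772156649015329

def psi0(n):
    # digamma at a positive integer, Eq. (13c)
    return -EULER_GAMMA + sum(1.0 / i for i in range(1, n))

def psi1(n):
    # trigamma at a positive integer, Eq. (14c)
    return math.pi**2 / 6 - sum(1.0 / i**2 for i in range(1, n))

def approx_conditional_w(group, k, w_j, mu_k, sigma2_k):
    """Mean and variance of the approximate Gaussian conditional of w_jk,
    Eqs. (25)-(26). `group` is a list of (y_ij, x_ij) pairs for group j,
    with x_ij a length-K covariate vector; w_j holds the current
    coefficients of group j."""
    num = den = 0.0
    for y, x in group:
        # residual of the linearized likelihood after removing the other covariates
        resid = psi0(y) - sum(x[h] * w_j[h] for h in range(len(w_j)) if h != k)
        num += x[k] / psi1(y) * resid
        den += x[k] ** 2 / psi1(y)
    var = sigma2_k / (sigma2_k * den + 1.0)
    mean = (mu_k + sigma2_k * num) / (sigma2_k * den + 1.0)
    return mean, var
```

Drawing $w_{jk}$ is then a single Gaussian sample with this mean and variance (Eq. (24)); note that with no data the update correctly collapses to the prior $\mathcal{N}(\mu_{k},\sigma_{k}^{2})$.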
Now that the full conditional posterior distributions can be expressed
analytically, we can construct the approximate Gibbs sampler (Algorithm 1) to
obtain posterior samples of the parameters in HBPRM efficiently.
Algorithm 1 Approximate Gibbs sampler
1:${x}$, ${y}$, number of samples as warm-ups $N_{0}$, number of desired
samples $N_{1}$.
2:Desired posterior samples,
$\mu_{k}^{(\ell)},\sigma_{k}^{2(\ell)},w_{jk}^{(\ell)}$,
$\ell=N_{0}+1,\dots,N_{0}+N_{1}$, $k=1,\dots,K$, and $j=1\dots,J$.
3:Generate the initial sample $\mu_{k}^{(0)},\sigma_{k}^{2(0)},w_{jk}^{(0)}$.
4:for $\ell=1$ to $(N_{0}+N_{1})$ do
5: for $k=1$ to $K$ do
6: Sample hyperparameter $\mu_{k}^{(\ell)}$ according to Eq. (10).
7: Sample hyperparameter $\sigma_{k}^{2(\ell)}$ according to Eq. (11).
8: for $j=1$ to $J$ do
9: Sample each parameter $w_{jk}^{(\ell)}$ according to Eq. (24).
10: end for
11: end for
12:end for
## 3 Experiments
We evaluate the performance of our proposed AGS algorithm by applying it to
several synthetic and real datasets. The performance of AGS is evaluated in
terms of accuracy, efficiency, and computational time. The proposed
approach is compared with the state-of-the-art MCMC algorithm, No-U-Turn
sampler (NUTS) [40], using the same datasets and performance metrics. NUTS is
an extension to Hamiltonian Monte Carlo (HMC) algorithm that exploits
Hamiltonian dynamics to propose samples. NUTS can free users from tuning the
proposals and has been demonstrated to provide efficient inference of complex
hierarchical models [12]. The description of the datasets and the experimental
setup is provided in this section. The code and non-confidential data used for
the experiments are available upon request.
### 3.1 Data Description
Multiple synthetic and real datasets are used to evaluate the performance of
AGS for different data types and sizes. This section describes the approach to
generating synthetic data and the characteristics of the real datasets, which
cover power outages, Covid-19 positive cases, and bike rentals. A subset of
each dataset is provided in Tables 2 to 5.

Synthetic data. The synthetic datasets are generated according to the model
shown in Eq. (27). This model ensures that the generated datasets cover a wide
range of counts and resemble data arising from responses to emergency
incidents during disasters such as power outages. An example synthetic dataset
is presented in Table 2.
$\displaystyle{x_{ij1}}\sim U\left({0.1,\,2}\right),\;i=1,\dots,n_{j},\;j=1,\dots,J$ (27a)
$\displaystyle{x_{ij2}}\sim U\left({0.1,\,1}\right),\;i=1,\dots,n_{j},\;j=1,\dots,J$ (27b)
$\displaystyle{x_{ij3}}\sim U\left({0.1,\,0.5}\right),\;i=1,\dots,n_{j},\;j=1,\dots,J$ (27c)
$\displaystyle{x_{ij4}}\sim U\left({1,\,10}\right),\;i=1,\dots,n_{j},\;j=1,\dots,J$ (27d)
$\displaystyle{x_{ij5}}\sim U\left({0.5,\,5}\right),\;i=1,\dots,n_{j},\;j=1,\dots,J$ (27e)
$\displaystyle{x_{ij6}}\sim U\left({10,\,100}\right),\;i=1,\dots,n_{j},\;j=1,\dots,J$ (27f)
$\displaystyle x_{\cdot j}\sim U\left({10^{4},\,10^{6}}\right),\;j=1,\dots,J$ (27g)
$\displaystyle{w_{jk}}\sim\mathcal{N}\left({0.001,\,0.001}\right),\;j=1,\dots,J,\;k=1,\dots,K$ (27h)
$\displaystyle y_{ij}={\left(e^{\sum_{k=1}^{K}w_{jk}x_{ijk}}\right)_{{\rm{min-max}}}}x_{\cdot j},\;i=1,\dots,n_{j},\;j=1,\dots,J.$ (27i)
In Eq. (27), the notation $\left(\cdot\right)_{{\rm{min-max}}}$ denotes the
min-max normalizing function: for an array of real numbers represented by a
generic vector ${\bm{x}}$, the min-max normalization of ${\bm{x}}$ is given by
${\bm{x}}_{\rm{min-max}}=\frac{{\bm{x}}-\bm{x}_{\rm{min}}}{\bm{x}_{\rm{max}}-\bm{x}_{\rm{min}}}$.
Each count $y_{ij}$ is rounded to the nearest integer, and $x_{\cdot j}$ is
the group-level covariate for group $j$. We generate 15 synthetic datasets
(S1, …, S15) with varying total numbers of data points $N_{d}$, numbers of
data points per group $n_{j}$ (for simplicity, assumed equal across groups
within a dataset), numbers of covariates $K$ ($K\leq 6$), and numbers of
groups $J$, to analyze the effect of data size on the performance of AGS
(Table 6).
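As an illustration, the generative model of Eq. (27) is straightforward to implement. This sketch is ours: the function names and seed are arbitrary, $\mathcal{N}(0.001,\,0.001)$ is read as (mean, variance), and the min-max normalization is assumed to be applied within each group.

```python
import math
import random

rng = random.Random(42)  # arbitrary seed for this sketch

def minmax(values):
    """Min-max normalization of a list (assumes at least two distinct values)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def make_group(n_j, K=6):
    """Draw one group of (covariates, counts) following Eqs. (27a)-(27i)."""
    # Covariate ranges from Eqs. (27a)-(27f); only the first K columns are used.
    ranges = [(0.1, 2), (0.1, 1), (0.1, 0.5), (1, 10), (0.5, 5), (10, 100)][:K]
    x = [[rng.uniform(a, b) for a, b in ranges] for _ in range(n_j)]
    x_group = rng.uniform(1e4, 1e6)  # group-level covariate, Eq. (27g)
    # Eq. (27h): N(0.001, 0.001) read here as (mean, variance).
    w = [rng.gauss(0.001, math.sqrt(0.001)) for _ in range(K)]
    lin = [math.exp(sum(wk * xk for wk, xk in zip(w, xi))) for xi in x]
    # Eq. (27i): min-max normalize within the group, scale, round to integers.
    y = [round(v * x_group) for v in minmax(lin)]
    return x, y

x, y = make_group(50)
```

One consequence of the within-group min-max normalization is that the smallest count in each generated group is exactly zero before rounding, so the counts of a group always span from 0 up to roughly $x_{\cdot j}$.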
Table 2: An example of a synthetic dataset

$x_{1}$ | $x_{2}$ | $x_{3}$ | $x_{4}$ | $x_{5}$ | $x_{6}$ | $y$
---|---|---|---|---|---|---
0.14 | 0.11 | 0.27 | 1.49 | 0.66 | 12.10 | 42
0.61 | 0.15 | 0.27 | 2.22 | 1.52 | 24.76 | 6440
0.64 | 0.58 | 0.27 | 3.11 | 1.93 | 34.67 | 11424
0.77 | 0.58 | 0.30 | 5.70 | 2.13 | 38.71 | 13535
1.07 | 0.60 | 0.34 | 6.38 | 2.77 | 62.73 | 25064
1.29 | 0.75 | 0.42 | 7.78 | 3.69 | 79.38 | 33917
1.41 | 0.91 | 0.47 | 8.84 | 4.91 | 85.56 | 38272
1.55 | 0.93 | 0.49 | 8.93 | 4.96 | 92.86 | 41806
Power outage data. The power outage data include the number of customers
without power following 11 disruptive events (denoted P1$,\dots,$ P11). The
data are grouped by disruptive event, i.e., power outage counts recorded after
the same event fall into the same group. The covariates in each dataset are PS
(surface pressure, Pa), TQV (precipitable water vapor,
kg$\cdot\textrm{m}^{-2}$), U10M (10-meter eastward wind speed, m/s), V10M
(10-meter northward wind speed, m/s), and $t$ (time after the start of an
event, hours). The outage data were collected from public utility companies
during severe weather events, and the weather data from the National Oceanic
and Atmospheric Administration.
Table 3: A subset of power outage data

Event ID | PS | TQV | U10M | V10M | $t$ | Outage count
---|---|---|---|---|---|---
1 | 99691.45 | 43.18 | 2.39 | 4.79 | 4 | 66807
1 | 100917.62 | 26.75 | 1.11 | 1.39 | 32 | 18379
1 | 101041.88 | 36.29 | 1.11 | -0.18 | 60 | 12096
1 | 101467.45 | 55.72 | -1.73 | -0.67 | 116 | 14231
1 | 101155.43 | 37.50 | -0.08 | -3.01 | 144 | 10155
1 | 101037.79 | 32.13 | -0.48 | -1.53 | 172 | 4758
1 | 101194.86 | 40.20 | -0.90 | 1.66 | 200 | 2699
1 | 101183.98 | 46.61 | -0.54 | 1.20 | 228 | 2297
1 | 101136.76 | 34.68 | -1.14 | -1.50 | 256 | 248
1 | 101086.31 | 43.80 | -2.19 | -2.44 | 284 | 43
Covid-19 test data. The Covid-19 test dataset is obtained from Ref. [41]; it
was originally collected from seven studies (two preprints and five
peer-reviewed articles) that report RT-PCR (reverse transcriptase polymerase
chain reaction) performance by time since symptom onset or exposure, using
samples derived from nasal or throat swabs among patients tested for Covid-19.
The number of studies (groups) is 10. Each study includes multiple test cases
(Table 4), each of which records the number of days $t$ after exposure to
Covid-19 and the total number of samples tested, $N_{s}$. The response
variable is the number of patients who tested positive among the samples. The
total number of test cases is 380. As the proposed approximation cannot be
applied to zero counts, we remove the test cases with zero positive tests
among the samples tested, leaving 298 test cases. The test cases are grouped
by study. Following Ref. [41], exposure is assumed to have occurred five days
before symptom onset, and $\log(t)$, $\log(t)^{2}$, $\log(t)^{3}$, and
$N_{s}$ are used as the covariates.
Table 4: A subset of the Covid-19 test data

Study ID | Test case ID | $\log(t)$ | $\log(t)^{2}$ | $\log(t)^{3}$ | $N_{s}$ | Positive count
---|---|---|---|---|---|---
1 | 1 | 1.23 | 1.51 | 1.86 | 35 | 15
1 | 2 | 1.26 | 1.58 | 1.98 | 23 | 11
1 | 3 | 1.28 | 1.64 | 2.09 | 20 | 6
1 | 5 | 1.32 | 1.75 | 2.31 | 20 | 8
1 | 6 | 1.34 | 1.8 | 2.42 | 11 | 3
1 | 7 | 1.36 | 1.85 | 2.53 | 11 | 5
1 | 8 | 1.38 | 1.9 | 2.63 | 9 | 2
1 | 9 | 1.4 | 1.95 | 2.73 | 6 | 3
1 | 10 | 1.41 | 2 | 2.83 | 5 | 2
Bike sharing data. The bike sharing data include daily bike rental counts for
729 days; the covariates we use are normalized temperature, normalized
humidity, and the casual rental count. The dataset is obtained from the UC
Irvine Machine Learning Repository [42, 43]. Rental counts are grouped by
whether the rental occurs on a working day (Table 5).
Table 5: A subset of the bike sharing data. Workingday = 1 indicates the rental occurs on a working day, and 0 otherwise.

Workingday | Temperature | Humidity | Casual rental count | Total count
---|---|---|---|---
0 | 0.34 | 0.81 | 331 | 985
0 | 0.36 | 0.70 | 131 | 801
1 | 0.20 | 0.44 | 120 | 1349
1 | 0.20 | 0.59 | 108 | 1562
1 | 0.23 | 0.44 | 82 | 1600
1 | 0.20 | 0.52 | 88 | 1606
1 | 0.20 | 0.50 | 148 | 1510
0 | 0.17 | 0.54 | 68 | 959
0 | 0.14 | 0.43 | 54 | 822
To investigate the performance of AGS for small counts, including zero counts,
we also simulate datasets with small counts using the model shown in Eq. (28).
$\displaystyle{x_{ij1}}\sim U\left({0.1,\,2}\right),\;i=1,\dots,n_{j},\;j=1,\dots,J$ (28a)
$\displaystyle{x_{ij2}}\sim U\left({0.1,\,1}\right),\;i=1,\dots,n_{j},\;j=1,\dots,J$ (28b)
$\displaystyle{x_{ij3}}\sim U\left({0.1,\,0.5}\right),\;i=1,\dots,n_{j},\;j=1,\dots,J$ (28c)
$\displaystyle{x_{ij4}}\sim U\left({1,\,10}\right),\;i=1,\dots,n_{j},\;j=1,\dots,J$ (28d)
$\displaystyle{x_{ij5}}\sim U\left({0.5,\,5}\right),\;i=1,\dots,n_{j},\;j=1,\dots,J$ (28e)
$\displaystyle x_{\cdot j}\sim TEXP\left(0.7,\,1,\,y_{\max}\right),\;j=1,\dots,J$ (28f)
$\displaystyle{w_{jk}}\sim\mathcal{N}\left({0.1,\,0.1}\right),\;j=1,\dots,J,\;k=1,\dots,K$ (28g)
$\displaystyle y_{ij}=\left\lfloor{\left(e^{\sum_{k=1}^{K}w_{jk}x_{ijk}}\right)_{{\rm{min-max}}}}x_{\cdot j}\right\rfloor,\;i=1,\dots,n_{j},\;j=1,\dots,J.$ (28h)
$TEXP$ denotes the truncated exponential distribution. The PDF of
$TEXP(0.7,\,1,\,y_{\max})$, where 0.7 is the rate parameter and 1 and
$y_{\max}$ are the lower and upper bounds, is given by
$f(x)=\begin{cases}0.7e^{-0.7(y_{\max}-x-1)},&x\in[1,\,y_{\max}],\\ 0,&\textrm{otherwise}.\end{cases}$ (29)
This truncated exponential distribution is used instead of a uniform
distribution to ensure that the generated counts do not concentrate on small
values; by changing the upper bound, we can generate counts in different
ranges. Notice that since the floor function $\left\lfloor\cdot\right\rfloor$
is used in Eq. (28h), the generated counts can include zeros, which fall below
the lower bound of 1.
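Treating Eq. (29) as specifying the shape of the density (it is proportional to $e^{0.7x}$ on $[1,\,y_{\max}]$), such a variable can be drawn by inverse-transform sampling. The sketch below is ours, not the authors' implementation; the function name and defaults are illustrative, and the inversion is written in a log-space form to avoid overflow for large upper bounds.

```python
import math
import random

def sample_texp(rng, rate=0.7, lo=1.0, hi=30.0):
    """Inverse-transform draw from the density proportional to exp(rate * x)
    on [lo, hi] (the shape of Eq. (29)), in an overflow-safe log-space form."""
    u = rng.random()
    # CDF: F(x) = (e^{r x} - e^{r lo}) / (e^{r hi} - e^{r lo}).
    # Inverting and factoring out e^{r hi} gives a numerically stable form:
    return hi + math.log(u + (1.0 - u) * math.exp(-rate * (hi - lo))) / rate

rng = random.Random(1)
samples = [sample_texp(rng, 0.7, 1.0, 30.0) for _ in range(2000)]
```

Because the density increases in $x$, the draws pile up near the upper bound, which is exactly the property the authors cite for avoiding counts that concentrate on small values.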
### 3.2 Experiment Setup
In the HBPRM for the count datasets listed above, we employ
$\mathcal{N}(0,1)$ and $\mathcal{IG}\left(1,1\right)$ as weakly-informative
priors [44] for $\mu_{w}$ and $\sigma^{2}_{w}$, respectively. In the numerical
experiments, NUTS is implemented with Stan [44]. Results are averaged over 4
runs of 10,000 iterations for each algorithm, discarding the first 5,000
samples as warm-up/burn-in. We compare AGS with NUTS in terms of average
sampling time in seconds per 1000 iterations ($T_{s}$), sampling efficiency
($E_{s}$), $R^{2}$, and Root Mean Square Error (RMSE). Sampling efficiency is
quantified as the mean effective sample size ($\hat{n}_{\mathrm{eff}}$)
divided by the average sampling time in seconds per 1000 iterations, i.e.,
$E_{s}=\hat{n}_{\mathrm{eff}}/T_{s}$, where $\hat{n}_{\mathrm{eff}}$ is the
effective sample size over multiple sequences of samples [11, Chapter 11]. To
make this paper self-contained, we include the details for calculating
$\hat{n}_{\mathrm{eff}}$, $R^{2}$, and RMSE in Appendix B. All experiments are
implemented in R (version 3.6.1) on a Windows 10 desktop computer with a
3.40 GHz Intel Core i7-6700 CPU and 16.0 GB RAM.
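As a concrete sketch of the efficiency metric, the multi-chain effective sample size of Appendix B (following [11, Chapter 11]) can be implemented in pure Python; the function name, the AR(1) demonstration chains, and the fallback when no truncation lag is found are assumptions of this sketch. Dividing the result by a measured $T_{s}$ then gives $E_{s}$.

```python
import random

def effective_sample_size(chains):
    """Variogram-based multi-chain n_eff of [11, Ch. 11]:
    n_eff = m*n / (1 + 2 * sum_{t<=T} rho_t)."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)        # between-chain
    W = sum(sum((v - mu) ** 2 for v in c) / (n - 1)
            for c, mu in zip(chains, means)) / m                    # within-chain
    var_plus = (n - 1) / n * W + B / n
    rhos = []
    for t in range(1, n - 1):
        V_t = sum(sum((c[i] - c[i - t]) ** 2 for i in range(t, n))
                  for c in chains) / (m * (n - t))                  # variogram
        rhos.append(1.0 - V_t / (2.0 * var_plus))                   # rho_t
    # T: first odd t with rho_{t+1} + rho_{t+2} < 0 (else keep all lags).
    T = next((t for t in range(1, len(rhos) - 1, 2)
              if rhos[t] + rhos[t + 1] < 0), len(rhos))
    return m * n / (1 + 2 * sum(rhos[:T]))

rng = random.Random(0)
iid = [[rng.gauss(0, 1) for _ in range(400)] for _ in range(4)]
ar = []
for _ in range(4):  # strongly autocorrelated AR(1) chains, phi = 0.95
    v, c = 0.0, []
    for _ in range(400):
        v = 0.95 * v + rng.gauss(0, 1)
        c.append(v)
    ar.append(c)
```

On the independent chains this estimator returns a value near the nominal $mn$ draws, while the slowly mixing AR(1) chains yield a far smaller value, mirroring the gap between nominal iterations and $\hat{n}_{\mathrm{eff}}$ reported for AGS.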
### 3.3 Results
The performance of NUTS and AGS on the different datasets is summarized in
Table 6 (synthetic datasets) and Table 7 (real datasets). On both the
synthetic and real datasets, AGS consistently outperforms NUTS in average
sampling time, especially when the datasets are large. Depending on the
dataset, the improvement in average inference speed can exceed one order of
magnitude. This shows that using the Gaussian approximation to avoid
evaluating complex conditional posteriors can significantly boost sampling
speed. However, for all datasets except power outage dataset P1, the sampling
efficiency of AGS is significantly lower than that of NUTS, because the
effective sample size obtained from AGS is much lower. This relatively low
inference efficiency does not compromise the accuracy of the parameter
estimates: in all examined datasets, $R^{2}$ and RMSE are comparable for AGS
and NUTS. Inference accuracy is better (higher $R^{2}$ and lower RMSE) for
AGS on all the synthetic datasets, and in eight of the thirteen (about 62%)
real datasets AGS has slightly higher $R^{2}$ and lower RMSE. In particular,
the results on the Covid-19 test data show that as long as a significant
fraction of the counts is not very small, the proposed approximate Gibbs
sampler can outperform NUTS in predictive accuracy. Overall, AGS significantly
decreases the computational load by allowing faster sampling without
compromising the accuracy of the estimates.
Table 6: Performance of NUTS and AGS on synthetic datasets

Dataset | $N_{d}$ | $K$ | $J$ | $T_{s}$ (s) NUTS | $T_{s}$ (s) AGS | $E_{s}$ NUTS | $E_{s}$ AGS | $R^{2}$ NUTS | $R^{2}$ AGS | RMSE NUTS | RMSE AGS
---|---|---|---|---|---|---|---|---|---|---|---
S1 | 200 | 2 | 10 | 1.51 | 0.96 | 649.28 | 33.46 | 0.9390 | 0.9450 | 6500 | 6191
S2 | 400 | 2 | 10 | 2.66 | 0.98 | 373.46 | 34.99 | 0.9430 | 0.9500 | 5823 | 5455
S3 | 800 | 2 | 20 | 5.05 | 1.63 | 195.21 | 18.63 | 0.9576 | 0.9626 | 3526 | 3310
S4 | 200 | 3 | 10 | 5.76 | 1.25 | 170.39 | 5.28 | 0.9614 | 0.9660 | 4235 | 3976
S5 | 400 | 3 | 10 | 16.63 | 1.29 | 59.78 | 2.03 | 0.9683 | 0.9710 | 3450 | 3305
S6 | 800 | 3 | 20 | 40.53 | 2.44 | 24.30 | 1.80 | 0.9677 | 0.9719 | 3993 | 3728
S7 | 200 | 4 | 10 | 7.64 | 1.63 | 129.25 | 1.59 | 0.9716 | 0.9760 | 2955 | 2718
S8 | 400 | 4 | 10 | 23.72 | 1.94 | 41.69 | 1.50 | 0.9639 | 0.9701 | 4406 | 4012
S9 | 800 | 4 | 20 | 45.00 | 3.31 | 21.99 | 0.49 | 0.9740 | 0.9770 | 2958 | 2790
S10 | 200 | 5 | 10 | 12.12 | 2.06 | 81.26 | 0.70 | 0.9574 | 0.9631 | 4941 | 4593
S11 | 400 | 5 | 10 | 24.70 | 2.41 | 39.68 | 0.49 | 0.9782 | 0.9826 | 2514 | 2245
S12 | 800 | 5 | 20 | 64.00 | 4.12 | 15.44 | 0.27 | 0.9767 | 0.9804 | 3446 | 3157
S13 | 200 | 6 | 10 | 14.66 | 2.44 | 67.49 | 0.23 | 0.9817 | 0.9876 | 2720 | 2721
S14 | 400 | 6 | 10 | 42.01 | 2.63 | 20.17 | 0.24 | 0.9922 | 0.9948 | 1339 | 1128
S15 | 800 | 6 | 20 | 93.20 | 4.97 | 8.44 | 0.17 | 0.9910 | 0.9940 | 1994 | 1629
We also investigate the scalability of the algorithms, which is crucial for
large-scale hierarchical data. Fig. 2 shows the average sampling time
(seconds) per 1000 iterations of the two algorithms for all the synthetic and
real datasets as a function of dataset size. The sampling time of both
samplers increases with the size of the dataset, but the increase is
significantly slower for AGS than for NUTS, suggesting improved scalability.
This also indicates that although NUTS can generate samples effectively, it
becomes inefficient for large datasets, as evaluating the gradient when
proposing new samples becomes computationally expensive [45].
Table 7: Performance of NUTS and AGS on real datasets

Dataset | $N_{d}$ | $K$ | $J$ | $T_{s}$ (s) NUTS | $T_{s}$ (s) AGS | $E_{s}$ NUTS | $E_{s}$ AGS | $R^{2}$ NUTS | $R^{2}$ AGS | RMSE NUTS | RMSE AGS
---|---|---|---|---|---|---|---|---|---|---|---
P1 | 3817 | 5 | 56 | 885.65 | 15.76 | 1.12 | 1.25 | 0.9730 | 0.9801 | 13558 | 11621
P2 | 2467 | 5 | 50 | 652.87 | 13.71 | 1.50 | 0.62 | 0.9873 | 0.9884 | 1446 | 1384
P3 | 1548 | 5 | 35 | 387.47 | 9.41 | 2.54 | 0.92 | 0.9850 | 0.9870 | 1974 | 1833
P4 | 632 | 5 | 26 | 165.73 | 6.67 | 5.94 | 0.46 | 0.9923 | 0.9918 | 1327 | 1373
P5 | 520 | 5 | 16 | 135.16 | 4.31 | 7.27 | 1.85 | 0.9934 | 0.9940 | 3473 | 3312
P6 | 421 | 5 | 17 | 118.73 | 4.54 | 8.25 | 2.68 | 0.9908 | 0.9918 | 2526 | 2387
P7 | 375 | 5 | 23 | 39.86 | 5.76 | 24.75 | 2.41 | 0.9459 | 0.9355 | 3574 | 3903
P8 | 247 | 5 | 10 | 8.75 | 2.78 | 111.91 | 2.38 | 0.9744 | 0.9729 | 795 | 818
P9 | 157 | 5 | 8 | 5.48 | 2.24 | 179.87 | 4.07 | 0.9964 | 0.9967 | 803 | 766
P10 | 115 | 5 | 6 | 4.49 | 1.72 | 218.17 | 6.75 | 0.9869 | 0.9915 | 7715 | 6222
P11 | 63 | 5 | 4 | 5.49 | 1.25 | 177.38 | 0.92 | 0.9356 | 0.9027 | 251 | 308
Bike share | 729 | 3 | 2 | 9.09 | 0.98 | 109.54 | 24.75 | 0.6743 | 0.6292 | 1101 | 1175
Covid test | 298 | 3 | 11 | 34.60 | 2.59 | 28.57 | 18.54 | 0.8517 | 0.8582 | 2.53 | 2.47
Figure 2: Scalability of NUTS and AGS on (a) real datasets and (b) synthetic datasets. The size of a dataset, i.e., the total number of count data points, is calculated by $\sum_{j=1}^{J}n_{j}\times J$.

Table 8: Inference accuracy of NUTS and AGS on selected Covid-19 test datasets with small counts. The number of covariates $K=3$ and the number of groups $J=11$.

Dataset | $RG$ | $PCT_{0}$ (%) | $PCT_{1,5}$ (%) | $N_{d}$ | $R^{2}$ NUTS | $R^{2}$ AGS | RMSE NUTS | RMSE AGS
---|---|---|---|---|---|---|---|---
Subset1 | [1, 5] | 0.0 | 100.0 | 168 | 0.4403 | 0.4452 | 0.9964 | 0.9920
Subset2 | [0, 5] | 32.5 | 67.5 | 249 | 0.6606 | 0.5684 | 0.9370 | 1.0570
Subset3 | [1, 38] | 0.0 | 56.4 | 298 | 0.8517 | 0.8582 | 2.5337 | 2.4685
Whole set | [0, 38] | 21.4 | 44.3 | 379 | 0.8692 | 0.8546 | 2.3492 | 2.4769
Results on the inference accuracy of the two algorithms on selected Covid-19
test datasets with small counts are summarized in Table 8, where $RG$ denotes
the range of counts, $PCT_{0}$ the percentage of zero counts, and $PCT_{1,5}$
the percentage of counts in [1, 5]. AGS outperforms NUTS in $R^{2}$ and RMSE
when there are no zero counts (Subset1 and Subset3). However, when zero counts
are included (Subset2 and the whole set), NUTS performs better than AGS. The
performance on Subset1 further shows that even when all the counts are small
but positive, AGS can still outperform NUTS in inference accuracy by a small
margin.
Table 9: Inference accuracy of NUTS and AGS on synthetic datasets with small counts. The number of covariates $K=5$ and the number of groups $J=8$.

Dataset | $RG$ | $PCT_{0}$ (%) | $PCT_{1,5}$ (%) | $N_{d}$ | $R^{2}$ NUTS | $R^{2}$ AGS | RMSE NUTS | RMSE AGS
---|---|---|---|---|---|---|---|---
SS1 | [1, 5] | 0.0 | 100.0 | 252 | 0.9226 | 0.8139 | 0.3198 | 0.4959
SS2 | [0, 5] | 21.3 | 78.8 | 320 | 0.5414 | 0.4567 | 0.9460 | 1.0010
SS3 | [1, 10] | 0.0 | 71.2 | 288 | 0.9625 | 0.9421 | 0.4478 | 0.5564
SS4 | [0, 10] | 10.0 | 64.1 | 320 | 0.6461 | 0.6480 | 1.4805 | 1.4764
SS5 | [1, 15] | 0.0 | 57.3 | 307 | 0.9766 | 0.9732 | 0.5715 | 0.6113
SS6 | [0, 15] | 4.1 | 55.0 | 320 | 0.9641 | 0.9538 | 0.7225 | 0.8193
SS7 | [1, 20] | 0.0 | 47.0 | 319 | 0.9822 | 0.9819 | 0.6639 | 0.6708
SS8 | [0, 20] | 3.1 | 46.9 | 320 | 0.9691 | 0.9642 | 0.8774 | 0.9446
SS9 | [1, 30] | 0.0 | 32.2 | 314 | 0.9723 | 0.9767 | 1.3059 | 1.1979
SS10 | [0, 30] | 1.9 | 31.6 | 320 | 0.9525 | 0.9563 | 1.7246 | 1.6556
We also compare the inference accuracy of both algorithms using synthetic
datasets with small counts (SS1-SS10) (Table 9). For most synthetic datasets
with a relatively large percentage of small counts (SS1-SS3 and SS5-SS8),
NUTS outperforms AGS in terms of $R^{2}$ and RMSE. If the percentage of small
counts is not very high (SS9 and SS10), AGS outperforms NUTS, even in the
presence of zero counts (SS10).

Based on these comparisons, we conclude that when a large percentage of counts
are small, particularly zero, NUTS tends to outperform AGS. The specific
percentage of small counts at which NUTS overtakes AGS varies with the
dataset; depending on the particular dataset, AGS may still outperform NUTS
when there is a large percentage of small counts.
## 4 Conclusions
This research proposes a scalable approximate Gibbs sampling algorithm for the
HBPRM for grouped count data. Our algorithm builds on a Gaussian approximation
of the data likelihood such that the conditional posteriors of the regression
coefficients have closed-form solutions. Empirical examples using synthetic
and real datasets demonstrate that the proposed algorithm outperforms the
state-of-the-art sampling algorithm, NUTS, in sampling speed, with greater
improvements for larger datasets, suggesting improved scalability. Due in part
to the Gibbs updates, AGS trades slower-mixing Markov chains for this speed,
leading to a much lower effective sample size and therefore lower sampling
efficiency. However, when sampling time is of great concern to model users
(e.g., predicting incidents and demands to allocate resources during a
disaster), AGS may be the only feasible option. As the approximation quality
improves with larger counts, our algorithm works best for count datasets in
which the counts are large. When a large portion of the counts in a dataset
are zero or very small, NUTS tends to outperform AGS in inference accuracy.
Therefore, when there are zero counts and inference accuracy is critical,
NUTS is recommended over AGS.
It is worth noting that the approximate conditional distributions of the
parameters in HBPRMs for grouped count data may not be compatible with each
other, i.e., there may not exist an implicit joint posterior distribution
[46, 17] after applying the approximation. Despite this potential
incompatibility, the use of such approximate MCMC samplers is justified by
their computational efficiency and analytical convenience [46, 18], especially
when the efficiency improvement outweighs the bias introduced by the
approximation [17].
Future work can explore scalable inference in hierarchical Bayesian models for
data with excess zeros [47, 48, 49], as the Poisson regression model is not
appropriate for zero-inflated count data.
## References
* [1] Aktekin T, Polson N, Soyer R. Sequential Bayesian analysis of multivariate count data. Bayesian Analysis. 2018;13(2):385–409.
* [2] Hay JL, Pettitt AN. Bayesian analysis of a time series of counts with covariates: An application to the control of an infectious disease. Biostatistics. 2001;2(4):433–444.
* [3] De Oliveira V. Hierarchical Poisson models for spatial count data. Journal of Multivariate Analysis. 2013;122:393–408.
* [4] Han SR, Guikema SD, Quiring SM, et al. Estimating the spatial distribution of power outages during hurricanes in the Gulf coast region. Reliability Engineering & System Safety. 2009;94(2):199–210.
* [5] Yu JZ, Whitman M, Kermanshah A, et al. A hierarchical bayesian approach for assessing infrastructure networks serviceability under uncertainty: A case study of water distribution systems. Reliability Engineering & System Safety. 2021;215:107735.
* [6] Yu JZ, Baroud H. Quantifying community resilience using hierarchical Bayesian kernel methods: A case study on recovery from power outages. Risk Analysis. 2019;39(9):1930–1948.
* [7] Ma J, Kockelman KM, Damien P. A multivariate Poisson-lognormal regression model for prediction of crash counts by severity, using Bayesian methods. Accident Analysis & Prevention. 2008;40(3):964–975.
* [8] Baio G, Blangiardo M. Bayesian hierarchical model for the prediction of football results. Journal of Applied Statistics. 2010;37(2):253–264.
* [9] Flask T, Schneider IV WH, Lord D. A segment level analysis of multi-vehicle motorcycle crashes in Ohio using Bayesian multi-level mixed effects models. Safety Science. 2014;66:47–53.
* [10] Khazraee SH, Johnson V, Lord D. Bayesian Poisson hierarchical models for crash data analysis: Investigating the impact of model choice on site-specific predictions. Accident Analysis & Prevention. 2018;117:181–195.
* [11] Gelman A, Carlin JB, Stern HS, et al. Bayesian data analysis. Chapman and Hall/CRC; 2013.
* [12] Betancourt M, Girolami M. Hamiltonian Monte Carlo for hierarchical models. Current trends in Bayesian methodology with applications. 2015;79(30):2–4.
* [13] AlJadda K, Korayem M, Ortiz C, et al. PGMHD: A scalable probabilistic graphical model for massive hierarchical data problems. In: 2014 IEEE International Conference on Big Data (Big Data); IEEE; 2014. p. 55–60.
* [14] Brooks S. Markov Chain Monte Carlo method and its application. Journal of the Royal Statistical Society: Series D (the Statistician). 1998;47(1):69–100.
* [15] Conrad PR, Marzouk YM, Pillai NS, et al. Accelerating asymptotically exact MCMC for computationally intensive models via local approximations. Journal of the American Statistical Association. 2016;111(516):1591–1607.
* [16] Robert CP, Elvira V, Tawn N, et al. Accelerating MCMC algorithms. Wiley Interdisciplinary Reviews: Computational Statistics. 2018;10(5):e1435.
* [17] Alquier P, Friel N, Everitt R, et al. Noisy Monte Carlo: Convergence of Markov Chains with approximate transition kernels. Statistics and Computing. 2016;26(1-2):29–47.
* [18] Johndrow JE, Mattingly JC, Mukherjee S, et al. Optimal approximating Markov Chains for Bayesian inference. arXiv preprint arXiv:1508.03387. 2015;.
* [19] Streftaris G, Worton BJ. Efficient and accurate approximate Bayesian inference with an application to insurance data. Computational Statistics & Data Analysis. 2008;52(5):2604–2622.
* [20] Chan AB, Vasconcelos N. Bayesian Poisson regression for crowd counting. In: 2009 IEEE 12th international conference on computer vision; IEEE; 2009. p. 545–551.
* [21] Dutta R, Blomstedt P, Kaski S. Bayesian inference in hierarchical models by combining independent posteriors. arXiv preprint arXiv:1603.09272. 2016;.
* [22] Berman B. Asymptotic posterior approximation and efficient MCMC sampling for generalized linear mixed models [dissertation]. UC Irvine; 2019.
* [23] Bansal P, Krueger R, Graham DJ. Fast Bayesian estimation of spatial count data models. Computational Statistics & Data Analysis. 2020;:107152.
* [24] Polson NG, Scott JG, Windle J. Bayesian inference for logistic models using pólya–gamma latent variables. Journal of the American Statistical Association. 2013;108(504):1339–1349.
* [25] Aguero-Valverde J, Jovanis PP. Bayesian multivariate poisson lognormal models for crash severity modeling and site ranking. Transportation Research Record. 2009;2136(1):82–91.
* [26] Montesinos-López OA, Montesinos-López A, Crossa J, et al. A Bayesian Poisson-lognormal model for count data for multiple-trait multiple-environment genomic-enabled prediction. G3: Genes, Genomes, Genetics. 2017;7(5):1595–1606.
* [27] Serhiyenko V, Mamun SA, Ivan JN, et al. Fast Bayesian inference for modeling multivariate crash counts. Analytic Methods in Accident Research. 2016;9:44–53.
* [28] Gelman A. Prior distributions for variance parameters in hierarchical models (comment on article by browne and draper). Bayesian analysis. 2006;1(3):515–534.
* [29] Graves TL. Automatic step size selection in random walk metropolis algorithms. arXiv preprint arXiv:1103.5986. 2011;.
* [30] Holden L. Mixing of MCMC algorithms. Journal of Statistical Computation and Simulation. 2019;89(12):2261–2279.
* [31] Kass RE, Carlin BP, Gelman A, et al. Markov Chain Monte Carlo in practice: A roundtable discussion. The American Statistician. 1998;52(2):93–100.
* [32] Pang WK, Forster JJ, Troutt MD. Estimation of wind speed distribution using Markov Chain Monte Carlo techniques. Journal of Applied Meteorology. 2001;40(8):1476–1484.
* [33] Murphy KP. Conjugate Bayesian analysis of the Gaussian distribution. Technical Report. 2007; Available from: https://www.cs.ubc.ca/~murphyk/mypapers.html.
* [34] Gilks WR, Wild P. Adaptive rejection sampling for Gibbs sampling. Journal of the Royal Statistical Society: Series C (Applied Statistics). 1992;41(2):337–348.
* [35] Geweke J, Tanizaki H. Bayesian estimation of state-space models using the Metropolis–Hastings algorithm within Gibbs sampling. Computational Statistics & Data Analysis. 2001;37(2):151–170.
* [36] Halliwell LJ. The log-gamma distribution and non-normal error. Variance: Advancing the Science of Risk. 2018;.
* [37] Batir N. On some properties of digamma and polygamma functions. Journal of Mathematical Analysis and Applications. 2007;328(1):452–465.
* [38] Mačys J. On the Euler-Mascheroni constant. Mathematical Notes. 2013;94.
* [39] Prentice RL. A log-gamma model and its maximum likelihood estimation. Biometrika. 1974;61(3):539–544.
* [40] Hoffman MD, Gelman A. The No-U-Turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research. 2014;15(1):1593–1623.
* [41] Kucirka LM, Lauer SA, Laeyendecker O, et al. Variation in false-negative rate of reverse transcriptase polymerase chain reaction–based SARS-CoV-2 tests by time since exposure. Annals of Internal Medicine. 2020;.
* [42] Asuncion A, Newman D. UCI Machine Learning Repository; 2007.
* [43] Fanaee-T H, Gama J. Event labeling combining ensemble detectors and background knowledge. Progress in Artificial Intelligence. 2014;2(2-3):113–127.
* [44] Stan Development Team. Stan modeling language users guide and reference manual; 2016.
* [45] Nishio M, Arakawa A. Performance of Hamiltonian Monte Carlo and No-U-Turn sampler for estimating genetic parameters and breeding values. Genetics Selection Evolution. 2019;51(1):73.
* [46] Gelman A. Parameterization and Bayesian modeling. Journal of the American Statistical Association. 2004;99(466):537–545.
* [47] Gholiabad SG, Moghimbeigi A, Faradmal J, et al. A multilevel zero-inflated Conway–Maxwell type negative binomial model for analysing clustered count data. Journal of Statistical Computation and Simulation. 2021;91(9):1762–1781.
* [48] Liu X, Winter B, Tang L, et al. Simulating comparisons of different computing algorithms fitting zero-inflated poisson models for zero abundant counts. Journal of Statistical Computation and Simulation. 2017;87(13):2609–2621.
* [49] Brown S, Duncan A, Harris MN, et al. A zero-inflated regression model for grouped data. Oxford Bulletin of Economics and Statistics. 2015;77(6):822–831.
## 5 Appendices
## Appendix A Derivation of the approximate conditional posterior
The derivation of the approximate conditional posterior distribution of
$w_{jk}$ is presented below. Terms that do not involve $w_{jk}$ are treated as
constants, denoted $C_{i}$ ($i=1,\dots,5$) in the following equations.
The conditional posterior of the regression coefficient $w_{jk}$ can be written as
$p\left(w_{jk}\,|\,-\right)\propto\exp\left\{\frac{\left(w_{jk}-\mu_{k}\right)^{2}}{-2\sigma_{k}^{2}}+\sum\limits_{i=1}^{n_{j}}\frac{\left[x_{ijk}w_{jk}+\sum\limits_{h=1,h\neq k}^{K}x_{ijh}w_{jh}-\psi_{0}\left(y_{ij}\right)\right]^{2}}{-2\psi_{1}\left(y_{ij}\right)}\right\}.$ (30)
Let $A$ be the exponent in Eq. (30); expanding the square terms, we have
$\displaystyle A=\frac{\left(w_{jk}-\mu_{k}\right)^{2}}{-2\sigma_{k}^{2}}+\sum\limits_{i=1}^{n_{j}}\frac{\left(w_{jk}x_{ijk}+\sum\limits_{h=1,h\neq k}^{K}w_{jh}x_{ijh}-\psi_{0}\left(y_{ij}\right)\right)^{2}}{-2\psi_{1}\left(y_{ij}\right)}$ (31)
$\displaystyle=\frac{w_{jk}^{2}-2\mu_{k}w_{jk}+C_{1}}{-2\sigma_{k}^{2}}+\sum\limits_{i=1}^{n_{j}}\frac{x_{ijk}^{2}w_{jk}^{2}+2\left(\sum\limits_{h=1,h\neq k}^{K}w_{jh}x_{ijh}-\psi_{0}\left(y_{ij}\right)\right)x_{ijk}w_{jk}+C_{2}}{-2\psi_{1}\left(y_{ij}\right)}$ (32)
$\displaystyle=-\frac{1}{2}\left(\frac{1}{\sigma_{k}^{2}}+\sum\limits_{i=1}^{n_{j}}\frac{x_{ijk}^{2}}{\psi_{1}\left(y_{ij}\right)}\right)w_{jk}^{2}+\left(\frac{\mu_{k}}{\sigma_{k}^{2}}+\sum\limits_{i=1}^{n_{j}}\frac{\left(\sum\limits_{h=1,h\neq k}^{K}w_{jh}x_{ijh}-\psi_{0}\left(y_{ij}\right)\right)x_{ijk}}{-\psi_{1}\left(y_{ij}\right)}\right)w_{jk}+C_{3}.$ (33)
Dividing the numerator and denominator by the coefficient of the quadratic
term and completing the square, we get
$\displaystyle A\propto\frac{w_{jk}^{2}+\left(\frac{\frac{\mu_{k}}{\sigma_{k}^{2}}+\sum\limits_{i=1}^{n_{j}}\frac{\left(\sum\limits_{h=1,h\neq k}^{K}w_{jh}x_{ijh}-\psi_{0}\left(y_{ij}\right)\right)x_{ijk}}{-\psi_{1}\left(y_{ij}\right)}}{-\frac{1}{2}\left(\frac{1}{\sigma_{k}^{2}}+\sum\limits_{i=1}^{n_{j}}\frac{x_{ijk}^{2}}{\psi_{1}\left(y_{ij}\right)}\right)}\right)w_{jk}}{\frac{1}{-\frac{1}{2}\left(\frac{1}{\sigma_{k}^{2}}+\sum\limits_{i=1}^{n_{j}}\frac{x_{ijk}^{2}}{\psi_{1}\left(y_{ij}\right)}\right)}}+C_{4}$ (34)
$\displaystyle\propto\frac{\left(w_{jk}-\frac{\frac{\mu_{k}}{\sigma_{k}^{2}}-\sum\limits_{i=1}^{n_{j}}\frac{\left(\sum\limits_{h=1,h\neq k}^{K}w_{jh}x_{ijh}-\psi_{0}\left(y_{ij}\right)\right)x_{ijk}}{\psi_{1}\left(y_{ij}\right)}}{\frac{1}{\sigma_{k}^{2}}+\sum\limits_{i=1}^{n_{j}}\frac{x_{ijk}^{2}}{\psi_{1}\left(y_{ij}\right)}}\right)^{2}}{-2\cdot\frac{1}{\frac{1}{\sigma_{k}^{2}}+\sum\limits_{i=1}^{n_{j}}\frac{x_{ijk}^{2}}{\psi_{1}\left(y_{ij}\right)}}}+C_{5}.$ (35)
Therefore, we obtain the mean and variance of the the approximate Gaussian
posterior
$\displaystyle\widehat{\mu}_{{{{k}}}}=$
$\displaystyle\,{\frac{{\frac{{{\mu_{k}}}}{{\sigma_{k}^{2}}}-\sum\limits_{i=1}^{{n_{j}}}{\frac{{\left({\sum\limits_{h=1,h\neq
k}^{K}{{w_{jh}}{x_{ijh}}}-{\psi_{0}}\left({{y_{ij}}}\right)}\right){x_{ijk}}}}{{{\psi_{1}}\left({{y_{ij}}}\right)}}}}}{{\frac{1}{{\sigma_{k}^{2}}}+\sum\limits_{i=1}^{{n_{j}}}{\frac{{x_{ijk}^{2}}}{{{\psi_{1}}\left({{y_{ij}}}\right)}}}}}}=\frac{{{\mu_{k}}+\sigma_{k}^{2}\sum\limits_{i=1}^{{n_{j}}}{\frac{{{x_{ijk}}}}{{{\psi_{1}}\left({{y_{ij}}}\right)}}\left({{\psi_{0}}\left({{y_{ij}}}\right)-\sum\limits_{h=1,h\neq
k}^{K}{{w_{jh}}{x_{ijh}}}}\right)}}}{{\sigma_{k}^{2}\sum\limits_{i=1}^{{n_{j}}}{\frac{{x_{ijk}^{2}}}{{{\psi_{1}}\left({{y_{ij}}}\right)}}+1}}},$
(36) $\displaystyle\,\,\forall j=1,\dots,J,$
$\displaystyle{\widehat{\sigma}_{{{{k}}}}}^{2}=$
$\displaystyle\frac{1}{{\frac{1}{{\sigma_{k}^{2}}}+\sum\limits_{i=1}^{{n_{j}}}{\frac{{x_{ijk}^{2}}}{{{\psi_{1}}\left({{y_{ij}}}\right)}}}}}=\frac{{\sigma_{k}^{2}}}{{\sigma_{k}^{2}\sum\limits_{i=1}^{{n_{j}}}{\frac{{x_{ijk}^{2}}}{{{\psi_{1}}\left({{y_{ij}}}\right)}}+1}}},\,\forall
j=1,\dots,J.$ (37)
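The updates (36)-(37) can be computed directly in closed form. The sketch below is illustrative only: the function and variable names are ours, and we assume $\psi_{0}$ and $\psi_{1}$ denote the digamma and trigamma functions, as the notation suggests.

```python
import numpy as np
from scipy.special import digamma, polygamma

def gaussian_approx_posterior(mu_k, sigma2_k, x_k, x_rest, w_rest, y):
    """Mean and variance of the approximate Gaussian posterior for w_jk,
    following Eqs. (36)-(37) for a single group j and component k.

    x_k:    (n_j,) column k of the covariates x_ijk
    x_rest: (n_j, K-1) remaining columns;  w_rest: (K-1,) remaining weights
    y:      (n_j,) responses y_ij (assumed positive)
    """
    psi0 = digamma(y)                # psi_0(y_ij)
    psi1 = polygamma(1, y)           # psi_1(y_ij)
    resid = psi0 - x_rest @ w_rest   # psi_0(y_ij) - sum_{h != k} w_jh x_ijh
    denom = sigma2_k * np.sum(x_k ** 2 / psi1) + 1.0
    mu_hat = (mu_k + sigma2_k * np.sum(x_k / psi1 * resid)) / denom  # Eq. (36)
    sigma2_hat = sigma2_k / denom                                    # Eq. (37)
    return mu_hat, sigma2_hat
```

Since the denominator exceeds one whenever the covariates are nonzero, the posterior variance is always strictly smaller than the prior variance $\sigma_{k}^{2}$.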
## Appendix B Metrics used for comparing samplers
$\bullet$ Effective sample size ($\hat{n}_{\mathrm{eff}}$)
For each scalar estimand $\psi$, the samples are labeled as
$\psi_{ij}\;(i=1,\dots,n;j=1,\dots,m)$, where $n$ is the number of samples in
each chain (sequence) and $m$ is the number of chains. The effective
sample size is calculated according to Ref. [11, Chapter 11] as
$\hat{n}_{\mathrm{eff}}=\frac{mn}{1+2\sum_{t=1}^{T}\widehat{\rho}_{t}},$ (38)
where the estimated auto-correlations $\widehat{\rho}_{t}$ are computed as
$\widehat{\rho}_{t}=1-\frac{{{V_{t}}}}{{2\,{\widehat{\rm var}}^{+}}}$ (39)
and $T$ is the first odd positive integer for which
$\widehat{\rho}_{T+1}+\widehat{\rho}_{T+2}$ is negative. In Eq. (39), $V_{t}$,
the variogram at each lag $t$, is given by
${V_{t}}=\frac{1}{{m\left({n-t}\right)}}{\sum\limits_{j=1}^{m}{\sum\limits_{i=t+1}^{n}{\left({{\psi_{i,j}}-{\psi_{i-t,j}}}\right)^{2}}}},$
(40)
and ${\widehat{\rm var}}^{+}$, the marginal posterior variance of the
estimand, is given by
${\widehat{\rm var}}^{+}=\frac{n-1}{mn}\sum\limits_{j=1}^{m}{s_{j}^{2}}+\frac{1}{m-1}\sum\limits_{j=1}^{m}{\left({\bar{\psi}_{\cdot j}-\bar{\psi}_{\cdot\cdot}}\right)^{2}},$
(41)
where
$s_{j}^{2}=\frac{1}{n-1}\sum\limits_{i=1}^{n}{\left({\psi_{ij}-\bar{\psi}_{\cdot j}}\right)^{2}},~{}\bar{\psi}_{\cdot j}=\frac{1}{n}\sum\limits_{i=1}^{n}{\psi_{ij}},~{}\textrm{and}~{}\bar{\psi}_{\cdot\cdot}=\frac{1}{m}\sum\limits_{j=1}^{m}{\bar{\psi}_{\cdot j}}.$
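The computation in Eqs. (38)-(41) can be sketched as follows; this is a minimal implementation, and the array layout (one chain per row) is an assumption.

```python
import numpy as np

def effective_sample_size(chains):
    """n_eff from Eqs. (38)-(41); `chains` has shape (m, n): m chains of n draws."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    grand_mean = chain_means.mean()
    s2 = chains.var(axis=1, ddof=1)                      # s_j^2
    # Eq. (41): marginal posterior variance of the estimand
    var_plus = (n - 1) / (m * n) * s2.sum() \
        + ((chain_means - grand_mean) ** 2).sum() / (m - 1)
    # Eqs. (39)-(40): autocorrelation at each lag t via the variogram V_t
    rho = np.empty(n - 1)
    for t in range(1, n):
        V_t = ((chains[:, t:] - chains[:, :n - t]) ** 2).sum() / (m * (n - t))
        rho[t - 1] = 1.0 - V_t / (2.0 * var_plus)
    # T: first odd t with rho_{t+1} + rho_{t+2} < 0 (rho[i] stores rho-hat_{i+1})
    T = n - 1
    for t in range(1, n - 2, 2):
        if rho[t] + rho[t + 1] < 0.0:
            T = t
            break
    return m * n / (1.0 + 2.0 * rho[:T].sum())           # Eq. (38)
```

For independent draws the truncation triggers at a small lag and the result is close to the total number of samples $mn$.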
$\bullet$ $R^{2}$
The $R^{2}$ of generic predicted values $\hat{y}_{i},i={1,\dots,N}$ of the
dependent variables $y_{i},i={1,\dots,N}$ is expressed as
$R^{2}=1-\frac{\sum_{i=1}^{N}({y_{i}}-\hat{y}_{i})^{2}}{\sum_{i=1}^{N}(y_{i}-\bar{y})^{2}},$
(42)
where $\bar{y}$ is the average value of $y_{i},i={1,\dots,N}$.
$\bullet$ RMSE
The RMSE of predicted values is given by
$\textrm{RMSE}=\sqrt{\frac{\sum_{i=1}^{N}\left(\hat{y}_{i}-y_{i}\right)^{2}}{N}}.$
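Both metrics translate directly into code; the following is a minimal sketch and the function names are ours.

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination, Eq. (42)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

def rmse(y, y_hat):
    """Root mean squared error of the predictions."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.sqrt(np.mean((y_hat - y) ** 2)))
```

Perfect predictions give $R^{2}=1$ and $\textrm{RMSE}=0$, while predicting the mean $\bar{y}$ everywhere gives $R^{2}=0$.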
# Determining the viscosity of the Navier-Stokes equations from observations
of finitely many modes
Animikh Biswas and Joshua Hudson
###### Abstract.
In this work, we ask and answer the question: when is the viscosity of a fluid
uniquely determined from spatially sparse measurements of its velocity field?
We pose the question mathematically as an optimization problem using the
determining map (the mapping of data to an approximation made via a nudging
algorithm) to define a loss functional, the minimization of which solves the
inverse problem of identifying the true viscosity given the measurement data.
We give explicit _a priori_ conditions for the well-posedness of this inverse
problem. In addition, we show that smallness of the loss functional implies
proximity to the true viscosity. We then present an algorithm for solving the
inverse problem and prove its convergence.
## 1\. Introduction
In many instances, the general form of a particular dynamical system is
commonly derived from well-accepted fundamental physical, biological or
epidemiological principles. However, frequently these systems have one or
multiple parameters and their values play a crucial role in its dynamical
evolution. For instance, in geophysical models such as the Navier–Stokes or
the Boussinesq systems, physical parameters such as the Reynolds, Raleigh or
Prandtl numbers determine whether or not the system approaches an equilibrium
or transitions to chaos, while in the SEIRD model of transmission of an
epidemic, the associated parameters determine whether the disease is
eradicated or becomes endemic. Other examples include reservoir modelling [11]
and determining the transmissivity coefficient of groundwater, which captures
its ability to move across an aquifer. It is a central parameter in these
models that must be estimated in some way [7, 19] in order to make accurate
predictions.
Due to its ubiquity in applications, there has been a recent surge of interest
in parameter determination and estimation problems _from finitely many partial
observations_ of the system under consideration. For instance, in case of a
geophysical model with a state variable $\theta(\bm{x},t)$, depending on a
spatial variable $\bm{x}$ and time $t$, such observations may take the form of
nodal values $\\{\theta(\bm{x}_{i},t)\\}_{i=1}^{N}$ where
$\\{\bm{x}_{i}\\}_{i=1}^{N}$ is a finite set of points in the spatial domain.
These observations can either be collected continuously in time or at a
discrete sequence of time points $t_{1}<t_{2}<\cdots$. Other examples of
commonly used observational data include finitely many Fourier coefficients
$\\{\widehat{\theta}(n,t):|n|\leq N\\}$ or volume elements
$\\{\bar{\theta}_{i}\\}_{i=1}^{N}$ where
$\bar{\theta}_{i}=\int_{\Omega_{i}}\theta(\bm{x},t)\,{\text{\rm d}}{\bm{x}}$ and $\Omega_{i}$ is a
subset of the whole spatial domain $\Omega$. For finite dimensional dynamical
systems, such partial observations constitute a time series of observations on
a subset of the variables. In the SEIRD model of epidemic transmission, data
may not be available for certain variables such as the exposed population due
to asymptomatic transmission, thus leading to partial observations in these
cases.
The above mentioned examples can be mathematically formulated as follows. Let
$\\{{\bm{u}}(t)\\}$ be a dynamical system parameterized by $\nu$ evolving on a
Hilbert space $\mathcal{H}$ according to the equation
${\displaystyle\frac{\text{d}{\bm{u}}}{\text{dt}}}=F_{\bm{\nu}}({\bm{u}}(t)),\>{\bm{u}}(\cdot)\in\mathcal{H},\>{\bm{\nu}}\in\mathbb{R}^{d}.$
(1.1)
The problem under consideration then is the estimation of the parameter
${\bm{\nu}}\in\mathbb{R}^{d}$ based on observations $\\{\cal
O({\bm{u}}(t))\\}$, where $\cal O:\cal H\longrightarrow\mathbb{R}^{N}$ is
an adequate linear (or nonlinear) operator and $t\in[t_{0},T],T\leq\infty$.
More generally, one may have observations on a discrete set of points $\\{\cal
O({\bm{u}}(t_{i}))\\}_{i=1}^{\infty}$ and the observations may be contaminated
with error. When ${\bm{\nu}}$ is known, recovering the state variable from the
observational data is commonly referred to as data assimilation. When
${\bm{\nu}}$ is unknown, we may view this as a parameter estimation problem
embedded in data assimilation. This has led to various computational
strategies, often ad hoc, using machine learning [14, 15], Bayesian estimation
methods related to various extensions of Kalman filtering employed in data
assimilation, or other on the fly numerical methods based on the nudging
scheme for data assimilation [4, 13, 1]. In this context, we note first that
the parameter identification problem is not in general well-posed even when
the state variable is fully observable, i.e. when the solution ${\bm{u}}$ in
(1.1) is known exactly. As described below, this can be seen in the multi-
parameter identification problem addressed in [13].
Let $\Omega=[0,2\pi]$ and
$\cal H=\\{f:\Omega\rightarrow\mathbb{R},\>f\ \text{is}\
2\pi\text{-periodic},\>\int_{\Omega}f=0\\}$
and $A=-\partial_{xx}$ on $\cal H\cap\mathbb{H}^{2}$. Consider now the problem
of determining the parameter vector $(\lambda_{1},\lambda_{2})$ from the
solution $u$ of the equation
$u_{t}-\lambda_{1}Au+\lambda_{2}A^{2}u=f,$ (1.2)
which is an illustrative example of the type of parameter identification
problems discussed in [13]. If one takes $u=u_{\delta}$ to be a (time-independent) eigenfunction of $A$ corresponding to the eigenvalue $\delta$,
and $f=c\delta u_{\delta}$, then $u_{\delta}$ solves the equation (1.2)
provided $(\lambda_{2}\delta-\lambda_{1})=c$. However, _one cannot determine
the parameters_ $\lambda_{1}$ and $\lambda_{2}$, even if one knows $u$
completely on the entire domain, $\Omega$; in a neighborhood of this solution,
the inverse problem of determining the parameter $(\lambda_{1},\lambda_{2})$
is ill-posed. Thus, the question whether or not the parameter ${\bm{\nu}}$ can
be determined in (1.1) from partial observations (i.e. whether or not the
parameter-to-data map is one-to-one) is even more delicate.
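This degeneracy is easy to verify numerically. The sketch below is illustrative only (the eigenfunction $\sin(3x)$ and all parameter values are arbitrary choices): any two parameter pairs for which the combination $\lambda_{2}\delta-\lambda_{1}$ is unchanged produce the same forcing in (1.2).

```python
import numpy as np

# Hypothetical setup: u_delta = sin(3x) is an eigenfunction of A = -d^2/dx^2
# on [0, 2*pi] with eigenvalue delta = 9.
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u = np.sin(3.0 * x)
delta = 9.0

def steady_lhs(lam1, lam2):
    # left-hand side of (1.2) for a time-independent eigenfunction,
    # using A u = delta * u and A^2 u = delta^2 * u
    return -lam1 * delta * u + lam2 * delta ** 2 * u

f1 = steady_lhs(1.0, 0.5)
f2 = steady_lhs(1.0 + 9.0, 0.5 + 1.0)  # shifted pair leaves lam2*delta - lam1 fixed
# f1 and f2 coincide: distinct parameter pairs generate identical forcing
```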
### 1.1. Overview of Results
Here, for the specific case of determining the viscosity parameter for the
two-dimensional Navier–Stokes equations (2D NSE), we consider the fundamental
questions inherent in rigorous justification of all of the parameter
estimation methods mentioned before, namely, when can one recover the
parameter from observational data and what is the regularity property of the
inverse map.
In addition, we define an algorithm for solving the inverse problem using the
determining map. Our algorithm can be seen as an improvement over the
algorithm originally presented in [4] due to the replacement of the _a
posteriori_ verifiable condition with an _a priori_ condition for the update
to have a nonzero denominator. Our rigorous convergence criterion is also an
improvement over the recent work in [12] (which was developed simultaneously
with this work) for the same reason; in addition, our criterion is independent
of the magnitude of the observational error, whereas the condition in [12]
requires more data as the observational error decreases.
In Section 2 we will establish some notation and review the basic theory
necessary for studying the Navier–Stokes equations. We will also briefly
review the nudging data assimilation algorithm originally presented in [2] for
the Navier–Stokes equations. Then, in Section 3, we define the determining
map, extending its domain from that of [3] to include viscosity as a variable,
and prove some of its regularity properties.
In Section 4, we frame the viscosity recovery inverse problem as a PDE
constrained optimization problem, and provide rigorous conditions for the
inverse problem to have a unique solution. Then, in Section 6, we prove that
the loss function controls the viscosity error, which we leverage in Section 7
to define an algorithm that converges to the true viscosity, solving the
inverse problem.
## 2\. Preliminaries
### 2.1. The Navier–Stokes equations and their functional form
The Navier–Stokes equations for a homogeneous, incompressible Newtonian fluid
in two dimensions are given by
$\frac{\partial{\bm{u}}}{\partial
t}-\nu\Delta{\bm{u}}+({\bm{u}}\cdot\nabla){\bm{u}}+\nabla
p=\bm{g},\quad\nabla\cdot{\bm{u}}=0,$ (2.1)
where ${\bm{u}}=(u_{1},u_{2})$ and $p$ are the unknowns and denote the
velocity vector field and the pressure, respectively, while $\nu>0$ and
$\bm{g}$ are given and denote the kinematic viscosity parameter and the body
forces applied to the fluid per unit mass, respectively. In this work, we
consider the Navier–Stokes equations defined on the spatial domain
$\Omega=[0,L]\times[0,L],$
with periodic boundary conditions.
Let $\hat{{\bm{u}}}(\bm{k}),\bm{k}=(k_{1},k_{2})\in\mathbb{Z}^{2}$ denote the
$\bm{k}$-th Fourier coefficient of ${\bm{u}}$. Given finitely many Fourier
coefficients $\\{\hat{\bm{u}}(\bm{k}):|\bm{k}|\leq N\\}$, we address the
question of when the viscosity $\nu$ can be determined from this finite
observational data and whether the data-to-viscosity map is Lipschitz.
We now proceed to recall the basic functional setting of the NSE, a systematic
development of which can be found in [6, 17, 18]. Let $\mathcal{V}$ be the
space of test functions, given by
$\mathcal{V}=\left\\{\bm{\varphi}:\mathbb{R}^{2}\to\mathbb{R}^{2}\,:\,\bm{\varphi}\,\text{
is an $L$-periodic trigonometric polynomial, }\right.\\\
\left.\nabla\cdot\bm{\varphi}=0,\int_{\Omega}\bm{\varphi}({\bm{x}}){\text{\rm
d}}{\bm{x}}=0\right\\}.$ (2.2)
We denote by $H$ and $V$ the closures of $\mathcal{V}$ with respect to the
norms in $(L^{2}(\Omega))^{2}$ and $(H^{1}(\Omega))^{2}$, respectively.
Moreover, we denote by $H^{\prime}$ and $V^{\prime}$ the dual spaces of $H$
and $V$, respectively. As usual, we identify $H$ with $H^{\prime}$, so that
$V\subseteq H\subseteq V^{\prime}$ with the injections being continuous and
compact, with each space being densely embedded in the following one. The
duality action between $V^{\prime}$ and $V$ is denoted by
$\langle\cdot,\cdot\rangle_{V^{\prime},V}$.
The inner product in $H$ is given by
$({\bm{u}}_{1},{\bm{u}}_{2})=\int_{\Omega}{\bm{u}}_{1}\cdot{\bm{u}}_{2}\,{\text{\rm
d}}{\bm{x}}\quad\forall{\bm{u}}_{1},{\bm{u}}_{2}\in H,$
with the corresponding norm denoted by
$|{\bm{u}}|:=\|{\bm{u}}\|_{L^{2}}=({\bm{u}},{\bm{u}})^{1/2}$. In $V$, we
consider the following inner product:
$(\\!({\bm{u}}_{1},{\bm{u}}_{2})\\!)=\int_{\Omega}\nabla{\bm{u}}_{1}:\nabla{\bm{u}}_{2}\,{\text{\rm
d}}{\bm{x}}\quad\forall{\bm{u}}_{1},{\bm{u}}_{2}\in V,$
where it is understood that $\nabla{\bm{u}}_{1}:\nabla{\bm{u}}_{2}$ denotes
the component-wise product between the tensors $\nabla{\bm{u}}_{1}$ and
$\nabla{\bm{u}}_{2}$. The corresponding norm in $V$ is given by
$\|{\bm{u}}\|:=\|\nabla{\bm{u}}\|_{L^{2}}=(\\!({\bm{u}},{\bm{u}})\\!)^{1/2}$.
The fact that $\|\cdot\|$ defines a norm on $V$ follows from the Poincaré
inequality, given in (2.10), below.
For every subspace $\Lambda\subset(L^{1}(\Omega))^{2}$, we denote
$\dot{\Lambda}_{\textnormal{per}}=\left\\{\bm{\varphi}\in\Lambda\,:\,\bm{\varphi}\mbox{
is $L$-periodic and }\int_{\Omega}\bm{\varphi}({\bm{x}}){\text{\rm
d}}{\bm{x}}=0\right\\}.$
Observe that $H$ is a closed subspace of $(\dot{L}^{2}(\Omega))^{2}$. Let
$P_{\sigma}$ denote the Helmholtz-Leray projector, which is defined as the
orthogonal projection from $(\dot{L}^{2}_{\textnormal{per}}(\Omega))^{2}$ onto
$H$. Applying $P_{\sigma}$ to (2.1), we obtain the following equivalent
functional formulation of the Navier–Stokes equations.
###### System 2.1 (Navier–Stokes in functional form).
$\frac{{\text{\rm d}}{\bm{u}}}{{\text{\rm d}}t}+\nu
A{\bm{u}}+B({\bm{u}},{\bm{u}})=\bm{f}\mbox{ in }V^{\prime},$ (2.3)
where $\bm{f}=P_{\sigma}\bm{g}$.
The bilinear operator $B:V\times V\to V^{\prime}$ is defined as the continuous
extension of
$B({\bm{u}},{\bm{v}})=P_{\sigma}[({\bm{u}}\cdot\nabla){\bm{v}}]\quad\forall{\bm{u}},{\bm{v}}\in\mathcal{V},$
and $A:D(A)\subset V\to V^{\prime}$, the Stokes operator, is the continuous
extension of
$A{\bm{u}}=-P_{\sigma}\Delta{\bm{u}}\quad\forall{\bm{u}}\in\mathcal{V}.$
In fact, in the case of periodic boundary conditions, we have $A=-\Delta$.
We recall that $D(A)=V\cap(\dot{H}^{2}_{\textnormal{per}}(\Omega))^{2}$ and
that $A$ is a positive and self-adjoint operator with compact inverse.
Therefore, the space $H$ admits an orthonormal basis
$\\{\bm{{\bm{\phi}}}_{j}\\}_{j=1}^{\infty}$ of eigenfunctions of $A$
corresponding to a non-decreasing sequence of eigenvalues
$\\{\lambda_{j}\\}_{j=1}^{\infty}$, where
$\lambda_{j}\in\\{\kappa_{0}^{2}|{\bm{k}}|^{2},{\bm{k}}\in\mathbb{Z}^{2}\setminus\\{0\\}\\}$
and $\lambda_{1}:=\kappa_{0}^{2}=(2\pi/L)^{2}$. For a periodic function
${\bm{\phi}}$ on $\Omega$, we recall Parseval's identity:
$|{\bm{\phi}}|^{2}=L^{2}\sum_{\bm{k}}|\hat{{\bm{\phi}}}_{\bm{k}}|^{2},$
where $\hat{{\bm{\phi}}}_{\bm{k}}$ denotes the Fourier coefficient
corresponding to ${\bm{k}}=(k_{1},k_{2})\in\mathbb{Z}^{2}$. Furthermore,
${\bm{\phi}}\in L^{2}(\Omega)$ satisfies the mean-free condition
$\int_{\Omega}{\bm{\phi}}=0$ if and only if $\hat{{\bm{\phi}}}_{0}=0$.
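As a quick numerical sanity check (an illustrative sketch; the grid size and test field are arbitrary), the identity can be verified with a discrete Fourier transform, normalizing so that the transform returns the Fourier coefficients $\hat{{\bm{\phi}}}_{\bm{k}}$:

```python
import numpy as np

# Discrete check of Parseval on [0, L)^2: |phi|^2 = L^2 * sum_k |phi_hat_k|^2.
L, M = 2.0, 64
dx = L / M
xs = np.arange(M) * dx
X1, X2 = np.meshgrid(xs, xs, indexing="ij")
k0 = 2.0 * np.pi / L
phi = np.sin(k0 * X1) + 0.3 * np.cos(2.0 * k0 * X2)  # arbitrary band-limited field

lhs = np.sum(phi ** 2) * dx * dx       # |phi|^2, the integral of phi^2
phi_hat = np.fft.fft2(phi) / M ** 2    # Fourier coefficients phi_hat_k
rhs = L ** 2 * np.sum(np.abs(phi_hat) ** 2)
# lhs and rhs agree to machine precision for a band-limited field
```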
Given a normed space $\mathcal{X}$ with norm $\|\cdot\|_{\mathcal{X}}$, let
$C_{b}(\mathcal{X})$ denote the continuous bounded mappings from $\mathbb{R}$
to $\mathcal{X}$, and define the norm
$\|{\bm{\phi}}\|_{C_{b}(\mathcal{X})}=\sup_{t\in\mathbb{R}}\|{\bm{\phi}}(t)\|_{\mathcal{X}}.$
In particular, we have the norms $\|\cdot\|_{C_{b}(H)}$ and
$\|\cdot\|_{C_{b}(V)}.$
### 2.2. Global attractor for the Navier–Stokes equations
It is well-known that, given ${\bm{u}}_{0}\in H$, there exists a unique
solution ${\bm{u}}$ of (2.3) on $[0,\infty)$ such that
${\bm{u}}(0)={\bm{u}}_{0}$ and
${\bm{u}}\in C([0,\infty);H)\cap L_{\textnormal{loc}}^{2}([0,\infty);V)\
\mbox{and}\ {\displaystyle\frac{\text{d}{\bm{u}}}{\text{dt}}}\in
L^{2}_{loc}([0,\infty);V^{\prime}).$ (2.4)
Moreover, we also have ${\bm{u}}\in C((0,\infty);D(A))$ (see, e.g., [6,
Theorem 12.1]). Therefore, equation (2.3) has an associated semigroup
$\\{S_{\nu}(t)\\}_{t\geq 0}$, where, for each $t\geq 0$, $S_{\nu}(t):H\to H$
is the mapping given by
$S_{\nu}(t){\bm{u}}_{0}={\bm{u}}_{\nu}(t),$ (2.5)
with ${\bm{u}}_{\nu}$ being the unique solution of (2.3) on $[0,\infty)$
satisfying ${\bm{u}}_{\nu}(0)={\bm{u}}_{0}$ and (2.4). For simplicity, we will
drop the subscript $\nu$ as it will be understood from context and simply
write $S(t)$ and ${\bm{u}}(t)$ instead.
Recall that a bounded set $\mathcal{B}\subset H$ is called _absorbing_ with
respect to $\\{S(t)\\}_{t\geq 0}$ if, for any bounded subset $B\subset H$,
there exists a time $T=T(B)$ such that $S(t)B\subset\mathcal{B}$ for all
$t\geq T$. The existence of a bounded absorbing set for (2.3) is a well-known
result; therefore, a global attractor $\mathcal{A}_{\nu}$ of (2.3) exists, and
is uniquely defined by any of the equivalent conditions given below.
###### Definition 2.2 (Global Attractor).
Let $\mathcal{B}\subset H$ be a bounded absorbing set with respect to
$\\{S(t)\\}_{t\geq 0}$. Then the global attractor, $\mathcal{A}_{\nu}$, exists
as given by any of the following equivalent definitions:
1. (1)
$\mathcal{A}_{\nu}=\bigcap_{t\geq 0}S(t)\mathcal{B}.$
2. (2)
$\mathcal{A}_{\nu}$ is the largest compact subset of $H$ which is invariant
under the action of the semigroup $\\{S(t)\\}_{t\geq 0}$, i.e.,
$S(t)\mathcal{A}_{\nu}=\mathcal{A}_{\nu}$ for all $t\geq 0$.
3. (3)
$\mathcal{A}_{\nu}$ is the minimal set that attracts all bounded sets.
4. (4)
$\mathcal{A}_{\nu}$ is the set of all points in $H$ through which there exists
a globally bounded trajectory ${\bm{u}}(t)$, $t\in\mathbb{R}$, with
$\sup_{t\in\mathbb{R}}\|{\bm{u}}(t)\|_{L^{2}}<\infty$.
Also, recall the definition of the (dimensionless) Grashof number, given by
$G_{\nu}=\frac{\|\bm{f}\|_{L^{2}}}{(\nu\kappa_{0})^{2}}.$ (2.6)
In the periodic case, the following bounds hold on the global attractor,
$\mathcal{A}_{\nu}$:
$\|{\bm{u}}\|_{L^{2}}\leq\nu
G_{\nu},\quad\|\nabla{\bm{u}}\|_{L^{2}}\leq\nu\kappa_{0}G_{\nu}\quad\forall{\bm{u}}\in\mathcal{A}_{\nu},$
(2.7)
and
$\|A{\bm{u}}\|_{L^{2}}\leq
c_{2}\nu\kappa_{0}^{2}(G_{\nu}+c_{0}^{-2})^{3}\quad\forall{\bm{u}}\in\mathcal{A}_{\nu},$
(2.8)
where $c_{0}$ is the constant given below in (2.16), and
$c_{2}=2137\,c_{0}^{4}$. The proof of (2.7) can be found in any of the
references listed above ([6, 17, 18]), and the proof of (2.8) is given in [10,
Lemma 4.4].
In particular, we will make use of the following fact which follows directly
from (2.7) and (2.8): if ${\bm{u}}\in\mathcal{A}_{\nu}$, then ${\bm{u}}\in
C_{b}(H)$ and ${\bm{u}}\in C_{b}(V)$.
### 2.3. Data Assimilation via Nudging
The data assimilation problem we consider is defined as follows: given
measurements of a reference solution of (2.3) on the attractor, and a guess
for the fluid viscosity, construct an approximation of the solution. This kind
of problem is typically known as data assimilation, but can also be thought of
as an inverse problem (mapping measurements of the velocity field back to the
full velocity field) or a regression problem (interpolating the measurements
in a way that generalizes to the entire space and time domain).
There are many approaches to solving data assimilation problems. Extensions of
the Kalman filter to nonlinear problems, such as the extended Kalman filter
and the ensemble Kalman filter, are common, as are 3DVAR and 4DVAR and, more
recently, machine learning approaches such as Physics Informed Neural Networks
[14, 15]. The approach we consider, often called nudging, benefits from a
rigorous mathematical framework for establishing convergence results,
developed since it was studied by Azouani, Olson and Titi in 2014 [2].
The idea can be simply stated as follows: given data collected on a variable,
$y$, and a method of interpolating those data, we have an interpolant operator
$I_{h}$, where $h$ is a measure of the resolution of the data and $I_{h}(y)$
is the interpolation of the data. If $y$ satisfies a differential equation
$\dot{y}=F(y)$, then the nudging algorithm is to solve
$\dot{\tilde{y}}=F(\tilde{y})+\mu\left(I_{h}(y)-I_{h}(\tilde{y})\right)$
for an approximation, $\tilde{y}$. The algorithm is successful when $\tilde{y}$
converges to $y$.
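As an illustrative sketch (not part of the analysis in [2]; the Lorenz-63 system, the gain $\mu=50$, and the Euler step are arbitrary choices), the algorithm can be run on a toy system in which $I_{h}$ observes only the first component:

```python
import numpy as np

# Toy nudging run: Lorenz-63 stands in for the dynamics F, and the
# interpolant I_h observes only the first component of the state.
def F(s):
    x, y, z = s
    return np.array([10.0 * (y - x), x * (28.0 - z) - y, x * y - (8.0 / 3.0) * z])

dt, mu = 1e-3, 50.0
y_ref = np.array([1.0, 1.0, 1.0])      # reference solution supplying the data
y_tilde = np.array([5.0, -5.0, 20.0])  # nudged approximation, wrong initial state
for _ in range(200_000):               # forward Euler up to t = 200
    feedback = np.array([mu * (y_ref[0] - y_tilde[0]), 0.0, 0.0])  # mu*(I_h(y)-I_h(y~))
    y_ref = y_ref + dt * F(y_ref)
    y_tilde = y_tilde + dt * (F(y_tilde) + feedback)
err = np.max(np.abs(y_ref - y_tilde))  # the two trajectories synchronize
```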
In [2], the authors prove results for interpolant operators satisfying one of
two approximation criteria. The first kind of interpolant operator (and most
strict) has the following property:
###### Definition 2.3 (Type 1 Interpolant Operator).
$|{\bm{\phi}}-I_{h}({\bm{\phi}})|\leq
c_{1}h|\nabla{\bm{\phi}}|\quad\forall{\bm{\phi}}\in(\dot{H}^{1}(\Omega))^{2}$
In this work, we focus on the case where the measurements are exact values for
finitely many spectral coefficients, which are then interpolated to give
approximations in the range of the modal projection, $P_{N}.$ Specifically,
for any $N\geq 1$, $P_{N}$ is defined by:
$\widehat{P_{N}({\bm{\phi}})}_{\bm{k}}=\begin{cases}\widehat{{\bm{\phi}}}_{\bm{k}},\quad|{\bm{k}}|<N\\\
0,\quad|{\bm{k}}|\geq N\end{cases}\quad\forall{\bm{k}}\in\mathbb{Z}^{2}.$
The modal projection is an example of a Type 1 interpolant operator, as can
easily be verified.
$|{\bm{\phi}}-P_{N}{\bm{\phi}}|^{2}\leq\frac{1}{\kappa_{0}^{2}}\frac{1}{N^{2}}\|{\bm{\phi}}\|^{2}$
(2.9)
In particular, for any ${\bm{\phi}}\in H$ (which by the mean-free condition
implies $P_{1}{{\bm{\phi}}}=0$) we have the Poincaré inequality
$|{\bm{\phi}}|=|{\bm{\phi}}-P_{1}{{\bm{\phi}}}|\leq\frac{1}{\kappa_{0}}\|{\bm{\phi}}\|.$
(2.10)
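The projection $P_{N}$ and the tail bound (2.9) can be checked directly on Fourier coefficients. This sketch takes $\kappa_{0}=1$ (i.e. $L=2\pi$), and the mode amplitudes are synthetic:

```python
import numpy as np

# Modal projection P_N acting on a synthetic field of Fourier coefficients,
# with a numerical check of the tail bound (2.9) for kappa_0 = 1.
M = 64
k = np.fft.fftfreq(M, d=1.0 / M)          # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX ** 2 + KY ** 2

rng = np.random.default_rng(1)
phi_hat = rng.standard_normal((M, M)) / (1.0 + K2)  # decaying mode amplitudes
phi_hat[0, 0] = 0.0                                 # mean-free: zero mode vanishes

N = 8
keep = np.sqrt(K2) < N                    # P_N keeps modes with |k| < N
tail = np.where(keep, 0.0, phi_hat)       # (I - P_N) phi

l2_tail = np.sum(tail ** 2)               # ~ |phi - P_N phi|^2
h1_norm = np.sum(K2 * phi_hat ** 2)       # ~ ||phi||^2 (H^1 seminorm)
# every discarded mode has |k| >= N, so l2_tail <= h1_norm / N^2, as in (2.9)
```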
We explicitly state the nudging data assimilation algorithm for our current
setting in System 2.4.
###### System 2.4 (Navier–Stokes with data feedback).
Given ${\bm{\phi}}\in C_{b}(V)$ (the data) and an approximate viscosity,
$\gamma$, find ${\bm{v}}\in C_{b}(V)$ satisfying
$\frac{d}{dt}{\bm{v}}+\gamma
A{\bm{v}}+B\left({\bm{v}},{\bm{v}}\right)=\bm{f}+\mu({\bm{\phi}}-P_{N}{\bm{v}}).$
(2.11)
When solving the data assimilation problem, we typically have
${\bm{\phi}}=P_{N}{\bm{u}}$, where ${\bm{u}}\in\mathcal{A}_{\nu}$. However,
(2.11) remains valid in the general setting with ${\bm{\phi}}\in C_{b}(V)$,
and is the basis for the definition of the determining map in Section 3.
### 2.4. Standard Inequalities
We now review several standard inequalities which are used throughout the
subsequent sections. We often use Young’s inequality to establish estimates,
$a^{p}b^{1-p}\leq pa+(1-p)b,\quad\forall a,b\geq 0,\;p\in[0,1].$ (2.12)
In particular, we frequently use Young’s inequality in the following form:
$ab\leq\frac{1}{2\epsilon}a^{2}+\frac{\epsilon}{2}b^{2},\quad\forall a,b\geq
0,\;\epsilon>0.$ (2.13)
We also make use of Jensen’s inequality,
$\left(\frac{1}{L}\int_{0}^{L}{\bm{\phi}}(x)dx\right)^{2}\leq\frac{1}{L}\int_{0}^{L}{\bm{\phi}}^{2}(x)dx$
(2.14)
We use Ladyzhenskaya’s inequality (for periodic boundary conditions) in the
following form to obtain estimates of the nonlinear term in (2.11) (a proof is
provided in the Appendix):
$\|{\bm{\phi}}\|_{L^{4}(\Omega)}\leq|{\bm{\phi}}|^{\frac{1}{2}}\left(\tfrac{1}{L}|{\bm{\phi}}|+\|{\bm{\phi}}\|\right)^{\frac{1}{2}}$
(2.15)
###### Remark 2.5.
Note that, using (2.10), we can rewrite (2.15) in the more usual form
$\|{\bm{\phi}}\|_{L^{4}(\Omega)}\leq
c_{0}|{\bm{\phi}}|^{\frac{1}{2}}\|{\bm{\phi}}\|^{\frac{1}{2}}$ (2.16)
where $c_{0}=\sqrt{1+\tfrac{1}{2\pi}}$.
Using (2.15), we have the following estimate:
$|b\left({\bm{u}},{\bm{v}},{\bm{w}}\right)|\leq\|{\bm{u}}\|_{L^{4}(\Omega)}|\nabla{\bm{v}}|\|{\bm{w}}\|_{L^{4}(\Omega)}\\\
\leq|{\bm{u}}|^{\frac{1}{2}}\left(\tfrac{1}{L}|{\bm{u}}|+\|{\bm{u}}\|\right)^{\frac{1}{2}}\|{\bm{v}}\||{\bm{w}}|^{\frac{1}{2}}\left(\tfrac{1}{L}|{\bm{w}}|+\|{\bm{w}}\|\right)^{\frac{1}{2}}$
(2.17)
where
$b\left({\bm{u}},{\bm{v}},{\bm{w}}\right):=\left(B\left({\bm{u}},{\bm{v}}\right),{{\bm{w}}}\right)$.
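These inequalities are easy to spot-check numerically; the following sketch is purely illustrative, with arbitrary random inputs:

```python
import numpy as np

# Spot-check of Young's inequality in the forms (2.12) and (2.13).
rng = np.random.default_rng(0)
a, b = rng.uniform(0.1, 10.0, size=(2, 1000))
p = rng.uniform(0.0, 1.0, size=1000)
eps = 0.7  # any positive value works in (2.13)

young = a ** p * b ** (1 - p) <= p * a + (1 - p) * b + 1e-12        # Eq. (2.12)
young_eps = a * b <= a ** 2 / (2 * eps) + eps * b ** 2 / 2 + 1e-12  # Eq. (2.13)
# both hold for every sample, as weighted AM-GM guarantees
```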
## 3\. Determining Map
The determining map was first defined in [8, 9] as the mapping of data to the
corresponding solution of (2.11). It was exploited in [3] for statistical data
assimilation. We extend the definition of the determining map to include
viscosity as an input and establish its Lipschitz property in all its
arguments, which plays a pivotal role in establishing the results in Section 4
and Section 7.
###### Definition 3.1 (Determining Map).
Fix an arbitrary lower bound $\nu_{0}>0$ for the viscosity, and fix an
arbitrary radius $R>0$ for a ball $B_{R}(0)\subset C_{b}(V)$. The determining
map, $\mathbb{W}:(\nu_{0},\infty)\times B_{R}(0)\to C_{b}(V)$, is the mapping
of viscosity and data to the corresponding solution of (2.11) on the
attractor.
For example, $\mathbb{W}(\tilde{\nu},P_{N}{\bm{u}})$ is the solution of (2.11)
with $\gamma=\tilde{\nu}$ and ${\bm{\phi}}=P_{N}{\bm{u}}$. However, we don’t
in general require that the data ${\bm{\phi}}$ come from a solution of (2.3).
We begin by deriving sufficient conditions for the determining map to be a
well-defined Lipschitz continuous mapping. These conditions are placed on
$\mu$ and $N$. Note that in contrast to the usual analysis taken wherein $N$
depends on $\mu$, we derive our results in such a way that the lower bound for
$N$ comes directly from the data, forcing, and $\nu_{0}$, and $\mu$ is chosen
to be larger than a lower bound which grows with $N^{2}$. Specifically, we
parameterize $\mu$ in terms of a (non-dimensional) constant $\mu_{0}>0$ as
follows:
$\mu=\mu_{0}\nu_{0}\kappa_{0}^{2}N^{2}.$ (3.1)
###### Lemma 3.2.
Fix $\nu_{0}\in\mathbb{R}_{+}$ and suppose that
$\mu_{0}\geq 1\quad\text{i.e.}\quad\mu\geq\nu_{0}\kappa_{0}^{2}N^{2}.$ (3.2)
For any $\bm{f},{\bm{\phi}}\in C_{b}(V),$ and for any $1\leq N\in\mathbb{R},$
if
$\gamma\in[\nu_{0},\infty)$
and ${\bm{v}}$ is a corresponding solution of (2.11), then the following
bounds are satisfied:
$\|{\bm{v}}\|_{C_{b}(H)}^{2}\leq\frac{2}{(\nu_{0}\kappa_{0}^{2}N^{2})^{2}}\|\bm{f}\|_{C_{b}(H)}^{2}+2\mu_{0}^{2}\|{\bm{\phi}}\|_{C_{b}(H)}^{2}\leq(M_{H}({\bm{\phi}}))^{2},$
(3.3)
and
$\|{\bm{v}}\|_{C_{b}(V)}^{2}\leq\frac{2}{(\nu_{0}\kappa_{0}^{2}N^{2})^{2}}\|\bm{f}\|_{C_{b}(V)}^{2}+2\mu_{0}^{2}\|{\bm{\phi}}\|_{C_{b}(V)}^{2}\leq(M_{V}({\bm{\phi}}))^{2}.$
(3.4)
Here, given fixed $\bm{f}\in C_{b}(V),$ $\nu_{0},\kappa_{0}>0,$ and
$\mu_{0}\geq 1$, we define
$M_{H}({\bm{\phi}})=\sqrt{2}\sqrt{\frac{1}{(\nu_{0}\kappa_{0}^{2})^{2}}\|\bm{f}\|_{C_{b}(H)}^{2}+\mu_{0}^{2}\|{\bm{\phi}}\|_{C_{b}(H)}^{2}}$
(3.5)
and
$M_{V}({\bm{\phi}})=\sqrt{2}\sqrt{\frac{1}{(\nu_{0}\kappa_{0}^{2})^{2}}\|\bm{f}\|_{C_{b}(V)}^{2}+\mu_{0}^{2}\|{\bm{\phi}}\|_{C_{b}(V)}^{2}}.$
(3.6)
###### Proof.
Let ${\bm{v}}$ be a solution of (2.11). Then taking the inner product of
(2.11) with ${\bm{v}}$, we have
$\frac{1}{2}\frac{d}{dt}|{\bm{v}}|^{2}+\gamma\|{\bm{v}}\|^{2}=\left<\bm{f},{\bm{v}}\right>+\mu\left<{\bm{\phi}},{\bm{v}}\right>-\mu|P_{N}{\bm{v}}|^{2}.$
Therefore, for any choice of $c_{1},c_{2}>0$,
$\frac{1}{2}\frac{d}{dt}|{\bm{v}}|^{2}+\gamma\|{\bm{v}}\|^{2}+\mu|P_{N}{\bm{v}}|^{2}\leq|\bm{f}||{\bm{v}}|+\mu|{\bm{\phi}}||{\bm{v}}|\leq\frac{1}{2c_{1}}|\bm{f}|^{2}+\frac{\mu}{2c_{2}}|{\bm{\phi}}|^{2}+\frac{c_{1}+\mu
c_{2}}{2}|{\bm{v}}|^{2}\\\
\leq\frac{1}{2c_{1}}|\bm{f}|^{2}+\frac{\mu}{2c_{2}}|{\bm{\phi}}|^{2}+\frac{c_{1}+\mu
c_{2}}{2}|P_{N}{\bm{v}}|^{2}+\frac{1}{\kappa_{0}^{2}}\frac{c_{1}+\mu
c_{2}}{2N^{2}}\|{\bm{v}}\|^{2},$
where to obtain the last inequality, we used
$|{\bm{v}}|^{2}=|P_{N}{\bm{v}}|^{2}+|(I-P_{N}){\bm{v}}|^{2}$ and (2.9).
Collecting terms, we have
$\frac{1}{2}\frac{d}{dt}|{\bm{v}}|^{2}+\left(\gamma-\frac{1}{\kappa_{0}^{2}}\frac{c_{1}+\mu
c_{2}}{2N^{2}}\right)\|{\bm{v}}\|^{2}+\left(\mu-\frac{c_{1}+\mu
c_{2}}{2}\right)|P_{N}{\bm{v}}|^{2}\leq\frac{1}{2c_{1}}|\bm{f}|^{2}+\frac{\mu}{2c_{2}}|{\bm{\phi}}|^{2}.$
Let $\epsilon,\delta\in(0,1),$ and define
$r=\frac{\nu_{0}\kappa_{0}^{2}N^{2}}{\mu}.$
Choosing
$c_{1}=2\mu r\delta(1-\epsilon)\quad\text{ and }\quad c_{2}=2r(1-\delta),$
we have
$c_{1}+\mu c_{2}=2\mu r(1-\delta\epsilon)$
and the previous differential inequality becomes
$\frac{1}{2}\frac{d}{dt}|{\bm{v}}|^{2}+\left(\gamma-\nu_{0}(1-\delta\epsilon)\right)\|{\bm{v}}\|^{2}+\left(\mu-\nu_{0}\kappa_{0}^{2}N^{2}(1-\delta\epsilon)\right)|P_{N}{\bm{v}}|^{2}\leq\frac{1}{2c_{1}}|\bm{f}|^{2}+\frac{\mu}{2c_{2}}|{\bm{\phi}}|^{2}.$
(3.7)
Observe that
$\|{\bm{v}}\|^{2}=\|P_{N}{\bm{v}}\|^{2}+\|(I-P_{N}){\bm{v}}\|^{2}\geq\kappa_{0}^{2}N^{2}|(I-P_{N}){\bm{v}}|^{2},$
where to obtain the last inequality we used (2.9). Consequently,
$\frac{1}{2}\frac{d}{dt}|{\bm{v}}|^{2}+\nu_{0}\delta\epsilon\|{\bm{v}}\|^{2}+\left(\mu-\nu_{0}\kappa_{0}^{2}N^{2}(1-\delta\epsilon)\right)|P_{N}{\bm{v}}|^{2}\\\
\geq\frac{1}{2}\frac{d}{dt}|{\bm{v}}|^{2}+\nu_{0}\kappa_{0}^{2}N^{2}\delta\epsilon|{\bm{v}}|^{2}+\left(\mu-\nu_{0}\kappa_{0}^{2}N^{2}\right)|P_{N}{\bm{v}}|^{2}.$
(3.8)
Note that by imposing the constraint $\epsilon,\delta\in(0,1),$ we have
ensured that $\gamma-\nu_{0}(1-\delta\epsilon)\geq\nu_{0}\delta\epsilon>0.$
Using (3.7), (3.8) and (3.2), and by dropping the last term in (3.8), we
obtain
$\frac{d}{dt}|{\bm{v}}|^{2}+\beta|{\bm{v}}|^{2}\leq\frac{1}{c_{1}}|\bm{f}|^{2}+\frac{\mu}{c_{2}}|{\bm{\phi}}|^{2},$
with $\beta:=2\nu_{0}\kappa_{0}^{2}N^{2}\delta\epsilon.$
So, by Grönwall's inequality,
$|{\bm{v}}(t)|^{2}\leq
e^{-\beta(t-s)}|{\bm{v}}(s)|^{2}+\int_{s}^{t}e^{-\beta(t-\tau)}\left(\frac{1}{c_{1}}|\bm{f}(\tau)|^{2}+\frac{\mu}{c_{2}}|{\bm{\phi}}(\tau)|^{2}\right)d\tau\\\
\leq
e^{-\beta(t-s)}|{\bm{v}}(s)|^{2}+\left(\frac{1}{c_{1}}\|\bm{f}\|_{C_{b}(H)}^{2}+\frac{\mu}{c_{2}}\|{\bm{\phi}}\|_{C_{b}(H)}^{2}\right)\frac{1}{\beta}\left(1-e^{-\beta(t-s)}\right)$
We get the stated result by choosing $\epsilon=\frac{2}{3}$ and
$\delta=\frac{3}{4}$ (so $c_{1}=\frac{1}{2}\nu_{0}\kappa_{0}^{2}N^{2}$ and
$c_{2}=\frac{1}{2}\frac{\nu_{0}\kappa_{0}^{2}N^{2}}{\mu}$), and taking the
limit as $s\to-\infty.$
Taking the inner product of (2.11) with $A{\bm{v}}$ and proceeding similarly,
but this time using the enstrophy cancellation property
$(B({\bm{v}},{\bm{v}}),A{\bm{v}})=0$, we obtain (3.4). ∎
We now derive the main Lipschitz continuity property of the determining map
which is essential to the proofs of the subsequent results.
###### Lemma 3.3.
Let ${\bm{v}}_{1}$ and ${\bm{v}}_{2}$ be solutions of (2.11) with viscosity
$\gamma_{1}\geq\nu_{0}$ and $\gamma_{2}\geq\nu_{0}$ and data ${\bm{\phi}}_{1}$
and ${\bm{\phi}}_{2}$ respectively. Let $p\in[0,1]$ and set
$\alpha=\frac{\gamma_{1}+\gamma_{2}}{2}\left(1-\frac{1}{2}\left(2\frac{|\gamma_{1}-\gamma_{2}|}{\gamma_{1}+\gamma_{2}}\right)^{p}\right).$
Then, $\alpha\in\left[\frac{1}{2}\nu_{0},\bar{\gamma}\right]$ where
$\bar{\gamma}:=\frac{1}{2}(\gamma_{1}+\gamma_{2})$. Moreover, if
$\mu_{0}\geq\max\left\\{1,\frac{\alpha}{\nu_{0}}\right\\},$ (3.9)
and
$N\geq\frac{4}{\alpha\kappa_{0}}M_{V}({\bm{\phi}}_{i}),\quad i=1,2,$ (3.10)
then
$\|{\bm{v}}_{1}-{\bm{v}}_{2}\|_{C_{b}(H)}^{2}\leq\frac{5}{2}\frac{|\gamma_{1}-\gamma_{2}|^{2-p}}{\left(\frac{\gamma_{1}+\gamma_{2}}{2}\right)^{1-p}}\frac{\|\frac{1}{2}({\bm{v}}_{1}+{\bm{v}}_{2})\|_{C_{b}(V)}^{2}}{\alpha\kappa_{0}^{2}N^{2}}+\frac{5}{2}\mu_{0}^{2}\left(\frac{\nu_{0}}{\alpha}\right)^{2}\|{\bm{\phi}}_{1}-{\bm{\phi}}_{2}\|_{C_{b}(H)}^{2}.$
(3.11)
###### Proof.
Observe that
$\alpha=\bar{\gamma}-\frac{1}{2}\bar{\gamma}^{1-p}|\gamma_{1}-\gamma_{2}|^{p}=\bar{\gamma}\left(1-\frac{1}{2}\left(2\frac{|\gamma_{1}-\gamma_{2}|}{\gamma_{1}+\gamma_{2}}\right)^{p}\right),$
and note that for all $p\in[0,1],$
$\bar{\gamma}\geq\alpha\geq\bar{\gamma}\left(1-\frac{1}{2}\max\left\\{1,2\frac{|\gamma_{1}-\gamma_{2}|}{\gamma_{1}+\gamma_{2}}\right\\}\right)=\min\left\\{\frac{1}{2}\bar{\gamma},\min\left\\{\gamma_{1},\gamma_{2}\right\\}\right\\}\geq\frac{1}{2}\nu_{0}.$
(3.12)
Thus, $\alpha\in\left[\frac{1}{2}\nu_{0},\bar{\gamma}\right].$
Let ${\bm{v}}_{1}$ and ${\bm{v}}_{2}$ be solutions of (2.11) with
viscosities $\gamma_{1}$ and $\gamma_{2}$ and data ${\bm{\phi}}_{1}$ and
${\bm{\phi}}_{2}$, respectively. We now analyze the difference
${\bm{w}}:={\bm{v}}_{1}-{\bm{v}}_{2},$ and for convenience of presentation, we
define the average quantities
$\bar{{\bm{v}}}=\frac{1}{2}({\bm{v}}_{1}+{\bm{v}}_{2})$ and
$\bar{\gamma}=\frac{1}{2}(\gamma_{1}+\gamma_{2}).$
Writing the evolution equation for ${\bm{w}}$ using (2.11),
$\partial_{t}{\bm{w}}+\bar{\gamma}A{\bm{w}}+(\gamma_{1}-\gamma_{2})A\bar{{\bm{v}}}+B\left(\bar{{\bm{v}}},{\bm{w}}\right)+B\left({\bm{w}},\bar{{\bm{v}}}\right)=\mu({\bm{\phi}}_{1}-{\bm{\phi}}_{2})-\mu
P_{N}{\bm{w}}$
and taking the inner product with ${\bm{w}}$ and using (2.17), we have
$\frac{1}{2}\frac{d}{dt}|{\bm{w}}|^{2}+\bar{\gamma}\|{\bm{w}}\|^{2}+\mu|P_{N}{\bm{w}}|^{2}=-(\gamma_{1}-\gamma_{2})\left<A\bar{{\bm{v}}},{\bm{w}}\right>+\mu\left<({\bm{\phi}}_{1}-{\bm{\phi}}_{2}),{\bm{w}}\right>-b\left({\bm{w}},\bar{{\bm{v}}},{\bm{w}}\right)\\\
\leq|\gamma_{1}-\gamma_{2}|\|\bar{{\bm{v}}}\|\|{\bm{w}}\|+\mu|{\bm{\phi}}_{1}-{\bm{\phi}}_{2}||{\bm{w}}|+\|\bar{{\bm{v}}}\||{\bm{w}}|(\tfrac{1}{L}|{\bm{w}}|+\|{\bm{w}}\|)$
Using (2.13), for any $p\in[0,1]$, we can write
$|\gamma_{1}-\gamma_{2}|\|\bar{{\bm{v}}}\|\|{\bm{w}}\|\leq\frac{1}{2}\bar{\gamma}^{p-1}|\gamma_{1}-\gamma_{2}|^{2-p}\|\bar{{\bm{v}}}\|^{2}+\frac{1}{2}\bar{\gamma}^{1-p}|\gamma_{1}-\gamma_{2}|^{p}\|{\bm{w}}\|^{2},$
therefore,
$\frac{1}{2}\frac{d}{dt}|{\bm{w}}|^{2}+\left(\bar{\gamma}-\frac{1}{2}\bar{\gamma}^{1-p}|\gamma_{1}-\gamma_{2}|^{p}\right)\|{\bm{w}}\|^{2}+\mu|P_{N}{\bm{w}}|^{2}\\\
\leq\frac{1}{2}\bar{\gamma}^{p-1}|\gamma_{1}-\gamma_{2}|^{2-p}\|\bar{{\bm{v}}}\|^{2}+\mu|{\bm{\phi}}_{1}-{\bm{\phi}}_{2}||{\bm{w}}|+\|\bar{{\bm{v}}}\||{\bm{w}}|(\tfrac{1}{L}|{\bm{w}}|+\|{\bm{w}}\|).$
For the remaining terms, using (2.13) and (2.9), for any $c_{1}>0$ we have
$\mu|{\bm{\phi}}_{1}-{\bm{\phi}}_{2}||{\bm{w}}|\leq\frac{\mu}{2c_{1}}|{\bm{\phi}}_{1}-{\bm{\phi}}_{2}|^{2}+\frac{\mu
c_{1}}{2\kappa_{0}^{2}N^{2}}\|{\bm{w}}\|^{2}+\frac{\mu
c_{1}}{2}|P_{N}{\bm{w}}|^{2},$
and
$\|\bar{{\bm{v}}}\||{\bm{w}}|(\tfrac{1}{L}|{\bm{w}}|+\|{\bm{w}}\|)\\\
\leq\left(\frac{1}{\kappa_{0}^{2}N^{2}L}+\frac{1}{\kappa_{0}N}\right)\|\bar{{\bm{v}}}\|\|{\bm{w}}\|^{2}+\left(\frac{1}{L}+\frac{\kappa_{0}N}{2}\right)\|\bar{{\bm{v}}}\||P_{N}{\bm{w}}|^{2}.$
Therefore,
$\frac{1}{2}\frac{d}{dt}|{\bm{w}}|^{2}+\left(\alpha-\frac{\mu
c_{1}}{2\kappa_{0}^{2}N^{2}}-\left(\frac{1}{\kappa_{0}^{2}N^{2}L}+\frac{1}{\kappa_{0}N}\right)\|\bar{{\bm{v}}}\|\right)\|{\bm{w}}\|^{2}\\\
+\left(\mu-\frac{\mu
c_{1}}{2}-\left(\frac{1}{L}+\frac{\kappa_{0}N}{2}\right)\|\bar{{\bm{v}}}\|\right)|P_{N}{\bm{w}}|^{2}\\\
\leq\frac{1}{2}\bar{\gamma}^{p-1}|\gamma_{1}-\gamma_{2}|^{2-p}\|\bar{{\bm{v}}}\|^{2}+\frac{\mu}{2c_{1}}|{\bm{\phi}}_{1}-{\bm{\phi}}_{2}|^{2}$
and after choosing $c_{1}=\alpha\frac{\kappa_{0}^{2}N^{2}}{\mu}$ and
simplifying,
$\frac{1}{2}\frac{d}{dt}|{\bm{w}}|^{2}+\left(\frac{1}{2}\alpha-\left(1+\frac{1}{2\pi
N}\right)\frac{\|\bar{{\bm{v}}}\|}{\kappa_{0}N}\right)\|{\bm{w}}\|^{2}\\\
+\left(\mu-\frac{1}{2}\alpha\kappa_{0}^{2}N^{2}-\frac{1}{L}\left(1+\pi
N\right)\|\bar{{\bm{v}}}\|\right)|P_{N}{\bm{w}}|^{2}\\\
\leq\frac{1}{2}\bar{\gamma}^{p-1}|\gamma_{1}-\gamma_{2}|^{2-p}\|\bar{{\bm{v}}}\|^{2}+\frac{\mu^{2}}{2\alpha\kappa_{0}^{2}N^{2}}|{\bm{\phi}}_{1}-{\bm{\phi}}_{2}|^{2}.$
Now, by (3.10) and (3.4), we have
$N\geq\frac{4}{\alpha\kappa_{0}}\|\bar{{\bm{v}}}\|,$ (3.13)
so,
$\frac{1}{2}\alpha-\left(1+\frac{1}{2\pi
N}\right)\frac{\|\bar{{\bm{v}}}\|}{\kappa_{0}N}\geq\frac{1}{2}\alpha-\left(1+\frac{1}{2\pi
N}\right)\frac{\alpha}{4}=\frac{1}{4}\alpha\left(1-\frac{1}{2\pi
N}\right)>\frac{1}{5}\alpha>0.$ (3.14)
Therefore, we can apply (2.9), obtaining
$\frac{1}{2}\frac{d}{dt}|{\bm{w}}|^{2}+\left(\frac{1}{2}\alpha-\left(1+\frac{1}{2\pi
N}\right)\frac{\|\bar{{\bm{v}}}\|}{\kappa_{0}N}\right)\kappa_{0}^{2}N^{2}|{\bm{w}}|^{2}\\\
+\left(\mu-\alpha\kappa_{0}^{2}N^{2}+\frac{1}{2}\kappa_{0}N\|\bar{{\bm{v}}}\|\right)|P_{N}{\bm{w}}|^{2}\\\
\leq\frac{1}{2}\bar{\gamma}^{p-1}|\gamma_{1}-\gamma_{2}|^{2-p}\|\bar{{\bm{v}}}\|^{2}+\frac{\mu^{2}}{2\alpha\kappa_{0}^{2}N^{2}}|{\bm{\phi}}_{1}-{\bm{\phi}}_{2}|^{2}.$
By assumption (3.9) we can drop the third term. Then, after simplifying and
applying the estimate (3.14) to the second term on the left hand side, we have
$\frac{d}{dt}|{\bm{w}}|^{2}+\frac{2}{5}\alpha\kappa_{0}^{2}N^{2}|{\bm{w}}|^{2}\\\
\leq\bar{\gamma}^{p-1}|\gamma_{1}-\gamma_{2}|^{2-p}\|\bar{{\bm{v}}}\|^{2}+\frac{\mu^{2}}{\alpha\kappa_{0}^{2}N^{2}}|{\bm{\phi}}_{1}-{\bm{\phi}}_{2}|^{2}.$
From here, for any $-\infty<s<t$ we can apply Grönwall’s inequality,
$|{\bm{w}}(t)|^{2}\leq
e^{-\frac{2}{5}\alpha\kappa_{0}^{2}N^{2}(t-s)}|{\bm{w}}(s)|^{2}\\\
+\bar{\gamma}^{p-1}|\gamma_{1}-\gamma_{2}|^{2-p}\|\bar{{\bm{v}}}\|_{C_{b}(V)}^{2}\frac{5}{2}\frac{1}{\alpha\kappa_{0}^{2}N^{2}}\left(1-e^{-\frac{2}{5}\alpha\kappa_{0}^{2}N^{2}(t-s)}\right)\\\
+\frac{\mu^{2}}{\alpha\kappa_{0}^{2}N^{2}}\|{\bm{\phi}}_{1}-{\bm{\phi}}_{2}\|_{C_{b}(H)}^{2}\frac{5}{2}\frac{1}{\alpha\kappa_{0}^{2}N^{2}}\left(1-e^{-\frac{2}{5}\alpha\kappa_{0}^{2}N^{2}(t-s)}\right)$
and taking the limit as $s\to-\infty$ we get the stated result. ∎
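The bracketing (3.12) is elementary but worth a numerical spot check. The sketch below (with an arbitrary test value $\nu_{0}=0.3$ and illustrative sampling ranges) verifies $\bar{\gamma}\geq\alpha\geq\min\left\{\frac{1}{2}\bar{\gamma},\min\{\gamma_{1},\gamma_{2}\}\right\}\geq\frac{1}{2}\nu_{0}$ over random $\gamma_{1},\gamma_{2}\geq\nu_{0}$ and $p\in[0,1]$.

```python
# Numerical check of the bracketing (3.12) for
# alpha = gbar * (1 - (1/2) * (|g1 - g2| / gbar)**p),  gbar = (g1 + g2)/2.
# nu0 and the sampling ranges are arbitrary test values.
import random

def alpha(g1, g2, p):
    gbar = 0.5 * (g1 + g2)
    return gbar * (1.0 - 0.5 * (abs(g1 - g2) / gbar) ** p)

nu0 = 0.3
random.seed(1)
for _ in range(100_000):
    g1 = nu0 + 10.0 * random.random()
    g2 = nu0 + 10.0 * random.random()
    p = random.random()
    gbar = 0.5 * (g1 + g2)
    a = alpha(g1, g2, p)
    assert a <= gbar + 1e-12                                # alpha <= gbar
    assert a >= min(0.5 * gbar, min(g1, g2)) - 1e-9         # middle bound of (3.12)
    assert min(0.5 * gbar, min(g1, g2)) >= 0.5 * nu0        # >= nu0 / 2
```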
In the previous lemma, the lower bound for $N$ depends on the value of
$\alpha$, which in turn depends on the viscosities. If we impose an upper
bound, $\nu_{1}$, on the viscosities in addition to the lower bound,
$\nu_{0}$, we can replace this dependence with one on $\nu_{1}$ and $\nu_{0}$.
###### Corollary 3.4.
Let ${\bm{v}}_{1}$ and ${\bm{v}}_{2}$ be solutions of (2.11) with viscosities
$\gamma_{1}$ and $\gamma_{2}$ and data ${\bm{\phi}}_{1}$ and ${\bm{\phi}}_{2}$,
respectively, and suppose that $\gamma_{1},\gamma_{2}\in[\nu_{0},\nu_{1}]$,
where $\nu_{1}>\nu_{0}$. If
$\mu_{0}\geq\frac{\nu_{1}}{\nu_{0}},$ (3.15)
and
$N\geq\frac{8}{\nu_{0}\kappa_{0}}M_{V}({\bm{\phi}}_{i})$ (3.16)
for $i=1,2,$ then
$\|{\bm{v}}_{1}-{\bm{v}}_{2}\|_{C_{b}(H)}^{2}\leq
5\frac{|\gamma_{1}-\gamma_{2}|^{2-p}}{\left(\frac{\gamma_{1}+\gamma_{2}}{2}\right)^{1-p}}\frac{\|\frac{1}{2}({\bm{v}}_{1}+{\bm{v}}_{2})\|_{C_{b}(V)}^{2}}{\nu_{0}\kappa_{0}^{2}N^{2}}+10\mu_{0}^{2}\|{\bm{\phi}}_{1}-{\bm{\phi}}_{2}\|_{C_{b}(H)}^{2}.$
###### Proof.
Substituting for $\alpha$ using the bounds
$\frac{\nu_{0}}{2}\leq\alpha\leq\frac{\gamma_{1}+\gamma_{2}}{2}\leq\nu_{1}$
proves the corollary. ∎
We can also allow $\gamma_{1}$ and $\gamma_{2}$ to be arbitrarily large,
as long as their relative difference is at most $1$.
###### Corollary 3.5.
Let ${\bm{v}}_{1}$ and ${\bm{v}}_{2}$ be solutions of (2.11) with viscosities
$\gamma_{1}\geq\nu_{0}$ and $\gamma_{2}\geq\nu_{0}$ and data ${\bm{\phi}}_{1}$
and ${\bm{\phi}}_{2}$, respectively. If
$\mu_{0}\geq 1,$ (3.17)
and
$N\geq\frac{8}{\nu_{0}\kappa_{0}}M_{V}({\bm{\phi}}_{i})$ (3.18)
for $i=1,2,$ and if
$\frac{|\gamma_{1}-\gamma_{2}|}{\frac{1}{2}(\gamma_{1}+\gamma_{2})}\leq 1,$
then
$\|{\bm{v}}_{1}-{\bm{v}}_{2}\|_{C_{b}(H)}^{2}\leq
5\left(\frac{|\gamma_{1}-\gamma_{2}|}{\frac{\gamma_{1}+\gamma_{2}}{2}}\right)^{2-p}\frac{\|\frac{1}{2}({\bm{v}}_{1}+{\bm{v}}_{2})\|_{C_{b}(V)}^{2}}{\kappa_{0}^{2}N^{2}}+10\mu_{0}^{2}\|{\bm{\phi}}_{1}-{\bm{\phi}}_{2}\|_{C_{b}(H)}^{2}.$
###### Proof.
Using the same notation as before, we have by assumption
$\frac{|\gamma_{1}-\gamma_{2}|}{\bar{\gamma}}\leq 1,$ so
$\frac{1}{2}\nu_{0}\leq\alpha\leq\nu_{0},$ hence (3.9) and (3.10) are
satisfied by (3.17) and (3.18). Also,
$\alpha\geq\bar{\gamma}\left(1-\frac{1}{2}\max\left\\{1,2\frac{|\gamma_{1}-\gamma_{2}|}{\gamma_{1}+\gamma_{2}}\right\\}\right)=\frac{1}{2}\bar{\gamma}.$
Substituting this lower bound for $\alpha$ in (3.11) along with the more
general lower bound, $\alpha>\frac{1}{2}\nu_{0}$, gives the result. ∎
We can now show the well-posedness of the determining map.
###### Theorem 3.6.
Let $R>0$ and $\nu_{0}>0$, and let $\bm{f}\in C_{b}(V).$ If
$N\geq\frac{8}{\nu_{0}\kappa_{0}}\sqrt{2}\left(\frac{1}{\nu_{0}\kappa_{0}^{2}N^{2}}\|\bm{f}\|_{C_{b}(V)}+\mu_{0}R\right),$
(3.19)
for any $\mu_{0}\geq 1$ with $\mu$ given by (3.1), then $\mathbb{W}$ as
defined by Definition 3.1 is well-defined.
###### Proof.
The existence of solutions on the attractor for any given pair
$(\gamma,{\bm{\phi}})$ on the domain of $\mathbb{W}$ can be shown by first
taking a Galerkin truncation and passing to the limit, as is done in the
classical case of the 2D Navier–Stokes equations, and so we omit the details here.
The uniqueness of solutions follows from Corollary 3.5: for any
$(\gamma,{\bm{\phi}})\in(\nu_{0},\infty)\times B_{R}(0)$, the
conditions of the corollary are satisfied. Therefore, given two solutions
${\bm{v}}_{1}$ and ${\bm{v}}_{2}$ of (2.11) corresponding to $\gamma$ and
${\bm{\phi}}$, by Corollary 3.5 we have
$\|{\bm{v}}_{1}-{\bm{v}}_{2}\|_{C_{b}(H)}=0,$
therefore, by the embedding $V\subset H$,
$\|{\bm{v}}_{1}-{\bm{v}}_{2}\|_{C_{b}(V)}=0.$
∎
###### Remark 3.7.
Theorem 3.6 also provides an alternate proof of the classical result that
there are finitely many determining modes for the Navier–Stokes equations. To
see that this is true, suppose ${\bm{u}}$ and ${\bm{v}}$ are two solutions of
(2.3) in $C_{b}(V)$ with viscosity $\nu$, which agree at low modes, i.e.
$P_{N}{\bm{u}}=P_{N}{\bm{v}}$ for all $t\in\mathbb{R}$. If $N$ satisfies the
conditions of Theorem 3.6 with $\nu_{0}=\nu,$
$R=\max\\{\|{\bm{u}}\|_{C_{b}(V)},\|{\bm{v}}\|_{C_{b}(V)}\\},$ and
$\mu_{0}=1$, then
${\bm{u}}=\mathbb{W}(\nu,P_{N}{\bm{u}})=\mathbb{W}(\nu,P_{N}{\bm{v}})={\bm{v}}.$
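The nudging mechanism behind the determining map can be illustrated in a minimal setting. The sketch below is a toy, not the Navier–Stokes system: Lorenz-63 stands in for (2.3), its first component plays the role of the observed projection $P_{N}{\bm{u}}$, and the values of $\mu$, the step size, and the initial states are illustrative choices. Observing one component and relaxing toward it recovers the full state.

```python
# Toy version of the nudging/determining-map idea: observe one component of a
# chaotic system, add a relaxation term mu*(observed - model), and recover the
# full state. Lorenz-63 stands in for (2.3); all numbers are illustrative.
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt, steps, mu = 1e-3, 30_000, 100.0
u = [1.0, 1.0, 1.0]      # reference ("truth") trajectory
v = [-5.0, 7.0, 20.0]    # assimilated trajectory, wrong initial state

for _ in range(steps):
    du, dv = lorenz(u), lorenz(v)
    x_obs = u[0]                                     # only the first component is observed
    u = [u[i] + dt * du[i] for i in range(3)]
    v = [v[0] + dt * (dv[0] + mu * (x_obs - v[0])),  # nudging term, cf. mu*(phi - P_N v)
         v[1] + dt * dv[1],
         v[2] + dt * dv[2]]

err = max(abs(u[i] - v[i]) for i in range(3))
print(err)  # the unobserved components synchronize as well
```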
## 4. The Parameter Recovery Inverse Problem
Our goal in this section is to define and study the following optimization
problem.
###### Problem 4.1.
Let $P_{N}{\bm{u}}$ be given on $\mathbb{R}$. Define the cost functional,
$\mathcal{L}$, by
$\mathcal{L}(\gamma)=\|P_{N}\mathbb{W}(\gamma,P_{N}{\bm{u}})-P_{N}{\bm{u}}\|_{C_{b}(H)}.$
(4.1)
Find $\gamma^{*}>0$ such that
$\mathcal{L}(\gamma^{*})=\min_{\gamma>0}\mathcal{L}(\gamma).$
As motivation, consider the following:
###### Fact 4.2.
If ${\bm{u}}\in C_{b}(V)$ is a solution of (2.3) with viscosity $\nu$, then
$\min_{\gamma>0}\|\mathbb{W}(\gamma,P_{N}{\bm{u}})-{\bm{u}}\|_{C_{b}(H)}$
has a unique solution, namely $\nu$, provided that $\mu$ and $N$ satisfy the
conditions of Theorem 3.6 with $R=\|{\bm{u}}\|_{C_{b}(V)},$ and that
$\bm{f}\not\equiv 0$.
###### Proof.
Choosing $\gamma=\nu$ makes the cost exactly $0$, as ${\bm{u}}$ solves (2.11)
when $\gamma=\nu$, and therefore, by Theorem 3.6,
$\mathbb{W}(\nu,P_{N}{\bm{u}})={\bm{u}}$. To see that this solution is unique,
suppose that ${\bm{v}}:=\mathbb{W}(\gamma,P_{N}{\bm{u}})$ satisfies
$\|{\bm{v}}-{\bm{u}}\|_{C_{b}(H)}=0.$ Then
$\frac{d}{dt}{\bm{v}}+\gamma
A{\bm{v}}+B\left({\bm{v}},{\bm{v}}\right)=\bm{f}+P_{N}{\bm{u}}-P_{N}{\bm{v}}.$
Taking an inner-product with ${\bm{v}}$, we obtain
$\frac{1}{2}\frac{d}{dt}|{\bm{v}}|^{2}+\gamma\|{\bm{v}}\|^{2}=\left<\bm{f},{\bm{v}}\right>+\left<P_{N}{\bm{u}},{\bm{v}}\right>-|P_{N}{\bm{v}}|^{2},$
whereas, by the definition of ${\bm{u}}$ we have
$\frac{d}{dt}{\bm{u}}+\nu A{\bm{u}}+B\left({\bm{u}},{\bm{u}}\right)=\bm{f},$
and taking an inner-product with ${\bm{u}}$, we get
$\frac{1}{2}\frac{d}{dt}|{\bm{u}}|^{2}+\nu\|{\bm{u}}\|^{2}=\left<\bm{f},{\bm{u}}\right>.$
Now, by assumption, $|{\bm{v}}-{\bm{u}}|=0$ for each $t$, so
$\left<\bm{f},{\bm{v}}\right>=\left<\bm{f},{\bm{u}}\right>$,
$\left<P_{N}{\bm{u}},{\bm{v}}\right>=|P_{N}{\bm{v}}|^{2}$, and
$\|{\bm{u}}\|=\|{\bm{v}}\|.$ Therefore, taking the difference of the two
energy equations, we see that for all $t\in(-\infty,\infty)$,
$(\gamma-\nu)\|{\bm{u}}\|^{2}=0.$
This can only hold if ${\bm{u}}\equiv 0$ or if $\gamma=\nu$. However,
${\bm{u}}\equiv 0$ only if $\bm{f}\equiv 0$. Therefore $\gamma=\nu$. ∎
Note that the minimization problem presented in Fact 4.2 is not useful,
because to calculate the cost we would need to know ${\bm{u}}$ exactly, in
which case, rather than solving the minimization problem using
$\mathbb{W}(\cdot,\cdot)$, we could compute $\nu$ directly by averaging the
energy equation over $[-T,T]$ for any $T>0$:
$\nu=\frac{\int_{-T}^{T}\left<\bm{f},{\bm{u}}\right>dt-\frac{1}{2}\left(|{\bm{u}}(T)|^{2}-|{\bm{u}}(-T)|^{2}\right)}{\int_{-T}^{T}\|{\bm{u}}\|^{2}\,dt}.$
The point of posing Problem 4.1 is to obtain a similar result while requiring
a minimal amount of data.
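As a concrete illustration of this averaging identity, consider a toy single-mode solution $u=a(t)\sin(kx)$ (with $\kappa_{0}=1$): the nonlinear term vanishes and (2.3) reduces to the scalar ODE $a'=-\nu k^{2}a+g(t)$, whose time-averaged energy balance recovers $\nu$. All parameter values in the sketch below are illustrative.

```python
# Numerical check of the energy-averaging identity for the viscosity on a
# single Fourier mode u = a(t) sin(kx), where (2.3) reduces to the scalar ODE
# a' = -nu k^2 a + g(t); the spatial normalization cancels in the ratio.
import math

nu_true, k, dt, T = 0.7, 2, 1e-4, 5.0
n = int(2 * T / dt)
a, t = 0.3, -T
a_start = a
int_fa, int_a2 = 0.0, 0.0            # running integrals of g*a and a^2
for _ in range(n):
    g = math.cos(t)                   # forcing amplitude g(t)
    int_fa += dt * g * a
    int_a2 += dt * a * a
    a += dt * (-nu_true * k**2 * a + g)
    t += dt

# nu = ( int <f,u> dt - (1/2)(|u(T)|^2 - |u(-T)|^2) ) / int ||u||^2 dt
nu_rec = (int_fa - 0.5 * (a * a - a_start * a_start)) / (k**2 * int_a2)
print(nu_rec)  # close to nu_true = 0.7
```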
## 5. Uniqueness of the Inverse Problem
Given the same data, the Lipschitz properties we proved in Section 3 show that
closeness in the viscosity implies smallness of the cost function,
$\mathcal{L}$. Now that we are considering the inverse problem, we are
interested in the converse: when does smallness in the cost function imply
closeness to the true viscosity? We give conditions for the converse to be
true in Section 6, but first, we directly address the simpler question of
uniqueness of the solution, $\gamma=\nu$, to Problem 4.1.
In the following lemma, we provide an inequality bounding the difference
between two viscosities which, given the same data ${\bm{\phi}}$, map (via
$\mathbb{W}$) to trajectories with identical projections on $\mathrm{range}(P_{N})$. We
leverage this result to show the uniqueness of solutions of Problem 4.1 in the
subsequent theorem.
###### Lemma 5.1.
Let $\bm{f},{\bm{\phi}}\in C_{b}(V)$ and
$\gamma_{1},\gamma_{2}\in[\nu_{0},\nu_{1}]$, and fix
$\mu_{0}\geq\frac{\nu_{1}}{\nu_{0}}$ and
$N\geq\frac{8}{\nu_{0}\kappa_{0}}M_{V}({\bm{\phi}}).$ If
$\|P_{N}\mathbb{W}(\gamma_{1},{\bm{\phi}})-P_{N}\mathbb{W}(\gamma_{2},{\bm{\phi}})\|_{C_{b}(H)}=0,$
(5.1)
then either $|\gamma_{1}-\gamma_{2}|=0,$ or for any $p\in[0,1]$,
$\left(\frac{|\gamma_{1}-\gamma_{2}|}{\frac{\gamma_{1}+\gamma_{2}}{2}}\right)^{p/2}\|{\bm{\psi}}\|_{C_{b}(H)}\,N\leq\frac{2\sqrt{5c}}{\nu_{0}\kappa_{0}^{2}\sqrt{|\Omega|}}\sqrt{\ln(N+1)}M_{H}({\bm{\phi}})M_{V}({\bm{\phi}}),$
(5.2)
where
${\bm{\psi}}:=P_{N}\mathbb{W}(\gamma_{1},{\bm{\phi}})=P_{N}\mathbb{W}(\gamma_{2},{\bm{\phi}}).$
###### Proof.
Let ${\bm{v}}_{1}=\mathbb{W}(\gamma_{1},{\bm{\phi}})$ and
${\bm{v}}_{2}=\mathbb{W}(\gamma_{2},{\bm{\phi}})$, and assume that
$P_{N}{\bm{v}}_{1}=P_{N}{\bm{v}}_{2}=:{\bm{\psi}}$. Let
${\bm{w}}={\bm{v}}_{1}-{\bm{v}}_{2}$ (note that $P_{N}{\bm{w}}$ is therefore
$0$). Writing (2.11) in Fourier space, for any ${\bm{k}}\in\mathbb{Z}^{2}$,
$|{\bm{k}}|\leq N$, we have
$\frac{d}{dt}\widehat{{\bm{v}}_{1}}({\bm{k}})+\gamma_{1}\kappa_{0}^{2}|{\bm{k}}|^{2}\widehat{{\bm{v}}_{1}}({\bm{k}})+i\kappa_{0}P_{\sigma}\sum_{{\bm{h}}}({\bm{k}}\cdot\widehat{{\bm{v}}_{1}}({\bm{h}}))\widehat{{\bm{v}}_{1}}({\bm{k}}-{\bm{h}})=\widehat{\bm{f}}({\bm{k}})+\mu\widehat{{\bm{\phi}}}({\bm{k}})-\mu\widehat{{\bm{v}}_{1}}({\bm{k}})$
and
$\frac{d}{dt}\widehat{{\bm{v}}_{2}}({\bm{k}})+\gamma_{2}\kappa_{0}^{2}|{\bm{k}}|^{2}\widehat{{\bm{v}}_{2}}({\bm{k}})+i\kappa_{0}P_{\sigma}\sum_{{\bm{h}}}({\bm{k}}\cdot\widehat{{\bm{v}}_{2}}({\bm{h}}))\widehat{{\bm{v}}_{2}}({\bm{k}}-{\bm{h}})=\widehat{\bm{f}}({\bm{k}})+\mu\widehat{{\bm{\phi}}}({\bm{k}})-\mu\widehat{{\bm{v}}_{2}}({\bm{k}}).$
Subtracting these equations and using the fact that
$\widehat{{\bm{v}}_{1}}({\bm{k}})=\widehat{{\bm{v}}_{2}}({\bm{k}})=\widehat{{\bm{\psi}}}({\bm{k}})$
and $\widehat{{\bm{w}}}({\bm{k}})=0$ for $|{\bm{k}}|\leq N$, we find
$(\gamma_{1}-\gamma_{2})\kappa_{0}^{2}|{\bm{k}}|^{2}\widehat{{\bm{\psi}}}({\bm{k}})+i\kappa_{0}P_{\sigma}\sum_{{\bm{h}}}({\bm{k}}\cdot\widehat{{\bm{v}}_{1}}({\bm{h}}))\widehat{{\bm{w}}}({\bm{k}}-{\bm{h}})+i\kappa_{0}P_{\sigma}\sum_{{\bm{h}}}({\bm{k}}\cdot\widehat{{\bm{w}}}({\bm{h}}))\widehat{{\bm{v}}_{2}}({\bm{k}}-{\bm{h}})=0.$
Next we estimate the convolutions. First, we may write
$|\gamma_{1}-\gamma_{2}|\kappa_{0}|{\bm{k}}|^{2}|\widehat{{\bm{\psi}}}({\bm{k}})|\leq\sum_{{\bm{h}}}|{\bm{k}}||\widehat{{\bm{v}}_{1}}({\bm{h}})||\widehat{{\bm{w}}}({\bm{k}}-{\bm{h}})|+\sum_{{\bm{h}}}|{\bm{k}}||\widehat{{\bm{v}}_{2}}({\bm{k}}-{\bm{h}})||\widehat{{\bm{w}}}({\bm{h}})|.$
Then, applying the Cauchy-Schwarz inequality and Parseval’s theorem, we obtain
$|\gamma_{1}-\gamma_{2}|\kappa_{0}|{\bm{k}}|^{2}|\widehat{{\bm{\psi}}}({\bm{k}})|\leq|{\bm{k}}|\frac{1}{|\Omega|}|{\bm{v}}_{1}||{\bm{w}}|+|{\bm{k}}|\frac{1}{|\Omega|}|{\bm{v}}_{2}||{\bm{w}}|.$
Now, for any ${\bm{k}}\neq 0$, we can multiply both sides by
$\frac{|\widehat{{\bm{\psi}}}({\bm{k}})|}{|{\bm{k}}|^{2}}|\Omega|$,
$|\gamma_{1}-\gamma_{2}|\kappa_{0}|\widehat{{\bm{\psi}}}({\bm{k}})|^{2}|\Omega|\leq\frac{1}{|{\bm{k}}|}|\widehat{{\bm{\psi}}}({\bm{k}})|(|{\bm{v}}_{1}|+|{\bm{v}}_{2}|)|{\bm{w}}|,$
and taking the sum over ${\bm{k}}\in\mathbb{Z}^{2},0<|{\bm{k}}|\leq N,$ and
using the Cauchy-Schwarz inequality,
$|\gamma_{1}-\gamma_{2}|\kappa_{0}|P_{N}{\bm{\psi}}|^{2}\leq\left(\sum_{0<|{\bm{k}}|\leq
N}\frac{1}{|{\bm{k}}|^{2}}\right)^{\frac{1}{2}}\frac{1}{\sqrt{|\Omega|}}|P_{N}{\bm{\psi}}|(|{\bm{v}}_{1}|+|{\bm{v}}_{2}|)|{\bm{w}}|.$
Using the estimate
$\sum_{0<|{\bm{k}}|\leq N}\frac{1}{|{\bm{k}}|^{2}}\leq 8+2\pi(1+\ln(N))\leq
c\ln(N+1),$
and noting that ${\bm{\psi}}=P_{N}{\bm{\psi}}$, we have
$|\gamma_{1}-\gamma_{2}|\kappa_{0}\sqrt{|\Omega|}|{\bm{\psi}}|^{2}\leq\sqrt{c\ln(N+1)}|{\bm{\psi}}|(|{\bm{v}}_{1}|+|{\bm{v}}_{2}|)|{\bm{w}}|.$
Finally, by Corollary 3.4,
$|{\bm{w}}(t)|\leq\sqrt{5}\frac{|\gamma_{1}-\gamma_{2}|^{1-p/2}}{\left(\frac{\gamma_{1}+\gamma_{2}}{2}\right)^{1/2-p/2}}\frac{\|\frac{1}{2}({\bm{v}}_{1}+{\bm{v}}_{2})\|_{C_{b}(V)}}{\sqrt{\nu_{0}}\kappa_{0}N},$
and using Lemma 3.2, we can bound the norms involving ${\bm{v}}_{1}$ and
${\bm{v}}_{2}$. The resulting inequality is
$\left(\frac{|\gamma_{1}-\gamma_{2}|}{\frac{\gamma_{1}+\gamma_{2}}{2}}\right)^{p/2}|\gamma_{1}-\gamma_{2}||{\bm{\psi}}|^{2}\\\
\leq|\gamma_{1}-\gamma_{2}||{\bm{\psi}}|\frac{2\sqrt{5}}{\nu_{0}\kappa_{0}^{2}\sqrt{|\Omega|}}\frac{\sqrt{c\ln(N+1)}}{N}M_{H}({\bm{\phi}})M_{V}({\bm{\phi}}),$
so either $|\gamma_{1}-\gamma_{2}|=0$ (taking care in the case where $p=0$),
or
$\left(\frac{|\gamma_{1}-\gamma_{2}|}{\frac{\gamma_{1}+\gamma_{2}}{2}}\right)^{p/2}|{\bm{\psi}}|N\leq\frac{2\sqrt{5c}}{\nu_{0}\kappa_{0}^{2}\sqrt{|\Omega|}}\sqrt{\ln(N+1)}M_{H}({\bm{\phi}})M_{V}({\bm{\phi}}).$
This holds for all time, so we may take the supremum over all time and get the
stated result. ∎
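The lattice-sum estimate used in the proof can be probed by brute force; the sketch below checks the explicit intermediate bound $8+2\pi(1+\ln N)$ (which the constant $c$ absorbs) for several values of $N$.

```python
# Brute-force check of the lattice-sum estimate
#   sum_{0 < |k| <= N, k in Z^2} 1/|k|^2  <=  8 + 2*pi*(1 + ln N).
import math

def lattice_sum(N):
    """Sum of 1/|k|^2 over nonzero integer pairs k with |k| <= N."""
    s = 0.0
    for i in range(-N, N + 1):
        for j in range(-N, N + 1):
            r2 = i * i + j * j
            if 0 < r2 <= N * N:
                s += 1.0 / r2
    return s

for N in (1, 2, 5, 10, 50, 200):
    assert lattice_sum(N) <= 8 + 2 * math.pi * (1 + math.log(N))
```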
In the previous lemma no conclusions can be drawn regarding the viscosity in
the event that the data are zero in $C_{b}(H)$ (compare to the nonzero
requirement of Fact 4.2). However, given a nonzero function ${\bm{\phi}}\in
C_{b}(H)$, we always have $\|P_{n}({\bm{\phi}})\|_{C_{b}(H)}>0$ for some
$n>1$. Therefore, given $0\neq{\bm{\phi}}\in C_{b}(H)$, let
$n_{0}({\bm{\phi}})=\sup\\{n\in\mathbb{R}\>|\>\|P_{n}({\bm{\phi}})\|_{C_{b}(H)}=0\\}.$
(5.3)
Then $1\leq n_{0}({\bm{\phi}})<\infty$, and
$\|P_{N}({\bm{\phi}})\|_{C_{b}(H)}>0$ for any $N>n_{0}({\bm{\phi}}).$
We now give conditions for Problem 4.1 to have a unique solution.
###### Theorem 5.2.
Let ${\bm{u}}\in C_{b}(V)$ be a solution of (2.3) with viscosity
$\nu\in[\nu_{0},\nu_{1}]$, where $0<\nu_{0}<\nu_{1}<\infty$ are given, and
suppose that ${\bm{u}}\neq 0$. Choose $\mu_{0}\geq\frac{\nu_{1}}{\nu_{0}}.$ If
$N>\max\left\\{\frac{8}{\nu_{0}\kappa_{0}}M_{V}({\bm{u}}),\>n_{0}({\bm{u}})\right\\}$
and
$\frac{N}{\sqrt{\ln(N+1)}}>\frac{2\sqrt{5c}}{\nu_{0}\kappa_{0}^{2}\sqrt{|\Omega|}}\frac{M_{H}({\bm{u}})M_{V}({\bm{u}})}{\|P_{N}{\bm{u}}\|_{C_{b}(H)}},$
(5.4)
then $\nu$ is the _unique_ solution of Problem 4.1 on $[\nu_{0},\nu_{1}]$.
###### Proof.
By the assumption that ${\bm{u}}\in C_{b}(V),$ we may take
$R=\|P_{N}{\bm{u}}\|_{C_{b}(V)}\leq\|{\bm{u}}\|_{C_{b}(V)}<\infty,$ in Theorem
3.6 and conclude that ${\bm{u}}=\mathbb{W}(\nu,P_{N}{\bm{u}}).$
Let $\gamma\in[\nu_{0},\nu_{1}]$ and suppose that $\mathcal{L}(\gamma)=0.$
Then by Lemma 5.1 with $p=0$, either $\nu=\gamma$ or
$\|P_{N}{\bm{u}}\|_{C_{b}(H)}\,N\leq\frac{2\sqrt{5c}}{\nu_{0}\kappa_{0}^{2}\sqrt{|\Omega|}}\sqrt{\ln(N+1)}M_{H}(P_{N}{\bm{u}})M_{V}(P_{N}{\bm{u}})\\\
\leq\frac{2\sqrt{5c}}{\nu_{0}\kappa_{0}^{2}\sqrt{|\Omega|}}\sqrt{\ln(N+1)}M_{H}({\bm{u}})M_{V}({\bm{u}}).$
This inequality contradicts (5.4), so we conclude that $\nu=\gamma$. ∎
In Theorem 5.2, condition (5.4) depends on the solution ${\bm{u}}$, of which
we only know the finite-dimensional projection $P_{N}{\bm{u}}$. The previous
theorem can be reformulated in terms of a dynamical property, namely the
distance between zero and the attractor, $\mathcal{A}$.
###### Theorem 5.3.
Assume $\nu\in[\nu_{0},\nu_{1}]$ with $\nu_{0}>0$ and assume
$\bm{0}\notin\mathcal{A}_{\nu}$. Let
$G=\max\left\\{\frac{\|\bm{f}\|_{L^{2}}}{\nu_{0}^{2}\kappa_{0}^{2}},\frac{\|\bm{f}\|_{\mathbb{H}^{1}}}{\nu_{0}^{2}\kappa_{0}^{3}}\right\\}\
\mbox{and}\
\delta=d_{H}(\bm{0},\mathcal{A}_{\nu})=\inf_{{\bm{u}}\in\mathcal{A}_{\nu}}|{\bm{u}}|.$
Then $n_{0}({\bm{u}})\lesssim\frac{\nu_{1}G}{\delta}$. Moreover, $\nu$ is
the unique solution of Problem 4.1 provided
$N\gtrsim\max\\{\frac{\nu_{1}G}{\delta},\frac{\nu_{1}}{\nu_{0}}G\sqrt{1+\frac{\nu_{1}}{\nu_{0}}}\\}\
\mbox{and}\
\frac{N}{\sqrt{\ln(N+1)}}\gtrsim\frac{\nu_{1}^{2}(1+\frac{\nu_{1}}{\nu_{0}})G}{\nu_{0}\delta}.$
(5.5)
###### Proof.
Note that, in view of (2.9), we have
$|P_{N}{\bm{u}}-{\bm{u}}|^{2}\leq\frac{1}{\kappa_{0}^{2}N^{2}}\|{\bm{u}}\|^{2}\lesssim\frac{\nu_{1}^{2}G^{2}}{N^{2}},$
(5.6)
where, with $G_{\nu}$ as in (2.6), we used the classical bounds (2.7) yielding
$\sup_{{\bm{u}}\in\mathcal{A}_{\nu}}|{\bm{u}}|\leq\nu G_{\nu}\leq\nu_{1}G\
\mbox{and}\
\sup_{{\bm{u}}\in\mathcal{A}_{\nu}}\|{\bm{u}}\|\leq\nu\kappa_{0}G_{\nu}\leq\nu_{1}\kappa_{0}G.$
(5.7)
Then, provided $N\gtrsim\frac{\nu_{1}G}{\delta}$, from (5.6), we have
$|P_{N}{\bm{u}}-{\bm{u}}|^{2}\leq\frac{1}{2}\delta^{2}$. Using the equality
$|{\bm{u}}|^{2}=|P_{N}{\bm{u}}|^{2}+|(I-P_{N}){\bm{u}}|^{2}$ and the
inequality $|{\bm{u}}|^{2}\geq\delta^{2}$ for ${\bm{u}}\in\mathcal{A}_{\nu}$,
it follows that $|P_{N}{\bm{u}}|^{2}\geq\frac{1}{2}\delta^{2}$. It follows
from the definition (5.3) that $n_{0}({\bm{u}})\leq\frac{\nu_{1}G}{\delta}$.
Let $\mu_{0}=\frac{\nu_{1}}{\nu_{0}}$. Since $\nu\in[\nu_{0},\nu_{1}]$, it
follows readily from (5.7) and the definition of $M_{H}({\bm{u}})$ and
$M_{V}({\bm{u}})$ in (3.5) and (3.6) that
$M_{H}({\bm{u}})\lesssim\nu_{1}\sqrt{1+\frac{\nu_{0}}{\nu_{1}}}\,\widetilde{G}\
\mbox{and}\
M_{V}({\bm{u}})\lesssim\nu_{1}\sqrt{1+\frac{\nu_{0}}{\nu_{1}}}\kappa_{0}\,\widetilde{G}.$
The conclusion now follows immediately from Theorem 5.2 by inserting the bound
$|P_{N}{\bm{u}}|^{2}\geq\frac{1}{2}\delta^{2}$ in (5.4). ∎
We can easily see that the condition ${\bm{u}}\neq 0$ is necessary for the
unique recovery of $\nu$ by inspecting (2.3): if ${\bm{u}}\equiv 0$ satisfies
(2.3) for a given force $\bm{f}$, then it does so with any value for $\nu$
because $A{\bm{u}}\equiv 0$; hence there is no unique $\nu$ to recover. As evidenced
by the following example, the condition $N>n_{0}({\bm{u}})$ in Theorem 5.2 is
also necessary.
###### Example 5.4.
Fix ${\bm{k}}\in\mathbb{Z}^{2}\setminus\\{0\\}$ and $c\in\mathbb{C}\setminus\\{0\\}$ and
define $\bm{f}:\Omega\to V$ by
$\bm{f}({\bm{x}})=c{\bm{k}}^{\perp}e^{i\kappa_{0}{\bm{k}}\cdot{\bm{x}}}+\bar{c}{\bm{k}}^{\perp}e^{-i\kappa_{0}{\bm{k}}\cdot{\bm{x}}},\quad\forall{\bm{x}}\in\Omega,$
(where ${\bm{k}}^{\perp}$ is a unit vector orthogonal to ${\bm{k}}$, for
example, $[-k_{2},k_{1}]^{T}/|{\bm{k}}|$), i.e. $\bm{f}$ is a time-autonomous
“single mode” force in $V$. Then
${\bm{u}}\equiv\frac{1}{\nu\kappa_{0}^{2}|{\bm{k}}|^{2}}\bm{f}$
is a stationary solution of (2.3) for any $\nu>0$. Note that
$\|\bm{f}\|=2\sqrt{2}\pi|{\bm{k}}||c|,$
so, by choosing $c=\frac{1}{2\sqrt{2}\pi|{\bm{k}}|}$, we have $\|\bm{f}\|=1$
for any choice of ${\bm{k}}$, and
$\|{\bm{u}}\|=\frac{1}{\nu\kappa_{0}^{2}|{\bm{k}}|^{2}},$
so
$M_{V}(P_{N}{\bm{u}})=\sqrt{2}\sqrt{\frac{1}{(\nu\kappa_{0}^{2}N^{2})^{2}}+\mu_{0}^{2}\|P_{N}{\bm{u}}\|^{2}}\leq\frac{\sqrt{2}}{\nu\kappa_{0}^{2}}\sqrt{\frac{1}{N^{4}}+\mu_{0}^{2}\frac{1}{|{\bm{k}}|^{4}}}.$
Now, choosing $\mu_{0}>1$ and
$N\geq\frac{8\sqrt{2}}{\nu^{2}\kappa_{0}^{3}}\sqrt{1+\mu_{0}^{2}}$, for any
$\gamma\in[\frac{\nu}{\mu_{0}},\mu_{0}\nu]$,
$\mathbb{W}(\gamma,P_{N}{\bm{u}})=\frac{1}{\gamma\kappa_{0}^{2}|{\bm{k}}|^{2}+\mu}(\bm{f}+\mu
P_{N}{\bm{u}}).$
Therefore,
$\|\mathbb{W}(\gamma,P_{N}{\bm{u}})-{\bm{u}}\|_{C_{b}(H)}=\frac{|\bm{f}|}{\gamma\kappa_{0}^{2}|{\bm{k}}|^{2}+\mu}\frac{1}{\nu}\begin{cases}|\gamma-\nu|,&\text{
if }|{\bm{k}}|<N\\\
\left|\gamma-\nu+\frac{\mu}{\kappa_{0}^{2}|{\bm{k}}|^{2}}\right|,&\text{ if
}|{\bm{k}}|\geq N\end{cases}$
and
$\|P_{N}\mathbb{W}(\gamma,P_{N}{\bm{u}})-P_{N}{\bm{u}}\|_{C_{b}(H)}=\begin{cases}\frac{|\bm{f}|}{\gamma\kappa_{0}^{2}|{\bm{k}}|^{2}+\mu}\frac{|\gamma-\nu|}{\nu},&\text{
if }|{\bm{k}}|<N\\\ 0,&\text{ if }|{\bm{k}}|\geq N\end{cases}.$
So, if $|{\bm{k}}|\geq N$, then $P_{N}{\bm{u}}=0,$ and
$\|P_{N}\mathbb{W}(\gamma,P_{N}{\bm{u}})-P_{N}{\bm{u}}\|_{C_{b}(H)}=0$ while
$|\nu-\gamma|$ can be arbitrarily large. Hence, we have found cases with $N$
arbitrarily large and ${\bm{u}}\neq 0$ where there are infinitely many
solutions of Problem 4.1 because $P_{N}{\bm{u}}=0$.
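The observed case $|{\bm{k}}|<N$ of this example can be verified directly, and it doubles as a minimal instance of Problem 4.1. The sketch below uses illustrative parameter values, with `k2` standing for $|{\bm{k}}|^{2}$ and $\kappa_{0}$, $\mu$ as plain numbers: it checks the displayed cost formula against the stationary nudged mode and confirms by grid search that the cost vanishes only at $\gamma=\nu$.

```python
# Closed-form check of Example 5.4 in the observed case |k| < N: the nudged
# stationary mode is v = (f + mu*u)/(gamma*kap0^2*k2 + mu), and the projected
# cost vanishes exactly at gamma = nu. All parameter values are illustrative.
nu, kap0, k2, mu, f = 0.5, 1.0, 4.0, 30.0, 1.0   # k2 = |k|^2
u = f / (nu * kap0**2 * k2)                      # stationary reference mode

def cost(gamma):
    v = (f + mu * u) / (gamma * kap0**2 * k2 + mu)
    return abs(v - u)

# matches the displayed formula |f|/(gamma kap0^2 |k|^2 + mu) * |gamma - nu|/nu
gamma = 0.8
formula = f / (gamma * kap0**2 * k2 + mu) * abs(gamma - nu) / nu
assert abs(cost(gamma) - formula) < 1e-12

# grid search (Problem 4.1 in this toy): the minimizer sits at gamma = nu
grid = [0.1 + 0.001 * i for i in range(1000)]
best = min(grid, key=cost)
print(best)  # ≈ 0.5 = nu
```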
## 6. Lipschitz Continuity of the Inverse Problem
In the previous section we derived conditions for Problem 4.1 to have a unique
solution. In so doing, we found conditions on $N$ for Problem 4.1 to be a
well-defined mapping of data, $P_{N}{\bm{u}}$, to viscosity. We now provide
conditions for this mapping, which can be thought of as the inverse of the
determining map, $\mathbb{W}(\cdot,P_{N}{\bm{u}})^{-1}$, to be Lipschitz
continuous; i.e., we derive bounds on the viscosity discrepancy in terms of
$\mathcal{L}$. In addition, we propose an algorithm for solving Problem 4.1,
which is revealed by the proof of the Lipschitz continuity.
When $\mathcal{L}>0$, the difference between two trajectories corresponding
to different viscosities evolves in time and must be integrated. As a
consequence, the following results explicitly involve the time-derivative of
the data, $P_{N}{\bm{u}}$.
Before focusing on the case where ${\bm{\phi}}=P_{N}{\bm{u}}$ for some solution
${\bm{u}}$ of (2.3), we first obtain a bound valid for the determining map in
the general setting.
###### Lemma 6.1.
Let ${\bm{\phi}}\in C_{b}(V)$ and $\gamma_{1},\gamma_{2}\in[\nu_{0},\nu_{1}]$,
where $0<\nu_{0}<\nu_{1}<\infty$ are given. Let
$\mu_{0}\geq\frac{\nu_{1}}{\nu_{0}},$ and
$N>\frac{8}{\nu_{0}\kappa_{0}}M_{V}({\bm{\phi}}).$
Let ${\bm{v}}_{1}=\mathbb{W}(\gamma_{1},{\bm{\phi}})$ and
${\bm{v}}_{2}=\mathbb{W}(\gamma_{2},{\bm{\phi}})$. Then for any interval of
time $[s,t]\subset\mathbb{R}$,
$|\gamma_{1}-\gamma_{2}|\left(\inf_{[s,t]}|P_{N}{\bm{v}}_{1}|^{2}-\frac{1}{N}M_{1}\right)\leq
M_{2}\|P_{N}({\bm{v}}_{1}-{\bm{v}}_{2})\|_{C_{b}(H)},$
where
$M_{1}=\frac{2\sqrt{5}c_{0}^{2}}{\nu_{0}\kappa_{0}^{2}}\;M_{H}({\bm{\phi}})\,(M_{V}({\bm{\phi}}))^{2}$
and
$M_{2}=\mu\frac{1+\delta^{N^{2}}}{1-\delta^{N^{2}}}\|A^{-1}P_{N}{\bm{v}}_{1}\|_{C_{b}(H)}+\|A^{-1}\partial_{t}P_{N}{\bm{v}}_{1}\|_{C_{b}(H)}+\gamma_{2}\|P_{N}{\bm{v}}_{1}\|_{C_{b}(H)},$
and $\delta=e^{-\mu_{0}\nu_{0}\kappa_{0}^{2}(t-s)}$.
###### Proof.
Define ${\bm{w}}={\bm{v}}_{1}-{\bm{v}}_{2}$ and
${\bm{\psi}}=P_{N}{\bm{v}}_{1}.$ Writing the evolution equation for ${\bm{w}}$
using (2.11),
$\partial_{t}{\bm{w}}+\gamma_{2}A{\bm{w}}+(\gamma_{1}-\gamma_{2})A{\bm{v}}_{1}+B\left({\bm{v}}_{1},{\bm{w}}\right)+B\left({\bm{w}},{\bm{v}}_{2}\right)+\mu
P_{N}{\bm{w}}=0,$
and taking an inner-product with respect to $A^{-1}{\bm{\psi}}$, we have
$\left<\partial_{t}{\bm{w}},A^{-1}{\bm{\psi}}\right>+\gamma_{2}\left<{\bm{w}},{\bm{\psi}}\right>+(\gamma_{1}-\gamma_{2})|{\bm{\psi}}|^{2}\\\
+b\left({\bm{v}}_{1},{\bm{w}},A^{-1}{\bm{\psi}}\right)+b\left({\bm{w}},{\bm{v}}_{2},A^{-1}{\bm{\psi}}\right)+\mu\left<P_{N}{\bm{w}},A^{-1}{\bm{\psi}}\right>=0.$
Using the product rule, we have
$\frac{d}{dt}\left<{\bm{w}},A^{-1}{\bm{\psi}}\right>=\left<\partial_{t}{\bm{w}},A^{-1}{\bm{\psi}}\right>+\left<{\bm{w}},\partial_{t}A^{-1}{\bm{\psi}}\right>$
and using the facts that $P_{N}{\bm{\psi}}={\bm{\psi}}$, $P_{N}$ is self-adjoint,
and $P_{N}$ commutes with $A^{-1}$, we can rewrite the last identity as
$\frac{d}{dt}\left<A^{-1}P_{N}{\bm{w}},{\bm{\psi}}\right>-\left<A^{-1}P_{N}{\bm{w}},\partial_{t}{\bm{\psi}}\right>+\gamma_{2}\left<P_{N}{\bm{w}},{\bm{\psi}}\right>+(\gamma_{1}-\gamma_{2})|{\bm{\psi}}|^{2}\\\
+b\left({\bm{v}}_{1},{\bm{w}},A^{-1}{\bm{\psi}}\right)+b\left({\bm{w}},{\bm{v}}_{2},A^{-1}{\bm{\psi}}\right)+\mu\left<A^{-1}P_{N}{\bm{w}},{\bm{\psi}}\right>=0.$
Let ${\bm{\beta}}:=\left<A^{-1}P_{N}{\bm{w}},{\bm{\psi}}\right>$. Then
$\dot{{\bm{\beta}}}+\mu{\bm{\beta}}-\left<A^{-1}P_{N}{\bm{w}},\partial_{t}{\bm{\psi}}\right>+\gamma_{2}\left<P_{N}{\bm{w}},{\bm{\psi}}\right>+(\gamma_{1}-\gamma_{2})|{\bm{\psi}}|^{2}\\\
+b\left({\bm{v}}_{1},{\bm{w}},A^{-1}{\bm{\psi}}\right)+b\left({\bm{w}},{\bm{v}}_{2},A^{-1}{\bm{\psi}}\right)=0,$
so, multiplying by the integrating factor $e^{\mu(\tau-s)}$ and taking the
time-integral over the interval $[s,t]$, we have
${\bm{\beta}}(t)-e^{-\mu(t-s)}{\bm{\beta}}(s)+(\gamma_{1}-\gamma_{2})\int_{s}^{t}e^{-\mu(t-\tau)}|{\bm{\psi}}|^{2}\>d\tau\\\
=\int_{s}^{t}e^{-\mu(t-\tau)}\left(\left<A^{-1}P_{N}{\bm{w}},\partial_{t}{\bm{\psi}}\right>-\gamma_{2}\left<P_{N}{\bm{w}},{\bm{\psi}}\right>\right)\;d\tau\\\
-\int_{s}^{t}e^{-\mu(t-\tau)}\left(b\left({\bm{v}}_{1},{\bm{w}},A^{-1}{\bm{\psi}}\right)+b\left({\bm{w}},{\bm{v}}_{2},A^{-1}{\bm{\psi}}\right)\right)\;d\tau$
(6.1)
Therefore,
$|\gamma_{1}-\gamma_{2}|\frac{1}{\mu}\left(1-e^{-\mu(t-s)}\right)\inf_{[s,t]}|{\bm{\psi}}|^{2}\leq|{\bm{\beta}}(t)|+e^{-\mu(t-s)}|{\bm{\beta}}(s)|\\\
+\frac{1}{\mu}\left(1-e^{-\mu(t-s)}\right)\sup_{[s,t]}\left(|A^{-1}\partial_{t}{\bm{\psi}}|+\gamma_{2}|{\bm{\psi}}|\right)\|P_{N}{\bm{w}}\|_{C_{b}(H)}\\\
+\frac{1}{\mu}\left(1-e^{-\mu(t-s)}\right)\sup_{[s,t]}\left((\|{\bm{v}}_{1}\|_{L^{4}}+\|{\bm{v}}_{2}\|_{L^{4}})\|A^{-\frac{1}{2}}{\bm{\psi}}\|_{L^{4}}\right)\|{\bm{w}}\|_{C_{b}(H)}.$
Note that for all $t$,
$|{\bm{\beta}}(t)|\leq|P_{N}{\bm{w}}||A^{-1}{\bm{\psi}}|\leq\|P_{N}{\bm{w}}\|_{C_{b}(H)}\|A^{-1}{\bm{\psi}}\|_{C_{b}(H)}<\infty,$
and letting
$\delta^{N^{2}}=e^{-\mu(t-s)}=e^{-\mu_{0}\nu_{0}\kappa_{0}^{2}N^{2}(t-s)}$ (so
$t-s=\frac{\ln(\frac{1}{\delta})}{\mu_{0}\nu_{0}\kappa_{0}^{2}}$), we can
replace the previous inequality with
$|\gamma_{1}-\gamma_{2}|\inf_{[s,t]}|{\bm{\psi}}|^{2}\\\
\leq\left(\mu\frac{1+\delta^{N^{2}}}{1-\delta^{N^{2}}}\|A^{-1}{\bm{\psi}}\|_{C_{b}(H)}+\|A^{-1}\partial_{t}{\bm{\psi}}\|_{C_{b}(H)}+\gamma_{2}\|{\bm{\psi}}\|_{C_{b}(H)}\right)\|P_{N}{\bm{w}}\|_{C_{b}(H)}\\\
+\sup_{[s,t]}\left((\|{\bm{v}}_{1}\|_{L^{4}}+\|{\bm{v}}_{2}\|_{L^{4}})\|A^{-\frac{1}{2}}{\bm{\psi}}\|_{L^{4}}\right)\|{\bm{w}}\|_{C_{b}(H)}.$
Now, by Corollary 3.4 with $p=0$,
$\|{\bm{w}}\|_{C_{b}(H)}\leq\sqrt{5}\frac{|\gamma_{1}-\gamma_{2}|}{\left((\gamma_{1}+\gamma_{2})/2\right)^{1/2}}\frac{\|\frac{1}{2}({\bm{v}}_{1}+{\bm{v}}_{2})\|_{C_{b}(V)}}{\sqrt{\nu_{0}}\kappa_{0}N}\\\
\leq\frac{\sqrt{5}}{2}\frac{|\gamma_{1}-\gamma_{2}|}{\nu_{0}\kappa_{0}N}(\|{\bm{v}}_{1}\|_{C_{b}(V)}+\|{\bm{v}}_{2}\|_{C_{b}(V)}),$
so
$|\gamma_{1}-\gamma_{2}|\left(\inf_{[s,t]}|{\bm{\psi}}|^{2}-\frac{1}{N}\widetilde{M}_{1}\right)\leq
M_{2}\|P_{N}{\bm{w}}\|_{C_{b}(H)}$
where
$\widetilde{M}_{1}=\frac{\sqrt{5}}{2\nu_{0}\kappa_{0}}(\|{\bm{v}}_{1}\|_{C_{b}(V)}+\|{\bm{v}}_{2}\|_{C_{b}(V)})\sup_{[s,t]}\left((\|{\bm{v}}_{1}\|_{L^{4}}+\|{\bm{v}}_{2}\|_{L^{4}})\|A^{-\frac{1}{2}}{\bm{\psi}}\|_{L^{4}}\right)$
and
$M_{2}=\mu\frac{1+\delta^{N^{2}}}{1-\delta^{N^{2}}}\|A^{-1}{\bm{\psi}}\|_{C_{b}(H)}+\|A^{-1}\partial_{t}{\bm{\psi}}\|_{C_{b}(H)}+\gamma_{2}\|{\bm{\psi}}\|_{C_{b}(H)}.$
Finally, we use (2.15) to bound the $L^{4}$ norms,
$\sup_{[s,t]}\left(\|{\bm{v}}_{1}\|_{L^{4}}+\|{\bm{v}}_{2}\|_{L^{4}}\right)\leq
c_{0}\sup_{[s,t]}\left(|{\bm{v}}_{1}|^{\frac{1}{2}}\|{\bm{v}}_{1}\|^{\frac{1}{2}}+|{\bm{v}}_{2}|^{\frac{1}{2}}\|{\bm{v}}_{2}\|^{\frac{1}{2}}\right)\\\
\leq
c_{0}\left(\|{\bm{v}}_{1}\|_{C_{b}(H)}^{\frac{1}{2}}\|{\bm{v}}_{1}\|_{C_{b}(V)}^{\frac{1}{2}}+\|{\bm{v}}_{2}\|_{C_{b}(H)}^{\frac{1}{2}}\|{\bm{v}}_{2}\|_{C_{b}(V)}^{\frac{1}{2}}\right)$
and (additionally using (2.10)),
$\sup_{[s,t]}\left(\|A^{-\frac{1}{2}}{\bm{\psi}}\|_{L^{4}}\right)\leq
c_{0}\|A^{-\frac{1}{2}}{\bm{\psi}}\|_{C_{b}(H)}^{\frac{1}{2}}\|A^{-\frac{1}{2}}{\bm{\psi}}\|_{C_{b}(V)}^{\frac{1}{2}}\\\
\leq
c_{0}\frac{1}{\kappa_{0}^{\frac{1}{2}}}\|{\bm{\psi}}\|_{C_{b}(H)}^{\frac{1}{2}}\frac{1}{\kappa_{0}^{\frac{1}{2}}}\|{\bm{\psi}}\|_{C_{b}(V)}^{\frac{1}{2}}\leq\frac{c_{0}}{\kappa_{0}}\|{\bm{v}}_{1}\|_{C_{b}(H)}^{\frac{1}{2}}\|{\bm{v}}_{1}\|_{C_{b}(V)}^{\frac{1}{2}}$
so that we can replace $\widetilde{M}_{1}$ with $M_{1}$ using Lemma 3.2:
$\widetilde{M}_{1}\leq\frac{\sqrt{5}}{2\nu_{0}\kappa_{0}}\;2M_{V}({\bm{\phi}})\;2c_{0}\sqrt{M_{H}({\bm{\phi}})}\sqrt{M_{V}({\bm{\phi}})}\;\frac{c_{0}}{\kappa_{0}}\sqrt{M_{H}({\bm{\phi}})}\sqrt{M_{V}({\bm{\phi}})}=M_{1}.$
∎
In the following Theorem, we apply Lemma 6.1 to the case where ${\bm{v}}_{1}$
is a solution of (2.3) with viscosity $\nu$.
###### Theorem 6.2.
Let ${\bm{u}}\in C_{b}(V)$ be a solution of (2.3) with viscosity
$\nu\in[\nu_{0},\nu_{1}]$, where $0<\nu_{0}<\nu_{1}<\infty$ are given, and
suppose that for some $n\geq 1$ and a time interval, $[s,t]$,
$\inf_{[s,t]}|P_{n}{\bm{u}}|^{2}>0.$
Choose $\mu_{0}\geq\frac{\nu_{1}}{\nu_{0}}.$ If
$N>\max\left\\{\frac{8}{\nu_{0}\kappa_{0}}M_{V}({\bm{u}}),\>n,\>2\frac{M_{1}}{\inf_{[s,t]}|P_{N}{\bm{u}}|^{2}}\right\\},$
then for any $\gamma\in[\nu_{0},\nu_{1}]$,
$|\nu-\gamma|\leq
2\frac{M_{2}}{\inf_{[s,t]}|P_{N}{\bm{u}}|^{2}}\mathcal{L}(\gamma),$
where $M_{1}$ and $M_{2}$ are defined as in Lemma 6.1 with
${\bm{v}}_{1}={\bm{u}}$ and $\phi=P_{N}{\bm{u}}$.
###### Proof.
By the assumption that ${\bm{u}}\in C_{b}(V),$ we may take
$R=\|P_{N}{\bm{u}}\|_{C_{b}(V)}\leq\|{\bm{u}}\|_{C_{b}(V)}<\infty,$ in Theorem
3.6 and conclude that ${\bm{u}}=\mathbb{W}(\nu,P_{N}{\bm{u}}).$ The rest is an
immediate consequence of Lemma 6.1. ∎
###### Remark 6.3.
We chose to state Theorem 6.2 with the _a priori_ assumption that there exists
an interval of time where finitely many modes of ${\bm{u}}$ are bounded away
from zero. However, given the continuity of ${\bm{u}}$ in time, the weaker
assumption that ${\bm{u}}\neq 0$ is sufficient to guarantee the existence of
such a time interval.
To see this, if ${\bm{u}}\neq\bm{0}$, there exists $t_{0}$ such that
${\bm{u}}(t_{0})\neq\bm{0}$. This implies that there exist $\epsilon>0$ and
$\delta>0$ such that $\inf_{[t_{0}-\epsilon,t_{0}+\epsilon]}|{\bm{u}}(t)|\geq\delta$.
Proceeding as in the proof of Theorem 5.3, it follows that there exists
$n_{0}$ sufficiently large such that
$\inf_{[t_{0}-\epsilon,t_{0}+\epsilon]}|P_{n_{0}}{\bm{u}}(t)|\geq\frac{1}{2}\delta$.
One can in fact show that if $\bm{0}\notin\mathcal{A}_{\nu}$, then there
exists $\delta>0$ and $t_{0}>0$ such that
$\inf_{[t_{0},\infty)}|P_{n_{0}}{\bm{u}}(t)|\geq\delta$, where $n_{0}$ depends
only on the Grashof number and the distance of $\bm{0}$ from the attractor.
## 7\. Algorithm for Solving the Inverse Problem
So far we have shown the validity of Problem 4.1 as a means of parameter
recovery. We now present a method of computing its solution. In [4], the
authors proposed the following update for the approximate viscosity, $\gamma$,
as part of an algorithm to recover the true viscosity, $\nu$, using the
available data only:
$\nu\approx\gamma+\frac{\frac{1}{2}\frac{1}{t-s}\left(|P_{N}{\bm{w}}(t)|^{2}-|P_{N}{\bm{w}}(s)|^{2}\right)+\mu\left<|P_{N}{\bm{w}}|^{2}\right>_{s}^{t}}{\left<\left<P_{N}A\mathbb{W}(\gamma,P_{N}{\bm{u}}),P_{N}{\bm{w}}\right>\right>_{s}^{t}},$
(7.1)
where $P_{N}{\bm{w}}=P_{N}{\bm{u}}-P_{N}\mathbb{W}(\gamma,P_{N}{\bm{u}})$.
Rigorous conditions for the convergence of this algorithm were obtained in
[12].
We provide a proof for the convergence of a slightly modified version of this
algorithm, which is presented in Algorithm 1. This new algorithm is based on
the approach taken in Lemma 6.1. In comparison to (7.1), the new update
formula (see the definition of $\Gamma$ in Algorithm 1) is computed using
inner-products with the data, $P_{N}{\bm{u}}$, and the time derivative of the
data, as opposed to the observable error, $P_{N}{\bm{u}}-P_{N}{\bm{w}}$.
Convolutions are taken with the decaying exponential factor $\tau\mapsto
e^{-\mu\tau}$ over a time interval, $[s,t]$; with $\mu$ fixed, we denote this
operation by $\left<\cdot\right>_{s}^{t}$,
$\left<\phi\right>_{s}^{t}:=\frac{1}{t-s}\int_{s}^{t}e^{-\mu(t-\tau)}\phi(\tau)\;d\tau,\quad\forall\phi\in
C([s,t];\mathbb{R}).$
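Numerically, this exponentially weighted time average is a one-dimensional quadrature. Below is a minimal numpy sketch; the function name `exp_weighted_avg` and the trapezoidal discretization are our own illustrative choices, not from the text:

```python
import numpy as np

def exp_weighted_avg(phi, times, mu):
    """Approximate <phi>_s^t = (1/(t-s)) * int_s^t e^{-mu (t - tau)} phi(tau) dtau
    by trapezoidal quadrature on the given sample times."""
    phi = np.asarray(phi, dtype=float)
    times = np.asarray(times, dtype=float)
    s, t = times[0], times[-1]
    f = np.exp(-mu * (t - times)) * phi          # kernel-weighted integrand
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(times))
    return integral / (t - s)

# sanity check against the closed form for a constant signal:
#   <c>_s^t = c * (1 - e^{-mu (t - s)}) / (mu * (t - s))
tau = np.linspace(0.0, 2.0, 2001)
mu = 1.5
approx = exp_weighted_avg(np.full_like(tau, 3.0), tau, mu)
exact = 3.0 * (1.0 - np.exp(-mu * 2.0)) / (mu * 2.0)
```

For a constant input the average reduces to the closed form $c\,(1-e^{-\mu(t-s)})/(\mu(t-s))$, which the last lines compare against.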
Algorithm 1 Viscosity inference from data using the determining map
1:$0<\nu_{0}<\nu_{1}$ such that $\nu\in[\nu_{0},\nu_{1}]$ $\triangleright$
bounds for the unknown viscosity, $\nu$
2:$N,\mu_{0}$ $\triangleright$ both chosen sufficiently large (see Theorem
7.1)
3:$P_{N}{\bm{u}}(t)\quad\forall t\in\mathbb{R}$ $\triangleright$ observations
of the reference solution
4:$\>s<t\>$ such that $\left<|P_{N}{\bm{u}}|^{2}\right>_{s}^{t}>0$
5:$\epsilon_{1}\in(0,\nu_{0})$ $\triangleright$ the convergence factor (also
limited by $N$)
6:$\epsilon_{2}>0$ $\triangleright$ a stopping tolerance for the viscosity
error
7:$\gamma\leftarrow\gamma_{0}$ $\triangleright$ an initial guess for the true
viscosity
8:repeat
9: $\gamma_{0}\leftarrow\gamma$
10: $\gamma\leftarrow\Gamma(\gamma)$
11:until
$|\gamma-\gamma_{0}|/|\nu_{1}-\nu_{0}|\leq\epsilon_{2}/(\nu_{1}-\nu_{0}+\epsilon_{1})$
$\triangleright$ the stopping condition
12:return $\gamma$
13:procedure $\Gamma$($\gamma$)
14: ${\bm{v}}\leftarrow\mathbb{W}(\gamma,P_{N}{\bm{u}})$ $\triangleright$
compute the solution of the determining map
15: ${\bm{\psi}}\leftarrow P_{N}{\bm{u}}-P_{N}{\bm{v}}$ $\triangleright$
compute the observable velocity error
16:
$c_{1}\leftarrow\frac{1}{t-s}\left(\left<A^{-1}{\bm{\psi}}(t),P_{N}{\bm{u}}(t)\right>-e^{-\mu(t-s)}\left<A^{-1}{\bm{\psi}}(s),P_{N}{\bm{u}}(s)\right>\right)$
17:
$c_{2}\leftarrow-\left<\left<A^{-1}{\bm{\psi}},\partial_{t}P_{N}{\bm{u}}\right>\right>_{s}^{t}$
18:
$c_{3}\leftarrow\gamma\left<\left<{\bm{\psi}},P_{N}{\bm{u}}\right>\right>_{s}^{t}$
19: return
$\gamma-\frac{c_{1}+c_{2}+c_{3}}{\left<|P_{N}{\bm{u}}|^{2}\right>_{s}^{t}}$
$\triangleright$ return a new approximation for the viscosity
20:end procedure
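The outer loop of Algorithm 1 (lines 7–12) is a plain fixed-point iteration of $\Gamma$. A Python sketch of that loop follows; the function name and signature are our own, and the map $\Gamma$ (which in practice requires solving the determining map $\mathbb{W}(\gamma,P_{N}{\bm{u}})$ and evaluating $c_{1},c_{2},c_{3}$) is passed in as a callable:

```python
def recover_viscosity(Gamma, gamma0, nu0, nu1, eps1, eps2, max_iter=1000):
    """Fixed-point iteration of Algorithm 1 (lines 7-12).

    Gamma  -- the update map of lines 13-20, supplied by the caller;
    gamma0 -- initial guess for the true viscosity;
    stops once |gamma - gamma_prev| / (nu1 - nu0)
                 <= eps2 / (nu1 - nu0 + eps1),
    which, by Theorem 7.1, certifies |gamma - nu| <= eps2.
    """
    gamma = gamma0
    for _ in range(max_iter):
        gamma_prev = gamma
        gamma = Gamma(gamma)                     # one application of Gamma
        if (abs(gamma - gamma_prev) / (nu1 - nu0)
                <= eps2 / (nu1 - nu0 + eps1)):   # stopping condition, line 11
            break
    return gamma
```

Under the hypotheses of Theorem 7.1, $\Gamma$ contracts with factor $\delta=\epsilon_{1}/(\nu_{1}-\nu_{0}+\epsilon_{1})$, so the loop terminates and the stopping test certifies the viscosity error.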
The benefit of this new update formula is that the condition for the
denominator to be nonzero is an _a priori_ condition depending on the
observations of the reference solution only, as opposed to an _a posteriori_
condition which depends on the observable error, and so cannot be verified
until the update is computed.
Conditions for the convergence of Algorithm 1 are given in the following
theorem.
###### Theorem 7.1.
Let ${\bm{u}}$ be a solution of (2.3) with viscosity
$\nu\in[\nu_{0},\nu_{1}]$, and suppose there exists an $n>1$ and a time
interval $[s,t]$ over which
$\left<|P_{n}{\bm{u}}|^{2}\right>_{s}^{t}>0.$
Let $\epsilon_{1}\in(0,\nu_{0})$ and fix
$\mu_{0}\geq\frac{\nu_{1}+\epsilon_{1}}{\nu_{0}-\epsilon_{1}}.$ If
$N>\max\left\\{\frac{8}{\nu_{0}\kappa_{0}}M_{V}({\bm{u}}),\>n,\>\frac{\nu_{1}-\nu_{0}+\epsilon_{1}}{\epsilon_{1}}\frac{M_{1}}{\left<|P_{N}{\bm{u}}|^{2}\right>_{s}^{t}}\right\\},$
where
$M_{1}:=\frac{2\sqrt{5}c_{0}^{2}}{\nu_{0}\kappa_{0}^{2}}\;M_{H}({\bm{u}})\,(M_{V}({\bm{u}}))^{2},$
then, with $\Gamma$ as defined in Algorithm 1, for any choice of
$\gamma\in[\nu_{0}-\epsilon_{1},\nu_{1}+\epsilon_{1}]$,
$|\nu-\Gamma(\gamma)|<\frac{\epsilon_{1}}{\nu_{1}-\nu_{0}+\epsilon_{1}}|\nu-\gamma|<|\nu-\gamma|,$
and $\Gamma(\gamma)\in[\nu_{0}-\epsilon_{1},\nu_{1}+\epsilon_{1}].$ Therefore,
$\Gamma$ can be applied iteratively, and the result converges to $\nu$.
Furthermore, let $\epsilon_{2}>0$ be a tolerance for stopping. We can infer
closeness to the true viscosity by examining the residuals:
$\frac{|\Gamma^{k+1}(\gamma)-\Gamma^{k}(\gamma)|}{(\nu_{1}-\nu_{0})}\leq\frac{\epsilon_{2}}{\nu_{1}-\nu_{0}+\epsilon_{1}}\implies|\Gamma^{k}(\gamma)-\nu|\leq\epsilon_{2}.$
###### Proof.
Let ${\bm{v}}=\mathbb{W}(\gamma,P_{N}{\bm{u}})$, and let
${\bm{w}}={\bm{u}}-{\bm{v}}=\mathbb{W}(\nu,P_{N}{\bm{u}})-\mathbb{W}(\gamma,P_{N}{\bm{u}})$.
Proceeding as in Lemma 6.1 up to equation (6.1), we obtain
${\bm{\beta}}(t)-e^{-\mu(t-s)}{\bm{\beta}}(s)+(\nu-\gamma)\left<|P_{N}{\bm{u}}|^{2}\right>_{s}^{t}=\left<\left<A^{-1}P_{N}{\bm{w}},\partial_{t}P_{N}{\bm{u}}\right>-\gamma\left<P_{N}{\bm{w}},P_{N}{\bm{u}}\right>\right>_{s}^{t}\\\
-\left<b\left({\bm{u}},{\bm{w}},A^{-1}P_{N}{\bm{u}}\right)+b\left({\bm{w}},{\bm{v}},A^{-1}P_{N}{\bm{u}}\right)\right>_{s}^{t}$
where ${\bm{\beta}}:=\left<A^{-1}P_{N}{\bm{w}},P_{N}{\bm{u}}\right>.$ Moving the
first term on the right hand side over to the left, and dividing by the term
multiplying $(\nu-\gamma)$, we have
$\nu-\Gamma(\gamma)=-\frac{1}{\left<|P_{N}{\bm{u}}|^{2}\right>_{s}^{t}}\left(\left<b\left({\bm{u}},{\bm{w}},A^{-1}P_{N}{\bm{u}}\right)\right>_{s}^{t}+\left<b\left({\bm{w}},{\bm{v}},A^{-1}P_{N}{\bm{u}}\right)\right>_{s}^{t}\right)$
where $\Gamma(\gamma)$ is as defined in Algorithm 1.
We then take the absolute value of both sides and estimate the remaining terms
on the right hand side as was done in Lemma 6.1, and obtain
$|\nu-\Gamma(\gamma)|\leq\frac{1}{N}\frac{M_{1}}{\left<|P_{N}{\bm{u}}|^{2}\right>_{s}^{t}}|\nu-\gamma|.$
Let $\delta>0$. Taking
$N>\frac{1}{\delta}\frac{M_{1}}{\left<|P_{N}{\bm{u}}|^{2}\right>_{s}^{t}}$, we
have
$|\nu-\Gamma(\gamma)|\leq\delta|\nu-\gamma|.$
Choosing $\delta<1$ ensures $\Gamma(\gamma)$ is closer to $\nu$ than $\gamma$,
but we also need to ensure that $\Gamma(\gamma)$ satisfies the feasibility
condition: $\Gamma(\gamma)\in[\nu_{0}-\epsilon_{1},\nu_{1}+\epsilon_{1}]$.
From the previous inequality and the feasibility condition imposed on
$\gamma$, as well as the assumption $\nu\in[\nu_{0},\nu_{1}]$, we have
$\nu_{0}-\delta(\nu_{1}-\nu_{0}+\epsilon_{1})\leq\Gamma(\gamma)\leq\nu_{1}+\delta(\nu_{1}-\nu_{0}+\epsilon_{1}).$
Therefore, the feasibility condition for $\Gamma(\gamma)$ is satisfied with
$\delta=\frac{\epsilon_{1}}{\nu_{1}-\nu_{0}+\epsilon_{1}}.$ This expression
for $\delta$ is an increasing function of $\epsilon_{1}\in(0,\nu_{0})$, and so
is bounded by its value at $\epsilon_{1}=\nu_{0}$. Therefore,
$\delta=\frac{\epsilon_{1}}{\nu_{1}-\nu_{0}+\epsilon_{1}}<\frac{\nu_{0}}{\nu_{1}}<1$,
and iterating $\Gamma$ results in convergence to $\nu$:
$|\Gamma^{k}(\gamma)-\nu|\leq\delta^{k}|\gamma-\nu|\to 0\>\text{ as
}\>k\to\infty.$
Now, for the stopping condition, note that
$|\Gamma^{k}(\gamma)-\nu|=|\Gamma^{k}(\gamma)-\Gamma^{k+1}(\gamma)+\Gamma^{k+1}(\gamma)-\nu|\leq|\Gamma^{k+1}(\gamma)-\Gamma^{k}(\gamma)|+|\Gamma^{k+1}(\gamma)-\nu|$
so,
$|\Gamma^{k+1}(\gamma)-\Gamma^{k}(\gamma)|\geq|\Gamma^{k}(\gamma)-\nu|-|\Gamma^{k+1}(\gamma)-\nu|\geq(1-\delta)|\Gamma^{k}(\gamma)-\nu|.$
Therefore, if after $k$ iterations we have
$|\Gamma^{k}(\gamma)-\nu|>\epsilon_{2}$, then
$|\Gamma^{k+1}(\gamma)-\Gamma^{k}(\gamma)|>(1-\delta)\epsilon_{2}=\frac{(\nu_{1}-\nu_{0})\epsilon_{2}}{\nu_{1}-\nu_{0}+\epsilon_{1}}.$
We get the stopping condition from the last statement by taking its
contrapositive. ∎
In Theorem 7.1, we first chose the convergence factor $\epsilon_{1}$ and then
chose $N$. However, we can instead let the data dictate the convergence
factor; if
$N>\frac{\nu_{1}}{\nu_{0}}\frac{M_{1}}{\left<|P_{N}{\bm{u}}|^{2}\right>_{s}^{t}}$
then
$\frac{(\nu_{1}-\nu_{0})M_{1}}{N\left<|P_{N}{\bm{u}}|^{2}\right>_{s}^{t}-M_{1}}<\nu_{0},$
so we can choose $\epsilon_{1}$ such that
$\nu_{0}>\epsilon_{1}\geq\frac{(\nu_{1}-\nu_{0})M_{1}}{N\left<|P_{N}{\bm{u}}|^{2}\right>_{s}^{t}-M_{1}}\implies|\nu-\Gamma(\gamma)|<\frac{\epsilon_{1}}{\nu_{1}-\nu_{0}+\epsilon_{1}}|\nu-\gamma|.$
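For concreteness, here is a small numeric illustration of this data-driven choice of $\epsilon_{1}$; every value below is invented for illustration, not taken from the text:

```python
nu0, nu1 = 0.5, 2.0   # prior bounds on the viscosity
M1 = 3.0              # constant from Theorem 7.1 (illustrative value)
avg = 1.0             # <|P_N u|^2>_s^t, estimated from the observations
N = 20                # satisfies N > (nu1 / nu0) * M1 / avg = 12

# smallest admissible convergence factor dictated by the data:
eps1 = (nu1 - nu0) * M1 / (N * avg - M1)   # = 4.5 / 17
assert 0.0 < eps1 < nu0                    # feasible: eps1 lies in (0, nu0)
```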
## 8\. Conclusions
We extended the definition of the determining map to include viscosity as a
variable. We then studied the inverse problem of determining the viscosity
from the velocity data using the determining map, and found conditions for the
well-posedness of the inverse problem, as well as conditions for the
regularity of the inverse mapping, and defined an iterative algorithm for
obtaining its solution.
One consequence of our results is that, on the attractor, viscosity is
determined, just like all higher modes, from the low modes only. Therefore, we
have extended the concept of determining modes to include a parameter of the
equation as being determined from finitely many low modes. This also has
consequences for how different attractors (indexed by the viscosity) can
intersect, as no two solutions on the attractor can overlap for any interval
of time without the viscosities and solutions being identical.
For simplicity of presentation, we chose to frame our results as well as
Algorithm 1 in terms of solutions on the attractor, which requires having data
for the reference solution for all time and computing the solution of the
determining map for all time. However, our results easily extend to the
initial value problem, where the data are only required on a bounded interval
of time, and the determining map is solved as an initial value problem with an
arbitrary initial condition. Algorithm 1 can also be modified into an
“on-the-fly” algorithm, as was done in [4].
## References
* [1] D. A. F. Albanez and M. J. Benvenutti, Parameter analysis in continuous data assimilation for three-dimensional Brinkman-Forchheimer-extended Darcy, _preprint_ (2022). https://doi.org/10.48550/ARXIV.2210.11432
* [2] A. Azouani, E. Olson and E. S. Titi, Continuous data assimilation using general interpolant observables. _J Nonlinear Sci_ 24 (2014), 277–304. https://doi.org/10.1007/s00332-013-9189-y.
* [3] A. Biswas, C. Foias, C. F. Mondaini and E. S. Titi, Downscaling data assimilation algorithm with applications to statistical solutions of the Navier–Stokes equations, _Annales de l'Institut Henri Poincaré C, Analyse Non Linéaire_, Elsevier (2018).
* [4] E. Carlson, J. Hudson and A. Larios, Parameter recovery for the 2 dimensional Navier–Stokes equations via continuous data assimilation, SIAM J. Sci. Comput. 42 (2020), no. 1, A250-A270.
* [5] T. Colin, A. Iollo, J-B. Lagaert and O. Saut. An inverse problem for the recovery of the vascularization of a tumor, _J. Inverse Ill-Posed Probl._ 22 (2014), 759–786.
* [6] P. Constantin and C. Foias, _Navier–Stokes Equations_ , Chicago Lectures in Mathematics, University of Chicago Press, Chicago, IL, 1988.
* [7] R. E. Ewing and T. Lin, A class of parameter estimation techniques for fluid flow in porous media, _Adv. Water Resour._ 14 (1991), 89–97. Parameter identification in ground water flow, transport, and related processes, part I.
* [8] C. Foias, M. S. Jolly, R. Kravchenko and E. S. Titi, A determining form for the two-dimensional Navier-Stokes equations: The Fourier modes case, _Journal of Mathematical Physics_ 53 (2012), 115623. https://doi.org/10.1063/1.4766459.
* [9] C. Foias, M. S. Jolly, R. Kravchenko and E. S. Titi, A unified approach to determining forms for the 2D Navier-Stokes equations – the general interpolants case, _Russian Mathematical Surveys_ 69 (2014), 359–381. https://doi.org/10.1070/RM2014v069n02ABEH004891.
* [10] C. Foias, M.S. Jolly, R. Lan, R. Rupam, Y. Yang and B. Zhang, _Time analyticity with higher norm estimates for the 2D Navier–Stokes equations_ , IMA J. Appl. Math., 80 (2015), pp. 766–810.
* [11] M. A. Iglesias and C. Dawson, An iterative representer-based scheme for data inversion in reservoir modeling, _Inverse Problems_ 25 (2009), 035006 (34pp).
* [12] Vincent R. Martinez, Convergence Analysis of a Viscosity Parameter Recovery Algorithm for the 2D Navier–Stokes Equations, _Nonlinearity_ 35 (2022), 2241. https://doi.org/10.1088/1361-6544/ac5362.
* [13] B. Panchev, J. P. Whitehead and S. A. McQuarrie, Concurrent multiparameter learning demonstrated on the Kuramoto-Sivashinsky equation, _SIAM J. Sci. Comput._ 44 (2022), no. 5, A2974-A2990.
* [14] M. Raissi and G. E. Karniadakis, Hidden physics models: machine learning of nonlinear partial differential equations. _J. Comput. Phys._ 357 (2018), 125-141.
* [15] M. Raissi, P. Perdikaris and G. E. Karniadakis, Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. _J. Comput. Phys._ 378 (2019), 686-707.
* [16] R. Temam, _Navier–Stokes Equations and Nonlinear Functional Analysis_ , 2nd ed., CBMS-NSF Regional Conference Series in Applied Mathematics, 66, SIAM, Philadelphia, PA, 1995.
* [17] R. Temam, _Infinite-Dimensional Dynamical Systems in Mechanics and Physics_ , 2nd ed., Applied Mathematical Sciences, 68, Springer-Verlag, New York, 1997.
* [18] R. Temam, _Navier–Stokes Equations. Theory and Numerical Analysis_ , Studies in Mathematics and its Applications, 3rd edition, North-Holland Publishing Co., Amsterdam-New York, 1984. Reedition in the AMS Chelsea Series, AMS, Providence, 2001.
* [19] W. W-G. Yeh, Review of parameter identification procedures in groundwater hydrology: the inverse problem, _Water Resour. Res._ 22 (1986), 95–108.
## Appendix A Proof of Ladyzhenskaya’s inequality
For completeness, we provide a proof of Ladyzhenskaya’s inequality, valid on
the torus in 2D.
$\|{\bm{\phi}}\|_{L^{4}(\Omega)}\leq|{\bm{\phi}}|^{\frac{1}{2}}\left(\tfrac{1}{L}|{\bm{\phi}}|+\|{\bm{\phi}}\|\right)^{\frac{1}{2}}$
(A.1)
###### Proof.
Let ${\bm{\phi}}\in C^{1}([0,L];\mathbb{R})$ be periodic (i.e.
${\bm{\phi}}(L)={\bm{\phi}}(0)$ and ${\bm{\phi}}$ can be extended to
$\mathbb{R}$ via the relation ${\bm{\phi}}(x+nL):={\bm{\phi}}(x)\>\forall
x\in[0,L],n\in\mathbb{N}$). Denote the mean of ${\bm{\phi}}$ by
$\overline{{\bm{\phi}}}=\frac{1}{L}\int_{0}^{L}{\bm{\phi}}(x)dx.$ Then by the
Mean Value Theorem, there is a point $x_{0}\in[0,L]$ such that
${\bm{\phi}}(x_{0})=\overline{{\bm{\phi}}}.$
For any $x\in[0,L]$, we have
$\frac{1}{2}{\bm{\phi}}^{2}(x)=\frac{1}{2}\overline{{\bm{\phi}}}^{2}+\int_{x_{0}}^{x}{\bm{\phi}}(z){\bm{\phi}}^{\prime}(z)dz\leq\frac{1}{2}\overline{{\bm{\phi}}}^{2}+\int_{x_{0}}^{x}|{\bm{\phi}}(z)||{\bm{\phi}}^{\prime}(z)|dz,$
and by periodicity,
$\frac{1}{2}{\bm{\phi}}^{2}(x)=\frac{1}{2}\overline{{\bm{\phi}}}^{2}-\int_{x}^{x_{0}+L}{\bm{\phi}}(z){\bm{\phi}}^{\prime}(z)dz\leq\frac{1}{2}\overline{{\bm{\phi}}}^{2}+\int_{x}^{x_{0}+L}|{\bm{\phi}}(z)||{\bm{\phi}}^{\prime}(z)|dz,$
so
${\bm{\phi}}^{2}(x)\leq\overline{{\bm{\phi}}}^{2}+\int_{x_{0}}^{x_{0}+L}|{\bm{\phi}}(z)||{\bm{\phi}}^{\prime}(z)|dz=\overline{{\bm{\phi}}}^{2}+\int_{0}^{L}|{\bm{\phi}}(z)||{\bm{\phi}}^{\prime}(z)|dz.$
This easily extends to vector-valued ${\bm{\phi}}\in C^{1}([0,L];\mathbb{R}^{2}):$
$|{\bm{\phi}}(x)|^{2}={\bm{\phi}}_{1}^{2}(x)+{\bm{\phi}}_{2}^{2}(x)\leq\overline{{\bm{\phi}}_{1}}^{2}+\int_{0}^{L}|{\bm{\phi}}_{1}(z)||{\bm{\phi}}_{1}^{\prime}(z)|dz+\overline{{\bm{\phi}}_{2}}^{2}+\int_{0}^{L}|{\bm{\phi}}_{2}(z)||{\bm{\phi}}_{2}^{\prime}(z)|dz,$
which we can simplify by applying (2.14),
$\overline{{\bm{\phi}}_{1}}^{2}+\overline{{\bm{\phi}}_{2}}^{2}=\left(\frac{1}{L}\int_{0}^{L}{\bm{\phi}}_{1}(x)dx\right)^{2}+\left(\frac{1}{L}\int_{0}^{L}{\bm{\phi}}_{2}(x)dx\right)^{2}\\\
\leq\frac{1}{L}\int_{0}^{L}{\bm{\phi}}_{1}^{2}(x)dx+\frac{1}{L}\int_{0}^{L}{\bm{\phi}}_{2}^{2}(x)dx=\frac{1}{L}\int_{0}^{L}|{\bm{\phi}}(x)|^{2}dx$
and Cauchy-Schwarz,
$|{\bm{\phi}}_{1}(z)||{\bm{\phi}}_{1}^{\prime}(z)|+|{\bm{\phi}}_{2}(z)||{\bm{\phi}}_{2}^{\prime}(z)|=\begin{bmatrix}|{\bm{\phi}}_{1}(z)|\\\
|{\bm{\phi}}_{2}(z)|\end{bmatrix}\cdot\begin{bmatrix}|{\bm{\phi}}_{1}^{\prime}(z)|\\\
|{\bm{\phi}}_{2}^{\prime}(z)|\end{bmatrix}\leq|{\bm{\phi}}(z)||{\bm{\phi}}^{\prime}(z)|$
to obtain
$|{\bm{\phi}}(x)|^{2}\leq\frac{1}{L}\int_{0}^{L}|{\bm{\phi}}(z)|^{2}dz+\int_{0}^{L}|{\bm{\phi}}(z)||{\bm{\phi}}^{\prime}(z)|dz.$
Now, if ${\bm{\phi}}\in C^{1}(\Omega;\mathbb{R}^{2})$, using the estimates for a
one-dimensional domain, for any $(x,y)\in\Omega,$ we have
$|{\bm{\phi}}(x,y)|^{4}=|{\bm{\phi}}(x,y)|^{2}|{\bm{\phi}}(x,y)|^{2}\\\
\leq\left(\frac{1}{L}\int_{0}^{L}|{\bm{\phi}}(z,y)|^{2}dz+\int_{0}^{L}|{\bm{\phi}}(z,y)||\partial_{1}{\bm{\phi}}(z,y)|dz\right)\\\
\left(\frac{1}{L}\int_{0}^{L}|{\bm{\phi}}(x,z)|^{2}dz+\int_{0}^{L}|{\bm{\phi}}(x,z)||\partial_{2}{\bm{\phi}}(x,z)|dz\right),$
so
$\|{\bm{\phi}}\|_{L^{4}(\Omega)}^{4}=\int_{0}^{L}\int_{0}^{L}|{\bm{\phi}}(x,y)|^{4}\,dx\,dy\\\
\leq\int_{0}^{L}\left(\frac{1}{L}\int_{0}^{L}|{\bm{\phi}}(z,y)|^{2}dz+\int_{0}^{L}|{\bm{\phi}}(z,y)||\partial_{1}{\bm{\phi}}(z,y)|dz\right)dy\\\
\int_{0}^{L}\left(\frac{1}{L}\int_{0}^{L}|{\bm{\phi}}(x,z)|^{2}dz+\int_{0}^{L}|{\bm{\phi}}(x,z)||\partial_{2}{\bm{\phi}}(x,z)|dz\right)dx\\\
\leq\left(\frac{1}{L}|{\bm{\phi}}|^{2}+|{\bm{\phi}}||\partial_{1}{\bm{\phi}}|\right)\left(\frac{1}{L}|{\bm{\phi}}|^{2}+|{\bm{\phi}}||\partial_{2}{\bm{\phi}}|\right)\\\
\leq\left(\frac{1}{L}|{\bm{\phi}}|^{2}+|{\bm{\phi}}|\|{\bm{\phi}}\|\right)\left(\frac{1}{L}|{\bm{\phi}}|^{2}+|{\bm{\phi}}|\|{\bm{\phi}}\|\right),$
where the second to last line follows from the Cauchy-Schwarz inequality. ∎
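As an independent sanity check (not part of the proof), one can verify (A.1) numerically for a smooth periodic field, here ${\bm{\phi}}=(\sin x\cos y,\,0)$ on the torus with $L=2\pi$, computing all norms by the rectangle rule, which is exact for trigonometric polynomials on a uniform periodic grid:

```python
import numpy as np

L = 2 * np.pi
n = 256
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
h = L / n

phi = np.sin(X) * np.cos(Y)        # first component; the second is zero
phi_x = np.cos(X) * np.cos(Y)      # exact partial derivatives
phi_y = -np.sin(X) * np.sin(Y)

# |phi| = L^2 norm, ||phi|| = L^2 norm of the gradient, and the L^4 norm
l2 = np.sqrt(np.sum(phi**2) * h**2)
h1 = np.sqrt(np.sum(phi_x**2 + phi_y**2) * h**2)
l4 = (np.sum(phi**4) * h**2) ** 0.25

lhs = l4                                   # left side of (A.1)
rhs = np.sqrt(l2) * np.sqrt(l2 / L + h1)   # right side of (A.1)
```

Here $|{\bm{\phi}}|=\pi$, $\|{\bm{\phi}}\|=\sqrt{2}\,\pi$ and $\|{\bm{\phi}}\|_{L^{4}}=(3\pi/4)^{1/2}$, so the inequality holds with room to spare.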
# Decentralized Learning with Multi-Headed Distillation
Andrey Zhmoginov, Mark Sandler, Nolan Miller, Gus Kristiansen, Max Vladymyrov
Google Research
###### Abstract
Decentralized learning with private data is a central problem in machine
learning. We propose a novel distillation-based decentralized learning
technique that allows multiple agents with private non-iid data to learn from
each other, without having to share their data, weights or weight updates. Our
approach is communication efficient, utilizes an unlabeled public dataset and
uses multiple auxiliary heads for each client, greatly improving training
efficiency in the case of heterogeneous data. This approach allows individual
models to preserve and enhance performance on their private tasks while also
dramatically improving their performance on the global aggregated data
distribution. We study the effects of data and model architecture
heterogeneity and the impact of the underlying communication graph topology on
learning efficiency and show that our agents can significantly improve their
performance compared to learning in isolation.
## 1 Introduction
Supervised training of large models historically relied on access to massive
amounts of labeled data. Unfortunately, since data collection and labeling are
very time-consuming, curating new high-quality datasets remains expensive and
practitioners are frequently forced to get by with a limited set of available
labeled datasets. Recently it has been proposed to circumvent this issue by
utilizing the existence of large amounts of siloed private information.
Algorithms capable of training models on the entire available data without
having direct access to private information have been developed, with
Federated Learning approaches [24] taking the leading role.
Figure 1: Conceptual diagram of a distillation in a distributed system.
Clients use a public dataset to distill knowledge from other clients, each
having their primary private dataset. Individual clients may have different
architectures and different objective functions.
While very effective in large-scale distributed environments, the more
canonical techniques based on federated averaging have several noticeable
drawbacks.
First, gradient aggregation requires individual models to have fully
compatible weight spaces and thus identical architectures. While this
condition may not be difficult to satisfy for sufficiently small models
trained across devices with compatible hardware limitations, this restriction
may be disadvantageous in a more general setting, where some participant
hardware can be significantly more powerful than the others. Secondly,
federated averaging methods are generally trained in a centralized fashion.
Among other things, this prohibits the use of complex distributed
communication patterns and implies that different groups of clients cannot
generally be trained in isolation from each other for prolonged periods of
time.
Another branch of learning methods suitable for distributed model training on
private data is based on distillation [6, 3, 15]. Instead of
synchronizing the inner states of the models, such methods use outputs or
intermediate representations of the models to exchange the information. The
source of data for computing exchanged model predictions is generally assumed
to be provided in the form of publicly available datasets [12] that do not
have to be annotated since the source of annotation can come from other models
in the ensemble (see Figure 1). One interesting interpretation of model
distillation is to view it as a way of using queries from the public dataset
to indirectly gather information about the weights of the network (see
Appendix A). Unlike canonical federated-based techniques, where the entire
model state update is communicated, distillation only reveals activations on
specific samples, thus potentially reducing the amount of communicated bits of
information. By the data processing inequality, such a reduction also
translates into additional insulation of the private data used to train the
model from adversaries. However, it is worth noting that there exist multiple
secure aggregation protocols, including SecAgg [5], that provide data privacy
guarantees for different Federated Learning techniques.
The family of approaches based on distillation is less restrictive than
canonical federated-based approaches with respect to the communication
pattern, supporting fully distributed knowledge exchange. It also permits
different models to have entirely different architectures as long as their
outputs or representations are compatible with each other. It even allows
different models to use various data modalities and to optimize different
objectives, for example mixing supervised and self-supervised tasks within the
same domain. Finally, notice that distillation approaches can be, and
frequently are, used in conjunction with weight aggregation [21, 30, 31, 37],
where some of the participating clients may in fact be entire ensembles of
models with identical architectures continuously synchronized using federated
aggregation (see Figure 8 in Supplementary).
##### Our contributions.
In this paper, we propose and empirically study a novel distillation-based
technique that we call Multi-Headed Distillation (MHD) for distributed
learning on the large-scale ImageNet [9] dataset. Our approach is based on two
ideas: (a) inspired by self-distillation [10, 38, 2] we utilize multiple model
heads distilling to each other (see Figure 2) and (b) during training we
simultaneously distill client model predictions and intermediate network
embeddings to those of a target model. These techniques allow individual
clients to effectively absorb more knowledge from other participants,
achieving a much higher accuracy on a set of all available client tasks
compared with the naive distillation method.
In our experiments, we explore several key properties of the proposed model
including those that are specific to decentralized distillation-based
techniques. First, we analyse the effects of data heterogeneity, studying two
scenarios in which individual client tasks are either identical or very
dissimilar. We then investigate the effects of working with nontrivial
communication graphs and using heterogeneous model architectures. Studying
complex communication patterns, we discover that even if two clients in the
ensemble cannot communicate directly, they can still learn from each other via
a chain of interconnected clients. This “transitive” property relies in large
part on the use of multiple auxiliary heads in our method. We also conduct
experiments with multi-client systems consisting of both ResNet-18 and
ResNet-34 models [14] and demonstrate that: (a) smaller models benefit from
having large models in the ensemble, (b) large models learning from a
collection of small models can reach higher accuracies than those achievable
with small models only.
## 2 Related Work
##### Personalized Federated Learning.
While many early canonical Federated Learning approaches trained a single
global model for all clients [24], it was quickly realized that the non-IID
nature of private data in real systems may pose a problem and requires
personalized approaches [20]. Since then, many Personalized Federated Learning
approaches have been developed, many of which are covered in the surveys [18, 33].
##### Federated Distillation.
The emergence of Federated Distillation was motivated by the need to perform
learning across ensembles of heterogeneous models (note that multiple
existing approaches like [28, 35] allow using FedAvg for training
heterogeneous model ensembles), reducing communication costs and improving
performance on non-IID data. Existing distillation-based approaches can be
categorized based on the system setup and the types of the messages passed
between participants. A number of approaches including [23, 12, 21, 30, 8, 31,
40, 37] combine aggregation of weight updates with model distillation. They
are typically centralized and frequently involve client-side distillation,
which may restrict the size of the aggregated model. A different body of work
is concentrated on centralized systems, where only model predictions are
communicated between the clients and the server [19, 16, 13, 29, 32, 39, 11,
26]. Another related family of approaches is based on communicating embedding
prototypes [34], or using embeddings for distillation directly [1, 26]. In
this paper, we concentrate on a more general decentralized setup, where there
is not single central authority and all clients exchange knowledge via
distillation [4].
## 3 Model
### 3.1 Setup
We consider a system of $K$ clients $C=\\{C_{1},\dots,C_{K}\\}$. Each client
$C_{i}$ is assumed to possess their own private dataset $\mathcal{D}_{i}$
while training a private model $\mathcal{M}_{i}$ that solves a corresponding
task $\mathcal{T}_{i}$. In the following, we assume that all tasks
$\mathcal{T}_{i}$ are supervised.
While using their local dataset $\mathcal{D}_{i}$ to train the private model,
each client can also communicate with other clients to learn from them. At
each global training step $t$, we define a local directed graph
$\mathcal{G}_{t}$ that determines the pattern of this communication. While the
set of nodes of $\mathcal{G}_{t}$ is fixed to be the set of all clients, the
set of edges $\mathcal{E}_{t}$ with the corresponding incidence function can
be dynamic and change every training step.
The local datasets $\mathcal{D}_{i}$ are not directly exchanged between the
clients; instead, the information exchange occurs via a shared public source of
unlabeled data $\mathcal{D}_{*}$. We assume that at training step $t$, each
client $C_{i}$ can perform inference on a set of public samples and request
the results of a similar computation on the same samples from other clients
that are incident to it by directed edges of $\mathcal{G}_{t}$. In other
words, each client $C_{i}$ is optimizing a local objective $\mathcal{L}_{i}$
defined as:
$\displaystyle\mathcal{L}_{i}(t)=\mathcal{L}_{i,{\rm
CE}}+\sum_{\alpha}\mathbb{E}_{x\sim\mathcal{D}_{*}}\mathcal{L}_{\rm
dist}^{\alpha}(\psi^{\alpha}_{i}(x),\Phi_{t,i}^{\alpha}),$ (1)
where $\mathcal{L}_{i,{\rm
CE}}\equiv\mathbb{E}_{(x,y)\sim\mathcal{D}_{i}}\mathcal{L}_{\rm CE}(x,y)$ and
$\mathcal{L}_{\rm CE}$ is a cross-entropy loss optimized locally by each
client on their private data $\mathcal{D}_{i}$, $\mathcal{L}_{\rm
dist}^{\alpha}$ is a collection of different distillation losses enumerated by
$\alpha$ that use a local computation result $\psi^{\alpha}_{i}$ and the
remote results $\Phi_{t,i}^{\alpha}(x)\equiv\\{\phi^{\alpha}_{j}(x)|j\in
e_{t}(i)\\}$ computed on the same sample and $e_{t}(i)$ is a set of clients
connected to $i$ via a set of outgoing edges (from $\mathcal{G}_{t}$).
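A schematic numpy version of the local objective (1), specialized to a single L2-type distillation term on public-sample outputs; the helper names (`cross_entropy`, `local_objective`) and the mean-squared distillation term are illustrative stand-ins, not the paper's exact losses (those are given in Section 3.2):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # mean cross-entropy of a batch of logits against integer labels
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def local_objective(private_logits, private_labels,
                    own_public_out, neighbor_public_outs, weight=1.0):
    """L_i = CE on the private batch + distillation terms pulling the
    client's outputs on public samples toward each neighbor's outputs."""
    ce = cross_entropy(private_logits, private_labels)
    dist = sum(np.mean((own_public_out - phi_j) ** 2)
               for phi_j in neighbor_public_outs)
    return ce + weight * dist
```

When a client's public outputs already match its neighbors', the distillation terms vanish and only the private cross-entropy remains.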
Notice that in contrast with Federated Learning, here we do not require
different models $\mathcal{M}_{i}$ to have compatible architectures, but
instead optimize local and remote sample representations $\psi_{i}(x)$ and
$\phi_{j}(x)$ to be compatible. In the next section, we discuss several
potential choices of the distillation losses.
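For concreteness, the structure of this per-client objective can be sketched in a few lines of Python. This is an illustrative toy, with sums over finite batches standing in for the expectations in Eq. (1), and all function names are our own, not the paper's implementation:

```python
import math

def cross_entropy(probs, label):
    # negative log-likelihood of the ground-truth label
    return -math.log(probs[label])

def local_objective(private_batch, public_batch, local_model, remote_models,
                    dist_loss):
    """Toy sketch of Eq. (1): a cross-entropy term on the client's private
    (x, y) pairs plus distillation terms comparing the client's outputs
    with those of its connected peers on shared public samples."""
    ce = sum(cross_entropy(local_model(x), y) for x, y in private_batch)
    dist = sum(dist_loss(local_model(x), remote(x))
               for x in public_batch for remote in remote_models)
    return ce + dist
```

In a real system, `local_model` and each entry of `remote_models` would be neural networks, and `dist_loss` one of the choices discussed in the next section.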
In this paper, we are interested in evaluating the impact that the
communication and cross-learning between the clients has on (a) how well these
models can be suited for their original private tasks and (b) how much of the
knowledge gets shared and distributed to the other tasks over time. Notice
that if each client has a sufficiently simple model and enough training data
(making the model underfit), the communication between individual models is
not expected to improve their private task performance, but can only enhance
their learned representations making them more suitable for adapting to other
client’s tasks. However, if the private training data is scarce (making the
model overfit), the model communication could improve generalization and
ultimately improve client performance on their private tasks.
### 3.2 Distillation Losses
##### Embedding distillation.
We utilize the embedding regularization loss [1, 26] in our experiments. If
$\xi_{i}(x)$ is an intermediate embedding produced for a sample $x$ coming
from the shared public dataset by the model $\mathcal{M}_{i}$, then we can
choose $\psi^{\rm emb}_{i}(x)\equiv\xi_{i}(x)$, $\phi^{\rm
emb}_{j}(x)\equiv\xi_{j}(x)$ and define $\mathcal{L}_{\rm dist}^{\rm
emb}\left(\psi_{i}^{\rm emb}(x),\Phi_{t,i}^{\rm emb}(x)\right)$ as
$\displaystyle\nu_{\rm emb}\sum_{j\in e_{t}(i)}\rho\left(\|\psi_{i}^{\rm
emb}(x)-\phi_{j}^{\rm emb}(x)\|\right),$ (2)
or simply $\nu_{\rm emb}\sum_{j\in
e_{t}(i)}\rho\left(\|\xi_{i}(x)-\xi_{j}(x)\|\right)$, where $\nu_{\rm emb}$ is
the weighting constant and $\rho(x)\in C^{\infty}$ is some monotonically
growing function. The choice of this distillation loss forces compatibility
between sample embeddings across the ensemble. In practice, we noticed that
the embedding norms of different models frequently diverge during training;
to counteract this, we use normalized embeddings, which preserves the
consistency of the regularization across the entire duration of training:
$\psi^{\rm norm}_{i}(x)\equiv\xi_{i}(x)/\|\xi_{i}(x)\|$.
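A minimal sketch of this loss on plain Python lists, using the normalized embeddings and taking $\rho(d)=d^{2}$ as the smooth monotonically growing function (the names are our own, illustrative choices):

```python
import math

def normalize(v):
    # psi_norm(x) = xi(x) / ||xi(x)||, used to stop the embedding norms
    # of different models from diverging during training
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def embedding_dist_loss(local_emb, remote_embs, nu_emb=1.0,
                        rho=lambda d: d * d):
    """Toy Eq. (2): penalize the distance between the local embedding and
    each connected client's embedding of the same public sample."""
    li = normalize(local_emb)
    total = 0.0
    for re in remote_embs:
        rj = normalize(re)
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(li, rj)))
        total += rho(d)
    return nu_emb * total
```

Note that with normalization, embeddings pointing in the same direction incur zero loss regardless of their norms.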
##### Prediction distillation.
The ability to predict classes that are rarely present in private data can be
improved by using the prediction vector as an additional distillation target.
However, since $\mathcal{M}_{i}$ is tasked with fitting ground truth on a
particular dataset $\mathcal{D}_{i}$, distilling this prediction to labels
relevant for another client may be damaging for the model performance on
$\mathcal{T}_{i}$. Instead, we choose to add a single extra prediction head to
$\mathcal{M}_{i}$ that is distilled from all existing tasks, thus (a) not
polluting the main prediction head of the model $\mathcal{M}_{i}$, while (b) at
the same time forcing the intermediate representation $\xi_{i}(x)$ to contain
information relevant for solving all existing tasks $\{\mathcal{T}_{j}\,|\,j\in
1,\dots,K\}$.
Let ${\boldsymbol{h}}_{i}(\xi_{i}(x))$ be the main head of the model
$\mathcal{M}_{i}$ used for computing $\mathcal{L}_{\rm CE}$ and
${\boldsymbol{h}}^{\rm aux}_{i}(\xi_{i}(x))$ be the auxiliary head. Then, the
naïve prediction distillation loss takes the following form:
$\displaystyle\mathcal{L}_{\rm dist}^{\rm aux}[{\boldsymbol{h}}^{\rm aux},{\boldsymbol{h}}]\equiv-\nu_{\rm aux}\sum_{j\in e_{t}(i)}{\boldsymbol{h}}_{j}(x)\log{\boldsymbol{h}}^{\rm aux}_{i}(x),$ (3)
where $\nu_{\rm aux}$ is the auxiliary loss weight. Here, all distillation
targets from $e_{t}(i)$ are treated the same, irrespective of how confident
they are in their predictions. One way of incorporating knowledge of the
distillation target quality is to use some confidence metric for its
prediction on $x$. For example, we could consider the following modification
of the loss (3):
$\displaystyle-\nu_{\rm aux}\sum_{j\in e_{t}(i)\cup\{i\}}Q\left[\Lambda({\boldsymbol{h}}_{j});H[{\boldsymbol{h}}]\right]\times{\boldsymbol{h}}_{j}(x)\log{\boldsymbol{h}}^{\rm aux}_{i}(x),$ (4)
where $\Lambda({\boldsymbol{h}}(x))$ is the confidence of the classifier
prediction, $Q$ is some function of the client confidence, and
$H[{\boldsymbol{h}}]\equiv\{\Lambda({\boldsymbol{h}}_{j})\,|\,j\in
e_{t}(i)\cup\{i\}\}$ is the collection of confidences of all possible
distillation targets, including the $i^{\rm th}$ client itself. We considered
perhaps the simplest choice for $\Lambda$, defining it as
$\max_{k}h_{k}(x)$, the largest predicted class probability. This measure of
model confidence is, of course, not reliable (see Appendix A), and using a
separate per-client density model $\rho_{i}(x)$ to detect in-distribution and
out-of-distribution samples could potentially improve model performance (for
an alternative approach see [22]). For $Q$, we only considered the most
obvious choice: $Q[\Lambda({\boldsymbol{h}}_{j})]=1$ if the $j^{\rm th}$
client has the largest confidence in $H$ and $0$ otherwise, effectively
selecting the most confident client and using it as the distillation target
(see Appendix A for a detailed discussion).
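The target-selection rule behind this choice of $Q$ can be sketched as follows (illustrative Python, not the paper's code; returning `None` when the client's own auxiliary head is the most confident corresponds to skipping distillation on that sample):

```python
def confidence(h):
    # Lambda(h): the largest predicted class probability
    return max(h)

def pick_distillation_target(local_aux, remote_heads):
    """Toy version of the Q in Eq. (4): among the connected clients' main
    heads and the local auxiliary head itself, select the single most
    confident prediction; distill only if a remote head wins."""
    candidates = list(remote_heads) + [local_aux]
    best = max(candidates, key=confidence)
    return None if best is local_aux else best
```

Here each head is a probability vector over classes; real heads would be softmax outputs of the respective networks.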
Figure 2: A pattern used for distilling multiple auxiliary heads. Here
multiple auxiliary heads of “Client 1” are distilled from other auxiliary
heads of the same model and from auxiliary heads of other clients (here
“Client 2”). Auxiliary head Aux 1 is distilled from the main heads, auxiliary
head Aux 2 is distilled from auxiliary heads Aux 1 and so on.
##### Self-distillation with multiple auxiliary heads.
Self-distillation is a well-known technique that improves model performance by
repeatedly using the previous iteration of the model as the distillation
target for itself [10, 38, 2, 25]. The most direct application of this
technique to training an ensemble of models is to perform multiple cycles of
self-distillation across all available networks. Here, however, we propose a
different approach, where we modify a conventional training procedure by
equipping each classifier with a collection of multiple auxiliary heads
$\{{\boldsymbol{h}}^{{\rm aux},1},\dots,{\boldsymbol{h}}^{{\rm aux},m}\}$.
These auxiliary heads distill from each other by optimizing the following
loss:
$\mathcal{L}_{\rm dist}^{\rm aux}[{\boldsymbol{h}}^{{\rm
aux},1},{\boldsymbol{h}}]+\sum_{k=2}^{m}\mathcal{L}_{\rm dist}^{\rm
aux}[{\boldsymbol{h}}^{{\rm aux},k},{\boldsymbol{h}}^{{\rm aux},k-1}],$ (5)
where $\mathcal{L}_{\rm dist}^{\rm
aux}[{\boldsymbol{h}}^{(a)},{\boldsymbol{h}}^{(b)}]$ is defined according to
Eq. (4). In other words, ${\boldsymbol{h}}^{{\rm aux},1}$ distills from
${\boldsymbol{h}}$ and ${\boldsymbol{h}}^{{\rm aux},k}$ distills from
${\boldsymbol{h}}^{{\rm aux},k-1}$ for all $1<k\leq m$. This approach,
illustrated in Figure 2, is one of the core contributions of our paper.
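The chained structure of the loss in Eq. (5) can be sketched as follows (toy Python with our own names; `pairwise_loss` stands in for the Eq. (4) term between a student head and its teacher head):

```python
def chained_multihead_loss(main_head, aux_heads, pairwise_loss):
    """Toy Eq. (5): aux head 1 distills from the main head, and each aux
    head k > 1 distills from aux head k-1, forming a distillation chain."""
    teachers = [main_head] + aux_heads[:-1]  # teacher of head k is head k-1
    return sum(pairwise_loss(student, teacher)
               for student, teacher in zip(aux_heads, teachers))
```

The chain means later heads are progressively further removed from the hard private labels, which is what lets them absorb more of the ensemble's shared knowledge.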
##### Communication efficiency.
In terms of communication efficiency, this approach could suffer when the
distillation targets are frequently a poor source of knowledge for a
particular sample class. Avoiding this problem would ideally require each
client to be aware of the label distribution on every client that it
communicates with. However, since in practice prediction distillation
(embedding distillation is more costly) only requires transmitting the few
highest-confidence predictions for each sample, each step with a batch size
of $512$ requires communicating only a few thousand floating-point numbers
(assuming that shared public set images can be uniquely identified with a
small hash). At the same time, a single back-and-forth round of FedAvg
communication of a ResNet-34 model would require transmitting more than $100$
million floating-point parameters, which is equivalent to around $50{\rm k}$
prediction distillation steps.
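The arithmetic behind this comparison can be checked directly. Here `top_k = 4` is our illustrative guess for the "few highest-confidence predictions", and the $100$-million figure is the one quoted in the text:

```python
def distillation_floats_per_step(batch_size=512, top_k=4):
    # each public sample ships only a few highest-confidence predictions;
    # top_k = 4 is an assumption, not a number from the paper
    return batch_size * top_k

# the text's figure for one back-and-forth FedAvg round of a ResNet-34
FEDAVG_FLOATS_PER_ROUND = 100_000_000

# number of distillation steps with the same communication cost
equivalent_steps = FEDAVG_FLOATS_PER_ROUND / distillation_floats_per_step()
```

With these assumptions a distillation step costs about two thousand floats, so one FedAvg round is indeed comparable to roughly $50{\rm k}$ distillation steps.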
### 3.3 Dataset
In this work, we study distributed learning in systems with varying degrees of
data heterogeneity: from those where the distribution of data is the same
across all clients, to more extreme cases where each client specializes on
its own unique task. We simulate these scenarios using an underlying labeled
dataset $\mathcal{D}$. Let $S$ be the set of all samples from $\mathcal{D}$.
Some fraction of samples $\gamma_{\rm pub}$ (typically around $10\%$) is
treated as a set of unlabeled public samples. The remaining samples are
treated as the source of private data and are distributed without repetition
across all $K$ clients as discussed below.
##### Label assignment.
Each client $C_{i}$ is assigned a subset $\ell_{i}$ of all labels, which are
treated as primary labels for $C_{i}$. Remaining labels from $\mathcal{D}$ not
belonging to $\ell_{i}$ are treated as secondary labels for $C_{i}$. For each
label $l$, we take all available samples and randomly distribute them across
all clients. The probability of assigning a sample with label $l$ to a client
$C_{i}$ is chosen to be $1+s$ times higher for clients that have $l$ as their
primary label. We call the parameter $s$ the dataset skewness. As a result,
in the iid case with $s=0$, all samples are equally likely to be assigned to
any one of the clients, whereas in the non-iid limit of $s\to\infty$, all
samples with label $l$ are distributed only across clients for which $l$ is
primary.
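A sketch of this assignment rule, under our reading that a client holding the label as primary gets sampling weight $1+s$ and every other client weight $1$, normalized per sample:

```python
def assignment_probs(label, primary_sets, s):
    """Probability of assigning one sample with the given label to each
    client: weight (1 + s) for clients whose primary set contains the
    label, weight 1 otherwise (illustrative reconstruction)."""
    weights = [1 + s if label in primary else 1 for primary in primary_sets]
    total = sum(weights)
    return [w / total for w in weights]
```

At $s=0$ this reduces to uniform assignment, and as $s\to\infty$ virtually all mass concentrates on the primary clients.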
We considered two choices for selecting the primary label sets of the
clients. One choice (which we refer to as even) is to subdivide the set of all
labels in such a way that each label has exactly $m$ corresponding primary
clients. The other choice (which we refer to as random) is to assign each
client $C_{i}$ a random fixed-size subset of all labels. This choice creates a
variation in the number of primary clients across labels, making it a
less idealized and more realistic setup even in the limit of $s\to\infty$. For
example, for ImageNet with $1000$ classes, if it is subdivided between $8$
clients each receiving $250$ random labels: (a) around $100$ labels will be
distributed evenly across all clients (no primary clients), (b) around $270$
labels will have a single primary client, (c) around $310$ labels will have
two primary clients, (d) around $210$ labels will have three primary clients
and (e) around $110$ remaining labels will have $4$ or more primary clients.
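These approximate counts follow from a simple binomial model: each of the $8$ clients independently holds a given label as primary with probability $250/1000$, so the number of primary clients per label is roughly ${\rm Binomial}(8,\,0.25)$. A quick check, ignoring the small correlation induced by the fixed subset size:

```python
from math import comb

def primary_count_probs(num_clients=8, labels_per_client=250,
                        num_labels=1000):
    """Probability that a given label has k primary clients, modeling
    each client as holding the label independently with p = 250/1000."""
    p = labels_per_client / num_labels
    q = 1 - p
    return [comb(num_clients, k) * p**k * q**(num_clients - k)
            for k in range(num_clients + 1)]
```

Multiplying these probabilities by $1000$ labels reproduces the quoted counts of roughly $100$, $270$, $310$, $210$, and $110$ labels with $0$, $1$, $2$, $3$, and $4$ or more primary clients respectively.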
## 4 Experiments
### 4.1 Experimental Framework
In most of our experiments, we used the ImageNet dataset with samples
distributed across multiple clients as discussed in Section 3.3. The public
dataset used for distillation was chosen by selecting $\gamma_{\rm pub}=10\%$
of all available training samples, and the remaining $90\%$ were distributed
across clients as private labeled data. We used both random and even label
distribution strategies and considered the two cases $s=0$ and $s=100$,
corresponding to homogeneous and heterogeneous task distributions
respectively. In most of our experiments, unless indicated otherwise, we used
ResNet-34 models as individual clients, trained $8$ clients, and assigned
each $250$ primary labels at random. The models were typically trained for
$60{\rm k}$ or $120{\rm k}$ steps with SGD with momentum $0.9$, a batch size
of $512$, cosine learning-rate decay, and an initial learning rate of $0.1$.
Our experimental platform was based on the distillation losses outlined in
Section 3.2. However, constrained by the computational efficiency needed to
run numerous experiments, we made several implementation choices that deviate
from the general formulation of Section 3.2. Most importantly, individual
clients do not directly exchange their predictions on the public dataset;
instead, each client $C_{i}$ keeps a rolling pool $\mathcal{P}_{i}$ of
$N_{\mathcal{P}}$ model checkpoints. In most of our experiments,
$N_{\mathcal{P}}$ was chosen to be equal to the total number of clients in the
system. Every step, each client $C_{i}$ picks $\Delta$ random checkpoints
from $\mathcal{P}_{i}$ and uses them for performing a distillation step on a
new batch. Each pool $\mathcal{P}_{i}$ is updated every $S_{\mathcal{P}}$
steps, when a new checkpoint for one of the other clients is added into the
pool (replacing another random checkpoint). In most of our experiments, we
used a single distillation client on every step, i.e., $\Delta=1$, so that
$e_{t}(i)$ defined in Sec. 3.1 contains a single element at every step $t$;
a separate exploration of the parameter $\Delta$ was also performed. Also,
since in most of our experiments we used $S_{\mathcal{P}}=200$, the
infrequent pool updates introduce a time lag, causing the model to distill
knowledge from somewhat outdated checkpoints.
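The rolling-pool mechanism can be sketched as follows (our reconstruction of the scheme described above; the string checkpoints are stand-ins for actual model weights):

```python
import random

class CheckpointPool:
    """Rolling pool P_i of N_P cached peer checkpoints: each step, a
    client distills from Delta random entries; every S_P steps a fresh
    peer checkpoint replaces a random entry, so targets lag slightly."""

    def __init__(self, checkpoints):
        self.pool = list(checkpoints)  # N_P cached peer checkpoints

    def sample_targets(self, delta=1):
        # pick Delta random checkpoints as this batch's distillation targets
        return random.sample(self.pool, delta)

    def update(self, new_checkpoint):
        # pool refresh: the new checkpoint evicts a random existing entry
        self.pool[random.randrange(len(self.pool))] = new_checkpoint
```

In the default configuration above, `delta=1` and updates arrive every $S_{\mathcal{P}}=200$ steps.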
Figure 3: Comparison of private accuracy (on the client’s dataset) and shared
accuracy (on a uniform class distribution) for models trained with iid and
non-iid data distributions (see Sec. 3.3). Panels: (a) IID ($s=0$), private
accuracy; (b) IID ($s=0$), shared accuracy; (c) non-IID ($s=100$), private
accuracy; (d) non-IID ($s=100$), shared accuracy. Both the main-head (solid)
and auxiliary-head (dashed) accuracies are shown for four values of $\nu_{\rm
aux}$: $0.0$ (blue), $1.0$ (orange), $3.0$ (green), $10.0$ (red). The
accuracies peak at $\nu_{\rm aux}=3$, with $\nu_{\rm emb}=3$ for $s=0$ and
$\nu_{\rm emb}=1$ for $s=100$.
### 4.2 Embedding and Multi-Headed Distillation
In this section, we start exploring the distillation techniques in the
simplest scenario, with identical model architectures and complete graph
connectivity, where each model can distill knowledge from any other client.
#### 4.2.1 Evaluating Basic Distillation Approaches
Consider a set of models with identical ResNet-based architectures learning on
their private subsets of ImageNet and distilling knowledge from each other,
assuming complete connectivity of the communication graph. Here we compare
the efficiency of knowledge transfer for different distillation approaches:
(a) distilling sample embeddings preceding the final logits layer (embedding
distillation) and (b) distilling actual model predictions (prediction
distillation) (see Sec. 3.2). Specifically, we consider the two extreme cases
of iid ($s=0$) and non-iid ($s=100$) distributions of the ImageNet dataset
and study the final performance of individual agents while varying the
strengths of the embedding and prediction distillation losses, $\nu_{\rm emb}$
and $\nu_{\rm aux}$ respectively.
In our experiments, we study the performance of the primary and auxiliary
model heads on two data distributions: (a) the private dataset defining the
primary problem that the client is tasked with and (b) the shared dataset
reflecting the uniform label distribution averaged across all clients. Any
technique improving the private dataset accuracy $\beta_{\rm priv}$ can be
viewed as successful at learning from other clients and translating the
acquired knowledge into better performance on the client’s own task. On the
other hand, a technique improving the shared dataset accuracy $\beta_{\rm
sh}$ is successful at learning a more robust representation that can be
easily adapted to solving other possible tasks (seen by other clients). Both
of these potential capabilities can be viewed as positive outcomes of
cross-client communication and learning, but their utility may be
application-specific.
Figure 3 summarizes our empirical results (see Appendix B for raw numbers),
showing measurements of the average private accuracy $\beta_{\rm priv}$,
that is, the accuracy of each client on their respective dataset
$\mathcal{D}_{i}$, and the average shared accuracy $\beta_{\rm sh}$, measured
on a dataset with a uniform label distribution identical to that of the
original ImageNet. While $\beta_{\rm priv}$ measures how well a particular
client performs on their own task, $\beta_{\rm sh}$ is a reflection of the
world knowledge (some of which may be irrelevant for the private task) that
the client learns from other participants.
Figure 3 contains several interesting findings: (a) while both regularization
techniques are useful for improving model performance, there is a threshold
beyond which they start deteriorating both accuracies; (b) taken alone,
prediction distillation seems to have a stronger positive effect than
embedding distillation, although embedding distillation is more effective in
the $s=0$ case; (c) the best results, however, are obtained by combining both
distillation techniques. Furthermore, we see that the distillation techniques
generally improve both $\beta_{\rm priv}$ and $\beta_{\rm sh}$ simultaneously.
Notice that the positive effect of $\nu_{\rm aux}$ suggests that training a
separate auxiliary head has an effect on the model embedding that leads to an
improved performance on the main head trained with the client’s private
dataset alone. Another interesting observation is that for uniform datasets
with a small $s$, the auxiliary head ends up performing better on both the
private and shared tasks (which are identical for $s=0$). At the same time,
for a non-iid dataset with $s=100$, the auxiliary head performs much better
on the shared dataset, but lags behind on the private task since it is not
trained on it directly.
#### 4.2.2 Improving Distillation Efficiency
While Figure 3 shows clear evidence that distillation techniques can be
useful for distributed learning even in the case of heterogeneous client
data, there is room for further improvement.
Figure 4: Private (dot-dashed) and shared (solid) dataset accuracies of the
main and auxiliary heads in ensembles trained with different numbers of
auxiliary heads, for (a) IID ($s=0$) and (b) non-IID ($s=100$) data: $1$ aux
head (blue), $2$ heads (orange), $3$ heads (green) and $4$ heads (red). For
the IID case the private and shared performance match.
##### Ignoring poor distillation targets.
In some cases, agents may be distilling knowledge about particular categories
from agents that themselves do not possess accurate information. It is even
possible that the agent’s auxiliary head is already “more knowledgeable” about
the class than the main head of another agent that it is trying to distill
from. As a result, the performance of the auxiliary head may degrade. One
approach that we study here is to skip distillation on a sample if the
auxiliary head’s confidence is already higher than that of the head it is
trying to distill from. In our experiments, we observed that this simple idea
had virtually no effect for $s=0$, but allowed us to improve the performance
of the auxiliary head for heterogeneous data distributions with $s=100$.
Specifically, for $8$ clients and $s=100$, this technique improved the
auxiliary head’s $\beta_{\rm sh}$ from $44.7\%$ to $46.5\%$, while having
virtually no effect on the private dataset accuracy $\beta_{\rm priv}$ of the
main model head, which stayed at $72.2\%$. While effective with a single
auxiliary head, this technique did not improve results in the
multiple-auxiliary-head scenario (see Appendix B) that we will discuss next.
$s=0$ | Accuracy | $s=100$ | Accuracy
---|---|---|---
Separate | $46.3\%$ | Separate | $25.1\%$
MHD (Ours) | $59.9\%$ | MHD (Ours) | $54.5\%$
MHD+ (Ours) | $68.6\%$ | MHD+ (Ours) | $63.4\%$
FA, $u=200$ | $70.5\%$ | FA, $u=200$ | $68.0\%$
FA, $u=1000$ | $69.1\%$ | FA, $u=1000$ | $65.7\%$
Supervised | $68.9\%$ | – | –
Table 1: Comparison of the shared accuracies $\beta_{\rm sh}$ for our
technique and two “upper-bound” baselines trained for $60{\rm k}$ steps on
$90\%$ of ImageNet: (a) supervised and (b) trained with Federated Averaging
(FA) performed every $u$ steps. MHD+ experiments were conducted with $180{\rm
k}$ steps and used the entire ImageNet as a public dataset (regime of
plentiful public data). Separate corresponds to the shared dataset
performance of clients trained independently on their own private data. The
FA accuracy being higher than the supervised one could be explained by the
much larger number of samples effectively processed during training
($\times 8$).
##### Multiple auxiliary heads.
Here we empirically study the multi-head approach inspired by self-
distillation and described in detail in Section 3.2. Guided by earlier results
from Section 4.2.1, we choose $\nu_{\rm emb}=1$ and $\nu_{\rm aux}=3$. We then
train an ensemble of $8$ models, each with $250$ primary labels and two
choices of dataset skew: $s=0$ and $s=100$. For each choice of parameters, we
independently trained models with $1$ to $4$ auxiliary heads and then measured
the performance of the main and every auxiliary head on the client’s private
dataset and a shared test set with a uniform label distribution. The results
of our experiments are presented in Figure 4 (see Appendix B for raw numbers).
For a uniform data distribution, i.e., $s=0$, we see that distilling multiple
auxiliary heads has a positive impact on all model heads for up to $3$
auxiliary heads, after which performance starts to degrade. Among the heads
themselves, the peak performance is attained by the $2^{\rm nd}$ auxiliary
head. However, we hypothesize that as the number of training steps increases,
the final head will end up having the highest accuracy.
In the case of a non-iid distribution with $s=100$, we observed that
increasing the number of auxiliary heads has a very profound positive effect
on the shared dataset performance $\beta_{\rm sh}$ of the final auxiliary
head. However, it is the main head that achieves the highest private dataset
accuracy $\beta_{\rm priv}$. Consecutive auxiliary heads appear to lose
private dataset performance $\beta_{\rm priv}$ by specializing on capturing
the overall data distribution.
##### Dependence on the number of distillation targets $\Delta$.
We studied the effect of using multiple distillation targets $\Delta$ at every
training step by considering a typical $8$-client setup with $s=100$, $4$
auxiliary heads, $\nu_{\rm emb}=1$ and $\nu_{\rm aux}=3$. While increasing
$\Delta$ from $1$ to $3$ had virtually no effect on the main head private
accuracy $\beta_{\rm priv}$, the shared dataset accuracy $\beta_{\rm sh}$ for
the last auxiliary head improved from $54.5\%$ to $56.1\%$ and then to
$56.4\%$. At $\Delta=4$, $\beta_{\rm sh}$ appeared to saturate, falling
slightly to $56.2\%$ (within the statistical error of about $0.2\%$). Overall,
earlier auxiliary heads appeared to be affected by $\Delta$ more strongly.
##### Choice of the confidence measure.
The choice of the confidence $\Lambda({\boldsymbol{h}}(x))$ is central to the
distillation technique. We compared our current choice, based on selecting
the most confident head, with a random selection of the distillation target.
In our experiments with $8$ clients, each with $250$ random primary labels,
$\nu_{\rm emb}=1$, $\nu_{\rm aux}=3$, $s=0$ and $3$ auxiliary heads, we
observed that randomizing the target selection caused the main head
$\beta_{\rm priv}$ to degrade from $56\%$ to $55.2\%$, while the last
auxiliary head $\beta_{\rm sh}$ went down from $59.5\%$ to $58.4\%$. The
degradation of model performance is more significant in the case of
heterogeneous client data: in experiments with $s=100$ and $4$ auxiliary
heads, we observed the main head $\beta_{\rm priv}$ degrade from $72.1\%$ to
$71.3\%$ and the last auxiliary head $\beta_{\rm sh}$ decrease from $54.5\%$
to $49\%$.
##### Dependence of the technique’s efficiency on the public dataset size.
The efficiency of model distillation depends on the amount of data used for
performing this distillation, in our case, on the size of the public dataset.
In our experiments outlined in Appendix B.2, increasing the size of the public
dataset while fixing the amount of private training data had a positive impact
on the final model performance.
In practice, since unlabeled data is more abundant, one can expect the public
dataset size to be comparable to or even larger than the total amount of
labeled data available to clients. Being constrained by the ImageNet size and
attempting to keep the amount of private training data unaffected, we simulate
the abundance of public data by reusing the entirety of the ImageNet dataset
as an unlabeled public dataset. This, of course, is not realistic and somewhat
biased given that we reuse the same samples as labeled and unlabeled, but it
allows us to explore the limits of the distributed training efficiency with
distillation.
MHD Base | MHD | FedMD Base | FedMD
---|---|---|---
$60.6\%$ | $57.0\%$ / $0.6\%$ | $56.5\%$ | $50.2\%$ / $2.7\%$
Table 2: Comparison of mean test accuracies (first number) and their
deviations (second number after $/$) across $10$ clients for our method and
FedMD as reported in Ref. [19]. Baselines (Base) are obtained by training
clients with all available private data.
### 4.3 Baseline Comparisons
Before comparing our technique with a similar distillation-based method, we
compared its performance with two strong “upper-bound” baselines (see Table
1): supervised training on all of ImageNet, and the FedAvg algorithm
implemented within our framework. The large performance gap between the
shared dataset accuracies obtained using our method and the strong baselines
can be viewed as the price paid for learning via distillation in a
decentralized multi-agent system. At the same time, we see that increasing
the public dataset size and training for a longer period of time, allowing
the information to propagate across all clients (MHD+ results), brings us
close to the supervised model performance. Notice that, like many other
distillation-based techniques [19, 39], our method reaches higher accuracy in
the homogeneous data scenario.
We compared our method with FedMD [19], a similar but centralized
distillation-based method. This comparison was carried out by replicating the
dataset and the $10$ model architectures from the publicly available
implementation. The dataset is based on CIFAR-100 [17] and makes use of its
$20$ coarse labels, while the public dataset is chosen to be CIFAR-10. Due to
differences in the training process, our baseline results with individual
models trained on all private data pooled together were higher than those
reported in [19]. At the same time, we observed a much smaller gap in
performance between this upper baseline and the results obtained using our
method than the gap reported in [19] (see Table 2). Interestingly, we also
observe a much smaller performance spread across all $10$ models trained with
our technique (deviation of $0.6\%$ compared to $2.7\%$ for FedMD).
### 4.4 Communication Topology Effects
Figure 5: Topologies compared to validate transitive distillation. Figure 6:
Performance by topology and by distance between the distillation teacher and
student. On the shared dataset, the blue horizontal lines indicate the upper
bound set by the embedding quality, computed by fine-tuning a head on the
frozen model embeddings. Note that the island embedding accuracy for
“Islands” is still worse than for “Cycle”. Best viewed in color.
In order to explore how our approach might scale to larger systems in which
pairwise connections between all agents are not feasible, we evaluate how the
communication topology affects performance. In particular, we are interested
in whether “transitive distillation” is possible with our approach, that is,
whether two agents that are not directly connected to one another can still
learn from each other through an intermediary.
To evaluate this, and to determine the role auxiliary heads play in the
performance, we ran a training sweep with $4$ agents arranged in $3$ different
topologies (Figure 5), each agent having $3$ auxiliary heads. In all cases we
trained for $120{\rm k}$ steps, with $250$ primary labels per agent and
$s=100$. We observe (Figure 6) that performance on the shared dataset improves
significantly between the island and cycle topologies, with the baseline
performance closely matching the cycle performance. Without transitive
distillation we would expect the island and cycle performance to match
closely, so this provides strong evidence for transitive distillation. Also
note that this behavior is only present on auxiliary heads and is more
pronounced for later heads.
We further analyze the performance of each agent on other agents’ private
data. Predictably, we observe that agents in the island topology perform well
on other agents within their island, and poorly on agents outside it.
Cycle-topology agents perform best on their direct teacher (Cycle-1), but
auxiliary heads 2 and 3 perform well on the “1-hop” transitive teacher
(Cycle-2), and auxiliary head 3 has markedly improved performance on the
“2-hop” transitive teacher (Cycle-3). We take this as strong evidence that
auxiliary heads enable transitive distillation, and that additional heads
make learning across additional degrees of separation more efficient.
### 4.5 Learning in Heterogeneous Systems
In Section 4.2, we conducted experiments with homogeneous ensembles of models.
However, in many realistic scenarios of distributed deep learning, client
devices may have different hardware-defined limitations, and it may be
desirable to train smaller models on some clients while allowing other
devices to utilize much larger networks. While model distillation allows one
to achieve this, it is reasonable to ask why this would even be desirable:
what do we expect to gain from having much larger models in the ensemble?
Here we show two positive effects emerging from having larger models in an
ensemble of smaller clients: (a) informally speaking, small models benefit
from having stronger teachers, and (b) large models can gain complex
knowledge by distilling from smaller and simpler models.
Our ImageNet experiments were conducted with $4$ clients, each assigned $500$
primary labels, with one client being a ResNet-34 model and the remaining
clients being ResNet-18. Primary label assignment was random across clients,
and we trained the models for $240{\rm k}$ steps.
First, we observed that the presence of a larger model improved the accuracy
of the smaller clients, suggesting that they benefited from seeing a stronger
teacher holding some of the relevant data. Specifically, we observed that the
presence of a ResNet-34 model instead of a ResNet-18 in the ensemble led to
an increase in the average shared accuracy $\beta_{\rm sh}$ of the ResNet-18
models from $66.2\%$ to $66.7\%$.
Secondly, if small models achieve high performance on their limited
personalized domains, a large model distilling from such an ensemble can
potentially learn a much more complex picture of the entire dataset than would
otherwise be accessible to any individual small learner. This observation has
already inspired centralized distillation-based methods like [13]. In our
experiments, we witnessed this by observing that a ResNet-34 trained in
conjunction with $3$ ResNet-18 clients reached a shared accuracy $\beta_{\rm
sh}$ of $68.6\%$, which exceeds the $67.7\%$ accuracy of an ensemble of $4$
ResNet-18 models trained with FedAvg, or $66.0\%$ if trained with our
approach (both with $200$ steps between updates). Notice that if the
ResNet-34 model is isolated from the ResNet-18 models, it only reaches a
$\beta_{\rm sh}$ of $39.4\%$.
## 5 Discussion and Conclusions
In this paper, we proposed a novel distributed machine learning technique
based on model distillation. The core idea of our approach lies in using a
hierarchy of multiple auxiliary heads distilling knowledge from each other and
across the ensemble. We show that this technique is much more effective than
naive distillation and allows us to get close to the supervised accuracy on
the large ImageNet dataset, given a large public dataset and the longer
training time necessary for information to spread across the system. We also
study two key capabilities of a distributed distillation-based learning
technique. Specifically, we demonstrate that in systems where direct
communication between the clients is limited, multiple auxiliary heads allow
information exchange across clients that are not directly connected. We also
demonstrate two positive effects of adding larger models into a system of
small models: (a) small models benefit from seeing larger teachers, and (b)
large models learning from a collection of small models can reach higher
accuracies than those achievable with small models only.
## References
* [1] Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Chenlei Guo. Knowledge distillation from internal representations. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7350–7357. AAAI Press, 2020.
* [2] Sungsoo Ahn, Shell Xu Hu, Andreas C. Damianou, Neil D. Lawrence, and Zhenwen Dai. Variational information distillation for knowledge transfer. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 9163–9171. Computer Vision Foundation / IEEE, 2019.
* [3] Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Zoubin Ghahramani, Max Welling, Corinna Cortes, Neil D. Lawrence, and Kilian Q. Weinberger, editors, Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2654–2662, 2014.
* [4] Ilai Bistritz, Ariana J. Mann, and Nicholas Bambos. Distributed distillation for on-device learning. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
* [5] Kallista A. Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. Practical secure aggregation for privacy-preserving machine learning. In Bhavani Thuraisingham, David Evans, Tal Malkin, and Dongyan Xu, editors, Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, October 30 - November 03, 2017, pages 1175–1191. ACM, 2017.
* [6] Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Tina Eliassi-Rad, Lyle H. Ungar, Mark Craven, and Dimitrios Gunopulos, editors, Proceedings of the Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Philadelphia, PA, USA, August 20-23, 2006, pages 535–541. ACM, 2006.
* [7] George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, and Jun-Yan Zhu. Dataset distillation by matching training trajectories. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 10708–10717. IEEE, 2022.
* [8] Hong-You Chen and Wei-Lun Chao. Fedbe: Making bayesian model ensemble applicable to federated learning. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
* [9] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
* [10] Tommaso Furlanello, Zachary Chase Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. Born-again neural networks. In Jennifer G. Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 1602–1611. PMLR, 2018.
* [11] Xuan Gong, Abhishek Sharma, Srikrishna Karanam, Ziyan Wu, Terrence Chen, David S. Doermann, and Arun Innanje. Preserving privacy in federated learning with ensemble cross-domain knowledge distillation. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, February 22 - March 1, 2022, pages 11891–11899. AAAI Press, 2022.
* [12] Neel Guha, Ameet Talwalkar, and Virginia Smith. One-shot federated learning. CoRR, abs/1902.11175, 2019.
* [13] Chaoyang He, Murali Annavaram, and Salman Avestimehr. Group knowledge transfer: Federated learning of large cnns at the edge. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
* [14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE Computer Society, 2016.
* [15] Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. CoRR, abs/1503.02531, 2015.
* [16] Sohei Itahara, Takayuki Nishio, Yusuke Koda, Masahiro Morikura, and Koji Yamamoto. Distillation-based semi-supervised federated learning for communication-efficient collaborative training with non-iid private data. CoRR, abs/2008.06180, 2020.
* [17] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* [18] Viraj Kulkarni, Milind Kulkarni, and Aniruddha Pant. Survey of personalization techniques for federated learning. CoRR, abs/2003.08673, 2020.
* [19] Daliang Li and Junpu Wang. Fedmd: Heterogenous federated learning via model distillation. CoRR, abs/1910.03581, 2019.
* [20] Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. In Inderjit S. Dhillon, Dimitris S. Papailiopoulos, and Vivienne Sze, editors, Proceedings of Machine Learning and Systems 2020, MLSys 2020, Austin, TX, USA, March 2-4, 2020. mlsys.org, 2020.
* [21] Tao Lin, Lingjing Kong, Sebastian U. Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
* [22] Jiaxin Ma, Ryo Yonetani, and Zahid Iqbal. Adaptive distillation for decentralized learning from heterogeneous clients. In 25th International Conference on Pattern Recognition, ICPR 2020, Virtual Event / Milan, Italy, January 10-15, 2021, pages 7486–7492. IEEE, 2020.
* [23] Disha Makhija, Xing Han, Nhat Ho, and Joydeep Ghosh. Architecture agnostic federated learning for neural networks. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 14860–14870. PMLR, 2022.
* [24] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Aarti Singh and Xiaojin (Jerry) Zhu, editors, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA, volume 54 of Proceedings of Machine Learning Research, pages 1273–1282. PMLR, 2017.
* [25] Hossein Mobahi, Mehrdad Farajtabar, and Peter L. Bartlett. Self-distillation amplifies regularization in hilbert space. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
* [26] Minh N. H. Nguyen, Huy Q. Le, Shashi Raj Pandey, and Choong Seon Hong. CDKT-FL: cross-device knowledge transfer using proxy dataset in federated learning. CoRR, abs/2204.01542, 2022.
* [27] Timothy Nguyen, Roman Novak, Lechao Xiao, and Jaehoon Lee. Dataset distillation with infinitely wide convolutional networks. CoRR, abs/2107.13034, 2021.
* [28] Krishna Pillutla, Kshitiz Malik, Abdelrahman Mohamed, Michael Rabbat, Maziar Sanjabi, and Lin Xiao. Federated learning with partial model personalization. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 17716–17758. PMLR, 2022.
* [29] Felix Sattler, Arturo Marbán, Roman Rischke, and Wojciech Samek. Communication-efficient federated distillation. CoRR, abs/2012.00632, 2020.
* [30] Tao Shen, Jie Zhang, Xinkang Jia, Fengda Zhang, Gang Huang, Pan Zhou, Fei Wu, and Chao Wu. Federated mutual learning. CoRR, abs/2006.16765, 2020.
* [31] Stefán Páll Sturluson, Samuel Trew, Luis Muñoz-González, Matei Grama, Jonathan Passerat-Palmbach, Daniel Rueckert, and Amir Alansary. Fedrad: Federated robust adaptive distillation. CoRR, abs/2112.01405, 2021.
* [32] Lichao Sun and Lingjuan Lyu. Federated model distillation with noise-free differential privacy. In Zhi-Hua Zhou, editor, Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, pages 1563–1570. ijcai.org, 2021.
* [33] Alysa Ziying Tan, Han Yu, Lizhen Cui, and Qiang Yang. Towards personalized federated learning. CoRR, abs/2103.00710, 2021.
* [34] Yue Tan, Guodong Long, Lu Liu, Tianyi Zhou, Qinghua Lu, Jing Jiang, and Chengqi Zhang. Fedproto: Federated prototype learning across heterogeneous clients. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, February 22 - March 1, 2022, pages 8432–8440. AAAI Press, 2022.
* [35] Tianchun Wan, Wei Cheng, Dongsheng Luo, Wenchao Yu, Jingchao Ni, Liang Tong, Haifeng Chen, and Xiang Zhang. Personalized federated learning via heterogeneous modular networks. CoRR, abs/2210.14830, 2022.
* [36] Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A. Efros. Dataset distillation. CoRR, abs/1811.10959, 2018.
* [37] Chuhan Wu, Fangzhao Wu, Ruixuan Liu, Lingjuan Lyu, Yongfeng Huang, and Xing Xie. Fedkd: Communication efficient federated learning via knowledge distillation. CoRR, abs/2108.13323, 2021.
* [38] Chenglin Yang, Lingxi Xie, Siyuan Qiao, and Alan L. Yuille. Training deep neural networks in generations: A more tolerant teacher educates better students. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 5628–5635. AAAI Press, 2019.
* [39] Jie Zhang, Song Guo, Xiaosong Ma, Haozhao Wang, Wenchao Xu, and Feijie Wu. Parameterized knowledge transfer for personalized federated learning. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 10092–10104, 2021.
* [40] Zhuangdi Zhu, Junyuan Hong, and Jiayu Zhou. Data-free knowledge distillation for heterogeneous federated learning. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 12878–12889. PMLR, 2021.
## Appendix A Analysis and Discussion of Our Method
### A.1 Analysis of Multi-Headed Distillation
As described in the main text, multi-headed distillation involves the
simultaneous training of multiple model heads that communicate with each
other. A rigorous theoretical analysis of this process in the most general
case is very complicated. However, by making several assumptions, here we find
an approximate solution for the weights of the heads of rank $k$ given the
weights of the heads of rank $k-1$. In future work, we hope to study this
model for different prediction aggregation techniques and compare conclusions
obtained from this simple model with those obtained empirically in realistic
systems.
Let $X$ be the input space and $L=\mathbb{R}^{d}$ be the logit space, where
$d$ is the number of classes. The logits $f(x)$ for a model $f:X\to L$ are
then converted to label assignment probabilities $p(y|x)$ via softmax, i.e.,
$p(y|x)={\rm softmax}(f(x))$.
Consider a single client $h_{i}:X\to L$ distilling information from some model
head $h:X\to L$. The corresponding distillation loss $\mathcal{L}[h_{i};h]$
admits many possible choices, but we will assume that
$\mathcal{L}\equiv\mathbb{E}_{x\sim\mathcal{D}}\,D\left[p_{h_{i}}(y|x;\psi_{i})\,\middle\|\,p_{h}(y|x)\right]$
with $\mathcal{D}$ being the shared (proxy) dataset and $D$ being some
divergence (for example, the KL divergence or a more general $f$-divergence).
The distillation is then carried out by optimizing $\mathcal{L}$, for example
via gradient descent:
$\Delta\psi_{i}=-\gamma\frac{\partial\mathcal{L}}{\partial\psi_{i}}.$
Notice that the components of $\psi_{i}$ corresponding to the model backbone
may receive updates from multiple heads reusing the same model embedding.
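To make this setup concrete, the following is a minimal numerical sketch of one head being distilled toward a fixed teacher on a proxy dataset. All names and sizes here are illustrative and our own; for simplicity we take $D$ to be the KL divergence with the teacher's soft predictions as the target, whose gradient with respect to the student logits is simply $p - p_{\rm teacher}$:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mean_kl(q, p):
    # E_x KL(q || p) over the rows of the proxy dataset
    return np.mean(np.sum(q * (np.log(q) - np.log(p)), axis=1))

rng = np.random.default_rng(1)
n, t, d = 64, 8, 4                     # proxy samples, embedding dim, classes
xi = rng.normal(size=(n, t))           # frozen embeddings xi(x) on proxy data
W_teacher = rng.normal(size=(d, t))    # teacher head, standing in for h
W = np.zeros((d, t))                   # student head h_i, to be distilled

p_teacher = softmax(xi @ W_teacher.T)  # teacher soft targets p_h(y|x)
kl0 = mean_kl(p_teacher, softmax(xi @ W.T))

gamma = 0.5                            # learning rate
for _ in range(5000):
    p = softmax(xi @ W.T)
    # grad of E_x KL(p_teacher || p) w.r.t. student logits is p - p_teacher
    W -= gamma * (p - p_teacher).T @ xi / n

kl = mean_kl(p_teacher, softmax(xi @ W.T))
assert kl < 0.1 * kl0                  # the distillation loss dropped sharply
```

Because the backbone (here, the embeddings $\xi$) is frozen and shared, the head's weights alone absorb the teacher's predictions, mirroring the single-head case discussed above.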
In our system, we assume that there is a set of heads
$\{h_{i}^{(1)},h_{i}^{(2)},\dots,h_{i}^{(n)}\}$ for each client $i$. For
simplicity, let us first consider a distillation procedure that is independent
of prediction confidence. In this case, the loss for head $k$ of client $i$
may take the form:
$\mathcal{L}^{(k)}_{i}\equiv\sum_{j=1}^{N}\rho_{ij}\,\Gamma\left[h^{(k)}_{i}\,\middle\|\,h_{j}^{(k-1)}\right],$
where
$\Gamma\left[h^{(k)}_{i}\,\middle\|\,h_{j}^{(k-1)}\right]\equiv\mathbb{E}_{x\sim\mathcal{D}}\,D\left[p^{(k)}_{i}(y|x)\,\middle\|\,p_{j}^{(k-1)}(y|x)\right],$
$p^{(k)}_{i}(y|x)$ is shorthand for $p_{h^{(k)}_{i}}(y|x)$, and $\rho_{ij}$ is
some distribution defining the probability of picking a particular client for
distillation. Again, here we assume that $\rho_{ij}$ does not depend on the
sample confidence and is simply fixed.
While we have discussed $h^{(k)}$ distilling from $h^{(k-1)}$, we have not yet
discussed the “main head” $h^{(1)}$. This head is normally trained locally on
the client’s private data. For simplicity, in the following we thus assume
that its behavior is known, i.e., $h^{(1)}_{i}$ is a specified function of the
training step. Furthermore, we begin our analysis by assuming that all
$h_{i}^{(1)}$ have already converged and are all generally different due to
differences in the clients’ private data. The behavior of all other heads is
then defined by the losses outlined above.
Let us first consider the simplest case of $\rho_{ij}=\delta_{ij}$. In other
words, each head only distills from the same client’s “prior” head. The choice
$h_{i}^{(n)}=\dots=h_{i}^{(1)}$ would obviously minimize all losses
$\mathcal{L}_{i}^{(k)}$, since all corresponding $\Gamma[\cdot]$ values
vanish. But as soon as we introduce a small correction
$\rho_{ij}=\delta_{ij}+\nu_{ij}$ with $\sum_{j}\nu_{ij}=0$, this trivial
solution is no longer optimal. Instead, each client’s head now optimizes:
$\mathcal{L}^{(k)}_{i}=\Gamma\left[h^{(k)}_{i}\,\middle\|\,h_{i}^{(k-1)}\right]+\sum_{j=1}^{N}\nu_{ij}\,\Gamma\left[h^{(k)}_{i}\,\middle\|\,h_{j}^{(k-1)}\right].$
Notice that if $\Gamma$ were a metric in the $h$ space, we could interpret
this optimization objective geometrically as a minimization of the head’s
distance to its lower-order state (towards $h_{i}^{(k-1)}$), coupled with a
weak ($\sim\nu$) attraction towards a number of other heads ($h_{j}^{(k-1)}$).
See Figure 7 for an illustration.
Figure 7: Illustration of multi-head distillation as discussed in Appendix
A.1. The large red arrow shows the strong distillation of $h_{i}^{(k)}$
towards $h_{i}^{(k-1)}$, and the smaller gray arrows indicate attraction
towards $h_{j}^{(k-1)}$ with effective “strength” $\nu_{ij}$.
Here we have to make yet another simplifying assumption and consider a
prescribed model backbone (and corresponding embedding) that we do not
optimize or update with backpropagated gradients. Doing so disentangles the
individual heads and lets us treat their optimization as independent tasks.
For sufficiently small $\nu_{ij}$ it holds that
$p_{i}^{(k)}=p_{i}^{(k-1)}+O(\nu)$, and we can therefore write:
$\mathcal{L}^{(k)}_{i}=\mathbb{E}_{x\sim\mathcal{D}}\Biggl\{D\left[\left.p_{h^{(k-1)}_{i}+\kappa^{(k)}_{i}}\,\right\|\,p_{h_{i}^{(k-1)}}\right]+\sum_{j=1}^{N}\nu_{ij}D\left[\left.p_{h^{(k-1)}_{i}+\kappa^{(k)}_{i}}\,\right\|\,p_{h_{j}^{(k-1)}}\right]\Biggr\},$
where $h_{i}^{(k)}\equiv h_{i}^{(k-1)}+\kappa^{(k)}_{i}$ and
$\kappa^{(k)}_{i}\sim O(\nu)$. Introducing $\delta_{i}^{(k)}\equiv
p_{i}^{(k)}-p_{i}^{(k-1)}$, we obtain:
$\mathcal{L}^{(k)}_{i}=\mathbb{E}_{x\sim\mathcal{D}}\Biggl\{D\left[\left.p^{(k-1)}_{i}+\delta_{i}^{(k)}\,\right\|\,p_{i}^{(k-1)}\right]+\sum_{j}\nu_{ij}\,D\left[\left.p^{(k-1)}_{i}+\delta_{i}^{(k)}\,\right\|\,p_{j}^{(k-1)}\right]\Biggr\}.$
Expanding the first term to second order around its minimum (where the linear
term vanishes) and the second term to linear order, we obtain:
$\mathcal{L}^{(k)}_{i}\approx\mathbb{E}_{x\sim\mathcal{D}}\Biggl\{D^{\prime\prime}\left[\left.p^{(k-1)}_{i}\,\right\|\,p_{i}^{(k-1)}\right]\frac{\delta_{i}^{(k)}\delta_{i}^{(k)}}{2}+\sum_{j}\nu_{ij}\,D^{\prime}\left[\left.p^{(k-1)}_{i}\,\right\|\,p_{j}^{(k-1)}\right]\delta_{i}^{(k)}\Biggr\},$
where $D^{\prime\prime}$ and $D^{\prime}$ are the derivatives of $D$ with
respect to the first argument. Recalling that
$\delta_{i}^{(k)}\in\mathbb{R}^{d}$ we can rewrite the loss function as:
$\mathcal{L}^{(k)}_{i}\approx\mathbb{E}_{x\sim\mathcal{D}}\left[\delta^{\top}{\mathrm{A}}\delta+b^{\top}\delta\right],$
where $\delta\equiv\delta_{i}^{(k)}$ for brevity,
${\mathrm{A}}\equiv
D^{\prime\prime}\left[\left.p^{(k-1)}_{i}\,\right\|\,p_{i}^{(k-1)}\right]/2$
is effectively a matrix and
$b\equiv\sum_{j}\nu_{ij}\,D^{\prime}\left[\left.p^{(k-1)}_{i}\,\right\|\,p_{j}^{(k-1)}\right]\in\mathbb{R}^{d}$
can be thought of as a column vector.
At this point we can connect the probability distribution perturbation
$\delta$ to the logit perturbation $\kappa\equiv\kappa^{(k)}_{i}$ using the
fact that $p_{m}\equiv e^{h_{m}}/Z$, where $Z\equiv\sum_{k}e^{h_{k}}$ (we omit
this simple calculation here):
$p_{i}^{(k)}=p_{h_{i}^{(k-1)}+\kappa_{i}^{(k)}}=p_{i}^{(k-1)}+\delta=p_{i}^{(k-1)}+\kappa*p_{i}^{(k-1)}-(\kappa\cdot p_{i}^{(k-1)})\,p_{i}^{(k-1)},$
where ${\boldsymbol{a}}*{\boldsymbol{b}}$ is an element-wise product of two
vectors and therefore:
$\delta=\kappa*p_{i}^{(k-1)}-(\kappa\cdot
p_{i}^{(k-1)})p_{i}^{(k-1)}\equiv{\mathrm{C}}\kappa,$ (6)
where ${\mathrm{C}}$ is a matrix constructed from the components of
$p_{i}^{(k-1)}(x)\in\mathbb{R}^{d}$. Notice that $\sum_{m}\delta_{m}=0$, which
agrees with $\delta$ being the perturbation of the normalized probability
distribution.
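Equation (6) is easy to verify numerically. The sketch below is our own check, using arbitrary logits: it compares the first-order perturbation $\delta=\kappa*p-(\kappa\cdot p)\,p$ with the exact change of the softmax output and confirms that $\delta$ sums to zero:

```python
import numpy as np

def softmax(h):
    e = np.exp(h - h.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d = 5
h = rng.normal(size=d)               # logits of h_i^{(k-1)}
kappa = 1e-5 * rng.normal(size=d)    # small logit perturbation kappa

p = softmax(h)
# first-order perturbation from Eq. (6): delta = kappa*p - (kappa . p) p
delta = kappa * p - (kappa @ p) * p
exact = softmax(h + kappa) - p       # exact change of the softmax output

assert np.allclose(delta, exact, atol=1e-8)
assert abs(delta.sum()) < 1e-12      # perturbs a normalized distribution
```

The residual between `delta` and `exact` is $O(\|\kappa\|^{2})$, consistent with the linearization used in the derivation.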
Finally, remember that $\kappa$ itself is a perturbation of model logits.
Given the sample embedding $\xi_{i}(x)\in\mathbb{R}^{t}$, the sample logits
are constructed as ${\mathrm{W}}_{i}\xi_{i}(x)$ with ${\mathrm{W}}_{i}$ being
a $d\times t$ matrix. The perturbation $\kappa$ transforming
${\mathrm{W}}_{i}^{(k-1)}\xi_{i}$ into ${\mathrm{W}}_{i}^{(k)}\xi_{i}$ can
thus be characterized by the logit weight perturbation
$\mu\equiv\mu_{i}^{(k)}:={\mathrm{W}}_{i}^{(k)}-{\mathrm{W}}_{i}^{(k-1)}$ and
we get $\kappa=\mu\xi(x)$. Combining everything together, we see that the loss
function transforms to:
$\mathcal{L}^{(k)}_{i}\approx\mathbb{E}_{x\sim\mathcal{D}}\left[\xi(x)^{\top}\mu^{\top}{\mathrm{C}}^{\top}{\mathrm{A}}{\mathrm{C}}\mu\xi(x)+b^{\top}{\mathrm{C}}\mu\xi(x)\right],$
(7)
where ${\mathrm{A}}$, ${\mathrm{C}}$ and $b\sim\nu$ all depend on the sample
$x$ via $p_{i}^{(k-1)}(x)$ and $\xi$ is a function of $x$, while $\mu$ is
effectively an unknown sample-independent matrix that we need to tune with the
goal of minimizing $\mathcal{L}^{(k)}_{i}$. The optimum can be identified by
taking a derivative with respect to $\mu_{\alpha\beta}$ and setting it to 0:
$\mathbb{E}_{x\sim\mathcal{D}}\left[2(\xi(x)^{\top}\mu^{\top}{\mathrm{C}}^{\top}{\mathrm{A}}{\mathrm{C}})_{\alpha}\xi_{\beta}(x)+(b^{\top}{\mathrm{C}})_{\alpha}\xi_{\beta}(x)\right]=0.$
This is a linear equation in $\mu\sim\nu$ that can be solved in closed form,
giving the logit weight perturbation $\mu$ as a complex nonlinear function of
$\nu_{ij}$ and $\{p_{\ell}^{(k-1)}\}$.
Note that since $\mu$ is only a small perturbation, we can introduce
${\mathrm{W}}_{i}^{(k)}$ as a function of a continuous parameter $k$ and
approximate $d{\mathrm{W}}_{i}^{(k)}/dk$ with a finite difference
${\mathrm{W}}_{i}^{(k)}-{\mathrm{W}}_{i}^{(k-1)}=\mu$ leaving us with a
differential equation (the approximation is valid in the first order in
$\nu$):
$\frac{d{\mathrm{W}}_{i}(k)}{dk}=G\left[\nu,\\{{\mathrm{W}}_{\ell}(k)\\}\right]$
with $G$ being a linear function of $\nu$, but a very complex nonlinear
function of $\{{\mathrm{W}}_{\ell}\}$. If $\nu_{ij}$ is localized around $i=j$
(which would be the case for communication patterns with partial connectivity,
like in the case of long chains), this differential equation resembles a
complex nonlinear diffusion equation defining the spread of information across
the clients as we look at deeper and deeper heads (with the head rank $k$
essentially playing the role of time).
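This diffusion picture can be illustrated with a toy linear model (entirely our own construction, not the actual operator $G$): a scalar "knowledge" value per client on an open chain, where each update weakly pulls a client toward its neighbours and the head rank plays the role of time:

```python
import numpy as np

# Toy linear analogue of dW_i/dk = G[nu, {W_l}] on an open chain of clients:
# a scalar "knowledge" value diffuses to chain neighbours with small
# strength nu, with the head rank k playing the role of time.
N, steps, nu = 8, 1000, 0.1
w = np.zeros(N)
w[0] = 1.0                         # only client 0 holds information at k = 0

for _ in range(steps):
    left, right = np.roll(w, 1), np.roll(w, -1)
    left[0], right[-1] = w[0], w[-1]          # reflecting chain ends
    w = w + nu * (0.5 * (left + right) - w)   # discrete diffusion step

assert w[-1] > 0.1                 # information reached the far chain end
assert abs(w.sum() - 1.0) < 1e-9   # the diffusion step conserves the total
```

After enough "head ranks", the chain approaches a uniform state, which is the toy analogue of information from one client eventually reaching clients it never communicates with directly.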
It is also worth noting here that if $\nu$ were not fixed, but were itself a
function of model confidence (while still remaining small), our conclusions
would not change, except that $\nu$ would now be a complex nonlinear function
of $\{{\mathrm{W}}_{\ell}(k)\}$ and $x$. In future work, we hope to study the
effect that this confidence-dependent aggregation has on the head dynamics and
the final stationary state.
Finally, let us look at the stationary state of system dynamics. Equation (7)
suggests that $\mu=0$ is a local optimum when $b^{\top}{\mathrm{C}}=0$, or
$\displaystyle\sum_{i,j}\nu_{ij}D^{\prime}\left[p_{i}\|p_{j}\right]{\mathrm{C}}_{ik}=0,$
or, after noticing that $D^{\prime}[p_{i}\|p_{j}]=p_{j}/p_{i}$ and recalling
that ${\mathrm{C}}$ is defined by Eq. (6), we obtain for every $k$:
$\displaystyle\mathbb{E}_{x\sim\mathcal{D}}\left[\sum_{i,j}\nu_{ij}p_{j}\left(\delta_{ik}-p_{k}\right)\right]=0.$
(8)
Since $\sum_{j}\nu_{ij}=0$, the trivial solution of this system of equations
is the case of identical models, i.e., $p_{1}=\dots=p_{n}$; but since the
models generally have different embeddings and cannot be made identical, the
solution of Eq. (8) instead constrains the system's stationary state.
### A.2 Value of $p(y|x)$ as Classifier Confidence
In our model distillation approach, we need to combine the predictions of
multiple different model heads. If all predictions $p_{k}(y|x)$ (by heads
$\{h_{k}\}$) come with reliable error estimates, this information can be
taken into account. For example, if we know that for the true distribution
$p(y|x)$ and every prediction $p_{k}(y|x)$ it holds that
$D[p_{k}(y|x)\|p(y|x)]\leq e_{k}(x)$ with $D$ being some divergence, then the
true $p(y|x)$ belongs to the intersection of “balls” (note that $D$ is not
generally a metric) $\mathcal{B}_{k}\equiv\{p^{\prime}\,|\,D[p^{\prime}\|p]\leq
e_{k}\}$. We could then choose any point in this set and compute a prediction
error as the maximum distance from the chosen distribution to any point in the
intersection. Unfortunately, however, such reliable measures of classifier
error are not generally available, and even approximating them can be quite
difficult.
Instead, we choose a very simple approach based on estimating classifier
confidence and picking the most confident model, effectively ignoring the
other predictions. The confidence measure itself is chosen as the value of the
largest component of the classifier prediction
$o(x)\equiv\mathrm{softmax}(f(x;\theta))$, with
$f(x;\theta)={\mathrm{W}}\xi(x;\theta)$ and $\xi$ being the embedding vector.
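A minimal sketch of this selection rule follows; the logits below are made up for three hypothetical candidate heads on a single proxy sample:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Made-up logits from three hypothetical candidate heads for one sample.
logits = np.array([
    [2.0, 0.1, 0.1],    # fairly confident head
    [0.5, 0.4, 0.3],    # uncertain head
    [4.0, 0.0, 0.0],    # most confident head
])

o = softmax(logits)
confidence = o.max(axis=1)           # max_k o_k for each head
target = o[confidence.argmax()]      # distill toward the most confident head

assert confidence.argmax() == 2
```

The selected row `target` then plays the role of the teacher distribution in the distillation loss for that sample.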
##### Why choose $\max_{k}o_{k}$.
This value has a number of simple properties that make it a useful measure of
classifier uncertainty. The first is that after seeing a supervised training
sample $(x,y)$, the value of $o_{y}(x)$ is increased. The second is that if
the class prototypes in the embedding space are nearly orthogonal for
different classes, then updates for samples of different classes do not
“interfere” with each other, and high-confidence predictions are not generally
disrupted by observing unrelated samples with different labels. For a simple
logits layer ${\mathrm{W}}\xi(x)$ trained with cross-entropy loss, both of
these properties follow directly from the following expression for $\Delta
o_{k}(x^{\prime})$ after training on a sample $(x,y)$:
$\Delta o_{k}(x^{\prime})=\lambda\,o_{k}(x^{\prime})\left[\xi(x)\cdot\xi(x^{\prime})\right]\sum_{i}\left(\delta_{k,i}-o_{i}(x^{\prime})\right)\left(\delta_{y,i}-o_{i}(x)\right).$
##### Drawbacks.
But while $\max_{k}o_{k}(x)$ has these useful properties, it is not guaranteed
to be a reliable measure of classifier confidence for out-of-distribution
samples, and the training objective never explicitly optimizes for this
(contrast this with energy-based models, for example, where the energy update
and the MCMC sampling explicitly contribute to the model’s “awareness” of what
is and is not an in-distribution sample). A density model $\rho(x)$ would
allow detecting such out-of-distribution samples, but could also reveal
information about the samples in a client’s private dataset. Combining
classification models with locally-trained density models, or adopting other
similar existing techniques, could be a logical extension of our present work.
### A.3 Distillation as Revelation of Some Information about Model Weights
The canonical version of FedAvg combines the knowledge of individual clients
by periodically aggregating their weight snapshots. Distillation-based
techniques are instead based on communicating model predictions on datasets
accessible to all participants. While these two approaches appear to be
different, communication in distillation-based methods can of course also be
viewed as a way of revealing incomplete information about model weights.
The amount of revealed information can be defined as follows. Assuming the
knowledge of the prior $p(\theta)$ on the model weights and model predictions
$(y_{1},\dots,y_{n})$ on a public dataset
$\mathcal{D}_{*}=(x_{1},\dots,x_{n})$, one can compare the difference of
entropies for the original $p(\theta)$ and $p(\theta|y_{1},\dots,y_{n})$ with
$p(\theta|y_{1},\dots,y_{n})=\frac{p(y_{1},\dots,y_{n}|\theta)\,p(\theta)}{\int d\theta\,p(y_{1},\dots,y_{n}|\theta)\,p(\theta)}.$
While generally intractable, a lower bound on the amount of revealed
information might be obtained by training a model that predicts the weights
$\theta$ from $(y_{1},\dots,y_{n})$.
A deeper understanding of this question could have an impact on the optimal
choice of the public dataset $\mathcal{D}_{*}$, allowing the knowledge of
interest to be retrieved from a trained model using only a small number of
samples. Ongoing research on dataset distillation [7, 27, 36] is closely
related to this question.
## Appendix B Additional Experiments and Experimental Data
### B.1 Effect of Distilling to Self and Same-Level Heads
In Section 4.2.2 we reported that including a head into the list of its own
distillation targets (“self”) improved the model accuracy, but the gain was
still smaller than that of a model with multiple auxiliary heads. Here we
explore what happens if we use the head as its own potential distillation
target, while also using a number of auxiliary heads. Furthermore, what if we
modify our method to include distillations to other heads of the same rank
(see Figure 9)?
We conducted a set of experiments with a heterogeneous dataset with $s=100$,
$\nu_{\rm emb}=1$, $\nu_{\rm aux}=3$, four auxiliary heads, and $250$
randomized labels for each of $8$ clients. The results of experiments using
different combinations of distillation targets, with both $\Delta=1$ and
$\Delta=2$ (choosing two other clients at a time as potential distillation
targets), are presented in Table 3. We observed that using same-level heads
and “self” targets separately provides a noticeable benefit only for the
earlier heads. But when used together, these two techniques result in a $\sim
1\%$ accuracy improvement, realized at the $2^{\rm nd}$ auxiliary head. Also,
not unexpectedly, distilling to two clients ($\Delta=2$) instead of one leads
to a noticeable $1.5\%$ accuracy improvement. Combined, all of these
techniques, in conjunction with using the entire ImageNet as the public
dataset, improve the accuracy to $59.4\%$ when trained for $60{\rm k}$ steps,
or $65.7\%$ when trained for $180{\rm k}$ steps.
Experiment | $\beta_{\rm priv}^{\rm Main}$ | $\beta_{\rm sh}^{\rm Aux1}$ | $\beta_{\rm sh}^{\rm Aux2}$ | $\beta_{\rm sh}^{\rm Aux3}$ | $\beta_{\rm sh}^{\rm Aux4}$
---|---|---|---|---|---
Base | $70.9\%$ | $46.7\%$ | $51.8\%$ | $53.9\%$ | ${\bf 54.6\%}$
$\Delta=2$ | $71.1\%$ | $50.9\%$ | $55.1\%$ | ${\bf 56.1\%}$ | ${\bf 56.0\%}$
SL | $70.8\%$ | $48.6\%$ | $53.6\%$ | ${\bf 54.7\%}$ | ${\bf 54.7\%}$
SF | $71.3\%$ | $48.1\%$ | $53.4\%$ | ${\bf 54.9\%}$ | $\bf{54.8\%}$
SL+SF | $70.3\%$ | $53.0\%$ | ${\bf 55.5\%}$ | $53.9\%$ | $52.4\%$
All | $70.8\%$ | $53.5\%$ | ${\bf 55.8\%}$ | $54.5\%$ | $52.9\%$
All+ | $72.7\%$ | $56.5\%$ | ${\bf 59.4\%}$ | $57.9\%$ | $56.1\%$
All+, $180{\rm k}$ steps | $76.2\%$ | $62.3\%$ | ${\bf 65.7\%}$ | $65.0\%$ | $64.0\%$
Table 3: Experimental results exploring the usage of different distillation
heads trained for $60{\rm k}$ steps. Here “Base” is the original experiment
with $\Delta=1$ and conventional heads as described in Sec. 4.2.2; “SL” adds
same-level heads to distillation targets; “SF” adds the distilled head
(“self”) as a potential target; “All” combines same-level and “self” heads and
$\Delta=2$ (each step distilling to two other clients), “All+” is the same as
All, but also uses the entire ImageNet as the public dataset.
### B.2 Dependence on the Public Dataset Size
In a separate set of experiments, we trained $8$ clients with 4 auxiliary
heads, $s=100$, $\nu_{\rm emb}=1$, $\nu_{\rm aux}=3$ and $250$ randomly
assigned “private” labels and “private” samples drawn from $70\%$ of the
ImageNet training data. The remaining $30\%$ of ImageNet samples were fully or
partly used as a public dataset, i.e., $\gamma_{\rm pub}\leq 30\%$. As one
would expect, increasing the size of the “public” dataset while fixing the
amount of “private” training data has a positive impact on the final model
performance (see Table 4).
Public DS fraction | $10\%$ | $20\%$ | $30\%$ | All
---|---|---|---|---
main head $\beta_{\rm priv}$ | $70.1\%$ | $71.1\%$ | $70.9\%$ | $71.9\%$
last aux head $\beta_{\rm sh}$ | $52.4\%$ | $53.9\%$ | $54.1\%$ | $55.3\%$
Table 4: The dependence of the main head “private” accuracy $\beta_{\rm
priv}$ and the “shared” accuracy of the $4^{\rm th}$ auxiliary head on the
size of the public dataset (fraction of ImageNet training set). Experiments
were conducted for a system of $8$ clients with $4$ auxiliary heads, $s=100$,
$\nu_{\rm emb}=1$, $\nu_{\rm aux}=3$ and $250$ randomly assigned “private”
labels. Private training samples were drawn from $70\%$ of the ImageNet
training set in all experiments. The “All” column shows the accuracy attained by
using the entire ImageNet training set as a public dataset (while still using
only $70\%$ of it as private data).
## Appendix C Additional Tables and Figures
Figure 8: Conceptual diagram of a distillation in a distributed system.
Clients use a “public” dataset to distill knowledge from other clients, each
having their primary private dataset. Individual clients may have different
architectures and different objective functions. Furthermore, some of the
“clients” may themselves be collections of models trained using federated
learning.
Figure 9: A pattern used for distilling multiple auxiliary heads
with two additional types of distillation targets: (a) distilling to heads of
the same “rank” (dashed), (b) distilling to “self” (dotted). Here distilling
to the same “rank” means, for example, that the Aux1 head is distilled to the
most confident of the Main or Aux1 heads of adjacent clients. Distilling to
“self” means that samples on which the distilled head is already most
confident are effectively ignored.
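The most-confident-target selection described in the caption of Figure 9 can be sketched as follows. This is a minimal illustration only: the function names and the use of the maximum softmax probability as the confidence measure are our assumptions, not details taken from the experiments.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pick_most_confident_target(candidate_logits):
    """For each sample, select the candidate head whose prediction is
    most confident (largest max-softmax probability) and return that
    head's soft labels as the distillation target.

    candidate_logits: array of shape (num_heads, batch, num_classes).
    Returns an array of shape (batch, num_classes)."""
    probs = softmax(candidate_logits)            # (H, B, C)
    confidence = probs.max(axis=-1)              # (H, B)
    best = confidence.argmax(axis=0)             # (B,) index of best head
    batch_idx = np.arange(candidate_logits.shape[1])
    return probs[best, batch_idx]                # (B, C)
```

In an actual training loop the returned soft labels would enter a distillation loss (e.g. a KL term) on the “public” samples; that wiring is omitted here.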
### C.1 Raw Experimental Data
Tables 5 and 6 contain raw values used for producing Figure 3, while Tables 7
and 8 complement Figure 4.
$\nu_{\rm emb}$ | $\nu_{\rm aux}$ | $\beta_{\rm priv}^{\rm(m)}$ | $\beta_{\rm sh}^{\rm(m)}$ | $\beta_{\rm priv}^{\rm(aux)}$ | $\beta_{\rm sh}^{\rm(aux)}$
---|---|---|---|---|---
0.0 | 0.0 | $46.3\%$ | $46.3\%$ | $0.1\%$ | $0.1\%$
| 1.0 | $52.2\%$ | $52.0\%$ | $56.0\%$ | $56.0\%$
| 3.0 | $54.1\%$ | $53.5\%$ | $57.3\%$ | $57.0\%$
| 10.0 | $54.3\%$ | $54.1\%$ | $57.1\%$ | $57.1\%$
1.0 | 0.0 | $48.5\%$ | $48.5\%$ | $0.1\%$ | $0.1\%$
| 1.0 | $53.6\%$ | $53.5\%$ | $57.3\%$ | $57.2\%$
| 3.0 | $55.4\%$ | $55.2\%$ | $58.6\%$ | $58.5\%$
| 10.0 | $54.1\%$ | $53.6\%$ | $56.9\%$ | $56.3\%$
3.0 | 0.0 | $48.3\%$ | $48.0\%$ | $0.1\%$ | $0.1\%$
| 1.0 | $54.3\%$ | $54.0\%$ | $58.2\%$ | $57.6\%$
| 3.0 | $55.7\%$ | $55.5\%$ | $59.3\%$ | $58.8\%$
| 10.0 | $53.3\%$ | $53.4\%$ | $56.5\%$ | $56.3\%$
Table 5: Results for $8$-client experiments with $250$ random classes per client, $s=0$ and varying values of $\nu_{\rm emb}$ and $\nu_{\rm aux}$.
$\nu_{\rm emb}$ | $\nu_{\rm aux}$ | $\beta_{\rm priv}^{\rm(m)}$ | $\beta_{\rm sh}^{\rm(m)}$ | $\beta_{\rm priv}^{\rm(aux)}$ | $\beta_{\rm sh}^{\rm(aux)}$
---|---|---|---|---|---
0.0 | 0.0 | $68.0\%$ | $25.2\%$ | $0.1\%$ | $0.1\%$
| 1.0 | $70.6\%$ | $29.1\%$ | $70.5\%$ | $42.0\%$
| 3.0 | $70.9\%$ | $30.0\%$ | $70.1\%$ | $43.3\%$
| 10.0 | $68.0\%$ | $25.3\%$ | $66.0\%$ | $39.7\%$
1.0 | 0.0 | $69.0\%$ | $26.0\%$ | $0.1\%$ | $0.1\%$
| 1.0 | $71.8\%$ | $29.8\%$ | $71.5\%$ | $43.0\%$
| 3.0 | $72.0\%$ | $29.9\%$ | $71.0\%$ | $44.1\%$
| 10.0 | $66.8\%$ | $23.0\%$ | $65.1\%$ | $37.5\%$
3.0 | 0.0 | $65.2\%$ | $24.9\%$ | $0.1\%$ | $0.1\%$
| 1.0 | $71.7\%$ | $29.7\%$ | $72.1\%$ | $39.1\%$
| 3.0 | $71.8\%$ | $29.7\%$ | $71.9\%$ | $40.8\%$
| 10.0 | $65.4\%$ | $23.1\%$ | $63.4\%$ | $36.4\%$
Table 6: Results for $8$-client experiments with $250$ random classes per client, $s=100$ and varying values of $\nu_{\rm emb}$ and $\nu_{\rm aux}$.
Heads | $1$ | $2$ | $3$ | $4$
---|---|---|---|---
$\beta_{\rm priv}^{\rm(m)}$ | $56.2\%$ | $56.1\%$ | $55.8\%$ | $55.9\%$
$\beta_{\rm sh}^{\rm(m)}$ | $56.1\%$ | $55.8\%$ | $55.8\%$ | $55.5\%$
$\beta_{\rm priv}^{\rm(1)}$ | $59.6\%$ | $59.6\%$ | $59.4\%$ | $59.4\%$
$\beta_{\rm sh}^{\rm(1)}$ | $59.4\%$ | $59.5\%$ | $59.6\%$ | $59.4\%$
$\beta_{\rm priv}^{\rm(2)}$ | | $60.0\%$ | $59.7\%$ | $59.7\%$
$\beta_{\rm sh}^{\rm(2)}$ | | $59.7\%$ | $59.9\%$ | $59.5\%$
$\beta_{\rm priv}^{\rm(3)}$ | | | $59.5\%$ | $59.1\%$
$\beta_{\rm sh}^{\rm(3)}$ | | | $59.5\%$ | $59.1\%$
$\beta_{\rm priv}^{\rm(4)}$ | | | | $58.7\%$
$\beta_{\rm sh}^{\rm(4)}$ | | | | $58.6\%$
Table 7: Results for $8$-client experiments with $250$ random classes per client, $s=0$, $\nu_{\rm emb}=1$, $\nu_{\rm aux}=3$ and a varying number of auxiliary heads (separate columns).
Heads | $1$ | $2$ | $3$ | $4$
---|---|---|---|---
$\beta_{\rm priv}^{\rm(m)}$ | $72.5\%$ | $71.6\%$ | $71.1\%$ | $72.5\%$
$\beta_{\rm sh}^{\rm(m)}$ | $30.5\%$ | $32.5\%$ | $33.1\%$ | $32.7\%$
$\beta_{\rm priv}^{\rm(1)}$ | $71.4\%$ | $70.6\%$ | $70.7\%$ | $71.4\%$
$\beta_{\rm sh}^{\rm(1)}$ | $44.7\%$ | $46.6\%$ | $46.9\%$ | $46.4\%$
$\beta_{\rm priv}^{\rm(2)}$ | | $68.5\%$ | $68.1\%$ | $68.7\%$
$\beta_{\rm sh}^{\rm(2)}$ | | $51.6\%$ | $52.0\%$ | $51.6\%$
$\beta_{\rm priv}^{\rm(3)}$ | | | $66.1\%$ | $66.1\%$
$\beta_{\rm sh}^{\rm(3)}$ | | | $53.8\%$ | $53.6\%$
$\beta_{\rm priv}^{\rm(4)}$ | | | | $63.4\%$
$\beta_{\rm sh}^{\rm(4)}$ | | | | $54.5\%$
Table 8: Results for $8$-client experiments with $250$ random classes per
client, $s=100$, $\nu_{\rm emb}=1$, $\nu_{\rm aux}=3$ and a varying number of
auxiliary heads (separate columns).
FIAN/TD/07-22
Disentanglement of Topological and Dynamical Fields in 3d Higher-Spin Theory
within Shifted Homotopy Approach
A.V. Korybut1, A.A. Sevostyanova1,2, M.A. Vasiliev1,2 and V.A. Vereitin1,2
1 I.E. Tamm Department of Theoretical Physics, Lebedev Physical Institute,
Leninsky prospect 53, 119991, Moscow, Russia
2 Moscow Institute of Physics and Technology,
Institutsky lane 9, 141700, Dolgoprudny, Moscow region, Russia
###### Abstract
The first-order correction to the one-form sector of equations of the $3d$
higher-spin theory is derived from the generating nonlinear HS system by
virtue of the shifted homotopy approach. The family of solutions to the
generating system that disentangles equations for dynamical and topological
fields in the first order of perturbation theory is found. This family is
shown to belong to a different cohomology class than the solution
found earlier by direct methods. The related cohomology is shown to be the
same as that underlying the mass deformation in the matter sector of $3d$
higher-spin equations.
###### Contents
1. Introduction
2. Space-time geometry
3. Fields and star product
   3.1 Star product
   3.2 Topological and dynamical fields
4. Nonlinear System
5. Shifted homotopy
   5.1 Homotopy trick
   5.2 Star-exchange formulas
6. Perturbative expansion
   6.1 Vacuum solution
   6.2 Linearisation
7. Shifted solution
   7.1 Shifted homotopy setup
   7.2 Covariant derivative
   7.3 Family of shifts
   7.4 Definite parity solutions
8. $D_{0}$-cohomology from massive terms
9. Vertex $\Upsilon(\omega,\omega,C)$
10. Conclusion
Acknowledgement
## 1 Introduction
Full nonlinear higher-spin (HS) field theory in three dimensions is known at
the level of classical equations of motion, which can be obtained from the
generating system of [1] (see also [2, 3]). HS vertices derived from the
generating system can be written in the form
${{\rm
d}}_{x}\omega+\omega\ast\omega=\Upsilon(\omega,\omega,C)+\Upsilon(\omega,\omega,C,C)+\ldots,$
(1.1) ${{\rm d}}_{x}C+\omega\ast
C-C\ast\omega=\Upsilon(\omega,C,C)+\Upsilon(\omega,C,C,C)+\ldots\,,$ (1.2)
where the roles of the one-form $\omega$ and zero-form $C$ are specified below
(${{\rm d}}_{x}:=dx^{n}\frac{\partial}{\partial x^{n}}$ is the space-time de
Rham derivative).
It is well known that $3d$ massless HS gauge fields do not propagate. However,
in the sector of zero-forms $C$, the theory contains propagating dynamical
matter fields. These make the theory non-trivial even locally. In addition,
the zero-forms $C$ contain topological fields that do not propagate in the
usual sense. These fields belong to the finite-dimensional modules over the
isometry algebra. Topological fields do appear on the r.h.s. of (1.1) and
(1.2) and the two types of fields may source each other.
For the analysis of the theory, it is desirable to disentangle the equations
for dynamical and topological fields. This problem was originally raised in
[2] where it has been solved in the first order of perturbation theory via
a particular field redefinition (this problem was also discussed in [4]). In
this paper we analyse the same problem in a more general setup appropriate for
studying higher-order corrections in (1.1), (1.2). From this perspective, we
are not interested in a particular field redefinition that solves the problem
in the first order, but examine a family of them resulting from the so-called
shifted homotopy approach [5, 6].
Solving the equations of the generating system [1], one systematically faces
equations
${{\rm d}}_{z}f=g\,,\qquad{{\rm d}}_{z}:=dz^{\alpha}\frac{\partial}{\partial
z^{\alpha}}$ (1.3)
on differential forms $f$ in certain auxiliary variables $z^{\alpha}$. These
equations can be solved with the help of the homotopy trick. One of the
advantages of the shifted homotopy approach of [6] is that, at least in the
lowest orders of perturbation theory, it allows one to obtain local vertices
with no additional field redefinitions.
The formalism originally proposed in [6] for the $4d$ HS theory has general
applicability and can be applied to the HS theory in $AdS_{3}$. The purpose of
its application in this paper is to solve equations of the generating system
in such a way that the r.h.s. of (1.1) and (1.2) for topological (dynamical)
$\omega$ and $C$ is composed only of topological (dynamical) $\omega$ and $C$,
thus disentangling dynamical and topological fields. (Note that the locality
problem, which emerges at the second order, will be considered elsewhere.)
Though the disentanglement problem is successfully solved in this paper, the
class of considered shifted homotopy operators does not cover all field
redefinitions that solve it in the first order. In particular, it does not
reproduce the change of variables of [2].
An important observation of this paper concerns the role of the algebra of
deformed oscillators [7] in the disentanglement problem. Namely, it will be
shown that the terms that govern the deformation of the $3d$ HS theory
associated with the deformed oscillator algebra reproduce the cohomological
terms that will be identified in the homotopy analysis. Note that this
observation may have another useful application in the context of the approach
to holography relying on the unfolded formulation [8]. It works as follows:
one starts with equations for the free fields in $AdS$ and then, by a proper
rescaling of the oscillator variables (the two pairs of oscillators $y$ and
$\bar{y}$ with canonical commutation relations [8] get rescaled by a factor of
$\sqrt{z}$, where $z$ is the radial Poincaré coordinate in $AdS$), goes to the
boundary, resulting in equations for the boundary conformal currents. The
counterpart of this version of holography has not yet been elaborated for
massive fields in $AdS_{3}$. Partially this is due to the highly involved
product law of the deformed oscillator algebra [9, 10, 11, 12]. However,
cohomological (massive) terms obtained in this paper in terms of the
undeformed Moyal star product may significantly simplify this analysis.
The layout of the rest of the paper is as follows. In section 2 the geometry
of space-time and the frame-like formalism are recalled. Section 3 sketches
elements of the $3d$ HS theory, including the language of two-component
spinors, Moyal star product and the disentanglement problem of topological and
dynamical fields. Section 4 recalls the setup of the nonlinear HS equations of
[1]. Section 5 briefly describes the homotopy approach, which reproduces
perturbative solutions of the equations of the theory at zeroth and first
order. In sections 6 and 7 the homotopy shift disentangling the topological and
dynamical fields is obtained by searching for a gauge transformation. In
section 8 the cohomological plot [13] is completed while in section 9 the
homotopy shift is found again, but in a different way. Brief conclusions are
given in section 10.
## 2 Space-time geometry
In this paper the space-time geometry will be described in terms of the one-
form connection $A$
$A=h^{a}P_{a}+\omega^{ab}L_{ab}\,,$ (2.1)
which takes values in the space-time symmetry algebra. Here generators of
generalized translations $P_{a}$ and Lorentz rotations $L_{ab}$ are contracted
with the vielbein one-form $h^{a}=h_{\mu}^{a}(x)dx^{\mu}$ and spin-connection
$\omega^{ab}=-\omega^{ba}=\omega_{\mu}^{ab}(x)dx^{\mu}$ (see [14], [15]).
The symmetry algebra of $AdS_{d}$ is $o(d-1,2)$ with the gauge fields
$A_{\mu}^{BC}=-A_{\mu}^{CB}$ (the indices $B,C=0,\ldots,d$ are raised and
lowered by the flat metric $\eta^{BC}=\operatorname{diag}(+,-,\dots,-,+)$). In
terms of the connections
$\omega_{\mu}^{ab}=A_{\mu}^{ab},h_{\mu}^{a}=(\sqrt{2}\lambda)^{-1}A_{\mu}^{ad}$
with the convention $a,b=0,\ldots,d-1$, the respective $o(d-1,2)$ gauge
curvatures ${{\rm d}}A+AA=L_{ab}R^{ab}+P_{a}R^{a}$ read as:
$R_{\mu\nu}{}^{ab}=\partial_{\mu}\omega_{\nu}{}^{ab}+\omega_{\mu}{}^{a}{}_{c}\omega_{\nu}{}^{cb}-2\lambda^{2}h_{\mu}{}^{a}h_{\nu}{}^{b}-(\mu\leftrightarrow\nu),$
(2.2)
$R_{\mu\nu}{}^{a}=\partial_{\mu}h_{\nu}{}^{a}+\omega_{\mu}{}^{a}{}_{c}h_{\nu}{}^{c}-(\mu\leftrightarrow\nu).$
(2.3)
Expression (2.2) differs from the Poincaré curvatures by the terms
proportional to $\lambda^{2}$. One can express the Lorentz connection
$\omega_{\mu}{}^{ab}$ via the vielbein $h_{\mu}{}^{a}$ with the aid of the
constraint $R_{\mu\nu}{}^{a}=0$ (for non-degenerate
$h_{\mu}{}^{a}$). Substituting
$\omega_{\mu}{}^{ab}=\omega_{\mu}{}^{ab}(h)$ into (2.2), one can see that the
condition $R_{\mu\nu}{}^{ab}=0$ is equivalent to
$\mathcal{R}_{\mu\nu}{}^{ab}=2\lambda^{2}\left(h_{\mu}{}^{a}h_{\nu}{}^{b}-h_{\nu}{}^{a}h_{\mu}{}^{b}\right),$
(2.4)
where
$\mathcal{R}_{\mu\nu}{}^{ab}:=\partial_{\mu}\omega_{\nu}{}^{ab}+\omega_{\mu}{}^{a}{}_{c}\omega_{\nu}{}^{cb}-(\mu\leftrightarrow\nu)$
is the Riemann tensor. Therefore (2.4) describes the AdS space-time with radius
$(\sqrt{2}\lambda)^{-1}$:
$\mathcal{R}:=\mathcal{R}_{\nu\mu}{}^{\mu\nu}=-2\lambda^{2}d(d-1).$ (2.5)
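As an independent cross-check of (2.5) at $d=3$, one can compute the Ricci scalar of a metric of radius $(\sqrt{2}\lambda)^{-1}$ directly in metric variables. The sketch below (sympy) uses our own coordinate choice (a Poincaré patch) and standard textbook curvature conventions, so only the resulting value $-12\lambda^{2}$, not the intermediate sign conventions, should be compared with the frame-like computation in the text.

```python
import sympy as sp

t0, x1 = sp.symbols('t0 x1')
u = sp.symbols('u', positive=True)
lam = sp.Symbol('lambda', positive=True)
coords = (t0, x1, u)
n = 3

# Poincare-patch AdS_3 metric with radius L = (sqrt(2)*lam)**(-1),
# signature (-, +, +): ds^2 = (L/u)^2 (-dt^2 + dx^2 + du^2)
L = 1/(sp.sqrt(2)*lam)
g = sp.diag(-(L/u)**2, (L/u)**2, (L/u)**2)
ginv = g.inv()

# Christoffel symbols Gamma^l_{ij}
Gamma = [[[sp.simplify(sp.Rational(1, 2)*sum(
              ginv[l, s]*(sp.diff(g[s, i], coords[j])
                          + sp.diff(g[s, j], coords[i])
                          - sp.diff(g[i, j], coords[s]))
              for s in range(n)))
           for j in range(n)]
          for i in range(n)]
         for l in range(n)]

# Ricci tensor R_{ij} = d_l Gamma^l_{ij} - d_j Gamma^l_{il}
#                     + Gamma^l_{ls} Gamma^s_{ij} - Gamma^l_{js} Gamma^s_{il}
Ric = sp.zeros(n, n)
for i in range(n):
    for j in range(n):
        Ric[i, j] = sp.simplify(
            sum(sp.diff(Gamma[l][i][j], coords[l]) for l in range(n))
            - sum(sp.diff(Gamma[l][i][l], coords[j]) for l in range(n))
            + sum(Gamma[l][l][s]*Gamma[s][i][j]
                  for l in range(n) for s in range(n))
            - sum(Gamma[l][j][s]*Gamma[s][i][l]
                  for l in range(n) for s in range(n)))

R_scalar = sp.simplify(sum(ginv[i, j]*Ric[i, j]
                           for i in range(n) for j in range(n)))
# Expect R = -2*lambda^2*d*(d-1) = -12*lambda^2 at d = 3
```

The result agrees with $-2\lambda^{2}d(d-1)$ at $d=3$, i.e. with the standard constant-curvature value $-d(d-1)/L^{2}$ for AdS of radius $L$.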
In the case of $AdS_{3}$ with $o(2,2)\sim sp(2)\oplus sp(2)$ it is convenient
to use the spinor formalism where:
$\omega^{\alpha\beta}(x)=\omega_{\nu}^{\alpha\beta}(x)dx^{\nu},\quad
h^{\alpha\beta}(x)=h_{\nu}^{\alpha\beta}(x)dx^{\nu}.$ (2.6)
Here $x^{\nu}$ are space-time coordinates $(\nu=0,1,2)$ and $\alpha,\beta=1,2$
are spinor indices which are raised and lowered with the aid of the symplectic
form $\epsilon$:
$\begin{array}[]{ll}A_{\beta}=A^{\alpha}\epsilon_{\alpha\beta},&A^{\alpha}=\epsilon^{\alpha\beta}A_{\beta},\\\
\epsilon_{\alpha\beta}=-\epsilon_{\beta\alpha},&\epsilon_{\alpha\beta}\epsilon^{\alpha\gamma}=\delta_{\beta}^{\gamma}.\end{array}$
(2.7)
The $AdS_{3}$ geometry is described by the zero-curvature and zero-torsion
conditions
${{\rm
d}}\omega^{\alpha\beta}+\omega^{\alpha}{}_{\gamma}\wedge\omega^{\beta\gamma}+\lambda^{2}h^{\alpha}{}_{\gamma}\wedge
h^{\beta\gamma}=0,$ (2.8) ${{\rm
d}}h^{\alpha\beta}+\omega^{\alpha}{}_{\gamma}\wedge
h^{\beta\gamma}+h^{\alpha}{}_{\gamma}\wedge\omega^{\beta\gamma}=0.$ (2.9)
## 3 Fields and star product
Such a formulation of space-time geometry provides a natural starting point
for the formulation of the dynamics of free matter fields in terms of
covariant constancy conditions for appropriate representations of the space-
time symmetry algebra. Namely, this “unfolded formulation” [16] allows one to
rewrite free field equations in the form:
${{\rm d}}C_{i}=A_{i}^{j}\wedge C_{j},$ (3.1)
where ${{\rm d}}=dx^{\nu}\frac{\partial}{\partial x^{\nu}}$ is the space-time
exterior differential and the gauge fields
$A_{i}{}^{j}=A^{a}\left(T_{a}\right)_{i}{}^{j}$ obey the zero-curvature
conditions:
${{\rm d}}A^{a}=U_{bc}^{a}A^{b}\wedge A^{c},$ (3.2)
where $U_{bc}^{a}$ are structure coefficients of the space-time Lie
(super)algebra.
Next it is convenient to represent the scalar field zero-form in terms of the
generating function [1]
$C\left(\hat{y};\psi_{1,2};k|x\right)=\sum_{A,B,C=0}^{1}\sum_{n=0}^{\infty}\frac{1}{n!}\lambda^{-\left[\frac{n}{2}\right]}C_{\alpha_{1}\ldots\alpha_{n}}^{ABC}(x)k^{A}\psi_{1}^{B}\psi_{2}^{C}\hat{y}^{\alpha_{1}}\ldots\hat{y}^{\alpha_{n}}\,.$
(3.3)
Here $\hat{y}^{\alpha_{i}}$ are Weyl oscillators, $k$ is the exterior Klein
operator and $\psi_{1,2}$ are Clifford elements obeying the following
conditions:
$\begin{gathered}\left[\hat{y}_{\alpha},\hat{y}_{\beta}\right]=2i\epsilon_{\alpha\beta},\quad
k\hat{y}_{\alpha}=-\hat{y}_{\alpha}k,\quad k^{2}=1,\\\
\left\\{\psi_{i},\psi_{j}\right\\}=2\delta_{ij},\quad\left[\psi_{i},\hat{y}_{\alpha}\right]=0,\quad\left[\psi_{i},k\right]=0,\quad[\psi_{i},dx^{\mu}]=0.\end{gathered}$
(3.4)
The coefficients $C_{\alpha_{1}\ldots\alpha_{n}}^{ABC}(x)$ are symmetric in
the two-component spinor indices $\alpha$, which corresponds to the Weyl
ordering in the Weyl algebra of the oscillators $\hat{y}_{\alpha}$.
Following [1], $\psi_{1}$ is introduced to describe the doubling of $sp(2)$ in
the $AdS_{3}$ algebra $o(2,2)$ while the role of $\psi_{2}$ will be clarified
below. The $o(2,2)$ generators can be realized as
$L_{\alpha\beta}:=T_{\alpha\beta}\,,\qquad
P_{\alpha\beta}:=\psi_{1}T_{\alpha\beta}\,,\qquad
T_{\alpha\beta}:=\frac{1}{4i}\left\\{\hat{y}_{\alpha},\hat{y}_{\beta}\right\\}$
(3.5)
with the $sl_{2}\sim sp(2)$ generators $T_{\alpha\beta}$ obeying relations
$\left[T_{\alpha\beta},T_{\gamma\delta}\right]=\epsilon_{\alpha\gamma}T_{\beta\delta}+\epsilon_{\beta\delta}T_{\alpha\gamma}+\epsilon_{\alpha\delta}T_{\beta\gamma}+\epsilon_{\beta\gamma}T_{\alpha\delta}\,,$
(3.6)
$\left[T_{\alpha\beta},\hat{y}_{\gamma}\right]=\epsilon_{\alpha\gamma}\hat{y}_{\beta}+\epsilon_{\beta\gamma}\hat{y}_{\alpha}\,.$
(3.7)
So, the $o(2,2)$ one-form connection can be realized as:
$W_{gr}=\omega+\lambda
h\psi_{1},\quad\omega=\frac{1}{8i}\omega^{\alpha\beta}\left\\{\hat{y}_{\alpha},\hat{y}_{\beta}\right\\},\quad
h=\frac{1}{8i}h^{\alpha\beta}\left\\{\hat{y}_{\alpha},\hat{y}_{\beta}\right\\}.$
(3.8)
Now the flatness conditions (2.8), (2.9), that describe $AdS_{3}$ geometry,
read
${{\rm d}}W_{gr}+W_{gr}\wedge W_{gr}=0.$ (3.9)
### 3.1 Star product
To write down covariant constancy equations for the set of fields
$C\left(\hat{y};\psi_{1,2};k\right)$ it is convenient to replace all Weyl-
ordered elements of the oscillator algebra by their Weyl symbols according to
the rule
$C_{\alpha_{1}\ldots\alpha_{n}}^{ABC}k^{A}\psi_{1}^{B}\psi_{2}^{C}\hat{y}^{\alpha_{1}}\ldots\hat{y}^{\alpha_{n}}\rightarrow
C_{\left(\alpha_{1}\ldots\alpha_{n}\right)}^{ABC}k^{A}\psi_{1}^{B}\psi_{2}^{C}y^{\alpha_{1}}\ldots
y^{\alpha_{n}}\,,$ (3.10)
where $y^{\alpha}$ are usual commuting variables.
The commutation relations of the Weyl algebra are reproduced by the star
product (the integration measure is normalized so that
$(1*f)(y)=(f*1)(y)=f(y)$)
$(f*g)(y)=\int du\,dv\,f(y+u)g(y+v)e^{iu_{\alpha}v^{\alpha}}\,,$ (3.11)
where $(f*g)(y)$ is the Weyl symbol of $f(\hat{y})g(\hat{y})$. The following
useful relations can be easily checked:
$\left[y_{\alpha},y_{\beta}\right]_{*}=2i\epsilon_{\alpha\beta},\quad\left[y_{\alpha},f\right]_{*}=2i\frac{\partial
f}{\partial y^{\alpha}},$ (3.12)
$\left[y_{\alpha}y_{\beta},f(y)\right]_{*}=2i\left(y_{\alpha}\frac{\partial}{\partial
y^{\beta}}+y_{\beta}\frac{\partial}{\partial y^{\alpha}}\right)f(y),$ (3.13)
$\left\\{y_{\alpha}y_{\beta},f(y)\right\\}_{*}=2\left(y_{\alpha}y_{\beta}-\frac{\partial}{\partial
y^{\alpha}}\frac{\partial}{\partial y^{\beta}}\right)f(y).$ (3.14)
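For polynomial symbols, the integral formula (3.11) is equivalent to the differential (Moyal) expansion of the star product, which makes relations such as (3.12) and (3.13) straightforward to verify mechanically. The following sympy sketch works in components with the convention $\epsilon^{12}=+1$ (our choice; overall sign conventions for $\epsilon$ vary between references):

```python
import itertools
import sympy as sp

y1, y2 = sp.symbols('y1 y2')   # components y^1, y^2
y = (y1, y2)
eps = ((0, 1), (-1, 0))        # eps^{alpha beta}, with eps^{12} = +1

def star(f, g, order=6):
    """Moyal star product of polynomial Weyl symbols:
    f * g = f exp(i eps^{ab} d/dy^a(left) d/dy^b(right)) g,
    which reproduces [y^a, y^b]_* = 2i eps^{ab}.  The series
    terminates once 'order' exceeds the polynomial degree."""
    total = sp.Integer(0)
    for n in range(order + 1):
        term = sp.Integer(0)
        for a in itertools.product((0, 1), repeat=n):
            for b in itertools.product((0, 1), repeat=n):
                c = 1
                for k in range(n):
                    c *= eps[a[k]][b[k]]
                if c == 0:
                    continue
                df, dg = f, g
                for k in range(n):
                    df = sp.diff(df, y[a[k]])
                    dg = sp.diff(dg, y[b[k]])
                term += c * df * dg
        total += sp.I**n / sp.factorial(n) * term
    return sp.expand(total)
```

With this, one can check $[y^{1},y^{2}]_{*}=2i$, the bilinear commutator relation behind (3.13), and associativity on sample polynomials.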
### 3.2 Topological and dynamical fields
In these terms, the Weyl symbols for the $AdS_{3}$ connection and scalar field
read as:
$W_{gr}=\omega+\lambda
h\psi_{1},\quad\omega=\frac{1}{4i}\omega^{\alpha\beta}y_{\alpha}y_{\beta},\quad
h=\frac{1}{4i}h^{\alpha\beta}y_{\alpha}y_{\beta},$ (3.15)
$C\left(y;\psi_{1,2};k\right)=\sum_{A,B,C=0}^{1}\sum_{n=0}^{\infty}\frac{1}{n!}\lambda^{-\left[\frac{n}{2}\right]}C_{\alpha_{1}\ldots\alpha_{n}}^{ABC}k^{A}\psi_{1}^{B}\psi_{2}^{C}y^{\alpha_{1}}\ldots
y^{\alpha_{n}}.$ (3.16)
One can define the full covariant derivative in $AdS_{3}$
$D_{0}P={{\rm d}}P+W_{gr}*P-(-1)^{p}P*W_{gr},$ (3.17)
for a degree-$p$ differential form $P$, and write down the covariant constancy
equation for the zero-form $C\left(y;\psi_{1,2};k\right)$:
$D_{0}C=0.$ (3.18)
Expansion of $C\left(y;\psi_{1,2};k\right)$ in powers of $\psi_{2}$
$C\left(y;\psi_{1,2};k\right)=C^{top}\left(y;\psi_{1};k\right)+C^{dyn}\left(y;\psi_{1};k\right)\psi_{2},$
(3.19)
yields two equations
$DC^{top}=-\lambda\psi_{1}\left[h,C^{top}\right]_{*},$ (3.20)
$DC^{dyn}=-\lambda\psi_{1}\left\\{h,C^{dyn}\right\\}_{*}\,,$ (3.21)
where $D:={{\rm d}}+[\omega,\bullet]_{*}$ is the Lorentz-covariant derivative.
So, $\psi_{2}$ induces the decomposition of equations (3.18) into two
independent subsystems describing topological and dynamical fields.
Indeed, using the form of the generating function
$C^{dyn}\left(y;\psi_{1};k\right)$ and the vielbein $h(y)$, by virtue of (3.14)
one observes that equation (3.21) amounts to an infinite chain of equations:
$DC^{dyn}_{\alpha(n)}=\frac{\psi_{1}}{2i}\left[h^{\gamma\delta}C^{dyn}_{\gamma\delta\alpha(n)}-\lambda^{2}n(n-1)h_{\alpha\alpha}C^{dyn}_{\alpha(n-2)}\right].$
(3.22)
All indices are raised and lowered by the rules (2.7). Here symmetric sets of
indices are denoted as $\alpha(n)=(\alpha_{1}\dots\alpha_{n})$.
It is easy to see that for lower $n$, these equations reduce to the massless
Klein-Gordon and Dirac equations in $AdS_{3}$
$D^{\mu}D_{\mu}C=\frac{3}{2}\lambda^{2}C,\quad
h^{\nu}{}_{\alpha}{}^{\beta}D_{\nu}C_{\beta}=0\,.$ (3.23)
Other equations relate higher components in $y$ to higher space-time
derivatives of the scalar and spinor fields. Thus, the values of the fields
$C_{\alpha(2k)}(x_{0})$ and $C_{\alpha(2k+1)}(x_{0})$ at some point $x_{0}$
determine the behaviour of the fields $C(y|x)$ in some neighbourhood of it.
This set of values is, in a sense, an infinite set of initial data in the
Cauchy problem. Thus, not surprisingly, the dynamical scalar and spinor fields
have an infinite number of degrees of freedom.
Analogously, one can obtain that (3.20) decomposes into an infinite set of
independent subsystems
$DC_{\alpha(n)}^{top}=4i\lambda
nh_{\alpha}{}^{\beta}C_{\beta\alpha(n-1)}^{top}$ (3.24)
with different $n$. The number of initial data to be specified to solve the
Cauchy problem is finite for every subsystem, being equal to the number of
components of $C^{top}(y|x)$ of a fixed homogeneity degree in $y$. Topological
fields have a finite number of degrees of freedom and do not contribute to the
local dynamics of the system. In fact, as emphasized in [17], they play the role
of coupling constants in the theory. However, it is convenient to unify
dynamical and topological fields into a single generating function to write a
system of nonlinear equations in a compact way.
## 4 Nonlinear System
Following [1], to formulate the full nonlinear system that possesses the
necessary gauge symmetries and reduces at the linearized level to the free
system, we introduce additional variables $z$ and three types of generating
functions $W\left(z;y;\psi_{1,2};k\right)$, $B\left(z;y;\psi_{1,2};k\right)$, and
$S\left(z;y;\psi_{1,2};k\right)$. At $z=0$, $W$ and $B$ reduce to the fields
of the free theory
$W\left(z;y;\psi_{1,2};k\right)\stackrel{{\scriptstyle
z=0}}{{\longrightarrow}}\omega\left(y;\psi_{1,2};k\right),\quad
B\left(z;y;\psi_{1,2};k\right)\stackrel{{\scriptstyle
z=0}}{{\longrightarrow}}C\left(y;\psi_{1,2};k\right).$ (4.1)
The generating function $S\left(z;y;\psi_{1,2};k\right)$ is a one-form in $dz$,
having the meaning of a connection in the additional variables
$S\left(dz;z;y;\psi_{1,2};k\right)=dz^{\alpha}S_{\alpha}\left(z;y;\psi_{1,2};k\right).$
(4.2)
The extended set of variables obeys the commutation relations
$\begin{gathered}{\left[y_{\alpha},y_{\beta}\right]=\left[z_{\alpha},z_{\beta}\right]=\left[z_{\alpha},y_{\beta}\right]=0},\\\
\left\\{dx_{\mu},dx_{\nu}\right\\}=\left\\{dz_{\alpha},dz_{\beta}\right\\}=\left\\{dz_{\alpha},dx_{\mu}\right\\}=0,\\\
\left\\{k,y_{\alpha}\right\\}=\left\\{k,z_{\alpha}\right\\}=\left\\{k,dz_{\alpha}\right\\}=0,\quad
k^{2}=1,\\\
{\left[\psi_{i},y_{\alpha}\right]=\left[\psi_{i},z_{\alpha}\right]=\left[\psi_{i},k\right]=0,\quad\left\\{\psi_{i},\psi_{j}\right\\}=2\delta_{ij}}.\end{gathered}$
(4.3)
Also, the $z$-variables bring a new exterior differential ${{\rm d}}_{z}$
${{\rm d}}=dx^{\mu}\frac{\partial}{\partial x^{\mu}},\quad{{\rm
d}}_{z}=dz^{\alpha}\frac{\partial}{\partial z^{\alpha}},\quad{{\rm d}}{{\rm
d}}_{z}=-{{\rm d}}_{z}{{\rm d}}.$ (4.4)
Further, the extension of the definition of the star product to the new
variables (here the measure is again normalized so that
$(1\ast f)(z,y)=(f\ast 1)(z,y)=f(z,y)$),
$f(z,y)*g(z,y)=\int du\,dv\,f(z+u,y+u)g(z-v,y+v)e^{iu_{\alpha}v^{\alpha}}\,,$
(4.5)
has the following properties:
$\begin{gathered}{\left[y_{\alpha},y_{\beta}\right]_{*}=-\left[z_{\alpha},z_{\beta}\right]_{*}=2i\epsilon_{\alpha\beta},\quad\left[z_{\alpha},y_{\beta}\right]_{*}=0},\\\
{\left[y_{\alpha},f(z,y)\right]_{*}=2i\frac{\partial f(z,y)}{\partial
y^{\alpha}},\quad\left[z_{\alpha},f(z,y)\right]_{*}=-2i\frac{\partial
f(z,y)}{\partial z^{\alpha}}},\\\
e^{iz_{\alpha}y^{\alpha}}*f(z,y)=f(-z,-y)*e^{iz_{\alpha}y^{\alpha}}.\end{gathered}$
(4.6)
The exponential in the last relation, $\varkappa=e^{iz_{\alpha}y^{\alpha}}$, is
conventionally called the inner Klein operator.
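The commutation relations in (4.6) can be verified in the same mechanical way from the differential form of the extended product (4.5), in which the left factor is differentiated with $\partial_{y}+\partial_{z}$ and the right factor with $\partial_{y}-\partial_{z}$. The sympy sketch below applies to polynomial symbols only (so the relation involving the inner Klein operator lies outside its scope) and uses the component convention $\epsilon^{12}=+1$, which is our choice:

```python
import itertools
import sympy as sp

y1, y2, z1, z2 = sp.symbols('y1 y2 z1 z2')
yv, zv = (y1, y2), (z1, z2)
eps = ((0, 1), (-1, 0))  # eps^{alpha beta}, eps^{12} = +1

def dl(expr, a):
    # derivative acting on the left factor: d/dy^a + d/dz^a
    return sp.diff(expr, yv[a]) + sp.diff(expr, zv[a])

def dr(expr, b):
    # derivative acting on the right factor: d/dy^b - d/dz^b
    return sp.diff(expr, yv[b]) - sp.diff(expr, zv[b])

def starzy(f, g, order=6):
    """Extended star product (4.5) for polynomial symbols f(z,y), g(z,y):
    f * g = f exp(i (d_y + d_z)_a eps^{ab} (d_y - d_z)_b) g."""
    total = sp.Integer(0)
    for n in range(order + 1):
        term = sp.Integer(0)
        for a in itertools.product((0, 1), repeat=n):
            for b in itertools.product((0, 1), repeat=n):
                c = 1
                for k in range(n):
                    c *= eps[a[k]][b[k]]
                if c == 0:
                    continue
                df, dg = f, g
                for k in range(n):
                    df = dl(df, a[k])
                    dg = dr(dg, b[k])
                term += c * df * dg
        total += sp.I**n / sp.factorial(n) * term
    return sp.expand(total)
```

One can then confirm that the $y$-sector keeps $[y^{a},y^{b}]_{*}=2i\epsilon^{ab}$ while the $z$-sector acquires the opposite sign, and that $y$ and $z$ star-commute.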
In these terms one can write the zero-curvature and covariant constancy
equations for the fields $W$, $B$ and $S$, obtaining the nonlinear system of
equations of [1]:
${{\rm d}}W+W*W=0,$ (4.7) ${{\rm d}}B+W*B-B*W=0,$ (4.8) ${{\rm
d}}S+W*S+S*W=0,$ (4.9)
$S*S=i(dz^{\alpha}dz_{\alpha}+B*dz^{\alpha}dz_{\alpha}e^{iz_{\alpha}y^{\alpha}}k),$
(4.10) $S*B-B*S=0.$ (4.11)
## 5 Shifted homotopy
### 5.1 Homotopy trick
Solving the nonlinear HS system one faces equations of the type
${{\rm d}}_{z}f(z;y;dz)=J(z;y;dz),\text{ where }{{\rm
d}}_{z}J(z;y;dz)=0\,,\quad J(z;y;0)=0.$ (5.1)
In the HS context, such equations were originally analysed in the $4d$ theory
[18] by the homotopy technique in the particular case of the so-called
conventional homotopy. In [6] this approach was further generalized to shifted
homotopies, which proved useful for analysing the locality problem in $4d$ HS
theory. Namely, a particular solution of (5.1) can be found in the form
$f_{q}(J)=\Delta_{q}J(z;y;dz):=\left(z^{\alpha}+q^{\alpha}\right)\frac{\partial}{\partial
dz^{\alpha}}\int_{0}^{1}\frac{dt}{t}J(tz-(1-t)q;y;tdz)\,.$ (5.2)
Here $q$ is a $z$- and $dz$-independent spinor, which in general can be an
operator. The general solution of (5.1) is
$f(z;y;dz)=f_{q}(J)(z;y;dz)+h(y;dz)+{{\rm d}}_{z}\epsilon(z;y;dz),$ (5.3)
where $h(y;dz)$ is in the ${{\rm d}}_{z}$-cohomology and ${{\rm
d}}_{z}\epsilon(z;y;dz)$ is an exact form. For $q_{\alpha}\neq 0$ the homotopy
is called shifted, as it results from the shift of $z_{\alpha}$ by $q_{\alpha}$.
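The resolution of identity $\left\{{{\rm d}}_{z},\Delta_{q}\right\}=1-h_{q}$ of (5.6) behind this construction can be checked mechanically on polynomial forms. Since $h_{q}$ sets $dz=0$, it annihilates one-forms, so on a one-form $\omega$ one expects ${{\rm d}}_{z}\Delta_{q}\omega+\Delta_{q}{{\rm d}}_{z}\omega=\omega$ exactly, while on a zero-form $f$ one expects $\Delta_{q}{{\rm d}}_{z}f=f-f(-q)$. A sympy sketch in components:

```python
import sympy as sp

z1, z2, t, q1, q2 = sp.symbols('z1 z2 t q1 q2')
q = (q1, q2)

def at(expr, s1, s2):
    """Evaluate a function of (z1, z2) at a shifted argument."""
    return expr.subs({z1: s1, z2: s2}, simultaneous=True)

def d_oneform(w):
    """d_z (dz^1 w_1 + dz^2 w_2): the coefficient of dz^1 dz^2."""
    return sp.diff(w[1], z1) - sp.diff(w[0], z2)

def delta_oneform(w, q):
    """Delta_q of (5.2) on a one-form: the zero-form
    (z+q)^a int_0^1 dt w_a(t z - (1-t) q)."""
    a1, a2 = t*z1 - (1 - t)*q[0], t*z2 - (1 - t)*q[1]
    integrand = (z1 + q[0])*at(w[0], a1, a2) + (z2 + q[1])*at(w[1], a1, a2)
    return sp.integrate(sp.expand(integrand), (t, 0, 1))

def delta_twoform(J, q):
    """Delta_q on J dz^1 dz^2: a one-form; the extra factor of t
    comes from the rescaled dz in (5.2)."""
    a1, a2 = t*z1 - (1 - t)*q[0], t*z2 - (1 - t)*q[1]
    It = sp.integrate(sp.expand(t*at(J, a1, a2)), (t, 0, 1))
    return (-(z2 + q[1])*It, (z1 + q[0])*It)
```

This is a toy two-component setting with purely polynomial coefficients; the operator nature of $q$ and the exponential class of functions relevant for $\gamma$ are deliberately left out.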
The composition of two homotopy operators can be written in a symmetric form
as an integral over a triangle in a three-dimensional space:
$\begin{split}\Delta_{p}\Delta_{q}f(z;y;dz)=\int_{[0,1]^{3}}d^{3}\tau\delta\left(1-\tau_{1}-\tau_{2}-\tau_{3}\right)&\left(z^{\alpha}+p^{\alpha}\right)\left(z^{\beta}+q^{\beta}\right)\times\\\
\times&\frac{1}{\tau_{1}^{2}}\frac{\partial^{2}}{\partial
dz^{\alpha}\,\partial
dz^{\beta}}f(\tau_{1}z-\tau_{3}p-\tau_{2}q;y;\tau_{1}dz)\,.\end{split}$ (5.4)
Introducing the projection operator onto the cohomology space
$h_{q}f(z;dz)=f(-q;0)$, one obtains, following [6], a list of useful relations
$\Delta_{p}\Delta_{q}=-\Delta_{q}\Delta_{p},\quad
h_{p}\Delta_{q}=-h_{q}\Delta_{p},\quad h_{p}\Delta_{p}=0,\quad
h_{p}h_{q}=h_{q},\quad\Delta_{q}h_{p}=0,$ (5.5) $\left\\{{{\rm
d}}_{z},\Delta_{q}\right\\}=1-h_{q},\qquad\Delta_{b}-\Delta_{a}=\left[{{\rm
d}}_{z},\Delta_{a}\Delta_{b}\right]+h_{a}\Delta_{b}.$ (5.6)
It is particularly useful to compute the action of the homotopy operators on
the central element $\gamma=dz^{\alpha}dz_{\alpha}e^{iz_{\alpha}y^{\alpha}}k$
from the r.h.s. of (4.10). Application of $h_{c}\Delta_{b}\Delta_{a}$ to
$\gamma$ yields
$h_{c}\Delta_{b}\Delta_{a}\gamma=2\int_{[0,1]^{3}}d^{3}\tau\delta\left(1-\tau_{1}-\tau_{2}-\tau_{3}\right)(c-b)_{\gamma}(c-a)^{\gamma}e^{-i\left(\tau_{1}c+\tau_{2}a+\tau_{3}b\right)_{\alpha}y^{\alpha}}k.$
(5.7)
Using the resolution of identity (5.6) and the fact that $\gamma$ is a two-form
in $dz$, one obtains the following relations, often used in shifted homotopy
computations:
$\left(\Delta_{b}-\Delta_{a}\right)\gamma={{\rm
d}}_{z}\Delta_{a}\Delta_{b}\gamma,$ (5.8)
$\Delta_{c}\left(\Delta_{b}-\Delta_{a}\right)\gamma=\left(h_{c}-1\right)\Delta_{b}\Delta_{a}\gamma,$
(5.9)
$\left(\Delta_{d}-\Delta_{c}\right)\left(\Delta_{b}-\Delta_{a}\right)\gamma=\left(h_{d}-h_{c}\right)\Delta_{b}\Delta_{a}\gamma,$
(5.10)
$\left(\Delta_{c}\Delta_{b}-\Delta_{c}\Delta_{a}+\Delta_{b}\Delta_{a}\right)\gamma=h_{c}\Delta_{b}\Delta_{a}\gamma.$
(5.11)
### 5.2 Star-exchange formulas
Since equations (4.7)-(4.11) are written in the star-product formalism, it is
important to understand how the shifted homotopy operators interact with the
star product. Here we present some of the relevant relations, called star-
exchange formulas, originally obtained in [6].
Let the homotopy operator $\Delta_{q+\alpha y}$ act on the star product
$A(y;k)*\phi(z;y;k;dz)$, where $A(y;k)$ is a space-time $r$-form, $q$ is
$y$-independent and $\alpha$ is a $\mathds{C}$-number. Then the relation
$\Delta_{q+\alpha
y}(A(y;k)*\phi(z;y;k;dz))=(-1)^{r}A(y;k)*\Delta_{q+(1-\alpha)p+\alpha
y}\phi(z;y;k;dz)$ (5.12)
holds true. Here a spinor differential operator $p_{\alpha}$ is defined to act
according to the rule
$p_{\alpha}A(y;k)\equiv A(y;k)p_{\alpha}:=-i\dfrac{\partial}{\partial
y^{\alpha}}(A_{1}(y)+A_{2}(y)k).$ (5.13)
Note that the operator $p_{\alpha}$ differentiates $A(y;k)$ with respect to
its full argument and commutes with all other symbols. Let us stress that the
operator $p^{\alpha}$ with an upper spinor index has the opposite sign by
definition
$p^{\alpha}A(y;k):=+i\dfrac{\partial}{\partial y_{\alpha}}A(y;k),\qquad
p^{\alpha}=\epsilon^{\alpha\beta}p_{\beta}.$ (5.14)
Note that in these terms the star product of two $y$-functions can be written
as
$F(y)*G(y)=F(y-p_{G})G(y)=F(y)G(y+p_{F}).$ (5.15)
For the central element $\gamma=dz^{\alpha}dz_{\alpha}e^{iz_{\alpha}y^{\alpha}}k$,
by virtue of the star-exchange relation (5.12) one can show that
$\Delta_{\tilde{q}}\gamma*A(y;k)=(-1)^{r}A(y;k)*\Delta_{\tilde{q}+2p}\gamma,\quad\tilde{q}=q+\alpha
y\,.$ (5.16)
## 6 Perturbative expansion
### 6.1 Vacuum solution
We will analyse the system (4.7)-(4.11) perturbatively. Let us start with the
vacuum solution with $B=B_{0}=0,\>W=W_{0},\>S=S_{0}$. The system takes the
form
${{\rm d}}W_{0}+W_{0}*W_{0}=0,$ (6.1) ${{\rm
d}}S_{0}+W_{0}*S_{0}+S_{0}*W_{0}=0,$ (6.2)
$S_{0}*S_{0}=idz^{\alpha}dz_{\alpha}.$ (6.3)
Equation (6.1) is solved by $W_{0}=W_{gr}$ that describes the $AdS_{3}$
background connection
$W_{0}(y|x)=\omega_{0}(y|x)+\lambda
h_{0}(y|x)\psi_{1},\quad\omega_{0}(y|x)=\dfrac{1}{4i}\omega_{0}^{\alpha\beta}(x)y_{\alpha}y_{\beta}\,,\quad
h_{0}(y|x)=\dfrac{1}{4i}h_{0}^{\alpha\beta}(x)y_{\alpha}y_{\beta}\,.$ (6.4)
Equations (6.2), (6.3) are solved by $S_{0}=dz^{\alpha}z_{\alpha}$. Note that
the star-commutator with $S_{0}$ therefore acquires the meaning of the
exterior derivative in $z$:
$[S_{0},f]_{*}=-2idz^{\alpha}\dfrac{\partial f}{\partial z^{\alpha}}=-2i{{\rm
d}}_{z}f.$ (6.5)
### 6.2 Linearisation
In the first order of the perturbative expansion, we put $B=C$,
$W=W_{0}+W_{1}$, $S=S_{0}+S_{1}$. Then equations (4.7)-(4.11) yield
${{\rm d}}W_{1}+W_{0}*W_{1}+W_{1}*W_{0}=0,$ (6.6) ${{\rm
d}}C+W_{0}*C-C*W_{0}=0,$ (6.7) ${{\rm
d}}S_{1}+W_{0}*S_{1}+S_{1}*W_{0}=-\left\\{S_{0},W_{1}\right\\}_{*},$ (6.8)
$\left\\{S_{0},S_{1}\right\\}_{*}=iC*\gamma,$ (6.9)
$\left[S_{0},C\right]_{*}=0.$ (6.10)
From (6.10) and (6.5) one can see that $C$ is indeed $z$-independent,
$C=C(y;\psi_{1,2};k|x)$. From (6.9), since the star-anticommutator with $S_{0}$
acts on the one-form $S_{1}$ as $-2i\,{{\rm d}}_{z}$ (cf. (6.5)), we get
${{\rm
d}}_{z}S_{1}(z;dz;y;\psi_{1,2};k|x)=-\frac{1}{2}C(y;\psi_{1,2};k|x)*\gamma.$
(6.11)
Generally, the field $C(y;\psi_{1,2};k|x)$ is the sum of the $k$-independent
part and that linear in $k$. For the sake of simplicity, below we consider
only the part linear in $k$ and, slightly abusing the notation, we denote it
$C(y;\psi_{1,2}|x)k$. For the $k$-independent part all the computations are
analogous, and for even background connections $W_{0}(-y)=W_{0}(y)$ they are
identical. Since $k$ anticommutes with $y$, we will use notation
$kA(y)=A(-y)k:=\tilde{A}(y)k$.
Applying the homotopy trick with the homotopy parameter $q=0$ and using star-
exchange formulas, one obtains
$S_{1}(z;dz;y;\psi_{1,2};k|x)=-\frac{1}{2}C(y;\psi_{1,2}|x)*\Delta_{p}\gamma
k.$ (6.12)
Plugging the expression for $S_{1}$ into (6.8) and using (6.5), one finds,
keeping $\psi_{1,2}$ and $x$ implicit
${{\rm
d}}_{z}W_{1}=-\frac{1}{4i}\left[W_{0}*C*(\Delta_{p}-\Delta_{p+t})\gamma-C*\tilde{W}_{0}*(\Delta_{p-2t}-\Delta_{p-t})\gamma\right]k:=J_{W}(z;y;dz).$
(6.13)
The solution of this equation via the conventional homotopy (with parameter
$q=0$) reads
$W_{1}(y;z)=\omega(y)+\Delta_{0}J_{W}(y;z;dz)=\omega+W_{1}^{0}.$ (6.14)
The $z$-independent field $\omega(y)$ serves as the generating function for HS
gauge fields corresponding to the pure Chern-Simons HS theory of Blencowe
[19]. The dynamical and topological sectors of the space-time one-forms are
distinguished in a way opposite to (3.19),
$\omega(y;\psi)=\omega^{dyn}\left(y;\psi_{1}\right)+\omega^{top}\left(y;\psi_{1}\right)\psi_{2}\,,$
(6.15)
with the $\psi_{2}$-independent component identified as dynamical and the one
linear in $\psi_{2}$ as topological (such a complementary picture for one- and
zero-forms also occurs in $4d$ HS theory [20]). In particular, the
gravitational fields (6.4) belong to the dynamical sector.
Substitution of (6.14) into (6.6) yields
$D_{0}\omega=-D_{0}W_{1}^{0}=\Delta R(W_{0},C).$ (6.16)
For the $4d$ problem, the analogous deformation $\Delta R$ is non-zero,
encoding non-trivial HS equations (the so-called central on-mass-shell theorem
[20]). Since $3d$ HS fields do not propagate, the deformation $\Delta R$ is
anticipated to be trivial. Analysing the dynamical and topological sectors, it
is easy to see that in the $\psi_{2}$-independent dynamical sector indeed
$\Delta R^{dyn}=0$. However, in the topological sector
$\Delta
R^{top}\left(y;\psi_{1}\right)=-\left.\frac{\lambda^{2}}{16i}h_{0}{}^{\alpha}{}_{\gamma}\wedge
h_{0}{}^{\gamma\beta}\left(y_{\alpha}-p_{\alpha}\right)\left(y_{\beta}-p_{\beta}\right)C^{dyn}\left(\xi;\psi_{1}\right)\right|_{\xi=0}\neq
0\,.$ (6.17)
We observe that the deformation in the topological sector is expressed in
terms of the dynamical fields. Hence, in this setup topological fields will
contribute to the HS field equations in higher orders. This mixing of
dynamical and topological sectors complicates further higher-order
calculations. It turns out to be possible to compensate the deformation of the
r.h.s. of (6.16) by an appropriate shift (field redefinition) of the variables, of
the structure $\omega^{top}\rightarrow\omega^{top}+h_{0}C^{dyn}$, with some
space-time local $C^{dyn}$-dependent terms. Such a field redefinition,
originally found in [2], is
$\omega^{top}(y)=\omega^{\prime top}(y)+\delta\omega^{top}(y),$ (6.18)
$\delta\omega^{top}(y)=-\frac{\lambda}{8i}\int_{0}^{1}dt\left(1-t^{2}\right)h_{0}^{\alpha\beta}\left(y_{\alpha}-p_{\alpha}\right)\left(y_{\beta}-p_{\beta}\right)C^{dyn}(ty)\psi_{1}.$
(6.19)
Our goal is to find all solutions of equation (6.13) within the shifted
homotopy approach that trivialize the r.h.s. of (6.16), in order to avoid
mixing the dynamical and topological fields.
## 7 Shifted solution
Our aim is to solve the equation for the master field $W_{1}$ in such a way
that the first-order correction in $C$ to the sector of one-form equations is
trivial, i.e. $W_{1}$ should obey the equation
$D_{0}(W_{1})=0.$ (7.1)
### 7.1 Shifted homotopy setup
Equation (6.13), which reconstructs the $z$-dependence of $W_{1}$, can
schematically be put into the form
$d_{z}W_{1}=(\ldots)^{top}+(\ldots)^{top}\psi_{1}+(\ldots)^{dyn}\psi_{2}+(\ldots)^{dyn}\psi_{1}\psi_{2}.$
(7.2)
Here the superscripts $top$ and $dyn$ indicate whether the expressions in the brackets
are composed of $C^{top}(y;\psi_{1})$ or $C^{dyn}(y;\psi_{1})$. Note that here
the dependence of the $C$-fields on $\psi_{1}$ is implicit, while the explicit
dependence on $\psi_{1}$ in (7.2) comes through the background $AdS_{3}$
connection (6.4). Since ${{\rm d}}_{z}$ commutes with $\psi_{1,2}$, each
expression in brackets in (7.2) is ${{\rm d}}_{z}$-closed. Hence, to obtain a
solution, one can apply different homotopy operators to each of them. For
example, one can apply conventional homotopy to all of these terms to obtain a
particular solution for $W_{1}=W_{1}^{0}$
$W_{1}^{0}=\frac{1}{4i}\left[W_{0}*C*\Delta_{p+t}\Delta_{p}\gamma-C*\tilde{W}_{0}*\Delta_{p-t}\Delta_{p-2t}\gamma\right]k\,,$
(7.3)
where the superscript $0$ indicates that the solution results from the zero-
shift (conventional) homotopy $\Delta_{0}$. A straightforward computation shows
that only the term of $W_{1}$ proportional to $\psi_{1}\psi_{2}$ (recall that
an additional dependence on $\psi_{1}$ is hidden in $C^{dyn}(y,\psi_{1})$),
when computed with the help of the conventional homotopy, fails to be
covariantly constant: its covariant derivative is equal to $-\Delta R$ (6.16).
For that reason we apply the shifted homotopy only to the part of the r.h.s. of (6.13)
proportional to $\psi_{1}\psi_{2}$. Also, we use the fact that the system of
[1] is consistent with all fields valued in any associative algebra, which
implies that terms with different orderings of $C$ and $W_{0}$ are ${{\rm
d}}_{z}$-closed independently. The part of the r.h.s. of (6.13) we are
interested in is
$-\frac{\lambda}{4i}\left[h_{0}*C^{dyn}*\big{(}\Delta_{p+t}-\Delta_{p}\big{)}\gamma+C^{dyn}*\tilde{h}_{0}*\big{(}\Delta_{p-2t}-\Delta_{p-t}\big{)}\gamma\right]\psi_{1}\psi_{2}k.$
(7.4)
We apply the homotopy operator $\Delta_{\mu_{1}p+\nu_{1}t+\alpha_{1}y}$ to the
first term and $\Delta_{\mu_{2}p-\nu_{2}t+\alpha_{2}y}$ to the second. Using
(5.9) one can check that the difference $\delta W_{1}$ between $W_{1}$ solved
this way and $W_{1}^{0}$ (7.3) is
$\delta
W_{1}=-\frac{\lambda}{4i}\left[h_{0}*C^{dyn}*h_{Q_{1}(p,t,y)}\Delta_{p+t}\Delta_{p}\gamma+C^{dyn}*\tilde{h}_{0}*h_{Q_{2}(p,-t,y)}\Delta_{p-t}\Delta_{p-2t}\gamma\right]\psi_{1}\psi_{2}k,$
(7.5)
where
$Q_{i}(p,t,y)=(\mu_{i}-\alpha_{i}+1)p+(\nu_{i}-\alpha_{i}+1)t+\alpha_{i}y.$
(7.6)
Expressions (7.6) result from $\Delta_{\mu_{1}p+\nu_{1}t+\alpha_{1}y}$ and
$\Delta_{\mu_{2}p-\nu_{2}t+\alpha_{2}y}$ via star-exchange relations (5.12),
(5.16). To arrive at (7.1), the parameters $\mu_{1,2}$, $\nu_{1,2}$,
$\alpha_{1,2}$ have to be adjusted in such a way that
$D_{0}(\delta W_{1})=\Delta R\,.$ (7.7)
Applying formula (5.7) and the Taylor expansion, one can write
$\delta
W_{1}=-\frac{\lambda}{2i}\int_{[0,1]^{3}}d^{3}\tau\delta(1-\tau_{1}-\tau_{2}-\tau_{3})\times\\\
\Big{[}(\mu_{1}p+\alpha_{1}y)_{\gamma}t^{\gamma}C^{dyn}\big{(}-(\mu_{1}-\alpha_{1})\tau_{1}y,\psi_{1}\big{)}h_{0}\big{(}(\tau_{2}-(\nu_{1}-\alpha_{1})\tau_{1})y-(1-\tau_{2}+(\nu_{1}-\mu_{1})\tau_{1})p\big{)}+\\\
+(\mu_{2}p+\alpha_{2}y)_{\gamma}t^{\gamma}C^{dyn}\big{(}-(\mu_{2}-\alpha_{2})\tau_{1}y,\psi_{1}\big{)}h_{0}\big{(}(\tau_{2}+(\nu_{2}-\alpha_{2})\tau_{1})y-(1-\tau_{2}-(\nu_{2}-\mu_{2})\tau_{1})p\big{)}\Big{]}\psi_{1}\psi_{2}.$
(7.8)
Then, plugging in the $AdS_{3}$-vielbein
$h_{0}(y)=\frac{\lambda}{4i}h_{0}{}^{\alpha\beta}y_{\alpha}y_{\beta}$ and
integrating over $\tau_{2}$ and $\tau_{3}$, one obtains
$\delta
W_{1}=-\frac{\lambda}{4i}\int_{0}^{1}dt\,(1-t)\,h_{0}^{\alpha\beta}\times\\\
\times\Big{\\{}\alpha_{1}\Big{(}\frac{1-t}{2}-(\nu_{1}-\alpha_{1})t\Big{)}y_{\alpha}y_{\beta}C^{dyn}((\alpha_{1}-\mu_{1})ty,\psi_{1})+\\\
+\alpha_{2}\Big{(}\frac{1-t}{2}+(\nu_{2}-\alpha_{2})t\Big{)}y_{\alpha}y_{\beta}C^{dyn}((\alpha_{2}-\mu_{2})ty,\psi_{1})+\\\
+\Big{[}\mu_{1}\Big{(}\frac{1-t}{2}-(\nu_{1}-\alpha_{1})t\Big{)}-\alpha_{1}\Big{(}1-\frac{1-t}{2}+(\nu_{1}-\mu_{1})t\Big{)}\Big{]}y_{\alpha}p_{\beta}C^{dyn}((\alpha_{1}-\mu_{1})ty,\psi_{1})+\\\
+\Big{[}\mu_{2}\Big{(}\frac{1-t}{2}+(\nu_{2}-\alpha_{2})t\Big{)}-\alpha_{2}\Big{(}1-\frac{1-t}{2}-(\nu_{2}-\mu_{2})t\Big{)}\Big{]}y_{\alpha}p_{\beta}C^{dyn}((\alpha_{2}-\mu_{2})ty,\psi_{1})-\\\
-\mu_{1}\Big{(}1-\frac{1-t}{2}+(\nu_{1}-\mu_{1})t\Big{)}p_{\alpha}p_{\beta}C^{dyn}((\alpha_{1}-\mu_{1})ty,\psi_{1})-\\\
-\mu_{2}\Big{(}1-\frac{1-t}{2}-(\nu_{2}-\mu_{2})t\Big{)}p_{\alpha}p_{\beta}C^{dyn}((\alpha_{2}-\mu_{2})ty,\psi_{1})\Big{\\}}\psi_{1}\psi_{2}.$
(7.9)
To solve the problem, one has to act with the derivative $D_{0}$ on this
expression and choose the homotopy parameters in such a way that the result
yields the deformation $\Delta R$.
### 7.2 Covariant derivative
The expression (7.9) is a sum of one-forms of the types
$\mathrm{A}=-\frac{\lambda}{4i}h_{0}^{\alpha\beta}\int_{0}^{1}dt\,a(t)y_{\alpha}y_{\beta}C^{dyn}(\rho
ty,\psi_{1})\psi_{1}\psi_{2},$ (7.10a)
$\mathrm{B}=-\frac{\lambda}{4i}h_{0}^{\alpha\beta}\int_{0}^{1}dt\,b(t)y_{\alpha}p_{\beta}C^{dyn}(\rho
ty,\psi_{1})\psi_{1}\psi_{2},$ (7.10b)
$\mathrm{C}=-\frac{\lambda}{4i}h_{0}^{\alpha\beta}\int_{0}^{1}dt\,c(t)p_{\alpha}p_{\beta}C^{dyn}(\rho
ty,\psi_{1})\psi_{1}\psi_{2},$ (7.10c)
where $a(1)=b(1)=c(1)=0$ and $\rho$ denotes coefficients that may differ
for $\mathrm{A}$, $\mathrm{B}$, and $\mathrm{C}$. To evaluate the action of the covariant derivative on
them, one has to use the Schouten identity and integration by parts
$h_{0}{}^{\alpha\beta}\wedge
h_{0}{}^{\gamma\delta}=\frac{1}{2}H^{\alpha\gamma}\epsilon^{\beta\delta}+\frac{1}{2}H^{\beta\delta}\epsilon^{\alpha\gamma},\quad\text{where}\quad
H^{\alpha\beta}=h_{0}{}^{\alpha}{}_{\gamma}\wedge h_{0}{}^{\gamma\beta},$
(7.11) $\int_{0}^{1}dt\,a(t)y^{\sigma}\frac{\partial}{\partial(\rho
ty^{\sigma})}C^{dyn}(\rho ty,\psi_{1})=\frac{a(t)}{\rho}C^{dyn}(\rho
ty,\psi_{1})\bigg{|}_{0}^{1}-\int_{0}^{1}dt\,\frac{1}{\rho}\frac{\partial
a(t)}{\partial t}C^{dyn}(\rho ty,\psi_{1}).$ (7.12)
As a result, the action of the covariant derivative takes the form of a sum
of a boundary term proportional to $C^{dyn}(0,\psi_{1})$ and an integral
containing a differential operator acting on the measures $a$, $b$, $c$, of the
same type for all contributions:
$\displaystyle\begin{aligned}
D_{0}\mathrm{A}=\frac{\lambda^{2}}{8i}&\bigg{[}\frac{a(0)}{\rho}H^{\alpha\beta}y_{\alpha}p_{\beta}C^{dyn}(\xi,\psi_{1})\big{|}_{\xi=0}+\\\
&+\int_{0}^{1}dt\,\big{(}2\rho ta(t)-\rho t^{2}\frac{\partial a(t)}{\partial
t}+\frac{1}{\rho}\frac{\partial a(t)}{\partial
t}\big{)}H^{\alpha\beta}y_{\alpha}p_{\beta}C^{dyn}(\rho
ty,\psi_{1})\bigg{]},\end{aligned}$ (7.13a) $\displaystyle\begin{aligned}
D_{0}\mathrm{B}=\frac{\lambda^{2}}{16i}&\bigg{[}\frac{b(0)}{\rho}H^{\alpha\beta}[y_{\alpha}y_{\beta}+p_{\alpha}p_{\beta}]C^{dyn}(\xi,\psi_{1})\big{|}_{\xi=0}+\\\
&+\int_{0}^{1}dt\,\big{(}2\rho tb(t)-\rho t^{2}\frac{\partial b(t)}{\partial
t}+\frac{1}{\rho}\frac{\partial b(t)}{\partial
t}\big{)}H^{\alpha\beta}[y_{\alpha}y_{\beta}+p_{\alpha}p_{\beta}]C^{dyn}(\rho
ty,\psi_{1})\bigg{]},\end{aligned}$ (7.13b) $\displaystyle\begin{aligned}
D_{0}\mathrm{C}=\frac{\lambda^{2}}{8i}&\bigg{[}\frac{c(0)}{\rho}H^{\alpha\beta}y_{\alpha}p_{\beta}C^{dyn}(\xi,\psi_{1})\big{|}_{\xi=0}+\\\
&+\int_{0}^{1}dt\,\big{(}2\rho tc(t)-\rho t^{2}\frac{\partial c(t)}{\partial
t}+\frac{1}{\rho}\frac{\partial c(t)}{\partial
t}\big{)}H^{\alpha\beta}y_{\alpha}p_{\beta}C^{dyn}(\rho
ty,\psi_{1})\bigg{]}.\end{aligned}$ (7.13c)
Note that $D_{0}\mathrm{A}$ and $D_{0}\mathrm{C}$ have the same form. This is not incidental,
being a consequence of the fact that $\mathrm{A}-\mathrm{C}$ is $D_{0}$-exact. This follows from
the observation that, as is not hard to check, for any zero-form field
$\epsilon=C^{dyn}(\rho ty)\psi_{2}$,
$D_{0}\epsilon=-\frac{\lambda}{2i}(1-\rho^{2}t^{2})h_{0}^{\alpha\beta}(y_{\alpha}y_{\beta}-p_{\alpha}p_{\beta})C^{dyn}(\rho
ty,\psi_{1})\psi_{1}\psi_{2}\,.$ (7.14)
Since $D_{0}^{2}=0$, it follows that $D_{0}(\mathrm{A}-\mathrm{C})=0$. Hence it is enough to
consider $\mathrm{A}+\mathrm{C}$; for instance, one can set $\mathrm{C}=0$.
For $D_{0}(\mathrm{A}+\mathrm{B}+\mathrm{C})$ to give $\Delta R$, the integrands have to vanish (only
boundary terms appear in (6.17)). Solving the differential equation, we find
that the measures $a(t)$, $b(t)$, $c(t)$ in (7.10) must have the form:
$f(t)=(1-\rho^{2}t^{2})\cdot\text{const}.$ (7.15)
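As a quick consistency check (ours, using sympy as an illustrative tool), one can verify that $f(t)=1-\rho^{2}t^{2}$ indeed annihilates the integrand operator $2\rho t\,f-\rho t^{2}f'+f'/\rho$ common to (7.13a)-(7.13c):

```python
import sympy as sp

t, rho = sp.symbols('t rho')
f = 1 - rho**2 * t**2   # candidate measure (7.15), with the constant set to 1

# integrand operator appearing in each of (7.13a)-(7.13c)
residual = sp.simplify(2*rho*t*f - rho*t**2*sp.diff(f, t) + sp.diff(f, t)/rho)
```

The residual vanishes identically in both $t$ and $\rho$, confirming (7.15).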
Note that, if the polynomials are proportional to $(1-t)$ as in (7.9), this
is possible only for $\rho=\pm 1$. Hence, expression (7.9) should acquire the
form
$\begin{gathered}\delta
W_{1}=-\frac{\lambda}{4i}\int_{0}^{1}dt\,h_{0}{}^{\alpha\beta}\cdot\\\
\begin{aligned}
\cdot\big{[}a_{1}(t)y_{\alpha}y_{\beta}C^{dyn}((\alpha_{1}-\mu_{1})ty,\psi_{1})&+a_{2}(t)y_{\alpha}y_{\beta}C^{dyn}((\alpha_{2}-\mu_{2})ty,\psi_{1})+\\\
+b_{1}(t)y_{\alpha}p_{\beta}C^{dyn}((\alpha_{1}-\mu_{1})ty,\psi_{1})&+b_{2}(t)y_{\alpha}p_{\beta}C^{dyn}((\alpha_{2}-\mu_{2})ty,\psi_{1})+\\\
+c_{1}(t)p_{\alpha}p_{\beta}C^{dyn}((\alpha_{1}-\mu_{1})ty,\psi_{1})&+c_{2}(t)p_{\alpha}p_{\beta}C^{dyn}((\alpha_{2}-\mu_{2})ty,\psi_{1})\big{]}\psi_{1}\psi_{2}\end{aligned}\end{gathered}$
(7.16)
with $a_{i}(t),b_{i}(t),c_{i}(t)$ of the form (7.15). Demanding that the
expressions (7.13) compensate $\Delta R$ (6.17), it is not hard to find the
following conditions on the parameters:
$\frac{b_{1}(0)}{\rho_{1}}+\frac{b_{2}(0)}{\rho_{2}}=-1;\qquad\frac{a_{1}(0)+c_{1}(0)}{\rho_{1}}+\frac{a_{2}(0)+c_{2}(0)}{\rho_{2}}=1\,,\qquad\rho_{i}:=\alpha_{i}-\mu_{i}\,.$
(7.17)
### 7.3 Family of shifts
Using the explicit form of the polynomials in (7.9) one can see that the
condition (7.15) imposes the following restrictions on the homotopy
parameters:
$\mu_{1}=\nu_{1}=\alpha_{1}-1;\quad\mu_{2}=\nu_{2}=\alpha_{2}+1;\quad\forall\alpha_{1},\alpha_{2}\,.$
(7.18)
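A one-line symbolic check (ours, in sympy) confirms that the constraints (7.18) select exactly the admissible values $\rho_{1}=1$, $\rho_{2}=-1$ required by (7.15), for arbitrary $\alpha_{1,2}$:

```python
import sympy as sp

alpha1, alpha2 = sp.symbols('alpha1 alpha2')
mu1 = nu1 = alpha1 - 1        # constraints (7.18)
mu2 = nu2 = alpha2 + 1

rho1 = sp.simplify(alpha1 - mu1)   # rho_i := alpha_i - mu_i, cf. (7.17)
rho2 = sp.simplify(alpha2 - mu2)
```

Both results are independent of $\alpha_{1,2}$, which is consistent with the leftover freedom residing only in the $y$-shift parameters.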
Then $\delta W_{1}$ acquires the form
$\begin{gathered}\delta
W_{1}=-\frac{\lambda}{4i}\int_{0}^{1}dth_{0}{}^{\alpha\beta}\times\\\
\begin{aligned}
\times\Big{[}\frac{1}{2}\alpha_{1}(1-t^{2})y_{\alpha}y_{\beta}C^{dyn}(ty,\psi_{1})&+\frac{1}{2}\alpha_{2}(1-t^{2})y_{\alpha}y_{\beta}C^{dyn}(-ty,\psi_{1})-\\\
-\frac{1}{2}(1-t^{2})y_{\alpha}p_{\beta}C^{dyn}(ty,\psi_{1})&+\frac{1}{2}(1-t^{2})y_{\alpha}p_{\beta}C^{dyn}(-ty,\psi_{1})+\\\
+\frac{1}{2}(1-\alpha_{1})(1-t^{2})p_{\alpha}p_{\beta}C^{dyn}(ty\psi_{1})&-\frac{1}{2}(1+\alpha_{2})(1-t^{2})p_{\alpha}p_{\beta}C^{dyn}(-ty,\psi_{1})\Big{]}\psi_{1}\psi_{2},\end{aligned}\end{gathered}$
(7.19)
It is easy to see that it obeys the boundary conditions (7.17). Note that the
leftover freedom is in the $y$-shift parameters $\alpha_{1,2}$ in the homotopy
operators.
Now it is interesting to compare this family of shifts $\delta W_{1}$ with the
shift $\delta\omega^{top}$ (6.19) found in [2]. To this end, we split
$C^{dyn}(y,\psi_{1})$ into even and odd parts
$C^{dyn}(y,\psi_{1})=C^{dyn}_{+}(y,\psi_{1})+C^{dyn}_{-}(y,\psi_{1})\,,$
(7.20)
where
$C^{dyn}_{+}(y,\psi_{1})=\frac{C^{dyn}(y,\psi_{1})+C^{dyn}(-y,\psi_{1})}{2}\,,\;\;C^{dyn}_{-}(y,\psi_{1})=\frac{C^{dyn}(y,\psi_{1})-C^{dyn}(-y,\psi_{1})}{2}.$
(7.21)
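The defining properties of the decomposition (7.20), (7.21), namely that $C^{dyn}_{+}$ is even, $C^{dyn}_{-}$ is odd, and the two parts sum back to $C^{dyn}$, can be confirmed symbolically (an elementary check of ours, with a one-variable stand-in for the spinor argument):

```python
import sympy as sp

y = sp.symbols('y')
C = sp.Function('C')

C_plus = (C(y) + C(-y))/2    # even (bosonic) part, cf. (7.21)
C_minus = (C(y) - C(-y))/2   # odd (fermionic) part

even_check = sp.simplify(C_plus.subs(y, -y) - C_plus)    # C_+ is even
odd_check = sp.simplify(C_minus.subs(y, -y) + C_minus)   # C_- is odd
sum_check = sp.simplify(C_plus + C_minus - C(y))         # decomposition is exact
```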
Here $C^{dyn}_{+}(y,\psi_{1})$ and $C^{dyn}_{-}(y,\psi_{1})$ describe bosons
and fermions, respectively. In these terms, the difference between (7.19) and
(6.19) takes the form
$\begin{gathered}\delta
W_{1}-\delta\omega^{top}\psi_{2}=-\frac{\lambda}{8i}\int_{0}^{1}dt\,(1-t^{2})h_{0}^{\alpha\beta}\cdot\\\
\begin{aligned}
\cdot\Big{[}\big{(}\alpha_{1}-\frac{1}{2}\big{)}(y_{\alpha}y_{\beta}-p_{\alpha}p_{\beta})C^{dyn}(ty,\psi_{1})&+\big{(}\alpha_{2}+\frac{1}{2}\big{)}(y_{\alpha}y_{\beta}-p_{\alpha}p_{\beta})C^{dyn}(-ty,\psi_{1})-\\\
-(y_{\alpha}y_{\beta}+p_{\alpha}p_{\beta})C_{+}^{dyn}(ty,\psi_{1})&+2y_{\alpha}p_{\beta}C_{-}^{dyn}(ty,\psi_{1})\Big{]}\psi_{1}\psi_{2}\,.\end{aligned}\end{gathered}$
(7.22)
On the r.h.s. we find two $D_{0}$-exact expressions in the second row (cf.
(7.14)) and two non-exact ones in the third. Note that here the parity of
$C^{dyn}_{\pm}$ plays an important role. Indeed, by (7.10), (7.13)
$\displaystyle
D_{0}\int_{0}^{1}dt(1-t^{2})h_{0}^{\alpha\beta}(y_{\alpha}y_{\beta}+p_{\alpha}p_{\beta})C_{+}^{dyn}(ty,\psi_{1})\psi_{1}\psi_{2}\propto\left.H^{\alpha\beta}y_{\alpha}p_{\beta}C_{+}^{dyn}(\xi,\psi_{1})\psi_{2}\right|_{\xi=0}$
$\displaystyle=0\,,$ (7.23a) $\displaystyle
D_{0}\int_{0}^{1}dt(1-t^{2})h_{0}^{\alpha\beta}y_{\alpha}p_{\beta}C_{-}^{dyn}(ty,\psi_{1})\psi_{1}\psi_{2}\propto\left.H^{\alpha\beta}\left(y_{\alpha}y_{\beta}+p_{\alpha}p_{\beta}\right)C_{-}^{dyn}(\xi,\psi_{1})\psi_{2}\right|_{\xi=0}$
$\displaystyle=0\,.$ (7.23b)
Hence, the two terms in the third line of (7.22) belong to non-trivial
$D_{0}$-cohomology.
Thus, as anticipated, the difference $(\delta
W_{1}-\delta\omega^{top}\psi_{2})$ is $D_{0}$-closed, which implies that
equation (7.7) is solved. However, the old shift $\delta\omega^{top}$ cannot
be reproduced within the shifted homotopy setup of this paper: the free
parameters $\alpha_{1},\alpha_{2}$ only affect the gauge transformation, while $(\delta
W_{1}-\delta\omega^{top}\psi_{2})$ belongs to a non-zero $D_{0}$-cohomology
class. It would be interesting to find an extension of the shifted homotopy
approach rich enough to reproduce the $D_{0}$-cohomological terms as well and,
in particular, the shift of variables of [2].
### 7.4 Definite parity solutions
Consider once again the equation (6.13). Schematically, it can be represented
as (7.2). Since each term in the brackets on the r.h.s. is ${{\rm
d}}_{z}$-closed, one can use different homotopy operators for each of them.
However, there is an additional freedom that has not been considered so far. One can
decompose $C^{dyn}$ into even and odd parts (7.20). Then using that ${{\rm
d}}_{z}C^{dyn}_{\pm}(y,\psi_{1})=0$, ${{\rm d}}_{z}h_{0}(y)=0$, ${{\rm
d}}_{z}\gamma=0$, $h_{q}\gamma=0$ and formula (5.6), one can show that the
following expressions
$\frac{\lambda}{4i}h_{0}*C_{\pm}^{dyn}*\big{(}\Delta_{p+t}-\Delta_{p}\big{)}\gamma\psi_{1}\psi_{2}k\,,\qquad\frac{\lambda}{4i}C_{\pm}^{dyn}*\tilde{h}_{0}*\big{(}\Delta_{p-2t}-\Delta_{p-t}\big{)}\gamma\psi_{1}\psi_{2}k$
(7.24)
are ${{\rm d}}_{z}$-closed as well, which allows us to apply different homotopy
operators to each of them.
As in Section 7.1, we apply a non-conventional homotopy only to some of the
terms on the r.h.s. of (6.13), since the others, resulting from the conventional
homotopy, happen to be covariantly constant. Since each term of
$\frac{\lambda}{4i}h_{0}*C^{dyn}_{+}*\big{(}\Delta_{p+t}-\Delta_{p}\big{)}\gamma\psi_{1}\psi_{2}k+\frac{\lambda}{4i}C^{dyn}_{+}*\tilde{h}_{0}*\big{(}\Delta_{p-2t}-\Delta_{p-t}\big{)}\gamma\psi_{1}\psi_{2}k+\\\
+\frac{\lambda}{4i}h_{0}*C^{dyn}_{-}*\big{(}\Delta_{p+t}-\Delta_{p}\big{)}\gamma\psi_{1}\psi_{2}k+\frac{\lambda}{4i}C^{dyn}_{-}*\tilde{h}_{0}*\big{(}\Delta_{p-2t}-\Delta_{p-t}\big{)}\gamma\psi_{1}\psi_{2}k$
(7.25)
is ${{\rm d}}_{z}$-closed, we can apply
$\Delta_{\mu^{+}_{1}p+\nu^{+}_{1}t+\alpha^{+}_{1}y}$,$\Delta_{\mu^{+}_{2}p+\nu^{+}_{2}t+\alpha^{+}_{2}y}$,
$\Delta_{\mu^{-}_{1}p+\nu^{-}_{1}t+\alpha^{-}_{1}y}$,
$\Delta_{\mu^{-}_{2}p+\nu^{-}_{2}t+\alpha^{-}_{2}y}$ to the first, second,
third and fourth term, respectively. The difference between $W_{1}$ obtained
this way and (7.3) is
$W_{1}-W_{1}^{0}=\delta W_{1}^{+}+\delta W_{1}^{-},$ (7.26)
where $\delta W_{1}^{\pm}$ have the form of (7.9) but with respective
$\pm$-superscripts. Note that $\Delta R^{top}$ also decomposes into bosonic
and fermionic parts as
$\Delta
R_{+}^{top}\left(y;\psi_{1}\right)=-\left.\frac{\lambda^{2}}{16i}h_{0}{}^{\alpha}{}_{\gamma}\wedge
h_{0}{}^{\gamma\beta}\left(y_{\alpha}y_{\beta}+p_{\alpha}p_{\beta}\right)C_{+}^{dyn}\left(\xi;\psi_{1}\right)\right|_{\xi=0}\,,$
(7.27) $\Delta
R_{-}^{top}\left(y;\psi_{1}\right)=-\left.\frac{\lambda^{2}}{16i}h_{0}{}^{\alpha}{}_{\gamma}\wedge
h_{0}{}^{\gamma\beta}\left(y_{\alpha}p_{\beta}+p_{\alpha}y_{\beta}\right)C_{-}^{dyn}\left(\xi;\psi_{1}\right)\right|_{\xi=0}.$
(7.28)
Analogously to the previous section, taking into account the form of
expressions (7.9) and (7.27), (7.28), one should choose
$(\rho^{\pm}_{i})^{2}=1$, where
$\rho^{\pm}_{i}=\alpha_{i}^{\pm}-\mu_{i}^{\pm}$ (cf. (7.10), (7.15)). For
example, consider $\delta W_{1}^{+}$ at $\rho^{+}_{1}=1$ and $\rho^{+}_{2}=-1$
$\begin{gathered}\delta
W^{+}_{1}=-\frac{\lambda}{4i}\int_{0}^{1}dt\,(1-t)\,h_{0}^{\alpha\beta}\times\\\
\times\Big{\\{}\Big{[}(\alpha^{+}_{1}+\alpha^{+}_{2})\frac{1-t}{2}-\alpha^{+}_{1}(\nu^{+}_{1}-\alpha^{+}_{1})t+\alpha^{+}_{2}(\nu^{+}_{2}-\alpha^{+}_{2})t\Big{]}C^{dyn}_{+}(ty,\psi_{1})+\\\
+\Big{[}(\alpha^{+}_{1}-1)\Big{(}\frac{1-t}{2}-(\nu^{+}_{1}-\alpha^{+}_{1})t\Big{)}-\alpha^{+}_{1}\Big{(}\frac{1+3t}{2}t+(\nu^{+}_{1}-\alpha^{+}_{1})t\Big{)}\Big{]}y_{\alpha}p_{\beta}C^{dyn}_{+}(ty,\psi_{1})+\\\
+\Big{[}(\alpha^{+}_{2}+1)\Big{(}\frac{1-t}{2}+(\nu^{+}_{2}-\alpha^{+}_{2})t\Big{)}-\alpha^{+}_{2}\Big{(}\frac{1+3t}{2}-(\nu^{+}_{2}-\alpha^{+}_{2})t\Big{)}\Big{]}y_{\alpha}p_{\beta}C^{dyn}_{+}(ty,\psi_{1})-\\\
-\Big{[}(\alpha^{+}_{1}-1)\Big{(}\frac{1+3t}{2}+(\nu^{+}_{1}-\alpha^{+}_{1})t\Big{)}+(\alpha^{+}_{2}+1)\Big{(}\frac{1+3t}{2}-(\nu^{+}_{2}-\alpha^{+}_{2})t\Big{)}\Big{]}p_{\alpha}p_{\beta}C^{dyn}_{+}(ty,\psi_{1})\Big{\\}}\psi_{1}\psi_{2}.\end{gathered}$
(7.29)
Here we have used that $C^{dyn}_{+}(y,\psi_{1})$ is even in $y$. We want (7.29)
to solve the equation $D_{0}(\delta W_{1}^{+})=\Delta R_{+}^{top}\psi_{2}$. Using
the rules (7.15) and (7.17) we find the same constraints on the homotopy
parameters as (7.18):
$\mu^{+}_{1}=\nu^{+}_{1}=\alpha^{+}_{1}-1,\quad\mu^{+}_{2}=\nu^{+}_{2}=\alpha^{+}_{2}+1\,.$
(7.30)
For $\rho^{+}_{1}=\rho^{+}_{2}$ there are no solutions for
$\mu^{+}_{1,2},\nu^{+}_{1,2},\alpha^{+}_{1,2}$ compatible with the requirement
$D_{0}(\delta W_{1}^{+})=\Delta R_{+}^{top}$, while for
$\rho^{+}_{1}=-1,\rho^{+}_{2}=1$ the solution is
$\mu_{1}=\alpha_{1}+1,\mu_{2}=\alpha_{2}-1,\quad\nu_{1}=\alpha_{1}-\frac{1}{1+2\alpha_{1}},\nu_{2}=\alpha_{2}+\frac{1}{1-2\alpha_{2}},\quad\alpha_{1}\neq-\frac{1}{2},\alpha_{2}\neq\frac{1}{2}\,.$
(7.31)
Analogously, one finds homotopy parameters for $\delta W_{1}^{-}$ providing
$D_{0}(\delta W_{1}^{-})=\Delta R_{-}^{top}\psi_{2}$. Namely, using (7.15) and
(7.17) one finds the following constraints on the fermionic homotopy
parameters: at $\rho_{1}^{-}=1,\rho_{2}^{-}=-1$
$\mu^{-}_{1}=\nu^{-}_{1}=\alpha^{-}_{1}-1,\quad\mu^{-}_{2}=\nu^{-}_{2}=\alpha^{-}_{2}+1$
(7.32)
and at $\rho_{1}^{-}=-1,\rho_{2}^{-}=1$
$\mu^{-}_{1}=\alpha^{-}_{1}+1,\;\mu^{-}_{2}=\alpha^{-}_{2}-1,\;\nu^{-}_{1}=\alpha^{-}_{1}+\frac{1}{1+2\alpha^{-}_{1}},\;\nu^{-}_{2}=\alpha^{-}_{2}-\frac{1}{1-2\alpha^{-}_{2}},\;\alpha^{-}_{1}\neq-\frac{1}{2},\;\alpha^{-}_{2}\neq\frac{1}{2}\,.$
(7.33)
Again, one can check that, while $D_{0}$-closed, the expression $\left(\delta
W_{1}^{+}+\delta W_{1}^{-}-\delta\omega^{top}\psi_{2}\right)$ is not
$D_{0}$-exact for any of the allowed $\alpha^{\pm}_{i}$.
## 8 $D_{0}$-cohomology from massive terms
In the previous section we have found two types of $D_{0}$-closed but not
$D_{0}$-exact expressions (7.22), (7.23). Let us set
$\displaystyle G^{+}$
$\displaystyle=\frac{\lambda}{8i}\int_{0}^{1}dt(1-t^{2})h_{0}^{\alpha\beta}(y_{\alpha}y_{\beta}+p_{\alpha}p_{\beta})C_{+}^{dyn}(ty,\psi_{1})\psi_{1}\psi_{2},$
(8.1a) $\displaystyle G^{-}$
$\displaystyle=-\frac{\lambda}{4i}\int_{0}^{1}dt(1-t^{2})h_{0}^{\alpha\beta}y_{\alpha}p_{\beta}C_{-}^{dyn}(ty,\psi_{1})\psi_{1}\psi_{2}\,.$
(8.1b)
Since $G^{\pm}$ are $D_{0}$-closed, they can be added to the r.h.s. of the
covariant constancy equation for $C(y)$ without violating its consistency. It
can be shown that these terms reproduce the linearized mass corrections in the
$AdS_{3}$ HS theory described in [13] (for more information on the description
of massive higher-spin particles in AdS see also [21, 22, 23]). The latter have
different forms for bosons and fermions:
$\displaystyle
DC^{boson}_{\alpha(n)}=\frac{\psi_{1}}{2i}\left[\left(1-\frac{\nu(\nu-2)}{(n+1)(n+3)}\right)h^{\beta\gamma}C_{\beta\gamma\alpha(n)}-\lambda^{2}n(n-1)h_{\alpha\alpha}C_{\alpha(n-2)}\right],$
(8.2a) $\displaystyle\begin{aligned}
DC^{fermion}_{\alpha(n)}=\frac{\psi_{1}}{2i}\left(1-\frac{\nu^{2}}{(n+2)^{2}}\right)h^{\beta\gamma}C_{\beta\gamma\alpha(n)}-\nu&\frac{\lambda\psi_{1}}{n+2}h_{\alpha}{}^{\beta}C_{\beta\alpha(n-1)}-\\\
-&\frac{\psi_{1}}{2i}\lambda^{2}n(n-1)h_{\alpha\alpha}C_{\alpha(n-2)}.\end{aligned}$
(8.2b)
Here $\nu$ is a parameter of the deformed Weyl algebra
$\left[\hat{y}_{\alpha},\hat{y}_{\beta}\right]=2i\epsilon_{\alpha\beta}(1+\nu
k)$ [7] related to the mass.
This can be seen by rewriting $G^{\pm}$ (8.1) in the component form analogous
to (3.16)
$\displaystyle
G^{+}_{\alpha(n=2k)}=-\frac{\psi_{1}}{4i}\left[\frac{1}{(n+1)(n+3)}h_{0}^{\beta\gamma}C^{dyn}_{\beta\gamma\alpha(n)}-\lambda^{2}n(n-1)\frac{1}{(n-1)(n+1)}h_{0}{}_{\alpha\alpha}C^{dyn}_{\alpha(n-2)}\right]\,,$
(8.3a) $\displaystyle
G^{-}_{\alpha(n=2k-1)}=-\frac{1}{2}\frac{\lambda\psi_{1}}{n+2}h_{0\alpha}{}^{\beta}C^{dyn}_{\beta\alpha(n-1)}\,.$
(8.3b)
From (8.1) it is also obvious that
$G^{+}_{\alpha(n=2k-1)}=G^{-}_{\alpha(n=2k)}=0$.
So, one can deform the unfolded equation (3.22) as
$D_{0}C=aG^{+}+bG^{-}\,.$ (8.4)
The resulting deformed equations differ for even and odd $n$:
$\displaystyle\begin{aligned}
DC_{\alpha(n=2k)}=\frac{\psi_{1}}{2i}\bigg{[}&\left(1-\frac{a/2}{(n+1)(n+3)}\right)h_{0}^{\beta\gamma}C^{dyn}_{\beta\gamma\alpha(n)}-\\\
-\lambda^{2}n(n-1)&\left(1-\frac{a/2}{(n-1)(n+1)}\right)h_{0\alpha\alpha}C^{dyn}_{\alpha(n-2)}\bigg{]},\end{aligned}$
(8.5a) $\displaystyle
DC_{\alpha(n=2k-1)}=\frac{\psi_{1}}{2i}h_{0}^{\beta\gamma}C^{dyn}_{\beta\gamma\alpha(n)}+\frac{b}{2}\frac{\lambda\psi_{1}}{n+2}h_{0\alpha}{}^{\beta}C^{dyn}_{\beta\alpha(n-1)}-\frac{\psi_{1}}{2i}\lambda^{2}n(n-1)h_{0\alpha\alpha}C^{dyn}_{\alpha(n-2)}.$
(8.5b)
It can be seen that the deformed equations (8.5) for $C_{\alpha(n)}$ with even and
odd $n$ reproduce respectively the linearized massive equations for bosons and
fermions (8.2) with
$a=2\nu(\nu-2),\quad b=-2\nu.$ (8.6)
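The identification (8.6) can be recovered by equating the corresponding coefficients of (8.5) and (8.2): the $h^{\beta\gamma}C_{\beta\gamma\alpha(n)}$ coefficient of the bosonic equations fixes $a$, and the $h_{\alpha}{}^{\beta}C_{\beta\alpha(n-1)}$ coefficient of the fermionic ones fixes $b$. A sketch of this matching in sympy (our check; the symbol names are ours):

```python
import sympy as sp

n, nu, a, b = sp.symbols('n nu a b')

# equate the h^{beta gamma} C coefficients of (8.5a) and (8.2a) ...
sol_a = sp.solve(sp.Eq(1 - (a/2)/((n + 1)*(n + 3)),
                       1 - nu*(nu - 2)/((n + 1)*(n + 3))), a)[0]
# ... and the h_alpha^beta C coefficients of (8.5b) and (8.2b)
sol_b = sp.solve(sp.Eq(b/2, -nu), b)[0]
```

Solving the two linear conditions reproduces $a=2\nu(\nu-2)$ and $b=-2\nu$ of (8.6).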
## 9 Vertex $\Upsilon(\omega,\omega,C)$
At higher orders of the perturbative expansion, one has to evaluate the
r.h.s. of the field equations. Here we show how the vertex
$\Upsilon(\omega,\omega,C)$ can be obtained within the shifted homotopy
approach for arbitrary (not necessarily $AdS_{3}$) $\omega_{0}(y)$ and
$h_{0}(y)$ verifying (6.1).
So, our goal is to find a solution $W_{1}=\omega_{1}+W_{1}^{q}$ of equation
(6.13)
${{\rm
d}}_{z}W_{1}=-\frac{1}{4i}\left[W_{0}*C*(\Delta_{p}-\Delta_{p+t})\gamma-C*\tilde{W}_{0}*(\Delta_{p-2t}-\Delta_{p-t})\gamma\right]k$
which provides $D_{0}\omega_{1}=0$. Therefore, we have to choose the shift $q$
so that $D_{0}\omega_{1}=D_{0}(-W_{1}^{q})=0$. Since
$W_{1}^{q}=W_{1}^{0}+\delta W$, where $W_{1}^{0}$ and $\delta W$ are expressed
as (7.3) and (7.5), respectively, the following formulas, valid up to higher
orders, have to be used to calculate $D_{0}(\delta W)$:
${{\rm d}}h_{0}(y)=-\omega_{0}(y)*h_{0}(y)-h_{0}(y)*\omega_{0}(y)\,,$ (9.1)
${{\rm
d}}C^{dyn}(y)=-\omega_{0}(y)*C^{dyn}(y)+C^{dyn}(y)*\tilde{\omega}_{0}-\psi_{1}h_{0}(y)*C^{dyn}(y)-\psi_{1}C^{dyn}(y)*\tilde{h}_{0}\,.$
(9.2)
It can be seen that the vertex $D_{0}\left(\omega_{1}\right)$ consists of
three parts: the known $\Delta R$ (6.17), the vertex $\Upsilon_{\omega h}$,
proportional to $\left(\omega_{0},h_{0}\right)$, and $\Upsilon_{hh}$,
proportional to $\left(h_{0},h_{0}\right)$
$D_{0}(\omega_{1})=\Delta R+\Upsilon_{\omega h}+\Upsilon_{hh}.$ (9.3)
For the $AdS_{3}$-connection, one can check that the
$(\omega_{0},h_{0})$-vertex is identically zero, $\Upsilon_{\omega h}\equiv 0$,
regardless of the homotopy parameters. (This is not incidental, being a
consequence of the Lorentz covariance of the nonlinear equations of [1] as
well as of the Lorentz covariance of the homotopy procedure.)
Direct calculation using the star-exchange formulas gives:
$\displaystyle\Upsilon_{hh}=\frac{1}{4i}\Big{[}$
$\displaystyle+h_{0}*h_{0}*C*h_{(\mu_{1}-\alpha_{1}+1)p+(\nu_{1}-\alpha_{1}+1)t_{1}+(\mu_{1}-\alpha_{1}+1)t_{2}+\alpha_{1}y}\Delta_{p+t_{1}+t_{2}}\Delta_{p+t_{2}}\gamma$
(9.4)
$\displaystyle+h_{0}*h_{0}*C*h_{(\mu_{1}-\alpha_{1}+1)p+(\nu_{1}-\alpha_{1}+1)t_{2}+\alpha_{1}y}\Delta_{p+t_{2}}\Delta_{p}\gamma$
$\displaystyle+h_{0}*C*\tilde{h}_{0}*h_{(\mu_{1}-\alpha_{1}+1)p+(\nu_{1}-\alpha_{1}+1)t_{1}-(\mu_{1}-\alpha_{1}+1)t_{2}+\alpha_{1}y}\Delta_{p+t_{1}-t_{2}}\Delta_{p-t_{2}}\gamma$
$\displaystyle-
h_{0}*C*\tilde{h}_{0}*h_{(\mu_{1}-\alpha_{1}+1)p+(\nu_{1}-\alpha_{1}+1)t_{1}-2t_{2}+\alpha_{1}y}\Delta_{p+t_{1}-2t_{2}}\Delta_{p-2t_{2}}\gamma$
$\displaystyle-
h_{0}*C*\tilde{h}_{0}*h_{(\mu_{2}-\alpha_{2}+1)p+(\mu_{2}-\alpha_{2}+1)t_{1}-(\nu_{2}-\alpha_{2}+1)t_{2}+\alpha_{2}y}\Delta_{p+t_{1}-t_{2}}\Delta_{p+t_{1}-2t_{2}}\gamma$
$\displaystyle+h_{0}*C*\tilde{h}_{0}*h_{(\mu_{2}-\alpha_{2}+1)p-(\nu_{2}-\alpha_{2}+1)t_{2}+\alpha_{2}y}\Delta_{p-t_{2}}\Delta_{p-2t_{2}}\gamma$
$\displaystyle-C*\tilde{h}_{0}*\tilde{h}_{0}*h_{(\mu_{2}-\alpha_{2}+1)p-(\mu_{2}-\alpha_{2}+1)t_{1}-(\nu_{2}-\alpha_{2}+1)t_{2}+\alpha_{2}y}\Delta_{p-t_{1}-t_{2}}\Delta_{p-t_{1}-2t_{2}}\gamma$
$\displaystyle-C*\tilde{h}_{0}*\tilde{h}_{0}*h_{(\mu_{2}-\alpha_{2}+1)p-(\nu_{2}-\alpha_{2}+1)t_{1}-2t_{2}+\alpha_{2}y}\Delta_{p-t_{1}-2t_{2}}\Delta_{p-2t_{1}-2t_{2}}\gamma\Big{]}.$
By substituting the $AdS_{3}$-connection, one can find restrictions on the
homotopy parameters under which $\Delta R+\Upsilon_{hh}=0$; they coincide with
those found in Section 7:
$\mu_{1}=\nu_{1}=\alpha_{1}-1;\quad\mu_{2}=\nu_{2}=\alpha_{2}+1;\quad\forall\alpha_{1},\alpha_{2}\,.$
(9.5)
## 10 Conclusion
In this paper, a $3d$ nonlinear system of HS equations has been analysed to
first order in the zero-forms using the shifted homotopy technique. The $3d$
HS theory differs from the $4d$ theory of [18] in the respect that in the
latter the conventional (zero-shift) homotopy leads directly to the proper
form of the free HS equations, while in the former it leads to an entanglement
between the dynamical and topological fields. The main result of this paper,
anticipated to simplify the analysis of the higher-order corrections to
nonlinear HS field equations, consists of finding a shifted homotopy procedure
that eliminates this entanglement.
Interestingly enough, the resulting field redefinition differs from the one
found originally in [2] by direct analysis, without using the shifted homotopy
technique. This difference is shown to be cohomological in nature, being
related to the deformed oscillator algebra underlying the massive deformation
of the matter field equations in $3d$ HS theory. This raises the question of
whether there exists an extension of the shifted homotopy approach rich enough
to reproduce the field redefinition of [2] as well.
To summarize, we have reached a coherent picture, which will allow us in the
future to perform a higher-order analysis of $3d$ HS theory with the proper
disentanglement of the dynamical and topological fields. The method of
shifted homotopy is very convenient and greatly simplifies the analysis. In
particular, it will be interesting to see how the problem of disentangling
the dynamical and topological fields can be resolved at higher orders.
## Acknowledgement
This research was supported by the Russian Science Foundation grant
18-12-00507.
## References
* [1] S. F. Prokushkin and M. A. Vasiliev, Nucl. Phys. B 545 (1999), 385 doi:10.1016/S0550-3213(98)00839-6 [arXiv:hep-th/9806236 [hep-th]].
* [2] M. A. Vasiliev, Mod. Phys. Lett. A 7 (1992), 3689-3702 doi:10.1142/S0217732392003116
* [3] M. A. Vasiliev, Int. J. Mod. Phys. D 5 (1996), 763-797 doi:10.1142/S0218271896000473 [arXiv:hep-th/9611024 [hep-th]].
* [4] P. Kessel, G. Lucena Gómez, E. Skvortsov and M. Taronna, JHEP 11 (2015), 104 doi:10.1007/JHEP11(2015)104 [arXiv:1505.05887 [hep-th]].
* [5] O. A. Gelfond and M. A. Vasiliev, Phys. Lett. B 786 (2018), 180-188 doi:10.1016/j.physletb.2018.09.038 [arXiv:1805.11941 [hep-th]].
* [6] V. E. Didenko, O. A. Gelfond, A. V. Korybut and M. A. Vasiliev, J. Phys. A 51 (2018) no.46, 465202 doi:10.1088/1751-8121/aae5e1 [arXiv:1807.00001 [hep-th]].
* [7] M. A. Vasiliev, Int. J. Mod. Phys. A 6 (1991), 1115-1135 doi:10.1142/S0217751X91000605
* [8] M. A. Vasiliev, J. Phys. A 46 (2013), 214013 doi:10.1088/1751-8113/46/21/214013 [arXiv:1203.5554 [hep-th]].
* [9] C. N. Pope, L. J. Romans and X. Shen, Nucl. Phys. B 339 (1990), 191-221 doi:10.1016/0550-3213(90)90539-P
* [10] P. Bieliavsky, S. Detournay and P. Spindel, Commun. Math. Phys. 289 (2009), 529-559 doi:10.1007/s00220-008-0697-9 [arXiv:0806.4741 [math-ph]].
* [11] A. V. Korybut, Theor. Math. Phys. 193 (2017) no.1, 1409-1419 doi:10.1134/S0040577917100014 [arXiv:1409.8634 [hep-th]].
* [12] A. Korybut, J. Phys. A 54 (2021) no.50, 505202 doi:10.1088/1751-8121/ac367e [arXiv:2006.01622 [hep-th]].
* [13] A. V. Barabanshchikov, S. F. Prokushkin and M. A. Vasiliev, Teor. Mat. Fiz. 110N3 (1997), 372-384 doi:10.1007/BF02630455 [arXiv:hep-th/9609034 [hep-th]].
* [14] S. W. MacDowell and F. Mansouri, Phys. Rev. Lett. 38 (1977), 739 [erratum: Phys. Rev. Lett. 38 (1977), 1376] doi:10.1103/PhysRevLett.38.739
* [15] K. S. Stelle and P. C. West, Phys. Rev. D 21 (1980), 1466 doi:10.1103/PhysRevD.21.1466
* [16] M. A. Vasiliev, Class. Quant. Grav. 11 (1994), 649-664 doi:10.1088/0264-9381/11/3/015
* [17] V. E. Didenko, N. G. Misuna and M. A. Vasiliev, JHEP 03 (2017), 164 doi:10.1007/JHEP03(2017)164 [arXiv:1512.07626 [hep-th]].
* [18] M. A. Vasiliev, Phys. Lett. B 285 (1992), 225-234 doi:10.1016/0370-2693(92)91457-K
* [19] M. P. Blencowe, Class. Quant. Grav. 6 (1989), 443 doi:10.1088/0264-9381/6/4/005
* [20] M. A. Vasiliev, Annals Phys. 190 (1989), 59-106 doi:10.1016/0003-4916(89)90261-3
* [21] Y. M. Zinoviev, [arXiv:hep-th/0108192 [hep-th]].
* [22] Y. M. Zinoviev, Nucl. Phys. B 808 (2009), 185-204 doi:10.1016/j.nuclphysb.2008.09.020 [arXiv:0808.1778 [hep-th]].
* [23] M. V. Khabarov and Y. M. Zinoviev, JHEP 04 (2022), 055 doi:10.1007/JHEP04(2022)055 [arXiv:2201.09491 [hep-th]].
# Nitrous Oxide and Climate
C. A. de Lange, Atomic, Molecular and Laser Physics, Vrije Universiteit, De Boelelaan 1081, 1081 HV Amsterdam, The Netherlands;
J. D. Ferguson, University of Pennsylvania School of Veterinary Medicine, USA;
W. Happer, Department of Physics, Princeton University, USA;
W. A. van Wijngaarden, Department of Physics and Astronomy, York University, Canada
###### Abstract
Higher concentrations of atmospheric nitrous oxide (N2O) are expected to
slightly warm Earth’s surface because of increases in radiative forcing.
Radiative forcing is the difference between the net upward thermal radiation flux
from the Earth through a transparent atmosphere and the flux through an
otherwise identical atmosphere with greenhouse gases. Radiative forcing,
normally measured in W m-2, depends on latitude, longitude and altitude, but
it is often quoted at the tropopause, an altitude of about 11 km for temperate
latitudes, or at the top of the atmosphere at around 90 km. For current
concentrations of greenhouse gases, the radiative forcing per added N2O
molecule is about 230 times larger than the forcing per added carbon dioxide
(CO2) molecule. This is due to the heavy saturation of the absorption band of
the relatively abundant greenhouse gas, CO2, compared to the much smaller
saturation of the absorption bands of the trace greenhouse gas N2O. But the
rate of increase of CO2 molecules, about 2.5 ppm/year (ppm = part per million
by mole), is about 3000 times larger than the rate of increase of N2O
molecules, which has held steady at around 0.00085 ppm/year since the year
1985. So, the contribution of nitrous oxide to the annual increase in forcing
is 230/3000, or about 1/13, that of CO2. If the main greenhouse gases, CO2, CH4
and N2O, have contributed about 0.1 C/decade of the warming observed over the
past few decades, this would correspond to about $0.00064$ K per year, or 0.064
K per century, of warming from N2O. Proposals to place harsh restrictions on
nitrous oxide emissions because of warming fears are not justified by these
facts. Restrictions would cause serious harm; for example, by jeopardizing
world food supplies.
###### Contents
1. Introduction
2. Radiative Properties of Greenhouse Gas Molecules
   1. Permanent dipole moment
3. Radiation Transfer in the Atmosphere
   1. Altitude profiles of molecular temperature
   2. Radiation temperature
   3. Altitude profiles of greenhouse gases
   4. Radiation intensities and fluxes
   5. Forcings
   6. Saturation
4. Future Forcings
   1. Temperature changes due to forcing changes
   2. Global warming potential
5. Nitrogen in the Biosphere
6. Nitrogen in Agriculture
   1. Efficient use of nitrogen fertilizer
7. Conclusion
## 1 Introduction
This is a sequel to an earlier paper from the CO2 Coalition that discussed the
small effects of increasing concentrations of atmospheric methane on Earth’s
climate, Methane and Climate [1]. The discussion below is focused on nitrous
oxide. There have been recent proposals to put harsh restrictions on human
activities that release nitrous oxide, most importantly, farming, dairying and
ranching.
Basic radiation-transfer science that is outlined in this paper gives no
support to the assertion that greenhouse gases like nitrous oxide, N2O,
methane, CH4, or carbon dioxide, CO2, are contributing to a climate crisis. In
fact, increasing concentrations of CO2 have already benefitted the world by
substantially increasing the yields of agriculture and forestry. For example,
in a recent paper [3], Taylor and Schlenker state:
> We find consistently high fertilization effects: a 1 ppm increase in CO2
> equates to a 0.5%, 0.6%, and 0.8% yield increase for corn, soybeans, and
> wheat, respectively. Viewed retrospectively, 10%, 30%, and 40% of each
> crop’s yield improvements since 1940 are attributable to rising CO2.
Policies to address this non-existent crisis will cause enormous harm because
of the vital role of nitrogen in agriculture. The collapse of rice yields in
Sri Lanka because of recent restrictions on nitrogen fertilizer should be a
sobering warning [2]. Much greater damage will be done in the future unless
more rational policies are adopted.
We begin with a review of the limited degree to which nitrous oxide and other
greenhouse gases can influence Earth’s climate. This is followed by a
discussion of nitrogen’s vital role in agriculture. This is a fairly technical
paper, written primarily for scientists and engineers; but we hope it will
also be useful to non-technical readers and policy makers.
Figure 1: The three most common oxides of nitrogen. Only nitrous oxide (N2O) is of concern as a greenhouse gas. About 60% of the greenhouse forcing of N2O comes from the linear stretch mode, where the directions of atomic motion are indicated by the arrows, and where the vibrational frequency is $\nu_{i}=1285$ cm-1. As indicated by the $+$ and $-$ signs, the N2O molecule has a small permanent electric dipole moment that points from the negative O end (red) to the positive N end (blue).

Molecule | $\nu_{i}$ | $d_{i}$ | $n_{i}$ | $P_{i}$ | $M_{i}$ | $\Gamma_{i}$
---|---|---|---|---|---|---
| (cm-1) | | | (10-21 W) | (D) | (s-1)
H2O | 1595 | 1 | $4.75\times 10^{-4}$ | 0.330 | 0.186 | 21.9
| 3652 | 1 | $2.47\times 10^{-8}$ | $6.40\times 10^{-5}$ | 0.068 | 35.7
| 3756 | 1 | $1.50\times 10^{-8}$ | $5.81\times 10^{-5}$ | 0.077 | 49.2
CO2 | 667 | 2 | $8.50\times 10^{-2}$ | 1.73 | 0.181 | 1.53
| 1388 | 1 | $1.18\times 10^{-3}$ | 0 | 0 | 0
| 2349 | 1 | $1.18\times 10^{-5}$ | 0.253 | 0.457 | 424
N2O | 588 | 2 | $1.26\times 10^{-1}$ | 0.224 | 0.069 | 0.152
| 1285 | 1 | $1.87\times 10^{-3}$ | 0.673 | 0.206 | 14.1
| 2224 | 1 | $2.06\times 10^{-5}$ | 0.221 | 0.375 | 243
CH4 | 1311 | 3 | $5.58\times 10^{-3}$ | 0.332 | 0.080 | 2.28
| 1533 | 2 | $1.28\times 10^{-3}$ | 0 | 0 | 0
| 2916 | 1 | $8.38\times 10^{-7}$ | 0 | 0 | 0
| 3019 | 3 | $1.53\times 10^{-6}$ | $2.52\times 10^{-3}$ | 0.080 | 27.4
Table 1: Naturally occurring greenhouse gas molecules are listed in the first
column. The vibrational mode frequencies $\nu_{i}$ are listed in the second
column. The mode degeneracies $d_{i}$ are listed in the third column. The mean
numbers of vibrational quanta per molecule are listed in the fourth column for
a temperature $T=300$ K. At the same temperature, the thermal powers $P_{i}$
radiated by molecules at the frequencies of the $i$th mode are listed in the
fifth column. The transition dipole moments $M_{i}$ for the modes are listed
in the sixth column. The last column lists the spontaneous decay rates
$\Gamma_{i}$ for molecules with one excitation quantum of the $i$th
vibrational mode. The values of $\Gamma_{i}$ were taken from Table 5 of
reference [5]. There, one can also find details on how to calculate
$\Gamma_{i}$ from the hundreds of thousands of line intensities and
frequencies in the HITRAN database [6].
## 2 Radiative Properties of Greenhouse Gas Molecules
Most of dry air, 99.96% by volume, consists of the non-greenhouse gases
nitrogen, oxygen and argon [4]. For moist air, the concentration of water
vapor, by far the most abundant greenhouse gas, is extremely variable, but it
typically makes up several percent. The second most abundant greenhouse gas,
carbon dioxide (CO2) makes up about $0.04$% of dry air, methane (CH4) is
$0.00019$% and nitrous oxide (N2O) is only $0.000034$%. These seem negligibly
small, but one must remember that greenhouse gases resemble food dyes. A
little goes a long way. Tiny concentrations, most especially of water vapor
and carbon dioxide, can greatly increase the opacity of air to thermal
radiation that carries solar heat back to space. In the same way, a few drops
of green dye can color a much larger volume of nearly clear lager beer bright
green on St. Patrick’s Day. But quantitative considerations outlined below
show that future increases of nitrous oxide, N2O, or methane, CH4, will have a
negligible effect on climate.
Fig. 1 shows the three oxides of nitrogen commonly found in the air. Only
nitrous oxide (N2O) has a significant greenhouse effect. Nitric oxide (NO),
and nitrogen dioxide (NO2) are also released by human activities, notably by
high-temperature combustion of fossil fuels in air. At high concentrations NO,
and especially NO2, which readily reacts with water to form nitric acid, can
have harmful effects on human health. But the greenhouse effects of NO and NO2
are completely negligible.
Key radiative properties of N2O and the other important naturally occurring
greenhouse gases, methane (CH4), carbon dioxide (CO2), and, most importantly,
water vapor (H2O), are summarized in Table 1. Ozone (O3) is also an important
greenhouse gas, but we have not included it in the table because most ozone is
located in the stratosphere, and changes of its concentration there have
little direct effect on surface temperatures.
The second column of Table 1 lists the normal-mode vibration frequencies
$\nu_{i}$. For thermal infrared radiation, these are traditionally quoted in
waves per centimeter (cm-1). The characteristic ways that the atoms of a
molecule can vibrate about the center of mass are called normal modes. For
simple diatomic molecules like nitrogen (N2) or oxygen (O2) the vibrating
atoms simply stretch and compress the chemical bond between them. But for
molecules with three or more atoms, there are more complicated vibrational
modes, each with its own vibrational frequency, $\nu_{i}$, and spatial
pattern. Instructive animations of the vibrational modes of N2O, CH4, CO2 and
H2O, can be found at the links of reference [7].
The spatial degeneracies, $d_{i}$ of the modes, are shown in the third column
of Table 1. Degeneracies are the number of ways a molecule can vibrate at the
same frequency but in different directions with respect to some reference
orientation. For example, the spatial degeneracy of the bending modes of CO2
or N2O are $d_{i}=2$. If the reference orientation of the linear molecules is
vertical, there can be bending vibrations, at the same frequency $\nu_{i}$,
from left to right or forward and backward. The tetrahedral molecule methane
(CH4) has some normal modes that are three-fold degenerate, with $d_{i}=3$.
The free vibration and rotation of a molecule in the atmosphere is interrupted
from time to time by collisions with other molecules and, much less
frequently, by emission or absorption of photons. Relatively large amounts of
vibrational, rotational and translational energy are exchanged in each
collision. The rapid collisions will make the probability of finding a
molecule in a quantized state of energy $E_{i}$ proportional to the Boltzmann
factor $e^{-E_{i}/kT}$, where $k$ is Boltzmann’s constant and where the
surrounding air has the local absolute temperature $T$. Since the
characteristic thermal energy $kT$ is much smaller than the quantized
excitation energy $hc\nu_{i}$ of a mode with frequency $\nu_{i}$ (here $h$ is
Planck’s constant and $c$ is the speed of light), the average number of
vibrational quanta $n_{i}$ is very small and is given (very nearly) by the
Arrhenius factor,
$n_{i}=d_{i}e^{-hc\nu_{i}/kT}$ (1)
The mean numbers of excitation quanta $n_{i}$ listed in the fourth column of
Table 1 for a temperature of $T=300$ K are within a few percent of the simple
approximation (1).
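As an illustration, the Boltzmann factor of Eq. (1) can be evaluated directly. The minimal Python sketch below (with the second radiation constant $hc/k\approx 1.4388$ cm K and a few mode entries transcribed from Table 1 for illustration) reproduces the tabulated mean quantum numbers to within several percent:

```python
import math

HC_OVER_K = 1.4388  # second radiation constant h c / k, in cm K

def mean_quanta(nu_cm, degeneracy, T=300.0):
    """Mean number of vibrational quanta per molecule, Eq. (1):
    n_i = d_i * exp(-h c nu_i / (k T))."""
    return degeneracy * math.exp(-HC_OVER_K * nu_cm / T)

# (mode, frequency in cm^-1, degeneracy d_i, n_i from Table 1)
modes = [
    ("H2O 1595", 1595, 1, 4.75e-4),
    ("CO2  667",  667, 2, 8.50e-2),
    ("N2O  588",  588, 2, 1.26e-1),
    ("CH4 1311", 1311, 3, 5.58e-3),
]
for name, nu, d, n_table in modes:
    print(f"{name}: Eq. (1) gives {mean_quanta(nu, d):.3e}, Table 1 lists {n_table:.2e}")
```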
The electric fields of thermal radiation do work on moving charges within a
molecule. If the work is positive, the molecule absorbs radiation and
increases its energy at the expense of energy removed from the radiation. If
the work is negative, the molecule loses energy, which is emitted as
additional radiation. The nuclei and electrons of the molecule produce a
complicated charge density $\rho({\bf r},t)$ at locations ${\bf r}$ and time
$t$. The positive charge is localized within the tiny atomic nuclei. The
negative charges from electrons are much more delocalized. The chemical bonds
of molecules, which are formed by valence electrons, are negatively charged.
The wavelengths of thermal infrared radiation are much longer than the sizes
of molecules. For example, the distance between the nucleus of the N atom on
the left of the N2O molecule of Fig. 1 and the nucleus of the O atom on the
right is about 0.231 nm (nanometer, or $10^{-9}$ m). The wavelength of the
main greenhouse mode of N2O, with a vibrational frequency of $\nu_{i}=1285$
cm-1, is $\lambda_{i}=1/\nu_{i}=$ 7,782 nm, some 34,000 times longer than the
length of the molecule. Under these conditions, the interaction of the
molecules with radiation is almost completely controlled by the electric
dipole moment, ${\bf M}(t)$ or the first moment of the charge density, which
at time $t$ is given by
${\bf M}(t)=\int{\bf r}\,\rho({\bf r},t)dV.$ (2)
Here, $\rho({\bf r},t)dV=dq$ is the charge located in the infinitesimal volume
$dV$ centered on the location ${\bf r}$ at time $t$. Radiation generated by
oscillations of the electric quadrupole, octupole, and higher moments of the
molecule is many orders of magnitude less powerful than that of the electric
dipole moment ${\bf M}(t)$ of (2) and can be ignored. Radiation from the
oscillating magnetic moments of the molecule can also be ignored.
For electric-dipole radiation, the radiative power $P$ that a vibrating and
rotating molecule emits can be calculated with Larmor’s classic formula [8]
$P=\frac{2|{\bf\ddot{M}}|^{2}}{3c^{3}}$ (3)
Here ${\bf\ddot{M}}=d^{2}{\bf M}/dt^{2}$, the second derivative with respect
to time, is the acceleration of the dipole moment ${\bf M}$. Eq. (3) is for
cgs units. Large, collision-induced changes of the molecular quantum states
will cause the radiated power (3) for an individual atmospheric molecule to
fluctuate chaotically. But the mean value of (3), averaged over a time long
enough to include many collisions, will have a well-defined average value
$\langle P\rangle=P_{1}+P_{2}+P_{3}+\cdots$, where $P_{1}$ is the power
radiated near the frequency $\nu_{1}$ of the first vibrational mode, $P_{2}$
is the power radiated near the frequency $\nu_{2}$ of the second, etc. The
mean radiated powers depend on the temperature $T$, and on the vibrational
frequency $\nu_{i}$. The powers $P_{i}$ are displayed in the fifth column of
Table 1. These are very small powers, of order $10^{-21}$ watts per molecule
or less. For comparison, a mobile phone radiates up to a few watts of power,
and is not supposed to exceed 3 watts.
From inspection of Table 1, we see that three of the powers $P_{i}$ in the
fifth column are zero. These correspond to modes for which the molecular
charge density $\rho({\bf r},t)$ is so symmetric that vibrations and rotations
do not produce corresponding vibrations of the dipole moment and, therefore,
generate no radiation. For Table 1 these especially symmetric vibrational
modes are the symmetric stretch mode of the CO2 molecule, which vibrates with
a frequency of 1388 cm-1, and the modes of methane, CH4 with frequencies
$\nu_{i}$ of 1533 cm-1 and 2916 cm-1.
Molecules of the atmosphere’s two most abundant gases, nitrogen (N2) and
oxygen (O2) have vanishing dipole moments, ${\bf M}(t)=0$, because of their
high symmetry and they therefore absorb or emit negligible amounts of thermal
radiation. O2 does emit and absorb millimeter-wave thermal radiation because
of spin-flip transitions. Satellite observations of this radiation are used
for monitoring the temperatures of Earth’s atmosphere [9]. But the heat
transported by the millimeter waves is negligible.
The amplitude of a vibrating dipole moment of a molecule with $n_{i}$ quanta
of vibrational excitation can be written as
$M(t)=\sqrt{2n_{i}}M_{i}\cos\omega_{i}t.$ (4)
The angular frequency $\omega_{i}$ is related to the spatial frequency
$\nu_{i}$ by
$\omega_{i}=2\pi\nu_{i}c.$ (5)
In (4), $M_{i}\geq 0$ is the transition electric dipole moment of the
molecule, the root-mean-square value of the oscillating moment. The number of
vibrational quanta $n_{i}$ was given by the Boltzmann factor (1). Substituting
(4) into (3) we find that the transition dipole moment is given by the (cgs)
formula
$P_{i}=\frac{32\pi^{4}\nu_{i}^{4}cn_{i}M_{i}^{2}}{3}.$ (6)
The average power $P_{i}$ radiated per molecule of (6) increases very rapidly
with temperature because the number of vibrational excitation quanta $n_{i}$
increases so rapidly, approximately as the Arrhenius factor of (1).
Solving (6) with the values of $P_{i}$, $n_{i}$ and $\nu_{i}$ gives the
magnitudes $|M_{i}|$ of the transition electric dipole moments listed in the
sixth column of Table 1. Molecular dipole moments are traditionally measured
in Debye units, where 1 D = $10^{-18}$ esu cm = $3.34\times 10^{-30}$ C m. The
time needed for a molecule to radiate away a quantum of vibrational energy
turns out to be tens of milliseconds. This is orders of magnitude longer than
the time between collisions with other molecules, around 1 nanosecond at sea-
level pressures. Even for the extremely low air pressures of the upper
stratosphere, the time between collisions, around 1 microsecond, is much
shorter than the tens of milliseconds needed for a vibrating molecule to emit
a photon. Relatively large amounts of vibrational, rotational and
translational energy are exchanged in each collision; and this leads to the
Boltzmann distribution (1) of the molecules in their quantized energy states.
The last column of Table 1 lists the spontaneous radiative decay rates
$\Gamma_{i}$ of molecules with one quantum of vibrational excitation of
frequency $\nu_{i}$. The inverse of this rate is the radiative lifetime,
$\tau_{i}=1/\Gamma_{i}$, the time needed for a molecule to radiate away
$1-1/e=63\,$% of its vibrational energy if it is not interrupted by a
collision with another molecule. The spontaneous decay rate $\Gamma_{i}$ is
related to the other molecular parameters in the table by
$\Gamma_{i}=\frac{P_{i}}{hc\nu_{i}n_{i}}=\frac{32\pi^{4}\nu_{i}^{3}M_{i}^{2}}{3h}.$ (7)
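Equation (7) can be checked numerically. The sketch below (Python, cgs units; the Planck constant and the Table 1 transition moments are transcribed here for illustration) evaluates the right-hand side of Eq. (7) and recovers the tabulated decay rates:

```python
import math

H_CGS = 6.626e-27  # Planck constant in erg s
DEBYE = 1.0e-18    # one Debye in esu cm

def decay_rate(nu_cm, M_debye):
    """Spontaneous decay rate of Eq. (7): Gamma_i = 32 pi^4 nu_i^3 M_i^2 / (3 h),
    with nu_i in cm^-1 and M_i converted from Debye to cgs units."""
    M = M_debye * DEBYE
    return 32 * math.pi**4 * nu_cm**3 * M**2 / (3 * H_CGS)

# (frequency in cm^-1, transition moment M_i in D, Gamma_i in s^-1 from Table 1)
for nu, M, gamma_table in [(2349, 0.457, 424.0), (1285, 0.206, 14.1), (667, 0.181, 1.53)]:
    print(f"nu = {nu} cm^-1: Gamma = {decay_rate(nu, M):.3g} s^-1 (Table 1: {gamma_table})")
```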
### 2.1 Permanent dipole moment
Molecule | $B$ (cm-1) | $M$ (D)
---|---|---
OH | 18.91 | 1.67
N2O | 0.419 | 0.161
Table 2: Rotational constants $B$ of (8) and permanent electric dipole moments
$M$ of the diatomic hydroxyl molecule OH [10] and the nitrous oxide molecule
N2O [11].
As indicated in Fig. 1, the N end of the linear N2O molecule has a small
positive charge and the O end has a small negative charge. The resulting
permanent electric dipole moment, $M$, listed in Table 2, is relatively small,
$M=0.161$ D.
Molecules with non-vanishing permanent electric dipole moments can emit or
absorb radiant energy at the expense of decreasing or increasing their
rotational energy. They do not need to change their vibrational states. The
pure rotational transitions of the water molecule (H2O) dominate the opacity
of Earth’s atmosphere for radiation frequencies less than about 500 cm-1. But
as we will outline below, the analogous pure rotational transitions of the N2O
molecule have a negligible effect on opacity. For more symmetric molecules
like N2, O2, CO2 or CH4, the permanent electric dipole moment vanishes,
$M=0$, and there is no pure rotational absorption or emission.
With respect to rotations, the water molecule (H2O) is an asymmetric top, with
three different principal moments of inertia. The details of rotational motion
of asymmetric top molecules are relatively complicated, both for classical
mechanics and quantum mechanics. So, for comparisons with the radiative
properties of the linear N2O molecule, we will consider the linear diatomic OH
molecule, which has an electric dipole moment $1.67$ D that is similar [12] to
that of H2O, $M=1.85$ D. The moment of inertia of OH is comparable to the
three moments of inertia of H2O. The thermal radiation powers emitted and
absorbed by pure rotational transitions of OH and H2O are about the same.
Both OH and N2O are linear molecules, with moments of inertia $I$ for
rotations about any axis normal to the symmetry axis of the molecule and
through the center of mass. The spectroscopic “rotation constant” $B$ of the
molecules is related to the moment of inertia $I$ by
$\displaystyle B=\frac{\hbar^{2}}{2Ihc}.$ (8)
The rotational constants of N2O and OH are listed in Table 2. For both
molecules, the characteristic rotational energy $hcB$ is much smaller than the
thermal energy $kT$
$B\ll\frac{kT}{hc}\approx 200\hbox{ cm}^{-1}.$ (9)
Under these conditions both classical physics and quantum mechanics show that
the thermally averaged value of (3) is
$P=\frac{16k^{2}T^{2}M^{2}}{3I^{2}c^{3}}.$ (10)
Using the values of $B$ and $M$ from Table 2, together with (8) in (10), we
find that the pure rotational power $P(\hbox{N${}_{2}$O})$ radiated by an N2O
molecule is more than five orders of magnitude smaller than the power
$P(\hbox{OH})$, radiated by an OH molecule
$\frac{P(\hbox{N${}_{2}$O})}{P(\hbox{OH})}=0.456\times 10^{-5}.$ (11)
So, although N2O does have pure rotational emission and absorption, like H2O,
it is so small that it can be neglected; and we need only consider the
vibration-rotation transitions, as is the case for CO2 and CH4. Water vapor is
the only greenhouse gas for which the pure rotational band matters.
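Since Eq. (10) gives $P\propto M^{2}/I^{2}$ and Eq. (8) gives $I\propto 1/B$, the ratio (11) follows from the Table 2 values alone, with all physical constants cancelling. A short Python check:

```python
# Eq. (10): P is proportional to M^2 / I^2, and Eq. (8): I is proportional to 1/B,
# so the ratio of pure rotational powers reduces to a ratio of (M * B)^2.
def rot_power_ratio(M1, B1, M2, B2):
    """Ratio of pure rotational radiated powers for two linear molecules,
    given dipole moments M (D) and rotational constants B (cm^-1)."""
    return (M1 * B1 / (M2 * B2)) ** 2

# Table 2: N2O (M = 0.161 D, B = 0.419 cm^-1); OH (M = 1.67 D, B = 18.91 cm^-1)
ratio = rot_power_ratio(0.161, 0.419, 1.67, 18.91)
print(f"P(N2O)/P(OH) = {ratio:.3e}")  # compare with Eq. (11)
```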
## 3 Radiation Transfer in the Atmosphere
The properties of individual greenhouse gases discussed in connection with
Table 1 are not sufficient to understand Earth’s greenhouse warming. There are
so many greenhouse gas molecules that the radiation power $P_{i}$ of (3),
emitted by one molecule, is very likely to be absorbed by another molecule, of
the same or different chemical species, before the radiation can escape to
space and cool the Earth. The radiation to space comes from an emission
height, $z^{\\{e\\}}(\nu)$, from which there is a high probability of escape
to space because so few absorbing molecules remain overhead. The emission
height, $z^{\\{e\\}}(\nu)$, has a complicated dependence on the radiation
frequency $\nu$. For the weakly absorbed frequencies of the clear-sky
atmospheric window between about 800 and 1200 cm-1, the emission height can be
taken to be zero, $z^{\\{e\\}}(\nu)=0$, and the radiation to space comes
directly from the ground. For the other extreme of frequencies that are strongly
absorbed by greenhouse gases, the emission heights can be several km or
greater. Nearly all the radiation to space comes from atmospheric molecules
near the emission height. Most of the surface radiation is absorbed and does
not reach outer space.
Radiation transfer in the cloud-free atmosphere of the Earth is controlled by
only two quantities: (a) how the temperature $T=T(z)$ varies with the altitude
$z$, shown in the left panel of Fig. 2, and (b) the altitude dependence of the
molar concentrations, $C^{\\{i\\}}=C^{\\{i\\}}(z)$ of the $i$th type of
molecule, shown on the right panel. We will call $z$-dependent quantities
altitude profiles. Although the altitude profiles of temperature and
concentrations vary with latitude and longitude, the horizontal variation is
normally small enough to neglect when calculating local radiative forcing. The
altitude profile of the temperature is as important as the altitude profile of
concentrations. If the temperature were the same from the surface to the top
of the atmosphere, there would be no radiative forcing, no matter how high the
concentrations of greenhouse gases.
### 3.1 Altitude profiles of molecular temperature
Representative midlatitude altitude profiles of temperature [13], and
concentrations of greenhouse gases [14], are shown in Fig. 2. Altitude
profiles of temperature directly measured by radiosondes in ascending balloons
[15] are always much more complicated than the profile in the left panel of
Fig. 2, which can be thought of as an ensemble average. As already implied by
(1), collision rates of molecules in the Earth’s troposphere and stratosphere
are sufficiently fast that for a given altitude $z$, a single local
temperature $T=T(z)$ provides an excellent description of the distribution of
molecules between translational, vibrational and rotational energy levels.
Figure 2: Left. A standard atmospheric temperature profile [13]. The surface
temperature is $T(0)=288.7$ K. Right. Concentration profiles [14],
$C^{\\{i\\}}_{\rm sd}=N^{\\{i\\}}_{\rm sd}/N$ for greenhouse molecules. The
total number density of atmospheric molecules is $N=N(z)$. At sea level the
concentrations are $7750$ ppm of H2O, $1.8$ ppm of CH4 and $0.32$ ppm of N2O.
The O3 concentration peaks at $7.8$ ppm at an altitude of 35 km, and the CO2
concentration was approximated by $400$ ppm at all altitudes. The ratio
$N/N_{0}$ of the total atmospheric number density, $N=N(z)$, to its surface
value, $N_{0}=N(0)$, is also shown.
On the left of Fig. 2 we have indicated the molecular temperature of the three
most important atmospheric layers for radiative heat transfer. The lowest
atmospheric layer is the troposphere, where parcels of air, warmed by contact
with the solar-heated surface, float upward, much like hot-air balloons. As
they expand into the surrounding air, the parcels do work at the expense of
internal thermal energy. This causes the parcels to cool with increasing
altitude. There is very little heat flow in or out of the parcels during
ascent or descent, so expansions or compressions are very nearly adiabatic (or
isentropic). If the parcels consisted of dry air, the cooling rate would be
9.8 C km-1, the dry adiabatic lapse rate [16]. But rising air has usually
picked up water vapor from the land or ocean, and the condensation of water
vapor to droplets of liquid in clouds, or to ice crystallites, releases so
much latent heat that the lapse rates can be much less than 9.8 C km-1. A
representative tropospheric lapse rate for midlatitudes is $-dT/dz=6.5$ K km-1
as shown in Fig. 2. The tropospheric lapse rate is familiar to vacationers who
leave hot areas near sea level for cool vacation homes at higher altitudes in
the mountains. On average, the temperature lapse rates are small enough to
keep the troposphere buoyantly stable [17] so that higher-altitude cold air
does not sink to replace lower-altitude warm air. Parcels of stable
tropospheric air that are displaced in altitude will oscillate slowly up and
down, with periods of a few minutes or longer. However, at any given time,
large regions of the troposphere (particularly in the tropics) can become
unstable to moist convection. Then small displacements of parcels can grow
exponentially with time, rather than oscillating.
Above the troposphere is the stratosphere, which extends from the tropopause
to the stratopause, at a typical altitude of $47$ km, as shown in Fig. 2.
Stratospheric air is much more stable to vertical displacements than
tropospheric air, and negligible moist convection occurs there. For
midlatitudes, the temperature of the lower stratosphere is nearly constant, at
about 220 K, but it increases at higher altitudes. At the stratopause the
temperature reaches a peak value not much less than the surface temperature.
The stratospheric heating is due to the absorption of solar ultraviolet
radiation by ozone molecules, O3. The average solar flux at the top of the
atmosphere is about 1350 Watts per square meter (W m-2) [18]. Approximately 9%
consists of ultraviolet light (with wavelengths shorter than $\lambda=405$
nanometers (nm)), which can be absorbed in the upper atmosphere.
Above the stratosphere is the mesosphere, which extends from the stratopause
to the mesopause at an altitude of about $86$ km. With increasing altitudes
above the mesopause, radiative cooling, mainly by CO2 and O3, becomes
increasingly more important compared to heating by solar ultraviolet
radiation. This causes the temperature to decrease with increasing altitude in
the mesosphere.
Above the mesopause, is the extremely low-pressure thermosphere, where
convective mixing processes are negligible. Temperatures increase rapidly with
altitude in the thermosphere, to as high as 1000 K, due to heating by extreme
ultraviolet sunlight, the solar wind and atmospheric waves. Polyatomic
molecules break up into individual atoms, and there is gravitational
stratification, with lighter gases increasingly dominating at higher
altitudes.
The vertical radiation flux $Z$, which is discussed below, can increase
rapidly in the troposphere and less rapidly in the stratosphere. The energy
needed to increase the flux comes mostly from moist convection in the
troposphere and mostly from absorption of ultraviolet sunlight in the
stratosphere. There are still smaller increases of $Z$ in the mesosphere due
to emission at the centers of the most intense lines of the greenhouse gases.
Changes in $Z$ above the mesopause are small enough to be neglected, so we
will often refer to the mesopause as “the top of the atmosphere” (TOA), with
respect to radiation transfer.
### 3.2 Radiation temperature
Radiation can have a temperature nearly equal to the molecular temperature if
the mean-free paths of all photons are small compared to distances over which
there is negligible variation of the molecular temperature. The spectral
intensity $\tilde{I}$ of radiation in full thermal equilibrium at a
temperature $T$ is exactly equal to the Planck intensity [19]
$\tilde{B}=\frac{2hc^{2}\nu^{3}}{e^{\nu c\,h/(kT)}-1}.$ (12)
The Planck spectral intensity $\tilde{B}$ depends on the spatial frequency
$\nu$, measured in wavenumbers (cm-1) and on the absolute temperature $T$ in
Kelvin (K). The cgs units of $\tilde{B}$ are ergs per second, per square
centimeter, per wavenumber, and per steradian of solid angle. Eq. (12) for
radiation (photon) energy is closely related to the Boltzmann distribution (1)
of molecules over their quantized energy states. Radiation of temperature $T$
must have the frequency spectrum (12) and it must be isotropic, with equal
intensities in all directions.
Planck radiation in full thermal equilibrium is not commonly observed in
Earth’s atmosphere. The classic way to generate Planck radiation is with a
Hohlraum, a cavity with opaque, isothermal walls and a tiny hole to allow
observation of the radiation inside. So, Planck radiation is often called
cavity radiation or blackbody radiation [20].
The integral of the Planck intensity (12) over all frequency increments
$d\nu$, and over all solid angle increments, $2\pi d\mu$ (radiation
propagating at an angle $\theta$ to the zenith has the direction cosine
$\mu=\cos\theta$) gives the well-known Stefan-Boltzmann flux, which was
discovered several decades before the Planck spectrum (12)
$Z=\int_{0}^{\infty}\pi\tilde{B}d\nu=\sigma T^{4}.$ (13)
Here, the Stefan-Boltzmann constant (for MKS units) is
$\sigma=5.67\times 10^{-8}\hbox{ W m${}^{-2}$ K${}^{-4}$}.$ (14)
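The consistency of (12), (13) and (14) can be verified numerically. The following sketch (Python, not part of the original analysis; the grid resolution and wavenumber cutoff are illustrative choices) integrates $\pi\tilde{B}$ over wavenumber at $T_{0}=288.7$ K and recovers the flux of about 394 W m-2 quoted later in the text.

```python
import math

# CODATA constants (SI units)
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck_wavenumber(nu, T):
    """Planck spectral intensity of Eq. (12); nu in m^-1 (SI wavenumbers)."""
    return 2.0 * h * c**2 * nu**3 / math.expm1(h * c * nu / (k * T))

def stefan_boltzmann_flux(T, nu_max=5.0e5, n=200_000):
    """Midpoint-rule integration of pi * B over wavenumber, Eq. (13).
    nu_max = 5e5 m^-1 (5000 cm^-1) truncates only a negligible tail."""
    dnu = nu_max / n
    return math.pi * dnu * sum(
        planck_wavenumber((i + 0.5) * dnu, T) for i in range(n))

sigma = 2.0 * math.pi**5 * k**4 / (15.0 * h**3 * c**2)  # ~5.67e-8 W m^-2 K^-4
T0 = 288.7
print(stefan_boltzmann_flux(T0), sigma * T0**4)  # both ~394 W m^-2
```

The closed-form $\sigma=2\pi^{5}k^{4}/(15h^{3}c^{2})$ agrees with the rounded value in (14).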
Radiation temperatures are routinely measured with radiometers. These
instruments detect the radiation flux $Z_{r}$ coming from some direction. The
instrument is calibrated in a cyclic manner by interleaving views of the
unknown radiation source with the flux $Z$ of a calibrating black body. The
blackbody temperature $T$ is close to the expected radiation temperature. For
satellite-based radiometers [9], the calibration cycle normally includes an
observation of dark space, which defines the zero-flux output of the detector.
The radiometer gives an apparent, frequency-integrated radiation temperature
$T_{r}=\left(\frac{Z_{r}}{Z}\right)^{1/4}T.$ (15)
Some radiometers measure only environmental radiation and calibrating
radiation with frequencies close to $\nu$. These give a $\nu$-dependent
apparent temperature, $T_{r}(\nu)$.
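Equation (15) is simple enough to sketch directly. The numbers below are hypothetical, chosen only to illustrate the quarter-power dependence: a source returning half the calibrator flux appears about 46 K colder than a 288.7 K blackbody.

```python
def apparent_temperature(Z_r, Z, T):
    """Eq. (15): apparent radiation temperature inferred from a measured
    flux Z_r, given a calibrating blackbody at temperature T with flux Z."""
    return (Z_r / Z) ** 0.25 * T

# Hypothetical example: measured flux is half the flux of a 288.7 K calibrator.
T_r = apparent_temperature(197.0, 394.0, 288.7)
print(T_r)  # ~242.8 K
```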
Thermal radiation in Earth’s atmosphere is almost never in full thermal
equilibrium. For cloud-free skies, the mean-free paths of photons can exceed
the atmospheric thickness in the infrared atmospheric window from about 800 to
1300 cm-1. For these frequencies, the very weak downwelling radiation from
cold space has an apparent temperature $T_{r}$ that is much smaller than that
of the upwelling radiation from the relatively warm lower atmosphere and from
Earth’s surface. Because greenhouse gases
absorb radiation in fairly narrow bands of frequencies, the frequency spectrum
of the radiation in cloud-free skies differs drastically from that of the
spectral Planck intensity (12) that is required for radiation in full
thermodynamic equilibrium.
The mean-free paths of thermal radiation photons inside thick clouds can be
much shorter than the size of the cloud. There is also little frequency
dependence to the absorption and emission coefficients of thermal radiation by
the water droplets or the ice crystallites of the cloud. Cloud interiors are
the only parts of the atmosphere where radiation is almost in thermal
equilibrium, nearly isotropic and with a spectrum close to that of (12).
Planck’s spectral intensity (12) is one of the most famous equations of
physics. It finally swept aside the absurd prediction of pre-quantum physics
that thermal radiation would have infinite intensity (the ultraviolet
catastrophe), and it gave birth to quantum mechanics [19, 20, 21].
### 3.3 Altitude profiles of greenhouse gases
As shown in Fig. 2, the most abundant greenhouse gas at the surface is water
vapor. However, the concentration of water vapor drops by a factor of a
thousand or more between the surface and the tropopause. This is because of
condensation of water vapor into clouds and eventual removal by precipitation.
Carbon dioxide, CO2, the most abundant greenhouse gas after water vapor, is
also the most uniformly mixed because of its immunity to further oxidation and
resistance to photodissociation in the upper stratosphere.
Methane is much less abundant than CO2. Methane’s concentration decreases
somewhat in the stratosphere because of oxidation by OH radicals and ozone,
O3. The oxidation of methane provides about 1/3 of the stratospheric water
vapor shown in Fig. 2, with most of the rest directly injected by upwelling in
the tropics [22].
Nitrous oxide (N2O), the main topic of this discussion, is also much less
abundant than CO2, and its concentration decreases in the stratosphere because
absorption of ultraviolet sunlight dissociates N2O to nitric oxide molecules
(NO) and free atoms of O.
Ozone molecules (O3) are produced from O2 molecules by ultraviolet sunlight in
the upper atmosphere, and this is the reason that O3 concentrations peak in
the stratosphere and are hundreds of times smaller in the troposphere, as
shown in Fig. 2.
Figure 3: Left: The altitude dependence of temperature from Fig. 2. Right: The
flux $Z$ increases with increasing altitude as a result of net upward energy
radiation from the greenhouse gases H2O, O3, N2O, CH4, and CO2 in clear air
with no clouds. The middle, green curve is the flux for current
concentrations. The forcings $F$ are the differences between the altitude-
independent flux $Z_{0}=\sigma T_{0}^{4}$ through a transparent atmosphere
with no greenhouse gases, for a surface temperature of $T_{0}=288.7$ K, and
the flux $Z$ for an atmosphere with the greenhouse gas concentrations of Fig.
2. Fluxes and forcings for halved and doubled concentrations of CO2, but with
the same concentrations of all other greenhouse gases, are shown as dotted
blue and dashed red curves, which barely differ from the green curve, the flux
for current concentrations. We used doubled and halved CO2 rather than N2O for
this illustration since the flux changes for doubling or halving N2O
concentrations are about ten times smaller than the corresponding fluxes for
doubled and halved CO2, and they would be too small to discern from the
figure. From reference [23].
### 3.4 Radiation intensities and fluxes
The most detailed description of radiation in the atmosphere is given by the
spectral intensity $\tilde{I}=\tilde{I}(z,\mu,\nu)$, which describes radiation
of frequency $\nu$ at an altitude $z$, propagating in a direction that makes
an angle $\theta$ to the vertical, with the direction cosine $\mu=\cos\theta$.
For cloud-free skies there can be absorption and emission of thermal radiation
by greenhouse gases, but there is negligible scattering. The propagation of
the radiation is given by the Schwarzschild equation [24]
$\mu\frac{\partial}{\partial z}\tilde{I}(z,\mu,\nu)=\alpha(z,\nu)\left[\tilde{B}(z,\nu)-\tilde{I}(z,\mu,\nu)\right].$ (16)
We assume that the molecular temperature $T(z)$ depends on altitude $z$. So,
the Planck intensity $\tilde{B}(z,\nu)$ can be thought of as a function of
altitude $z$ and spatial frequency $\nu$. The Planck intensity is equal for
all directions and does not depend on the direction cosine $\mu$ of the
radiation. The attenuation coefficient $\alpha(z,\nu)$ describes absorption
and emission by greenhouse gases. The Schwarzschild equation of (16) says that
the actual intensity $\tilde{I}$ of radiation in the atmosphere is always
trying to become equal to the local Planck intensity $\tilde{B}$, which would
make the right side of the equation vanish, and stop any further changes of
$\tilde{I}$. As explained in detail in reference [23], the Schwarzschild
equation (16) has a well-defined solution that is determined by the
temperature profile $T(z)$ and the attenuation-rate profile $\alpha(z,\nu)$ of
the atmosphere, together with emission of radiation by Earth’s surface. It is
worth stressing that the altitude dependence of the temperature $T(z)$ is as
important as the altitude dependence of the greenhouse gas concentrations,
which determine $\alpha(z,\nu)$.
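A toy numerical integration can make the relaxation behavior of (16) concrete. The sketch below is an illustrative assumption, not the calculation of reference [23]: it follows a vertical upward beam ($\mu=1$) at 667 cm-1 through a 6.5 K km-1 lapse-rate troposphere topped by an isothermal layer, with a constant attenuation coefficient. For large $\alpha$ the intensity tracks the local Planck value; for small $\alpha$ the surface emission passes through nearly unchanged.

```python
import math

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck(nu, T):
    """Planck intensity of Eq. (12); nu in m^-1."""
    return 2.0 * h * c**2 * nu**3 / math.expm1(h * c * nu / (k * T))

def upward_intensity(nu, alpha, T_of_z, z_top=20e3, dz=10.0):
    """Forward-Euler integration of Eq. (16) for mu = 1:
    dI/dz = alpha * (B(T(z)) - I), starting from surface blackbody emission."""
    I = planck(nu, T_of_z(0.0))
    z = 0.0
    while z < z_top:
        I += dz * alpha * (planck(nu, T_of_z(z)) - I)
        z += dz
    return I

# Toy profile: 6.5 K/km lapse up to 11 km, isothermal above (cf. Fig. 2).
T_of_z = lambda z: 288.7 - 6.5e-3 * min(z, 11e3)
nu = 6.67e4  # 667 cm^-1, the CO2 bending-mode band, in m^-1

I_thick = upward_intensity(nu, alpha=1e-2, T_of_z=T_of_z)  # optically thick
I_thin = upward_intensity(nu, alpha=1e-7, T_of_z=T_of_z)   # optically thin
```

In the thick case the emerging intensity is essentially $\tilde{B}$ at the 217 K layer temperature; in the thin case it stays within a fraction of a percent of the 288.7 K surface value.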
For discussions of climate, the upward spectral flux
$\tilde{Z}(z,\nu)=\int_{-1}^{1}d\mu\,\mu\tilde{I}(z,\mu,\nu)$ (17)
is of more importance than the intensity $\tilde{I}$. The flux,
$\tilde{Z}(z,\nu)d\nu$, measures the energy flow, in units of W m-2 cm, for
radiation of frequencies between $\nu$ and $\nu+d\nu$ at the altitude $z$ in
the atmosphere. In (17) the parts of the integrand with upward direction
cosines, $\mu>0$ are called upwelling. They come from the Earth’s surface
emissions and from the emission of greenhouse gases at altitudes lower than
the observation altitude $z$. The parts of the integrand with downward
direction cosines, $\mu<0$ are called downwelling. They come from the emission
of greenhouse gases at altitudes higher than the observation altitude $z$.
### 3.5 Forcings
How greenhouse gases affect radiative energy transfer through Earth’s
atmosphere, and ultimately Earth’s climate, is quantitatively determined by
the radiative forcing, $F$, the difference between the flux $Z_{0}$ of thermal
radiant energy from a black surface through a hypothetical, transparent
atmosphere, and the flux $Z$ through an atmosphere with greenhouse gases,
particulates and clouds, but with the same surface temperature $T_{0}$ [23]
$F=Z_{0}-Z.$ (18)
The flux from a black surface is given by the Stefan-Boltzmann formula
$Z_{0}=\sigma T_{0}^{4}.$ (19)
The forcing $F$ and the flux $Z$ are usually specified in units of W m-2.
According to (18), increments $\Delta F$ in forcing and increments $\Delta Z$
in net upward flux are equal and opposite
$\Delta F=-\Delta Z.$ (20)
Forcing depends on latitude, longitude and on the altitude, $z$.
The radiative heating rate [23],
$R=\frac{dF}{dz},$ (21)
is equal to the rate of change of the forcing with increasing altitude $z$.
Over most of the atmosphere, $R<0$, so thermal infrared radiation is a cooling
mechanism that transfers internal energy of atmospheric molecules to space or
to the Earth’s surface.
The definition (18) of forcing is sometimes called the “instantaneous
forcing,” since it assumes that the concentration of the greenhouse gas of
interest is instantaneously changed, but that all other atmospheric properties
remain the same as before. Because the radiation itself travels at the speed
of light, even an “instantaneous” change in greenhouse gas concentrations
requires a few tens of microseconds to reestablish radiative equilibrium.
We consider the instantaneous forcing to be the least ambiguous metric for how
changing greenhouse gases affect radiative transfer. But the IPCC commonly
uses a slightly different definition of radiative forcing (RF), where the
stratosphere is allowed to cool to a new state of radiative equilibrium after
instantaneous addition of a greenhouse gas. In Section 8.1.1.1 of the 2018
IPCC report [25], one finds a definition, identical to ours, of the
instantaneous forcing:
> Alternative definitions of RF have been developed, each with its own
> advantages and limitations. The instantaneous RF refers to an instantaneous
> change in net (down minus up) radiative flux (shortwave plus longwave; in W
> m-2) due to an imposed change. This forcing is usually defined in terms of
> flux changes at the top of the atmosphere (TOA) or at the climatological
> tropopause, with the latter being a better indicator of the global mean
> surface temperature response in cases when they differ.
The right panel of Fig. 3 shows the altitude dependence of the net upward flux
$Z$ and the forcing $F$ for the greenhouse gas concentrations of Fig. 2. The
temperature profile of Fig 2 is reproduced in the left panel. The altitude-
independent flux, $\sigma T_{0}^{4}=394$ W m-2, from the surface with a
temperature $T_{0}=288.7$ K, through a hypothetical transparent atmosphere, is
shown as the vertical dashed line in the panel on the right. The fluxes for
current concentrations of CO2 and for doubled or halved concentrations are
shown as the continuous green line, the dashed red line and the dotted blue
line.
At current greenhouse gas concentrations the net upward flux at the surface,
142 W m-2, is less than half the surface flux, $Z_{0}=\sigma
T_{0}^{4}=394\hbox{ W m}^{-2}$, also given by (25), for a transparent
atmosphere. More than half of the upward thermal radiation from the surface is
canceled by downwelling radiation from greenhouse gases above. At the
tropopause altitude, 11 km in this example, the net flux has nearly doubled to
$Z=257$ W m-2. The 115 W m-2 increase in flux from the surface to the
tropopause is due to net upward radiation by greenhouse gases in the
troposphere. Most of the energy needed to supply the radiated power comes from
convection of moist air. Direct absorption of sunlight in the troposphere
makes a much smaller contribution.
From Fig. 3 we see that the flux $Z$ increases by another 20 W m-2, from 257 W
m-2 to 277 W m-2 between the tropopause and the top of the atmosphere. The
energy needed to supply the 20 W m-2 increase in flux comes from the
absorption of solar ultraviolet light by ozone, O3 in the stratosphere and
mesosphere. Convective heat transport above the tropopause is small enough to
be neglected.
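The fluxes just quoted imply an average tropospheric radiative cooling rate through (21). A back-of-envelope sketch, using only the numbers in the text:

```python
# Net upward fluxes quoted in the text for Fig. 3 (W/m^2).
Z_surface, Z_tropopause = 142.0, 257.0
z_surface, z_tropopause = 0.0, 11.0  # altitudes in km

# With F = Z0 - Z, Eq. (21) gives R = dF/dz = -dZ/dz; averaged over
# the troposphere by finite differences:
R_avg = -(Z_tropopause - Z_surface) / (z_tropopause - z_surface)
print(R_avg)  # ~ -10.5 W m^-2 per km of altitude
```

The negative sign is the net radiative cooling that moist convection resupplies.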
Figure 4: Spectral radiation flux $\tilde{Z}$ at the top of the cloud-free
atmosphere versus instantaneous changes of CO2 concentrations from a reference
value of 400 ppm, by factors of $2$ or $1/2$. Because of “saturation,” every
doubling (100% increase) reduces the frequency-integrated flux to space by
$\Delta Z=-3.0\hbox{ W m}^{-2}$ and every halving (50% decrease) increases the
radiation to space by $\Delta Z=+3.0\hbox{ W m}^{-2}$. The normal mode
frequencies of Table 1, where the greenhouse gases are most effective, are
marked by the molecular formulas. Pure rotational transitions are responsible
for the low-frequency absorption of water vapor, H2O. The smooth blue envelope
curve is the flux to space, $\pi\tilde{B}$, if there were no greenhouse gases.
The Planck intensity $\tilde{B}$ was given by (12) and the factor of $\pi$
comes from integrating the flux over upward solid angles. Computational
details can be found in reference [23].
Figure 5: The jagged black curve is
the spectral flux $\tilde{Z}$ of (22) at the top of the cloud-free atmosphere
for the conditions of Fig. 2. The jagged green curve shows the flux with all
N2O instantaneously removed, but with no other changes to the conditions of
Fig. 2. The jagged red curve shows the flux that results when the N2O
concentration is instantaneously doubled. The barely perceptible differences
between the green, black and red curves occur for frequencies close to N2O’s
normal mode frequencies, that are listed in Table 1. The contributions of the
pure rotational band, at frequencies below about 500 cm-1 are too small to
perceive. The smooth blue envelope curve is the flux to space, $\pi\tilde{B}$,
if there were no greenhouse gases. The Planck intensity $\tilde{B}$ was given
by (12) and the factor of $\pi$ comes from integrating the flux over upward
solid angles. Computational details can be found in reference [23].
The fluxes, $Z$, and forcings, $F$, of (18) can be thought of as sums of
infinitesimal contributions, $\tilde{Z}\,d\nu$ and $\tilde{F}\,d\nu$, from
spectral fluxes, $\tilde{Z}$, or spectral forcings, $\tilde{F}$, carried by
infrared radiation of spatial frequencies between $\nu$ and $\nu+d\nu$. That
is,
$Z=\int_{0}^{\infty}\tilde{Z}\,d\nu=277\hbox{ W m}^{-2},$ (22)
and
$F=\int_{0}^{\infty}\tilde{F}\,d\nu=117\hbox{ W m}^{-2}.$ (23)
The integral (22) is the area under the jagged black curve of Fig. 5. The
spectral fluxes and forcings are related by a formula analogous to (18)
$\tilde{F}=\pi\tilde{B}_{0}-\tilde{Z}.$ (24)
Here $\tilde{B}_{0}=\tilde{B}(\nu,T_{0})$, is the surface value of the
spectral Planck intensity (12).
Analogous examples of how spectral fluxes $\tilde{Z}$ depend on greenhouse
gases are shown in Fig. 4 for various concentrations of CO2. To guard against
an overly simplistic interpretation of this graph, the following should be
realized. The flux at the top of the atmosphere comes from different altitudes
at different frequencies. Only in the infrared atmospheric window, from about
800 to 1200 cm-1 is emission from Earth’s surface directly observed, as long
as no clouds get in the way. Stratospheric ozone obscures some of the
atmospheric window at frequencies around 1050 cm-1.
Fig. 4 is for cloud-free skies. Except in the atmospheric window, the infrared
radiation observed by satellites is not surface radiation with various amounts
of attenuation by greenhouse gases. Instead, it is newly created radiation
that has been emitted at altitudes well above the surface, at the “emission
height.” The upwelling ground radiation has been completely absorbed. The
radiation in the frequency band centered on 667 cm-1 is emitted by CO2
molecules located in the nearly isothermal lower stratosphere, from altitudes
between about 10 km and 20 km, where the temperature is about 220 K.
Even if all the CO2 were removed from the atmosphere, as shown by the jagged
green line of Fig. 4, no surface radiation would reach the top of the
atmosphere in the band centered at 667 cm-1 since there is substantial
absorption by water vapor in this same band. So, if all the CO2 could be
removed, but the atmospheric temperature profile and the concentrations of
other greenhouse gases remained the same as for Fig. 2, one would observe
emission of water vapor at a few km altitude, where it is warmer than the
lower stratosphere, but colder than the surface.
How doubling the current concentration of N2O or removing it all affects the
spectral flux at the top of the atmosphere is illustrated in Fig. 5. The area
under the black, jagged curve is $Z=277$ W m-2 and is the frequency-integrated
upward flux at the top of the atmosphere of Fig. 3.
The Stefan-Boltzmann flux, $Z_{0}=\sigma T_{0}^{4}=394$ W m-2 of (19), for a
surface temperature of $T_{0}=288.7$ K, is the frequency integral of the
Planck spectral flux, $\pi\tilde{B}_{0}$, of (12)
$\int_{0}^{\infty}\pi\tilde{B}_{0}d\nu=\sigma T_{0}^{4}=394\hbox{ W m}^{-2}.$ (25)
The integral (25) is the area in Fig. 5 beneath the smooth blue curve, the
spectral flux for a transparent atmosphere.
Figure 6: The forcing increments $\Delta F^{\\{i\\}}$ of the five most
important greenhouse gases, H2O, CO2, O3, N2O and CH4, if their current
concentrations are multiplied by the factor $f$, but all other atmospheric
conditions remain the same as for Fig. 2. All of the forcing increments show
“saturation,” an initial very rapid increase for near-zero concentrations
($f\approx 0$), and an increasingly smaller rate of increase as the
concentrations increase. At current atmospheric conditions with $f=1$, the
saturation is most extreme for H2O and CO2. From reference [23].
Figure 7:
Changing concentrations of Earth’s main natural greenhouse gases carbon
dioxide, CO2, nitrous oxide, N2O, methane, CH4, and several halocarbon
refrigerant gases. The rates of the naturally occurring greenhouse gases have
suppressed zeros and are relatively slow, as shown by the century-scale
doubling times $t_{2}$ shown in Table 3. From reference [26].
### 3.6 Saturation
As we will discuss in more detail below, most of the increased forcing over
the next half century will be from increasing concentrations $C$ of CO2. There
is little debate that the forcing due to CO2 is “logarithmic.” This means that
for an increase of CO2 concentrations from $C_{1}$ to $C_{2}$ the forcing will
increase from $F_{1}$ to $F_{2}$, where
$F_{2}-F_{1}=\Delta F\log_{2}(C_{2}/C_{1}).$ (26)
Here, $\log_{2}(x)$ denotes the base-2 logarithm of the variable $x$, for
example, $\log_{2}(1)=0$, $\log_{2}(2)=1$, $\log_{2}(4)=2$, and
$\log_{2}(8)=3$. The forcing increment $\Delta F$ produced by a doubling of
CO2 concentrations depends on the altitude, location on Earth’s surface and
season of the year. A representative number for the top of the atmosphere in a
midlatitude location is $\Delta F=3.0$ W m-2 [23]. Increasing the CO2
concentrations by another 400 ppm, from $C_{2}=800$ ppm to $C_{3}=1200$ ppm
would increase the forcing by
$F_{3}-F_{2}=\Delta F\log_{2}(C_{3}/C_{2})=\Delta F\log_{2}(3/2)=\Delta F\times 0.585.$ (27)
The second 400 ppm increase of CO2 concentration only increases the forcing by
about 59% of the first 400 ppm increase. This “saturation” of the forcing from
more CO2 can be seen in Fig. 6. The curves of forcing versus greenhouse gas
concentration droop downward, especially for CO2 and H2O, and to a lesser
extent, for CH4 and N2O. There is a “law of diminishing returns” for forcing
increases caused by increasing concentrations of greenhouse gases.
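The arithmetic of (26) and (27) is easy to check. The sketch below reproduces the roughly 59% “diminishing return” of the second 400 ppm increment, taking the representative $\Delta F=3.0$ W m-2 per doubling from the text.

```python
import math

def co2_forcing_increment(C2, C1, dF_doubling=3.0):
    """Eq. (26): forcing increase (W/m^2) when CO2 goes from C1 to C2 ppm,
    with dF_doubling the forcing per doubling (~3.0 W/m^2, midlatitude TOA)."""
    return dF_doubling * math.log2(C2 / C1)

first_400 = co2_forcing_increment(800, 400)    # 3.0 W/m^2: one doubling
second_400 = co2_forcing_increment(1200, 800)  # ~1.75 W/m^2
print(second_400 / first_400)  # ~0.585, as in Eq. (27)
```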
Molecule | $\hat{C}_{1}^{\\{i\\}}$ | $d\hat{C}_{1}^{\\{i\\}}/dt$ | $t_{2}^{\\{i\\}}$ | $P^{\\{i\\}}$ | $dF^{\\{i\\}}/dt$ | $\partial T^{\\{i\\}}/\partial t$
---|---|---|---|---|---|---
| | y-1 | y | W | W m-2 y-1 | K y-1
CO2 | $4.1\times 10^{-4}$ | $2.5\times 10^{-6}$ | 164 | $9.0\times 10^{-26}$ | $4.8\times 10^{-2}$ | $8.5\times 10^{-3}$
CH4 | $1.9\times 10^{-6}$ | $8\times 10^{-9}$ | 238 | $2.8\times 10^{-24}$ | $4.8\times 10^{-3}$ | $8.5\times 10^{-4}$
N2O | $3.4\times 10^{-7}$ | $8\times 10^{-10}$ | 425 | $2.1\times 10^{-23}$ | $3.6\times 10^{-3}$ | $6.4\times 10^{-4}$
Table 3: Some properties of the three most important greenhouse gases (after
water vapor): carbon dioxide, methane, and nitrous oxide. Altitude-averaged
concentrations, $\hat{C}_{1}^{\\{i\\}}$, for the year 2021, and their rates of
increase, $d\hat{C}_{1}^{\\{i\\}}/dt$ , can be found by inspection of Fig. 7,
and are shown in the second and third columns. Doubling times from (32) are
listed in the fourth column. The forcing powers per molecule from (35) are
displayed in the fifth column. The rates of increase of the forcing,
$dF^{\\{i\\}}/dt$, calculated from (36) for a cloud-free atmosphere, are shown
in the sixth column. The final column contains the warming rates that follow
from (41), from the forcing rates of the previous column and from (43).
## 4 Future Forcings
In Fig. 6 we show the forcing increments,
$\Delta F^{\\{i\\}}_{f}=F^{\\{i\\}}(\hat{C}^{\\{i\\}}_{f})-F^{\\{i\\}}(\hat{C}^{\\{i\\}}_{1}),$ (28)
caused by increasing the current concentration $C^{\\{i\\}}_{1}(z)$ of the
$i$th greenhouse gas, and its altitude average
$\hat{C}^{\\{i\\}}_{1}=\frac{1}{\hat{N}}\int_{0}^{\infty}dz\,C^{\\{i\\}}_{1}(z)N(z)$ (29)
by a factor $f$ to
$C^{\\{i\\}}_{f}=fC^{\\{i\\}}_{1},\quad\hbox{and}\quad\hat{C}^{\\{i\\}}_{f}=f\hat{C}^{\\{i\\}}_{1}.$ (30)
In (29), $N(z)$ is the number density of all air molecules at the altitude
$z$. The column density $\hat{N}$ of all atmospheric molecules, for dry air at
a sea-level pressure of 1013 mb, is
$\hat{N}=\int_{0}^{\infty}dzN(z)=2.15\times 10^{29}\hbox{ m}^{-2}.$ (31)
The rapid decrease of $N(z)$ with increasing altitude $z$ is shown in Fig. 2.
In calculating the forcing $F^{\\{i\\}}(\hat{C}^{\\{i\\}}_{f})$ of (28) we
assume no change of the atmospheric temperature profile $T(z)$ or
concentrations $C^{\\{j\\}}_{1}(z)$ of other greenhouse gases ($j\neq i$). The
second and third columns of Table 3 show the average concentrations,
$\hat{C}^{\\{i\\}}_{1}$ in the year 2021, and the rates of growth,
$d\hat{C}^{\\{i\\}}_{1}/dt$, both estimated by inspection of Fig. 7. The
fourth column of Table 3 shows the doubling times of the greenhouse gases if
current rates of growth remain unchanged
$t_{2}^{\\{i\\}}=\frac{\hat{C}_{1}^{\\{i\\}}}{d\hat{C}_{1}^{\\{i\\}}/dt}.$ (32)
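The doubling times in the fourth column of Table 3 follow directly from (32) and the concentrations and growth rates in the second and third columns:

```python
# (altitude-averaged concentration, growth rate per year) from Table 3
gases = {
    "CO2": (4.1e-4, 2.5e-6),
    "CH4": (1.9e-6, 8.0e-9),
    "N2O": (3.4e-7, 8.0e-10),
}
# Eq. (32): t2 = C / (dC/dt)
t2 = {gas: C / rate for gas, (C, rate) in gases.items()}
print(t2)  # ~164 y (CO2), ~238 y (CH4), ~425 y (N2O), as in Table 3
```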
The “partial forcings” $F^{\\{i\\}}_{f}$ of (28) increase with increasing
average concentrations $\hat{C}^{\\{i\\}}_{f}$ at the rate
$\frac{\partial F^{\\{i\\}}_{f}}{\partial\hat{C}^{\\{i\\}}_{f}}=\frac{1}{\hat{C}^{\\{i\\}}_{1}}\frac{d}{df}\Delta F^{\\{i\\}}_{1},$ (33)
where the incremental forcings $\Delta F^{\\{i\\}}_{f}=\Delta F^{\\{i\\}}$
were shown in Fig. 6. Partial derivative symbols, $\partial$, are used on the
left of (33) as a reminder that only the concentration of the $i$th greenhouse
gas is varied from the conditions of Fig. 2. The column density
$\hat{N}^{\\{i\\}}$ of the greenhouse gas $i$ is related to its altitude-
averaged concentration by
$\hat{N}^{\\{i\\}}=\hat{N}\hat{C}^{\\{i\\}}.$ (34)
Multiplying both sides of (33) by $1/\hat{N}$ we see that the rate of increase
of forcing with increasing column density $\hat{N}^{\\{i\\}}$ of the
greenhouse gas $i$ is
$P^{\\{i\\}}=\frac{\partial F^{\\{i\\}}}{\partial\hat{N}^{\\{i\\}}}=\frac{1}{\hat{N}\hat{C}^{\\{i\\}}_{1}}\frac{d}{df}\Delta F^{\\{i\\}}.$ (35)
The unit of (35) is the watt, so we can think of the quantity $P^{\\{i\\}}$ as the
additional forcing produced by 1 molecule per square meter of the greenhouse
gas $i$ above the Earth’s surface. Values of $P^{\\{i\\}}$, that follow from
data like that of Fig. 6 and (35), are entered in the fifth column of Table 3.
The values of $P^{\\{i\\}}$ depend on altitude, as one can see from Fig. 6.
Those listed in Table 3 are for the tropopause altitude (11 km) of a
representative temperate latitude.
For current concentrations the per-molecule forcing power of methane
$P^{\\{{\rm CH}_{4}\\}}$ is 31 times larger than that of carbon dioxide,
$P^{\\{{\rm CO_{2}}\\}}$, and the per-molecule forcing power of nitrous oxide,
$P^{\\{{\rm N_{2}O}\\}}$, is 233 times larger. This is mostly due to the
“saturation” of the absorption bands of CO2 that was discussed in Section 3.6.
The current density of CO2 molecules is about 220 times greater than that of
CH4 molecules and about 1300 times greater than that of N2O. So, the
absorption bands of CO2 are much more saturated than those of CH4 or N2O.
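These per-molecule comparisons are simply ratios of the $P^{\\{i\\}}$ entries in Table 3:

```python
# Per-molecule forcing powers P from the fifth column of Table 3 (W).
P = {"CO2": 9.0e-26, "CH4": 2.8e-24, "N2O": 2.1e-23}

print(P["CH4"] / P["CO2"])  # ~31: one CH4 molecule vs one CO2 molecule
print(P["N2O"] / P["CO2"])  # ~233: one N2O molecule vs one CO2 molecule
```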
Reference [23] shows that in the dilute limit, where molecules do not absorb
each other’s radiation before it can escape to space, CO2 is the most potent
greenhouse gas molecule. Each additional CO2 molecule in the dilute limit
causes about 5.3, 1.6 or 1.8 times more tropospheric forcing increase than an
additional molecule of CH4, N2O, or H2O.
Since the column density of the greenhouse gas $i$ increases at the rate
$d\hat{N}^{\\{i\\}}/dt=\hat{N}d\hat{C}^{\\{i\\}}/dt$, the rate of increase of
the forcing is
$\frac{dF^{\\{i\\}}}{dt}=\hat{N}P^{\\{i\\}}\frac{d\hat{C}^{\\{i\\}}}{dt}.$ (36)
The values of (36) are listed in the sixth column of Table 3. The total rate
of increase in forcing is
$\frac{dF}{dt}=\sum_{i}\frac{dF^{\\{i\\}}}{dt}.$ (37)
Using data from Table 3, we see that the total rate of increase of forcing,
due to increasing concentrations of CO2, CH4 and N2O, is
$\frac{dF}{dt}=0.056\hbox{ W m${}^{-2}$ y${}^{-1}$}.$ (38)
This is a representative number for a midlatitude location. Analogous numbers
would have to be averaged over Earth’s surface and over a year to be useful
for “global” estimates.
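Equations (36)–(38) can be checked against the Table 3 entries with a few lines; the percentage split below matches the Fig. 8 caption to within rounding:

```python
N_hat = 2.15e29  # column density of air molecules, Eq. (31), m^-2

# (growth rate of concentration per year, per-molecule forcing power in W)
gases = {
    "CO2": (2.5e-6, 9.0e-26),
    "CH4": (8.0e-9, 2.8e-24),
    "N2O": (8.0e-10, 2.1e-23),
}
# Eq. (36): dF/dt = N_hat * P * dC/dt, summed over gases as in Eq. (37).
dFdt = {gas: N_hat * P * rate for gas, (rate, P) in gases.items()}
total = sum(dFdt.values())
print(total)  # ~0.056 W m^-2 y^-1, Eq. (38)
print({gas: v / total for gas, v in dFdt.items()})  # ~85% / ~8.5% / ~6%
```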
Figure 8: Projected midlatitude forcing increments at the tropopause from
continued increases of CO2, CH4 and N2O at the rates of Fig. 7 for the next 50
years. The projected forcings are very small compared to the current
tropospheric forcing of 137 W m-2. The temperature-increase contours
correspond to the average warming of the past few decades, $0.01$ C y-1. The
total forcing rate (warming rate) is approximately 85% from CO2, 9% from CH4
and 6% from N2O.
### 4.1 Temperature changes due to forcing changes
Instantaneous forcing changes due to instantaneous changes in the
concentrations of greenhouse gases, but with no other changes to the
atmosphere, can be calculated accurately, as we have outlined above. The next
step, using instantaneous forcing changes to predict changes in Earth’s
temperatures, is much harder. The very concept of “Earth’s temperature” is not
well defined. Earth’s temperatures are highly variable in time, in latitude,
in altitude, and in ocean depth. The Earth does not have a single
temperature. Even defining a sensible average temperature or temperature
anomaly is problematic. Lindzen and Christy [27] have given a good discussion
of how elusive the concept of Earth’s “temperature” is.
Suppose that the average solar heating of the Earth were to remain the same
after an instantaneous doubling in the concentration of CO2. Since the
additional greenhouse gas has slightly decreased the thermal radiation to
space (or radiative cooling), the Earth would begin to accumulate heat at the
rate of a few Watts per square meter. The unbalanced addition of solar heat
would continue until the conditions of the atmosphere, the surface and the
oceans change enough to equalize the average values of solar heating and
radiant cooling. Emission of thermal radiation increases rapidly with
increasing temperature so a small amount of warming of the atmosphere or the
surfaces of the land and oceans can increase radiative cooling. An increase in
cloud cover can reflect more solar energy back into space before it is
converted to heat. The details of how balance is restored are extremely
complicated because of the complexity of Earth’s climate physics. The
atmosphere and oceans convect large amounts of heat from the tropics to the
poles, so more radiation is emitted from polar regions than the solar heating
there. Cloud cover and ice extent will probably change as well. But because
the relative changes in radiative forcing are so small, between one and two
percent, as one can see from Fig. 3, minor changes of the present climate will
be needed to reestablish balance.
For the sake of further discussion, we assume that some average temperature or
temperature anomaly $T$ of the Earth can be reliably defined and measured.
Then we can write the dependence of this temperature $T$ on time $t$ as
$\frac{dT}{dt}=\frac{\partial T^{\\{\rm nat\\}}}{\partial t}+\frac{\partial T^{\\{\rm gg\\}}}{\partial t}.$ (39)
The first term on the right of (39), $\partial T^{\\{\rm nat\\}}/\partial t$,
is the temperature change due to natural causes. The geological history of
Earth clearly shows that the natural variations of temperature can be large on
all time scales, from hours to millennia and longer. The second term,
$\partial T^{\\{\rm gg\\}}/\partial t$, is the rate of change due to
increasing concentrations of greenhouse gases.
Temperature records have been available at many sites since the late 1800s.
So, the value of $dT/dt$ on the left side of the equation (39) is known, at
least approximately. But early temperature readings were not taken for most
parts of the Earth’s surface, including the oceans, the polar regions, the
Sahara Desert, the Amazon Basin, etc. So, the observed $dT/dt$ has especially
large uncertainties before the year 1900. According to the IPCC, surface
temperatures have increased by about 1.1 K [28] since the year 1850. The
observed warming exhibits significant decadal fluctuations. The average
surface temperature increased by about 0.5 K during 1900 - 1940, remained
relatively constant from 1940 to 1980, increased from 1980 to 2000 and,
surprisingly, leveled off for over a decade after 2000. This is known as the
global warming hiatus. Since then, an overall warming trend may have resumed,
although satellite measurements indicate that another pause may be underway
[29].
Since the year 1979, satellite measurements of thermal millimeter (mm) waves
emitted by O2 molecules in the atmosphere have been used to infer temperatures
in Earth’s atmosphere [9, 29]. For mm-waves near 60 GHz in frequency, there is
a very strong absorption band due to spin-flip transitions in the O2 molecule.
O2 is nearly transparent to visible sunlight and to most of the thermal
radiation from the Earth, but it does absorb and thermally emit mm waves. By
tuning the mm-wave radiometers on satellites to appropriate frequencies near
60 GHz, one can measure radiation intensity (and the corresponding
temperature) from relatively narrow ranges of atmospheric altitude, from the
lower troposphere to the stratosphere. Since satellite measurements began in
the year 1979, the warming rate of “the global lower atmosphere” has averaged
about 0.015 K y-1 [29].
It is hard to construct a climate model that does not predict faster warming
of the troposphere than the surface [23]. The moist adiabats (or
pseudoadiabats) that give a reasonably accurate fit to the average temperature
profiles measured by radiosondes on weather balloons, do not have simple
linear lapse rates of 6.5 K km-1. Instead, they “bend” in such a way that
higher altitudes warm more than lower altitudes [23]. The different warming
rates are due to differences in how moisture condenses to liquid water or ice
crystals and releases latent heat at various altitudes. The mm-wave emissivity
of the surface is so variable and uncertain that measurements of surface
temperatures from satellites are substantially less accurate than measurements
of atmospheric temperatures at higher altitudes. That satellites observe
faster atmospheric warming rates than surface warming rates measured by ground
stations may be due to the effects of moist adiabats.
No one knows how much of the observed surface or atmospheric warming is due to
natural causes, the first term on the right side of (39), and how much is due
to increasing concentrations of greenhouse gases, the second term. Some
temperature increase over the past two centuries must have been a natural
recovery from the Little Ice Age. For the sake of argument, we will assume
that in recent decades the amount of temperature increase that can be solely
ascribed to greenhouse gases amounts to
$\frac{dT^{\\{\rm gg\\}}}{dt}=0.01\hbox{ K y}^{-1},$ (40)
or 1 C per century. The relative contributions of CO2, N2O and CH4 to the
warming are determined by the relative forcings, which can be accurately
calculated, and are independent of the value of (40).
The thermal flux changes (the negatives of the forcing changes) at the top of
the atmosphere are about 1% of the flux to space for doubling CO2, as shown in
Fig. 3. And these forcing changes occur very slowly. From Table 3 we see that
doubling CO2 at current rates would require about 160 years. It is therefore a
good assumption that the part of the rate of temperature change in (39) that
is due to greenhouse gases is proportional to the rate of forcing change, and
we can write the contribution of the $i$th greenhouse gas to the warming rate
as
$\frac{\partial T^{\\{i\\}}}{\partial t}=\left(\frac{\partial T}{\partial
Z}\right)\frac{dF^{\\{i\\}}}{dt}.$ (41)
The total warming rate due to increases of all greenhouse gases is then
$\frac{dT^{\\{\rm gg\\}}}{dt}=\sum_{i}\frac{\partial T^{\\{i\\}}}{\partial
t}=\left(\frac{\partial T}{\partial Z}\right)\frac{dF}{dt}.$ (42)
Substituting (38) and (40) into (42), we find
$\frac{\partial Z}{\partial T}=\left(\frac{\partial T}{\partial
Z}\right)^{-1}=5.6\hbox{ W m${}^{-2}$ K${}^{-1}$}.$ (43)
Here, we have assumed that the value $dF/dt$ of (38), computed for the
midlatitude tropopause of clear skies, is an adequate metric for the
complicated range of values of $dF/dt$ that characterize other latitudes,
altitudes and cloud conditions. Using (43) and values of $dF^{\\{i\\}}/dt$
from Table 3 in (41) we find the warming rates $\partial T^{\\{i\\}}/\partial
t$ due to the $i$th species of greenhouse gases. The values are listed in the
last column of Table 3.
Equation (43) is very important. It says that the average global temperature,
$T$, for all its logical problems [27], is relatively insensitive to changes
in the concentration of greenhouse gases. A forcing of 5.6 W m-2 is needed to
increase $T$ by 1 K or 1 C. Instantaneously doubling the concentration of
CO2 (a change that would actually take more than a century at current rates,
according to Table 3) decreases the flux by only 5.5 W m-2 at the tropopause
(and by only 3.0 W m-2 at the top of the atmosphere) [23]. So, for the
estimated value (43) of $\partial Z/\partial T$,
a warming of 0.98 K would increase the tropopause flux back to the value it
had before doubling CO2 concentrations.
To get some feeling for how physically reasonable (43) is, we note that if the
Earth radiated as a blackbody, in accordance with the Stefan-Boltzmann law
(19), at a temperature $T=288.7\hbox{ K}$, the rate of change of flux with
temperature would be
$\frac{\partial Z}{\partial T}=4\sigma T^{3}=5.5\hbox{ W m${}^{-2}$
K${}^{-1}$}.$ (44)
The estimates (43) and (44) are within a few percent of each other. In
reference [23] a representative rate of decrease of flux at the top of a
cloud-free atmosphere due to a uniform increase of temperatures at all
altitudes was calculated to be
$\frac{\partial Z}{\partial T}=3.9\hbox{ W m${}^{-2}$ K${}^{-1}$}.$ (45)
This is about 30% smaller than (43).
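The blackbody estimate (44) is a one-line computation; a minimal numerical sketch, using the standard value of the Stefan-Boltzmann constant:

```python
# Blackbody flux sensitivity dZ/dT = 4*sigma*T^3 at the mean surface
# temperature T = 288.7 K, as in (44).
sigma = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.7         # K

dZdT = 4.0 * sigma * T**3
print(f"dZ/dT = {dZdT:.2f} W m^-2 K^-1")  # ~5.5, as in (44)
```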
The elaborate computer models of the IPCC predict that much larger changes of
temperature are needed to increase the cooling flux and compensate for the
forcing of greenhouse gases. In the year 2021, the IPCC [28] estimated that
doubling CO2 concentrations would cause a “most likely” warming of
$\partial T=3.0\hbox{ K}.$ (46)
Taking the flux increase to be
$\partial Z=5.5\hbox{ W m${}^{-2}$},$ (47)
as calculated in reference [23] for the tropopause, we find
$\frac{\partial Z}{\partial T}=1.8\hbox{ W m${}^{-2}$ K${}^{-1}$}.$ (48)
If $\partial Z/\partial T$ were really as small as (48), the warming rate (42)
of the past few decades would have been
$\frac{dT^{\\{\rm gg\\}}}{dt}=0.031\hbox{ K y}^{-1},$ (49)
which is three times larger than the rate, $0.010\hbox{ K y}^{-1}$, assumed in (40).
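The factor-of-three discrepancy in (49) can be checked in a few lines. The total forcing rate $dF/dt$ of (38) is not restated in this excerpt, so it is recovered here from (40) and (43):

```python
# Consistency check of (43)-(49). dF/dt of (38) is recovered from (40) and
# (43), since dT/dt = (dT/dZ) * dF/dt.
dTdt_obs = 0.010             # assumed greenhouse warming rate, K/y, eq. (40)
dZdT_est = 5.6               # W m^-2 K^-1, eq. (43)
dFdt = dTdt_obs * dZdT_est   # = 0.056 W m^-2 per year

# Sensitivity implied by the IPCC "most likely" 3.0 K per CO2 doubling:
dZdT_ipcc = 5.5 / 3.0        # ~1.8 W m^-2 K^-1 after rounding, eq. (48)
dTdt_ipcc = dFdt / dZdT_ipcc
print(f"{dTdt_ipcc:.3f} K/y")  # ~0.031, three times the 0.010 K/y of (40)
```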
Figure 9: Left: Fractions $f=f(t)$ of greenhouse gases remaining in the
atmosphere at a time $t$ after one unit mass of gas was injected at time
$t=0$. The fractions are described by the decaying exponentials of (53).
Right: Forcing times $\theta=\theta(t)$ for observation-time intervals $t$.
These are described by (54).
### 4.2 Global warming potential
The powers per molecule $P^{\\{i\\}}$ of (35) measure the relative forcings of
different greenhouse molecules. They play a similar role to the global warming
potential (GWP), that is defined in Section 8.7.1.2 of the 2018 IPCC document
[25], in terms of radiative forcings (RF), by:
> The Global Warming Potential (GWP) is defined as the time-integrated RF due
> to a pulse emission of a given component, relative to a pulse emission of an
> equal mass of CO2 (Figure 8.28a and formula). The GWP was presented in the
> First IPCC Assessment (Houghton et al., 1990), stating “It must be stressed
> that there is no universally accepted methodology for combining all the
> relevant factors into a single global warming potential for greenhouse gas
> emissions. A simple approach has been adopted here to illustrate the
> difficulties inherent in the concept, …”. Further, the First IPCC Assessment
> gave no clear physical interpretation of the GWP.
To translate these words into equations that allow calculations of global
warming potentials, we denote the molecular mass of a greenhouse molecule of
type $i$ by $m^{\\{i\\}}$ and the forcing time by $\theta^{\\{i\\}}$. Then we
can quantify the IPCC definition above by writing the global warming potential
of the $i$th type of greenhouse gas as
$\hbox{GWP}^{\\{i\\}}=\frac{\langle\hbox{RF}^{\\{i\\}}\rangle}{\langle\hbox{RF}^{\\{{\rm
CO}_{2}\\}}\rangle}.$ (50)
The “time-integrated RF” (time-integrated radiative forcing) per unit mass of
the $i$th greenhouse gas is
$\langle\hbox{RF}^{\\{i\\}}\rangle=\frac{P^{\\{i\\}}\theta^{\\{i\\}}}{m^{\\{i\\}}}.$
(51)
The units of $\langle\hbox{RF}^{\\{i\\}}\rangle$ are joules per square meter
per unit mass. The $\langle\hbox{RF}^{\\{i\\}}\rangle$ can be thought of as
the amount of “forcing heat” acquired by a square-meter column of the
atmosphere during the observation time $t$ after the pulse emission of one
unit mass of greenhouse gas of type $i$. The forcing time $\theta^{\\{i\\}}$
for an observation time interval $t$ is
$\theta^{\\{i\\}}=\int_{0}^{t}dt^{\prime}f^{\\{i\\}}(t^{\prime}).$ (52)
Here, $f^{\\{i\\}}(t^{\prime})$ is the fraction of the excess greenhouse gas
molecules remaining in the atmosphere at a time $t^{\prime}$ after the “pulse
emission.” Excess greenhouse molecules can remain in the atmosphere long after
the greenhouse molecules of the original pulse have exchanged with the land
and oceans because they are replaced with equivalent molecules that continue
to provide radiative forcing. An example is the relatively short time needed
for the exchange of 14CO2 molecules from atmospheric nuclear weapon tests with
CO2 molecules of the ocean. It takes a much longer time for excess CO2
concentrations to decay away. The excess atmospheric concentration of
molecules, caused by the emitted pulse of the species $i$, can be removed from
the atmosphere by various mechanisms. For example, CO2 is absorbed by the
biosphere and oceans. CH4 is oxidized by OH radicals and other atmospheric
gases. N2O is photodissociated by solar ultraviolet radiation in the
stratosphere, etc.
For a global warming potential to make physical sense, there must be some
equilibrium concentration of the greenhouse gas of interest for which natural
sources and sinks are in balance, and for which the concentration would not
change with time if there were no human influences. Though plausible, there is
no compelling observational evidence for the existence of equilibrium
concentrations of any greenhouse gas. Indeed, ice core records show large
variations of CO2 concentrations over periods of hundreds of thousands of
years, especially between glacial maxima and interglacials of the current ice
age. The variations of CO2 appear to have been driven by variations of Earth’s
average temperature, as inferred from variations of the stable isotope 18O in
the ice surrounding the trapped air bubbles. Changes in ice-core
concentrations of CO2 follow changes in temperature by several centuries [30].
$i$ | $P^{\\{i\\}}$ (W) | $P^{\\{i\\}}/P^{\\{{\rm CO}_{2}\\}}$ | $m^{\\{i\\}}$ (amu) | GWP${}^{\\{i\\}}(0)$ | GWP${}^{\\{i\\}}(20)$ | GWP${}^{\\{i\\}}(100)$
---|---|---|---|---|---|---
CO2 | $9.0\times 10^{-26}$ | 1 | 44 | 1 | 1 | 1
CH4 | $2.8\times 10^{-24}$ | 31 | 16 | 85.5 | 53.9 | 19.2
N2O | $2.1\times 10^{-23}$ | 233 | 44 | 233 | 279 | 290
Table 4: Forcing powers per molecule, $P^{\\{i\\}}$ from (35), and global
warming potentials (this work), GWP${}^{\\{i\\}}(t)$ from (55), for
observation times $t\to 0$ years, $t=20$ years and $t=100$ years.
$i$ | GWP${}^{\\{i\\}}(20)$ | GWP${}^{\\{i\\}}(100)$
---|---|---
CO2 | 1 | 1
CH4 | $82.5\pm 25.8$ | $29.8\pm 11$
N2O | $273\pm 118$ | $273\pm 130$
Table 5: Global warming potentials for fossil-fuel methane and nitrous oxide
from the 2021 IPCC report [28]. The observation times are $t=20$ years and
$t=100$ years.
To facilitate further discussion, we will assume that equilibrium
concentrations exist. In that case, it is customary and convenient to describe
the excess fraction $f^{\\{i\\}}(t)=f(t)$ with a sum of $n+1$ exponentially
decaying components, of amplitudes $a_{j}$ and time constants $\tau_{j}$ [31,
32]
$f(t)=\sum_{j=0}^{n}a_{j}e^{-t/\tau_{j}},\quad\hbox{where}\quad\sum_{j=0}^{n}a_{j}=1.$
(53)
Substituting (53) into (52) we find that the forcing time,
$\theta^{\\{i\\}}=\theta$, is
$\theta=\sum_{j=0}^{n}a_{j}\tau_{j}\left(1-e^{-t/\tau_{j}}\right).$ (54)
For the limiting case of infinitely long time constants, $\tau_{j}\to\infty$,
one should make the replacement $\tau_{j}\left(1-e^{-t/\tau_{j}}\right)\to t$
in (54). From inspection of (54) one can conclude that the forcing time
$\theta^{\\{i\\}}$ is always less than or equal to the observation time,
$\theta^{\\{i\\}}(t)\leq t$. Using (54) in (50) we find that the global
warming potentials are
$\hbox{GWP}^{\\{i\\}}=\left(\frac{P^{\\{i\\}}}{P^{\\{{\rm
CO}_{2}\\}}}\right)\left(\frac{m^{\\{{\rm
CO}_{2}\\}}}{m^{\\{i\\}}}\right)\left(\frac{\theta^{\\{i\\}}}{\theta^{\\{{\rm
CO}_{2}\\}}}\right).$ (55)
Unlike the forcing powers $P^{\\{i\\}}$ per greenhouse molecule of type $i$,
which can be accurately calculated, the time dependence of the fractions
$f^{\\{i\\}}(t)$ is not well known. So, the global warming potentials
GWP${}^{\\{i\\}}$ are of limited quantitative value. To add further
uncertainty, “indirect effects” are sometimes included in the
GWP${}^{\\{i\\}}$, for example, the effects of the CO2 and H2O molecules
which result from the oxidation of CH4. For this paper we will consider only
the direct forcing of the greenhouse gases.
For CH4 and N2O, Table 7.15 of Chapter 7 of the IPCC report for the year 2021
[28] gives lifetimes for single-exponential versions of (53)
$\tau^{\rm CH_{4}}=11.8\pm 1.8\hbox{ y}\quad\hbox{and}\quad\tau^{\rm
N_{2}O}=109\pm 10\hbox{ y}.$ (56)
For CO2, Harvey [31] uses a five-exponential form of (53) with the parameters
$(a_{0},\,a_{1},\,a_{2},\,a_{3},\,a_{4})=(0.131,\,0.201,\,0.321,\,0.249,\,0.098)$
and
$(\tau_{0},\,\tau_{1},\,\tau_{2},\,\tau_{3},\,\tau_{4})=(\infty,\,362.9,\,73.6,\,17.3,\,1.9)\hbox{ y}.$ (57)
Alternate parameterizations of $f^{\\{i\\}}$ for CO2, for example, those of
Joos et al. [32], give GWPs that differ by no more than the uncertainties of
the estimates.
The left panel of Fig. 9 shows excess fractions $f^{\\{i\\}}$ of CO2, CH4 and
N2O, calculated with (53) and with the parameters of (56) and (57). The right
panel of Fig. 9 shows the forcing times $\theta^{\\{i\\}}$ of (54). Table 4
shows representative global warming potentials calculated with (55) and the
parameters of (56) and (57). These are consistent with those of the 2021 IPCC
report [28] shown in Table 5. The somewhat larger IPCC values of GWP${}^{\\{\rm
CH_{4}\\}}$ may be due to indirect effects included in the IPCC calculations.
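As a numerical cross-check, the GWP entries of Table 4 follow from (54) and (55) with the lifetimes (56) and the Harvey CO2 parameters (57). A sketch, with all values transcribed from the text:

```python
import math

def theta(t, amps, taus):
    """Forcing time (54): sum of a_j * tau_j * (1 - exp(-t/tau_j)),
    with the tau -> infinity term replaced by a_j * t."""
    s = 0.0
    for a, tau in zip(amps, taus):
        s += a * t if math.isinf(tau) else a * tau * (1.0 - math.exp(-t / tau))
    return s

# Single-exponential lifetimes (56); forcing powers P and masses m (Table 4)
gases = {
    "CH4": ([1.0], [11.8], 2.8e-24, 16.0),
    "N2O": ([1.0], [109.0], 2.1e-23, 44.0),
}
# Harvey's five-exponential CO2 parameters (57)
a_co2 = [0.131, 0.201, 0.321, 0.249, 0.098]
tau_co2 = [math.inf, 362.9, 73.6, 17.3, 1.9]
P_co2, m_co2 = 9.0e-26, 44.0

def gwp(gas, t):
    """Global warming potential (55) at observation time t (years)."""
    amps, taus, P, m = gases[gas]
    return (P / P_co2) * (m_co2 / m) * theta(t, amps, taus) / theta(t, a_co2, tau_co2)

for t in (20, 100):
    # reproduces the Table 4 entries to within rounding
    print(t, round(gwp("CH4", t), 1), round(gwp("N2O", t), 1))
```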
## 5 Nitrogen in the Biosphere
Nitrogen, the main focus of this paper, is the third most abundant material,
after water and carbon dioxide, needed for plant growth. The ultimate source
of nitrogen in the biosphere is elemental diatomic nitrogen molecules, N2,
from the atmosphere. The weathered rocks from which soils are formed contain
negligible amounts of nitrogen. How atmospheric nitrogen is converted into
forms that can be used by living organisms and ultimately cycled back into the
atmosphere is called the nitrogen cycle [33]. Fig. 10 gives a simplified
sketch of the nitrogen cycle. Most soil nitrogen is eventually returned to the
atmosphere as diatomic nitrogen molecules, N2, in the process of
denitrification. But a few percent is returned as the greenhouse gas molecules
nitrous oxide or N2O. The biosphere is the main source of N2O in the
atmosphere.
Figure 10: Simplified nitrogen cycle for the soil. SOM is soil organic matter.
Most of the fixed nitrogen of the soil is returned to the atmosphere as N2
molecules, with a small fraction as N2O molecules. See the text for more
details.
For land plants, water is the most important requirement. Without adequate
water, soil fertility is irrelevant, plants would die, and there would only be
barren desert. Of course, water is never in short supply for aquatic plants.
After water, carbon dioxide is the next most essential requirement. Life would
be impossible without the carbon atoms that are the main constituents of
carbohydrates, hydrocarbons, proteins, nucleic acids, and the many other
organic compounds of which living things are built. The carbon in living
organisms comes from carbon dioxide molecules of the atmosphere and oceans.
When living organisms die, much of the organic carbon of their remains is
eventually oxidized back to CO2 by microorganisms and returned to the
atmosphere and ocean. But under special anoxic conditions, some of the
organic-rich sediments can be preserved, then buried deeply and eventually
converted to coal, oil and natural gas.
Current atmospheric concentrations of carbon dioxide are low by the standards
of geological history [34]. Insufficient atmospheric carbon dioxide is
retarding plant growth [3]. The modest growth of CO2 concentrations over the
past century has contributed to the greening of the Earth, which can be
observed from satellites [35] and to significant increases of crop yields [3].
Figure 11: Left. An amino acid molecule, the building block of the proteins of
living organisms. From reference [37]. Right. The nucleoside adenosine.
Attaching one phosphate, PO${}_{4}^{3-}$, to the OH on the left makes a
nucleotide, adenosine monophosphate, one of the four building blocks of
ribonucleic acid (RNA). Attaching a chain of three phosphates makes adenosine
triphosphate (ATP), the main energy-transport molecule of life. From reference
[38].
Nitrogen, the third most essential requirement for plant growth, is a key part
of proteins, the structural material of all life. Proteins are long-chain
polymers (peptide chains) of amino acids like the generic one sketched in the
left side of Fig. 11. Amino acids have at least one nitrogen atom in the amino
group, -NH2. There are 20 different amino acids commonly found in plants and
animals, each with its own unique side chain R. Six side chains R contain
additional nitrogen atoms [36]. Though far less abundant than amino acids,
there are many other nitrogen-containing molecules that are essential for
life. One example is the adenosine molecule, shown on the right side of Fig.
11. Other examples include the porphyrin rings that hold the iron ions of
hemoglobin and cytochromes, or the magnesium ions of chlorophyll, or the B
vitamins, or the genetic material of cells, deoxyribonucleic acid (DNA).
The biosynthetic machinery of plants uses ammonium ions, NH${}_{4}^{+}$ as a
source of nitrogen atoms in organic molecules. When living organisms die,
microorganisms convert most of the nitrogen from their remains back into
ammonium ions and nitrate ions, NO${}_{3}^{-}$. This process of mineralization
allows nitrogen to be used by new generations of living organisms and recycled
many times through the soil. But the fixed nitrogen of the soil is eventually
lost by leaching or into the atmosphere as N2 or N2O molecules in the process
of denitrification, which is carried out by other microorganisms.
In addition to using recycled, mineralized nitrogen, some microorganisms of
the soil and oceans can convert diatomic nitrogen, N2, from the atmosphere
into ammonium ions NH${}_{4}^{+}$. Fixing nitrogen in this way is a highly
energy-intensive step, since the triple bond energy of N2, about 9.8 electron
volts (eV) [39], is one of the largest in nature. Within nitrogen-fixing
microorganisms, nitrogenase, itself a protein containing many nitrogen atoms,
catalyzes the overall reaction
${\rm N}_{2}+3{\rm H}_{2}+2\hbox{ H${}_{2}$O}+16\,\hbox{ATP}\to 2{\rm
NH}_{4}^{+}+2{\rm OH}^{-}+16\,\hbox{ADP}+16\hbox{P}_{i}.$ (58)
The hydrogen atoms and the adenosine triphosphate (ATP) energy-carrier
molecules on the left side of the equation are usually derived from the
metabolism of energy-storage carbohydrates, such as glucose, C6H12O6, which is
a major part of plant tissue, often in polymeric forms like starch or
cellulose. Starch, which consists of single chains of glucose molecules, is
easily decomposed to glucose. Cellulose fibers, the main building material of
plants, are hydrogen-bonded polymers of glucose with slightly different bond
geometries between the monomers than for starch. Cellulose is much more
resistant to decomposition than starch. Fungi and soil microorganisms can
disassemble cellulose and release glucose.
The detailed biochemistry summarized by (58) has very complicated pathways,
and it is a tour de force for organisms that can implement it. Breaking the N2
bond requires the chemical energy of at least 16 ATP molecules. The chemical
energy, or Gibbs free energy,
$\Delta G=\Delta H-T\Delta S,$ (59)
available from each conversion of ATP to adenosine diphosphate (ADP) and an
inorganic phosphate ion (Pi), depends on the absolute temperature $T$, the
ionic strength, the pH, and other factors. It involves not only the enthalpy
increment $\Delta H$ from making or breaking chemical bonds but also large
entropy increments $\Delta S$, that is, the exchange of heat with the
surrounding medium.
For the intracellular conditions of many plants or microorganisms, the energy
release $\Delta G$ for each hydrolyzed ATP molecule is around 50 kJ mol-1 or
about 0.5 eV [40]. Therefore, the 16 ATP’s in (58) would correspond to about 8
eV of energy, pretty close to the 9.8 eV dissociation energy of the N2
molecule. This is about half the chemical energy released by the oxidative
metabolism of a glucose molecule [40] and relatively costly to the energy
budget of the organism. For comparison, only three ATP molecules, and two
molecules of NADH (the reduced form of nicotinamide adenine dinucleotide) are
needed to incorporate a CO2 molecule into a sugar molecule [41].
Nitrogenase is inactivated by oxygen, O2, so nitrogen-fixing organisms have
evolved various ways to exclude O2 from contact with nitrogenase. Legumes
enclose symbiotic, nitrogen-fixing bacteria in root nodules from which oxygen
is removed with the aid of hemoglobin molecules, similar to those that
transport O2 molecules in human blood. Oceanic cyanobacteria (blue-green
algae) isolate the nitrogenase in special heterocysts, thick-walled cells
which have the same function as root nodules. And some photosynthesizing
microorganisms only fix nitrogen after sundown when photosynthesis stops
flooding the organism with O2 molecules [33].
In water, most ammonia molecules rapidly convert to positively charged
ammonium ions, NH${}_{4}^{+}$, in the reversible reaction
$\hbox{NH${}_{3}$}+\hbox{H${}_{2}$O}\rightleftharpoons\hbox{NH${}_{4}^{+}$}+\hbox{OH${}^{-}$}.$
(60)
As indicated by (60), ammonia acts as a chemical base, releasing negative
hydroxyl ions into solution.
In oxygenated soils or water, there are specialized microorganisms, ammonia
oxidizers, that convert ammonium ions to nitrite ions, NO${}_{2}^{-}$. The
detailed biochemistry involves several intermediate steps, but the overall
reaction can be written as
$\hbox{NH${}_{4}^{+}$}+\frac{3}{2}\hbox{O${}_{2}$}+2\hbox{OH${}^{-}$}\to\hbox{NO${}_{2}^{-}$}+3\hbox{H${}_{2}$O}.$
(61)
Other microorganisms are able to oxidize the nitrites produced by reaction
(61) to nitrate ions NO${}_{3}^{-}$, as described by the overall reaction
$\hbox{NO${}_{2}^{-}$}+\frac{1}{2}\hbox{ O${}_{2}$}\to\hbox{NO${}_{3}^{-}$}.$
(62)
Both reactions (61) and (62) involve complicated biochemical pathways.
Microorganisms which drive reactions (61) and (62) are able to use the modest
amounts of chemical energy released to grow and multiply, although rather
slowly compared to microorganisms with more energy-rich lifestyles that use O2
to oxidize organic compounds to carbon dioxide and water [33].
Figure 12: The faces and edges of clay particles are covered with insoluble
negative charges, to which soluble positive cations can temporarily bind.
These include many mineral nutrients of plants, including ammonium,
NH${}_{4}^{+}$, potassium, K+, calcium, Ca++ and magnesium Mg++ ions. From
reference [42].
The nitrification of ammonium ions to nitrate ions described by (61) and (62)
is especially important in soils. Both the mineral and organic constituents of
soil contain many cation exchange sites. These are sites with insoluble
negative charges to which dissolved, positively charged ions or cations, like
the ammonium ion, can bind. Humus is rich in carboxylic exchange sites, -COO-.
Fig. 12 shows a schematic clay particulate, an aluminosilicate mineral with
many insoluble negative charge sites on its faces and edges. Cations of
minerals needed by plants, potassium ions (K+), calcium ions (Ca++), magnesium
ions (Mg++) and ammonium ions (NH${}_{4}^{+}$) can reversibly bind to these
cation exchange sites.
There are far fewer anion exchange sites than cation exchange sites in soils,
so nitrate anions can diffuse to plant roots more rapidly than ammonium ions
which are held back on cation exchange sites. Within the plant tissue, nitrate
ions are converted back to nitrite ions and then to ammonium ions for
biosynthesis into amino acids and other nitrogen-containing organic molecules
like those of Fig. 11.
If the nitrogen cycle only involved the reactions (58) – (62), large and
potentially toxic concentrations of ammonium, nitrite and nitrate ions would
build up in soils and waters. In fact, the buildup of inorganic nitrogen is
limited by the biological process of denitrification which converts nitrogen-
containing ions back into diatomic nitrogen molecules, N2, and a few percent
of nitrous oxide molecules, N2O. These are released back into the atmosphere.
Denitrification is desirable for sewage treatment plants, or to prevent
eutrophication of water bodies, or to eliminate oxides of nitrogen from the
exhaust gases of high-temperature combustion in air. But denitrification is
undesirable for agriculture since it wastes the nitrogen fertility of soils.
Denitrification is most intense in oxygen-poor soils or waters. When O2 is in
short supply, for example, in waterlogged soils, some microorganisms can use
nitrite, NO${}_{2}^{-}$, or nitrate, NO${}_{3}^{-}$, to obtain chemical energy
from oxidizing organic compounds to CO2, H2O, N2 and N2O. For a glucose
molecule, C6H12O6, obtained from the cellulose of plant remains, a
representative denitrification process can be summarized as
$8\,\hbox{NO${}_{2}^{-}$}+8\hbox{H}^{+}+\hbox{C${}_{6}$H${}_{12}$O${}_{6}$}\to
4\hbox{N${}_{2}$}+6\,\hbox{CO${}_{2}$}+10\,\hbox{H${}_{2}$O},$ (63)
or infrequently
$7\hbox{NO${}_{2}^{-}$}+\hbox{NO${}_{3}^{-}$}+8\hbox{H}^{+}+\hbox{C${}_{6}$H${}_{12}$O${}_{6}$}\to
3\hbox{N${}_{2}$}+\hbox{N${}_{2}$O}+6\,\hbox{CO${}_{2}$}+10\,\hbox{H${}_{2}$O}.$
(64)
Microorganisms implement oxidations, like (63) and (64), with complicated
biochemical steps.
In the 1990’s the importance of anaerobic ammonia oxidation (anammox) by
bacteria was first recognized [33]. In the anammox process, microorganisms use
NO${}_{2}^{-}$ and NO${}_{3}^{-}$ ions to gain chemical energy by oxidizing
ammonium ions, NH${}_{4}^{+}$ instead of carbon-containing organic compounds.
Representative overall reactions can be described by the equations
$\hbox{NH${}_{4}^{+}$}+\hbox{NO${}_{2}^{-}$}\to\hbox{N${}_{2}$}+2\,\hbox{H${}_{2}$O},$
(65)
or infrequently
$\hbox{NH${}_{4}^{+}$}+\hbox{NO${}_{3}^{-}$}\to\hbox{N${}_{2}$O}+2\,\hbox{H${}_{2}$O}.$
(66)
The chemical energy released in reactions like (65) and (66) allows slow
growth of the anammox microorganisms. Many of the NO${}_{2}^{-}$ ions involved
in the anammox oxidation process are produced by archaea, non-bacterial
prokaryotes that are adapted to extreme environments [43]. Natural anammox
processes are widespread in soils and waters, and they contribute
significantly to global denitrification. Anammox systems have also been
commercialized for treatment of wastewater [44].
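Atom and charge balance of the overall reactions (63)-(66) can be verified mechanically. A small sketch; the formula table below is hand-entered for illustration:

```python
from collections import Counter

# Atom contents of the species appearing in (63)-(66)
species = {
    "NO2-": {"N": 1, "O": 2}, "NO3-": {"N": 1, "O": 3},
    "NH4+": {"N": 1, "H": 4}, "H+": {"H": 1},
    "C6H12O6": {"C": 6, "H": 12, "O": 6},
    "N2": {"N": 2}, "N2O": {"N": 2, "O": 1},
    "CO2": {"C": 1, "O": 2}, "H2O": {"H": 2, "O": 1},
}
charge = {"NO2-": -1, "NO3-": -1, "NH4+": 1, "H+": 1}

def side(terms):
    """Total atom counts and net charge for one side of a reaction."""
    c = Counter()
    for sp, n in terms:
        for el, k in species[sp].items():
            c[el] += n * k
        c["charge"] += n * charge.get(sp, 0)
    return c

def balanced(lhs, rhs):
    return side(lhs) == side(rhs)

# (63): 8 NO2- + 8 H+ + C6H12O6 -> 4 N2 + 6 CO2 + 10 H2O
eq63 = balanced([("NO2-", 8), ("H+", 8), ("C6H12O6", 1)],
                [("N2", 4), ("CO2", 6), ("H2O", 10)])
# (64): 7 NO2- + NO3- + 8 H+ + C6H12O6 -> 3 N2 + N2O + 6 CO2 + 10 H2O
eq64 = balanced([("NO2-", 7), ("NO3-", 1), ("H+", 8), ("C6H12O6", 1)],
                [("N2", 3), ("N2O", 1), ("CO2", 6), ("H2O", 10)])
# (65): NH4+ + NO2- -> N2 + 2 H2O
eq65 = balanced([("NH4+", 1), ("NO2-", 1)], [("N2", 1), ("H2O", 2)])
# (66): NH4+ + NO3- -> N2O + 2 H2O
eq66 = balanced([("NH4+", 1), ("NO3-", 1)], [("N2O", 1), ("H2O", 2)])
print(eq63, eq64, eq65, eq66)  # True True True True
```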
Quantitative rates of nitrogen fixation are not known very well, but the mass
of atmospheric nitrogen naturally fixed by microorganisms is believed to be
approximately [45]
$\frac{dM_{{\rm N}_{2}}}{dt}\approx 175\hbox{ Tg y}^{-1}.$ (67)
Here, the unit Tg y-1 is a teragram ($10^{12}$ g) of nitrogen per year, or a
million tonnes of nitrogen per year. Assuming that this fixation rate is very
nearly the same as the denitrification rate, some 175 Tg y-1 of nitrogen must
be returned to the atmosphere by denitrification.
Yang et al. [46] estimate that natural emissions of N2O are about
$\frac{dM_{{\rm N}_{2}{\rm O}}}{dt}=9.7\hbox{ Tg y${}^{-1}$}.$ (68)
Equations (67) and (68) imply that denitrification produces about
$(175-9.7)/9.7\approx 17$ N2 molecules for every one N2O molecule.
It is much harder to estimate anthropogenic emissions of N2O than emissions of
CO2, where there are reliable figures for the annual usage of fossil fuels and
of cement, the main human sources. But estimates of anthropogenic N2O emission
rates by different authors are
$\frac{dM_{{\rm N}_{2}{\rm O}}}{dt}=\left\\{\begin{array}{ll}7.3\hbox{ Tg
y}^{-1},&\hbox{Yang {\it et al.} [46]}\\ 7.8\hbox{ Tg
y}^{-1},&\hbox{Reay [47]}\end{array}\right\\}.$ (69)
The growth of nitrous oxide shown in Fig. 7 and in Table 3 implies that the
mass $M_{\rm N}$ of nitrogen in atmospheric N2O molecules is increasing at a
rate
$\frac{dM_{\rm N}}{dt}=4\pi r^{2}\hat{N}\left(\frac{2\,m_{\rm N}}{N_{\rm
A}}\right)\frac{dC^{\\{i\\}}}{dt}=4.1\hbox{ Tg y}^{-1}.$ (70)
Here, $r=6371$ km is the radius of the Earth, $\hat{N}=2.15\times 10^{29}$ m-2
is the column density of all air molecules, already given by (31), $N_{\rm
A}=6.02\times 10^{23}$ is Avogadro’s number, the number of molecules in a gram
molecular weight, and $m_{\rm N}=14$ g is the gram molecular weight of
nitrogen atoms. The rate of increase of the N2O concentration,
$dC^{\\{i\\}}/dt=8\times 10^{-10}$ y-1, taken from Fig. 7, was listed in Table
3.
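The arithmetic of (70) is easily reproduced from the constants quoted in the text; a sketch:

```python
import math

r = 6.371e6      # Earth radius, m
N_hat = 2.15e29  # column density of air molecules, m^-2, eq. (31)
N_A = 6.022e23   # Avogadro's number, mol^-1
m_N = 14.0       # gram molecular weight of N atoms, g/mol
dCdt = 8e-10     # rate of increase of N2O concentration, 1/y (Table 3)

# Mass of N (two atoms per N2O molecule) added to the atmosphere per year
dMdt = 4 * math.pi * r**2 * N_hat * (2 * m_N / N_A) * dCdt  # grams per year
print(f"{dMdt / 1e12:.1f} Tg/y")  # ~4.1 Tg/y, as in (70)
```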
The observed growth of atmospheric N2O from (70) is about half of the
estimated human contribution to the emission rate (69) of N2O. It is natural
to wonder where the other half goes. Observations show that only about half of
the concentration increases of atmospheric CO2 due to human emissions remains
in the atmosphere after a few years. Most of the other half is thought to be
absorbed in the oceans which contain about 50 times more CO2 than the
atmosphere. In the oceans, most molecules of the acidic oxide CO2 are
reversibly converted to bicarbonate ions, HCO${}_{3}^{-}$, and carbonate
ions, CO${}_{3}^{--}$.
Molecules of the neutral oxide N2O do not form analogs of bicarbonate and
carbonate ions. Therefore, in equilibrium a much smaller fraction of N2O is
contained in the oceans. Weiss and Price [48] have measured the solubility
coefficient $K_{0}$ of N2O in water as a function of salinity and temperature.
A representative value for the oceans is $K_{0}=4.0\times 10^{-2}$ mole kg-1
atm-1. Using the known masses of the atmospheric gas and ocean water, and
assuming a concentration of $C^{\\{i\\}}=340\times 10^{-9}$ molecules of N2O
per molecule of air, in accordance with Table 3, this value of $K_{0}$ implies
an equilibrium partitioning of 77% of N2O in the atmosphere and 23% in the
oceans. For comparison, in equilibrium the partitioning for CO2 is
approximately 2% in the atmosphere and 98% in the oceans [49]. Observations
[50] show that the concentrations of N2O in the oceans can be orders of
magnitude larger than values that would be in equilibrium with the atmosphere.
Just as in the soil, biological processes in the ocean, like the anammox
reactions discussed above, are creating N2O, especially in suboxic waters
[50].
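The 77%/23% partitioning can be checked approximately. The ocean mass and the total amount of air are not given in the text, so round literature values are assumed here:

```python
# Equilibrium partitioning of N2O between atmosphere and ocean from the
# Weiss-Price solubility K0. Ocean mass and total moles of air are assumed
# round values, not taken from the text.
K0 = 4.0e-2       # mol kg^-1 atm^-1, representative ocean value
M_ocean = 1.4e21  # kg of ocean water (assumed)
n_air = 1.77e20   # total moles of air (~5.15e18 kg / 29 g/mol, assumed)

# For mole fraction C the partial pressure is ~C atm, so:
#   moles in atmosphere = n_air * C
#   moles in ocean      = K0 * C * M_ocean
# and the ratio is independent of C.
f_atm = n_air / (n_air + K0 * M_ocean)
print(f"atmosphere {f_atm:.0%}, ocean {1 - f_atm:.0%}")  # ~76% / 24%
```

The result, roughly 76% in the atmosphere, agrees with the 77%/23% split quoted in the text to within the precision of the assumed masses.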
Using the N2O concentration $C^{\\{i\\}}=3.4\times 10^{-7}$ of Table 3, and
other parameters the same as those for (70), we find that the total mass
$M_{\rm N}$ of nitrogen contained in atmospheric N2O is
$M_{\rm N}=4\pi r^{2}\hat{N}\left(\frac{2\,m_{\rm N}}{N_{\rm
A}}\right)C^{\\{i\\}}=1734\hbox{ Tg}.$ (71)
If we assume an atmospheric residence time $\tau^{\rm N_{2}O}=109$ y, in
accordance with estimates by Forster et al. [28], the equilibrium loss rate of
atmospheric N2O would be no greater than
$\frac{dM_{\rm eq}}{dt}=M_{\rm N}/\tau^{\rm N_{2}O}=15.9\hbox{ Tg
y}^{-1}.$ (72)
As indicated in Fig. 10, most of the loss rate (72) is believed to be due to
photodissociation of N2O molecules by solar ultraviolet light in the
stratosphere [51].
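Equations (71) and (72) can be verified the same way as (70); a sketch using the quoted constants:

```python
import math

r = 6.371e6      # Earth radius, m
N_hat = 2.15e29  # column density of air molecules, m^-2
N_A = 6.022e23   # Avogadro's number, mol^-1
m_N = 14.0       # g/mol for N atoms
C = 3.4e-7       # N2O concentration (Table 3)
tau = 109.0      # assumed N2O residence time, years

M_N = 4 * math.pi * r**2 * N_hat * (2 * m_N / N_A) * C  # grams of N
print(f"M_N = {M_N / 1e12:.0f} Tg")           # ~1734 Tg, eq. (71)
print(f"loss = {M_N / tau / 1e12:.1f} Tg/y")  # ~15.9 Tg/y, eq. (72)
```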
## 6 Nitrogen in Agriculture
Since the biosphere is the main source of the minor greenhouse gases, nitrous
oxide and methane, agriculture has been targeted with various regulations that
will supposedly “save the planet” from climate change. The planet is not in
danger from greenhouse gases. But some of the regulations to address this non-
problem are of great concern since they will drastically cut food supplies for
the world.
The huge increases in agricultural productivity since the year 1950 have
eliminated the deadly famines that plagued mankind throughout recorded
history. A number of factors have contributed to this green revolution [52],
including more efficient agronomic practices, increased concentrations of
atmospheric carbon dioxide [3], more productive plant species like hybrid corn
[53], and greatly increased use of inorganic fertilizers [54, 55]. Nitrogen
fertilizers are the single most important, followed by phosphorus and
potassium. Fig. 13 shows the strong correlation of global cereal yield and the
world production of nitrogen fertilizer.
Figure 13: Annual world production of nitrogen fertilizer used in agriculture
(blue, in Tg) and world production of all cereal crops (orange, in gigatonnes)
from 1961 to 2019. Data from reference [58]. The threefold increase of cereal
crop yields was largely due to the use of mineral nitrogen fertilizer.
Additional contributors to the increased yields were other mineral fertilizers
like phosphorus and potassium, better plant varieties like hybrid corn,
increasing concentrations of atmospheric CO2, etc.

Figure 14: Crop yields
relative to yields in 1866 for corn, wheat, barley, grass hay, oats and rye in
the United States. Also shown from the year 1961 is the annual mineral
nitrogen fertilizer (in Tg = megatonnes) used in agriculture. Crop yields are
from the USDA, National Statistical Service [62] and nitrogen fertilizer usage
is from the Food Agriculture Organization statistical database [58]. Note the
high correlation between yields and the use of nitrogen fertilizer.
Growing plants require many chemical elements. The largest demand is for
hydrogen (H) and oxygen (O) in water, followed by carbon (C) and nitrogen (N).
An optimally fertile soil must also contain phosphorus (P), potassium (K),
calcium (Ca), magnesium (Mg), sulfur (S), and traces of many additional
elements [56].
Soils naturally lose their fertility with time. Even with no removal of
minerals as a result of harvested crops or logging, rains gradually wash away
accessible minerals to subsoils or to waterways. For this reason, soils in
areas of high rainfall, most notably geologically old tropical soils, have
very low fertility, often only enough for one or two years of crops [57]. Like
phosphorus, potassium, and other essential minerals, mineralized nitrogen can
be leached from the soil by rainfall. But denitrification and the direct release
of N2 and N2O to the atmosphere are just as important as rainfall. No analog
of denitrification accelerates the loss of other minerals.
At the beginning of the 20th Century, the majority of nitrogen for crops was
supplied by manure, nitrogen fixation by soil microorganisms or legumes.
Inorganic fertilizer in the form of nitrate salts or ammonia from coke ovens
[52] was used in small amounts. This was equivalent to only about 2% of all
nitrogen removed by crops [52]. Soil nitrogen was still a significant
limitation on crop production, even with the emergence of hybrid varieties.
Commercial production of fertilizer from the reaction of nitrogen and hydrogen
gas at high temperatures and pressures, in the presence of appropriate
catalysts, was developed in Germany between 1910 and 1915. Invented by Fritz
Haber, for which he received the 1918 Nobel Prize in Chemistry, and
commercialized by Carl Bosch [52], the Haber-Bosch process is used to produce
most nitrogen fertilizer today. Fertilizer production did not begin in earnest
until after World War II [52]. Some of the impressive consequences are shown
in Fig. 14. The yields of all non-legume food and fodder crops have increased
dramatically since the year 1950, in large part due to nitrogen fertilizers.
### 6.1 Efficient use of nitrogen fertilizer
All crops need nitrogen to produce essential organic molecules like those of
Fig. 11. Adequate nitrogen is important throughout the growth and maturing
process [60]. More nitrogen increases canopy development, providing
opportunity for greater photosynthesis and greater crop development. Adequate
nitrogen at specific times in plant development enhances growth and crop
yield. Organic production, which forgoes inorganic fertilizers, and a return
to low-input agriculture will not achieve the food supply needed to support
8.5 to 10 billion people [52].
Farmers have to pay for fertilizer. Nitrogen fertilizer is particularly
expensive. Therefore, farmers have every incentive to increase nitrogen use
efficiency (NUE), the fraction of nitrogen from fertilizer that is taken up by
a crop. The lower the crop uptake of nitrogen, the greater will be the
potential losses of nitrogen to air and water, and the larger the fraction of
fertilizer expenditures that are simply wasted. Typical NUE for corn
production in the US is 40% to 50% [59]. However, with good management
practices NUE can be increased to 70%. This represents a large financial
savings and a large reduction of nitrogen emissions.
With the adoption of improved crop varieties and agronomic practices, it is
difficult to separate the benefit of nitrogen fertilizer alone on yield
increases. However, throughout the United States, England, Brazil and Peru,
agricultural experiment stations have maintained crop plots with varying
inputs of livestock manure, and inorganic N, P, K, in addition to no added
fertilizer or manure amendments [61]. Stewart et al. [61] summarized plots for
sites in the United States, United Kingdom and South America which compared no
fertilizer versus fertilizer over long time periods. Fertilizer increased
yields often by 60% or more, compared to fields with no applied fertilizer.
Stewart et al. [61] concluded that estimates of improved yields of 30% to 50%
due to fertilizer use appear reasonable. The question becomes: What is the
most economic and environmentally friendly fertilizer use strategy?
Maharjan et al. [55] reported on the effects of incremental amounts of
nitrogen fertilizer compared with no fertilizer on continuous corn plots from
1952 to 2012. Plots with no added nitrogen had yield increases of 31 kg/ha,
which represents the improvements hybrid selection and agronomic practices may
achieve. Incremental amounts of nitrogen fertilizer of 45, 90, 135, and 180
kg-N/ha increased yields 1.88, 2.90, 2.95, and 1.80-fold, compared to the non-
fertilized plots. Plots which received manure from beef feedlots increased
yields 2.86-fold relative to the non-fertilized plots. Adding fertilizer to
plots receiving manure added no benefit. However, the manure nitrogen applied
was from 248 kg-N/ha to 769 kg-N/ha. These data demonstrate nitrogen
fertilizer greatly increases yields. However, supplemental nitrogen above that
needed for crop growth increases the risk for environmental losses of nitrogen
in leachate of NO${}_{3}^{-}$, volatilization of NH3 and N2O, depending on
soil conditions and rainfall. Losses to the environment are costly to the
ecosystem and to the farmer. Therefore, farmers are increasingly employing
precision practices to improve NUE.
The “4Rs” describe good fertilizer management: the right source, the right
amount, at the right time, and at the right place [59]. Providing needed
amounts of fertilizer at optimal times for crop, soil and climatic conditions
is called precision agriculture [52, 63]. For optimal NUE, adequate phosphorus
and potassium are necessary [59]. Soil tests for phosphorus and potassium can
determine if soil concentrations are adequate, to ensure that response to
nitrogen fertilizer will not be inhibited. The moisture content, carbon-to-
nitrogen ratios in soil organic matter, pH, and soil texture, in addition to
crop type, can influence the utilization of added nitrogen. Detailed farm soil
maps are needed to define where soils need various amounts of added nitrogen.
Accurately calculating crop nitrogen needs is a complicated process because so
many variables are involved. Prudent farmers follow recommended application
rates and timing, modified based on crop history and the applications of
manure and mineral fertilizers in prior years [56, 64].
Timing of application is critical. Typical application across the US occurs in
the fall, in the spring prior to planting, at planting, and after crop
emergence [65]. With fall application, nitrogenous losses will vary depending
on cover crop. Risks of nitrogen losses to the environment are lower with
application prior to planting and additional applications later in the growing
period. Splitting applications into several periods can improve NUE and crop
response.
## 7 Conclusion
Variants of the nitrogen cycle have been operating throughout the several
billion years that life has existed on Earth; the atmosphere has always
contained some N2O. There is no evidence that the concentrations of N2O have
ever been constant in time. Measurements of N2O concentrations in air bubbles
of ice cores from Antarctica and Greenland [66] show large changes in the
concentration of N2O between glacial maxima and interglacial periods like the
current one. At present, sources are emitting slightly more N2O than is being
absorbed by sinks. This is the reason for the upward trend of atmospheric N2O
that is shown in Fig. 7. The rates of increase may look large, but all graphs,
except that for refrigerant gases, have expanded vertical scales. The growth
rates are relatively small, as quantified by the “doubling times” $t_{2}$ of
(32) that are listed in Table 3. At current rates of increase, it would take
over 400 years to double N2O concentrations.
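As a sanity check on the quoted doubling time, the sketch below uses a constant (linear) rate of increase; the concentration and rate are approximate values consistent with the rest of the text, not exact Table 3 entries.

```python
# Doubling time under a constant rate of increase: t2 = C / (dC/dt).
C = 335.0     # current N2O concentration, ppb (assumed)
dCdt = 0.85   # observed rate of increase, ppb/year (assumed)

t2 = C / dCdt
print(round(t2))  # ~394 years, i.e. roughly four centuries to double
```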
We use radiative forcings to predict the absolute warming of all greenhouse
gases, and also the relative contributions of CO2, CH4 and N2O. A major
conclusion of this paper is that observed rates of increase of N2O pose no
threat whatsoever to climate. This is based on the well-established physics of
radiative transfer, which shows that for current growth rates, the
contribution of N2O to warming is only about 6% that of all greenhouse gases.
We estimate that the absolute warming rate from N2O is about 0.064 C/century.
Few realize that large increases of greenhouse gas concentrations produce very
small increases of radiative forcing, or equivalently, small decreases of
radiation to space. Fig. 5 shows how little radiation to space changes from
the present value, the jagged black line, if N2O concentrations are doubled to
give the jagged red line. The black and red lines differ only slightly for
frequencies near 588, 1285 and 2224 cm-1, the normal-mode frequencies of the
N2O molecule shown in Table 1. Because forcing changes are such a small
fraction of the absolute radiation to space, on the order of a percent or less
for doubling greenhouse gas concentrations, the rate of temperature increase
$dT^{\\{\rm gg\\}}/dt$ with time $t$, must be proportional to the rate of
forcing increase $dF/dt$ with time as quantified by our Eq. (42).
Radiative forcings of greenhouse gases can be calculated very accurately on
laptop computers. Elaborate general circulation models are not needed. The key
question that cannot be resolved, even with supercomputers, is how much
warming is needed to correct the imbalance, induced by growing greenhouse gas
concentrations, between heating of the Earth by the Sun and cooling of the
Earth by thermal radiation to space. This is described quantitatively by the
factor $\partial T/\partial F$ of (42). Our estimate (43) of $\partial
T/\partial F$, chosen to be consistent with observed temperature changes, is
very close to the “Planck value” of (44), the simplest theoretical estimate.
For present greenhouse gas concentrations, the radiative forcing per added N2O
molecule is about 230 times larger than the forcing per added CO2 molecule.
This is due to the strong saturation of the absorption band of the relatively
abundant greenhouse gas CO2, compared to the much smaller saturation of the
absorption bands of the trace greenhouse gas N2O. Fig. 7 or Table 3 show that
the observed CO2 rate of increase, about 2.5 ppm/year, is about 3000 times
larger than the N2O rate of increase, 0.00085 ppm/year. So, the contribution of
nitrous oxide to the annual increase in forcing is 230/3000 or about 1/13 that
of CO2.
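The ratio argument can be checked with the approximate values quoted above, taking the N2O rate as ~0.85 ppb/year (0.00085 ppm/year), the value consistent with the quoted factor of 3000:

```python
# Per-molecule forcing ratio times rate ratio gives N2O's share of the
# annual forcing increase relative to CO2.
per_molecule = 230.0    # forcing per added N2O molecule relative to CO2
rate_co2 = 2.5          # CO2 rate of increase, ppm/year
rate_n2o = 0.00085      # N2O rate of increase, ppm/year (assumed ~0.85 ppb/yr)

rate_ratio = rate_co2 / rate_n2o      # ~2900, "about 3000"
share = per_molecule / rate_ratio     # fraction of CO2's forcing increase
print(round(1 / share))  # ~13, i.e. about 1/13 that of CO2
```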
As discussed in Section 5, the biosphere is the main source of atmospheric
N2O. Microorganisms of soils and oceans fix atmospheric nitrogen N2 from the
air as ammonium ions NH${}_{4}^{+}$, which are subsequently converted to
nitrate ions NO${}_{3}^{-}$ and other compounds. These in turn are
incorporated into organic molecules – most importantly the amino acids and
proteins of living organisms. The nitrogen cycle, outlined in Fig. 10,
describes how nitrogen from the remains of plants and other living things,
from manure, mineral fertilizers, etc., is converted many times from mineral
to organic forms in soils, waters and living organisms. Denitrification
eventually returns nitrogen to the atmosphere, mostly as N2 molecules, but
with a small fraction of N2O.
The contributions of mineral fertilizer to the nitrogen cycle appear to be
comparable to natural contributions. Some of the slow increase of N2O
concentrations in the atmosphere shown in Fig. 7 may, therefore, be due to
nitrogen fertilizer usage. How nitrogen fertilizer and natural nitrogen
fixation modify the nitrogen cycle is still not completely clear and is a
subject of ongoing research. But as we explained above, the details of the
nitrogen cycle, though of great scientific interest, are of no concern for
climate because the warming from N2O is so small.
Since few citizens realize that the effects of total N2O emissions on climate
are negligible, many governments are under pressure to “do something” about
agricultural contributions of N2O. Ideologically driven government mandates on
agriculture have usually led to disaster. The world has just witnessed the
collapse of the once bountiful agricultural sector of Sri Lanka as a result of
government restrictions on mineral fertilizer [2]. An earlier example is the
collectivization of agriculture in the Soviet Union [67], when the kulak (the
derogatory Bolshevik word “fist” for a successful farmer) was “eliminated as a
class.” In consequence, millions died of starvation. Folk memories of the
Holodomor (hunger-murder) played no small part in unleashing the present war
in Ukraine.
Despite this lamentable history, various governments have proposed burdensome
regulations on farming, ranching and dairying to reduce emissions of N2O. The
regulations will have no perceptible effect on climate, but some of them will
do great harm to agricultural productivity and world food supplies. As we
pointed out in Section 6, one of the major factors for the world’s
unprecedented abundance of food in recent years has been the use of mineral
nitrogen fertilizers. It is not possible to maintain highly productive
agriculture without nitrogen fertilizer.
Although agricultural emissions of N2O are no threat to climate, they can lead
to eutrophication of waterways. So, nitrogen fertilizer should be used
intelligently to maximize nitrogen use efficiency by crops. To limit
production costs, leading farmers already use no more nitrogen fertilizer than
needed [68]. Farmers are much better qualified to make this judgement than
bureaucratic ideologues.
Mandates to reduce animal numbers and fertilizer use will reduce agricultural
yields. To continue to feed the world’s growing population without mineral and
natural fertilizers, agricultural areas would have to increase and encroach on
native habitats, which could have remained untouched with rational use of
fertilizer. The result would be more environmental stresses, not less.
## Acknowledgements
The Canadian Natural Science and Engineering Research Council provided
financial support of one of the contributing authors. This paper was first
published by the CO2 Coalition, https://co2coalition.org/, in November, 2022.
## References
* [1] W. A. van Wijngaarden and W. Happer, Methane and Climate,
https://co2coalition.org/wp-content/uploads/2021/08/Methane-and-Climate.pdf
* [2] Sri Lanka fertilizer restrictions, https://www.reuters.com/markets/commodities/fertiliser-ban-decimates-sri-lankan-crops-2022-03-03/
* [3] C. Taylor and W. Schlenker, Environmental Drivers of Agricultural Productivity Growth: CO2 Fertilization of US Field Crops, https://www.nber.org/papers/w29320
* [4] Earth’s Atmosphere, https://en.wikipedia.org/wiki/Atmosphere_of_Earth
* [5] W. A. van Wijngaarden and W. Happer, Relative Potency of Greenhouse Gases (March, 2021), http://arxiv.org/abs/2103.16465
* [6] I. E. Gordon, L. S. Rothman et al., The HITRAN2016 Molecular Spectroscopic Database, JQSRT 203, 3-69 (2017).
* [7] Normal Vibrational Modes of N2O, CH4, CO2 and H2O; http://www2.ess.ucla.edu/~schauble/MoleculeHTML/N2O_html/N2O_page.html, http://www2.ess.ucla.edu/~schauble/MoleculeHTML/CH4_html/CH4_page.html, http://www2.ess.ucla.edu/~schauble/MoleculeHTML/CO2_html/CO2_page.html, http://www2.ess.ucla.edu/~schauble/MoleculeHTML/H2O_html/H2O_page.html.
* [8] Dipole radiation, https://en.wikipedia.org/wiki/Larmor_formula
* [9] NOAA overview of satellite temperature measurements, https://www.ncei.noaa.gov/access/monitoring/msu/overview.
* [10] Rotational constant for OH, https://cccbdb.nist.gov/exp2x.asp?casno=3352576&charge=0
* [11] Rotational constant of N2O, https://cccbdb.nist.gov/exp2x.asp?casno=10024972&charge=0
* [12] Computational Chemistry Comparison and Benchmark Data Base (May, 2022), Experimental Dipoles, https://cccbdb.nist.gov/diplistx.asp
* [13] The U.S. Standard Atmosphere, NASA. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19770009539.pdf.
* [14] G. P. Anderson, S. A. Clough, F. X. Kneizys, J. H. Chetwynd, E. P. Shettle, AFGL Atmospheric Constituent Profiles (0$-$120 km), AFGL-TR-86-0110 (1986), https://apps.dtic.mil/dtic/tr/fulltext/u2/a175173.pdf
* [15] National Weather Service, Radiosondes or Weather Balloons.
https://www.weather.gov/rah/virtualtourballoon
* [16] Adiabatic lapse rates.
https://eesc.columbia.edu/courses/ees/climate/lectures/atm_phys.html
* [17] Buoyancy and atmospheric stability. http://kestrel.nmt.edu/~raymond/classes/ph332/notes/oldstuff/convection/convection.pdf
* [18] Solar spectrum.
https://www.sciencedirect.com/topics/engineering/solar-spectrum
* [19] The Planck spectral intensity.
https://en.wikipedia.org/wiki/Planck%27s_law
* [20] Blackbody or cavity radiation, https://en.wikipedia.org/wiki/Black-body_radiation
* [21] The ultraviolet catastrophe,
https://en.wikipedia.org/wiki/Ultraviolet_catastrophe.
* [22] S. Fueglistaler, private communication.
* [23] W. A. van Wijngaarden and W. Happer, Dependence of Earth’s Thermal Radiation on Five Most Abundant Greenhouse Gases, (June, 2020)
http://arxiv.org/abs/2006.03098
* [24] Radiation transfer.
https://en.wikipedia.org/wiki/Radiative_transfer
* [25] Radiative Forcing IPCC, Anthropogenic and Natural Radiative Forcing.
www.ipcc.ch/site/assets/uploads/2018/02/WG1AR5_Chapter08_FINAL.pdf
* [26] NOAA Global Monitoring Laboratory, https://gml.noaa.gov
* [27] R. Lindzen and J. Christy, The Global Mean Temperature Anomaly Record,
co2coalition.org/publications/the-global-mean-temperature-anomaly-record/
* [28] P. Forster et al., The Earth’s Energy Budget, Climate Feedback and Climate Sensitivity, In Climate Change 2021: The Physical Science Basis. Contribution of Working Group 1 to the 6th Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, U.K., pp. 923-1054,
doi:10.1017/9781009157896.009
* [29] R. Spencer, Global Warming,
https://www.drroyspencer.com/latest-global-temperatures/
* [30] J. E. Petit et al., Atmospheric history of the past 420,000 years from Vostok ice cores, Antarctica, Nature 399, 485-497 (1999).
* [31] L. D. Harvey, A guide to global warming potentials (GWPs), Energy Policy, p. 23, January 1993.
* [32] E. Joos et al., Carbon dioxide and climate impulse response functions for the computation of greenhouse gas metrics: a multi-model analysis, Atmos. Chem. Phys. 13, 2293-2825, (2013).
* [33] A. Bernhard, The Nitrogen Cycle: Processes, Players, and Human Impact, https://www.nature.com/scitable/knowledge/library/the-nitrogen-cycle-processes-players-and-human-15644632/
* [34] R. A. Berner et al., GEOCARB III: A revised model of atmospheric CO2 over Phanerozoic time, American Journal of Science, 301, 182-204 (2001).
* [35] Z. Zhu et al. Greening of the Earth and its drivers, https://sites.bu.edu/cliveg/files/2016/04/zhu-greening-earth-ncc-2016.pdf
* [36] Amino Acid Structures, https://www.neb.com/tools-and-resources/usage-guidelines/amino-acid-structures.
* [37] Generic Amino Acid, https://www.astrochem.org/sci/Amino_Acids.php
* [38] Adenosine, https://www.aquaportail.com/definition-9524-adenosine.html
* [39] Bond dissociation energies,
https://labs.chem.ucsb.edu/zakarian/armen/11---bonddissociationenergy.pdf
* [40] Energy release from ATP hydrolysis,
http://book.bionumbers.org/how-much-energy-is-released-in-atp-hydrolysis/
* [41] The Calvin Cycle, https://courses.lumenlearning.com/suny-biology1/chapter/the-calvin-cycle/
* [42] Cation exchange capacity, https://nrcca.cals.cornell.edu/nutrient/CA2/CA2_SoftChalk/CA2_print.html
* [43] L. L. Straka et al., Affinity informs environmental cooperation between ammonia-oxidizing archaea (AOA) and anaerobic ammonia-oxidizing (Anammox) bacteria, The ISME Journal, 13, 1997-2004, (2019).
* [44] Aquatex Maxcon, Anammox, https://www.aquatecmaxcon.com.au/technologies/industrial-processes/anammox
* [45] Science Direct, Biological Nitrogen Fixation, https://www.sciencedirect.com/topics/earth-and-planetary-sciences/biological-nitrogen-fixation#:~:text=The%20biological%20nitrogen%20fixation%2C%20carried,million%20tons%20N%20per%20year.
* [46] W. H. Yang, S. J. Hall and G. McNicol, Global Gases, Principles and Applications of Soil Microbiology, (Third Edition), 2021
* [47] D. Reay, Nitrous Oxide Sources, Nitrogen and Climate Change, Palgrave Macmillan, London. https://doi.org/10.1057/9781137286963_5
* [48] R. F. Weiss and B. A. Price, Nitrous Oxide Solubility in Water and Seawater, Marine Chemistry, 8, 347-359, (1980).
* [49] R. Cohen and W. Happer, Fundamentals of Ocean pH, https://co2coalition.org/wp-content/uploads/2021/11/2015-Cohen-Happer-Fundamentals-of-Ocean-pH.pdf
* [50] H. W. Bange, et al., A Harmonized Nitrous Oxide (N2O) Ocean Observation Network for the 21st Century, Frontiers in Marine Science, 6, Article 157, April (2019).
* [51] J. A. Schmidt et al., Photodissociation of N2O: Energy partitioning, J. Chem. Phys. 135, 024311 (2011).
* [52] V. Smil, Feeding the World. A challenge for the twenty-first century, The MIT Press. Cambridge Massachusetts. Co. 2000. Pp 105-139.
* [53] F. Kutka, Open-Pollinated vs. Hybrid Maize Cultivars, Sustainability 3, 1531-1554 (2011), doi:10.3390/su3091531
* [54] D. B. Egli, Comparison of Corn and Soybean Yields in the United States: Historical Trends and Future Prospects, A Supplement to Agronomy Journal, S-79-S-88 (2008), http://www.ask-force.org/web/Soya/Egli-Comparison-Corn-Soy-Yields-2008.pdf
* [55] B. Maharjan, S. Das, R. Nielsen, G.W. Hergert, Maize yields from manure and mineral fertilizers in the 100-year-old Knorr–Holden Plot, Agronomy Journal, 113, 5383-5397 (2020), https://acsess.onlinelibrary.wiley.com/doi/epdf/10.1002/agj2.20713
* [56] J. C. Forbes and R. D. Watson, Plants in Agriculture, Press Syndicate of the University of Cambridge. Cambridge, UK. Co 1992. pp: 62-67.
* [57] C. Koranda and J. Udaikumar, The Tropical Rainforest: A Perspective from Asia, https://tropicalrainforestasiakorandamar.weebly.com/soil.html
* [58] Food and Agricultural Organization of the United Nations, https://www.fao.org/faostat/en/#data/RFN, data downloaded August 16, 2022.
* [59] C. S. Snyder and P. E. Fixen, Plant nutrient management and risks of nitrous oxide emission, J. of Soil and Water Conserv. 67, 137A-144A (2012), https://www.jswconline.org/content/67/5/137A
* [60] M. J. Hawkesford, Reducing the reliance on nitrogen fertilizer for wheat production, Journal of Cereal Science 59, 276e283 (2014), http://dx.doi.org/10.1016/j.jcs.2013.12.001
* [61] W. M. Stewart, D. W. Dibb, A. E. Johnston, and T. J. Smyth, The Contribution of Commercial Fertilizer Nutrients to Food Production, Agron. J. 97, 1–6 (2005).
https://academic.uprm.edu/dsotomayor/agro6505/Stewart_etal_Fert&FoodProduction.pdf
* [62] US Crop Yields,
https://www.nass.usda.gov/Publications/Todays_Reports/reports/croptr19.pdf
* [63] N. C. Lawrence, C. G. Tenesaca, A. VanLoocke, and S. J. Hall, Nitrous oxide emissions from agricultural soils challenge climate sustainability in the US corn belt, PNAS. 118, e2112108118 (2021), https://doi.org/10.1073/pnas.2112108118.
* [64] D. J. Connor, R. S. Loomis, K. G. Cassman, Crop Ecology. Productivity and Management in Agriculture Systems Second Edition. Cambridge University Press. Co. (2011).
* [65] P. Cao, C. Lu, Z. Yu, Historical nitrogen use in agricultural ecosystems of the contiguous United States during 1850 -2015: application rate, timing, and fertilizer type, Earth Syst. Sci. Data 10, 969-984 (2018).
* [66] Atmospheric Methane and Nitrous Oxide of the Late Pleistocene from Antarctic Ice Cores, https://www.science.org/doi/10.1126/science.1120132.
* [67] Collectivization, Encyclopaedia Britannica,
https://www.britannica.com/topic/collectivization
* [68] Saturation of fertilizer usage, https://grist.org/article/many-countries-reaching-diminishing-returns-in-fertilizer-use/
# Towards Preserving Semantic Structure in Argumentative Multi-Agent via
Abstract Interpretation
Minal Suresh Patil Umeå Universitet
###### Abstract
Over the past twenty years, argumentation has received considerable
attention in the fields of knowledge representation, reasoning, and multi-
agent systems. However, argumentation in dynamic multi-agent systems
encounters the problem of the large number of arguments generated by agents,
which comes at the expense of representational complexity and computational cost. In
this work, we aim to investigate the notion of abstraction from the model-
checking perspective, where several arguments are trying to defend the same
position from various points of view, thereby reducing the size of the
argumentation framework whilst preserving the semantic flow structure in the
system.
## 1 Introduction
Humans must possess beliefs in order to engage with their surrounding
environments successfully, coordinate their activities, and be capable of
communicating. Humans sometimes use arguments to influence others to act or
realise a particular approach, to reach a reasonable agreement, and to
collaborate together to seek the optimal possible solution to a particular
problem. In light of this, it is not unexpected that many recent efforts to
represent artificially intelligent agents have incorporated arguments and
beliefs of their environment. Argumentation-based decision-making approaches
are anticipated to be more in line with how people reason, consider
possibilities and achieve objectives. This confers particular advantages on
argumentation-based techniques, including transparent decision-making and the
capability to provide a defensible rationale for outcomes.
In our recent work, we propose the use of explanations in autonomous
pedagogical scenarios [Patil, 2022] i.e. how explanations should be tailored
in multi-agent systems (MAS) (teacher-learner interaction) as shown in Figure
1. It is rational to assume that autonomous agents in open, dynamic, and
distributed systems will conform to a linguistic system for expressing their
knowledge in terms of one or more ontologies that reflect the salient domain.
Agents consequently must agree on the semantics (e.g. privacy) of the terms
they use to organize the information, contextualise the environment, and
represent different entities in order to engage or cooperate jointly. Abstract
argumentation frameworks (AFs) [Dung, 1995, Bench-Capon and Dunne, 2007] are
naturally employed for modelling and effectively resolving such types of
challenges. In both multi-agent [Maudet et al., 2006] and single-agent [Amgoud
and Prade, 2009] decision-making situations, AFs have been extensively
utilised to describe behaviours since they can innately represent and reason
with opposing information. Moreover, argumentative models have been presented
due to the dialectic nature of AFs so that agents can cooperatively resolve
issues or arrive at decisions by communicating implicitly [Dung et al., 2009].
Present studies of AFs, however, may not be immediately applicable to multi-
agent scenarios where agents could come across certain unexpected
circumstances in their environment. AFs are naturally used for modelling
dynamic systems since, in actuality, the argumentation process is inherently
dynamic in nature [Falappa et al., 2011, Booth et al., 2013], and this comes
with high computational complexity [Dunne and Wooldridge, 2009, Dunne, 2009].
To give a practical example, autonomous Intent-Based Networking (IBN)
[Campanella, 2019] captures and translates business intent into network
policies that can be automated and applied consistently across the network.
The goal is for the network to continuously monitor and adjust its performance
to assure the desired business outcome. Intent allows the agent to understand
the global utility and the value of its actions. Consequently, the autonomous
agents can evaluate situations and potential action strategies rather than
being limited to following instructions that human developers have specified
in policies. In these cases, agents may adjust their model of the environment
as well as their strategy according to information provided by the
environment. There are several other circumstances in which the agent may not
be able to guarantee a specific status of specific arguments and would
necessitate assistance from other agents. Agents may not always know the
optimal strategy until they form a coalition. In such circumstances, agents
cannot merely compute semantics/conclusions from the ground up, since doing so
is not feasible. "Abstracting" from the original (concrete-domain) AF via
Abstract Interpretation can help compute semantics on a much smaller AF.
Such abstractions are inherently useful only if specific properties or
specifications of the original AF are preserved by the abstraction.
The main contribution of this work is to investigate the semantic properties
of the “abstract” AF from the “concrete” AF during the multi-agent
interactions. The term “abstraction” in this work pertains to the notion of
abstraction from model checking. Abstraction of the state space may reduce the
AF to a manageable size by clustering similar concrete states into abstract
states, which can further facilitate verifying these abstract states. We
summarise the primary research question as follows: Given a MAS in an
uncertain environment, each with a specific subjective evaluation of a given
set of conflicting arguments, how can agents reach a consensus whilst
preserving specific semantic properties?
Figure 1: Pedagogical Multi-Agent Reasoning
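To make the setting concrete, the following toy sketch (our illustration, not taken from the cited works) computes the grounded extension of a small Dung-style AF as the least fixed point of the characteristic function:

```python
# A tiny Dung-style abstract argumentation framework: c attacks b, b attacks a.
args = {"a", "b", "c"}
attacks = {("b", "a"), ("c", "b")}

def acceptable(x, S):
    # x is acceptable w.r.t. S if every attacker of x is attacked by some s in S
    attackers = {y for (y, z) in attacks if z == x}
    return all(any((s, y) in attacks for s in S) for y in attackers)

# Grounded extension: iterate the characteristic function from the empty set
# until a fixed point is reached.
S = set()
while True:
    S_next = {x for x in args if acceptable(x, S)}
    if S_next == S:
        break
    S = S_next
print(sorted(S))  # ['a', 'c']: c is unattacked, and c defends a against b
```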
## 2 Method
Motivation for Abstract Interpretation:
Model checking [Clarke et al., 2000] is widely accepted as a powerful automatic
verification technique for the verification of finite-state systems. Halpern
and Vardi proposed the use of model checking as an alternative to
deduction for logics of knowledge [Halpern and Vardi, 1991]. Since then, model
checking has been extended to multi-agent systems [Hoek and Wooldridge,
2002]. The state-explosion issue is the main impediment to the tractability of
model checking. Nevertheless, significant research has been done on this well-
known issue, and a variety of approaches have been proposed to circumvent this
limitation, including symbolic methods with binary decision diagrams
[Burch et al., 1992], SAT solvers [Biere et al., 1999], partial order
reduction [Peled and Pnueli, 1994] and abstraction [Clarke et al., 2000].
In this work, we focus on abstract interpretation for computing dynamic
semantics in MAS. The main point of abstract interpretation [Cousot and Cousot,
1977] is to replace the formal semantics of a system with an abstract
semantics computed over a domain of abstract objects, which describe the
properties of the system we are interested in. It formalises formal methods
and allows one to discuss the guarantees they provide, such as soundness (the
conclusions about programs are always correct under suitable, explicitly stated
hypotheses), completeness (all true facts are provable), or incompleteness
(showing the limits of applicability of the formal method). Abstract
interpretation is mainly applied to design semantics, proof methods, and
static analysis of programs. The semantics of programs formally defines all
their possible executions at various levels of abstraction. Proof methods can
be used to prove (manually or using theorem provers) that the semantics of a
program satisfy some specification, that is a property of executions defining
what programs are supposed to do. Now, we provide a brief technical primer on
key concepts in abstraction interpretation.
Posets: A partially ordered set (poset)
$\langle\mathcal{D},\sqsubseteq\rangle$ is a set $\mathcal{D}$ equipped with a
partial order $\sqsubseteq$ that is (1) reflexive: $\forall
x\in\mathcal{D}.\,x\sqsubseteq x$; (2) antisymmetric: $\forall
x,y\in\mathcal{D}.\,((x\sqsubseteq y)\wedge(y\sqsubseteq
x))\Rightarrow(x=y)$; and (3) transitive: $\forall
x,y,z\in\mathcal{D}.\,((x\sqsubseteq y)\wedge(y\sqsubseteq
z))\Rightarrow(x\sqsubseteq z)$. Let $S\in\wp(\mathcal{D})$ be a subset of the
poset $\langle\mathcal{D},\sqsubseteq\rangle$; then the least upper bound
(lub/join) of $S$ (if any) is denoted $\sqcup S$ and satisfies $\forall x\in
S.\,x\sqsubseteq\sqcup S$ and $\forall u\in\mathcal{D}.\,(\forall x\in S.\,
x\sqsubseteq u)\Rightarrow\sqcup S\sqsubseteq u$, while the greatest lower
bound (glb/meet) of $S$ (if any) is denoted $\sqcap S$ and satisfies $\forall
x\in S.\,\sqcap S\sqsubseteq x$ and $\forall l\in\mathcal{D}.\,(\forall x\in
S.\,l\sqsubseteq x)\Rightarrow l\sqsubseteq\sqcap S$. The poset $\mathcal{D}$
has a supremum (or top) $\top$ if and only if
$\top=\sqcup\mathcal{D}\in\mathcal{D}$, and has an infimum (or bottom)
$\perp$ iff $\perp=\sqcap\mathcal{D}\in\mathcal{D}$.
Lattice and Complete Partial Order (CPO): A CPO is a poset
$\langle\mathcal{D},\sqsubseteq,\perp,\sqcup\rangle$ with infimum $\perp$ such
that every denumerable ascending chain $\left\\{x_{i}\in\mathcal{D}\mid
i\in\mathbb{N}\right\\}$ has a least upper bound
$\sqcup_{i\in\mathbb{N}}x_{i}\in\mathcal{D}$. A lattice is a poset
$\langle\mathcal{D},\sqsubseteq,\sqcup,\sqcap\rangle$ such that every pair of
elements $x,y$ has a lub $x\sqcup y$ and a glb $x\sqcap y$ in $\mathcal{D}$;
thus every finite subset of $\mathcal{D}$ has a lub and a glb. A complete
lattice $\langle\mathcal{D},\sqsubseteq,\perp,\top,\sqcup,\sqcap\rangle$ is a
lattice in which every subset $S\in\wp(\mathcal{D})$ has a lub $\sqcup S$;
hence a complete lattice has a supremum $\top=\sqcup\mathcal{D}$ and an
infimum $\perp=\sqcup\emptyset$.
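As a concrete sanity check (our own illustration, not from the paper), the divisors of $12$ ordered by divisibility form a complete lattice in which the join is the least common multiple and the meet is the greatest common divisor; the poset axioms and bound definitions above can be verified exhaustively:

```python
from math import gcd

D = [1, 2, 3, 4, 6, 12]            # divisors of 12
leq = lambda x, y: y % x == 0      # partial order: x divides y

# (1) reflexive, (2) antisymmetric, (3) transitive
assert all(leq(x, x) for x in D)
assert all(not (leq(x, y) and leq(y, x)) or x == y for x in D for y in D)
assert all(not (leq(x, y) and leq(y, z)) or leq(x, z)
           for x in D for y in D for z in D)

def lub(S):
    """Least upper bound (join) under divisibility: the lcm."""
    out = 1
    for x in S:
        out = out * x // gcd(out, x)
    return out

def glb(S):
    """Greatest lower bound (meet) under divisibility: the gcd."""
    out = 0  # gcd(0, x) == x, so 0 acts as the identity here
    for x in S:
        out = gcd(out, x)
    return out

print(lub([4, 6]), glb([4, 6]))  # 12 2: top of D is 12, bottom is 1
```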
Preorder and Equivalence Relation: A preorder $\preceq$ is a binary relation
that is reflexive and transitive, but not necessarily antisymmetric. Then
$x\sim y\triangleq x\preceq y\wedge y\preceq x$ is an equivalence relation,
i.e. it is reflexive, symmetric $(\forall x,y\in\mathcal{D}.\,x\sim
y\Rightarrow y\sim x)$, and transitive. For any equivalence relation $\sim$,
the equivalence class of $x\in\mathcal{D}$ is defined as
$[x]_{\sim}\triangleq\\{y\in\mathcal{D}\mid y\sim x\\}$. The quotient set
$\left.\mathcal{D}\right|_{\sim}$ of $\mathcal{D}$ by the equivalence relation
$\sim$ is the partition of $\mathcal{D}$ into its set of equivalence classes,
i.e. $\left.\mathcal{D}\right|_{\sim}\triangleq\left\\{[x]_{\sim}\mid
x\in\mathcal{D}\right\\}$. Furthermore, the preorder $\preceq$ on
$\mathcal{D}$ can be extended to a relation $\preceq_{\sim}$ on the quotient
set $\left.\mathcal{D}\right|_{\sim}$ such that
$[x]_{\sim}\preceq_{\sim}[y]_{\sim}$
$\Leftrightarrow\exists x^{\prime}\in[x]_{\sim},y^{\prime}\in[y]_{\sim}.\,
x^{\prime}\preceq y^{\prime}$. Hence, if $\preceq$ is a preorder on
$\mathcal{D}$, then $\preceq_{\sim}$ is a partial order on the corresponding
quotient set $\left.\mathcal{D}\right|_{\sim}$.
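A tiny worked example (our own, not from the paper): order the integers $-2,\dots,2$ by absolute value. This relation is a preorder but not a partial order (e.g. $2\preceq-2$ and $-2\preceq 2$ with $2\neq-2$), and its quotient by the induced equivalence is a partial order:

```python
D = [-2, -1, 0, 1, 2]
pre = lambda x, y: abs(x) <= abs(y)          # preorder: not antisymmetric
sim = lambda x, y: pre(x, y) and pre(y, x)   # induced equivalence relation

assert sim(2, -2) and not sim(1, 2)

# Build the quotient set D/~ as a list of equivalence classes.
classes = []
for x in D:
    for c in classes:
        if sim(x, c[0]):
            c.append(x)
            break
    else:
        classes.append([x])
print(classes)  # [[-2, 2], [-1, 1], [0]]

# The induced relation on the quotient is a genuine partial order:
# antisymmetry now holds between classes.
qle = lambda c, d: pre(c[0], d[0])
assert all(not (qle(c, d) and qle(d, c)) or c is d
           for c in classes for d in classes)
```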
Abstraction and Galois connection: In the framework of abstract
interpretation, Galois connections are used to formalise the correspondence
between concrete properties (like sets of traces) and abstract properties
(like sets of reachable states), in case there is always a most precise
abstract property over-approximating any concrete property. Given two posets
$\left\langle\mathcal{D},\sqsubseteq\right\rangle$ (concrete domain) and
$\left\langle\mathcal{D}^{\sharp},\sqsubseteq^{\sharp}\right\rangle$ (abstract
domain), the pair $\langle\alpha,\gamma\rangle$ of functions
$\alpha\in\mathcal{D}\mapsto\mathcal{D}^{\sharp}$ (known as the abstraction
function) and $\gamma\in\mathcal{D}^{\sharp}\mapsto\mathcal{D}$ (known as the
concretisation function) forms a Galois connection iff $\forall
x\in\mathcal{D}.\,\forall
y^{\sharp}\in\mathcal{D}^{\sharp}.\,\alpha(x)\sqsubseteq^{\sharp}y^{\sharp}\Leftrightarrow
x\sqsubseteq\gamma\left(y^{\sharp}\right)$, which is written as
$\langle\mathcal{D},\sqsubseteq\rangle\underset{\alpha}{\stackrel{{\scriptstyle\gamma}}{{\leftrightarrows}}}\left\langle\mathcal{D}^{\sharp},\sqsubseteq^{\sharp}\right\rangle$.
Equivalently, (1) $\alpha$ and $\gamma$ are monotonic; (2) $\gamma\circ\alpha$
is extensive (i.e. $\forall x\in\mathcal{D}.\,x\sqsubseteq\gamma(\alpha(x))$);
and (3) $\alpha\circ\gamma$ is reductive (i.e. $\forall
y^{\sharp}\in\mathcal{D}^{\sharp}.\,
\alpha\left(\gamma\left(y^{\sharp}\right)\right)\sqsubseteq^{\sharp}y^{\sharp}$).
The rationale underpinning Galois connections is that concrete properties
in $\mathcal{D}$ are approximated by abstract properties in
$\mathcal{D}^{\sharp}$: $\alpha(x)$ is the most precise sound
over-approximation of $x$ in the abstract domain $\mathcal{D}^{\sharp}$, and
$\gamma\left(y^{\sharp}\right)$ is the least precise element of $\mathcal{D}$
that can be over-approximated by $y^{\sharp}$. The abstraction of a concrete
property $x\in\mathcal{D}$ is said to be exact whenever $\gamma(\alpha(x))=x$;
in other words, the abstraction $\alpha(x)$ of the property $x$ loses no
information at all. Furthermore, $y^{\sharp}\in\mathcal{D}^{\sharp}$ is a
sound approximation of $x\in\mathcal{D}$ iff
$x\sqsubseteq\gamma\left(y^{\sharp}\right)$.
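A standard worked example (ours, not from the paper) is the sign abstraction: concrete properties are sets of integers ordered by inclusion, abstract properties are sets of signs. The Galois condition and the extensive/reductive properties above can be checked exhaustively on a small universe:

```python
from itertools import combinations

U = range(-2, 3)          # small concrete universe of integers
SIGNS = {'-', '0', '+'}   # abstract domain: sets of signs, ordered by inclusion

def sign(n):
    return '-' if n < 0 else '+' if n > 0 else '0'

def alpha(X):
    """Abstraction: the set of signs occurring in the concrete set X."""
    return frozenset(sign(n) for n in X)

def gamma(Y):
    """Concretisation: all integers in U whose sign lies in Y."""
    return frozenset(n for n in U if sign(n) in Y)

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

# Galois condition: alpha(X) <= Y  iff  X <= gamma(Y), for all X, Y.
for X in powerset(U):
    for Y in powerset(SIGNS):
        assert (alpha(X) <= Y) == (X <= gamma(Y))

# gamma . alpha is extensive; alpha . gamma is reductive.
assert all(X <= gamma(alpha(X)) for X in powerset(U))
assert all(alpha(gamma(Y)) <= Y for Y in powerset(SIGNS))
print("sign abstraction forms a Galois connection")
```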
## 3 Discussion
In this section, we illustrate the nature of _abstraction_ in AF and leverage
the accrual of arguments whilst preserving the semantic information between
them.
###### Example 3.1.
Consider the example provided in [Nielsen and Parsons, 2006], consisting of
the following abstract arguments:
* •
A1: Joe does not like Jack;
* •
A2: There is a nail in Jack’s antique coffee table;
* •
A3: Joe hammered a nail into Jack’s antique coffee table;
* •
A4: Joe plays golf, so Joe has full use of his arms;
* •
A5: Joe has no arms, so Joe cannot use a hammer, so Joe did not hammer a nail
into Jack’s antique coffee table.
Figure 2: Original AF of the Jack and Joe situation

Figure 3: Abstraction of the Jack and Joe situation

As we can see in Figure 2, the argument $A5$ attacks the argument $A3$,
whereas the arguments $A3$ and $A4$ directly attack and defeat the argument
$A5$.
In our work, employing the abstract interpretation technique, the semantic
relationship between the arguments $A1$, $A2$ and $A4$ can be strengthened
into a single abstract argument $AX$, as shown in Figure 3. Through this
simple example, we can reduce the representational complexity of large AFs,
which further reduces computational cost. This abstraction in multi-agent
dynamic AFs can be extended to many realms of argumentation where auxiliary
information (apart from simply winning or losing the argument) comes into
consideration. One such consideration involves hiding certain information
from an opponent, e.g. agents abstracting away sensitive and confidential
information.
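The clustering step can be sketched as a graph transformation (a minimal illustration of ours; the attack relation follows the text above, and the helper name `abstract_af` is our own):

```python
# AF as a set of arguments plus an attack relation (edges), per the text:
# A5 attacks A3, while A3 and A4 attack A5.
args = {"A1", "A2", "A3", "A4", "A5"}
attacks = {("A5", "A3"), ("A3", "A5"), ("A4", "A5")}

def abstract_af(args, attacks, cluster, name):
    """Merge a set of concrete arguments into one abstract argument,
    lifting the attack relation onto the abstracted framework."""
    rename = lambda a: name if a in cluster else a
    new_args = {rename(a) for a in args}
    new_attacks = {(rename(s), rename(t)) for s, t in attacks
                   if rename(s) != rename(t)}  # drop self-attacks from merging
    return new_args, new_attacks

new_args, new_attacks = abstract_af(args, attacks, {"A1", "A2", "A4"}, "AX")
print(sorted(new_args))     # ['A3', 'A5', 'AX']
print(sorted(new_attacks))  # [('A3', 'A5'), ('A5', 'A3'), ('AX', 'A5')]
```

The abstracted framework has three arguments instead of five, while the attack structure relevant to $A3$ and $A5$ is preserved.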
## 4 Conclusion
In this work, we introduced the notion of reducing the complexity of an
abstract argumentation framework in a multi-agent setting using abstraction
principles from model checking, lowering the representational as well as the
computational cost that is usually caused by an increased number of arguments
in the framework. Furthermore, thanks to the abstraction of the AF, it would
be possible to develop succinct explanations for humans or other agents in
the system.
## Acknowledgements
The author thanks Timotheus Kampik for guidance and valuable insights in this
project and the anonymous reviewers for their suggestions and feedback. This
work was partially funded by the Knut and Alice Wallenberg Foundation.
## References
* [Amgoud and Prade, 2009] Amgoud, L. and Prade, H. (2009). Using arguments for making and explaining decisions. Artificial Intelligence, 173(3-4):413–436.
* [Bench-Capon and Dunne, 2007] Bench-Capon, T. J. and Dunne, P. E. (2007). Argumentation in artificial intelligence. Artificial intelligence, 171(10-15):619–641.
* [Biere et al., 1999] Biere, A., Cimatti, A., Clarke, E., and Zhu, Y. (1999). Symbolic model checking without bdds. In International conference on tools and algorithms for the construction and analysis of systems, pages 193–207. Springer.
* [Booth et al., 2013] Booth, R., Kaci, S., Rienstra, T., and Torre, L. v. d. (2013). A logical theory about dynamics in abstract argumentation. In International Conference on Scalable Uncertainty Management, pages 148–161. Springer.
* [Burch et al., 1992] Burch, J. R., Clarke, E. M., McMillan, K. L., Dill, D. L., and Hwang, L.-J. (1992). Symbolic model checking: 1020 states and beyond. Information and computation, 98(2):142–170.
* [Campanella, 2019] Campanella, A. (2019). Intent based network operations. In 2019 Optical Fiber Communications Conference and Exhibition (OFC), pages 1–3. IEEE.
  * [Clarke et al., 2000] Clarke, E., Grumberg, O., and Peled, D. (2000). Model checking. MIT Press, Cambridge, MA.
* [Cousot and Cousot, 1977] Cousot, P. and Cousot, R. (1977). Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Proceedings of the 4th ACM SIGACT-SIGPLAN symposium on Principles of programming languages, pages 238–252.
* [Dung, 1995] Dung, P. M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial intelligence, 77(2):321–357.
* [Dung et al., 2009] Dung, P. M., Kowalski, R. A., and Toni, F. (2009). Assumption-based argumentation. In Argumentation in artificial intelligence, pages 199–218. Springer.
* [Dunne, 2009] Dunne, P. E. (2009). The computational complexity of ideal semantics. Artificial Intelligence, 173(18):1559–1591.
* [Dunne and Wooldridge, 2009] Dunne, P. E. and Wooldridge, M. (2009). Complexity of abstract argumentation. In Argumentation in artificial intelligence, pages 85–104. Springer.
* [Falappa et al., 2011] Falappa, M. A., Garcia, A. J., Kern-Isberner, G., and Simari, G. R. (2011). On the evolving relation between belief revision and argumentation. The Knowledge Engineering Review, 26(1):35–43.
* [Halpern and Vardi, 1991] Halpern, J. Y. and Vardi, M. Y. (1991). Model checking vs. theorem proving: a manifesto. Artificial intelligence and mathematical theory of computation, 212:151–176.
* [Hoek and Wooldridge, 2002] Hoek, W. v. d. and Wooldridge, M. (2002). Model checking knowledge and time. In International SPIN Workshop on Model Checking of Software, pages 95–111. Springer.
* [Maudet et al., 2006] Maudet, N., Parsons, S., and Rahwan, I. (2006). Argumentation in multi-agent systems: Context and recent developments. In International workshop on argumentation in multi-agent systems, pages 1–16. Springer.
* [Nielsen and Parsons, 2006] Nielsen, S. H. and Parsons, S. (2006). A generalization of dung’s abstract framework for argumentation: Arguing with sets of attacking arguments. In International Workshop on Argumentation in Multi-Agent Systems, pages 54–73. Springer.
* [Patil, 2022] Patil, M. S. (2022). Explainability in autonomous pedagogically structured scenarios. In 36th AAAI 2022 Workshop on Explainable Agency in Artificial Intelligence.
* [Peled and Pnueli, 1994] Peled, D. and Pnueli, A. (1994). Proving partial order properties. Theoretical Computer Science, 126(2):143–182.
# Mathematically Modeling the Lexicon Entropy of Emergent Language
Brendon Boldt, David Mortensen
Language Technologies Institute
Carnegie Mellon University
Pittsburgh, PA 15213, USA
<EMAIL_ADDRESS>
###### Abstract
We formulate a stochastic process, FiLex, as a mathematical model of lexicon
entropy in deep learning-based emergent language systems. Defining a model
mathematically allows it to generate clear predictions which can be directly
and decisively tested. We empirically verify across four different
environments that FiLex predicts the correct correlation between
hyperparameters (training steps, lexicon size, learning rate, rollout buffer
size, and Gumbel-Softmax temperature) and the emergent language’s entropy in
$20$ out of $20$ environment-hyperparameter combinations. Furthermore, our
experiments reveal that different environments show diverse relationships
between their hyperparameters and entropy which demonstrates the need for a
model which can make well-defined predictions at a precise level of
granularity.
## 1 Introduction
The methods of deep learning-based emergent language provide a uniquely
powerful way to study the nature of language and language change. In
addressing these topics, some papers hypothesize general principles describing
emergent language. For example, Resnick et al. (2020) hypothesize a
predictable relationship exists between compositionality and neural network
capacity, and Kharitonov et al. (2020) hypothesize a general entropy
minimization pressure in deep learning-based emergent language. In many cases,
these hypotheses are derived from intuitions and stated in natural language;
this can lead to ambiguous interpretation, inadequate experiments, and _ad
hoc_ explanations. To this end, we study a general principle of emergent
language by proposing a mathematical model which generates a testable
hypothesis which can be directly evaluated through the empirical studies, akin
to what we find prototypically in natural science.
We formulate a stochastic process, FiLex, as a mathematical model of lexicon
entropy in deep learning-based emergent language systems111 _Emergent language
system_ or ELS refers to the combination of agents (neural networks), the
environment, and the training procedure used as part of an emergent language
experiment. (ELS). We empirically verify across four different environments
that FiLex predicts the correct correlation between hyperparameters (training
steps, lexicon size, learning rate, rollout buffer size, and Gumbel-Softmax
temperature) and the emergent language’s entropy in $20$ out of $20$
environment-hyperparameter combinations.
There are three primary reasons for using an explicitly defined model for
studying a topic like emergent language: clarity, testability, and
extensibility. A mathematical model yields a _clear_ , unambiguous
interpretation since its components have precise meanings; this is especially
important when conveying such concepts in writing. It is easier to _test_ a
model than a hypothesis articulated in natural language because the model
yields clear predictions which can be shown to be accurate or inaccurate; as a
result, models can also be directly compared to one another. Our experiments
reveal that different environments show diverse relationships between their
hyperparameters and entropy which demonstrates the need for such clarity in
making well-defined predictions at a precise level of granularity. Finally,
mathematical models hypothesize a _mechanism_ for an observed effect and not
simply the effect itself (with a possibly _ad hoc_ explanation). This is what
facilitates their _extensibility_ since a multitude of hypotheses can be
derived from these mechanisms; furthermore, this “mechanical” nature allows
future work to build directly on top of the model.
As mathematical models are seldom used to their full potential in studying
emergent language, this paper is meant to serve as a reference and starting
point for the entire methodology of developing and testing such models. We
articulate our contributions as follows:
* •
Defining a mathematical model of lexicon entropy in emergent language systems
which we demonstrate to be accurate in predicting hyperparameter-entropy
correlations.
* •
Presenting a case study of defining and empirically evaluating a mathematical
model in emergent language.
* •
Providing a direct, intuitive comparison of the effects of hyperparameters on
lexicon entropy across different environments.
We briefly discuss related work in Section 2. In Section 3, we introduce the
mathematical model, FiLex, as well as the ELSs. Empirical evaluation is
presented in Section 4 and discussed in Section 5, concluding with Section 6.
Code is available at https://example.com/reponame (in supplemental material
while under review).
## 2 Related work
For a survey of deep learning-based emergent language work, please see
Lazaridou and Baroni (2020). Contemporary deep learning-based emergent
language research often aims at establishing and refining general principles
about emergent language. In large part, these principles can be expressed as
relationships between certain characteristics of the environment or agents
(e.g., model capacity (Resnick et al., 2020), population size (Rita et al.,
2022)) and properties of the emergent language (e.g., compositionality
(Resnick et al., 2020; Rodríguez Luna et al., 2020), entropy (Kharitonov et
al., 2020; Chaabouni et al., 2021; Rita et al., 2022), and generalizability
(Chaabouni et al., 2020; Guo et al., 2021; Słowik et al., 2020)). Some of
these works (Kharitonov et al., 2020; Khomtchouk and Sudhakaran, 2018; Resnick
et al., 2020) make use of mathematical models to describe parts of the
hypotheses and/or experiments, but these fall short of establishing a clear
model which generates a testable hypothesis which is then evaluated through
the empirical studies.
Pre-deep learning emergent language research frequently relied on mathematical
models (Skyrms, 2010; Kirby et al., 2015; Brighton et al., 2005), but such
models played a different role. Whereas these models were meant to account for
some property of language observed in human language, the model presented in
this paper is accounting for emergent language directly (and human language
only indirectly). Thus, this paper presents a (mathematical) model of a
(computational) model which, in the future, will be used to more directly
study human language.
## 3 Methods
### 3.1 Model
FiLex (“fixed lexicon stochastic process”) is a mathematical model developed
from the Chinese restaurant process (Blei, 2007; Aldous, 1985), a stochastic
process where each element in the sequence is a stochastic distribution over
the positive integers (i.e., a distribution over distributions). The analogy
for the Chinese restaurant process is a restaurant with tables indexed by the
natural numbers; as each customer walks in, they sit at a random table with a
probability proportional to the number of people already at that table. The
key property here is that the process is _self-reinforcing_ ; tables with many
people are likely to get even more. By analogy to language, the more a word is
used, the more likely it is to continue to be used. For example, speakers may
develop a cognitive preference for it, or it gets passed along to subsequent
generations at a higher rate (Francis et al., 2021).
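The self-reinforcing dynamic can be sketched with a direct simulation of the Chinese restaurant process (our own illustration; the concentration parameter and seed are arbitrary choices):

```python
import random

def chinese_restaurant(n_customers, concentration=1.0, seed=0):
    """Simulate table occupancies under the Chinese restaurant process:
    each customer starts a new table with probability proportional to
    `concentration`, otherwise joins a table proportionally to its size."""
    rng = random.Random(seed)
    tables = []  # tables[i] = number of customers at table i
    for n in range(n_customers):
        r = rng.uniform(0, n + concentration)
        if r < concentration:
            tables.append(1)           # open a new table
        else:
            r -= concentration
            for i, c in enumerate(tables):
                if r < c:
                    tables[i] += 1     # join in proportion to occupancy
                    break
                r -= c
            else:
                tables[-1] += 1        # guard against float rounding
    return tables

occ = chinese_restaurant(1000)
print(sorted(occ, reverse=True)[:5])  # a few large tables dominate
```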
#### Formulation
FiLex is defined as a sequence of stochastic vectors indexed by
$N\in\mathbb{N}^{+}$ given by:
$\displaystyle\textsc{FiLex}{}(\alpha,\beta,S,N)$
$\displaystyle=\frac{\boldsymbol{w}^{(N)}}{\|\boldsymbol{w}^{(N)}\|_{1}}$ (1)
$\displaystyle\boldsymbol{w}^{(n+1)}$
$\displaystyle=\boldsymbol{w}^{(n)}+\alpha\frac{\boldsymbol{x}^{(n)}}{\beta}$
(2) $\displaystyle\boldsymbol{x}^{(n)}$
$\displaystyle\sim\text{Multi}\\!\left(\beta,\frac{\boldsymbol{w}^{(n)}}{\|\boldsymbol{w}^{(n)}\|_{1}}\right)$
(3) $\displaystyle\boldsymbol{w}^{(1)}$
$\displaystyle=\frac{1}{S}\cdot(1,1,\ldots,1)\in\mathbb{R}^{S}$ (4)
where $\boldsymbol{w}^{(n)}$ is a vector of weights,
$\alpha\in\mathbb{R}_{>0}$ controls the weight update magnitude,
$\beta\in\mathbb{N}^{+}$ controls the variance of the updates,
$S\in\mathbb{N}^{+}$ is the size of the weight vector (i.e., lexicon), and
$\text{Multi}(k,\boldsymbol{p})$ is a $k$-trial multinomial distribution with
probabilities $\boldsymbol{p}\in\mathbb{R}^{S}$. The pseudocode describing
FiLex is given in Algorithm 1. Conceptually, the process starts with an
$S$-element array of weights initialized to $1/S$. At each iteration we draw
from a $\beta$-trial multinomial distribution parameterized by the normalized
weights.222The $\beta$-trial multinomial sample is written as $\beta$ i.i.d.
samples from a categorical distribution to draw parallels to PPO in Algorithm
2. This multinomial sample is multiplied by $\alpha/\beta$ and added to the
weights so that the update magnitude is $\alpha$. This proceeds $N$ times.
Since the sequence elements are the _normalized_ weights, the elements are
themselves probability distributions; thus, FiLex is technically a sequence of
distributions over distributions.
alpha: float > 0
beta: int > 0
N: int > 0
S: int > 0

weights = array(size=S)
weights.fill(1 / S)
for _ in range(N):
    # beta-trial multinomial update of total magnitude alpha,
    # written as beta categorical draws from a snapshot of the weights
    w_copy = weights.copy()
    for _ in range(beta):
        i = sample_categorical(w_copy / sum(w_copy))
        weights[i] += alpha / beta
return weights / sum(weights)
Algorithm 1 FiLex pseudocode
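For concreteness, here is a runnable pure-Python sketch of sampling from FiLex and measuring the Shannon entropy of the resulting frequency vector (hyperparameter values are arbitrary; this is our sketch, not the authors' implementation):

```python
import math
import random

def filex(alpha, beta, S, N, rng):
    """Sample one frequency vector from FiLex (Eqs. 1-4)."""
    w = [1.0 / S] * S
    for _ in range(N):
        # beta-trial multinomial draw from a snapshot of the weights,
        # written as beta categorical samples
        snapshot = list(w)
        total = sum(snapshot)
        for _ in range(beta):
            r = rng.uniform(0, total)
            for i, wi in enumerate(snapshot):
                if r < wi:
                    w[i] += alpha / beta  # total update magnitude is alpha
                    break
                r -= wi
            else:
                w[-1] += alpha / beta     # guard against float rounding
    total = sum(w)
    return [wi / total for wi in w]

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

rng = random.Random(0)
freqs = filex(alpha=0.1, beta=4, S=8, N=500, rng=rng)
# self-reinforcement typically drives entropy below log2(8) = 3
print(round(entropy(freqs), 3))
```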
The two key differences between FiLex and the Chinese restaurant process are
the hyperparameters $S$ and $\beta$.333Note that $\alpha$ in FiLex is actually
equivalent to the _inverse_ of $\alpha$ in the Chinese restaurant process.
FiLex has a fixed number of parameters so as to match the fact that the agents
in the ELS have a fixed-size bottleneck layer, that is, a fixed lexicon.
Secondly, $\beta$ is introduced to modulate the smoothness of parameter
updates. It is closely connected to the fact that certain RL algorithms like
PPO accumulate a buffer of data points from the environment with the same
parameters before performing gradient descent.
### 3.2 Environments
To evaluate FiLex, we use four different reinforcement learning environments
in our experiments. These are inhabited by two deep learning-based agents: (1)
a sender agent which receives an observation and produces a message and (2) a
receiver agent which receives a message (and possibly an additional
observation) and takes an action. The agent architecture and optimization are
detailed in Section 3.3.
#### NoDyn
The “no dynamics” environment is a proof-of-concept environment which is not
intended to be realistic but rather to match as closely as possible the
simplifying assumptions which FiLex makes while keeping the same neural
architecture in the environments below. As the name suggests, the primary
simplification in this environment is that there are trivial dynamics, that
is, every episode immediately ends with reward of $1$ no matter what the
sender or receiver do. The sender input and receiver output are identical to
those of Nav, defined below. Just as FiLex assumes that every instance of word
use is reinforced, this process reinforces every message which the sender
produces.
#### Recon
The reconstruction game (Chaabouni et al., 2020), in the general case, mimics
a discrete autoencoder: the input value is translated into a discrete message
by the sender, and the receiver tries to output the original input based on
the message. For a given episode, the sender observes $x\sim\mathcal{U}(-1,1)$
and produces a message; the receiver’s action is a real number $\hat{x}$,
yielding a reward $(x-\hat{x})^{2}$.
#### Sig
The signaling game environment comes from Lewis (1970) and has been frequently
used in the literature (Lazaridou et al., 2017; Bouchacourt and Baroni, 2018).
In this setup, the data is partitioned into a fixed number of discrete
classes. The sender observes a datum from one of the classes and produces a
message; the receivers observes this message, the sender’s datum, and data
points from other classes (i.e., “distractors”). The reward for the
environment is $1$ if the receiver correctly identifies the sender’s datum
among the distractors and $0$ otherwise.
To eliminate the potential confounding factors from using natural inputs
(e.g., image embeddings (Lazaridou et al., 2017)), we use a synthetic dataset.
For an $n$-dimensional signaling game, we have $2^{n}$ classes. Each class is
represented by an isotropic multivariate normal distribution with mean
$(\mu_{1},\mu_{2},\dots,\mu_{n})$ where $\mu_{i}\in\\{-3,3\\}$. Observations
of a given class are samples from its corresponding distribution. For example, in the
$2$-dimensional game, the $4$ classes would be represented by the
distributions: $\mathcal{N}((-3,-3),I_{2})$, $\mathcal{N}((3,-3),I_{2})$,
$\mathcal{N}((-3,3),I_{2})$, and $\mathcal{N}((3,3),I_{2})$ (we use a
$5$-dimensional signaling game for our experiments with $32$ classes). The
motivation for this setup is the minimal need for feature extraction while
still using real-valued, stochastic inputs.
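Generating this synthetic dataset can be sketched as follows (the function name `make_classes` and the observations-per-class count are our own choices):

```python
import random
from itertools import product

def make_classes(n, rng, n_obs=4):
    """Synthetic signaling-game classes: 2**n isotropic Gaussians with
    means in {-3, 3}**n and unit variance, as described above."""
    means = list(product([-3, 3], repeat=n))
    return {m: [[rng.gauss(mu, 1.0) for mu in m] for _ in range(n_obs)]
            for m in means}

rng = random.Random(0)
data = make_classes(2, rng)
print(len(data))                  # 4 classes for the 2-dimensional game
print(len(make_classes(5, rng)))  # 32 classes for the 5-dimensional game
```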
#### Nav
For a multi-step environment, we use a $2$-dimensional, obstacle-free
navigation task. The sender agent observes the $(x,y)$ position of a receiver
and produces a message; the receiver moves by producing an $(x,y)$ vector. For
a given episode, the receiver is initialized uniformly at random within a
circle and must navigate towards a smaller circular goal region at the center.
The agents are rewarded for both reaching the goal and for moving towards the
center. An illustration is provided in Appendix A. The receiver’s location and
action are continuous variables.
### 3.3 Agents
#### Architecture
Our architecture comprises two agents, conceptually speaking, but in practice,
they are a single neural network. The sender and receiver are randomly
initialized at the start of training, are trained together, and are tested
together. The sender itself is a $2$-layer perceptron with tanh activations.
The sender’s input is environment-dependent. The output of the second layer is
passed to a Gumbel-Softmax bottleneck layer (Maddison et al., 2017; Jang et
al., 2017) which enables learning a discrete, one-hot representation.444Using
a Gumbel-Softmax bottleneck layer allows for end-to-end backpropagation,
making optimization faster and more consistent than using a backpropagation-
free method like REINFORCE (Kharitonov et al., 2020; Williams, 1992).
Nevertheless, future work may want to use REINFORCE for its more realistic
assumptions about communication. The activations of this layer can be thought
of as the words forming the lexicon of the emergent language. Messages consist
only of a single one-hot vector (word) passed from sender to receiver. At
evaluation time, the bottleneck layer functions deterministically as an argmax
layer, emitting one-hot vectors. The receiver is a $1$-layer perceptron which
takes the output of the Gumbel-Softmax layer as input. The receiver’s output
is environment-dependent. An illustration and precise specification are
provided in Appendices A and B.
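The Gumbel-Softmax bottleneck referenced above can be sketched in a few lines: perturb the logits with Gumbel noise and apply a temperature-scaled softmax, so low temperatures yield near one-hot vectors (a standalone pure-Python illustration, not the paper's actual bottleneck layer):

```python
import math
import random

def gumbel_softmax(logits, temperature, rng):
    """Draw a relaxed one-hot sample from the Gumbel-Softmax distribution."""
    # Gumbel(0, 1) noise via inverse transform; clamp u away from {0, 1}
    g = [-math.log(-math.log(rng.uniform(1e-12, 1.0 - 1e-12)))
         for _ in logits]
    z = [(l + gi) / temperature for l, gi in zip(logits, g)]
    m = max(z)                            # subtract max for stability
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]

rng = random.Random(0)
logits = [1.0, 2.0, 0.5, 0.0]
soft = gumbel_softmax(logits, temperature=5.0, rng=rng)   # smooth sample
hard = gumbel_softmax(logits, temperature=0.05, rng=rng)  # near one-hot
print([round(p, 3) for p in soft], [round(p, 3) for p in hard])
```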
#### Optimization
Although only our Nav environment involves multi-step episodes, using a full
reinforcement learning algorithm across all environments benefits
comparability and extensibility in future work. Specifically, we use proximal
policy optimization (PPO) (Schulman et al., 2017) paired with Adam (Kingma and
Ba, 2015) to optimize the neural networks. PPO is a widely used RL algorithm,
selected primarily for its stability (e.g., training almost always converges
with minimal hyperparameter tuning); attempts to train with a “vanilla”
advantage actor-critic did not consistently converge. We use the PPO
implementation of Stable Baselines 3 (MIT license) built on PyTorch (BSD
license) (Raffin et al., 2019; Paszke et al., 2019).
n_updates: int >= 0
buffer_size: int > 0

for _ in range(n_updates):  # outer loop
    rollout_buffer = []
    for _ in range(buffer_size):  # inner loop
        episode = run_episode(model, environment)
        rollout_buffer.append(episode)
    update_parameters(model, rollout_buffer)
Algorithm 2 PPO pseudocode
One relevant characteristic of PPO and similar algorithms is that in their
training they contain an inner and outer loop analogous to FiLex (Algorithm
1); this is illustrated in Algorithm 2. The (main) outer loop consists of two
steps: the inner loop which populates a rollout buffer with “experience” from
the environment and the updating of parameters based on that buffer. What is
important to note is that the buffer is populated with data from the same
model parameters, and it is not until after this that model parameters change.
### 3.4 Hypothesis
Here we state the hypothesis used to evaluate FiLex. The sign of
hyperparameter-entropy correlation observed in FiLex will be the same as what
we observe for a corresponding hyperparameter in the ELSs. We can state this
more formally as: for each pair of corresponding hyperparameters
$(h,h^{\prime})$ in FiLex and an ELS respectively,
$\displaystyle\text{sgn}(\text{corr}(D))$
$\displaystyle=\text{sgn}(\text{corr}(D^{\prime}))$ (5) $\displaystyle D$
$\displaystyle=\left\\{(x,H(\boldsymbol{y}))\mid x\in
X_{h},\,\boldsymbol{y}\sim\textsc{FiLex}{}_{h=x}\right\\}$ (6) $\displaystyle
D^{\prime}$ $\displaystyle=\left\\{(x,H(\boldsymbol{y}))\mid x\in
X_{h^{\prime}},\,\boldsymbol{y}\sim\text{ELS}_{h^{\prime}=x}\right\\}$ (7)
$\displaystyle H(\boldsymbol{y})$
$\displaystyle=-\sum_{i=1}^{S}y_{i}\log_{2}y_{i}$ (8)
where $\text{corr}(\cdot)$ is the Kendall rank correlation coefficient
($\tau$) (Kendall, 1938), $\textsc{FiLex}{}_{h=x}$ is the distribution over
frequency vectors yielded by the model for hyperparameter $h$ set to $x$
(assume likewise for $\text{ELS}_{h^{\prime}=x}$), $H$ is Shannon entropy, and
$X_{h}$ is the set of experimental values for hyperparameter $h$. A “sample”
from an ELS consists of training the agents in the environment, and estimating
word frequencies by collecting the sender’s messages over a random sample of
inputs. Accordingly, our null hypothesis is that FiLex does not meaningfully
correspond to the ELSs, and thus the signs of correlation would be expected to
match with a probability $0.5$.
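The sign-of-correlation test in Eq. 5 can be illustrated with a small hand-rolled Kendall $\tau$ on made-up sweep data (the entropy values below are invented purely for illustration, not results from the paper):

```python
def sgn(v):
    return (v > 0) - (v < 0)

def kendall_tau(xs, ys):
    """Kendall rank correlation via pairwise concordance counts
    (simple version; assumes no ties)."""
    num = den = 0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            num += sgn(xs[i] - xs[j]) * sgn(ys[i] - ys[j])
            den += 1
    return num / den

# Hypothetical sweep of one hyperparameter vs. final lexicon entropy.
h_vals = [1, 2, 4, 8, 16]
H_filex = [2.9, 2.5, 2.1, 1.8, 1.6]  # monotone decreasing: tau = -1.0
H_els = [2.7, 2.6, 2.0, 2.2, 1.5]    # noisier, same overall direction

tau_model = kendall_tau(h_vals, H_filex)
tau_els = kendall_tau(h_vals, H_els)
print(sgn(tau_model) == sgn(tau_els))  # True: signs of correlation agree
```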
We intentionally formulate our hypothesis at this level of granularity:
equality of the direction (sign) of correlation rather than stronger claims
such as raw correlation, $|\text{corr}(D)-\text{corr}(D^{\prime})|<\epsilon$,
or mean squared error, $1/|X|\cdot\sum_{x\in
X}(D(x)-D^{\prime}(x))^{2}$. We select
this level of direction of correlation for a few reasons. The simplicity of
FiLex compared to the ELSs means that unaccounted-for factors would make
supporting stronger hypotheses too difficult; furthermore,
even if the hypothesis were defended, it would be less widely applicable for
the same reasons. Additionally, the current literature tends to speak of the
general principles of emergent language at the level of “relationships” and
“effects” rather than exact numeric approximations (Kharitonov et al., 2020;
Resnick et al., 2020).
Table 1: Corresponding hyperparameters in the ELSs and FiLex.

| ELS | FiLex |
|---|---|
| Time steps | $N$ |
| Lexicon size | $S$ |
| Learning rate | $\alpha$ |
| Buffer size | $\beta$ |
| Temperature | $\beta$ |
#### Corresponding Hyperparameters
A key component of the hypothesis is the correspondence of hyperparameters of
the ELSs with those of FiLex. These correspondences are the foundation for
applying reasoning about FiLex to the ELSs; accordingly, they also determine
how the model will be empirically tested. We present five pairs of
corresponding environment-agnostic hyperparameters in Table 1. Although
environment-specific hyperparameters could also be put in correspondence with
those of FiLex, we chose the environment-agnostic ones for ease of
experimentation and comparison.
To identify these correspondences, it is important to understand the intuitive
similarities between the ELSs and FiLex. Firstly, the weights of FiLex
correspond to the learned likelihood with which a given bottleneck unit is
used in the ELS; in turn, both of these correspond to the frequency with which a
word is used in a language. Each iteration of FiLex’s outer loop is analogous
to a whole cycle in the ELS of simulating episodes in the environment,
receiving the rewards, and performing gradient descent with respect to the
rewards (compare Algorithms 1 and 2).
Based on this analogy, we can explain the corresponding hyperparameters as
follows. $N$ corresponds to the number of parameter updates taken throughout the
course of training the ELS (i.e., the outer loop of PPO). $S$ corresponds to the
size of the bottleneck layer in the ELS. $\alpha$ corresponds to the learning
rate (i.e., magnitude of parameter updates) in the ELS. The ELS has two
analogs of $\beta$. First, $\beta$ corresponds to the rollout buffer size of
PPO because both control the number of iterations of the inner loop of
training where episodes are collected before updating the weights. Second,
$\beta$, more generally, controls how smooth the updates to FiLex’s weights are,
which makes it analogous to the temperature of the Gumbel-Softmax distribution
in the ELS since a higher temperature results in smoother updates to the
bottleneck’s parameters.
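Algorithms 1 and 2 are not reproduced here, but the two-loop analogy can be illustrated with a deliberately simplified sketch. The following is our own illustrative rich-get-richer sampler, not the paper's actual FiLex definition: the specific structure (a softmax over weights, $\beta$ draws per outer step, reinforcement by $\alpha/\beta$) is an assumption chosen only to show how $N$, $S$, $\alpha$, and $\beta$ interact.

```python
import math
import random

def filex_sketch(N=1000, S=64, alpha=1.0, beta=8, seed=0):
    """Illustrative rich-get-richer lexicon sampler (NOT the paper's exact
    FiLex algorithm). Maintains weights over S words; each of N outer steps
    draws beta words from a softmax over the weights (the inner loop), then
    reinforces each drawn word by alpha/beta, so larger beta means smoother
    updates. Returns the Shannon entropy (bits) of the final distribution."""
    rng = random.Random(seed)
    weights = [0.0] * S
    for _ in range(N):
        m = max(weights)
        exps = [math.exp(w - m) for w in weights]
        total = sum(exps)
        probs = [e / total for e in exps]
        for idx in rng.choices(range(S), weights=probs, k=beta):
            weights[idx] += alpha / beta
    m = max(weights)
    exps = [math.exp(w - m) for w in weights]
    total = sum(exps)
    return -sum((e / total) * math.log2(e / total) for e in exps)

print(filex_sketch(N=200, S=16, alpha=1.0, beta=8))  # entropy in [0, log2(16) = 4]
```

With this toy structure, a larger $\alpha$ sharpens the distribution faster (lower entropy), while a larger $\beta$ averages the reinforcement over more draws per update (higher entropy), mirroring the signs predicted in Table 2.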
## 4 Experiments
Our experiments consist of comparing the correlation between the
hyperparameters of FiLex and the ELSs and the Shannon entropy of the lexicon at
the end of training. The entropy for the ELSs is calculated based on the
bottleneck unit (word) frequencies gathered by sampling from the sender’s
input distribution. To gather data for FiLex, we run a Rust implementation of
a sampling algorithm. Each experiment consists of a logarithmic sweep of a
hyperparameter plotted against the entropy yielded by those hyperparameters
(see Appendix B for details).
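The entropy computation for the ELSs amounts to the standard Shannon entropy over empirical word (bottleneck unit) frequencies. A minimal sketch (the function name is ours, not taken from the paper's code):

```python
import math
from collections import Counter

def lexicon_entropy(words):
    """Shannon entropy (in bits) of the empirical word distribution."""
    counts = Counter(words)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Four equally frequent words give the maximum entropy log2(4) = 2 bits.
print(lexicon_entropy(["a", "b", "c", "d"]))  # → 2.0
```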
Figure 1: Plots of hyperparameters ($x$-axis, log scale) vs. entropy ($y$-axis). Each row corresponds to a particular environment. Each column corresponds to a particular hyperparameter. All $y$-axes are on the same scale, with the dashed lines representing min/max entropy. The points are individual runs and the lines are a Gaussian convolution of the points.
Table 2: Kendall’s $\tau$ for various configurations. All values have a significance of $p\leq 0.01$.
Environment | Time Steps | Lexicon Size | Learning Rate | Buffer Size | Temperature
---|---|---|---|---|---
FiLex | $-0.53$ | $+0.67$ | $-0.87$ | $+0.93$ | $+0.93$
NoDyn | $-0.81$ | $+0.12$ | $-0.74$ | $+0.07$ | $+0.58$
Recon | $-0.17$ | $+0.93$ | $-0.35$ | $+0.84$ | $+0.68$
Sig | $-0.49$ | $+0.15$ | $-0.16$ | $+0.30$ | $+0.49$
Nav | $-0.81$ | $+0.36$ | $-0.84$ | $+0.20$ | $+0.68$
Each point in the resulting scatter plots corresponds to an independent run of
the model or ELS with the hyperparameter on the $x$-axis and entropy on the
$y$-axis. The plots also include a Gaussian convolution of the data points
(the solid line) to better illustrate the general trend of the data. The plots
are presented in Figure 1 with the rank correlation coefficients in Table 2.
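The rank correlations in Table 2 are Kendall's $\tau$ (Kendall, 1938), which for tie-free data reduces to counting concordant versus discordant pairs; a minimal sketch (in practice a library routine such as `scipy.stats.kendalltau` would also handle ties):

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall's tau for tie-free paired data:
    (concordant pairs - discordant pairs) / total pairs."""
    n = len(xs)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [10, 20, 30, 40]))  # → 1.0 (perfectly concordant)
print(kendall_tau([1, 2, 3, 4], [40, 30, 20, 10]))  # → -1.0 (perfectly discordant)
```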
## 5 Discussion
### 5.1 Model evaluation
Looking at the signs of correlations shows that FiLex makes the correct
prediction $20$ out of $20$ times. Given a simple one-sided binomial test, the
empirical data rejects the null hypothesis at $p<0.001$. Although this number
drops to $15$ out of $20$ if we require $|\tau|\geq 0.2$, the binomial test
rejects the null hypothesis with $p=0.02$ for this stronger hypothesis.
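The one-sided binomial test used here has a closed form that can be checked directly: with $20/20$ sign matches the tail probability is $0.5^{20}\approx 10^{-6}$, and with $15/20$ it is roughly $0.02$, matching the values above.

```python
from math import comb

def one_sided_binomial_p(successes, trials, p=0.5):
    """P(X >= successes) for X ~ Binomial(trials, p): the one-sided
    p-value against the null that each sign match is a fair coin flip."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

print(one_sided_binomial_p(20, 20))  # 0.5**20 ≈ 9.5e-07
print(one_sided_binomial_p(15, 20))  # ≈ 0.0207
```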
Though the directions of correlations predicted by FiLex are correct, looking
at the plots shows that the ELSs do not always demonstrate the monotonicity
predicted by the model. This is especially evident in _Time Steps_ for Recon:
moving left-to-right, the plot follows a similar path to the other environments
and FiLex at first but then diverges halfway through with increasing entropy.
A possible explanation of this is that Recon allows learning new, useful words
more easily than Sig or Nav, meaning that additional training can lead to
further improvement. The conclusion we draw from these plots is that FiLex
correctly predicts a sort of baseline correlation between the hyperparameters
and entropy. Other works, for example Kharitonov et al. (2020) and Chaabouni
et al. (2021), find similar correlations between entropy and bottleneck
temperature. Nevertheless, this correlation can be overridden by the specifics
of the environment.
### 5.2 Environment variability
When looking beyond just the direction of correlation at the slopes and shapes
of the curves, the four ELSs each present a unique set of relationships between
entropy and their hyperparameters. This implies that none of these environments
are reducible to each other; that is, we cannot make observations about one
environment and automatically assume they apply to other environments.
Certainly this makes a researcher’s task harder, as learning general
principles would not be possible from a single environment. Furthermore, there
is a sensitivity to hyperparameters _within_ a given environment, which would
imply that discovering general principles within a single environment could not
be done with just a single set of hyperparameters.
Although this diversity in behavior makes modeling it more difficult, it also
shows the importance of precision we get from a mathematical model. For
example, say Recon had not been empirically tested and we wanted to predict
the lexicon size-entropy relationship in Recon. We could
simply observe the positive correlations in the other environments and predict
the same for Recon, but we could easily over-extrapolate and predict a relatively
shallow slope when Recon’s slope is relatively steep. What this paper’s model,
hypothesis, and evaluation offer in this situation is not a more detailed
prediction but a “prepackaged” prediction which is precisely stated and
supported by data.
### 5.3 Applications to future work
There are two primary ways in which FiLex can be applied in future research.
First, the model can be applied to and tested against further phenomena in
emergent language (i.e., it is _extensible_). The fact that it is formulated
mathematically means that it does not just predict correlations but
_mechanisms_ which account for the correlations. For example, FiLex’s $\beta$
hyperparameter was designed to account for _Buffer Size_ and the _Temperature_
experiment was conducted after the fact. The fact that FiLex describes both
_Buffer Size_ and _Temperature_ with the same hyperparameter suggests that
similar mechanisms account for their positive correlations with entropy. This
statement about similar mechanisms, on the other hand, is not present in a set
of one-off hypotheses about hyperparameter-entropy correlations derived from
intuition. Second, FiLex and accompanying experiments provide an easy way for
future research to discover confounding factors in their experiments. For
example, an experiment might show that entropy decreases as rewards are scaled
up, yet FiLex would suggest that this might be equivalent to simply increasing
the learning rate rather than being its own unique cause of the effect on
entropy.
### 5.4 Methodological difficulties
The greatest challenge in the methodology of this work is not the formulation
of the model but rather evaluating the quality of the model. In part, this is
on account of a lack of an established baseline model—comparative analysis
(“which is better?”) is significantly easier than absolute analysis (“how good
is this?”) yet requires an adequate baseline to compare against. But more
significantly, the granularity of experimentation is a design decision with no
obvious answer.
For example, merely comparing the signs of rank correlations is very coarse-
grained as it makes minimal assumptions about the data (e.g., linearity,
absence of outliers) and captures very little information about the data.
Naturally, it is easier to apply such an analysis, and as mentioned before,
researchers typically phrase hypotheses in terms of such correlations, but it
can only offer minimal support for the applicability of the model to the actual
system. On the other hand, evaluating the model’s ability to predict exact
behavior of the system (e.g., measuring mean squared error of the model’s
predictions) can establish a more precise link between model and system but
might miss more general but important similarities. For example, _Lexicon
Size_ for FiLex and Nav might show similar trends, but be different by a
constant, yielding a high mean squared error.
A subtle but significant methodological difficulty is the selection of
hyperparameters. In Recon’s _Time Steps_ plot, it is easy to see that changing
the range of hyperparameters could easily yield either a positive or a
negative correlation when in reality there are both. To a certain extent, this
can be resolved by choosing a “reasonable” range of hyperparameters based on
values that are typically used, but this is of little help in selecting FiLex’s
hyperparameters as there is no “typical usage.” For example, FiLex for
$\beta=1$ and $\beta=100$ yield significantly different distributions, but
there is no obvious _a priori_ reason to say that one value of $\beta$ should
be preferred over the other for comparing to the ELSs. Although additional
hyperparameters increase the range of phenomena which the model can account
for, the additional degrees of freedom can weaken the model’s predictions by
introducing confounding variables (cf. overparameterization).
One of the primary contributions of this work is to serve as a case study and
example of working with explicitly defined models in studying deep learning-
based emergent language. Thus, this paper is a starting point for future work to
improve upon. One of the most important improvements would be finding a more
rigorous way to select “reasonable” experimental hyperparameters.
Additionally, it would be better to develop the hypothesis and experimental design in
full before performing any evaluation; the process was somewhat iterative in
this paper.
## 6 Conclusion
We have presented FiLex as a mathematical model of lexicon entropy in deep
learning-based emergent language systems and demonstrated that, at the level
of correlations, it accurately predicts the behavior of our emergent language
environments. Opting for a mathematical model possesses the benefits of having
a clear interpretation, making testable predictions, and being reused for new
predictions in future studies. Although the model’s hypothesis was testable,
the process is not free from non-trivial design decisions which affect the
quality of evaluation. Nevertheless, this paper serves as a starting point and
example of how more rigorous models can be applied to the study of emergent
language.
## Acknowledgments and Disclosure of Funding
This material is based on research sponsored in part by the Air Force Research
Laboratory under agreement number FA8750-19-2-0200. The U.S. Government is
authorized to reproduce and distribute reprints for Governmental purposes
notwithstanding any copyright notation thereon. The views and conclusions
contained herein are those of the authors and should not be interpreted as
necessarily representing the official policies or endorsements, either
expressed or implied, of the Air Force Research Laboratory or the U.S.
Government.
## References
* Aldous (1985) David J. Aldous. Exchangeability and related topics. In P. L. Hennequin, editor, _École d’Été de Probabilités de Saint-Flour XIII — 1983_ , pages 1–198, Berlin, Heidelberg, 1985. Springer Berlin Heidelberg. ISBN 978-3-540-39316-0.
* Blei (2007) David Blei. The chinese restaurant process, 2007. URL https://www.cs.princeton.edu/courses/archive/fall07/cos597C/scribe/20070921.pdf.
* Bouchacourt and Baroni (2018) Diane Bouchacourt and Marco Baroni. How agents see things: On visual representations in an emergent language game. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , page 981–985, Brussels, Belgium, Oct 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1119. URL https://aclanthology.org/D18-1119.
* Brighton et al. (2005) Henry Brighton, Kenny Smith, and Simon Kirby. Language as an evolutionary system. _Physics of Life Reviews_ , 2:177–226, 2005.
* Chaabouni et al. (2020) Rahma Chaabouni, Eugene Kharitonov, Diane Bouchacourt, Emmanuel Dupoux, and Marco Baroni. Compositionality and generalization in emergent languages. _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , 2020. doi: 10.18653/v1/2020.acl-main.407. URL http://dx.doi.org/10.18653/v1/2020.acl-main.407.
* Chaabouni et al. (2021) Rahma Chaabouni, Eugene Kharitonov, Emmanuel Dupoux, and Marco Baroni. Communicating artificial neural networks develop efficient color-naming systems. _Proceedings of the National Academy of Sciences_ , 118(12), Mar 2021. ISSN 0027-8424, 1091-6490. doi: 10.1073/pnas.2016569118. URL https://www.pnas.org/content/118/12/e2016569118.
* Francis et al. (2021) David Francis, Ella Rabinovich, Farhan Samir, David Mortensen, and Suzanne Stevenson. Quantifying Cognitive Factors in Lexical Decline. _Transactions of the Association for Computational Linguistics_ , 9:1529–1545, 12 2021. ISSN 2307-387X. doi: 10.1162/tacl_a_00441. URL https://doi.org/10.1162/tacl_a_00441.
* Guo et al. (2021) Shangmin Guo, Yi Ren, Simon Kirby, Kenny Smith, Kory Wallace Mathewson, and Stefano V. Albrecht. Expressivity of Emergent Languages is a Trade-off between Contextual Complexity and Unpredictability. September 2021. URL https://openreview.net/forum?id=WxuE_JWxjkW.
* Jang et al. (2017) Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In _Proceedings of the 2017 International Conference on Learning Representations (ICLR)_ , 2017. URL https://openreview.net/forum?id=rkE3y85ee.
* Kendall (1938) M. G. Kendall. A new measure of rank correlation. _Biometrika_ , 30(1-2):81–93, 06 1938. ISSN 0006-3444. doi: 10.1093/biomet/30.1-2.81. URL https://doi.org/10.1093/biomet/30.1-2.81.
* Kharitonov et al. (2020) Eugene Kharitonov, Rahma Chaabouni, Diane Bouchacourt, and Marco Baroni. Entropy minimization in emergent languages. In Hal Daumé III and Aarti Singh, editors, _Proceedings of the 37th International Conference on Machine Learning_ , volume 119 of _Proceedings of Machine Learning Research_ , pages 5220–5230. PMLR, 13–18 Jul 2020. URL http://proceedings.mlr.press/v119/kharitonov20a.html.
* Khomtchouk and Sudhakaran (2018) Bohdan Khomtchouk and Shyam Sudhakaran. Modeling natural language emergence with integral transform theory and reinforcement learning. _arXiv:1812.01431 [cs]_ , Nov 2018. URL http://arxiv.org/abs/1812.01431. arXiv: 1812.01431.
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In _ICLR (Poster)_ , 2015. URL http://arxiv.org/abs/1412.6980.
* Kirby et al. (2015) Simon Kirby, Monica Tamariz, Hannah Cornish, and Kenny Smith. Compression and communication in the cultural evolution of linguistic structure. _Cognition_ , 141:87–102, 2015. ISSN 0010-0277. doi: https://doi.org/10.1016/j.cognition.2015.03.016.
* Lazaridou and Baroni (2020) Angeliki Lazaridou and Marco Baroni. Emergent multi-agent communication in the deep learning era. _arXiv:2006.02419 [cs]_ , Jul 2020. URL http://arxiv.org/abs/2006.02419. arXiv: 2006.02419.
* Lazaridou et al. (2017) Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. Multi-agent cooperation and the emergence of (natural) language, 2017. URL https://openreview.net/forum?id=Hk8N3Sclg.
* Lewis (1970) David Lewis. _Convention: A Philosophical Study_. 1970.
* Maddison et al. (2017) Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. In _Proceedings of the 2017 International Conference on Learning Representations (ICLR)_ , 2017. URL https://openreview.net/forum?id=S1jE5L5gl.
* Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems 32_ , pages 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.
* Raffin et al. (2019) Antonin Raffin, Ashley Hill, Maximilian Ernestus, Adam Gleave, Anssi Kanervisto, and Noah Dormann. Stable baselines3. https://github.com/DLR-RM/stable-baselines3, 2019.
* Resnick et al. (2020) Cinjon Resnick, Abhinav Gupta, Jakob Foerster, Andrew M. Dai, and Kyunghyun Cho. Capacity, Bandwidth, and Compositionality in Emergent Language Learning. _International Conference on Autonomous Agents and Multi-Agent Systems_ , April 2020. URL http://arxiv.org/abs/1910.11424.
* Rita et al. (2022) Mathieu Rita, Florian Strub, Jean-Bastien Grill, Olivier Pietquin, and Emmanuel Dupoux. On the role of population heterogeneity in emergent communication. In _International Conference on Learning Representations_ , 2022. URL https://openreview.net/forum?id=5Qkd7-bZfI.
* Rodríguez Luna et al. (2020) Diana Rodríguez Luna, Edoardo Maria Ponti, Dieuwke Hupkes, and Elia Bruni. Internal and external pressures on language emergence: least effort, object constancy and frequency. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , page 4428–4437, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.397. URL https://www.aclweb.org/anthology/2020.findings-emnlp.397.
* Schulman et al. (2017) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. _arXiv preprint arXiv:1707.06347_ , 2017.
* Skyrms (2010) Brian Skyrms. _Signals: Evolution, learning, and information_. OUP Oxford, 2010.
* Słowik et al. (2020) Agnieszka Słowik, Abhinav Gupta, William L. Hamilton, M. Jamnik, S. Holden, and C. Pal. Exploring Structural Inductive Biases in Emergent Communication. 2020. URL https://www.semanticscholar.org/paper/Exploring-Structural-Inductive-Biases-in-Emergent-S%C5%82owik-Gupta/29d1adb458d5b5a0fc837d37af01a6673efd531c.
* Williams (1992) Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. _Machine learning_ , 8(3):229–256, 1992.
## Appendix A Emergent language system illustration
Figure 2: (a) The receiver (pictured) is rewarded for moving towards the goal
region at the center in the Nav environment. (b) The agent architecture for Nav.
## Appendix B Experiment parameters
Each experiment uses a logarithmic sweep across hyperparameters; the sweep is
defined by Equation 9, where $x$ and $y$ are the inclusive lower and upper
bounds respectively and $n$ is the number of steps to divide the interval into.
The floor function is applied if the elements must be integers.
$\text{LS}(x,y,n)=\left\{x\cdot\left(\frac{y}{x}\right)^{\frac{i}{n-1}}\,\middle|\,i\in\{0,1,\dots,n-1\}\right\}\qquad(9)$
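Equation 9 translates directly into code; a minimal sketch (the function name and the `integers` flag are ours):

```python
def log_sweep(x, y, n, integers=False):
    """Equation 9: n logarithmically spaced values from x to y inclusive.
    With integers=True, int() truncates, which equals the floor for the
    positive values used here."""
    vals = [x * (y / x) ** (i / (n - 1)) for i in range(n)]
    return [int(v) for v in vals] if integers else vals

print(log_sweep(1, 100, 3))  # values 1, 10, 100 up to float rounding
```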
Hyperparameter | Default | Low | High | Steps
---|---|---|---|---
$N$ | $10^{3}$ | $10^{0}$ | $10^{3}$ | $1000$
$S$ | $2^{6}$ | $2^{3}$ | $2^{8}$ | $1000$
$\alpha$ | $1$ | $10^{-3}$ | $10^{3}$ | $1000$
$\beta$ | $8$ | $10^{0}$ | $10^{3}$ | $1000$
Table 3: Hyperparameters for the empirical evaluation of FiLex. “Low” and “High” refer to the logarithmic sweep used for that experiment; default values are used for all other experiments.
Hyperparameter | Default | Low | High | Steps
---|---|---|---|---
Time steps | $2\cdot 10^{5}$ | $10^{2}$ | $10^{6}$ | $600$
Bottleneck size | $2^{6}$ | $2^{3}$ | $2^{8}$ | $600$
Learning rate | $3\cdot 10^{-3}$ | $10^{-4}$ | $10^{-1}$ | $600$
Buffer size | $2^{8}$ | $2^{3}$ | $2^{10}$ | $600$
Temperature | $1.5$ | $10^{-1}$ | $10^{1}$ | $600$
Table 4: Hyperparameters for the empirical evaluation of the ELSs. “Low” and
“High” refer to the logarithmic sweep used for that experiment; default values
are used for all other experiments. Please see the code for further details and
default values.
# A Survey of Relevant Text Mining Technology
Claudia Peersman, Matthew Edwards, Emma Williams, & Awais Rashid
## Introduction
In recent years, Darknets and other environments offering anonymity are
becoming increasingly popular among cybercriminals with a high degree of
computer literacy and forensic awareness. Additionally, the emergence of
online cybercriminal communities is enhancing the “normalisation” of
cybercrime, providing offenders with technical and security support [1].
Although none of the existing anonymisation techniques (e.g., the Tor service;
there are many legitimate and ethical uses of anonymisation techniques such as
Tor, which we do not debate here; see [2, 3, 4]) is entirely bulletproof,
they can easily complicate or even block current cybercrime investigations. In
such cases, the communications produced on social media platforms (both
regular and cybercriminal fora) can be one of few clues to an offender’s
identity [5]. Additionally, investigating such social interactions can
contribute to a better understanding of the dynamics leading to initial
engagement in cybercrime, continued careers and (potentially) retirement.
Recent advances in text mining and natural language processing technology have
enabled researchers to detect an author’s identity or demographic
characteristics, such as age and gender, in several text genres by
automatically analysing the variation of linguistic characteristics. However,
applying such techniques “in the wild” [6], i.e., in both cybercriminal and
regular online social media, differs from more general applications in that
its defining characteristics are both domain and process dependent. This gives
rise to a number of challenges of which contemporary research has only
scratched the surface. More specifically, a text mining approach applied to
social media communications typically has no control over the dataset size –
the number of available communications will vary across users. Hence, the
system has to be robust towards limited data availability. Additionally, the
quality of the data cannot be guaranteed. As a result, the approach needs to
be tolerant to a certain degree of linguistic noise (for example,
abbreviations, non-standard language use, spelling variations and errors).
Finally, in the context of cybercriminal fora, it has to be robust towards
deceptive or adversarial behaviour, i.e. offenders who attempt to hide their
criminal intentions (obfuscation) or who assume a false digital persona
(imitation) [7], potentially using coded language.
In this work we present a comprehensive survey that discusses the problems
that have already been addressed in current literature and review potential
solutions. Additionally, we highlight which areas need to be given more
attention. In the next section, we briefly introduce the fields of text mining
and computational stylometry. Section 3 provides an overview of related work
in these fields. In section 4, we discuss outstanding challenges and present
the project’s research agenda. Finally, we conclude this survey in Section 5.
## Background
### Text Mining
With the increasing availability of large amounts of computer-mediated
communications, text mining has become a popular area of research for
automatically detecting patterns and trends in a “Big Data” set-up. Typically
designed within a Natural Language Processing (NLP) framework as a level of
information extraction from text, the main objective of a text mining approach
is to build an intelligent tool that has the capability of analysing large
amounts of natural language texts (e.g., newspaper articles, books or emails)
and extracting useful information in a timely manner. Hence, it is a step
forward from the information retrieval task, in which the best matches in a
database are calculated based on a user query, to a level of exploring the
various types of high quality knowledge that can be extracted from text.
Although this is a relatively new research area, the technology is already
being used in a wide variety of applications, such as biomedical applications
(e.g., GoPubMed (http://www.gopubmed.com/web/gopubmed/), a knowledge-based
search engine for biomedical texts), business and marketing applications
(e.g., stock return prediction [8]), security applications (e.g., automatic
monitoring of Internet news, blogs and social media [9]) and academic
applications (e.g., academic publishers making their papers available for text
mining purposes).
A text mining approach typically involves the following six steps:

1. A dataset of text documents relevant to the task at hand is collected.
2. Each document is pre-processed. More specifically, the data is converted to the desired format, split up into individual words and punctuation marks (i.e., tokenised) and processed to remove content undesirable for the task in question (e.g., hyperlinks, non-standard word forms, redundancies and stop words).
3. The documents are transformed from the full text version to a vector space model that represents the different sets of linguistic features present in each document (e.g., words, characters, Parts-of-Speech and semantic roles).
4. Statistical techniques are applied to determine which features are most informative for the task at hand. Usually, non-discriminative features are discarded by the system to reduce the dimensionality of the dataset.
5. The resulting structured database is analysed using either automatic classification or clustering techniques that are also used in data mining. In most cases, analysis happens using machine learning or statistical algorithms.
6. The output of the previous step is evaluated and can be stored or used in a series of subsequent text mining experiments.
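The steps above can be sketched end-to-end. This toy, standard-library-only example collapses steps 4 and 5 into a nearest-centroid classifier with no feature selection; every document, label, and stop word below is invented purely for illustration.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "in", "is", "of", "to", "and"}  # illustrative

def preprocess(doc):
    """Step 2: lowercase, tokenise, drop stop words."""
    return [t for t in re.findall(r"[a-z']+", doc.lower()) if t not in STOP_WORDS]

def vectorise(tokens, vocab):
    """Step 3: bag-of-words vector over a fixed vocabulary."""
    counts = Counter(tokens)
    return [counts[w] for w in vocab]

def train_centroids(docs, labels):
    """Steps 4-5, heavily simplified: build a vocabulary and one mean
    vector per class (no feature selection here)."""
    token_lists = [preprocess(d) for d in docs]
    vocab = sorted({t for toks in token_lists for t in toks})
    centroids = {}
    for label in set(labels):
        vecs = [vectorise(toks, vocab)
                for toks, l in zip(token_lists, labels) if l == label]
        centroids[label] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return vocab, centroids

def classify(doc, vocab, centroids):
    """Assign the class whose centroid is nearest in squared distance."""
    v = vectorise(preprocess(doc), vocab)
    return min(centroids,
               key=lambda lb: sum((a - b) ** 2 for a, b in zip(v, centroids[lb])))

docs = ["buy cheap stocks now", "stocks fell sharply today",
        "new phishing scam targets banks", "malware found in email attachment"]
labels = ["finance", "finance", "security", "security"]
vocab, centroids = train_centroids(docs, labels)
print(classify("stocks rose today", vocab, centroids))  # → finance
```

A production system would replace each step with more capable components (language-aware tokenisers, TF-IDF or embedding features, statistical feature selection, and a trained classifier), but the pipeline shape stays the same.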
Figure 1 shows a standard text mining process flow. The next section
introduces the emerging research field of computational stylometry, in which
the relation between natural language and its users is typically studied by
adopting such a text mining framework.
Figure 1: A standard text mining process flow.
### Computational Stylometry
Language is a social phenomenon and language variation is, as a consequence,
innate to its social nature. By selecting between a range of potential
variations of language use, people construct a social identity, which can be
employed to achieve certain social goals. In other words, language users can
make use of specific language varieties to represent themselves in a certain
way. This freedom of choice, which is shaped by a series of both consciously
and unconsciously made linguistic decisions, is often referred to as speaker
agency. Such variation can be manifested at various levels of language use,
for example, the choice between different words, phonological variants or
grammatical alternatives, and is typically influenced by a speaker’s
(intended) audience, demographic variables (such as age group, gender or
background) and objectives (e.g., knowledge transfer, persuasion or
likeability). Stylometry studies are mainly based on the hypothesis that the
combination of these (un)conscious linguistic decisions is unique for each
individual — like a fingerprint or a DNA profile [10, 11] — and that language
users (i.e., both speakers and authors) can be identified by analysing the
specific properties of their linguistic choices. The idea of such a human
stylome [10] can be dated as far back as the mediaeval scholastics.
In modern times, the approach of analysing a text on different linguistic
levels to determine its authorship was adopted within research fields such as
stylistics and literary criticism, one of the most prominent examples being
the investigations into the literary works attributed to Shakespeare [12].
This type of research is commonly referred to as traditional authorship
attribution and typically involves in-depth reading by human experts. However,
in the late 19th century, a new line of research demonstrated that an author’s
writing style could be quantified. A study by [13], for example, showed that
the authorship of the Federalist Papers could be settled by comparing the
distribution of stop words (or function words) in the disputed texts to other
texts written by the three candidate authors (Alexander Hamilton, James
Madison and John Jay).
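A stop-word (function-word) distribution of the kind used in that study can be computed in a few lines; the word list below is illustrative, not the list actually used in [13].

```python
import re
from collections import Counter

# Illustrative function-word list; not the one used in the original study.
FUNCTION_WORDS = ["the", "of", "to", "and", "in", "that", "by", "on", "upon", "while"]

def function_word_profile(text):
    """Frequency of each function word per 1,000 tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return [1000 * counts[w] / len(tokens) for w in FUNCTION_WORDS]

profile = function_word_profile("The powers of the union are delegated to the states.")
print(profile[0])  # rate of "the": 3 occurrences in 10 tokens → 300.0
```

A disputed text's profile can then be compared to each candidate author's profile with any vector distance; rates of content-free markers like these are hard for an author to consciously control, which is what makes them useful for attribution.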
The arrival of modern computational methods and the emergence of the Internet
have instigated a new research area, combining insights from the field of
stylometry with techniques commonly used in computer science. Based on the
assumption that authors can be distinguished by their stylome, non-traditional
authorship attribution typically focuses on developing a computational model
that can automatically identify the author of a given text. The dominant
approach in these studies is typically based on text mining methods, which are
used to automatically attribute one or more predefined thematic categories —
such as authors — to a set of natural language texts (e.g., books, papers or
emails); the history and background of authorship attribution studies can be
found in [14]. Recently, a significant part of the field has shifted focus
from attributing texts to specific authors to investigating whether certain
aspects in an author’s writing style can be generalised for larger author
groups belonging to, for example, the same age or gender group or showing
similar personality traits (e.g., outgoing or withdrawn). Together with non-
traditional authorship attribution, such author profiling studies constitute
the rapidly developing field of computational stylometry.
## Related Work
### Automatic User Profiling
A large body of work already exists on detecting a user’s age and gender based
on computer-mediated communications. At first, most studies that involved a
computational stylometry approach to automatically predict users’ demographics
were based on large collections of blogs (e.g., [15, 16, 17, 18, 19, 20, 21,
22]). The main advantage of using blog corpora is that blog sites are publicly
available and they usually contain information about the blogger’s profile. In
one such study, Schler et al. [23] applied a text mining approach to predict
gender in a corpus of over 71,000 English blogs. Based on stylistic features
(non-dictionary words, parts-of-speech, function words and hyper-links) and
content features (content words444Content words carry the primary
communicative message of an utterance (e.g., nouns, adjectives, verbs and
adverbs). with the highest Information Gain), they found that, despite the
strong stereotypical differences in content between male and female bloggers,
stylistic differences proved to be more discriminative than content
differences [23]. However, combining both feature types, they were able to
obtain an accuracy of 80.1% when distinguishing between male and female
bloggers.
With regard to age (group) identification, content words showed to be slightly
more useful than the style-based features, but again combining them rendered
the best results [23]: 10s (13–17) were distinguishable from 30s (33–42) with
accuracy above 96% and differentiating between 10s and 20s (23–27) was
achieved with an accuracy of 87.3%. However, many 30s were wrongly classified
as 20s, which rendered an overall accuracy of 76.2%. This resulted in an
F-score of 0.86 for the 10s, 0.75 for the 20s and 0.52 for the 30s
category (these scores were calculated based on the confusion matrix in the
paper). Yan and Yan [22] were the first to include “non-traditional” features
in their experiments, such as background colour, word fonts and cases,
punctuation marks and emoticons. When combining these non-traditional features
with bag-of-word features, their system achieved an F-score of 0.68 based on a
corpus of 75,000 English blog entries authored by 3,000 individual bloggers.
Interestingly, removing stop words actually decreased the
performance of their system to 0.64, which is consistent with previous
sociolinguistic studies that documented gender differences in the use of highly
frequent word classes such as pronouns, articles and prepositions (e.g., [24,
25, 26, 27, 28, 29]). Similar results were found for age: based on the same
corpus as was described in [23], Koppel et al. [30] showed that language usage
in blogs correlates with age: pronouns and the use of both assent and negation
become scarcer with age, while prepositions and determiners become more
frequent. Their system yielded an accuracy of 76.1% for the three-way
classification problem of attributing blogs to one of three age groups: 13–17,
23–27 or 33–47 (majority baseline = 42.7%) by combining style- and
content-based features, and 80.5% for predicting gender. Goswami et al. [18] further
expanded the research of [23] by adding non-dictionary words and the average
sentence length as features. Furthermore, the stylistic difference in usage of
non-dictionary words combined with content words allowed them to predict the age
group (10s, 20s, 30s or higher) with an accuracy of 80.3% and gender with an
accuracy of 89.2%. The average sentence length, however, did not correlate
significantly with age or gender. Additionally, [31] found that female authors
were more likely to use emoticons, ellipses, character flooding, repeated
exclamation marks, puzzled punctuation (i.e., combinations of “?” and “!”),
the abbreviation “omg” (oh my god), and transcriptions of back-channels like
“ah”, “hmm”, “ugh”, and “grr”. Affirmations like “yeah” and “yea” were the
only preferences that were attributed to males. These latter features are
called — not quite accurately — “sociolinguistic features” in e.g., [31].
Finally, a number of other, non-textual features have been suggested for age
and gender prediction, such as the number of friends and followers [31, 32]
and posted images [22].
More recently, a number of studies have been based on Twitter corpora (e.g.,
[33, 34, 35, 36, 32, 31]) and other social network data (see e.g., the author
profiling tasks at PAN 2013, 2014 and 2015 [37, 38, 39]). Although the amount
of available data on Twitter is expanding massively, profile data is often
absent, which requires additional techniques to acquire such meta-data.
Contrary to blogs, tweets are typically very short, containing a maximum of
140 characters. However, most studies tend to combine multiple messages per
user and show very similar results to previous studies on weblog data. The
best results for gender prediction were achieved by Bamman et al. [36], whose
system achieved an accuracy score of 88.0% based on over 600 tweets per user.
When predicting age on a corpus of 200 Dutch tweets per user, [33] were able
to reach a 0.76 F-score when distinguishing between users younger than 20,
between 20 and 40 years old and older than 40. Binary age prediction (adults
versus adolescents), as examined in this chapter, was first performed by
Filippova [40], who investigated the performance of shallow textual features
(e.g., character counts), language models and non-textual information (e.g.,
number of friends) when identifying bloggers under and over 18. However, their
classifiers only yielded slightly better results than their majority baseline.
Finally, Rashid et al. [41] presented a set of tools for predicting age and
gender in a forensic context. By including POS, semantic and BOW features in a
hierarchical classification system, their binary age prediction
model yielded probabilities that a user belongs to a specific age band (11–18
or over 18, followed by a breakdown of the probabilities for 11–14; 15–18;
19–49; 50+; etc.), resulting in a 72.15% recall and 72.24% precision for
distinguishing between children and adults.
Aside from investigating which feature types are most effective for predicting
profile information, Zhang and Zhang [17] contributed to the field by
comparing different data representation methods, feature selection methods and
machine learning algorithms for gender prediction in 3,226 blogs (52% female),
which contained about 400 words on average. They also included 20 semantic
labels (e.g., “conversation”, “family”) as features in their instances, which
were based on lists of words appearing in a similar context (e.g., “tell”,
“talk”, “ask” belonged to the “conversation” label). Together with these word
factor analysis features, they included word unigrams, POS tags and average
word and sentence length in their experiments, but did not compare the results
of these feature types individually. Their best prediction accuracy of 72.1%
was achieved by using Information Gain as feature selection criterion, and
Support Vector Machines (linear kernel) as machine learning algorithm. Based
on a corpus of 3,100 English blogs with an average post length of 250 words
for men and 330 words for women, [16] investigated which feature selection
methods were most suitable for their type of data. Their ensemble feature
selection method (EFS) improved the accuracy scores on gender attribution
significantly compared to single selection metrics, such as Information Gain
and Chi Square, by about 6-10%, resulting in a best accuracy score of 88.6%.
Although this EFS method showed promising results, its application in age
and/or gender attribution remains limited to [16]. The reason for this could
be that building a new classifier for each subset remains very time-consuming
when working with a large number of features.
Contrary to research on age and gender prediction, studies on automatically
detecting a user’s region of origin in social media communications are far
less prominent in the field. Rao et al. [31] experimented with token n-grams
and the same set of sociolinguistic features that was mentioned above and were
able to distinguish between English-writing Twitter users located in either
Northern or Southern India with an accuracy score of 77.1%.
Although some of the previously mentioned studies show promising results for
user profiling in social media communications, all of these works included
text fragments ranging from 250 to several thousand words on average per
user. However, when looking at recent studies by [42, 43], these results are
subject to scalability issues when the models are applied on shorter text
fragments: Burger et al. [43] reported a significant decrease in the
performance to 66.5% when predicting gender using only a single tweet per
user. Additionally, the work of [42] specifically investigated the effect of
different aspects of experimental design, such as feature types, feature
selection, document representation and machine learning algorithms, when
performing user profiling based on only one social media posting per user in
the context of designing online child protection technology. Within the
project, the developed techniques will be evaluated for their efficiency in
analysing cyber offenders’ online messages.
### Adversarial Stylometry
Stylometry is based on the assumption that every individual has a unique
writing style and, as a result, an author can be distinguished from other
authors by measuring specific properties of his or her writings. However, most
stylometric research is also based on the assumption that authors do not
attempt to disguise their linguistic writing style. The author of [14]
discusses the importance of determining the robustness of an authorship
attribution system when it is confronted with deception. However, so far,
research into this issue has been limited. Kacmarcik and Gamon [44] were the
first to explore the possibility of computationally obfuscating the (most
likely) author of the disputed Federalist Papers (see e.g., [45]). They
attempted to hide the author’s identity by neutralising 14 of the most
informative words per thousand words in the texts. Yet, the obfuscation was
successfully detected by a technique called unmasking, which was proposed by
[46]: using a series of SVM classifiers to iteratively remove the features
that received the highest weight from the SVMs during training, they found
that, when comparing two texts that were written by two different authors, the
accuracy score slowly declined during the iterations. However, when comparing
two texts that were written by the same author of which one was
computationally modified, as in [44], they attested a steep drop of the
accuracy. This drop is explained by the fact that when comparing between texts
that are written by the same author, the number of highly discriminative
features is limited. Hence, when building a degradation curve, iteratively
removing these features typically results in sudden drops in accuracy.
However, when comparing texts that are written by different authors, the
number of discriminative features is larger, resulting in a more steadily
declining accuracy when iteratively removing subsets of these features (see
also [47]).
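The unmasking procedure of [46] can be sketched in a few lines. Everything below is illustrative rather than taken from the original paper: a synthetic word-count matrix stands in for real text chunks, and the iteration settings are arbitrary.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def unmasking_curve(X, y, iterations=4, drop_per_iter=3, cv=3):
    """Record cross-validated accuracy, then repeatedly discard the
    features the linear SVM weighted most heavily ('unmasking')."""
    active = np.arange(X.shape[1])
    curve = []
    for _ in range(iterations):
        clf = LinearSVC(dual=False)
        curve.append(cross_val_score(clf, X[:, active], y, cv=cv).mean())
        clf.fit(X[:, active], y)
        # positions of the highest-|weight| features among the active ones
        top = np.argsort(np.abs(clf.coef_[0]))[-drop_per_iter:]
        active = np.delete(active, top)
    return curve

# Toy 'chunks' from two authors: author 1 overuses a handful of words,
# so accuracy should drop steeply once those few features are removed.
rng = np.random.default_rng(0)
X = rng.poisson(3.0, size=(40, 30)).astype(float)
y = np.array([0] * 20 + [1] * 20)
X[y == 1, :4] += 4
curve = unmasking_curve(X, y)
```

A same-author comparison with one modified text would show the steep early drop described above, whereas genuinely different authors produce a more gradually declining curve.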
Contrary to these studies, however, initial work by Brennan and Greenstadt [7]
showed that including obfuscation passages written by humans resulted in a
devastating effect on the robustness of most state-of-the-art authorship
attribution methods. Moreover, in an extended study on the English Brennan-
Greenstadt corpus, which included original writings, obfuscated, and imitation
passages of 45 different authors, the work of [48] stated that including
obfuscation passages resulted in a decrease in precision for authorship
attribution from over 80% to less than 10% when training on data from forty
different authors. With regard to detecting imitations of literary writings
(or pastiches), Dinu and Nisioi [49] reported that using frequency rankings of
stop words as features showed promising results when trying to distinguish
between the Romanian novelist Caragiale’s writings and authors who had
attempted to imitate his writing style after his death. However, research by
[48] reported a precision of less than 5% when including imitation passages
from the English Brennan-Greenstadt corpus.
Although the work of [50, 51] confirmed the fact that identifying the author
of such deceptively written texts is extremely difficult, both Afroz et al.
[48] and Juola [52] found that the authors’ intent to deceive or hide their
identity is detectable. On the one hand, Afroz et al. [48] reported that, in
imitated passages, the usage of personal pronouns and particles increased,
while the usage of adjectives decreased. They also noticed an increased use of
existential “there”, adverbs, particles and personal pronouns in obfuscated
passages, but a decrease in the usage of nouns and wh-pronouns. Finally, they
noticed that authors tend to “dumb down” their writing style by using shorter
sentences, simpler words with fewer syllables, lower readability scores and
higher readability ease and that changing function words seemed to be an
important way to obfuscate a text. On the other hand, using the Java Graphical
Authorship Attribution Program (JGAAP) software package, [52] was able to
identify five of six “deceptive” documents (83%) and 22 out of 28 “honest”
writings (79%) in the Brennan-Greenstadt corpus. He concluded that the
attempts of people to write “differently” could be fit into a recognisable and
distinctive stylistic pattern.
Yet, the research described above still shows a number of limitations in the
context of analysing cybercriminal social media communications: (a) some of
the mentioned studies were performed on computationally modified data instead
of on deceptive texts written by (untrained) human beings; (b) all studies
were performed on a minimum of 500 words per author; and (c) they only
included formal text genres.
## Towards Automatic User Profiling in Cybercriminal Communities
### Challenges
Three aspects are essential for designing a real-life computational stylometry
application that can be used to support digital forensic investigations
pertaining to cybercrime, namely: (i) dealing with linguistically noisy texts,
(ii) sparse, skewed “big data” analysis and (iii) detecting adversarial text
samples.
Noisy data. The increased level of immediacy in computer-mediated
communication (CMC) has led to the rise of a new “glocal” language variety,
which combines characteristics of a global Internet language with local
varieties and has produced a wild proliferation of new linguistic forms
(e.g., Internet abbreviations, acronyms, character flooding, concatenations
and emoticons) [53, 54]. The
presence of such linguistic noise is said to provide a significant challenge
for text mining research, because many off-the-shelf NLP tools fail to
correctly analyse this anomalous input. Previous work on age and gender
detection in CMC discards all non-standard language varieties (e.g., [23, 22,
21]) or normalises them to improve feature extraction procedures (e.g., [55]).
However, previous work in spoken discourse studies has observed strong
correlations between the use of non-standard language and sociological
variables such as age and gender (e.g., [56, 57, 58, 59, 60, 61, 62, 63, 64,
65, 66, 67, 68]). Additionally, linguistic noise can also be used as an
adversarial tactic by cybercriminals to avoid detection [42].
Sparse, skewed Big Data. Most documents only contain a small percentage of the
total number of features present in the dataset. Because of their limited
length, they provide great challenges for standard text mining approaches that
rely on word frequencies, word co-occurrences or shared context to determine
the similarity between documents. Additionally, in many cases the focus lies
on detecting the minority class and, hence, the number of useful instances is
limited. Standard practice in a wider text mining context is to increase the
data in each sample by, e.g., grouping multiple text fragments written by the
same author ([23, 17, 33]) or by incorporating additional word level concept
information obtained from external sources, such as pre-trained word
embeddings, WordNet, concept annotations or snippets produced by public search
engines (e.g., [69, 70, 71]). However, for a real-life application, it is
essential that a cyber offender profiling and detection approach is able to
achieve a reliable performance, even when confronted with limited data
availability. Furthermore, in a digital forensics context, it would be
inconceivable to combine evidence with external content that was not produced
by the person under investigation.
Adversaries. Contemporary computational stylometry research typically focuses
on two aspects: identifying and extracting linguistic features that are
potentially discriminative for an author’s writing print (or stylome [10]) and
developing an efficient computational model that includes these features to
automatically determine an author’s identity or demographics. Although a range
of feature types and computational methods have been suggested for the task,
the field is dominated by studies that evaluate their computational stylometry
approaches on non-deceptive datasets. However, a key issue when designing a
computational stylometry approach to be used in cybercrime investigations is
whether it will remain useful when it is confronted with adversarial
behaviour. Cyber criminals may try to hide behind multiple digital personas or
a group of offenders can share a single online identity. Additionally, they
might attempt to hide their true identity or imitate other (non-criminal)
users, and use specialised vocabulary or coded language to conceal the nature
of their activities (for example, illegal drug traffickers have been reported
to use widely varied terminology for selling their products [72]).
### Research Agenda
In this section, we discuss the advances needed to study cybercriminal careers
at scale. We focus on two key dimensions: (1) datasets that provide an
empirically-grounded basis for uncovering identifying characteristics of cyber
offenders; (2) technological advances for designing a text mining approach to
automatically detect cybercriminal demographics and assessing such approaches
for their robustness when applied in the wild and in adversarial settings of
cybercriminal fora.
#### Datasets
To support any text mining methodology, large and diverse datasets (Big Data)
are required in order to study cybercriminal careers at scale. To our
knowledge, the only significant longitudinal coverage of many cybercriminal
communities is the dataset collected by [73]. This dataset consists of
longitudinal observations of some 89 darknet marketplaces (crypto-markets
using Bitcoin and other cryptocurrencies in escrow systems accessible as
hidden services within the Tor network) and 37 forums over a period covering
roughly 2013 to 2015, dependent on the marketplace accessibility. This large
dataset, over 1.5TB of web pages and associated resources, contains within it
a wealth of business and social interactions between criminals, covering
several adverse events which had impact on the community, along with market
listings, reputation indicators and other activity information and identity
markers which are valuable for understanding the characteristics of offenders
and their motivations. However, there are a number of limitations to this
dataset, largely related to the incompleteness or partial inconsistency of
particular crawling results. For example, pages may be missing from a given
snapshot of a market on a particular date, due to scraper errors or
connectivity issues with the market itself. These limitations can be partially
addressed through the redundancy inherent in the longitudinal nature of the
scraping process, inferring the approximate extent of missing data for a given
site snapshot from prior and future observations of the same site, and
translating this into imputed adjustments of the impact assessment measures.
Other limitations include the comparatively smaller populations using darknet
marketplaces for cyber-dependent crime as opposed to, e.g., illegal drug
trade. There are, of course, other issues with the trustworthiness of
observations from within the scrapes (e.g., vendors using shills,
counterintelligence efforts, scams), which hold implications for any analysis.
Finally, no (gold standard) information is available about the demographics of
the users who are represented in the dataset.
For the purposes of the approach presented in the project, a new dataset with
broad coverage is being established by building a list of target
criminal communities and collecting both archive and current data covering
online communications between cyber criminals and adjacent participants, over
as broad a period as possible in each case. Web-crawling technology is being
deployed to unobtrusively collect online forum history, while conventional
text-logging systems are used to monitor chat channels over a defined
interval. This data will provide up-to-date information on the online language
use of different types of cyber offenders, fuelling both the project’s
qualitative research objectives and text mining analyses and tool development.
However, while this approach to corpus development would generate material
suitable for unsupervised learning approaches (see below), it does not provide
“ground truth” for the testing of user profiling models developed by the
research team.
Therefore, performance is being (partially) established through first
establishing a baseline in similar non-criminal data for which ground-truth
demographics are available:
* The SMS-AP-18 corpus. This corpus was used for the first shared task on
Multilingual Author Profiling on SMS (English/Urdu) [74]. It consists of 810
user profiles together with their age metadata (15–19, 20–24, 25–xx) and
gender.
* The PAN corpus (2012–2017). This dataset contains different corpora
collected from the author profiling tasks at PAN 2013, 2014 and 2015 [37, 38,
39] and covers three online media genres (blogs, Twitter feeds and unspecified
social media postings). All corpora used contain metadata about gender and age
group (13–17, 23–27 and 33–47).
This will be followed by transfer learning and evaluation on smaller validated
datasets established from criminal prosecutions or doxxing events, in which
(anonymous) users are linked to their actual offline identity, drawn from
criminal platforms themselves. We expect that such datasets will be too
limited for training our systems, but will be valuable for evaluating the
robustness of our approach when applied in the wild.
#### Technological Advances
Computational Stylometry. Our approach to this computational stylometry task
is based on text categorisation, and involves the creation of document
representations based on a selected set of (patterns) of linguistic features,
feature selection using statistical techniques, and classification using
machine learning algorithms (see Section 2). Contrary to previous research
that mainly focused on predicting authors’ demographics based on large, formal
text samples, we will perform a systematic study of different aspects of
methodological design incorporated in a state-of-the-art user profiling
approach to assess its robustness to highly sparse, skewed and noisy text
data, i.e., performing user profiling “in the wild”.
Based on the results of this systematic study, we will investigate different
strategies to boost the performance of automatic user profiling using only a
single message per user. Because, in the context of a cybercrime
investigation, combining evidence with external content that was not
produced by the suspect (for example, information from Wikipedia, WordNet or
Internet search engines, as is typically included in semantic and
semi-supervised classification approaches) would be out of the question,
the following strategies will be explored:
1. a systematic study of different aspects of methodological design incorporated
in a state-of-the-art user profiling approach to assess its robustness to
highly sparse, skewed, noisy and adversarial text data, i.e., performing user
profiling in the wild;
2. novel feature engineering methods in which different feature types are
extracted in parallel and the resulting vectors are concatenated into larger
vectors to create complex models; and
3. a cross-task classification approach in which the meta-data for gender or
location is included in the experiments in order to investigate their effect
on age prediction and vice versa.
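The second strategy can be illustrated with scikit-learn, whose FeatureUnion extracts feature types in parallel and concatenates the resulting vectors into one larger document vector. The four-message corpus and its age labels below are entirely hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC

# Hypothetical mini-corpus (0 = adolescent, 1 = adult).
texts = ["omg thx gr8 movie!!", "The report is attached for review.",
         "lol c u l8er", "Kind regards and best wishes."]
labels = [0, 1, 0, 1]

# Word unigrams and character n-grams are extracted in parallel and
# concatenated into one larger feature vector per document.
features = FeatureUnion([
    ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 1))),
    ("char", TfidfVectorizer(analyzer="char", ngram_range=(2, 3))),
])
model = Pipeline([("features", features), ("clf", LinearSVC(dual=False))])
model.fit(texts, labels)
```

Character n-grams capture the non-standard spellings and flooding discussed earlier even when word-level features are too sparse on a single short message.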
Additionally, we aim to investigate whether a _message-based_ approach, in
which predictions are made on the level of the individual post and aggregated
to the user level in post-processing, enables the system to identify different
offender characteristics more accurately than the traditional _user-based_
approach that renders predictions directly on the user level. We will apply
both approaches (a) in the context of automatically detecting people trying to
imitate the writing style of another demographic group and (b) on the text
genre of short, online social media communications. So far, both these
applications have remained unexplored in the field.
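The message-based aggregation step can be sketched minimally, assuming some upstream classifier has already produced a per-message class probability; the user names and probabilities here are invented.

```python
from collections import defaultdict
from statistics import mean

def aggregate_user_predictions(message_preds, threshold=0.5):
    """Message-based profiling: each post gets its own class
    probability; posts by the same user are aggregated (here: mean
    probability) into a single user-level decision."""
    by_user = defaultdict(list)
    for user, p in message_preds:
        by_user[user].append(p)
    return {u: int(mean(ps) >= threshold) for u, ps in by_user.items()}

# Hypothetical per-message probabilities P(adult) from some classifier.
preds = [("userA", 0.9), ("userA", 0.4), ("userA", 0.7),
         ("userB", 0.2), ("userB", 0.3)]
user_labels = aggregate_user_predictions(preds)
# userA: mean 0.667 -> 1 (adult); userB: mean 0.25 -> 0
```

Other aggregation rules (majority vote, maximum probability) slot into the same post-processing step.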
Finally, the research project will also include a qualitative analysis of the
features that make up the user profiling model. Providing such a qualitative
analysis of the most discriminative linguistic features enables an evaluation
of the scalability of the approach for other researchers and allows for
methodological reflection against well-established sociolinguistic principles.
Moreover, a qualitative analysis is typically absent in most text mining
studies, as they tend to focus solely on the performance of their user
profiling models.
Topic Detection. Aside from author profiling techniques, the project will
focus on developing advanced unsupervised topic detection modules, which will
be required for automatically analysing (underground) forums and marketplaces,
allowing for automatically categorising and prioritising potential cyber
threats and offences. Contrary to most text mining approaches, in which the
training data available is pre-labelled with the required information to
perform a categorisation task (i.e., the “ground truth”), detecting topics in
adversarial communications will require an unsupervised learning approach,
because no information on the presence or absence of guarded language will be
available when analysing currently existing or newly collected datasets. As a
result, methodologies will be investigated that can reveal hidden structures,
patterns or features from unlabelled data. Potential machine learning
techniques used in unsupervised learning that could contribute to this task
include clustering techniques, Neural Networks and Formal Concept Analysis.
Clustering. In the baseline clustering approach, the similarity between
different objects is measured by using one or more similarity functions. With
regard to textual data in which the objects can be of different granularities
(e.g., documents, paragraphs, sentences or words), clustering methods have
shown promising results for e.g., browsing or organising documents and
summarising large text corpora [75]. Standard practice for vector data is to
use the K-means algorithm or Latent Dirichlet Allocation (LDA). The first
technique divides a set of text samples into k disjoint clusters, each
described by the centroid of the text samples in the cluster. The algorithm
then attempts to select centroids that minimise the within-cluster sum-of-
squares (or inertia) [76]. LDA, for its part, is a Bayesian probabilistic
model, which also assumes a collection of k clusters. The latter algorithm
could be especially useful when applied to social media communications,
because it assumes that each document instance is a mixture of a small number
of topics and that each word can be clustered into one of these topics [77].
Recent work on topic modelling over short text samples suggested incorporating
word embeddings generated by Word2Vec [78]. More specifically, short texts are
aggregated into long pseudo-texts by incorporating the semantic knowledge from
the word embeddings to boost the performance of clustering algorithms [79]. A
second challenge is that most clustering techniques depend on a predefined (k)
number of clusters. Therefore, agglomerative clustering algorithms will be
investigated to determine the hierarchy of all topic clusters present in the
dataset. More specifically, by using a bottom-up approach, in which each
instance initiates its own cluster and clusters are merged together using a
linkage criterion (for example, Ward’s algorithm [80]), hierarchical clustering
can be achieved and represented in a tree structure or dendrogram for further
analysis of the number of different topics present in the data.
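Both the flat (K-means) and bottom-up (Ward) clustering steps described above are available off the shelf in scikit-learn; the two well-separated "topic" groups below are synthetic stand-ins for document vectors.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

rng = np.random.default_rng(1)
# Toy 'document vectors': two well-separated topic groups of 10 each.
docs = np.vstack([rng.normal(0.0, 0.3, (10, 5)),
                  rng.normal(3.0, 0.3, (10, 5))])

# K-means: k disjoint clusters, centroids chosen to minimise the
# within-cluster sum-of-squares (inertia).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(docs)

# Bottom-up agglomerative clustering with Ward linkage: each instance
# starts as its own cluster and clusters are merged pairwise.
agg = AgglomerativeClustering(n_clusters=2, linkage="ward").fit(docs)
```

The same Ward merge hierarchy can be drawn as a dendrogram with `scipy.cluster.hierarchy.linkage` and `dendrogram` to inspect the number of topics present.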
Neural Networks. Artificial neural networks can mainly be distinguished from
other methods by their inclusion of one or more non-linear (or hidden) layers
between the input and the output layer during the analysis. The input layer
consists of a set of neurons, which represent the features of each instance.
Next, each neuron in the hidden layer transforms the values from the previous
layer with a weighted linear summation followed by a non-linear activation
function. The output layer then analyses the values from the last hidden layer
and produces a decision. In most cases, a supervised learning technique called
Backpropagation is used during training, which runs a “forward pass” to
compute all the activations throughout the neural network and determine the
degree in which each node in each layer contributes to any errors in the
output of the system [81, 82]. In recent work by [83], neural topic models
have been presented that show sparse topic distributions similar to those found with
traditional Dirichlet-Multinomial models on larger text samples, such as song
lyrics or news articles. The advantage of recurrent neural networks,
specifically, with regard to the task at hand is their ability to model
sequences of unbounded length and, when combined with variational inference
methods, they allow the number of topics to dynamically increase [83].
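The forward pass described above (a weighted linear summation in the hidden layer, a non-linear activation, then an output decision) can be written out directly; the layer sizes and random weights are illustrative only.

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One forward pass of a single-hidden-layer network."""
    h = np.maximum(0.0, W1 @ x + b1)            # hidden layer: linear sum + ReLU
    logits = W2 @ h + b2                        # output layer
    return np.exp(logits) / np.exp(logits).sum()  # softmax over the classes

rng = np.random.default_rng(0)
x = rng.normal(size=4)                          # features of one instance
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # 4 inputs -> 8 hidden units
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)   # 8 hidden units -> 2 classes
probs = forward(x, W1, b1, W2, b2)
```

Backpropagation then runs this pass forward, measures the output error, and distributes it backwards through the same weights to update them.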
Formal Concept Analysis. FCA provides a well-founded mathematical framework
for organising a set of objects based on their shared features without
including any knowledge about the objects. In the context of automatic topic
modelling in social media communications, the technique can be used to create
thematically-based and cohesive clusters. The key advantage of FCA is that no
prior knowledge of the data is required for its computation, which enables
researchers to overcome typical topic detection problems, such as unknown
topic distribution and the appearance of new topics. The technique has already
been successfully applied on police data for detecting radicalisation and
child sex offender grooming [84].
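A brute-force sketch of FCA on a toy formal context (three invented forum posts as objects, three terms as attributes): each formal concept is a pair (extent, intent) in which the set of objects and the set of shared attributes determine one another.

```python
from itertools import combinations

# Hypothetical context: which terms occur in which posts.
context = {
    "post1": {"crypto", "escrow"},
    "post2": {"crypto", "vendor"},
    "post3": {"escrow", "vendor"},
}
attributes = {"crypto", "escrow", "vendor"}

def extent(attrs):   # objects possessing every attribute in attrs
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):    # attributes shared by every object in objs
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

# Closing every attribute subset yields every formal concept.
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        A = extent(set(combo))
        concepts.add((frozenset(A), frozenset(intent(A))))
```

Ordering the resulting concepts by extent inclusion gives the concept lattice, whose nodes are exactly the thematically cohesive clusters mentioned above; practical FCA tools replace this exponential enumeration with dedicated algorithms.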
## Conclusions
Online interactions between cybercriminals are a valuable lens into the
underlying nature of offenders, which in turn is necessary grounding for any
preventative or disruptive intervention. Prior work has demonstrated some of
the insight that studies of cybercriminal communities might have to offer [85,
86, 87]. In this review, we described the related work, open challenges and
requirements regarding computational assessment of cyber offenders, their
identifying characteristics and their behaviours. These challenges and
requirements underpin our research on analysing cybercriminal careers at
scale. Within our on-going work in the project, we are focusing on developing
a novel text mining approach for automatic user profiling and topic detection
under the complex conditions of linguistically noisy, highly sparse,
adversarial datasets, and on evaluating its forensic readiness when applied in the
wild. Aside from addressing policy-guiding research questions, these new
methodologies can be refined according to guidelines and feedback from law
enforcement, leading to software tools that can support their investigations
into cybercrime.
## References
* Jeney [2015] P. Jeney, Combatting child sexual abuse online, http://www.europarl.europa.eu/RegData/etudes/STUD/2015/536481/IPOL_STU(2015)536481_EN.pdf, 2015.
* Troncoso [2019] C. Troncoso, Privacy & online rights knowledge area (2019).
* Dingledine et al. [2004] R. Dingledine, N. Mathewson, P. Syverson, Tor: The second-generation onion router, Technical Report, Naval Research Lab Washington DC, 2004.
* Reed et al. [1998] M. G. Reed, P. F. Syverson, D. M. Goldschlag, Anonymous connections and onion routing, IEEE Journal on Selected areas in Communications 16 (1998) 482–494.
* Rocha et al. [2017] A. Rocha, W. J. Scheirer, C. W. Forstall, T. Cavalcante, A. Theophilo, B. Shen, A. R. Carvalho, E. Stamatatos, Authorship attribution for social media forensics, IEEE Transactions on Information Forensics and Security 12 (2017) 5–33.
* Koppel et al. [2011] M. Koppel, J. Schler, S. Argamon, Authorship attribution in the wild, Language Resources and Evaluation 45 (2011) 83–94.
* Brennan and Greenstadt [2009] M. R. Brennan, R. Greenstadt, Practical attacks against authorship recognition techniques., in: IAAI.
* Gálvez and Gravano [2017] R. H. Gálvez, A. Gravano, Assessing the usefulness of online message board mining in automatic stock prediction systems, Journal of Computational Science 19 (2017) 43–56.
* Zanasi [2009] A. Zanasi, Virtual weapons for real wars: Text mining for national security, in: Proceedings of the International Workshop on Computational Intelligence in Security for Information Systems CISIS’08, Springer, pp. 53–60.
* Van Halteren et al. [2005] H. Van Halteren, H. Baayen, F. Tweedie, M. Haverkort, A. Neijt, New machine learning methods demonstrate the existence of a human stylome, Journal of Quantitative Linguistics 12 (2005) 65–77.
* Daelemans [2013] W. Daelemans, Explanation in computational stylometry, in: International Conference on Intelligent Text Processing and Computational Linguistics, Springer, pp. 451–462.
* Dobson [1992] M. Dobson, The Making of the National Poet: Shakespeare, Adaptation and Authorship, 1660-1769: Shakespeare, Adaptation and Authorship, 1660-1769, Clarendon Press, 1992\.
* Mosteller and Wallace [1964] F. Mosteller, D. Wallace, Inference and disputed authorship: The Federalist, Addison-Wesley, 1964.
* Juola [2008] P. Juola, Authorship attribution, Foundations and Trends® in Information Retrieval 1 (2008) 233–334.
* Sarawgi et al. [2011] R. Sarawgi, K. Gajulapalli, Y. Choi, Gender attribution: tracing stylometric evidence beyond topic and genre, in: Proceedings of the Fifteenth Conference on Computational Natural Language Learning, Association for Computational Linguistics, pp. 78–86.
* Mukherjee and Liu [2010] A. Mukherjee, B. Liu, Improving gender classification of blog authors, in: Proceedings of the 2010 conference on Empirical Methods in natural Language Processing, Association for Computational Linguistics, pp. 207–217.
* Zhang and Zhang [2010] C. Zhang, P. Zhang, Predicting gender from blog posts, University of Massachussetts Amherst, USA (2010).
* Goswami et al. [2009] S. Goswami, S. Sarkar, M. Rustagi, Stylometric analysis of bloggers’ age and gender, in: Third International AAAI Conference on Weblogs and Social Media.
* Argamon et al. [2009] S. Argamon, M. Koppel, J. W. Pennebaker, J. Schler, Automatically profiling the author of an anonymous text, Communications of the ACM 52 (2009) 119–123.
* Koppel et al. [2002] M. Koppel, S. Argamon, A. R. Shimoni, Automatically categorizing written texts by author gender, Literary and Linguistic Computing 17 (2002) 401–412.
* Nowson [????] S. Nowson, Identifying more bloggers: Towards large scale personality classifiation of personal weblogs, http://nowson.com/papers/NowOberICWSM07.pdf, ????
* Yan and Yan [2006] X. Yan, L. Yan, Gender classification of weblog authors., in: AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs, pp. 228–230.
* Schler et al. [2006] J. Schler, M. Koppel, S. Argamon, J. W. Pennebaker, Effects of age and gender on blogging., in: AAAI spring symposium: Computational approaches to analyzing weblogs, volume 6, pp. 199–205.
* McMillan et al. [1977] J. R. McMillan, A. K. Clifton, D. McGrath, W. S. Gale, Women’s language: Uncertainty or interpersonal sensitivity and emotionality?, Sex Roles 3 (1977) 545–559.
* Biber et al. [1998] D. Biber, S. Conrad, R. Reppen, Corpus linguistics: Investigating language structure and use, Cambridge University Press, 1998.
* Mulac et al. [2000] A. Mulac, D. R. Seibold, J. L. Farris, Female and male managers’ and professionals’ criticism giving: Differences in language use and effects, Journal of Language and Social Psychology 19 (2000) 389–415.
* Mehl and Pennebaker [2003] M. R. Mehl, J. W. Pennebaker, The sounds of social life: a psychometric analysis of students’ daily social environments and natural conversations., Journal of personality and social psychology 84 (2003) 857.
* Newman et al. [2008] M. L. Newman, C. J. Groom, L. D. Handelman, J. W. Pennebaker, Gender differences in language use: An analysis of 14,000 text samples, Discourse Processes 45 (2008) 211–236.
* Keune [2012] K. Keune, Explaining register and sociolinguistic variation in the lexicon: Corpus studies on Dutch, Ph.D. thesis, Radboud Universiteit Nijmegen, 2012.
* Koppel et al. [2009] M. Koppel, J. Schler, S. Argamon, Computational methods in authorship attribution, Journal of the Association for Information Science and Technology 60 (2009) 9–26.
* Rao et al. [2010] D. Rao, D. Yarowsky, A. Shreevats, M. Gupta, Classifying latent user attributes in twitter, in: Proceedings of the 2nd international workshop on Search and mining user-generated contents, ACM, pp. 37–44.
* Al Zamal et al. [2012] F. Al Zamal, W. Liu, D. Ruths, Homophily and latent attribute inference: Inferring latent attributes of twitter users from neighbors., ICWSM 270 (2012).
* Nguyen et al. [2013] D. Nguyen, R. Gravel, D. Trieschnigg, T. Meder, “how old do you think i am?’ a study of language and age in twitter., in: ICWSM.
* Bergsma and Van Durme [2013] S. Bergsma, B. Van Durme, Using conceptual class attributes to characterize social media users, in: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, volume 1, pp. 710–720.
* Fink et al. [2012] C. Fink, J. Kopecky, M. Morawski, Inferring gender from the content of tweets: A region specific example., in: ICWSM.
* Bamman et al. [2012] D. Bamman, J. Eisenstein, T. Schnoebelen, Gender in Twitter: Styles, stances, and social networks, CoRR abs/1210.4567 (2012).
* Rangel et al. [2013] F. Rangel, P. Rosso, M. Koppel, E. Stamatatos, G. Inches, Overview of the author profiling task at PAN 2013, in: CLEF Conference on Multilingual and Multimodal Information Access Evaluation, CELCT, pp. 352–365.
* Rangel et al. [2014] F. Rangel, P. Rosso, I. Chugur, M. Potthast, M. Trenkmann, B. Stein, B. Verhoeven, W. Daelemans, Overview of the 2nd author profiling task at PAN 2014, in: CLEF 2014 Evaluation Labs and Workshop Working Notes Papers, Sheffield, UK, 2014, pp. 1–30.
* Rangel et al. [2015] F. Rangel, P. Rosso, M. Potthast, B. Stein, W. Daelemans, Overview of the 3rd author profiling task at PAN 2015, in: CLEF, sn, p. 2015\.
* Filippova [2012] K. Filippova, User demographics and language in an implicit social network, in: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Association for Computational Linguistics, pp. 1478–1488.
* Rashid et al. [2013] A. Rashid, A. Baron, P. Rayson, C. May-Chahal, P. Greenwood, J. Walkerdine, Who am I? Analyzing digital personas in cybercrime investigations, Computer 46 (2013) 54–61.
* Peersman [2018] C. Peersman, Detecting deceptive behaviour in the wild: text mining for online child protection in the presence of noisy and adversarial social media communications, Ph.D. thesis, Lancaster University, 2018.
* Burger et al. [2011] J. D. Burger, J. Henderson, G. Kim, G. Zarrella, Discriminating gender on Twitter, in: Proceedings of the Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, pp. 1301–1309.
* Kacmarcik and Gamon [2006] G. Kacmarcik, M. Gamon, Obfuscating document stylometry to preserve author anonymity, in: Proceedings of the COLING/ACL on Main conference poster sessions, Association for Computational Linguistics, pp. 444–451.
* Oakes [2004] M. Oakes, Ant colony optimisation for stylometry: The Federalist papers, in: Proceedings of the 5th International Conference on Recent Advances in Soft Computing, pp. 86–91.
* Koppel et al. [2007] M. Koppel, J. Schler, E. Bonchek-Dokow, Measuring differentiability: Unmasking pseudonymous authors, Journal of Machine Learning Research 8 (2007) 1261–1276.
* Kestemont et al. [2012] M. Kestemont, K. Luyckx, W. Daelemans, T. Crombez, Cross-genre authorship verification using unmasking, English Studies 93 (2012) 340–356.
* Afroz et al. [2012] S. Afroz, M. Brennan, R. Greenstadt, Detecting hoaxes, frauds, and deception in writing style online, in: Security and Privacy (SP), 2012 IEEE Symposium on, IEEE, pp. 461–475.
* Dinu and Nisioi [2012] L. P. Dinu, S. Nisioi, Authorial studies using ranked lexical features., in: COLING (Demos), pp. 125–130.
* Juola and Vescovi [2010] P. Juola, D. Vescovi, Empirical evaluation of authorship obfuscation using JGAAP, in: Proceedings of the 3rd ACM workshop on Artificial Intelligence and Security, ACM, pp. 14–18.
* Juola and Vescovi [2011] P. Juola, D. Vescovi, Analyzing stylometric approaches to author obfuscation, Advances in Digital Forensics VII (2011) 115–125.
* Juola [2012] P. Juola, Detecting stylistic deception, in: Proceedings of the Workshop on Computational Approaches to Deception Detection, Association for Computational Linguistics, pp. 91–96.
* Androutsopoulos and Ziegler [2004] J. Androutsopoulos, E. Ziegler, Exploring language variation on the Internet: Regional speech in a chat community, in: Language Variation in Europe: Papers from the Second International Conference on Language Variation in Europe, ICLaVE, volume 2, pp. 99–111.
* Crystal [2001] D. Crystal, Language and the Internet, Cambridge, CUP (2001).
* Beaufort et al. [2010] R. Beaufort, S. Roekhaut, L.-A. Cougnon, C. Fairon, A hybrid rule/model-based finite-state framework for normalizing sms messages, in: Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, pp. 770–779.
* Tagliamonte [2012] S. Tagliamonte, Variationist sociolinguistics: Change, observation, interpretation, volume 40, John Wiley & Sons, 2012\.
* Wolfram [1969] W. A. Wolfram, A Sociolinguistic Description of Detroit Negro Speech. Urban Language Series, No. 5., ERIC, 1969.
* Labov [1972] W. Labov, Sociolinguistic patterns, 4, University of Pennsylvania Press, 1972\.
* Trudgill [1983] P. Trudgill, On dialect: Social and geographical perspectives, Wiley-Blackwell, 1983\.
* Milroy et al. [1994] J. Milroy, L. Milroy, S. Hartley, D. Walshaw, Glottal stops and tyneside glottalization: Competing patterns of variation and change in British English, Language Variation and Change 6 (1994) 327–357.
* Cheshire [2002] J. Cheshire, Sex and gender in variationist research, The handbook of language variation and change (2002) 423–443.
* Labov [1990] W. Labov, The intersection of sex and social class in the course of linguistic change, Language variation and change 2 (1990) 205–254.
* Labov [1994] W. Labov, Principles of linguistic change Volume 1: Internal Factors, Blackwell, 1994\.
* Labov [2001] W. Labov, Principles of linguistic change Volume 2: Social Factors, Blackwell, 2001\.
* Downes [1998] W. Downes, Language and society, Cambridge University Press, 1998.
* Wardhaugh [2010] R. Wardhaugh, An introduction to sociolinguistics, John Wiley & Sons, 2010\.
* Holmes [2013] J. Holmes, An introduction to sociolinguistics, Routledge, 2013.
* Chambers and Schilling [2013] J. K. Chambers, N. Schilling, The handbook of language variation and change, volume 129, John Wiley & Sons, 2013\.
* Meng et al. [2013] W. Meng, L. Lanfen, W. Jing, Y. Penghua, L. Jiaolong, X. Fei, Improving short text classification using public search engines, in: International Symposium on Integrated Uncertainty in Knowledge Modelling and Decision Making, Springer, pp. 157–166.
* Heap et al. [2017] B. Heap, M. Bain, W. Wobcke, A. Krzywicki, S. Schmeidl, Word vector enrichment of low frequency words in the bag-of-words model for short text multi-class classification problems, arXiv preprint arXiv:1709.05778 (2017).
* Liu et al. [2017] M. Liu, G. Haffari, W. Buntine, M. Ananda-Rajah, Leveraging linguistic resources for improving neural text classification, in: Proceedings of the Australasian Language Technology Association Workshop 2017, pp. 34–42.
* Nunn [2010] S. Nunn, ‘wanna still nine hard?’: Exploring mechanisms of police bias in the translation and interpretation of wiretap conversations, Surveillance & Society 8 (2010) 28–42.
* Branwen et al. [2015] G. Branwen, N. Christin, D. Décary-Hétu, R. M. Andersen, StExo, E. Presidente, Anonymous, D. Lau, Sohhlz, D. Kratunov, V. Cakic, V. Buskirk, Whom, M. McKenna, S. Goode, Dark net market archives, 2011-2015, https://www.gwern.net/DNM-archives, 2015\. Accessed: 2019-02-02.
* Fatima et al. [2018] M. Fatima, S. Anwar, A. Naveed, W. Arshad, R. M. A. Nawab, M. Iqbal, A. Masood, Multilingual sms-based author profiling: Data and methods, Natural Language Engineering 24 (2018) 695–724.
* Aggarwal and Zhai [2012] C. C. Aggarwal, C. Zhai, A survey of text clustering algorithms, in: Mining text data, Springer, 2012, pp. 77–128.
* Jain [2010] A. K. Jain, Data clustering: 50 years beyond k-means, Pattern recognition letters 31 (2010) 651–666.
* Blei et al. [2003] D. M. Blei, A. Y. Ng, M. I. Jordan, Latent dirichlet allocation, Journal of machine Learning research 3 (2003) 993–1022.
* Mikolov et al. [2013] T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient estimation of word representations in vector space, arXiv preprint arXiv:1301.3781 (2013).
* Qiang et al. [2017] J. Qiang, P. Chen, T. Wang, X. Wu, Topic modeling over short texts by incorporating word embeddings, in: Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer, pp. 363–374.
* Ward Jr [1963] J. H. Ward Jr, Hierarchical grouping to optimize an objective function, Journal of the American statistical association 58 (1963) 236–244.
* Pedregosa et al. [2011] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al., Scikit-learn: Machine learning in python journal of machine learning research (2011).
* Rumelhart et al. [1985] D. E. Rumelhart, G. E. Hinton, R. J. Williams, Learning internal representations by error propagation, Technical Report, California Univ San Diego La Jolla Inst for Cognitive Science, 1985\.
* Miao et al. [2017] Y. Miao, E. Grefenstette, P. Blunsom, Discovering discrete latent topics with neural variational inference, in: Proceedings of the 34th International Conference on Machine Learning-Volume 70, JMLR. org, pp. 2410–2419.
* Elzinga [2011] P. Elzinga, Formalizing the concepts of crimes and criminals, Ph.D. thesis, University of Amsterdam, 2011\.
* Motoyama et al. [2011] M. Motoyama, D. McCoy, K. Levchenko, S. Savage, G. M. Voelker, An analysis of underground forums, in: Proceedings of the 2011 ACM SIGCOMM Internet Measurement Conference, ACM, pp. 71–80.
* Holt [2013] T. J. Holt, Examining the forces shaping cybercrime markets online, Social Science Computer Review 31 (2013) 165–177.
* Décary-Hétu and Dupont [2013] D. Décary-Hétu, B. Dupont, Reputation in a dark network of online criminals, Global Crime 14 (2013) 175–196.
|
# MuSFA: Improving Music Structural Function Analysis with Partially Labeled
Data
###### Abstract
Music structure analysis (MSA) systems aim to segment a song recording into
non-overlapping sections with useful labels. Previous MSA systems typically
predict abstract labels in a post-processing step and require the full context
of the song. By contrast, we recently proposed a supervised framework, called
“Music Structural Function Analysis” (MuSFA), that models and predicts
meaningful labels like ‘verse’ and ‘chorus’ directly from audio, without
requiring the full context of a song. However, the performance of this system
depends on the amount and quality of training data. In this paper, we propose
to repurpose a public dataset, HookTheory Lead Sheet Dataset (HLSD), to
improve the performance. HLSD contains over 18K excerpts of music sections
originally collected for studying automatic melody harmonization. We treat
each excerpt as a partially labeled song and provide a label mapping, so that
HLSD can be used together with other public datasets, such as SALAMI, RWC, and
Isophonics. In cross-dataset evaluations, we find that including HLSD in
training can improve state-of-the-art boundary detection and section labeling
scores by ~3% and ~1% respectively.
## 1 Introduction
Music structure analysis (MSA) is often posed as an effort to understand the
self-similarity patterns within a piece of music [1]: the aim is to identify
spans in a piece that repeat and points that represent boundaries between
dissimilar regions. The self-similarity patterns then guide the segment
labelling: abstract labels (‘A’, ‘B’, or ‘C’) or meaningful labels (‘chorus’,
‘bridge’, etc.) are typically predicted in a post-processing step. By
contrast, in [2] we introduced a supervised “Music Structural Function
Analysis” (MuSFA) system that predicts meaningful labels directly from audio:
using a neural network, it estimated the “chorusness”, “bridgeness” and
several other likelihoods as functions of time. The pipeline of the method is
illustrated in Figure 1; please see [2] for details.
Figure 1: Diagram of the DeepMuSFA system.
This approach has a novel benefit: the system is able to predict the structure
of musical excerpts without using the rest of the song as context. E.g., given
an excerpt containing the last 3 bars of a verse and the first 5 bars of a
chorus, the system could predict the boundary location and segment labels. A
method that relies on modelling the relationship of this excerpt to the rest
of the song—to determine which spans repeat, how closely, and how often—would
not work in this example.
However, the approach has a cost: it requires a large training dataset.
Annotating music structural functions is a laborious task, and the datasets
are scarce: four large public ones—Harmonix (912), RWC-Pop (100), Isophonics
(277) and SALAMI-pop (274)—total just 1,563 full songs. However, if the system
does not require full songs, but can be trained on excerpts, then other data
sources could be used. In this paper, we use the HookTheory Lead Sheet Dataset
(HLSD) [3] to improve an MSA system for the first time.
The music theory website HookTheory [4] invites users to annotate the lead
sheet (melody, chords, key and mode) of songs. However, lead sheets on the
site are not for full songs, but for musically meaningful sections like
‘chorus’, ‘verse’, or ‘intro’. HLSD has been used in several MIR tasks,
including symbolic music generation and harmonization [3, 5, 6], audio key
estimation [7], and melody and chord recognition of audio [8]. To our
knowledge, this paper is the first work to leverage the structural information
of HLSD to help MSA.
## 2 Data Pre-processing for HLSD
For this research, we use an open-source
collection (https://github.com/wayne391/lead-sheet-dataset), which contains
18,843 music sections, each with the starting and ending times of the section
in the corresponding recording. Two steps are needed to pre-process this data:
(1) map the structural labels into 7 classes, so they can be merged with other
datasets; (2) include a random length of context when excerpting each sample
from the full song.
HLSD labels | MuSFA
---|---
chorus, chorus-lead-out, theme, verse-and-chorus, theme-recap, pre-chorus-and-chorus | chorus
verse, development, verse-and-pre-chorus, pre-chorus | verse
instrumental, lead-in-alt, lead-in, loop, solo | inst
bridge, variation | bridge
intro, intro-and-chorus, intro-and-verse | intro
outro, pre-outro | outro
Table 1: HLSD-to-MuSFA label mapping.
After parsing the HLSD metadata, we derive a taxonomy of 22 distinct labels.
Table 1 lists the mapping from each HLSD label (left column) into 6 of the 7
structural labels (right column) used in MuSFA [2].
Next, to create useful training examples for the model to learn the
boundaries, we select excerpts with a random amount of context, choosing
between 8 and 12 seconds of context to include before and after the section.
For instance, suppose that a song has a chorus section from 0:35–0:58. If we
randomly draw 9 and 10 seconds of front and rear padding, respectively, we
choose the span from 0:26–1:08. This results in a 42-second excerpt, within
which the span 0:09–0:32 has the label ‘chorus’ and the remainder is
unlabelled. If the selected context padding exceeds the boundaries of the full
song, it is cut: e.g., if the song above ended at 1:00, the rear padding would
be cut to 2 seconds. We also set a minimum duration of 30 seconds: if the
section and the random padding result in an excerpt shorter than that, the
padding is extended in either direction until the requirement is met.
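The excerpting procedure can be sketched as follows. This is an illustrative reconstruction rather than the authors' code; in particular, the one-second growth step used to reach the minimum duration is our assumption:

```python
import random

MIN_EXCERPT = 30.0  # minimum excerpt duration in seconds

def excerpt_bounds(sec_start, sec_end, song_dur, rng=random):
    """Pick excerpt [start, end] around a labeled section, with 8-12 s of
    random context on each side, clipped to the song, and extended to at
    least MIN_EXCERPT seconds where the song allows it."""
    start = max(0.0, sec_start - rng.uniform(8.0, 12.0))
    end = min(song_dur, sec_end + rng.uniform(8.0, 12.0))
    # Extend in either direction until the 30 s minimum is met.
    while end - start < MIN_EXCERPT:
        grew = False
        if start > 0.0:
            start = max(0.0, start - 1.0)
            grew = True
        if end < song_dur:
            end = min(song_dur, end + 1.0)
            grew = True
        if not grew:  # the whole song is shorter than the minimum
            break
    return start, end
```

With the chorus at 35–58 s in a long song, any draw yields a start in [23, 27] s and an end in [66, 70] s, matching the 0:26–1:08 example above for draws of 9 and 10 s.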
## 3 Evaluation
We treat each excerpt as an independent song with partially annotated
boundaries and structural labels. There are two types of target activation
curve defined for our model [2]: _boundary activation_ and _function
activation_ curves. The boundary activation curve follows previous work [9].
For the function activation, however, we alter the _function loss_ to count
errors only within the section boundaries, since we do not know the function
labels for the front and rear padding spans of the excerpt.
### 3.1 Configuration
Our goal is to compare the performance of the MuSFA system introduced in [2]
with and without HLSD. We use four public datasets following the train-test
configurations described in [2]: _Harmonix Set_ [10], _SALAMI-pop_ [11],
_RWC-Pop_ [12], and _Isophonics_ [13]. Four evaluation metrics are used and
calculated using the mir_eval package [14]: (1) _HR.5F_ : F-measure of hit
rate at 0.5 seconds; (2) Accuracy (_ACC_): the frame-wise accuracy between the
predicted function label and the converted ground-truth label; (3) _CHR.5F_ :
F-measure of ‘chorus’ boundary hit rate at 0.5 seconds; (4) _CF1_ : F-measure
of pair-wise frames for ‘chorus’ and ‘non-chorus’ sections [9].
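In practice these scores are computed with the mir_eval package [14]; purely for illustration, here is a self-contained sketch of a boundary hit-rate F-measure with a 0.5-second window. The greedy one-to-one matching is a simplification of mir_eval's actual implementation:

```python
def hit_rate_f(ref, est, window=0.5):
    """F-measure of boundary hit rate: an estimated boundary is a hit if it
    lies within `window` seconds of a not-yet-matched reference boundary."""
    ref = sorted(ref)
    hits, used = 0, [False] * len(ref)
    for b in sorted(est):
        for i, r in enumerate(ref):
            if not used[i] and abs(b - r) <= window:
                used[i] = True
                hits += 1
                break
    if not ref or not est or hits == 0:
        return 0.0
    p, r = hits / len(est), hits / len(ref)
    return 2 * p * r / (p + r)  # harmonic mean of precision and recall
```

For example, with reference boundaries at 10, 20, and 30 s and estimates at 10.2 and 25 s, one hit gives precision 1/2, recall 1/3, and F-measure 0.4.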
| HR.5F | ACC | CHR.5F | CF1
---|---|---|---|---
Four-Fold Cross-Validation
Harmonix Set
DSF + Scluster [15] | .497 | - | .326 | .611
CNN-Chorus [9] | - | - | .371 | .692
MuSFA-24s | .570 | .701 | .501 | .815
MuSFA-24s (HLSD) | .595 | .714 | .512 | .820
MuSFA-36s | .558 | .723 | .476 | .831
MuSFA-36s (HLSD) | .582 | .731 | .495 | .835
Cross-Dataset Evaluation
SALAMI-pop
MuSFA-24s | .490 | .544 | .357 | .811
MuSFA-24s (HLSD) | .532 | .551 | .399 | .820
RWC-Pop
MuSFA-24s | .623 | .675 | .465 | .847
MuSFA-24s (HLSD) | .643 | .677 | .496 | .850
Isophonics
MuSFA-24s | .590 | .550 | .401 | .733
MuSFA-24s (HLSD) | .598 | .559 | .411 | .741
Table 2: Experimental results.
### 3.2 Result and Discussion
Table 2 shows the results. Our baseline systems, “MuSFA-24s” and “MuSFA-36s”,
use the SpecTNT model [16] with audio chunks of 24 and 36 seconds,
respectively. We conduct two types of
evaluations: first, we use _Harmonix Set_ in a 4-fold cross-validation manner,
but with _SALAMI-pop_ , _RWC-Pop_ , and _Isophonics_ included in the training
set of every fold. Second, in cross-dataset evaluations, each of _SALAMI-pop_
, _RWC-Pop_ , and _Isophonics_ in turn serves as the test set, and the
remaining datasets (including Harmonix Set) are used for training. For all of
these evaluations, we train the MuSFA system as described, or with the
addition of HLSD.
We find that adding HLSD improves performance on all metrics, on average
increasing boundary detection (see HR.5F and CHR.5F) by 3% and section
labeling (see ACC and CF1) by 1%. This result is expected: each HLSD excerpt
provides clear supervision for its section boundaries, whereas, since the
labels of its neighboring sections are unknown, the improvement for section
labeling is relatively minor.
## References
* [1] O. Nieto, G. J. Mysore, C.-I. Wang, J. B. L. Smith, J. Schlüter, T. Grill, and B. McFee, “Audio-based music structure analysis: Current trends, open challenges, and applications,” _Trans. ISMIR_ , vol. 3, no. 1, 2020.
* [2] J.-C. Wang, Y.-N. Hung, and J. B. L. Smith, “To catch a chorus, verse, intro, or anything else: Analyzing a song with structural functions,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 2022, pp. 416–420.
* [3] Y.-C. Yeh, W.-Y. Hsiao, S. Fukayama, T. Kitahara, B. Genchel, H.-M. Liu, H.-W. Dong, Y. Chen, T. Leong, and Y.-H. Yang, “Automatic melody harmonization with triad chords: A comparative study,” _Journal of New Music Research_ , vol. 50, no. 1, pp. 37–51, 2021.
* [4] C. Anderson, C. Carlton, R. Miyakawa, and D. Schwachhofer, “Hooktheory theorytab database,” https://www.hooktheory.com/theorytab.
* [5] J. Jiang, G. G. Xia, D. B. Carlton, C. N. Anderson, and R. H. Miyakawa, “Transformer VAE: A hierarchical model for structure-aware and interpretable music representation learning,” in _IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 2020, pp. 516–520.
* [6] Y.-W. Chen, H.-S. Lee, Y.-H. Chen, and H.-M. Wang, “SurpriseNet: Melody harmonization conditioning on user-controlled surprise contours,” in _Proc. ISMIR_ , 2021.
* [7] J. Jiang and D. B. Carlton, “MIREX 2019 submission: Crowd annotation for audio key estimation,” _Proc. MIREX, ISMIR_ , 2019.
* [8] C. Donahue and P. Liang, “Sheet Sage: Lead sheets from music audio,” in _Proc. ISMIR Late-Breaking and Demo_ , 2021.
* [9] J.-C. Wang, J. B. L. Smith, J. Chen, X. Song, and Y. Wang, “Supervised chorus detection for popular music using convolutional neural network and multi-task learning,” in _ICASSP_ , 2021, pp. 566–570.
* [10] O. Nieto, M. McCallum, M. Davies, A. Robertson, A. Stark, and E. Egozy, “The Harmonix Set: Beats, downbeats, and functional segment annotations of western popular music,” in _ISMIR_ , 2019, pp. 565–572.
* [11] J. B. L. Smith, J. A. Burgoyne, I. Fujinaga, D. D. Roure, and J. S. Downie, “Design and creation of a large-scale database of structural annotations,” in _Proc. ISMIR_ , 2011, pp. 555–560.
* [12] M. Goto, H. Hashiguchi, T. Nishimura, and R. Oka, “RWC Music Database: Popular, classical and jazz music databases,” in _Proc. ISMIR_ , 2002.
* [13] M. Mauch, C. Cannam, M. Davies, S. Dixon, C. Harte, S. Kolozali, D. Tidhar, and M. Sandler, “OMRAS2 metadata project 2009,” in _ISMIR Late Breaking and Demo_ , 2009.
* [14] C. Raffel, B. McFee, E. J. Humphrey, J. Salamon, O. Nieto, D. Liang, and D. P. W. Ellis, “mir_eval: A transparent implementation of common MIR metrics,” in _ISMIR_ , 2014.
* [15] J.-C. Wang, J. B. L. Smith, W.-T. Lu, and X. Song, “Supervised metric learning for music structure features,” in _Proc. ISMIR_ , 2021, pp. 730–737.
* [16] W.-T. Lu, J.-C. Wang, M. Won, K. Choi, and X. Song, “SpecTNT: A time-frequency transformer for music audio,” in _Proc. ISMIR_ , 2021, pp. 396–403.
# Spectrum of the $\overline{\partial}$-Laplace operator on zero forms for the
quantum quadric ${\mathcal{O}}_{q}(\textbf{Q}_{N})$
Fredy Díaz García
Mathematical Institute of Charles University, Prague, Czech Republic
E-mail address<EMAIL_ADDRESS>
###### Abstract
We study the Laplacian operator $\Delta_{\overline{\partial}}$ associated to a
Kähler structure $(\Omega^{(\bullet,\bullet)},\kappa)$ for the
Heckenberger–Kolb differential calculus of the quantum quadrics
${\mathcal{O}}_{q}(\textbf{Q}_{N})$, which is to say, the irreducible quantum
flag manifolds of types $B_{n}$ and $D_{n}$. We show that the eigenvalues of
$\Delta_{\overline{\partial}}$ on zero forms tend to infinity and have finite
multiplicity.
## 1 Introduction
Since the discovery of quantum groups in the 1980s, mathematicians have been
trying to fit them into Connes’ framework of noncommutative spin geometry. In
fact, the construction of spectral triples for the Drinfeld–Jimbo quantum
groups has been a very difficult problem in noncommutative geometry. One of
the best-studied spectral triples is that of the Podleś sphere [6] which
satisfies most of the Connes conditions for a noncommutative spin geometry, up
to some slight modifications. Other constructions have been made in this
direction, for example the equivariant isospectral Dirac operators for all the
Podleś spheres [4], for the quantum projective spaces [2], for the quantum
group ${\mathcal{O}}_{q}(SU(2))$ [5] and for all the compact quantum groups
[23]. In [7] a Dirac operator associated to Kähler structures is constructed
for the quantum projective spaces ${\mathcal{O}}_{q}(\mathbb{CP}^{n})$
generalizing the work of S. Majid in [20].
In recent years it has been shown that the quantum flag manifolds provide a
large family of well-behaved quantum homogeneous spaces whose noncommutative
geometry is quite similar to the classical situation. I. Heckenberger and S.
Kolb showed that every irreducible quantum flag manifold
${\mathcal{O}}_{q}(G/L_{S})$ admits a unique differential calculus which is a
$q$-deformation of its classical de Rham complex [12], [13]. It is important
to note that the existence of this differential calculus is an extremely
special property of these quantum spaces, one that does not hold for all
quantum spaces. Moreover, the existence of this canonical deformation allows
us to construct Dirac operators and investigate their spectra.
The spectrum of the Dirac operator has been computed in some cases. For
example, in [3] the spectrum for ${\mathcal{O}}_{q}(\mathbb{CP}^{2})$ was
calculated. In [2] the authors showed that the spectrum of the Dolbeault–Dirac
operator for the quantum projective spaces has exponential growth by relating
it to a Casimir operator. In [18] the authors gave a new approach to the
construction of Dolbeault–Dirac operators and quantum Clifford algebras for
the irreducible quantum flag manifolds.
Later, it was shown in [22] that this Dolbeault–Dirac operator gives a
spectral triple for the quantum Lagrangian Grassmannian of rank 2, via a
suitable version of the Parthasarathy formula.
Recently, a different approach to constructing Dolbeault–Dirac operators has
appeared in [24] where the notion of noncommutative Kähler structure for a
differential calculus was introduced. In the same paper the author showed the
existence of a covariant Kähler structure for the Heckenberger–Kolb calculus
for the quantum projective spaces. This result was later generalized in [21]
to all the irreducible quantum flag manifolds, for all but a finite number of
values of $q$. The existence of such a structure allows us to construct
metrics,
Hilbert space completions and to verify all the axioms of a spectral triple
except the compact resolvent condition; see [8] for a more detailed
discussion. This last axiom has turned out to be the most challenging one.
This problem was solved in [7] for the quantum projective spaces
${\mathcal{O}}_{q}(\mathbb{CP}^{n})$ by using the existence of the Kähler
structure and the multiplicity-free property of the space of anti-holomorphic
forms. Also the spectrum was calculated for the quantum quadric
${\mathcal{O}}_{q}(\textbf{Q}_{5})$ in [10] by using the quantum version of
the Gelfand–Bernstein–Gelfand resolution for quantized irreducible flag
manifolds [14].
In this paper we investigate the asymptotic behaviour of the Laplace operator
$\Delta_{\overline{\partial}}$ on the space of zero forms $\Omega^{(0,0)}$ for
the irreducible quantum flag manifolds of quadric type. Besides the existence
of the Kähler form, we make use of the FRT description of the quantum group
${\mathcal{O}}_{q}(SO(N))$, as well as the fact that $\Omega^{(0,0)}$ is
multiplicity-free as a $U_{q}(\mathfrak{g})$-module, to calculate the
eigenvalues explicitly. It was conjectured in [7] that the Dolbeault–Dirac
operator $D_{\overline{\partial}}$ gives a spectral triple for all the
irreducible quantum flag manifolds; here we take a small but significant step
towards proving this conjecture.
The paper is organized as follows: in §2 we recall some necessary definitions
and properties of the quantized enveloping algebras, differential calculi,
Kähler structures as well as irreducible quantum flag manifolds and Takeuchi’s
equivalence. We also recall the FRT description of the algebras
${\mathcal{O}}_{q}(SO(N))$.
In §3 we give the most important properties of the Heckenberger–Kolb calculus
for the irreducible quantum flag manifolds and a proof of the existence of a
Kähler structure for the case of quantum quadrics
${\mathcal{O}}_{q}(\textbf{Q}_{N})$ by using tools of representation theory.
In §4 we present the main result of the paper. It is shown by explicit
calculation that the eigenvalues of the Laplace operator
$\Delta_{\overline{\partial}}$ on zero forms tend to infinity, supporting the
conjecture that the asymptotic behaviour of the Dolbeault–Dirac operator
$D_{\overline{\partial}}$ satisfies the compact resolvent condition for all
the irreducible quantum flag manifolds.
## 2 Preliminaries
In this section we recall the necessary definitions of quantum enveloping
algebras, covariant differential calculi as well as complex and Kähler
structures. For a more detailed presentation see [17], [24], and [25]. We also
introduce the definition of quantum flag manifolds, Takeuchi’s equivalence and
the FRT presentation of the quantum coordinate algebra
${\mathcal{O}}_{q}(SO(N))$.
### 2.1 Drinfeld–Jimbo quantum groups
Let $\mathfrak{g}$ be a finite-dimensional complex semisimple Lie algebra of
rank $r$. We fix a Cartan subalgebra $\mathfrak{h}$ and choose a set of simple
roots $\Pi=\\{\alpha_{1},\dots,\alpha_{r}\\}$ for the corresponding root
system in the linear dual of $\mathfrak{g}$. We denote by $(\cdot,\cdot)$ the
symmetric bilinear form induced on $\mathfrak{h}^{*}$ by the Killing form of
$\mathfrak{g}$, normalised so that any shortest simple root $\alpha_{i}$
satisfies $(\alpha_{i},\alpha_{i})=2$. The Cartan matrix $(a_{ij})$ of
$\mathfrak{g}$ is defined by
$a_{ij}:=\big{(}\alpha_{i}^{\vee},\alpha_{j}\big{)}$, where
$\alpha_{i}^{\vee}:=2\alpha_{i}/(\alpha_{i},\alpha_{i})$ is the _coroot_ of
$\alpha_{i}$.
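As a concrete illustration of the normalisation above, the following sketch computes the Cartan matrix of $B_2=\mathfrak{so}(5)$ from its simple roots, with the bilinear form rescaled so that the shortest simple root has square length $2$; the root coordinates and helper names are ours, not from the paper.

```python
from fractions import Fraction

# Simple roots of B2 = so(5) in an orthonormal basis:
# alpha_1 = e1 - e2 (long), alpha_2 = e2 (short).
alpha = [(1, -1), (0, 1)]

def form(x, y):
    # Standard dot product rescaled by 2, so the short root alpha_2
    # satisfies (alpha_2, alpha_2) = 2, as in the normalisation above.
    return 2 * sum(Fraction(a) * b for a, b in zip(x, y))

def cartan(roots):
    # a_ij = (alpha_i^vee, alpha_j) = 2 (alpha_i, alpha_j) / (alpha_i, alpha_i)
    return [[int(2 * form(ai, aj) / form(ai, ai)) for aj in roots]
            for ai in roots]

A = cartan(alpha)
print(A)  # -> [[2, -1], [-2, 2]], the Cartan matrix of B2
```

Note that $a_{12}=-1$ while $a_{21}=-2$: the Cartan matrix is not symmetric for non-simply-laced types, which is why the symmetrising factors $q_i$ enter below.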
Throughout the paper $q$ stands for a real number in the interval
$(1,+\infty)$, and we denote $q_{i}:=q^{(\alpha_{i},\alpha_{i})/2}$. The
Drinfeld–Jimbo quantised enveloping algebra $U_{q}(\mathfrak{g})$ is the
noncommutative associative algebra generated by the elements
$E_{i},F_{i},K_{i}$, and $K^{-1}_{i}$, for $i=1,\ldots,r$, with the relations
$\displaystyle K_{i}E_{j}=q_{i}^{a_{ij}}E_{j}K_{i},\qquad
K_{i}F_{j}=q_{i}^{-a_{ij}}F_{j}K_{i},\qquad
K_{i}K_{j}=K_{j}K_{i},\qquad K_{i}K_{i}^{-1}=K_{i}^{-1}K_{i}=1,$
$\displaystyle
E_{i}F_{j}-F_{j}E_{i}=\delta_{ij}\frac{K_{i}-K^{-1}_{i}}{q_{i}-q_{i}^{-1}},$
and the quantum Serre relations which we omit (see [17, §6.1.2] for details).
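The defining relations can be checked numerically in the smallest case. The sketch below uses the standard $2$-dimensional type-1 representation of $U_{q}(\mathfrak{sl}_2)$ (for which $a_{11}=2$ and $q_1=q$); the matrices are the usual textbook choice, not taken from this paper.

```python
import numpy as np

q = 2.0  # any real q > 1, as assumed in the text

# Standard 2-dimensional representation V_{varpi_1} of U_q(sl2)
E = np.array([[0.0, 1.0], [0.0, 0.0]])
F = np.array([[0.0, 0.0], [1.0, 0.0]])
K = np.diag([q, 1.0 / q])
Kinv = np.diag([1.0 / q, q])

# K_i E_j = q_i^{a_ij} E_j K_i, here K E = q^2 E K
assert np.allclose(K @ E, q**2 * E @ K)
assert np.allclose(K @ F, q**-2 * F @ K)
# E F - F E = (K - K^{-1}) / (q - q^{-1})
assert np.allclose(E @ F - F @ E, (K - Kinv) / (q - 1.0 / q))
print("U_q(sl2) relations verified in the fundamental representation")
```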
A Hopf algebra structure on $U_{q}(\mathfrak{g})$ is defined by
$\displaystyle\Delta(K_{i})=K_{i}\otimes K_{i},\quad\Delta(E_{i})=E_{i}\otimes
K_{i}+1\otimes E_{i},\quad\Delta(F_{i})=F_{i}\otimes 1+K_{i}^{-1}\otimes
F_{i},$ $\displaystyle S(K_{i})=K^{-1}_{i},\quad
S(E_{i})=-E_{i}K^{-1}_{i},\quad S(F_{i})=-K_{i}F_{i},\quad\varepsilon(K_{i})=1,\quad\varepsilon(E_{i})=\varepsilon(F_{i})=0.$
A Hopf $\ast$-algebra structure is given by
$\displaystyle K_{i}^{*}=K_{i},\qquad E^{*}_{i}=K_{i}F_{i},\qquad
F^{*}_{i}=E_{i}K_{i}^{-1}.$
Let $\\{\varpi_{1},\ldots,\varpi_{r}\\}$ denote the set of
_fundamental weights_ of $\mathfrak{g}$, that is, the basis dual to the simple
coroots $\\{\alpha_{1}^{\vee},\ldots,\alpha_{r}^{\vee}\\}$:
$\displaystyle(\alpha_{i}^{\vee},\varpi_{j})=\delta_{ij}.$
Let $\mathcal{P}$ be the weight lattice of $\mathfrak{g}$ and
$\mathcal{P}^{+}$ the set of _dominant integral weights_ which are the
$\mathbb{Z}$-span and $\mathbb{Z}_{\geq 0}$-span of the fundamental weights,
respectively. For each $\mu\in\mathcal{P}^{+}$ there exists an irreducible
finite-dimensional $U_{q}(\mathfrak{g})$-module $V_{\mu}$, uniquely determined
by the existence of a vector $\nu_{\mu}$, called the _highest weight vector_,
satisfying
$\displaystyle
K_{i}\triangleright\nu_{\mu}=q^{(\alpha_{i},\mu)}\nu_{\mu},\qquad
E_{i}\triangleright\nu_{\mu}=0.$
Moreover $\nu_{\mu}$ is unique up to scalar multiple. A finite direct sum of
such representations is called a _type-1 representation_. A vector $\nu$ is
called a _weight vector_ of weight $\mathrm{wt}(\nu)\in\mathcal{P}$ if
$\displaystyle K_{i}\triangleright\nu=q^{(\alpha_{i},\mathrm{wt}(\nu))}\nu.$
In this paper we use the fact that $U_{q}(\mathfrak{g})$ has invertible
antipode and therefore we have an equivalence between
$\mbox{}_{U_{q}(\mathfrak{g})}$Mod and $\mathrm{Mod}_{U_{q}(\mathfrak{g})}$
induced by the antipode.
### 2.2 Differential calculi
A differential calculus
$\big{(}\Omega^{\bullet}\simeq\bigoplus_{k\in\mathbb{N}_{0}}\Omega^{k},\mbox{d}\big{)}$
is a differential graded algebra (dg-algebra) which is generated in degree $0$
as a dg-algebra, that is to say, it is generated as an algebra by the elements
$a,\mbox{d}b$, for $a,b\in\Omega^{0}$. We call an element
$\omega\in\Omega^{\bullet}$ a _form_ , and if $\omega\in\Omega^{k}$, for some
$k\in\mathbb{N}_{0}$, then $\omega$ is said to be _homogeneous_ of degree
$|\omega|:=k$. The product of two forms $\omega,\nu\in\Omega^{\bullet}$ is
usually denoted by $\omega\wedge\nu$, unless one of the forms is of degree
$0$, where we just denote the product by juxtaposition.
For a given algebra $B$, a _differential calculus over_ $B$ is a differential
calculus such that $\Omega^{0}=B$. Note that for a differential calculus over
$B$, each $\Omega^{k}$ is a $B$-bimodule. A differential calculus is said to
have _total degree_ $m\in\mathbb{N}_{0}$, if $\Omega^{m}\neq 0$, and
$\Omega^{k}=0$, for every $k>m$.
A differential $*$-calculus over a $*$-algebra $B$ is a differential calculus
over $B$ such that the $*$-map of $B$ extends to a (necessarily unique)
conjugate-linear involutive map $*:\Omega^{\bullet}\to\Omega^{\bullet}$
satisfying $\mbox{d}(\omega^{*})=(\mbox{d}\omega)^{*}$, and
$\displaystyle\big{(}\omega\wedge\nu\big{)}^{*}=(-1)^{kl}\nu^{*}\wedge\omega^{*},\qquad\text{ for all }\ \
\omega\in\Omega^{k},\,\nu\in\Omega^{l}.$
We say that $\omega\in\Omega^{\bullet}$ is _closed_ if $\mbox{d}\omega=0$, and
_real_ if $\omega^{*}=\omega$.
We now recall the definition of a complex structure as introduced in [25].
This abstracts the properties of the de Rham complex for a classical complex
manifold [16].
###### Definition 2.1.
An almost complex structure $\Omega^{(\bullet,\bullet)}$ for a differential
$*$-calculus $(\Omega^{\bullet},\mbox{d})$ is an $\mathbb{N}^{2}_{0}$-algebra
grading $\bigoplus_{(a,b)\in\mathbb{N}^{2}_{0}}\Omega^{(a,b)}$ for
$\Omega^{\bullet}$ such that, for all $(a,b)\in\mathbb{N}^{2}_{0}$ the
following hold:
1. (i)
$\Omega^{k}=\bigoplus_{a+b=k}\Omega^{(a,b)}$,
2. (ii)
$\big{(}\Omega^{(a,b)}\big{)}^{*}=\Omega^{(b,a)}$.
We call an element of $\Omega^{(a,b)}$ an $(a,b)$-form. For the associated
pair of projections
$\mathrm{proj}_{\Omega^{(a+1,b)}}:\Omega^{a+b+1}\to\Omega^{(a+1,b)}$ and
$\mathrm{proj}_{\Omega^{(a,b+1)}}:\Omega^{a+b+1}\to\Omega^{(a,b+1)}$, we
denote
$\displaystyle\partial|_{\Omega^{(a,b)}}:=\mathrm{proj}_{\Omega^{(a+1,b)}}\circ\mbox{d},$
$\displaystyle\overline{\partial}|_{\Omega^{(a,b)}}:=\mathrm{proj}_{\Omega^{(a,b+1)}}\circ\mbox{d}.$
A _complex structure_ is an almost complex structure which satisfies
$\displaystyle\mbox{d}\Omega^{(a,b)}\subseteq\Omega^{(a+1,b)}\oplus\Omega^{(a,b+1)},\qquad\text{ for all }\
(a,b)\in\mathbb{N}^{2}_{0}.$ (1)
For a complex structure, (1) implies the identities
$\displaystyle\mbox{d}=\partial+\overline{\partial},$
$\displaystyle\overline{\partial}\circ\partial=-\,\partial\circ\overline{\partial},$
$\displaystyle\partial^{2}=\overline{\partial}^{2}=0.$
In particular,
$\big{(}\bigoplus_{(a,b)\in\mathbb{N}^{2}_{0}}\Omega^{(a,b)},\partial,\overline{\partial}\big{)}$
is a double complex, which we call the Dolbeault double complex of
$\Omega^{(\bullet,\bullet)}$. Moreover, it is easily seen that both $\partial$
and $\overline{\partial}$ satisfy the graded Leibniz rule, and
$\displaystyle\partial(\omega^{*})=\big{(}\overline{\partial}\omega\big{)}^{*},\qquad
\overline{\partial}(\omega^{*})=\big{(}\partial\omega\big{)}^{*},\qquad\text{for all}\ \
\omega\in\Omega^{\bullet}.$ (2)
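In the classical model case $\mathbb{C}\simeq\mathbb{R}^2$, the decomposition $\mbox{d}=\partial+\overline{\partial}$ can be verified symbolically via the Wirtinger derivatives. The sketch below is a classical sanity check, not part of the noncommutative construction; the test function is arbitrary.

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = x**3 * y - 2 * x * y**2  # an arbitrary polynomial test function

# Wirtinger derivatives: del f = (f_x - i f_y)/2 (coefficient of dz),
# delbar f = (f_x + i f_y)/2 (coefficient of dzbar)
del_f = sp.Rational(1, 2) * (sp.diff(f, x) - sp.I * sp.diff(f, y))
delbar_f = sp.Rational(1, 2) * (sp.diff(f, x) + sp.I * sp.diff(f, y))

# Expand (del f) dz + (delbar f) dzbar in the dx, dy basis, using
# dz = dx + i dy and dzbar = dx - i dy, and compare with
# df = f_x dx + f_y dy.
dx_coeff = sp.expand(del_f + delbar_f)
dy_coeff = sp.expand(sp.I * del_f - sp.I * delbar_f)

assert sp.simplify(dx_coeff - sp.diff(f, x)) == 0
assert sp.simplify(dy_coeff - sp.diff(f, y)) == 0
print("d = del + delbar verified on a polynomial in x, y")
```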
#### 2.2.1 Hermitian and Kähler structures
We now present the definition of an Hermitian structure, as introduced in
[24], as well as a Kähler structure, introduced in §7 of the same paper.
###### Definition 2.2.
An Hermitian structure $(\Omega^{(\bullet,\bullet)},\sigma)$ for a
differential $*$-calculus $\Omega^{\bullet}$ over $B$ of even total degree
$2n$ is a pair consisting of a complex structure $\Omega^{(\bullet,\bullet)}$
and a central real $(1,1)$-form $\sigma$, called the Hermitian form, such
that, with respect to the Lefschetz operator
$\displaystyle L:\Omega^{\bullet}\to\Omega^{\bullet},$
$\displaystyle\omega\mapsto\sigma\wedge\omega,$
isomorphisms are given by
$\displaystyle L^{n-k}:\Omega^{k}\to\Omega^{2n-k},$ $\displaystyle\text{ for
all }\ \ k=0,\dots,n-1.$
For $L$ the Lefschetz operator of an Hermitian structure, we denote
$\displaystyle
P^{(a,b)}:=\begin{cases}\\{\alpha\in\Omega^{(a,b)}\,|\,L^{n-a-b+1}(\alpha)=0\\},&\text{ if }a+b\leq n,\\\ 0,&\text{ if }a+b>n.\end{cases}$
Moreover, we denote $P^{k}:=\bigoplus_{a+b=k}P^{(a,b)}$, and
$P^{\bullet}:=\bigoplus_{k\in\mathbb{N}_{0}}P^{k}$. An element of
$P^{\bullet}$ is called a primitive form.
An important consequence of the existence of the Lefschetz operator is the
Lefschetz decomposition of the differential $*$-calculus, which we now recall.
For a proof see [24, §4.1].
###### Proposition 2.3 (Lefschetz decomposition).
For $L$ the Lefschetz operator of an Hermitian structure on a differential
$*$-calculus $\Omega^{\bullet}$, a $B$-bimodule decomposition of $\Omega^{k}$,
for all $k\in\mathbb{N}_{0}$, is given by
$\displaystyle\Omega^{k}\simeq\bigoplus_{j\geq 0}L^{j}\big{(}P^{k-2j}\big{)}.$
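In the classical exterior algebra this decomposition can be checked at the level of dimensions: with $\dim P^{m}=\binom{2n}{m}-\binom{2n}{m-2}$ for $m\leq n$ (and $0$ otherwise), the sum over $j$ telescopes to $\binom{2n}{k}=\dim\Omega^{k}$ for $k\leq n$; the range $k>n$ follows from the hard Lefschetz isomorphism $\Omega^{2n-k}\simeq\Omega^{k}$. This is a classical consistency check, not a statement about the quantum calculi.

```python
from math import comb

def primitive_dim(m, n):
    # Classically dim P^m = C(2n, m) - C(2n, m-2), vanishing for m < 0 or m > n
    if m < 0 or m > n:
        return 0
    return comb(2 * n, m) - (comb(2 * n, m - 2) if m >= 2 else 0)

n = 4  # complex dimension chosen for the check
for k in range(0, n + 1):
    # dim Omega^k should equal sum_j dim L^j(P^{k-2j}) = sum_j dim P^{k-2j}
    total = sum(primitive_dim(k - 2 * j, n) for j in range(0, k // 2 + 1))
    assert total == comb(2 * n, k), (k, total)
print("Lefschetz dimension count verified for k <= n, n =", n)
```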
In classical Hermitian geometry, the Hodge map of an Hermitian metric is
related to the associated Lefschetz decomposition through the well-known Weil
formula [16, §1.2]. In the noncommutative setting, we take the direct
generalisation of the Weil formula for our definition of the Hodge map, and
build upon this to define an Hermitian metric.
###### Definition 2.4.
The Hodge map associated to an Hermitian structure
$\big{(}\Omega^{(\bullet,\bullet)},\sigma\big{)}$ is the $B$-bimodule map
$\ast_{\sigma}:\Omega^{\bullet}\to\Omega^{\bullet}$ satisfying, for any
$j\in\mathbb{N}_{0}$,
$\displaystyle\ast_{\sigma}\big{(}L^{j}(\omega)\big{)}=(-1)^{\frac{k(k+1)}{2}}\mathbf{i}^{a-b}\frac{j!}{(n-j-k)!}L^{n-j-k}(\omega),$
$\displaystyle\omega\in P^{(a,b)}\subseteq P^{k=a+b},$
where $\mathbf{i}^{2}=-1$.
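A formal consistency check on the Weil-type coefficient above: applying the formula twice maps $L^{j}(\omega)$ back to itself, and the combined scalar should be $(-1)^{k}$, matching the classical identity $\ast^{2}=(-1)^{m}$ on $m$-forms of a $2n$-real-dimensional manifold (here $m=k+2j$, so $(-1)^{m}=(-1)^{k}$). The code only manipulates the coefficients; it assumes nothing beyond the displayed formula.

```python
from math import factorial

def hodge_factor(j, a, b, n):
    # Coefficient c in *_sigma(L^j w) = c * L^{n-j-k}(w), w in P^{(a,b)}, k = a+b
    k = a + b
    return ((-1) ** (k * (k + 1) // 2)) * (1j ** (a - b)) \
        * factorial(j) / factorial(n - j - k)

n = 5
for a in range(0, n + 1):
    for b in range(0, n + 1 - a):
        k = a + b
        for j in range(0, n - k + 1):
            # Applying *_sigma twice: factors at degrees j and n-j-k multiply
            c2 = hodge_factor(j, a, b, n) * hodge_factor(n - j - k, a, b, n)
            assert abs(c2 - (-1) ** k) < 1e-9
print("*_sigma^2 acts as (-1)^k on each L^j(P^{(a,b)})")
```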
The metric associated to an Hermitian structure
$(\Omega^{(\bullet,\bullet)},\sigma)$ is the map
$g_{\sigma}:\Omega^{\bullet}\times\Omega^{\bullet}\rightarrow B$ for which
$g_{\sigma}(\Omega^{k},\Omega^{l})=0$ if $k\neq l$, and
$g_{\sigma}(\omega,\nu):=*_{\sigma}(*_{\sigma}(\omega^{*})\wedge\nu),\qquad\text{ if }\ \omega,\nu\in\Omega^{k}.$
###### Definition 2.5.
We say that an Hermitian structure $(\Omega^{(\bullet,\bullet)},\sigma)$ is
_positive definite_ if the associated metric $g_{\sigma}$ satisfies
$\displaystyle g_{\sigma}(\omega,\omega)\in
B_{>0}:=\Big{\\{}\sum_{i=1}^{m}\lambda_{i}b_{i}^{*}b_{i}:b_{i}\in B,\ \
\lambda_{i}\in\mathbb{R}_{>0},\ m\in\mathbb{N}\Big{\\}},\qquad\mbox{for all non-zero}\ \ \omega\in\Omega^{\bullet}.$
In this case we say that $\sigma$ is a positive definite Hermitian form.
###### Definition 2.6.
A Kähler structure for a differential $*$-calculus is an Hermitian structure
$(\Omega^{(\bullet,\bullet)},\kappa)$ such that $\kappa$ is closed, which is
to say, $\mbox{d}\kappa=0$. We call such a form $\kappa$ a Kähler form.
### 2.3 Irreducible quantum flag manifolds
Let $V$ be a finite-dimensional $U_{q}(\mathfrak{g})$-module, $v\in V$, and
$f\in V^{*}$, the linear dual of $V$. Consider the function
$c^{\textrm{\tiny$V$}}_{f,v}:U_{q}(\mathfrak{g})\to\mathbb{C}$ defined by
$c^{\textrm{\tiny$V$}}_{f,v}(X):=f\big{(}X(v)\big{)}$. The coordinate ring of
$V$ is the subspace
$\displaystyle
C(V):=\text{span}_{\mathbb{C}}\\!\left\\{c^{\textrm{\tiny$V$}}_{f,v}\,|\,v\in
V,\,f\in V^{*}\right\\}\subseteq U_{q}(\mathfrak{g})^{*}.$
It is easy to check that $C(V)$ is contained in $U_{q}(\mathfrak{g})^{\circ}$,
the Hopf dual of $U_{q}(\mathfrak{g})$, and moreover that a Hopf subalgebra of
$U_{q}(\mathfrak{g})^{\circ}$ is given by
$\displaystyle\mathcal{O}_{q}(G):=\bigoplus_{\lambda\in\mathcal{P}^{+}}C(V_{\lambda}).$
(3)
We call ${\mathcal{O}}_{q}(G)$ the quantum coordinate algebra of $G$, where
$G$ is the compact, simply-connected, simple Lie group having $\mathfrak{g}$
as its complexified Lie algebra. Moreover, we call the decomposition of
${\mathcal{O}}_{q}(G)$ given in (3) the _Peter–Weyl decomposition_ of
${\mathcal{O}}_{q}(G)$.
For $S$ a subset of simple roots, consider the Hopf subalgebra of
$U_{q}(\mathfrak{g})$ given by
$\displaystyle
U_{q}(\mathfrak{l}_{S}):=\big{<}K_{i},E_{j},F_{j}\,|\,i=1,\ldots,r;j\in
S\big{>}.$
There are obvious notions of weight vectors, highest weight vectors as well as
type-1 representations. It can be shown that every irreducible finite
dimensional type-1 $U_{q}(\mathfrak{l}_{S})$-module admits a highest weight
vector which is unique up to scalar multiple. Moreover, the weight of this
highest weight vector determines the module up to isomorphism, and there is
a correspondence between irreducible type-1 $U_{q}(\mathfrak{l}_{S})$-modules
and weights in the sub-lattice
$\displaystyle\mathcal{P}^{+}_{S}+\mathcal{P}_{S^{c}}\subseteq\mathcal{P},$
where $S^{c}=\Pi\setminus S$ and
$\displaystyle\mathcal{P}^{+}_{S}$
$\displaystyle:=\Big{\\{}\lambda\in\mathcal{P}:\lambda=\sum_{j\in
S}\lambda_{j}\varpi_{j},\ \ \lambda_{j}\in\mathbb{Z}_{\geq 0}\Big{\\}},$
$\displaystyle\mathcal{P}_{S^{c}}$
$\displaystyle:=\Big{\\{}\lambda\in\mathcal{P}:\lambda=\sum_{j\in
S^{c}}\lambda_{j}\varpi_{j},\ \ \lambda_{j}\in\mathbb{Z}\Big{\\}}.$
Just as for ${\mathcal{O}}_{q}(G)$, we can construct the type-1 dual of
$U_{q}(\mathfrak{l}_{S})$ using matrix coefficients; this Hopf algebra will be
denoted by ${\mathcal{O}}_{q}(L_{S})$. Restriction of domains gives a
surjective Hopf $\ast$-algebra map
$\pi_{S}:{\mathcal{O}}_{q}(G)\rightarrow{\mathcal{O}}_{q}(L_{S})$, dual to the
inclusion of $U_{q}(\mathfrak{l}_{S})$ in $U_{q}(\mathfrak{g})$. The quantum
flag manifold associated to $S$ is the quantum homogeneous space associated to
$\pi_{S}$:
$\displaystyle{\mathcal{O}}_{q}\big{(}G/L_{S}\big{)}$
$\displaystyle:={\mathcal{O}}_{q}(G)^{co({\mathcal{O}}_{q}(L_{S}))}$
$\displaystyle\ =\\{b\in{\mathcal{O}}_{q}(G):b_{(1)}\langle
X,b_{(2)}\rangle=\varepsilon(X)b\quad\text{for all}\ \ X\in
U_{q}(\mathfrak{l}_{S})\\}.$
We say that the quantum flag manifold is irreducible if
$S=\Pi\setminus\\{\alpha_{x}\\}$, where $\alpha_{x}$ has coefficient $1$ in the
expansion of the highest root of $\mathfrak{g}$. In Table 1 we give a
diagrammatic presentation of the sets of simple roots defining the irreducible
quantum flag manifolds, where the node corresponding to $\alpha_{x}$ is the
coloured one.
Table 1: Irreducible quantum flag manifolds, with nodes numbered according to [15, §11.4] (Dynkin diagrams omitted). $A_{n}$ | ${\mathcal{O}}_{q}(\text{Gr}_{s,n+1})$ | quantum Grassmannian
---|---|---
$B_{n}$ | ${\mathcal{O}}_{q}(\mathbf{Q}_{2n+1})$ | odd quantum quadric
$C_{n}$ | ${\mathcal{O}}_{q}(\mathbf{L}_{n})$ | quantum Lagrangian Grassmannian
$D_{n}$ | ${\mathcal{O}}_{q}(\mathbf{Q}_{2n})$ | even quantum quadric
$D_{n}$ | ${\mathcal{O}}_{q}(\textbf{S}_{n})$ | quantum spinor variety
$E_{6}$ | ${\mathcal{O}}_{q}(\mathbb{OP}^{2})$ | quantum Cayley plane
$E_{7}$ | ${\mathcal{O}}_{q}(\textrm{F})$ | quantum Freudenthal variety
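For the classical series, the irreducibility criterion can be made concrete: using the standard highest-root coefficients, one lists the nodes $\alpha_{x}$ with coefficient $1$. The function names below are ours, and only types $A$–$D$ are covered; the results match the classical rows of Table 1.

```python
# Coefficients of the highest root in the simple-root basis for the
# classical series (standard Lie theory; nodes numbered as in [15, section 11.4]).
def highest_root_coeffs(series, n):
    if series == "A":
        return [1] * n
    if series == "B":
        return [1] + [2] * (n - 1)
    if series == "C":
        return [2] * (n - 1) + [1]
    if series == "D":
        return [1] + [2] * (n - 3) + [1, 1]
    raise ValueError(series)

# A node alpha_x defines an irreducible flag manifold iff its coefficient is 1.
def irreducible_nodes(series, n):
    return [i + 1 for i, c in enumerate(highest_root_coeffs(series, n)) if c == 1]

print(irreducible_nodes("A", 4))  # -> [1, 2, 3, 4]: the Grassmannians
print(irreducible_nodes("B", 4))  # -> [1]: the odd quantum quadric
print(irreducible_nodes("C", 4))  # -> [4]: the Lagrangian Grassmannian
print(irreducible_nodes("D", 5))  # -> [1, 4, 5]: even quadric, spinor varieties
```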
#### 2.3.1 Takeuchi’s equivalence
In the following we denote by
$\mbox{}^{{\mathcal{O}}_{q}(G)}_{{\mathcal{O}}_{q}(G/L_{S})}\mathrm{Mod}_{0}$ the
category whose objects are relative Hopf modules $\mathcal{F}$ such that
$\mathcal{F}{\mathcal{O}}_{q}(G/L_{S})^{+}={\mathcal{O}}_{q}(G/L_{S})^{+}\mathcal{F}$,
and whose morphisms are left ${\mathcal{O}}_{q}(G)$-comodule,
${\mathcal{O}}_{q}(G/L_{S})$-bimodule maps. Let
$\mbox{}^{{\mathcal{O}}_{q}(L_{S})}\mathrm{Mod}$ be the category whose objects are
${\mathcal{O}}_{q}(L_{S})$-comodules and whose morphisms are
${\mathcal{O}}_{q}(L_{S})$-comodule maps. By _Takeuchi's equivalence_, see
for example [26], [13, §2.2], an equivalence of the categories
$\mbox{}^{{\mathcal{O}}_{q}(G)}_{{\mathcal{O}}_{q}(G/L_{S})}\mathrm{Mod}_{0}$ and
$\mbox{}^{{\mathcal{O}}_{q}(L_{S})}\mathrm{Mod}$ is given by the functors
$\displaystyle\Phi:\mbox{}^{{\mathcal{O}}_{q}(G)}_{{\mathcal{O}}_{q}(G/L_{S})}\mathrm{Mod}_{0}\rightarrow\mbox{}^{{\mathcal{O}}_{q}(L_{S})}\mathrm{Mod},\qquad
\Phi(\Gamma)=\Gamma/{\mathcal{O}}_{q}(G/L_{S})^{+}\Gamma,$
$\displaystyle\Psi:\mbox{}^{{\mathcal{O}}_{q}(L_{S})}\mathrm{Mod}\rightarrow\mbox{}^{{\mathcal{O}}_{q}(G)}_{{\mathcal{O}}_{q}(G/L_{S})}\mathrm{Mod}_{0},\qquad
\Psi(V)={\mathcal{O}}_{q}(G)\square_{{\mathcal{O}}_{q}(L_{S})}V.$
Here, for
$\Gamma\in\mbox{}^{{\mathcal{O}}_{q}(G)}_{{\mathcal{O}}_{q}(G/L_{S})}\mathrm{Mod}_{0}$,
the left ${\mathcal{O}}_{q}(L_{S})$-comodule structure on
$\Gamma/{\mathcal{O}}_{q}(G/L_{S})^{+}\Gamma$ is induced by the left
${\mathcal{O}}_{q}(G)$-comodule structure of $\Gamma$. The left
${\mathcal{O}}_{q}(G)$-comodule and ${\mathcal{O}}_{q}(G/L_{S})$-bimodule
structures on the cotensor product
${\mathcal{O}}_{q}(G)\square_{{\mathcal{O}}_{q}(L_{S})}V$ are given by the
coproduct and the ${\mathcal{O}}_{q}(G/L_{S})$-bimodule structure of
${\mathcal{O}}_{q}(G)$, respectively. Takeuchi's equivalence states that
natural isomorphisms are given by
$\displaystyle\mathrm{C}:\Phi\circ\Psi(V)\rightarrow V,\qquad
\Big{[}\sum_{i}a^{i}\otimes
v^{i}\Big{]}\mapsto\sum_{i}\varepsilon(a^{i})v^{i},$
$\displaystyle\mathrm{U}:\mathcal{F}\rightarrow\Psi\circ\Phi(\mathcal{F}),\qquad
f\mapsto f_{(-1)}\otimes[f_{(0)}].$
### 2.4 The Hopf algebras ${\mathcal{O}}_{q}(SO(N))$
In this subsection we give a brief description of the FRT presentation of the
Hopf algebras ${\mathcal{O}}_{q}(SO(N))$; see [17] for details. We first fix
some notation. We denote $i^{\prime}=N+1-i$, and write $N=2n$ if $N$ is even
and $N=2n+1$ if $N$ is odd. We define
$\displaystyle\rho_{i}=\frac{N}{2}-i\ \ \text{if}\ \ i<i^{\prime},\qquad
\rho_{i^{\prime}}=-\rho_{i}\ \ \text{if}\ \ i\leq i^{\prime}.$
Recall that the R-matrix associated to the quantum group
${\mathcal{O}}_{q}(SO(N))$ is given by
$R^{ij}_{mn}=q^{\delta_{ij}-\delta_{ij^{\prime}}}\delta_{im}\delta_{jn}+(q-q^{-1})\theta(i-m)(\delta_{jm}\delta_{in}-\delta_{ji^{\prime}}\delta_{mn^{\prime}}q^{-\rho_{j}-\rho_{m}}).$
(4)
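The structure of the R-matrix (4) is easy to probe numerically. The sketch below implements the formula verbatim for $N=3$ (with $\theta$ the Heaviside step, $\theta(0)=0$) and checks a few sanity properties: the diagonal pattern $q^{\delta_{ij}-\delta_{ij^{\prime}}}$, the off-diagonal term produced by $\theta$, and the degeneration to the identity at $q=1$. The helper names are ours.

```python
import sympy as sp

q = sp.Symbol("q", positive=True)

def rho(i, N):
    # rho_i = N/2 - i for i < i', 0 for i = i', and rho_{i'} = -rho_i
    ip = N + 1 - i
    if i < ip:
        return sp.Rational(N, 2) - i
    if i == ip:
        return 0
    return -(sp.Rational(N, 2) - ip)

def R(i, j, m, n, N):
    # Direct transcription of formula (4)
    d = lambda a, b: 1 if a == b else 0
    pr = lambda a: N + 1 - a          # the primed index a' = N + 1 - a
    theta = 1 if i > m else 0          # Heaviside step, theta(0) = 0
    return (q ** (d(i, j) - d(i, pr(j))) * d(i, m) * d(j, n)
            + (q - 1 / q) * theta
              * (d(j, m) * d(i, n)
                 - d(j, pr(i)) * d(m, pr(n)) * q ** (-rho(j, N) - rho(m, N))))

N = 3
assert R(1, 1, 1, 1, N) == q                       # i = j, i != i'
assert R(2, 2, 2, 2, N) == 1                       # 2 = 2' when N = 3
assert sp.simplify(R(1, 3, 1, 3, N) - 1 / q) == 0  # 3 = 1'
assert sp.simplify(R(2, 1, 1, 2, N) - (q - 1 / q)) == 0
# At q = 1 the R-matrix reduces to the identity
for i in range(1, N + 1):
    for j in range(1, N + 1):
        for m in range(1, N + 1):
            for k in range(1, N + 1):
                expected = 1 if (i, j) == (m, k) else 0
                assert sp.simplify(R(i, j, m, k, N).subs(q, 1) - expected) == 0
print("R-matrix sanity checks passed for N = 3")
```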
Let $U_{q}(\mathfrak{so}_{N})^{\circ}$ denote the dual Hopf algebra of
$U_{q}(\mathfrak{so}_{N})$. Then ${\mathcal{O}}_{q}(SO(N))\subset
U_{q}(\mathfrak{so}_{N})^{\circ}$ is the Hopf subalgebra generated by matrix
coefficients of the $N$-dimensional irreducible
$U_{q}(\mathfrak{so}_{N})$-representation of highest weight $\varpi_{1}$. We
denote the matrix coefficients by $u^{i}_{j}$, $i,j=1,\ldots,N$; they satisfy
the relations
$\displaystyle\sum_{k,l=1}^{N}R^{ij}_{kl}u^{k}_{m}u^{l}_{n}=\sum_{k,l=1}^{N}u^{j}_{k}u^{i}_{l}R^{lk}_{mn},\
\ i,j,m,n=1,\ldots,N,$ (5) $\displaystyle\mathcal{D}_{q}=1,\ \
\sum_{i,j,k=1}^{N}C^{i}_{j}(C^{-1})^{k}_{m}u^{n}_{i}u^{k}_{j}=\sum_{i,j,k=1}^{N}C^{n}_{i}(C^{-1})^{j}_{k}u^{j}_{i}u^{k}_{m}=\delta_{mn},\
\ m,n=1,\ldots,N,$ (6)
where the C-matrix coefficients and the quantum determinant are given as in
[17, §9.3]. The Hopf $\ast$-algebra structure on ${\mathcal{O}}_{q}(SO(N))$ is
determined on generators by
$\displaystyle\Delta(u^{i}_{j})=\sum_{k}u^{i}_{k}\otimes
u^{k}_{j},\qquad\varepsilon(u^{i}_{j})=\delta_{ij},\qquad
S(u^{i}_{j})=q^{\rho_{j}-\rho_{i}}u^{j^{\prime}}_{i^{\prime}},\qquad
u^{i*}_{j}=S(u^{j}_{i}).$
It follows from [17, §9.4] that there exists a dual pairing
$\langle\cdot,\cdot\rangle$ of the Hopf algebras $U_{q}(\mathfrak{so}_{N})$
and ${\mathcal{O}}_{q}(SO(N))$ which in turn induces a left action
$U_{q}(\mathfrak{so}_{N})\otimes{\mathcal{O}}_{q}(SO(N))\ni X\otimes a\mapsto
X\triangleright a\in{\mathcal{O}}_{q}(SO(N))$ and a right action
${\mathcal{O}}_{q}(SO(N))\otimes U_{q}(\mathfrak{so}_{N})\ni a\otimes X\mapsto
a\triangleleft X\in{\mathcal{O}}_{q}(SO(N))$ such that
${\mathcal{O}}_{q}(SO(N))$ is a left and a right
$U_{q}(\mathfrak{so}_{N})$-module $\ast$-algebra. The actions on generators
are given as follows:
$\displaystyle E_{j}\triangleright u^{i}_{j}=u^{i}_{j+1},\quad
E_{j}\triangleright u^{i}_{(j+1)^{\prime}}=-u^{i}_{j^{\prime}},\quad
F_{j}\triangleright u^{i}_{j+1}=u^{i}_{j},\quad
F_{j}\triangleright u^{i}_{j^{\prime}}=-u^{i}_{(j+1)^{\prime}},\ \ j<n,$ (7)
$\displaystyle E_{n}\triangleright u^{i}_{n}=[2]^{1/2}_{q_{2}}u^{i}_{n+1},\quad
E_{n}\triangleright u^{i}_{n+1}=-q_{2}[2]^{1/2}_{q_{2}}u^{i}_{n+2},$ (8)
$\displaystyle F_{n}\triangleright u^{i}_{n+1}=[2]^{1/2}_{q_{2}}u^{i}_{n},\quad
F_{n}\triangleright u^{i}_{n+2}=-q^{-1}_{2}[2]^{1/2}_{q_{2}}u^{i}_{n+1},$ (9)
for $i=1,\ldots,N$ and $N=2n+1$. For $N=2n$ we have
$\displaystyle E_{j}\triangleright u^{i}_{j}=u^{i}_{j+1},\quad
E_{j}\triangleright u^{i}_{(j+1)^{\prime}}=-u^{i}_{j^{\prime}},\quad
F_{j}\triangleright u^{i}_{j+1}=u^{i}_{j},\quad
F_{j}\triangleright u^{i}_{j^{\prime}}=u^{i}_{(j+1)^{\prime}},\ \ j<n,$ (10)
$\displaystyle E_{n}\triangleright u^{i}_{n}=-u^{i}_{n+2},\quad
E_{n}\triangleright u^{i}_{n-1}=u^{i}_{n+1},$ (11)
$\displaystyle F_{n}\triangleright u^{i}_{n+2}=-u^{i}_{n},\quad
F_{n}\triangleright u^{i}_{n+1}=u^{i}_{n-1}.$ (12)
The corresponding right actions for $N=2n+1$ are given by
$\displaystyle u^{i}_{j}\triangleleft F_{i}=u^{i+1}_{j},\quad
u^{(i+1)^{\prime}}_{j}\triangleleft F_{i}=-u^{i^{\prime}}_{j},\quad
u^{i+1}_{j}\triangleleft E_{i}=u^{i}_{j},\quad
u^{i^{\prime}}_{j}\triangleleft E_{i}=-u^{(i+1)^{\prime}}_{j},\ \ i<n,$ (13)
$\displaystyle u^{n}_{j}\triangleleft F_{n}=[2]^{1/2}_{q_{2}}u^{n+1}_{j},\quad
u^{n+1}_{j}\triangleleft F_{n}=-q_{2}[2]^{1/2}_{q_{2}}u^{n+2}_{j},$ (14)
$\displaystyle u^{n+1}_{j}\triangleleft E_{n}=[2]^{1/2}_{q_{2}}u^{n}_{j},\quad
u^{n+2}_{j}\triangleleft E_{n}=-q^{-1}_{2}[2]^{1/2}_{q_{2}}u^{n+1}_{j},$ (15)
and for even $N=2n$
$\displaystyle u^{i}_{j}\triangleleft F_{i}=u^{i+1}_{j},\quad
u^{(i+1)^{\prime}}_{j}\triangleleft F_{i}=-u^{i^{\prime}}_{j},\quad
u^{i+1}_{j}\triangleleft E_{i}=u^{i}_{j},\quad
u^{i^{\prime}}_{j}\triangleleft E_{i}=u^{(i+1)^{\prime}}_{j},\ \ i<n,$ (16)
$\displaystyle u^{n}_{j}\triangleleft F_{n}=-u^{n+2}_{j},\quad
u^{n-1}_{j}\triangleleft F_{n}=u^{n+1}_{j},$ (17)
$\displaystyle u^{n+2}_{j}\triangleleft E_{n}=-u^{n}_{j},\quad
u^{n+1}_{j}\triangleleft E_{n}=u^{n-1}_{j},$ (18)
for $i=1,\ldots,N$. The values $E_{j}\triangleright u^{k}_{l}$,
$F_{j}\triangleright u^{k}_{l}$, $u^{k}_{l}\triangleleft E_{j}$ and
$u^{k}_{l}\triangleleft F_{j}$ for all other cases are zero.
For the convenience of the reader, the next lemma collects the commutation
relations of the generators of ${\mathcal{O}}_{q}(SO(N))$ which will be used
in this paper.
###### Lemma 2.7.
If $u^{i}_{j}$, $i,j=1,\ldots,N$, are the generators of
$\mathcal{O}_{q}(SO(N))$, then the following commutation relations hold:
$\displaystyle u^{i}_{1}u^{i}_{N}=q^{2}u^{i}_{N}u^{i}_{1},\ i\neq i^{\prime},$ (19)
$\displaystyle u^{i}_{l}u^{i}_{k}=qu^{i}_{k}u^{i}_{l},\ l<k,\ i\neq i^{\prime},$ (20)
$\displaystyle u^{j}_{l}u^{i}_{k}=u^{i}_{k}u^{j}_{l},\ l<k,\ i<j,\ l\neq k^{\prime},\ i\neq j^{\prime},$ (21)
$\displaystyle u^{j}_{1}u^{i}_{N}=qu^{i}_{N}u^{j}_{1},\ i<j,\ i\neq j^{\prime},$ (22)
$\displaystyle u^{i}_{1}u^{j}_{N}=qu^{j}_{N}u^{i}_{1}+(q^{2}-1)u^{i}_{N}u^{j}_{1},\ i<j,\ i\neq j^{\prime},$ (23)
$\displaystyle u^{i}_{k}u^{j}_{l}=u^{j}_{l}u^{i}_{k}-(q-q^{-1})u^{j}_{k}u^{i}_{l},\ i<j,\ k<l,\ i\neq j^{\prime},\ k\neq l^{\prime},$ (24)
$\displaystyle u^{i}_{k}u^{j}_{k}=qu^{j}_{k}u^{i}_{k},\ k\neq k^{\prime},\ i<j,\ i\neq j^{\prime},$ (25)
$\displaystyle(u^{1}_{1}u^{2}_{a}-qu^{2}_{1}u^{1}_{a})(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})=q^{2}(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})(u^{1}_{1}u^{2}_{a}-qu^{2}_{1}u^{1}_{a}),$ (26)
$\displaystyle(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})(u^{1}_{a}u^{2}_{N}-qu^{2}_{a}u^{1}_{N})=q^{2}(u^{1}_{a}u^{2}_{N}-qu^{2}_{a}u^{1}_{N})(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N}),$ (27)
where $a=2,\ldots,N-1$.
###### Proof.
Equations (19)-(24) follow directly from the R-matrix relations (5). For
instance, if we take $m=N$, $n=1$ and $i>j$ with $i\neq j^{\prime}$ in
relation (5) we obtain
$\displaystyle
u^{i}_{N}u^{j}_{1}+(q-q^{-1})u^{j}_{N}u^{i}_{1}=q^{-1}u^{j}_{1}u^{i}_{N},$
which yields equation (23). Equation (25) follows by choosing $k=l+1$ in (21)
and acting on both sides with the $E_{i}$'s or $F_{i}$'s from the left, using
formulas (7)-(12). We only give the proof of (27), since the proof of (26) is
analogous. Using (19)-(25) it can be shown that
$u^{1}_{N}(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})=q^{-1}(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})u^{1}_{N}$.
Acting on both sides by $E_{1}$ from the left, we get
$\displaystyle
u^{1}_{N}(u^{1}_{2}u^{2}_{N}-qu^{2}_{2}u^{1}_{N})=q^{-1}(u^{1}_{2}u^{2}_{N}-qu^{2}_{2}u^{1}_{N})u^{1}_{N}.$
The relation
$u^{1}_{N}(u^{1}_{a}u^{2}_{N}-qu^{2}_{a}u^{1}_{N})=q^{-1}(u^{1}_{a}u^{2}_{N}-qu^{2}_{a}u^{1}_{N})u^{1}_{N}$
is then obtained by acting from the left with the $E_{i}$'s according to
identities (7)-(12). The commutation relations
$\displaystyle
u^{1}_{1}(u^{1}_{a}u^{2}_{N}-qu^{2}_{a}u^{1}_{N})=q^{2}(u^{1}_{a}u^{2}_{N}-qu^{2}_{a}u^{1}_{N})u^{1}_{1},\qquad
u^{2}_{N}(u^{1}_{a}u^{2}_{N}-qu^{2}_{a}u^{1}_{N})=(u^{1}_{a}u^{2}_{N}-qu^{2}_{a}u^{1}_{N})u^{2}_{N},$
$\displaystyle
u^{2}_{1}(u^{1}_{a}u^{2}_{N}-qu^{2}_{a}u^{1}_{N})=(u^{1}_{a}u^{2}_{N}-qu^{2}_{a}u^{1}_{N})u^{2}_{1}$
can be easily deduced from (19)-(25). Combining these results, (27) holds for
$a=2,\ldots,N-1$. ∎
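The first step of the proof can be verified symbolically. The sketch below transcribes the R-matrix (4) for $N=3$, models the generators $u^{i}_{j}$ as noncommuting sympy symbols, and checks that relation (5) with $(i,j)=(2,1)$ and $(m,n)=(N,1)$ reduces exactly to the displayed identity $u^{i}_{N}u^{j}_{1}+(q-q^{-1})u^{j}_{N}u^{i}_{1}=q^{-1}u^{j}_{1}u^{i}_{N}$; the helper names are ours.

```python
import sympy as sp

q = sp.Symbol("q", positive=True)
N = 3

def rho(i):
    ip = N + 1 - i
    if i < ip:
        return sp.Rational(N, 2) - i
    if i == ip:
        return 0
    return -(sp.Rational(N, 2) - ip)

def R(i, j, m, n):
    # Transcription of formula (4), with theta the Heaviside step, theta(0) = 0
    d = lambda a, b: 1 if a == b else 0
    pr = lambda a: N + 1 - a
    theta = 1 if i > m else 0
    return (q ** (d(i, j) - d(i, pr(j))) * d(i, m) * d(j, n)
            + (q - 1 / q) * theta
              * (d(j, m) * d(i, n)
                 - d(j, pr(i)) * d(m, pr(n)) * q ** (-rho(j) - rho(m))))

# Noncommuting generators u^i_j of O_q(SO(3))
u = {(i, j): sp.Symbol(f"u{i}{j}", commutative=False)
     for i in range(1, 4) for j in range(1, 4)}

# Relation (5) with (i, j) = (2, 1) and (m, n) = (N, 1) = (3, 1)
i, j, m, n = 2, 1, 3, 1
lhs = sum(R(i, j, k, l) * u[(k, m)] * u[(l, n)]
          for k in range(1, 4) for l in range(1, 4))
rhs = sum(u[(j, k)] * u[(i, l)] * R(l, k, m, n)
          for k in range(1, 4) for l in range(1, 4))

# The identity derived in the proof: u^2_3 u^1_1 + (q - 1/q) u^1_3 u^2_1
#                                    = q^{-1} u^1_1 u^2_3
claimed = (u[(2, 3)] * u[(1, 1)] + (q - 1 / q) * u[(1, 3)] * u[(2, 1)]
           - u[(1, 1)] * u[(2, 3)] / q)
assert sp.expand(lhs - rhs - claimed) == 0
print("Relation (5) reproduces the identity used in the proof of Lemma 2.7")
```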
## 3 The Heckenberger–Kolb calculi and the Kähler structure
In this section we recall important facts about Heckenberger–Kolb calculi for
irreducible quantum flag manifolds as well as the existence of a Kähler
structure for them. For more details we refer the reader to the seminal papers
[13] and [14] for the Heckenberger–Kolb calculus and [21], [24] for Kähler
structures.
As mentioned before, the irreducible quantum flag manifolds are distinguished
by the existence of a unique $q$-deformed de Rham complex. The following
theorem collects the main results of [12], [13], and [21].
###### Theorem 3.1.
For any irreducible quantum flag manifold ${\mathcal{O}}_{q}(G/L_{S})$, there
exists a unique finite-dimensional left ${\mathcal{O}}_{q}(G)$-covariant
differential $\ast$-calculus
$\displaystyle\Omega^{\bullet}_{q}(G/L_{S})\in\mbox{}^{{\mathcal{O}}_{q}(G)}_{{\mathcal{O}}_{q}(G/L_{S})}\mathrm{Mod}_{0},$
of classical dimension
$\displaystyle\mathrm{dim}\,\Phi\big{(}\Omega^{k}_{q}(G/L_{S})\big{)}=\binom{2M}{k},\qquad\mbox{for all}\leavevmode\nobreak\
\leavevmode\nobreak\ k=1,\ldots,2M,$
where $M$ is the complex dimension of the corresponding classical manifold.
Moreover
1. (i)
$\Omega^{\bullet}_{q}(G/L_{S})$ admits a unique left
${\mathcal{O}}_{q}(G)$-covariant complex structure
$\displaystyle\Omega^{\bullet}_{q}(G/L_{S})\simeq\bigoplus_{(a,b)\in\mathbb{N}_{0}^{2}}\Omega^{(a,b)}=:\Omega^{(\bullet,\bullet)},$
2. (ii)
$\Omega^{(1,0)}$ and $\Omega^{(0,1)}$ are irreducible objects in
$\mbox{}^{{\mathcal{O}}_{q}(G)}_{{\mathcal{O}}_{q}(G/L_{S})}\mathrm{Mod}_{0}$.
We recall that the $\ast$-calculus and the covariant complex structure were
given in [21]. The following theorem gives us the existence of the Kähler
structure for the Heckenberger–Kolb calculus.
###### Theorem 3.2.
([21] Theorem 5.10). Let $\Omega^{\bullet}_{q}(G/L_{S})$ be the
Heckenberger–Kolb calculus of the irreducible quantum flag manifold
${\mathcal{O}}_{q}(G/L_{S})$. Then there exists a form
$\kappa\in\Omega^{(1,1)}$ such that $(\Omega^{(\bullet,\bullet)},\kappa)$ is a
covariant Kähler structure for all $q\in\mathbb{R}_{>0}\setminus F$, where $F$
is a finite, possibly empty, subset of $\mathbb{R}_{>0}$. Moreover, any
element of $F$ is necessarily non-transcendental.
It has been shown in [9, §5.7] that if ${\mathcal{O}}_{q}(G)$ is a compact
quantum group algebra and $q$ belongs to a suitable open interval around $1$
then an inner product on $\Omega^{(\bullet,\bullet)}$ can be defined by
$\displaystyle\langle\cdot,\cdot\rangle:\Omega^{(\bullet,\bullet)}\times\Omega^{(\bullet,\bullet)}\rightarrow\mathbb{C},\qquad(\omega,\nu)\mapsto\mbox{{h}}\circ g_{\sigma}(\omega,\nu),$ (28)
where $\mbox{{h}}$ is the Haar state associated to ${\mathcal{O}}_{q}(G)$ [17, §11.3.2].
From now on we consider $q\in(1,\epsilon)$ so that the inner product is well
defined. With respect to this inner product we consider the adjoints
$\mathrm{d}^{\dagger}$, $\partial^{\dagger}$ and
$\overline{\partial}^{\dagger}$. It is shown in [24, §5.4] that
$\displaystyle\mathrm{d}^{\dagger}=-\ast_{\kappa}\circ\,\mathrm{d}\circ\ast_{\kappa},\qquad\partial^{\dagger}=-\ast_{\kappa}\circ\,\overline{\partial}\circ\ast_{\kappa},\qquad\overline{\partial}^{\dagger}=-\ast_{\kappa}\circ\,\partial\circ\ast_{\kappa}.$
(29)
Then Theorem 3.1 and the inner product (28) allow us to define the
$\overline{\partial}$-Dirac operator and the $\overline{\partial}$-Laplacian
$\displaystyle D_{\overline{\partial}}:=\overline{\partial}+\overline{\partial}^{\dagger},\qquad\Delta_{\overline{\partial}}:=(\overline{\partial}+\overline{\partial}^{\dagger})^{2}.$
###### Lemma 3.3.
With respect to the notation in [13] we have
1. (i)
$u^{i}_{j}=c^{\varpi_{1}}_{f_{i},v_{j}}$,
$S(u^{i}_{j})=c^{-w_{0}(\varpi_{1})}_{v_{j},f_{i}}$ and
$z_{ij}:=c^{\varpi_{1}}_{f_{i},v_{N}}c^{-w_{0}\varpi_{1}}_{v_{j},f_{N}}=u^{i}_{N}S(u^{N}_{j})$
are the generators of the algebra ${\mathcal{O}}_{q}(G/L_{S})$,
2. (ii)
$\Phi(\Omega^{(0,1)})=\Omega^{(0,1)}/{\mathcal{O}}_{q}(\mathrm{Q}_{N})^{+}\Omega^{(0,1)}=\mathrm{Lin}_{\mathbb{C}}\big{\\{}[\overline{\partial}z_{Ni}]:i=2,\ldots,N-1\big{\\}}$
and $\Phi(\Omega^{(1,0)})=\mathrm{Lin}_{\mathbb{C}}\big{\\{}[\partial
z_{iN}]:i=2,\ldots,N-1\big{\\}}$
3. (iii)
if we consider $\mathcal{O}_{q}(G/L_{S})$ as a left
$U_{q}(\mathfrak{g})$-module with action $X\raisebox{0.86108pt}{
\scriptsize{\mbox{$\blacktriangleright$}}}z:=z\triangleleft S(X)$ then the
generators of $\mathcal{O}_{q}(\mathrm{Q}_{N})$ are given by
$z:=u^{1}_{1}u^{1}_{N}$, $y:=u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N}$ with
weights $2\varpi_{1}$, $\varpi_{2}$ respectively.
###### Proof.
(i) follows from the notation in [13]. To prove (ii) we know from [13, §3.2]
that $I_{(1)}=\\{i\in
I:(\varpi_{1},\varpi_{1}-\alpha_{1}-\mbox{wt}(v_{i}))=0\\}$. Since $v_{N}$ is
the highest weight vector
$(\varpi_{1},\varpi_{1}-\alpha_{1}-\mbox{wt}(v_{N}))=(\varpi_{1},\varpi_{1}-\alpha_{1}-\varpi_{1})=-(\varpi_{1},\alpha_{1})\neq
0$, thus $N\notin I_{(1)}$. Similarly, we can consider $v_{1}$ as the
lowest weight vector, then
$\mbox{wt}(v_{1})=w_{0}(\varpi_{1})=-\mbox{id}(\varpi_{1})=-\varpi_{1}$. Then
$(\varpi_{1},\varpi_{1}-\alpha_{1}-\mbox{wt}(v_{1}))=(\varpi_{1},\varpi_{1}-\alpha_{1}+\varpi_{1})=2(\varpi_{1},\varpi_{1})-(\varpi_{1},\alpha_{1})=(2r_{1}-1)(\varpi_{1},\alpha_{1})$
where $r_{1}$ is the coefficient of $\alpha_{1}$ in the expression of
$\varpi_{1}$ as linear combination of the $\alpha_{1},\ldots,\alpha_{n}$. From
the form of the Cartan matrix of type $B_{n}$ and $D_{n}$ it can be shown that
$r_{1}=1$ and therefore $1\notin I_{(1)}$. It can also be shown that
$M=N-2$ for both cases ${\mathcal{O}}_{q}(\mathbf{Q}_{2n+1})$ and
${\mathcal{O}}_{q}(\mathbf{Q}_{2n})$, see for example table 2 in [9].
Therefore $I_{(1)}=\\{2,\ldots,N-1\\}$.
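The two numerical facts invoked above — $r_{1}=1$, read off from the Cartan matrices of types $B_{n}$ and $D_{n}$, and $M=N-2$ for the quadrics — reduce to elementary linear algebra and dimension counting. The following Python sketch (our own check, assuming sympy is available; the helper names `cartan` and `dim_so` are not from the paper) verifies both: $r_{1}$ is the $(1,1)$ entry of the inverse Cartan matrix, and the real dimension of $SO(N)/(SO(2)\times SO(N-2))$ is $2(N-2)$, so the complex dimension is $M=N-2$.

```python
from sympy import Matrix

def cartan(series, n):
    """Cartan matrix (a_{ij}) of type B_n or D_n, Bourbaki numbering."""
    A = 2 * Matrix.eye(n)
    if series == "B":
        for i in range(n - 1):
            A[i, i + 1] = A[i + 1, i] = -1
        A[n - 2, n - 1] = -2  # alpha_n is the short root; transposing the
        # convention does not change the (1,1) entry of the inverse
    else:  # "D": node n-2 branches to the two end nodes n-1 and n
        for i in range(n - 2):
            A[i, i + 1] = A[i + 1, i] = -1
        A[n - 3, n - 1] = A[n - 1, n - 3] = -1
    return A

# r_1 = coefficient of alpha_1 in varpi_1 = (1,1) entry of the inverse
for series, ns in (("B", range(2, 7)), ("D", range(4, 8))):
    for n in ns:
        assert cartan(series, n).inv()[0, 0] == 1

def dim_so(m):
    """Real dimension of SO(m)."""
    return m * (m - 1) // 2

# real dim of SO(N)/(SO(2) x SO(N-2)) is 2(N-2), so M = N-2
for N in range(5, 12):
    assert dim_so(N) - dim_so(2) - dim_so(N - 2) == 2 * (N - 2)
```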
For the proof of (iii) we first note that the left and right actions on matrix
coefficients are given by
$\displaystyle X\triangleright c^{\varpi_{1}}_{f_{i},v_{j}}\triangleleft Y=c^{\varpi_{1}}_{f_{i}\triangleleft Y,X\triangleright v_{j}}.$ (30)
By (30) we have $E_{i}\raisebox{0.86108pt}{
\scriptsize{\mbox{$\blacktriangleright$}}}(u^{1}_{1}u^{1}_{N})=(u^{1}_{1}u^{1}_{N})\triangleleft
S(E_{i})=-(c^{\varpi_{1}}_{f_{1}\triangleleft
E_{i},v_{1}}c^{\varpi_{1}}_{f_{1}\triangleleft
K_{i},v_{1}}+c^{\varpi_{1}}_{f_{1},v_{1}}c^{\varpi_{1}}_{f_{1}\triangleleft
E_{i},v_{N}})\triangleleft K_{i}^{-1}=0$ since $f_{1}$ is the highest weight
of the dual representation. On the other hand
$\displaystyle K_{i}\raisebox{0.86108pt}{
\scriptsize{\mbox{$\blacktriangleright$}}}(u^{1}_{1}u^{1}_{N})=(u^{1}_{1}u^{1}_{N})\triangleleft
K^{-1}_{i}=c^{\varpi_{1}}_{f_{1}\triangleleft
K_{i},v_{1}}c^{\varpi_{1}}_{f_{1}\triangleleft
K_{i},v_{N}}=q^{(2\varpi_{1},\alpha_{i})}u^{1}_{1}u^{1}_{N}.$
Similarly, since $f_{1}$ is a highest weight vector we have for $i\neq 1$
$\displaystyle(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})\triangleleft
E_{i}=c^{\varpi_{1}}_{f_{1},v_{1}}c^{\varpi_{1}}_{f_{2}\triangleleft
E_{i},v_{N}}-qc^{\varpi_{1}}_{f_{2}\triangleleft
E_{i},v_{1}}c^{\varpi_{1}}_{f_{1}\triangleleft K_{i},v_{N}}.$ (31)
However, since $v_{1}$ is the lowest weight vector of $V_{\varpi_{1}}$ with
weight $w_{0}\varpi_{1}=-\varpi_{1}$ then for $i\neq 1$ the vector $v_{1}$ is
the lowest weight with weight zero of the
$U_{q}(\mathfrak{sl}_{2})_{i}$-module $U_{q}(\mathfrak{sl}_{2})_{i}v_{N}$
where $U_{q}(\mathfrak{sl}_{2})_{i}:=\langle E_{i},F_{i},K_{i}\rangle$.
Therefore $E_{i}v_{1}=0$ for $i\neq 1$ and $E_{1}v_{1}=v_{2}$. This in turn
implies that $f_{2}\triangleleft E_{i}=0$ for $i\neq 1$. Then the right hand
side of equation (31) is zero. Since $f_{2}\triangleleft E_{1}=f_{1}$ and
$f_{1}\triangleleft K_{1}=f_{1}$ we have
$\displaystyle E_{1}\raisebox{0.86108pt}{
\scriptsize{\mbox{$\blacktriangleright$}}}(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})$
$\displaystyle=-(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})\triangleleft
E_{1}K_{1}^{-1}$
$\displaystyle=-(c^{\varpi_{1}}_{f_{1},v_{1}}c^{\varpi_{1}}_{f_{2}\triangleleft
E_{1},v_{N}}-qc^{\varpi_{1}}_{f_{2}\triangleleft
E_{1},v_{1}}c^{\varpi_{1}}_{f_{1}\triangleleft K_{1},v_{N}})\triangleleft
K_{i}^{-1}$ $\displaystyle=0.$
Finally, since
$K_{i}E_{1}v_{1}=q^{(\alpha_{i},\alpha_{1})}E_{1}K_{i}v_{1}=q^{(\alpha_{i},\alpha_{1})-(\varpi_{1},\alpha_{i})}E_{1}v_{1}$
we have
$\displaystyle K_{i}\raisebox{0.86108pt}{
\scriptsize{\mbox{$\blacktriangleright$}}}(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})$
$\displaystyle=(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})\triangleleft
K^{-1}_{i}$ $\displaystyle=c^{\varpi_{1}}_{f_{1}\triangleleft
K^{-1}_{i},v_{1}}c^{\varpi_{1}}_{f_{2}\triangleleft
K^{-1}_{i},v_{N}}-qc^{\varpi_{1}}_{f_{2}\triangleleft
K^{-1}_{i},v_{1}}c^{\varpi_{1}}_{f_{1}\triangleleft K^{-1}_{i},v_{N}}$
$\displaystyle=q^{(2\varpi_{1}-\alpha_{1},\alpha_{i})}(c^{\varpi_{1}}_{f_{1},v_{1}}c^{\varpi_{1}}_{f_{2},v_{N}}-qc^{\varpi_{1}}_{f_{2},v_{1}}c^{\varpi_{1}}_{f_{1},v_{N}})$
$\displaystyle=q^{(\varpi_{2},\alpha_{i})}(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N}),$
where we used the fact that $\alpha_{i}=\sum_{j}a_{ji}\varpi_{j}$ to derive
the third equality. ∎
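The weight bookkeeping in this proof rests on the identity $2\varpi_{1}-\alpha_{1}=\varpi_{2}$ for types $B_{n}$ and $D_{n}$, used when passing from $q^{(2\varpi_{1}-\alpha_{1},\alpha_{i})}$ to $q^{(\varpi_{2},\alpha_{i})}$. A quick symbolic check (our own, assuming sympy; the helper `cartan` builds the Cartan matrix and is not from the paper), writing $\alpha_{1}=\sum_{j}a_{j1}\varpi_{j}$ as in the proof:

```python
from sympy import Matrix, zeros

def cartan(series, n):
    """Cartan matrix (a_{ij}) of type B_n or D_n, Bourbaki numbering."""
    A = 2 * Matrix.eye(n)
    if series == "B":
        for i in range(n - 1):
            A[i, i + 1] = A[i + 1, i] = -1
        A[n - 2, n - 1] = -2  # alpha_n short
    else:  # "D"
        for i in range(n - 2):
            A[i, i + 1] = A[i + 1, i] = -1
        A[n - 3, n - 1] = A[n - 1, n - 3] = -1
    return A

for series, n in (("B", 3), ("B", 6), ("D", 4), ("D", 7)):
    A = cartan(series, n)
    alpha1 = A[:, 0]  # alpha_1 = sum_j a_{j1} varpi_j: first column of A
    two_varpi1 = zeros(n, 1); two_varpi1[0] = 2
    varpi2 = zeros(n, 1); varpi2[1] = 1
    # 2*varpi_1 - alpha_1 = varpi_2 in the fundamental-weight basis
    assert two_varpi1 - alpha1 == varpi2
```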
Part (iii) of Lemma 3.3 says that the weights $2\varpi_{1},\varpi_{2}$
form a minimal set of generators for the monoid, under addition, of weights of
the highest weight vectors of ${\mathcal{O}}_{q}(Q_{N})$; these are called
spherical weights. In table 2 below we present the spherical weights
for each irreducible quantum flag manifold, as computed by Krämer [19]
in the classical case. We note that for the case of
${\mathcal{O}}_{q}(\mathbf{S}_{2n})$, the weight $2\varpi_{2n-1}$ or
$2\varpi_{2n}$ appears according to the defining crossed node, as presented in
table 1.
Table 2: Spherical weights according to the numbering of the Dynkin nodes in [15, §11.4].
${\mathcal{O}}_{q}(\text{Gr}_{r,s})$ | $\varpi_{1}+\varpi_{r+s-1},\varpi_{2}+\varpi_{r+s-2},\ldots,\varpi_{r}+\varpi_{s}$
---|---
${\mathcal{O}}_{q}(\mathbf{Q}_{2n+1})$ | $2\varpi_{1},\varpi_{2}$
${\mathcal{O}}_{q}(\mathbf{L}_{n})$ | $2\varpi_{1},2\varpi_{2},\ldots,2\varpi_{n}$
${\mathcal{O}}_{q}(\mathbf{Q}_{2n})$ | $2\varpi_{1},\varpi_{2}$
${\mathcal{O}}_{q}(\textbf{S}_{2n})$ | $\varpi_{2},\varpi_{4},\ldots,\varpi_{2n-2},2\varpi_{2n-1}$ or $2\varpi_{2n}$
${\mathcal{O}}_{q}(\textbf{S}_{2n+1})$ | $\varpi_{2},\varpi_{4},\ldots,\varpi_{2n-2},\varpi_{2n}+2\varpi_{2n+1}$
${\mathcal{O}}_{q}(\mathbb{OP}^{2})$ | $\varpi_{1}+\varpi_{6},\varpi_{2}$
${\mathcal{O}}_{q}(\textrm{F})$ | $2\varpi_{7},\varpi_{1},\varpi_{6}$
###### Lemma 3.4.
Let $\partial$, $\overline{\partial}$, $z,y$ be as before. Then the following
identities hold:
1. (i)
$\overline{\partial}zz=q^{2}z\overline{\partial}z$,
2. (ii)
$\overline{\partial}yy=q^{2}y\overline{\partial}y$,
3. (iii)
$\partial zz=q^{-2}z\partial z$,
4. (iv)
$\partial yy=q^{-2}y\partial y$.
###### Proof.
We only show (iii) and (iv); equations (i) and (ii) can be shown in a similar
way by using Lemma 2.7 and Lemma 3.3. By Takeuchi's equivalence there exists
an isomorphism in the category
${}^{{\mathcal{O}}_{q}(G)}_{{\mathcal{O}}_{q}(G/L_{S})}\mathrm{Mod}_{0}$ given by
$\displaystyle\mathrm{U}:\Omega^{(0,1)}\rightarrow\mathcal{O}(SO_{q}(5))\square_{{\mathcal{O}}_{q}(L)}\Phi(\Omega^{(0,1)}),\qquad\omega\mapsto\omega_{(-1)}\otimes[\omega_{(0)}].$
Now we recall by Lemma 3.3 that
$\\{[\overline{\partial}z_{Nj}]:j=2,\ldots,N-1\\}$ and $\\{[\partial z_{jN}]$,
$j=2,\ldots,N-1\\}$ are bases of the vector spaces $\Phi(\Omega^{(0,1)})$ and
$\Phi(\Omega^{(1,0)})$, respectively. Moreover, it was shown in [13, §3.2]
that $[\overline{\partial}z_{ij}]=[\partial z_{ji}]=0$ if $i\neq N$ and
$j\in\\{1,N\\}$.
Then by definition of $\mathrm{U}$ and relations in Lemma 2.7 we have
$\displaystyle\mathrm{U}\big{(}(\overline{\partial}z_{1N})z_{1N}\big{)}$
$\displaystyle=\mathrm{U}(\overline{\partial}z_{1N})z_{1N}$
$\displaystyle=(z_{1N})_{(1)}z_{1N}\otimes[\overline{\partial}(z_{1N})_{(2)}]$
$\displaystyle=(u^{1}_{N}S(u^{N}_{N}))_{(1)}z_{1N}\otimes[\overline{\partial}(u^{1}_{N}S(u^{N}_{N}))_{(2)}]$
$\displaystyle=\sum_{j,k}u^{1}_{j}S(u^{k}_{N})z_{1N}\otimes[\overline{\partial}(u^{j}_{N}S(u^{N}_{k}))]$
$\displaystyle=\sum_{j,k}u^{1}_{j}S(u^{k}_{N})z_{1N}\otimes[\overline{\partial}z_{jk}]$
$\displaystyle=\sum_{k=2}^{N-1}u^{1}_{N}S(u^{k}_{N})z_{1N}\otimes[\overline{\partial}z_{Nk}]$
$\displaystyle=\sum_{k=2}^{N-1}u^{1}_{N}S(u^{k}_{N})u^{1}_{N}S(u^{N}_{N})\otimes[\overline{\partial}z_{Nk}]$
$\displaystyle=\sum_{k=2}^{N-1}q^{\rho_{N}-\rho_{k}}u^{1}_{N}u^{1}_{k^{\prime}}u^{1}_{N}u^{1}_{1}\otimes[\overline{\partial}z_{Nk}]$
$\displaystyle=q^{-2}\sum_{k=2}^{N-1}q^{\rho_{N}-\rho_{k}}u^{1}_{N}u^{1}_{1}u^{1}_{N}u^{1}_{k^{\prime}}\otimes[\overline{\partial}z_{Nk}]$
$\displaystyle=q^{-2}z_{1N}\mathrm{U}(\overline{\partial}z_{1N}).$
This yields equation (iii).
Since $y=q(z_{2,N}-q^{-2}z_{1,N-1})=qz_{2,N}-q^{-1}z_{1,N-1}$, analogous
computations using the relations of Lemma 2.7 give
$\displaystyle\mathrm{U}\big{(}(\overline{\partial}y)y\big{)}$
$\displaystyle=\mathrm{U}(\overline{\partial}y)y$
$\displaystyle=\mathrm{U}(\overline{\partial}(qz_{2,N}-q^{-1}z_{1,N-1}))(qz_{2,N}-q^{-1}z_{1,N-1})$
$\displaystyle=(qz_{2,N}-q^{-1}z_{1,N-1})_{(1)}(qz_{2,N}-q^{-1}z_{1,N-1})\otimes[\overline{\partial}(qz_{2,N}-q^{-1}z_{1,N-1})_{(2)}]$
$\displaystyle=\sum_{l=2}^{N-1}(qu^{2}_{N}S(u^{l}_{N})-q^{-1}u^{1}_{N}S(u^{l}_{N-1}))(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})\otimes[\overline{\partial}(qz_{Nl}-q^{-1}z_{Nl})]$
$\displaystyle=\sum_{l=2}^{N-1}q^{\rho_{N}-\rho_{l}}(qu^{2}_{N}u^{1}_{l^{\prime}}-u^{1}_{N}u^{2}_{l^{\prime}})(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})\otimes[\overline{\partial}(qz_{Nl}-q^{-1}z_{Nl})]$
$\displaystyle=\sum_{l=2}^{N-1}q^{\rho_{N}-\rho_{l}+1}(u^{1}_{l^{\prime}}u^{2}_{N}-qu^{2}_{l^{\prime}}u^{1}_{N})(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})\otimes[\overline{\partial}(qz_{Nl}-q^{-1}z_{Nl})]$
$\displaystyle=q^{-2}\sum_{l=2}^{N-1}q^{\rho_{N}-\rho_{l}+1}(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})(u^{1}_{l^{\prime}}u^{2}_{N}-qu^{2}_{l^{\prime}}u^{1}_{N})\otimes[\overline{\partial}(qz_{Nl}-q^{-1}z_{Nl})]$
$\displaystyle=q^{-2}(u^{1}_{1}u^{2}_{N}-qu^{2}_{1}u^{1}_{N})\sum_{l=2}^{N-1}q^{\rho_{N}-\rho_{l}+1}(u^{1}_{l^{\prime}}u^{2}_{N}-qu^{2}_{l^{\prime}}u^{1}_{N})\otimes[\overline{\partial}(qz_{Nl}-q^{-1}z_{Nl})]$
$\displaystyle=q^{-2}y\mathrm{U}(\overline{\partial}y).$
This proves equation (iv).
∎
In the following lemma we give the existence of a Kähler structure for the
Heckenberger–Kolb calculus over the quantum quadric
${\mathcal{O}}_{q}(\textbf{Q}_{N})$. Since the space of coinvariants
$\mbox{}^{co(L_{S})}\Phi(\Omega^{(1,1)})$ is a 1-dimensional vector space (see
for example [11, §3.3]), it can be shown that this Kähler form is, up to a
scalar, equal to the Kähler form given in [21, §5].
###### Lemma 3.5.
A Kähler form is given by
$\displaystyle\kappa=\mathrm{i}\sum_{i,j=1}^{N}q^{-\rho_{i}}\partial
z_{ij}\wedge\overline{\partial}z_{ji}.$
For $i=1,\ldots,N-2$ we denote $e^{+}_{i}:=[\partial z_{i+1,N}]$ and
$e^{-}_{i}:=q^{-\rho_{i+1}}[\overline{\partial}z_{N,i+1}]$, then
$\displaystyle\mathrm{U}(\kappa)=\mathrm{i}\sum_{i=1}^{N-2}1\otimes
e^{+}_{i}\wedge e^{-}_{i}$ (32)
and there exist polynomials $f_{l,I,J}(q)\in\mathbb{C}[q,q^{-1}]$ such that
$\displaystyle\mathrm{U}(\kappa^{l})=\mathrm{i}^{l}\sum_{I,J\in\Theta(l)}f_{l,I,J}(q)\\!\cdot\\!1\otimes
e^{+}_{I}\wedge e^{-}_{J},\quad\ \ f_{l,I,I}(1)\neq 0,\ \ \text{and}\ \
f_{l,I,J}(1)=0\ \ \text{for}\ \ I\neq J,$ (33)
where $\Theta(l)$ denotes the set of all ordered subsets of
$\\{1,\ldots,N-2\\}$ with $l$ elements. Moreover,
$\mathrm{sgn}(f_{l,I,I}(1))=(-1)^{\frac{l(l-1)}{2}}$ for all $I\in\Theta(l)$.
###### Proof.
That $\kappa$ is real follows from identities (2) and the fact that
$z_{ij}^{*}=z_{ji}$. Equation (32) follows from the calculation
$\displaystyle\mathrm{U}\Big{(}\sum_{i,j=1}^{N}q^{-2\rho_{i}}\partial
z_{ij}\wedge\overline{\partial}z_{ji}\Big{)}$
$\displaystyle\\!=\sum_{i,j=1}^{N}q^{-2\rho_{i}}(z_{ij})_{(1)}(z_{ji})_{(1)}\otimes[\partial((z_{ij})_{(2)})]\wedge[\overline{\partial}((z_{ji})_{(2)})]$
$\displaystyle\\!=\sum_{i,j=1}^{N}\sum_{a,b,c,d}q^{-2\rho_{i}}u^{i}_{a}S(u^{b}_{j})u^{j}_{c}S(u^{d}_{i})\otimes[\partial(u^{a}_{N}S(u^{N}_{b}))]\wedge[\overline{\partial}(u^{c}_{N}S(u^{N}_{d}))]$
$\displaystyle\\!=\sum_{i=1}^{N}\sum_{a,b,d}q^{-2\rho_{i}}u^{i}_{a}S(u^{d}_{i})\otimes[\partial(u^{a}_{N}S(u^{N}_{b}))]\wedge[\overline{\partial}(u^{b}_{N}S(u^{N}_{d}))]$
$\displaystyle\\!=\sum_{i=1}^{N}\sum_{a,d}q^{-2\rho_{i}}u^{i}_{a}S(u^{d}_{i})\otimes[\partial
z_{aN}]\wedge[\overline{\partial}z_{Nd}]$
$\displaystyle\\!=\sum_{d=2}^{N-1}1\otimes[\partial z_{dN}]\wedge
q^{-\rho_{d}}[\overline{\partial}z_{Nd}],$
where the identity
$\sum_{i=1}^{N}q^{-2\rho_{i}}u^{i}_{a}S(u^{d}_{i})=q^{-2\rho_{d}}\delta_{da}\\!\cdot\\!1$
was used in the penultimate line. This implies the left
${\mathcal{O}}_{q}(G)$-coinvariance of $\kappa$ since
$\displaystyle\Big{[}\sum_{i,j=1}^{N}q^{-2\rho_{i}}\partial
z_{ij}\wedge\overline{\partial}z_{ji}\Big{]}=\sum_{i,j=1}^{N}q^{-2\rho_{i}}[\partial
z_{ij}]\wedge[\overline{\partial}z_{ji}]=\sum_{i=2}^{N-1}[\partial
z_{iN}]\wedge q^{-2\rho_{i}}[\overline{\partial}z_{Ni}].$
Now we show that $\kappa$ is closed by a Lie theoretic argument: we set
$V^{(a,b)}:=\Phi(\Omega^{(a,b)})$ and note that $\mathrm{d}\kappa=0$ if and
only if $\partial\kappa=\overline{\partial}\kappa=0$; moreover $\partial\kappa$
and $\overline{\partial}\kappa$ are coinvariant elements of $\Omega^{(2,1)}$
and $\Omega^{(1,2)}$, respectively. But
$\mbox{}^{co(L_{S})}V^{(1,2)}=\mbox{}^{co(L_{S})}(V^{(1,0)}\otimes
V^{(0,2)})$, then $\overline{\partial}\kappa=0$ if
$\mbox{}^{co(L_{S})}(V^{(1,0)}\otimes V^{(0,2)})=0$. Since $V^{(1,0)}\otimes
W$ contains the trivial representation if and only if
$W\simeq(V^{(1,0)})^{*}\simeq V^{(0,1)}$ it is enough to show that $V^{(0,2)}$
is not isomorphic to $V^{(1,0)}$. But $V^{(0,2)}$ is an irreducible component
of $V^{(0,1)}\otimes V^{(0,1)}$. The following formulas for the tensor product
decomposition can be found in Table 5 of [27]: for odd $N=2n+1$ we have
$\displaystyle V^{(0,1)}\otimes V^{(0,1)}\simeq\left\\{\begin{array}[]{lc}V_{4\varpi_{1}}\oplus V_{2\varpi_{1}}\oplus\mathbb{C},&n=2\ \ (V^{(0,1)}\simeq V_{2\varpi_{1}}\ \text{as}\ U_{q}(\mathfrak{sl}_{2})\text{-modules}),\\\ V_{2\varpi_{1}}\oplus V_{2\varpi_{2}}\oplus\mathbb{C},&n=3\ \ (V^{(0,1)}\simeq V_{\varpi_{1}}\ \text{as}\ U_{q}(\mathfrak{so}_{5})\text{-modules}),\\\ V_{2\varpi_{1}}\oplus V_{\varpi_{2}}\oplus\mathbb{C},&n\geq 4\ \ (V^{(0,1)}\simeq V_{\varpi_{1}}\ \text{as}\ U_{q}(\mathfrak{so}_{2n-1})\text{-modules}).\end{array}\right.$
For $n=2$, it can be shown that the $U_{q}(\mathfrak{l}^{s}_{S})$-invariant
element corresponding to $\mathbb{C}$ is not
$U_{q}(\mathfrak{l}_{S})$-invariant. For even $N=2n$ we have
$\displaystyle V^{(0,1)}\otimes V^{(0,1)}\simeq\left\\{\begin{array}[]{lc}V_{2\varpi_{2}}\oplus V_{\varpi_{3}+\varpi_{1}}\oplus\mathbb{C},&n=4\ \ (V^{(0,1)}\simeq V_{\varpi_{2}}\ \text{as}\ U_{q}(\mathfrak{sl}_{4})\text{-modules}),\\\ V_{2\varpi_{1}}\oplus V_{\varpi_{2}}\oplus\mathbb{C},&n\geq 5\ \ (V^{(0,1)}\simeq V_{\varpi_{1}}\ \text{as}\ U_{q}(\mathfrak{so}_{2n-2})\text{-modules}).\end{array}\right.$
These formulas show that $V^{(1,0)}$ does not appear as an irreducible
component of $V^{(0,1)}\otimes V^{(0,1)}$ and therefore
$\overline{\partial}\kappa=0$. A similar argument shows that $\partial\kappa=0$.
By Lemma 4.6 in [24] we have that $\kappa$ is central. The bijectivity of the
Lefschetz operators $L^{n-k}$ is shown in [21] for $q$ belonging to a
sufficiently small open interval around $1$.
Now we proceed to calculate the powers of $\kappa$ by using the relations in
$\Phi(\Omega^{(\bullet,0)})$ and $\Phi(\Omega^{(0,\bullet)})$. It follows from
[13] that
$\Phi(\Omega^{(0,1)})=\mathrm{Lin}_{\mathbb{C}}\\{e^{-}_{1},\ldots,e^{-}_{N-2}\\}$
and the algebra $\Phi(\Omega^{(0,\bullet)})$ is isomorphic to the quantum
exterior algebra $\Lambda({\mathcal{O}}^{N-2}_{q})$ of the quantum Euclidean
space ${\mathcal{O}}^{N-2}_{q}$ in [17, §9.3]. Then the relations in [13]
imply that the relations in $\Phi(\Omega^{(0,\bullet)})$ for $N=2n+1$ are
given by
$\displaystyle e^{-}_{i}\wedge e^{-}_{i}=0,\ \ i\neq i^{\prime},\qquad e^{-}_{i}\wedge e^{-}_{j}=-q^{-1}e^{-}_{j}\wedge e^{-}_{i},\ \ i<j,\ i\neq j^{\prime},$ (34)
$\displaystyle e^{-}_{i^{\prime}}\wedge e^{-}_{i}+e^{-}_{i}\wedge e^{-}_{i^{\prime}}=(q-q^{-1})\sum_{1\leq j<i}\lambda_{i}^{-1}\lambda_{j}q^{j-i+1}e^{-}_{j}\wedge e^{-}_{j^{\prime}},\ \ i<i^{\prime},$ (35)
$\displaystyle e^{-}_{n}\wedge e^{-}_{n}=(q^{1/2}-q^{-1/2})\sum_{1\leq j\leq n-1}\lambda_{n}^{-1}\lambda_{j}q^{j-(n-1)}e^{-}_{j}\wedge e^{-}_{j^{\prime}},$ (36)
where $\lambda_{i}$ is a non-zero complex number. For even $N=2n$ the
relations are given by (34) and (35) for possibly different
constants $\lambda_{i}$. The relations for $\Phi(\Omega^{(\bullet,0)})$ are
given by applying the involution to the relations (34)-(36). If we apply the
involution to (35) we get
$\displaystyle e^{+}_{i}\wedge e^{+}_{i^{\prime}}+e^{+}_{i^{\prime}}\wedge
e^{+}_{i}=(q-q^{-1})\sum_{1\leq
j<i}\overline{\lambda}_{i}^{-1}\overline{\lambda}_{j}q^{j-i+1}e^{+}_{j^{\prime}}\wedge
e^{+}_{j},\ \ i<i^{\prime}.$ (37)
We claim that the relations in $\Phi(\Omega^{(\bullet,0)})$ are given by
$\displaystyle e^{+}_{i}\wedge e^{+}_{i}=0,\ \ i\neq i^{\prime},\qquad e^{+}_{i}\wedge e^{+}_{j}=-qe^{+}_{j}\wedge e^{+}_{i},\ \ i<j,\ i\neq j^{\prime},$ (38)
$\displaystyle e^{+}_{i^{\prime}}\wedge e^{+}_{i}+e^{+}_{i}\wedge e^{+}_{i^{\prime}}=-(q-q^{-1})\sum_{1\leq j<i}\overline{\lambda}_{i}^{-1}\overline{\lambda}_{j}q^{-(j-i+1)}e^{+}_{j}\wedge e^{+}_{j^{\prime}},\ \ i<i^{\prime},$ (39)
$\displaystyle e^{+}_{n}\wedge e^{+}_{n}=-(q^{1/2}-q^{-1/2})\sum_{1\leq j\leq n-1}\overline{\lambda}_{n}^{-1}\overline{\lambda}_{j}q^{-(j-(n-1))}e^{+}_{j}\wedge e^{+}_{j^{\prime}}.$ (40)
Relations (38) are shown by applying the involution to relations (34). For the
rest of the paper we denote $\nu_{\tau}:=\tau-\tau^{-1}$ if $\tau>0$. We use
induction to prove (39):
$\displaystyle\overline{\lambda}_{i+1}\big{(}e^{+}_{i+1}\wedge e^{+}_{(i+1)^{\prime}}+e^{+}_{(i+1)^{\prime}}\wedge e^{+}_{i+1}\big{)}=\nu_{q}\sum_{1\leq j<i+1}\overline{\lambda}_{j}q^{j-(i+1)+1}e^{+}_{j^{\prime}}\wedge e^{+}_{j}\qquad\mathrm{by}\ \ (37)$
$\displaystyle=q^{-i}\nu_{q}\sum_{1\leq j<i+1}\overline{\lambda}_{j}q^{j}\big{(}\\!-\nu_{q}\sum_{1\leq k<j}\overline{\lambda}_{j}^{-1}\overline{\lambda}_{k}q^{-k+j-1}e^{+}_{k}\wedge e^{+}_{k^{\prime}}-e^{+}_{j}\wedge e^{+}_{j^{\prime}}\big{)}\qquad(\text{ind. hyp.})$
$\displaystyle=q^{-i}\nu_{q}\Big{(}\\!\\!-\nu_{q}\sum_{1\leq j<i+1}\sum_{1\leq
k<j}\overline{\lambda}_{k}q^{2j-k-1}e^{+}_{k}\wedge
e^{+}_{k^{\prime}}-\sum_{1\leq
j<i+1}\overline{\lambda}_{j}q^{j}e^{+}_{j}\wedge e^{+}_{j^{\prime}}\Big{)}$
$\displaystyle=q^{-i}\nu_{q}\Big{(}\\!\\!-\nu_{q}\sum_{1\leq k<i}\sum_{k+1\leq
j\leq i}\overline{\lambda}_{k}q^{2j-k-1}e^{+}_{k}\wedge
e^{+}_{k^{\prime}}-\sum_{1\leq
j<i+1}\overline{\lambda}_{j}q^{j}e^{+}_{j}\wedge e^{+}_{j^{\prime}}\Big{)}$
$\displaystyle=q^{-i}\nu_{q}\Big{(}\\!\\!-\nu_{q}\sum_{1\leq
k<i}\overline{\lambda}_{k}q^{-k-1}\Big{(}\frac{q^{2(k+1)}-q^{2i+2}}{1-q^{2}}\Big{)}e^{+}_{k}\wedge
e^{+}_{k^{\prime}}-\sum_{1\leq
j<i+1}\overline{\lambda}_{j}q^{j}e^{+}_{j}\wedge e^{+}_{j^{\prime}}\Big{)}$
$\displaystyle=q^{-i}\nu_{q}\Big{(}\sum_{1\leq
k<i}\overline{\lambda}_{k}(q^{k}-q^{2i-k})e^{+}_{k}\wedge
e^{+}_{k^{\prime}}-\sum_{1\leq
j<i+1}\overline{\lambda}_{j}q^{j}e^{+}_{j}\wedge e^{+}_{j^{\prime}}\Big{)}$
$\displaystyle=q^{-i}\nu_{q}\Big{(}\sum_{1\leq
k<i}\overline{\lambda}_{k}q^{k}e^{+}_{k}\wedge e^{+}_{k^{\prime}}-\sum_{1\leq
k<i}\overline{\lambda}_{k}q^{2i-k}e^{+}_{k}\wedge
e^{+}_{k^{\prime}}-\sum_{1\leq
j<i+1}\overline{\lambda}_{j}q^{j}e^{+}_{j}\wedge e^{+}_{j^{\prime}}\Big{)}$
$\displaystyle=\nu_{q}\Big{(}-\sum_{1\leq
k<i}\overline{\lambda}_{k}q^{i-k}e^{+}_{k}\wedge
e^{+}_{k^{\prime}}-\overline{\lambda}_{i}e^{+}_{i}\wedge
e^{+}_{i^{\prime}}\Big{)}.$
Similar calculations show the identity (40) by using (39).
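The geometric-series step in the induction above, summing $q^{2j-k-1}$ over $k+1\leq j\leq i$ into $q^{-k-1}\frac{q^{2(k+1)}-q^{2i+2}}{1-q^{2}}$, can be verified symbolically. A minimal sympy check of our own over a few concrete ranges (assuming sympy is available):

```python
from sympy import symbols, simplify

q = symbols("q")

# Check: sum_{j=k+1}^{i} q^{2j-k-1} = q^{-k-1} (q^{2(k+1)} - q^{2i+2}) / (1 - q^2)
for k in range(0, 4):
    for i in range(k + 1, k + 5):
        lhs = sum(q ** (2 * j - k - 1) for j in range(k + 1, i + 1))
        rhs = q ** (-k - 1) * (q ** (2 * (k + 1)) - q ** (2 * i + 2)) / (1 - q ** 2)
        assert simplify(lhs - rhs) == 0
```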
Now assuming that the equation (33) holds for $l$, we have
$\displaystyle\mathrm{U}(\kappa^{l+1})$
$\displaystyle=\mathrm{U}(\kappa)\wedge\mathrm{i}^{l}\sum_{I,J\in\Theta(l)}f_{l,I,J}(q)\\!\cdot\\!1\otimes
e^{+}_{I}\wedge e^{-}_{J}$
$\displaystyle=\mathrm{i}^{l+1}\sum_{I,J\in\Theta(l)}\sum_{i}f_{l,I,J}(q)\\!\cdot\\!1\otimes
e^{+}_{I}\wedge e^{+}_{i}\wedge e^{-}_{i}\wedge e^{-}_{J},$
where we use the fact that $\kappa$ is central. First we can focus on the case
$e^{+}_{I}\wedge e^{+}_{i}\wedge e^{-}_{i}\wedge e^{-}_{I}$ for
$I=\\{i_{1},\ldots,i_{l}\\}$ fixed. Then different cases arise:
Case $i\neq i^{\prime}$ and $i\in I$: $e^{+}_{I}\wedge e^{+}_{i}\wedge
e^{-}_{i}\wedge e^{-}_{I}=0$, since we can move $e^{-}_{i}$ to the right (resp.
left) by using (34) (resp. (38)) if $i<i^{\prime}$ (resp. $i>i^{\prime}$).
Case $i,i^{\prime}\notin I$: in this case we can move $e^{+}_{i}$ to the left
and $e^{-}_{i}$ to the right by using (38) and (34), respectively, and we get
$\displaystyle e^{+}_{I}\wedge e^{+}_{i}\wedge e^{-}_{i}\wedge
e^{-}_{I}=(-1)^{l}q^{l-2\alpha_{i}}e^{+}_{i_{1}}\wedge\cdots\wedge
e^{+}_{i}\wedge\cdots\wedge e^{+}_{l}\wedge e^{-}_{i_{1}}\wedge\cdots\wedge
e^{-}_{i}\wedge\cdots\wedge e^{-}_{l},$
where $\alpha_{i}\in\mathbb{N}_{0}$.
Case $i_{k}=i^{\prime}\geq i$ for some $k<l$ and $i\notin I$: in
this case we can move $e^{+}_{i}$ to the left by commuting with
$e^{+}_{i_{l}},\ldots,e^{+}_{i_{k+1}}$ by using relations (38), and we get
$\displaystyle e^{+}_{i_{1}}\wedge\cdots\wedge e^{+}_{i_{l}}$
$\displaystyle\wedge e^{+}_{i}\wedge e^{-}_{i}\wedge
e^{-}_{i_{1}}\wedge\cdots\wedge e^{-}_{i_{l}}$
$\displaystyle=\\!(-q)^{-(l-k)}e^{+}_{i_{1}}\wedge\cdots\wedge
e^{+}_{i_{k}}\wedge e^{+}_{i}\wedge\cdots\wedge e^{+}_{l}\wedge
e^{-}_{i}\wedge e^{-}_{i_{1}}\wedge\cdots\wedge e^{-}_{i_{l}}.$
Now if $i_{k}=i^{\prime}>i$ then we can use (39) with
$\gamma_{ij}:=-\lambda^{-1}_{i}\lambda_{j}$ and we get
$\displaystyle e^{+}_{I}\wedge e^{+}_{i}\wedge e^{-}_{i}\wedge
e^{-}_{I}=\\!(-q)^{-(l-k)}e^{+}_{i_{1}}\wedge\cdots\wedge e^{+}_{i_{k}}\wedge
e^{+}_{i}\wedge\cdots\wedge e^{+}_{i_{l}}\wedge e^{-}_{i}\wedge
e^{-}_{i_{1}}\wedge\cdots\wedge e^{-}_{i_{l}}$
$\displaystyle=\\!(-q)^{-(l-k)}\nu_{q}\\!\sum_{1\leq
j<i}\overline{\gamma}_{ij}q^{-(j-i+1)}e^{+}_{i_{1}}\wedge\cdots\wedge
e^{+}_{i_{k-1}}\wedge e^{+}_{j}\wedge e^{+}_{j^{\prime}}\wedge\cdots\wedge
e^{+}_{i_{l}}\wedge e^{-}_{i}\wedge e^{-}_{i_{1}}\wedge\cdots\wedge
e^{-}_{i_{l}}$
$\displaystyle\quad-(-q)^{-(l-k)}e^{+}_{i_{1}}\wedge\cdots\wedge
e^{+}_{i_{k-1}}\wedge e^{+}_{i}\wedge e^{+}_{i_{k}}\wedge\cdots\wedge
e^{+}_{i_{l}}\wedge e^{-}_{i}\wedge e^{-}_{i_{1}}\wedge\cdots\wedge
e^{-}_{i_{l}}.$
Since $j^{\prime}\neq i_{r}$ and $i^{\prime}\neq i_{r}$ for $r\neq k$ then we
can move $e^{+}_{j}$ and $e^{+}_{i}$ to the left and $e^{+}_{j^{\prime}}$ and
$e^{-}_{i}$ to the right and we obtain
$\displaystyle e^{+}_{I}\wedge e^{+}_{i}\wedge e^{-}_{i}\wedge e^{-}_{I}$
$\displaystyle=\\!(-q)^{-(l-k)}\nu_{q}\\!\\!\sum_{1\leq
j_{s}<i}\\!\overline{\gamma}_{ij}q^{-(j_{s}-i+1)+\alpha_{is}}e^{+}_{i_{1}}\wedge\\!\cdots\\!\wedge
e^{+}_{j_{s}}\wedge\\!\cdots\\!\wedge
e^{+}_{j^{\prime}_{s}}\wedge\\!\cdots\\!\wedge e^{+}_{i_{l}}\wedge
e^{-}_{i_{1}}\wedge\\!\cdots\\!\wedge e^{-}_{i}\\!\wedge\\!\cdots\\!\wedge
e^{-}_{i_{l}}$
$\displaystyle\quad+(-1)^{l}q^{1-l+2\beta_{i}}e^{+}_{i_{1}}\wedge\cdots\wedge
e^{+}_{i}\wedge\cdots\wedge e^{+}_{i_{k}}\wedge\cdots\wedge
e^{+}_{i_{l}}\wedge e^{-}_{i_{1}}\wedge\cdots\wedge
e^{-}_{i}\wedge\cdots\wedge e^{-}_{i_{k}}\wedge\cdots\wedge e^{-}_{i_{l}},$
where $\alpha_{is}\in\mathbb{Z}$ and $\beta_{i}\in\mathbb{N}_{0}$. Now if
$i=i^{\prime}=n$ and $n\in I$ then by (35) and (39) we have
$\displaystyle e^{+}_{I}\wedge e^{+}_{n}\wedge e^{-}_{n}\wedge
e^{-}_{I}=\\!(-1)^{l-1}q^{l-1-2\gamma_{n}}e^{+}_{i_{1}}\wedge\\!\cdots\\!\wedge
e^{+}_{n}\wedge e^{+}_{n}\wedge\\!\cdots\\!\wedge e^{+}_{i_{l}}\wedge
e^{-}_{i_{1}}\wedge\\!\cdots\\!\wedge e^{-}_{n}\wedge
e^{-}_{n}\wedge\\!\cdots\\!\wedge e^{-}_{i_{l}}$
$\displaystyle=\\!(-1)^{l}q^{l-1-2\gamma_{i}}\nu_{q^{1/2}}^{2}\\!\\!\sum_{1\leq
j,r<n}\\!\\!\overline{\gamma}_{nj}\gamma_{nr}q^{r-j}e^{+}_{i_{1}}\wedge\\!\cdots\\!\wedge
e^{+}_{j}\wedge e^{+}_{j^{\prime}}\wedge\\!\cdots\\!\wedge e^{+}_{i_{l}}\wedge
e^{-}_{i_{1}}\wedge\\!\cdots\\!\wedge e^{-}_{r}\wedge
e^{-}_{r^{\prime}}\wedge\\!\cdots\\!\wedge e^{-}_{i_{l}}$
$\displaystyle=\\!(-1)^{l}q^{l-1-2\gamma_{i}}\nu_{q^{1/2}}^{2}\\!\\!\sum_{1\leq
j<n}\\!\\!|\gamma_{nj}|^{2}e^{+}_{i_{1}}\wedge\cdots\wedge e^{+}_{j}\wedge
e^{+}_{j^{\prime}}\wedge\cdots\wedge e^{+}_{i_{l}}\wedge
e^{-}_{i_{1}}\wedge\cdots\wedge e^{-}_{j}\wedge
e^{-}_{j^{\prime}}\wedge\cdots\wedge e^{-}_{i_{l}}$
$\displaystyle+\\!(-1)^{l}q^{l-1-2\gamma_{i}}\nu_{q^{1/2}}^{2}\\!\\!\sum_{\begin{subarray}{c}1\leq
j,r<n\\\ j\neq
r\end{subarray}}\\!\\!\overline{\gamma}_{nj}\gamma_{nr}q^{r-j}e^{+}_{i_{1}}\\!\wedge\\!\cdots\\!\wedge
e^{+}_{j}\wedge e^{+}_{j^{\prime}}\wedge\\!\cdots\\!\wedge e^{+}_{i_{l}}\wedge
e^{-}_{i_{1}}\wedge\\!\cdots\\!\wedge e^{-}_{r}\wedge
e^{-}_{r^{\prime}}\wedge\\!\cdots\\!\wedge e^{-}_{i_{l}},$
where $\gamma_{n}\in\mathbb{Z}$. Analogously, we can move $e^{+}_{j}$,
$e^{-}_{j}$ to the left and $e^{+}_{j^{\prime}}$ and $e^{-}_{j^{\prime}}$ to
the right by using (34) and (38), respectively.
Case $i>i_{l}$ and $i=i^{\prime}_{k}$ for some $k$: then we can move
$e^{+}_{i}$ to the right by using (34) and we get
$\displaystyle e^{+}_{i_{1}}\wedge\cdots\wedge e^{+}_{i_{l}}\wedge
e^{+}_{i}\wedge e^{-}_{i}\wedge e^{-}_{i_{1}}\wedge\cdots\wedge e^{-}_{i_{l}}$
$\displaystyle=(-1)^{k-1}q^{k-1}e^{+}_{i_{1}}\wedge\cdots\wedge
e^{+}_{i_{l}}\wedge e^{+}_{i}\wedge e^{-}_{i_{1}}\wedge\cdots\wedge
e^{-}_{i}\wedge e^{-}_{i_{k}}\wedge\cdots\wedge e^{-}_{i_{l}}$
$\displaystyle=(-1)^{k-1}q^{k-1}\nu_{q}\sum_{j<{i_{k}}}\gamma_{i_{k}j}q^{j-i_{k}+1}e^{+}_{i_{1}}\wedge\cdots\wedge
e^{+}_{i_{l}}\wedge e^{+}_{i}\wedge e^{-}_{i_{1}}\wedge\cdots\wedge
e^{-}_{j}\wedge e^{-}_{j^{\prime}}\wedge\cdots\wedge e^{-}_{i_{l}}$
$\displaystyle\quad-(-1)^{k-1}q^{k-1}e^{+}_{i_{1}}\wedge\cdots\wedge
e^{+}_{i_{l}}\wedge e^{+}_{i}\wedge e^{-}_{i_{1}}\wedge\cdots\wedge
e^{-}_{i_{k}}\wedge e^{-}_{i}\wedge\cdots\wedge e^{-}_{i_{l}}$
$\displaystyle=(-1)^{k-1}q^{k-1}\nu_{q}\sum_{j_{s}<{i_{k}}}\gamma_{i_{k}j_{s}}q^{j_{s}-i_{k}+1+\theta_{is}}e^{+}_{i_{1}}\\!\wedge\\!\cdots\\!\wedge
e^{+}_{i_{l}}\wedge e^{+}_{i}\wedge e^{-}_{i_{1}}\wedge\\!\cdots\\!\wedge
e^{-}_{j_{s}}\wedge\\!\cdots\\!\wedge
e^{-}_{j^{\prime}_{s}}\wedge\\!\cdots\\!\wedge e^{-}_{i_{l}}$
$\displaystyle\quad+(-1)^{l}q^{l-1}e^{+}_{i_{1}}\wedge\cdots\wedge
e^{+}_{i_{l}}\wedge e^{+}_{i}\wedge e^{-}_{i_{1}}\wedge\cdots\wedge
e^{-}_{i_{k}}\wedge\cdots\wedge e^{-}_{i_{l}}\wedge e^{-}_{i},$
where $\theta_{is}\in\mathbb{Z}$.
Case $I\neq J$: in the same way it can be shown that $e^{+}_{I}\wedge
e^{+}_{i}\wedge e^{-}_{i}\wedge e^{-}_{J}$ is a linear combination of elements
of the form $e^{+}_{P}\wedge e^{-}_{P}$ and $e^{+}_{S}\wedge e^{-}_{T}$ for
$S\neq T$, where $P,S,T\in\Theta(l+1)$.
Finally, if $I^{\prime}\in\Theta(l+1)$ then from the previous cases we see
that the coefficient $f_{l+1,I^{\prime},I^{\prime}}(q)$ of
$e^{+}_{I^{\prime}}\wedge e^{-}_{I^{\prime}}$ is of the form
$\displaystyle(-1)^{l}\sum_{t}f_{l,I_{t},I_{t}}(q)q^{t}+(q-q^{-1})\xi(q)+(q^{1/2}-q^{-1/2})^{2}\zeta(q)+\sum_{r}\chi_{r}(q)f_{l,I_{r},J_{r}}(q),$
where $I_{t},I_{r},J_{r}\in\Theta(l)$ and
$\xi(q),\zeta(q),\chi_{r}(q)\in\mathbb{C}[q,q^{-1}]$. Since all the
$f_{l,I_{t},I_{t}}(1)$ have the same sign and $f_{l,I_{r},J_{r}}(1)=0$, then
$f_{l+1,I^{\prime},I^{\prime}}(1)=(-1)^{l}\sum_{t}f_{l,I_{t},I_{t}}(1)\neq 0$
and
$\displaystyle\mathrm{sgn}(f_{l+1,I^{\prime},I^{\prime}}(1))=(-1)^{l}\mathrm{sgn}\big{(}\sum_{t}f_{l,I_{t},I_{t}}(1)\big{)}=(-1)^{l}(-1)^{\frac{l(l-1)}{2}}=(-1)^{\frac{l(l+1)}{2}}.$
∎
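The sign bookkeeping in the last step rests on the exact exponent identity $l+\tfrac{l(l-1)}{2}=\tfrac{l(l+1)}{2}$. A short illustrative script (not part of the proof) confirms the corresponding parity claim:

```python
# Verify the sign identity used above:
# (-1)^l * (-1)^{l(l-1)/2} = (-1)^{l(l+1)/2},
# which follows from l + l(l-1)/2 = l(l+1)/2.
def sign_identity_holds(l):
    lhs = (-1) ** l * (-1) ** (l * (l - 1) // 2)
    rhs = (-1) ** (l * (l + 1) // 2)
    return lhs == rhs

assert all(sign_identity_holds(l) for l in range(200))
```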
## 4 Spectrum of the $\overline{\partial}$-Laplace operator on zero forms
In this last section we present the proof of the main result of the paper by
using previous results and the representation theory of the quantum group
$U_{q}(\mathfrak{g})$.
###### Lemma 4.1.
There exist constants $\theta$, $\theta^{\prime}$, $\theta^{\prime\prime}$ and
$\theta^{\prime\prime\prime}$ such that
1. (i)
$-\ast_{\kappa}(\partial y\wedge\ast_{\kappa}(\overline{\partial}y))=\theta y^{2}$,
2. (ii)
$-\ast_{\kappa}(\ast_{\kappa}(\overline{\partial}y)\wedge\partial z)=\theta^{\prime}yz$,
3. (iii)
$-\ast_{\kappa}(\partial y\wedge\ast_{\kappa}(\overline{\partial}z))=\theta^{\prime\prime}yz$,
4. (iv)
$-\ast_{\kappa}(\ast_{\kappa}(\overline{\partial}z)\wedge\partial z)=\theta^{\prime\prime\prime}z^{2}$,
where the constants $\theta$, $\theta^{\prime}$ and
$\theta^{\prime\prime\prime}$ are non-zero.
###### Proof.
We first note that both sides of the equations (i)-(iv) are highest weight
vectors with the same weight. Since the decomposition
$\displaystyle\Omega^{(0,0)}={\mathcal{O}}_{q}(\mathbf{Q}_{N})=\bigoplus_{k,l\in\mathbb{N}_{0}}U_{q}(\mathfrak{g})\raisebox{0.86108pt}{
\scriptsize{\mbox{$\blacktriangleright$}}}z^{k}y^{l}$
is multiplicity-free we have that there are constants
$\theta,\theta^{\prime},\theta^{\prime\prime},\theta^{\prime\prime\prime}$
such that equations (i)-(iv) hold.
To show that $\theta$ is non-zero, we note first that $\partial
y\wedge\ast_{\kappa}(\overline{\partial}y)\neq 0$ if and only if $\partial
y\wedge\overline{\partial}y$ is not primitive. This follows from the fact that
$\displaystyle\ast_{\kappa}(\overline{\partial}y)\wedge\partial
y=-i^{-1}\frac{1}{(n-1)!}L^{n-1}(\overline{\partial}y)\wedge\partial
y=-i^{-1}\frac{1}{(n-1)!}L^{n-2+1}(\overline{\partial}y\wedge\partial y).$
A similar argument shows that
$\ast_{\kappa}(\overline{\partial}y)\wedge\partial z$, $\partial
y\wedge\ast_{\kappa}(\overline{\partial}z)$ and
$\ast_{\kappa}(\overline{\partial}z)\wedge\partial z$ are non-zero if and only
if $\overline{\partial}y\wedge\partial z$, $\partial
y\wedge\overline{\partial}z$ and $\overline{\partial}z\wedge\partial z$ are
not primitive, respectively.
Since the Lefschetz map is a right $U_{q}(\mathfrak{g})$-module map, if
$\overline{\partial}z\wedge\partial z$ is primitive then
$X\raisebox{0.86108pt}{
\scriptsize{\mbox{$\blacktriangleright$}}}(\overline{\partial}z\wedge\partial
z)$ is primitive for any $X\in U_{q}(\mathfrak{g})$. We claim that there
exists $X\in U_{q}(\mathfrak{g})$ such that
$\displaystyle[X\raisebox{0.86108pt}{
\scriptsize{\mbox{$\blacktriangleright$}}}(\overline{\partial}y\wedge\partial
y)]=\gamma[\overline{\partial}z_{Ni}]\wedge[\partial
z_{jN}]\qquad\text{for some}\ \ i,j\in\\{2,\ldots,N-1\\}\ \ \text{and}\ \ \gamma\neq 0,$
or equivalently, there exist $X^{\prime}\in U_{q}(\mathfrak{g})$ and
$\gamma^{\prime}\neq 0$ such that
$\displaystyle[(\overline{\partial}y\wedge\partial y)\triangleleft
X^{\prime}]=\gamma^{\prime}[\overline{\partial}z_{Ni}]\wedge[\partial
z_{iN}].$
We show this for the case $\mathfrak{so}_{2n+1}$ since for
$\mathfrak{so}_{2n}$ the proof is similar. We define the sequence
$\displaystyle
X_{1}:=F_{2},\,X_{2}:=F_{3},\ldots,X_{n-1}:=F_{n},\,X_{n}:=F_{n},\,X_{n+1}:=F_{n-1},\ldots,X_{2n-2}:=F_{2},$
$\displaystyle
X_{2n-1}:=F_{2},\ldots,X_{3n-3}:=F_{n},\,X_{3n-2}:=F_{n},\,X_{3n-1}:=F_{n-1},\ldots,X_{4n-4}:=F_{2},$
$\displaystyle X_{4n-3}:=F_{1},\,X_{4n-2}:=F_{1}.$
If $1\leq i_{1}<\cdots<i_{r}\leq 4n-2$ then by using the formulas (13)-(15) it
can be shown that if $y\triangleleft X_{i_{1}}\cdots X_{i_{r}}$ is non-zero
then it is a multiple of one element of the following list
1. (i’)
$z_{i,N}-q^{\gamma_{i}}z_{1,i^{\prime}}$, $2\leq i\leq N-1$,
$\gamma_{i}\in\mathbb{Z}$,
2. (ii’)
$z_{N,N}-q^{\eta_{1}}z_{N-1,N-1}+q^{\eta_{2}}z_{2,2}-q^{\eta_{3}}z_{1,1}$,
$\eta_{i}\in\mathbb{Z}$,
3. (iii’)
$z_{i,N-1}-q^{a_{i}}z_{2,i^{\prime}}$, $2\leq i\leq N-2$,
$a_{i}\in\mathbb{Z}$,
4. (iv’)
$z_{N,N-1}-\mu z_{2,1}$, $\mu<0$,
and the only way to obtain $z_{N,N-1}-\mu z_{2,1}$ is by acting by
$F_{2}\cdots F^{2}_{n}\cdots F_{2}F^{2}_{1}$ on $y=q(z_{2,N}-q^{-2}z_{1,N-1})$
from the right. If we set $X:=X_{1}\cdots X_{4n-2}$ then by using the previous
list (i’)-(iv’) and Lemma 3.3 (ii), we obtain
$\displaystyle[(\partial(y)\wedge\overline{\partial}(y))\triangleleft X]$
$\displaystyle\\!=\sum_{\begin{subarray}{c}r=0\\\
p\in\mathcal{P}_{4n-2,r}\end{subarray}}^{4n-2}q^{\alpha_{r}}[\partial(y\triangleleft
X_{p(1)}\cdots X_{p(r)})]\wedge[\overline{\partial}(y\triangleleft
X_{p(r+1)}\cdots X_{p(4n-2)})]$
$\displaystyle=q^{\alpha}[\partial(y\triangleleft F_{2}\cdots F_{n}^{2}\cdots
F_{2})]\wedge[\overline{\partial}(y\triangleleft F_{2}\cdots F_{n}^{2}\cdots
F_{2}F_{1}^{2})]$
$\displaystyle=\gamma[\partial(z_{N-1,N}\\!-\\!q^{\gamma_{N-1}}z_{1,2})]\wedge[\overline{\partial}(z_{N,N-1}\\!-\\!\mu
z_{2,1})]$ $\displaystyle=\gamma e^{+}_{N-2}\wedge e^{-}_{N-2},$ (41)
where $\gamma<0$ and $\mathcal{P}_{4n-2,r}$ is the set of
$(r,4n-2-r)$-shuffles, that is, the set of permutations
$p\in\mathcal{P}_{4n-2}$ such that $p(1)<\cdots<p(r)$ and
$p(r+1)<\cdots<p(4n-2)$. On the other hand, the form $e^{+}_{N-2}\wedge
e^{-}_{N-2}$ is not primitive since by Lemma 3.5 we have
$\displaystyle[\kappa^{N-3}]\wedge e^{+}_{N-2}\wedge e^{-}_{N-2}$
$\displaystyle=e^{+}_{N-2}\wedge[\kappa]^{N-3}\wedge e^{-}_{N-2}$
$\displaystyle=\sum_{I,J\in\Theta(N-3)}f_{N-3,I,J}(q)e^{+}_{N-2}\wedge
e^{+}_{I}\wedge e^{-}_{J}\wedge e^{-}_{N-2}$
$\displaystyle=\sum_{I\in\Theta(N-3)}f_{N-3,I,I}(q)e^{+}_{N-2}\wedge
e^{+}_{I}\wedge e^{-}_{I}\wedge e^{-}_{N-2}$ $\displaystyle\quad+\sum_{I\neq
J\in\Theta(N-3)}f_{N-3,I,J}(q)e^{+}_{N-2}\wedge e^{+}_{I}\wedge
e^{-}_{J}\wedge e^{-}_{N-2}$ $\displaystyle=f_{N-3,J,J}(q)e^{+}_{N-2}\wedge
e^{+}_{1}\wedge\cdots\wedge e^{+}_{N-3}\wedge e^{-}_{1}\wedge\cdots\wedge
e^{-}_{N-3}\wedge e^{-}_{N-2}$
$\displaystyle=(-1)^{N-3}q^{-(N-3)}f_{N-3,J,J}(q)e^{+}_{1}\wedge\cdots\wedge
e^{+}_{N-2}\wedge e^{-}_{1}\wedge\cdots\wedge e^{-}_{N-2},$
where $J=\\{1,\ldots,N-3\\}$ and $f_{N-3,J,J}(q)\neq 0$ if $q$ belongs to a
suitable interval around $1$. Since $e^{+}_{N-2}\wedge e^{-}_{N-2}$ is not
primitive then $(\partial(y)\wedge\overline{\partial}(y))\triangleleft
X_{1}\cdots X_{4n-2}$ is not primitive and therefore
$\partial(y)\wedge\overline{\partial}(y)$ is not primitive. The calculation
for $\overline{\partial}y\wedge\partial z$ is similar by using the fact that
$\displaystyle(\overline{\partial}y\wedge\partial z)\triangleleft
F_{1}=q\overline{\partial}(z_{2,N}-q^{-2}z_{1,N-1})\wedge\partial(z_{2,N}+qz_{1,N-1}),$
and we can proceed by acting with $X$ and obtain
$\displaystyle[(\overline{\partial}y\wedge\partial z)\triangleleft F_{1}X]$
$\displaystyle=[(\overline{\partial}y\wedge\partial z)\triangleleft
F_{1}(F_{2}\cdots F_{n}^{2}\cdots F_{2})^{2}F_{1}^{2}]$
$\displaystyle=q^{\beta}[\overline{\partial}(y\triangleleft F_{2}\cdots
F_{n}^{2}\cdots
F_{2}F_{1}^{2})]\wedge[\partial\big{(}(z_{2,N}+qz_{1,N-1})\triangleleft
F_{2}\cdots F_{n}^{2}\cdots F_{2}\big{)}]$
$\displaystyle=\gamma^{\prime}[\overline{\partial}(z_{N,N-1}-\mu
z_{2,1})]\wedge[\partial(z_{N-1,N}+q^{\gamma^{\prime}_{N-1}}z_{1,2})]$
$\displaystyle=\gamma^{\prime}e^{-}_{N-2}\wedge e^{+}_{N-2},$
where $\gamma^{\prime}\neq 0$. Similarly, it can be shown that
$e^{-}_{N-2}\wedge e^{+}_{N-2}$ is not primitive by using the fact that the
Kähler form $\kappa$ is a non-zero multiple of
$\displaystyle\mathrm{i}\sum_{i,j=1}^{N}q^{2(N-i)}\overline{\partial}z_{ij}\wedge\partial
z_{ji}$
satisfying results analogous to those of Lemma 3.5, i.e., there exist
$g_{l,I,J}(q)\in\mathbb{C}[q,q^{-1}]$ such that
$\displaystyle[\kappa^{l}]=\mathrm{i}^{l}\sum_{I,J\in\Theta(l)}g_{l,I,J}(q)e^{-}_{I}\wedge
e^{+}_{J},$
where $\mathrm{sgn}(g_{l,I,I}(1))=(-1)^{\frac{l(l-1)}{2}}$ for all
$I\in\Theta(l)$ and $g_{l,I,J}(1)=0$, $I\neq J$.
The calculation for $\overline{\partial}z\wedge\partial z$ proceeds much in
the same way: acting twice with $F_{1}$ from the right, we obtain
$\displaystyle(\overline{\partial}z_{1,N}\wedge\partial z_{1,N})\triangleleft
F_{1}^{2}$
$\displaystyle=\alpha\overline{\partial}(z_{2,N}+qz_{1,N-1})\wedge\partial(z_{2,N}+qz_{1,N-1})+\beta\overline{\partial}(z_{1,N})\wedge\partial(z_{2,N-1})$
$\displaystyle\quad+\zeta\overline{\partial}(z_{2,N-1})\wedge\partial(z_{1,N}),$
(42)
where $\alpha,\beta,\zeta>0$. Now a routine calculation shows that
$\displaystyle\big{[}(\overline{\partial}(z_{1,N})\wedge\partial(z_{2,N-1}))\triangleleft(F_{2}\cdots
F_{n}^{2}\cdots F_{2})^{2}F_{1}^{2}\big{]}=0,$ (43)
$\displaystyle(\overline{\partial}(z_{2,N-1})\wedge\partial(z_{1,N}))\triangleleft(F_{2}\cdots
F_{n}^{2}\cdots
F_{2})^{2}F_{1}^{2}=\chi\big{(}-\overline{\partial}(z_{N,2}+q^{\alpha}z_{N-1,1})\wedge\partial(z_{2,N}+q^{\beta}z_{1,N-1})$
$\displaystyle+\vartheta_{1}\overline{\partial}(z_{N-1,2})\wedge\partial(z_{2,N-1})+\vartheta_{2}\overline{\partial}(z_{N,1})\wedge\partial(z_{1,N})\big{)},$
(44)
for a certain real $\chi>0$ and real $\vartheta_{1}$, $\vartheta_{2}$. On the
other hand, calculations similar to those in (41) show that
$\displaystyle\big{[}(\overline{\partial}(z_{2,N}+qz_{1,N-1})\wedge\partial(z_{2,N}+qz_{1,N-1}))\triangleleft(F_{2}\cdots
F_{n}^{2}\cdots F_{2}$
$\displaystyle)^{2}F_{1}^{2}\big{]}=-\gamma^{\prime}e^{-}_{N-2}\wedge
e^{+}_{N-2},$ (45)
for a real $\gamma^{\prime}>0$. Now combining equations (42)-(45) we have
$\displaystyle\big{[}(\overline{\partial}z_{1,N}\wedge\partial
z_{1,N})\triangleleft F_{1}^{2}(F_{2}\cdots F_{n}^{2}\cdots F_{2}$
$\displaystyle)^{2}F_{1}^{2}\big{]}=-\gamma^{\prime}e^{-}_{N-2}\wedge
e^{+}_{N-2}-\chi e^{-}_{1}\wedge e^{+}_{1}$
which is not primitive since $-\gamma^{\prime}$ and $-\chi$ are both negative
and we are assuming that $q$ belongs to a suitable open interval around $1$.
Finally, similar calculations work for the case $\mathfrak{so}_{2n}$ where the
element $X$ is given by $X:=(F_{2}\cdots F_{n}F_{n-2}\cdots
F_{2})^{2}F_{1}^{2}.$
∎
With this lemma and the results of the previous sections, we are now ready to
calculate the eigenvalues of the Laplace operator
$\Delta_{\overline{\partial}}$ on zero forms.
###### Proposition 4.2.
The eigenvalues of the Dolbeault Laplace operator
$\Delta_{\overline{\partial}}$ on zero forms tend to infinity with finite
multiplicities, and therefore the operator has compact resolvent.
###### Proof.
Combining Lemma 3.4, Lemma 4.1 and the identities (29) we have
$\displaystyle\Delta_{\overline{\partial}}(y^{k}z^{l})$
$\displaystyle=\overline{\partial}^{\dagger}\overline{\partial}(y^{k}z^{l})$
$\displaystyle=\overline{\partial}^{\dagger}\big{(}(k)_{q^{2}}y^{k-1}\overline{\partial}yz^{l}+(l)_{q^{-2}}y^{k}\overline{\partial}zz^{l-1}\big{)}$
$\displaystyle=-\ast_{\kappa}\circ\partial\big{(}(k)_{q^{2}}y^{k-1}\ast_{\kappa}(\overline{\partial}y)z^{l}+(l)_{q^{-2}}y^{k}\ast_{\kappa}(\overline{\partial}z)z^{l-1}\big{)}$
$\displaystyle=-\ast_{\kappa}\big{(}(k)_{q^{2}}(k-1)_{q^{-2}}y^{k-2}\partial
y\wedge\ast_{\kappa}(\overline{\partial}y)z^{l}+(k)_{q^{2}}y^{k-1}(\partial\circ\ast_{\kappa}\circ\overline{\partial}y)z^{l}$
$\displaystyle\quad+(k)_{q^{2}}(l)_{q^{2}}y^{k-1}(\ast_{\kappa}\circ\overline{\partial}y)\wedge\partial
zz^{l-1}+(l)_{q^{-2}}(k)_{q^{-2}}y^{k-1}\partial
y\wedge\ast_{\kappa}(\overline{\partial}z)z^{l-1}$
$\displaystyle\quad+(l)_{q^{-2}}y^{k}\partial\circ\ast_{\kappa}(\overline{\partial}z)z^{l-1}+(l)_{q^{-2}}(l-1)_{q^{2}}y^{k}\ast_{\kappa}(\overline{\partial}z)\wedge\partial
zz^{l-2}\big{)}$ $\displaystyle=-(k)_{q^{2}}(k-1)_{q^{-2}}y^{k-2}\ast_{\kappa}(\partial
y\wedge\ast_{\kappa}(\overline{\partial}y))z^{l}-(k)_{q^{2}}y^{k-1}\ast_{\kappa}(\partial\circ\ast_{\kappa}\circ\overline{\partial}y)z^{l}$
$\displaystyle\quad-(k)_{q^{2}}(l)_{q^{2}}y^{k-1}\ast_{\kappa}((\ast_{\kappa}\circ\overline{\partial}y)\wedge\partial
z)z^{l-1}-(l)_{q^{-2}}(k)_{q^{-2}}y^{k-1}\ast_{\kappa}(\partial
y\wedge\ast_{\kappa}(\overline{\partial}z))z^{l-1}$
$\displaystyle\quad-(l)_{q^{-2}}y^{k}\ast_{\kappa}(\partial\circ\ast_{\kappa}(\overline{\partial}z))z^{l-1}-(l)_{q^{-2}}(l-1)_{q^{2}}y^{k}\ast_{\kappa}(\ast_{\kappa}(\overline{\partial}z)\wedge\partial
z)z^{l-2}$
$\displaystyle=\big{[}\theta(k)_{q^{2}}(k-1)_{q^{-2}}+(k)_{q^{2}}\mu_{y}+(l)_{q^{2}}(k)_{q^{2}}\theta^{\prime}+(l)_{q^{-2}}(k)_{q^{-2}}\theta^{\prime\prime}+(l)_{q^{-2}}\mu_{z}$
$\displaystyle\quad+(l)_{q^{-2}}(l-1)_{q^{2}}\theta^{\prime\prime\prime}\big{]}y^{k}z^{l},$
where $\mu_{z}$ and $\mu_{y}$ are the eigenvalues of $z$ and $y$ respectively.
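The bookkeeping in the eigenvalue formula above can be checked numerically. In the sketch below, the $q$-integer is taken as the standard $(n)_{p}=(1-p^{n})/(1-p)$, and the values assigned to $\theta,\theta^{\prime},\theta^{\prime\prime},\theta^{\prime\prime\prime},\mu_{y},\mu_{z}$ are purely illustrative placeholders, not constants derived in the paper:

```python
# Numerical sketch of the eigenvalue of the Dolbeault Laplacian on y^k z^l,
# following the displayed formula above.  Only the combinatorial structure
# matches the text; all constants are illustrative placeholders.

def qint(n, p):
    """q-integer (n)_p = (1 - p^n) / (1 - p) = 1 + p + ... + p^{n-1}."""
    return (1 - p ** n) / (1 - p)

def eigenvalue(k, l, q, theta, theta1, theta2, theta3, mu_y, mu_z):
    # Term-by-term transcription of the formula for Delta(y^k z^l).
    q2, qm2 = q ** 2, q ** -2
    return (theta * qint(k, q2) * qint(k - 1, qm2)
            + qint(k, q2) * mu_y
            + qint(l, q2) * qint(k, q2) * theta1
            + qint(l, qm2) * qint(k, qm2) * theta2
            + qint(l, qm2) * mu_z
            + qint(l, qm2) * qint(l - 1, q2) * theta3)
```

For $q>1$ the factors $(n)_{q^{-2}}$ stay bounded while $(n)_{q^{2}}$ grows geometrically, which is precisely the dichotomy exploited in the remainder of the proof.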
A differential calculus
$(\Omega^{(\bullet,\bullet)},\partial,\overline{\partial})$ is said to be
_connected_ if
$\displaystyle\text{Ker}(\partial:\Omega^{(0,0)}\rightarrow\Omega^{(1,0)})=\text{Ker}(\overline{\partial}:\Omega^{(0,0)}\rightarrow\Omega^{(0,1)})=\mathbb{C}1.$
Since the Heckenberger–Kolb calculus is connected [1] and
$\mathrm{ker}(\Delta_{\overline{\partial}})=\mathrm{ker}(\partial)\cap\mathrm{ker}(\overline{\partial})$
[24], the eigenvalues $\mu_{z}$ and $\mu_{y}$ are nonzero. Now we note that
$(l)_{q^{-2}}(k)_{q^{-2}}\theta^{\prime\prime}$ and $(l)_{q^{-2}}\mu_{z}$ have
finite limits as $k,l\rightarrow\infty$ and therefore do not contribute to
the asymptotic behaviour of the eigenvalues. By Lemma 4.1 we have that
$\theta^{\prime}\neq 0$. It is not difficult to see that if
$\theta^{\prime}<0$ then the eigenvalues would tend to $-\infty$ for $k$
fixed and $l\rightarrow\infty$. Therefore $\theta^{\prime}>0$.
Now we can consider the sum
$\displaystyle\big{(}\theta\mu_{y}^{-1}(k-1)_{q^{-2}}+1\big{)}(k)_{q^{2}}\mu_{y}+(l)_{q^{-2}}(l-1)_{q^{2}}\theta^{\prime\prime\prime}.$
(46)
First we note that
$\displaystyle\theta<-(1-q^{-2})\mu_{y}\Rightarrow\big{(}1+(k-1)_{q^{-2}}\theta\mu_{y}^{-1}\big{)}<0\qquad\text{as}\ \ k\rightarrow\infty,$
and this would imply that the eigenvalues of $\Delta_{\overline{\partial}}$ on
$y^{k}$ are negative. Therefore $\theta\geq-(1-q^{-2})\mu_{y}$. If
$\theta=-(1-q^{-2})\mu_{y}$ then
$(\theta\mu_{y}^{-1}(k-1)_{q^{-2}}+1)(k)_{q^{2}}\mu_{y}$ converges when
$k\rightarrow\infty$. On the other hand, if $\theta>-(1-q^{-2})\mu_{y}$ then
$(\theta\mu_{y}^{-1}(k-1)_{q^{-2}}+1)(k)_{q^{2}}\mu_{y}$ tends to infinity.
Now if $\theta^{\prime\prime\prime}<0$ then
$\displaystyle\Delta_{\overline{\partial}}(z^{l})=\theta^{\prime\prime\prime}(l)_{q^{-2}}(l-1)_{q^{2}}+(l)_{q^{-2}}\mu_{z}=(\mu_{z}^{-1}\theta^{\prime\prime\prime}(l-1)_{q^{2}}+1)\mu_{z}(l)_{q^{-2}}<0,$
as $l\rightarrow\infty$, which contradicts the fact that the eigenvalues on
$z^{l}$ are non-negative. Hence $\theta^{\prime\prime\prime}\geq 0$, and since
$\theta^{\prime\prime\prime}\neq 0$ by Lemma 4.1, we have
$\theta^{\prime\prime\prime}>0$; this implies that the second term of (46)
tends to infinity as $l\rightarrow\infty$. Combining the above results we
conclude that the eigenvalues of $\Delta_{\overline{\partial}}$ tend to
infinity as $k,l\rightarrow\infty$. ∎
### Acknowledgments
FDG was supported by the Charles University PRIMUS grant _Spectral
Noncommutative Geometry of Quantum Flag Manifolds_ PRIMUS/21/SCI/026. The
author would like to thank Réamonn Ó Buachalla for his support and useful
discussions during the preparation of this paper.
## References
* [1] A. Carotenuto, F. Díaz García, R. Ó Buachalla: _A Borel–Weil theorem for the irreducible quantum flag manifolds_. To appear in International Mathematics Research Notices.
* [2] F. D’Andrea, L. Da̧browski: _Dirac operators on quantum projective spaces_. Commun. Math. Phys. 295, 731-790, 2010.
* [3] F. D’Andrea, L. Da̧browski, G. Landi: _The noncommutative geometry of the quantum projective plane_. Rev. Math. Phys. 20, 979-1006, 2008.
* [4] F. D’Andrea, L. Da̧browski, G. Landi, E. Wagner: _Dirac operators on all Podleś quantum spheres_. J. Noncommut. Geom. 1, 213-239, 2007.
* [5] L. Da̧browski, G. Landi, A. Sitarz, W. van Suijlekom, J.C. Várilly: _The Dirac operator on $SU_{q}(2)$_. Commun. Math. Phys. 259, 729-759, 2005.
* [6] L. Da̧browski, A. Sitarz: _Dirac operator on the standard Podleś quantum sphere_. Banach Cent. Publ. 61, 49-58, 2003.
* [7] B. Das, R. Ó Buachalla, P. Somberg: _A Dolbeault–Dirac spectral triple for quantum projective space_. Doc. Math. 25, 1079-1157, 2020.
* [8] B. Das, R. Ó Buachalla, P. Somberg: _Compact quantum homogeneous Kähler spaces_. arXiv:1910.14007.
* [9] B. Das, R. Ó Buachalla, P. Somberg: _Spectral gaps for twisted Dolbeault–Dirac operators over the irreducible quantum flag manifolds_. arXiv:2206.10719.
* [10] F. Díaz García, R. Ó Buachalla, E. Wagner: _A Dolbeault–Dirac spectral triple for the $B_{2}$-irreducible quantum flag manifold_. arXiv:2109.09885.
* [11] F. Díaz García, A. Krutov, R. Ó Buachalla, P. Somberg, K. R. Strung: _Positive line bundles over the irreducible quantum flag manifolds_. arXiv:1912.08802.
* [12] I. Heckenberger, S. Kolb: _The locally finite part of the dual coalgebra of irreducible quantum flag manifolds_. Proc. London Math. Soc. 89, 457-484, 2004.
* [13] I. Heckenberger, S. Kolb: _De Rham complex for the quantized irreducible flag manifolds_. J. Algebra, 305, 704-741, 2006.
* [14] I. Heckenberger, S. Kolb: _Differential forms via the Bernstein–Gelfand–Gelfand resolution for quantized irreducible flag manifolds_. J. Geom. Phys. 57, 2316-2344, 2007.
* [15] J. Humphreys: _An introduction to Lie algebras and representation theory_. Springer-Verlag, New York, 1972.
* [16] D. Huybrechts: _Complex geometry: an introduction_. Universitext, Springer-Verlag, Heidelberg, 2005.
* [17] A. Klimyk, K. Schmüdgen: _Quantum groups and their representations_. Texts and Monograph in Physics, Springer-Verlag, Berlin, 1998.
* [18] U. Krähmer, M. Tucker-Simmons: _On the Dolbeault–Dirac operator on quantized symmetric spaces_. Trans. London Math. Soc. 2, 33-56, 2015.
* [19] M. Krämer: _Sphärische Untergruppen in kompakten zusammenhängenden Liegruppen_. Compositio Math. 38, 129-153, 1979.
* [20] S. Majid: _Noncommutative Riemannian and spin geometry of the standard $q$-sphere_. Comm. Math. Phys. 256, 255-285, 2005.
* [21] M. Matassa: _Kähler structures on quantum irreducible flag manifolds_. J. Geom. Phys. 145 (2019), 103477.
* [22] M. Matassa: _The Parthasarathy formula and a spectral triple for the quantum Lagrangian Grassmannians of rank two_. Lett. Math. Phys. 109, 1703-1734, 2019.
* [23] S. Neshveyev, L. Tuset: _The Dirac operator on compact quantum groups_. J. Reine. Angew. Math. 641, 1-20, 2010.
* [24] R. Ó Buachalla: _Noncommutative Kähler structures of quantum homogeneous spaces_. Adv. Math. 322, 892-939, 2017.
* [25] R. Ó Buachalla: _Noncommutative complex structures on quantum homogeneous spaces_. J. Geom. Phys. 99, 154-173, 2016.
* [26] M. Takeuchi: _Relative Hopf modules - equivalence and freeness criteria_. J. Algebra, 60, 452-471, 1979.
* [27] A.L. Onishchik, E. B. Vinberg: _Lie groups and algebraic groups_. Springer-Verlag, Heidelberg, 1990.
11institutetext: Max Planck Institute for Astrophysics, Karl-Schwarzschild-Strasse 1, 85748 Garching, Germany
22institutetext: Argelander-Institut für Astronomie, Universität Bonn, Auf dem Hügel 71, 53121 Bonn, Germany
33institutetext: Astronomical Institute “Anton Pannekoek”, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands
44institutetext: European Southern Observatory, Karl-Schwarzschild-Strasse 2, 85748 Garching, Germany
55institutetext: Dipartimento di Fisica e Astronomia “Galileo Galilei”, Univ. di Padova, Vicolo dell’Osservatorio 3, I-35122 Padova, Italy
66institutetext: Institute of Astrophysics, KU Leuven, Celestijnlaan 200D, 3001 Leuven, Belgium
# The initial spin distribution of B-type stars revealed by the split main
sequences of young star clusters
Chen Wang$^{1}$, Ben Hastings$^{2}$, Abel Schootemeijer$^{2}$, Norbert Langer$^{2}$, Selma E. de Mink$^{1,3}$, Julia Bodensteiner$^{4}$, Antonino Milone$^{5}$, Stephen Justham$^{1}$, Pablo Marchant$^{6}$
(Preprint online version: August 02, 2022)
Spectroscopic observations of stars in young open clusters have revealed
evidence for a dichotomous distribution of stellar rotational velocities, with
10$-$30% of stars rotating slowly and the remaining 70$-$90% rotating fairly
rapidly. At the same time, high-precision multiband photometry of young star
clusters shows a split main sequence band, which is again interpreted as due
to a spin dichotomy. Recent papers suggest that extreme rotation is required
to reproduce the photometric split. Our new grids of MESA models and the
prevalent SYCLIST models show, however, that initial slow (0$-$35% of the
linear Keplerian rotation velocities) and intermediate (50$-$65% of the
Keplerian rotation velocities) rotation are adequate to explain the
photometric split. These values are consistent with the recent spectroscopic
measurements of cluster and field stars, and are likely to reflect the birth
spin distributions of upper main-sequence stars. A fraction of the initially
faster-rotating stars may be able to reach near-critical rotation at the end
of their main-sequence evolution and produce Be stars in the turn-off region
of young star clusters. However, we find that the presence of Be stars up to
two magnitudes below the cluster turnoff advocates for a crucial role of
binary interaction in creating Be stars. We argue that surface chemical
composition measurements may help distinguish these two Be star formation
channels. While only the most rapidly rotating, and therefore nitrogen-
enriched, single stars can evolve into Be stars, slow pre-mass-transfer
rotation and inefficient accretion allow for mild or no enrichment even in
critically rotating accretion-induced Be stars. Our results shed new light on
the origin of the spin distribution of young and evolved B-type main sequence
stars.
###### Key Words.:
stars: rotation – stars: evolution – stars: clusters – Magellanic Clouds
## 1 Introduction
Rotation has a profound impact on stellar structure and evolution (see Maeder
& Meynet, 2000, for a review). On the one hand, it makes a star oblate,
reducing its effective gravity, thereby reducing its equatorial flux and
temperature, as described by the von Zeipel theorem (von Zeipel, 1924). This
is the so-called gravity darkening. Since the centrifugal force is larger at
the equator than at the poles, a rotating star appears dimmer and cooler if it
is seen equator-on as opposed to pole-on. On the other hand, rotation is also
predicted to trigger internal mixing in the stellar interior (Endal & Sofia,
1978), such that fresh hydrogen is injected into the stellar center, while
nuclear-burning products are dragged up to the stellar surface. As a
consequence, rotationally induced mixing may extend the main-sequence (MS)
lifetime of a rotating star and, on average, may make it more luminous and
hotter than its nonrotating counterpart. The consequences of rotation on the
position of a star in the Hertzsprung-Russell diagram (HRD) or color-magnitude
diagram (CMD) are determined by these effects. Thereby, rotation may affect
the age determinations of star clusters based on single star models.
Spectroscopic observations in the Tarantula nebula revealed a genuinely
bimodal rotational velocity distribution for the early B-type stars
(7$-$15$\,\mathrm{M}_{\odot}\,$), with one peak at $0\leq v_{\mathrm{e}}\leq
100{\rm\,km\,s^{-1}}$ and the other at $200\leq v_{\mathrm{e}}\leq
300{\rm\,km\,s^{-1}}$ (Dufton et al., 2013), where $v_{\mathrm{e}}$ is the
equatorial velocity. A similar bimodal rotational velocity distribution is
also reported in the Galactic field for late B- and early A-type stars in a
mass range of 2.5$-$4$\,\mathrm{M}_{\odot}\,$(Zorec & Royer, 2012), whereas
stars with masses between 1.6$\,\mathrm{M}_{\odot}\,$and
2.3$\,\mathrm{M}_{\odot}\,$are found to have a unimodal equatorial velocity
distribution peaking at $v_{\mathrm{e}}\sim 200{\rm\,km\,s^{-1}}$ (Zorec &
Royer, 2012).
It is interesting that such a bimodal rotational velocity distribution also
appears to be required to explain the split MSs detected in young (younger
than $\sim$700 Myr) star clusters in the Large and Small Magellanic Clouds
(LMC and SMC, respectively), based on Hubble Space Telescope (HST) photometry
(Bastian & de Mink, 2009; Girardi et al., 2011; Yang et al., 2013; Brandt &
Huang, 2015; Niederhofer et al., 2015; D’Antona et al., 2015; Correnti et al.,
2017; Li et al., 2017; Wang et al., 2022), with the blue MS containing 10–30%
of the stars
and the red MS containing the remaining stars. Other mechanisms including a
prolonged star formation history, different metallicities, or different
chemical compositions fail at explaining the split MS feature (see Milone et
al., 2016, and references therein). Although it has been widely accepted that
the red and blue MSs are comprised of stars with fast and slow rotation,
respectively, the difference in rotation speed which is required to explain
the observed color split of the two MSs is still under discussion (Milone et
al., 2018; Gossage et al., 2019; Wang et al., 2022). Most of the papers
explain the red and blue MSs as comprised of stars rotating at 90% and 0% of
their critical velocities by comparing the prevalent SYnthetic CLusters
Isochrones & Stellar Tracks (SYCLIST) single-star models111
https://www.unige.ch/sciences/astro/evolution/en/database/syclist/ (Georgy et
al., 2013) with the photometric observations (see D’Antona et al., 2017;
Milone et al., 2018, and references therein). Interpreting the red MS stars
with extremely fast rotation is further supported by a large fraction of Be
stars, that is the B-type stars with emission lines that probably arise from
decretion disks that are induced by near-critical rotation, detected in the
Magellanic Cloud clusters younger than $\sim$300 Myr (Keller & Bessell, 1998;
Bastian et al., 2017; Milone et al., 2018).
However, it is unlikely that the majority of stars are born with near-critical
rotation. Huang et al. (2010) argued that nonevolved critically rotating
B-type stars should be scarce ($<$ 1.3%). Meanwhile, the peak of the fast
component in the abovementioned bimodal velocity distribution of field B- and
A-type stars (Zorec & Royer, 2012; Dufton et al., 2013) only corresponds to
$\sim$50–60% of the critical velocities. The most compelling piece of evidence
against the red MS stars rotating near critically arises from the
spectroscopic measurement of the velocity of the red MS stars in the young LMC
cluster NGC 1818, which shows a mean projected equatorial rotational velocity
$v\sin i$ of $\sim 202\pm 23{\rm\,km\,s^{-1}}$, again only corresponding to
velocities slightly larger than $\sim$ 50% of their critical values (Marino et
al., 2018). In fact, several studies have indeed argued that the red MS should
contain stars with around half-critical velocities, by comparing the single-
star models computed from the Modules for Experiments in Stellar Astrophysics
(MESA) code with the photometric observations (Gossage et al., 2019; Wang et
al., 2022).
In this paper, we show that, counter-intuitively, this half-critical
rotational velocity is in agreement with the previously inferred near-critical
rotational velocity, and that the differences are the result of, firstly, how
the fraction of critical rotation is defined and, secondly, an early spin-down
phase of the SYCLIST models. A limitation of Gossage et al. (2019) is that
they investigated the impact of rotation only in cluster turn-off stars, where
the effects of single-star evolution and binary interaction cannot be
disentangled. Wang et al. (2022) used their MESA isochrones with different
rotational velocities to fit cluster split MSs, but they neither compared them
with other stellar models nor with the recent spectroscopic velocity
measurements of cluster stars.
We aim to explore the similarities and the differences of different model sets
when using them to study young star clusters. We derive the rotational
velocity of the red and blue MS stars in young star clusters by comparing
their observed color split with the rotationally induced color variation in
stellar models. We then directly compare stellar models with the photometric
and spectroscopic observations of the MS stars in the LMC cluster NGC 1818.
After finding out the differences in the SYCLIST and the MESA models, we
propose a possible way to probe the related physics assumptions in these two
models. At last, we examine how the derived rotational velocity of the red MS
stars affects our understanding of the origin of Be stars.
The layout of the paper is as follows. Section 2 clarifies different
definitions of critical rotational velocity used in different stellar models.
Section 3 describes the physics and assumptions adopted in computing our
single-star models. Our results and comparisons with the observations are
presented in Section 4. We also discuss our results and their implication on
the origin of Be stars in this section. Finally, we summarize our conclusions
in Section 5.
## 2 Different definitions of critical rotational velocity
There are different definitions of the critical rotational velocity (Rivinius
et al., 2013), which need to be clarified before using stellar models. The
critical velocity considered in the SYCLIST models uses the equatorial radius
of a critically rotating star. Under this definition, the linear rotational
velocity is expressed as:
$v_{\mathrm{crit,SYCLIST}}=\sqrt{\frac{GM}{R_{\mathrm{e,crit}}}}=\sqrt{\frac{2}{3}\frac{GM}{R_{\mathrm{p,crit}}}}.$
(1)
Here $R_{\mathrm{e,crit}}$ and $R_{\mathrm{p,crit}}$ are the equatorial radius
and the polar radius of a critically rotating star, respectively. The factor
$2/3$ comes from the relation $R_{\mathrm{e,crit}}=3/2R_{\mathrm{p,crit}}$ in
a critically rotating star, in the framework of the Roche approximation.
Critical velocity can also be expressed in terms of angular velocity, which in
the SYCLIST models is
$\Omega_{\mathrm{crit,SYCLIST}}=\sqrt{\frac{GM}{R^{3}_{\mathrm{e,crit}}}}=\sqrt{\frac{8}{27}\frac{GM}{R^{3}_{\mathrm{p,crit}}}}.$
(2)
In general, the variation of the polar radius due to rotation is marginal,
such that $R_{\mathrm{p,crit}}/R_{\mathrm{P}}\simeq 1$ is a good
approximation, where $R_{\mathrm{P}}$ is the polar radius of a star rotating
at an arbitrary rate. Therefore, it is possible to calculate
$\Omega_{\mathrm{crit,SYCLIST}}$ and $v_{\mathrm{crit,SYCLIST}}$ without
having a numerical model of a critically rotating star. This definition takes
into account the fact that a star will increase its radius as it approaches
critical rotation.
However, the MESA stellar models, including our newly computed models and the
MIST models, adopt a different definition, in which the current equatorial
radius $R_{\mathrm{e}}$ is used. The critical linear and angular velocities
are then defined as:
$v_{\mathrm{crit,MESA}}=\sqrt{\frac{GM}{R_{\mathrm{e}}}}\quad\mathrm{and}\quad\Omega_{\mathrm{crit,MESA}}=\sqrt{\frac{GM}{R^{3}_{\mathrm{e}}}},$
(3)
respectively, which are denoted by $v_{\mathrm{orb}}$ and
$\Omega_{\mathrm{orb}}$ in Rivinius et al. (2013). This definition indicates
the orbital velocity at the equator of a rotating star. As has been stated in
Rivinius et al. (2013), a star rotates critically when its equatorial velocity
equals the orbital velocity at its equator. For a critically rotating star,
$v_{\mathrm{crit,SYCLIST}}=v_{\mathrm{crit,MESA}}$, owing to the increase of
its equatorial radius. For noncritically rotating stars, however, the two
definitions differ; the MESA definition is the more meaningful one when
studying fast-rotating stars with disks, because $v_{\mathrm{crit,MESA}}$
gives the velocity required for a rotating star to eject material from its
equator.
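The MESA definition of Eq. (3) can be sketched in the same way (Python, SI units; an illustrative sketch, not MESA's internal implementation). Note that when $R_{\mathrm{e}}=(3/2)R_{\mathrm{p}}$, i.e. at critical rotation, $v_{\mathrm{crit,MESA}}$ coincides with $v_{\mathrm{crit,SYCLIST}}$, as stated above.

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def v_crit_mesa(mass, r_eq):
    """Critical linear velocity under the MESA definition (Eq. 3): the
    Keplerian orbital velocity at the star's *current* equatorial radius."""
    return math.sqrt(G * mass / r_eq)

def omega_crit_mesa(mass, r_eq):
    """Critical angular velocity under the MESA definition (Eq. 3)."""
    return math.sqrt(G * mass / r_eq**3)
```

Unlike the SYCLIST expressions, these depend on the current equatorial radius, so they evolve as the star spins up or expands.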
The critical velocity fraction is more frequently used than the absolute value
of rotational velocity when describing how quickly a star rotates. Under the
SYCLIST definition, the critical linear velocity fraction does not equal the
critical angular velocity fraction. The relation between them is given by:
$\begin{split}\frac{v_{\mathrm{e}}}{v_{\mathrm{crit,SYCLIST}}}&=\frac{\Omega_{\mathrm{e}}R_{\mathrm{e}}}{\Omega_{\mathrm{crit,SYCLIST}}R_{\mathrm{e,crit}}}\\ &=\frac{\Omega_{\mathrm{e}}}{\Omega_{\mathrm{crit,SYCLIST}}}\frac{R_{\mathrm{e}}}{R_{\mathrm{p}}}\frac{R_{\mathrm{p}}}{R_{\mathrm{p,crit}}}\frac{R_{\mathrm{p,crit}}}{R_{\mathrm{e,crit}}}\\ &\simeq\frac{2}{3}\frac{R_{\mathrm{e}}}{R_{\mathrm{p}}}\frac{\Omega_{\mathrm{e}}}{\Omega_{\mathrm{crit,SYCLIST}}}.\end{split}$
(4)
Here $v_{\mathrm{e}}$ and $\Omega_{\mathrm{e}}$ are the equatorial linear and
angular velocities, respectively. Under the MESA definition, the fractional
linear and angular velocities are identical, that is
$v_{\mathrm{e}}/v_{\mathrm{crit,MESA}}=\Omega_{\mathrm{e}}/\Omega_{\mathrm{crit,MESA}}$.
Except for the extreme cases of zero rotation and critical rotation, we have
$v_{\mathrm{e}}/v_{\mathrm{crit,MESA}}<v_{\mathrm{e}}/v_{\mathrm{crit,SYCLIST}}<\Omega_{\mathrm{e}}/\Omega_{\mathrm{crit,SYCLIST}}$.
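To make the conversion between the two fractional measures concrete, the following sketch (Python; units with $GM=R_{\mathrm{p}}=1$) solves for the oblateness $R_{\mathrm{e}}/R_{\mathrm{p}}$ and evaluates both linear fractions. The surface condition used here, $1/x+(4/27)\,\omega^{2}x^{2}=1$ with $x=R_{\mathrm{e}}/R_{\mathrm{p}}$, is the standard Roche-model relation (see, e.g., Rivinius et al. 2013); it is an assumption of this illustration, not the specific eq. 11 of that paper.

```python
import math

def roche_oblateness(omega_frac):
    """Solve x = R_e/R_p for a Roche-model star rotating at a fraction
    omega_frac = Omega_e / Omega_crit,SYCLIST of critical angular velocity,
    by bisecting the surface condition 1/x + (4/27) omega^2 x^2 = 1
    on x in [1, 1.5] (f is monotonically decreasing on this interval)."""
    f = lambda x: 1.0 / x + (4.0 / 27.0) * omega_frac**2 * x**2 - 1.0
    lo, hi = 1.0, 1.5
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def velocity_fractions(omega_frac):
    """Return (v_e/v_crit,MESA, v_e/v_crit,SYCLIST) for a given
    Omega_e/Omega_crit,SYCLIST, in units with G*M = R_p = 1."""
    x = roche_oblateness(omega_frac)            # R_e / R_p
    omega = omega_frac * math.sqrt(8.0 / 27.0)  # Eq. (2) with GM = R_p = 1
    v_e = omega * x
    v_frac_syclist = v_e / math.sqrt(2.0 / 3.0) # Eq. (1)
    v_frac_mesa = v_e / math.sqrt(1.0 / x)      # Eq. (3)
    return v_frac_mesa, v_frac_syclist
```

For $\Omega_{\mathrm{e}}/\Omega_{\mathrm{crit,SYCLIST}}=0.8$ this gives $v_{\mathrm{e}}/v_{\mathrm{crit,MESA}}\approx 0.53$, consistent with the $\sim 0.5$ quoted below for the relaxed $\omega_{\mathrm{label}}=0.9$ SYCLIST model, and it reproduces the ordering $v_{\mathrm{e}}/v_{\mathrm{crit,MESA}}<v_{\mathrm{e}}/v_{\mathrm{crit,SYCLIST}}<\Omega_{\mathrm{e}}/\Omega_{\mathrm{crit,SYCLIST}}$.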
In this work, we always mean the MESA definition when referring to critical
rotation, that is $v_{\mathrm{crit}}=v_{\mathrm{crit,MESA}}$, because we
mainly use our MESA models to investigate the role of rotation in splitting
the cluster MS. But in order to compare our models with the SYCLIST models, we
use eq. 11 in Rivinius et al. (2013) to convert the critical velocity fraction
of the SYCLIST models to the value under the MESA definition.
It is worth noting that the SYCLIST stellar models undergo a relaxation phase
at the very beginning of their MS evolution, lasting for a few percent of
their MS lifetimes, during which their rotational velocities decrease
dramatically as meridional circulation transports angular momentum from their
outer layers to their inner layers (Ekström et al., 2008). The decrease of the
surface rotational velocity is more substantial in models with higher initial
velocities. In particular, the SYCLIST model formally labeled as
$\omega_{\mathrm{label}}=\Omega_{\rm e}/\Omega_{\mathrm{crit,SYCLIST}}=0.9$
has $\Omega_{\rm e}/\Omega_{\mathrm{crit,SYCLIST}}\sim 0.8$ after this
relaxation phase, which corresponds to
$v_{\mathrm{e}}/v_{\mathrm{crit,MESA}}\sim 0.5$. We therefore stress that the
SYCLIST models formally labeled as extreme rotators are in fact rotating
considerably more slowly than their labels suggest.
## 3 Input physics and assumptions in stellar models
We use the single stellar models described in detail in Wang et al. (2022),
which are extended from the models in Schootemeijer et al. (2019). The models
are computed by the detailed one-dimensional stellar evolution code MESA,
version 12115 (Paxton et al., 2011, 2013, 2015, 2019). The models include
differential rotation, rotationally induced internal mixing, and magnetic
angular momentum transport. Below, we briefly summarize the most relevant
assumptions and compare them with those used in the SYCLIST and MIST models.
The stellar structure of a rapidly rotating star deviates from spherical
symmetry due to the presence of the centrifugal force. In a one-dimensional
stellar evolution code, stellar structure equations are solved on the isobaric
shells that are assumed to have constant angular velocities (Meynet & Maeder,
1997). In the MESA code, two factors, $f_{T}$ and $f_{P}$, are introduced to
correct the temperature and pressure of a rotating star, such that the regular
form of the equations for nonrotating stars is retained (Endal & Sofia, 1976;
Paxton et al., 2013). In older versions of the MESA code, $f_{T}$ and
$f_{P}$ are limited to specific values to ensure numerical stability, which
limits the accuracy of computing the stellar models rotating at more than
$60\%$ of their critical velocities. However, in version 12115, which is the
one we use, a new implementation of the centrifugal effects allows for a
precise calculation of the stellar models rotating up to 90% of their critical
velocities (Paxton et al., 2019).
We use the standard mixing-length theory to model convection with a mixing
length parameter of $\alpha=l/H_{\mathrm{P}}=1.5$, where $H_{\mathrm{P}}$ is
the local pressure scale height. The boundary of the convective core is
determined by the Ledoux criterion. We adopt step overshooting that extends
the convective zone by $\alpha_{\mathrm{OV}}H_{\mathrm{P}}$, where
$\alpha_{\mathrm{OV}}$ varies with mass (see Hastings et al. (2021); Wang et
al. (2022) for details). In the SYCLIST models, the same method is used with a
fixed $\alpha_{\mathrm{OV}}$ of 0.1. In the MIST models, an exponentially
decaying diffusion coefficient is applied, which is roughly equivalent to
$\alpha_{\mathrm{OV}}=0.2$ in the context of step overshooting. A larger
convective overshooting parameter will result in a larger convective core and
thus a longer MS lifetime. We adopt a semiconvection mixing efficiency
parameter of $\alpha_{\mathrm{SC}}=10$ (Schootemeijer et al., 2019), and a
thermohaline mixing efficiency of $\alpha_{\mathrm{th}}=1$ (Cantiello &
Langer, 2010).
Rotationally enhanced mixing is modeled as a diffusive process (Heger et al.,
2000), with $f_{c}$ scaling the efficiency of composition mixing with respect
to angular momentum transport, and $f_{\mu}$ describing the stabilizing effect
of mean molecular weight gradients. We adopt values of $f_{c}=1/30$ (Chaboyer
& Zahn, 1992; Marchant et al., 2016) and $f_{\mu}=0.1$ (Yoon et al., 2006),
while the MIST models use the same $f_{c}$, but a different $f_{\mu}$ of 0.05.
A smaller $f_{\mu}$ value implies more efficient rotational mixing even in the
presence of a stabilizing chemical composition gradient. The effects of the
dynamical and secular shear instabilities, the Goldreich-Schubert-Fricke
instability, and the Eddington-Sweet circulations are included. In the SYCLIST
models, a more efficient diffusive-advective approach is taken when modeling
rotational mixing, which makes their fast rotators hotter and more luminous
than our models and than the MIST models with the same initial parameters. We
implement the Tayler-Spruit dynamo (Spruit, 2002) to transport angular
momentum in our MESA models, which imposes a strong coupling between the
contracting core and the expanding envelope during stars’ MS evolution. As a
consequence, our MESA models rotate nearly as solid bodies. This process is
not considered in the SYCLIST models, and consequently, the SYCLIST models
rotate differentially and spin down more quickly at the stellar surface than
the MESA models (Choi et al., 2016; Hastings et al., 2020). Differential
rotation in the SYCLIST models induces shear mixing within the adjacent
layers, which also leads to efficient rotational mixing. Despite these
differences in the adopted parameters, all three model sets can explain some
observations but struggle with others (see Choi et al., 2016, for example),
meaning that current observations cannot uniquely constrain the uncertain
physics in stellar models.
We compute single-star models with both LMC-like metallicity
$Z_{\mathrm{LMC}}=0.004841$ and SMC-like metallicity
$Z_{\mathrm{SMC}}=0.002179$ according to Brott et al. (2011). Our single-star
models have initial masses between 1.05 and 19.95$\,\mathrm{M}_{\odot}\,$,
which comfortably covers the magnitude range of MS stars in the LMC cluster
NGC 1818, in steps of $\Delta\log(M/\mathrm{M}_{\odot})=0.04$. For each mass, we have
nine evolutionary tracks with initial rotational velocities
$W_{\mathrm{i}}=v_{\mathrm{e}}/v_{\mathrm{crit,MESA}}$ ranging from 0 to 0.8,
in intervals of 0.1. Our models start from the zero-age MS (ZAMS) and end when
their central helium is depleted.
We notice that our MESA models do not achieve thermal equilibrium initially,
which causes wobbles in the stellar radius, and in turn affects their critical
and rotational velocities. Unlike in the SYCLIST models, the change of the
rotational velocity during this relaxation period is marginal in our MESA
models, due to the implementation of a strong core-envelope coupling. To
eliminate this uncertain phase, we redefine the ZAMS state as the time when 3%
of hydrogen is burnt in the core. We use the rotational velocity
$v_{\mathrm{e}}/v_{\mathrm{crit,MESA}}$ at that time as the initial velocity
$W_{\mathrm{i}}$. Then we perform interpolation to obtain stellar models with
specified $W_{\mathrm{i}}$ values from 0 to 0.75, in intervals of 0.05. To
construct isochrones from our stellar models, we use the method described in
Wang et al. (2020) to obtain the stellar properties at certain ages. We
convert stellar luminosity and temperature to magnitude in the HST/WFC3 F814W
filter and a color. For the color we use the difference between the magnitudes
in the HST/WFC3 F336W and F814W filters, using the method described in Wang et
al. (2022).
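The interpolation onto the finer $W_{\mathrm{i}}$ grid can be sketched as follows (Python/NumPy). The tabulated property and its dependence on $W_{\mathrm{i}}$ are synthetic, for illustration only; this is a minimal linear-interpolation sketch, not the actual procedure of Wang et al. (2020).

```python
import numpy as np

# Computed tracks exist for W_i = 0.0, 0.1, ..., 0.8 (see above).
w_grid = np.round(np.arange(0.0, 0.81, 0.1), 2)
# Hypothetical tabulated property at fixed mass and age (e.g. a color);
# the functional form below is synthetic, chosen for illustration.
color_grid = 0.30 - 0.10 * w_grid**2

def property_at(w_i, grid=w_grid, prop=color_grid):
    """Linearly interpolate a tabulated track property to an arbitrary
    initial velocity W_i within the computed grid."""
    return np.interp(w_i, grid, prop)

# Target grid: W_i from 0.0 to 0.75 in steps of 0.05.
fine_w = np.round(np.arange(0.0, 0.751, 0.05), 2)
fine_color = property_at(fine_w)
```

At the computed grid points the interpolation returns the tabulated values exactly; between them it returns the linear blend of the two neighboring tracks.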
To compare our models with the SYCLIST models, we first obtain the appropriate
SYCLIST evolutionary tracks and isochrones, with labeled
$\omega_{\mathrm{label}}$ from 0.0 to 0.95, from their website. We then
redefine the SYCLIST models’ ZAMS phase and their initial rotational
velocities using the method in Sec. 2. We also interpolate the SYCLIST models
to obtain models rotating at precisely the required values. For the MIST
models, only those with $W_{\rm i}=0.0$ and $W_{\rm i}=0.4$ are publicly
available, so we apply no redefinition or interpolation to them.
This does not affect our comparison, because the MIST models have the same
definition of critical velocity as our MESA models, and meanwhile the effect
of their initial relaxation phase before 3% H consumption is negligible. We
need to point out that in the SYCLIST models, metallicities $Z_{\rm
LMC,\,SYCLIST}=0.006$ and $Z_{\rm SMC,\,SYCLIST}=0.002$ are tailored for LMC
and SMC stars. The MIST models have multiple metallicities available; we
choose the two values closest to those adopted in our MESA models, which are
$Z_{\rm LMC,\,MIST}=0.00452$ and $Z_{\rm SMC,\,MIST}=0.00254$ for the LMC
and SMC stars, respectively. We also stress that, apart from the different
metallicities, the abundance ratios of the heavy elements differ between the
SYCLIST models and our MESA models. The SYCLIST models simply
use solar-scaled abundance ratios, while our models follow the initial
chemical compositions in Brott et al. (2011) that are intended to match the
observations of the OB stars in the LMC and SMC in the VLT-FLAMES survey
(Evans et al., 2005).
## 4 Results and discussions
### 4.1 Rotationally induced color variation
We have mentioned in Sec. 1 that the effects of rotation can change stellar
effective temperature, luminosity, and lifetime. Previous studies usually
inferred the spins for the red and blue MS stars by comparing the photometric
observations with the isochrones of stellar models with different rotation
(Milone et al., 2018; Gossage et al., 2019; Wang et al., 2022). However,
isochrone fitting is a nontrivial task due to many free parameters, including
age, initial rotation, distance modulus, and reddening. A more
straightforward way to infer the spins of the red and blue MS stars is to
calculate the color difference between stellar models with different initial
rotational velocities and compare it with the observed color split of the red
and blue MSs in young star clusters. This approach is less sensitive to the
adopted age, distance modulus, and reddening than isochrone fitting. After
acquiring rotational velocities for the red and blue MS stars through this
method, we check our results by performing isochrone fits to cluster stars in
the CMD.
To obtain the observed color split of the red and blue MSs in young star
clusters, we utilize previous analyses on the fraction of stars in each MS in
three clusters, namely NGC 1755 (see fig. 5 in Milone et al., 2016), NGC 2164
(see fig. 13 in Milone et al., 2018) and NGC 1866 (see fig. 6 in Milone et
al., 2017). In these studies, the stellar density distribution as a function
of color ($m_{\mathrm{F336W}}-m_{\mathrm{F814W}}$) has been scrutinized. We
treat the color that has the highest density of the red and blue MS stars as
the color of the corresponding population, and then calculate their color
difference. We use the magnitude intervals given in these studies. We only
take into account the unevolved and less-evolved observed stars, among which
the split-MS feature is prominent; because such stars have not yet been
affected by stellar-evolution processes, they allow us to isolate the impact
of stellar rotation. The
results are shown by the dots with error bars in the last three panels of Fig.
1, with different panels corresponding to different clusters. We convert the
apparent magnitude of the observed stars to the absolute magnitude using the
distance moduli and reddening values provided in Milone et al. (2018).
As to the stellar models, we calculate the color separation,
$\Delta\,\mathrm{color}=\Delta(m_{\mathrm{F336W}}-m_{\mathrm{F814W}})$,
between the isochrones of rotating and nonrotating single-star models in the
color-magnitude diagram at four evolutionary times, namely zero age and the
derived ages of the three observed clusters mentioned above. We include our
MESA models, the SYCLIST models, and the MIST models.
The results are shown in Fig. 1.
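Evaluating the color separation between two isochrones at common magnitudes can be sketched as below (Python/NumPy). The isochrones here are synthetic placeholders; real isochrones would supply the (magnitude, color) arrays.

```python
import numpy as np

def delta_color(mag, iso_rot, iso_nonrot):
    """Delta color = Delta(m_F336W - m_F814W) between a rotating and a
    nonrotating isochrone, evaluated at common F814W magnitudes.  Each
    isochrone is a (magnitudes, colors) pair, with magnitudes increasing
    as required by np.interp."""
    mag_r, col_r = iso_rot
    mag_n, col_n = iso_nonrot
    return np.interp(mag, mag_r, col_r) - np.interp(mag, mag_n, col_n)

# Synthetic example: a rotating isochrone redder by 0.04 mag everywhere.
m = np.linspace(-2.0, 4.0, 61)
nonrot = (m, 0.1 * m)          # hypothetical color-magnitude relation
rot = (m, 0.1 * m + 0.04)
dc = delta_color(np.array([0.0, 1.0, 2.0]), rot, nonrot)
```

Interpolating both isochrones onto the same magnitude grid avoids comparing points of unequal evolutionary state.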
For our MESA models, we also examine the effect of the inclination of rotation
axis on the color of the stellar models with eqs. 44 and 45 in Paxton et al.
(2019). The results are shown by the shaded areas in Fig. 1, with the left and
right borders denoting the pole-on and the equator-on cases, respectively.
As anticipated, at zero age, the projection effect of gravity darkening is
more significant in more rapidly rotating stars. However, the projection
effect of gravity darkening also contributes more at high magnitudes. We
stress that it is merely a geometric consequence, since the isochrones of the
stellar models with different initial rotational velocities have larger
horizontal color differences at higher magnitudes (see the red shaded area in
the right panel of Fig. 2). At older ages, the effect of gravity darkening
becomes significant near the turn-off region. The reason is that gravity
darkening either makes a star hotter and brighter (pole-on), or cooler and
dimmer (equator-on); therefore, its net effect shifts a star in the CMD almost
along the isochrone. Hence, the net effect of gravity darkening is only
visible when an isochrone bends toward the right and becomes almost vertical
in the turn-off region (also see the red shaded area in the right panel of
Fig. 2).
Figure 1 yields three conclusions. Firstly, for the unevolved and less-evolved
stellar models in which rotation only plays a role through the centrifugal
force, the same initial velocity differences lead to nearly the same color
variations in all three model sets. Secondly, after the ZAMS, the color
difference between the rotating and nonrotating models is different in
different model sets. Among the three model sets, the SYCLIST models adopt the
strongest rotational mixing efficiency (see Sec. 3 and Appendix A). It can be
seen that the $\Delta\mathrm{color}$ values of the SYCLIST models become
smaller than those of our MESA models and the MIST models near the turnoff. In
some cases, $\Delta\mathrm{color}$ is even negative, meaning that the
corresponding SYCLIST rotating models are bluer than their nonrotating models.
The adopted rotational mixing efficiency in the MIST models is slightly larger
than that in our MESA models (see Sec. 3 and Appendix A). As a consequence,
the $\Delta\mathrm{color}$ values of the MIST models are between those of the
SYCLIST models and our MESA models near the cluster turnoff. Thirdly, the
color difference between nonrotating models and models with $\sim$50% of
critical rotation, rather than the previously proposed near-critical rotation
(D’Antona et al., 2017; Milone et al., 2018), is sufficient to reproduce the
color split of the observed red and blue MSs in young star clusters. This is
consistent with previous results based
on the MESA models (Gossage et al., 2019; Wang et al., 2022). From the star
formation point of view, half of the critical rotational velocity agrees with
the results of hydrodynamic simulations that take into account the
gravitational torques between stars and disks (Lin et al., 2011). We remind
the reader that 50% of the critical rotational velocity under the MESA
definition roughly equals 60% of critical linear velocity and 75% of critical
angular velocity under the definition in the SYCLIST models.
We notice that other combinations of initial rotational velocities are also
able to explain the color difference of the observed red and blue MS stars. In
Appendix B, we show that $W_{\rm i}\sim 0.55$ and $W_{\rm i}\sim 0.6$ are
required to explain the red MS stars, if the blue MS stars have $W_{\rm
i}=0.2$ and $W_{\rm i}=0.3$, respectively. Meanwhile, Wang et al. (2022)
showed that $W_{\rm i}\sim 0.65$ and $W_{\rm i}\sim 0.35$ can also explain the
red and blue MSs in young star clusters. We do not consider even faster
rotation for the blue MS stars, otherwise it contradicts the spectroscopically
measured low velocity of the blue MS stars in NGC 1818 (see Sec. 4.4).
Figure 1: Color difference between the rotating and nonrotating MS stellar
models with an LMC metallicity as a function of absolute magnitude
$M_{\mathrm{F814W,0}}$ at four different ages, which are the zero-age and the
derived ages for three young LMC clusters NGC 1755 (80 Myr), NGC 2164 (100
Myr) and NGC 1866 (200 Myr) (Milone et al., 2018). The solid, dashed, and
dotted lines denote our MESA models, the SYCLIST models and the MIST models,
respectively, color-coded by their initial rotational rates. The formally
labeled fractional critical angular velocities of the SYCLIST models in their
web interface are shown in parentheses. Shaded areas show the $\Delta$color
range occupied by stars with orientations that are between pole-on and
equator-on. The dots with error bars in the last three panels indicate the
color difference between the red and blue MSs of the abovementioned three
clusters (see fig. 5 in Milone et al. 2016 for NGC 1755, fig. 13 in Milone et
al. 2018 for NGC 2164 and fig. 6 in Milone et al. 2017 for NGC 1866,
respectively). We only consider the unevolved stars below the cluster turnoff
(see text). The error bars on the x-axis denote photometric errors, while the
error bars on the y-axis correspond to magnitude intervals.
### 4.2 Isochrone fitting of the double main sequences in NGC 1818
In the following, we perform isochrone fitting to the red and blue MSs in the
LMC cluster NGC 1818 using our findings in Sec. 4.1 that these double MSs can
be explained by the stellar models with $\sim$50% and $\sim$0% of their
initial critical rotational velocities. The left panel of Fig. 2 displays the
distribution of the stars with (red dots) and without (black dots) H$\alpha$
excess in the MS region of NGC 1818. We analyze this cluster because it
exhibits a clear double MS feature, and has spectroscopic measurements for the
rotational velocity of the stars in different MSs. In the right panel of this
figure, we show isochrone fitting based on both our MESA models and the
SYCLIST models. We first replicate the isochrone fitting shown in Milone et
al. (2018), in which 40 Myr isochrones of the SYCLIST models with
$\omega_{\mathrm{label}}=0.9$ and $\omega_{\mathrm{label}}=0.0$ are used. The
adopted distance modulus and reddening are listed both in the figure and in
Tab. 1. As we have mentioned, $\omega_{\mathrm{label}}=0.9$ in the
SYCLIST models roughly corresponds to $W_{\mathrm{i}}=0.5$ under the MESA
definition.
As to our MESA models, we adjust the isochrone age, distance modulus, and
reddening such that the isochrone of our $W_{\mathrm{i}}=0.5$ MESA models
(solid red line) fits the observed red MS as well as the SYCLIST isochrone
does, based on visual inspection. We find that a younger age is required when using our
MESA models to achieve a satisfactory fit compared to using the SYCLIST
models. This is because extensive rotational mixing makes fast-rotating
SYCLIST models appear younger, indicating again that different physics
assumptions in stellar models can give rise to different results in cluster
age estimation. In this figure, we also display the impact of gravity
darkening and inclination on our MESA isochrone of the fast-rotating-star
models in the CMD. As explained in Sec. 4.1, the influence of gravity
darkening in the CMD is only visible in the cluster turn-off area.
We notice that the adopted distance modulus $\mu=18.28$ for our MESA models is
smaller than the measured value for the LMC ($\mu=18.49\pm 0.06$) (Pietrzyński
et al., 2013; Inno et al., 2016). Distance moduli of 18.49 and 18.28
correspond to distances of 49.9 kpc and 45.3 kpc. The diameter of the LMC
estimated in Gaia Collaboration et al. (2018) is around 5.2 kpc, meaning that
the spatial extent of the LMC clusters is inconsistent with the small value of
the adopted distance modulus. We stress here that the adopted distance modulus
is sensitive to metallicity and chemical composition of the employed stellar
models. Meanwhile, the adopted bolometric correction table also affects the
derived distance modulus. Moreover, compared to the distance modulus, the
position of an isochrone in the CMD depends more on the adopted reddening. The
mean measured reddening of the LMC is $E(B-V)=0.113\pm 0.060$ mag (Joshi &
Panchal, 2019). The gray arrow in the right panel of Fig. 2 demonstrates in
which direction the isochrones will move in the CMD if reddening is increased
by 0.05. In summary, we state that our adopted distance modulus and reddening
are in the acceptable range.
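The distances quoted above follow from the standard distance-modulus relation $\mu=5\log_{10}(d/10\,\mathrm{pc})$; a quick check (Python):

```python
def distance_kpc(mu):
    """Distance in kpc from a distance modulus mu = 5*log10(d / 10 pc)."""
    return 10.0 ** (mu / 5.0 + 1.0) / 1.0e3

# mu = 18.49 and 18.28 give ~49.9 kpc and ~45.3 kpc, as quoted above.
```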
Figure 2 supports our previous result that, in both our MESA models and the
SYCLIST models, 50% and 0% of critical rotation can explain the clearly
discernible parts of the observed red and blue MSs below the cluster turnoff.
Our MESA models and the SYCLIST models show different behavior in the turn-off
area. The isochrone of the fast-rotating SYCLIST models crosses that of
the slowly-rotating stellar models, as a consequence of strong rotational
mixing. In contrast, the isochrone of our fast-rotating MESA models is always
redder than the isochrone of our slowly-rotating-star models. It seems that
our MESA isochrones match the spread distribution of the turn-off stars
better, as the SYCLIST isochrones miss the observed blue MS stars between
magnitudes 16 and 17.5 in the F814W filter. However, considering the fact that
the cluster turn-off area may contain a mixture of stars, including single
stars, pre- and post-interaction binaries, we cannot assess these two model
sets based on current photometric observations. A plausible way to constrain
rotational mixing is to measure the chemical abundance of the H-burning
products at the stellar surface. We explore the surface He and N enrichment of
our MESA models and the SYCLIST models in Sec. 4.3.
We show in Appendix B that other combinations of rotational velocities can
produce isochrone fits as satisfactory as that in Fig. 2. This again means
that the precise rotational velocities of the cluster stars cannot be
determined from photometric observations alone. Nevertheless, the conclusion
that moderate and slow rotation is required to explain the split MSs in young
star clusters is robust.
The stars redder than the isochrones of the fast-rotating-star models can be
understood as unresolved binaries containing two fast-rotating components.
The presence of a secondary star shifts an observed star to a redder and
brighter position in the CMD. The H$\alpha$ emitters, which are widely
considered to rotate at or close to their critical rotational velocities,
occupy the reddest part of the CMD, extending $\sim 2$ magnitudes from the
cluster turnoff. We discuss the formation of these stars in Sec. 4.5. Finally,
the stars bluer than the isochrones of the slowly-rotating-star models in Fig.
2 are explained as MS merger products (Wang et al., 2020, 2022). In addition
to their slow rotation, MS merger products appear bluer also because of
rejuvenation.
Figure 2: Isochrone fitting to the main-sequence stars in the LMC cluster NGC 1818. Left: distribution of the main-sequence stars in NGC 1818 observed by HST. The black and red dots are normal MS stars and the stars with a brightness excess in the H$\alpha$ narrow-band filter, respectively. The gray error bars on the left indicate 1$\sigma$ photometric uncertainties at the corresponding magnitudes. Right: isochrone fitting to the red and blue MSs of NGC 1818, using the SYCLIST models and our MESA models. For the SYCLIST models, we take the fit in Milone et al. (2018), using 40 Myr isochrones of the nonrotating models and of the models with a labeled 90% of critical angular velocity (roughly equal to 50% of critical linear velocity under the MESA definition) to fit the observed blue and red MSs, indicated by the solid orange and the dashed green lines, respectively. For our MESA models, we employ 30 Myr isochrones of the nonrotating models and of the models with 50% of critical linear velocity to fit the observed blue and red MSs, denoted by the solid red and the dashed blue lines, respectively. The adopted distance modulus and reddening of each fit are listed. The red shaded area depicts the projection effect (gravity darkening) on the red solid line. The small gray arrow shows how an isochrone would move if $E(B-V)=0.05$ is added.
Table 1: Models and parameters used in isochrone fittings to NGC 1818.
Fitting model | $W_{\rm i}$ for the red MS | $W_{\rm i}$ for the blue MS | Age (Myr) | $\mu$ | $E(B-V)$
---|---|---|---|---|---
Our MESA models | 0.50 | 0.00 | 30 | 18.28 | 0.11
Our MESA models | 0.55 | 0.20 | 35 | 18.29 | 0.09
Our MESA models | 0.60 | 0.30 | 35 | 18.31 | 0.09
Our MESA models | 0.65 | 0.35 | 40 | 18.31 | 0.07
SYCLIST models | 0.50 | 0.00 | 40 | 18.40 | 0.10
### 4.3 Surface chemical composition
Massive stars burn H through the CNO cycle, in which the reaction between
$^{14}$N and $^{1}$H is the slowest. This leads to nitrogen enhancement in the
layers where nuclear burning takes (or has taken) place. Rotational mixing may
transport material from these $^{14}$N-enriched layers, as well as the
H-burning ash He, to the stellar surface. Hence, surface chemical abundances
can be used as a probe of the strength of rotational mixing. In Fig. 2, we show that
our MESA isochrones and the SYCLIST isochrones can fit the split MS in young
star clusters equally well, regardless of the different adopted rotational
mixing efficiencies. In this section we seek clues to assess these two model
sets — or, being more precise, to constrain the strength of rotational mixing
— by studying the surface abundance of nitrogen and helium in different
stellar models.
Figure 3 shows the surface He and N abundances of the stellar models that are
employed in building the isochrones in the right panel of Fig. 2. It can be
seen that the rotationally induced surface N enhancement of our MESA models
and the SYCLIST models is almost the same ($\sim 10\%$) at the cluster
turnoff. However, a significant enhancement of surface He abundance is only
seen in the SYCLIST fast-rotating models. The different behavior of the
surface He and N enrichment may be due to the fact that He is produced more
slowly than N. When He is produced, a strong chemical composition gradient is
created on top of the convective core, which suppresses further mixing of He
to the stellar surface in our MESA models, but plays a less important role in
the SYCLIST models. The significant surface He enhancement in the SYCLIST
fast-rotating models near the cluster turnoff explains their higher
temperature. Despite a scarcity of detailed measurements of He abundance of
the stars in young star clusters, Carini et al. (2020) found similar surface
He abundances for the red and blue MS stars in the turn-off area of the young
SMC star cluster NGC 330, which may argue against efficient rotational mixing.
Nevertheless, their results should be taken with a grain of salt, given their
low signal-to-noise ratio, low spectral resolution, and small sample size.
Further observations of more stars in more clusters are needed to improve our
understanding of rotational mixing. We note that recent asteroseismology
studies inferred near-rigid
rotation in late-B and AF stars (Aerts, 2021), and a small amount of
differential rotation in massive stars (Bowman, 2020), which seems to support
our MESA models with efficient angular momentum transfer in the stellar
interior.
We have shown that stellar models with $W_{\mathrm{i}}=0.5$ exhibit a
remarkable surface N enrichment at the end of their MS evolution, in both
model sets. In the following, we examine whether stellar models with other
initial velocities also show surface N enrichment at their terminal-age main
sequence (TAMS). The results
are shown in Fig. 4, with the left and the right panel corresponding to our
MESA TAMS models appropriate for the LMC and SMC metallicities, respectively.
It can be seen that the stellar models with higher initial velocities and
larger initial masses will have larger surface N abundances at their TAMS,
which agrees well with the findings in Hastings et al. (2020). Our results
indicate that single stars born with spins larger than 40% of their critical
values will exhibit a significant N enrichment at the end of their MS
evolution.
Figure 3: Current surface He (left) and N (right) abundances as a function of
magnitude for our MESA models and the SYCLIST models that are used to build
the isochrones in Fig. 2. The upper two x-axes show the corresponding mass
derived from our MESA models and the SYCLIST models with rapid rotation.
Figure 4: Surface N abundances for our MESA models with the LMC metallicity
(left) and the SMC metallicity (right) at the end of their main sequences when
1% of H is left in the stellar center. The x-axis shows the initial mass of
the stellar models, with the corresponding cluster turn-off ages shown on the
top x-axis. Solid lines with different colors indicate different initial
rotational rates.
### 4.4 Comparison with the spectroscopic measurements of the rotational
velocity of the stars in NGC 1818
Figure 5: Comparison between the predicted and the observed rotational
velocities. Left: a zoom-in image of the main-sequence stars in NGC 1818 which
have magnitudes between 17 and 19. The black and red dots correspond to the
normal main-sequence stars and H$\alpha$ emitters, respectively. The open
symbols mark the stars that have spectroscopic rotational velocity
measurements (Marino et al., 2018), with color indicating their measured
projected rotational velocity $v\sin i$, where $i$ is the
inclination. The red main-sequence stars, blue main-sequence stars, and Be
stars with H$\alpha$ emission lines are designated by triangles, squares and
asterisks, respectively. Lower right: comparison between the model predicted
average surface equatorial velocity and the spectroscopic measurements. The
open symbols with error bars correspond to the observations, using the same
symbol types as the left panel. The solid lines and dashed lines represent the
average surface equatorial velocity (times $\pi/4$ to account for a random
orientation) of our MESA models at 30 Myr and the SYCLIST models at 40 Myr
(the ages used in our isochrone fitting). Color coding is the same as in Fig.
1. Upper right: comparison between our theoretically predicted $v\sin i$
distributions (color dotted steps) and the observed $v\sin i$
distributions of the red (solid black steps) and the blue (solid gray steps)
main-sequence stars. For the theoretical predictions, we perform a Monte Carlo
simulation by assuming a random orientation angle distribution and normalize
the results to a total number of 15 for each initial rotation, which is
similar to the number of the blue (14) and red main-sequence (16) stars that
have spectroscopic observations.
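The factor $\pi/4$ used in the caption above to correct for random orientation is the expectation value of $\sin i$ for isotropically oriented rotation axes; as a brief aside, it follows from a one-line average:

```latex
% For random 3D orientations, the inclination i is distributed as
% P(i)\,di = \sin i \, di on [0, \pi/2], normalized since
% \int_0^{\pi/2} \sin i \, di = 1. Hence
\langle \sin i \rangle
  = \int_0^{\pi/2} \sin i \; P(i)\, di
  = \int_0^{\pi/2} \sin^2 i \, di
  = \frac{\pi}{4} \approx 0.785 .
```

Multiplying a model's equatorial velocity by $\pi/4$ therefore gives the expected projected $v\sin i$ of a randomly inclined sample.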
By studying photometric observations, we have drawn the conclusion that the
red and blue MSs in young star clusters are comprised of stars with natal
spins of 50%-65% and 0%-35% of the critical velocities, respectively.
Recently, several high-resolution
spectroscopic studies have been performed in young and intermediate-age
clusters, providing an unprecedented opportunity to directly measure the
rotational rate of the red and the blue MS stars (Dupree et al., 2017; Kamann
et al., 2018; Bastian et al., 2018; Marino et al., 2018; Kamann et al., 2020;
Bodensteiner et al., 2021). Among them, NGC 1818, the cluster we study in this
work, is so far the youngest cluster with detailed velocity measurements for
the stars in its different MSs (Marino et al., 2018).
The left panel of Fig. 5 is a zoomed-in version of the CMD of the NGC 1818 MS
stars which have magnitudes between 17 and 19. We mark the MS stars whose
velocities have been measured spectroscopically in Marino et al. (2018). In
the lower right panel, we display the observed $v\,{\rm{sin}}i$ of these MS
stars as a function of magnitude, and compare them with the stellar models.
The average measured velocities for the red and the blue MS stars are $202\pm
23\,\mathrm{km\,s^{-1}}$ and $71\pm 10\,\mathrm{km\,s^{-1}}$, respectively,
with the red MS stars having a more dispersed velocity distribution. This
provides compelling evidence that rotation is responsible for the split MS,
with the red MS stars rotating faster than the blue MS stars. The large
velocity dispersion of the red MS stars may be attributed to their large
intrinsic velocities in combination with the effect of inclination. It is
worth noting that all three red MS stars fainter than $\sim$18.5 mag are
reported to be slow rotators. Marino et al. (2018) argued that
the accuracy of the velocity measurement for these faint stars may be affected
by their large systematic errors. The Be stars are shown to have the largest
$v\,{\rm{sin}}i$ values on average. In the upper right panel of this figure,
the solid steps show the sum of the number of the observed MS stars in
different velocity bins in each population.
To compare with the observations, we display the velocities of our MESA models
and the SYCLIST models in the right panels of Fig. 5. In the lower right
panel, the red lines show the average surface rotational velocity (times
$\pi/4$ to account for a random orientation) of the $W_{\rm i}=0.5$ stellar
models along the isochrones that are used to fit the red MS in the right
panel of Fig. 2, as a function of magnitude. We also display the
results for the stellar models that have the same ages but different initial
velocities (see other color lines). Consistent with our findings from
photometric observations, the current surface velocity of the stellar models
with $W_{\rm i}\sim 0.5$ agrees well with the mean observed rotational
velocity of the red MS stars ($202\pm 23\,\mathrm{km\,s^{-1}}$). But the three
fastest rotating red MS stars can only be explained by the stellar models with
initial velocities larger than $\sim 70\%$ of their critical values, assuming
an average $\sin i=\pi/4$. As for the blue MS stars, the spectroscopic
observations report
slow, yet clearly non-zero velocities ($\sim 10-35\%$ of their critical
velocities). In the upper right panel of Fig. 5, we show the number
distribution of our MESA models as a function of their velocities with dashed
steps. To do this, we again assume a random orientation for our models. We
employ our MESA models with $W_{\rm i}=0.2$, $0.5$, and $0.6$ and consider a
Salpeter IMF with an exponent of $-2.35$ (Salpeter, 1955). We generate
$10^{6}$ stellar models and then normalize them to a total number of 15 for
each rotational velocity, a number chosen to be similar to the number of
spectroscopically studied red (16) and blue (14) MS stars.
Comparing with the observations, we conclude that the current velocities of
our stellar models with $W_{\rm i}\sim 0.5$-$0.6$ and with $W_{\rm i}\sim
0.2$ cover the velocity ranges of the observed red and blue MS stars,
respectively. The mismatch between the peak of the model distribution and the
observations may be attributed to the small sample size of this observation.
Hence, further spectroscopic observations of more stars in this cluster and
of stars in other clusters are needed to tightly constrain the rotation of
the cluster MS stars.
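The Monte Carlo procedure described above (random orientations, a Salpeter IMF, and renormalization to the observed sample size) can be sketched as follows. This is a minimal illustration: the mass range and the constant-velocity stand-in `surface_velocity` are hypothetical placeholders, since the actual velocities come from interpolating the MESA tracks.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10**6  # number of synthetic stars, as in the text

# Salpeter (1955) IMF, dN/dM ∝ M^(-2.35), sampled by inverse transform
# over an illustrative mass range (the real range is set by the model grid).
alpha, m_lo, m_hi = 2.35, 3.0, 9.0
k = 1.0 - alpha
u = rng.random(N)
masses = (m_lo**k + u * (m_hi**k - m_lo**k)) ** (1.0 / k)

# Random orientation: for isotropic spin axes, cos(i) is uniform on [0, 1]
cos_i = rng.random(N)
sin_i = np.sqrt(1.0 - cos_i**2)

def surface_velocity(mass, w_ini=0.5):
    """Hypothetical stand-in returning a current equatorial velocity in
    km/s; the paper interpolates this from the MESA tracks instead."""
    return np.full_like(mass, 200.0 * w_ini / 0.5)

# Projected rotational velocities for one initial rotation
vsini = surface_velocity(masses, w_ini=0.5) * sin_i

# Bin and renormalize to 15 stars, matching the observed sample size
counts, edges = np.histogram(vsini, bins=np.arange(0.0, 401.0, 40.0))
counts = counts * 15.0 / counts.sum()
```

Repeating this for each $W_{\rm i}$ yields one normalized histogram per initial rotation, which is what the dotted steps in the upper right panel of Fig. 5 show.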
### 4.5 Implications for the origin of Be stars
Currently, two plausible ways have been proposed for the formation of Be
stars. The first one is the so-called single-star channel, in which a rotating
single star achieves near-critical rotation toward the end of its MS
evolution, due to angular momentum transport from the contracting core to the
expanding envelope (Ekström et al., 2008; Hastings et al., 2020). The second
way is the so-called binary channel, in which the accretor in a binary system
spins up during Roche-lobe overflow (Pols et al., 1991; Langer, 2012; Wang et
al., 2020). The detection of Be/X-ray binaries comprised of a Be star and a
compact object (Raguzova & Popov, 2005; Liu et al., 2006) and Be+sdO binaries
(Peters et al., 2008; Wang et al., 2017; Schootemeijer et al., 2018; Wang et
al., 2021) provide direct evidence for the binary channel. However, what
fraction of the Be stars has a binary origin is still unknown (Pols et al.,
1991; van Bever & Vanbeveren, 1997; Shao & Li, 2014). Meanwhile, Bodensteiner
et al. (2020) found a lack of MS companions to Be stars, which indicates that
the binary channel could dominate Be star formation. The H$\alpha$ emitters
detected in young star clusters may deliver new insights into the origin of Be
stars. Hastings et al. (2020) stated that the detection of Be stars several
magnitudes fainter than the cluster turnoff may argue for a major role of
binary interaction in producing Be stars, because the single-star channel can
only produce Be stars near the cluster turnoff, unless stars are born with
extremely high spins. Because they adopted an initial rotational velocity
distribution based on the observations of early B stars in the LMC (Dufton et
al., 2013) and they used different stellar models (Brott et al., 2011) from
ours, we find it helpful to reinvestigate the contribution of single-star
evolution in producing Be stars in young star clusters with our MESA models,
considering our findings that the red MS stars are born with 50–65% of their
critical rotational velocities.
In Fig. 6 we show the surface critical rotational velocity fraction
$v/v_{\mathrm{crit}}$ of our LMC models that lie between 0.1 and 3 mag (in
the F814W filter) below the cluster turnoff. We only consider the
stellar models which have initial spins higher than 40% of their critical
values, because the stellar models rotating more slowly only change their
rotational velocities marginally during their MS evolution and cannot give
rise to Be stars. We consider star clusters with ages from 10 to 100 Myr,
which are covered well by our stellar models. Nevertheless, our conclusion
should apply to star clusters as old as those with cluster ages corresponding
to a turn-off mass of around 2.5 $M_{\odot}$, below which magnetic braking may
play a vital role in determining stars’ initial rotational velocities.
The last panel of Figure 6 demonstrates that the stellar models three
magnitudes below the cluster turn-off luminosity almost retain their initial
rotational velocities. Only the stellar models in clusters younger than
$\sim$30 Myr evolve to slightly higher critical rotational velocity fractions,
because the stellar models three magnitudes below the cluster turnoff in such
young star clusters are in later evolutionary phases, that is, have consumed
more hydrogen, compared to those in older star clusters. The other panels in
this figure show the results for brighter stars. It can be seen that in
clusters older than $\sim$60 Myr, $v/v_{\mathrm{crit}}$ increases
monotonically with stellar brightness and even reaches values close to 1 near
the cluster turnoff. In clusters between 10 and 60 Myr, by contrast, there is
a “U-shape” feature for the stellar models with $W_{\mathrm{i}}\geq 0.6$. The
right side of the “U-shape” feature is caused by strong stellar winds due to
the bi-stability jump, which have a larger impact on more massive stars. The
left side of the “U-shape” feature is, in turn, a combined effect of the
later evolutionary phase of the corresponding stellar models and the smaller
impact of the enhanced stellar
winds due to the bi-stability jump, because the related massive stellar models
are only affected by the bi-stability jump at the very end of their MS
evolution. However, whether the bi-stability jump indeed exists and how it
affects stellar evolution is still debated (Björklund et al., 2022). We
explain this in more detail in Appendix C.
Figure 6 shows that the contribution of single-star evolution to the Be stars
in young star clusters depends on how fast single stars initially rotate and
how fast B stars have to rotate to become Be stars. Unlike the binary channel
that mainly produces near critically rotating Be stars (Wang et al., 2020),
the single-star channel gives rise to stars of various rotational velocities.
Given that the rotation rates of Be stars are disputed (Yudin, 2001; Cranmer,
2005; Zorec et al., 2016), we only provide an instructive discussion. We have
concluded that the red MS stars in young star clusters
have 50–65% of their critical rotational velocities. Figure 6 illustrates that
the stellar models with $W_{\mathrm{i}}=0.5$ cannot give rise to Be stars if
we consider 70% of the critical rotational velocity as the threshold for a
star to be a Be star, except in very young star clusters ($\sim$10 Myr). We do
not consider an even lower velocity threshold for Be stars, because otherwise
the red MS stars may be born as Be stars according to their derived initial
spins, which contradicts the observations that H$\alpha$ emitters are only
found within 2-3 magnitudes below the cluster turnoff. If the initial
rotational velocity for the red MS stars can be as high as 65% of their
critical values, single star evolution is able to explain the Be stars as
faint as 2 magnitudes below the cluster turnoff, given a Be star velocity
threshold of 70% of critical rotational velocity. But it becomes hard to
explain the Be stars with the single-star channel if the Be star velocity
threshold is higher than 80% of critical rotation. Near the cluster turn-off
region it is stellar winds that prevent stellar models from being Be stars,
while below the cluster turn-off stars do not have enough time to evolve to
very high spins.
In summary we find that, from the theoretical side, the significance of the
role of single-star evolution in producing Be stars in young star clusters
depends on the uncertain stellar wind and the uncertain rotational velocity
required to form Be stars. Though Wang et al. (2020, 2022) showed that binary
evolution can explain the luminosity range of the observed H$\alpha$ emitters
in young star clusters, and Hastings et al. (2021) found that binary
interaction alone is able to explain the luminosity range and number fraction
of the observed Be stars in young star clusters based on a simple analytical
analysis, we cannot exclude the possibility that the single-star channel
produces Be stars. Accurate estimation of the spins of Be stars should be
useful since, as we have mentioned above, Be stars stemming from the single-
star channel should have a larger spin range than those originating from the
binary channel. In addition, Be stars originating from the single-star channel
usually have a visible surface N enrichment (see Sec. 4.3). By contrast, the
Be stars formed in binary systems may bear a normal N abundance, because the
progenitors of these Be stars, that is, the accretors, rotate at normal
speeds until mass transfer starts, by which time a strong chemical
composition gradient, which can impede further mixing, has been established.
But whether
binary-evolution-induced Be stars have N enrichment may also depend on the
accretion efficiency, as the N enriched layers of the donor stars may remain
on the accretors.
We show the results for the SMC models in Fig. C.1. We reach a similar
conclusion for the SMC models, although, in general, the SMC models reach
higher velocities than their LMC counterparts as a consequence of their
weaker wind mass loss.
Figure 6: Surface critical velocity fraction $v/v_{\mathrm{crit}}$ of our LMC
models with different magnitudes as a function of cluster age. Each panel
shows the result for the stellar models with a given magnitude, indicated in
the legend. $M_{\mathrm{TO}}$ denotes the absolute $F814W$ magnitude of the
cluster turnoff. In each panel, different colors correspond to the stellar
models with different initial rotational velocities. We show the turn-off mass
of the clusters with corresponding ages on the top x-axis in the first panel.
## 5 Summary & conclusions
There is a growing consensus that a bimodal distribution of rotation rates
causes the split MSs in young star clusters. Previous studies proposed that
the red and blue MSs are comprised of near-critical rotating stars and
nonrotating stars, respectively, by comparing the SYCLIST models with
photometric observations. However, the suggestion that most of the stars are
born with near-critical rotation contradicts the spectroscopically measured
velocity of both cluster and field stars. Gossage et al. (2019) and Wang et
al. (2022) argued that the red MS stars should rotate at around half of their
critical velocities. In this work, we reinvestigated how much rotation is
needed to explain the color split of the observed double MSs in young star
clusters, considering our newly computed MESA models, the SYCLIST and the MIST
models. We pointed out that it is a misconception that the SYCLIST models
formally labeled as rotating at 90% of their critical angular velocities are
rotating close to their critical rotational velocity, because of the early
relaxation stage of the models and different definitions of critical rotation.
We indeed found consistency between our MESA models and the SYCLIST models.
Moreover, we found that the cluster MS stars follow a bi-modal velocity
distribution, with the majority born with spins of 50-65% of critical rotation
constituting the red MS, while a smaller yet significant fraction of stars
born with 0-35% of critical rotation constitute the blue MS. This not only
agrees with the bimodal velocity distribution found for field B- and A-type
stars (Dufton et al., 2013; Zorec & Royer, 2012), but also matches the
spectroscopically measured velocities of the red and the blue MS stars in NGC
1818 (Marino et al., 2018).
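The distinction between the two definitions of critical rotation mentioned above can be made explicit with the standard Roche-model relations (see, e.g., Maeder & Meynet 2000), stated here for reference under the usual rigid-rotation assumptions:

```latex
% At critical rotation the equatorial radius is R_{\rm e,crit} = \tfrac{3}{2} R_{\rm p},
% with R_p the polar radius, so
\Omega_{\rm crit} = \sqrt{\frac{GM}{R_{\rm e,crit}^{3}}}
                  = \sqrt{\frac{8}{27}\,\frac{GM}{R_{\rm p}^{3}}},
\qquad
v_{\rm crit} = \Omega_{\rm crit}\,R_{\rm e,crit}
             = \sqrt{\frac{2}{3}\,\frac{GM}{R_{\rm p}}} .
```

Since the equatorial radius $R_{\rm e}(\Omega)$ only reaches $R_{\rm e,crit}$ at criticality, $v/v_{\rm crit} = (\Omega/\Omega_{\rm crit})\,R_{\rm e}/R_{\rm e,crit} < \Omega/\Omega_{\rm crit}$ for subcritical rotation, which is why a model formally labeled as rotating at 90% of its critical angular velocity rotates at a noticeably smaller fraction of its critical linear velocity.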
For our analysis, we found it most useful to focus on the unevolved stars well
below the MS turnoff, because near the turnoff, gravity darkening as well as
single- and binary-star evolution effects can blur the picture. We show that
model sets which use different assumptions about rotational mixing lead to the
same results when studying the unevolved stars, but predict different turn-off
features. For example, fast-rotating SYCLIST models that adopt efficient
rotational mixing are bluer than their slowly-rotating counterparts in the
turn-off region. In contrast, in our MESA models and in the MIST models, in
which strong rotational mixing is inhibited by chemical composition gradients,
fast-rotating models always remain redder than their slowly-rotating
counterparts. Current sparse spectroscopic observations find that fast-
rotating stars are in general redder than the slowly-rotating ones even near
the cluster turnoff, which may support inefficient rotational mixing (Dupree
et al., 2017; Marino et al., 2018; Kamann et al., 2020). The surface nitrogen
abundance is significantly enhanced in both the MESA and the SYCLIST
fast-rotating models, but the helium abundance is only enhanced in the
fast-rotating SYCLIST
models. Spectroscopic measurements of the surface chemical composition of
stars in the turn-off region may be able to put tighter constraints on the
strength of rotational mixing.
The knowledge of the initial spin distribution of the cluster stars has
important implications for their later evolution. Consistent with previous
studies, our single-star models increase their critical velocity fraction as
a consequence of their contracting cores during hydrogen burning. We found that
if the rotation-velocity threshold for a star to be a Be star is as low as 70%
of the critical velocity, our LMC models starting with 50-65% of their
critical velocity can evolve into Be stars before the end of core hydrogen
burning. However, for a threshold larger than 80% of critical velocity, it
depends on the cluster age whether the initially rapidly rotating single-star
models can become Be stars during their MS evolution. A major factor that may
prevent stellar models from rotating critically is the strong mass loss due to
the bi-stability jump. As discussed above, whether the bi-stability jump
exists is disputed. Nevertheless, the uncertain wind mass loss does not affect
our finding that in most cases the initially moderately rotating stars (red MS
stars) cannot reproduce the Be stars as faint as 2-3 magnitudes below the
cluster turnoff. The detection of these faint Be stars in young star clusters
implies that binary evolution is an indispensable part of Be star formation,
while single-star evolution may still play a role in producing Be stars near
the cluster turnoff.
Our conclusion that the vast majority of stars are born with moderately rapid
or with slow rotation also affects our understanding of post-MS stars. It
implies that, according to our models, rotational mixing may affect the
surface abundances of trace elements like boron or nitrogen, but that
otherwise rotation does not affect the evolution of the stars, their post-MS
life, supernova type, or remnant mass in an important way. We might still
expect
differences in the advanced evolution of blue and red MS stars, perhaps due to
different binary properties or different magnetic fields, which would reward
study with appropriate stellar models and investigation in future
observational campaigns.
## References
* Aerts (2021) Aerts, C. 2021, Reviews of Modern Physics, 93, 015001
* Bastian et al. (2017) Bastian, N., Cabrera-Ziri, I., Niederhofer, F., et al. 2017, MNRAS, 465, 4795
* Bastian & de Mink (2009) Bastian, N. & de Mink, S. E. 2009, MNRAS, 398, L11
* Bastian et al. (2018) Bastian, N., Kamann, S., Cabrera-Ziri, I., et al. 2018, MNRAS, 480, 3739
* Björklund et al. (2022) Björklund, R., Sundqvist, J. O., Singh, S. M., Puls, J., & Najarro, F. 2022, arXiv e-prints, arXiv:2203.08218
* Bodensteiner et al. (2021) Bodensteiner, J., Sana, H., Wang, C., et al. 2021, A&A, 652, A70
* Bodensteiner et al. (2020) Bodensteiner, J., Shenar, T., & Sana, H. 2020, A&A, 641, A42
* Bowman (2020) Bowman, D. M. 2020, Frontiers in Astronomy and Space Sciences, 7, 70
* Brandt & Huang (2015) Brandt, T. D. & Huang, C. X. 2015, ApJ, 807, 25
* Brott et al. (2011) Brott, I., de Mink, S. E., Cantiello, M., et al. 2011, A&A, 530, A115
* Cantiello & Langer (2010) Cantiello, M. & Langer, N. 2010, A&A, 521, A9
* Carini et al. (2020) Carini, R., Biazzo, K., Brocato, E., Pulone, L., & Pasquini, L. 2020, AJ, 159, 152
* Chaboyer & Zahn (1992) Chaboyer, B. & Zahn, J. P. 1992, A&A, 253, 173
* Choi et al. (2016) Choi, J., Dotter, A., Conroy, C., et al. 2016, ApJ, 823, 102
* Correnti et al. (2017) Correnti, M., Goudfrooij, P., Bellini, A., Kalirai, J. S., & Puzia, T. H. 2017, MNRAS, 467, 3628
* Cranmer (2005) Cranmer, S. R. 2005, ApJ, 634, 585
* D’Antona et al. (2015) D’Antona, F., Di Criscienzo, M., Decressin, T., et al. 2015, MNRAS, 453, 2637
* D’Antona et al. (2017) D’Antona, F., Milone, A. P., Tailo, M., et al. 2017, Nature Astronomy, 1, 0186
* de Jager et al. (1988) de Jager, C., Nieuwenhuijzen, H., & van der Hucht, K. A. 1988, A&AS, 72, 259
* Dufton et al. (2013) Dufton, P. L., Langer, N., Dunstall, P. R., et al. 2013, A&A, 550, A109
* Dupree et al. (2017) Dupree, A. K., Dotter, A., Johnson, C. I., et al. 2017, ApJ, 846, L1
* Ekström et al. (2012) Ekström, S., Georgy, C., Eggenberger, P., et al. 2012, A&A, 537, A146
* Ekström et al. (2008) Ekström, S., Meynet, G., Maeder, A., & Barblan, F. 2008, A&A, 478, 467
* Endal & Sofia (1976) Endal, A. S. & Sofia, S. 1976, ApJ, 210, 184
* Endal & Sofia (1978) Endal, A. S. & Sofia, S. 1978, ApJ, 220, 279
* Evans et al. (2005) Evans, C. J., Smartt, S. J., Lee, J. K., et al. 2005, A&A, 437, 467
* Gaia Collaboration et al. (2018) Gaia Collaboration, Helmi, A., van Leeuwen, F., et al. 2018, A&A, 616, A12
* Georgy et al. (2013) Georgy, C., Ekström, S., Granada, A., et al. 2013, A&A, 553, A24
* Girardi et al. (2011) Girardi, L., Eggenberger, P., & Miglio, A. 2011, MNRAS, 412, L103
* Gossage et al. (2019) Gossage, S., Conroy, C., Dotter, A., et al. 2019, ApJ, 887, 199
* Hastings et al. (2021) Hastings, B., Langer, N., Wang, C., Schootemeijer, A., & Milone, A. P. 2021, A&A, 653, A144
* Hastings et al. (2020) Hastings, B., Wang, C., & Langer, N. 2020, A&A, 633, A165
* Heger et al. (2000) Heger, A., Langer, N., & Woosley, S. E. 2000, ApJ, 528, 368
* Huang et al. (2010) Huang, W., Gies, D. R., & McSwain, M. V. 2010, ApJ, 722, 605
* Inno et al. (2016) Inno, L., Bono, G., Matsunaga, N., et al. 2016, ApJ, 832, 176
* Joshi & Panchal (2019) Joshi, Y. C. & Panchal, A. 2019, A&A, 628, A51
* Kamann et al. (2020) Kamann, S., Bastian, N., Gossage, S., et al. 2020, MNRAS, 492, 2177
* Kamann et al. (2018) Kamann, S., Bastian, N., Husser, T. O., et al. 2018, MNRAS, 480, 1689
* Keller & Bessell (1998) Keller, S. C. & Bessell, M. S. 1998, A&A, 340, 397
* Langer (2012) Langer, N. 2012, ARA&A, 50, 107
* Li et al. (2017) Li, C., de Grijs, R., Deng, L., & Milone, A. P. 2017, ApJ, 844, 119
* Lin et al. (2011) Lin, M.-K., Krumholz, M. R., & Kratter, K. M. 2011, MNRAS, 416, 580
* Liu et al. (2006) Liu, Q. Z., van Paradijs, J., & van den Heuvel, E. P. J. 2006, A&A, 455, 1165
* Maeder & Meynet (2000) Maeder, A. & Meynet, G. 2000, ARA&A, 38, 143
* Marchant et al. (2016) Marchant, P., Langer, N., Podsiadlowski, P., Tauris, T. M., & Moriya, T. J. 2016, A&A, 588, A50
* Marino et al. (2018) Marino, A. F., Przybilla, N., Milone, A. P., et al. 2018, AJ, 156, 116
* Meynet & Maeder (1997) Meynet, G. & Maeder, A. 1997, A&A, 321, 465
* Milone et al. (2016) Milone, A. P., Marino, A. F., D’Antona, F., et al. 2016, MNRAS, 458, 4368
* Milone et al. (2017) Milone, A. P., Marino, A. F., D’Antona, F., et al. 2017, MNRAS, 465, 4363
* Milone et al. (2018) Milone, A. P., Marino, A. F., Di Criscienzo, M., et al. 2018, MNRAS, 477, 2640
* Niederhofer et al. (2015) Niederhofer, F., Georgy, C., Bastian, N., & Ekström, S. 2015, MNRAS, 453, 2070
* Nieuwenhuijzen & de Jager (1990) Nieuwenhuijzen, H. & de Jager, C. 1990, A&A, 231, 134
* Paxton et al. (2011) Paxton, B., Bildsten, L., Dotter, A., et al. 2011, ApJS, 192, 3
* Paxton et al. (2013) Paxton, B., Cantiello, M., Arras, P., et al. 2013, ApJS, 208, 4
* Paxton et al. (2015) Paxton, B., Marchant, P., Schwab, J., et al. 2015, ApJS, 220, 15
* Paxton et al. (2019) Paxton, B., Smolec, R., Schwab, J., et al. 2019, ApJS, 243, 10
* Peters et al. (2008) Peters, G. J., Gies, D. R., Grundstrom, E. D., & McSwain, M. V. 2008, ApJ, 686, 1280
* Pietrzyński et al. (2013) Pietrzyński, G., Graczyk, D., Gieren, W., et al. 2013, Nature, 495, 76
* Pols et al. (1991) Pols, O. R., Cote, J., Waters, L. B. F. M., & Heise, J. 1991, A&A, 241, 419
* Raguzova & Popov (2005) Raguzova, N. V. & Popov, S. B. 2005, Astronomical and Astrophysical Transactions, 24, 151
* Rivinius et al. (2013) Rivinius, T., Carciofi, A. C., & Martayan, C. 2013, A&A Rev., 21, 69
* Salpeter (1955) Salpeter, E. E. 1955, ApJ, 121, 161
* Schootemeijer et al. (2018) Schootemeijer, A., Götberg, Y., de Mink, S. E., Gies, D., & Zapartas, E. 2018, A&A, 615, A30
* Schootemeijer et al. (2019) Schootemeijer, A., Langer, N., Grin, N. J., & Wang, C. 2019, A&A, 625, A132
* Shao & Li (2014) Shao, Y. & Li, X.-D. 2014, ApJ, 796, 37
* Spruit (2002) Spruit, H. C. 2002, A&A, 381, 923
* van Bever & Vanbeveren (1997) van Bever, J. & Vanbeveren, D. 1997, A&A, 322, 116
* Vink et al. (2001) Vink, J. S., de Koter, A., & Lamers, H. J. G. L. M. 2001, A&A, 369, 574
* von Zeipel (1924) von Zeipel, H. 1924, MNRAS, 84, 665
* Wang et al. (2020) Wang, C., Langer, N., Schootemeijer, A., et al. 2020, ApJ, 888, L12
* Wang et al. (2022) Wang, C., Langer, N., Schootemeijer, A., et al. 2022, Nature Astronomy, 6, 480
* Wang et al. (2017) Wang, L., Gies, D. R., & Peters, G. J. 2017, ApJ, 843, 60
* Wang et al. (2021) Wang, L., Gies, D. R., Peters, G. J., et al. 2021, AJ, 161, 248
* Yang et al. (2013) Yang, W., Bi, S., Meng, X., & Liu, Z. 2013, ApJ, 776, 112
* Yoon et al. (2006) Yoon, S. C., Langer, N., & Norman, C. 2006, A&A, 460, 199
* Yudin (2001) Yudin, R. V. 2001, A&A, 368, 912
* Zorec et al. (2016) Zorec, J., Frémat, Y., Domiciano de Souza, A., et al. 2016, A&A, 595, A132
* Zorec & Royer (2012) Zorec, J. & Royer, F. 2012, A&A, 537, A120
###### Acknowledgements.
We thank Dominic Bowman for useful discussions on how asteroseismology
constrains the rotational structure of stars. We thank Sebastian Kamann and
Nate Bastian for their stimulating discussions. SdM and SJ acknowledge partial
funding by the Netherlands Organization for Scientific Research (NWO) as part
of the Vidi research program BinWaves (project number 639.042.728). This work
has received funding from the European Research Council (ERC) under the
European Union’s Horizon 2020 research and innovation programme (Grant Agreement
ERC-StG 2016, No 716082 ’GALFOR’, PI: Milone,
http://progetti.dfa.unipd.it/GALFOR), and from MIUR through the FARE project
R164RM93XW SEMPLICE (PI: Milone) and the PRIN program 2017Z2HSMF (PI: Bedin).
## Appendix A Examples of stellar evolution in the HRD
Figure A.1 displays the evolutionary tracks of 5$\,\mathrm{M}_{\odot}\,$models
with different initial rotational velocities in the HRD. The left and the
right panels correspond to the LMC and SMC metallicity, respectively. We
include our MESA models, the SYCLIST models, and the MIST models. We have
redefined the initial critical velocity fraction for the SYCLIST models, such
that it matches the definition used in our MESA models (see Sec. 3). As a
reference, we show the formally labeled critical angular velocity fraction of
the SYCLIST models in brackets. As mentioned in Sec. 3, the SYCLIST models use
a larger metallicity $Z$ appropriate for the LMC stars than our MESA models
and the MIST models. As a consequence, the SYCLIST ZAMS models are cooler than
the corresponding MESA models in this work and the MIST models at the same
velocities. In contrast, all three model sets adopt a similar metallicity for
the SMC stars; therefore, their ZAMS models occupy similar positions in the
HRD.
It can be seen in Fig. A.1 that at the ZAMS, before evolutionary effects have
set in, the fast-rotating models are cooler than the slowly-rotating models
in all three model sets, with a similar difference in rotation resulting in a
similar temperature variation. This means that the implementations of the
centrifugal effect in the three model sets are consistent.
Nevertheless, after the ZAMS, the models in different model sets evolve
differently, even for the nonrotating-star models. The discrepancy of the
evolutionary tracks of the nonrotating stars is caused by different internal
mixing. For a 5$\,\mathrm{M}_{\odot}\,$star, $\alpha_{\rm OV}$ roughly equals
0.1, 0.15 and 0.2 in terms of step overshooting in the SYCLIST models, our
MESA models and the MIST models, respectively. Consequently, the SYCLIST
nonrotating-star models have the lowest turn-off luminosities and the shortest
MS lifetimes, while the opposite holds for the MIST models. The diverse
evolution of the rotating-star models in the different model sets, in turn,
is mainly due to different rotationally induced mixing efficiencies. In
contrast to our MESA models and the MIST models, in which the fast-rotating
models are always cooler than their slowly-rotating counterparts, the
evolutionary tracks of the SYCLIST fast-rotating-star models cross that of the
nonrotating models near the end of the MS evolution. This is because in the
SYCLIST models, helium is efficiently mixed into the stellar surface even in
the presence of a chemical composition gradient. Figure A.2 compares the
relative enhancement of the surface He abundance of different stellar models.
It can be
seen that rotational mixing is the strongest in the SYCLIST models, but modest
in the MIST models and negligible in our MESA models. It is worth noting that
rotational mixing increases the MS lifetime of the SYCLIST rotating models by
an overall amount of $\sim$ 25% (Ekström et al., 2012), while it only
increases the MS lifetime of our MESA and the MIST rotating models modestly.
Figure A.1: Evolutionary tracks of 5$\,\mathrm{M}_{\odot}\,$stellar models
with LMC-like (left) and SMC-like (right) metallicities. The solid blue,
dashed orange and dotted purple lines denote our MESA models, the SYCLIST
models, and the MIST models, respectively. The adopted metallicity Z in
different model sets are listed in the legend. In each model set, stellar
evolutionary tracks from left to right correspond to models with increasing
initial rotational velocity, with values listed in the legend. Numbers in the
parentheses indicate the formally labeled fractional critical angular
velocities in the SYCLIST web interface. Figure A.2: Surface helium
enhancement of 5$\,\mathrm{M}_{\odot}\,$SMC stellar models as a function of
stellar age. The y-axis shows the increase of the surface helium abundance of
the stellar models with respect to their initial abundance. Here
$X_{\mathrm{He}}$ and $X_{\mathrm{He,0}}$ are the current surface helium
abundance and the initial surface helium abundance of the stellar models,
respectively. The orange solid line corresponds to our nonrotating MESA
models, while the solid, dashed, and dotted blue lines correspond to our MESA
models, the SYCLIST models, and the MIST models that initially rotate at 40%
of their critical velocities.
## Appendix B Degeneracy in explaining the split main sequence as the effect
of rotation
Degeneracy exists in interpreting the observed split MS in young star
clusters as the effect of rotation, because different combinations of slow
and fast rotation can reproduce the observed color separation between the red
and blue MSs equally well. Figures B.1 and B.2 are similar to Fig. 1, but
$\Delta\mathrm{color}$ is calculated by using the isochrones of the $W_{\rm
i}=0.20$ and $W_{\rm i}=0.30$ star models as the baselines. In these cases,
$W_{\rm i}\sim 0.55$ and $W_{\rm i}\sim 0.60$, respectively, are required to
explain the observed color separation between the red and blue MSs. Wang et
al. (2022) also use $W_{\rm i}\sim 0.65$ and $W_{\rm i}\sim 0.35$ to fit the
red and blue MSs in young star clusters.
In Fig. B.3, we perform isochrone fittings to the red and the blue MS stars in
NGC 1818, using our MESA models with the abovementioned initial velocities.
After adjusting distance modulus and reddening, we attain isochrone fittings
that are as good as those shown in the right panel of Fig. 2, which implies
that photometric observations are not enough to constrain stars’ precise
rotation. Further spectroscopic observations are encouraged.
Figure B.1: Same as Fig. 1, but color differences are calculated with respect
to the $W_{\rm i}=0.2$ models.
Figure B.2: Same as Fig. 1, but color differences are calculated with respect
to the $W_{\rm i}=0.3$ models.
Figure B.3: Same as Fig. 2, but we only show
our MESA models and use pairs of isochrones of the $W_{\rm i}=0.55$ and
$W_{\rm i}=0.20$ (left), the $W_{\rm i}=0.60$ and $W_{\rm i}=0.30$ (middle),
and the $W_{\rm i}=0.65$ and $W_{\rm i}=0.35$ (right) models to fit the
observed red and blue main sequences in NGC 1818.
## Appendix C Surface velocity of the main-sequence models
In this appendix, we continue our discussion in Sec. 4.5 on how our finding
that the majority of the stars should be born with moderate rotation changes
our perception of Be star formation. First, we display Fig. C.1,
which is similar to Fig. 6 but for the SMC models. In the following, we
explain this figure and Fig. 6 further by showing some evolutionary examples
of our MESA models and the SYCLIST models. The left panel of Fig. C.2 shows
the evolutionary tracks of the selected stellar models in the HRD. The gray
shaded area indicates the temperature zone where the stellar mass loss rate in
our MESA models is affected by the bi-stability jump. The bi-stability jump
temperature is set by eqs. 14 and 15 in Vink et al. (2001). The gray shaded
area is a buffer zone with a temperature range of 10% of the bi-stability jump
temperature. In this buffer zone, the mass loss rate is computed by
interpolating between the winds for hot stars (Vink et al., 2001) and the
winds for cool stars (i.e., the maximum value between the mass loss rate
computed from Vink et al. (2001) and Nieuwenhuijzen & de Jager (1990)). The
right panels of this figure show the mass loss history and the surface
velocity evolution of the selected stellar models. We see that our
$15\,\rm\,M_{\odot}$ LMC model with $W_{\mathrm{i}}=0.7$ achieves
near-critical rotation at $\sim 12\,$Myr and then suddenly spins down to
$v/v_{\mathrm{crit}}\sim 0.55$, which roughly equals the spin of our LMC model
with $W_{\mathrm{i}}=0.4$ at the TAMS, due to strong stellar winds. The first
peak of the mass loss rate of this star model is caused by critical rotation,
while the later steady increase and the plateau between $\sim 12.5$ and 13.75
Myr are due to the bi-stability jump. The bi-stability jump related stellar
winds have a larger impact on the stellar models with higher initial
velocities. Nevertheless, the $15\,\rm\,M_{\odot}$ SMC star model with
$W_{\mathrm{i}}=0.7$ does not spin down as much as its LMC counterpart because
it enters the bi-stability jump temperature zone later and therefore spends
less time with the enhanced stellar wind mass loss. A similar feature of a
velocity bump before the end of MS evolution is also seen in the
$15\,\rm\,M_{\odot}$ models in Hastings et al. (2020).
The behavior of the $12\,\rm\,M_{\odot}$ models is similar to the
$15\,\rm\,M_{\odot}$ models, but the TAMS spin of the $12\,\rm\,M_{\odot}$ LMC
model with $W_{\mathrm{i}}=0.7$ is slightly larger than that of the
$15\,\rm\,M_{\odot}$ counterpart, due to less intense stellar winds.
In contrast, the $10\,\rm\,M_{\odot}$ models reach the bi-stability jump
temperature at an early time. The enhanced stellar winds prevent our MESA
models from rotating critically, but they are not sufficient to significantly
brake the stellar models. Again in this case, the SMC models achieve the bi-
stability jump temperature later, and therefore, have higher surface
velocities during their MS evolution than their LMC counterparts. Stars less
massive than $10\,\rm\,M_{\odot}$ become largely free of the effect of the
bi-stability jump and can reach near-critical rotation at their TAMS, as their
wind mass loss rates are quite low even in the bi-stability jump regime.
Figure C.2 helps us understand why the initially fast-rotating star models in
certain mass ranges cannot reach velocities as high as those in other mass
ranges, the feature seen in Fig. 6. In contrast to our MESA models, none of
the selected SYCLIST models experience such a spin-down, because they adopt
de Jager et al. (1988) winds that do not include the bi-stability jump. We do
not attempt to assess the mass loss recipes in different model sets because
stellar winds are among the least constrained physics in astronomy, but we
mean to demonstrate that initially moderately rotating single stars are able
to reach higher velocities, depending on the competition between angular
momentum transport from the stellar center to the stellar surface and angular
momentum loss via stellar winds.
Finally, we directly examine the current surface rotational velocity of the
stellar models to explore whether single-star evolution can explain the
observed Be stars in NGC 1818. We exhibited the current velocity of the
stellar models in the lower right panel of Fig. 5, but only in terms of
$v\sin i$ and only for the models $\sim 1.5$ magnitudes fainter than the
cluster turnoff. Here we show the current critical-rotation fraction of the
stellar models as a function of their apparent magnitude in Fig. C.3. The red
solid and dashed lines are the
stellar models that are employed in fitting the red MS stars in NGC 1818
(i.e., the stellar models that are used in constructing the red and orange
solid lines in the right panel of Fig. 2). The other lines represent the
results for the stellar models with other initial velocities, using the same
distance moduli and reddenings. In general, $v/v_{\rm crit}$ increases
steadily from high magnitude (unevolved part) to low magnitude (evolved part)
until the turn-off magnitude, at which $v/v_{\rm crit}$ increases
dramatically. The turn-over at around the 16.5th magnitude for our MESA
models with $W_{\mathrm{i}}\geq 0.6$ is due to the spin-down of the turn-off
stars caused by the bi-stability jump. Figure C.3 directly shows that the
stars $\sim$1.5 magnitudes fainter than the cluster turn-off magnitude almost
retain
their initial spins. It can be seen that even for the initially rapidly
rotating stars ($\sim$60% of critical velocities), single-star evolution can
only contribute Be stars at most 1.5 magnitudes below the cluster turnoff,
given a Be star velocity threshold of 70% of the break-up value. To explain
the origin of the fainter Be stars, binary interaction is needed.
Figure C.1: Similar to Fig. 6, but for the SMC models.
Figure C.2: The left
panel shows the evolutionary tracks of the rotating single-star models in
different model sets in the Hertzsprung-Russell diagram. The thick orange
lines correspond to our MESA models with higher initial rotational velocity,
with solid and dashed lines denoting the LMC metallicity and the SMC
metallicity, respectively, while the thick blue lines represent our MESA
models with lower initial rotational velocity. The thin purple and green lines
denote the SYCLIST models that have initial fast and slow spins, respectively.
The masses of the stellar models are listed. The gray shaded area indicates the
temperature region in which we consider the bi-stability jump related stellar
wind for our MESA models. The other three columns of panels illustrate the
evolution of the mass loss rate and the critical velocity fraction (see Eq. 3)
as a function of stellar age, for three different masses. The meanings of the
lines are the same as in the left panel.
Figure C.3: Current surface rotation rates for our MESA models (solid line)
and the SYCLIST models (dashed line).
The adopted model age, distance modulus, and reddening for each model set are
the same as in the right panel of Fig. 2. The color coding is the same as in
Fig. 1.
# On powers of cover ideals of graphs
Dancheng Lu and Zexin Wang
Dancheng Lu, School of Mathematics Science, Soochow University, P.R. China
<EMAIL_ADDRESS>
Zexin Wang, School of Mathematics Science, Soochow University, P.R. China
<EMAIL_ADDRESS>
###### Abstract.
For a simple graph $G$, assume that $J(G)$ is the vertex cover ideal of $G$
and $J(G)^{(s)}$ is the $s$-th symbolic power of $J(G)$. We prove that
$\operatorname{reg}(J(C)^{(s)})=\operatorname{reg}(J(C)^{s})$ for all $s\geq
1$ and for every odd cycle $C$. For a simplicial complex $\Delta$, we show that
if $I_{\Delta}^{\vee}$ is weakly polymatroidal (not necessarily generated in
one degree) then $\Delta$ is vertex decomposable. Let $W=G^{\pi}$ be a fully
clique-whiskering graph. We prove that $J(W)^{s}$ is weakly polymatroidal for
all $s\geq 1$. Finally, we point out a gap in the proof of [17, Theorem 4.3]
and give a revised proof for it.
###### Key words and phrases:
Regularity, Symbolic power, Odd cycle, Weakly polymatroidal, Whisker graph
###### 2010 Mathematics Subject Classification:
Primary 05E40,13A02; Secondary 06D50.
## 1\. Introduction
Let $R=K[x_{1},\ldots,x_{n}]$ be the polynomial ring over a field $K$ and let
$G$ be a simple graph on vertex set $[n]:=\\{1,2,\ldots,n\\}$ with edge set
$E(G)$. There are two square-free monomial ideals of $R$ associated to $G$:
the edge ideal $I(G)$ which is generated by all monomials $x_{i}x_{j}$ with
$\\{i,j\\}\in E(G)$ and the vertex cover ideal $J(G)$ generated by monomials
$\prod_{i\in F}x_{i}$, where $F$ is taken over all minimal vertex covers of
$G$. Recall that a subset $F$ of $V(G)$ is a vertex cover of $G$ if $F\cap
e\neq\emptyset$ for every edge $e$ of $G$, and a vertex cover $F$ of $G$ is
minimal if $F\setminus\\{i\\}$ is not a vertex cover for each $i\in F$. The
vertex cover ideal $J(G)$ is the Alexander dual of the edge ideal $I(G)$,
i.e.,
$J(G)=I(G)^{\vee}=\bigcap_{\\{i,j\\}\in E(G)}(x_{i},x_{j}).$
Let $I$ be a graded ideal of $R$. The $s$-th symbolic power of $I$ is defined
by
$I^{(s)}=\bigcap_{\mathfrak{p}\in\mathrm{Min}(I)}I^{s}R_{\mathfrak{p}}\cap R,$
where $\mathrm{Min}(I)$ is as usual the set of all minimal prime ideals of
$I$. It follows from [12, Proposition 1.4.4] that for every integer $s\geq 1$,
$J(G)^{(s)}=\bigcap_{\\{i,j\\}\in E(G)}(x_{i},x_{j})^{s}.$
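For small graphs, the displayed intersection can be evaluated directly: a monomial $x^{a}$ lies in $(x_{i},x_{j})^{s}$ exactly when $a_{i}+a_{j}\geq s$. The following Python sketch (the function name is our own, for illustration) enumerates the minimal generators of $J(G)^{(s)}$ this way:

```python
from itertools import product

def symbolic_power_gens(edges, n, s):
    """Minimal monomial generators of J(G)^(s) = the intersection of
    (x_i, x_j)^s over the edges {i, j}, returned as exponent vectors.
    A monomial x^a lies in (x_i, x_j)^s iff a_i + a_j >= s, and a minimal
    generator never needs an exponent above s, so searching {0,...,s}^n
    suffices."""
    members = [a for a in product(range(s + 1), repeat=n)
               if all(a[i] + a[j] >= s for i, j in edges)]
    def strictly_dominates(a, b):
        return a != b and all(x >= y for x, y in zip(a, b))
    return sorted(a for a in members
                  if not any(strictly_dominates(a, b) for b in members))

# The triangle C_3 on vertices 0, 1, 2:
triangle = [(0, 1), (0, 2), (1, 2)]
# s = 1 recovers the minimal vertex covers, i.e. G(J(G)).
print(symbolic_power_gens(triangle, 3, 1))  # [(0, 1, 1), (1, 0, 1), (1, 1, 0)]
# For s = 2 the extra generator (1, 1, 1), i.e. x_1 x_2 x_3, appears.
print(symbolic_power_gens(triangle, 3, 2))
```

For the triangle, $s=2$ produces the degree-three generator $x_{1}x_{2}x_{3}$, which lies in $J(G)^{(2)}$ but not in $J(G)^{2}$, in line with the well-known fact that these powers coincide only for bipartite graphs.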
The Castelnuovo-Mumford regularity (or simply regularity) is a fundamental
invariant in commutative algebra and algebraic geometry. For a finitely
generated graded module $M$ over the polynomial ring $R$, the regularity of
$M$, denoted by $\operatorname{reg}(M)$, is the least integer $r\geq 0$ such
that for all $i\geq 0$, the $i$-th syzygy of $M$ is generated by homogeneous
elements of degree at most $r+i$. An equivalent definition of the regularity
via local cohomology is as follows:
$\operatorname{reg}(M)=\max\\{i+j\colon\;H^{i}_{\mathfrak{m}}(M)_{j}\neq
0,i\geq 0,j\in\mathbb{Z}\\}.$
Here $\mathfrak{m}$ denotes the maximal ideal $(x_{1},\ldots,x_{n})$.
For edge ideals of graphs, there have been a lot of research on connections
between the regularity functions $\operatorname{reg}(I(G)^{s})$ as well as
$\operatorname{reg}(I(G)^{(s)})$ and the combinatorial properties of $G$, see
[1] and the references therein. Recently the conjecture that
$\operatorname{reg}(I(G)^{(s)})=\operatorname{reg}(I(G)^{s})$ for all $s\geq
1$ and for all graphs $G$ has attracted much attention, and much progress has
been made in this direction; for details, see [9] and the references therein.
Meanwhile, the study of algebraic properties of (symbolic and ordinary) powers
of vertex cover ideals of graphs is also an active research topic. However,
the regularity of powers of such ideals is harder to compute or deal with. In
fact, although S.A. Seyed Fakhari presented in [21] the following remarkable
bounds for a large class of graphs $G$, including bipartite graphs, unmixed
graphs, claw-free graphs:
$s\mathrm{Deg}(J(G))\leq\mathrm{reg}(J(G)^{(s)})\leq(s-1)\mathrm{Deg}(J(G))+|V(G)|-1,$
there are not many graphs $G$ for which either
$\operatorname{reg}(J(G)^{(s)})$ or $\operatorname{reg}(J(G)^{s})$ is known
precisely. Here, $\mathrm{Deg}(J(G))$ is the maximum size of minimal vertex
covers of $G$. When $G$ is either a crown graph or a complete multipartite
graph, $\operatorname{reg}(J(G)^{(s)})$ was explicitly given in [11]. It is
known that if a graded ideal is componentwise linear then its regularity is
equal to the maximum degree of its minimal generators. In the literature [4,
7, 17, 20, 21] and [22], some classes of graphs for which either $J(G)^{s}$ or
$J(G)^{(s)}$ is componentwise linear are identified. For example, in [22], all
the graphs $G$ with the property that $J(G)^{(k)}$ has a linear resolution for
some (equivalently, for all) integer $k\geq 2$ were characterized. In [4, 7,
20, 21] and [22], the authors investigate the question of how to
combinatorially modify a graph to obtain prescribed algebraic properties of
the corresponding monomial ideals, and identify some graphs $G$ such that
$J(G)^{(s)}$ is componentwise linear. In [17], some classes of graphs $G$ such
that $J(G)^{s}$ is componentwise linear were found. For such graphs $G$, the
regularity of either $J(G)^{s}$ or $J(G)^{(s)}$ is known.
In this paper we investigate further the properties of (symbolic and ordinary)
powers of vertex cover ideals of simple graphs. Our first main result is
motivated by Theorem 5.15 in [4], in which a family of graphs $G$ was
constructed such that $\operatorname{reg}(J(G)^{(s)})$ is not eventually
linear in $s$. This result particularly shows that the equality
$\operatorname{reg}(J(G)^{(s)})=\operatorname{reg}(J(G)^{s})$ is not true in
general. On the other hand, it is known that the equality
$\operatorname{reg}(I(G)^{(s)})=\operatorname{reg}(I(G)^{s})$ for all $s\geq
1$ holds for many classes of graphs, such as unicyclic graphs and chordal
graphs. These facts lead us to ask the following question:
For which graphs $G$ does one have
$\operatorname{reg}(J(G)^{(s)})=\operatorname{reg}(J(G)^{s})$ for all $s\geq
1$?
In this vein we prove the following result.
Theorem 2.4. If $G$ is an odd cycle, then
$\operatorname{reg}(J(G)^{(s)})=\operatorname{reg}(J(G)^{s})$ for all $s\geq
1$.
We remark this is the first non-trivial example where the above formula holds,
due to the well-known fact that $G$ is a bipartite graph if and only if
$J(G)^{(s)}=J(G)^{s}$ for some $s\geq 2$ (equivalently for all $s\geq 1$), see
[23, Proposition 1.3] or [13, Theorem 5.1].
We next investigate relations between weak polymatroidality of a monomial
ideal and vertex decomposability of a simplicial complex. It turns out that
this helps to understand the behavior of the powers of cover ideals. Vertex
decomposability was first introduced by [19] in the pure case, and extended to
the non-pure case in [2]. It is defined in terms of the deletion and link. Let
$\Delta$ be a simplicial complex on $[n]$. For $x\in[n]$, the link of $x$ in
$\Delta$ is the subcomplex
$\mbox{lk}_{\Delta}(x)=\\{F\in\Delta\colon\;F\cup\\{x\\}\in\Delta\mbox{ and
}x\notin F\\};$
and the deletion of $x$ in $\Delta$ is the subcomplex
$\mbox{del}_{\Delta}(x)=\\{F\in\Delta\colon\;x\notin F\\}.$
###### Definition 1.1.
A simplicial complex $\Delta$ is said to be vertex decomposable if either
$\Delta$ is a simplex, or there exists a vertex $x$ of $\Delta$ such that
(1) $\mbox{lk}_{\Delta}(x)$ and $\mbox{del}_{\Delta}(x)$ are vertex
decomposable;
(2) Each facet of $\mbox{lk}_{\Delta}(x)$ is not a facet of
$\mbox{del}_{\Delta}(x)$.
A vertex satisfying condition (2) is called a shedding vertex of $\Delta$.
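For small complexes, Definition 1.1 can be checked mechanically by recursing over candidate shedding vertices. The sketch below (our own illustrative code, not from the paper) represents a complex by its facets:

```python
def maximal(sets):
    """Keep only the inclusion-maximal sets."""
    sets = set(sets)
    return {S for S in sets if not any(S < T for T in sets)}

def is_vertex_decomposable(facet_list):
    """Brute-force check of Definition 1.1 for a complex given by its facets."""
    facets = maximal(frozenset(F) for F in facet_list)
    if len(facets) <= 1:
        return True  # a simplex (or the empty complex)
    for x in set().union(*facets):
        link = {F - {x} for F in facets if x in F}  # facets of lk(x)
        deletion = maximal({F for F in facets if x not in F} | link)  # facets of del(x)
        if link & deletion:
            continue  # condition (2) fails: x is not a shedding vertex
        if is_vertex_decomposable(link) and is_vertex_decomposable(deletion):
            return True
    return False

# The boundary of a triangle is vertex decomposable ...
print(is_vertex_decomposable([{1, 2}, {2, 3}, {1, 3}]))  # True
# ... while two disjoint edges (a disconnected 1-dimensional complex) are not:
# the shedding condition fails at every vertex.
print(is_vertex_decomposable([{1, 2}, {3, 4}]))  # False
```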
For the recent developments on vertex decomposability, one may refer to [8]
and the references therein. The following strict implications are well known
for a simplicial complex $\Delta$:
$\mbox{vertex decomposable }\Longrightarrow\mbox{ shellable
}\Longrightarrow\mbox{ sequentially Cohen-Macaulay}$
Moreover, $\Delta$ is shellable if and only if $I_{\Delta}^{\vee}$ has linear
quotients; and $\Delta$ is sequentially Cohen-Macaulay if and only if
$I_{\Delta}^{\vee}$ is componentwise linear. One may ask what property
$I_{\Delta}^{\vee}$ has when $\Delta$ is vertex decomposable, or vice versa.
###### Definition 1.2.
Following [10], we say that a monomial ideal $I$ in $R$ is weakly
polymatroidal if for every pair of elements $u=x_{1}^{a_{1}}\cdots
x_{n}^{a_{n}}$ and $v=x_{1}^{b_{1}}\cdots x_{n}^{b_{n}}$ of $G(I)$ with
$a_{1}=b_{1},\ldots,a_{q-1}=b_{q-1}$ and $a_{q}<b_{q}$, (noting that $q<n$)
there exists $p>q$ such that $w:=(x_{q}u)/x_{p}$ belongs to $G(I)$. Here,
$G(I)$ denotes the set of minimal monomial generators of $I$.
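Definition 1.2 translates directly into a finite check on the exponent vectors of $G(I)$. A brute-force sketch (our own illustration, with a hypothetical function name):

```python
def is_weakly_polymatroidal(gens):
    """Check Definition 1.2 for a monomial ideal given by the exponent
    vectors of its minimal generators (tuples of a fixed length n, with
    respect to the variable order x_1, ..., x_n)."""
    gens = set(gens)
    n = len(next(iter(gens)))
    for u in gens:
        for v in gens:
            if u == v:
                continue
            q = next(i for i in range(n) if u[i] != v[i])
            if u[q] >= v[q]:
                continue  # the definition only constrains pairs with a_q < b_q
            def swap(p):  # exponent vector of (x_q * u) / x_p
                w = list(u)
                w[q] += 1
                w[p] -= 1
                return tuple(w)
            if not any(u[p] >= 1 and swap(p) in gens for p in range(q + 1, n)):
                return False
    return True

# J(C_3) = (x1*x2, x1*x3, x2*x3) is weakly polymatroidal ...
print(is_weakly_polymatroidal({(1, 1, 0), (1, 0, 1), (0, 1, 1)}))  # True
# ... while (x1, x2*x3) fails the exchange condition in this variable order.
print(is_weakly_polymatroidal({(1, 0, 0), (0, 1, 1)}))  # False
```

Note that the check is relative to a fixed ordering of the variables, matching the statements "in some ordering of variables" used below.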
Unlike the original definition in [10], here we do not require $I$ to be
generated in one degree. Using the same method as in [10], one can prove that
if $I$ is weakly polymatroidal in this generalized sense, then $I$ has linear
quotients. Our second main result is as follows:
Theorem 3.1. If $I_{\Delta}^{\vee}$ is weakly polymatroidal then $\Delta$ is
vertex decomposable.
This particularly shows that being weakly polymatroidal is a condition
stronger than the property of having linear quotients. We also prove that the
converse of Theorem 3.1 holds in some special cases. Recall that a graph $G$
is called unmixed if every minimal vertex cover of $G$ has the same
cardinality, i.e., $J(G)$ is generated in one degree.
Corollary 3.2. Let $G$ be either a cactus graph, a bipartite graph, or a
chordal graph. Assume further that $G$ is unmixed. Then the following are
equivalent:
1. (1)
$R/I(G)$ is Cohen-Macaulay;
2. (2)
$G$ is vertex decomposable;
3. (3)
$J(G)$ is weakly polymatroidal (in some ordering of variables);
4. (4)
$J(G)^{s}$ is weakly polymatroidal (in some ordering of variables) for all
$s\geq 1$.
Here, a cactus graph is a simple graph in which every edge belongs to at most
one cycle. This leads us to conjecture the following.
Conjecture If $G$ is an unmixed graph, then $G$ is vertex decomposable if and
only if $J(G)$ is weakly polymatroidal.
We prove that the above conjecture is true if $G$ has girth $\geq 5$, see
Proposition 3.5.
It is natural to ask for which unmixed graphs $G$, besides those appearing in
Corollary 3.2, $J(G)^{s}$ is weakly polymatroidal. Of course, for such graphs
$R/I(G)$ must be Cohen-Macaulay by Alexander duality. Let $G$
be a simple graph on vertex set $V(G)$ with edge set $E(G)$. Following [3], a
clique vertex-partition of $G$ is a partition $V(G)=W_{1}\sqcup\cdots\sqcup
W_{t}$ such that the induced graph of $G$ on $W_{i}$ is a clique (a complete
graph) for $i=1,\ldots,t$. Denote this partition by
$\pi=\\{W_{1},\ldots,W_{t}\\}$. The fully clique-whiskering graph $G^{\pi}$ of
$G$ by $\pi$ is the graph on vertex set $V(G)\cup\\{y_{1},\ldots,y_{t}\\}$ and
with edge set $E(G)\cup\\{vy_{i}\colon\;v\in W_{i},1\leq i\leq t\\}$. When
$\pi$ is a trivial partition, i.e., $|W_{1}|=\cdots=|W_{t}|=1$, $G^{\pi}$ is
also called the whisker graph of $G$. A tree is Cohen-Macaulay if and only if
it is the whisker graph of some graph; a Cameron-Walker graph is Cohen-
Macaulay if and only if it is a fully clique-whiskering graph $G^{\pi}$ of a
bipartite graph $G$ by some clique vertex-partition $\pi$ of $G$; see [15,
Theorem 1.3]. Our third main result is as follows:
Theorem 4.4. If $W=G^{\pi}$ for some graph $G$ and some clique vertex-
partition $\pi$, then $J(W)^{s}$ is weakly polymatroidal for all $s\geq 1$.
This result complements Corollary 3.2. As a consequence, we obtain that if
$W=G^{\pi}$ then
$\operatorname{reg}(J(W)^{s})=\operatorname{reg}(J(W)^{(s)})=s|V(G)|$ for all
$s\geq 1$.
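The construction of $G^{\pi}$ is straightforward to carry out in code. The following sketch (function and vertex names are our own illustration) builds the whiskered graph:

```python
def clique_whisker(vertices, edges, partition):
    """Build the fully clique-whiskering graph G^pi: for each clique W_i of
    the clique vertex-partition pi, add a fresh vertex y_i joined to every
    vertex of W_i."""
    V = list(vertices)
    E = {frozenset(e) for e in edges}
    for i, W in enumerate(partition):
        y = ('y', i)  # the new whisker vertex y_i
        V.append(y)
        E |= {frozenset({v, y}) for v in W}
    return V, E

# Trivial partition of a single edge: whiskering K_2 yields the path P_4.
V, E = clique_whisker([1, 2], [(1, 2)], [[1], [2]])
print(len(V), len(E))  # 4 3
```

With the trivial partition (all cliques of size one) this reduces to the usual whisker-graph construction recalled above.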
In [17] F. Mohammadi succeeded in describing the precise structure of a Cohen-
Macaulay cactus graph. Based on this result, he proved that if $G$ is a Cohen-
Macaulay cactus graph then $J(G)^{s}$ is weakly polymatroidal, see [17,
Theorem 4.3]. We conclude this paper with an appendix, in which we illustrate by
an example that there exists an essential mistake in the proof of [17, Theorem
4.3] and then present a corrected proof for this result.
In the rest of this paper we will keep the notation introduced in this
section unless otherwise stated, and refer to [12] for unexplained notions.
## 2\. Powers of cover ideals of odd cycles
In this section we will prove that if $C$ is an odd cycle then both $J(C)^{s}$
and $J(C)^{(s)}$ have the same regularity for all $s\geq 1$.
We begin with fixing some notions. Let $M$ be a finitely generated graded
$R$-module minimally generated by homogeneous elements $f_{1},\ldots,f_{r}$
with $\deg(f_{1})\leq\deg(f_{2})\leq\cdots\leq\deg(f_{r})$. We denote by
$\mathrm{Deg}(M)$ the number $\deg(f_{r})$ and by $\deg(M)$ the number
$\deg(f_{1})$. It is known that $\operatorname{reg}(M)\geq\mathrm{Deg}(M).$
Let $\mathfrak{m}$ denote the maximal graded ideal
$(x_{1},x_{2},\ldots,x_{n})$ of $R$.
###### Proposition 2.1.
Let $I$ be a homogeneous ideal in $R$ and $t$ a positive integer. Put
$J=I\cap\mathfrak{m}^{t}$. Then
1. (1)
$\mathrm{reg}(\frac{R}{I})\leq\mathrm{reg}(\frac{R}{J}).$
2. (2)
If $t\leq\mathrm{Deg}(I)$, then
$\mathrm{reg}(\frac{R}{I})=\mathrm{reg}(\frac{R}{J}).$
###### Proof.
(1) Set $b_{i}=\max\\{j\colon\;H_{\mathfrak{m}}^{i}(\frac{R}{I})_{j}\neq 0\\}$
and $a_{i}=\max\\{j\colon\;H_{\mathfrak{m}}^{i}(\frac{R}{J})_{j}\neq 0\\}$.
Then $\mathrm{reg}(\frac{R}{I})=\max\\{b_{i}+i\colon\;i\geq 0\\}$ and
$\mathrm{reg}(\frac{R}{J})=\max\\{a_{i}+i\colon\;i\geq 0\\}$. Applying the
local cohomological functors with respect to $\mathfrak{m}$ to the short exact
sequence
(${\dagger}$)
$0\rightarrow\frac{R}{J}\rightarrow\frac{R}{I}\oplus\frac{R}{\mathfrak{m}^{t}}\rightarrow\frac{R}{\mathfrak{m}^{t}+I}\rightarrow
0,$
we obtain the long exact sequence: $0\rightarrow
H_{\mathfrak{m}}^{0}(R/J)\rightarrow H_{\mathfrak{m}}^{0}(\frac{R}{I})\oplus
H_{\mathfrak{m}}^{0}(\frac{R}{\mathfrak{m}^{t}})\rightarrow
H_{\mathfrak{m}}^{0}(\frac{R}{\mathfrak{m}^{t}+I})\rightarrow\cdots\rightarrow
H_{\mathfrak{m}}^{i}(R/J)\rightarrow H_{\mathfrak{m}}^{i}(\frac{R}{I})\oplus
H_{\mathfrak{m}}^{i}(\frac{R}{\mathfrak{m}^{t}})\rightarrow
H_{\mathfrak{m}}^{i}(\frac{R}{\mathfrak{m}^{t}+I})\rightarrow\cdots$. From
this sequence as well as the equality
$H_{\mathfrak{m}}^{i}(\frac{R}{\mathfrak{m}^{t}+I})=H_{\mathfrak{m}}^{i}(\frac{R}{\mathfrak{m}^{t}})=0$
for all $i>0$, we obtain the following facts.
(i) $a_{i}=b_{i}$ for all $i\geq 2$.
(ii) The sequence $H_{\mathfrak{m}}^{1}(\frac{R}{J})\rightarrow
H_{\mathfrak{m}}^{1}(\frac{R}{I})\rightarrow 0$ is exact. From this, we have
$H_{\mathfrak{m}}^{1}(\frac{R}{I})_{i}=0$ if
$H_{\mathfrak{m}}^{1}(\frac{R}{J})_{i}=0$. This implies that $a_{1}\geq
b_{1}$.
(iii) The sequence $0\rightarrow H_{\mathfrak{m}}^{0}(\frac{R}{J})\rightarrow
H_{\mathfrak{m}}^{0}(\frac{R}{I})\oplus\frac{R}{\mathfrak{m}^{t}}\rightarrow\frac{R}{I+\mathfrak{m}^{t}}$
is exact. Thus, for any $i\in\mathbb{Z}$ with
$H_{\mathfrak{m}}^{0}(\frac{R}{J})_{i}=0$, we have
$\dim_{k}H_{\mathfrak{m}}^{0}(\frac{R}{I})_{i}+\dim_{k}[\frac{R}{\mathfrak{m}^{t}}]_{i}\leq\dim_{k}[\frac{R}{I+\mathfrak{m}^{t}}]_{i}\leq\dim_{k}[\frac{R}{\mathfrak{m}^{t}}]_{i}$
and so $\dim_{k}H_{\mathfrak{m}}^{0}(\frac{R}{I})_{i}=0$. Hence $a_{0}\geq
b_{0}$.
Combining (i),(ii) with (iii), we obtain that
$\mathrm{reg}(\frac{R}{J})\geq\mathrm{reg}(\frac{R}{I})$.
(2) By using the short exact sequence (${\dagger}$ ‣ 2), we also obtain
$\mathrm{reg}(\frac{R}{J})\leq\max\\{\mathrm{reg}(\frac{R}{I}),t-1\\}=\mathrm{reg}(\frac{R}{I}).$
This finishes the proof.
Proposition 2.1 can be extended to the case of graded modules. Let
$M=\bigoplus_{i\in\mathbb{Z}}M_{i}$ be a finitely generated
$\mathbb{Z}$-graded $R$-module. For $j\in\mathbb{Z}$, we denote by $M_{\geq
j}$ the graded submodule $\bigoplus_{i\geq j}M_{i}$ of $M$. Since
$I\cap\mathfrak{m}^{t}=I_{\geq t}$ for any graded ideal $I$, we may regard
the following result as a generalization of Proposition 2.1.
###### Proposition 2.2.
Let $M$ be a finitely generated $\mathbb{Z}$-graded $R$-module and let
$j\in\mathbb{Z}$ such that $M_{\geq j}\neq 0$. Then the following statements
hold:
1. (1)
$\operatorname{reg}(M_{\geq j})\geq\operatorname{reg}(M)$;
2. (2)
If $j\leq\mathrm{Deg}(M)$, then $\operatorname{reg}(M_{\geq
j})=\operatorname{reg}(M)$.
###### Proof.
(1) Consider the following short exact sequence:
(${\ddagger}$) $0\rightarrow M_{\geq j}\rightarrow M\rightarrow M/M_{\geq
j}\rightarrow 0.$
It is known that $\operatorname{reg}(N)=\max\\{i\colon\;N_{i}\neq 0\\}$
whenever $N$ is a graded $R$-module of finite length. From this, it follows
that $\operatorname{reg}(M_{\geq j})\geq j>\operatorname{reg}(M/M_{\geq j})$.
Hence $\operatorname{reg}(M)\leq\max\\{\operatorname{reg}(M_{\geq
j}),\operatorname{reg}(M/M_{\geq j})\\}=\operatorname{reg}(M_{\geq j})$.
(2) Using the sequence (${\ddagger}$ ‣ 2) again, we obtain
$\operatorname{reg}(M_{\geq
j})\leq\max\\{\operatorname{reg}(M),\operatorname{reg}(M/M_{\geq j})+1\\}.$
Since $\operatorname{reg}(M)\geq\mathrm{Deg}(M)\geq
j\geq\operatorname{reg}(M/M_{\geq j})+1$, the result follows.
Let $G$ be a simple graph on vertex set $[n]$ and $H$ a subgraph of $G$. The
neighborhood of $H$ is defined by
$N_{G}(H)=\\{i\in V(G)\colon\;i\mbox{ is adjacent to some vertex of }H\\}.$
By [13, Proposition 5.3], if $N_{G}(C)=[n]$ for every odd cycle $C$, then the
symbolic Rees algebra
$\mathcal{R}_{s}(J(G))=\bigoplus_{k\geq 0}J(G)^{(k)}t^{k}$
of $J(G)$ is generated by the monomial $x_{1}\cdots x_{n}t^{2}$ together with
the monomials $t\prod_{i\in F}x_{i}$ such that $F$ is a minimal vertex cover
of $G$. Thus, the following result is a direct consequence of [13,
Proposition 5.3].
###### Proposition 2.3.
Let $G$ be a simple graph on vertex set $[n]$ such that $N_{G}(C)=[n]$ for
every odd cycle of $G$. Then
$J(G)^{(s)}=J(G)^{s}+\sum_{i=1}^{\lfloor\frac{s}{2}\rfloor}(x_{1}x_{2}\cdots
x_{n})^{i}J(G)^{s-2i}.$
Here, $\lfloor\frac{s}{2}\rfloor$ denotes the largest integer at most
$\frac{s}{2}$.
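Proposition 2.3 can be spot-checked by brute force. The sketch below (our own verification code, not from the paper) compares, for $J=J(C_{5})$ and $s=2,3$, membership in $J^{(s)}$ with membership in the right-hand side, over all exponent vectors with entries at most $s$; this suffices because the minimal generators of both sides have entries at most $s$:

```python
from itertools import product, combinations_with_replacement

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]  # the 5-cycle C_5

def is_cover(c):
    return all(c[i] or c[j] for i, j in edges)

# exponent vectors of the minimal generators of J = J(C_5)
covers = [c for c in product((0, 1), repeat=n) if is_cover(c)]
gens = [c for c in covers
        if not any(d != c and all(d[i] <= c[i] for i in range(n)) for d in covers)]

def in_ordinary(a, s):
    """x^a lies in J^s iff a dominates a sum of s generators (with J^0 = R)."""
    return any(all(sum(g[i] for g in combo) <= a[i] for i in range(n))
               for combo in combinations_with_replacement(gens, s))

def in_symbolic(a, s):
    """x^a lies in J^(s) iff a_i + a_j >= s on every edge."""
    return all(a[i] + a[j] >= s for i, j in edges)

def in_rhs(a, s):
    """Right-hand side of Proposition 2.3."""
    return in_ordinary(a, s) or any(
        min(a) >= i and in_ordinary(tuple(x - i for x in a), s - 2 * i)
        for i in range(1, s // 2 + 1))

for s in (2, 3):
    assert all(in_symbolic(a, s) == in_rhs(a, s)
               for a in product(range(s + 1), repeat=n))
print("Proposition 2.3 verified for C_5, s = 2, 3")
```

Note that $C_{5}$ satisfies the hypothesis: its only odd cycle is $C_{5}$ itself, whose neighborhood is the whole vertex set.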
Let $C$ be an odd cycle of length $n=2r+1$. It is not difficult to see that
$C$ is not unmixed if $n\geq 9$. More precisely, we have $\deg(J(C))=r+1$ and
$\mathrm{Deg}(J(C))=\left\\{\begin{array}{ll}4t+2,&n=6t+3;\\\ 4t+3,&n=6t+5;\\\ 4t+4,&n=6t+7\end{array}\right.$
for all $t\geq 0$.
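The displayed values can be cross-checked by enumerating minimal vertex covers directly; the brute-force sketch below (our own code) agrees with the formula for small odd $n$:

```python
from itertools import combinations

def deg_cover_ideal(n):
    """Deg(J(C_n)): the maximum size of a minimal vertex cover of the
    n-cycle, found by brute force over all vertex subsets."""
    edges = [(i, (i + 1) % n) for i in range(n)]
    def is_cover(S):
        return all(i in S or j in S for i, j in edges)
    return max(k for k in range(n + 1)
               for S in map(set, combinations(range(n), k))
               if is_cover(S) and all(not is_cover(S - {v}) for v in S))

# Matches the displayed formula: n = 3, 5, 7, 9, 11 give 2, 3, 4, 6, 7.
print([deg_cover_ideal(n) for n in (3, 5, 7, 9, 11)])  # [2, 3, 4, 6, 7]
```

In particular, for $n=9$ one gets $\mathrm{Deg}(J(C))=6>5=\deg(J(C))$, confirming that $C$ is not unmixed for $n\geq 9$.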
We now come to the main result of this section.
###### Theorem 2.4.
Let $J$ be the vertex cover ideal of an odd cycle $C$ on vertex set $[n]$
with $n=2r+1$. Then $\operatorname{reg}(J^{(s)})=\operatorname{reg}(J^{s})$
for all $s\geq 1$.
###### Proof.
Put $t=\mathrm{Deg}(J)$. Then $t\geq r+1$ and so
$\mathrm{Deg}(J^{(s)})=\mathrm{Deg}(J^{s})=st$ for all $s\geq 1$ by
Proposition 2.3. Fix $s\geq 1$. We claim that
$J^{(s)}\cap\mathfrak{m}^{st}=J^{s}\cap\mathfrak{m}^{st}.$
In light of Proposition 2.3, it suffices to show that
$(x_{1}x_{2}\cdots x_{2r+1})^{i}J^{s-2i}\cap\mathfrak{m}^{st}\subseteq J^{s}$
for all $i=1,\ldots,\lfloor\frac{s}{2}\rfloor$. Fix $i$ and let $\alpha$ be a
monomial in $(x_{1}x_{2}\cdots x_{2r+1})^{i}J^{s-2i}\cap\mathfrak{m}^{st}$.
Then we may write $\alpha=(x_{1}x_{2}\cdots
x_{2r+1})^{i}\alpha_{1}\cdots\alpha_{s-2i}\mathbf{u}$, where $\alpha_{i}$ is a
minimal monomial generator of $J$ for each $i$ and $\mathbf{u}$ is some
monomial. Since $\deg(\alpha)\geq st$ and $\deg(\alpha_{i})\leq t$, it follows
that $\deg(\mathbf{u})\geq i$. This, together with the fact $(x_{1}x_{2}\cdots
x_{2r+1})x_{i}\in J^{2}$ for any $i\in[2r+1]$, implies $\alpha\in J^{s}$.
Thus, the claim is proved.
In view of Proposition 2.1, the result is immediate from the above claim.
###### Remark 2.5.
Proposition 2.1 is also useful in the study of powers of edge ideals. For
example, by Proposition 2.1, [9, Theorem 3.5] is a direct consequence of [9,
Corollary 3.3].
## 3\. Vertex decomposability via weak polymatroidality
In this section we prove that if $\Delta$ is a simplicial complex such that
$I_{\Delta}^{\vee}$ is weakly polymatroidal then $\Delta$ is vertex
decomposable. This particularly shows that for a monomial ideal, the property
of being weakly polymatroidal is stronger than the property of having linear
quotients. The converse implication of the above result is also discussed.
We first give an observation on the property of a shedding vertex. Let
$\Delta$ be a simplicial complex on vertex set $[n]$ with facets
$F_{1},F_{2},\ldots,F_{r}$. Assume that $k$ is a shedding vertex of $\Delta$
and assume without loss of generality that $k\in F_{i}$ for $i=1,\ldots,j$ and
$k\notin F_{i}$ for $i=j+1,\ldots,r$. Then $\mbox{lk}_{\Delta}(k)=\langle
F_{1}\setminus\\{k\\},\ldots,F_{j}\setminus\\{k\\}\rangle$ and
$\mbox{del}_{\Delta}(k)=\langle
F_{1}\setminus\\{k\\},\ldots,F_{j}\setminus\\{k\\},F_{j+1},\ldots,F_{r}\rangle=\langle
F_{j+1},\ldots,F_{r}\rangle$. This observation is useful in the following
proofs. Let $I$ be a monomial ideal. As usual, $G(I)$ denotes the set of
minimal monomial generators of $I$ and $\mathrm{supp}(I)$ is the set
$\cup_{u\in G(I)}\mathrm{supp}(u)$, where for a monomial $u$,
$\mathrm{supp}(u)$ denotes the set $\\{i\in[n]\colon\;x_{i}|u\\}$.
###### Theorem 3.1.
Let $\Delta$ be a simplicial complex on $[n]$ such that $I_{\Delta}^{\vee}$ is
weakly polymatroidal in some ordering of variables. Then $\Delta$ is vertex
decomposable.
###### Proof.
Since the vertex decomposability of a simplicial complex is independent of the
ordering of variables, we may assume $I_{\Delta}^{\vee}$ is weakly
polymatroidal itself. Let $\mathcal{F}(\Delta)$ denote the set of facets of
$\Delta$. In the following, we will use induction on
$|\mathcal{F}(\Delta)|$, the number of facets of $\Delta$. If
$|\mathcal{F}(\Delta)|=1$, then $\Delta$ is a simplex and so it is vertex
decomposable. Now assume $|\mathcal{F}(\Delta)|\geq 2$, write
$\mathcal{F}(\Delta)=\\{F_{1},\ldots,F_{r}\\}$, and let
$k=\min\\{i\colon\;i\in F_{1}\cup\cdots\cup
F_{r},i\notin F_{1}\cap\cdots\cap F_{r}\\}$.
We first show that $k$ is a shedding vertex of $\Delta$. For this, let $F,G$
be facets of $\Delta$ such that $k\in F$ and $k\notin G$. Note that
$x_{\overline{F}}=x_{k+1}^{a_{k+1}}\cdots x_{n}^{a_{n}}$ and
$x_{\overline{G}}=x_{k}x_{k+1}^{b_{k+1}}\cdots x_{n}^{b_{n}}$, where
$a_{i},b_{i}\in\\{0,1\\}$ for all $i$, and that both are minimal generators of
$I_{\Delta}^{\vee}$. Here, for $A\subseteq[n]$, $x_{\overline{A}}$ denotes the
monomial $\prod_{i\in[n]\setminus A}x_{i}$. This implies that there exists
$\ell>k$ such that $\mathbf{u}:=x_{k}x_{\overline{F}}/x_{\ell}$ is also a
minimal generator of $I_{\Delta}^{\vee}$, and so there exists a facet $H$ of $\Delta$
such that $\mathbf{u}=x_{\overline{H}}$. From this it follows that $k\notin H$
and $F\setminus\\{k\\}\subseteq H$. Since $\ell\in H\setminus F$, we have
$F\setminus\\{k\\}\subsetneq H$. This actually shows that no facet of
$\mathrm{lk}_{\Delta}(k)$ is a facet of $\mathrm{del}_{\Delta}(k)$ and so $k$
is a shedding vertex of $\Delta$.
Set $\Delta_{1}:=\mathrm{lk}_{\Delta}(k)$ and
$\Delta_{2}:=\mathrm{del}_{\Delta}(k)$. Next we show $I_{\Delta_{1}}^{\vee}$
and $I_{\Delta_{2}}^{\vee}$ are both weakly polymatroidal. We will regard
$\Delta_{1}$ and $\Delta_{2}$ as simplicial complexes on
$V:=[n]\setminus\\{k\\}$. Then $I_{\Delta_{i}}^{\vee}=(x_{V\setminus
F}\colon\;F\in\mathcal{F}(\Delta_{i}))$. Note that
(1)
$G(I_{\Delta}^{\vee})=G(I_{\Delta_{1}}^{\vee})\bigsqcup\\{x_{k}u\colon\;u\in
G(I_{\Delta_{2}}^{\vee})\\}.$
Moreover, $\mathrm{supp}(I_{\Delta_{i}}^{\vee})\subseteq\\{k+1,\ldots,n\\}$
for $i=1,2$.
Let $\mathbf{u}=x_{k+1}^{a_{k+1}}\cdots x_{n}^{a_{n}}$ and
$\mathbf{v}=x_{k+1}^{b_{k+1}}\cdots x_{n}^{b_{n}}$ be distinct elements in
$G(I_{\Delta_{1}}^{\vee})$ such that
$a_{k+1}=b_{k+1},\ldots,a_{k+i-1}=b_{k+i-1}$ and $a_{k+i}<b_{k+i}$. Then,
since $I_{\Delta}^{\vee}$ is weakly polymatroidal, there exists $j>i\geq 1$
such that $\mathbf{w}:=x_{k+i}\mathbf{u}/x_{k+j}\in G(I_{\Delta}^{\vee})$.
This, together with the decomposition in (1), implies $\mathbf{w}\in
G(I_{\Delta_{1}}^{\vee})$. Thus, we have proven that $I_{\Delta_{1}}^{\vee}$
is weakly polymatroidal. Similarly, we can prove $I_{\Delta_{2}}^{\vee}$ is
weakly polymatroidal. By the induction hypothesis, $\Delta_{i}$ is vertex
decomposable for $i=1,2$, and so $\Delta$ is vertex decomposable, as required.
Let $G$ be a simple graph. A subset $A$ of $V(G)$ is an independent set of $G$
if for any $i,j\in A$, the pair $\\{i,j\\}$ is not an edge of $G$. The
independence complex of $G$ is the collection of all independent sets of $G$.
We say that $G$ is vertex decomposable if its independence complex is vertex
decomposable. If we let $\Delta$ be the independence complex of $G$, then the
Stanley-Reisner ideal $I_{\Delta}$ is the edge ideal $I(G)$ and its Alexander
dual $I_{\Delta}^{\vee}$ is the vertex cover ideal $J(G)$.
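These notions are easy to experiment with computationally. The following Python sketch (an illustration added here, not part of the original text, using the 5-cycle as a hypothetical running example) enumerates the independence complex and the minimal vertex covers, i.e. the supports of the minimal generators of $J(G)$:

```python
from itertools import combinations

# The 5-cycle C5 on vertices 1..5.
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
V = [1, 2, 3, 4, 5]

def independent(S):
    # no two vertices of S form an edge
    return not any(u in S and v in S for u, v in edges)

# faces of the independence complex: all independent sets, including the empty set
faces = [set(S) for r in range(len(V) + 1)
         for S in combinations(V, r) if independent(set(S))]

# maximal independent sets; their complements are the minimal vertex covers,
# i.e. the supports of the minimal generators of the cover ideal J(G)
maximal = [F for F in faces if not any(F < F2 for F2 in faces)]
covers = sorted(sorted(set(V) - F) for F in maximal)

print(len(faces))   # 11 independent sets: the empty set, 5 vertices, 5 non-adjacent pairs
print(covers)       # five minimal covers, each of size 3 (so C5 is unmixed)
```

Since every minimal cover has the same size, the independence complex of the 5-cycle is pure, consistent with the unmixedness assumption used below.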
The converse of Theorem 3.1 is true in some cases as shown by the following
corollary.
###### Corollary 3.2.
Let $G$ be either a cactus graph, a bipartite graph, or a chordal graph.
Assume further that $G$ is unmixed. Then the following are equivalent:
1. (1)
$R/I(G)$ is Cohen-Macaulay;
2. (2)
$G$ is vertex decomposable;
3. (3)
$J(G)$ is weakly polymatroidal (in some ordering of variables);
4. (4)
$J(G)^{s}$ is weakly polymatroidal (in some ordering of variables) for all
$s\geq 1$.
In particular, if $\Delta$ is the independence complex of either a cactus
graph or a bipartite graph or a chordal graph, then $I_{\Delta}^{\vee}$ is
weakly polymatroidal in some ordering of variables if and only if $\Delta$ is
vertex decomposable and pure.
###### Proof.
(1)$\Rightarrow$(4) Suppose that $R/I(G)$ is Cohen-Macaulay. If $G$ is either
a cactus graph or a chordal graph, then $J(G)^{s}$ is weakly polymatroidal by
[17, Theorem 4.3] and [18, Theorem 1.7] respectively. If $G$ is bipartite,
then $G$ comes from a finite poset $P$, see [12, Theorem 9.1.3]. Using the
notation in [5], we may write $I(G)=I_{2}(P)$. From this it follows that
$(I_{\Delta}^{\vee})^{s}=(H_{2}(P))^{s}$ is weakly polymatroidal for all
$s\geq 1$ by [5, Theorem 2.2].
(4)$\Rightarrow$(3) This is immediate.
(3)$\Rightarrow$(2) It follows from Theorem 3.1.
(2)$\Rightarrow$(1) Every vertex decomposable graph is sequentially Cohen-
Macaulay and an unmixed sequentially Cohen-Macaulay graph is a Cohen-Macaulay
graph.
###### Corollary 3.3.
Let $C$ be a cycle of size $5$. Then
$\operatorname{reg}(J(C)^{(s)})=\operatorname{reg}(J(C)^{s})=3s.$
###### Proof.
It is known that if $C$ is a cycle of size $n$, then $R/I(C)$ is Cohen-
Macaulay if and only if $R/I(C)$ is sequentially Cohen-Macaulay if and only if
$n\in\\{3,5\\}$, see [6, Proposition 4.1]. By this fact the result follows
from Theorem 2.4 together with Corollary 3.2.
We also have the following partial converse of Theorem 3.1.
###### Proposition 3.4.
Let $\Delta$ be a pure simplicial complex on $[n]$. If $1$ is a shedding
vertex of $\Delta$ such that $I_{\Delta_{i}}^{\vee}$ is weakly polymatroidal
for $i=1,2$, then $I_{\Delta}^{\vee}$ is weakly polymatroidal. Here,
$\Delta_{1}:=\mathrm{lk}_{\Delta}(1)$ and
$\Delta_{2}:=\mathrm{del}_{\Delta}(1)$.
###### Proof.
Since $1$ is a shedding vertex of $\Delta$, we have
$G(I_{\Delta}^{\vee})=G(I_{\Delta_{1}}^{\vee})\bigsqcup\\{x_{1}u\colon\;u\in
G(I_{\Delta_{2}}^{\vee})\\}.$
Note that $\mathrm{supp}(u)\subseteq\\{2,\ldots,n\\}$ for any $u\in
G(I_{\Delta_{1}}^{\vee})\bigsqcup G(I_{\Delta_{2}}^{\vee})$.
Let $\mathbf{u}=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}$ and
$\mathbf{v}=x_{1}^{b_{1}}\cdots x_{n}^{b_{n}}$ be monomials belonging to
$G(I_{\Delta}^{\vee})$ satisfying $a_{1}=b_{1},\ldots,a_{i-1}=b_{i-1}$ and
$a_{i}<b_{i}$. We need to find a monomial $\mathbf{w}\in G(I_{\Delta}^{\vee})$
such that $\mathbf{w}=x_{i}\mathbf{u}/x_{j}$ for some $j>i$. If either
$\\{\mathbf{u,v}\\}\subseteq G(I_{\Delta_{1}}^{\vee})$ or
$\\{\mathbf{u,v}\\}\subseteq\\{x_{1}u\colon\;u\in
G(I_{\Delta_{2}}^{\vee})\\}$, the existence of $\mathbf{w}$ follows by the
assumption that $I_{\Delta_{i}}^{\vee}$ is weakly polymatroidal for $i=1,2$.
So we only need to consider the case that $\mathbf{u}\in
G(I_{\Delta_{1}}^{\vee})$ and $\mathbf{v}\in\\{x_{1}u\colon\;u\in
G(I_{\Delta_{2}}^{\vee})\\}$. Note that $i=1$ in this case. Moreover, since
$1$ is a shedding vertex, there exists $\mathbf{v_{1}}\in
G(I_{\Delta_{2}}^{\vee})$ such that $\mathbf{v_{1}}$ divides $\mathbf{u}$.
This implies $\mathbf{v_{1}}=\mathbf{u}/x_{j}$ for some $j>1$ by the purity of
$\Delta$. Hence $\mathbf{w}:=x_{1}\mathbf{v_{1}}=x_{1}\mathbf{u}/x_{j}$ meets
the requirement and so we are done.
Proposition 3.4 together with Corollary 3.2 leads us to present the following.
###### Conjecture 1.
If $\Delta$ is a vertex decomposable pure simplicial complex, then
$I_{\Delta}^{\vee}$ is weakly polymatroidal.
A weak form of this conjecture is:
###### Conjecture 2.
If $G$ is an unmixed vertex decomposable graph, then $J(G)$ is weakly
polymatroidal.
The condition that $G$ is unmixed is necessary in the above conjecture. For
example, if $G$ is a star graph with more than two vertices of degree 1, then
$G$ is vertex decomposable, but $J(G)$ is not weakly polymatroidal.
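This failure can be confirmed by brute force. The sketch below (an illustration, not part of the original text) uses the convention appearing in the proof of Theorem 3.1, namely that the exchange monomial $x_{i}\mathbf{u}/x_{j}$ must again be a minimal generator. It tests every variable ordering for the star $K_{1,3}$ and, for contrast, for the 5-cycle, which is unmixed and vertex decomposable:

```python
from itertools import combinations, permutations

def min_covers(V, edges):
    covers = []
    for r in range(1, len(V) + 1):
        for S in combinations(V, r):
            s = set(S)
            if all(u in s or v in s for u, v in edges) \
                    and not any(c < s for c in covers):
                covers.append(s)
    return covers

def weakly_polymatroidal(covers, order):
    # convention: the exchange monomial must again be a minimal generator
    n = len(order)
    gens = {tuple(1 if v in c else 0 for v in order) for c in covers}
    for u in gens:
        for v in gens:
            if u == v:
                continue
            i = next(k for k in range(n) if u[k] != v[k])
            if u[i] < v[i] and not any(
                    u[:i] + (u[i] + 1,) + u[i + 1:j] + (u[j] - 1,) + u[j + 1:]
                    in gens for j in range(i + 1, n) if u[j] > 0):
                return False
    return True

# star K_{1,3}: center 0 with three leaves; vertex decomposable but not unmixed
star = min_covers([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)])
star_wp = any(weakly_polymatroidal(star, p) for p in permutations([0, 1, 2, 3]))

# the 5-cycle: unmixed, Cohen-Macaulay, vertex decomposable
c5 = min_covers([1, 2, 3, 4, 5],
                [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)])
c5_wp = any(weakly_polymatroidal(c5, p) for p in permutations([1, 2, 3, 4, 5]))

print(star_wp, c5_wp)   # False True
```

The star fails for every ordering, while the 5-cycle admits a good ordering, consistent with Corollary 3.2.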
The following is another example in support of Conjecture 2.
###### Proposition 3.5.
Let $G$ be a Cohen-Macaulay graph of girth at least 5. Then $J(G)$ is weakly
polymatroidal.
###### Proof.
By [16, Theorem 2.4], $G$ is a $\mathcal{PC}$ graph. Recall that an induced
5-cycle of $G$ is called basic if it does not contain two adjacent vertices
of degree three or more in $G$, and an edge is called a pendant if it contains
a vertex of degree 1. Let $C^{1},\ldots,C^{k}$ be the basic
5-cycles of $G$ and let $L^{1},\ldots,L^{l}$ be the pendants of
$G$. Recall that $G$ is a $\mathcal{PC}$ graph if $V(G)$ can be partitioned
into $V(G)=V(C^{1})\sqcup\cdots\sqcup V(C^{k})\sqcup
V(L^{1})\sqcup\cdots\sqcup V(L^{l})$.
Label the vertices of $C^{i}$ successively by
$x_{i1},x_{i4},x_{i2},x_{i3},x_{i5}$ such that $x_{i3},x_{i4},x_{i5}$ are
vertices of degree 2 for $i=1,\ldots,k$, and let $y_{i1},y_{i2}$ be vertices
of $L^{i}$ such that $y_{i2}$ has degree 1 for $i=1,\ldots,l$. We work with
the following ordering of variables:
$x_{11}>\cdots>x_{15}>\cdots>x_{k1}>\cdots>x_{k5}>y_{11}>y_{12}>\cdots>y_{l1}>y_{l2}.$
To prove $J(G)$ is weakly polymatroidal, we let $f,g$ be minimal monomial
generators of $J(G)$ with $f\neq g$ and let $z$ be a variable such that
$\deg_{z^{\prime}}f=\deg_{z^{\prime}}g$ for $z^{\prime}>z$ and
$\deg_{z}g<\deg_{z}f$. We need to find a variable $w<z$ such that $zg/w\in
J(G)$. To this end, for each monomial $h\in J(G)$, we write $h$ as following:
$h=h(C^{1})\cdots h(C^{k})h(L^{1})\cdots h(L^{l}),$
where $h(C^{i})$ and $h(L^{j})$ are monomials such that
$\mathrm{supp}(h(C^{i}))\subseteq V(C^{i})$ for $i=1,\ldots,k$ and
$\mathrm{supp}(h(L^{j}))\subseteq V(L^{j})$ for $j=1,\ldots,l$. We consider
the following cases:
Case 1 $z=x_{i1}$ for some $i$: Then
$g(C^{i})\in\\{x_{i2}x_{i4}x_{i5},x_{i3}x_{i4}x_{i5}\\}$. We set $w=x_{i4}$ if
$g(C^{i})=x_{i2}x_{i4}x_{i5}$ and set $w=x_{i5}$ if
$g(C^{i})=x_{i3}x_{i4}x_{i5}$. Then $zg/w\in J(G)$.
Case 2 $z=x_{i2}$ for some $i$: Then
$g(C^{i})\in\\{x_{i1}x_{i4}x_{i3},x_{i3}x_{i4}x_{i5}\\}$. We set $w=x_{i4}$ if
$g(C^{i})=x_{i1}x_{i4}x_{i3}$, and set $w=x_{i3}$ if
$g(C^{i})=x_{i3}x_{i4}x_{i5}$. Then $zg/w\in J(G)$.
Case 3 $z=x_{i3}$ for some $i$: Then $g(C^{i})=x_{i1}x_{i5}x_{i2}$. We set
$w=x_{i5}$.
Case 4 $z=x_{i4}$ for some $i$: This case is impossible.
Case 5 $z=x_{i5}$ for some $i$: This case is likewise impossible.
Case 6 $z=y_{i1}$ for some $i$: Then $g(L^{i})=y_{i2}$. We set $w=y_{i2}$.
Case 7 $z=y_{i2}$ for some $i$: This case is also impossible.
Thus, in all possible cases, we find a variable $w$ which meets the
requirement. This completes the proof.
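The case analysis above can be double-checked for the smallest Cohen-Macaulay graph of girth 5, the 5-cycle itself. The sketch below (an illustration, not part of the original text) verifies that $J(G)$ is weakly polymatroidal in the stated ordering, requiring the exchange monomial to be a minimal generator:

```python
from itertools import combinations

# The basic 5-cycle labeled as in the proof: the cycle visits its vertices in
# the order x1, x4, x2, x3, x5, so that x3, x4, x5 have degree two.
edges = [(1, 4), (4, 2), (2, 3), (3, 5), (5, 1)]
order = (1, 2, 3, 4, 5)   # x_{i1} > x_{i2} > x_{i3} > x_{i4} > x_{i5}

covers = []
for r in range(1, 6):
    for S in combinations(order, r):
        s = set(S)
        if all(u in s or v in s for u, v in edges) \
                and not any(c < s for c in covers):
            covers.append(s)

gens = {tuple(1 if v in c else 0 for v in order) for c in covers}

def wp(gens, n):
    # weakly polymatroidal check; the exchange monomial must be a generator
    for u in gens:
        for v in gens:
            if u == v:
                continue
            i = next(k for k in range(n) if u[k] != v[k])
            if u[i] < v[i] and not any(
                    u[:i] + (u[i] + 1,) + u[i + 1:j] + (u[j] - 1,) + u[j + 1:]
                    in gens for j in range(i + 1, n) if u[j] > 0):
                return False
    return True

result = wp(gens, 5)
print(result)   # True
```

The five minimal covers found by the search are exactly the five subsets listed in the proof of Theorem 5.2 below.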
## 4\. Powers of cover ideals of clique-whiskered graphs
Let $W=G^{\pi}$ be the fully clique-whiskered graph obtained from a graph $G$ and a
clique vertex-partition $\pi$ of $G$. In this section we prove $J(W)^{s}$ is
weakly polymatroidal for all $s\geq 1$. As a consequence,
$\operatorname{reg}(J(W)^{s})=\operatorname{reg}(J(W)^{(s)})=s|V(G)|$ for all
$s\geq 1$.
For convenience, we introduce the notions of a simplicial co-complex and the
face ideal of a simplicial co-complex with respect to a partition.
###### Definition 4.1.
Let $V$ be a finite set. We say that a collection $\nabla$ of subsets of $V$
is a simplicial co-complex on $V$ if whenever $F\in\nabla$ and $F\subseteq
G\subseteq V$ one has $G\in\nabla$. Assume that $V$ has a partition
$V=V_{1}\sqcup\cdots\sqcup V_{t}$ and assume that
$V_{i}=\\{x_{i1},\ldots,x_{ik_{i}}\\}$ for $i=1,\ldots,t$. For each face
$F\in\nabla$, we put
$u_{F}:=\prod_{x_{ij}\in F}x_{ij}\prod_{i\in[t]}y_{i}^{k_{i}-|F\cap V_{i}|}.$
Then the face ideal of $\nabla$ with respect to this partition is the
following monomial ideal $J$ in the polynomial ring
$k[x_{11},\ldots,x_{1k_{1}},\ldots,x_{t1},\ldots,x_{tk_{t}},y_{1},\ldots,y_{t}]$:
$J=(u_{F}:F\in\nabla).$
###### Remark 4.2.
When $|V_{1}|=\cdots=|V_{t}|=1$, the face ideal of a simplicial co-complex
essentially agrees with the face ideal of a simplicial complex defined in [14].
###### Proposition 4.3.
Let $\nabla$ be a simplicial co-complex on $V$ and assume that $V$ has a
partition $V=V_{1}\sqcup\cdots\sqcup V_{t}$ with
$V_{i}=\\{x_{i1},\ldots,x_{ik_{i}}\\}$ for $i=1,\ldots,t$. Denote by $J$ the
face ideal of $\nabla$ with respect to this partition. Then $J^{s}$ is weakly
polymatroidal for each $s\geq 1$ in the ordering:
$x_{11}>\cdots>x_{1k_{1}}>\cdots>x_{t1}>\cdots>x_{tk_{t}}>y_{1}>\cdots>y_{t}$.
###### Proof.
Let $\alpha,\beta$ be minimal monomial generators of $J^{s}$ with
$\alpha\neq\beta$. We may write
$\alpha=u_{F_{1}}u_{F_{2}}\cdots u_{F_{s}}=u_{1}\cdots
u_{t}y_{1}^{sk_{1}-\deg(u_{1})}\cdots y_{t}^{sk_{t}-\deg(u_{t})},$
and
$\beta=u_{G_{1}}u_{G_{2}}\cdots u_{G_{s}}=v_{1}\cdots
v_{t}y_{1}^{sk_{1}-\deg(v_{1})}\cdots y_{t}^{sk_{t}-\deg(v_{t})}.$
Here $F_{i}\in\nabla,G_{i}\in\nabla$ for all $i\in[s]$, and
$\mathrm{supp}(u_{i})\cup\mathrm{supp}(v_{i})\subseteq V_{i}$ for
$i=1,\ldots,t$. Since $\alpha\neq\beta$, there exists a variable $z<y_{1}$
such that $\deg_{w}(\alpha)=\deg_{w}(\beta)$ for all $w<z$ and
$\deg_{z}(\alpha)<\deg_{z}(\beta)$. Suppose that $z=x_{ij}$, where $1\leq
i\leq t$ and $1\leq j\leq k_{i}$. Then $\deg_{x_{ij}}\alpha\leq s-1$ and so
there exists $l\in[s]$ such that $x_{ij}\notin F_{l}$. Say $l=1$.
Put $F_{1}^{\prime}:=F_{1}\cup\\{x_{ij}\\}$ and let
$\gamma:=u_{F_{1}^{\prime}}u_{F_{2}}\cdots u_{F_{s}}.$
Then $\gamma=x_{ij}\alpha/y_{i}$. From this it follows that $J^{s}$ is weakly
polymatroidal.
###### Theorem 4.4.
Let $W=G^{\pi}$, where $G$ is a simple graph and $\pi=(W_{1},\ldots,W_{t})$ is
a clique vertex-partition of $G$. Then $J(W)^{s}$ is weakly polymatroidal for
every $s\geq 1$.
###### Proof.
Let $\nabla:=\\{C\cap V(G)\colon\;C\mbox{ is a minimal vertex cover of }W\\}.$
Then $\nabla$ is a simplicial co-complex on $V(G)$. Moreover, $J(W)$ coincides
with the face ideal of $\nabla$ with respect to the partition $\pi$. Now, the
result follows from Proposition 4.3.
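On a minimal example one can check this correspondence directly. The following sketch (a hypothetical illustration, not part of the original text: $G$ is the triangle with the one-part clique partition, so $W=G^{\pi}$ is the complete graph $K_{4}$) verifies that $\nabla$ is a simplicial co-complex and that its face ideal coincides with $J(W)$:

```python
from itertools import combinations

# G = triangle on {1,2,3}; pi = ({1,2,3}) is a clique vertex-partition, so
# W = G^pi adds one whisker vertex 4 joined to every vertex of the part.
VW = [1, 2, 3, 4]
edges_W = [(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4)]

covers = []
for r in range(1, len(VW) + 1):
    for S in combinations(VW, r):
        s = set(S)
        if all(u in s or v in s for u, v in edges_W) \
                and not any(c < s for c in covers):
            covers.append(s)

# nabla = {C ∩ V(G) : C a minimal vertex cover of W}
nabla = {frozenset(c & {1, 2, 3}) for c in covers}

# nabla is upward closed in 2^{V(G)}, i.e. a simplicial co-complex
subsets = [frozenset(S) for r in range(4) for S in combinations([1, 2, 3], r)]
up_closed = all(T in nabla for F in nabla for T in subsets if F <= T)

# face ideal of nabla w.r.t. the single part {1,2,3} of size 3:
# u_F = (prod_{i in F} x_i) * y^(3 - |F|); compare with J(W)
face_gens = {tuple([1 if i in F else 0 for i in (1, 2, 3)] + [3 - len(F)])
             for F in nabla}
cover_gens = {tuple(1 if i in c else 0 for i in VW) for c in covers}
print(up_closed, face_gens == cover_gens)   # True True
```

Here the four minimal covers of $W$ correspond bijectively to the four faces of $\nabla$, as the proof of Theorem 4.4 asserts in general.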
As a direct consequence of Theorem 4.4, we obtain that $G^{\pi}$ is vertex
decomposable (see Theorem 3.1). This recovers [3, Theorem 3.3]. Another
consequence of Theorem 4.4 is as follows.
###### Corollary 4.5.
Let $W=G^{\pi}$ be as in Theorem 4.4. Then
$\operatorname{reg}(J(W)^{(s)})=\operatorname{reg}(J(W)^{s})=s|V(G)|$ for all
$s\geq 1$.
###### Proof.
It follows from Theorem 4.4 that $\operatorname{reg}(J(W)^{s})=s|V(G)|$. By
[20, Corollary 4.4], we have $\operatorname{reg}(J(W)^{(s)})=s|V(G)|$.
Finally in this section we present a class of graphs $G$ for which the
symbolic powers $J(G)^{(s)}$ are weakly polymatroidal.
###### Proposition 4.6.
Let $\nabla$ be a simplicial co-complex on $[n]$ such that for every $i\in[n]$,
there exist $F,G\in\nabla$ with $F\cup G=[n]$ and $F\cap G=\\{i\\}$. Let
$J$ denote the face ideal of $\nabla$ with respect to the trivial partition
(i.e. a partition in which every part contains exactly one vertex). For
positive integers $s,t$ with $2t\leq s$, let $L_{s,t}$ denote the ideal
$J^{s}+(x_{[n]}y_{[n]})J^{s-2}+(x_{[n]}y_{[n]})^{2}J^{s-4}+\cdots+(x_{[n]}y_{[n]})^{t}J^{s-2t}.$
Then $L_{s,t}$ is weakly polymatroidal in the ordering:
$x_{1}>x_{2}>\cdots>x_{n}>y_{1}>\cdots>y_{n}$.
###### Proof.
Let $\alpha,\beta$ be minimal monomial generators of $L_{s,t}$ with
$\alpha\neq\beta$. We may write
$\alpha=(x_{[n]}y_{[n]})^{k}u_{F_{1}}\cdots
u_{F_{s-2k}}=x_{1}^{a_{1}+k}\cdots x_{n}^{a_{n}+k}y_{1}^{s-k-a_{1}}\cdots
y_{n}^{s-k-a_{n}}$
and
$\beta=(x_{[n]}y_{[n]})^{\ell}u_{G_{1}}\cdots
u_{G_{s-2\ell}}=x_{1}^{b_{1}+\ell}\cdots x_{n}^{b_{n}+\ell}y_{1}^{s-\ell-
b_{1}}\cdots y_{n}^{s-\ell-b_{n}}.$
Here, $0\leq k,\ell\leq t$, and $F_{i},G_{j}\in\nabla$ for $i=1,\ldots,s-2k$
and $j=1,\ldots,s-2\ell$. Since $\alpha\neq\beta$, there exists $i\in[n]$ such
that $a_{1}+k=b_{1}+\ell,\ldots,a_{i-1}+k=b_{i-1}+\ell$ and
$a_{i}+k<b_{i}+\ell$. We consider the following two cases.
1. (1)
If $k\leq\ell$, then $a_{i}<s-\ell-k\leq s-2k$ and so there exists $j\in[s-2k]$
such that $i\notin F_{j}$. Say $j=1$. Then put $F_{1}^{\prime}=F_{1}\cup\\{i\\}$
and $\gamma:=(x_{[n]}y_{[n]})^{k}u_{F_{1}^{\prime}}u_{F_{2}}\cdots
u_{F_{s-2k}}$. It is easy to check that $\gamma$ is a minimal monomial
generator of $L_{s,t}$ and moreover $\gamma=x_{i}\alpha/y_{i}$.
2. (2)
If $k>\ell$, then $k\geq 1$. By the assumption, there are $F,G\in\nabla$ such
that $F\cup G=[n]$ and $F\cap G=\\{i\\}$. Put
$\gamma=(x_{[n]}y_{[n]})^{k-1}u_{F_{1}}\cdots u_{F_{s-2k}}u_{F}u_{G}$, then
$\gamma=(x_{i}\alpha)/y_{i}$ is a minimal generator of $L_{s,t}$.
Thus, we can find a monomial $\gamma$ which meets the requirement in both
cases and this implies $L_{s,t}$ is weakly polymatroidal.
###### Corollary 4.7.
Let $W$ be the whisker graph of a cycle. Then $J(W)^{(s)}$ is weakly
polymatroidal for every $s\geq 1$.
###### Proof.
For any cycle $C$ on $[n]$ and for any $i\in[n]$, there exist vertex covers
$F_{1}$ and $F_{2}$ of $C$ such that $F_{1}\cap F_{2}=\\{i\\}$ and $F_{1}\cup
F_{2}=[n]$. Thus, the result follows from Proposition 4.6 as well as [13,
Proposition 5.3].
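The covering condition invoked here can be confirmed computationally for small cycles. The following sketch (an illustration, not part of the original text) searches, for each vertex $i$ of the $n$-cycle, for vertex covers $F_{1},F_{2}$ with $F_{1}\cap F_{2}=\\{i\\}$ and $F_{1}\cup F_{2}=[n]$; note that $F_{1},F_{2}$ need not be minimal covers:

```python
from itertools import combinations

def check(n):
    # all vertex covers (not only minimal ones) of the n-cycle
    V = set(range(1, n + 1))
    edges = [(j, j % n + 1) for j in range(1, n + 1)]
    covers = []
    for r in range(1, n + 1):
        for S in combinations(sorted(V), r):
            s = set(S)
            if all(u in s or v in s for u, v in edges):
                covers.append(s)
    # for every vertex i, look for covers F1, F2 with
    # F1 ∩ F2 = {i} and F1 ∪ F2 = [n]
    return all(any(F1 & F2 == {i} and F1 | F2 == V
                   for F1 in covers for F2 in covers)
               for i in V)

result = all(check(n) for n in range(3, 8))
print(result)   # True
```

For the 4-cycle, for instance, one such pair at $i=1$ is $F_{1}=\\{1,2,4\\}$ and $F_{2}=\\{1,3\\}$.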
## 5\. Appendix
In this section we will explain why the original proof of [17, Theorem 4.3] is
incomplete, and then give a corrected proof. We adopt the notation of
[17] throughout this section.
The author of [17] considers several cases in the proof of [17, Theorem 4.3].
In case 4, he assumes that $z=y_{i3}$ and then deduces that
$\\{y_{i1},y_{i2},y_{i5}\\}\subseteq\mathrm{supp}(g_{j})$ for some $1\leq
j\leq k$. Consider the following example.
###### Example 5.1.
Suppose that $m=t=0$ and $n=1$. In this case $G$ is just a 5-cycle. Let
$f=f_{1}f_{2}=(y_{1}y_{2}y_{3})(y_{3}y_{4}y_{5})=y_{1}y_{2}y_{3}^{2}y_{4}y_{5}$
and
$g=g_{1}g_{2}=(y_{2}y_{4}y_{5})(y_{1}y_{3}y_{4})=y_{1}y_{2}y_{3}y_{4}^{2}y_{5}$
be minimal monomial generators of $J(G)^{2}$. Then
$\deg_{y_{i}}f=\deg_{y_{i}}g$ for $i=1,2$ and $\deg_{y_{3}}f>\deg_{y_{3}}g$.
But, it is clear $\\{y_{1},y_{2},y_{5}\\}\nsubseteq\mathrm{supp}(g_{j})$ for
$j=1,2$. Furthermore, although
$\\{y_{1},y_{2},y_{5}\\}\subseteq\mathrm{supp}(g)$,
$\frac{g}{y_{1}y_{2}y_{5}}$ does not belong to $J(G)$ and one can easily check
that $y_{3}g/y_{5}\notin J(G)^{2}$.
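The claims of this example can be verified mechanically. The sketch below (an illustration, not part of [17]; the cycle is labeled so that its edges are $\\{1,4\\},\\{4,2\\},\\{2,3\\},\\{3,5\\},\\{5,1\\}$, consistent with $y_{1}y_{2}y_{3}$ and $y_{3}y_{4}y_{5}$ being vertex covers) checks that $g\in J(G)^{2}$ while $y_{3}g/y_{5}\notin J(G)^{2}$:

```python
from itertools import combinations

# The 5-cycle of Example 5.1, with edges {1,4},{4,2},{2,3},{3,5},{5,1}.
edges = [(1, 4), (4, 2), (2, 3), (3, 5), (5, 1)]
V = (1, 2, 3, 4, 5)

covers = []
for r in range(1, 6):
    for S in combinations(V, r):
        s = set(S)
        if all(u in s or v in s for u, v in edges) \
                and not any(c < s for c in covers):
            covers.append(s)

def vec(S):
    return tuple(1 if v in S else 0 for v in V)

# exponent vectors of the minimal generators of J(G)^2; since every such
# generator is a product of two minimal covers, a degree-6 monomial lies in
# J(G)^2 exactly when its exponent vector is such a sum
J2 = {tuple(a + b for a, b in zip(vec(c1), vec(c2)))
      for c1 in covers for c2 in covers}

g = (1, 1, 1, 2, 1)        # g = y1*y2*y3*y4^2*y5
target = (1, 1, 2, 2, 0)   # y3*g/y5
print(g in J2, target in J2)   # True False
```

The only minimal covers containing both $y_{3}$ and $y_{4}$ also contain either $y_{5}$ or $y_{1}$, which is why no pair of covers can produce the exponent vector of $y_{3}g/y_{5}$.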
This example shows that the proof of case 4 is not correct. The proof of case 5
fails in a similar way. In the sequel, we give a new proof of case 4 and
case 5.
###### Theorem 5.2.
([17, Theorem 4.3]) Let $G$ be a Cohen-Macaulay cactus graph. Then $J(G)^{s}$
is weakly polymatroidal for all $s\geq 1$.
###### Proof.
As in the original proof of [17, Theorem 4.3], we may assume that $V(G)$ is
the disjoint union of
$F_{1},\ldots,F_{m},G_{1},\ldots,G_{n},L_{1},\ldots,L_{t}$
such that the induced graph of $G$ on $F_{i}$ is a simplex, the induced graph
of $G$ on $G_{j}$ is a basic 5-cycle, and $L_{l}$ is an edge which belongs
to a 4-cycle of $G$, for each $i,j,l$.
Consider the following ordering of the variables (corresponding to the
vertices of $G$):
$x_{11}>\cdots>x_{1k_{1}}>\cdots>x_{m1}>\cdots>x_{mk_{m}}>y_{11}>\cdots>y_{15}>$
$\cdots>y_{n1}>\cdots>y_{n5}>z_{11}>z_{12}>\cdots>z_{t1}>z_{t2},$
where $F_{i}=\\{x_{i1},\ldots,x_{ik_{i}}\\}$ with $2\leq k_{i}\leq 3$, and
$x_{ib_{i}},\ldots,x_{ik_{i}}$ are the free vertices of $F_{i}$ for all $i$,
and $y_{j1},y_{j2},y_{j3},y_{j4},y_{j5}$ are the vertices of $G_{j}$ such that
the vertices $y_{j3},y_{j4},y_{j5}$ are of degree two for all $j$, and
$z_{i1},z_{i2}$ are the vertices of $L_{i}$ of degree two for all $i$. Note
that for each $i\in[n]$, each minimal vertex cover $C$ of $G$ contains exactly
one of the subsets
$\\{y_{i1},y_{i2},y_{i5}\\},\\{y_{i2},y_{i4},y_{i5}\\},\\{y_{i1},y_{i2},y_{i3}\\},\\{y_{i1},y_{i3},y_{i4}\\},\\{y_{i3},y_{i4},y_{i5}\\}.$
Now, let $f=f_{1}\cdots f_{k}$ and $g=g_{1}\cdots g_{k}$ be elements in the
minimal generating set of $J(G)^{k}$ with $f_{s},g_{s}\in J(G)$ for $1\leq
s\leq k$. Suppose that there exists a variable $z$ such that
$\mathrm{deg}_{z^{{}^{\prime}}}f=\mathrm{deg}_{z^{{}^{\prime}}}g$ for any
variable $z^{{}^{\prime}}>z$ and $\mathrm{deg}_{z}f>\mathrm{deg}_{z}g$. We
only need to find a variable $w<z$ such that $zg/w\in J(G)^{k}$ in the cases
that $z=y_{i3}$ and that $z=y_{i4}$ for some $i$, which correspond to case 4
and case 5 of the original proof, respectively. To this end, we introduce some
notions.
Let $i$ be the index appearing in $z=y_{i3}$ or $z=y_{i4}$. First we set
$f_{abc}=|\\{s\colon\\{y_{ia},y_{ib},y_{ic}\\}\subseteq\mathrm{supp}(f_{s})\\}|\mbox{
and
}g_{abc}=|\\{s\colon\\{y_{ia},y_{ib},y_{ic}\\}\subseteq\mathrm{supp}(g_{s})\\}|,$
where $\\{y_{ia},y_{ib},y_{ic}\\}$ is a minimal vertex cover of $G_{i}$ for
each triple $\\{a,b,c\\}$.
Next, we let $T$ denote the subgraph of $G$ induced on the subset
$V(G)\setminus\\{y_{i3},y_{i4},y_{i5}\\}$, and for $a=1,2$ we set
$T_{a}=\\{z\in V(T)\colon\;\mbox{ there exists a path in }T\mbox{ connecting
}z\mbox{ and }y_{ia}\\}.$
We will assume $G$ is connected. Since $G$ is a cactus graph, we have
$T_{1}\cap T_{2}=\emptyset$ and $V(T)=T_{1}\sqcup\\{y_{i1},y_{i2}\\}\sqcup T_{2}$.
Case that $z=y_{i3}$: If $g_{125}\neq 0$, there exists $s\in\\{1,\cdots,k\\}$
such that $\\{y_{i1},y_{i2},y_{i5}\\}\subseteq\mathrm{supp}(g_{s})$. Since the
subset $(\mathrm{supp}(g_{s})\backslash\\{y_{i5}\\})\cup\\{y_{i3}\\}$ is again
a minimal vertex cover of $G$, we have $y_{i3}g/y_{i5}\in J(G)^{k}$ and so we
are done. Note that $g_{125}\neq 0$ if $k=1$; hence we may now assume that
$g_{125}=0$ and $k\geq 2$.
Because $\mathrm{deg}_{y_{ia}}f=\mathrm{deg}_{y_{ia}}g$ for $a=1,2$ and
$\mathrm{deg}_{y_{i3}}f>\mathrm{deg}_{y_{i3}}g$, we have the following
formulas:
$f_{134}+f_{123}+f_{125}=g_{134}+g_{123}+g_{125};$
$f_{245}+f_{123}+f_{125}=g_{245}+g_{123}+g_{125};$
$f_{134}+f_{123}+f_{345}>g_{134}+g_{123}+g_{345};$
$f_{134}+f_{245}+f_{123}+f_{125}+f_{345}=g_{134}+g_{245}+g_{123}+g_{125}+g_{345}=k.$
Due to $g_{125}=0$, we obtain successively that
$f_{123}>g_{123},f_{345}>g_{345},f_{245}<g_{245}\mbox{ and }f_{134}<g_{134}.$
Thus, there exist $p,q\in\\{1,\cdots,k\\}$ with $p\neq q$ such that
$\\{y_{i2},y_{i4},y_{i5}\\}\subseteq\mathrm{supp}(g_{p})$ and
$\\{y_{i1},y_{i3},y_{i4}\\}\subseteq\mathrm{supp}(g_{q})$. Say $p=1$ and
$q=2$. We need to prove that $y_{i3}g_{1}g_{2}/y_{i4}$ belongs to $J(G)^{2}$.
If this is done, then $y_{i3}g/y_{i4}$ belongs to $J(G)^{k}$ and we are done.
We may write $g_{1},g_{2}$ as follows:
$g_{1}=y_{i2}y_{i4}y_{i5}u_{1}u_{2}\mbox{ and
}g_{2}=y_{i1}y_{i3}y_{i4}v_{1}v_{2}$
such that $\mathrm{supp}(u_{i})\cup\mathrm{supp}(v_{i})\subseteq T_{i}$ for
$i=1,2$.
Then we have the following decomposition:
$y_{i3}g_{1}g_{2}/y_{i4}=(y_{i3}y_{i4}y_{i5}u_{1}u_{2})(y_{i1}y_{i2}y_{i3}v_{1}v_{2})=(y_{i3}y_{i4}y_{i5}u_{1}v_{2})(y_{i1}y_{i2}y_{i3}v_{1}u_{2}).$
We next check that both $\mathrm{supp}(y_{i3}y_{i4}y_{i5}u_{1}v_{2})$ and
$\mathrm{supp}(y_{i1}y_{i2}y_{i3}v_{1}u_{2})$ are vertex covers of $G$. Denote
$V_{1}=\mathrm{supp}(y_{i3}y_{i4}y_{i5}u_{1}v_{2})$ and
$V_{2}=\mathrm{supp}(y_{i1}y_{i2}y_{i3}v_{1}u_{2})$. Let $e=\\{z_{1},z_{2}\\}$
be an edge of $E(G)$.
If $e\subseteq V(G_{i})$, then it is clear that $V_{1}\cap e\neq\emptyset$ and
$V_{2}\cap e\neq\emptyset$;
If $z_{1}=y_{i1}$ and $z_{2}\in T_{1}$ then $z_{2}\in\mathrm{supp}(u_{1})\cap
e$. This implies $V_{1}\cap e\neq\emptyset$ and $V_{2}\cap e\neq\emptyset$;
Similarly, we have if $z_{1}=y_{i2}$ and $z_{2}\in T_{2}$ then $V_{1}\cap
e\neq\emptyset$ and $V_{2}\cap e\neq\emptyset$.
Finally suppose that $e\cap V(G_{i})=\emptyset$. Then either $e\subseteq
E(T_{1})$ or $e\subseteq E(T_{2})$. In the case that $e\subseteq E(T_{1})$, we
have $\mathrm{supp}{(u_{1})}\cap e\neq\emptyset$ and
$\mathrm{supp}{(v_{1})}\cap e\neq\emptyset$ and so $V_{1}\cap e\neq\emptyset$
and $V_{2}\cap e\neq\emptyset$. The case that $e\subseteq E(T_{2})$ is
similar.
Thus, both $V_{1}$ and $V_{2}$ are indeed vertex covers of $G$.
Case that $z=y_{i4}$: We may assume $k\geq 2$. Because
$\mathrm{deg}_{y_{ia}}f=\mathrm{deg}_{y_{ia}}g$ for $a=1,2,3$ and
$\mathrm{deg}_{y_{i4}}f>\mathrm{deg}_{y_{i4}}g$, we have the following
formulas:
$f_{134}+f_{123}+f_{125}=g_{134}+g_{123}+g_{125};$
$f_{245}+f_{123}+f_{125}=g_{245}+g_{123}+g_{125};$
$f_{134}+f_{123}+f_{345}=g_{134}+g_{123}+g_{345};$
$f_{134}+f_{245}+f_{345}>g_{134}+g_{245}+g_{345};$
$f_{134}+f_{245}+f_{123}+f_{125}+f_{345}=g_{134}+g_{245}+g_{123}+g_{125}+g_{345}.$
From these we obtain successively that
$f_{134}>g_{134},f_{345}<g_{345},f_{245}>g_{245}\mbox{ and }f_{125}<g_{125}.$
Thus, there exist $p,q\in\\{1,\cdots,k\\}$ such that
$\\{y_{i3},y_{i4},y_{i5}\\}\subseteq\mathrm{supp}(g_{p})$ and
$\\{y_{i1},y_{i2},y_{i5}\\}\subseteq\mathrm{supp}(g_{q})$. Say $p=1$ and $q=2$. We only need
to prove that $y_{i4}g_{1}g_{2}/y_{i5}$ belongs to $J(G)^{2}$.
For this we decompose $g_{1}$ and $g_{2}$ as follows:
$g_{1}=y_{i3}y_{i4}y_{i5}u_{1}u_{2}\mbox{ and
}g_{2}=y_{i1}y_{i2}y_{i5}v_{1}v_{2}$
such that $\mathrm{supp}(u_{i})\cup\mathrm{supp}(v_{i})\subseteq T_{i}$ for
$i=1,2$.
Then we have
$y_{i4}g_{1}g_{2}/y_{i5}=(y_{i3}y_{i4}y_{i5}u_{1}u_{2})(y_{i1}y_{i2}y_{i4}v_{1}v_{2})=(y_{i2}y_{i4}y_{i5}u_{1}v_{2})(y_{i1}y_{i3}y_{i4}v_{1}u_{2}).$
We next check that both $\mathrm{supp}(y_{i2}y_{i4}y_{i5}u_{1}v_{2})$ and
$\mathrm{supp}(y_{i1}y_{i3}y_{i4}v_{1}u_{2})$ are vertex covers of $G$. Denote
$V_{1}=\mathrm{supp}(y_{i2}y_{i4}y_{i5}u_{1}v_{2})$ and
$V_{2}=\mathrm{supp}(y_{i1}y_{i3}y_{i4}v_{1}u_{2})$. Let $e=\\{z_{1},z_{2}\\}$
be an edge of $E(G)$.
If $e\subseteq V(G_{i})$, then it is clear that $V_{1}\cap e\neq\emptyset$ and
$V_{2}\cap e\neq\emptyset$;
If $z_{1}=y_{i1}$ and $z_{2}\in T_{1}$ then $z_{2}\in\mathrm{supp}(u_{1})\cap
e$. This implies $V_{1}\cap e\neq\emptyset$ and $V_{2}\cap e\neq\emptyset$;
Similarly, we have if $z_{1}=y_{i2}$ and $z_{2}\in T_{2}$ then $V_{1}\cap
e\neq\emptyset$ and $V_{2}\cap e\neq\emptyset$.
Finally suppose that $e\cap V(G_{i})=\emptyset$. Then either $e\subseteq
E(T_{1})$ or $e\subseteq E(T_{2})$. In the case that $e\subseteq E(T_{1})$, we
have $\mathrm{supp}{(u_{1})}\cap e\neq\emptyset$ and
$\mathrm{supp}{(v_{1})}\cap e\neq\emptyset$ and so $V_{1}\cap e\neq\emptyset$
and $V_{2}\cap e\neq\emptyset$. The other case is similar.
Thus, both $V_{1}$ and $V_{2}$ are indeed vertex covers of $G$.
Acknowledgment. This research was supported by NSFC (No. 11971338).
## References
* [1] A. Banerjee, S. Beyarslan, H. T. Hà, Regularity of Edge Ideals and Their Powers, Springer Proceedings in Mathematics and Statistics, 277 (2019) 17–52.
* [2] A. Björner, M. Wachs, Shellable nonpure complexes and posets. I, Trans. Amer. Math. Soc. 348 (1996) 1299–1327.
* [3] D. Cook II, U. Nagel, Cohen-Macaulay graphs and face vectors of flag complexes, SIAM J. Discrete Math. 26 (2012) 89–101.
* [4] L. X. Dung, T. T. Hien, H. D. Nguyen, T. N. Trung, Regularity and Koszul property of symbolic powers of monomial ideals, Math. Z. 298 (2021) 1487–1522.
* [5] V. Ene, J. Herzog, F. Mohammadi, Monomial ideals and toric rings of Hibi type arising from a finite poset, European J. Combin. 32 (2011) 404–421.
* [6] C. A. Francisco, A. Van Tuyl, Sequentially Cohen-Macaulay edge ideals, Proc. Amer. Math. Soc. 135 (2007) 2327–2337.
* [7] Y. Gu, H. T. Hà, J. W. Skelton, Symbolic powers of cover ideals of graphs and Koszul property, Internat. J. Algebra Comput. 31 (2021) 865–881.
* [8] J. Guo, M. Li, T. Wu, A new view toward vertex decomposable graphs, Discrete Mathematics 345 (2022) 112953.
* [9] A. V. Jayanthan, R. Kumar, Regularity of symbolic powers of edge ideals, Journal of Pure and Applied Algebra 224 (2020) 106306.
* [10] M. Kokubo, T. Hibi, Weakly polymatroidal ideals, Algebra Colloq. 13 (2006) 711–720.
* [11] A. Kumar, R. Kumar, R. Sarkar, S. Selvaraja, Symbolic Powers of Certain Cover Ideals of Graphs, Acta Mathematica Vietnamica 46 (2021) 599–611.
* [12] J. Herzog, T. Hibi, Monomial Ideals, Graduate Text in Mathematics 260, Springer, 2011.
* [13] J. Herzog, T. Hibi, N. V. Trung, Symbolic powers of monomial ideals and vertex cover algebras, Adv. Math. 210 (2007) 304–322.
* [14] J. Herzog, T. Hibi, The face ideal of a simplicial complex, arXiv:1411.3567.
* [15] T. Hibi, A. Higashitani, K. Kimura, A.B. O’Keefe, Algebraic study on Cameron-Walker graphs, J. Algebra 422 (2015) 257–269.
* [16] D. T. Hoang, N. C. Minh, T. N. Trung, Cohen-Macaulay graphs with large girth, J. Algebra Appl. 14 (2015) 1550112.
* [17] F. Mohammadi, Powers of the vertex cover ideals, Collect. Math. 65 (2014) 169–181.
* [18] F. Mohammadi, Powers of the vertex cover ideal of a chordal graph, Comm. Algebra 39 (2011) 1–12.
* [19] J. Provan, L. Billera, Decompositions of simplicial complexes related to diameters of convex polyhedra, Math. Oper. Res. 5 (1980) 576–594.
* [20] S. Selvaraja, Symbolic powers of vertex cover ideals. Internat. J. Algebra Comput. 30 (2020) 1167–1183.
* [21] S. A. Seyed Fakhari, Regularity of symbolic powers of cover ideals of graphs. Collect. Math. 70(2019) 187–195.
* [22] S. A. Seyed Fakhari, On the minimal free resolution of symbolic powers of cover ideals of graphs, Proc. Amer. Math. Soc. 149 (2021) 3687–3698.
* [23] N. V. Trung, T. M. Tuan, Equality of ordinary and symbolic powers of Stanley-Reisner ideals, J. Algebra (2011) 77–93.
G.H.J.J. and M.U.G.-R. contributed equally to this work.
# Entropy engineering and tunable magnetic order in the spinel high entropy
oxide
Graham H.J. Johnstone (Department of Physics & Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1, Canada; Stewart Blusson Quantum Matter Institute, University of British Columbia, Vancouver, BC V6T 1Z4, Canada)
Mario U. González-Rivas (Department of Physics & Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1, Canada; Stewart Blusson Quantum Matter Institute, University of British Columbia, Vancouver, BC V6T 1Z4, Canada)
Keith M. Taddei (Neutron Scattering Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA)
Ronny Sutarto (Canadian Light Source, Saskatoon, Saskatchewan S7N 2V3, Canada)
George A. Sawatzky (Department of Physics & Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1, Canada; Stewart Blusson Quantum Matter Institute, University of British Columbia, Vancouver, BC V6T 1Z4, Canada)
Robert J. Green (Department of Physics and Engineering Physics, University of Saskatchewan, Saskatoon, SK S7N 5E2, Canada; Stewart Blusson Quantum Matter Institute, University of British Columbia, Vancouver, BC V6T 1Z4, Canada)
Mohamed Oudah (Department of Physics & Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1, Canada; Stewart Blusson Quantum Matter Institute, University of British Columbia, Vancouver, BC V6T 1Z4, Canada)
Alannah M. Hallas <EMAIL_ADDRESS>(Department of Physics & Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1, Canada; Stewart Blusson Quantum Matter Institute, University of British Columbia, Vancouver, BC V6T 1Z4, Canada)
###### Abstract
Spinel oxides are an ideal setting to explore the interplay between
configurational entropy, site selectivity, and magnetism in high entropy
oxides. In this work we characterize the magnetic properties of the spinel
(Cr,Mn,Fe,Co,Ni)3O4 and study the evolution of its magnetism as a function of
non-magnetic gallium substitution. Across the range of compositions studied
here, from 0% to 40% Ga, magnetic susceptibility and powder neutron
diffraction measurements show that ferrimagnetic order is robust in the spinel
HEO. However, we also find that the ferrimagnetic order is highly tunable,
with the ordering temperature, saturated and sublattice moments, and magnetic
hardness all varying significantly as a function of Ga concentration. Through
x-ray absorption and magnetic circular dichroism, we are able to correlate
this magnetic tunability with strong site selectivity between the various
cations and the tetrahedral and octahedral sites in the spinel structure. In
particular, we find that while Ni and Cr are largely unaffected by the
substitution with Ga, the occupancies of Mn, Co, and Fe are each significantly
redistributed. Ga substitution also requires an overall reduction in the
transition metal valence, and this is entirely accommodated by Mn. Finally, we
show that while site selectivity has an overall suppressing effect on the
configurational entropy, over a certain range of compositions, Ga substitution
yields a striking increase in the configurational entropy and may confer
additional stabilization. Spinel oxides can be tuned seamlessly from the low-
entropy to the high-entropy regime, making this an ideal platform for entropy
engineering.
## I Introduction
Since their initial discovery in 2015 Rost _et al._ (2015), the rapidly
growing field of high entropy oxides (HEOs) has attracted significant interest
from chemists, physicists, and materials scientists alike. There is no
universally agreed upon definition for what constitutes an HEO, but in general
terms they are crystalline oxides with a high degree of configurational
entropy due to the solid solution of multiple (in many cases five) metal
cations sharing a single sublattice Zhang and Reece (2019); Oses _et al._
(2020); Musicó _et al._ (2020); Sarkar _et al._ (2020); McCormack and
Navrotsky (2021). In certain cases, this configurational entropy is so large
that it dominates the free energy landscape at the high temperatures where
materials synthesis occurs, such that the resulting single phase solid
solution is actually selected by entropic rather than enthalpic forces. HEOs
have been demonstrated to have highly tunable properties and enhanced
structural stability which lends credence to the idea that they may one day
find widespread use in applications such as reversible energy storage Bérardan
_et al._ (2016); Sarkar _et al._ (2018, 2019) or catalysis Sun and Dai
(2021). One question that has elicited significant interest is the degree to
which the ideal configurational disorder is actually achieved in real HEOs or
whether it is significantly reduced due to short-range ordering or clustering,
and the extent to which these effects might alter the structural, magnetic,
and electronic properties.
The cubic spinel family of materials, with the general chemical formula
$AB_{2}$O4 Sickafus _et al._ (1999); Zhao _et al._ (2017), provides a
particularly interesting setting in which to explore the interplay between
configurational disorder and structure-function relationships in HEOs. First,
the spinel structure offers not one but two cation sublattices: the
tetrahedrally coordinated $A$ site that forms a diamond lattice and the
octahedrally coordinated $B$ site that forms a pyrochlore lattice. In the
extreme limit where both sublattices are occupied by a mixture of cations a
dramatic enhancement to configurational entropy can be expected. Spinels can
also exist as both normal (valence ordered) or inverted (mixing of valences
between $A$ and $B$ site) Verwey and Heilmann (1947); O’Neill and Navrotsky
(1983), they are known to exhibit oxygen vacancies Song _et al._ (2012);
Torres _et al._ (2014); Zhao _et al._ (2017), and they are susceptible to
short-range cation ordering O’Quinn _et al._ (2017); Liu _et al._ (2019).
Each of these factors could enhance or diminish the overall degree of
configurational entropy. In fact, the notion of entropy stabilization in
spinels vastly predates the concept of HEOs in the modern sense. For example,
in a series of papers, Navrotsky et al. showed that even for binary spinel
systems such as MgAl$_{2}$O$_{4}$, mixing of the cation sites provides the necessary
configurational entropy to stabilize the spinel phase in relation to the
component oxide phases, which are favored by enthalpy Navrotsky and Kleppa
(1967, 1968); Navrotsky (1969).
Figure 1: Structural characterizations of the HEO spinels. Rietveld refinement
(yellow lines) of the powder x-ray diffraction data (black circles) on the (a)
0% Ga and (b) 20% Ga samples confirm the cubic spinel phase (residual
indicated by the blue line). The Bragg peak positions for the $Fd\bar{3}m$
spinel structure are indicated by the green crosses. Minor rock salt and
zirconia impurities in the 0% sample are indicated by yellow and orange
crosses, respectively. (c) The cubic spinel structure is made up of
tetrahedrally coordinated (yellow) and octahedrally coordinated (blue) cation
sites. Energy dispersive x-ray spectroscopy mapping for the (d) 0% and (e) 20%
Ga samples reveals that the elemental distribution in these spinel HEOs is
largely homogeneous on the micron scale, with a higher degree of clustering
evident in the Ga-substituted sample.
The spinel family also offers a rich opportunity to study the intersection of
magnetism with high configurational disorder. Previous studies on the
prototypical rock salt HEO, (Mg,Co,Ni,Cu,Zn)O, have revealed that long range
antiferromagnetic order can occur in high entropy systems, even when a high
concentration of the sites (40%) are occupied by non-magnetic ions Zhang _et
al._ (2019). This finding raises natural questions about the versatility and
robustness of magnetic order in HEOs more generally. In particular, are there
limitations on the types of magnetically ordered states that can be obtained
in a high entropy material and how does magnetic dilution interface with the
strong configurational disorder? In contrast to the much-studied rock salt
HEO, in the spinel structure it becomes possible to synthesize a five-component material (Cr,Mn,Fe,Co,Ni)$_{3}$O$_{4}$ composed entirely of magnetic
transition metals Dąbrowa _et al._ (2018). Previous studies on this material
revealed long-range magnetic order Dąbrowa _et al._ (2018); Cieslak _et al._
(2021a); Mao _et al._ (2019) and remarkable electrochemical properties,
raising interest in their potential for energy applications Grzesik _et al._
(2020); Huang _et al._ (2021); Stygar _et al._ (2020); Nguyen _et al._
(2020); Talluri _et al._ (2021); Wang _et al._ (2020, 2019). Additionally,
spinels present the possibility for site selectivity stemming from competing
enthalpy and crystal field effects, further enriching the complexity of their
magnetic properties Sarkar _et al._ (2021).
In this paper, we attempt to disentangle the effects of configurational
disorder and magnetic dilution on the magnetic ground state of the spinel HEO
(Cr,Mn,Fe,Co,Ni)$_{3}$O$_{4}$ as a function of non-magnetic Ga substitution. Gallium is
the ideal candidate for magnetic dilution in this family, as in addition to
carrying no magnetic moment, it is known to form ternary spinel phases with
most of the ions present Casado and Rasines (1982); Ghose _et al._ (1977);
Nakatsuka _et al._ (2006); Areán and Trobajo-Fernandez (1985), and, further,
its ionic radius is similar to that of the $3d$ transition metals and
therefore produces negligible chemical pressure effects. We characterize the
effect of this magnetic dilution through an ensemble of measurements with
sensitivity to varying length scales (long and short-range order) and degrees
of freedom (magnetic and structural). We find that long range ferrimagnetic
order is robust across all Ga concentrations. However, the ordering
temperature, magnetic hardness, and saturation moment all vary sensitively
with composition, resulting in a highly tunable magnetic system. Locally, Ga-
substitution enhances the configurational disorder of the spinel HEO system,
with the more versatile ions (Mn, Fe, and Co) accommodating for charge
imbalances. Spinels are therefore found to be an exemplary system to study the
interactions between the configurational disorder, magnetic dilution, and site
selectivity.
Figure 2: Summary of magnetic property measurements performed on the HEO
spinels. (a) Magnetic susceptibility and (b) magnetization of
(Cr,Mn,Fe,Co,Ni)$_{3}$O$_{4}$, revealing the onset of ferrimagnetic order at $T_{C}=403$
K followed by a second anomaly at $T^{*}=55$ K and a small hysteresis within
the ordered state with a saturated moment of $1.52~{}\mu_{\text{B}}$ per
formula unit (FU). (c) The field-cooled magnetic susceptibility and (d) $T=2$
K magnetization for the Ga-substituted HEO-spinels showing the suppression of
magnetic order and a widening hysteresis with increasing Ga-concentration. The
variation of (e) $T_{C}$ and $T^{*}$ and (f) the saturated moment per formula
unit and per magnetic transition metal (TM) with Ga-concentration.
## II Results and discussion
### II.1 Crystal structure
Polycrystalline samples of the spinel high entropy oxide (Cr,Mn,Fe,Co,Ni)$_{3}$O$_{4}$
and its gallium substituted analogs (10%-Ga, 20%-Ga, and 40%-Ga) were
synthesized by solid state reaction. The spinel phase was initially confirmed
by powder x-ray diffraction. Rietveld refinements of the 0% and 20% Ga samples
are presented in Figure 1(a,b) showing excellent agreement with the cubic
spinel phase (space group $Fd\bar{3}m$). Structural characterization for the
10% and 40% samples is available in the Supplemental Materials. The refined
lattice parameters and adjustable oxygen coordinates as a function of Ga
substitution are presented in Table 1. The cubic spinel structure with the
general formula $AB_{2}$O$_{4}$, as shown in Figure 1(c), has two distinct cation
sublattices. The first, indicated in yellow is the tetrahedrally coordinated
$A$-sublattice, while the second is the octahedrally coordinated $B$
sublattice, indicated in blue. Independently, these $A$ and $B$ sublattices
form, respectively, a diamond and pyrochlore network. From analogy with other
known $3d$ transition metal based spinels, one may infer a strong site
preference among the different cations, but due to the similar atomic form
factors of the elements involved, this cannot be resolved by x-ray
diffraction. This site selectivity will be a major focus in the sections that
follow.
Two minor impurity phases are evident in only the 0% Ga sample, which are
zirconia and rock salt phases making up approximately 1% and 2% of the sample
by volume, respectively. The zirconia phase originates from the degradation of
the milling balls during the homogenization process and is not of concern to
the magnetic property measurements. The minor rock salt phase has all of its
Bragg reflections sitting directly atop spinel main phase peaks and was only
detected due to the high resolution of our diffractometer. The refined lattice
parameter for the rock salt impurity is in good agreement with that of NiO
Sasaki _et al._ (1979) and we therefore tentatively assign the likely origin
of this phase as being due to Ni segregation, which we will further discuss
later in the context of the local environment.
Energy dispersive x-ray spectroscopy (EDS) measurements, shown for the 0% and
20% Ga samples in Figure 1(d,e) were employed to investigate the composition
and homogeneity of these samples on a microscopic length scale. The grain
morphology of the samples is observed to evolve considerably as a function of
Ga substitution, despite the identical synthesis method, suggesting it is
intrinsic in origin. Likewise, while the elemental mapping is largely
homogeneous for the various ions, a higher degree of clustering and
inhomogeneity is evident for the Ga-substituted samples. The current results
exclude any possible phase segregation on the micron length scale accessible
with SEM, and future measurements with a transmission electron microscope
(TEM) can help us confirm the homogeneity on the nanometer length scale. The
current SEM-EDS mapping and the XRD results support the single phase nature of
our spinel HEO samples for different Ga doping levels. Magnetic
characterization, which will be shown next, also supports the single phase
nature of the magnetism.
### II.2 Magnetic Susceptibility: Onset of ferrimagnetic order
We begin our magnetic characterization of the spinel HEO by considering the
temperature-dependent magnetic susceptibility for (Cr,Mn,Fe,Co,Ni)$_{3}$O$_{4}$,
measured between 2 and 600 K, which is presented in Fig. 2(a). The onset of
long-range magnetic order at $T_{C}=403$ K (we define $T_{C}$ by a slope
method wherein the approximately linear increase in the susceptibility is
extrapolated to its $x$-intercept) is marked by a sharp increase in the
susceptibility and a bifurcation of the field-cooled and zero field-cooled
susceptibilities, measured in an applied field of $H=100$ Oe. While the low-
field susceptibility data could be consistent with either ferro- or ferri-
magnetic order, the latter is confirmed via isothermal magnetization
measurements, presented in Fig. 2(b). At $T=2$ K, the saturated magnetic
moment reaches a value of $\mu_{\text{sat}}=1.52$ $\mu_{\text{B}}$ per formula
unit (FU), which is smaller than even the smallest possible moment for any of
the five magnetic transition metals involved and significantly smaller than
the average expected moment. Therefore, we can conclude that the ordered
moments on the $A$ and $B$ sublattices are aligned antiparallel and the
saturated moment measured here represents the difference between the
sublattice magnetizations
($\mu_{\text{sat}}=|\mu^{A}_{\text{ord}}-\mu^{B}_{\text{ord}}|$). With the
exception of the $T=400$ K data set, which is above the ordering temperature,
a measurable but small hysteresis is detected at all other temperatures, with
the coercive field reaching $H_{c}=36$ mT at 2 K, characteristic of soft
magnetic behavior. Finally, we note the appearance of a second feature in the
zero field-cooled susceptibility at $T^{*}=55$ K, which is more evident when
viewing the derivative with respect to temperature, plotted in the top panel
of Fig. 2(a). As this feature only appears in the zero field-cooled data, we
tentatively assign it as a depinning transition, where thermal fluctuations
overcome domain-wall pinning, allowing some domains to spontaneously realign towards
the applied field. Our magnetometry results on (Cr,Mn,Fe,Co,Ni)$_{3}$O$_{4}$ are in good
agreement with previous reports Musicó _et al._ (2019); Sarkar _et al._
(2021); Cieslak _et al._ (2021b).
The field-cooled magnetic susceptibility for (Cr,Mn,Fe,Co,Ni)$_{3}$O$_{4}$ and each of
the Ga-substituted samples is presented in Fig. 2(c) and zero field-cooled
data are available in the Supporting Information. The susceptibility data show
that the onset of long-range magnetic order is suppressed continuously by
magnetic dilution via substitution of non-magnetic Ga, reaching $T_{C}=121$ K
for the 40%-Ga sample. Meanwhile the $T^{*}$ depinning transition temperature
is found to increase with Ga-concentration (see derivative plots in the
Supporting Information), consistent with increasing magnetic disorder, and is
altogether absent in the 40%-Ga sample, apparently coinciding with $T_{C}$.
The evolution of $T_{C}$ and $T^{*}$ is presented in Fig. 2(e), showing that
$T_{C}$ decreases approximately linearly with Ga-concentration.
Magnetization isotherms collected at $T=2$ K for each sample are shown in Fig.
2(d) while higher temperatures are shown in the Supporting Information.
Naively, one might expect the replacement of magnetic transition metals with
non-magnetic Ga to yield a monotonic decrease in the net moment, which is not
borne out by the data. Instead, the saturated moment, when normalized per
formula unit, jumps from $\mu_{\text{sat}}=1.52$ $\mu_{\text{B}}$/FU in the 0%
Ga sample to over $2$ $\mu_{\text{B}}$/FU for both 10% and 20% Ga. Only at 40%
Ga does the saturated moment finally drop below the 0% value. Normalizing the
same data per magnetic transition metal, as shown in Fig. 2(f), shows a
similar trend with the exception of the 40% sample, which has
$\mu_{\text{sat}}=0.68$ $\mu_{\text{B}}$/TM, which exceeds the value for the 0% sample. There
are two possible contributing factors to these observations: (i) the non-
magnetic Ga selectively occupies one of the sublattices yielding a larger
difference between the sublattice ordered moments
($\mu_{\text{sat}}=|\mu^{A}_{\text{ord}}-\mu^{B}_{\text{ord}}|$) and (ii) in
order to satisfy charge neutrality, Ga-substitution induces a change of
valence for one or more of the cations from 3+ towards 2+, increasing the
average moment per transition metal. Magnetometry data alone do not provide
enough information to disentangle these two effects and we will return to both
of these points in the section on neutron diffraction and x-ray absorption
spectroscopy that follow.
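The charge-neutrality argument in point (ii) can be made quantitative: in a spinel $(TM_{1-y}\text{Ga}_{y})_{3}$O$_{4}$, the three cations must balance the $-8$ charge of the four O$^{2-}$ ions, so substituting Ga$^{3+}$ pushes the average transition-metal valence from $8/3\approx 2.67$ towards 2+. A sketch of this bookkeeping (the function name is ours):

```python
def avg_tm_valence(ga_fraction):
    """Average transition-metal valence required by charge neutrality in
    (TM_{1-y} Ga_y)3 O4: three cations balance the -8 charge of four O^2-,
    with each Ga contributing a fixed +3."""
    y = ga_fraction
    return (8 - 9 * y) / (3 - 3 * y)

# y = 0, 0.1, 0.2, 0.4 gives ~2.67, 2.63, 2.58, 2.44: the average TM valence
# drifts from 3+ towards 2+ with increasing Ga, as argued in the text
valences = [avg_tm_valence(y) for y in (0.0, 0.1, 0.2, 0.4)]
```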
The $T=2$ K magnetization isotherms also exhibit a pronounced magnetic
hardening as a function of Ga concentration. There is a moderate increase in
the coercive field going from 0% to 20% and a dramatic increase at 40% Ga,
where $H_{c}=880$ mT. This effect can be partially attributed to increasing
magnetic disorder. As will be shown in the x-ray absorption section, we can
account for this observation by considering the occupancy of Co$^{2+}$, which is
known to commonly exhibit strong anisotropy in octahedral crystal field
environments Sucksmith and Thompson (1954); Lines (1963). Indeed, we find that
in the 0% sample with negligible coercivity, the Co$^{2+}$ sits entirely on the
tetrahedral site, while the increasing coercivity with Ga concentration tracks
with the systematic shift of Co$^{2+}$ from the tetrahedral to the octahedral site. We
therefore conclude that through understanding the site selectivity of these
HEO spinels, we can likely apply fine-tuning of the elemental composition to
optimize magnetic properties.
### II.3 Neutron diffraction: Magnetic structure determination
Figure 3: Neutron diffraction measurements and magnetic structure
determination for the HEO spinels. The top set of panels show data for the 0%
Ga sample and the bottom set of panels are for the 20% Ga sample. (a,d) The
temperature dependence of the (111) Bragg peak shows a sharp increase in
intensity indicating the onset of $k=0$ magnetic order. (b,e) Rietveld
refinements in the paramagnetic ($T=500$ K for the 0% Ga sample and $T=400$ K
for the 20% Ga sample) and magnetically ordered ($T=4$ K) states allow the
determination of the magnetic structure. Tick marks indicate the positions of
Bragg peaks for the spinel nuclear structure (green), the copper background
from the sample environment (yellow), and the $k=0$ magnetic structure
(orange). (c,f) Both the 0% Ga and 20% Ga samples exhibit the ferrimagnetic
$\Gamma_{9}$ structure, in which the tetrahedral moments in yellow point
exactly along the $\langle 100\rangle$ directions while the octahedral moments in blue
are slightly canted away from $\langle 100\rangle$. Ga-substitution strongly
suppresses the tetrahedral moment.
To complete our characterization of the magnetically ordered state in the
spinel HEO, we performed powder neutron diffraction measurements on the 0% and
20% Ga-substituted samples. There has been one previous neutron diffraction
measurement on (Cr,Mn,Fe,Co,Ni)$_{3}$O$_{4}$ Sarkar _et al._ (2021); however, this
study includes only a single data set, collected at $T=300$ K, which does
not allow the nuclear and magnetic structures to be separately determined, nor
does it represent the magnetic ground state since the magnetism continues to
evolve considerably below 300 K, as shown by the magnetic susceptibility data
in Fig. 2(a). In our experiment, we performed measurements between $T=4$ and
500 K, giving us access to both the paramagnetic regime as well as within the
ordered state.
Starting first with the 0% Ga sample, upon cooling through the ordering
transition at $T_{C}=403$ K, we observe the formation of intense magnetic
Bragg peaks that sit atop the nuclear Bragg peaks, as can be seen by viewing
the intensity of the (111) Bragg peak as a function of temperature, shown in Fig.
3(a). The onset of this magnetic order agrees well with the transition
temperature detected from susceptibility and we can see that the ordered
moment continues to grow monotonically with decreasing temperature before
finally saturating at close to 50 K. Comparing the (111) Bragg peak at 500 K,
where it is purely nuclear in origin, and 4 K, where it is overwhelmingly
magnetic in origin, we can see that there is no significant difference in the
line width, indicating that within the resolution limit of this
diffractometer, the magnetic order is truly long-range in nature. We can
estimate the minimum correlation length by considering the full width at half
maximum (FWHM) of the (111) Bragg peak, and this gives
$\xi=2\pi/\text{FWHM}=294(2)$ Å.
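The quoted lower bound on the correlation length is simply the inverse of the peak width in reciprocal-space units. A trivial sketch (the FWHM value below is illustrative, back-derived from the quoted $\xi$):

```python
import math

def correlation_length(fwhm_inv_angstrom):
    """Minimum magnetic correlation length from the Bragg peak width,
    xi = 2*pi / FWHM, with the FWHM in reciprocal-space units (1/Angstrom)."""
    return 2 * math.pi / fwhm_inv_angstrom

# An FWHM of ~0.0214 1/Angstrom (illustrative) reproduces xi ~ 294 Angstrom
xi = correlation_length(0.0214)
```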
In order to determine the nature of the magnetically ordered state, we perform
a Rietveld analysis of the full diffraction pattern, as shown in Figure 3(b).
Above the onset of magnetic order, at $T=500$ K, our refinement shown in the
top panel includes three phases: (i) the cubic spinel structure with space
group $Fd\bar{3}m$, (ii) a metallic Cu phase, which originates from the sample
environment, and (iii) minor ZrO$_{2}$ and NiO phases (as described previously in
the x-ray diffraction section, tick marks for these phases are omitted for
clarity). The coherent scattering lengths of our five transition metals span a
large range, from $-3.73$ fm for Mn to $10.3$ fm for Ni. One might
therefore hope to be able to independently refine the site occupations from
the neutron data. However, in our analysis, we found that due to trade-offs
with other refined parameters (such as the thermal parameters, $B_{iso}$, and
the oxygen occupancy), we could not meaningfully distinguish via their
$R_{\text{wp}}$ values between a range of plausible cation distributions.
Given that our primary goal is to refine the magnetic structure, we therefore
constrained the number of adjustable parameters in our refinement of the
nuclear structure by allowing only minor deviations from the cation
distribution deduced from the XMCD analysis, which will be described in the
section that follows, and fixed the oxygen occupancy at 100%. This yielded
excellent agreement between the data and the model and physically reasonable
values for all other refined parameters. The temperature dependence of the
cubic lattice parameter follows conventional thermal contraction, as shown in
the Supporting Information.
Table 1: Structural and magnetic properties of the Ga-substituted HEO spinels determined from x-ray diffraction (cubic lattice parameter $a$ and adjustable oxygen coordinate), magnetic susceptibility (ordering temperature $T_{C}$ and depinning transition $T^{*}$), magnetization (saturated moment per formula unit and per transition metal, and coercive field) and magnetic neutron diffraction (tetrahedral and octahedral sublattice moments for 0% and 20% samples only).

| | $a$ (Å) | O coord. | $T_{C}$ (K) | $T^{*}$ (K) | $\mu_{\text{sat}}$/FU ($\mu_{\text{B}}$) | $\mu_{\text{sat}}$/TM ($\mu_{\text{B}}$) | Coerc. (mT) | $\mu_{\text{tet}}$ ($\mu_{\text{B}}$) | $\mu_{\text{oct}}$ ($\mu_{\text{B}}$) |
|---|---|---|---|---|---|---|---|---|---|
| 0% Ga | 8.3539(3) | 0.2558(5) | 382 | 55 | 1.51 | 0.50 | 36 | 3.60(6) | 2.5(1) |
| 10% Ga | 8.3375(2) | 0.2580(3) | 319 | 81 | 2.33 | 0.86 | 40 | - | - |
| 20% Ga | 8.3397(2) | 0.2569(3) | 230 | 100 | 2.18 | 0.91 | 98 | 2.46(6) | 2.49(9) |
| 40% Ga | 8.3542(2) | 0.2566(3) | 102 | - | 1.23 | 0.68 | 880 | - | - |
Upon cooling into the magnetically ordered state, we observe that all magnetic
Bragg peaks are located at positions allowed by the face-centered cubic
selection rules of the nuclear structure. Therefore, we can characterize the
magnetic order as having a propagation vector of $k=0$. We next consider the
symmetry allowed magnetic structures for the tetrahedral and octahedral sites,
which correspond to the $8a$ and $16d$ Wyckoff sites, respectively. Following
the representation analysis formalism, we can define these symmetry allowed
magnetic structures according to their irreducible representations ($\Gamma$,
composed of basis vectors $\psi$). There are two possible $k=0$ states for the
tetrahedral site, which are $\Gamma_{8}$ and $\Gamma_{9}$, while there are
four possible states for the octahedral site: $\Gamma_{3}$, $\Gamma_{5}$,
$\Gamma_{7}$, and $\Gamma_{9}$. For a coupled, second-order ordering transition
involving both sublattices simultaneously, it is required that both sites order
in the same irreducible representation. Thus, by symmetry arguments alone one can deduce that
the ordering should involve the $\Gamma_{9}$ irreducible representations for
both the $A$ and $B$ sublattice. However, this assessment can be validated
through checking all linear combinations of the irreducible representations
and indeed, only a state involving $\Gamma_{9}$ on both sublattices can give a
satisfactory agreement with the data. The resulting refinement at $T=4$ K is
shown in the lower panel of Figure 3(b), which in addition to the $\Gamma_{9}$
magnetic structure contains the same set of structural phases as the $T=500$ K
refinement.
The spin configuration for the refined $\Gamma_{9}$ magnetic structure of the
spinel HEO is shown in Figure 3(c) (magnetic space group #141.557,
$I4_{1}/am^{\prime}d^{\prime}$). The $\Gamma_{9}$ representation for the
tetrahedral site is a ferromagnetic arrangement composed of a single type of
basis state (triply degenerate for cubic symmetry) in which the moments point
exactly along the cubic $\langle$100$\rangle$ directions with an ordered
moment of $\mu_{\text{tet}}=$3.60(6) $\mu_{\text{B}}$, represented by the
yellow sublattice in Fig. 3(c). The $\Gamma_{9}$ representation for the
octahedral site is composed of two types of basis states (each triply
degenerate for cubic symmetry) in which the moments point along non-coplanar
$\langle$111$\rangle$ and $\langle$211$\rangle$ directions, respectively.
Linear combinations of these two basis states can also produce a ferromagnetic
arrangement with moments pointing along the cubic $\langle$100$\rangle$
directions with an ordered moment of $\mu_{\text{oct}}=$2.5(1)
$\mu_{\text{B}}$, as shown for the blue sublattice in Fig. 3(c). The two
sublattices are aligned anti-parallel to one another, confirming the
ferrimagnetic order deduced from our magnetization data. In our refinement,
the best agreement with the experimental data occurred when the moments on the
octahedral sites are canted by a small amount, $\alpha=6.1$ degrees from the
$\langle$100$\rangle$ direction. Such canting is well known for ferrimagnetic
spinels and is often referred to as the Yafet-Kittel angle Yafet and Kittel
(1952); Murthy _et al._ (1969). Detailed characterizations of many spinel
solid solutions have shown that the magnitude of this canting is highly
correlated with the specific pairwise interactions between the tetrahedral
($A$) and octahedral ($B$) sublattices, which in turn depends strongly on the
specific composition and the degree of inversion and magnetic dilution Willard
_et al._ (1999). The presence of canting indicates a certain degree of
competition between inter- ($J_{AB}$) and intra- ($J_{AA}$ and $J_{BB}$)
sublattice exchange interactions, with a strong, antiferromagnetic $J_{BB}$
disrupting the antiferromagnetic coupling between the $A$ and $B$ sublattices.
The refined net moment is $\mu_{\text{net}}=$1.4(2) $\mu_{\text{B}}$ per
formula unit, in excellent agreement with the saturated moment observed in
magnetization. No change in the magnetic structure is detected at $T^{*}=55$ K,
consistent with our assignment that it is a depinning transition.
For the 20% Ga substituted sample, we see similar behavior with a sharp
increase in the intensity of the (111) Bragg peak that correlates well with
$T_{C}=284$ K detected by magnetic susceptibility (Fig. 3(d)). Similar to our
protocol in the 0% Ga case, in our Rietveld refinement of the 20% Ga sample we
allow only small deviations from the cation distribution deduced from the XMCD
analysis to be shown in the next section. The resulting refinement of the
structure in the paramagnetic state at $T=400$ K gives excellent agreement
with the data as shown in Fig. 3(e). The refinement in the magnetically
ordered state, at $T=4$ K, confirms that the magnetic order is described by the
same $\Gamma_{9}$ ferrimagnetic state observed in the 0% sample, with an
unchanged $B$ sublattice ordered moment of $\mu_{\text{oct}}=$2.49(9)
$\mu_{\text{B}}$ but a substantially reduced $A$ site ordered moment of
$\mu_{\text{tet}}=$2.46(6) $\mu_{\text{B}}$, which is approximately two-thirds
the value obtained for the 0% Ga sample. This supports our previous claim that
non-magnetic gallium is preferentially occupying the tetrahedral site. An
additional consequence of the highly suppressed $A$ sublattice moment is that
there is a less complete cancellation between the antiparallel $A$ and $B$
sublattice moments such that the net moment increases to
$\mu_{\text{net}}=$2.5(2) $\mu_{\text{B}}$, as compared to
$\mu_{\text{net}}=$1.4(2) $\mu_{\text{B}}$ in the 0% sample. This also
accounts for the observed increase in the saturated moment found in
magnetization measurements. Finally, the refined canting angle for the
octahedral moments in the 20% Ga sample, $\alpha=4.6$ degrees, is reduced from
the 0% case, consistent with reduced competition between inter- and intra-
sublattice exchange interactions.
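As a consistency check on the refined moments, the net ferrimagnetic moment per formula unit follows from the antiparallel sublattice moments, counting one tetrahedral and two octahedral cations per $AB_{2}$O$_{4}$ formula unit (a sketch; the function name is ours):

```python
def net_moment_per_fu(mu_tet, mu_oct):
    """Net ferrimagnetic moment per formula unit for an AB2O4 spinel:
    one A-site (tetrahedral) ion antiparallel to two B-site (octahedral) ions."""
    return abs(mu_tet - 2 * mu_oct)

# Refined per-ion sublattice moments (Bohr magnetons) from the Rietveld analysis:
mu_0pc = net_moment_per_fu(3.60, 2.5)    # 0% Ga: ~1.4 mu_B/FU, cf. mu_net = 1.4(2)
mu_20pc = net_moment_per_fu(2.46, 2.49)  # 20% Ga: ~2.5 mu_B/FU, cf. mu_net = 2.5(2)
```

The increase from roughly 1.4 to 2.5 $\mu_{\text{B}}$/FU with the suppressed tetrahedral moment reproduces the counterintuitive growth of the saturated moment seen in magnetization.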
Figure 4: Normalized x-ray magnetic circular dichroism (XMCD) data and
multiplet calculations from measurements performed on the HEO spinel samples
with varying Ga concentration at the transition metal L$_{2,3}$ edges. Top row: L+R
spectra for (a) Cr, (b) Mn, (c) Fe, (d) Co, and (e) Ni. The coordination
environment can be deduced from these signals, which are approximately
equivalent to pure x-ray absorption spectroscopy (XAS) measurements. Bottom
row: L-R (XMCD) spectra for (f) Cr, (g) Mn, (h) Fe, (i) Co, and (j) Ni. The
sign of each spectrum reflects the ferrimagnetic arrangement in the HEO
spinels. Predominantly positive signals correspond to mainly tetrahedral
states (Td) while negative signals indicate a mostly octahedral coordination
(Oh).
### II.4 XAS and XMCD: Local ordering and site selectivity
X-ray magnetic circular dichroism (XMCD) is one of the select few pathways to
determining site occupancies in transition-metal spinel oxides Pattrick _et
al._ (2002) due to its simultaneous chemical and crystal field sensitivity.
Site occupancies can be deduced by comparison with multiplet calculations,
which depend on both the coordination and the valence of the transition metal
in question. XMCD measurements for the spinel HEO samples, along with the
corresponding multiplet calculations, are presented in Fig. 4, where the top
row of panels are the sum of the left and right circularly polarized responses
(L+R) and therefore correspond approximately to x-ray absorption spectroscopy
(XAS) while the bottom row is the difference (L-R), corresponding to the
dichroism. The spectra are presented normalized to their maxima, so the main
features can be resolved across the entire composition range. An important
feature of these spectra is that, for ferrimagnetic spinels, one can observe
the antiferromagnetic coupling of the tetrahedral and octahedral sites in a
straightforward manner. The sign of the XMCD signal is directly dependent on
the number of available holes in the transition metal ions’ $d$-shell and
since the holes for the octahedral and tetrahedral sites have opposing spin,
the XMCD will have opposite sign. While it is not possible to directly measure
the site occupancy of Ga in this way due to its filled $d$-shell, its
distribution can be approximately deduced from the constraint that the
stoichiometry respects the spinel structure. As analysis of dichroism requires
the sample to be in a field polarized state, an important limitation occurs
for the 40% Ga sample, which is not close to saturation in our experiment due
to its large coercive field. For this reason, the XMCD of the 40% Ga sample
represents a semi-random arrangement of magnetic domains that cannot be
meaningfully interpreted, and this is also why the noise level observed in
this sample is significantly higher. Our analysis of the 40% Ga sample
therefore relies on only the conventional absorption data.
The analysis of Cr and Ni from Fig. 4(a,f) and (e,j) is straightforward, as
neither is observed to change as a function of Ga concentration. Both are
found exclusively in an octahedral (Oh) coordination, as indicated by the
negative dichroism with Cr in a 3+ oxidation state and Ni in a 2+ oxidation
state. In both cases, the multiplet calculation describes well all the
features in both the absorption and the dichroism spectra. The remaining
cations will be discussed in order of increasing complexity.
The next ion we consider is Fe, which can be seen in Fig. 4(c). For all
compositions, we observe a double peak structure in the XAS that cannot be
described by any one of the possible multiplet calculations individually;
instead Fe exists in an admixture of coordinations in all samples, requiring a
more nuanced analysis. As a function of increasing Ga concentration, we can
observe that the XAS signal becomes progressively sharper, with the 40% Ga
concentration sample exhibiting a sharp peak at 707 eV that aligns well with
the multiplet calculation for Fe$^{3+}$ in octahedral coordination. Simultaneously,
the predominantly positive XMCD signature in the 0% Ga sample, which is
consistent with Fe$^{3+}$ in tetrahedral coordination, evolves towards a larger
negative contribution with increasing Ga, suggesting a general trend of Fe
shifting from predominantly tetrahedral towards predominantly octahedral
coordination. We therefore conclude that Fe remains in a 3+ oxidation state
across the analyzed composition range and gradually shifts from mainly
tetrahedral at 0% towards a primarily octahedral state around 20% Ga
substitution.
Considering next Co, we can observe that the XAS spectra for all Ga
concentrations appear at first largely similar but a closer examination
reveals the presence of a pronounced shoulder-like feature on the lower energy
side of the central peak in only the 40% Ga concentration sample (Fig. 4(d)).
Comparison of this data set with the multiplet calculations reveals that the
shoulder-like feature in the 40% Ga sample originates from a majority
composition of Co$^{2+}$ in a tetragonally-distorted octahedron ($D_{4h}$), originating
from a Jahn-Teller effect. This distortion is required to properly reproduce
the relative intensities and features in the pre- and post-edge of the 40% XAS,
and is known to occur in Co$^{2+}$ oxides and compounds van Schooneveld _et al._
(2012); Hibberd _et al._ (2015). Upon closer inspection of the spectra
corresponding to 10% and 20% Ga-diluted samples, the emergence of these
features can already be observed, albeit on a smaller scale. However, the XMCD
spectra show conclusively that for the lower Ga concentrations, the majority
species is Co2+ in a tetrahedral coordination. The XMCD for the 40% Ga sample
shows a near-complete cancellation of the signal, but we must stress again that
the XMCD of the 40% Ga sample is not reliable because the sample was not in
its field-saturated state; we therefore base our conclusions on the
XAS and other supporting data. Detailed fitting of the XAS data is presented
in the Supplemental Information. Important corroboration for the shift of
Co2+ from the tetrahedral to the octahedral site in the 40% Ga sample comes
from the magnetization. Co2+ in this distorted octahedral environment also
elucidates the origin of the increased coercivity observed in Fig. 2(d). While
more subtle than the changes Ga substitution induces in the other ions, this
shift is thus identified as a direct consequence of Ga substitution.
The final ion to consider is Mn, whose behavior is the most challenging to
explain and reproduce with multiplet calculations, particularly for the low
Ga-concentration samples. From Fig. 4(b), we observe three broad features
centered around 640 eV in the L3 edge at lower Ga substitution, which are
replaced by two comparatively sharp features at 40% Ga concentration. The
features in the 40% sample are well explained by the spectrum for Mn2+ in
tetrahedral coordination. As a function of decreasing Ga concentration, the
Mn2+ tetrahedral signal persists but becomes weaker, while simultaneously the XMCD
becomes increasingly negative. At 0% the most prominent signal is located at
640.7 eV and this feature is attributed to Mn3+ in a tetragonally distorted
octahedron (D4h symmetry) due to a strong Jahn-Teller effect. The final
feature in the L3 edge is located at 641.9 eV, and corresponds to the presence
of Mn4+ in octahedral coordination. The evolution of Mn in the sample can
therefore be summarized as follows: the undiluted sample is characterized
by the presence of all three Mn species, 2+ tetrahedral, 3+ tetragonally
distorted octahedral, and 4+ octahedral. Upon the introduction of Ga to the
structure, Mn accommodates the extra charge by gradually shifting towards the
Mn2+ state in tetrahedral coordination, as shown in the top panel of Fig. 5(a). At
20% Ga, the Mn4+ octahedral signal has been reduced substantially, and by 40%
Ga, Mn is predominantly in a 2+ tetrahedral state, with minimal 3+ D4h and 4+
octahedral contributions.
Putting the analysis of all XMCD results together allows us to arrive at
approximate site distributions and compositions for each of our four samples,
as summarized in Fig. 5(a,b). We can see that the undiluted 0% Ga sample
presents a significant degree of site selectivity, with Cr and Ni exclusively
occupying octahedral sites, Co only in tetrahedral sites, and mixed
occupations for Mn and Fe, leading to the estimated composition for the
tetrahedral site of $A=$
Mn${}^{2+}_{0.06}$Fe${}^{3+}_{0.33}$Co${}^{2+}_{0.61}$ and for the octahedral
site of $B=$
Cr${}^{3+}_{0.61}$Mn${}^{3+}_{0.28}$Mn${}^{4+}_{0.27}$Fe${}^{3+}_{0.28}$Ni${}^{2+}_{0.54}$,
which takes into account the deficiency in Ni due to the minor rock salt
impurity phase described previously. The result for the 0% Ga sample can be
compared with the previous study by Sarkar et al., where it was concluded that
Co was present in an almost equal mixture of 3+ and 2+ tetrahedral states,
while Mn was described as almost entirely 3+ octahedral with a minor 2+
octahedral component Sarkar _et al._ (2021). One factor that might account
for some of these differences is that the data collected in Ref. Sarkar _et
al._ (2021) used a total electron yield method, which is more surface
sensitive as compared to the inverse partial fluorescence yield method used
here. Another intriguing possibility is that the fundamental differences
between the results could be indicative of some dependence of the site
selectivity on the synthesis method, choice of reagents, or thermal treatments
performed on the samples. Nevertheless, it is interesting to note that despite
the differences in oxidation state, the elemental composition of the
octahedral and tetrahedral sites found here agrees almost exactly with that of
Ref. Sarkar _et al._ (2021).
Figure 5: Summary of cation distributions and sublattice entropies in the
spinel HEO as a function of Ga substitution. (a) The valence of Mn and the
fraction of octahedral occupation as a function of Ga concentration.
Preferential occupation of Ga onto tetrahedral sites causes a shift of Fe and
Co towards the octahedral site and the stable 3+ valence of Ga is compensated
by a reduction in the valence of Mn, which in its 2+ valence state prefers
tetrahedral sites. (b) Schematic representation of the cation occupation of
the tetrahedral and octahedral sites for each sample. (c) Calculated
sublattice entropy assuming ideal configurational disorder as a function of Ga
concentration for the tetrahedral site in yellow and the octahedral site in
blue. At 0% Ga, neither sublattice surpasses the empirical threshold for high
entropy but with Ga substitution high and medium entropy is achieved for the
octahedral and tetrahedral sites, respectively.
Upon the introduction of Ga into the HEO spinel, the compositions of each
sample can be qualitatively estimated from the XMCD spectra, with the
distribution of Ga ions in each sample constrained so that the final
composition respects the stoichiometry of the spinel structure. While Ga is
divided almost evenly between the tetrahedral and octahedral sites (Fig.
5(a)), due to the 1:2 sublattice ratio in the spinel structure a larger
fraction of the tetrahedral magnetic ions is replaced. Therefore, we find a
preferential dilution of the tetrahedral sublattice, as shown schematically in
Fig. 5(b), consistent with the trends observed in our magnetization data.
Meanwhile, Co and Fe accommodate by gradually shifting to primarily octahedral
crystal field environments (Fig. 5(a)). On the other hand, Mn seems to adapt
to the additional charge (Ga’s 3+ oxidation state is larger than the 2.667+
average for the spinel structure) by reducing towards a 2+ oxidation state, as
shown in the upper panel of Fig. 5(a). At 20% Ga substitution, we find that
the tetrahedral site has been 35% diluted by non-magnetic Ga while the
octahedral site has been diluted by just over 10%, consistent with the
large reduction in the tetrahedral moment observed in our neutron diffraction
analysis. The negligible change in the octahedral sublattice moment can be
ascribed to the larger fraction of high spin Mn3+ as compared to Mn4+ at this
composition. By 40% Ga, the ion distribution reaches the estimated composition
for the tetrahedral site of $A=$
Mn${}^{2+}_{0.21}$Fe${}^{3+}_{0.12}$Co${}^{2+}_{0.09}$Ga${}^{3+}_{0.58}$ and
for the octahedral site of $B=$
Cr${}^{3+}_{0.36}$Mn${}^{3+}_{0.09}$Mn${}^{4+}_{0.06}$Fe${}^{3+}_{0.24}$Co${}^{2+}_{0.27}$Ni${}^{2+}_{0.36}$Ga${}^{3+}_{0.62}$.
Here we can see that non-magnetic Ga occupies more than 50% of the tetrahedral
sites while only occupying approximately 25% of the octahedral sites.
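As a consistency check on these deduced compositions, one can verify that the cation charges balance the oxygen sublattice: in the spinel AB2O4 the cation charges must sum to +8 per formula unit to offset the four O2- anions. A minimal sketch (occupations and valences transcribed from the compositions quoted above; the 40% Ga total comes out slightly above +8, consistent with rounding of the tabulated occupations):

```python
def cation_charge(site):
    """Total positive charge per formula unit from (occupation, valence) pairs."""
    return sum(occ * val for occ, val in site)

# 0% Ga sample (A site holds 1 cation per formula unit; B site holds 2)
A_0 = [(0.06, 2), (0.33, 3), (0.61, 2)]                        # Mn2+, Fe3+, Co2+
B_0 = [(0.61, 3), (0.28, 3), (0.27, 4), (0.28, 3), (0.54, 2)]  # Cr3+, Mn3+, Mn4+, Fe3+, Ni2+

# 40% Ga sample
A_40 = [(0.21, 2), (0.12, 3), (0.09, 2), (0.58, 3)]            # Mn2+, Fe3+, Co2+, Ga3+
B_40 = [(0.36, 3), (0.09, 3), (0.06, 4), (0.24, 3),
        (0.27, 2), (0.36, 2), (0.62, 3)]                       # Cr3+, Mn3+, Mn4+, Fe3+, Co2+, Ni2+, Ga3+

total_0 = cation_charge(A_0) + cation_charge(B_0)    # must be ~ +8 to balance O4 (4 x -2)
total_40 = cation_charge(A_40) + cation_charge(B_40)
print(f"0% Ga: {total_0:+.2f}, 40% Ga: {total_40:+.2f} (neutrality requires +8)")
```

The 0% Ga composition balances exactly, which also reproduces the 8/3 = 2.667+ average cation charge mentioned later in the text.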
### II.5 Entropy analysis
With these results in mind, we can begin to construct a realistic estimate of
the configurational entropy for each of these spinel HEOs taking into account
the preferential occupation of multiple cation sublattices. Our deduced
compositions, which are shown schematically in Fig. 5(b), respect the
stoichiometry of the spinel structure and also obey charge neutrality with the
oxygen sublattice within negligible error. Our assessment of the transition
metal ion sites and valences agrees with the experimental XMCD spectra and
aligns well with the well-established crystal field behaviors for these
ions. From this estimate, one can then compute the corresponding
configurational entropy,
$S_{\rm{config}}=-R\sum_{ions}x\ln{x}$ (1)
where $x$ is the fractional occupation of each ion inhabiting the site.
Empirically it has been established that the threshold for medium entropy
occurs at $S_{\rm{config}}>1R$ and high entropy occurs at
$S_{\rm{config}}>1.5R$ where the conventional 5-component equiatomic HEO has
$S_{\rm{config}}=1.61R$. The computed configurational entropy for each of our
HEO spinels, separated by sublattice, is plotted in Fig. 5(c). One important
limitation of such a naive calculation is that it does not take into account
any level of clustering or pairwise atomic correlations. While our Ga-
substituted samples are single phase, elemental mapping of these materials
shown previously does suggest some level of micron scale inhomogeneity that
places a limit on the true configurational entropy. Therefore the entropy
figures shown in Fig. 5(c) should be considered a strict upper bound.
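To make Eq. (1) concrete, the sketch below evaluates it per sublattice for the 0% Ga sample, with per-site fractional occupations read off the deduced compositions (octahedral amounts divided by 2, since that sublattice holds two cations per formula unit). It approximately reproduces the quoted entropy regimes; small deviations from the quoted figures reflect rounding of the occupations:

```python
import math

def s_config(fractions):
    """Ideal configurational entropy per site in units of the gas constant R:
    S/R = -sum_i x_i ln x_i (zero-occupation terms are skipped)."""
    return -sum(x * math.log(x) for x in fractions if x > 0)

# Conventional 5-component equiatomic HEO: x = 0.2 each -> ln 5 ~ 1.61
s_equi = s_config([0.2] * 5)

# 0% Ga sample, per-site occupations from the deduced compositions
s_tet = s_config([0.06, 0.33, 0.61])                      # Mn2+, Fe3+, Co2+
s_oct = s_config([0.305, 0.275, 0.14, 0.27])              # Cr, Mn (valences merged), Fe, Ni
s_oct_split = s_config([0.305, 0.14, 0.135, 0.14, 0.27])  # Mn3+ and Mn4+ counted separately

print(f"equiatomic: {s_equi:.2f}R  tet: {s_tet:.2f}R  "
      f"oct: {s_oct:.2f}R  oct (split Mn): {s_oct_split:.2f}R")
```

Counting the two Mn valences as distinct species is what pushes the octahedral sublattice past the empirical 1.5R high-entropy threshold.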
One can deduce on general grounds that site selectivity in the spinel
structure is a force that will act to reduce configurational entropy. Indeed
in the 0% Ga case we find that the tetrahedral site, which is majority
occupied by Co, falls in the low-entropy regime with
$S_{\rm{config,tet}}=0.9R$, while the octahedral site has medium-entropy
$S_{\rm{config,oct}}=1.35R$. However, if we consider the two different
valences of Mn as contributing to the configurational entropy, this alone
further enhances the octahedral to the high-entropy limit with
$S_{\rm{config,oct}}=1.52R$. Likewise, the introduction of Ga as a 6th cation
has an overall enhancing effect on the configurational entropy, with the
tetrahedral site being elevated to medium-entropy and the octahedral site
being raised to the high-entropy limit by 10% Ga. The maximum combined
configurational entropy among the compositions we have studied is reached for the
20% Ga sample, which exceeds $S_{\rm{config,total}}=3R$. This finding implies
that, with appropriate cation choice, the configurational entropy of the two
sublattices of the spinel structure can likely be independently tuned and
precisely controlled across different entropy regimes, which we term “entropy
engineering”.
## III Conclusions
In this work, we have demonstrated that due to intense site selectivity, the
magnetic ground state of the spinel HEO can be strongly tuned by nonmagnetic
Ga substitution. Magnetic susceptibility measurements show a suppression in
the onset of long-range magnetic order; simultaneously, there is a significant
enhancement in the coercive field and a nonmonotonic variation in the
saturated moments. Our neutron diffraction measurements show
that the ferrimagnetic order is robust in response to Ga-substitution, with a
rapidly suppressed tetrahedral moment and a reduced inter-sublattice
frustration. Finally, our XMCD measurements reveal that the inclusion of Ga is
accommodated through valence and site re-organizations for Mn, Fe, and Co,
while Ni and Cr are unaffected. An extremely high entropy system with
$S_{\rm{config,total}}>3R$ is realized for the 20% Ga sample and we show that
the individual sublattice entropies can be precisely engineered by the choice
of cations. We also demonstrate that multiple valences are a route to further
enhancing the configurational entropy.
In light of these findings, many new avenues of inquiry beckon. For instance,
are there any subsets of these five magnetic transition metals or different
stoichiometries that would yield a non-ferrimagnetic ordered state? What is
the fate of the ferrimagnetic order at higher nonmagnetic dilutions? Exploring
the limit in which long-range magnetic order is completely suppressed will
allow a direct comparison with percolation theory. Along a different axis, the
spectral XMCD differences between our work and Sarkar et al. Sarkar _et al._
(2021) suggest that there may be significant sample dependence related to
synthesis procedure and this topic remains largely unexplored. A better
understanding of the exact level of configurational entropy in the spinel HEO
is also, for now, out of reach. While our study suggests a very high level of
site selectivity, there may be additional preferred short range configurations
that will require highly sophisticated studies of ions in a pairwise fashion.
We therefore conclude on the optimistic note that the field of high entropy
oxides is still in its infancy and many important discoveries remain to be
made.
## Supporting Information
Experimental details, supporting figures for x-ray diffraction, energy
dispersive x-ray spectroscopy, magnetic susceptibility, and neutron
diffraction, parameters used in the multiplet theory calculations for the
x-ray absorption analysis
###### Acknowledgements.
The authors thank Joerg Rottler and Solveig Stubmo Aamlid for insightful
conversations on high entropy oxides and Jacob Kabel for assistance collecting
the energy dispersive x-ray spectroscopy data. This work was supported by the
Natural Sciences and Engineering Research Council of Canada and the CIFAR
Azrieli Global Scholars program. This research was undertaken thanks in part
to funding from the Canada First Research Excellence Fund, Quantum Materials
and Future Technologies Program. Part of the research described in this paper was
performed at the Canadian Light Source, a national research facility of the
University of Saskatchewan, which is funded by the Canada Foundation for
Innovation, the NSERC, the National Research Council Canada, the Canadian
Institutes of Health Research, the Government of Saskatchewan, and the
University of Saskatchewan. A portion of this research used resources at the
High Flux Isotope Reactor, a DOE Office of Science User Facility operated by
the Oak Ridge National Laboratory.
## References
* Rost _et al._ (2015) Christina M Rost, Edward Sachet, Trent Borman, Ali Moballegh, Elizabeth C Dickey, Dong Hou, Jacob L Jones, Stefano Curtarolo, and Jon-Paul Maria, “Entropy-stabilized oxides,” Nature communications 6, 1–8 (2015).
* Zhang and Reece (2019) Rui-Zhi Zhang and Michael J Reece, “Review of high entropy ceramics: design, synthesis, structure and properties,” Journal of Materials Chemistry A 7, 22148–22162 (2019).
* Oses _et al._ (2020) Corey Oses, Cormac Toher, and Stefano Curtarolo, “High-entropy ceramics,” Nature Reviews Materials 5, 295–309 (2020).
* Musicó _et al._ (2020) Brianna L Musicó, Dustin Gilbert, Thomas Zac Ward, Katharine Page, Easo George, Jiaqiang Yan, David Mandrus, and Veerle Keppens, “The emergent field of high entropy oxides: Design, prospects, challenges, and opportunities for tailoring material properties,” APL Materials 8, 040912 (2020).
* Sarkar _et al._ (2020) Abhishek Sarkar, Ben Breitung, and Horst Hahn, “High entropy oxides: The role of entropy, enthalpy and synergy,” Scripta Materialia 187, 43–48 (2020).
* McCormack and Navrotsky (2021) Scott J McCormack and Alexandra Navrotsky, “Thermodynamics of high entropy oxides,” Acta Materialia 202, 1–21 (2021).
* Bérardan _et al._ (2016) D Bérardan, S Franger, AK Meena, and N Dragoe, “Room temperature lithium superionic conductivity in high entropy oxides,” Journal of Materials Chemistry A 4, 9536–9541 (2016).
* Sarkar _et al._ (2018) Abhishek Sarkar, Leonardo Velasco, Di Wang, Qingsong Wang, Gopichand Talasila, Lea de Biasi, Christian Kübel, Torsten Brezesinski, Subramshu S Bhattacharya, Horst Hahn, _et al._ , “High entropy oxides for reversible energy storage,” Nature communications 9, 1–9 (2018).
* Sarkar _et al._ (2019) Abhishek Sarkar, Qingsong Wang, Alexander Schiele, Mohammed Reda Chellali, Subramshu S Bhattacharya, Di Wang, Torsten Brezesinski, Horst Hahn, Leonardo Velasco, and Ben Breitung, “High-entropy oxides: fundamental aspects and electrochemical properties,” Advanced Materials 31, 1806236 (2019).
* Sun and Dai (2021) Yifan Sun and Sheng Dai, “High-entropy materials for catalysis: A new frontier,” Science Advances 7, eabg1600 (2021).
* Sickafus _et al._ (1999) Kurt E Sickafus, John M Wills, and Norman W Grimes, “Structure of spinel,” Journal of the American Ceramic Society 82, 3279–3292 (1999).
* Zhao _et al._ (2017) Qing Zhao, Zhenhua Yan, Chengcheng Chen, and Jun Chen, “Spinels: controlled preparation, oxygen reduction/evolution reaction application, and beyond,” Chemical reviews 117, 10121–10211 (2017).
* Verwey and Heilmann (1947) EJW Verwey and EL Heilmann, “Physical properties and cation arrangement of oxides with spinel structures i. cation arrangement in spinels,” The Journal of Chemical Physics 15, 174–180 (1947).
* O’Neill and Navrotsky (1983) Hugh St C O’Neill and Alexandra Navrotsky, “Simple spinels: crystallographic parameters, cation radii, lattice energies, and cation distribution,” American Mineralogist 68, 181–194 (1983).
* Song _et al._ (2012) Jie Song, Dong Wook Shin, Yuhao Lu, Charles D Amos, Arumugam Manthiram, and John B Goodenough, “Role of oxygen vacancies on the performance of Li[Ni0.5-xMn1.5+x] O4 ($x=0$, 0.05, and 0.08) spinel cathodes for lithium-ion batteries,” Chemistry of Materials 24, 3101–3109 (2012).
* Torres _et al._ (2014) CE Rodríguez Torres, Gustavo Alberto Pasquevich, P Mendoza Zélis, Federico Golmar, SP Heluani, Sanjeev K Nayak, Waheed A Adeagbo, Wolfram Hergert, Martin Hoffmann, Arthur Ernst, _et al._ , “Oxygen-vacancy-induced local ferromagnetism as a driving mechanism in enhancing the magnetic response of ferrites,” Physical Review B 89, 104411 (2014).
* O’Quinn _et al._ (2017) Eric C O’Quinn, Jacob Shamblin, Brandon Perlov, Rodney C Ewing, Joerg Neuefeind, Mikhail Feygenson, Igor Gussev, and Maik Lang, “Inversion in Mg1-xNixAl2O4 spinel: New insight into local structure,” Journal of the American Chemical Society 139, 10395–10402 (2017).
* Liu _et al._ (2019) Jue Liu, Xuelong Wang, Olaf J Borkiewicz, Enyuan Hu, Rui-Juan Xiao, Liquan Chen, and Katharine Page, “Unified view of the local cation-ordered state in inverse spinel oxides,” Inorganic Chemistry 58, 14389–14402 (2019).
* Navrotsky and Kleppa (1967) Alexandra Navrotsky and OJ Kleppa, “The thermodynamics of cation distributions in simple spinels,” Journal of Inorganic and nuclear Chemistry 29, 2701–2714 (1967).
* Navrotsky and Kleppa (1968) A Navrotsky and OJ Kleppa, “Thermodynamics of formation of simple spinels,” Journal of Inorganic and Nuclear Chemistry 30, 479–498 (1968).
* Navrotsky (1969) A Navrotsky, “Thermodynamics of $A_{3}$O4-$B_{3}$O4 spinel solid solutions,” Journal of Inorganic and Nuclear Chemistry 31, 59–72 (1969).
* Zhang _et al._ (2019) Junjie Zhang, Jiaqiang Yan, Stuart Calder, Qiang Zheng, Michael A. McGuire, Douglas L. Abernathy, Yang Ren, Saul H. Lapidus, Katharine Page, Hong Zheng, John W. Freeland, John D. Budai, and Raphael P. Hermann, “Long-range antiferromagnetic order in a rocksalt high entropy oxide,” Chemistry of Materials 31, 3705–3711 (2019).
* Dąbrowa _et al._ (2018) Juliusz Dąbrowa, Mirosław Stygar, Andrzej Mikuła, Arkadiusz Knapik, Krzysztof Mroczka, Waldemar Tejchman, Marek Danielewski, and Manfred Martin, “Synthesis and microstructure of the (co,cr,fe,mn,ni)3o4 high entropy oxide characterized by spinel structure,” Materials Letters 216, 32–36 (2018).
* Cieslak _et al._ (2021a) J. Cieslak, M. Reissner, K. Berent, J. Dabrowa, M. Stygar, M. Mozdzierz, and M. Zajusz, “Magnetic properties and ionic distribution in high entropy spinels studied by mössbauer and ab initio methods,” Acta Materialia 206, 116600 (2021a).
* Mao _et al._ (2019) Aiqin Mao, Feng Quan, Hou-Zheng Xiang, Zhan-Guo Zhang, Koji Kuramoto, and Ai-Lin Xia, “Facile synthesis and ferrimagnetic property of spinel (CoCrFeMnNi)3O4 high-entropy oxide nanocrystalline powder,” Journal of Molecular Structure 1194, 11–18 (2019).
* Grzesik _et al._ (2020) Zbigniew Grzesik, Grzegorz Smoła, Maria Miszczak, Mirosław Stygar, Juliusz Dąbrowa, Marek Zajusz, Konrad Świerczek, and Marek Danielewski, “Defect structure and transport properties of (Co,Cr,Fe,Mn,Ni)3O4 spinel-structured high entropy oxide,” Journal of the European Ceramic Society 40, 835–839 (2020).
* Huang _et al._ (2021) Chih-Yang Huang, Chun-Wei Huang, Min-Ci Wu, Jagabandhu Patra, Thi Xuyen Nguyen, Mu-Tung Chang, Oliver Clemens, Jyh-Ming Ting, Ju Li, Jeng-Kuei Chang, and Wen-Wei Wu, “Atomic-scale investigation of lithiation/delithiation mechanism in high-entropy spinel oxide with superior electrochemical performance,” Chemical Engineering Journal 420, 129838 (2021).
* Stygar _et al._ (2020) Mirosław Stygar, Juliusz Dąbrowa, Maciej Moździerz, Marek Zajusz, Wojciech Skubida, Krzysztof Mroczka, Katarzyna Berent, Konrad Świerczek, and Marek Danielewski, “Formation and properties of high entropy oxides in Co-Cr-Fe-Mg-Mn-Ni-O system: Novel (Cr,Fe,Mg,Mn,Ni)3O4 and (Co,Cr,Fe,Mg,Mn)3O4 high entropy spinels,” Journal of the European Ceramic Society 40, 1644–1650 (2020).
* Nguyen _et al._ (2020) Thi Xuyen Nguyen, Jagabandhu Patra, Jeng-Kuei Chang, and Jyh-Ming Ting, “High entropy spinel oxide nanoparticles for superior lithiation–delithiation performance,” J. Mater. Chem. A 8, 18963–18973 (2020).
* Talluri _et al._ (2021) Bhusankar Talluri, M.L. Aparna, N. Sreenivasulu, S.S. Bhattacharya, and Tiju Thomas, “High entropy spinel metal oxide (CoCrFeMnNi)3O4 nanoparticles as a high-performance supercapacitor electrode material,” Journal of Energy Storage 42, 103004 (2021).
* Wang _et al._ (2020) Dan Wang, Shunda Jiang, Chanqin Duan, Jing Mao, Ying Dong, Kangze Dong, Zhiyuan Wang, Shaohua Luo, Yanguo Liu, and Xiwei Qi, “Spinel-structured high entropy oxide (FeCoNiCrMn)3O4 as anode towards superior lithium storage performance,” Journal of Alloys and Compounds 844, 156158 (2020).
* Wang _et al._ (2019) Dongdong Wang, Zhijuan Liu, Shiqian Du, Yiqiong Zhang, Hao Li, Zhaohui Xiao, Wei Chen, Ru Chen, Yanyong Wang, Yuqin Zou, and Shuangyin Wang, “Low-temperature synthesis of small-sized high-entropy oxides for water oxidation,” J. Mater. Chem. A 7, 24211–24216 (2019).
* Sarkar _et al._ (2021) Abhishek Sarkar, Benedikt Eggert, Ralf Witte, Johanna Lill, Leonardo Velasco, Qingsong Wang, Janhavika Sonar, Katharina Ollefs, Subramshu S Bhattacharya, Richard A Brand, _et al._ , “Comprehensive investigation of crystallographic, spin-electronic and magnetic structure of (Co0.2Cr0.2Fe0.2Mn0.2Ni0.2)3O4: Unraveling the suppression of configuration entropy in high entropy oxides,” Acta Materialia , 117581 (2021).
* Casado and Rasines (1982) P Garcia Casado and I Rasines, “Crystal data for the spinels $M$Ga2O4 ($M=$ Mg, Mn),” Zeitschrift für Kristallographie 160, 33–37 (1982).
* Ghose _et al._ (1977) J Ghose, G C Hallam, and D A Read, “A magnetic study of FeGa2O4,” Journal of Physics C: Solid State Physics 10, 1051–1057 (1977).
* Nakatsuka _et al._ (2006) Akihiko Nakatsuka, Yuya Ikeda, Noriaki Nakayama, and Tadato Mizota, “Inversion parameter of the CoGa2O4 spinel determined from single-crystal X-ray data,” Acta Crystallographica Section E 62, i109–i111 (2006).
* Areán and Trobajo-Fernandez (1985) C. Otero Areán and M. C. Trobajo-Fernandez, “Cation distribution in MgxNi1–xGa2O4 oxide spinels,” Physica Status Solidi (a) 92, 443–447 (1985).
* Sasaki _et al._ (1979) Satoshi Sasaki, Kiyoshi Fujino, and Yoshio Takéuchi, “X-ray determination of electron-density distributions in oxides, MgO, MnO, CoO, and NiO, and atomic scattering factors of their constituent atoms,” Proceedings of the Japan Academy, Series B 55, 43–48 (1979).
* Note (1) We define $T_{C}$ by a slope method wherein the approximately linear increase in the susceptibility is extrapolated to its $x$-intercept.
* Musicó _et al._ (2019) Brianna Musicó, Quinton Wright, T Zac Ward, Alexander Grutter, Elke Arenholz, Dustin Gilbert, David Mandrus, and Veerle Keppens, “Tunable magnetic ordering through cation selection in entropic spinel oxides,” Physical Review Materials 3, 104416 (2019).
* Cieslak _et al._ (2021b) J Cieslak, M Reissner, K Berent, J Dabrowa, M Stygar, M Mozdzierz, and M Zajusz, “Magnetic properties and ionic distribution in high entropy spinels studied by Mössbauer and ab initio methods,” Acta Materialia 206, 116600 (2021b).
* Sucksmith and Thompson (1954) Willie Sucksmith and Jo E Thompson, “The magnetic anisotropy of cobalt,” Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 225, 362–375 (1954).
* Lines (1963) ME Lines, “Magnetic properties of CoCl2 and NiCl2,” Physical Review 131, 546 (1963).
* Yafet and Kittel (1952) Y Yafet and Ch Kittel, “Antiferromagnetic arrangements in ferrites,” Physical Review 87, 290 (1952).
* Murthy _et al._ (1969) NS Satya Murthy, MG Natera, SI Youssef, RJ Begum, and CM Srivastava, “Yafet-Kittel angles in zinc-nickel ferrites,” Physical review 181, 969 (1969).
* Willard _et al._ (1999) Matthew A Willard, Yuichiro Nakamura, David E Laughlin, and Michael E McHenry, “Magnetic properties of ordered and disordered spinel-phase ferrimagnets,” Journal of the American Ceramic Society 82, 3342–3346 (1999).
* Pattrick _et al._ (2002) Richard A.D. Pattrick, Gerrit Van Der Laan, C. Michael B. Henderson, Pieter Kuiper, Esther Dudzik, and David J. Vaughan, “Cation site occupancy in spinel ferrites studied by x-ray magnetic circular dichroism: developing a method for mineralogists,” European Journal of Mineralogy 14, 1095–1102 (2002).
* van Schooneveld _et al._ (2012) Matti M. van Schooneveld, Reshmi Kurian, Amélie Juhin, Kejin Zhou, Justine Schlappa, Vladimir N. Strocov, Thorsten Schmitt, and Frank M. F. de Groot, “Electronic structure of CoO nanocrystals and a single crystal probed by resonant x-ray emission spectroscopy,” The Journal of Physical Chemistry C 116, 15218–15230 (2012).
* Hibberd _et al._ (2015) Amber M. Hibberd, Hoang Q. Doan, Elliot N. Glass, Frank M. F. de Groot, Craig L. Hill, and Tanja Cuk, “Co polyoxometalates and a Co3O4 thin film investigated by L-edge x-ray absorption spectroscopy,” The Journal of Physical Chemistry C 119, 4173–4179 (2015).
# The role of stellar expansion on the formation of gravitational wave sources
A. Romagnolo1, K. Belczynski1, J. Klencki2,3, P. Agrawal4, T. Shenar5, D.
Szécsi6
1Nicolaus Copernicus Astronomical Center, The Polish Academy of Sciences, ul.
Bartycka 18, 00-716 Warsaw, Poland
2European Southern Observatory, Karl-Schwarzschild-Strasse 2, 85748, Garching
bei München, Germany
3Max Planck Institute for Astrophysics, Karl-Schwarzschild-Strasse 1, 85748,
Garching bei München, Germany
4 McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon
University, Pittsburgh, PA 15213, USA
5Anton Pannekoek Institute for Astronomy, Science Park 904, 1098 XH,
Amsterdam, the Netherlands
6Institute of Astronomy, Faculty of Physics, Astronomy and Informatics,
Nicolaus Copernicus University, Grudziądzka 5, 87-100 Toruń, Poland E-mail:
<EMAIL_ADDRESS><EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Massive stars are the progenitors of black holes and neutron stars, the
mergers of which can be detected with gravitational waves (GW). The expansion
of massive stars is one of the key factors affecting their evolution in close
binary systems, but it remains subject to large uncertainties in stellar
astrophysics. For population studies and predictions of GW sources, the
stellar expansion is often simulated with the analytic formulae from Hurley et
al. (2000). These formulae need to be extrapolated for stars beyond 50 M⊙ and
are often considered outdated.
In this work we present five different prescriptions developed from 1D stellar
models to constrain the maximum expansion of massive stars. We adopt these
prescriptions to investigate how stellar expansion affects mass transfer
interactions and in turn the formation of GW sources.
We show that limiting radial expansion with updated 1D stellar models, when
compared to the use of Hurley et al. (2000) radial expansion formulae, does
not significantly affect GW source properties (rates and masses). This is
because most mass transfer events leading to GW sources are initialised before
the donor star reaches its maximum expansion. The only significant difference
was found for the mass distribution of massive binary black hole mergers
($M_{\rm tot}$ > 50 M⊙) formed from stars that may evolve beyond the
Humphreys-Davidson limit, whose radial expansion is the most uncertain. We
conclude that understanding the expansion of massive stars and the origin of
the Humphreys-Davidson limit is a key factor for the study of GW sources.
###### keywords:
stars: evolution – stars: black holes – stars: neutron – binaries: general –
gravitational waves
††pubyear: 2022††pagerange: The role of stellar expansion on the formation of
gravitational wave sources–A
## 1 Introduction
The LIGO/Virgo/KAGRA collaboration (LVK) has so far detected around 76 binary
black hole (BH-BH), 12 binary neutron star (NS-NS) and 2 black hole-neutron
star (BH-NS) mergers in its three observing runs (Abbott et al., 2021b). The
main observables that can be derived from these mergers are the merger rate
density in the local universe (redshift $z\sim$ 0.2) and the distribution of
masses and orbital spins. The reported 90% credible intervals of the merger
rate densities from LVK observations are 16-61 Gpc$^{-3}$ yr$^{-1}$ for BH-BH, 7.8-140
Gpc$^{-3}$ yr$^{-1}$ for BH-NS and 10-1700 Gpc$^{-3}$ yr$^{-1}$ for NS-NS. In the case of NSs there
is no pronounced single peak in the mass distribution and their inferred
maximum mass goes up to $\sim$ 2.2 M⊙ within the margins of uncertainty. In
contrast, the distribution of the BH primary masses (the masses of the more
massive BH in the binary) was shown to have substructures beyond an unbroken
single power law, where two local peaks at $\sim$ 10 and 35 M⊙ are found.
Finally, the effective spin distribution has been inferred in Abbott et al.
(2021b) to have a mean centred at $\sim$ 0.06.
There are several evolutionary channels that have been proposed to date to
explain LVK observations. Recent studies suggest that most massive stars
will interact with a companion star during their lives (Sana et al., 2012; Moe &
Di Stefano, 2017). One of the more classical channels is therefore isolated
binary evolution, where populations of massive binaries that evolve in
relative isolation are considered. There are in turn several sub-channels for
isolated binary evolution that could lead to close double compact objects. One
of the most studied ones is the Roche lobe overflow (RLOF) scenario, where two
massive stars born in a wide orbit have sufficient space to evolve. In this
configuration one of the main ingredients dictating the evolution of the two
stars, their collapse into compact objects (COs), and their potential merger
is their radial evolution. If during the evolution one of the component stars
becomes larger than its Roche lobe radius, it will initialise a RLOF event.
These events lead to a redistribution of the binary total mass and
also to its partial expulsion, which translates into a loss of angular momentum
for the binary system. These two factors in turn will decrease the orbital
distance and potentially initialise other RLOF events (Paczynski, 1976;
Belczynski et al., 2002; Olejak et al., 2021; Mandel & Broekgaarden, 2021). On
top of that, in the case of an unstable RLOF, driven by a combination of the mass
ratio between the two binary components, the presence of a convective envelope in
the donor star, the metallicity (Z) and the stellar type (Pavlovskii et al., 2016;
Klencki et al., 2020, 2021; Olejak et al., 2021), the binary will undergo a common
envelope (CE) phase. In this case friction between the two stellar objects and
the envelope will further reduce the orbital angular momentum of the system. This is
extremely important, as the closer the stars are, the more likely they are to merge
within a Hubble time after their collapse into compact objects (Dominik et
al., 2012; Kruckow et al., 2016; Marchant et al., 2021). Stellar radii affect
the parameter space for RLOF interactions as well as the evolutionary stage of
the donor star, which is in turn important in order to determine the outcome
of the RLOF event itself (de Mink et al., 2008; Klencki et al., 2020, 2022;
Agrawal et al., 2020). Another alternative is the chemically homogeneous
evolution sub-channel, where massive binary stars in a close orbit do not
significantly expand due to the influence of efficient internal mixing. In
this scenario massive stars on their main sequence (MS) mix the helium of
their core out into the envelope and bring hydrogen from the envelope into the
core. As a result, the star burns almost all of its hydrogen, contracts at the
end of its MS, and does not significantly expand throughout the rest of its
life (Heger et al., 2000; Maeder & Meynet, 2000). Systems evolving this way
do not face any RLOF event and create heavy BH binaries at low orbital
separations that can merge within a Hubble time (Mandel & de Mink, 2016;
Marchant, Pablo et al., 2016; de Mink & Mandel, 2016), but they are quite rare
(Mandel & Broekgaarden, 2021).
The dynamical formation channel treats stars and binaries in dense stellar
environments, where gravitational interactions with other stars influence
their evolution. Sub-channels for dynamical formation are usually divided
based on the considered environment. In the young star cluster and the
globular cluster sub-channels, single BHs sink towards the centre of the
cluster through mass segregation (Spitzer, 1969) and either form a binary with
another BH or influence the evolution of already existing compact binaries.
Nuclear star clusters in the centre of galaxies, in the absence of
supermassive black holes, have higher escape velocities than globular clusters
(Miller & Lauburg, 2009; Antonini & Rasio, 2016) and therefore are a promising
nursery for the birth of BH-BH binaries. On top of that the influence of
three-body scattering and gaseous drag in active galactic nuclei could deeply
influence the birth and merger rates of BH-BH binaries (Bellovary et al.,
2016; Stone et al., 2016; Bartos et al., 2017; McKernan et al., 2018; Tagawa
et al., 2020). The presence of a supermassive black hole, instead, would make
the escape velocity of the cluster high enough to counter recoil kicks from
BH-BH mergers and therefore favour hierarchical mergers (Yang et al., 2019;
Secunda et al., 2020; Tagawa et al., 2021).
Another proposed dynamical formation scenario involves hierarchical triple
systems where, according to the Kozai-Lidov mechanism (Kozai, 1962; Lidov,
1962), the exchange of angular momentum between the inner binary and the orbit
of the outer companion periodically alters the eccentricity of the inner
binary, which could catalyse the merging process (e.g. Liu & Lai 2018).
Finally, a potential GW source channel involves primordial BH binaries: BHs of
any mass above the Planck mass, born from energy fluctuations during the early
stages of the Universe, whose mass spectrum could potentially be enhanced by
dark matter (Bird et al., 2016; Ali-Haïmoud et al., 2017; Chen & Huang, 2018).
There have already been some efforts to develop theoretical environments and
specific computational methods to infer population models that could combine
multiple formation channels (Ng et al., 2021; Wong et al., 2021; Zevin et al.,
2021). An important point to highlight is that at the current stage we cannot
draw any conclusion regarding the respective contribution of each evolutionary
channel for the observed LVK merger rates, since the considerable sources of
uncertainty in each model hinder the development of a clear picture to explain
the observational data (Belczynski et al., 2022a). Mandel & Broekgaarden
(2022) showed that estimates for CO mergers can vary by several orders of
magnitude not only between different evolutionary channels, but also within
the same channel when different model implementations and codes are used.
Stellar evolution influences stellar radius and, in turn, is influenced by it.
One of the most important factors that determine the radial evolution of a
star of a given mass is the metallicity (Xin et al., 2022). For instance, the
extension of convective eddies, as described by the Mixing-Length Theory (MLT),
was shown to depend on metallicity (Bonaca et al., 2012), which means that the
efficiency of stellar internal mixing also depends on it. Low metallicity
levels affect the opacity and the nuclear
burning processes inside the star. The lower the opacity, the more easily the
luminosity produced in the inner regions of a star will be able to escape. Low
opacity also means low radiative pressure, which means that the outer layers
reach hydrostatic equilibrium deeper in the star where the temperature and
pressure are higher (Burrows et al., 1993; Kesseli et al., 2019), but on the
other hand a low metallicity also means a suppression of strong winds.
Internal mixing also influences the stellar radius evolution, since the
transport of different elements affects both the mean molecular weight and the
radiative opacity of the stellar envelope. In general an enhanced internal
mixing during the main sequence reduces the mass threshold for a star to
remove its envelope through winds (Gilkis et al., 2021), thereby preventing a
considerable stellar expansion. The stellar radius is so susceptible to all
the factors involved in the internal mixing that for instance for a star which
undergoes helium burning in its core (i.e. a Core He Burning star or CHeB)
even the slightest changes in the helium/hydrogen gradients in its envelope
can enhance its radial expansion by several times (Schootemeijer et al., 2019;
Klencki et al., 2020; Farrell et al., 2021). Stellar rotation, whose velocity
is inversely proportional to the stellar radius, greatly alters both stellar
winds (Higgins & Vink, 2019) and internal mixing.
Many modern population synthesis models are based on the analytic formulae
from Hurley et al. (2000), which describe the evolution of stars as a function
of mass and stellar metallicity. Given all the relatively new observational
and theoretical data since the publication of these formulae, population
synthesis models need to implement all the information available from survey
missions and detailed evolutionary codes in order to be up to date. In their
original prescription these models could simulate very massive stars (initial
mass above 100 M⊙) in their CHeB phase with a radius as large as $10^{4}$ R⊙,
which is currently deemed physically unrealistic by the stellar evolution
community. Also the fact that these formulae were fitted from a set of
initial stellar masses below 50 M⊙ is a potential source of criticism, since
simulating the progenitors of very massive BHs in this framework requires
extrapolation, which could lead to artefacts. Recent studies have shown how
various codes (be they population synthesis or detailed evolutionary codes)
can simulate very different stellar expansions, to the point that, for a given
Zero-Age Main Sequence (ZAMS) mass $M_{\text{ZAMS}}$
and metallicity, two different codes could simulate stars that differ by even
more than 1000 R⊙ (Agrawal et al., 2020, 2022; Belczynski et al., 2022a). The
first population synthesis study that tackled the role of stellar expansion
during the giant phase for binary evolution was Fryer et al. (1999), which
analysed the uncertainties that could lead to different NS-NS merger rates and
showed how these rates can drastically vary as a function of stellar radii and
kick velocities.
In our study we examine the radial expansion of massive stars in the context
of GW progenitors from isolated binary evolution. We analyse how the maximum
stellar radius changes as a function of metallicity and ZAMS mass according to
Hurley et al. (2000) formulae and according to one dimensional (1D or
detailed) stellar evolution codes. We then develop four new prescriptions by
fitting evolutionary equations from detailed evolutionary simulations in order
to simulate the maximum stellar expansion of massive stars ($M_{\text{ZAMS}}$
> 18 M⊙) as a function of their ZAMS masses. We then adopt these new
prescriptions in our population synthesis pipeline to study the impact of
different radial expansion models on the estimates for double compact object
merger rate densities and BH-BH mass distribution for LVK observations.
## 2 Method
In the following section we describe five different codes and calculations
(either performed in this study or adopted from literature) that we use to
show how different prescriptions can alter the maximum radii of massive stars
and the estimates of double compact object merger rate densities and BH-BH
merger mass distributions.
In our study we use the StarTrack population synthesis code
(https://startrackworks.camk.edu.pl; Belczynski et al., 2002; Belczynski et al., 2008)
to simulate the formation and mergers of double compact objects. It allows us
to predict how a population of binary stars behaves for a wide array of initial
conditions and physical assumptions, but instead of re-computing their
evolution from first principles on the fly it relies on the evolutionary
formulae from Hurley et al. (2000). The currently implemented physics, star
formation history, Universe metallicity and its evolution with cosmic time are
described in Belczynski et al. (2020a), with two modifications described in
Sec. 2 of Olejak et al. (2020).
To constrain the maximum stellar radius ($R_{\rm MAX}$) within StarTrack
simulations we derive analytic formulae from four different sets of
simulations: (i) the ‘Bonn’ Optimized Stellar Tracks (BoOST) from Szécsi et
al. (2022), (ii) the Modules for Experiments in Stellar Astrophysics (MESA;
Paxton et al., 2011; Paxton et al., 2013, 2015, 2018, 2019) simulations from
Agrawal et al. (2020), and (iii) two different setups for our own MESA simulations.
Both BoOST and MESA simulations from Agrawal et al. (2020) were further
interpolated by the Method of Interpolation for Single Star Evolution
(METISSE; Agrawal et al., 2020) to create a smooth distribution of radius with
initial stellar masses. These prescriptions are meant to limit the maximum
radius that a star can reach throughout its whole lifetime as a function of
its ZAMS mass. It must be stressed that our prescriptions do not alter the
simulated stellar expansion in any way. We simply set a hard limit to how much
a star could grow in size as a function of its ZAMS mass.
Strong stellar winds at high metallicities (see e.g. Heger et al. 2003) and
mass exchange during RLOF events from the donor star to its companion can
alter whether a star will evolve into a NS or a BH. Müller et al. (2016) also
showed that for a given metallicity there might not be a hard maximum $M_{\rm
ZAMS}$ value beyond which a massive star will always collapse into a BH. We
nevertheless choose to fit and apply these $R_{\rm MAX}$ formulae for a
$M_{\rm ZAMS}$ range between 18 and 150 M⊙: (i) in order to affect only BHs and
massive NSs, and (ii) because stellar evolution is less well understood at these
masses.
### 2.1 Model 1 – StarTrack
We adopted the broken power-law initial mass function from Kroupa et al.
(1993) and Kroupa (2002) for primary stars (we define the primary star as the
more massive binary component at ZAMS), with the following exponents:
$\alpha_{1}$ = -1.3 for $M_{\rm ZAMS}$ $\in$ [0.08; 0.5] M⊙,
$\alpha_{2}$ = -2.2 for $M_{\rm ZAMS}$ $\in$ [0.5; 1] M⊙,
$\alpha_{3}$ = -2.3 for $M_{\rm ZAMS}$ $\in$ [1; 150] M⊙
The mass of the secondary star is the mass of the primary star (M1)
multiplied by a mass ratio factor q drawn from a uniform distribution q $\in$
[0.08/M1; 1] (for mass distributions see e.g. Shenar et al. 2022). To produce
the semi-major axis of the binary system we use the initial orbital period (P)
power law distribution log(P [days]) $\in$ [0.5; 5.5] with an exponent
$\alpha_{P}$ = -0.55 and a power law initial eccentricity distribution with an
exponent $\alpha_{e}$ = -0.42 within the range [0; 0.9], and invoke Kepler’s
third law. The initial orbital parameters are taken from Sana et al. (2012),
but we adopt the extrapolation of the orbital period distribution from de Mink
& Belczynski (2015), where the period range has been extended to log(P) = 5.5.
The stellar-wind prescription adopted in StarTrack is based on the
theoretical predictions of radiation driven mass loss from Vink et al. (2001),
with the Luminous Blue Variable mass loss from Belczynski et al. (2010), shown
below:
$\dot{M}_{\rm lbv}=f_{\rm lbv}\,10^{-4}\ [{\rm M}_{\odot}\,{\rm yr}^{-1}],\qquad f_{\rm lbv}=1.5$ (1)
To calculate the mass accretion from stellar winds we use the approximation
from Boffin & Jorissen (1988) of the Bondi & Hoyle (1944) model (for more
information see Belczynski et al. 2008). The adopted prescription for the
accretion onto a CO, both during a stable RLOF event and from stellar winds, is
based on the approximations from King et al. (2001). We use 50 per cent non-
conservative RLOF for non-degenerate accretors, with a fraction of accreted
donor mass $f_{\rm a}$ = 0.5 and the remaining mass (1 $-f_{\rm a}$) being
dispersed in the binary surroundings with a consequent loss of orbital
momentum (Belczynski et al., 2008). In order to determine the potential
instability of a RLOF event (and therefore a CE phase) we adopt the diagnostic
diagram described in Belczynski et al. (2008). To treat CE events we use the
Alpha-lambda prescription, with $\alpha_{CE}=1$. We assume that systems with a
Hertzsprung gap (HG) donor merge during the CE phase (Belczynski et al.,
2007). This is due to the fact that in StarTrack the HG phase begins at the
end of the MS and it represents a period of huge expansion for the star. At
the beginning of the RLOF phase these donors are often only partially expanded
post-MS stars and it is not therefore clear if they already have a well-
separated core-envelope structure. We adopt the weak pulsation pair-
instability supernovae and pair-instability supernovae formulation from
Woosley (2017) and Belczynski et al. (2016), which limits the mass spectrum of
single BHs to 50 M⊙. We also use a delayed supernova engine (Fryer et al.,
2012; Belczynski et al., 2012), which affects the birth mass of NSs and BHs so
that it allows for the formation of compact objects within the first mass gap
($\sim$ 2 - 5 M⊙), with the assumption that anything beyond 2.5 M⊙ is a BH and
therefore everything below is a NS (Horvath et al., 2020). In our simulations
we set the solar metallicity $Z_{\odot}$=0.02 (Grevesse & Sauval, 1998) and a
Maxwellian distribution of natal kicks with $\sigma=265$ km s$^{-1}$, lowered by
fallback during the core-collapse (Fryer et al., 2012) by multiplying it with
a fallback factor $f_{b}$ $\in$ (0; 1).
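As an illustration, the initial-condition sampling described above (a Kroupa broken power-law IMF for the primary and a uniform mass ratio for the secondary) can be sketched as follows. This is a minimal stand-alone sketch, not StarTrack code; `sample_primary` and `sample_secondary` are hypothetical helper names introduced here.

```python
import random

# Kroupa broken power-law IMF: dN/dM ∝ M^alpha_i on each mass segment (see text)
BREAKS = [0.08, 0.5, 1.0, 150.0]   # segment edges in Msun
ALPHAS = [-1.3, -2.2, -2.3]

def _segment_weights():
    """Piecewise coefficients (continuity at the breaks) and segment integrals."""
    coeff, coeffs = 1.0, []
    for i, a in enumerate(ALPHAS):
        if i > 0:
            coeff *= BREAKS[i] ** (ALPHAS[i - 1] - a)
        coeffs.append(coeff)
    return [c * (BREAKS[i + 1] ** (a + 1) - BREAKS[i] ** (a + 1)) / (a + 1)
            for i, (c, a) in enumerate(zip(coeffs, ALPHAS))]

def sample_primary(rng=random):
    """Draw a primary ZAMS mass [Msun] by inverse-CDF sampling per segment."""
    weights = _segment_weights()
    u = rng.random() * sum(weights)
    for i, (w, a) in enumerate(zip(weights, ALPHAS)):
        if u <= w or i == len(ALPHAS) - 1:
            lo, hi = BREAKS[i], BREAKS[i + 1]
            v = rng.random()
            return (lo ** (a + 1) + v * (hi ** (a + 1) - lo ** (a + 1))) ** (1.0 / (a + 1))
        u -= w

def sample_secondary(m1, rng=random):
    """Secondary mass: q uniform in [0.08/M1, 1], so that M2 >= 0.08 Msun."""
    q = rng.uniform(0.08 / m1, 1.0)
    return q * m1
```

Because the first segment carries most of the probability, the bulk of the sampled primaries fall below 0.5 M⊙, as expected for a bottom-heavy IMF.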
### 2.2 Model 2 – METISSE-BoOST
We use the METISSE interpolated models from the BoOST (Szécsi et al., 2022)
project. Following Agrawal et al. (2020), we use the ’dwarfA’ set with
metallicity $Z=0.00105$. The models have been computed using the Bonn Code
(Heger et al., 2000; Brott et al., 2011) from ZAMS till the end of CHeB.
However, the models of massive stars develop numerical instabilities that
inhibit computing their evolution after a certain point in 1D stellar
evolution codes (Agrawal et al., 2022). For such models, the remainder of
their evolution has been approximated in post-processing (with the so-called
"direct extension method" Szécsi et al. 2022). The models also include a small
amount of rotation (at $100$ km s-1). Further input parameters are described
in Szécsi et al. (2022). The models were fed into the METISSE interpolator to
create a set of 100 interpolated tracks uniformly distributed in mass between
9 and 100 M⊙. Although the initial metallicity differs from the one used in
Model 3 and Model 4, it still falls within the range at which SSE/StarTrack
predicts the highest radii (for more information see Sec. 3.1).
### 2.3 Model 3 – METISSE-MESA
These models were computed using version 11701 of MESA, from the pre-MS to the
end of core carbon burning ($X_{c}\leq 10^{-4}$), for metallicity $Z=0.00142$.
In computing these models, mass-loss rates from Glebbeek et al. (2009) and
mixing parameters from Choi et al. (2016) have been used. The
(2009) and mixing parameters from Choi et al. (2016) have been used. The
models do not include rotation. To account for eruptive mass loss at
super-Eddington luminosities, an additional mass loss from the super-Eddington
wind scheme of MESA, reduced by a factor of 10, has been employed whenever the
stellar luminosity exceeded 1.1 times the mass-weighted average of the
Eddington luminosity. The convection is modelled following the mixing length theory
(Böhm-Vitense, 1958), which is a 1D approximation of the radial motion of
blobs of gas over a radial distance that corresponds to the mixing length
$l_{m}$, after which they dissolve in their surroundings and either release or
absorb heat depending on whether the motion was upwards or downwards. In this
case the mixing length parameter implemented in MESA has been kept at $l_{m}$ =
1.82. To suppress the numerical instabilities in massive stars, the MLT++
scheme of MESA (Paxton et al., 2013) has also been used, which is a treatment for
convection that reduces the temperature gradient and therefore the
superadiabaticity in some radiative envelope layers of massive stars, so that
the simulation time-steps in MESA do not get extremely small. This in turn
means that the stellar effective temperature increases (Klencki et al., 2020)
without altering the actual stellar luminosity, which leads to an inhibition
of the stellar radial expansion. Other physical inputs are the same as
described in Agrawal et al. (2021). Following Agrawal et al. (2020), the MESA
models were interpolated with METISSE to create a distribution of stellar
tracks, similar to Model 2.
### 2.4 Models 4a and 4b – MESA
For these simulations we used version 15140 of MESA to develop a set of
stellar tracks from 18 to 100 M⊙ at ZAMS that could give the lowest possible
maximum stellar radii while still having realistic input physics. For Model 4a
these simulations were initialised with the MLT++ physics to artificially
reduce the stellar radii during the giant phase, while for Model 4b the MLT++
module was turned off. This is the only difference between the two models. The
input physics has been chosen to enhance the mixing of chemical components
inside the various shells of a star (core included), which in turn affects the
nuclear reactions and the radial evolution of the object. We adopted the
Ledoux criterion for the determination of the convective boundaries (Ledoux,
1947). The metallicity has been set at $Z$= 0.00142 to be consistent with
Model 3.
Core overshooting leads a star during its MS lifetime to mix the layers above
the convective core, thereby increasing the stellar MS lifetime through a
replenishment of the hydrogen reservoirs in the core, in turn raising the
nuclear timescale. In our models we adopt the step-overshooting approach,
whose governing parameter $\alpha_{ov}$ represents the fraction of the
pressure scale height $X_{\rm p}$ by which particles keep travelling, up to a
distance $l_{ov}$ beyond the convective core boundary (Higgins & Vink, 2019).
We set a value of $\alpha_{ov}=0.5$ for our simulations. This value can also
be above what we set if $\alpha_{ov}$
is used to represent not only the convective overshooting mechanism, but also
all the other physical effects that affect mixing and have not been taken into
account by the adopted stellar evolution models (like gravity waves on the
convective core). For instance, in Gilkis et al. (2021) values up to 1.2 are
explored for a ZAMS mass up to $\sim$ 200 M⊙.
Semiconvection is defined as the mixing in regions that are unstable according
to the Schwarzschild criterion for convection, but stable according to the
Ledoux one. This
means semiconvective zones are defined as regions where the molecular
composition gradient is higher than the temperature gradient, which is in turn
higher than the adiabatic one. As described in Paxton et al. (2013), these
regions experience mixing through a time-dependent diffusive process. After
the end of core hydrogen burning, the following contraction phase of massive
stars leads them to ignite the hydrogen in a shell. Due to this reason the
deep hydrogen envelope forms semiconvective regions inside of it. In this
context the diffusion coefficient is directly proportional to the factor
$\alpha_{sc}$, which is a dimensionless parameter that describes the mixing
efficiency. Schootemeijer et al. (2019) has shown that for non-rotating models
the value of $\alpha_{sc}$ could be realistically up to 300, which we chose
for our simulations. This impact on internal mixing is however reduced in case
of a very efficient core overshooting, since if a considerable part of the
stellar hydrogen reservoirs is dragged inside the stellar core during the MS
due to efficient mixing, it will be harder for semiconvective regions of
localised hydrogen burning to form during the core He burning phase
(Schootemeijer et al., 2019). With very low or null values of overshooting,
semiconvective regions can already form during the MS phase. As expected, our
checks showed that semiconvection did not play an important role in the
simulations, given the adopted $\alpha_{ov}$.
The wind prescription that was adopted in our MESA simulations is the so-
called Dutch wind prescription (Glebbeek et al., 2009), which uses different
wind models depending on the effective temperature $T_{\rm eff}$ and the
surface hydrogen abundance $H_{\rm sur}$ (in terms of the fraction of hydrogen
on the stellar surface). As shown in Table 1, if $T_{\rm eff}<10^{4}$ K the de
Jager et al. (1988) model is used regardless of the hydrogen surface
abundance. When $T_{\rm eff}>10^{4}$ K, instead, the Dutch prescription either
adopts the Nugis & Lamers (2000) model ($H_{\rm sur}<0.4$), or the Vink et al.
(2001) model ($H_{\rm sur}>0.4$).
Table 1: Dutch stellar-wind prescription in MESA.
 | $T_{\rm eff}<10^{4}$ K | $T_{\rm eff}\geq 10^{4}$ K
---|---|---
any $H_{\rm sur}$ | de Jager et al. (1988) | –
$H_{\rm sur}<0.4$ | – | Nugis & Lamers (2000)
$H_{\rm sur}\geq 0.4$ | – | Vink et al. (2001)
The de Jager et al. (1988) winds are described by the following equation:
$\log(\dot{M})=1.769\log\left(\frac{L}{{\rm
L}_{\odot}}\right)-1.676\log(T_{\rm eff})-8.158$ (2)
where $\dot{M}$ is in ${\rm M}_{\odot}\,{\rm yr}^{-1}$. The wind mass loss for
Wolf-Rayet stars follows instead the Nugis & Lamers (2000) model:
$\dot{M}=10^{-11}\left(\frac{L}{{\rm L}_{\odot}}\right)^{1.29}Y^{1.7}\sqrt{Z}$
(3)
where Y is the He surface abundance and Z the metallicity.
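Putting Table 1 and Eqs. (2)–(3) together, the Dutch switching logic can be sketched as a single function. This is an illustrative stand-alone sketch, not MESA code; the Vink et al. (2001) branch is not reproduced in the text above, so it is deliberately left unimplemented here.

```python
import math

def dutch_wind_mdot(lum_lsun, teff_K, h_surface, Z, Y):
    """Sketch of the Dutch wind switch (Table 1); returns mass-loss rate [Msun/yr].

    The Vink et al. (2001) branch (hot, H-rich surface) is metallicity-dependent
    and not given in the text, so it raises NotImplementedError here.
    """
    if teff_K < 1.0e4:
        # de Jager et al. (1988), Eq. (2), regardless of surface hydrogen
        log_mdot = 1.769 * math.log10(lum_lsun) - 1.676 * math.log10(teff_K) - 8.158
        return 10.0 ** log_mdot
    if h_surface < 0.4:
        # Nugis & Lamers (2000), Eq. (3), for Wolf-Rayet stars
        return 1.0e-11 * lum_lsun ** 1.29 * Y ** 1.7 * math.sqrt(Z)
    raise NotImplementedError("Vink et al. (2001) hot, H-rich branch not reproduced here")
```

For a cool supergiant with $L = 10^{5}$ L⊙ and $T_{\rm eff} = 5000$ K, the de Jager branch gives $\log\dot{M} \approx -5.51$, i.e. a few times $10^{-6}$ M⊙ yr$^{-1}$.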
## 3 Results
### 3.1 Dependence of the maximum radius on mass and metallicity
In Figure 1 we show the dependence of the maximum stellar radius on
$M_{\text{ZAMS}}$ and metallicity, as simulated with StarTrack. Each line
represents a different ZAMS mass: 10, 20, 30, 40, 50 and 100 M⊙. The vertical
grey lines show the initial metallicities of the simulations from Pols et al.
(1998), upon which the models from Hurley et al. (2000) were built: 0.0001,
0.0003, 0.001, 0.004, 0.01, 0.02, 0.03.
Figure 1: Maximum radii of massive stars as a function of their initial
metallicity from standard StarTrack simulations. Each line corresponds to a
different ZAMS mass between 10 and 100 M⊙. The vertical lines correspond to
the original stellar set of models from Pols et al. 1998 from which the
analytic Hurley et al. 2000 formulae were fitted. Artefacts due to
extrapolation and interpolation of the original evolutionary equations could
be the cause for the absence of a quasi-linear relation between $R_{\rm MAX}$
and metallicity.
Figure 2: Maximum radii of massive stars as a function of their initial
metallicity from our Model 4b MESA simulations. Each line
corresponds to a different ZAMS mass between 10 and 100 M⊙. With the exception
of metallicity-driven wind mass loss, $R_{\rm MAX}$ increases almost linearly
as a function of metallicity due to increased opacity levels.
In general the stellar radius depends primarily on the ZAMS mass and
secondarily on the initial metallicity of the object. On the other hand,
stellar winds are proportional to metallicity and therefore may significantly
reduce the mass and the radius of a star. In the Hurley et al. (2000) formulae,
as implemented in
the StarTrack code, the maximum radial expansion always peaks at around $Z$ =
0.002. It must be pointed out that the results around this metallicity are
just interpolations from the Hurley et al. (2000) fits of the stellar set of
models from Pols et al. (1998). The maximum radius peak could therefore be
only an artefact. Also, the evolution of any star with a ZAMS mass beyond 50
M⊙ is an extrapolation of the formulae, since they were only fit for stars up
to that initial mass. Likewise, the stellar set of models from Pols et al.
(1998) was computed for ZAMS masses between 0.5 and 50 M⊙ with a spacing of
$\sim$ 0.1 in log$M$, which means that for any other initial mass within that
range the stellar evolution is also interpolated. Such an interpretation could
be strengthened by our Model 4b MESA simulations (see Sec. 2.4), shown in
Figure 2, which, we recall, is the model that does not adopt the MLT++ scheme
to treat superadiabatic radiative envelope layers of massive stars. The line for
100 M⊙ stops at a metallicity of 0.004 because beyond that point with our MESA
setup the simulations reach a timestep limit that makes it impossible to get
past the main sequence. In these MESA calculations the maximum stellar radius
almost always increases with metallicity for stars below 40 M⊙. For the most
massive stellar tracks the maximum radius increases less steeply as a function
of metallicity if compared to lower mass stars. This is a result of an
increased internal mixing for massive stars that reduces the mass of the
H-rich envelope, limiting the radial expansion beyond the main sequence.
Additionally, for larger metallicities stellar winds become strong enough to
help effectively remove the H-rich envelope and further reduce stellar
expansion (see the large decrease in radius for the $M_{\text{ZAMS}}$ = 40 M⊙
and $M_{\text{ZAMS}}$ = 50 M⊙ models around $Z$ = 0.02). The peaks at
$R_{\text{MAX}}$ > 3000 R⊙ for the 30 M⊙ and 40 M⊙ stars are due to numerical
noise in the simulation of loosely bound, radiation-pressure-dominated
envelopes, not to actual physical effects. This was shown in Sec. 7.2 of
Paxton et al. (2013) to be a well-known issue for detailed simulations of
massive and high-metallicity post-MS stars. It is important to highlight that,
in contrast to our StarTrack simulations, our detailed simulations do not
show $R_{\rm MAX}$ peaking at $Z$ = 0.002 (or at any other initial metallicity)
for every $M_{\rm ZAMS}$.
### 3.2 Model fits
For all the models we fitted a logarithmic behaviour for $R_{\text{MAX}}$ [R⊙]
as a function of $M_{\text{ZAMS}}$ [M⊙]. The following formulae are labelled
as log$R_{\text{MAX,1}}$, log$R_{\text{MAX,2}}$, log$R_{\text{MAX,3}}$,
log$R_{\text{MAX,4a}}$ and log$R_{\text{MAX,4b}}$ to indicate which model they
were fit from. For $M_{\text{ZAMS}}$<18 M⊙ the maximum radius has not been
constrained by the following formulae and the stars evolve according to our
standard evolutionary prescription. For Model 1 we retrieved the following
logarithmic equation:
$\displaystyle{\rm log}R_{\rm MAX,1}=0.1\times{\rm
log}(3177.1M_{\text{ZAMS}})+0.006M_{\text{ZAMS}}+2.8$ (4)
It must be stressed that we do not use Model 1 to constrain the maximum
stellar radius, since this already represents our default prescription. This
formula was added as a further comparison with the other models.
For Model 2, we fitted the following logarithmic relation for
$R_{\text{MAX}}$:
$\displaystyle{\rm log}R_{\text{MAX,2}}=1.358\times{\rm log}(2.712\times
10^{-13}M_{\text{ZAMS}})$ (5) $\displaystyle-6.536\times
10^{-3}M_{\text{ZAMS}}+18.443$
For ZAMS masses higher than 100 M⊙ the maximum radius is set at
$R_{\text{MAX,2}}$(100 ${\rm M}_{\odot})\sim 2752$ R⊙.
Considering the behaviour of the METISSE-MESA simulations in Model 3, we
divided the simulation results into $M_{\text{ZAMS}}$ bins and fitted each
segment separately. The results of our fits can be found in Table 2. Every
star with a ZAMS mass beyond $100$ M⊙ is by default set at $R_{\rm MAX,3}$(100
${\rm M}_{\odot})\sim 631$ R⊙.
Table 2: Fitted METISSE-MESA formulae.
$M_{\text{ZAMS}}$ [M⊙] | log$R_{\text{MAX,3}}$ =
---|---
18-20 | log$R_{\text{MAX,2}}$
20-40 | log$R_{\text{MAX,2}}$(20 M⊙)+0.0002$M_{\text{ZAMS}}$
40-45 | (-9$M_{\text{ZAMS}}$+510)/50
45-52 | (-$M_{\text{ZAMS}}$+321)/20
52-71 | (3$M_{\text{ZAMS}}$+47)/100
71-100 | (2$M_{\text{ZAMS}}$+612)/290
$\geq$100 | 2.8
$R_{\text{MAX}}$ for our MESA simulations in Model 4a is described as:
$\displaystyle{\rm log}R_{\text{MAX,4a}}=-11.591\times{\rm
log}(2.434M_{\text{ZAMS}})+0.329M_{\text{ZAMS}}$ (6)
$\displaystyle-0.003M_{\text{ZAMS}}^{2}+1.197M_{\text{ZAMS}}^{3}\times
10^{-5}+17.037$
Since the relation is only valid for ZAMS masses between 18 and 100 M⊙, for
stars beyond the upper limit we set
$R_{\text{MAX,4a}}$($M_{\text{ZAMS}}>100$ ${\rm M}_{\odot})$ =
$R_{\text{MAX,4a}}$(100 ${\rm M}_{\odot})\sim 322$ R⊙.
Finally, for our Model 4b we fitted this behaviour for ZAMS masses between 18
and 100 M⊙:
$\displaystyle{\rm log}R_{\text{MAX,4b}}=1.143\times{\rm log}(2.767\times
10^{2}M_{\text{ZAMS}})$ (7) $\displaystyle-5.825\times
10^{-3}M_{\text{ZAMS}}-1.129$
For more massive stars $R_{\text{MAX,4b}}$($M_{\text{ZAMS}}>100$ ${\rm
M}_{\odot})$ = $R_{\text{MAX,4b}}$(100 ${\rm M}_{\odot})\sim 2332$ R⊙. Although
the two models were taken from different evolutionary codes, Model 2 and Model
4b predict similar logarithmic behaviours of the maximum expansion as a
function of $M_{\rm ZAMS}$. For $M_{\text{ZAMS}}>100$ ${\rm M}_{\odot}$,
$R_{\text{MAX,2}}$ and $R_{\text{MAX,4b}}$ differ by a factor of $\sim$1.2.
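For reference, the fitted relations of Eqs. (4), (5), (6) and (7) can be evaluated directly. The sketch below is a hypothetical helper, not part of StarTrack: it transcribes the printed coefficients verbatim (base-10 logarithms) and clamps Models 2, 4a and 4b at their 100 M⊙ values as described above, applying no cap below 18 M⊙.

```python
import math

def log_rmax1(m):
    # Eq. (4): Hurley et al. (2000) / StarTrack fit, shown for comparison only
    return 0.1 * math.log10(3177.1 * m) + 0.006 * m + 2.8

def log_rmax2(m):
    # Eq. (5): METISSE-BoOST fit, clamped at its 100 Msun value
    m = min(m, 100.0)
    return 1.358 * math.log10(2.712e-13 * m) - 6.536e-3 * m + 18.443

def log_rmax4a(m):
    # Eq. (6): MESA fit with MLT++; coefficients transcribed verbatim
    m = min(m, 100.0)
    return (-11.591 * math.log10(2.434 * m) + 0.329 * m
            - 0.003 * m ** 2 + 1.197e-5 * m ** 3 + 17.037)

def log_rmax4b(m):
    # Eq. (7): MESA fit without MLT++, clamped at its 100 Msun value
    m = min(m, 100.0)
    return 1.143 * math.log10(2.767e2 * m) - 5.825e-3 * m - 1.129

def rmax_cap(m_zams, model="4b"):
    """Maximum-radius cap in Rsun; no cap is applied below 18 Msun (see text)."""
    if m_zams < 18.0:
        return None
    f = {"1": log_rmax1, "2": log_rmax2, "4a": log_rmax4a, "4b": log_rmax4b}[model]
    return 10.0 ** f(m_zams)
```

Evaluating the clamped fits at 100 M⊙ reproduces the asymptotic values quoted above for Models 2 and 4b ($\sim$2752 R⊙ and $\sim$2330 R⊙, respectively).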
In Figure 3 we plotted the maximum radius that stars can reach during their
whole lifetime as a function of their ZAMS masses. The solid lines represent
log$R_{\text{MAX,1}}$, log$R_{\text{MAX,2}}$, log$R_{\text{MAX,3}}$,
log$R_{\text{MAX,4a}}$ and log$R_{\text{MAX,4b}}$ as a function of $M_{\rm
ZAMS}$, while the dots are the data points that have been used to fit the
logarithmic equations (they are the same colour as their respective model
line). If the radial evolution of stars with $M_{\text{ZAMS}}\geq$ 18 M⊙ is
simulated with our standard Model 1 prescription, it can reach a maximum value
between 2$\times 10^{3}$ and 3.6$\times 10^{4}$ R⊙. The differences in terms
of $R_{\rm MAX}$ depend on $M_{\rm ZAMS}$ and they increase with increasing
initial masses. For low initial masses ($M_{\rm ZAMS}\lesssim$ 40 M⊙) the
detailed models give maximum radii smaller by a factor of $\sim$2 (Model 2),
$\sim$3 (Model 3), $\sim$5 (Model 4a) and $\sim$3 (Model 4b) than the ones
from Hurley et al. (2000) formulae (Model 1). For high initial masses ($M_{\rm
ZAMS}$ $\gtrsim$ 100 M⊙) the detailed models give maximum radii smaller by a
factor of $\sim$7 (Model 2), $\sim$30 (Model 3), $\sim$60 (Model 4a), $\sim$8
(Model 4b). It must be highlighted that Model 2 and Model 4b, which were built
from different evolutionary codes (BoOST and MESA), without any prior
calibration to make them produce similar results, predict similar trends of
maximum radial evolution as a function of ZAMS mass.
Figure 3: Maximum stellar radii as a function of the ZAMS mass for each
presented model. We show the maximum stellar radii obtained with the Hurley et
al. 2000 rapid evolutionary formulae used in many codes (e.g. StarTrack,
COMPAS, MOCCA), and the ones obtained from Models 2, 3, 4a and 4b. The dots
are the data points from the estimates from StarTrack (Model 1) or detailed
calculations (Model 2, 3, 4a, 4b). They are the same colour as the lines
representing the $R_{\rm MAX}$ prescriptions. As a reference we show the
radius of KY Cygni ($\sim$ 1500 R⊙, for $M_{\rm ZAMS}$ = 30 M⊙) and of Small
Magellanic Cloud Dachs 1-4 ($\sim$ 1300 R⊙, for $M_{\rm ZAMS}$ = 32 M⊙; see Sec.
3.2 for details). Maximum stellar radii for the same $M_{\rm ZAMS}$ may differ
by more than one order of magnitude depending on which code and input physics
is used.
As a sanity check we also compared the $R_{\text{MAX}}$ in our models with all
the Red, Yellow and Blue Supergiants (respectively RSGs, YSGs, and BSGs) in
the Small Magellanic Cloud (SMC, Figure 5), with a particular focus on the
coolest and most luminous supergiants. We use the same compilation of objects
as Gilkis et al. (2021). The sample includes all known cool supergiants with
estimated luminosities of $\log(L/{\rm L}_{\odot})>4.7$ and effective
temperatures $T_{\rm eff}<12\,500$ K, which probes
the horizontal edge of the Humphreys-Davidson limit (Humphreys & Davidson,
1979). The catalogue compiles RSGs and YSGs from Davies et al. (2018), Neugent
et al. (2010), and BSGs that are found from Gaia DR2 (Gaia Collaboration,
2018) using colour-$T_{\rm eff}$ calibrations from Evans & Howarth (2003) or
derived properties from Dufton et al. (2000). The sample was cleaned of
non-SMC contaminants using proper motion and parallax criteria. The total list
comprises 179 stars: 140 RSGs, 7 YSGs, and 32 cool BSGs. We refer to Gilkis et
al. (2021) for more details.
To retrieve the radial extension of the supergiants in our sample, we used the
Stefan–Boltzmann law:
$\frac{L}{L_{\odot}}=\left(\frac{T}{T_{\odot}}\right)^{4}\left(\frac{R}{R_{\odot}}\right)^{2}$
(8)
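As a quick numerical check, Eq. 8 can be inverted to retrieve a stellar radius from the quoted luminosity and effective temperature. The sketch below does this for Dachs SMC 1-4; note that the solar effective temperature value $T_{\odot}\approx 5772$ K is our assumption here, as the text does not state the value used:

```python
import math

T_SUN_K = 5772.0  # assumed solar effective temperature (IAU nominal value)

def radius_from_L_T(log_L_over_Lsun, log_Teff_K):
    """Invert the Stefan-Boltzmann law (Eq. 8) for R / R_sun."""
    L = 10.0 ** log_L_over_Lsun               # L / L_sun
    T_ratio = (10.0 ** log_Teff_K) / T_SUN_K  # T / T_sun
    return math.sqrt(L) / T_ratio ** 2

# Dachs SMC 1-4: log(L/L_sun) = 5.55, log(T_eff [K]) = 3.59
r = radius_from_L_T(5.55, 3.59)
print(f"R = {r:.0f} R_sun")  # ~1300 R_sun, consistent with the quoted radius
```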
The largest star known in the SMC is Dachs SMC 1-4, a RSG with a radius of
$\sim 1300$ R⊙, a luminosity of log($L$/L⊙) = 5.55 and log($T_{\rm eff}$ [K])
= 3.59. With the MESA setup from Model 4b and an initial metallicity of $Z$ =
0.008, we tested different $M_{\rm ZAMS}$ values to check with which
evolutionary track we could get the same position of Dachs SMC 1-4 in the H-R
diagram. We found an ideal candidate in a star of $M_{\rm ZAMS}$ = 32 M⊙, with
log($T_{\rm eff}$) $\sim$ 3.60, log($L/{\rm L}_{\odot}$) $\sim$ 5.55 and $R$
$\sim$ 1260 R⊙. The grey dashed lines represent where, for a specific
luminosity and effective temperature, a star has a radius of 10, 100, 500 and
1000 R⊙. KY Cygni, one of the largest stars in the Milky Way, has also been
used for this comparison, since its expansion exceeds even that of Dachs
SMC 1-4, with an estimated radius of $\sim$1500 R⊙ (Levesque et al.,
2005), with log($L/{\rm L}_{\odot}$) = 5.43 and log($T_{\rm eff}$) = 3.54
(Dorn-Wallenstein et al., 2020). As in the case of Dachs SMC 1-4, we used
the same setup to find an evolutionary track that could match KY Cygni in the
H-R diagram, but at a metallicity of 0.0142 (solar metallicity). With
log($T_{\rm eff}$) = 3.53, log($L/{\rm L}_{\odot}$) = 5.44 and $R$ = 1500.44
R⊙ we found an optimal candidate in a star of $M_{\rm ZAMS}$ = 30 M⊙. The HR
position of both stars and their respective simulated tracks are shown in Fig.
4. The KY Cygni and the Dachs SMC 1-4 radii as a function of their estimated
ZAMS masses are shown in Fig. 3.
Figure 4: H-R diagram of two stars as simulated by our MESA setup from Model
4b. As a reference we show the position of Dachs SMC 1-4 and KY Cygni in terms
of luminosity and effective temperature.
As a reference for the $R_{\rm MAX}$ prescriptions, we show the estimated
radii of Dachs SMC 1-4 and KY Cygni from the literature, in order to test
whether our models could underestimate the real radial expansion of stars. This
comparison shows that Model 1, Model 2 and Model 4b can reproduce radii that
are high enough to be compatible with the given observational constraints,
while Model 3 and Model 4a can reach at most 1050 R⊙. It must be pointed out
that our $R_{\text{MAX}}$ prescriptions are functions of $M_{\rm ZAMS}$ only:
they do not take into account any other factor, such as metallicity or the
assumed mixing length. Estimates of RSG radii must also be taken with a
grain of salt, given the margins of error on their luminosities and effective
temperatures.
Figure 5: Estimated radii of RSGs, YSGs, and BSGs in the Small Magellanic
Cloud. The largest one is Dachs SMC 1-4 at $\sim$1300 R⊙, as shown by the red
arrow.
### 3.3 Estimated LVK local merger rate densities
In Table 3 we compare the merger rate densities as reported by the LVK
collaboration for a fiducial redshift of $z\sim 0.2$ with the ones we
calculated for the same redshift range.
Model 2, whose $R_{\rm MAX}$ as a function of $M_{\rm ZAMS}$ is between 2 and
7 times smaller than what is obtained from Model 1, differs by only 3 per cent
from the standard prescription in terms of BH-BH merger rate density. This is
because the rate of RLOF events for BH-BH merger progenitors remains
unaltered from Model 1, since the simulated stars
in BH-BH progenitors overflow their Roche lobe before reaching their
respective $R_{\text{MAX}}$ (for more details see Sec. 3.5).
Model 3, which predicts maximum radii between 3 and 30 times smaller than
Model 1, shows a decrease of $\sim$30 per cent in the BH-BH merger rate
density, an increase of $\sim$25 per cent in the BH-NS merger rate density,
and the same NS-NS merger rate density as Model 1 (within the numerical
variability of population synthesis estimations). This is because
with this model more stars in BH-BH merger progenitors reach their $R_{\rm
MAX}$ before overfilling their Roche lobe. For BH-NS progenitors, instead,
many RLOF events that would bring a close binary to an early merger are not
initialised in the first place, which leads to a higher survival rate for
binaries and in turn to an increase in the number of BH-NS mergers.
Model 4a, which gives the smallest values of $R_{\rm MAX}$ for ZAMS masses
below $\sim 40$ M⊙ and above $\sim 85$ M⊙, shows a decrease from Model 1 in
the merger rate densities for all the channels. The BH-BH merger rate density
is 18.7 Gpc$^{-3}$ yr$^{-1}$, which is the lowest among our models. In this
scenario the BH-BH merger rate density shows a decrease by a factor of $\sim$3.5. The NS-NS
and BH-NS merger rate densities are respectively roughly 40 and 35 per cent
lower than the ones from Model 1. This shows that in this model the reduced
rate of RLOF initialisation decreases the number of close double compact
objects that merge within a Hubble time, rather than forming more close
double compact objects through a reduced number of early mergers.
Finally, with the Model 4b prescription, we predict merger rate densities for
BH-BH and NS-NS binaries that are fully compatible with both Model 1 and Model
2. The only exception is the BH-NS merger rate density, which is the highest
among the models.
With these results we show that our standard prescription (Model 1) is as
reliable in terms of double compact object merger rate density estimates from
isolated binary evolution as other models that predict a much more restrictive
radial evolution. This is supported by the fact that, even though
$R_{\text{MAX}}$ was reduced with respect to Model 1, sometimes by more than
one order of magnitude, the lowest BH-BH merger rate density estimate
(Model 4a) was only roughly three times lower than the one from Model 1. This conclusion is also
strengthened by the fact that, as previously mentioned in Sec. 1, merger rate
density estimations have a much wider margin of variability than what has been
shown in this study.
Two other important results to highlight are that (i) most of the RLOF events
that lead to BH-BH mergers happen below the radial maximum described by Model
4b; and (ii) at a redshift of 0.2, all our models besides Model 4a estimate a
BH-BH merger rate density that lies beyond the reported LVK 90 per cent
credibility range. On the other hand, considering the uncertainties in
population synthesis studies, slight variations in the input physics for
Model 1, Model 2, Model 3 and Model 4b in the isolated binary channel could
make their BH-BH merger rate estimations fit within the LVK credibility range
(see e.g. Dominik
et al. 2012).
Table 3: Comparison between the LIGO/Virgo/KAGRA local ($z\sim$ 0.2) merger rate densities (Abbott et al., 2021a, b) [Gpc$^{-3}$ yr$^{-1}$] and the ones calculated with our models within a redshift of 0.2. In bold we mark the model for which the BH-BH, BH-NS and NS-NS merger rate densities are all within the LVK 90 per cent margin of credibility.

Model | BH-BH | BH-NS | NS-NS
---|---|---|---
LVK | 17.9 - 44 | 7.8 - 140 | 10 - 1700
Model 1 | 68.1 | 15.6 | 158.9
Model 2 | 65.8 | 14.9 | 162.8
Model 3 | 46.6 | 19.7 | 157.7
**Model 4a** | **18.7** | **9.5** | **100.4**
Model 4b | 65.6 | 22.9 | 160.8
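The comparison in Table 3 can be checked mechanically. The sketch below (with the table values hard-coded, and treating the interval bounds as inclusive, an assumption on our part) flags which models fall inside all three LVK 90 per cent credible ranges:

```python
# LVK 90 per cent credible ranges [Gpc^-3 yr^-1], from Table 3
lvk = {"BH-BH": (17.9, 44.0), "BH-NS": (7.8, 140.0), "NS-NS": (10.0, 1700.0)}

# model merger rate densities from Table 3, same units
models = {
    "Model 1":  {"BH-BH": 68.1, "BH-NS": 15.6, "NS-NS": 158.9},
    "Model 2":  {"BH-BH": 65.8, "BH-NS": 14.9, "NS-NS": 162.8},
    "Model 3":  {"BH-BH": 46.6, "BH-NS": 19.7, "NS-NS": 157.7},
    "Model 4a": {"BH-BH": 18.7, "BH-NS": 9.5,  "NS-NS": 100.4},
    "Model 4b": {"BH-BH": 65.6, "BH-NS": 22.9, "NS-NS": 160.8},
}

def within_lvk(rates):
    """True if every channel's rate lies inside the LVK credible range."""
    return all(lo <= rates[ch] <= hi for ch, (lo, hi) in lvk.items())

compatible = [name for name, rates in models.items() if within_lvk(rates)]
print(compatible)  # only Model 4a lies inside every range
```

All other models exceed the BH-BH upper bound of 44 Gpc$^{-3}$ yr$^{-1}$, which is why only Model 4a is marked in the table.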
### 3.4 Estimated BH-BH mass distribution
Fig. 6 shows our estimated merger rate density of BH-BH mergers within a
redshift $z$= 2 as a function of the total binary mass, the mass of the
primary/more massive BH, and the mass of the secondary/less massive BH for all
the models described in Sec. 2.
Figure 6: BH-BH mass distribution for mergers within redshift $z$= 2. The plot
on the top represents the merger rate density as a function of the total BH-BH
mass. The one in the middle shows the mass distribution of the primary (i.e.
the more massive) BH, while the plot on the bottom shows the mass distribution
of the secondary BH. As in Fig. 3, each model is drawn with a different
line.
Model 2 and Model 4b do not alter the estimates of the mass distribution of
BH-BH mergers within redshift z = 2 from our standard setup (Model 1), despite
the $R_{\text{MAX}}$ decrease. Model 3 and Model 4a, which are the ones with
the smallest $R_{\rm MAX}$, still show mass distributions peaked at 19 M⊙
(total mass), 15 M⊙ (primary mass) and 4 M⊙ (secondary mass). Our estimates
show that the predictions from the Model 3 and Model 4a $R_{\rm MAX}$
prescriptions diverge the most (from Model 1 and from each other) for a total
BH mass higher than 50 M⊙. This is expected since, as is noticeable in Fig. 3,
the differences between the models in terms of $R_{\rm MAX}$ grow as a
function of $M_{\rm ZAMS}$. None of our
models are therefore in agreement with the reported LVK BH-BH mass
distribution in Abbott et al. (2021a), where two peaks are observed at primary
masses of $\sim$10 M⊙ and $\sim$35 M⊙.
Considering that beyond a total BH mass of 50 M⊙ the BH-BH merger rate density
differs by roughly one order of magnitude between Models 1, 2 and 4b, and
Models 3 and 4a, we show in Table 4 how the same estimates from Table 3 can
vary for the BH-BH channel if only the mergers with a total mass > 50 M⊙ are
considered.
Table 4: Estimated BH-BH merger rate densities [Gpc$^{-3}$ yr$^{-1}$] for total masses beyond 50 M⊙ for all our models.

redshift | Model 1 | Model 2 | Model 3 | Model 4a | Model 4b
---|---|---|---|---|---
< 0.2 | 2.8 | 2.35 | 0.06 | 0.13 | 2.21
< 2 | 16.3 | 13.2 | 0.3 | 1.1 | 12.8
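The factor and percentage comparisons discussed around Table 4 follow directly from its entries; a minimal sketch with the table values hard-coded:

```python
# BH-BH merger rate densities [Gpc^-3 yr^-1] for total masses > 50 M_sun (Table 4)
rates = {
    "z<0.2": {"Model 1": 2.8,  "Model 2": 2.35, "Model 3": 0.06,
              "Model 4a": 0.13, "Model 4b": 2.21},
    "z<2":   {"Model 1": 16.3, "Model 2": 13.2, "Model 3": 0.3,
              "Model 4a": 1.1,  "Model 4b": 12.8},
}

for z, r in rates.items():
    drop_2  = 100 * (1 - r["Model 2"] / r["Model 1"])  # per cent decrease, Model 2
    factor3 = r["Model 1"] / r["Model 3"]              # suppression factor, Model 3
    factor4a = r["Model 3"] and r["Model 4a"] / r["Model 3"]  # Model 4a vs Model 3
    print(f"{z}: Model 2 drop {drop_2:.0f}%, "
          f"Model 3 suppressed by {factor3:.0f}x, Model 4a/Model 3 = {factor4a:.1f}")
```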
For this mass range there is a decrease of $\sim$ 16 per cent from Model 1 to
Models 2 and 4b for both z$\sim$0.2 and z$\sim$2, since these BH-BH
progenitors are initially very massive stars in the $M_{\rm ZAMS}$ range
where the predicted $R_{\rm MAX}$ differs the most between the detailed
models and Model 1. Model 3 predicts a BH-BH merger rate density that is
between $\sim$ 45 (z$\sim$0.2) and $\sim$ 54 (z$\sim$2) times smaller than
what is predicted with Model 1, and 2 times smaller than what we estimate with
Model 4a.
### 3.5 CE events and nature of the stellar envelope for BH-BH merger
progenitors
In this section we take an in-depth look at the nature of donor stars in CE
events that lead to the formation of BH-BH mergers in our StarTrack
simulations.
According to Klencki et al. (2020, 2021) and Marchant et al. (2021), CE events
initiated by stars without a deep outer convective envelope lead to stellar
mergers rather than successful CE ejections. From this starting point we used
Model 4b to check the nature of the stellar envelope of massive stars. In our
methodology we define a deep convective envelope as an envelope that has at
least 10 per cent of its mass in the outer convective zone (if any). Note that
our MESA simulations are limited to the 0.1 Z⊙ metallicity, but the radius
above which a star develops an outer convective envelope may depend slightly
on metallicity (Klencki et al., 2020). As a comparison we also did the same
simulations with our StarTrack setup for metallicities of Z$=0.1$ Z⊙ and
Z$=0.01$ Z⊙ for a $M_{\rm ZAMS}$ range up to 150 M⊙ (due to a potential
extrapolation artefact in StarTrack, we report a metallicity-dependent maximum
mass beyond which no donor star, according to our default CE survival
prescription, survives a CE phase: for Z$=0.1$ Z⊙ this threshold mass is
$\sim$ 95 M⊙, while for Z$=0.01$ Z⊙ it increases to $\sim$ 135 M⊙).
For StarTrack the CE survival conditions are described in Belczynski et al.
(2008). We define the Interaction Radius ($R_{\text{Int}}$) and Interaction
Mass ($M_{\text{Int}}$) as the stellar radius and mass at which a donor star
in a binary overflows its Roche lobe and initiates a mass transfer event. Both
of them are in solar units. We show the results in Fig. 7, where the 2D
histograms in the background represent the $R_{\text{Int}}$ and either
$M_{\text{Int}}$ (left) or $M_{\text{ZAMS}}$ (right) distribution for CE
events that lead binaries to BH-BH mergers within a Hubble time, according to
our Model 1 simulations. In the left plot the red dashed area shows the
conditions at which there is a deep convective envelope for a given stellar
radius and mass as predicted from our MESA simulations, while the dotted teal
and circled pink areas show instead the parameter space in which CE survival
is possible based on the assumptions made in StarTrack, for Z$=0.1$ Z⊙ and
Z$=0.01$ Z⊙ respectively. In the right plot we show the
$R_{\text{MAX}}$ prescriptions from Model 2, Model 3, Model 4a, and Model 4b,
and the average StarTrack $R_{\text{Int}}$ as a function of $M_{\text{ZAMS}}$.
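The deep-envelope criterion described above (at least 10 per cent of the stellar mass in the outermost convective zone) can be sketched as follows. This is an illustrative implementation under our own assumed data layout; in practice the mass coordinates and convective flags would come from MESA profile output:

```python
def has_deep_convective_envelope(mass_coord, convective, threshold=0.10):
    """Return True if the outermost contiguous convective zone holds at
    least `threshold` of the total stellar mass.

    mass_coord : enclosed mass [M_sun], ordered centre -> surface
    convective : booleans, True where the shell is convective
    """
    m_total = mass_coord[-1]
    # walk inwards from the surface through the contiguous convective zone
    m_conv = 0.0
    for i in range(len(mass_coord) - 1, 0, -1):
        if not convective[i]:
            break
        m_conv += mass_coord[i] - mass_coord[i - 1]
    return m_conv / m_total >= threshold

# toy profile: a 30 M_sun star whose outer 5 M_sun are convective
m = [0.0, 10.0, 20.0, 25.0, 28.0, 30.0]
conv = [False, False, False, False, True, True]
print(has_deep_convective_envelope(m, conv))  # True: 5/30 ~ 17% > 10%
```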
Figure 7: 2D histogram showing how often the CE events happen as a function of
$R_{\text{Int}}$ with $M_{\text{Int}}$ (left panel) and $M_{\text{ZAMS}}$
(right panel) for BH-BH merger progenitors. Both plots were rescaled to
include only the CE events with $R_{\text{Int}}$ < 3000 R⊙. On the left plot
the red dashed area shows where stars below 100 M⊙ get a convective envelope
according to our MESA simulations, while the dotted teal and pink areas show
how our StarTrack prescription defines CE survival for Z = 0.1 Z⊙ and
Z = 0.01 Z⊙ respectively. On the right plot the orange lines show the $R_{\text{MAX}}$
as a function of $M_{\text{ZAMS}}$ from our Model 2, Model 3, Model 4a and
Model 4b simulations, while the pink line represents the average
$R_{\text{Int}}$ values as a function of $M_{\text{ZAMS}}$ (Eq. 9).
The StarTrack average $R_{\text{Int}}$, as shown by the pink line in the right
panel of Figure 7, is described by the relation:
$R_{\text{Int}}=4.5\times 10^{-3}\,M_{\text{ZAMS}}^{3}-0.8\,M_{\text{ZAMS}}^{2}+7.4\,M_{\text{ZAMS}}-1070.1$ (9)
In Model 1, of all the CE events leading to BH-BH mergers, only $\sim$1 per
cent are initiated by donors that would have an outer convective envelope
according to our MESA simulations. From this comparison it is already evident
that detailed modelling gives much more restrictive boundaries for CE
survival. This means that, if the donor star does indeed need a convective
envelope for the binary to survive the CE phase, many of the CE events
simulated with our standard prescription that lead a binary to evolve into a
BH-BH system would instead end in a merger during the CE phase. This in turn means that
our CE prescription might overestimate the merger rate densities from Table 3
and Table 4.
In the right panel of Fig. 7 we compare $R_{\rm MAX}$ from the different
models with the distribution of $R_{\rm Int}$ in CE events leading to BH-BH
mergers. As expected and already illustrated in the plot on the right of Fig.
7, the vast majority of the $R_{\rm Int}$ for CE events ($\sim$ 97 per cent)
falls below our Model 4b maximum radius limit, which explains the absence of
any substantial difference for BH-BH merger rate density predictions between
Model 1, Model 2, and Model 4b. This number decreases to $\sim$ 83 per cent if
we consider only $M_{\rm ZAMS}$ > 50 M⊙. For this mass range, only
$\sim$1 and $\sim$18 per cent of the stars in BH-BH progenitor binaries, for
Model 3 and Model 4a respectively, would be large enough to initiate a CE
phase. This explains why the differences in terms of merger rate densities for
BH-BH total masses > 50 M⊙, as shown in Sec. 3.4, are sharper than when we
consider the whole mass spectrum.
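The quoted percentages are fractions of CE events whose interaction radius lies below a given $R_{\rm MAX}$ curve. A minimal sketch of that bookkeeping, with purely hypothetical event data and an invented flat $R_{\rm MAX}$ curve for illustration only:

```python
def fraction_below_rmax(r_int, m_zams, rmax_of_m, m_min=0.0):
    """Fraction of CE events whose donor radius at interaction lies below
    the model's maximum radius, optionally restricted to M_ZAMS > m_min.

    r_int, m_zams : per-event lists (solar units); rmax_of_m : callable.
    """
    sel = [(r, m) for r, m in zip(r_int, m_zams) if m > m_min]
    if not sel:
        return 0.0
    below = sum(1 for r, m in sel if r < rmax_of_m(m))
    return below / len(sel)

# toy event sample and a flat hypothetical R_MAX curve of 2000 R_sun
r_int  = [500.0, 900.0, 1500.0, 2500.0]
m_zams = [25.0, 40.0, 60.0, 90.0]
frac = fraction_below_rmax(r_int, m_zams, lambda m: 2000.0)
print(f"{frac:.0%} of CE events below R_MAX")  # 75% in this toy sample
```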
In Appendix A histograms of the $R_{\rm Int}$ distributions for CE, stable
RLOF and all RLOF events for BH-BH merger progenitors are provided.
## 4 Conclusions
In our study we analysed how the theoretical uncertainty on the maximum
expansion of massive stars ($M_{\rm ZAMS}$ > 18 M⊙) can affect the evolution
of isolated massive binaries and in turn the formation of gravitational-wave
sources.
We present five different models to describe the maximum stellar radius
($R_{\rm MAX}$) that massive stars reach throughout their lifetime as a
function of their initial mass. Model 1 represents our default StarTrack
prescription, which adopts the Hurley et al. (2000) analytic formulae for
stellar evolution. Model 2 was retrieved from the METISSE-BoOST models (see
Sec. 2.2) from Agrawal et al. (2020). Model 3 is based on the METISSE-MESA
models (Agrawal et al. 2020, Sec. 2.3), while Model 4a and Model 4b are based
on our sets of evolutionary models computed with MESA (Sec. 2.4). All the
models and their respective predictions are shown in Fig. 3.
Models 2 to 4b predict an $R_{\rm MAX}$($M_{\rm ZAMS}$) that is smaller
than what is predicted by the analytic formulae from Hurley et al. (2000) in
Model 1, for some masses by even an order of magnitude. These formulae were
fit from a set of stellar evolution models (Pols et al., 1998), which used the
Schwarzschild criterion for convective boundaries, relatively small core
overshooting ($\alpha_{ov}$ = 0.12), no rotationally-induced mixing, and
now-outdated opacity tables, equations of state and nuclear reaction rates. On
top of that, these stellar tracks, when used in population studies for
gravitational wave sources, must be extrapolated for initial masses higher
than 50 M⊙, and interpolated for metallicities that were not considered in
the original set of models. We show in Fig. 1 and Fig. 2 that the maximum
radial expansion of a star as a function of metallicity, as simulated by
Hurley et al. (2000) analytic formulae, does not reproduce what is expected from
our MESA simulations made with Model 4b, which is most apparent for those
initial metallicities and masses at which stellar evolution is interpolated
and/or extrapolated.
Given each of our models for $R_{\rm MAX}$ ($M_{\rm ZAMS}$), we estimated the
merger rate density and BH mass distribution of double compact object mergers,
and compared them to the reported LIGO/Virgo/KAGRA values (Abbott et al.,
2021a, b). We find that in Model 2 and Model 4b there is no significant change
in the estimated BH-BH merger rate density from our reference model with the
Hurley et al. (2000) prescription (see Table 3). For Model 3 and Model 4a, we
show that the BH-BH merger rate changes at most by a factor of $\sim$3. This
is not an extreme deviation from our reference model, since the merger rate
density of double compact objects was shown to vary by even orders of
magnitude between different models and prescriptions (Mandel & Broekgaarden,
2022). By studying the merger rate distribution as a function of the total BH-
BH merger mass (for redshifts z < 2), as shown in Fig. 6, we find no
significant difference among the models for $M_{\rm tot}$ < 50 M⊙. However,
looking at the most massive BH-BH mergers ($M_{\rm tot}$ > 50 M⊙), Models 3
and 4a predict a much lower rate of events compared to our reference
model or to Models 2 and 4b (by factors of $\sim$50 and $\sim$15,
respectively, see Table 4). This is due to the fact that in Model 3 stars with
initial masses larger than 45 M⊙ never expand to become red supergiants,
significantly limiting the parameter space for binary interactions. In a
similar but less significant effect, in Model 4a a limited radial expansion
arises due to the use of the MLT++ convection scheme in MESA (as discussed in Chun
et al. 2018; Klencki et al. 2020). This is of the utmost importance since
radial expansion is needed to initiate Roche lobe overflow events and in turn
to produce merging double compact objects in the isolated binary evolution
channel (with the exception of the chemically homogeneous evolution sub-
channel, which is not the subject of our study).
We also conclude that, although the Hurley et al. (2000) analytic formulae
potentially overestimate the maximum expansion of stars, this does not
significantly alter the estimates of merger rate densities with respect to
modern detailed evolutionary calculations of stellar radial expansion. This is
due to the fact that the vast majority of binary interactions occur before
stars are able to expand close to their maximum (theoretically uncertain)
radius (see Sec. 3.5). The BH-BH merger total mass distribution at low masses
($M_{\rm tot}$ < 50 M⊙) does not show any difference between results that
employ Hurley radial expansion and expansion adopted from various current
detailed stellar evolution models. For high total BH-BH merger masses ($M_{\rm
tot}$ > 50 M⊙), stellar models of massive stars with modest expansion and the
most reasonable input physics (Models 2 and 4b) also predict very similar mass
distributions to predictions based on Hurley’s prescriptions (Model 1).
However, models with almost no stellar radial expansion (Models 3 and 4a)
differ significantly (they predict less massive BH-BH mergers) from other
updated detailed evolutionary models (Models 2 and 4b) or models based on
Hurley et al. (2000) prescriptions (Model 1). While there are other factors
that can alter the BH-BH merger mass distribution (e.g. Stevenson et al. 2019;
Belczynski 2020; Belczynski et al. 2020b; Mapelli et al. 2020; Vink et al.
2021; Belczynski et al. 2022a; Belczynski et al. 2022b; Briel et al. 2022;
Fryer et al. 2022; van Son et al. 2022), we suggest that the study of BH-BH
mergers beyond a total mass of 50 M⊙ could help to better constrain the radial
evolution of massive stars. At the same time, our results illustrate that the
understanding of radial expansion of the most massive stars and the origin of
Humphreys-Davidson limit (Humphreys & Davidson, 1979; Davies et al., 2018;
Gilkis et al., 2021; Sabhahit et al., 2022) are of crucial importance for the
formation of massive BH-BH mergers via common envelope evolution.
Finally, we show in Fig. 7 that, when compared to detailed evolutionary
models, our standard prescription predicts mostly common envelope events
during which the donor star in a BH-BH merger progenitor binary possesses an
outer radiative envelope. This means that, according to Klencki et al. (2020,
2021) and Marchant et al. (2021), we may overestimate common envelope survival
and in turn the number of BH-BH mergers.
## Acknowledgements
AR and KB acknowledge support from the Polish National Science Center (NCN)
grant Maestro (2018/30/A/ST9/00050). AR and KB thank J. Andrews for the
insight about the metallicity-dependent radial evolution of stars in
population synthesis codes. AR and KB thank A. Olejak and R. Smolec for their
helpful feedback. TS acknowledges support from the European
Union’s Horizon 2020 under the Marie Skłodowska-Curie grant agreement No
101024605. JK acknowledges support from an ESO Fellowship. This research was
funded in part by the National Science Center (NCN), Poland under grant number
OPUS 2021/41/B/ST9/00757. For the purpose of Open Access, the author has
applied a CC-BY public copyright license to any Author Accepted Manuscript
(AAM) version arising from this submission.
## References
* Abbott et al. (2021a) Abbott B. P., et al., 2021a, The population of merging compact binaries inferred using gravitational waves through GWTC-3 (arXiv:2111.03634)
* Abbott et al. (2021b) Abbott R., et al., 2021b, The Astrophysical Journal Letters, 913, L7
* Agrawal et al. (2020) Agrawal P., Hurley J., Stevenson S., Szécsi D., Flynn C., 2020, Monthly Notices of the Royal Astronomical Society, 497, 4549–4564
* Agrawal et al. (2021) Agrawal P., Stevenson S., Szécsi D., Hurley J., 2021, arXiv e-prints, p. arXiv:2112.02801
* Agrawal et al. (2022) Agrawal P., Szécsi D., Stevenson S., Eldridge J. J., Hurley J., 2022, MNRAS, 512, 5717
* Ali-Haïmoud et al. (2017) Ali-Haïmoud Y., Kovetz E. D., Kamionkowski M., 2017, Phys. Rev. D, 96, 123523
* Antonini & Rasio (2016) Antonini F., Rasio F. A., 2016, The Astrophysical Journal, 831, 187
* Bartos et al. (2017) Bartos I., Kocsis B., Haiman Z., Márka S., 2017, The Astrophysical Journal, 835, 165
* Belczynski (2020) Belczynski K., 2020, The Astrophysical Journal, 905, L15
* Belczynski et al. (2002) Belczynski K., Kalogera V., Bulik T., 2002, The Astrophysical Journal, 572, 407
* Belczynski et al. (2007) Belczynski K., Taam R. E., Kalogera V., Rasio F. A., Bulik T., 2007, The Astrophysical Journal, 662, 504
* Belczynski et al. (2008) Belczynski K., Kalogera V., Rasio F. A., Taam R. E., Zezas A., Bulik T., Maccarone T. J., Ivanova N., 2008, The Astrophysical Journal Supplement Series, 174, 223
* Belczynski et al. (2010) Belczynski K., Dominik M., Bulik T., O’Shaughnessy R., Fryer C., Holz D. E., 2010, The Astrophysical Journal, 715, L138–L141
* Belczynski et al. (2012) Belczynski K., Wiktorowicz G., Fryer C. L., Holz D. E., Kalogera V., 2012, The Astrophysical Journal, 757, 91
* Belczynski et al. (2016) Belczynski et al., 2016, A&A, 594, A97
* Belczynski et al. (2020a) Belczynski et al., 2020a, A&A, 636, A104
* Belczynski et al. (2020b) Belczynski K., et al., 2020b, ApJ, 890, 113
* Belczynski et al. (2022a) Belczynski K., et al., 2022a, The Astrophysical Journal, 925, 69
* Belczynski et al. (2022b) Belczynski K., Doctor Z., Zevin M., Olejak A., Banerje S., Chattopadhyay D., 2022b, ApJ, 935, 126
* Bellovary et al. (2016) Bellovary J. M., Low M.-M. M., McKernan B., Ford K. E. S., 2016, The Astrophysical Journal, 819, L17
* Bird et al. (2016) Bird S., Cholis I., Muñoz J. B., Ali-Haïmoud Y., Kamionkowski M., Kovetz E. D., Raccanelli A., Riess A. G., 2016, Phys. Rev. Lett., 116, 201301
* Boffin & Jorissen (1988) Boffin H. M. J., Jorissen A., 1988, A&A, 205, 155
* Böhm-Vitense (1958) Böhm-Vitense E., 1958, Z. Astrophys., 46, 108
* Bonaca et al. (2012) Bonaca A., et al., 2012, ApJ, 755, L12
* Bondi & Hoyle (1944) Bondi H., Hoyle F., 1944, MNRAS, 104, 273
* Briel et al. (2022) Briel M. M., Stevance H. F., Eldridge J. J., 2022, arXiv e-prints, p. arXiv:2206.13842
* Brott et al. (2011) Brott I., et al., 2011, A&A, 530, A115
* Burrows et al. (1993) Burrows A., Hubbard W. B., Saumon D., Lunine J. I., 1993, ApJ, 406, 158
* Chen & Huang (2018) Chen Z.-C., Huang Q.-G., 2018, The Astrophysical Journal, 864, 61
* Choi et al. (2016) Choi J., Dotter A., Conroy C., Cantiello M., Paxton B., Johnson B. D., 2016, ApJ, 823, 102
* Chun et al. (2018) Chun S.-H., Yoon S.-C., Jung M.-K., Kim D. U., Kim J., 2018, ApJ, 853, 79
* Davies et al. (2018) Davies B., Crowther P. A., Beasor E. R., 2018, MNRAS, 478, 3138
* Dominik et al. (2012) Dominik M., Belczynski K., Fryer C., Holz D. E., Berti E., Bulik T., Mandel I., O’Shaughnessy R., 2012, ApJ, 759, 52
* Dorn-Wallenstein et al. (2020) Dorn-Wallenstein T. Z., Levesque E. M., Neugent K. F., Davenport J. R. A., Morris B. M., Gootkin K., 2020, The Astrophysical Journal, 902, 24
* Dufton et al. (2000) Dufton P. L., McErlean N. D., Lennon D. J., Ryans R. S. I., 2000, A&A, 353, 311
* Evans & Howarth (2003) Evans C. J., Howarth I. D., 2003, MNRAS, 345, 1223
* Farrell et al. (2021) Farrell E., Groh J., Meynet G., Eldridge J., 2021, Understanding the evolution of massive stars (arXiv:2109.02488)
* Fryer et al. (1999) Fryer C. L., Woosley S. E., Hartmann D. H., 1999, ApJ, 526, 152
* Fryer et al. (2012) Fryer C. L., Belczynski K., Wiktorowicz G., Dominik M., Kalogera V., Holz D. E., 2012, The Astrophysical Journal, 749, 91
* Fryer et al. (2022) Fryer C. L., Olejak A., Belczynski K., 2022, ApJ, 931, 94
* Gaia Collaboration (2018) Gaia Collaboration 2018, VizieR Online Data Catalog, p. I/345
* Gilkis et al. (2021) Gilkis A., Shenar T., Ramachandran V., Jermyn A. S., Mahy L., Oskinova L. M., Arcavi I., Sana H., 2021, MNRAS, 503, 1884
* Glebbeek et al. (2009) Glebbeek E., Gaburov E., de Mink S. E., Pols O. R., Portegies Zwart S. F., 2009, A&A, 497, 255
* Grevesse & Sauval (1998) Grevesse N., Sauval A. J., 1998, Space Sci. Rev., 85, 161
* Heger et al. (2000) Heger A., Langer N., Woosley S. E., 2000, ApJ, 528, 368
* Heger et al. (2003) Heger A., Fryer C. L., Woosley S. E., Langer N., Hartmann D. H., 2003, ApJ, 591, 288
* Higgins & Vink (2019) Higgins E. R., Vink J. S., 2019, A&A, 622, A50
* Horvath et al. (2020) Horvath J. E., Rocha L. S., Bernardo A. L. C., de Avellar M. G. B., Valentim R., 2020, Birth events, masses and the maximum mass of Compact Stars (arXiv:2011.08157)
* Humphreys & Davidson (1979) Humphreys R. M., Davidson K., 1979, ApJ, 232, 409
* Hurley et al. (2000) Hurley J. R., Pols O. R., Tout C. A., 2000, Monthly Notices of the Royal Astronomical Society, 315, 543
* Kesseli et al. (2019) Kesseli A. Y., et al., 2019, The Astronomical Journal, 157, 63
* King et al. (2001) King A. R., Davies M. B., Ward M. J., Fabbiano G., Elvis M., 2001, ApJ, 552, L109
* Klencki et al. (2020) Klencki J., Nelemans G., Istrate A. G., Pols O., 2020, A&A, 638, A55
* Klencki et al. (2021) Klencki J., Nelemans G., Istrate A. G., Chruslinska M., 2021, A&A, 645, A54
* Klencki et al. (2022) Klencki J., Istrate A., Nelemans G., Pols O., 2022, A&A, 662, A56
* Kozai (1962) Kozai Y., 1962, AJ, 67, 591
* Kroupa (2002) Kroupa P., 2002, Science, 295, 82
* Kroupa et al. (1993) Kroupa P., Tout C. A., Gilmore G., 1993, MNRAS, 262, 545
* Kruckow et al. (2016) Kruckow M. U., Tauris T. M., Langer N., Szécsi D., Marchant P., Podsiadlowski P., 2016, Astronomy & Astrophysics, 596, A58
* Ledoux (1947) Ledoux P., 1947, ApJ, 105, 305
* Levesque et al. (2005) Levesque E. M., Massey P., Olsen K. A. G., Plez B., Josselin E., Maeder A., Meynet G., 2005, ApJ, 628, 973
* Lidov (1962) Lidov M., 1962, Planetary and Space Science, 9, 719
* Liu & Lai (2018) Liu B., Lai D., 2018, The Astrophysical Journal, 863, 68
* Maeder & Meynet (2000) Maeder A., Meynet G., 2000, Annual Review of Astronomy and Astrophysics, 38, 143
* Mandel & Broekgaarden (2021) Mandel I., Broekgaarden F. S., 2021, Rates of Compact Object Coalescences (arXiv:2107.14239)
* Mandel & Broekgaarden (2022) Mandel I., Broekgaarden F. S., 2022, Living Reviews in Relativity, 25
* Mandel & de Mink (2016) Mandel I., de Mink S. E., 2016, Monthly Notices of the Royal Astronomical Society, 458, 2634
* Mapelli et al. (2020) Mapelli M., Spera M., Montanari E., Limongi M., Chieffi A., Giacobbo N., Bressan A., Bouffanais Y., 2020, ApJ, 888, 76
* Marchant et al. (2016) Marchant P., Langer N., Podsiadlowski P., Tauris T. M., Moriya T. J., 2016, A&A, 588, A50
* Marchant et al. (2021) Marchant P., Pappas K. M. W., Gallegos-Garcia M., Berry C. P. L., Taam R. E., Kalogera V., Podsiadlowski P., 2021, Astronomy & Astrophysics
* McKernan et al. (2018) McKernan B., et al., 2018, The Astrophysical Journal, 866, 66
* Miller & Lauburg (2009) Miller M. C., Lauburg V. M., 2009, The Astrophysical Journal, 692, 917
* Moe & Di Stefano (2017) Moe M., Di Stefano R., 2017, ApJS, 230, 15
* Müller et al. (2016) Müller B., Heger A., Liptai D., Cameron J. B., 2016, Monthly Notices of the Royal Astronomical Society, 460, 742
* Neugent et al. (2010) Neugent K. F., Massey P., Skiff B., Drout M. R., Meynet G., Olsen K. A. G., 2010, ApJ, 719, 1784
* Ng et al. (2021) Ng K. K. Y., Vitale S., Farr W. M., Rodriguez C. L., 2021, ApJ, 913, L5
* Nugis & Lamers (2000) Nugis T., Lamers H. J. G. L. M., 2000, A&A, 360, 227
* Olejak et al. (2020) Olejak A., Fishbach M., Belczynski K., Holz D. E., Lasota J. P., Miller M. C., Bulik T., 2020, ApJ, 901, L39
* Olejak et al. (2021) Olejak A., Belczynski K., Ivanova N., 2021, Astronomy & Astrophysics, 651, A100
* Paczynski (1976) Paczynski B., 1976, SAO/NASA Astrophysics Data System, 73, 75
* Pavlovskii et al. (2016) Pavlovskii K., Ivanova N., Belczynski K., Van K. X., 2016, Monthly Notices of the Royal Astronomical Society, 465, 2092
* Paxton et al. (2011) Paxton B., Bildsten L., Dotter A., Herwig F., Lesaffre P., Timmes F., 2011, ApJS, 192, 3
* Paxton et al. (2013) Paxton B., et al., 2013, The Astrophysical Journal Supplement Series, 208, 4
* Paxton et al. (2015) Paxton B., et al., 2015, ApJS, 220, 15
* Paxton et al. (2018) Paxton B., et al., 2018, ApJS, 234, 34
* Paxton et al. (2019) Paxton B., et al., 2019, ApJS, 243, 10
* Pols et al. (1998) Pols O. R., Schröder K.-P., Hurley J. R., Tout C. A., Eggleton P. P., 1998, Monthly Notices of the Royal Astronomical Society, 298, 525
* Sabhahit et al. (2022) Sabhahit G. N., Vink J. S., Higgins E. R., Sander A. A. C., 2022, MNRAS, 514, 3736
* Sana et al. (2012) Sana H., et al., 2012, Science, 337, 444
* Schootemeijer et al. (2019) Schootemeijer A., Langer N., Grin N. J., Wang C., 2019, A&A, 625, A132
* Secunda et al. (2020) Secunda A., et al., 2020, The Astrophysical Journal, 903, 133
* Shenar et al. (2022) Shenar T., et al., 2022, arXiv e-prints, p. arXiv:2207.07674
* Spitzer (1969) Spitzer Lyman J., 1969, ApJ, 158, L139
* Stevenson et al. (2019) Stevenson S., Sampson M., Powell J., Vigna-Gómez A., Neijssel C. J., Szécsi D., Mandel I., 2019, ApJ, 882, 121
* Stone et al. (2016) Stone N. C., Metzger B. D., Haiman Z., 2016, Monthly Notices of the Royal Astronomical Society, 464, 946
* Szécsi et al. (2022) Szécsi Agrawal, Poojan Wünsch, Richard Langer, Norbert 2022, A&A, 658, A125
* Tagawa et al. (2020) Tagawa H., Haiman Z., Kocsis B., 2020, The Astrophysical Journal, 898, 25
* Tagawa et al. (2021) Tagawa H., Kocsis B., Haiman Z., Bartos I., Omukai K., Samsing J., 2021, The Astrophysical Journal, 908, 194
* Vink et al. (2001) Vink J. S., de Koter A., Lamers H. J. G. L. M., 2001, A&A, 369, 574
* Vink et al. (2021) Vink J. S., Higgins E. R., Sander A. A. C., Sabhahit G. N., 2021, MNRAS, 504, 146
* Wong et al. (2021) Wong K. W. K., Breivik K., Kremer K., Callister T., 2021, Phys. Rev. D, 103, 083021
* Woosley (2017) Woosley S. E., 2017, ApJ, 836, 244
* Xin et al. (2022) Xin C., Renzo M., Metzger B. D., 2022, Monthly Notices of the Royal Astronomical Society, 516, 5816
* Yang et al. (2019) Yang Y., et al., 2019, Phys. Rev. Lett., 123, 181101
* Zevin et al. (2021) Zevin M., et al., 2021, ApJ, 910, 152
* de Jager et al. (1988) de Jager C., Nieuwenhuijzen H., van der Hucht K. A., 1988, A&AS, 72, 259
* de Mink & Belczynski (2015) de Mink S. E., Belczynski K., 2015, The Astrophysical Journal, 814, 58
* de Mink & Mandel (2016) de Mink S. E., Mandel I., 2016, Monthly Notices of the Royal Astronomical Society, 460, 3545
* de Mink et al. (2008) de Mink S. E., Pols O. R., Yoon S.-C., 2008, AIP Conference Proceedings
* van Son et al. (2022) van Son L. A. C., et al., 2022, ApJ, 931, 17
## Appendix A Interaction radius distribution for BH-BH merger progenitors

Figure 8: Distribution of the interaction radii for all RLOF events from our standard Model 1 simulations. The dashed vertical red line represents the median value at $\sim 462$ R⊙, while the continuous vertical line represents a range of three standard deviations from the median.

Figure 9: Distribution of the interaction radii for CE events from our standard Model 1 simulations. The dashed vertical red line represents the median value at $\sim 653$ R⊙, while the continuous vertical line represents a range of three standard deviations from the median.

Figure 10: Distribution of the interaction radii for stable RLOF events from our standard Model 1 simulations. The dashed vertical red line represents the median value at $\sim 324$ R⊙, while the continuous vertical line represents a range of three standard deviations from the median.
$^{a}$ Institute of Nuclear Physics, Polish Academy of Sciences, ul. Radzikowskiego 152, 31-342 Krakow, Poland
$^{b}$ Brookhaven National Laboratory, Physics Department, Bldg. 510A, 20 Pennsylvania Street, Upton, NY 11973, USA
$^{c}$ Institute of Applied Computer Science, Jagiellonian University, ul. Łojasiewicza 11, 30-348 Krakow, Poland
$^{d}$ Faculty of Material Engineering and Physics, Cracow University of Technology, ul. Podchora̧żych 1, 30-084 Krakow, Poland
$^{e}$ Department of Physics and Technology, University of Bergen, 5007 Bergen, Norway
# Transverse momentum broadening of medium-induced cascades in expanding media
Souvik Priyam Adhya (ORCID: 0000-0002-4825-2827), <EMAIL_ADDRESS>
Krzysztof Kutak$^{a,b}$ (ORCID: 0000-0003-1924-7372), <EMAIL_ADDRESS>
Wiesław Płaczek$^{c}$ (ORCID: 0000-0002-9678-9303), <EMAIL_ADDRESS>
Martin Rohrmoser$^{d}$ (ORCID: 0000-0003-2311-832X), <EMAIL_ADDRESS>
Konrad Tywoniuk$^{e}$ (ORCID: 0000-0001-5677-0010), <EMAIL_ADDRESS>
###### Abstract
In this work, we explore the features of gluonic cascades in static and
Bjorken expanding media by solving the full BDIM evolution equations in
longitudinal momentum fraction $x$ and transverse momentum ${\bm{k}}$
numerically using the Monte Carlo event generator MINCAS. Confirming the
scaling of the energy spectra at low-$x$, discovered in earlier works, we use
this insight to compare the amount of broadening in static and expanding
media. We compare angular distributions for the in-cone radiation for
different medium profiles with the effective scaling laws and conclude that
the out-of-cone energy loss proceeds via the radiative break-up of hard
fragments, followed by an angular broadening of soft fragments. While the
dilution of the medium due to expansion significantly affects the broadening
of the leading fragments, we provide evidence that in the low-$x$ regime,
which is responsible for most of the gluon multiplicity in the cascade, the
angular distributions are very similar when comparing different medium
profiles at an equivalent, effective in-medium path length. This is mainly due
to the fact that in this regime, the broadening is dominated by multiple
splittings. Finally, we discuss the impact of our results on the
phenomenological description of the out-of-cone radiation and jet quenching.
###### Keywords:
jet quenching, quark-gluon plasma, medium-induced cascades, expanding medium,
Markov Chain Monte Carlo
††preprint: IFJPAN-IV-2022-19
## 1 Introduction
In ultra-relativistic collisions of heavy ions, nuclear matter is subjected to such extreme conditions that a quark-gluon plasma (QGP) is created Busza:2018rrf . This state of matter is opaque to jet propagation, a phenomenon referred to as “jet quenching” dEnterria:2009xfs ; Mehtar-Tani:2013pia ; Blaizot:2015lma . Jet quenching was proposed early on as a sensitive probe of medium properties in Refs. Gyulassy:1990ye ; Wang:1991xy , and has since been observed in collider experiments at RHIC Adler:2002tq and the LHC Aad:2010bu . It is addressed using methods such as AdS/CFT Liu:2006ug ; Ghiglieri:2020dpq , semi-analytical calculations Baier:1994bd ; Baier:1996vi ; Zakharov:1996fv ; Zakharov:1997uu ; Zakharov:1999zk ; Baier:2000mf ; Gyulassy:2000fs ; Guo:2000nz ; Barata:2020rdn ; Barata:2020sav ; Mehtar-Tani:2019ygg ; Mehtar-Tani:2019tvy , kinetic theory Baier:2000sb ; Jeon:2003gi ; Arnold:2002ja ; Ghiglieri:2015ala ; Kurkela:2018wud and, finally, Monte Carlo and other numerical methods Salgado:2003gb ; Zapp:2008gi ; Armesto:2009fj ; Schenke:2009gb ; Lokhtin:2011qq ; Casalderrey-Solana:2014bpa ; Kutak:2018dim ; Blanco:2020uzy ; Caron-Huot:2010qjx ; Feal:2018sml ; Andres:2020vxs .
In recent years, many studies have focused on the out-of-cone radiation or, equivalently, the transverse-momentum dependence of produced mini-jets Blaizot:2013hx . On a theoretical level, this problem leads to the generalization of the BDMPS-Z framework Baier:2000mf ; Baier:2000sb ; Jeon:2003gi ; Zakharov:1996fv ; Zakharov:1997uu ; Zakharov:1999zk ; Baier:1994bd ; Baier:1996vi ; Arnold:2002ja ; Ghiglieri:2015ala , and to the formulation of the BDIM equations for gluons Blaizot:2012fh and for a combined system of quarks and gluons Blanco:2020uzy . One actively investigated problem is how to account for the expansion of the medium together with the turbulent dynamics of a jet losing energy. In particular, in Ref. Adhya:2019qse , the medium-modified gluon splitting rates were calculated for different profiles of the expanding partonic medium, namely the static, exponential, and Bjorken-expanding profiles. This generalized the rate equation for the energy distribution, which accounts for multiple soft scatterings, to a time-dependent transport coefficient characterizing the expanding medium, and allowed the sensitivity of the inclusive jet suppression to the way the medium expands to be quantified. In the subsequent study Adhya:2021kws , the framework was generalized to account for quarks. Apart from that, initial-state nuclear effects, vacuum-like emissions, as well as coherence effects were taken into account. This allowed a reasonable description of the nuclear modification of jets to be achieved.
In this paper, we generalize the framework of Ref. Adhya:2021kws by accounting for the transverse-momentum dependence of the mini-jets produced during jet quenching. This amounts to generalizing the BDIM equations to the expanding-medium case. The result allows us to determine the effect of the expansion on the transverse-momentum spectra of fragmenting jets and to investigate the interplay between the dynamics of the expansion and re-scatterings. One objective of this work is to study the impact of the medium expansion on the radiation and broadening, as revealed by a scaling analysis in the effective evolution parameters. Secondly, we study the radial distributions of gluons in bins of $x$ (or energy) to show where hard and soft emissions dominate. This reveals the kinematical scales of the medium where the splitting and the broadening dominate, respectively, as well as differences between the various medium profiles.
In a very recent paper Mehtar-Tani:2022zwf , the authors studied physical quantities similar to those considered in this work through a kinetic-theory approach, with thermalization effects included, for an infinite static medium. In this paper, we present results for a more realistic description using a time-dependent splitting rate that captures the interplay between the formation time of the radiation and the medium length for static media. Next, we highlight qualitative differences between the medium profiles by including Bjorken-expanding media, and we explore the sensitivity of the physical quantities to the time of the onset of jet quenching.
The paper is organized as follows. In Sec. 2, we revisit the single-gluon
distributions and collinear splitting kernels for the expanding as well as
static medium. We also introduce the in-medium gluon evolution equation. Next,
in Sec. 3, we discuss the scaling features of a purely radiative cascade for
different medium profiles, present formal solutions, and compare them to
numerical evaluations in a dedicated Monte Carlo event generator MINCAS. The
established scaling allows us to corroborate the concept of an equivalent effective in-medium path length $L_{\rm eff}$ that leads to the same low-$x$ spectrum in different medium scenarios. In Sec. 4, we study fully differential spectra of mini-jets, both in a static medium and in a medium undergoing the Bjorken expansion, obtained via numerical evaluations in MINCAS. In particular, we analyze the dependence of the fragmentation function on the polar angle. From the full distributions in the longitudinal momentum
on a polar angle. From the full distributions in the longitudinal momentum
fraction $x$ and the transverse momentum $k_{T}$, alternatively $x$ and the
polar angle $\theta$, we discuss three key quantities: a) the average $k_{T}$
distribution; b) the $x$-distribution within a certain cone limited by
$\theta<\theta_{\rm max}$; and c) the angular distribution in specific bins of
$x$. Finally, in Sec. 5, we perform an in-depth study of the fraction of
gluons in a given $x$-range that remain within a cone. In Sec. 6, we summarize
our work and explore the possible qualitative impact on jet quenching
phenomenology.
## 2 Medium-induced cascades in expanding media
In this work, we study two classes of medium profiles: a static, non-expanding
medium and a medium following the Bjorken expansion.
### Static medium:
In this case, we consider the temperature of the medium to be independent of
time, $T=T_{0}$.
### Bjorken expanding medium:
Assuming one-dimensional longitudinal expansion, we use the following form for the temperature evolution as a function of the proper time $t$:
$$T(t)=\begin{cases}0 & {\rm for}\quad t<t_{0}\,,\\ T_{0}\left(\frac{t_{0}}{t}\right)^{1/3} & {\rm for}\quad t_{0}\leq t\leq L+t_{0}\,,\\ 0 & {\rm for}\quad t>L+t_{0}\,,\end{cases} \qquad (1)$$
where the reference time $t_{0}$ is a free parameter.
In addition, we allow for two different initial time values $t_{0}$ in the
Bjorken scenario, resulting in a total of three studied cases. Further
discussion about the expanding media can be found in Refs. Adhya:2019qse ;
Salgado:2002cd ; Adhya:2021kws ; Andres:2019eus ; Salgado:2003gb .
We consider the gluon distribution
$D(x,{\bm{k}},t)\equiv x\frac{{\rm d}N}{{\rm d}x{\rm d}^{2}{\bm{k}}}\,,$ (2)
where $x$ is the longitudinal momentum fraction and ${\bm{k}}=(k_{x},k_{y})$
is the transverse momentum. The evolution equation for this distribution in a
dense medium, neglecting quark contributions, reads Blaizot:2014rla
$$\frac{\partial}{\partial t}D(x,{\bm{k}},t)=\frac{1}{t_{\ast}}\int_{0}^{1}{\rm d}z\,{\cal\tilde{K}}(z,t)\left[\frac{1}{z^{2}}\sqrt{\frac{z}{x}}\,D\left(\frac{x}{z},\frac{{\bm{k}}}{z},t\right)\theta(z-x)-\frac{z}{\sqrt{x}}\,D(x,{\bm{k}},t)\right]+\int\frac{{\rm d}^{2}{\bm{l}}}{(2\pi)^{2}}\,C({\bm{l}},t)\,D(x,{\bm{k}}-{\bm{l}},t)\,. \qquad (3)$$
The above evolution equation describes the interplay between collinear splittings (the first two terms on the r.h.s.) and diffusion in momentum space (the last term). In this equation, the momentum transfer during the branching is neglected. The collinear branching kernel ${\cal\tilde{K}}(z,t)$ will be discussed in detail below. The transverse-momentum dependence comes, nevertheless, both from elastic scattering, as described by the elastic collision kernel $C({\bm{l}},t)$, and from multiple splittings, see the terms in the first line of Eq. (3). Finally, the
characteristic time
$t_{\ast}\equiv\frac{1}{\bar{\alpha}}\sqrt{\frac{p_{0}^{+}}{\hat{q}_{0}}}\,,$
(4)
with $\bar{\alpha}=\alpha_{s}N_{c}/\pi$ and $p_{0}^{+}$ being the (light-cone)
energy of the initial jet particle, determines the stopping time of the jet at
the initial, or reference, value of the jet quenching parameter $\hat{q}_{0}$,
which will be specified in more detail below. In this work, we have not included the thermalization effects for $x\leq 0.01$ that were considered in Ref. Mehtar-Tani:2022zwf .
The elastic collision kernel $C({\bm{l}},t)$ is given by
$C({\bm{l}},t)=w({\bm{l}},t)-\delta^{(2)}({\bm{l}})\int{\rm
d}^{2}{\bm{l}}^{\prime}\,w({\bm{l}}^{\prime},t)\,,$ (5)
where $w({\bm{l}},t)\propto n(t){\rm d}^{2}\sigma_{\text{el}}/{\rm
d}^{2}{\bm{l}}$ is proportional to the density of the medium $n(t)$ and the
in-medium elastic cross section ${\rm d}^{2}\sigma_{\text{el}}/{\rm
d}^{2}{\bm{l}}$. When the medium expands, the density drops reducing the
impact of elastic momentum transfer. Although it is less important, screening
effects, contained in the elastic cross section, will also typically be
hampered by the expansion111To be precise, we consider the uniform expansion,
for extensions see e.g. Ref. Barata:2022krd ..
Considering purely gluon systems, we use the so-called Hard Thermal Loop (HTL)
approximation for $w({\bm{l}},t)$ at the leading order:
$w({\bm{l}},t)=\frac{N_{c}g^{2}m_{D}^{2}T}{{\bm{l}}^{2}({\bm{l}}^{2}+m_{D}^{2})}=\frac{4\pi\,\hat{q}}{{\bm{l}}^{2}({\bm{l}}^{2}+m_{D}^{2})}\,,$
(6)
where the medium is characterized by the temperature $T$ and the Debye mass
$m_{D}$, and we have not written explicitly their time dependence. We have
also introduced the jet transport coefficient, which is given by
$\hat{q}=\alpha_{s}N_{c}m_{D}^{2}T$ with $g^{2}=4\pi\alpha_{s}$. The Debye
mass in the QCD medium at leading order is given by
$$m_{D}^{2}=g^{2}T^{2}\left(\frac{N_{c}}{3}+\frac{N_{f}}{6}\right)=\frac{3}{2}g^{2}T^{2}\,, \qquad (7)$$
where the last equality holds for $N_{c}=N_{f}=3$.
In addition to the above perturbative collision kernels, one can also study
the effect of using scattering potentials from non-perturbative extraction
from the lattice QCD Schlichting:2021idr as well as the NLO HTL contributions
Caron-Huot:2010qjx . In a static medium, $m_{D}^{2}=m_{D0}^{2}\equiv\frac{3}{2}g^{2}T_{0}^{2}$ and $\hat{q}=\hat{q}_{0}\equiv\alpha_{s}N_{c}m_{D0}^{2}T_{0}$ are simply constant,
whereas in the Bjorken model these parameters, together with the temperature,
are time-dependent and given by
$\hat{q}(t)=\hat{q}_{0}\left(\frac{T(t)}{T_{0}}\right)^{3}\,,\qquad
m_{D}^{2}(t)=m_{D0}^{2}\left(\frac{T(t)}{T_{0}}\right)^{2}\,,$ (8)
where the time evolution of the temperature is given in Eq. (1).
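As an illustration, the Bjorken temperature profile of Eq. (1) and the scalings of Eq. (8) can be encoded in a few lines. This is a minimal sketch in Python (the function names are ours, not part of MINCAS), with temperatures in GeV and times in fm:

```python
def temperature(t, T0=0.4, t0=1.0, L=8.0):
    """Bjorken temperature profile of Eq. (1): T = T0 (t0/t)^(1/3) inside the medium."""
    if t < t0 or t > L + t0:
        return 0.0
    return T0 * (t0 / t) ** (1.0 / 3.0)

def qhat(t, qhat0, T0=0.4, t0=1.0, L=8.0):
    """Time-dependent quenching parameter, Eq. (8): qhat = qhat0 (T/T0)^3 = qhat0 t0/t."""
    return qhat0 * (temperature(t, T0, t0, L) / T0) ** 3

def debye_mass_sq(t, mD0_sq, T0=0.4, t0=1.0, L=8.0):
    """Time-dependent Debye mass squared, Eq. (8): mD^2 = mD0^2 (T/T0)^2."""
    return mD0_sq * (temperature(t, T0, t0, L) / T0) ** 2
```

Note that inside the medium the two scalings combine to $\hat{q}(t)=\hat{q}_{0}\,t_{0}/t$, since $T^{3}\propto t_{0}/t$.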
Next, we turn to the splitting kernel. It is related to the in-medium
splitting rate as $\mathcal{\tilde{K}}(z,t)=t_{\ast}\,{\rm d}I/({\rm d}z\,{\rm
d}t)$. This rate has recently been calculated analytically to high precision
in Ref. Isaksen:2022pkj and evaluated numerically in Ref. Andres:2020vxs . We
shall use it together with the so-called “harmonic oscillator” approximation
which is valid for soft emissions in a large medium Isaksen:2022pkj . For a
time-dependent splitting kernel, a pertinent question is whether the relevant
time should refer to the time between subsequent splittings or whether it
should be counted from the beginning of the cascade. The former scenario is
significantly more complicated, since it would involve a complicated interplay
between subsequent emissions, and remains an open challenge (see Ref.
Barata:2021byj for an exploratory study). This problem applies both to the
static and time-dependent media, see below. In Eq. (2), subsequent splittings
are assumed to be independent and the global time is always counted from the
beginning of the cascade up to its end.
In the static medium, it is easily identified from the medium-induced spectrum Baier:1996kr ; Baier:1996sk ; Zakharov:1996fv ; Zakharov:1997uu , and reads
$$\mathcal{\tilde{K}}^{\text{static}}(z,t)={\cal K}(z)\,{\rm Re}\left[(i-1)\tan\left(\frac{1-i}{2}\kappa(z)\tau\right)\right]=\mathcal{K}(z)\,\frac{\sinh(\kappa(z)\tau)-\sin(\kappa(z)\tau)}{\cosh(\kappa(z)\tau)+\cos(\kappa(z)\tau)}\,, \qquad (9)$$
where $\tau\equiv t/t_{\ast}$, $\kappa(z)=\sqrt{[1-z(1-z)]/[z(1-z)]}$ and the
time-independent part is
${\cal K}(z)=\frac{\kappa(z)P_{gg}(z)}{2N_{c}}\,,$ (10)
where the (un-regularized) Altarelli-Parisi splitting function reads
$P_{gg}(z)=2N_{c}\,[1-z(1-z)]^{2}/[z(1-z)]$.
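The two forms of the static rate in Eq. (9) can be cross-checked numerically. A sketch (helper names are ours, not part of MINCAS) using the definitions of $\kappa(z)$ and $P_{gg}(z)$ above:

```python
import cmath
import math

def kappa(z):
    """kappa(z) = sqrt([1 - z(1-z)] / [z(1-z)])."""
    return math.sqrt((1 - z * (1 - z)) / (z * (1 - z)))

def pgg(z, Nc=3):
    """Un-regularized Altarelli-Parisi gluon splitting function."""
    return 2 * Nc * (1 - z * (1 - z)) ** 2 / (z * (1 - z))

def kernel_static(z, tau, Nc=3):
    """Static-medium rate, Eq. (9), complex-tangent form."""
    K = kappa(z) * pgg(z, Nc) / (2 * Nc)
    return K * ((1j - 1) * cmath.tan((1 - 1j) / 2 * kappa(z) * tau)).real

def kernel_static_explicit(z, tau, Nc=3):
    """Equivalent explicit (sinh/cosh) form of Eq. (9)."""
    K = kappa(z) * pgg(z, Nc) / (2 * Nc)
    u = kappa(z) * tau
    return K * (math.sinh(u) - math.sin(u)) / (math.cosh(u) + math.cos(u))
```

The agreement of the two forms follows from the identity ${\rm Re}[(i-1)\tan(\frac{1-i}{2}u)]=(\sinh u-\sin u)/(\cosh u+\cos u)$.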
In the Bjorken expanding medium, the spectrum is given by Refs. Baier:1998yf ;
Salgado:2003gb ; Arnold:2008iy , resulting in the rate Adhya:2019qse
$$\mathcal{\tilde{K}}^{\text{Bjorken}}(z,\tau,\tau_{0})=\mathcal{K}(z)\,\sqrt{\frac{\tau_{0}}{\tau}}\,\text{Re}\left[(1-i)\frac{J_{1}(z_{L})Y_{1}(z_{0})-J_{1}(z_{0})Y_{1}(z_{L})}{J_{1}(z_{0})Y_{0}(z_{L})-J_{0}(z_{L})Y_{1}(z_{0})}\right]\,, \qquad (11)$$
where $\tau_{0}\equiv t_{0}/t_{\ast}$, $J_{\alpha}(\cdot)$ and
$Y_{\alpha}(\cdot)$ are the Bessel functions of the first and second kind,
respectively, and
$\displaystyle z_{0}$ $\displaystyle=(1-i)\kappa(z)\tau_{0}\,,$ (12)
$\displaystyle z_{L}$ $\displaystyle=(1-i)\kappa(z)\sqrt{\tau_{0}\tau}\,.$
(13)
At late times, the factor ${\rm Re}\big{[}\ldots\big{]}$ behaves similarly to
the static case, saturating at 1. In this case, the main effect of the
expansion comes from the factor $\sqrt{\tau_{0}/\tau}$ which arises directly
from considering the static rate with the time-dependent $\hat{q}$. Note that,
in this case, the final time $\tau$ corresponds to
$\tau\equiv(t_{0}+L)/t_{\ast}$, where $L$ refers to the path length in the
medium. It is also interesting to note the additional dependence of the rate
on $t_{0}$ in the Bjorken case. In the subsequent sections, we shall further
explore the possible physical implications of $t_{0}$ regarding the scaling
laws.
We consider a plasma with the initial temperature $T_{0}=0.4\,$GeV
($N_{c}=N_{f}=3$) and the initial momentum to be $p_{0}^{+}=100\,$GeV. The
coupling constant is fixed by $\bar{\alpha}=0.3$, which corresponds to
$g\approx 1.9869$. For the above input parameters we get $m_{D0}=0.97\,$GeV,
$\hat{q}_{0}=1.81\,$GeV$^{2}$/fm and, finally, $t_{\ast}=11.0\,$fm. In addition, to facilitate a numerical solution, we impose a minimal value of the $x$ variable, $x_{\rm min}=10^{-4}$.
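The derived values quoted above follow directly from the inputs; a quick check (assuming $\hbar c\approx 0.1973$ GeV fm for the unit conversion):

```python
import math

HBARC = 0.1973  # GeV*fm (conversion constant; our assumption for the rounding used)

abar = 0.3
Nc = 3
alpha_s = abar * math.pi / Nc         # from abar = alpha_s * Nc / pi
g = math.sqrt(4 * math.pi * alpha_s)  # quoted as g ~ 1.9869
T0 = 0.4                              # GeV
p0 = 100.0                            # GeV (initial light-cone momentum)

mD0 = math.sqrt(1.5) * g * T0                 # Eq. (7) with Nc = Nf = 3; ~0.97 GeV
qhat0_gev3 = alpha_s * Nc * mD0 ** 2 * T0     # qhat0 in GeV^3
qhat0 = qhat0_gev3 / HBARC                    # in GeV^2/fm; ~1.81
tstar = (1.0 / abar) * math.sqrt(p0 / qhat0_gev3) * HBARC  # Eq. (4), in fm; ~11.0
```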
Since we present the results in terms of $k_{T}\equiv|{\bm{k}}|$ dependence,
we introduce the distribution
$\tilde{D}(x,k_{T},t)=\int_{0}^{2\pi}{\rm
d}\phi_{{\bm{k}}}\,k_{T}D(x,{\bm{k}},t)\,,$ (14)
such that
$D(x,t)=\int_{0}^{\infty}{\rm d}k_{T}\,\tilde{D}(x,k_{T},t)\,,$ (15)
corresponds to the final longitudinal-momentum, or energy, spectrum.
We solve the full evolution equation (3) for a collinear kernel using the single-particle initial condition $D(x,{\bm{k}},t=0)=\delta(1-x)\delta^{(2)}({\bm{k}})$. For the results presented in
the rest of the paper, we use the numerical solutions of the evolution
equation that takes into account the transverse-momentum broadening as well as
the splittings in an effective way as already demonstrated in Refs.
Kutak:2018dim ; Blanco:2020uzy ; Blanco:2021usa for the static, infinite
media. To this end, we apply the Markov Chain Monte Carlo (MCMC) algorithm
implemented in the event generator MINCAS, as described in Ref. Kutak:2018dim
, with the necessary extensions. These extensions account for the time
dependence of the splitting kernels given in Eqs. (9) and (11) as well as of the collision kernel according to Eq. (6). Such a dedicated MCMC algorithm proved to be efficient in solving the evolution equation (3), even for the
complicated Bjorken-expanding medium case.
## 3 Scaling in the energy spectrum
Now, integrating Eq. (3) over the transverse momentum ${\bm{k}}$, we obtain the evolution equation for the energy distribution $D(x,t)=\int{\rm d}^{2}{\bm{k}}\,D(x,{\bm{k}},t)=x\,{\rm d}N/{\rm d}x$ of the gluon cascade Blaizot:2013hx ; Blaizot:2013vha :
$$\frac{\partial D(x,t)}{\partial t}=\frac{1}{t_{\ast}}\int_{0}^{1}{\rm d}z\,\mathcal{\tilde{K}}(z,t)\left[\sqrt{\frac{z}{x}}D\left(\frac{x}{z},t\right)\Theta(z-x)-\frac{z}{\sqrt{x}}D(x,t)\right], \qquad (16)$$
in which the collision term integrates to zero. In this work, we
use the in-medium kernel rates defined in Eqs. (9) and (11), for which the above equation can be solved only numerically. This was
previously also analyzed in Ref. Caucal:2020uic , albeit neglecting the
finite-size effects in the rate, as contained in Eq. (11). Here, we use the
dedicated MCMC algorithm implemented in the MINCAS program Kutak:2018dim to
solve the equations numerically.
Before we turn to the numerical evaluations, let us first re-visit the scaling
laws in the evolution variable among different medium profiles. The quenching parameter ${\hat{q}}(t)$ for an expanding medium is time-dependent. The single-gluon emission spectrum for different medium-expansion scenarios possesses
scaling features for an average transport coefficient Salgado:2002cd ;
Salgado:2003gb . However, as demonstrated in Ref. Adhya:2019qse , the average
scaling in $\hat{q}$ is valid only in the hard part of the single-gluon
spectrum and not relevant for the soft part of the spectrum which contributes
the most to the multiplicity, and therefore quenching Baier:2001yt , of
emitted gluons. Instead, to establish the optimal scaling in the soft sector
of the spectra, an “effective” quenching parameter has been identified
Adhya:2019qse . In terms of the effective in-medium evolution time $L_{\rm
eff}$, this translates to
$$L_{\rm eff}=\int_{0}^{\infty}{\rm d}t^{\prime}\,\sqrt{\frac{\hat{q}(t^{\prime})}{\hat{q}_{0}}}\,, \qquad (17)$$
where the temperature profile is given by Eq. (1). This establishes a relation
between the length traversed in a static medium (on the left-hand side of the
equation) and in an expanding medium (on the right-hand side), which
corresponds to the equivalent amount of quenching. For the Bjorken model
considered here, the relation reads
$\displaystyle L_{\rm eff}$
$\displaystyle=2\sqrt{t_{0}}\left(\sqrt{L+t_{0}}-\sqrt{t_{0}}\right)\,.$ (18)
If we assume $L\gg t_{0}$, we can write $L_{\rm eff}\approx
2\sqrt{t_{0}L}\,[1+\mathcal{O}(\sqrt{t_{0}/L})]$. Since we are interested in
the regime of multiple soft scatterings, we consider only the effective
scaling.
Medium | $t_{0}$ [fm] | $L_{\rm eff}$ [fm] | $L$ [fm]
---|---|---|---
Static | 0.0 | 4.0; 6.0 | 4.0; 6.0
Bjorken (early) | 0.6 | 4.0; 6.0 | 10.7; 21.0
Bjorken (late) | 1.0 | 4.0; 6.0 | 8.0; 15.0
Table 1: Values of the initial time $t_{0}$, the effective length $L_{\rm
eff}$, and the actual length $L$ of the evolution for different types of the
medium used in Monte Carlo simulations.
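The entries of Table 1 can be reproduced directly from the closed form of Eq. (18); a short sketch:

```python
import math

def L_eff_bjorken(L, t0):
    """Closed form of Eq. (18): L_eff = 2 sqrt(t0) (sqrt(L + t0) - sqrt(t0))."""
    return 2.0 * math.sqrt(t0) * (math.sqrt(L + t0) - math.sqrt(t0))

# Reproduce the Bjorken rows of Table 1 (all lengths in fm):
for t0, L, target in [(0.6, 10.7, 4.0), (0.6, 21.0, 6.0),
                      (1.0, 8.0, 4.0), (1.0, 15.0, 6.0)]:
    assert abs(L_eff_bjorken(L, t0) - target) < 0.05
```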
Figure 1: The comparison of the gluon energy spectra for different medium
profiles. The upper row shows the spectra evaluated at the medium lengths
$L=4\,$fm (left) and $6\,$fm (right), respectively. The lower panels show the
scaling of the gluon spectra for the scaling variable $L_{\rm eff}$
corresponding to the two above values of $L$ (see Table 1).
The different profiles have been implemented in the Monte Carlo code MINCAS,
and the results are shown in Fig. 1 for the evolution parameters given in
Table 1. In the upper row, we plot the resulting spectra for a fixed length of
the medium $L=4\,$fm and $6\,$fm, with two choices for the Bjorken initial
time $t_{0}=0.6\,$fm and $1\,$fm. Naturally, one obtains a much more
pronounced evolution for the static medium as compared to the expanding-medium
cases. However, we see the appearance of the turbulent-like cascade $\sim
1/\sqrt{x}$ for all the medium profiles. In the lower row of Fig. 1, the
spectra have instead been generated at the equivalent effective medium length
$L_{\rm eff}$; see the third column in Table 1. One can observe the scaling
occurring in the low-$x$ part of the spectra for different medium profiles,
thus re-confirming the findings in Ref. Adhya:2019qse . These scaling features
can also be found analytically within a simplified evolution equation, see
e.g. Ref. Caucal:2020uic .
## 4 Fully differential spectra
We now move to the full solutions of the evolution equation in the energy
fraction $x$ and the transverse momentum $k_{T}\equiv|{\bm{k}}|$. Strictly speaking, the full solution of Eq. (3) is three-dimensional in the variables $(x,{\bm{k}})$,
but its dependence on the azimuthal angle in the transverse-momentum plane
$\phi_{{\bm{k}}}$ is trivial, so it is integrated out. In order to facilitate
comparisons between the different medium profiles, we shall henceforth
consistently plot the distributions at their equivalent effective medium
lengths $L_{\rm eff}$, see Table 1. The reasoning behind this particular
choice for presenting our numerical results will become clear in a moment.
Figure 2: The $\tilde{D}(x,k_{T})$ distributions for the effective length
$L_{\rm eff}=4\,$fm (left column) and $6\,$fm (right column), see Table 1.
The 2D differential distributions $\tilde{D}(x,k_{T},t)$ are shown in Fig. 2.
Let us first address the interplay of the hard and soft sectors of jet
quenching physics by focusing separately on the low- and high-$x$ regimes of
the distributions:
* •
The high-$x$ ($x\sim 1$) regime is dominated by the behavior of the leading
fragment in the cascade. The transverse-momentum distribution in this regime
is therefore expected to be dominated by multiple soft-gluon scatterings at
small $k_{T}$, leading to the Gaussian profile. At high-$k_{T}$, this becomes
a power-law suppression due to rare hard medium interactions.
* •
In the low-$x$ ($x\ll 1)$ regime, on the other hand, the $k_{T}$ distribution
is narrower and approximately Gaussian. No distinct transition to a power-law
behavior is observed for the range of plotted $k_{T}$ values.
These features are qualitatively in agreement with the discussion in Ref.
Blaizot:2014ula for the static media. There, the fully differential spectrum
was postulated to factorize as $D(x,{\bm{k}},t)\simeq D(x,t)P({\bm{k}},t)$.
Let us, therefore, parallel the qualitative observations made directly from
the obtained numerical results in Fig. 2 with a discussion based on these
analytical estimates.
At $x\sim 1$, we have $D(x,t)\simeq\delta(1-x)$, while $P({\bm{k}},t)$ is
simply the well-known broadening distribution for a single particle
Gyulassy:2002yv ; Qiu:2003pm ; DEramo:2010wup ; Barata:2020rdn , see also
Refs. Liou:2013qya ; Blaizot:2013vha ; Caucal:2021lgf ; Ghiglieri:2022gyv for
a discussion of radiative corrections. This distribution is defined as
$P({\bm{k}},t)=\int{\rm d}^{2}{\bm{x}}\,{\rm
e}^{-i{\bm{k}}\cdot{\bm{x}}}\,{\rm e}^{-\int_{0}^{t}{\rm
d}t^{\prime}\,v({\bm{x}},t^{\prime})}\,,$ (19)
where $v({\bm{x}},t)=N_{c}n(t)\int{\rm d}^{2}{\bm{q}}\,(1-{\rm
e}^{-i{\bm{q}}\cdot{\bm{x}}})\sigma({\bm{q}})/(2\pi)^{2}$ is the so-called
dipole cross section, $\sigma({\bm{q}})\equiv{\rm d}\sigma^{\rm el}/{\rm
d}^{2}{\bm{q}}$ being the medium elastic scattering potential. This
distribution is characterized by two distinct regimes, see e.g. Ref.
Barata:2020rdn . At small ${\bm{k}}$, we can approximate
$v({\bm{x}},t)\simeq\hat{q}(t){\bm{x}}^{2}/4$, leading to
$P({\bm{k}},t)\simeq\frac{4\pi}{Q^{2}_{s}(t)}\,{\rm
e}^{-\frac{{\bm{k}}^{2}}{Q_{s}^{2}(t)}}\,,$ (20)
where $Q_{s}^{2}(t)=\int_{0}^{t}{\rm d}t^{\prime}\,\hat{q}(t^{\prime})$ is the
characteristic scale, sometimes called the saturation scale. In a static
medium, $Q_{s}^{2}=\hat{q}_{0}L$. This regime is therefore dominated by
multiple soft scatterings, leading to the Gaussian broadening. At large
${\bm{k}}$, on the other hand,
$P({\bm{k}},t)\simeq N_{c}\sigma({\bm{k}})\int_{0}^{t}{\rm
d}t^{\prime}\,n(t^{\prime})\sim(4\pi)^{2}\frac{\alpha_{s}^{2}N_{c}\int_{0}^{t}{\rm
d}t^{\prime}\,n(t^{\prime})}{{\bm{k}}^{4}}\,,$ (21)
follows from the generic behavior of $t$-channel exchange. As already
indicated in the formulas, this regime is easily generalized to expanding
media via the time-dependent quenching parameter $\hat{q}(t)$ (or the density
$n(t)$). Also, it becomes clear that the broadening of the leading fragments
is significantly less effective when the medium is becoming dilute compared to
when it remains at constant density.
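This last statement can be made quantitative by comparing the saturation scale $Q_{s}^{2}(t)=\int_{0}^{t}{\rm d}t'\,\hat{q}(t')$ of Eq. (20) for a static medium and a Bjorken medium at the same effective length. A sketch with the parameters of Table 1 (inside the Bjorken medium, Eqs. (1) and (8) give $\hat{q}(t)=\hat{q}_{0}\,t_{0}/t$):

```python
import math

QHAT0 = 1.81  # GeV^2/fm (value quoted in Sec. 2)

def Qs2_static(L):
    """Saturation scale in a static medium: Qs^2 = qhat0 * L."""
    return QHAT0 * L

def Qs2_bjorken(L, t0):
    """Integral of qhat0 * t0/t from t0 to L + t0."""
    return QHAT0 * t0 * math.log((L + t0) / t0)

# Same effective length L_eff = 4 fm (Table 1): static L = 4 fm versus
# Bjorken t0 = 0.6 fm, L = 10.7 fm.
qs2_static = Qs2_static(4.0)          # ~7.2 GeV^2
qs2_bjorken = Qs2_bjorken(10.7, 0.6)  # ~3.2 GeV^2
assert qs2_bjorken < qs2_static
```

The dilute Bjorken medium indeed accumulates less than half the broadening of the equivalent static medium for the leading fragment, consistent with Fig. 3.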
At low-$x$, the situation becomes more complicated. The energy distribution
can approximately be described by a turbulent cascade Blaizot:2013hx . The
broadening is, however, not easily disentangled in this case due to the
structure of the evolution equation (3). For a static medium and in the
low-${\bm{k}}$ regime, it was found Blaizot:2014ula that $P({\bm{k}},t)$
again becomes Gaussian, however with a width given by
$\langle{\bm{k}}^{2}\rangle\sim\sqrt{xE\hat{q}}$. This particular behavior
comes about since the $k_{T}$ distribution in this regime is driven mainly by
multiple parton splittings. Note also that the $k_{T}$ distribution is much
narrower than in the large-$x$ regime, in line with the results in Fig. 2. In
angular variables, where in the soft limit we can approximate $\theta\sim
k_{T}/(xp_{0}^{+})$, the width is enhanced in the small-$x$ regime.
In an expanding medium, we have not been able to obtain a closed analytical formula within these approximations. However, assuming that the broadening
continues to be dominated by multiple splittings, we expect to see a similar
amount of broadening in the soft sector for the static and expanding medium
profiles when evaluated at the equivalent effective evolution time $L_{\rm
eff}$. In the next steps, see also Sec. 5, we will study the behavior in the
numerical data from MINCAS and reveal how wide the jet becomes in expanding
media compared to a static one.
Figure 3: The transverse-momentum distribution $\tilde{D}(k_{T},t)$ for the
effective medium length $L_{\rm eff}=4\,$fm (left panel) and $6\,$fm (right
panel).
We start by presenting the transverse-momentum spectra of the gluons as
obtained from the evolution equation (2) in the cases of the static medium and
the Bjorken expansion. First, we consider the $k_{T}$ distribution defined
as222In practice, the lower integral limit is given by the internal MINCAS
parameter $x_{\rm min}$, which is set to $10^{-4}$ for the presented results.
$\tilde{D}(k_{T},t)=\int_{0}^{1}{\rm d}x\,\tilde{D}(x,k_{T},t)$. This
corresponds roughly to the $k_{T}$ distribution of the average $\langle
x\rangle$ fragments in the cascade, and is plotted in Fig. 3 for different
media. Due to the Jacobian, the distribution is suppressed at small $k_{T}$.
At intermediate $k_{T}$ we recognize the characteristic Gaussian peak,
followed by a strong, power-law suppression. We have plotted the distributions
for the equivalent effective in-medium path lengths, see Table 1. Since
$L_{\rm eff}$ was calculated with the scaling of the soft sector in mind,
these distributions, which are mostly sensitive to the large-$x$ fragments, do
not scale. Rather, we observe that the interactions occurring in the expanding
medium are significantly less efficient in generating large-$k_{T}$ fragments
than in the equivalent static medium.
As mentioned above, the simultaneous process of multiple splittings and
scatterings, as encoded in Eq. (2), sets up a dynamical picture where one has
to look for leading and sub-leading contributions of the radiative and
collisional processes. At this point, it is useful to switch variables from
the transverse momentum $k_{T}$ to the polar angle $\theta$. This is achieved
by the following transformation:
$\bar{D}(x,\theta,t)=xp_{0}^{+}\,\tilde{D}(x,xp_{0}^{+}\theta,t)\,,$ (22)
where we have used the small-angle approximation $k_{T}=xp_{0}^{+}\theta$ with
$\theta$ being the polar angle333The general definition of the polar angle in
light-cone coordinates is given by
$\bar{\theta}(x,{\bm{k}})=\arccos\big{[}\big((2xp_{0}^{+})^{2}-{\bm{k}}^{2}\big)/\big((2xp_{0}^{+})^{2}+{\bm{k}}^{2}\big)\big{]}$,
which leads to the distribution $\bar{D}(x,\theta,t)=\int{\rm
d}^{2}{\bm{k}}\,D(x,{\bm{k}},t)\,\delta\left(\theta-\bar{\theta}(x,{\bm{k}})\right)$.
We have explicitly checked in our numerical results that the small-angle
approximation works well in the angular region considered here, i.e.
$\theta\leq 0.6$. It then naturally follows that the fully inclusive angular
distribution is given by $\bar{D}(\theta,t)=\int_{0}^{1}{\rm
d}x\,\bar{D}(x,\theta,t)$. In order to explore the scales where different
effects are at play, we separate the problem by looking at the $x$
distribution in specific bins of the angle $\theta$ and vice versa.
Figure 4: The distributions $D(x,t;\\{\theta_{\rm max}\\})$ for the
$\theta_{\rm max}$ choices of $0.2$, $0.4$ and $0.6$ for effective medium
length $L_{\rm eff}=4\,$fm (left column) and $6\,$fm (right column). The faded
dotted lines show the corresponding distribution for the full angular region
and correspond to the curves shown in Fig. 1.
To understand the behavior of soft gluons in the low-$x$ regime, where the
scaling estimates should work, we analyze the distribution of gluons in $x$
for a particular angular range. This corresponds to the angular integrated
energy distribution
$D(x,t;\\{\theta_{\rm max}\\})=\int_{0}^{\theta_{\rm max}}{\rm
d}\theta\,\bar{D}(x,\theta,t)\,.$ (23)
In Fig. 4, we show the comparisons of the medium evolved spectra for different
angular choices of $\theta_{\rm max}=0.2,\,0.4$ and $0.6$. It is worth
pointing out that the spectrum fully integrated over $\theta$ (dotted lines)
was analyzed in the literature for purely radiative cascades in the cases of
infinite media Jeon:2003gi ; Blaizot:2013hx ; Blaizot:2015jea ; Mehtar-
Tani:2018zba , finite media Adhya:2019qse ; Adhya:2021kws ; Isaksen:2022pkj
as well as including in-medium collisional elastic scattering Kutak:2018dim ;
Iancu:2015uja ; Schlichting:2020lef . It was realized that a turbulent
cascade, transferring energy from hard to soft modes, was responsible for the
characteristic behavior in the low-$x$ regime.
In Fig. 4, one can see the turbulent behavior in the region $0.3\leq x\leq 1$
for gluons within an opening angle $\theta_{\rm max}$. Since the hard part of
the spectra ($x\sim 1$) is mainly driven by collinear splittings, we observe
that the spectra remain essentially unchanged when the opening angle is
increased to the much broader cones of $\theta_{\rm max}=0.4$ and $0.6$. Thus,
the hard and intermediate energetic gluons are largely confined to a narrow
cone. On increasing the medium size to $L_{\rm eff}=6\,$fm (right panels), we
observe a depletion of the peak at high $x$, resulting in more particles being
transferred to intermediate and small $x$ for all angles. This is true for
both the static and expanding finite media. The finite-size effect in the
medium-induced emissions shows up as a dip around $x\sim 0.3$, which would
otherwise be much flatter for infinite media. Next, in the softer region
$(0.001\leq x\leq 0.3)$, the accumulation of soft gluons towards the medium
scale leads to a significant broadening effect when the angular region is
increased from the narrow $\theta_{\rm max}=0.2$ to the wider $\theta_{\rm
max}=0.4$ and $0.6$. This region is driven simultaneously by collisional and
radiative processes. Let us note that opening up the polar angle as well as
increasing the medium size recovers more soft gluons for all the medium
profiles. Interestingly, we observe that the scaling between the different
medium profiles is universal for all the jet opening angles, in contrast to
the fully angle-integrated distributions. In the following, we further probe
the region of the soft gluons in angular space.
Figure 5: The angular distributions $\bar{D}(\theta,t)$ in different ranges of
$x$ for the effective medium length $L_{\rm eff}=4\,$fm (left column) and
$6\,$fm (right column). The upper panels show the distributions for the semi-
hard and hard gluons, while the lower panels show the ones for the soft and
semi-soft gluons.
Next, we analyze the angular dependence of the spectra for all the medium
profiles in four different $x$ ranges, restricting ourselves again to the same
equivalent in-medium path $L_{\rm eff}$ for the different medium scenarios.
These are: $0.5\leq x\leq 0.9$ (hard) and $0.3\leq x\leq 0.5$ (semi-hard),
which capture the gluons undergoing less in-medium quenching, as well as
$0.1\leq x\leq 0.3$ (semi-soft) and $0.01\leq x\leq 0.1$ (soft). For our
purposes, we have chosen the angular region from $0.1$ to $0.6$ in $\theta$ to
recover as much of the phase space of interest as possible. More importantly,
this range overlaps with experimental studies of jet-quenching observables.
Each panel in Fig. 5 shows
the integrated energy distribution defined by
$\bar{D}(\theta,t;\\{x_{\rm min},x_{\rm max}\\})=\int_{x_{\rm min}}^{x_{\rm
max}}{\rm d}x\,\bar{D}(x,\theta,t)\,.$ (24)
In Fig. 5, for the hard and semi-hard regimes (upper panels) we observe that
the angular distribution drops sharply for hard gluons as the energy-momentum
conservation imposes a restriction on the available phase space for large
angle scatterings and tends towards the expected Coulomb tail $\sim
1/\theta^{4}$ Mehtar-Tani:2022zwf . In all the panels, we observe the
transverse momentum broadening effect where the energy is re-distributed to
larger angles due to elastic scatterings with the medium quasi-particles as
well as subsequent gluon splittings. For a larger medium length (right
column), we observe a greater magnitude of broadening, as the jet spends more
time within the medium. More strikingly, in the hard, semi-hard, and semi-soft
regimes, the broadening effect is significantly larger in the static medium
than in the expanding scenarios. Only in the genuinely soft regime do we
observe a similar amount of broadening.
The qualitative features of the results presented in Fig. 5 are as follows:
* •
Collinear radiation causes depletion of the particles around the hard momentum
sector $x\sim 0.5\,$–$\,0.9$. Subsequent splittings from hard particles to
softer fragments undergo a successive broadening by which the energy is
deposited in the soft sector of $x\sim 0.1\,$–$\,0.01$.
* •
The broadening effect is a combination of a decrease in momentum due to
subsequent splittings as well as elastic collisions with the medium. The
broadening effect is visualized as the energy deposited at small angles,
$\theta=0.2$, being transported to larger angles, $\theta=0.4$ and $0.6$, out
of the jet cone.
* •
We observe insignificant broadening of the hard partons (a sub-leading effect
compared to splittings), such that only the broadening near the medium scale,
$x\sim 0.01$, contributes to the energy at large angles.
* •
The broadening effect in the soft sector is a nearly universal effect with
respect to the static or expanding medium and the finite medium-size
corrections. It is also universal with respect to the starting time for
quenching of the Bjorken profile.
In order to quantify the magnitude of the transverse-momentum broadening
effect, in the next section we estimate more precisely the in-cone gluon
fraction in both the soft and the hard limits of the gluon momentum fraction.
## 5 Soft versus hard gluons in a jet cone
Figure 6: Comparisons of the fraction of the gluon energy in a cone
$P(\theta,t;\\{x_{\rm min},x_{\rm max}\\})$ as a function of its opening angle
$\theta$ within different $x$-ranges, $x_{\rm min}\leq x\leq x_{\rm max}$, for
the effective medium length $L_{\rm eff}=4\,$fm (left panel) and $6\,$fm
(right panel).
In this section, we study the features of the jet in-cone energy loss as a
function of the opening angle $\theta$ and of the momentum fraction $x$. As
before, we restrict ourselves to considering the same $L_{\rm eff}$ in all
three medium scenarios. In order to more precisely pin down the effect of
medium expansion, let us define the fraction of the gluon energy contained
inside a cone of the opening angle $\theta$ within the $x$-range $x_{\rm
min}\leq x\leq x_{\rm max}$. This can be expressed as
$P(\theta,t;\\{x_{\rm min},x_{\rm max}\\})=\frac{\int_{x_{\rm min}}^{x_{\rm
max}}{\rm d}x\int_{0}^{\theta}{\rm
d}\theta^{\prime}\,\bar{D}(x,\theta^{\prime},t)}{\int_{x_{\rm min}}^{x_{\rm
max}}{\rm d}x\int_{0}^{\pi}{\rm
d}\theta^{\prime}\,\bar{D}(x,\theta^{\prime},t)}\,.$ (25)
In the above equation, the denominator counts the total amount of energy
contained in the chosen $x$-range for any angle. The numerical results are
shown in Fig. 6, where we analyze the angular structure of the energy
distribution of the gluons by dividing the medium evolved spectra into two
regions corresponding to the large energy fraction: $0.5\leq x\leq 1.0$
(square markers) and the soft sector: $0.01\leq x\leq 0.1$ (round markers).
For the hard sector (square markers), the fraction of energy in expanding
media is already quite close to unity for very small cone angles.
Nevertheless, the static medium recovers most of the energy already at
$\theta=0.2$. Hence, for phenomenological studies, one does not expect to be
very sensitive to the details of medium expansion.
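On a sampled grid, Eq. (25) reduces to two nested numerical integrals; a minimal sketch (ours; `Dbar` is an assumed 2D array holding $\bar{D}(x,\theta,t)$ on the grids `x_grid` and `th_grid`):

```python
import numpy as np

def _trapz(y, x, axis=-1):
    """Trapezoid rule along one axis (explicit, version-independent)."""
    lo = [slice(None)] * y.ndim; lo[axis] = slice(None, -1)
    hi = [slice(None)] * y.ndim; hi[axis] = slice(1, None)
    return np.sum(0.5 * np.diff(x) * (y[tuple(lo)] + y[tuple(hi)]), axis=axis)

def in_cone_fraction(Dbar, x_grid, th_grid, x_min, x_max, theta):
    """Eq. (25): fraction of gluon energy inside a cone of opening angle
    `theta`, restricted to x_min <= x <= x_max."""
    mx = (x_grid >= x_min) & (x_grid <= x_max)
    mt = th_grid <= theta
    num = _trapz(_trapz(Dbar[np.ix_(mx, mt)], th_grid[mt]), x_grid[mx])
    den = _trapz(_trapz(Dbar[mx, :], th_grid), x_grid[mx])
    return float(num / den)
```

As a sanity check, for a toy distribution uniform in $\theta$ up to $\pi$ the fraction at $\theta=\pi/2$ comes out close to $1/2$, and it reaches unity when the cone covers the full angular range, matching the normalization of the denominator.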
In the soft sector (round markers), the gluon cascade has developed a
significant width and one needs to go to large angles $\theta\gg 0.6$ to
recover the full energy content. However, for $L_{\rm eff}=6\,$fm we clearly
observe that the cascade is narrower in the expanding medium than in the
static one (the situation for $L_{\rm eff}=4\,$fm is less clear). The
potential to be sensitive to the details of the medium expansion through the
width of the angular distribution at small-$x$ is interesting from a
phenomenological point of view.
## 6 Conclusions and outlook
In this work, we have obtained spectra of jet particles that undergo
scatterings and induced coherent splittings in a time-dependent medium, which
is parameterized by the time-dependent transport coefficient $\hat{q}$ and the
associated splitting kernels Adhya:2019qse of Eqs. (2) and (11), corresponding
to the static and Bjorken-expanding medium, respectively. Numerical results
have been obtained using the MINCAS Markov Chain Monte Carlo algorithm, which
calculates for the first time the corresponding fragmentation functions due to
evolution via scatterings as well as medium-induced coherent splittings in the
expanding medium.
Our findings can be summarized as follows:
* •
The scaling approximations extracted from the singular limits of the purely
radiative cascade work reasonably well for the medium-evolved spectra.
* •
We find that during the evolution of the energetic parton inside the medium,
hard partons remain effectively collinear and the momentum broadening is
mainly caused by consecutive splittings rather than by medium collisions and
transverse-momentum exchanges. This type of behavior for hard partons is
visible in the distributions of $x$ and $k_{T}$ in Fig. 2, which at large $x$
and small $k_{T}$ largely follow a Gaussian distribution (due to the
predominance of multiple soft scatterings), while at large $x$ and large
$k_{T}$, rare hard medium interactions have a significant influence, leading
to a power-law behavior.
* •
As the momentum of these energetic partons degrades into the soft sector,
momentum broadening arises from transverse-momentum exchanges with the medium
through elastic scattering as well as from subsequent splittings into softer
fragments. However, the broadening due to subsequent gluon splittings in the
softer sector contributes to the out-of-cone energy loss at larger angles.
This kind of behavior becomes apparent in Fig. 4, where the full fragmentation
functions $D(x,t)$ are compared with their respective contributions within jet
cones of different sizes. As the jet-cone angle increases, contributions from
more and more soft particles are included, while the behavior at large $x$
receives almost no contributions from large angles.
* •
Harder jet fragments inside a cone are more sensitive to the details of the
medium expansion, whereas softer ones are less sensitive.
This effect is evident in Fig. 4, which strongly suggests a nearly universal
behavior for different medium types at small $x$ and for the same value of
$L_{\rm eff}$, whereas differences occur at larger values of $x$.
* •
The fraction of energy within an $x$-bin is also analyzed to distinguish
between different medium profiles. In Fig. 6, we observe persistent
differences in the soft sector, implying that the cascades in the expanding
media are still relatively more collimated than their counterparts in the
static scenarios.
* •
In the quantities we have chosen to analyze, we have not observed any
sensitivity to the hydro initialization time $t_{0}$, in contrast to $v_{2}$
of jets Adhya:2021kws , see also Ref. Andres:2022bql .
Naturally, for a similar in-medium path length in the static and expanding
media, a leading parton will evolve much less in the latter due to the rapid
dilution of the medium density. This leads to far fewer low-$x$ gluons and a
narrower profile in the polar angle $\theta$. Comparing the distributions at the
equivalent effective path length $L_{\rm eff}$ should reduce this “trivial”
effect. Nevertheless, our results indicate subtle differences between the
developing profiles of the cascade in the static and expanding media in both
the large- and small-$x$ regimes.
One of the natural extensions of the results presented in this paper is
introducing the transverse-momentum-dependent splitting kernels for the
individual medium profiles to solve the evolution equations for both quark-
and gluon-initiated jets. Secondly, one should include the effects of thermal
re-scattering whenever the energy of the fragments becomes comparable to the
local medium temperature, i.e. $xp_{0}^{+}\sim T$, see e.g. Ref.
Schlichting:2020lef . Furthermore, one can study the role of these
modifications in the radial distributions of jet quenching, the so-called
jet-shape functions, by comparison with the recent data from the CMS
CMS:2018zze and ATLAS ATLAS:2019pid experiments at the LHC. However,
these studies are beyond the scope of the present paper and will be reported
in a separate upcoming work.
## Acknowledgements
SPA and KK acknowledge the support of the Polish Academy of Sciences through
the grant agreement PAN.BFD.S.BDN.612.022.2021-PASIFIC 1, QGPAnatomy. This
work received funding from the European Union’s Horizon 2020 research and
innovation program under the Maria Skłodowska-Curie grant agreement No. 847639
and from the Polish Ministry of Education and Science. The research of MR was
supported by the Polish National Science Centre (NCN) grant no.
DEC-2021/05/X/ST2/01340. KT is supported by a Starting Grant from Trond Mohn
Foundation (BFS2018REK01) and the University of Bergen.
## References
* (1) W. Busza, K. Rajagopal and W. van der Schee, _Heavy Ion Collisions: The Big Picture, and the Big Questions_ , _Ann. Rev. Nucl. Part. Sci._ 68 (2018) 339 [1802.04801].
* (2) D. d’Enterria, _Jet quenching_ , _Landolt-Bornstein_ 23 (2010) 471 [0902.2011].
* (3) Y. Mehtar-Tani, J. G. Milhano and K. Tywoniuk, _Jet physics in heavy-ion collisions_ , _Int. J. Mod. Phys._ A28 (2013) 1340013 [1302.2579].
* (4) J.-P. Blaizot and Y. Mehtar-Tani, _Jet Structure in Heavy Ion Collisions_ , _Int. J. Mod. Phys._ E24 (2015) 1530012 [1503.05958].
* (5) M. Gyulassy and M. Plumer, _Jet Quenching in Dense Matter_ , _Phys. Lett._ B243 (1990) 432.
* (6) X.-N. Wang and M. Gyulassy, _Gluon shadowing and jet quenching in A + A collisions at s**(1/2) = 200-GeV_ , _Phys. Rev. Lett._ 68 (1992) 1480.
* (7) STAR collaboration, _Disappearance of back-to-back high $p_{T}$ hadron correlations in central Au+Au collisions at $\sqrt{s_{NN}}$ = 200-GeV_, _Phys. Rev. Lett._ 90 (2003) 082302 [nucl-ex/0210033].
* (8) ATLAS collaboration, _Observation of a Centrality-Dependent Dijet Asymmetry in Lead-Lead Collisions at $\sqrt{s_{NN}}=2.77$ TeV with the ATLAS Detector at the LHC_, _Phys. Rev. Lett._ 105 (2010) 252303 [1011.6182].
* (9) H. Liu, K. Rajagopal and U. A. Wiedemann, _Calculating the jet quenching parameter from AdS/CFT_ , _Phys. Rev. Lett._ 97 (2006) 182301 [hep-ph/0605178].
* (10) J. Ghiglieri, A. Kurkela, M. Strickland and A. Vuorinen, _Perturbative Thermal QCD: Formalism and Applications_ , _Phys. Rept._ 880 (2020) 1 [2002.10188].
* (11) R. Baier, Y. L. Dokshitzer, S. Peigne and D. Schiff, _Induced gluon radiation in a QCD medium_ , _Phys. Lett._ B345 (1995) 277 [hep-ph/9411409].
* (12) R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne and D. Schiff, _The Landau-Pomeranchuk-Migdal effect in QED_ , _Nucl. Phys._ B478 (1996) 577 [hep-ph/9604327].
* (13) B. G. Zakharov, _Fully quantum treatment of the Landau-Pomeranchuk-Migdal effect in QED and QCD_ , _JETP Lett._ 63 (1996) 952 [hep-ph/9607440].
* (14) B. G. Zakharov, _Radiative energy loss of high-energy quarks in finite size nuclear matter and quark - gluon plasma_ , _JETP Lett._ 65 (1997) 615 [hep-ph/9704255].
* (15) B. G. Zakharov, _Transverse spectra of radiation processes in-medium_ , _JETP Lett._ 70 (1999) 176 [hep-ph/9906536].
* (16) R. Baier, D. Schiff and B. G. Zakharov, _Energy loss in perturbative QCD_ , _Ann. Rev. Nucl. Part. Sci._ 50 (2000) 37 [hep-ph/0002198].
* (17) M. Gyulassy, P. Levai and I. Vitev, _NonAbelian energy loss at finite opacity_ , _Phys. Rev. Lett._ 85 (2000) 5535 [nucl-th/0005032].
* (18) X.-f. Guo and X.-N. Wang, _Multiple scattering, parton energy loss and modified fragmentation functions in deeply inelastic e A scattering_ , _Phys. Rev. Lett._ 85 (2000) 3591 [hep-ph/0005044].
* (19) J. Barata, Y. Mehtar-Tani, A. Soto-Ontoso and K. Tywoniuk, _Revisiting transverse momentum broadening in dense QCD media_ , _Phys. Rev. D_ 104 (2021) 054047 [2009.13667].
* (20) J. Barata and Y. Mehtar-Tani, _Improved opacity expansion at NNLO for medium induced gluon radiation_ , _JHEP_ 10 (2020) 176 [2004.02323].
* (21) Y. Mehtar-Tani and K. Tywoniuk, _Improved opacity expansion for medium-induced parton splitting_ , _JHEP_ 06 (2020) 187 [1910.02032].
* (22) Y. Mehtar-Tani, _Gluon bremsstrahlung in finite media beyond multiple soft scattering approximation_ , _JHEP_ 07 (2019) 057 [1903.00506].
* (23) R. Baier, A. H. Mueller, D. Schiff and D. T. Son, _’Bottom up’ thermalization in heavy ion collisions_ , _Phys. Lett._ B502 (2001) 51 [hep-ph/0009237].
* (24) S. Jeon and G. D. Moore, _Energy loss of leading partons in a thermal QCD medium_ , _Phys. Rev._ C71 (2005) 034901 [hep-ph/0309332].
* (25) P. B. Arnold, G. D. Moore and L. G. Yaffe, _Photon and gluon emission in relativistic plasmas_ , _JHEP_ 06 (2002) 030 [hep-ph/0204343].
* (26) J. Ghiglieri, G. D. Moore and D. Teaney, _Jet-Medium Interactions at NLO in a Weakly-Coupled Quark-Gluon Plasma_ , _JHEP_ 03 (2016) 095 [1509.07773].
* (27) A. Kurkela, A. Mazeliauskas, J.-F. Paquet, S. Schlichting and D. Teaney, _Matching the Nonequilibrium Initial Stage of Heavy Ion Collisions to Hydrodynamics with QCD Kinetic Theory_ , _Phys. Rev. Lett._ 122 (2019) 122302 [1805.01604].
* (28) C. A. Salgado and U. A. Wiedemann, _Calculating quenching weights_ , _Phys. Rev._ D68 (2003) 014008 [hep-ph/0302184].
* (29) K. Zapp, G. Ingelman, J. Rathsman, J. Stachel and U. A. Wiedemann, _A Monte Carlo Model for ’Jet Quenching’_ , _Eur. Phys. J._ C60 (2009) 617 [0804.3568].
* (30) N. Armesto, L. Cunqueiro and C. A. Salgado, _Q-PYTHIA: A Medium-modified implementation of final state radiation_ , _Eur. Phys. J._ C63 (2009) 679 [0907.1014].
* (31) B. Schenke, C. Gale and S. Jeon, _MARTINI: An Event generator for relativistic heavy-ion collisions_ , _Phys. Rev._ C80 (2009) 054913 [0909.2037].
* (32) I. P. Lokhtin, A. V. Belyaev and A. M. Snigirev, _Jet quenching pattern at LHC in PYQUEN model_ , _Eur. Phys. J._ C71 (2011) 1650 [1103.1853].
* (33) J. Casalderrey-Solana, D. C. Gulhan, J. G. Milhano, D. Pablos and K. Rajagopal, _A Hybrid Strong/Weak Coupling Approach to Jet Quenching_ , _JHEP_ 10 (2014) 019 [1405.3864].
* (34) K. Kutak, W. Płaczek and R. Straka, _Solutions of evolution equations for medium-induced QCD cascades_ , _Eur. Phys. J. C_ 79 (2019) 317 [1811.06390].
* (35) E. Blanco, K. Kutak, W. Płaczek, M. Rohrmoser and R. Straka, _Medium induced QCD cascades: broadening and rescattering during branching_ , _JHEP_ 04 (2021) 014 [2009.03876].
* (36) S. Caron-Huot and C. Gale, _Finite-size effects on the radiative energy loss of a fast parton in hot and dense strongly interacting matter_ , _Phys. Rev. C_ 82 (2010) 064902 [1006.2379].
* (37) X. Feal and R. Vazquez, _Intensity of gluon bremsstrahlung in a finite plasma_ , _Phys. Rev. D_ 98 (2018) 074029 [1811.01591].
* (38) C. Andres, L. Apolinário and F. Dominguez, _Medium-induced gluon radiation with full resummation of multiple scatterings for realistic parton-medium interactions_ , _JHEP_ 07 (2020) 114 [2002.01517].
* (39) J.-P. Blaizot, E. Iancu and Y. Mehtar-Tani, _Medium-induced QCD cascade: democratic branching and wave turbulence_ , _Phys. Rev. Lett._ 111 (2013) 052001 [1301.6102].
* (40) J.-P. Blaizot, F. Dominguez, E. Iancu and Y. Mehtar-Tani, _Medium-induced gluon branching_ , _JHEP_ 01 (2013) 143 [1209.4585].
* (41) S. P. Adhya, C. A. Salgado, M. Spousta and K. Tywoniuk, _Multi-partonic medium induced cascades in expanding media_ , _Eur. Phys. J. C_ 82 (2022) 20 [2106.02592].
* (42) Y. Mehtar-Tani, S. Schlichting and I. Soudi, _Jet thermalization in QCD kinetic theory_ , 2209.10569.
* (43) S. P. Adhya, C. A. Salgado, M. Spousta and K. Tywoniuk, _Medium-induced cascade in expanding media_ , _JHEP_ 07 (2020) 150 [1911.12193].
* (44) C. A. Salgado and U. A. Wiedemann, _A Dynamical scaling law for jet tomography_ , _Phys. Rev. Lett._ 89 (2002) 092303 [hep-ph/0204221].
* (45) C. Andres, N. Armesto, H. Niemi, R. Paatelainen and C. A. Salgado, _Jet quenching as a probe of the initial stages in heavy-ion collisions_ , _Phys. Lett. B_ 803 (2020) 135318 [1902.03231].
* (46) J.-P. Blaizot, L. Fister and Y. Mehtar-Tani, _Angular distribution of medium-induced QCD cascades_ , _Nucl. Phys._ A940 (2015) 67 [1409.6202].
* (47) J. Barata, A. V. Sadofyev and C. A. Salgado, _Jet broadening in dense inhomogeneous matter_ , _Phys. Rev. D_ 105 (2022) 114010 [2202.08847].
* (48) S. Schlichting and I. Soudi, _Splitting rates in QCD plasmas from a non-perturbative determination of the momentum broadening kernel $C(q_{\bot})$_, 2111.13731.
* (49) J. H. Isaksen, A. Takacs and K. Tywoniuk, _A unified picture of medium-induced radiation_ , 2206.02811.
* (50) J. Barata, F. Domínguez, C. Salgado and V. Vila, _A modified in-medium evolution equation with color coherence_ , _JHEP_ 05 (2021) 148 [2101.12135].
* (51) R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne and D. Schiff, _Radiative energy loss of high-energy quarks and gluons in a finite volume quark - gluon plasma_ , _Nucl. Phys. B_ 483 (1997) 291 [hep-ph/9607355].
* (52) R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne and D. Schiff, _Radiative energy loss and p(T) broadening of high-energy partons in nuclei_ , _Nucl. Phys. B_ 484 (1997) 265 [hep-ph/9608322].
* (53) R. Baier, Y. L. Dokshitzer, A. H. Mueller and D. Schiff, _Radiative energy loss of high-energy partons traversing an expanding QCD plasma_ , _Phys. Rev. C_ 58 (1998) 1706 [hep-ph/9803473].
* (54) P. B. Arnold, _Simple Formula for High-Energy Gluon Bremsstrahlung in a Finite, Expanding Medium_ , _Phys. Rev._ D79 (2009) 065025 [0808.2767].
* (55) E. Blanco, K. Kutak, W. Placzek, M. Rohrmoser and K. Tywoniuk, _System of evolution equations for quark and gluon jet quenching with broadening_ , _Eur. Phys. J. C_ 82 (2022) 355 [2109.05918].
* (56) J.-P. Blaizot, F. Dominguez, E. Iancu and Y. Mehtar-Tani, _Probabilistic picture for medium-induced jet evolution_ , _JHEP_ 06 (2014) 075 [1311.5823].
* (57) P. Caucal, E. Iancu and G. Soyez, _Jet radiation in a longitudinally expanding medium_ , _JHEP_ 04 (2021) 209 [2012.01457].
* (58) R. Baier, Y. L. Dokshitzer, A. H. Mueller and D. Schiff, _Quenching of hadron spectra in media_ , _JHEP_ 09 (2001) 033 [hep-ph/0106347].
* (59) J.-P. Blaizot, Y. Mehtar-Tani and M. A. C. Torres, _Angular structure of the in-medium QCD cascade_ , _Phys. Rev. Lett._ 114 (2015) 222002 [1407.0326].
* (60) M. Gyulassy, P. Levai and I. Vitev, _Reaction operator approach to multiple elastic scatterings_ , _Phys. Rev. D_ 66 (2002) 014005 [nucl-th/0201078].
* (61) J.-w. Qiu and I. Vitev, _Transverse momentum diffusion and broadening of the back-to-back dihadron correlation function_ , _Phys. Lett. B_ 570 (2003) 161 [nucl-th/0306039].
* (62) F. D’Eramo, H. Liu and K. Rajagopal, _Transverse Momentum Broadening and the Jet Quenching Parameter, Redux_ , _Phys. Rev. D_ 84 (2011) 065015 [1006.1367].
* (63) T. Liou, A. H. Mueller and B. Wu, _Radiative $p_{\bot}$-broadening of high-energy quarks and gluons in QCD matter_, _Nucl. Phys. A_ 916 (2013) 102 [1304.7677].
* (64) P. Caucal and Y. Mehtar-Tani, _Anomalous diffusion in QCD matter_ , _Phys. Rev. D_ 106 (2022) L051501 [2109.12041].
* (65) J. Ghiglieri and E. Weitz, _Classical vs quantum corrections to jet broadening in a weakly-coupled Quark-Gluon Plasma_ , _JHEP_ 11 (2022) 068 [2207.08842].
* (66) J.-P. Blaizot and Y. Mehtar-Tani, _Energy flow along the medium-induced parton cascade_ , _Annals Phys._ 368 (2016) 148 [1501.03443].
* (67) Y. Mehtar-Tani and S. Schlichting, _Universal quark to gluon ratio in medium-induced parton cascade_ , _JHEP_ 09 (2018) 144 [1807.06181].
* (68) E. Iancu and B. Wu, _Thermalization of mini-jets in a quark-gluon plasma_ , _JHEP_ 10 (2015) 155 [1506.07871].
* (69) S. Schlichting and I. Soudi, _Medium-induced fragmentation and equilibration of highly energetic partons_ , _JHEP_ 07 (2021) 077 [2008.04928].
* (70) C. Andres, L. Apolinário, F. Dominguez, M. G. Martinez and C. A. Salgado, _Medium-induced radiation with vacuum propagation in the pre-hydrodynamics phase_ , 2211.10161.
* (71) CMS collaboration, _Jet properties in PbPb and pp collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV_, _JHEP_ 05 (2018) 006 [1803.00042].
* (72) ATLAS collaboration, _Measurement of angular and momentum distributions of charged particles within and around jets in Pb+Pb and $pp$ collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV with the ATLAS detector_, _Phys. Rev. C_ 100 (2019) 064901 [1908.05264].
# Towards faster settlement in HTLC-based Cross-Chain Atomic Swaps
††thanks: This work was supported by the European Research Council (ERC) under
the European Union’s Horizon 2020 research (grant agreement 771527- BROWSEC),
by the Austrian Science Fund (FWF) through the projects PRO- FET (grant
agreement P31621) and the project W1255-N23, by the Austrian Research
Promotion Agency (FFG) through COMET K1 SBA and COMET K1 ABC, by the Vienna
Business Agency through the project Vienna Cybersecurity and Privacy Research
Center (VISP), by the Austrian Federal Ministry for Digital and Economic
Affairs, the National Foundation for Research, Technology and Development and
the Christian Doppler Research Association through the Christian Doppler
Laboratory Blockchain Technologies for the Internet of Things (CDL-BOT).
Subhra Mazumdar TU Wien, Christian Doppler Laboratory
Blockchain Technologies for the Internet of Things
Vienna, Austria
<EMAIL_ADDRESS>
###### Abstract
Hashed Timelock (HTLC)-based atomic swap protocols enable the exchange of
coins between two or more parties without relying on a trusted entity. The
protocol resembles an _American call option_ without premium: it allows the
finalization of a deal within a certain period, which puts the swap initiator
at liberty to delay before deciding whether to proceed. If she finds the deal
unprofitable, she simply waits for the time period of the contract to elapse.
However, the counterparty is at a loss, since his assets remain locked in the
contract. The best he can do is predict the initiator’s behavior based on the
asset’s future price fluctuation. But such prediction is difficult, as
cryptocurrencies are quite volatile and their prices fluctuate abruptly. We
perform a game-theoretic analysis of HTLC-based atomic cross-chain swaps to
predict whether a swap will succeed or not. From the strategic
behavior of the players, we infer that this model lacks fairness. We propose
_Quick Swap_ , a two-party protocol based on hashlock and timelock that
fosters faster settlement of the swap. The parties are required to lock a
griefing-premium along with the principal amount. If a party griefs, he ends
up paying the griefing-premium. If a party finds a deal unfavorable, he has
the provision to cancel the swap. We prove that _Quick Swap_ is more
participant-friendly than HTLC-based atomic swap. Our work is the first to
propose a protocol to ensure fairness of atomic-swap in a cyclic multi-party
setting.
###### Index Terms:
Cryptocurrencies, Atomic Swap, Hashed Timelock (HTLC), Griefing Attack, Game
Theory, Griefing-Premium, Faster Settlement
## I Introduction
Centralized exchanges enable users to trade one cryptocurrency for another.
For example, suppose _Alice_ wants to exchange $x_{a}$ coins for _Bob_'s
$y_{b}$ coins. The exchange can be done by involving a third party _Carol_:
_Alice_ deposits $x_{a}+x$ coins and _Bob_ deposits $y_{b}$ coins with
_Carol_, where $x$ is the service fee charged by _Carol_ for offering the
swap service. If _Carol_ is honest, she hands over $x_{a}$ coins to _Bob_,
$y_{b}$ coins to _Alice_, and keeps the service charge. If she is malicious,
she can simply run away with _Alice_'s and _Bob_'s money. It therefore
becomes very important to have decentralized exchange of cryptocurrencies. With
the introduction of Blockchain [Nakamoto(2008)], it is now possible to realize
decentralized protocols for atomic swaps without relying on any trusted third
party [TN1(2013), Thomas and Schwartz(2015), Herlihy(2018), Zamyatin et
al.(2019), Zakhary et al.(2020), Tairi et al.(2021), Moreno-Sanchez et
al.(2020), Lys et al.(2021), Thyagarajan et al.(2022), Narayanam et
al.(2022)]. _Alice_ and _Bob_ can now safely lock their coins and have rules
encoded in the smart contract. An exchange of assets leads to a change of
ownership: it either succeeds or fails in its entirety. Bitcoin-based
blockchains primarily leverage Hashed Timelock Contracts (HTLC) [Herlihy(2018),
Borkowski et al.(2019), Dai et al.(2020), Narayanam et al.(2022)] for
exchanging Bitcoins (BTC) with other cryptocurrencies like Litecoins (LTC),
Ether (ETH), and ERC tokens.
Figure 1: HTLC-based atomic swap. We assume that there is no waiting involved
when the parties lock their coins, and that both are willing to exchange.
We explain a two-party HTLC-based atomic swap protocol with an example shown
in Fig. 1. _Alice_ wants to exchange $x_{a}$ coins for $y_{b}$ coins of _Bob_.
Both of them have accounts in two different blockchains Chain-a and Chain-b.
_Alice_ samples a secret $s$, generates a hash $H=\mathcal{H}(s)$ and shares
it with _Bob_ at time $t_{0}$. The protocol comprises two phases: _Lock_ and
_Claim_. _Lock phase_ defines the time interval within which the parties have
locked their assets in the contracts instantiated in the respective
blockchains. _Alice_ locks $x_{a}$ coins in Chain-a at time $t_{1}$. The coins
are locked in a contract whose spending conditions are as follows: _if Bob
provides the secret $s$ within $t_{a}$ units of time, then he claims $x_{a}$
coins else Alice initiates a refund after $t_{a}$ elapses_. Once the
transaction is confirmed in Chain-a in the next $\tau_{a}$ units of time,
_Bob_ locks $y_{b}$ coins in Chain-b at time $t_{2}$. He uses the same hash
value $H$ in the contract deployed in Chain-b for locking his coins. The
spending conditions are different: _if Alice provides the secret $s$ within
$t_{b}$ units of time, then she claims $y_{b}$ coins else Bob initiates a
refund after $t_{b}$ elapses_, where $t_{a}>t_{b}$. It takes $\tau_{b}$ units
for the lock transaction to be confirmed in Chain-b. _Claim phase_ signals the
period within which the parties claim their assets. In the best case involving
zero waiting, _Alice_ broadcasts the claim transaction at $t_{3}$, releases
the preimage $s$ and claims $y_{b}$ coins from _Bob_. The latter uses the
preimage $s$ and broadcasts a transaction to claim $x_{a}$ coins at time
$t_{4}$, where $t_{4}-t_{3}=t_{\epsilon}$ is the time taken by _Bob_ to
observe _Alice_ ’s transaction in Chain-b. Once the ownership of the assets
changes successfully, an instance of the protocol succeeds.
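The timing constraints implicit in this description can be sketched as a small consistency check. The concrete numbers below are illustrative assumptions (in hours), not values prescribed by the protocol; only the orderings matter.

```python
# Illustrative timeline of a successful HTLC-based swap (all numbers are
# assumed example values in hours; only the ordering constraints matter).
t0 = 0             # Alice shares H = H(s) with Bob
t1 = 1             # Alice locks x_a coins on Chain-a (refundable after t_a)
tau_a = 3          # confirmation time on Chain-a
t2 = t1 + tau_a    # Bob locks y_b coins on Chain-b (refundable after t_b)
tau_b = 3          # confirmation time on Chain-b
t3 = t2 + tau_b    # Alice claims y_b on Chain-b, revealing the preimage s
t_eps = 1          # time for Bob to observe Alice's claim transaction
t4 = t3 + t_eps    # Bob uses s to claim x_a on Chain-a

t_a, t_b = 48, 24  # HTLC timeouts, with t_a > t_b

# Safety conditions: each claim must land before the matching refund opens.
assert t3 < t2 + t_b, "Alice must claim before Bob's refund unlocks"
assert t4 < t1 + t_a, "Bob must claim before Alice's refund unlocks"
assert t_a > t_b
```

The requirement $t_{a}>t_{b}$ is what gives Bob time to use the revealed preimage on Chain-a after Alice claims on Chain-b.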
### I-A Griefing Attack in Timelocked Contracts
Figure 2: Coins remain unutilized in HTLC-based atomic swap when (a) Bob
speculates on the deal and does not lock his coins, or (b) Bob speculates,
finds $t_{2}^{\prime}$ favourable for locking his coins, delaying by
$t_{2}^{\prime}-t_{2}$, and Alice speculates, finding
$t_{3}^{\prime\prime}=t_{5}^{\prime}-\epsilon$ favourable for claiming the
coins from Bob
One of the main disadvantages of HTLC-based atomic swap is that the parties
are not compelled to settle the transaction. It has already been shown in [Han
et al.(2019)], [zmn(2018)] that atomic swap is equivalent to an _American call
option_ without premium. In an _American call option_, the buyer is allowed to
exercise the contract no later than the strike time. We illustrate the
situation in Fig. 2. After _Alice_ has locked $x_{a}$ coins at $t_{1}$, the
time taken for the transaction to be confirmed in Chain-a is $\tau_{a}$.
Ideally, _Bob_ should begin locking his coins at $t_{2}=t_{1}+\tau_{a}$.
However, he may choose to speculate and delay within the window $t_{2}$ to
$t_{6}-(t_{b}+\epsilon+t_{\epsilon})$, or he may choose not to lock his coins
at all. If _Bob_ does not lock his coins, then _Alice_'s coins get locked
unnecessarily and she loses coins in paying fees to miners for the successful
mining of the refund transaction. This is termed a _draining attack_
[Eizinger et al.(2021)]. Let us assume that _Bob_ locks his coins at
$t_{2}^{\prime}\in[t_{2},t_{6}-(t_{b}+\epsilon+t_{\epsilon})]$. The time taken
for the transaction to be confirmed in Chain-b is $\tau_{b}$. If _Alice_
chooses not to delay, then she will claim the coins at time
$t_{3}^{\prime}=t_{2}^{\prime}+\tau_{b}$. However, _Alice_ may delay in
claiming the coins or just abort. If she chooses to claim the coins at time
$t_{3}^{\prime\prime}=t_{5}^{\prime}-\epsilon$ where
$t_{5}^{\prime}=t_{2}^{\prime}+t_{b}$, then _Bob_ gets to claim $x_{a}$
coins at $t_{4}^{\prime\prime}=t_{3}^{\prime\prime}+t_{\epsilon}$. So _Bob_
has to wait a duration of $t_{4}^{\prime\prime}-t_{4}^{\prime}$ where
$t_{4}^{\prime}=t_{3}^{\prime}+t_{\epsilon}$. Upon simplification, we observe
that he waits $t_{3}^{\prime\prime}-t_{3}^{\prime}=t_{5}^{\prime}-\epsilon-
t_{3}^{\prime}$, which is the delay due to _Alice_ ’s speculation. It may so
happen that _Alice_ does not want to claim $y_{b}$ coins, she waits for
$t_{b}$ units to elapse, and _Bob_ broadcasts refund transaction at time
$t_{5}^{\prime}=t_{2}^{\prime}+t_{b}$. _Alice_ broadcasts her refund
transaction after $t_{a}$ units elapse, i.e., at time $t_{6}=t_{1}+t_{a}$. If
_Alice_ chooses not to respond, then it leads to a _Griefing Attack_
[Robinson(2019)]. Coins remain unutilized in either of the blockchains leading
to substantial rise in opportunity cost. We define the attack formally.
###### Definition 1
_(Griefing Attacks in Atomic Swap)_ Given two parties A and B, such that A is
required to forward an HTLC of $x_{a}$ coins to B for a certain time period
$t_{a}$, and in turn, B must forward an HTLC of $y_{b}$ coins to A for a time
period $t_{b}:t_{a}>t_{b}$, a griefing attack can happen in the following
situations:
* •
_A locks $x_{a}$ coins and B doesn’t lock $y_{b}$ coins_: This leads to A’s
$x_{a}$ coins being locked for $t_{a}$ units, the loss in terms of collateral
cost being $\mathcal{O}(x_{a}t_{a})$.
* •
_A locks $x_{a}$ coins, B has locked $y_{b}$ coins, and A aborts_: In such a
situation, A griefs B at the cost of locking his coins for $t_{a}$ units of
time. B’s coins remain locked for $t_{b}$ units, the loss in terms of
collateral cost being $\mathcal{O}(y_{b}t_{b})$.
### Motivation for Griefing in Atomic Swap
A party may grief intentionally or may decide to abort when the situation is
not favourable for exchanging coins. We consider our parties to be either
genuinely interested in exchanging coins or malicious. We define the
characteristics of each type:
* •
_Interested to Exchange_ : A party who is willing to exchange coins but might
end up griefing depending on whether she finds a favourable exchange rate.
* •
_Malicious_ : A party whose only motive is to mount a Denial-of-Service (DoS)
attack on the counterparty. Such a party will not take any action after the
counterparty has locked coins for exchange. The gain of a malicious party is
the lost opportunity cost of the counterparty’s locked coins.
Enforcing the parties not to back out from the deal is a major challenge.
Additionally, we observe that HTLC lacks flexibility, as it does not provide
an option to cancel the contract when the situation turns out to be unfavorable.
_Is it possible to propose an atomic swap that allows cancellation but at the
same time penalizes malicious behavior?_
### I-B Contributions
* •
We model HTLC-based atomic cross-chain swap as a two-player sequential game
and analyze the success rate of the protocol as a function of exchange rate
and delay.
* •
We observe that HTLC-based atomic swap is not participant-friendly by estimating
the success rate of such a protocol.
* •
We propose _Quick Swap_ , a hashlock and timelock-based protocol compatible
with Bitcoin script. Our protocol is more robust and efficient, allowing easy
cancellation of trade and penalizing a party if it griefs.
* •
_Quick Swap_ can also be generalized to multi-party cyclic atomic swap for
countering griefing attacks.
## II Game Theoretic Analysis of HTLC-based atomic swap
### II-A System Model & Assumptions
The atomic swap protocol comprises two phases: the _Lock Phase_, lasting from
$t_{0}$ to just before $t_{3}$, and the _Claim Phase_, from time $t_{3}$ onward.
It proceeds sequentially, with the assets being locked first and then the
assets being claimed in the next phase. Given two parties Alice and Bob, their
strategy space consists of the actions _continue_ and _stop_. In this paper,
we will alternately use the term _stop_ and _abort_ , both denoting that a
party chooses not to take any action. After _Alice_ locks $x_{a}$ coins in
Chain-a, _Bob_ can choose to either _continue_ , i.e., lock $y_{b}$ coins in
Chain-b, or abort from the process. If _Bob_ aborts, there is no way for
_Alice_ to withdraw her coins before the timeout period $t_{a}$ elapses. If
_Bob_ locks his coins before that, _Alice_ can choose to continue with the
swap by claiming $y_{b}$ coins and releasing the preimage of $H$ before
$t_{b}$ elapses. If she chooses to _stop_, she will wait for $t_{b}$ to elapse
and Bob initiates a refund after that. If either party is malicious, then the
sole motive would be to keep the coins locked: if _Alice_ is malicious, she
will not initiate the _Claim Phase_, and if _Bob_ is malicious, he will simply
abort in the _Lock Phase_ without locking his coins.
### II-B Basic Setup
We denominate the coins locked by _Bob_ in terms of the value of _Alice_'s
coins at a given time $t$: $y_{b}$ coins are expressed as $x(y_{b},t)$, where
the price is decided based on the exchange rate prevailing at time $t$. At
time $t_{0}\approx t_{1}$, $x(y_{b},t_{1})$ is the price of _Bob_'s coins
decided to be exchanged for $x_{a}$ coins of _Alice_. The price of Bob's
coins at any time $t$ follows a geometric Brownian motion [Dipple et
al.(2020)]:
$\ln{\frac{x(y_{b},t+\tau)}{x(y_{b},t)}}=\Big{(}\mu-\frac{\sigma^{2}}{2}\Big{)}\tau+\sigma(W_{t+\tau}-W_{t})$
(1)
where $W$ follows a Wiener process, with drift $\mu$ and variance
$\sigma^{2}$ [Malliaris(1990)].
Given Eq. 1, the expected price $\mathcal{E}(x(y_{b},t),\lambda)$ of
$x(y_{b},t)$ at time $t+\lambda$, the probability density function
$P(x,x(y_{b},t),\lambda)$, and the cumulative density function
$C(x,x(y_{b},t),\lambda)$ of $x(y_{b},t)$ at time $t+\lambda$ are expressed
as follows:
$\begin{matrix}\mathcal{E}(x(y_{b},t),\lambda)=\mathbb{E}[x(y_{b},t+\lambda)|x(y_{b},t)]=x(y_{b},t)e^{\mu\lambda}\\\
\\\ P(x,x(y_{b},t),\lambda)=\mathbb{P}[x(y_{b},t+\lambda)=x|x(y_{b},t)]\\\
=\frac{e^{-\frac{\Big{(}\ln{\frac{x}{x(y_{b},t)}}-\Big{(}\mu-\frac{\sigma^{2}}{2}\Big{)}\lambda\Big{)}^{2}}{2\lambda\sigma^{2}}}}{\sqrt{2\pi\lambda}\sigma
x}\\\ \\\ C(x,x(y_{b},t),\lambda)=\mathbb{C}[x(y_{b},t+\lambda)\leq
x|x(y_{b},t)]\\\
=\frac{erfc{\Big{(}-\frac{\ln{\frac{x}{x(y_{b},t)}}-\Big{(}\mu-\frac{\sigma^{2}}{2}\Big{)}\lambda}{\sqrt{2\lambda}\sigma}\Big{)}}}{2}\\\
\\\ \end{matrix}$ (2)
where $erfc$ is the complementary error function, defined as:
$erfc(x)=\frac{2}{\sqrt{\pi}}\int_{x}^{\infty}e^{-t^{2}}\,dt$ (3)
The expressions are taken from [Xu et al.(2021)].
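As a sketch (not part of the paper), the three quantities above can be computed directly with the standard library; here `x0` stands for $x(y_{b},t)$ and `lam` for $\lambda$:

```python
import math

def expected_price(x0, lam, mu):
    """E[x(y_b, t+lam) | x(y_b, t) = x0] under geometric Brownian motion."""
    return x0 * math.exp(mu * lam)

def price_pdf(x, x0, lam, mu, sigma):
    """Lognormal density of the price at t+lam, given price x0 at t."""
    z = math.log(x / x0) - (mu - sigma**2 / 2) * lam
    return math.exp(-z**2 / (2 * sigma**2 * lam)) / (x * sigma * math.sqrt(2 * math.pi * lam))

def price_cdf(x, x0, lam, mu, sigma):
    """P[x(y_b, t+lam) <= x | x(y_b, t) = x0], via the complementary error function."""
    z = math.log(x / x0) - (mu - sigma**2 / 2) * lam
    return 0.5 * math.erfc(-z / (sigma * math.sqrt(2 * lam)))
```

For example, with $x(y_{b},t)=2$, $\mu=0.002$ and $\lambda=3$, the expected price is $2e^{0.006}\approx 2.012$.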
We assume both parties know each other’s parameters. Transaction fees are
assumed to be negligible compared to the amounts involved in the transactions.
The rest of the notations are defined in Table I.
Notation | Description
---|---
$x_{a}$ | Coins possessed by _Alice_ or A
$y_{b}$ | Coins of _Bob_ or B that A decides to buy for $x_{a}$ coins at time $t_{0}$
$\tau_{a}$ | Time taken for a transaction to get confirmed in Chain-a
$\tau_{b}$ | Time taken for a transaction to get confirmed in Chain-b
$t_{a}$ | HTLC forwarded by A to B in Chain-a, locking $x_{a}$, expires
$t_{b}$ | HTLC forwarded by B to A in Chain-b, locking $y_{b}$, expires
$t_{\epsilon}$ | Propagation delay
$\epsilon$ | Short time gap
$\theta_{1}$ | Belief of B regarding type of A being _interested to swap_
$\theta_{2}$ | Belief of A regarding type of B being _interested to swap_
$sp_{a}$ | Success premium of A, if swap succeeds
$sp_{b}$ | Success premium of B, if swap succeeds
$x(y_{b},t)$ | Price of $y_{b}$ coins in terms of $x_{a}$ coins at a given time $t$
$r_{a}$ | Time discounting factor of A
$r_{b}$ | Time discounting factor of B
$f_{a}$ | Transaction fee in Chain-a
$f_{b}$ | Transaction fee in Chain-b
$\mu$ | Wiener process drift
$\sigma^{2}$ | Wiener process variance
$\mathcal{E}(x(y_{b},t),\lambda)$ | Expected price of $x(y_{b},t)$ at time $t+\lambda$, also expressed
| as $\mathbb{E}[x(y_{b},t+\lambda)\mid x(y_{b},t)]=x(y_{b},t)e^{\mu\lambda}$
$P(x,x(y_{b},t),\lambda)$ | Probability density function at time $t+\lambda$
| given that price at time $t$ is $x(y_{b},t)$
$C(x,x(y_{b},t),\lambda)$ | Cumulative density function at time $t+\lambda$
| given that price at time $t$ is $x(y_{b},t)$
TABLE I: Notations used in the paper
### II-C Game Model
We model the interaction between two parties _Alice_ , denoted as A, and _Bob_
, denoted as B, as a sequential game $\Gamma_{swap}$. We assume that all the
miners in the blockchains Chain-a and Chain-b are honest. In our model, B
considers A to be _interested in exchanging coins_ with probability
$\theta_{1}$ and _malicious_ with probability $1-\theta_{1}$. A considers B to
be _interested_ with probability $\theta_{2}$ and _malicious_ with probability
$1-\theta_{2}$. Given the belief A has regarding B’s type, if she has chosen
to initiate the lock phase at time $t_{1}$ then B chooses either to lock
$y_{b}$ coins or abort based on his belief of A’s type at time $t_{2}$. If B
doesn’t lock his coins before $t_{1}+t_{a}$ elapses then the swap stands
canceled. If B locks his coins, A makes the next move at $t_{3}$, choosing
either to claim $y_{b}$ coins or abort. Once A has chosen an action, B follows
that.
The extensive-form game $\Gamma_{swap}$ is defined for players A and B. The
payoff function $u_{k,\theta}:S\times\mathbb{N}\rightarrow\mathbb{R}$ for any
player $k\in\\{\textbf{A,B}\\}$ and $\theta\in\\{\textrm{interested (int),
malicious}\\}$, where $S=\\{continue,stop\\}$, is denoted as
$u_{k,\theta}(s,t)$ where $s\in S$ and $t\in\mathbb{N}$. $S$ denotes the set
of actions for players A and B. $u_{k,\theta}(s,t)$ specifies the payoff the
player $k$ of type $\theta$ would get at time $t\in\mathbb{N}$, if the players
chose their action $s\in S$. The game begins with Nature (N) choosing the type
of players A and B. The probability of picking two players who are both
interested to exchange coins is $\theta_{1}\theta_{2}$, while the probability
of picking a malicious A (B) is $1-\theta_{1}$ ($1-\theta_{2}$); since the
players' types are selected independently, the probability of each
combination is the product of the individual probabilities. In the next step,
an _interested_ A selects her strategy of either _continue_ or _stop_ based
on her belief of B's type, whereas a _malicious_ A will always choose to
_continue_. Next, an _interested_ B chooses his strategy based on his belief
of A's type, whereas a _malicious_ B will choose _stop_. Finally, an
interested A will choose to claim the coins if the exchange rates are in her
favour, else she aborts. A _malicious_ A will always abort.
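Nature's opening move can be sketched as follows; the beliefs $\theta_{1}=\theta_{2}=0.5$ are the values used for Fig. 3, and the independence of the two draws is what makes each profile probability a simple product:

```python
theta1, theta2 = 0.5, 0.5  # beliefs used in the Fig. 3 setup

# Probability of each (type of A, type of B) profile chosen by Nature.
# P(A interested) = theta1, P(B interested) = theta2, drawn independently.
profiles = {
    ("interested", "interested"): theta1 * theta2,
    ("interested", "malicious"):  theta1 * (1 - theta2),
    ("malicious",  "interested"): (1 - theta1) * theta2,
    ("malicious",  "malicious"):  (1 - theta1) * (1 - theta2),
}

# The four profiles are exhaustive and mutually exclusive.
assert abs(sum(profiles.values()) - 1.0) < 1e-12
```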
(a) $sp_{a}=sp_{b}=0.3,r_{a}=r_{b}=0.005,\sigma=0.1$
(b) $sp_{a}=sp_{b}=0.3,r_{a}=r_{b}=0.01,\sigma=0.1$
(c) $sp_{a}=sp_{b}=0.3,r_{a}=r_{b}=0.005,\sigma=0.2$
Figure 3: 3-D plot of Swap Success Rate as a function of delay
($T,T^{\prime}$) (x-y axis) and A’s coins (z-axis), with different parameter
values, $\theta_{1}=\theta_{2}=0.5$
#### II-C1 Preference Structure
To calculate the payoff of each party for type _interested_ (we do not
calculate the payoff for type _malicious_ , as we are interested in
quantifying the success rate of HTLC-based atomic swap, and this is possible
only in the presence of _interested_ parties), we use backward induction,
starting from $t_{3}$. The claim phase starts at time $t_{3}$, when an
_interested_ A decides whether to _continue_ and reveal the preimage of $H$,
or _stop_ and wait for $t_{b}$ to elapse. If an _interested_ A decides to
claim $y_{b}$ coins, then an _interested_ B claims $x_{a}$ coins. We define
the utility or payoff for each strategy. If an _interested_ A chooses to
_continue_ at $t_{3}$, then the time taken for the redeem transaction to get
confirmed in Chain-b is $\tau_{b}$. We multiply the payoff of A upon
continuing by a factor $(1+sp_{a})$, where $sp_{a}$ is the success premium
(or $sp_{b}$ for B), to emphasize that rational parties gain higher utility
by swapping their assets rather than aborting. The utility of an _interested_
A is expressed with time discounted for the duration $\tau_{b}$. Similarly,
when an _interested_ B claims coins in Chain-a at time $t_{3}+t_{\epsilon}$,
the time taken for the transaction to be confirmed is $\tau_{a}$, hence the
utility is expressed with time discounted for the duration
$t_{\epsilon}+\tau_{a}$.
$\begin{matrix}u_{A,int}(cont,t_{3})=(1+sp_{a})\mathcal{E}(x(y_{b},t_{3}),\tau_{b})e^{-r_{a}\tau_{b}}\\\
-f_{b}\\\
u_{B,int}(cont,t_{3})=(1+sp_{b})x_{a}e^{-r_{b}(\tau_{a}+t_{\epsilon})}-f_{a}\\\
\end{matrix}$ (4)
where $t_{3}=t_{2}+\tau_{b}+T$, with $T\in[0,t_{b}-\tau_{b}-\epsilon]$ being
the delay by an _interested_ A before she decides to claim the coins. If an
_interested_ A decides to _stop_ , then _interested_ B has to abort as well.
The coins remain locked uselessly for the entire duration. The utility is
expressed as: $u_{A,int}(stop,t_{3})=x_{a}e^{-r_{a}t_{a}}-f_{a}$ and
$u_{B,int}(stop,t_{3})=\mathcal{E}(x(y_{b},t_{3}),t_{b})e^{-r_{b}t_{b}}-f_{b}$.
An _interested_ A will decide on _continue_ over _stop_ at $t_{3}$, if the
following condition holds:
$\begin{matrix}(1+sp_{a})\mathcal{E}(x(y_{b},t_{3}),\tau_{b})e^{-r_{a}\tau_{b}}-f_{b}\geq
x_{a}e^{-r_{a}t_{a}}-f_{a}\\\
or,x(y_{b},t_{3})e^{\mu\tau_{b}}\geq\frac{x_{a}e^{-r_{a}(t_{a}-\tau_{b})}-(f_{a}-f_{b})e^{r_{a}\tau_{b}}}{1+sp_{a}}\\\
or,x(y_{b},t_{3})\geq\frac{\Big{(}x_{a}e^{-r_{a}(t_{a}-\tau_{b})}-(f_{a}-f_{b})e^{r_{a}\tau_{b}}\Big{)}e^{-\mu\tau_{b}}}{(1+sp_{a})}\\\
\end{matrix}$
We derive $x(y_{b},t_{3})^{*}$ in terms of $x_{a}$ for which the above
inequality holds. If $x(y_{b},t_{3})\geq x(y_{b},t_{3})^{*}$ then A claims the
coins.
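Rearranging the inequality above gives the claim threshold in closed form; a minimal sketch follows, evaluated at the illustrative Fig. 3 parameter values (an illustration, not a result from the paper):

```python
import math

def claim_threshold(x_a, t_a, tau_b, r_a, mu, sp_a, f_a=0.0, f_b=0.0):
    """Price x(y_b, t3)* above which an interested A prefers 'continue' over 'stop'.

    Direct transcription of
    x(y_b, t3)* = (x_a e^{-r_a (t_a - tau_b)} - (f_a - f_b) e^{r_a tau_b})
                  * e^{-mu tau_b} / (1 + sp_a).
    """
    numerator = x_a * math.exp(-r_a * (t_a - tau_b)) - (f_a - f_b) * math.exp(r_a * tau_b)
    return numerator * math.exp(-mu * tau_b) / (1.0 + sp_a)

# Illustrative evaluation with the Fig. 3 parameters (fees set to 0).
x_star = claim_threshold(x_a=2.0, t_a=48.0, tau_b=3.0, r_a=0.005, mu=0.002, sp_a=0.3)
```

A larger success premium $sp_{a}$ lowers the threshold, i.e., A is willing to claim at less favourable exchange rates.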
In the second round of _lock phase_ , _interested_ B has to decide whether he
will _continue_ (i.e., lock $y_{b}$ coins) or _stop_ after A has locked
$x_{a}$ coins. Since _B_ is unaware of A’s type, he calculates the expected
payoff where, with probability $1-\theta_{1}$, he has to _stop_ at time
$t_{2}+\tau_{b}$, and with probability $\theta_{1}$ he can either _continue_,
if the price of his coins rises to $x(y_{b},t_{3})^{*}$, or _stop_ at time
$t_{2}+\tau_{b}+T$, if the price drops. The payoff is expressed as the
time-discounted expected utility over the duration $\tau_{b}+T$. An
_interested_ B calculates the probability that the price of his coins will
rise to $x(y_{b},t_{3})^{*}$ within time $\tau_{b}+T$.
$\begin{matrix}u_{B,int}(cont,t_{2})=\\\
\theta_{1}\Big{(}\frac{\Big{[}1-C\Big{(}x(y_{b},t_{3})^{*},x(y_{b},t_{2}),\tau_{b}+T\Big{)}\Big{]}u_{B,int}(cont,t_{3})}{e^{r_{b}(\tau_{b}+T)}}\\\
+\\\
\frac{\bigint_{0}^{x(y_{b},t_{3})^{*}}P\Big{(}p,x(y_{b},t_{2}),\tau_{b}\Big{)}u_{B,int}(stop,t_{3})\,dp}{e^{r_{b}\tau_{b}}}\Big{)}+\\\
(1-\theta_{1})\Big{(}\frac{u_{B,int}(stop,t_{3})}{e^{r_{b}\tau_{b}}}\Big{)}\end{matrix}$
(5)
_Interested_ A knows what action was taken at $t_{3}$ and so there is no need
to consider B’s type.
$\begin{matrix}u_{A,int}(cont,t_{2})=\\\
\frac{\bigint_{x(y_{b},t_{3})^{*}}^{\infty}P\Big{(}p,x(y_{b},t_{2}),\tau_{b}+T\Big{)}u_{A,int}(cont,t_{3})\,dp}{e^{r_{a}(\tau_{b}+T)}}\\\
+\frac{C\Big{(}x(y_{b},t_{3})^{*},x(y_{b},t_{2}),\tau_{b}\Big{)}u_{A,int}(stop,t_{3})}{e^{r_{a}\tau_{b}}}\\\
\end{matrix}$ (6)
If B _stops_ at $t_{2}$, then A’s coins remain locked for $t_{a}$ units of
time. The utility is expressed as
$u_{A,int}(stop,t_{2})=x_{a}e^{-r_{a}t_{a}}-f_{a}$ and
$u_{B,int}(stop,t_{2})=x(y_{b},t_{2})$. B’s decision depends on how the price
of $y_{b}$ evolves until $t_{3}$. He will prefer _continue_ over _stop_ if
the following condition holds: $u_{B,int}(cont,t_{2})\geq x(y_{b},t_{2})$.
We derive the range in which $x(y_{b},t_{2})$ must lie for the above
inequality to hold. B will continue if $x_{1}<x(y_{b},t_{2})\leq x_{2}$. If
$x(y_{b},t_{2})>x_{2}$ or $x(y_{b},t_{2})\leq x_{1}$ then B will stop.
In the first round of the lock phase, _interested_ A takes a decision on
whether she wants to lock coins or abort at time $t_{1}$ based on her belief
regarding B’s type. B can choose to take action at
$t_{2}=t_{1}+\tau_{a}+T^{\prime}$ where
$T^{\prime}\in[0,t_{a}-(t_{b}-\epsilon+t_{\epsilon})-\tau_{a}]$. A’s payoff
is expressed as the time-discounted expected utility over the duration
$t_{2}-t_{1}$.
$\begin{matrix}u_{A,int}(cont,t_{1})=\\\
\theta_{2}\Big{(}\frac{\bigint_{x_{1}}^{x_{2}}P\Big{(}p,x(y_{b},t_{1}),\tau_{a}+T^{\prime}\Big{)}u_{A,int}(cont,t_{2})\,dp}{e^{r_{a}(\tau_{a}+T^{\prime})}}+\\\
\frac{\Big{[}1-C\Big{(}x_{2},x(y_{b},t_{1}),\tau_{a}\Big{)}+C\Big{(}x_{1},x(y_{b},t_{1}),\tau_{a}\Big{)}\Big{]}u_{A,int}(stop,t_{2})}{e^{r_{a}\tau_{a}}}\Big{)}+\\\
(1-\theta_{2})\Big{(}\frac{u_{A,int}(stop,t_{2})}{e^{r_{a}\tau_{a}}}\Big{)}\end{matrix}$
(7)
_Interested_ B knows what action was taken at $t_{2}$ and so there is no need
to consider A’s type.
$\begin{matrix}u_{B,int}(cont,t_{1})=\\\
\frac{\bigint_{x_{1}}^{x_{2}}P\Big{(}p,x(y_{b},t_{1}),\tau_{a}+T^{\prime}\Big{)}u_{B,int}(cont,t_{2})\,dp}{e^{r_{b}(\tau_{a}+T^{\prime})}}+\\\
\frac{\bigint_{0}^{x_{1}}P\Big{(}p,x(y_{b},t_{1}),\tau_{a}\Big{)}u_{B,int}(stop,t_{2})\,dp}{e^{r_{b}\tau_{a}}}\end{matrix}$
(8)
If _interested_ A stops at $t_{1}$, then no one locks coins and we express the
utility as $u_{A,int}(stop,t_{1})=x_{a}$ and
$u_{B,int}(stop,t_{1})=x(y_{b},t_{1})$. A’s decision is based on how the
price of $y_{b}$ evolves until $t_{2}$. A will prefer _continue_ over _stop_
if the following condition holds: $u_{A,int}(cont,t_{1})\geq x_{a}$. We derive
the range $(x^{*},x^{*^{\prime}})$, in which $x_{a}$ must lie for the swap to
start.
###### Proposition 1
HTLC-based atomic swap is not participant-friendly.
_Proof_ : A’s willingness to participate in the atomic swap is decided by the
expected success of the protocol for a given set of parameters. The success
rate _(SR)_ of a swap is the probability that the swap succeeds after it has
been initiated, i.e., after A has locked coins at $t_{1}$ [Xu et al.(2021)].
For a given pair of $\theta_{1}$ and $\theta_{2}$, _SR_ is defined as a
function of $x_{a}$ (A’s coins or tokens), the delay $T$ by A at the final
step while claiming coins, and the delay $T^{\prime}$ by B at the second step
while locking $y_{b}$ coins. It is expressed as:
$\begin{matrix}SR(x_{a},T^{\prime},T)=\bigints_{x_{1}[x_{a}]}^{x_{2}[x_{a}]}P_{A}(p,T^{\prime})P_{B}(p,T)\,dp\end{matrix}$
(9)
where $P_{A}(p,T^{\prime})=\theta_{2}P(p,x(y_{b},t_{1}),\tau_{a}+T^{\prime})$
and
$P_{B}(p,T)=\theta_{1}\Big{(}1-C(x(y_{b},t_{3})[x_{a}],p,\tau_{b}+T)\Big{)}$.
We plot the success rate of the protocol in Fig. 3 (a-c); the parameters used
are $t_{\epsilon}\approx\epsilon=1\ hr$ and $\tau_{a}=\tau_{b}=3\ hrs$. As per
standard practice, $t_{a}=48\ hrs$ and $t_{b}=24\ hrs$ [Han et al.(2019)]. We
select $x_{a}\in[1,3]$ given $x(y_{b},t_{1})=2$. The parameter $T$ is varied
over $[0,20]$ and $T^{\prime}$ over $[0,21]$. The fees $f_{a}$ and $f_{b}$
are negligible, so we set them to 0. The success premiums are
$sp_{a}=sp_{b}=0.3$, the time-discounting factor $r_{a}=r_{b}$ is chosen from
$\\{0.005,0.01\\}$, $\sigma$ is varied between 0.1 and 0.2, and $\mu$ is
selected from $\\{-0.002,0.002\\}$. We observe that the success rate is $\geq
0.9$ in Fig. 3 (a-b), and around 0.6 in Fig. 3 (c), when $T=T^{\prime}=0$.
When $T$ or $T^{\prime}$ increases, the success rate drops, and beyond a
certain range it becomes $NA$ (not applicable) as
$u_{A,int}(cont,t_{1})<x_{a}$. The success rate is a function of $T$ and
$T^{\prime}$ that cannot be determined before the swap proceeds to the second
or third round. In the worst case, if $T\approx t_{b}$ and/or
$T^{\prime}\approx t_{a}-t_{b}$, then the success rate drops drastically as
the payoff upon continuing is too low. In such a situation, a party is better
off not participating in the swap at all than keeping his coins locked for a
full day.
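A rough numerical sketch of Eq. 9 follows, approximating the integral with a midpoint Riemann sum. The bounds `x1`, `x2` and the threshold `x_star` are assumed placeholders for this illustration, not the values derived from B's participation constraint; the qualitative effect (delay lowers SR) is what the sketch demonstrates.

```python
import math

def pdf(x, x0, lam, mu, sigma):
    """Lognormal transition density of the GBM price model (Eq. 2)."""
    z = math.log(x / x0) - (mu - sigma**2 / 2) * lam
    return math.exp(-z**2 / (2 * sigma**2 * lam)) / (x * sigma * math.sqrt(2 * math.pi * lam))

def cdf(x, x0, lam, mu, sigma):
    """Lognormal transition CDF of the GBM price model (Eq. 2)."""
    z = math.log(x / x0) - (mu - sigma**2 / 2) * lam
    return 0.5 * math.erfc(-z / (sigma * math.sqrt(2 * lam)))

def success_rate(x_star, x1, x2, x_yb_t1, T_prime, T, theta1=0.5, theta2=0.5,
                 tau_a=3.0, tau_b=3.0, mu=0.002, sigma=0.1, n=2000):
    """Midpoint Riemann-sum approximation of SR(x_a, T', T) from Eq. 9."""
    dp = (x2 - x1) / n
    total = 0.0
    for i in range(n):
        p = x1 + (i + 0.5) * dp
        p_a = theta2 * pdf(p, x_yb_t1, tau_a + T_prime, mu, sigma)   # P_A(p, T')
        p_b = theta1 * (1.0 - cdf(x_star, p, tau_b + T, mu, sigma))  # P_B(p, T)
        total += p_a * p_b * dp
    return total

# Illustrative evaluation: no delay vs. a long delay T by A.
sr_no_delay = success_rate(x_star=1.22, x1=1.0, x2=4.0, x_yb_t1=2.0, T_prime=0, T=0)
sr_delayed = success_rate(x_star=1.22, x1=1.0, x2=4.0, x_yb_t1=2.0, T_prime=0, T=18)
```

Consistent with the discussion above, increasing the delay $T$ lowers the computed success rate.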
## III Quick Swap: A protocol based on Ideal Swap Model
The flaw in the HTLC-based atomic swap is that either of the parties can
speculate and delay settling the transaction without losing anything. Our
objective is to force the parties to settle the transaction faster, and to
penalize delayed action. We provide a high-level overview of our proposed
protocol, where a party will lock the principal amount for the swap provided
he or she gets a guarantee of compensation upon suffering a griefing attack.
With respect to the previous example, _Alice_ has to lock $x_{a}$ coins for a
period of $t_{a}$ units. If _Bob_ griefs, _Alice_ ’s collateral locked will be
$\mathcal{O}(x_{a}t_{a})$. She calculates the collateral cost of locking
$x_{a}$ coins for $t_{a}$, let it be defined as $c(x_{a}t_{a})$. Similarly,
_Bob_ calculates the collateral cost of locking $y_{b}$ coins for $t_{b}$, let
it be $c(y_{b}t_{b})$. The steps for locking the coins proceed as follows:
* (i)
_Bob_ first locks $c(x_{a}t_{a})$ coins in Chain-b for $D+\Delta$ units of
time where $\tau_{a}+2\tau_{b}<D+\Delta<t_{b}$. We will explain later why an
additional $\Delta$ unit is chosen.
* (ii)
After _Alice_ gets a confirmation of the griefing-premium locked by _Bob_ ,
she locks $x_{a}$ for $t_{a}$ units and griefing-premium
$c(x_{a}t_{a})+c(y_{b}t_{b})$ for $D>\tau_{a}+\tau_{b}$ units of time, both
in Chain-a.
* (iii)
After _Bob_ gets a confirmation of the griefing-premium locked by _Alice_ , he
locks $y_{b}$ for $t_{b}$ units of time in Chain-b.
If _Bob_ doesn’t want to proceed after step [ii] and cancels the swap, then
_Alice_ can unlock $x_{a}$ coins from Chain-a. If _Bob_ griefs, then _Alice_
gets the compensation $c(x_{a}t_{a})$ after $D+\Delta$ elapses instead of
being griefed for $t_{a}$. The principle we follow here is _“coins you have
now are better than coins you have later”_: _Alice_ can thus use the
compensation $t_{b}-(D+\Delta)$ units earlier. Had we set the timelock for
locking $c(x_{a}t_{a})$ to $t_{a}$, _Bob_ could still grief by canceling the
contract at time $t_{b}-\epsilon$. We will discuss later how we should choose
$D$ to ensure faster compensation.
If _Alice_ initiates the swap, she claims $y_{b}$ coins and withdraws the
compensation from Chain-a. _Bob_ gets to claim $x_{a}$ coins and he withdraws
the compensation from Chain-b. If _Alice_ cancels the swap, then _Bob_ unlocks
$y_{b}$ coins from Chain-b. If _Alice_ delays beyond $D$ units of time, then
_Bob_ gets compensation for the loss. Since $D+\Delta<t_{b}$, _Alice_ can
delay beyond this point as well. In that case, she gets a compensation of
$c(x_{a}t_{a})$ and her net gain is $g_{a}=c(x_{a}t_{a})-c(y_{b}t_{b})$ and
_Bob_ ’s net loss is $-g_{a}$. Thus to prevent _Bob_ from incurring a loss,
_Alice_ is forced to pay a compensation of $c(x_{a}t_{a})+c(y_{b}t_{b})$. Even
if she does not respond, _Bob_ is entitled to a compensation of
$c(y_{b}t_{b})$ after refunding $c(x_{a}t_{a})$ coins to _Alice_.
### III-A Formal Description of Quick Swap
#### III-A1 System Model and Assumption
The system model and assumptions are the same as for the HTLC-based atomic swap. Since
$t_{a}\approx 2t_{b}$, for ease of analysis, we consider $c(x_{a}t_{a})\geq
2c(y_{b}t_{b})$. For a fixed rate of griefing-premium, we consider
$Q=c(x_{a}t_{a})$ and $\frac{Q}{2}=c(y_{b}t_{b})$. _Alice_ locks a
griefing-premium of $1.5Q$ and _Bob_ locks a griefing-premium of $Q$ coins.
We denote _Alice_ as
A and _Bob_ as B while describing the protocol.
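Under a linear collateral-cost model (an assumption made only for this sketch; the paper fixes just the ratio $c(x_{a}t_{a})\geq 2c(y_{b}t_{b})$, and the `rate` and principal amounts below are illustrative), the premium amounts work out as follows:

```python
def collateral_cost(amount, duration, rate=0.01):
    """Linear collateral-cost model c(amount * duration); `rate` is an assumption."""
    return rate * amount * duration

x_a, y_b = 2.0, 2.0  # illustrative principal amounts
t_a, t_b = 48, 24    # timeouts, with t_a = 2 * t_b

Q = collateral_cost(x_a, t_a)              # Bob's griefing-premium, locked on Chain-b
premium_A = Q + collateral_cost(y_b, t_b)  # Alice locks c(x_a t_a) + c(y_b t_b) on Chain-a

# With t_a = 2 t_b and equal principals, c(y_b t_b) = Q / 2, so Alice locks 1.5 Q.
assert abs(premium_A - 1.5 * Q) < 1e-12
```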
#### III-A2 Detailed Description
The protocol has the following phases: (A) Preprocessing Phase, (B) Lock Phase
and (C) Claim Phase. An instance of successful execution of _Quick Swap_ is
shown in Fig. 4.
Figure 4: An instance of successful Quick Swap between Alice and Bob
(A) Preprocessing Phase: The steps are defined as follows:
Sampling Cancellation Hash and Payment Hash:
* (i)
A’s pair of secret and public key is $(sk_{a},pk_{a})$ and B’s pair of secret
and public key is $(sk_{b},pk_{b})$. A uses $sk_{a}$ and B uses $sk_{b}$ for
signing transactions. $pk_{a}$ and $pk_{b}$ are used for verifying each such
signed transaction.
* (ii)
A samples random values $s_{1}$ and $s_{3}$, creates payment hash
$H_{1}=\mathcal{H}(s_{1})$ and cancellation hash $H_{3}=\mathcal{H}(s_{3})$.
She shares $H_{1}$ and $H_{3}$ with B.
* (iii)
B samples a random value $s_{2}$, creates cancellation hash
$H_{2}=\mathcal{H}(s_{2})$, and shares it with A.
(B) Locking Phase:
* (i)
At time $t_{1}$, B creates and signs a transaction griefing_premium_lock_B
using funding address $addr_{funding,penalty,B}$ that locks $Q$ coins into
address $addr_{lock,penalty,B}$, and publishes griefing_premium_lock_B in
Chain-b. B encodes the condition whereby the $Q$ coins can be claimed either
by revealing the preimage of $H_{1}$ or $H_{2}$, or they can be spent by A
after $D+\Delta$ units of time.
* (ii)
A checks whether griefing_premium_lock_B is confirmed within $\tau_{b}$ units. Once
that is confirmed, at time $t_{2}=t_{1}+\tau_{b}$, A creates and signs a
transaction griefing_premium_lock_A using funding address
$addr_{funding,penalty,A}$ that locks $1.5Q$ coins into address
$addr_{lock,penalty,A}$. The condition for spending the coins are as follows:
(a) Either provide the preimage of $H_{1}$ or $H_{3}$;
(b) If no preimage is provided, B can spend the coins locked after $D$ units
of time.
Simultaneously, A creates and signs another transaction principal_lock_A using
funding address $addr_{funding,principal,A}$ that locks $x_{a}$ coins into
address $addr_{lock,principal,A}$. The coins can be spent either by revealing
the preimage of $H_{1}$; otherwise, A can refund the coins either after
$t_{a}$ units elapse or by revealing the preimage of $H_{2}$, whichever
occurs first. She
publishes both principal_lock_A and griefing_premium_lock_A in Chain-a.
* (iii)
B checks whether the transactions broadcast by A get confirmed in another
$\tau_{a}$ units. At time $t_{3}=t_{2}+\tau_{a}$, he creates and signs
another transaction principal_lock_B using funding address
$addr_{funding,principal,B}$ that locks $y_{b}$ coins into address
$addr_{lock,principal,B}$. The coins can be spent either by revealing the
preimage of $H_{1}$; otherwise, B can refund the coins either after $t_{b}$
units elapse or by revealing the preimage of $H_{3}$, whichever occurs first.
He then
proceeds to publish principal_lock_B in Chain-b.
(C) Claim Phase: Once A observes that B has locked $y_{b}$ coins in Chain-b,
she initiates the claim phase at time $t_{4}=t_{3}+\tau_{b}$, where $\tau_{b}$
is the time taken for principal_lock_B to be confirmed.
_Redeem_ : If A wishes to redeem the coins,
a. At time $t_{4}=t_{3}+\tau_{b}<D$:
* (i)
A releases the preimage $s_{1}$ for payment hash $H_{1}$ and claims the output
of transaction principal_lock_B with her signature in Chain-b. This allows her
to claim $y_{b}$ coins.
* (ii)
A uses $s_{1}$ to refund the griefing-premium of $1.5Q$ locked in Chain-a.
b. At time $t_{5}=t_{4}+t_{\epsilon}$:
* (i)
B uses the preimage $s_{1}$ and unlocks the output of the transaction
principal_lock_A with his signature in Chain-a, claiming $x_{a}$ coins.
* (ii)
B uses $s_{1}$ to refund $Q$ locked in Chain-b.
Refund: If A wishes to cancel the swap,
a. At time $t_{4}=t_{3}+\tau_{b}<D$:
* (i)
A releases the preimage $s_{3}$ for cancellation hash $H_{3}$ with her
signature in Chain-a, unlocking $1.5Q$ coins.
* (ii)
B uses $s_{3}$ to refund $y_{b}$ coins from Chain-b at time
$t_{4}+t_{\epsilon}$. This results in cancellation of the swap before $t_{b}$
elapses.
b. At time $t_{5}=t_{4}+\tau_{b}+t_{\epsilon}$:
* (i)
B releases the preimage $s_{2}$ and unlocks $Q$ coins with his signature from
Chain-b.
* (ii)
A uses $s_{2}$ for refunding $x_{a}$ coins from Chain-a at time
$t_{4}+\tau_{b}+2t_{\epsilon}$.
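The timestamps used across the lock and claim phases obey a simple ordering, which can be checked numerically; all duration values below are illustrative assumptions.

```python
# Illustrative timeline of Quick Swap (all durations are assumed values).
tau_a, tau_b = 2, 3   # confirmation times on Chain-a and Chain-b
t_eps = 1             # time for a claim transaction to confirm
D = 10                # locktime of the griefing-premium

t1 = 0                # B locks the premium Q in Chain-b
t2 = t1 + tau_b       # A locks x_a and 1.5Q in Chain-a
t3 = t2 + tau_a       # B locks y_b in Chain-b
t4 = t3 + tau_b       # A initiates the claim phase (redeem or refund)
t5 = t4 + t_eps       # B claims or refunds using the revealed preimage

assert t1 < t2 < t3 < t4 < t5
assert t4 < D, "A must act before the premium locktime D elapses"
```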
### III-B Proof of Correctness, Safety and Liveness
It is necessary to argue about the state of the proposed protocol in the
presence of both compliant and malicious parties: parties may either follow
the protocol or deviate from it. We prove that _Quick Swap_ satisfies both
safety and liveness. By safety, we mean that _“compliant parties should end up
“no worse off,” even when other parties deviate arbitrarily from the
protocol”_ [Herlihy et al.(2022)]. The liveness property states that no party
ends up keeping its coins locked forever. Before arguing for liveness and
safety, we prove the correctness of the protocol in the presence of compliant
parties: either all agreed coin exchanges take place, or no exchange takes
place.
###### Property 1
(_Correctness_) If all parties are compliant, then the swap either succeeds
with coins being exchanged, or the swap gets canceled with the coins being
refunded.
_Proof_ : A and B exchange the hashes $H_{1},H_{2}$ and $H_{3}$ before the
start of the protocol. At $t_{1}$, B locks $Q$ coins in Chain-b that can be
unlocked contingent on providing the preimage of $H_{1}$ or $H_{2}$. The
coins are locked for $D+\Delta$ units, after which A can claim the $Q$ coins.
A begins the next phase of locking at $t_{2}=t_{1}+\tau_{b}$: she locks $x_{a}$
coins in Chain-a that can be claimed by B contingent on providing the preimage
of $H_{1}$; otherwise A refunds after time $t_{a}$ or using the preimage of
$H_{2}$. She also locks $1.5Q$ coins at $t_{2}$ in Chain-a. This amount can be
unlocked contingent on providing the preimage of $H_{1}$ or $H_{3}$. The coins
are locked for $D$ units, after which B can claim the $1.5Q$ coins. The last
locking phase starts at $t_{3}=t_{2}+\tau_{a}$, when B locks $y_{b}$ coins in
Chain-b. The latter can be claimed by A contingent on providing the preimage of
$H_{1}$; otherwise B refunds after time $t_{b}$ or using the preimage of
$H_{3}$. The correctness of the _Claim Phase_ follows from the description of
_Redeem_ and _Refund_ in Section III-A2.
###### Property 2
(_Safety Property_) The safety property states that compliant parties should
be no worse off than they were before the protocol execution, even when
other parties deviate arbitrarily from the protocol.
_Proof_ : After B has locked $Q$ coins in Chain-b, if A does not lock any coins
in Chain-a, B unlocks them after a time period $\delta<D+\Delta$. If B
locks $Q$ coins and A locks $x_{a}+1.5Q$ coins, but B aborts without locking
$y_{b}$ coins, then A unlocks the $1.5Q$ coins by revealing the preimage
$s_{3}$. She receives a compensation of $Q$ coins after $D+\Delta$ that covers
the lost opportunity cost of keeping $x_{a}$ coins locked for $t_{a}$ units. If
all the parties have locked their coins but A delays beyond $D$ or griefs, B
gets a compensation of $1.5Q$. He may lose $Q$ coins (if A cancels before
$D+\Delta$ elapses but after timeout $D$, then B is entitled to the full
compensation $1.5Q$), but we ensure that $1.5Q-Q=0.5Q$ coins are enough to
compensate for the locked collateral $\mathcal{O}(y_{b}t_{b})$.
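The final claim of the proof, that the $1.5Q-Q=0.5Q$ margin covers B's opportunity cost of locked collateral, can be checked with a small numeric example; the amounts and the discount rate are assumed values, and the exponential-discounting form follows the utility expressions used later in Section III-C.

```python
import math

# Assumed values: premium Q, B's collateral y_b locked for t_b units,
# and B's per-unit discount rate r_b.
Q, y_b, t_b, r_b = 4.0, 100.0, 12, 0.0004

# Opportunity cost of the locked collateral, ~ O(y_b * t_b) for small r_b.
opportunity_cost = y_b * (1 - math.exp(-r_b * t_b))

# The margin 1.5Q - Q = 0.5Q must cover it.
assert 1.5 * Q - Q >= opportunity_cost
```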
###### Property 3
_(Liveness)_ No asset belonging to a compliant party remains locked forever.
_Proof_ : If A does not take any action by $t_{2}=t_{1}+\tau_{b}$, B unlocks
the $Q$ coins after $\tau_{b}>\delta>0$ units. If B does not lock coins at
$t_{3}=t_{2}+\tau_{a}$, then by time $t_{3}+\tau_{b}$, A refunds the griefing-
premium $1.5Q$ by revealing the preimage of $H_{3}$ in her own interest. B
observes that A has canceled the swap by withdrawing the griefing-premium. If
he is rational, then he releases the preimage $s_{2}$ for $H_{2}$, unlocks the
$Q$ coins, and allows A to unlock her $x_{a}$ coins before $D$ elapses. If B is
malicious, then he will end up losing $Q$ coins after $D+\Delta$ units, but A
will be able to unlock $x_{a}$ coins after $t_{a}$ elapses. If at time
$t_{4}=t_{3}+\tau_{b}$ A aborts, then she loses the compensation of $1.5Q$ to
B. The latter can unlock $y_{b}$ coins after $t_{b}$ has elapsed but loses his
compensation of $Q$ coins.
### III-C Game-Theoretic Analysis
We model the interaction between the two entities A and B as a sequential game
$\Gamma_{quick\ swap}$. B initiates the lock phase by locking the griefing-
premium $Q$ for a duration of $D+\tau_{a}+\tau_{b}+\Delta$ units in Chain-b at
time $t_{1}$, followed by A choosing either to lock $x_{a}$ coins for a
duration of $t_{a}$ units or to abort at $t_{2}=t_{1}+\tau_{b}$. If A has not
responded, then B either cancels the swap at time $t_{2}+\tau_{a}$ by
unlocking $Q$ or stops. If A wishes to continue, she locks the principal
amount and the griefing-premium $1.5Q$ for $D$ units in Chain-a at time
$t_{2}$. At time $t_{3}=t_{2}+\tau_{a}$, if B observes that A has locked the
principal amount as well as the griefing-premium, then B either locks $y_{b}$
coins for $t_{b}$ units in Chain-b or stops. If he does not lock coins, then
at time $t_{3}+\tau_{b}$, A either cancels the swap by unlocking the $1.5Q$
coins from Chain-a or stops. If B has not responded, he loses the griefing-
premium at time $t_{1}+t_{a}$. Else, if he chooses to cancel, then A will be
able to withdraw the principal amount as well. If B has chosen to continue, A
decides at time $t_{4}=t_{3}+\tau_{b}$ whether to continue or cancel the swap.
#### III-C1 Game Model & Preference Structure
The extensive-form game $\Gamma_{quick\ swap}$ is similar to $\Gamma_{swap}$,
except that A and B’s strategy space has the option to _cancel_ , apart from
_continue_ and _stop_. The analysis is done by applying backward induction on
$\Gamma_{quick\ swap}$.
If A delays instead of making a move at $t_{3}+\tau_{b}$, then the opportunity
cost of the coins locked as griefing-premium rises. Canceling the swap at
$t_{3}+\tau_{b}$ leads to A's utility
$\frac{x_{a}}{e^{r_{a}(t_{\epsilon}+2\tau_{a})}}+\frac{1.5Q}{e^{r_{a}\tau_{a}}}$.
If A instead delays and cancels at time $t_{3}+\tau_{b}+t$ for some $t>0$,
then the utility drops further, i.e.,
$\frac{x_{a}}{e^{r_{a}(t_{\epsilon}+2\tau_{a}+t)}}+\frac{1.5Q}{e^{r_{a}(\tau_{a}+t)}}$.
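That delaying strictly lowers A's cancellation utility can be verified directly from the expression above; the parameter values are illustrative assumptions.

```python
import math

# Assumed parameter values for illustration.
x_a, Q, r_a = 100.0, 4.0, 0.001
t_eps, tau_a = 1, 2

def u_cancel(t):
    """A's utility from cancelling t units after t3 + tau_b (fees omitted)."""
    return (x_a / math.exp(r_a * (t_eps + 2 * tau_a + t))
            + 1.5 * Q / math.exp(r_a * (tau_a + t)))

# Both discounted terms shrink as the delay t grows.
assert u_cancel(0) > u_cancel(1) > u_cancel(10)
```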
In the HTLC-based atomic swap, if A finds that the utility of continuing is
less than the utility of the swap until the lock time of the HTLC expires, she
speculates until the situation turns in her favor. The situation is different
now, as A is allowed to abort the swap much earlier, without waiting for the
lock time to elapse. A rational A will choose not to delay, anticipating that
the situation may turn worse later. At time $t_{4}$, if A continues and B
follows:
$\begin{matrix}u_{A,int}(cont,t_{4})=(1+sp_{a})\frac{\mathcal{E}(x(y_{b},t_{4}),\tau_{b})}{e^{r_{a}\tau_{b}}}+\frac{1.5Q}{e^{r_{a}\tau_{a}}}-f_{a}-f_{b}\\\
u_{B,int}(cont,t_{4})=(1+sp_{b})\frac{x_{a}}{e^{r_{b}(\tau_{a}+t_{\epsilon})}}+\frac{Q}{e^{r_{b}(t_{\epsilon}+\tau_{b})}}\\\
-f_{a}-f_{b}\end{matrix}$ (10)
At time $t_{4}$, if A cancels then B cancels the deal as well. The payoffs are
$u_{A,int}(cancel,t_{4})=\frac{x_{a}}{e^{r_{a}(2t_{\epsilon}+\tau_{b}+\tau_{a})}}+\frac{1.5Q}{e^{r_{a}\tau_{a}}}-2f_{a}$
and
$u_{B,int}(cancel,t_{4})=\frac{\mathcal{E}(x(y_{b},t_{4}),t_{\epsilon}+\tau_{b})}{e^{r_{b}(\tau_{b}+t_{\epsilon})}}+\frac{Q}{e^{r_{b}(t_{\epsilon}+2\tau_{b})}}-2f_{b}$.
A will continue at $t_{4}$ over canceling the swap, if the following condition
holds:
$\begin{matrix}(1+sp_{a})\frac{\mathcal{E}(x(y_{b},t_{4}),\tau_{b})}{e^{r_{a}\tau_{b}}}+\frac{1.5Q}{e^{r_{a}\tau_{a}}}-f_{a}-f_{b}>\frac{x_{a}}{e^{r_{a}(2t_{\epsilon}+\tau_{b}+\tau_{a})}}\\\
+\frac{1.5Q}{e^{r_{a}\tau_{a}}}-2f_{a}\\\ \\\ or,\
x(y_{b},t_{4})>\frac{\frac{x_{a}}{e^{r_{a}(2t_{\epsilon}+\tau_{b}+\tau_{a})}}-f_{a}+f_{b}}{(1+sp_{a})e^{(\mu-
r_{a})\tau_{b}}}\end{matrix}$ (11)
We derive $x(y_{b},t_{4})^{*}$ in terms of $x_{a}$ for which the above
inequality holds. If $x(y_{b},t_{4})\geq x(y_{b},t_{4})^{*}$ then A claims the
coins.
If at time $t_{3}$ B decides to continue, then with probability $1-\theta_{1}$,
A is malicious and will delay till $D-\epsilon$ ($D-\epsilon\rightarrow D$).
The utility is expressed as follows:
$u_{B,int}(cont,t_{3})=\theta_{1}\Big{(}\frac{\Big{[}1-C(x(y,t_{4})^{*},x(y,t_{3}),\tau_{b})\Big{]}u_{B,int}(cont,t_{4})}{e^{r_{b}\tau_{b}}}+\\\
\frac{\bigint_{0}^{x(y,t_{4})^{*}}P(p,x(y,t_{3}),\tau_{b})u_{B,int}(cancel,t_{4})\,dp}{e^{r_{b}\tau_{b}}}\Big{)}+(1-\theta_{1})\Big{(}\frac{\mathcal{E}(x(y_{b},t_{4}),t_{\epsilon}+D)}{e^{r_{b}(D-\epsilon+t_{\epsilon})}}+\frac{Q}{e^{r_{b}(t_{\epsilon}+D-\epsilon+\tau_{b})}}-2f_{b}\Big{)}$
and $u_{A,int}(cont,t_{3})=\\\
\frac{\bigint_{x(y,t_{4})^{*}}^{\infty}P(p,x(y,t_{3}),\tau_{b})u_{A,int}(cont,t_{4})\,dp}{e^{r_{a}\tau_{b}}}+\frac{\bigint_{0}^{x(y,t_{4})^{*}}P(p,x(y,t_{3}),\tau_{b})u_{A,int}(cancel,t_{4})\,dp}{e^{r_{a}\tau_{b}}}$
If B chooses to stop at time $t_{3}$, then the utilities for A and B are defined as:
$u_{A,int}(stop,t_{3})=\frac{x_{a}}{e^{r_{a}(t_{a}+\tau_{a})}}+\frac{1.5Q}{e^{r_{a}(t_{b}+\tau_{a})}}-2f_{a}+\frac{Q}{e^{r_{b}\tau_{b}}}-f_{b}$
and $u_{B,int}(stop,t_{3})=x(y_{b},t_{3})$.
If at time $t_{3}$ B chooses to cancel, then he unlocks the premium $Q$ locked
in Chain-b by releasing the preimage of $H_{2}$. A observes that the swap is
canceled, so she unlocks her $x_{a}$ coins and the griefing-premium $1.5Q$
from Chain-a. The utilities are defined as
$u_{B,int}(cancel,t_{3})=x(y_{b},t_{3})+\frac{Q}{e^{r_{b}\tau_{b}}}-f_{b}$ and
$u_{A,int}(cancel,t_{3})=\frac{x_{a}}{e^{r_{a}(\tau_{a}+t_{\epsilon})}}+\frac{1.5Q}{e^{r_{a}(t_{\epsilon}+\tau_{a})}}-2f_{a}$.
We observe that it is better for B to cancel the swap than to wait for the
contract to expire, as he would lose his griefing-premium in the process. Thus
_cancel_ strictly dominates _stop_ in our protocol. B will prefer to continue
at $t_{3}$ over canceling the swap if the following condition holds:
$u_{B,int}(cont,t_{3})>u_{B,int}(cancel,t_{3})$
If A decides to continue at time $t_{2}$, then the utilities are:
$u_{A,int}(cont,t_{2})=\theta_{2}\Big{(}\frac{\bigint_{x^{3}_{1}}^{x^{3}_{2}}P(p,x(y,t_{2}),\tau_{a})u_{A,int}(cont,t_{3})\,dp}{e^{r_{a}\tau_{a}}}+\frac{\bigint_{x^{3}_{2}}^{\infty}P(p,x(y_{b},t_{2}),\tau_{a})u_{A,int}(stop,t_{3})\,dp}{e^{r_{a}\tau_{a}}}\Big{)}+(1-\theta_{2})\Big{(}\frac{u_{A,int}(stop,t_{3})}{e^{r_{a}\tau_{a}}}\Big{)}$
and
$u_{B,int}(cont,t_{2})=\Big{(}\frac{\bigint_{x^{3}_{1}}^{x^{3}_{2}}P(p,x(y,t_{2}),\tau_{a})u_{B,int}(cont,t_{3})\,dp}{e^{r_{b}\tau_{a}}}+\\\
\frac{\bigint_{x^{3}_{2}}^{\infty}P(p,x(y_{b},t_{2}),\tau_{a})u_{B,int}(stop,t_{3})\,dp}{e^{r_{b}\tau_{a}}}\Big{)}$
If A decides to abort at time $t_{2}$, then B initiates cancellation by
releasing the preimage of $H_{2}$. Note that A will not take any action if she
intends to cancel, as she has not locked any coins; thus cancel and stop yield
the same payoff for A. The payoffs are defined as
$u_{A,int}(stop,t_{2})=u_{A,int}(cancel,t_{2})=x_{a}+1.5Q$ and
$u_{B,int}(cancel,t_{2})=x(y_{b},t_{2})+\frac{Q}{e^{r_{b}(\tau_{a}+\tau_{b})}}-f_{b}$.
A will prefer to continue at $t_{2}$ over stopping the swap if the following
condition holds: $u_{A,int}(cont,t_{2})>u_{A,int}(cancel,t_{2})$.
###### Proposition 2
Quick Swap is more participant-friendly compared to HTLC-based atomic swap.
_Proof_ : In _Quick Swap_ , for given values of $\theta_{1}$ and $\theta_{2}$,
the success rate $SR$ is a function of $x_{a}$ (A's tokens), since there is no
delay involved. It is expressed as:
$\begin{matrix}SR(x_{a})=\bigint_{x^{3}_{1}[x_{a}]}^{x^{3}_{2}[x_{a}]}A(x_{a})B(x_{a})\,dp\end{matrix}$
(12)
where $A(x_{a})=\theta_{2}P(p,x(y_{b},t_{2}),\tau_{a})$ and
$B(x_{a})=\theta_{1}\Big{(}1-C(x(y,t_{4})[x_{a}],p,\tau_{b})\Big{)}$
A is now able to estimate the success rate, as it depends solely on $x_{a}$.
There is no uncertainty involved, unlike in the HTLC-based atomic swap, where
a higher delay leads to violation of the participation constraint.
Additionally, the range of $x_{a}$ for which _Quick Swap_ has a non-zero
success rate is larger. The provision for cancellation, even before the
contract's locktime elapses, makes the protocol robust and more resilient to
fluctuations in asset price.
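Eq. (12) can be evaluated numerically once a price density is assumed. The sketch below uses a lognormal density as a stand-in for $P(p,x,\tau)$ and collapses $A(x_{a})$ and $B(x_{a})$ to their leading factors $\theta_{2}$ and $\theta_{1}$, so it illustrates only the integration, not the paper's exact model.

```python
import math

def lognormal_pdf(p, mean, sigma):
    """Assumed stand-in for the price density P(p, x, tau):
    a lognormal with the given mean and volatility sigma."""
    if p <= 0:
        return 0.0
    mu = math.log(mean) - sigma ** 2 / 2
    return (math.exp(-(math.log(p) - mu) ** 2 / (2 * sigma ** 2))
            / (p * sigma * math.sqrt(2 * math.pi)))

def success_rate(x_a, theta1, theta2, x_lo, x_hi, sigma=0.2, n=2000):
    """Midpoint-rule integration of a simplified Eq. (12) over [x_lo, x_hi],
    with A and B reduced to their leading factors theta2 and theta1."""
    width = (x_hi - x_lo) / n
    total = 0.0
    for i in range(n):
        p = x_lo + (i + 0.5) * width
        total += theta2 * lognormal_pdf(p, x_a, sigma) * theta1 * width
    return total
```

Widening the integration band (the range of acceptable exchanged values) raises the success rate, matching the observation that _Quick Swap_ admits a larger range of $x_{a}$ with a non-zero success rate.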
### III-D Discussions
(a) Our protocol requires an extra round compared to the _HTLC_ -based atomic
swap. The number of transactions created for _Quick Swap_ is double the number
needed for the latter.
(b) The parameter $D$ can be chosen by A and B, after they have negotiated
with each other. A rational honest party will choose a shorter delay for
keeping the griefing-premium locked.
## IV Construction of a fair multiparty cyclic atomic swap
Suppose _Alice_ wants to exchange coins with _Bob_ , where the former holds
Bitcoins and the latter holds Ethers, but _Bob_ wants to exchange his Ethers
for Litecoins. In such a scenario, they take the help of intermediaries to
assist in the exchange of coins. If we consider just a three-party situation,
then there may exist a participant _Carol_ who is willing to exchange
Litecoins for Bitcoins. So _Alice_ sends $x_{a}$ BTC to _Carol_ , _Carol_
sends $z_{c}$ LTC to _Bob_ , and finally _Bob_ sends $y_{b}$ ETH to _Alice_.
In a real situation, _Carol_ would charge _Alice_ a fee for facilitating the
swap, but we ignore fees in this paper.
_Problem of Griefing_ : In an HTLC-based setting, _Alice_ samples a secret
$s$ and shares $H=\mathcal{H}(s)$ with _Carol_ and _Bob_. _Alice_ forwards an
HTLC to _Carol_ , locking $x_{a}$ BTC in Chain-a for $T_{1}$ units of time
contingent on providing the preimage of $H$. The confirmation time of a
transaction in Chain-a is $\tau_{a}$. _Carol_ forwards an HTLC to _Bob_ using
the same condition for a time period of $T_{2}$ units, where $T_{2}<T_{1}$,
locking $z_{c}$ LTC in Chain-b. The confirmation time of a transaction in
Chain-b is $\tau_{b}$. Finally, _Bob_ forwards an HTLC, locking $y_{b}$ ETH in
Chain-c for a time period of $T_{3}$ units, where $T_{3}<T_{2}$. The
confirmation time of a transaction in Chain-c is $\tau_{c}$. The problem of
griefing persists, as _Carol_ may choose not to lock coins based on the
fluctuation rate of Bitcoin and Litecoin, and even if _Carol_ locks her coins,
_Bob_ may abort. If all the parties have locked coins, _Alice_ may abort,
making _Bob_ and _Carol_ suffer. We discuss a fix to this problem by extending
_Quick Swap_ from a two-party setting to a three-party setting.
Figure 5: Quick Swap extended to three-party cyclic atomic swap
#### High Level Overview of the protocol
We discuss the steps followed upon extending _Quick Swap_ to a three-party
setting. The steps are illustrated in Fig. 5.
* (i)
_Alice_ samples hashes $H_{1}$ and $H_{2}$ from randomly sampled secrets
$s_{1}$ and $s_{2}$ respectively, and shares them with _Bob_ and _Carol_.
_Bob_ samples hash $H_{3}$ using secret $s_{3}$, and _Carol_ samples hash
$H_{4}$ using secret $s_{4}$.
* (ii)
_Bob_ locks griefing-premium $c(x_{a}T_{1})$ in Chain-c for a time-period of
$D+2\Delta<T_{3}$ using hashlock $H_{3}\vee H_{1}$, at time $t_{1}$.
* (iii)
_Alice_ locks the principal amount $x_{a}$ for time period $T_{1}$ using
hashlock $H_{1}$ at time $t_{2}=t_{1}+\tau_{a}$, with the provision of
refunding earlier if the preimage of $H_{3}$ is revealed. She also locks the
griefing-premium $c(z_{c}T_{2})$ for a time period $D+\Delta$, with hashlock
$H_{1}\vee H_{2}$, in Chain-a at time $t_{2}$.
* (iv)
_Carol_ locks the principal amount $z_{c}$ for time period $T_{2}$, with
hashlock $H_{1}$, at time $t_{3}=t_{2}+\tau_{b}$ in Chain-b. She can refund
before $T_{2}$ elapses if the preimage of $H_{2}$ is revealed. She also locks
the griefing-premium $c(y_{b}T_{3})$ for a time period $D$, with hashlock
$H_{1}\vee H_{4}$, in Chain-b.
* (v)
Finally, _Bob_ locks $y_{b}$ coins in Chain-c for a time period $T_{3}$, with
hashlock $H_{1}$, at time $t_{4}=t_{3}+\tau_{b}$. He has the provision to
refund earlier if the preimage of $H_{4}$ is revealed.
To complete the swap, _Alice_ reveals the secret $s_{1}$, and everyone is able
to unlock their griefing-premium and the swapped coins. If any of the parties
wants to cancel, he or she reveals the corresponding secret $s_{2}$, $s_{3}$
or $s_{4}$.
### IV-A Generic n-party fair cyclic atomic swap
#### IV-A1 System Model & Assumption
Party $P_{0}$ wants to exchange $a_{0}$ coins for the $a_{n}$ coins of party
$P_{n}$, taking the help of $n-1$ intermediaries $P_{1},P_{2},\ldots,P_{n-1}$.
A party $P_{i}$ has accounts in Chain-i and Chain-($(i-1)\mod n+1$).
Blockchain Chain-i has transaction confirmation time $\tau_{i}$, where
$i\in[0,n]$.
#### IV-A2 Detailed Construction
We describe the steps:
* •
$P_{0}$ samples a payment hash $\bar{H}$ and shares it with its neighbors
$P_{n}$ and $P_{1}$.
* •
$P_{n}$ initiates the _Locking Phase_ and samples a cancellation hash $H_{n}$.
He locks the griefing-premium $c(a_{0}T_{0})$ for locktime $D+n\Delta<T_{n}$
in Chain-n, using hashlock $\bar{H}\vee H_{n}$, at time $t_{1}$.
* •
The remaining parties $P_{i}$, $i\in[0,n-1]$, do the following: $P_{i}$
generates a cancellation hash $H_{i}$ and locks $a_{i}$ coins for locktime
$T_{i}$, using hashlock $\bar{H}$, at time
$t_{i+2}=t_{i+1}+\tau_{(i-1)\mod n+1}$ in Chain-i. The coins can be refunded
before $T_{i}$ if the preimage of $H_{(i-1)\mod(n+1)}$ is revealed. He also
locks the griefing-premium $c(a_{i+1}T_{i+1})$ at $t_{i+2}$, for locktime
$D+(n-1-i)\Delta$, using hashlock $\bar{H}\vee H_{i}$.
* •
Finally, $P_{n}$ locks $a_{n}$ coins for locktime $T_{n}$, using hashlock
$\bar{H}$, at time $t_{n+2}=t_{n+1}+\tau_{n-1}$ in Chain-n. He has the option
to refund the coins if $P_{n-1}$ cancels the swap by revealing the preimage of
$H_{n-1}$. The assigned locktimes follow a strictly decreasing order:
$T_{0}>T_{1}>\ldots>T_{n}$.
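One concrete choice of locktimes consistent with the stated constraints ($T_{0}>T_{1}>\ldots>T_{n}$ and $D+n\Delta<T_{n}$) can be generated and checked as follows; the specific schedule is an assumption made for illustration.

```python
def cyclic_swap_locktimes(n, D, Delta):
    """One illustrative assignment of locktimes for the n-party cyclic swap.
    Returns (premium locktimes, principal locktimes T_0..T_n)."""
    # P_n's premium locks for D + n*Delta; P_i's for D + (n-1-i)*Delta.
    premium = [D + n * Delta] + [D + (n - 1 - i) * Delta for i in range(n)]
    # Principal locktimes strictly decreasing, with T_n above D + n*Delta.
    T = [D + (2 * n + 1 - i) * Delta for i in range(n + 1)]
    assert all(a > b for a, b in zip(T, T[1:]))   # T_0 > T_1 > ... > T_n
    assert T[n] > D + n * Delta                   # premiums expire first
    return premium, T
```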
## V Related Works
The HTLC-based atomic swap was first proposed in [TN1(2013)]. However, the
design lacks fairness and is susceptible to griefing attacks. Later, Han et
al. [Han et al.(2019)] suggested the use of a premium to counter griefing
attacks. However, the protocol assumed that, in a two-party setting where
_Alice_ wants to exchange currency with _Bob_ , only _Alice_ can be at fault;
so she must lock a premium while _Bob_ is not required to do so. In an
American-style option-based swap, _Bob_ gets the premium even if _Alice_
initiates the swap on time. In a currency-exchange-based atomic swap, _Bob_
gets the premium if _Alice_ does not respond within the time period of the
contract. The protocol is not fair, as _Bob_ can grief as well. The
construction cannot be realized in Bitcoin script as it requires the inclusion
of an additional opcode.
Similar work on locking premiums by both parties involved in exchanging
currency has been done in [Xue and Herlihy(2021)]. However, the protocol is
not compatible with Bitcoin script, suffers from the problem of mismatched
premiums, and lacks fairness. Further, the authors bootstrap the premium,
whereby small-valued premiums get locked first and the premium amount
increases with each iteration. This leads to multiple communication rounds,
the creation of multiple contracts per iteration, longer lock times than
[TN1(2013)], and the possibility of griefing on the locked-up premium
[Nadahalli et al.(2022)]. Nadahalli et al. [Nadahalli et al.(2022)] proposed a
protocol that is _grief-free_ and compatible with Bitcoin script. The protocol
is efficient regarding the number of transactions and the worst-case timelock
for which funds remain locked. However, the problem of mismatched premiums
persists. The model lacks flexibility due to the coupling of the premium with
the principal amount, and thus cannot be extended to a multi-party atomic swap
setting involving more than two blockchains [Herlihy(2018)]. Our proposed
protocol overcomes several such shortcomings, although there is a
constant-factor increase in transaction overhead and communication rounds
compared to [Han et al.(2019)] and [Nadahalli et al.(2022)]. We summarize the
discussion by performing a comparative analysis of _Quick Swap_ with other
state-of-the-art protocols in Table II.
| | TN Swap [TN1(2013)] | HLY Swap [Han et al.(2019)] | XH Swap [Xue and Herlihy(2021)] | NKW Swap [Nadahalli et al.(2022)] | _Quick Swap_
---|---|---|---|---|---
$P1$ | ✗ | Partial | ✓ | ✓ | ✓
$P2$ | ✗ | ✗ | ✗ | Partial | ✓
$P3$ | ✗ | ✗ | ✗ | Partial | ✓
$P4$ | NA | ✗ | ✗ | ✗ | ✓
$P5$ | ✓ | ✗ | ✗ | ✓ | ✓
$P6$ | NA | ✗ | ✗ | ✗ | ✓
TABLE II: Comparative analysis of _Quick Swap_ with existing atomic swap
protocols in terms of $P1$: counters griefing attack, $P2$: cancellation
allowed, $P3$: counters speculation, $P4$: fairness of premium locked, $P5$:
supported by Bitcoin script, and $P6$: extension to multi-party cyclic swap.
## VI Conclusion
In this paper, we perform a game-theoretic analysis of the HTLC-based atomic
swap. We observe that the protocol lacks fairness and is not participant-
friendly. We propose _Quick Swap_ , which is robust and allows faster
settlement of the transaction. We discuss the steps for extending _Quick
Swap_ to a multiparty setting involving more than two blockchains. As part of
our future work, we would like to analyze _Quick Swap_ in the presence of
rational miners in the underlying blockchains.
## Acknowledgement
We thank Dr. Abhinandan Sinha, Assistant Professor at Ahmedabad University,
for his initial valuable comments on this work.
## References
* [TN1(2013)] 2013\. _TierNolan_. Technical Report. https://github.com/TierNolan.
* [zmn(2018)] December 2018. An Argument For Single-Asset Lightning Network. https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001752.html.
* [Borkowski et al.(2019)] Michael Borkowski, Marten Sigwart, Philipp Frauenthaler, Taneli Hukkinen, and Stefan Schulte. 2019\. DeXTT: Deterministic cross-blockchain token transfers. _IEEE Access_ 7 (2019), 111030–111042.
* [Dai et al.(2020)] Bingrong Dai, Shengming Jiang, Menglu Zhu, Ming Lu, Dunwei Li, and Chao Li. 2020\. Research and implementation of cross-chain transaction model based on improved hash-locking. In _International Conference on Blockchain and Trustworthy Systems_. Springer, 218–230.
* [Dipple et al.(2020)] Stephen Dipple, Abhishek Choudhary, James Flamino, Boleslaw K Szymanski, and György Korniss. 2020\. Using correlated stochastic differential equations to forecast cryptocurrency rates and social media activities. _Applied Network Science_ 5, 1 (2020), 1–30.
* [Eizinger et al.(2021)] Thomas Eizinger, Philipp Hoenisch, and Lucas Soriano del Pino. 2021\. Open problems in cross-chain protocols. arXiv:2101.12412 [cs.CR]
* [Han et al.(2019)] Runchao Han, Haoyu Lin, and Jiangshan Yu. 2019. On the optionality and fairness of Atomic Swaps. In _Proceedings of the 1st ACM Conference on Advances in Financial Technologies_. 62–75.
* [Herlihy(2018)] Maurice Herlihy. 2018\. Atomic Cross-Chain Swaps. In _Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing, PODC 2018, Egham, United Kingdom, July 23-27, 2018_ , Calvin Newport and Idit Keidar (Eds.). ACM, 245–254. https://doi.org/10.1145/3212734
* [Herlihy et al.(2022)] Maurice Herlihy, Barbara Liskov, and Liuba Shrira. 2022\. Cross-chain deals and adversarial commerce. _The VLDB journal_ 31, 6 (2022), 1291–1309.
* [Lys et al.(2021)] Léonard Lys, Arthur Micoulet, and Maria Potop-Butucaru. 2021\. _R-SWAP: Relay based atomic cross-chain swap protocol_. Ph. D. Dissertation. Sorbonne Université.
* [Malliaris(1990)] A. G. Malliaris. 1990\. _Wiener Process_. Palgrave Macmillan UK, London, 316–318. https://doi.org/10.1007/978-1-349-20865-4_43
* [Moreno-Sanchez et al.(2020)] Pedro Moreno-Sanchez, Arthur Blue, Duc Viet Le, Sarang Noether, Brandon Goodell, and Aniket Kate. 2020. DLSAG: Non-interactive Refund Transactions for Interoperable Payment Channels in Monero. In _Financial Cryptography and Data Security - 24th International Conference, FC 2020_ _(Lecture Notes in Computer Science, Vol. 12059)_ , Joseph Bonneau and Nadia Heninger (Eds.). Springer, 325–345. https://doi.org/10.1007/978-3-030-51280-4_18
* [Nadahalli et al.(2022)] Tejaswi Nadahalli, Majid Khabbazian, and Roger Wattenhofer. 2022\. Grief-free Atomic Swaps. In _2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC)_. 1–9. https://doi.org/10.1109/ICBC54727.2022.9805490
* [Nakamoto(2008)] Satoshi Nakamoto. 2008\. Bitcoin: A peer-to-peer electronic cash system. _Decentralized Business Review_ (2008), 21260.
* [Narayanam et al.(2022)] Krishnasuri Narayanam, Venkatraman Ramakrishna, Dhinakaran Vinayagamurthy, and Sandeep Nishad. 2022\. Generalized HTLC for Cross-Chain Swapping of Multiple Assets with Co-Ownerships. _arXiv preprint arXiv:2202.12855_ (2022).
* [Robinson(2019)] Daniel Robinson. 2019\. HTLCs considered harmful. In _Stanford Blockchain Conference_.
* [Tairi et al.(2021)] Erkan Tairi, Pedro Moreno-Sanchez, and Matteo Maffei. 2021\. A$^{2}$L: Anonymous Atomic Locks for Scalability and Interoperability in Payment Channel Hubs. In _2021 IEEE Symposium on Security and Privacy (SP)_. IEEE, 1834–1851.
* [Thomas and Schwartz(2015)] Stefan Thomas and Evan Schwartz. 2015. A protocol for interledger payments. https://interledger.org/interledger.pdf (2015).
* [Thyagarajan et al.(2022)] S. Thyagarajan, G. Malavolta, and P. Moreno-Sanchez. 2022\. Universal Atomic Swaps: Secure Exchange of Coins Across All Blockchains. In _2022 IEEE Symposium on Security and Privacy (SP)_. IEEE Computer Society, Los Alamitos, CA, USA, 1083–1100. https://doi.org/10.1109/SP46214.2022.00063
* [Xu et al.(2021)] Jiahua Xu, Damien Ackerer, and Alevtina Dubovitskaya. 2021\. A game-theoretic analysis of cross-chain atomic swaps with HTLCs. In _2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)_. IEEE, 584–594.
* [Xue and Herlihy(2021)] Yingjie Xue and Maurice Herlihy. 2021. Hedging against sore loser attacks in cross-chain transactions. In _Proceedings of the 2021 ACM Symposium on Principles of Distributed Computing_. 155–164.
* [Zakhary et al.(2020)] Victor Zakhary, Divyakant Agrawal, and Amr El Abbadi. 2020\. Atomic commitment across blockchains. _Proceedings of the VLDB Endowment_ 13, 9 (2020), 1319–1331.
* [Zamyatin et al.(2019)] Alexei Zamyatin, Dominik Harz, Joshua Lind, Panayiotis Panayiotou, Arthur Gervais, and William J. Knottenbelt. 2019. XCLAIM: Trustless, Interoperable, Cryptocurrency-Backed Assets. In _2019 IEEE Symposium on Security and Privacy, SP 2019, San Francisco, CA, USA, May 19-23, 2019_. IEEE, 193–210. https://doi.org/10.1109/SP.2019.00085
# Arboreal Categories and Homomorphism Preservation Theorems
Samson Abramsky, Department of Computer Science, University College London,
66–72 Gower Street, London, WC1E 6EA, United Kingdom <EMAIL_ADDRESS> and
Luca Reggio, Department of Computer Science, University College London, 66–72
Gower Street, London, WC1E 6EA, United Kingdom <EMAIL_ADDRESS>
###### Abstract.
The classical homomorphism preservation theorem, due to Łoś, Lyndon and
Tarski, states that a first-order sentence $\varphi$ is preserved under
homomorphisms between structures if, and only if, it is equivalent to an
existential positive sentence $\psi$. Given a notion of (syntactic) complexity
of sentences, an “equi-resource” homomorphism preservation theorem improves on
the classical result by ensuring that $\psi$ can be chosen so that its
complexity does not exceed that of $\varphi$.
We describe an axiomatic approach to equi-resource homomorphism preservation
theorems based on the notion of arboreal category. This framework is then
employed to establish novel homomorphism preservation results, and improve on
known ones, for various logic fragments, including first-order, guarded and
modal logics.
Research supported by the EPSRC grant EP/V040944/1, and by the European
Union’s Horizon 2020 research and innovation programme under the Marie
Skłodowska-Curie grant agreement No 837724.
## 1\. Introduction
Homomorphism preservation theorems relate the syntactic shape of a sentence
with the semantic property of being preserved under homomorphisms between
structures. Recall that a first-order sentence $\varphi$ in a vocabulary
$\tau$ is said to be _preserved under homomorphisms_ if, whenever there is a
homomorphism of $\tau$-structures $A\to B$ and ${A\vDash\varphi}$, then also
$B\vDash\varphi$. Further, an _existential positive sentence_ is a first-order
sentence that uses only the connectives $\vee,\wedge$ and the quantifier
$\exists$. The following classical result, known as the _homomorphism
preservation theorem_ , is due to Łoś, Lyndon and Tarski [31, 32, 39] and
applies to arbitrary (first-order) vocabularies.
###### Theorem 1.1.
A first-order sentence is preserved under homomorphisms if, and only if, it is
equivalent to an existential positive sentence.
The homomorphism preservation theorem is a fairly straightforward consequence
of the compactness theorem, see e.g. [40, Lemma 3.1.2]. However, applying the
compactness theorem means that we lose control over the syntactic shape of an
existential positive sentence $\psi$ that is equivalent to a sentence
$\varphi$ preserved under homomorphisms. In particular, it is an ineffective
approach if we want to determine to what extent the passage from $\varphi$ to
$\psi$ increases the “complexity” of the former sentence.
One way to measure the complexity of a formula $\varphi$ is in terms of its
_quantifier rank_ , i.e. the maximum number of nested quantifiers appearing in
$\varphi$. Rossman’s _equirank homomorphism preservation theorem_ [38], which
applies to relational vocabularies (i.e. vocabularies that contain no constant
or function symbols), shows that it is possible to find a $\psi$ whose
quantifier rank is less than or equal to the quantifier rank of $\varphi$:
###### Theorem 1.2.
A first-order sentence of quantifier rank at most $k$ is preserved under
homomorphisms if, and only if, it is equivalent to an existential positive
sentence of quantifier rank at most $k$.
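Quantifier rank is computed recursively: atoms have rank $0$, a quantifier adds one, and connectives take the maximum over their arguments. A small sketch follows; the tuple-based AST encoding is an assumption made for illustration.

```python
# Formulas as nested tuples: ("exists", var, phi), ("forall", var, phi),
# ("and", phi, psi), ("or", phi, psi), ("not", phi), or an atom string.
def quantifier_rank(phi):
    if isinstance(phi, str):          # atomic formula: rank 0
        return 0
    op = phi[0]
    if op in ("exists", "forall"):    # a quantifier adds one nesting level
        return 1 + quantifier_rank(phi[2])
    return max(quantifier_rank(sub) for sub in phi[1:])

# exists x. (E(x,y) and exists z. E(x,z)) has quantifier rank 2.
phi = ("exists", "x", ("and", "E(x,y)", ("exists", "z", "E(x,z)")))
assert quantifier_rank(phi) == 2
```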
This is a considerable improvement on the classical homomorphism preservation
theorem and was proved by Rossman on the way to his celebrated _finite
homomorphism preservation theorem_ , stating that Theorem 1.1 admits a
relativisation to finite structures.¹ In the proof of the equirank
homomorphism preservation theorem, the application of the compactness theorem
is replaced with a model construction which is similar, in spirit, to the
construction of a saturated elementary extension of a given structure.

¹ The finite homomorphism preservation theorem is a major result in finite
model theory, as well as a surprising one given that most preservation
theorems fail when restricted to finite structures. Note that the finite
homomorphism preservation theorem and the classical one are incomparable
results.
The main contribution of this paper consists in laying out a categorical
framework in which “equi-resource” homomorphism preservation theorems can be
proved in an axiomatic fashion. In [3, 8], _game comonads_ were introduced to
capture in a structural way a number of logic fragments and corresponding
combinatorial and game-theoretic notions, both at the level of finite and
infinite structures. For a recent survey article, see [2]. The template of
game comonads was axiomatised in [6], see also the extended version [7], by
means of the notion of _arboreal category_. Our proof strategy consists in
establishing abstract homomorphism preservation theorems at the level of
arboreal categories and then instantiating these results for specific choices
of game comonads.
We thus obtain novel equi-resource homomorphism preservation theorems for
modal and guarded logics (Theorems 4.9 and 4.11, respectively), along with
relativisations to appropriate subclasses of structures—e.g. the class of
finite structures (Theorems 4.23 and 4.24, respectively). Further, we derive a
relativisation result (Theorem 5.12) which refines Rossman’s equirank
homomorphism preservation theorem.
This paper is organised as follows. In Section 2 we provide a brief
introduction to game comonads, and in Section 3 we recall the necessary
definitions and facts concerning arboreal categories. Homomorphism
preservation theorems are recast into categorical statements and proved at the
level of arboreal categories in Sections 4 and 5. Finally, Section 6 contains
the proof of our main technical result, namely Theorem 5.9.
Throughout this article, we shall assume the reader is familiar with the basic
notions of category theory; standard references include e.g. [10, 33].
## 2. Logic Fragments and Game Comonads
We shall mainly deal with two types of vocabularies:
* •
_Relational vocabularies_ , i.e. first-order vocabularies that contain no
function or constant symbols.
* •
_Multi-modal vocabularies_ , i.e. relational vocabularies in which every
relation symbol has arity at most $2$.
Multi-modal vocabularies will be referred to simply as _modal vocabularies_.
If $\sigma$ is a modal vocabulary, we can assign to each unary relation symbol
$P\in\sigma$ a propositional variable $p$, and to each binary relation symbol
$R\in\sigma$ modalities $\Diamond_{R}$ and $\Box_{R}$. We refer to
$\sigma$-structures as _Kripke structures_. For any Kripke structure $A$, the
interpretation of $P$ in $A$, denoted by $P^{A}$, corresponds to the valuation
of the propositional variable $p$, and the binary relation $R^{A}$ to the
accessibility relation for the modalities $\Diamond_{R}$ and $\Box_{R}$.
For our running examples and intended applications, we will be interested in
the following resource-bounded fragments of first-order logic (always
including the equality symbol) and modal logic, for a relational vocabulary
$\sigma$ and positive integer $k$:
* •
$\mathrm{FO}_{k}$ and $\exists^{+}\mathrm{FO}_{k}$: these denote,
respectively, the set of sentences of quantifier rank $\leq k$ in the
vocabulary $\sigma$, and its existential positive fragment.
* •
$\mathrm{ML}_{k}$ and $\exists^{+}\mathrm{ML}_{k}$: if $\sigma$ is a modal
vocabulary, $\mathrm{ML}_{k}$ is the set of modal formulas of modal depth
$\leq k$ in the vocabulary $\sigma$ (recall that the _modal depth_ is the
maximum number of nested modalities in a formula). Moreover,
$\exists^{+}\mathrm{ML}_{k}$ denotes the existential positive fragment of
$\mathrm{ML}_{k}$, i.e. the set of formulas in $\mathrm{ML}_{k}$ that use only
the connectives $\vee,\wedge$ and the diamond modalities.
* •
$\mathrm{ML}_{k}(\\#)$: this is the extension of $\mathrm{ML}_{k}$ with
_graded modalities_. Recall that graded modalities have the form
$\Diamond_{R}^{n}$ and $\Box_{R}^{n}$, with $n\geq 0$, and
$A,a\models{\Diamond_{R}^{n}}\,\varphi$ if there are at least $n$
$R$-successors of $a$ satisfying $\varphi$ (and
$\Box_{R}^{n}\varphi=\neg\Diamond_{R}^{n}\neg\varphi$).
Further logics, namely _guarded logics_ , will be considered in Section 4.2.
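As an illustration of these semantic clauses (an informal sketch only: the encoding of Kripke structures as dictionaries and of formulas as nested tuples is ours, not part of the formal development), a graded modal formula can be evaluated on a finite pointed structure as follows:

```python
def holds(A, a, phi):
    """Check A, a |= phi for a finite pointed Kripke structure.

    A = (val, rel): val maps propositional variables to sets of worlds,
    rel maps binary relation symbols to sets of pairs.  Formulas are
    nested tuples, e.g. ("dia", "R", 2, ("prop", "p")) for the graded
    modality Diamond_R^2 p.  This encoding is an ad hoc illustration.
    """
    val, rel = A
    tag = phi[0]
    if tag == "prop":
        return a in val[phi[1]]
    if tag == "not":
        return not holds(A, a, phi[1])
    if tag == "and":
        return holds(A, a, phi[1]) and holds(A, a, phi[2])
    if tag == "or":
        return holds(A, a, phi[1]) or holds(A, a, phi[2])
    if tag == "dia":  # graded diamond: at least n R-successors satisfy psi
        _, R, n, psi = phi
        succs = [b for (x, b) in rel[R] if x == a]
        return sum(holds(A, b, psi) for b in succs) >= n
    if tag == "box":  # graded box, defined as the dual of the diamond
        _, R, n, psi = phi
        return not holds(A, a, ("dia", R, n, ("not", psi)))
    raise ValueError(tag)
```

For instance, with $R^{A}=\{(0,1),(0,2)\}$ and $p$ true at $1$ and $2$, the formula $\Diamond_{R}^{2}\,p$ holds at $0$ while $\Diamond_{R}^{3}\,p$ does not.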
Any logic fragment $\mathcal{L}$ (we employ the nomenclature “logic
fragment”, rather than the more customary term “theory”, as we are mainly
interested in the situation where $\mathcal{L}$ is defined by constraining the
syntactic shape of sentences), i.e. any set of first-order sentences in a
vocabulary $\tau$, induces an equivalence relation $\equiv^{\mathcal{L}}$ on
$\tau$-structures defined as follows: for all $\tau$-structures $A,B$,
$A\equiv^{\mathcal{L}}B\ \ \Longleftrightarrow\ \
\forall\varphi\in\mathcal{L}.\,(A\vDash\varphi\,\Leftrightarrow\,B\vDash\varphi).$
If $\mathcal{L}$ consists of all first-order sentences, $\equiv^{\mathcal{L}}$
coincides with elementary equivalence.
Syntax-free characterisations of the equivalence relations
$\equiv^{\mathcal{L}}$ play an important role in model theory. For example,
the Keisler-Shelah theorem states that two $\tau$-structures are elementarily
equivalent if, and only if, they have isomorphic ultrapowers. A different
approach is through _model comparison games_. These have a wide range of
applications in model theory, see e.g. [24, §3], and are central to finite
model theory where tools such as the compactness theorem and ultraproducts are
not available. Model comparison games lead to a perspective which may be
described as “model theory without compactness”.
Game comonads arise from the insight that model comparison games can be seen
as semantic constructions in their own right. Although we shall not employ
games as a tool, we recall two examples of games to motivate the framework of
game comonads.
Henceforth we shall work with a relational vocabulary $\sigma$. Let $A,B$ be
$\sigma$-structures. Both types of game are two-player games played between
Spoiler and Duplicator. Whereas Spoiler aims to show that $A$ and $B$ are
different, Duplicator aims to show that they are similar. Each game is played
in a number of rounds:
Ehrenfeucht-Fraïssé game:
In the $i$th round, Spoiler chooses an element in one of the structures and
Duplicator responds by choosing an element in the other structure. After $k$
rounds have been played, we have sequences $[a_{1},\ldots,a_{k}]$ and
$[b_{1},\ldots,b_{k}]$ of elements from $A$ and $B$ respectively. Duplicator
wins this play if the ensuing relation $r\coloneqq\\{(a_{i},b_{i})\mid 1\leq
i\leq k\\}$ is a partial isomorphism. Duplicator wins the $k$-round game if
they have a strategy which is winning after $i$ rounds, for all $1\leq i\leq
k$.
Bisimulation game:
Suppose $\sigma$ is a modal vocabulary. The game is played between pointed
Kripke structures $(A,a)$ and $(B,b)$, where $a\in A$ and $b\in B$. The
initial position is $(a_{0},b_{0})=(a,b)$. In the $i$th round, with current
position $(a_{i-1},b_{i-1})$, Spoiler chooses one of the two structures, say
$A$, a binary relation symbol $R$ in $\sigma$, and an element $a_{i}\in A$
such that $(a_{i-1},a_{i})\in R^{A}$. Duplicator responds by choosing an
element in the other structure, say $b_{i}\in B$, such that
$(b_{i-1},b_{i})\in R^{B}$. If Duplicator has no such response, they lose.
Duplicator wins the $k$-round game if, for all unary relation symbols $P$ in
$\sigma$, we have $P^{A}(a_{i})\Leftrightarrow P^{B}(b_{i})$ for all $0\leq
i\leq k$.
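The winning condition of a single play of the $k$-round Ehrenfeucht-Fraïssé game can be checked mechanically. The sketch below (our own illustrative encoding: finite structures as dictionaries mapping relation symbols to sets of tuples) tests whether the relation $r=\{(a_{i},b_{i})\}$ determined by a play is a partial isomorphism:

```python
from itertools import product

def is_partial_isomorphism(A, B, play_a, play_b):
    # A, B: finite structures as dicts {relation symbol: set of tuples}.
    # play_a, play_b: the elements chosen in A and B over k rounds.
    pairs = list(zip(play_a, play_b))
    # r = {(a_i, b_i)} must be a well-defined injective partial map
    for (a, b) in pairs:
        for (a2, b2) in pairs:
            if (a == a2) != (b == b2):
                return False
    # r must preserve and reflect every relation symbol
    for R in A:
        tuples = A[R] | B[R]
        if not tuples:
            continue
        arity = len(next(iter(tuples)))
        for combo in product(pairs, repeat=arity):
            ta = tuple(a for (a, _) in combo)
            tb = tuple(b for (_, b) in combo)
            if (ta in A[R]) != (tb in B[R]):
                return False
    return True
```

Duplicator wins the $k$-round game precisely when every play consistent with their strategy passes this check after each round.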
Assume the vocabulary $\sigma$ is finite. The classical Ehrenfeucht-Fraïssé
theorem [20, 21] states that Duplicator has a winning strategy in the
$k$-round Ehrenfeucht-Fraïssé game played between $A$ and $B$ if, and only if,
$A$ and $B$ satisfy the same first-order sentences of quantifier rank at most
$k$. Similarly, Duplicator has a winning strategy in the $k$-round
bisimulation game played between pointed Kripke structures $(A,a)$ and $(B,b)$
if, and only if, $(A,a)$ and $(B,b)$ satisfy the same modal formulas of modal
depth at most $k$ [23].
###### Remark 2.1.
In both Ehrenfeucht-Fraïssé and bisimulation games, the resource parameter is
the number of rounds. This need not be the case in general. For instance, the
resource parameter for pebble games [13, 25], which correspond to finite-
variable fragments of first-order logic, is the number of pebbles available to
the players.
Next, we introduce the comonads corresponding to Ehrenfeucht-Fraïssé and
bisimulation games, respectively. For each $\sigma$-structure $A$, denote by
$\mathbb{E}_{k}(A)$ the set of all non-empty sequences of length at most $k$
of elements from $A$. In other words, $\mathbb{E}_{k}(A)$ is the set of all
plays in $A$ in the $k$-round Ehrenfeucht-Fraïssé game. The interpretations of
the relation symbols can be lifted from $A$ to $\mathbb{E}_{k}(A)$ as follows.
Let $\varepsilon_{A}\colon\mathbb{E}_{k}(A)\to A$ be the function sending a
sequence to its last element. For each relation symbol $R\in\sigma$ of arity
$j$, we define $R^{\mathbb{E}_{k}(A)}$ to be the set of all tuples
$(s_{1},\ldots,s_{j})\in\mathbb{E}_{k}(A)^{j}$ such that:
1. (i)
The sequences $s_{1},\ldots,s_{j}$ are pairwise comparable in the prefix
order.
2. (ii)
$(\varepsilon_{A}(s_{1}),\ldots,\varepsilon_{A}(s_{j}))\in R^{A}$.
For every homomorphism $f\colon A\to B$, let
$\mathbb{E}_{k}(f)\colon\mathbb{E}_{k}(A)\to\mathbb{E}_{k}(B),\ \
[a_{1},\ldots,a_{l}]\mapsto[f(a_{1}),\ldots,f(a_{l})].$
This yields a comonad (in fact, a _family_ of comonads indexed by $k>0$) on
the category $\mathbf{Struct}(\sigma)$ of $\sigma$-structures and their
homomorphisms, known as the _Ehrenfeucht-Fraïssé comonad_ [8]. The underlying
functor of this comonad is
$\mathbb{E}_{k}\colon\mathbf{Struct}(\sigma)\to\mathbf{Struct}(\sigma)$, the
counit is $\varepsilon$, and the comultiplication at $A$ is the homomorphism
$\mathbb{E}_{k}(A)\to\mathbb{E}_{k}\mathbb{E}_{k}(A),\ \
[a_{1},\ldots,a_{l}]\mapsto[[a_{1}],[a_{1},a_{2}],\ldots,[a_{1},\ldots,a_{l}]].$
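For small finite structures, this construction can be carried out directly. The following sketch (illustrative only; the encodings are ours) computes the carrier of $\mathbb{E}_{k}(A)$, the lifted relations, the counit, and the comultiplication:

```python
from itertools import product

def ef_carrier(universe, k):
    # Carrier of E_k(A): all non-empty sequences of length <= k, i.e.
    # all Spoiler plays in A in the k-round Ehrenfeucht-Fraisse game.
    seqs = []
    for l in range(1, k + 1):
        seqs.extend(product(universe, repeat=l))
    return seqs

def is_prefix(s, t):
    return len(s) <= len(t) and t[:len(s)] == s

def lift_relation(RA, carrier):
    # R^{E_k(A)}: tuples of pairwise prefix-comparable plays whose
    # last elements stand in R^A.
    arity = len(next(iter(RA)))
    return {tup for tup in product(carrier, repeat=arity)
            if all(is_prefix(s, t) or is_prefix(t, s)
                   for s in tup for t in tup)
            and tuple(s[-1] for s in tup) in RA}

def counit(s):        # epsilon_A: send a play to its last move
    return s[-1]

def comult(s):        # delta_A: send a play to its sequence of prefixes
    return tuple(s[:i + 1] for i in range(len(s)))
```

By construction, applying the counit componentwise to a lifted tuple recovers a tuple in $R^{A}$, i.e. $\varepsilon_{A}$ is a homomorphism $\mathbb{E}_{k}(A)\to A$.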
A similar construction applies to $k$-round bisimulation games. Suppose
$\sigma$ is a modal vocabulary and let $(A,a)$ be a pointed Kripke structure.
We define a Kripke structure $\mathbb{M}_{k}(A,a)$ whose carrier is the set of
all paths $p$ of length at most $k$ starting from $a$:
$a\xrightarrow{R_{1}}a_{1}\xrightarrow{R_{2}}a_{2}\to\cdots\xrightarrow{R_{n}}a_{n}$
where $R_{1},\dots,R_{n}$ are binary relation symbols in $\sigma$. If
$P\in\sigma$ is unary then $P^{\mathbb{M}_{k}(A,a)}$ is the set of paths $p$
whose last element $a_{n}$ belongs to $P^{A}$. For a binary relation symbol
$R$ in $\sigma$, $R^{\mathbb{M}_{k}(A,a)}$ is the set of pairs of paths
$(p,p^{\prime})$ such that $p^{\prime}$ is obtained by extending $p$ by one
step along $R$. The distinguished element of the Kripke structure
$\mathbb{M}_{k}(A,a)$ is the trivial path $\langle a\rangle$ of length $0$,
and the function $\varepsilon_{(A,a)}\colon(\mathbb{M}_{k}(A,a),\langle
a\rangle)\to(A,a)$ sending a path to its last element is a morphism of pointed
Kripke structures. By reasoning similar to the above, the assignment
$(A,a)\mapsto(\mathbb{M}_{k}(A,a),\langle a\rangle)$ can be upgraded to a
comonad on the category $\mathbf{Struct}_{\bullet}(\sigma)$ of pointed Kripke
structures and their homomorphisms, the _modal comonad_ [8].
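The carrier of $\mathbb{M}_{k}(A,a)$ can likewise be computed by unfolding a finite Kripke structure. In the sketch below (our own ad hoc encoding), a path $a\xrightarrow{R_{1}}a_{1}\to\cdots\xrightarrow{R_{n}}a_{n}$ is represented as the interleaved tuple `(a, R1, a1, ..., Rn, an)`:

```python
def unfold(rel, a, k):
    """All paths of length <= k starting from a, where rel maps binary
    relation symbols to sets of pairs.  The trivial path (a,) is the
    distinguished element; the last entry of a path gives the counit."""
    paths = [(a,)]
    frontier = [(a,)]
    for _ in range(k):
        new = []
        for p in frontier:
            last = p[-1]
            for R, edges in rel.items():
                for (x, y) in edges:
                    if x == last:
                        # extend the path by one step along R
                        new.append(p + (R, y))
        paths.extend(new)
        frontier = new
    return paths
```

The successor relation $R^{\mathbb{M}_{k}(A,a)}$ then relates `p` to `p + (R, y)`, matching the one-step extension described above.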
In addition to the examples mentioned above, the framework of game comonads
covers a number of model comparison games, cf. e.g. [3, 4, 5, 16, 26, 35]. In
each case, they yield structural (syntax-free) characterisations of
equivalence in the corresponding logic fragments. This will be illustrated
from an axiomatic perspective in Section 3.4.
## 3. Arboreal Categories
In this section, we recall from [7] the basic definitions and facts concerning
arboreal categories. All categories under consideration are assumed to be
locally small and _well-powered_ , i.e. every object has a _small_ set of
subobjects (as opposed to a proper class).
### 3.1. Proper factorisation systems
Given arrows $e$ and $m$ in a category $\operatorname{\mathscr{C}}$, we say
that $e$ has the _left lifting property_ with respect to $m$, or that $m$ has
the _right lifting property_ with respect to $e$, if for every commutative
square $h\circ e=m\circ g$ (with $e\colon A\to B$, $g\colon A\to C$,
$h\colon B\to D$ and $m\colon C\to D$) there is a diagonal arrow $d\colon
B\to C$ such that $d\circ e=g$ and $m\circ d=h$. For any
class $\mathscr{H}$ of morphisms in $\operatorname{\mathscr{C}}$, let
${}^{\pitchfork}\mathscr{H}$ (respectively $\mathscr{H}^{\pitchfork}$) be the
class of morphisms having the left (respectively right) lifting property with
respect to every morphism in $\mathscr{H}$.
###### Definition 3.1.
A pair of classes of morphisms $(\mathscr{Q},\mathscr{M})$ in a category
$\operatorname{\mathscr{C}}$ is a _weak factorisation system_ provided it
satisfies the following conditions:
1. (i)
Every morphism $f$ in $\operatorname{\mathscr{C}}$ can be written as $f=m\circ
e$ with $e\in\mathscr{Q}$ and $m\in\mathscr{M}$.
2. (ii)
$\mathscr{Q}={}^{\pitchfork}\mathscr{M}$ and
$\mathscr{M}=\mathscr{Q}^{\pitchfork}$.
A _proper factorisation system_ is a weak factorisation system
$(\mathscr{Q},\mathscr{M})$ such that each arrow in $\mathscr{Q}$ is epic and
each arrow in $\mathscr{M}$ is monic. We refer to $\mathscr{M}$-morphisms as
_embeddings_ and denote them by $\rightarrowtail$. $\mathscr{Q}$-morphisms
will be referred to as _quotients_ and denoted by $\twoheadrightarrow$.
Next, we state some well known properties of proper factorisation systems (cf.
e.g. [22] or [37]) which will be used throughout the paper without further
reference.
###### Lemma 3.2.
Let $(\mathscr{Q},\mathscr{M})$ be a proper factorisation system in
$\operatorname{\mathscr{C}}$. The following hold:
1. (a)
$\mathscr{Q}$ and $\mathscr{M}$ are closed under compositions.
2. (b)
$\mathscr{Q}$ contains all retractions, $\mathscr{M}$ contains all sections,
and $\mathscr{Q}\cap\mathscr{M}=\\{\text{isomorphisms}\\}$.
3. (c)
The pullback of an $\mathscr{M}$-morphism along any morphism, if it exists, is
in $\mathscr{M}$.
4. (d)
$g\circ f\in\mathscr{Q}$ implies $g\in\mathscr{Q}$.
5. (e)
$g\circ f\in\mathscr{M}$ implies $f\in\mathscr{M}$.
Assume $\operatorname{\mathscr{C}}$ is a category admitting a proper
factorisation system $(\mathscr{Q},\mathscr{M})$. In the same way that one
usually defines the poset of subobjects of a given object
$X\in\operatorname{\mathscr{C}}$, we can define the poset
$\operatorname{\mathbb{S}}{X}$ of $\mathscr{M}$-subobjects of $X$. Given
embeddings $m\colon S\rightarrowtail X$ and $n\colon T\rightarrowtail X$, let
us say that $m\trianglelefteq n$ provided there is a morphism $i\colon S\to T$
such that $m=n\circ i$ (if it exists, $i$ is necessarily an embedding).
This yields a preorder on the class of all embeddings with codomain $X$. The
symmetrisation $\sim$ of $\trianglelefteq$ is given by $m\sim n$ if, and only
if, there exists an isomorphism $i\colon S\to T$ such that $m=n\circ i$. Let
$\operatorname{\mathbb{S}}{X}$ be the class of $\sim$-equivalence classes of
embeddings with codomain $X$, equipped with the natural partial order $\leq$
induced by $\trianglelefteq$. We shall systematically represent a
$\sim$-equivalence class by any of its representatives. As
$\operatorname{\mathscr{C}}$ is well-powered and every embedding is a
monomorphism, $\operatorname{\mathbb{S}}{X}$ is a set.
For any morphism $f\colon X\to Y$ and embedding $m\colon S\rightarrowtail X$,
consider the (quotient, embedding) factorisation of $f\circ m$, written
$S\twoheadrightarrow\exists_{f}S\rightarrowtail Y$ with embedding part
denoted $\exists_{f}m\colon\exists_{f}S\rightarrowtail Y$.
This yields a monotone map
$\exists_{f}\colon\operatorname{\mathbb{S}}{X}\to\operatorname{\mathbb{S}}{Y}$
sending $m$ to $\exists_{f}m$. Note that the map $\exists_{f}$ is well-defined
because factorisations are unique up to isomorphism. If $f$ is an embedding
(or, more generally, $f\circ m$ is an embedding), $\exists_{f}m$ can be
identified with $f\circ m$. For the following observation, cf. e.g. [7, Lemma
2.7(a)].
###### Lemma 3.3.
Let $\operatorname{\mathscr{C}}$ be a category equipped with a proper
factorisation system and let $f\colon X\rightarrowtail Y$ be an embedding in
$\operatorname{\mathscr{C}}$. Then
$\exists_{f}\colon\operatorname{\mathbb{S}}{X}\to\operatorname{\mathbb{S}}{Y}$
is an order-embedding.
### 3.2. Arboreal categories
Let $\operatorname{\mathscr{C}}$ be a category endowed with a proper
factorisation system $(\mathscr{Q},\mathscr{M})$.
###### Definition 3.4.
An object $X$ of $\operatorname{\mathscr{C}}$ is called a _path_ provided the
poset $\operatorname{\mathbb{S}}{X}$ is a finite chain. Paths will be denoted
by $P,Q$ and variations thereof.
The collection of paths is closed under embeddings and quotients. That is,
given an arrow $f\colon X\to Y$, if $f$ is an embedding and $Y$ is a path then
$X$ is a path, and if $f$ is a quotient and $X$ is a path then $Y$ is a path
[7, Lemma 3.5].
A _path embedding_ is an embedding $P\rightarrowtail X$ whose domain is a
path. We let $\operatorname{\mathbb{P}}{X}$ denote the sub-poset of
$\operatorname{\mathbb{S}}{X}$ consisting of the path embeddings. Because
paths are closed under quotients, for any arrow $f\colon X\to Y$ the monotone
map
$\exists_{f}\colon\operatorname{\mathbb{S}}{X}\to\operatorname{\mathbb{S}}{Y}$
restricts to a monotone map
$\operatorname{\mathbb{P}}{f}\colon\operatorname{\mathbb{P}}{X}\to\operatorname{\mathbb{P}}{Y},\
\ (m\colon P\rightarrowtail
X)\mapsto(\exists_{f}m\colon\exists_{f}P\rightarrowtail Y).$ (1)
For any object $X$ of $\operatorname{\mathscr{C}}$, we have a diagram with
vertex $X$ consisting of all path embeddings with codomain $X$. The morphisms
between paths are those making the obvious triangles over $X$ commute.
Choosing representatives in an appropriate way, this yields a cocone over the
small diagram $\operatorname{\mathbb{P}}{X}$. We say that $X$ is _path-
generated_ provided this is a colimit cocone in $\operatorname{\mathscr{C}}$.
Suppose for a moment that coproducts of sets of paths exist in
$\operatorname{\mathscr{C}}$. An object $X$ of $\operatorname{\mathscr{C}}$ is
_connected_ if, for all non-empty sets of paths $\\{P_{i}\mid i\in I\\}$ in
$\operatorname{\mathscr{C}}$, any morphism
$X\to\coprod_{i\in I}{P_{i}}$
factors through some coproduct injection $P_{j}\to\coprod_{i\in I}{P_{i}}$.
In order to state the definition of arboreal category, let us say that a
proper factorisation system is _stable_ if, for any quotient $e$ and embedding
$m$ with common codomain, the pullback of $e$ along $m$ exists and is a
quotient.
###### Definition 3.5.
An _arboreal category_ is a category $\operatorname{\mathscr{C}}$, equipped
with a stable proper factorisation system, that satisfies the following
conditions:
1. (i)
$\operatorname{\mathscr{C}}$ has all coproducts of sets of paths.
2. (ii)
For any paths $P,Q,Q^{\prime}$ in $\operatorname{\mathscr{C}}$, if a composite
$P\to Q\to Q^{\prime}$ is a quotient then so is $P\to Q$.
3. (iii)
Every object of $\operatorname{\mathscr{C}}$ is path-generated.
4. (iv)
Every path in $\operatorname{\mathscr{C}}$ is connected.
###### Remark 3.6.
Item (ii) in the previous definition is equivalent to the following _2-out-
of-3 condition_ : For any paths $P,Q,Q^{\prime}$ and morphisms $f\colon P\to
Q$ and $g\colon Q\to Q^{\prime}$, if any two of $f$, $g$, and $g\circ f$ are
quotients, then so is the third.
See [7, Remark 3.8]. Moreover, item (iii) is equivalent to saying that the
inclusion functor
$\operatorname{\mathscr{C}}_{p}\hookrightarrow\operatorname{\mathscr{C}}$ is
dense, where $\operatorname{\mathscr{C}}_{p}$ is the full subcategory of
$\operatorname{\mathscr{C}}$ defined by the paths [7, Lemma 5.1].
Finally, note that any arboreal category admits an initial object, obtained as
the coproduct of the empty set, and any initial object is a path because its
poset of $\mathscr{M}$-subobjects has a single element—namely, the equivalence
class of the identity.
If $(P,{\leq})$ is a poset, then $C\subseteq P$ is a _chain_ if it is linearly
ordered. $(P,\leq)$ is a _forest_ if, for all $x\in P$, the set
${\downarrow}x\coloneqq\\{y\in P\mid y\leq x\\}$ is a finite chain. The
_height_ of a forest is the supremum of the cardinalities of its chains. The
_covering relation_ $\prec$ associated with a partial order $\leq$ is defined
by $u\prec v$ if and only if $u<v$ and there is no $w$ such that $u<w<v$. The
_roots_ of a forest are the minimal elements, and a _tree_ is a forest with at
most one root. Morphisms of forests are maps that preserve roots and the
covering relation. The category of forests is denoted by
$\operatorname{\mathscr{F}}$, and the full subcategory of trees by
$\operatorname{\mathscr{T}}$.
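These order-theoretic notions are straightforward to check for finite posets. The sketch below (ours; a finite partial order is encoded as a reflexive set of pairs) tests the forest condition and computes roots and height:

```python
def downset(order, x):
    # {y : y <= x} for a finite partial order given as a reflexive
    # set of pairs
    return {y for (y, z) in order if z == x}

def is_chain(order, s):
    # every two elements of s are comparable
    return all((a, b) in order or (b, a) in order for a in s for b in s)

def is_forest(elements, order):
    # every principal down-set must be a (finite) chain
    return all(is_chain(order, downset(order, x)) for x in elements)

def roots(elements, order):
    # roots are the minimal elements
    return {x for x in elements if downset(order, x) == {x}}

def height(elements, order):
    # in a finite forest, maximal chains are down-sets of elements
    return max(len(downset(order, x)) for x in elements)
```

A tree is then a forest whose set of roots has at most one element.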
###### Example 3.7.
Let $\sigma$ be a relational vocabulary. A _forest-ordered $\sigma$-structure_
$(A,{\leq})$ is a $\sigma$-structure $A$ equipped with a forest order $\leq$.
A morphism of forest-ordered $\sigma$-structures
$f\colon(A,{\leq})\to(B,{\leq^{\prime}})$ is a $\sigma$-homomorphism $f\colon
A\to B$ that is also a forest morphism. This determines a category
$\mathscr{R}(\sigma)$. We equip $\mathscr{R}(\sigma)$ with the factorisation
system given by (surjective morphisms, embeddings), where an embedding is a
morphism which is an embedding qua $\sigma$-homomorphism. Let
$\mathscr{R}^{E}(\sigma)$ be the full subcategory of $\mathscr{R}(\sigma)$
determined by those objects $(A,{\leq})$ satisfying the following condition:
1. (E)
If $a,b\in A$ are distinct elements that appear in a tuple of related elements
$(a_{1},\ldots,a_{l})\in R^{A}$ for some $R\in\sigma$, then either $a<b$ or
$b<a$ (i.e., if $a$ and $b$ are adjacent in the _Gaifman graph_ of $A$, then
they are comparable in the forest order).
For each $k>0$, let $\mathscr{R}^{E}_{k}(\sigma)$ be the full subcategory of
$\mathscr{R}^{E}(\sigma)$ of forest-ordered structures of height $\leq k$. In
[9, Theorem 9.1], it is shown that $\mathscr{R}^{E}_{k}(\sigma)$ is isomorphic
to the category of coalgebras for the Ehrenfeucht-Fraïssé comonad
$\mathbb{E}_{k}$ on $\mathbf{Struct}(\sigma)$. The objects $(A,{\leq})$ of
$\mathscr{R}^{E}_{k}(\sigma)$ are forest covers of $A$ witnessing that its
_tree-depth_ is at most $k$ [34].
The category $\mathscr{R}^{E}(\sigma)$ is arboreal when equipped with the
restriction of the factorisation system on $\mathscr{R}(\sigma)$. The paths in
$\mathscr{R}^{E}(\sigma)$ are those objects in which the order is a finite
chain. Similarly, $\mathscr{R}^{E}_{k}(\sigma)$ is an arboreal category for
all $k>0$, when equipped with the restriction of the factorisation system on
$\mathscr{R}(\sigma)$. See [7, Examples 5.3].
###### Example 3.8.
Assume that $\sigma$ is a modal vocabulary. Let $\mathscr{R}^{M}(\sigma)$ be
the full subcategory of $\mathscr{R}(\sigma)$ consisting of the tree-ordered
$\sigma$-structures $(A,{\leq})$ satisfying:
1. (M)
For $a,b\in A$, $a\prec b$ if and only if $(a,b)\in R^{A}$ for a unique
binary relation symbol $R\in\sigma$.
For each $k>0$, the full subcategory $\mathscr{R}^{M}_{k}(\sigma)$ of
$\mathscr{R}^{M}(\sigma)$ consisting of the tree-ordered structures of height
at most $k$ is isomorphic to the category of coalgebras for the modal comonad
$\mathbb{M}_{k}$ on $\mathbf{Struct}_{\bullet}(\sigma)$ [9, Theorem 9.5]. When
equipped with the restriction of the factorisation system on
$\mathscr{R}(\sigma)$, the category $\mathscr{R}^{M}(\sigma)$ is arboreal and
its paths are those objects in which the order is a finite chain. Likewise for
$\mathscr{R}^{M}_{k}(\sigma)$.
It follows from the definition of path that, for any object $X$ of an arboreal
category, the poset $\operatorname{\mathbb{P}}{X}$ is a tree; in fact, a non-
empty tree. Crucially, this assignment extends to a functor into the category
of trees (for a proof, see [7, Theorem 3.10]):
###### Theorem 3.9.
Let $\operatorname{\mathscr{C}}$ be an arboreal category. The assignment
$f\mapsto\operatorname{\mathbb{P}}{f}$ in equation (1) induces a functor
$\operatorname{\mathbb{P}}\colon\operatorname{\mathscr{C}}\to\operatorname{\mathscr{T}}$
into the category of trees.
Finally, recall from [7, §5] the following properties of paths and posets of
embeddings.
###### Lemma 3.10.
The following statements hold in any arboreal category
$\operatorname{\mathscr{C}}$:
1. (a)
Between any two paths there is at most one embedding.
2. (b)
For all objects $X$ of $\operatorname{\mathscr{C}}$, the poset
$\operatorname{\mathbb{S}}{X}$ of its $\mathscr{M}$-subobjects is a complete
lattice (in fact, $\operatorname{\mathbb{S}}{X}$ is a _perfect_ lattice, cf.
[36] or [17]).
3. (c)
Let $X$ be an object of $\operatorname{\mathscr{C}}$ and let
$\mathcal{U}\subseteq\operatorname{\mathbb{P}}{X}$ be a non-empty subset. A
path embedding $m\in\operatorname{\mathbb{P}}{X}$ is below
$\bigvee{\mathcal{U}}\in\operatorname{\mathbb{S}}{X}$ if, and only if, it is
below some element of $\mathcal{U}$.
If it exists, the unique embedding between paths $P,Q$ in an arboreal category
is denoted by
$!_{P,Q}\colon P\rightarrowtail Q.$
If no confusion arises, we simply write $!\colon P\rightarrowtail Q$.
### 3.3. Bisimilarity and back-and-forth systems
Throughout this section, we fix an arbitrary arboreal category
$\operatorname{\mathscr{C}}$. A morphism $f\colon X\to Y$ in
$\operatorname{\mathscr{C}}$ is said to be _open_ if it satisfies the
following path-lifting property: Given any commutative square $f\circ
u=v\circ e$, where $e\colon P\rightarrowtail Q$ is an embedding between paths,
$u\colon P\to X$, and $v\colon Q\to Y$, there is an arrow $d\colon Q\to X$
making the two triangles commute, i.e. $d\circ e=u$ and $f\circ d=v$.
(If such an arrow exists, it is automatically an embedding.) Further, $f$ is a
_pathwise embedding_ if, for all path embeddings $m\colon P\rightarrowtail X$,
the composite $f\circ m\colon P\to Y$ is a path embedding.
Combining these notions, we can define a bisimilarity relation between objects
of $\operatorname{\mathscr{C}}$:
###### Definition 3.11.
Two objects $X,Y$ of $\operatorname{\mathscr{C}}$ are said to be _bisimilar_
if there exist an object $Z$ of $\operatorname{\mathscr{C}}$ and a span of
open pathwise embeddings $X\leftarrow Z\rightarrow Y$.
###### Remark 3.12.
The definition of open morphism given above is a refinement of the one
introduced in [28] (cf. [7, §4.1] for a discussion of the relation between
these notions), which is a special case of the concept of open geometric
morphism between toposes [27].
As we shall see next, if $\operatorname{\mathscr{C}}$ has binary products, the
bisimilarity relation can be characterised in terms of back-and-forth systems.
Let $X,Y$ be objects of $\operatorname{\mathscr{C}}$. Given
$m\in\operatorname{\mathbb{P}}{X}$ and $n\in\operatorname{\mathbb{P}}{Y}$, we
write $\llbracket m,n\rrbracket$ to indicate that
$\operatorname{dom}(m)\cong\operatorname{dom}(n)$. Intuitively, the pair
$\llbracket m,n\rrbracket$ encodes a partial isomorphism between $X$ and $Y$
“of shape $P$”, with $P$ a path.
###### Definition 3.13.
A _back-and-forth system_ between objects $X$ and $Y$ of
$\operatorname{\mathscr{C}}$ is a set
$\mathcal{B}=\\{\llbracket m_{i},n_{i}\rrbracket\mid
m_{i}\in\operatorname{\mathbb{P}}{X},\,n_{i}\in\operatorname{\mathbb{P}}{Y},\,i\in
I\\}$
satisfying the following conditions:
1. (i)
$\llbracket\bot_{X},\bot_{Y}\rrbracket\in\mathcal{B}$, where
$\bot_{X},\bot_{Y}$ are the roots of $\operatorname{\mathbb{P}}{X}$ and
$\operatorname{\mathbb{P}}{Y}$, respectively.
2. (ii)
If $\llbracket m,n\rrbracket\in\mathcal{B}$ and
$m^{\prime}\in\operatorname{\mathbb{P}}{X}$ are such that $m\prec m^{\prime}$,
there exists $n^{\prime}\in\operatorname{\mathbb{P}}{Y}$ satisfying $n\prec
n^{\prime}$ and $\llbracket m^{\prime},n^{\prime}\rrbracket\in\mathcal{B}$.
3. (iii)
If $\llbracket m,n\rrbracket\in\mathcal{B}$ and
$n^{\prime}\in\operatorname{\mathbb{P}}{Y}$ are such that $n\prec n^{\prime}$,
there exists $m^{\prime}\in\operatorname{\mathbb{P}}{X}$ satisfying $m\prec
m^{\prime}$ and $\llbracket m^{\prime},n^{\prime}\rrbracket\in\mathcal{B}$.
Two objects $X$ and $Y$ of $\operatorname{\mathscr{C}}$ are said to be _back-
and-forth equivalent_ if there exists a back-and-forth system between them.
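For finite trees, the three conditions of Definition 3.13 can be verified directly. The following sketch (illustrative; trees are encoded as reflexive sets of order pairs, and a pair $\llbracket m,n\rrbracket$ simply as `(m, n)`) checks whether a candidate set $\mathcal{B}$ is a back-and-forth system:

```python
def covers(order, u, v):
    # u is covered by v in a finite partial order (reflexive set of pairs)
    elems = {x for pair in order for x in pair}
    return (u, v) in order and u != v and not any(
        (u, w) in order and (w, v) in order
        for w in elems if w not in (u, v))

def is_back_and_forth(tx, ty, B, root_x, root_y):
    # tx, ty: finite trees as reflexive sets of order pairs;
    # B: candidate system, a set of pairs (m, n).
    ex = {x for p in tx for x in p}
    ey = {y for p in ty for y in p}
    if (root_x, root_y) not in B:          # condition (i): root pair
        return False
    for (m, n) in B:
        for m2 in ex:                      # condition (ii): "forth"
            if covers(tx, m, m2) and not any(
                    covers(ty, n, n2) and (m2, n2) in B for n2 in ey):
                return False
        for n2 in ey:                      # condition (iii): "back"
            if covers(ty, n, n2) and not any(
                    covers(tx, m, m2) and (m2, n2) in B for m2 in ex):
                return False
    return True
```

In the intended application, `tx` and `ty` play the role of the trees $\operatorname{\mathbb{P}}{X}$ and $\operatorname{\mathbb{P}}{Y}$.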
For a proof of the following result, see [7, Theorem 6.4].
###### Theorem 3.14.
Let $X,Y$ be objects of an arboreal category admitting a product. Then $X$ and
$Y$ are bisimilar if, and only if, they are back-and-forth equivalent.
The existence of a back-and-forth system between $X$ and $Y$ can be
equivalently described in terms of the existence of a Duplicator winning
strategy in a two-player game played between $\operatorname{\mathbb{P}}{X}$
and $\operatorname{\mathbb{P}}{Y}$ [7, §6.2]. Since winning strategies can be
composed to yield again a winning strategy, in any arboreal category with
binary products the bisimilarity relation is transitive, hence an equivalence
relation.
### 3.4. Resource-indexed arboreal adjunctions
Let $\operatorname{\mathscr{C}}$ be an arboreal category, with full
subcategory of paths $\operatorname{\mathscr{C}}_{p}$. We say that
$\operatorname{\mathscr{C}}$ is _resource-indexed_ if for all positive
integers $k$ there is a full subcategory $\operatorname{\mathscr{C}}_{p}^{k}$
of $\operatorname{\mathscr{C}}_{p}$ closed under embeddings (that is, for any
embedding $P\rightarrowtail Q$ in $\operatorname{\mathscr{C}}$ with $P,Q$
paths, if $Q\in\operatorname{\mathscr{C}}_{p}^{k}$ then also
$P\in\operatorname{\mathscr{C}}_{p}^{k}$; we shall further assume that each
category $\operatorname{\mathscr{C}}_{p}^{k}$ contains the initial object of
$\operatorname{\mathscr{C}}$), with
$\operatorname{\mathscr{C}}_{p}^{1}\hookrightarrow\operatorname{\mathscr{C}}_{p}^{2}\hookrightarrow\operatorname{\mathscr{C}}_{p}^{3}\hookrightarrow\cdots$
This induces a corresponding tower of full subcategories
$\operatorname{\mathscr{C}}_{k}$ of $\operatorname{\mathscr{C}}$, with the
objects of $\operatorname{\mathscr{C}}_{k}$ those whose cocone of path
embeddings with domain in $\operatorname{\mathscr{C}}_{p}^{k}$ is a colimit
cocone in $\operatorname{\mathscr{C}}$. It turns out that each category
$\operatorname{\mathscr{C}}_{k}$ is arboreal. Furthermore, the paths in
$\operatorname{\mathscr{C}}_{k}$ are precisely the objects of
$\operatorname{\mathscr{C}}_{p}^{k}$, i.e.
$(\operatorname{\mathscr{C}}_{k})_{p}=\operatorname{\mathscr{C}}_{p}^{k}$. Cf.
[7, Proposition 7.6] and its proof.
###### Example 3.15.
Consider the arboreal category $\mathscr{R}^{E}(\sigma)$ from Example 3.7.
This can be regarded as a resource-indexed arboreal category by taking as
$\operatorname{\mathscr{C}}_{p}^{k}$ the full subcategory of
$\mathscr{R}^{E}(\sigma)$ consisting of the objects in which the order is a
finite chain of cardinality $\leq k$. The generated subcategory
$\operatorname{\mathscr{C}}_{k}$ then coincides with
$\mathscr{R}^{E}_{k}(\sigma)$.
A similar reasoning shows that the arboreal category $\mathscr{R}^{M}(\sigma)$
from Example 3.8 can also be regarded as a resource-indexed arboreal category.
###### Definition 3.16.
Let $\\{\operatorname{\mathscr{C}}_{k}\\}$ be a resource-indexed arboreal
category and let $\operatorname{\mathscr{E}}$ be a category. A _resource-
indexed arboreal adjunction_ between $\operatorname{\mathscr{E}}$ and
$\operatorname{\mathscr{C}}$ is an indexed family of adjunctions
$L_{k}\dashv R_{k}$, with left adjoint
$L_{k}\colon\operatorname{\mathscr{C}}_{k}\to\operatorname{\mathscr{E}}$ and
right adjoint
$R_{k}\colon\operatorname{\mathscr{E}}\to\operatorname{\mathscr{C}}_{k}$.
A _resource-indexed arboreal cover_ of $\operatorname{\mathscr{E}}$ by
$\operatorname{\mathscr{C}}$ is a resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$ such that all
adjunctions $L_{k}\dashv R_{k}$ are comonadic, i.e. for all $k>0$ the
comparison functor from $\operatorname{\mathscr{C}}_{k}$ to the category of
Eilenberg-Moore coalgebras for the comonad $G_{k}\coloneqq L_{k}R_{k}$ is an
isomorphism.
###### Example 3.17.
Let $\sigma$ be a relational vocabulary and let
$\operatorname{\mathscr{E}}=\mathbf{Struct}(\sigma)$. Consider the resource-
indexed arboreal category $\mathscr{R}^{E}(\sigma)$ in Example 3.15. For each
$k>0$, there is a forgetful functor
$L^{E}_{k}\colon\mathscr{R}^{E}_{k}(\sigma)\to\mathbf{Struct}(\sigma)$
which forgets the forest order. This functor is comonadic. The right adjoint
$R^{E}_{k}$ sends a $\sigma$-structure $A$ to $\mathbb{E}_{k}(A)$ equipped
with the prefix order, and the comonad induced by this adjunction coincides
with the $k$-round Ehrenfeucht-Fraïssé comonad $\mathbb{E}_{k}$. This gives
rise to a resource-indexed arboreal cover of $\mathbf{Struct}(\sigma)$ by
$\mathscr{R}^{E}(\sigma)$.
Similarly, if $\sigma$ is a modal vocabulary, there is a resource-indexed
arboreal cover of $\mathbf{Struct}_{\bullet}(\sigma)$ by
$\mathscr{R}^{M}(\sigma)$ such that each adjunction $L^{M}_{k}\dashv
R^{M}_{k}$ induces the $k$-round modal comonad $\mathbb{M}_{k}$.
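For a finite structure, the Ehrenfeucht-Fraïssé construction just described can be computed directly. The following Python sketch (our own illustration, not part of the formal development; function names are ours, and only binary relation symbols are treated) builds the universe of $\mathbb{E}_{k}(A)$ as the non-empty sequences over $A$ of length at most $k$, the lifted interpretation of a relation symbol, and the counit.

```python
def ef_universe(A, k):
    """Universe of E_k(A): non-empty sequences over A of length at most k,
    to be ordered by the prefix relation."""
    seqs, frontier = [], [()]
    for _ in range(k):
        frontier = [s + (a,) for s in frontier for a in A]
        seqs.extend(frontier)
    return seqs

def comparable(s, t):
    """Two sequences are comparable iff one is a prefix of the other."""
    n = min(len(s), len(t))
    return s[:n] == t[:n]

def lift_relation(R, seqs):
    """Interpretation of a binary relation symbol in E_k(A): pairs of
    prefix-comparable sequences whose last elements are related in A."""
    return {(s, t) for s in seqs for t in seqs
            if comparable(s, t) and (s[-1], t[-1]) in R}

def counit(s):
    """The counit of the comonad sends a sequence to its last element."""
    return s[-1]
```

For a two-element structure and $k=2$ this yields the $2+4=6$ sequences of length at most $2$; the right adjoint $R^{E}_{k}$ of Example 3.17 equips the resulting structure with the prefix order.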
###### Example 3.18.
To deal with the equality symbol in the logic, it is useful to consider
resource-indexed arboreal adjunctions constructed as follows. Let
$\sigma^{I}\coloneqq\sigma\cup\\{I\\}$
be the vocabulary obtained by adding a fresh binary relation symbol $I$ to
$\sigma$. Any $\sigma$-structure can be expanded to a $\sigma^{I}$-structure
by interpreting $I$ as the identity relation. This yields a fully faithful
functor $J\colon\mathbf{Struct}(\sigma)\to\mathbf{Struct}(\sigma^{I})$. The
functor $J$ has a left adjoint
$H\colon\mathbf{Struct}(\sigma^{I})\to\mathbf{Struct}(\sigma)$ which sends a
$\sigma^{I}$-structure $A$ to the quotient of the $\sigma$-reduct of $A$ by
the equivalence relation generated by $I^{A}$ [18, Lemma 25]. We can then
compose the adjunction $H\dashv J$ with e.g. the Ehrenfeucht-Fraïssé resource-
indexed arboreal cover of $\mathbf{Struct}(\sigma^{I})$ by
$\mathscr{R}^{E}(\sigma^{I})$ from Example 3.17.
$\mathscr{R}^{E}_{k}(\sigma^{I})\;\overset{L^{E}_{k}}{\underset{R^{E}_{k}}{\rightleftarrows}}\;\mathbf{Struct}(\sigma^{I})\;\overset{H}{\underset{J}{\rightleftarrows}}\;\mathbf{Struct}(\sigma),\qquad L^{E}_{k}\dashv R^{E}_{k},\quad H\dashv J.$
The composite adjunctions $HL^{E}_{k}\dashv R^{E}_{k}J$, which are not
comonadic, yield the _Ehrenfeucht-Fraïssé resource-indexed arboreal
adjunction_ between $\mathbf{Struct}(\sigma)$ and
$\mathscr{R}^{E}(\sigma^{I})$.
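On finite structures, the left adjoint $H$ amounts to a union-find computation: collapse the equivalence classes generated by $I^{A}$ and reindex the remaining relations along the quotient map. The following Python sketch (our own naming; a minimal illustration under these assumptions, not the construction of [18]) makes this concrete.

```python
def quotient_by_I(universe, I_pairs, relations):
    """Quotient the sigma-reduct of a finite sigma^I-structure by the
    equivalence relation generated by the interpretation of I,
    using a simple union-find."""
    parent = {x: x for x in universe}

    def find(x):
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for x, y in I_pairs:
        parent[find(x)] = find(y)

    # Map every element to a canonical representative of its class.
    rep = {x: find(x) for x in universe}
    new_universe = set(rep.values())
    # Reindex each relation along the quotient map.
    new_relations = {
        name: {tuple(rep[x] for x in tup) for tup in tuples}
        for name, tuples in relations.items()
    }
    return new_universe, new_relations
```

For instance, identifying elements 1 and 2 of a three-element structure merges any tuples that become equal after reindexing.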
Crucially, a resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$ can be used to
define several resource-indexed relations between objects of
$\operatorname{\mathscr{E}}$:
###### Definition 3.19.
Consider a resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$, with
adjunctions $L_{k}\dashv R_{k}$, and any two objects $a,b$ of
$\operatorname{\mathscr{E}}$. For all $k>0$, we define:
* •
$a\rightarrow_{k}^{\operatorname{\mathscr{C}}}b$ if there exists a morphism
$R_{k}a\to R_{k}b$ in $\operatorname{\mathscr{C}}_{k}$.
* •
$a\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}b$ if $R_{k}a$ and $R_{k}b$
are bisimilar in $\operatorname{\mathscr{C}}_{k}$.
* •
$a\cong_{k}^{\operatorname{\mathscr{C}}}b$ if $R_{k}a$ and $R_{k}b$ are
isomorphic in $\operatorname{\mathscr{C}}_{k}$.
Further, we write $\rightleftarrows_{k}^{\operatorname{\mathscr{C}}}$ for the
symmetrisation of the preorder $\rightarrow_{k}^{\operatorname{\mathscr{C}}}$.
There are inclusions
${\cong_{k}^{\operatorname{\mathscr{C}}}}\ \subseteq\
{\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}}\ \subseteq\
{\rightleftarrows_{k}^{\operatorname{\mathscr{C}}}}.$
The first inclusion is trivial; the second follows from [7, Lemma 6.20]. For a
proof of the following easy observation, see [7, Lemma 7.18].
###### Lemma 3.20.
Consider a resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$, with
adjunctions $L_{k}\dashv R_{k}$. The following hold for all
$a,b\in\operatorname{\mathscr{E}}$ and all $k>0$:
1. (a)
If there exists a morphism $a\to b$ in $\operatorname{\mathscr{E}}$ then
$a\rightarrow_{k}^{\operatorname{\mathscr{C}}}b$.
2. (b)
$a\rightleftarrows_{k}^{\operatorname{\mathscr{C}}}L_{k}R_{k}a$.
To conclude, we recall how the relations in Definition 3.19 capture, in our
running examples, preservation of the logics introduced at the beginning of
Section 2. Given a set of sentences (or modal formulas) $\mathcal{L}$, let
$\Rrightarrow^{\mathcal{L}}$ be the preorder on (pointed) $\sigma$-structures
given by
$A\Rrightarrow^{\mathcal{L}}B\ \ \Longleftrightarrow\ \
\forall\varphi\in\mathcal{L}.\,(A\vDash\varphi\,\Rightarrow\,B\vDash\varphi).$
The equivalence relation $\equiv^{\mathcal{L}}$ is the symmetrisation of
$\Rrightarrow^{\mathcal{L}}$.
###### Example 3.21.
Let $\sigma$ be a finite relational vocabulary. Consider the Ehrenfeucht-
Fraïssé resource-indexed arboreal adjunction between $\mathbf{Struct}(\sigma)$
and $\mathscr{R}^{E}(\sigma^{I})$ in Example 3.18, and write
$\rightarrow_{k}^{E}$ and $\leftrightarrow_{k}^{E}$ for the relations on
$\mathbf{Struct}(\sigma)$ induced according to Definition 3.19. For all
$\sigma$-structures $A,B$ and all $k>0$, we have
$A\rightarrow_{k}^{E}B\ \Longleftrightarrow\
A\Rrightarrow^{\exists^{+}\mathrm{FO}_{k}}B$
and
$A\leftrightarrow_{k}^{E}B\ \Longleftrightarrow\ A\equiv^{\mathrm{FO}_{k}}B.$
Cf. [9, Theorems 3.2 and 5.1] and [9, Theorem 10.5], respectively. We also
mention that $\cong_{k}^{E}$ coincides with equivalence in the extension of
$\mathrm{FO}_{k}$ with _counting quantifiers_ [9, Theorem 5.3(2)], although we
shall not need this fact.
###### Example 3.22.
Suppose $\sigma$ is a finite modal vocabulary and consider the relations
$\rightarrow_{k}^{M}$ and $\cong_{k}^{M}$ on
$\mathbf{Struct}_{\bullet}(\sigma)$ induced by the modal resource-indexed
arboreal cover of $\mathbf{Struct}_{\bullet}(\sigma)$ by
$\mathscr{R}^{M}(\sigma)$ in Example 3.17. For all pointed Kripke
structures $(A,a),(B,b)$ and all $k>0$, we have
$(A,a)\rightarrow_{k}^{M}(B,b)\ \Longleftrightarrow\
(A,a)\Rrightarrow^{\exists^{+}\mathrm{ML}_{k}}(B,b),$
see [8, Theorem 9]. Furthermore,
$(A,a)\cong_{k}^{M}(B,b)\ \Longrightarrow\
(A,a)\equiv^{\mathrm{ML}_{k}(\\#)}(B,b),$
cf. [8, Proposition 15] and [19, Proposition 3.6]. We mention in passing that
the relation $\leftrightarrow_{k}^{M}$ coincides with equivalence in
$\mathrm{ML}_{k}$ [9, Theorem 10.13].
## 4\. Homomorphism Preservation Theorems
In this section, we recast the statement of a generic equi-resource
homomorphism preservation theorem into a property (HP)—and its strengthening
(HP#)—that a resource-indexed arboreal adjunction may or may not satisfy. We
then identify a class of “tame” resource-indexed arboreal adjunctions, namely
those satisfying the _bisimilar companion property_, for which (HP) always
holds. In the absence of the bisimilar companion property, one may try to
“force” it; this leads to the notion of _extendability_ , inspired by the work
of Rossman [38]. Finally, we provide simple sufficient conditions under which
properties (HP) and (HP#) admit a relativisation to a full subcategory.
### 4.1. (HP) and (HP#)
Given a first-order sentence $\varphi$ in a relational vocabulary $\sigma$,
its “model class” $\operatorname{\mathbf{Mod}}(\varphi)$ is the full
subcategory of $\mathbf{Struct}(\sigma)$ defined by the $\sigma$-structures
$A$ such that $A\vDash\varphi$. To motivate the formulation of properties (HP)
and (HP#), we recall a well-known characterisation of model classes of
sentences in $\mathrm{FO}_{k}$, i.e. first-order sentences of quantifier rank
at most $k$, and in its existential positive fragment
$\exists^{+}\mathrm{FO}_{k}$. Since a sentence can only contain finitely many
relation symbols, for the purpose of investigating homomorphism preservation
theorems we can safely assume that $\sigma$ is finite.
For a full subcategory $\operatorname{\mathscr{D}}$ of a category
$\operatorname{\mathscr{A}}$, and a relation $\nabla$ on the class of objects
of $\operatorname{\mathscr{A}}$, we say that $\operatorname{\mathscr{D}}$ is
_upwards closed (in $\operatorname{\mathscr{A}}$) with respect to $\nabla$_ if
$\forall a,b\in\operatorname{\mathscr{A}},\ \text{ if }\ a\in\operatorname{\mathscr{D}}\ \text{ and }\ a\nabla b\ \text{ then }\ b\in\operatorname{\mathscr{D}}.$
If $\nabla$ is an equivalence relation and the latter condition is satisfied,
we say that $\operatorname{\mathscr{D}}$ is _saturated under $\nabla$_. The
following lemma follows from the fact that, for all $k\geq 0$, there are
finitely many sentences in $\mathrm{FO}_{k}$ up to logical equivalence. Cf.
e.g. [30, Lemma 3.13].
###### Lemma 4.1.
The following hold for all $k\geq 0$ and all full subcategories
$\operatorname{\mathscr{D}}$ of $\mathbf{Struct}(\sigma)$:
1. (a)
$\operatorname{\mathscr{D}}=\operatorname{\mathbf{Mod}}(\varphi)$ for some
$\varphi\in\mathrm{FO}_{k}$ if, and only if, $\operatorname{\mathscr{D}}$ is
saturated under $\equiv^{\mathrm{FO}_{k}}$.
2. (b)
$\operatorname{\mathscr{D}}=\operatorname{\mathbf{Mod}}(\psi)$ for some
$\psi\in\exists^{+}\mathrm{FO}_{k}$ if, and only if,
$\operatorname{\mathscr{D}}$ is upwards closed with respect to
$\Rrightarrow^{\exists^{+}\mathrm{FO}_{k}}$.
###### Remark 4.2.
The previous lemma remains true if $\mathrm{FO}_{k}$ is replaced with any
fragment of first-order logic that is closed under Boolean connectives and
contains, up to logical equivalence, finitely many sentences.
Now, fix a resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$, with
adjunctions
$L_{k}\colon\operatorname{\mathscr{C}}_{k}\to\operatorname{\mathscr{E}},\qquad R_{k}\colon\operatorname{\mathscr{E}}\to\operatorname{\mathscr{C}}_{k},\qquad L_{k}\dashv R_{k}.$
Let us say that a full subcategory $\operatorname{\mathscr{D}}$ of
$\operatorname{\mathscr{E}}$ is _closed (in $\operatorname{\mathscr{E}}$)
under morphisms_ if, whenever there is an arrow $a\to b$ in
$\operatorname{\mathscr{E}}$ with $a\in\operatorname{\mathscr{D}}$, also
$b\in\operatorname{\mathscr{D}}$. Note that, when
$\operatorname{\mathscr{E}}=\mathbf{Struct}(\sigma)$ and
$\operatorname{\mathscr{D}}=\operatorname{\mathbf{Mod}}(\varphi)$ is the model
class of some sentence $\varphi$, the category $\operatorname{\mathscr{D}}$ is
closed under morphisms precisely when $\varphi$ is preserved under
homomorphisms.
Consider the following statement, where
$\rightarrow^{\operatorname{\mathscr{C}}}_{k}$ and
$\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}$ are the relations on the
objects of $\operatorname{\mathscr{E}}$ induced by the resource-indexed
arboreal adjunction as in Definition 3.19:
1. (HP)
For any full subcategory $\operatorname{\mathscr{D}}$ of
$\operatorname{\mathscr{E}}$ saturated under
$\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}$,
$\operatorname{\mathscr{D}}$ is closed under morphisms precisely when it is
upwards closed with respect to $\rightarrow^{\operatorname{\mathscr{C}}}_{k}$.
Replacing the relation $\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}$ with
$\cong_{k}^{\operatorname{\mathscr{C}}}$, we obtain a strengthening of (HP),
namely:
1. (HP#)
For any full subcategory $\operatorname{\mathscr{D}}$ of
$\operatorname{\mathscr{E}}$ saturated under
$\cong_{k}^{\operatorname{\mathscr{C}}}$, $\operatorname{\mathscr{D}}$ is
closed under morphisms precisely when it is upwards closed with respect to
$\rightarrow^{\operatorname{\mathscr{C}}}_{k}$.
Just recall that
${\cong_{k}^{\operatorname{\mathscr{C}}}}\subseteq{\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}}$,
and so any full subcategory $\operatorname{\mathscr{D}}$ saturated under
$\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}$ is also saturated under
$\cong_{k}^{\operatorname{\mathscr{C}}}$. Thus, (HP#) entails (HP).
###### Remark 4.3.
By Lemma 3.20(a), any full subcategory of $\operatorname{\mathscr{E}}$ that is
upwards closed with respect to $\rightarrow^{\operatorname{\mathscr{C}}}_{k}$
is closed under morphisms. Hence, the right-to-left implications in (HP) and
(HP#) are always satisfied.
In view of Example 3.21 and Lemma 4.1, for the Ehrenfeucht-Fraïssé resource-
indexed arboreal adjunction between $\mathbf{Struct}(\sigma)$ and
$\mathscr{R}^{E}(\sigma^{I})$, property (HP) coincides with Rossman’s equirank
homomorphism preservation theorem (Theorem 1.2).
In Section 5 we will prove that (HP) holds for any resource-indexed arboreal
adjunction satisfying appropriate properties (see Corollary 5.10), which are
satisfied in particular by the Ehrenfeucht-Fraïssé resource-indexed arboreal
adjunction.
### 4.2. Tame: bisimilar companion property and idempotency
For all $k>0$, write $G_{k}\coloneqq L_{k}R_{k}$ for the comonad on
$\operatorname{\mathscr{E}}$ induced by the adjunction ${L_{k}\dashv
R_{k}\colon\operatorname{\mathscr{E}}\to\operatorname{\mathscr{C}}_{k}}$.
###### Definition 4.4.
A resource-indexed arboreal adjunction between $\operatorname{\mathscr{E}}$
and $\operatorname{\mathscr{C}}$, with induced comonads $G_{k}$, has the
_bisimilar companion property_ if
$a\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}G_{k}a$ for all
$a\,{\in}\,\operatorname{\mathscr{E}}$ and $k>0$.
###### Proposition 4.5.
(HP) holds for any resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$ satisfying the
bisimilar companion property.
###### Proof.
For the left-to-right implication in (HP), let $\operatorname{\mathscr{D}}$ be
a full subcategory of $\operatorname{\mathscr{E}}$ closed under morphisms and
saturated under $\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}$. Suppose
that $a\rightarrow^{\operatorname{\mathscr{C}}}_{k}b$ for objects $a,b$ of
$\operatorname{\mathscr{E}}$. By definition, this means that there is an arrow
$R_{k}a\to R_{k}b$ and so, as $L_{k}$ is left adjoint to $R_{k}$, there is an
arrow $G_{k}a\to b$. Using the bisimilar companion property, we get
$a\,\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}\,G_{k}a\,\to\,b.$
Therefore, if $a\in\operatorname{\mathscr{D}}$ then also
$b\in\operatorname{\mathscr{D}}$. That is, $\operatorname{\mathscr{D}}$ is
upwards closed with respect to $\rightarrow^{\operatorname{\mathscr{C}}}_{k}$.
The converse direction follows from Remark 4.3. ∎
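The transposition step used above is the standard one: given $g\colon R_{k}a\to R_{k}b$, its adjoint transpose is obtained by applying $L_{k}$ and composing with the counit $\varepsilon$ of $L_{k}\dashv R_{k}$:

```latex
G_{k}a \;=\; L_{k}R_{k}a \;\xrightarrow{\ L_{k}g\ }\; L_{k}R_{k}b \;\xrightarrow{\ \varepsilon_{b}\ }\; b .
```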
In order to establish a similar result for property (HP#), recall that a
comonad $G$ is _idempotent_ if its comultiplication $G\Rightarrow GG$ is a
natural isomorphism.
###### Definition 4.6.
A resource-indexed arboreal adjunction between $\operatorname{\mathscr{E}}$
and $\operatorname{\mathscr{C}}$ is _idempotent_ if so are the induced
comonads $G_{k}$, for all $k>0$.
###### Proposition 4.7.
(HP#) holds for any idempotent resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$.
###### Proof.
Recall that $G_{k}$ is idempotent if, and only if, $\eta R_{k}$ is a natural
isomorphism, where $\eta$ is the unit of the adjunction $L_{k}\dashv R_{k}$.
In particular, for any $a\in\operatorname{\mathscr{E}}$, the component of
$\eta R_{k}$ at $a$ yields an isomorphism $R_{k}a\cong R_{k}G_{k}a$ in
$\operatorname{\mathscr{C}}$. Hence,
$a\cong_{k}^{\operatorname{\mathscr{C}}}G_{k}a$ for all
$a\in\operatorname{\mathscr{E}}$. Reasoning as in the proof of Proposition
4.5, it is easy to see that (HP#) holds. ∎
###### Remark 4.8.
Consider an idempotent resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$ with induced
comonads $G_{k}$ on $\operatorname{\mathscr{E}}$. The previous proof shows
that, for all $a\in\operatorname{\mathscr{E}}$ and $k>0$, we have
$a\cong_{k}^{\operatorname{\mathscr{C}}}G_{k}a$. A fortiori,
$a\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}G_{k}a$. Therefore, any
idempotent resource-indexed arboreal adjunction satisfies the bisimilar
companion property.
Next, we show how Propositions 4.5 and 4.7 can be exploited to obtain equi-
resource homomorphism preservation theorems for (graded) modal logic and
guarded first-order logics. Relativisations of these results to subclasses of
structures, e.g. to the class of all finite structures, are discussed in
Section 4.4.
### Graded modal logic
Let $\sigma$ be a finite modal vocabulary. As observed in [9, §9.3], the modal
comonads $\mathbb{M}_{k}$ on $\mathbf{Struct}_{\bullet}(\sigma)$ are
idempotent, hence so is the modal resource-indexed arboreal cover of
$\mathbf{Struct}_{\bullet}(\sigma)$ by $\mathscr{R}^{M}(\sigma)$. This
corresponds to the fact that a tree-ordered Kripke structure is isomorphic to
its tree unravelling. Thus, Proposition 4.7 entails the following _equidepth
homomorphism preservation theorem_ for graded modal formulas (i.e., modal
formulas that possibly contain graded modalities):
###### Theorem 4.9.
The following statements are equivalent for any graded modal formula $\varphi$
of modal depth at most $k$ in a modal vocabulary:
1. (1)
$\varphi$ is preserved under homomorphisms between pointed Kripke structures.
2. (2)
$\varphi$ is logically equivalent to an existential positive modal formula of
modal depth at most $k$.
###### Proof.
Fix a graded modal formula $\varphi\in\mathrm{ML}_{k}(\\#)$. Since a single
modal formula contains only finitely many modalities and propositional
variables, we can assume without loss of generality that $\varphi$ is a
formula in a finite modal vocabulary. By Example 3.22 we have
${\rightarrow_{k}^{M}}={\Rrightarrow^{\exists^{+}\mathrm{ML}_{k}}}\ \text{ and
}\ {\cong_{k}^{M}}\subseteq{\equiv^{\mathrm{ML}_{k}(\\#)}}.$
In particular, the latter inclusion entails that the full subcategory
$\operatorname{\mathbf{Mod}}(\varphi)$ of $\mathbf{Struct}_{\bullet}(\sigma)$
is saturated under $\cong_{k}^{M}$. As the modal resource-indexed arboreal
cover is idempotent, Proposition 4.7 implies that
$\operatorname{\mathbf{Mod}}(\varphi)$ is closed under morphisms if, and only
if, it is upwards closed with respect to $\to_{k}^{M}$. Note that
$\operatorname{\mathbf{Mod}}(\varphi)$ is closed under morphisms precisely
when $\varphi$ is preserved under homomorphisms between pointed Kripke
structures. On the other hand, the equality
${\rightarrow_{k}^{M}}={\Rrightarrow^{\exists^{+}\mathrm{ML}_{k}}}$ implies
that $\operatorname{\mathbf{Mod}}(\varphi)$ is upwards closed with respect to
$\to_{k}^{M}$ if, and only if,
$\operatorname{\mathbf{Mod}}(\varphi)=\operatorname{\mathbf{Mod}}(\psi)$ for
some $\psi\in\exists^{+}\mathrm{ML}_{k}$ (this is akin to Lemma 4.1(b) and
hinges on the fact that $\exists^{+}\mathrm{ML}_{k}$ contains finitely many
formulas up to logical equivalence). Thus the statement follows. ∎
###### Remark 4.10.
Forgetting about both graded modalities and modal depth, Theorem 4.9 implies
that a modal formula is preserved under homomorphisms if, and only if, it is
equivalent to an existential positive modal formula. This improves the well
known result that a modal formula is preserved under simulations precisely
when it is equivalent to an existential positive modal formula (see e.g. [15,
Theorem 2.78]).
### Guarded fragments of first-order logic
The study of guarded fragments of first-order logic was initiated by Andréka,
van Benthem and Németi in [12] to analyse, and extend to the first-order
setting, the good algorithmic and model-theoretic properties of modal logic.
_Guarded formulas_ (over a relational vocabulary $\sigma$) are defined by
structural induction, starting from atomic formulas and applying Boolean
connectives and the following restricted forms of quantification: if
$\varphi(\overline{x},\overline{y})$ is a guarded formula, then so are
$\exists\overline{x}.\,G(\overline{x},\overline{y})\wedge\varphi(\overline{x},\overline{y})\
\text{ and }\
\forall\overline{x}.\,G(\overline{x},\overline{y})\to\varphi(\overline{x},\overline{y})$
where $G$ is a so-called _guard_. The (syntactic) conditions imposed on guards
determine different guarded fragments of first-order logic. We shall consider
the following two:
* •
_Atom guarded:_ $G(\overline{x},\overline{y})$ is an atomic formula in which
all variables in $\overline{x},\overline{y}$ occur.
* •
_Loosely guarded:_ $G(\overline{x},\overline{y})$ is a conjunction of atomic
formulas such that each pair of variables, one in $\overline{x}$ and the other
in $\overline{x},\overline{y}$, occurs in one of the conjuncts.
The atom guarded fragment of first-order logic was introduced in [12] under
the name of F2 (“Fragment $2$”), whereas the loosely guarded fragment was
defined by van Benthem in [14]. The atom guarded fragment can be regarded as
an extension of modal logic, in the sense that the standard translation of the
latter is contained in the former. In turn, the loosely guarded fragment
extends the atom guarded one and can express e.g. (the translation of) the _Until_
modality in temporal logic, cf. [14, p. 9].
For each notion of guarding $\mathfrak{g}$ (atomic or loose), denote by
$\mathfrak{g}\mathrm{FO}^{n}\ \text{ and }\
\exists^{+}\mathfrak{g}\mathrm{FO}^{n},$
respectively, the $n$-variable $\mathfrak{g}$-guarded fragment of first-order
logic and its existential positive fragment. In [4], _guarded comonads_
$\mathbb{G}_{n}^{\mathfrak{g}}$ on $\mathbf{Struct}(\sigma)$ are defined for
all $n>0$. The associated categories of Eilenberg-Moore coalgebras are
arboreal and induce the _$\mathfrak{g}$-guarded resource-indexed arboreal
cover_ of $\mathbf{Struct}(\sigma)$ with resource parameter $n$. For an
explicit description of the resource-indexed arboreal category in question,
cf. [4, §IV].
Assume the vocabulary $\sigma$ is finite and let
$\rightarrow_{n}^{\mathfrak{g}}$ and $\leftrightarrow_{n}^{\mathfrak{g}}$ be
the resource-indexed relations on $\mathbf{Struct}(\sigma)$ induced by the
$\mathfrak{g}$-guarded resource-indexed arboreal cover. It follows from [4,
Theorems III.4 and V.2] that, for all $\sigma$-structures $A,B$ and all $n>0$,
$A\rightarrow_{n}^{\mathfrak{g}}B\ \Longleftrightarrow\
A\Rrightarrow^{\exists^{+}\mathfrak{g}\mathrm{FO}^{n}}B$
and
$A\leftrightarrow_{n}^{\mathfrak{g}}B\ \Longleftrightarrow\
A\equiv^{\mathfrak{g}\mathrm{FO}^{n}}B.$
To obtain an analogue of Lemma 4.1, we consider finite fragments of
$\mathfrak{g}\mathrm{FO}^{n}$ by stratifying in terms of _guarded-quantifier
rank_ (cf. Remark 4.2). Note that, as guarded quantifiers bound _tuples_ of
variables, rather than single variables, the guarded-quantifier rank of a
guarded formula is typically lower than its ordinary quantifier rank.
Nevertheless, for all $k\geq 0$, the fragment
$\mathfrak{g}\mathrm{FO}^{n}_{k}$ of $\mathfrak{g}\mathrm{FO}^{n}$ consisting
of those sentences with guarded-quantifier rank at most $k$ contains finitely
many sentences up to logical equivalence.
This stratification can be modelled in terms of comonads
$\mathbb{G}_{n,k}^{\mathfrak{g}}$ on $\mathbf{Struct}(\sigma)$, for all
$n,k>0$, as explained in [4, §VII]. Fixing $n$ and letting $k$ vary, we obtain
an _$n$-variable $\mathfrak{g}$-guarded resource-indexed arboreal cover_ of
$\mathbf{Struct}(\sigma)$, with resource parameter $k$. The induced relations
$\rightarrow_{n,k}^{\mathfrak{g}}$ and $\leftrightarrow_{n,k}^{\mathfrak{g}}$
on $\mathbf{Struct}(\sigma)$ coincide, respectively, with preservation of
$\exists^{+}\mathfrak{g}\mathrm{FO}^{n}_{k}$ and equivalence in
$\mathfrak{g}\mathrm{FO}^{n}_{k}$. Thus, for any full subcategory
$\operatorname{\mathscr{D}}$ of $\mathbf{Struct}(\sigma)$:
* •
$\operatorname{\mathscr{D}}=\operatorname{\mathbf{Mod}}(\varphi)$ for some
$\varphi\in\mathfrak{g}\mathrm{FO}^{n}_{k}$ if, and only if,
$\operatorname{\mathscr{D}}$ is saturated under
$\leftrightarrow_{n,k}^{\mathfrak{g}}$.
* •
$\operatorname{\mathscr{D}}=\operatorname{\mathbf{Mod}}(\psi)$ for some
$\psi\in\exists^{+}\mathfrak{g}\mathrm{FO}^{n}_{k}$ if, and only if,
$\operatorname{\mathscr{D}}$ is upwards closed with respect to
$\rightarrow_{n,k}^{\mathfrak{g}}$.
As observed in [5, §6.1], the $\mathfrak{g}$-guarded resource-indexed arboreal
cover of $\mathbf{Struct}(\sigma)$ satisfies the bisimilar companion property,
and so does the $n$-variable $\mathfrak{g}$-guarded resource-indexed arboreal
cover for all $n>0$. Therefore, Proposition 4.5 implies the following
_equirank-variable homomorphism preservation theorem_ for guarded logics:
###### Theorem 4.11.
Let $\mathfrak{g}$ be a notion of guarding (either atom or loose). The
following statements are equivalent for any $\mathfrak{g}$-guarded sentence
$\varphi$ in $n$ variables of guarded-quantifier rank at most $k$ in a
relational vocabulary:
1. (1)
$\varphi$ is preserved under homomorphisms.
2. (2)
$\varphi$ is logically equivalent to an existential positive
$\mathfrak{g}$-guarded sentence in $n$ variables of guarded-quantifier rank at
most $k$.
### 4.3. Not-so-tame: extendability
A resource-indexed arboreal adjunction may fail to satisfy the bisimilar
companion property, in which case Proposition 4.5 does not apply. This is the
case e.g. for the Ehrenfeucht-Fraïssé resource-indexed arboreal adjunction:
###### Example 4.12.
The Ehrenfeucht-Fraïssé resource-indexed arboreal adjunction between
$\mathbf{Struct}(\sigma)$ and $\mathscr{R}^{E}(\sigma^{I})$ does not have the
bisimilar companion property. Suppose that $\sigma=\\{R\\}$ consists of a
single binary relation symbol and let $A$ be the $\sigma$-structure with
underlying set $\\{a,b\\}$ satisfying $R^{A}=\\{(a,b),(b,a)\\}$. In view of
Examples 3.18 and 3.21, it suffices to find $k>0$ and a first-order sentence
$\varphi$ of quantifier rank $\leq k$ such that
$A\vDash\varphi\ \text{ and }\ H\mathbb{E}_{k}JA\not\vDash\varphi.$
Let $\varphi$ be the sentence $\forall x\forall y\ (x\neq y\Rightarrow xRy)$
of quantifier rank $2$ stating that any two distinct elements are $R$-related.
Then $\varphi$ is satisfied by $A$ but not by $H\mathbb{E}_{k}JA$, because the
sequences $[a]$ and $[b]$ are not $R$-related in $H\mathbb{E}_{k}JA$. This
shows that the bisimilar companion property fails for all $k\geq 2$.
When the bisimilar companion property fails, i.e.
$a\not\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}G_{k}a$ for some
$a\in\operatorname{\mathscr{E}}$ and $k>0$, we may attempt to force it by
finding appropriate extensions $a^{*}$ and $(G_{k}a)^{*}$ of $a$ and $G_{k}a$,
respectively, satisfying
$a^{*}\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}(G_{k}a)^{*}$. This
motivates the notion of _$k$-extendability_ (see Definition 4.17 below),
inspired by the work of Rossman [38] and its categorical interpretation in
[1].
To start with, we introduce the following notations. Given objects $a,b$ of a
category $\operatorname{\mathscr{A}}$, we write $a\to b$ to denote the
existence of an arrow from $a$ to $b$. Further, we write $a\rightleftarrows b$
to indicate that $a\to b$ and $b\to a$, i.e. $a$ and $b$ are _homomorphically
equivalent_. This applies in particular to coslice categories. Recall that,
for any $c\in\operatorname{\mathscr{A}}$, the _coslice category_
$c/{\operatorname{\mathscr{A}}}$ (also known as _under category_) has as
objects the arrows in $\operatorname{\mathscr{A}}$ with domain $c$. For any
two objects $m\colon c\to a$ and $n\colon c\to b$ of
$c/{\operatorname{\mathscr{A}}}$, an arrow $f\colon m\to n$ in
$c/{\operatorname{\mathscr{A}}}$ is a morphism $f\colon a\to b$ in
$\operatorname{\mathscr{A}}$ such that $f\circ m=n$. Hence, $m\rightleftarrows
n$ in $c/{\operatorname{\mathscr{A}}}$ precisely when there are arrows
$f\colon a\to b$ and $g\colon b\to a$ in $\operatorname{\mathscr{A}}$
satisfying $f\circ m=n$ and $g\circ n=m$. We shall represent this situation by
means of the following display:
$a\;\overset{f}{\underset{g}{\rightleftarrows}}\;b,\qquad f\circ m=n,\qquad g\circ n=m\qquad(m\colon c\to a,\ n\colon c\to b).$
###### Remark 4.13.
Note that $m\rightleftarrows n$ in $c/{\operatorname{\mathscr{A}}}$ whenever
there is a section $f\colon a\to b$ in $\operatorname{\mathscr{A}}$ satisfying
$f\circ m=n$. Just observe that the left inverse $f^{-1}$ of $f$ satisfies
$f^{-1}\circ n=f^{-1}\circ f\circ m=m$
and so $n\to m$. Further, $f\circ m=n$ entails $m\to n$. Hence,
$m\rightleftarrows n$.
Now, let $a$ be an object of an arboreal category $\operatorname{\mathscr{C}}$
and let $m\colon P\rightarrowtail a$ be a path embedding. As
$\operatorname{\mathbb{S}}{a}$ is a complete lattice by Lemma 3.10(b), the
supremum $\bigvee{{\uparrow}m}$ in $\operatorname{\mathbb{S}}{a}$ of all path
embeddings above $m$ exists and we shall denote it by
$\mathfrak{i}_{m}\colon\mathscr{S}_{m}\rightarrowtail a.$
Clearly, $m\leq\mathfrak{i}_{m}$ in $\operatorname{\mathbb{S}}{a}$, and so
there is a path embedding
$m_{\star}\colon P\rightarrowtail\mathscr{S}_{m}$
satisfying $\mathfrak{i}_{m}\circ m_{\star}=m$. (Note that $\mathfrak{i}_{m}$
is well defined only up to isomorphism in the coslice category
${\operatorname{\mathscr{C}}}/a$, but as usual we work with representatives
for isomorphism classes.)
###### Remark 4.14.
To provide an intuition for the previous definition, let us say for the sake
of this remark that a path embedding $m\colon P\rightarrowtail a$ is “dense in
$a$” if all elements of $\operatorname{\mathbb{P}}{a}$ are comparable with
$m$. Then Lemma 4.15(a) below implies that
$\mathfrak{i}_{m}\colon\mathscr{S}_{m}\rightarrowtail a$ is the largest
$\mathscr{M}$-subobject of $a$ in which $m$ is dense.
###### Lemma 4.15.
The following statements hold for all path embeddings ${m\colon
P\rightarrowtail a}$:
1. (a)
$\operatorname{\mathbb{P}}{\mathscr{S}_{m}}$ is isomorphic to the subtree of
$\operatorname{\mathbb{P}}{a}$ consisting of the elements that are comparable
with $m$.
2. (b)
For all path embeddings $n\colon P\rightarrowtail b$ and arrows $f\colon a\to
b$ such that ${f\circ m=n}$, there is a unique
$g\colon\mathscr{S}_{m}\to\mathscr{S}_{n}$ satisfying
$\mathfrak{i}_{n}\circ g=f\circ\mathfrak{i}_{m}.$
3. (c)
For all path embeddings $n\colon P\rightarrowtail b$, if $m\to n$ then
$m_{\star}\to n_{\star}$.
###### Proof.
(a) The map
$\mathfrak{i}_{m}\circ-\colon\operatorname{\mathbb{S}}{\mathscr{S}_{m}}\to\operatorname{\mathbb{S}}{a}$
is an order-embedding by Lemma 3.3, and so its restriction
$\operatorname{\mathbb{P}}{\mathscr{S}_{m}}\to\operatorname{\mathbb{P}}{a}$ is
an injective forest morphism. Hence,
$\operatorname{\mathbb{P}}{\mathscr{S}_{m}}$ is isomorphic to the subtree of
$\operatorname{\mathbb{P}}{a}$ consisting of those elements that factor
through $\mathfrak{i}_{m}$, i.e. that are below $\bigvee{{\uparrow}m}$. By
Lemma 3.10(c), an element of $\operatorname{\mathbb{P}}{a}$ is below
$\bigvee{{\uparrow}m}$ precisely when it is below some element of
${\uparrow}m$. In turn, the latter is equivalent to being comparable with $m$.
(b) Since $\mathscr{S}_{m}$ is path-generated, it is the colimit of the
canonical cocone $C$ of path embeddings over the small diagram
$\operatorname{\mathbb{P}}{\mathscr{S}_{m}}$ which, by item (a), can be
identified with the subdiagram of $\operatorname{\mathbb{P}}{a}$ consisting of
those elements comparable with $m$. As
$\operatorname{\mathbb{P}}{(f\circ\mathfrak{i}_{m})}$ is monotone, it sends
path embeddings comparable with $m$ to path embeddings comparable with $n$,
and so the cocone $\\{f\circ\mathfrak{i}_{m}\circ p\mid p\in C\\}$ factors
through $\mathfrak{i}_{n}\colon\mathscr{S}_{n}\rightarrowtail b$. Hence, there
is $g\colon\mathscr{S}_{m}\to\mathscr{S}_{n}$ such that $\mathfrak{i}_{n}\circ
g=f\circ\mathfrak{i}_{m}$. Finally, note that if
$g^{\prime}\colon\mathscr{S}_{m}\to\mathscr{S}_{n}$ satisfies
$\mathfrak{i}_{n}\circ g^{\prime}=f\circ\mathfrak{i}_{m}$ then we have
$\mathfrak{i}_{n}\circ g=\mathfrak{i}_{n}\circ g^{\prime}$, and so
$g=g^{\prime}$ because $\mathfrak{i}_{n}$ is monic.
(c) Suppose there exists $f\colon a\to b$ such that $f\circ m=n$. By item
(b), there is $g\colon\mathscr{S}_{m}\to\mathscr{S}_{n}$ such that
$\mathfrak{i}_{n}\circ g=f\circ\mathfrak{i}_{m}$. Therefore,
$\mathfrak{i}_{n}\circ g\circ m_{\star}=f\circ\mathfrak{i}_{m}\circ
m_{\star}=f\circ m=n=\mathfrak{i}_{n}\circ n_{\star}$
and so $g\circ m_{\star}=n_{\star}$ because $\mathfrak{i}_{n}$ is a
monomorphism. ∎
###### Remark 4.16.
Lemma 4.15(c) entails that $m_{\star}\to n_{\star}$ whenever $m_{\star}\to n$.
Just observe that $(m_{\star})_{\star}$ can be identified with $m_{\star}$.
###### Definition 4.17.
Consider a resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$, with
adjunctions $L_{k}\dashv R_{k}$. An object $a$ of $\operatorname{\mathscr{E}}$
is _$k$-extendable_ if it satisfies the following property for all
$e\in\operatorname{\mathscr{E}}$: For all path embeddings $m\colon
P\rightarrowtail R_{k}a$ and $n\colon P\rightarrowtail R_{k}e$ such that
$m_{\star}\rightleftarrows n_{\star}$ in the coslice category
$P/{\operatorname{\mathscr{C}}_{k}}$ (see the leftmost diagram below),
$\mathscr{S}_{m}\;\xleftarrow{\ m_{\star}\ }\;P\;\xrightarrow{\ n_{\star}\ }\;\mathscr{S}_{n}\qquad\qquad\mathscr{S}_{m^{\prime}}\;\xleftarrow{\ m^{\prime}_{\star}\ }\;Q\;\xrightarrow{\ n^{\prime}_{\star}\ }\;\mathscr{S}_{n^{\prime}}$
if $n^{\prime}\colon Q\rightarrowtail R_{k}e$ is a path embedding satisfying
$n\leq n^{\prime}$ in $\operatorname{\mathbb{P}}{(R_{k}e)}$, there is a path
embedding $m^{\prime}\colon Q\rightarrowtail R_{k}a$ such that $m\leq
m^{\prime}$ in $\operatorname{\mathbb{P}}{(R_{k}a)}$ and
$m^{\prime}_{\star}\rightleftarrows n^{\prime}_{\star}$ in
$Q/{\operatorname{\mathscr{C}}_{k}}$ (as displayed in the rightmost diagram
above).
We shall see in Proposition 4.19 below that, under appropriate assumptions,
the $k$-extendability property allows us to upgrade the relation
$\rightleftarrows_{k}^{\operatorname{\mathscr{C}}}$ to the finer relation
$\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}$. For the next lemma, recall
that a category is _locally finite_ if there are finitely many arrows between
any two of its objects.
###### Lemma 4.18.
Let $\operatorname{\mathscr{C}}$ be an arboreal category whose full
subcategory $\operatorname{\mathscr{C}}_{p}$ consisting of the paths is
locally finite. If $f\colon P\twoheadrightarrow Q$ and $g\colon
Q\twoheadrightarrow P$ are quotients between paths, then $f$ and $g$ are
inverse to each other.
###### Proof.
The set $M$ of quotients $P\twoheadrightarrow P$ is a finite monoid with
respect to composition, and it satisfies the right-cancellation law because
every quotient is an epimorphism. Hence $M$ is a group, and so $g\circ f\in M$
has an inverse. It follows that $g\circ f$ is an embedding. Because there is
at most one embedding between any two paths by Lemma 3.10(a), $g\circ
f=\mathrm{id}_{P}$. By symmetry, also $f\circ g=\mathrm{id}_{Q}$. ∎
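The proof hinges on a standard algebraic fact: a finite monoid satisfying the right-cancellation law is a group. As a sanity check, here is a minimal Python sketch (not part of the paper's formalism; the example tables are my own) that, given a composition table, verifies right cancellation and recovers two-sided inverses exactly as in the argument above.

```python
def right_cancellative(table):
    """Check that x*g == y*g implies x == y for all g, i.e. that
    right-multiplication by each element is injective."""
    n = len(table)
    return all(len({table[x][g] for x in range(n)}) == n for g in range(n))

def inverses(table, e):
    """In a finite right-cancellative monoid, right-multiplication by g is
    a bijection, so a unique h satisfies h*g == e; as in the proof of
    Lemma 4.18, h is then a two-sided inverse."""
    n = len(table)
    inv = {}
    for g in range(n):
        (h,) = [x for x in range(n) if table[x][g] == e]
        assert table[g][h] == e  # the inverse is two-sided
        inv[g] = h
    return inv

# Z/4 under addition (identity 0): right-cancellative, hence a group.
z4 = [[(i + j) % 4 for j in range(4)] for i in range(4)]
assert right_cancellative(z4)
assert inverses(z4, 0) == {0: 0, 1: 3, 2: 2, 3: 1}

# ({0, 1}, *): a monoid that is not right-cancellative (0*0 == 1*0),
# so the argument does not apply -- and indeed 0 has no inverse.
bool_and = [[0, 0], [0, 1]]
assert not right_cancellative(bool_and)
```

In the lemma itself, the monoid is the set of quotients $P\twoheadrightarrow P$ under composition; right cancellation holds there because every quotient is an epimorphism.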
###### Proposition 4.19.
Consider a resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$ such that
$\operatorname{\mathscr{C}}_{p}^{k}$ is locally finite for all $k>0$. For all
$k$-extendable objects $a,b$ of $\operatorname{\mathscr{E}}$ admitting a
product, we have $a\rightleftarrows_{k}^{\operatorname{\mathscr{C}}}b$ if and
only if $a\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}b$.
###### Proof.
Fix an arbitrary $k>0$ and recall that $\operatorname{\mathscr{C}}_{k}$ is an
arboreal category. The “if” part of the statement follows from the inclusion
${\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}}\subseteq{\rightleftarrows_{k}^{\operatorname{\mathscr{C}}}}$.
For the “only if” part suppose that
$a\rightleftarrows_{k}^{\operatorname{\mathscr{C}}}b$, i.e. $R_{k}a$ and
$R_{k}b$ are homomorphically equivalent in $\operatorname{\mathscr{C}}_{k}$.
To improve readability, let $X\coloneqq R_{k}a$ and $Y\coloneqq R_{k}b$. We
must prove that $X$ and $Y$ are bisimilar. As $X$ and $Y$ admit a product in
$\operatorname{\mathscr{C}}_{k}$, namely the image under $R_{k}$ of the
product of $a$ and $b$ in $\operatorname{\mathscr{E}}$, by Theorem 3.14 it
suffices to show that $X$ and $Y$ are back-and-forth equivalent.
Fix arbitrary morphisms $f\colon X\to Y$ and $g\colon Y\to X$, and let $m$ and
$n$ denote generic elements of $\operatorname{\mathbb{P}}{X}$ and
$\operatorname{\mathbb{P}}{Y}$, respectively. We claim that
$\mathcal{B}\coloneqq\{\llbracket m,n\rrbracket\mid\exists\
s\colon\mathscr{S}_{m}\to\mathscr{S}_{n},\,t\colon\mathscr{S}_{n}\to\mathscr{S}_{m}\
\text{s.t.}\ \operatorname{\mathbb{P}}{s}(m_{\star})=n_{\star}\ \text{and}\
\operatorname{\mathbb{P}}{t}(n_{\star})=m_{\star}\}$
is a back-and-forth system between $X$ and $Y$, i.e. it satisfies items
(i)–(iii) in Definition 3.13. For item (i), let $\bot_{X},\bot_{Y}$ be the
roots of $\operatorname{\mathbb{P}}{X}$ and $\operatorname{\mathbb{P}}{Y}$,
respectively. Note that $\mathscr{S}_{\bot_{X}}$ and $\mathscr{S}_{\bot_{Y}}$
can be identified, respectively, with $X$ and $Y$. As
$\operatorname{\mathbb{P}}{f}$ and $\operatorname{\mathbb{P}}{g}$ are forest
morphisms, $\operatorname{\mathbb{P}}{f}(\bot_{X})=\bot_{Y}$ and
$\operatorname{\mathbb{P}}{g}(\bot_{Y})=\bot_{X}$. So,
$\llbracket\bot_{X},\bot_{Y}\rrbracket\in\mathcal{B}$.
For item (ii), suppose $\llbracket m,n\rrbracket\in\mathcal{B}$ and let
$m^{\prime}\in\operatorname{\mathbb{P}}{X}$ satisfy $m\prec m^{\prime}$. We
seek $n^{\prime}\in\operatorname{\mathbb{P}}{Y}$ such that $n\prec n^{\prime}$
and $\llbracket m^{\prime},n^{\prime}\rrbracket\in\mathcal{B}$. By assumption,
there are arrows $s\colon\mathscr{S}_{m}\to\mathscr{S}_{n}$ and
$t\colon\mathscr{S}_{n}\to\mathscr{S}_{m}$ such that
$\operatorname{\mathbb{P}}{s}(m_{\star})=n_{\star}$ and
$\operatorname{\mathbb{P}}{t}(n_{\star})=m_{\star}$. Writing
$P\coloneqq\operatorname{dom}(m)$ and
$P^{\prime}\coloneqq\operatorname{dom}(n)$, we have the following diagrams
[Diagrams: $s\circ m_{\star}$ factors as a quotient $e\colon P\twoheadrightarrow\cdot$ followed by the embedding $\operatorname{\mathbb{P}}{s}(m_{\star})$, and an isomorphism $\varphi$ identifies its image with $P^{\prime}$, the domain of $n_{\star}$; symmetrically, $t\circ n_{\star}$ factors through a quotient $e^{\prime}$ followed by an isomorphism $\psi$ onto $P$, the domain of $m_{\star}$.]
where $\varphi$ and $\psi$ are isomorphisms. By Lemma 4.18, $\varphi\circ e$
and $\psi\circ e^{\prime}$ are inverse to each other, thus the left-hand
diagram below commutes.
[Diagrams: on the left, the triangle over $P$ with legs $m_{\star}$ and $n_{\star}\circ\varphi\circ e$ and arrows $s,t$ between $\mathscr{S}_{m}$ and $\mathscr{S}_{n}$; on the right, the triangle over $Q$ with legs $m^{\prime}_{\star}$ and $n^{\prime}_{\star}$ and arrows $s^{\prime},t^{\prime}$ between $\mathscr{S}_{m^{\prime}}$ and $\mathscr{S}_{n^{\prime}}$.]
Let $Q\coloneqq\operatorname{dom}(m^{\prime})$. Since $b$ is $k$-extendable,
there exist a path embedding $n^{\prime}\colon Q\rightarrowtail Y$ such that
$n\leq n^{\prime}$ in $\operatorname{\mathbb{P}}{Y}$, and arrows
$s^{\prime}\colon\mathscr{S}_{m^{\prime}}\to\mathscr{S}_{n^{\prime}}$ and
$t^{\prime}\colon\mathscr{S}_{n^{\prime}}\to\mathscr{S}_{m^{\prime}}$ making
the right-hand diagram above commute. It follows that $\llbracket
m^{\prime},n^{\prime}\rrbracket\in\mathcal{B}$; just observe that
$\operatorname{\mathbb{P}}{s^{\prime}}(m^{\prime}_{\star})=n^{\prime}_{\star}$
because the composite $s^{\prime}\circ m^{\prime}_{\star}$ is an embedding,
and similarly
$\operatorname{\mathbb{P}}{t^{\prime}}(n^{\prime}_{\star})=m^{\prime}_{\star}$.
It remains to show that $n\prec n^{\prime}$. For any element $x$ of a tree,
denote by $\operatorname{\mathrm{ht}}(x)$ its height. As $n\leq n^{\prime}$ in
$\operatorname{\mathbb{P}}{Y}$, it is enough to show that
$\operatorname{\mathrm{ht}}(n^{\prime})=\operatorname{\mathrm{ht}}(n)+1$.
Using the fact that forest morphisms preserve the height of points and
$\operatorname{\mathbb{P}}{s^{\prime}}(m^{\prime}_{\star})=n^{\prime}_{\star}$,
we get
$\operatorname{\mathrm{ht}}(n^{\prime})=\operatorname{\mathrm{ht}}(n^{\prime}_{\star})=\operatorname{\mathrm{ht}}(m^{\prime}_{\star})=\operatorname{\mathrm{ht}}(m^{\prime})=\operatorname{\mathrm{ht}}(m)+1.$
Item (iii) is proved in a similar way using the fact that $a$ is
$k$-extendable. ∎
###### Definition 4.20.
Consider a resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$, and an object
$a\in\operatorname{\mathscr{E}}$. For all $k>0$, a _$k$-extendable cover_ of
$a$ is a section $a\to a^{*}$ in $\operatorname{\mathscr{E}}$ such that
$a^{*}$ is $k$-extendable.
###### Proposition 4.21.
(HP) holds for any resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$ satisfying the
following properties for all $k>0$:
1. (i)
$\operatorname{\mathscr{C}}_{p}^{k}$ is locally finite.
2. (ii)
$\operatorname{\mathscr{E}}$ has binary products and each of its objects
admits a $k$-extendable cover.
###### Proof.
Fix a full subcategory $\operatorname{\mathscr{D}}$ of
$\operatorname{\mathscr{E}}$ saturated under
$\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}$. For the non-trivial
implication in (HP), assume $\operatorname{\mathscr{D}}$ is closed under
morphisms and let $a,b\in\operatorname{\mathscr{E}}$ satisfy
$a\rightarrow^{\operatorname{\mathscr{C}}}_{k}b$ and
$a\in\operatorname{\mathscr{D}}$.
If $G_{k}\coloneqq L_{k}R_{k}$, then
$a\rightarrow^{\operatorname{\mathscr{C}}}_{k}b$ implies $G_{k}a\to b$. Let
$s\colon a\to a^{*}$ and $t\colon G_{k}a\to(G_{k}a)^{*}$ be sections with
$a^{*}$ and $(G_{k}a)^{*}$ $k$-extendable objects. It follows from Lemma
3.20(a) that $a\rightleftarrows_{k}^{\operatorname{\mathscr{C}}}a^{*}$ and
$G_{k}a\rightleftarrows_{k}^{\operatorname{\mathscr{C}}}(G_{k}a)^{*}$. By
Lemma 3.20(b) we have
$a\rightleftarrows_{k}^{\operatorname{\mathscr{C}}}G_{k}a$ and so, by
transitivity,
$a^{*}\rightleftarrows_{k}^{\operatorname{\mathscr{C}}}(G_{k}a)^{*}$. An
application of Proposition 4.19 yields
$a^{*}\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}(G_{k}a)^{*}$. We thus
have the following diagram, where $t^{-1}$ denotes the left inverse of $t$.
[Diagram: the section $s\colon a\to a^{*}$ with $a^{*}\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}(G_{k}a)^{*}$, the retraction $t^{-1}\colon(G_{k}a)^{*}\to G_{k}a$ with $a\rightleftarrows_{k}^{\operatorname{\mathscr{C}}}G_{k}a$, and the morphism $G_{k}a\to b$.]
Since $a\in\operatorname{\mathscr{D}}$, and $\operatorname{\mathscr{D}}$ is
saturated under $\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}$ and closed
under morphisms, all the objects in the diagram above sit in
$\operatorname{\mathscr{D}}$. In particular, $b\in\operatorname{\mathscr{D}}$
and thus (HP) holds. ∎
### 4.4. Relativising to full subcategories
Let $\operatorname{\mathscr{E}}^{\prime}$ be a full subcategory of
$\operatorname{\mathscr{E}}$. We say that (HP) holds _relative to
$\operatorname{\mathscr{E}}^{\prime}$_ if the following condition is
satisfied: For any full subcategory $\operatorname{\mathscr{D}}$ of
$\operatorname{\mathscr{E}}$ saturated under
$\leftrightarrow_{k}^{\operatorname{\mathscr{C}}}$,
$\operatorname{\mathscr{D}}\cap\operatorname{\mathscr{E}}^{\prime}$ is closed
under morphisms in $\operatorname{\mathscr{E}}^{\prime}$ precisely when it is
upwards closed in $\operatorname{\mathscr{E}}^{\prime}$ with respect to
$\rightarrow^{\operatorname{\mathscr{C}}}_{k}$. Likewise for (HP#).
For the next proposition, observe that in order for a comonad $G$ on
$\operatorname{\mathscr{E}}$ to restrict to a full subcategory
$\operatorname{\mathscr{E}}^{\prime}$ of $\operatorname{\mathscr{E}}$ it is
necessary and sufficient that $Ga\in\operatorname{\mathscr{E}}^{\prime}$ for
all $a\in\operatorname{\mathscr{E}}^{\prime}$.
###### Proposition 4.22.
Consider a resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$, and let
$\operatorname{\mathscr{E}}^{\prime}$ be a full subcategory of
$\operatorname{\mathscr{E}}$ such that the induced comonads $G_{k}\coloneqq
L_{k}R_{k}$ restrict to $\operatorname{\mathscr{E}}^{\prime}$. If the
resource-indexed arboreal adjunction has the bisimilar companion property then
(HP) holds relative to $\operatorname{\mathscr{E}}^{\prime}$. If it is
idempotent then (HP#) holds relative to $\operatorname{\mathscr{E}}^{\prime}$.
###### Proof.
The same, mutatis mutandis, as for Propositions 4.5 and 4.7, respectively. ∎
Since the modal comonads $\mathbb{M}_{k}$ restrict to finite pointed Kripke
structures, the previous result yields a variant of Theorem 4.9 for finite
structures:
###### Theorem 4.23.
The following statements are equivalent for any graded modal formula $\varphi$
of modal depth at most $k$ in a modal vocabulary:
1. (1)
$\varphi$ is preserved under homomorphisms between finite pointed Kripke
structures.
2. (2)
$\varphi$ is logically equivalent over finite pointed Kripke structures to an
existential positive modal formula of modal depth at most $k$.
Similarly, since the guarded comonads $\mathbb{G}_{n,k}^{\mathfrak{g}}$
restrict to finite structures, we obtain the following variant of Theorem 4.11
for finite structures:
###### Theorem 4.24.
Let $\mathfrak{g}$ be a notion of guarding (either atom or loose). The
following statements are equivalent for any $\mathfrak{g}$-guarded sentence
$\varphi$ in $n$ variables of guarded-quantifier rank at most $k$ in a
relational vocabulary:
1. (1)
$\varphi$ is preserved under homomorphisms between finite structures.
2. (2)
$\varphi$ is logically equivalent over finite structures to an existential
positive $\mathfrak{g}$-guarded sentence in $n$ variables of guarded-
quantifier rank at most $k$.
In all our examples of resource-indexed arboreal adjunctions and covers, the
counits of the induced comonads $G_{k}\coloneqq L_{k}R_{k}$ are componentwise
surjective. The next easy observation, combined with Proposition 4.22, then
provides a useful criterion to relativise equi-resource homomorphism
preservation theorems to subclasses of structures. Recall that a _negative
formula_ is one obtained from negated atomic formulas and
$\vee,\wedge,\exists,\forall$.
###### Lemma 4.25.
Let $G$ be a comonad on $\mathbf{Struct}(\sigma)$ whose counit is
componentwise surjective and let $T$ be a set of negative sentences. Then $G$
restricts to $\operatorname{\mathbf{Mod}}(T)$.
###### Proof.
If $\psi\in T$, its negation is logically equivalent to a positive sentence
$\chi$. Positive sentences are preserved under surjective homomorphisms and
so, for all $A\in\mathbf{Struct}(\sigma)$, considering the component of the
counit $GA\twoheadrightarrow A$ we obtain
$GA\models\chi\ \Longrightarrow\ A\models\chi.$
I.e., $G$ restricts to $\operatorname{\mathbf{Mod}}(\psi)$. As
$\operatorname{\mathbf{Mod}}(T)=\bigcap_{\psi\in
T}{\operatorname{\mathbf{Mod}}(\psi)}$, the statement follows. ∎
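The two facts used in this proof — positivity of the negation of $\psi$ and preservation of positive sentences along surjections — can be made concrete. The Python sketch below is my own illustration with hypothetical toy structures, not part of the paper: for structures with one binary relation $E$, a surjective homomorphism preserves the existential-positive sentence $\chi=\exists x\,E(x,x)$, so no homomorphism at all can exist from a model of $\chi$ into a model of the negative sentence $\neg\chi$.

```python
from itertools import product

def is_hom(h, E_src, E_tgt):
    """h: dict from source universe to target; check it preserves E."""
    return all((h[x], h[y]) in E_tgt for (x, y) in E_src)

def has_loop(E):
    """The existential-positive sentence chi = exists x. E(x, x)."""
    return any(x == y for (x, y) in E)

# A surjective homomorphism h: G ->> A; both structures satisfy chi.
G_univ, G_E = {0, 1, 2}, {(0, 1), (1, 2), (2, 2)}
A_univ, A_E = {'a', 'b'}, {('a', 'b'), ('b', 'b')}
h = {0: 'a', 1: 'b', 2: 'b'}
assert is_hom(h, G_E, A_E) and set(h.values()) == A_univ
assert has_loop(G_E) and has_loop(A_E)  # chi is preserved along h

# A2 models the negative sentence not-chi (no loop); since G |= chi and
# chi is preserved under homomorphisms, there is no homomorphism G -> A2.
A2_univ, A2_E = {'a', 'b'}, {('a', 'b')}
assert not any(
    is_hom(dict(zip(sorted(G_univ), vals)), G_E, A2_E)
    for vals in product(sorted(A2_univ), repeat=len(G_univ))
)
```

In the lemma, the surjection in question is the counit component $GA\twoheadrightarrow A$, with $GA$ in the role of the source structure.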
The counits of the guarded comonads $\mathbb{G}_{n,k}^{\mathfrak{g}}$ are
componentwise surjective, thus the equirank-variable homomorphism preservation
theorem for guarded logics and its finite variant (Theorems 4.11 and 4.24,
respectively) admit a relativisation to any full subcategory of the form
$\operatorname{\mathbf{Mod}}(T)$ where $T$ is a set of negative
$\mathfrak{g}$-guarded sentences.
The counits of the modal comonads $\mathbb{M}_{k}$ are also componentwise
surjective. As positive modal formulas are preserved under surjective
homomorphisms between pointed Kripke structures, a slight variant of Lemma
4.25 shows that the comonads $\mathbb{M}_{k}$ restrict to any full subcategory
of the form $\operatorname{\mathbf{Mod}}(T)$, with $T$ a set of negative modal
formulas. Hence, the equidepth homomorphism preservation theorem for graded
modal logic and its finite version (Theorems 4.9 and 4.23, respectively) can
be relativised to any such subcategory.
Relativisations to subclasses of structures can be obtained even in the
absence of the bisimilar companion property; in that case, we need to ensure
that $k$-extendable covers can be constructed within the subclass. We defer
the statement of this result to Section 5.2 (see Corollary 5.11).
## 5. An Axiomatic Approach
In this section, we identify sufficient conditions on a resource-indexed
arboreal adjunction between $\operatorname{\mathscr{E}}$ and
$\operatorname{\mathscr{C}}$ ensuring that property (HP) is satisfied (see
Corollary 5.10). We introduce first conditions (E1)–(E2) on
$\operatorname{\mathscr{E}}$ in Section 5.1 and then, in Section 5.2,
conditions (A1)–(A4) on the adjunctions. In Section 5.3, we derive a slight
generalisation of the equirank homomorphism preservation theorem by showing
that these conditions are satisfied by the Ehrenfeucht-Fraïssé resource-
indexed arboreal adjunction.
###### Remark 5.1.
Let us point out that we cannot derive from this axiomatic approach an
_equivariable_ homomorphism preservation theorem, whereby the number of
variables in a sentence is preserved, let alone an _equirank-variable_ one. In
fact, the corresponding ($k$-round) $n$-pebble comonads do not satisfy
property (A3) below. On the other hand, under the additional assumption that
$k\leq n+2$, where $k$ is the quantifier rank and $n$ is the number of
variables in a sentence, an equirank-variable homomorphism preservation
theorem was proved by Paine in [35]. Also, our approach does not readily apply
to _hybrid logic_ because the hybrid comonads $\mathbb{H}_{k}$ in [5] do not
satisfy the path restriction property (A4) (cf. Definition 5.8).
### 5.1. Axioms for the extensional category
We require that the category $\operatorname{\mathscr{E}}$ have the following
properties:
1. (E1)
$\operatorname{\mathscr{E}}$ has all finite limits and small colimits.
2. (E2)
$\operatorname{\mathscr{E}}$ is equipped with a proper factorisation system
such that:
* •
Embeddings are stable under pushouts along embeddings.
* •
Pushout squares of embeddings are also pullbacks.
* •
Pushout squares of embeddings are stable under pullbacks along embeddings.
###### Remark 5.2.
Note that property (E2) only involves one half of the factorisation system,
namely the embeddings. In fact, it could be weakened to the requirement that
$\operatorname{\mathscr{E}}$ admit a class of monomorphisms $\mathscr{N}$
satisfying appropriate properties. When $\mathscr{N}$ is the class of all
monomorphisms, these are akin to the conditions for an _adhesive category_,
cf. [29].
###### Example 5.3.
If $\sigma$ is a relational vocabulary, $\mathbf{Struct}(\sigma)$ satisfies
(E1)–(E2). In fact, it is well known that $\mathbf{Struct}(\sigma)$ is
complete and cocomplete, hence it satisfies (E1). For (E2), consider the
proper factorisation system given by surjective homomorphisms and embeddings.
Up to isomorphism, embeddings can be identified with inclusions of induced
substructures. The pushout of a span of embeddings in
$\mathbf{Struct}(\sigma)$ can be identified with a union of structures, and so
embeddings are stable under pushouts along embeddings. The remaining two
conditions in (E2) hold because they are satisfied in
$\operatorname{\mathbf{Set}}$ by the class of monomorphisms, and the forgetful
functor $\mathbf{Struct}(\sigma)\to\operatorname{\mathbf{Set}}$ preserves and
reflects pullback and pushout diagrams consisting of embeddings.
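The pushout-as-union description in Example 5.3 can be checked concretely. Below is a small Python sketch, under the simplifying assumptions of a vocabulary with a single binary relation and embeddings identified with inclusions of induced substructures: the pushout of a span of inclusions is the union, and the resulting square of inclusions is also a pullback (the intersection recovers the apex of the span), as (E2) requires.

```python
def union(a, b):
    """Pushout of a span of inclusions C -> A, C -> B of induced
    substructures of a common ambient structure: the union."""
    return (a[0] | b[0], a[1] | b[1])

def intersection(a, b):
    """Pullback of the two inclusions into the union: the intersection."""
    return (a[0] & b[0], a[1] & b[1])

# Structures are (universe, binary relation); A and B overlap in C.
A = (frozenset({1, 2, 3}), frozenset({(1, 2), (2, 3)}))
B = (frozenset({2, 3, 4}), frozenset({(2, 3), (3, 4)}))
C = (frozenset({2, 3}), frozenset({(2, 3)}))

P = union(A, B)
assert P == (frozenset({1, 2, 3, 4}), frozenset({(1, 2), (2, 3), (3, 4)}))
# The pushout square of embeddings is also a pullback: A and B meet in C.
assert intersection(A, B) == C
```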
We note in passing that properties (E1)–(E2) are stable under taking coslices:
###### Lemma 5.4.
If a category $\operatorname{\mathscr{E}}$ satisfies (E1)–(E2), then so does
$e/{\operatorname{\mathscr{E}}}$ for all $e\in\operatorname{\mathscr{E}}$.
###### Proof.
Fix an arbitrary object $e\in\operatorname{\mathscr{E}}$. It is well known
that limits and colimits in $e/{\operatorname{\mathscr{E}}}$ are inherited
from $\operatorname{\mathscr{E}}$, so $e/{\operatorname{\mathscr{E}}}$
satisfies (E1) because $\operatorname{\mathscr{E}}$ does.
By assumption, $\operatorname{\mathscr{E}}$ admits a proper factorisation
system satisfying (E2). Let $\mathscr{Q}$ and $\mathscr{M}$ be the classes of
arrows in $e/{\operatorname{\mathscr{E}}}$ whose underlying morphisms in
$\operatorname{\mathscr{E}}$ are quotients and embeddings, respectively. It is
folklore that $(\mathscr{Q},\mathscr{M})$ is a weak factorisation system in
$e/{\operatorname{\mathscr{E}}}$. Moreover, this factorisation system is
proper because the codomain functor $\operatorname{cod}\colon
e/{\operatorname{\mathscr{E}}}\to\operatorname{\mathscr{E}}$ is faithful.
Recall that $\operatorname{cod}\colon
e/{\operatorname{\mathscr{E}}}\to\operatorname{\mathscr{E}}$ preserves
pushouts, so embeddings in $e/{\operatorname{\mathscr{E}}}$ are stable under
pushouts along embeddings because the corresponding property is satisfied in
$\operatorname{\mathscr{E}}$. The remaining two properties in (E2) follow by a
similar reasoning, using the fact that $\operatorname{cod}\colon
e/{\operatorname{\mathscr{E}}}\to\operatorname{\mathscr{E}}$ preserves and
reflects limits and pushouts. ∎
###### Example 5.5.
It follows from Example 5.3 and Lemma 5.4 that, for all relational
vocabularies $\sigma$, the category $\mathbf{Struct}_{\bullet}(\sigma)$ of
pointed $\sigma$-structures satisfies (E1)–(E2).
### 5.2. Axioms for the resource-indexed adjunctions
We now assume that the extensional category $\operatorname{\mathscr{E}}$
satisfies (E1)–(E2), and proceed to introduce conditions on the resource-
indexed arboreal adjunction between $\operatorname{\mathscr{E}}$ and
$\operatorname{\mathscr{C}}$. To start with, consider an arbitrary adjunction
$L\dashv R\colon\operatorname{\mathscr{E}}\to\operatorname{\mathscr{C}}$. As
with any adjunction, there are hom-set bijections
$\operatorname{\mathscr{E}}(Lc,e)\longrightarrow\operatorname{\mathscr{C}}(c,Re),\enspace
f\mapsto f^{\flat}$
natural in $c\in\operatorname{\mathscr{C}}$ and
$e\in\operatorname{\mathscr{E}}$. Explicitly, $f^{\flat}$ is defined as the
composite
$c\xrightarrow{\ \eta_{c}\ }RLc\xrightarrow{\ Rf\ }Re$
where $\eta$ is the unit of the adjunction $L\,{\dashv}\,R$. The inverse of
the function $f\mapsto f^{\flat}$ sends $g\in\operatorname{\mathscr{C}}(c,Re)$
to the morphism $g^{\#}$ given by the composition
$Lc\xrightarrow{\ Lg\ }LRe\xrightarrow{\ \varepsilon_{e}\ }e,$
where $\varepsilon$ is the counit of the adjunction. Naturality of these
bijections means that
$(f_{1}\circ f_{2})^{\flat}=Rf_{1}\circ f_{2}^{\flat}$
for all morphisms $f_{1}\colon e\to e^{\prime}$ and $f_{2}\colon Lc\to e$ in
$\operatorname{\mathscr{E}}$, and
$(g_{1}\circ g_{2})^{\#}=g_{1}^{\#}\circ Lg_{2}$
for all morphisms $g_{1}\colon c\to Re$ and $g_{2}\colon c^{\prime}\to c$ in
$\operatorname{\mathscr{C}}$.
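The transposition identities above are the usual adjunction calculus, and they can be sanity-checked on a familiar toy case: the product-exponential adjunction $(-)\times A\dashv(-)^{A}$ in $\operatorname{\mathbf{Set}}$ (currying). The Python sketch below — my own illustration, unrelated to the arboreal adjunctions of the paper — verifies the naturality equation $(f_{1}\circ f_{2})^{\flat}=Rf_{1}\circ f_{2}^{\flat}$ on sample inputs.

```python
A = [0, 1]  # the fixed exponent object

def flat(f):
    """f: C x A -> E  |->  f^flat: C -> E^A (currying)."""
    return lambda c: (lambda a: f((c, a)))

def sharp(g):
    """g: C -> E^A  |->  g^#: C x A -> E (uncurrying)."""
    return lambda ca: g(ca[0])(ca[1])

f2 = lambda ca: ca[0] + ca[1]  # f2: C x A -> E, with C = E = integers
f1 = lambda e: 10 * e          # f1: E -> E'

# Naturality: (f1 . f2)^flat == R(f1) . f2^flat, with R(f1) = post-compose f1.
lhs = flat(lambda ca: f1(f2(ca)))
rhs = lambda c: (lambda a: f1(flat(f2)(c)(a)))
assert all(lhs(c)(a) == rhs(c)(a) for c in range(3) for a in A)

# flat and sharp are mutually inverse on these samples.
assert all(sharp(flat(f2))((c, a)) == f2((c, a)) for c in range(3) for a in A)
```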
Next, we introduce the path restriction property for resource-indexed arboreal
adjunctions. In a nutshell, this states that whenever
$a\in\operatorname{\mathscr{E}}$ embeds into the image under $L_{k}$ of a
path, $a$ itself can be equipped with a path structure. Furthermore, these
path structures can be chosen in a coherent fashion. We start with an
auxiliary definition:
###### Definition 5.6.
Consider a resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$. A path
$P\in\operatorname{\mathscr{C}}_{p}^{k}$ is _smooth_ if there exist
$e\in\operatorname{\mathscr{E}}$ and an embedding $P\rightarrowtail R_{k}e$.
###### Remark 5.7.
The motivation for considering smooth paths arises from the fact that, when
considering a fresh binary relation symbol $I$ modelling equality in the logic
(cf. Example 3.18), the interpretation of $I$ in these paths is always an
equivalence relation.
###### Definition 5.8.
A resource-indexed arboreal adjunction between $\operatorname{\mathscr{E}}$
and $\operatorname{\mathscr{C}}$ has the _path restriction property_ if, for
all smooth paths $Q\in\operatorname{\mathscr{C}}_{p}^{k}$ and embeddings
${j\colon a\rightarrowtail L_{k}Q}$, there is a path
$Q_{a}\in\operatorname{\mathscr{C}}_{p}^{k}$ such that $L_{k}Q_{a}\cong a$ and
the following conditions are satisfied:
1. (i)
For all path embeddings $!_{P,Q}\colon P\rightarrowtail Q$ in
$\operatorname{\mathscr{C}}_{k}$ and all commutative diagrams
[Diagram: a commutative triangle with $f\colon L_{k}P\to a$, $j\colon a\rightarrowtail L_{k}Q$ and $L_{k}(!_{P,Q})=j\circ f$.]
there is an arrow $\ell\colon P\to Q_{a}$ such that $L_{k}\ell=f$.
2. (ii)
For all path embeddings $!_{P,Q}\colon P\rightarrowtail Q$ such that
$L_{k}(!_{P,Q})$ is an embedding, the pullback of $L_{k}(!_{P,Q})$ along $j$
is of the form $L_{k}\ell$ for some $\ell\colon Q_{b}\to Q_{a}$.
[Diagram: the pullback square of $L_{k}(!_{P,Q})\colon L_{k}P\to L_{k}Q$ along $j\colon a\rightarrowtail L_{k}Q$, with vertex $b$ and projection $L_{k}\ell\colon b\to a$.]
Finally, recall that an object $a$ of a category $\operatorname{\mathscr{A}}$
is _finitely presentable_ if the associated hom-functor
$\operatorname{\mathscr{A}}(a,-)\colon\operatorname{\mathscr{A}}\to\operatorname{\mathbf{Set}}$
preserves directed colimits.
With regards to the resource-indexed arboreal adjunction, we assume the
following properties are satisfied for all $k>0$ and all paths
$P\in\operatorname{\mathscr{C}}_{p}^{k}$:
1. (A1)
The category $\operatorname{\mathscr{C}}_{p}^{k}$ is locally finite and has
finitely many objects up to isomorphism.
2. (A2)
$L_{k}P$ is finitely presentable in $\operatorname{\mathscr{E}}$.
3. (A3)
For all arrows $m\colon P\to R_{k}a$ in $\operatorname{\mathscr{C}}_{k}$, if
$m$ is an embedding then so is $m^{\#}\colon L_{k}P\to a$. The converse holds
whenever $P$ is smooth.
4. (A4)
The path restriction property is satisfied.
###### Theorem 5.9.
Consider a resource-indexed arboreal adjunction between
$\operatorname{\mathscr{E}}$ and $\operatorname{\mathscr{C}}$ satisfying
(E1)–(E2) and (A1)–(A4). For all $a\in\operatorname{\mathscr{E}}$ and all
$k>0$, there exists a $k$-extendable cover of $a$.
The proof of the previous key fact is deferred to Section 6. Let us point out
the following immediate consequence:
###### Corollary 5.10.
(HP) holds for all resource-indexed arboreal adjunctions satisfying (E1)–(E2)
and (A1)–(A4).
###### Proof.
By Proposition 4.21 and Theorem 5.9. ∎
We can also deduce the following relativisation result. Let us say that a full
subcategory $\operatorname{\mathscr{D}}$ of a category
$\operatorname{\mathscr{A}}$ is _closed (in $\operatorname{\mathscr{A}}$)
under co-retracts_ provided that, whenever $A\in\operatorname{\mathscr{D}}$
and there is a section $A\to B$ in $\operatorname{\mathscr{A}}$, also
$B\in\operatorname{\mathscr{D}}$.
###### Corollary 5.11.
Let $\sigma$ be a relational vocabulary and consider a resource-indexed
arboreal adjunction between $\mathbf{Struct}(\sigma)$ and
$\operatorname{\mathscr{C}}$ satisfying (A1)–(A4). The following hold:
1. (1)
If $\operatorname{\mathscr{D}}$ is a full subcategory of
$\mathbf{Struct}(\sigma)$ closed under co-retracts such that each induced
comonad $G_{k}\coloneqq L_{k}R_{k}$ restricts to $\operatorname{\mathscr{D}}$,
then (HP) holds relative to $\operatorname{\mathscr{D}}$.
2. (2)
If the counits of the comonads $G_{k}$ are componentwise surjective and $T$ is
a set of negative sentences in the vocabulary $\sigma$, then (HP) holds
relative to $\operatorname{\mathbf{Mod}}(T)$.
###### Proof.
The proof of item 1 is the same, mutatis mutandis, as for Proposition 4.21,
using Theorem 5.9 and the fact that if $\operatorname{\mathscr{D}}$ is closed
under co-retracts then $k$-extendable covers can be constructed within
$\operatorname{\mathscr{D}}$.
Item 2 is an immediate consequence of item 1. Just observe that the comonads
$G_{k}$ restrict to $\operatorname{\mathbf{Mod}}(T)$ by Lemma 4.25, and
$\operatorname{\mathbf{Mod}}(T)$ is closed under co-retracts (cf. the proof of
the aforementioned lemma). ∎
### 5.3. The equirank homomorphism preservation theorem
As observed in Section 4.1, Rossman’s equirank homomorphism preservation
theorem is equivalent to property (HP) for the Ehrenfeucht-Fraïssé resource-
indexed arboreal adjunction between $\mathbf{Struct}(\sigma)$ and
$\mathscr{R}^{E}(\sigma^{I})$. In turn, by Corollary 5.10, to establish (HP)
it suffices to show that the latter resource-indexed arboreal adjunction
satisfies (E1)–(E2) and (A1)–(A4). By Example 5.3, the category
$\mathbf{Struct}(\sigma)$ satisfies (E1)–(E2) when equipped with the
(surjective homomorphisms, embeddings) factorisation system, so it remains to
show that (A1)–(A4) hold. Before doing so, note that Corollary 5.11 yields the
following slight generalisation of the equirank homomorphism preservation
theorem (just observe that the counits of the induced comonads on
$\mathbf{Struct}(\sigma)$ are componentwise surjective).
###### Theorem 5.12.
Let $\sigma$ be a relational vocabulary and let $\operatorname{\mathscr{D}}$
be a full subcategory of $\mathbf{Struct}(\sigma)$ closed under co-retracts
such that the comonads on $\mathbf{Struct}(\sigma)$ induced by the
Ehrenfeucht-Fraïssé resource-indexed arboreal adjunction restrict to
$\operatorname{\mathscr{D}}$. Then the equirank homomorphism preservation
theorem holds relative to $\operatorname{\mathscr{D}}$.
In particular, the equirank homomorphism preservation theorem holds relative
to $\operatorname{\mathbf{Mod}}(T)$ whenever $T$ is a set of negative
sentences in the vocabulary $\sigma$.
###### Remark 5.13.
A consequence of the first part of Theorem 5.12 is that the equirank
homomorphism preservation theorem admits a relativisation to any class of
structures that is _co-homomorphism closed_, i.e. downwards closed with
respect to the homomorphism preorder on $\mathbf{Struct}(\sigma)$, a fact
already pointed out by Rossman in [38, §7.1.2].
We proceed to verify conditions (A1)–(A4) for the adjunctions $L_{k}\dashv
R_{k}$, where
$L_{k}\coloneqq HL^{E}_{k}\ \text{ and }\ R_{k}\coloneqq R^{E}_{k}J$
with the notation of Example 3.18. Note that, since a first-order sentence
contains only finitely many relation symbols, in order to deduce the equirank
homomorphism preservation theorem, as well as Theorem 5.12 above, we can
assume without loss of generality that the relational vocabulary $\sigma$ is
finite.
(A1) Recall from Example 3.7 that, for all $k>0$, the paths in
$\mathscr{R}^{E}_{k}(\sigma^{I})$ are those forest-ordered
$\sigma^{I}$-structures $(A,\leq)$ such that the order is a chain of
cardinality at most $k$. Thus, $A$ has cardinality at most $k$. It follows at
once that there are finitely many paths in $\mathscr{R}^{E}_{k}(\sigma^{I})$
up to isomorphism, and at most one arrow between any two paths.
(A2) For any path $P=(A,\leq)$ in $\mathscr{R}^{E}_{k}(\sigma^{I})$, $L_{k}P$
is the quotient of the $\sigma$-reduct of $A$ with respect to the equivalence
relation generated by $I^{A}$. As $A$ is finite, so is $L_{k}P$. The finitely
presentable objects in $\mathbf{Struct}(\sigma)$ are precisely the finite
$\sigma$-structures (see e.g. [11, §5.1]), hence $L_{k}P$ is finitely
presentable.
(A3) Consider an arrow $m\colon P\to R_{k}B$ in
$\mathscr{R}^{E}_{k}(\sigma^{I})$, with $P=(A,\leq)$ a path and $B$ a
$\sigma$-structure. Let $B^{\prime}\coloneqq J(B)$ be the
$\sigma^{I}$-structure obtained from $B$ by interpreting $I$ as the identity
relation. Then $R_{k}B$ is obtained by equipping $\mathbb{E}_{k}(B^{\prime})$
with the prefix order. Consider the $\sigma$-homomorphism
$L_{k}m=Hm\colon H(A)\to H\mathbb{E}_{k}(B^{\prime}).$
For convenience of notation, given an element $x\in A$ we write $[x]$ for the
corresponding element of $H(A)$, and likewise for elements of
$\mathbb{E}_{k}(B^{\prime})$. Then $m^{\#}\colon L_{k}P\to B$ is the
composite of $Hm$ with the homomorphism $H\mathbb{E}_{k}(B^{\prime})\to B$
sending the equivalence class of an element of $\mathbb{E}_{k}(B^{\prime})$ to
the last element of any of its representatives. This map is well defined
because, for any pair of sequences in $I^{\mathbb{E}_{k}(B^{\prime})}$, their
last elements coincide.
Suppose $m$ is an embedding. If $m^{\#}([x])=m^{\#}([y])$ then $(m(x),m(y))$
belongs to the equivalence relation generated by
$I^{\mathbb{E}_{k}(B^{\prime})}$, and so $(x,y)$ belongs to the equivalence
relation generated by $I^{A}$. It follows that $[x]=[y]$ in $H(A)$, and so
$m^{\#}$ is injective. The same argument, mutatis mutandis, shows that
$m^{\#}$ reflects the interpretation of the relation symbols, hence is an
embedding.
Conversely, suppose that $m^{\#}$ is an embedding and $P$ is smooth. Consider
an embedding $n\colon P\rightarrowtail R_{k}C$ with
$C\in\mathbf{Struct}(\sigma)$. Note that the restriction of
$I^{\mathbb{E}_{k}(C^{\prime})}$ to the image of $n$ is an equivalence
relation, hence $I^{A}$ is an equivalence relation. As any forest morphism
whose domain is linearly ordered is injective, $m$ is injective. So, it
remains to show that $m$ reflects the interpretation of the relation symbols.
For all $x,y\in A$, if
$(m(x),m(y))\in I^{\mathbb{E}_{k}(B^{\prime})}$
then $m^{\#}([x])=m^{\#}([y])$ and so $[x]=[y]$ because $m^{\#}$ is
injective. That is, $(x,y)\in I^{A}$, showing that $m$ reflects the
interpretation of the relation $I$. Suppose now that $S$ is a relation symbol
different from $I$. For convenience of notation we shall assume that $S$ has
arity $2$; the general case is a straightforward adaptation. For all $x,y\in
A$, if
$(m(x),m(y))\in S^{\mathbb{E}_{k}(B^{\prime})}$
then $(m^{\#}([x]),m^{\#}([y]))\in S^{B}$ and so $([x],[y])\in S^{H(A)}$
because $m^{\#}$ is an embedding. That is, there are
$x^{\prime},y^{\prime}\in A$ such that $(x,x^{\prime}),(y,y^{\prime})\in
I^{A}$ and $(x^{\prime},y^{\prime})\in S^{A}$. We claim that the following
property holds, from which it follows that $m$ is an embedding:
$(x,x^{\prime}),(y,y^{\prime})\in I^{A}\ \text{ and }\
(x^{\prime},y^{\prime})\in S^{A}\ \Longrightarrow\ (x,y)\in S^{A}.$ ($\ast$)
In turn, this is a consequence of the fact that $n$ is an embedding and, in
$\mathbb{E}_{k}(C^{\prime})$,
$(n(x),n(x^{\prime})),(n(y),n(y^{\prime}))\in I^{\mathbb{E}_{k}(C^{\prime})}\
\text{ and }\ (n(x^{\prime}),n(y^{\prime}))\in S^{\mathbb{E}_{k}(C^{\prime})}$
imply $(n(x),n(y))\in S^{\mathbb{E}_{k}(C^{\prime})}$.
(A4) Finally, we show that the path restriction property is satisfied. Let
$Q=(B,\leq)$ be a smooth path in $\mathscr{R}^{E}_{k}(\sigma^{I})$, and let
$j\colon A\rightarrowtail H(B)$ be an embedding in $\mathbf{Struct}(\sigma)$.
Without loss of generality, we can identify $A$ with a substructure of $H(B)$,
and $j$ with the inclusion map. As observed above, since $Q$ is smooth,
$I^{B}$ is an equivalence relation and property ($\ast$) is satisfied
(with $B$ in place of $A$). If $q_{B}\colon B\twoheadrightarrow H(B)$ is the
canonical quotient map, let $Q_{A}$ denote the substructure of $B$ whose
underlying set is
$\{x\in B\mid q_{B}(x)\in A\}.$
Then $Q_{A}$ is a path in $\mathscr{R}^{E}_{k}(\sigma^{I})$ when equipped with
the restriction of the order on $B$ and, using ($\ast$), we get
$H(Q_{A})\cong A$.
It follows from the definition of $Q_{A}$ that item (i) in Definition 5.8 is
satisfied. Just observe that any substructure of $Q_{A}$ that is downwards
closed in $Q$ is also downwards closed in $Q_{A}$. With regards to item (ii),
consider a path embedding $!_{P,Q}\colon P\rightarrowtail Q$ with $P=(C,\leq)$
and form the following pullback square in $\mathbf{Struct}(\sigma)$.
[pullback square: $D\to H(C)$ and $D\to A$, where $H(C)\to H(B)$ is $L_{k}(!_{P,Q})$ and $A\to H(B)$ is the embedding $j$]
Identifying $H(C)$ with a substructure of $H(B)$, we can assume $D$ is the
intersection of $A$ and $H(C)$. Because $C$ is a substructure of $B$, it
follows that $Q_{D}$ is a substructure of $Q_{A}$. Moreover, because $C$ is
downwards closed in $B$, $Q_{D}$ is downwards closed in $Q_{A}$. That is,
there is an inclusion $Q_{D}\rightarrowtail Q_{A}$ whose image under $L_{k}$
coincides with the pullback of $L_{k}(!_{P,Q})$ along $j$. Hence the path
restriction property holds.
## 6. Proof of Theorem 5.9
For the remainder of this section, we fix an arbitrary resource-indexed
arboreal adjunction between $\operatorname{\mathscr{E}}$ and
$\operatorname{\mathscr{C}}$, with adjunctions $L_{k}\dashv
R_{k}\colon\operatorname{\mathscr{E}}\to\operatorname{\mathscr{C}}_{k}$,
satisfying (E1)–(E2) and (A1)–(A4).
### 6.1. Relative extendability
For all $k>0$, we denote by $\overline{\operatorname{\mathscr{C}}}_{k}$ the
full subcategory of $\operatorname{\mathscr{C}}_{k}$ whose objects are
colimits of finite diagrams of embeddings in
$\operatorname{\mathscr{C}}_{p}^{k}$. Further, we write
$L_{k}[\overline{\operatorname{\mathscr{C}}}_{k}]$ for the full subcategory of
$\operatorname{\mathscr{E}}$ defined by the objects of the form $L_{k}c$ for
$c\in\overline{\operatorname{\mathscr{C}}}_{k}$.
###### Remark 6.1.
It follows from (A1) that $\overline{\operatorname{\mathscr{C}}}_{k}$ is
equivalent to a finite category. Therefore
$L_{k}[\overline{\operatorname{\mathscr{C}}}_{k}]$ contains, up to
isomorphism, only finitely many objects.
As we shall see in the following lemma, every path embedding $P\rightarrowtail
R_{k}a$ is homomorphically equivalent to one of the form $P\rightarrowtail
R_{k}\widetilde{a}$ with $\widetilde{a}\in
L_{k}[\overline{\operatorname{\mathscr{C}}}_{k}]$. Consequently, in the
definition of $k$-extendable object (see Definition 4.17) we can assume
without loss of generality that $e\in
L_{k}[\overline{\operatorname{\mathscr{C}}}_{k}]$. This observation, combined
with Remark 6.1, will allow us to control the size of the diagrams featuring
in the proof of Theorem 5.9.
###### Lemma 6.2.
For all path embeddings $m\colon P\rightarrowtail R_{k}a$, there are
$\widetilde{a}\in L_{k}[\overline{\operatorname{\mathscr{C}}}_{k}]$ and a path
embedding $\widetilde{m}\colon P\rightarrowtail R_{k}\widetilde{a}$ such that
$m\rightleftarrows\widetilde{m}$ in $P/{\operatorname{\mathscr{C}}_{k}}$.
###### Proof.
Fix an arbitrary path embedding $m\colon P\rightarrowtail R_{k}a$. By (A1),
there is a finite set of paths
$\mathscr{P}=\{P_{1},\ldots,P_{j}\}\subseteq\operatorname{\mathscr{C}}_{p}^{k}$
such that each path in $\operatorname{\mathscr{C}}_{k}$ is isomorphic to
exactly one member of $\mathscr{P}$. We can assume without loss of generality
that $P\in\mathscr{P}$.
For each path embedding $p\in\operatorname{\mathbb{P}}{(R_{k}a)}$, denote by
$T_{p}$ the tree obtained by first considering the tree
${\uparrow}p\subseteq\operatorname{\mathbb{P}}{(R_{k}a)}$ and then replacing
each node $q$ (which is an isomorphism class of a path embedding) with the
unique path $P_{i}\in\mathscr{P}$ such that $P_{i}\cong\operatorname{dom}(q)$.
We assume that $T_{p}$ is _reduced_, i.e. given any two nodes $x$ and $y$ of
$T_{p}$ that cover the same node, if the trees ${\uparrow}x$ and ${\uparrow}y$
are equal then $x=y$. (If $T_{p}$ is not reduced, we can remove branches in
the obvious manner to obtain a maximal reduced subtree $T^{\prime}_{p}$.) We
refer to $T_{p}$ as the _type_ of $p$; note that this is a finite tree. In
particular, if $\bot$ is the root of $\operatorname{\mathbb{P}}{(R_{k}a)}$, we
get a finite tree $T_{\bot}$.
Now, for each node $x$ of $T_{\bot}$, we shall define a path embedding $m_{x}$
into $R_{k}a$ whose domain belongs to $\mathscr{P}$. The definition of $m_{x}$
is by induction on the height of $x$. Suppose $x$ has height $0$, i.e. $x$ is
the root of $T_{\bot}$. Then $x=P_{i}$ for a unique $i\in\{1,\ldots,j\}$.
Define $m_{x}$ as the restriction of $m$ to $P_{i}$, i.e. the composition of
$m\colon P\rightarrowtail R_{k}a$ with the unique embedding
$P_{i}\rightarrowtail P$. Next, suppose $m_{z}$ has been defined for all nodes
$z$ of height at most $l$, and let $x$ be a node of height $l+1$ labeled by
some $P_{j}$. We distinguish two cases:
* •
If there is a node $y\geq x$ such that $T_{m}$ coincides with the tree
${\uparrow}y\subseteq T_{\bot}$, then we let $m_{x}$ be the restriction of $m$
to $P_{j}$. Note that, in this case, the type of $m_{x}$ coincides with the
tree ${\uparrow}x\subseteq T_{\bot}$. Moreover, if $z$ is the predecessor of
$x$ then $m_{z}$ will also be an appropriate restriction of $m$, and thus
$m_{x}$ extends $m_{z}$.
* •
Otherwise, we let $m_{x}\colon P_{j}\rightarrowtail R_{k}a$ be any path
embedding such that:
1. (i)
The type of $m_{x}$ coincides with the tree ${\uparrow}x\subseteq T_{\bot}$.
2. (ii)
$m_{x}$ extends $m_{z}$, where $z$ is the predecessor of $x$.
Note that such an embedding $m_{x}$ exists because $x\in{\uparrow}z\subseteq
T_{\bot}$ and, by inductive hypothesis, ${\uparrow}z$ coincides with the type
of $m_{z}$.
The set
$V\coloneqq\{m_{x}\mid x\in T_{\bot}\}$
is finite and contains $m$. We regard $V$ as a cocone over a finite diagram
$D$ of paths and embeddings between them. Let $\widetilde{a}\coloneqq
L_{k}(\operatornamewithlimits{colim}D)$ and note that $\widetilde{a}\in
L_{k}[\overline{\operatorname{\mathscr{C}}}_{k}]$. The functor $L_{k}$
preserves colimits because it is left adjoint, hence $\widetilde{a}$ is the
colimit in $\operatorname{\mathscr{E}}$ of the diagram $L_{k}D$. The cocone
$\{n^{\#}\mid n\in V\}$ with vertex $a$ over $L_{k}D$ then factors through
a unique morphism $f\colon\widetilde{a}\to a$. By construction, $m\colon
P\rightarrowtail R_{k}a$ factors through $R_{k}f$, and so there is
$\widetilde{m}\colon P\rightarrowtail R_{k}\widetilde{a}$ such that
$m=R_{k}f\circ\widetilde{m}$. Hence, $\widetilde{m}\to m$.
Next, with the aim of showing that $m\to\widetilde{m}$, we shall define a
morphism ${R_{k}a\to R_{k}\widetilde{a}}$. As $R_{k}a$ is path generated, it
suffices to define a cocone
$W=\{\varphi_{p}\mid p\in\operatorname{\mathbb{P}}{(R_{k}a)}\}$
with vertex $R_{k}\widetilde{a}$ over the diagram of path embeddings into
$R_{k}a$. Suppose $p\in\operatorname{\mathbb{P}}{(R_{k}a)}$. We define the
corresponding arrow $\varphi_{p}$ by induction on the height of $p$:
1. (i)
If $p$ is the root of $\operatorname{\mathbb{P}}{(R_{k}a)}$, then it factors
through $R_{k}f\colon R_{k}\widetilde{a}\rightarrowtail R_{k}a$, and so it
yields an embedding $\varphi_{p}\colon\operatorname{dom}(p)\rightarrowtail
R_{k}\widetilde{a}$.
2. (ii)
Suppose that $p$ has height $l+1$ and $\varphi_{q}$ has been defined whenever
$q$ has height at most $l$. We distinguish two cases: if $p$ factors through
$R_{k}f\colon R_{k}\widetilde{a}\rightarrowtail R_{k}a$, i.e. $p=R_{k}f\circ
s_{p}$ for some embedding $s_{p}$, then we set $\varphi_{p}\coloneqq s_{p}$.
This is the case, in particular, when $p\leq m$ in
$\operatorname{\mathbb{P}}{(R_{k}a)}$. Clearly, if $p$ extends $q$ then
$\varphi_{p}$ extends $\varphi_{q}$.
Otherwise, let $q$ be such that $p\succ q$. By inductive hypothesis, we can
suppose that $R_{k}f\circ\varphi_{q}$ coincides with an embedding $m_{x}\colon
P_{j}\rightarrowtail R_{k}a$ in $V$ (up to an isomorphism
$\operatorname{dom}(q)\cong P_{j}$) whose type coincides with the tree
${\uparrow}x\subseteq T_{\bot}$. As $q$ corresponds to a node $y$ covering $x$
labeled by some $P_{h}\cong\operatorname{dom}(p)$, by definition of $V$ there
is an embedding $m_{y}\colon P_{h}\rightarrowtail R_{k}a$ in $V$ such that
$m_{y}$ extends $m_{x}$, and the type of $m_{y}$ coincides with ${\uparrow}y$.
Since $m_{y}$ factors through $R_{k}f$, precomposing with the isomorphism
$\operatorname{dom}(p)\cong P_{h}$ we get an embedding
$\varphi_{p}\colon\operatorname{dom}(p)\rightarrowtail R_{k}\widetilde{a}$.
Observe that $\varphi_{p}$ extends $\varphi_{q}$.
The compatibility condition for the cocone $W$ states that $\varphi_{p}$
extends $\varphi_{q}$ whenever $p$ extends $q$, which is ensured by the
definition above. Thus, $W$ induces a morphism $g\colon R_{k}a\to
R_{k}\widetilde{a}$ and, by construction, $g\circ m=\widetilde{m}$. Hence,
$m\to\widetilde{m}$. ∎
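In summary, the two halves of the proof above can be recorded as the following pair of equations (with $f\colon\widetilde{a}\to a$ and $g\colon R_{k}a\to R_{k}\widetilde{a}$ as constructed in the proof):

```latex
% Homotopy equivalence m <=> m~ witnessed by f and g:
m = R_k f \circ \widetilde{m} \;\Longrightarrow\; \widetilde{m} \to m,
\qquad
\widetilde{m} = g \circ m \;\Longrightarrow\; m \to \widetilde{m}.
```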
The construction of $k$-extendable covers is akin to that of
$\omega$-saturated elementary extensions in model theory, where one starts
with a first-order structure $M$ and constructs an elementary extension
$M_{1}$ of $M$ in which all types over (finite subsets of) $M$ are realised,
then an elementary extension $M_{2}$ of $M_{1}$ in which all types over
$M_{1}$ are realised, and so forth. The union of the induced elementary chain
of models yields the desired $\omega$-saturated elementary extension of $M$.
In the same spirit, we introduce a notion of $k$-extendability relative to a
homomorphism, which models the one-step construction just outlined.
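As an illustrative aside (a standard model-theoretic sketch, not part of the formal development), the classical construction alluded to above can be displayed as an elementary chain:

```latex
% Elementary chain construction of an omega-saturated extension (sketch).
% Each M_{i+1} realises all types over (finite subsets of) M_i.
M = M_0 \preceq M_1 \preceq M_2 \preceq \cdots, \qquad
M_\omega \coloneqq \bigcup_{i < \omega} M_i .
```

The relative notion introduced in Definition 6.3 plays the role of a single step $M_{i}\preceq M_{i+1}$ of this chain.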
###### Definition 6.3.
Let $h\colon a\to b$ be an arrow in $\operatorname{\mathscr{E}}$. We say that
$b$ is _$k$-extendable relative to $h$_ if the following property is
satisfied for all $e\in L_{k}[\overline{\operatorname{\mathscr{C}}}_{k}]$: For
all path embeddings $m\colon P\rightarrowtail R_{k}a$ and $n\colon
P\rightarrowtail R_{k}e$ such that $m_{\star}\rightleftarrows n_{\star}$,
[diagram: $m_{\star}\colon P\to\mathscr{S}_{m}$ and $n_{\star}\colon P\to\mathscr{S}_{n}$]
if $n^{\prime}\colon Q\rightarrowtail R_{k}e$ is a path embedding such that
$n\leq n^{\prime}$ in $\operatorname{\mathbb{P}}{(R_{k}e)}$, there is a path
embedding $m^{\prime}\colon Q\rightarrowtail R_{k}b$ such that the leftmost
diagram below commutes and $m^{\prime}_{\star}\rightleftarrows
n^{\prime}_{\star}$.
[diagrams: on the left, the commutative square $!\colon P\to Q$, $m\colon P\to R_{k}a$, $m^{\prime}\colon Q\to R_{k}b$, $R_{k}h\colon R_{k}a\to R_{k}b$; on the right, $m^{\prime}_{\star}\colon Q\to\mathscr{S}_{m^{\prime}}$ and $n^{\prime}_{\star}\colon Q\to\mathscr{S}_{n^{\prime}}$]
Suppose that, given an object $a\in\operatorname{\mathscr{E}}$, we are able to
construct a section $s\colon a\to b$ such that $b$ is $k$-extendable relative
to $s$. Iterating this process countably many times, we obtain a
$k$-extendable cover $a\to a^{*}$, thus settling Theorem 5.9. The main hurdle
consists in establishing the following proposition; a proof is offered in
Section 6.2.
###### Proposition 6.4.
For all $a\in\operatorname{\mathscr{E}}$ and all $k>0$ there is a section
$s\colon a\to b$ such that $b$ is $k$-extendable relative to $s$.
We can finally prove Theorem 5.9:
###### Proof of Theorem 5.9.
Let $a\in\operatorname{\mathscr{E}}$. By applying Proposition 6.4 repeatedly,
we obtain a chain of sections
$a\xrightarrow{\ s_{1}\ }b_{1}\xrightarrow{\ s_{2}\ }b_{2}\xrightarrow{\ s_{3}\ }b_{3}\xrightarrow{\ s_{4}\ }\cdots$
such that $b_{i}$ is $k$-extendable relative to $s_{i}$, for all $i\geq 1$.
Denote the previous diagram by $D$ and let $a^{*}$ be the colimit of $D$ in
$\operatorname{\mathscr{E}}$, which exists by (E1). Let $h_{i}\colon b_{i}\to
a^{*}$ be the colimit map with domain $b_{i}$, and $s\colon a\to a^{*}$ the
one with domain $a$. As all the arrows in $D$ are sections, so are the colimit
maps; in particular, $s$ is a section.
We claim that $a^{*}$ is $k$-extendable. Suppose $m\colon P\rightarrowtail
R_{k}(a^{*})$ and $n\colon P\rightarrowtail R_{k}e$ are path embeddings such
that $m_{\star}\rightleftarrows n_{\star}$. By Lemma 6.2, we can assume
without loss of generality that $e\in
L_{k}[\overline{\operatorname{\mathscr{C}}}_{k}]$.
By (A2), $L_{k}P$ is finitely presentable in $\operatorname{\mathscr{E}}$ and
so $m^{\#}\colon L_{k}P\to a^{*}$ factors through one of the colimit maps.
Assume without loss of generality that $m^{\#}$ factors through $h_{j}\colon
b_{j}\to a^{*}$ for some $j\geq 1$, so there is an arrow $r\colon L_{k}P\to
b_{j}$ satisfying $m^{\#}=h_{j}\circ r$. If $m_{j}\coloneqq r^{\flat}$, it
follows that $m=R_{k}h_{j}\circ m_{j}$. In particular, $m_{j}$ is an embedding
because so is $m$. Since $R_{k}h_{j}$ is a section, Remark 4.13 entails
$m_{j}\rightleftarrows m$, and so $(m_{j})_{\star}\rightleftarrows m_{\star}$
by Lemma 4.15(c). Because $m_{\star}\rightleftarrows n_{\star}$, also
$(m_{j})_{\star}\rightleftarrows n_{\star}$.
Now, let $n^{\prime}\colon Q\rightarrowtail R_{k}e$ be any path embedding such
that $n\leq n^{\prime}$ in $\operatorname{\mathbb{P}}{(R_{k}e)}$. Since
$b_{j+1}$ is $k$-extendable relative to $s_{j+1}$, there is a path embedding
$m^{\prime}\colon Q\rightarrowtail R_{k}b_{j+1}$ such that
$m^{\prime}_{\star}\rightleftarrows n^{\prime}_{\star}$ and the following
diagram commutes.
[commutative square: $!\colon P\to Q$, $m_{j}\colon P\to R_{k}b_{j}$, $m^{\prime}\colon Q\to R_{k}b_{j+1}$, $R_{k}s_{j+1}\colon R_{k}b_{j}\to R_{k}b_{j+1}$]
It follows that
$m=R_{k}h_{j}\circ m_{j}=R_{k}h_{j+1}\circ R_{k}s_{j+1}\circ
m_{j}=R_{k}h_{j+1}\circ m^{\prime}\circ{!}$
and so $m^{\prime\prime}\coloneqq R_{k}h_{j+1}\circ m^{\prime}\colon
Q\rightarrowtail R_{k}(a^{*})$ satisfies $m\leq m^{\prime\prime}$ in
$\operatorname{\mathbb{P}}{(R_{k}(a^{*}))}$. Again by Remark 4.13 and Lemma
4.15(c) we get $m^{\prime\prime}_{\star}\rightleftarrows m^{\prime}_{\star}$,
and thus $m^{\prime\prime}_{\star}\rightleftarrows n^{\prime}_{\star}$. This
shows that $a^{*}$ is $k$-extendable. ∎
### 6.2. Proof of Proposition 6.4
Fix an object $a$ of $\operatorname{\mathscr{E}}$ and a positive integer $k$.
To improve readability, we drop the subscript from $L_{k}$ and $R_{k}$, and
simply write $L$ and $R$ (but continue to denote by
$\operatorname{\mathscr{C}}_{k}$ the arboreal category). We must find a
section $s\colon a\to b$ such that $b$ is $k$-extendable relative to $s$.
Consider all pairs of path embeddings
$(u\colon P\rightarrowtail Ra,v\colon P\rightarrowtail Re)$
in $\operatorname{\mathscr{C}}_{k}$ such that $e\in
L[\overline{\operatorname{\mathscr{C}}}_{k}]$ and $Lv_{\star}\to u^{\#}$ in
$LP/{\operatorname{\mathscr{E}}}$.
###### Remark 6.5.
Note that $Lv_{\star}\to u^{\#}$ entails that $Lv_{\star}$ is an embedding.
Just observe that $u^{\#}$ is an embedding by the first part of (A3).
Each such pair $(u,v)$ induces a pushout square in
$\operatorname{\mathscr{E}}$ as follows.
[pushout square: $u^{\#}\colon LP\to a$, $Lv_{\star}\colon LP\to L\mathscr{S}_{v}$, $\iota_{(u,v)}\colon a\to a+_{LP}L\mathscr{S}_{v}$, $\lambda_{(u,v)}\colon L\mathscr{S}_{v}\to a+_{LP}L\mathscr{S}_{v}$]
This pushout square exists by (E1) and consists entirely of embeddings by
virtue of (E2).
###### Lemma 6.6.
$\iota_{(u,v)}$ is a section.
###### Proof.
Just observe that, since $Lv_{\star}\to u^{\#}$, there is $g\colon
L\mathscr{S}_{v}\to a$ such that ${g\circ Lv_{\star}=u^{\#}}$. By the
universal property of the pushout, there is an arrow $h\colon
a+_{LP}L\mathscr{S}_{v}\to a$ such that $h\circ\iota_{(u,v)}$ is the identity
of $a$. ∎
We let $D$ be the diagram in $\operatorname{\mathscr{E}}$ consisting of all
the morphisms
$\iota_{(u,v)}\colon a\to a+_{LP}L\mathscr{S}_{v}$
as above. Because $\operatorname{\mathscr{C}}_{k}$ is locally finite and $e$
varies among the objects of $L[\overline{\operatorname{\mathscr{C}}}_{k}]$,
choosing representatives for isomorphism classes in an appropriate way we can
assume by Remark 6.1 that $D$ is a small diagram.
By (E1), $D$ admits a colimit $b\coloneqq\operatornamewithlimits{colim}D$. In
other words, $b$ is obtained as a _wide pushout_ in
$\operatorname{\mathscr{E}}$. Denote by $s\colon a\to b$ the colimit map with
domain $a$, and by
$t_{(u,v)}\colon a+_{LP}L\mathscr{S}_{v}\to b$
the colimit map corresponding to the arrow $\iota_{(u,v)}$. As all arrows in
$D$ are sections by Lemma 6.6, so are the colimit maps. In particular,
$s\colon a\to b$ is a section.
We claim that $b$ is $k$-extendable relative to $s$, thus settling Proposition
6.4. Assume we are given path embeddings $m\colon P\rightarrowtail Ra$ and
$n\colon P\rightarrowtail Re$, with $e\in
L[\overline{\operatorname{\mathscr{C}}}_{k}]$, such that
$m_{\star}\rightleftarrows n_{\star}$ as displayed in the leftmost diagram
below.
[diagrams: on the left, $m_{\star}\colon P\to\mathscr{S}_{m}$ and $n_{\star}\colon P\to\mathscr{S}_{n}$, with $f\colon\mathscr{S}_{m}\to\mathscr{S}_{n}$ and $g\colon\mathscr{S}_{n}\to\mathscr{S}_{m}$; on the right, the commutative triangle $m^{\#}\colon LP\to a$, $Ln_{\star}\colon LP\to L\mathscr{S}_{n}$, $\mathfrak{i}_{m}^{\#}\circ Lg\colon L\mathscr{S}_{n}\to a$]
If $\mathfrak{i}_{m}\colon\mathscr{S}_{m}\rightarrowtail Ra$ is the canonical
embedding, we get a commutative triangle as on the right-hand side above. Just
observe that
$\mathfrak{i}_{m}^{\#}\circ Lg\circ Ln_{\star}=\mathfrak{i}_{m}^{\#}\circ
Lm_{\star}=(\mathfrak{i}_{m}\circ m_{\star})^{\#}=m^{\#}.$
Hence $Ln_{\star}\to m^{\#}$. Let
$\iota_{(m,n)}\colon a\to a+_{LP}L\mathscr{S}_{n}$
be the corresponding arrow in the diagram $D$. To improve readability we shall
write, respectively, $\iota$, $\lambda$ and $t$ instead of $\iota_{(m,n)}$,
$\lambda_{(m,n)}$ and $t_{(m,n)}$.
Let $n^{\prime}\colon Q\rightarrowtail Re$ be a path embedding with $n\leq
n^{\prime}$ in $\operatorname{\mathbb{P}}{(Re)}$. We must exhibit a path
embedding $m^{\prime}\colon Q\rightarrowtail Rb$ such that the leftmost square
below commutes and $m^{\prime}_{\star}\rightleftarrows n^{\prime}_{\star}$.
[diagrams: on the left, the commutative square $!\colon P\to Q$, $m\colon P\to Ra$, $m^{\prime}\colon Q\to Rb$, $Rs\colon Ra\to Rb$; on the right, $m^{\prime}_{\star}\colon Q\to\mathscr{S}_{m^{\prime}}$ and $n^{\prime}_{\star}\colon Q\to\mathscr{S}_{n^{\prime}}$] (2)
Note that, because $n\leq n^{\prime}$, we get
$\mathscr{S}_{n^{\prime}}\leq\mathscr{S}_{n}$ in
$\operatorname{\mathbb{S}}{Re}$. Thus, $n^{\prime}_{\star}$ can be identified
with a path embedding into $\mathscr{S}_{n}$. Consider the arrow
$\xi\coloneqq(\lambda\circ Ln^{\prime}_{\star})^{\flat}\colon Q\to
R(a+_{LP}L\mathscr{S}_{n}).$
###### Lemma 6.7.
$\xi$ is an embedding.
###### Proof.
By the first part of (A3), $(n^{\prime})^{\#}$ is an embedding. Then
$Ln^{\prime}$ is an embedding since $(n^{\prime})^{\#}=\varepsilon_{e}\circ
Ln^{\prime}$, and so is $Ln^{\prime}_{\star}$. It follows that $\lambda\circ
Ln^{\prime}_{\star}$ is an embedding because it is a composition of
embeddings, and $\xi$ is an embedding by the second part of (A3). ∎
Lemma 6.7, combined with the fact that $Rt$ is a section (hence an embedding),
entails that the composite $m^{\prime}\coloneqq Rt\circ\xi\colon
Q\rightarrowtail Rb$ is an embedding. Moreover
$\displaystyle m^{\prime}\circ{!}$
$\displaystyle=((m^{\prime}\circ{!})^{\#})^{\flat}=((m^{\prime})^{\#}\circ
L{!})^{\flat}=(t\circ\lambda\circ Ln^{\prime}_{\star}\circ L{!})^{\flat}$
$\displaystyle=(t\circ\lambda\circ Ln_{\star})^{\flat}=(t\circ\iota\circ
m^{\#})^{\flat}=(s\circ m^{\#})^{\flat}=Rs\circ m,$
showing that the leftmost diagram in equation (2) commutes. Since $Rt$ is a
section, we have $\xi\rightleftarrows m^{\prime}$ by Remark 4.13, and so
$\xi_{\star}\rightleftarrows m^{\prime}_{\star}$ by Lemma 4.15(c). Therefore,
in order to show that $m^{\prime}_{\star}\rightleftarrows n^{\prime}_{\star}$
it suffices to prove that $\xi_{\star}\rightleftarrows n^{\prime}_{\star}$. We
have
$\lambda^{\flat}\circ n^{\prime}_{\star}=((\lambda^{\flat}\circ
n^{\prime}_{\star})^{\\#})^{\flat}=(\lambda\circ
Ln^{\prime}_{\star})^{\flat}=\xi$
and thus $n^{\prime}_{\star}\to\xi$. It follows from Remark 4.16 that
$n^{\prime}_{\star}\to\xi_{\star}$.
It remains to show that $\xi_{\star}\to n^{\prime}_{\star}$; the proof of this
fact will occupy us for the rest of this section. As
$\operatorname{\mathscr{C}}_{k}$ is an arboreal category, $\mathscr{S}_{\xi}$
is the colimit of its path embeddings. Thus, in order to define a morphism
$\mathscr{S}_{\xi}\to\mathscr{S}_{n^{\prime}}$ it suffices to define a
compatible cocone with vertex $\mathscr{S}_{n^{\prime}}$ over the diagram of
path embeddings into $\mathscr{S}_{\xi}$. By Lemma 4.15(a), the path
embeddings into $\mathscr{S}_{\xi}$ can be identified with the path embeddings
into $R(a+_{LP}L\mathscr{S}_{n})$ that are comparable with $\xi$. For each
such path embedding $q\colon Q^{\prime}\rightarrowtail
R(a+_{LP}L\mathscr{S}_{n})$, we shall define an arrow $\zeta_{q}\colon
Q^{\prime}\to Re$ and prove that these form a compatible cocone. We will then
deduce, using the induced mediating morphism $\mathscr{S}_{\xi}\to Re$, that
$\xi_{\star}\to n^{\prime}_{\star}$.
Fix an arbitrary path embedding $q\colon Q^{\prime}\rightarrowtail
R(a+_{LP}L\mathscr{S}_{n})$ above $\xi$ and consider the following diagram in
$\operatorname{\mathscr{E}}$, where the four vertical faces are pullbacks.
[diagram (3): a cube whose four vertical faces are pullbacks; top face $\overline{LP}$, $\overline{L\mathscr{S}_{n}}$, $\overline{a}$, $LQ^{\prime}$ with arrows $\nu$, $\mu_{1}$, $\mu_{2}$, $\tau_{1}$, $\tau_{2}$, $\sigma_{1}$; bottom face $LP$, $L\mathscr{S}_{n}$, $a$, $a+_{LP}L\mathscr{S}_{n}$ with arrows $Ln_{\star}$, $m^{\#}$, $\lambda$, $\iota$; vertical arrows include $\sigma_{2}\colon\overline{a}\to a$ and $q^{\#}\colon LQ^{\prime}\to a+_{LP}L\mathscr{S}_{n}$] (3)
Note that the previous diagram consists entirely of embeddings because
$q^{\#}$ is an embedding by the first part of (A3), and the pullback in
$\operatorname{\mathscr{E}}$ of an embedding exists by (E1) and is again an
embedding.
Because $q$ is above $\xi$, there is an embedding $Q\rightarrowtail
Q^{\prime}$, and thus also an embedding $P\rightarrowtail Q^{\prime}$. By the
universal property of pullbacks, there are unique arrows
$\vartheta\colon LP\rightarrowtail\overline{a}\ \ \text{ and }\ \ \Delta\colon
LP\rightarrowtail\overline{LP}$
making the following diagrams commute.
[pullback square defining $\vartheta$: $L!\colon LP\to LQ^{\prime}$ and $m^{\#}\colon LP\to a$ factor through $\vartheta\colon LP\to\overline{a}$, with $\sigma_{1}\colon\overline{a}\to LQ^{\prime}$, $\sigma_{2}\colon\overline{a}\to a$, $q^{\#}\colon LQ^{\prime}\to a+_{LP}L\mathscr{S}_{n}$, $\iota\colon a\to a+_{LP}L\mathscr{S}_{n}$]
[pullback square defining $\Delta$: $\vartheta\colon LP\to\overline{a}$ and $\mathrm{id}_{LP}$ factor through $\Delta\colon LP\to\overline{LP}$, with $\mu_{1}\colon\overline{LP}\to\overline{a}$, $\mu_{2}\colon\overline{LP}\to LP$, $\sigma_{2}\colon\overline{a}\to a$, $m^{\#}\colon LP\to a$]
Note in particular that $\mu_{2}$ is a retraction whose right inverse is
$\Delta$. As $\mu_{2}$ is also an embedding, it must be an isomorphism with
(two-sided) inverse $\Delta$.
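For completeness, the standard verification that a monic retraction is invertible runs as follows:

```latex
% mu_2 . Delta = id and mu_2 monic force Delta . mu_2 = id:
\mu_2 \circ (\Delta \circ \mu_2)
  = (\mu_2 \circ \Delta) \circ \mu_2
  = \mu_2
  = \mu_2 \circ \mathrm{id}_{\overline{LP}}
\;\Longrightarrow\;
\Delta \circ \mu_2 = \mathrm{id}_{\overline{LP}} .
```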
By (A4) (more precisely, by item (i) in Definition 5.8) there is an arrow
$w\colon P\to Q_{\overline{a}}$ between paths such that $Lw=\vartheta$. Hence,
we can consider $\sigma_{2}^{\flat}\colon Q_{\overline{a}}\to Ra$. Note that
$(\sigma_{2}^{\flat}\circ w)^{\#}=\sigma_{2}\circ
Lw=\sigma_{2}\circ\vartheta=m^{\#},$
and so $\sigma_{2}^{\flat}\circ w=m$. In particular, $w$ is an embedding. As
$Q_{\overline{a}}$ is a path,
$\mathfrak{i}_{w}\colon\mathscr{S}_{w}\rightarrowtail Q_{\overline{a}}$ can be
identified with the identity $Q_{\overline{a}}\to Q_{\overline{a}}$. By Lemma
4.15(b), there is a unique arrow $\psi_{q}\colon
Q_{\overline{a}}\to\mathscr{S}_{m}$ making the following diagram commute.
[commutative triangle: $\psi_{q}\colon Q_{\overline{a}}\to\mathscr{S}_{m}$, $\sigma_{2}^{\flat}\colon Q_{\overline{a}}\to Ra$, $\mathfrak{i}_{m}\colon\mathscr{S}_{m}\to Ra$]
###### Lemma 6.8.
The following diagram commutes.
[triangle: $\mu_{1}\colon\overline{LP}\to\overline{a}$, $Ln_{\star}\circ\mu_{2}\colon\overline{LP}\to L\mathscr{S}_{n}$, $L(f\circ\psi_{q})\colon\overline{a}\to L\mathscr{S}_{n}$]
###### Proof.
Note that
$\mathfrak{i}_{m}\circ\psi_{q}\circ w=\sigma_{2}^{\flat}\circ
w=m=\mathfrak{i}_{m}\circ m_{\star}$
and so $\psi_{q}\circ w=m_{\star}$ since $\mathfrak{i}_{m}$ is a monomorphism.
Applying the functor $L$ to the outer commutative diagram on the left-hand
side below, we obtain the commutative diagram on the right-hand side.
[diagrams: on the left, $w\colon P\to Q_{\overline{a}}$, $\psi_{q}\colon Q_{\overline{a}}\to\mathscr{S}_{m}$, $m_{\star}\colon P\to\mathscr{S}_{m}$, $f\colon\mathscr{S}_{m}\to\mathscr{S}_{n}$, $n_{\star}\colon P\to\mathscr{S}_{n}$; on the right, the triangle $\vartheta\colon LP\to\overline{a}$, $Ln_{\star}\colon LP\to L\mathscr{S}_{n}$, $L(f\circ\psi_{q})\colon\overline{a}\to L\mathscr{S}_{n}$]
Hence, precomposing with $\mu_{2}$ we get
$Ln_{\star}\circ\mu_{2}=L(f\circ\psi_{q})\circ\vartheta\circ\mu_{2}=L(f\circ\psi_{q})\circ\mu_{1}.\qed$
For convenience of notation, let us write
$\widetilde{\gamma}_{q}\coloneqq L(f\circ\psi_{q})\colon\overline{a}\to
L\mathscr{S}_{n}\ \text{ and }\ \gamma_{q}\coloneqq\varepsilon_{e}\circ
L\mathfrak{i}_{n}\circ\widetilde{\gamma}_{q}\colon\overline{a}\to e.$
With this notation we have
$\displaystyle\gamma_{q}\circ\mu_{1}$ $\displaystyle=\varepsilon_{e}\circ
L\mathfrak{i}_{n}\circ\widetilde{\gamma}_{q}\circ\mu_{1}$
$\displaystyle=\varepsilon_{e}\circ L\mathfrak{i}_{n}\circ
Ln_{\star}\circ\mu_{2}$ (Lemma 6.8)
$\displaystyle=\varepsilon_{e}\circ L\mathfrak{i}_{n}\circ\tau_{2}\circ\nu$
and so the leftmost diagram below commutes.
[diagrams: on the left, the commutative square $\nu\colon\overline{LP}\to\overline{L\mathscr{S}_{n}}$, $\mu_{1}\colon\overline{LP}\to\overline{a}$, $\varepsilon_{e}\circ L\mathfrak{i}_{n}\circ\tau_{2}\colon\overline{L\mathscr{S}_{n}}\to e$, $\gamma_{q}\colon\overline{a}\to e$; on the right, the pushout square $\nu$, $\mu_{1}$, $\tau_{1}\colon\overline{L\mathscr{S}_{n}}\to LQ^{\prime}$, $\sigma_{1}\colon\overline{a}\to LQ^{\prime}$]
Now, note that by (E2) the top face of diagram (3), displayed in the rightmost
diagram above, is a pushout in $\operatorname{\mathscr{E}}$. By the universal
property of pushouts, there is a unique $\delta_{q}\colon LQ^{\prime}\to e$
satisfying
$\delta_{q}\circ\sigma_{1}=\gamma_{q}\ \text{ and }\
\delta_{q}\circ\tau_{1}=\varepsilon_{e}\circ L\mathfrak{i}_{n}\circ\tau_{2}.$
Define $\zeta_{q}\coloneqq(\delta_{q})^{\flat}\colon Q^{\prime}\to Re$ for all
path embeddings $q\colon Q^{\prime}\rightarrowtail R(a+_{LP}L\mathscr{S}_{n})$
above $\xi$. Further, if $q$ is below $\xi$, we let $\zeta_{q}$ be the obvious
restriction of $\zeta_{\xi}$.
###### Lemma 6.9.
The following family of arrows forms a compatible cocone over the diagram of
path embeddings into $\mathscr{S}_{\xi}$:
$\{\zeta_{q}\mid q\colon Q^{\prime}\rightarrowtail
R(a+_{LP}L\mathscr{S}_{n})\ \text{is a path embedding comparable with
$\xi$}\}.$
###### Proof.
Fix arbitrary path embeddings
$q\colon Q^{\prime}\rightarrowtail R(a+_{LP}L\mathscr{S}_{n})\ \text{ and }\
q^{\prime}\colon Q^{\prime\prime}\rightarrowtail R(a+_{LP}L\mathscr{S}_{n})$
comparable with $\xi$. The compatibility condition for the cocone states that
$\zeta_{q}$ extends $\zeta_{q^{\prime}}$ whenever $q\geq q^{\prime}$. It
suffices to settle the case where $\xi\leq q\leq q^{\prime}$. Also, it is
enough to show that $\delta_{q}$ extends $\delta_{q^{\prime}}$, i.e.
$\delta_{q}\circ L{!}=\delta_{q^{\prime}}$ where ${!}\colon
Q^{\prime\prime}\rightarrowtail Q^{\prime}$ is the unique embedding. Just
observe that $\delta_{q}\circ L{!}=\delta_{q^{\prime}}$ entails
$\zeta_{q}\circ{!}=\delta_{q}^{\flat}\circ{!}=((\delta_{q}^{\flat}\circ{!})^{\#})^{\flat}=(\delta_{q}\circ
L{!})^{\flat}=\delta_{q^{\prime}}^{\flat}=\zeta_{q^{\prime}}.$
Consider the following diagram, all of whose vertical faces are pullbacks, and
note that, by (A4) (more precisely, by item (ii) in Definition 5.8), the
pullback of $L!$ along $\sigma_{1}$ is of the form $L\ell$ for some arrow
$\ell\colon Q_{\overline{\overline{a}}}\to Q_{\overline{a}}$.
[three-storey diagram, all vertical faces pullbacks: top row $\overline{\overline{LP}}$, $\overline{\overline{L\mathscr{S}_{n}}}$, $\overline{\overline{a}}$, $LQ^{\prime\prime}$ with arrows $\overline{\tau}_{1}$, $\overline{\tau}_{2}$, $\overline{\sigma}_{1}$, $L{!}$, $L\ell$, $\delta_{q^{\prime}}$; middle row $\overline{LP}$, $\overline{L\mathscr{S}_{n}}$, $\overline{a}$, $LQ^{\prime}$ with arrows $\nu$, $\mu_{1}$, $\mu_{2}$, $\tau_{1}$, $\tau_{2}$, $\sigma_{1}$, $\sigma_{2}$, $\delta_{q}$; bottom row $LP$, $L\mathscr{S}_{n}$, $e$, $a$, $a+_{LP}L\mathscr{S}_{n}$ with arrows $Ln_{\star}$, $m^{\#}$, $\lambda$, $\iota$, $q^{\#}$]
In view of the definition of $\delta_{q^{\prime}}$ in terms of the universal
property of pushouts, it suffices to show that $\delta_{q}\circ L{!}$
satisfies
$(\delta_{q}\circ L{!})\circ\overline{\sigma}_{1}=\gamma_{q^{\prime}}\ \text{
and }\ (\delta_{q}\circ L{!})\circ\overline{\tau}_{1}=\varepsilon_{e}\circ
L\mathfrak{i}_{n}\circ\tau_{2}\circ\overline{\tau}_{2}.$
The latter equation follows at once from the identity
$\delta_{q}\circ\tau_{1}=\varepsilon_{e}\circ L\mathfrak{i}_{n}\circ\tau_{2}$.
With regards to the former, we have
$(\delta_{q}\circ
L{!})\circ\overline{\sigma}_{1}=\delta_{q}\circ\sigma_{1}\circ
L\ell=\gamma_{q}\circ L\ell.$
Thus it suffices to show that $\gamma_{q}\circ L\ell=\gamma_{q^{\prime}}$, and
this clearly follows if we prove that $\widetilde{\gamma}_{q}\circ
L\ell=\widetilde{\gamma}_{q^{\prime}}$. Recall that $\psi_{q^{\prime}}$ is the
unique morphism such that the composite
$\mathfrak{i}_{m}\circ\psi_{q^{\prime}}\colon Q_{\overline{\overline{a}}}\xrightarrow{\ \psi_{q^{\prime}}\ }\mathscr{S}_{m}\xrightarrow{\ \mathfrak{i}_{m}\ }Ra$
coincides with $(\sigma_{2}\circ L\ell)^{\flat}$. But
$(\sigma_{2}\circ
L\ell)^{\flat}=((\sigma_{2}^{\flat}\circ\ell)^{\#})^{\flat}=\sigma_{2}^{\flat}\circ\ell,$
so $\mathfrak{i}_{m}\circ\psi_{q}\circ\ell=\sigma_{2}^{\flat}\circ\ell$
entails $\psi_{q^{\prime}}=\psi_{q}\circ\ell$. Therefore,
$\widetilde{\gamma}_{q^{\prime}}=L(f\circ\psi_{q^{\prime}})=L(f\circ\psi_{q}\circ\ell)=\widetilde{\gamma}_{q}\circ
L\ell.\qed$
The previous lemma entails the existence of a unique morphism
$h\colon\mathscr{S}_{\xi}\to Re$ satisfying $h\circ q=\zeta_{q}$ for all path
embeddings $q$ into $R(a+_{LP}L\mathscr{S}_{n})$ that are comparable with
$\xi$. In order to conclude that $\xi_{\star}\to n^{\prime}_{\star}$ as
desired, we prove the following useful property of the cocone consisting of
the morphisms $\zeta_{q}$.
###### Lemma 6.10.
Let $G\coloneqq LR$ and consider the composite morphism
$RL\mathscr{S}_{n}\xrightarrow{\ RL\mathfrak{i}_{n}\ }RGe\xrightarrow{\ R\varepsilon_{e}\ }Re.$
If $q=R\lambda\circ\alpha$ for some arrow $\alpha\colon
Q^{\prime}\rightarrowtail RL\mathscr{S}_{n}$, then
$\zeta_{q}=R\varepsilon_{e}\circ RL\mathfrak{i}_{n}\circ\alpha$.
###### Proof.
Suppose that $q=R\lambda\circ\alpha$ for some $\alpha\colon
Q^{\prime}\rightarrowtail RL\mathscr{S}_{n}$. We have
$(R\varepsilon_{e}\circ
RL\mathfrak{i}_{n}\circ\alpha)^{\#}=((\varepsilon_{e}\circ
L\mathfrak{i}_{n}\circ\alpha^{\#})^{\flat})^{\#}=\varepsilon_{e}\circ
L\mathfrak{i}_{n}\circ\alpha^{\#}.$
By the universal property of $\delta_{q}=\zeta_{q}^{\#}$,
$\zeta_{q}=R\varepsilon_{e}\circ RL\mathfrak{i}_{n}\circ\alpha$ if, and only
if,
$(\varepsilon_{e}\circ
L\mathfrak{i}_{n}\circ\alpha^{\#})\circ\tau_{1}=\varepsilon_{e}\circ
L\mathfrak{i}_{n}\circ\tau_{2}$ (4)
and
$(\varepsilon_{e}\circ
L\mathfrak{i}_{n}\circ\alpha^{\#})\circ\sigma_{1}=\varepsilon_{e}\circ
L\mathfrak{i}_{n}\circ\widetilde{\gamma}_{q}.$ (5)
Observe that $\alpha^{\#}\circ\tau_{1}=\tau_{2}$ because
$\lambda\circ\alpha^{\#}\circ\tau_{1}=q^{\#}\circ\tau_{1}=\lambda\circ\tau_{2}$
and $\lambda$ is a monomorphism. Thus, equation (4) holds. Further, note that
$q^{\#}=(R\lambda\circ\alpha)^{\#}=((\lambda\circ\alpha^{\#})^{\flat})^{\#}=\lambda\circ\alpha^{\#}$
and so $\tau_{1}$ in diagram (3) is an isomorphism. As pushout squares of
embeddings in $\operatorname{\mathscr{E}}$ are also pullbacks by (E2),
$\mu_{1}$ in diagram (3) is also an isomorphism. Therefore,
$\displaystyle\lambda\circ\alpha^{\#}\circ\sigma_{1}$
$\displaystyle=q^{\#}\circ\sigma_{1}=q^{\#}\circ\sigma_{1}\circ\mu_{1}\circ\mu_{1}^{-1}=\lambda\circ
Ln_{\star}\circ\mu_{2}\circ\mu_{1}^{-1}$
$\displaystyle=\lambda\circ\widetilde{\gamma}_{q}\circ\mu_{1}\circ\mu_{1}^{-1}=\lambda\circ\widetilde{\gamma}_{q}$
and so $\alpha^{\#}\circ\sigma_{1}=\widetilde{\gamma}_{q}$. Equation (5) then
follows at once. ∎
Since $\xi=R\lambda\circ(Ln^{\prime}_{\star})^{\flat}$, recalling that we
identify $n^{\prime}_{\star}$ with a path embedding into $\mathscr{S}_{n}$, an
application of the previous lemma with
$\alpha\coloneqq(Ln^{\prime}_{\star})^{\flat}$ yields
$\zeta_{\xi}=R\varepsilon_{e}\circ
RL\mathfrak{i}_{n}\circ(Ln^{\prime}_{\star})^{\flat}=(\varepsilon_{e}\circ
L\mathfrak{i}_{n}\circ Ln^{\prime}_{\star})^{\flat}=(\varepsilon_{e}\circ
Ln^{\prime})^{\flat}=((n^{\prime})^{\#})^{\flat}=n^{\prime}.$
In other words, $h\circ\xi_{\star}=n^{\prime}$ and so $\xi_{\star}\to
n^{\prime}$. It follows from Remark 4.16 that $\xi_{\star}\to
n^{\prime}_{\star}$, thus concluding the proof of Proposition 6.4.
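For convenience, we record the standard transposition identities for the adjunction $L\dashv R$ (with unit $\eta$ and counit $\varepsilon$) that the computations above use repeatedly:
$\beta^{\flat}=R\beta\circ\eta_{X}\ \text{ for }\beta\colon LX\to Y,\qquad\alpha^{\#}=\varepsilon_{Y}\circ L\alpha\ \text{ for }\alpha\colon X\to RY,$
so that $(\alpha^{\#})^{\flat}=\alpha$ and $(\beta^{\flat})^{\#}=\beta$, while naturality of the bijection gives $(\beta\circ Lf)^{\flat}=\beta^{\flat}\circ f$ and $(Rg\circ\alpha)^{\#}=g\circ\alpha^{\#}$.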
## References
* [1] S. Abramsky, _Whither semantics?_ , Theoretical Computer Science 807 (2020), 3–14.
* [2] S. Abramsky, _Structure and Power: an emerging landscape_ , Fundamenta Informaticae 186 (2022), no. 1–4, pp. 1–26, Special Issue in Honor of the Boris Trakhtenbrot Centenary.
* [3] S. Abramsky, A. Dawar, and P. Wang, _The pebbling comonad in finite model theory_ , 32nd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), 2017, pp. 1–12.
* [4] S. Abramsky and D. Marsden, _Comonadic semantics for guarded fragments_ , Proceedings of the 36th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS, 2021.
* [5] S. Abramsky and D. Marsden, _Comonadic semantics for hybrid logic_ , 47th International Symposium on Mathematical Foundations of Computer Science (MFCS), Leibniz International Proceedings in Informatics, vol. 241, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022, pp. 7:1–7:14.
* [6] S. Abramsky and L. Reggio, _Arboreal categories and resources_ , 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021), Leibniz International Proceedings in Informatics, vol. 198, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021, pp. 115:1–115:20.
* [7] S. Abramsky and L. Reggio, _Arboreal categories: An axiomatic theory of resources_ , Preprint available at https://arxiv.org/abs/2102.08109, 2022.
* [8] S. Abramsky and N. Shah, _Relating structure and power: Comonadic semantics for computational resources_ , 27th EACSL Annual Conference on Computer Science Logic (CSL), 2018, pp. 2:1–2:17.
* [9] S. Abramsky and N. Shah, _Relating structure and power: Comonadic semantics for computational resources_ , Journal of Logic and Computation 31 (2021), no. 6, 1390–1428.
* [10] J. Adámek, H. Herrlich, and G. E. Strecker, _Abstract and concrete categories. The joy of cats_ , Online edition, 2004.
* [11] J. Adámek and J. Rosický, _Locally presentable and accessible categories_ , London Mathematical Society Lecture Note Series, vol. 189, Cambridge University Press, Cambridge, 1994.
* [12] H. Andréka, J. van Benthem, and I. Németi, _Modal languages and bounded fragments of predicate logic_ , Journal of Philosophical Logic 27 (1998), no. 3, 217–274.
* [13] J. Barwise, _On Moschovakis closure ordinals_ , Journal of Symbolic Logic 42 (1977), no. 2, 292–296.
* [14] J. van Benthem, _Dynamic bits and pieces_ , ILLC research report, University of Amsterdam, 1997.
* [15] P. Blackburn, M. De Rijke, and Y. Venema, _Modal logic_ , Cambridge Tracts in Theoretical Computer Science, vol. 53, Cambridge University Press, 2002.
* [16] A. Ó Conghaile and A. Dawar, _Game comonads & generalised quantifiers_, 29th EACSL Annual Conference on Computer Science Logic, CSL, LIPIcs, vol. 183, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021, pp. 16:1–16:17.
* [17] B. A. Davey and H. A. Priestley, _Introduction to lattices and order_ , second ed., Cambridge University Press, New York, 2002.
* [18] A. Dawar, T. Jakl, and L. Reggio, _Lovász-type theorems and game comonads_ , Proceedings of the 36th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS, 2021.
* [19] M. de Rijke, _A note on graded modal logic_ , Studia Logica 64 (2000), no. 2, 271–283.
* [20] A. Ehrenfeucht, _An application of games to the completeness problem for formalized theories_ , Fund. Math. 49 (1960/61), 129–141.
* [21] R. Fraïssé, _Sur quelques classifications des systèmes de relations_ , Publ. Sci. Univ. Alger. Sér. A 1 (1954), 35–182.
* [22] P. J. Freyd and G. M. Kelly, _Categories of continuous functors, I_ , Journal of Pure and Applied Algebra 2 (1972), no. 3, 169–191.
* [23] M. Hennessy and R. Milner, _On observing nondeterminism and concurrency_ , Automata, Languages and Programming (Berlin, Heidelberg), Springer Berlin Heidelberg, 1980, pp. 299–309.
* [24] W. Hodges, _Model theory_ , Encyclopedia of Mathematics and its Applications, vol. 42, Cambridge University Press, 1993.
* [25] N. Immerman, _Upper and lower bounds for first order expressibility_ , Journal of Computer and System Sciences 25 (1982), no. 1, 76–98.
* [26] T. Jakl, D. Marsden, and N. Shah, _A game comonadic account of Courcelle and Feferman-Vaught-Mostowski theorems_ , Preprint available at https://arxiv.org/abs/2205.05387, 2022.
* [27] A. Joyal and I. Moerdijk, _A completeness theorem for open maps_ , Ann. Pure Appl. Logic 70 (1994), no. 1, 51–86.
* [28] A. Joyal, M. Nielsen, and G. Winskel, _Bisimulation and open maps_ , Proceedings of 8th Annual IEEE Symposium on Logic in Computer Science, 1993, pp. 418–427.
* [29] S. Lack and P. Sobociński, _Adhesive categories_ , Foundations of Software Science and Computation Structures (Berlin, Heidelberg), Springer Berlin Heidelberg, 2004, pp. 273–288.
* [30] L. Libkin, _Elements of finite model theory_ , Texts in Theoretical Computer Science. An EATCS Series, Springer, 2004.
* [31] J. Łoś, _On the extending of models. I_ , Fund. Math. 42 (1955), 38–54.
* [32] R. C. Lyndon, _Properties preserved under homomorphism_ , Pacific J. Math. 9 (1959), 143–154.
* [33] S. Mac Lane, _Categories for the working mathematician_ , 2nd ed., GTM, vol. 5, Springer-Verlag, New York, 1998.
* [34] J. Nešetřil and P. Ossona de Mendez, _Tree-depth, subgraph coloring and homomorphism bounds_ , European Journal of Combinatorics 27 (2006), no. 6, 1022–1041.
* [35] T. Paine, _A pebbling comonad for finite rank and variable logic, and an application to the equirank-variable homomorphism preservation theorem_ , Electronic Notes in Theoretical Computer Science 352 (2020), 191–209.
* [36] G. N. Raney, _Completely distributive complete lattices_ , Proc. Amer. Math. Soc. 3 (1952), 677–680.
* [37] E. Riehl, _Factorization systems_ , Notes available at https://emilyriehl.github.io/files/factorization.pdf.
* [38] B. Rossman, _Homomorphism preservation theorems_ , J. ACM 55 (2008), no. 3, 15:1–15:53.
* [39] A. Tarski, _Contributions to the theory of models. III_ , Nederl. Akad. Wetensch. Proc. Ser. A. 58 (1955), 56–64 = Indagationes Math. 17, 56–64 (1955).
* [40] K. Tent and M. Ziegler, _A course in model theory_ , Lecture Notes in Logic, vol. 40, Cambridge University Press, 2012.
# Surface Acoustic Wave Cavity Optomechanics with WSe2 Single Photon Emitters
Sahil D. Patel These authors contributed equally to this work. Department of
Electrical and Computer Engineering, University of California Santa Barbara,
Santa Barbara, CA 93106, USA Kamyar Parto These authors contributed equally
to this work. Department of Electrical and Computer Engineering, University
of California Santa Barbara, Santa Barbara, CA 93106, USA Michael Choquer
These authors contributed equally to this work. Department of Electrical and
Computer Engineering, University of California Santa Barbara, Santa Barbara,
CA 93106, USA Sammy Umezawa, Landon Hellman, Daniella Polishchuk Department
of Electrical and Computer Engineering, University of California Santa
Barbara, Santa Barbara, CA 93106, USA Galan Moody <EMAIL_ADDRESS> Department
of Electrical and Computer Engineering, University of California Santa
Barbara, Santa Barbara, CA 93106, USA
###### Abstract
Surface acoustic waves (SAWs) are a versatile tool for coherently interfacing
with a variety of solid-state quantum systems spanning microwave to optical
frequencies, including superconducting qubits, spins, and quantum emitters.
Here, we demonstrate SAW cavity optomechanics with quantum emitters in 2D
materials, specifically monolayer WSe2, on a planar lithium niobate SAW
resonator driven by superconducting electronics. Using steady-state
photoluminescence spectroscopy and time-resolved single-photon counting, we
map the temporal dynamics of modulated 2D emitters under coupling to different
SAW cavity modes, showing energy-level splitting consistent with deformation
potential coupling of 30 meV/$\%$. We leverage the large anisotropic strain
from the SAW to modulate the excitonic fine-structure splitting on a
nanosecond timescale, which may find applications for on-demand entangled
photon-pair generation from 2D materials. Cavity optomechanics with SAWs and
2D quantum emitters provides opportunities for compact sensors and quantum
electro-optomechanics in a multi-functional integrated platform that combines
phononic, optical, and superconducting electronic quantum systems.
## I Introduction
The coupling between solid-state quantum emitters and confined acoustic modes
in optomechanical resonators is an elegant approach for coherently
controlling, transferring, and entangling a variety of quantum degrees of
freedom, including photons, phonons, and spins [1, 2, 3]. Among the various
optomechanical systems under development [4], surface acoustic wave (SAW)
resonators integrated with single-photon emitters (SPEs) have several
potential advantages over other strategies. SPEs are remarkably sensitive to
local strain and exhibit frequency shifts nearly two orders-of-magnitude
larger ($\sim$10 GHz/pm) than microscale optical resonators ($\sim$100
MHz/pm). The use of SPEs provides a strong optical nonlinearity that ensures
only single photons are emitted, typically with sub-nanowatt optical power
requirements. Heterogeneous integration schemes have been used to combine
several types of SPEs with SAW resonators, including III-V quantum dots (QDs)
[5, 6, 7, 8] and neutral divacancy centers in SiC [9]. When operated in the
sideband-resolved regime using gigahertz-frequency resonators, parametric
modulation of SPEs enables microwave-frequency information to be encoded as
optical photons, which may pave the way for efficient and low-noise
transduction between microwave and optical frequency qubits.
Outstanding challenges with existing strategies for SPE optomechanics include
the complexity in materials growth and device fabrication, weak optomechanical
coupling relative to the emitter and SAW resonator decoherence rates, and the
use of weakly piezoelectric materials such as GaAs and ZnO. The recent
discovery of SPEs in 2D materials [10, 11], such as WSe2 [12, 13, 14, 15, 16]
and hexagonal boron nitride [17, 18], provide an opportunity to address each
of these challenges. 2D SPEs, which originate from crystalline defects in the
host material, exhibit high optical extraction efficiency and brightness with
detection rates up to 25 MHz [19, 20, 21], indistinguishable [22] and near
transform-limited linewidths [23], high single-photon purity, unique spin-
valley phenomena [24, 25, 26], high working temperatures [27, 28, 29], and
site selective engineering [30, 31, 32, 33]. The layered structure of 2D
materials arising from van der Waals forces ensures that the defects are two-
dimensional and are able to function at surfaces, devoid of any surface
states, allowing for strong proximity interaction with their surrounding
environment.
This strong proximity interaction, in addition to the site specific
fabrication and relaxed lattice-matching requirements, make 2D SPEs an ideal
two-level system to be integrated with SAW resonators. Proximity effects allow
for efficient coupling with the SAW deformation potential, while the ability
to deterministically transfer 2D monolayers onto nearly any surface allows for
nanoscale precision in positioning of a single SPE within the resonator.
Indeed, modulation of few-layer hBN SPEs with propagating SAWs has resulted in
strong deformation potential coupling [34]. Amongst 2D material SPEs, WSe2
emitters are more suitable for cavity optomechanics due to larger proximity
effects in monolayers, which partially contributes to their large strain
susceptibility up to 120 meV/$\%$ [35]. The integration of 2D SPEs with high-
quality SAW resonators, which has not been demonstrated to date, is the next
critical step for exploring the potential of 2D materials for quantum
optomechanics.
Figure 1: LiNbO3 SAW integration with WSe2 single photon emitters. (a) Optical
image of a 300 MHz SAW cavity with length of 2600 $\mu$m after transfer of a
monolayer WSe2. (b) $|S_{11}|$ cavity reflection spectrum showing modes
centered at 300 MHz. The gradient areas and vertical dashed lines denote the
Bragg mirror band edges. (c) $S_{11}$ phase measurement providing information
on the coupling regime of the resonances. Average intrinsic/extrinsic loaded
quality factors of 1,900/3,700 were extracted from a simultaneous fit to the
magnitude and phase of $S_{11}$. (d) Representative PL spectra of SPEs in the
WSe2 monolayer measured at 4.4 K. (e) Second-order auto-correlation function
demonstrating photon anti-bunching from an emitter in WSe2 with
$g^{\left(2\right)}\left(0\right)$ as low as 0.22.
In this work, we parametrically modulate the resonance frequency of SPEs in
monolayer WSe2 integrated with a LiNbO3 SAW resonator driven by
superconducting electronics. We demonstrate cavity phonon-SPE coupling with
deformation potential coupling of at least 30 meV/$\%$. The dynamics of the
SPE-SAW cavity system is measured through time-resolved, stroboscopic, and
steady-state photoluminescence spectroscopy. By sweeping the SAW frequency and
the cavity phonon occupation, we demonstrate exquisite control over the local
strain. We show that when driven on resonance, the SAW modulates and mixes the
emission from exciton fine-structure transitions that have been attributed to
anisotropic exchange [13] and intervalley symmetry breaking in the presence of
defects [36], providing a dynamical on-chip control knob for mixing of the
WSe2 SPE doublets. These results establish a new experimental platform that
combines cavity optomechanics with 2D material quantum optics, paving the way
for efficient and high-speed manipulation of 2D quantum emitters for single-
photon switching, tuning and stabilization, and entangled photon-pair
generation.
## II Results
Figure 2: Microwave frequency-dependent energy splitting of a WSe2 single-
photon emitter. (a) Schematic illustration of how the energy bandgap is
modulated by the SAW at a given instance in time. The out-of-plane strain
vector is indicated by the arrows on the surface of the substrate. (b) Slice
plot from (c) showing the single-photon emitter being modulated by the SAW
with zero splitting of the emission energy at 301.5 MHz and 0.46 meV splitting
at 303.5 MHz. (c) The quantum emitter emission modulation as a function of
applied frequency to the SAW cavity. The spectral jumps are due to spectral
jitter in WSe2 emitters that appear at slow timescales of the measurement
under non-resonant optical excitation. (d) SAW cavity reflection spectrum
magnitude corresponding to the modulated emission shown in (c).
SAW cavities were fabricated on bulk LiNbO3 using a combination of NbN
sputtering and electron beam lithography to define superconducting Bragg
reflectors and interdigital transducers (IDTs). The periodicity of the mirrors
and the acoustic impedance within the mirrors defines the mirror reflection
spectral window, which is centered at 300 MHz for a variety of lengths
spanning 900 $\mu$m to 2600 $\mu$m. Electromechanical measurements of the
$S_{11}$ parameter for the cavity yield internal quality factors on the order
of 12,000 and external quality factors of 7,000, which are limited by the loss
due to the mirrors. Using an all dry-transfer technique, monolayer WSe2 was
exfoliated and transferred to the SAW cavities [37]. Figure 1(a) shows the
optical image of an exemplary SAW resonator and integrated WSe2 flake.
$S_{11}$ measurements taken at 4.4 K confirm the presence of the cavity modes,
which appear as a series of dips in the cavity reflection spectrum shown in
Fig. 1(b). A simultaneous fit to the magnitude and phase (Fig. 1(c))
demonstrates that the cavity is in the undercoupled regime with average
intrinsic quality factor of 1900 and extrinsic quality factor of 3700 after
the flake transfer, which are similar to loaded quality factors of SAWs
integrated with III-V QDs [7].
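The simultaneous magnitude-and-phase fit used to extract the quality factors can be sketched numerically. The snippet below is an illustrative reconstruction, not the authors' analysis code: it assumes the standard one-port reflection model $S_{11}(f)=\left(ix+\tfrac{1}{2}(1/Q_i-1/Q_e)\right)/\left(ix+\tfrac{1}{2}(1/Q_i+1/Q_e)\right)$ with $x=(f-f_0)/f_0$, generates noiseless synthetic data at the quoted post-transfer values, and recovers $Q_i$ and $Q_e$ by fitting the real and imaginary parts together.

```python
import numpy as np
from scipy.optimize import curve_fit

def s11(f_mhz, f0, qi, qe):
    # Standard one-port reflection of a resonator with internal quality
    # factor qi and external (coupling) quality factor qe; the cavity is
    # undercoupled when qe > qi, as reported after the flake transfer.
    x = (f_mhz - f0) / f0
    return (1j * x + 0.5 * (1 / qi - 1 / qe)) / (1j * x + 0.5 * (1 / qi + 1 / qe))

def s11_ri(f_mhz, f0, qi, qe):
    # Stack Re and Im so that magnitude and phase are fit simultaneously.
    s = s11(f_mhz, f0, qi, qe)
    return np.concatenate([s.real, s.imag])

f = np.linspace(299.5, 300.5, 801)          # frequency axis in MHz
data = s11_ri(f, 300.0, 1900.0, 3700.0)     # synthetic "measurement"

popt, _ = curve_fit(s11_ri, f, data, p0=(300.05, 1000.0, 5000.0))
f0_fit, qi_fit, qe_fit = popt
```

Fitting the complex response rather than $|S_{11}|$ alone is what disambiguates the under- from the over-coupled regime, since both give the same dip depth but opposite phase behaviour.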
WSe2 SPEs are first characterized using steady-state photoluminescence (PL)
spectroscopy to identify individual emitters, with two representative SPEs shown
in Fig. 1(d). Figure 1(e) shows the second-order autocorrelation measurement of
an SPE with $g^{\left(2\right)}(0)=0.22$ demonstrating the anti-bunching
behaviour of the emitted light. After identifying the SPE and SAW cavity
resonances, the SAW frequency was fixed at the location of each cavity
resonance, and the PL measurement was repeated. When driving the SAW on
resonance, a standing surface acoustic wave is formed inside of the cavity.
Depending on the position of the SPE relative to the node and anti-node of the
standing wave, the SPE experiences dynamic compressive and tensile strain from
the SAW (Fig. 2(a)). Through deformation potential coupling, the strain
modulates the local bandgap of the material, resulting in a temporally varying
energy shift of the SPE at the frequency of the SAW mode. Figure 2(b) shows
how the emission from an SPE is impacted by the SAW cavity mode. When the
cavity mode is populated by driving the IDT on one of the SAW resonances, the
SPE zero-phonon line (ZPL) is split into a double-peak structure with a peak-
to-peak separation of $2\Delta E=0.92$ meV. This split-peak structure is a
clear signature of SPE-SAW coupling, consistent with previous observations
from III-V QDs [5] and SiC vacancy centers [9] coupled to surface acoustic
waves cavities. The PL signals were fit to a Lorentzian function modulated by
a sinusoidal interaction in the time domain to extract the modulation energy
$\Delta E$ [38].
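The split-peak lineshape can be understood as the time average of a Lorentzian whose centre oscillates at the SAW frequency. The following sketch (illustrative parameters, not the authors' fitting routine) averages such a modulated Lorentzian over one acoustic period and recovers the characteristic double peak:

```python
import numpy as np

def time_averaged_line(E, dE, gamma, n_phase=2000):
    # Lorentzian (HWHM gamma) whose centre oscillates as dE*sin(phi);
    # averaging over one SAW period gives the steady-state PL lineshape.
    phi = np.linspace(0.0, 2.0 * np.pi, n_phase, endpoint=False)
    centres = dE * np.sin(phi)
    lor = gamma**2 / ((E[:, None] - centres[None, :])**2 + gamma**2)
    return lor.mean(axis=1)

E = np.linspace(-1.5, 1.5, 1201)                  # detuning axis in meV
spec = time_averaged_line(E, dE=0.46, gamma=0.10)

# locate the two maxima of the averaged spectrum
interior = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
peaks = E[1:-1][interior]
```

Because the oscillating centre dwells longest at its turning points, the averaged spectrum piles up near $\pm\Delta E$, giving a peak-to-peak separation close to $2\Delta E=0.92$ meV for the value quoted above.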
Next, the evolution of the PL spectrum as a function of the SAW cavity
frequency at a fixed power of 4 dBm was measured (Fig. 2(c)). Maximum
modulation of the PL lineshape is evident at the SAW cavity resonant
frequencies (Fig. 2(d)) between 299-304 MHz. For the three resonances at
299.425 MHz, 300.975 MHz, and 303.561 MHz, clear splitting is observed,
distinctly different from when driving the SAW cavity off resonance.
Interestingly, this emitter shows a non-zero splitting between 299-301.5 MHz,
which may be attributed to mode mixing between the neighboring cavity
resonances due to the SPE being spatially located somewhere between a node and
anti-node for this frequency range as similarly reported for QDs [39]. Note
that among the emitters that were measured, only ones with linewidth narrower
than 1 nm (limited by spectrometer resolution) exhibited a measurable
splitting. Among these emitters, nearly 75$\%$ exhibited coupling to at least
one of the cavity modes, a result which is not surprising given that the
positions of the SPEs with respect to node/anti-node of the cavity is randomly
distributed. Notably, the SPE in Fig. 2(b) consists of a doublet with a fine-
structure splitting on the order of 700 $\mu$eV, which has been previously
attributed to anisotropic strain [13] and intervalley excitonic mixing in WSe2
[36, 40, 29]. Interestingly, both peaks of the doublet exhibit the same
splitting as a function of frequency, and given the extent of the modulation,
the two peaks can overlap and mix at the position of the cavity resonances.
This mixing has implications for photon-state engineering and entanglement as
discussed below.
While it is clear that the SPEs are modulated by the SAWs, to rule out
alternative explanations, such as induced non-radiative decay or local
heating, we perform time-resolved and stroboscopic PL measurements to map out
the SAW dynamics. First, Fig. 3(a) shows the time-resolved PL dynamics of the
emitter shown in Fig. 2 with power ($P_{RF}=6$ dBm) and without power applied
to the SAW IDT. In both cases, the SPE recombination lifetime is longer than
$\sim 2$ ns, ruling out any new non-radiative recombination or thermal
processes that could lead to faster recombination and broadening of the
linewidth in the steady-state PL spectrum. We next performed stroboscopic
measurements in which the time arrival of the emitted photons with respect to
the phase of the applied SAW waveform is measured through single-photon
counting and binning. The emission was spectrally filtered using a
monochromator to isolate photons near the wings of the modulated steady-state
PL spectrum (Fig. 3(b)). Results from this measurement are shown by the
histogram in Fig. 3 (c), which demonstrates clear modulation of the emission
waveform at the fundamental SAW frequency of $f_{RF}$. We observe an
additional frequency component at $2f_{RF}$ due to the limited resolution of
our monochromator with respect to the total linewidth of our modulated
emitter. A fit of the data using a Monte Carlo-like simulation of a modulated
emitter overlapped with a non-ideal bandpass filter is shown as the solid line
in Fig. 3(c) (see methods).
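The appearance of both $f_{RF}$ and $2f_{RF}$ components in the strobed histogram can be illustrated with a simple rate model (a sketch under stated assumptions, not the authors' Monte Carlo simulation): photons are detected with probability given by a Lorentzian bandpass transmission evaluated at the instantaneous, sinusoidally modulated emitter energy. A filter centred on one wing yields a rate dominated by the fundamental, while a filter at the unmodulated line centre is crossed twice per period and contains only even harmonics:

```python
import numpy as np

def strobe_rate(phi, dE, filt_centre, filt_width):
    # Expected detection rate versus SAW phase: the emitter line sits at
    # dE*sin(phi) and is viewed through a Lorentzian bandpass filter.
    E = dE * np.sin(phi)
    return filt_width**2 / ((E - filt_centre)**2 + filt_width**2)

phi = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
wing = strobe_rate(phi, dE=0.46, filt_centre=0.40, filt_width=0.10)
mid = strobe_rate(phi, dE=0.46, filt_centre=0.00, filt_width=0.10)

def harmonic(sig, k):
    # magnitude of the k-th Fourier component of the phase histogram
    return abs(np.sum(sig * np.exp(-1j * k * phi))) / sig.size

wing_f, wing_2f = harmonic(wing, 1), harmonic(wing, 2)
mid_f, mid_2f = harmonic(mid, 1), harmonic(mid, 2)
```

All energies and filter parameters here are hypothetical illustration values; in the experiment the relative weight of the two components is set by where the monochromator passband sits within the modulated linewidth.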
Figure 3: Time-resolved and stroboscopic photoluminescence measurements. (a)
Time-resolved photoluminescence of a WSe2 emitter with the SAW off (top panel)
and SAW on (bottom panel). A recombination lifetime $>2$ ns is observed in both
cases, ruling out any non-radiative, electrodynamic charging, or thermal
dissipation mechanisms for the observed modulated signals. (b) Conceptual
illustration of the stroboscopic measurement. A monochromator is used to
filter out a portion of the modulated signal. The filtered photons are sent to
a single photon avalanche diode (SPAD) and their arrival with respect to the
SAW drive signal, which modulates the SPE frequency, is recorded. This allows
unravelling of the temporal dynamics of the SPE modulation at nanosecond time
scales (dark green peaks) which is otherwise inaccessible due to slow time-
scale of steady-state PL measurement (dark gray time-averaged spectrum, which
is a projection along the time-axis). (c) Results from the stroboscopic
measurement (points) with a fit from a Monte Carlo simulation (solid line).
Based on the center of the bandpass optical filter, both $f_{RF}$ and $2f_{RF}$
components are observed, as expected.
To understand the origin of the SAW-mediated modulation of the SPEs, we next
perform experiments measuring $\Delta E$ as a function of the applied power to
the IDT. A contour map of the PL spectrum as a function of the square root of
the applied power ($P_{RF}$) is shown in Fig. 4(a). The splitting $\Delta E$ is
extracted at each applied power and fit as previously discussed for each of
the resonant cavity modes at 298.425 MHz, 299.425 MHz, and 300.975 MHz. From
Fig. 4(a), it is clear that $\Delta E$ increases monotonically with the power.
On a double-logarithmic scale (Fig. 4(b)), we find that $\Delta
E\propto\sqrt{P_{RF}}$ with an average slope of 0.9865 meV/$\sqrt{mW}$ for all
resonant cavity modes. This result is consistent with deformation potential
coupling as the physical mechanism between the SAW and SPE that gives rise to
the energy modulation [41]. At applied power above $\sim$4 dBm, $\Delta E$
saturates, likely due to heating of the NbN IDT and mirrors, which introduces
ohmic losses by driving the cavity from the superconducting to normally
conducting state. In addition, prior to the saturation regime, the linear fit
of the power dependence allows us to rule out any additional contributions,
such as Stark-induced electric field coupling, as having a negligible effect
on the SPE splitting and modulation, since this would scale as $\Delta
E\propto P_{RF}$.
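The $\Delta E\propto\sqrt{P_{RF}}$ scaling, which distinguishes deformation potential coupling (linear in the strain amplitude) from a Stark-type contribution ($\propto P_{RF}$), follows from a straight-line fit on the double-logarithmic scale. The data below are synthetic, generated from the slope quoted in the text; the fitting step is the point of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic splittings below the saturation regime, using the quoted
# slope of ~0.99 meV/sqrt(mW) plus 2% multiplicative noise.
P_mW = np.array([0.25, 0.5, 1.0, 1.6, 2.5])
dE_meV = 0.9865 * np.sqrt(P_mW) * (1.0 + 0.02 * rng.standard_normal(P_mW.size))

# On log-log axes, dE = a * P**n becomes a straight line with slope n,
# so the power-law exponent falls straight out of a linear fit.
n, log_a = np.polyfit(np.log(P_mW), np.log(dE_meV), 1)
```

An exponent $n\approx 0.5$ confirms that the shift is linear in the acoustic strain amplitude, whereas an electric-field (Stark) mechanism would give $n\approx 1$.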
Figure 4: Deformation potential coupling and fine-structure mixing of the
exciton-biexciton radiative cascade in monolayer WSe2. (a) Contour map showing
the increase in $\Delta E$ with applied IDT power to a SAW cavity mode at
298.425 MHz. (b) $\Delta E$ as a function of applied power. The average slope
from the fits is 0.9865 meV/$\sqrt{mW}$, consistent with deformation potential
coupling. (c) Fine-structure splitting of a representative single-photon
emitter in WSe2 with horizontally (H) and vertically (V) polarized transitions
split into a doublet separated by $0.7$ meV. (d) Mixing of the single-photon
emitter fine-structure states in (c) when injecting phonons into the SAW mode
at 299.425 MHz. The characteristic double-peak spectrum of SAW-modulated SPEs
is observed for both the H and V transitions, which results in a single
three-fold Lorentzian in which the central peak arises from mixing of the H and
V transitions. (e) Conceptual illustration of the biexciton-exciton radiative
cascade from WSe2 emitters with non-zero fine-structure splitting. GS, X, and
XX denote the ground-state, exciton, and biexciton states. (f) Mixing of the
emission from fine-structure states of the exciton-biexciton-like cascade in
WSe2 upon SAW modulation. The top panel shows the emission spectrum for no
applied SAW power with the characteristic set of doublets corresponding to the
exciton-like (X) and biexciton-like (XX) transitions. The bottom panel shows
the spectrum from the same emitter with SAW modulation. The doublets merge as
demonstrated in (c), and the highlighted region demonstrates the energies at
which the fine-structure splitting is erased by mixing both the H and V
photons.
To extract the deformation potential coupling efficiency, a finite element
simulation is used to determine the tensile strain component along the
propagation axis of the cavity at various applied powers to the IDT. A maximum
strain of 0.03$\%$ at 10 dBm applied power is determined for our 300 MHz
resonator with a cavity length L = 2600 $\mu$m (see methods). Assuming that
the strain in the SAW fully transfers to the WSe2 flake, this results in
$\sim$3 meV/0.1$\%$ frequency shift for WSe2 SPEs; however, we note that this
is a lower bound for the sensitivity of the emitter, given that the van der
Waals interaction between the monolayer and LiNbO3 surface could potentially
reduce the full transfer of the strain to the SPE. Previously reported
measurements of the static energy shift versus strain applied to SPEs leads to
an average value of 2 meV/0.1$\%$ shift (up to 12 meV/0.1$\%$ maximum) for
WSe2 emitters. These values are within the same order of magnitude as our
extracted coupling efficiency, which suggests the strain at the LiNbO3 surface
is efficiently transferred to the WSe2 monolayer.
## III Discussion
The zero-phonon line of SPEs in WSe2 typically exhibits a doublet with energy
splitting of $\sim 0.7$ meV, as shown in Fig. 4(c). The doublet is thought to
arise from asymmetry in the confining potential, which hybridizes the spin and
valley states. This hybridization leads to splitting of the SPE transition
into orthogonal linearly polarized transitions shown as H and V [36, 40, 29].
Similar fine-structure splitting effects have been observed in other SPE
platforms, most notably III-V QDs [42, 43, 44]. In some cases, such as the
deterministic generation of linearly polarized photons, fine-structure
splitting can be advantageous [45, 46], while in others, such as generation of
entangled-photon pairs via the biexciton-exciton radiative cascade, the fine-
structure splitting can be detrimental by reducing the entanglement fidelity
[47, 48]. Despite observations of the radiative biexciton-exciton cascade in
monolayer WSe2 [49], the non-zero fine-structure splitting has prevented any
measurements of polarization entanglement with 2D materials.
SAW control of the single-photon emission provides a unique, on-chip mechanism
to manipulate the fine-structure splitting. As shown in Fig. 4(d), each
fine-structure peak with its associated polarization can be split into two peaks
with an energy spacing tuned by the applied SAW IDT power. In a specific range
of powers, the two adjacent peaks of the H and V transitions overlap,
resulting in a steady-state PL spectrum consisting of three Lorentzians for
which the side peaks are vertically and horizontally polarized and the central
peak becomes a mixture of the two polarizations (Fig. 4). By aligning an
optical bandpass filter to the central peak, the collected photons are
reminiscent of an atomic-like emitter without fine structure, and the SAW
modulation may serve as a mechanism to erase the fine structure altogether,
although at a cost of reduced brightness. The SAW-mediated fine-structure
mixing may find immediate applications for entangled-photon pair generation
from WSe2 SPEs through the radiative biexciton cascade, whereby the emission
of a photon from the biexciton-to-exciton transition leads to a second photon
emitted from the exciton-to-ground state transition (Fig. 4(e)). Indeed, for
many SPEs, we observe exciton-biexciton transitions with fine-structure split
doublets, as shown in the top panel of Fig. 4(f). When turning on the SAW
drive power at 299.425 MHz, the deformation potential mixes the transitions in
the central regions highlighted in the bottom panel of Fig. 4(f).
The underlying mechanism of the fine-structure mixing still remains an
interesting open question worth future investigation. Two potential mechanisms
can be envisioned to cause the mixing. First, similar to Fig. 2a, due to
oscillating compressive and tensile strain, the bandgap of WSe2 oscillates at
the SAW frequency, causing the atomic-level transitions to renormalize and
oscillate as well. In this picture, the atomistic description of the two-level
system that is responsible for the fine-structure splitting remains relatively
the same, and the two fine-structure peaks oscillate in-phase with one
another. This mechanism would only appear to have their fine-structure
splitting removed on the central peak due to the slow time-scale of the
steady-state PL measurements. For the second possible mechanism, the single-
particle wavefunctions of the ground and excited states become compressed and
elongated across the strain axis, which also causes the fine-structure
transition energies to oscillate. Here, however, the fundamental mechanism of
the oscillation is not bandgap renormalization, but instead strain-induced
modification to the atomic morphology of the defect. In this case, similar to
control of the fine-structure splitting via strain in III-V QDs [50], the
compressive and tensile strain can restore the system symmetry associated with
the exchange interaction, eliminating the fine structure altogether. In the
former case, the central Lorentzian appearing in the PL spectra would become a
mixed state of H and V polarized light on slow timescales relative to the
inverse of the fine-structure splitting. In the latter case, the central
Lorentzian is a coherent entangled state. These two scenarios, on a slow time-
scale of steady state PL, lead to the same spectral response, and given that
their polarimetric density matrices are identical, they cannot be
distinguished from each other with polarization state tomography. It is
plausible that both mechanisms, each acting with different strengths, are
simultaneously in play.
While the current iteration of the SPE-SAW resonator is at an initial stage
of development, with the resonator operating in a fully classical regime, the
next generation of devices will require reducing the mode volume and
increasing the operation frequency to the gigahertz range to reach the
so-called side-band resolved regime. This would allow coherent quantum
phenomena to be observed, such as optical sideband pumping to herald the
generation of single
cavity phonons [51], photon-phonon entangled states, and acoustically driven
Rabi oscillations [9]. Although this work focuses on classical control of SPEs
with SAWs, we are not limited by any intrinsic property of the 2D emitter-
cavity system. WSe2 emitters have been observed with linewidths as low as
$\sim 1$ GHz using resonance fluorescence measurements [52], and recently near
transform-limited hBN emitters [23] have been reported to have linewidths as
low as 200 MHz. In order to bring the system to the side-band resolved regime,
the main challenge is to resonantly excite the system to minimize the pure
dephasing and to increase the SAW resonance frequencies while maintaining a
large Q to increase zero-point strain amplitude, bringing the system to the
regime of quantum optomechanics [3].
## IV. Conclusion
In summary, acoustic control of single-photon emitters in monolayer WSe2
integrated with LiNbO3 surface acoustic wave resonators is demonstrated
through electro-mechanical and opto-mechanical spectroscopy. The observed
single-photon emitter modulation is consistent with deformation potential
coupling through strain with sensitivity up to 30 meV/%. We demonstrate a
near-term application of classical control of WSe2 emitters through SAW-
mediated single-photon frequency modulation and high-speed fine-structure
manipulation, which may open the door for demonstrations of entangled-photon
pair generation from 2D materials. The integration of 2D materials with
gigahertz-frequency SAW resonators in the future would enable operation in the
quantum regime with demonstrations of sideband-resolved excitation and
detection, quantum transduction, and photon-phonon entanglement.
Acknowledgements. We gratefully acknowledge support from NSF Award No.
ECCS-2032272 and the NSF Quantum Foundry through Q-AMASE-i program Award No.
DMR-1906325. We thank Hubert Krenner, Matthias Weiß, and Emeline Nysten at WWU
Münster and Kien Le at UCSB for valuable input and discussions.
Disclosure. The authors declare no conflicts of interest.
Author Contributions. S.D.P., K.P., and M.C. contributed equally to this work.
G.M. conceived the experiments and supervised the project. M.C. designed and
fabricated the surface acoustic wave resonators. K.P., S.U., and L.H. prepared
the samples. S.D.P., M.C., and D.P. assembled the samples. K.P. and S.D.P.
performed steady-state PL spectroscopy, time-resolved PL spectroscopy, second-
order auto-correlation, stroboscopic PL measurements, and S-parameter
measurements. All authors discussed the results and commented on the
manuscript at all stages.
## Methods
### 1\. Device Fabrication
Surface acoustic wave (SAW) resonators were fabricated on bulk 128° YX-cut
lithium niobate, which is a piezoelectric substrate with high
electromechanical coupling ($K^{2}=5.4\%$). The SAW resonators were fabricated
with 20 nm of NbN deposited by DC magnetron reactive sputtering. Distributed
Bragg grating mirrors were then patterned using optical lithography and
inductively coupled plasma reactive ion etching with CF4 chemistry. Various
cavity lengths $L$ were fabricated, where $L=d+2L_{m}$. The inner SAW mirror
edges are separated by $d$ and the modes penetrate the mirrors by
$L_{m}=w/r_{s}\approx 130$ $\mu$m for a NbN width $w=10$ $\mu$m and single-
period reflectivity $r_{s}\approx 0.02$ [7, 53]. One-port SAW resonators were
fabricated by placing a NbN interdigital transducer (IDT) within the SAW
resonator. Contact pads composed of 10 nm of Ti and 90 nm of Au were deposited
using a lift-off process. Finally, monolayer WSe2 flakes were prepared by
mechanical exfoliation and identified using high-contrast optical imaging.
Monolayers were then
integrated within the SAW resonator using an all-dry transfer method. After
the device fabrication was completed, the samples were attached to an OFHC
copper mount that holds both the sample and PCB, and the devices were wire-
bonded prior to the experiments.
### 2\. Electro-Mechanical Characterization
The $S_{11}$ scattering parameter was measured to ascertain the SAW
resonator’s intrinsic quality factor, $Q_{i}$, and external quality factor,
$Q_{e}$. The output from a vector network analyzer (VNA) was sent to the
single port of the SAW resonator IDT. The reflected signal from the resonator
was sent back to the VNA, and the magnitude and phase of $S_{11}$ were
measured as a function of RF frequency at 4.4 K. Figure 1 shows representative
results from this measurement. Dips in the $|S_{11}|$ spectrum within the
bandwidth of the SAW resonator mirrors ($\sim 298-304$ MHz) are indicative of
the different SAW resonator modes. By simultaneously fitting the magnitude and
phase at each resonance frequency to Eqn. 1, we extract both $Q_{i}$ and
$Q_{e}$ for each resonance.
$S_{11}(f)=\frac{(Q_{\mathrm{e},n}-Q_{\mathrm{i},n})/Q_{\mathrm{e},n}+2iQ_{\mathrm{i},n}(f-f_{n})/f}{(Q_{\mathrm{e},n}+Q_{\mathrm{i},n})/Q_{\mathrm{e},n}+2iQ_{\mathrm{i},n}(f-f_{n})/f}$
(1)
The table below summarizes the results from the fits for each of the cavity
resonances measured after transferring the 2D flakes.
| $f_{SAW}$ [MHz] | 298.425 | 299.425 | 300.975 | 303.561 |
|---|---|---|---|---|
| $Q_{i}$ | 1,300 | 3,000 | 1,600 | 1,700 |
| $Q_{e}$ | 5,900 | 800 | 2,300 | 6,000 |

Table 1: Intrinsic ($Q_{i}$) and external ($Q_{e}$) quality factors for each of the SAW
resonator modes.
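As a sanity check, the fitted values can be substituted back into Eq. (1). The short Python sketch below is our illustration (the paper's own fitting code is not specified); on resonance the model reduces to $(Q_{e}-Q_{i})/(Q_{e}+Q_{i})$, and far off resonance the resonator reflects fully:

```python
def s11(f, fn, q_i, q_e):
    """One-port reflection coefficient of a single SAW mode, Eq. (1)."""
    detuning = 2j * q_i * (f - fn) / f
    return ((q_e - q_i) / q_e + detuning) / ((q_e + q_i) / q_e + detuning)

# 299.425 MHz mode with Q_i = 3,000 and Q_e = 800 from Table 1: the dip
# reaches |S11| = |800 - 3000| / (800 + 3000) ~ 0.58 on resonance, while
# far from resonance |S11| approaches 1 (full reflection).
fn = 299.425e6
on_res = abs(s11(fn, fn, 3000, 800))          # ~ 0.58
off_res = abs(s11(fn + 50e6, fn, 3000, 800))  # ~ 1.0
```

Fitting both magnitude and phase simultaneously, as described in the text, breaks the degeneracy between intrinsic and external losses that a magnitude-only fit would leave.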
### 3\. Acousto-Optical Microscopy
A 532 nm continuous-wave laser source was used for the steady-state and
stroboscopic measurements. A dichroic mirror at 540 nm is used to separate the
optical excitation and collection paths. An additional 600 nm long-pass
optical filter is used to further extinguish the excitation laser in the
collection path. An infinity-corrected 0.55 NA objective with 13 mm working
distance is used for spectroscopy. Samples were placed on a customized radio-
frequency (RF) PCB sample carrier for microwave connection and were cooled to
4.4 K inside a Montana S200 cryostation. Optical spectra were acquired using a
Princeton Instruments HRS-500 with 300/1200/1800 groove/mm gratings and a
thermo-electrically cooled Pixis silicon CCD. Second-order autocorrelation
measurements with continuous-wave optical excitation were performed by
utilizing the spectrometer as a monochromator to filter the emission from
individual emitters. The optical signals were then collected in a multimode
fiber beamsplitter connected to two single-photon avalanche detectors
(Excelitas SPCM-AQRH-13-FC). Swabian time-tagging electronics were used for
photon counting.
Time-resolved photoluminescence measurements were performed using a 660 nm, 80
MHz repetition rate pulsed laser source and single-photon counting. For
stroboscopic measurements, the internal oscillator of the RF signal generator
for driving the SAW IDT was connected to an external clock generator for
synchronization between the SAW and detected photons. The external clock
provided a pulsed signal at $f_{RF}/30$ to the start channel of the photon
counting module. The emitter photoluminescence detected after the
monochromator with the single-photon detector was connected to the stop
channel, and a histogram of start-stop times was constructed as the
spectrometer grating was scanned across the modulated SPE resonances.
### 4\. Time-Domain SAW Simulations
To understand and predict the temporal dynamics of the stroboscopic
measurements, we carried out temporal simulations in MATLAB. In the classical
regime, the spectral response of the system at any given time can be denoted
as:
$\displaystyle\Omega(t)=\frac{\Gamma}{1+\frac{(\omega-\omega_{\circ}-\Delta E\sin(2\pi f_{RF}t))^{2}}{\Gamma^{2}}}$ (2)
where $\Gamma$ and $\omega_{\circ}$ are the radiative decay rate and the
frequency of the emitter and $\Delta E$ and $f_{RF}$ are the amplitude of the
SAW modulation and frequency of the SAW. The monochromator is modeled as a
square pulse in frequency space as $U(\omega_{l})-U(\omega_{h})$ where
$\omega_{l}$ and $\omega_{h}$ are the low-pass and high-pass corner
frequencies of an ideal bandpass filter. While performing the stroboscopic
measurement, the counts on the single photon detector would follow
$\Omega(t)(U(\omega_{l})-U(\omega_{h}))$. The fit to the data is calculated
using this expression for an emitter with a lifetime of 2 ns, a non-
radiatively broadened linewidth of 2 meV, and an ideal bandpass filter with a
bandwidth of 3 meV where the filter is set on the high energy wing of the
spectral response.
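The stroboscopic signal model above can be sketched numerically. The fragment below is our illustrative Python paraphrase of the described MATLAB simulation (parameter names are ours): a Lorentzian whose center oscillates per Eq. (2), summed over an ideal bandpass window.

```python
import numpy as np

def strobe_counts(t, omega0, gamma, delta_e, f_rf, w_lo, w_hi, n=4001):
    """Counts after the monochromator at time t: the Lorentzian of Eq. (2),
    its center shifted by delta_e*sin(2*pi*f_rf*t), summed over an ideal
    bandpass window [w_lo, w_hi] (energies in arbitrary units, e.g. meV)."""
    omega = np.linspace(omega0 - 10 * gamma, omega0 + 10 * gamma, n)
    center = omega0 + delta_e * np.sin(2 * np.pi * f_rf * t)
    spectrum = gamma / (1 + (omega - center) ** 2 / gamma ** 2)
    window = (omega >= w_lo) & (omega <= w_hi)
    return spectrum[window].sum()

# Filter set on the high-energy wing, as in the text: the counts peak at the
# SAW phase where the emitter is shifted to its highest energy.
f_rf = 299.425e6
high = strobe_counts(1 / (4 * f_rf), 0.0, 1.0, 1.0, f_rf, 0.75, 3.0)
low = strobe_counts(3 / (4 * f_rf), 0.0, 1.0, 1.0, f_rf, 0.75, 3.0)
assert high > low
```

Scanning the window position across the modulated resonance, as done with the spectrometer grating, maps out the full time-energy histogram.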
### 5\. Finite Element Simulations of SAW Cavity
A COMSOL Multiphysics finite-element simulation was used to model the strain
amplitude at the location of the WSe2 emitter. A two-dimensional simulation
was constructed representing a cross section of the LiNbO3-NbN SAW cavity
along the propagation direction and the surface normal. Material parameters
for LiNbO3 were extracted from Ref. [54], and the elastic, piezoelectric, and
permittivity tensors were then rotated 38° to simulate a 128° YX-cut LiNbO3
substrate. NbN reflectors and IDT electrodes were simulated with electrostatic
floating-potential and terminal and ground boundary conditions, respectively.
The coupled piezoelectric equations of motion were solved using a frequency-
domain simulation with 0 dBm power applied to the IDT electrodes. Perfectly
matched layers were applied to the boundaries of the simulation domain to
absorb any scattered radiation. The strain amplitude was extracted by taking
the maximum value of the tensile strain component oriented along the SAW
propagation direction between the IDT and the NbN reflector placed further
from the IDT. Over the frequency range of 299-301 MHz, a maximum tensile
strain amplitude of 0.0119 % at 0 dBm applied IDT power coincided with a SAW
cavity resonance at 299.665 MHz, in good agreement with the observed resonance
at 299.425 MHz from the $S_{11}$ parameter measurement.
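Combining the simulated strain amplitude with the deformation-potential sensitivity quoted in the conclusion (up to 30 meV/%) gives a rough scale for the expected SAW-induced energy modulation. This back-of-envelope estimate is ours, not a figure from the paper, and assumes the quoted upper-bound sensitivity applies:

```python
# Illustrative estimate of the peak emitter energy modulation at 0 dBm drive.
sensitivity_mev_per_pct = 30.0  # deformation-potential coupling, meV per % strain
strain_pct = 0.0119             # simulated tensile strain amplitude at 0 dBm, %
delta_e_mev = sensitivity_mev_per_pct * strain_pct  # ~ 0.36 meV
```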
## References
* Delsing _et al._ [2019] P. Delsing, A. N. Cleland, M. J. A. Schuetz, J. Knörzer, G. Giedke, J. I. Cirac, K. Srinivasan, M. Wu, K. C. Balram, C. Bäuerle, T. Meunier, C. J. B. Ford, P. V. Santos, E. Cerda-Méndez, H. Wang, H. J. Krenner, E. D. S. Nysten, M. Weiß, G. R. Nash, L. Thevenard, C. Gourdon, P. Rovillain, M. Marangolo, J.-Y. Duquesne, G. Fischerauer, W. Ruile, A. Reiner, B. Paschke, D. Denysenko, D. Volkmer, A. Wixforth, H. Bruus, M. Wiklund, J. Reboud, J. M. Cooper, Y. Fu, M. S. Brugger, F. Rehfeldt, and C. Westerhausen, The 2019 surface acoustic waves roadmap, Journal of Physics D: Applied Physics 52, 353001 (2019).
* Schuetz _et al._ [2015] M. J. A. Schuetz, E. M. Kessler, G. Giedke, L. M. K. Vandersypen, M. D. Lukin, and J. I. Cirac, Universal Quantum Transducers Based on Surface Acoustic Waves, Physical Review X 5, 031031 (2015).
* Choquer _et al._ [2022] M. Choquer, M. Weiß, E. D. Nysten, M. Lienhart, P. Machnikowski, D. Wigger, H. J. Krenner, and G. Moody, Quantum control of optically active artificial atoms with surface acoustic waves, IEEE Transactions on Quantum Engineering 3, 5100217 (2022).
* Safavi-Naeini _et al._ [2019] A. H. Safavi-Naeini, D. Van Thourhout, R. Baets, and R. Van Laer, Controlling phonons and photons at the wavelength scale: integrated photonics meets integrated phononics, Optica 6, 213 (2019).
* Gell _et al._ [2008] J. R. Gell, M. B. Ward, R. J. Young, R. M. Stevenson, P. Atkinson, D. Anderson, G. A. C. Jones, D. A. Ritchie, and A. J. Shields, Modulation of single quantum dot energy levels by a surface-acoustic-wave, Applied Physics Letters 93, 81115 (2008).
* Metcalfe _et al._ [2010] M. Metcalfe, S. M. Carr, A. Muller, G. S. Solomon, and J. Lawall, Resolved Sideband Emission of InAs/GaAs Quantum Dots Strained by Surface Acoustic Waves, Physical Review Letters 105, 37401 (2010).
* Nysten _et al._ [2020a] E. D. Nysten, A. Rastelli, and H. J. Krenner, A hybrid (al) gaas-linbo3 surface acoustic wave resonator for cavity quantum dot optomechanics, Applied Physics Letters 117, 121106 (2020a).
* DeCrescent _et al._ [2022] R. A. DeCrescent, Z. Wang, P. Imany, R. C. Boutelle, C. A. McDonald, T. Autry, J. D. Teufel, S. W. Nam, R. P. Mirin, and K. L. Silverman, Large single-phonon optomechanical coupling between quantum dots and tightly confined surface acoustic waves in the quantum regime, Physical Review Applied 18, 034067 (2022).
* Whiteley _et al._ [2019] S. J. Whiteley, G. Wolfowicz, C. P. Anderson, A. Bourassa, H. Ma, M. Ye, G. Koolstra, K. J. Satzinger, M. V. Holt, F. J. Heremans, A. N. Cleland, D. I. Schuster, G. Galli, and D. D. Awschalom, Spin–phonon interactions in silicon carbide addressed by Gaussian acoustics, Nature Physics 15, 490 (2019), arXiv:1804.10996 .
* Azzam _et al._ [2021] S. I. Azzam, K. Parto, and G. Moody, Prospects and challenges of quantum emitters in 2d materials, Applied Physics Letters 118, 240502 (2021).
* Kianinia _et al._ [2022] M. Kianinia, Z.-Q. Xu, M. Toth, and I. Aharonovich, Quantum emitters in 2D materials: Emitter engineering, photophysics, and integration in photonic nanostructures, Applied Physics Reviews 9, 011306 (2022).
* Srivastava _et al._ [2015] A. Srivastava, M. Sidler, A. V. Allain, D. S. Lembke, A. Kis, and A. Imamoğlu, Optically active quantum dots in monolayer wse2, Nature Nanotechnology 10, 491 (2015).
* He _et al._ [2015] Y.-M. He, G. Clark, J. R. Schaibley, Y. He, M.-C. Chen, Y.-J. Wei, X. Ding, Q. Zhang, W. Yao, X. Xu, C.-Y. Lu, and J.-W. Pan, Single quantum emitters in monolayer semiconductors, Nature Nanotechnology 10, 497 (2015).
* Chakraborty _et al._ [2015] C. Chakraborty, L. Kinnischtzke, K. M. Goodfellow, R. Beams, and A. N. Vamivakas, Voltage-controlled quantum light from an atomically thin semiconductor, Nature Nanotechnology 10, 507 (2015).
* Koperski _et al._ [2015] M. Koperski, K. Nogajewski, A. Arora, V. Cherkez, P. Mallet, J. Y. Veuillen, J. Marcus, P. Kossacki, and M. Potemski, Single photon emitters in exfoliated wse2 structures, Nature Nanotechnology 10, 503 (2015).
* Tonndorf _et al._ [2015] P. Tonndorf, R. Schmidt, R. Schneider, J. Kern, M. Buscema, G. A. Steele, A. Castellanos-Gomez, H. S. van der Zant, S. M. de Vasconcellos, and R. Bratschitsch, Single-photon emission from localized excitons in an atomically thin semiconductor, Optica 2, 347 (2015).
* Tran _et al._ [2016a] T. T. Tran, C. Elbadawi, D. Totonjian, C. J. Lobo, G. Grosso, H. Moon, D. R. Englund, M. J. Ford, I. Aharonovich, and M. Toth, Robust multicolor single photon emission from point defects in hexagonal boron nitride, ACS Nano 10, 7331 (2016a), pMID: 27399936\.
* Jungwirth _et al._ [2016] N. R. Jungwirth, B. Calderon, Y. Ji, M. G. Spencer, M. E. Flatté, and G. D. Fuchs, Temperature dependence of wavelength selectable zero-phonon emission from single defects in hexagonal boron nitride, Nano letters 16, 6052 (2016).
* Grosso _et al._ [2017] G. Grosso, H. Moon, B. Lienhard, S. Ali, D. K. Efetov, M. M. Furchi, P. Jarillo-Herrero, M. J. Ford, I. Aharonovich, and D. Englund, Tunable and high-purity room temperature single-photon emission from atomic defects in hexagonal boron nitride, Nature Communications 8, 705 (2017).
* Luo _et al._ [2018] Y. Luo, G. D. Shepard, J. V. Ardelean, D. A. Rhodes, B. Kim, K. Barmak, J. C. Hone, and S. Strauf, Deterministic coupling of site-controlled quantum emitters in monolayer WSe2 to plasmonic nanocavities, Nature Nanotechnology 13, 1137 (2018).
* Zhao _et al._ [2021] H. Zhao, M. T. Pettes, Y. Zheng, and H. Htoon, Site-controlled telecom-wavelength single-photon emitters in atomically-thin MoTe2, Nature Communications 12, 6753 (2021).
* Fournier _et al._ [2022] C. Fournier, S. Roux, K. Watanabe, T. Taniguchi, S. Buil, J. Barjon, J.-P. Hermier, and A. Delteil, Two-photon interference from a quantum emitter in hexagonal boron nitride, arXiv preprint arXiv:2210.05590 (2022).
* Dietrich _et al._ [2020] A. Dietrich, M. Doherty, I. Aharonovich, and A. Kubanek, Solid-state single photon source with fourier transform limited lines at room temperature, Physical Review B 101, 081401 (2020).
* Schaibley _et al._ [2016] J. R. Schaibley, H. Yu, G. Clark, P. Rivera, J. S. Ross, K. L. Seyler, W. Yao, and X. Xu, Valleytronics in 2d materials, Nature Reviews Materials 1, 1 (2016).
* Exarhos _et al._ [2019] A. L. Exarhos, D. A. Hopper, R. N. Patel, M. W. Doherty, and L. C. Bassett, Magnetic-field-dependent quantum emission in hexagonal boron nitride at room temperature, Nature communications 10, 1 (2019).
* Stern _et al._ [2022] H. L. Stern, Q. Gu, J. Jarman, S. Eizagirre Barker, N. Mendelson, D. Chugh, S. Schott, H. H. Tan, H. Sirringhaus, I. Aharonovich, _et al._ , Room-temperature optically detected magnetic resonance of single defects in hexagonal boron nitride, Nature communications 13, 1 (2022).
* Tran _et al._ [2016b] T. T. Tran, C. Elbadawi, D. Totonjian, C. J. Lobo, G. Grosso, H. Moon, D. R. Englund, M. J. Ford, I. Aharonovich, and M. Toth, Robust multicolor single photon emission from point defects in hexagonal boron nitride, ACS Nano 10, 7331 (2016b).
* Luo _et al._ [2019] Y. Luo, N. Liu, X. Li, J. C. Hone, and S. Strauf, Single photon emission in WSe2 up 160 K by quantum yield control, 2D Materials 6, 035017 (2019).
* Parto _et al._ [2021] K. Parto, S. I. Azzam, K. Banerjee, and G. Moody, Defect and strain engineering of monolayer WSe2 enables site-controlled single-photon emission up to 150 K, Nature Communications 12, 3585 (2021).
* Branny _et al._ [2017] A. Branny, S. Kumar, R. Proux, and B. D. Gerardot, Deterministic strain-induced arrays of quantum emitters in a two-dimensional semiconductor, Nature communications 8, 15053 (2017).
* Palacios-Berraquero _et al._ [2017] C. Palacios-Berraquero, D. M. Kara, A. R.-P. Montblanch, M. Barbone, P. Latawiec, D. Yoon, A. K. Ott, M. Loncar, A. C. Ferrari, and M. Atatüre, Large-scale quantum-emitter arrays in atomically thin semiconductors, Nature Communications 8, 15093 (2017).
* Fournier _et al._ [2021] C. Fournier, A. Plaud, S. Roux, A. Pierret, M. Rosticher, K. Watanabe, T. Taniguchi, S. Buil, X. Quélin, J. Barjon, J. Barjon, J.-P. Hermier, and A. Delteil, Position-controlled quantum emitters with reproducible emission wavelength in hexagonal boron nitride, Nature Communications 12, 3779 (2021).
* Parto _et al._ [2022] K. Parto, S. I. Azzam, N. Lewis, S. D. Patel, S. Umezawa, K. Watanabe, T. Taniguchi, and G. Moody, Cavity-enhanced 2d material quantum emitters deterministically integrated with silicon nitride microresonators, arXiv preprint arXiv:2206.14845 (2022).
* Lazić _et al._ [2019] S. Lazić, A. Espinha, S. Pinilla Yanguas, C. Gibaja, F. Zamora, P. Ares, M. Chhowalla, W. S. Paz, J. J. Burgos, A. Hernández-Mínguez, P. V. Santos, and H. P. van der Meulen, Dynamically tuned non-classical light emission from atomic defects in hexagonal boron nitride, Communications Physics 2, 113 (2019).
* Iff _et al._ [2019] O. Iff, D. Tedeschi, J. Martín-Sánchez, M. Moczała-Dusanowska, S. Tongay, K. Yumigeta, J. Taboada-Gutiérrez, M. Savaresi, A. Rastelli, P. Alonso-González, _et al._ , Strain-tunable single photon sources in wse2 monolayers, Nano Letters 19, 6931 (2019).
* Linhart _et al._ [2019] L. Linhart, M. Paur, V. Smejkal, J. Burgdörfer, T. Mueller, and F. Libisch, Localized intervalley defect excitons as single-photon emitters in WSe2, Physical review letters 123, 146401 (2019).
* Castellanos-Gomez _et al._ [2014] A. Castellanos-Gomez, M. Buscema, R. Molenaar, V. Singh, L. Janssen, H. S. Van Der Zant, and G. A. Steele, Deterministic transfer of two-dimensional materials by all-dry viscoelastic stamping, 2D Materials 1, 011002 (2014).
* Manenti _et al._ [2016] R. Manenti, M. J. Peterer, A. Nersisyan, E. B. Magnusson, A. Patterson, and P. J. Leek, Surface acoustic wave resonators in the quantum regime, Physical Review B 93, 041411 (2016).
* Nysten _et al._ [2020b] E. D. S. Nysten, A. Rastelli, and H. J. Krenner, A hybrid (Al)GaAs-LiNbO 3 surface acoustic wave resonator for cavity quantum dot optomechanics, Applied Physics Letters 117, 121106 (2020b), arXiv:2007.11082 .
* Chakraborty _et al._ [2019] C. Chakraborty, N. R. Jungwirth, G. D. Fuchs, and A. N. Vamivakas, Electrical manipulation of the fine-structure splitting of wse 2 quantum emitters, Physical Review B 99, 045308 (2019).
* Pustiowski _et al._ [2015] J. Pustiowski, K. Müller, M. Bichler, G. Koblmüller, J. J. Finley, A. Wixforth, and H. J. Krenner, Independent dynamic acousto-mechanical and electrostatic control of individual quantum dots in a LiNbO3-GaAs hybrid, Applied Physics Letters 106, 013107 (2015).
* Bayer _et al._ [2002] M. Bayer, G. Ortner, O. Stern, A. Kuther, A. Gorbunov, A. Forchel, P. Hawrylak, S. Fafard, K. Hinzer, T. Reinecke, _et al._ , Fine structure of neutral and charged excitons in self-assembled in (ga) as/(al) gaas quantum dots, Physical Review B 65, 195315 (2002).
* Seguin _et al._ [2005] R. Seguin, A. Schliwa, S. Rodt, K. Pötschke, U. Pohl, and D. Bimberg, Size-dependent fine-structure splitting in self-organized inas/gaas quantum dots, Physical review letters 95, 257402 (2005).
* Schuck _et al._ [2021] C. F. Schuck, R. Boutelle, K. Silverman, G. Moody, and P. J. Simmonds, Single-photon generation from self-assembled gaas/inalas (111) a quantum dots with ultrasmall fine-structure splitting, Journal of Physics: Photonics 3, 024012 (2021).
* Wang _et al._ [2019] H. Wang, Y.-M. He, T.-H. Chung, H. Hu, Y. Yu, S. Chen, X. Ding, M.-C. Chen, J. Qin, X. Yang, _et al._ , Towards optimal single-photon sources from polarized microcavities, Nature Photonics 13, 770 (2019).
* Gerhardt _et al._ [2019] S. Gerhardt, M. Deppisch, S. Betzold, T. H. Harder, T. C. Liew, A. Predojević, S. Höfling, and C. Schneider, Polarization-dependent light-matter coupling and highly indistinguishable resonant fluorescence photons from quantum dot-micropillar cavities with elliptical cross section, Physical Review B 100, 115305 (2019).
* Hudson _et al._ [2007] A. Hudson, R. Stevenson, A. Bennett, R. Young, C. Nicoll, P. Atkinson, K. Cooper, D. Ritchie, and A. Shields, Coherence of an entangled exciton-photon state, Physical Review Letters 99, 266802 (2007).
* Stevenson _et al._ [2006] R. M. Stevenson, R. J. Young, P. Atkinson, K. Cooper, D. A. Ritchie, and A. J. Shields, A semiconductor source of triggered entangled photon pairs, Nature 439, 179 (2006).
* He _et al._ [2016] Y.-M. He, O. Iff, N. Lundt, V. Baumann, M. Davanco, K. Srinivasan, S. Höfling, and C. Schneider, Cascaded emission of single photons from the biexciton in monolayered WSe2, Nature Communications 7, 13409 (2016).
* Seidl _et al._ [2006] S. Seidl, M. Kroner, A. Högele, K. Karrai, R. J. Warburton, A. Badolato, and P. M. Petroff, Effect of uniaxial stress on excitons in a self-assembled quantum dot, Applied Physics Letters 88, 203113 (2006).
* Imany _et al._ [2022] P. Imany, Z. Wang, R. A. DeCrescent, R. C. Boutelle, C. A. McDonald, T. Autry, S. Berweger, P. Kabos, S. W. Nam, R. P. Mirin, _et al._ , Quantum phase modulation with acoustic cavities and quantum dots, Optica 9, 501 (2022).
* Kumar _et al._ [2016] S. Kumar, M. Brotóns-Gisbert, R. Al-Khuzheyri, A. Branny, G. Ballesteros-Garcia, J. F. Sánchez-Royo, and B. D. Gerardot, Resonant laser spectroscopy of localized excitons in monolayer wse 2, Optica 3, 882 (2016).
* Morgan [2007] D. Morgan, _Surface acoustic wave filters with applications to electronic communications and Signal Processing_ (Elsevier Science, 2007).
* Tarumi _et al._ [2012] R. Tarumi, T. Matsuhisa, and Y. Shibutani, Low temperature elastic constants and piezoelectric coefficients of LiNbO3 and LiTaO3: Resonant ultrasound spectroscopy measurement and lattice dynamics analysis, Japanese Journal of Applied Physics 51, 07GA02 (2012).
# EMReact: A Tool for Modelling Electromagnetic Field Induced Effects in
Chemical Reactions by Solving the Discrete Stochastic Master Equation
Kelvin Dsouza(a) and Daryoosh Vashaee(a,b) (‡Corresponding author<EMAIL_ADDRESS>)
(a) Electrical and Computer Engineering Department, North Carolina State University, Raleigh, North Carolina, USA
(b) Materials Science and Engineering Department, North Carolina State University, Raleigh, North Carolina, USA
###### Abstract
The effects of electromagnetic fields (EMF) have been widely debated
concerning their role in chemical reactions. Reactions that usually take hours
or days to complete have been shown to proceed a thousand times faster under
EMF radiation. This work develops a formalism and a computer program to evaluate
and quantify the EMF effects in chemical reactions. The master equation
employed in this program evolves the internal energy distribution of the
reacting system under EMFs while including collisional effects. Multiphoton
absorption and emission become possible when the transition energy is close to
the EMF photon energy and are influenced by the dielectric properties of the
system. Dimethyl Sulfoxide and
Benzyl Chloride are simulated under different EMF intensities. The results
show that EMF absorption is closely related to the collisional redistribution
of energy in molecules. The EMF effect can be interpreted as a shift of the
thermodynamic equilibrium. Under such a nonequilibrium energy distribution,
the "temperature" is not a reliable quantity for defining the state of the system.
###### Program Summary
Program Title: EMReact
Developer’s repository link: https://github.com/keludso/EMReact.git
Licensing provisions: BSD License
Programming language: MATLAB
Nature of problem: The role of Electromagnetic radiation in chemical
reactions.
Solution method: Solving the Discrete Energy Stochastic Master Equation
employing Gillespie’s algorithm.
###### keywords:
Electromagnetic field effects, Microwave, Field-induced reactions,
Non-equilibrium reactions, Master equation, Gillespie’s algorithm.
## 1 Introduction
Electromagnetic field-assisted processes in the form of microwave heating span
a wide range of research and industrial areas, such as low-temperature
sintering, low-temperature de-crystallization, enhancement of reaction rates,
and catalytic effects in organic and inorganic synthesis[1, 2, 3, 4, 5, 6, 7,
8, 9, 10]. While the source of the heating mechanism is well understood,
certain effects such as enhancement of reaction rates, increased yield, and
creation of distinct transition pathways, which cannot be achieved through the
conventional heating mechanism, have led to the term non-thermal or EMF
specific effects[11, 12, 13, 14, 15, 16, 17]. The term non-thermal effect is
used loosely for any reaction or process that cannot be achieved through
conventional heating, and it has gained a reputation for being somewhat
mysterious. There has been increasing debate in the scientific
community regarding the non-thermal effects being just a manifestation of the
thermal effect in the form of non-uniform localized heating, hotspots, and/or
inaccuracy in temperature measurement[18]. Exhaustive reviews have been
published on both the thermal and non-thermal effects[19, 20, 21]. While there
are satisfactory explanations for the observed phenomena in each case,
modeling these effects to get an intuitive understanding of the process
remains challenging.
The microwave energy is typically too low to cause electronic transitions in
materials, and unlike the laser, microwave absorption has a more subtle effect
on the system. The microwave field interacts with the dipole of the molecule,
increasing its energy. The thermal effect in microwave absorption can be
explained by the energy released to the system upon relaxation of these
excited dipole moments. By tracking the internal vibrational energy
distribution of the system, we can get information regarding the equilibrium
properties and temperature of the system. We can model microwave absorption as
multiphoton absorption in a collisional environment to implement this idea.
Intramolecular vibrational redistribution and vibrational energy transfer can
be used to model energy dissipation among vibrational levels[22, 23]. As we
will see, collisional effects play a vital role in the redistribution of
energy, which gives rise to the heating of the sample as an observable
quantity.
Ma used the internal energy distribution to explain non-thermal microwave
effects through a two-channel process, hypothesizing that the microwave
activates a rapid channel that enhances the reaction rate and demonstrating
that the distribution of internal states deviates from equilibrium under a
strong microwave field[24, 25]. The non-thermal microwave effect in
certain applications has been explained by (1) an increase in the
pre-exponential factor of the Arrhenius equation, which directly influences
molecular collisions; (2) a decrease in the activation energy, which directly
increases the reaction rate; (3) localized microscopic high temperatures; (4)
intermediate transition states; and (5) polar-specific effects in the
medium[26].
This paper introduces a model using the master equation formulation to
demonstrate the effect of the microwave as a multiphoton absorption in a
collisional environment.
## 2 Methodology
### 2.1 Master Equation
The temporal evolution of the population of molecules under thermal
dissociation, including the multiphoton absorption, can be described by the
energy-grained master equation:[27, 28]
$\displaystyle\frac{dN_{i}}{dt}=\frac{I(t)}{\hbar\omega}[\sigma_{i,i-1}N_{i-1}+\frac{\rho_{i}}{\rho_{i+1}}\sigma_{i+1,i}N_{i+1}-(\sigma_{i+1,i}+\frac{\rho_{i}}{\rho_{i+1}}\sigma_{i,i-1})N_{i}]$
(1)
$\displaystyle+\omega\sum_{j}P_{i,j}N_{j}-\omega\sum_{j}P_{j,i}N_{i}-\sum_{m}k_{m}N_{i}$
where $N_{i}$ is the concentration of the species at the ith energy level,
$I(t)$ is the intensity (W/cm2) of the microwave radiation, $P_{i,j}$ is the
probability of collisional transfer of the molecule from the jth to the ith
energy level, $\omega$ is the collisional frequency, $k_{m}$ is the
unimolecular rate constant of the mth dissociation channel, $\sigma_{i+1,i}$
is the microscopic cross-section for the absorption of microwave energy from
level i to i+1, and $\rho_{i}$ is the density of states at the ith energy
level.
As it is difficult to find a direct solution for the master equation given in
Eq. (1), it is solved with the stochastic algorithm provided by Gillespie[29,
30, 31]. The algorithm determines the time step and the reaction that occurs
next in a well-mixed system, accounting for random fluctuations. To implement
the stochastic process, first, a cumulative reaction term $k_{0}$ is
calculated as the sum of all the reactions $k_{v}$ occurring in the system,
according to Eq (2). Using random numbers given by $r_{1}$ and $r_{2}$, the
time step $\tau$ is added to the simulation time, and the reaction at that
time step is chosen, according to Eq (3) and (4), respectively.
$k_{0}=\sum_{v=1}^{M}k_{v}$ (2)
$\tau=\frac{1}{k_{0}}\ln\frac{1}{r_{1}}$ (3)
$\sum_{v=1}^{\mu-1}k_{v}<r_{2}k_{0}\leq\sum_{v=1}^{\mu}k_{v}$ (4)
To put it all together, given the initial conditions, the program calculates
the reaction rate for the different processes: microwave absorption/emission,
elastic/inelastic collision, and dissociation. Each molecule then starts with
the initial internal energy. Based on the reaction chosen by Eq (4) and the
time step by Eq (3), the appropriate energy is updated to the molecule’s
internal energy, and the program progresses. This cycle repeats until the time
exceeds the set limit or the molecule dissociates. The flowchart in Figure 1
demonstrates this process.
Figure 1: The flowchart for solving the master equation using Gillespie’s
algorithm. The system is initialized with the set parameters of temperature
(Temp), number of trajectories (NTraj), time limit (TLIM), initial energy
(EBEGIN), and size of the energy grain (Egrid). Reaction rates and the density
of states are calculated first, and then the stochastic process begins. The
energy (E) is updated at each time step ($\tau$), and the simulation for each
molecule runs until the set time is exceeded (T$>$TLIM) or the molecule
dissociates (kdissociation).
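The stochastic cycle described above can be sketched compactly. The following Python sketch is illustrative only (the actual EMReact implementation is in MATLAB [40]); the rate function, its toy processes, and all parameter names are hypothetical.

```python
import math
import random

def gillespie_step(rates, rng=random.random):
    """One Gillespie step: return (tau, chosen_index) for a list of rates k_v.

    Implements Eqs. (2)-(4): k0 = sum(k_v); tau = (1/k0) ln(1/r1); the reaction
    mu is the smallest index whose cumulative rate reaches r2 * k0.
    """
    k0 = sum(rates)                      # Eq. (2)
    r1, r2 = rng(), rng()
    tau = math.log(1.0 / r1) / k0        # Eq. (3)
    threshold = r2 * k0                  # Eq. (4)
    cumulative = 0.0
    for mu, k in enumerate(rates):
        cumulative += k
        if cumulative >= threshold:
            return tau, mu
    return tau, len(rates) - 1           # guard against floating-point round-off

def run_trajectory(rate_fn, e0, t_lim, rng=random.random):
    """Propagate one molecule until the time limit is exceeded or it dissociates.

    rate_fn(E) must return (rates, effects): effects[mu] is the energy change
    (absorption/emission/collision) for process mu, or None for dissociation.
    """
    t, energy = 0.0, e0
    while t < t_lim:
        rates, effects = rate_fn(energy)
        tau, mu = gillespie_step(rates, rng)
        t += tau
        if effects[mu] is None:          # dissociation channel chosen
            return t, energy, True
        energy += effects[mu]            # update the internal energy
    return t, energy, False
```

In EMReact this loop is repeated for each of the NTraj trajectories, with the stored time bins averaged over the ensemble.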
### 2.2 EMF effects
The electromagnetic wave interacts with a molecule by distorting the electron
cloud in the direction of the applied field. As this distortion relaxes,
energy is gained in the system due to intermolecular interactions leading to
heating in the system. This mechanism is known as dielectric heating, and it
primarily depends on a cluster of atoms or molecules, and it is not a quantum
mechanical phenomenon.
The property of the material that determines the coupling of electromagnetic
waves is the relative permittivity. A material with permanent or induced
dipoles will store charges if placed between two electrodes in a circuit, and
the permittivity of the material defines this charge storing ability. The
permittivity is defined as a complex quantity, given in Eq. (5).
$\epsilon^{*}=\epsilon^{\prime}-j\epsilon^{\prime\prime}$ (5)
where $\epsilon^{\prime}$ is the real part of the permittivity, which is
related to the charge storage, and $\epsilon^{\prime\prime}$ is the imaginary
part, which represents the loss term. As dielectric heating is known to heat
the bulk of the sample, it is directly related to the loss tangent of the
system. The loss tangent, or energy dissipation factor, is given by Eq. (6).
$\tan\delta=\frac{\epsilon^{\prime\prime}}{\epsilon^{\prime}}$ (6)
The loss tangent can be measured experimentally at different
temperatures [32]. Dielectric heating can also be viewed as the efficiency of
conversion of EMF energy into heat, and it is related to both the dielectric
and thermal properties of the system. The relationship is given by Eq. (7) [33].
$P=\sigma|E|^{2}=(\omega\epsilon_{0}\epsilon^{\prime}\tan\delta)|E|^{2}$ (7)
where $P$ represents the power dissipated per unit volume in the material,
$\sigma$ is the conductivity, $E$ is the electric field, and $\omega$ is the
angular frequency. The power dissipated in the system is equivalent to the
multiphoton jump due to the EMF absorption. In particular, we consider a bulk
system with a continuum in the density of states. Modeling the EMF absorption
as a multiphoton absorption is therefore justified, as the energy is assumed
to be absorbed by all the molecules equally, i.e., we can relate it to the
power absorbed by the whole system (not by a single molecule). In the next
step, the energy gained by the multiphoton absorption is dissipated quickly
through collisional effects that tend to restore equilibrium in the system.
The thermal effect is seen as an increase in temperature, while the
non-thermal effect is observed as a deviation from this equilibrium.
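As a numerical illustration of Eq. (7), the sketch below evaluates the dissipated power density for DMSO-like parameters ($\epsilon^{\prime}=13$, $\tan\delta=0.8524$, 2.4 GHz, taken from Table 1 below); the field strength is an illustrative assumption, not a value from the paper.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def dissipated_power_density(freq_hz, eps_real, tan_delta, e_field):
    """Volumetric dissipated power, Eq. (7): P = omega * eps0 * eps' * tan(delta) * |E|^2."""
    omega = 2.0 * math.pi * freq_hz  # angular frequency, rad/s
    return omega * EPS0 * eps_real * tan_delta * e_field ** 2

# DMSO-like parameters; the field strength E = 10 kV/m is an assumed value.
p = dissipated_power_density(2.4e9, 13.0, 0.8524, 1.0e4)  # W/m^3, ~1.5e8
```

This volumetric power is what the master-equation model redistributes over the ensemble as multiphoton absorption.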
The EMF intensity plays a crucial role in the dynamics of the system. At
higher microwave intensity, voltage breakdown occurs, leading to arc
discharges and highly ionized gas plasma. Very high values of microwave
power ($>10^{6}$ W/cm$^{2}$) can lead to a higher number of molecules in the
excited states with insufficient time for redistribution of energy, which can
lead to misinterpretation of the results.
### 2.3 Molecular Collisions
The collision frequency $\omega$ is a function of temperature, pressure, and
the interatomic potential. The molecules are modelled as Lennard-Jones spheres
interacting via the Lennard-Jones potential between molecule A and bath gas M.
The collision frequency is calculated using Eq. (8).
$\omega=\pi\sigma_{LJ}^{2}[M]\sqrt{\frac{8RT}{\pi\mu_{A-M}}}\Omega^{*}_{A-M}$ (8)
where $\Omega^{*}_{A-M}$ is the collision integral given in Eq. (9), $[M]$ the
number density of the system, $\sigma_{LJ}$ the collision diameter, $T$ (K)
the temperature, and $\mu_{A-M}$ the reduced mass. In the collision integral,
${\epsilon_{A-M}}$ is the depth of the Lennard-Jones potential well [34, 35].
$\Omega^{*}_{A-M}\approx\left[0.636+0.567\log_{10}\frac{k_{B}T}{\epsilon_{A-M}}\right]^{-1}$ (9)
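For concreteness, Eqs. (8) and (9) can be evaluated as follows. This is an illustrative sketch: the DMSO-like Lennard-Jones parameters, the reduced molar mass, and the 1 atm bath pressure are assumptions, and the bracketed fit is used in Troe's inverted form.

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
R = 8.314462618     # gas constant, J/(mol K)

def collision_frequency(sigma_m, eps_over_kb, mu_kg_per_mol, temp_k, pressure_pa):
    """Lennard-Jones collision frequency per molecule, Eqs. (8)-(9).

    sigma_m: collision diameter (m); eps_over_kb: LJ well depth / k_B (K);
    mu_kg_per_mol: reduced molar mass (kg/mol).
    """
    number_density = pressure_pa / (KB * temp_k)                  # [M], 1/m^3
    mean_speed = math.sqrt(8.0 * R * temp_k / (math.pi * mu_kg_per_mol))
    # Reduced collision integral, Troe-type fit (note the inverted bracket)
    omega_star = 1.0 / (0.636 + 0.567 * math.log10(temp_k / eps_over_kb))
    return math.pi * sigma_m ** 2 * number_density * mean_speed * omega_star

# DMSO-like illustration: sigma = 3.66 A, eps/k_B = 168.18 K, assumed reduced
# molar mass 0.039 kg/mol, T = 300 K, p = 1 atm.
w = collision_frequency(3.66e-10, 168.18, 0.039, 300.0, 101325.0)  # s^-1
```

At these values the collision rate is of order $10^{9}$–$10^{10}$ s$^{-1}$, i.e. gigahertz-scale at atmospheric pressure.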
The transfer probability function used in this model is given by a
bi-exponential model, Eq. (10) [36, 37].
$P_{ij}=(1-a)e^{\frac{(E_{i}-E_{j})}{\alpha}}+be^{\frac{(E_{i}-E_{j})}{\beta}}$ (10)
where $a$, $b$, $\alpha$, and $\beta$ are coefficients for collisional energy
transfer. The up and down collisions are related by the detailed-balance
equation:
$\frac{P_{i,j}}{P_{j,i}}=\frac{\rho_{i}}{\rho_{j}}e^{-\frac{E_{i}-E_{j}}{k_{B}T}}$
(11)
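A minimal sketch of how Eqs. (10) and (11) combine: the downward-transfer probability follows the bi-exponential form (written here with explicit negative exponents for a downward step $\Delta E>0$), and the upward probability follows from detailed balance with the Boltzmann factor. All parameter values in the example are illustrative assumptions.

```python
import math

KB_CM = 0.6950348  # Boltzmann constant in cm^-1 / K

def p_down(delta_e, a, alpha, b, beta):
    """Bi-exponential transfer probability for a downward step delta_e > 0
    (in cm^-1), in the spirit of Eq. (10); unnormalized."""
    return (1.0 - a) * math.exp(-delta_e / alpha) + b * math.exp(-delta_e / beta)

def p_up(p_dn, rho_upper, rho_lower, delta_e, temp_k):
    """Upward probability via detailed balance, Eq. (11):
    P_up = P_down * (rho_upper / rho_lower) * exp(-delta_e / (k_B T))."""
    return p_dn * (rho_upper / rho_lower) * math.exp(-delta_e / (KB_CM * temp_k))
```

Enforcing Eq. (11) in this way guarantees that, in the absence of radiation, the stochastic process relaxes to the Boltzmann distribution.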
The energy transfer, collision, and dissociation processes are all connected
to the density of states. Many methods exist to calculate the density of
states, depending on the size of the molecule and the required accuracy [38,
39]. Statistical tools and computer programs are available to generate the
density of states of a molecule from its vibrational and rotational
frequencies.
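One such method is the Beyer-Swinehart direct count [38], which convolves harmonic oscillators over a fixed energy grain. A minimal sketch (for harmonic vibrations only, without rotations):

```python
def beyer_swinehart(frequencies_cm, e_max_cm, grain_cm):
    """Beyer-Swinehart direct count of harmonic-oscillator states.

    Returns the number of states in each energy grain of width grain_cm up to
    e_max_cm, for vibrational frequencies given in cm^-1. The density of
    states is counts[i] / grain_cm.
    """
    n_bins = int(e_max_cm / grain_cm) + 1
    counts = [0.0] * n_bins
    counts[0] = 1.0                          # ground state
    for nu in frequencies_cm:
        step = int(round(nu / grain_cm))     # frequency in units of the grain
        if step == 0:
            continue
        for i in range(step, n_bins):
            counts[i] += counts[i - step]    # convolve this oscillator in
    return counts
```

For two degenerate oscillators this correctly yields $n+1$ states at $n$ total quanta, matching the analytic count.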
The system parameters, such as the energy distribution, the photons absorbed
or emitted, and the trajectories reacted, are stored to evaluate the time
evolution of the system. The time bin is fixed and stores the average value in
that interval, which reduces the memory required for each simulation.
Considering the system to be linear, the average values of an ensemble of
molecules can describe the system’s behavior. The size of the energy grain
must be chosen to simulate the molecule with sufficient accuracy; a finer
energy grain requires more memory and is needed for reactions with multiple
channels.
### 2.4 Unimolecular Dissociation
A critical quantity to evaluate the reaction rate is the dissociation rate
constant. The Rice-Ramsperger-Kassel-Marcus (RRKM) model is widely used to
study the unimolecular kinetics and is based on the fundamental assumption
that a micro-canonical ensemble of states exists initially for the activated
molecule and is maintained as it decomposes. It is given by:
$k(E,J)=\frac{W(E,J)}{h\rho(E,J)}$ (12)
where $W(E,J)$ is the transition state’s sum of states. To include the EMF
effect, we can assume that the system responds slowly to sudden changes and
that the average energy shifts as EMF energy is absorbed. Due to the
non-equilibrium distribution, the vibrational temperature of the system
differs from the bath temperature. This can be treated as a change in the
vibrational temperature and included empirically in the RRKM equation,
according to Eq. (13).
$k_{MW}=Ae^{\frac{-E_{crit}}{\beta(E)T}}$ (13)
Here the values of $A$ and $\beta$ are determined by fitting to the observed
data.
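Eqs. (12) and (13) can be evaluated directly; the numbers in the example below are illustrative, not fitted values from the paper.

```python
import math

H_CM_S = 3.33565e-11  # Planck constant expressed in cm^-1 * s

def rrkm_rate(sum_states_ts, density_states):
    """Microcanonical RRKM rate, Eq. (12): k(E,J) = W(E,J) / (h * rho(E,J)).

    sum_states_ts: sum of states of the transition state (dimensionless);
    density_states: reactant density of states in states per cm^-1.
    Returns k in s^-1.
    """
    return sum_states_ts / (H_CM_S * density_states)

def empirical_mw_rate(a_factor, e_crit_cm, beta_e, temp_k):
    """Empirical microwave-modified rate, Eq. (13): k_MW = A exp(-E_crit / (beta(E) T)).

    With E_crit in cm^-1, beta(E) absorbs the Boltzmann constant; A, beta are
    the fitted quantities described in the text."""
    return a_factor * math.exp(-e_crit_cm / (beta_e * temp_k))

# Illustrative magnitudes: W = 100 open states, rho = 1e6 states/cm^-1
k_e = rrkm_rate(100.0, 1.0e6)  # ~3e6 s^-1
```

Expressing $h$ in cm$^{-1}\cdot$s keeps the rate in s$^{-1}$ when $W$ is dimensionless and $\rho$ is in states per cm$^{-1}$.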
## 3 Results
EMReact is written in MATLAB, and the source code is available for download
from the provided link [40]. The code, without the EMF inclusion, is based on
the MULTIWELL suite developed by Barker et al. [41, 42, 43, 44]. The program
utilizes MATLAB functionalities and offers graphical visualization and
prompt-based inputs for a user-oriented approach.
Figure 2: Density of states for DMSO and Benzyl Chloride
For this work, we have chosen two systems that are known to exhibit thermal as
well as non-thermal effects: dimethyl sulfoxide, which is a solvent and a good
EMF absorber, and benzyl chloride, which is a precursor to several
microwave-assisted reactions. The density of states is calculated for an
energy grid of 5 cm$^{-1}$ employing the Wang-Landau method implemented in the
DenSum program of the MULTIWELL suite, as shown in Figure 2. The density of
states is relatively smooth even at low energy, leading to little
interpolation error for the selected energy grid. The vibrational frequencies
used to calculate the density of states for the two systems are taken from the
calculations and experimental determinations of Chen and Naganathappa [45, 46].
### 3.1 Dimethyl Sulfoxide
Dimethyl sulfoxide (DMSO) is an aprotic polar solvent that has been used
extensively in microwave-assisted processes [47, 48]. DMSO couples to the
microwave field, increasing its temperature through dielectric losses, which
is one of the reasons it is widely used. The parameters used for the
simulations are given in Table 1.
Table 1: Simulation parameters for DMSO [49]. $\sigma_{LJ}$ = 3.66,
$\epsilon_{A-M}$ = 168.18, $\tan\delta$ = 0.8524, $\epsilon$ = 13,
$\omega$ = 2.4 GHz, N${}_{TRAJ}$ = 3000, EBEGIN = 1000 cm$^{-1}$, TLIM = 5 ms,
Temp = 300 K.
Figure 3: a: Internal Energy distribution for DMSO at 5ms. b: Average energy
$<E>$ with respect to time
The internal energy distribution at different microwave intensities for a time
limit of 5 ms is given in Figure 3(a). Without microwave radiation
(MW = 0 W/cm$^{2}$), the internal energy follows a Boltzmann distribution
given by $Ae^{-(E-E_{i})/k_{B}T}$. The internal energy, starting at
1000 cm$^{-1}$, reaches equilibrium through vibrational-vibrational energy
transfer by the collisional process within 500 $\mu$s. This is shown by the
average energy plot in Figure 3(b). The average energy $\langle E\rangle$ of
the internal energy distribution depends on the temperature of the system. As
microwave radiation is added to the system, the average energy plot shifts
according to the microwave intensity.
At a microwave power of 100 W/cm$^{2}$, the energy distribution is modified
such that the trailing edge of the distribution has a lower slope than the
leading edge, indicating a non-uniform distribution. This unequal distribution
is difficult to observe in the internal distribution plot but can be seen in
the average energy plot. The thermal effect, in the form of a shift of the
distribution due to a temperature increase, is not observed because of the
time limit of the simulation. The average energy for 100 W/cm$^{2}$ is lower
than that at no microwave radiation due to the spread of the energy states in
the trailing edge of the energy distribution.
Figure 4: Internal energy distribution by microwave radiation at 2 kW/cm2 with
and without stirring. The collisional effects distribute the energy more
efficiently thus reducing the sudden increase in temperature.
At a higher microwave power of 1 kW/cm$^{2}$, the thermal and non-thermal
effects are evident through the shift in the energy distribution and the
slight spread of the population over high energy states. The higher energy
states are more susceptible to microwave absorption due to the continuum of
the density of states compared to the lower states. The average energy plot is
also shifted above the equilibrium distribution.
An important factor in many microwave experiments is the effect of stirring
the system under radiation. It has been observed that the microwave effect is
reduced under stirring[12]. The effect of stirring can be incorporated into
the system by increasing the collisional frequency. The vibrational-
vibrational energy transfer is responsible for redistributing the energy in
the system leading to equilibrium at a fixed temperature. Increasing the
collisional frequency increases this redistribution and effectively cancels
the microwave effect. As shown in Figure 4, the internal energy distribution
for a system under 2 kW/cm2 of microwave power represents a thermal
equilibrium distribution after stirring.
## 4 Benzyl Chloride
Microwave-assisted synthesis of phosphonium salts through the nucleophilic
substitution of benzyl chloride with triphenylphosphine has been reported and
shown to exhibit non-thermal effects at 373 K [50]. Benzyl chloride is
relatively inert to microwave absorption, with a low loss tangent. The
simulation parameters are shown in Table 2.
The internal energy distribution at a time limit of 5 ms is shown in Figure
5(a) at different MW intensities. The corresponding average energy with
respect to time is shown in Figure 5(b). At low microwave intensity, no shift
in the equilibrium is observed. As benzyl chloride is not a good absorber of
MW, the temperature changes through the shift in the equilibrium distribution
under microwave radiation are not prominent for the 5 ms time limit. The
internal distribution at microwave powers of 100 W/cm$^{2}$ and 1 kW/cm$^{2}$
shows little change compared to the distribution with no microwave power. The
average energy plot, however, shows a shift in the energy level compared to
the reference energy, indicating a non-thermal effect.
Table 2: Simulation parameters for benzyl chloride [51]. $\sigma_{LJ}$ = 6,
$\epsilon_{A-M}$ = 410, $\tan\delta$ = 0.08, $\epsilon$ = 7,
$\omega$ = 2.4 GHz, N${}_{TRAJ}$ = 3000, EBEGIN = 1000 cm$^{-1}$, TLIM = 5 ms,
Temp = 350 K.
The non-thermal effect, seen as a non-equilibrium distribution, is caused by
some molecules being pushed to higher energy states while collisions
constantly try to redistribute this energy, resulting in a longer tail on the
high-energy side of the distribution along with a shift of its peak.
Figure 5: a: Internal energy distribution for Benzyl Chloride at 5ms. b:
Average energy $<E>$ with respect to time
Based on the simulation parameters and the competing microwave
absorption/emission, there is a microwave intensity beyond which the
non-thermal effect becomes more evident, which we call the characteristic
intensity. The characteristic intensity can be defined as the intensity beyond
which the microwave absorption/emission rate dominates over the collision
rate. At this level, the microwave radiation is expected to induce a
non-equilibrium distribution and exhibit a non-thermal effect. Figure 6 shows
the microwave absorption/emission rate of the two reactants studied, along
with the collisional frequency. The rate of microwave absorption/emission
increases linearly with microwave power. It is determined by the permittivity
and the density of states of the system; hence, it is a material-dependent
value. The microwave characteristic intensity is found to be 2 kW/cm$^{2}$ for
DMSO and 15 kW/cm$^{2}$ for benzyl chloride.
Figure 6: Interaction rate and collisional frequency versus microwave
intensity for the reactants (DMSO: red, benzyl chloride: blue), along with the
average collisional frequency (black).
### 4.1 Dissociation Reactions
In studying the enhancement of the reaction rate under microwave radiation, we
have implemented unimolecular dissociation simulations of CO$_{2}$ and
H$_{2}$O$_{2}$ molecules in an argon bath. The microwave effect on CO$_{2}$
and H$_{2}$O$_{2}$ has been studied and reported for applications in lasers,
carbon monoxide conversion, microwave plasma/catalysis, and waste-water
treatment [52, 53, 54, 55, 56]. The dissociation rate constant for CO$_{2}$ is
given in the literature [57, 58]. Figure 7(a) presents the plot of
trajectories reacted with increasing microwave intensity. Following the trend
in the figure, the reaction rate increases with both temperature and microwave
intensity. At low MW intensities, the reaction rate is too low for
temperatures below 1200 K. However, when the MW intensity increases above
$10^{4}$ W/cm$^{2}$, the reaction rate becomes significant even at 1000 K. At
high temperatures, the reaction rate saturates at lower MW intensity. For
example, at 1500 K, the rate slows down at about $5\times10^{4}$ W/cm$^{2}$,
while at 1000 K, the rate change happens at about $2\times10^{5}$ W/cm$^{2}$.
The main reason for this difference is that at higher temperatures, the
collision rate dominates and eventually spreads the distribution over the MW
absorption and emission; hence, the effect of MW becomes less significant.
Figure 7: a: Reaction rate for CO$_{2}$ versus microwave intensity at
different temperatures. b: Reaction rate for H$_{2}$O$_{2}$ versus microwave
intensity at different temperatures.
For the unimolecular dissociation of H$_{2}$O$_{2}$, the trajectories reacted
with respect to microwave intensity are shown in Figure 7(b). The slope of the
reaction rate initially increases with increasing temperature, with a downward
trend at temperatures higher than 1000 K. The difference in the plots for
these two molecular systems can be attributed to the differences in the
density of states and the dissociation rate constant. At higher temperatures,
the collisional effects in the H$_{2}$O$_{2}$ molecule play an essential role
in the redistribution of the energy attained from multiphoton absorption. A
reduction of the reaction rate at higher temperature may seem
counter-intuitive, but the same argument of collisional energy dissipation
explains this effect: at high temperatures collisions dominate, and the
increase in the accessible density of states leads to faster dissipation of
the energy gained from photon absorption and hence a reduction in the reaction
rate.
## 5 Conclusion
The discussed model gives a realistic description of the temporal evolution of
the population due to multiphoton absorption in the presence of collisions.
Solving the master equation using a Monte Carlo approach can be applied to
several problems related to EMF-assisted reactions. Through the examples of
DMSO and benzyl chloride, we have discussed how to differentiate between the
thermal and non-thermal effects under EMF radiation. The reduction of the EMF
effect caused by stirring is explained as a collisional effect. The thermal
effect under EMF radiation can be seen as a shift of the internal energy
distribution due to the dielectric loss, leading to an increase in the
temperature of the system. The non-thermal effect was shown as a deviation
from the equilibrium distribution, with an unequal number of molecules in the
high and low energy states.
## Acknowledgement(s)
This study is partially based upon work supported by the Air Force Office of
Scientific Research (AFOSR) under contract number FA9550-19-1-0363 and the
National Science Foundation (NSF) under grant number CBET-2110603.
## Disclosure statement
The authors report there are no competing interests to declare.
## Data availability statement
The data that support the findings of this study are openly available at
https://github.com/keludso/EMReact.git and
https://research.ece.ncsu.edu/nanoengineering [40].
## References
* [1] N. Amin, K. Dsouza and D. Vashaee, Applied Physics Letters 112, 093103(2018).
* [2] D. Vashaee, A. Nozariasbmarz, L. Tayebi and J.S. Krasinski, US Patent App. 15/778 704 (2018).
* [3] A. Nozariasbmarz, M. Hosseini and D. Vashaee, Acta Materialia 179, 85–92 (2019).
* [4] A. Nozariasbmarz, F. Suarez, J.H. Dycus, M.J. Cabral, J.M. LeBeau, M.C. Ozturk and D. Vashaee, Nano Energy 67, 104265 (2020).
* [5] K. Rybakov and I. Volkovskaya, Ceramics International 45 (7, Part B), 9567–9572 (2019).
* [6] M. Nüchter, U. Müller, B. Ondruschka, A. Tied and W. Lautenschläger, Chemical Engineering & Technology 26 (12), 1207–1216 (2003).
* [7] S. Horikoshi, T. Watanabe, A. Narita, Y. Suzuki and N. Serpone, Scientific Reports 8 (1), 5151 (2018).
* [8] C.O. Kappe, Angew Chem Int Ed Engl 43 (46), 6250–6284 (2004).
* [9] G. Cravotto and D. Carnaroglio, Microwave Chemistry (2017).
* [10] T.N. Danks and G. Wagner in Microwave Assisted Organic Synthesis (2005), Chap. 4, pp. 75–101.
* [11] G. Camelia, S. Gabriel, E. H. Grant, H.G. Edward, S.J.H. Ben and D. Michael P. Mingos, Chem. Soc. Rev. 27, 213–224 (1998)
* [12] M.A. Herrero, J.M. Kremsner and C.O. Kappe, J Org Chem 73 (1), 36–47 (2007).
* [13] C. Shibata, T. Kashima and K. Ohuchi, Jpn. J. Appl. Phys. 35 (1), 316–319 (1996).
* [14] S. Garbacia, B. Desai, O. Lavastre and C.O. Kappe, The Journal of Organic Chemistry 68 (23), 9136–9139 (2003).
* [15] M. Kanno, K. Nakamura, E. Kanai, K. Hoki, H. Kono and M. Tanaka, J Phys Chem A 116 (9), 2177–2183 (2012).
* [16] J. Jacob, L.H.L. Chia and F.Y.C. Boey, Journal of Materials Science 30 (21), 5321–5327 (1995).
* [17] R. Roy, R. Peelamedu, L. Hurtt, J. Cheng and D. Agrawal, Materials Research Innovations 6 (3), 128–140 (2002).
* [18] C.O. Kappe, Accounts of Chemical Research 46 (7), 1579–1587 (2013).
* [19] de la Hoz Antonio, D.O.Ángel and A. Moreno, Chem. Soc. Rev. 34, 164–178 (2005).
* [20] P.L. Spargo, Organic Process Research & Development 9 (5), 697–697 (2005).
* [21] D. Bogdal, Microwave-assisted Organic Synthesis (2005).
* [22] R. Stannard and W.M. Gelbart, The Journal of Physical Chemistry 85 (24), 3592–3599 (1981).
* [23] I. W. M. Smith, J. Chem. Soc., Faraday Trans. 93, 3741–3750 (1997).
* [24] J. Ma, The Journal of Physical Chemistry A 120 (41), 7989–7997 (2016).
* [25] Y. Hu, D. Ma and J. Ma, The Journal of Physical Chemistry A 125 (12), 2690–2696 (2021).
* [26] L. Perreux and A. Loupy, Tetrahedron 57 (45), 9199–9223 (2001).
* [27] J.S. Shin, C.J. Choi, K.H. Jung and C. Jung Kim, Journal of Photochemistry and Photobiology A: Chemistry 84 (2), 105–112 (1994).
* [28] B. Toselli, J.C. Ferrero and E.H. Staricco, The Journal of Physical Chemistry 89 (8), 1492–1499 (1985).
* [29] D.T. Gillespie, The Journal of Physical Chemistry 81 (25), 2340–2361 (1977).
* [30] D.T. Gillespie, Journal of Computational Physics 22 (4), 403–434 (1976).
* [31] J. Melunis and U. Hershberg, Engineering Applications of Artificial Intelligence 62, 304–311 (2017).
* [32] M.T. Sebastian, in Dielectric Materials for Wireless Communication, Elsevier, Amsterdam, 2008, pp. 11–47
* [33] A.C.Metaxas and R.J.Meredith, Industrial Microwave Heating(1988).
* [34] P. Atkins and J. de Paula, Physical Chemistry (2006).
* [35] J. Troe, The Journal of Chemical Physics 66 (11), 4758–4775 (1977)
* [36] J. Troe, The Journal of Chemical Physics 97 (1), 288–292 (1992).
* [37] M. Strekalov, Chemical Physics Letters 487 (1), 129–132 (2010)
* [38] T. Beyer and D.F. Swinehart, Commun. ACM 16 (6), 379 (1973)
* [39] S.E. Stein and B.S. Rabinovitch, The Journal of Chemical Physics 58 (6), 2438–2445 (1973).
* [40] K. Dsouza and D. Vashaee, EMReact, https://research.ece.ncsu.edu/nanoengineering/monte-carlo-microwave-effects-in-reactions/, https://github.com/keludso/EMReact.git.
* [41] J.R. Barker, International Journal of Chemical Kinetics 33 (4), 232–245 (2001).
* [42] J.R. Barker, International Journal of Chemical Kinetics 41 (12), 748–763 (2009)
* [43] J.R. Barker, Chemical Physics 77 (2), 301–318 (1983).
* [44] J.R. Barker, T.L. Nguyen, J.F. Stanton, C. Aieta, M. Ceotto, F. Gabas, T.J.D. Kumar, C.G.L. Li, L.L. Lohr, A. Maranzana, N.F. Ortiz, J.M. Preses, J.M. Simmie, J.A. Sonk and P.J. Stimac, MultiWell-2017 Software Suite; J. R. Barker, University of Michigan, Ann Arbor, Michigan, USA
* [45] Y.L. Chen, R. Panneerselvam, D.Y. Wu and Z.Q. Tian, Journal of Raman Spectroscopy 48 (1), 53–63 (2017).
* [46] C.A. Naganathappa Mahadevappa, Indian Journal of Pure and Applied Physics (IJPAP) 57 (4) (2019).
* [47] W. Tian, Z. Li and L. Wu, Chemical Physics 528, 110523 (2020)
* [48] S. Horikoshi, T. Watanabe, M. Kamata, Y. Suzuki and N. Serpone, RSC Adv. 5, 90272– 90280 (2015)
* [49] National Center for Biotechnology Information, PubChem Compound Summary for CID 679, Dimethyl sulfoxide. https://pubchem.ncbi.nlm.nih.gov/compound/Dimethyl-sulfoxide.
* [50] J. Cvengros, S. Toma, S. Marque and A. Loupy, Canadian Journal of Chemistry 82 (9), 1365–1371 (2004)
* [51] National Center for Biotechnology Information, PubChem Compound Summary for CID 7503, Benzyl chloride. https://pubchem.ncbi.nlm.nih.gov/compound/Benzyl-chloride.
* [52] P.H. Liao, W.T. Wong and K.V. Lo, Journal of Environmental Engineering and Science 4 (1), 77–81 (2005)
* [53] P.H. LIAO, W.T. WONG and K.V. LO, Journal of Environmental Science and Health, Part A 40 (9), 1753–1761 (2005).
* [54] J. Hunt, A. Ferrari, A. Lita, M. Crosswhite, B. Ashley and A.E. Stiegman, The Journal of Physical Chemistry C 117 (51), 26871–26880 (2013)
* [55] D.C.M. van den Bekerom, J.M.P. Linares, T. Verreycken, E.M. van Veldhuizen, S. Nijdam, G. Berden, W.A. Bongers, M.C.M. van de Sanden and G.J. van Rooij, Plasma Sources Science Technology 28 (5), 055015 (2019)
* [56] Y. Qin, G. Niu, X. Wang, D. Luo and Y. Duan, Journal of CO2 Utilization 28, 283–291 (2018)
* [57] L.B. Ibragimova, G.D. Smekhov, O.P. Shatalov, A.V. Eremin and V.V. Shumova, High Temperature 38 (1), 33–36 (2000)
* [58] M.H. Lietzke and C. Mullins, J. Inorg. Nucl. Chem 43 (1981).
# Vertical Airborne Wind Energy Farms with High Power Density per Ground Area
based on Multi-Aircraft Systems
Jochem De Schutter1, Jakob Harzer1, Moritz Diehl1,2
1Systems Control and Optimization Laboratory, Department of Microsystems
Engineering (IMTEK) and Department of Mathematics, University of Freiburg,
Georges-Koehler-Allee 102, 79110 Freiburg, Germany. 2Department of
Mathematics, University of Freiburg, Ernst-Zermelo-Strasse 1, 79104 Freiburg
im Breisgau, Germany.
###### Abstract
This paper proposes and simulates vertical airborne wind energy (AWE) farms
based on multi-aircraft systems with a high power density (PD) per ground
area. These farms consist of many independently ground-located systems flying
at the same inclination angle, but with different tether lengths, such that
all aircraft fly in a large planar elliptical area that is perpendicular to
the tethers. The individual systems are assigned non-overlapping flight
cylinders depending on the wind direction. Detailed calculations that take
into account Betz’ limit, assuming a cubically averaged wind speed of 7 m/s,
give a potential yearly average PD of 43 MW/km$^{2}$. A conventional wind farm
with typical packing density would yield a PD of 2.4 MW/km$^{2}$ in the same
wind field. More refined simulations using optimal control result in a more
modest PD of 6 MW/km$^{2}$ for practically recommended flight trajectories.
This PD can already be achieved with small-scale aircraft with a wing span of
5.5 m. The simulations additionally show that the achievable PD is more than
an order of magnitude higher than for a single-aircraft AWE system with the
same wing span.
## I Introduction
Because of the abundant availability of wind and solar energy resources, in
principle only a tiny fraction of the earth’s surface area would suffice to
cover all of humanity’s energy needs. Nevertheless, the power density (PD) per
ground surface area of wind and solar power technologies is still a relevant
quantity, since the infrastructure costs of renewable energy farms, such as
grid connection and installation logistics, scale proportionally with the farm
area. The power density of existing wind power farms is estimated to be around
PD = 2 MW/km$^{2}$; for solar PV farms it is around 10 MW/km$^{2}$ [1].
Airborne wind energy (AWE) is an emerging renewable energy technology which
aims at harvesting the steady and strong high-altitude winds that cannot be
reached by conventional wind technology, at only a fraction of the resources.
AWE developers mainly consider single-aircraft AWE systems (S-AWES), based on
the principle of one tethered aircraft flying fast crosswind maneuvers.
However, S-AWES are subject to several limitations that prevent the technology
from increasing PD with respect to conventional wind.
First, S-AWES are characterized by high tether drag dissipation losses. These
losses are inversely proportional to the aircraft size, which is why large and
heavy (and thus costly) aircraft are needed to achieve the efficiency required
for a high PD. Second, large aircraft come with an inherently large turning
radius, which corresponds to a large trajectory footprint on the ground.
Therefore, a very dense geometric spacing of units, possibly with shared
airspace, needs to be achieved for a high PD. Third, since the maximum tether
length is limited by the drag losses, S-AWES in park configuration would all
operate at similar altitudes, which, in combination with close geometric
spacing, would lead to large losses due to wake interaction [2].
Figure 1: Top and side view of a vertical M-AWES wind farm with $N=30$,
$\theta_{\mathrm{e}}=40^{\circ}$, $d/D=0.16$ and $l_{\min}/D=0.42$. The
annotations show the park diameter $D$, the system ground-circle diameter $d$,
the elevation angle $\theta_{\mathrm{e}}$, and the projected width
$D\sin\theta_{\mathrm{e}}$.
To overcome these limitations, this work proposes (and simulates) the concept
of vertical AWE farms based on multi-aircraft AWE systems (M-AWES), as
depicted in Fig. 1. In M-AWES, two or more aircraft fly very tight loops
around a shared, quasi-stationary main tether [3]. Therefore, M-AWES are very
efficient even for small aircraft sizes, while using airspace more
effectively, resulting in a smaller trajectory footprint on the ground. The
proposed vertical M-AWES farm layout additionally exploits the fact that
M-AWES can fly at arbitrarily high locations above the ground, so that they
can operate at distinct locations in the sky, thereby avoiding wake
interaction. A somewhat similar idea based on networked rotary AWE systems was
proposed (but not simulated) in [4].
The remainder of this text is structured as follows. Section II describes the
vertical M-AWES park concept and determines an upper PD limit based on a
simplified analysis. Section III states the system model used in the
simulation study while Section IV proposes an optimal control problem (OCP)
formulation to compute PD-optimal power cycles. Section V describes the
numerical results of a case study, where we compute PD-optimal orbits for both
S- and M-AWES, for a small and moderate aircraft size, and where we
investigate the trade-off between PD and wing area efficiency. Section VI
discusses the main conclusions and gives suggestions for future research.
## II Vertical M-AWES parks
The vertical M-AWES parks proposed in this work are circular and consist of
many independently ground-located M-AWES that all fly at the same tether
inclination angle $\theta_{\mathrm{e}}$, but with different tether lengths
depending on the wind direction, such that all systems fly in a large planar
elliptical area that is perpendicular to the tethers, as shown in Fig. 1.
The individual systems are assigned non-overlapping “operation cylinders” that
depend on the wind direction and correspond to a circular ground area.
Otherwise, the individual systems are completely independent and can, e.g., be
started and landed independently. Each system might be located on a small
tower of, e.g., 5 m height in order to minimize its impact on the usability of
the ground area for agriculture.
The distance from one system to all others on the ground is lower bounded by
the maximum diameter $d$ of each system’s ground area circle. The optimal
circle packing density of $N$ circles in a large circle of diameter $D$
depends on $N$ and is assumed to be $\rho_{\mathrm{circle}}=70\%$ here. Thus,
from now on we assume that
$N\frac{\pi d^{2}}{4}=\rho_{\mathrm{circle}}\frac{\pi D^{2}}{4}$ (1)
which means that we choose to build altogether
$N=\rho_{\mathrm{circle}}D^{2}/d^{2}$ individual units in the park. This
abstracts from the individual system’s size $d$ and number $N$, such that we
only need to remember the packing loss factor $\rho_{\mathrm{circle}}$.
Together, the systems form a large inclined elliptical area in the sky, and
all wings fly in this elliptical area. The area is perpendicular to the
tethers, and thus forms an ellipse with a maximum width that is equal to $D$,
but a minimum width of $D\sin\theta_{\mathrm{e}}$. The M-AWES are assumed to
be able to fly ellipses with approximately this aspect ratio, so that the circle
packing from the ground can be mapped by an affine transformation to the
ellipse packing in the sky. The elliptical area forms an angle of
$\theta_{\mathrm{e}}$ with the vertical, and thus, the effective area of this
“actuation ellipse” is again reduced by a factor $\cos\theta_{\mathrm{e}}$
resulting in a height of the inclined ellipse of only
$D\cos\theta_{\mathrm{e}}\sin\theta_{\mathrm{e}}$.
The shortest tether length $l_{\min}$ defines the location of the lowest point
of the ellipse, which is located $l_{\min}\sin\theta_{\mathrm{e}}$ above the
ground and $l_{\min}\cos\theta_{\mathrm{e}}$ downwind, extending the ground
boundaries of the wind park. This causes an extended park diameter
$D+2l_{\min}\cos\theta_{\mathrm{e}}$, defining a circle above which the AWE
systems can fly. This virtual area enlargement does not lead to increased
infrastructure costs and is therefore neglected.
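As a quick consistency check, the park geometry relations above can be evaluated numerically. The sketch below (the dimensions $D$, $d$ and $l_{\min}$ are hypothetical example values, not taken from the paper) implements Eq. (1), the actuation-ellipse height $D\cos\theta_{\mathrm{e}}\sin\theta_{\mathrm{e}}$ and the extended park diameter:

```python
import math

def park_geometry(D, d, theta_e, l_min, rho_circle=0.7):
    """Geometric quantities of a vertical M-AWES park (Section II).

    D: park ground diameter [m]; d: per-system ground circle diameter [m];
    theta_e: tether elevation angle [rad]; l_min: shortest tether length [m].
    """
    N = rho_circle * D**2 / d**2                                 # number of units, Eq. (1)
    ellipse_width = D                                            # major axis of actuation ellipse
    ellipse_height = D * math.cos(theta_e) * math.sin(theta_e)   # projected minor axis
    D_ext = D + 2 * l_min * math.cos(theta_e)                    # extended park diameter
    return N, ellipse_width, ellipse_height, D_ext

# Example with hypothetical park dimensions:
N, w, h, D_ext = park_geometry(D=1000.0, d=100.0,
                               theta_e=math.radians(45), l_min=200.0)
```

For $\theta_{\mathrm{e}}=45^{\circ}$ the projected ellipse height is exactly half the park diameter, matching the maximum geometric efficiency of 0.5 quoted below.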
We assume that the M-AWES wing area is adapted to reach the Betz limit
$\eta=16/27$ on the available flight area assigned to each individual system.
The overall power of the wind park is then proportional to its ground area,
but affected by a variety of losses:
* •
the circle packing loss $\rho_{\mathrm{circle}}=70\%$;
* •
the geometric area reduction efficiency that reaches a maximum value of
$\cos\bar{\theta}_{\mathrm{e}}\sin\bar{\theta}_{\mathrm{e}}=0.5$ for
$\bar{\theta}_{\mathrm{e}}=45^{\circ}$;
* •
the Betz factor $\eta=16/27$;
leading altogether to an effective efficiency of
$\eta_{\mathrm{tot}}=\eta\cos\bar{\theta}_{\mathrm{e}}\sin\bar{\theta}_{\mathrm{e}}\rho_{\mathrm{circle}}=0.3\rho_{\mathrm{circle}}=21\%\
.$ (2)
The maximum power density is then given by
$\mathrm{PD}_{\mathrm{max}}=\frac{1}{2}\eta_{\mathrm{tot}}\rho_{\mathrm{air}}v^{3}.$
(3)
For example, for a wind speed of $v=7\ \mathrm{m/s}$ and
$\rho_{\mathrm{air}}=1.2\ \mathrm{kg/m}^{3}$ this results in a
$\mathrm{PD}_{\mathrm{max}}$ of 43 MW/km2.
To obtain an estimate for a conventional wind energy farm operating in the
same wind field, we assume operation at Betz’ limit and a circular packing
with a distance of at least 6 rotor diameters between the systems. There are
no geometric area reduction losses, resulting in a total efficiency
$\eta_{\mathrm{tot}}=1.2\%$ and a $\mathrm{PD}_{\mathrm{max}}$ of 2.4 MW/km2
when wake losses are ignored. This PD is a factor of 17 lower than the potential
PD of the vertical M-AWES farm.
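The two power-density estimates above can be reproduced in a few lines. The numbers below use only values stated in the text (Betz factor, packing loss, $45^{\circ}$ elevation, 6-rotor-diameter spacing); note that 1 W/m² equals 1 MW/km²:

```python
# Back-of-the-envelope PD estimates from Section II (all inputs from the text).
eta_betz = 16 / 27      # Betz limit
rho_circle = 0.7        # circle packing loss
rho_air = 1.2           # air density [kg/m^3]
v = 7.0                 # wind speed [m/s]

# Vertical M-AWES park, Eqs. (2)-(3): geometric factor cos(45 deg) sin(45 deg) = 0.5.
eta_awes = eta_betz * 0.5 * rho_circle
pd_awes = 0.5 * eta_awes * rho_air * v**3    # W/m^2 == MW/km^2

# Conventional farm: one rotor disk of diameter D_r inside a circle of diameter 6 D_r.
eta_conv = eta_betz * (1 / 6)**2 * rho_circle
pd_conv = 0.5 * eta_conv * rho_air * v**3

# pd_awes is approximately 43 and pd_conv approximately 2.4 MW/km^2, as quoted in the text.
```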
## III System model
To make a more realistic assessment, we will compute detailed, PD-optimal
orbits for individual S- and M-AWES in this work. We consider “lift-mode” AWE
systems, where power is produced in a periodic fashion: first, the tether is
reeled out at high tension, driving a winch at the ground station. Then,
tether is reeled back in again at low tension, resulting in a net positive
energy gain. This section presents the M-AWES dynamics used in the optimal
control problem formulation in Section IV and introduces an averaged induction
model to account for the Betz losses.
### III-A Multi-aircraft dynamics
In the following simulations, we use the multi-aircraft model structure and
model parameters described in [5]. The system dynamics model all six degrees-
of-freedom of the aircraft in the system. The tethers are assumed to be
straight and inelastic, which is a good assumption when tether tension is
high. The dynamics are expressed in non-minimal coordinates and summarized by
the implicit DAE
$F(\dot{x}(t),x(t),u(t),z(t),\theta,a)=0$ (4)
and consistency conditions $C(x(t))=0$.
The system variables consist first of the state vector
$x\in\mathbb{R}^{n_{\mathrm{x}}}$. The control vector
$u\in\mathbb{R}^{n_{\mathrm{u}}}$ consists of the aircraft aileron, elevator
and rudder deflection rates as well as the tether reeling acceleration. The
algebraic state $z\in\mathbb{R}^{n_{\mathrm{z}}}$ consists of the Lagrange
multipliers related to the constraints that define the interlinkage of
aircraft and tethers. The system parameters $\theta\in\mathbb{R}^{n_{\theta}}$
represent parameters that can be optimized over, such as the main tether
diameter and, in the M-AWES case, the secondary tether length and diameter.
The variable $a\in\mathbb{R}$ is the average induction factor that will be
explained in Section III-B.
For the sake of brevity, we refer the reader to [5] for a complete and formal
description of the system variables, dynamics, aerodynamic forces, consistency
conditions, etc. Here we only explicitly discuss those model components
relevant for the wind power availability. We model the wind shear in a
simplified way with a power law approximation:
$u_{\infty}(z)=u_{\mathrm{ref}}\left(\frac{z}{z_{\mathrm{ref}}}\right)^{c_{\mathrm{f}}}\
,$ (5)
with $u_{\infty}(z)$ the freestream velocity at altitude $z$ and
$u_{\mathrm{ref}}$ the reference wind speed measured at altitude
$z_{\mathrm{ref}}=100$ m, with $c_{\mathrm{f}}=0.15$ a surface friction
coefficient typical for flat, open terrain.
The atmospheric density drop with altitude is modeled using the international
standard atmosphere model [6]:
$\rho(z)\coloneqq\rho_{\mathrm{0}}\left(\frac{T_{0}-T_{\mathrm{L}}z}{T_{0}}\right)^{\frac{g}{T_{\mathrm{L}}R}-1}\
,$ (6)
where $R$ is the specific gas constant of dry air. The parameters $T_{0}$ and $\rho_{0}$
are the temperature and air density at sea level, and $T_{\mathrm{L}}$ is the
temperature lapse rate.
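A minimal sketch of the two atmospheric models, Eqs. (5) and (6). The numeric constants ($\rho_0$, $T_0$, $T_{\mathrm{L}}$, $g$, $R$) are standard ISA values assumed here, since the text does not list them:

```python
def wind_speed(z, u_ref=7.0, z_ref=100.0, c_f=0.15):
    """Power-law wind shear, Eq. (5): freestream speed at altitude z [m]."""
    return u_ref * (z / z_ref)**c_f

def air_density(z, rho0=1.225, T0=288.15, T_L=0.0065, g=9.81, R=287.05):
    """ISA density drop with altitude, Eq. (6).

    rho0 [kg/m^3] and T0 [K] at sea level, T_L [K/m] lapse rate,
    R [J/(kg K)] specific gas constant of dry air (assumed ISA constants).
    """
    return rho0 * ((T0 - T_L * z) / T0)**(g / (T_L * R) - 1)
```

At 1 km altitude this gives a density of about 1.11 kg/m³, in line with the standard atmosphere tables.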
The model is based on a validated, small aircraft model with wing span
$b_{\mathrm{ref}}=5.5\ \mathrm{m}$, mass $m_{\mathrm{ref}}=36.5\ \mathrm{kg}$
and inertia tensor $J_{\mathrm{ref}}$ given in [7]. In order to be able to
evaluate the dynamics also for larger wing spans $b$, we utilize the following
mass upscaling formula:
$\displaystyle m$
$\displaystyle=m_{\mathrm{ref}}\left(\frac{b}{b_{\mathrm{ref}}}\right)^{\kappa}\
,\quad\text{and}\quad
J=J_{\mathrm{ref}}\left(\frac{b}{b_{\mathrm{ref}}}\right)^{\kappa+2}\ ,$ (7)
with upscaling exponent $\kappa=2.4$.
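The mass upscaling law of Eq. (7) can be sketched as follows; applied to the moderate wing span $b=26$ m used later in the paper, it yields an aircraft mass of roughly 1.5 t:

```python
def upscale_mass(b, b_ref=5.5, m_ref=36.5, kappa=2.4):
    """Mass upscaling, Eq. (7); the inertia tensor scales with exponent kappa + 2."""
    return m_ref * (b / b_ref)**kappa

m_26 = upscale_mass(26.0)   # roughly 1.5 tonnes for the moderate-size aircraft
```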
### III-B Induction model
State-of-the-art induction models for AWE are typically a variation of Betz’
analysis for conventional wind turbines, thus based on a steady-state analysis
[8, 9]. However, detailed wind field simulations [2] show that for lift-mode
AWE systems, induction is inherently time-dependent: at the beginning of the
reel-out phase, induction starts to build up, reaching its peak when
transitioning into the reel-in phase, after which it starts to decline. When
the power cycle re-starts, the wind is almost “fresh” again.
While the induction model proposed in [10] accounts for the dynamic
variability of the aircraft trajectories over a power cycle, it still assumes
an instantaneous build-up of induction. Therefore, in this work, we propose
the following model, based on a momentum balance applied to averaged flight
quantities.
We first compute the annular swept area for each aircraft $k\in\mathcal{K}$,
with $\mathcal{K}$ the index set of all aircraft in the system. We integrate
over the reel-out phase the norm of the aircraft’s flight speed $\dot{q}$
multiplied with the wing span, weighted with the local dynamic pressure:
$A_{\mathrm{s},k}\coloneqq\int\limits_{0}^{T_{\mathrm{ro}}}\frac{1}{2}\rho(q_{\mathrm{z},k}(t))u^{2}_{\mathrm{\infty}}(q_{\mathrm{z},k}(t))b\lVert\dot{q}_{k}(t)\rVert\mathrm{d}t\
,$ (8)
where, by including the dynamic pressure inside the integral, we account for
variability of wind speed and air density along the trajectory. The parameter
$T_{\mathrm{ro}}$ is the reel-out phase duration, and $q_{\mathrm{z},k}$ is
the vertical position component of aircraft $k$.
We assume that the force acting on this annulus is the main tether force, and
that during the reel-in phase, this force is zero. The average tether force
over one power cycle of period $T$ is then given by:
$\bar{F}_{\mathrm{t}}\coloneqq\frac{1}{T}\int\limits_{0}^{T_{\mathrm{ro}}}F_{\mathrm{t}}(t)\mathrm{d}t\
,$ (9)
with the expression of the main tether force $F_{\mathrm{t}}(t)$ in [5].
Momentum conservation applied to these average quantities then gives an
algebraic equation for the average induction factor $a$, i.e.:
$\bar{F}_{\mathrm{t}}=4a(1-a)\sum\limits_{k\in\mathcal{K}}A_{\mathrm{s},k}\ .$
The apparent wind speed that each aircraft $k$ experiences is then given by
$u_{\mathrm{a},k}\coloneqq(1-a)u_{\infty}(q_{\mathrm{z},k})e_{\mathrm{x}}-\dot{q}_{k}$
(10)
with $e_{\mathrm{x}}\coloneqq\begin{bmatrix}1&0&0\end{bmatrix}^{\top}$ the
unit vector in the $x$-direction.
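Given the averaged tether force and the total swept area, the momentum balance above is a scalar quadratic in $a$. A minimal solver for the physical root $a\in[0,0.5]$ might look as follows; the inputs are illustrative, and the dynamic-pressure weighting of Eq. (8) is assumed to be folded into the swept-area terms:

```python
import math

def average_induction(F_bar, A_total):
    """Solve F_bar = 4 a (1 - a) A_total for the physical root a in [0, 0.5].

    F_bar: cycle-averaged tether force; A_total: sum of the (dynamic-pressure
    weighted) swept areas A_{s,k} over all aircraft, as in Eq. (8).
    """
    r = F_bar / A_total
    if r > 1.0:
        raise ValueError("momentum balance has no real solution (F_bar too large)")
    # 4a(1-a) = r  =>  a^2 - a + r/4 = 0  =>  smaller root:
    return 0.5 * (1.0 - math.sqrt(1.0 - r))
```

The smaller root is chosen so that $a\to r/4$ for light loading and $a\to 0.5$ as the loading approaches the momentum-theory limit, consistent with the bound $a<0.5$ imposed in Section IV-A.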
Similar to the steady-state models proposed in [8] and [9], this induction
model assumes that both flight annulus and tether force are perpendicular to
the wind speed vector. This is a crude assumption, given that the main tether
elevation angle is intrinsically non-zero. Nevertheless, this averaged
approach gives a first-order account of the time-dependency of the induction
for lift-mode systems, which is necessary in particular for lift-mode orbits
with short reel-out phases.
By adding the values $A_{\mathrm{s},k}$ in the M-AWES case, we assume that the
swept areas of the different aircraft do not overlap. To avoid double-
counting, a no-overlap condition will be enforced as a constraint in the
optimal control problem. For S-AWES, we will check a posteriori that the swept
area does not self-overlap.
## IV Problem formulation
This section discusses the path constraints used for the simulations in this
work, and introduces a constraint which expresses the available flight
cylinder as a function of the radius of the corresponding circular ground
area. Then, this section presents a periodic OCP formulation to compute PD-
optimal power cycles based on the dynamics presented in the previous section.
### IV-A Flight envelope
Path constraints need to be enforced along the trajectory, to avoid flight
envelope violations and to preserve structural integrity of the airframes and
the tethers. More concretely, we impose the following constraints:
* •
Tether stress should not exceed the material yield stress with a safety factor
3.
* •
The tether force should be strictly positive to avoid tether sag and to
preserve model validity.
* •
Aircraft roll and pitch angles should be smaller than $90^{\circ}$ to avoid collision
with the tether. Note that for real-world trajectories, a larger safety factor
would be appropriate.
* •
The angle-of-attack and side-slip angle of all aircraft are bounded to avoid
stall and preserve model validity.
* •
The aileron, elevator and rudder control surfaces and their rates are bounded.
* •
The aircraft should remain above the ground with a safety distance of 100 m.
* •
The tether length is bounded from above by a value $l_{\mathrm{t,max}}$ = 700
m. For the M-AWES, this constraint is active at the end of the reel-out phase.
* •
The induction factor $a$ should be positive and smaller than 0.5 to avoid flow
acceleration or reversal.
We refer the reader to [5] for those numerical bound values not mentioned
explicitly in this text.
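These path constraints are stacked into the vector-valued inequality $h\geq 0$ used in the optimal control problem. A minimal sketch of a few representative entries follows; the numeric values not stated in the text (e.g. the tether yield stress) are left as inputs and the example values are hypothetical:

```python
def path_constraints(tether_stress, yield_stress, tether_force, altitude,
                     tether_length, a, l_t_max=700.0, safety_factor=3.0):
    """Stack a subset of the Section IV-A path constraints as a list h >= 0."""
    return [
        yield_stress / safety_factor - tether_stress,  # stress with safety factor 3
        tether_force,                                  # strictly positive tension
        altitude - 100.0,                              # 100 m ground clearance
        l_t_max - tether_length,                       # tether length bound
        a,                                             # induction factor >= 0
        0.5 - a,                                       # induction factor <= 0.5
    ]

# Illustrative feasibility check with made-up operating values:
feasible = all(v >= 0 for v in path_constraints(
    tether_stress=2e8, yield_stress=1e9, tether_force=5e3,
    altitude=250.0, tether_length=600.0, a=0.2))
```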
### IV-B Cylindrical flight constraint
A central feature of vertical M-AWES parks is that each system is assigned an
individual, tilted flight cylinder. The intersection of this flight cylinder
with the ground gives a circular area with radius $R$ (= $d/2$). The flight
cylinder has an ellipsoidal cross-section with a major axis length $R$ and a
minor axis length $R\sin\theta_{\mathrm{e}}$, with $\theta_{\mathrm{e}}$ the
elevation angle of the cylinder. This elevation angle is included as an
optimization variable.
We express the flight cylinder constraint for all aircraft in the system in
the following way. Note that for M-AWES, the constraint is also imposed on the
juncture node between main tether and secondary tethers. First, we rotate the
aircraft position into the ellipsoidal cylinder frame:
$\displaystyle\hat{q}_{\mathrm{y},k}$ $\displaystyle\coloneqq
q_{\mathrm{y},k}$ (11) $\displaystyle\hat{q}_{\mathrm{z},k}$
$\displaystyle\coloneqq
q_{\mathrm{z},k}\cos(\theta_{\mathrm{e}})-q_{\mathrm{x},k}\sin(\theta_{\mathrm{e}})\
.$ (12)
Then, we define the constraining ellipse axes as
$\displaystyle\hat{R}_{\mathrm{y}}$ $\displaystyle\coloneqq R-b/2$ (13)
$\displaystyle\hat{R}_{\mathrm{z}}$ $\displaystyle\coloneqq
R\sin\theta_{\mathrm{e}}-b/2\ ,$ (14)
which ensures that the entire wing with span $b$ remains in the ellipse. The
constraint then reads as
$\frac{\hat{q}_{\mathrm{z},k}^{2}}{\hat{R}_{\mathrm{z}}^{2}}+\frac{\hat{q}_{\mathrm{y},k}^{2}}{\hat{R}_{\mathrm{y}}^{2}}\leq
1\ .$ (15)
These flight constraints, together with the constraints mentioned in Section IV-A,
are summarized by the expression $h(x(t),u(t),z(t),\theta,a)\geq 0$.
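Equations (11)-(15) can be checked pointwise with a small helper. The sketch below rotates an aircraft position into the cylinder frame and evaluates the ellipse inequality; the example radius, span and elevation angle are illustrative only:

```python
import math

def in_flight_cylinder(q, R, theta_e, b):
    """Elliptical flight-cylinder constraint, Eqs. (11)-(15), for one aircraft.

    q = (q_x, q_y, q_z): position [m]; R: ground circle radius [m];
    theta_e: elevation angle [rad]; b: wing span [m].
    """
    q_x, q_y, q_z = q
    q_y_hat = q_y                                                  # Eq. (11)
    q_z_hat = q_z * math.cos(theta_e) - q_x * math.sin(theta_e)    # Eq. (12)
    R_y = R - b / 2                                                # Eq. (13)
    R_z = R * math.sin(theta_e) - b / 2                            # Eq. (14)
    return (q_z_hat / R_z)**2 + (q_y_hat / R_y)**2 <= 1.0          # Eq. (15)
```

Shrinking the ellipse axes by half a wing span, as in Eqs. (13)-(14), keeps the entire wing rather than just its center inside the assigned cylinder.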
In the M-AWES case, we need to ensure that the swept areas of the two
connected aircraft do not overlap during the reel-out phase, to preserve model
validity. For the purposes of this work, we therefore propose to pre-structure
the M-AWES OCP so that the solution consists of one single loop: half a loop
for the reel-out phase, and half a loop for the reel-in phase. During the
reel-out phase, each aircraft is assigned one half of the flight cylinder.
During the reel-in phase, the two aircraft switch flight regions.
Formally, we express the no-overlap constraint for aircraft $k$ as
$\displaystyle
h_{\mathrm{no},k}(x,\theta_{\mathrm{e}},\phi_{0})\coloneqq\hat{q}_{\mathrm{z},k}\cos(\phi_{0})-\hat{q}_{\mathrm{y},k}\sin(\phi_{0})\
.$ (16)
The angle $\phi_{0}$ rotates the intersecting half-plane that divides the
flight cylinder in two and can be chosen freely by the optimizer.
For a dual-aircraft system, with aircraft nodes $k\in\mathcal{K}=\\{2,3\\}$,
the no-overlap condition for the two aircraft is combined with a phase-fixing
constraint on the tether reel-out speed $\dot{l}_{\mathrm{t}}$:
$h_{\mathrm{no}}(x,\theta_{\mathrm{e}},\phi_{0})\coloneqq\begin{bmatrix}h_{\mathrm{no},2}(x,\theta_{\mathrm{e}},\phi_{0})\\\
-h_{\mathrm{no},3}(x,\theta_{\mathrm{e}},\phi_{0})\\\
\dot{l}_{\mathrm{t}}\end{bmatrix}\ .$ (17)
This combined constraint is enforced as $h_{\mathrm{no}}\geq 0$ during the reel-out
phase and as $-h_{\mathrm{no}}\geq 0$ during the reel-in phase, which simultaneously
separates the two aircraft and fixes the phase of the power cycle.
### IV-C Optimal control problem
We can now directly compute periodic flight trajectories that optimize the
power density, by solving the following periodic OCP:
$\begin{aligned}
\underset{\substack{x(\cdot),u(\cdot),z(\cdot)\\ \theta,a,T_{\mathrm{ro}},T_{\mathrm{ri}}\\ R,\theta_{\mathrm{e}},\phi_{0}}}{\mathrm{min}}\quad & -\frac{1}{T}\int\limits_{0}^{T}\rho_{\mathrm{circle}}\frac{P(t)}{\pi R^{2}}\,\mathrm{d}t \\
\mathrm{s.t.}\quad F(\dot{x}(t),x(t),u(t),z(t),\theta,a) &= 0, \quad && \forall t\in[0,T], \\
h(x(t),u(t),z(t),\theta,a) &\geq 0, \quad && \forall t\in[0,T], \\
h_{\mathrm{no}}(x(t),\theta_{\mathrm{e}},\phi_{0}) &\geq 0, \quad && \forall t\in[0,T_{\mathrm{ro}}], \\
-h_{\mathrm{no}}(x(t),\theta_{\mathrm{e}},\phi_{0}) &\geq 0, \quad && \forall t\in(T_{\mathrm{ro}},T], \\
x(0)-x(T) &= 0, \\
\bar{F}_{\mathrm{t}}-4a(1-a)\sum\limits_{k\in\mathcal{K}}A_{\mathrm{s},k} &= 0,
\end{aligned}$
where the overall time $T$ is defined as the sum of the reel-out time and the
reel-in time, which are free optimization variables: $T\coloneqq
T_{\mathrm{ro}}+T_{\mathrm{ri}}$. The initial and final state of the
trajectory are free, but must be equal. The cost function is chosen so as to
maximize the average power output divided by the circular ground area occupied
by the system. This PD is multiplied with $\rho_{\mathrm{circle}}$ to account
for packing losses.
The M-AWES OCP is discretized using direct collocation with 40 intervals, and
Radau polynomials of degree 4. For S-AWES, the no-overlap conditions in (17)
are omitted, but the phase-fixing constraint is retained. Since the optimal
time period is larger for this problem, we increase the number of collocation
intervals to 100.
The NLP is formulated in Python using the open-source AWE optimal control
framework AWEbox [11], which builds on the symbolic framework for algorithmic
differentiation and nonlinear optimization CasADi [12]. AWEbox solves the NLP
with IPOPT [13] and the linear solver MA57 [14].
## V Numerical results
This section presents and discusses PD-optimal periodic orbits for M-AWES and
S-AWES, both for a small and moderate aircraft size. In a second step,
periodic orbits for each variant are computed for a range of fixed values for
the ground circle radius $R$, in order to investigate the trade-off between
power density and wing area efficiency. We conclude with a critical discussion
of the obtained results in the light of the modeling assumptions.
### V-A Optimal power density solutions
The periodic OCP is solved for both an S-AWES and an M-AWES with two aircraft.
First we use the small wing span ($b=5.5$ m) of the original aircraft model.
Then we do the same for an upscaled version of the same model ($b=26$ m).
Table I summarizes the optimal results for the different variants. All of the
results have a similar optimal elevation angle
$\theta_{\mathrm{e}}^{\ast}\approx 40^{\circ}$. This is close to the
theoretically optimal value of $45^{\circ}$ obtained in Section II.
Fig. 2 shows the optimal trajectories for the small-size S-AWES and M-AWES.
The S-AWES has a rather large optimal ground circle radius $R^{\ast}=46.5$ m,
and very low power output because of the large tether drag losses. Hence the
power density is also impractically low (0.2 MW/km2). The optimal radius for
the M-AWES is a factor 2.5 smaller, at a value only slightly larger than three
wing spans. The dual-aircraft configuration thus allows the system to fly
extremely tight circles. Combined with the efficiency gain due to reduced
tether drag and the increased flying altitude, this results in a power density
that is 35 times higher (7.3 MW/km2) than for S-AWES. Optimizing for power
density drives this system to make very efficient use of the available
airspace: the optimal induction factor $a^{\ast}=0.21$ is close to the
theoretically optimal value of 1/3. For the S-AWES, induction is almost
negligible.
Figure 2: PD-optimal flight trajectories in ellipsoidal coordinates ($\hat{y}$, $\hat{z}$ in m) for $b=5.5$ m, with $\theta^{\ast}_{\mathrm{e}}\approx 40^{\circ}$; S-AWES and M-AWES panels, with the reel-out phase marked.
Figure 3: PD-optimal flight trajectories in ellipsoidal coordinates ($\hat{y}$, $\hat{z}$ in m) for $b=26.0$ m, with $\theta_{\mathrm{e}}^{\ast}\approx 40^{\circ}$; S-AWES and M-AWES panels, with the reel-out phase marked.
Fig. 3 shows the optimal trajectories for the moderate-size S-AWES and M-AWES.
The S-AWES solution improves on two fronts compared to the small-size results.
First, since the relative tether drag contribution to the total system drag
decreases with increasing aircraft size, the moderate-size system is more
efficient and flies at a higher altitude, resulting in a significantly higher
power output. Second, for a more than four times larger aircraft, the ground
area radius only increases by a factor of 2.5, the induction factor increases
by a factor of 5, and the PD (1.7 MW/km2) increases by a factor larger than 6.
For the M-AWES, the ground radius increases by a factor of 5, but the larger
aircraft size also reduces the impact of the secondary tether drag, which is
why the power density increases slightly to 8.4 MW/km2, still almost a
factor of 5 larger than for the moderate-size S-AWES.
Note that while in the envisioned vertical wind farms, M-AWES need an
individual flight cylinder to avoid collisions, this need not be the case for
S-AWES. In fact, S-AWES can be packed closer together than done in this work,
even with overlapping flight cones when synchronized properly, as proposed in
[15]. From this perspective, the obtained PD results are too pessimistic.
However, the model used in this work neglects wake interaction effects within
the farm, which is a good assumption for a vertical M-AWES farm, but not for
densely and horizontally packed S-AWES, in particular as they grow larger. The
induction factors obtained in this study are small ($<$ 0.05) but they might
still have a non-negligible effect on the power output of downstream systems.
In this sense, the proposed packing densities might not be overly
conservative.
TABLE I: PD-optimal (A-D) and practically recommended (E) solution parameters and outputs.

| Label | System | $b$ [m] | $a$ [-] | $R$ [m] | PD [$\frac{\text{MW}}{\text{km}^{2}}$] | $\bar{P}$ [kW] |
|---|---|---|---|---|---|---|
| A | S-AWES | 5.5 | 0.01 | 46.5 | 0.2 | 2.0 |
| B | M-AWES | 5.5 | 0.21 | 18.4 | 7.3 | 11.0 |
| C | S-AWES | 26.0 | 0.05 | 116.8 | 1.7 | 107.0 |
| D | M-AWES | 26.0 | 0.09 | 96.0 | 8.4 | 347.6 |
| E | M-AWES | 5.5 | 0.13 | 30.4 | 5.9 | 24.5 |
### V-B Trade-off between ground and wing area
Optimizing for power density results in a suboptimal solution in terms of
average power output for a given wing area
$\bar{P}_{\mathrm{S}}\coloneqq\bar{P}/(\lvert\mathcal{K}\rvert S)$, with $S$
the aerodynamic surface of a single aircraft in the system. In practice, a
trade-off between these two objectives needs to be found: for a given power
output, we want to both minimize the trajectory footprint and the required
wing area.
Starting from the PD-optimal solution, the Pareto front between PD and
$\bar{P}_{\mathrm{S}}$ is constructed by re-solving the OCP for fixed and
increasing values of $R$. Fig. 4 shows the result of this parametric sweep for
all variants. After a certain value of $R$, the $\bar{P}_{\mathrm{S}}$-optimal
solution is reached and the cylindrical flight constraint becomes inactive for
all larger $R$.
For increasing $R$, all system variants are able to increase power output in
two ways. First, the systems fly at lower, more power-optimal elevation
angles, down to a value $\theta_{\mathrm{e}}\approx 25^{\circ}$ for all
variants. The larger value of $R$ compensates for the reduction of the minor
ellipse axis by $\sin\theta_{\mathrm{e}}$. Second, the systems fly trajectories with a
larger harvesting area, which results in a lower induction factor but overall
in a net increase in power.
The M-AWES power output increases by up to a factor of 2.9 for the small-size
system and by up to a factor of 1.7 for the moderate-size system, at the cost
of reducing the PD by a factor of 2.6 in both cases. A good compromise for the
small-size system might be PD $\approx$ 6 MW/km2 and
$\bar{P}_{\mathrm{S}}\approx$ 4 kW/m2, marked by the label “E” in Fig. 4 and
summarized in Table I. Interestingly, the small-size M-AWES dominates the
moderate-size S-AWES by a large margin. M-AWES based on small
aircraft can thus be efficiently deployed both as a single unit for small-
scale applications, as well as in AWE farms for utility-scale electricity
generation.
Figure 4: Pareto efficiency front between power density and power per wing
area for small- and moderate-size S-AWES and M-AWES; the solutions A-E of
Table I are marked.
## VI Conclusion
In this paper we proposed vertical M-AWES farms with high PD per ground area.
We determined the theoretical potential of these farms and computed and
compared detailed PD-optimal flight trajectories for both M-AWES and S-AWES of
different sizes. The achieved PD of the recommended small-size M-AWES design
“E” is significantly lower than the theoretical estimate, by a factor of 7. A
major source of this loss is that the optimal flight annulus covers only part
of the elliptical cross-section, and thus does not exploit the total available
harvesting area. Future work should explore M-AWES trajectories that use more
area of the elliptical cylinders in order to achieve power densities that are
closer to what is theoretically possible.
## Acknowledgements
This research was supported by DFG via Research Unit FOR 2401 and project
424107692 and by the EU via ELO-X 953348.
## References
* [1] J. Van Zalk, P. Behrens, The spatial extent of renewable and non-renewable power generation: A review and meta-analysis of power densities and their application in the U.S., Energy Policy 123 (2018) 83–91.
* [2] T. Haas, J. De Schutter, M. Diehl, J. Meyers, Large-eddy simulation of airborne wind energy farms, Wind Energy Science 7 (3) (2022) 1093–1135.
* [3] M. Zanon, S. Gros, J. Andersson, M. Diehl, Airborne wind energy based on dual airfoils, IEEE Transactions on Control Systems Technology 21 (2013) 1215–1222.
* [4] R. Read, Kite networks for harvesting wind energy, in: Airborne Wind Energy: Advances in Technology Development and Research, Springer Singapore, 2018.
* [5] J. De Schutter, R. Leuthold, T. Bronnenmeyer, R. Paelinck, M. Diehl, Optimal control of stacked multi-kite systems for utility-scale airborne wind energy, in: Proceedings of the IEEE Conference on Decision and Control (CDC), 2019.
* [6] C. Archer, An introduction to meteorology for airborne wind energy, in: Airborne Wind Energy, Springer Berlin / Heidelberg, 2013.
* [7] E. C. Malz, J. Koenemann, S. Sieberling, S. Gros, A reference model for airborne wind energy systems for optimization and control, Renewable Energy 140 (2019) 1004–1011.
* [8] M. De Lellis, R. Reginatto, R. Saraiva, A. Trofino, The betz limit applied to airborne wind energy, Renewable Energy 127 (2018) 32–40.
* [9] M. Kheiri, V. S. Nasrabad, F. Bourgault, A new perspective on the aerodynamic performance and power limit of crosswind kite systems, Journal of Wind Engineering and Industrial Aerodynamics.
* [10] R. Leuthold, J. De Schutter, E. C. Malz, G. Licitra, S. Gros, M. Diehl, Operational regions of a multi-kite awe system, in: European Control Conference (ECC), 2018.
* [11] https://github.com/awebox (2022).
* [12] J. Andersson, J. Gillis, G. Horn, J. Rawlings, M. Diehl, CasADi – a software framework for nonlinear optimization and optimal control, Mathematical Programming Computation 11 (1) (2019) 1–36.
* [13] A. Wächter, L. T. Biegler, On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming, Mathematical Programming 106 (1) (2006) 25–57.
* [14] HSL, A collection of Fortran codes for large scale scientific computation. (2011).
* [15] P. Faggiani, R. Schmehl, Airborne Wind Energy: Advances in Technology Development and Research, Springer Singapore, 2018, Ch. Design and Economics of a Pumping Kite Wind Park, pp. 391–411.
[1] Department of Mathematics, University of British Columbia, 1984 Mathematics Road, Vancouver, BC V6T 1Z2, Canada
[2] Department of Chemical & Biological Engineering, University of British Columbia, 2360 E Mall, Vancouver, BC V6T 1Z3, Canada
# A Cartesian-octree adaptive front-tracking solver for immersed biological
capsules in large complex domains
Damien P. Huet and Anthony Wachs
###### Abstract
We present an open-source adaptive front-tracking solver for biological
capsules in viscous flows. The membrane elastic and bending forces are solved
on a Lagrangian triangulation using a linear Finite Element Method and a
paraboloid fitting method. The fluid flow is solved on an octree adaptive grid
using the open-source platform Basilisk. The Lagrangian and Eulerian grids
communicate using an Immersed Boundary Method by means of Peskin-like
regularized Dirac delta functions. We demonstrate the accuracy of our solver
with extensive validations: in Stokes conditions against the Boundary Integral
Method, and in the presence of inertia against similar (but not adaptive)
front-tracking solvers. Excellent qualitative and quantitative agreements are
shown. We then demonstrate the robustness of the present solver in a
challenging case of extreme membrane deformation, and illustrate its
capability to simulate inertial capsule-laden flows in complex STL-defined
geometries, opening the door for bioengineering applications featuring large
three-dimensional channel structures. The source code and all the test cases
presented in this paper are freely available.
## 1 Introduction
The numerical study of membrane-enclosed fluid objects, or capsules, has seen
tremendous interest over the past three decades due to the wide range of
applications in the biomedical and bioengineering world. Indeed, numerical
simulation of capsule dynamics in viscous flows is crucial to better
characterize and understand blood flow through capillary microcirculation and
develop applications such as targeted drug delivery [1], migration of
cancerous leukocytes through the microvascular network [2, 3] and cell sorting
and cell characterization in microfluidic devices [4, 5]. In particular, the
latter application has the potential to speed-up labour-intensive diagnosis
procedures or to extract relevant components of biofluids. For instance,
inertial centrifugation in spiral-shaped microchannels has been shown to
efficiently and accurately segregate cells based on their size, and could be
applied to perform non-destructive blood plasma extraction [6, 7, 8].
The mechanical study of capsules was initiated in 1981 by the
pioneering analytical work of Barthès-Biesel & Rallison [9], who derived from
the thin-shell theory a time-dependent expression for the deformation of an
elastic capsule in a shear flow in the limit of small deformations. A decade
later, Pozrikidis went beyond the assumption of small deformations, using the
Boundary Integral Method (BIM) to investigate finite deformations of elastic
capsules in a shear flow [10, 11]. This work was quickly followed by Eggleton
& Popel who simulated spherical and biconcave capsules in shear flows using
the Front-Tracking Method (FTM) [12]. Capitalizing on the advantages of the
BIM $-$ such as a lower computing cost compared to the FTM, and the ability to
simulate true Stokes conditions $-$ Pozrikidis investigated the bending
resistance of capsules and proposed a simplified bending model for biological
membranes valid for small deformations [13], leading to the first numerical
simulation of a red blood cell (RBC) based on the thin-shell theory [14]. The work of
Pozrikidis was later extended by Zhao et al., who proposed a BIM able to
simulate RBCs in complex geometries with up to 30% volume fraction [15]. In
the 2000s, Barthès-Biesel and Lac also used the BIM and studied finite
deformations of capsules devoid of bending resistance: they considered the
effect of the membrane constitutive law and exhibited buckling instabilities
[16] as well as the dynamics of two interacting capsules in a shear flow [17].
Despite the major success of the BIM to simulate capsules and biological
cells, the FTM is still being developed. Indeed, while the FTM is more
computationally intensive than the BIM because it necessitates meshing the
whole 3D fluid domain, and while it can require very small time steps to
satisfy stability conditions depending on the considered membrane forces; the
FTM can handle inertial regimes, thus allowing a wider range of applications
to be examined. As such, Bagchi uses the FTM to perform two-dimensional
simulations of several thousand RBCs in a shear flow [18], allowing the study
of RBC interactions at the mesoscale. In the following years, Doddi & Bagchi [19]
and Yazdani & Bagchi [20, 21, 22] respectively develop three-dimensional
implementations of the elastic membrane stress and of the Helfrich’s bending
stress for biological membranes, the latter not being limited to small
deformations as was the case for the formulation of Pozrikidis. The ability to
consider finite Reynolds numbers allowed Doddi & Bagchi to extend the work of
Lac et al. on capsule interactions to intertial flows [23]. Their framework
was later extended to complex geometries by Balogh & Bagchi [24], enabling
them to study the dynamics of hundreds of RBCs in a microvascular network with
a hematocrit (volume fraction of RBCs) of 30% [25]. Another variant of the FTM
is to use a Lattice-Boltzmann fluid solver rather than the traditional PDE-
based Navier-Stokes solver: this can bring significant performance improvements,
especially at low Reynolds numbers where the Lattice-Boltzmann Method (LBM)
performs well. For instance, Li & Sarkar extended the work of Barthès-Biesel and
Lac on the instabilities of elastic capsules in shear flows using an FTM-LBM
solver [26], and Zhang et al. described a similar framework able to simulate
RBCs [27], including cell-cell aggregation phenomena [28]. More recently, Ames
et al. [29] harnessed the performance improvements of GPUs and demonstrated an
impressive 17 million RBCs simulated in a microvascular network using a
similar FTM-LBM framework.
Other methods to simulate biological capsules and vesicles include the RBC
model of Fedosov et al. [30]. In the work of Fedosov, the RBC membrane
mechanics is not governed by the thin shell theory but rather by a coarse-
grained molecular dynamics model. The membrane of the RBC is discretized, with
each edge representing several nonlinear springs which correspond to elastic
and viscoelastic properties of the membrane of the RBC. The model parameters
are found for an extremely fine mesh of over $27000$ nodes, where the lengths
of the edges correspond to that of biological spectrins. Yet, in Fedosov’s
model practical RBC simulations are conducted with different model parameters
which are intended to display the same mechanical behavior as that obtained
with a fine mesh, but using a number of nodes orders of magnitude lower. These
coarse-grained mechanical properties of the membrane are shown to lead to
results whose accuracy lies within the range of experimental measurement
errors for the specific cases considered. However, the range of validity of
this coarse-graining step is not obvious, and the spatial convergence can be
non-monotonic or even absent (see the transverse diameter plot in figure 1
in [30]), indicating that this model should be used with care. A last approach to capsule
simulations is to adopt a fully Eulerian framework, where the membrane is not
discretized with Lagrangian nodes, edges and faces. Instead, the capsule
configuration is described using the Eulerian grid employed to solve the
Navier-Stokes equations. Removing the need for a Lagrangian grid is
desirable, as the IBM can reduce the spatial and temporal accuracy to first
order if no special treatment is implemented. In the Eulerian capsule
description, Volume-Of-Fluid (VOF), level-set or phase-field methods can be
utilized to track the position of the membrane, similar to what is done in the
context of fluid-fluid interfaces [31, 32, 33, 34]. If the considered membrane
mechanical behavior is independent of the past configuration of the membrane,
for instance if there is no resistance to shear and high resistance to
bending, the membrane forces can be computed using techniques developed for
surface-tension flows: the local curvature can be computed using height-
functions in the case of a VOF description, or by numerically differentiating
the level-set or phase-field function near the interface [35]. However, in
most biological applications the membrane properties do depend on the past
membrane configurations due to the membrane's elastic behavior. In such cases a quantity
representing the membrane stretch needs to be initialized and advected in the
vicinity of the membrane, for instance the left Cauchy-Green deformation
tensor. Ii et al. have demonstrated that this approach is possible and
scalable [36, 37, 38], although more comparisons with the FTM are needed in
order to evaluate the performance and the accuracy of the Eulerian methods,
especially for long-duration simulations.
Concomitant with these developments in capsule simulation, the IBM gained great
popularity in the particle-laden flows community [39, 40, 41], and in the past
two decades several adaptive IBM variants have been proposed for immersed solid
particles. In this context, Roma et al. [42] presented a two-dimensional
adaptive IBM implementation where the Adaptive Mesh Refinement (AMR) is
achieved by means of overlapping rectangles $-$ or “patches” $-$ of finer grid
cells in the regions of interest, i.e. where higher accuracy of the flow field
is needed. This method was later improved and proved second-order accurate by
Griffith et al. [43] and Vanella et al. [44]. Previously, Agreasar et al. [45]
had used a non-patched adaptive FTM-IBM method in order to simulate
axisymmetric circular cells. Their IBM implementation did not use Peskin-like
regularized Dirac delta functions: instead the Lagrangian grid on the membrane
communicates with the background Eulerian grid via an area-weighted
extrapolation. More recently, Cheng & Wachs [46] used the IBM coupled with an
LBM solver to achieve adaptive simulations in the case of a single rigid
sphere in various flow conditions.
The goal of this paper is to present an efficient framework to study the
dynamics of dilute suspensions of capsules in complex geometries, not limited
to non-inertial regimes. As the BIM cannot be used at finite Reynolds numbers,
we use the FTM, and therefore the whole 3D fluid domain is discretized. Since a
vast range of realistic applications involve geometries of sizes orders of
magnitude larger than the typical size of a capsule, requiring hundreds of
millions to billions of Eulerian grid cells when the Cartesian grid size is
constant, we develop an adaptive FTM solver that makes it feasible to
simulate configurations that were previously out of reach with a constant grid
size. We provide the open-source code as part of the Basilisk platform [47,
34, 48, 49].
The paper is organized as follows: in Section 2 we present the problem
formulation and the governing equations for both the fluid and the capsule
dynamics. We describe the implementation of our numerical model in Section 3,
emphasizing the finite element membrane model and the FTM method. Numerous
validation cases are shown in Section 4, for increasingly difficult
configurations: validations are performed by comparing our computed results
against accurate BIM data available in literature whenever possible, otherwise
against other FTM results. Section 5 contains new results generated with the
present method, where the adaptive mesh capability dramatically improves
computational efficiency. In Section 6 we summarize our work and discuss the
strengths, weaknesses and possible improvements of the present method as well
as future perspectives.
## 2 Governing equations
### 2.1 Fluid motion
The fluid phase is assumed Newtonian and incompressible: the fluid surrounding
and enclosed by the elastic membranes is described using the mixture Navier-
Stokes equations:
$\rho\left(\frac{\partial\bm{u}}{\partial
t}+\bm{u}\cdot\nabla\bm{u}\right)=-\nabla
p+\nabla\cdot\left(\mu\left(\nabla\bm{u}+(\nabla\bm{u})^{T}\right)\right)+\bm{f_{b}}$
(1) $\nabla\cdot\bm{u}=0$ (2)
where $\bm{u}$ is the velocity, $p$ is the pressure, $\rho$ is the constant
density and $\mu$ is the variable viscosity field, since we will consider non-
unity viscosity ratios $\lambda_{\mu}=\mu_{i}/\mu_{e}\neq 1$, with $\mu_{i}$
and $\mu_{e}$ the internal and external viscosities. $\bm{f_{b}}$ denotes the
body force containing the membrane elastic and bending force densities acting
on the fluid:
$\bm{f_{b}}=\bm{f}_{\text{elastic}}+\bm{f}_{\text{bending}}=\left(\bm{F}_{\text{elastic}}+\bm{F}_{\text{bending}}\right)/V$,
with $V$ a relevant control volume and $\bm{F}_{\text{elastic}}$ and
$\bm{F}_{\text{bending}}$ the integrated membrane force densities.
### 2.2 Membrane mechanics
We assume that the lipid-bilayer membrane is infinitely thin: note that
this is not a strong assumption for most biological cells, as the thickness of the
biological membrane (lipid bilayer) is about $5\,\text{nm}$ while an RBC characteristic size
is $10\,\mu\text{m}$. A biological membrane undergoing deformation responds with
elastic and bending stresses, described with two distinct mechanical models.
The elastic strains and stresses are described using the theory of thin
shells[50]. We summarize this framework here, but the interested reader is
referred to the work of [9] for more details. In this continuous description
of the capsule, we first introduce the projectors $\bm{P}=\bm{I}-\bm{n}\bm{n}$
and $\bm{P_{R}}=\bm{I}-\bm{n_{R}}\bm{n_{R}}$ onto the current and reference
(stress-free) membrane shapes, with $\bm{I}$ the identity tensor and $\bm{n}$
and $\bm{n_{R}}$ the unit normal vectors to the current and reference
membrane configurations, both oriented outward. The membrane
strains are described using the surface deformation gradient tensor
$\bm{F_{s}}$, derived from the classical deformation gradient tensor $\bm{F}$
as follows:
$\bm{F_{s}}=\bm{P}\cdot\bm{F}\cdot\bm{P_{R}}.$ (3)
The surface right Cauchy-Green deformation tensor $\bm{C_{s}}$ is then defined from
$\bm{F_{s}}$:
$\bm{C_{s}}=\bm{F_{s}^{T}}\cdot\bm{F_{s}}.$ (4)
Let the three eigenvalues of $\bm{F_{s}}$ be $\lambda_{1},\lambda_{2},0$
associated with the eigenvectors $\bm{t_{1}},\bm{t_{2}},\bm{n}$. Then the
eigenbasis of $\bm{C_{s}}$ is the same as that of $\bm{F_{s}}$, associated
with eigenvalues $\lambda_{1}^{2},\lambda_{2}^{2},0$. Note that in the stress-
free configuration, at the beginning of a typical simulation,
$\lambda_{1}=\lambda_{2}=1$, and $\bm{F_{s}}=\bm{C_{s}}=\bm{P}=\bm{P_{R}}$.
The above quantities are useful to compute the membrane elastic stress, which
can be expressed using a surface strain-energy function
$W_{s}(\lambda_{1},\lambda_{2})$:
$\sigma_{i}=\frac{1}{\lambda_{j}}\frac{\partial
W_{s}}{\partial\lambda_{i}},\qquad i\neq j.$ (5)
In this work, two distinct strain-energy functions corresponding to two
membrane elastic laws are used to describe several types of lipid bilayers in
various conditions:
1. 1.
The neo-Hookean law, used to describe vesicles and artificial capsules,
whose strain-energy function is
$W_{s}^{NH}=\frac{E_{s}}{6}\left(\lambda_{1}^{2}+\lambda_{2}^{2}+\frac{1}{\lambda_{1}^{2}\lambda_{2}^{2}}-3\right),$
(6)
where $E_{s}$ denotes the shear modulus.
2. 2.
The Skalak law, used to describe the elastic response of RBC membranes,
whose strain-energy function is
$W_{s}^{Sk}=\frac{E_{s}}{4}\left(I_{1}^{2}+2I_{1}-2I_{2}+CI_{2}^{2}\right),$
(7)
where the invariants $I_{1}=\lambda_{1}^{2}+\lambda_{2}^{2}-2$ and
$I_{2}=\lambda_{1}^{2}\lambda_{2}^{2}-1$ have been introduced, as well as the
area dilatation modulus $C$, which penalizes area changes and is taken
“large” [10, 51] in order to describe the strong area incompressibility of
RBCs. Unless otherwise stated, the value $C=10$ is used in the simulation
results presented below.
Once the elastic stress is known, the elastic force exerted by the membrane
onto the fluid is simply
$\bm{F}_{\text{elastic}}=\nabla\cdot\bm{\sigma},$ (8)
although we will follow the approach of [52] and use the principle of virtual
work instead of directly computing the divergence of the stress, as explained
in Section 3.4.1.
The bending stresses are described using Helfrich’s bending energy per unit
area $\mathcal{E}_{B}$ [53]:
$\mathcal{E}_{B}=2E_{b}\left(\kappa-\kappa_{0}\right)^{2}+E_{g}\kappa_{g}$ (9)
where $\kappa=(\kappa_{1}+\kappa_{2})/2$ is the local mean curvature,
$\kappa_{g}=\kappa_{1}\kappa_{2}$ is the Gaussian curvature, and $E_{b}$ and
$E_{g}$ are their associated bending moduli. $\kappa_{1}$ and $\kappa_{2}$
are the two principal curvatures at a given point of the two-dimensional
membrane sheet, and $\kappa_{0}$ is the reference curvature. Then, the bending
stresses are derived from the total bending energy
$\int_{\Gamma}\left(2E_{b}\left(\kappa-\kappa_{0}\right)^{2}+E_{g}\kappa_{g}\right)dS$
by means of a variational derivative, to yield the normal bending force per
unit area [54]:
$\bm{F}_{\text{bending}}/A=-2E_{b}(\Delta_{s}(\kappa-\kappa_{0})+2(\kappa-\kappa_{0})(\kappa^{2}-\kappa_{g}+\kappa_{0}\kappa))\bm{n},$
(10)
where the operator $\Delta_{s}$ is the surface Laplacian $-$ or Laplace-
Beltrami operator $-$ defined as
$\Delta_{s}=\nabla_{s}\cdot\nabla_{s}=\left((\bm{I}-\bm{n}\bm{n})\cdot\nabla\right)\cdot\left((\bm{I}-\bm{n}\bm{n})\cdot\nabla\right)$,
and $A$ is a relevant control area. Note that $E_{g}$ disappears from the
variational formulation because the integral of $\kappa_{g}$ is a topological
invariant [55, 54].
At this point a parallel with surface tension forces is enlightening: the
surface energy of a droplet is proportional
to the area of the interface, and leads to surface tension forces proportional
to the curvature, i.e. to the second derivative of the interface geometry. In
contrast, as stated above the bending energy of lipid-bilayer
membranes is proportional to the (squared) curvature, and thus the corresponding bending
force depends on the second derivative of the curvature, i.e. on fourth-order
derivatives of the geometry. As such, the numerical simulation of biological
capsules subject to bending stresses is a formidable challenge, and the
interested reader is referred to the reviews of [56, 54]. Our approach to
computing the bending force is described in Section 3.4.2, and corresponds to
method E in [56].
## 3 Numerical method
### 3.1 Adaptive Finite Volume solver for the Navier-Stokes equations
Assuming the body force field $\bm{f_{b}}$ is known, Eqs. (1$-$2) are solved
using the open-source platform Basilisk [48]. The viscous term
$\nabla\cdot(\mu(\nabla\bm{u}+(\nabla\bm{u})^{T}))$ is treated implicitly using a
multigrid Poisson solver [34, 47], the incompressibility condition is
satisfied by the classical projection method of Chorin [57], and the advection
term $\bm{u}\cdot\nabla\bm{u}$ is solved using the second-order Bell-Colella-
Glaz upwind advection scheme [58]. To this end, the divergence-free velocity
field and the viscosity field are located on cell faces, while an approximately
divergence-free velocity field, the pressure field, and the body force field
are all defined at the cell centers.
Basilisk computes the solution of the Navier-Stokes equations on an octree
grid, which allows computational cells to be coarsened and refined throughout the
simulation while keeping a structured mesh. The coarsening and refinement of
grid cells is implemented using a wavelet-based algorithm: for the sake of
completeness we present here a short overview of the adaptivity process, and
the interested reader is referred to [47, 34, 59] for more in-depth
descriptions. Let $f$ be a field of interest whose spatial variations will
govern the size of the grid cells. First, $f$ is downsampled onto a lower
level grid by volume-averaging: we call this downsampled field $f_{d}$. Then,
$f_{d}$ is upsampled back to the original grid using second-order
interpolations, resulting in the downsampled-then-upsampled field $f_{du}$.
Since $f$ and $f_{du}$ are defined on the same (original) grid, the sampling
error field $\epsilon^{i}=\|f^{i}-f^{i}_{du}\|$ can be defined in each
computational cell $i$. Finally, $\epsilon^{i}$ is used to decide if cell $i$
should be coarsened or refined based on an adaptivity criterion $\zeta>0$:
$\begin{cases}\epsilon^{i}>\zeta\quad\Rightarrow\quad\text{refine cell}\;i\\\
\epsilon^{i}<\frac{2\zeta}{3}\quad\Rightarrow\quad\text{coarsen cell}\;i\\\
\frac{2\zeta}{3}\leq\epsilon^{i}\leq\zeta\quad\Rightarrow\quad\text{leave cell
$i$ at its current level}.\end{cases}$ (11)
This wavelet-based algorithm is very versatile as any field of interest can be
used to influence the refinement level of the octree grid. In this study, the
adaptivity is based on the velocity field and on both the presence of domain
boundaries and the capsule membrane. In other words, the fields of interest
that the wavelet adaptivity algorithm considers are: $u_{x}$, $u_{y}$,
$u_{z}$, $c_{s}$ and $\xi$, where the first three scalar fields are the three
components of the velocity field $\bm{u}$, $c_{s}$ is the fluid volume
fraction field (in case of complex geometries), and $\xi$ is a scalar field
which varies strongly in the vicinity of the membrane and is constant
elsewhere $-$ see Section 3.3.2 for a definition of $\xi$. The refinement
criterion $\zeta$ can be different for different fields of interest. In this
study, we choose $\zeta$ to be very small when applied to $c_{s}$ and $\xi$ in
order to impose the maximum level of refinement in the vicinity of walls and
of the membrane (typically we choose $\zeta<10^{-10}$); while we found by
trial and error that having $\zeta$ of the order of $1\%$ of the
characteristic velocity when applied to $u_{x}$, $u_{y}$ and $u_{z}$ leads to
satisfactory refinement and coarsening of computational cells in the rest of
the fluid domain.
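To make the downsample/upsample error estimate concrete, the following is a schematic one-dimensional Python analogue of the wavelet criterion of Eq. (11). It is illustrative only: Basilisk operates on a 3D octree with its own restriction and prolongation operators, and the helper names here are hypothetical.

```python
import numpy as np

def wavelet_error(f):
    """Downsample f to one coarser level by pairwise averaging, upsample back
    by linear interpolation, and return the per-cell sampling error."""
    fd = 0.5 * (f[0::2] + f[1::2])              # coarse-cell averages
    x_fine = np.arange(f.size)                  # fine-cell centers
    x_coarse = 2.0 * np.arange(fd.size) + 0.5   # coarse-cell centers
    fdu = np.interp(x_fine, x_coarse, fd)       # downsampled-then-upsampled
    return np.abs(f - fdu)

def adapt_decision(eps, zeta):
    """Apply the three-way criterion of Eq. (11) to each cell."""
    return np.where(eps > zeta, "refine",
                    np.where(eps < 2.0 * zeta / 3.0, "coarsen", "keep"))
```

A smooth (here linear) field is flagged for coarsening everywhere, while a sharp front is flagged for refinement near the jump, which is the behavior the octree adaptation exploits.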
### 3.2 Second-order treatment of solid embedded boundaries
In Basilisk, complex geometries are handled using the sharp, second-order and
conservative embedded boundaries method of Johansen and Colella [60]. In this
method, the solid boundaries are assumed to cut the Eulerian cells in a
piecewise linear fashion. Boundary conditions are enforced by estimating the
flux on the solid boundary using second-order interpolations on a stencil
involving the surrounding fluid cells. An example of such an interpolation
stencil in two dimensions is shown in figure 1. This method can be implemented
so that only the volume and face fluid fractions are necessary to describe the
boundary and recover the boundary flux [47]. As such, at the beginning of the
simulation these two fluid fraction fields are generated from either a user-
defined level-set function describing the geometry or an STL file.
Figure 1: A two-dimensional example of an interpolation stencil estimating the
boundary flux $\Phi^{b}$ with second-order accuracy. The gray area denotes the
solid, the red line denotes the solid boundary, and the arrows denote the five
face fluxes that are computed in the cut cell of interest $i,j$. The circle
dots denote the Eulerian cell centers while the square dots show the locations
of the data points $\Phi$ from which the boundary flux $F^{f}_{i,j}$ is
interpolated. The interpolation line normal to the solid boundary is
represented by a dashed line. In case of Dirichlet boundary conditions,
$\Phi^{b}$ is known but the data points $\Phi^{I}_{i+1}$ and $\Phi^{I}_{i+2}$
are themselves interpolated with second-order accuracy along the dotted lines
using the centers of the Eulerian grid cells.
If no additional treatment is done, it is well known that this class of cut-
cell methods suffers from unreasonably strict CFL restrictions due to cells
with small fluid volume fractions. Indeed, when a cell is cut by a solid
boundary the effective CFL condition becomes $\Delta t<c\Delta x/(f|\bm{u}|)$,
where $c$ and $f$ are the volume and face fractions, and $|\bm{u}|$ is the
velocity norm in the considered cell. If $c/f$ is close to zero, the time step
$\Delta t$ may become arbitrarily small. To alleviate this issue, a “flux
redistribution” technique is carried out, where the fluxes of problematic
small cells are redistributed to their neighbors, thus preventing $\Delta t$
from becoming arbitrarily small [61, 62, 63].
### 3.3 Front-Tracking Method (FTM)
#### 3.3.1 Standard FTM formulation
The capsule configuration is described using the Front-Tracking Method: we
adopt a Lagrangian representation of the capsule, which we cover with a
triangulated, unstructured mesh [64, 33]. This Lagrangian mesh communicates
with the Eulerian octree grid used to describe the background fluid by means of
the regularized Dirac-delta functions introduced by Peskin [65], whose role is to
interpolate velocities from the Eulerian grid to a Lagrangian node, and to
spread membrane forces from a Lagrangian node to the Eulerian grid. In this
paper we use a cosine-shaped regularized delta function:
$\delta(\bm{x_{0}}-\bm{x})=\begin{cases}\begin{aligned}
&\frac{1}{64\Delta^{3}}\prod_{i=1}^{3}\left(1+\cos\left(\frac{\pi}{2\Delta}(x_{0,i}-x_{i})\right)\right)\qquad\text{if}\quad|x_{0,i}-x_{i}|<2\Delta\\\
&0\qquad\text{otherwise}\end{aligned}\end{cases},$ (12)
where $\Delta$ is the length of an Eulerian cell and
$\bm{x_{0}}=[x_{0},y_{0},z_{0}]$ corresponds in practice to the coordinates of
a Lagrangian node. The prefactor $1/(64\Delta^{3})$ ensures that the discrete
integral over the whole space $\int_{\Omega}\delta(\bm{x_{0}}-\bm{x})d\bm{x}$
is equal to $1$. Then, the velocity $\bm{u_{0}}$ of a given Lagrangian node
located at $\bm{x_{0}}$ is interpolated from the Eulerian velocity field
$\bm{u}$ using:
$\bm{u_{0}}=\int_{\Omega}\bm{u}(\bm{x})\delta(\bm{x_{0}}-\bm{x})d\bm{x}\quad\Longleftrightarrow\quad\bm{u_{0}}=\sum_{i\in\text{stencil}}\bm{u_{i}}\delta(\bm{x_{0}}-\bm{x_{i}})\Delta^{3},$
(13)
where “stencil” denotes the Eulerian cells whose center $\bm{x_{i}}$ is such
that $\delta(\bm{x_{0}}-\bm{x_{i}})\neq 0$, and $\bm{u_{i}}$ is the velocity
of a given fluid cell. Similarly, the membrane force $\bm{F_{0}}$ at a
Lagrangian node is spread to a force density field $\bm{f}$ using:
$\mathcal{M}(\text{supp}(\delta))\bm{f}=\int_{\Omega}\bm{F_{0}}\delta(\bm{x_{0}}-\bm{x})d\bm{x}\Longleftrightarrow\bm{f_{i}}=\bm{F_{0}}\delta(\bm{x_{0}}-\bm{x_{i}}),$
(14)
where $\mathcal{M}(\text{supp}(\delta))$ is the measure of the support of the
regularized Dirac-delta function, and is in practice equal to $64\Delta^{3}$
in three dimensions for the regularized Dirac-delta function we choose in Eq.
(12).
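As a quick numerical check of this normalization property, the sketch below sums the standard one-dimensional Peskin cosine kernel $\delta_{1}(r)=(1+\cos(\pi r/(2\Delta)))/(4\Delta)$, whose triple product yields the $1/(64\Delta^{3})$ prefactor, over a uniform grid; the discrete integral equals $1$ for any position of the Lagrangian node relative to the Eulerian cells:

```python
import numpy as np

def delta_1d(r, dx):
    """1D cosine-shaped regularized Dirac delta with support |r| < 2*dx."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    m = np.abs(r) < 2.0 * dx
    out[m] = (1.0 + np.cos(np.pi * r[m] / (2.0 * dx))) / (4.0 * dx)
    return out

dx = 0.1
xc = (np.arange(-10, 10) + 0.5) * dx       # Eulerian cell centers
for x0 in (0.0, 0.013, 0.05, -0.0371):     # arbitrary Lagrangian positions
    s = np.sum(delta_1d(xc - x0, dx)) * dx # discrete integral of the kernel
    assert abs(s - 1.0) < 1e-12
```

The exactness of the discrete sum (not just the continuous integral) is the property that makes the interpolation of Eq. (13) and the spreading of Eq. (14) conservative.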
Once the Lagrangian velocities of all the capsule nodes have been interpolated
from the Eulerian velocity field using Eq. (13), the position of each node is
updated using a second-order Runge-Kutta time integration scheme. The membrane
stresses are then computed from the new configuration of the capsule using the
methods described later in Section 3.4, and transferred to the background
fluid using Eq. (14). In the context of particle-laden flows, the use of the
IBM requires sub-time stepping for the particle advection due to the orders of
magnitude difference in the fluid time scale and the solid-solid interactions
time scale [39]. This is not necessary for capsule-laden flows described by
the FTM, i.e. the time step for the advection of Lagrangian nodes is equal to
that of the fluid solver. It follows that the trajectories of each Lagrangian
node coincide with the streamlines of the flow and that the triangulations of
two interacting capsules can never overlap $-$ provided that the time step is
sufficiently small. As such, there is no need for any ad-hoc repulsive force
between two approaching capsules or between a capsule approaching a wall, as
was the case in e.g. [66]: the non-penetration condition is seamlessly handled
by the local flow field. In the latter case of close interaction between a
capsule and a wall, however, the definition of the IBM stencil must be
altered, as described in the next subsection.
Additionally, the case of capsules of inner viscosity $\mu_{i}$ different from
the viscosity of the surrounding fluid $\mu_{e}$ needs special treatment. We
adopt the approach developed in the original FTM by Unverdi & Tryggvason
[64]: a discrete indicator function $I$ is computed from a discrete “grid-
gradient” field $\bm{G}(\bm{x})$:
$\bm{G}(\bm{x})=\sum_{i\in\mathcal{T}}S_{i}\delta(\bm{x}-\bm{x}_{i})\bm{n}_{i},$
(15)
where $\mathcal{T}$ denotes the set of all triangles of the discretization of
the surface of all the capsules, $S_{i}$ is the surface area of triangle $i$,
$\bm{x}_{i}$ is the position vector of its centroid and $\bm{n}_{i}$ is its
unit inward normal vector. In practice, $\bm{G}(\bm{x})$ is computed by
looping over all triangles of the discretizations of all capsules and
spreading the quantity $S_{i}\bm{n}_{i}$ using the regularized Dirac-delta
functions introduced previously. As such, $\bm{G}(\bm{x})$ is non-zero in the
union of all the IBM stencils. The discrete indicator function $I(\bm{x})$ is
computed by solving the following Poisson problem:
$\Delta I=\nabla\cdot\bm{G}.$ (16)
Since $I$ is a regularized step function, it should have constant values away
from the capsule membranes. To guarantee this property, we only update $I$ in
the cells where $\bm{G}$ is non-zero and we re-initialize $I$ to $0$ or $1$
elsewhere, as suggested in [33].
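The reconstruction of $I$ from its grid-gradient can be illustrated with a one-dimensional finite-difference sketch, a simplified analogue of Eqs. (15$-$16) rather than the actual 3D implementation: given the face-centered gradient of a smoothed step, solving the discrete Poisson problem with matching stencils recovers the indicator function.

```python
import numpy as np

n, h = 200, 1.0 / 200
x = (np.arange(n) + 0.5) * h
# target indicator: a smoothed step across a membrane located at x = 0.5
I_ref = 0.5 * (1.0 + np.tanh((x - 0.5) / 0.02))
# face-centered "grid-gradient" G (1D analogue of Eq. (15))
G = np.diff(I_ref) / h
# right-hand side: divergence of G, i.e. the 3-point Laplacian of I_ref
rhs = np.zeros(n)
rhs[1:-1] = np.diff(G) / h
# solve Delta I = div G (Eq. (16)), pinning the two boundary cells
# to their far-field constant values
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1 / h**2, -2 / h**2, 1 / h**2
A[0, 0] = A[-1, -1] = 1.0
rhs[0], rhs[-1] = I_ref[0], I_ref[-1]
I = np.linalg.solve(A, rhs)
```

Because the divergence of the face-centered gradient is exactly the discrete Laplacian, the solve reproduces the indicator to solver precision; in the 3D code the same consistency underlies the re-initialization of $I$ away from the membranes.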
#### 3.3.2 Adaptive FTM strategy
Our current implementation of the FTM requires all cells in an IBM stencil to
be the same size $\Delta$. As a result, all the Eulerian cells around the
membrane must be the same size as well, although future studies may lift this
restriction. In practice, since the flow physics is happening in the vicinity
of the capsule, we need the cell sizes around the membrane to be the smallest
grid size of the fluid domain. To enforce this condition, we create a scalar
field $\xi$ initialized at each time step to be: (i) $0$ if the Eulerian cell
does not belong to any IBM stencil; or (ii) a randomly generated value between
$0$ and $1$ otherwise. In other words, the scalar field $\xi$ tags the IBM
stencils with noise while the rest of the domain is set to a constant value.
Feeding this scalar field to Basilisk’s wavelet adaptation algorithm ensures
that all the stencil cells are defined at the finest level, and that no IBM
stencil contains Eulerian cells of different cell levels.
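A minimal sketch of this tagging step (hypothetical helper, not Basilisk code): the field is zero away from the membrane and pure noise inside the IBM stencils, so the wavelet estimator of Section 3.1 always flags stencil cells for maximum refinement.

```python
import numpy as np

def stencil_tag_field(in_stencil, seed=0):
    """Return xi: 0 outside IBM stencils, uniform noise in [0, 1) inside.

    in_stencil: boolean array marking Eulerian cells belonging to a stencil.
    """
    rng = np.random.default_rng(seed)
    xi = np.zeros(in_stencil.shape)
    xi[in_stencil] = rng.uniform(size=int(in_stencil.sum()))
    return xi
```

Since the noise is re-drawn at every time step, the downsample/upsample error in tagged cells never falls below the (tiny) threshold $\zeta<10^{-10}$, which keeps the stencils at the finest level.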
### 3.4 Computation of the membrane forces
#### 3.4.1 Computation of the elastic force with the Finite Element Method
In order to compute the nodal elastic forces given by Eq. (8), we employ a
Finite Element Method (FEM). In most FEM solvers from Engineering
applications, the sought quantity is the displacement of a structure under a
known applied stress. In the case of biological membranes, we rather seek the
internal stress of the membrane under a known displacement [55]. Charrier et
al. [52] were the first to design this specific FEM framework: we base
our implementation on their work as well as that of Doddi & Bagchi [19].
Consider an arbitrary triangle $T_{i}$ on the discretized membrane: in order
to compute the elastic force of its three vertices, we first rotate it to a
common plane $-$ e.g. the $x,y$-plane $-$ using a rotation matrix $\bm{R_{i}}$
from the current orientation of the triangle to its orientation in the common
plane. Then, we assume the position of the triangle vertices in a stress-free
configuration is known in the common plane, and we compute the displacements
$\bm{v_{k}}$ of each of the three vertices of $T_{i}$. Using linear shape
functions, the deformation gradient tensor and the Cauchy-Green deformation
gradient tensor attached to $T_{i}$ can be computed:
$\bm{F}=\frac{\partial\bm{v_{k}}}{\partial\bm{x^{p}}},\qquad\bm{C}=\bm{F^{T}}\bm{F},$
(17)
where $(\bm{x^{p}},\bm{y^{p}})$ is the basis of the common plane. Note that
$\bm{F}$ and $\bm{C}$ are two-dimensional tensors and correspond to the
tangential components of $\bm{F_{s}}$ and $\bm{C_{s}}$ in Eq. (3$-$4). By
diagonalizing $\bm{C}$ and taking the square root of its eigenvalues ($\bm{C}$
is symmetric positive definite), we can access the two principal stretch
ratios $\lambda_{1}$, $\lambda_{2}$ attached to $T_{i}$. Following Charrier et
al. [52], the principle of virtual work yields the expression linking the
nodal force and nodal displacement at node $j$:
$\bm{F^{P}_{\text{elastic},j}}=A_{i}\frac{\partial
W}{\partial\lambda_{1}}\frac{\partial\lambda_{1}}{\partial\bm{v_{j}}}+A_{i}\frac{\partial
W}{\partial\lambda_{2}}\frac{\partial\lambda_{2}}{\partial\bm{v_{j}}},$ (18)
where $A_{i}$ is the area of $T_{i}$. Rotating Eq. (18) back to the current
reference frame of $T_{i}$, we get the final expression of the contribution of
triangle $T_{i}$ to the elastic force of node $j$:
$\begin{split}\bm{F_{\text{elastic},j}}&=\bm{R^{T}}\bm{F^{P}_{\text{elastic},j}}\\\
&=A_{i}\bm{R^{T}}\left(\frac{\partial
W}{\partial\lambda_{1}}\frac{\partial\lambda_{1}}{\partial\bm{v_{j}}}+\frac{\partial
W}{\partial\lambda_{2}}\frac{\partial\lambda_{2}}{\partial\bm{v_{j}}}\right).\end{split}$
(19)
This FEM implementation is summarized in algorithm 1.
Algorithm 1 Pseudocode for the Finite Element Method
loop over all triangles $i$
  loop over the three nodes $j$ of $T_{i}$
    Compute $\bm{x^{P}_{j}}=\bm{R}\bm{x_{j}}$
    Compute the nodal displacement $\bm{v_{j}}=\bm{x^{P}_{j}}-\bm{x^{P}_{j,\,t=0}}$
  end loop
  Compute $\bm{F},\,\bm{C}$ from Eq. (17)
  Compute the eigenvalues of $\bm{C}$ and $\bm{F}$, i.e. $\lambda_{1}^{2},\,\lambda_{2}^{2},\,\lambda_{1},\,\lambda_{2}$
  loop over the three nodes $j$ of $T_{i}$
    Compute $\partial\lambda_{1}/\partial\bm{v_{j}}$, $\partial\lambda_{2}/\partial\bm{v_{j}}$
    Compute $\bm{F_{\text{elastic},j}^{P}}$ from Eq. (18)
    Rotate $\bm{F_{\text{elastic},j}^{P}}$ to the current orientation of $T_{i}$
    Add $\bm{F_{\text{elastic},j}}$ to the total elastic force of node $j$
  end loop
end loop
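The core of the algorithm, recovering the principal stretch ratios of a triangle from its vertex positions via linear shape functions, can be sketched as follows (illustrative Python working directly in the common plane; the rotation $\bm{R}$ and the force assembly of Eqs. (18$-$19) are omitted):

```python
import numpy as np

def principal_stretches(ref, cur):
    """Principal stretch ratios of one triangle from its vertex positions in
    the common plane (ref: stress-free vertices, cur: deformed vertices,
    both (3, 2) arrays). Linear (P1) shape functions are assumed."""
    D_ref = np.column_stack([ref[1] - ref[0], ref[2] - ref[0]])
    D_cur = np.column_stack([cur[1] - cur[0], cur[2] - cur[0]])
    F = D_cur @ np.linalg.inv(D_ref)  # in-plane deformation gradient, Eq. (17)
    C = F.T @ F                       # right Cauchy-Green tensor
    lam_sq = np.linalg.eigvalsh(C)    # eigenvalues are lambda_i^2
    return np.sqrt(lam_sq[::-1])      # (lambda1, lambda2), descending

# a triangle stretched by (1.5, 0.8) and then rigidly rotated: the rotation
# does not affect the recovered stretches, since C = F^T F filters it out
ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
th = 0.3
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
cur = ref @ (R @ np.diag([1.5, 0.8])).T
l1, l2 = principal_stretches(ref, cur)
```

The invariance of $\bm{C}$ under rigid rotations is what allows the FEM to work triangle by triangle in an arbitrary common plane.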
#### 3.4.2 Computation of the bending force using paraboloid fits
The computation of $\bm{F}_{\text{bending}}$ relies on the local evaluation
of: (i) the mean and Gaussian curvatures $\kappa$ and $\kappa_{g}$, (ii) the
Laplace-Beltrami operator of the mean curvature $\Delta_{s}\kappa$, and (iii)
a relevant control area $A$.
To evaluate $\kappa$ and $\kappa_{g}$ at node $i$, we blend the approaches of
Farutin et al. [67] and Yazdani & Bagchi [21]. A local reference frame is
attached to node $i$, with the $z$-direction coinciding with the approximate
normal vector $\bm{n_{i}}$. Then, a paraboloid is fitted to node $i$ and its
one-ring neighbors. In our triangulated surface, most nodes have six
neighbors (exactly twelve nodes have five neighbors, since we discretize a
spherical membrane by subdividing each triangle of an icosahedron; each newly
created node is projected back onto a sphere, and if necessary onto a
more complex shape, e.g. a biconcave membrane), making the system
overdetermined, so a least-squares method is used. From this paraboloid
fitting, we can derive the local mean and Gaussian curvatures $-$ see
equations (12) and (13) in [21] $-$, as well as a refined approximation of
$\bm{n_{i}}$. This procedure is iterated using the newest normal vector
approximation to define the local frame of reference, until satisfactory
convergence of $\bm{n_{i}}$ is reached. Our numerical experimentations show
that between three to five iterations usually suffice to obtain a converged
normal vector.
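The paraboloid fit itself reduces to a small least-squares problem. The sketch below (illustrative Python, with the node at the origin of its local frame) fits $z = ax^{2}+bxy+cy^{2}+dx+ey$ to neighbor positions sampled on a sphere and recovers $\kappa = a + c$ and $\kappa_{g} = 4ac - b^{2}$, which match $1/R$ and $1/R^{2}$ in magnitude up to the truncation error of the fit:

```python
import numpy as np

def fit_curvatures(pts):
    """Least-squares paraboloid fit z = a x^2 + b x y + c y^2 + d x + e y
    in the local frame of a node (node at the origin, z along the normal).
    Returns (mean curvature, Gaussian curvature) at the node; the curvature
    formulas are valid when the fitted slope (d, e) is small."""
    x, y, z = np.asarray(pts, dtype=float).T
    A = np.column_stack([x**2, x * y, y**2, x, y])
    (a, b, c, d, e), *_ = np.linalg.lstsq(A, z, rcond=None)
    return a + c, 4.0 * a * c - b**2

# neighbors sampled on a unit sphere tangent to z = 0 at the origin
R = 1.0
pts = [(r * np.cos(t), r * np.sin(t), np.sqrt(R**2 - r**2) - R)
       for r in (0.1, 0.2)
       for t in np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)]
kappa, kappa_g = fit_curvatures(pts)
```

In the actual method, the refined normal from the fit redefines the local frame and the procedure is iterated until the slope terms become negligible.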
The same paraboloid fitting method is used to compute $\Delta_{s}\kappa$, or
$\Delta_{s}(\kappa-\kappa_{0})$ in the case of non-zero reference curvature.
This time, a paraboloid is fitted to the curvatures of node $i$ and its
neighbors, and then differentiated to obtain the desired surface Laplacian.
The last term $A$ is necessary to obtain a bending force as opposed to a
bending force per surface area. Let $A_{i}$ denote the nodal area attached to
node $i$: at any time the sum of all nodal areas needs to equal the total area
of the capsule, i.e. $\sum_{i=0}^{N}A_{i}=A_{tot}$ with $N$ the number of
Lagrangian nodes and $A_{tot}$ the total area of the discretized surface of
the capsule. The Voronoi area of node $i$ enforces this property only for non-
obtuse triangles. As such, we adopt the “mixed-area” of Meyer et al. [56, 68]
which treats the special case of obtuse triangles separately: if a triangle
$j$ is not obtuse, its contribution to the nodal area of its vertices is the
standard Voronoi area; while if $j$ is an obtuse triangle, the nodal area of
its obtuse vertex is $A_{j}/2$ while the nodal area of the remaining two
vertices is $A_{j}/4$, where $A_{j}$ is the area of triangle $j$.
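Per triangle, the mixed-area rule can be sketched as follows; this is a minimal illustration of the rule of Meyer et al., not the code of [56, 68]. The Voronoi contribution at a vertex uses the cotangents of the two opposite interior angles:

```python
import numpy as np

def mixed_area_contributions(p0, p1, p2):
    """Contribution of one triangle to the nodal ('mixed') areas of its
    three vertices: Voronoi areas for a non-obtuse triangle; A/2 at the
    obtuse vertex and A/4 at the two other vertices otherwise."""
    P = [np.asarray(p0, float), np.asarray(p1, float), np.asarray(p2, float)]
    area = 0.5 * np.linalg.norm(np.cross(P[1] - P[0], P[2] - P[0]))

    def cot(i):  # cotangent of the interior angle at vertex i
        u, v = P[(i+1) % 3] - P[i], P[(i+2) % 3] - P[i]
        return np.dot(u, v) / np.linalg.norm(np.cross(u, v))

    cots = [cot(i) for i in range(3)]
    if min(cots) < 0.0:            # one negative cotangent => obtuse triangle
        obtuse = int(np.argmin(cots))
        return [area/2 if i == obtuse else area/4 for i in range(3)]
    # Voronoi area at vertex i: (|e_ij|^2 cot(k) + |e_ik|^2 cot(j)) / 8
    return [(np.dot(P[(i+1) % 3] - P[i], P[(i+1) % 3] - P[i]) * cots[(i+2) % 3]
             + np.dot(P[(i+2) % 3] - P[i], P[(i+2) % 3] - P[i]) * cots[(i+1) % 3]) / 8
            for i in range(3)]

eq = mixed_area_contributions((0, 0, 0), (1, 0, 0), (0.5, 3**0.5/2, 0))
ob = mixed_area_contributions((0, 0, 0), (1, 0, 0), (2, 0.1, 0))
```

For an equilateral triangle each vertex receives one third of the triangle area, and in both branches the three contributions sum to the triangle area, so $\sum_{i}A_{i}=A_{tot}$ holds by construction.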
## 4 Validation Cases
### 4.1 Elastic and bending forces of an isolated membrane
#### 4.1.1 Elongation of a flat elastic membrane
Our first validation case focuses on the computation of the elastic stress in
the membrane. To this end, we stretch an isolated flat membrane devoid of
bending resistance in one of its principal directions $\bm{e_{1}}$ while
ensuring that the principal stress $T_{2}$ in the second principal direction
$\bm{e_{2}}$ remains zero. We then analyze the non-zero principal stress
$T_{1}=(\partial W/\partial\lambda_{1})/\lambda_{2}$ as a function of the
principal strain $e_{1}=(\lambda_{1}^{2}-1)/2$. This test is repeated for two
membranes: one obeying the neo-Hookean law and the other obeying the
Skalak law. Note that in order to set the principal stress $T_{2}$ to
zero, we impose a value of $\lambda_{2}$ strictly lower than $1$, i.e. the
membrane is shrunk in the second principal direction:
$\lambda_{2}=\begin{cases}1/\sqrt{\lambda_{1}}&\text{for the neo-Hookean law}\\ \sqrt{(1+C\lambda_{1}^{2})/(1+C\lambda_{1}^{4})}&\text{for the Skalak law}\end{cases}$ (20)
We compare our results to the exact stress derived by Barthès-Biesel et al.
[51] in figure 2, with $E_{s}=C=1$. The data we generate overlaps perfectly
with the analytical stress-strain relations, thus validating the
implementation of our Finite-Element solver for the elastic membrane stresses.
The source code to reproduce this validation case is available online [69].
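The relations in equation (20) can be checked numerically. The sketch below assumes standard principal-tension expressions for the two laws, e.g. $T_{1}=\frac{E_{s}}{3\lambda_{1}\lambda_{2}}\left(\lambda_{1}^{2}-\frac{1}{\lambda_{1}^{2}\lambda_{2}^{2}}\right)$ for the neo-Hookean membrane; the exact convention of [51] may differ by constant prefactors:

```python
import numpy as np

ES, C = 1.0, 1.0  # elastic modulus and Skalak dilatation modulus, as in figure 2

def tensions_nh(l1, l2):
    """Principal tensions of a neo-Hookean membrane (assumed convention)."""
    T1 = ES / (3*l1*l2) * (l1**2 - 1.0/(l1*l2)**2)
    T2 = ES / (3*l1*l2) * (l2**2 - 1.0/(l1*l2)**2)
    return T1, T2

def tensions_sk(l1, l2):
    """Principal tensions of a Skalak membrane (assumed convention)."""
    I2 = (l1*l2)**2 - 1.0
    T1 = ES * l1/l2 * (l1**2 - 1.0 + C*l2**2*I2)
    T2 = ES * l2/l1 * (l2**2 - 1.0 + C*l1**2*I2)
    return T1, T2

# transverse stretches from equation (20): these should make T2 vanish
l1 = np.linspace(1.0, 3.0, 50)
l2_nh = 1.0 / np.sqrt(l1)
l2_sk = np.sqrt((1.0 + C*l1**2) / (1.0 + C*l1**4))
```

Evaluating $T_{1}$ along these curves should then reproduce, up to the chosen convention, the stress-strain relations plotted in figure 2.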
Figure 2: Stress-strain response of an isolated flat membrane for the neo-
Hookean and Skalak elastic laws. The results from this study are compared to
exact expressions derived in [51].
#### 4.1.2 Bending force of a curved membrane
In order to validate our bending force, we follow the procedure of
Guckenberger et al. [56]: considering a biconcave membrane with zero reference
curvature, we compare the mean and Gaussian curvatures, the Laplace-Beltrami
operator of the mean curvature, and the total nodal bending force density to
analytical expressions derived using symbolic computation software. Since the
biconcave capsule has rotational symmetry around the $z$-axis and is symmetric
with respect to the $(x,y)$-plane, we plot our results against the angle
$\theta$ defined in figure 3, with $\theta$ varying from $0$ to $\pi/2$. This
biconcave shape is a good candidate for testing the bending force, since its
two principal curvatures are in general not equal to each other and vary
along the surface, even changing sign. The following results are obtained
with a biconcave membrane discretized by a triangulation containing $5120$
triangular elements.
Figure 3: Schematic of a biconcave capsule centered at the origin, and
definition of the polar angle $\theta$.
We compare our computed mean and Gaussian curvatures to their respective
analytical expressions in figure 4(a): the agreement is very satisfactory for
both curvatures. Figure 4(b) shows the Laplace-Beltrami operator of the
curvature against its analytical expression: the general trend still matches
the analytical expression very well, but a few outliers deviate from it by a
few percent. The same behavior is observed in figure 4(c), which shows the
nodal bending force density. That figures 4(b) and 4(c) behave similarly is
not surprising, as the nodal bending force density plotted in figure 4(c)
directly involves the Laplace-Beltrami operator of the mean curvature shown
in figure 4(b). Some small deviations from the theory are expected when
taking the Laplace-Beltrami operator of the mean curvature, as we are
essentially taking a fourth-order derivative of the geometry of the membrane,
and Guckenberger et al. [56] observe similar noise when performing the same
tests (see figures 6c and 8e in [56]). In fact, they show that most other
methods perform much worse at computing the Laplace-Beltrami operator of the
mean curvature, and hence at computing the total bending force. As such, our
implementation of the bending force shows the expected performance. The code
to reproduce this test case is available at [70].
Figure 4: Comparison of the computed mean and Gaussian curvatures (top left),
Laplace-Beltrami operator of the mean curvature (top right) and nodal bending
force density (bottom) to their analytical expressions. All quantities are
plotted against the polar angle $\theta$ defined in figure 3.
### 4.2 Initially spherical capsule in an unbounded shear flow
#### 4.2.1 Neo-Hookean elasticity without bending resistance
Figure 5: Taylor deformation parameter as a function of the non-dimensional
time of an initially spherical Neo-Hookean capsule in an unbounded shear flow.
The solid line corresponds to 32 Eulerian cells per initial diameter and a
Lagrangian discretization using 1280 triangles, while the dashed line
corresponds to 64 Eulerian cells per initial diameter and a Lagrangian
discretization using 5120 triangles.
We now seek validation of the coupling between the membrane solver and the
fluid solver. To this end, we consider an initially spherical capsule of
radius $a$ in an unbounded shear flow. The elasticity is governed by the neo-
Hookean law, and the flow field is initialized to be that of an undisturbed
shear flow. As the capsule deforms, we plot the Taylor deformation parameter
$D=(a_{max}-a_{min})/(a_{max}+a_{min})$ as a function of the non-dimensional
time $\dot{\gamma}t$, with $a_{max}$ and $a_{min}$ the maximum and minimum
radii of the capsule at a given time, and $\dot{\gamma}$ the shear rate. We
perform this simulation for various Capillary numbers $Ca=\mu
a\dot{\gamma}/E_{s}$, with $E_{s}$ the elastic modulus. In this test case,
Stokes conditions are intended, so we set the Reynolds number to 0.01. At time
$t=0$, the flow field is set to that of a fully developed shear flow:
$u_{x}=\dot{\gamma}y$. The computational box is bi-periodic in the x and z
directions, while Dirichlet boundary conditions for the velocity are imposed
in the y direction. The length of the computational box is equal to 8 initial
radii, the size of the most refined Eulerian cells is set to $1/128$ that of
the domain length, and the membrane is discretized with 1280 triangles. The non-
dimensional time step $\dot{\gamma}\Delta t$ is set to $10^{-3}$, except for
the Capillary numbers $Ca=0.025$ and $Ca=0.0125$ where the time step is
decreased to $\dot{\gamma}\Delta t=10^{-4}$ to stabilize the elastic force
computation.
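The Taylor deformation parameter defined above is straightforward to extract from the Lagrangian mesh. A minimal sketch, using the max/min vertex distances from the centroid as in the definition of $D$ above (the vertex average is only an approximation of the true centroid, and other authors instead fit an equivalent ellipsoid):

```python
import numpy as np

def taylor_deformation(vertices):
    """D = (a_max - a_min) / (a_max + a_min), with a_max and a_min the
    maximum and minimum distances of the membrane vertices from the
    capsule centroid (approximated here by the vertex average)."""
    v = np.asarray(vertices, float)
    r = np.linalg.norm(v - v.mean(axis=0), axis=1)
    return (r.max() - r.min()) / (r.max() + r.min())

# a 2:1:1 ellipsoid sampled at its six axis endpoints gives D = 1/3
axes = np.array([[2, 0, 0], [-2, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
D = taylor_deformation(axes)
```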
This case has been widely studied in the literature: in figure 5 we compare
our results to those of [10, 11] who used the BIM, as well as [19] who used
the FTM. The agreement is very satisfactory: the value of $D$ we obtain is
well within the range of the reported data, both in the transient regime and
once a steady state is reached. We also show in figure 5 the
results for a finer triangulation of the membrane with 5120 triangles, and a
refined Eulerian mesh with the finest Eulerian cell size corresponding to
$1/256$ of the domain length. The only difference is that the steady state
takes longer to reach for $Ca=0.025$ and $Ca=0.0125$, due to the onset of
buckling instabilities on the membrane resulting from the absence of bending
stresses. This buckling instability has been observed both experimentally
[71] and numerically [72, 22], although in numerical simulations the
wavelength is unphysical and determined by the size of the mesh discretizing
the capsule [55]. In our simulations, we observe the same dependence of the
wavelength of the membrane buckles on the Lagrangian mesh element size. An
example of this buckling instability is shown in figure 6.
The code to reproduce this test case is available at [73].
Figure 6: A zoomed in snapshot of a buckling membrane at $Ca=0.0125$ with a
Lagrangian discretization comprising 5120 triangles. This behavior arises due
to the absence of bending stresses, and the buckling wavelength is dependent
on the Lagrangian discretization. The color field represents the
$x$-component of the velocity.
#### 4.2.2 Including bending resistance
We further validate our solver by considering the similar case of a capsule
deforming in a shear flow, this time with the addition of a bending force. As
in the previous case, an initially spherical, unstressed capsule is placed in
a shear flow where the initial velocity field is fully developed. The
Capillary number is $Ca=0.05$, and the non-dimensional bending coefficient
$\tilde{E_{b}}=E_{b}/(a^{2}E_{s})$ is chosen equal to $0.0375$. The membrane
is discretized with 5120 triangles and the same Eulerian resolution as in the
previous case is chosen. Due to the stiffness of the bending force, we set the
time step to $\Delta t=10^{-4}$. The Taylor deformation parameter is compared
to that of various studies in the literature in figure 7. The capsule deforms
under the action of the flow field and the Taylor deformation parameter
quickly attains a steady state of about $D=0.15$. We remark that the data
reported in the literature is scattered by about 20%, which underlines the
challenge of simulating Helfrich’s bending force, as was previously noted by
[56]. We also note that our results are situated well within the range of the
reported data: we are close to the results of Zhu & Brandt [74] and Le et al.
[75], and our curve is located in the middle of the reported range borrowed
from [56, 54]. Given such a wide range of reported literature data, it is
difficult to conduct a rigorous quantitative analysis. Nevertheless, we
conclude from figure 7 that our bending force shows similar behavior to that
of other studies, a claim also supported by the validation case in Section
4.4. The code to reproduce this test case is available at [76].
Figure 7: Taylor deformation parameter of an isolated capsule undergoing
elastic and bending stresses in a shear flow. The capillary number is
$Ca=0.05$ and the non-dimensional bending coefficient is
$\tilde{E_{b}}=0.0375$.
### 4.3 Initially spherical capsule flowing through a constricted channel
To validate our implementation for an elastic capsule in the presence of
complex boundaries, we consider the case of a capsule flowing through a
constricted square channel proposed by Park & Dimitrakopoulos [77]. The
elasticity of the membrane is governed by the Skalak law with the area
dilatation modulus $C$ set to 1, the capsule is initially pre-inflated such
that its circumference is increased by $5\%$, and the flow is driven by an
imposed uniform velocity field at the inlet and outlet boundaries. We follow
[77] and choose the Capillary number to be $0.1$, and since Stokes conditions
are intended we set $Re=0.01$.
The results are presented in figure 8 and figure 9. The qualitative agreement
in figure 8 is very satisfactory as the capsule shape is visually identical to
that of [77]. We draw the reader’s attention to the adaptive Eulerian mesh on
the right-hand side of figure 8: the cell size is imposed to be minimal at
the solid boundaries and around the capsule, while everywhere else the
adaptivity criterion is governed by the velocity gradients. As a result, the
grid cells away from the membrane and from the walls quickly coarsen to up to
three levels lower, except in the vicinity of the corners where stronger
velocity gradients occur. Figure 9 shows the non-dimensional lengths of the
capsule in the $x$-, $y$\- and $z$-directions with respect to the non-
dimensional position of its center $x_{c}/H_{c}$, with $x_{c}$ and $H_{c}$ the
$x$-position of the center of the capsule and the half-height of the
constriction, respectively. As found by Park & Dimitrakopoulos, the final
shape of the capsule is not exactly spherical as it remains shrunk in the
$x$-direction downstream of the constriction. Despite some small deviations
during the extreme deformation of the capsule, around $x_{c}/H_{c}=-1$, the
overall quantitative agreement of the transient shape of the capsule is also
satisfactory, especially considering that other authors have reproduced this
case with similar or larger deviations from the results reported by Park &
Dimitrakopoulos [24, 38]. The code to reproduce this test case is available at
[78].
Figure 8: Snapshots of the capsule as it flows through the constriction. Left:
Park & Dimitrakopoulos [77]; Right: this study. Figure 9: Non-dimensional
lengths of the capsules in the three directions $x$, $y$ and $z$ with respect
to the non-dimensional $x$-position of the center of the capsule. Results are
compared to [77].
### 4.4 Red blood cell in an unbounded shear flow
The next test case aims at validating the membrane solver when the viscosity
ratio $\lambda_{\mu}=\mu_{i}/\mu_{e}$ differs from $1$. To this end, we
consider an RBC in an unbounded shear flow, with $\lambda_{\mu}=5$. The
membrane forces include the Skalak elastic law and Helfrich’s bending
force. The Capillary number is $Ca=0.1$, the area dilatation modulus $C$ is
chosen equal to $50$ and the non-dimensional bending coefficient is
$\tilde{E_{b}}=0.01$. The reference curvature is $c_{0}a=-2.09$ [79, 20],
where $a=(3V/4\pi)^{1/3}$ is the radius of the sphere of equal volume as that
of the RBC. The initial shape of the RBC is biconcave and is described by the
following equations, for an RBC whose largest radius is orthogonal to the
$y$-direction [80]:
$\begin{cases}x=ac\cos\phi\sin\psi\\ y=\frac{ac}{2}\sin\phi\left(\alpha_{1}+\alpha_{2}\cos^{2}\phi+\alpha_{3}\cos^{4}\phi\right)\\ z=ac\cos\phi\cos\psi,\end{cases}\qquad\text{with}\;\phi\in[0,2\pi],\;\psi\in[-\frac{\pi}{2},\frac{\pi}{2})$ (21)
with $\alpha_{1}=0.207$, $\alpha_{2}=2.003$ and $\alpha_{3}=-1.123$. Since we
consider a viscosity ratio, we also define the initial indicator function $I$
as the volume fraction of inner fluid:
$I(\bm{x})=\begin{cases}1&\text{if}\;\Phi(\bm{x})<0\\ 0&\text{if}\;\Phi(\bm{x})>0\\ \text{between 0 and 1}&\text{otherwise},\end{cases}$ (22)
where $\Phi$ is the level-set alternative formulation of Eq. (21):
$\Phi(x,y,z)=\frac{x^{2}+z^{2}}{(ac)^{2}}+\frac{4y^{2}}{(ac)^{2}\left(\alpha_{1}+\alpha_{2}\frac{x^{2}+z^{2}}{(ac)^{2}}+\alpha_{3}\left(\frac{x^{2}+z^{2}}{(ac)^{2}}\right)^{2}\right)^{2}}-1.$ (23)
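As a consistency check, points generated from the parametrization of Eq. (21) should lie on the zero level set. The sketch below is an illustration under two assumptions: the standard biconcave parametrization with $z=ac\cos\phi\cos\psi$, and a level set normalized so that $\Phi=0$ on the membrane surface, negative inside and positive outside:

```python
import numpy as np

A1, A2, A3 = 0.207, 2.003, -1.123
AC = 1.0  # characteristic radius a*c, set to 1 for this check

def surface_point(phi, psi):
    """Biconcave RBC surface (assumed form: z uses cos(psi))."""
    x = AC * np.cos(phi) * np.sin(psi)
    y = 0.5*AC * np.sin(phi) * (A1 + A2*np.cos(phi)**2 + A3*np.cos(phi)**4)
    z = AC * np.cos(phi) * np.cos(psi)
    return x, y, z

def phi_level_set(x, y, z):
    """Level set assumed to vanish on the surface and be negative inside."""
    r2 = (x**2 + z**2) / AC**2
    f = A1 + A2*r2 + A3*r2**2  # stays positive for r2 in [0, 1]
    return r2 + 4*y**2 / (AC**2 * f**2) - 1.0

phi = np.linspace(0.0, 2*np.pi, 41)
psi = np.linspace(-np.pi/2, np.pi/2, 40, endpoint=False)
P, S = np.meshgrid(phi, psi)
residual = phi_level_set(*surface_point(P, S))  # should vanish on the surface
```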
The initial fluid velocity is set to that of an unbounded shear flow of shear
rate $\dot{\gamma}$ with the velocity gradient in the direction of the greater
axis of the RBC. The dimensionless time step we use is $\dot{\gamma}\Delta
t=10^{-4}$, determined by trial and error.
Figure 10: Tumbling motion of an isolated RBC in a shear flow. Top: this
study; bottom: Yazdani & Bagchi [20].
The qualitative results are presented in figure 10, where we include snapshots
of the same case from Yazdani & Bagchi [20]. We observe that the RBC
undergoes a tumbling motion, a behavior of RBCs that is not seen at a
viscosity ratio of unity in this range of Capillary numbers [20, 80].
Moreover, the deformation of the RBC matches that of [20] qualitatively well.
However, our tumbling period seems slightly shorter than that of [20]: we
attribute this small discrepancy to the fact that we may have set a different
value for the area dilatation modulus $C$, as [20] only provides a range of
values: $C\in[50,400]$. Nevertheless, the results in figure 10 show that in
our implementation, the combination of elastic forces, bending forces and
viscosity ratio reproduces well the qualitative behavior observed in the
literature, and that the overall agreement is satisfactory. The code to
reproduce this case is available at [81].
### 4.5 Capsules interception in a shear flow
Our last validation case focuses on the interactions of two capsules. Two
initially spherical, pre-inflated neo-Hookean capsules are placed in an
unbounded shear flow with their initial positions offset in the horizontal and
vertical directions as shown in figure 11. Since the capsules are offset in
the vertical direction, they gain horizontal velocities of opposite signs and
their trajectories eventually intercept. This configuration is a good
validation candidate since we can compare our results to those obtained by Lac
et al. using the boundary integral method [17]. We consider a computational
box of size $16a$ where $a$ is the initial radius of the capsules. The finest
Eulerian resolution corresponds to the domain being discretized by $512$ cells
in each direction, and the two membranes are discretized with $5120$
triangles. A non-dimensional time step of $\dot{\gamma}\Delta t=2.5\cdot
10^{-4}$ is chosen. The Reynolds number is set to $0.01$.
Figure 11: Schematic of the two capsules in the shear flow, prior to the
interception. The horizontal and vertical gaps $\Delta x_{1}$ and $\Delta
x_{2}$ are defined, and the red arrows represent the velocities of the centers
of the capsules.
Figure 12: Snapshots of the interception of two neo-Hookean capsules in a
shear flow. Left: Boundary Integral results of Lac et al. [17]. Right: This
study. The color field corresponds to the vertical component of the velocity
(rescaled for each snapshot).
Figure 12 shows the qualitative comparison of the shape of the two capsules at
several stages of the interception. In our simulations, the color field
corresponds to the vertical component of the velocity. At each stage, there is
visually no difference in the shape of the capsules. If we track the center of
each capsule throughout the simulation, we can compute their difference
$\Delta x_{2}$ in the vertical direction and their difference in the
horizontal direction $\Delta x_{1}$. Normalizing by the initial diameter $2a$
of the capsules, we plot in figure 13 the vertical gap between the two
capsules as they intercept, and we compare our results to those of Lac et al.
[17]. The agreement is very satisfactory: the transient regime is very well
captured, both methods showing a maximum non-dimensional vertical gap of about
$0.72$; and the steady-state reached is about $0.54$. Small discrepancies can
be observed around $\Delta x_{1}/2a=-2$ where our vertical gap is slightly
lower than that of Lac et al.; and for $\Delta x_{1}/2a$ between $4$ and $6$
where our slope is still slightly negative while that of Lac et al. is
essentially zero. Those discrepancies are minor and could be explained by our
choice of Reynolds number $\text{Re}=10^{-2}$ while the boundary integral
method operates in true Stokes conditions. Regarding the adaptive mesh, as
stated above we perform this simulation using an equivalent fluid resolution
of $64$ cells per initial diameter, in a cubic box $8$ diameters in length.
Our simulation requires about $4.5\cdot 10^{5}$ fluid cells, while using a
constant mesh size would require about $1.3\cdot 10^{8}$ cells. For this
specific case, using an adaptive grid therefore reduces the number of fluid
cells by a factor of about $300$.
Figure 13: Non-dimensional vertical gap $\Delta x_{2}/2a$ against the non-
dimension horizontal gap $\Delta x_{1}/2a$ between the centers of the two
capsules. The results from this study are compared to Lac et al. [17].
A similar configuration was later examined by Doddi & Bagchi [23] in the
presence of inertia. In a cubic box of size $H=4a\pi$, periodic in the $x$ and
$z$ directions, for a Capillary number of 0.05 and an initial vertical
distance $\Delta x_{2}/2a=0.2$, they observed that the capsules do not
intercept when the Reynolds number $Re=\rho\dot{\gamma}a^{2}/\mu$ is greater
than or equal to 3. Instead, the vertical position of the centers of the
capsules changes sign and the direction of motion is reversed. We reproduce these
results from Doddi & Bagchi (figure 8c in [23]): each capsule is discretized
with $5120$ triangles, the Eulerian equivalent resolution is $40$ points per
initial diameter, and the non-dimensional time step is $\Delta t=10^{-3}$.
Figure 14 shows the vertical position of the centers of the two capsules with
respect to time, for Reynolds numbers of 3, 10 and 50, and the generated data
is compared to [23]. Our results superimpose very well with [23], in
particular for $Re=3$ and $Re=10$. For $Re=50$, the agreement is still very
satisfactory although we notice that we predict a slightly larger overshoot
around $t=18$ compared to the results of [23].
Figure 14: Non-dimensional vertical gap $\Delta x_{2}/2a$ against the non-
dimensional time $\dot{\gamma}t$ for Reynolds number of 3, 10 and 50. The
results of this study are compared to Doddi & Bagchi [23].
Overall, the quantitative agreement in figure 13 and figure 14 is very
satisfactory and it validates our adaptive front-tracking solver for several
capsules, both in Stokes conditions and in the presence of inertia. The code
for these two cases is available at [82, 83].
## 5 Results
### 5.1 Capsule flowing through a narrow constriction
In their original study, Park & Dimitrakopoulos [77] investigated the case
presented in Section 4.3 for relatively wide constriction sizes $-$ the half
size of the constriction $H_{c}$ was greater than or equal to the capsule
radius $a$. In this subsection we instead decrease the constriction size to
$H_{c}=a/2$ in order to demonstrate the robustness of our solver in cases of
extreme deformations, including close to domain boundaries. As in Section 4.3,
the capsule is initially spherical and pre-inflated such that each distance on
the capsule surface is increased by 5%. It obeys the Skalak elastic law with
$C=1$ and $Ca=0.1$. Since the capsule undergoes extreme deformations, we also
consider a bending force in order to suppress unphysically sharp corners: the
non-dimensional bending coefficient is set to $\tilde{E_{b}}=10^{-3}$. In
order to resolve well the capsule deformation, we increase the resolution of
the triangulation, which now comprises 20480 triangles. We also perform this
simulation for two finest Eulerian grid resolutions of 50 and 100 grid cells
per initial diameter respectively. The flow is driven by an imposed uniform
velocity field at the inlet and outlet boundaries, and the Reynolds number is
set to 0.01 to model Stokes flow conditions. For this case the non-dimensional
time step is set to $a\Delta t/U=2.5\cdot 10^{-5}$, with $a$ the initial
radius of the capsule and $U$ the characteristic velocity of the fluid. It
appears that this strict restriction on the time step is due to the stiff
bending force combined with the very fine discretization we choose.
Figure 15: Qualitative comparison of a capsule flowing through a constricted
channel: when the constriction size is equal to the initial diameter of the
capsule (left); and when the constriction size is equal to half of the initial
diameter of the capsule, with a finest Eulerian grid resolution of 100 cells
per initial diameter (right). Color field: $x$-component of the velocity
field. Figure 16: Non-dimensional $x$-, $y$\- and $z$-lengths of the capsule
$l_{x}/2a$, $l_{y}/2a$ and $l_{z}/2a$ with respect to the non-dimensional
$x$-position of the center of the capsule $x_{c}/H_{c}$ as it flows through a
constriction size equal to: (i) the initial diameter of the capsule (solid
line); and (ii) half of the initial diameter of the capsule (dashed line). The
dotted line corresponds to a finest Eulerian resolution of 100 Eulerian grid
cells per initial diameter (shown for $H_{c}=a/2$ only).
Qualitative results are shown in figure 15. As expected, the deformation of
the capsule is considerably greater when the constriction size is halved: the
capsule becomes almost flat as it reaches the center of the constriction.
Figure 15 also confirms that the Eulerian mesh is refined only in the region
of interest, as the grid size quickly increases with the distance from the
constriction and from the capsule. We show quantitative results in figure 16,
where the non-dimensional $x$-, $y$\- and $z$-lengths of the capsule are
plotted against the non-dimensional $x$-position of the center of the capsule.
Unsurprisingly, the capsule vertical length $l_{y}$ decreases by over a factor
of two when the capsule reaches the center of the constriction, before sharply
increasing again as the front of the capsule expands while leaving the
constriction. The sharp point observed for $l_{y}$ at $x_{c}/H_{c}=0$ is
simply due to the non-locality of the variable $l_{y}$: for $x_{c}/H_{c}\leq
0$ the maximum height of the capsule is located at its rear, while for
$x_{c}/H_{c}\geq 0$ it is located at its front. We also note that the maximum
decrease of the capsule height $l_{y,\text{min}}^{N}$ in the case of a narrow
constriction is much more pronounced than its counterpart
$l_{y,\text{min}}^{W}$ in the wider constriction. However, the minimum capsule
height is not halved when the constriction size is halved, i.e.
$l_{y,\text{min}}^{N}>l_{y,\text{min}}^{W}/2$ . On the other hand, the maximum
$x$-length of the capsule more than doubles in the case of a narrow
constriction when compared to the wider constriction size, and the maximum
$z$-length more than triples. Therefore, the capsule's preference to elongate
in the streamwise rather than the spanwise direction is reduced when
the constriction is narrower. Finally, the capsule reaches a steady shape
after the constriction for $x_{c}>6H_{c}$: the constriction size does not
appear to affect this steady shape, which is a slightly deformed sphere
compressed in the streamwise direction.
In figure 16 the $x$-, $y$\- and $z$-lengths of the capsule are shown for two
maximum Eulerian resolutions: $D/\Delta x=50$ and $D/\Delta x=100$, with $D$
the initial diameter of the capsule and $\Delta x$ the smallest Eulerian cell
size. Relatively minor differences are observed between the two mesh
resolutions, indicating that a maximum Eulerian resolution of $D/\Delta x=50$
is sufficient to capture the underlying physics of this configuration. In
terms of performance, conducting the previous convergence study up to
$D/\Delta x=100$ with a constant grid size would have required about $4.2\cdot
10^{7}$ fluid cells, while our simulation used less than $4.6\cdot 10^{6}$
fluid cells, thus allowing a tenfold reduction in the number of fluid grid
cells, and likely reducing the computational resources by a factor of 7 to 10
when accounting for the computational overhead due to the complex tree
structure of the grid and the adaptivity algorithm.
### 5.2 Capsule-laden flows in large complex channel geometries
It is clear from the previous simulation results presented in this paper that
adaptive mesh refinement is useful to lower the number of cells inside the
fluid domain, and thus the amount of computation per time step. However, it
can also be desirable to reduce the number of cells outside the fluid domain,
as they can also be associated with a large computational and memory cost.
This is because in cases of complex geometries the computational domain of
Cartesian grid methods is by design a bounding box that surrounds the fluid
domain. As such, if the volume fraction of the fluid domain in this bounding
box is low and if a constant grid size is used, most of the computational
cells are located inside the solid walls and a significant amount of memory
and computational resources are allocated for these “solid” cells where no
physics happens. This is especially true in cases of large, three-dimensional
channel geometries. For instance, let us consider a helical pipe of radius
$R_{\text{pipe}}$ connecting an upper and a lower arm of length
$L_{\text{arm}}$, where a capsule of radius $R_{c}=R_{\text{pipe}}/4$ is
placed at the top of the geometry. If we assume this geometry is embedded in a
bounding box of depth $2R_{\text{helix}}=10R_{\text{pipe}}$, height
$H_{\text{helix}}=12R_{\text{helix}}$ and length
$2L_{\text{arm}}=H_{\text{helix}}$, as shown in figure 17, ensuring 16 grid
cells per initial capsule diameter using a uniform Eulerian grid would require
about 1.2 billion grid cells, rendering the computation extremely expensive.
In contrast, using Basilisk’s adaptive mesh as shown in figure 17 allows us
to reduce the number of computational cells by a factor of about 200, down to
less than 6 million grid cells. If only the helix itself is of interest and
not the connecting arms, using a uniform Eulerian grid would require about 200
million grid cells, and using our adaptive solver would still reduce the
number of grid cells by over a factor of 30.
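The uniform-grid cell count above follows from simple arithmetic, sketched here with the dimensions quoted in the text (a back-of-the-envelope check that ignores the cubic bounding box an octree code would actually allocate):

```python
# all lengths in units of the pipe radius R_pipe
R_helix = 5.0                   # 2*R_helix = 10*R_pipe
depth = 2.0 * R_helix           # bounding-box depth: 10 R_pipe
height = 12.0 * R_helix         # H_helix = 12*R_helix = 60 R_pipe
length = height                 # 2*L_arm = H_helix

capsule_diam = 2.0 * (1.0 / 4.0)  # R_c = R_pipe/4, so diameter = R_pipe/2
dx = capsule_diam / 16.0          # 16 grid cells per initial capsule diameter

uniform_cells = (depth/dx) * (height/dx) * (length/dx)
# about 1.18e9 cells for a uniform grid, i.e. on the order of 1.2 billion;
# dividing by the ~6e6 adaptive cells gives the quoted factor of about 200
reduction = uniform_cells / 6e6
```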
Figure 17: Adaptive mesh around a helical geometry and its connecting pipes:
(a) full computational box around the whole geometry; (b) three-dimensional
trajectory of a capsule at $Re=10,50,100$ (for visual clarity the helix is
shrunk in the vertical direction); (c) adaptive mesh and velocity field in
the vertical plane. In (c), the color field corresponds to the $x$-component
of the velocity, where blue is into the page and orange is out of the page.
The capsule is about to cross the vertical plane for the third time, hence
the additional small cells inside the third circular cross-section from the top.
Figure 18: Trajectory of a solitary capsule in the helix at $Re=10,50,100$:
(a) normalized distance from the helix centerline; (b) radial migration in a
plane orthogonal to the helix centerline. $N^{\star}$ corresponds to the
number of revolutions around the vertical axis of the helix.
As a demonstration of the capability of the present solver to handle such
large and complex geometries, simulations are carried out in the helical
geometry described above and shown in figure 17. A neo-Hookean capsule of
radius $R_{c}$ with $Ca=0.1$ is placed on the upper straight pipe centerline
at a distance of $1.5R_{\text{helix}}$ prior to entering the helix. We propose
to study the inertial migration of this capsule for three Reynolds numbers:
$Re=10,50$ and $100$. We set the non-dimensional time step
$R_{\text{helix}}\Delta t/U$ to $10^{-3}$, with $U$ the characteristic
velocity of the fluid; and we choose a finest Eulerian grid resolution
corresponding to 16 grid cells per initial diameter. Each case ran for about
three days on 96 processors, with around $6\cdot 10^{4}$ cells per processor.
To analyze the trajectories, we define the distance $r^{\star}$ of the capsule
centroid to the helix centerline normalized by the pipe radius
$R_{\text{pipe}}$, as well as the number of helical periods
$N^{\star}=(H_{\text{helix}}-z)/H_{1p}$, with $H_{1p}$ the height of one
vertical period of the helix, i.e. its pitch distance. A slice of the flow
field in the helix for $Re=100$ is shown in figure 17, and a movie of this
simulation as well as the code to reproduce it are available at [84]. The
trajectories of the capsule are shown in the three-dimensional space in figure
17, in one dimension by showing $r^{\star}$ as a function of $N^{\star}$ in
figure 18, and in a cross-section of the pipe orthogonal to the helix
centerline in figure 18. Only the path corresponding to the capsule located
inside the helix is shown in these figures. Immediately after release, the
capsule moves away from the centerline for all Reynolds numbers. The initial
overshoot of $r^{\star}$ increases with the Reynolds number, from
$r^{\star}_{\text{max}}\approx 0.3$ at $Re=10$ to
$r^{\star}_{\text{max}}\approx 0.6$ at $Re=100$. After four helical
revolutions, the capsule exits the helix with a steady position of
$r^{\star}_{\infty}\approx 0.38$ and $r^{\star}_{\infty}\approx 0.45$ for
$Re=50$ and $Re=100$ respectively. In the case of $Re=10$, however, a steady
state is not yet reached when the capsule exits the helix, but we can
extrapolate the capsule trajectory to find that $r^{\star}_{\infty}\approx
0.18$ for this Reynolds number. For all Reynolds numbers, the steady position
is located in the lower half of the cross-section, at an angle $\theta$ from
the horizontal line of about $-\pi/2$, although $\theta$ increases slightly
with the Reynolds number. Interestingly, we note that the capsule transient
path is longer for $Re=100$ than for $Re=50$: as can be seen in figure
18, for $Re=100$ the capsule seems to reach an unstable equilibrium at
$r^{\star}\approx 0.58$ and $\theta\approx 0$ for as long as 1.5 helical
periods, before continuing its spiralling motion towards a stable steady-state
location. Further investigation would be necessary to characterize this
behavior and determine if, for instance, this unstable equilibrium corresponds
to the center of a vortex, but this is not the focus of the present paper. The
purpose of this simulation is to show that the present solver is able to
simulate large three-dimensional channel geometries, and has the potential to
simulate full microfluidic devices.
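The trajectory diagnostics used above can be sketched in a few lines. The following Python helper (all names and the helix parameterization are assumptions of this sketch, and the centerline distance is approximated by the in-plane offset at the same height $z$) maps capsule centroid positions to the normalized coordinates $(r^{\star},N^{\star})$:

```python
import numpy as np

def trajectory_coordinates(centroids, helix_radius, pitch, helix_height, pipe_radius):
    """Normalized trajectory coordinates: r* is the distance of the capsule
    centroid to the helix centerline over the pipe radius, and
    N* = (H_helix - z) / H_1p counts the number of helical periods.

    The centerline is assumed to be x_c = R cos(phi), y_c = R sin(phi) with
    phi = 2*pi*z / pitch (illustrative parameterization); the distance to the
    centerline is approximated by the in-plane offset at the same height z.
    """
    x, y, z = np.asarray(centroids).T
    phi = 2.0 * np.pi * z / pitch
    xc = helix_radius * np.cos(phi)
    yc = helix_radius * np.sin(phi)
    r_star = np.hypot(x - xc, y - yc) / pipe_radius
    n_star = (helix_height - z) / pitch
    return r_star, n_star
```

A centroid lying exactly on the centerline yields $r^{\star}=0$, and $N^{\star}$ decreases linearly with height, as in figure 18.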
## 6 Conclusion and perspectives
We have presented an adaptive front-tracking solver to simulate deformable
capsules in viscous flows at zero and finite inertia. The membrane mechanics
is governed by an elastic and a bending resistance, and a non-unity viscosity
ratio is allowed. Moreover, the present solver is compatible with complex STL-
defined geometries, thus providing all the ingredients needed to simulate
realistic flow configurations of biological cells, including red blood cells,
both in-vivo and in-vitro. Numerous validation cases are presented against
data available in the literature: we compare our results mainly to the highly
accurate boundary integral method in Stokes conditions, and to other front-
tracking methods at nonzero Reynolds numbers. Very good qualitative and
quantitative agreement is shown in all cases. We then demonstrate the
robustness of the present solver in more challenging configurations, as well
as its potential to tackle very large, three-dimensional channel geometries
relevant to inertial microfluidic applications. Moreover, the present
implementation is open-source as part of the Basilisk platform: the documented
source code is freely available, as well as the source files to reproduce all
the simulations presented in this paper [49].
Although the present adaptive front-tracking solver can handle the full range
of Reynolds numbers, the non-inertial limit is challenging because of the
computation of the viscous term. The simulations we show in this paper at
$Re=0.01$ are several times slower to complete than their counterparts at,
e.g., $Re=O(1)$ or $Re=O(10)$. As a result, if only the non-inertial regime is
sought, boundary integral solvers likely remain the most efficient method by
far. Another challenge is the stiffness of the bending stresses: since
Helfrich’s bending formulation involves high-order derivatives of the
membrane geometry, and since the time integration of the membrane problem is
explicit, the time step is controlled by the time scale associated with the
bending force whenever bending effects are included. This is a known challenge
of computing bending stresses on elastic membranes (see, e.g., p. 40 of [55]).
As a result, we have to decrease our time step by one order of magnitude
(sometimes even more) whenever the bending stresses are included.
Unfortunately, to our knowledge the stability condition associated with the
bending force is unknown and it is therefore not possible to stabilize
simulations by employing an adaptive time-step strategy, as is already the
case in Basilisk with the CFL condition and with the celerity of capillary
waves for surface tension stresses. One could investigate the implicit or
“approximately implicit” treatment of the immersed boundary method as done by
Roma, Peskin & Berger [42].
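A minimal sketch of such a time-step selection, assuming the advective CFL condition and the capillary-wave condition of Brackbill et al. [32], with bending handled by an ad-hoc order-of-magnitude reduction factor (our assumption for illustration, not the solver's actual strategy, since no stability condition is known for the bending force):

```python
import math

def adaptive_timestep(dx, umax, rho, sigma, cfl=0.5, bending=False, bending_factor=0.1):
    """Select a time step from the known stability conditions: the advective
    CFL condition and the capillary-wave condition of Brackbill et al. [32].
    Since no stability condition is known for the bending force, bending is
    handled here by an empirical order-of-magnitude reduction (an assumption
    of this sketch)."""
    dt_cfl = cfl * dx / umax                              # advective CFL limit
    dt_cap = math.sqrt(rho * dx**3 / (math.pi * sigma))   # capillary-wave limit
    dt = min(dt_cfl, dt_cap)
    if bending:
        dt *= bending_factor  # ad-hoc reduction when bending is included
    return dt
```

A rigorous stability condition for the bending force would replace the fixed reduction factor, which is precisely the open question noted above.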
On the implementation side, the fluid solver from Basilisk is compatible with
shared- and distributed-memory systems (i.e., using the OpenMP and MPI
libraries), allowing simulations to run on large supercomputers. Naturally, we have
enabled our front-tracking solver to be compatible with MPI as well. However,
ensuring a good scaling with the number of capsules is not trivial as the
domain decomposition of the Eulerian adaptive mesh is governed by a Z-ordering
algorithm. As a result, when the Eulerian mesh is adaptive, the stencils
attached to the Lagrangian nodes of a given capsule are likely to contain
Eulerian cells handled by several distinct processors. In other words, a
single capsule has to exist on many different processors in order to
communicate with the background Eulerian mesh. Interpolating velocities from
the fluid cells to the Lagrangian nodes, and spreading the Lagrangian forces
to the fluid cells thus requires expensive inter-processor communications.
Investigating efficient strategies to simulate a large number of capsules with
an adaptive mesh is left for future work. That being said, in all the
simulations shown in this study, featuring one or two capsules, at most
$5\%$ of the total computation time is spent in the front-tracking solver.
Consequently, unless a large number of capsules is simulated (e.g., $O(100)$),
the bottleneck is still the Navier-Stokes solver. Moreover, when a large
number of capsules is considered, one could argue that a uniform Eulerian mesh
can be more efficient than its adaptive counterpart because a large number of
capsules would likely result in a high volume fraction of capsules. An
efficient parallel implementation of the present front-tracking solver
restricted to uniform Eulerian grids is straightforward and could be
implemented in future studies if dense volume fractions are considered.
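The interpolation step (and, by transposition, the spreading step) can be illustrated in serial on a uniform one-dimensional grid with Peskin's four-point regularized delta; this is only a sketch, as the actual solver is three-dimensional, adaptive, and MPI-parallel:

```python
import math

def peskin_delta(r):
    """Peskin's four-point regularized delta (1D), support |r| < 2 cells."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + math.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - math.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def interpolate_velocity(u, x_l, dx):
    """Interpolate a 1D Eulerian field u (cell centers at (i + 0.5) * dx)
    to the Lagrangian position x_l. Spreading of Lagrangian forces uses the
    same weights, transposed."""
    return sum(u[i] * peskin_delta((x_l - (i + 0.5) * dx) / dx)
               for i in range(len(u)))
```

Because the four-point delta is a partition of unity, interpolating a constant field returns that constant; on an adaptive, domain-decomposed grid the weights mix cells of different sizes owned by different processors, which is precisely the difficulty discussed above.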
Another extension to the present solver could be to allow the triangulation of
the membrane to be adaptive as well. Such adaptive triangulations have been
considered to simulate fluid-fluid interfaces [33], but in the case of elastic
membranes special care needs to be taken when coarsening or refining the shape
functions of a triangle. However, only simulations featuring extreme membrane
deformation would benefit from an adaptive membrane triangulation, as the
front-tracking solver takes only a few percent of the total computing
time. For the applications we target, where reasonable membrane deformations are
expected, the gain of an adaptive membrane triangulation is likely close to
zero.
Another possible improvement to the present solver would be to allow the
support of the regularized Dirac-delta functions to include grid cells of
different sizes. Indeed, the current method imposes a constant grid size in
the vicinity of the membrane in order to apply the IBM in a straightforward
manner, in a similar fashion to our Distributed Lagrange
Multiplier/Fictitious Domain method implemented in Basilisk to simulate flows
laden with rigid particles [85], but this can result in imposing a finer
Eulerian grid resolution around the membrane than what is necessary to
properly resolve some parts of the membrane. This scenario typically happens
when the capsule is close, or will come close, to a sharp boundary, such as in
the case of the narrow constriction in Section 5.1. In figure 15, for
instance, as the capsule enters the narrow constriction, the Eulerian grid
resolution around its tail is much finer than necessary due to the very fine
grid resolution needed at the front of the capsule, which is located in a flow
with strong gradients and close to sharp boundaries. A conceptually simple way
to allow the size of the Eulerian grid cells to change along the membrane
would be: (i) to propagate the forcing term or the averaging operator to
smaller grid cells located inside the stencil of interest; and (ii) to
increase the stencil size if a large grid cell is encountered, such that any
stencil size is always four times larger in each direction than its largest
grid cell. This adaptive extension of the immersed boundary method is under
current investigation and shows promising results.
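Step (ii) above can be sketched as a small helper (illustrative only; the function name and the nominal delta support are assumptions of this sketch):

```python
def stencil_extent(cell_sizes, delta_support=4.0):
    """Step (ii) sketched in the text: the stencil extent must be at least
    four times the size of the largest grid cell it contains. `cell_sizes`
    lists the sizes of the Eulerian cells currently inside the stencil;
    `delta_support` is the nominal support of the regularized delta in
    units of the smallest cell (an assumption of this sketch)."""
    nominal = delta_support * min(cell_sizes)
    return max(nominal, 4.0 * max(cell_sizes))
```

On a uniform patch of cells the nominal support is returned unchanged; as soon as a coarser cell enters the stencil, the extent grows to keep the four-cell ratio.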
As demonstrated in Section 5.2, the current state of our adaptive solver
allows resolved inertial simulations of capsule-laden flows in large three-
dimensional geometries for a fraction of the computational cost of front-
tracking solvers implemented on uniform Cartesian grids. As
such, our solver has the potential to provide valuable insight to help develop
inertial migration microfluidic devices. The present solver may even allow the
simulation of full microfluidic geometries consisting of several stacked
layers of microfluidic channels, such as the spiralling geometries in the
experimental work of Fang et al. [8]. This has the potential to provide
valuable qualitative and quantitative information about the flow field,
capsule dynamics and sorting efficiency of a given realistic microfluidic
geometry, thus reducing the number of manufacturing iterations during the
design process of such devices. Our medium-term objective is to tackle this
type of virtual design problem, considering sub-domains of the full geometry
as a first step, while concomitantly investigating the possible improvements
stated above.
## Acknowledgements
Damien P. Huet thanks Antoine Morente for designing the helix STL geometry and
for helpful discussions. The authors greatly appreciate the financial support
of the Natural Sciences and Engineering Research Council of Canada (NSERC) via
Anthony Wachs’ Discovery Grant RGPIN-2016-06572. This research was enabled by
support provided by Compute Canada (http://www.computecanada.ca) through
Anthony Wachs’s 2020 and 2021 Computing Resources for Research Groups
allocation qpf-764-ac.
## References
* [1] M. W. Dewhirst, T. W. Secomb, Transport of drugs from blood vessels to tumour tissue, Nature Reviews Cancer 17 (12) (2017) 738–750.
* [2] D. F. Puleri, P. Balogh, A. Randles, Computational models of cancer cell transport through the microcirculation, Biomechanics and Modeling in Mechanobiology 20 (4) (2021) 1209–1230.
* [3] P. Balogh, J. Gounley, S. Roychowdhury, A. Randles, A data-driven approach to modeling cancer cell mechanics during microcirculatory transport, Scientific Reports 11 (1) (2021) 1–18.
* [4] E. Islamzada, K. Matthews, Q. Guo, A. T. Santoso, S. P. Duffy, M. D. Scott, H. Ma, Deformability based sorting of stored red blood cells reveals donor-dependent aging curves, Lab on a Chip 20 (2) (2020) 226–235.
* [5] S. R. Bazaz, A. Mihandust, R. Salomon, H. A. N. Joushani, W. Li, H. A. Amiri, F. Mirakhorli, S. Zhand, J. Shrestha, M. Miansari, et al., Zigzag microchannel for rigid inertial separation and enrichment (z-rise) of cells and particles, Lab on a Chip.
* [6] N. Takeishi, H. Yamashita, T. Omori, N. Yokoyama, S. Wada, M. Sugihara-Seki, Inertial migration of red blood cells under a newtonian fluid in a circular channel, arXiv preprint arXiv:2209.11933.
* [7] A. Gangadhar, S. A. Vanapalli, Inertial focusing of particles and cells in the microfluidic labyrinth device: Role of sharp turns, Biomicrofluidics 16 (4) (2022) 044114.
* [8] Y. Fang, S. Zhu, W. Cheng, Z. Ni, N. Xiang, Efficient bioparticle extraction using a miniaturized inertial microfluidic centrifuge, Lab on a Chip 22 (18) (2022) 3545–3554.
* [9] D. Barthes-Biesel, J. Rallison, The time-dependent deformation of a capsule freely suspended in a linear shear flow, Journal of Fluid Mechanics 113 (1981) 251–267.
* [10] C. Pozrikidis, Finite deformation of liquid capsules enclosed by elastic membranes in simple shear flow, Journal of Fluid Mechanics 297 (1995) 123–152.
* [11] S. Ramanujan, C. Pozrikidis, Deformation of liquid capsules enclosed by elastic membranes in simple shear flow: large deformations and the effect of fluid viscosities, Journal of fluid mechanics 361 (1998) 117–143.
* [12] C. D. Eggleton, A. S. Popel, Large deformation of red blood cell ghosts in a simple shear flow, Physics of fluids 10 (8) (1998) 1834–1845.
* [13] C. Pozrikidis, Effect of membrane bending stiffness on the deformation of capsules in simple shear flow, Journal of Fluid Mechanics 440 (2001) 269.
* [14] C. Pozrikidis, Numerical simulation of the flow-induced deformation of red blood cells, Annals of biomedical engineering 31 (10) (2003) 1194–1205.
* [15] H. Zhao, A. H. Isfahani, L. N. Olson, J. B. Freund, A spectral boundary integral method for flowing blood cells, Journal of Computational Physics 229 (10) (2010) 3726–3744.
* [16] E. Lac, D. Barthes-Biesel, N. Pelekasis, J. Tsamopoulos, Spherical capsules in three-dimensional unbounded stokes flows: effect of the membrane constitutive law and onset of buckling, Journal of Fluid Mechanics 516 (2004) 303–334.
* [17] E. Lac, A. Morel, D. Barthès-Biesel, Hydrodynamic interaction between two identical capsules in simple shear flow, Journal of Fluid Mechanics 573 (2007) 149–169.
* [18] P. Bagchi, Mesoscale simulation of blood flow in small vessels, Biophysical journal 92 (6) (2007) 1858–1877.
* [19] S. K. Doddi, P. Bagchi, Lateral migration of a capsule in a plane poiseuille flow in a channel, International Journal of Multiphase Flow 34 (10) (2008) 966–986.
* [20] A. Z. Yazdani, P. Bagchi, Phase diagram and breathing dynamics of a single red blood cell and a biconcave capsule in dilute shear flow, Physical Review E 84 (2) (2011) 026314.
* [21] A. Yazdani, P. Bagchi, Three-dimensional numerical simulation of vesicle dynamics using a front-tracking method, Physical Review E 85 (5) (2012) 056308.
* [22] A. Yazdani, P. Bagchi, Influence of membrane viscosity on capsule dynamics in shear flow, Journal of fluid mechanics 718 (2013) 569–595.
* [23] S. K. Doddi, P. Bagchi, Effect of inertia on the hydrodynamic interaction between two liquid capsules in simple shear flow, International journal of multiphase flow 34 (4) (2008) 375–392.
* [24] P. Balogh, P. Bagchi, A computational approach to modeling cellular-scale blood flow in complex geometry, Journal of Computational Physics 334 (2017) 280–307.
* [25] P. Balogh, P. Bagchi, Analysis of red blood cell partitioning at bifurcations in simulated microvascular networks, Physics of Fluids 30 (5) (2018) 051902.
* [26] X. Li, K. Sarkar, Front tracking simulation of deformation and buckling instability of a liquid capsule enclosed by an elastic membrane, Journal of Computational Physics 227 (10) (2008) 4998–5018.
* [27] J. Zhang, P. C. Johnson, A. S. Popel, An immersed boundary lattice boltzmann approach to simulate deformable liquid capsules and its application to microscopic blood flows, Physical biology 4 (4) (2007) 285.
* [28] J. Zhang, P. C. Johnson, A. S. Popel, Red blood cell aggregation and dissociation in shear flows simulated by lattice boltzmann method, Journal of biomechanics 41 (1) (2008) 47–55.
* [29] J. Ames, D. F. Puleri, P. Balogh, J. Gounley, E. W. Draeger, A. Randles, Multi-gpu immersed boundary method hemodynamics simulations, Journal of computational science 44 (2020) 101153.
* [30] D. A. Fedosov, B. Caswell, G. E. Karniadakis, A multiscale red blood cell model with accurate mechanics, rheology, and dynamics, Biophysical journal 98 (10) (2010) 2215–2225.
* [31] C. W. Hirt, B. D. Nichols, Volume of fluid (vof) method for the dynamics of free boundaries, Journal of computational physics 39 (1) (1981) 201–225.
* [32] J. U. Brackbill, D. B. Kothe, C. Zemach, A continuum method for modeling surface tension, Journal of computational physics 100 (2) (1992) 335–354.
* [33] G. Tryggvason, B. Bunner, A. Esmaeeli, D. Juric, N. Al-Rawahi, W. Tauber, J. Han, S. Nas, Y.-J. Jan, A front-tracking method for the computations of multiphase flow, Journal of computational physics 169 (2) (2001) 708–759.
* [34] S. Popinet, An accurate adaptive solver for surface-tension-driven interfacial flows, Journal of Computational Physics 228 (16) (2009) 5838–5866.
* [35] G.-H. Cottet, E. Maitre, A level set method for fluid-structure interactions with immersed surfaces, Mathematical models and methods in applied sciences 16 (03) (2006) 415–438.
* [36] S. Ii, X. Gong, K. Sugiyama, J. Wu, H. Huang, S. Takagi, A full eulerian fluid-membrane coupling method with a smoothed volume-of-fluid approach, Communications in Computational Physics 12 (2) (2012) 544.
* [37] S. Ii, K. Sugiyama, S. Takagi, Y. Matsumoto, A computational blood flow analysis in a capillary vessel including multiple red blood cells and platelets, Journal of Biomechanical Science and Engineering 7 (1) (2012) 72–83.
* [38] S. Ii, K. Shimizu, K. Sugiyama, S. Takagi, Continuum and stochastic approach for cell adhesion process based on eulerian fluid-capsule coupling with lagrangian markers, Journal of Computational Physics 374 (2018) 769–786.
* [39] M. Uhlmann, An immersed boundary method with direct forcing for the simulation of particulate flows, Journal of Computational Physics 209 (2) (2005) 448–476.
* [40] T. Kempe, J. Fröhlich, An improved immersed boundary method with direct forcing for the simulation of particle laden flows, Journal of Computational Physics 231 (9) (2012) 3663–3684.
* [41] W.-P. Breugem, A second-order accurate immersed boundary method for fully resolved simulations of particle-laden flows, Journal of Computational Physics 231 (13) (2012) 4469–4498.
* [42] A. Roma, C. Peskin, M. Berger, An adaptive version of the immersed boundary method, Journal of Computational Physics 153 (2) (1999) 509–534.
* [43] B. E. Griffith, R. D. Hornung, D. M. McQueen, C. S. Peskin, An adaptive, formally second order accurate version of the immersed boundary method, Journal of computational physics 223 (1) (2007) 10–49.
* [44] M. Vanella, A. Posa, E. Balaras, Adaptive mesh refinement for immersed boundary methods, Journal of Fluids Engineering 136 (4).
* [45] G. Agresar, J. Linderman, G. Tryggvason, K. Powell, An adaptive, cartesian, front-tracking method for the motion, deformation and adhesion of circulating cells, Journal of Computational Physics 143 (2) (1998) 346–380.
* [46] Z. Cheng, A. Wachs, An immersed boundary/multi-relaxation time lattice boltzmann method on adaptive octree grids for the particle-resolved simulation of particle-laden flows, Journal of Computational Physics (2022) 111669.
* [47] S. Popinet, Gerris: a tree-based adaptive solver for the incompressible euler equations in complex geometries, Journal of Computational Physics 190 (2) (2003) 572–600.
* [48] S. Popinet, A quadtree-adaptive multigrid solver for the Serre–Green–Naghdi equations, Journal of Computational Physics 302 (2015) 336–358.
* [49] D. P. Huet, http://basilisk.fr/sandbox/huet, accessed: 2022-10-05 (2022).
* [50] A. E. Green, J. E. Adkins, Large elastic deformations and non-linear continuum mechanics.
* [51] D. Barthes-Biesel, A. Diaz, E. Dhenin, Effect of constitutive laws for two-dimensional membranes on flow-induced capsule deformation, Journal of Fluid Mechanics 460 (2002) 211–222.
* [52] J. Charrier, S. Shrivastava, R. Wu, Free and constrained inflation of elastic membranes in relation to thermoforming—non-axisymmetric problems, The Journal of Strain Analysis for Engineering Design 24 (2) (1989) 55–74.
* [53] W. Helfrich, Elastic properties of lipid bilayers: theory and possible experiments, Zeitschrift für Naturforschung c 28 (11-12) (1973) 693–703.
* [54] A. Guckenberger, S. Gekle, Theory and algorithms to compute helfrich bending forces: A review, Journal of Physics: Condensed Matter 29 (20) (2017) 203001.
* [55] D. Barthes-Biesel, Motion and deformation of elastic capsules and vesicles in flow, Annual Review of fluid mechanics 48 (2016) 25–52.
* [56] A. Guckenberger, M. P. Schraml, P. G. Chen, M. Leonetti, S. Gekle, On the bending algorithms for soft objects in flows, Computer Physics Communications 207 (2016) 1–23.
* [57] A. J. Chorin, Numerical solution of the navier-stokes equations, Mathematics of computation 22 (104) (1968) 745–762.
* [58] J. B. Bell, P. Colella, H. M. Glaz, A second-order projection method for the incompressible navier-stokes equations, Journal of Computational Physics 85 (2) (1989) 257–283.
* [59] J. A. Van Hooft, S. Popinet, C. C. Van Heerwaarden, S. J. Van der Linden, S. R. de Roode, B. J. Van de Wiel, Towards adaptive grids for atmospheric boundary-layer simulations, Boundary-layer meteorology 167 (3) (2018) 421–443.
* [60] H. Johansen, P. Colella, A cartesian grid embedded boundary method for poisson’s equation on irregular domains, Journal of Computational Physics 147 (1) (1998) 60–85.
* [61] P. Colella, D. T. Graves, B. J. Keen, D. Modiano, A cartesian grid embedded boundary method for hyperbolic conservation laws, Journal of Computational Physics 211 (1) (2006) 347–366.
* [62] S. Popinet, http://basilisk.fr/src/embed.h#lifting-the-small-cell-cfl-restriction, accessed: 2022-10-05 (2018).
* [63] A. Ghigo, http://basilisk.fr/sandbox/ghigo/src/myembed.h, accessed: 22-11-2022 (2021).
* [64] S. O. Unverdi, G. Tryggvason, A front-tracking method for viscous, incompressible, multi-fluid flows, Journal of computational physics 100 (1) (1992) 25–37.
* [65] C. Peskin, Numerical analysis of blood flow in the heart, Journal of Computational Physics 25 (3) (1977) 220–252.
* [66] L. Lu, M. J. Morse, A. Rahimian, G. Stadler, D. Zorin, Scalable simulation of realistic volume fraction red blood cell flows through vascular networks, in: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2019, pp. 1–30.
* [67] A. Farutin, T. Biben, C. Misbah, 3d numerical simulations of vesicle and inextensible capsule dynamics, Journal of Computational Physics 275 (2014) 539–568.
* [68] M. Meyer, M. Desbrun, P. Schröder, A. H. Barr, Discrete differential-geometry operators for triangulated 2-manifolds, in: Visualization and mathematics III, Springer, 2003, pp. 35–57.
* [69] D. P. Huet, http://basilisk.fr/sandbox/huet/tests/lagrangian_caps/uniaxial_stretch.c, accessed: 2022-10-05 (2022).
* [70] D. P. Huet, http://basilisk.fr/sandbox/huet/tests/lagrangian_caps/biconcave_curvatures.c, accessed: 2022-10-05 (2022).
* [71] I. Koleva, H. Rehage, Deformation and orientation dynamics of polysiloxane microcapsules in linear shear flow, Soft Matter 8 (13) (2012) 3681–3693.
* [72] J. Walter, A.-V. Salsac, D. Barthès-Biesel, P. Le Tallec, Coupling of finite element and boundary integral methods for a capsule in a stokes flow, International journal for numerical methods in engineering 83 (7) (2010) 829–850.
* [73] D. P. Huet, http://basilisk.fr/sandbox/huet/tests/lagrangian_caps/nh_shear.c, accessed: 2022-10-05 (2022).
* [74] L. Zhu, L. Brandt, The motion of a deforming capsule through a corner, Journal of Fluid Mechanics 770 (2015) 374–397.
* [75] D. V. Le, Effect of bending stiffness on the deformation of liquid capsules enclosed by thin shells in shear flow, Physical Review E 82 (1) (2010) 016318.
* [76] D. P. Huet, http://basilisk.fr/sandbox/huet/tests/lagrangian_caps/bending_shear.c, accessed: 2022-10-05 (2022).
* [77] S.-Y. Park, P. Dimitrakopoulos, Transient dynamics of an elastic capsule in a microfluidic constriction, Soft matter 9 (37) (2013) 8844–8855.
* [78] D. P. Huet, http://basilisk.fr/sandbox/huet/tests/lagrangian_caps/constricted_channel.c, accessed: 2022-10-05 (2022).
* [79] C. Pozrikidis, Resting shape and spontaneous membrane curvature of red blood cells, Mathematical Medicine and Biology: A Journal of the IMA 22 (1) (2005) 34–52.
* [80] C. Pozrikidis, Computational hydrodynamics of capsules and biological cells.
* [81] D. P. Huet, http://basilisk.fr/sandbox/huet/tests/lagrangian_caps/rbc_shear.c, accessed: 2022-10-05 (2022).
* [82] D. P. Huet, http://basilisk.fr/sandbox/huet/tests/lagrangian_caps/caps_interception.c, accessed: 2022-10-05 (2022).
* [83] D. P. Huet, http://basilisk.fr/sandbox/huet/tests/lagrangian_caps/caps_interception_inertia.c, accessed: 2022-10-05 (2022).
* [84] D. P. Huet, http://basilisk.fr/sandbox/huet/cases/lagrangian_caps/helix.c, accessed: 2022-10-05 (2022).
* [85] C. Selçuk, A. R. Ghigo, S. Popinet, A. Wachs, A fictitious domain method with distributed lagrange multipliers on adaptive quad/octrees for the direct numerical simulation of particle-laden flows, Journal of Computational Physics 430 (2021) 109954.
# An Elastic Quartic Twist Theory for Chromonic Liquid Crystals
Silvia Paparini<EMAIL_ADDRESS>Epifanio G. Virga<EMAIL_ADDRESS>Dipartimento di Matematica, Università di Pavia, Via Ferrata 5, 27100 Pavia,
Italy
(August 27, 2024)
###### Abstract
Chromonic liquid crystals are lyotropic materials which are attracting growing
interest for their adaptability to living systems. To describe their elastic
properties, the classical Oseen-Frank theory requires anomalously small twist
constants and (comparatively) large saddle-splay constants, so large as to
violate one of Ericksen’s inequalities, which guarantee that the Oseen-Frank
stored-energy density is bounded below. While such a violation does not
prevent the existence and stability of equilibrium distortions in problems
with fixed geometric confinement, the study of free-boundary problems for
droplets has revealed a number of paradoxical consequences. Minimizing
sequences driving the total energy to negative infinity have been constructed
by employing ever-growing needle-shaped tactoids incorporating a diverging
twist [Phys. Rev. E 106, 044703 (2022)]. To overcome these difficulties, we
propose here a novel elastic theory that extends the classical Oseen-Frank
stored energy for chromonics by adding a quartic twist term. We show that the
total energy of droplets is bounded below in the quartic twist theory, so that
the known paradoxes are ruled out. The quartic term introduces a
phenomenological length $a$ in the theory; this affects the equilibrium of
chromonics confined within capillary tubes. Use of published experimental data
allows us to estimate $a$.
## I Introduction
Liquid crystals are _ordered_ fluids divided into two big families:
_thermotropic_ and _lyotropic_. What distinguishes these families is the agent
that drives the transition from ordinary, isotropic fluids to anisotropic
ones, typically birefringent and optically uniaxial. It is temperature in the
former family and concentration in the latter. Correspondingly, ordering takes
place at sufficiently low temperature or sufficiently high concentration.
Here we shall be concerned with the _nematic_ phase, where order arises only
in the _orientation_ of the typical elongated constituents111These may be
molecules, molecular aggregates, or colloidal particles. of liquid
crystals, _not_ in their position in space. In both families, nematic order is
described by the _director_ $\bm{n}$, a unit vector field which designates the
local prevailing orientation, corresponding to the _optic_ axis. It can
(easily) vary in space in response to different external agencies; its optical
nature contributes to the aesthetic fascination of these materials.
When $\bm{n}$ does not have the same orientation everywhere, its distorted _texture_
exhibits a _curvature_ elasticity described by a stored energy expressed in
terms of the invariants of $\nabla\bm{n}$, the latter taken to be a tensorial
measure of local distortion. The classical theory of nematic elasticity is
based on the assumption that in the ground state $\bm{n}$ has everywhere the
same orientation and that the energy stored in a distortion measures the work
done to produce it starting from the ground state. The Oseen-Frank theory
posits a stored energy quadratic in $\nabla\bm{n}$ and features four elastic
constants, one for each elementary distortional mode.
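For reference, in one common convention (the form of the saddle-splay contribution varies in the literature) the Oseen-Frank stored-energy density reads
$W_{\mathrm{OF}}=\frac{1}{2}K_{11}(\operatorname{div}\bm{n})^{2}+\frac{1}{2}K_{22}(\bm{n}\cdot\operatorname{curl}\bm{n})^{2}+\frac{1}{2}K_{33}|\bm{n}\times\operatorname{curl}\bm{n}|^{2}+K_{24}[\operatorname{tr}(\nabla\bm{n})^{2}-(\operatorname{div}\bm{n})^{2}]$,
with $K_{11}$, $K_{22}$, $K_{33}$, and $K_{24}$ the splay, twist, bend, and saddle-splay constants; the Ericksen inequality relevant to what follows is $K_{22}\geq K_{24}\geq 0$.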
Chromonic liquid crystals (CLCs) are lyotropic materials, which include Sunset
Yellow (SSY), a popular dye in food industry, and disodium cromoglycate
(DSCG), an anti-asthmatic drug. In these materials, molecules stack themselves
into columns, which in aqueous solutions develop a nematic orientational order.
In CLCs, the director designates the average direction in space of the
constituting supra-molecular aggregates.
A number of reviews have already appeared in the literature [1, 2, 3, 4, 5],
to which we refer the reader. They also attest to the scientific interest
surrounding this theme.
Experiments have been performed with these materials in capillary tubes, with
either circular [6, 7] or rectangular [8] cross-sections, as well as on
cylindrical shells [9], all enforcing _degenerate planar_ anchoring, which
allows constituting columns to glide freely on the anchoring surface, provided
they remain tangent to it. These experiments revealed a tendency of CLCs to
acquire at equilibrium a _twisted_ configuration in cylinders, represented by
an _escaped twist_ (ET) director field. Due to the lack of chirality in both
the molecular aggregates constituting CLCs and their condensed phase, ET
fields come equally likely in two variants with opposite chiralities, as
illustrated by the cartoons in Fig. 1. 222Escaped-twist (ET) is a name perhaps
first used in [10] for what had previously been called _twist-bend_ in [11] or
_escaped_ in the third dimension in [12].
(a) Left-handed
(b) Right-handed
Figure 1: Cartoons of two ET fields with opposite chiralities in a cylinder.
The nematic director $\bm{n}$ is schematized as a prolate ellipsoid.
Conventionally, we call the arrangement in (a) left-handed since, in going from
the axis towards the periphery of the cylinder, $\bm{n}$ winds around the
radius as the last four fingers of our left hand would do around our left
thumb. By contrast, the arrangement in (b) is right-handed.
Despite the lack of _uniformity_ in the ground state of these phases,333The
classification of the most general _uniform_ distortions, which can fill the
whole three-dimensional space, is given in [13] and recalled in Sect. II.1.
their curvature elasticity has been modeled by the Oseen-Frank theory, albeit
with an anomalously small twist constant $K_{22}$. To accommodate the
experimental findings and justify the twisted ground state, this constant has
to be smaller than the saddle-splay constant $K_{24}$, in violation of one of
the inequalities Ericksen [14] had put forward to guarantee that the Oseen-
Frank stored energy be bounded below.
Actually, as shown in [15], the violation of one of Ericksen’s inequalities does
not prevent the twisted ground state from being locally stable in a cylinder
enforcing degenerate planar anchoring on its lateral boundary. The same
conclusion was reached in [16]. But, as shown in [17], free-boundary problems
may reveal noxious consequences of violating Ericksen’s inequalities. If
$K_{22}<K_{24}$, a CLC droplet, tactoidal444 _Tactoids_ are elongated,
cylindrically symmetric shapes with pointed ends as poles. in shape and
surrounded by an isotropic fluid environment enforcing degenerate planar
anchoring for the director at the interface, is predicted to be unstable
against _shape change_: it would split indefinitely into smaller tactoids while
the total free energy plummets to negative infinity (see [17] for more
details).
This prediction is in sharp contrast with the wealth of experimental
observations of CLC tactoidal droplets, stable in the biphasic region of phase
space, where nematic and isotropic phases coexist in equilibrium. Experiments
have been carried out with a number of substances (including DSCG and SSY)
stabilized by the addition of neutral (achiral) condensing agents (such as PEG
and Spm) [18, 19, 20, 21, 22]. These studies have consistently reported stable
twisted bipolar tactoids.
Thus, we meet with a contradiction: if we adopt the Oseen-Frank theory to
describe CLCs, we need to assume that $K_{24}>K_{22}$ to explain the twisted
nematic textures observed in cylindrical capillaries, but we also need to
assume that $K_{24}<K_{22}$ to justify the very existence of twisted tactoids
observed in the biphasic coexistence region.
This paper presents our way to overcome this difficulty arising from applying
the Oseen-Frank theory to the curvature elasticity of chromonics. We propose
to add a specific _quartic_ term to the stored-energy density.
Higher-order theories are not new in liquid crystal science. Under this name
go both theories that allow for higher spatial gradients of $\bm{n}$ in
the energy and theories that allow for higher powers of the first gradient.
Under the first category, which perhaps has seen its first manifestation in
[23] (see also [24]), falls, for example, Dozov’s theory [25] for both twist-
bend and splay-bend phases predicted long ago by Meyer [26] and more recently
observed in real materials [27]. Under the second category falls, for example,
a simple one-dimensional model for splay-bend nematics [28], then extended to
incorporate a whole class of seven modulated ground states, of which twist-
bend and splay-bend are just two instances [29]. A hybrid theory was also
proposed in [30], where both higher gradients of $\bm{n}$ and higher powers of
the first gradient are allowed in the stored-energy density, with spatial
derivatives and their powers balanced according to a criterion motivated by a
molecular model. (It should also be noted that other theories are known as
“quartic”; see, for example, the classical paper [31] and the more recent
contribution [32]. They owe this name, however, to an elastic term globally quartic
in de Gennes’ order tensor and its derivatives, added to the commonly
considered version of the Landau-de Gennes theory to resolve the splay-bend
elastic-constant degeneracy in the reduction to the Oseen-Frank theory; these
theories serve a different purpose.)
The single quartic energy term contemplated in our theory will only contain
the twist measure of nematic distortion; hence the name _quartic twist_
theory.
The paper is organized as follows. In Sect. II, we describe the energetics of
chromonics, starting from the classical quadratic Oseen-Frank theory. In
particular, we see what our theory has to say about the paradoxes for
chromonic droplets encountered within the Oseen-Frank theory. Although we do
not solve any free-boundary problem within the proposed theory, we provide a
uniform lower bound for the total energy that prevents the existence of the
minimizing sequences with diverging energy that generate paradoxes within the
Oseen-Frank theory. In Sect. III, we apply the quartic twist theory to a CLC
in a cylinder subject to degenerate planar anchoring on the lateral boundary
and compare the predictions of this theory for different values of a
phenomenological length scale brought in by the added quartic term with those
of the classical theory, which is recovered in the limit where that length
scale vanishes. Finally, in Sect. IV, we summarize our conclusions.
## II Energetics
An extended theory may become more acceptable if its parent theory is first
laid out. Thus, we start by summarizing the classical quadratic theory for the
elasticity of nematic liquid crystals, albeit formulated in a novel,
equivalent way that better serves our purpose. It will then become easier to
justify how it should be modified to prevent the paroxysmal concentration of
twist which, in free-boundary problems, drives the total energy to negative
infinity [17].
### II.1 Classical Quadratic Energy
The classical elastic theory of liquid crystals goes back to the pioneering
works of Oseen [33] and Frank [34]. (A paper by Zocher [35], mainly
concerned with the effect of a magnetic field on director distortions, is
also often mentioned among the founding contributions. Some authors go to the
extent of also naming the theory after him. Others, in contrast, name the
theory only after Frank, as they deem only his contribution to be fully aware
of the nature of $\bm{n}$ as a _mesoscopic_ descriptor of molecular order.)
This theory is variational in nature, as it is based on a bulk free energy
functional $\mathscr{F}_{\mathrm{b}}$ written in the form
$\mathscr{F}_{\mathrm{b}}[\bm{n}]:=\int_{\mathscr{B}}W_{\mathrm{OF}}(\bm{n},\nabla\bm{n})\operatorname{d}\\!V,$
(1)
where $\mathscr{B}$ is a region in space occupied by the material and $V$ is
the volume measure. In (1), $W_{\mathrm{OF}}$ measures the distortional cost
produced by a deviation from a uniform director field $\bm{n}$. It is chosen
to be the most general frame-indifferent, even function quadratic in
$\nabla\bm{n}$,
$W_{\mathrm{OF}}(\bm{n},\nabla\bm{n}):=\frac{1}{2}K_{11}\left(\operatorname{div}\bm{n}\right)^{2}+\frac{1}{2}K_{22}\left(\bm{n}\cdot\operatorname{curl}\bm{n}\right)^{2}+\frac{1}{2}K_{33}|\bm{n}\times\operatorname{curl}\bm{n}|^{2}+K_{24}\left[\operatorname{tr}(\nabla\bm{n})^{2}-(\operatorname{div}\bm{n})^{2}\right].$
(2)
Here $K_{11}$, $K_{22}$, $K_{33}$, and $K_{24}$ are elastic constants
characteristic of the material. They are traditionally referred to as the
_splay_ , _twist_ , _bend_ , and _saddle-splay_ constants, respectively, by
the features of four different orientation fields, each with a distortion
energy proportional to a single term in (2) (see, for example, Chap. 3 of
[36]).
Recently, Selinger [37] has reinterpreted the classical formula (2) by
decomposing the saddle-splay mode into a set of other independent modes. The
starting point of this decomposition is a novel representation of
$\nabla\bm{n}$ (see also [38]),
$\nabla\bm{n}=-\bm{b}\otimes\bm{n}+\frac{1}{2}T\mathbf{W}(\bm{n})+\frac{1}{2}S\mathbf{P}(\bm{n})+\mathbf{D},$
(3)
where $\bm{b}:=-(\nabla\bm{n})\bm{n}=\bm{n}\times\operatorname{curl}\bm{n}$ is
the _bend_ vector, $T:=\bm{n}\cdot\operatorname{curl}\bm{n}$ is the _twist_ ,
$S:=\operatorname{div}\bm{n}$ is the _splay_ , $\mathbf{W}(\bm{n})$ is the
skew-symmetric tensor that has $\bm{n}$ as axial vector,
$\mathbf{P}(\bm{n}):=\mathbf{I}-\bm{n}\otimes\bm{n}$ is the projection onto
the plane orthogonal to $\bm{n}$, and $\mathbf{D}$ is a symmetric tensor such
that $\mathbf{D}\bm{n}=\bm{0}$ and $\operatorname{tr}\mathbf{D}=0$. By its own
definition, $\mathbf{D}\neq\bm{0}$ admits the following biaxial
representation,
$\mathbf{D}=q(\bm{n}_{1}\otimes\bm{n}_{1}-\bm{n}_{2}\otimes\bm{n}_{2}),$ (4)
where $q>0$ and $(\bm{n}_{1},\bm{n}_{2})$ is a pair of orthogonal unit vectors
in the plane orthogonal to $\bm{n}$, oriented so that
$\bm{n}=\bm{n}_{1}\times\bm{n}_{2}$. (It is argued in [39] that $q$ should be
given the name _tetrahedral_ splay, to which we would actually prefer
_octupolar_ splay, for the role played by a cubic (octupolar) potential on the
unit sphere [40] in representing all scalar measures of distortion but $T$.)
By use of the following identity,
$2q^{2}=\operatorname{tr}(\nabla\bm{n})^{2}+\frac{1}{2}T^{2}-\frac{1}{2}S^{2},$
(5)
we can easily give (2) the equivalent form
$W_{\mathrm{OF}}(\bm{n},\nabla\bm{n})=\frac{1}{2}(K_{11}-K_{24})S^{2}+\frac{1}{2}(K_{22}-K_{24})T^{2}+\frac{1}{2}K_{33}B^{2}+2K_{24}q^{2},$
(6)
where $B^{2}:=\bm{b}\cdot\bm{b}$. Since $(S,T,B,q)$ are all independent
_distortion characteristics_ , it readily follows from (6) that
$W_{\mathrm{OF}}$ is positive semi-definite whenever
$\displaystyle K_{11}$ $\displaystyle\geqq$ $\displaystyle K_{24}\geqq 0,$
(7a) $\displaystyle K_{22}$ $\displaystyle\geqq$ $\displaystyle K_{24}\geqq
0,$ (7b) $\displaystyle K_{33}$ $\displaystyle\geqq$ $\displaystyle 0,$ (7c)
which are the celebrated _Ericksen’s inequalities_ [14]. If these inequalities
are satisfied in strict form, the global ground state of $W_{\mathrm{OF}}$ is
attained on the uniform director field, characterized by
$S=T=B=q=0.$ (8)
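Both the decomposition (3) and the identity (5) lend themselves to a direct numerical check. The following sketch (our own illustration, not part of the analysis) generates a random unit director $\bm{n}$ and an admissible gradient $\nabla\bm{n}$, solves (3) for $\mathbf{D}$, and verifies (5) together with the equivalence of (2) and (6); the elastic constants are arbitrary sample values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random unit director n and admissible gradient G = ∇n, with G[i, j] = ∂_j n_i.
# |n| = 1 forces nᵀG = 0, enforced here by projecting onto the plane ⟂ n.
n = rng.normal(size=3)
n /= np.linalg.norm(n)
P = np.eye(3) - np.outer(n, n)                 # projection P(n)
G = P @ rng.normal(size=(3, 3))

S = np.trace(G)                                # splay S = div n
c = np.array([G[2, 1] - G[1, 2],               # curl n
              G[0, 2] - G[2, 0],
              G[1, 0] - G[0, 1]])
T = n @ c                                      # twist T = n · curl n
b = -G @ n                                     # bend vector b = -(∇n)n
assert np.allclose(b, np.cross(n, c))          # b = n × curl n

W = np.array([[0.0, -n[2], n[1]],              # skew tensor with axial vector n
              [n[2], 0.0, -n[0]],
              [-n[1], n[0], 0.0]])

# Solve (3) for D and verify it is symmetric, traceless, and annihilates n.
D = G + np.outer(b, n) - 0.5 * T * W - 0.5 * S * P
assert np.allclose(D, D.T) and np.allclose(D @ n, 0) and abs(np.trace(D)) < 1e-12

# Identity (5): tr(D²) = 2q² = tr((∇n)²) + T²/2 - S²/2.
assert np.isclose(np.trace(D @ D), np.trace(G @ G) + 0.5 * T**2 - 0.5 * S**2)

# Equivalence of (2) and (6), using 2 K24 q² = K24 tr(D²).
K11, K22, K33, K24 = 4.3, 0.7, 6.1, 1.0        # sample values only
W_of = (0.5 * K11 * S**2 + 0.5 * K22 * T**2 + 0.5 * K33 * (b @ b)
        + K24 * (np.trace(G @ G) - S**2))
W_six = (0.5 * (K11 - K24) * S**2 + 0.5 * (K22 - K24) * T**2
         + 0.5 * K33 * (b @ b) + K24 * np.trace(D @ D))
assert np.isclose(W_of, W_six)
```

All assertions hold to machine precision for any admissible gradient, which is a convenient way to guard against sign conventions when implementing the distortion characteristics.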
As already mentioned in the Introduction, inequality (7b) must be violated for
the ground state of $W_{\mathrm{OF}}$ to be different from (8), involving a
non-vanishing $T$.
The class of _uniform_ distortions was defined in [13] as the one comprising
all director fields for which the distortion characteristics are constant in
space. Equivalently said, a uniform distortion is a director field that can
_fill_ three-dimensional space. It was proven that there are two distinct
families of uniform distortions, characterized by the following conditions
[13],
$S=0,\quad T=\pm 2q,\quad b_{1}=\pm b_{2}=b,$ (9)
where $q$ and $b$ are arbitrary parameters such that $q>0$ and $B^{2}=2b^{2}$,
and $b_{1}$ and $b_{2}$ are defined by the decomposition
$\bm{b}=b_{1}\bm{n}_{1}+b_{2}\bm{n}_{2}.$ (10)
The general director field corresponding to (9) is the _heliconical_ ground
state of twist-bend nematic phases (with opposite _chiralities_, one for each
sign in (9)), in which $\bm{n}$ makes a fixed _cone_ angle with a given
axis in space (called the _helix_ axis), around which $\bm{n}$ precesses
periodically [13], in opposite senses according to the sign of chirality.
The special instance in which $B=0$ corresponds to the _single twist_ that
characterizes _cholesteric_ liquid crystals.
The distortion for which all characteristics vanish but $T$ is a _double
twist_. (Here we adopt the terminology proposed by Selinger [39] (see also
[41]) and distinguish between _single_ and _double_ twists, the former being
_uniform_ and the latter not.) A double twist is _not_ uniform and cannot fill
space; it can possibly be realized locally, but not everywhere. In other words,
it is a _frustrated_ ground state. It is, however, relevant to CLCs, as a double
twist is attained exactly on the symmetry axis of both chiral variants of the
ET field [15].
Liquid crystals are (within good approximation) incompressible fluids. Thus,
when the region $\mathscr{B}$ is _not_ fixed, for a given amount of material,
$\mathscr{B}$ is subject to the _isoperimetric_ constraint that prescribes its
volume,
$V(\mathscr{B})=V_{0}.$ (11)
When $\mathscr{B}$ is surrounded by an isotropic fluid, a surface energy
arises at the free interface $\partial\mathscr{B}$, which, following [42], we
represent as
$\mathscr{F}_{\mathrm{s}}[\bm{n}]:=\int_{\partial\mathscr{B}}\gamma[1+\omega(\bm{n}\cdot\bm{\nu})^{2}]\operatorname{d}\\!A,$
(12)
where $\bm{\nu}$ is the outer unit normal to $\partial\mathscr{B}$, $\gamma>0$
is the _isotropic_ surface tension, and $\omega>-1$ is a dimensionless
parameter weighting the _anisotropic_ component of surface tension against the
isotropic one. For $\omega>0$, $\mathscr{F}_{\mathrm{s}}$ promotes the
_degenerate planar_ anchoring, whereas for $\omega<0$, it promotes the
_homeotropic_ anchoring. For free-boundary problems, the total free energy
functional will then be written as
$\mathscr{F}_{\mathrm{t}}[\bm{n}]:=\mathscr{F}_{\mathrm{b}}[\bm{n}]+\mathscr{F}_{\mathrm{s}}[\bm{n}]$
(13)
and the domain $\mathscr{B}$, subject to (11), is also an unknown to be
determined so as to minimize $\mathscr{F}_{\mathrm{t}}$.
### II.2 Quartic Twist Energy
The essential feature of our theory is to envision a double twist with two
equivalent chiral variants as ground state of CLCs in three-dimensional space,
$S=0,\quad T=\pm T_{0},\quad B=0,\quad q=0.$ (14)
The degeneracy of the ground double twist in (14) arises from the achiral
nature of the molecular aggregates that constitute these materials, which is
reflected in the lack of chirality of their condensed phases.
The elastic stored energy must equally penalize both ground chiral variants.
Our minimalistic proposal to achieve this goal is to add a _quartic twist_
term to the Oseen-Frank stored-energy density:
$W_{\mathrm{QT}}(\bm{n},\nabla\bm{n})=\frac{1}{2}(K_{11}-K_{24})S^{2}+\frac{1}{2}(K_{22}-K_{24})T^{2}+\frac{1}{2}K_{33}B^{2}+\frac{1}{2}K_{24}(2q)^{2}+\frac{1}{4}K_{22}a^{2}T^{4},$
(15)
where $a$ is a _characteristic length_. Unlike $W_{\mathrm{OF}}$ when (7b) is
violated, $W_{\mathrm{QT}}$ is bounded below whenever
$\displaystyle K_{11}$ $\displaystyle\geqq$ $\displaystyle K_{24}\geqq 0,$
(16a) $\displaystyle K_{24}$ $\displaystyle\geqq$ $\displaystyle K_{22}\geqq
0,$ (16b) $\displaystyle K_{33}$ $\displaystyle\geqq$ $\displaystyle 0.$ (16c)
If these inequalities hold, as we shall assume here, then $W_{\mathrm{QT}}$ is
minimum at the degenerate double-twist (14) characterized by
$T_{0}:=\frac{1}{a}\sqrt{\frac{K_{24}-K_{22}}{K_{22}}}.$ (17)
As a consequence, $W_{\mathrm{QT}}$ obeys the following uniform bound,
$W_{\mathrm{QT}}\geqq-\frac{1}{4a^{2}}\frac{(K_{24}-K_{22})^{2}}{K_{22}}.$
(18)
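Both (17) and (18) follow from one-variable calculus on the twist-dependent part of (15), $f(T)=\frac{1}{2}(K_{22}-K_{24})T^{2}+\frac{1}{4}K_{22}a^{2}T^{4}$. A quick numerical check (our own sketch; the sample values of $K_{22}$, $K_{24}$, and $a$ are merely illustrative) is:

```python
import numpy as np

# Illustrative sample values only (pN and μm, loosely inspired by Sect. III).
K22, K24, a = 0.7, 21.0, 6.3

def f(T):
    """Twist-dependent part of the quartic twist energy (15)."""
    return 0.5 * (K22 - K24) * T**2 + 0.25 * K22 * a**2 * T**4

T0 = np.sqrt((K24 - K22) / K22) / a               # eq. (17)
bound = -0.25 * (K24 - K22)**2 / (K22 * a**2)     # eq. (18)

# Grid search confirms the minimiser and the minimum value.
T = np.linspace(0.0, 2 * T0, 200_001)
i = np.argmin(f(T))
assert np.isclose(T[i], T0, rtol=1e-4)            # minimiser matches (17)
assert np.isclose(f(T).min(), bound, rtol=1e-6)   # minimum matches (18)
assert np.isclose(f(T0), bound)                   # exact at T0
```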
The parameter $a$ encodes the length scale over which distortions would be
locally stored in the ground state. (In the elastic model proposed in [13]
for twist-bend nematics, a quartic free energy was posited that admits as
ground state either of two families of uniform heliconical fields with
opposite chirality. There too, a length scale appears in the equilibrium
pitch.) The distortion state characterized by this length is the same
everywhere. As to the physical size of such a length scale, it may lie within
a wide range. While at the lower end we may place the persistence
length of the molecular order, which characterizes the flexibility of CLC
aggregates (the persistence length of self-assembled flexible aggregates
is a length over which unit vectors tangential to the aggregates lose
correlation; for CLCs, it is estimated to be on the order of tens to hundreds
of $\mathrm{nm}$ [43]), the upper end is hard to make definite. We expect that $a$
would be exposed to the same indeterminacy that affects many (if not all)
supramolecular structures in lyotropic systems. The most telling example is
perhaps given by cholesteric liquid crystals, which give rise to a chiral
structure (characterized by a single twist $T=\pm 2q$) starting from chiral
molecules. If the macroscopic pitch (measured by $|1/T|$) were determined by
the molecular chirality, _via_ the naive geometric argument that
represents chiral molecules as cylindrical screws and derives the pitch of
their assemblies by close-packing them so as to fit grooves with grooves, it
would turn out to be several orders of magnitude smaller than the observed
ones. (For lyotropic cholesterics, the mismatch between microscopic and
macroscopic pitches, which has recently received new experimental evidence in
systems of basic living constituents [44, 45], is still debated. Interesting
theories based on either molecular shape fluctuations [46, 47] or surface
charge patterns [48] have met with some experimental disagreement [49].) Here,
we shall treat $a$ as a phenomenological parameter, to be determined
experimentally.
From now on, in the definition (1) for $\mathscr{F}_{\mathrm{b}}$ we shall
replace $W_{\mathrm{OF}}$ with $W_{\mathrm{QT}}$.
### II.3 Energy Lower Bound
Our quartic theory is built with the intent of curing the paradoxes
encountered within the Oseen-Frank theory when handling free-boundary problems
for chromonics. As shown in [17], these paradoxes ultimately rely on the
explicit construction of minimizing sequences of director fields and
admissible shapes $\{\bm{n}_{h},\mathscr{B}_{h}\}_{h\in\mathbb{N}}$ for the
total free energy $\mathscr{F}_{\mathrm{t}}$ in (13), over which
$\mathscr{F}_{\mathrm{t}}$ diverges to negative infinity. Thus, in short,
$\mathscr{F}_{\mathrm{t}}$ is proved to be _unbounded_ below in a class of
free-boundary problems if $W_{\mathrm{OF}}$ is chosen to be the stored-energy
density with (7b) violated. (For the classical quadratic energy in a
_fixed_ domain with degenerate planar anchoring conditions, boundedness and
local stability of the bulk free energy $\mathscr{F}_{\mathrm{b}}$ were
established in [15], even when (7b) is violated.)
Besides the violation of (7b), all minimizing sequences provided in [17] have
a common feature: they induce a high concentration of twist in needle-shaped
droplets made sharper and sharper. Here we show that such a disruptive
mechanism cannot be at work within the quartic twist theory. We are not in a
position to prove that free-boundary problems are well-posed within this
theory. Indeed, these are formidable problems already within the classical
quadratic theory (see, for example, the early works [50, 51] and the more
recent contributions [52, 53]). We rather prove that
$\mathscr{F}_{\mathrm{t}}$ is uniformly bounded below.
It is an easy consequence of (18) and (11) that
$\mathscr{F}_{\mathrm{b}}[\bm{n}]\geqq-\frac{(K_{24}-K_{22})^{2}}{K_{22}}\frac{V_{0}}{4a^{2}}.$
(19)
Similarly, it follows from (12) that
$\mathscr{F}_{\mathrm{s}}[\bm{n}]\geqq\gamma_{\omega}A(\partial\mathscr{B}),$
(20)
where
$\gamma_{\omega}:=\gamma(1+\min\{0,\omega\})>0$ (21)
and $A$ denotes the area measure. By the isoperimetric inequality (see, for
example, [54]),
$A(\partial\mathscr{B})\geqq(36\pi V_{0}^{2})^{1/3},$ (22)
and so (19) and (20) imply that
$\mathscr{F}_{\mathrm{t}}[\bm{n}]\geqq-\frac{(K_{24}-K_{22})^{2}}{K_{22}}\frac{V_{0}}{4a^{2}}+\gamma_{\omega}(36\pi
V_{0}^{2})^{1/3},$ (23)
thus establishing a lower bound for $\mathscr{F}_{\mathrm{t}}$ that only
depends on material constants and the prescribed volume $V_{0}$.
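For concreteness, the bound (23) is easily evaluated numerically. In the sketch below (ours), all material values are placeholders chosen only to exercise the formulas; the equality case of the isoperimetric inequality (22) is also checked on a ball of volume $V_{0}$.

```python
import numpy as np

# Placeholder material values (SI units), for illustration only.
K22, K24 = 0.7e-12, 21.0e-12          # N
a = 6.3e-6                            # m
gamma, omega = 1e-5, 0.5              # N/m, dimensionless
V0 = 4.0 / 3.0 * np.pi * (10e-6)**3   # m³: volume of a 10 μm-radius droplet

gamma_w = gamma * (1 + min(0.0, omega))          # eq. (21)
bulk = -(K24 - K22)**2 / K22 * V0 / (4 * a**2)   # eq. (19): bulk lower bound
surf = gamma_w * (36 * np.pi * V0**2)**(1/3)     # eqs. (20), (22)
F_min = bulk + surf                              # eq. (23): uniform lower bound

assert bulk < 0 < surf

# Sanity check of (22): equality holds for a ball of volume V0.
r = (3 * V0 / (4 * np.pi))**(1/3)
assert np.isclose(4 * np.pi * r**2, (36 * np.pi * V0**2)**(1/3))
```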
## III Chromonics in a Cylinder
Having established that the paradoxical behaviour encountered for chromonic
droplets within the classical quadratic theory is ruled out by our quartic
theory, we can study within the latter the problem that in [6, 7] first led to
interpreting the equilibrium distortions observed in chromonics filling a
cylinder, with degenerate planar anchoring conditions enforced on its lateral
boundary, as ET fields.
Here the domain $\mathscr{B}$ is a circular cylinder of radius $R$ and height
$L$ (see Fig. 2)
Figure 2: A circular cylinder of radius $R$ and height $L$ filled with a
chromonic liquid crystal. The lateral boundary $\partial_{R}\mathscr{B}$
enforces a degenerate planar anchoring condition for the director, here
represented by a prolate ellipsoid with long axis orthogonal to the outer unit
normal $\bm{\nu}$, but free to rotate about it.
and the director field is subject to a degenerate planar anchoring condition
on the lateral boundary $\partial_{R}\mathscr{B}$ of $\mathscr{B}$:
$\bm{n}\cdot\bm{\nu}|_{\partial_{R}\mathscr{B}}=0,$ (24)
where $\bm{\nu}$ denotes the outer unit normal to $\partial_{R}\mathscr{B}$.
Free-boundary conditions are imposed on the bases of $\mathscr{B}$; the
problem will be assumed to be invariant under translations along the
cylinder’s axis.
As a consequence of (24), $\mathscr{F}_{\mathrm{s}}$ in (12) is a constant
independent of $\bm{n}$ and will be omitted. The total free energy
$\mathscr{F}_{\mathrm{t}}$ in (13) thus reduces to $\mathscr{F}_{\mathrm{b}}$
and will be scaled to the reference energy $2\pi K_{22}L$,
$\mathcal{F}_{\mathrm{b}}[\bm{n}]:=\frac{\mathscr{F}_{\mathrm{b}}[\bm{n}]}{2\pi
K_{22}L}.$ (25)
We further assume that in the frame
$(\bm{e}_{r},\bm{e}_{\vartheta},\bm{e}_{z})$ of cylindrical coordinates
$(r,\vartheta,z)$, with $\bm{e}_{z}$ along the axis of $\mathscr{B}$, $\bm{n}$
is represented as
$\bm{n}=\sin\beta(r)\bm{e}_{\vartheta}+\cos\beta(r)\bm{e}_{z},$ (26)
where the _polar_ angle $\beta\in[-\pi,\pi]$, which $\bm{n}$ makes with the
cylinder’s axis, depends only on the radial coordinate $r$. In these
coordinates, $\partial_{R}\mathscr{B}$ is represented as the set
$\{(r,\vartheta,z):r=R,\,0\leqq\vartheta\leqq 2\pi,\,0\leqq z\leqq L\}$, and
(24) is automatically satisfied, as
$\bm{\nu}|_{\partial_{R}\mathscr{B}}=\bm{e}_{r}$. Moreover, as shown in [15],
the distortion characteristics for a director field represented by (26) are
given by
$\displaystyle S$ $\displaystyle=$ $\displaystyle 0,$ (27a) $\displaystyle T$
$\displaystyle=$ $\displaystyle\beta^{\prime}+\frac{1}{r}\cos\beta\sin\beta,$
(27b) $\displaystyle B$ $\displaystyle=$
$\displaystyle\frac{1}{r}\sin^{2}\beta,$ (27c) $\displaystyle q$
$\displaystyle=$
$\displaystyle\frac{1}{2}\left|\beta^{\prime}-\frac{1}{r}\cos\beta\sin\beta\right|,$
(27d)
where a prime denotes differentiation.
Standard computations transform $\mathcal{F}_{\mathrm{b}}$ in (25) into the _reduced_
functional
$\mathcal{F}_{\lambda}[\beta]:=\mathcal{F}_{2}[\beta]+\lambda^{2}\mathcal{F}_{4}[\beta],$
(28)
where $\mathcal{F}_{2}$ corresponds to the (scaled) quadratic Oseen-Frank’s
free energy functional,
$\mathcal{F}_{2}[\beta]:=\int_{0}^{1}\left(\frac{\rho\beta^{\prime
2}}{2}+\dfrac{1}{2\rho}\cos^{2}\beta\sin^{2}\beta+\dfrac{k_{3}}{2\rho}\sin^{4}\beta\right)\operatorname{d}\\!\rho+\frac{1}{2}(1-2k_{24})\sin^{2}\beta(1),$
(29)
while
$\mathcal{F}_{4}[\beta]:=\int_{0}^{1}\frac{\rho}{4}\left(\beta^{\prime}+\frac{1}{\rho}\sin\beta\cos\beta\right)^{4}\operatorname{d}\\!\rho,$
(30)
is the quartic contribution, and $\lambda:=a/R$ is the scaled characteristic
length brought in by the quartic term in (15). Here
$\rho:=\frac{r}{R}\in[0,1]$ (31)
and
$k_{3}:=\frac{K_{33}}{K_{22}},\quad k_{24}:=\frac{K_{24}}{K_{22}},$ (32)
are scaled elastic constants, which will be assumed to obey the inequalities
$k_{3}>0\quad\text{and}\quad k_{24}>1,$ (33)
the strong form of (16c) and (16b), respectively.
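The reduced functional (28)-(30) is straightforward to evaluate by quadrature. The following sketch (our own; the midpoint discretization is a convenience, not the authors' scheme) uses the DSCG values $k_{3}=30$, $k_{24}=7.5$ quoted later in Sect. III.1, and checks two elementary properties: $\beta\equiv 0$ costs nothing, and $\mathcal{F}_{\lambda}$ grows with $\lambda$ for any fixed profile, since $\mathcal{F}_{4}\geqq 0$.

```python
import numpy as np

k3, k24 = 30.0, 7.5                  # DSCG values quoted in Sect. III.1

def F_lambda(beta, lam, N=4000):
    """Midpoint-rule evaluation of the reduced functional (28)-(30)."""
    rho = np.linspace(0.0, 1.0, N + 1)
    h = rho[1] - rho[0]
    b = beta(rho)
    bm = 0.5 * (b[1:] + b[:-1])      # β at interval midpoints
    rm = 0.5 * (rho[1:] + rho[:-1])  # ρ at midpoints (never 0)
    bp = np.diff(b) / h              # β′ at midpoints
    f2 = (0.5 * rm * bp**2                                 # integrand of (29)
          + (np.cos(bm) * np.sin(bm))**2 / (2 * rm)
          + k3 * np.sin(bm)**4 / (2 * rm))
    f4 = 0.25 * rm * (bp + np.cos(bm) * np.sin(bm) / rm)**4   # integrand of (30)
    boundary = 0.5 * (1 - 2 * k24) * np.sin(b[-1])**2      # boundary term of (29)
    return h * np.sum(f2 + lam**2 * f4) + boundary

trial = lambda rho: 0.5 * rho                    # a trial profile with β(0) = 0
assert abs(F_lambda(lambda rho: 0.0 * rho, 1.0)) < 1e-12   # β ≡ 0 gives F = 0
assert F_lambda(trial, 1.0) <= F_lambda(trial, 2.0)        # F_4 ≥ 0
```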
For the integral in (28) to be convergent, $\beta$ must be subject to the
condition
$\beta(0)=0,$ (34)
which amounts to requiring that $\bm{n}$ be along $\bm{e}_{z}$ on the cylinder’s
axis. The strong form of the Euler-Lagrange equation for
$\mathcal{F}_{\lambda}$ in (28) reduces here to
$\displaystyle\frac{1}{\rho^{2}}\cos\beta\sin\beta\left[1+2(k_{3}-1)\sin^{2}\beta\right]-\frac{\beta^{\prime}}{\rho}-\beta^{\prime\prime}$
$\displaystyle+\lambda^{2}\left(\beta^{\prime}+\frac{\sin\beta\cos\beta}{\rho}\right)^{2}\left[\frac{4\sin^{2}\beta\beta^{\prime}}{\rho}-\frac{3\beta^{\prime}}{\rho}-3\beta^{\prime\prime}+\frac{\sin\beta\cos\beta}{\rho^{2}}\left(3-2\sin^{2}\beta\right)\right]=0,$
(35)
subject to (34) and to the following free boundary condition at $\rho=1$,
$\left.\left[(1-2k_{24})\cos\beta\sin\beta+\beta^{\prime}+\lambda^{2}\left(\beta^{\prime}+\sin\beta\cos\beta\right)^{3}\right]\right|_{\rho=1}=0.$
(36)
Solutions to (35) subject to (34) and (36) enjoy a _chiral_ symmetry: if
$\beta(\rho)$ is a solution, then so is $-\beta(\rho)$, which by (27)
shares with the former all distortion characteristics but $T$, which is
opposite in sign. All chirality-conjugated solutions have the same energy, as
follows immediately from (28) with (29) and (30). For definiteness, in the
following, we shall resolve this degeneracy by taking $\beta\geqq 0$, but it
should always be taken as also representing its conjugated companion with
opposite twist. (We are tacitly assuming that $\beta$ has only one sign in
$[0,1]$, which is a property expected for the minimizers of
$\mathcal{F}_{\lambda}$.)
For $\lambda=0$, equations (35), (34), and (36) have a closed-form solution,
given by (see [55] and also [7, 15])
$\beta_{0}(\rho):=\arctan\left(\frac{2\sqrt{k_{24}(k_{24}-1)}\rho}{\sqrt{k_{3}}\left[k_{24}-(k_{24}-1)\rho^{2}\right]}\right).$
(37)
Unfortunately, for $\lambda>0$, the solution is not known explicitly. We now
discuss some numerical solutions for various values of $\lambda$ and then
study analytically their asymptotics.
### III.1 Numerical Solutions
To illustrate our theory, we solve numerically equation (35) subject to (34)
and (36) using the experimental data for DSCG available from [56] and [57].
The latter refer to an aqueous solution with concentration $c=14.0\%$ (wt/wt)
at temperature $21.5\,^{\circ}\mathrm{C}$, for which $k_{3}=30$ and
$k_{24}=7.5$.
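The boundary-value problem (35), (34), (36) is equivalent to minimizing (28) over profiles with $\beta(0)=0$. A minimal sketch of such a direct minimization (our own illustration, not the authors' numerical method) is given below; for $\lambda=0$ the result can be checked against the closed-form solution (37).

```python
import numpy as np
from scipy.optimize import minimize

k3, k24 = 30.0, 7.5                  # DSCG values from [56] and [57]

def energy(b_free, lam, rho):
    """Midpoint discretization of (28), with β(0) = 0 enforced."""
    b = np.concatenate(([0.0], b_free))
    h = rho[1] - rho[0]
    bm, rm = 0.5 * (b[1:] + b[:-1]), 0.5 * (rho[1:] + rho[:-1])
    bp = np.diff(b) / h
    tw = bp + np.cos(bm) * np.sin(bm) / rm       # scaled twist, cf. (27b)
    f = (0.5 * rm * bp**2
         + (np.cos(bm) * np.sin(bm))**2 / (2 * rm)
         + k3 * np.sin(bm)**4 / (2 * rm)
         + lam**2 * 0.25 * rm * tw**4)
    return h * f.sum() + 0.5 * (1 - 2 * k24) * np.sin(b[-1])**2

rho = np.linspace(0.0, 1.0, 81)
beta0 = np.arctan(2 * np.sqrt(k24 * (k24 - 1)) * rho
                  / (np.sqrt(k3) * (k24 - (k24 - 1) * rho**2)))   # eq. (37)

res = minimize(energy, beta0[1:], args=(0.0, rho), method="L-BFGS-B")
beta = np.concatenate(([0.0], res.x))
# For λ = 0 the numerical minimiser should track the analytic profile (37).
assert np.max(np.abs(beta - beta0)) < 0.05
```

Setting `lam` to a positive value in the call to `minimize` produces the profiles of Fig. 3; the tolerance above merely accounts for the discretization error of the coarse grid.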
These data were obtained by using the Oseen-Frank theory. We argue that the
experimental determination in [56] of $K_{11}$, $K_{22}$, and $K_{33}$ (and so
of $k_{3}$) would _not_ be affected by the quartic twist term introduced here
in the elastic stored-energy density because those measurements result from an
instability driven by a magnetic field; the occurrence of such an instability
is determined by the change in sign of the second variation of the total free-
energy functional. The quartic twist term does _not_ affect the second
variation, and thus neither does the quartic elastic constant affect the
magnetic instability. On the other hand, in our theory, measurements of $K_{24}$ are
expected to reveal the role played by the quartic twist term.
For DSCG, the data available from [57] do not allow us to estimate both
$k_{24}$ and $\lambda$; thus, to illustrate the role of $\lambda$, we first
adopt the value of $k_{24}$ derived from the quadratic theory. Below, in Sect.
III.1.1, we use more abundant data available for SSY to estimate both $k_{24}$
and $\lambda$ for that material using our quartic theory.
Figure 3 shows the graphs of the solution $\beta_{\lambda}(\rho)$ for
$\lambda$ ranging from $0$ to $4$. For $\lambda=0$ (red graph in Fig. 3),
$\beta_{\lambda}$ reduces to $\beta_{0}$ in (37). As $\lambda$ increases, the
convexity of the graph becomes less and less pronounced, and it is almost lost
for $\lambda=4$, where the linear asymptotic regime described in Sect. III.2
below is already nearly attained.
Figure 3: Graphs of the solution $\beta_{\lambda}(\rho)$ of (35) subject to
(34) and (36) for several values of $\lambda$: ten equally spaced between
$\lambda=0$ and $\lambda=1$, followed by $\lambda=2,3,4$. The (red) graph
corresponding to $\lambda=0$ reproduces the solution $\beta_{0}$ in (37).
Material moduli are such that $k_{3}=30$ and $k_{24}=7.5$; they are taken from
[56] and [57] and refer to an aqueous solution of DSCG with concentration
$c=14\%$ (wt/wt) at $21.5\,^{\circ}\mathrm{C}$.
To further illustrate these solutions, we introduce the following
dimensionless ratios,
$\displaystyle\frac{2q}{T}$
$\displaystyle=\frac{\beta_{\lambda}^{\prime}(\rho)-\frac{1}{\rho}\cos\beta_{\lambda}(\rho)\sin\beta_{\lambda}(\rho)}{\beta_{\lambda}^{\prime}(\rho)+\frac{1}{\rho}\cos\beta_{\lambda}(\rho)\sin\beta_{\lambda}(\rho)},$
(38a) $\displaystyle\frac{T}{T_{0}}$
$\displaystyle=\lambda\frac{\beta_{\lambda}^{\prime}(\rho)+\frac{1}{\rho}\cos\beta_{\lambda}(\rho)\sin\beta_{\lambda}(\rho)}{\sqrt{k_{24}-1}},$
(38b)
where use was also made of both (27) and (17). They are functions of $\rho$,
which may be taken to represent local measures of uniformity and frustration,
respectively. Where $2q/T$ gets closer to $1$, the distortion is nearer to be
uniform, as intended in (9), and so fitter to fill space. Where $T/T_{0}$ gets
closer to $1$, the distortion is farther from being frustrated, as the energy
is closer to its absolute minimum. The graphs of both these ratios are plotted
in Fig. 4 for different values of $\lambda$.
(a) Uniformity ratio.
(b) Frustration ratio.
Figure 4: Graphs of both uniformity and frustration ratios in (38) as
functions of $\rho$. All parameters are the same as in Fig. 3. Red curves
refer to the analytic solution $\beta_{0}$ in (37), for which $\lambda=0$. The
corresponding ratio of frustration vanishes identically because $T_{0}$
diverges to $+\infty$ as $\lambda\to 0$.
Upon increasing $\lambda$, the distortion in the cylinder becomes both less
uniform and less frustrated.
Finally, in Fig. 5, we plot the minimum of the total energy
$\mathcal{F}_{\lambda}[\beta_{\lambda}]$ (referred to
$\mathcal{F}_{0}[\beta_{0}]$) against $\lambda$. The asymptotic behaviour
shown by this graph as $\lambda\to 0$ will be justified in Sect. III.2 below.
Figure 5: Graph of the minimum total energy
$\mathcal{F}_{\lambda}[\beta_{\lambda}]$ against $\lambda$. Its behaviour for
$\lambda\to 0$ is in agreement with the asymptotic analysis in Sect. III.2.
#### III.1.1 Experimental Estimate
In [7], an experiment was performed with a solution of SSY in water (at a
concentration $c=30.0\%$ (wt/wt) and temperature $25\,^{\circ}\mathrm{C}$)
inserted in a capillary tube with diameter $90.6\,\mu\mathrm{m}\pm
400\,\mathrm{nm}$. In equilibrium, the polar angle $\beta$ was measured at
different distances from the capillary’s axis; these data (taken from Fig. 3
of [7]) are reproduced in Fig. 6.
Figure 6: Best fit of the data shown in Fig. 3 of [7] with the solution
$\beta_{\lambda}$ to (35) for $k_{3}=8.7$ taken from [56]. Best-fitting
parameters were found to be $k_{24}=30$ and $\lambda=0.14$.
Taking $k_{3}=8.7$ from the measurements performed in [56] for the same
material in the same conditions (the absolute measured values are
$K_{11}\approx 4.3\,\mathrm{pN}$, $K_{22}\approx 0.7\,\mathrm{pN}$, and
$K_{33}\approx 6.1\,\mathrm{pN}$), we determined $k_{24}$ and $\lambda$ from
the best (least-squares) fit shown in Fig. 6, obtaining
$k_{24}=30\quad\text{and}\quad\lambda=0.14,$ (39)
from which we estimate
$K_{24}\approx 21\,\mathrm{pN}\quad\text{and}\quad a\approx
6.3\,\mu\mathrm{m}.$ (40)
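The estimates (40) follow from the fit (39) by simple rescaling. A minimal check (ours) is sketched below, assuming $\lambda=a/R$ (as the scaling leading to (28) suggests), with $R=45.3\,\mu\mathrm{m}$, half the quoted capillary diameter, and $K_{22}\approx 0.7\,\mathrm{pN}$ from [56]:

```python
# Best-fit values from (39) and the inputs assumed above.
k24, lam = 30.0, 0.14
K22 = 0.7                 # pN, from [56]
R = 0.5 * 90.6            # μm, half the capillary diameter

K24 = k24 * K22           # pN: definition of k24 in (32)
a = lam * R               # μm: assumes λ = a/R
assert abs(K24 - 21.0) < 1e-9     # matches (40)
assert abs(a - 6.3) < 0.05        # matches (40) to the quoted precision
```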
### III.2 Asymptotics
Here, to justify the outcomes of the numerical computations shown above, we
study the asymptotic behaviour of $\mathcal{F}_{\lambda}$ and its minimizers
for both $\lambda\to 0$ and $\lambda\to\infty$.
We first take $\lambda\ll 1$ and assume that in this limit $\beta_{\lambda}$
is a perturbation to $\beta_{0}$, the minimizer of $\mathcal{F}_{0}$ described
by (37),
$\beta_{\lambda}(\rho)=\beta_{0}(\rho)+\lambda^{\alpha}u(\rho),$ (41)
where $\alpha>0$ is a scalar parameter and $u$ is a function in
$\mathcal{H}^{1}([0,1])\cap\mathcal{L}^{\infty}([0,1])$ subject to $u(0)=0$,
while remaining free at $\rho=1$. We shall determine _both_ $\alpha$ and $u$
by minimizing the approximate form of $\mathcal{F}_{\lambda}$ evaluated at
$\beta_{\lambda}$ in (41).
To this end, we compute
$\mathcal{F}_{\lambda}[\beta_{\lambda}]\approx\mathcal{F}_{0}[\beta_{0}]+\lambda^{\alpha}\delta\mathcal{F}_{0}(\beta_{0})[u]+\lambda^{2}\mathcal{F}_{4}[\beta_{0}]+\frac{1}{2}\lambda^{2\alpha}\delta^{2}\mathcal{F}_{0}(\beta_{0})[u]+\lambda^{2+\alpha}\delta\mathcal{F}_{4}(\beta_{0})[u],$
(42)
where $\delta\mathcal{F}_{0}(\beta_{0})$ and
$\delta^{2}\mathcal{F}_{0}(\beta_{0})$ are the first and second variations of
$\mathcal{F}_{0}$ at $\beta_{0}$, while $\delta\mathcal{F}_{4}(\beta_{0})$ is
the first variation of $\mathcal{F}_{4}$. Since $\beta_{0}$ is a critical
point of $\mathcal{F}_{0}$, $\delta\mathcal{F}_{0}(\beta_{0})\equiv 0$. Since,
as proved in [15], $\delta^{2}\mathcal{F}_{0}(\beta_{0})$ is positive
definite, the only minimizer of $\mathcal{F}_{\lambda}$ in (42) is trivially
$u\equiv 0$ as long as $\alpha<2$. For $\alpha=2$, $u$ is determined by
minimizing the quartic term in $\lambda$, the least order depending on $u$,
which is a well-behaved functional with both quadratic and linear terms in
$u$. The quadratic term in $\lambda$ already dominates the energy
$\mathcal{F}_{\lambda}$ and is independent of $u$. For $\alpha>2$, the last
term in (42) is dominant, but the minimizing $u$ does not exist, as the
functional
$\delta\mathcal{F}_{4}(\beta_{0})[u]=\int_{0}^{1}\rho\left(\beta_{0}^{\prime}+\frac{1}{\rho}\cos\beta_{0}\sin\beta_{0}\right)^{3}\left(u^{\prime}+\frac{u}{\rho}\right)\operatorname{d}\\!\rho$
(43)
is unbounded below. This can easily be seen by computing
$\delta\mathcal{F}_{4}(\beta_{0})[u]$ for $u_{A}=A\rho$, with $A$ a constant, and
taking the limit as $A\to-\infty$. Thus, we have proved that (41) can hold
only for $\alpha=2$ and that both $\beta_{\lambda}$ and
$\mathcal{F}_{\lambda}[\beta_{\lambda}]$ differ from $\beta_{0}$ and
$\mathcal{F}_{0}[\beta_{0}]$, respectively, by terms in
$\lambda^{2}$, as confirmed by the graphs in Figs. 3 and 5.
We now consider the limit as $\lambda\to\infty$. In this case, we start by
remarking that the uniform state represented by $\beta\equiv 0$ is a solution
of (35) for all values of $\lambda$. As in (41), we perturb it by setting
$\beta_{\lambda}(\rho)=\frac{1}{\lambda^{\alpha}}u(\rho),$ (44)
where again $\alpha>0$ and
$u\in\mathcal{H}^{1}([0,1])\cap\mathcal{L}^{\infty}([0,1])$ is such that
$u(0)=0$. To determine $\alpha$ and $u$, we minimize
$\mathcal{F}_{\lambda}[\beta_{\lambda}]$, which for $\lambda\gg 1$ is given
the approximate form
$\mathcal{F}_{\lambda}[\beta_{\lambda}]\approx\frac{1}{2}\frac{1}{\lambda^{2\alpha}}\delta^{2}\mathcal{F}_{2}(0)[u]+\frac{1}{24}\frac{1}{\lambda^{4\alpha-2}}\delta^{4}\mathcal{F}_{4}(0)[u],$
(45)
where $\delta^{4}\mathcal{F}_{4}$ is the fourth variation of
$\mathcal{F}_{4}$, the first that does not vanish identically on $\beta\equiv
0$.
For $\alpha>1$, $1/\lambda^{2\alpha}$ is the dominant power in (45), but the
minimum of $\mathcal{F}_{\lambda}$ is not attained because
$\delta^{2}\mathcal{F}_{2}(0)$ is unbounded below. To see this, it suffices to
compute
$\delta^{2}\mathcal{F}_{2}(0)[u]=\int_{0}^{1}\left(\rho u^{\prime
2}+\frac{u^{2}}{\rho}\right)\operatorname{d}\\!\rho+(1-2k_{24})u^{2}(1)$ (46)
for $u_{A}=A\rho$, with $A$ again a positive constant, which gives
$\delta^{2}\mathcal{F}_{2}(0)[u_{A}]=2A^{2}(1-k_{24})\to-\infty\quad\text{for}\quad
A\to\infty,$ (47)
by (33).
For $\alpha=1$, both terms in (45) scale like $1/\lambda^{2}$ and
$\mathcal{F}_{\lambda}$ becomes
$\mathcal{F}_{\lambda}[\beta_{\lambda}]\approx\frac{1}{2\lambda^{2}}\left\{\int_{0}^{1}\left[\rho
u^{\prime
2}+\frac{u^{2}}{\rho}+\frac{1}{2}\rho\left(u^{\prime}+\frac{u}{\rho}\right)^{4}\right]\operatorname{d}\\!\rho+(1-2k_{24})u^{2}(1)\right\},$
(48)
whose equilibrium equations are
$\displaystyle\frac{1}{\rho}u-u^{\prime}-\rho u^{\prime\prime}$
$\displaystyle=0,$ (49a) $\displaystyle\left.\left(\rho
u^{\prime}+\frac{1}{\rho^{2}}(\rho
u^{\prime}+u)^{3}+(1-2k_{24})u\right)\right|_{\rho=1}$ $\displaystyle=0,$
(49b)
which are solved by $u=u_{\infty}$ with
$u_{\infty}(\rho):=\frac{1}{2}\sqrt{k_{24}-1}\rho.$ (50)
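That $u_{\infty}$ indeed solves (49a) and (49b) can be checked symbolically; the sketch below (ours, with SymPy) parametrizes $k_{24}=1+s^{2}$ with $s>0$, so that $\sqrt{k_{24}-1}=s$:

```python
# Our own SymPy sketch: verify that u_inf(rho) = (1/2)*sqrt(k24 - 1)*rho
# from (50) satisfies the equilibrium equations (49a) and (49b).
import sympy as sp

rho, s = sp.symbols('rho s', positive=True)
k24 = 1 + s**2                     # parametrize k24 > 1, so sqrt(k24 - 1) = s
u = sp.Rational(1, 2) * s * rho    # u_inf from (50)

ode = u / rho - sp.diff(u, rho) - rho * sp.diff(u, rho, 2)   # (49a)
bc = (rho * sp.diff(u, rho)
      + (rho * sp.diff(u, rho) + u)**3 / rho**2
      + (1 - 2 * k24) * u).subs(rho, 1)                      # (49b)

assert sp.expand(ode) == 0
assert sp.expand(bc) == 0
```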
For $\alpha<1$, the second power in (45) prevails over the first, and the
minimum is trivially $u\equiv 0$. Thus, we conclude that in (44) we must set
$\alpha=1$ and $u=u_{\infty}$, that is,
$\beta_{\lambda}(\rho)\approx\frac{1}{2\lambda}\sqrt{k_{24}-1}\rho.$ (51)
Correspondingly, it follows from (48) that
$\mathcal{F}_{\lambda}[\beta_{\lambda}]\approx-\frac{1}{8}(k_{24}-1)^{2}.$
(52)
Finally, we study, for any given $\lambda$, the behaviour of the solutions to
(III) subject to (34) for $\rho\ll 1$. To this end, we take
$\beta_{\lambda}(\rho)\approx A\rho^{\alpha}$ with $A$ and $\alpha$ positive
constants and compute the limit as $\rho\to 0$ in (III); it is easily checked
that compatibility demands that $\alpha=1$. Similarly, by taking
$\beta_{\lambda}(\rho)\approx A\rho+B\rho^{2}$ and computing again the limit
as $\rho\to 0$ in (III), we see that $B$ must vanish. We conclude that the
minimum $\beta_{\lambda}$ of $\mathcal{F}_{\lambda}$ has the following
asymptotic behaviour,
$\beta_{\lambda}(\rho)=A\rho+\mathcal{O}(\rho^{3}),$ (53)
with $A$ a constant, in agreement with the graphs shown in Fig. 3. The
constant $A$ is related to the twist $T$ on the cylinder’s axis, as it follows
from (53) and (27b) that for $\rho=0$ the distortion characteristics are
$S=B=q=0\quad\text{and}\quad T=\frac{2A}{R},$ (54)
typical of a double twist.
## IV Conclusions
The study of chromonics, which are lyotropic liquid crystals resulting from
supramolecular aggregations dissolved in water, has revealed the emergence at
the macroscopic scale of a twisted ground state, which, owing to the achiral
nature of the microscopic constituents, occurs equally likely in two variants
of opposite chiralities.
To accommodate these new materials within the classical elastic theory of
liquid crystals, researchers hypothesized anomalously small twist constants
and (comparatively) large saddle-splay constants, violating one of the
Ericksen inequalities, which guarantee the classical Oseen-Frank stored energy
to be bounded below. For equilibrium problems in confined, fixed geometries,
such a violation seemed to have no consequences on the existence and stability
of energy minimizers, but free-boundary problems were found paradoxical. In
particular, minimizing sequences of tactoidal droplets were shown to drive the
total energy to negative infinity by squeezing (double) twist within ever
growing needle-shaped domains.
This paper proposes to describe the elasticity of chromonics by adding to
the classical Oseen-Frank energy density a term quartic in the measure of
twist. In so doing, the theory introduces a phenomenological length $a$, whose
microscopic origin is as problematic as the origin of spontaneous (single)
twist in cholesteric liquid crystals. We may speculate that $a$ depends on
temperature and possibly arises from the flexibility of the supramolecular
aggregates that constitute these materials.
Two major conclusions are reached here. First, the proposed quartic twist
theory is proven to remove the paradoxes encountered in free-boundary problems
with the classical quadratic theory.
Second, when applied to chromonics confined in a capillary tube enforcing
degenerate planar anchoring for the nematic director $\bm{n}$ on the lateral
boundary, the quartic twist theory predicts equilibrium configurations for
$\bm{n}$ that depend on the ratio $\lambda=a/R$, where $R$ is the capillary’s
radius. Using published experimental data for SYY, we can estimate both
$K_{24}$ and $a$. To further put the theory to the test, it might be
worthwhile performing similar experiments on a series of capillaries with very
different radii.
Two other theoretical challenges are suggested by this study. First, the
proposed theory posits a degenerate double twist as ground state for
chromonics. As recently shown in [41] for the non-degenerate double twist in
the ground state of cholesterics, this is a source of elastic frustration. A
natural question is then how such a frustration could be eased; the answer
should depend on how the domain length scale compares with the
phenomenological length $a$.
A second, more demanding question is how $a$ is related to the structure of
the microscopic constituents of the material and their possible
polydispersity. These difficult questions should be addressed by future
research.
## References
* Lydon [1998a] J. Lydon, Chromonic liquid crystal phases, Curr. Opin. Colloid Interface Sci. 3, 458 (1998a).
* Lydon [1998b] J. Lydon, Chromonics, in _Handbook of Liquid Crystals: Low Molecular Weight Liquid Crystals II_ (John Wiley & Sons, Weinheim, Germany, 1998) Chap. XVIII, pp. 981–1007.
* Lydon [2010] J. Lydon, Chromonic review, J. Mater. Chem. 20, 10071 (2010).
* Lydon [2011] J. Lydon, Chromonic liquid crystalline phases, Liq. Cryst. 38, 1663 (2011).
* Dierking and Martins Figueiredo Neto [2020] I. Dierking and A. Martins Figueiredo Neto, Novel trends in lyotropic liquid crystals, Crystals 10, 604 (2020).
* Nayani _et al._ [2015] K. Nayani, R. Chang, J. Fu, P. W. Ellis, A. Fernandez-Nieves, J. O. Park, and M. Srinivasarao, Spontaneous emergence of chirality in achiral lyotropic chromonic liquid crystals confined to cylinders, Nat. Commun. 6, 8067 (2015).
* Davidson _et al._ [2015a] Z. S. Davidson, L. Kang, J. Jeong, T. Still, P. J. Collings, T. C. Lubensky, and A. G. Yodh, Chiral structures and defects of lyotropic chromonic liquid crystals induced by saddle-splay elasticity, Phys. Rev. E 91, 050501(R) (2015a), see also Erratum [58] and Supplementary Information https://journals.aps.org/pre/supplemental/10.1103/PhysRevE.91.050501/Supplementary_Info_Planar_Davidson_et_al.pdf.
* Fu _et al._ [2017] J. Fu, K. Nayani, J. Park, and M. Srinivasarao, Spontaneous emergence of twist and formation of monodomain in lyotropic chromonic liquid crystals confined to capillaries, NPG Asia Mater. 9, e393 (2017).
* Javadi _et al._ [2018] A. Javadi, J. Eun, and J. Jeong, Cylindrical nematic liquid crystal shell: effect of saddle-splay elasticity, Soft Matter 14, 9005 (2018).
* Ondris-Crawford _et al._ [1993] R. J. Ondris-Crawford, G. P. Crawford, S. Zumer, and J. W. Doane, Curvature-induced configuration transition in confined nematic liquid crystals, Phys. Rev. Lett. 70, 194 (1993).
* Cladis and Kléman [1972] P. E. Cladis and M. Kléman, Non-singular disclinations of strength ${S}=+1$ in nematics, J. Phys. France 33, 591 (1972).
* Meyer [1973] R. B. Meyer, On the existence of even indexed disclinations in nematic liquid crystals, Phil. Mag. 27, 405 (1973).
* Virga [2019] E. G. Virga, Uniform distortions and generalized elasticity of liquid crystals, Phys. Rev. E 100, 052701 (2019).
* Ericksen [1966] J. L. Ericksen, Inequalities in liquid crystal theory, Phys. Fluids 9, 1205 (1966).
* Paparini and Virga [2022a] S. Paparini and E. G. Virga, Stability against the odds: the case of chromonic liquid crystals, J. Nonlinear Sci. 32, 74 (2022a).
* Long and Selinger [2022a] C. Long and J. V. Selinger, Violation of Ericksen inequalities in lyotropic chromonic liquid crystals, J. Elast. (2022a).
* Paparini and Virga [2022b] S. Paparini and E. G. Virga, Paradoxes for chromonic liquid crystal droplets, Phys. Rev. E 106, 044703 (2022b).
* Tortora _et al._ [2010] L. Tortora, H.-S. Park, S.-W. Kang, V. Savaryn, S.-H. Hong, K. Kaznatcheev, D. Finotello, S. Sprunt, S. Kumar, and O. D. Lavrentovich, Self-assembly, condensation, and order in aqueous lyotropic chromonic liquid crystals crowded with additives, Soft Matter 6, 4157 (2010).
* Tortora and Lavrentovich [2011] L. Tortora and O. D. Lavrentovich, Chiral symmetry breaking by spatial confinement in tactoidal droplets of lyotropic chromonic liquid crystals, Proc. Natl. Acad. Sci. USA 108, 5163 (2011).
* Peng and Lavrentovich [2015] C. Peng and O. D. Lavrentovich, Chirality amplification and detection by tactoids of lyotropic chromonic liquid crystals, Soft Matter 11, 7221 (2015).
* Nayani _et al._ [2017] K. Nayani, J. Fu, R. Chang, J. O. Park, and M. Srinivasarao, Using chiral tactoids as optical probes to study the aggregation behavior of chromonics, Proc. Natl. Acad. Sci. USA 114, 3826 (2017), https://www.pnas.org/content/114/15/3826.full.pdf.
* Shadpour _et al._ [2019] S. Shadpour, J. P. Vanegas, A. Nemati, and T. Hegmann, Amplification of chirality by adenosine monophosphate-capped luminescent gold nanoclusters in nematic lyotropic chromonic liquid crystal tactoids, ACS Omega 4, 1662 (2019).
* Nehring and Saupe [1971] J. Nehring and A. Saupe, On the elastic theory of uniaxial liquid crystals, J. Chem. Phys. 54, 337 (1971).
* Oldano and Barbero [1985] C. Oldano and G. Barbero, An _ab initio_ analysis of the second-order elasticity effect on nematic configurations, Phys. Lett. A 110, 213 (1985).
* Dozov [2001] I. Dozov, On the spontaneous symmetry breaking in the mesophases of achiral banana-shaped molecules, Europhys. Lett. 56, 247 (2001).
* Meyer [1976] R. B. Meyer, Structural problems in liquid crystal physics, in _Molecular Fluids_ , Les Houches Summer School in Theoretical Physics, Vol. XXV-1973, edited by R. Balian and G. Weill (Gordon and Breach, New York, 1976) pp. 273–373.
* Cestari _et al._ [2011] M. Cestari, S. Diez-Berart, D. A. Dunmur, A. Ferrarini, M. R. de la Fuente, D. J. B. Jackson, D. O. Lopez, G. R. Luckhurst, M. A. Perez-Jubindo, R. M. Richardson, J. Salud, B. A. Timimi, and H. Zimmermann, Phase behavior and properties of the liquid-crystal dimer 1′′,7′′-bis(4-cyanobiphenyl-4′-yl) heptane: A twist-bend nematic liquid crystal, Phys. Rev. E 84, 031704 (2011).
* Lelidis and Barbero [2016] I. Lelidis and G. Barbero, Nematic phases with spontaneous splay–bend deformation: standard elastic description, Liq. Cryst. 43, 208 (2016).
* Barbero and Lelidis [2019] G. Barbero and I. Lelidis, Fourth-order nematic elasticity and modulated nematic phases: a poor man’s approach, Liq. Cryst. 46, 535 (2019).
* Lelidis and Barbero [2019] I. Lelidis and G. Barbero, Nonlinear nematic elasticity, J. Mol. Liq. 275, 116 (2019).
* Longa _et al._ [1987] L. Longa, D. Monselesan, and H.-R. Trebin, An extension of the Landau-Ginzburg-de Gennes theory for liquid crystals, Liq. Cryst. 2, 769 (1987).
* Golovaty _et al._ [2021] D. Golovaty, M. Novack, and P. Stenberg, A novel Landau-de Gennes model with quartic elastic terms, Eur. J. Appl. Math. 32, 177 (2021).
* Oseen [1933] C. W. Oseen, The theory of liquid crystals, Trans. Faraday Soc. 29, 883 (1933).
* Frank [1958] F. C. Frank, On the theory of liquid crystals, Discuss. Faraday Soc. 25, 19 (1958).
* Zocher [1933] H. Zocher, The effect of a magnetic field on the nematic state, Trans. Faraday Soc. 29, 945 (1933).
* Virga [1994] E. G. Virga, _Variational Theories for Liquid Crystals_ , Applied Mathematics and Mathematical Computation, Vol. 8 (Chapman & Hall, London, 1994).
* Selinger [2018] J. V. Selinger, Interpretation of saddle-splay and the Oseen-Frank free energy in liquid crystals, Liq. Cryst. Rev. 6, 129 (2018).
* Machon and Alexander [2016] T. Machon and G. P. Alexander, Umbilic lines in orientational order, Phys. Rev. X 6, 011033 (2016).
* Selinger [2022] J. V. Selinger, Director deformations, geometric frustration, and modulated phases in liquid crystals, Ann. Rev. Condens. Matter Phys. 13 (2022), first posted online on October 12, 2021.
* Pedrini and Virga [2020] A. Pedrini and E. G. Virga, Liquid crystal distortions revealed by an octupolar tensor, Phys. Rev. E 101, 012703 (2020).
* Long and Selinger [2022b] C. Long and J. V. Selinger, Explicit demonstration of geometric frustration in chiral liquid crystals (2022b), arXiv:2210.15832 [cond-mat.soft].
* Rapini and Papoular [1969] A. Rapini and M. Papoular, Distorsion d’une lamelle nématique sous champ magnétique conditions d’ancrage aux parois, J. Phys. Colloq. 30, C4.54 (1969), available at https://hal.archives-ouvertes.fr/jpa-00213715/document.
* Zhou [2017] S. Zhou, _Lyotropic Chromonic Liquid Crystals_ , Springer Theses (Springer, Cham, Switzerland, 2017).
* Stanley _et al._ [2005] C. B. Stanley, H. Hong, and H. H. Strey, DNA cholesteric pitch as a function of density and ionic strength, Biophys. J. 89, 2552 (2005).
* Tortora _et al._ [2020] M. M. C. Tortora, G. Mishra, D. Prešern, and J. P. K. Doye, Chiral shape fluctuations and the origin of chirality in cholesteric phases of DNA origamis, Sci. Adv. 6, 5163 (2020).
* Harris _et al._ [1997] A. B. Harris, R. D. Kamien, and T. C. Lubensky, Microscopic origin of cholesteric pitch, Phys. Rev. Lett. 78, 1476 (1997).
* Harris _et al._ [1999] A. B. Harris, R. D. Kamien, and T. C. Lubensky, Molecular chirality and chiral parameters, Rev. Mod. Phys. 71, 1745 (1999).
* Kornyshev _et al._ [2002] A. Kornyshev, S. Leikin, and S. Malinin, Chiral electrostatic interaction and cholesteric liquid crystals of DNA, Eur. Phys. J. E 7, 83 (2002).
* Grelet and Fraden [2003] E. Grelet and S. Fraden, What is the origin of chirality in the cholesteric phase of virus suspensions?, Phys. Rev. Lett. 90, 198302 (2003).
* Virga [1989] E. G. Virga, Drops of nematic liquid crystals, Arch. Rational Mech. Anal. 107, 371 (1989), reprinted in [59].
* Lin and Poon [1996] F. H. Lin and C. C. Poon, On nematic liquid crystal droplets, in _Elliptic and parabolic methods in geometry_ (A. K. Peters, Ltd., Wellesley, MA, 1996) pp. 91–121.
* Geng and Lin [2022] Z. Geng and F. Lin, The two-dimensional liquid crystal droplet problem with a tangential boundary condition, Arch. Rational Mech. Anal. 243, 1181 (2022).
* Lin and Wang [2022] F. Lin and C. Wang, Isotropic-nematic phase transition and liquid crystal droplets, Comm. Pure Appl. Math. (2022).
* Osserman [1978] R. Osserman, The isoperimetric inequality, Bull. Amer. Math. Soc. 84, 1182 (1978).
* Burylov [1997] S. Burylov, Equilibrium configuration of a nematic liquid crystal confined to a cylindrical cavity, J. Exp. Theor. Phys. 85, 873 (1997).
* Zhou _et al._ [2012] S. Zhou, Y. A. Nastishin, M. M. Omelchenko, L. Tortora, V. G. Nazarenko, O. P. Boiko, T. Ostapenko, T. Hu, C. C. Almasan, S. N. Sprunt, J. T. Gleeson, and O. D. Lavrentovich, Elasticity of lyotropic chromonic liquid crystals probed by director reorientation in a magnetic field, Phys. Rev. Lett. 109, 037801 (2012).
* Eun _et al._ [2019] J. Eun, S.-J. Kim, and J. Jeong, Effects of chiral dopants on double-twist configurations of lyotropic chromonic liquid crystals in a cylindrical cavity, Phys. Rev. E 100, 012702 (2019).
* Davidson _et al._ [2015b] Z. S. Davidson, L. Kang, J. Jeong, T. Still, P. J. Collings, T. C. Lubensky, and A. G. Yodh, Erratum: Chiral structures and defects of lyotropic chromonic liquid crystals induced by saddle-splay elasticity [Phys. Rev. E 91, 050501(R) (2015)], Phys. Rev. E 92, 019905 (2015b).
* Virga [1991] E. G. Virga, Drops of nematic liquid crystals, in _Mechanics and Thermodynamics of Continua_ , edited by H. Markovitz, V. J. Mizel, and D. R. Owen (Springer, Berlin, 1991) pp. 211–230.
# Note on stability of an abstract coupled hyperbolic-parabolic system:
singular case
Kaïs Ammari, LR Analysis and Control of PDEs, LR 22ES03, Department of
Mathematics, Faculty of Sciences of Monastir, University of Monastir, Tunisia
<EMAIL_ADDRESS>; Farhat Shel, LR Analysis and Control of PDEs, LR
22ES03, Department of Mathematics, Faculty of Sciences of Monastir, University
of Monastir, Tunisia <EMAIL_ADDRESS>; and Zhuangyi Liu, Department of
Mathematics and Statistics, University of Minnesota, Duluth, MN 55812-3000,
United States <EMAIL_ADDRESS>
###### Abstract.
In this paper we try to complete the stability analysis for an abstract system
of coupled hyperbolic and parabolic equations
$\left\\{\begin{array}[]{lll}\displaystyle u_{tt}+Au-A^{\alpha}w=0,\\\
w_{t}+A^{\alpha}u_{t}+A^{\beta}w=0,\\\
u(0)=u_{0},u_{t}(0)=u_{1},w(0)=w_{0},\end{array}\right.$
where $A$ is a self-adjoint, positive definite operator on a complex Hilbert
space $H$, and $(\alpha,\beta)\in[0,1]\times[0,1]$, which was considered in
[1] and, later, in [4]. Our contribution is to identify a fine scale of
polynomial stability of the solution in the region
$S_{3}:=\left\\{(\alpha,\beta)\in[0,1]\times[0,1];\,\beta<2\alpha-1\right\\}$
taking into account the presence of a singularity at zero.
###### Key words and phrases:
Hyperbolic-parabolic system, stability
###### 2010 Mathematics Subject Classification:
35B65, 35K90, 47D03
###### Contents
1. Introduction
2. Stabilization
3. Application
## 1\. Introduction
In this paper, we study the stability of the following system:
(1.1) $u_{tt}+Au-A^{\alpha}w=0,$ (1.2) $w_{t}+A^{\alpha}u_{t}+A^{\beta}w=0,$
(1.3) $u(0)=u_{0},u_{t}(0)=u_{1},w(0)=w_{0},$
where $A$ is a self-adjoint, positive definite operator on a complex Hilbert
space $H$, and $(\alpha,\beta)\in
S_{3}=\left\\{(a,b)\in[0,1]\times[0,1];\,b<2a-1\right\\}$.
By denoting $U=(u,u_{t},w)^{T},U_{0}=(u_{0},u_{1},w_{0})^{T}$, system
(1.1)-(1.3) can be written as an abstract linear evolution equation on the
space $\mathcal{H}=\mathcal{D}(A^{\frac{1}{2}})\times H\times H$,
(1.4)
$\left\\{\begin{array}[]{ll}\displaystyle\frac{dU}{dt}(t)=\mathcal{A}_{\alpha,\beta}U(t),\,t\geq
0,\\\ U(0)=U_{0},\end{array}\right.$
where the operator
$\mathcal{A}_{\alpha,\beta}:\mathcal{D}(\mathcal{A}_{\alpha,\beta})\subset\mathcal{H}\rightarrow\mathcal{H}$
is defined by
$\mathcal{A}_{\alpha,\beta}=\left(\begin{array}[]{ccll}0&I&0\\\
-A&0&A^{\alpha}\\\ 0&-A^{\alpha}&-A^{\beta}\end{array}\right),$
with the domain
$\mathcal{D}(\mathcal{A}_{\alpha,\beta})=\mathcal{D}(A)\times\mathcal{D}(A^{\alpha})\times\mathcal{D}(A^{\alpha}).$
First, we review the well-posedness of problem (1.4). The operator
$\mathcal{A}_{\alpha,\beta}$ is densely defined and dissipative; we will prove
that its closure generates a $C_{0}$-semigroup of contractions. By a version of
the Lumer-Phillips theorem [6], it suffices to prove that
the adjoint operator $\mathcal{A}^{*}_{\alpha,\beta}$ is also dissipative.
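Dissipativity of $\mathcal{A}_{\alpha,\beta}$ rests on the identity $Re\left\langle\mathcal{A}_{\alpha,\beta}U,U\right\rangle_{\mathcal{H}}=-\|A^{\beta/2}\theta\|^{2}$, which reappears in Section 2. As a finite-dimensional sanity check (our own illustration, not part of the paper), one can verify it numerically for a diagonal, positive definite $A$:

```python
# Finite-dimensional check (ours): for diagonal positive definite A on C^n,
#     Re < A_{alpha,beta} U, U >_H  =  -|| A^{beta/2} theta ||^2
# in the inner product of H = D(A^{1/2}) x H x H, i.e.
# <(u,v,t),(u',v',t')> = <A^{1/2}u, A^{1/2}u'> + <v,v'> + <t,t'>.
import numpy as np

rng = np.random.default_rng(0)
n, alpha, beta = 5, 0.9, 0.5
a = rng.uniform(1.0, 3.0, size=n)     # eigenvalues of A (all positive)

def rand_vec():
    return rng.normal(size=n) + 1j * rng.normal(size=n)

u, v, th = rand_vec(), rand_vec(), rand_vec()

# Action of A_{alpha,beta} on U = (u, v, theta):
Au1 = v
Au2 = -a * u + a**alpha * th
Au3 = -(a**alpha) * v - a**beta * th

# Real part of the H-inner product, with the A-weighted first component:
lhs = (np.vdot(u, a * Au1) + np.vdot(v, Au2) + np.vdot(th, Au3)).real
rhs = -np.sum(a**beta * np.abs(th)**2)
assert np.isclose(lhs, rhs)
```

The skew-symmetric blocks cancel in the real part, leaving only the damping term, exactly as in the computation preceding (2.7) below.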
The operator $\mathcal{A}^{*}_{\alpha,\beta}$ is a closed extension of the
operator
$\mathcal{M}_{\alpha,\beta}=\left(\begin{array}[]{ccll}0&-I&0\\\
A&0&-A^{\alpha}\\\ 0&A^{\alpha}&-A^{\beta}\end{array}\right),$
with domain
$\mathcal{D}(\mathcal{M}_{\alpha,\beta})=\mathcal{D}(\mathcal{A}_{\alpha,\beta})=\mathcal{D}(A)\times\mathcal{D}(A^{\alpha})\times\mathcal{D}(A^{\alpha}).$
The operator $\mathcal{M}_{\alpha,\beta}$ is densely defined and dissipative;
hence it is closable and its closure $\overline{\mathcal{M}}_{\alpha,\beta}$ is
also dissipative. To conclude, it suffices to prove that
$\mathcal{A}^{*}_{\alpha,\beta}=\overline{\mathcal{M}}_{\alpha,\beta}$. For
this we use the following lemma [2, 1].
###### Lemma 1.1.
[2] We consider on the Hilbert spaces $G$ and $H_{1}$ the operators
$\displaystyle\mathcal{A}:\mathcal{D}(\mathcal{A})\subset G\rightarrow
G,\;\;\;\;\;B:\mathcal{D}(B)\subset H_{1}\rightarrow G,$ $\displaystyle
B^{*}:\mathcal{D}(B^{*})\subset G\rightarrow
H_{1},\;\;\;\;\;\mathcal{C}:\mathcal{D}(\mathcal{C})\subset H_{1}\rightarrow
H_{1},$
and the operator matrix $\mathcal{M}$ on $G\times H_{1}$ defined
by
$\mathcal{M}:=\left(\begin{array}[]{ccll}\mathcal{A}&B\\\
-B^{*}&\mathcal{C}\end{array}\right),\;\;\;\mathcal{D}(\mathcal{M}):=\left(\mathcal{D}(\mathcal{A})\cap\mathcal{D}(B^{*})\right)\times\left(\mathcal{D}(B)\cap\mathcal{D}(\mathcal{C})\right).$
Assume that $\mathcal{C}$ is boundedly invertible, that
$\mathcal{B}\in\mathcal{L}\left(\mathcal{D}(\mathcal{C}),G\right)$ and that
$\mathcal{C}^{-1}\mathcal{B}^{*}$ extends to a bounded linear operator (we
denote its closure with the same symbol). Then $\mathcal{M}$ is closed if and
only if $\mathcal{A}+\mathcal{B}\mathcal{C}^{-1}\mathcal{B}^{*}$ is closed and
one has
$\displaystyle\overline{\mathcal{M}}=\left(\begin{array}[]{ccll}I&\mathcal{B}\mathcal{C}^{-1}\\\
0&I\end{array}\right)\left(\begin{array}[]{ccll}\overline{\mathcal{A}+\mathcal{B}\mathcal{C}^{-1}\mathcal{B}^{*}}&0\\\
0&\mathcal{C}\end{array}\right)\left(\begin{array}[]{ccll}I&0\\\
-\mathcal{C}^{-1}\mathcal{B}^{*}&I\end{array}\right)$
$\displaystyle\mathcal{D}\left(\overline{\mathcal{M}}\right)=\left\\{\left(\begin{array}[]{cll}u\\\
v\end{array}\right)\in\mathcal{D}(\overline{\mathcal{A}+\mathcal{B}\mathcal{C}^{-1}\mathcal{B}^{*}})\times
H_{1},\;-\mathcal{C}^{-1}\mathcal{B}^{*}u+v\in\mathcal{D}(\mathcal{C})\right\\}.$
By taking $G=\mathcal{D}(A^{1/2})$, $H_{1}=H\times H$, $\mathcal{A}=0$,
$\mathcal{B}=(-I\;0):\mathcal{D}(\mathcal{B})=\mathcal{D}(A^{\alpha})\times\mathcal{D}(A^{\alpha})\subset
H\times H\rightarrow\mathcal{D}(A^{1/2})$,
$\mathcal{C}=\left(\begin{array}[]{ccll}0&-A^{\alpha}\\\
A^{\alpha}&-A^{\beta}\end{array}\right)$, with
$\mathcal{D}(\mathcal{C})=\mathcal{D}(A^{\alpha})\times\mathcal{D}(A^{\alpha})$,
it appears that $\mathcal{B}^{*}=\left(\begin{array}[]{cll}-A\\\
0\end{array}\right)$,
$\mathcal{C}^{-1}\mathcal{B}^{*}=\left(\begin{array}[]{cll}A^{1+\beta-2\alpha}\\\
A^{1-\alpha}\end{array}\right)$,
$\mathcal{A}+\mathcal{B}\mathcal{C}^{-1}\mathcal{B}^{*}=-A^{\beta+1-2\alpha}$
(with domain $\mathcal{D}(A)$), and that
$\mathcal{D}\left(\overline{\mathcal{A}+\mathcal{B}\mathcal{C}^{-1}\mathcal{B}^{*}}\right)=\mathcal{D}(A^{1/2})$.
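These operator identities can be checked on a one-dimensional eigenspace of $A$ with eigenvalue $a>0$, on which all the block operators act as scalars. The following SymPy sketch (ours, not part of the paper) verifies the displayed expressions for $\mathcal{C}^{-1}\mathcal{B}^{*}$ and $\mathcal{A}+\mathcal{B}\mathcal{C}^{-1}\mathcal{B}^{*}$:

```python
# Eigenspace check (ours) of the formulas above: on an eigenspace of A
# with eigenvalue a > 0, C, B and B* reduce to numerical matrices in a.
import sympy as sp

a, alpha, beta = sp.symbols('a alpha beta', positive=True)

C = sp.Matrix([[0, -a**alpha], [a**alpha, -a**beta]])
Bstar = sp.Matrix([-a, 0])          # B* = (-A  0)^T on the eigenspace
B = sp.Matrix([[-1, 0]])            # B = (-I  0)

CinvBstar = C.inv() * Bstar
assert sp.simplify(CinvBstar[0] - a**(1 + beta - 2*alpha)) == 0
assert sp.simplify(CinvBstar[1] - a**(1 - alpha)) == 0
# With the block A = 0, A + B C^{-1} B* reduces to -a^(beta + 1 - 2*alpha):
assert sp.simplify((B * CinvBstar)[0, 0] + a**(beta + 1 - 2*alpha)) == 0
```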
Furthermore, the operators $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$
satisfy the hypotheses of Lemma 1.1, so that
$\mathcal{D}\left(\overline{\mathcal{M}}_{\alpha,\beta}\right)=\left\\{(u,v,\theta)^{T}\in\mathcal{H},\;-A^{1+\beta-2\alpha}u+v\in\mathcal{D}(A^{\alpha}),\;-A^{1-\alpha}u+\theta\in\mathcal{D}(A^{\alpha})\right\\}.$
On the other hand, a direct calculation gives
$\mathcal{D}\left(\mathcal{A}^{*}_{\alpha,\beta}\right)\subset\left\\{(u,v,\theta)^{T}\in\mathcal{H},\;-A^{1+\beta-2\alpha}u+v\in\mathcal{D}(A^{\alpha}),\;-A^{1-\alpha}u+\theta\in\mathcal{D}(A^{\alpha})\right\\}.$
Hence $\mathcal{A}^{*}_{\alpha,\beta}=\overline{\mathcal{M}}_{\alpha,\beta}$
and $\overline{\mathcal{A}}_{\alpha,\beta}$ generates a $C_{0}$-semigroup of
contractions.
Now, concerning the stability of the semigroup
$e^{t\mathcal{A}_{\alpha,\beta}}$, recall that Ammar-Khodja et al. [1]
first proved that, for every $\alpha,\beta\geq 0$,
$e^{t\mathcal{A}_{\alpha,\beta}}$ is exponentially stable if and only if
$\max(1-2\alpha,2\alpha-1)<\beta<2\alpha.$ Later on, a stability analysis was
performed by J. Hao and Z. Liu in [5] for
$(\alpha,\beta)\in[0,1]\times[0,1]$. They divided the unit square
$[0,1]\times[0,1]$ into four regions $S$, $S_{1}$, $S_{2}$, $S_{3}$ where
$\displaystyle S$ $\displaystyle:=$
$\displaystyle\left\\{(\alpha,\beta)\in[0,1]\times[0,1];\,\max(1-2\alpha,2\alpha-1)\leq\beta\leq
2\alpha\right\\},$ $\displaystyle S_{1}$ $\displaystyle:=$
$\displaystyle\left\\{(\alpha,\beta)\in[0,1]\times[0,1];\;0<\beta-2\alpha,\;\alpha\geq
0,\;\frac{1}{2}\leq\beta\leq 1\right\\},$ $\displaystyle S_{2}$
$\displaystyle:=$
$\displaystyle\left\\{(\alpha,\beta)\in[0,1]\times[0,1];\;\beta<1-2\alpha,\;\alpha\geq
0,\;0\leq\beta\leq\frac{1}{2}\right\\}$
and $S_{3}$ as defined above. We summarize the main results in the following
theorem (see also [4]).
###### Theorem 1.2.
The semigroup $e^{t\mathcal{A}_{\alpha,\beta}}$ has the following stability
properties:
(i) In $S$, it is exponentially stable;
(ii) In $S_{1}$, it is polynomially stable of order
$\frac{1}{2(\beta-2\alpha)}$;
(iii) In $S_{2}$, it is polynomially stable of order
$\frac{1}{2-2(\beta+2\alpha)}$;
(iv) In $S_{3}$, it is not asymptotically stable.
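For orientation, the region decomposition underlying Theorem 1.2 can be encoded in a few lines. The helper below is our own illustration (boundary cases are resolved in the order the inequalities are tested), not part of the paper:

```python
# Our own illustration of the decomposition of the unit square used in
# Theorem 1.2; the inequalities are those quoted above from Hao and Liu.
def region(alpha, beta):
    """Classify (alpha, beta) in [0,1]^2 into S, S1, S2 or S3."""
    if max(1 - 2*alpha, 2*alpha - 1) <= beta <= 2*alpha:
        return "S"    # exponentially stable
    if beta - 2*alpha > 0 and 0.5 <= beta <= 1:
        return "S1"   # polynomially stable, order 1/(2(beta - 2*alpha))
    if beta < 1 - 2*alpha and 0 <= beta <= 0.5:
        return "S2"   # polynomially stable, order 1/(2 - 2(beta + 2*alpha))
    if beta < 2*alpha - 1:
        return "S3"   # not asymptotically stable; the case studied here
    return "unclassified"

assert region(0.5, 0.5) == "S"
assert region(0.1, 0.9) == "S1"
assert region(0.1, 0.1) == "S2"
assert region(1.0, 0.5) == "S3"
```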
To justify the lack of asymptotic stability in region $S_{3}$, they proved
that $0\in\sigma(\mathcal{A}_{\alpha,\beta})$, where
$\sigma(\mathcal{A}_{\alpha,\beta})$ is the spectrum of
$\mathcal{A}_{\alpha,\beta}$. Moreover, it can be shown that
$\sigma(\mathcal{A}_{\alpha,\beta})\cap\mathbf{i}\mathbb{R}=\left\\{0\right\\}$
in the region $S_{3}$.
The main result of this paper then concerns the precise asymptotic behaviour
of the solutions of (1.1)-(1.3) for $(\alpha,\beta)$ in the region $S_{3}$,
with initial condition in a special subspace of
$\mathcal{D}(\mathcal{A}_{\alpha,\beta})$. Precisely, we will estimate
$\|e^{t\mathcal{A}_{\alpha,\beta}}\mathcal{A}_{\alpha,\beta}\left(I-\mathcal{A}_{\alpha,\beta}\right)^{-1}\|$
when $t\longrightarrow\infty$. Our technique is a special frequency and
spectral analysis of the corresponding operator.
## 2\. Stabilization
We will justify that the resolvent has only one singularity (at zero) on the
imaginary axis (Lemma 2.3 below), and that
$\left(\mathbf{i}\lambda-\mathcal{A}_{\alpha,\beta}\right)^{-1}$ is bounded
outside a neighborhood of zero in $\mathbb{R}$ (Lemma 2.4 below). Then we
apply a result due to Batty, Chill and Tomilov [3, Theorem 7.6], which
relates the decay of
$\|e^{t\mathcal{A}_{\alpha,\beta}}\mathcal{A}_{\alpha,\beta}\left(I-\mathcal{A}_{\alpha,\beta}\right)^{-1}\|$
to the growth of
$\left(\mathbf{i}\lambda-\mathcal{A}_{\alpha,\beta}\right)^{-1}$ near zero.
We first recall the corresponding result:
###### Theorem 2.1.
([3], Theorem 7.6.) Let $(T(t))_{t\geq 0}$ be a bounded $C_{0}$-semigroup on a
Hilbert space $X$, with generator $\mathcal{A}$. Assume that
$\sigma(\mathcal{A})\cap\mathbf{i}\mathbb{R}=\left\\{0\right\\}$, and let
$\gamma\geq 1$. The following are equivalent:
(i)
$\|\left(\mathbf{i}s-\mathcal{A}\right)^{-1}\|=\left\\{\begin{array}[]{ll}\displaystyle
O\,(|s|^{-\gamma}),\;\;\,s\rightarrow 0,\\\ \displaystyle
O\,(1),\;\;\;\;\;\;\;\;\,|s|\rightarrow\infty,\end{array}\right.$
(ii)
$\|T(t)\mathcal{A}^{\gamma}(I-\mathcal{A})^{-\gamma}\|=O\,\left(\frac{1}{t}\right),\;\;\;\;t\rightarrow\infty,$
(iii)
$\|T(t)\mathcal{A}(I-\mathcal{A})^{-1}\|=O\,\left(\frac{1}{t^{1/\gamma}}\right),\;\;\;\;t\rightarrow\infty$.
Hence, our main result is the following
###### Theorem 2.2.
We have the following decay in the region $S_{3}$:
(2.1)
$\|e^{t\mathcal{A}_{\alpha,\beta}}\mathcal{A}_{\alpha,\beta}\left(I-\mathcal{A}_{\alpha,\beta}\right)^{-1}\|=O\,\left(\frac{1}{t}\right),\;\;\;\;t\rightarrow\infty.$
In particular, the decay of
$e^{t\mathcal{A}_{\alpha,\beta}}\mathcal{A}_{\alpha,\beta}x$ to zero is
uniform with respect to $x\in\mathcal{D}(\mathcal{A}_{\alpha,\beta})$.
It follows also that, for every $z\in Ran(\mathcal{A}_{\alpha,\beta})$ we have
$\|e^{t\mathcal{A}_{\alpha,\beta}}z\|=O\,\left(\frac{1}{t}\right),\;\;\;\;t\rightarrow\infty.$
###### Proof.
In view of Theorem 2.1, the proof is a direct consequence of the following
three lemmas.
###### Lemma 2.3.
In the region $S_{3}$, the operator $\mathcal{A}_{\alpha,\beta}$ satisfies
$\sigma(\mathcal{A}_{\alpha,\beta})\cap\mathbf{i}\mathbb{R}=\left\\{0\right\\}.$
###### Proof.
It was proved in [5, Section 3] that
$\left\\{0\right\\}\subset\sigma(\mathcal{A}_{\alpha,\beta})\cap\mathbf{i}\mathbb{R}$.
Conversely, to show that there is no nonzero spectral point on the imaginary
axis, we argue by contradiction. Let $\lambda\in\mathbb{R}$,
$\lambda\neq 0$, be such that
$\mathbf{i}\lambda\in\sigma(\mathcal{A}_{\alpha,\beta})$. Then, there exists a
sequence $(U_{n})\subset\mathcal{D}(\mathcal{A}_{\alpha,\beta})$, with
$\|U_{n}\|=1$ for all $n$, such that
(2.2) $\lim_{n\rightarrow\infty}\|\left(\mathbf{i}\lambda
I-\mathcal{A}_{\alpha,\beta}\right)U_{n}\|=0$
or there exists a sequence
$(U_{n})\subset\mathcal{D}(\mathcal{M}_{\alpha,\beta})=\mathcal{D}(\mathcal{A}_{\alpha,\beta})$,
with $\|U_{n}\|=1$ for all $n$, such that
(2.3) $\lim_{n\rightarrow\infty}\|\left(\mathbf{i}\lambda
I+\mathcal{M}_{\alpha,\beta}\right)U_{n}\|=0.$
Setting $U_{n}=(u_{n},v_{n},\theta_{n})$, (2.2) is equivalent to
(2.4) $\displaystyle\mathbf{i}\lambda
A^{1/2}u_{n}-A^{1/2}v_{n}=o(1),\;\;\;\text{in}\;H,$ (2.5)
$\displaystyle\mathbf{i}\lambda
v_{n}+Au_{n}-A^{\alpha}\theta_{n}=o(1),\;\;\;\text{in}\;H,$ (2.6)
$\displaystyle\mathbf{i}\lambda\theta_{n}+A^{\alpha}v_{n}+A^{\beta}\theta_{n}=o(1),\;\;\;\text{in}\;H.$
First, since
$Re\left(\left\langle\left(\mathbf{i}\lambda
I-\mathcal{A}_{\alpha,\beta}\right)U_{n},U_{n}\right\rangle_{\mathcal{H}}\right)=\|A^{\beta/2}\theta_{n}\|^{2}$
we obtain
(2.7) $\lim_{n\rightarrow\infty}\|A^{\beta/2}\theta_{n}\|=0,$
and in particular
(2.8) $\lim_{n\rightarrow\infty}\|\theta_{n}\|=0.$
Second, taking the inner product of (2.4) with $\frac{1}{\lambda}A^{1/2}u_{n}$,
of (2.5) with $\frac{1}{\lambda}v_{n}$, and of (2.6) with
$\frac{1}{\lambda}\theta_{n}$, and taking into account (2.7) and (2.8), we get
(2.9)
$\displaystyle\mathbf{i}\|A^{1/2}u_{n}\|^{2}-\frac{1}{\lambda}\left\langle
v_{n},Au_{n}\right\rangle=o(1),$ (2.10)
$\displaystyle\mathbf{i}\|v_{n}\|^{2}+\frac{1}{\lambda}\left\langle
Au_{n},v_{n}\right\rangle-\frac{1}{\lambda}\left\langle
A^{\alpha}\theta_{n},v_{n}\right\rangle=o(1),$ (2.11)
$\displaystyle\frac{1}{\lambda}\left\langle
A^{\alpha}v_{n},\theta_{n}\right\rangle=o(1).$
Then, by combining (2.9)-(2.11) and using that $\|U_{n}\|=1$, it yields
(2.12) $\|A^{1/2}u_{n}\|=\frac{1}{2}+o(1),\;\;\;\;\|v_{n}\|=\frac{1}{2}+o(1).$
Now, since in the region $S_{3}$ we have $1-\alpha<\frac{1}{2}$ and
$\frac{3}{2}-2\alpha+\beta<\frac{1}{2}$, using the boundedness of
$\|A^{1/2}u_{n}\|$ we get, by interpolation,
(2.13) $\|A^{1-\alpha}u_{n}\|=O(1),\;\;\;\;\|A^{1-2\alpha+\beta}u_{n}\|=O(1).$
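The two exponent bounds invoked for this interpolation step follow directly from $\beta<2\alpha-1$; a quick numerical confirmation (our own check, sampling the unit square) reads:

```python
# Our own numerical confirmation: beta < 2*alpha - 1 (the region S_3)
# forces 1 - alpha < 1/2 and 3/2 - 2*alpha + beta < 1/2.
import random

random.seed(1)
hits = 0
for _ in range(10_000):
    alpha, beta = random.random(), random.random()
    if beta < 2*alpha - 1:              # (alpha, beta) lies in S_3
        assert 1 - alpha < 0.5
        assert 1.5 - 2*alpha + beta < 0.5
        hits += 1
assert hits > 0                         # S_3 was actually sampled
```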
Then, replacing $v_{n}$ by $\mathbf{i}\lambda u_{n}$ in (2.5), which is
justified by (2.4) and interpolation, and taking the inner product of the
resulting equation with $\frac{1}{\lambda}A^{1-2\alpha+\beta}u_{n}$, we get
(2.14)
$-\lambda\|A^{\frac{1}{2}-\alpha+\frac{\beta}{2}}u_{n}\|^{2}+\frac{1}{\lambda}\|A^{1-\alpha+\frac{\beta}{2}}u_{n}\|^{2}-\frac{1}{\lambda}\left\langle
A^{\alpha}\theta_{n},A^{1-2\alpha+\beta}u_{n}\right\rangle=o(1).$
Taking the inner product of (2.6) with $\frac{1}{\lambda}A^{1-\alpha}u_{n}$,
we get
(2.15)
$\mathbf{i}\left\langle\theta_{n},A^{1-\alpha}u_{n}\right\rangle+\frac{1}{\lambda}\left\langle
A^{1/2}v_{n},A^{1/2}u_{n}\right\rangle+\frac{1}{\lambda}\left\langle
A^{\alpha}\theta_{n},A^{1-2\alpha+\beta}u_{n}\right\rangle=o(1).$
By (2.8) and (2.13), the first term in (2.15) converges to zero. Moreover,
using (2.4), we can replace $\frac{1}{\lambda}A^{1/2}v_{n}$ in the second term
of (2.15) by $\mathbf{i}A^{1/2}u_{n}$. Consequently, the sum of (2.14) and
(2.15) yields
(2.16)
$-\lambda\|A^{\frac{1}{2}-\alpha+\frac{\beta}{2}}u_{n}\|^{2}+\frac{1}{\lambda}\|A^{1-\alpha+\frac{\beta}{2}}u_{n}\|^{2}+\mathbf{i}\|A^{\frac{1}{2}}u_{n}\|^{2}=o(1).$
Hence, $\|A^{\frac{1}{2}}u_{n}\|^{2}=o(1)$, which contradicts the first
estimate in (2.12).
The same approach applied to (2.3) leads to the same conclusion without any
difficulty. ∎
###### Lemma 2.4.
In the region $S_{3}$,
$\|\left(\mathbf{i}s-\mathcal{A}_{\alpha,\beta}\right)^{-1}\|=O\,(1),\;\;\,s\rightarrow\infty.$
###### Proof.
It is a direct consequence of Theorem 2.3 in [5], since
$\limsup\limits_{\lambda\in\mathbb{R},\,\lambda\rightarrow\infty}|\lambda|^{\frac{\beta}{\alpha}}\|\left(\mathbf{i}\lambda-\mathcal{A}_{\alpha,\beta}\right)^{-1}\|<\infty$.
∎
###### Lemma 2.5.
In the region $S_{3}$,
$\|\left(\mathbf{i}s-\mathcal{A}_{\alpha,\beta}\right)^{-1}\|=O\,(|s|^{-1}),\;\;\,s\rightarrow
0.$
###### Proof.
By contradiction, suppose that $\limsup\limits_{s\in\mathbb{R},\,s\rightarrow
0}\|s\left(\mathbf{i}sI-\mathcal{A}_{\alpha,\beta}\right)^{-1}\|=\infty$.
Putting $s=\frac{1}{w}$, this is equivalent to
$\limsup\limits_{w\in\mathbb{R},\,|w|\rightarrow\infty}\|w^{-1}\left(\mathbf{i}w^{-1}I-\mathcal{A}_{\alpha,\beta}\right)^{-1}\|=\infty$.
Then, there exists a sequence $(w_{n})$ of real numbers with
$|w_{n}|\rightarrow\infty$, and a sequence
$(U_{n})\subset\mathcal{D}(\mathcal{A}_{\alpha,\beta})$, with $\|U_{n}\|=1$
for all $n$ such that
(2.17)
$\lim_{n\rightarrow\infty}|w_{n}|\|\left(\mathbf{i}w_{n}^{-1}I-\mathcal{A}_{\alpha,\beta}\right)U_{n}\|=0.$
Setting $U_{n}=(u_{n},v_{n},\theta_{n})$, then (2.17) is rewritten explicitly
as follows
(2.18)
$\displaystyle\mathbf{i}w_{n}^{-1}|w_{n}|A^{1/2}u_{n}-|w_{n}|A^{1/2}v_{n}=o(1),\;\;\;\text{in}\;H,$
(2.19)
$\displaystyle\mathbf{i}w_{n}^{-1}|w_{n}|v_{n}+|w_{n}|Au_{n}-|w_{n}|A^{\alpha}\theta_{n}=o(1),\;\;\;\text{in}\;H,$
(2.20)
$\displaystyle\mathbf{i}w_{n}^{-1}|w_{n}|\theta_{n}+|w_{n}|A^{\alpha}v_{n}+|w_{n}|A^{\beta}\theta_{n}=o(1),\;\;\;\text{in}\;H.$
Since
$Re\left(\left\langle|w_{n}|\left(\mathbf{i}w_{n}^{-1}I-\mathcal{A}_{\alpha,\beta}\right)U_{n},U_{n}\right\rangle_{\mathcal{H}}\right)=-|w_{n}|\|A^{\beta/2}\theta_{n}\|^{2}$,
it follows that
(2.21) $\lim_{n\rightarrow\infty}|w_{n}|^{1/2}\|A^{\beta/2}\theta_{n}\|=0,$
and in particular
(2.22) $\lim_{n\rightarrow\infty}\|\theta_{n}\|=0.$
Then, applying $|w_{n}|^{-1}A^{-\alpha}$ to (2.20) and taking into account
(2.22), we get
(2.23) $\lim_{n\rightarrow\infty}\|v_{n}\|=0.$
Now, taking inner product of (2.18) with $A^{1/2}u_{n}$, (2.19) with $v_{n}$
and (2.20) with $\theta_{n}$, taking into account (2.21), (2.22) and (2.23),
we have
(2.24)
$\displaystyle\mathbf{i}w_{n}^{-1}|w_{n}|\|A^{1/2}u_{n}\|^{2}-|w_{n}|\left\langle
v_{n},Au_{n}\right\rangle=o(1),$ (2.25) $\displaystyle|w_{n}|\left\langle
Au_{n},v_{n}\right\rangle-|w_{n}|\left\langle
A^{\alpha}\theta_{n},v_{n}\right\rangle=o(1),$ (2.26)
$\displaystyle|w_{n}|\left\langle
A^{\alpha}\theta_{n},v_{n}\right\rangle=o(1).$
Combining (2.24)–(2.26) yields
(2.27) $\lim_{n\rightarrow\infty}\|A^{1/2}u_{n}\|=0.$
The promised contradiction follows from (2.22), (2.23) and (2.27). Thus, the
proof of Lemma 2.5 is completed. ∎
This completes the proof of the theorem. ∎
###### Remark 2.6.
In $S_{3}$, the semigroup $e^{t\mathcal{A}_{\alpha,\beta}}$ is of Gevrey class
$\delta>\frac{\alpha}{\beta}$ and in particular, it is infinitely
differentiable [5]. Thus, for $z\in\mathrm{Ran}(\mathcal{A}_{\alpha,\beta})$, we
have not only polynomial stability but also instantaneous smoothness.
## 3\. Application
As an application of Theorem 2.2, we consider the following thermoplate system:
$\left\\{\begin{array}[]{lll}\displaystyle
u_{tt}+\Delta^{2}u-(-\Delta)^{\alpha}w=0,\Omega\times(0,+\infty),\\\
w_{t}+(-\Delta)^{\alpha}u_{t}-\Delta w=0,\,\Omega\times(0,+\infty),\\\
u=\Delta u=w=0,\,\Gamma\times(0,+\infty),\\\
u(x,0)=u_{0}(x),u_{t}(x,0)=u_{1}(x),w(x,0)=w_{0}(x),\,\Omega,\end{array}\right.$
where $\Omega$ is a bounded domain in $\mathbb{R}^{n}$ with smooth boundary
$\Gamma$, $\beta=\frac{1}{2}$ and $3/4<\alpha\leq 1.$
Here, $H=L^{2}(\Omega),A=\Delta^{2}$ with $\mathcal{D}(A)=\left\\{u\in
H^{2}(\Omega)\cap H^{1}_{0}(\Omega);\,\Delta u=0\;\text{on}\;\Gamma\right\\}$.
Then, according to Theorem 2.2 the corresponding semigroup satisfies the
estimate (2.1) for all $\alpha\in(3/4,1].$
## References
* [1] F. Ammar-Khodja, A. Bader and A. Benabdallah, Dynamic stabilization of systems via decoupling techniques, ESAIM Control Optim. Calc. Var., 4 (1999), 577–593.
* [2] F. V. Atkinson, H. Langer, R. Mennicken and A. A. Shkalikov, The essential spectrum of some matrix operators. Math. Nachr., 167 (1994), 5–20.
* [3] C. J. K. Batty, R. Chill and Y. Tomilov, Fine scales of decays of operator semigroups, J. Eur. Math. Soc., 18 (2016), 853–929.
* [4] J. Hao and Z. Liu, Stability of an abstract system of coupled hyperbolic and parabolic equations, Z. Angew. Math. Phys., 64 (2013), 1145–1159.
* [5] J. Hao, Z. Liu and J. Yong, Regularity analysis for an abstract system of coupled hyperbolic and parabolic equations, Journal of Differential Equations, 259 (2015), 4763-4798.
* [6] G. Lumer and R. S. Phillips, Dissipative operators in a Banach space, Pacific Journal of Mathematics, 11 (1961), 679-698.
# A Note on the Asymptotic Expansion of Matrix Coefficients over $p$-adic
Fields
Zahi Hazan
###### Abstract
In this short note, presented as a “community service” and growing out of the
author’s PhD research, we draw the relation between Casselman’s theorem
[Cas93] on the asymptotic behavior of matrix coefficients over $p$-adic
fields and its expression as a finite sum of finite functions.
## 1 Introduction
Let $k$ be a non-Archimedean locally compact field with $\mathcal{O}$, its
ring of integers, and $\mathcal{P}$, the maximal ideal in $\mathcal{O}$.
Denote a uniformizer of $\mathcal{P}$ by $\varpi$, and the cardinality of the
residue field by $q$. Let $G$ be the group of $k$-rational points of a
reductive algebraic group defined over $k$. Let $(\pi,V)$ be a (complex)
smooth, admissible, irreducible representation of $G$. For a parabolic
subgroup $P$ of $G$ with Levi decomposition $P=MN$, we denote by $V(N)$ the
subspace of $V$ generated by
$\left\\{\pi(n)v-v|\ n\in N,\ v\in V\right\\}.$ (1)
We also denote $V_{N}=V/V(N)$. This is the space of a smooth representation
$\left(\pi_{N},V_{N}\right)$ of $M$, called the Jacquet module of
$\left(\pi,V\right)$ along $N$.
Let $P_{0}=M_{0}N_{0}$ be a minimal parabolic subgroup. Let
$A_{M}=Z\left(M\right)$ be the center of $M$. For simplicity we denote
$A=A_{M_{0}}$. Let $\Sigma$ be the set of all roots corresponding to the pair
$\left(G,A\right)$, i.e., the non-trivial eigencharacters of the adjoint
action of $A$ on the Lie algebra $\mathfrak{g}$ of $G$. Let $\Sigma^{+}$ be
the subset of positive roots determined by $P_{0}$, so that
$\mathfrak{n}=\bigoplus_{\alpha\in\Sigma^{+}}\mathfrak{g}_{\alpha}$, where
$\mathfrak{g}_{\alpha}$ is the eigenspace of $\alpha$, and $\mathfrak{n}$ is
the Lie algebra of $N_{0}$. Let $\Delta=\Delta_{P_{0}}$ be the basis of
$\Sigma^{+}$, so that every root in $\Sigma^{+}$ is a sum of roots in
$\Delta$.
We have the following definitions and theorem from [Cas93, Corollary 4.3.4].
For each $\Theta\subseteq\Delta$ and $0<\varepsilon\leq 1$, we define
$A_{\Theta}^{-}\left(\varepsilon\right)=\left\\{a\in
A\left|\begin{array}[]{l}\left|\alpha\left(a\right)\right|\leq\varepsilon\
\forall\alpha\in\Delta\backslash\Theta\\\
\varepsilon<\left|\alpha\left(a\right)\right|\leq 1\
\forall\alpha\in\Theta\end{array}\right.\right\\}.$ (2)
This is a subset of
$A^{-}=\left\\{a\in A|\ \left|\alpha(a)\right|\leq 1\
\forall\alpha\in\Delta\right\\}.$ (3)
For any $0<\varepsilon\leq 1$, $A^{-}$ is the disjoint union of
$A_{\Theta}^{-}\left(\varepsilon\right)$ as $\Theta$ ranges over all subsets
of $\Delta$. For each $\Theta\subseteq\Delta$ we denote by
$P_{\Theta}=M_{\Theta}N_{\Theta}$ the standard parabolic subgroup
corresponding to $\Theta$. Let $N^{-}_{\Theta}$ be the unipotent radical
opposite to $N_{\Theta}$. We define a canonical non-degenerate pairing of
$V_{N_{\Theta}}$ with $\tilde{V}_{N^{-}_{\Theta}}$ according to the formula
$\langle u_{\Theta},\tilde{u}_{\Theta}\rangle_{N_{\Theta}}=\langle
v,\tilde{v}\rangle,$ (4)
where $v,\tilde{v}$ are any two canonical lifts of
$u_{\Theta},\tilde{u}_{\Theta}$.
###### Theorem A (Casselman).
Let $v\in V$ and $\tilde{v}\in\tilde{V}$ be given. For
$\Theta\subseteq\Delta$, let $u_{\Theta},\ \tilde{u}_{\Theta}$ be their images
in $V_{N_{\Theta}},\ \tilde{V}_{N^{-}_{\Theta}}$. There exists $\varepsilon>0$
such that for any $\Theta\subseteq\Delta$ and $a\in
A_{\Theta}^{-}(\varepsilon)$ one has
$\left\langle\pi(a)v,\tilde{v}\right\rangle=\left\langle\pi_{N_{\Theta}}(a)u_{\Theta},\tilde{u}_{\Theta}\right\rangle_{N_{\Theta}}.$
(5)
For a subgroup $H$ of $G$, we say that a function is an $H$-finite function
(or simply finite) if the space spanned by its right $H$-translations is
finite dimensional. By Jacquet and Langlands [JL06, Lemma 8.1] we have an
explicit basis for all $H$-finite functions when $H$ is a locally compact
abelian group.
###### Theorem B (Jacquet and Langlands).
Let $Z$ be a locally compact abelian group of the form
$Z=K\times\mathbb{Z}^{r}\times\mathbb{R}^{n}$ (6)
where $K$ is a compact group. For $1\leq i\leq r+n$ let
$\xi_{i}:\left(z_{0},x_{1},\ldots,x_{r+n}\right)\to x_{i}$ be the projection
map. Then, for any sequence of non-negative integers $p_{1},\ldots,p_{r+n}$
and any quasi-character $\chi$ of $Z$, the function
$\chi\prod_{i=1}^{r+n}\xi_{i}^{p_{i}}$ is continuous and finite. These
functions form a basis of the space of continuous finite functions on $Z$.
Our goal is to show how Theorems A and B give an asymptotic expansion of
matrix coefficients in terms of a finite linear combination of $A_{M}$-finite
functions. In detail,
###### Theorem 1.
Let $v\in V$ and $\tilde{v}\in\tilde{V}$. There exist $\varepsilon>0$ and
finite sets of vectors that depend on $\\{\pi,v,\tilde{v}\\}$,
$\underline{p^{\prime}}=\left(p^{\prime}_{1},\ldots,p^{\prime}_{r}\right)\in\mathbb{R}^{r},\
\underline{p}=\left(p_{1},\ldots,p_{r}\right)\in\mathbb{Z}^{r}_{\geq 0}$, and
$\underline{\chi}=\left(\chi_{1},\ldots,\chi_{r}\right)$ where for all $1\leq
i\leq r$, $\chi_{i}:k^{\times}\to\mathbb{C}^{\times}$ are unitary characters,
such that for all $a\in A^{-}$, one has
$\left\langle\pi(a)v,\tilde{v}\right\rangle=\sum_{\Theta\subseteq\Delta,\underline{p^{\prime}},\underline{p},\underline{\chi}}\chi_{A_{\Theta}^{-}\left(\varepsilon\right)}(a)\alpha_{\underline{p^{\prime}},\underline{p},\underline{\chi}}\prod_{i=1}^{r_{\Theta}}\chi_{i}(a_{i})\left|a_{i}\right|^{p^{\prime}_{i}}\log_{q}^{p_{i}}\left|a_{i}\right|,$
(7)
where $\chi_{A_{\Theta}^{-}\left(\varepsilon\right)}(a)$ is the indicator
function of $A_{\Theta}^{-}\left(\varepsilon\right)$, $r_{\Theta}$ is such
that $A_{M_{\Theta}}\cong(k^{\times})^{r_{\Theta}}$ by the map
$a\mapsto\left(a_{1},\ldots,a_{r_{\Theta}}\right)$, and
$\alpha_{\underline{p^{\prime}},\underline{p},\underline{\chi}}\in\mathbb{C}$
are such that
$\alpha_{\underline{p^{\prime}},\underline{p},\underline{\chi}}=0$ for all but
finitely many $\underline{p^{\prime}},\underline{p},\underline{\chi}$.
## 2 Proof of Theorem 1
Let $P=MN$ be a standard parabolic subgroup. We deduce from Theorem B that in
our case we have the following.
###### Corollary 2.
Let $f:A_{M}\to\mathbb{R}$ be a continuous finite function. Let $r$ be such
that $A_{M}\cong(k^{\times})^{r}$ by the map
$a\mapsto\left(a_{1},\ldots,a_{r}\right)$. Then $f$ lies in the span of the
functions
$\prod_{i=1}^{r}\chi_{i}(a_{i})\left|a_{i}\right|^{p^{\prime}_{i}}\log_{q}^{p_{i}}\left|a_{i}\right|,$
(8)
where $\left(p^{\prime}_{1},\ldots,p^{\prime}_{r}\right)\in\mathbb{R}^{r},\
\left(p_{1},\ldots,p_{r}\right)\in\mathbb{Z}^{r}_{\geq 0}$, and for all $1\leq
i\leq r$, $\chi_{i}:k^{\times}\to\mathbb{C}^{\times}$ are unitary characters.
###### Proof.
We have $k^{\times}\cong\mathcal{O}^{\times}\times\mathbb{Z}$ by the map
$x\mapsto\left(\frac{x}{\left|x\right|},\log_{q}\left|x\right|\right)$. A
character of $k^{\times}$ is of the form
$\chi^{\prime}(\cdot)\left|\cdot\right|^{s}$, where $\chi^{\prime}$ is a
unitary character and $0\neq s\in\mathbb{C}$ (we can assume $s\in\mathbb{R}$
by attaching the imaginary part to $\chi^{\prime}$). Let
$(a_{1},\ldots,a_{r})\in(k^{\times})^{r}$ be the image of $a\in A_{M}$. Let
$\chi$ be a quasi character of $A_{M}$. Then
$\chi(a)=\prod_{i=1}^{r}\chi_{i}(a_{i})\left|a_{i}\right|^{p^{\prime}_{i}}$,
where $\chi_{i}$ is unitary and $p^{\prime}_{i}\in\mathbb{R}$ for all $1\leq
i\leq r$. Applying Theorem B gives that the space of continuous finite functions on
$A_{M}$ is spanned by $\chi\prod_{i=1}^{r}\xi_{i}^{p_{i}}$, where $\chi$ is a
quasi character of $A_{M}$ and $\xi_{i}(a)=\log_{q}\left|a_{i}\right|$.
Therefore, this space is spanned by
$\prod_{i=1}^{r}\chi_{i}(a_{i})\left|a_{i}\right|^{p^{\prime}_{i}}\log_{q}^{p_{i}}\left|a_{i}\right|$.
∎
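As a concrete illustration (added here, not in the original), take the simplest case $r=1$, so $A_{M}\cong k^{\times}$: writing $a=\varpi^{m}u$ with $u\in\mathcal{O}^{\times}$, the spanning functions of Corollary 2 evaluate as

```latex
|a| = q^{-m}, \qquad \log_q|a| = -m, \qquad
\chi_1(a)\,|a|^{p'_1}\,\log_q^{p_1}|a| \;=\; \chi_1(a)\, q^{-m p'_1}\,(-m)^{p_1}.
```

so on each coset $\varpi^{m}\mathcal{O}^{\times}$ the finite functions are exponential-polynomial in $m$, twisted by a unitary character.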
As mentioned above, $A^{-}$ is the disjoint union of the
$A_{\Theta}^{-}\left(\varepsilon\right)$ as $\Theta$ ranges over all subsets
of $\Delta$. Thus, in order to deduce Theorem 1 from Theorem A and Corollary 2, it remains to show that
for each $\Theta\subseteq\Delta$, the function
$x\mapsto\left\langle\pi_{N}(x)u,\tilde{u}\right\rangle_{N}$ is an
$A_{M}$-finite function, where $P=MN$ is the standard parabolic subgroup
corresponding to $\Theta$. Namely,
###### Proposition 3.
Let $u\in V_{N}$, $\tilde{u}\in\tilde{V}_{N^{-}}$. For all $a\in A_{M}$ and
$m\in M$, there exist $\\{b_{i}\\}_{1\leq i\leq\ell}\subseteq A_{M}$ and
$\\{c_{i}(a)\\}_{1\leq i\leq\ell}\subseteq\mathbb{C}$, such that
$\left\langle\pi_{N}(ma)u,\tilde{u}\right\rangle_{N}=\sum_{i=1}^{\ell}c_{i}(a)\left\langle\pi_{N}(mb_{i})u,\tilde{u}\right\rangle_{N}.$
(9)
Before proving Proposition 3 we need the following lemma.
###### Lemma 4.
Let $R$ be a group with center $Z\left(R\right)\cong K\times\mathbb{Z}^{r}$,
where $K$ is a compact group. Denote the standard basis of $\mathbb{Z}^{r}$ by
$\left\\{e_{1},\ldots,e_{r}\right\\}$. Let $\left(H,\sigma\right)$ be a
(complex) smooth $R$-module of finite length.
1. (i.)
For all finite dimensional spaces $W\subseteq H$ and all $1\leq j\leq r$ there
exists a finite dimensional $Z(R)$-invariant space $W_{j}\subseteq H$, such
that
$\sigma(e_{j})W\subseteq W+W_{j}.$ (10)
2. (ii.)
For all finite dimensional spaces $W\subseteq H$, there exists a finite
dimensional $Z(R)$-invariant space $W^{\prime}\subseteq H$, such that
$\sigma(\mathbb{Z}^{r})W\subseteq W+W^{\prime}.$ (11)
3. (iii.)
Let $v\in H$. The $Z\left(R\right)$-module generated by $v$ is finite
dimensional.
###### Proof.
We begin by proving part (i.). It is sufficient to prove this part for a one
dimensional space $W$ as the general case follows directly. Hence, we assume
that $W$ is spanned as an $R$-module by $v\in H$. We prove this part by
induction on the length of $H$. First, assume the length is $1$, i.e., $H$ is
irreducible. Then, by Schur’s lemma, $Z\left(R\right)$ acts on $H$ as a
scalar. Thus, the $Z\left(R\right)$-module generated by $v$ is of dimension
$1$ and all the parts of the lemma follow. Now, assume that the assertion is
true for modules of length $d-1$. Let $H$ be an $R$-module of length $d$. That
is, there exists a sequence of $R$-modules
$0=H_{0}\subsetneq H_{1}\subsetneq H_{2}\subsetneq\ldots\subsetneq H_{d}=H$
(12)
such that $H_{i+1}/H_{i}$ is irreducible for all $0\leq i<d$.
By the fact that $H_{d}/H_{d-1}$ is irreducible, and by Schur’s lemma, for all
$1\leq j\leq r$ there exists $\alpha_{j}\in\mathbb{C}$, such that
$\sigma\left(e_{j}\right)\left(v+H_{d-1}\right)=\alpha_{j}v+H_{d-1}.$ (13)
In particular,
$\sigma\left(e_{j}\right)v=\alpha_{j}v+h_{j},$ (14)
where $h_{j}\in H_{d-1}$.
Let $w=\sum_{i=1}^{\ell}c_{i}\sigma(g_{i})v\in W$, where $c_{i}\in\mathbb{C}$
and $g_{i}\in R$ for all $1\leq i\leq\ell$. Then, by eq. 14
$\sigma\left(e_{j}\right)w=\sigma\left(e_{j}\right)\left(\sum_{i=1}^{\ell}c_{i}\sigma(g_{i})v\right)=\sum_{i=1}^{\ell}c_{i}\sigma(g_{i})\sigma\left(e_{j}\right)v=\sum_{i=1}^{\ell}c_{i}\sigma(g_{i})\left(\alpha_{j}v+h_{j}\right).$
(15)
Denote by $U_{j}$ the $R$-module spanned by $h_{j}$. In this notation, eq. 15
gives
$\sigma(e_{j})w\in\mathbb{C}w+U_{j}.$ (16)
We have $U_{j}\subseteq H_{d-1}$. Thus, by the induction hypothesis, there
exists a finite dimensional $Z(R)$-invariant space $W^{\prime}_{j}\subseteq
H_{d-1}$, such that
$\sigma(e_{j})U_{j}\subseteq U_{j}+W^{\prime}_{j}.$ (17)
We take $W_{j}=U_{j}+W^{\prime}_{j}$. Thus, $U_{j}\subseteq W_{j}$, so eq. 16
gives $\sigma(e_{j})w\in\mathbb{C}w+W_{j}$, and eq. 17 gives
$\sigma(e_{j})W_{j}=\sigma(e_{j})\left(U_{j}+W^{\prime}_{j}\right)\subseteq
W_{j}+W^{\prime}_{j}=W_{j}.$ (18)
Next, we prove part (ii.). Let $1\leq j\leq r$. First,
$\sigma(0e_{j})W=W.$ (19)
Let $0\neq n\in\mathbb{Z}$. By eq. 10 we have
$\displaystyle\sigma(ne_{j})W$
$\displaystyle\subseteq\sigma((n-\mathrm{sgn}(n)1)e_{j})\left(W+W_{j}\right)$
(20) $\displaystyle=\sigma((n-\mathrm{sgn}(n)1)e_{j})W+W_{j}$ (21)
$\displaystyle=\ldots=\sigma(e_{j})\left(W+W_{j}\right)\subseteq\sigma(e_{j})W+W_{j},$
(22)
where eq. 21 is due to the $Z(R)$-invariance of $W_{j}$, and eq. 22
follows by repeating eqs. 20 and 21 $n$ times. Thus, by taking
$W^{\prime}=\sum_{j=1}^{r}W_{j}$ the statement readily follows.
In order to deduce part (iii.) we first note that $\sigma$ is smooth, so for
all $v\in H$, the space $\mathrm{sp}\\{\sigma(x)v|\ x\in K\\}$ is of finite
dimension. Hence, for a finite dimensional space $W\subseteq H$, the space
$\sigma(K)W=\mathrm{sp}\\{\sigma(x)w|\ x\in K,w\in W\\}$ (23)
is also of finite dimension. It follows that $\sigma(K)\mathbb{C}v$ is finite
dimensional. Therefore, by part (ii.) there exists a finite-dimensional
$Z(R)$-invariant space $W^{\prime}\subseteq H$ such that
$\mathrm{sp}_{\mathbb{C}}\\{\sigma(z)v|\ z\in
Z(R)\\}=\sigma(\mathbb{Z}^{r})\left(\sigma(K)\mathbb{C}v\right)\subseteq\sigma(K)\mathbb{C}v+W^{\prime}.$
(24)
∎
We are now ready to prove Proposition 3.
###### Proof of Proposition 3.
The Jacquet module is a smooth $G$-module of finite length [Cas93, Theorems
3.3.1 and 6.3.10]. Applying part (iii.) of Lemma 4 for $R=M$ (with
$Z(R)=A_{M}$), $H=V_{N}$, $\sigma=\pi_{N}$, and $v=u\in V_{N}$, gives that
$V:=\mathrm{sp}\left\\{\pi_{N}\left(a\right)u|\ a\in A_{M}\right\\}$ is of finite
dimension. Let
$\left\\{\pi_{N}\left(b_{1}\right)u,\ldots,\pi_{N}\left(b_{\ell}\right)u\right\\}$
be a basis of $V$. Then,
$\pi_{N}\left(a\right)u=\sum_{i=1}^{\ell}c_{i}(a)\pi_{N}\left(b_{i}\right)u.$
(25)
Therefore,
$\left\langle\pi_{N}\left(ma\right)u,\tilde{u}\right\rangle_{N}=\sum_{i=1}^{\ell}c_{i}\left(a\right)\left\langle\pi_{N}\left(mb_{i}\right)u,\tilde{u}\right\rangle_{N}.$
(26)
∎
## Acknowledgment
We would like to express our appreciation to Elad Zelingher for useful
discussions and his valuable comments on earlier versions of this manuscript.
We also thank David Soudry for his important suggestions.
## References
* [Cas93] William Casselman. Introduction to the theory of admissible representations of $p$-adic reductive groups. Unpublished notes distributed by P. Sally, draft, May 1993.
* [JL06] Hervé Jacquet and Robert P. Langlands. Automorphic Forms on GL(2): Part 1, volume 114. Springer, 2006.
School of Mathematical Sciences, Tel Aviv University, Ramat Aviv, Tel Aviv
6997801, Israel
E-mail address<EMAIL_ADDRESS>
# Personalized Reward Learning with
Interaction-Grounded Learning (IGL)
Jessica Maghakian (Stony Brook University), Paul Mineiro (Microsoft Research NYC), Kishan Panaganti (Texas A&M University), Mark Rucker (University of Virginia), Akanksha Saran (Microsoft Research NYC), Cheng Tan (Microsoft Research NYC)
###### Abstract
In an era of countless content offerings, recommender systems alleviate
information overload by providing users with personalized content suggestions.
Due to the scarcity of explicit user feedback, modern recommender systems
typically optimize for the same fixed combination of implicit feedback signals
across all users. However, this approach disregards a growing body of work
highlighting that (i) implicit signals can be used by users in diverse ways,
signaling anything from satisfaction to active dislike, and (ii) different
users communicate preferences in different ways. We propose applying the
recent Interaction Grounded Learning (IGL) paradigm to address the challenge
of learning representations of diverse user communication modalities. Rather
than taking a fixed, human-designed reward function, IGL is able to learn
personalized reward functions for different users and then optimize directly
for the latent user satisfaction. We demonstrate the success of IGL with
experiments using simulations as well as with real-world production traces.
## 1 Introduction
From shopping to reading the news, modern Internet users have access to an
overwhelming amount of content and choices from online services. Recommender
systems offer a way to improve user experience and decrease information
overload by providing a customized selection of content. A key challenge for
recommender systems is the rarity of explicit user feedback, such as ratings
or likes/dislikes (Grčar et al., 2005). Rather than explicit feedback,
practitioners typically use more readily available implicit signals, such as
clicks (Hu et al., 2008), webpage dwell time (Yi et al., 2014), or inter-
arrival times (Wu et al., 2017) as a proxy signal for user satisfaction. These
implicit signals are used as the reward objective in recommender systems, with
the popular Click-Through Rate (CTR) metric as the gold standard for the field
(Silveira et al., 2019). However, directly using implicit signals as the
reward function presents several issues.
_Implicit signals do not directly map to user satisfaction._ Although clicks
are routinely equated with user satisfaction, there are examples of
unsatisfied users interacting with content via clicks. Clickbait exploits
cognitive biases such as caption bias (Hofmann et al., 2012) or the curiosity
gap (Scott, 2021) so that low quality content attracts more clicks. Direct
optimization of the CTR degrades user experience by promoting clickbait items
(Wang et al., 2021). Recent work shows that users will even click on content
that they know a priori they will dislike. In a study of online news reading,
Lu et al. (2018a) discovered that 15% of the time, users would click on
articles that they strongly disliked. Similarly, although longer webpage dwell
times are associated with satisfied users, a study by Kim et al. (2014) found
that dwell time is also significantly impacted by page topic, readability and
content length.
_Different users communicate in different ways._ Demographic background is
known to have an impact on the ways in which users engage with recommender
systems. A study by Beel et al. (2013) shows that older users have CTR more
than 3x higher than their younger counterparts. Gender also has an impact on
interactions, e.g. men are more likely to leave dislikes on YouTube videos
than women (Khan, 2017). At the same time, a growing body of work shows that
recommender systems do not provide consistent performance across demographic
subgroups. For example, multiple studies on ML fairness in recommender systems
show that women on average receive less accurate recommendations compared to
men (Ekstrand et al., 2018; Mansoury et al., 2020). Current systems are also
unfair across different age brackets, with statistically significant
recommendation utility degradation as the age of the user increases (Neophytou
et al., 2022). The work of Neophytou et al. identifies usage features as the
most predictive of mean recommender utility, hinting that the inconsistent
performance in recommendation algorithms across subgroups arises from the
differences in how users interact with the recommender system.
These challenges motivate the need for personalized reward functions. However,
extensively modeling the ways in which implicit signals are used or how
demographics impact interaction style is costly and inefficient. Current
state-of-the-art systems utilize reward functions that are manually engineered
combinations of implicit signals, typically refined through laborious trial
and error methods. Yet as recommender systems and their users evolve, so do
the ways in which users implicitly communicate preferences. Any extensive
models or hand tuned reward functions developed now could easily become
obsolete within a few years time.
To this end, we propose Interaction Grounded Learning (IGL) Xie et al. (2021)
for personalized reward learning (IGL-P). IGL is a learning paradigm where a
learner optimizes for unobservable rewards by interacting with the environment
and associating observable feedback with the true latent reward. Prior IGL
approaches assume the feedback either depends on the reward alone (Xie et al.,
2021) or on the reward and action (Xie et al., 2022). These methods are
unable to disambiguate personalized feedback which depends on the context.
Other approaches, such as reinforcement learning and traditional contextual
bandits, suffer from the choice of reward function. However, our proposed
personalized IGL, IGL-P, resolves the two challenges above while making minimal
assumptions about the value of observed user feedback. Our new approach is
able to incorporate both explicit and implicit signals, leverage ambiguous
user feedback and adapt to the different ways in which users interact with the
system.
Our Contributions: We present IGL-P, the first IGL strategy for context-
dependent feedback, the first use of inverse kinematics as an IGL objective,
and the first IGL strategy for more than two latent states. Our proposed
approach provides an alternative to agent learning methods that require hand-
crafted reward functions. Using simulations and real production data, we
demonstrate that IGL-P is able to learn personalized rewards when applied to
the domain of online recommender systems, which require at least 3 reward
states.
## 2 Problem Setting
### 2.1 Contextual Bandits
The contextual bandit (Auer et al., 2002; Langford & Zhang, 2007) is a
statistical model of myopic decision making which is pervasively applied in
recommendation systems (Bouneffouf et al., 2020). IGL operates via reduction
to contextual bandits, hence, we briefly review contextual bandits here.
The contextual bandit problem proceeds over $T$ rounds. At each round
$t\in[T]$, the learner receives a context $x_{t}\in\mathcal{X}$ (the context
space), selects an action $a_{t}\in\mathcal{A}$ (the action space), and then
observes a reward $r_{t}(a_{t})$, where $r_{t}:\mathcal{A}\to[0,1]$ is the
underlying reward function. We assume that for each round $t$, conditioned on
$x_{t}$, $r_{t}$ is sampled from a distribution
$\mathbb{P}_{r_{t}}(\cdot\mid{}x_{t})$. A contextual bandit algorithm attempts
to minimize the cumulative regret
$\displaystyle\mathrm{\mathbf{Reg}}_{\mathsf{CB}}(T)\vcentcolon=\sum_{t=1}^{T}r_{t}(\pi^{\star}(x_{t}))-r_{t}(a_{t})$
(1)
relative to an optimal policy $\pi^{\star}$ over a policy class $\Pi$.
In general, both the contexts $x_{1},\ldots,x_{T}$ and the distributions
$\mathbb{P}_{r_{1}},\ldots,\mathbb{P}_{r_{T}}$ can be selected in an
arbitrary, potentially adaptive fashion based on the history. In the sequel we
will describe IGL in a stochastic environment, but the reduction induces a
nonstationary contextual bandit problem, and therefore the existence of
adversarial contextual bandit algorithms is relevant.
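The protocol above can be sketched concretely. Below is a minimal, hypothetical epsilon-greedy learner on a synthetic logistic-reward environment (the environment, features, and hyperparameters are invented for illustration, not taken from the paper); the accumulator tracks the expected per-round terms of the regret in Eq. 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, dim, T = 3, 5, 2000
theta = rng.normal(size=(n_actions, dim))      # hypothetical true per-action weights

def reward_probs(x):
    """Bernoulli reward probabilities r_t(a) for each action given context x."""
    return 1.0 / (1.0 + np.exp(-theta @ x))

# Epsilon-greedy with per-action regularized least-squares estimates.
A = [np.eye(dim) for _ in range(n_actions)]    # Gram matrices (identity = ridge prior)
b = [np.zeros(dim) for _ in range(n_actions)]
cum_regret = 0.0
for t in range(T):
    x = rng.normal(size=dim)                   # context x_t
    p = reward_probs(x)
    scores = np.array([np.linalg.solve(A[a], b[a]) @ x for a in range(n_actions)])
    a_t = rng.integers(n_actions) if rng.random() < 0.05 else int(np.argmax(scores))
    r_t = float(rng.random() < p[a_t])         # only the chosen action's reward is seen
    A[a_t] += np.outer(x, x)
    b[a_t] += r_t * x
    cum_regret += p.max() - p[a_t]             # expected-regret bookkeeping (Eq. 1)

print(cum_regret / T)                          # average per-round regret
```

Note the learner only ever observes the reward of the action it chose; the regret accumulator is computable here only because the simulator knows the full reward function.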
### 2.2 Interaction Grounded Learning
IGL is a problem setting in which the learner’s goal is to optimally interact
with the environment with no explicit reward to ground its policies. IGL
extends the contextual bandit framework by eliding the reward from the
learning algorithm and providing feedback instead (Xie et al., 2021). We
describe the stochastic setting where $(x_{t},r_{t},y_{t})\sim D$ triples are
sampled iid from an unknown distribution; the learner receives the context
$x_{t}\in\mathcal{X}$, selects an action $a_{t}\in\mathcal{A}$, and then
observes the feedback $y_{t}(a_{t})$, where $y_{t}:\mathcal{A}\to[0,1]$ is the
underlying feedback function. Note $r_{t}(a_{t})$ is never revealed to the
algorithm: nonetheless, the regret notion remains the same as Eq. 1. An
information-theoretic argument proves assumptions relating the feedback to the
underlying reward are necessary to succeed (Xie et al., 2022).
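The information structure of an IGL round can be made explicit with a toy simulator (the context, reward, and feedback distributions below are invented for illustration, not from the paper): the learner's view contains only $(x_t, a_t, y_t(a_t))$, while the regret term of Eq. 1 is bookkept on the hidden reward.

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions = 4

def igl_round(policy_probs):
    """One IGL round: the learner sees (x, a, y) but never the latent reward r."""
    x = int(rng.integers(3))                          # context, e.g. a user id
    r = (rng.random(n_actions) < 0.1).astype(int)     # latent per-action rewards, hidden
    a = int(rng.choice(n_actions, p=policy_probs))
    y = "click" if r[a] else "no-click"               # hypothetical feedback channel
    regret = int(r.max() - r[a])                      # Eq. 1 term, known only to the simulator
    return (x, a, y), regret                          # learner's view vs. bookkeeping

view, reg = igl_round(np.full(n_actions, 0.25))
assert reg in (0, 1)
```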
#### 2.2.1 Specialization to Recommendation
For specific application in the recommendation domain, we depart from prior
art in IGL (Xie et al., 2021; 2022) in two ways: first, in the assumed
relationship between feedback and underlying reward; and second, in the number
of latent reward states. For the remainder of the paper, we represent the
users as $x$ (i.e., as context), content items as action $a$, the user
satisfaction with recommended content as $r$, and the feedback signals in
response to the recommended content (user interactions with the system
interface) as $y$.
Figure 1: IGL in the recommender system setting. The learner observes the
context $x$, plays an action $a$, and then observes a feedback $y$ (that is
dependent on the latent reward $r$), but not $r$ itself.
##### Feedback Dependence Assumption
Xie et al. (2021) assumed full contextual independence of the feedback on the
context and chosen action, i.e. $y\perp x,a|r$. For recommender systems, this
implies that all users communicate preferences identically for all content. In
a subsequent paper, Xie et al. (2022) loosen the full conditional independence
by considering context conditional independence, i.e. $y\perp x|a,r$. For our
setting, this corresponds to the user feedback varying for combinations of
preference and content, but remaining consistent across all users. Neither of
these two assumptions are natural in the recommendation setting because
different users interact with recommender systems in different ways.(Beel et
al., 2013; Shin, 2020). In this work, we assume $y\perp a|x,r$, i.e., the
feedback $y$ is independent of the displayed content $a$ given the user $x$
and their disposition toward the displayed content $r$. Thus, we assume that
users may communicate in different ways, but a given user expresses
satisfaction, dissatisfaction and indifference to all content in the same way.
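The assumption $y\perp a|x,r$ can be made concrete with a toy generator (the users, rewards, and signal names below are invented for illustration): each user has a private mapping from latent reward to feedback signal, applied regardless of which item was shown.

```python
import numpy as np

rng = np.random.default_rng(2)
n_users = 4

# Hypothetical per-user feedback vocabularies: each user maps the latent reward
# r in {-1, 0, 1} to a private observable signal, independently of the item.
signals = ["scroll-past", "dwell", "click"]
feedback_map = {u: dict(zip([-1, 0, 1], rng.permutation(signals)))
                for u in range(n_users)}

def emit_feedback(x, a, r):
    # y depends only on (x, r): the same user with the same disposition emits
    # the same signal whatever item a was shown -- the y ⟂ a | x, r assumption.
    return feedback_map[x][r]

# Action-independence: changing the item never changes the signal.
assert emit_feedback(0, a=1, r=1) == emit_feedback(0, a=3, r=1)
```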
##### Number of Latent Reward States
Prior work demonstrates that a binary latent reward assumption, along with an
assumption that rewards are rare under a known reference policy, is
sufficient for IGL to succeed. Specifically, optimizing the contrast between a
learned policy and the oblivious uniform policy succeeds when
feedback is both context and action independent (Xie et al., 2021), and
optimizing the contrast between the learned policy and all constant-action
policies succeeds when the feedback is context independent (Xie et al., 2022).
Although the binary latent reward assumption (e.g., satisfied or dissatisfied)
appears reasonable for recommendation scenarios, it fails to account for user
indifference versus user dissatisfaction. This observation was first motivated
by our production data, where a 2 state IGL policy would sometimes maximize
feedback signals with obviously negative semantics. Assuming users ignore most
content most of the time (Nguyen et al., 2014), negative feedback can be as
difficult to elicit as positive feedback, and a 2 state IGL model is unable to
distinguish between these extremes. Hence, we posit a minimal latent state
model for recommender systems involves 3 states: (i) $r=1$, when users are
satisfied with the recommended content, (ii) $r=0$, when users are indifferent
or inattentive, and (iii) $r=-1$, when users are dissatisfied. We motivate the
3 state model with a simulated experiment in Section A.1.
## 3 Derivations
Prior approaches to IGL use contrastive learning objectives (Xie et al., 2021;
2022), but the novel feedback dependence assumption in the prior section
impedes this line of attack. Essentially, given arbitrary dependence upon $x$,
learning must operate on each example in isolation without requiring
comparison across examples. This motivates attempting to predict the current
action from the current context and the currently observed feedback, i.e.,
inverse kinematics.
##### Inverse Kinematics
Traditionally in robotics and computer animation, inverse kinematics is the
mathematical process used to recover the movement (action) of a robot/object
in the world from some other data such as the position and orientation of a
robot manipulator/video of a moving object (in our case, the other data is
context and feedback). We motivate our inverse kinematics strategy using exact
expectations. When acting according to any policy $P(a|x)$, we can imagine
trying to predict the action taken given the context and feedback; the
posterior distribution is
$\displaystyle P(a|y,x)$ $\displaystyle=\frac{P(a|x)P(y|a,x)}{P(y|x)}$
$\displaystyle\left(\text{Bayes rule}\right)$ (2)
$\displaystyle=P(a|x)\sum_{r}\frac{P(y|r,a,x)}{P(y|x)}P(r|a,x)$
$\displaystyle\left(\text{Total Probability}\right)$
$\displaystyle=P(a|x)\sum_{r}\frac{P(y|r,x)}{P(y|x)}P(r|a,x)$
$\displaystyle\left(y\perp a|x,r\right)$
$\displaystyle=P(a|x)\sum_{r}\frac{P(r|y,x)}{P(r|x)}P(r|a,x)$
$\displaystyle\left(\text{Bayes rule}\right)$
$\displaystyle=\sum_{r}P(r|y,x)\left(\frac{P(r|a,x)P(a|x)}{\sum_{a}P(r|a,x)P(a|x)}\right).$
$\displaystyle\left(\text{Total Probability}\right)$
We arrive at the inner product between a reward decoder term ($P(r|y,x)$) and
a reward predictor term ($P(r|a,x)$).
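The decomposition above can be checked numerically. The following sketch uses hypothetical toy conditional tables for a single fixed context (the variable names and sizes are ours) and verifies that the direct posterior matches the decoder–predictor inner product of Eq. 2:

```python
import numpy as np

rng = np.random.default_rng(0)
A, R, Y = 3, 3, 4  # sizes of action, latent reward, and feedback spaces (toy values)

# Hypothetical conditional tables for a single fixed context x.
P_a = rng.dirichlet(np.ones(A))              # P(a|x)
P_r_a = rng.dirichlet(np.ones(R), size=A)    # P(r|a,x), rows indexed by a
P_y_r = rng.dirichlet(np.ones(Y), size=R)    # P(y|r,x): feedback depends only on (r, x)

# Direct posterior: P(a|y,x) = P(a,y|x) / P(y|x).
P_ay = P_a[:, None] * (P_r_a @ P_y_r)        # P(a,y|x) = sum_r P(a|x)P(r|a,x)P(y|r,x)
post_direct = P_ay / P_ay.sum(axis=0, keepdims=True)

# Eq. 2 decomposition: sum_r P(r|y,x) * P(r|a,x)P(a|x) / sum_a P(r|a,x)P(a|x).
P_r = P_a @ P_r_a                                              # P(r|x)
P_r_given_y = (P_r[:, None] * P_y_r) / (P_r @ P_y_r)[None, :]  # reward decoder P(r|y,x)
P_a_given_r = (P_r_a * P_a[:, None]) / P_r[None, :]            # reward predictor term
post_decomp = P_a_given_r @ P_r_given_y

assert np.allclose(post_direct, post_decomp)
```

The check holds for any distributions satisfying the conditional independence $y\perp a\mid x,r$, which is exactly what the third line of the derivation uses.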
##### Extreme Event Detection
Direct extraction of a reward predictor using maximum likelihood on the action
prediction problem with Eq. 2 is frustrated by two identifiability issues:
first, this expression is invariant to a permutation of the rewards on a
context dependent basis; and second, the relative scale of two terms being
multiplied is not uniquely determined by their product. To mitigate the first
issue, we assume $\sum_{a}P(r=0|a,x)P(a|x)>\frac{1}{2}$, i.e., nonzero rewards
are rare under $P(a|x)$; and to mitigate the second issue, we assume the
feedback can be perfectly decoded, i.e., $P(r|y,x)\in\\{0,1\\}$. Under these
assumptions, we have
$$r=0\implies P(a|y,x)=\frac{P(r=0|a,x)P(a|x)}{\sum_{a}P(r=0|a,x)P(a|x)}\leq 2P(r=0|a,x)P(a|x)\leq 2P(a|x). \tag{3}$$
Eq. 3 forms the basis for our extreme event detector: anytime the posterior
probability of an action is predicted to be more than twice the prior
probability, we deduce $r\neq 0$.
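The detector is the contrapositive of Eq. 3; a minimal sketch (function name and scalar interface are ours):

```python
def is_extreme_event(posterior: float, prior: float) -> bool:
    """Contrapositive of Eq. 3: a posterior more than twice the prior implies
    r != 0, under the rare-nonzero-reward and perfect-decoding assumptions."""
    return posterior > 2.0 * prior

# Example: an action with prior 0.2 under P(a|x).
assert is_extreme_event(0.5, 0.2)        # 0.5 > 2 * 0.2: deduce r != 0
assert not is_extreme_event(0.3, 0.2)    # 0.3 <= 2 * 0.2: consistent with r = 0
```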
Note that a feedback merely being a priori rare or frequent (i.e., the magnitude of
$P(y|x)$ under the policy $P(a|x)$) does not imply that observing such
feedback will induce an extreme event detection; rather the feedback must have
a probability that strongly depends upon which action is taken. Because
feedback is assumed conditionally independent of action given the reward, the
only way for feedback to help predict which action is played is via the
(action dependence of the) latent reward.
##### Extreme Event Disambiguation
With 2 latent states, $r\neq 0\implies r=1$, and we can reduce to a standard
contextual bandit with inferred rewards
$\mathop{\mathbbm{1}}(P(a|y,x)>2P(a|x))$. With 3 latent states, $r\neq
0\implies r=\pm 1$, and additional information is necessary to disambiguate
the extreme events. We assume partial reward information is available via a
“definitely negative” function
$\texttt{DN}:\mathcal{X}\times\mathcal{Y}\to\\{-1,0\\}$, where
$P(\texttt{DN}(x,y)=0|r=1)=1$ and $P(\texttt{DN}(x,y)=-1|r=-1)>0$
(“definitely positive” information can be incorporated analogously). This
reduces extreme event disambiguation to one-sided learning (Bekker & Davis,
2020) applied only to extreme events, where we try to predict the underlying
latent state given $(x,a)$. We assume partial labelling is selected completely
at random (Elkan & Noto, 2008) and treat the (constant) negative labelling
propensity $\alpha$ as a hyperparameter. We arrive at our 3-state reward
extractor
$$\rho(x,a,y)=\begin{cases}0 & P(a|y,x)\leq 2P(a|x)\\ -\alpha^{-1} & P(a|y,x)>2P(a|x)\text{ and }\texttt{DN}(x,y)=-1\\ 1 & \text{otherwise}\end{cases}, \tag{4}$$
equivalent to Bekker & Davis (2020, Equation 11). Setting $\alpha=1$ embeds
2-state IGL.
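Eq. 4 translates directly into code; a minimal sketch (function name and scalar interface are ours):

```python
def rho(posterior: float, prior: float, dn: int, alpha: float = 1.0) -> float:
    """3-state reward extractor of Eq. 4.

    posterior: P(a|y,x); prior: P(a|x); dn: DN(x,y) in {-1, 0};
    alpha: negative labelling propensity (alpha=1 recovers 2-state IGL).
    """
    if posterior <= 2.0 * prior:
        return 0.0               # no extreme event detected
    if dn == -1:
        return -1.0 / alpha      # definitely negative extreme event
    return 1.0                   # remaining extreme events treated as positive

assert rho(0.1, 0.2, dn=0) == 0.0                  # below threshold
assert rho(0.5, 0.2, dn=-1, alpha=0.5) == -2.0     # negative, inverse-propensity weighted
assert rho(0.5, 0.2, dn=0) == 1.0                  # extreme but not definitely negative
```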
Algorithm 1 IGL; Inverse Kinematics; 2 or 3 Latent States; On or Off-Policy.
0: Contextual bandit algorithm CB-Alg.
0: Calibrated weighted multiclass classification algorithm MC-Alg.
0: Definitely negative oracle DN. # $\texttt{DN}(\ldots)=0$ for 2 state IGL
0: Negative labelling propensity $\alpha$. # $\alpha=1$ for 2 state IGL
0: Action set size $K$.
1: $\pi\leftarrow\textrm{new }\texttt{CB-Alg}$.
2: $\texttt{IK}\leftarrow\textrm{new }\texttt{MC-Alg}$.
3: for $t=1,2,\dots$ do
4: Observe context $x_{t}$ and action set $A_{t}$ with $|A_{t}|=K$.
5: if On-Policy IGL then
6: $P(\cdot|x_{t})\leftarrow\pi.\textrm{predict}(x_{t},A_{t})$.
7: Play $a_{t}\sim P(\cdot|x_{t})$ and observe feedback $y_{t}$.
8: else
9: Observe $\left(x_{t},a_{t},y_{t},P(\cdot|x_{t})\right)$.
10: $w_{t}\leftarrow 1/(KP(a_{t}|x_{t}))$. # Synthetic uniform distribution
11: $\hat{P}(a_{t}|y_{t},x_{t})\leftarrow\texttt{IK}.\textrm{predict}((x_{t},y_{t}),A_{t},a_{t})$. # Predict action probability
12: if $K\hat{P}(a_{t}|y_{t},x_{t})\leq 2$ then # $\hat{r}_{t}=0$
13: $\pi.\textrm{learn}(x_{t},a_{t},A_{t},r_{t}=0)$
14: else # $\hat{r}_{t}\neq 0$
15: if $\texttt{DN}(\ldots)=0$ then
16: $\pi.\textrm{learn}(x_{t},a_{t},A_{t},r_{t}=1,P(\cdot|x_{t}))$
17: else # Definitely negative
18: $\pi.\textrm{learn}(x_{t},a_{t},A_{t},r_{t}=-\alpha^{-1},P(\cdot|x_{t}))$
19: $\texttt{IK}.\textrm{learn}((x_{t},y_{t}),A_{t},a_{t},w_{t})$.
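One off-policy round of Algorithm 1 can be sketched in Python as follows. The `UniformCB` and `UniformIK` classes below are hypothetical stand-ins for real `CB-Alg` and `MC-Alg` learners, kept trivial so the control flow is visible:

```python
class UniformCB:
    """Hypothetical stand-in for CB-Alg: uniform policy that records updates."""
    def __init__(self):
        self.updates = []
    def predict(self, x, actions):
        return {a: 1.0 / len(actions) for a in actions}
    def learn(self, x, a, actions, r, probs=None):
        self.updates.append((x, a, r))

class UniformIK:
    """Hypothetical stand-in for MC-Alg: uniform action posterior."""
    def predict(self, xy, actions, a):
        return 1.0 / len(actions)
    def learn(self, xy, actions, a, w):
        pass

def igl_step(pi, ik, x, actions, y, a, probs, dn, alpha=1.0):
    """One off-policy round of Algorithm 1 on a logged (x, a, y, P(.|x)) tuple."""
    K = len(actions)
    w = 1.0 / (K * probs[a])                # line 10: synthetic uniform distribution
    post = ik.predict((x, y), actions, a)   # line 11: predict \hat{P}(a|y,x)
    if K * post <= 2.0:                     # line 12: \hat{r}_t = 0
        pi.learn(x, a, actions, 0.0)
    elif dn(x, y) == 0:                     # line 15: extreme event, not definitely negative
        pi.learn(x, a, actions, 1.0, probs)
    else:                                   # line 17: definitely negative
        pi.learn(x, a, actions, -1.0 / alpha, probs)
    ik.learn((x, y), actions, a, w)         # line 19: update inverse kinematics

pi, ik = UniformCB(), UniformIK()
actions = [0, 1, 2]
probs = pi.predict("ctx", actions)
igl_step(pi, ik, "ctx", actions, "fb", a=1, probs=probs, dn=lambda x, y: 0)
```

With the uniform stub, $K\hat{P}(a|y,x)=1\leq 2$, so the round is treated as $\hat{r}=0$ and the policy receives a zero-reward update.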
##### Implementation Notes
In practice, $P(a|x)$ is known but the other probabilities are estimated.
$\hat{P}(a|y,x)$ is estimated online using maximum likelihood on the problem
predicting $a$ from $(x,y)$, i.e., on a data stream of tuples $((x,y),a)$. The
current estimates induce $\hat{\rho}(x,a,y)$ based upon the plug-in version of
Eq. 4. In this manner, the original data stream of $(x,a,y)$ tuples is
transformed into a stream of $\left(x,a,\hat{r}=\hat{\rho}(x,a,y)\right)$ tuples
and reduced to a standard online contextual bandit problem.
As an additional complication, although $P(a|x)$ is known, it is typically a
good policy under which rewards are not rare (e.g., offline learning with a
good historical policy; or acting online according to the policy being learned
by the IGL procedure). Therefore, we use importance weighting to synthesize a
uniform action distribution from the true action distribution $P(a|x)$. (When
the number of actions changes from round to round, we use importance weighting
to synthesize a non-uniform action distribution with low rewards, but we elide
this detail for ease of exposition.) Ultimately we arrive at the
procedure of Algorithm 1.
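The synthesized-uniform trick can be illustrated with a quick simulation: weighting each logged action by $w=1/(KP(a|x))$ makes the weighted action frequencies match a uniform policy in expectation. A toy sketch with a hypothetical skewed policy:

```python
import random
random.seed(0)

K = 4
policy = [0.7, 0.1, 0.1, 0.1]  # hypothetical "good" policy: one action dominates

# Weight each logged action by w = 1/(K * P(a|x)); in expectation the weighted
# frequency of every action is p_a * 1/(K * p_a) = 1/K, i.e., uniform.
n = 200_000
weighted = [0.0] * K
for _ in range(n):
    a = random.choices(range(K), weights=policy)[0]
    weighted[a] += 1.0 / (K * policy[a])

freqs = [w / n for w in weighted]
assert all(abs(f - 1.0 / K) < 0.01 for f in freqs)
```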
## 4 Empirical Evaluations
Evaluation Settings: We evaluate in three settings: a simulation using a
supervised classification dataset, online news recommendation on Facebook, and
a production image recommendation scenario.
Abbreviations: Algorithms are denoted by the following abbreviations:
Personalized IGL for 2 latent states (IGL-P(2)); Personalized IGL for 3 latent
states (IGL-P(3)); Contextual Bandits for the Facebook news setting that
maximizes for emoji, non-like click-based reactions (CB-emoji); Contextual
Bandits for the Facebook news setting that maximizes for comment interactions
(CB-comment); Contextual Bandits for the production recommendation scenario
that maximizes for CTR (CB-Click).
General Evaluation Setup: At each time step $t$, the context $x_{t}$ is
provided from either the simulator (Section 4.1, Section 4.2) or the logged
production data (Section 4.3). The learner then selects an action $a_{t}$ and
receives feedback $y_{t}$. In these evaluations, each user provides feedback
in exactly one interaction and different user feedback signals are mutually
exclusive, so that $y_{t}$ is a one-hot vector. In simulated environments, the
ground truth reward is sometimes used for evaluation but never revealed to the
algorithm.
### 4.1 Covertype IGL Simulation
To highlight that personalized IGL can distinguish between different user
communication styles, we create a simulated 2-state IGL scenario from a
supervised classification dataset. First, we apply a supervised-to-bandit
transform to convert the dataset into a contextual bandit simulation (Bietti
et al., 2021), i.e., the algorithm is presented the example features as
context, chooses one of the classes as an action, and experiences a binary
reward which indicates whether or not it matches the example label. In the IGL
simulation, this reward is experienced but not revealed to the algorithm.
Instead, the latent reward is converted into a feedback signal as follows:
each example is assigned one of $N$ different user ids and the user id is
revealed to the algorithm as part of the example features. The simulated user
will generate feedback in the form of one of $M$ different word ids. Unknown
to the algorithm, the words are divided equally into “good” and “bad” words,
and the users are divided equally into “normal” and “bizarro” users. “Normal”
users indicate positive and zero reward via “good” and “bad” words
respectively, while “bizarro” users employ the exact opposite communication
convention.
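The feedback generator described above can be sketched as follows (the split into word/user halves and the helper names are our illustrative choices):

```python
import random
random.seed(0)

M = N = 100
good_words = set(range(M // 2))        # hypothetical split: first half "good"
bad_words = set(range(M // 2, M))
normal_users = set(range(N // 2))      # first half "normal", remainder "bizarro"

def feedback(user_id: int, reward: int) -> int:
    """Map a user's latent binary reward to a word id according to their style:
    normal users say good words for r=1 and bad words for r=0; bizarro users
    use the exact opposite convention."""
    says_good = (reward == 1) == (user_id in normal_users)
    return random.choice(sorted(good_words if says_good else bad_words))

assert feedback(0, 1) in good_words        # normal user, r=1 -> "good" word
assert feedback(0, 0) in bad_words         # normal user, r=0 -> "bad" word
assert feedback(N - 1, 1) in bad_words     # bizarro user flips the convention
```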
Figure 2: The proposed personalized IGL algorithm successfully disambiguates
both different user communication styles and different event semantics. See
Section 4.1 for details.
We simulate using the Covertype dataset (Blackard & Dean, 1999) with
$M=N=100$ and an inverse kinematics model class that embeds both user
and word ids into a two-dimensional space. Fig. 2 demonstrates that both the
user population and the words are cleanly separated into two latent groups.
Additional results showcasing the learning curves for inverse kinematics,
reward and policy learning are shown in Section A.2 along with more detailed
description of the model parameters.
### 4.2 Fairness in Facebook News Recommendation
Personalized reward learning is the key to more fair recommender systems.
Previous work (Neophytou et al., 2022) suggests that inconsistent performance
in recommender systems across user subgroups arises due to differences in user
communication modalities. We now test this hypothesis in the setting of
Facebook news recommendation. Our simulations are built on a dataset
(Martinchek, 2016) of all posts by the official Facebook pages of 3 popular
news outlets (Fox News, The Huffington Post and TIME Magazine) that span the
political spectrum. Posts range from May to November 2016 and contain text
content information, as well as logged interaction counts, which include
comments and shares, as well as diverse click-based reactions (see Fig. 3).
Figure 3: Facebook click-based reactions: like, love, haha, wow, sad and angry
(image source: Meta). The reactions allow users to engage with content using
diverse communication signals.
Constructing a hand-engineered reward signal from these feedback signals is
difficult, and Facebook itself came under fire for utilizing reward weights
that disproportionately promote toxic, low quality news. One highly criticized
iteration of the reward ranking algorithm treated emoji reactions as five
times more valuable than likes (Merrill & Oremus, 2021). Future iterations of
the ranking algorithm promoted comments, in an attempt to bolster “meaningful
social interactions” (Hagey & Horwitz, 2021). Our experiments evaluate
the performance of CB algorithms using these two reward functions, referring
to them as CB-emoji and CB-comment.
We model the news recommendation problem as a 3 latent reward state problem,
with readers of different news outlets as different contexts. Given a curated
selection of posts, the goal of the learner is to select the best article to
show to the reader. The learner can leverage action features including the
post type (link, video, photo, status or event) as well as embeddings of the
text content that were generated using pre-trained transformers (Reimers &
Gurevych, 2019). User feedback is drawn from a fixed probability distribution
(unknown to the algorithms) that depends on the user type and latent reward of
the chosen action. As an approximation of the true latent reward signal, we
use low dimensional embeddings of the different news outlets combined with
aggregate statistics from the post feedback to categorize whether the users
had a positive ($r=1$), neutral ($r=0$) or negative experience ($r=-1$) with
the post. This categorization is not available to the evaluated algorithms.
Finally, we implement IGL-P(3) with the angry reaction as a negative oracle to
disambiguate the positive and negative reward states.
(a) Average fraction of rewards that are positive
(b) Average fraction of rewards that are negative
Figure 4: IGL-P(3) uses personalized reward learning to achieve fair news
recommendations across diverse reader bases, while CB policies based on
rewards used in practice by Facebook perform inconsistently, with subsets of
users receiving both fewer high quality recommendations and more low quality
recommendations. Standard error on all averages shown is $<0.005$.
Fig. 4 shows the results of our online news recommendation experiments. While
the performance of both CB-emoji and CB-comment varies significantly across
the different reader groups, IGL-P(3) maintains relatively stable performance
for both positive and negative rewards. The CB algorithm that maximizes emoji
reactions achieves the best performance for TIME readers at the cost of very
bad performance for the Fox News and Huffington Post Readers. On the other
hand, the CB algorithm that maximizes comment engagement achieves best
performance with Fox News readers; however, it still performs worse than
IGL-P(3). Finally, our simulations show that the CB-comment objective that was
introduced to decrease low quality news actually significantly _increased_ it
for the Fox News reader population. Additional details of this experiment,
including model parameters and learning curves are available in Section A.3.
### 4.3 Production Results
Algorithm | Clicks | Likes | Dislikes
---|---|---|---
IGL-P(3) | $[0.999,1.067,1.152]$ | $[0.985,1.029,1.054]$ | $[0.751,1.072,1.274]$
IGL-P(2) | $[0.926,1.005,1.091]$ | $[0.914,0.949,0.988]$ | $[1.141,1.337,1.557]$

Table 1: Relative metrics lift over a production baseline. The production baseline uses a hand-engineered reward function which is not available to the IGL algorithms. Shown are point estimates and associated bootstrap 95% confidence regions. IGL-P(2) erroneously increases dislikes to the detriment of other metrics. IGL-P(3) is equivalent to the hand-engineered baseline.

Algorithm | Clicks | Likes | Dislikes
---|---|---|---
IGL-P(3) | $[1.000,1.010,1.020]$ | $[1.006,1.026,1.049]$ | $[0.890,0.918,0.955]$
CB-Click | $[0.968,0.979,0.990]$ | $[0.935,0.959,0.977]$ | $[1.234,1.311,1.348]$

Table 2: Relative metrics lift over a CB policy that uses the click feedback as the reward. Point estimates and associated bootstrap 95% confidence regions are reported. IGL-P(3) significantly outperforms CB-Click at minimizing user dissatisfaction. Surprisingly, IGL-P(3) even outperforms CB-Click with respect to click feedback, providing evidence for the proposed latent state model. (Data for Table 1 and Table 2 are drawn from different date ranges due to compliance limitations, so the IGL-P(3) performance need not match.)
Our production setting is a real world image recommendation system that serves
hundreds of millions of users. In our recommendation system interface, users
provide feedback in the form of clicks, likes, dislikes or no feedback. All
four signals are mutually exclusive and the user only provides one feedback
after each interaction. For these experiments, we use data that spans millions
of interactions. The production baseline is a contextual bandit algorithm with
a hand-engineered multi-task reward function which dominates approaches that
only use click feedback (the utility of multi-task learning for recommendation
systems is well established, e.g., Chen et al., 2021; Lu et al., 2018b; Chen
et al., 2019). Consequently, any improvements over the
production policy imply improvement over any bandit algorithm optimizing for
click feedback.
We implement IGL-P(2) and IGL-P(3) and report the performance as relative lift
metrics over the production baseline. Unlike the simulation setting, we no
longer have access to the user’s latent reward after each interaction. As a
result, we evaluate IGL by comparing all feedback signals. An increase in both
clicks and likes, and a decrease in dislikes, are considered desirable
outcomes. Table 1 shows the results of our empirical study. IGL-P(2) exhibits
an inability to avoid extreme _negative_ events. Although the true latent
state is unknown, IGL-P(2) is Pareto-dominated due to an increase in dislikes.
IGL-P(3) does not exhibit this pathology. These results indicate the utility
of a $3$ latent state model in real world recommendation systems. To clearly
illustrate the improvement of IGL-P(3) and provide an additional benchmark, we
also report that IGL-P(3) significantly outperforms a contextual bandit policy
CB-Click using the industry standard CTR reward in Table 2.
## 5 Related Work
### 5.1 Eliminating reward engineering
Online learning algorithms in supervised learning, contextual bandits,
reinforcement learning (Bishop & Nasrabadi, 2006; Kober et al., 2013; Zhou,
2015; Bzdok et al., 2018; Hoi et al., 2021; Brunton & Kutz, 2022; Alsubari et
al., 2022) all optimize over hand-crafted scalar rewards. Partial label
learning (PLL) approaches (Feng & An, 2019; Gong et al., 2022; Wang et al.,
2022) in supervised learning learn strategies when the underlying ground-truth
labels are noisy. Branches of reinforcement learning such as inverse
reinforcement learning (Fu et al., 2017; Arora & Doshi, 2021), imitation
learning (Hussein et al., 2017; Oh et al., 2018), and robust reinforcement
learning (Zhou et al., 2021; Panaganti et al., 2022) learn from historical
data without access to the underlying true reward models. However, these
methods require expert demonstration data, are incapable of generalizing to
different feedback signals (Chen et al., 2022), and generally need expensive
compute and extensive sampling for learning. In this context, IGL (Xie et al., 2021;
2022) is a novel paradigm, where a learner’s goal is to optimally interact
with the environment with no explicit reward or demonstrations to ground its
policies. Our proposed approach to IGL, IGL-P(3) further enables learning of
personalized reward functions for different feedback signals.
### 5.2 Recommender Systems
Recommender systems are a well-studied field due to their direct link to
product revenues (Naumov et al., 2019; Steck et al., 2021). The rapid growth
of online content has generated interest in ML-based solutions (Li et al.,
2011) that are able to offer more diverse personalized recommendations to
internet users. Traditional vanilla recommendation approaches can be divided
into three types. Content-based approaches maintain representations for users
based on their content and recommend new content with good similarity metrics
for particular users (Balabanović & Shoham, 1997; IJntema et al., 2010; Kompan
& Bieliková, 2010; Lops et al., 2019; Argyriou et al., 2020; Javed et al.,
2021). In contrast, collaborative filtering approaches employ user rating
predictions based on historical consumed content and underlying user
similarities (Balabanović & Shoham, 1997; Schafer et al., 2007; Hu et al.,
2008; Argyriou et al., 2020; Steck & Liang, 2021). Finally, there are hybrid
approaches that combine the previous two contrasting approaches to better
represent user profiles for improved recommendations (Balabanović & Shoham,
1997; Funakoshi & Ohguro, 2000; Burke, 2007; Argyriou et al., 2020; Javed et
al., 2021). Our work is a significant departure from these approaches, in that
_we learn representations of users’ communication styles_ via their content
interaction history for improved diverse personalized recommendations.
Our work is significantly different from these in the following ways: (i) we
propose a novel personalized IGL algorithm based on the inverse kinematics
strategy as described in Section 3, (ii) we leverage IGL's capability of
learning rewards from implicit and explicit feedback signals and avoid
the costly, inefficient status quo process of reward engineering, and (iii)
we formulate and propose a new recommendation system based on the IGL
paradigm. The works that are closest to ours are Xie et al. (2021; 2022) which
introduce and solve for the IGL paradigm under different assumptions. However,
we propose a personalized IGL algorithm for recommendation systems with
improved reward predictor models, more practical assumptions on feedback
signals, and more intricacies described in Section 2.2.1.
## 6 Discussion
We evaluated the proposed personalized IGL approach (IGL-P) in three different
settings: (1) A simulation using a supervised classification dataset shows
that IGL-P can learn to successfully distinguish between different
communication modalities; (2) A simulation for online news recommendation
based on real data from Facebook users shows that IGL-P leverages insights
about different communication modalities to learn better policies and achieve
fairness with consistent performance among diverse user groups; (3) A real-
world experiment deployed in an image recommendation product showcases that
the proposed method outperforms the hand-engineered reward baseline, and
succeeds in a practical application.
This work assumes that users may communicate in different ways, but a given
user expresses (dis)satisfaction or indifference to all content in the same
way. This assumption was critical to deriving the inverse kinematics approach,
but in practice user feedback can also depend upon content (Freeman et al.,
2020). IGL with arbitrary joint content-action dependence of feedback is
intractable, but plausibly there exists a tractable IGL setting with a
constrained joint content-action dependence, which we are very much interested
in exploring in future work. Furthermore, although we established the
utility of a three state model over a two state model in our experiments,
perhaps more than three states are necessary for more complex recommendation
scenarios.
Although we consider the application of recommender systems, personalized
reward learning can benefit any application suffering from a one-size-fits-all
approach. One candidate domain is the detection of underlying medical
conditions and development of new treatments and interventions. Systematic
misdiagnoses in subgroups with symptoms different from the “norm” are
tragically standard in medicine. IGL-P can empower medical practitioners to
address systematic inequality by tuning diagnostic tools via personalization
and moving past an oblivious diagnostic approach. Other potential applications
are learned brain-computer and human-computer interfaces (Xie et al., 2022).
One example is a self-calibrating eye tracker where people’s response to
erroneous agent control is idiosyncratic (Gao et al., 2021).
## References
* Alsubari et al. (2022) S Nagi Alsubari, Sachin N Deshmukh, A Abdullah Alqarni, Nizar Alsharif, TH Aldhyani, Fawaz Waselallah Alsaade, and Osamah I Khalaf. Data analytics for the identification of fake reviews using supervised learning. _CMC-Computers, Materials & Continua_, 70(2):3189–3204, 2022.
* Argyriou et al. (2020) Andreas Argyriou, Miguel González-Fierro, and Le Zhang. Microsoft recommenders: Best practices for production-ready recommendation systems. In _Companion Proceedings of the Web Conference 2020_ , pp. 50–51. Association for Computing Machinery, 2020. ISBN 9781450370240.
* Arora & Doshi (2021) Saurabh Arora and Prashant Doshi. A survey of inverse reinforcement learning: Challenges, methods and progress. _Artificial Intelligence_ , 297:103500, 2021.
* Auer et al. (2002) Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E Schapire. The nonstochastic multiarmed bandit problem. _SIAM journal on computing_ , 32(1):48–77, 2002.
* Balabanović & Shoham (1997) Marko Balabanović and Yoav Shoham. Fab: content-based, collaborative recommendation. _Communications of the ACM_ , 40(3):66–72, 1997.
* Beel et al. (2013) Joeran Beel, Stefan Langer, Andreas Nürnberger, and Marcel Genzmehr. The impact of demographics (age and gender) and other user-characteristics on evaluating recommender systems. In _International Conference on Theory and Practice of Digital Libraries_ , pp. 396–400. Springer, 2013.
* Bekker & Davis (2020) Jessa Bekker and Jesse Davis. Learning from positive and unlabeled data: A survey. _Machine Learning_ , 109(4):719–760, 2020.
* Bietti et al. (2021) Alberto Bietti, Alekh Agarwal, and John Langford. A contextual bandit bake-off. _J. Mach. Learn. Res._ , 22:133–1, 2021.
* Bishop & Nasrabadi (2006) Christopher M Bishop and Nasser M Nasrabadi. _Pattern recognition and machine learning_ , volume 4. Springer, 2006.
* Blackard & Dean (1999) Jock A Blackard and Denis J Dean. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. _Computers and electronics in agriculture_ , 24(3):131–151, 1999.
* Bouneffouf et al. (2020) Djallel Bouneffouf, Irina Rish, and Charu Aggarwal. Survey on applications of multi-armed and contextual bandits. In _2020 IEEE Congress on Evolutionary Computation (CEC)_ , pp. 1–8. IEEE, 2020.
* Brunton & Kutz (2022) Steven L Brunton and J Nathan Kutz. _Data-driven science and engineering: Machine learning, dynamical systems, and control_. Cambridge University Press, 2022.
* Burke (2007) Robin Burke. Hybrid web recommender systems. _The adaptive web_ , pp. 377–408, 2007.
* Bzdok et al. (2018) Danilo Bzdok, Martin Krzywinski, and Naomi Altman. Machine learning: supervised methods. _Nature methods_ , 15(1):5, 2018.
* Chen et al. (2021) Lei Chen, Jie Cao, Guixiang Zhu, Youquan Wang, and Weichao Liang. A multi-task learning approach for improving travel recommendation with keywords generation. _Knowledge-Based Systems_ , 233:107521, 2021.
* Chen et al. (2022) Xiaocong Chen, Lina Yao, Xianzhi Wang, Aixin Sun, and Quan Z Sheng. Generative adversarial reward learning for generalized behavior tendency inference. _IEEE Transactions on Knowledge and Data Engineering_ , 2022.
* Chen et al. (2019) Zhongxia Chen, Xiting Wang, Xing Xie, Tong Wu, Guoqing Bu, Yining Wang, and Enhong Chen. Co-attentive multi-task learning for explainable recommendation. In _IJCAI_ , pp. 2137–2143, 2019.
* Ekstrand et al. (2018) Michael D Ekstrand, Mucun Tian, Ion Madrazo Azpiazu, Jennifer D Ekstrand, Oghenemaro Anuyah, David McNeill, and Maria Soledad Pera. All the cool kids, how do they fit in?: Popularity and demographic biases in recommender evaluation and effectiveness. In _Conference on fairness, accountability and transparency_ , pp. 172–186. PMLR, 2018.
* Elkan & Noto (2008) Charles Elkan and Keith Noto. Learning classifiers from only positive and unlabeled data. In _Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining_ , pp. 213–220, 2008.
* Feng & An (2019) Lei Feng and Bo An. Partial label learning with self-guided retraining. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 33, pp. 3542–3549, 2019.
* Freeman et al. (2020) Cole Freeman, Hamed Alhoori, and Murtuza Shahzad. Measuring the diversity of facebook reactions to research. _Proceedings of the ACM on Human-Computer Interaction_ , 4(GROUP):1–17, 2020.
* Fu et al. (2017) Justin Fu, Katie Luo, and Sergey Levine. Learning robust rewards with adversarial inverse reinforcement learning. _arXiv preprint arXiv:1710.11248_ , 2017.
* Funakoshi & Ohguro (2000) Kaname Funakoshi and Takeshi Ohguro. A content-based collaborative recommender system with detailed use of evaluations. In _KES’2000. Fourth International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies. Proceedings (Cat. No. 00TH8516)_ , volume 1, pp. 253–256. IEEE, 2000.
* Gao et al. (2021) Jensen Gao, Siddharth Reddy, Glen Berseth, Nicholas Hardy, Nikhilesh Natraj, Karunesh Ganguly, Anca D. Dragan, and Sergey Levine. X2T: training an x-to-text typing interface with online learning from user feedback. In _9th International Conference on Learning Representations, ICLR_ , 2021.
* Gong et al. (2022) Xiuwen Gong, Dong Yuan, and Wei Bao. Partial label learning via label influence function. In _International Conference on Machine Learning_ , pp. 7665–7678. PMLR, 2022.
* Grčar et al. (2005) Miha Grčar, Dunja Mladenič, Blaž Fortuna, and Marko Grobelnik. Data sparsity issues in the collaborative filtering framework. In _International workshop on knowledge discovery on the web_ , pp. 58–76. Springer, 2005.
* Hagey & Horwitz (2021) Keach Hagey and Jeff Horwitz. Facebook tried to make its platform a healthier place. It got angrier instead. _The Wall Street Journal_ , 2021. URL https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215.
* Hofmann et al. (2012) Katja Hofmann, Fritz Behr, and Filip Radlinski. On caption bias in interleaving experiments. In _Proceedings of the 21st ACM international conference on Information and knowledge management_ , pp. 115–124, 2012.
* Hoi et al. (2021) Steven CH Hoi, Doyen Sahoo, Jing Lu, and Peilin Zhao. Online learning: A comprehensive survey. _Neurocomputing_ , 459:249–289, 2021.
* Hu et al. (2008) Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In _2008 Eighth IEEE international conference on data mining_ , pp. 263–272. IEEE, 2008.
* Hussein et al. (2017) Ahmed Hussein, Mohamed Medhat Gaber, Eyad Elyan, and Chrisina Jayne. Imitation learning: A survey of learning methods. _ACM Computing Surveys (CSUR)_ , 50(2):1–35, 2017\.
* IJntema et al. (2010) Wouter IJntema, Frank Goossen, Flavius Frasincar, and Frederik Hogenboom. Ontology-based news recommendation. In _Proceedings of the 2010 EDBT/ICDT Workshops_ , pp. 1–6, 2010.
* Javed et al. (2021) Umair Javed, Kamran Shaukat, Ibrahim A Hameed, Farhat Iqbal, Talha Mahboob Alam, and Suhuai Luo. A review of content-based and context-based recommendation systems. _International Journal of Emerging Technologies in Learning (iJET)_ , 16(3):274–306, 2021.
* Khan (2017) M Laeeq Khan. Social media engagement: What motivates user participation and consumption on YouTube? _Computers in human behavior_ , 66:236–247, 2017.
* Kim et al. (2014) Youngho Kim, Ahmed Hassan, Ryen W White, and Imed Zitouni. Modeling dwell time to predict click-level satisfaction. In _Proceedings of the 7th ACM international conference on Web search and data mining_ , pp. 193–202, 2014.
* Kober et al. (2013) Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. _The International Journal of Robotics Research_ , 32(11):1238–1274, 2013.
* Kompan & Bieliková (2010) Michal Kompan and Mária Bieliková. Content-based news recommendation. In _International conference on electronic commerce and web technologies_ , pp. 61–72. Springer, 2010.
* Langford & Zhang (2007) John Langford and Tong Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. _Advances in neural information processing systems_ , 20(1):96–1, 2007.
* Li et al. (2011) Lihong Li, Wei Chu, John Langford, and Xuanhui Wang. Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. In _Proceedings of the Forth International Conference on Web Search and Web Data Mining, WSDM 2011, Hong Kong, China_ , pp. 297–306. ACM, 2011.
* Lops et al. (2019) Pasquale Lops, Dietmar Jannach, Cataldo Musto, Toine Bogers, and Marijn Koolen. Trends in content-based recommendation. _User Modeling and User-Adapted Interaction_ , 29(2):239–249, 2019.
* Lu et al. (2018a) Hongyu Lu, Min Zhang, and Shaoping Ma. Between clicks and satisfaction: Study on multi-phase user preferences and satisfaction for online news reading. In _The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval_, pp. 435–444, 2018a.
* Lu et al. (2018b) Yichao Lu, Ruihai Dong, and Barry Smyth. Why i like it: multi-task learning for recommendation and explanation. In _Proceedings of the 12th ACM Conference on Recommender Systems_ , pp. 4–12, 2018b.
* Mansoury et al. (2020) Masoud Mansoury, Himan Abdollahpouri, Jessie Smith, Arman Dehpanah, Mykola Pechenizkiy, and Bamshad Mobasher. Investigating potential factors associated with gender discrimination in collaborative recommender systems. In _The Thirty-Third International Flairs Conference_ , 2020.
* Martinchek (2016) Patrick Martinchek. 2012-2016 Facebook Posts. https://data.world/martinchek/2012-2016-facebook-posts, 2016.
* Merrill & Oremus (2021) Jeremy Merrill and Will Oremus. Five points for anger, one for a ‘like’: How facebook’s formula fostered rage and misinformation. _The Washington Post_ , 2021. URL https://www.washingtonpost.com/technology/2021/10/26/facebook-angry-emoji-algorithm/.
* (46) Meta. Reactions now available globally. _Facebook News_. URL https://about.fb.com/news/2016/02/reactions-now-available-globally/.
* Naumov et al. (2019) Maxim Naumov, Dheevatsa Mudigere, Hao-Jun Michael Shi, Jianyu Huang, Narayanan Sundaraman, Jongsoo Park, Xiaodong Wang, Udit Gupta, Carole-Jean Wu, Alisson G Azzolini, et al. Deep learning recommendation model for personalization and recommendation systems. _arXiv preprint arXiv:1906.00091_ , 2019.
* Neophytou et al. (2022) Nicola Neophytou, Bhaskar Mitra, and Catherine Stinson. Revisiting popularity and demographic biases in recommender evaluation and effectiveness. In _European Conference on Information Retrieval_ , pp. 641–654. Springer, 2022.
* Nguyen et al. (2014) Tien T Nguyen, Pik-Mai Hui, F Maxwell Harper, Loren Terveen, and Joseph A Konstan. Exploring the filter bubble: the effect of using recommender systems on content diversity. In _Proceedings of the 23rd international conference on World wide web_ , pp. 677–686, 2014.
* Oh et al. (2018) Junhyuk Oh, Yijie Guo, Satinder Singh, and Honglak Lee. Self-imitation learning. In _International Conference on Machine Learning_ , pp. 3878–3887. PMLR, 2018.
* Panaganti et al. (2022) Kishan Panaganti, Zaiyan Xu, Dileep Kalathil, and Mohammad Ghavamzadeh. Robust reinforcement learning using offline data. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), _Advances in Neural Information Processing Systems_ , 2022. URL https://openreview.net/forum?id=AK6S9MZwM0.
* Reimers & Gurevych (2019) Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using siamese BERT-networks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, November 2019.
* Schafer et al. (2007) J Ben Schafer, Dan Frankowski, Jon Herlocker, and Shilad Sen. Collaborative filtering recommender systems. In _The adaptive web_ , pp. 291–324. Springer, 2007.
* Scott (2021) Kate Scott. You won’t believe what’s in this paper! Clickbait, relevance and the curiosity gap. _Journal of pragmatics_ , 175:53–66, 2021.
* Shin (2020) Donghee Shin. How do users interact with algorithm recommender systems? The interaction of users, algorithms, and performance. _Computers in Human Behavior_ , 109:106344, 2020.
* Silveira et al. (2019) Thiago Silveira, Min Zhang, Xiao Lin, Yiqun Liu, and Shaoping Ma. How good your recommender system is? A survey on evaluations in recommendation. _International Journal of Machine Learning and Cybernetics_ , 10(5):813–831, 2019.
* Steck & Liang (2021) Harald Steck and Dawen Liang. Negative interactions for improved collaborative filtering: Don’t go deeper, go higher. In _Fifteenth ACM Conference on Recommender Systems_ , pp. 34–43, 2021.
* Steck et al. (2021) Harald Steck, Linas Baltrunas, Ehtsham Elahi, Dawen Liang, Yves Raimond, and Justin Basilico. Deep learning for recommender systems: A netflix case study. _AI Magazine_ , 42:7–18, Nov. 2021. doi: 10.1609/aimag.v42i3.18140. URL https://ojs.aaai.org/index.php/aimagazine/article/view/18140.
* Wang et al. (2022) Haobo Wang, Ruixuan Xiao, Yixuan Li, Lei Feng, Gang Niu, Gang Chen, and Junbo Zhao. Pico: Contrastive label disambiguation for partial label learning. _arXiv preprint arXiv:2201.08984_ , 2022.
* Wang et al. (2021) Wenjie Wang, Fuli Feng, Xiangnan He, Hanwang Zhang, and Tat-Seng Chua. Clicks can be cheating: Counterfactual recommendation for mitigating clickbait issue. In _Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval_ , pp. 1288–1297, 2021.
* Wu et al. (2017) Qingyun Wu, Hongning Wang, Liangjie Hong, and Yue Shi. Returning is believing: Optimizing long-term user engagement in recommender systems. In _Proceedings of the 2017 ACM on Conference on Information and Knowledge Management_ , pp. 1927–1936, 2017.
* Xie et al. (2021) Tengyang Xie, John Langford, Paul Mineiro, and Ida Momennejad. Interaction-Grounded Learning. In _International Conference on Machine Learning_ , pp. 11414–11423. PMLR, 2021.
* Xie et al. (2022) Tengyang Xie, Akanksha Saran, Dylan J Foster, Lekan Molu, Ida Momennejad, Nan Jiang, Paul Mineiro, and John Langford. Interaction-Grounded Learning with Action-inclusive Feedback. _arXiv preprint arXiv:2206.08364_ , 2022.
* Yi et al. (2014) Xing Yi, Liangjie Hong, Erheng Zhong, Nanthan Nan Liu, and Suju Rajan. Beyond clicks: dwell time for personalization. In _Proceedings of the 8th ACM Conference on Recommender systems_ , pp. 113–120, 2014.
* Zhou (2015) Li Zhou. A survey on contextual multi-armed bandits. _arXiv preprint arXiv:1508.03326_ , 2015.
* Zhou et al. (2021) Zhengqing Zhou, Zhengyuan Zhou, Qinxun Bai, Linhai Qiu, Jose Blanchet, and Peter Glynn. Finite-sample regret bound for distributionally robust offline tabular reinforcement learning. In _International Conference on Artificial Intelligence and Statistics_ , pp. 3331–3339. PMLR, 2021.
## Appendix A Appendix
### A.1 Motivating the 3 State Model for Recommender Systems
Through a simulated experiment, we demonstrate that the traditional 2 latent
state reward model used by prior IGL methods (Xie et al., 2021; 2022) is not
sufficient for the recommender system scenario. We further illustrate that a 3
latent reward state model can overcome the limitations of the two latent state
model, and achieve personalized reward learning for recommendations.
Simulator Design. Before the start of each experiment, user profiles with
fixed latent rewards for each action are generated. The users are also
assigned predetermined communication styles, so the probability of emitting a
given signal conditioned on the latent reward remains static throughout the
duration of the experiment. For the available feedback, users can provide
feedback using five signals: (1) like, (2) dislike, (3) click, (4) skip and
(5) none. The feedback includes a mix of explicit (likes, dislikes) and
implicit (clicks, skips, none) signals. Despite receiving no human input on
the assumed meaning of the implicit signals, we will demonstrate that IGL can
determine which feedback signals are associated with which latent state. In addition
to policy optimization, IGL can also be a tool for automated feature
discovery. To reveal the qualitative properties of the approach, the simulated
probabilities for observing a particular feedback given the reward are chosen
so that they can be perfectly decoded, i.e., each feedback has a nonzero
emission probability in exactly one latent reward state. Production data does
not obey this constraint (e.g., accidental emissions of all feedback occur at
some rate): theoretical analysis of our approach without perfectly decodable
rewards is a topic for future work.
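The emission structure above can be sketched as follows. This is our own illustrative reconstruction of the simulator, not the authors' code: the dislike/skip probabilities (0.2/0.8) follow the values reported later for Fig. 7, while the like/click split is an assumed placeholder. The key property being demonstrated is perfect decodability, i.e., each feedback signal has nonzero emission probability in exactly one latent reward state.

```python
import random

# Hypothetical sketch of the simulator described above (names are ours).
# Each feedback signal appears under exactly one latent reward state,
# so the latent reward is perfectly decodable from the feedback.
EMISSIONS = {
    +1: {"like": 0.3, "click": 0.7},    # satisfied (assumed split)
    0: {"none": 1.0},                   # indifferent
    -1: {"dislike": 0.2, "skip": 0.8},  # dissatisfied (values from Fig. 7)
}

def make_user(actions, rng):
    """Fix a latent reward in {-1, 0, +1} for every action (static user profile)."""
    return {a: rng.choice([-1, 0, 1]) for a in actions}

def emit_feedback(user, action, rng):
    """Sample a feedback signal conditioned on the user's latent reward."""
    signals = EMISSIONS[user[action]]
    names, probs = zip(*signals.items())
    return rng.choices(names, weights=probs)[0]

rng = random.Random(0)
user = make_user(range(5), rng)
fb = emit_feedback(user, 0, rng)
```

Decodability here means the map from signal to latent state is a function; production data, as noted above, breaks this because every signal is emitted at some rate in every state.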
IGL-P(2). We implement Algorithm 1 for 2 latent states as IGL-P(2). The
experiment here shows the following two results about IGL-P(2): (i) it is able
to succeed in the scenario when there are 2 underlying latent rewards and (ii)
it can no longer do so when there are 3 latent states. Fig. 5 shows the
simulator setup used, where clicks and likes are used to communicate
satisfaction, and dislikes, skips and no feedback (none) convey (active or
passive) dissatisfaction.
(a) 2 latent state model
(b) 3 latent state model
Figure 5: Simulator settings for 2 state and 3 state latent model. In Fig. 5a,
$\mathbf{r=0}$ corresponds to anything other than the user actively enjoying
the content, whereas in Fig. 5b, lack of user enjoyment is split into
indifference and active dissatisfaction.
Fig. 6 shows the distribution of rewards for IGL-P(2) as a function of the
number of iterations, for both the 2 and 3 latent state model. When there are
only 2 latent rewards, IGL-P(2) consistently improves; however, for 3 latent states, IGL-P(2) oscillates between $r=1$ and $r=-1$, resulting in much lower average user satisfaction. These empirical results demonstrate that although IGL-P(2) can successfully identify and maximize the rare feedback signals it encounters, it is unable to distinguish between satisfied and dissatisfied users.
(a) Two latent states
(b) Three latent states
Figure 6: Performance of IGL-P(2) in simulated environment. Although IGL-P(2)
is successful with the 2 state simulator, it fails on the 3 state simulator
and oscillates between attempting to maximize $\mathbf{r=1}$ and
$\mathbf{r=-1}$.
IGL-P(3): Personalized Reward Learning for Recommendations. Since IGL-P(2) is
not sufficient for the recommendation system setting, we now explore the
performance of IGL-P(3). Using the same simulator as Fig. 5b, we evaluated
IGL-P(3). Fig. 7a demonstrates the distribution of the rewards over the course
of the experiment. IGL-P(3) quickly converged, and because of the partial
negative feedback for dislikes, never attempted to maximize the $r=-1$ state.
Even though users used the ambiguous skip signal to express dissatisfaction
80% of the time, IGL-P(3) was still able to learn user preferences.
In order for IGL-P(3) to succeed, the algorithm requires direct grounding from
the dislike signal. We next examined how IGL-P(3) is impacted by increased or
decreased presence of user dislikes. Fig. 7b was generated by varying the
probability $p$ of users emitting dislikes given $r=-1$, and then averaging
over 10 experiments for each choice of $p$. While lower dislike emission
probabilities are associated with slower convergence, IGL-P(3) is able to
overcome the increase in unlabeled feedback and learn to associate the skip
signal with user dissatisfaction. Once the feedback decoding stabilizes,
regardless of the dislike emission probability, IGL-P(3) enjoys strong
performance for the remainder of the experiment.
(a) Ground truth learning curves, $\mathbf{P(\text{dislike}|r=-1)=0.2}$.
(b) Effect of varying $\mathbf{P(\text{dislike}|r=-1)}$.
Figure 7: Performance of IGL-P(3) in simulated environment. In Fig. 7a,
IGL-P(3) successfully maximizes user satisfaction while minimizing
dissatisfaction. Fig. 7b demonstrates how IGL-P(3) is robust to varying the
frequency of partial information received, although more data is needed for
convergence when “definitely bad” events are less frequent.
### A.2 Additional Results for the Covertype IGL Simulation
Model Parameters For The Covertype IGL Simulation. For the Covertype IGL
simulation we trained both a CB and IK model (as shown in Algorithm 1). We use
online multiclass learning to implement the IK model, where the classes are
all the possible actions and the label is the action played. Both CB and IK
are linear logistic regression models implemented in PyTorch, trained using
the cross-entropy loss. Both models used Adam to update their weights with a
learning rate of $2.5\times 10^{-3}$. The first layer of weights for both models had $512$ weights that were initialized using a Cauchy distribution with $\sigma=0.2$. The learning curves are shown in Figure 8.
(a) Reward learning.
(b) Accuracy of the inverse kinematics model used for the intermediate task of
action prediction.
Figure 8: Learning curves for the simulated experiment in Sec. 4.1.
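As a rough sketch of the IK component described above: a linear multiclass logistic regression trained with cross-entropy, where the classes are the possible actions and the label is the action played. Only the learning rate (2.5e-3) and the Cauchy initialization ($\sigma=0.2$) follow the reported setup; the synthetic data, dimensions, and the use of plain gradient descent in place of Adam are our simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions = 8, 4

# Cauchy(0, sigma=0.2) initialization via inverse-CDF sampling.
W = 0.2 * np.tan(np.pi * (rng.random((n_features, n_actions)) - 0.5))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def step(W, x, y, lr=2.5e-3):
    """One cross-entropy gradient step; y holds played-action indices."""
    p = softmax(x @ W)
    onehot = np.eye(n_actions)[y]
    grad = x.T @ (p - onehot) / len(x)
    return W - lr * grad

x = rng.normal(size=(64, n_features))
y = rng.integers(0, n_actions, size=64)

def loss(W):
    return -np.log(softmax(x @ W)[np.arange(len(y)), y]).mean()

before = loss(W)
for _ in range(200):
    W = step(W, x, y)
after = loss(W)
```

The convexity of multiclass logistic regression means the loss decreases monotonically under a sufficiently small learning rate, which makes the IK task a comparatively benign intermediate learning problem.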
### A.3 Additional Results for Facebook News Simulation
Model Parameters For The Facebook News Simulation. For the Facebook news
simulation there were 2 CB models and an IK model. All models used Adam to
update their weights with a learning rate of $10^{-3}$, batch size 100, and
cross-entropy loss function. The reward learning curves are shown in Figures 9, 10, and 11.
Figure 9: Reward learning curves for IGL-P(3) in Facebook news recommendation
experiment. Over time, IGL-P(3) learns the difference between the three states
and tries to minimize $r=0$ and $r=-1$ while maximizing $r=1$ across all
contexts.
Figure 10: Reward learning curves for CB-emoji in Facebook news recommendation
experiment. In two of the three different contexts (The Huffington Post readers and TIME readers), CB-emoji tries to, and succeeds in, maximizing $r=-1$.
Figure 11: Reward learning curves for CB-comment in Facebook news
recommendation experiment. Although CB-comment performs successfully with TIME
readers, it maximizes the unhappiness of Fox News readers and promotes
indifference in The Huffington Post readers.
# CLAS: Coordinating Multi-Robot Manipulation
with Central Latent Action Spaces
Elie Aljalbout <EMAIL_ADDRESS>, Maximilian Karl <EMAIL_ADDRESS>, Patrick van der Smagt
Machine Learning Research Lab Volkswagen Group Munich Germany
###### Abstract
Multi-robot manipulation tasks involve various control entities that can be
separated into dynamically independent parts. A typical example of such real-
world tasks is dual-arm manipulation. Naively solving such tasks with reinforcement learning is often infeasible because the sample complexity and exploration requirements grow with the dimensionality of the action and
state spaces. Instead, we would like to handle such environments as multi-
agent systems and have several agents control parts of the whole. However,
decentralizing the generation of actions requires coordination across agents
through a channel limited to information central to the task. This paper
proposes an approach to coordinating multi-robot manipulation through learned
latent action spaces that are shared across different agents. We validate our
method in simulated multi-robot manipulation tasks and demonstrate improvement
over previous baselines in terms of sample efficiency and learning
performance.
###### keywords:
Multi-robot manipulation, latent action spaces, reinforcement learning
Figure 1: Two robot arms cooperating on an object lifting task. The red cube
indicates the target pose. Traditionally, two agents would control the
separate robot arms in a robot-level control space such as joint torque control. We explore the option of learning latent central action spaces which
are robot-agnostic and central to the task. In our example, a possible action
space would correspond to the force $F$ and torque $\tau$ acting on the center
of mass of the cube.
## 1 Introduction
Most recent successes of reinforcement learning (RL) methods have been in
single-agent environments. Applications include games (Mnih et al., 2013;
Silver et al., 2016), robotics (Kober et al., 2013), and autonomous driving
(Kiran et al., 2021). However, many control problems can naturally be
distributed to multiple agents. For instance, in robotics, tasks involving
multiple robots can be framed as multi-agent systems. In this work, we
consider cooperative multi-agent systems. Such environments can also be framed
as single-agent RL problems, with policies receiving full observations and
outputting a single action responsible for actuating all the different
entities. However, such an approach would suffer from high sample complexity
due to the difficulty of exploration and fitting a policy under high-
dimensional action, state, and observation spaces. Instead, in a multi-agent
reinforcement learning (MARL) approach, each agent is responsible for
actuating a sub-part of the environment and could have access to either all
observations or a subset. This simplifies the exploration and sample
requirement for each individual agent.
However, multi-agent methods suffer from the lack of information present to
each agent, which results in multiple problems (Graña et al., 2011; Canese et
al., 2021). Hence, in the literature, multiple solutions have been proposed to
approach these problems. Most of these methods attempt to establish either an
explicit or implicit communication channel between multiple agents, allowing
for sharing a certain amount of information that is assumed important to the
task. The latter refers to obtaining information about other agents through
learned approximate models, not direct communication. Examples of such methods
include opponent modeling (Raileanu et al., 2018; Liu et al., 2020; Yu et al.,
2021) and latent intention/goal estimation (Xie et al., 2020; Wang et al.,
2020).
Another challenge for multi-agent systems is the decentralized action
generation. This aspect can be ignored for the classical application domains
examined by previous MARL research, such as games and particle environments.
However, it becomes critical when dealing with physical tasks, such as dual-
arm manipulation, where decoupled actions could lead to instabilities and even
damage the robots. Hence, in this work, we focus on this problem.
We postulate that there exists an agent-agnostic latent action space that can
alternatively be used for solving certain families of cooperative multi-robot
manipulation tasks. To illustrate the concept, we take the example of multiple
robot manipulators lifting a single object. In the standard multi-agent
approach to solving this task, each agent would be responsible for actuating
one robot. Each agent’s action space would correspond to some control commands
suitable for the robot, e. g. joint torques, velocities, Cartesian poses. The
goal of this task is to move the object; hence a simpler and more intuitive
action space would ideally be expressed with respect to the object and not the
robots. For instance, such an action could represent a wrench (force and
torque) applied to the object. Task-specific action spaces are not trivial to
implement unless the object is rigidly attached to the end-effector of the
robot and its physical properties are known. In this work, we aim to learn
such an action space and use it for learning multi-robot manipulation tasks
that require coordination. We propose a method for learning latent central
action spaces and learning decentralized policies acting independently in
these spaces. The previous example is an ideal case where the obtained latent
action space has an interpretable physical meaning. However, we restrict
ourselves to the general case where this shared latent space could also have a
semantic uninterpretable meaning, and study its effect on decentralized
learning in cooperative multi-robot manipulation tasks.
## 2 Related Work
#### Multi-agent cooperative control
MARL methods assign different agents to different parts of the action and
state spaces. This reduces the complexity of learning and exploration of the
individual components, but the overall problem remains very challenging. MARL
solutions could be simplified using custom policy parametrizations such as
finite state controllers (Bernstein et al., 2009; Amato et al., 2010) or
transforming the problem to enable tractable planning and search (Dibangoye
and Buffet, 2018; Dibangoye et al., 2016). However, decentralized MARL methods
fail to achieve the level of coordination needed to control physical systems. Hence, several approaches have been proposed to enable a feasible
exchange of information during control. This could either be achieved through
explicit communication channels such as in Guestrin et al. (2002); Sukhbaatar
et al. (2016); Singh et al. (2018); Das et al. (2019); Pretorius et al.
(2020); Niu et al. (2021), or via implicit information exchange as part of the
policy architecture or the learning algorithm (Gupta et al., 2017; Lee et al.,
2020; Lowe et al., 2017). For instance, multiple methods are based on modeling
the other agents’ policies (Raileanu et al., 2018; Liu et al., 2020; Yu et
al., 2021). This kind of method is usually based on the centralized training
decentralized execution (CTDE) paradigm, where training each policy can
benefit from the information that is usually exclusive to the other agents at
execution time. Others have proposed using CTDE for learning a central
dynamics model and using it to train decentralized policies (Zhang et
al., 2021b; Willemsen et al., 2021). Similarly, Lowe et al. (2017) and
Foerster et al. (2018) propose training decentralized actors using a
centralized critic. Another common approach is to decompose the value function
to the different agents (Sunehag et al., 2018; Rashid et al., 2018). Beyond
CTDE, multiple other solutions have been proposed for alleviating the non-
stationarity of MARL tasks. For instance, Liu et al. (2021) propose
engineering the reward function to punish competitive actions taken by
individual agents. Gupta et al. (2017) relied on policy parameter sharing
across agents, which allows multiple agents to use the same policy network
while passing an agent index as part of the observation. A more extensive
overview of MARL can be found in (Zhang et al., 2021a).
#### Latent action representation
Different control settings enable different kinds of behavior. For instance,
motion control is ideal for reaching a given goal but not perfectly suited for
manipulating objects or applying forces. Similarly, in RL, previous work
showed that the choice of action space representation could lead to
improvements in sample efficiency, energy consumption, robustness (Martín-
Martín et al., 2019; Bogdanovic et al., 2020; Aljalbout et al., 2021; Varin et
al., 2019; Ulmer et al., 2021) or learning speed (Peng and van de Panne,
2017). Types of action representation include torque, joint PD-controller,
inverse dynamics, muscle activation or variable impedance control (Peng and
van de Panne, 2017; Varin et al., 2019; Martín-Martín et al., 2019; Bogdanovic
et al., 2020), but also DMPs can be seen as an action representation (Buchli
et al., 2011; Schaal, 2006). Bahl et al. (2020) embed DMPs in the action space
by having the policy output the DMP parameters. These studies further show that the relation between the task space and the choice of action space is important
(Martín-Martín et al., 2019). Ganapathi et al. (2022) integrate a
differentiable implementation of forward kinematics in neural networks to
combine Cartesian and joint-space control. More recent publications also show
the possibility of learning these action representations from interaction with
the environment. Zhou et al. (2020) and Allshire et al. (2021) learn a
conditional variational autoencoder in order to obtain a latent action space.
Policy search is then performed in this latent action representation. During
inference, the decoder part of the auto-encoder is used to transform the
latent action back into the original action space. Zhou et al. (2020)
additionally emphasize constraining the policy to remain within the support of
the dataset. Karamcheti et al. (2021) use language embeddings to inform the
learning of latent action spaces. Rana et al. (2022) learn a latent skill-
based action space where the skills run at a higher frequency than the policy
actions.
Figure 2: System overview under full agent observability. (Left) we use a
conditional autoencoder for learning the central latent action space. The
encoder receives all observations and actions from all agents and produces a
latent action $v_{c}$. This latent action, together with the full observation, is given to the agent-specific decoders. Each
decoder outputs an action that is in the original action space of the
corresponding agent. (Right) All agents share the same policy acting in the
latent action space. The learned decoders map the latent action into the
original action space.
## 3 Method
We are mainly interested in learning multi-robot manipulation tasks.
Typically, such tasks involve multiple robot manipulators (i. e. robot arms)
that simultaneously interact with an object to achieve a predefined goal.
Having a single actor control all robots would ideally lead to good
coordination, but would suffer in exploration due to the large action and
state spaces (Alles and Aljalbout, 2022). Alternatively, control could be
split into multiple agents handling one robot each. By doing so, we reduce the
dimensionality of the individual agents’ action and state spaces, hence
reducing the sample complexity and exploration requirements. However, by
decentralizing the process of action generation, coordination between the
different agents’ policies becomes challenging. Another main difference to
single-agent approaches is the lack of information present to each agent at
execution time. Namely, each agent can only receive a subset of all the
observations of the environment. The local agent observations usually
correspond to agent-specific observations $\mathbf{o}_{i}$ (e.g.
proprioceptive measurements in a robotics scenario) and task-related
observations $\mathbf{o}_{c}$ (e.g. object poses), that are shared across all
agents’ observations.
### 3.1 Problem Formulation
Decentralized cooperative control tasks could be formulated as decentralized
partially-observable Markov decision processes (Dec-POMDP). A Dec-POMDP is
defined by the tuple $\langle N,\mathcal{X},\\{\mathcal{U}_{i}\\}_{i\in\\{1,\dots,N\\}},\mathcal{T},\\{r_{i}\\}_{i\in\\{1,\dots,N\\}},\gamma,\\{\mathcal{O}_{i}\\}_{i\in\\{1,\dots,N\\}},\rho\rangle$,
where $N$ is the number of agents ($N=1$ corresponds to the single-agent problem),
$\mathcal{X}$ is the state space shared by all agents, $\mathcal{U}_{i}$ is
the action space for agent $i$, $\mathcal{T}$ represents the environment
dynamics, $r_{i}$ is the reward function for agent $i$, $\gamma$ is the
discount factor, $\mathcal{O}_{i}$ is the observation space of agent $i$, and
$\rho$ is the initial state distribution. Since we are interested in
cooperative tasks, all agents share the same reward $r_{1}=r_{2}=\dots=r_{N}$.
A Dec-POMDP is a special type of partially observed stochastic game. Optimally
solving Dec-POMDPs is a challenging combinatorial problem that is NEXP-
complete (Bernstein et al., 2002).
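For concreteness, the Dec-POMDP tuple can be written as a small container; the class and field names below are our own illustration, not an interface from the paper. The cooperative assumption ($r_1=r_2=\dots=r_N$) becomes a simple check.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

# Illustrative container for the Dec-POMDP tuple defined above (our naming).
@dataclass
class DecPOMDP:
    n_agents: int                  # N
    state_space: Any               # X, shared by all agents
    action_spaces: List[Any]       # U_i, one per agent
    transition: Callable           # T: (x, u_1..u_N) -> x'
    rewards: List[Callable]        # r_i, one per agent
    gamma: float                   # discount factor
    observation_spaces: List[Any]  # O_i, one per agent
    rho: Callable                  # initial-state distribution

    def is_cooperative(self) -> bool:
        # Cooperative tasks: r_1 = r_2 = ... = r_N (here: the same callable).
        return len(set(self.rewards)) == 1

# Toy instance with two agents sharing one reward function.
r = lambda x, u: 0.0
env = DecPOMDP(2, None, [None, None], lambda x, u: x, [r, r],
               0.99, [None, None], lambda: None)
```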
### 3.2 Central Latent Action Spaces
Previous approaches relied on the centralized training and decentralized
execution paradigm to allow using full observation and action information at
least at training time. We follow this line of work and propose learning a
latent central action space $\mathcal{V}$, which is shared across all agents.
Controls in this space represent single actions acting on the whole
environment and must then be translated back into commands to be
executed by the individual robots. The motivation behind this method is that
cooperative tasks usually involve different agents manipulating the same
entities to achieve a high-level goal. The overall action that is reflected on
those entities is a result of all the control commands from all the agents.
Our approach is illustrated in figure 2. We learn a central latent action
space using a conditional autoencoder. Given this model, all agents share a
single policy to output actions in the latent action space, and translate the
given actions to the original action space of each robot based on the learned
decoders.
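A minimal sketch of this execution-time pipeline under full observability: one shared policy maps the full observation to a latent central action $v$, and per-agent learned decoders map $(o, v)$ back into each robot's original action space. The hand-written linear maps below stand in for the trained networks; all names and numbers are illustrative.

```python
import random

LATENT_DIM = 6  # e.g. a wrench on the object, as in the lifting example

def shared_policy(obs):
    """pi(o) -> v: the same copy is used by every agent."""
    return [sum(obs) * 0.01] * LATENT_DIM

def make_decoder(scale):
    """d_i(o, v) -> u_i in agent i's original action space."""
    def decode(obs, v):
        return [scale * vj + 0.001 * sum(obs) for vj in v]
    return decode

# Two arms with different (illustrative) decoders for the same latent action.
decoders = [make_decoder(0.5), make_decoder(-0.5)]

r = random.Random(0)
obs = [r.random() for _ in range(12)]        # full observation
v = shared_policy(obs)                       # one central latent action
actions = [d(obs, v) for d in decoders]      # decentralized original actions
```

Note that the latent action space does not grow with the number of robots; only one more decoder is added per agent.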
Figure 3: System overview under partial agent observability. (Left) we use a
conditional autoencoder for learning the central latent action space. The
encoder receives all observations and actions from all agents and produces a
latent action $v$. The latent action contains agent-specific actions $v_{i}$
as well as a central latent action $v_{c}$. This latent action, together with each agent’s observation, is given to the agent-specific decoders. Each decoder outputs an action that is in the original
action space of the corresponding agent. (Right) All agents share the same
policy acting on the object in the latent action space. They each have a
separate policy acting in the latent agent-specific action space. We use the
learned decoders to map the latent action into their original action space.
To learn a latent central action space, we use Stochastic Gradient Variational
Bayes (Kingma and Welling, 2013) to overcome the intractable inference
distributions involved in learning mappings to this space. First we look at
the case where each agent receives full observations
$\mathbf{o}\in\mathcal{O}_{1}\times\mathcal{O}_{2}\times\dots\times\mathcal{O}_{N}$.
For that, we introduce the graphical models in figure 2. The generative
process of each agent’s original action $\mathbf{u}_{i}$ is conditioned on the
latent central action $\mathbf{v}$ and the observation $\mathbf{o}$ (figure
2). The latter is also used during the inference process, as shown in figure
2. Additionally, to infer latent actions, the actions from all agents $\mathbf{u}=[\mathbf{u}_{1},\dots,\mathbf{u}_{N}]$ are needed. This is
possible since the inference/encoder network will not be used for producing
actions in the original space at execution time. Based on this model, all
agents could share a copy of the same policy, which outputs a latent central
action $\mathbf{v}$ based on the full observation $\mathbf{o}$. However, they
would each have a different decoder to translate the latent action
$\mathbf{v}$ into their original action space. This is illustrated in figure 2
for a hypothetical environment with two agents. The extension to more agents
is trivial. Having a shared policy is feasible in this scenario since the
latent action space is supposed to have a lower dimensionality than the
aggregated action space of all control agents (e. g. robots). To illustrate
this, we go back to the lifting example. Controlling the joint velocities of two robots with six degrees of freedom each would result in an action space of dimension twelve. Instead, controlling the wrench applied to the object
only requires an action space with six dimensions. Note that this number does
not grow with the number of agents or robots. We derive a lower bound to the
marginal likelihood:
$\displaystyle p\left(\mathbf{u}\mid\mathbf{o}\right)=\int p_{\theta}\left(\mathbf{u}\mid\mathbf{o},\mathbf{v}\right)\,p_{\psi}\left(\mathbf{v}\mid\mathbf{o}\right)\,d\mathbf{v}$
$\displaystyle\ln p\left(\mathbf{u}\mid\mathbf{o}\right)=\ln\int p_{\theta}\left(\mathbf{u}\mid\mathbf{o},\mathbf{v}\right)\,p_{\psi}\left(\mathbf{v}\mid\mathbf{o}\right)\,\frac{q_{\phi}\left(\mathbf{v}\mid\mathbf{o},\mathbf{u}\right)}{q_{\phi}\left(\mathbf{v}\mid\mathbf{o},\mathbf{u}\right)}\,d\mathbf{v}$
$\displaystyle\geq\int q_{\phi}\left(\mathbf{v}\mid\mathbf{o},\mathbf{u}\right)\ln\left(p_{\theta}\left(\mathbf{u}\mid\mathbf{o},\mathbf{v}\right)\frac{p_{\psi}\left(\mathbf{v}\mid\mathbf{o}\right)}{q_{\phi}\left(\mathbf{v}\mid\mathbf{o},\mathbf{u}\right)}\right)d\mathbf{v}$
$\displaystyle=\mathbb{E}_{q_{\phi}\left(\mathbf{v}\mid\mathbf{o},\mathbf{u}\right)}\left[\ln p_{\theta}\left(\mathbf{u}\mid\mathbf{o},\mathbf{v}\right)\right]-\operatorname{KL}\left(q_{\phi}\left(\mathbf{v}\mid\mathbf{o},\mathbf{u}\right)\,\middle\|\,p_{\psi}\left(\mathbf{v}\mid\mathbf{o}\right)\right)$ (1)
$\displaystyle=\mathcal{L}(\mathbf{u},\theta,\phi,\psi\mid\mathbf{o}),$ (2)
where $\operatorname{KL}\left(\cdot\,\middle\|\,\cdot\right)$ is the Kullback-Leibler divergence and $q_{\phi}\left(\mathbf{v}\mid\mathbf{o},\mathbf{u}\right)$ is the approximate posterior distribution:
$\displaystyle q_{\phi}\left(\mathbf{v}\mid\mathbf{o},\mathbf{u}\right)=\mathcal{N}\left(\mathbf{v};\mu_{v},\sigma_{v}^{2}\right),\qquad[\mu_{v},\sigma_{v}]=g_{\phi}(\mathbf{o},\mathbf{u}).$ (3)
Since the generative process of each agent’s action is distributed, the
likelihood is composed of multiple terms:
$p_{\theta}(\mathbf{u}\mid\mathbf{o},\mathbf{v})=[p_{\theta_{1}}(\mathbf{u}_{1}\mid\mathbf{o},\mathbf{v}),\dots,p_{\theta_{N}}(\mathbf{u}_{N}\mid\mathbf{o},\mathbf{v})],$ (4)
where $\theta_{i}$ denotes the decoder parameters for agent $i$, and $\theta=\{\theta_{i}\}_{i\in N}$. Note that the prior is conditioned on the observations; it is parameterized by $\psi$ and has a policy-like form $p_{\psi}(\mathbf{v}\mid\mathbf{o})$. We train it simultaneously with the encoder and decoders using the same loss function from equation (2).
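The loss in equation (2) combines a Gaussian log-likelihood with a KL divergence between diagonal Gaussians, both available in closed form. A minimal NumPy sketch of these terms (the helper names and the diagonal-Gaussian likelihood are our assumptions, not from the paper):

```python
import numpy as np

def diag_gauss_kl(mu_q, sig_q, mu_p, sig_p):
    """KL( N(mu_q, sig_q^2) || N(mu_p, sig_p^2) ) for diagonal Gaussians."""
    return np.sum(np.log(sig_p / sig_q)
                  + (sig_q**2 + (mu_q - mu_p)**2) / (2.0 * sig_p**2) - 0.5)

def gauss_log_lik(u, mu, sig):
    """Log-density of u under a diagonal Gaussian N(mu, sig^2)."""
    return np.sum(-0.5 * np.log(2.0 * np.pi * sig**2)
                  - (u - mu)**2 / (2.0 * sig**2))

def elbo(u, dec_mu, dec_sig, enc_mu, enc_sig, prior_mu, prior_sig):
    """Equation (2): reconstruction term minus KL(q_phi || p_psi).

    dec_*   -> parameters of p_theta(u | o, v) produced by the decoder,
    enc_*   -> parameters of q_phi(v | o, u) produced by the encoder,
    prior_* -> parameters of p_psi(v | o) produced by the prior network.
    """
    return (gauss_log_lik(u, dec_mu, dec_sig)
            - diag_gauss_kl(enc_mu, enc_sig, prior_mu, prior_sig))
```

In practice, the expectation over $q_{\phi}$ is approximated with a single reparameterized sample of $\mathbf{v}$, as in standard VAE training.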
As mentioned, agents in a Dec-POMDP only have access to a subset of the observations. However, we notice that in most environments, a certain part of the observations is shared across all agents; it usually relates to objects in the scene or other task-specific quantities, but not to an agent's embodiment. Even when this condition fails, it can be enforced in the learning process. For instance, in (Liu and Kitani, 2022), the latent space is designed to contain information about the object relevant to the task.
We introduce a new set of graphical models, as seen in figure 3. In this new
model, the latent action space is partitioned into $N+1$ parts. The first $N$
correspond to latent actions $\mathbf{v}_{i}$, which are specific to each
agent. The last part $\mathbf{v}_{c}$ is central and shared with all agents.
The generative process of each agent’s action (in the original action space)
is now conditioned on the agent’s observation $\mathbf{o}_{i}$, the latent
agent-specific action $\mathbf{v}_{i}$, and the latent central action
$\mathbf{v}_{c}$. As for inference, the whole latent action variable is
conditioned on the full observation $\mathbf{o}$ and the full action
$\mathbf{u}$. As in the previous case, using the full observation and action
for inference is possible because the encoder would not be used during
control. Instead, each agent has two policies: one policy producing
the latent agent-specific action $\mathbf{v}_{i}$ based on $\mathbf{o}_{i}$;
and another policy, shared across all agents, that generates the
latent central action based on the shared observation $\mathbf{o}_{c}$. These
two latent actions are then concatenated and decoded into the original action
space of the agent. We show the architecture of the policy in figure 9. Note that the policy updates also affect the
decoder. The new lower bound is very similar to the one in equation (2), with
the minor difference of:
$p_{\theta}\left(\mathbf{u}\mathrel{}\middle|\mathrel{}\mathbf{o},\mathbf{v}\right)=[p_{\theta_{1}}\left(\mathbf{u}_{1}\mathrel{}\middle|\mathrel{}\mathbf{o}_{1},\mathbf{o}_{c},\mathbf{v}_{1},\mathbf{v}_{c}\right),\dots,p_{\theta_{N}}\left(\mathbf{u}_{N}\mathrel{}\middle|\mathrel{}\mathbf{o}_{N},\mathbf{o}_{c},\mathbf{v}_{N},\mathbf{v}_{c}\right)].$
(5)
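The execution scheme described above — a per-agent latent policy plus one shared central policy, whose outputs are concatenated and decoded — can be sketched as follows. This is a toy sketch in which linear maps stand in for the neural policies and decoders; all dimensions and weight names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative dimensions (not from the paper)
OBS_I, OBS_C, LAT_I, LAT_C, ACT, N_AGENTS = 4, 3, 2, 2, 5, 2

# linear maps standing in for the neural policies and decoders
W_agent = [rng.standard_normal((LAT_I, OBS_I)) for _ in range(N_AGENTS)]
W_shared = rng.standard_normal((LAT_C, OBS_C))  # single policy shared by all agents
W_dec = [rng.standard_normal((ACT, OBS_I + OBS_C + LAT_I + LAT_C))
         for _ in range(N_AGENTS)]

def act(obs_i, obs_c, i):
    """Decentralized execution for agent i: no access to other agents' data."""
    v_i = W_agent[i] @ obs_i   # agent-specific latent action from o_i
    v_c = W_shared @ obs_c     # central latent action from the shared observation
    z = np.concatenate([obs_i, obs_c, v_i, v_c])
    return W_dec[i] @ z        # decode into agent i's original action space
```

Because $\mathbf{v}_{c}$ is computed from the shared observation $\mathbf{o}_{c}$ alone, every agent arrives at the same central latent action without any communication channel.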
### 3.3 Implementation Details:
The encoders, decoders, prior distributions, and policies involved in this
method are implemented as multi-layer neural networks using PyTorch (Paszke et
al., 2019). All distributions are Gaussian distributions squashed through a hyperbolic tangent ($\tanh$) transform. Actor, critic, and prior networks have
two hidden layers. Encoders and decoders have three hidden layers. For
training, we use the Adam optimizer (Kingma and Ba, 2014). Each policy is
optimized using soft-actor-critic (SAC) (Haarnoja et al., 2018). All modules
are trained using randomly sampled data from the replay buffer. The latter
contains trajectories sampled from the previously described multi-agent
policy. At the beginning of training, we only update the latent action model
using random actions in a warm-up phase that lasts for a hundred thousand
steps. We found this warm-up phase to improve training performance and stability.
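The $\tanh$-squashed Gaussian used for all distributions can be sketched as follows. This mirrors the squashed-Gaussian construction commonly used with SAC; the paper's actual PyTorch implementation is not shown, and the small stabilizer constant is our assumption:

```python
import numpy as np

def sample_tanh_gaussian(mu, sig, rng):
    """Sample from a tanh-squashed Gaussian and return the corrected log-density."""
    z = mu + sig * rng.standard_normal(mu.shape)          # z ~ N(mu, sig^2)
    u = np.tanh(z)                                        # squash into (-1, 1)
    log_pz = np.sum(-0.5 * np.log(2.0 * np.pi * sig**2)
                    - (z - mu)**2 / (2.0 * sig**2))
    # change of variables: log p(u) = log p(z) - log|du/dz|, du/dz = 1 - tanh(z)^2
    log_pu = log_pz - np.sum(np.log(1.0 - u**2 + 1e-6))   # 1e-6 avoids log(0)
    return u, log_pu
```

The change-of-variables correction keeps the log-density consistent with the bounded action space, which is what SAC's entropy term requires.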
Figure 4: Close-up screenshots from the simulation environments used in our
experiments. The environments are provided by robosuite (Zhu et al., 2020).
(left) dual-arm-peg-in-hole environment, (middle) dual-arm-lift, and (right) four-arm-lift environment with modified gripper structure.
## 4 Experiments
We designed our experiments to investigate the following questions:
($\mathpzc{Q1}$) Can central latent action spaces help coordinate action
generation in decentralized cooperative robotic manipulation? ($\mathpzc{Q2}$)
Does our method improve sample efficiency with respect to the selected
baselines? ($\mathpzc{Q3}$) Can our method reach or exceed the performance of
single-agent approaches with full-state information? ($\mathpzc{Q4}$) Is our
method scalable to more than two-arm manipulation tasks? ($\mathpzc{Q5}$) How
robust is our method to external disturbances? ($\mathpzc{Q6}$) Does our
method recover meaningful and task-relevant action representations?
### 4.1 Environments
[Figure: episodic-reward plots; panels (left to right): dual-arm-peg-in-hole, dual-arm-lift, four-arm-lift.]
Figure 5: Episodic rewards in simulated multi-robot manipulation tasks. We
compare our method (CLAS) to centralized single-agent and decentralized multi-
agent approaches. Our approach outperforms the considered decentralized multi-
agent approaches in all environments. It also manages to solve the four-arm-lift task, in which all the single-agent and decentralized multi-agent baselines fail.
We evaluated our method in three simulated environments based on robosuite
(Zhu et al., 2020). The environments are selected/built such that they require
cooperation between multiple robot arms. Due to the lack of standardized
environments that are suitable for our use case (i.e., multi-robot manipulation), we use existing environments from public benchmarks when suitable and build alternatives when needed. Due to the nature of our problem, we select environments with continuous state and action spaces. In all of the environments, each agent's observations are the corresponding robot's joint positions and velocities as well as its end-effector pose.
#### Dual-arm-peg-in-hole.
In the first environment, two robot arms cooperate on the peg-in-hole task. A
close-up view of the scene and the objects can be seen on the left in figure
4. We are using the original reward from robosuite, which is composed of a
reaching and orientation reward. The shared observation $o_{c}$ corresponds to
the poses of the peg and hole and the distance between them.
#### Dual-arm-lift.
For the second environment, we decided to use the dual-arm-lift environment.
In this environment, a rectangular pot with two handles is placed on a surface
between two robot arms. The task for each robot is to reach and grip the
handle before cooperatively lifting the pot off the surface. During initial
experiments, we noticed that the provided reward does not promote cooperation and can easily be exploited: the maximum reward per time step can be reached by controlling a single agent to tilt the pot and lift it only slightly off the table. This is due to the generous maximum tilt angle of $30^{\circ}$ and the
successful lift height of $0.10$. The other major component of the reward
measures the ability to reach and grasp the pot handles. However, we are not
interested in assessing the reaching and gripping capabilities but want to
rather reward cooperative lifting behavior. Therefore, we make the following modifications to the environment. At the start of an
episode, we move each robot’s end-effector close to its handle and weld the
pot handle to the end-effector with a distance constraint in the MuJoCo simulator (Todorov et al., 2012). We chose a distance constraint because it
constrains the position but leaves rotational coordinates free. We remove the
gripper fingers to avoid unwanted collisions. We visualize the resulting
starting condition in the middle of figure 4. We also modify the reward
function to enforce success only during high lifts. Additionally, the maximum
tilt angle is reduced such that both robots must cooperate to keep the pot
level at all times. We describe the final reward in equation (B.1) in the
appendix. The shared observation $o_{c}$ corresponds to the pose of the pot.
#### Four-arm-lift.
The third environment is an extension of the dual-arm-lift environment and uses two additional robot arms to lift the pot (i.e., four robot arms in total). Here, the pot's weight is increased to preserve the coordination requirement.
We build this environment for the sole purpose of testing scalability to more
than two robots/agents. The pot with four handles and the robot arms’
placement can be seen on the right in figure 4.
The changes to the lifting environments were evaluated with manual human
control to ensure that tricking the system or solving the task with a single
robot arm is not possible. Keeping a high reward was only possible when the pot was lifted vertically over a long sequence of steps. All environments use a
joint velocity controller which receives desired joint velocities from the
policy.
### 4.2 Baselines:
To validate our method, we compare it to well-established baselines that have
been previously applied to continuous control. Our experiments include the
following baselines:
* •
SINGLE: refers to having a single agent controlling all robots.
* •
LASER: uses a latent action space on top of a single agent controlling all
robots. This is based on the work in (Allshire et al., 2021).
* •
FULL_DEC: refers to having all agents trained with the exact observations and
actions they will have access to during execution. The agents are not provided
with a communication channel.
* •
SHARED_Q: similar architecture to FULL_DEC, but all agents are trained using a
central critic. This baseline is based on the work in (Lowe et al., 2017).
* •
CLAS: refers to our method and abbreviates “central latent action spaces.”
The first two single-agent approaches are included as strong baselines and reference points. They help us better understand the different environments and analyze our results in detail. Finally, to make the comparison more reliable, we use SAC for training the agents in all baselines.
Success rates over $10$ evaluation runs under external disturbances.

Disturbance range | FULL_DEC | SHARED_Q | CLAS
---|---|---|---
none | 70% | 100% | 100%
$[0,50]$ | 30% | 80% | 100%
$[50,100]$ | 20% | 60% | 100%
$[100,150]$ | 10% | 70% | 100%
$[150,200]$ | 20% | 40% | 100%
$[200,250]$ | 0% | 40% | 100%
$[250,300]$ | 0% | 30% | 90%
$[300,350]$ | 0% | 20% | 60%
Figure 6: Effect of applying disturbances (forces) at the center of mass of
the pot in the four-arm-lift environment. (left) Episode reward (right)
success rate under different ranges of disturbances. Results are based on $10$
evaluation runs. Our method demonstrates robustness across different ranges of disturbances in comparison to the other decentralized baselines, whose success rates decrease dramatically as the disturbance increases.
### 4.3 Results
Task Performance. Figure 5 shows the episodic reward obtained by our method and the baselines in the three considered environments.
Looking at the single-agent approaches, we observe that both baselines reach
high reward areas for the dual-arm tasks. However, they both fail to solve the
four-arm-lift task. At the end of training, the best mean episode reward
achieved by a single agent is substantially smaller than the maximum possible
reward and has a very large variance. This illustrates the problem of learning
multi-robot manipulation tasks with large action and observation spaces with a
single-agent RL approach. Compared to the dual-arm tasks, the four-arm-lift environment features state and action spaces twice as large.
Next, we analyze the results from MARL-based methods. FULL_DEC and SHARED_Q
struggle to keep up with single-agent RL methods. Neither method explicitly encourages coordination. Hence, this result might indicate that our
environments are well-suited for studying Dec-POMDPs, since they require a
certain degree of coordination to be solved. The two approaches manage to
solve the peg-in-hole task but struggle in the two other environments. They
also lead to very similar results. In contrast, our method (CLAS) successfully
solves all tasks even under partial observability. In the dual-arm-peg-in-hole
environment, it reaches a high episode reward after only 250 thousand
environment interaction steps, while the two other MARL approaches fail to do
so in triple the number of steps. Furthermore, it achieves a final performance
very close to the one achieved by single-agent methods. In the dual-arm-lift
environment, our approach outperforms both MARL-based baselines. Additionally,
it surpasses the final performance of the two other MARL approaches in only half the number of steps. More importantly, CLAS slightly outperforms the
single-agent methods. In the four-arm-lift environment, CLAS is the only
studied method that manages to solve the task and achieve a high reward. Even
the single-agent baselines which have access to full state information fail in
this task. This indicates that acting in the latent central action space
enables coordinated control even under partial observability and action
decentralization. Finally, we notice that our method leads to significantly
lower performance variance, which makes deploying it in real-world scenarios
more reliable.
Robustness analysis. We aim to evaluate the coordination capability of our
method by quantifying its robustness to external disturbances. We perform this
experiment on the four-arm-lift environment and compare the different
decentralized baselines to our method. For each method, we pick the model from
the training run with the best achieved performance. We then evaluate the
corresponding agents in the same environment as before, however, when
additionally applying an external force to the pot. The force is applied
during the steps in the interval $[10,100]$ and the values of the force vector
are uniformly sampled at each step to be in a certain range. We experimented
with multiple ranges. The results can be seen in figure 6. Under no disturbances ("none"), all methods achieve a high reward and a decent success rate. After applying disturbances in the range $[200,250]$, FULL_DEC fails in
all evaluation runs to solve the task. The success rate of SHARED_Q goes down
to $40\%$, but its reward remains relatively high as the agent manages to lift
the pot a bit but not always to the target height. On the other hand, our
method CLAS is almost not affected by this level of disturbances. As expected,
when increasing the magnitude of the forces, all methods start to fail more
often at solving the task, but CLAS appears to remain reasonably robust.
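The disturbance protocol described above could be sketched as follows. The component-wise uniform sampling and the sign convention are our assumptions, as the text only states that force values are uniformly sampled within a range at each step:

```python
import numpy as np

def disturbance_force(step, lo, hi, rng):
    """External force on the pot's center of mass, applied during steps [10, 100].

    Each force component is drawn uniformly from [lo, hi] at every step
    inside the disturbance window; outside the window no force is applied.
    """
    if not (10 <= step <= 100):
        return np.zeros(3)
    return rng.uniform(lo, hi, size=3)
```

Re-sampling the force at every step, rather than applying a constant push, tests whether the agents can continually re-coordinate rather than simply compensate for a fixed bias.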
## 5 Conclusion
We propose latent central action spaces for decentralized multi-agent control
in cooperative tasks. The main idea behind our method is to enable coordinated
control of multiple robot manipulators based on sharing a latent action space
that is agent-agnostic. During training time, our approach benefits from
central access to all observations and actions and uses this data to train the
latent action space model. During execution, each agent benefits from the
latent central action model to produce control commands that are coordinated
with other agents. We compare our approach to different baselines and show
that latent central action spaces improve the overall performance and
efficiency of learning. Interestingly, our method solves a task in which
centralized baselines struggle. Finally, we show that our approach improves
robustness against external disturbances.
## References
* Aljalbout et al. (2021) Elie Aljalbout, Ji Chen, Konstantin Ritt, Maximilian Ulmer, and Sami Haddadin. Learning vision-based reactive policies for obstacle avoidance. In _Conference on Robot Learning_ , pages 2040–2054. PMLR, 2021.
* Alles and Aljalbout (2022) Marvin Alles and Elie Aljalbout. Learning to centralize dual-arm assembly. _Frontiers in Robotics and AI_ , 9, 2022.
* Allshire et al. (2021) Arthur Allshire, Roberto Martín-Martín, Charles Lin, Shawn Manuel, Silvio Savarese, and Animesh Garg. Laser: Learning a latent action space for efficient reinforcement learning. In _2021 IEEE International Conference on Robotics and Automation (ICRA)_ , pages 6650–6656. IEEE, 2021.
* Amato et al. (2010) Christopher Amato, Daniel S Bernstein, and Shlomo Zilberstein. Optimizing fixed-size stochastic controllers for pomdps and decentralized pomdps. _Autonomous Agents and Multi-Agent Systems_ , 21(3):293–320, 2010.
* Bahl et al. (2020) Shikhar Bahl, Mustafa Mukadam, Abhinav Gupta, and Deepak Pathak. Neural dynamic policies for end-to-end sensorimotor learning. _Advances in Neural Information Processing Systems_ , 33:5058–5069, 2020.
* Bernstein et al. (2002) Daniel S Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. The complexity of decentralized control of markov decision processes. _Mathematics of operations research_ , 27(4):819–840, 2002.
* Bernstein et al. (2009) Daniel S Bernstein, Christopher Amato, Eric A Hansen, and Shlomo Zilberstein. Policy iteration for decentralized control of markov decision processes. _Journal of Artificial Intelligence Research_ , 34:89–132, 2009.
* Bogdanovic et al. (2020) Miroslav Bogdanovic, Majid Khadiv, and Ludovic Righetti. Learning variable impedance control for contact sensitive tasks. _IEEE Robotics and Automation Letters_ , 5(4):6129–6136, 2020. ISSN 2377-3766. 10.1109/LRA.2020.3011379.
* Buchli et al. (2011) Jonas Buchli, Freek Stulp, Evangelos Theodorou, and Stefan Schaal. Learning variable impedance control. _The International Journal of Robotics Research_ , 30(7):820–833, 2011. ISSN 0278-3649. 10.1177/0278364911402527. URL https://doi.org/10.1177/0278364911402527. Publisher: SAGE Publications Ltd STM.
* Canese et al. (2021) Lorenzo Canese, Gian Carlo Cardarilli, Luca Di Nunzio, Rocco Fazzolari, Daniele Giardino, Marco Re, and Sergio Spanò. Multi-agent reinforcement learning: A review of challenges and applications. _Applied Sciences_ , 11(11):4948, 2021.
* Das et al. (2019) Abhishek Das, Théophile Gervet, Joshua Romoff, Dhruv Batra, Devi Parikh, Mike Rabbat, and Joelle Pineau. Tarmac: Targeted multi-agent communication. In _International Conference on Machine Learning_ , pages 1538–1546. PMLR, 2019.
* Dibangoye and Buffet (2018) Jilles Dibangoye and Olivier Buffet. Learning to act in decentralized partially observable MDPs. In Jennifer Dy and Andreas Krause, editors, _Proceedings of the 35th International Conference on Machine Learning_ , volume 80 of _Proceedings of Machine Learning Research_ , pages 1233–1242. PMLR, 2018.
* Dibangoye et al. (2016) Jilles Steeve Dibangoye, Christopher Amato, Olivier Buffet, and François Charpillet. Optimally solving dec-pomdps as continuous-state mdps. _Journal of Artificial Intelligence Research_ , 55:443–497, 2016.
* Foerster et al. (2018) Jakob Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, and Shimon Whiteson. Counterfactual multi-agent policy gradients. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 32, 2018.
* Ganapathi et al. (2022) Aditya Ganapathi, Pete Florence, Jake Varley, Kaylee Burns, Ken Goldberg, and Andy Zeng. Implicit kinematic policies: Unifying joint and cartesian action spaces in end-to-end robot learning. _arXiv preprint arXiv:2203.01983_ , 2022.
* Graña et al. (2011) Manuel Graña, Borja Fernandez-Gauna, and Jose Manuel Lopez-Guede. Cooperative multi-agent reinforcement learning for multi-component robotic systems: guidelines for future research. _Paladyn_ , 2(2):71–81, 2011.
* Guestrin et al. (2002) Carlos Guestrin, Michail Lagoudakis, and Ronald Parr. Coordinated reinforcement learning. In _ICML_ , volume 2, pages 227–234. Citeseer, 2002.
* Gupta et al. (2017) Jayesh K Gupta, Maxim Egorov, and Mykel Kochenderfer. Cooperative multi-agent control using deep reinforcement learning. In _International conference on autonomous agents and multiagent systems_ , pages 66–83. Springer, 2017.
* Haarnoja et al. (2018) Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In _International conference on machine learning_ , pages 1861–1870. PMLR, 2018.
* Karamcheti et al. (2021) Siddharth Karamcheti, Megha Srivastava, Percy Liang, and Dorsa Sadigh. LILA: Language-informed latent actions. In _5th Annual Conference on Robot Learning_ , 2021. URL https://openreview.net/forum?id=_lkBGOctkip.
* Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014.
* Kingma and Welling (2013) Diederik P Kingma and Max Welling. Auto-encoding variational bayes. _arXiv preprint arXiv:1312.6114_ , 2013.
* Kiran et al. (2021) B Ravi Kiran, Ibrahim Sobh, Victor Talpaert, Patrick Mannion, Ahmad A Al Sallab, Senthil Yogamani, and Patrick Pérez. Deep reinforcement learning for autonomous driving: A survey. _IEEE Transactions on Intelligent Transportation Systems_ , 2021.
* Kober et al. (2013) Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. _The International Journal of Robotics Research_ , 32(11):1238–1274, 2013.
* Lee et al. (2020) Youngwoon Lee, Jingyun Yang, and Joseph J. Lim. Learning to coordinate manipulation skills via skill behavior diversification. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=ryxB2lBtvH.
* Liu et al. (2021) Luyu Liu, Qianyuan Liu, Yong Song, Bao Pang, Xianfeng Yuan, and Qingyang Xu. A collaborative control method of dual-arm robots based on deep reinforcement learning. _Applied Sciences_ , 11(4):1816, 2021.
* Liu et al. (2020) Minghuan Liu, Ming Zhou, Weinan Zhang, Yuzheng Zhuang, Jun Wang, Wulong Liu, and Yong Yu. Multi-agent interactions modeling with correlated policies. _arXiv preprint arXiv:2001.03415_ , 2020.
* Liu and Kitani (2022) Xingyu Liu and Kris M. Kitani. V-mao: Generative modeling for multi-arm manipulation of articulated objects. In Aleksandra Faust, David Hsu, and Gerhard Neumann, editors, _Proceedings of the 5th Conference on Robot Learning_ , volume 164 of _Proceedings of Machine Learning Research_ , pages 287–296. PMLR, 08–11 Nov 2022. URL https://proceedings.mlr.press/v164/liu22a.html.
* Lowe et al. (2017) Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. In _Proceedings of the 31st International Conference on Neural Information Processing Systems_ , NIPS’17, pages 6382–6393. Curran Associates Inc., 2017. ISBN 978-1-5108-6096-4.
* Martín-Martín et al. (2019) Roberto Martín-Martín, Michelle A. Lee, Rachel Gardner, Silvio Savarese, Jeannette Bohg, and Animesh Garg. Variable impedance control in end-effector space: An action space for reinforcement learning in contact-rich tasks. In _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 1010–1017, 2019. 10.1109/IROS40897.2019.8968201. ISSN: 2153-0866.
* Mnih et al. (2013) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. _arXiv preprint arXiv:1312.5602_ , 2013.
* Niu et al. (2021) Yaru Niu, Rohan R Paleja, and Matthew C Gombolay. Multi-agent graph-attention communication and teaming. In _AAMAS_ , pages 964–973, 2021.
* Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems 32_ , pages 8024–8035. Curran Associates, Inc., 2019.
* Peng and van de Panne (2017) Xue Bin Peng and Michiel van de Panne. Learning locomotion skills using DeepRL: does the choice of action space matter? In _Proceedings of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation_ , SCA ’17, pages 1–13. Association for Computing Machinery, 2017. ISBN 978-1-4503-5091-4. 10.1145/3099564.3099567. URL https://doi.org/10.1145/3099564.3099567.
* Pretorius et al. (2020) Arnu Pretorius, Scott Cameron, Andries Petrus Smit, Elan van Biljon, Lawrence Francis, Femi Azeez, Alexandre Laterre, and Karim Beguir. Learning to communicate through imagination with model-based deep multi-agent reinforcement learning. 2020.
* Raileanu et al. (2018) Roberta Raileanu, Emily Denton, Arthur Szlam, and Rob Fergus. Modeling others using oneself in multi-agent reinforcement learning. In _International conference on machine learning_ , pages 4257–4266. PMLR, 2018.
* Rana et al. (2022) Krishan Rana, Ming Xu, Brendan Tidd, Michael Milford, and Niko Suenderhauf. Residual skill policies: Learning an adaptable skill-based action space for reinforcement learning for robotics. In _6th Annual Conference on Robot Learning_ , 2022. URL https://openreview.net/forum?id=0nb97NQypbK.
* Rashid et al. (2018) Tabish Rashid, Mikayel Samvelyan, Christian Schroeder, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning. In _International Conference on Machine Learning_ , pages 4295–4304. PMLR, 2018.
* Schaal (2006) Stefan Schaal. Dynamic movement primitives -a framework for motor control in humans and humanoid robotics. In _Adaptive Motion of Animals and Machines_. Springer, 2006.
* Silver et al. (2016) David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. _Nature_ , 529(7587):484–489, 2016. ISSN 0028-0836. 10.1038/nature16961.
* Singh et al. (2018) Amanpreet Singh, Tushar Jain, and Sainbayar Sukhbaatar. Learning when to communicate at scale in multiagent cooperative and competitive tasks. In _International Conference on Learning Representations_ , 2018.
* Sukhbaatar et al. (2016) Sainbayar Sukhbaatar, arthur szlam, and Rob Fergus. Learning multiagent communication with backpropagation. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, _Advances in Neural Information Processing Systems_ , volume 29. Curran Associates, Inc., 2016.
* Sunehag et al. (2018) Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinícius Flores Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z. Leibo, Karl Tuyls, and Thore Graepel. Value-decomposition networks for cooperative multi-agent learning based on team reward. In _AAMAS_ , pages 2085–2087, 2018. URL http://dl.acm.org/citation.cfm?id=3238080.
* Todorov et al. (2012) Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In _2012 IEEE/RSJ International Conference on Intelligent Robots and Systems_ , pages 5026–5033. IEEE, 2012.
* Ulmer et al. (2021) Maximilian Ulmer, Elie Aljalbout, Sascha Schwarz, and Sami Haddadin. Learning robotic manipulation skills using an adaptive force-impedance action space. _arXiv preprint arXiv:2110.09904_ , 2021.
* Varin et al. (2019) Patrick Varin, Lev Grossman, and Scott Kuindersma. A comparison of action spaces for learning manipulation tasks. In _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 6015–6021, 2019. 10.1109/IROS40897.2019.8967946. ISSN: 2153-0866.
* Wang et al. (2020) Rose E Wang, J Chase Kew, Dennis Lee, Tsang-Wei Edward Lee, Tingnan Zhang, Brian Ichter, Jie Tan, and Aleksandra Faust. Model-based reinforcement learning for decentralized multiagent rendezvous. _arXiv preprint arXiv:2003.06906_ , 2020.
* Willemsen et al. (2021) Daniël Willemsen, Mario Coppola, and Guido CHE de Croon. Mambpo: Sample-efficient multi-robot reinforcement learning using learned world models. In _2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 5635–5640. IEEE, 2021.
* Xie et al. (2020) Annie Xie, Dylan P Losey, Ryan Tolsma, Chelsea Finn, and Dorsa Sadigh. Learning latent representations to influence multi-agent interaction. _arXiv preprint arXiv:2011.06619_ , 2020.
* Yu et al. (2021) Xiaopeng Yu, Jiechuan Jiang, Haobin Jiang, and Zongqing Lu. Model-based opponent modeling. _arXiv preprint arXiv:2108.01843_ , 2021.
* Zhang et al. (2021a) Kaiqing Zhang, Zhuoran Yang, and Tamer Başar. Multi-agent reinforcement learning: A selective overview of theories and algorithms. _Handbook of Reinforcement Learning and Control_ , pages 321–384, 2021a.
* Zhang et al. (2021b) Qizhen Zhang, Chris Lu, Animesh Garg, and Jakob Foerster. Centralized model and exploration policy for multi-agent rl. _arXiv preprint arXiv:2107.06434_ , 2021b.
* Zhou et al. (2020) Wenxuan Zhou, Sujay Bajracharya, and David Held. Plas: Latent action space for offline reinforcement learning. In _Conference on Robot Learning_ , 2020.
* Zhu et al. (2020) Yuke Zhu, Josiah Wong, Ajay Mandlekar, and Roberto Martín-Martín. robosuite: A modular simulation framework and benchmark for robot learning. _arXiv preprint arXiv:2009.12293_ , 2020.
## Appendix A Further Details
### A.1 Models
We provide further figures illustrating the computational architecture and
graphical models related to the different components of the algorithm. Figure 7 shows the graphical models of the policies involved in our method under full and partial observability.
### A.2 Derivations
Here we go over the derivation of equation (1), providing additional steps and explanations of how the derivation is performed:
$\displaystyle p(\mathbf{u}\mid\mathbf{o})$ $\displaystyle=\int p_{\theta}(\mathbf{u}\mid\mathbf{o},\mathbf{v})\,p_{\psi}(\mathbf{v}\mid\mathbf{o})\,d\mathbf{v}$
$\displaystyle\ln p(\mathbf{u}\mid\mathbf{o})$ $\displaystyle=\ln\int p_{\theta}(\mathbf{u}\mid\mathbf{o},\mathbf{v})\,p_{\psi}(\mathbf{v}\mid\mathbf{o})\,d\mathbf{v}$
$\displaystyle\ln p(\mathbf{u}\mid\mathbf{o})$ $\displaystyle=\ln\int p_{\theta}(\mathbf{u}\mid\mathbf{o},\mathbf{v})\,p_{\psi}(\mathbf{v}\mid\mathbf{o})\,\frac{q_{\phi}(\mathbf{v}\mid\mathbf{o},\mathbf{u})}{q_{\phi}(\mathbf{v}\mid\mathbf{o},\mathbf{u})}\,d\mathbf{v}$
$\displaystyle\geq\int q_{\phi}(\mathbf{v}\mid\mathbf{o},\mathbf{u})\ln\Bigl(p_{\theta}(\mathbf{u}\mid\mathbf{o},\mathbf{v})\,\frac{p_{\psi}(\mathbf{v}\mid\mathbf{o})}{q_{\phi}(\mathbf{v}\mid\mathbf{o},\mathbf{u})}\Bigr)d\mathbf{v}$
$\displaystyle=\mathbb{E}_{q_{\phi}(\mathbf{v}\mid\mathbf{o},\mathbf{u})}\Bigl[\ln\Bigl(p_{\theta}(\mathbf{u}\mid\mathbf{o},\mathbf{v})\,\frac{p_{\psi}(\mathbf{v}\mid\mathbf{o})}{q_{\phi}(\mathbf{v}\mid\mathbf{o},\mathbf{u})}\Bigr)\Bigr]$
$\displaystyle=\mathbb{E}_{q_{\phi}(\mathbf{v}\mid\mathbf{o},\mathbf{u})}\bigl[\ln p_{\theta}(\mathbf{u}\mid\mathbf{o},\mathbf{v})-\ln q_{\phi}(\mathbf{v}\mid\mathbf{o},\mathbf{u})+\ln p_{\psi}(\mathbf{v}\mid\mathbf{o})\bigr]$
$\displaystyle=\mathbb{E}_{q_{\phi}(\mathbf{v}\mid\mathbf{o},\mathbf{u})}\bigl[\ln p_{\theta}(\mathbf{u}\mid\mathbf{o},\mathbf{v})\bigr]-\operatorname{KL}\bigl(q_{\phi}(\mathbf{v}\mid\mathbf{o},\mathbf{u})\,\big\|\,p_{\psi}(\mathbf{v}\mid\mathbf{o})\bigr)$
$\displaystyle=\mathcal{L}(\mathbf{u},\theta,\phi,\psi\mid\mathbf{o}).$
The inequality step is based on Jensen’s inequality, the penultimate step follows from
the product and quotient rules of logarithms, and the last step is based on
the definition of the KL divergence. The derivation is in line with the
original lower bound derivation for variational autoencoders (Kingma and
Welling, 2013).
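The bound can be checked numerically in a toy setting. The sketch below (pure Python; univariate Gaussians, with the conditioning on $\mathbf{o}$ dropped, and all distributions and parameter values our own illustrative assumptions) evaluates the single-sample estimator $\ln p_{\theta}(\mathbf{u}\mid\mathbf{v}) + \ln p_{\psi}(\mathbf{v}) - \ln q_{\phi}(\mathbf{v})$ with $\mathbf{v}\sim q_{\phi}$; when $q_{\phi}$ is the exact posterior, the estimator is constant in $\mathbf{v}$ and recovers the log marginal likelihood exactly.

```python
import math
import random

def log_normal(x, mean, var):
    """Log density of a univariate Gaussian N(mean, var)."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def elbo_single_sample(u, prior_mean, prior_var, lik_var, q_mean, q_var, rng):
    """One-sample Monte Carlo estimate of the lower bound:
    ln p(u|v) + ln p(v) - ln q(v), with v ~ q."""
    v = rng.gauss(q_mean, math.sqrt(q_var))
    return (log_normal(u, v, lik_var)
            + log_normal(v, prior_mean, prior_var)
            - log_normal(v, q_mean, q_var))

# Conjugate Gaussian toy case: v ~ N(0, 1), u | v ~ N(v, 0.5).  With q set
# to the exact posterior, the estimator equals ln p(u) = ln N(u; 0, 1.5)
# for every sample, by Bayes' rule.
rng = random.Random(0)
u, prior_var, lik_var = 0.7, 1.0, 0.5
post_var = 1.0 / (1.0 / prior_var + 1.0 / lik_var)
post_mean = post_var * (u / lik_var)
log_marginal = log_normal(u, 0.0, prior_var + lik_var)
estimate = elbo_single_sample(u, 0.0, prior_var, lik_var, post_mean, post_var, rng)
```

For any other choice of $q_{\phi}$ the estimator is a strict lower bound in expectation, which is the gap the KL term in the derivation measures.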
Figure 7: Graphical models of the policies used by CLAS for the cases of
partial (a) and full observability (b).
Figure 8: Graphical models under full access to observations for all agents.
(a) action generation, (b) latent action inference. During generation of
actions $\mathbf{u}_{i}$ each agent $i$ requires input from global
observations $\mathbf{o}$ and central latent actions $\mathbf{v}$. In order to
infer latent actions $\mathbf{v}$ information from all agents and the global
observation is needed.
Figure 9: Graphical models under partial agent observability. (a) action
generation, (b) latent action inference. During generation of action
$\mathbf{u}_{i}$ the input observation excludes all the other agents
observations $\mathbf{o}_{-i}$ and latent actions $\mathbf{v}_{-i}$. Inference
is done based on observations and actions from all agents.
## Appendix B Experiments
### B.1 Setup
Here we provide further details concerning our setup and experimental design,
so as to enable easy reproduction of our work.
The environments we use are based on joint-velocity-control action spaces.
Each agent receives the corresponding robot’s proprioceptive measurements, and
the shared observation corresponds to object observations. For evaluation, we
run each episode for 500 steps, leading to a maximal reward of 500. We run all
evaluation experiments 10 times with different random seeds.
The reward used in the Lift environment is the following:
\begin{align}
r_{\text{lift}} &= \max(d-0.05,\, 0) \nonumber\\
r_{\text{dir}} &= \begin{cases}1, & \text{for }\cos(\alpha)\geq\cos(10^{\circ})\\ 0, & \text{for }\cos(\alpha)<\cos(10^{\circ})\end{cases}\\
r &= \frac{1}{3}\begin{cases}3\,r_{\text{dir}}, & \text{for }d>0.35\\ 10\,r_{\text{dir}}+r_{\text{lift}}, & \text{for }d\leq 0.35,\end{cases}
\end{align}
where $d$ represents the distance between the surface and the pot, and $\alpha$
the tilt angle of the pot.
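The piecewise reward above transcribes directly into code. The sketch below is our own; the function signature and the assumption that $\alpha$ arrives in degrees are illustrative, not from the environment implementation.

```python
import math

def lift_reward(d, alpha_deg):
    """Sketch of the Lift-environment reward described above.
    d: distance between the surface and the pot;
    alpha_deg: tilt angle of the pot, in degrees (assumed)."""
    r_lift = max(d - 0.05, 0.0)
    # r_dir is 1 when the pot is tilted by at most 10 degrees.
    r_dir = 1.0 if math.cos(math.radians(alpha_deg)) >= math.cos(math.radians(10.0)) else 0.0
    if d > 0.35:
        return (3.0 * r_dir) / 3.0
    return (10.0 * r_dir + r_lift) / 3.0
```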
### B.2 Results under full observability
Here we study the performance of our method in the case where all agents have
access to the full observation. We again compare to the same baselines.
Similar to the results in section 4, our method outperforms all MARL
baselines in terms of final reward and sample efficiency. It also approaches
the performance of the centralized single agents, and even outperforms them in
four-arm-lift.
Figure 10: Results under full agent observability.
### B.3 Ablations
In section 4, we showed that the shared latent actions are active during
control. To make sure that the shared latent actions are not ignored during
execution we perform the following experiment. We replace the shared latent
actions with zeros during inference, and compare the achieved episodic reward
to the standard case using our method. The results are in figure 13. For the
peg-in-hole environment, the difference in performance is minor. This is
mainly because the task does not involve objects that are independent of the
robots; instead, the peg and the hole are attached to the corresponding
robots. The improvement shown by our method in figure
LABEL:fig:reward_plots_app is mainly due to the centralized training of the
latent action space model. For the lifting environments, however, where a
robot-independent object is to be manipulated, masking the shared latent
actions makes a large difference: replacing them with zeros leads to very low
rewards. These two results indicate that our action space model maps robot
actions into actions acting on the objects in some shared space.
### B.4 Results under Asymmetry of action spaces
To check whether our approach is capable of handling asymmetric action spaces
and multiple robots, we compare its performance to the baselines again in the
dual-arm-lift environments. However, this time we use two different robots in
the environment, namely a Panda and a Sawyer robot. The Panda is
equipped with a joint velocity action space and the Sawyer with an operational
space controller. The results are in figure 11. CLAS is the only decentralized
method that finds policies capable of lifting the pot, while the other two
decentralized baselines as well as SINGLE struggle to do so.
Figure 11: Reward plots in the dual-arm-lift environments when using two
different robots with different action spaces.
### B.5 Coordination
To demonstrate the coordination achieved by both agents we plot the desired
joint velocity generated by the policy and the achieved joint velocity for
both agents. This can be seen in figure 14 and 15. We notice that the dominant
pattern across all plots is the diagonal. This shows that the policy outputs
are used by both robots as opposed to having one robot being controlled by a
policy, while the other being purely reactive and ignoring its policy outputs.
The fourth joint is the only exception, where some policy outputs are ignored
(mapped to zeros). However, also in this case, the most values fall on the
diagonal.
### B.6 Qualitative Results
Analyzing the central latent action space. To further validate our method, we
examine the shared latent actions produced during evaluation. Figure 12 shows
trajectories of the shared latent actions produced by our model for the dual-
arm-lift task. We observe that most shared latent action dimensions are active
during control. One latent action is constant during execution, which
illustrates that our approach can recover a lower-dimensional action space
even when configured differently. Furthermore, the sequence of actions from
the most varying latent action (in red) correlates strongly with the
z-position trajectory of the pot (figure 12); the z-position follows this
latent action with a slight time delay. In this case, the latent action
represents the desired z-positions of the pot needed to lift it. This is an
interesting finding, since our approach does not explicitly enforce any
physical form or structure on the latent action space; the emergence of this
property is purely due to the compression capabilities of variational
autoencoders. Note that the plots in figure 12 are qualitative results meant
only to illustrate emergent latent action spaces, and do not imply that our
approach is interpretable.
Figure 12: Central latent action trajectories for the lifting task. (a)
Trajectories of all latent action dimensions. (b) Correlation between one
shared latent action and the z position of the pot. The z-position trajectory
of the pot (blue curve) follows the latent action trajectory (red curve).
Figure 13: Effect of masking the shared latent action on the achieved total
reward.
Figure 14: Plots of the achieved joint velocity based on the commanded joint
velocity for the two agents involved in the Lifting task. Each row indicates a
joint [1-4].
Figure 15: Plots of the achieved joint velocity based on the commanded joint
velocity for the two agents involved in the Lifting task. Each row indicates a
joint [5-7].
The success of deep learning over the past decade relies mainly on gradient-based optimisation and backpropagation. This paper analyses the performance of first-order gradient-based optimisation algorithms, gradient descent and proximal gradient, with time-varying non-convex cost functions under the (proximal) Polyak-Łojasiewicz condition. Specifically, we focus on using the forward mode of automatic differentiation to compute gradients in fast-changing problems where calculating gradients using the backpropagation algorithm is either impossible or inefficient. Upper bounds for tracking and asymptotic errors are derived for various cases, showing linear convergence to a solution or a neighbourhood of an optimal solution, where the convergence rate decreases as the dimension of the problem increases. We show that, for a solver with constraints on computing resources, the number of forward gradient iterations at each step can be a design parameter that trades off between tracking performance and computing constraints.
Online optimisation, forward gradient, over-parameterised systems, PL condition
§ INTRODUCTION
Gradient-based optimisation is the core of most machine learning algorithms <cit.>. Backpropagation, the reverse mode of Automatic Differentiation (AD), has been the main algorithm for computing gradients. <cit.> argued that, in training neural networks, one can calculate gradients based on directional derivatives, which is faster than backpropagation. An unbiased estimate of the gradient, named the forward gradient, can be calculated in a single forward run of the function and speeds up training by up to a factor of two in some examples. Forward differentiation is a mode of automatic differentiation based on the mathematical definition of dual numbers (see <cit.> for a review).
Given a function $f: \mathbb{R}^n \to \mathbb{R}$, the forward gradient at point $\boldsymbol{x}$ is defined as
\begin{align}
\boldsymbol{g}(\boldsymbol{x}, \boldsymbol{u}) = \langle \nabla f(\boldsymbol{x}), \boldsymbol{u}\rangle \boldsymbol{u},
\end{align}
where $\boldsymbol{u}$ is a random vector with zero mean and unit variance. Note that $\langle \nabla f(\boldsymbol{x}), \boldsymbol{u} \rangle$ is the directional derivative of $f$ at point $\boldsymbol{x}$ in the direction $\boldsymbol{u}$. The forward gradient, $\boldsymbol{g}(\boldsymbol{x}, \boldsymbol{u})$, is an unbiased estimator of $\nabla f(\boldsymbol{x})$ as long as the components of the direction $\boldsymbol{u}$ are independent and sampled from a distribution with zero mean and unit variance: $\left[\mathbb{E}(\boldsymbol{g}(\boldsymbol{x}, \boldsymbol{u}))\right]_i = \mathbb{E}(u_i \sum_{j=1}^n (\nabla f(\boldsymbol{x}))_j u_j) = (\nabla f(\boldsymbol{x}))_i$, where $a_i$ is the $i$-th element of vector $\boldsymbol{a}$.
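This unbiasedness property admits a quick Monte Carlo check, sketched below in pure Python with standard-normal directions (the gradient vector stands in for $\nabla f(\boldsymbol{x})$ and is an arbitrary illustrative choice):

```python
import random

def forward_gradient(grad, u):
    """Forward gradient g(x, u) = <grad f(x), u> u for one direction u."""
    dot = sum(g * ui for g, ui in zip(grad, u))
    return [dot * ui for ui in u]

rng = random.Random(0)
true_grad = [1.0, -2.0, 0.5]   # stands in for grad f(x)
n_samples = 200_000
acc = [0.0, 0.0, 0.0]
for _ in range(n_samples):
    u = [rng.gauss(0.0, 1.0) for _ in range(3)]
    fg = forward_gradient(true_grad, u)
    acc = [a + g for a, g in zip(acc, fg)]
# The sample mean of the forward gradient approaches grad f(x).
mean_fg = [a / n_samples for a in acc]
```

A single sample is a noisy rank-one estimate; only the average over directions recovers the gradient, which is why the variance bound below carries the $(m+4)$ factor.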
Deep neural networks can be considered a particular case of over-parameterised systems of nonlinear equations, in which there is potentially a staggering number of trainable parameters <cit.>. The authors of <cit.> argue that focusing on the convexity of the problem does not lead to a suitable framework for analysing such loss functions, and instead show that the Polyak-Łojasiewicz (PL) condition <cit.> is satisfied on most of the parameter space. This in turn explains the fast convergence of the Gradient Descent (GD) algorithm to a global minimum.
Another area that has seen a flurry of activity is the study of time-varying objective functions. These problems arise in many applications, including but not limited to robotics <cit.>, smart grids <cit.>, communication systems <cit.> and signal processing <cit.>. An example is a machine learning problem whose training data (problem parameters) are observed during the learning process. This can be considered a sequence of optimisation problems. A solver may solve each problem completely before approaching the next, which is not feasible when each instance of the problem is of large scale (considering the rate of change and the speed of the solver) or involves communication over a network. A more efficient way may therefore be to solve each problem partially (e.g. performing only a few gradient descent steps) before receiving the next problem. These algorithms are called running methods in <cit.>. Consider a neural network that has been trained over a data set $\mathcal{D} = \{\boldsymbol{a}_i, b_i \}_{i=0}^{n-1}$. Let $\mathcal{L}_0(\boldsymbol{x}) = \frac{1}{2}\sum_{i=0}^{n-1} \|f(\boldsymbol{x} ; \boldsymbol{a}_i) - b_i \|^2$. Assume that, while deploying the pre-trained neural network, new samples of data arrive at a high rate and one needs to update the training variable of the network, $\boldsymbol{x}$. This is an over-parameterised online optimisation problem. We assume that only the latest $n$ data samples are used to update the variables. The corresponding cost function is
\begin{align}
\mathcal{L}_k(\boldsymbol{x}) = \frac{1}{2}\sum_{i=n_k}^{n_k+n-1} \| f(\boldsymbol{x} ; \boldsymbol{a}_i) - b_i \|^2,
\end{align}
where $n_k - n_{k-1}$ is the number of newly observed samples.
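This sliding-window cost can be sketched in a few lines; the scalar model and the toy data stream below are our own illustrative assumptions.

```python
def window_loss(f, x, data, n_k, n):
    """L_k(x) = 0.5 * sum over the window [n_k, n_k + n) of (f(x; a_i) - b_i)^2,
    where data is a list of (a_i, b_i) pairs."""
    return 0.5 * sum((f(x, a) - b) ** 2 for a, b in data[n_k:n_k + n])

# Toy scalar model f(x; a) = x * a with a stream of four samples.
stream = [(1.0, 2.0), (2.0, 4.0), (3.0, 7.0), (4.0, 8.0)]
model = lambda x, a: x * a
loss_k0 = window_loss(model, 2.0, stream, 0, 2)  # x = 2 fits the first window exactly
loss_k1 = window_loss(model, 2.0, stream, 2, 2)  # the window has moved on
```

As the window slides, the same parameter value that minimised $\mathcal{L}_0$ may no longer minimise $\mathcal{L}_k$, which is precisely the tracking problem studied below.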
Note that we assume that the structure of the neural network does not change over time and we only update its variables after observing new data examples (see <cit.> and references therein for a review of expandable neural networks). We assume
* The algorithm uses the latest samples (because they represent the data distribution better);
* Only $n$ samples (e.g. because of memory limitations) can be used;
* Due to its fast computation, forward gradient is to be used in the optimisation steps.
Ideally, the solution generated by the method at each time step, $\boldsymbol{x}_k$, follows $\boldsymbol{x}_k^*$. In other words, the tracking error $\|\boldsymbol{x}_k - \boldsymbol{x}_k^* \|$ or the instantaneous sub-optimality gap $\|\mathcal{L}_k(\boldsymbol{x}_k) - \mathcal{L}_k(\boldsymbol{x}_k^*) \|$ either shrinks or remains bounded as $k$ grows. We assume that $\ell$ optimisation steps are applied to $\mathcal{L}_k$ before receiving the new batch of data, where $\ell$ is chosen based on the above constraints.
In this paper, we focus on the analysis of the performance of forward gradient-based method in over-parameterised time-varying settings characterised by the following cost function at time $k$:
\begin{align}\label{composite_sum_timevar}
\min_{\boldsymbol{x}\in \mathbb{R}^m} \ \{ \mathcal{L}_k(\boldsymbol{x}) := g_k(\boldsymbol{x}) + h_k(\boldsymbol{x}) \}.
\end{align}
Function $g_k$ is smooth and potentially nonconvex, and $h_k(\boldsymbol{x})$ is a possibly nonsmooth convex function that imposes some structure on the solution, such as sparsity. We consider two cases, with and without the presence of the function $h$ in the cost function. Such problems arise in online machine learning <cit.> and have been well studied under (strong) convexity assumptions. The contributions of this paper are listed below:
* We prove the linear convergence in expectation of forward-gradient method to the minimiser for smooth nonconvex PL objective functions (Theorem <ref>).
* We prove that the expected tracking error of the forward-gradient method in a time-varying nonconvex setting, under smoothness and PL assumptions, converges to a neighbourhood of zero at a linear rate (Theorem <ref>). We show that the number of iterations at each step, $\ell$, is a design parameter that trades off between the tracking performance and computing resources constraints.
* In the case where the objective cost function is the sum of a PL function and a convex (possibly non-smooth) function, we prove that the expected tracking error of the proximal forward-gradient algorithm converges to a neighbourhood of zero for time-varying cost functions under the proximal-PL condition (Theorem <ref>). The error depends on the problem dimension, but the rate does not. The proof of quadratic growth for proximal-PL functions is also novel.
The proofs are described in detail in the technical report associated with this paper <cit.>.
The rest of the paper is organised as follows: in Section <ref>, we discuss related works, and in Section <ref>, we review notations, definitions, basic assumptions and some background knowledge. In Section <ref>, we prove the linear convergence of forward gradient algorithm for the PL objective functions. In Section <ref>, we analyse the performance of gradient descent and proximal gradient algorithms under PL and Proximal-PL conditions. Section <ref> concludes the paper.
§ RELATED WORKS
PL condition in over-parameterised nonlinear systems: The nonlinear least-squares problem has been extensively studied in under-parameterised systems <cit.>. In <cit.>, sufficient conditions for satisfying the PL condition in a wide neural network with a least-squares loss function and the linear convergence of (S)GD have been studied, and the relation with the neural tangent kernel has been discussed <cit.>.
Static cost functions: In <cit.>, the convergence of Randomised Coordinate Descent (RCD), Stochastic Gradient Descent, and SVRG algorithms under the PL condition, and of the proximal coordinate descent algorithm under proximal-PL, has been studied. Inexact gradient descent algorithms have been analysed under a wide range of assumptions (see <cit.> and references therein). In <cit.>, $\mathcal{O}(1/k)$ convergence of SGD under a weaker condition (the Weak Growth Condition) has been proved.
Time-varying cost functions: A line of research closely related to this paper is online convex optimisation (OCO), originally introduced in the seminal work of <cit.>. Since the focus of this work is on first-order nonconvex optimisation algorithms with inexact gradients, we omit the vast literature on online exact first-order algorithms under strong convexity and smoothness assumptions (see <cit.> and references therein). <cit.> analysed the regret bounds of online first-order optimisation algorithms under (strong) convexity and smoothness with noisy gradients. Regret analyses of online stochastic fixed and time-varying optimisation have been carried out in <cit.> and <cit.>. <cit.> and <cit.> have analysed online (proximal) gradient methods with sub-Weibull gradient error under strong convexity and PL conditions, respectively. In <cit.>, the authors propose a method for estimating the gradient in a time-varying setting with a zeroth-order distributed optimisation algorithm.
§ NOTATION AND PRELIMINARIES
Throughout this report, we denote the set of real numbers by $\mathbb{R}$. For vectors $\boldsymbol{a}, \boldsymbol{b}\in \mathbb{R}^n$, the Euclidean inner product and its corresponding norm are denoted by $\langle \boldsymbol{a} , \boldsymbol{b} \rangle$ and $\| \boldsymbol{a}\|$, respectively. We denote the open ball centred at $\boldsymbol{x}$ with radius $r$ by $B(\boldsymbol{x}, r)$, and the projection of a vector $\boldsymbol{x}$ onto a set $\mathcal{X}$ by $\pi_{\mathcal{X}}(\boldsymbol{x})$. The Euclidean distance to a set $\mathcal{X} \subseteq \mathbb{R}^n$ is defined by $\operatorname{dist}(\boldsymbol{x}, \mathcal{X}):= \inf_{\boldsymbol{u} \in \mathcal{X}} \| \boldsymbol{u} - \boldsymbol{x} \|$ for every $\boldsymbol{x}\in \mathbb{R}^n$. With $\mathcal{X}^*$ we denote the set of minimisers of a function. For a matrix $A\in \mathbb{R}^{m\times n}$, $\| A\|$ and $\| A\|_F$ are the spectral and Frobenius norms of the matrix, respectively. We use $\mathcal{D} F(\cdot)$ and $\mathcal{D}^2 F(\cdot)$ to denote the derivative and Hessian of a function $F:\mathbb{R}^n \to \mathbb{R}$, respectively. We use the $\widetilde{\mathcal{O}}(\cdot)$ notation to ignore logarithmic factors in big-O notation, i.e. $\widetilde{\mathcal{O}}(f(n)) = \mathcal{O}(f(n)\log f(n))$.
In what follows, we introduce some definitions, assumptions and lemmas that will be utilised in the paper.
The function $\mathcal{L}: \mathbb{R}^n \to \mathbb{R}$ is $\beta$-smooth if it is differentiable and
\begin{align*}
\|\nabla \mathcal{L}(\boldsymbol{y}) - \nabla \mathcal{L}(\boldsymbol{x}) \| \leq \beta\| \boldsymbol{y} - \boldsymbol{x} \|, \quad \forall \boldsymbol{x}, \boldsymbol{y} \in \mathbb{R}^n.
\end{align*}
The function $\mathcal{L}: \mathbb{R}^n \to \mathbb{R}$ satisfies the Polyak-Łojasiewicz (PL) condition on a set $\mathcal{X}$ with constant $\mu$ if
\begin{align}
\frac{1}{2} \|\nabla \mathcal{L}(\boldsymbol{x}) \|^2 \geq \mu (\mathcal{L}(\boldsymbol{x}) - \mathcal{L}^*), \quad \forall \boldsymbol{x} \in \mathcal{X}.
\end{align}
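As a minimal concrete example (our own, not from the paper), the loss $\mathcal{L}(\boldsymbol{x}) = \frac{1}{2}(\langle \boldsymbol{c}, \boldsymbol{x}\rangle - b)^2$ with two unknowns and one equation is not strongly convex, yet satisfies the PL condition globally with $\mu = \|\boldsymbol{c}\|^2$ (here with equality, so the bound is tight). The sketch below checks the inequality at random points:

```python
import random

def loss(x, c, b):
    r = sum(ci * xi for ci, xi in zip(c, x)) - b
    return 0.5 * r * r

def grad(x, c, b):
    r = sum(ci * xi for ci, xi in zip(c, x)) - b
    return [r * ci for ci in c]

c, b = [2.0, -1.0], 3.0
mu = sum(ci * ci for ci in c)   # PL constant ||c||^2; L* = 0 for this loss
rng = random.Random(1)
pl_holds = all(
    0.5 * sum(g * g for g in grad(x, c, b)) >= mu * loss(x, c, b) - 1e-9
    for x in ([rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(1000))
)
```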
(Descent Lemma <cit.>) Let the function $\mathcal{L}: \mathbb{R}^n \to \mathbb{R}$ be a $\beta_{\mathcal{L}}$-smooth function. Then for every $\boldsymbol{x}$, $\boldsymbol{y} \in \mathbb{R}^n$ and for every $\boldsymbol{z} \in [\boldsymbol{x}, \boldsymbol{y}] := \{(1-\alpha)\boldsymbol{x} + \alpha \boldsymbol{y} : \alpha \in [0,1] \}$ the following holds
\begin{align*}
\mathcal{L}(\boldsymbol{y}) \leq \mathcal{L}(\boldsymbol{x}) + \langle \nabla \mathcal{L}(\boldsymbol{z}), \boldsymbol{y} - \boldsymbol{x}\rangle + \frac{\beta_{\mathcal{L}}}{2} \|\boldsymbol{y} - \boldsymbol{x} \|^2.
\end{align*}
(<cit.>) Let $\mathcal{L}: \mathbb{R}^n \to \mathbb{R}$ be a $\beta$-smooth and $\mu$-PL function. Then
\begin{align*}
\frac{\mu}{2}\|\boldsymbol{x} - \pi_{\mathcal{X}^*}(\boldsymbol{x}) \|^2\leq \mathcal{L}(\boldsymbol{x}) - \mathcal{L}^* \leq \frac{\beta}{2}\|\boldsymbol{x} - \pi_{\mathcal{X}^*}(\boldsymbol{x}) \|^2, \quad \forall \boldsymbol{x}.
\end{align*}
(Proximal-PL condition <cit.>) Let $\mathcal{L}(\boldsymbol{x}) = g(\boldsymbol{x})+ h(\boldsymbol{x})$ where $g$ is $\beta$-smooth and $h$ is a convex function. The function $\mathcal{L}$ satisfies the $\mu$-proximal-PL condition if the following holds
\begin{align}\label{prox PL ineq}
\frac{1}{2}\mathcal{D}_h(\boldsymbol{x}, \beta) \geq \mu(\mathcal{L}(\boldsymbol{x}) - \mathcal{L}^*), \quad \forall \boldsymbol{x},
\end{align}
where
\begin{align}
\mathcal{D}_h(\boldsymbol{x}, \alpha):= -2\alpha \min_{\boldsymbol{y}} \left\{ \langle \nabla g(\boldsymbol{x}), \boldsymbol{y}-\boldsymbol{x} \rangle + \frac{\alpha}{2}\|\boldsymbol{y}-\boldsymbol{x} \|^2 + h(\boldsymbol{y}) - h(\boldsymbol{x})\right\}.
\end{align}
§.§ Over-parameterised systems and PL condition
Consider a system of $n$ nonlinear equations
\begin{align*}
f(\boldsymbol{x}; \boldsymbol{a}_i) = b_i, \quad i=1,2, \dots, n,
\end{align*}
where $\{\boldsymbol{a}_i, b_i \}_{i=1}^n$ is the set of model parameters; one aims at finding $\boldsymbol{x}\in \mathbb{R}^m$ that solves the system of equations. Aggregating all equations into a single map amounts to
\begin{align}\label{Single-map}
\mathcal{F}(\boldsymbol{x}) = \boldsymbol{y}, \ \text{where} \ \boldsymbol{x}\in \mathbb{R}^m, \mathcal{F}(\cdot):\mathbb{R}^m \to \mathbb{R}^n.
\end{align}
The system in (<ref>) is solved through minimising a certain loss function $\mathcal{L}(\boldsymbol{x})$:
\begin{align}
\mathcal{L}(\boldsymbol{x}):=\frac{1}{2} \|\mathcal{F}(\boldsymbol{x}) - \boldsymbol{y} \|^2=\frac{1}{2} \sum_{i=1}^n (f(\boldsymbol{x}; \boldsymbol{a}_i) - b_i)^2.
\end{align}
This problem has been studied extensively in the under-parameterised setting (where $m<n$). We refer to an exact solution of (<ref>) as interpolation.
Let $D\mathcal{F}(\boldsymbol{x})$ be the differential of the map $\mathcal{F}$ at $\boldsymbol{x}$, which can be represented as an $n\times m$ matrix. The tangent kernel of $\mathcal{F}$ is defined as the $n\times n$ positive semidefinite matrix $K(\boldsymbol{x}):=D\mathcal{F}(\boldsymbol{x})D\mathcal{F}^T(\boldsymbol{x})$.
We say that a non-negative function $\mathcal{L}$ satisfies the $\mu$-PL$^*$ condition on a set $\mathcal{X}\subseteq \mathbb{R}^m$ for $\mu >0$, if
\begin{align}\label{PL}
\frac{1}{2} \|\nabla \mathcal{L}(\boldsymbol{x}) \|^2 \geq \mu \mathcal{L}(\boldsymbol{x}), \quad \forall \boldsymbol{x} \in \mathcal{X}.
\end{align}
* If a non-negative function satisfies the $\mu$-PL$^*$ condition, then it also satisfies the $\mu$-PL condition, that is,
\begin{align}
\frac{1}{2}\|\nabla \mathcal{L} (\boldsymbol{x}) \|^2 \geq \mu(\mathcal{L}(\boldsymbol{x}) - \mathcal{L}^*).
\end{align}
* Every stationary point of a PL function is a global minimum.
The following results are of great importance in the later discussions.
If $\mathcal{L}(\boldsymbol{x})$ is lower bounded with $\beta$-Lipschitz continuous gradients and satisfies the $\mu$-PL condition in the region $\mathcal{X} = \overline{B(\boldsymbol{x_0}, \rho)}$ where $\rho > \frac{\sqrt{2\beta(\mathcal{L}(\boldsymbol{x}_0) - \mathcal{L}^*)}}{\mu}$, then there exists a global minimum point $\boldsymbol{x}^* \in \mathcal{X}$ and the gradient descent algorithm with a small enough step-size ($\frac{1}{\beta}$) converges to $\boldsymbol{x}^*$ at a linear rate.
If $\mathcal{F}(\boldsymbol{x})$ is such that $\lambda_{\min}(K(\boldsymbol{x}))\geq \mu > 0$ for all $\boldsymbol{x} \in \mathcal{X}$, then the square loss function $\mathcal{L}(\boldsymbol{x}) = \frac{1}{2}\|\mathcal{F}(\boldsymbol{x})- \boldsymbol{y} \|^2$ satisfies the $\mu$-PL$^*$ condition on $\mathcal{X}$.
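This can be checked numerically on a small over-parameterised example (our own toy construction, not from the paper): for a linear map $\mathcal{F}(\boldsymbol{x}) = A\boldsymbol{x}$ the tangent kernel $K = AA^T$ is constant, and the square loss satisfies the PL$^*$ inequality with $\mu = \lambda_{\min}(K)$.

```python
import random

# Over-parameterised linear map F(x) = A x with n = 2 equations, m = 3 unknowns.
A = [[1.0, 0.5, -0.5], [0.0, 2.0, 1.0]]
y = [1.0, -1.0]

def matvec(M, x):
    return [sum(mi * xi for mi, xi in zip(row, x)) for row in M]

def eig_min_2x2(K):
    """Smallest eigenvalue of a symmetric 2x2 matrix."""
    tr = K[0][0] + K[1][1]
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return tr / 2 - ((tr / 2) ** 2 - det) ** 0.5

K = [[sum(a * b for a, b in zip(r1, r2)) for r2 in A] for r1 in A]  # A A^T
mu = eig_min_2x2(K)

def sq_loss(x):
    r = [f - yi for f, yi in zip(matvec(A, x), y)]
    return 0.5 * sum(ri * ri for ri in r)

def sq_grad(x):  # gradient of the square loss: A^T (A x - y)
    r = [f - yi for f, yi in zip(matvec(A, x), y)]
    return [sum(A[i][j] * r[i] for i in range(2)) for j in range(3)]

rng = random.Random(2)
pl_star_holds = all(
    0.5 * sum(g * g for g in sq_grad(x)) >= mu * sq_loss(x) - 1e-9
    for x in ([rng.uniform(-3, 3) for _ in range(3)] for _ in range(500))
)
```

Here $\frac{1}{2}\|\nabla\mathcal{L}\|^2 = \frac{1}{2}\boldsymbol{r}^T K \boldsymbol{r} \geq \frac{1}{2}\lambda_{\min}(K)\|\boldsymbol{r}\|^2 = \mu\,\mathcal{L}(\boldsymbol{x})$ with residual $\boldsymbol{r} = A\boldsymbol{x}-\boldsymbol{y}$, which is exactly the theorem's mechanism.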
Note that $K(\boldsymbol{x})=D\mathcal{F}(\boldsymbol{x})D\mathcal{F}^T(\boldsymbol{x})$ implies that $\operatorname{rank}(K(\boldsymbol{x})) = \operatorname{rank}(D\mathcal{F}(\boldsymbol{x}))$. In an over-parameterised system, $m>n$, starting from a random initial point there is a high probability that $D\mathcal{F}(\boldsymbol{x})$ is full rank. However, from Theorem <ref>, one needs the $\mu$-PL$^*$ condition in a large enough ball with radius $\mathcal{O}(\frac{1}{\mu})$ for linear convergence of the GD algorithm. To establish such conditions, one can intuitively expect that, for a $\mathcal{C}^2$ function, if the Hessian (curvature) is small enough in the neighbourhood of a point, then the tangent kernel is almost constant and, therefore, the conditions for linear convergence of the GD algorithm are satisfied. It turns out that in the case of highly over-parameterised neural networks (wide NNs) with a linear output layer, the Hessian matrix has arbitrarily small spectral norm (a transition to linearity). This is formulated as follows:
For an $L$-layer neural network with a linear output layer and minimum width $m$ of the hidden layers, for any $R>0$ and any $\boldsymbol{x}\in B_{R}(\boldsymbol{x}_0)$, the Hessian spectral norm satisfies the following with high probability:
\begin{align}
\|\mathcal{D}^2 \mathcal{F}(\boldsymbol{x}) \| = \widetilde{\mathcal{O}}\left (\frac{R^{3L}}{\sqrt{m}} \right).
\end{align}
With a nonlinear output layer, $\boldsymbol{x} \mapsto \varphi(\boldsymbol{x})$, the square loss function satisfies the $\mu\kappa^2$-PL condition, where $\kappa:=\inf_{\boldsymbol{x} \in B(\boldsymbol{x}_0, \rho)} \|\mathcal{D}\varphi(\mathcal{F}(\boldsymbol{x})) \|$.
All in all, wide neural networks, as a particular case of over-parameterised systems, satisfy the $\mu$-PL$^*$ condition, which explains the fast convergence of (S)GD to a global minimum in square-loss problems. In the following sections, we theoretically analyse the performance of the forward gradient in the same setting (over-parameterised systems).
§ OPTIMISATION USING FORWARD GRADIENT
In this section, we analyse the performance of various gradient-based algorithms using forward gradient.
§.§ Forward gradient under PL condition for fixed cost functions
We focus on the basic unconstrained optimisation problem
\begin{align}
\min_{\boldsymbol{x}\in \mathbb{R}^m} \mathcal{L}(\boldsymbol{x}).
\end{align}
Consider the following iterations, where the solver is not fast enough to perform one exact gradient calculation but is able to perform $\ell$ forward gradient updates at time step $k$:
\begin{align}\label{GD with directional derivative}
\boldsymbol{x}_{k+1}^{(i+1)} = \boldsymbol{x}_{k+1}^{(i)} - \alpha_k^{(i)} \boldsymbol{v}_{k}(\boldsymbol{x}_{k+1}^{(i)}, U_{k+1}^{(i)}), \quad \text{for}\ i=0,1, \dots, \ell -1,
\end{align}
where $\boldsymbol{x}_{k+1}^{(\ell)} := \boldsymbol{x}_{k+1}$, $\boldsymbol{x}_{k+1}^{(0)} := \boldsymbol{x}_k$, $\alpha_k$ is the step-size, $\boldsymbol{v}(\boldsymbol{x}, \boldsymbol{u}) = \langle \nabla \mathcal{L}(\boldsymbol{x}), \boldsymbol{u}\rangle \boldsymbol{u}$, and $U_k \sim \mathcal{N}(\boldsymbol{0}, I_m)$ are i.i.d. random directions for all $k$ and $i$. The forward gradient is studied extensively in <cit.>, where the following properties have been proved:
\begin{align}
\mathbb{E} \boldsymbol{v}(\boldsymbol{x}, U_k) &= \nabla \mathcal{L}(\boldsymbol{x})\\
\mathbb{E}(\|\boldsymbol{v}(\boldsymbol{x}, U_k) \|^2) &\leq (m+4)\| \nabla \mathcal{L}(\boldsymbol{x})\|^2 \label{Var-Dir_Deriv}
\end{align}
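The update above is short to implement. The following sketch (pure Python; the quadratic objective and all constants are illustrative assumptions, not from the paper) runs it on $\mathcal{L}(\boldsymbol{x})=\frac{1}{2}\|\boldsymbol{x}\|^2$, which is $1$-smooth and satisfies the PL condition with $\mu=1$, using step-size $\frac{1}{\beta(m+4)}$:

```python
import random

def forward_gradient_steps(grad, x, n_steps, step, rng):
    """n_steps updates x <- x - step * <grad(x), u> u with u ~ N(0, I_m)."""
    for _ in range(n_steps):
        g = grad(x)
        u = [rng.gauss(0.0, 1.0) for _ in x]
        dot = sum(gi * ui for gi, ui in zip(g, u))
        x = [xi - step * dot * ui for xi, ui in zip(x, u)]
    return x

# L(x) = 0.5 ||x||^2 has gradient x, so beta = mu = 1; with m = 3 the
# step-size 1 / (beta (m + 4)) = 1/7 yields linear decay of the loss.
m = 3
rng = random.Random(0)
x0 = [5.0, -3.0, 2.0]
x_final = forward_gradient_steps(lambda x: x, x0, 500, 1.0 / (m + 4), rng)
initial_loss = 0.5 * sum(xi * xi for xi in x0)
final_loss = 0.5 * sum(xi * xi for xi in x_final)
```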
In the following, we analyse the convergence of this algorithm under the PL condition. We refer the reader to the extended version of this paper <cit.> for proofs.
Assume that the function $\mathcal{L}$ is $\beta$-smooth, has a non-empty solution set $\mathcal{X}^*$, and satisfies the $\mu$-PL condition. Consider the algorithm (<ref>) with a step-size of $\frac{1}{\beta(m+4)}$. If the random vectors $U_k$ are chosen from a standard normal distribution, i.e. $U_k \sim \mathcal{N}(\boldsymbol{0}, I_m)$, then the algorithm has an expected linear convergence rate:
\begin{align}
\mathbb{E}\{\mathcal{L}(\boldsymbol{x}_k) - \mathcal{L}^* \} \leq \left(1-\frac{\mu}{(m+4)\beta}\right)^k (\mathcal{L}(\boldsymbol{x}_0) - \mathcal{L}^*).
\end{align}
§.§ Time-varying optimisation
In what follows, we prove convergence properties of online methods for objective functions that satisfy the following assumptions.
(Polyak-Łojasiewicz condition) There exists a scalar $\mu>0$ such that $\mathcal{L}_{k}(\boldsymbol{x})$ satisfies (<ref>) for all $k$ and for all $\boldsymbol{x} \in \mathbb{R}^m$.
We also need to quantify the speed with which the objective function varies in each step.
(Drift in time) There exist non-negative scalars $\eta_0$ and $\eta^*$ such that $\mathcal{L}_{k+1}(\boldsymbol{x}) - \mathcal{L}_k(\boldsymbol{x}) \leq \eta_0$ for all $\boldsymbol{x} \in \mathbb{R}^m$ and $\mathcal{L}_{k+1}^* - \mathcal{L}_k^* \leq \eta^*$ for all $k$.
In an over-parameterised setting with a non-linear least-squares loss function, one can argue that $\mathcal{L}_k^* = 0$ for all $k$. Therefore, $\eta^* = 0$ in this setting.
§.§.§ Forward gradient convergence in the time-varying setting
In this part, we analyse the performance of the forward-gradient algorithm with a fixed step-size for time-varying nonconvex cost functions under the PL and smoothness assumptions. We prove that the tracking error converges linearly to a neighbourhood of zero, where the rate of convergence depends on the dimension of the problem and the asymptotic error is not affected by the use of forward instead of exact gradients.
Assume that $\mathcal{L}_k$ is $\beta$-smooth for each $k$, and that Assumptions <ref> and <ref> hold. If we use the directional-derivative algorithm (<ref>) with constant step-size $\alpha_k^{(i)} = \alpha < \frac{2}{\beta(m+4)}$ and the random directions $U_k^{(i)}$ are chosen from a standard normal distribution, then
\begin{align}\label{err_bdd_timevar}
\mathbb{E}\{\| \boldsymbol{x}_{k+1} - \pi_{\mathcal{X}_{k+1}^*}(\boldsymbol{x}_{k+1}) \|^2\} &\leq \frac{\eta_0 + \eta^*}{\mu^2 \gamma \ell} + \frac{2}{\mu}(1-2\mu\gamma\ell)^k (\mathcal{L}_0(\boldsymbol{x}_0) - \mathcal{L}_0^*),
\end{align}
where $\eta_0$ and $\eta^*$ are as in Assumption <ref>, $\gamma \in (0, \tilde{\gamma})$ with $\tilde{\gamma} = \min \{\alpha(1-\frac{\beta}{2}(m+4)\alpha), \frac{1}{2\mu\ell} \}$, and $\mathcal{X}_k^*$ is the set of minimisers of $\mathcal{L}_k(\boldsymbol{x})$.
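The role of $\ell$ as a design parameter shows up already in a small simulation (entirely our own construction): tracking the drifting minimiser of $\mathcal{L}_k(\boldsymbol{x}) = \frac{1}{2}\|\boldsymbol{x} - \boldsymbol{c}_k\|^2$, more inner forward-gradient steps per round buy a smaller average tracking error at a higher computational cost.

```python
import random

def avg_tracking_error(n_rounds, l_steps, step, drift, dim, seed):
    """Average squared tracking error ||x_k - c_k||^2 when each round applies
    l_steps forward-gradient updates to L_k(x) = 0.5 ||x - c_k||^2."""
    rng = random.Random(seed)
    c = [0.0] * dim
    x = [3.0] * dim
    total = 0.0
    for _ in range(n_rounds):
        c = [ci + drift for ci in c]               # the cost function drifts
        for _ in range(l_steps):                   # l forward-gradient updates
            g = [xi - ci for xi, ci in zip(x, c)]  # gradient of L_k at x
            u = [rng.gauss(0.0, 1.0) for _ in range(dim)]
            dot = sum(gi * ui for gi, ui in zip(g, u))
            x = [xi - step * dot * ui for xi, ui in zip(x, u)]
        total += sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return total / n_rounds

dim = 4
alpha = 1.0 / (dim + 4)  # 1 / (beta (m + 4)) with beta = 1
err_one_step = avg_tracking_error(200, 1, alpha, 0.05, dim, seed=3)
err_five_steps = avg_tracking_error(200, 5, alpha, 0.05, dim, seed=3)
```

Consistent with the $1/\ell$ factor in the bound, the five-step solver tracks the drifting minimiser more tightly than the one-step solver in this toy run.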
Using the descent lemma and (<ref>), we have
\begin{align*}
\mathcal{L}_{k}(\boldsymbol{x}_{k+1}^{(i+1)}) &\leq \mathcal{L}_k(\boldsymbol{x}_{k+1}^{(i)}) + \langle \nabla \mathcal{L}_{k}(\boldsymbol{x}_{k+1}^{(i)}), \boldsymbol{x}_{k+1}^{(i+1)} - \boldsymbol{x}_{k+1}^{(i)} \rangle + \frac{\beta}{2}\|\boldsymbol{x}_{k+1}^{(i+1)} - \boldsymbol{x}_{k+1}^{(i)} \|^2 \\
& = \mathcal{L}_k(\boldsymbol{x}_{k+1}^{(i)}) -\alpha_k^{(i)} \langle \nabla \mathcal{L}_{k}(\boldsymbol{x}_{k+1}^{(i)}), \boldsymbol{v}_k(\boldsymbol{x}_{k+1}^{(i)}, U_{k+1}^{(i)}) \rangle + \frac{\beta}{2}(\alpha_k^{(i)})^2\|\boldsymbol{v}_k(\boldsymbol{x}_{k+1}^{(i)}, U_{k+1}^{(i)}) \|^2.
\end{align*}
By taking the conditional expectation given $\boldsymbol{x}_{k+1}^{(i)}$ with respect to $U_{k+1}^{(i)}$, and using (<ref>), we obtain
\begin{align}
\mathbb{E}\{\mathcal{L}_k(\boldsymbol{x}_{k+1}^{(i+1)}) \mid \boldsymbol{x}_{k+1}^{(i)} \} &\leq \mathcal{L}_k(\boldsymbol{x}_{k+1}^{(i)}) -\alpha_{k}^{(i)} \|\nabla \mathcal{L}_{k}(\boldsymbol{x}_{k+1}^{(i)}) \|^2 +\frac{1}{2}\beta (\alpha_k^{(i)})^2(m+4)\|\nabla \mathcal{L}_{k}(\boldsymbol{x}_{k+1}^{(i)}) \|^2 \nonumber\\
& = \mathcal{L}_k(\boldsymbol{x}_{k+1}^{(i)}) -\underbrace{\alpha_{k}^{(i)}(1-\frac{\beta}{2}(m+4)\alpha_{k}^{(i)})}_{:=\gamma_{k}^{(i)}}\|\nabla\mathcal{L}_{k}(\boldsymbol{x}_{k+1}^{(i)}) \|^2 \label{descent_lemma_implication}.
\end{align}
We now fix $\epsilon_1 <\alpha_{k}^{(i)}= \alpha < \frac{2}{\beta(m+4)} - \epsilon_2$ and $\gamma_{k}^{(i)} = \gamma >0$ for some $\epsilon_1 >0$ and $\epsilon_2 >0$. Assuming that $\mathcal{L}_k$ satisfies the $\mu$-PL condition at $\boldsymbol{x}_{k+1}^{(i)}$, for all $i$ and $k$, we have that
\begin{align}\label{bound_one_iteration}
\mathbb{E}\{\mathcal{L}_k(\boldsymbol{x}_{k+1}^{(i+1)}) \mid \boldsymbol{x}_{k+1}^{(i)} \} &\leq \mathcal{L}_k(\boldsymbol{x}_{k+1}^{(i)}) -2\mu \gamma (\mathcal{L}_k(\boldsymbol{x}_{k+1}^{(i)}) - \mathcal{L}_k^*), \quad \text{for } i=0, \dots, \ell-1.
\end{align}
Applying the above inequality recursively for $i=0,1, \dots, \ell-1$ gives
\begin{align*}
\mathbb{E}\{\mathcal{L}_k(\boldsymbol{x}_{k+1}) \mid \boldsymbol{x}_{k} \} &\leq \mathcal{L}_k(\boldsymbol{x}_{k}) -2\mu \gamma \sum_{i=0}^{\ell-1} \left(\mathcal{L}_k(\boldsymbol{x}_{k+1}^{(i)}) - \mathcal{L}_k^*\right)\\
&\stackrel{(a)}{\leq} \mathcal{L}_k(\boldsymbol{x}_{k}) -2\mu \gamma \ell \left(\mathcal{L}_k(\boldsymbol{x}_{k}) - \mathcal{L}_k^*\right) ,
\end{align*}
where in (a) we have used (<ref>). By adding and subtracting terms we have
\begin{align*}
\mathbb{E}\left\{\mathcal{L}_{k+1}(\boldsymbol{x}_{k+1}) - \mathcal{L}_{k+1}^* | \boldsymbol{x}_k\right\} = & \mathbb{E}\left\{\mathcal{L}_{k+1}(\boldsymbol{x}_{k+1}) - \mathcal{L}_{k}(\boldsymbol{x}_{k+1}) | \boldsymbol{x}_k\right\} + \mathbb{E}\left\{\mathcal{L}_{k}(\boldsymbol{x}_{k+1}) - \mathcal{L}_{k}(\boldsymbol{x}_{k}) | \boldsymbol{x}_k\right\} \\
&+ \left(\mathcal{L}_k(\boldsymbol{x}_k) - \mathcal{L}_k^*\right) + (\mathcal{L}_k^* - \mathcal{L}_{k+1}^*) \\
&\stackrel{(b)}{\leq} \eta_0 + \eta^* + (1-2\gamma \mu \ell) (\mathcal{L}_k(\boldsymbol{x}_k) - \mathcal{L}_k^*),
\end{align*}
where in (b) we have used the bounds on drift and (<ref>).
By using the tower rule of expectations and induction we have
\begin{align}
\mathbb{E}\{\mathcal{L}_{k+1}(\boldsymbol{x}_{k+1}) - \mathcal{L}_{k+1}^*\} &\leq (\eta_0 + \eta^*) \sum_{i=0}^{k} (1-2\gamma \mu \ell)^{k-i} + (1-2\mu\gamma\ell)^k (\mathcal{L}_0(\boldsymbol{x}_0) - \mathcal{L}_0^*) \\
&\leq \frac{\eta_0 + \eta^*}{2\mu \gamma \ell} + (1-2\mu\gamma\ell)^k (\mathcal{L}_0(\boldsymbol{x}_0) - \mathcal{L}_0^*).
\end{align}
Invoking (<ref>) and choosing $\gamma \in (0, \tilde{\gamma})$ completes the proof.
In the case of a pre-trained neural network, where the term $\mathcal{L}_0(\boldsymbol{x}_{0}) - \mathcal{L}_{0}^*$ is small (or zero), the slower convergence rate in higher dimensions is not a problem, and the asymptotic error bound, the first term in <ref>, is the quantity of interest. By choosing the parameter $\ell$ appropriately, based on the speed of the solver, i.e. the number of inner iterations available at each time step, one can obtain tighter bounds.
Under the hypotheses of Theorem <ref> the following holds:
\begin{align}
\limsup_{k\to \infty}{\mathbb{E}\{\mathcal{L}_{k+1}(\boldsymbol{x}_{k+1}) - \mathcal{L}_{k+1}^*\} }&\leq \frac{\eta_0 + \eta^*}{2\mu\gamma\ell}.
\end{align}
This in turn results in $\limsup_{k \to \infty} \| \boldsymbol{x}_k - \pi_{\mathcal{X}_{k}^*}(\boldsymbol{x}_{k}) \|^2 \leq \frac{\eta_0 + \eta^*}{\mu^2\gamma\ell}$.
One can choose $\alpha_k = \alpha = \frac{1}{\beta(m+4)}$ to maximise the convergence rate, but this also maximises the asymptotic optimality gap.
§.§.§ Convergence of Proximal Gradient with forward gradients
The proximal gradient algorithm, also known as the forward-backward algorithm, applies to optimisation problems where the cost function is the sum of two functions,
\begin{align}\label{sum_problem}
\min_{\boldsymbol{x}\in \mathbb{R}^m}\left\{\mathcal{L}(\boldsymbol{x}) := g(\boldsymbol{x}) + h(\boldsymbol{x})\right\}
\end{align}
which satisfies the following assumption.
The function $g_k$ is $\beta$-smooth and $h_k$ is a possibly nonsmooth convex function for all $k$.
The proximal PL condition in Definition <ref> has been used to analyse the convergence of the generalised proximal gradient algorithm <cit.>, where the authors have shown that the proximal PL condition is equivalent to the KL condition <cit.>. An important example of a cost satisfying the proximal-PL inequality is the $\ell_1$-regularised least-squares problem.
Let $\mathcal{L}:\mathbb{R}^n \to \mathbb{R}$ be a function that satisfies the Proximal-PL condition and Assumption <ref>. Then there exists a constant $\xi > 0$ such that for every $\boldsymbol{x}\in \mathbb{R}^n$ the following holds
\begin{align}
\frac{\xi}{2}\|\boldsymbol{x} - \pi_{\mathcal{X}^*}(\boldsymbol{x}) \|^2 \leq \mathcal{L}(\boldsymbol{x}) - \mathcal{L}^*
\end{align}
Consider the problem (<ref>), where Assumption <ref> holds and $\mathcal{L}$ satisfies the proximal-PL condition with parameter $\mu$. Then for the proximal gradient algorithm with step size $\frac{1}{\beta}$ the following holds for some $C>0$:
\begin{align*}
\|\boldsymbol{x}_k - \pi_{\mathcal{X}^*}(\boldsymbol{x}_k) \| \leq C (1-\frac{\mu}{\beta})^{\frac{k}{2}}
\end{align*}
The proof is a result of Lemma <ref> and Theorem 5 in <cit.>.
The cost function need not satisfy the proximal-PL condition on the whole space. The condition should only be satisfied in a large enough ball around the initial point, $B(\boldsymbol{x}_0, R)$, where $R \geq \frac{\sqrt{\frac{2}{\beta} (\mathcal{L}(\boldsymbol{x}_{0}) - \mathcal{L}^*)}}{1-\sqrt{1-\frac{\mu}{\beta}}}$.
We are interested in analysing the performance of the proximal gradient algorithm in an online setting using forward gradients,
\begin{align}\label{Prox_grad_iter}
\boldsymbol{x}_{k+1} = \textit{prox}_{\gamma_k h_k} (\boldsymbol{x}_k - \gamma_k \boldsymbol{v}_k(\boldsymbol{x}_k, U_k)) := \argmin_{\boldsymbol{y}} h_k(\boldsymbol{y}) + \frac{1}{2\gamma_k}\|\boldsymbol{y} - (\boldsymbol{x}_k - \gamma_k \boldsymbol{v}_k(\boldsymbol{x}_k, U_k)) \|^2
\end{align}
where $\boldsymbol{v}_k(\boldsymbol{x}, U)$ is the forward gradient of the function $g_k$ at the point $\boldsymbol{x}$. We aim to analyse the algorithm above in terms of the expected optimality gap.
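For the $\ell_1$-regularised case $h_k = \lambda\|\cdot\|_1$ mentioned above, the proximal map has the closed-form soft-thresholding solution, so one iteration of the scheme can be sketched as below. This is an illustrative sketch under our own naming: the finite-difference forward gradient stands in for forward-mode automatic differentiation, which would supply the directional derivative exactly.

```python
import numpy as np

def soft_threshold(z, t):
    """prox of t*||.||_1: componentwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_forward_gradient_step(g, x, u, lam, gamma, eps=1e-6):
    """One iterate x_{k+1} = prox_{gamma*h}(x_k - gamma*v(x_k, U)) with h = lam*||.||_1.

    The directional derivative of g along u is approximated by a finite
    difference; forward-mode AD would give it exactly at comparable cost.
    """
    d = (g(x + eps * u) - g(x)) / eps  # <grad g(x), u>
    v = d * u                          # forward gradient v(x, U)
    return soft_threshold(x - gamma * v, gamma * lam)
```

The closed-form prox is what makes the forward-backward step cheap here: the only expensive object, the gradient of the smooth part, is replaced by a single directional derivative.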
The functions $g_k$ have bounded gradients and the functions $h_k$ have bounded sub-gradients for each $k$, i.e. $\| \nabla g_k(\boldsymbol{x})\| \leq c_1$ and $\| \partial h_k(\boldsymbol{x}) \| \leq c_2$.
We analyse the performance of the algorithm above, (<ref>), under the proximal PL assumption for time-varying objective functions. We show that the tracking error of the algorithm converges linearly to a neighbourhood of zero, where the asymptotic error depends on the dimension.
Assume that $\mathcal{L}_k$ is the sum of a $\beta$-smooth function $g_k$ and a nonsmooth convex function $h_k$, that $\mathcal{L}_k$ satisfies the Proximal-PL condition with parameter $\mu$, and that Assumptions <ref> and <ref> hold. If we use the proximal algorithm with forward gradients, constant step size $\gamma_k = \frac{1}{\beta}$, and random directions $U_k$ drawn from a standard normal distribution, then
\begin{align*}
\mathbb{E}\{\| \boldsymbol{x}_{k+1} - \pi_{\mathcal{X}_{k+1}^*}(\boldsymbol{x}_{k+1}) \|^2\} &\leq \frac{2}{ \xi (1-(1-\frac{\mu}{\beta})^{\ell})} \bigg(\eta_0 + \eta^* + 2G_1\sqrt{(m+3)} \bigg) + \frac{2}{\xi}(1-\frac{\mu}{\beta})^{k\ell} (\mathcal{L}_{0}(\boldsymbol{x}_{0})-\mathcal{L}_{0}^*)
\end{align*}
where $\xi$ is as in Lemma <ref> and $G_1 = \frac{2c_1 (c_1 + c_2)}{\beta}$ with $c_1$ and $c_2$ as in Assumption <ref>.
Under the hypotheses of Theorem <ref> the following holds:
\begin{align}
\limsup_{k\to \infty} \mathbb{E} \bigg\{ \mathcal{L}_{k+1}(\boldsymbol{x}_{k+1})-\mathcal{L}_{k+1}^* \bigg\} \leq \frac{1}{1-(1-\frac{\mu}{\beta})^{\ell}} (\eta_0 + \eta^* + G_1\sqrt{(m+3)})
\end{align}
§ ILLUSTRATIVE NUMERICAL RESULTS
The theoretical error bounds of the previous sections are illustrated here with a numerical example. We consider a sequence of time-varying linear least-squares problems of the form
\begin{align}
\mathcal{L}_k(\boldsymbol{x}) = \frac{1}{2}\|A_k\boldsymbol{x} - \boldsymbol{b}_k \|^2
\end{align}
where $A_k\in \mathbb{R}^{n\times m}$ and $\boldsymbol{b}_k \in \mathbb{R}^n$. The cost function $\mathcal{L}_k$ satisfies the PL condition for every choice of $A_k$ and $\boldsymbol{b}_k$. We consider the over-parameterised case $m=60$ and $n = 10$. The vector $\boldsymbol{b}_k$ is generated as $\boldsymbol{b}_{k+1} = \boldsymbol{b}_k + \delta b_k$, where $\delta b_k$ follows a normal distribution $\mathcal{N}(\boldsymbol{0}, 10^{-2}I_n)$. The matrix $A_k$ is generated through its singular value decomposition, $A_k = U\Sigma_k V^T$, where $U \in \mathbb{R}^{n \times r}$ and $V \in \mathbb{R}^{m \times r}$ have orthonormal columns, $\Sigma_{k+1} = \Sigma_k - 10^{-6}I_r$ with $\Sigma_0 = \mathrm{diag}(0.1, 0.2, \dots, 1)$, and $r=10$. This setting ensures that $\sup_{\boldsymbol{x}} \{\mathcal{L}_{k+1}(\boldsymbol{x}) - \mathcal{L}_k(\boldsymbol{x})\}$ is bounded. We have run 50 experiments with the same starting point and calculated the average loss. The results are plotted in Figure <ref>, showing the performance of the algorithm against the derived theoretical bounds and validating the convergence results.
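The experiment described above can be reproduced along the following lines. Where the text leaves details open (number of time steps, step size, a single inner iteration per step), the choices below are our assumptions, so this is a sketch of the setup rather than the authors' exact script.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 10, 10

# Fixed orthonormal factors; only the singular values and b_k drift in time.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((m, r)))
sigma = 0.1 * np.arange(1, r + 1)          # Sigma_0 = diag(0.1, 0.2, ..., 1)
b = rng.standard_normal(n)
x = rng.standard_normal(m)

beta = sigma.max() ** 2                    # smoothness constant of the initial loss
alpha = 1.0 / (beta * (m + 4))             # step size within the admissible range
losses = []
for k in range(500):
    A = U @ np.diag(sigma) @ V.T
    losses.append(0.5 * np.linalg.norm(A @ x - b) ** 2)
    u = rng.standard_normal(m)             # random direction U_k ~ N(0, I)
    grad = A.T @ (A @ x - b)               # exact gradient of L_k
    x = x - alpha * (grad @ u) * u         # forward-gradient update v = <grad, u> u
    b = b + 0.1 * rng.standard_normal(n)   # delta b_k ~ N(0, 1e-2 I_n)
    sigma = sigma - 1e-6                   # Sigma_{k+1} = Sigma_k - 1e-6 I_r
```

Averaging `losses` over 50 random seeds and overplotting the theoretical bound of the theorem would reproduce the comparison shown in the figure.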
Performance of the online gradient descent algorithm using directional derivatives, and the theoretical bound.
§ CONCLUSION
In this paper, we analysed the performance of gradient-based first-order algorithms, focusing on faster algorithms for calculating gradients, namely forward gradients. We have exploited the fact that non-linear least-squares problems in over-parameterised settings satisfy the PL condition. Based on this observation, we proved that (proximal) gradient-based algorithms built on the forward mode of automatic differentiation (forward gradients) converge to the optimal value at a linear rate for nonconvex objective functions. We have also analysed the convergence of these algorithms in a time-varying setting and proved linear convergence to a neighbourhood of a global minimiser. This paper gives new insights into using the forward mode of automatic differentiation in problems with limited resources and fast-changing cost functions, where calculating full gradients using the backpropagation algorithm is either impossible or inefficient.
§ PROOF OF LEMMA <REF>
Using the descent lemma we have
\begin{align}
\mathcal{L}(\boldsymbol{x}_{k+1}) &\leq \mathcal{L}(\boldsymbol{x}_k) + \langle \nabla \mathcal{L}(\boldsymbol{x}_k), \boldsymbol{x}_{k+1} - \boldsymbol{x}_k \rangle + \frac{\beta}{2}\|\boldsymbol{x}_{k+1} - \boldsymbol{x}_k \|^2 \\
& = \mathcal{L}(\boldsymbol{x}_k) -\alpha_k \langle \nabla \mathcal{L}(\boldsymbol{x}_k), \boldsymbol{v}_k(\boldsymbol{x}_k, U_k) \rangle + \frac{\beta}{2}\alpha_k^2\|\boldsymbol{v}_k(\boldsymbol{x}_k, U_k) \|^2
\end{align}
By taking the conditional expectation given $\boldsymbol{x}_k$ with respect to $U_k$, and using (<ref>), we obtain
\begin{align*}
\mathbb{E}\{\mathcal{L}(\boldsymbol{x}_{k+1}) \mid \boldsymbol{x}_k \} &\leq \mathcal{L}(\boldsymbol{x}_k) -\alpha_k \|\nabla \mathcal{L}(\boldsymbol{x}_k) \|^2 +\frac{1}{2}\beta \alpha_k^2(m+4)\|\nabla \mathcal{L}(\boldsymbol{x}_k) \|^2\\
& = \mathcal{L}(\boldsymbol{x}_k) -\underbrace{\alpha_k(1-\frac{\beta}{2}(m+4)\alpha_k)}_{:=\gamma_k}\|\nabla\mathcal{L}(\boldsymbol{x}_k) \|^2.
\end{align*}
We now fix $\alpha_k = \alpha = \frac{1}{\beta(m+4)}$ which implies $\gamma_k = \gamma = \frac{1}{2\beta(m+4)}$. Assuming that $\mathcal{L}$ satisfies the $\mu$-PL condition at $\boldsymbol{x}_k$, we have that
\begin{align}
\mathbb{E}\{\mathcal{L}(\boldsymbol{x}_{k+1}) \mid \boldsymbol{x}_k \} &\leq \mathcal{L}(\boldsymbol{x}_k) -2\mu \gamma (\mathcal{L}(\boldsymbol{x}_k) - \mathcal{L}^*).
\end{align}
By using the tower rule of expectations and induction we have
\begin{align}
\mathbb{E}\{\mathcal{L}(\boldsymbol{x}_{k+1}) - \mathcal{L}^*\} &\leq (1-2\mu\gamma)^{k+1} (\mathcal{L}(\boldsymbol{x}_0) - \mathcal{L}^*).
\end{align}
Invoking $\gamma = \frac{1}{2\beta(m+4)}$ completes the proof.
§ PROOF OF LEMMA <REF>
Let $\boldsymbol{y} : = \argmin_{\boldsymbol{y}}\left\{ \langle \nabla g(\boldsymbol{x}), \boldsymbol{y}-\boldsymbol{x} \rangle + \frac{\beta}{2}\|\boldsymbol{y}-\boldsymbol{x} \|^2 + h(\boldsymbol{y}) - h(\boldsymbol{x})\right\} = \textit{prox}_{\frac{1}{\beta}h}(\boldsymbol{x} - \frac{1}{\beta} \nabla g(\boldsymbol{x}))$. Then, by the $\beta$-smoothness of $g$,
\begin{align*}
\mathcal{L}(\boldsymbol{y}) &= g(\boldsymbol{y}) + h(\boldsymbol{y}) +h(\boldsymbol{x}) - h(\boldsymbol{x}) \\
&\leq g(\boldsymbol{x}) + h(\boldsymbol{y}) + \langle \nabla g(\boldsymbol{x}), \boldsymbol{y} - \boldsymbol{x} \rangle + \frac{\beta}{2}\|\boldsymbol{y} - \boldsymbol{x}\|^2 +h(\boldsymbol{x}) - h(\boldsymbol{x}) \\
& \stackrel{(a)}{=} \mathcal{L}(\boldsymbol{x}) -\frac{1}{2\beta} \mathcal{D}_h(\boldsymbol{x}, \beta)
\end{align*}
where in (a) we have used the definition of $\boldsymbol{y}$ and of $\mathcal{D}_h$. Since $\mathcal{L}(\boldsymbol{y}) \geq \mathcal{L}^*$, rearranging gives
\begin{align}
\mathcal{D}_{h}(\boldsymbol{x} , \beta) \leq 2\beta ( \mathcal{L}(\boldsymbol{x}) - \mathcal{L}^* )
\end{align}
Using the optimality condition we have $\boldsymbol{y} - \boldsymbol{x} = -\frac{1}{\beta} \big( \nabla g(\boldsymbol{x}) + \boldsymbol{s} \big)$ where $\boldsymbol{s}\in \partial h(\boldsymbol{y})$. Plugging this back into (<ref>) gives
\begin{align*}
\mathcal{D}_h(\boldsymbol{x}, \beta) &= -2\beta \bigg(\langle \nabla g(\boldsymbol{x}), \boldsymbol{y} - \boldsymbol{x} \rangle + \frac{\beta}{2}\|\boldsymbol{y} - \boldsymbol{x}\|^2 +h(\boldsymbol{y}) - h(\boldsymbol{x}) \bigg)\\
&= -2\beta \bigg(\langle -\beta(\boldsymbol{y} - \boldsymbol{x}) - \boldsymbol{s}, \boldsymbol{y} - \boldsymbol{x} \rangle + \frac{\beta}{2}\|\boldsymbol{y} - \boldsymbol{x}\|^2 +h(\boldsymbol{y}) - h(\boldsymbol{x}) \bigg)\\
&=-2\beta \bigg( -\frac{\beta}{2}\|\boldsymbol{y} - \boldsymbol{x} \|^2 - \langle \boldsymbol{s}, \boldsymbol{y} - \boldsymbol{x} \rangle +h(\boldsymbol{y}) - h(\boldsymbol{x}) \bigg)\\
&= \beta^2 \|\boldsymbol{y} - \boldsymbol{x} \|^2 + 2\beta \big(h(\boldsymbol{x}) - h(\boldsymbol{y}) -\langle \boldsymbol{s}, \boldsymbol{x} - \boldsymbol{y} \rangle \big)\\
&\stackrel{(a)}{\geq} \beta^2 \|\boldsymbol{y} - \boldsymbol{x} \|^2
\end{align*}
where in (a) we have used the convexity of $h$.
Using the fact that the Proximal-EB (error bound) and Proximal-PL conditions are equivalent (see Appendix G in <cit.>), there exists $c>0$ such that
\begin{align}
\|\boldsymbol{x} - \pi_{\mathcal{X}^*}(\boldsymbol{x}) \| \leq c \left\|\boldsymbol{x} - \textit{prox}_{\frac{1}{\beta}h}(\boldsymbol{x} - \frac{1}{\beta} \nabla g(\boldsymbol{x})) \right\|
\end{align}
Combining the last three inequalities yields
\begin{align}
2\beta ( \mathcal{L}(\boldsymbol{x}) - \mathcal{L}^* ) \geq \mathcal{D}_{h}(\boldsymbol{x} , \beta) \geq \beta^2 \|\boldsymbol{y} - \boldsymbol{x} \|^2 \geq \frac{\beta^2}{c^2} \| \boldsymbol{x} - \pi_{\mathcal{X}^*}(\boldsymbol{x}) \|^2
\end{align}
Setting $\xi = \frac{\beta}{c^2}$ completes the proof.
§ PROOF OF REMARK <REF>
By using the Lipschitz continuity of the gradient of $g$ we have
\begin{align*}
\mathcal{L}(\boldsymbol{x}_{k+1}) &= g(\boldsymbol{x}_{k+1}) + h(\boldsymbol{x}_{k+1}) +h(\boldsymbol{x}_{k}) - h(\boldsymbol{x}_{k}) \\
&\leq g(\boldsymbol{x}_{k}) + h(\boldsymbol{x}_{k+1}) + \langle \nabla g(\boldsymbol{x}_{k}), \boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \rangle + \frac{\beta}{2}\|\boldsymbol{x}_{k+1} - \boldsymbol{x}_{k}\|^2 +h(\boldsymbol{x}_{k}) - h(\boldsymbol{x}_{k}) \\
& \stackrel{(a)}{=} \mathcal{L}(\boldsymbol{x}_{k}) -\frac{1}{2\beta} \mathcal{D}_h(\boldsymbol{x}_{k}, \beta)
\end{align*}
where in (a) we have used the definition of the proximal step and of $\mathcal{D}_h$ in (<ref>). Therefore
\begin{align} \label{upper_D}
\mathcal{D}_h(\boldsymbol{x}_{k}, \beta) \leq 2 \beta \big( \mathcal{L}(\boldsymbol{x}_{k}) - \mathcal{L}(\boldsymbol{x}_{k+1}) \big) \leq 2 \beta \big( \mathcal{L}(\boldsymbol{x}_{k}) - \mathcal{L}^* \big).
\end{align}
Using the optimality condition for (<ref>) we obtain $\boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} = -\frac{1}{\beta} \big( \nabla g(\boldsymbol{x}_{k}) + \boldsymbol{s}^{k+1} \big)$ where $\boldsymbol{s}^{k+1}\in \partial h(\boldsymbol{x}_{k+1})$. Plugging this back into (<ref>) we have
\begin{align*}
\mathcal{D}_h(\boldsymbol{x}_{k}, \beta) &= -2\beta \bigg(\langle \nabla g(\boldsymbol{x}_{k}), \boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \rangle + \frac{\beta}{2}\|\boldsymbol{x}_{k+1} - \boldsymbol{x}_{k}\|^2 +h(\boldsymbol{x}_{k+1}) - h(\boldsymbol{x}_{k}) \bigg)\\
&= -2\beta \bigg(\langle -\beta(\boldsymbol{x}_{k+1} - \boldsymbol{x}_{k}) - \boldsymbol{s}^{k+1}, \boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \rangle + \frac{\beta}{2}\|\boldsymbol{x}_{k+1} - \boldsymbol{x}_{k}\|^2 +h(\boldsymbol{x}_{k+1}) - h(\boldsymbol{x}_{k}) \bigg)\\
&=-2\beta \bigg( -\frac{\beta}{2}\|\boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \|^2 - \langle \boldsymbol{s}^{k+1}, \boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \rangle +h(\boldsymbol{x}_{k+1}) - h(\boldsymbol{x}_{k}) \bigg)\\
&= \beta^2 \|\boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \|^2 + 2\beta \big(h(\boldsymbol{x}_{k}) - h(\boldsymbol{x}_{k+1}) -\langle \boldsymbol{s}^{k+1}, \boldsymbol{x}_{k} - \boldsymbol{x}_{k+1} \rangle \big)\\
&\stackrel{(a)}{\geq} \beta^2 \|\boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \|^2
\end{align*}
where in (a) we have used the convexity of $h$. By combining the last inequality with (<ref>) we have
\begin{align}
\|\boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \| &\leq \sqrt{\frac{2}{\beta}\big( \mathcal{L}(\boldsymbol{x}_{k}) - \mathcal{L}^* \big)} \\
& \leq \sqrt{\frac{2}{\beta} (\mathcal{L}(\boldsymbol{x}_{0}) - \mathcal{L}^*)}(1-\frac{\mu}{\beta})^{\frac{k}{2}}
\end{align}
which in turn gives a bound on the total length of the sequence:
\begin{align}
\|\boldsymbol{x}_{k+1}- \boldsymbol{x}_0 \| \leq \sum_{i=0}^{k} \|\boldsymbol{x}_{i+1} - \boldsymbol{x}_{i} \| \leq \frac{\sqrt{\frac{2}{\beta} (\mathcal{L}(\boldsymbol{x}_{0}) - \mathcal{L}^*)}}{1-\sqrt{1-\frac{\mu}{\beta}}}:=R
\end{align}
§ PROOF OF THEOREM <REF>
Using Assumption <ref> we have
\begin{align*}
\mathcal{L}_{k+1}(\boldsymbol{x}_{k+1}) &\leq \eta_0 + g_k(\boldsymbol{x}_{k+1}) + h_k(\boldsymbol{x}_{k+1}) + h_k(\boldsymbol{x}_{k}) - h_k(\boldsymbol{x}_{k})\\
& \stackrel{(a)}{\leq} \eta_0 + g_k(\boldsymbol{x}_{k}) + h_k(\boldsymbol{x}_{k})+ \langle \nabla g_k(\boldsymbol{x}_{k}), \boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \rangle + \frac{\beta}{2}\|\boldsymbol{x}_{k+1}- \boldsymbol{x}_{k} \|^2 + h_k(\boldsymbol{x}_{k+1})- h_k(\boldsymbol{x}_{k})\\
& = \eta_0 + \mathcal{L}_k(\boldsymbol{x}_{k}) + h_k(\boldsymbol{x}_{k+1})+ \langle \nabla g_k(\boldsymbol{x}_{k}), \boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \rangle + \frac{\beta}{2}\|\boldsymbol{x}_{k+1}- \boldsymbol{x}_{k} \|^2 - h_k(\boldsymbol{x}_{k})\\
&= \eta_0 + \mathcal{L}_k(\boldsymbol{x}_{k}) + h_k(\boldsymbol{x}_{k+1})+ \langle \boldsymbol{v}_k(\boldsymbol{x}_k, U_k) +\nabla g_k(\boldsymbol{x}_{k}) - \boldsymbol{v}_k(\boldsymbol{x}_k, U_k), \boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \rangle \\
&+ \frac{\beta}{2}\|\boldsymbol{x}_{k+1}- \boldsymbol{x}_{k} \|^2 - h_k(\boldsymbol{x}_{k})\\
&\stackrel{(b)}{=} \eta_0 + \mathcal{L}_k(\boldsymbol{x}_{k}) - \frac{1}{2\beta} \widetilde{\mathcal{D}}_{h_k}(\boldsymbol{x}_k, \beta, U_k) +
\langle \nabla g_k(\boldsymbol{x}_{k}) - \boldsymbol{v}_k(\boldsymbol{x}_k, U_k), \boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \rangle
\end{align*}
where in (a) we have used the smoothness of $g_k$ and in (b) we have used the following definition
\begin{align}
\widetilde{\mathcal{D}}_h(\boldsymbol{x}, \alpha, U): = -2\alpha \min_{\boldsymbol{y}} \left\{ \langle \boldsymbol{v}({\boldsymbol{x}}, U), \boldsymbol{y}-\boldsymbol{x} \rangle + \frac{\alpha}{2}\|\boldsymbol{y}-\boldsymbol{x} \|^2 + h(\boldsymbol{y}) - h(\boldsymbol{x})\right\}.
\end{align}
By adding and subtracting terms we have
\begin{align*}
\mathcal{L}_{k+1}(\boldsymbol{x}_{k+1})-\mathcal{L}_{k+1}^* &\leq \eta_0 + \eta^* + \mathcal{L}_k(\boldsymbol{x}_{k}) - \mathcal{L}_k^* - \frac{1}{2\beta} {\mathcal{D}}_{h_k}(\boldsymbol{x}_k, \beta) \\
&+ \frac{1}{2\beta} {\mathcal{D}}_{h_k}(\boldsymbol{x}_k, \beta) - \frac{1}{2\beta} \widetilde{\mathcal{D}}_{h_k}(\boldsymbol{x}_k, \beta, U_k) + \langle \nabla g_k(\boldsymbol{x}_{k}) - \boldsymbol{v}_k(\boldsymbol{x}_k, U_k), \boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \rangle \\
&\stackrel{(a)}{\leq} \eta_0 + \eta^* + (1-\frac{\mu}{\beta})(\mathcal{L}_k(\boldsymbol{x}_k) - \mathcal{L}_k^*) + \frac{1}{2\beta}\left(\widetilde{\mathcal{D}}_{h_k}(\boldsymbol{x}_k, \beta, U_k) - \mathcal{D}_{h_k}(\boldsymbol{x}_k, \beta) \right) \\
& + \langle \nabla g_k(\boldsymbol{x}_{k}) - \boldsymbol{v}_k(\boldsymbol{x}_k, U_k), \boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \rangle \\
&\stackrel{(b)}{\leq} \eta_0 + \eta^* + (1-\frac{\mu}{\beta})(\mathcal{L}_k(\boldsymbol{x}_k) - \mathcal{L}_k^*)\\
&+ \min_{\boldsymbol{y}} \left\{ \langle \nabla g_k(\boldsymbol{x}_k), \boldsymbol{y}-\boldsymbol{x}_k \rangle + \frac{\beta}{2}\|\boldsymbol{y}-\boldsymbol{x}_k \|^2 + h_k(\boldsymbol{y}) - h_k(\boldsymbol{x}_k)\right\}\\
& - \min_{\boldsymbol{y}} \left\{ \langle \boldsymbol{v}_k({\boldsymbol{x}_k}, U_k), \boldsymbol{y}-\boldsymbol{x}_k \rangle + \frac{\beta}{2}\|\boldsymbol{y}-\boldsymbol{x}_k \|^2 + h_k(\boldsymbol{y}) - h_k(\boldsymbol{x}_k)\right\}\\
& + \langle \nabla g_k(\boldsymbol{x}_{k}) - \boldsymbol{v}_k(\boldsymbol{x}_k, U_k), \boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \rangle\\
&\stackrel{(c)}{\leq} \eta_0 + \eta^* + (1-\frac{\mu}{\beta})(\mathcal{L}_k(\boldsymbol{x}_k) - \mathcal{L}_k^*) +2 \langle \nabla g_k(\boldsymbol{x}_{k}) - \boldsymbol{v}_k(\boldsymbol{x}_k, U_k), \boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \rangle
\end{align*}
where in (a) we have used (<ref>), in (b) we have used the definitions of $\widetilde{\mathcal{D}}_{h_k}$ and $\mathcal{D}_{h_k}$, and (c) follows from evaluating the first minimum at $\boldsymbol{x}_{k+1}$, the minimiser of the second.
Note that from the optimality condition of (<ref>) we have
\begin{align*}
\boldsymbol{x}_{k+1}- \boldsymbol{x}_{k}\in -\frac{1}{\beta}\left({\boldsymbol{v}_k(\boldsymbol{x}_{k}, U_k) + \partial h_k(\boldsymbol{x}_{k+1})}\right).
\end{align*}
Therefore, $\|\boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \|$ is bounded because of the boundedness of the (sub)gradients, $\|\boldsymbol{x}_{k+1} - \boldsymbol{x}_{k} \| \leq \frac{c_1 + c_2}{\beta}$. Using this in the last inequality above results in
\begin{align}
\mathcal{L}_{k+1}(\boldsymbol{x}_{k+1})-\mathcal{L}_{k+1}^* \leq \eta_0 + \eta^* + (1-\frac{\mu}{\beta})( \mathcal{L}_{k}(\boldsymbol{x}_{k})-\mathcal{L}_{k}^*) + 2\frac{c_1 + c_2}{\beta}\|\nabla g_k(\boldsymbol{x}_{k}) - \boldsymbol{v}_k(\boldsymbol{x}_k, U_k) \|.
\end{align}
By taking the conditional expectation given $\boldsymbol{x}_k$ with respect to $U_k$, we have
\begin{align*}
\mathbb{E} \bigg\{ \mathcal{L}_{k+1}(\boldsymbol{x}_{k+1})-\mathcal{L}_{k+1}^* \mid \boldsymbol{x}_k \bigg\} &\leq \eta_0 + \eta^* + (1-\frac{\mu}{\beta}) (\mathcal{L}_{k}(\boldsymbol{x}_{k})-\mathcal{L}_{k}^*) \\
&+ 2\frac{c_1 + c_2}{\beta}\ \mathbb{E} \bigg\{ \|\nabla g_k(\boldsymbol{x}_{k}) - \boldsymbol{v}_k(\boldsymbol{x}_k, U_k) \| \mid \boldsymbol{x}_k \bigg\}.
\end{align*}
Invoking Lyapunov's inequality, $(\mathbb{E}\|Y\|^r)^{1/r}\leq (\mathbb{E}\|Y\|^s)^{1/s}$ for $r=1, s=2$, we have
\begin{align*}
\mathbb{E} \bigg\{ \mathcal{L}_{k+1}(\boldsymbol{x}_{k+1})-\mathcal{L}_{k+1}^* \mid \boldsymbol{x}_k \bigg\} &\leq \eta_0 + \eta^* + (1-\frac{\mu}{\beta}) (\mathcal{L}_{k}(\boldsymbol{x}_{k})-\mathcal{L}_{k}^*) \\
&+ 2\frac{c_1 + c_2}{\beta}\ \sqrt{ \mathbb{E} \bigg\{ \|\nabla g_k(\boldsymbol{x}_{k}) - \boldsymbol{v}_k(\boldsymbol{x}_k, U_k) \|^2 \mid \boldsymbol{x}_k \bigg\}}\\
& \stackrel{(a)}{=} \eta_0 + \eta^* + (1-\frac{\mu}{\beta}) (\mathcal{L}_{k}(\boldsymbol{x}_{k})-\mathcal{L}_{k}^*) + 2\frac{c_1 + c_2}{\beta}\sqrt{(m+3)} \| \nabla g_k(\boldsymbol{x}_k) \|\\
& \stackrel{(b)}{\leq} (1-\frac{\mu}{\beta}) (\mathcal{L}_{k}(\boldsymbol{x}_{k})-\mathcal{L}_{k}^*) + \eta_0 + \eta^* + 2c_1\frac{c_1 + c_2}{\beta}\sqrt{(m+3)},
\end{align*}
where in (a) and (b) we have used (<ref>) and the boundedness of the gradients, respectively. Using the tower rule of expectations,
\begin{align*}
\mathbb{E} \bigg\{ \mathcal{L}_{k+1}(\boldsymbol{x}_{k+1})-\mathcal{L}_{k+1}^* \bigg\} &\leq (1-\frac{\mu}{\beta})^{k} (\mathcal{L}_{0}(\boldsymbol{x}_{0})-\mathcal{L}_{0}^*) \\
&+ \frac{1}{1-(1-\frac{\mu}{\beta})} \bigg(\eta_0 + \eta^* + 2c_1 \frac{c_1 + c_2}{\beta}\sqrt{(m+3)} \bigg).
\end{align*}
Similarly, with $\ell$ iterations at each step we have
\begin{align*}
\mathbb{E} \bigg\{ \mathcal{L}_{k+1}(\boldsymbol{x}_{k+1})-\mathcal{L}_{k+1}^* \bigg\} &\leq (1-\frac{\mu}{\beta})^{k\ell} (\mathcal{L}_{0}(\boldsymbol{x}_{0})-\mathcal{L}_{0}^*) \\
&+ \frac{1}{1-(1-\frac{\mu}{\beta})^{\ell}} \bigg(\eta_0 + \eta^* + 2c_1 \frac{c_1 + c_2}{\beta}\sqrt{(m+3)} \bigg).
\end{align*}
Invoking Lemma <ref> completes the proof.